WorldWideScience

Sample records for modeling code hades

  1. HADES, A Radiographic Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Aufderheide, M.B.; Slone, D.M.; Schach von Wittenau, A.E.

    2000-08-18

    We describe features of the HADES radiographic simulation code. We begin with a discussion of why it is useful to simulate transmission radiography. The capabilities of HADES are described, followed by an application of HADES to a dynamic experiment recently performed at the Los Alamos Neutron Science Center. We describe quantitative comparisons between experimental data and HADES simulations using a copper step wedge. We conclude with a short discussion of future work planned for HADES.
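
    For orientation, the core of a transmission-radiography simulation of this kind is line-of-sight attenuation. The sketch below is a purely illustrative Python example of Beer-Lambert transmission through a copper step wedge; it is not HADES code, and the attenuation coefficient and step thicknesses are assumed values.

      # Illustrative Beer-Lambert transmission through a copper step wedge.
      # Not HADES source code; mu_cu and the step thicknesses are assumed values.
      import numpy as np

      mu_cu = 0.6                                      # assumed linear attenuation coefficient of Cu [1/cm]
      steps_cm = np.array([0.1, 0.2, 0.4, 0.8, 1.6])   # assumed step thicknesses [cm]

      transmission = np.exp(-mu_cu * steps_cm)         # I/I0 for a ray crossing each step
      for t, T in zip(steps_cm, transmission):
          print(f"{t:.1f} cm Cu -> I/I0 = {T:.3f}")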

  2. HADES, A Code for Simulating a Variety of Radiographic Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Aufderheide, M B; Henderson, G; von Wittenau, A; Slone, D M; Barty, A; Martz, Jr., H E

    2004-10-28

    It is often useful to simulate radiographic images in order to optimize imaging trade-offs and to test tomographic techniques. HADES is a code that simulates radiography using ray tracing techniques. Although originally developed to simulate X-Ray transmission radiography, HADES has grown to simulate neutron radiography over a wide range of energy, proton radiography in the 1 MeV to 100 GeV range, and recently phase contrast radiography using X-Rays in the keV energy range. HADES can simulate parallel-ray or cone-beam radiography through a variety of mesh types, as well as through collections of geometric objects. HADES was originally developed for nondestructive evaluation (NDE) applications, but could be a useful tool for simulation of portal imaging, proton therapy imaging, and synchrotron studies of tissue. In this paper we describe HADES' current capabilities and discuss plans for a major revision of the code.

  3. HADES. A computer code for fast neutron cross sections from the Optical Model

    Energy Technology Data Exchange (ETDEWEB)

    Guasp, J.; Navarro, C.

    1973-07-01

    A FORTRAN V computer code for the UNIVAC 1108/6, using a local Optical Model with spin-orbit interaction, is described. The code calculates fast neutron cross sections, angular distributions, and Legendre moments for heavy and intermediate spherical nuclei. It allows automatic variation of the potential parameters for fitting experimental data. (Author) 55 refs.
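
    The automatic variation of potential parameters for experimental data fitting mentioned above is, in modern terms, a least-squares parameter search. The Python sketch below illustrates the idea on an invented toy cross-section model with synthetic data; it is not the optical-model calculation performed by this code.

      # Toy least-squares fit of model parameters to synthetic cross-section data.
      # The model form, parameter values and data are invented for illustration only.
      import numpy as np
      from scipy.optimize import curve_fit

      def toy_sigma(E, a, b):
          # placeholder energy dependence, NOT the optical-model formula
          return a / np.sqrt(E) + b

      E = np.linspace(1.0, 14.0, 25)   # neutron energies [MeV], assumed grid
      rng = np.random.default_rng(0)
      data = toy_sigma(E, 2.0, 0.5) + 0.02 * rng.normal(size=E.size)

      popt, _ = curve_fit(toy_sigma, E, data, p0=[1.0, 0.0])
      print("fitted parameters a, b:", popt)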

  4. Recent Progress Validating the HADES Model of LLNL's HEAF MicroCT Measurements

    Energy Technology Data Exchange (ETDEWEB)

    White, W. T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bond, K. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lennox, K. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Aufderheide, M. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Seetho, I. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Roberson, G. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-07-17

    This report compares recent HADES calculations of x-ray linear attenuation coefficients to previous MicroCT measurements made at Lawrence Livermore National Laboratory’s High Energy Applications Facility (HEAF). The chief objective is to investigate what impact recent changes in HADES modeling have on validation results. We find that these changes have no obvious effect on the overall accuracy of the model. Detailed comparisons between recent and previous results are presented.
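
    A comparison of this kind reduces to computing calculated-versus-measured residuals of the linear attenuation coefficients. The following is only a sketch of such a check with placeholder numbers, not HEAF or HADES results.

      # Sketch of a calculated-vs-measured comparison of linear attenuation coefficients.
      # All numbers are placeholders, not HEAF measurements or HADES calculations.
      import numpy as np

      mu_calc = np.array([0.212, 0.431, 0.598])   # hypothetical calculated values [1/cm]
      mu_meas = np.array([0.208, 0.440, 0.590])   # hypothetical measured values [1/cm]

      rel = (mu_calc - mu_meas) / mu_meas
      print("relative differences [%]:", np.round(100 * rel, 2))
      print(f"rms relative difference [%]: {100 * np.sqrt(np.mean(rel**2)):.2f}")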

  5. HADES User's Manual

    Energy Technology Data Exchange (ETDEWEB)

    Aufderheide, M B; Slone, D M; Schach Von Wittenau, A E

    2002-12-26

    HADES is a computer code that simulates transmission radiography through a mesh and/or solid bodies. This code is a successor to an old code called XRAY. HADES is designed to be backward compatible with XRAY (i.e. it will accept and correctly parse XRAY input decks), but HADES has many more features that greatly enhance its versatility and its simulation of experimental effects. HADES can simulate X-Ray radiography, neutron radiography or GeV proton radiography using ray-tracing techniques. A short article has been published which discusses HADES' capabilities. HADES also has some capability for simulating pinhole imaging and backlight imaging. The best way to obtain HADES is to ask one of the authors for a copy. This manual covers all features in version 2.5.0 of HADES.

  6. Strangeness measurements at the HADES experiment

    OpenAIRE

    2010-01-01

    We report on HADES measurements of strange hadrons in the collision systems Ar(1.756 AGeV)+KCl and p+p at 3.5 GeV. Comparisons of K0S transverse mass and rapidity spectra to IQMD transport model calculations give a strong hint of a repulsive kaon-nucleon potential. The effect of the potential shows up most strongly at very low transverse momenta, which were measured by HADES with high statistics. Statistical model fits show fair agreement with the particle yields measured in the hea...

  7. Elementary Collisions with HADES

    CERN Document Server

    Fröhlich, I; Agakichiev, G; Agodi, C; Balanda, A; Bellia, G; Belver, D; Belyaev, A; Blanco, A; Böhmer, M; Boyard, J L; Braun-Munzinger, P; Cabanelas, P; Castro, E; Chernenko, S; Christ, T; Destefanis, M; Daz, J; Dohrmann, F; Dybczak, A; Eberl, T; Fabbietti, L; Fateev, O; Finocchiaro, P; Fonte, Paulo J R; Friese, J; Galatyuk, T; Garzn, J A; Gernhuser, R; Gilardi, C; Golubeva, M; Gonzalez-Diaz, D; Grosse, E; Guber, F; Heilmann, M; Hennino, T; Holzmann, R; Ierusalimov, A; Iori, I; Ivashkin, A; Jurkovic, M; Kmpfer, B; Kanaki, K; Karavicheva, T; Kirschner, D; König, I; König, W; Kolb, B W; Kotte, R; Kozuch, A; Krizek, F; Krcken, R; Khn, W; Kugler, A; Kurepin, A; Lamas-Valverde, J; Lang, S; Lange, J S; Lopes, L; Maier, L; Mangiarotti, A; Marn, J; Markert, J; Metag, V; Michalska, B; Mishra, D; Morinire, E; Mousa, J; Müntz, C; Naumann, Lutz; Novotny, R; Otwinowski, J; Pachmayer, Y C; Palka, M; Parpottas, Y; Pechenov, V; Pechenova, O; Prez Cavalcanti, T; Przygoda, W; Ramstein, B; Reshetin, A; Roy-Stephan, M; Rustamov, A; Sadovskii, A; Sailer, B; Salabura, P; Schmah, A; Simon, R; Spataro, S; Spruck, B; Strbele, H; Stroth, J; Sturm, C; Sudol, M; Tarantola, A; Teilab, K; Tlustý, P; Traxler, M; Trebacz, R; Tsertos, H; Veretenkin, I; Wagner, V; Wen, H; Wisniowski, M; Wojcik, T; Wstenfeld, J; Yurevich, S; Zanevsky, Y; Zumbruch, P

    2007-01-01

    The "High Acceptance DiElectron Spectrometer" (HADES) at GSI, Darmstadt, is investigating the production of e+e- pairs in A+A, p+A and N+N collisions. The latter program allows the reconstruction of individual sources. This strategy is outlined briefly in this contribution, and preliminary pp/pn data are shown.

  8. Model Children's Code.

    Science.gov (United States)

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  9. Electromagnetic Calorimeter for HADES Experiment

    Directory of Open Access Journals (Sweden)

    Rodríguez-Ramos P.

    2014-01-01

    An electromagnetic calorimeter (ECAL) is being developed to complement the dilepton spectrometer HADES. ECAL will enable the HADES@FAIR experiment to measure neutral meson production in heavy-ion collisions in the energy range of 2-10 AGeV at the beam of the future accelerator SIS100@FAIR. We report results of the latest beam test with quasi-monoenergetic photons, carried out at the MAMI facility at Johannes Gutenberg Universität Mainz.

  10. In-medium hadron properties measured with HADES

    Directory of Open Access Journals (Sweden)

    Pietraszko J.

    2014-03-01

    Many QCD based and phenomenological models predict changes of hadron properties in a strongly interacting environment. The results of these models differ significantly, and the experimental determination of hadron properties in nuclear matter is essential. In this paper we present a review of selected physics results obtained at GSI Helmholtzzentrum für Schwerionenforschung GmbH by HADES (High-Acceptance Di-Electron Spectrometer). The e+e− pair emission measured for proton and heavy-ion induced collisions is reported together with results on strangeness production. The future HADES activities at the planned FAIR facility are also discussed.

  11. In-medium hadron properties measured with HADES

    Science.gov (United States)

    Pietraszko, J.; Agakishiev, G.; Behnke, C.; Belver, D.; Belyaev, A.; Berger-Chen, J. C.; Blanco, A.; Blume, C.; Böhmer, M.; Cabanelas, P.; Chernenko, S.; Dritsa, C.; Dybczak, A.; Epple, E.; Fabbietti, L.; Fateev, O.; Fonte, P.; Friese, J.; Fröhlich, I.; Galatyuk, T.; Garzón, J. A.; Gill, K.; Golubeva, M.; González-Díaz, D.; Guber, F.; Gumberidze, M.; Harabasz, S.; Hennino, T.; Höhne, C.; Holzmann, R.; Huck, P.; Ierusalimov, A.; Ivashkin, A.; Jurkovic, M.; Kämpfer, B.; Karavicheva, T.; Koenig, I.; Koenig, W.; Kolb, B. W.; Korcyl, G.; Kornakov, G.; Kotte, R.; Krása, A.; Krebs, E.; Krizek, F.; Kuc, H.; Kugler, A.; Kurepin, A.; Kurilkin, A.; Kurilkin, P.; Ladygin, V.; Lalik, R.; Lang, S.; Lapidus, K.; Lebedev, A.; Lopes, L.; Lorenz, M.; Maier, L.; Mangiarotti, A.; Markert, J.; Metag, V.; Michel, J.; Müntz, C.; Münzer, R.; Naumann, L.; Palka, M.; Parpottas, Y.; Pechenov, V.; Pechenova, O.; Przygoda, W.; Ramstein, B.; Rehnisch, L.; Reshetin, A.; Rustamov, A.; Sadovsky, A.; Salabura, P.; Scheib, T.; Schuldes, H.; Siebenson, J.; Sobolev, Yu. G.; Spataro, S.; Ströbele, H.; Stroth, J.; Strzempek, P.; Sturm, C.; Svoboda, O.; Tarantola, A.; Teilab, K.; Tlusty, P.; Traxler, M.; Tsertos, H.; Vasiliev, T.; Wagner, V.; Weber, M.; Wendisch, C.; Wüstenfeld, J.; Yurevich, S.; Zanevsky, Y.

    2014-03-01

    Many QCD based and phenomenological models predict changes of hadron properties in a strongly interacting environment. The results of these models differ significantly and the experimental determination of hadron properties in nuclear matter is essential. In this paper we present a review of selected physics results obtained at GSI Helmholtzzentrum für Schwerionenforschung GmbH by HADES (High-Acceptance Di-Electron Spectrometer). The e+e- pair emission measured for proton and heavy-ion induced collisions is reported together with results on strangeness production. The future HADES activities at the planned FAIR facility are also discussed.

  12. HADES results in elementary reactions

    Directory of Open Access Journals (Sweden)

    Ramstein B.

    2014-01-01

    Recent results obtained with the HADES experimental set-up at GSI are presented with a focus on dielectron production and strangeness in pp and quasi-free np reactions. Perspectives related to the very recent experiment using the pion beam at GSI are also discussed.

  13. HADES results in elementary reactions

    Science.gov (United States)

    Ramstein, B.; Adamczewski-Musch, J.; Arnold, O.; Atomssa, E. T.; Behnke, C.; Berger-Chen, J. C.; Biernat, J.; Blanco, A.; Blume, C.; Böhmer, M.; Bordalo, P.; Chernenko, S.; Deveaux, C.; Dybczak, A.; Epple, E.; Fabbietti, L.; Fateev, O.; Fonte, P.; Franco, C.; Friese, J.; Fröhlich, I.; Galatyuk, T.; Garzón, J. A.; Gill, K.; Golubeva, M.; Guber, F.; Gumberidze, M.; Harabasz, S.; Hennino, T.; Hlavac, S.; Höhne, C.; Holzmann, R.; Ierusalimov, A.; Ivashkin, A.; Jurkovic, M.; Kämpfer, B.; Karavicheva, T.; Kardan, K.; Koenig, I.; Koenig, W.; Kolb, B. W.; Korcyl, G.; Kornakov, G.; Kotte, R.; Krása, A.; Krebs, E.; Kuc, H.; Kugler, A.; Kunz, T.; Kurepin, A.; Kurilkin, A.; Kurilkin, P.; Ladygin, V.; Lalik, R.; Lapidus, K.; Lebedev, A.; Lopes, L.; Lorenz, M.; Mahmoud, T.; Maier, L.; Mangiarotti, A.; Markert, J.; Metag, V.; Michel, J.; Müntz, C.; Münzer, R.; Naumann, L.; Palka, M.; Parpottas, Y.; Pechenov, V.; Pechenova, O.; Petousis, V.; Pietraszko, J.; Przygoda, W.; Rehnisch, L.; Reshetin, A.; Rost, A.; Rustamov, A.; Sadovsky, A.; Salabura, P.; Scheib, T.; Schmidt-Sommerfeld, K.; Schuldes, H.; Sellheim, P.; Siebenson, J.; Silva, L.; Sobolev, Yu. G.; Spataro, S.; Ströbele, H.; Stroth, J.; Strzempek, P.; Sturm, C.; Svoboda, O.; Tarantola, A.; Teilab, K.; Tlusty, P.; Traxler, M.; Tsertos, H.; Vasiliev, T.; Wagner, V.; Wendisch, C.; Wirth, J.; Wüstenfeld, J.; Zanevsky, Y.; Zumbruch, P.

    2014-11-01

    Recent results obtained with the HADES experimental set-up at GSI are presented with a focus on dielectron production and strangeness in pp and quasi-free np reactions. Perspectives related to the very recent experiment using the pion beam at GSI are also discussed.

  14. The HADES-at-FAIR project

    Energy Technology Data Exchange (ETDEWEB)

    Lapidus, K., E-mail: kirill.lapidus@ph.tum.de [Excellence Cluster ' Origin and Structure of the Universe' (Germany); Agakishiev, G. [Joint Institute of Nuclear Research (Russian Federation); Balanda, A. [Jagiellonian University, Smoluchowski Institute of Physics (Poland); Bassini, R. [Sezione di Milano, Istituto Nazionale di Fisica Nucleare (Italy); Behnke, C. [J.W.Goethe-Universitaet, Institut fuer Kernphysik (Germany); Belyaev, A. [Joint Institute of Nuclear Research (Russian Federation); Blanco, A. [LIP-Laboratorio de Instrumentacao e Fisica Experimental de Particulas (Portugal); Boehmer, M. [Technische Universitaet Muenchen, Physik Department E12 (Germany); Cabanelas, P. [Universidad de Santiago de Compostela, Departamento de Fisica de Particulas (Spain); Carolino, N. [LIP-Laboratorio de Instrumentacao e Fisica Experimental de Particulas (Portugal); Chen, J. C. [Excellence Cluster ' Origin and Structure of the Universe' (Germany); Chernenko, S. [Joint Institute of Nuclear Research (Russian Federation); Diaz, J. [Universidad de Valencia-CSIC, Instituto de Fisica Corpuscular (Spain); Dybczak, A. [Jagiellonian University, Smoluchowski Institute of Physics (Poland); Collaboration: HADES Collaboration; and others

    2012-05-15

    After the completion of the experimental program at SIS18 the HADES setup will migrate to FAIR, where it will deliver high-quality data for heavy-ion collisions in an unexplored energy range of up to 8 A GeV. In this contribution, we briefly present the physics case, relevant detector characteristics and discuss the recently completed upgrade of HADES.

  15. The HADES-at-FAIR project

    Science.gov (United States)

    Lapidus, K.; Agakishiev, G.; Balanda, A.; Bassini, R.; Behnke, C.; Belyaev, A.; Blanco, A.; Böhmer, M.; Cabanelas, P.; Carolino, N.; Chen, J. C.; Chernenko, S.; Díaz, J.; Dybczak, A.; Epple, E.; Fabbietti, L.; Fateev, O.; Finocchiaro, P.; Fonte, P.; Friese, J.; Fröhlich, I.; Galatyuk, T.; Garzón, J. A.; Gernhäuser, R.; Gil, A.; Göbel, K.; Golubeva, M.; González-Díaz, D.; Guber, F.; Gumberidze, M.; Harabasz, S.; Heidel, K.; Heinz, T.; Hennino, T.; Holzmann, R.; Huck, P.; Hutsch, J.; Ierusalimov, A.; Iori, I.; Ivashkin, A.; Jurkovic, M.; Kämpfer, B.; Kajetanowicz, M.; Karavicheva, T.; Koenig, I.; Koenig, W.; Kolb, B. W.; Korcyl, G.; Kornakov, G.; Kotte, R.; Kozuch, A.; Krebs, E.; Krücken, R.; Kuc, H.; Kühn, W.; Kugler, A.; Kurepin, A.; Kurilkin, A.; Kurilkin, P.; Ladygin, V.; Lalik, R.; Lange, J. S.; Liu, M.; Liu, T.; Lopes, L.; Lorenz, M.; Lykasov, G.; Maier, L.; Malakhov, A.; Mangiarotti, A.; Markert, J.; Metag, V.; Michalska, B.; Michel, J.; Müntz, C.; Münzer, R.; Naumann, L.; Palka, M.; Parpottas, Y.; Pechenov, V.; Pechenova, O.; Pereira, A.; Pietraszko, J.; Przygoda, W.; Ramstein, B.; Rehnisch, C.; Reshetin, A.; Rosier, P.; Rustamov, A.; Sadovsky, A.; Salabura, P.; Scheib, T.; Schmah, A.; Schuldes, H.; Schwab, E.; Siebenson, J.; Smolyankin, V.; Sobiella, M.; Sobolev, Yu. G.; Spataro, S.; Spruck, B.; Ströbele, H.; Stroth, J.; Sturm, C.; Tarantola, A.; Teilab, K.; Tiflov, V.; Tlusty, P.; Traxler, M.; Trebacz, R.; Troyan, A.; Tsertos, H.; Usenko, E.; Vasiliev, T.; Visotski, S.; Wagner, V.; Weber, M.; Wendisch, C.; Wüstenfeld, J.; Yurevich, S.; Zanevsky, Y.

    2012-05-01

    After the completion of the experimental program at SIS18 the HADES setup will migrate to FAIR, where it will deliver high-quality data for heavy-ion collisions in an unexplored energy range of up to 8 A GeV. In this contribution, we briefly present the physics case, relevant detector characteristics and discuss the recently completed upgrade of HADES.

  16. Strange hadron production at SIS energies: an update from HADES

    Science.gov (United States)

    Lorenz, M.; Adamczewski-Musch, J.; Arnold, O.; Atomssa, E. T.; Behnke, C.; Berger-Chen, J. C.; Biernat, J.; Blanco, A.; Blume, C.; Böhmer, M.; Bordalo, P.; Chernenko, S.; Deveaux, C.; Dybczak, A.; Epple, E.; Fabbietti, L.; Fateev, O.; Fonte, P.; Franco, C.; Friese, J.; Fröhlich, I.; Galatyuk, T.; Garzón, J. A.; Gill, K.; Golubeva, M.; Guber, F.; Gumberidze, M.; Harabasz, S.; Hennino, T.; Hlavac, S.; Höhne, C.; Holzmann, R.; Ierusalimov, A.; Ivashkin, A.; Jurkovic, M.; Kämpfer, B.; Karavicheva, T.; Kardan, B.; Koenig, I.; Koenig, W.; Kolb, B. W.; Korcyl, G.; Kornakov, G.; Kotte, R.; Krása, A.; Krebs, E.; Kuc, G.; Kugler, A.; Kunz, T.; Kurepin, A.; Kurilkin, A.; Kurilkin, P.; Ladygin, V.; Lalik, R.; Lapidus, K.; Lebedev, A.; Lopes, L.; Mahmoud, T.; Maier, L.; Mangiarotti, A.; Markert, J.; Metag, V.; Michel, J.; Müntz, C.; Münzer, R.; Naumann, L.; Palka, M.; Parpottas, Y.; Pechenov, V.; Pechenova, O.; Petousis, V.; Pietraszko, J.; Przygoda, W.; Ramstein, B.; Rehnisch, L.; Reshetin, A.; Rost, A.; Rustamov, A.; Sadovsky, A.; Salabura, P.; Scheib, T.; Schmidt-Sommerfeld, K.; Schuldes, H.; Sellheim, P.; Siebenson, J.; Silva, L.; Sobolev, Yu. G.; Spataro, S.; Ströbele, H.; Stroth, J.; Strzempek, P.; Sturm, C.; Svoboda, O.; Tarantola, A.; Teilab, K.; Tlusty, P.; Traxler, M.; Tsertos, H.; Vasiliev, T.; Wagner, V.; Wendisch, C.; Wirth, J.; Wüstenfeld, J.; Zanevsky, Y.; Zumbruch, P.

    2016-01-01

    We present and discuss recent experimental activities of the HADES collaboration on open and hidden strangeness production close to or below the elementary NN threshold. Special emphasis is put on the feed-down from ϕ mesons to antikaons, the presence of the Ξ- excess in cold nuclear matter and the comparison of statistical model rates to elementary p+p data. The implications for the interpretation of heavy-ion data are discussed as well.

  17. Dilepton production in pp and CC collisions with HADES

    CERN Document Server

    Fröhlich, I; Agodi, C; Balanda, A; Bellia, G; Belver, D; Belyaev, A; Blanco, A; Böhmer, M; Boyard, J L; Braun-Munzinger, P; Cabanelas, P; Castro, E; Chernenko, S P; Christ, T; Destefanis, M; Daz, J; Dohrmann, F; Durán, I; Eberl, T; Fabbietti, L; Fateev, O V; Finocchiaro, P; Fonte, Paulo J R; Friese, J; Galatyuk, T; Garzn, J A; Gernhuser, R; Gilardi, C; Golubeva, M; Gonzalez-Diaz, D; Grosse, E; Guber, F; Hadjivasiliou, C; Heilmann, M; Hennino, T; Holzmann, R; Ierusalimov, A; Iori, I; Ivashkin, A; Jurkovic, M; Kmpfer, B; Kanaki, K; Karavicheva, T; Kirschner, D; König, I; König, W; Kolb, B W; Kotte, R; Krizek, F; Krcken, R; Kugler, A; Khn, W; Kurepin, A; Lamas-Valverde, J; Lang, S; Lange, S; López, L; Mangiarotti, A; Marn, J; Markert, J; Metag, V; Michalska, B; Mishra, D; Moriniere, E; Mousa, J; Münch, M; Müntz, C; Naumann, Lutz; Novotny, R; Otwinowski, J; Pachmayer, Y C; Palka, M; Pechenov, V; Perez, T; Pietraszko, J; Pleskac, R; Pospsil, V; Przygoda, W; Ramstein, B; Reshetin, A; Roy-Stephan, M; Rustamov, A; Sadovskii, A; Sailer, B; Salabura, P; Schmah, A; Senger, P; Shileev, K A; Simon, R; Spataro, S; Spruck, B; Ströbele, H; Stroth, J; Sturm, C; Sudol, M; Teilab, K; Tlustý, P; Traxler, M; Trebacz, R; Tsertos, H; Veretenkin, I; Wagner, V; Wen, H; Wisniowski, M; Wojcik, T; Wüstenfeld, J; Zanevsky, Y; Zumbruch, P

    2007-01-01

    Dilepton production has been measured with HADES, the "High Acceptance DiElectron Spectrometer". In pp collisions at 2.2 GeV kinetic beam energy, exclusive eta production and the Dalitz decay eta -> gamma e+e- have been reconstructed. The electromagnetic form factor agrees well with existing data. In addition, an inclusive e+e- spectrum from the C+C reaction at 2 AGeV is presented and compared with a thermal model.

  18. Searching a dark photon with HADES

    Directory of Open Access Journals (Sweden)

    Gumberidze M.

    2014-01-01

    The existence of a photon-like massive particle, the γ′ or dark photon, is postulated in several extensions of the Standard Model to explain some recent puzzling astrophysical observations, as well as to solve the so far unexplained deviation between the measured and calculated values of the muon anomaly. The dark photon, unlike the conventional photon, would have mass and would be detectable via its mixing with the latter. We present a search for the e+e− decay of such a hypothetical particle, also named U vector boson, in inclusive dielectron spectra measured by HADES in the p (3.5 GeV) + p, Nb reactions, as well as the Ar (1.756 GeV/u) + KCl reaction. An upper limit on the kinetic mixing parameter squared (ϵ2) at 90% CL has been obtained for the mass range M(U) = 0.02 - 0.55 GeV/c2 and is compared with the present world data set. Furthermore, an improved upper limit of 2.3 × 10−6 at 90% CL has been set on the branching ratio of the helicity-suppressed direct decay of the η meson, η → e+e−.

  19. Dilepton production in pp and CC collisions with HADES

    Energy Technology Data Exchange (ETDEWEB)

    Froehlich, I.; Heilmann, M.; Markert, J.; Muentz, C.; Pachmayer, Y.C.; Stroebele, H.; Teilab, K. [Johann Wolfgang Goethe-Universitaet, Institut fuer Kernphysik, Frankfurt (Germany); Agakishiev, G.; Destefanis, M.; Gilardi, C.; Kirschner, D.; Kuehn, W.; Metag, V.; Mishra, D.; Novotny, R.; Pechenov, V.; Pechenova, O.; Perez Cavalcanti, T.; Spruck, B.; Wen, H. [Justus Liebig Univ. Giessen, II.Physikalisches Inst., Giessen (Germany); Agodi, C.; Finocchiaro, P.; Spataro, S. [Istituto Nazionale di Fisica Nucleare, Lab. Nazionali del Sud, Catania (Italy); Balanda, A.; Kozuch, A.; Michalska, B.; Otwinowski, J.; Palka, M.; Przygoda, W.; Salabura, P.; Trebacz, R.; Wisniowski, M.; Wojcik, T. [Jagiellonian Univ. of Cracow, Smoluchowski Inst. of Physics, Cracow (Poland); Bellia, G. [Ist. Nazionale di Fisica Nucleare, Lab. Nazionali del Sud, Catania (Italy)]|[Univ. di Catania (Italy). Dipt. die Fisica e Astronomia; Belver, D.; Cabanelas, P.; Castro, E.; Garzon, J.A.; Lamas-Valverde, J.; Marin, J. [Univ. of Santiago de Compostela, Dept. de Fisica de Particulas, Santiago de Compostela (Spain); Belyaev, A.; Chernenko, S.; Fateev, O.; Ierusalimov, A.; Zanevsky, Y. [Joint Inst. of Nuclear Research, Dubna (Russian Federation); Blanco, A.; Lopes, L.; Mangiarotti, A. [LIP, Dept. de Fisica da Univ. de Coimbra, Coimbra (Portugal); Boehmer, M.; Christ, T.; Eberl, T.; Fabbietti, L.; Friese, J.; Gernhaeuser, R.; Jurkovic, M.; Kruecken, R.; Sailer, B. [Technische Univ. Muenchen, Physik Dept. E12, Garching (Germany); Boyard, J.L.; Hennino, T.; Moriniere, E.; Ramstein, B.; Roy-Stephan, M. [CNRS/IN2P3, Inst. de Physique Nucleaire d' Orsay, Orsay Cedex (France); Braun-Munzinger, P.; Galatyuk, T.; Gonzalez-Diaz, D.; Holzmann, R.; Koenig, I.; Koenig, W.; Kolb, B.W.; Lang, S.; Lange, S.; Pietraszko, J.; Rustamov, A.; Schmah, A.; Simon, R.; Sturm, C.; Traxler, M.; Zumbruch, P. [Gesellschaft fuer Schwerionenforschung mbH, Darmstadt (Germany)] [and others

    2007-03-15

    e+e- production was studied using the High Acceptance DiElectron Spectrometer (HADES). In pp collisions at 2.2 GeV kinetic beam energy, the exclusive η production and the Dalitz decay η → γe+e- have been reconstructed. The electromagnetic form factor of the latter decay was found to be in good agreement with the existing theoretical predictions. In addition, an inclusive e+e- invariant-mass spectrum from the 12C+12C reaction at 2 AGeV is presented and compared with a simplified thermal model.

  20. Production and evolution path of dileptons at HADES energies

    OpenAIRE

    2008-01-01

    Dilepton production in intermediate-energy nucleus-nucleus collisions as well as in elementary proton-proton reactions is analysed within the UrQMD transport model. For C+C collisions at 1 AGeV and 2 AGeV the resulting invariant mass spectra are compared to recent HADES data. We find that the experimental spectrum for C+C at 2 AGeV is slightly overestimated by the theoretical calculations in the region around the vector meson peak, but fairly well described in the low mass region, where the data i...

  1. Cheetah: Starspot modeling code

    Science.gov (United States)

    Walkowicz, Lucianne; Thomas, Michael; Finkestein, Adam

    2014-12-01

    Cheetah models starspots in photometric data (lightcurves) by calculating the modulation of a light curve due to starspots. The main parameters of the program are the linear and quadratic limb darkening coefficients, stellar inclination, spot locations and sizes, and the intensity ratio of the spots to the stellar photosphere. Cheetah uses uniform spot contrast and the minimum number of spots needed to produce a good fit and ignores bright regions for the sake of simplicity.
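
    For orientation, the parameters listed above (limb darkening, inclination, spot position and size, intensity ratio) combine in a standard small-spot approximation roughly as sketched below. This is a generic illustration with assumed parameter values, not Cheetah's actual algorithm.

      # Simplified single-spot lightcurve with quadratic limb darkening
      # (small-spot approximation). All parameter values are assumed.
      import numpy as np

      u1, u2 = 0.4, 0.2                    # limb-darkening coefficients (assumed)
      inc = np.radians(70.0)               # stellar inclination (assumed)
      lat, lon0 = np.radians(30.0), 0.0    # spot latitude and initial longitude (assumed)
      r_spot = 0.1                         # spot radius in stellar radii (assumed)
      contrast = 0.7                       # spot-to-photosphere intensity ratio (assumed)
      period = 5.0                         # rotation period [days] (assumed)

      t = np.linspace(0.0, 2.0 * period, 500)
      lon = lon0 + 2.0 * np.pi * t / period
      mu = np.cos(inc) * np.sin(lat) + np.sin(inc) * np.cos(lat) * np.cos(lon)

      limb = 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2   # local / central intensity
      norm = 1.0 - u1 / 3.0 - u2 / 6.0                      # disk-averaged limb darkening
      deficit = np.where(mu > 0.0, (1.0 - contrast) * r_spot**2 * mu * limb / norm, 0.0)
      flux = 1.0 - deficit                                  # normalized lightcurve
      print(f"modulation amplitude: {1.0 - flux.min():.4f}")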

  2. Inclusive Dielectron Production in Ar+KCl Collisions at 1.76 AGeV studied with HADES

    CERN Document Server

    Krizek, F; Balanda, A; Bellia, G; Belver, D; Belyaev, A; Blanco, A; Boehmer, M; Boyard, J L; Braun-Munzinger, P; Cabanelas, P; Castro, E; Chernenko, S; Christ, T; Destefanis, M; Díaz, J; Dohrmann, F; Dybczak, A; Fabbietti, L; Fateev, O; Finocchiaro, P; Fonte, P; Friese, J; Fröhlich, I; Galatyuk, T; Garzón, J A; Gernhäuser, R; Gil, A; Gilardi, C; Golubeva, M; Gonzalez-Diaz, D; Grosse, E; Guber, F; Heilmann, M; Hennino, T; Holzmann, R; Ierusalimov, A; Iori, I; Ivashkin, A; Jurkovic, M; Kämpfer, B; Kanaki, K; Karavicheva, T; Kirschner, D; König, I; König, W; Kolb, B W; Kotte, R; Kozuch, A; Krasa, A; Krücken, R; Kühn, W; Kugler, A; Kurepin, A; Lamas-Valverde, J; Lang, S; Lange, J S; Lapidus, K; Liu, T; Lopes, L; Lorenz, M; Maier, L; Mangiarotti, A; Marin, J; Markert, J; Metag, V; Michalska, B; Michel, J; Mishra, D; Moriniere, E; Mousa, J; Müntz, C; Naumann, L; Novotny, R; Otwinowski, J; Pachmayer, Y C; Palka, M; Parpottas, Y; Pechenov, V; Pechenova, O; Cavalcanti, T Perez; Pietraszko, J; Przygoda, W; Ramstein, B; Reshetin, A; Rustamov, A; Sadovskii, A; Salabura, P; Schmah, A; Simon, R; Sobolev, Yu G; Spataro, S; Spruck, B; Ströbele, H; Stroth, J; Sturm, C; Sudol, M; Tarantola, A; Teilab, K; Tlustý, P; Traxler, M; Trebacz, R; Tsertos, H; Veretenkin, I; Wagner, V; Weber, M; Wisniowski, M; Wüstenfeld, J; Yurevich, S; Zanevsky, Y; Zhou, P; Zumbruch, P

    2009-01-01

    Results of the HADES measurement of inclusive dielectron production in Ar+KCl collisions at a kinetic beam energy of 1.76 AGeV are presented. For the first time, high mass resolution spectroscopy was performed. The invariant mass spectrum of dielectrons is compared with predictions of UrQMD and HSD transport codes.

  3. Investigating hadronic resonances in pp interactions with HADES

    Directory of Open Access Journals (Sweden)

    Przygoda Witold

    2015-01-01

    In this paper we report on the investigation of baryonic resonance production in proton-proton collisions at kinetic energies of 1.25 GeV and 3.5 GeV, based on data measured with HADES. The exclusive channels npπ+ and ppπ0 as well as ppe+e− were studied simultaneously in the framework of a one-boson exchange model. The resonance cross sections were determined from the one-pion channels for Δ(1232) and N(1440) (1.25 GeV) as well as further Δ and N* resonances up to 2 GeV/c2 for the 3.5 GeV data. The data at 1.25 GeV were also analysed within the framework of a partial wave analysis, together with a set of several other measurements at lower energies. The obtained solutions trace the evolution of resonance production with beam energy, showing a sizeable non-resonant contribution but a still dominant Δ(1232)P33 contribution. In the case of the 3.5 GeV data, the study of the ppe+e− channel gave insight into the Dalitz decays of the baryon resonances and, in particular, into the electromagnetic transition form factors in the time-like region. We show that the assumption of constant electromagnetic transition form factors leads to an underestimation of the yield in the dielectron invariant mass spectrum below the vector-meson pole. On the other hand, a comparison with various transport models shows the important role of intermediate ρ production, though with a large model dependence. The exclusive-channel analysis by the HADES collaboration provides new stringent constraints on the parameterizations used in the models.

  4. The HADES RPC inner TOF wall

    Energy Technology Data Exchange (ETDEWEB)

    Belver, D. [LABCAF-Universidad de Santiago de Compostela, 15782 Santiago de Compostela (Spain); Blanco, A. [Laboratorio de Instrumentacao e Fisica Experimental de Particulas, 3004-516 Coimbra (Portugal); Cabanelas, P. [LABCAF-Universidad de Santiago de Compostela, 15782 Santiago de Compostela (Spain); Carolino, N. [Laboratorio de Instrumentacao e Fisica Experimental de Particulas, 3004-516 Coimbra (Portugal); Castro, E. [LABCAF-Universidad de Santiago de Compostela, 15782 Santiago de Compostela (Spain); Diaz, J. [Instituto de Fisica Corpuscular (CSIC-Universidad de Valencia), Valencia 46071 (Spain); Fonte, P. [Laboratorio de Instrumentacao e Fisica Experimental de Particulas, 3004-516 Coimbra (Portugal); Instituto Superior de Engenharia de Coimbra, 3030-199 Coimbra (Portugal)], E-mail: fonte@lipc.fis.uc.pt; Garzon, J.A. [LABCAF-Universidad de Santiago de Compostela, 15782 Santiago de Compostela (Spain); Gonzalez-Diaz, D. [Gesellschaft fuer Schwerionenforschung mbH (GSI), 64291 Darmstadt (Germany); Gil, A. [Instituto de Fisica Corpuscular (CSIC-Universidad de Valencia), Valencia 46071 (Spain); Koenig, W. [Gesellschaft fuer Schwerionenforschung mbH (GSI), 64291 Darmstadt (Germany); Lopes, L.; Mangiarotti, A.; Oliveira, O.; Pereira, A.; Silva, C. [Laboratorio de Instrumentacao e Fisica Experimental de Particulas, 3004-516 Coimbra (Portugal); Sousa, C.C. [Escola Superior de Tecnologia e Gestao de Leiria, 2411-901 Leiria (Portugal); Zapata, M. [LABCAF-Universidad de Santiago de Compostela, 15782 Santiago de Compostela (Spain)

    2009-05-01

    The upgraded HADES inner TOF Wall will cover a total area of 8 m² with 1116 variable-geometry 4-gap, symmetric, timing RPCs, read out by 2232 time and charge channels. Each RPC is individually shielded for robust multi-hit performance and optimum use of the readout channels (crosstalk minimization). The double-layer configuration provides a useful degree of redundancy for very accurate timing of a large fraction of all particles crossing the detector. In this paper we describe the concept of the detector, its inner structure and its multi-hit performance.

  5. Highlights of Resonance Measurements With HADES

    Directory of Open Access Journals (Sweden)

    Epple Eliane

    2015-01-01

    This contribution gives a basic overview of the latest results on resonance production in different collision systems. The results were extracted from experimental data collected with HADES, a multipurpose detector located at the GSI Helmholtzzentrum, Darmstadt. The main points discussed here are: the properties of the strange resonances Λ(1405) and Σ(1385), the role of Δ's as a source of pions in the final state, the production dynamics reflected in the form of differential cross sections, and the role of the ϕ meson as a source of K− particles.

  6. Particle flow with the HADES detector

    Energy Technology Data Exchange (ETDEWEB)

    Deveaux, Christina [Justus-Liebig-Universitaet Giessen (Germany); Collaboration: HADES-Collaboration

    2015-07-01

    The high densities and pressures of the nuclear fireball created in heavy ion collisions modify the momenta of the emitted particles. Particle azimuthal anisotropies (flow) are therefore among the observables most sensitive to the equation of state of the matter created in such collisions. We studied the flow of protons and pions in Au+Au collisions at 1.23 GeV/u beam energy based on data recorded with the High Acceptance Di-Electron Spectrometer (HADES) in 2012. First, preliminary results of the study are shown, and the option of extending the study to the flow of direct photons is discussed.

  7. Impacts of Model Building Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Athalye, Rahul A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sivaraman, Deepak [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elliott, Douglas B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Bing [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bartlett, Rosemarie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-10-31

    The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states which have codes which are fundamentally different from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.

  8. Dilepton Production at SIS Energies Studied with HADES

    Science.gov (United States)

    Hades Collaboration; Holzmann, Romain; Balanda, A.; Belver, D.; Belyaev, A. V.; Blanco, A.; Böhmer, M.; Boyard, J. L.; Braun-Munzinger, P.; Cabanelas, P.; Castro, E.; Chernenko, S.; Díaz, J.; Dybczak, A.; Epple, E.; Fabbietti, L.; Fateev, O. V.; Finocchiaro, P.; Fonte, P.; Friese, J.; Fröhlich, I.; Galatyuk, T.; Garzón, J. A.; Gernhäuser, R.; Gil, A.; Golubeva, M.; González-Díaz, D.; Guber, F.; Hennino, T.; Holzmann, R.; Huck, P.; Ierusalimov, A. P.; Iori, I.; Ivashkin, A.; Jurkovic, M.; Kämpfer, B.; Karavicheva, T.; Koenig, I.; Koenig, W.; Kolb, B. W.; Kopp, A.; Kotte, R.; Kozuch, A.; Krása, A.; Krizek, F.; Krücken, R.; Kühn, W.; Kugler, A.; Kurepin, A.; Khlitz, P. K.; Lamas-Valverde, J.; Lang, S.; Lange, J. S.; Lapidus, K.; Liu, T.; Lopes, L.; Lorenz, M.; Maier, L.; Mangiarotti, A.; Markert, J.; Metag, V.; Michalska, B.; Michel, J.; Morinière, E.; Mousa, J.; Müntz, C.; Naumann, L.; Pachmayer, Y. C.; Palka, M.; Parpottas, Y.; Pechenov, V.; Pietraszko, J.; Przygoda, W.; Ramstein, B.; Reshetin, A.; Roskoss, J.; Rustamov, A.; Sadovsky, A.; Salabura, P.; Schmah, A.; Siebenson, J.; Simon, R.; Sobolev, Yu. G.; Spruck, B.; Ströbele, H.; Stroth, J.; Sturm, C.; Sudol, M.; Tarantola, A.; Teilab, K.; Tlusty, P.; Traxler, M.; Trebacz, R.; Tsertos, H.; Veretenkin, I.; Wagner, V.; Weber, M.; Wisniowski, M.; Wüstenfeld, J.; Yurevich, S.; Zanevsky, Y. V.

    2010-03-01

    One of the main goals of the HADES experiment is to achieve a detailed understanding of dielectron emission from hadronic systems at moderate bombarding energies. Results obtained on electron pair production in elementary N+N collisions pave the way to a better understanding of the origin of the pair excess seen in heavy-ion collisions. This puzzling excess, reported first by the former DLS experiment, is now being investigated systematically by HADES.

  9. Evaluation of help model replacement codes

    Energy Technology Data Exchange (ETDEWEB)

    Whiteside, Tad [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hang, Thong [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Flach, Gregory [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2009-07-01

    This work evaluates the computer codes that are proposed to be used to predict percolation of water through the closure cap and into the waste containment zone at the Department of Energy closure sites. This work compares the currently used water-balance code (HELP) with newly developed computer codes that use unsaturated flow (Richards' equation). It provides a literature review of the HELP model and the proposed codes, which results in two codes recommended for further evaluation: HYDRUS-2D3D and VADOSE/W. This further evaluation involved performing simulations on a simple model and comparing the results of those simulations to those obtained with the HELP code and to field data. From the results of this work, we conclude that the two new codes perform nearly the same; moving forward, we recommend HYDRUS-2D3D.
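
    For reference, the unsaturated-flow formulation that codes such as HYDRUS-2D3D and VADOSE/W solve, in contrast to HELP's layered water-balance accounting, is Richards' equation. In a standard textbook form (stated here for context, not quoted from the report):

      \frac{\partial \theta(h)}{\partial t} = \nabla \cdot \left[ K(h)\, \nabla (h + z) \right]

    where \theta is the volumetric water content, h the pressure head, K(h) the unsaturated hydraulic conductivity, and z the elevation head.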

  10. Coding with partially hidden Markov models

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Rissanen, J.

    1995-01-01

    Partially hidden Markov models (PHMM) are introduced. They are a variation of hidden Markov models (HMM), combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general 2-part coding scheme for a given model order but unknown parameters, based on PHMM, is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. A proof of convergence of this reestimation is given. The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt...
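
    To make the 2-part coding idea concrete, the sketch below computes an ideal two-part code length (parameter cost plus data cost) for a toy binary sequence, using an ordinary fully observed first-order Markov model in place of a PHMM. It illustrates the principle only and is not the scheme of this paper.

      # Two-part code length for a toy binary sequence under a first-order Markov
      # model (an illustration of 2-part coding, not the PHMM scheme itself).
      import numpy as np

      x = np.random.default_rng(1).integers(0, 2, size=1000)   # toy data

      counts = np.ones((2, 2))            # transition counts (+1 smoothing avoids log 0)
      for a, b in zip(x[:-1], x[1:]):
          counts[a, b] += 1
      P = counts / counts.sum(axis=1, keepdims=True)

      data_bits = -sum(np.log2(P[a, b]) for a, b in zip(x[:-1], x[1:]))
      k = 2                               # free parameters (one per conditioning state)
      param_bits = 0.5 * k * np.log2(len(x))
      print(f"two-part code length ~ {param_bits + data_bits:.1f} bits for {len(x)} symbols")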

  11. The High-Acceptance Dielectron Spectrometer HADES

    CERN Document Server

    Agakichiev, G; Bannier, B; Bassini, R; Belver, D; Belyaev, A V; Blanco, A; Boehmer, M; Boyard, J L; Braun-Munzinger, P; Cabanelas, P; Castro, E; Chernenko, S; Christ, T; Destefanis, M; Díaz, J; Dohrmann, F; Dybczak, A; Eberl, T; Enghardt, W; Fabbietti, L; Fateev, O V; Finocchiaro, P; Fonte, Paulo J R; Friese, J; Fröhlich, I; Galatyuk, T; Garzón, J A; Gernhäuser, R; Gil1, A; Gilardi, C; Golubeva, M; Gonzalez-Diaz, D; Guber, F; Heilmann, M; Heinz, T; Hennino, T; Holzmann, R; Ierusalimov, A; Iori, I; Ivashkin, A; Jurkovic, M; Kämpfer, B; Kanaki, K; Karavicheva, T; Kirschner, D; König, I; König, W; Kolb, B W; Kotte, R; Krizek, F; Krücken, R; Kühn, W; Kugler, A; Kurepin, A; Lang, S; Lange, J S; Lapidus, K; Liu, T; Lopes, L; Lorenz, M; Maier, L; Mangiarotti, A; Markert, J; Metag, V; Michalska, B; Michel, J; Mishra, D; Moriniere, E; Mousa, J; Müntz, C; Naumann, Lutz; Otwinowski, J; Pachmayer, Y C; Palka, M; Parpottas, Y; Pechenov, V; Pechenova, O; PerezCavalcanti, T; Pietraszko, J; Przygoda, W; Ramstein, B; Reshetin, A; Roy-Stephan, M; Rustamov, A; Sadovskii, A; Sailer, B; Salabura, P; Schmah, A; Schwab, E; Sobolev, Yu G; Spataro, S; Spruck, B; Ströbele, H; Stroth, J; Sturm, C; Sudol, M; Tarantola, A; Teilab, K; Tlustý, P; Traxler, M; Trebac, R; Tsertos, H; Wagner, V; Weber, M; Wisniowski, M; Wojcik, T; Wuestenfel, J; Yurevich, S; Zanevsky, Yu V; Zhou, P; Zumbruch, P

    2009-01-01

    HADES is a versatile magnetic spectrometer aimed at studying dielectron production in pion, proton and heavy-ion induced collisions. Its main features include a ring imaging gas Cherenkov detector for electron-hadron discrimination, a tracking system consisting of a set of 6 superconducting coils producing a toroidal field and drift chambers and a multiplicity and electron trigger array for additional electron-hadron discrimination and event characterization. A two-stage trigger system enhances events containing electrons. The physics program is focused on the investigation of hadron properties in nuclei and in the hot and dense hadronic matter. The detector system is characterized by an 85% azimuthal coverage over a polar angle interval from 18 to 85 degree, a single electron efficiency of 50% and a vector meson mass resolution of 2.5%. Identification of pions, kaons and protons is achieved combining time-of-flight and energy loss measurements over a large momentum range. This paper describes the main featur...

  12. The high-acceptance dielectron spectrometer HADES

    Energy Technology Data Exchange (ETDEWEB)

    Agakichiev, G.; Destefanis, M.; Gilardi, C.; Kirschner, D.; Kuehn, W.; Lange, J.S.; Lehnert, J.; Lichtblau, C.; Lins, E.; Metag, V.; Mishra, D.; Novotny, R.; Pechenov, V.; Pechenova, O.; Perez Cavalcanti, T.; Petri, M.; Ritman, J.; Salz, C.; Schaefer, D.; Skoda, M.; Spataro, S.; Spruck, B.; Toia, A. [Justus-Liebig-Univ. Giessen, II. Physikalisches Inst., Giessen (Germany); Agodi, C.; Coniglione, R.; Cosentino, L.; Finocchiaro, P.; Maiolino, C.; Piattelli, P.; Sapienza, P.; Vassiliev, D. [Lab. Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania (Italy); Alvarez-Pol, H.; Belver, D.; Cabanelas, P.; Castro, E.; Duran, I.; Fernandez, C.; Fuentes, B.; Garzon, J.A.; Kurtukian-Nieto, T.; Rodriguez-Prieto, G.; Sabin-Fernandez, J.; Sanchez, M.; Vazquez, A. [Univ. de Santiago de Compostela, Dept. de Fisica de Particulas, Santiago de Compostela (Spain); Atkin, E.; Volkov, Y. [State Univ., Moscow Engineering Physics Inst., Moscow (Russian Federation); Badura, E.; Bertini, D.; Bielcik, J.; Bokemeyer, H.; Dahlinger, M.; Daues, H.W.; Galatyuk, T.; Garabatos, C.; Gonzalez-Diaz, D.; Hehner, J.; Heinz, T.; Hoffmann, J.; Holzmann, R.; Koenig, I.; Koenig, W.; Kolb, B.W.; Kopf, U.; Lang, S.; Leinberger, U.; Magestro, D.; Muench, M.; Niebur, W.; Ott, W.; Pietraszko, J.; Rustamov, A.; Schicker, R.M.; Schoen, H.; Schoen, W.; Schroeder, C.; Schwab, E.; Senger, P.; Simon, R.S.; Stelzer, H.; Traxler, M.; Yurevich, S.; Zovinec, D.; Zumbruch, P. [GSI Helmholtzzentrum fuer Schwerionenforschung, Darmstadt (Germany); Balanda, A.; Kozuch, A.; Przygoda, W. [Jagiellonian Univ. of Krakow, Smoluchowski Inst. of Physics, Krakow (Poland); Pantwowa Wyzsza Szkola Zawodowa, Nowy Sacz (Poland); Bassi, A.; Bassini, R.; Boiano, C.; Bartolotti, A.; Brambilla, S. [Sezione di Milano, Istituto Nazionale di Fisica Nucleare, Milano (Italy); Bellia, G.; Migneco, E. [Lab. Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania (Italy); Univ. di Catania (Italy)] (and others)

    2009-08-15

    HADES is a versatile magnetic spectrometer aimed at studying dielectron production in pion, proton and heavy-ion-induced collisions. Its main features include a ring imaging gas Cherenkov detector for electron-hadron discrimination, a tracking system consisting of a set of 6 superconducting coils producing a toroidal field and drift chambers and a multiplicity and electron trigger array for additional electron-hadron discrimination and event characterization. A two-stage trigger system enhances events containing electrons. The physics program is focused on the investigation of hadron properties in nuclei and in the hot and dense hadronic matter. The detector system is characterized by an 85% azimuthal coverage over a polar angle interval from 18° to 85°, a single electron efficiency of 50% and a vector meson mass resolution of 2.5%. Identification of pions, kaons and protons is achieved combining time-of-flight and energy loss measurements over a large momentum range (0.1 < p < 1.0 GeV/c). This paper describes the main features and the performance of the detector system.

  13. Searching a Dark Photon with HADES

    CERN Document Server

    Agakishiev, G; Belver, D; Belyaev, A; Berger-Chen, J C; Blanco, A; Boehmer, M; Boyard, J L; Cabanelas, P; Chernenko, S; Dybczak, A; Epple, E; Fabbbietti, L; Fateev, O; Finocchiaro, P; Fonte, P; Friese, J; Froehlich, I; Galatyuk, T; Garzon, J A; Gernhaeuser, R; Goebel, K; Golubeva, M; Gonzalez-Diaz, D; Guber, F; Gumberidze, M; Heinz, T; Hennino, T; Holzmann, R; Ierusalimov, A; Iori, I; Ivashkin, A; Jurkovic, M; Kaempfer, B; Karavicheva, T; Koenig, I; Koenig, W; Kolb, B W; Kornakov, G; Kotte, R; Krasa, A; Krizek, F; Kruecken, R; Kuc, H; Kuehn, W; Kugler, A; Kurepin, A; Ladygin, V; Lalik, R; Lang, S; Lapidus, K; Lebedev, A; Liu, T; Lopes, L; Lorenz, M; Maier, L; Mangiarotti, A; Markert, J; Metag, V; Michalska, B; Michel, J; Muentz, C; Naumann, L; Pachmayer, Y C; Palka, M; Parpottas, Y; Pechenov, V; Pechenova, O; Petousis, V; Pietraszko, J; Przygoda, W; Ramstein, B; Reshetin, A; Rustamov, A; Sadovsky, A; Salabura, P; Scheib, T; Schuldes, H; Schmah, A; Schwab, E; Siebenson, J; Sobolev, Yu G; Spataro, S; Spruck, B; Stroebele, H; Stroth, J; Sturm, C; Tarantola, A; Teilab, K; Tlusty, P; Traxler, M; Trebacz, R; Tsertos, H; Vasiliev, T; Wagner, V; Weber, M; Wendisch, C; Wuestenfeld, J; Yurevich, S; Zanevsky, Y

    2013-01-01

    We present a search for the e+e- decay of a hypothetical dark photon, also named U vector boson, in inclusive dielectron spectra measured by HADES in the p (3.5 GeV) + p, Nb reactions, as well as the Ar (1.756 GeV/u) + KCl reaction. An upper limit on the kinetic mixing parameter squared epsilon^{2} at 90% CL has been obtained for the mass range M(U) = 0.02 - 0.55 GeV/c2 and is compared with the present world data set. For masses 0.03 - 0.1 GeV/c^2, the limit has been lowered with respect to previous results, now allowing us to exclude a large part of the parameter region favoured by the muon g-2 anomaly. Furthermore, an improved upper limit on the branching ratio of 2.3 * 10^{-6} has been set on the helicity-suppressed direct decay of the eta meson, eta -> e+e-, at 90% CL.

  14. The high-acceptance dielectron spectrometer HADES

    Science.gov (United States)

    Agakichiev, G.; Agodi, C.; Alvarez-Pol, H.; Atkin, E.; Badura, E.; Balanda, A.; Bassi, A.; Bassini, R.; Bellia, G.; Belver, D.; Belyaev, A. V.; Benovic, M.; Bertini, D.; Bielcik, J.; Böhmer, M.; Boiano, C.; Bokemeyer, H.; Bartolotti, A.; Boyard, J. L.; Brambilla, S.; Braun-Munzinger, P.; Cabanelas, P.; Castro, E.; Chepurnov, V.; Chernenko, S.; Christ, T.; Coniglione, R.; Cosentino, L.; Dahlinger, M.; Daues, H. W.; Destefanis, M.; Díaz, J.; Dohrmann, F.; Dressler, R.; Durán, I.; Dybczak, A.; Eberl, T.; Enghardt, W.; Fabbietti, L.; Fateev, O. V.; Fernández, C.; Finocchiaro, P.; Friese, J.; Fröhlich, I.; Fuentes, B.; Galatyuk, T.; Garabatos, C.; Garzón, J. A.; Genolini, B.; Gernhäuser, R.; Gilardi, C.; Gilg, H.; Golubeva, M.; González-Díaz, D.; Grosse, E.; Guber, F.; Hehner, J.; Heidel, K.; Heinz, T.; Hennino, T.; Hlavac, S.; Hoffmann, J.; Holzmann, R.; Homolka, J.; Hutsch, J.; Ierusalimov, A. P.; Iori, I.; Ivashkin, A.; Jaskula, M.; Jourdain, J. C.; Jurkovic, M.; Kämpfer, B.; Kajetanowicz, M.; Kanaki, K.; Karavicheva, T.; Kastenmüller, A.; Kidon, L.; Kienle, P.; Kirschner, D.; Koenig, I.; Koenig, W.; Körner, H. J.; Kolb, B. W.; Kopf, U.; Korcyl, K.; Kotte, R.; Kozuch, A.; Krizek, F.; Krücken, R.; Kühn, W.; Kugler, A.; Kulessa, R.; Kurepin, A.; Kurtukian-Nieto, T.; Lang, S.; Lange, J. S.; Lapidus, K.; Lehnert, J.; Leinberger, U.; Lichtblau, C.; Lins, E.; Lippmann, C.; Lorenz, M.; Magestro, D.; Maier, L.; Maier-Komor, P.; Maiolino, C.; Malarz, A.; Marek, T.; Markert, J.; Metag, V.; Michalska, B.; Michel, J.; Migneco, E.; Mishra, D.; Morinière, E.; Mousa, J.; Münch, M.; Müntz, C.; Naumann, L.; Nekhaev, A.; Niebur, W.; Novotny, J.; Novotny, R.; Ott, W.; Otwinowski, J.; Pachmayer, Y. C.; Palka, M.; Parpottas, Y.; Pechenov, V.; Pechenova, O.; Pérez Cavalcanti, T.; Petri, M.; Piattelli, P.; Pietraszko, J.; Pleskac, R.; Ploskon, M.; Pospísil, V.; Pouthas, J.; Prokopowicz, W.; Przygoda, W.; Ramstein, B.; Reshetin, A.; Ritman, J.; Roche, G.; Rodriguez-Prieto, G.; Rosenkranz, K.; Rosier, P.; Roy-Stephan, M.; Rustamov, A.; Sabin-Fernandez, J.; Sadovsky, A.; Sailer, B.; Salabura, P.; Salz, C.; Sánchez, M.; Sapienza, P.; Schäfer, D.; Schicker, R. M.; Schmah, A.; Schön, H.; Schön, W.; Schroeder, C.; Schroeder, S.; Schwab, E.; Senger, P.; Shileev, K.; Simon, R. S.; Skoda, M.; Smolyankin, V.; Smykov, L.; Sobiella, M.; Sobolev, Yu. G.; Spataro, S.; Spruck, B.; Stelzer, H.; Ströbele, H.; Stroth, J.; Sturm, C.; Sudoł, M.; Suk, M.; Szczybura, M.; Taranenko, A.; Tarantola, A.; Teilab, K.; Tiflov, V.; Tikhonov, A.; Tlusty, P.; Toia, A.; Traxler, M.; Trebacz, R.; Troyan, A. Yu.; Tsertos, H.; Turzo, I.; Ulrich, A.; Vassiliev, D.; Vázquez, A.; Volkov, Y.; Wagner, V.; Wallner, C.; Walus, W.; Wang, Y.; Weber, M.; Wieser, J.; Winkler, S.; Wisniowski, M.; Wojcik, T.; Wüstenfeld, J.; Yurevich, S.; Zanevsky, Y. V.; Zeitelhack, K.; Zentek, A.; Zhou, P.; Zovinec, D.; Zumbruch, P.

    2009-08-01

    HADES is a versatile magnetic spectrometer aimed at studying dielectron production in pion, proton and heavy-ion-induced collisions. Its main features include a ring imaging gas Cherenkov detector for electron-hadron discrimination, a tracking system consisting of a set of 6 superconducting coils producing a toroidal field and drift chambers and a multiplicity and electron trigger array for additional electron-hadron discrimination and event characterization. A two-stage trigger system enhances events containing electrons. The physics program is focused on the investigation of hadron properties in nuclei and in the hot and dense hadronic matter. The detector system is characterized by an 85% azimuthal coverage over a polar angle interval from 18° to 85°, a single electron efficiency of 50% and a vector meson mass resolution of 2.5%. Identification of pions, kaons and protons is achieved combining time-of-flight and energy loss measurements over a large momentum range (0.1 < p < 1.0 GeV/c). This paper describes the main features and the performance of the detector system.
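
    As a reminder of the kinematics behind this kind of time-of-flight particle identification (standard relativistic kinematics, not a statement about the HADES analysis chain): the mass follows from the measured momentum p and velocity beta via m c^2 = p c sqrt(1/beta^2 - 1). For example, a track with p = 0.94 GeV/c and beta = 0.707 gives m c^2 of about 0.94 GeV, consistent with a proton.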

  15. Searching a dark photon with HADES

    Science.gov (United States)

    Agakishiev, G.; Balanda, A.; Belver, D.; Belyaev, A.; Berger-Chen, J. C.; Blanco, A.; Böhmer, M.; Boyard, J. L.; Cabanelas, P.; Chernenko, S.; Dybczak, A.; Epple, E.; Fabbietti, L.; Fateev, O.; Finocchiaro, P.; Fonte, P.; Friese, J.; Fröhlich, I.; Galatyuk, T.; Garzón, J. A.; Gernhäuser, R.; Göbel, K.; Golubeva, M.; González-Díaz, D.; Guber, F.; Gumberidze, M.; Heinz, T.; Hennino, T.; Holzmann, R.; Ierusalimov, A.; Iori, I.; Ivashkin, A.; Jurkovic, M.; Kämpfer, B.; Karavicheva, T.; Koenig, I.; Koenig, W.; Kolb, B. W.; Kornakov, G.; Kotte, R.; Krása, A.; Krizek, F.; Krücken, R.; Kuc, H.; Kühn, W.; Kugler, A.; Kurepin, A.; Ladygin, V.; Lalik, R.; Lang, S.; Lapidus, K.; Lebedev, A.; Liu, T.; Lopes, L.; Lorenz, M.; Maier, L.; Mangiarotti, A.; Markert, J.; Metag, V.; Michalska, B.; Michel, J.; Müntz, C.; Naumann, L.; Pachmayer, Y. C.; Palka, M.; Parpottas, Y.; Pechenov, V.; Pechenova, O.; Petousis, V.; Pietraszko, J.; Przygoda, W.; Ramstein, B.; Reshetin, A.; Rustamov, A.; Sadovsky, A.; Salabura, P.; Scheib, T.; Schuldes, H.; Schmah, A.; Schwab, E.; Siebenson, J.; Sobolev, Yu. G.; Spataro, S.; Spruck, B.; Ströbele, H.; Stroth, J.; Sturm, C.; Tarantola, A.; Teilab, K.; Tlusty, P.; Traxler, M.; Trebacz, R.; Tsertos, H.; Vasiliev, T.; Wagner, V.; Weber, M.; Wendisch, C.; Wüstenfeld, J.; Yurevich, S.; Zanevsky, Y.

    2014-04-01

    We present a search for the e+e- decay of a hypothetical dark photon, also named U vector boson, in inclusive dielectron spectra measured by HADES in the p(3.5 GeV) + p, Nb reactions, as well as the Ar (1.756 GeV/u) + KCl reaction. An upper limit on the kinetic mixing parameter squared ɛ2 at 90% CL has been obtained for the mass range MU=0.02-0.55 GeV/c2 and is compared with the present world data set. For masses 0.03-0.1 GeV/c2, the limit has been lowered with respect to previous results, allowing now to exclude a large part of the parameter region favored by the muon g-2 anomaly. Furthermore, an improved upper limit on the branching ratio of 2.3×10-6 has been set on the helicity-suppressed direct decay of the eta meson, η→e+e-, at 90% CL.

  16. Dilepton Production Studied with the Hades Spectrometer

    Science.gov (United States)

    Rustamov, A.; Agakishiev, G.; Balanda, A.; Belver, D.; Belyaev, A.; Blanco, A.; Böhmer, M.; Boyard, J. L.; Cabanelas, P.; Castro, E.; Chernenko, S.; Díaz, J.; Dybczak, A.; Epple, E.; Fabbietti, L.; Fateev, O.; Finocchiaro, P.; Fonte, P.; Friese, J.; Fröhlich, I.; Galatyuk, T.; Garzón, J. A.; Gil, A.; Golubeva, M.; González-Díaz, D.; Guber, F.; Gumberidze, M.; Hennino, T.; Holzmann, R.; Huck, P.; Ierusalimov, A.; Iori, I.; Ivashkin, A.; Jurkovic, M.; Kämpfer, B.; Karavicheva, T.; Koenig, I.; Koenig, W.; Kolb, B. W.; Kopp, A.; Korcyl, G.; Kornakov, G. K.; Kotte, R.; Kozuch, A.; Krása, A.; Krizek, F.; Krücken, R.; Kuc, H.; Kühn, W.; Kugler, A.; Kurepin, A.; Kurilkin, A.; Kurilkin, P.; Kählitz, P.; Ladygin, V.; Lamas-Valverde, J.; Lang, S.; Lapidus, K.; Liu, T.; Lopes, L.; Lorenz, M.; Maier, L.; Mangiarotti, A.; Markert, J.; Metag, V.; Michalska, B.; Michel, J.; Müntz, C.; Naumann, L.; Pachmayer, Y. C.; Palka, M.; Parpottas, Y.; Pechenov, V.; Pechenova, O.; Pietraszko, J.; Przygoda, W.; Ramstein, B.; Reshetin, A.; Roskoss, J.; Sadovsky, A.; Salabura, P.; Schmah, A.; Siebenson, J.; Sobolev, Yu. G.; Spataro, S.; Ströbele, H.; Stroth, J.; Sturm, C.; Tarantola, A.; Teilab, K.; Tlusty, P.; Traxler, M.; Trebacz, R.; Tsertos, H.; Vasiliev, T.; Wagner, V.; Weber, M.; Wüstenfeld, J.; Yurevich, S.; Zanevsky, Y.

    With the HADES spectrometer at GSI we have studied dilepton production in various collision systems, from elementary N+N, through p+A, up to the medium-heavy Ar+KCl system. We have confirmed the puzzling results of the former DLS collaboration at the Bevalac. While we have traced the origin of the excess pair yield in C+C collisions to elementary p+p and n+p processes, we find a significant contribution from the dense phase of the collision in the larger Ar+KCl system. From recently obtained e+e- pair spectra in p+p and p+Nb interactions at 3.5 GeV kinetic beam energy the inclusive production cross sections for neutral pions, η, ω and ρ mesons are extracted for the first time at this beam energy. Furthermore, the production mechanisms of the vector mesons, which are not known at these energies, are investigated. The direct comparison of p+p and p+Nb data allows us to investigate in-medium mass modifications of vector mesons at nuclear ground state density. On the other hand, exclusive production of ω and η mesons in p+p reactions at 3.5 GeV was studied via their π+π-π0 decay channel. Production cross sections and angular distributions for both states were determined.

  17. Transmutation Fuel Performance Code Thermal Model Verification

    Energy Technology Data Exchange (ETDEWEB)

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    The FRAPCON fuel performance code is being modified to model the performance of nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the effort to verify the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model's temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the verification methodology, code input, and calculation results.
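
    As an example of the kind of independent reference solution used in thermal-model verification (an illustrative sketch, not the FRAPCON/ABAQUS comparison itself): steady conduction in a cylindrical fuel pellet with uniform heat generation has a closed-form radial temperature profile, evaluated below for assumed property values.

      # Closed-form benchmark: steady conduction in a cylindrical pellet with
      # uniform heat generation, T(r) = T_s + q * (R^2 - r^2) / (4 * k).
      # All property values are assumed for illustration.
      import numpy as np

      k = 3.0        # thermal conductivity [W/m-K] (assumed)
      q = 3.0e8      # volumetric heat generation [W/m^3] (assumed)
      R = 4.1e-3     # pellet radius [m] (assumed)
      T_s = 700.0    # pellet surface temperature [K] (assumed)

      r = np.linspace(0.0, R, 50)
      T = T_s + q * (R**2 - r**2) / (4.0 * k)
      print(f"centerline temperature: {T[0]:.0f} K, surface temperature: {T[-1]:.0f} K")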

  18. Generation of Java code from Alvis model

    Science.gov (United States)

    Matyasik, Piotr; Szpyrka, Marcin; Wypych, Michał

    2015-12-01

    Alvis is a formal language that combines graphical modelling of the interconnections between system entities (called agents) with a high-level programming language for describing the behaviour of each individual agent. An Alvis model can be verified formally with model-checking techniques applied to the model's LTS graph, which represents the model state space. This paper presents the transformation of an Alvis model into executable Java code. The approach thus provides a method for the automatic generation of a Java application from a formally verified Alvis model.
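
    The following Python sketch illustrates the general model-to-Java generation idea described above; it is not the Alvis translator, and the agent description, the Port type and the emitted class skeleton are purely hypothetical.

```python
# Illustrative-only sketch of model-to-code generation in the spirit described
# above (not the actual Alvis translator): a toy "agent" description is turned
# into a Java class skeleton with one method per behaviour step.
agent = {
    "name": "Buffer",                       # hypothetical agent name
    "ports": ["in_port", "out_port"],       # hypothetical communication ports
    "steps": ["receive", "store", "send"],  # hypothetical behaviour steps
}

def to_java(agent):
    lines = [f"public class {agent['name']} {{"]
    for port in agent["ports"]:
        # "Port" is an assumed runtime type, used only for the illustration
        lines.append(f"    private final Port {port} = new Port(\"{port}\");")
    for step in agent["steps"]:
        lines.append(f"    public void {step}() {{ /* generated body */ }}")
    lines.append("}")
    return "\n".join(lines)

print(to_java(agent))
```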

  19. Charged pion production in C+C and Ar+KCl collisions measured with HADES

    CERN Document Server

    Tlustý, P; Balanda, A; Bellia, G; Belver, D; Belyaev, A; Blanco, A; Boehmer, M; Boyard, J L; Braun-Munzinger, P; Cabanelas, P; Castro, E; Chernenko, S; Christ, T; Destefanis, M; Díaz, J; Dohrmann, F; Dybczak, A; Fabbietti, L; Fateev, O; Finocchiaro, P; Fonte, P; Friese, J; Fröhlich, I; Galatyuk, T; Garzón, J A; Gernhäuser, R; Gil, A; Gilardi, C; Golubeva, M; Gonzalez-Diaz, D; Grosse, E; Guber, F; Heilmann, M; Hennino, T; Holzmann, R; Ierusalimov, A; Iori, I; Ivashkin, A; Jurkovic, M; Kämpfer, B; Kanaki, K; Karavicheva, T; Kirschner, D; König, I; König, W; Kolb, B W; Kotte, R; Kozuch, A; Krasa, A; Krizek, F; Krücken, R; Kühn, W; Kugler, A; Kurepin, A; Lamas-Valverde, J; Lang, S; Lange, J S; Lapidus, K; Liu, T; Lopes, L; Lorenz, M; Maier, L; Mangiarotti, A; Marin, J; Markert, J; Metag, V; Michalska, B; Michel, J; Mishra, D; Moriniere, E; Mousa, J; Müntz, C; Naumann, L; Novotny, R; Otwinowski, J; Pachmayer, Y C; Palka, M; Parpottas, Y; Pechenov, V; Pechenova, O; Cavalcanti, T Perez; Pietraszko, J; Przygoda, W; Ramstein, B; Reshetin, A; Rustamov, A; Sadovskii, A; Salabura, P; Schmah, A; Simon, R; Sobolev, Yu G; Spataro, S; Spruck, B; Ströbele, H; Stroth, J; Sturm, C; Sudol, M; Tarantola, A; Teilab, K; Traxler, M; Trebacz, R; Tsertos, H; Veretenkin, I; Wagner, V; Weber, M; Wisniowski, M; Wüstenfeld, J; Yurevich, S; Zanevsky, Y; Zhou, P; Zumbruch, P

    2009-01-01

    Results of a study of charged pion production in 12C+12C collisions at incident beam energies of 1A GeV and 2A GeV, and in 40Ar+natKCl collisions at 1.76A GeV, using the HADES spectrometer at GSI, are presented. We have measured the transverse momentum distributions of π± mesons covering a fairly large rapidity interval, in the case of the C+C collision system for the first time. The yields, transverse mass and angular distributions are compared with a transport model as well as with existing data from other experiments.

  20. Economic aspects and models for building codes

    DEFF Research Database (Denmark)

    Bonke, Jens; Pedersen, Dan Ove; Johnsen, Kjeld

    It is the purpose of this bulletin to present an economic model for estimating the consequence of new or changed building codes. The object is to allow comparative analysis in order to improve the basis for decisions in this field. The model is applied in a case study.

  1. Mathematical models for the EPIC code

    Energy Technology Data Exchange (ETDEWEB)

    Buchanan, H.L.

    1981-06-03

    EPIC is a fluid/envelope type computer code designed to study the energetics and dynamics of a high energy, high current electron beam passing through a gas. The code is essentially two dimensional (x, r, t) and assumes an axisymmetric beam whose r.m.s. radius is governed by an envelope model. Electromagnetic fields, background gas chemistry, and gas hydrodynamics (density channel evolution) are all calculated self-consistently as functions of r, x, and t. The code is a collection of five major subroutines, each of which is described in some detail in this report.

  2. Model Description of TASS/SMR Code

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Y. D.; Yang, S. H.; Kim, S. H.; Lee, S. W.; Kim, H. K.; Yoon, H. Y.; Lee, G. H.; Bae, K. H.; Chung, Y. J

    2005-12-15

    The TASS/SMR (Transient And Setpoint Simulation/System-integrated Modular Reactor) code has been developed for the safety analysis of the SMART-P reactor. TASS/SMR can be applied to the analysis of design basis accidents, including the small-break loss-of-coolant accident, of the SMART research reactor. TASS/SMR models the primary and secondary systems using nodes and flow paths. A node represents a control volume, which defines the fluid mass and energy; a flow path connects nodes and defines the momentum of the fluid. The mass and energy conservation equations are applied to the nodes and the momentum conservation equation to the flow paths. In TASS/SMR, the governing equations are applied to both the primary and the secondary coolant system and are solved simultaneously. The governing equations are based on the drift-flux model, so that accidents or transients involving two-phase flow can be analyzed. In addition, SMART-P-specific thermal-hydraulic models are incorporated, such as a non-condensable gas model, a helical steam generator heat transfer model, and a passive residual heat removal system (PRHRS) heat transfer model. This technical report describes the governing equations, the solution method, and the thermal-hydraulic, reactor core, and control system models used in the TASS/SMR code. The steady-state simulation and the methods for calculating the minimum CHFR and the hottest fuel temperature are also described.
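
    The node/flow-path bookkeeping described above can be pictured with the toy Python sketch below; it is purely illustrative (explicit, single-phase, made-up numbers) and is not the TASS/SMR solver or its drift-flux formulation.

```python
# Minimal sketch of the node / flow-path idea described above: nodes hold mass
# and energy, flow paths carry the flow that connects them.  Illustration only.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    mass: float      # kg of fluid in the control volume
    energy: float    # J of internal energy in the control volume

@dataclass
class FlowPath:
    upstream: Node
    downstream: Node
    mass_flow: float  # kg/s, positive from upstream to downstream

def advance(paths, dt, h_specific=1.0e5):
    """Explicitly move mass (and a crude enthalpy flux) along each flow path."""
    for p in paths:
        dm = p.mass_flow * dt
        p.upstream.mass -= dm
        p.downstream.mass += dm
        p.upstream.energy -= dm * h_specific
        p.downstream.energy += dm * h_specific

core = Node("core", mass=500.0, energy=5.0e8)
sg = Node("steam_generator", mass=500.0, energy=4.0e8)
hot_leg = FlowPath(core, sg, mass_flow=20.0)
advance([hot_leg], dt=0.1)
print(core.mass, sg.mass)
```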

  3. Mapping Initial Hydrostatic Models in Godunov Codes

    CERN Document Server

    Zingale, M A; Zu Hone, J; Calder, A C; Fryxell, B; Plewa, T; Truran, J W; Caceres, A; Olson, K; Ricker, P M; Riley, K; Rosner, R; Siegel, A; Timmes, F X; Vladimirova, N

    2002-01-01

    We look in detail at the process of mapping an astrophysical initial model from a stellar evolution code onto the computational grid of an explicit, Godunov type code while maintaining hydrostatic equilibrium. This mapping process is common in astrophysical simulations, when it is necessary to follow short-timescale dynamics after a period of long timescale buildup. We look at the effects of spatial resolution, boundary conditions, the treatment of the gravitational source terms in the hydrodynamics solver, and the initialization process itself. We conclude with a summary detailing the mapping process that yields the lowest ambient velocities in the mapped model.
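
    To make the hydrostatic-equilibrium requirement concrete, the sketch below shows one generic way to re-impose dP/dr = -ρg after interpolating a 1-D model onto a uniform grid; the profiles and the outer pressure are invented for the example, and this is not the specific procedure studied in the paper.

```python
# Illustrative sketch: after interpolating a 1-D model onto a uniform grid,
# re-integrate dP/dr = -rho * g so the discrete pressure is in hydrostatic
# balance with the discrete density.  Generic example, made-up profiles.
import numpy as np

G = 6.674e-8                                  # cgs gravitational constant
r_model = np.linspace(1e7, 1e9, 50)           # made-up source-model radii (cm)
rho_model = 1e5 * (r_model[0] / r_model)**2   # made-up density profile (g/cm^3)

r = np.linspace(r_model[0], r_model[-1], 400)  # target (uniform) grid
rho = np.interp(r, r_model, rho_model)         # straightforward interpolation

# enclosed mass and local gravity on the target grid (trapezoid rule)
dm = 4.0 * np.pi * r**2 * rho
m_enc = np.concatenate([[0.0], np.cumsum(0.5 * (dm[1:] + dm[:-1]) * np.diff(r))])
g = G * m_enc / r**2

# integrate hydrostatic equilibrium inward from an assumed outer pressure
P = np.empty_like(r)
P[-1] = 1e12                                   # made-up outer pressure (dyn/cm^2)
for i in range(len(r) - 2, -1, -1):
    P[i] = P[i + 1] + 0.5 * (rho[i] * g[i] + rho[i + 1] * g[i + 1]) * (r[i + 1] - r[i])

print("re-imposed central pressure:", P[0], "dyn/cm^2")
```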

  4. Source coding model for repeated snapshot imaging

    CERN Document Server

    Li, Junhui; Yang, Dongyue; Wu, Guohua; Yin, Longfei; Guo, Hong

    2016-01-01

    Imaging based on successive repeated snapshot measurements is modeled as a source-coding process in information theory. The number of measurements necessary to maintain a given error rate is described by the rate-distortion function of the source coding. A quantitative formula relating the error rate to the number of measurements is derived, based on the information capacity of the imaging system. A second-order fluctuation correlation imaging (SFCI) experiment with pseudo-thermal light verifies this formula, which paves the way for introducing information theory into the study of ghost imaging (GI), both conventional and computational.
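
    For background, the rate-distortion function referred to above is the standard information-theoretic quantity below; the paper's specific formula for the measurement number is not reproduced here.

```latex
% Standard rate-distortion definition (textbook form, requires amsmath/amssymb):
\begin{align}
  R(D) \;=\; \min_{p(\hat{x}\mid x)\,:\;\mathbb{E}\,[d(X,\hat{X})]\le D} I\bigl(X;\hat{X}\bigr)
\end{align}
% Any code keeping the expected distortion below D must spend at least R(D)
% bits per source symbol.
```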

  5. Studying Hadron Properties in Baryonic Matter with HADES

    Science.gov (United States)

    Kugler, A.; Balanda, A.; Belver, D.; Belyaev, A. V.; Blanco, A.; Böhmer, M.; Boyard, J. L.; Braun-Munzinger, P.; Cabanelas, P.; Castro, E.; Chernenko, S.; Díaz, J.; Dybczak, A.; Epple, E.; Fabbietti, L.; Fateev, O. V.; Finocchiaro, P.; Fonte, P.; Friese, J.; Fröhlich, I.; Galatyuk, T.; Garzón, J. A.; Gernhäuser, R.; Gil, A.; Golubeva, M.; González-Díaz, D.; Guber, F.; Hennino, T.; Holzmann, R.; Huck, P.; Ierusalimov, A. P.; Iori, I.; Ivashkin, A.; Jurkovic, M.; Kämpfer, B.; Karavicheva, T.; Koenig, I.; Koenig, W.; Kolb, B. W.; Kopp, A.; Kotte, R.; Kozuch, A.; Krása, A.; Krizek, F.; Krücken, R.; Kühn, W.; Kurepin, A.; Kählitz, P. K.; Lamas-Valverde, J.; Lang, S.; Lange, J. S.; Lapidus, K.; Liu, T.; Lopes, L.; Lorenz, M.; Maier, L.; Mangiarotti, A.; Markert, J.; Metag, V.; Michalska, B.; Michel, J.; Morinière, E.; Mousa, J.; Müntz, C.; Naumann, L.; Pachmayer, Y. C.; Palka, M.; Parpottas, Y.; Pechenov, V.; Pietraszko, J.; Przygoda, W.; Ramstein, B.; Reshetin, A.; Roskoss, J.; Rustamov, A.; Sadovsky, A.; Salabura, P.; Schmah, A.; Siebenson, J.; Simon, R.; Sobolev, Yu. G.; Spataro, S.; Spruck, B.; Ströbele, H.; Stroth, J.; Sturm, C.; Sudol, M.; Tarantola, A.; Teilab, K.; Tlusty, P.; Traxler, M.; Trebacz, R.; Tsertos, H.; Veretenkin, I.; Wagner, V.; Weber, M.; Wisniowski, M.; Wüstenfeld, J.; Yurevich, S.; Zanevsky, Y. V.

    2010-08-01

    The HADES spectrometer installed at GSI Darmstadt is a second generation experiment to study the production of lepton pairs in proton-, pion- and nucleus-induced reactions in the SIS/BEVALAC energy regime. The HADES study of the light C+C system at 1 and 2 AGeV confirms the earlier findings of the DLS collaboration. Further studies of the p+p and d+p reactions allowed the contribution of di-leptons produced in first-chance collisions to the above-mentioned data to be revealed. Finally, the results for the heavier Ar+KCl system indicate a possible nonlinear dependence of the observed excess over the known long-lived di-lepton sources on the number of participants.

  6. Hades experiments: investigation of hadron in-medium properties

    Science.gov (United States)

    Salabura, Piotr; Hades Collaboration; Agakishiev, G.; Behnke, C.; Belver, D.; Belyaev, A.; Berger-Chen, J. C.; Blanco, A.; Blume, C.; Böhmer, M.; Cabanelas, P.; Chernenko, S.; Dritsa, C.; Dybczak, A.; Epple, E.; Fabbietti, L.; Fateev, O.; Fonte, P.; Friese, J.; Fröhlich, I.; Galatyuk, T.; Garzón, J. A.; Gill, K.; Golubeva, M.; Gonázlez-Díaz, D.; Guber, F.; Gumberidze, M.; Harabasz, S.; Hennino, T.; Holzmann, R.; Huck, P.; Höhne, C.; Ierusalimov, A.; Ivashkin, A.; Jurkovic, M.; Kämpfer, B.; Karavicheva, T.; Koenig, I.; Koenig, W.; Kolb, B. W.; Korcyl, G.; Kornakov, G.; Kotte, R.; Krása, A.; Krebs, E.; Krizek, F.; Kuc, H.; Kugler, A.; Kurepin, A.; Kurilkin, A.; Kurilkin, P.; Ladygin, V.; Lalik, R.; Lang, S.; Lapidus, K.; Lebedev, A.; Lopes, L.; Lorenz, M.; Maier, L.; Mangiarotti, A.; Markert, J.; Metag, V.; Michel, J.; Müntz, C.; Münzer, R.; Naumann, L.; Palka, M.; Parpottas, Y.; Pechenov, V.; Pechenova, O.; Pietraszko, J.; Przygoda, W.; Ramstein, B.; Rehnisch, L.; Reshetin, A.; Rustamov, A.; Sadovsky, A.; Salabura, P.; Scheib, T.; Schuldes, H.; Siebenson, J.; Sobolev, Yu G.; Spataro, S.; Ströbele, H.; Stroth, J.; Strzempek, P.; Sturm, C.; Svoboda, O.; Tarantola, A.; Teilab, K.; Tlusty, P.; Traxler, M.; Tsertos, H.; Vasiliev, T.; Wagner, V.; Weber, M.; Wendisch, C.; Wüstenfeld, J.; Yurevich, S.; Zanevsky, Y.

    2013-03-01

    Hadron modifications in nuclear matter are discussed in connection with chiral symmetry restoration and/or hadronic many-body effects. Experiments with photon, proton and heavy-ion beams are used to probe the properties of hadrons embedded in nuclear matter at different temperatures and densities. Most of the information has been gathered for the light vector mesons ρ, ω and φ. HADES is a second generation experiment operating at GSI with the main aim of studying in-medium modifications by means of dielectron production in the SIS18/Bevalac energy range. Its large acceptance and excellent particle identification capabilities also allow measurements of strangeness production. These abilities, combined with the variety of beams provided by SIS18, allow a characterization of the dense baryonic matter created in heavy-ion collisions at these energies. A review of recent experimental results obtained by HADES is presented, with the main emphasis on hadron properties in nuclear matter.

  7. Strange baryon resonances in pp collisions measured with HADES

    Energy Technology Data Exchange (ETDEWEB)

    Siebenson, Johannes, E-mail: johannes.siebenson@ph.tum.de [Technische Universitaet Muenchen, Excellence Cluster Universe (Germany); Collaboration: HADES Collaboration; and others

    2012-12-15

    We present an analysis of the hyperons Λ(1405) and Σ(1385)+ for p+p reactions at 3.5 GeV kinetic beam energy. The data were taken with the High Acceptance Di-Electron Spectrometer (HADES). A Λ(1405) signal could be reconstructed in both charged decay channels Λ(1405)→Σ±π∓. The obtained statistics of the Σ(1385)+ signal also allow for differential studies.

  8. Strange baryon resonances in pp collisions measured with HADES

    Science.gov (United States)

    Siebenson, Johannes

    We present an analysis of the hyperons Λ(1405) and Σ(1385)+ for p+p reactions at 3.5 GeV kinetic beam energy. The data were taken with the High Acceptance Di-Electron Spectrometer (HADES). A Λ(1405) signal could be reconstructed in both charged decay channels Λ(1405)→Σ±π∓. The obtained statistics of the Σ(1385)+ signal also allow for differential studies.

  9. Resonance production and decay in pion induced collisions with HADES

    Directory of Open Access Journals (Sweden)

    Scozzi Federico

    2017-01-01

    The main goal of the High Acceptance Di-Electron experiment (HADES) at GSI is the study of hadronic matter in the 1-3.5 GeV/nucleon incident energy range. HADES results on e+e− production in proton-nucleus and nucleus-nucleus collisions demonstrate a strong enhancement of the dilepton yield relative to a reference spectrum obtained from elementary nucleon-nucleon reactions. These observations point to a strong modification of the in-medium ρ spectral function driven by the coupling of the ρ to baryon resonance-hole states. To scrutinize this conjecture, however, a precise study of the role of the ρ meson in electromagnetic baryon-resonance transitions in the time-like region is mandatory. In order to perform this study, HADES has started a dedicated pion-nucleon programme. For the first time, a combined measurement of hadronic and dielectron final states has been performed in π−-N reactions at four different pion beam momenta (0.656, 0.69, 0.748 and 0.8 GeV/c). First results on exclusive channels with one pion (π−p) and two pions (nπ+π−, pπ−π0) in the final state, which are currently being analysed within the partial wave analysis (PWA) framework of the Bonn-Gatchina group, are discussed. Finally, first results of the dielectron analysis are shown.

  10. Development of the pion tracker for HADES spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Lalik, Rafal [Technische Univ. Muenchen, Garching (Germany). Excellence Cluster Universe; Collaboration: HADES-Collaboration

    2013-07-01

    We are working on the development of the beam detector for experiments with pion beams at the HADES spectrometer at GSI Darmstadt. Pions are created by impinging nitrogen or proton beams on a secondary beryllium target and are then delivered through a chicane to the experimental areas. The expected momentum spread of the secondary pion beam is about 8 %, and the main goal of this beam detector is to deliver the pion momentum for each beam particle. The challenging issue is to achieve an accurate measurement (on the per-mille level) for each pion in a high-intensity (10{sup 8} part./spill) environment along the pion-beam chicane. This translates into a rate of 10{sup 6} pions/spill with a kinetic energy of 1-2 GeV at the HADES target point. We are currently testing a tracker system based on double-sided silicon strip detectors with increased radiation hardness and n-XYTER ASIC readout. For prototyping we have prepared an acquisition system compatible with the CBM data acquisition system; for future use with HADES we are currently working on an acquisition system based on the novel TRB3 board. In this talk we show the current status and performance of the system and recent results obtained in the laboratory and with proton beams.

  11. Modeling peripheral olfactory coding in Drosophila larvae.

    Directory of Open Access Journals (Sweden)

    Derek J Hoare

    The Drosophila larva possesses just 21 unique and identifiable pairs of olfactory sensory neurons (OSNs), enabling investigation of the contribution of individual OSN classes to the peripheral olfactory code. We combined electrophysiology and computational modeling to explore the nature of the peripheral olfactory code in situ. We recorded firing responses of 19 of the 21 OSNs to a panel of 19 odors. This was achieved by creating larvae expressing just one functioning class of odorant receptor, and hence OSN. Odor response profiles of each OSN class were highly specific and unique. However, many OSN-odor pairs yielded variable responses, some of which were statistically indistinguishable from background activity. We used these electrophysiological data, incorporating both responses and spontaneous firing activity, to develop a Bayesian decoding model of olfactory processing. The model was able to accurately predict odor identity from raw OSN responses; prediction accuracy ranged from 12% to 77% (mean for all odors 45.2%) but was always significantly above chance (5.6%). However, there was no correlation between the prediction accuracy for a given odor and the strength of responses of wild-type larvae to the same odor in a behavioral assay. We also used the model to predict the ability of the code to discriminate between pairs of odors. Some of these predictions were supported in a behavioral discrimination (masking) assay but others were not. We conclude that our model of the peripheral code represents basic features of odor detection and discrimination, yielding insights into the information available to higher processing structures in the brain.
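
    A minimal, hypothetical sketch of this style of Bayesian decoding is given below (Poisson spike-count likelihoods per OSN class with a uniform prior over odors); the rates and counts are invented and do not come from the paper's data.

```python
# Toy Bayesian odor decoder in the spirit described above: Poisson spike-count
# likelihoods per OSN class, uniform prior.  All numbers are made up.
import numpy as np
from scipy.stats import poisson

odors = ["odor A", "odor B", "background"]
# mean spike counts (rows: odors, cols: three hypothetical OSN classes)
rates = np.array([[12.0, 2.0, 5.0],
                  [ 3.0, 9.0, 4.0],
                  [ 1.0, 1.0, 1.0]])

def decode(counts, rates, prior=None):
    prior = np.full(len(rates), 1.0 / len(rates)) if prior is None else prior
    log_post = np.log(prior) + poisson.logpmf(counts, rates).sum(axis=1)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

observed = np.array([10, 3, 6])          # spike counts from the three OSNs
for odor, p in zip(odors, decode(observed, rates)):
    print(f"{odor}: {p:.2f}")
```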

  12. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Science.gov (United States)

    2010-04-01

    24 CFR § 200.926c (edition of 2010-04-01), Housing and Urban Development Regulations, Minimum Property Standards — Model code provisions for use in partially accepted code jurisdictions.

  13. MEMOPS: data modelling and automatic code generation.

    Science.gov (United States)

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.
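
    The metadata-driven generation idea can be pictured with the toy Python sketch below; it is not the Memops framework or its UML pipeline, just an invented class description turned into an API with type-checked accessors.

```python
# Toy sketch of metadata-driven code generation in the spirit described above
# (not the Memops framework): a class description with typed attributes is
# turned into a Python class with validity-checked setters.
spec = {"class": "Molecule",
        "attributes": {"name": str, "charge": int}}   # hypothetical metadata

def generate(spec):
    lines = [f"class {spec['class']}:"]
    for attr, typ in spec["attributes"].items():
        lines += [
            f"    @property",
            f"    def {attr}(self): return self._{attr}",
            f"    @{attr}.setter",
            f"    def {attr}(self, value):",
            f"        if not isinstance(value, {typ.__name__}):",
            f"            raise TypeError('{attr} must be {typ.__name__}')",
            f"        self._{attr} = value",
        ]
    return "\n".join(lines)

namespace = {}
exec(generate(spec), namespace)        # compile the generated API
m = namespace["Molecule"]()
m.charge = 2                           # type-checked by the generated setter
print(m.charge)
```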

  14. Creating Models for the ORIGEN Codes

    Science.gov (United States)

    Louden, G. D.; Mathews, K. A.

    1997-10-01

    Our research focused on the development of a methodology for creating reactor-specific cross-section libraries for the nuclear reactor and nuclear fuel cycle analysis codes available from the Radiation Safety Information Computational Center. The creation of problem-specific models allows more detailed analysis than is possible using the generic models provided with ORIGEN2 and ORIGEN-S. A model of the Ohio State University Research Reactor was created using the Coupled 1-D Shielding Analysis (SAS2H) module of the Modular Code System for Performing Standardized Computer Analysis for Licensing Evaluation (SCALE 4.3). Six different reactor core models were compared to identify the effect of changing the SAS2H Larger Unit Cell on the predicted isotopic composition of spent fuel. Seven different power histories were then applied to a Core-Average model to determine the ability of ORIGEN-S to distinguish spent fuel produced under varying operating conditions. Several actinide and fission product concentrations were identified that were sensitive to the power history; however, the majority of the isotope concentrations did not depend on the operating history.

  15. Study of the pp → npπ+ reaction at 1.25 GeV with HADES

    CERN Document Server

    Liu, T; Agakishiev, G; Agodi, C; Balanda, A; Bellia, G; Belver, D; Belyaev, A; Blanco, A; Böhmer, M; Boyard, J L; Braun-Munzinger, P; Cabanelas, P; Castro, E; Christ, T; Destefanis, M; Díaz, J; Dohrmann, F; Dybczak, A; Fabbietti, L; Fateev, O; Finocchiaro, P; Fonte, P; Friese, J; Fröhlich, I; Galatyuk, T; Garzón, J A; Gernhäuser, R; Gil, A; Gilardi, C; Golubeva, M; González-Díaz, D; Grosse, E; Guber, F; Heilmann, M; Hennino, T; Holzmann, R; Ierusalimov, A; Iori, I; Ivashkin, A; Jurkovic, M; Kämpfer, B; Kanaki, K; Karavicheva, T; Kirschner, D; Koenig, I; Koenig, W; Kolb, B W; Kotte, R; Kozuch, A; Krása, A; Krížek, F; Krücken, R; Kühn, W; Kugler, A; Kurepin, A; Lamas-Valverde, J; Lang, S; Lange, J S; Lapidus, K; Lopes, L; Lorenz, M; Maier, L; Mangiarotti, A; Marín, J; Markert, J; Metag, V; Michalska, B; Michel, J; Mishra, D; Moriničre, E; Mousa, J; Müntz, C; Naumann, L; Novotny, R; Otwinowski, J; Pachmayer, Y C; Palka, M; Parpottas, Y; Pechenov, V; Pechenova, O; Pérez Cavalcanti, T; Pietraszko, J; Przygoda, W; Ramstein, B; Reshetin, A; Rustamov, A; Sadovsky, A; Salabura, P; Schmah, A; Simon, R; Sobolev, Yu G; Spataro, S; Spruck, B; Ströbele, H; Stroth, J; Sturm, C; Sudol, M; Tarantola, A; Teilab, K; Tlustý, P; Traxler, M; Trebacz, R; Tsertos, H; Veretenkin, I; Wagner, V; Weber, M; Wisniowski, M; Wüstenfeld, J; Yurevich, S; Zanevsky, Y V; Zhou, P; Zumbruch, P

    2010-01-01

    In pp collisions at 1.25 GeV kinetic energy, the HADES collaboration aimed at investigating the di-electron production related to the Δ(1232) Dalitz decay (Δ+ → pe+e−). In order to constrain the models predicting the cross section and the production mechanisms of the Δ resonance, the hadronic channels have been measured and studied in parallel to the leptonic channels. The analyses of the pp → npπ+ and pp → ppπ0 channels and the comparison to simulations are presented in this contribution, in particular the angular distributions, which are sensitive to Δ production and decay. Accurate acceptance corrections have been performed as well and could be tested over the whole phase-space region thanks to the high-statistics data. These analyses result in an overall agreement with the one-pion-exchange model and with previous data.

  16. A graph model for opportunistic network coding

    KAUST Repository

    Sorour, Sameh

    2015-08-12

    © 2015 IEEE. Recent advancements in graph-based analysis and solutions of instantly decodable network coding (IDNC) trigger the interest to extend them to more complicated opportunistic network coding (ONC) scenarios, with limited increase in complexity. In this paper, we design a simple IDNC-like graph model for a specific subclass of ONC, by introducing a more generalized definition of its vertices and the notion of vertex aggregation in order to represent the storage of non-instantly-decodable packets in ONC. Based on this representation, we determine the set of pairwise vertex adjacency conditions that can populate this graph with edges so as to guarantee decodability or aggregation for the vertices of each clique in this graph. We then develop the algorithmic procedures that can be applied on the designed graph model to optimize any performance metric for this ONC subclass. A case study on reducing the completion time shows that the proposed framework improves on the performance of IDNC and gets very close to the optimal performance.
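
    The graph picture used above can be illustrated with the generic IDNC construction sketched below (not the paper's extended ONC graph with vertex aggregation): vertices are (receiver, wanted packet) pairs, two vertices are adjacent when one XOR-coded transmission serves both, and maximal cliques then give candidate coded transmissions. The side-information sets are invented for the example.

```python
# Generic IDNC-style graph sketch (illustration only): vertices are
# (receiver, wanted packet) pairs; two vertices can be served by one XOR if
# they want the same packet, or if each receiver already holds the packet
# wanted by the other.  Maximal cliques give candidate coded transmissions.
from itertools import combinations

wants = {"r1": {1, 3}, "r2": {2, 3}, "r3": {1}}   # made-up missing packets
has   = {"r1": {2},    "r2": {1},    "r3": {2, 3}}  # made-up side information

vertices = [(r, p) for r, ps in wants.items() for p in ps]

def adjacent(v, w):
    (ri, pi), (rj, pj) = v, w
    if ri == rj:
        return False
    return pi == pj or (pi in has[rj] and pj in has[ri])

edges = {frozenset((v, w)) for v, w in combinations(vertices, 2) if adjacent(v, w)}

def is_clique(vs):
    return all(frozenset((a, b)) in edges for a, b in combinations(vs, 2))

cliques = [vs for k in range(len(vertices), 0, -1)
           for vs in combinations(vertices, k) if is_clique(vs)]
print(cliques[0])   # a maximum clique = the packets to XOR in one transmission
```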

  17. Optimisation of low-mass drift chambers for HADES

    Energy Technology Data Exchange (ETDEWEB)

    Garabatos, C.; Karig, W.; Muentz, C.; Steigerwald, A.; Stroth, J.; Wuestenfeld, J.; Zentek, A.; Bethge, K. [Frankfurt Univ. (Germany); Chernenko, S.; Fateev, O.; Glonti, L.; Smykov, L.; Zanewsky, Yu. [Joint Institute of Nuclear Research (JINR), 141980 Dubna (Russian Federation); Bokemeyer, H.; Koenig, W.; Stelzer, H.; Zumbruch, P. [Gesellschaft fuer Schwerionenforschung (GSI), 64291 Darmstadt (Germany)

    1998-07-21

    A set of low-mass drift chambers designed for the tracking of di-electrons in the HADES spectrometer is presented. Short radiation length is ensured by using a Helium-based gas mixture and aluminium cathode wires, whereas the acceptance is maximised by limiting the width of the side frames of the detectors. The measured performance of these devices, which improves with increasing concentration of quencher gas, is shown and discussed. Position resolutions down to 70 {mu}m have been obtained under stable operation. Long-term gain stability is also reported. (orig.)

  18. Optimisation of low-mass drift chambers for HADES

    Science.gov (United States)

    Garabatos, C.; Karig, W.; Müntz, C.; Steigerwald, A.; Stroth, J.; Wüstenfeld, J.; Zentek, A.; Bethge, K.; Chernenko, S.; Fateev, O.; Glonti, L.; Smykov, L.; Zanewsky, Yu; Bokemeyer, H.; Koenig, W.; Stelzer, H.; Zumbruch, P.

    A set of low-mass drift chambers designed for the tracking of di-electrons in the HADES spectrometer is presented. Short radiation length is ensured by using a Helium-based gas mixture and aluminium cathode wires, whereas the acceptance is maximised by limiting the width of the side frames of the detectors. The measured performance of these devices, which improves with increasing concentration of quencher gas, is shown and discussed. Position resolutions down to 70 μm have been obtained under stable operation. Long-term gain stability is also reported.

  19. A kinematic refit for an analysis improvement at HADES

    Energy Technology Data Exchange (ETDEWEB)

    Siebenson, Johannes [TU Muenchen (Germany)

    2010-07-01

    In April 2007, pp reactions at 3.5 GeV were measured with the HADES spectrometer at GSI. In these elementary reactions, resonances can be reconstructed via the missing-mass technique. A kinematic refit has been employed to recalculate the momenta of measured tracks in exclusive reactions under the assumption of physical constraints. This well-known procedure can dramatically improve both the invariant- and missing-mass distributions, thereby increasing the signal-to-background ratio. The mathematical procedure underlying the kinematic fit is presented, as well as experimental results achieved for the reconstruction of the η and ω mesons and the Σ(1385)+ resonance.
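
    For reference, such kinematic refits are commonly formulated as the textbook constrained least-squares problem below; this is a generic formulation, not the specific HADES implementation.

```latex
% Generic constrained least-squares form of a kinematic refit: measured track
% parameters y with covariance V are pulled to fitted values \eta that satisfy
% the kinematic constraints H(\eta)=0 (e.g. energy-momentum conservation or a
% fixed missing mass).
\begin{align}
  \chi^2(\eta,\lambda) \;=\; (y-\eta)^{\mathsf T} V^{-1} (y-\eta)
  \;+\; 2\,\lambda^{\mathsf T} H(\eta)
\end{align}
% The fit minimises \chi^2 over \eta and the Lagrange multipliers \lambda,
% typically by iterating on a linearisation H(\eta)\approx H(\eta_0)+D(\eta-\eta_0).
```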

  20. New RPC front-end electronics for hades

    CERN Document Server

    Gil, Alejandro; Cabanelas, P; Díaz, J; Garzón, J A; González-Díaz, D; König, W; Lange, J S; Marín, J; Montes, N; Skott, P; Traxler, M

    2007-01-01

    Time-of-flight (TOF) detectors are mainly used for particle identification and for triggering. Resistive Plate Chamber (RPC) detectors are becoming widely used because of their excellent TOF capabilities and reduced cost. The new ESTRELA* RPC wall, which is being installed in the HADES detector at GSI Darmstadt, will contain 1024 RPC modules covering an active area of around 7 m2. It has excellent TOF and good charge resolution. Its Front-End electronics is based on an 8-layer Mother-Board providing impedance-matched paths for the output signals of each of the eight 4-channel Daughter-Boards to the TDC.

  1. Electronic Developments for the Hades RPC Wall Overview and Progress

    CERN Document Server

    Gil, A; Cabanelas, P; Castro, E; Díaz, J; Garzón, J A; Gonzales-Diaz, D; König, W; Lange, J S; May, G; Traxler, M

    2007-01-01

    This contribution presents the current status and progress of the electronics developed for the Resistive Plate Chamber detector of HADES. This new detector for the time-of-flight system will contain more than 1000 RPC modules, covering a total active area of around 7 m2. The Front-End electronics consists of custom-made boards that exploit commercial components to achieve time resolutions below 100 ps. The Readout electronics, also custom-made, is a multipurpose board providing a 128-channel Time to Digital Converter (TDC) based on the HPTDC chip.

  2. HADES, the new electron-pair spectrometer at GSI

    CERN Document Server

    Friese, J

    1999-01-01

    The High Acceptance DiElectron Spectrometer HADES is presently being set up at GSI, Darmstadt, for the systematic study of hadron properties inside nuclear matter. Resonance widths and effective masses of vector mesons, as well as electromagnetic transition form factors of neutral mesons and baryon resonances, will be investigated in the low invariant-mass region M_inv ≤ 1 GeV/c² with a resolution ΔM_inv/M_inv ≈ 1% (σ).

  3. New trigger-algorithms for the HADES-Detector

    Energy Technology Data Exchange (ETDEWEB)

    Kopp, Andreas; Liu, Ming; Spruck, Bjoern; Lange, Soeren; Kuehn, Wolfgang [II. Physikalisches Institut, JLU Giessen, Heinrich-Buff-Ring 16, 35392 Giessen (Germany); Collaboration: HADES-Collaboration

    2011-07-01

    For the upgrade of the HADES experiment, high data rates and sophisticated real-time processing are foreseen. To this end, general-purpose Compute Nodes based on FPGAs and modern network technologies have been designed. With these, faster and more efficient algorithms for di-electron recognition, implemented in VHDL, can be realized. For the selection of electron candidate events, the new trigger algorithm matches MDC tracks with the RICH, Shower and TOF detectors, even through the inhomogeneous magnetic field. Software simulation results for the efficiency and the trigger reduction factor on real data (Ar+KCl at 1.756 AGeV) are reported.

  4. Single and double pion production in np collisions at 1.25 GeV with HADES

    CERN Document Server

    Kurilkin, A K; Balanda, A; Belver, D; Belyaev, A; Blanco, A; Böhmer, M; Boyard, J L; Cabanelas, P; Castro, E; Chernenko, S; D\\'\\iaz, J; Dybczak, A; Epple, E; Fabbietti, L; Fateev, O; Finocchiaro, P; Fonte, P; Friese, J; Fröhlich, I; Galatyuk, T; Garzón, J A; Gil, A; Golubeva, M; González-D\\'\\iaz, D; Guber, F; Hennino, T; Holzmann, R; Huck, P; Ierusalimov, A; Iori, I; Ivashkin, A; Jurkovic, M; Kämpfer, B; Karavicheva, T; Koenig, I; Koenig, W; Kolb, B W; Kopp, A; Korcyl, G; Kornakov, GK; Kotte, R; Kozuch, A; Krása, A; Krizek, F; Krücken, R; Kuc, H; Kühn, W; Kugler, A; Kurepin, A; Kurilkin, P; Khlitz, P; Ladygin, V; Lamas-Valverde, J; Lang, S; Lapidus, K; Liu, T; Lopes, L; Lorenz, M; Maier, L; Mangiarotti, A; Markert, J; Metag, V; Michalska, B; Michel, J; Müntz, C; Naumann, L; Pachmayer, Y C; Palka, M; Parpottas, Y; Pechenov, V; Pechenova, O; Pietraszko, J; Przygoda, W; Ramstein, B; Reshetin, A; Roskoss, J; Rustamov, A; Sadovsky, A; Salabura, P; Schmah, A; Siebenson, J; Sobolev, Yu G; Spataro, S; Ströbele, H; Stroth, J; Sturm, C; Sudol, M; Tarantola, A; Teilab, K; Tlusty, P; Traxler, M; Trebacz, R; Tsertos, H; Vasiliev, T; Wagner, V; Weber, M; Wüstenfeld, J; Yurevich, S; Zanevsky, Y

    2011-01-01

    Preliminary results on charged pion production in np collisions at an incident beam energy of 1.25 GeV measured with HADES are presented. The np reactions were isolated in dp collisions at 1.25 GeV/u using the Forward Wall hodoscope, which allowed spectator protons to be registered. The results for the np -> pppi-, np -> nppi+pi- and np -> dpi+pi- channels are compared with OPE calculations. A reasonable agreement between the experimental results and the predictions of the OPE+OBE model is observed.

  5. ER@CEBAF: Modeling code developments

    Energy Technology Data Exchange (ETDEWEB)

    Meot, F. [Brookhaven National Lab. (BNL), Upton, NY (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Roblin, Y. [Brookhaven National Lab. (BNL), Upton, NY (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)

    2016-04-13

    A proposal for a multiple-pass, high-energy, energy-recovery experiment using CEBAF is under preparation in the frame of a JLab-BNL collaboration. In view of beam dynamics investigations for this project, a model of CEBAF has been developed in the stepwise ray-tracing code Zgoubi, in addition to the existing model in use in Elegant. Beyond the ER experiment, it is also planned to use the Zgoubi model for the study of polarization transport in the presence of synchrotron radiation, down to the Hall D line, where a 12 GeV polarized beam can be delivered. This Note briefly reports on the preliminary steps and preliminary outcomes, based on an Elegant-to-Zgoubi translation.

  6. Characteristic Analysis of Fire Modeling Codes

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Hwan; Yang, Joon Eon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Jong Hoon [Kyeongmin College, Ujeongbu (Korea, Republic of)

    2004-04-15

    This report documents and compares key features of four zone models: CFAST, COMPBRN IIIE, MAGIC and the Fire Induced Vulnerability Evaluation (FIVE) methodology. CFAST and MAGIC handle multi-compartment, multi-fire problems using many equations; COMPBRN and FIVE handle single-compartment, single-fire-source problems using simpler equations. The increased rigor of the formulation of CFAST and MAGIC does not mean that these codes are more accurate in every domain; for instance, the FIVE methodology uses a single-zone approximation with a plume/ceiling jet sublayer, while the other models use a two-zone treatment without a plume/ceiling jet sublayer. Comparisons with enclosure fire data indicate that including plume/ceiling jet sublayer temperatures is more conservative, and generally more accurate, than neglecting them. Adding a plume/ceiling jet sublayer to the two-zone models should be relatively straightforward, but it has not yet been done for any of the two-zone models. Such an improvement is in progress for MAGIC.

  7. The new data acquisition system of the HADES experiment

    Energy Technology Data Exchange (ETDEWEB)

    Palka, Marek [Jagiellonian University, Cracow (Poland); Gesellschaft fuer Schwerionenforschung, Darmstadt (Germany)

    2008-07-01

    The main goal of the HADES DAQ upgrade project is to achieve a primary data acquisition rate of 20 kHz, even for events with maximum charged-particle multiplicity (i.e. Au+Au collisions). We favor a modular design to increase the scope of applications (e.g. to integrate new detector systems) and to simplify debugging. A first version (TRBv1) of a system fulfilling the above requirements was recently used during production runs to read out four detector subsystems. However, in order to work at the highest rates and to provide fast pre-processing for the different trigger levels, an improved board was necessary (TRBv2). Such a board has been built, featuring a fast optical link (2.5 Gb/s), a Tiger-Shark DSP (600 MHz), 2 Gb of SDRAM and an ETRAX-FS multiprocessor. In this talk, the real-world performance of this upgraded TRB is presented. Customized Add-On boards are plugged onto this board, providing the connectivity and functionality necessary to interface the different detector systems used in HADES. Following the same concept, an optical hub with 16 transceivers (32 Gb/s) and the central Trigger system have been implemented as Add-On boards. We present the current status of all these components and describe how they interact in a complete system.

  8. Development of the pion tracker for HADES spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Lalik, Rafal [Excellence Cluster ' ' Universe' ' , TU Muenchen, Boltzmannstr.2, 85748 Garching (Germany); Collaboration: HADES-Collaboration

    2014-07-01

    The beam detector for experiments with pion beams at the HADES spectrometer at GSI Darmstadt has been developed. The main goal of this development is to provide efficient event-by-event reconstruction of secondary pion momenta. Pions are created by impinging nitrogen or proton beams on a secondary beryllium target and are then delivered through a chicane to the experimental areas. The expected momentum spread of the secondary pion beam is about 8%, whereas the simulated resolution of the reconstructed momenta is below 0.5%. The challenging issue is to achieve an accurate measurement (on the per-mille level) for each pion in a high-intensity (10{sup 8} part./spill) environment along the pion-beam chicane. This translates into a rate of 10{sup 6} pions/spill with a kinetic energy of 1-2 GeV at the HADES target point. The tracking system is based on double-sided silicon strip sensors built in radiation-hard technology and an n-XYTER ASIC front-end readout. The trigger logic is implemented in TRB3 boards. The whole system is designed to be standalone, easily scalable and portable. In this talk we show the status and performance of the system and recent results obtained in the laboratory and with 2 GeV proton beams at COSY.

  9. Recent results from HADES on electron pair production in relativistic heavy-ion collisions

    CERN Document Server

    Galatyuk, T; Balanda, A; Belver, D; Belyaev, A V; Blanco, A; Böhmer, M; Boyard, J L; Braun-Munzinger, P; Cabanelas, P; Castro, E; Chernenko, S; Christ, T; Destefanis, M; Díaz, J; Dohrmann, F; Dybczak, A; Fabbietti, L; Fateev, O V; Finocchiaro, P; Fonte, P; Friese, J; Fröhlich, I; Garzón, J A; Gernhäuser, R; Gil, A; Gilardi, C; Golubeva, M; González-Díaz, D; Guber, F; Hennino, T; Holzmann, R; Iori, I; Ivashkin, A; Jurkovic, M; Kämpfer, B; Karavicheva, T; Kirschner, D; Koenig, I; Koenig, W; Kolb, B W; Kotte, R; Krizek, F; Krücken, R; Kühn, W; Kugler, A; Kurepin, A; Lang, S; Lange, J S; Lapidus, K; Liu, T; Lopes, L; Lorenz, M; Maier, L; Mangiarotti, A; Markert, J; Metag, V; Michalska, B; Michel, J; Morinière, E; Mousa, J; Müntz, C; Naumann, L; Otwinowski, J; Pachmayer, Y C; Palka, M; Parpottas, Y; Pechenov, V; Pechenova, O; Pietraszko, J; Przygoda, W; Ramstein, B; Reshetin, A; Rustamov, A; Sadovsky, A; Salabura, P; Schmah, A; Schwab, E; Sobolev, Yu G; Spataro, S; Spruck, B; Ströbele, H; Stroth, J; Sturm, C; Sudol, M; Tarantola, A; Teilab, K; Tlusty, P; Traxler, M; Trebacz, R; Tsertos, H; Wagner, V; Weber, M; Wisniowski, M; Wojcik, T; Wüstenfeld, J; Yurevich, S; Zanevsky, Y V; Zhou, P

    2009-01-01

    Systematic investigations of dilepton production are performed at the SIS accelerator of GSI with the HADES spectrometer. The goal of this program is a detailed understanding of di-electron emission from hadronic systems at moderate temperatures and densities. New results obtained in HADES experiments focusing on electron pair production in elementary collisions are reported here. They pave the way to a better understanding of the origin of the so-called excess pairs observed earlier in heavy-ion collisions by the DLS collaboration and lately confirmed in two measurements of the HADES collaboration using C+C and Ar+KCl collisions. Results of these studies are discussed.

  10. Towards Preserving Model Coverage and Structural Code Coverage

    Directory of Open Access Journals (Sweden)

    Raimund Kirner

    2009-01-01

    Embedded systems are often used in safety-critical environments, so thorough testing of them is mandatory. To achieve a required structural code-coverage criterion, it is beneficial to derive the test data at a higher program-representation level than machine code. Higher program-representation levels include, besides the source-code level, the languages of domain-specific modeling environments with automatic code generation. For a testing framework with automatic generation of test data, this enables high retargetability of the framework. In this article we address the challenge of ensuring that the structural code coverage achieved at a higher program-representation level is preserved during the code generation and code transformations down to machine code. We define the formal properties that have to be fulfilled by a code transformation to guarantee preservation of structural code coverage. Based on these properties we discuss how to preserve code coverage achieved at the source-code level. Additionally, we discuss how structural code coverage at the model level could be preserved. The results presented in this article are aimed at integrating support for preserving structural code coverage into compilers and code generators.
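
    The preservation property can be illustrated with the entirely hypothetical Python sketch below: given a mapping from source-level decisions to the decisions that survive a code transformation, it checks that every decision covered by the test data at source level is still exercised after the transformation. The mapping and coverage data are invented and do not represent the article's formalism.

```python
# Toy coverage-preservation check (illustration only): each source-level
# decision maps to the transformed-level decisions it turned into; coverage is
# "preserved" if every covered source decision still has all its images covered.
source_coverage = {"d1": True, "d2": True, "d3": False}       # hits at source level
mapping = {"d1": ["t1"], "d2": ["t2a", "t2b"], "d3": ["t3"]}  # source -> transformed
transformed_coverage = {"t1": True, "t2a": True, "t2b": False, "t3": False}

def coverage_violations(source_coverage, mapping, transformed_coverage):
    violations = []
    for decision, covered in source_coverage.items():
        if covered and not all(transformed_coverage[t] for t in mapping[decision]):
            violations.append(decision)
    return violations

print(coverage_violations(source_coverage, mapping, transformed_coverage))  # ['d2']
```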

  11. Simulink Code Generation: Tutorial for Generating C Code from Simulink Models using Simulink Coder

    Science.gov (United States)

    MolinaFraticelli, Jose Carlos

    2012-01-01

    This document explains all the necessary steps to generate optimized C code from Simulink models. It also covers some general information on good programming practices, the selection of variable types, how to organize models and subsystems, and finally how to test the generated C code and compare it with data from MATLAB.

  12. PetriCode: A Tool for Template-Based Code Generation from CPN Models

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge

    2014-01-01

    Code generation is an important part of model driven methodologies. In this paper, we present PetriCode, a software tool for generating protocol software from a subclass of Coloured Petri Nets (CPNs). The CPN subclass is comprised of hierarchical CPN models describing a protocol system at different...

  13. Measurement of low-mass e{sup +}e{sup -} pair production in 1 and 2 A GeV C-C collision with HADES

    Energy Technology Data Exchange (ETDEWEB)

    Sudol, M.; Boyard, J.L.; Hennino, T.; Moriniere, E.; Ramstein, B.; Roy-Stephan, M. [CNRS/IN2P3 - Univ. Paris Sud, Inst. de Physique Nucleaire (UMR 8608), Orsay Cedex (France); Agakishiev, G.; Destefanis, M.; Gilardi, C.; Kirschner, D.; Kuehn, W.; Lange, J.S.; Metag, V.; Novotny, R.; Pechenov, V.; Pechenova, O.; Perez Cavalcanti, T.; Spataro, S.; Spruck, B. [Justus Liebig Universitaet Giessen, II. Physikalisches Institut, Giessen (Germany); Agodi, C.; Bellia, G.; Coniglione, R.; Finocchiaro, P.; Maiolino, C.; Piattelli, P.; Sapienza, P. [Istituto Nazionale di Fisica Nucleare-Laboratori Nazionali del Sud, Catania (Italy); Balanda, A.; Dybczak, A.; Kozuch, A.; Otwinowski, J.; Przygoda, W.; Salabura, P.; Trebacz, R.; Wisniowski, M.; Wojcik, T. [Jagiellonian Univ. of Cracow, Smoluchowski Inst. of Physics, Krakow (Poland); Belver, D.; Cabanelas, P.; Duran, I.; Garzon, J.A.; Lamas-Valverde, J.; Marin, J. [Univ. of Santiago de Compostela, Santiago de Compostela (Spain); Belyaev, A.; Chernenko, S.; Fateev, O.; Ierusalimov, A.; Zanevsky, Y. [Joint Inst. of Nuclear Research, Dubna (Russian Federation); Bielcik, J.; Braun-Munzinger, P.; Galatyuk, T.; Gonzalez-Diaz, D.; Heinz, T.; Holzmann, R.; Koenig, I.; Koenig, W.; Kolb, B.W.; Lang, S.; Muench, M.; Palka, M.; Pietraszko, J.; Rustamov, A.; Schroeder, C.; Schwab, E.; Simon, R.; Traxler, M.; Yurevich, S.; Zumbruch, P. [Gesellschaft fuer Schwerionenforschung mbH, Darmstadt (Germany); Blanco, A.; Ferreira-Marques, R.; Fonte, P.; Lopes, L.; Mangiarotti, A. [LIP-Lab. de Instrumentacao e Fisica Experimental de Particulas, Coimbra (Portugal); Bortolotti, A.; Iori, I.; Michalska, B. [Sezione di Milano, Istituto Nazionale di Fisica Nucleare, Milano (Italy); Christ, T.; Eberl, T.; Fabbietti, L.; Friese, J.; Gernhaeuser, R.; Jurkovic, M.; Kruecken, R.; Maier, L.; Sailer, B.; Schmah, A.; Weber, M. [Technische Univ. Muenchen, Munich (Germany); Diaz, J.; Gil, A. [Univ. de Valencia-CSIC, Valencia (Spain)] [and others

    2009-07-15

    HADES is a second generation experiment operated at GSI Darmstadt with the main goal of studying dielectron production in proton-, pion- and heavy-ion-induced reactions. The first part of the HADES mission is to reinvestigate the puzzling pair excess measured by the DLS collaboration in C+C and Ca+Ca collisions at 1 A GeV. For this purpose, dedicated measurements with the C+C system at 1 and 2 A GeV were performed. The pair excess above a cocktail of free hadronic decays has been extracted and compared to the one measured by DLS. Furthermore, the excess is confronted with the predictions of various model calculations. (orig.)

  14. 40 CFR 194.23 - Models and computer codes.

    Science.gov (United States)

    2010-07-01

    40 CFR § 194.23 (edition of 2010-07-01), Protection of Environment, General Requirements — Models and computer codes. (a) Any compliance application shall include: (1) ... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., ...

  15. Searching a Dark Photon with HADES

    OpenAIRE

    2014-01-01

    The existence of a photon-like massive particle, the $\gamma'$ or dark photon, is postulated in several extensions of the Standard Model. Such a particle could indeed help to explain the puzzling behavior of the observed cosmic-ray positron fraction as well as to solve the so far unexplained deviation between the measured and calculated values of the muon $g-2$ anomaly. The dark photon, unlike its conventional counterpart, would have mass and would be detectable via its mixing with the latter. We ...

  16. The analysis of thermal-hydraulic models in MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, M. H.; Hur, C.; Kim, D. K.; Cho, H. J. [POhang Univ., of Science and TECHnology, Pohang (Korea, Republic of)

    1996-07-15

    The objective of the present work is to verify the prediction and analysis capability of MELCOR code about the progression of severe accidents in light water reactor and also to evaluate appropriateness of thermal-hydraulic models used in MELCOR code. Comparing the results of experiment and calculation with MELCOR code is carried out to achieve the above objective. Specially, the comparison between the CORA-13 experiment and the MELCOR code calculation was performed.

  17. Searching a dark photon with HADES

    OpenAIRE

    2014-01-01

    The existence of a photon-like massive particle, the γ' or dark photon, is postulated in several extensions of the Standard Model to explain some recent puzzling astrophysical observations, as well as to solve the so far unexplained deviation between the measured and calculated values of the muon anomaly. The dark photon, unlike the conventional photon, would have mass and would be detectable via its mixing with the latter. We present a search for the e+e− decay of such a hypothetical particle...

  18. Image Coding using Markov Models with Hidden States

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto

    1999-01-01

    The Cylinder Partially Hidden Markov Model (CPH-MM) is applied to lossless coding of bi-level images. The original CPH-MM is relaxed for the purpose of coding by not imposing stationarity, but otherwise the model description is the same.

  19. HADES - Hydrophone for Acoustic Detection at South Pole

    CERN Document Server

    Semburg, Benjamin

    2008-01-01

    The South Pole Acoustic Test Setup (SPATS) is located in the upper part of the optical neutrino observatory IceCube, currently under construction. SPATS consists of four strings at depths between 80 m and 500 m below the surface of the ice with seven stages per string. Each stage is equipped with an acoustic sensor and a transmitter. Three strings (string A-C) were deployed in the austral summer 2006/07. SPATS was extended by a fourth string (string D) with second generation sensors and transmitters in 2007/08. One second generation sensor type HADES (Hydrophone for Acoustic Detection at South Pole) consists of a ring-shaped piezo-electric element coated with polyurethane. The development of the sensor, optimization of acoustic transmission by acoustic impedance matching and first in-situ results will be discussed.

  20. Meson and di-electron production with HADES

    CERN Document Server

    Fröhlich, I; Agodi, C; Balanda, A; Bellia, G; Belver, D; Belyaev, A; Blanco, A; Böhmer, M; Boyard, J L; Braun-Munzinger, P; Cabanelas, P; Castro, E; Chernenko, S; Christ, T; Destefanis, M; Daz, J; Dohrmann, F; Dybczak, A; Eberl, T; Fabbietti, L; Fateev, O; Finocchiaro, P; Fonte, Paulo J R; Friese, J; Galatyuk, T; Garzn, J A; Gernhuser, R; Gil, A; Gilardi, C; Golubeva, M; Gonzalez-Diaz, D; Grosse, E; Guber, F; Heilmann, M; Hennino, T; Holzmann, R; Ierusalimov, A; Iori, I; Ivashkin, A; Jurkovic, M; Kmpfer, B; Kanaki, K; Karavicheva, T; Kirschner, D; König, I; König, W; Kolb, B W; Kotte, R; Kozuch, A; Krsa, A; Krizek, F; Krcken, R; Khn, W; Kugler, A; Kurepin, A; Lamas-Valverde, J; Lang, S; Lange, J S; Lapidus, K; Lopes, L; Maier, L; Mangiarotti, A; Marn, J; Markert, J; Metag, V; Michalska, B; Michel, J; Mishra, D; Moriniere, E; Mousa, J; Müntz, C; Naumann, Lutz; Novotny, R; Otwinowski, J; Pachmayer, Y C; Palka, M; Parpottas, Y; Pechenov, V; Pechenova, O; Perez Cavalcanti, T; Pietraszko, J; Przygoda, W; Ramstein, B; Reshetin, A; Roy-Stephan, M; Rustamov, A; Sadovskii, A; Sailer, B; Salabura, P; Schmah, A; Simon, R; Spataro, S; Spruck, B; Ströbele, H; Stroth, J; Sturm, C; Sudol, M; Tarantola, A; Teilab, K; Tlustý, P; Traxler, M; Trebacz, R; Tsertos, H; Veretenkin, I; Wagner, V; Wen, H; Wisniowski, M; Wojcik, T; Wstenfeld, J; Yurevich, S; Zanevsky, Y; Zhou, P; Zumbruch, P

    2009-01-01

    The HADES experiment, installed at GSI, Darmstadt, measures di-electron production in A+A, p/pi+N and p/pi+A collisions. Here, the pi0 and eta Dalitz decays have been reconstructed in the exclusive p+p reaction at 2.2 GeV to form a reference cocktail for long-lived di-electron sources. In the C+C reaction at 1 and 2 GeV/u, these long-lived sources have been subtracted from the measured inclusive e+e- yield to exhibit the signal from the early phase of the collision. The results suggest that resonances play an important role in dense nuclear matter.

  1. Interfacial and Wall Transport Models for SPACE-CAP Code

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Soon Joon; Choo, Yeon Joon; Han, Tae Young; Hwang, Su Hyun; Lee, Byung Chul [FNC Tech., Seoul (Korea, Republic of); Choi, Hoon; Ha, Sang Jun [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)

    2009-10-15

    The development project for a domestic design code was launched for the safety and performance analysis of pressurized light water reactors, and the CAP (Containment Analysis Package) code has been developed for containment safety and performance analysis side by side with SPACE. The CAP code treats three fields (gas, continuous liquid, and dispersed drops) for the assessment of containment-specific phenomena, and is featured by its multidimensional assessment capabilities. The thermal-hydraulics solver has already been developed and is now being tested for stability and soundness. As a next step, the interfacial and wall transport models were set up. In order to develop the best model and correlation package for the CAP code, various models currently used in the major containment analysis codes GOTHIC, CONTAIN 2.0 and CONTEMPT-LT have been reviewed. The origins of the selected models used in these codes have also been examined to verify that the models do not conflict with proprietary rights. In addition, a literature survey of recent studies has been performed in order to incorporate better models into the CAP code. The models and correlations of SPACE were also reviewed. The CAP models and correlations comprise interfacial heat/mass and momentum transport models, and wall heat/mass and momentum transport models. This paper discusses those transport models in the CAP code.
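
    The three-field bookkeeping mentioned above (gas, continuous liquid, dispersed drops) can be pictured with the toy Python sketch below, which moves drop mass onto the liquid film at an assumed constant rate; it is not a CAP model or correlation, and every number is made up.

```python
# Minimal three-field sketch (illustration only): gas, continuous liquid and
# dispersed drops in one cell, with one crude interfacial term (drop deposition
# onto the liquid film at a fixed rate).
from dataclasses import dataclass

@dataclass
class Field:
    name: str
    mass: float            # kg of this field in the cell

def deposit_drops(drops: Field, liquid: Field, rate: float, dt: float):
    """Move drop mass to the continuous liquid field at a fixed rate (kg/s)."""
    dm = min(rate * dt, drops.mass)
    drops.mass -= dm
    liquid.mass += dm

gas, liquid, drops = Field("gas", 1.2), Field("liquid", 40.0), Field("drops", 3.0)
deposit_drops(drops, liquid, rate=0.5, dt=2.0)
print(drops.mass, liquid.mass)   # 2.0 41.0
```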

  2. Code Generation for Protocols from CPN models Annotated with Pragmatics

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael; Kindler, Ekkart

    Model-driven engineering (MDE) provides a foundation for automatically generating software based on models. Models allow software designs to be specified focusing on the problem domain and abstracting from the details of underlying implementation platforms. When applied in the context of formal...... modelling languages, MDE further has the advantage that models are amenable to model checking which allows key behavioural properties of the software design to be verified. The combination of formally verified models and automated code generation contributes to a high degree of assurance that the resulting...... of the same model and sufficiently detailed to serve as a basis for automated code generation when annotated with code generation pragmatics. Pragmatics are syntactical annotations designed to make the CPN models descriptive and to address the problem that models with enough details for generating code from...

  3. Conservation of concrete structures in fib model code 2010

    NARCIS (Netherlands)

    Matthews, S.L.; Ueda, T.; Bigaj-van Vliet, A.

    2012-01-01

    Chapter 9: Conservation of concrete structures forms part of fib Model Code 2010, the first draft of which was published for comment as fib Bulletins 55 and 56 (fib 2010). Numerous comments were received and considered by fib Special Activity Group 5 responsible for the preparation of fib Model Code

  5. Code Generation for Embedded Software for Modeling Clear Box Structures

    Directory of Open Access Journals (Sweden)

    V. Chandra Prakash

    2011-09-01

    Cleanroom Software Engineering (CRSE) recommends that the code related to application systems be generated either manually or through code generation models, or be represented as a hierarchy of clear box structures. CRSE has even advocated that the code be developed using the state models that describe the internal behavior of the systems. No framework has been recommended by any author in which the clear boxes are designed using code generation methods. Code generation is one of the important quality issues addressed in Cleanroom Software Engineering. It has been investigated that CRSE can be used for life cycle management of embedded systems when hardware-software co-design is built in as part and parcel of CRSE, by adding suitable models to CRSE and redefining it accordingly. The design of embedded systems involves code generation for the hardware and for the embedded software. In this paper, a framework is proposed using which the embedded software is generated. The method is unique in that it considers various aspects of code generation, including code segments, code functions, classes, globalization, variable propagation, etc. The proposed framework has been applied to a pilot project and the experimental results are presented.

  6. Dielectron analysis in p-p collisions at 3.5 GeV with the HADES spectrometer. {omega}-meson line shape and a new electronics readout for the multi-wire drift chambers

    Energy Technology Data Exchange (ETDEWEB)

    Tarantola Peloni, Attilio

    2011-06-15

    The HADES (High Acceptance DiElectron Spectrometer) is an experimental apparatus installed at the heavy-ion synchrotron SIS-18 at GSI, Darmstadt. The main physics motivation of the HADES experiment is the measurement of e{sup +}e{sup -} pairs in the invariant-mass range up to 1 GeV/c{sup 2} in heavy-ion collisions as well as in pion- and proton-induced reactions. The HADES physics program is focused on the in-medium properties of the light vector mesons {rho}(770), {omega}(783) and {phi}(1020), which decay with a small branching ratio into dileptons. Dileptons are penetrating probes that allow the in-medium properties of hadrons to be studied. However, in heavy-ion collisions the measurement of such lepton pairs is difficult because they are rare and subject to a very large combinatorial background. Recently, HADES has been upgraded with new detectors and new electronics in order to handle higher-intensity beams and reactions with heavy nuclei up to Au. HADES will continue its rich physics program at SIS-18 for a few more years and then move to the upcoming international Facility for Antiproton and Ion Research (FAIR) accelerator complex. In this context the physics results presented in this work are important prerequisites for the investigation of in-medium vector meson properties in p+A and A+A collisions. This work consists of five chapters. The first chapter introduces the physics motivation and reviews recent physics results. In the second chapter, the HADES spectrometer is described and its sub-detectors are presented. Chapter three deals with lepton identification, and the reconstruction of the dielectron spectra in p+p collisions is presented. Here, two reactions are characterized: inclusive and exclusive dilepton production. From the spectra obtained, the corresponding cross sections are presented with their respective statistical and systematic errors. A comparison with theoretical models is included as well.

  7. Automatic code generation from the OMT-based dynamic model

    Energy Technology Data Exchange (ETDEWEB)

    Ali, J.; Tanaka, J.

    1996-12-31

    The OMT object-oriented software development methodology suggests creating three models of the system, i.e., object model, dynamic model and functional model. We have developed a system that automatically generates implementation code from the dynamic model. The system first represents the dynamic model as a table and then generates executable Java language code from it. We used inheritance for super-substate relationships. We considered that transitions relate to states in a state diagram exactly as operations relate to classes in an object diagram. In the generated code, each state in the state diagram becomes a class and each event on a state becomes an operation on the corresponding class. The system is implemented and can generate executable code for any state diagram. This makes the role of the dynamic model more significant and the job of designers even simpler.
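
    As a hedged illustration of the mapping described above (state to class, event to operation), the sketch below hand-writes, in Python rather than the Java emitted by the original system, the kind of code such a generator might produce for an invented two-state diagram; the state names and events are made up for the example.

```python
# Illustrative hand-written example of the state -> class, event -> operation mapping
# described above, for an invented two-state diagram (Idle -> Running via "start",
# Running -> Idle via "stop"). The original system emits Java; Python is used here.

class State:
    """Superstate: default behaviour is to ignore events (no transition)."""
    def start(self, ctx): return self
    def stop(self, ctx): return self

class Idle(State):
    def start(self, ctx):
        ctx.log.append("Idle --start--> Running")
        return Running()

class Running(State):
    def stop(self, ctx):
        ctx.log.append("Running --stop--> Idle")
        return Idle()

class Machine:
    """Context object: dispatches events to the current state object."""
    def __init__(self):
        self.state = Idle()
        self.log = []
    def dispatch(self, event):
        self.state = getattr(self.state, event)(self)

if __name__ == "__main__":
    m = Machine()
    for ev in ["start", "stop", "stop"]:   # the last "stop" is ignored in Idle
        m.dispatch(ev)
    print(m.log)  # ['Idle --start--> Running', 'Running --stop--> Idle']
```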

  8. RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-08-01

    The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal-hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes.

  9. Measurement of low-mass e{sup +}e{sup -} pair production in 2 AGeV C-C collisions with HADES

    Energy Technology Data Exchange (ETDEWEB)

    Sudol, Malgorzata

    2007-07-01

    The search for a modification of hadron properties inside nuclear matter at normal and/or high temperature and density is one of the most interesting issues of modern nuclear physics. Dilepton experiments give insight into the properties of the strong interaction and the nature of hadron mass generation. One of these research tools is the HADES spectrometer. HADES is a high-acceptance dilepton spectrometer installed at the heavy-ion synchrotron (SIS) at GSI, Darmstadt. The main physics motivation of HADES is the measurement of e{sup +}e{sup -} pairs in the invariant-mass range up to 1 GeV/c{sup 2} in pion- and proton-induced reactions, as well as in heavy-ion collisions. The goal is to investigate the properties of the vector mesons {rho}, {omega} and of other hadrons reconstructed from e{sup +}e{sup -} decay pairs. Dileptons are penetrating probes that allow the in-medium properties of hadrons to be studied. However, the measurement of such dilepton pairs is difficult because of a very large background from other processes in which leptons are created. This thesis presents the analysis of the data provided by the first physics run with the HADES spectrometer. For the first time, e{sup +}e{sup -} pairs produced in C+C collisions at an incident energy of 2 GeV per nucleon have been collected with sufficient statistics. This experiment is of particular importance since it makes it possible to address the puzzling pair excess measured by the former DLS experiment at a beam energy of 1.04 AGeV. The thesis consists of five chapters. The first chapter presents the physics case addressed in the work. In the second chapter the HADES spectrometer is introduced, together with the characteristics of the specific detectors that are part of the spectrometer. Chapter three focuses on charged-particle identification. The fourth chapter discusses the reconstruction of the di-electron spectra in C+C collisions. In this part of the thesis a comparison with theoretical models is included as well.

  10. Noise Residual Learning for Noise Modeling in Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Forchhammer, Søren

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The noise model is one of the inherently difficult challenges in DVC. This paper considers Transform Domain Wyner-Ziv (TDWZ) coding and proposes...... decoding. A residual refinement step is also introduced to take advantage of correlation of DCT coefficients. Experimental results show that the proposed techniques robustly improve the coding efficiency of TDWZ DVC and for GOP=2 bit-rate savings up to 35% on WZ frames are achieved compared with DISCOVER....

  11. Coupling a Basin Modeling and a Seismic Code using MOAB

    KAUST Repository

    Yan, Mi

    2012-06-02

    We report on a demonstration of loose multiphysics coupling between a basin modeling code and a seismic code running on a large parallel machine. Multiphysics coupling, which is one critical capability for a high performance computing (HPC) framework, was implemented using the MOAB open-source mesh and field database. MOAB provides for code coupling by storing mesh data and input and output field data for the coupled analysis codes and interpolating the field values between different meshes used by the coupled codes. We found it straightforward to use MOAB to couple the PBSM basin modeling code and the FWI3D seismic code on an IBM Blue Gene/P system. We describe how the coupling was implemented and present benchmarking results for up to 8 racks of Blue Gene/P with 8192 nodes and MPI processes. The coupling code is fast compared to the analysis codes and it scales well up to at least 8192 nodes, indicating that a mesh and field database is an efficient way to implement loose multiphysics coupling for large parallel machines.

  12. RPC HADES-TOF wall cosmic ray test performance

    Energy Technology Data Exchange (ETDEWEB)

    Blanco, A., E-mail: alberto@coimbra.lip.pt [Laboratorio de Instrumentacao e Fisica Experimental de Particulas, LIP, Coimbra (Portugal); Belver, D.; Cabanelas, P. [LabCAF, Universidade de Santiago de Compostela, USC, Santiago de Compostela (Spain); Diaz, J. [Instituto de Fisica Corpuscular IFIC (CSIC-Universidad de Valencia), Valencia (Spain); Fonte, P. [Laboratorio de Instrumentacao e Fisica Experimental de Particulas, LIP, Coimbra (Portugal); Instituto Superior de Engenharia de Coimbra, ISEC, Coimbra (Portugal); Garzon, J.A. [LabCAF, Universidade de Santiago de Compostela, USC, Santiago de Compostela (Spain); Gil, A. [Instituto de Fisica Corpuscular IFIC (CSIC-Universidad de Valencia), Valencia (Spain); Gonzalez-Diaz, D.; Koenig, W.; Kolb, B. [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Lopes, L. [Laboratorio de Instrumentacao e Fisica Experimental de Particulas, LIP, Coimbra (Portugal); Palka, M. [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Pereira, A. [Laboratorio de Instrumentacao e Fisica Experimental de Particulas, LIP, Coimbra (Portugal); and others

    2012-01-01

    In this work we present results concerning the cosmic ray test, prior to the final installation and commissioning of the new Resistive Plate Chamber (RPC) Time of Flight (TOF) wall for the High-Acceptance DiElectron Spectrometer (HADES) at GSI. The TOF wall is composed of six equal sectors, each one constituted by 186 individual 4-gaps glass-aluminium shielded RPC cells distributed in six columns and 31 rows in two partially overlapping layers, covering an area of 1.26 m{sup 2}. All sectors were tested with the final Front End Electronic (FEE) and Data AcQuisition system (DAQ) together with Low Voltage (LV) and High Voltage (HV) systems. Results confirm a very uniform average system time resolution of 77 ps sigma together with an average multi-hit time resolution of 83 ps. Crosstalk levels below 1% (in average), moderate timing tails along with an average longitudinal position resolution of 8.4 mm sigma are also confirmed.

  13. Reconstruction of the {lambda}(1405)-resonance with the HADES-spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Siebenson, Johannes; Fabbietti, Laura; Epple, Eliane; Schmah, Alexander [TU Muenchen (Germany)]

    2010-07-01

    The {lambda}(1405) resonance has been known for many years, but its internal structure is still a topic of investigation. A coupled-channel approach can be used to describe this resonance as a kind of molecular state, oscillating between an anti-KN and a {sigma}{pi} bound state. From the experimental point of view, the line shape of the {lambda}(1405) can be reconstructed from its decay into the ({sigma}{pi}){sup 0} channels. This line shape is expected to be sensitive to the production mechanism, so it is interesting to study the production of this resonance with different beams. We have reconstructed the {lambda}(1405) signal in pp reactions at 3.5 GeV with the HADES spectrometer at GSI, where 1.2 billion triggered events were collected. We succeeded in disentangling the three different decay channels ({lambda}(1405){yields}{sigma}{sup 0}{pi}{sup 0}, {sigma}{sup -}{pi}{sup +} and {sigma}{sup +}{pi}{sup -}), and the resulting line shapes can be compared. All steps of the complex reconstruction analysis are shown, and the results are compared to the predictions of theoretical models.

  14. A unified model of the standard genetic code.

    Science.gov (United States)

    José, Marco V; Zamudio, Gabriel S; Morgado, Eberto R

    2017-03-01

    The Rodin-Ohno (RO) and the Delarue models divide the table of the genetic code into two classes of aminoacyl-tRNA synthetases (aaRSs I and II) with recognition from the minor or major groove sides of the tRNA acceptor stem, respectively. These models are asymmetric but they are biologically meaningful. On the other hand, the standard genetic code (SGC) can be derived from the primeval RNY code (R stands for purines, Y for pyrimidines and N any of them). In this work, the RO-model is derived by means of group actions, namely, symmetries represented by automorphisms, assuming that the SGC originated from a primeval RNY code. It turns out that the RO-model is symmetric in a six-dimensional (6D) hypercube. Conversely, using the same automorphisms, we show that the RO-model can lead to the SGC. In addition, the asymmetric Delarue model becomes symmetric by means of quotient group operations. We formulate isometric functions that convert the class aaRS I into the class aaRS II and vice versa. We show that the four polar requirement categories display a symmetrical arrangement in our 6D hypercube. Altogether these results cannot be attained, neither in two nor in three dimensions. We discuss the present unified 6D algebraic model, which is compatible with both the SGC (based upon the primeval RNY code) and the RO-model.
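
    To make the RNY notation used above concrete, the sketch below classifies codons according to the purine/pyrimidine pattern R-N-Y (R = A or G, Y = C or U, N = any base); it only illustrates the primeval-code notation and does not reimplement the paper's group-theoretic construction.

```python
# Illustration of the RNY notation used above: R = purine (A, G), Y = pyrimidine (C, U),
# N = any base. A codon belongs to the primeval RNY code if it matches that pattern.

PURINES = set("AG")
PYRIMIDINES = set("CU")

def is_rny(codon: str) -> bool:
    """True if the codon has the form purine-any-pyrimidine."""
    first, _, third = codon.upper()
    return first in PURINES and third in PYRIMIDINES

if __name__ == "__main__":
    for codon in ("GAU", "AUG", "GCC", "UUU"):
        print(codon, is_rny(codon))
    # GAU True, AUG False (ends in a purine), GCC True, UUU False (starts with a pyrimidine)
```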

  15. Fusion safety codes International modeling with MELCOR and ATHENA- INTRA

    CERN Document Server

    Marshall, T; Topilski, L; Merrill, B

    2002-01-01

    For a number of years, the world fusion safety community has been involved in benchmarking their safety analyses codes against experiment data to support regulatory approval of a next step fusion device. This paper discusses the benchmarking of two prominent fusion safety thermal-hydraulic computer codes. The MELCOR code was developed in the US for fission severe accident safety analyses and has been modified for fusion safety analyses. The ATHENA code is a multifluid version of the US-developed RELAP5 code that is also widely used for fusion safety analyses. The ENEA Fusion Division uses ATHENA in conjunction with the INTRA code for its safety analyses. The INTRA code was developed in Germany and predicts containment building pressures, temperatures and fluid flow. ENEA employs the French-developed ISAS system to couple ATHENA and INTRA. This paper provides a brief introduction of the MELCOR and ATHENA-INTRA codes and presents their modeling results for the following breaches of a water cooling line into the...

  16. RHOCUBE: 3D density distributions modeling code

    Science.gov (United States)

    Nikutta, Robert; Agliozzo, Claudia

    2016-11-01

    RHOCUBE models 3D density distributions on a discrete Cartesian grid and their integrated 2D maps. It can be used for a range of applications, including modeling the electron number density in LBV shells and computing the emission measure. The RHOCUBE Python package provides several 3D density distributions, including a powerlaw shell, truncated Gaussian shell, constant-density torus, dual cones, and spiralling helical tubes, and can accept additional distributions. RHOCUBE provides convenient methods for shifts and rotations in 3D, and if necessary, an arbitrary number of density distributions can be combined into the same model cube and the integration ∫ dz performed through the joint density field.
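
    As a rough sketch of what a density cube plus line-of-sight integration looks like in practice (written independently with NumPy, not using the RHOCUBE package or its API), the example below fills a Cartesian grid with a truncated Gaussian shell and integrates it along z to obtain a 2D map.

```python
# Independent NumPy sketch (not the RHOCUBE API): sample a truncated Gaussian shell
# density on a Cartesian grid and integrate along z to get a 2D column-density map.
import numpy as np

def gaussian_shell(n=101, half_size=1.0, r0=0.6, sigma=0.1, r_min=0.3, r_max=0.9):
    ax = np.linspace(-half_size, half_size, n)
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    r = np.sqrt(x**2 + y**2 + z**2)
    rho = np.exp(-0.5 * ((r - r0) / sigma) ** 2)   # Gaussian shell peaked at radius r0
    rho[(r < r_min) | (r > r_max)] = 0.0           # truncation of the shell
    return rho, ax

if __name__ == "__main__":
    rho, ax = gaussian_shell()
    dz = ax[1] - ax[0]
    column_map = rho.sum(axis=2) * dz              # integral of rho dz along the z axis
    print(column_map.shape, column_map.max())
```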

  17. LMFBR models for the ORIGEN2 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.

    1983-06-01

    Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-{sup 233}U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.

  18. LMFBR models for the ORIGEN2 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.

    1981-10-01

    Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-{sup 233}U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.

  19. Information Theoretic Authentication and Secrecy Codes in the Splitting Model

    CERN Document Server

    Huber, Michael

    2011-01-01

    In the splitting model, information theoretic authentication codes allow non-deterministic encoding, that is, several messages can be used to communicate a particular plaintext. Certain applications require that the aspect of secrecy should hold simultaneously. Ogata-Kurosawa-Stinson-Saido (2004) have constructed optimal splitting authentication codes achieving perfect secrecy for the special case when the number of keys equals the number of messages. In this paper, we establish a construction method for optimal splitting authentication codes with perfect secrecy in the more general case when the number of keys may differ from the number of messages. To the best of our knowledge, this is the first result of this type.

  20. JPEG2000 COMPRESSION CODING USING HUMAN VISUAL SYSTEM MODEL

    Institute of Scientific and Technical Information of China (English)

    Xiao Jiang; Wu Chengke

    2005-01-01

    In order to apply the Human Visual System (HVS) model to the JPEG2000 standard, several implementation alternatives are discussed and a new scheme of visual optimization is introduced that modifies the slope of the rate-distortion curve. The novelty is that visual weighting is not applied by lifting the coefficients in the wavelet domain but is instead realized through code-stream organization. It retains all the features of Embedded Block Coding with Optimized Truncation (EBCOT), such as resolution progressiveness, good robustness against bit-error spread, and compatibility with lossless compression. Performing better than other methods, it keeps the shortest standard codestream and decompression time and provides VIsual Progressive (VIP) coding.

  1. Differences between the 1993 and 1995 CABO Model Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Conover, D.R.; Lucas, R.G.

    1995-10-01

    The Energy Policy Act of 1992 requires the US DOE to determine if changes to the Council of American Building Officials' (CABO) 1993 Model Energy Code (MEC) (CABO 1993), published in the 1995 edition of the MEC (CABO 1995), will improve energy efficiency in residential buildings. The DOE, the states, and others have expressed an interest in the differences between the 1993 and 1995 editions of the MEC. This report describes each change to the 1993 MEC, and its impact. Referenced publications are also listed along with discrepancies between code changes approved in the 1994 and 1995 code-change cycles and what actually appears in the 1995 MEC.

  2. Analysis of pion production data measured by HADES in proton-proton collisions at 1.25 GeV

    Science.gov (United States)

    Agakishiev, G.; Balanda, A.; Belver, D.; Belyaev, A.; Berger-Chen, J. C.; Blanco, A.; Böhmer, M.; Boyard, J. L.; Cabanelas, P.; Chernenko, S.; Dybczak, A.; Epple, E.; Fabbietti, L.; Fateev, O.; Finocchiaro, P.; Fonte, P.; Friese, J.; Fröhlich, I.; Galatyuk, T.; Garzón, J. A.; Gernhäuser, R.; Göbel, K.; Golubeva, M.; D.; Guber, F.; Gumberidze, M.; Heinz, T.; Hennino, T.; Holzmann, R.; Ierusalimov, A.; Iori, I.; Ivashkin, A.; Jurkovic, M.; Kämpfer, B.; Karavicheva, T.; Koenig, I.; Koenig, W.; Kolb, B. W.; Kornakov, G.; Kotte, R.; Krása, A.; Krizek, F.; Krücken, R.; Kuc, H.; Kühn, W.; Kugler, A.; Kurepin, A.; Ladygin, V.; Lalik, R.; Lang, S.; Lapidus, K.; Lebedev, A.; Liu, T.; Lopes, L.; Lorenz, M.; Maier, L.; Mangiarotti, A.; Markert, J.; Metag, V.; Michalska, B.; Michel, J.; Müntz, C.; Münzer, R.; Naumann, L.; Pachmayer, Y. C.; Palka, M.; Parpottas, Y.; Pechenov, V.; Pechenova, O.; Pietraszko, J.; Przygoda, W.; Ramstein, B.; Reshetin, A.; Rustamov, A.; Sadovsky, A.; Salabura, P.; Schmah, A.; Schwab, E.; Siebenson, J.; Sobolev, Yu. G.; Spataro, S.; Spruck, B.; Ströbele, H.; Stroth, J.; Sturm, C.; Tarantola, A.; Teilab, K.; Tlusty, P.; Traxler, M.; Trebacz, R.; Tsertos, H.; Vasiliev, T.; Wagner, V.; Weber, M.; Wendisch, C.; Wüstenfeld, J.; Yurevich, S.; Zanevsky, Y.; Sarantsev, A. V.; Nikonov, V. A.

    2015-10-01

    Baryon resonance production in proton-proton collisions at a kinetic beam energy of 1.25 GeV is investigated. The multi-differential data were measured by the HADES Collaboration. Exclusive channels with one pion in the final state (npπ+ and ppπ0) were subjected to extended studies based on various observables in the framework of a one-pion exchange model and with solutions obtained within the framework of a partial wave analysis (PWA) of the Bonn-Gatchina group. The results of the PWA confirm the dominant contribution of the Δ(1232), yet with a sizable impact of the N(1440) and non-resonant partial waves.

  3. Analysis of pion production data measured by HADES in proton-proton collisions at 1.25 GeV

    Energy Technology Data Exchange (ETDEWEB)

    Agakishiev, G.; Belyaev, A.; Chernenko, S.; Fateev, O.; Ierusalimov, A.; Ladygin, V.; Vasiliev, T.; Zanevsky, Y. [Joint Institute of Nuclear Research, Dubna (Russian Federation); Balanda, A.; Dybczak, A.; Michalska, B.; Palka, M.; Przygoda, W.; Salabura, P.; Trebacz, R. [Jagiellonian University of Cracow, Smoluchowski Institute of Physics, Krakow (Poland); Belver, D.; Cabanelas, P.; Garzon, J.A. [Univ. de Santiago de Compostela, LabCAF. F. Fisica, Santiago de Compostela (Spain); Berger-Chen, J.C.; Epple, E.; Fabbietti, L.; Lalik, R.; Lapidus, K.; Muenzer, R. [' ' Origin and Structure of the Universe' ' , Excellence Cluster, Garching (Germany); Technische Universitaet Muenchen, Physik Department E12, Garching (Germany); Blanco, A.; Fonte, P.; Lopes, L.; Mangiarotti, A. [Fisica Experimental de Particulas, LIP-Laboratorio de Instrumentacao e, Coimbra (Portugal); Boehmer, M.; Friese, J.; Gernhaeuser, R.; Jurkovic, M.; Kruecken, R.; Maier, L.; Siebenson, J.; Weber, M. [Technische Universitaet Muenchen, Physik Department E12, Garching (Germany); Boyard, J.L.; Hennino, T.; Liu, T.; Ramstein, B. [Univ. Paris-Sud, Universite Paris-Saclay, Institut de Physique Nucleaire, CNRS-IN2P3, Orsay Cedex (France); Finocchiaro, P.; Schmah, A.; Spataro, S. [Laboratori Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania (Italy); Froehlich, I.; Goebel, K.; Lorenz, M.; Markert, J.; Michel, J.; Muentz, C.; Pachmayer, Y.C.; Pechenova, O.; Rustamov, A.; Stroebele, H.; Tarantola, A.; Teilab, K. [Goethe-Universitaet, Institut fuer Kernphysik, Frankfurt (Germany); Galatyuk, T.; Gonzalez-Diaz, D.; Gumberidze, M.; Kornakov, G. [Technische Universitaet Darmstadt, Darmstadt (Germany); Golubeva, M.; Guber, F.; Ivashkin, A.; Karavicheva, T.; Kurepin, A.; Reshetin, A.; Sadovsky, A. [Russian Academy of Science, Institute for Nuclear Research, Moscow (Russian Federation); Heinz, T.; Holzmann, R.; Koenig, I.; Koenig, W.; Kolb, B.W.; Lang, S.; Pechenov, V.; Pietraszko, J.; Schwab, E.; Sturm, C.; Traxler, M.; Yurevich, S. [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Iori, I. [Sezione di Milano, Istituto Nazionale di Fisica Nucleare, Milano (Italy); Kaempfer, B.; Kotte, R.; Naumann, L.; Wendisch, C.; Wuestenfeld, J. [Helmholtz-Zentrum Dresden-Rossendorf, Institut fuer Strahlenphysik, Dresden (Germany); Krasa, A.; Krizek, F.; Kugler, A.; Sobolev, Yu.G.; Tlusty, P.; Wagner, V. [Academy of Sciences of Czech Republic, Nuclear Physics Institute, Rez (Czech Republic); Kuc, H. [Jagiellonian University of Cracow, Smoluchowski Institute of Physics, Krakow (Poland); Univ. Paris-Sud, Universite Paris-Saclay, Institut de Physique Nucleaire, CNRS-IN2P3, Orsay Cedex (France); Kuehn, W.; Metag, V.; Spruck, B. [Justus Liebig Universitaet Giessen, II.Physikalisches Institut, Giessen (Germany); Lebedev, A. [Institute of Theoretical and Experimental Physics, Moscow (Russian Federation); Parpottas, Y.; Tsertos, H. [University of Cyprus, Department of Physics, Nicosia (Cyprus); Stroth, J. [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Goethe-Universitaet, Institut fuer Kernphysik, Frankfurt (Germany); Sarantsev, A.V.; Nikonov, V.A. [PNPI, NRC ' ' Kurchatov Institute' ' , Gatchina (Russian Federation); Collaboration: HADES Collaboration

    2015-10-15

    Baryon resonance production in proton-proton collisions at a kinetic beam energy of 1.25 GeV is investigated. The multi-differential data were measured by the HADES Collaboration. Exclusive channels with one pion in the final state (npπ{sup +} and ppπ{sup 0}) were put to extended studies based on various observables in the framework of a one-pion exchange model and with solutions obtained within the framework of a partial wave analysis (PWA) of the Bonn-Gatchina group. The results of the PWA confirm the dominant contribution of the Δ(1232), yet with a sizable impact of the N (1440) and non-resonant partial waves. (orig.)

  4. Modeling Guidelines for Code Generation in the Railway Signaling Context

    Science.gov (United States)

    Ferrari, Alessio; Bacherini, Stefano; Fantechi, Alessandro; Zingoni, Niccolo

    2009-01-01

    Modeling guidelines constitute one of the fundamental cornerstones of Model Based Development. Their relevance is essential when dealing with code generation in the safety-critical domain. This article presents the experience of a railway signaling systems manufacturer on this issue. The introduction of Model-Based Development (MBD) and code generation in the industrial safety-critical sector created a crucial paradigm shift in the development process of dependable systems. While traditional software development focuses on the code, with MBD practices the focus shifts to model abstractions. The change has fundamental implications for safety-critical systems, which still need to guarantee a high degree of confidence also at code level. Usage of the Simulink/Stateflow platform for modeling, which is a de facto standard in control software development, does not by itself ensure production of high-quality dependable code. This issue has been addressed by companies through the definition of modeling rules imposing restrictions on the usage of design tool components, in order to enable production of qualified code. The MAAB Control Algorithm Modeling Guidelines (MathWorks Automotive Advisory Board) [3] are a well-established set of publicly available rules for modeling with Simulink/Stateflow. This set of recommendations has been developed by a group of OEMs and suppliers in the automotive sector with the objective of enforcing and easing the usage of the MathWorks tools within the automotive industry. The guidelines were published in 2001 and later revised in 2007 in order to integrate some additional rules developed by the Japanese division of MAAB [5]. The scope of the current edition of the guidelines ranges from model maintainability and readability to code generation issues. The rules are conceived as a reference baseline and therefore need to be tailored to comply with the characteristics of each industrial context. Customization of these

  5. Student Model Tools Code Release and Documentation

    DEFF Research Database (Denmark)

    Johnson, Matthew; Bull, Susan; Masci, Drew

    This document contains a wealth of information about the design and implementation of the Next-TELL open learner model. Information is included about the final specification (Section 3), the interfaces and features (Section 4), its implementation and technical design (Section 5) and also a summary...

  6. Code-to-Code Comparison, and Material Response Modeling of Stardust and MSL using PATO and FIAT

    Science.gov (United States)

    Omidy, Ali D.; Panerai, Francesco; Martin, Alexandre; Lachaud, Jean R.; Cozmuta, Ioana; Mansour, Nagi N.

    2015-01-01

    This report provides a code-to-code comparison between PATO, a recently developed high-fidelity material response code, and FIAT, NASA's legacy code for ablation response modeling. The goal is to demonstrate that FIAT and PATO generate the same results when using the same models. Test cases of increasing complexity are used, from both arc-jet testing and a flight experiment. When using exactly the same physical models, material properties and boundary conditions, the two codes give results that agree to within 2%. The minor discrepancy is attributed to the inclusion of the gas-phase heat capacity (cp) in the energy equation in PATO, and not in FIAT.

  7. Single-pion production in proton-proton collisions at 1.25 GeV: measurements by HADES and a PWA

    Directory of Open Access Journals (Sweden)

    Przygoda Witold

    2014-01-01

    Full Text Available We report on single-pion production in proton-proton collisions at a kinetic energy of 1.25 GeV based on data measured with HADES. The exclusive channels npπ+ and ppπ0 were studied simultaneously. A parametrization of the production cross sections of the one-pion final states by means of the resonance model has been obtained. Independently, the leading partial waves in the data were extracted within the framework of a partial wave analysis (PWA). Contributions from the production of the Δ(1232) and N(1440) intermediate states have been deduced.

  8. The JCSS probabilistic model code: Experience and recent developments

    NARCIS (Netherlands)

    Chryssanthopoulos, M.; Diamantidis, D.; Vrouwenvelder, A.C.W.M.

    2003-01-01

    The JCSS Probabilistic Model Code (JCSS-PMC) has been available for public use on the JCSS website (www.jcss.ethz.ch) for over two years. During this period, several examples have been worked out and new probabilistic models have been added. Since the engineering community has already been exposed t

  9. A Mathematical Model for Comparing Holland's Personality and Environmental Codes.

    Science.gov (United States)

    Kwak, Junkyu Christopher; Pulvino, Charles J.

    1982-01-01

    Presents a mathematical model utilizing three-letter codes of personality patterns determined from the Self Directed Search. This model compares personality types over time or determines relationships between personality types and person-environment interactions. This approach is consistent with Holland's theory yet more comprehensive than one- or…

  10. Model-building codes for membrane proteins.

    Energy Technology Data Exchange (ETDEWEB)

    Shirley, David Noyes; Hunt, Thomas W.; Brown, W. Michael; Schoeniger, Joseph S. (Sandia National Laboratories, Livermore, CA); Slepoy, Alexander; Sale, Kenneth L. (Sandia National Laboratories, Livermore, CA); Young, Malin M. (Sandia National Laboratories, Livermore, CA); Faulon, Jean-Loup Michel; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA)

    2005-01-01

    We have developed a novel approach to modeling the transmembrane spanning helical bundles of integral membrane proteins using only a sparse set of distance constraints, such as those derived from MS3-D, dipolar-EPR and FRET experiments. Algorithms have been written for searching the conformational space of membrane protein folds matching the set of distance constraints, which provides initial structures for local conformational searches. Local conformation search is achieved by optimizing these candidates against a custom penalty function that incorporates both measures derived from statistical analysis of solved membrane protein structures and distance constraints obtained from experiments. This results in refined helical bundles to which the interhelical loops and amino acid side-chains are added. Using a set of only 27 distance constraints extracted from the literature, our methods successfully recover the structure of dark-adapted rhodopsin to within 3.2 {angstrom} of the crystal structure.
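
    As a hedged illustration of scoring a fold against sparse distance constraints (not the authors' actual penalty function, whose statistical terms are not given here), the sketch below penalizes the squared violation of upper and lower distance bounds of the kind MS3-D, dipolar-EPR, or FRET experiments provide.

```python
# Illustrative penalty on sparse distance constraints (not the paper's actual scoring
# function): each constraint is (i, j, d_min, d_max) on residue coordinates, and the
# penalty is the squared amount by which the model distance leaves the allowed band.
import numpy as np

def distance_penalty(coords: np.ndarray, constraints) -> float:
    """coords: (N, 3) array of residue positions; constraints: list of (i, j, d_min, d_max)."""
    penalty = 0.0
    for i, j, d_min, d_max in constraints:
        d = np.linalg.norm(coords[i] - coords[j])
        if d < d_min:
            penalty += (d_min - d) ** 2
        elif d > d_max:
            penalty += (d - d_max) ** 2
    return penalty

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = rng.uniform(-10, 10, size=(20, 3))          # toy 20-residue candidate model
    constraints = [(0, 5, 8.0, 12.0), (3, 15, 15.0, 20.0), (7, 19, 5.0, 9.0)]
    print(f"penalty = {distance_penalty(model, constraints):.2f}")
```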

  11. Homer and the dog of Hades (Iliad VIII 368 and Odyssey XI 623)

    Directory of Open Access Journals (Sweden)

    H. Thiry

    1974-06-01

    Full Text Available The dog of Hades, later called Cerberus, is in Homer simply a dog guarding the house. It is neither a monstrous nor a deformed dog, but has the same characteristics as ordinary watch dogs in Greek houses.

  12. Multiview coding mode decision with hybrid optimal stopping model.

    Science.gov (United States)

    Zhao, Tiesong; Kwong, Sam; Wang, Hanli; Wang, Zhou; Pan, Zhaoqing; Kuo, C-C Jay

    2013-04-01

    In a generic decision process, optimal stopping theory aims to achieve a good tradeoff between decision performance and time consumed, with the advantages of theoretical decision-making and predictable decision performance. In this paper, optimal stopping theory is employed to develop an effective hybrid model for the mode decision problem, which aims to theoretically achieve a good tradeoff between the two interrelated measurements in mode decision, namely computational complexity reduction and rate-distortion degradation. The proposed hybrid model is implemented and examined with a multiview encoder. To support the model and further promote coding performance, the multiview coding mode characteristics, including predicted mode probability and estimated coding time, are jointly investigated with inter-view correlations. Exhaustive experimental results with a wide range of video resolutions reveal the efficiency and robustness of our method, with high decision accuracy, negligible computational overhead, and almost intact rate-distortion performance compared to the original encoder.
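
    The sketch below is a generic illustration of the kind of early-termination loop that optimal stopping gives rise to in mode decision; the candidate ordering, the constant threshold, and the cost values are invented for the example and do not reproduce the paper's derived stopping rule.

```python
# Generic illustration of early termination in mode decision (NOT the paper's derived
# optimal-stopping rule): candidate modes are tested in order of predicted probability,
# and the search stops once the best RD cost so far drops below a threshold, trading a
# small RD loss for a large reduction in the number of modes evaluated.

def choose_mode(candidates, rd_cost, stop_threshold):
    """candidates: modes sorted by predicted probability (most likely first).
    rd_cost: callable mode -> rate-distortion cost (the expensive evaluation).
    stop_threshold: callable index -> early-stop threshold at that position."""
    best_mode, best_cost, tested = None, float("inf"), 0
    for k, mode in enumerate(candidates):
        cost = rd_cost(mode)
        tested += 1
        if cost < best_cost:
            best_mode, best_cost = mode, cost
        if best_cost < stop_threshold(k):   # good enough: stop early
            break
    return best_mode, best_cost, tested

if __name__ == "__main__":
    costs = {"SKIP": 120.0, "INTER_16x16": 95.0, "INTER_8x8": 93.0, "INTRA": 140.0}
    mode, cost, tested = choose_mode(
        ["SKIP", "INTER_16x16", "INTER_8x8", "INTRA"],
        rd_cost=costs.__getitem__,
        stop_threshold=lambda k: 100.0,     # invented constant threshold
    )
    print(mode, cost, tested)               # INTER_16x16 95.0 2
```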

  13. Modelling spread of Bluetongue in Denmark: The code

    DEFF Research Database (Denmark)

    Græsbøll, Kaare

    This technical report was produced to make public the code produced as the main project of the PhD project by Kaare Græsbøll, with the title: "Modelling spread of Bluetongue and other vector borne diseases in Denmark and evaluation of intervention strategies".

  14. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    Directory of Open Access Journals (Sweden)

    W. Bastiaan Kleijn

    2005-06-01

    Full Text Available Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel coding.
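
    The sketch below shows only the encode-quantize-decode-invert structure described above, with a trivial orthonormal DCT standing in for the invertible auditory model; the transform choice, quantization step, and test signal are assumptions made purely for the illustration.

```python
# Structure of "model inversion" coding as described above, with a trivial invertible
# stand-in transform (an orthonormal DCT) in place of the paper's auditory model.
import numpy as np
from scipy.fft import dct, idct

def encode(signal, step=0.05):
    rep = dct(signal, norm="ortho")            # forward representation (stand-in model)
    return np.round(rep / step).astype(int)    # uniform quantization of the representation

def decode(indices, step=0.05):
    rep_hat = indices * step                   # dequantize
    return idct(rep_hat, norm="ortho")         # invert the model back to the acoustic domain

if __name__ == "__main__":
    t = np.linspace(0, 1, 512, endpoint=False)
    x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
    x_hat = decode(encode(x))
    print("reconstruction SNR [dB]:",
          10 * np.log10(np.sum(x**2) / np.sum((x - x_hat) ** 2)))
```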

  15. Measurement of the quasi free np → npπ+π− and np → ppπ−π0 reactions at 1.25 GeV With HADES

    Directory of Open Access Journals (Sweden)

    Kurilkin A.

    2014-01-01

    Full Text Available We present the results of two-pion production in tagged quasi-free np collisions at a deuteron incident beam energy of 1.25 GeV/c measured with the High-Acceptance Di-Electron Spectrometer (HADES) installed at GSI. The specific acceptance of HADES made it possible for the first time to obtain high-precision data on π+π− and π−π0 production in np collisions in a region corresponding to large transverse momenta of the secondary particles. The obtained differential cross section data provide strong constraints on the production mechanisms and on the various baryon resonance contributions (∆∆, N(1440), N(1520), ∆(1600)). The invariant mass and angular distributions from the np → npπ+π− and np → ppπ−π0 reactions are compared with different theoretical model predictions.

  16. Measurement of the quasi free np → npπ+π- and np → ppπ-π0 reactions at 1.25 GeV With HADES

    Science.gov (United States)

    Kurilkin, A.; Arnold, O.; Atomssa, E. T.; Behnke, C.; Belyaev, A.; Berger-Chen, J. C.; Biernat, J.; Blanco, A.; Blume, C.; Böhmer, M.; Bordalo, P.; Chernenko, S.; Deveaux, C.; Dybczak, A.; Epple, E.; Fabbietti, L.; Fateev, O.; Fonte, P.; Franco, C.; Friese, J.; Fröhlich, I.; Galatyuk, T.; Garzón, J. A.; Gill, K.; Golubeva, M.; González-Díaz, D.; Guber, F.; Gumberidze, M.; Harabasz, S.; Hennino, T.; Höhne, C.; Holzmann, R.; Ierusalimov, A.; Ivashkin, A.; Jurkovic, M.; Kämpfer, B.; Karavicheva, T.; Kardan, K.; Koenig, I.; Koenig, W.; Kolb, B. W.; Korcyl, G.; Kornakov, G.; Kotte, R.; Krása, A.; Krebs, E.; Krizek, F.; Kuc, H.; Kugler, A.; Kunz, T.; Kurepin, A.; Kurilkin, P.; Ladygin, V.; Lalik, R.; Lang, S.; Lapidus, K.; Lebedev, A.; Lopes, L.; Lorenz, M.; Maier, L.; Mangiarotti, A.; Markert, J.; Metag, V.; Michel, J.; Müntz, C.; Münzer, R.; Naumann, L.; Palka, M.; Pechenov, V.; Pechenova, O.; Petousis, V.; Pietraszko, J.; Przygoda, W.; Ramstein, B.; Rehnisch, L.; Reshetin, A.; Rost, A.; Rustamov, A.; Sadovsky, A.; Salabura, P.; Scheib, T.; Schmidt-Sommerfeld, K.; Schuldes, H.; Sellheim, P.; Siebenson, J.; Silva, L.; Sobolev, Yu. G.; Spataro, S.; Ströbele, H.; Stroth, J.; Strzempek, P.; Sturm, C.; Svoboda, O.; Tarantola, A.; Teilab, K.; Tlusty, P.; Traxler, M.; Tsertos, H.; Vasiliev, T.; Wagner, V.; Wendisch, C.; Wirth, J.; Wüstenfeld, J.; Yurevich, S.; Zanevsky, Y.

    2014-11-01

    We present the results of two-pion production in tagged quasi-free np collisions at a deuteron incident beam energy of 1.25 GeV/c measured with the High-Acceptance Di-Electron Spectrometer (HADES) installed at GSI. The specific acceptance of HADES made it possible for the first time to obtain high-precision data on π+π- and π-π0 production in np collisions in a region corresponding to large transverse momenta of the secondary particles. The obtained differential cross section data provide strong constraints on the production mechanisms and on the various baryon resonance contributions (∆∆, N(1440), N(1520), ∆(1600)). The invariant mass and angular distributions from the np → npπ+π- and np → ppπ-π0 reactions are compared with different theoretical model predictions.

  17. Verification and Validation of Heat Transfer Model of AGREE Code

    Energy Technology Data Exchange (ETDEWEB)

    Tak, N. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Seker, V.; Drzewiecki, T. J.; Downar, T. J. [Department of Nuclear Engineering and Radiological Sciences, Univ. of Michigan, Michigan (United States); Kelly, J. M. [US Nuclear Regulatory Commission, Washington (United States)

    2013-05-15

    The AGREE code was originally developed as a multi-physics simulation code to perform design and safety analysis of Pebble Bed Reactors (PBR). Currently, additional capability for the analysis of Prismatic Modular Reactor (PMR) cores is in progress. The newly implemented fluid model for a PMR core is based on a subchannel approach, which has been widely used in the analyses of light water reactor (LWR) cores. A hexagonal fuel (or graphite) block is discretized into triangular prism nodes having effective conductivities. Then, a meso-scale heat transfer model is applied to the unit cell geometry of a prismatic fuel block. Both unit cell geometries of multi-hole and pin-in-hole types of prismatic fuel blocks are considered in AGREE. The main objective of this work is to verify and validate the heat transfer model newly implemented for a PMR core in the AGREE code. The measured data from the HENDEL experiment were used for the validation of the heat transfer model for a pin-in-hole fuel block. However, the HENDEL tests were limited to steady-state conditions of pin-in-hole fuel blocks, and no experimental data are available regarding heat transfer in multi-hole fuel blocks. Therefore, numerical benchmarks using conceptual problems are considered to verify the heat transfer model of AGREE for multi-hole fuel blocks as well as transient conditions. The CORONA and GAMMA+ codes were used to compare the numerical results. In this work, the verification and validation study was performed for the heat transfer model of the AGREE code using the HENDEL experiment and the numerical benchmarks of selected conceptual problems. The results of the present work show that the heat transfer model of AGREE is accurate and reliable for prismatic fuel blocks. Further validation of AGREE is in progress for a whole-reactor problem using HTTR safety test data such as control rod withdrawal tests and loss-of-forced-convection tests.

  18. Code Shift: Grid Specifications and Dynamic Wind Turbine Models

    DEFF Research Database (Denmark)

    Ackermann, Thomas; Ellis, Abraham; Fortmann, Jens

    2013-01-01

    Grid codes (GCs) and dynamic wind turbine (WT) models are key tools to allow increasing renewable energy penetration without challenging security of supply. In this article, the state of the art and the further development of both tools are discussed, focusing on the European and North American...

  19. Study of an electromagnetic calorimeter for HADES (High Acceptance Di-Electron Spectrometer); Etude d'un calorimetre electromagnetique pour HADES (High Acceptance Di-Electron Spectrometer)

    Energy Technology Data Exchange (ETDEWEB)

    Pienne, Cyril [Universite Blaise Pascal, Clermont-Ferrand 2, (CNRS), 63 - Aubiere (France). U.F.R. de Recherche Scientifique et Technique

    1996-11-27

    The physics context of this work is the study of heavy ion collisions at relativistic energies, where dielectrons are chosen as a probe of the hot and dense nuclear matter produced. The experimental set-up under construction, the HADES spectrometer, is designed to study the decays of the {rho}, {omega}, {phi} mesons into e{sup +}e{sup -} pairs inside the excited medium. The goal is to show that the theoretically predicted restoration of chiral symmetry manifests itself through the in-medium properties of particles, mesons in particular. A further goal is the study of the electromagnetic form factors of hadrons involved in the production of dileptons, in particular as a test of the vector dominance model (VDM). In the case of the {omega}, its Dalitz decay is not well understood, and the use of a calorimeter could help to solve this mystery. In addition, a calorimeter could provide a redundant characterisation of electrons and positrons. Our work consisted of studying two materials: lead glass and lead tungstate. In the first case, only simulations have been made and led to the following conclusions: - energy resolution ({sigma}{sub E}/E) = 3.89/{radical}E+5.2(%); - spatial resolution ({sigma}{sub x,y}) = 0.14/{radical}E+0.73(cm); - possibility of e/h and e/{mu} separation; - accurate study of the {omega} form factor via its Dalitz decay. The study of lead tungstate began with tests of the quality and homogeneity of crystal samples in order to check that they have similar properties. Experiments were performed at the MAMI microtron in Mainz (Germany) with electrons of 180, 450, 855 MeV energy and yielded the following results, not obtained before: - energy resolution ({sigma}{sub E}/E) = 2.45/{radical}E+97(%); - spatial resolution ({approx_equal} 0.3 cm); - time resolution ({sigma}){approx}1.41 ps at 8.55 MeV for T = 20 deg. C. (author) 52 refs., 90 figs., 30 tabs.

  20. Modelling binary rotating stars by new population synthesis code BONNFIRES

    CERN Document Server

    Lau, Herbert H B; Schneider, Fabian R N

    2013-01-01

    BONNFIRES, a new generation population synthesis code, can calculate nuclear reactions, various mixing processes and binary interactions in a timely fashion. We use this new population synthesis code to study the interplay between binary mass transfer and rotation. We aim to compare theoretical models with observations, in particular for the surface nitrogen abundance and rotational velocity. Preliminary results show that binary interactions may explain the formation of nitrogen-rich slow rotators and nitrogen-poor fast rotators, but more work needs to be done to estimate whether the observed frequencies of those stars can be matched.

  1. Modeling of Anomalous Transport in Tokamaks with FACETS code

    Science.gov (United States)

    Pankin, A. Y.; Batemann, G.; Kritz, A.; Rafiq, T.; Vadlamani, S.; Hakim, A.; Kruger, S.; Miah, M.; Rognlien, T.

    2009-05-01

    The FACETS code, a whole-device integrated modeling code that self-consistently computes plasma profiles for the plasma core and edge in tokamaks, has been recently developed as a part of the SciDAC project for core-edge simulations. A choice of transport models is available in FACETS through the FMCFM interface [1]. Transport models included in FMCFM have specific ranges of applicability, which can limit their use to parts of the plasma. In particular, the GLF23 transport model does not include the resistive ballooning effects that can be important in the tokamak pedestal region and GLF23 typically under-predicts the anomalous fluxes near the magnetic axis [2]. The TGLF and GYRO transport models have similar limitations [3]. A combination of transport models that covers the entire discharge domain is studied using FACETS in a realistic tokamak geometry. Effective diffusivities computed with the FMCFM transport models are extended to the region near the separatrix to be used in the UEDGE code within FACETS. [1] S. Vadlamani et al. (2009), First time-dependent transport simulations using GYRO and NCLASS within FACETS (this meeting). [2] T. Rafiq et al. (2009), Simulation of electron thermal transport in H-mode discharges, submitted to Phys. Plasmas. [3] C. Holland et al. (2008), Validation of gyrokinetic transport simulations using DIII-D core turbulence measurements, Proc. of IAEA FEC (Switzerland, 2008).

  2. Model classification rate control algorithm for video coding

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A model classification rate control method for video coding is proposed. The macro-blocks are classified according to their prediction errors, and different parameters are used in the rate-quantization and distortion-quantization models. The different model parameters are calculated from the previous frame of the same type in the process of coding. These models are used to estimate the relations among rate, distortion and quantization of the current frame. Further steps, such as R-D optimization based quantization adjustment and smoothing of the quantization of adjacent macroblocks, are used to improve the quality. The results of the experiments prove that the technique is effective and can be realized easily. The method presented in the paper can be a good approach for MPEG and H.264 rate control.
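
    As a hedged sketch of the kind of rate-quantization model such schemes maintain per macroblock class, the example below fits the widely used quadratic form R(Q) = a/Q + b/Q^2 by least squares from previously coded blocks of a class; the model form, update rule, and numbers are illustrative and not necessarily those of the paper.

```python
# Illustrative per-class quadratic rate-quantization model, R(Q) = a/Q + b/Q^2,
# fitted by least squares from previously coded macroblocks of the same class.
# The model form and update rule are generic textbook choices, not the paper's.
import numpy as np

class ClassRQModel:
    def __init__(self):
        self.samples = []                    # list of (Q, bits) observed for this class

    def add_sample(self, q, bits):
        self.samples.append((q, bits))

    def fit(self):
        q = np.array([s[0] for s in self.samples], dtype=float)
        r = np.array([s[1] for s in self.samples], dtype=float)
        A = np.column_stack([1.0 / q, 1.0 / q**2])
        (a, b), *_ = np.linalg.lstsq(A, r, rcond=None)
        return a, b

    def quantizer_for_budget(self, target_bits, q_range=range(2, 52)):
        a, b = self.fit()
        # pick the smallest Q whose predicted rate fits within the budget
        for q in q_range:
            if a / q + b / q**2 <= target_bits:
                return q
        return max(q_range)

if __name__ == "__main__":
    model = ClassRQModel()
    for q, bits in [(10, 1200.0), (16, 700.0), (24, 430.0), (32, 300.0)]:
        model.add_sample(q, bits)
    print("Q for a 500-bit budget:", model.quantizer_for_budget(500.0))
```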

  3. NLTE solar irradiance modeling with the COSI code

    CERN Document Server

    Shapiro, A I; Schoell, M; Haberreiter, M; Rozanov, E

    2010-01-01

    Context. The solar irradiance is known to change on time scales of minutes to decades, and it is suspected that its substantial fluctuations are partially responsible for climate variations. Aims. We are developing a solar atmosphere code that allows the physical modeling of the entire solar spectrum composed of quiet Sun and active regions. This code is a tool for modeling the variability of the solar irradiance and understanding its influence on Earth. Methods. We exploit further development of the radiative transfer code COSI that now incorporates the calculation of molecular lines. We validated COSI under the conditions of local thermodynamic equilibrium (LTE) against the synthetic spectra calculated with the ATLAS code. The synthetic solar spectra were also calculated in non-local thermodynamic equilibrium (NLTE) and compared to the available measured spectra. In doing so we have defined the main problems of the modeling, e.g., the lack of opacity in the UV part of the spectrum and the inconsistency in...

  4. A semianalytic Monte Carlo code for modelling LIDAR measurements

    Science.gov (United States)

    Palazzi, Elisa; Kostadinov, Ivan; Petritoli, Andrea; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Premuda, Margherita; Giovanelli, Giorgio

    2007-10-01

    LIDAR (LIght Detection and Ranging) is an optical active remote sensing technology with many applications in atmospheric physics. Modelling of LIDAR measurements appears to be a useful approach for evaluating the effects of various environmental variables and scenarios as well as of different measurement geometries and instrumental characteristics. In this regard a Monte Carlo simulation model can provide a reliable answer to these important requirements. A semianalytic Monte Carlo code for modelling LIDAR measurements has been developed at ISAC-CNR. The backscattered laser signal detected by the LIDAR system is calculated in the code taking into account the contributions due to the main atmospheric molecular constituents and aerosol particles through processes of single and multiple scattering. The contributions of molecular absorption and of ground and cloud reflection are evaluated too. The code can perform simulations of both monostatic and bistatic LIDAR systems. To enhance the efficiency of the Monte Carlo simulation, analytical estimates and expected-value calculations are performed. Artificial devices (such as forced collision, local forced collision, splitting and Russian roulette) are moreover provided by the code, enabling the user to drastically reduce the variance of the calculation.
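
    To illustrate one of the variance-reduction devices listed above, the sketch below applies Russian roulette to low-weight photon packets in a toy scattering loop; the albedo, thresholds, and loop structure are invented for the illustration and do not reproduce the ISAC-CNR code.

```python
# Toy photon-weight loop illustrating Russian roulette, one of the variance-reduction
# devices mentioned above. Albedo, thresholds and geometry are invented for the example.
import random

def trace_photon(albedo=0.9, w_min=0.01, survival_p=0.1, max_events=10_000):
    """Return the weight history of one photon packet; absorption is handled by
    multiplying the weight by the single-scattering albedo at each event."""
    weight, history = 1.0, []
    for _ in range(max_events):
        weight *= albedo                       # implicit-capture absorption
        if weight < w_min:                     # Russian roulette on low-weight packets
            if random.random() < survival_p:
                weight /= survival_p           # survivor carries the lost weight (unbiased)
            else:
                break                          # packet terminated
        history.append(weight)
    return history

if __name__ == "__main__":
    random.seed(1)
    lengths = [len(trace_photon()) for _ in range(1000)]
    print("mean number of scattering events per packet:", sum(lengths) / len(lengths))
```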

  5. Enhancements to the SSME transfer function modeling code

    Science.gov (United States)

    Irwin, R. Dennis; Mitchell, Jerrel R.; Bartholomew, David L.; Glenn, Russell D.

    1995-01-01

    This report details the results of a one year effort by Ohio University to apply the transfer function modeling and analysis tools developed under NASA Grant NAG8-167 (Irwin, 1992), (Bartholomew, 1992) to attempt the generation of Space Shuttle Main Engine High Pressure Turbopump transfer functions from time domain data. In addition, new enhancements to the transfer function modeling codes which enhance the code functionality are presented, along with some ideas for improved modeling methods and future work. Section 2 contains a review of the analytical background used to generate transfer functions with the SSME transfer function modeling software. Section 2.1 presents the 'ratio method' developed for obtaining models of systems that are subject to single unmeasured excitation sources and have two or more measured output signals. Since most of the models developed during the investigation use the Eigensystem Realization Algorithm (ERA) for model generation, Section 2.2 presents an introduction of ERA, and Section 2.3 describes how it can be used to model spectral quantities. Section 2.4 details the Residue Identification Algorithm (RID) including the use of Constrained Least Squares (CLS) and Total Least Squares (TLS). Most of this information can be found in the report (and is repeated for convenience). Section 3 chronicles the effort of applying the SSME transfer function modeling codes to the a51p394.dat and a51p1294.dat time data files to generate transfer functions from the unmeasured input to the 129.4 degree sensor output. Included are transfer function modeling attempts using five methods. The first method is a direct application of the SSME codes to the data files and the second method uses the underlying trends in the spectral density estimates to form transfer function models with less clustering of poles and zeros than the models obtained by the direct method. In the third approach, the time data is low pass filtered prior to the modeling process in an

  6. Video Coding and Modeling with Applications to ATM Multiplexing

    Science.gov (United States)

    Nguyen, Hien

    A new vector quantization (VQ) coding method based on optimized concentric shell partitioning of the image space is proposed. The advantages of using the concentric shell partition vector quantizer (CSPVQ) are that it is very fast and that the image patterns found in each different subspace can be more effectively coded by using a codebook that is best matched to that particular subspace. For intra-frame coding, the CSPVQ is shown to have the same performance, if not better, than the optimized gain-shape VQ in terms of encoded picture quality, while it clearly surpasses the gain-shape VQ in terms of computational complexity. A variable bit rate (VBR) video coder for moving video is then proposed where the idea of CSPVQ is coupled with the idea of regular quadtree decomposition to further reduce the bit rate of the encoded picture sequence. The usefulness of a quadtree coding technique comes from the fact that different homogeneous regions occurring within an image can be compactly represented by various nodes in a quadtree. It is found that this image representation technique is particularly useful in providing a low bit rate video encoder without compromising the image quality when it is used in conjunction with the CSPVQ. The characteristics of the VBR coder's output as applied to ATM transmission are investigated. Three video models are used to study the performance of the ATM multiplexer. These models are the auto regressive (AR) model, the auto regressive hidden Markov model (AR-HMM), and the fluid flow uniform arrival and service (UAS) model. The AR model is allowed to have arbitrary order and is used to model a video source which has a constant amount of motion, that is, a stationary video source. The AR-HMM is a more general video model based on the idea of the auto regressive hidden Markov chain formulated by Baum and is used to describe highly non-stationary sources. Hence, it is expected that the AR-HMM model may also be used to represent a video
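
    As a hedged illustration of the simplest of the three traffic models named above, the sketch below generates a synthetic per-frame bit-rate trace from a first-order autoregressive model of the classical form X(n) = a*X(n-1) + b*e(n); the coefficients are invented for the example and are not fitted to the coder described here.

```python
# First-order autoregressive (AR(1)) model of per-frame VBR video bit rate,
# X(n) = a*X(n-1) + b*e(n) with e ~ N(mean, 1). Coefficients are illustrative only.
import numpy as np

def ar1_bitrate_trace(n_frames=1000, a=0.88, b=0.12, mean_rate=3.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n_frames)
    x[0] = mean_rate
    for n in range(1, n_frames):
        x[n] = a * x[n - 1] + b * rng.normal(loc=mean_rate, scale=1.0)
    return np.clip(x, 0.0, None)              # bit rates cannot be negative

if __name__ == "__main__":
    trace = ar1_bitrate_trace()
    print(f"mean = {trace.mean():.2f} Mbit/s, std = {trace.std():.2f} Mbit/s")
```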

  7. Discovering binary codes for documents by learning deep generative models.

    Science.gov (United States)

    Hinton, Geoffrey; Salakhutdinov, Ruslan

    2011-01-01

    We describe a deep generative model in which the lowest layer represents the word-count vector of a document and the top layer represents a learned binary code for that document. The top two layers of the generative model form an undirected associative memory and the remaining layers form a belief net with directed, top-down connections. We present efficient learning and inference procedures for this type of generative model and show that it allows more accurate and much faster retrieval than latent semantic analysis. By using our method as a filter for a much slower method called TF-IDF we achieve higher accuracy than TF-IDF alone and save several orders of magnitude in retrieval time. By using short binary codes as addresses, we can perform retrieval on very large document sets in a time that is independent of the size of the document set using only one word of memory to describe each document.
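
    To illustrate retrieval with short binary codes used as addresses, the sketch below replaces the learned deep generative encoder with a random hyperplane projection of word-count vectors (an assumption made purely for illustration) and retrieves documents whose codes lie within a small Hamming distance of the query code.

```python
# Retrieval with short binary codes used as addresses, as described above. A random
# hyperplane projection stands in for the learned deep generative encoder of the paper.
import numpy as np

def binary_codes(counts: np.ndarray, n_bits=16, seed=0) -> np.ndarray:
    """Map word-count vectors (n_docs, vocab) to n_bits-bit codes via random projections."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(counts.shape[1], n_bits))
    return (np.log1p(counts) @ planes > 0).astype(np.uint8)

def retrieve(query_code, codes, max_hamming=2):
    """Indices of documents whose code is within max_hamming bits of the query code."""
    dist = (codes != query_code).sum(axis=1)
    return np.flatnonzero(dist <= max_hamming)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    docs = rng.poisson(0.3, size=(500, 2000))         # toy word-count vectors
    codes = binary_codes(docs)
    hits = retrieve(codes[0], codes)
    print("documents within Hamming radius 2 of doc 0:", hits[:10])
```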

  8. Using cryptology models for protecting PHP source code

    Science.gov (United States)

    Jevremović, Aleksandar; Ristić, Nenad; Veinović, Mladen

    2013-10-01

    Protecting PHP scripts from unwanted use, copying and modification is a big issue today. Existing solutions at the source code level mostly work as obfuscators; they are free, but they do not provide any serious protection. Solutions that encode opcode are more secure, but they are commercial and require a closed-source proprietary PHP interpreter extension. Additionally, encoded opcode is not compatible with future versions of interpreters, which implies re-buying encoders from the authors. Finally, if the extension source code is compromised, all scripts encoded with that solution are compromised too. In this paper, we present a new model for a free and open-source PHP script protection solution. The protection level provided by the proposed solution is equal to the protection level of commercial solutions. The model is based on conclusions drawn from the use of standard cryptology models for analysis of the strengths and weaknesses of the existing solutions, when script protection is viewed as a secure communication channel in cryptology.

  9. A spectral synthesis code for rapid modelling of supernovae

    CERN Document Server

    Kerzendorf, Wolfgang E

    2014-01-01

    We present TARDIS - an open-source code for rapid spectral modelling of supernovae (SNe). Our goal is to develop a tool that is sufficiently fast to allow exploration of the complex parameter spaces of models for SN ejecta. This can be used to analyse the growing number of high-quality SN spectra being obtained by transient surveys. The code uses Monte Carlo methods to obtain a self-consistent description of the plasma state and to compute a synthetic spectrum. It has a modular design to facilitate the implementation of a range of physical approximations that can be compared to assess both accuracy and computational expediency. This will allow users to choose a level of sophistication appropriate for their application. Here, we describe the operation of the code and make comparisons with alternative radiative transfer codes of differing levels of complexity (SYN++, PYTHON, and ARTIS). We then explore the consequence of adopting simple prescriptions for the calculation of atomic excitation, focussing on four sp...

  10. Transform Coding for Point Clouds Using a Gaussian Process Model.

    Science.gov (United States)

    De Queiroz, Ricardo; Chou, Philip A

    2017-04-28

    We propose using stationary Gaussian Processes (GPs) to model the statistics of the signal on points in a point cloud, which can be considered samples of a GP at the positions of the points. Further, we propose using Gaussian Process Transforms (GPTs), which are Karhunen-Loève transforms of the GP, as the basis of transform coding of the signal. Focusing on colored 3D point clouds, we propose a transform coder that breaks the point cloud into blocks, transforms the blocks using GPTs, and entropy codes the quantized coefficients. The GPT for each block is derived from both the covariance function of the GP and the locations of the points in the block, which are separately encoded. The covariance function of the GP is parameterized, and its parameters are sent as side information. The quantized coefficients are sorted by eigenvalues of the GPTs, binned, and encoded using an arithmetic coder with bin-dependent Laplacian models whose parameters are also sent as side information. Results indicate that transform coding of 3D point cloud colors using the proposed GPT and entropy coding achieves superior compression performance on most of our data sets.
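    The core of the transform can be sketched directly from the description above: the GP covariance evaluated at the point locations of a block is eigendecomposed, and its eigenvectors form the GPT used to decorrelate the color signal. The Python sketch below assumes an RBF covariance with arbitrary parameters and omits the entropy coding and side-information steps.

        import numpy as np

        def gpt_basis(points, length_scale=1.0, variance=1.0, nugget=1e-6):
            """Karhunen-Loeve (GPT) basis for one block: eigenvectors of the GP covariance
            evaluated at the 3D point locations (RBF kernel assumed for illustration)."""
            d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
            cov = variance * np.exp(-0.5 * d2 / length_scale**2) + nugget * np.eye(len(points))
            eigvals, eigvecs = np.linalg.eigh(cov)
            order = np.argsort(eigvals)[::-1]          # sort by decreasing eigenvalue
            return eigvals[order], eigvecs[:, order]

        def transform_block(points, colors, step=8.0):
            """Transform the per-point color attributes of a block and quantize the coefficients."""
            _, basis = gpt_basis(points)
            coeffs = basis.T @ colors                  # decorrelated coefficients per channel
            q = np.round(coeffs / step)                # uniform quantization; entropy coding omitted
            recon = basis @ (q * step)
            return q, recon

        rng = np.random.default_rng(1)
        pts = rng.uniform(0, 16, size=(64, 3))         # toy block of 64 points
        col = rng.uniform(0, 255, size=(64, 3))        # RGB attributes
        q, recon = transform_block(pts, col)
        print("max reconstruction error:", np.abs(recon - col).max())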

  11. The 2010 fib Model Code for Structural Concrete: A new approach to structural engineering

    NARCIS (Netherlands)

    Walraven, J.C.; Bigaj-Van Vliet, A.

    2011-01-01

    The fib Model Code is a recommendation for the design of reinforced and prestressed concrete which is intended to be a guiding document for future codes. Model Codes have been published before, in 1978 and 1990. The draft for fib Model Code 2010 was published in May 2010. The most important new elem

  12. Improvement of Basic Fluid Dynamics Models for the COMPASS Code

    Science.gov (United States)

    Zhang, Shuai; Morita, Koji; Shirakawa, Noriyuki; Yamamoto, Yuichi

    The COMPASS code is a new next generation safety analysis code to provide local information for various key phenomena in core disruptive accidents of sodium-cooled fast reactors, which is based on the moving particle semi-implicit (MPS) method. In this study, improvement of basic fluid dynamics models for the COMPASS code was carried out and verified with fundamental verification calculations. A fully implicit pressure solution algorithm was introduced to improve the numerical stability of MPS simulations. With a newly developed free surface model, numerical difficulty caused by poor pressure solutions is overcome by involving free surface particles in the pressure Poisson equation. In addition, applicability of the MPS method to interactions between fluid and multi-solid bodies was investigated in comparison with dam-break experiments with solid balls. It was found that the PISO algorithm and the free surface model make simulations with the passively moving solid model numerically stable. The characteristic behavior of solid balls was successfully reproduced by the present numerical simulations.

  13. New Mechanical Model for the Transmutation Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Gregory K. Miller

    2008-04-01

    A new mechanical model has been developed for implementation into the TRU fuel performance code. The new model differs from the existing FRAPCON 3 model, which it is intended to replace, in that it will include structural deformations (elasticity, plasticity, and creep) of the fuel. Also, the plasticity algorithm is based on the “plastic strain–total strain” approach, which should allow for more rapid and assured convergence. The model treats three situations relative to interaction between the fuel and cladding: (1) an open gap between the fuel and cladding, such that there is no contact, (2) contact between the fuel and cladding where the contact pressure is below a threshold value, such that axial slippage occurs at the interface, and (3) contact between the fuel and cladding where the contact pressure is above a threshold value, such that axial slippage is prevented at the interface. The first stage of development of the model included only the fuel. In this stage, results obtained from the model were compared with those obtained from finite element analysis using ABAQUS on a problem involving elastic, plastic, and thermal strains. Results from the two analyses showed essentially exact agreement through both loading and unloading of the fuel. After the cladding and fuel/clad contact were added, the model demonstrated expected behavior through all potential phases of fuel/clad interaction, and convergence was achieved without difficulty in all plastic analysis performed. The code is currently in stand alone form. Prior to implementation into the TRU fuel performance code, creep strains will have to be added to the model. The model will also have to be verified against an ABAQUS analysis that involves contact between the fuel and cladding.
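    The three fuel/cladding interaction situations enumerated above amount to a simple state selection, sketched below in Python; the threshold pressure and all names are illustrative, not values or identifiers from the code itself.

        from enum import Enum

        class ContactState(Enum):
            OPEN_GAP = 1        # no fuel/cladding contact
            SLIDING = 2         # contact, axial slippage occurs at the interface
            LOCKED = 3          # contact, axial slippage prevented

        def contact_state(gap_width, contact_pressure, p_threshold=1.0e6):
            """Select the mechanical interface condition between fuel and cladding.

            gap_width        : radial gap (m); > 0 means no contact
            contact_pressure : interface pressure (Pa) when the gap is closed
            p_threshold      : illustrative slippage threshold (Pa), not a code value
            """
            if gap_width > 0.0:
                return ContactState.OPEN_GAP
            if contact_pressure < p_threshold:
                return ContactState.SLIDING
            return ContactState.LOCKED

        print(contact_state(5.0e-6, 0.0))        # open gap
        print(contact_state(0.0, 5.0e5))         # contact with axial slippage
        print(contact_state(0.0, 2.0e6))         # locked interface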

  14. Measurement of charged pions in {sup 12}C+{sup 12}C collisions at 1A GeV and 2A GeV with HADES

    Energy Technology Data Exchange (ETDEWEB)

    Agakishiev, G.; Destefanis, M.; Gilardi, C.; Kirschner, D.; Kuehn, W.; Lange, J.S.; Metag, V.; Novotny, R.; Pechenov, V.; Pechenova, O.; Perez Cavalcanti, T.; Spataro, S.; Spruck, B. [Justus Liebig Universitaet Giessen, II. Physikalisches Institut, Giessen (Germany); Agodi, C.; Coniglione, R.; Finocchiaro, P.; Maiolino, C.; Piattelli, P.; Sapienza, P. [Laboratori Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania (Italy); Balanda, A.; Kozuch, A.; Przygoda, W. [Jagiellonian University of Cracow, Smoluchowski Institute of Physics, Krakow (Poland); Panstwowa Wyzsza Szkola Zawodowa, Nowy Sacz (Poland); Bellia, G. [Laboratori Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania (Italy); Universita di Catania, Dipartimento di Fisica e Astronomia, Catania (Italy); Belver, D.; Cabanelas, P.; Duran, I.; Garzon, J.A.; Lamas-Valverde, J.; Marin, J. [University of Santiago de Compostela, Departamento de Fisica de Particulas, Santiago de Compostela (Spain); Belyaev, A.; Chernenko, S.; Fateev, O.; Ierusalimov, A.; Zanevsky, Y. [Joint Institute of Nuclear Research, Dubna (Russian Federation); Bielcik, J.; Braun-Munzinger, P.; Galatyuk, T.; Gonzalez-Diaz, D.; Heinz, T.; Holzmann, R.; Koenig, I.; Koenig, W.; Kolb, B.W.; Lang, S.; Muench, M.; Palka, M.; Pietraszko, J.; Rustamov, A.; Schroeder, C.; Schwab, E.; Simon, R.S.; Traxler, M.; Yurevich, S.; Zumbruch, P. [GSI Helmholtzzentrum fuer Schwerionenforschung, Darmstadt (Germany); Blanco, A.; Lopes, L.; Mangiarotti, A. [LIP-Laboratorio de Instrumentacao e Fisica Experimental de Particulas, Coimbra (Portugal); Bortolotti, A.; Michalska, B. [Sezione di Milano, Istituto Nazionale di Fisica Nucleare, Milano (Italy); Boyard, J.L.; Hennino, T.; Moriniere, E.; Ramstein, B.; Roy-Stephan, M.; Sudol, M. [CNRS/IN2P3 - Universite Paris Sud, Institut de Physique Nucleaire, Orsay Cedex (France); Christ, T.; Eberl, T.; Fabbietti, L.; Friese, J.; Gernhaeuser, R.; Jurkovic, M.; Kruecken, R. [and others

    2009-04-15

    We present the results of a study of charged-pion production in {sup 12}C+{sup 12}C collisions at incident beam energies of 1A GeV and 2A GeV using the HADES spectrometer at GSI. The main emphasis of the HADES program is on the dielectron signal from the early phase of the collision. Here, however, we discuss the data with respect to the emission of charged hadrons, specifically the production of {pi}{sup {+-}} mesons, which are related to neutral pions representing a dominant contribution to the dielectron yield. We have performed the first large-angular-range measurement of the distribution of {pi}{sup {+-}} mesons for the {sup 12}C+{sup 12}C collision system covering a fairly large rapidity interval. The pion yields, transverse-mass and angular distributions are compared with calculations done within a transport model, as well as with existing data from other experiments. The anisotropy of pion production is systematically analyzed. (orig.)

  15. Galactic Cosmic Ray Event-Based Risk Model (GERM) Code

    Science.gov (United States)

    Cucinotta, Francis A.; Plante, Ianik; Ponomarev, Artem L.; Kim, Myung-Hee Y.

    2013-01-01

    This software describes the transport and energy deposition of galactic cosmic rays passing through astronaut tissues during space travel, or of heavy ion beams in patients in cancer therapy. Space radiation risk is a probability distribution, and time-dependent biological events must be accounted for in the physical description of space radiation transport in tissues and cells. A stochastic model can calculate the probability density directly without unverified assumptions about the shape of the probability density function. The prior art of transport codes calculates the average flux and dose of particles behind spacecraft and tissue shielding. Because of the signaling times for activation and relaxation in the cell and tissue, a transport code must describe temporal and microspatial density functions to correlate DNA and oxidative damage with non-targeted effects (signaling, bystander effects, etc.). These effects are ignored or impossible to capture in the prior art. The GERM code provides scientists with data interpretation of experiments; modeling of the beam line, the shielding of target samples, and sample holders; and estimation of basic physical and biological outputs of their experiments. For mono-energetic ion beams, basic physical and biological properties are calculated for a selected ion type, such as kinetic energy, mass, charge number, absorbed dose, or fluence. Evaluated quantities are linear energy transfer (LET), range (R), absorption and fragmentation cross-sections, and the probability of nuclear interactions after 1 or 5 cm of water equivalent material. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution for a specified cellular area, cell survival curves, and DNA damage yields per cell. Also, the GERM code calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle in a selected material. The GERM code makes the numerical estimates of basic
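    One of the biophysical outputs listed above, the Poisson distribution of particle traversals for a specified cellular area, follows directly from the fluence. The short Python sketch below assumes the mean number of hits per cell equals fluence times area, with arbitrary example values; it is an illustration, not an excerpt of GERM.

        import math

        def traversal_probabilities(fluence_per_cm2, area_um2, k_max=5):
            """Poisson probabilities of k ion traversals through a cell of given area.

            fluence_per_cm2 : particle fluence (ions/cm^2)
            area_um2        : sensitive cell area (um^2); 1 um^2 = 1e-8 cm^2
            """
            mean_hits = fluence_per_cm2 * area_um2 * 1e-8
            probs = [math.exp(-mean_hits) * mean_hits**k / math.factorial(k)
                     for k in range(k_max + 1)]
            return mean_hits, probs

        mean_hits, probs = traversal_probabilities(fluence_per_cm2=2e6, area_um2=100.0)
        print(f"mean hits per cell = {mean_hits:.2f}")
        print(f"P(0 hits) = {probs[0]:.3f}, P(>=1 hit) = {1 - probs[0]:.3f}")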

  16. Studying ρ-N couplings with HADES in pion-induced reactions

    Science.gov (United States)

    Scozzi, Federico

    2016-11-01

    The High-Acceptance Di-Electron Spectrometer (HADES) operates at the GSI Helmholtzzentrum für Schwerionenforschung in Darmstadt with pion, proton and heavy-ion beams provided by the synchrotron SIS18. In summer 2014, HADES took data using a pion beam on carbon and polyethylene targets. A large part of the data was taken at a pion beam momentum of 0.69 GeV/c in order to explore di-pion and di-electron production in the second resonance region and the sub-threshold coupling of the ρ to baryonic resonances. In this contribution, lepton identification will be discussed, as well as the purity of the reconstructed e+e- pairs. Finally, we will show the preliminary di-electron raw spectra.

  17. Mosaic diamond based detector for MIPs detection, T0 determination and triggering in HADES

    Energy Technology Data Exchange (ETDEWEB)

    Pietraszko, Jerzy; Koenig, Wolfgang [GSI Helmholtzzentum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Collaboration: HADES-Collaboration

    2015-07-01

    CVD-based diamond detectors were already successfully used for heavy-ion (HI) detection in HADES in 2001. In the following experiments the polycrystalline diamond material showed very good performance (time resolution below 50 ps sigma) and stable long-term operation. Detection of minimum ionising particles (MIPs) by means of diamond detectors is a challenging task, mainly because of the very small energy deposit in the diamond material. In this case single-crystalline CVD diamond material has to be used, which is well known for its excellent charge collection efficiency (almost 100%) and its very good timing properties. For pion-induced experiments at HADES, a large-area, segmented, position-sensitive detector operated in vacuum was developed. The construction of the detector is presented along with the requirements and the obtained performance.

  18. Low mass dielectrons radiated off cold nuclear matter measured with HADES

    Science.gov (United States)

    Lorenz, M.; Agakishiev, G.; Behnke, C.; Belver, D.; Belyaev, A.; Berger-Chen, J. C.; Blanco, A.; Blume, C.; Böhmer, M.; Cabanelas, P.; Chernenko, S.; Dritsa, C.; Dybczak, A.; Epple, E.; Fabbietti, L.; Fateev, O.; Fonte, P.; Friese, J.; Fröhlich, I.; Galatyuk, T.; Garzón, J. A.; Gill, K.; Golubeva, M.; González-Díaz, D.; Guber, F.; Gumberidze, M.; Harabasz, S.; Hennino, T.; Höhne, C.; Holzmann, R.; Huck, P.; Ierusalimov, A.; Ivashkin, A.; Jurkovic, M.; Kämpfer, B.; Karavicheva, T.; Koenig, I.; Koenig, W.; Kolb, B. W.; Korcyl, G.; Kornakov, G.; Kotte, R.; Krása, A.; Krebs, E.; Krizek, F.; Kuc, H.; Kugler, A.; Kurepin, A.; Kurilkin, A.; Kurilkin, P.; Ladygin, V.; Lalik, R.; Lang, S.; Lapidus, K.; Lebedev, A.; Lopes, L.; Maier, L.; Mangiarotti, A.; Markert, J.; Metag, V.; Michel, J.; Müntz, C.; Münzer, R.; Naumann, L.; Palka, M.; Parpottas, Y.; Pechenov, V.; Pechenova, O.; Pietraszko, J.; Przygoda, W.; Ramstein, B.; Rehnisch, L.; Reshetin, A.; Rustamov, A.; Sadovsky, A.; Salabura, P.; Scheib, T.; Schuldes, H.; Siebenson, J.; Sobolev, Yu. G.; Spataro, S.; Ströbele, H.; Stroth, J.; Strzempek, P.; Sturm, C.; Svoboda, O.; Tarantola, A.; Teilab, K.; Tlusty, P.; Traxler, M.; Tsertos, H.; Vasiliev, T.; Wagner, V.; Weber, M.; Wendisch, C.; Wüstenfeld, J.; Yurevich, S.; Zanevsky, Y.

    2014-03-01

    The High Acceptance DiElectron Spectrometer HADES [1] is installed at the Helmholtzzentrum für Schwerionenforschung (GSI) accelerator facility in Darmstadt. It investigates dielectron emission and strangeness production in the 1-3 AGeV regime. A recent experiment series focusses on medium-modifications of light vector mesons in cold nuclear matter. In two runs, p+p and p+Nb reactions were investigated at 3.5 GeV beam energy; about 9·10⁹ events have been registered. In contrast to other experiments the high acceptance of the HADES allows for a detailed analysis of electron pairs with low momenta relative to nuclear matter, where modifications of the spectral functions of vector mesons are predicted to be most prominent. Comparing these low momentum electron pairs to the reference measurement in the elementary p+p reaction, we find in fact a strong modification of the spectral distribution in the whole vector meson region.

  19. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    Science.gov (United States)

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data compared to a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to the “residual”-based approaches using a video coder for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling due to the different characteristics of HS images in their spectral and shape domain of panchromatic imagery compared to traditional videos. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard High Efficiency Video Coding (HEVC) for HS images is proposed. An HS image presents a wealth of data where every pixel is considered a vector for different spectral bands. By quantitative comparison and analysis of pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit distribution of the known pixel vector, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as the additional reference band together with the immediate previous band when we apply the HEVC. Every spectral band of an HS image is treated like it is an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102
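    The role of the predicted band as an extra reference frame can be illustrated with a much simpler predictor than the Gaussian-mixture reflectance model used in the paper. The Python sketch below substitutes an ordinary least-squares regression on the previous bands, applied to synthetic data, purely to show the prediction step.

        import numpy as np

        def predict_band(prev_bands, target_band, n_train=2000, seed=0):
            """Predict one spectral band from the previous bands by per-pixel linear regression.

            prev_bands : array (n_prev, H, W) of already-coded bands
            target_band: array (H, W), used only to fit this illustrative predictor
            Returns the predicted band that would serve as an additional reference frame.
            """
            n_prev, H, W = prev_bands.shape
            X = prev_bands.reshape(n_prev, -1).T                 # one row per pixel vector
            y = target_band.ravel()
            rng = np.random.default_rng(seed)
            idx = rng.choice(X.shape[0], size=min(n_train, X.shape[0]), replace=False)
            A = np.hstack([X[idx], np.ones((len(idx), 1))])      # affine model
            coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
            pred = np.hstack([X, np.ones((X.shape[0], 1))]) @ coef
            return pred.reshape(H, W)

        bands = np.random.default_rng(2).normal(size=(4, 64, 64)).cumsum(axis=0)  # toy correlated bands
        pred = predict_band(bands[:3], bands[3])
        print("prediction RMSE:", np.sqrt(np.mean((pred - bands[3]) ** 2)))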

  20. Current Capabilities of the Fuel Performance Modeling Code PARFUME

    Energy Technology Data Exchange (ETDEWEB)

    G. K. Miller; D. A. Petti; J. T. Maki; D. L. Knudson

    2004-09-01

    The success of gas reactors depends upon the safety and quality of the coated particle fuel. A fuel performance modeling code (called PARFUME), which simulates the mechanical and physico-chemical behavior of fuel particles during irradiation, is under development at the Idaho National Engineering and Environmental Laboratory. Among current capabilities in the code are: 1) various options for calculating CO production and fission product gas release, 2) a thermal model that calculates a time-dependent temperature profile through a pebble bed sphere or a prismatic block core, as well as through the layers of each analyzed particle, 3) simulation of multi-dimensional particle behavior associated with cracking in the IPyC layer, partial debonding of the IPyC from the SiC, particle asphericity, kernel migration, and thinning of the SiC caused by interaction of fission products with the SiC, 4) two independent methods for determining particle failure probabilities, 5) a model for calculating release-to-birth (R/B) ratios of gaseous fission products, that accounts for particle failures and uranium contamination in the fuel matrix, and 6) the evaluation of an accident condition, where a particle experiences a sudden change in temperature following a period of normal irradiation. This paper presents an overview of the code.

  1. Konzeptionelle Untersuchungen für die HADES-Driftkammern am Prototyp 0 (Conceptual studies for the HADES drift chambers on prototype 0)

    OpenAIRE

    1997-01-01

    The dilepton spectrometer HADES (High Acceptance Di-Electron Spectrometer) is designed to spectroscopically measure dielectrons produced in central Au+Au collisions at energies of up to 2 GeV/u. The central detector component is a magnetic spectrometer, consisting of a toroidal magnetic field and 24 drift chambers, which are used to determine position and momentum via deflection in the magnetic field. High rates of minimum-ionising particles, a mass resolution of 1% in the mass range of ...

  2. Direct containment heating models in the CONTAIN code

    Energy Technology Data Exchange (ETDEWEB)

    Washington, K.E.; Williams, D.C.

    1995-08-01

    The potential exists in a nuclear reactor core melt severe accident for molten core debris to be dispersed under high pressure into the containment building. If this occurs, the set of phenomena that result in the transfer of energy to the containment atmosphere and its surroundings is referred to as direct containment heating (DCH). Because of the potential for DCH to lead to early containment failure, the U.S. Nuclear Regulatory Commission (USNRC) has sponsored an extensive research program consisting of experimental, analytical, and risk integration components. An important element of the analytical research has been the development and assessment of direct containment heating models in the CONTAIN code. This report documents the DCH models in the CONTAIN code. DCH models in CONTAIN for representing debris transport, trapping, chemical reactions, and heat transfer from debris to the containment atmosphere and surroundings are described. The descriptions include the governing equations and input instructions in CONTAIN unique to performing DCH calculations. Modifications made to the combustion models in CONTAIN for representing the combustion of DCH-produced and pre-existing hydrogen under DCH conditions are also described. Input table options for representing the discharge of debris from the RPV and the entrainment phase of the DCH process are also described. A sample calculation is presented to demonstrate the functionality of the models. The results show that reasonable behavior is obtained when the models are used to predict the sixth Zion geometry integral effects test at 1/10th scale.

  3. Modelling of LOCA Tests with the BISON Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, Richard L [Idaho National Laboratory; Pastore, Giovanni [Idaho National Laboratory; Novascone, Stephen Rhead [Idaho National Laboratory; Spencer, Benjamin Whiting [Idaho National Laboratory; Hales, Jason Dean [Idaho National Laboratory

    2016-05-01

    BISON is a modern finite-element based, multidimensional nuclear fuel performance code that is under development at Idaho National Laboratory (USA). Recent advances of BISON include the extension of the code to the analysis of LWR fuel rod behaviour during loss-of-coolant accidents (LOCAs). In this work, BISON models for the phenomena relevant to LWR cladding behaviour during LOCAs are described, followed by presentation of code results for the simulation of LOCA tests. Analysed experiments include separate effects tests of cladding ballooning and burst, as well as the Halden IFA-650.2 fuel rod test. Two-dimensional modelling of the experiments is performed, and calculations are compared to available experimental data. Comparisons include cladding burst pressure and temperature in separate effects tests, as well as the evolution of fuel rod inner pressure during ballooning and time to cladding burst. Furthermore, BISON three-dimensional simulations of separate effects tests are performed, which demonstrate the capability to reproduce the effect of azimuthal temperature variations in the cladding. The work has been carried out in the frame of the collaboration between Idaho National Laboratory and Halden Reactor Project, and the IAEA Coordinated Research Project FUMAC.

  4. Benchmarking of computer codes and approaches for modeling exposure scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, R.R. [EG and G Idaho, Inc., Idaho Falls, ID (United States); Rittmann, P.D.; Wood, M.I. [Westinghouse Hanford Co., Richland, WA (United States); Cook, J.R. [Westinghouse Savannah River Co., Aiken, SC (United States)

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.

  5. Dynamic Model on the Transmission of Malicious Codes in Network

    Directory of Open Access Journals (Sweden)

    Bimal Kumar Mishra

    2013-08-01

    This paper introduces a differential susceptibility e-epidemic model S_iIR (susceptible class 1 for viruses (S1), susceptible class 2 for worms (S2), susceptible class 3 for Trojan horses (S3), infectious (I), recovered (R)) for the transmission of malicious codes in a computer network. We derive the formula for the reproduction number (R0) to study the spread of malicious codes in the computer network. We show that the infection-free equilibrium is globally asymptotically stable and the endemic equilibrium is locally asymptotically stable when the reproduction number is less than one. An analysis has also been made of the effect of antivirus software on the infectious nodes. Numerical methods are employed to solve and simulate the system of equations developed.
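    A minimal numerical sketch of such a differential-susceptibility compartment model is given below in Python. Only the compartment layout (S1, S2, S3, I, R) follows the abstract; the rate structure and all parameter values are illustrative assumptions.

        import numpy as np
        from scipy.integrate import solve_ivp

        beta = np.array([0.4, 0.3, 0.2])   # infection rates for the three susceptible classes (assumed)
        gamma = 0.1                        # recovery rate, e.g. antivirus cleaning (assumed)
        mu = 0.01                          # node replacement rate (assumed)

        def rhs(t, y):
            s, i, r = y[:3], y[3], y[4]
            ds = mu / 3.0 - beta * s * i - mu * s       # new nodes split evenly over S1, S2, S3
            di = np.sum(beta * s * i) - (gamma + mu) * i
            dr = gamma * i - mu * r
            return np.concatenate([ds, [di, dr]])

        y0 = [0.33, 0.33, 0.33, 0.01, 0.0]
        sol = solve_ivp(rhs, (0.0, 200.0), y0)
        R0 = np.sum(beta * np.array(y0[:3])) / (gamma + mu)   # illustrative reproduction number
        print(f"R0 = {R0:.2f}, infectious fraction at t = 200: {sol.y[3, -1]:.4f}")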

  6. Finite element code development for modeling detonation of HMX composites

    Science.gov (United States)

    Duran, Adam V.; Sundararaghavan, Veera

    2017-01-01

    In this work, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well calibrated equations of state for the solid unreacted material and gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Gruneisen equation of state was employed for the unreacted HMX calibrated from experiments. The JWL form was used to model the EOS of gaseous reaction products. It is assumed that the unreacted explosive and reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy was computed using the rule of mixtures. Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that induces a non-oscillatory stabilized scheme for the shock front. The FE model was validated using analytical solutions for SOD shock and ZND strong detonation models. Benchmark problems are presented for geometries in which a single HMX crystal is subjected to a shock condition.
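    The JWL product equation of state mentioned above has a compact closed form. The Python sketch below evaluates it with open-literature coefficients for an HMX-based explosive (PBX-9501-like); these are illustrative values, not the calibration used in the work above.

        import math

        def jwl_pressure(v_rel, e_per_v0, A=852.4e9, B=18.02e9, R1=4.55, R2=1.3, omega=0.38):
            """JWL equation of state for detonation products.

            v_rel    : relative volume V = v / v0 (dimensionless)
            e_per_v0 : internal energy per unit initial volume (J/m^3)
            Default coefficients are open-literature values for an HMX-based explosive
            (PBX-9501-like) and are illustrative only.  Returns pressure in Pa.
            """
            return (A * (1.0 - omega / (R1 * v_rel)) * math.exp(-R1 * v_rel)
                    + B * (1.0 - omega / (R2 * v_rel)) * math.exp(-R2 * v_rel)
                    + omega * e_per_v0 / v_rel)

        # Near the Chapman-Jouguet state the pressure is of the order of tens of GPa:
        print(f"p(V=0.73, E=1.02e10 J/m^3) = {jwl_pressure(0.73, 1.02e10) / 1e9:.1f} GPa")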

  7. A Mutation Model from First Principles of the Genetic Code.

    Science.gov (United States)

    Thorvaldsen, Steinar

    2016-01-01

    The paper presents a neutral Codons Probability Mutations (CPM) model of molecular evolution and genetic decay of an organism. The CPM model uses a Markov process with a 20-dimensional state space of probability distributions over amino acids. The transition matrix of the Markov process includes the mutation rate and those single point mutations compatible with the genetic code. This is an alternative to the standard Point Accepted Mutation (PAM) and BLOcks of amino acid SUbstitution Matrix (BLOSUM). Genetic decay is quantified as a similarity between the amino acid distribution of proteins from a (group of) species on one hand, and the equilibrium distribution of the Markov chain on the other. Amino acid data for the eukaryote, bacterium, and archaea families are used to illustrate how both the CPM and PAM models predict their genetic decay towards the equilibrium value of 1. A family of bacteria is studied in more detail. It is found that warm environment organisms on average have a higher degree of genetic decay compared to those species that live in cold environments. The paper addresses a new codon-based approach to quantify genetic decay due to single point mutations compatible with the genetic code. The present work may be seen as a first approach to use codon-based Markov models to study how genetic entropy increases with time in an effectively neutral biological regime. Various extensions of the model are also discussed.
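    The restriction of transitions to single point mutations compatible with the genetic code can be illustrated directly. The Python sketch below counts single-nucleotide exchanges between sense codons, builds a row-stochastic amino-acid transition matrix, and computes its stationary distribution; it is a simplification of the CPM idea that ignores the mutation-rate weighting and codon usage.

        import numpy as np

        bases = "TCAG"
        aa_string = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
        codon_table = {b1 + b2 + b3: aa_string[16 * i + 4 * j + k]
                       for i, b1 in enumerate(bases)
                       for j, b2 in enumerate(bases)
                       for k, b3 in enumerate(bases)}

        aas = sorted(set(aa_string) - {"*"})                 # the 20 amino acids
        index = {aa: n for n, aa in enumerate(aas)}
        counts = np.zeros((20, 20))

        for codon, aa in codon_table.items():
            if aa == "*":
                continue                                     # skip stop codons
            for pos in range(3):
                for b in bases:
                    if b == codon[pos]:
                        continue
                    target = codon_table[codon[:pos] + b + codon[pos + 1:]]
                    if target != "*":
                        counts[index[aa], index[target]] += 1   # synonymous changes included

        P = counts / counts.sum(axis=1, keepdims=True)       # row-stochastic transition matrix
        eigvals, eigvecs = np.linalg.eig(P.T)
        stat = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
        stat /= stat.sum()                                   # stationary amino-acid distribution
        print({aa: round(float(p), 3) for aa, p in zip(aas, stat)})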

  8. Noise Feedback Coding Revisited: Refurbished Legacy Codecs and New Coding Models

    Institute of Scientific and Technical Information of China (English)

    Stephane Ragot; Balazs Kovesi; Alain Le Guyader

    2012-01-01

    Noise feedback coding (NFC) has attracted renewed interest with the recent standardization of backward-compatible enhancements for ITU-T G.711 and G.722. It has also been revisited with the emergence of proprietary speech codecs, such as BV16, BV32, and SILK, that have structures different from CELP coding. In this article, we review NFC and describe a novel coding technique that optimally shapes coding noise in embedded pulse-code modulation (PCM) and embedded adaptive differential PCM (ADPCM). We describe how this new technique was incorporated into the recent ITU-T G.711.1, G.711 App. III, and G.722 Annex B (G.722B) speech-coding standards.

  9. MMA, A Computer Code for Multi-Model Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Eileen P. Poeter and Mary C. Hill

    2007-08-20

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations.
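    The default discrimination criteria are standard and easy to reproduce. The Python sketch below shows the textbook forms of AIC, AICc and BIC and the conversion of criterion values into posterior model probabilities (Akaike-type weights); it is not a transcription of MMA's implementation, and the example numbers are hypothetical.

        import numpy as np

        def aic(loglike, k):
            """Akaike Information Criterion for a model with k estimated parameters."""
            return -2.0 * loglike + 2.0 * k

        def aicc(loglike, k, n):
            """Second-order bias-corrected AIC (requires n > k + 1 observations)."""
            return aic(loglike, k) + 2.0 * k * (k + 1) / (n - k - 1)

        def bic(loglike, k, n):
            """Bayesian Information Criterion."""
            return -2.0 * loglike + k * np.log(n)

        def model_probabilities(criteria):
            """Posterior model probabilities (Akaike-type weights) from criterion values."""
            c = np.asarray(criteria, dtype=float)
            w = np.exp(-0.5 * (c - c.min()))
            return w / w.sum()

        # Hypothetical example: three calibrated models, n = 50 observations.
        loglikes = [-120.3, -118.9, -118.7]
        n_params = [3, 5, 8]
        scores = [aicc(ll, k, 50) for ll, k in zip(loglikes, n_params)]
        print(np.round(model_probabilities(scores), 3))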

  10. MMA, A Computer Code for Multi-Model Analysis

    Science.gov (United States)

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will

  11. EMPIRE: Nuclear Reaction Model Code System for Data Evaluation

    Science.gov (United States)

    Herman, M.; Capote, R.; Carlson, B. V.; Obložinský, P.; Sin, M.; Trkov, A.; Wienke, H.; Zerkin, V.

    2007-12-01

    EMPIRE is a modular system of nuclear reaction codes, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be a neutron, proton, any ion (including heavy-ions) or a photon. The energy range extends from the beginning of the unresolved resonance region for neutron-induced reactions (∽ keV) and goes up to several hundred MeV for heavy-ion induced reactions. The code accounts for the major nuclear reaction mechanisms, including direct, pre-equilibrium and compound nucleus ones. Direct reactions are described by a generalized optical model (ECIS03) or by the simplified coupled-channels approach (CCFUS). The pre-equilibrium mechanism can be treated by a deformation dependent multi-step direct (ORION + TRISTAN) model, by a NVWY multi-step compound one or by either a pre-equilibrium exciton model with cluster emission (PCROSS) or by another with full angular momentum coupling (DEGAS). Finally, the compound nucleus decay is described by the full featured Hauser-Feshbach model with γ-cascade and width-fluctuations. Advanced treatment of the fission channel takes into account transmission through a multiple-humped fission barrier with absorption in the wells. The fission probability is derived in the WKB approximation within the optical model of fission. Several options for nuclear level densities include the EMPIRE-specific approach, which accounts for the effects of the dynamic deformation of a fast rotating nucleus, the classical Gilbert-Cameron approach and pre-calculated tables obtained with a microscopic model based on HFB single-particle level schemes with collective enhancement. A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers, moments of inertia and γ-ray strength functions. The results can be converted into ENDF-6 formatted files using the

  12. Code-to-code benchmark tests for 3D simulation models dedicated to the extraction region in negative ion sources

    Science.gov (United States)

    Nishioka, S.; Mochalskyy, S.; Taccogna, F.; Hatayama, A.; Fantz, U.; Minelli, P.

    2017-08-01

    The development of kinetic particle models for the extraction region in negative hydrogen ion sources is indispensable and helpful to clarify the H- beam extraction physics. Recently, various 3D kinetic particle codes have been developed to study the extraction mechanism, but a direct comparison between them has not yet been done. Therefore, we have carried out a code-to-code benchmark activity to validate our codes. In the present study, the progress of this benchmark activity is summarized. At present, reasonable agreement between the results of the codes has been obtained using realistic plasma parameters, at least for the following items: (1) the potential profile in the case of the vacuum condition; (2) the temporal evolution of extracted current densities and the profiles of the electric potential in the case of a plasma consisting of only electrons and positive ions.

  13. Acoustic Gravity Wave Chemistry Model for the RAYTRACE Code.

    Science.gov (United States)

    2014-09-26

    Only bibliographic details are recoverable from the scanned record (OCR): report AD-A156 850 / DNA-TR-84-127, "Acoustic Gravity Wave Chemistry Model for the RAYTRACE Code", Mission Research Corp., Santa Barbara, CA; subject terms: high frequency radio propagation, acoustic gravity waves. The abstract text itself is illegible.

  14. SIMULATE-4 multigroup nodal code with microscopic depletion model

    Energy Technology Data Exchange (ETDEWEB)

    Bahadir, T. [Studsvik Scandpower, Inc., Newton, MA (United States); Lindahl, St.O. [Studsvik Scandpower AB, Vasteras (Sweden); Palmtag, S.P. [Studsvik Scandpower, Inc., Idaho Falls, ID (United States)

    2005-07-01

    SIMULATE-4 is a three-dimensional multigroup analytical nodal code with microscopic depletion capability. It has been developed employing 'first principles models', thus avoiding ad hoc approximations. The multigroup diffusion equations or, optionally, the simplified P{sub 3} equations are solved. Cross sections are described by a hybrid microscopic-macroscopic model that includes approximately 50 heavy nuclides and fission products. Heterogeneities in the axial direction of an assembly are treated systematically. Radially, the assembly is divided into heterogeneous sub-meshes, thereby overcoming the shortcomings of spatially-averaged assembly cross sections and discontinuity factors generated with zero net-current boundary conditions. Numerical tests against higher order transport methods and critical experiments show substantial improvements compared to results of existing nodal models. (authors)

  15. C code generation applied to nonlinear model predictive control for an artificial pancreas

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Jørgensen, John Bagterp

    2017-01-01

    This paper presents a method to generate C code from MATLAB code applied to a nonlinear model predictive control (NMPC) algorithm. The C code generation uses the MATLAB Coder Toolbox. It can drastically reduce the time required for development compared to a manual porting of code from MATLAB to C...

  16. Kinetic models of gene expression including non-coding RNAs

    Science.gov (United States)

    Zhdanov, Vladimir P.

    2011-03-01

    In cells, genes are transcribed into mRNAs, and the latter are translated into proteins. Due to the feedbacks between these processes, the kinetics of gene expression may be complex even in the simplest genetic networks. The corresponding models have already been reviewed in the literature. A new avenue in this field is related to the recognition that the conventional scenario of gene expression is fully applicable only to prokaryotes whose genomes consist of tightly packed protein-coding sequences. In eukaryotic cells, in contrast, such sequences are relatively rare, and the rest of the genome includes numerous transcript units representing non-coding RNAs (ncRNAs). During the past decade, it has become clear that such RNAs play a crucial role in gene expression and accordingly influence a multitude of cellular processes both in the normal state and during diseases. The numerous biological functions of ncRNAs are based primarily on their abilities to silence genes via pairing with a target mRNA and subsequently preventing its translation or facilitating degradation of the mRNA-ncRNA complex. Many other abilities of ncRNAs have been discovered as well. Our review is focused on the available kinetic models describing the mRNA, ncRNA and protein interplay. In particular, we systematically present the simplest models without kinetic feedbacks, models containing feedbacks and predicting bistability and oscillations in simple genetic networks, and models describing the effect of ncRNAs on complex genetic networks. Mathematically, the presentation is based primarily on temporal mean-field kinetic equations. The stochastic and spatio-temporal effects are also briefly discussed.
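    The simplest mean-field description of ncRNA-mediated silencing outlined above, in which an ncRNA pairs with its target mRNA and the complex is degraded, can be written as three kinetic equations. The Python sketch below integrates them with arbitrary illustrative rate constants.

        import numpy as np
        from scipy.integrate import solve_ivp

        k_m, k_s, k_p = 1.0, 2.0, 5.0      # synthesis rates of mRNA, ncRNA, protein (assumed)
        d_m, d_s, d_p = 0.1, 0.1, 0.05     # degradation rates (assumed)
        k_pair = 0.5                       # mRNA-ncRNA pairing and joint degradation rate (assumed)

        def rhs(t, y):
            m, s, p = y                    # mRNA, ncRNA, protein concentrations
            dm = k_m - d_m * m - k_pair * m * s
            ds = k_s - d_s * s - k_pair * m * s
            dp = k_p * m - d_p * p
            return [dm, ds, dp]

        sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0, 0.0])
        print("steady state (mRNA, ncRNA, protein):", np.round(sol.y[:, -1], 3))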

  17. A simple model of optimal population coding for sensory systems.

    Science.gov (United States)

    Doi, Eizaburo; Lewicki, Michael S

    2014-08-01

    A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustness to noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because it contains distortion and noise. The optimal solution would be constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, such as in the peripheral retina. We show that there exist multiple, equally-optimal solutions whose receptive field structure and organization vary significantly. Among these, the one which maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with known basic properties of retinal receptive fields. The model further predicts that receptive field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery.

  18. Development of condensation modeling and simulation code for IRWST

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sang Nyung; Jang, Wan Ho; Ko, Jong Hyun; Ha, Jong Baek; Yang, Chang Keun; Son, Myung Seong [Kyung Hee Univ., Seoul (Korea)

    1997-07-01

    One of the design improvements of the KNGR (Korean Next Generation Reactor), which is advanced in terms of safety and economy, is the adoption of the IRWST (In-Containment Refueling Water Storage Tank). The IRWST, installed inside the containment building, serves more design purposes than merely relocating the tank. Since the design functions of the IRWST are similar to those of the BWR suppression pool, theoretical models applicable to the BWR suppression pool can mostly be applied to the IRWST. But for the PWR, the geometry of the sparger, the operation mode, and the quantity, temperature and pressure of the steam discharged from the primary system to the IRWST through the PSV or SDV may differ from those of the BWR. Also, there are some defects in detailed parts of the condensation models. Therefore we, as the first nation to construct a PWR with an IRWST, must carry out thorough research on these problems so that the results can be utilized and localized as an exclusive technology. All kinds of thermal-hydraulic phenomena were investigated and the existing condensation models by Hideki Nariai and Izuo Aya were analyzed. Also, through a rigorous review of the literature, such as operating experience, experimental data, and the design documents of the KNGR, items which need modification and supplementation were derived. An analytical model for the chugging phenomenon is also presented. 15 refs., 18 figs., 4 tabs. (Author)

  19. A MATLAB based 3D modeling and inversion code for MT data

    Science.gov (United States)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

    The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form - forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.

  20. Secondary neutron source modelling using MCNPX and ALEPH codes

    Science.gov (United States)

    Trakas, Christos; Kerkar, Nordine

    2014-06-01

    Monitoring the subcritical state and divergence of reactors requires the presence of neutron sources, but it is mainly secondary neutrons from these sources that feed the ex-core detectors (SRD, Source Range Detector), whose counting rate is correlated with the level of subcriticality of the reactor. In cycle 1, primary neutrons are provided by sources activated outside the reactor (e.g. Cf252); part of this source can be used for the divergence of cycle 2 (not systematically). A second family of neutron sources is used for the second cycle: the spontaneous neutrons of actinides produced after irradiation of the fuel in the first cycle. In most reactors, neither family of sources is sufficient to efficiently monitor the divergence of the second and following cycles. Secondary source clusters (SSC) fulfil this role. In the present case, the SSC [Sb, Be], after activation in the first cycle (production of unstable Sb124), produces in subsequent cycles a photo-neutron source through the gamma (from Sb124)-neutron (on Be9) reaction. This paper presents a model of the process from irradiation in cycle 1 to the SRD counting rate at the beginning of cycle 2, using the MCNPX code and the depletion chain ALEPH-V1 (a coupling of the MCNPX and ORIGEN codes). The results of this simulation are compared with two experimental results from the PWR 1450 MWe-N4 reactors, and good agreement is observed. The subcriticality of the reactors is about -15,000 pcm. Discrepancies in the SRD counting rate between calculations and measurements are of the order of 10%, lower than the combined uncertainty of the measurements and the code simulation. This comparison validates the AREVA methodology, which provides a best-estimate SRD counting rate for cycle 2 and subsequent cycles and allows optimizing the position of the SSC, depending on the geographic location of the sources, the main parameter for optimal monitoring of subcritical states.

  1. TASS/SMR Code Topical Report for SMART Plant, Vol. I: Code Structure, System Models, and Solution Methods

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Young Jong; Kim, Soo Hyoung; Kim, See Darl (and others)

    2008-10-15

    The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant, which is an integral-type pressurized water reactor. It can be applied to the analysis of design basis accidents of the SMART plant, including both non-LOCA events and LOCAs (loss of coolant accidents). The TASS/SMR code can be applied to any plant regardless of the structural characteristics of the reactor, since the code solves the same governing equations for both the primary and secondary systems. The code has been developed to meet the requirements of a safety analysis code. This report describes the overall structure of TASS/SMR, the input processing, and the procedures for steady-state and transient calculations. In addition, the basic differential equations, finite difference equations, state relationships, and constitutive models are described in the report. First, the conservation equations, the discretization process for numerical analysis, and the search method for state relationships are described. Then, a core power model, heat transfer models, physical models for various components, and control and trip models are explained.

  2. Model code for energy conservation in new building construction

    Energy Technology Data Exchange (ETDEWEB)

    None

    1977-12-01

    In response to the recognized lack of existing consensus standards directed to the conservation of energy in building design and operation, the preparation and publication of such a standard was accomplished with the issuance of ASHRAE Standard 90-75 ''Energy Conservation in New Building Design,'' by the American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc., in 1975. This standard addressed itself to recommended practices for energy conservation, using both depletable and non-depletable sources. A model code for energy conservation in building construction has been developed, setting forth the minimum regulations found necessary to mandate such conservation. The code addresses itself to the administration, design criteria, systems elements, controls, service water heating and electrical distribution and use, both for depletable and non-depletable energy sources. The technical provisions of the document are based on ASHRAE 90-75 and it is intended for use by state and local building officials in the implementation of a statewide energy conservation program.

  3. Mathematical modeling of wiped-film evaporators. [MAIN codes

    Energy Technology Data Exchange (ETDEWEB)

    Sommerfeld, J.T.

    1976-05-01

    A mathematical model and associated computer program were developed to simulate the steady-state operation of wiped-film evaporators for the concentration of typical waste solutions produced at the Savannah River Plant. In this model, which treats either a horizontal or a vertical wiped-film evaporator as a plug-flow device with no backmixing, three fundamental phenomena are described: sensible heating of the waste solution, vaporization of water, and crystallization of solids from solution. Physical property data were coded into the computer program, which performs the calculations of this model. Physical properties of typical waste solutions and of the heating steam, generally as analytical functions of temperature, were obtained from published data or derived by regression analysis of tabulated or graphical data. Preliminary results from tests of the Savannah River Laboratory semiworks wiped-film evaporators were used to select a correlation for the inside film heat transfer coefficient. This model should be a useful aid in the specification, operation, and control of the full-scale wiped-film evaporators proposed for application under plant conditions. In particular, it should be of value in the development and analysis of feed-forward control schemes for the plant units. Also, this model can be readily adapted, with only minor changes, to simulate the operation of wiped-film evaporators for other conceivable applications, such as the concentration of acid wastes.

  4. CODE's new solar radiation pressure model for GNSS orbit determination

    Science.gov (United States)

    Arnold, D.; Meindl, M.; Beutler, G.; Dach, R.; Schaer, S.; Lutz, S.; Prange, L.; Sośnica, K.; Mervart, L.; Jäggi, A.

    2015-08-01

    The Empirical CODE Orbit Model (ECOM) of the Center for Orbit Determination in Europe (CODE), which was developed in the early 1990s, is widely used in the International GNSS Service (IGS) community. For a rather long time, spurious spectral lines are known to exist in geophysical parameters, in particular in the Earth Rotation Parameters (ERPs) and in the estimated geocenter coordinates, which could recently be attributed to the ECOM. These effects grew creepingly with the increasing influence of the GLONASS system in recent years in the CODE analysis, which is based on a rigorous combination of GPS and GLONASS since May 2003. In a first step we show that the problems associated with the ECOM are to the largest extent caused by the GLONASS, which was reaching full deployment by the end of 2011. GPS-only, GLONASS-only, and combined GPS/GLONASS solutions using the observations in the years 2009-2011 of a global network of 92 combined GPS/GLONASS receivers were analyzed for this purpose. In a second step we review direct solar radiation pressure (SRP) models for GNSS satellites. We demonstrate that only even-order short-period harmonic perturbations acting along the direction Sun-satellite occur for GPS and GLONASS satellites, and only odd-order perturbations acting along the direction perpendicular to both, the vector Sun-satellite and the spacecraft's solar panel axis. Based on this insight we assess in the third step the performance of four candidate orbit models for the future ECOM. The geocenter coordinates, the ERP differences w. r. t. the IERS 08 C04 series of ERPs, the misclosures for the midnight epochs of the daily orbital arcs, and scale parameters of Helmert transformations for station coordinates serve as quality criteria. The old and updated ECOM are validated in addition with satellite laser ranging (SLR) observations and by comparing the orbits to those of the IGS and other analysis centers. Based on all tests, we present a new extended ECOM which

  5. A large area timing RPC prototype for ion collisions in the HADES spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Alvarez Pol, H. [Facutad de Fisica, Universidade de Santiago de Compostela, Campus sur, Santiago de Compostela (Spain); Alves, R. [LIP, Coimbra (Portugal); Blanco, A. [LIP, Coimbra (Portugal); Carolino, N. [LIP, Coimbra (Portugal); Eschke, J. [GSI, Darmstadt (Germany); Ferreira-Marques, R. [LIP, Coimbra (Portugal); Universidade de Coimbra, Coimbra (Portugal); Fonte, P. [LIP, Coimbra (Portugal); ISEC, Coimbra (Portugal); Garzon, J.A. [Facutad de Fisica, Universidade de Santiago de Compostela, Campus sur, Santiago de Compostela (Spain); Gonzalez Diaz, D. [Facutad de Fisica, Universidade de Santiago de Compostela, Campus sur, Santiago de Compostela (Spain)]. E-mail: diego@fpddux.usc.es; Pereira, A. [LIP, Coimbra (Portugal); Pietraszko, J. [Smoluchowski Institute of Physics, Cracow (Poland); Pinhao, J. [LIP, Coimbra (Portugal); Policarpo, A. [LIP, Coimbra (Portugal); Universidade de Coimbra, Coimbra (Portugal); Stroth, J. [GSI, Darmstadt (Germany); Johann Wolfgang Goethe-Universitaet, Frankfurt (Germany)

    2004-12-11

    We present a resistive plate chamber (RPC) prototype for time-of-flight measurements over large areas and at high occupancies, minimizing the inter-channel cross-talk. A procedure for the stand-alone calibration of the detector using redundant information is proposed, taking advantage of the very good spatial uniformity observed. Measurements were performed at the GSI (Darmstadt) SIS accelerator for primary collisions of C at 1 GeV/u, as a first step towards the projected high acceptance di-electron spectrometer (HADES) upgrade to work at the highest multiplicities expected in Au-Au collisions.

  6. Dielectron spectroscopy at 1-2 AGeV with HADES

    Energy Technology Data Exchange (ETDEWEB)

    Spataro, S. [Istituto Nazionale di Fisica Nucleare, Lab. Nazionali del Sud, Catania (Italy)]|[Justus Liebig Univ. Giessen, II.Physikalisches Institut, Giessen (Germany); Agakishiev, G.; Destefanis, M.; Gilardi, C.; Kirschner, D.; Kuehn, W.; Lange, J.S.; Metag, V.; Mishra, D.; Novotny, R.; Pechenov, V.; Pechenova, O.; Perez Cavalcanti, T.; Spruck, B.; Wen, H. [Justus Liebig Univ. Giessen, II.Physikalisches Inst., Giessen (Germany); Agodi, C.; Bellia, G.; Finocchiaro, P. [Istituto Nazionale di Fisica Nucleare, Lab. Nazionali del Sud, Catania (Italy); Balanda, A.; Dybczak, A.; Kozuch, A.; Michalska, B.; Otwinowski, J.; Przygoda, W.; Salabura, P.; Trebacz, R.; Wisniowski, M.; Wojcik, T. [Jagiellonian Univ. of Cracow, Smoluchowski Inst. of Physics, Krakow (Poland); Belver, D.; Cabanelas, P.; Castro, E.; Garzon, J.A.; Lamas-Valverde, J.; Marin, J. [Univ. of Santiago de Compostela, Dept. de Fisica de Particulas, Santiago de Compostela (Spain); Belyaev, A.; Chernenko, S.; Fateev, O.; Ierusalimov, A.; Zanevsky, Y. [Joint Inst. of Nuclear Research, Dubna (Russian Federation); Blanco, A.; Fonte, P.; Lopes, L.; Mangiarotti, A. [LIP-Lab. de Instrumentacao e Fisica Experimental de Particulas, Coimbra (Portugal); Boehmer, M.; Christ, T.; Eberl, T.; Fabbietti, L.; Friese, J.; Gernhaeuser, R.; Jurkovic, M.; Kruecken, R.; Maier, L.; Sailer, B. [Technische Univ. Muenchen, Physik Department E12, Muenchen (Germany); Boyard, J.L.; Hennino, T.; Moriniere, E.; Ramstein, B.; Roy-Stephan, M. [Univ. Paris Sud, Inst. de Physique Nucleaire (UMR 8608), CNRS/IN2P3, Orsay Cedex (France); Braun-Munzinger, P.; Galatyuk, T.; Gonzalez-Diaz, D.; Holzmann, R.; Koenig, I.; Koenig, W.; Kolb, B.W.; Lang, S.; Palka, M.; Pietraszko, J.; Rustamov, A.; Schmah, A.; Simon, R.; Sudol, M.; Traxler, M.; Yurevich, S.; Zumbruch, P. [Gesellschaft fuer Schwerionenforschung mbH, Darmstadt (Germany); Diaz, J.; Gil, A. [Univ. de Valencia-CSIC, Inst. de Fisica Corpuscular, Valencia (Spain)] [and others

    2008-11-15

    The HADES spectrometer at GSI (Darmstadt) is investigating the e⁺e⁻ pair production in p+p, p+A and A+A collisions. In this contribution we would like to highlight the physics motivations and the experiments performed so far, focusing mainly on the first results coming from ¹²C+¹²C collisions at 1 and 2 AGeV, and on preliminary results from p+p/d+p collisions at 1.25 AGeV. (orig.)

  7. Electric field and drift characteristics studies for the multiwire chambers of the third plane of HADES

    CERN Document Server

    Kanaki, K

    2003-01-01

    The aim of this report is a brief presentation of the physics motivation behind the HADES project and, in more detail, a description of the main features of the spectrometer. More specifically, it deals with the operational features of the drift chambers incorporated in the experimental setup. The studies of the working characteristics and the simulations aimed at optimizing the performance of the drift chambers are also described. Finally, the conclusions of these simulations are given, together with the proposals that should lead to an optimized operation of the drift chambers.

  8. Electric field and drift characteristics studies for the multiwire chambers of the third plane of HADES

    Energy Technology Data Exchange (ETDEWEB)

    Kanaki, K.

    2003-02-01

    The aim of this report is a brief presentation of the physics motivation behind the HADES project and, in more detail, a description of the main features of the spectrometer. More specifically, it deals with the operational features of the drift chambers incorporated in the experimental setup. The studies of the working characteristics and the simulations aimed at optimizing the performance of the drift chambers are also described. Finally, the conclusions of these simulations are given, together with the proposals that should lead to an optimized operation of the drift chambers. (orig.)

  9. Measurement of the Lambda(1405) in proton proton reactions with HADES

    CERN Document Server

    Siebenson, Johannes; Schmah, Alexander; Epple, Eliane

    2010-01-01

    We present an analysis of the Lambda(1405) resonance in p+p reactions at a kinetic beam energy of 3.5 GeV, measured by the High Acceptance Di-Electron Spectrometer (HADES). The resonance is reconstructed in the two charged decay channels Sigma^(+/-) pi^(-/+) with the help of a kinematic refit, which improves the mass resolution. The high rate of misidentification of pions and protons as kaons required the development of a sophisticated sideband analysis, which describes the misidentification background quite well.

  10. Molecular Code Division Multiple Access: Gaussian Mixture Modeling

    Science.gov (United States)

    Zamiri-Jafarian, Yeganeh

    Communication between nano-devices is an emerging research field in nanotechnology. Molecular Communication (MC), a bio-inspired paradigm, is a promising technique for communication in nano-networks. In MC, molecules are administered to exchange information among nano-devices. Due to the nature of molecular signals, traditional communication methods cannot be directly applied to the MC framework. The objective of this thesis is to present novel diffusion-based MC methods for multiple nano-devices communicating with each other in the same environment. A new channel model and detection technique, along with a molecular-based access method, are proposed here for communication between asynchronous users. In this work, the received molecular signal is modeled as a Gaussian mixture distribution when the MC system undergoes Brownian noise and inter-symbol interference (ISI). This novel approach provides a suitable model for the diffusion-based MC system. Using the proposed Gaussian mixture model, a simple receiver is designed by minimizing the error probability. To determine an optimum detection threshold, an iterative algorithm is derived which minimizes a linear approximation of the error-probability function. Also, a memory-based receiver is proposed to improve the performance of the MC system by considering previously detected symbols in obtaining the threshold value. Numerical evaluations reveal that the theoretical analysis of the bit error rate (BER) performance based on the Gaussian mixture model matches simulation results very closely. Furthermore, in this thesis, molecular code division multiple access (MCDMA) is proposed to overcome the inter-user interference (IUI) caused by asynchronous users communicating in a shared propagation environment. Based on the selected molecular codes, a chip detection scheme with an adaptable threshold value is developed for the MCDMA system when the proposed Gaussian mixture model is considered. Results indicate that the
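
    As a rough illustration of the threshold-detection idea above, the sketch below models the received molecule count for each transmitted bit as a two-component Gaussian mixture (the two components standing in for different ISI states) and picks the decision threshold that minimizes the error probability on a grid. All mixture parameters are hypothetical placeholders, not values from the thesis.

```python
# Minimal sketch (hypothetical parameters, not from the thesis): bit detection
# for a diffusion-based MC link whose received molecule count is modeled as a
# two-component Gaussian mixture per transmitted bit, with the threshold chosen
# by a simple grid search that minimizes the error probability.
import numpy as np
from scipy.stats import norm

# Placeholder mixtures: weights over two ISI states for each transmitted bit.
w = np.array([0.6, 0.4])
mu0, sd0 = np.array([20.0, 35.0]), np.array([5.0, 6.0])    # bit 0
mu1, sd1 = np.array([60.0, 75.0]), np.array([8.0, 9.0])    # bit 1

def p_error(threshold):
    # P(error) = 0.5*P(decide 1 | bit 0) + 0.5*P(decide 0 | bit 1)
    miss0 = np.sum(w * norm.sf(threshold, mu0, sd0))   # bit 0 landing above threshold
    miss1 = np.sum(w * norm.cdf(threshold, mu1, sd1))  # bit 1 landing below threshold
    return 0.5 * (miss0 + miss1)

grid = np.linspace(0.0, 120.0, 1201)
best = grid[np.argmin([p_error(t) for t in grid])]
print(f"optimum threshold ≈ {best:.1f} molecules, P_e ≈ {p_error(best):.3e}")
```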

  11. Modeling of the CTEx subcritical unit using MCNPX code

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Avelino [Divisao de Defesa Quimica, Biologica e Nuclear. Centro Tecnologico do Exercito - CTEx, Guaratiba, Rio de Janeiro, RJ (Brazil); Silva, Ademir X. da, E-mail: ademir@con.ufrj.br [Programa de Engenharia Nuclear. Universidade Federal do Rio de Janeiro - UFRJ Centro de Tecnologia, Rio de Janeiro, RJ (Brazil); Rebello, Wilson F. [Secao de Engenharia Nuclear - SE/7 Instituto Militar de Engenharia - IME Rio de Janeiro, RJ (Brazil); Cunha, Victor L. Lassance [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2011-07-01

    The present work aims at simulating the subcritical unit of the Army Technology Center (CTEx), namely the ARGUS pile (subcritical uranium-graphite arrangement), by using the computational code MCNPX. Once such modeling is finished, it could be used in k_eff calculations for systems using natural uranium as fuel, for instance. ARGUS is a subcritical assembly which uses reactor-grade graphite as moderator of fission neutrons and metallic uranium fuel rods with aluminum cladding. The pile is driven by an Am-Be spontaneous neutron source. In order to achieve a higher value for k_eff, a higher concentration of U235 can be proposed, provided it safely remains below one. (author)

  12. Djehuty, a Code for Modeling Stars in Three Dimensions

    CERN Document Server

    Bazán, G; Dossa, D D; Eggleton, P P; Taylor, A; Castor, J I; Murray, S; Cook, K H; Eltgroth, P G; Cavallo, R M; Turcotte, S; Keller, S C; Pudliner, B S

    2003-01-01

    Current practice in stellar evolution is to employ one-dimensional calculations that quantitatively apply only to a minority of the observed stars (single non-rotating stars, or well detached binaries). Even in these systems, astrophysicists are dependent on approximations to handle complex three-dimensional processes like convection. Binary stars, like those that lead to the Type Ia supernovae used to measure the expansion of the universe, are grossly non-spherical and await a 3D treatment. To approach very large problems like multi-dimensional modeling of stars, the Lawrence Livermore National Laboratory has invested in massively parallel computers and invested even more in developing the algorithms to utilize them on complex physics problems. We have leveraged skills from across the lab to develop a 3D stellar evolution code, Djehuty (after the Egyptian god for writing and calculation), that operates efficiently on platforms with thousands of nodes, with the best available phy...

  13. Maximizing entropy of image models for 2-D constrained coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Danieli, Matteo; Burini, Nino

    2010-01-01

    This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). The PRF are 2-D causal finite context models, which define stationary probability distributions on finite rectangles and thus allow for calculation of the entropy. We consider two binary constraints: we revisit the hard square constraint, given by forbidding neighboring 1s, and provide novel results for the constraint that no uniform 2 × 2 square contains all 0s or all 1s. The maximum values of the entropy for the constraints are estimated, and binary PRF satisfying the constraints are characterized and optimized w.r.t. the entropy. The maximum binary PRF entropy is 0.839 bits/symbol for the no-uniform-squares constraint. The entropy...
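
    For intuition on what "entropy of a 2-D constraint" means, the sketch below is a standard transfer-matrix estimate of the hard-square constraint (no two neighboring 1s) on strips of increasing width; it is not the PRF optimization from the paper, just a quick way to approach the known capacity of roughly 0.5879 bits/symbol.

```python
# Illustrative transfer-matrix estimate of the 2-D hard-square capacity
# (no two horizontally or vertically adjacent 1s) on a strip of width W.
import numpy as np
from itertools import product

def hard_square_entropy(width):
    # Admissible rows: no two adjacent 1s within the row.
    rows = [r for r in product((0, 1), repeat=width)
            if all(not (r[i] and r[i + 1]) for i in range(width - 1))]
    # Transfer matrix: rows may be stacked if no column has 1 above 1.
    T = np.array([[float(all(not (a & b) for a, b in zip(r1, r2))) for r2 in rows]
                  for r1 in rows])
    lam = max(abs(np.linalg.eigvals(T)))   # largest eigenvalue (Perron root)
    return np.log2(lam) / width            # bits per symbol on the strip

for w in (4, 6, 8):
    print(w, round(hard_square_entropy(w), 4))
```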

  14. Modeling Vortex Generators in a Navier-Stokes Code

    Science.gov (United States)

    Dudek, Julianne C.

    2011-01-01

    A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.
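
    As a hedged illustration of the user-facing idea (three inputs per vane: a range of grid points, the planform area, and the angle of incidence), the sketch below turns those inputs into a lift-type body force distributed over the marked cells. It is not the Wind-US implementation; the thin-airfoil lift slope and all numbers are assumptions for demonstration only.

```python
# Hedged sketch (not the Wind-US source-term model): a vane-type vortex-generator
# force from user-supplied planform area and angle of incidence, spread uniformly
# over a user-specified range of grid cells.
import math

def vg_lift_force(rho, u_local, planform_area, incidence_deg):
    """Lift-force magnitude for one vane, thin-airfoil estimate C_L = 2*pi*alpha."""
    alpha = math.radians(incidence_deg)
    cl = 2.0 * math.pi * alpha
    return 0.5 * rho * u_local ** 2 * planform_area * cl

def distribute_over_cells(force, cell_ids):
    """Split the force equally over the marked cells, as a per-cell source term."""
    per_cell = force / len(cell_ids)
    return {cid: per_cell for cid in cell_ids}

force = vg_lift_force(rho=1.2, u_local=80.0, planform_area=2.0e-4, incidence_deg=16.0)
print(distribute_over_cells(force, cell_ids=[(10, j, 5) for j in range(4, 8)]))
```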

  15. Modeling Vortex Generators in the Wind-US Code

    Science.gov (United States)

    Dudek, Julianne C.

    2010-01-01

    A source term model which simulates the effects of vortex generators was implemented into the Wind-US Navier Stokes code. The source term added to the Navier-Stokes equations simulates the lift force which would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, supersonic flow in a rectangular duct with a counterrotating vortex generator pair, and subsonic flow in an S-duct with 22 co-rotating vortex generators. The validation results indicate that the source term vortex generator model provides a useful tool for screening vortex generator configurations and gives comparable results to solutions computed using a gridded vane.

  16. Modelling of sprays in containment applications with A CMFD code

    Energy Technology Data Exchange (ETDEWEB)

    Mimouni, S., E-mail: stephane.mimouni@edf.f [Electricite de France R and D Division, 6 Quai Watier, F-78400 Chatou (France); Lamy, J.-S. [Electricite de France R and D Division, 1 av. du General de Gaulle, F-92140 Clamart (France); Lavieville, J. [Electricite de France R and D Division, 6 Quai Watier, F-78400 Chatou (France); Guieu, S.; Martin, M. [Electricite de France SEPTEN Division, 12-14 av. Dutrievoz, 69628 Villeurbanne (France)

    2010-09-15

    During the course of a hypothetical severe accident in a Pressurized Water Reactor (PWR), spray systems are used in the containment in order to prevent overpressure in case of a steam line break, and to enhance the gas mixing in case of the presence of hydrogen. In the frame of the Severe Accident Research Network (SARNET) of the 6th EC Framework Programme, two tests were performed in the TOSQAN facility in order to study the spray behaviour under severe accident conditions: TOSQAN 101 and TOSQAN 113. The TOSQAN facility is a closed cylindrical vessel. The inner spray system is located at the top of the enclosure on the vertical axis. For the TOSQAN 101 case, an initial pressurization of the vessel is performed with superheated steam up to 2.5 bar. Then, steam injection is stopped and spraying starts simultaneously at a given water temperature (around 25 °C) and water mass flow-rate (around 30 g/s). The depressurization transient starts and continues until the equilibrium phase, which corresponds to the stabilization of the average temperature and pressure of the gaseous mixture inside the vessel. The purpose of the TOSQAN 113 cold spray test is to study helium mixing due to spray activation without heat and mass transfers between gas and droplets. We present in this paper the spray modelling implemented in NEPTUNE_CFD, a three-dimensional multi-fluid code developed especially for nuclear reactor applications. A new model dedicated to droplet evaporation at the wall is also detailed. Keeping in mind the Best Practice Guidelines, closure laws have been selected to ensure a grid dependence as weak as possible. For the TOSQAN 113 case, the calculated time evolution of the helium volume fraction shows that the physical approach described in the paper is able to reproduce the mixing of helium by the spray. The prediction of the transient behaviour should be improved by including in the model corrections based on better understanding of the influence of the

  17. Subgrid Combustion Modeling for the Next Generation National Combustion Code

    Science.gov (United States)

    Menon, Suresh; Sankaran, Vaidyanathan; Stone, Christopher

    2003-01-01

    In the first year of this research, a subgrid turbulent mixing and combustion methodology developed earlier at Georgia Tech was provided to researchers at NASA/GRC for incorporation into the next-generation National Combustion Code (called NCCLES hereafter). A key feature of this approach is that scalar mixing and combustion processes are simulated within the LES grid using a stochastic 1D model. The subgrid simulation approach recovers locally molecular diffusion and reaction kinetics exactly, without requiring closure, and thus provides an attractive feature for simulating complex, highly turbulent reacting flows of interest. Data acquisition algorithms and statistical analysis strategies and routines to analyze NCCLES results have also been provided to NASA/GRC. The overall goal of this research is to systematically develop and implement LES capability into the current NCC. For this purpose, issues regarding initialization and running LES are also addressed in the collaborative effort. In parallel to this technology transfer effort (which is continuously ongoing), research has also been underway at Georgia Tech to enhance the LES capability to tackle more complex flows. In particular, the subgrid scalar mixing and combustion method has been evaluated in three distinctly different flow fields in order to demonstrate its generality: (a) flame-turbulence interactions using premixed combustion, (b) spatially evolving supersonic mixing layers, and (c) temporal single- and two-phase mixing layers. The configurations chosen are such that they can be implemented in NCCLES and used to evaluate the ability of the new code. Future development and validation will be in spray combustion in gas turbine engines and supersonic scalar mixing.

  18. Low mass dielectrons radiated off cold nuclear matter measured with HADES

    Directory of Open Access Journals (Sweden)

    Lorenz M.

    2014-03-01

    The High Acceptance DiElectron Spectrometer HADES [1] is installed at the Helmholtzzentrum für Schwerionenforschung (GSI) accelerator facility in Darmstadt. It investigates dielectron emission and strangeness production in the 1-3 AGeV regime. A recent experiment series focuses on medium modifications of light vector mesons in cold nuclear matter. In two runs, p+p and p+Nb reactions were investigated at 3.5 GeV beam energy; about 9·10⁹ events have been registered. In contrast to other experiments, the high acceptance of HADES allows for a detailed analysis of electron pairs with low momenta relative to nuclear matter, where modifications of the spectral functions of vector mesons are predicted to be most prominent. Comparing these low-momentum electron pairs to the reference measurement in the elementary p+p reaction, we find in fact a strong modification of the spectral distribution in the whole vector-meson region.

  19. FPGA based compute nodes for trigger and data acquisition in HADES and PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Ming; Lang, Johannes; Wang, Qiang; Lange, Soeren; Roskoss, Johannes; Kopp, Andreas; Muenchow, David; Kuehn, Wolfgang [II. Physikalisches Institut, Universitaet Giessen (Germany); Liu, Zhen' an; Xu, Hao; Jin, Dapeng [IHEP, Beijing (China)

    2009-07-01

    Modern experiments in hadron and nuclear physics such as HADES and PANDA at FAIR require high-performance trigger and data acquisition solutions which - in the case of PANDA - must cope with more than 10⁷ reactions/s and data rates on the order of 100 GB/s. As a universal building block for such high-performance systems, an ATCA-compliant FPGA-based Compute Node (CN) has been designed and built. Sophisticated online filtering algorithms can be executed on 5 XILINX Virtex-4 FX60 FPGAs. Each CN features up to 10 GBytes of DDR2 memory. Multiple CNs can communicate via optical links, GBit Ethernet and the ATCA Full Mesh backplane. The total bandwidth of a single CN exceeds 35 GB/s. The system is highly scalable, ranging from small configurations in a single shelf to large multi-shelf solutions. The talk will present the architecture as well as performance results for first algorithms, which have been implemented in the framework of the HADES trigger upgrade.

  20. Methodology Using MELCOR Code to Model Proposed Hazard Scenario

    Energy Technology Data Exchange (ETDEWEB)

    Gavin Hawkley

    2010-07-01

    This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of the leak path factor (LPF), i.e., the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the fraction of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and on other pathways from the building, such as doorways, both open and closed. This study shows how the multiple LPFs within the building can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a receptor placed downwind can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). The study also briefly addresses particle characteristics that affect atmospheric particle dispersion, and compares this dispersion with the LPF methodology.
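
    For orientation, a minimal sketch of the combinatory LPF arithmetic described above, assuming the conventional five-factor source-term formula ST = MAR × DR × ARF × RF × LPF; the numerical values are placeholders, not data from the MELCOR analysis.

```python
# Illustrative sketch (not from the study): combining room-to-room leak path
# factors multiplicatively and folding the total LPF into a respirable source
# term. All numerical values below are placeholders.
from math import prod

def total_lpf(path_factors):
    """Total leak path factor for one release pathway:
    the product of the fractions passing each barrier in series."""
    return prod(path_factors)

def respirable_source_term(mar_g, dr, arf, rf, lpf):
    """Respirable source term (grams) released to the environment."""
    return mar_g * dr * arf * rf * lpf

# Example: powder room -> corridor -> unfiltered doorway
lpf = total_lpf([0.5, 0.5])            # the assumed 0.5 x 0.5 multiplication
st = respirable_source_term(mar_g=1000.0, dr=1.0, arf=1e-3, rf=0.1, lpf=lpf)
print(f"total LPF = {lpf}, respirable ST = {st:.3g} g")
```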

  1. Coding conventions and principles for a National Land-Change Modeling Framework

    Science.gov (United States)

    Donato, David I.

    2017-07-14

    This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.

  2. Modeling ion exchange in clinoptilolite using the EQ3/6 geochemical modeling code

    Energy Technology Data Exchange (ETDEWEB)

    Viani, B.E.; Bruton, C.J.

    1992-06-01

    Assessing the suitability of Yucca Mtn., NV as a potential repository for high-level nuclear waste requires the means to simulate ion-exchange behavior of zeolites. Vanselow and Gapon convention cation-exchange models have been added to geochemical modeling codes EQ3NR/EQ6, allowing exchange to be modeled for up to three exchangers or a single exchanger with three independent sites. Solid-solution models that are numerically equivalent to the ion-exchange models were derived and also implemented in the code. The Gapon model is inconsistent with experimental adsorption isotherms of trace components in clinoptilolite. A one-site Vanselow model can describe adsorption of Cs or Sr on clinoptilolite, but a two-site Vanselow exchange model is necessary to describe K contents of natural clinoptilolites.
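
    For reference, a hedged sketch of the Vanselow-convention mass-action expression that such cation-exchange models rest on, for exchange of cations A (charge z_A) and B (charge z_B) on an exchanger X; the exact thermodynamic formulation implemented in EQ3NR/EQ6 may differ in detail.

```latex
% Exchange reaction and Vanselow selectivity coefficient (sketch):
%   z_B A^{z_A+} + z_A BX_{z_B}  <=>  z_B AX_{z_A} + z_A B^{z_B+}
% with x the mole fractions of the exchanger species and a the aqueous activities.
\[
  K_V \;=\; \frac{x_{AX}^{\,z_B}\; a_{B}^{\,z_A}}{x_{BX}^{\,z_A}\; a_{A}^{\,z_B}}
\]
```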

  3. Modeling ion exchange in clinoptilolite using the EQ3/6 geochemical modeling code

    Energy Technology Data Exchange (ETDEWEB)

    Viani, B.E.; Bruton, C.J. [Lawrence Livermore National Lab., CA (United States)

    1992-12-31

    Potential disposal of high-level nuclear waste at Yucca Mtn., Nevada requires the means to simulate ion-exchange behavior of clays and zeolites. Vanselow and Gapon convention cation-exchange models have been added to geochemical modeling codes EQ3NR/EQ6, allowing exchange to be modeled for up to three exchangers or a single exchanger with three independent sites. Solid-solution models that are numerically equivalent to the ion-exchange models were derived and also implemented in the code. The Gapon model is inconsistent with experimental adsorption isotherms of trace components in clinoptilolite. A one-site Vanselow model can describe adsorption of Cs and Sr on clinoptilolite, but a two-site Vanselow exchange model is necessary to describe K contents of natural clinoptilolites. 15 refs., 5 figs., 1 tab.

  4. Level 3 trigger algorithm and hardware platform for the HADES experiment

    Energy Technology Data Exchange (ETDEWEB)

    Kirschner, Daniel Georg

    2007-10-26

    One focus of the HADES experiment is the investigation of the decay of light vector mesons inside a dense medium into lepton pairs. These decays provide a conceptually ideal tool to study the invariant mass of the vector meson in-medium, since the lepton pairs of these meson decays leave the reaction without further strong interaction. Thus, no final-state interaction affects the measurement. Unfortunately, the branching ratios of vector mesons into lepton pairs are very small (≈ 10⁻⁵). This calls for a high-rate, high-acceptance experiment. In addition, a sophisticated real-time trigger system is used in HADES to enrich the interesting events in the recorded data. The focus of this thesis is the development of a next-generation real-time trigger method to improve the enrichment of lepton events in the HADES trigger. In addition, a flexible hardware platform (GE-MN) was developed to implement and test the trigger method. The GE-MN features two Gigabit-Ethernet interfaces for data transport, a VMEbus for slow control and configuration, and a TigerSHARC DSP for data processing. It provides the experience needed to discuss the challenges and benefits of using a system based on commercial standard network technology in an experiment. The developed and tested trigger method correlates the ring information of the HADES RICH with the fired wires (cells) of the HADES MDC detector. This correlation method operates by calculating, for each event, the cells which should have seen the signal of a traversing lepton, and comparing these calculated cells to all the cells that did see a signal. The cells which should have fired are calculated from the polar and azimuthal angle information of the RICH rings by assuming a straight line in space that starts at the target and extends in the direction given by the ring angles. The line extends through the inner MDC chambers, and the traversed cells are those that should have been hit. To compensate different sources for
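
    The ring-to-cell correlation described above can be illustrated with a small sketch: project a RICH ring direction as a straight line from the target onto a simplified wire plane, collect the cells that line would cross, and count how many of them actually fired. The geometry, cell pitch, and matching cut below are hypothetical and are not taken from the HADES trigger code.

```python
# Illustrative sketch (hypothetical geometry, not the actual HADES trigger code).
import math

def expected_cells(theta_deg, phi_deg, plane_z_cm, cell_pitch_cm, n_cells):
    """Cells a straight track from the target (origin) would cross in a
    simplified 1-D wire layer placed at z = plane_z_cm (toy model)."""
    theta, phi = math.radians(theta_deg), math.radians(phi_deg)
    r = plane_z_cm * math.tan(theta)        # radial distance at the plane
    x = r * math.cos(phi)                   # position along the wire layer
    cell = int(x / cell_pitch_cm) + n_cells // 2
    return {c for c in (cell - 1, cell, cell + 1) if 0 <= c < n_cells}

def lepton_candidate(ring, fired_cells, min_matches=2):
    """Accept the ring if enough of the expected cells actually fired."""
    exp = expected_cells(ring["theta"], ring["phi"],
                         plane_z_cm=60.0, cell_pitch_cm=0.5, n_cells=200)
    return len(exp & fired_cells) >= min_matches

ring = {"theta": 35.0, "phi": 10.0}
print(lepton_candidate(ring, fired_cells={181, 182, 190}))   # -> True
```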

  5. Improved virtual channel noise model for transform domain Wyner-Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2009-01-01

    Distributed video coding (DVC) has been proposed as a new video coding paradigm to deal with lossy source coding using side information to exploit the statistics at the decoder to reduce computational demands at the encoder. A virtual channel noise model is utilized at the decoder to estimate...

  6. A Mathematical Model Accounting for the Organisation in Multiplets of the Genetic Code

    OpenAIRE

    Sciarrino, A.

    2001-01-01

    Requiring stability of the genetic code against translation errors, modelled by suitable mathematical operators in the crystal basis model of the genetic code, the main features of the organisation in multiplets of the mitochondrial and of the standard genetic code are explained.

  7. A realistic model under which the genetic code is optimal.

    Science.gov (United States)

    Buhrman, Harry; van der Gulik, Peter T S; Klau, Gunnar W; Schaffner, Christian; Speijer, Dave; Stougie, Leen

    2013-10-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By comparing this value with a distribution of values belonging to codes generated by random permutations of amino acid assignments, the level of error robustness of a genetic code can be quantified. We present a calculation in which the standard genetic code is shown to be optimal. We obtain this result by (1) using recently updated values of polar requirement as input; (2) fixing seven assignments (Ile, Trp, His, Phe, Tyr, Arg, and Leu) based on aptamer considerations; and (3) using known biosynthetic relations of the 20 amino acids. This last point is reflected in an approach of subdivision (restricting the random reallocation of assignments to amino acid subgroups, the set of 20 being divided into four such subgroups). The three approaches to explain robustness of the code (specific selection for robustness, amino acid-RNA interactions leading to assignments, or a slow growth process of assignment patterns) are reexamined in light of our findings. We offer a comprehensive hypothesis, stressing the importance of biosynthetic relations, with the code evolving from an early stage with just glycine and alanine, via intermediate stages, towards 64 codons carrying today's meaning.
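
    To make the statistical procedure concrete, the sketch below computes a mean-square error-robustness value over all single-nucleotide mutations and compares it with values for randomly permuted assignments. The property values and the codon assignment are deliberately fake placeholders, not the real polar-requirement scale or the standard genetic code, and the permutation step is a simplified codon-wise shuffle rather than the constrained block reallocation used in the paper.

```python
# Illustrative sketch of the error-robustness measure described above.
import itertools, random

BASES = "UCAG"
CODONS = ["".join(c) for c in itertools.product(BASES, repeat=3)]

# Placeholder assignment: 20 fake "amino acids" with fake property values.
random.seed(0)
prop_of_aa = {aa: random.uniform(4.0, 13.0) for aa in range(20)}
code = {codon: random.randrange(20) for codon in CODONS}

def ms_error(code_map):
    """Mean squared property change over all single-point mutations."""
    diffs = []
    for codon, aa in code_map.items():
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mut = codon[:pos] + b + codon[pos + 1:]
                diffs.append((prop_of_aa[aa] - prop_of_aa[code_map[mut]]) ** 2)
    return sum(diffs) / len(diffs)

# Statistical approach: compare the observed value with randomly permuted codes.
observed = ms_error(code)
aas = list(code.values())
more_robust = 0
for _ in range(200):
    random.shuffle(aas)
    if ms_error(dict(zip(CODONS, aas))) < observed:
        more_robust += 1
print(f"random codes more robust than the observed one: {more_robust}/200")
```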

  8. Semantic-preload video model based on VOP coding

    Science.gov (United States)

    Yang, Jianping; Zhang, Jie; Chen, Xiangjun

    2013-03-01

    In recent years, in order to reduce the semantic gap that exists between the high-level semantics and the low-level features of video when humans interpret images or video, most work has tried video annotation downstream of the signal, i.e., attaching labels to the content in a video database after the fact. Few people focus on a different idea: use limited interaction and comprehensive segmentation (including optical technologies) at the front end of video-information collection (i.e., the video camera), together with video semantics analysis technology and corresponding concept sets (i.e., an ontology) belonging to a certain domain, as well as the story shooting script and the task description of the scene shooting, and apply semantic descriptions at different levels to enrich the attributes of video objects and image regions. This forms a new video model based on Video Object Plane (VOP) coding. The model has potentially intelligent features, carries a large amount of metadata, and embeds intermediate-level semantic concepts into every object. This paper focuses on the latter approach and presents a framework for the new video model. At present, this new video model is tentatively named the "Semantic-Preload Video Model" (abbreviated SPVM or VMoSP). The model mainly investigates how to add labels to video objects and image regions in real time, where video objects and image regions usually receive intermediate-level semantic labels, and this work is placed upstream of the signal (i.e., in the video capture and production stage). Because of the research needs, the paper also analyses the hierarchical structure of video and divides it into nine semantic levels, which only concern the video production process. In addition, the paper points out that the semantic-level tagging (i.e., semantic preloading) only refers to the four middle levels. All in

  9. On the development of LWR fuel analysis code (1). Analysis of the FEMAXI code and proposal of a new model

    Energy Technology Data Exchange (ETDEWEB)

    Lemehov, Sergei; Suzuki, Motoe [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2000-01-01

    This report summarizes a review of the modeling features of the FEMAXI code and proposes a new theoretical clad-creep model based on irradiation-induced microstructure change. It was pointed out that plutonium build-up in the fuel matrix and the non-uniform radial power profile at high burn-up significantly affect fuel behavior through interconnected effects with phenomena such as clad irradiation-induced creep, fission gas release, fuel thermal-conductivity degradation, rim porous band formation and the associated fuel swelling. Therefore, these combined effects should be properly incorporated into the models of the FEMAXI code so that the code can carry out numerical analysis at the level of accuracy and elaboration of modern experimental data obtained in test reactors. Also, the proposed new mechanistic clad-creep model has a general formalism which allows it to be flexibly applied to clad behavior analysis under normal operation conditions and power transients, as well as to Zr-based clad materials, by the use of established out-of-pile mechanical properties. The model has been tested against experimental data, while further verification is needed with specific emphasis on power ramps and transients. (author)

  10. Djehuty, a Code for Modeling Whole Stars in Three Dimensions

    CERN Document Server

    Turcotte, S; Castor, J I; Cavallo, R M; Cohl, H S; Cook, K; Dearborn, D S P; Dossa, D D; Eastman, R; Eggleton, P P; Eltgroth, P; Keller, S; Murray, S; Taylor, A

    2001-01-01

    The DJEHUTY project is an intensive effort at the Lawrence Livermore National Laboratory (LLNL) to produce a general purpose 3-D stellar structure and evolution code to study dynamic processes in whole stars.

  11. Algorithms and Regolith Erosion Models for the Alert Code Project

    Data.gov (United States)

    National Aeronautics and Space Administration — ORBITEC and Duke University have teamed on this STTR to develop the ALERT (Advanced Lunar Exhaust-Regolith Transport) code which will include new developments in...

  12. MIG version 0.0 model interface guidelines: Rules to accelerate installation of numerical models into any compliant parent code

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, R.M.; Wong, M.K.

    1996-08-01

    A set of model interface guidelines, called MIG, is presented as a means by which any compliant numerical material model can be rapidly installed into any parent code without having to modify the model subroutines. Here, "model" usually means a material model such as one that computes stress as a function of strain, though the term may be extended to any numerical operation. "Parent code" means a hydrocode, finite element code, etc. which uses the model and enforces, say, the fundamental laws of motion and thermodynamics. MIG requires the model developer (who creates the model package) to specify model needs in a standardized but flexible way. MIG includes a dictionary of technical terms that allows developers and parent code architects to share a common vocabulary when specifying field variables. For portability, database management is the responsibility of the parent code. Input/output occurs via structured calling arguments. As much model information as possible (such as the lists of required inputs, as well as lists of precharacterized material data and special needs) is supplied by the model developer in an ASCII text file. Every MIG-compliant model also has three required subroutines to check data, to request extra field variables, and to perform model physics. To date, the MIG scheme has proven flexible in beta installations of a simple yield model, plus a more complicated viscodamage yield model, three electromechanical models, and a complicated anisotropic microcrack constitutive model. The MIG yield model has been successfully installed using identical subroutines in three vectorized parent codes and one parallel C++ code, all predicting comparable results. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, thereby reducing the cost of installing and sharing models in diverse new codes.
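
    To illustrate the three-subroutine pattern described above (check data, request extra field variables, perform model physics) with the parent code owning the database, here is a toy sketch in Python. The class, method names, and the elastic-perfectly-plastic "physics" are hypothetical; they are not the actual MIG specification or its calling sequence.

```python
# Illustrative sketch only: a MIG-style contract between a model package and a
# parent code. Names and signatures are hypothetical.

class ToyYieldModel:
    """A toy 'model package' following the three-subroutine pattern."""

    required_inputs = ["youngs_modulus", "yield_stress"]  # would come from an ASCII spec file
    extra_fields = ["eqps"]                               # extra field variable stored by the parent

    def check_data(self, user_input):
        for key in self.required_inputs:
            if user_input.get(key, 0.0) <= 0.0:
                raise ValueError(f"bad or missing input: {key}")

    def request_extra_variables(self):
        return list(self.extra_fields)

    def run_physics(self, strain, fields, user_input):
        """Return stress for a given strain; elastic-perfectly-plastic toy law."""
        stress = user_input["youngs_modulus"] * strain
        if abs(stress) > user_input["yield_stress"]:
            fields["eqps"] += abs(strain)                  # parent-owned field, updated via arguments
            stress = user_input["yield_stress"] * (1 if stress > 0 else -1)
        return stress

# Parent-code side: the host allocates the extra fields and drives the model.
model = ToyYieldModel()
inputs = {"youngs_modulus": 200e9, "yield_stress": 250e6}
model.check_data(inputs)
fields = {name: 0.0 for name in model.request_extra_variables()}
print(model.run_physics(strain=2e-3, fields=fields, user_input=inputs))
```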

  13. Adaptive Partially Hidden Markov Models with Application to Bilevel Image Coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Rasmussen, Tage

    1999-01-01

    Adaptive Partially Hidden Markov Models (APHMM) are introduced, extending the PHMM models. The new models are applied to lossless coding of bi-level images, achieving results which are better than the JBIG standard.

  14. Biocomputational prediction of non-coding RNAs in model cyanobacteria

    Directory of Open Access Journals (Sweden)

    Ude Susanne

    2009-03-01

    Background: In bacteria, non-coding RNAs (ncRNA) are crucial regulators of gene expression, controlling various stress responses, virulence, and motility. Previous work revealed a relatively high number of ncRNAs in some marine cyanobacteria. However, for efficient genetic and biochemical analysis it would be desirable to identify a set of ncRNA candidate genes in model cyanobacteria that are easy to manipulate and for which extended mutant, transcriptomic and proteomic data sets are available. Results: Here we have used comparative genome analysis for the biocomputational prediction of ncRNA genes and other sequence/structure-conserved elements in intergenic regions of the three unicellular model cyanobacteria Synechocystis PCC6803, Synechococcus elongatus PCC6301 and Thermosynechococcus elongatus BP1 plus the toxic Microcystis aeruginosa NIES843. The unfiltered numbers of predicted elements in these strains are 383, 168, 168, and 809, respectively, combined into 443 sequence clusters, whereas the numbers of individual elements with high support are 94, 56, 64, and 406, respectively. Removing also transposon-associated repeats, finally 78, 53, 42 and 168 sequences, respectively, are left, belonging to 109 different clusters in the data set. Experimental analysis of selected ncRNA candidates in Synechocystis PCC6803 validated new ncRNAs originating from the fabF-hoxH and apcC-prmA intergenic spacers and three highly expressed ncRNAs belonging to the Yfr2 family of ncRNAs. Yfr2a promoter-luxAB fusions confirmed a very strong activity of this promoter and indicated a stimulation of expression if the cultures were exposed to elevated light intensities. Conclusion: Comparison to entries in Rfam and experimental testing of selected ncRNA candidates in Synechocystis PCC6803 indicate a high reliability of the current prediction, despite some contamination by the high number of repetitive sequences in some of these species. In particular, we

  15. Application of the thermal-hydraulic codes in VVER-440 steam generators modelling

    Energy Technology Data Exchange (ETDEWEB)

    Matejovic, P.; Vranca, L.; Vaclav, E. [Nuclear Power Plant Research Inst. VUJE (Slovakia)

    1995-12-31

    The performance of the CATHARE2 V1.3U and RELAP5/MOD3.0 codes applied to VVER-440 steam generator modelling under normal conditions and during a transient with secondary-side water lowering is described. A similar recirculation model was chosen for both codes. In the CATHARE calculation, no special measures were taken to artificially optimize the flow-rate distribution coefficients for the junction between the SG riser and the steam dome. Contrary to the RELAP code, the CATHARE code is able to predict reasonably the secondary swell level in nominal conditions. Both codes are able to model properly the natural phase separation at the SG water level. 6 refs.

  16. Linear-Time Non-Malleable Codes in the Bit-Wise Independent Tampering Model

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Döttling, Nico

    Non-malleable codes were introduced by Dziembowski et al. (ICS 2010) as coding schemes that protect a message against tampering attacks. Roughly speaking, a code is non-malleable if decoding an adversarially tampered encoding of a message m produces the original message m or a value m' (eventuall... ... non-malleable codes of Agrawal et al. (TCC 2015) and of Cheraghchi and Guruswami (TCC 2014) and improves the previous result in the bit-wise tampering model: it builds the first non-malleable codes with linear-time complexity and optimal rate (i.e. rate 1 - o(1)).

  17. General Description of Fission Observables: GEF Model Code

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, K.-H. [CENBG, CNRS/IN2 P3, Chemin du Solarium, B.P. 120, F-33175 Gradignan (France); Jurado, B., E-mail: jurado@cenbg.in2p3.fr [CENBG, CNRS/IN2 P3, Chemin du Solarium, B.P. 120, F-33175 Gradignan (France); Amouroux, C. [CEA, DSM-Saclay (France); Schmitt, C., E-mail: schmitt@ganil.fr [GANIL, Bd. Henri Becquerel, B.P. 55027, F-14076 Caen Cedex 05 (France)

    2016-01-15

    The GEF (“GEneral description of Fission observables”) model code is documented. It describes the observables for spontaneous fission, neutron-induced fission and, more generally, for fission of a compound nucleus from any other entrance channel, with given excitation energy and angular momentum. The GEF model is applicable for a wide range of isotopes from Z = 80 to Z = 112 and beyond, up to excitation energies of about 100 MeV. The results of the GEF model are compared with fission barriers, fission probabilities, fission-fragment mass- and nuclide distributions, isomeric ratios, total kinetic energies, and prompt-neutron and prompt-gamma yields and energy spectra from neutron-induced and spontaneous fission. Derived properties of delayed neutrons and decay heat are also considered. The GEF model is based on a general approach to nuclear fission that explains a great part of the complex appearance of fission observables on the basis of fundamental laws of physics and general properties of microscopic systems and mathematical objects. The topographic theorem is used to estimate the fission-barrier heights from theoretical macroscopic saddle-point and ground-state masses and experimental ground-state masses. Motivated by the theoretically predicted early localisation of nucleonic wave functions in a necked-in shape, the properties of the relevant fragment shells are extracted. These are used to determine the depths and the widths of the fission valleys corresponding to the different fission channels and to describe the fission-fragment distributions and deformations at scission by a statistical approach. A modified composite nuclear-level-density formula is proposed. It respects some features in the superfluid regime that are in accordance with new experimental findings and with theoretical expectations. These are a constant-temperature behaviour that is consistent with a considerably increased heat capacity and an increased pairing condensation energy that is

  18. Modelling and Implementation of Network Coding for Video

    Directory of Open Access Journals (Sweden)

    Can Eyupoglu

    2016-08-01

    In this paper, we investigate Network Coding for Video (NCV), which we apply to video streaming over wireless networks. NCV provides a basis for network coding. We use the NCV algorithm to increase throughput and video quality. When designing the NCV algorithm, we take into account the deadline as well as the decodability of the video packet at the receiver. In network coding, different flows of video packets are packed into a single packet at intermediate nodes and forwarded to other nodes over wireless networks. There are many problems that occur during transmission on the wireless channel. Network coding plays an important role in dealing with these problems. We observe the throughput benefits of network coding thanks to applying broadcast operations on wireless networks. The aim of this study is to implement the NCV algorithm in the C programming language, taking as input the video packets generated by the H.264 video codec. In our experiments, we investigated improvements in terms of video quality and throughput in different scenarios.
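
    As a minimal illustration of the inter-flow coding idea (packing packets of different flows into a single broadcast at an intermediate node), the sketch below shows the classic two-flow XOR case; it is not the NCV implementation, which is written in C, and the packet contents are placeholders.

```python
# Minimal sketch of XOR-based inter-flow network coding: an intermediate node
# XORs packets from two flows into one broadcast, and each receiver recovers
# the packet it is missing using the one it already overheard.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    n = max(len(a), len(b))
    a, b = a.ljust(n, b"\0"), b.ljust(n, b"\0")       # pad to equal length
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"frame-from-flow-A"      # destined to receiver B, overheard by receiver A
p2 = b"frame-from-flow-B"      # destined to receiver A, overheard by receiver B

coded = xor_bytes(p1, p2)       # one broadcast instead of two unicasts

# Receiver A holds p1 (overheard) and wants p2; receiver B is symmetric.
recovered_p2 = xor_bytes(coded, p1)[:len(p2)]
assert recovered_p2 == p2
print(recovered_p2)
```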

  19. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    Directory of Open Access Journals (Sweden)

    Monteagudo Ángel

    2011-02-01

    Background: As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code has been measured by taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid by another. There are two basic approaches to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Results: Here we used a genetic algorithm to search for better adapted hypothetical codes and, as a method, to gauge the difficulty of finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel use of evolutionary computing provides a new perspective in the open debate between the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Conclusions: Simulated evolution clearly reveals that the canonical genetic code is far from optimal in this respect. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the

  20. A Low-Mass Drift Chamber System for the HADES-Spectrometer

    Science.gov (United States)

    Dohrmann, F.; Bethge, K.; Enghardt, W.; Fateev, O.; Garabatos, C.; Grosse, E.; Muentz, C.; Karig, W.; Koenig, W.; Smykov, L.; Sobiella, M.; Steigerwald, A.; Stelzer, H.; Stroth, J.; Wuestenfeld, J.; Zanewsky, Y.; Zentek, A.

    1998-11-01

    A new high-resolution (ΔM/M < 1%) and high-acceptance (45%) di-electron spectrometer (HADES) has been designed to investigate in-medium properties of hadrons. For tracking of all charged particles (in particular with sufficient resolution for electrons), a system of 24 low-mass drift chambers (helium-based counting gas and aluminum field and cathode wires), arranged in four tracking planes, is used. Design aspects of the chambers are reported. Results of performance optimization using various prototype detectors are discussed, including results of an ageing test. Stable operation in the high-multiplicity environment of heavy-ion collisions, and a spatial resolution of 70 μm (σ) over 80% of a cell, have been demonstrated in two beam experiments.

  1. Time of flight measurement in heavy-ion collisions with the HADES RPC TOF wall

    Science.gov (United States)

    Kornakov, G.; Arnold, O.; Atomssa, E. T.; Behnke, C.; Belyaev, A.; Berger-Chen, J. C.; Biernat, J.; Blanco, A.; Blume, C.; Böhmer, M.; Bordalo, P.; Chernenko, S.; Deveaux, C.; Dybczak, A.; Epple, E.; Fabbietti, L.; Fateev, O.; Fonte, P.; Franco, C.; Friese, J.; Fröhlich, I.; Galatyuk, T.; Garzón, J. A.; Gill, K.; Golubeva, M.; González-Díaz, D.; Guber, F.; Gumberidze, M.; Harabasz, S.; Hennino, T.; Höhne, C.; Holzmann, R.; Ierusalimov, A.; Ivashkin, A.; Jurkovic, M.; Kämpfer, B.; Karavicheva, T.; Kardan, K.; Koenig, I.; Koenig, W.; Kolb, B. W.; Korcyl, G.; Kotte, R.; Krása, A.; Krebs, E.; Krizek, F.; Kuc, H.; Kugler, A.; Kunz, T.; Kurepin, A.; Kurilkin, A.; Kurilkin, P.; Ladygin, V.; Lalik, R.; Lang, S.; Lapidus, K.; Lebedev, A.; Lopes, L.; Lorenz, M.; Maier, L.; Mangiarotti, A.; Markert, J.; Metag, V.; Michel, J.; Müntz, C.; Münzer, R.; Naumann, L.; Palka, M.; Pechenov, V.; Pechenova, O.; Petousis, V.; Pietraszko, J.; Przygoda, W.; Ramstein, B.; Rehnisch, L.; Reshetin, A.; Rost, A.; Rustamov, A.; Sadovsky, A.; Salabura, P.; Scheib, T.; Schmidt-Sommerfeld, K.; Schuldes, H.; Sellheim, P.; Siebenson, J.; Silva, L.; Sobolev, Yu. G.; Spataro, S.; Ströbele, H.; Stroth, J.; Strzempek, P.; Sturm, C.; Svoboda, O.; Tarantola, A.; Teilab, K.; Tlusty, P.; Traxler, M.; Tsertos, H.; Vasiliev, T.; Wagner, V.; Wendisch, C.; Wirth, J.; Wüstenfeld, J.; Yurevich, S.; Zanevsky, Y.

    2014-11-01

    This work presents an analysis of the performance of the RPC ToF wall of HADES, located at the GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt. The behavior of the detector is studied in Au+Au collisions at 1.23 AGeV. A main characteristic of the detector is that all the active areas were designed to be electrically shielded in order to operate at high chamber occupancies. Here we show the efficiency and timing performance achieved with this special design at different occupancies, after offline corrections have been applied to the data. The stability of the intrinsic time resolution over the data-taking period is also presented.

  2. Inclusive reconstruction of hadron resonances in elementary and heavy-ion collisions with HADES

    Directory of Open Access Journals (Sweden)

    Kornakov Georgy

    2016-01-01

    The unambiguous identification of hadron modifications in hot and dense QCD matter is one of the important goals in nuclear physics. In the regime of 1 - 2 GeV kinetic energy per nucleon, HADES has measured rare and penetrating probes in elementary and heavy-ion collisions. The main creation mechanism of mesons is the excitation and decay of baryonic resonances throughout the fireball evolution. The reconstruction of short-lived (≈ 1 fm/c) resonance states through their decay products is notoriously difficult. We have developed a new iterative algorithm, which builds the best hypothesis of signal and background by distortion of individual particle properties. This allows signals with signal-to-background ratios of < 1% to be extracted.

  3. Inclusive reconstruction of hadron resonances in elementary and heavy-ion collisions with HADES

    Science.gov (United States)

    Kornakov, Georgy

    2016-11-01

    The unambiguous identification of hadron modifications in hot and dense QCD matter is one of the important goals in nuclear physics. In the regime of 1 - 2 GeV kinetic energy per nucleon, HADES has measured rare and penetrating probes in elementary and heavy-ion collisions. The main creation mechanism of mesons is the excitation and decay of baryonic resonances throughout the fireball evolution. The reconstruction of short-lived (≈ 1 fm/c) resonance states through their decay products is notoriously difficult. We have developed a new iterative algorithm, which builds the best hypothesis of signal and background by distortion of individual particle properties. This allows signals with signal-to-background ratios of < 1% to be extracted.

  4. INTRA/Mod3.2. Manual and Code Description. Volume I - Physical Modelling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Jenny; Edlund, O.; Hermann, J.; Johansson, Lise-Lotte

    1999-01-01

    The INTRA Manual consists of two volumes. Volume I of the manual is a thorough description of the INTRA code, the physical modelling of INTRA and the governing numerical methods; Volume II, the User's Manual, is an input description. This document, the Physical Modelling of INTRA, contains code characteristics, integration methods and applications.

  5. Code generation by model transformation: a case study in transformation modularity

    NARCIS (Netherlands)

    Hemel, Z.; Kats, L.C.L.; Groenewegen, D.M.; Viser, E.

    2009-01-01

    The realization of model-driven software development requires effective techniques for implementing code generators for domain-specific languages. This paper identifies techniques for improving separation of concerns in the implementation of generators. The core technique is code generation by model

  6. 49 CFR 41.120 - Acceptable model codes.

    Science.gov (United States)

    2010-10-01

    ... Natural Hazards, Federal Emergency Management Agency, 500 C Street, SW., Washington, DC 20472.): (i) The..., published by the Building Officials and Code Administrators, 4051 West Flossmoor Rd., Country Club Hills... Disaster Relief and Emergency Assistance Act (Stafford Act), 42 U.S.C. 5170a, 5170b, 5192, and 5193, or...

  7. M3: An Open Model for Measuring Code Artifacts

    NARCIS (Netherlands)

    Izmaylova, A.; Klint, P.; Shahi, A.; Vinju, J.J.

    2013-01-01

    In the context of the EU FP7 project "OSSMETER" we are developing an infrastructure for measuring source code. The goal of OSSMETER is to obtain insight into the quality of open-source projects from all possible perspectives, including product, process and community. This is a "white paper" on M3,

  8. The modeling of core melting and in-vessel corium relocation in the APRIL code

    Energy Technology Data Exchange (ETDEWEB)

    Kim. S.W.; Podowski, M.Z.; Lahey, R.T. [Rensselaer Polytechnic Institute, Troy, NY (United States)] [and others

    1995-09-01

    This paper is concerned with the modeling of severe accident phenomena in boiling water reactors (BWR). New models of core melting and in-vessel corium debris relocation are presented, developed for implementation in the APRIL computer code. The results of model testing and validations are given, including comparisons against available experimental data and parametric/sensitivity studies. Also, the application of these models, as parts of the APRIL code, is presented to simulate accident progression in a typical BWR reactor.

  9. A computer code for calculations in the algebraic collective model of the atomic nucleus

    CERN Document Server

    Welsh, T A

    2016-01-01

    A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) x SO(5) dynamical group. This, in particular, obviates the use of coefficients of fractional parentage. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [pi x q x pi]_0 and [pi x pi]_{LM}, where q_M are the model's quadrupole moments, and pi_N are the corresponding conjugate momenta (-2 <= M, N <= 2). The code also provides ready access to SO(3)-reduced SO(5) Clebsch-Gordan coefficients through data files provided with the code.

  10. Accelerating scientific codes by performance and accuracy modeling

    CERN Document Server

    Fabregat-Traver, Diego; Bientinesi, Paolo

    2016-01-01

    Scientific software is often driven by multiple parameters that affect both accuracy and performance. Since finding the optimal configuration of these parameters is a highly complex task, it is extremely common that the software is used suboptimally. In a typical scenario, accuracy requirements are imposed and attained through suboptimal performance. In this paper, we present a methodology for the automatic selection of parameters for simulation codes, and a corresponding prototype tool. To be amenable to our methodology, the target code must expose the parameters affecting accuracy and performance, and there must be formulas available for the error bounds and the computational complexity of the underlying methods. As a case study, we consider the particle-particle particle-mesh method (PPPM) from the LAMMPS suite for molecular dynamics, and use our tool to identify configurations of the input parameters that achieve a given accuracy in the shortest execution time. When compared with the configurations suggested by exp...
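
    The selection problem described above can be illustrated with a tiny sketch: given analytic formulas for an error bound and a cost model, pick the configuration with the smallest predicted cost that still meets the accuracy requirement. The formulas and parameter ranges below are placeholders, not the PPPM error bounds used by the actual tool.

```python
# Illustrative sketch of accuracy-constrained parameter selection.
import itertools

def error_bound(grid, cutoff):
    return 1.0 / (grid ** 2) + 0.5 ** cutoff       # hypothetical error model

def cost(grid, cutoff):
    return grid ** 3 * cutoff                      # hypothetical work estimate

def best_config(tolerance, grids=(16, 32, 64, 128), cutoffs=range(2, 12)):
    feasible = [(cost(g, c), g, c)
                for g, c in itertools.product(grids, cutoffs)
                if error_bound(g, c) <= tolerance]
    return min(feasible) if feasible else None

print(best_config(tolerance=1e-3))   # -> cheapest (cost, grid, cutoff) meeting the tolerance
```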

  11. A high burnup model developed for the DIONISIO code

    Energy Technology Data Exchange (ETDEWEB)

    Soba, A. [U.A. Combustibles Nucleares, Comisión Nacional de Energía Atómica, Avenida del Libertador 8250, 1429 Buenos Aires (Argentina); Denis, A., E-mail: denis@cnea.gov.ar [U.A. Combustibles Nucleares, Comisión Nacional de Energía Atómica, Avenida del Libertador 8250, 1429 Buenos Aires (Argentina); Romero, L. [U.A. Reactores Nucleares, Comisión Nacional de Energía Atómica, Avenida del Libertador 8250, 1429 Buenos Aires (Argentina); Villarino, E.; Sardella, F. [Departamento Ingeniería Nuclear, INVAP SE, Comandante Luis Piedra Buena 4950, 8430 San Carlos de Bariloche, Río Negro (Argentina)

    2013-02-15

    A group of subroutines, designed to extend the application range of the fuel performance code DIONISIO to high burn up, has recently been included in the code. The new calculation tools, which are tuned for UO2 fuels in LWR conditions, predict the radial distribution of power density, burnup, and concentration of diverse nuclides within the pellet. The balance equations of all the isotopes involved in the fission process are solved in a simplified manner, and the one-group effective cross sections of all of them are obtained as functions of the radial position in the pellet, burnup, and enrichment in 235U. In this work, the subroutines are described and the results of the simulations performed with DIONISIO are presented. The good agreement with the data provided in the FUMEX II/III NEA data bank can be easily recognized.

  12. A high burnup model developed for the DIONISIO code

    Science.gov (United States)

    Soba, A.; Denis, A.; Romero, L.; Villarino, E.; Sardella, F.

    2013-02-01

    A group of subroutines, designed to extend the application range of the fuel performance code DIONISIO to high burn up, has recently been included in the code. The new calculation tools, which are tuned for UO2 fuels in LWR conditions, predict the radial distribution of power density, burnup, and concentration of diverse nuclides within the pellet. The balance equations of all the isotopes involved in the fission process are solved in a simplified manner, and the one-group effective cross sections of all of them are obtained as functions of the radial position in the pellet, burnup, and enrichment in 235U. In this work, the subroutines are described and the results of the simulations performed with DIONISIO are presented. The good agreement with the data provided in the FUMEX II/III NEA data bank can be easily recognized.

  13. Test code for the assessment and improvement of Reynolds stress models

    Science.gov (United States)

    Rubesin, M. W.; Viegas, J. R.; Vandromme, D.; Minh, H. HA

    1987-01-01

    An existing two-dimensional, compressible flow, Navier-Stokes computer code, containing a full Reynolds stress turbulence model, was adapted for use as a test bed for assessing and improving turbulence models based on turbulence simulation experiments. To date, the results of using the code in comparison with simulated channel flow and flow over an oscillating flat plate have shown that the turbulence model used in the code needs improvement for these flows. It is also shown that direct simulations of turbulent flows over a range of Reynolds numbers are needed to guide subsequent improvement of turbulence models.

  14. Evaluation of the analysis models in the ASTRA nuclear design code system

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Nam Jin; Park, Chang Jea; Kim, Do Sam; Lee, Kyeong Taek; Kim, Jong Woon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    2000-11-15

    In the field of nuclear reactor design, the main practice has been the application of improved design code systems. In the process, a great deal of experience and knowledge has been accumulated in processing input data, nuclear fuel reload design, and the production and analysis of design data. However, less effort has been devoted to analyzing the methodologies and to developing or improving those code systems. Recently, KEPO Nuclear Fuel Company (KNFC) developed the ASTRA (Advanced Static and Transient Reactor Analyzer) code system for the purpose of nuclear reactor design and analysis. In this code system, two-group constants are generated by the CASMO-3 code system. The objective of this research is to analyze the analysis models used in the ASTRA/CASMO-3 code system. This evaluation requires an in-depth comprehension of the models, which is as important as the development of the code system itself. Currently, most of the code systems used in domestic nuclear power plants are imported, so it is very difficult to maintain them and to adapt them to changing circumstances. Therefore, the evaluation of the analysis models in the ASTRA nuclear reactor design code system is very important.

  15. Resonance production in p+p, p+A and A+A collisions measured with HADES

    Directory of Open Access Journals (Sweden)

    Reshetin A.

    2012-11-01

    The knowledge of baryonic resonance properties and production cross sections plays an important role for the extraction and understanding of medium modifications of mesons in hot and/or dense nuclear matter. We present and discuss systematics on dielectron and strangeness production obtained with HADES on p+p, p+A and A+A collisions in the few GeV energy regime with respect to these resonances.

  16. Resonance production in p+p, p+A and A+A collisions measured with HADES

    Science.gov (United States)

    Lorenz, M.; Agakishiev, G.; Behnke, C.; Belver, D.; Belyaev, A.; Berger-Chen, J. C.; Blanco, A.; Blume, C.; Böhmer, M.; Cabanelas, P.; Dritsa, C.; Dybczak, A.; Epple, E.; Fabbietti, L.; Fonte, P.; Friese, J.; Fröhlich, I.; Galatyuk, T.; Garzón, J. A.; Gill, K.; Golubeva, M.; Gonzáalez-Díaz, D.; Guber, F.; Gumberidze, M.; Harabasz, S.; Hennino, T.; Holzmann, R.; Huck, P.; Höhne, C.; Ierusalimov, A.; Ivashkin, A.; Jurkovic, M.; Kämpfer, B.; Karavicheva, T.; Koenig, I.; Koenig, W.; Kolb, B. W.; Korcyl, G.; Kornakov, G.; Kotte, R.; Krása, A.; Krebs, E.; Krizek, F.; Kuc, H.; Kugler, A.; Kurepin, A.; Kurilkin, A.; Kurilkin, P.; Ladygin, V.; Lalik, R.; Lang, S.; Lapidus, K.; Lebedev, A.; Lopes, L.; Maier, L.; Mangiarotti, A.; Markert, J.; Metag, V.; Michel, J.; Müntz, C.; Münzer, R.; Naumann, L.; Palka, M.; Parpottas, Y.; Pechenov, V.; Pechenova, O.; Pietraszko, J.; Przygoda, W.; Ramstein, B.; Rehnisch, L.; Reshetin, A.; Rustamov, A.; Sadovsky, A.; Salabura, P.; Scheib, T.; Schuldes, H.; Siebenson, J.; Sobolev, Yu. G.; Spataroe, S.; Ströbele, H.; Stroth, J.; Strzempek, P.; Sturm, C.; Svoboda, O.; Tarantola, A.; Teilab, K.; Tlusty, P.; Traxler, M.; Tsertos, H.; Vasiliev, T.; Wagner, V.; Weber, M.; Wendisch, C.; Wüstenfeld, J.; Yurevich, S.; Zanevsky, Y.

    2012-11-01

    The knowledge of baryonic resonance properties and production cross sections plays an important role for the extraction and understanding of medium modifications of mesons in hot and/or dense nuclear matter. We present and discuss systematics on dielectron and strangeness production obtained with HADES on p+p, p+A and A+A collisions in the few GeV energy regime with respect to these resonances.

  17. Modeling Proton- and Light Ion-Induced Reactions at Low Energies in the MARS15 Code

    Energy Technology Data Exchange (ETDEWEB)

    Rakhno, I. L. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Mokhov, N. V. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Gudima, K. K. [National Academy of Sciences, Chisinau (Moldova)

    2015-04-25

    An implementation of both the ALICE code and the TENDL evaluated nuclear data library for describing nuclear reactions induced by low-energy projectiles in the Monte Carlo code MARS15 is presented. Comparisons between modeling results and experimental data on reaction cross sections and secondary particle distributions are shown.

  18. The Nuremberg Code subverts human health and safety by requiring animal modeling

    OpenAIRE

    Greek Ray; Pippus Annalea; Hansen Lawrence A

    2012-01-01

    Background: The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion: We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive...

  19. Implementation of the critical points model in a SFM-FDTD code working in oblique incidence

    Energy Technology Data Exchange (ETDEWEB)

    Hamidi, M; Belkhir, A; Lamrous, O [Laboratoire de Physique et Chimie Quantique, Universite Mouloud Mammeri, Tizi-Ouzou (Algeria); Baida, F I, E-mail: omarlamrous@mail.ummto.dz [Departement d' Optique P.M. Duffieux, Institut FEMTO-ST UMR 6174 CNRS Universite de Franche-Comte, 25030 Besancon Cedex (France)

    2011-06-22

    We describe the implementation of the critical points model in a finite-difference time-domain code working in oblique incidence and dealing with dispersive media through the split field method. Some tests are presented to validate our code, in addition to an application devoted to the plasmon resonance of a gold nanoparticle grating.
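
    As a point of reference for what a "critical points" description of a dispersive metal looks like, the snippet below evaluates a frequency-domain Drude plus critical-points permittivity of the kind such split-field FDTD implementations are typically built around. This is a sketch under assumptions: the parametrization is the commonly used one from the literature, the parameter values are placeholders, and the actual time-domain update equations of the paper's code are not reproduced here.

```python
# Sketch of a Drude + critical-points (CP) permittivity; not the authors' code.
import numpy as np

def drude_cp_permittivity(omega, eps_inf, omega_d, gamma_d, cp_terms):
    """cp_terms: list of (A, phi, Omega, Gamma) critical-point parameters."""
    eps = eps_inf - omega_d**2 / (omega**2 + 1j * gamma_d * omega)
    for A, phi, Omega, Gamma in cp_terms:
        eps += A * Omega * (np.exp(1j * phi) / (Omega - omega - 1j * Gamma)
                            + np.exp(-1j * phi) / (Omega + omega + 1j * Gamma))
    return eps

# Placeholder parameters; angular frequencies in rad/s.
omega = np.linspace(2e15, 5e15, 4)
print(drude_cp_permittivity(omega, eps_inf=1.1, omega_d=1.3e16, gamma_d=1.1e14,
                            cp_terms=[(0.3, -0.8, 4.0e15, 7.0e14)]))
```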

  20. Addressing Hate Speech and Hate Behaviors in Codes of Conduct: A Model for Public Institutions.

    Science.gov (United States)

    Neiger, Jan Alan; Palmer, Carolyn; Penney, Sophie; Gehring, Donald D.

    1998-01-01

    As part of a larger study, researchers collected campus codes prohibiting hate crimes, which were then reviewed to determine whether the codes presented constitutional problems. Based on this review, the authors develop and present a model policy that is content neutral and does not use language that could be viewed as unconstitutionally vague or…

  1. PEBBLES: A COMPUTER CODE FOR MODELING PACKING, FLOW AND RECIRCULATION OF PEBBLES IN A PEBBLE BED REACTOR

    Energy Technology Data Exchange (ETDEWEB)

    Joshua J. Cogliati; Abderrafi M. Ougouag

    2006-10-01

    A comprehensive, high fidelity model for pebble flow has been developed and embodied in the PEBBLES computer code. In this paper, a description of the physical artifacts included in the model is presented and some results from using the computer code for predicting the features of pebble flow and packing in a realistic pebble bed reactor design are shown. The sensitivity of models to various physical parameters is also discussed.

  2. Subgroup A: nuclear model codes report to the Sixteenth Meeting of the WPEC

    Energy Technology Data Exchange (ETDEWEB)

    Talou, P. (Patrick); Chadwick, M. B. (Mark B.); Dietrich, F. S.; Herman, M.; Kawano, T. (Toshihiko); Konig, A.; Obložinský, P.

    2004-01-01

    The Subgroup A activities focus on the development of nuclear reaction models and codes, used in evaluation work for nuclear reactions from the unresolved energy region up to the pion production threshold, and for target nuclides from the low teens and heavier. Much of the effort is devoted by each participant to the continuing development of their own institution's codes. Progress in this arena is reported in detail for each code in the present document. EMPIRE-II is of public access. The release of the TALYS code has been announced for the ND2004 Conference in Santa Fe, NM, October 2004. McGNASH is still under development and is not expected to be released in the very near future. In addition, Subgroup A members have demonstrated a growing interest in working on common modeling and code capabilities, which would significantly reduce the amount of duplicated work, help manage efficiently the growing lines of existing codes, and render code inter-comparison much easier. A recent and important activity of Subgroup A has therefore been to develop the framework and the first bricks of the ModLib library, which consists of mostly independent pieces of code written in Fortran 90 (and above) to be used in existing and future nuclear reaction codes. Significant progress in the development of ModLib has been made during the past year. Several physics modules have been added to the library, and a few more have been planned in detail for the coming year.

  3. Mathematical model and computer code for the analysis of advanced fast reactor dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Schukin, N.V. (Moscow Engineering Physics Inst. (Russian Federation)); Korsun, A.S. (Moscow Engineering Physics Inst. (Russian Federation)); Vitruk, S.G. (Moscow Engineering Physics Inst. (Russian Federation)); Zimin, V.G. (Moscow Engineering Physics Inst. (Russian Federation)); Romanin, S.D. (Moscow Engineering Physics Inst. (Russian Federation))

    1993-04-01

    Efficient algorithms for the mathematical modeling of 3-D neutron kinetics and thermal hydraulics are described. The model and the corresponding computer code make it possible to analyze a variety of transient events, ranging from normal operational states to catastrophic accident excursions. To verify the code, a number of calculations of different kinds of transients were carried out. The results of the calculations show that the model and the computer code could be used for the conceptual design of advanced liquid metal reactors. A detailed description of the calculations of the TOP WS accident is presented. (orig./DG)

  4. Development of thermal hydraulic models for the reliable regulatory auditing code

    Energy Technology Data Exchange (ETDEWEB)

    Chung, B. D.; Song, C. H.; Lee, Y. J.; Kwon, T. S.; Lee, S. W. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2004-02-15

    The objective of this project is to develop thermal hydraulic models for use in improving the reliability of the regulatory auditing codes. The current year falls under the second step of the 3-year project, and the main research was focused on the development of the downcomer boiling model. During the current year, the bubble stream model of the downcomer has been developed and installed in the auditing code. A model sensitivity analysis has been performed for the APR1400 LBLOCA scenario using the modified code. A preliminary calculation has been performed for the experimental test facility using the FLUENT and MARS codes. The facility for the air bubble experiment has been installed. The thermal hydraulic phenomena for the VHTR and the supercritical reactor have been identified for future application and model development.

  5. A generic method for automatic translation between input models for different versions of simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)

    2014-05-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, often are very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. Therefore a generic algorithm for producing such automatic translation codes may ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.
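
    To make the idea of a mapping-driven translator with a verification log concrete, here is a minimal hypothetical sketch. The field names, converters, and log format are invented for illustration and do not correspond to the actual VSOP-A or VSOP 99/05 input formats.

```python
# Hypothetical illustration: a declarative mapping table drives the conversion
# of one input-model dictionary into a target format, while every transferred
# variable is appended to a verification log.
import json

MAPPING = {  # old_name -> (new_name, converter, meaning)
    "FUELTMP": ("fuel_temperature_K", float, "average fuel temperature"),
    "NBATCH":  ("number_of_batches",  int,   "number of reload batches"),
}

def translate(old_model: dict, log_path: str) -> dict:
    new_model, log_lines = {}, []
    for old_key, (new_key, conv, meaning) in MAPPING.items():
        if old_key in old_model:
            value = conv(old_model[old_key])
            new_model[new_key] = value
            log_lines.append(f"{old_key} -> {new_key} = {value!r}  # {meaning}")
    with open(log_path, "w") as log:
        log.write("\n".join(log_lines) + "\n")
    return new_model

print(json.dumps(translate({"FUELTMP": "873.0", "NBATCH": "4"}, "translate.log")))
```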

  6. Donghe Sandstone Subtle Reservoir Exploration and Development Technology in Hade 4 Oilfield

    Institute of Scientific and Technical Information of China (English)

    Sun Longde; Zhou Xinyuan; Song Wenjie; Jiang Tongwen; Zhu Weihong; Yang Ping; Niu Yujie; Di Hongli

    2004-01-01

    Hade 4 oilfield is located on the Hadexun tectonic belt north of the Manjiaer depression in the Tarim basin. Its main target layer is the Donghe sandstone reservoir, with a burial depth over 5,000 m and an amplitude below 34 m, at the bottom of the Carboniferous. The Donghe sandstone reservoir consists of littoral-facies quartz sandstones of the transgressive system tract, overlapping northward and pinching out. Exploration and development confirm that the water-oil contact tilts from the southeast to the northwest with a drop height of nearly 80 m. The reservoir, controlled by both the stratigraphic overlap pinch-out and tectonism, is a typical subtle reservoir. The Donghe sandstone reservoir in Hade 4 oilfield also features a large oil-bearing area (over 130 km2 proved), a small thickness (average effective thickness below 6 m) and a low abundance (below 50x10^4 t/km2). Moreover, above the target layer a set of igneous rocks of uneven thickness developed in the Permian formation, causing great difficulty in the study of the velocity field. Considering these features, a combined exploration and development mode is adopted, namely whole deployment, step-by-step enforcement and rolling development with key problems to be tackled, in order to further deepen the understanding and enlarge the fruits of exploration and development. The paper focuses its study on the following four aspects. First, to strengthen the collection, processing and interpretation of seismic data, improve the resolution, and accurately recognize the pinch-out line of the Donghe sandstone reservoir by combining the drilling materials in order to determine its distribution; second, to strengthen the research on the velocity field, improve the accuracy of variable-speed mapping, and make corrections with the data from newly drilled key wells, so that the precision of the tectonic description is greatly improved; third

  7. An Advanced simulation Code for Modeling Inductive Output Tubes

    Energy Technology Data Exchange (ETDEWEB)

    Thuc Bui; R. Lawrence Ives

    2012-04-27

    During the Phase I program, CCR completed several major building blocks for a 3D large-signal inductive output tube (IOT) code using modern computer languages and programming techniques. These included a 3D time-harmonic Helmholtz field solver with a fully functional graphical user interface (GUI), automeshing and adaptivity. Other building blocks included an improved electrostatic Poisson solver with temporal boundary conditions to provide temporal fields for the time-stepping particle pusher, as well as the self electric field caused by time-varying space charge. The magnetostatic field solver was also updated to solve for the self magnetic field caused by the time-changing current density in the output cavity gap. The goal function to optimize an IOT cavity was also formulated, and optimization methodologies were investigated.

  8. Modeling of Ionization Physics with the PIC Code OSIRIS

    Energy Technology Data Exchange (ETDEWEB)

    Deng, S.; Tsung, F.; Lee, S.; Lu, W.; Mori, W.B.; Katsouleas, T.; Muggli, P.; Blue, B.E.; Clayton, C.E.; O' Connell, C.; Dodd, E.; Decker, F.J.; Huang, C.; Hogan, M.J.; Hemker, R.; Iverson, R.H.; Joshi, C.; Ren, C.; Raimondi, P.; Wang, S.; Walz, D.; /Southern California U. /UCLA /SLAC

    2005-09-27

    When considering intense particle or laser beams propagating in dense plasma or gas, ionization plays an important role. Impact ionization and tunnel ionization may create new plasma electrons, altering the physics of wakefield accelerators, causing blue shifts in laser spectra, creating and modifying instabilities, etc. Here we describe the addition of an impact ionization package into the 3-D, object-oriented, fully parallel PIC code OSIRIS. We apply the simulation tool to simulate the parameters of the upcoming E164 Plasma Wakefield Accelerator experiment at the Stanford Linear Accelerator Center (SLAC). We find that impact ionization is dominated by the plasma electrons moving in the wake rather than the 30 GeV drive beam electrons. Impact ionization leads to a significant number of trapped electrons accelerated from rest in the wake.

  9. A p-Adic Model of DNA Sequence and Genetic Code

    CERN Document Server

    Dragovich, Branko

    2007-01-01

    Using basic properties of p-adic numbers, we consider a simple new approach to describe the main aspects of DNA sequences and the genetic code. A central role in our investigation is played by an ultrametric p-adic information space whose basic elements are nucleotides, codons and genes. We show that a 5-adic model is appropriate for DNA sequences. This 5-adic model, combined with the 2-adic distance, is also suitable for the genetic code and for more advanced employment in genomics. We find that the degeneracy of the genetic code is related to the p-adic distance between codons.
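
    The ultrametric idea can be made concrete in a few lines of code. The sketch below assumes the digit assignment C=1, A=2, U=3, G=4 and treats the first nucleotide as the least significant 5-adic digit; these conventions are taken from the p-adic genetic-code literature rather than spelled out in the abstract. With them, codons that share their first two letters are 5-adically close, mirroring the degeneracy pattern.

```python
# Toy illustration of the 5-adic distance between codons (not the author's code).
DIGIT = {"C": 1, "A": 2, "U": 3, "G": 4}  # assumed nucleotide-to-digit map

def codon_to_int(codon: str) -> int:
    d1, d2, d3 = (DIGIT[n] for n in codon)
    return d1 + 5 * d2 + 25 * d3  # first letter = least significant digit

def dist_5adic(c1: str, c2: str) -> float:
    diff = codon_to_int(c1) - codon_to_int(c2)
    if diff == 0:
        return 0.0
    norm = 1.0
    while diff % 5 == 0:  # 5-adic norm: 5**(-valuation)
        diff //= 5
        norm /= 5
    return norm

# Codons sharing their first two nucleotides are close (<= 1/25); codons that
# differ in the first nucleotide are at distance 1.
print(dist_5adic("GCU", "GCC"), dist_5adic("GCU", "ACU"))
```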

  10. FREYA-a new Monte Carlo code for improved modeling of fission chains

    Energy Technology Data Exchange (ETDEWEB)

    Hagmann, C A; Randrup, J; Vogt, R L

    2012-06-12

    A new simulation capability for the modeling of individual fission events and chains and the transport of fission products in materials is presented. FREYA (Fission Reaction Event Yield Algorithm) is a Monte Carlo code for generating fission events, providing correlated kinematic information for prompt neutrons, gammas, and fragments. As a standalone code, FREYA calculates quantities such as multiplicity-energy, angular, and gamma-neutron energy sharing correlations. To study materials with multiplication, shielding effects, and detectors, we have integrated FREYA into the general purpose Monte Carlo code MCNP. This new tool will allow more accurate modeling of detector responses, including correlations, and the development of SNM detectors with increased sensitivity.

  11. Trace-Based Code Generation for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.-G.

    2009-01-01

    Paper Submitted for review at the Eighth International Conference on Generative Programming and Component Engineering. Model-based testing can be a powerful means to generate test cases for the system under test. However, creating a useful model for model-based testing requires expertise in the (fo

  12. Trace-Based Code Generation for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.-G.

    2009-01-01

    Paper Submitted for review at the Eighth International Conference on Generative Programming and Component Engineering. Model-based testing can be a powerful means to generate test cases for the system under test. However, creating a useful model for model-based testing requires expertise in the

  13. FRED fuel behaviour code: Main models and analysis of Halden IFA-503.2 tests

    Energy Technology Data Exchange (ETDEWEB)

    Mikityuk, K., E-mail: konstantin.mikityuk@psi.ch [Paul Scherrer Institute, 5232 Villigen PSI (Switzerland); Shestopalov, A., E-mail: shest@dhtp.kiae.ru [RRC 'Kurchatov Institute', Kurchatov sq, 123182 Moscow (Russian Federation)

    2011-07-15

    Highlights: We developed a new fuel rod behaviour code named FRED. The main models and assumptions are described. The code was checked using the IFA-503.2 tests performed at the Halden reactor. - Abstract: The FRED fuel rod code is being developed for the thermal and mechanical simulation of fast breeder reactor (FBR) and light-water reactor (LWR) fuel behaviour under base-irradiation and accident conditions. The current version of the code calculates the temperature distribution in fuel rods, the stress-strain condition of the cladding, fuel deformation, fuel-cladding gap conductance, and fuel rod inner pressure. The code was previously evaluated in the framework of two OECD mixed plutonium-uranium oxide (MOX) fuel performance benchmarks and then integrated into PSI's FAST code system to provide the fuel rod temperatures necessary for the neutron kinetics and thermal-hydraulic modules in transient calculations. This paper briefly overviews the basic models and the material property database of the FRED code used to assess fuel behaviour under steady-state conditions. In addition, the code was used to simulate the IFA-503.2 tests, performed at the Halden reactor for two PWR and twelve VVER fuel samples under base-irradiation conditions. This paper presents the results of this simulation for two cases using a code-to-data comparison of fuel centreline temperatures, internal gas pressures, and fuel elongations. This comparison has demonstrated that the code adequately describes the important physical mechanisms of uranium oxide (UOX) fuel rod thermal performance under steady-state conditions. Future activity should be concentrated on improving the models and extending the validation range, especially to MOX fuel steady-state and transient behaviour.

  14. Numerical modelling of spallation in 2D hydrodynamics codes

    Science.gov (United States)

    Maw, J. R.; Giles, A. R.

    1996-05-01

    A model for spallation based on the void growth model of Johnson has been implemented in 2D Lagrangian and Eulerian hydrocodes. The model has been extended to treat complete separation of material when voids coalesce and to describe the effects of elevated temperatures and melting. The capabilities of the model are illustrated by comparison with data from explosively generated spall experiments. Particular emphasis is placed on the prediction of multiple spall effects in weak, low melting point, materials such as lead. The correlation between the model predictions and observations on the strain rate dependence of spall strength is discussed.

  15. RELAP5/MOD3 code manual. Volume 4, Models and correlations

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-08-01

    The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I presents modeling theory and associated numerical schemes; Volume II details instructions for code application and input data preparation; Volume III presents the results of developmental assessment cases that demonstrate and verify the models used in the code; Volume IV discusses in detail RELAP5 models and correlations; Volume V presents guidelines that have evolved over the past several years through the use of the RELAP5 code; Volume VI discusses the numerical scheme used in RELAP5; and Volume VII presents a collection of independent assessment calculations.

  16. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  17. Relativistic modeling capabilities in PERSEUS extended MHD simulation code for HED plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Hamlin, Nathaniel D., E-mail: nh322@cornell.edu [438 Rhodes Hall, Cornell University, Ithaca, NY, 14853 (United States); Seyler, Charles E., E-mail: ces7@cornell.edu [Cornell University, Ithaca, NY, 14853 (United States)

    2014-12-15

    We discuss the incorporation of relativistic modeling capabilities into the PERSEUS extended MHD simulation code for high-energy-density (HED) plasmas, and present the latest hybrid X-pinch simulation results. The use of fully relativistic equations enables the model to remain self-consistent in simulations of such relativistic phenomena as X-pinches and laser-plasma interactions. By suitable formulation of the relativistic generalized Ohm’s law as an evolution equation, we have reduced the recovery of primitive variables, a major technical challenge in relativistic codes, to a straightforward algebraic computation. Our code recovers expected results in the non-relativistic limit, and reveals new physics in the modeling of electron beam acceleration following an X-pinch. Through the use of a relaxation scheme, relativistic PERSEUS is able to handle nine orders of magnitude in density variation, making it the first fluid code, to our knowledge, that can simulate relativistic HED plasmas.

  18. An analytical model for source code distributability verification

    Institute of Scientific and Technical Information of China (English)

    Ayaz ISAZADEH; Jaber KARIMPOUR; Islam ELGEDAWY; Habib IZADKHAH

    2014-01-01

    One way to speed up the execution of sequential programs is to divide them into concurrent segments and execute such segments in a parallel manner over a distributed computing environment. We argue that the execution speedup primarily depends on the concurrency degree between the identified segments as well as the communication overhead between the segments. To guarantee the best speedup, we have to obtain the maximum possible concurrency degree between the identified segments, taking communication overhead into consideration. Existing code distributor and multi-threading approaches do not fulfill such requirements; hence, they cannot provide expected distributability gains in advance. To overcome such limitations, we propose a novel approach for verifying the distributability of sequential object-oriented programs. The proposed approach enables users to see the maximum speedup gains before the actual distributed implementation, as it computes an objective function which is used to measure different distribution values from the same program, taking into consideration both remote and sequential calls. Experimental results showed that the proposed approach successfully determines the distributability of different real-life software applications compared with their real-life sequential and distributed implementations.
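
    The abstract does not give the objective function itself, so the following is only a hypothetical illustration of the trade-off it describes: a candidate segmentation is scored by the speedup expected when the gain from running segments concurrently is weighed against the communication overhead of the remote calls between them.

```python
# Illustrative distributability score (not the paper's objective function).
def estimated_speedup(segment_times, comm_overheads):
    """segment_times: per-segment compute time if run concurrently;
    comm_overheads: time spent on remote calls between segments."""
    sequential = sum(segment_times)
    distributed = max(segment_times) + sum(comm_overheads)
    return sequential / distributed

# Two candidate segmentations of the same program:
print(estimated_speedup([4.0, 4.0, 4.0], [0.5, 0.5]))   # balanced, cheap comms
print(estimated_speedup([10.0, 1.0, 1.0], [2.0, 2.0]))  # imbalanced, costly comms
```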

  19. Modelling the CANMET and Marchwood furnaces using the PCOC code

    Energy Technology Data Exchange (ETDEWEB)

    Stopford, P.J.; Marriott, N. (AEA Decommissioning and Radioactive Waste, Harwell (UK). Theoretical Studies Dept.)

    1990-01-01

    Pulverised coal combustion models are validated by detailed comparison with in-flame measurements of velocity, temperatures and species concentrations on two axisymmetric tunnel furnaces. Nitric oxide formation by the thermal and fuel nitrogen mechanisms is also calculated and compared with experiment. The sensitivity of the predictions to the various aspects of the model and the potential for modelling full-scale, power-generating furnaces are discussed. 18 refs., 13 figs.

  20. Review of release models used in source-term codes

    Energy Technology Data Exchange (ETDEWEB)

    Song, Jongsoon [Department of Nuclear Engineering, Chosun University, Kwangju (Korea, Republic of)

    1999-07-01

    Throughout this review, the limitations of current release models are identified and ways of improving them are suggested. By incorporating recent experimental results, recommendations for future release modeling activities can be made. All release models under review were compared with respect to the following six items: scenario, assumptions, mathematical formulations, solution method, radioactive decay chain considered, and geometry. The following nine models are considered for review: SOTEC and SCCEX (CNWRA), DOE/INTERA, TSPA (SNL), Vault Model (AECL), CCALIBRE (SKI), AREST (PNL), Risk Assessment (EPRI), and TOSPAC (SNL). (author)

  1. Comparison of a Coupled Near and Far Wake Model With a Free Wake Vortex Code

    DEFF Research Database (Denmark)

    Pirrung, Georg; Riziotis, Vasilis; Aagaard Madsen, Helge

    2016-01-01

    This paper presents the integration of a near wake model for trailing vorticity, which is based on a prescribed wake lifting line model proposed by Beddoes, with a BEM-based far wake model and a 2D shed vorticity model. The resulting coupled aerodynamics model is validated against lifting surface computations performed using a free wake panel code. The focus of the description of the aerodynamics model is on the numerical stability, the computation speed and the accuracy of unsteady simulations. To stabilize the near wake model, it has to be iterated to convergence, using a relaxation factor that has ... induction modeling at slow time scales. Finally, the unsteady airfoil aerodynamics model is extended to provide the unsteady bound circulation for the near wake model and to improve the modeling of the unsteady behavior of cambered airfoils. The model comparison with results from a free wake panel code...

  2. A computer code for calculations in the algebraic collective model of the atomic nucleus

    Science.gov (United States)

    Welsh, T. A.; Rowe, D. J.

    2016-03-01

    A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) x SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed. This range includes essentially all Hamiltonians that are rational functions of the model's quadrupole moments q_M and are at most quadratic in the corresponding conjugate momenta pi_N (-2 <= M,N <= 2). The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [pi x q x pi]_0 and [pi x pi]_{LM}. The code is made efficient by use of an analytical expression for the needed SO(5)-reduced matrix elements, and use of SO(5) > SO(3) Clebsch-Gordan coefficients obtained from precomputed data files provided with the code.

  3. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    Energy Technology Data Exchange (ETDEWEB)

    Blyth, Taylor S. [Pennsylvania State Univ., University Park, PA (United States); Avramova, Maria [North Carolina State Univ., Raleigh, NC (United States)

    2017-04-01

    The research described in this PhD thesis contributes to the development of efficient methods for the utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects, which are introduced by the presence of spacer grids in light water reactor (LWR) cores, are dissected into four basic building processes, and the corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.

  4. A flexible FPGA based QDC and TDC for the HADES and the CBM calorimeters

    Science.gov (United States)

    Rost, A.; Galatyuk, T.; Koenig, W.; Michel, J.; Pietraszko, J.; Skott, P.; Traxler, M.

    2017-02-01

    A Charge-to-Digital-Converter (QDC) and Time-to-Digital-Converter (TDC) based on a commercial FPGA (Field Programmable Gate Array) was developed to read out PMT signals of the planned HADES electromagnetic calorimeter (ECAL) at GSI Helmholtzzentrum für Schwerionenforschung GmbH (Darmstadt, Germany). The main idea is to convert the charge measurement of a detector signal into a time measurement, where the charge is encoded in the width of a digital pulse, while the arrival time information is encoded in the leading-edge time of the pulse. The PaDiWa-AMPS prototype front-end board for the TRB3 (General Purpose Trigger and Readout Board, version 3), which implements this conversion method, was developed and qualified. The already well established TRB3 platform provides the needed precise time measurements and serves as a data acquisition system. We present the read-out concept and the performance of the prototype boards in the laboratory and under beam conditions. First steps have been completed to adapt this concept to the SiPM signals of the hadron calorimeter in the CBM experiment at the planned FAIR facility (Darmstadt).
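
    A minimal sketch of the decoding step implied by this scheme is shown below: the TDC delivers the leading- and trailing-edge times of the discriminated pulse, the leading edge is taken as the arrival time, and the pulse width is converted back to a charge through a calibration. The linear calibration constants here are invented placeholders, not values from the PaDiWa-AMPS board.

```python
# Sketch of charge-over-width decoding for one hit (illustrative only).
def decode_hit(t_lead_ns: float, t_trail_ns: float,
               gain_pC_per_ns: float = 0.05, offset_pC: float = 0.0):
    width = t_trail_ns - t_lead_ns               # charge is encoded in the width
    charge = gain_pC_per_ns * width + offset_pC  # assumed linear calibration
    return t_lead_ns, charge                     # (arrival time, reconstructed charge)

print(decode_hit(t_lead_ns=125.3, t_trail_ns=172.8))
```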

  5. Front-End electronics development for the new Resistive Plate Chamber detector of HADES

    Science.gov (United States)

    Gil, A.; Belver, D.; Cabanelas, P.; Díaz, J.; Garzón, J. A.; González-Díaz, D.; Koenig, W.; Lange, J. S.; Marín, J.; Montes, N.; Skott, P.; Traxler, M.

    2007-11-01

    In this paper we present the new RPC wall, which is being installed in the HADES detector at GSI Darmstadt. It consists of time-of-flight (TOF) detectors used for both particle identification and triggering. Resistive Plate Chamber (RPC) detectors are becoming widely used because of their excellent TOF capabilities and reduced cost. The wall will contain 1024 RPC modules, covering an active area of around 7 m2 and replacing the old TOFino detector in the low polar angle region. The excellent TOF and good charge resolutions of the new detector will improve the time resolution to values better than 100 ps. The front-end electronics for the readout of the RPC signals is implemented with two types of boards to satisfy the space constraints: the Daughterboards are small boards that amplify the low-level signals from the detector and provide fast discriminators for time-of-flight measurements, as well as an integrator for charge measurements. The Motherboard provides stable DC voltages and a stable ground, threshold DACs for the discriminators, a multiplicity trigger, and impedance-matched paths for the transfer of time window signals that contain information about time and charge. These signals are sent to a custom TDC board that labels each event and sends the data through Ethernet to be conveniently stored.

  6. A tracking system for a secondary pion beam at the HADES spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Wirth, Joana; Fabbietti, Laura; Lalik, Rafal [Physik Department of the TUM (E12), Garching (Germany); Excellence Cluster Universe, TUM, Garching (Germany); Maier, Ludwig [Physik Department of the TUM (E12), Garching (Germany); Collaboration: HADES-Collaboration

    2015-07-01

    For the secondary pion beam campaign with the HADES spectrometer at GSI, Darmstadt, a beam tracking system has been developed in order to achieve a momentum measurement of each individual pion with a momentum resolution below 0.5%. A primary nitrogen beam impinging on a beryllium production target produces a secondary pion beam strongly defocused in position and momentum, which is transported along the chicane to the experimental area. The overall spread in momentum is only limited by the beamline acceptance, leading to momentum offsets up to 8% of the central beam momentum. The system is based on two tracking stations, each consisting of a double-sided silicon strip detector read out by the self-triggered n-XYTER ASIC chip, completed by the TRB3 board on which the trigger logic is implemented. In this talk we show the performance of our beam detectors during the 1.9 GeV proton test beam, in terms of the reconstruction of the known momentum set by the accelerator, as well as recent results obtained throughout the pion beam campaign.

  7. Dense Coding in a Two-Spin Squeezing Model with Intrinsic Decoherence

    Science.gov (United States)

    Zhang, Bing-Bing; Yang, Guo-Hui

    2016-11-01

    Quantum dense coding in a two-spin squeezing model under intrinsic decoherence is investigated for different initial states (the Werner state and the Bell state). It is shown that the dense coding capacity χ oscillates with time and finally reaches different stable values. χ can be enhanced by decreasing the magnetic field Ω and the intrinsic decoherence γ or by increasing the squeezing interaction μ; moreover, one can obtain a valid dense coding capacity (χ > 1) by modulating these parameters. The stable value of χ reveals that the decoherence cannot entirely destroy the dense coding capacity. In addition, decreasing Ω or increasing μ can not only enhance the stable value of χ but also weaken the effects of decoherence. When the initial state is the Werner state, the purity r of the initial state plays a key role in adjusting the value of the dense coding capacity, and χ can be significantly increased by improving the purity of the initial state. For the Bell state, a spin squeezing interaction that is large compared with the magnetic field guarantees optimal dense coding. One cannot always achieve a valid dense coding capacity for the Werner state, while for the Bell state the dense coding capacity χ remains in the range greater than 1.
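
    For orientation, the dense coding capacity in studies of this kind is commonly computed from the shared two-qubit state as below. This is the standard expression from the dense-coding literature (with log2 d_A = 1 for a qubit sender, so χ > 1 marks an advantage over the classical one-bit limit); it is quoted here as background rather than taken from the paper itself.

```latex
% Standard dense coding capacity for a shared state \rho_{AB}
% (sender A, receiver B); S is the von Neumann entropy.
\chi \;=\; \log_2 d_A \;+\; S(\rho_B) \;-\; S(\rho_{AB}),
\qquad S(\rho) = -\operatorname{Tr}\!\left(\rho \log_2 \rho\right).
```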

  8. Stimulus-dependent maximum entropy models of neural population codes.

    Science.gov (United States)

    Granot-Atedgi, Einat; Tkačik, Gašper; Segev, Ronen; Schneidman, Elad

    2013-01-01

    Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model-a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.
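
    Written out from the description above, the SDME distribution couples stimulus-dependent single-cell terms with stimulus-independent pairwise interactions. The schematic form below uses binary spike variables and is a paraphrase of the model class; the notation is chosen for this note rather than copied from the paper.

```latex
% Schematic SDME form: stimulus-dependent fields h_i(s), pairwise couplings
% J_ij, binary spike/silence variables \sigma_i \in \{0,1\}, and a
% stimulus-dependent partition function Z(s).
P(\sigma_1,\dots,\sigma_N \mid s) \;=\; \frac{1}{Z(s)}
\exp\!\Big(\sum_i h_i(s)\,\sigma_i \;+\; \tfrac{1}{2}\sum_{i\neq j} J_{ij}\,\sigma_i\sigma_j\Big).
```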

  9. Stimulus-dependent maximum entropy models of neural population codes.

    Directory of Open Access Journals (Sweden)

    Einat Granot-Atedgi

    Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model-a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.

  10. Code modernization and modularization of APEX and SWAT watershed simulation models

    Science.gov (United States)

    SWAT (Soil and Water Assessment Tool) and APEX (Agricultural Policy / Environmental eXtender) are respectively large and small watershed simulation models derived from EPIC (Environmental Policy Integrated Climate), a field-scale agroecology simulation model. All three models are coded in FORTRAN an...

  11. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  12. The aeroelastic code HawC - model and comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Thirstrup Petersen, J. [Risoe National Lab., The Test Station for Wind Turbines, Roskilde (Denmark)

    1996-09-01

    A general aeroelastic finite element model for simulation of the dynamic response of horizontal axis wind turbines is presented. The model has been developed with the aim to establish an effective research tool, which can support the general investigation of wind turbine dynamics and research in specific areas of wind turbine modelling. The model concentrates on the correct representation of the inertia forces in a form, which makes it possible to recognize and isolate effects originating from specific degrees of freedom. The turbine structure is divided into substructures, and nonlinear kinematic terms are retained in the equations of motion. Moderate geometric nonlinearities are allowed for. Gravity and a full wind field including 3-dimensional 3-component turbulence are included in the loading. Simulation results for a typical three bladed, stall regulated wind turbine are presented and compared with measurements. (au)

  13. Evaluation of Computational Codes for Underwater Hull Analysis Model Applications

    Science.gov (United States)

    2014-02-05

    vice versa or scaling the model to be physical scale model ( PSM ) size instead of full size. Another common geometric change is translating the...anodes and cathodes are generally the boundary conditions to the solution. The ability to link anodes to reference cells to mimic PSM testing and full...could be developed that allows the user to mimic how anode values are set on shipboard systems. Since much PSM experimental work uses shipboard system

  14. A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects

    Science.gov (United States)

    Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.

    2016-05-01

    Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as x86 architecture, existing numerical codes cannot be easily migrated to run on GPU. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in speed of astrophysical simulations with SPH and self-gravity at low costs for new hardware. Methods: We have implemented the SPH equations to model gas, liquids and elastic, and plastic solid bodies and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. miluph is pronounced [maßl2v]. We do not support the use of the code for military purposes.

  15. Once-through CANDU reactor models for the ORIGEN2 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Croff, A.G.; Bjerke, M.A.

    1980-11-01

    Reactor physics calculations have led to the development of two CANDU reactor models for the ORIGEN2 computer code. The model CANDUs are based on (1) the existing once-through fuel cycle with feed comprised of natural uranium and (2) a projected slightly enriched (1.2 wt% 235U) fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models, as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST, are given.

  16. New higher-order Godunov code for modelling performance of two-stage light gas guns

    Science.gov (United States)

    Bogdanoff, D. W.; Miller, R. J.

    1995-01-01

    A new quasi-one-dimensional Godunov code for modeling two-stage light gas guns is described. The code is third-order accurate in space and second-order accurate in time. A very accurate Riemann solver is used. Friction and heat transfer to the tube wall for gases and dense media are modeled and a simple nonequilibrium turbulence model is used for gas flows. The code also models gunpowder burn in the first-stage breech. Realistic equations of state (EOS) are used for all media. The code was validated against exact solutions of Riemann's shock-tube problem, impact of dense media slabs at velocities up to 20 km/sec, flow through a supersonic convergent-divergent nozzle and burning of gunpowder in a closed bomb. Excellent validation results were obtained. The code was then used to predict the performance of two light gas guns (1.5 in. and 0.28 in.) in service at the Ames Research Center. The code predictions were compared with measured pressure histories in the powder chamber and pump tube and with measured piston and projectile velocities. Very good agreement between computational fluid dynamics (CFD) predictions and measurements was obtained. Actual powder-burn rates in the gun were found to be considerably higher (60-90 percent) than predicted by the manufacturer and the behavior of the piston upon yielding appears to differ greatly from that suggested by low-strain rate tests.

  17. On models of the genetic code generated by binary dichotomic algorithms.

    Science.gov (United States)

    Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz

    2015-02-01

    In this paper we introduce the concept of a BDA-generated model of the genetic code, which is based on binary dichotomic algorithms (BDAs). A BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes, ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible. However, we present models that describe the standard genetic code with only few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher.
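
    As a toy illustration of the kind of partition a single dichotomic question produces (not the authors' algorithm or their tool Beady-A), the snippet below splits the 64 codons into two classes of 32 by asking whether the base at a chosen position is a purine.

```python
# Toy dichotomic partition of the 64 codons; any such binary question yields
# two classes, which BDAs then compose sequentially into finer partitions.
from itertools import product

CODONS = ["".join(c) for c in product("UCAG", repeat=3)]
PURINES = {"A", "G"}

def dichotomy(codon: str, position: int = 0) -> int:
    return 1 if codon[position] in PURINES else 0

class0 = [c for c in CODONS if dichotomy(c) == 0]
class1 = [c for c in CODONS if dichotomy(c) == 1]
print(len(class0), len(class1))   # 32 32
```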

  18. Representing Resources in Petri Net Models: Hardwiring or Soft-coding?

    OpenAIRE

    2011-01-01

    This paper presents an interesting design problem in developing a new tool for discrete-event dynamic systems (DEDS). A new tool known as GPenSIM was developed for modeling and simulation of DEDS; GPenSIM is based on Petri Nets. The design issue addressed in this paper is whether to represent resources in DEDS hardwired as a part of the Petri net structure (which is the widespread practice) or to soft-code them as common variables in the program code. This paper shows that soft coding resources giv...

  19. Implementing the WebSocket Protocol Based on Formal Modelling and Automated Code Generation

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael

    2014-01-01

    …with pragmatic annotations for automated code generation of protocol software. The contribution of this paper is an application of the approach, as implemented in the PetriCode tool, to obtain protocol software implementing the IETF WebSocket protocol. This demonstrates the scalability of our approach to real protocols. Furthermore, we perform formal verification of the CPN model prior to code generation, and test the implementation for interoperability against the Autobahn WebSocket test-suite, resulting in 97% and 99% success rates for the client and server implementations, respectively. The tests show…

  20. ABAREX -- A neutron spherical optical-statistical-model code -- A user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Smith, A.B. (ed.); Lawson, R.D.

    1998-06-01

    The contemporary version of the neutron spherical optical-statistical-model code ABAREX is summarized with the objective of providing detailed operational guidance for the user. The physical concepts involved are very briefly outlined. The code is described in some detail and a number of explicit examples are given. With this document one should very quickly become fluent with the use of ABAREX. While the code has operated on a number of computing systems, this version is specifically tailored for the VAX/VMS work station and/or the IBM-compatible personal computer.

  1. User's manual for MODCAL: Bounding surface soil plasticity model calibration and prediction code, volume 2

    Science.gov (United States)

    Dennatale, J. S.; Herrmann, L. R.; Defalias, Y. F.

    1983-02-01

    In order to reduce the complexity of the model calibration process, a computer-aided automated procedure has been developed and tested. The computer code employs a Quasi-Newton optimization strategy to locate that set of parameter values which minimizes the discrepancy between the model predictions and the experimental observations included in the calibration data base. Through application to a number of real soils, the automated procedure has been found to be an efficient, reliable and economical means of accomplishing model calibration. Although the code was developed specifically for use with the Bounding Surface plasticity model, it can readily be adapted to other constitutive formulations. Since the code greatly reduces the dependence of calibration success on user expertise, it significantly increases the accessibility and usefulness of sophisticated material models to the general engineering community.
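
    MODCAL itself is not listed in the record. A minimal sketch of the calibration idea it describes, assuming a hypothetical two-parameter stand-in for the constitutive model and synthetic observations, could look as follows; a quasi-Newton (BFGS) search minimizes the sum-of-squares discrepancy between predictions and data.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical calibration data base: observed stress at prescribed strains.
strain_obs = np.linspace(0.0, 0.05, 20)
stress_obs = 180.0 * (1.0 - np.exp(-60.0 * strain_obs)) \
             + np.random.default_rng(0).normal(0.0, 2.0, strain_obs.size)

def toy_model(params, strain):
    """Stand-in constitutive law (NOT the Bounding Surface model): a
    two-parameter saturating stress-strain curve."""
    sigma_max, k = params
    return sigma_max * (1.0 - np.exp(-k * strain))

def misfit(params):
    """Sum-of-squares discrepancy between model predictions and the
    observations -- the quantity the calibration drives to a minimum."""
    return np.sum((toy_model(params, strain_obs) - stress_obs) ** 2)

# Quasi-Newton (BFGS) search from a rough initial guess.
result = minimize(misfit, x0=np.array([100.0, 10.0]), method="BFGS")
print(result.x, result.fun)
```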

  2. Validation of vortex code viscous models using lidar wake measurements and CFD

    DEFF Research Database (Denmark)

    Branlard, Emmanuel; Machefaux, Ewan; Gaunaa, Mac;

    2014-01-01

    The newly implemented vortex code Omnivor, coupled to the aero-servo-elastic tool hawc2, is described in this paper. Vortex wake improvements through the implementation of viscous effects are considered. Different viscous models are implemented and compared with each other. Turbulent flow fields… with sheared inflow are used to compare the vortex code performance with CFD and lidar measurements. Laminar CFD computations are used to evaluate the performance of the viscous models. Consistent results between the vortex code and the CFD tool are obtained up to three diameters downstream. The modelling… of viscous boundaries appears more important than the modelling of viscosity in the wake. External turbulence and shear appear sufficient, but their full potential-flow modelling would be preferred…

  3. Improvement of Interfacial Heat Transfer Model and Correlations in SPACE Code

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Sung Won; Kim, Kyung Du [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-05-15

    The SPACE code development project has proceeded successfully since 2006. The first stage of the development program was finished in April 2010. During the first stage, the main logic and conceptual structure were established with the support of the Korea Ministry of Knowledge and Economy. The second stage focuses on assessing the physical models and correlations of the SPACE code using well-known separate effect test (SET) problems. A problem selection process was performed under the lead of KEPRI, which listed suitable SET problems according to the individual assessment purposes. Among the SET problems, the MIT pressurizer test revealed improper results from the SPACE code. This paper introduces the problem found during the assessment of the MIT pressurizer test and the process of resolving it by revising the interfacial heat transfer model and correlations in the SPACE code.

  4. User manual for ATILA, a finite-element code for modeling piezoelectric transducers

    Science.gov (United States)

    Decarpigny, Jean-Noel; Debus, Jean-Claude

    1987-09-01

    This manual for the user of the finite-element code ATILA provides instructions for entering information and running the code on a VAX computer. The manual does not include the code. The finite-element code ATILA has been specifically developed to aid the design of piezoelectric devices, mainly for sonar applications. Thus, it is able to perform the modal analyses of both axisymmetrical and fully three-dimensional piezoelectric transducers. It can also provide their harmonic response under radiating conditions: nearfield and farfield pressure, transmitting voltage response, directivity pattern, electrical impedance, as well as displacement field, nodal plane positions, stress field and various stress criteria… Its accuracy and its ability to describe the physical behavior of various transducers (Tonpilz transducers, double headmass symmetrical length expanders, free flooded rings, flextensional transducers, bender bars, cylindrical and trilaminar hydrophones…) have been checked by modelling more than twenty different structures and comparing numerical and experimental results.

  5. A model of a code of ethics for tissue banks operating in developing countries.

    Science.gov (United States)

    Morales Pedraza, Jorge

    2012-12-01

    Ethical practice in the field of tissue banking requires the setting of principles, the identification of possible deviations and the establishment of mechanisms that will detect and hinder abuses that may occur during the procurement, processing and distribution of tissues for transplantation. This model of a Code of Ethics has been prepared to be used in the elaboration of a Code of Ethics for tissue banks operating in the Latin American and Caribbean, Asian and Pacific, and African regions, in order to guide the day-to-day operation of these banks. The purpose of this model Code of Ethics is to assist interested tissue banks in preparing their own Code of Ethics, helping to ensure that tissue bank staff support, through their actions, the mission and values associated with tissue banking.

  6. Description of codes and models to be used in risk assessment

    Energy Technology Data Exchange (ETDEWEB)

    1991-09-01

    Human health and environmental risk assessments will be performed as part of the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) remedial investigation/feasibility study (RI/FS) activities at the Hanford Site. Analytical and computer-encoded numerical models are commonly used during both the remedial investigation (RI) and feasibility study (FS) to predict or estimate the concentration of contaminants at the point of exposure to humans and/or the environment. This document has been prepared to identify the computer codes that will be used in support of RI/FS human health and environmental risk assessments at the Hanford Site. In addition to the CERCLA RI/FS process, it is recommended that these computer codes be used when fate and transport analyses are required for other activities. Additional computer codes may be used for other purposes (e.g., design of tracer tests, location of observation wells, etc.). This document provides guidance for unit managers in charge of RI/FS activities. Use of the same computer codes for all analytical activities at the Hanford Site will promote consistency, reduce the effort required to develop, validate, and implement models to simulate Hanford Site conditions, and expedite regulatory review. The discussion provides a description of how models will likely be developed and utilized at the Hanford Site. It is intended to summarize previous environment-related modeling at the Hanford Site and provide background for future model development. The modeling capabilities that are desirable for the Hanford Site and the codes that were evaluated are also described. The recommendations include the codes proposed to support future risk assessment modeling at the Hanford Site and provide the rationale for the codes selected. 27 refs., 3 figs., 1 tab.

  7. PWR hot leg natural circulation modeling with MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jae Hong; Lee, Jong In [Korea Institute of Nuclear Safety, Taejon (Korea, Republic of)

    1997-12-31

    Previous MELCOR and SCDAP/RELAP5 nodalizations for simulating the counter-current natural circulation of vapor flow within the RCS hot legs and SG U-tubes as core damage progresses cannot be applied to the steady-state, water-filled conditions during the initial period of accident progression, because of the artificially high loss coefficients in the hot legs and SG U-tubes, which were chosen from the results of COMMIX calculations and the Westinghouse natural circulation experiments in a 1/7-scale facility for simulating steam natural circulation behavior in the vessel. This paper therefore presents a circulation modeling which can be used both for the liquid flow condition at steady state and for the vapor flow condition during the later period of in-vessel core damage. For this, the drag forces resulting from the momentum exchange effects between the two vapor streams in the hot leg were modeled as a pressure drop using a pump model. This hot leg natural circulation modeling in MELCOR was able to reproduce mass flow rates similar to those predicted by previous models. 6 refs., 2 figs. (Author)

   8. Preliminary Modeling of Air Breakdown with the ICEPIC code

    CERN Document Server

    Schulz, A E; Cartwright, K L; Mardahl, P J; Peterkin, R E; Bruner, N; Genoni, T; Hughes, T P; Welch, D

    2004-01-01

    Interest in air breakdown phenomena has recently been re-kindled with the advent of advanced virtual prototyping of radio frequency (RF) sources for use in high power microwave (HPM) weapons technology. Air breakdown phenomena are of interest because the formation of a plasma layer at the aperture of an RF source decreases the transmitted power to the target, and in some cases can cause significant reflection of RF radiation. Understanding the mechanisms behind the formation of such plasma layers will aid in the development of maximally effective sources. This paper begins with some of the basic theory behind air breakdown, and describes two independent approaches to modeling the formation of plasmas, the dielectric fluid model and the Particle in Cell (PIC) approach. Finally we present the results of preliminary studies in numerical modeling and simulation of breakdown.

  9. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

    Energy Technology Data Exchange (ETDEWEB)

    Carbajo, Juan (Oak Ridge National Laboratory, Oak Ridge, TN); Jeong, Hae-Yong (Korea Atomic Energy Research Institute, Daejeon, Korea); Wigeland, Roald (Idaho National Laboratory, Idaho Falls, ID); Corradini, Michael (University of Wisconsin, Madison, WI); Schmidt, Rodney Cannon; Thomas, Justin (Argonne National Laboratory, Argonne, IL); Wei, Tom (Argonne National Laboratory, Argonne, IL); Sofu, Tanju (Argonne National Laboratory, Argonne, IL); Ludewig, Hans (Brookhaven National Laboratory, Upton, NY); Tobita, Yoshiharu (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Ohshima, Hiroyuki (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Serre, Frederic (Centre d'études nucléaires de Cadarache – CEA, France)

    2011-06-01

    This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions are drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the

  10. Δ (1232 ) Dalitz decay in proton-proton collisions at T =1.25 GeV measured with HADES at GSI

    Science.gov (United States)

    Adamczewski-Musch, J.; Arnold, O.; Atomssa, E. T.; Behnke, C.; Belounnas, A.; Belyaev, A.; Berger-Chen, J. C.; Biernat, J.; Blanco, A.; Blume, C.; Böhmer, M.; Bordalo, P.; Chernenko, S.; Chlad, L.; Deveaux, C.; Dreyer, J.; Dybczak, A.; Epple, E.; Fabbietti, L.; Fateev, O.; Filip, P.; Finocchiaro, P.; Fonte, P.; Franco, C.; Friese, J.; Fröhlich, I.; Galatyuk, T.; Garzón, J. A.; Gernhäuser, R.; Golubeva, M.; Guber, F.; Gumberidze, M.; Harabasz, S.; Heinz, T.; Hennino, T.; Hlavac, S.; Höhne, C.; Holzmann, R.; Ierusalimov, A.; Ivashkin, A.; Kämpfer, B.; Karavicheva, T.; Kardan, B.; Koenig, I.; Koenig, W.; Kolb, B. W.; Korcyl, G.; Kornakov, G.; Kotte, R.; Kühn, W.; Kugler, A.; Kunz, T.; Kurepin, A.; Kurilkin, A.; Kurilkin, P.; Ladygin, V.; Lalik, R.; Lapidus, K.; Lebedev, A.; Liu, T.; Lopes, L.; Lorenz, M.; Mahmoud, T.; Maier, L.; Mangiarotti, A.; Markert, J.; Maurus, S.; Metag, V.; Michel, J.; Morinière, E.; Mihaylov, D. M.; Morozov, S.; Müntz, C.; Münzer, R.; Naumann, L.; Nowakowski, K.; Palka, M.; Parpottas, Y.; Pechenov, V.; Pechenova, O.; Petousis, V.; Petukhov, O.; Pietraszko, J.; Przygoda, W.; Ramos, S.; Ramstein, B.; Reshetin, A.; Rodriguez-Ramos, P.; Rosier, P.; Rost, A.; Sadovsky, A.; Salabura, P.; Scheib, T.; Schuldes, H.; Schwab, E.; Scozzi, F.; Seck, F.; Sellheim, P.; Siebenson, J.; Silva, L.; Sobolev, Yu. G.; Spataro, S.; Ströbele, H.; Stroth, J.; Strzempek, P.; Sturm, C.; Svoboda, O.; Tlusty, P.; Traxler, M.; Tsertos, H.; Usenko, E.; Wagner, V.; Wendisch, C.; Wiebusch, M. G.; Wirth, J.; Zanevsky, Y.; Zumbruch, P.; Sarantsev, A. V.; Hades Collaboration

    2017-06-01

    We report on the investigation of Δ(1232) production and decay in proton-proton collisions at a kinetic energy of 1.25 GeV measured with HADES. The exclusive dilepton decay channels pp → pp e+e- and pp → pp e+e-γ have been studied and compared with the partial-wave analysis of the hadronic pp π0 channel. They give access to both the Δ+→p π0(e+e-γ) and the Δ+→p e+e- Dalitz decay channels. The perfect reconstruction of the well-known π0 Dalitz decay serves as a proof of the consistency of the analysis. The Δ Dalitz decay is identified for the first time and the sensitivity to N-Δ transition form factors is tested. The Δ(1232) Dalitz decay branching ratio is also determined for the first time; our result is (4.19 ± 0.62 syst. ± 0.34 stat.) × 10⁻⁵, albeit with some model dependence.

  11. A Dual Coding Theoretical Model of Decoding in Reading: Subsuming the LaBerge and Samuels Model

    Science.gov (United States)

    Sadoski, Mark; McTigue, Erin M.; Paivio, Allan

    2012-01-01

    In this article we present a detailed Dual Coding Theory (DCT) model of decoding. The DCT model reinterprets and subsumes The LaBerge and Samuels (1974) model of the reading process which has served well to account for decoding behaviors and the processes that underlie them. However, the LaBerge and Samuels model has had little to say about…

  12. The non-power model of the genetic code: a paradigm for interpreting genomic information.

    Science.gov (United States)

    Gonzalez, Diego Luis; Giannerini, Simone; Rosa, Rodolfo

    2016-03-13

    In this article, we present a mathematical framework based on redundant (non-power) representations of integer numbers as a paradigm for the interpretation of genomic information. The core of the approach relies on modelling the degeneracy of the genetic code. The model allows one to explain many features and symmetries of the genetic code and to uncover hidden symmetries. It also provides new tools for the analysis of genomic sequences. We briefly review three main areas: (i) the Euplotid nuclear code, (ii) the vertebrate mitochondrial code, and (iii) the main coding/decoding strategies used in the three domains of life. In every case, we show how the non-power model is a natural unified framework for describing degeneracy and deriving sound biological hypotheses on protein coding. The approach is rooted in number theory and group theory; nevertheless, we have kept the technical level to a minimum by focusing on key concepts and on the biological implications. © 2016 The Author(s).

  13. Domain-specific modeling enabling full code generation

    CERN Document Server

    Kelly, Steven

    2007-01-01

    Domain-Specific Modeling (DSM) is the latest approach to software development, promising to greatly increase the speed and ease of software creation. Early adopters of DSM have been enjoying productivity increases of 500–1000% in production for over a decade. This book introduces DSM and offers examples from various fields to illustrate to experienced developers how DSM can improve software development in their teams. Two authorities in the field explain what DSM is, why it works, and how to successfully create and use a DSM solution to improve productivity and quality. Divided into four parts, the book covers: background and motivation; fundamentals; in-depth examples; and creating DSM solutions. There is an emphasis throughout the book on practical guidelines for implementing DSM, including how to identify the necessary language constructs, how to generate full code from models, and how to provide tool support for a new DSM language. The example cases described in the book are available at the book's Website, www.dsmbook....

  14. A Review of Equation of State Models, Chemical Equilibrium Calculations and CERV Code Requirements for SHS Detonation Modelling

    Science.gov (United States)

    2009-10-01

    Excerpt: equations of state such as the Beattie–Bridgeman equation and the virial expansion are suitable for moderate pressures and are usually based on either empirical constants… (Defence R&D Canada report CR 2010-013, October 2009.)

  15. Model study of the thermal storage system by FEHM code

    Energy Technology Data Exchange (ETDEWEB)

    Tenma, N.; Yasukawa, Kasumi [National Institute of Advanced Industrial Science and Technology, Ibaraki (Japan); Zyvoloski, G. [Los Alamos National Laboratory, Los Alamos, NM (United States). Earth and Environmental Science Division

    2003-12-01

    The use of low-temperature geothermal resources is important from the viewpoint of global warming. In order to evaluate various underground projects that use low-temperature geothermal resources, we have estimated the parameters of a typical underground system using the two-well model. By changing the parameters of the system, six different heat extraction scenarios have been studied. One of these six scenarios is recommended because of its small energy loss. (author)

  16. Model based code generation for distributed embedded systems

    OpenAIRE

    Raghav, Gopal; Gopalswamy, Swaminathan; Radhakrishnan, Karthikeyan; Hugues, Jérôme; Delange, Julien

    2010-01-01

    Embedded systems are becoming increasingly complex and more distributed. Cost and quality requirements necessitate reuse of the functional software components for multiple deployment architectures. An important step is the allocation of software components to hardware. During this process the differences between the hardware and application software architectures must be reconciled. In this paper we discuss an architecture driven approach involving model-based techniques to resolve these diff...

  17. Pre-engineering Spaceflight Validation of Environmental Models and the 2005 HZETRN Simulation Code

    Science.gov (United States)

    Nealy, John E.; Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.; Dachev, Ts. P.; Tomov, B. T.; Walker, Steven A.; DeAngelis, Giovanni; Blattnig, Steve R.; Atwell, William

    2006-01-01

    The HZETRN code has been identified by NASA for engineering design in the next phase of space exploration, highlighting a return to the Moon in preparation for a Mars mission. In response, a new series of algorithms, beginning with 2005 HZETRN, will be issued, correcting some prior limitations and improving control of propagated errors, along with established code verification processes. Code validation processes will use new/improved low Earth orbit (LEO) environmental models together with a recently improved International Space Station (ISS) shield model to validate computational models and procedures against measured data aboard the ISS. These validated models will provide a basis for flight-testing the designs of future space vehicles and systems of the Constellation program in the LEO environment.

  18. Reduced Fast Ion Transport Model For The Tokamak Transport Code TRANSP

    Energy Technology Data Exchange (ETDEWEB)

    Podesta, Mario; Gorelenkova, Marina; White, Roscoe

    2014-02-28

    Fast ion transport models presently implemented in the tokamak transport code TRANSP [R. J. Hawryluk, in Physics of Plasmas Close to Thermonuclear Conditions, CEC Brussels, 1, 19 (1980)] are not capturing important aspects of the physics associated with resonant transport caused by instabilities such as Toroidal Alfvén Eigenmodes (TAEs). This work describes the implementation of a fast ion transport model consistent with the basic mechanisms of resonant mode-particle interaction. The model is formulated in terms of a probability distribution function for the particle's steps in phase space, which is consistent with the Monte Carlo approach used in TRANSP. The proposed model is based on the analysis of fast ion response to TAE modes through the ORBIT code [R. B. White et al., Phys. Fluids 27, 2455 (1984)], but it can be generalized to higher frequency modes (e.g. Compressional and Global Alfvén Eigenmodes) and to other numerical codes or theories.
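
    The actual TRANSP/ORBIT kick probabilities are phase-space dependent and are not given in the record. The sketch below is only a schematic stand-in showing how correlated Monte Carlo steps in energy and canonical toroidal momentum can be applied to a population of fast-ion markers; all parameter values and names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def apply_kicks(E, P_phi, dE_rms, dP_rms, correlation=0.0):
    """Apply one set of Monte Carlo 'kicks' to fast-ion markers.

    E, P_phi    : arrays of marker energy and canonical toroidal momentum
    dE_rms,
    dP_rms      : RMS step sizes (in a real model these depend on the
                  phase-space location and the instantaneous mode amplitude)
    correlation : E-P_phi correlation of the steps (set by the mode frequency
                  and toroidal mode number for resonant transport)
    """
    n = E.size
    dE = rng.normal(0.0, dE_rms, n)
    dP = (correlation * dP_rms / max(dE_rms, 1e-30)) * dE \
         + np.sqrt(max(0.0, 1.0 - correlation**2)) * rng.normal(0.0, dP_rms, n)
    return E + dE, P_phi + dP

# usage: push 10^5 markers through one transport step
E = rng.uniform(10.0, 80.0, 100_000)       # keV
P_phi = rng.uniform(-1.0, 1.0, 100_000)    # normalized
E, P_phi = apply_kicks(E, P_phi, dE_rms=0.5, dP_rms=0.01, correlation=0.8)
```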

  19. The Nuremberg Code subverts human health and safety by requiring animal modeling

    Science.gov (United States)

    2012-01-01

    Background The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. Summary We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented. PMID:22769234

  20. The Nuremberg Code subverts human health and safety by requiring animal modeling

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2012-07-01

    Full Text Available Abstract Background The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. Summary We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented.

  1. Modeling of BWR core meltdown accidents - for application in the MELRPI. MOD2 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Koh, B R; Kim, S H; Taleyarkhan, R P; Podowski, M Z; Lahey, Jr, R T

    1985-04-01

    This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.

  2. Development of a model and computer code to describe solar grade silicon production processes

    Science.gov (United States)

    Gould, R. K.; Srivastava, R.

    1979-01-01

    Two computer codes were developed for describing flow reactors in which high purity, solar grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric, marching code which treats two-phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is also described. The second code is a modified version of the GENMIX boundary layer code which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail.

  3. Evaluation of Advanced Models for PAFS Condensation Heat Transfer in SPACE Code

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Byoung-Uhn; Kim, Seok; Park, Yu-Sun; Kang, Kyung Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Ahn, Tae-Hwan; Yun, Byong-Jo [Pusan National University, Busan (Korea, Republic of)

    2015-10-15

    The PAFS (Passive Auxiliary Feedwater System) is operated by natural circulation to remove the core decay heat through the PCHX (Passive Condensation Heat Exchanger), which is composed of nearly horizontal tubes. For validation of the cooling and operational performance of the PAFS, the PASCAL (PAFS Condensing Heat Removal Assessment Loop) facility was constructed, and the condensation heat transfer and natural convection phenomena in the PAFS were experimentally investigated at KAERI (Korea Atomic Energy Research Institute). From the PASCAL experimental results, it was found that the conventional system analysis code underestimated the condensation heat transfer. In this study, advanced condensation heat transfer models which can treat the heat transfer mechanisms of the different flow regimes in the nearly horizontal heat exchanger tube were analyzed. The models were implemented in a thermal hydraulic safety analysis code, SPACE (Safety and Performance Analysis Code for Nuclear Power Plant), and evaluated against the PASCAL experimental data. With the aim of enhancing the prediction capability for the condensation phenomenon inside the PCHX tube of the PAFS, advanced models for condensation heat transfer were implemented into the wall condensation model of the SPACE code, and the PASCAL experimental results were utilized to validate them. Calculation results showed that the improved model for the condensation heat transfer coefficient enhanced the prediction capability of the SPACE code. This result confirms that the mechanistic modeling of film condensation in the steam phase and of convection in the condensate liquid contributed to enhancing the prediction capability of the wall condensation model of the SPACE code and to reducing conservatism in the prediction of condensation heat transfer.

  4. Development of RBMK-1500 Model for BDBA Analysis Using RELAP/SCDAPSIM Code

    Science.gov (United States)

    Uspuras, Eugenijus; Kaliatka, Algirdas

    This article discusses the specific features of RBMK (channel type, boiling water, graphite moderated) reactors and the problems of Reactor Cooling System (RCS) modelling with computer codes. The article presents how the RELAP/SCDAPSIM code, originally designed for modelling accidents in vessel-type reactors, is adapted to simulate the phenomena in the RBMK reactor core and RCS in the case of Beyond Design Basis Accidents (BDBA). For this purpose, the use of two RELAP/SCDAPSIM models is recommended. The first model, which describes the complete geometry of the RCS, is recommended for analysis of the initial phase of the accident. The calculation results received using this model are then used as boundary conditions in a simplified model for simulation of the later phases of severe accidents. The simplified model was tested by comparing results of simulations performed using the RELAP5 and RELAP/SCDAPSIM codes. As a typical example of a BDBA, a large break LOCA in the reactor cooling system with failure of the emergency core cooling system was analyzed. Use of the developed models allows the behaviour of thermal-hydraulic parameters, the temperatures of core components, and the amount of hydrogen generated by the steam-zirconium reaction to be obtained. These parameters will be used as input for other special codes designed for analysis of processes in the reactor containment.

  5. Reduced-order LPV model of flexible wind turbines from high fidelity aeroelastic codes

    DEFF Research Database (Denmark)

    Adegas, Fabiano Daher; Sønderby, Ivan Bergquist; Hansen, Morten Hartvig

    2013-01-01

    space. The obtained LPV model is of suitable size for designing modern gain-scheduling controllers based on recently developed LPV control design techniques. Results are thoroughly assessed on a set of industrial wind turbine models generated by the recently developed aeroelastic code HAWCStab2....

  6. Fine-Grained Energy Modeling for the Source Code of a Mobile Application

    DEFF Research Database (Denmark)

    Li, Xueliang; Gallagher, John Patrick

    2016-01-01

    The goal of an energy model for source code is to lay a foundation for the application of energy-aware programming techniques. State of the art solutions are based on source-line energy information. In this paper, we present an approach to constructing a fine-grained energy model which is able...

  7. The modelling of wall condensation with noncondensable gases for the containment codes

    Energy Technology Data Exchange (ETDEWEB)

    Leduc, C.; Coste, P.; Barthel, V.; Deslandes, H. [Commissariat à l'Énergie Atomique, Grenoble (France)

    1995-09-01

    This paper presents several approaches to the modelling of wall condensation in the presence of noncondensable gases for containment codes. Lumped-parameter modelling and local modelling by 3-D codes are discussed. Containment analysis codes should be able to predict the spatial distributions of steam, air, and hydrogen as well as the efficiency of cooling by wall condensation in both natural convection and forced convection situations. 3-D calculations with a turbulent diffusion modelling are necessary since the diffusion controls the local condensation, whereas the wall condensation may redistribute the air and hydrogen mass in the containment. A fine-mesh modelling of film condensation in forced convection has been developed, taking into account the influence of the suction velocity at the liquid-gas interface. It is associated with the 3-D model of the TRIO code for the gas mixture, where a k-ξ turbulence model is used. The predictions are compared to Huhtiniemi's experimental data. The modelling of condensation in natural convection or mixed convection is more complex. As no universal velocity and temperature profiles exist for such boundary layers, a very fine nodalization is necessary. Simpler models integrate equations over the boundary layer thickness, using the heat and mass transfer analogy. The model predictions are compared with an MIT experiment. For the containment compartments a two-node model is proposed using the lumped-parameter approach. Heat and mass transfer coefficients are tested on separate effect tests and containment experiments. The CATHARE code has been adapted to perform such calculations and shows reasonable agreement with the data.

  8. Two Phase Flow Models and Numerical Methods of the Commercial CFD Codes

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Sung Won; Jeong, Jae Jun; Chang, Seok Kyu; Cho, Hyung Kyu

    2007-11-15

    The use of commercial CFD codes extends to various fields of engineering. Thermal hydraulic analysis is one of the promising engineering fields for application of CFD codes. Up to now, the main application of commercial CFD codes has been focused on single-phase, single-composition fluid dynamics. Nuclear thermal hydraulics, however, deals with abrupt pressure changes, high heat fluxes, and phase change heat transfer. In order to overcome the CFD limitations and to extend the capability of nuclear thermal hydraulics analysis, research efforts are being made to couple CFD with nuclear thermal hydraulics. To achieve the final goal, the current useful models and correlations used in commercial CFD codes should be reviewed and investigated. This report gives summary information about the constitutive relationships that are used in FLUENT, STAR-CD, and CFX. Brief information on the solution technologies is also included.

  9. Comparison of current state residential energy codes with the 1992 model energy code for one- and two-family dwellings; 1994

    Energy Technology Data Exchange (ETDEWEB)

    Klevgard, L.A.; Taylor, Z.T.; Lucas, R.G.

    1995-01-01

    This report is one in a series of documents describing research activities in support of the US Department of Energy (DOE) Building Energy Codes Program. The Pacific Northwest Laboratory (PNL) leads the program for DOE. The goal of the program is to develop and support the adoption, implementation, and enforcement of Federal, State, and local energy codes for new buildings. The program approach to meeting the goal is to initiate and manage individual research, standards, and guidelines development efforts that are planned and conducted in cooperation with representatives from throughout the buildings community. Projects under way involve practicing architects and engineers, professional societies and code organizations, industry representatives, and researchers from the private sector and national laboratories. Research results and technical justifications for standards criteria are provided to standards development and model code organizations and to Federal, State, and local jurisdictions as a basis to update their codes and standards. This effort helps to ensure that building standards incorporate the latest research results to achieve maximum energy savings in new buildings, yet remain responsive to the needs of the affected professions, organizations, and jurisdictions. Also supported are the implementation, deployment, and use of energy-efficient codes and standards. This report documents findings from an analysis conducted by PNL of the States' building codes to determine if the codes meet or exceed the 1992 MEC energy efficiency requirements (CABO 1992a).

  10. A review of MAAP4 code structure and core T/H model

    Energy Technology Data Exchange (ETDEWEB)

    Song, Yong Mann; Park, Soo Yong

    1998-03-01

    The modular accident analysis program (MAAP) version 4 is a computer code that can simulate the response of LWR plants during severe accident sequences and includes models for all of the important phenomena which might occur during such sequences. In this report, the MAAP4 code structure and the core thermal hydraulic (T/H) model, which models the T/H behavior of the reactor core and the response of core components during all accident phases involving degraded cores, are specifically reviewed and then reorganized. This reorganization is performed by gathering the related models under each topic, whose contents and order are the same as in the other two reports, on MELCOR and SCDAP/RELAP5, to be published simultaneously. The major purpose of the report is to provide information about the characteristics of the MAAP4 core T/H models for an integrated severe accident computer code development being performed under one of the on-going mid/long-term nuclear development projects. The basic characteristics of the new integrated severe accident code include: 1) flexible simulation capability of the primary side, secondary side, and the containment under severe accident conditions, 2) detailed plant simulation, 3) convenient user interfaces, 4) high modularization for easy maintenance/improvement, and 5) state-of-the-art model selection. In conclusion, the MAAP4 code appears to be superior for items 3) and 4) but somewhat inferior for items 1) and 2). For item 5), more effort should be made in the future to compare the separate models in detail not only with other codes but also with recent worldwide work. (author). 17 refs., 1 tab., 12 figs.

  11. Assessment of Turbulence-Chemistry Interaction Models in the National Combustion Code (NCC) - Part I

    Science.gov (United States)

    Wey, Thomas Changju; Liu, Nan-suey

    2011-01-01

    This paper describes the implementations of the linear-eddy model (LEM) and an Eulerian FDF/PDF model in the National Combustion Code (NCC) for the simulation of turbulent combustion. The impacts of these two models, along with the so called laminar chemistry model, are then illustrated via the preliminary results from two combustion systems: a nine-element gas fueled combustor and a single-element liquid fueled combustor.

  12. Verification of the predictive capabilities of the 4C code cryogenic circuit model

    Science.gov (United States)

    Zanino, R.; Bonifetto, R.; Hoa, C.; Richard, L. Savoldi

    2014-01-01

    The 4C code was developed to model thermal-hydraulics in superconducting magnet systems and related cryogenic circuits. It consists of three coupled modules: a quasi-3D thermal-hydraulic model of the winding; a quasi-3D model of heat conduction in the magnet structures; and an object-oriented, acausal model of the cryogenic circuit. In the last couple of years the code and its different modules have undergone a series of validation exercises against experimental data, including data from the supercritical He loop HELIOS at CEA Grenoble. However, all of this analysis work was done after the experiments had been performed. In this paper a first demonstration is given of the predictive capabilities of the 4C code cryogenic circuit module. To do that, a set of ad hoc experimental scenarios was designed, including different heating and control strategies. Simulations with the cryogenic circuit module of 4C were then performed before the experiment. The comparison presented here between the code predictions and the results of the HELIOS measurements gives the first proof of the excellent predictive capability of the 4C code cryogenic circuit module.

  13. Inclusion of models to describe severe accident conditions in the fuel simulation code DIONISIO

    Energy Technology Data Exchange (ETDEWEB)

    Lemes, Martín; Soba, Alejandro [Sección Códigos y Modelos, Gerencia Ciclo del Combustible Nuclear, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina); Daverio, Hernando [Gerencia Reactores y Centrales Nucleares, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina); Denis, Alicia [Sección Códigos y Modelos, Gerencia Ciclo del Combustible Nuclear, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina)

    2017-04-15

    The simulation of fuel rod behavior is a complex task that demands not only accurate models to describe the numerous phenomena occurring in the pellet, cladding and internal rod atmosphere but also an adequate interconnection between them. In the last years several models have been incorporated into the DIONISIO code with the purpose of increasing its precision and reliability. After the regrettable events at Fukushima, the need for codes capable of simulating nuclear fuels under accident conditions has come to the fore. Heat removal occurs in a quite different way than during normal operation, and this fact determines a completely new set of conditions for the fuel materials. A detailed description of the different regimes the coolant may exhibit in such a wide variety of scenarios requires a thermal-hydraulic formulation not suitable for inclusion in a fuel performance code. Moreover, there exist a number of reliable and well-known codes that perform this task. Nevertheless, and keeping in mind the purpose of building a code focused on fuel behavior, a subroutine was developed for the DIONISIO code that performs a simplified analysis of the coolant in a PWR, restricted to the more representative situations, and provides the fuel simulation with the boundary conditions necessary to reproduce accident situations. In the present work this subroutine is described and the results of different comparisons with experimental data and with thermal-hydraulic codes are offered. It is verified that, in spite of its comparative simplicity, the predictions of this module of DIONISIO do not differ significantly from those of the specific, complex codes.

  14. The implementation of a toroidal limiter model into the gyrokinetic code ELMFIRE

    Energy Technology Data Exchange (ETDEWEB)

    Leerink, S.; Janhunen, S.J.; Kiviniemi, T.P.; Nora, M. [Euratom-Tekes Association, Helsinki University of Technology (Finland); Heikkinen, J.A. [Euratom-Tekes Association, VTT, P.O. Box 1000, FI-02044 VTT (Finland); Ogando, F. [Universidad Nacional de Educacion a Distancia, Madrid (Spain)

    2008-03-15

    The ELMFIRE full nonlinear gyrokinetic simulation code has been developed for calculations of plasma evolution and dynamics of turbulence in tokamak geometry. The code is applicable for calculations of strong perturbations in the particle distribution function, rapid transients and steep gradients in plasma. Benchmarking against experimental reflectometry data from the FT2 tokamak is discussed, and a model for comparing and studying the poloidal velocity is presented. To make the ELMFIRE code suitable for scrape-off layer simulations, a simplified toroidal limiter model has been implemented. The model is discussed and first results are presented. (copyright 2008 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  15. Transport Corrections in Nodal Diffusion Codes for HTR Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Abderrafi M. Ougouag; Frederick N. Gleicher

    2010-08-01

    The cores and reflectors of High Temperature Reactors (HTRs) of the Next Generation Nuclear Plant (NGNP) type are dominantly diffusive media from the point of view of behavior of the neutrons and their migration between the various structures of the reactor. This means that neutron diffusion theory is sufficient for modeling most features of such reactors and transport theory may not be needed for most applications. Of course, the above statement assumes the availability of homogenized diffusion theory data. The statement is true for most situations but not all. Two features of NGNP-type HTRs require that the diffusion theory-based solution be corrected for local transport effects. These two cases are the treatment of burnable poisons (BP) in the case of the prismatic block reactors and, for both pebble bed reactor (PBR) and prismatic block reactor (PMR) designs, that of control rods (CR) embedded in non-multiplying regions near the interface between fueled zones and said non-multiplying zones. The need for transport correction arises because diffusion theory-based solutions appear not to provide sufficient fidelity in these situations.

  16. Development and implementation of a new trigger and data acquisition system for the HADES detector

    Energy Technology Data Exchange (ETDEWEB)

    Michel, Jan

    2012-11-16

    One of the crucial points of instrumentation in modern nuclear and particle physics is the setup of data acquisition systems (DAQ). In collisions of heavy ions, particles of special interest for research are often produced at very low rates resulting in the need for high event rates and a fast data acquisition. Additionally, the identification and precise tracking of particles requires fast and highly granular detectors. Both requirements result in very high data rates that have to be transported within the detector read-out system: Typical experiments produce data at rates of 200 to 1,000 MByte/s. The structure of the trigger and read-out systems of such experiments is quite similar: A central instance generates a signal that triggers read-out of all sub-systems. The signals from each detector system are then processed and digitized by front-end electronics before they are transported to a computing farm where data is analyzed and prepared for long-term storage. Some systems introduce additional steps (high level triggers) in this process to select only special types of events to reduce the amount of data to be processed later. The main focus of this work is put on the development of a new data acquisition system for the High Acceptance Di-Electron Spectrometer HADES located at the GSI Helmholtz Center for Heavy Ion Research in Darmstadt, Germany. Fully operational since 2002, its front-end electronics and data transport system were subject to a major upgrade program. The goal was an increase of the event rate capabilities by a factor of more than 20 to reach event rates of 20 kHz in heavy ion collisions and more than 50 kHz in light collision systems. The new electronics are based on FPGA-equipped platforms distributed throughout the detector. Data is transported over optical fibers to reduce the amount of electromagnetic noise induced in the sensitive front-end electronics. Besides the high data rates of up to 500 MByte/s at the design event rate of 20 kHz, the

  17. An evolutionary model for protein-coding regions with conserved RNA structure

    DEFF Research Database (Denmark)

    Pedersen, Jakob Skou; Forsberg, Roald; Meyer, Irmtraud Margret

    2004-01-01

    components of traditional phylogenetic models. We applied this to a data set of full-genome sequences from the hepatitis C virus where five RNA structures are mapped within the coding region. This allowed us to partition the effects of selection on different structural elements and to test various hypotheses...... concerning the relation of these effects. Of particular interest, we found evidence of a functional role of loop and bulge regions, as these were shown to evolve according to a different and more constrained selective regime than the nonpairing regions outside the RNA structures. Other potential applications...... of the model include comparative RNA structure prediction in coding regions and RNA virus phylogenetics....

  18. A WYNER-ZIV VIDEO CODING METHOD UTILIZING MIXTURE CORRELATION NOISE MODEL

    Institute of Scientific and Technical Information of China (English)

    Hu Xiaofei; Zhu Xiuchang

    2012-01-01

    In Wyner-Ziv (WZ) Distributed Video Coding (DVC), a correlation noise model is often used to describe the error distribution between the WZ frame and the side information. The accuracy of this model directly influences the performance of the video coder. A mixture correlation noise model in the Discrete Cosine Transform (DCT) domain for WZ video coding is established in this paper. Different correlation noise estimation methods are used for the direct current (DC) and alternating current (AC) coefficients. A parameter estimation method based on the expectation-maximization (EM) algorithm is used to estimate the center of the Laplace distribution of the DC frequency band, and a Mixture Laplace-Uniform Distribution Model (MLUDM) is established for the AC coefficients. Experimental results suggest that the proposed mixture correlation noise model can accurately describe the heavy tail and sudden changes of the noise at high rates and significantly improves the coding efficiency compared with the noise model presented by DIStributed COding for Video sERvices (DISCOVER).
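
    The MLUDM estimator itself is not spelled out in the record. As a hedged illustration of the EM idea it mentions, the sketch below fits a zero-centred Laplace plus uniform mixture to synthetic residuals; the initialization, the fixed uniform support and the zero-mean assumption are ours, not the paper's.

```python
import numpy as np

def laplace_pdf(x, b):
    return np.exp(-np.abs(x) / b) / (2.0 * b)

def em_laplace_uniform(x, support, n_iter=50):
    """EM fit of a zero-centred Laplace + uniform mixture to residuals x.

    support : half-width A of the uniform component, p_u(x) = 1/(2A)
    Returns (w, b): weight and scale of the Laplace component.
    Generic illustration only, not the MLUDM/DISCOVER estimator.
    """
    w, b = 0.9, float(np.mean(np.abs(x))) + 1e-12   # initial guess
    p_u = 1.0 / (2.0 * support)
    for _ in range(n_iter):
        # E-step: responsibility of the Laplace component for each sample
        num = w * laplace_pdf(x, b)
        r = num / (num + (1.0 - w) * p_u)
        # M-step: weighted maximum-likelihood updates
        w = float(np.mean(r))
        b = float(np.sum(r * np.abs(x)) / (np.sum(r) + 1e-12))
    return w, b

# usage on synthetic "correlation noise": mostly Laplacian plus outliers
rng = np.random.default_rng(2)
noise = np.concatenate([rng.laplace(0.0, 4.0, 9000),
                        rng.uniform(-128.0, 128.0, 1000)])
print(em_laplace_uniform(noise, support=128.0))
```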

  19. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    Directory of Open Access Journals (Sweden)

    Dragutin KERMEK

    2016-04-01

    Full Text Available In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student assignments and projects that involve a lot of source code files these tools are not so effective. Also, issues may occur when source code is given to students in class so they can copy it. In such cases these tools do not provide satisfying results and reports. In this study, we present an improved process model for plagiarism detection when multiple student files exist and allowed source code is present. In the research in this paper we use the Sherlock detection tool, although the presented process model can be combined with any plagiarism detection engine. The proposed model is tested on assignments in three courses in two subsequent academic years.
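
    The improved process model is not reproduced here. As a toy illustration of the underlying detection step (token-level similarity that survives identifier renaming, with instructor-provided code removed before comparison), one might sketch the following; the normalization rules and the 0.8 threshold are arbitrary assumptions, not Sherlock's algorithm.

```python
import difflib
import re

def normalize(source: str) -> list[str]:
    """Crude normalization: strip comments and map every identifier to the
    same token, so renaming variables does not hide copying (a toy stand-in
    for what Sherlock/JPlag/Moss do far more carefully)."""
    source = re.sub(r"/\*.*?\*/", "", source, flags=re.S)   # block comments
    source = re.sub(r"//[^\n]*", "", source)                # line comments
    tokens = re.findall(r"[A-Za-z_]\w*|\S", source)
    return ["ID" if re.fullmatch(r"[A-Za-z_]\w*", t) else t for t in tokens]

def similarity(a: str, b: str, allowed: str = "") -> float:
    """Token-level similarity in [0, 1], after removing instructor-provided
    ('allowed') code from both submissions."""
    def strip_allowed(s: str) -> str:
        return s.replace(allowed, "") if allowed else s
    ta, tb = normalize(strip_allowed(a)), normalize(strip_allowed(b))
    return difflib.SequenceMatcher(None, ta, tb).ratio()

# usage: two snippets that differ only by renamed variables
a = "int total = 0; for (int i = 0; i < n; i++) { total += grades[i]; }"
b = "int sum = 0;   for (int k = 0; k < n; k++) { sum += marks[k]; }"
if similarity(a, b) > 0.8:          # threshold chosen arbitrarily here
    print("pair flagged for manual review, score =", similarity(a, b))
```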

  20. SENR, A Super-Efficient Code for Gravitational Wave Source Modeling: Latest Results

    Science.gov (United States)

    Ruchlin, Ian; Etienne, Zachariah; Baumgarte, Thomas

    2017-01-01

    The science we extract from gravitational wave observations will be limited by our theoretical understanding, so with the recent breakthroughs by LIGO, reliable gravitational wave source modeling has never been more critical. Due to efficiency considerations, current numerical relativity codes are very limited in their applicability to direct LIGO source modeling, so it is important to develop new strategies for making our codes more efficient. We introduce SENR, a Super-Efficient, open-development numerical relativity (NR) code aimed at improving the efficiency of moving-puncture-based LIGO gravitational wave source modeling by 100x. SENR builds upon recent work, in which the BSSN equations are evolved in static spherical coordinates, to allow dynamical coordinates with arbitrary spatial distributions. The physical domain is mapped to a uniform-resolution grid on which derivative operations are approximated using standard central finite difference stencils. The source code is designed to be human-readable, efficient, parallelized, and readily extensible. We present the latest results from the SENR code.
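
    As a generic illustration of the "standard central finite difference stencils" mentioned above (not SENR's actual implementation), a fourth-order central first derivative on a uniform periodic grid can be written as follows.

```python
import numpy as np

def d_dx_central4(f, dx):
    """Fourth-order central finite difference on a uniform periodic grid:
    f'(x_i) ~ (-f[i+2] + 8 f[i+1] - 8 f[i-1] + f[i-2]) / (12 dx)."""
    return (-np.roll(f, -2) + 8.0 * np.roll(f, -1)
            - 8.0 * np.roll(f, 1) + np.roll(f, 2)) / (12.0 * dx)

# usage: the derivative of sin(x) is recovered to O(dx^4) accuracy
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
dx = x[1] - x[0]
error = np.max(np.abs(d_dx_central4(np.sin(x), dx) - np.cos(x)))
print(error)   # of order 1e-6 on this grid
```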

  1. Self-consistent modeling of DEMOs with 1.5D BALDUR integrated predictive modeling code

    Science.gov (United States)

    Wisitsorasak, A.; Somjinda, B.; Promping, J.; Onjun, T.

    2017-02-01

    Self-consistent simulations of four DEMO designs proposed by teams from China, Europe, India, and Korea are carried out using the BALDUR integrated predictive modeling code, in which theory-based models are used for both core transport and boundary conditions. In these simulations, a combination of the NCLASS neoclassical transport model and the multimode (MMM95) anomalous transport model is used to compute the core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a pedestal temperature model based on a combination of magnetic and flow shear stabilization, pedestal width scaling and an infinite-n ballooning pressure gradient model, and a pedestal density model based on a line-average density. Even though an optimistic scenario is considered, the simulation results suggest that, with the exclusion of ELMs, the fusion gain Q obtained for these reactors is pessimistic compared to their original designs, i.e. 52% for the Chinese design, 63% for the European design, 22% for the Korean design, and 26% for the Indian design. In addition, the predicted bootstrap current fractions are also found to be lower than in the original designs, reaching the following fractions of the design values: 0.49 (China), 0.66 (Europe), and 0.58 (India). Furthermore, in relation to sensitivity, it is found that increasing the auxiliary heating power and the electron line-average density above their design values yields an enhancement of fusion performance. In addition, inclusion of sawtooth oscillation effects demonstrates positive impacts on the plasma and fusion performance in the European, Indian and Korean DEMOs, but degrades the performance in the Chinese DEMO.

  2. Basic Pilot Code Development for Two-Fluid, Three-Field Model

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Jae Jun; Bae, S. W.; Lee, Y. J.; Chung, B. D.; Hwang, M.; Ha, K. S.; Kang, D. H

    2006-03-15

    A basic pilot code for a one-dimensional, transient, two-fluid, three-field model has been developed. The basic pilot code has been verified using 9 conceptual problems. The results of the verification are summarized below: - It was confirmed that the basic pilot code can simulate various flow conditions (such as single-phase liquid flow, bubbly flow, slug/churn turbulent flow, annular-mist flow, and single-phase vapor flow) and the transitions between them. A mist flow was not simulated explicitly, but it appears that the basic pilot code can handle mist flow conditions. - The pilot code was programmed so that the source terms of the governing equations and the numerical solution schemes can be easily tested. - Mass and energy conservation was confirmed for single-phase liquid and single-phase vapor flows. - It was confirmed that the inlet pressure and velocity boundary conditions work properly. - It was confirmed that, for single- and two-phase flows, the velocity and temperature of the non-existing phase are calculated as intended. - During the simulation of a two-phase flow, the calculation reaches a quasi-steady state with small-amplitude oscillations. The oscillations seem to be induced by numerical causes. The research items for the improvement of the basic pilot code are listed in the last section of this report.

  3. A Perceptual Model for Sinusoidal Audio Coding Based on Spectral Integration

    Directory of Open Access Journals (Sweden)

    Jensen Søren Holdt

    2005-01-01

    Full Text Available Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio, and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of audio signals. In this paper, we present a new perceptual model that predicts masked thresholds for sinusoidal distortions. The model relies on signal detection theory and incorporates more recent insights about spectral and temporal integration in auditory masking. As a consequence, the model is able to predict the distortion detectability. In fact, the distortion detectability defines a (perceptually relevant) norm on the underlying signal space which is beneficial for optimisation algorithms such as rate-distortion optimisation or linear predictive coding. We evaluate the merits of the model by combining it with a sinusoidal extraction method and compare the results with those obtained with the ISO MPEG-1 Layer I-II recommended model. Listening tests show a clear preference for the new model. More specifically, the model presented here leads to a reduction of more than 20% in the number of sinusoids needed to represent signals at a given quality level.

  4. THATCH: A computer code for modelling thermal networks of high- temperature gas-cooled nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Kroeger, P.G.; Kennett, R.J.; Colman, J.; Ginsberg, T. (Brookhaven National Lab., Upton, NY (United States))

    1991-10-01

    This report documents the THATCH code, which can be used to model general thermal and flow networks of solids and coolant channels in two-dimensional r-z geometries. The main application of THATCH is to model reactor thermo-hydraulic transients in High-Temperature Gas-Cooled Reactors (HTGRs). The available modules simulate pressurized or depressurized core heatup transients, and heat transfer to general exterior sinks or to specific passive Reactor Cavity Cooling Systems, which can be air or water-cooled. Graphite oxidation during air or water ingress can be modelled, including the effects of added combustion products in the gas flow and the additional chemical energy release. A point kinetics model is available for analyzing reactivity excursions, for instance due to water ingress, as well as hypothetical no-scram scenarios. For most HTGR transients, which generally range over hours, a user-selected nodalization of the core in r-z geometry is used. However, a separate model of heat transfer in the symmetry element of each fuel element is also available for very rapid transients; this model can be coupled to the traditional coarser r-z nodalization. This report describes the mathematical models used in the code and the method of solution, as well as the code and its various sub-elements. Details of the input data and file usage, with file formats, are given for the code, as well as for several preprocessing and postprocessing options. The THATCH model of the currently applicable 350 MW{sub th} reactor is described. Input data for four sample cases are given, with output available in fiche form. Installation requirements and code limitations, as well as the most common error indications, are listed. 31 refs., 23 figs., 32 tabs.

  5. A Neuronal Model of Predictive Coding Accounting for the Mismatch Negativity

    OpenAIRE

    Wacongne, Catherine; Changeux, Jean-Pierre; Dehaene, Stanislas

    2012-01-01

    International audience; The mismatch negativity (MMN) is thought to index the activation of specialized neural networks for active prediction and deviance detection. However, a detailed neuronal model of the neurobiological mechanisms underlying the MMN is still lacking, and its computational foundations remain debated. We propose here a detailed neuronal model of auditory cortex, based on predictive coding, that accounts for the critical features of MMN. The model is entirely composed of spi...

  6. Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture

    Science.gov (United States)

    Meng, Chunfang

    2017-03-01

    We present Defmod, an open source (linear) finite element code that enables us to efficiently model crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow and frictional fault slip. Ali (2015) provides the original code, introducing an implicit solver for (quasi-)static problems and an explicit solver for dynamic problems. The fault constraint is implemented via Lagrange multipliers. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static state and the dynamic state. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g. due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against established results.

  7. Present capabilities and new developments in antenna modeling with the numerical electromagnetics code NEC

    Energy Technology Data Exchange (ETDEWEB)

    Burke, G.J.

    1988-04-08

    Computer modeling of antennas, since its start in the late 1960s, has become a powerful and widely used tool for antenna design. Computer codes have been developed based on the Method of Moments, the Geometrical Theory of Diffraction, or integration of Maxwell's equations. Of such tools, the Numerical Electromagnetics Code-Method of Moments (NEC) has become one of the most widely used codes for modeling resonant-sized antennas. There are several reasons for this, including the systematic updating and extension of its capabilities, extensive user-oriented documentation, and the accessibility of its developers for user assistance. The result is that there are estimated to be several hundred users of various versions of NEC worldwide. 23 refs., 10 figs.

  8. Bootstrap imputation with a disease probability model minimized bias from misclassification due to administrative database codes.

    Science.gov (United States)

    van Walraven, Carl

    2017-04-01

    Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status. It is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results in which renal failure status was determined using surrogate measures, including: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and its association with covariates was minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which the model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized by using bootstrap methods to impute condition status from multivariable model-derived probability estimates.
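
    The gist of the bootstrap imputation approach, drawing a disease status for each patient from a model-derived probability inside every bootstrap resample, can be sketched as follows. The variable names and the synthetic probabilities are illustrative assumptions, not the paper's data or implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_imputed_prevalence(prob_disease: np.ndarray, n_boot: int = 200) -> np.ndarray:
    """Impute disease status from model-based probabilities within each bootstrap
    resample and return the distribution of estimated prevalence."""
    n = len(prob_disease)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # bootstrap resample of patients
        p = prob_disease[idx]
        status = rng.random(n) < p                # impute status from the probability
        estimates.append(status.mean())
    return np.array(estimates)

# Hypothetical probabilities from a previously validated prediction model:
probs = rng.beta(1.0, 20.0, size=50_074)          # skewed toward low prevalence
prevalence_draws = bootstrap_imputed_prevalence(probs)
prevalence_estimate = prevalence_draws.mean()
```

    The same resampling loop can carry a regression of the imputed status on covariates, so that both the prevalence and the covariate associations inherit the uncertainty of the imputation.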

  9. Performances of the Front-End Electronics for the HADES RPC TOF wall on a 12C beam

    Science.gov (United States)

    Belver, D.; Cabanelas, P.; Castro, E.; Díaz, J.; Garzón, J. A.; Gil, A.; Gonzalez-Diaz, D.; Koenig, W.; Traxler, M.; Zapata, M.

    2009-05-01

    A Front-End Electronics (FEE) chain for accurate timing measurements has been developed for the RPC wall upgrade of the High-Acceptance DiElectron Spectrometer (HADES). The wall will cover an area of around 8 m² with 1122 RPC cells (2244 electronic channels). The FEE chain consists of two boards: a four-channel DaughterBOard (DBO) and a 32-channel MotherBOard (MBO). The DBO uses a fast 2 GHz amplifier feeding a discriminator. The time and the charge information are encoded in the leading and the trailing edge (by a charge-to-width method) of an LVDS signal. Each MBO houses up to eight DBOs, providing them with a regulated voltage supply, threshold values via DACs, test signals, and collection of their trigger outputs. The MBO delivers LVDS signals to a time-to-digital converter readout board (TRB) based on the HPTDC for data acquisition. In this work, we present the performance of the FEE measured using (a) narrow electronic test pulses and (b) real signals read out in a fully instrumented RPC sextant installed in its final position at HADES. The detector was exposed to particles coming from reactions of a 12C beam on Be and Nb targets at 2 GeV/A kinetic energy. Results for the whole electronic chain (DBO+MBO+TRB) show a timing jitter of around 40 ps/channel for pulses above 100 fC and 80 ps/channel for beam data taken with the RPC.

  10. Development Of Sputtering Models For Fluids-Based Plasma Simulation Codes

    Science.gov (United States)

    Veitzer, Seth; Beckwith, Kristian; Stoltz, Peter

    2015-09-01

    RF-driven plasma devices such as ion sources and plasma processing devices for many industrial and research applications benefit from detailed numerical modeling. Simulation of these devices using explicit PIC codes is difficult due to inherent separations of time and spatial scales. An alternative type of model is fluid-based codes coupled with electromagnetics, which are applicable to modeling higher-density plasmas in the time domain while relaxing time step requirements. To accurately model plasma-surface processes, such as physical sputtering and secondary electron emission, kinetic particle models have been developed in which particles are emitted from a material surface due to plasma ion bombardment. In fluid models, plasma properties are defined on a cell-by-cell basis and distributions for individual particle properties are assumed. This adds complexity to surface process modeling, which we describe here. We describe the implementation of sputtering models into the hydrodynamic plasma simulation code USim, as well as methods to improve the accuracy of fluid-based simulation of plasma-surface interactions through better modeling of heat fluxes. This work was performed under the auspices of the Department of Energy, Office of Basic Energy Sciences Award #DE-SC0009585.

  11. Motion-compensated coding and frame rate up-conversion: models and analysis.

    Science.gov (United States)

    Dar, Yehuda; Bruckstein, Alfred M

    2015-07-01

    Block-based motion estimation (ME) and motion compensation (MC) techniques are widely used in modern video processing algorithms and compression systems. The great variety of video applications and devices results in diverse compression specifications, such as frame rates and bit rates. In this paper, we study the effect of frame rate and compression bit rate on block-based ME and MC as commonly utilized in inter-frame coding and frame rate up-conversion (FRUC). This joint examination yields a theoretical foundation for comparing MC procedures in coding and FRUC. First, the video signal is locally modeled as a noisy translational motion of an image. Then, we theoretically model the motion-compensated prediction of available and absent frames, as in coding and FRUC applications, respectively. The theoretical MC-prediction error is studied further and its autocorrelation function is calculated, yielding useful separable simplifications for the coding application. We argue that a linear relation exists between the variance of the MC-prediction error and the temporal distance. While the relevant distance in MC coding is between the predicted and reference frames, MC-FRUC is affected by the distance between the frames available for interpolation. We compare our estimates with experimental results and show that the theory explains the empirical behavior qualitatively. Then, we use the proposed models to analyze a system for improving video coding at low bit rates using spatio-temporal scaling. Although this concept is employed in practice in various forms, so far it has lacked a theoretical justification. Here we harness the proposed MC models to present a comprehensive analysis of the system and to qualitatively predict the experimental results.
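
    The claimed linear growth of the MC-prediction-error variance with temporal distance can be illustrated with a toy experiment. The random-walk frame model below is an assumption made purely for illustration (after ideal motion compensation, each frame adds an independent innovation) and is not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: after ideal motion compensation, the residual content of frame t is
# frame t-1 plus an independent innovation, so the prediction error over a
# temporal distance d is a sum of d innovations and its variance grows like d.
n_frames, width, sigma = 400, 256, 0.1
innovations = sigma * rng.standard_normal((n_frames, width))
frames = np.cumsum(innovations, axis=0)

distances = np.arange(1, 9)
variances = [float((frames[d:] - frames[:-d]).var()) for d in distances]
slope, intercept = np.polyfit(distances, variances, 1)   # slope is close to sigma**2
```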

  12. Solar optical codes evaluation for modeling and analyzing complex solar receiver geometries

    Science.gov (United States)

    Yellowhair, Julius; Ortega, Jesus D.; Christian, Joshua M.; Ho, Clifford K.

    2014-09-01

    Solar optical modeling tools are valuable for modeling and predicting the performance of solar technology systems. Four optical modeling tools were evaluated using the National Solar Thermal Test Facility heliostat field combined with a flat plate receiver geometry as a benchmark. The four optical modeling tools evaluated were DELSOL, HELIOS, SolTrace, and Tonatiuh. All are available for free from their respective developers. DELSOL and HELIOS both use a convolution of the sunshape and optical errors for rapid calculation of the incident irradiance profiles on the receiver surfaces. SolTrace and Tonatiuh use ray-tracing methods to intersect the reflected solar rays with the receiver surfaces and construct irradiance profiles. We found the ray-tracing tools, although slower in computation speed, to be more flexible for modeling complex receiver geometries, whereas DELSOL and HELIOS were limited to standard receiver geometries such as flat plate, cylinder, and cavity receivers. We also list the strengths and deficiencies of the tools to indicate which tool is preferable depending on the modeling and design needs. We provide an example of using SolTrace for modeling nonconventional receiver geometries. The goal is to transfer the irradiance profiles on the receiver surfaces calculated in an optical code to a computational fluid dynamics code such as ANSYS Fluent. This approach eliminates the need for using discrete ordinates or discrete radiation transfer models, which are computationally intensive, within the CFD code. The irradiance profiles on the receiver surfaces then allow for thermal and fluid analysis of the receiver.

  13. A study on the dependency between turbulent models and mesh configurations of CFD codes

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Jungjin; Heo, Yujin; Jerng, Dong-Wook [CAU, Seoul (Korea, Republic of)

    2015-10-15

    This paper focuses on the analysis of the behavior of hydrogen mixing and hydrogen stratification, using the GOTHIC code and a CFD code. Specifically, we examined the mesh sensitivity and how the turbulence model affects hydrogen stratification or hydrogen mixing, depending on the mesh configuration. In this work, sensitivity analyses for the meshes and the turbulence models were conducted for mixing and stratification phenomena. During severe accidents in nuclear power plants, hydrogen may be generated, and this will complicate the atmospheric condition of the containment by causing stratification of air, steam, and hydrogen. This could significantly impact containment integrity analyses, as hydrogen could accumulate in local regions. From this need arises the importance of research on the stratification of gases in the containment. Two computational fluid dynamics codes, GOTHIC and STAR-CCM+, were adopted, and the computational results were benchmarked against the experimental data from the PANDA facility. The main findings observed through the present work can be summarized as follows: 1) In the case of the GOTHIC code, the aspect ratio of the mesh was found to be more important than the mesh size. Also, if the number of mesh cells is over 3,000, the effects of the turbulence models were marginal. 2) For STAR-CCM+, the tendency is quite different from the GOTHIC code. That is, the effects of the turbulence models were small for a small number of mesh cells; however, as the number of cells increases, the effects of the turbulence models become significant. Another observation is that, away from the injection orifice, the role of the turbulence models tended to be important due to the nature of the mixing process and the induced jet stream.

  14. A model code on co-determination and CSR : The Netherlands: A bottom-up approach

    NARCIS (Netherlands)

    Lambooy, T.E.

    2011-01-01

    This article discusses the works council’s role in the determination of a company’s CSR strategy and the implementation thereof throughout the organisation. The association of the works councils of multinational companies with a base in the Netherlands has recently developed a ‘Model Code on Co-Dete

  15. Code-switched English pronunciation modeling for Swahili spoken term detection

    CSIR Research Space (South Africa)

    Kleynhans, N

    2016-05-01

    Full Text Available Computer Science 81 (2016) 128–135. 5th Workshop on Spoken Language Technology for Under-resourced Languages, SLTU 2016, 9–12 May 2016, Yogyakarta, Indonesia. Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection. Neil...

  16. Validation of an electroseismic and seismoelectric modeling code, for layered earth models, by the explicit homogeneous space solutions

    NARCIS (Netherlands)

    Grobbe, N.; Slob, E.C.

    2013-01-01

    We have developed an analytically based, energy flux-normalized numerical modeling code (ESSEMOD), capable of modeling the wave propagation of all existing ElectroSeismic and SeismoElectric source-receiver combinations in horizontally layered configurations. We compare the results of several of these

  17. The production of {eta} and {omega} mesons in 3.5 GeV p+p interaction in HADES

    Energy Technology Data Exchange (ETDEWEB)

    Teilab, Khaled

    2011-08-31

    The study of meson production in proton-proton collisions in the energy range up to one GeV above the production threshold provides valuable information about the nature of the nucleon-nucleon interaction. Theoretical models describe the interaction between nucleons via the exchange of mesons. In such models, different mechanisms contribute to the production of the mesons in nucleon-nucleon collisions. The measurement of total and differential production cross sections provides information which can help in determining the magnitude of the various mechanisms. Moreover, such cross section information serves as an input to the transport calculations which describe e.g. the production of e{sup +}e{sup -} pairs in proton- and pion-induced reactions as well as in heavy ion collisions. In this thesis, the production of {omega} and {eta} mesons in proton-proton collisions at 3.5 GeV beam energy was studied using the High Acceptance DiElectron Spectrometer (HADES) installed at the Schwerionensynchrotron (SIS 18) at the Helmholtzzentrum fuer Schwerionenforschung in Darmstadt. About 80 000 {omega} mesons and 35 000 {eta} mesons were reconstructed. Total production cross sections of both mesons were determined. Furthermore, the collected statistics allowed for extracting angular distributions of both mesons as well as performing Dalitz plot studies. The {omega} and {eta} mesons were reconstructed via their decay into three pions ({pi}{sup +}{pi}{sup -}{pi}{sup 0}) in the exclusive reaction pp {yields} pp{pi}{sup +}{pi}{sup -}{pi}{sup 0}. The charged particles were identified via their characteristic energy loss, via the measurement of their time of flight and momentum, or using kinematics. The neutral pion was reconstructed using the missing mass method. A kinematic fit was applied to improve the resolution and to select events in which a {pi}{sup 0} was produced. The correction of measured yields for the effects of spectrometer acceptance was done as a function of four
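
    The missing mass method used to reconstruct the neutral pion can be illustrated with a short four-vector sketch. The kinematics below (a fixed-target pp reaction at 3.5 GeV kinetic beam energy, ideal resolution, hypothetical track arrays) are simplified for illustration and are not the analysis code of the thesis.

```python
import numpy as np

M_P, M_PI0 = 0.938272, 0.134977      # proton and pi0 masses in GeV/c^2

def four_vector(px, py, pz, m):
    """Return [E, px, py, pz] for a particle of mass m."""
    e = np.sqrt(px**2 + py**2 + pz**2 + m**2)
    return np.array([e, px, py, pz])

def mass_squared(p):
    return p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2

# pp -> pp pi+ pi- pi0 at T_beam = 3.5 GeV (fixed target): the pi0 shows up as a
# peak near M_PI0 in the missing mass of the four measured charged tracks.
t_beam = 3.5
p_beam = four_vector(0.0, 0.0, np.sqrt((t_beam + M_P)**2 - M_P**2), M_P)
p_target = four_vector(0.0, 0.0, 0.0, M_P)

def missing_mass(charged_tracks):
    """charged_tracks: four-vectors of the two protons, the pi+ and the pi-."""
    p_missing = p_beam + p_target - sum(charged_tracks)
    mm2 = mass_squared(p_missing)
    return np.sqrt(mm2) if mm2 > 0.0 else float("nan")
```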

  18. Dynamic Model for the Z Accelerator Vacuum Section Based on Transmission Line Code

    Institute of Scientific and Technical Information of China (English)

    呼义翔; 雷天时; 吴撼宇; 郭宁; 韩娟娟; 邱爱慈; 王亮平; 黄涛; 丛培天; 张信军; 李岩; 曾正中; 孙铁平

    2011-01-01

    The transmission-line-circuit model of the Z accelerator, developed originally by W. A. STYGAR, P. A. CORCORAN, et al., is revised. The revised model uses different calculations for the electron loss and flow impedance in the magnetically insulated transmission line system of the Z accelerator before and after magnetic insulation is established. By including the electron pressure and a zero electric field at the cathode, a closed set of equations is obtained at each time step, and the dynamic shunt resistance (used to represent any electron loss to the anode) and the flow impedance are solved for; these have been incorporated into the transmission line code for simulations of the vacuum section of the Z accelerator. Finally, the results are discussed in comparison with earlier findings to show the effectiveness and limitations of the model.

  19. Model document for code officials on solar heating and cooling of buildings. Second draft

    Energy Technology Data Exchange (ETDEWEB)

    Trant, B. S.

    1979-09-01

    Guidelines and codes for the construction, alteration, moving, demolition, repair and use of solar energy systems and parts thereof used for space heating and cooling, for water heating and for processing purposes in, on, or adjacent to buildings and appurtenant structures are presented. The necessary references are included wherever these provisions affect or are affected by the requirements of nationally recognized standards or model codes. The purpose of this document is to safeguard life and limb, health, property and public welfare by regulating and controlling the design, construction, quality of materials, location and maintenance of solar energy systems in, on, or adjacent to buildings and appurtenant structures.

  20. Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence

    OpenAIRE

    Pastawski, Fernando; Yoshida, Beni; Harlow, Daniel; Preskill, John

    2015-01-01

    We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be ...

  1. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC).

    Energy Technology Data Exchange (ETDEWEB)

    Schultz, Peter Andrew

    2011-12-01

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and in the constitutive models they inform or generate. This report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum scale modeling and simulation (M&S), and a plan to incrementally incorporate effective V&V into subcontinuum scale M&S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum scale phenomena.

  2. Improvement of reflood model in RELAP5 code based on sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dong; Liu, Xiaojing; Yang, Yanhua, E-mail: yanhuay@sjtu.edu.cn

    2016-07-15

    Highlights: • Sensitivity analysis is performed on the reflood model of RELAP5. • The selected influential models are discussed and modified. • The modifications are assessed against the FEBA experiment and better predictions are obtained. - Abstract: Reflooding is an important and complex process for the safety of nuclear reactors during a loss of coolant accident (LOCA). Accurate prediction of the reflooding behavior is one of the challenging tasks for current system code development. RELAP5, as a widely used system code, has the capability to simulate this process, but with limited accuracy, especially for low inlet flow rate reflooding conditions. Through a preliminary assessment with six FEBA (Flooding Experiments with Blocked Arrays) tests, it is observed that the peak cladding temperature (PCT) is generally underestimated and bundle quench is predicted too early compared to the experimental data. In this paper, the improvement of the constitutive models related to reflooding is carried out based on a single-parameter sensitivity analysis. The film boiling heat transfer model and the interfacial friction model of dispersed flow are selected as the models most influential on the results of interest. Studies and discussions are then specifically focused on these sensitive models and proper modifications are recommended. The proposed improvements are implemented in the RELAP5 code and assessed against the FEBA experiment. Better agreement between calculations and measured data for both cladding temperature and quench time is obtained.

  3. Improved solidification influence modelling for Eulerian fuel-coolant interaction codes

    Energy Technology Data Exchange (ETDEWEB)

    Ursic, Mitja, E-mail: mitja.ursic@ijs.s [Jozef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Leskovar, Matjaz; Mavko, Borut [Jozef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia)

    2011-04-15

    Steam explosion experiments revealed important differences in efficiency between simulant alumina and oxidic corium melts. The experimentally observed differences are attributed in large part to differences in melt droplet solidification and void production, which are limiting phenomena in the steam explosion process and have to be adequately modelled in fuel-coolant interaction codes. This article focuses on the modelling of the solidification effect. An improved solidification influence modelling approach for Eulerian fuel-coolant interaction codes was developed and is presented herein. The solidification influence modelling in fuel-coolant interaction codes is strongly related to the modelling of the temperature profile and the mechanical effect of the crust on the fragmentation process. Therefore the first objective was to introduce an improved temperature profile model and a fragmentation criterion for partly solidified droplets. The fragmentation criterion was based on the established modified Weber number, which considers the crust stiffness as a stabilizing force acting to retain the crust in the presence of the hydrodynamic forces. The modified Weber number was validated on experimental data. The application of the developed improved solidification influence modelling enables an improved determination of the melt droplet mass, which can be efficiently involved in the fine fragmentation during the steam explosion process. Additionally, the void production modelling is also improved, because it is strongly related to the temperature profile modelling in the frame of the solidification influence modelling. Therefore the second objective was to enable an improved solidification influence modelling in codes with an Eulerian formulation of the droplet field. Two additional transported model parameters, based on the most important droplet features regarding the fuel-coolant interaction behaviour, were derived. First, the crust stiffness was

  4. TRIPOLI-4{sup ®} Monte Carlo code ITER A-lite neutronic model validation

    Energy Technology Data Exchange (ETDEWEB)

    Jaboulay, Jean-Charles, E-mail: jean-charles.jaboulay@cea.fr [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France); Cayla, Pierre-Yves; Fausser, Clement [MILLENNIUM, 16 Av du Québec Silic 628, F-91945 Villebon sur Yvette (France); Damian, Frederic; Lee, Yi-Kang; Puma, Antonella Li; Trama, Jean-Christophe [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France)

    2014-10-15

    3D Monte Carlo transport codes are extensively used in neutronic analysis, especially in radiation protection and shielding analyses for fission and fusion reactors. TRIPOLI-4{sup ®} is a Monte Carlo code developed by CEA. The aim of this paper is to show its capability to model a large-scale fusion reactor with a complex neutron source and geometry. A benchmark between MCNP5 and TRIPOLI-4{sup ®} on the ITER A-lite model was carried out; the neutron flux, the nuclear heating in the blankets and the tritium production rate in the European TBMs were evaluated and compared. The methodology to build the TRIPOLI-4{sup ®} A-lite model is based on MCAM and the MCNP A-lite model. Simplified TBMs, from KIT, were integrated in the equatorial port. Good agreement between MCNP and TRIPOLI-4{sup ®} is shown; the discrepancies are mostly contained within the statistical error.

  5. Underwater Acoustic Networks: Channel Models and Network Coding based Lower Bound to Transmission Power for Multicast

    CERN Document Server

    Lucani, Daniel E; Stojanovic, Milica

    2008-01-01

    The goal of this paper is two-fold. First, to establish a tractable model for the underwater acoustic channel useful for network optimization in terms of convexity. Second, to propose a network coding based lower bound for transmission power in underwater acoustic networks, and compare this bound to the performance of several network layer schemes. The underwater acoustic channel is characterized by a path loss that depends strongly on transmission distance and signal frequency. The exact relationship among power, transmission band, distance and capacity for the Gaussian noise scenario is a complicated one. We provide a closed-form approximate model for 1) transmission power and 2) optimal frequency band to use, as functions of distance and capacity. The model is obtained through numerical evaluation of analytical results that take into account physical models of acoustic propagation loss and ambient noise. Network coding is applied to determine a lower bound to transmission power for a multicast scenario, fo...
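
    The paper derives its own closed-form approximations for transmission power and the optimal band; as a stand-in illustration of the physics it builds on, the sketch below evaluates the standard spreading-plus-absorption path loss with Thorp's empirical absorption formula. The spreading factor and the example distances are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def thorp_absorption_db_per_km(f_khz: float) -> float:
    """Thorp's empirical absorption coefficient in dB/km (f in kHz)."""
    f2 = f_khz ** 2
    return (0.11 * f2 / (1.0 + f2)
            + 44.0 * f2 / (4100.0 + f2)
            + 2.75e-4 * f2
            + 0.003)

def path_loss_db(d_km: float, f_khz: float, k: float = 1.5) -> float:
    """A(d, f) in dB: k*10*log10(d) spreading loss plus d*alpha(f) absorption.
    k is the spreading factor (1 cylindrical, 2 spherical, 1.5 'practical')."""
    d_m = d_km * 1000.0
    return k * 10.0 * np.log10(d_m) + d_km * thorp_absorption_db_per_km(f_khz)

# Example: loss over 5 km at 10 kHz and 30 kHz; higher frequencies pay a growing
# absorption penalty, which is why the usable band shrinks with distance.
loss_10khz = path_loss_db(5.0, 10.0)
loss_30khz = path_loss_db(5.0, 30.0)
```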

  6. A simple modelling of mass diffusion effects on condensation with noncondensable gases for the CATHARE Code

    Energy Technology Data Exchange (ETDEWEB)

    Coste, P.; Bestion, D. [Commissariat a l Energie Atomique, Grenoble (France)

    1995-09-01

    This paper presents a simple modelling of mass diffusion effects on condensation. In the presence of noncondensable gases, the mass diffusion near the interface is modelled using the heat and mass transfer analogy and normally requires an iterative procedure to calculate the interface temperature. Simplifications of the model and of the solution procedure are used without important degradation of the predictions. The model is assessed on experimental data for both film condensation in vertical tubes and direct contact condensation in horizontal tubes, including air-steam, nitrogen-steam and helium-steam data. It is implemented in the CATHARE code, a French system code for nuclear reactor thermal hydraulics developed by CEA, EDF, and FRAMATOME.

  7. Distortion-rate models for entropy-coded lattice vector quantization.

    Science.gov (United States)

    Raffy, P; Antonini, M; Barlaud, M

    2000-01-01

    The increasing demand for real-time applications requires the use of variable-rate quantizers having good performance in the low bit rate domain. In order to minimize the complexity of quantization, while maintaining a reasonably high PSNR, we propose to use an entropy-coded lattice vector quantizer (ECLVQ). These quantizers have been shown to outperform the well-known EZW algorithm in terms of the rate-distortion tradeoff. In this paper, we focus our attention on modeling the mean squared error (MSE) distortion and the prefix code rate for ECLVQ. First, we generalize the distortion model of Jeong and Gibson (1993) on fixed-rate cubic quantizers to lattices under a high-rate assumption. Second, we derive new rate models for ECLVQ, efficient at low bit rates without any high-rate assumptions. Simulation results confirm the precision of our models.

  8. A finite-temperature Hartree-Fock code for shell-model Hamiltonians

    Science.gov (United States)

    Bertsch, G. F.; Mehlhaff, J. M.

    2016-10-01

    The codes HFgradZ.py and HFgradT.py find axially symmetric minima of a Hartree-Fock energy functional for a Hamiltonian supplied in a shell model basis. The functional to be minimized is the Hartree-Fock energy for zero-temperature properties or the Hartree-Fock grand potential for finite-temperature properties (thermal energy, entropy). The minimization may be subjected to additional constraints besides axial symmetry and nucleon numbers. A single-particle operator can be used to constrain the minimization by adding it to the single-particle Hamiltonian with a Lagrange multiplier. One can also constrain its expectation value in the zero-temperature code. Also the orbital filling can be constrained in the zero-temperature code, fixing the number of nucleons having given Kπ quantum numbers. This is particularly useful to resolve near-degeneracies among distinct minima.
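
    The constraint mechanism described above, adding a single-particle operator to the Hamiltonian with a Lagrange multiplier and filling the lowest orbitals, can be sketched generically as follows. This is not code from HFgradZ.py or HFgradT.py; the demo matrices and the simple occupation rule are illustrative assumptions for the zero-temperature case.

```python
import numpy as np

def constrained_orbitals(h_sp: np.ndarray, q_op: np.ndarray, lam: float, n_fill: int):
    """Diagonalize h + lam*Q in the single-particle basis and fill the lowest
    n_fill orbitals (zero-temperature case). Returns energies, occupations, <Q>."""
    eps, orbitals = np.linalg.eigh(h_sp + lam * q_op)
    occ = np.zeros_like(eps)
    occ[:n_fill] = 1.0
    q_expect = float(np.einsum("i,ai,ab,bi->", occ, orbitals, q_op, orbitals))
    return eps, occ, q_expect

# Demo with random symmetric matrices standing in for the mean field and the
# constraint operator; lam would be tuned (e.g. by bisection) until <Q> matches
# the requested constraint value inside the self-consistent loop.
rng = np.random.default_rng(0)
a = rng.standard_normal((8, 8)); h_demo = (a + a.T) / 2.0
b = rng.standard_normal((8, 8)); q_demo = (b + b.T) / 2.0
energies, occupations, q_value = constrained_orbitals(h_demo, q_demo, lam=0.5, n_fill=4)
```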

  9. Physics Based Model for Cryogenic Chilldown and Loading. Part IV: Code Structure

    Science.gov (United States)

    Luchinsky, D. G.; Smelyanskiy, V. N.; Brown, B.

    2014-01-01

    This is the fourth report in a series of technical reports that describe the application of a separated two-phase flow model to the cryogenic loading operation. In this report we present the structure of the code. The code consists of five major modules: (1) geometry module; (2) solver; (3) material properties; (4) correlations; and finally (5) stability control module. The two key modules - solver and correlations - are further divided into a number of submodules. Most of the physics and knowledge databases related to the properties of cryogenic two-phase flow are included in the cryogenic correlations module. The functional form of those correlations is not well established and is a subject of extensive research. Multiple parametric forms for various correlations are currently available. Some of them are included in the correlations module, as will be described in detail in a separate technical report. Here we describe the overall structure of the code and focus on the details of the solver and stability control modules.

  10. Modeling of transient dust events in fusion edge plasmas with DUSTT-UEDGE code

    Science.gov (United States)

    Smirnov, R. D.; Krasheninnikov, S. I.; Pigarov, A. Yu.; Rognlien, T. D.

    2016-10-01

    It is well known that dust can be produced in fusion devices by various processes involving structural damage of plasma-exposed materials. Recent computational and experimental studies have demonstrated that dust production and the associated plasma contamination can present serious challenges to achieving a sustained fusion reaction in future fusion devices, such as ITER. To analyze the impact that dust can have on the performance of fusion plasmas, the authors use modeling of coupled dust and plasma transport with the DUSTT-UEDGE code. In the past, only steady-state computational studies, presuming a continuous source of dust influx, were performed due to the iterative nature of the DUSTT-UEDGE code coupling. However, experimental observations demonstrate that intermittent injection of large quantities of dust, often associated with transient plasma events, may severely impact fusion plasma conditions and even lead to discharge termination. In this work we report on progress in the coupling of the DUSTT-UEDGE codes in a time-dependent regime, which allows modeling of transient dust-plasma transport processes. The methodology and details of the time-dependent code coupling, as well as examples of simulations of transient dust-plasma transport phenomena, will be presented. These include time-dependent modeling of the impact of short outbursts of different quantities of tungsten dust in the ITER divertor on the edge plasma parameters. The plasma response to outbursts with various durations, locations, and ejected dust sizes will be analyzed.

  11. Development of full wave code for modeling RF fields in hot non-uniform plasmas

    Science.gov (United States)

    Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo

    2016-10-01

    FAR-TECH, Inc. is developing a full wave RF modeling code to model RF fields in fusion devices and in general plasma applications. As an important component of the code, an adaptive meshless technique is introduced to solve the wave equations, which allows resolving plasma resonances efficiently and adapting to the complexity of antenna geometry and device boundary. The computational points are generated using either a point elimination method or a force balancing method based on the monitor function, which is calculated by solving the cold plasma dispersion equation locally. Another part of the code is the conductivity kernel calculation, used for modeling the nonlocal hot plasma dielectric response. The conductivity kernel is calculated on a coarse grid of test points and then interpolated linearly onto the computational points. All the components of the code are parallelized using MPI and OpenMP libraries to optimize the execution speed and memory. The algorithm and the results of our numerical approach to solving 2-D wave equations in a tokamak geometry will be presented. Work is supported by the U.S. DOE SBIR program.

  12. Large Discriminative Structured Set Prediction Modeling With Max-Margin Markov Network for Lossless Image Coding.

    Science.gov (United States)

    Dai, Wenrui; Xiong, Hongkai; Wang, Jia; Zheng, Yuan F

    2014-02-01

    Inherent statistical correlations for context-based prediction and structural interdependencies for local coherence are not fully exploited in existing lossless image coding schemes. This paper proposes a novel prediction model in which the optimal correlated prediction for a set of pixels is obtained in the sense of the least code length. It not only exploits the spatial statistical correlations for the optimal prediction directly based on 2D contexts, but also formulates the data-driven structural interdependencies to make the prediction error coherent with the underlying probability distribution for coding. Under the joint constraints for local coherence, max-margin Markov networks are incorporated to combine support vector machines structurally and make a max-margin estimation for a correlated region. Specifically, the aim is to produce multiple predictions in the blocks with the model parameters learned in such a way that the distinction between the actual pixel and all possible estimations is maximized. It is proved that, with the growth of sample size, the prediction error is asymptotically upper bounded by the training error under a decomposable loss function. Incorporated into the lossless image coding framework, the proposed model outperforms most reported prediction schemes.

  13. CPAT: Coding-Potential Assessment Tool using an alignment-free logistic regression model.

    Science.gov (United States)

    Wang, Liguo; Park, Hyun Jung; Dasari, Surendra; Wang, Shengqin; Kocher, Jean-Pierre; Li, Wei

    2013-04-01

    Thousands of novel transcripts have been identified using deep transcriptome sequencing. This discovery of a large and 'hidden' transcriptome renews the demand for methods that can rapidly distinguish between coding and noncoding RNA. Here, we present a novel alignment-free method, the Coding Potential Assessment Tool (CPAT), which rapidly recognizes coding and noncoding transcripts from a large pool of candidates. To this end, CPAT uses a logistic regression model built with four sequence features: open reading frame size, open reading frame coverage, Fickett TESTCODE statistic and hexamer usage bias. The CPAT software (sensitivity: 0.96, specificity: 0.97) outperformed other state-of-the-art alignment-based software such as the Coding-Potential Calculator (sensitivity: 0.99, specificity: 0.74) and Phylo Codon Substitution Frequencies (sensitivity: 0.90, specificity: 0.63). In addition to its high accuracy, CPAT is approximately four orders of magnitude faster than the Coding-Potential Calculator and Phylo Codon Substitution Frequencies, enabling its users to process thousands of transcripts within seconds. The software accepts input sequences in either FASTA- or BED-formatted data files. We also developed a web interface for CPAT that allows users to submit sequences and receive the prediction results almost instantly.
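
    The shape of such a four-feature logistic regression classifier can be sketched as follows. The sketch is illustrative only: the feature values are placeholders, and the actual CPAT implementation performs its own feature extraction (ORF finding, Fickett statistic, hexamer log-likelihood) and is trained on curated coding and noncoding sets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Feature columns: ORF size, ORF coverage, Fickett TESTCODE score, hexamer bias.
# The values below are placeholders standing in for features extracted elsewhere.
X_train = np.array([
    [900.0, 0.75, 1.10,  0.35],   # coding-like transcript
    [150.0, 0.12, 0.45, -0.20],   # noncoding-like transcript
    [720.0, 0.60, 1.05,  0.28],
    [200.0, 0.18, 0.50, -0.05],
])
y_train = np.array([1, 0, 1, 0])  # 1 = coding, 0 = noncoding

model = LogisticRegression()
model.fit(X_train, y_train)

# Probability that a new transcript (same four features) is protein coding.
coding_probability = model.predict_proba([[600.0, 0.55, 0.95, 0.15]])[0, 1]
```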

  14. Applications of the 3-dim ICRH global wave code FISIC and comparison with other models

    Energy Technology Data Exchange (ETDEWEB)

    Kruecken, T.; Brambilla, M. (Association Euratom-Max-Planck-Institut fuer Plasmaphysik, Garching (Germany, F.R.))

    1989-02-01

    Numerical simulations of two ICRF heating experiments in ASDEX are presented, using the FISIC code, which solves the integrodifferential wave equations in the finite Larmor radius (FLR) approximation, and a ray tracing model. The different models show good agreement on the whole; we can, however, identify a few interesting toroidal effects, in particular on the efficiency of mode conversion and on the propagation of ion Bernstein waves. (author).

  15. AMICON: A multi-model interpretative code for two phase flow instrumentation with uncertainty analysis

    Science.gov (United States)

    Teague, J. W., II

    1981-08-01

    The code was designed to calculate mass fluxes and mass flux standard deviations, as well as certain other fluid physical properties. Several models are used to compute mass fluxes and uncertainties since some models provide more reliable results than others under certain flow situations. The program was specifically prepared to compute these variables using data gathered from spoolpiece instrumentation on the Thermal-Hydraulic Test Facility (THTF) and written to an Engineering Units (EU) data set.

  16. Complexity modeling for context-based adaptive binary arithmetic coding (CABAC) in H.264/AVC decoder

    Science.gov (United States)

    Lee, Szu-Wei; Kuo, C.-C. Jay

    2007-09-01

    One way to reduce the power consumption of the H.264 decoder is for the H.264 encoder to generate decoder-friendly bit streams. Following this idea, a decoding complexity model of context-based adaptive binary arithmetic coding (CABAC) for H.264/AVC is investigated in this research. Since different coding modes affect the number of quantized transformed coefficients (QTCs) and motion vectors (MVs) and, consequently, the complexity of entropy decoding, an encoder equipped with a complexity model can estimate the entropy decoding complexity and choose the coding mode that yields the best tradeoff between rate, distortion and decoding complexity. The complexity model consists of two parts: one for source data (i.e., QTCs) and the other for header data (i.e., the macro-block (MB) type and MVs). Thus, the proposed CABAC decoding complexity model of an MB is a function of the QTCs and the associated MVs, which is verified experimentally. The proposed CABAC decoding complexity model provides good estimation results for a variety of bit streams. Practical applications of this complexity model are also discussed.
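
    A minimal sketch of such a macroblock-level complexity estimate is given below. The linear functional form and the coefficient values are assumptions made for illustration; in the paper the model is derived and verified experimentally for a specific decoder.

```python
def cabac_decoding_complexity(n_qtc: int, n_mv: int,
                              c_qtc: float = 1.0, c_mv: float = 2.5,
                              c_header: float = 10.0) -> float:
    """Estimated CABAC decoding cost of one macroblock from the number of
    quantized transformed coefficients (source data) and motion vectors
    (header data). Coefficients would be fitted offline for a target decoder."""
    return c_header + c_qtc * n_qtc + c_mv * n_mv

# An encoder could add this estimate, weighted by a multiplier, to its usual
# rate-distortion cost when selecting the coding mode of each macroblock.
mode_cost_term = cabac_decoding_complexity(n_qtc=24, n_mv=4)
```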

  17. New high burnup fuel models for NRC`s licensing audit code, FRAPCON

    Energy Technology Data Exchange (ETDEWEB)

    Lanning, D.D.; Beyer, C.E.; Painter, C.L. [Pacific Northwest Laboratory, Richland, WA (United States)

    1996-03-01

    Fuel behavior models have recently been updated within the U.S. Nuclear Regulatory Commission steady-state FRAPCON code used for auditing of fuel vendor/utility codes and analyses. These modeling updates have concentrated on providing a best-estimate prediction of steady-state fuel behavior up to the maximum burnup levels of current data (60 to 65 GWd/MTU rod-average). A decade has passed since these models were last updated. Currently, some U.S. utilities and fuel vendors are requesting approval for rod-average burnups greater than 60 GWd/MTU; however, until these recent updates the NRC did not have valid fuel performance models at these higher burnup levels. Pacific Northwest Laboratory (PNL) has reviewed 15 separate effects models within the FRAPCON fuel performance code (References 1 and 2) and identified nine models that needed updating for improved prediction of fuel behavior at high burnup levels. The six separate effects models not updated were the cladding thermal properties, cladding thermal expansion, cladding creepdown, fuel specific heat, fuel thermal expansion and open gap conductance. Comparison of these models to the currently available data indicates that they still adequately predict the data within data uncertainties. The nine models identified as needing improvement for predicting high-burnup behavior are fission gas release (FGR), fuel thermal conductivity (accounting for both high burnup effects and burnable poison additions), fuel swelling, fuel relocation, radial power distribution, fuel-cladding contact gap conductance, cladding corrosion, cladding mechanical properties and cladding axial growth. Each of the updated models is described in the following sections and the model predictions are compared to currently available high burnup data.

  18. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps out at around 50 µm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model into our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.

  19. THELMA code electromagnetic model of ITER superconducting cables and application to the ENEA stability experiment

    Science.gov (United States)

    Ciotti, M.; Nijhuis, A.; Ribani, P. L.; Savoldi Richard, L.; Zanino, R.

    2006-10-01

    The new THELMA code, including a thermal-hydraulic (TH) and an electro-magnetic (EM) model of a cable-in-conduit conductor (CICC), has been developed. The TH model is at this stage relatively conventional, with two fluid components (He flowing in the annular cable region and He flowing in the central channel) being particular to the CICC of the International Thermonuclear Experimental Reactor (ITER), and two solid components (superconducting strands and jacket/conduit). In contrast, the EM model is novel and will be presented here in full detail. The results obtained from this first version of the code are compared with experimental results from pulsed tests of the ENEA stability experiment (ESE), showing good agreement between computed and measured deposited energy and subsequent temperature increase.

  20. Incorporation of the capillary hysteresis model HYSTR into the numerical code TOUGH

    Energy Technology Data Exchange (ETDEWEB)

    Niemi, A.; Bodvarsson, G.S.; Pruess, K.

    1991-11-01

    As part of the work performed to model flow in the unsaturated zone at Yucca Mountain, Nevada, a capillary hysteresis model has been developed. The computer program HYSTR has been developed to compute the hysteretic capillary pressure -- liquid saturation relationship through interpolation of tabulated data. The code can be easily incorporated into any numerical unsaturated flow simulator. A complete description of HYSTR, including a brief summary of the previous hysteresis literature, a detailed description of the program, and instructions for its incorporation into a numerical simulator, are given in the HYSTR user's manual (Niemi and Bodvarsson, 1991a). This report describes the incorporation of HYSTR into the numerical code TOUGH (Transport of Unsaturated Groundwater and Heat; Pruess, 1986). The changes made and the procedures for the use of TOUGH for hysteresis modeling are documented.
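
    The core operation, interpolating a tabulated capillary pressure -- liquid saturation curve on the active hysteresis branch, can be sketched as follows. The table values and the simple two-branch switching rule are placeholders for illustration; they are not HYSTR's actual tables or scanning-curve logic.

```python
import numpy as np

# Tabulated capillary pressure (Pa) versus liquid saturation for the main
# drying and wetting branches. The numbers are placeholders, not HYSTR data.
SAT        = np.array([0.10, 0.30, 0.50, 0.70, 0.90, 1.00])
PC_DRYING  = np.array([9.0e4, 5.0e4, 3.0e4, 1.5e4, 5.0e3, 0.0])
PC_WETTING = np.array([6.0e4, 3.0e4, 1.8e4, 9.0e3, 3.0e3, 0.0])

def capillary_pressure(saturation: float, branch: str) -> float:
    """Interpolate Pc(S) on the requested hysteresis branch ('drying'/'wetting')."""
    table = PC_DRYING if branch == "drying" else PC_WETTING
    return float(np.interp(saturation, SAT, table))

# A flow simulator would switch branches (and, in a full model, follow scanning
# curves) whenever the saturation history at a grid block reverses direction.
pc_example = capillary_pressure(0.42, "wetting")
```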

  1. Revised uranium--plutonium cycle PWR and BWR models for the ORIGEN computer code

    Energy Technology Data Exchange (ETDEWEB)

    Croff, A. G.; Bjerke, M. A.; Morrison, G. W.; Petrie, L. M.

    1978-09-01

    Reactor physics calculations and literature searches have been conducted, leading to the creation of revised enriched-uranium and enriched-uranium/mixed-oxide-fueled PWR and BWR reactor models for the ORIGEN computer code. These ORIGEN reactor models are based on cross sections that have been taken directly from the reactor physics codes and eliminate the need to make adjustments in uncorrected cross sections in order to obtain correct depletion results. Revised values of the ORIGEN flux parameters THERM, RES, and FAST were calculated along with new parameters related to the activation of fuel-assembly structural materials not located in the active fuel zone. Recommended fuel and structural material masses and compositions are presented. A summary of the new ORIGEN reactor models is given.

  2. NRMC - A GPU code for N-Reverse Monte Carlo modeling of fluids in confined media

    Science.gov (United States)

    Sánchez-Gil, Vicente; Noya, Eva G.; Lomba, Enrique

    2017-08-01

    NRMC is a parallel code for performing N-Reverse Monte Carlo modeling of fluids in confined media [V. Sánchez-Gil, E.G. Noya, E. Lomba, J. Chem. Phys. 140 (2014) 024504]. This method is an extension of the usual Reverse Monte Carlo method to obtain structural models of confined fluids compatible with experimental diffraction patterns, specifically designed to overcome the problem of slow diffusion that can appear under conditions of tight confinement. Most of the computational time in N-Reverse Monte Carlo modeling is spent in the evaluation of the structure factor for each trial configuration, a calculation that can be easily parallelized. Implementation of the structure factor evaluation in NVIDIA® CUDA, so that the code can be run on GPUs, leads to a speed-up of up to two orders of magnitude.
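
    The quantity being parallelized can be written down compactly; the NumPy reference sketch below evaluates the structure factor for a set of wave vectors and is only a CPU stand-in for the CUDA kernel (the wave-vector set and the random configuration are illustrative).

```python
import numpy as np

def structure_factor(positions: np.ndarray, q_vectors: np.ndarray) -> np.ndarray:
    """S(q) = |sum_j exp(i q.r_j)|^2 / N for each wave vector q.
    positions: (N, 3) particle coordinates; q_vectors: (M, 3) wave vectors."""
    phases = q_vectors @ positions.T                  # (M, N) matrix of q . r_j
    amplitudes = np.exp(1j * phases).sum(axis=1)      # coherent sum over particles
    return np.abs(amplitudes) ** 2 / positions.shape[0]

# Example: 1000 random particles in a unit box, q vectors along the x axis.
rng = np.random.default_rng(2)
r = rng.random((1000, 3))
q = np.array([[2.0 * np.pi * n, 0.0, 0.0] for n in range(1, 6)])
s_of_q = structure_factor(r, q)
```

    Each (q, particle) phase term is an independent multiply-add, which is why this evaluation maps naturally onto GPU threads.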

  3. Modeling precipitation from concentrated solutions with the EQ3/6 chemical speciation codes

    Energy Technology Data Exchange (ETDEWEB)

    Brown, L.F.; Ebinger, M.H.

    1995-01-13

    One of the more important uncertainties in using chemical speciation codes to study dissolution and precipitation of compounds is that the modeling results depend on the particular thermodynamic database being used. The authors' goal is to investigate the effects of different thermodynamic databases on modeling precipitation from concentrated solutions. In this paper they used the EQ3/6 codes and the supplied databases to model precipitation. One aspect of this goal is to compare predictions of precipitation from ideal solutions to similar predictions from nonideal solutions. The largest thermodynamic databases available for use by EQ3/6 assume that solutions behave ideally. However, two databases exist that allow modeling of nonideal solutions. These two databases are much less extensive than the ideal solution data, and the authors investigated the comparability of modeling ideal and nonideal solutions. They defined four fundamental problems to test the EQ3/6 codes in concentrated solutions. Two problems precipitate Ca(OH){sub 2} from solutions concentrated in Ca{sup ++}. One problem tests the precipitation of Ca(OH){sub 2} from high ionic strength (high concentration) solutions that are low in the concentrations of precipitating species (Ca{sup ++} in this case). The fourth problem evaporates the supernatant of the problem with low concentrations of precipitating species. The specific problems are discussed.

  4. Wind turbine control systems: Dynamic model development using system identification and the fast structural dynamics code

    Energy Technology Data Exchange (ETDEWEB)

    Stuart, J.G.; Wright, A.D.; Butterfield, C.P.

    1996-10-01

    Mitigating the effects of damaging wind turbine loads and responses extends the lifetime of the turbine and, consequently, reduces the associated Cost of Energy (COE). Active control of aerodynamic devices is one option for achieving wind turbine load mitigation. Generally speaking, control system design and analysis requires a reasonable dynamic model of the {open_quotes}plant{close_quotes} (i.e., the system being controlled). This paper extends the wind turbine aileron control research previously conducted at the National Wind Technology Center (NWTC) by presenting a more detailed development of the wind turbine dynamic model. In prior research, active aileron control designs were implemented in an existing wind turbine structural dynamics code, FAST (Fatigue, Aerodynamics, Structures, and Turbulence). In this paper, the FAST code is used, in conjunction with system identification, to generate a wind turbine dynamic model for use in active aileron control system design. The FAST code is described and an overview of the system identification technique is presented. An aileron control case study is used to demonstrate this modeling technique. The results of the case study are then used to propose ideas for generalizing this technique for creating dynamic models for other wind turbine control applications.

  5. A Mathematical Model and MATLAB Code for Muscle-Fluid-Structure Simulations.

    Science.gov (United States)

    Battista, Nicholas A; Baird, Austin J; Miller, Laura A

    2015-11-01

    This article provides models and code for numerically simulating muscle-fluid-structure interactions (FSIs). This work was presented as part of the symposium on Leading Students and Faculty to Quantitative Biology through Active Learning at the society-wide meeting of the Society for Integrative and Comparative Biology in 2015. Muscle mechanics and simple mathematical models to describe the forces generated by muscular contractions are introduced in most biomechanics and physiology courses. Often, however, the models are derived for simplifying cases such as isometric or isotonic contractions. In this article, we present a simple model of the force generated through active contraction of muscles. The muscles' forces are then used to drive the motion of flexible structures immersed in a viscous fluid. An example of an elastic band immersed in a fluid is first presented to illustrate a fully-coupled FSI in the absence of any external driving forces. In the second example, we present a valveless tube with model muscles that drive the contraction of the tube. We provide a brief overview of the numerical method used to generate these results. We also include as Supplementary Material a MATLAB code to generate these results. The code was written for flexibility so as to be easily modified to many other biological applications for educational purposes.
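
    For readers who want the flavor of an active muscle force model before opening the supplementary MATLAB code, the sketch below implements a generic Hill-type force: an activation signal scales a length-tension curve and a crude force-velocity factor. It is a minimal stand-in with assumed functional forms and parameters, not the model distributed with the article.

```python
import numpy as np

def active_muscle_force(activation, length, velocity,
                        f_max=1.0, l_opt=1.0, v_max=1.0):
    """Hill-type active force F = a * F_max * f_L(l) * f_V(v), all quantities normalized.

    The Gaussian length-tension curve and linear force-velocity factor are
    illustrative assumptions, not the forms used in the published code.
    """
    f_length = np.exp(-((length - l_opt) / (0.45 * l_opt)) ** 2)
    f_velocity = np.clip(1.0 - velocity / v_max, 0.0, 1.5)  # shortening reduces force
    return activation * f_max * f_length * f_velocity

# Example: half-activated fiber at optimal length, shortening at 20% of v_max
print(active_muscle_force(0.5, 1.0, 0.2))
```

    In a fluid-structure setting, a force of this kind would be applied along the model muscle fibers that drive the immersed elastic structure, with the surrounding viscous fluid providing the coupling.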

  6. Beyond the Business Model: Incentives for Organizations to Publish Software Source Code

    Science.gov (United States)

    Lindman, Juho; Juutilainen, Juha-Pekka; Rossi, Matti

    The software stack opened under Open Source Software (OSS) licenses is growing rapidly. Commercial actors have released considerable amounts of previously proprietary source code. These actions raise the question of why companies choose a strategy based on giving away software assets. Research on the outbound OSS approach has tried to answer this question with the concept of the “OSS business model”. When studying the reasons for code release, we have observed that the business model concept is too generic to capture the many incentives organizations have. Instead, in this paper we investigate empirically what the companies’ incentives are by means of an exploratory case study of three organizations in different stages of their code release. Our results indicate that the companies aim to promote standardization, obtain development resources, gain cost savings, improve the quality of software, increase the trustworthiness of software, or steer OSS communities. We conclude that future research on outbound OSS could benefit from focusing on the heterogeneous incentives for code release rather than on revenue models.

  7. A code reviewer assignment model incorporating the competence differences and participant preferences

    Directory of Open Access Journals (Sweden)

    Wang Yanqing

    2016-03-01

    Full Text Available A good assignment of code reviewers can effectively utilize intellectual resources, assure code quality and improve programmers’ skills in software development. However, little research on reviewer assignment for code review has been found. In this study, a code reviewer assignment model is created based on participants’ preferences for review assignment. With a constraint on the smallest size of a review group, the model is optimized to maximize review outcomes and avoid the negative impact of a “mutual admiration society”. This study shows that reviewer assignment strategies incorporating either the reviewers’ preferences or the authors’ preferences yield a much greater improvement than a random assignment. The strategy incorporating the authors’ preferences yields a greater improvement than the one incorporating the reviewers’ preferences. However, when the reviewers’ and authors’ preference matrices are merged, the improvement becomes moderate. The study indicates that the majority of the participants have a strong wish to work with reviewers and authors of the highest competence. If the preferences of both reviewers and authors are to be satisfied at the same time, the overall improvement of learning outcomes may not be the best.
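
    To make the optimization idea concrete, the sketch below solves a stripped-down version of the problem: given a matrix of preference scores between authors and reviewers, it finds a one-to-one assignment that maximizes total preference while forbidding self-review. This is only an illustration of preference-based assignment; the paper's model additionally enforces a minimum review-group size and merges author and reviewer preference matrices, which this sketch does not do.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_reviewers(preference, forbid_self=True):
    """Maximize total preference for a one-reviewer-per-author assignment.

    preference[i, j] = preference score for author i being reviewed by reviewer j.
    Self-review is blocked with a large penalty (kept finite so the Hungarian
    algorithm always sees a valid cost matrix).
    """
    cost = -np.asarray(preference, dtype=float)
    if forbid_self:
        np.fill_diagonal(cost, 1e9)
    authors, reviewers = linear_sum_assignment(cost)
    return list(zip(authors, reviewers))

# Toy preference matrix for 4 participants acting as both authors and reviewers
prefs = np.array([[0, 3, 1, 2],
                  [2, 0, 3, 1],
                  [1, 2, 0, 3],
                  [3, 1, 2, 0]])
print(assign_reviewers(prefs))
```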

  8. Dual coding with STDP in a spiking recurrent neural network model of the hippocampus.

    Directory of Open Access Journals (Sweden)

    Daniel Bush

    Full Text Available The firing rate of single neurons in the mammalian hippocampus has been demonstrated to encode for a range of spatial and non-spatial stimuli. It has also been demonstrated that phase of firing, with respect to the theta oscillation that dominates the hippocampal EEG during stereotyped learning behaviour, correlates with an animal's spatial location. These findings have led to the hypothesis that the hippocampus operates using a dual (rate and temporal) coding system. To investigate the phenomenon of dual coding in the hippocampus, we examine a spiking recurrent network model with theta-coded neural dynamics and an STDP rule that mediates rate-coded Hebbian learning when pre- and post-synaptic firing is stochastic. We demonstrate that this plasticity rule can generate both symmetric and asymmetric connections between neurons that fire at concurrent or successive theta phase, respectively, and subsequently produce both pattern completion and sequence prediction from partial cues. This unifies previously disparate auto- and hetero-associative network models of hippocampal function and provides them with a firmer basis in modern neurobiology. Furthermore, the encoding and reactivation of activity in mutually exciting Hebbian cell assemblies demonstrated here is believed to represent a fundamental mechanism of cognitive processing in the brain.
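
    The plasticity rule at the heart of such models is usually an exponential STDP window: the weight change depends on the sign and size of the pre/post spike-time difference. The sketch below shows a generic pair-based STDP update with assumed amplitudes and time constants; it is not the specific rate-coded rule of the paper.

```python
import math

def stdp_weight_change(dt_ms, a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP window.

    dt_ms = t_post - t_pre in milliseconds: pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses. Parameter values are illustrative.
    """
    if dt_ms >= 0.0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    return -a_minus * math.exp(dt_ms / tau_minus)

print(stdp_weight_change(+10.0))   # potentiation
print(stdp_weight_change(-10.0))   # depression
```

    Cells that repeatedly fire within the same window in both orders develop symmetric connections, while consistently ordered firing produces asymmetric ones, which is the mechanism the abstract invokes for pattern completion versus sequence prediction.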

  9. Summary of Interfacial Heat Transfer Model and Correlations in SPACE Code

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Sung Won; Lee, Seung Wook; Kim, Kyung Du [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]

    2010-10-15

    The first stage of the development program for a nuclear reactor safety analysis code named SPACE, which will be used by utility bodies, was finished in April 2010. During the first stage, the main logic and conceptual structure were established successfully with the support of the Korea Ministry of Knowledge and Economy. The code has been designed to solve the multi-dimensional, 3-field, two-phase equations. From the beginning of the second stage of development, KNF has moved on to concentrate on the methodology evaluation using the SPACE code, so KAERI, KOPEC and KEPRI remain the major development organizations. The second stage focuses on assessing the physical models and correlations of the SPACE code using well-known SET problems. For a successful SET assessment procedure, a problem selection process has been performed under the lead of KEPRI, which has listed suitable SET problems according to the individual assessment purposes. For the interfacial area concentration, the models and correlations are continuously modified and verified

  10. Motion-adaptive model-assisted compatible coding with spatiotemporal scalability

    Science.gov (United States)

    Lee, JaeBeom; Eleftheriadis, Alexandros

    1997-01-01

    We introduce the concept of motion-adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye, in terms of space and time, in moving images while taking object motion into account. The previous STMAC approach was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas, including the eyes and lips, need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.

  11. A NOVEL TEXTBOOK MODEL BASED ON HIGHER-VERSION QR CODES

    Institute of Scientific and Technical Information of China (English)

    樊荣; 段会川

    2015-01-01

    2-D bar codes have become one of the most popular techniques in recent years, and QR Code is the most widely applied type in practice. However, present applications mainly involve mid- or low-version codes, which can only accommodate a moderate amount of information, and applications of higher-version QR Codes, i.e. level 20 or more, are seldom reported. To improve teaching in programming languages and algorithms, this paper proposes to employ higher-version QR Code images, printed on handouts or textbooks, to represent program and algorithm code that may require a large number of characters. When learning, students can photograph the QR Code images with smart phones, which can then decode the program or algorithm code and display it in colored, highlighted and structured form by means of special code editors. Moreover, the code can also be run in a browser to show the running process as well as the results dynamically. Compared with the traditional textbook model, this new model brings students considerable convenience, and the dynamic running also greatly increases learning efficiency and enhances learning effects.
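
    For readers who want to try the idea, the snippet below encodes a short program listing into a high-version QR symbol using the third-party Python `qrcode` package (an assumption of this sketch; the paper does not prescribe a tool). A version-40 symbol with low error correction holds on the order of 3 KB of text, which is enough for many teaching-sized code samples.

```python
import qrcode  # third-party package: pip install qrcode[pil]

snippet = '''def bubble_sort(a):
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a
'''

qr = qrcode.QRCode(
    version=30,                                    # a "higher version" symbol
    error_correction=qrcode.constants.ERROR_CORRECT_L,
    box_size=2,
    border=4,
)
qr.add_data(snippet)
qr.make(fit=True)          # bumps the version up only if the data does not fit
qr.make_image().save("textbook_listing.png")
```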

  12. Development and implementation of a new trigger and data acquisition system for the HADES detector

    Energy Technology Data Exchange (ETDEWEB)

    Michel, Jan

    2012-11-16

    One of the crucial points of instrumentation in modern nuclear and particle physics is the setup of data acquisition systems (DAQ). In collisions of heavy ions, particles of special interest for research are often produced at very low rates, resulting in the need for high event rates and a fast data acquisition. Additionally, the identification and precise tracking of particles requires fast and highly granular detectors. Both requirements result in very high data rates that have to be transported within the detector read-out system: typical experiments produce data at rates of 200 to 1,000 MByte/s. The structure of the trigger and read-out systems of such experiments is quite similar: a central instance generates a signal that triggers read-out of all sub-systems. The signals from each detector system are then processed and digitized by front-end electronics before they are transported to a computing farm where data is analyzed and prepared for long-term storage. Some systems introduce additional steps (high-level triggers) in this process to select only special types of events to reduce the amount of data to be processed later. The main focus of this work is the development of a new data acquisition system for the High Acceptance Di-Electron Spectrometer HADES located at the GSI Helmholtz Center for Heavy Ion Research in Darmstadt, Germany. The spectrometer has been fully operational since 2002; its front-end electronics and data transport system were the subject of a major upgrade program. The goal was an increase of the event rate capabilities by a factor of more than 20 to reach event rates of 20 kHz in heavy ion collisions and more than 50 kHz in light collision systems. The new electronics are based on FPGA-equipped platforms distributed throughout the detector. Data is transported over optical fibers to reduce the amount of electromagnetic noise induced in the sensitive front-end electronics. Besides the high data rates of up to 500 MByte/s at the design event rate of 20 kHz, the

  13. HADES : A Mission Concept for the Identification of New Saline Aquifer Sites Suitable for Carbon Capture & Storage (CCS)

    Science.gov (United States)

    Pechorro, Ed; Lecuyot, Arnaud; Bacon, Andrew; Chalkley, Simon; Milnes, Martin; Williams, Ivan; Williams, Stuart; Muthu, Kavitha

    2014-05-01

    The Hidden Aquifer & Deep Earth Sounder (HADES) is a ground penetrating radar mission concept for identifying new saline aquifer sites suitable for Carbon Capture & Storage (CCS). HADES uses a newly proposed type of Earth Observation technique, previously deployed in Mars orbit to search for water. It has been proposed to globally map the sub-surface layers of Earth's land area down to a maximum depth of 3km to detect underground aquifers of suitable depth and geophysical conditions for CCS. We present the mission concept together with the approach and findings of the project from which the concept has arisen, a European Space Agency (ESA) study on "Future Earth Observation Missions & Techniques for the Energy Sector" performed by a consortium of partners comprising CGI and SEA. The study aims to improve and increase the current and future application of Earth Observation in provision of data and services to directly address long term energy sector needs for a de-carbonised economy. This is part of ESA's cross-agency "Space and Energy" initiative. The HADES mission concept is defined by our specification of (i) mission requirements, reflecting the challenges and opportunities with identifying CCS sites from space, (ii) the observation technique, derived from ground penetrating radar, and (iii) the preliminary system concept, including specification of the resulting satellite, ground and launch segments. Activities have also included a cost-benefit analysis of the mission, a defined route to technology maturation, and a preliminary strategic plan towards proposed implementation. Moreover, the mission concept maps to a stakeholder analysis forming the initial part of the study. Its method has been to first identify the user needs specific to the energy sector in the global transition towards a de-carbonised economy. This activity revealed the energy sector requirements geared to the identification of suitable CCS sites. Subsequently, a qualitative and quantitative

  14. Numerical modeling of immiscible two-phase flow in micro-models using a commercial CFD code

    Energy Technology Data Exchange (ETDEWEB)

    Crandall, Dustin; Ahmadia, Goodarz; Smith, Duane H.

    2009-01-01

    Off-the-shelf CFD software is being used to analyze everything from flow over airplanes to lab-on-a-chip designs. So, how accurately can immiscible two-phase flow through small-scale models of porous media be modeled? We evaluate the capability of the CFD code FLUENT{trademark} to model immiscible flow in micro-scale, bench-top stereolithography models. By comparing the flow results to experimental models we show that accurate 3D modeling is possible.

  15. Modeling compositional dynamics based on GC and purine contents of protein-coding sequences

    KAUST Repository

    Zhang, Zhang

    2010-11-08

    Background: Understanding the compositional dynamics of genomes and their coding sequences is of great significance in gaining clues into molecular evolution, and a large number of publicly available genome sequences have allowed us to quantitatively predict deviations of empirical data from their theoretical counterparts. However, the quantification of theoretical compositional variations for a wide diversity of genomes remains a major challenge. Results: To model the compositional dynamics of protein-coding sequences, we propose two simple models that take into account both mutation and selection effects, which act differently at the three codon positions, and use both GC and purine contents as compositional parameters. The two models concern the theoretical composition of nucleotides, codons, and amino acids, with no prerequisite of homologous sequences or their alignments. We evaluated the two models by quantifying theoretical compositions of a large collection of protein-coding sequences (including 46 of Archaea, 686 of Bacteria, and 826 of Eukarya), yielding consistent theoretical compositions across all the collected sequences. Conclusions: We show that the compositions of nucleotides, codons, and amino acids are largely determined by both GC and purine contents and suggest that deviations of the observed from the expected compositions may reflect compositional signatures that arise from a complex interplay between mutation and selection via DNA replication and repair mechanisms. Reviewers: This article was reviewed by Zhaolei Zhang (nominated by Mark Gerstein), Guruprasad Ananda (nominated by Kateryna Makova), and Daniel Haft. 2010 Zhang and Yu; licensee BioMed Central Ltd.
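
    The two compositional parameters the models are built on are easy to extract from a coding sequence. The sketch below computes GC and purine (A+G) content separately at the three codon positions; it is bookkeeping for orientation only, not the models themselves.

```python
def codon_position_composition(cds):
    """Return [(GC fraction, purine fraction)] for codon positions 1-3.

    cds is a protein-coding DNA sequence whose length is a multiple of 3.
    """
    cds = cds.upper()
    result = []
    for offset in range(3):
        bases = cds[offset::3]
        gc = sum(b in "GC" for b in bases) / len(bases)
        purine = sum(b in "AG" for b in bases) / len(bases)
        result.append((gc, purine))
    return result

print(codon_position_composition("ATGGCCAAGGTTTGCTAA"))
```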

  16. Chiefly Symmetric: Results on the Scalability of Probabilistic Model Checking for Operating-System Code

    Directory of Open Access Journals (Sweden)

    Marcus Völp

    2012-11-01

    Full Text Available Reliability in terms of functional properties from the safety-liveness spectrum is an indispensable requirement of low-level operating-system (OS) code. However, with ever more complex and thus less predictable hardware, quantitative and probabilistic guarantees become more and more important. Probabilistic model checking is one technique to automatically obtain these guarantees. First experiences with the automated quantitative analysis of low-level operating-system code confirm the expectation that the naive probabilistic model checking approach rapidly reaches its limits when the number of processes increases. This paper reports on our work in progress to tackle the state explosion problem for low-level OS code caused by the exponential blow-up of the model size when the number of processes grows. We studied the symmetry reduction approach and carried out our experiments with a simple test-and-test-and-set lock case study as a representative example for a wide range of protocols with natural inter-process dependencies and long-run properties. We quickly see a state-space explosion for scenarios where inter-process dependencies are insignificant. However, once inter-process dependencies dominate the picture, models with a hundred or more processes can be constructed and analysed.

  17. Assessment of wall condensation model in the presence of noncondensable gas for the SPACE code

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Hyuk; Kim, Byong Jae; Lee, Seung Wook; Kim, Kyung Doo [KAERI, Daejeon (Korea, Republic of)]; Cho, Hyung Kyu [Seoul National University, Seoul (Korea, Republic of)]

    2015-05-15

    Vapor containing noncondensable (NC) gases is present in many postulated light water reactor accidents. The NC gases reduce the heat transfer and condensation rates even when they are present in the bulk vapor only in small amounts. To understand the characteristics of condensation heat transfer in the presence of NC gases, a large number of analytical and experimental studies have been performed. The SPACE code, which has been developed since 2006 as a thermal-hydraulic system analysis code, also has the capability to analyze wall condensation with NC gases. To assess the model, three kinds of experiments are introduced: the COPAIN test, the University of Wisconsin condensation test, and the KAIST reflux condensation test. The Colburn-Hougen model has been widely used in thermal-hydraulic system codes for the wall condensation problem in the presence of noncondensable (NC) gases. However, we noticed a mistake in the derived equation that was used. The modified Colburn-Hougen model was assessed by validation against several experiments: COPAIN, the University of Wisconsin condensation test, and the KAIST reflux condensation test. Through comparison of the results calculated with SPACE against experimental data, we conclude that the modified Colburn-Hougen model simulates wall condensation heat transfer more precisely, and the calculated results show better agreement with the experimental data. In general, the calculated heat flux and vapor mass flux increase more in cases with a higher air mass fraction and show better agreement with the experimental data.

  18. Capabilities of the ATHENA computer code for modeling the SP-100 space reactor concept

    Science.gov (United States)

    Fletcher, C. D.

    1985-09-01

    The capability to perform thermal-hydraulic analyses of an SP-100 space reactor was demonstrated using the ATHENA computer code. The preliminary General Electric SP-100 design was modeled using Athena. The model simulates the fast reactor, liquid-lithium coolant loops, and lithium-filled heat pipes of this design. Two ATHENA demonstration calculations were performed simulating accident scenarios. A mask for the SP-100 model and an interface with the Nuclear Plant Analyzer (NPA) were developed, allowing a graphic display of the calculated results on the NPA.

  19. Comparison of different methods used in integral codes to model coagulation of aerosols

    Science.gov (United States)

    Beketov, A. I.; Sorokin, A. A.; Alipchenkov, V. M.; Mosunova, N. A.

    2013-09-01

    The methods for calculating coagulation of particles in the carrying phase that are used in the integral codes SOCRAT, ASTEC, and MELCOR, as well as the Hounslow and Jacobson methods used to model aerosol processes in the chemical industry and in atmospheric investigations are compared on test problems and against experimental results in terms of their effectiveness and accuracy. It is shown that all methods are characterized by a significant error in modeling the distribution function for micrometer particles if calculations are performed using rather "coarse" spectra of particle sizes, namely, when the ratio of the volumes of particles from neighboring fractions is equal to or greater than two. With reference to the problems considered, the Hounslow method and the method applied in the aerosol module used in the ASTEC code are the most efficient ones for carrying out calculations.
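
    All of the compared methods are, at bottom, discretizations of the Smoluchowski coagulation equation. The sketch below writes its right-hand side for a discrete (integer-multiple) size grid; it is a reference formulation for orientation, not a reimplementation of any of the sectional schemes in the codes named above.

```python
import numpy as np

def coagulation_rhs(n, kernel):
    """dn_k/dt of the discrete Smoluchowski equation.

    n[k-1]       : number density of particles made of k monomers (k = 1..N)
    kernel[i, j] : coagulation kernel for sizes i+1 and j+1 (same 0-based indexing)
    Aggregates larger than N monomers simply leave the resolved size range.
    """
    nbins = len(n)
    dndt = np.zeros(nbins)
    for k in range(nbins):                     # 0-based index, physical size k + 1
        gain = sum(kernel[i, k - 1 - i] * n[i] * n[k - 1 - i] for i in range(k))
        loss = n[k] * sum(kernel[k, j] * n[j] for j in range(nbins))
        dndt[k] = 0.5 * gain - loss
    return dndt

# Constant-kernel toy case with a monodisperse initial condition
n0 = np.zeros(50); n0[0] = 1.0
print(coagulation_rhs(n0, np.ones((50, 50)))[:3])
```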

  20. Bayesian Regularization in a Neural Network Model to Estimate Lines of Code Using Function Points

    Directory of Open Access Journals (Sweden)

    K. K. Aggarwal

    2005-01-01

    Full Text Available It is a well-known fact that at the beginning of any project, the software industry needs to know how much it will cost to develop and how much time will be required. This paper examines the potential of using a neural network model for estimating the lines of code, once the functional requirements are known. Using the International Software Benchmarking Standards Group (ISBSG) Repository Data (Release 9) for the experiment, this paper examines the performance of a back-propagation feed-forward neural network in estimating the Source Lines of Code. Multiple training algorithms are used in the experiments. Results demonstrate that the neural network models trained using Bayesian Regularization provide the best results and are suitable for this purpose.
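
    scikit-learn has no direct equivalent of MATLAB's Bayesian-regularization training, so the sketch below approximates the idea with an L2-penalized multilayer perceptron on synthetic function-point data. Both the data and the hyperparameters are assumptions made purely for illustration; they are not the ISBSG data or the paper's network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for (adjusted function points -> source lines of code)
rng = np.random.default_rng(0)
fp = rng.uniform(50.0, 600.0, size=(200, 1))
sloc = 55.0 * fp[:, 0] * rng.lognormal(mean=0.0, sigma=0.25, size=200)

# alpha is a fixed L2 penalty standing in for the weight decay that Bayesian
# regularization would tune automatically from the data.
model = MLPRegressor(hidden_layer_sizes=(8,), alpha=1e-2,
                     max_iter=5000, random_state=0)
model.fit(fp, np.log(sloc))                        # log target tames the spread

print(np.exp(model.predict(np.array([[260.0]]))))  # rough SLOC estimate for 260 FP
```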

  1. Quasi-periodic oscillations in accreting magnetic white dwarfs II. The asset of numerical modelling for interpreting observations

    CERN Document Server

    Busschaert, C; Michaut, C; Bonnet-Bidaud, J -M; Mouchet, M

    2015-01-01

    Magnetic cataclysmic variables are close binary systems containing a strongly magnetized white dwarf that accretes matter coming from an M-dwarf companion. The high-energy radiation coming from those objects is emitted from the accretion column close to the white dwarf photosphere at the impact region. Its properties depend on the characteristics of the white dwarf, and an accurate accretion column model allows the properties of the binary system to be inferred, such as the white dwarf mass, its magnetic field, and the accretion rate. We study the temporal and spectral behaviour of the accretion region and use the tools we developed to accurately connect the simulation results to the X-ray and optical astronomical observations. The radiation hydrodynamics code Hades was adapted to simulate this specific accretion phenomenon. Classical approaches were used to model the radiative losses of the two main radiative processes: bremsstrahlung and cyclotron. The oscillation frequencies and amplitudes in the X-ray and optic...

  2. Development of thermal hydraulic models for the reliable regulatory auditing code

    Energy Technology Data Exchange (ETDEWEB)

    Chung, B. D.; Song, C. H.; Lee, Y. J.; Kwon, T. S. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)]

    2003-04-15

    The objective of this project is to develop thermal hydraulic models for use in improving the reliability of the regulatory auditing codes. The current year falls under the first step of the 3-year project, and the main research was focused on identifying candidate thermal hydraulic models for improvement and on developing prototypical models. During the current year, the verification calculations submitted for the APR 1400 design certification have been reviewed, the experimental data from the MIDAS DVI experiment facility at KAERI have been analyzed and evaluated, candidate thermal hydraulic models for improvement have been identified, prototypical models for the improved thermal hydraulic models have been developed, items for experiment in connection with the model development have been identified, and a preliminary design of the experiment has been carried out.

  3. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.

    1999-07-01

    capability of performing iterated-source (criticality), multiplied-fixed-source, and fixed-source calculations. MCV uses a highly detailed continuous-energy (as opposed to multigroup) representation of neutron histories and cross section data. The spatial modeling is fully three-dimensional (3-D), and any geometrical region that can be described by quadric surfaces may be represented. The primary results are region-wise reaction rates, neutron production rates, slowing-down-densities, fluxes, leakages, and when appropriate the eigenvalue or multiplication factor. Region-wise nuclidic reaction rates are also computed, which may then be used by other modules in the system to determine time-dependent nuclide inventories so that RACER can perform depletion calculations. Furthermore, derived quantities such as ratios and sums of primary quantities and/or other derived quantities may also be calculated. MCV performs statistical analyses on output quantities, computing estimates of the 95% confidence intervals as well as indicators as to the reliability of these estimates. The remainder of this chapter provides an overview of the MCV algorithm. The following three chapters describe the MCV mathematical, physical, and statistical treatments in more detail. Specifically, Chapter 2 discusses topics related to tracking the histories including: geometry modeling, how histories are moved through the geometry, and variance reduction techniques related to the tracking process. Chapter 3 describes the nuclear data and physical models employed by MCV. Chapter 4 discusses the tallies, statistical analyses, and edits. Chapter 5 provides some guidance as to how to run the code, and Chapter 6 is a list of the code input options.

  4. Experience with the UHV box coater and the evaporation procedure for VUV reflective coatings on the HADES RICH mirror

    CERN Document Server

    Maier-Komor, P; Wieser, J; Ulrich, A

    1999-01-01

    A UHV box coater was set up for the deposition of highly reflective layers in the vacuum ultraviolet (VUV) wavelength range on large-area mirror substrates. The VUV mirrors are needed for the ring imaging Cherenkov (RICH) detector of the high-acceptance di-electron spectrometer (HADES). The complete dry vacuum system is described. The spatial deposition distribution from the evaporation sources was measured. The reflectivity of the Al mirror layer was optimized for the wavelength range of 145-210 nm by varying the thickness of the MgF{sub 2} protective layer. The setup for measuring the reflectivity in the VUV range is described and reflectivity data are presented.

  5. On Network-Error Correcting Convolutional Codes under the BSC Edge Error Model

    CERN Document Server

    Prasad, K

    2010-01-01

    Convolutional network-error correcting codes (CNECCs) are known to provide error correcting capability in acyclic instantaneous networks within the network coding paradigm under small field size conditions. In this work, we investigate the performance of CNECCs under the error model of the network where the edges are assumed to be statistically independent binary symmetric channels, each with the same probability of error $p_e$ ($0 \leq p_e < 0.5$). We obtain bounds on the performance of such CNECCs based on a modified generating function (the transfer function) of the CNECCs. For a given network, we derive a mathematical condition on how small $p_e$ should be so that only single edge network-errors need to be accounted for, thus reducing the complexity of evaluating the probability of error of any CNECC. Simulations indicate that convolutional codes are required to possess different properties to achieve good performance in low $p_e$ and high $p_e$ regimes. For the low $p_e$ regime, convolutional codes with g...
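
    The condition on how small $p_e$ must be comes down to multi-edge errors being negligible relative to single-edge ones. The snippet below just evaluates those binomial probabilities for a network with a given number of edges; it is a back-of-the-envelope check, not the bound derived in the paper.

```python
def edge_error_split(num_edges, p_e):
    """Probabilities of exactly zero, exactly one, and more than one edge error
    per network use, assuming independent BSC(p_e) edges."""
    p0 = (1.0 - p_e) ** num_edges
    p1 = num_edges * p_e * (1.0 - p_e) ** (num_edges - 1)
    return p0, p1, 1.0 - p0 - p1

print(edge_error_split(num_edges=10, p_e=0.01))  # multi-edge errors in ~0.4% of uses
```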

  6. Dysregulation of the long non-coding RNA transcriptome in a Rett syndrome mouse model.

    Science.gov (United States)

    Petazzi, Paolo; Sandoval, Juan; Szczesna, Karolina; Jorge, Olga C; Roa, Laura; Sayols, Sergi; Gomez, Antonio; Huertas, Dori; Esteller, Manel

    2013-07-01

    Mecp2 is a transcriptional repressor protein that is mutated in Rett syndrome, a neurodevelopmental disorder that is the second most common cause of mental retardation in women. It has been shown that the loss of the Mecp2 protein in Rett syndrome cells alters the transcriptional silencing of coding genes and microRNAs. Herein, we have studied the impact of Mecp2 impairment in a Rett syndrome mouse model on the global transcriptional patterns of long non-coding RNAs (lncRNAs). Using a microarray platform that assesses 41,232 unique lncRNA transcripts, we have identified the aberrant lncRNA transcriptome that is present in the brain of Rett syndrome mice. The study of the most relevant lncRNAs altered in the assay highlighted the upregulation of the AK081227 and AK087060 transcripts in Mecp2-null mice brains. Chromatin immunoprecipitation demonstrated the Mecp2 occupancy in the 5'-end genomic loci of the described lncRNAs and its absence in Rett syndrome mice. Most importantly, we were able to show that the overexpression of AK081227 mediated by the Mecp2 loss was associated with the downregulation of its host coding protein gene, the gamma-aminobutyric acid receptor subunit Rho 2 (Gabrr2). Overall, our findings indicate that the transcriptional dysregulation of lncRNAs upon Mecp2 loss contributes to the neurological phenotype of Rett syndrome and highlights the complex interaction between ncRNAs and coding-RNAs.

  7. Full SED fitting with the KOSMA-τ PDR code - I. Dust modelling

    CERN Document Server

    Röllig, M; Ossenkopf, V; Glück, C

    2012-01-01

    We revised the treatment of interstellar dust in the KOSMA-τ PDR model code to achieve a consistent description of the dust-related physics in the code. The detailed knowledge of the dust properties is then used to compute the dust continuum emission together with the line emission of chemical species. We coupled the KOSMA-τ PDR code with the MCDRT (multi component dust radiative transfer) code to solve the frequency-dependent radiative transfer equations and the thermal balance equation in a dusty clump under the assumption of spherical symmetry, assuming thermal equilibrium in calculating the dust temperatures, neglecting non-equilibrium effects. We updated the calculation of the photoelectric heating and extended the parametrization range for the photoelectric heating toward high densities and UV fields. We revised the computation of the H2 formation on grain surfaces to include the Eley-Rideal effect, thus allowing for high-temperature H2 formation. We demonstrate how the different optical propert...

  8. Efficient Work Team Scheduling: Using Psychological Models of Knowledge Retention to Improve Code Writing Efficiency

    Directory of Open Access Journals (Sweden)

    Michael J. Pelosi

    2014-12-01

    Full Text Available Development teams and programmers must retain critical information about their work during work intervals and gaps in order to improve future performance when work resumes. Despite time lapses, project managers want to maximize coding efficiency and effectiveness. By developing a mathematically justified, practically useful, and computationally tractable quantitative and cognitive model of learning and memory retention, this study establishes calculations designed to maximize scheduling payoff and optimize developer efficiency and effectiveness.
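
    As a toy version of the kind of retention calculation the paper builds on, the sketch below uses an exponential forgetting curve and charges a re-learning cost proportional to what has decayed over a work gap. The functional form and parameters are generic textbook assumptions, not the study's calibrated model.

```python
import math

def retention(gap_days, stability_days=7.0):
    """Ebbinghaus-style exponential forgetting: fraction of context still retained."""
    return math.exp(-gap_days / stability_days)

def rampup_cost(gap_days, stability_days=7.0, full_relearn_days=2.0):
    """Extra days needed to get back up to speed after a gap of gap_days."""
    return full_relearn_days * (1.0 - retention(gap_days, stability_days))

for gap in (1, 3, 7, 14):
    print(gap, round(rampup_cost(gap), 2))
```

    A scheduler built on such a curve would prefer shorter gaps on tasks with low stability (unfamiliar code) and tolerate longer gaps where retention decays slowly.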

  9. PREMOR: a point reactor exposure model computer code for survey analysis of power plant performance

    Energy Technology Data Exchange (ETDEWEB)

    Vondy, D.R.

    1979-10-01

    The PREMOR computer code was written to exploit a simple, two-group point nuclear reactor power plant model for survey analysis. Up to thirteen actinides, fourteen fission products, and one lumped absorber nuclide density are followed over a reactor history. Successive feed batches are accounted for with provision for from one to twenty batches resident. The effect of exposure of each of the batches to the same neutron flux is determined.

  10. Modeling of Vector Quantization Image Coding in an Ant Colony System

    Institute of Scientific and Technical Information of China (English)

    LI Xia; LUO Xuehui; ZHANG Jihong

    2004-01-01

    The ant colony algorithm is a stochastic search optimization algorithm that has emerged in recent years. In this paper, vector quantization image coding is modeled as a stochastic optimization problem in an ant colony system (ACS). An appropriately adapted ant colony algorithm is proposed for vector quantization codebook design. Experimental results show that the ACS-based algorithm can produce a better codebook, with an improvement in peak signal-to-noise ratio (PSNR) exceeding 1 dB compared with the conventional LBG algorithm.

  11. Analysis, simulation and modeling of atmospheric stratification erosion with lumped parameter codes; Analyse, Simulation und Modellierung der Erosion atmosphaerischer Schichtungen mit Lumped Parameter-Codes

    Energy Technology Data Exchange (ETDEWEB)

    Burkhardt, Joerg

    2013-07-01

    The courses and consequences of severe accidents in nuclear power plants are usually simulated with the help of so-called lumped parameter codes, which are especially designed for this purpose. These codes are able to simulate complex physical phenomena within short computing times since they are based on a simplified zone principle; furthermore, they are built on a simplified flow model basis. This dissertation examines the ability of the German Containment Code System (COCOSYS) to simulate local accumulations of hydrogen. During severe accidents with a melting reactor core (as in Harrisburg or Fukushima), hydrogen can be generated and then released into the containment. In the case of a local accumulation, a detonation can occur that endangers the building's integrity. The results show that the development and the erosion of these hydrogen accumulations, driven by buoyant flows, are qualitatively well simulated. From a systematic grid study, general rules concerning the simulation of stratification erosion have been derived; these have been applied and confirmed in several blind code benchmarks. A detailed analysis has shown that the simulated erosion rate and the resistance of simulated hydrogen accumulations are directly related to the grid discretisation chosen by the user. Based upon this analysis, a model concept has been developed which is able to detect hydrogen accumulations and to determine the intensity of their interaction with impinging flows by means of non-dimensional numbers. The erosion flow is controlled by adjusting local grid effects. The model is in the development phase.

  12. QEFSM model and Markov Algorithm for translating Quran reciting rules into Braille code

    Directory of Open Access Journals (Sweden)

    Abdallah M. Abualkishik

    2015-07-01

    Full Text Available The Holy Quran is the central religious verbal text of Islam. Muslims are expected to read, understand, and apply the teachings of the Holy Quran. The Holy Quran was translated to Braille code as normal Arabic text without having its reciting rules included. It is obvious that the users of this transliteration will not be able to recite the Quran the right way. Through this work, the Quran Braille Translator (QBT) presents a specific translator to translate Quran verses and their reciting rules into the Braille code. The Quran Extended Finite State Machine (QEFSM) model is proposed through this study, as it is able to detect the Quran reciting rules (QRR) from the Quran text. Basis path testing was used to evaluate the inner workings of the model by checking all the test cases for the model. The Markov Algorithm (MA) was used for translating the detected QRR and the Quran text into the matched Braille code. The data entries for QBT are Arabic letters and diacritics. The outputs of this study are seen in the double lines of Braille symbols; the first line is for the proposed Quran reciting rules and the second line is for the Quran scripts.
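
    The Markov-algorithm step is essentially ordered string rewriting: scan an ordered rule list, apply the first rule that matches at its leftmost occurrence, and repeat until no rule applies (or a terminal rule fires). The sketch below implements that control scheme with two made-up placeholder rules; the real QBT rule set maps Arabic letters, diacritics, and detected reciting rules to Braille cells.

```python
def markov_rewrite(text, rules, max_steps=100_000):
    """Apply ordered rewrite rules in the Markov-algorithm style.

    rules: list of (pattern, replacement, is_terminal). At each step the first
    rule whose pattern occurs in the text is applied at its leftmost occurrence.
    """
    for _ in range(max_steps):
        for pattern, replacement, is_terminal in rules:
            idx = text.find(pattern)
            if idx >= 0:
                text = text[:idx] + replacement + text[idx + len(pattern):]
                if is_terminal:
                    return text
                break
        else:                 # no rule matched: normal termination
            return text
    raise RuntimeError("rewriting did not terminate")

# Hypothetical toy rules (not the QBT rule set): two letter pairs to Braille cells
rules = [("BA", "⠃⠁", False), ("AB", "⠁⠃", False)]
print(markov_rewrite("ABBA", rules))
```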

  13. Development of computer code models for analysis of subassembly voiding in the LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Hinkle, W. [ed.]

    1979-12-01

    The research program discussed in this report was started in FY1979 under the combined sponsorship of the US Department of Energy (DOE), General Electric (GE) and Hanford Engineering Development Laboratory (HEDL). The objective of the program is to develop multi-dimensional computer codes which can be used for the analysis of subassembly voiding incoherence under postulated accident conditions in the LMFBR. Two codes are being developed in parallel. The first will use a two fluid (6 equation) model which is more difficult to develop but has the potential for providing a code with the utmost in flexibility and physical consistency for use in the long term. The other will use a mixture (< 6 equation) model which is less general but may be more amenable to interpretation and use of experimental data and therefore, easier to develop for use in the near term. To assure that the models developed are not design dependent, geometries and transient conditions typical of both foreign and US designs are being considered.

  14. Development of a Field-Aligned Integrated Conductivity Model Using the SAMI2 Open Source Code

    Science.gov (United States)

    Hildebrandt, Kyle; Gearheart, Michael; West, Keith

    2003-03-01

    The SAMI2 open source code is a middle- and low-latitude ionospheric model developed by the Naval Research Lab for the dual purposes of research and education. At the time of this writing, the source code has no component for the integrated magnetic field-aligned conductivity. The dependence of human activities, such as communications, on conditions in the space environment has grown and will continue to do so. With this growth come higher financial stakes, as changes in the space environment have greater economic impact. In order to minimize the adverse effects of these changes, predictive models are being developed. Among the geophysical parameters that affect communications is the conductivity in the ionosphere. As part of the commitment of Texas A&M University-Commerce to build a strong undergraduate research program, a team consisting of two students and a faculty mentor is developing a model of the integrated field-aligned conductivity using the SAMI2 code. The current status of the research and preliminary results are presented, as well as a summary of future work.
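
    The quantity being added to the model is a line integral of the local conductivity along the magnetic field. A minimal numerical version is just a quadrature over the field-line arc length, as sketched below with an entirely made-up conductivity profile; the actual work couples this to the SAMI2 grid and plasma parameters.

```python
import numpy as np

def integrated_conductivity(arc_length, sigma):
    """Field-aligned integrated conductivity: trapezoidal integral of sigma(s) ds."""
    ds = np.diff(arc_length)
    return float(np.sum(0.5 * (sigma[1:] + sigma[:-1]) * ds))

# Illustrative profile: a single conductivity bump along a 2000 km field line
s = np.linspace(0.0, 2.0e6, 2001)                     # arc length [m]
sigma = 1.0e-4 * np.exp(-((s - 1.0e6) / 2.5e5) ** 2)  # local conductivity [S/m], made up
print(integrated_conductivity(s, sigma), "S")
```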

  15. Mesh-based Monte Carlo code for fluorescence modeling in complex tissues with irregular boundaries

    Science.gov (United States)

    Wilson, Robert H.; Chen, Leng-Chun; Lloyd, William; Kuo, Shiuhyang; Marcelo, Cynthia; Feinberg, Stephen E.; Mycek, Mary-Ann

    2011-07-01

    There is a growing need for the development of computational models that can account for complex tissue morphology in simulations of photon propagation. We describe the development and validation of a user-friendly, MATLAB-based Monte Carlo code that uses analytically-defined surface meshes to model heterogeneous tissue geometry. The code can use information from non-linear optical microscopy images to discriminate the fluorescence photons (from endogenous or exogenous fluorophores) detected from different layers of complex turbid media. We present a specific application of modeling a layered human tissue-engineered construct (Ex Vivo Produced Oral Mucosa Equivalent, EVPOME) designed for use in repair of oral tissue following surgery. Second-harmonic generation microscopic imaging of an EVPOME construct (oral keratinocytes atop a scaffold coated with human type IV collagen) was employed to determine an approximate analytical expression for the complex shape of the interface between the two layers. This expression can then be inserted into the code to correct the simulated fluorescence for the effect of the irregular tissue geometry.

  16. Modelling of aspherical nebulae. I. A quick pseudo-3D photoionization code

    CERN Document Server

    Morisset, C; Peña, M

    2005-01-01

    We describe a pseudo-3D photoionization code, NEBU_3D and its associated visualization tool, VIS_NEB3D, which are able to easily and rapidly treat a wide variety of nebular geometries, by combining models obtained with a 1D photoionization code. The only requirement for the code to work is that the ionization source is unique and not extended. It is applicable as long as the diffuse ionizing radiation field is not dominant and strongly inhomogeneous. As examples of the capabilities of these new tools, we consider two very different theoretical cases. One is that of a high excitation planetary nebula that has an ellipsoidal shape with two polar density knots. The other one is that of a blister HII region, for which we have also constructed a spherical model (the spherical impostor) which has exactly the same Hbeta surface brightness distribution as the blister model and the same ionizing star. These two examples warn against preconceived ideas when interpreting spectroscopic and imaging data of HII regi...

  17. Integration of CFD codes and advanced combustion models for quantitative burnout determination

    Energy Technology Data Exchange (ETDEWEB)

    Javier Pallares; Inmaculada Arauzo; Alan Williams [University of Zaragoza, Zaragoza (Spain). Centre of Research for Energy Resources and Consumption (CIRCE)]

    2007-10-15

    CFD codes and advanced kinetics combustion models are extensively used to predict coal burnout in large utility boilers. Modelling approaches based on CFD codes can accurately solve the fluid dynamics equations involved in the problem but this is usually achieved by including simple combustion models. On the other hand, advanced kinetics combustion models can give a detailed description of the coal combustion behaviour by using a simplified description of the flow field, this usually being obtained from a zone-method approach. Both approximations describe correctly general trends on coal burnout, but fail to predict quantitative values. In this paper a new methodology which takes advantage of both approximations is described. In the first instance CFD solutions were obtained of the combustion conditions in the furnace in the Lamarmora power plant (ASM Brescia, Italy) for a number of different conditions and for three coals. Then, these furnace conditions were used as inputs for a more detailed chemical combustion model to predict coal burnout. In this, devolatilization was modelled using a commercial macromolecular network pyrolysis model (FG-DVC). For char oxidation an intrinsic reactivity approach including thermal annealing, ash inhibition and maceral effects, was used. Results from the simulations were compared against plant experimental values, showing a reasonable agreement in trends and quantitative values. 28 refs., 4 figs., 4 tabs.

  18. Development of orthogonal 2-dimensional numerical code TFC2D for fluid flow with various turbulence models and numerical schemes

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ju Yeop; In, Wang Kee; Chun, Tae Hyun; Oh, Dong Seok [Korea Atomic Energy Research Institute, Taejeon (Korea)]

    2000-02-01

    The development of an orthogonal 2-dimensional numerical code is presented. The code contains 9 widely used turbulence models: a standard k-{epsilon} model and 8 low-Reynolds-number models. It also includes 6 numerical schemes: 5 low-order schemes and 1 high-order scheme, QUICK. To verify the code, pipe flow, channel flow and expansion pipe flow are solved with various options of turbulence models and numerical schemes, and the calculated outputs are compared to experimental data. Furthermore, the discretization error that originates from the use of the standard k-{epsilon} turbulence model with a wall function is much reduced by introducing a new grid system in place of the conventional one. 23 refs., 58 figs., 6 tabs. (Author)

  19. Model Checking and Code Generation for UML Diagrams Using Graph Transformation

    Directory of Open Access Journals (Sweden)

    Wafa Chama

    2012-12-01

    Full Text Available UML is considered the standard object-oriented modelling language adopted by the Object Management Group. However, UML has been criticized due to the lack of formal semantics and the ambiguity of its models. On the other hand, UML models can be mathematically verified and checked by using an equivalent formal representation. So, in this paper, we propose an approach and a tool based on graph transformation to perform an automatic mapping for verification purposes. This transformation aims to bridge the gap between informal and formal notations and allows a formal verification of concurrent UML models using the Maude language. We consider both static (Class Diagram) and dynamic (StateChart and Communication Diagrams) features of a concurrent object-oriented system. Then, we use the Maude LTL Model Checker to verify the formal model obtained (automatic code generation in Maude). The meta-modelling tool AToM3 is used. A case study is presented to illustrate our approach.

  20. Assessment of uncertainties of the models used in thermal-hydraulic computer codes

    Science.gov (United States)

    Gricay, A. S.; Migrov, Yu. A.

    2015-09-01

    The article deals with matters concerned with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) in analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular, in the closing correlations of the loop thermal hydraulics block, is shown. Such a method shall feature the minimal degree of subjectivism and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in the above-mentioned range provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The performed study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in the above-mentioned range by the Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.

  1. Mitigating BeiDou Satellite-Induced Code Bias: Taking into Account the Stochastic Model of Corrections.

    Science.gov (United States)

    Guo, Fei; Li, Xin; Liu, Wanke

    2016-06-18

    The BeiDou satellite-induced code biases have been confirmed to be orbit type-, frequency-, and elevation-dependent. Such code-phase divergences (code bias variations) severely affect absolute precise applications which use code measurements. To reduce their adverse effects, an improved correction model is proposed in this paper. Different from the model proposed by Wanninger and Beer (2015), more datasets (a time span of almost two years) were used to produce the correction values. More importantly, the stochastic information, i.e., the precision indexes, is given together with the correction values in the improved model, whereas only correction values were given and the precision indexes were completely missing in the traditional model. With the improved correction model, users may have a better understanding of their corrections, especially the uncertainty of the corrections. Thus, it is helpful for refining the stochastic model of code observations. Validation tests in precise point positioning (PPP) reveal that a proper stochastic model is critical. The actual precision of the corrected code observations can be reflected in a more objective manner if the stochastic model of the corrections is taken into account. As a consequence, PPP solutions with the improved model outperform the traditional one in terms of positioning accuracy as well as convergence speed. In addition, the Melbourne-Wübbena (MW) combination, which serves for ambiguity fixing, was verified as well. The uncorrected MW values show strong systematic variations with an amplitude of half a wide-lane cycle, which prevents precise ambiguity determination and successful ambiguity resolution. After application of the code bias correction models, the systematic variations can be largely removed, and the resulting wide-lane ambiguities are more likely to be fixed. Moreover, the code residuals show more reasonable distributions after code bias corrections with either the traditional or the improved model.
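
    The practical payoff of publishing precision indexes alongside the corrections is that users can both correct the observation and de-weight it according to how uncertain the correction is. The sketch below shows that bookkeeping with a made-up elevation-node table; the node values, sign convention, and noise model are placeholders, not the published correction model.

```python
import numpy as np

# Hypothetical correction table (placeholders, not the published values):
elev_nodes  = np.array([0., 10., 20., 30., 40., 50., 60., 70., 80., 90.])            # deg
corr_nodes  = np.array([-0.5, -0.4, -0.3, -0.2, -0.1, 0.0, 0.1, 0.2, 0.3, 0.3])      # m
sigma_nodes = np.array([0.10, 0.08, 0.07, 0.06, 0.06, 0.05, 0.05, 0.05, 0.06, 0.07]) # m

def corrected_code_obs(pseudorange_m, elevation_deg, sigma_code_m=0.3):
    """Apply the elevation-interpolated bias correction and return the observation
    together with a weight that folds in the correction's own uncertainty."""
    corr = np.interp(elevation_deg, elev_nodes, corr_nodes)
    sigma_corr = np.interp(elevation_deg, elev_nodes, sigma_nodes)
    weight = 1.0 / (sigma_code_m**2 + sigma_corr**2)
    return pseudorange_m - corr, weight

print(corrected_code_obs(21_345_678.912, elevation_deg=35.0))
```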

  2. Mitigating BeiDou Satellite-Induced Code Bias: Taking into Account the Stochastic Model of Corrections

    Science.gov (United States)

    Guo, Fei; Li, Xin; Liu, Wanke

    2016-01-01

    The BeiDou satellite-induced code biases have been confirmed to be orbit type-, frequency-, and elevation-dependent. Such code-phase divergences (code bias variations) severely affect absolute precise applications which use code measurements. To reduce their adverse effects, an improved correction model is proposed in this paper. Different from the model proposed by Wanninger and Beer (2015), more datasets (a time span of almost two years) were used to produce the correction values. More importantly, the stochastic information, i.e., the precision indexes, is given together with the correction values in the improved model, whereas only correction values were given and the precision indexes were completely missing in the traditional model. With the improved correction model, users may have a better understanding of their corrections, especially the uncertainty of the corrections. Thus, it is helpful for refining the stochastic model of code observations. Validation tests in precise point positioning (PPP) reveal that a proper stochastic model is critical. The actual precision of the corrected code observations can be reflected in a more objective manner if the stochastic model of the corrections is taken into account. As a consequence, PPP solutions with the improved model outperform the traditional one in terms of positioning accuracy as well as convergence speed. In addition, the Melbourne-Wübbena (MW) combination, which serves for ambiguity fixing, was verified as well. The uncorrected MW values show strong systematic variations with an amplitude of half a wide-lane cycle, which prevents precise ambiguity determination and successful ambiguity resolution. After application of the code bias correction models, the systematic variations can be largely removed, and the resulting wide-lane ambiguities are more likely to be fixed. Moreover, the code residuals show more reasonable distributions after code bias corrections with either the traditional or the improved model.

  3. Sodium spray and jet fire model development within the CONTAIN-LMR code

    Energy Technology Data Exchange (ETDEWEB)

    Scholtyssek, W. [Kernforschungszentrum Karlsruhe GmbH (Germany). Inst. fuer Neutronenphysik und Reaktortechnik]; Murata, K.K. [Sandia National Labs., Albuquerque, NM (United States)]

    1993-12-31

    An assessment was made of the sodium spray fire model implemented in the CONTAIN code. The original droplet burn model, which was based on the NACOM code, was improved in several aspects, especially concerning evaluation of the droplet burning rate, reaction chemistry and heat balance, spray geometry and droplet motion, and consistency with CONTAIN standards of gas property evaluation. An additional droplet burning model based on a proposal by Krolikowski was made available to include the effect of the chemical equilibrium conditions at the flame temperature. The models were validated against single-droplet burn experiments as well as spray and jet fire experiments. Reasonable agreement was found between the two burn models and experimental data. When the gas temperature in the burning compartment reaches high values, the Krolikowski model seems to be preferable. Critical parameters for spray fire evaluation were found to be the spray characterization, especially the droplet size, which largely determines the burning efficiency, and heat transfer conditions at the interface between the atmosphere and structures, which controls the thermal hydraulic behavior in the burn compartment.

  4. DISCRETE DYNAMIC MODEL OF BEVEL GEAR – VERIFICATION THE PROGRAM SOURCE CODE FOR NUMERICAL SIMULATION

    Directory of Open Access Journals (Sweden)

    Krzysztof TWARDOCH

    2014-06-01

    Full Text Available This article presents a new physical and mathematical model of a bevel gear for studying the influence of design parameters and operating factors on the dynamic state of the gear transmission. It discusses the process of verifying the proper operation of the author's own calculation program used to determine the solutions of the dynamic model of the bevel gear, and presents the block diagram of the computing algorithm that was used to create the program for the numerical simulation. The program source code is written in MATLAB, an interactive environment for scientific and engineering calculations.

  5. Nuclear numerical range and quantum error correction codes for non-unitary noise models

    Science.gov (United States)

    Lipka-Bartosik, Patryk; Życzkowski, Karol

    2017-01-01

    We introduce a notion of nuclear numerical range defined as the set of expectation values of a given operator A among normalized pure states, which belong to the nucleus of an auxiliary operator Z. This notion proves to be applicable to investigate models of quantum noise with block-diagonal structure of the corresponding Kraus operators. The problem of constructing a suitable quantum error correction code for this model can be restated as a geometric problem of finding intersection points of certain sets in the complex plane. This technique, worked out in the case of two-qubit systems, can be generalized for larger dimensions.

  6. The Hadronic Models for Cosmic Ray Physics: the FLUKA Code Solutions

    Energy Technology Data Exchange (ETDEWEB)

    Battistoni, G.; Garzelli, M.V.; Gadioli, E.; Muraro, S.; Sala, P.R.; Fasso, A.; Ferrari, A.; Roesler, S.; Cerutti, F.; Ranft, J.; Pinsky, L.S.; Empl, A.; Pelliccioni, M.; Villari, R.; /INFN, Milan /Milan U. /SLAC /CERN /Siegen U. /Houston U. /Frascati /ENEA, Frascati

    2007-01-31

    FLUKA is a general purpose Monte Carlo transport and interaction code used for fundamental physics and for a wide range of applications. These include Cosmic Ray Physics (muons, neutrinos, EAS, underground physics), both for basic research and applied studies in space and atmospheric flight dosimetry and radiation damage. A review of the hadronic models available in FLUKA and relevant for the description of cosmic ray air showers is presented in this paper. Recent updates concerning these models are discussed. The FLUKA capabilities in the simulation of the formation and propagation of EM and hadronic showers in the Earth's atmosphere are shown.

  7. A numerical code for a three-dimensional magnetospheric MHD equilibrium model

    Science.gov (United States)

    Voigt, G.-H.

    1992-01-01

    Work was begun on two-dimensional and three-dimensional MHD equilibrium models of Earth's magnetosphere. The original proposal was motivated by the realization that global, purely data-based models of Earth's magnetosphere are inadequate for studying the underlying plasma physical principles according to which the magnetosphere evolves on the quasi-static convection time scale. Complex numerical grid generation schemes were established for a 3-D Poisson solver, and a robust Grad-Shafranov solver was coded for high-beta MHD equilibria. The effects of both the magnetopause geometry and the boundary conditions on the magnetotail current distribution were then calculated.

  8. An Efficient Code-Based Threshold Ring Signature Scheme with a Leader-Participant Model

    Directory of Open Access Journals (Sweden)

    Guomin Zhou

    2017-01-01

    Full Text Available Digital signature schemes with additional properties have broad applications, such as protecting the identity of signers by allowing a signer to sign a message anonymously within a group of signers (also known as a ring). While the number-theoretic problems underlying most existing schemes are still secure at the time of this research, the situation could change with advances in quantum computing. There is a pressing need to design PKC schemes that are secure against quantum attacks. In this paper, we propose a novel code-based threshold ring signature scheme with a leader-participant model. A leader is appointed who chooses some shared parameters for the other signers to participate in the signing process. This leader-participant model enhances performance because every participant, including the leader, can execute the decoding algorithm (as part of the signing process) upon receiving the shared parameters from the leader. The time complexity of our scheme is close to that of Courtois et al.'s (2001) scheme, which is often used as a basis to construct other types of code-based signature schemes. Moreover, as a threshold ring signature scheme, our scheme is as efficient as the normal code-based ring signature.

  9. Evolutionary modeling and prediction of non-coding RNAs in Drosophila.

    Directory of Open Access Journals (Sweden)

    Robert K Bradley

    Full Text Available We performed benchmarks of phylogenetic grammar-based ncRNA gene prediction, experimenting with eight different models of structural evolution and two different programs for genome alignment. We evaluated our models using alignments of twelve Drosophila genomes. We find that ncRNA prediction performance can vary greatly between different gene predictors and subfamilies of ncRNA gene. Our estimates for false positive rates are based on simulations which preserve local islands of conservation; using these simulations, we predict a higher rate of false positives than previous computational ncRNA screens have reported. Using one of the tested prediction grammars, we provide an updated set of ncRNA predictions for D. melanogaster and compare them to previously-published predictions and experimental data. Many of our predictions show correlations with protein-coding genes. We found significant depletion of intergenic predictions near the 3' end of coding regions and furthermore depletion of predictions in the first intron of protein-coding genes. Some of our predictions are colocated with larger putative unannotated genes: for example, 17 of our predictions showing homology to the RFAM family snoR28 appear in a tandem array on the X chromosome; the 4.5 Kbp spanned by the predicted tandem array is contained within a FlyBase-annotated cDNA.

  10. A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects

    CERN Document Server

    Schäfer, Christoph M; Maindl, Thomas I; Speith, Roland; Scherrer, Samuel; Kley, Wilhelm

    2016-01-01

    Modern graphics processing units (GPUs) lead to a major increase in the performance of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as the x86 architecture, existing numerical codes cannot easily be migrated to run on a GPU. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. The new code allows for a tremendous increase in the speed of astrophysical simulations with SPH and self-gravity at low cost for new hardware. We have implemented the SPH equations to model gas, liquids, and elastic and plastic solid bodies, and added a fragmentation model for brittle materials. Self-gravity may optionally be included in the simulations and is treated by the use of a Barnes-Hut tree. We find an impressive performance gain using NVIDIA consumer devices compared to ou...

  11. Distortion Modeling and Error Robust Coding Scheme for H.26L Video

    Institute of Scientific and Technical Information of China (English)

    CHEN Chuan; YU Songyu; CHENG Lianji

    2004-01-01

    Transmission of hybrid-coded video including motion compensation and spatial prediction over an error-prone channel results in the well-known problem of error propagation, because of the drift in reference frames between encoder and decoder. The prediction loop propagates errors and causes substantial degradation in video quality. Especially in H.26L video, both intra and inter prediction strategies are used to improve compression efficiency; however, they make error propagation more serious. This work proposes distortion models for H.26L video to optimally estimate the overall distortion of decoder frame reconstruction due to quantization, error propagation, and error concealment. Based on these statistical distortion models, our error-robust coding scheme integrates only the distinct distortion between intra and inter macroblocks into a rate-distortion based framework to select a suitable coding mode for each macroblock, so the cost in computational complexity is modest. Simulations under typical 3GPP/3GPP2 channel and Internet channel conditions have shown that our proposed scheme achieves much better performance than those currently used in H.26L. The error propagation estimation and its effect at fractional pixel-level prediction have also been tested. All the results demonstrate that our proposed scheme achieves a good balance between compression efficiency and error robustness for H.26L video, at the cost of modest additional complexity.
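    The mode decision this abstract describes reduces, per macroblock, to a Lagrangian rate-distortion comparison. The sketch below shows only that selection step, with made-up distortion and rate numbers standing in for the paper's statistical end-to-end distortion estimates (quantization, propagation and concealment).

        # Minimal Lagrangian mode selection: choose the macroblock mode with the
        # smallest cost J = D + lambda * R.  The candidate numbers are assumed
        # placeholders, not outputs of the paper's distortion models.

        def select_mode(candidates, lam):
            """candidates: list of (mode_name, estimated_distortion, rate_bits)."""
            return min(candidates, key=lambda m: m[1] + lam * m[2])

        macroblock_candidates = [
            ("INTRA_16x16", 48.0, 260),   # robust but expensive in bits
            ("INTER_16x16", 61.0, 120),   # cheap but propagates channel errors
            ("INTER_8x8",   55.0, 170),
        ]
        best = select_mode(macroblock_candidates, lam=0.20)
        print("chosen mode:", best[0])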

  12. Coding Model and Mapping Method of Spherical Diamond Discrete Grids Based on Icosahedron

    Directory of Open Access Journals (Sweden)

    LIN Bingxian

    2016-12-01

    Full Text Available The Discrete Global Grid (DGG) provides a fundamental environment for the organization and management of global-scale spatial data. A DGG encoding scheme, which avoids coordinate transformations between different coordinate reference frames and reduces the complexity of spatial analysis, contributes greatly to the multi-scale expression and unified modeling of spatial data. Compared with other kinds of DGGs, the Diamond Discrete Global Grid (DDGG) based on the icosahedron is beneficial to the integration and expression of spherical spatial data because of its better geometric properties. However, its structure is more complicated than that of the DDGG on the octahedron, because the edges of its initial diamonds do not coincide with meridians and parallels. This poses new challenges for constructing a hierarchical encoding system and a mapping relationship with geographic coordinates. To address this issue, this paper presents a DDGG coding system based on the Hilbert curve and designs conversion methods between codes and geographic coordinates. The study results indicate that this Hilbert-curve-based encoding system can express scale and location information implicitly by exploiting the similarity between the DDGG and a planar grid, and that it balances the efficiency and accuracy of conversion between codes and geographic coordinates in order to support the modeling, integrated management, and spatial analysis of massive global spatial data.
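    The Hilbert-curve indexing at the heart of the proposed coding system can be illustrated on an ordinary planar 2^n x 2^n grid. The functions below implement the classic iterative Hilbert index/cell conversion; mapping this onto the diamond faces of the icosahedral grid, as the paper does, would need extra face bookkeeping that is not shown here.

        def _rot(s, x, y, rx, ry):
            # Rotate/flip a quadrant so the sub-curve gets the right orientation.
            if ry == 0:
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            return x, y

        def hilbert_d_to_xy(order, d):
            """Hilbert index -> (x, y) cell on a 2**order x 2**order grid."""
            n = 1 << order
            x = y = 0
            t, s = d, 1
            while s < n:
                rx = 1 & (t // 2)
                ry = 1 & (t ^ rx)
                x, y = _rot(s, x, y, rx, ry)
                x, y = x + s * rx, y + s * ry
                t //= 4
                s *= 2
            return x, y

        def hilbert_xy_to_d(order, x, y):
            """(x, y) cell -> Hilbert index on a 2**order x 2**order grid."""
            n = 1 << order
            d, s = 0, n >> 1
            while s > 0:
                rx = 1 if (x & s) > 0 else 0
                ry = 1 if (y & s) > 0 else 0
                d += s * s * ((3 * rx) ^ ry)
                x, y = _rot(n, x, y, rx, ry)
                s >>= 1
            return d

        # Round-trip check on a level-3 (8 x 8) grid
        assert all(hilbert_xy_to_d(3, *hilbert_d_to_xy(3, d)) == d for d in range(64))

    Consecutive Hilbert indexes land in neighbouring cells, which is what lets such a code carry location and scale information implicitly.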

  13. Evaluation of turbulence models in the PARC code for transonic diffuser flows

    Science.gov (United States)

    Georgiadis, N. J.; Drummond, J. E.; Leonard, B. P.

    1994-01-01

    Flows through a transonic diffuser were investigated with the PARC code using five turbulence models to determine the effects of turbulence model selection on flow prediction. Three of the turbulence models were algebraic models: Thomas (the standard algebraic turbulence model in PARC), Baldwin-Lomax, and Modified Mixing Length-Thomas (MMLT). The other two models were the low Reynolds number k-epsilon models of Chien and Speziale. Three diffuser flows, referred to as the no-shock, weak-shock, and strong-shock cases, were calculated with each model to conduct the evaluation. Pressure distributions, velocity profiles, locations of shocks, and maximum Mach numbers in the duct were the flow quantities compared. Overall, the Chien k-epsilon model was the most accurate of the five models when considering results obtained for all three cases. However, the MMLT model provided solutions as accurate as the Chien model for the no-shock and the weak-shock cases, at a substantially lower computational cost (measured in CPU time required to obtain converged solutions). The strong shock flow, which included a region of shock-induced flow separation, was only predicted well by the two k-epsilon models.

  14. Offshore Code Comparison Collaboration within IEA Wind Annex XXIII: Phase II Results Regarding Monopile Foundation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Jonkman, J.; Butterfield, S.; Passon, P.; Larsen, T.; Camp, T.; Nichols, J.; Azcona, J.; Martinez, A.

    2008-01-01

    This paper presents an overview and describes the latest findings of the code-to-code verification activities of the Offshore Code Comparison Collaboration, which operates under Subtask 2 of the International Energy Agency Wind Annex XXIII.

  15. Trading speed and accuracy by coding time: a coupled-circuit cortical model.

    Directory of Open Access Journals (Sweden)

    Dominic Standage

    2013-04-01

    Full Text Available Our actions take place in space and time, but despite the role of time in decision theory and the growing acknowledgement that the encoding of time is crucial to behaviour, few studies have considered the interactions between neural codes for objects in space and for elapsed time during perceptual decisions. The speed-accuracy trade-off (SAT provides a window into spatiotemporal interactions. Our hypothesis is that temporal coding determines the rate at which spatial evidence is integrated, controlling the SAT by gain modulation. Here, we propose that local cortical circuits are inherently suited to the relevant spatial and temporal coding. In simulations of an interval estimation task, we use a generic local-circuit model to encode time by 'climbing' activity, seen in cortex during tasks with a timing requirement. The model is a network of simulated pyramidal cells and inhibitory interneurons, connected by conductance synapses. A simple learning rule enables the network to quickly produce new interval estimates, which show signature characteristics of estimates by experimental subjects. Analysis of network dynamics formally characterizes this generic, local-circuit timing mechanism. In simulations of a perceptual decision task, we couple two such networks. Network function is determined only by spatial selectivity and NMDA receptor conductance strength; all other parameters are identical. To trade speed and accuracy, the timing network simply learns longer or shorter intervals, driving the rate of downstream decision processing by spatially non-selective input, an established form of gain modulation. Like the timing network's interval estimates, decision times show signature characteristics of those by experimental subjects. Overall, we propose, demonstrate and analyse a generic mechanism for timing, a generic mechanism for modulation of decision processing by temporal codes, and we make predictions for experimental verification.

  16. Rhythmic complexity and predictive coding: A novel approach to modeling rhythm and meter perception in music

    Directory of Open Access Journals (Sweden)

    Peter Vuust

    2014-10-01

    Full Text Available Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of predictive coding, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning are manifested through the brain’s Bayesian minimization of the error between the input to the brain and the brain’s prior expectations. Third, we develop a predictive coding model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard (‘rhythm’) and the brain’s anticipatory structuring of music (‘meter’). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the predictive coding theory. We argue that musical rhythm exploits the brain’s general principles of prediction and propose that pleasure and the desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms.

  17. Phenomenological modeling of critical heat flux: The GRAMP code and its validation

    Energy Technology Data Exchange (ETDEWEB)

    Ahmad, M. [Pakistan Institute of Engineering and Applied Sciences (PIEAS), Islamabad (Pakistan); Chandraker, D.K. [Bhabha Atomic Research Centre, Mumbai (India); Hewitt, G.F. [Imperial College, London SW7 2BX (United Kingdom); Vijayan, P.K. [Bhabha Atomic Research Centre, Mumbai (India); Walker, S.P., E-mail: s.p.walker@imperial.ac.uk [Imperial College, London SW7 2BX (United Kingdom)

    2013-01-15

    Highlights: • Assessment of CHF limits is vital for LWR optimization and safety analysis. • Phenomenological modeling is a valuable adjunct to pure empiricism. • It is based on empirical representations of the (several, competing) phenomena. • Phenomenological modeling codes making 'aggregate' predictions need careful assessment against experiments. • The physical and mathematical basis of the phenomenological modeling code GRAMP is presented. • The GRAMP code is assessed against measurements from BARC (India) and Harwell (UK), and the Look Up Tables. - Abstract: Reliable knowledge of the critical heat flux is vital for the design of light water reactors, for both safety and optimization. The use of wholly empirical correlations, or equivalently 'Look Up Tables', can be very effective, but is generally less so in more complex cases, in particular cases where the heat flux is axially non-uniform. Phenomenological models are in principle able to take into account a wider range of conditions, with less comprehensive coverage of experimental measurements. These models are themselves in part based upon empirical correlations, albeit of the more fundamental individual phenomena occurring rather than the aggregate behaviour, and as such they too require experimental validation. In this paper we present the basis of a general-purpose phenomenological code, GRAMP, and then use two independent 'direct' sets of measurements, from BARC in India and from Harwell in the United Kingdom, and the large dataset embodied in the Look Up Tables, to perform a validation exercise on it. Very good agreement between predictions and experimental measurements is observed, adding to the confidence with which the phenomenological model can be used. Remaining important uncertainties in the

  18. The identification of rare charged kaons in heavy-ion collisions at relativistic energies by time-of-flight with the HADES spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Mangiarotti, A. [Laboratorio de Instrumentacao e Fisica Experimental de Particulas, 3004-516 Coimbra (Portugal)], E-mail: alessio@lipc.fis.uc.pt; Blanco, A. [Laboratorio de Instrumentacao e Fisica Experimental de Particulas, 3004-516 Coimbra (Portugal); Fonte, P. [Laboratorio de Instrumentacao e Fisica Experimental de Particulas, 3004-516 Coimbra (Portugal); Instituto Superior de Engenharia de Coimbra, Rua Pedro Nunes, 3030-199 Coimbra (Portugal)

    2009-05-01

    Detailed performance simulations of the HADES RPC time-of-flight wall have been undertaken. The most delicate situation is the identification of K{sup -} mesons close to the mid-rapidity region. While momentum resolution does not appear to be an issue, the small non-Gaussian tails in the time response are a clear problem. It will be argued that the use of redundant information supplied by time coincidence between the two layers (possible in the HADES design) can drastically improve the situation. The uniformity of acceptance for the two layer coincidence will be discussed. A possible distortion to the reconstructed kaon flow due to the wall occupancy, which is not isotropic with respect to the reaction plane because of the flow pattern of the much more abundant protons, pions and deuterons, has also been studied.

  19. Study of {lambda} hyperon production in C+C collisions at 2 AGeV beam energy with the HADES spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Kanaki, K.

    2007-03-15

    The HADES spectrometer is a high resolution detector installed at SIS/GSI, Darmstadt. It was primarily designed for studying dielectron decay channels of vector mesons. However, its high accuracy makes it an attractive tool for investigating other rare probes at these beam energies, such as strange baryons. Multiwire drift chambers providing high spatial resolution were developed and investigated. One of the early experimental runs of HADES was analyzed, and the {lambda} hyperon signal was successfully reconstructed for the first time in C+C collisions at 2 AGeV beam kinetic energy. The total {lambda} production cross section is contrasted with expectations from simulations and compared with measurements of the {lambda} yield in heavier systems at the same energy. In addition, the result is considered in the context of strangeness balance, and the relative strangeness content of the reaction products is determined. (orig.)

  20. Automated model integration at source code level: An approach for implementing models into the NASA Land Information System

    Science.gov (United States)

    Wang, S.; Peters-Lidard, C. D.; Mocko, D. M.; Kumar, S.; Nearing, G. S.; Arsenault, K. R.; Geiger, J. V.

    2014-12-01

    Model integration bridges the data flow between modeling frameworks and models. However, models usually do not fit directly into a particular modeling environment if they were not designed for it. An example is implementing different types of models into the NASA Land Information System (LIS), a software framework for land-surface modeling and data assimilation. Model implementation requires scientific knowledge and software expertise and may take a developer months to learn LIS and the model software structure. Debugging and testing of the model implementation is also time-consuming due to not fully understanding LIS or the model. This time spent is costly for research and operational projects. To address this issue, an approach has been developed to automate model integration into LIS. With this in mind, a general model interface was designed to retrieve the forcing inputs, parameters, and state variables needed by the model and to provide state variables and outputs back to LIS. Every model can be wrapped to comply with the interface, usually with a FORTRAN 90 subroutine. Development efforts need only knowledge of the model and basic programming skills. With such wrappers, the logic is the same for implementing all models. Code templates defined for this general model interface can be re-used with any specific model. Therefore, the model implementation can be done automatically. An automated model implementation toolkit was developed with Microsoft Excel and its built-in VBA language. It allows model specifications in three worksheets and contains FORTRAN 90 code templates in VBA programs. According to the model specification, the toolkit generates data structures and procedures within FORTRAN modules and subroutines, which transfer data between LIS and the model wrapper. Model implementation is standardized, and about 80 - 90% of the development load is reduced. In this presentation, the automated model implementation approach is described along with LIS programming

  1. Modeling latency code processing in the electric sense: from the biological template to its VLSI implementation.

    Science.gov (United States)

    Engelmann, Jacob; Walther, Tim; Grant, Kirsty; Chicca, Elisabetta; Gómez-Sena, Leonel

    2016-09-13

    Understanding the coding of sensory information under the temporal constraints of natural behavior is not yet well resolved. There is a growing consensus that spike timing or latency coding can maximally exploit the timing of neural events to make fast computing elements and that such mechanisms are essential to information processing functions in the brain. The electric sense of mormyrid fish provides a convenient biological model where this coding scheme can be studied. The sensory input is a physically ordered spatial pattern of current densities, which is coded in the precise timing of primary afferent spikes. The neural circuits of the processing pathway are well known and the system exhibits the best known illustration of corollary discharge, which provides the reference to decoding the sensory afferent latency pattern. A theoretical model has been constructed from available electrophysiological and neuroanatomical data to integrate the principal traits of the neural processing structure and to study sensory interaction with motor-command-driven corollary discharge signals. This has been used to explore neural coding strategies at successive stages in the network and to examine the simulated network capacity to reproduce output neuron responses. The model shows that the network has the ability to resolve primary afferent spike timing differences in the sub-millisecond range, and that this depends on the coincidence of sensory and corollary discharge-driven gating signals. In the integrative and output stages of the network, corollary discharge sets up a proactive background filter, providing temporally structured excitation and inhibition within the network whose balance is then modulated locally by sensory input. This complements the initial gating mechanism and contributes to amplification of the input pattern of latencies, conferring network hyperacuity. These mechanisms give the system a robust capacity to extract behaviorally meaningful features of the

  2. Geochemical modeling of diagenetic reactions in Snorre Field reservoir sandstones: a comparative study of computer codes

    Directory of Open Access Journals (Sweden)

    Marcos Antonio Klunk

    Full Text Available Diagenetic reactions, characterized by the dissolution and precipitation of minerals at low temperatures, control the quality of sedimentary rocks as hydrocarbon reservoirs. Geochemical modeling, a tool used to understand diagenetic processes, is performed through computer codes based on thermodynamic and kinetic parameters. In a comparative study, we reproduced the diagenetic reactions observed in Snorre Field reservoir sandstones, Norwegian North Sea. These reactions had previously been modeled in the literature using the DISSOL-THERMAL code. In this study, we modeled the diagenetic reactions in the reservoirs using Geochemist's Workbench (GWB) and TOUGHREACT software, based on a convective-diffusive-reactive model and on the thermodynamic and kinetic parameters compiled for each reaction. TOUGHREACT and DISSOL-THERMAL modeling showed dissolution of quartz, K-feldspar and plagioclase in a similar temperature range from 25 to 80°C. In contrast, GWB modeling showed dissolution of albite, plagioclase and illite, as well as precipitation of quartz, K-feldspar and kaolinite in the same temperature range. The modeling generated by the different software packages for temperatures of 100, 120 and 140°C similarly showed the dissolution of quartz, K-feldspar, plagioclase and kaolinite, but differed in the precipitation of albite and illite. At temperatures of 150 and 160°C, GWB and TOUGHREACT produced results different from those of DISSOL-THERMAL, except for the dissolution of quartz, plagioclase and kaolinite. The comparative study allows choosing the numerical modeling software whose results are closest to the diagenetic reactions observed in the petrographic analysis of the modeled reservoirs.

  3. Learning to Act Like a Lawyer: A Model Code of Professional Responsibility for Law Students

    Directory of Open Access Journals (Sweden)

    David M. Tanovich

    2009-02-01

    incidents at law schools that raise serious issues about the professionalism of law students. They include, for example, the UofT marks scandal, the Windsor first-year blog and the proliferation of blogs like www.lawstudents.ca and www.lawbuzz.ca with gratuitous, defamatory and offensive entries. It is not clear that all of this conduct would be caught by University codes of conduct, which often limit their reach to on-campus behaviour or University-sanctioned events. What should a law school code of professional responsibility look like and what ethical responsibilities should it identify? For example, should there be a mandatory pro bono obligation on students or a duty to report misconduct? The last part of the article addresses this question by setting out a model code of professional responsibility for law students. Law students are the future of the legal profession. How well prepared are they when they leave law school to assume their professional and ethical obligations to themselves, to the profession and to the public? This question has led to a growing interest in Canada in the teaching of legal ethics. It has also led to greater emphasis on the development of clinical and experiential training, as exemplified by the scholarship and teaching of Professor Rose Voyvodic. However, less attention has been devoted to identifying the general ethical responsibilities of law students when they are not working in a clinic or another legal setting. This is reflected in the fact that very few Canadian articles deal with the question and, more importantly, that there is a shortage, within law faculties, of disciplinary policies or codes of conduct that set out the professional obligations of law students. This article develops an id

  4. Guidelines for selecting codes for ground-water transport modeling of low-level waste burial sites. Executive summary

    Energy Technology Data Exchange (ETDEWEB)

    Simmons, C.S.; Cole, C.R.

    1985-05-01

    This document was written to provide guidance to managers and site operators on how ground-water transport codes should be selected for assessing burial site performance. There is a need for a formal approach to selecting appropriate codes from the multitude of potentially useful ground-water transport codes that are currently available. Code selection is a problem that requires more than merely considering mathematical equation-solving methods. These guidelines are very general and flexible and are also meant for developing systems simulation models to be used to assess the environmental safety of low-level waste burial facilities. Code selection is only a single aspect of the overall objective of developing a systems simulation model for a burial site. The guidance given here is mainly directed toward applications-oriented users, but managers and site operators need to be familiar with this information to direct the development of scientifically credible and defensible transport assessment models. Some specific advice for managers and site operators on how to direct a modeling exercise is based on the following five steps: identify specific questions and study objectives; establish costs and schedules for achieving answers; enlist the aid of professional model applications group; decide on approach with applications group and guide code selection; and facilitate the availability of site-specific data. These five steps for managers/site operators are discussed in detail following an explanation of the nine systems model development steps, which are presented first to clarify what code selection entails.

  5. A neural network model of general olfactory coding in the insect antennal lobe.

    Science.gov (United States)

    Getz, W M; Lutz, A

    1999-08-01

    A central problem in olfaction is understanding how the quality of olfactory stimuli is encoded in the insect antennal lobe (or in the analogously structured vertebrate olfactory bulb) for perceptual processing in the mushroom bodies of the insect protocerebrum (or in the vertebrate olfactory cortex). In the study reported here, a relatively simple neural network model, inspired by our current knowledge of the insect antennal lobes, is used to investigate how each of several features and elements of the network, such as synapse strengths, feedback circuits and the steepness of neural activation functions, influences the formation of an olfactory code in neurons that project from the antennal lobes to the mushroom bodies (or from mitral cells to olfactory cortex). An optimal code in these projection neurons (PNs) should minimize potential errors by the mushroom bodies in misidentifying the quality of an odor across a range of concentrations while maximizing the ability of the mushroom bodies to resolve odors of different quality. Simulation studies demonstrate that the network is able to produce codes independent or virtually independent of concentration over a given range. The extent of this range is moderately dependent on a parameter that characterizes how long it takes for the voltage in an activated neuron to decay back to its resting potential, strongly dependent on the strength of excitatory feedback by the PNs onto antennal lobe intrinsic neurons (INs), and overwhelmingly dependent on the slope of the activation function that transforms the voltage of depolarized neurons into the rate at which spikes are produced. Although the code in the PNs is degraded by large variations in the concentration of odor stimuli, good performance levels are maintained when the complexity of stimuli, as measured by the number of component odorants, is doubled. When excitatory feedback from the PNs to the INs is strong, the activity in the PNs undergoes transitions from initial

  6. A neurophysiologically plausible population code model for feature integration explains visual crowding.

    Directory of Open Access Journals (Sweden)

    Ronald van den Berg

    2010-01-01

    Full Text Available An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.

  7. A neurophysiologically plausible population code model for feature integration explains visual crowding.

    Science.gov (United States)

    van den Berg, Ronald; Roerdink, Jos B T M; Cornelissen, Frans W

    2010-01-22

    An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.
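    A toy version of the population-coding account of compulsory averaging is easy to write down: orientation-tuned units respond to both a target and a flanker, the responses are pooled over space, and a population-vector readout then reports something close to the average orientation. The tuning width, pooling rule and decoder below are schematic assumptions, not the authors' fitted model.

        import numpy as np

        def population_response(theta_deg, pref_deg, sigma=20.0):
            """Gaussian orientation tuning on the 180-degree circular domain."""
            d = np.rad2deg(np.angle(np.exp(1j * np.deg2rad(2 * (theta_deg - pref_deg))))) / 2
            return np.exp(-0.5 * (d / sigma) ** 2)

        prefs = np.arange(0.0, 180.0, 5.0)          # preferred orientations of the units
        target, flanker = 30.0, 70.0                # stimulus orientations (degrees)

        # Spatial pooling: the integration field sums target- and flanker-driven activity
        pooled = population_response(target, prefs) + population_response(flanker, prefs)

        # Population-vector decoding on the doubled-angle circle
        z = np.sum(pooled * np.exp(1j * np.deg2rad(2 * prefs)))
        decoded = np.rad2deg(np.angle(z)) / 2 % 180
        print(f"decoded orientation: {decoded:.1f} deg (midway between 30 and 70)")

    With equal gains the decoded value sits near 50 degrees, i.e. the pooled code reports an average rather than the target alone, which is the "compulsory averaging" signature discussed above.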

  8. Active magnetic bearing control loop modeling for a finite element rotordynamics code

    Science.gov (United States)

    Genta, Giancarlo; Delprete, Cristiana; Carabelli, Stefano

    1994-05-01

    A mathematical model of an active electromagnetic bearing which includes the actuator, the sensor and the control system is developed and implemented in a specialized finite element code for rotordynamic analysis. The element formulation and its incorporation in the model of the machine are described in detail. A solution procedure, based on a modal approach in which the number of retained modes is controlled by the user, is then shown together with other procedures for computing the steady-state response to both static and unbalance forces. An example of application shows the numerical results obtained on a model of an electric motor suspended on a five active-axis magnetic suspension. The comparison of some of these results with the experimental characteristics of the actual system shows the ability of the present model to predict its performance.

  9. Comparison of two numerical modelling codes for hydraulic and transport calculations in the near-field

    Energy Technology Data Exchange (ETDEWEB)

    Kalin, J., E-mail: jan.kalin@zag.s [Slovenian National Building and Civil Engineering Institute, Dimiceva 12, SI-1000 Ljubljana (Slovenia); Petkovsek, B., E-mail: borut.petkovsek@zag.s [Slovenian National Building and Civil Engineering Institute, Dimiceva 12, SI-1000 Ljubljana (Slovenia); Montarnal, Ph., E-mail: philippe.montarnal@cea.f [CEA/Saclay, DM2S/SFME/LSET, Gif-sur-Yvette, 91191 cedex (France); Genty, A., E-mail: alain.genty@cea.f [CEA/Saclay, DM2S/SFME/LSET, Gif-sur-Yvette, 91191 cedex (France); Deville, E., E-mail: estelle.deville@cea.f [CEA/Saclay, DM2S/SFME/LSET, Gif-sur-Yvette, 91191 cedex (France); Krivic, J., E-mail: jure.krivic@geo-zs.s [Geological Survey of Slovenia, Dimiceva 14, SI-1000 Ljubljana (Slovenia); Ratej, J., E-mail: joze.ratej@geo-zs.s [Geological Survey of Slovenia, Dimiceva 14, SI-1000 Ljubljana (Slovenia)

    2011-04-15

    In recent years the Slovenian Performance Analysis/Safety Assessment team has performed many generic studies for the future Slovenian low and intermediate level waste repository, most recently a Special Safety Analysis for the Krsko site. The modelling approach was to split the problem into three parts: near-field (detailed model of the repository), far-field (i.e., geosphere) and biosphere. In the Special Safety Analysis the code used to perform the near-field calculations was Hydrus2D. Recently the team has begun a cooperation with the French Commissariat à l'Energie Atomique/Saclay (CEA/Saclay) and, as a part of this cooperation, began investigating the use of the Alliances numerical platform for near-field calculations in order to compare the overall approach and the calculated results. The article presents the comparison between these two codes for the silo-type repository that was considered in the Special Safety Analysis. The physical layout and characteristics of the repository are presented, and a hydraulic and transport model of the repository is developed and implemented in Alliances. Some analysis of sensitivity to mesh fineness and to simulation timestep has been performed and is also presented. The compared quantity is the output flux of radionuclides at the boundary of the model. Finally, the results from Hydrus2D and Alliances are compared and the differences and similarities are discussed.

  10. Development of the component models for the KALIMER safety analysis code SSC-K

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Young Min; Lee, Yong Bum; Chang, Won Pyo [Korea Atomic Energy Research Institute, Taejon (Korea)]

    1999-05-01

    The SSC-K code is intended to simulate system responses to operational transients or accidents of the pool-type KALIMER (Korea Advanced Liquid Metal Reactor). As a part of the SSC-K development task, some primary component models have been developed, and generalized correlations were recommended based on a technical review. Plant modules for the PSDRS and EMP were developed based on analytical models. The IHX model of SSC-L was modified to take into account the thermo-hydraulic characteristics of the pool. Correlations used for piping and in-core assemblies were reviewed, and user options were provided in SSC-K. The merged version of SSC-K with the PSDRE program was shown to be valid by a test run of ULOHS. The developed component models will be implemented into the SSC-K code, and options will be provided for the user to select proper correlations for the friction factor and heat transfer coefficients. 30 refs., 31 figs., 9 tabs. (Author)

  11. RELAP5-3D Code Includes Athena Features and Models

    Energy Technology Data Exchange (ETDEWEB)

    Richard A. Riemke; Cliff B. Davis; Richard R. Schultz

    2006-07-01

    Version 2.3 of the RELAP5-3D computer program includes all features and models previously available only in the ATHENA version of the code. These include the addition of new working fluids (i.e., ammonia, blood, carbon dioxide, glycerol, helium, hydrogen, lead-bismuth, lithium, lithium-lead, nitrogen, potassium, sodium, and sodium-potassium) and a magnetohydrodynamic model that expands the capability of the code to model many more thermal-hydraulic systems. In addition to the new working fluids along with the standard working fluid water, one or more noncondensable gases (e.g., air, argon, carbon dioxide, carbon monoxide, helium, hydrogen, krypton, nitrogen, oxygen, SF6, xenon) can be specified as part of the vapor/gas phase of the working fluid. These noncondensable gases were available in previous versions of RELAP5-3D. Recently four molten salts have been added as working fluids to RELAP5-3D Version 2.4, which has had a limited release. These molten salts will be in RELAP5-3D Version 2.5, which will have a general release like RELAP5-3D Version 2.3. Applications that use these new features and models are discussed in this paper.

  12. Edge Transport Modeling using the 3D EMC3-Eirene code on Tokamaks and Stellarators

    Science.gov (United States)

    Lore, J. D.; Ahn, J. W.; Briesemeister, A.; Ferraro, N.; Labombard, B.; McLean, A.; Reinke, M.; Shafer, M.; Terry, J.

    2015-11-01

    The fluid plasma edge transport code EMC3-Eirene has been applied to aid data interpretation and to improve understanding of the results of experiments with 3D effects on several tokamaks. These include applied and intrinsic 3D magnetic fields, 3D plasma facing components, and toroidally and poloidally localized heat and particle sources. On Alcator C-Mod, a series of experiments explored the impact of toroidally and poloidally localized impurity gas injection on core confinement and asymmetries in the divertor fluxes, with the differences between the asymmetry in L-mode and H-mode qualitatively reproduced in the simulations due to changes in the impurity ionization in the private flux region. Modeling of NSTX experiments on the effect of 3D fields on detachment matched the trend of a higher density at which detachment occurs when 3D fields are applied. On DIII-D, different magnetic field models were used in the simulation and compared against the 2D Thomson scattering diagnostic. In simulating each device, different aspects of the code model are tested, pointing to areas where the model must be further developed. The application to stellarator experiments will also be discussed. Work supported by U.S. DOE: DE-AC05-00OR22725, DE AC02-09CH11466, DE-FC02-99ER54512, and DE-FC02-04ER54698.

  13. Development of sump model for containment hydrogen distribution calculations using CFD code

    Energy Technology Data Exchange (ETDEWEB)

    Ravva, Srinivasa Rao, E-mail: srini@aerb.gov.in [Indian Institute of Technology-Bombay, Mumbai (India); Nuclear Safety Analysis Division, Atomic Energy Regulatory Board, Mumbai (India); Iyer, Kannan N. [Indian Institute of Technology-Bombay, Mumbai (India); Gaikwad, A.J. [Nuclear Safety Analysis Division, Atomic Energy Regulatory Board, Mumbai (India)

    2015-12-15

    Highlights: • Sump evaporation model was implemented in FLUENT using three different approaches. • Validated the implemented sump evaporation models against the TOSQAN facility. • It was found that predictions are in good agreement with the data. • A diffusion-based model would be able to predict both condensation and evaporation. - Abstract: Computational Fluid Dynamics (CFD) simulations are necessary for obtaining accurate predictions and local behaviour when carrying out containment hydrogen distribution studies. However, commercially available CFD codes do not have all the models necessary for hydrogen distribution analysis. One such model is the sump, or suppression pool, evaporation model. The water in the sump may evaporate during the accident progression and affect the mixture concentrations in the containment. Hence, it is imperative to study sump evaporation and its effect. Sump evaporation is modelled using three different approaches in the present work. The first approach deals with calculating the evaporation flow rate and sump liquid temperature and supplying these quantities as boundary conditions through user-defined functions. In this approach, the mean values of the domain are used. In the second approach, the mass, momentum, energy and species sources arising from the sump evaporation are added to the domain through user-defined functions. Cell values adjacent to the sump interface are used in this approach. Heat transfer between gas and liquid is calculated automatically by the code itself. In these two approaches, however, the evaporation rate was computed using an experimental correlation. In the third approach, the evaporation rate is directly estimated using a diffusion approximation. The performance of these three models is compared with the sump behaviour experiment conducted in the TOSQAN facility. Classification: K. Thermal hydraulics.
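    A minimal sketch of the third (diffusion-style) approach: the evaporation mass flux is written as a mass-transfer coefficient times the vapour-density difference between the pool surface and the bulk gas, and the result becomes mass and latent-heat sources for the gas space. All numbers and the mass-transfer coefficient below are assumed placeholders, not TOSQAN conditions or FLUENT settings.

        # m'' = h_m * (rho_v_sat(T_surface) - rho_v_bulk); a negative result means
        # condensation, which is why a diffusion-based form can handle both regimes.

        def sump_evaporation_rate(h_m, area, rho_v_sat_surface, rho_v_bulk):
            """Evaporated mass flow in kg/s from a pool of the given surface area."""
            flux = h_m * (rho_v_sat_surface - rho_v_bulk)   # kg/(m^2 s)
            return flux * area

        h_fg = 2.36e6            # latent heat of vaporisation, J/kg (roughly at 60 C)
        m_dot = sump_evaporation_rate(h_m=8.0e-3, area=3.0,
                                      rho_v_sat_surface=0.13, rho_v_bulk=0.05)
        q_latent = m_dot * h_fg  # heat removed from the pool / added to the gas space
        print(f"evaporation: {m_dot*1e3:.2f} g/s, latent heat: {q_latent/1e3:.1f} kW")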

  14. A self-organized internal models architecture for coding sensory-motor schemes

    Directory of Open Access Journals (Sweden)

    Esaú Escobar Juárez

    2016-04-01

    Full Text Available Cognitive robotics research draws inspiration from theories and models on cognition, as conceived by neuroscience or cognitive psychology, to investigate biologically plausible computational models in artificial agents. In this field, the theoretical framework of Grounded Cognition provides epistemological and methodological grounds for the computational modeling of cognition. It has been stressed in the literature that simulation, prediction, and multi-modal integration are key aspects of cognition and that computational architectures capable of putting them into play in a biologically plausible way are a necessity. Research in this direction has brought extensive empirical evidence suggesting that Internal Models are suitable mechanisms for sensory-motor integration. However, current Internal Models architectures show several drawbacks, mainly due to the lack of a unified substrate allowing for a true sensory-motor integration space, enabling flexible and scalable ways to model cognition under the embodiment hypothesis constraints. We propose the Self-Organized Internal Models Architecture (SOIMA), a computational cognitive architecture coded by means of a network of self-organized maps, implementing coupled internal models that allow modeling multi-modal sensory-motor schemes. Our approach integrally addresses the issues of current implementations of Internal Models. We discuss the design and features of the architecture, and provide empirical results on a humanoid robot that demonstrate the benefits and potentialities of the SOIMA concept for studying cognition in artificial agents.
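    The building block named here, the self-organized map, is compact enough to sketch on its own: a grid of weight vectors is pulled toward each sensory-motor sample, with a Gaussian neighbourhood around the best-matching unit. The coupling of several such maps into internal models, which is the actual SOIMA contribution, is not reproduced; this is only the underlying SOM update, with all parameters chosen arbitrarily.

        import numpy as np

        def train_som(data, grid_shape=(8, 8), epochs=30, lr0=0.5, sigma0=3.0, seed=0):
            """Train a 2-D self-organized map on data of shape (n_samples, n_features)."""
            rng = np.random.default_rng(seed)
            rows, cols = grid_shape
            weights = rng.normal(size=(rows, cols, data.shape[1]))
            gy, gx = np.mgrid[0:rows, 0:cols]          # grid coordinates for neighbourhoods
            n_steps = epochs * len(data)
            step = 0
            for _ in range(epochs):
                for x in rng.permutation(data):
                    frac = 1.0 - step / n_steps
                    lr = lr0 * frac                    # decaying learning rate
                    sigma = sigma0 * frac + 0.5        # shrinking neighbourhood
                    dist = np.linalg.norm(weights - x, axis=2)
                    by, bx = np.unravel_index(np.argmin(dist), dist.shape)
                    h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
                    weights += lr * h[..., None] * (x - weights)
                    step += 1
            return weights

        # Example: map a toy 3-D "sensory-motor" space onto the 8 x 8 grid
        samples = np.random.default_rng(1).uniform(size=(500, 3))
        som = train_som(samples)
        print(som.shape)    # (8, 8, 3): one prototype vector per map unit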

  15. Code and Solution Verification of 3D Numerical Modeling of Flow in the Gust Erosion Chamber

    Science.gov (United States)

    Yuen, A.; Bombardelli, F. A.

    2014-12-01

    Erosion microcosms are devices commonly used to investigate the erosion and transport characteristics of sediments at the bed of rivers, lakes, or estuaries. In order to understand the results these devices provide, the bed shear stress and flow field need to be accurately described. In this research, the UMCES Gust Erosion Microcosm System (U-GEMS) is numerically modeled using the Finite Volume Method. The primary aims are to simulate the bed shear stress distribution at the surface of the sediment core/bottom of the microcosm, and to validate that the U-GEMS produces uniform bed shear stress at the bottom of the microcosm. The mathematical model equations are solved on a Cartesian non-uniform grid. Multiple numerical runs were developed with different input conditions and configurations. Prior to developing the U-GEMS model, the General Moving Objects (GMO) model and different momentum algorithms in the code were verified. Code verification of these solvers was done by simulating the flow inside a top-wall-driven square cavity on different mesh sizes to obtain the order of convergence. The GMO model was used to simulate the top wall in the top-wall-driven square cavity as well as the rotating disk in the U-GEMS. Components simulated with the GMO model were rigid bodies that could have any type of motion. In addition, cross-verification was conducted by comparing results with the numerical results of Ghia et al. (1982), and good agreement was found. Next, CFD results were validated by simulating the flow within the conventional microcosm system without suction and injection. Good agreement was found when compared with the experimental results of Khalili et al. (2008). After the ability of the CFD solver was proven through the above code verification steps, the model was utilized to simulate the U-GEMS. The solution was verified via a classic mesh convergence study on four consecutive mesh sizes; in addition, the Grid Convergence Index (GCI) was calculated and based on
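    The grid-convergence part of this verification can be made concrete. For three solutions f1 (fine), f2 (medium) and f3 (coarse) obtained with a constant refinement ratio r, the observed order p and the Grid Convergence Index follow the usual Roache-style formulas shown below; the input values are placeholders, not results from this study.

        import math

        def observed_order(f1, f2, f3, r):
            """Observed order of convergence from three systematically refined grids
            (f1 = finest), assuming a constant refinement ratio r."""
            return math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)

        def gci_fine(f1, f2, r, p, fs=1.25):
            """Grid Convergence Index (fraction) on the fine grid; Fs = 1.25 is the
            safety factor commonly used when three or more grids are available."""
            e21 = abs((f1 - f2) / f1)        # relative change, fine vs. medium grid
            return fs * e21 / (r**p - 1.0)

        # Placeholder bed-shear-stress values from three meshes, refinement ratio 2
        f1, f2, f3, r = 0.412, 0.405, 0.383, 2.0
        p = observed_order(f1, f2, f3, r)
        print(f"observed order p = {p:.2f}, GCI_fine = {100 * gci_fine(f1, f2, r, p):.2f}%")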

  16. Compressor Modeling for Transient Analysis of Supercritical CO2 Brayton Cycle by using MARS code

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo Hyun; Park, Hyun Sun; Kim, Tae Ho; Kwon, Jin Gyu [POSTECH, Pohang (Korea, Republic of); Bae, Sung Won; Cha, Jae Eun [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    In this study, SCIEL (Supercritical CO{sub 2} Integral Experimental Loop) was chosen as the reference loop and the MARS code as the transient cycle analysis code. As a result, a compressor homologous curve was developed from the SCIEL experimental data, and a MARS analysis was performed and is presented in the paper. These advantages make the SCO{sub 2}BC a promising next-generation power cycle. The high thermal efficiency comes from operating the compressor near the critical point, where the properties of SCO{sub 2} approach those of the liquid phase, drastically lowering the compression work. However, this advantage requires precise and smooth operation of the cycle near the critical point, which is one of the key technical challenges. The experimental data were taken at steady state at a compressor rotating speed of 25,000 rpm; the time of 3133 seconds was the starting point of the steady state. The numerical solutions were well matched with the experimental data. The mass flow rate from the MARS analysis of approximately 0.7 kg/s was close to the experimental result of 0.9 kg/s. It is expected that the difference comes from measurement error in the experiment. In this study, the compressor model was developed and implemented in MARS to study the transient analysis of the SCO{sub 2}BC in SCIEL. We obtained the homologous curves for the SCIEL compressor using experimental data and performed nodalization of the compressor model using the MARS code. In conclusion, it was found that the numerical solutions from the MARS model were well matched with the experimental data.

  17. ASTEC V2 severe accident integral code: Fission product modelling and validation

    Energy Technology Data Exchange (ETDEWEB)

    Cantrel, L., E-mail: laurent.cantrel@irsn.fr; Cousin, F.; Bosland, L.; Chevalier-Jabet, K.; Marchetto, C.

    2014-06-01

    One main goal of the severe accident integral code ASTEC V2, jointly developed for more than 15 years by IRSN and GRS, is to simulate the overall behaviour of fission products (FP) in a damaged nuclear facility. ASTEC applications include source term determinations, level 2 Probabilistic Safety Assessment (PSA2) studies including the determination of uncertainties, accident management studies and physical analyses of FP experiments to improve the understanding of the phenomenology. ASTEC is a modular code, and models of a part of the phenomenology are implemented in each module: the release of FPs and structural materials from degraded fuel in the ELSA module; the transport through the reactor coolant system, approximated as a sequence of control volumes, in the SOPHAEROS module; and the radiochemistry inside the containment building in the IODE module. Three other modules, CPA, ISODOP and DOSE, allow computing, respectively, the deposition rate of aerosols inside the containment, the activities of the isotopes as a function of time, and the gaseous dose rate, which is needed to model radiochemistry in the gaseous phase. In ELSA, release models are semi-mechanistic and have been validated against a wide range of experimental data, notably the VERCORS experiments. For SOPHAEROS, the models can be divided into two parts: vapour phase phenomena and aerosol phase phenomena. For IODE, iodine and ruthenium chemistry are modelled using a semi-mechanistic approach; these FPs can form volatile species and are particularly important in terms of potential radiological consequences. The models in these three modules are based on a wide experimental database, resulting in large part from international programmes, and they are considered to be at the state of the art of R&D knowledge. This paper illustrates some FP modelling capabilities of ASTEC, and computed values are compared with experimental results that form part of the validation matrix.

  18. User Manual for ATILA, a Finite-Element Code for Modeling Piezoelectric Transducers.

    Science.gov (United States)

    1987-09-01

    STUPFEL, A. LAVIE, J.N. DECARPIGNY, "Implantation dans le code ATILA du couplage éléments finis et équations intégrales", G.E.R.D.S.M., Convention C... In a large part, automatically obtained, as explained in chapter 6. A detailed check of the data file can be performed by running a specific program... page displays a simple flow chart of an ATILA finite element modelling. A large part of this flow chart will be reproduced in a more detailed and

  19. A Perceptual Audio Representation for Low Rate Coding Based on Sines+Noise Modeling

    Institute of Scientific and Technical Information of China (English)

    AL-Moussawy Raed; YIN Junxun; HUANG Jiancheng

    2003-01-01

    This work is concerned with the development and optimization of an efficient (allowing high compression ratios) and flexible (allowing scalability) signal model for perceptual audio coding at low bit rates. A novel, complementary two-part model for audio consisting of sines + noise (SN) is presented. The SN model uses a sinusoidal model that explicitly takes the human hearing system into account by using psychoacoustically based matching pursuits. This technique iteratively extracts sinusoidal components according to their perceptually important signal-to-mask ratio (SMR). The second modeling stage is for noise-like components. The SN model uses the equivalent rectangular bandwidth (ERB) noise model, which is based on the observation that for noise-like signals, the energy in the ERBs describes the underlying signal with perceptual accuracy. The SN model has an intuitive interpretation in terms of the discrete Fourier transform (DFT) and can be efficiently implemented via the fast Fourier transform (FFT). Informal listening tests demonstrate that the synthesized (sines + noise) signal is almost perceptually identical to the original. A compression ratio of typically 16 to 19.5 can readily be reached with the SN model.
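    A stripped-down version of the sinusoidal part of such a model is iterative peak picking on an FFT frame: find the strongest spectral peak, synthesise and subtract that sinusoid, and repeat, leaving a residual to be treated as the noise part. The perceptual SMR weighting that the abstract emphasises is deliberately omitted here; this sketch selects peaks by energy only.

        import numpy as np

        def extract_sines(frame, sr, n_partials=5):
            """Greedily extract n_partials (freq, amp, phase) triples from one frame;
            what remains approximates the 'noise' part of a sines + noise model."""
            residual = frame.astype(float).copy()
            n = len(residual)
            window = np.hanning(n)
            partials = []
            for _ in range(n_partials):
                spec = np.fft.rfft(residual * window)
                k = np.argmax(np.abs(spec[1:])) + 1          # strongest non-DC bin
                freq = k * sr / n
                amp = 2.0 * np.abs(spec[k]) / np.sum(window)
                phase = np.angle(spec[k])
                t = np.arange(n) / sr
                residual -= amp * np.cos(2 * np.pi * freq * t + phase)
                partials.append((freq, amp, phase))
            return partials, residual

        # Example: two tones plus a little noise, one 1024-sample frame at 16 kHz
        sr, n = 16000, 1024
        t = np.arange(n) / sr
        frame = 0.8 * np.sin(2 * np.pi * 500 * t) + 0.3 * np.sin(2 * np.pi * 1250 * t)
        frame += 0.02 * np.random.default_rng(0).normal(size=n)
        partials, noise_part = extract_sines(frame, sr, n_partials=2)
        print([round(f) for f, _, _ in partials])            # approximately [500, 1250]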

  20. Modelling of the Gadolinium Fuel Test IFA-681 using the BISON Code

    Energy Technology Data Exchange (ETDEWEB)

    Pastore, Giovanni [Idaho National Laboratory]; Hales, Jason Dean [Idaho National Laboratory]; Novascone, Stephen Rhead [Idaho National Laboratory]; Spencer, Benjamin Whiting [Idaho National Laboratory]; Williamson, Richard L [Idaho National Laboratory]

    2016-05-01

    In this work, the application of Idaho National Laboratory’s fuel performance code BISON to the modelling of fuel rods from the Halden IFA-681 gadolinium fuel test is presented. First, an overview is given of the BISON models, focusing on UO2/UO2-Gd2O3 fuel and Zircaloy cladding. Then, BISON analyses of selected fuel rods from the IFA-681 test are performed. For the first time in a BISON application to integral fuel rod simulations, the analysis is informed by detailed neutronics calculations in order to accurately capture the radial power profile throughout the fuel, which is strongly affected by the complex evolution of the absorber Gd isotopes. In particular, radial power profiles calculated at the IFE–Halden Reactor Project with the HELIOS code are used. The work has been carried out in the frame of the collaboration between Idaho National Laboratory and the Halden Reactor Project. Some slides have been added as an appendix to present the newly developed PolyPole-1 algorithm for modeling of intra-granular fission gas release.

  1. FOI-PERFECT code: 3D relaxation MHD modeling and Applications

    Science.gov (United States)

    Wang, Gang-Hua; Duan, Shu-Chao; Computational Physics Team

    2016-10-01

    One of the challenges in numerical simulations of electromagnetically driven high energy density (HED) systems is the existence of a vacuum region. The FOI-PERFECT code adopts a full relaxation magnetohydrodynamic (MHD) model. The electromagnetic part of the conventional model adopts the magnetic diffusion approximation, and the vacuum region is approximated by artificially increasing the resistivity. On the one hand, the phase/group velocity is superluminal and hence non-physical in the vacuum region; on the other hand, a diffusion equation with a large diffusion coefficient can only be solved by an implicit scheme, which is difficult to parallelize and to converge. A better alternative is to solve the full electromagnetic equations. Maxwell's equations coupled with the constitutive equation, generalized Ohm's law, constitute a relaxation model. The dispersion relation is given to show its transition from electromagnetic propagation in vacuum to resistive MHD in plasma in a natural way. The phase and group velocities are finite for this system. An improved time stepping is adopted to give full third-order convergence in the time domain without the stiff relaxation term restriction, and it is therefore convenient for explicit and parallel computations. Some numerical results of the FOI-PERFECT code are also given. Project supported by the National Natural Science Foundation of China (Grant No. 11571293) and the Foundation of China Academy of Engineering Physics (Grant No. 2015B0201023).

  2. Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian

    2017-02-01

    The THOR neutral particle transport code enables simulation of complex geometries for various problems from reactor simulations to nuclear non-proliferation. It is undergoing a thorough V&V requiring computational efficiency. This has motivated various improvements including angular parallelization, outer iteration acceleration, and development of peripheral tools. For guiding future improvements to the code’s efficiency, better characterization of its parallel performance is useful. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL’s Falcon HPC, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functional characterization of the per-cell/angle/group runtime of each major code component. After evaluating several possible sources of variability, this resulted in a communication model and a parallel portion model. The former’s accuracy is bounded by the variability of communication on Falcon while the latter has an error on the order of 1%.
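
    A parallel performance model of the kind described can be sketched as a simple analytic function: local compute work scales with the per-cell/angle/group runtime divided by the number of ranks, while communication follows a latency-plus-bandwidth law. All constants below are illustrative placeholders, not THOR's measured coefficients.

```python
def predicted_runtime(cells, angles, groups, ranks, sweeps=200,
                      t_cell=2.0e-7,      # s per cell-angle-group update (illustrative)
                      latency=2.0e-6,     # s per message (illustrative)
                      bandwidth=5.0e9,    # bytes/s (illustrative)
                      msgs_per_sweep=6, msg_bytes=64 * 1024):
    """Toy parallel performance model: compute portion + communication portion."""
    compute = sweeps * cells * angles * groups * t_cell / ranks
    comm = sweeps * msgs_per_sweep * (latency + msg_bytes / bandwidth)
    return compute + comm

# Example: compare predicted runtimes when scaling from 64 to 256 ranks.
for ranks in (64, 256):
    print(ranks, predicted_runtime(cells=2_000_000, angles=48, groups=8, ranks=ranks))
```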

  3. SNR and BER Models and the Simulation for BER Performance of Selected Spectral Amplitude Codes for OCDMA

    Directory of Open Access Journals (Sweden)

    Abdul Latif Memon

    2014-01-01

    Full Text Available Many encoding schemes are used in OCDMA (Optical Code Division Multiple Access) networks, but SAC (Spectral Amplitude Codes) is widely used. It is considered an effective arrangement to eliminate the dominant noise called MAI (Multiple Access Interference). Various codes are studied and evaluated with respect to their performance against three noises, namely shot noise, thermal noise and PIIN (Phase Induced Intensity Noise). Various mathematical models for SNR (Signal to Noise Ratio) and BER (Bit Error Rate) are discussed, where the SNRs are calculated and the BERs are computed under a Gaussian distribution assumption. After analyzing the results mathematically, it is concluded that ZCC (Zero Cross Correlation Code) performs better than the other selected SAC codes and can serve a larger number of active users than the other codes do. At various receiver power levels, the analysis points out that RDC (Random Diagonal Code) also performs better than the other codes. For the power interval between -10 and -20 dBm, the performance of RDC is better than that of ZCC. Their low BER values suggest that these codes should be part of an efficient and cost-effective OCDM access network in the future.
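
    The Gaussian-approximation step mentioned above is often written as BER = (1/2) erfc(sqrt(SNR/8)) in SAC-OCDMA analyses; whether the paper uses exactly this form is an assumption, and the SNR-versus-power relation in the example below is a placeholder.

```python
import math

def ber_from_snr(snr_linear):
    """Gaussian-approximation bit error rate: BER = 0.5 * erfc(sqrt(SNR / 8))."""
    return 0.5 * math.erfc(math.sqrt(snr_linear / 8.0))

# Example: sweep received power with a purely illustrative SNR(power) relation.
for p_dbm in range(-20, -9):
    snr = 10 ** ((p_dbm + 45) / 10)   # hypothetical mapping from power to SNR
    print(f"{p_dbm} dBm -> BER {ber_from_snr(snr):.2e}")
```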

  4. Samovar: a thermomechanical code for modeling of geodynamic processes in the lithosphere-application to basin evolution

    DEFF Research Database (Denmark)

    Elesin, Y; Gerya, T; Artemieva, Irina; Thybo, Hans

    2010-01-01

    We present a new 2D finite difference code, Samovar, for high-resolution numerical modeling of complex geodynamic processes. Examples are collision of lithospheric plates (including mountain building and subduction) and lithosphere extension (including formation of sedimentary basins, regions of extended crust, and rift zones). The code models deformation of the lithosphere with viscoelastoplastic rheology, including erosion/sedimentation processes and formation of shear zones in areas of high stresses. It also models steady-state and transient conductive and advective thermal processes including partial melting and magma transport in the lithosphere. The thermal and mechanical parts of the code are tested for a series of physical problems with analytical solutions. We apply the code to geodynamic modeling by examining numerically the processes of lithosphere extension and basin formation...

  5. Defmod - Parallel multiphysics finite element code for modeling crustal deformation during the earthquake/rifting cycle

    CERN Document Server

    Ali, S Tabrez

    2014-01-01

    In this article, we present Defmod, a fully unstructured, two- or three-dimensional, parallel finite element code for modeling crustal deformation over time scales ranging from milliseconds to thousands of years. Defmod can simulate deformation due to all major processes that make up the earthquake/rifting cycle, in non-homogeneous media. Specifically, it can be used to model deformation due to dynamic and quasistatic processes such as co-seismic slip or dike intrusion(s), poroelastic rebound due to fluid flow, and post-seismic or post-rifting viscoelastic relaxation. It can also be used to model deformation due to processes such as post-glacial rebound, hydrological (un)loading, injection and/or withdrawal of compressible or incompressible fluids from subsurface reservoirs, etc. Defmod is written in Fortran 95 and uses PETSc's parallel sparse data structures and implicit solvers. Problems can be solved using (stabilized) linear triangular, quadrilateral, tetrahedral or hexahedral elements on shared or distributed...

  6. Implementation of an offset-dipole magnetic field in a pulsar modelling code

    CERN Document Server

    Breed, M; Harding, A K; Johnson, T J

    2014-01-01

    The light curves of gamma-ray pulsars detected by the Fermi Large Area Telescope show great variety in profile shape and position relative to their radio profiles. Such diversity hints at distinct underlying magnetospheric and/or emission geometries for the individual pulsars. We implemented an offset-dipole magnetic field in an existing geometric pulsar modelling code which already includes static and retarded vacuum dipole fields. In our model, this offset is characterised by a parameter epsilon (with epsilon = 0 corresponding to the static dipole case). We constructed sky maps and light curves for several pulsar parameters and magnetic fields, studying the effect of an offset dipole on the resulting light curves. A standard two-pole caustic emission geometry was used. As an application, we compared our model light curves with Fermi data for the bright Vela pulsar.

  7. Exact Modeling of the Performance of Random Linear Network Coding in Finite-buffer Networks

    CERN Document Server

    Torabkhani, Nima; Beirami, Ahmad; Fekri, Faramarz

    2011-01-01

    In this paper, we present an exact model for the analysis of the performance of Random Linear Network Coding (RLNC) in wired erasure networks with finite buffers. In such networks, packets are delayed due to either random link erasures or blocking by full buffers. We assert that, because of RLNC, the contents of the buffers have dependencies which cannot be captured directly using classical queueing-theoretic models. We model the performance of the network using Markov chains by a careful derivation of the buffer occupancy states and their transition rules. We verify by simulations that the proposed framework results in an accurate measure of the network throughput offered by RLNC. Further, we introduce a class of acyclic networks for which the number of state variables is significantly reduced.
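
    The Markov-chain idea can be illustrated with a much simpler toy than the paper's model: a single finite buffer with at most one arrival per slot and a departure that succeeds only if the outgoing link is not erased. The sketch below builds the transition matrix, solves for the stationary distribution, and estimates throughput; it does not reproduce the RLNC-specific state dependencies analysed in the paper.

```python
import numpy as np

def buffer_chain_throughput(buf_size, p_arrival, p_erasure):
    """Toy single-hop finite-buffer chain (illustrative, not the paper's model)."""
    n = buf_size + 1                      # states: 0..buf_size packets stored
    P = np.zeros((n, n))
    p_succ = 1.0 - p_erasure
    for s in range(n):
        arrive = p_arrival if s < buf_size else 0.0   # arrivals blocked when full
        depart = p_succ if s > 0 else 0.0             # departures need a packet
        P[s, min(s + 1, buf_size)] += arrive * (1 - depart)
        P[s, max(s - 1, 0)] += depart * (1 - arrive)
        P[s, s] += 1 - arrive * (1 - depart) - depart * (1 - arrive)
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi /= pi.sum()
    throughput = p_succ * (1.0 - pi[0])   # packets per slot leaving the buffer
    return pi, throughput

print(buffer_chain_throughput(buf_size=8, p_arrival=0.6, p_erasure=0.3))
```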

  8. LHC-GCS a model-driven approach for automatic PLC and SCADA code generation

    CERN Document Server

    Thomas, Geraldine; Barillère, Renaud; Cabaret, Sebastien; Kulman, Nikolay; Pons, Xavier; Rochez, Jacques

    2005-01-01

    The LHC experiments’ Gas Control System (LHC GCS) project [1] aims to provide the four LHC experiments (ALICE, ATLAS, CMS and LHCb) with control for their 23 gas systems. To ease the production and maintenance of 23 control systems, a model-driven approach has been adopted to generate automatically the code for the Programmable Logic Controllers (PLCs) and for the Supervision Control And Data Acquisition (SCADA) systems. The first milestones of the project have been achieved. The LHC GCS framework [4] and the generation tools have been produced. A first control application has actually been generated and is in production, and a second is in preparation. This paper describes the principle and the architecture of the model-driven solution. It will in particular detail how the model-driven solution fits with the LHC GCS framework and with the UNICOS [5] data-driven tools.

  9. Offshore Code Comparison Collaboration within IEA Wind Task 23: Phase IV Results Regarding Floating Wind Turbine Modeling; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Jonkman, J.; Larsen, T.; Hansen, A.; Nygaard, T.; Maus, K.; Karimirad, M.; Gao, Z.; Moan, T.; Fylling, I.

    2010-04-01

    Offshore wind turbines are designed and analyzed using comprehensive simulation codes that account for the coupled dynamics of the wind inflow, aerodynamics, elasticity, and controls of the turbine, along with the incident waves, sea current, hydrodynamics, and foundation dynamics of the support structure. This paper describes the latest findings of the code-to-code verification activities of the Offshore Code Comparison Collaboration, which operates under Subtask 2 of the International Energy Agency Wind Task 23. In the latest phase of the project, participants used an assortment of codes to model the coupled dynamic response of a 5-MW wind turbine installed on a floating spar buoy in 320 m of water. Code predictions were compared from load-case simulations selected to test different model features. The comparisons have resulted in a greater understanding of offshore floating wind turbine dynamics and modeling techniques, and better knowledge of the validity of various approximations. The lessons learned from this exercise have improved the participants' codes, thus improving the standard of offshore wind turbine modeling.

  10. In the transmission of information, the great potential of model-based coding with the SP theory of intelligence

    OpenAIRE

    Wolff, J Gerard

    2016-01-01

    Model-based coding, described by John Pierce in 1961, has great potential to reduce the volume of information that needs to be transmitted in moving big data, without loss of information, from one place to another, or in lossless communications via the internet. Compared with ordinary compression methods, this potential advantage of model-based coding in the transmission of data arises from the fact that both the transmitter ("Alice") and the receiver ("Bob") are equipped with a grammar for t...

  11. Samovar: a thermomechanical code for modeling of geodynamic processes in the lithosphere-application to basin evolution

    OpenAIRE

    Elesin, Y; Gerya, T.; Artemieva, Irina; Thybo, Hans

    2010-01-01

    We present a new 2D finite difference code, Samovar, for high-resolution numerical modeling of complex geodynamic processes. Examples are collision of lithospheric plates (including mountain building and subduction) and lithosphere extension (including formation of sedimentary basins, regions of extended crust, and rift zones). The code models deformation of the lithosphere with viscoelastoplastic rheology, including erosion/sedimentation processes and formation of shear zones in areas of hig...

  12. Validation of a Monte Carlo method using a joint PDF of the composition and an ILDM kinetic model for the prediction of turbulent flames

    Energy Technology Data Exchange (ETDEWEB)

    Zurbach, S.; Garreton, D.; Kanniche, M.

    1997-09-01

    The resolution of the joint probability density function (PDF) of the composition and its application to the calculation of turbulent diffusion flames is presented. The numerical method is based on an Eulerian Monte Carlo solver coupled with the CFD code Hades; an ILDM kinetic model allows the calculation of the chemical source terms. Two configurations are studied: the Masri-Bilger-Dibble flame and the Delft flame. The first turbulent diffusion flame is close to extinction and is a good test for the prediction of the interactions between the turbulence and the chemical scales. The second one enables the validation of the prediction of an intermediate species with a super-equilibrium concentration, the OH radical. (author) 9 refs.

  13. Analysis of different containment models for IRIS small break LOCA, using GOTHIC and RELAP5 codes

    Energy Technology Data Exchange (ETDEWEB)

    Papini, Davide, E-mail: davide.papini@mail.polimi.it [Department of Energy, CeSNEF - Nuclear Engineering Division, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Grgic, Davor [Department of Power Systems, FER, University of Zagreb, Unska 3, 10000 Zagreb (Croatia); Cammi, Antonio; Ricotti, Marco E. [Department of Energy, CeSNEF - Nuclear Engineering Division, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy)

    2011-04-15

    Advanced nuclear water reactors rely on containment behaviour in the realization of some of their passive safety functions. Steam condensation on containment walls, where non-condensable gas effects are significant, is an important feature of the new passive containment concepts, like the AP600/1000 ones. In this work the international reactor innovative and secure (IRIS) was taken as reference, and the relevant condensation phenomena involved within its containment were investigated with different computational tools. In particular, the IRIS containment response to a small break LOCA (SBLOCA) was calculated with the GOTHIC and RELAP5 codes. A simplified model of the IRIS containment drywell was implemented with RELAP5 according to a sliced approach, based on the two-pipe-with-junction concept, while it was addressed with GOTHIC using several modelling options regarding both heat transfer correlations and volume and thermal structure nodalization. The influence on containment behaviour prediction was investigated in terms of drywell temperature and pressure response, heat transfer coefficient (HTC) and steam volume fraction distribution, and internal recirculating mass flow rate. The objective of the paper is a preliminary comparison of the capability of the two codes in modelling the same postulated accident, and thus a check of the results obtained with RELAP5 when applied in a situation not covered by its validation matrix (comprising SBLOCA and to some extent LBLOCA transients, but not explicitly the modelling of large dry containment volumes). The option to include or exclude droplets in the fluid mass flow discharged to the containment was the most influential parameter for the GOTHIC simulations. Despite some drawbacks, due, e.g., to a marked overestimation of internal natural recirculation, RELAP5 confirmed its capability to satisfactorily model the basic processes in the IRIS containment following a SBLOCA.

  14. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    Directory of Open Access Journals (Sweden)

    Chandranath R. N. Athaudage

    2003-09-01

    Full Text Available A dynamic programming-based optimization strategy for a temporal decomposition (TD model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%–60% compression of speech spectral information with negligible degradation in the decoded speech quality.

  15. EXTENSION OF THE NUCLEAR REACTION MODEL CODE EMPIRE TO ACTINIDES NUCLEAR DATA EVALUATION.

    Energy Technology Data Exchange (ETDEWEB)

    CAPOTE,R.; SIN, M.; TRKOV, A.; HERMAN, M.; CARLSON, B.V.; OBLOZINSKY, P.

    2007-04-22

    Recent extensions and improvements of the EMPIRE code system are outlined. They add new capabilities to the code, such as prompt fission neutron spectra calculations using Hauser-Feshbach plus pre-equilibrium pre-fission spectra, cross section covariance matrix calculations by Monte Carlo method, fitting of optical model parameters, extended set of optical model potentials including new dispersive coupled channel potentials, parity-dependent level densities and transmission through numerically defined fission barriers. These features, along with improved and validated ENDF formatting, exclusive/inclusive spectra, and recoils make the current EMPIRE release a complete and well validated tool for evaluation of nuclear data at incident energies above the resonance region. The current EMPIRE release has been used in evaluations of neutron induced reaction files for {sup 232}Th and {sup 231,233}Pa nuclei in the fast neutron region at IAEA. Triple-humped fission barriers and exclusive pre-fission neutron spectra were considered for the fission data evaluation. Total, fission, capture and neutron emission cross section, average resonance parameters and angular distributions of neutron scattering are in excellent agreement with the available experimental data.

  16. An Optimization Model for Design of Asphalt Pavements Based on IHAP Code Number 234

    Directory of Open Access Journals (Sweden)

    Ali Reza Ghanizadeh

    2016-01-01

    Full Text Available Pavement construction is one of the most costly parts of transportation infrastructures. Incommensurate design and construction of pavements, in addition to the loss of the initial investment, would impose indirect costs on road users and reduce road safety. This paper aims to propose an optimization model to determine the optimal configuration as well as the optimum thickness of the different pavement layers based on the Iran Highway Asphalt Paving Code Number 234 (IHAP Code 234). After developing the optimization model, the optimum thickness of pavement layers for secondary rural roads, major rural roads, and freeways was determined based on the recommended prices in the “Basic Price List for Road, Runway and Railway” of Iran in 2015, and several charts were developed to determine the optimum thickness of pavement layers, including asphalt concrete, granular base, and granular subbase, with respect to road classification, design traffic, and resilient modulus of the subgrade. The design charts confirm that in the current situation (material prices in 2015), application of an asphalt-treated layer in the pavement structure is not cost effective. It was also shown that, with increasing strength of the subgrade soil, the subbase layer may be removed from the optimum pavement structure.

  17. Holographic quantum error-correcting codes: Toy models for the bulk/boundary correspondence

    CERN Document Server

    Pastawski, Fernando; Harlow, Daniel; Preskill, John

    2015-01-01

    We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an exact isometry from bulk operators to boundary operators. The entire tensor network is a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom, respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed by Almheiri et al. in arXiv:1411.70...

  18. Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP

    Science.gov (United States)

    Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan

    2016-04-01

    Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may, however, have process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess how these fixed values restrict the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by the given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options; 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff, as well as their component fluxes, were evaluated. These sensitivities were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the
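
    The Sobol' analysis step can be sketched with a small variance-based example, assuming the SALib Python library (the abstract does not name the tool actually used) and a toy stand-in for a land surface model; the parameter names and bounds are placeholders.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

def toy_model(x):
    """Placeholder for a land-surface-model run returning one output flux."""
    soil_resistance, snow_albedo, root_depth = x
    return np.exp(-soil_resistance) + 0.5 * snow_albedo + 0.1 * root_depth

problem = {
    "num_vars": 3,
    "names": ["soil_resistance", "snow_albedo", "root_depth"],  # illustrative
    "bounds": [[0.1, 5.0], [0.3, 0.9], [0.1, 2.0]],
}

X = saltelli.sample(problem, 1024)             # Saltelli sampling scheme
Y = np.array([toy_model(x) for x in X])
Si = sobol.analyze(problem, Y)                 # first-order and total-order indices
print(dict(zip(problem["names"], Si["S1"])))
print(dict(zip(problem["names"], Si["ST"])))
```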

  19. Model of U3Si2 Fuel System using BISON Fuel Code

    Energy Technology Data Exchange (ETDEWEB)

    K. E. Metzger; T. W. Knight; R. L. Williamson

    2014-04-01

    This research considers the proposed advanced fuel system: U3Si2 combined with an advanced cladding. U3Si2 has a number of advantageous thermophysical properties, which motivate its use as an accident tolerant fuel. This preliminary model evaluates the behavior of U3Si2, using available thermophysical data to predict the cladding and fuel pellet temperature and stress with the fuel performance code BISON. The preliminary results obtained from the U3Si2 fuel model describe the mechanism of Pellet-Clad Mechanical Interaction for this system, while more extensive testing, including creep testing of U3Si2, is planned to improve the understanding of the thermophysical properties needed for predicting fuel performance.

  20. SurfKin: an ab initio kinetic code for modeling surface reactions.

    Science.gov (United States)

    Le, Thong Nguyen-Minh; Liu, Bin; Huynh, Lam K

    2014-10-05

    In this article, we describe a C/C++ program called SurfKin (Surface Kinetics) to construct microkinetic mechanisms for modeling gas-surface reactions. Thermodynamic properties of reaction species are estimated based on density functional theory calculations and statistical mechanics. Rate constants for elementary steps (including adsorption, desorption, and chemical reactions on surfaces) are calculated using the classical collision theory and transition state theory. Methane decomposition and water-gas shift reaction on Ni(111) surface were chosen as test cases to validate the code implementations. The good agreement with literature data suggests this is a powerful tool to facilitate the analysis of complex reactions on surfaces, and thus it helps to effectively construct detailed microkinetic mechanisms for such surface reactions. SurfKin also opens a possibility for designing nanoscale model catalysts.
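
    The transition-state-theory step mentioned above reduces, in its simplest form, to k(T) = (kB*T/h) * (q_TS/q_reactant) * exp(-E0/(kB*T)). The sketch below evaluates this generic expression with the partition-function ratio set to one and an illustrative barrier; it is not SurfKin's implementation.

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s

def tst_rate(temperature_K, barrier_J, q_ratio=1.0):
    """Transition-state-theory rate constant (generic form)."""
    return (KB * temperature_K / H) * q_ratio * math.exp(-barrier_J / (KB * temperature_K))

# Example: a 0.9 eV barrier at 800 K (illustrative values only).
print(tst_rate(800.0, 0.9 * 1.602176634e-19))
```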

  1. New Source Term Model for the RESRAD-OFFSITE Code Version 3

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Charley [Argonne National Lab. (ANL), Argonne, IL (United States); Gnanapragasam, Emmanuel [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, Jing-Jy [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, Sunita [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Shih-Yew [Argonne National Lab. (ANL), Argonne, IL (United States)

    2013-06-01

    This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.
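
    The three release options can be sketched as simple expressions, under the stated assumptions that release is proportional to inventory (first order), that a distribution coefficient partitions the solid and aqueous phases (equilibrium desorption), and that a constant fraction is released per interval (uniform). The formulas and parameter names below are generic illustrations, not the RESRAD-OFFSITE equations themselves.

```python
import math

def first_order_release(inventory, leach_rate, dt):
    """First-order release: the amount released over dt is proportional to inventory."""
    released = inventory * (1.0 - math.exp(-leach_rate * dt))
    return released, inventory - released

def equilibrium_desorption_conc(inventory, kd, water_volume_L, soil_mass_kg):
    """Equilibrium desorption: Kd (L/kg) partitions the radionuclide between
    the solid and aqueous phases; returns the aqueous concentration."""
    # inventory = C_water * V_water + Kd * C_water * M_soil
    return inventory / (water_volume_L + kd * soil_mass_kg)

def uniform_release(initial_inventory, release_duration, dt):
    """Uniform release: a constant fraction of the initial inventory per interval."""
    return initial_inventory * min(dt / release_duration, 1.0)
```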

  2. Contribution to the analysis of dileptons production reactions in proton-proton collision with HADES; Contribution a l'analyse de reactions de production de dileptons en collision proton-proton avec HADES

    Energy Technology Data Exchange (ETDEWEB)

    Moriniere, E

    2008-03-15

    The most recent analyses of dilepton spectra produced in heavy-ion collisions have shown the need for a precise knowledge of all dilepton production channels. The experimental HADES facility, installed on the GSI accelerator site, is appropriate for that goal. Thus, the Dalitz decay branching ratio of the {delta} resonance ({delta} {yields} Ne{sup +}e{sup -}), which has never been measured, is studied in this work. Moreover, the pp {yields} p{delta}{sup +} {yields} ppe{sup +}e{sup -} reaction could provide some information about the internal structure of the resonance and, more precisely, about the electromagnetic N-{delta} transition form factors. The analysis of simulations shows the feasibility of this experiment and estimates the counting yield as well as the signal-over-background ratio. This analysis also shows the great importance of the momentum resolution of the detector for the success of this experiment. The momentum resolution must therefore be investigated. In this work, an attempt to identify the most important contributions to the measured resolution is presented. The calibration step, which provides the relation between electronic time and physical time, the detector alignment (global, relative or internal), as well as the tracking method are studied. Some methods for improving these different contributions are proposed in order to reach the optimal resolution. (author)

  3. A physiologically-inspired model of numerical classification based on graded stimulus coding

    Directory of Open Access Journals (Sweden)

    John Pearson

    2010-01-01

    Full Text Available In most natural decision contexts, the process of selecting among competing actions takes place in the presence of informative, but potentially ambiguous, stimuli. Decisions about magnitudes—quantities like time, length, and brightness that are linearly ordered—constitute an important subclass of such decisions. It has long been known that perceptual judgments about such quantities obey Weber’s Law, wherein the just-noticeable difference in a magnitude is proportional to the magnitude itself. Current physiologically inspired models of numerical classification assume discriminations are made via a labeled line code of neurons selectively tuned for numerosity, a pattern observed in the firing rates of neurons in the ventral intraparietal area (VIP of the macaque. By contrast, neurons in the contiguous lateral intraparietal area (LIP signal numerosity in a graded fashion, suggesting the possibility that numerical classification could be achieved in the absence of neurons tuned for number. Here, we consider the performance of a decision model based on this analog coding scheme in a paradigmatic discrimination task—numerosity bisection. We demonstrate that a basic two-neuron classifier model, derived from experimentally measured monotonic responses of LIP neurons, is sufficient to reproduce the numerosity bisection behavior of monkeys, and that the threshold of the classifier can be set by reward maximization via a simple learning rule. In addition, our model predicts deviations from Weber Law scaling of choice behavior at high numerosity. Together, these results suggest both a generic neuronal framework for magnitude-based decisions and a role for reward contingency in the classification of such stimuli.

  4. Laser-Plasma Modeling Using PERSEUS Extended-MHD Simulation Code for HED Plasmas

    Science.gov (United States)

    Hamlin, Nathaniel; Seyler, Charles

    2016-10-01

    We discuss the use of the PERSEUS extended-MHD simulation code for high-energy-density (HED) plasmas in modeling laser-plasma interactions in relativistic and nonrelativistic regimes. By formulating the fluid equations as a relaxation system in which the current is semi-implicitly time-advanced using the Generalized Ohm's Law, PERSEUS enables modeling of two-fluid phenomena in dense plasmas without the need to resolve the smallest electron length and time scales. For relativistic and nonrelativistic laser-target interactions, we have validated a cycle-averaged absorption (CAA) laser driver model against the direct approach of driving the electromagnetic fields. The CAA model refers to driving the radiation energy and flux rather than the fields, and using hyperbolic radiative transport, coupled to the plasma equations via energy source terms, to model absorption and propagation of the radiation. CAA has the advantage of not requiring adequate grid resolution of each laser wavelength, so that the system can span many wavelengths without requiring prohibitive CPU time. For several laser-target problems, we compare existing MHD results to extended-MHD results generated using PERSEUS with the CAA model, and examine effects arising from Hall physics. This work is supported by the National Nuclear Security Administration stewardship sciences academic program under Department of Energy cooperative agreements DE-FOA-0001153 and DE-NA0001836.

  5. Adjudicating between face-coding models with individual-face fMRI responses.

    Directory of Open Access Journals (Sweden)

    Johan D Carlin

    2017-07-01

    Full Text Available The perceptual representation of individual faces is often explained with reference to a norm-based face space. In such spaces, individuals are encoded as vectors where identity is primarily conveyed by direction and distinctiveness by eccentricity. Here we measured human fMRI responses and psychophysical similarity judgments of individual face exemplars, which were generated as realistic 3D animations using a computer-graphics model. We developed and evaluated multiple neurobiologically plausible computational models, each of which predicts a representational distance matrix and a regional-mean activation profile for 24 face stimuli. In the fusiform face area, a face-space coding model with sigmoidal ramp tuning provided a better account of the data than one based on exemplar tuning. However, an image-processing model with weighted banks of Gabor filters performed similarly. Accounting for the data required the inclusion of a measurement-level population averaging mechanism that approximates how fMRI voxels locally average distinct neuronal tunings. Our study demonstrates the importance of comparing multiple models and of modeling the measurement process in computational neuroimaging.

  6. A computational code for resolution of general compartment models applied to internal dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Claro, Thiago R.; Todo, Alberto S., E-mail: claro@usp.br, E-mail: astodo@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The dose resulting from internal contamination can be estimated with the use of biokinetic models combined with experimental results obtained from bioanalysis and the knowledge of the incorporation time. The biokinetic models can be represented by a set of compartments expressing the transport, retention and elimination of radionuclides from the body. The ICRP publications number 66, 78 and 100 present compartmental models for the respiratory tract, the gastrointestinal tract and for systemic distribution for an array of radionuclides of interest for radiological protection. The objective of this work is to develop a computational code for the design, visualization and resolution of compartmental models of any nature. Four different techniques are available for the resolution of the system of differential equations, including semi-analytical and numerical methods. The software was developed in C#, using a Microsoft Access database and XML standards for file exchange with other applications. Compartmental models for uranium, thorium and iodine radionuclides were generated for the validation of the CBT software. The models were subsequently solved by the SSID software and the results compared with the values published in ICRP Publication 78. In all cases the system is in accordance with the values published by ICRP. (author)
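
    The resolution of such a compartment system amounts to integrating a linear set of ordinary differential equations dq/dt = A q, where A holds the transfer rates between compartments. A minimal numerical sketch for a hypothetical two-compartment model is shown below; the rate constants are illustrative and the code is not part of the CBT software described in the abstract.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative transfer rates (per day): compartment 0 feeds compartment 1,
# and both excrete directly out of the body.
k01, k0e, k1e = 0.5, 0.1, 0.05
A = np.array([[-(k01 + k0e), 0.0],
              [k01, -k1e]])

def rhs(t, q):
    return A @ q

# Unit intake into compartment 0 at t = 0, followed for 60 days.
sol = solve_ivp(rhs, (0.0, 60.0), y0=[1.0, 0.0], t_eval=np.linspace(0.0, 60.0, 7))
for t, q0, q1 in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"day {t:5.1f}: retained = {q0:.4f}, {q1:.4f}")
```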

  7. A computational model of cellular mechanisms of temporal coding in the medial geniculate body (MGB).

    Directory of Open Access Journals (Sweden)

    Cal F Rabang

    Full Text Available Acoustic stimuli are often represented in the early auditory pathway as patterns of neural activity synchronized to time-varying features. This phase-locking predominates until the level of the medial geniculate body (MGB, where previous studies have identified two main, largely segregated response types: Stimulus-synchronized responses faithfully preserve the temporal coding from its afferent inputs, and Non-synchronized responses, which are not phase locked to the inputs, represent changes in temporal modulation by a rate code. The cellular mechanisms underlying this transformation from phase-locked to rate code are not well understood. We use a computational model of a MGB thalamocortical neuron to test the hypothesis that these response classes arise from inferior colliculus (IC excitatory afferents with divergent properties similar to those observed in brain slice studies. Large-conductance inputs exhibiting synaptic depression preserved input synchrony as short as 12.5 ms interclick intervals, while maintaining low firing rates and low-pass filtering responses. By contrast, small-conductance inputs with Mixed plasticity (depression of AMPA-receptor component and facilitation of NMDA-receptor component desynchronized afferent inputs, generated a click-rate dependent increase in firing rate, and high-pass filtered the inputs. Synaptic inputs with facilitation often permitted band-pass synchrony along with band-pass rate tuning. These responses could be tuned by changes in membrane potential, strength of the NMDA component, and characteristics of synaptic plasticity. These results demonstrate how the same synchronized input spike trains from the inferior colliculus can be transformed into different representations of temporal modulation by divergent synaptic properties.

  8. A wavelet based neural model to optimize and read out a temporal population code

    Directory of Open Access Journals (Sweden)

    Andre eLuvizotto

    2012-05-01

    Full Text Available It has been proposed that the dense excitatory local connectivity of the neo-cortex plays a specific role in the transformation of spatial stimulus information into a temporal representation or a temporal population code (TPC). TPC provides for a rapid, robust and high-capacity encoding of salient stimulus features with respect to position, rotation and distortion. The TPC hypothesis gives a functional interpretation to a core feature of the cortical anatomy: its dense local and sparse long-range connectivity. Thus far, the question of how the TPC encoding can be decoded in downstream areas has not been addressed. Here, we present a neural circuit that decodes the spectral properties of the TPC using a biologically plausible implementation of a Haar transform. We perform a systematic investigation of our model in a recognition task using a standardized stimulus set. We consider alternative implementations using either regular spiking or bursting neurons and a range of spectral bands. Our results show that our wavelet readout circuit provides for robust decoding of the TPC and further compresses the code without losing speed or quality of decoding. We show that in the TPC signal the relevant stimulus information is present in the frequencies around 100 Hz. Our results show that the TPC is constructed around a small number of coding components that can be well decoded by wavelet coefficients in a neuronal implementation. This solution to the TPC decoding problem suggests that cortical processing streams might well consist of sequential operations where spatio-temporal transformations at lower levels form a compact stimulus encoding using TPC that is subsequently decoded back to a spatial representation using wavelet transforms. In addition, the results presented here show that different properties of the stimulus might be transmitted to further processing stages using different frequency components that are captured by appropriately tuned

  9. Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code

    Science.gov (United States)

    Waithe, Kenrick A.

    2004-01-01

    A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low profile vortex generators. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well when compared to numerical simulations and experimental data. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of individual or a row of vortex generators. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.

  10. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  11. Development of Depletion Code Surrogate Models for Uncertainty Propagation in Scenario Studies

    Science.gov (United States)

    Krivtchik, Guillaume; Coquelet-Pascal, Christine; Blaise, Patrick; Garzenne, Claude; Le Mer, Joël; Freynet, David

    2014-06-01

    Transition scenario studies, which enable the comparison of different options for the evolution of the reactor fleet and the management of future fuel cycle materials, allow technical and economic feasibility studies to be performed. The COSI code is developed by CEA and used to perform scenario calculations. It allows any fuel type, reactor fleet or fuel facility to be modelled, and permits the tracking of U, Pu, minor actinide and fission product nuclides on a large time scale. COSI is coupled with the CESAR code, which performs the depletion calculations based on one-group cross-section libraries and nuclear data. Different types of uncertainties have an impact on scenario studies: nuclear data and scenario assumptions. Therefore, it is necessary to evaluate their impact on the major scenario results. The methodology adopted to propagate these uncertainties throughout the scenario calculations is a stochastic approach. Considering the number of inputs to be sampled in order to perform a stochastic calculation of the propagated uncertainty, it appears necessary to reduce the calculation time. Given that evolution calculations represent approximately 95% of the total scenario simulation time, an optimization can be achieved by developing and implementing a library of surrogate models of CESAR in COSI. The input parameters of CESAR are sampled with URANIE, the CEA uncertainty platform, and for every sample the isotopic composition after evolution evaluated with CESAR is stored. Statistical analysis of the input and output tables then allows the behavior of CESAR to be modelled on each CESAR library, i.e. a surrogate model is built. Several quality tests are performed on each surrogate model to ensure that its predictive power is satisfactory. Afterwards, a new routine implemented in COSI reads these surrogate models and uses them in place of CESAR calculations. A preliminary study of the calculation time gain shows that the use of surrogate models allows stochastic
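
    The surrogate-model step can be illustrated with a generic sketch: sample the inputs, evaluate the expensive depletion calculation once per sample (a placeholder function here), fit a cheap regressor, and reuse it inside the scenario loop. The Gaussian-process choice and all names below are assumptions for illustration; they do not reproduce the URANIE/CESAR tooling.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def expensive_depletion_calc(x):
    """Placeholder for a depletion-code run mapping (initial Pu fraction,
    burnup in GWd/t) to a final isotopic quantity. Purely illustrative."""
    pu0, burnup = x
    return pu0 * np.exp(-0.012 * burnup) + 0.002 * burnup

# 1. Sample the input space and run the expensive code once per sample.
X = rng.uniform([0.01, 10.0], [0.10, 60.0], size=(200, 2))
y = np.array([expensive_depletion_calc(x) for x in X])

# 2. Fit the surrogate and check its predictive power on held-out samples.
surrogate = GaussianProcessRegressor(normalize_y=True).fit(X[:150], y[:150])
print("R^2 on held-out samples:", surrogate.score(X[150:], y[150:]))

# 3. In the scenario loop, call the cheap surrogate instead of the code.
print(surrogate.predict([[0.05, 45.0]]))
```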

  12. Models of neural networks temporal aspects of coding and information processing in biological systems

    CERN Document Server

    Hemmen, J; Schulten, Klaus

    1994-01-01

    Since the appearance of Vol. 1 of Models of Neural Networks in 1991, the theory of neural nets has focused on two paradigms: information coding through coherent firing of the neurons and functional feedback. Information coding through coherent neuronal firing exploits time as a cardinal degree of freedom. This capacity of a neural network rests on the fact that the neuronal action potential is a short, say 1 ms, spike, localized in space and time. Spatial as well as temporal correlations of activity may represent different states of a network. In particular, temporal correlations of activity may express that neurons process the same "object" of, for example, a visual scene by spiking at the very same time. The traditional description of a neural network through a firing rate, the famous S-shaped curve, presupposes a wide time window of, say, at least 100 ms. It thus fails to exploit the capacity to "bind" sets of coherently firing neurons for the purpose of both scene segmentation and figure-ground segregatio...

  13. Error threshold in topological quantum-computing models with color codes

    Science.gov (United States)

    Katzgraber, Helmut; Bombin, Hector; Martin-Delgado, Miguel A.

    2009-03-01

    Dealing with errors in quantum computing systems is possibly one of the hardest tasks when attempting to realize physical devices. By encoding the qubits in topological properties of a system, an inherent protection of the quantum states can be achieved. Traditional topologically-protected approaches are based on the braiding of quasiparticles. Recently, a braid-less implementation using brane-net condensates in 3-colexes has been proposed. In 2D it allows the transversal implementation of the whole Clifford group of quantum gates. In this work, we compute the error threshold for this topologically-protected quantum computing system in 2D, by means of mapping its error correction process onto a random 3-body Ising model on a triangular lattice. Errors manifest themselves as random perturbation of the plaquette interaction terms thus introducing frustration. Our results from Monte Carlo simulations suggest that these topological color codes are similarly robust to perturbations as the toric codes. Furthermore, they provide more computational capabilities and the possibility of having more qubits encoded in the quantum memory.

  14. Verification of three-dimensional neutron kinetics model of TRAP-KS code regarding reactivity variations

    Energy Technology Data Exchange (ETDEWEB)

    Uvakin, Maxim A.; Alekhin, Grigory V.; Bykov, Mikhail A.; Zaitsev, Sergei I. [EDO ' GIDROPRESS' , Moscow Region, Podolsk (Russian Federation)

    2016-09-15

    This work deals with verification of the TRAP-KS code. TRAP-KS is used for coupled neutron-kinetics and thermal-hydraulic calculations of VVER reactors. The three-dimensional neutron kinetics model enables consideration of space effects produced by variations of the energy field and feedback parameters. This feature has to be investigated especially for asymmetrical variations of the core multiplying properties, power fluctuations and strong local perturbation insertions. The presented work consists of three test definitions. First, an asymmetrical control rod (CR) ejection during power operation is defined. This process leads to fast reactivity insertion with a short-time power spike. As a second task, xenon oscillations are considered. Here, a small negative reactivity insertion leads to a power decrease and induces space oscillations of the xenon concentration. In the late phase, these oscillations are suppressed by external actions. As a last test, an international code comparison for a hypothetical main steam line break (V1000CT-2, task 2) was performed. This scenario is interesting for asymmetrical positive reactivity insertion caused by decreasing coolant temperature in the affected loop.

  15. Synaptic learning rules and sparse coding in a model sensory system.

    Directory of Open Access Journals (Sweden)

    Luca A Finelli

    2008-04-01

    Full Text Available Neural circuits exploit numerous strategies for encoding information. Although the functional significance of individual coding mechanisms has been investigated, ways in which multiple mechanisms interact and integrate are not well understood. The locust olfactory system, in which dense, transiently synchronized spike trains across ensembles of antenna lobe (AL neurons are transformed into a sparse representation in the mushroom body (MB; a region associated with memory, provides a well-studied preparation for investigating the interaction of multiple coding mechanisms. Recordings made in vivo from the insect MB demonstrated highly specific responses to odors in Kenyon cells (KCs. Typically, only a few KCs from the recorded population of neurons responded reliably when a specific odor was presented. Different odors induced responses in different KCs. Here, we explored with a biologically plausible model the possibility that a form of plasticity may control and tune synaptic weights of inputs to the mushroom body to ensure the specificity of KCs' responses to familiar or meaningful odors. We found that plasticity at the synapses between the AL and the MB efficiently regulated the delicate tuning necessary to selectively filter the intense AL oscillatory output and condense it to a sparse representation in the MB. Activity-dependent plasticity drove the observed specificity, reliability, and expected persistence of odor representations, suggesting a role for plasticity in information processing and making a testable prediction about synaptic plasticity at AL-MB synapses.

  16. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    Science.gov (United States)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assert their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
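
    The Roofline correlation mentioned above boils down to attainable performance = min(peak compute, arithmetic intensity x peak memory bandwidth). The sketch below evaluates this bound with illustrative machine parameters, not the measurements reported in the paper.

```python
def roofline_attainable(arith_intensity_flops_per_byte, peak_gflops, peak_bw_gb_s):
    """Roofline model: performance is bounded by the compute peak or by bandwidth."""
    return min(peak_gflops, arith_intensity_flops_per_byte * peak_bw_gb_s)

# Example: sweep arithmetic intensity for a hypothetical node.
for ai in (0.25, 1.0, 4.0, 16.0):
    print(ai, roofline_attainable(ai, peak_gflops=500.0, peak_bw_gb_s=60.0), "GFLOP/s")
```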

  17. A system for environmental model coupling and code reuse: The Great Rivers Project

    Science.gov (United States)

    Eckman, B.; Rice, J.; Treinish, L.; Barford, C.

    2008-12-01

    As part of the Great Rivers Project, IBM is collaborating with The Nature Conservancy and the Center for Sustainability and the Global Environment (SAGE) at the University of Wisconsin, Madison to build a Modeling Framework and Decision Support System (DSS) designed to help policy makers and a variety of stakeholders (farmers, fish & wildlife managers, hydropower operators, et al.) to assess, come to consensus, and act on land use decisions representing effective compromises between human use and ecosystem preservation/restoration. Initially focused on Brazil's Paraguay-Parana, China's Yangtze, and the Mississippi Basin in the US, the DSS integrates data and models from a wide variety of environmental sectors, including water balance, water quality, carbon balance, crop production, hydropower, and biodiversity. In this presentation we focus on the modeling framework aspect of this project. In our approach to these and other environmental modeling projects, we see a flexible, extensible modeling framework infrastructure for defining and running multi-step analytic simulations as critical. In this framework, we divide monolithic models into atomic components with clearly defined semantics encoded via rich metadata representation. Once models and their semantics and composition rules have been registered with the system by their authors or other experts, non-expert users may construct simulations as workflows of these atomic model components. A model composition engine enforces rules/constraints for composing model components into simulations, to avoid the creation of Frankenmodels, models that execute but produce scientifically invalid results. A common software environment and common representations of data and models are required, as well as an adapter strategy for code written in e.g., Fortran or python, that still enables efficient simulation runs, including parallelization. Since each new simulation, as a new composition of model components, requires calibration

  18. Study of exclusive one-pion and one-eta production using hadron and dielectron channels in pp reactions at kinetic beam energies of 1.25 GeV and 2.2 GeV with HADES

    Energy Technology Data Exchange (ETDEWEB)

    Agakishiev, G.; Chernenko, S.; Fateev, O.; Ierusalimov, A.; Zanevsky, Y. [Joint Institute of Nuclear Research, Dubna (Russian Federation); Alvarez-Pol, H.; Cabanelas, P.; Garzon, J.A.; Sanchez, M. [Univ. de Santiago de Compostela (Spain); Balanda, A.; Dybczak, A.; Kozuch, A.; Michalska, B.; Otwinowski, J.; Przygoda, W.; Salabura, P.; Trebacz, R.; Wisniowski, M.; Wojcik, T. [Jagiellonian University of Cracow, Smoluchowski Institute of Physics, Krakow (Poland); Bassini, R.; Iori, I. [Sezione di Milano, Istituto Nazionale di Fisica Nucleare, Milano (Italy); Boehmer, M.; Christ, T.; Eberl, T.; Friese, J.; Gernhaeuser, R.; Jurkovic, M.; Maier, L.; Sailer, B. [Technische Univ. Muenchen, Physik Dept. E12, Garching (Germany); Bokemeyer, H.; Holzmann, R.; Koenig, I.; Koenig, W.; Kolb, B.W.; Lang, S.; Muench, M.; Pechenov, V.; Rustamov, A.; Schwab, E.; Sturm, C.; Traxler, M.; Yurevich, S.; Zumbruch, P. [GSI Helmholtz-Zentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Boyard, J.L.; Gumberidze, M.; Hennino, T.; Liu, T.; Moriniere, E.; Ramstein, B.; Roy-Stephan, M. [CNRS/IN2P3 - Univ. Paris Sud, Institut de Physique Nucleaire, Orsay (France); Destefanis, M.; Gilardi, C.; Kuehn, W.; Metag, V.; Perez Cavalcanti, T.; Spruck, B.; Toia, A. [Justus Liebig Univ. Giessen, II. Physikalisches Inst., Giessen (Germany); Dohrmann, F.; Kaempfer, B.; Kanaki, K.; Kotte, R.; Naumann, L.; Wuestenfeld, J. [Helmholtz-Zentrum Dresden-Rossendorf, Inst. fuer Strahlenphysik, Dresden (Germany); Fabbietti, L.; Lapidus, K. [Excellence Cluster ' ' Origin and Structure of the Universe' ' , Garching (Germany); Finocchiaro, P.; Gonzalez-Diaz, D.; Schmah, A. [Laboratori Nazionali del Sud, Istituto Nazionale di Fisica Nucleare, Catania (Italy); Froehlich, I.; Galatyuk, T.; Markert, J.; Muentz, C.; Pachmayer, Y.C.; Pechenova, O.; Pietraszko, J.; Stroebele, H.; Tarantola, A.; Teilab, K. [Goethe-Univ., Inst. fuer Kernphysik, Frankfurt (Germany)] [and others

    2012-05-15

    We present measurements of exclusive {pi}{sup +,0} and {eta} production in pp reactions at 1.25 GeV and 2.2 GeV beam kinetic energy in hadron and dielectron channels. In the case of {pi}{sup +} and {pi}{sup 0}, high-statistics invariant-mass and angular distributions are obtained within the HADES acceptance as well as acceptance-corrected distributions, which are compared to a resonance model. The sensitivity of the data to the yield and production angular distribution of {delta}(1232) and higher-lying baryon resonances is shown, and an improved parameterization is proposed. The extracted cross-sections are of special interest in the case of pp{yields}pp{eta}, since controversial data exist at 2.0 GeV; we find {sigma}=0.142{+-}0.022 mb. Using the dielectron channels, the {pi}{sup 0} and {eta} Dalitz decay signals are reconstructed with yields fully consistent with the hadronic channels. The electron invariant masses and acceptance-corrected helicity angle distributions are found in good agreement with model predictions. (orig.)

  19. Study of exclusive one-pion and one-eta production using hadron and dielectron channels in pp reactions at kinetic beam energies of 1.25 GeV and 2.2 GeV with HADES

    CERN Document Server

    Agakishiev, G; Balanda, A; Bassini, R; Böhmer, M; Boyard, J L; Cabanelas, P; Chernenko, S; Christ, T; Destefanis, M; Dohrmann, F; Dybczak, A; Eberl, T; Fabbietti, L; Fateev, O; Finocchiaro, P; Friese, J; Fröhlich, I; Galatyuk, T; Garzón, J A; Gernhäuser, R; Gilardi, C; Golubeva, M; González-Díaz, D; Guber, F; Gumberidze, M; Hennino, T; Holzmann, R; Iori, I; Ierusalimov, A; Ivashkin, A; Jurkovic, M; Kämpfer, B; Kanaki, K; Karavicheva, T; Koenig, I; Koenig, W; Kolb, B W; Kotte, R; Kozuch, A; Krizek, F; Kühn, W; Kugler, A; Kurepin, A; Lang, S; Lapidus, K; Liu, T; Maier, L; Markert, J; Metag, V; Michalska, B; Morinière, E; Mousa, J; Müntz, C; Naumann, L; Otwinowski, J; Pachmayer, Y C; Pechenov, V; Pechenova, O; Pietraszko, J; Przygoda, W; Ramstein, B; Reshetin, A; Roy-Stephan, M; Rustamov, A; Sadovsky, A; Sailer, B; Salabura, P; Sánchez, M; Schmah, A; Schwab, E; Sobolev, Yu G; Spataro, S; Spruck, B; Ströbele, H; Stroth, J; Sturm, C; Tarantola, A; Teilab, K; Tlusty, P; Toia, A; Traxler, M; Trebacz, R; Tsertos, H; Wagner, V; Wisniowski, M; Wüstenfeld, J; Yurevich, S; Zanevsky, Y

    2012-01-01

    We present measurements of exclusive \\pi^{+,0} and \\eta\\ production in pp reactions at 1.25 GeV and 2.2 GeV beam kinetic energy in hadron and dielectron channels. In the case of \\pi^+ and \\pi^0, high-statistics invariant-mass and angular distributions are obtained within the HADES acceptance as well as acceptance corrected distributions, which are compared to a resonance model. The sensitivity of the data to the yield and production angular distribution of \\Delta(1232) and higher lying baryon resonances is shown, and an improved parameterization is proposed. The extracted cross sections are of special interest in the case of pp \\to pp \\eta, since controversial data exist at 2.0 GeV; we find \\sigma =0.142 \\pm 0.022 mb. Using the dielectron channels, the \\pi^0 and \\eta\\ Dalitz decay signals are reconstructed with yields fully consistent with the hadronic channels. The electron invariant masses and acceptance corrected helicity angle distributions are found in good agreement with model predictions.

  20. ROBO: a model and a code for studying the interstellar medium

    Energy Technology Data Exchange (ETDEWEB)

    Grassi, T [University of Padua, Italy; Krstic, Predrag S [ORNL; Merlin, E [University of Padua, Padua, Italy; Buonomo, U [University of Padua, Padua, Italy; Piovan, L [University of Padua, Padua, Italy; Chiosi, C [University of Padua, Padua, Italy

    2011-01-01

    We present ROBO, a model and its companion code for the study of the interstellar medium (ISM). The aim is to provide an accurate description of the physical evolution of the ISM and to set the ground for an ancillary tool to be inserted in NBody-Tree-SPH (NB-TSPH) simulations of large-scale structures in the cosmological context or of the formation and evolution of individual galaxies. The ISM model consists of gas and dust. The gas chemical composition is regulated by a network of reactions that includes a large number of species (hydrogen and deuterium-based molecules, helium, and metals). New reaction rates for the charge transfer in H{sup +} and H{sub 2} collisions are presented. The dust contains the standard mixture of carbonaceous grains (graphite grains and PAHs) and silicates. In our model, dust is formed and destroyed by several processes. The model accurately treats the cooling process, based on several physical mechanisms and cooling functions recently reported in the literature. The model is applied to a wide range of input parameters, and the results for important quantities describing the physical state of the gas and dust are presented. The results are organized in a database suited to artificial neural networks (ANNs). Once trained, the ANNs reproduce the results obtained by ROBO with great accuracy. We plan to develop ANNs suitably tailored for applications to NB-TSPH simulations of cosmological structures and/or galaxies.
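
    The ANN step described above lends itself to a generic emulator sketch (the toy data, parameter ranges and scikit-learn regressor below are illustrative assumptions, not the actual ROBO database or network): a grid of model runs is used as a training set, and the trained surrogate can then be queried cheaply inside an NB-TSPH time step.

      # Illustrative emulator sketch: train an ANN on a precomputed grid of
      # (log density, log temperature) -> log cooling rate, then query it cheaply.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      X = rng.uniform([-2.0, 1.0], [2.0, 6.0], size=(2000, 2))   # log10 n_H, log10 T (toy ranges)
      # Stand-in for the quantity a model database would contain (invented functional form):
      y = -22.0 + 0.5 * X[:, 0] + np.sin(X[:, 1])

      ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
      ann.fit(X, y)
      print(ann.predict([[0.0, 4.0]]))   # fast surrogate evaluation inside a simulation step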

  1. Numerical modelling of pressure suppression pools with CFD and FEM codes

    Energy Technology Data Exchange (ETDEWEB)

    Paettikangas, T.; Niemi, J.; Timperi, A. (VTT Technical Research Centre of Finland (Finland))

    2011-06-15

    Experiments on a large-break loss-of-coolant accident for a BWR are modeled with computational fluid dynamics (CFD) and finite element calculations. In the CFD calculations, the direct-contact condensation in the pressure suppression pool is studied. The heat transfer in the liquid phase is modeled with the Hughes-Duffey correlation based on the surface renewal model, in which the heat transfer is proportional to the square root of the turbulence kinetic energy. The condensation models are implemented as user-defined functions in the Euler-Euler two-phase model of the Fluent 12.1 CFD code. The rapid collapse of a large steam bubble and the resulting pressure source are studied analytically and numerically. The pressure source obtained from simplified calculations is used for studying the structural effects and FSI in a realistic BWR containment. The collapse results in a volume acceleration, which induces pressure loads on the pool walls. In the case of a spherical bubble, the velocity term of the volume acceleration is responsible for the largest pressure load. As the amount of air in the bubble is decreased, the peak pressure increases. However, when the water compressibility is accounted for, the finite speed of sound becomes a limiting factor. (Author)
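
    The liquid-side closure mentioned above (heat transfer proportional to the square root of the turbulence kinetic energy, in the spirit of a surface-renewal correlation) can be sketched as follows; the prefactor and the exact grouping of properties are assumptions for illustration and not the form coded in the Fluent user-defined functions.

      # Rough sketch of a surface-renewal-type condensation closure with h ~ sqrt(k_t).
      import math

      def condensation_htc(rho_l, cp_l, k_t, c=0.2):
          """Liquid-side heat transfer coefficient, W/(m^2 K).

          Illustrative closure h = c * rho_l * cp_l * sqrt(k_t), i.e. proportional to the
          square root of the turbulence kinetic energy k_t. The prefactor c is an assumed
          tuning constant, not the Hughes-Duffey value.
          """
          return c * rho_l * cp_l * math.sqrt(k_t)

      def condensation_rate(h, t_sat, t_liquid, h_fg, area):
          """Condensed steam mass flow (kg/s) from the interfacial heat flux q = h*(T_sat - T_l)."""
          return h * (t_sat - t_liquid) * area / h_fg

      h = condensation_htc(rho_l=958.0, cp_l=4216.0, k_t=0.05)   # water near saturation
      print(condensation_rate(h, t_sat=373.15, t_liquid=353.15, h_fg=2.26e6, area=1.0))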

  2. A model of polarized-beam AGS in the ray-tracing code Zgoubi

    Energy Technology Data Exchange (ETDEWEB)

    Meot, F. [Brookhaven National Lab. (BNL), Upton, NY (United States); Ahrens, L. [Brookhaven National Lab. (BNL), Upton, NY (United States); Brown, K. [Brookhaven National Lab. (BNL), Upton, NY (United States); Dutheil, Y. [Brookhaven National Lab. (BNL), Upton, NY (United States); Glenn, J. [Brookhaven National Lab. (BNL), Upton, NY (United States); Huang, H. [Brookhaven National Lab. (BNL), Upton, NY (United States); Roser, T. [Brookhaven National Lab. (BNL), Upton, NY (United States); Shoefer, V. [Brookhaven National Lab. (BNL), Upton, NY (United States); Tsoupas, N. [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2016-07-12

    A model of the Alternating Gradient Synchrotron, based on the AGS snapramps, has been developed in the stepwise ray-tracing code Zgoubi. It has been used over the past 5 years in a number of accelerator studies aimed at enhancing RHIC proton beam polarization. It is also used to study and optimize proton and helion beam polarization in view of future RHIC and eRHIC programs. The AGS model in Zgoubi is operational on-line via three different applications, 'ZgoubiFromSnaprampCmd', 'AgsZgoubiModel' and 'AgsModelViewer', with the latter two being essentially interfaces to the former, which is the actual model 'engine'. All three commands are available from the controls system application launcher in the AGS 'StartUp' menu, or from eponymous commands on shell terminals. The main aspects of the model and of its operation are presented in this technical note, brief excerpts from various studies performed so far are given for illustration, and the means and methods entering ZgoubiFromSnaprampCmd are developed further in the appendix.

  3. ROBO: a model and a code for studying the interstellar medium

    Science.gov (United States)

    Grassi, T.; Krstic, P.; Merlin, E.; Buonomo, U.; Piovan, L.; Chiosi, C.

    2011-09-01

    We present ROBO, a model and its companion code for the study of the interstellar medium (ISM). The aim is to provide an accurate description of the physical evolution of the ISM and to set the ground for an ancillary tool to be inserted in NBody-Tree-SPH (NB-TSPH) simulations of large-scale structures in the cosmological context or of the formation and evolution of individual galaxies. The ISM model consists of gas and dust. The gas chemical composition is regulated by a network of reactions that includes a large number of species (hydrogen and deuterium-based molecules, helium, and metals). New reaction rates for the charge transfer in H+ and H2 collisions are presented. The dust contains the standard mixture of carbonaceous grains (graphite grains and PAHs) and silicates. In our model, dust is formed and destroyed by several processes. The model accurately treats the cooling process, based on several physical mechanisms and cooling functions recently reported in the literature. The model is applied to a wide range of input parameters, and the results for important quantities describing the physical state of the gas and dust are presented. The results are organized in a database suited to artificial neural networks (ANNs). Once trained, the ANNs reproduce the results obtained by ROBO with great accuracy. We plan to develop ANNs suitably tailored for applications to NB-TSPH simulations of cosmological structures and/or galaxies.

  4. Improving the Salammbo code modelling and using it to better predict radiation belts dynamics

    Science.gov (United States)

    Maget, Vincent; Sicard-Piet, Angelica; Grimald, Sandrine Rochel; Boscher, Daniel

    2016-07-01

    In the framework of the FP7-SPACESTORM project, one objective is to improve the reliability of the model-based predictions of radiation belt dynamics (first developed during the FP7-SPACECAST project). For this purpose we have analyzed and improved the way the simulations using the ONERA Salammbô code are performed, especially by: - Better controlling the driving parameters of the simulation; - Improving the initialization of the simulation in order to be more accurate at most energies for L values between 4 and 6; - Improving the physics of the model. For the first point, a statistical analysis of the accuracy of the Kp index has been conducted. For the second point, we based our method on a long-duration simulation in order to extract typical radiation belt states depending on the solar wind stress and geomagnetic activity. For the last point, we first improved separately the modelling of different processes acting in the radiation belts and then analyzed the global improvement obtained when simulating them together. We discuss all these points and the balance that has to be struck between modeled processes in order to globally improve the radiation belt modelling.

  5. Cross-band noise model refinement for transform domain Wyner–Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2012-01-01

    Distributed Video Coding (DVC) is a new video coding paradigm, which mainly exploits the source statistics at the decoder based on the availability of decoder side information. One approach to DVC is feedback channel based Transform Domain Wyner–Ziv (TDWZ) video coding. The efficiency of current ...

  6. Committed to the Honor Code: An Investment Model Analysis of Academic Integrity

    Science.gov (United States)

    Dix, Emily L.; Emery, Lydia F.; Le, Benjamin

    2014-01-01

    Educators worldwide face challenges surrounding academic integrity. The development of honor codes can promote academic integrity, but understanding how and why honor codes affect behavior is critical to their successful implementation. To date, research has not examined how students' "relationship" to an honor code predicts…

  7. Joint source channel coding using arithmetic codes

    CERN Document Server

    Bi, Dongsheng

    2009-01-01

    Based on the encoding process, arithmetic codes can be viewed as tree codes and current proposals for decoding arithmetic codes with forbidden symbols belong to sequential decoding algorithms and their variants. In this monograph, we propose a new way of looking at arithmetic codes with forbidden symbols. If a limit is imposed on the maximum value of a key parameter in the encoder, this modified arithmetic encoder can also be modeled as a finite state machine and the code generated can be treated as a variable-length trellis code. The number of states used can be reduced and techniques used fo
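
    The forbidden-symbol idea can be made concrete with a toy sketch (an illustrative floating-point coder of my own, not the finite-state/trellis construction proposed in the monograph): a slice of the probability interval is reserved and never emitted, so a decoder that lands in that slice knows a transmission error has occurred.

      # Toy arithmetic coder with a "forbidden" symbol (floating point, short messages only).
      GAP = 0.1                                   # probability mass reserved for error detection
      PROBS = {"a": 0.5, "b": 0.3, "c": 0.2}      # source model (scaled by 1-GAP below)

      def intervals():
          cum, table = GAP, {}                    # [0, GAP) is the forbidden region
          for sym, p in PROBS.items():
              width = p * (1.0 - GAP)
              table[sym] = (cum, cum + width)
              cum += width
          return table

      def encode(message):
          low, high = 0.0, 1.0
          table = intervals()
          for sym in message:
              lo, hi = table[sym]
              low, high = low + (high - low) * lo, low + (high - low) * hi
          return (low + high) / 2.0               # any number in [low, high) identifies the message

      def decode(value, length):
          table = intervals()
          out = []
          for _ in range(length):
              if value < GAP:                     # landed in the forbidden region
                  raise ValueError("forbidden symbol reached: transmission error detected")
              for sym, (lo, hi) in table.items():
                  if lo <= value < hi:
                      out.append(sym)
                      value = (value - lo) / (hi - lo)
                      break
          return "".join(out)

      code = encode("abca")
      print(decode(code, 4))                      # -> "abca"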

  8. The HADES RPC time of flight wall performance in Au+Au collisions at 1.23 AGeV

    Energy Technology Data Exchange (ETDEWEB)

    Kornakov, Georgy [TU Darmstadt (Germany)

    2014-07-01

    The HADES Resistive Plate Chamber (RPC) detector measures the time-of-flight of charged particles in the innermost part of the High Acceptance Di-Electron Spectrometer located at GSI, Darmstadt, Germany. Its main goal is to provide lepton identification at low momenta (p < 400 MeV/c) as well as identification of the pi, K, p, He3, d/He4 and t studied by the experiment. For the Au+Au beam time, a major improvement of the spectrometer in terms of granularity and particle identification capability was achieved by replacing the old TOFino detector with the new shielded timing RPC time-of-flight detectors. The gold beam provided by the SIS 18 accelerator, with an energy of 1.23 AGeV, collided with a segmented gold target, creating mean multiplicities of 72 charged particles per event in the RPC region, and about 150 in the most central collisions. Results show an RPC efficiency above 95 % and a mean time accuracy below 70 ps. Here we describe the design and performance characteristics required to achieve this goal, as well as the methods and algorithms used for calibration and correction of the data.
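
    The particle identification that such a time-of-flight wall enables rests on a simple kinematic relation, sketched below with generic numbers (path length, momentum and timing are invented for illustration and are not HADES calibration values).

      # Sketch of time-of-flight particle identification (detector-specific corrections omitted).
      import math

      C = 299.792458   # speed of light in mm/ns

      def mass_from_tof(momentum_mev_c, path_mm, tof_ns):
          """Reconstructed mass (MeV/c^2) from momentum, flight path and time of flight."""
          beta = path_mm / (tof_ns * C)
          return momentum_mev_c * math.sqrt(1.0 / beta**2 - 1.0)

      # A ~70 ps time resolution translates into the mass resolution needed to separate
      # pions, kaons and protons at low momenta. Example with invented track values:
      print(mass_from_tof(momentum_mev_c=800.0, path_mm=2100.0, tof_ns=10.8))   # ~939, a proton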

  9. Assessment of Modified Wall Condensation Models of MARS-KS and SPACE Codes using Reflux Condensation Test

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kyung Won; Cheong, Ae Ju; Suh, Duk SUH [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2015-10-15

    The wall condensation models of the MARS-KS and SPACE codes adopt the Colburn-Hougen diffusion method to solve for the liquid-gas interface temperature in the presence of noncondensable (NC) gases. Recent studies reported that there was an error in the vapor mass flux term when the models were implemented in the codes. This error causes the codes to underestimate the steam condensation rate. This tendency becomes more noticeable with the increase in the mole fraction of NC gases. In this study, we assess the modified condensation models of the MARS-KS and SPACE codes. The calculation results of the modified versions of the codes (MARS-KS 1.3r1 and SPACE 2.16r1) are compared to those of the original versions (MARS-KS 1.3 and SPACE 2.14) and to the experimental data. The modified model predicts higher heat fluxes and HTCs than the original model due to the increase in the steam condensation rate. At relatively high air mass fractions, the modified model increases the discrepancy between the measured data and the calculated values.
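
    The Colburn-Hougen idea of solving for the liquid-gas interface temperature can be sketched with a simple bisection balance (the coefficients, the Antoine fit and the neglect of gas-side sensible heat are all simplifying assumptions for illustration; this is not the MARS-KS/SPACE implementation).

      # Sketch of a Colburn-Hougen-type interface temperature balance for wall condensation
      # in the presence of a noncondensable (NC) gas.
      import math

      def p_sat_water(t_c):
          """Saturation pressure of water (Pa), Antoine correlation, valid ~1-100 degC."""
          return 133.322 * 10 ** (8.07131 - 1730.63 / (233.426 + t_c))

      def interface_temperature(t_wall, t_bulk, x_v_bulk, p_tot,
                                h_film=6000.0, k_mass=0.03, h_fg=2.26e6):
          """Solve h_film*(T_i - T_wall) = m'' * h_fg with the diffusion-limited flux
          m'' = k_mass * ln((1 - x_v_i)/(1 - x_v_bulk)), where x_v_i = P_sat(T_i)/p_tot.
          Gas-side sensible heat is neglected in this sketch; coefficients are assumed."""
          lo, hi = t_wall, t_bulk
          for _ in range(60):                       # bisection on the interface temperature
              t_i = 0.5 * (lo + hi)
              x_v_i = min(p_sat_water(t_i) / p_tot, 0.999)
              m_cond = k_mass * math.log((1.0 - x_v_i) / (1.0 - x_v_bulk))
              if h_film * (t_i - t_wall) > m_cond * h_fg:
                  hi = t_i                          # film removes more heat than condensation supplies
              else:
                  lo = t_i
          return 0.5 * (lo + hi)

      # 30% NC gas by mole at 1 bar, 60 degC wall, 90 degC bulk (invented conditions):
      print(interface_temperature(t_wall=60.0, t_bulk=90.0, x_v_bulk=0.7, p_tot=1.0e5))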

  10. Kombucha brewing under the Food and Drug Administration model Food Code: risk analysis and processing guidance.

    Science.gov (United States)

    Nummer, Brian A

    2013-11-01

    Kombucha is a fermented beverage made from brewed tea and sugar. The taste is slightly sweet and acidic and it may have residual carbon dioxide. Kombucha is consumed in many countries as a health beverage and it is gaining in popularity in the U.S. Consequently, many retailers and food service operators are seeking to brew this beverage on site. As a fermented beverage, kombucha would be categorized in the Food and Drug Administration model Food Code as a specialized process and would require a variance with submission of a food safety plan. This special report was created to assist both operators and regulators in preparing or reviewing a kombucha food safety plan.

  11. Technical support document for proposed revision of the model energy code thermal envelope requirements

    Energy Technology Data Exchange (ETDEWEB)

    Conner, C.C.; Lucas, R.G.

    1993-02-01

    This report documents the development of the proposed revision of the Council of American Building Officials' (CABO) 1993 supplement to the 1992 Model Energy Code (MEC) (referred to as the 1993 MEC) building thermal envelope requirements for single-family and low-rise multifamily residences. The goal of this analysis was to develop revised guidelines based on an objective methodology that determined the most cost-effective (least total life-cycle cost [LCC]) combination of energy conservation measures (ECMs) for residences in different locations. The ECMs with the lowest LCC were used as a basis for proposing revised MEC maximum U[sub o]-value (thermal transmittance) curves in the MEC format. The changes proposed here affect the requirements for "group R" residences. The group R residences are detached one- and two-family dwellings (referred to as single-family) and all other residential buildings three stories or less (referred to as multifamily).
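
    The least-life-cycle-cost logic behind such envelope requirements can be sketched in a few lines (measure costs, savings and the present-worth factor below are invented for illustration): enumerate candidate packages of conservation measures and keep the one that minimizes first cost plus the present value of energy costs.

      # Sketch of least-LCC selection over energy conservation measure packages (toy numbers).
      from itertools import product

      PW_FACTOR = 15.0          # assumed present-worth factor for annual energy costs

      # (name, added first cost $, annual energy cost saving $) for each measure level
      WALL = [("R-11", 0.0, 0.0), ("R-19", 350.0, 40.0)]
      CEILING = [("R-19", 0.0, 0.0), ("R-38", 500.0, 55.0)]
      BASE_ENERGY_COST = 900.0  # annual energy cost of the reference house, $

      def lcc(package):
          first = sum(cost for _, cost, _ in package)
          annual = BASE_ENERGY_COST - sum(saving for _, _, saving in package)
          return first + PW_FACTOR * annual

      best = min(product(WALL, CEILING), key=lcc)
      print([name for name, _, _ in best], round(lcc(best), 2))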

  13. SPRINT: A Tool to Generate Concurrent Transaction-Level Models from Sequential Code

    Directory of Open Access Journals (Sweden)

    Cockx Johan

    2007-01-01

    A high-level concurrent model such as a SystemC transaction-level model can provide early feedback during the exploration of implementation alternatives for state-of-the-art signal processing applications like video codecs on a multiprocessor platform. However, the creation of such a model starting from sequential code is a time-consuming and error-prone task. It is typically done only once, if at all, for a given design. This lack of exploration of the design space often leads to a suboptimal implementation. To support our systematic C-based design flow, we have developed a tool to generate a concurrent SystemC transaction-level model for user-selected task boundaries. Using this tool, different parallelization alternatives have been evaluated during the design of an MPEG-4 simple profile encoder and an embedded zero-tree coder. Generation plus evaluation of an alternative was possible in less than six minutes. This is fast enough to allow extensive exploration of the design space.

  14. SPRINT: A Tool to Generate Concurrent Transaction-Level Models from Sequential Code

    Directory of Open Access Journals (Sweden)

    Richard Stahl

    2007-01-01

    A high-level concurrent model such as a SystemC transaction-level model can provide early feedback during the exploration of implementation alternatives for state-of-the-art signal processing applications like video codecs on a multiprocessor platform. However, the creation of such a model starting from sequential code is a time-consuming and error-prone task. It is typically done only once, if at all, for a given design. This lack of exploration of the design space often leads to a suboptimal implementation. To support our systematic C-based design flow, we have developed a tool to generate a concurrent SystemC transaction-level model for user-selected task boundaries. Using this tool, different parallelization alternatives have been evaluated during the design of an MPEG-4 simple profile encoder and an embedded zero-tree coder. Generation plus evaluation of an alternative was possible in less than six minutes. This is fast enough to allow extensive exploration of the design space.

  15. Improved Intranuclear Cascade Models for the Codes CEM2k and LAQGSM

    CERN Document Server

    Mashnik, S G; Sierk, A J; Prael, R E

    2005-01-01

    An improved version of the Cascade-Exciton Model (CEM) of nuclear reactions implemented in the codes CEM2k and the Los Alamos version of the Quark-Gluon String Model (LAQGSM) has been developed recently at LANL to describe reactions induced by particles and nuclei at energies up to hundreds of GeV/nucleon for a number of applications. We present several improvements to the intranuclear cascade models used in CEM2k and LAQGSM developed recently to better describe the physics of nuclear reactions. First, we incorporate the photonuclear mode from CEM2k into LAQGSM to allow it to describe photonuclear reactions, not previously modeled there. Then, we develop new approximations to describe more accurately experimental elementary energy and angular distributions of secondary particles from hadron-hadron and photon-hadron interactions using available data and approximations published by other authors. Finally, to consider reactions involving very highly excited nuclei (E* > 2-3 MeV/A), we have incorporated into CEM2...

  16. HARDWARE MODELING OF BINARY CODED DECIMAL ADDER IN FIELD PROGRAMMABLE GATE ARRAY

    Directory of Open Access Journals (Sweden)

    Muhammad Ibn Ibrahimy

    2013-01-01

    Few relevant research works are available on the Field Programmable Gate Array (FPGA) based hardware implementation of a Binary Coded Decimal (BCD) adder, largely because FPGA-based hardware realization is a relatively new and still developing field of research. The article illustrates the design and hardware modeling of a BCD adder. Among the types of adders, the Carry Look Ahead (CLA) and Ripple Carry (RC) adders have been studied, designed and compared in terms of area consumption and time requirement. The simulation results show that the CLA adder performs faster with optimized area consumption. Verilog Hardware Description Language (HDL) is used for designing the model with the help of the Altera Quartus II Electronic Design Automation (EDA) tool. EDA synthesis tools make it easy to develop an HDL model that can be synthesized into target-specific architectures, while HDL-based modeling provides shorter development phases with continuous testing and verification of the system performance and behavior. After successful functional and timing simulations of the CLA-based BCD adder, the design was downloaded to a physical FPGA device. For the FPGA implementation, the Altera DE2 board has been used, which contains an Altera Cyclone II 2C35 FPGA device.
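
    The digit-wise correction that both the CLA and RC hardware adders ultimately implement can be shown algorithmically (a language-neutral sketch in Python rather than the Verilog HDL used in the article): add each BCD digit pair, and whenever the 4-bit sum exceeds 9, add 6 and propagate a carry.

      # Algorithmic sketch of BCD addition with the +6 correction (illustration only).
      def bcd_add(a_digits, b_digits):
          """Add two numbers given as lists of decimal digits, most significant first."""
          result, carry = [], 0
          for a, b in zip(reversed(a_digits), reversed(b_digits)):
              s = a + b + carry
              if s > 9:            # invalid BCD code: apply the +6 correction, propagate carry
                  s += 6
                  carry = 1
                  s &= 0xF         # keep the low nibble
              else:
                  carry = 0
              result.append(s)
          if carry:
              result.append(1)
          return list(reversed(result))

      print(bcd_add([4, 7, 8], [2, 5, 9]))   # 478 + 259 -> [7, 3, 7]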

  17. GECO: Galaxy Evolution COde - A new semi-analytical model of galaxy formation

    CERN Document Server

    Ricciardelli, E

    2010-01-01

    We present a new semi-analytical model of galaxy formation, GECO (Galaxy Evolution COde), aimed at a better understanding of when and how the two processes of star formation and galaxy assembly have taken place. Our model is structured into a Monte Carlo algorithm based on the Extended Press-Schechter theory, for the representation of the merging hierarchy of dark matter halos, and a set of analytic algorithms for the treatment of the baryonic physics, including classical recipes for the gas cooling, the star formation time-scales, galaxy mergers and SN feedback. Together with the galaxies, the parallel growth of BHs is followed in time and their feedback on the hosting galaxies is modelled. We set the model free parameters by matching with data on local stellar mass functions and the BH-bulge relation at z=0. Based on such local boundary conditions, we investigate how data on the high-redshift universe constrain our understanding of the physical processes driving the evolution, focusing in particular on the ...

  18. ROBO: a Model and a Code for the Study of the Interstellar Medium

    CERN Document Server

    Grassi, T; Merlin, E; Buonomo, U; Piovan, L; Chiosi, C

    2010-01-01

    We present ROBO, a model and its companion code for the study of the interstellar medium (ISM). The aim is to provide an accurate description of the physical evolution of the ISM and to set the ground for an ancillary tool to be inserted in NBody-Tree-SPH (NB-TSPH) simulations of large scale structures in cosmological context or of the formation and evolution of individual galaxies. The ISM model consists of gas and dust. The gas chemical composition is regulated by a network of reactions that includes a large number of species (hydrogen and deuterium based molecules, helium, and metals). New reaction rates for the charge transfer in $\\mathrm H^+$ and $\\mathrm H_2$ collisions are presented. The dust contains the standard mixture of carbonaceous grains (graphite grains and PAHs) and silicates of which the model follows the formation and destruction by several processes. The model takes into account an accurate treatment of the cooling process, based on several physical mechanisms, and cooling functions recentl...

  19. Lumpy - an interactive Lumped Parameter Modeling code based on MS Access and MS Excel.

    Science.gov (United States)

    Suckow, A.

    2012-04-01

    Several tracers for dating groundwater (18O/2H, 3H, CFCs, SF6, 85Kr) need lumped parameter modeling (LPM) to convert measured values into numbers with units of time. Other tracers (T/3He, 39Ar, 14C, 81Kr) allow the computation of apparent ages with a mathematical formula using radioactive decay, without defining the age mixture that any groundwater sample represents. Interpretation of the latter also profits significantly from LPM tools that allow forward modeling of input time series to measurable output values, assuming different age distributions and mixtures in the sample. This talk presents a lumped parameter modeling code, Lumpy, combining up to two LPMs in parallel. The code is standalone and freeware. It is based on MS Access and Access Basic (AB) and allows using any number of measurements for both input time series and output measurements, with any, not necessarily constant, time resolution. Several tracers, spanning very different timescales, e.g. the combination of 18O, CFCs and 14C, can be modeled, displayed and fitted simultaneously. For each of the two parallel models, Lumpy allows the choice of the following age distributions: Exponential Piston flow Model (EPM), Linear Piston flow Model (LPM), Dispersion Model (DM), Piston flow Model (PM) and Gamma Model (GM). Concerning input functions, Lumpy allows delaying (passage through the unsaturated zone), shifting by a constant value (converting 18O data from a GNIP station to a different altitude), multiplying by a constant value (geochemical reduction of initial 14C) and the definition of a constant input value prior to the input time series (pre-bomb tritium). Lumpy also allows for underground tracer production (4He or 39Ar) and the computation of a daughter product (tritiugenic 3He) as well as partial loss of the daughter product (partial re-equilibration of 3He). These additional parameters and the input functions can be defined independently for the two sub-LPMs to represent two different recharge
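
    The forward calculation such lumped-parameter tools perform can be sketched as a convolution of the tracer input history with an age distribution, including radioactive decay (the exponential model and the invented tritium input below are illustrative; Lumpy's own parameterizations and units may differ).

      # Sketch of a lumped-parameter forward model: convolve an annual tracer input series
      # with an exponential age distribution and apply radioactive decay.
      import math

      LAMBDA_H3 = math.log(2) / 12.32          # tritium decay constant, 1/yr

      def exponential_model(tau, mean_age):
          """Exponential (well-mixed) transit-time distribution g(tau)."""
          return math.exp(-tau / mean_age) / mean_age

      def convolve(c_in, mean_age, decay=LAMBDA_H3, dt=1.0):
          """Modelled output concentration for the most recent year of an annual input series
          (the convolution is truncated at the length of the input record)."""
          n = len(c_in)
          total = 0.0
          for i in range(n):
              tau = i * dt                      # years before sampling
              total += c_in[n - 1 - i] * exponential_model(tau, mean_age) * math.exp(-decay * tau) * dt
          return total

      tritium_input = [10.0] * 30 + [50.0] * 10 + [8.0] * 20   # crude 60-year input history (TU)
      print(convolve(tritium_input, mean_age=25.0))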

  20. Application of Gamma code coupled with turbomachinery models for high temperature gas-cooled reactors

    Energy Technology Data Exchange (ETDEWEB)

    Chang Oh

    2008-02-01

    The very high-temperature gas-cooled reactor (VHTR) is envisioned as a single- or dual-purpose reactor for electricity and hydrogen generation. The concept has average coolant temperatures above 900°C and operational fuel temperatures above 1250°C. The concept provides the potential for increased energy conversion efficiency and for high-temperature process heat application in addition to power generation. While all the High Temperature Gas Cooled Reactor (HTGR) concepts have sufficiently high temperatures to support process heat applications, such as coal gasification, desalination or cogenerative processes, the VHTR's higher temperatures allow broader applications, including thermochemical hydrogen production. However, the very high temperatures of this reactor concept can be detrimental to safety if a loss-of-coolant accident (LOCA) occurs. Following the loss of coolant through the break and coolant depressurization, air will enter the core through the break by molecular diffusion and ultimately by natural convection, leading to oxidation of the in-core graphite structure and fuel. The oxidation will accelerate heatup of the reactor core and the release of a toxic gas, CO, and fission products. Thus, without any effective countermeasures, a pipe break may lead to significant fuel damage and fission product release. Prior to the start of this Korean/United States collaboration, no computer codes were available that had been sufficiently developed and validated to reliably simulate a LOCA in the VHTR. Therefore, we have worked for the past three years on developing and validating advanced computational methods for simulating LOCAs in a VHTR. The GAMMA code is being developed to implement turbomachinery models in the power conversion unit (PCU) and ultimately models associated with the hydrogen plant. Some preliminary results are described in this paper.

  1. Advantages of a mechanistic codon substitution model for evolutionary analysis of protein-coding sequences.

    Directory of Open Access Journals (Sweden)

    Sanzo Miyazawa

    BACKGROUND: A mechanistic codon substitution model, in which each codon substitution rate is proportional to the product of a codon mutation rate and the average fixation probability depending on the type of amino acid replacement, has advantages over nucleotide, amino acid, and empirical codon substitution models in evolutionary analysis of protein-coding sequences. It can approximate a wide range of codon substitution processes. If no selection pressure on amino acids is taken into account, it becomes equivalent to a nucleotide substitution model. If mutation rates are assumed not to depend on the codon type, then it becomes essentially equivalent to an amino acid substitution model. Mutation at the nucleotide level and selection at the amino acid level can be separately evaluated. RESULTS: The present scheme for single nucleotide mutations is equivalent to the general time-reversible model, but multiple nucleotide changes in infinitesimal time are allowed. Selective constraints on the respective types of amino acid replacements are tailored to each gene as a linear function of a given estimate of selective constraints. Good estimates are those calculated by maximizing the respective likelihoods of empirical amino acid or codon substitution frequency matrices. Akaike and Bayesian information criteria indicate that the present model performs far better than the other substitution models for all five phylogenetic trees of highly-divergent to highly-homologous sequences of chloroplast, mitochondrial, and nuclear genes. It is also shown that multiple nucleotide changes in infinitesimal time are significant in long branches, although they may be caused by compensatory substitutions or other mechanisms. The variation of selective constraint over sites fits the datasets significantly better than variable mutation rates, except for 10 slow-evolving nuclear genes of 10 mammals. A critical finding for phylogenetic analysis is that
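
    The central rate definition can be illustrated with a toy sketch (the mutation rates, fixation factors and the four-codon table below are invented for illustration and restricted to single-nucleotide changes, unlike the full model): each codon substitution rate is a mutation rate multiplied by a fixation factor that depends on the amino acid replacement.

      # Toy sketch of a mechanistic codon substitution rate: q_ij = mu(change) * fixation factor.
      CODON_TO_AA = {"AAA": "K", "AAG": "K", "AAC": "N", "GAA": "E"}   # tiny illustrative subset
      MU = {("A", "G"): 2.0, ("G", "A"): 2.0, ("A", "C"): 1.0,         # toy mutation rates
            ("C", "A"): 1.0, ("A", "T"): 0.5, ("T", "A"): 0.5}
      FIXATION = {"synonymous": 1.0, ("E", "K"): 0.4, ("K", "N"): 0.15}  # toy fixation factors

      def substitution_rate(codon_i, codon_j):
          """q_ij = mu(single-nucleotide change) * fixation factor of the amino acid replacement."""
          diffs = [(a, b) for a, b in zip(codon_i, codon_j) if a != b]
          if len(diffs) != 1:
              return 0.0                    # this sketch allows single-nucleotide changes only
          mu = MU.get(diffs[0], 0.0)
          aa_i, aa_j = CODON_TO_AA[codon_i], CODON_TO_AA[codon_j]
          if aa_i == aa_j:
              return mu * FIXATION["synonymous"]
          return mu * FIXATION[tuple(sorted((aa_i, aa_j)))]

      print(substitution_rate("AAA", "AAG"))   # synonymous (K -> K)
      print(substitution_rate("AAA", "GAA"))   # nonsynonymous K -> E
      print(substitution_rate("AAA", "AAC"))   # nonsynonymous K -> N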

  2. Modelling the biogeochemical cycle of silicon in soils using the reactive transport code MIN3P

    Science.gov (United States)

    Gerard, F.; Mayer, K. U.; Hodson, M. J.; Meunier, J.

    2006-12-01

    We investigated the biogeochemical cycling of Si in an acidic brown soil covered by a coniferous forest (Douglas fir) based on a comprehensive data set and reactive transport modelling. Both published and original data enable us to make up a conceptual model on which the development of a numerical model is based. We modified the reactive transport code MIN3P, which solves thermodynamic and kinetic reactions coupled with vadose zone flow and solute transport. Simulations were performed for a one-dimensional heterogeneous soil profile and were constrained by observed data including daily soil temperature, plant transpiration, throughfall, and dissolved Si in solutions collected beneath the organic layer. Reactive transport modelling was first used to test the validity of the hypothesis that a dynamic balance between Si uptake by plants and release by weathering controls aqueous Si-concentrations. We were able to calibrate the model quite accurately by stepwise adjustment of the relevant parameters. The capability of the model to predict Si-concentrations was good. Mass balance calculations indicate that only 40% of the biogeochemical cycle of Si is controlled by weathering and that about 60% of Si-cycling is related to biological processes (i.e. Si uptake by plants and dissolution of biogenic Si). Such a large contribution of biological processes was not anticipated considering the temperate climate regime, but may be explained by the high biomass productivity of the planted coniferous species. The large contribution of passive Si-uptake by vegetation permits the conservation of seasonal concentration variations caused by temperature-induced weathering, although the modelling suggests that the latter process was of lesser importance relative to biological Si-cycling.

  3. LUMPED: a Visual Basic code of lumped-parameter models for mean residence time analyses of groundwater systems

    Science.gov (United States)

    Ozyurt, N. N.; Bayari, C. S.

    2003-02-01

    A Microsoft ® Visual Basic 6.0 (Microsoft Corporation, 1987-1998) code of 15 lumped-parameter models is presented for the analysis of mean residence time in aquifers. Groundwater flow systems obeying plug and exponential flow models and their combinations of parallel or serial connection can be simulated by these steady-state models which may include complications such as bypass flow and dead volume. Each model accepts tritium, krypton-85, chlorofluorocarbons (CFC-11, CFC-12 and CFC-113) and sulfur hexafluoride (SF 6) as environmental tracer. Retardation of gas tracers in the unsaturated zone and their degradation in the flow system may also be accounted for. The executable code has been tested to run under Windows 95 or higher operating systems. The results of comparisons between other comparable codes are discussed and the limitations are indicated.

  4. A model of R-D performance evaluation for Rate-Distortion-Complexity evaluation of H.264 video coding

    DEFF Research Database (Denmark)

    Wu, Mo; Forchhammer, Søren

    2007-01-01

    This paper considers a method for evaluation of the Rate-Distortion-Complexity (R-D-C) performance of video coding. A statistical model of the transformed coefficients is used to estimate the Rate-Distortion (R-D) performance. A model framework for rate, distortion and slope of the R-D curve for inter and intra frames is presented. Assumptions are given for analyzing an R-D model for fast R-D-C evaluation. The theoretical expressions are combined with H.264 video coding, and confirmed by experimental results. The complexity framework is applied to the integer motion estimation.

  5. Non-coding cancer driver candidates identified with a sample- and position-specific model of the somatic mutation rate

    Science.gov (United States)

    Juul, Malene; Bertl, Johanna; Guo, Qianyun; Nielsen, Morten Muhlig; Świtnicki, Michał; Hornshøj, Henrik; Madsen, Tobias; Hobolth, Asger; Pedersen, Jakob Skou

    2017-01-01

    Non-coding mutations may drive cancer development. Statistical detection of non-coding driver regions is challenged by a varying mutation rate and uncertainty of functional impact. Here, we develop a statistically founded non-coding driver-detection method, ncdDetect, which includes sample-specific mutational signatures, long-range mutation rate variation, and position-specific impact measures. Using ncdDetect, we screened non-coding regulatory regions of protein-coding genes across a pan-cancer set of whole-genomes (n = 505), which top-ranked known drivers and identified new candidates. For individual candidates, presence of non-coding mutations associates with altered expression or decreased patient survival across an independent pan-cancer sample set (n = 5454). This includes an antigen-presenting gene (CD1A), where 5’UTR mutations correlate significantly with decreased survival in melanoma. Additionally, mutations in a base-excision-repair gene (SMUG1) correlate with a C-to-T mutational-signature. Overall, we find that a rich model of mutational heterogeneity facilitates non-coding driver identification and integrative analysis points to candidates of potential clinical relevance. DOI: http://dx.doi.org/10.7554/eLife.21778.001 PMID:28362259
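
    The kind of burden test underlying such driver screens can be sketched as a comparison of observed mutation counts with a background expectation built from sample- and position-specific rates (the rates and counts below are invented, and a simple Poisson tail is used in place of ncdDetect's full model).

      # Sketch of a position-aware mutation burden test for a candidate non-coding region.
      from scipy.stats import poisson

      def region_p_value(observed, per_position_rates, n_samples):
          """P(X >= observed) under a Poisson background with position-specific rates."""
          expected = n_samples * sum(per_position_rates)
          return poisson.sf(observed - 1, expected)

      rates = [1e-6, 3e-6, 2e-6, 5e-6] * 50      # toy background rates for a 200-bp promoter
      print(region_p_value(observed=9, per_position_rates=rates, n_samples=505))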

  6. Energy Consumption Model and Measurement Results for Network Coding-enabled IEEE 802.11 Meshed Wireless Networks

    DEFF Research Database (Denmark)

    Paramanathan, Achuthan; Rasmussen, Ulrik Wilken; Hundebøll, Martin

    2012-01-01

    This paper presents an energy model and energy measurements for network coding enabled wireless meshed networks based on IEEE 802.11 technology. The energy model and the energy measurement testbed is limited to a simple Alice and Bob scenario. For this toy scenario we compare the energy usages...
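
    The Alice-and-Bob relay scenario that the energy model targets reduces, at the packet level, to a single XOR at the relay, sketched below; the packets are toy examples, and the energy accounting itself (per-transmission and per-coding-operation costs) is what the paper's model and testbed quantify.

      # With network coding the relay XORs the two packets and broadcasts once, so three
      # transmissions replace the four needed by plain store-and-forward relaying.
      def xor_bytes(a, b):
          return bytes(x ^ y for x, y in zip(a, b))

      alice_pkt = b"hello bob!"          # packet from Alice, destined for Bob (toy payload)
      bob_pkt   = b"hi alice!!"          # packet from Bob, destined for Alice (same length)

      coded = xor_bytes(alice_pkt, bob_pkt)          # the relay's single broadcast

      # Each end removes its own (already known) packet to recover the other one:
      assert xor_bytes(coded, alice_pkt) == bob_pkt  # Alice recovers Bob's packet
      assert xor_bytes(coded, bob_pkt) == alice_pkt  # Bob recovers Alice's packet
      print("transmissions: routing = 4, network coding = 3")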

  7. Modeling Monte Carlo of multileaf collimators using the code GEANT4

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Alex C.H.; Lima, Fernando R.A., E-mail: oliveira.ach@yahoo.com, E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil); Lima, Luciano S.; Vieira, Jose W., E-mail: lusoulima@yahoo.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Pernambuco (IFPE), Recife, PE (Brazil)

    2014-07-01

    Radiotherapy uses various techniques and equipment for local treatment of cancer. The equipment most often used in radiotherapy for patient irradiation is the linear accelerator (Linac). Among the many algorithms developed for the evaluation of dose distributions in radiotherapy planning, algorithms based on Monte Carlo (MC) methods have proven to be very promising in terms of accuracy by providing more realistic results. MC simulations for applications in radiotherapy are divided into two parts. In the first, the simulation of the production of the radiation beam by the Linac is performed and the phase space is generated. The phase space contains information such as energy, position, direction, etc. of millions of particles (photons, electrons, positrons). In the second part, the simulation of the transport of particles (sampled from the phase space) through a given irradiation field configuration is performed to assess the dose distribution in the patient (or phantom). Accurate modeling of the Linac head is of particular interest in the calculation of dose distributions for intensity modulated radiation therapy (IMRT), where complex intensity distributions are delivered using a multileaf collimator (MLC). The objective of this work is to describe a methodology for the MC modeling of MLCs using the Geant4 code. To exemplify this methodology, the Varian Millennium 120-leaf MLC was modeled, whose physical description is available in the BEAMnrc Users Manual (2011). The dosimetric characteristics (i.e., penumbra, leakage, and tongue-and-groove effect) of this MLC were evaluated. The results agreed with data published in the literature concerning the same MLC. (author)

  8. Modeling and Analysis of FCM UN TRISO Fuel Using the PARFUME Code

    Energy Technology Data Exchange (ETDEWEB)

    Blaise Collin

    2013-09-01

    The PARFUME (PARticle Fuel ModEl) modeling code was used to assess the overall fuel performance of uranium nitride (UN) tri-structural isotropic (TRISO) ceramic fuel in the framework of the design and development of Fully Ceramic Matrix (FCM) fuel. A specific model of a TRISO particle with a UN kernel was developed with PARFUME, and its behavior was assessed under irradiation conditions typical of a Light Water Reactor (LWR). The calculations were used to assess the dimensional changes of the fuel particle layers and kernel, including the formation of an internal gap. The survivability of the UN TRISO particle was estimated depending on the strain behavior of the constituent materials at high fast fluence and burn-up. For nominal cases, internal gas pressure and representative thermal profiles across the kernel and layers were determined along with stress levels in the pyrolytic carbon (PyC) and silicon carbide (SiC) layers. These parameters were then used to evaluate fuel particle failure probabilities. Results of the study show that the survivability of UN TRISO fuel under LWR irradiation conditions can only be guaranteed if the kernel and PyC swelling rates remain limited at high fast fluence and burn-up. These material properties are unknown at the irradiation levels expected to be reached by UN TRISO fuel in LWRs. Therefore, more effort is needed to determine them and to conclude positively on the applicability of FCM fuel to LWRs.
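
    The kind of quantity such a fuel-performance evaluation turns on can be sketched with a back-of-the-envelope calculation (the thin-shell approximation, the Weibull parameters and all numbers are assumptions for illustration, not PARFUME's stress solution).

      # Back-of-the-envelope sketch: SiC-layer hoop stress from internal gas pressure and a
      # Weibull estimate of the layer failure probability (all values assumed).
      import math

      def hoop_stress(pressure_mpa, radius_um, thickness_um):
          """Thin-shell hoop stress (MPa) in a spherical layer: sigma = p*r / (2*t)."""
          return pressure_mpa * radius_um / (2.0 * thickness_um)

      def weibull_failure_probability(stress_mpa, sigma0_mpa=770.0, modulus=6.0):
          """P_f = 1 - exp(-(sigma/sigma0)^m); characteristic strength and modulus are assumed."""
          return 1.0 - math.exp(-((stress_mpa / sigma0_mpa) ** modulus))

      sigma = hoop_stress(pressure_mpa=25.0, radius_um=250.0, thickness_um=35.0)
      print(sigma, weibull_failure_probability(sigma))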

  9. Analytical model for effect of temperature variation on PSF consistency in wavefront coding infrared imaging system

    Science.gov (United States)

    Feng, Bin; Shi, Zelin; Zhang, Chengshuo; Xu, Baoshu; Zhang, Xiaodong

    2016-05-01

    The point spread function (PSF) inconsistency caused by temperature variation leads to artifacts in the decoded images of a wavefront coding infrared imaging system. Therefore, this paper proposes an analytical model for the effect of temperature variation on PSF consistency. In the proposed model, a formula for the thermal deformation of an optical phase mask is derived. This formula indicates that a cubic optical phase mask (CPM) is still cubic after thermal deformation. A proposed equivalent cubic phase mask (E-CPM) is a virtual, room-temperature lens which characterizes the optical effect of temperature variation on the CPM. Additionally, a method for calculating PSF consistency under temperature variation is presented. Numerical simulation illustrates the validity of the proposed model and some significant conclusions are drawn. For a given form parameter, the PSF consistency achieved by a Ge-material CPM is better than that achieved by a ZnSe-material CPM. The effect of the optical phase mask on PSF inconsistency is much weaker than that of the auxiliary lens group. A large form parameter of the CPM introduces large defocus-insensitive aberrations, which improves PSF consistency but degrades the room-temperature MTF.
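
    The statement that a thermally deformed CPM stays cubic suggests a simple way to sketch its optical effect (grid size, aperture, alpha and the 5% shift below are arbitrary illustration choices, not the paper's E-CPM parameters): fold the deformation into a modified cubic coefficient and compare the resulting PSFs.

      # Sketch of how a cubic phase mask shapes the PSF: pupil phase alpha*(x^3 + y^3),
      # PSF taken as the squared magnitude of the pupil's Fourier transform.
      import numpy as np

      n = 256
      x = np.linspace(-1.0, 1.0, n)
      X, Y = np.meshgrid(x, x)
      aperture = (X**2 + Y**2) <= 1.0

      def cpm_psf(alpha):
          """PSF of a clear circular pupil carrying a cubic phase alpha*(x^3 + y^3)."""
          pupil = aperture * np.exp(1j * alpha * (X**3 + Y**3))
          psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
          return psf / psf.sum()

      # A thermally deformed CPM stays cubic, so its effect can be folded into a changed alpha:
      psf_nominal = cpm_psf(alpha=30.0)
      psf_hot = cpm_psf(alpha=31.5)                # assumed 5% change standing in for deformation
      print(np.abs(psf_nominal - psf_hot).sum())   # crude PSF-consistency metric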

  10. Implementation of Lumped Plasticity Models and Developments in an Object Oriented Nonlinear Finite Element Code

    Science.gov (United States)

    Segura, Christopher L.

    Numerical simulation tools capable of modeling nonlinear material and geometric behavior are important to structural engineers concerned with approximating the strength and deformation capacity of a structure. While structures are typically designed to remain linear elastic when subjected to building code design loads, exceedance of the linear elastic range is often an important consideration, especially with regard to structural response during hazard-level events (e.g., earthquakes, hurricanes, floods), where collapse prevention is the primary goal. This thesis addresses developments made to Mercury, a nonlinear finite element program developed in MATLAB for numerical simulation and in C++ for real-time hybrid simulation. Developments include the addition of three new constitutive models to extend Mercury's lumped plasticity modeling capabilities, a constitutive driver tool for testing and implementing Mercury constitutive models, and Mercury pre- and post-processing tools. Mercury has been developed as a tool for transient analysis of distributed plasticity models, offering accurate nonlinear results on the material, element, and structural levels. When only structural-level response is desired (collapse prevention), obtaining material-level results leads to unnecessarily lengthy computational time. To address this issue in Mercury, lumped plasticity capabilities are developed by implementing two lumped plasticity flexural response constitutive models and a column shear failure constitutive model. The models are chosen for implementation to address two critical issues evident in structural testing: column shear failure and strength and stiffness degradation under reverse cyclic loading. These tools make it possible to model post-peak behavior, capture strength and stiffness degradation, and predict global collapse. During the implementation process, a need was identified to create a simple program, separate from Mercury, to simplify the process of

  11. Analysis of the technology acceptance model in examining hospital nurses' behavioral intentions toward the use of bar code medication administration.

    Science.gov (United States)

    Song, Lunar; Park, Byeonghwa; Oh, Kyeung Mi

    2015-04-01

    Serious medication errors continue to occur in hospitals, even though technology exists that could potentially eliminate them, such as bar code medication administration. Little is known about the degree to which the culture of patient safety is associated with behavioral intention to use bar code medication administration. Based on the Technology Acceptance Model, this study evaluated the relationships among patient safety culture, perceived usefulness, perceived ease of use, and behavioral intention to use bar code medication administration technology among nurses in hospitals. Cross-sectional surveys of a convenience sample of 163 nurses using bar code medication administration were conducted. Feedback and communication about errors had a significant positive impact in predicting perceived usefulness (β=.26). In the model predicting behavioral intention, age had a significant negative impact (β=-.17), and the model explained 24% of the variance in behavioral intention to use the technology.

  12. Application of flow network models of SINDA/FLUINT{sup TM} to a nuclear power plant system thermal hydraulic code

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Ji Bum [Institute for Advanced Engineering, Yongin (Korea, Republic of); Park, Jong Woon [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    In order to enhance the dynamic and interactive simulation capability of a system thermal hydraulic code for nuclear power plants, the applicability of the flow network models in SINDA/FLUINT{sup TM} has been tested by modeling the feedwater system and coupling it to DSNP, a system thermal hydraulic simulation code for a pressurized heavy water reactor. The feedwater system is selected since it is one of the most important balance-of-plant systems, with a potential to greatly affect the behavior of the nuclear steam supply system. The flow network model of this feedwater system consists of the condenser, condensate pumps, low- and high-pressure heaters, deaerator, feedwater pumps, and control valves. This complicated flow network is modeled and coupled to DSNP and tested for several normal and abnormal transient conditions such as turbine load maneuvering, turbine trip, and loss of class IV power. The results show reasonable behavior of the coupled code and also demonstrate good dynamic and interactive simulation capabilities for the several mild transient conditions. It has been found that coupling a system thermal hydraulic code with a flow network code is a proper way of upgrading the simulation capability of DSNP to a mature nuclear plant analyzer (NPA). 5 refs., 10 figs. (Author)

  13. Linear-Time Non-Malleable Codes in the Bit-Wise Independent Tampering Model

    DEFF Research Database (Denmark)

    Giacomelli, Irene; Döttling, Nico; Damgård, Ivan Bjerre;

    Non-malleable codes were introduced by Dziembowski et al. (ICS 2010) as coding schemes that protect a message against tampering attacks. Roughly speaking, a code is non-malleable if decoding an adversarially tampered encoding of a message m produces the original message m or a value m' (eventually abort) completely unrelated with m. It is known that non-malleability is possible only for restricted classes of tampering functions. Since their introduction, a long line of works has established feasibility results of non-malleable codes against different families of tampering functions. However, for many interesting families the challenge of finding "good" non-malleable codes remains open. In particular, we would like to have explicit constructions of non-malleable codes with high-rate and efficient encoding/decoding algorithms (i.e. low computational complexity). In this work we present two ...

  14. Genetic hotels for the standard genetic code: evolutionary analysis based upon novel three-dimensional algebraic models.

    Science.gov (United States)

    José, Marco V; Morgado, Eberto R; Govezensky, Tzipe

    2011-07-01

    Herein, we rigorously develop novel 3-dimensional algebraic models called Genetic Hotels of the Standard Genetic Code (SGC). We start by considering the primeval RNA genetic code which consists of the 16 codons of type RNY (purine-any base-pyrimidine). Using simple algebraic operations, we show how the RNA code could have evolved toward the current SGC via two different intermediate evolutionary stages called Extended RNA code type I and II. By rotations or translations of the subset RNY, we arrive at the SGC via the former (type I) or via the latter (type II), respectively. Biologically, the Extended RNA code type I, consists of all codons of the type RNY plus codons obtained by considering the RNA code but in the second (NYR type) and third (YRN type) reading frames. The Extended RNA code type II, comprises all codons of the type RNY plus codons that arise from transversions of the RNA code in the first (YNY type) and third (RNR) nucleotide bases. Since the dimensions of remarkable subsets of the Genetic Hotels are not necessarily integer numbers, we also introduce the concept of algebraic fractal dimension. A general decoding function which maps each codon to its corresponding amino acid or the stop signals is also derived. The Phenotypic Hotel of amino acids is also illustrated. The proposed evolutionary paths are discussed in terms of the existing theories of the evolution of the SGC. The adoption of 3-dimensional models of the Genetic and Phenotypic Hotels will facilitate the understanding of the biological properties of the SGC.
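
    The primeval RNY code referred to above is small enough to enumerate directly (a plain illustration using the standard genetic code; the algebraic rotations and translations of the paper are not reproduced here): sixteen codons of the form purine-any-pyrimidine, encoding eight amino acids.

      # Enumerate the 16 RNY codons (purine, any base, pyrimidine) and decode them with the
      # standard genetic code; only the 16 entries needed here are included in the table.
      from itertools import product

      PURINES, BASES, PYRIMIDINES = "AG", "ACGU", "CU"
      RNY_TO_AA = {
          "AAC": "Asn", "AAU": "Asn", "ACC": "Thr", "ACU": "Thr",
          "AGC": "Ser", "AGU": "Ser", "AUC": "Ile", "AUU": "Ile",
          "GAC": "Asp", "GAU": "Asp", "GCC": "Ala", "GCU": "Ala",
          "GGC": "Gly", "GGU": "Gly", "GUC": "Val", "GUU": "Val",
      }

      rny_codons = ["".join(c) for c in product(PURINES, BASES, PYRIMIDINES)]
      amino_acids = sorted({RNY_TO_AA[c] for c in rny_codons})
      print(len(rny_codons), amino_acids)   # 16 codons encoding 8 amino acids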

  15. Bulbar microcircuit model predicts connectivity and roles of interneurons in odor coding.

    Directory of Open Access Journals (Sweden)

    Aditya Gilra

    Stimulus encoding by primary sensory brain areas provides a data-rich context for understanding their circuit mechanisms. The vertebrate olfactory bulb is an input area having unusual two-layer dendro-dendritic connections whose roles in odor coding are unclear. To clarify these roles, we built a detailed compartmental model of the rat olfactory bulb that synthesizes a much wider range of experimental observations on bulbar physiology and response dynamics than has hitherto been modeled. We predict that superficial-layer inhibitory interneurons (periglomerular cells) linearize the input-output transformation of the principal neurons (mitral cells), unlike previous models of contrast enhancement. The linearization is required to replicate observed linear summation of mitral odor responses. Further, in our model, action-potentials back-propagate along lateral dendrites of mitral cells and activate deep-layer inhibitory interneurons (granule cells). Using this, we propose sparse, long-range inhibition between mitral cells, mediated by granule cells, to explain how the respiratory phases of odor responses of sister mitral cells can be sometimes decorrelated as observed, despite receiving similar receptor input. We also rule out some alternative mechanisms. In our mechanism, we predict that a few distant mitral cells receiving input from different receptors, inhibit sister mitral cells differentially, by activating disjoint subsets of granule cells. This differential inhibition is strong enough to decorrelate their firing rate phases, and not merely modulate their spike timing. Thus our well-constrained model suggests novel computational roles for the two most numerous classes of interneurons in the bulb.

  16. Bulbar microcircuit model predicts connectivity and roles of interneurons in odor coding.

    Science.gov (United States)

    Gilra, Aditya; Bhalla, Upinder S

    2015-01-01

    Stimulus encoding by primary sensory brain areas provides a data-rich context for understanding their circuit mechanisms. The vertebrate olfactory bulb is an input area having unusual two-layer dendro-dendritic connections whose roles in odor coding are unclear. To clarify these roles, we built a detailed compartmental model of the rat olfactory bulb that synthesizes a much wider range of experimental observations on bulbar physiology and response dynamics than has hitherto been modeled. We predict that superficial-layer inhibitory interneurons (periglomerular cells) linearize the input-output transformation of the principal neurons (mitral cells), unlike previous models of contrast enhancement. The linearization is required to replicate observed linear summation of mitral odor responses. Further, in our model, action-potentials back-propagate along lateral dendrites of mitral cells and activate deep-layer inhibitory interneurons (granule cells). Using this, we propose sparse, long-range inhibition between mitral cells, mediated by granule cells, to explain how the respiratory phases of odor responses of sister mitral c