HADES. A computer code for fast neutron cross section from the Optical Model
International Nuclear Information System (INIS)
Guasp, J.; Navarro, C.
1973-01-01
A FORTRAN V computer code for the UNIVAC 1108/6 using a local Optical Model with spin-orbit interaction is described. The code calculates fast neutron cross sections, angular distributions, and Legendre moments for heavy and intermediate spherical nuclei. It allows automatic variation of the potential parameters for fitting experimental data. (Author) 55 refs
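The automatic parameter variation described in the abstract can be pictured as a chi-square minimization over the potential parameters. The sketch below uses a toy two-parameter cross-section model and a plain grid search; the functional form, parameter names, and ranges are illustrative assumptions, not the actual optical-model calculation performed by HADES.

```python
def chi_square(params, data):
    """Chi-square between a toy cross-section model and data points.

    The model sigma(E) = v0 / (E + r0) is a stand-in for the real
    optical-model calculation, which the code performs internally.
    """
    v0, r0 = params
    return sum((sigma - v0 / (e + r0)) ** 2 for e, sigma in data)

def fit(data, v0_range, r0_range, steps=200):
    """Coarse grid search over the potential parameters, mimicking the
    automatic-variation loop described in the abstract."""
    best = None
    for i in range(steps):
        v0 = v0_range[0] + (v0_range[1] - v0_range[0]) * i / (steps - 1)
        for j in range(steps):
            r0 = r0_range[0] + (r0_range[1] - r0_range[0]) * j / (steps - 1)
            c2 = chi_square((v0, r0), data)
            if best is None or c2 < best[0]:
                best = (c2, v0, r0)
    return best

# Synthetic "measured" cross sections generated from v0=50, r0=2
data = [(e, 50.0 / (e + 2.0)) for e in (1.0, 2.0, 5.0, 10.0, 14.0)]
chi2, v0, r0 = fit(data, (10.0, 100.0), (0.5, 5.0))
```

A production fit would replace the grid search with a gradient or simplex minimizer, but the structure of the loop is the same.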
Recent Progress Validating the HADES Model of LLNL's HEAF MicroCT Measurements
Energy Technology Data Exchange (ETDEWEB)
White, W. T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bond, K. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lennox, K. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Aufderheide, M. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Seetho, I. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Roberson, G. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2014-07-17
This report compares recent HADES calculations of x-ray linear attenuation coefficients to previous MicroCT measurements made at Lawrence Livermore National Laboratory’s High Explosives Applications Facility (HEAF). The chief objective is to investigate what impact recent changes in HADES modeling have on validation results. We find that these changes have no obvious effect on the overall accuracy of the model. Detailed comparisons between recent and previous results are presented.
Studying Strangeness Production with HADES
Schuldes, Heidi
2018-02-01
The High-Acceptance DiElectron Spectrometer (HADES) operates in the 1 - 2A GeV energy regime in fixed-target experiments to explore baryon-rich strongly interacting matter in heavy-ion collisions at moderate temperatures with rare and penetrating probes. We present results on the production of strange hadrons below their respective NN threshold energies in Au+Au collisions at 1.23A GeV (√s_NN = 2.4 GeV). Special emphasis is put on the enhanced feed-down contribution of ϕ mesons to the inclusive K- yield and its implications for the measured spectral shape of the K-. Furthermore, we investigate global properties of the system, confronting the measured hadron yields and transverse-mass spectra with a Statistical Hadronization Model (SHM) and a blastwave parameterization, respectively. These results supplement the world data on chemical and kinetic freeze-out temperatures.
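The slope-based part of such a spectral analysis can be illustrated with a minimal sketch. Assuming a Boltzmann-like transverse-mass spectrum dN/(m_T² dm_T) ∝ exp(−m_T/T_eff), a common simplification of the blastwave fit mentioned above, the effective temperature follows from a log-linear fit; the T_eff value and m_T grid below are invented for illustration.

```python
import math

def effective_temperature(points):
    """Extract T_eff from a Boltzmann-like transverse-mass spectrum.

    Assumes yield(m_T) ∝ exp(-m_T / T_eff); a full blastwave fit would
    also model the radial-flow velocity profile.
    """
    # Linear regression of ln(yield) versus m_T: slope = -1 / T_eff
    n = len(points)
    xs = [mt for mt, y in points]
    ys = [math.log(y) for mt, y in points]
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    slope = num / den
    return -1.0 / slope

# Synthetic spectrum with T_eff = 0.084 GeV (an SIS18-like order of magnitude)
T_true = 0.084
spectrum = [(mt, math.exp(-mt / T_true)) for mt in (0.5, 0.6, 0.7, 0.8, 0.9)]
T_fit = effective_temperature(spectrum)
```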
Electromagnetic Calorimeter for HADES Experiment
Directory of Open Access Journals (Sweden)
Rodríguez-Ramos P.
2014-01-01
The electromagnetic calorimeter (ECAL) is being developed to complement the dilepton spectrometer HADES. ECAL will enable the HADES@FAIR experiment to measure neutral meson production in heavy-ion collisions in the energy range of 2-10A GeV at the beam of the future accelerator SIS100@FAIR. We report results of the latest beam test with quasi-monoenergetic photons, carried out at the MAMI facility at Johannes Gutenberg Universität Mainz.
New Mexico Univ., Albuquerque. American Indian Law Center.
The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…
International Nuclear Information System (INIS)
Anon.
1977-01-01
Progress in model and code development for reactor physics calculations is summarized. The codes included CINDER-10, PHROG, RAFFLE GAPP, DCFMR, RELAP/4, PARET, and KENO. Kinetics models for the PBF were developed.
International Nuclear Information System (INIS)
Lapidus, K.; Agakishiev, G.; Balanda, A.; Bassini, R.; Behnke, C.; Belyaev, A.; Blanco, A.; Böhmer, M.; Cabanelas, P.; Carolino, N.; Chen, J. C.; Chernenko, S.; Díaz, J.; Dybczak, A.
2012-01-01
After the completion of the experimental program at SIS18, the HADES setup will migrate to FAIR, where it will deliver high-quality data for heavy-ion collisions in a previously unexplored energy range of up to 8A GeV. In this contribution, we briefly present the physics case and the relevant detector characteristics, and discuss the recently completed upgrade of HADES.
Czech Academy of Sciences Publication Activity Database
Lapidus, K.; Agakishiev, G.; Balanda, A.; Fabbietti, L.; Finocchiaro, P.; Guber, F.; Karavicheva, T.; Kugler, Andrej; Markert, J.; Michel, J.; Pechenova, O.; Rustamov, A.; Sobolev, Yuri, G.; Strobele, H.; Tarantola, A.; Teilab, K.; Tlustý, Pavel; Troyan, A.Y.; Wagner, Vladimír; Zanevsky, Yu.
2012-01-01
Vol. 75, No. 5 (2012), pp. 589-593 ISSN 1063-7788 R&D Projects: GA MŠk LC07050; GA AV ČR IAA100480803 Institutional support: RVO:61389005 Keywords: dielectron spectrometer * nuclear matter * HADES Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 0.539, year: 2012
Searching a dark photon with HADES
Czech Academy of Sciences Publication Activity Database
Agakishiev, G.; Balanda, A.; Belver, D.; Belyaev, A.; Berger-Chen, J. C.; Blanco, A.; Böhmer, M.; Boyard, J. L.; Cabanelas, P.; Chernenko, S.; Dybczak, A.; Epple, E.; Fabbietti, L.; Fateev, O.; Krása, Antonín; Křížek, Filip; Kugler, Andrej; Sobolev, Yuri, G.; Tlustý, Pavel; Wagner, Vladimír
2014-01-01
Vol. 731, APR (2014), pp. 265-271 ISSN 0370-2693 R&D Projects: GA ČR GA13-06759S; GA MŠk LG14004 Institutional support: RVO:61389005 Keywords: HADES * dark photon * hidden sector * dark matter * rare eta decays Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 6.131, year: 2014
Studies on DANESS Code Modeling
International Nuclear Information System (INIS)
Jeong, Chang Joon
2009-09-01
The DANESS code modeling study has been performed. The DANESS code is widely used in dynamic fuel cycle analysis, and the Korea Atomic Energy Research Institute (KAERI) has used it for the Korean national nuclear fuel cycle scenario analysis. In this report, the important models, such as the Energy-demand Scenario Model, the New Reactor Capacity Decision Model, the Reactor and Fuel Cycle Facility History Model, and the Fuel Cycle Model, are investigated. Some models in the interface module were refined and inserted for the Korean nuclear fuel cycle model. Application studies have also been performed for GNEP cases and for US fast reactor scenarios with various conversion ratios.
Backtracking algorithm for lepton reconstruction with HADES
International Nuclear Information System (INIS)
Sellheim, P
2015-01-01
The High Acceptance Di-Electron Spectrometer (HADES) at the GSI Helmholtzzentrum für Schwerionenforschung investigates dilepton and strangeness production in elementary and heavy-ion collisions. In April-May 2012, HADES recorded 7 billion Au+Au events at a beam energy of 1.23 GeV/u, with the highest multiplicities measured so far. Track reconstruction and particle identification in this high-track-density environment are challenging. The most important detector component for lepton identification is the Ring Imaging Cherenkov detector; its main purpose is the separation of electrons and positrons from the large background of charged hadrons produced in heavy-ion collisions. A backtracking algorithm was developed in order to improve lepton identification. In this contribution we show the results of the algorithm compared to the currently applied method for e± identification. The efficiency and purity of a reconstructed e± sample are discussed as well. (paper)
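Efficiency and purity, the two figures of merit quoted for the reconstructed sample, have standard definitions that can be sketched directly; the track ids below are invented toy data, not anything from the HADES analysis.

```python
def efficiency_purity(true_leptons, selected):
    """Efficiency and purity of a lepton sample, the quantities compared
    between the backtracking algorithm and the standard identification.

    true_leptons: set of track ids that are genuine e+/-
    selected:     set of track ids accepted by the algorithm
    """
    tp = len(true_leptons & selected)      # correctly identified leptons
    efficiency = tp / len(true_leptons)    # fraction of real leptons kept
    purity = tp / len(selected)            # fraction of the sample that is real
    return efficiency, purity

# Toy example: 4 of 5 true leptons found, 2 hadrons misidentified
eff, pur = efficiency_purity({1, 2, 3, 4, 5}, {1, 2, 3, 4, 10, 11})
```

Tightening the selection typically raises purity at the cost of efficiency, which is the trade-off the contribution discusses.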
Dilepton spectroscopy with HADES at SIS
International Nuclear Information System (INIS)
Salabura, P.
1996-01-01
A High Acceptance DiElectron Spectrometer (HADES) has been proposed for the SIS accelerator at GSI in order to measure e+e- pairs produced in proton-, meson- and heavy-ion-induced collisions. HADES will be able to operate at the highest luminosities available at SIS in an environment of high hadron and photon background. The physics program includes studies of in-medium properties of vector mesons and electromagnetic form factors of hadrons. A Ring Imaging Cherenkov counter (RICH), arrays of pre-shower detectors and a TOF wall serve for electron identification. Particle momentum determination is achieved with a magnetic spectrometer consisting of 6 superconducting coils in toroidal geometry and 2 sets of mini drift chambers for tracking. The detector features have been investigated in detailed simulations. The anticipated mass resolution and geometrical acceptance of the detector amount to 1% and 40%, respectively, for pairs from ρ/ω decay. First experiments are planned for 1998. (author)
Investigating hadronic resonances in pp interactions with HADES
Directory of Open Access Journals (Sweden)
Przygoda Witold
2015-01-01
In this paper we report on the investigation of baryonic resonance production in proton-proton collisions at kinetic energies of 1.25 GeV and 3.5 GeV, based on data measured with HADES. The exclusive channels npπ+ and ppπ0, as well as ppe+e−, were studied simultaneously in the framework of a one-boson exchange model. The resonance cross sections were determined from the one-pion channels for Δ(1232) and N(1440) (1.25 GeV), as well as for further Δ and N* resonances up to 2 GeV/c2 for the 3.5 GeV data. The data at 1.25 GeV were also analysed within a partial wave analysis framework, together with several other measurements at lower energies. The obtained solutions describe the evolution of resonance production with the beam energy, showing a sizeable non-resonant contribution, though the Δ(1232)P33 contribution still dominates. In the case of the 3.5 GeV data, the study of the ppe+e− channel gave insight into the Dalitz decays of the baryon resonances and, in particular, into the electromagnetic transition form factors in the time-like region. We show that the assumption of constant electromagnetic transition form factors leads to an underestimation of the yield in the dielectron invariant-mass spectrum below the vector-meson pole. On the other hand, a comparison with various transport models shows the important role of intermediate ρ production, though with a large model dependency. The exclusive channel analysis done by the HADES collaboration provides new stringent constraints on the parameterizations used in the models.
System Code Models and Capabilities
International Nuclear Information System (INIS)
Bestion, D.
2008-01-01
System thermalhydraulic codes such as RELAP, TRACE, CATHARE or ATHLET are now commonly used for reactor transient simulations. The whole methodology of code development is described, including the derivation of the system of equations, the analysis of experimental data to obtain closure relations, and the validation process. The characteristics of the models are briefly presented, starting with the basic assumptions, the system of equations and the derivation of closure relationships. Extensive work was devoted during the last three decades to the improvement and validation of these models, which resulted in some homogenisation of the different codes although they were separately developed. The so-called two-fluid model is the common basis of these codes, and it is shown how it can describe both thermal and mechanical non-equilibrium. A review of some important physical models illustrates the main capabilities and limitations of system codes. Attention is drawn to the role of flow regime maps, to the various methods for developing closure laws, and to the role of interfacial area and turbulence in interfacial and wall transfers. More details are given for interfacial friction laws and their relation with drift-flux models. Prediction of choked flow and CCFL (counter-current flow limitation) is also addressed. Based on some limitations of the present generation of codes, perspectives for the future are drawn.
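The relation between interfacial friction laws and drift-flux models mentioned above can be illustrated with the standard drift-flux closure for void fraction; the distribution parameter and drift velocity below are generic illustrative values, not those of any of the cited codes.

```python
def void_fraction(j_g, j_l, c0=1.2, v_gj=0.2):
    """Void fraction from the drift-flux relation

        alpha = j_g / (C0 * j + Vgj)

    where j_g, j_l are the gas and liquid volumetric fluxes (m/s),
    C0 is the distribution parameter and Vgj the drift velocity (m/s).
    C0 = 1.2 and Vgj = 0.2 m/s are illustrative bubbly-flow-like values.
    """
    j = j_g + j_l                 # total volumetric flux (m/s)
    return j_g / (c0 * j + v_gj)

alpha = void_fraction(j_g=0.5, j_l=1.0)
```

In a system code this closure (or an equivalent interfacial friction law) feeds back into the momentum equations of the two-fluid model.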
Cheetah: Starspot modeling code
Walkowicz, Lucianne; Thomas, Michael; Finkestein, Adam
2014-12-01
Cheetah models starspots in photometric data by calculating the modulation they induce in a light curve. The main parameters of the program are the linear and quadratic limb-darkening coefficients, the stellar inclination, the spot locations and sizes, and the intensity ratio of the spots to the stellar photosphere. Cheetah uses uniform spot contrast and the minimum number of spots needed to produce a good fit, and ignores bright regions for the sake of simplicity.
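A minimal sketch of the spot-modulation idea follows. It is not Cheetah's actual implementation: limb darkening (which Cheetah parameterizes with linear and quadratic coefficients) is omitted, and a small-spot foreshortening approximation is used.

```python
import math

def lightcurve(phases, spots, inclination=math.radians(70.0)):
    """Toy starspot modulation: each spot dims the star by its fractional
    area times (1 - contrast), weighted by foreshortening when the spot
    is on the visible hemisphere.

    spots: list of (latitude_rad, longitude_rad, area_fraction, contrast)
    """
    flux = []
    for phase in phases:
        dip = 0.0
        for lat, lon, area, contrast in spots:
            # Cosine of the angle between the spot normal and the observer
            mu = (math.sin(inclination) * math.cos(lat)
                  * math.cos(lon + 2.0 * math.pi * phase)
                  + math.cos(inclination) * math.sin(lat))
            if mu > 0.0:                      # spot faces the observer
                dip += area * (1.0 - contrast) * mu
        flux.append(1.0 - dip)
    return flux

# One 5%-area spot at 30 deg latitude with contrast 0.7
phases = [i / 100.0 for i in range(100)]
curve = lightcurve(phases, spots=[(math.radians(30.0), 0.0, 0.05, 0.7)])
```

The dip repeats once per rotation, which is why spot longitudes and sizes can be inferred from the light-curve shape.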
Aion, Eros e Hades nei frammenti caldaici
Directory of Open Access Journals (Sweden)
Silvia Lenzi
2006-01-01
The essay deals with some aspects of the knotty question concerning the presence of Aion, Eros and Hades in the variegated universe of the Chaldean Oracles, traditionally attributed to a collaboration between two “Hellenized Magi” of the second c. AD: Julian “the Chaldean” and his son Julian “the Theurgist” (even if the theurgical expertise of the elder Julian, the repository of a more ancient Chaldean tradition, might not involve a direct eastern origin but only his knowledge of “Chaldean” sciences such as astrology and divination). Aion, who is examined here in terms of his relationship with Chronos, as a more or less autonomous divinity, is mentioned only in the context of some fragments, but his name never appears in the oracular verses. Eros (belonging also to the triad of fr. 46, along with “Faith” and “Truth”) stands for an independent cosmological principle whose task is to preserve the universe and the harmony of the cosmos. The Chaldean Oracles never make any explicit mention of Hades, who is nevertheless cited by the Byzantine scholar Psellos as an autonomous divinity of the Chaldeans.
Pump Component Model in SPACE Code
International Nuclear Information System (INIS)
Kim, Byoung Jae; Kim, Kyoung Doo
2010-08-01
This technical report describes the pump component model in the SPACE code. A literature survey was made of pump models in existing system codes. The models embedded in the SPACE code were examined to check for conflicts with intellectual property rights. Design specifications, computer coding implementation, and test results are included in this report.
The high-acceptance dielectron spectrometer HADES
International Nuclear Information System (INIS)
Agakichiev, G.; Destefanis, M.; Gilardi, C.; Kirschner, D.; Kuehn, W.; Lange, J.S.; Lehnert, J.; Lichtblau, C.; Lins, E.; Metag, V.; Mishra, D.; Novotny, R.; Pechenov, V.; Pechenova, O.; Perez Cavalcanti, T.; Petri, M.; Ritman, J.; Salz, C.; Schaefer, D.; Skoda, M.; Spataro, S.; Spruck, B.; Toia, A.; Agodi, C.; Coniglione, R.; Cosentino, L.; Finocchiaro, P.; Maiolino, C.; Piattelli, P.; Sapienza, P.; Vassiliev, D.; Alvarez-Pol, H.; Belver, D.; Cabanelas, P.; Castro, E.; Duran, I.; Fernandez, C.; Fuentes, B.; Garzon, J.A.; Kurtukian-Nieto, T.; Rodriguez-Prieto, G.; Sabin-Fernandez, J.; Sanchez, M.; Vazquez, A.; Atkin, E.; Volkov, Y.; Badura, E.; Bertini, D.; Bielcik, J.; Bokemeyer, H.; Dahlinger, M.; Daues, H.W.; Galatyuk, T.; Garabatos, C.; Gonzalez-Diaz, D.; Hehner, J.; Heinz, T.; Hoffmann, J.; Holzmann, R.; Koenig, I.; Koenig, W.; Kolb, B.W.; Kopf, U.; Lang, S.; Leinberger, U.; Magestro, D.; Muench, M.; Niebur, W.; Ott, W.; Pietraszko, J.; Rustamov, A.; Schicker, R.M.; Schoen, H.; Schoen, W.; Schroeder, C.; Schwab, E.; Senger, P.; Simon, R.S.; Stelzer, H.; Traxler, M.; Yurevich, S.; Zovinec, D.; Zumbruch, P.; Balanda, A.; Kozuch, A.; Przygoda, W.; Bassi, A.; Bassini, R.; Boiano, C.; Bartolotti, A.; Brambilla, S.; Bellia, G.; Migneco, E.; Belyaev, A.V.; Chepurnov, V.; Chernenko, S.; Fateev, O.V.; Ierusalimov, A.P.; Smykov, L.; Troyan, A.Yu.; Zanevsky, Y.V.; Benovic, M.; Hlavac, S.; Turzo, I.; Boehmer, M.; Christ, T.; Eberl, T.; Fabbietti, L.; Friese, J.; Gernhaeuser, R.; Gilg, H.; Homolka, J.; Jurkovic, M.; Kastenmueller, A.; Kienle, P.; Koerner, H.J.; Kruecken, R.; Maier, L.; Maier-Komor, P.; Sailer, B.; Schroeder, S.; Ulrich, A.; Wallner, C.; Weber, M.; Wieser, J.; Winkler, S.; Zeitelhack, K.; Boyard, J.L.; Genolini, B.; Hennino, T.; Jourdain, J.C.; Moriniere, E.; Pouthas, J.; Ramstein, B.; Rosier, P.; Roy-Stephan, M.; Sudol, M.; Braun-Munzinger, P.; Diaz, J.; Dohrmann, F.; Dressler, R.; Enghardt, W.; Heidel, K.; Hutsch, J.; Kanaki, K.; Kotte, R.; Naumann, L.; 
Sobiella, M.; Wuestenfeld, J.; Zhou, P.; Dybczak, A.; Jaskula, M.; Kajetanowicz, M.; Kidon, L.; Korcyl, K.; Kulessa, R.; Malarz, A.; Michalska, B.; Otwinowski, J.; Ploskon, M.; Prokopowicz, W.; Salabura, P.; Szczybura, M.; Trebacz, R.; Walus, W.; Wisniowski, M.; Wojcik, T.; Froehlich, I.; Lippmann, C.; Lorenz, M.; Markert, J.; Michel, J.; Muentz, C.; Pachmayer, Y.C.; Rosenkranz, K.; Stroebele, H.; Sturm, C.; Tarantola, A.; Teilab, K.; Wang, Y.; Zentek, A.; Golubeva, M.; Guber, F.; Ivashkin, A.; Karavicheva, T.; Kurepin, A.; Lapidus, K.; Reshetin, A.; Sadovsky, A.; Shileev, K.; Tiflov, V.; Grosse, E.; Kaempfer, B.; Iori, I.; Krizek, F.; Kugler, A.; Marek, T.; Novotny, J.; Pleskac, R.; Pospisil, V.; Sobolev, Yu.G.; Suk, M.; Taranenko, A.; Tikhonov, A.; Tlusty, P.; Wagner, V.; Mousa, J.; Parpottas, Y.; Tsertos, H.; Nekhaev, A.; Smolyankin, V.; Palka, M.; Roche, G.; Schmah, A.; Stroth, J.
2009-01-01
HADES is a versatile magnetic spectrometer aimed at studying dielectron production in pion-, proton- and heavy-ion-induced collisions. Its main features include a ring-imaging gas Cherenkov detector for electron-hadron discrimination, a tracking system consisting of a set of 6 superconducting coils producing a toroidal field and drift chambers, and a multiplicity and electron trigger array for additional electron-hadron discrimination and event characterization. A two-stage trigger system enhances events containing electrons. The physics program is focused on the investigation of hadron properties in nuclei and in hot and dense hadronic matter. The detector system is characterized by an 85% azimuthal coverage over a polar angle interval from 18° to 85°, a single-electron efficiency of 50% and a vector-meson mass resolution of 2.5%. Identification of pions, kaons and protons is achieved by combining time-of-flight and energy-loss measurements over a large momentum range (0.1 < p < 1.0 GeV/c). This paper describes the main features and the performance of the detector system. (orig.)
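The combined time-of-flight and momentum measurement rests on the kinematic relation m = p·sqrt(1/β² − 1): once the spectrometer gives p and the TOF wall gives β, the mass separates pions, kaons and protons. A minimal sketch, with rounded PDG masses:

```python
import math

def reconstructed_mass(p, beta):
    """Particle mass (GeV/c^2) from momentum p (GeV/c) and velocity beta,
    the relation underlying time-of-flight particle identification:
    m = p * sqrt(1/beta^2 - 1).
    """
    return p * math.sqrt(1.0 / beta**2 - 1.0)

def beta_of(m, p):
    """Velocity of a particle of mass m at momentum p (relativistic)."""
    return p / math.sqrt(p**2 + m**2)

M_PION, M_KAON, M_PROTON = 0.1396, 0.4937, 0.9383  # GeV/c^2 (rounded)

# At p = 0.5 GeV/c the three species have clearly distinct velocities
p = 0.5
masses = [reconstructed_mass(p, beta_of(m, p))
          for m in (M_PION, M_KAON, M_PROTON)]
```

In practice the separation is limited by the time and momentum resolution, which is why the energy-loss measurement is combined with TOF at higher momenta.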
Searching a dark photon with HADES
Energy Technology Data Exchange (ETDEWEB)
Agakishiev, G. [Joint Institute of Nuclear Research, 141980 Dubna (Russian Federation); Balanda, A. [Smoluchowski Institute of Physics, Jagiellonian University of Cracow, 30-059 Kraków (Poland); Belver, D. [LabCAF. F. Física, Univ. de Santiago de Compostela, 15706 Santiago de Compostela (Spain); Belyaev, A. [Joint Institute of Nuclear Research, 141980 Dubna (Russian Federation); Berger-Chen, J.C. [Excellence Cluster ‘Origin and Structure of the Universe’, 85748 Garching (Germany); Blanco, A. [LIP – Laboratório de Instrumentação e Física Experimental de Partículas, 3004-516 Coimbra (Portugal); Böhmer, M. [Physik Department E12, Technische Universität München, 85748 Garching (Germany); Boyard, J.L. [Institut de Physique Nucléaire (UMR 8608), CNRS/IN2P3 – Université Paris Sud, F-91406 Orsay Cedex (France); Cabanelas, P. [LabCAF. F. Física, Univ. de Santiago de Compostela, 15706 Santiago de Compostela (Spain); Chernenko, S. [Joint Institute of Nuclear Research, 141980 Dubna (Russian Federation); Dybczak, A. [Smoluchowski Institute of Physics, Jagiellonian University of Cracow, 30-059 Kraków (Poland); Epple, E.; Fabbietti, L. [Excellence Cluster ‘Origin and Structure of the Universe’, 85748 Garching (Germany); Fateev, O. [Joint Institute of Nuclear Research, 141980 Dubna (Russian Federation); Finocchiaro, P. [Istituto Nazionale di Fisica Nucleare – Laboratori Nazionali del Sud, 95125 Catania (Italy); and others
2014-04-04
We present a search for the e+e− decay of a hypothetical dark photon, also named the U vector boson, in inclusive dielectron spectra measured by HADES in the p (3.5 GeV) + p, Nb reactions, as well as the Ar (1.756 GeV/u) + KCl reaction. An upper limit on the kinetic mixing parameter squared ϵ² at 90% CL has been obtained for the mass range M_U = 0.02–0.55 GeV/c² and is compared with the present world data set. For masses of 0.03–0.1 GeV/c², the limit has been lowered with respect to previous results, now excluding a large part of the parameter region favored by the muon g−2 anomaly. Furthermore, an improved upper limit of 2.3×10⁻⁶ has been set on the branching ratio of the helicity-suppressed direct decay of the eta meson, η→e+e−, at 90% CL.
Impacts of Model Building Energy Codes
Energy Technology Data Exchange (ETDEWEB)
Athalye, Rahul A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sivaraman, Deepak [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elliott, Douglas B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Bing [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bartlett, Rosemarie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2016-10-31
The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO₂ emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states whose codes are fundamentally different from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and all are considered in this assessment.
Dual-code quantum computation model
Choi, Byung-Soo
2015-08-01
In this work, we propose the dual-code quantum computation model, a fault-tolerant quantum computation scheme which alternates between two different quantum error-correction codes. Since the two chosen codes have different sets of transversal gates, we can implement a universal set of gates transversally, thereby reducing the overall cost. We use code teleportation to convert between quantum states in different codes. The overall cost is decreased if code teleportation requires fewer resources than the fault-tolerant implementation of the non-transversal gate in a specific code. To analyze the cost reduction, we investigate two cases with different base codes, namely the Steane and Bacon-Shor codes. For the Steane code, neither the proposed dual-code model nor any variation of it achieves a cost reduction, since the conventional approach is already simple. For the Bacon-Shor code, the three proposed variations of the dual-code model reduce the overall cost. However, as the encoding level increases, the cost reduction decreases and eventually becomes negative. Therefore, the proposed dual-code model is advantageous only when the encoding level is low and the cost of the non-transversal gate is relatively high.
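The cost comparison at the heart of the model can be sketched with an abstract resource count: the dual-code approach wins when a transversal gate plus two code teleportations is cheaper than the non-transversal (e.g. distilled) implementation in a single code. All per-gate costs below are invented for illustration and do not come from the paper's analysis, which tracks resources per encoding level.

```python
def circuit_cost(gate_counts, costs):
    """Total abstract resource cost of a circuit: sum over gate types of
    (number of gates) * (per-gate cost)."""
    return sum(gate_counts[g] * costs[g] for g in gate_counts)

# Hypothetical per-gate costs (abstract units), chosen for illustration only:
# in a single code the T gate needs costly state distillation; in the
# dual-code scheme it is transversal but needs two code teleportations.
single_code = {"clifford": 1, "t": 50}
dual_code   = {"clifford": 1, "t": 10 + 8}   # transversal T + teleport there and back

gates = {"clifford": 200, "t": 30}
saving = circuit_cost(gates, single_code) - circuit_cost(gates, dual_code)
```

With these toy numbers the dual-code scheme saves resources; the paper's point is that as the encoding level grows, teleportation itself becomes expensive and the saving can turn negative.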
Dilepton Production at SIS Energies Studied with HADES
Czech Academy of Sciences Publication Activity Database
Holzmann, R.; Balanda, A.; Belver, D.; Belyaev, A. V.; Blanco, A.; Böhmer, M.; Boyard, J. L.; Braun-Munzinger, P.; Cabanelas, P.; Castro, E.; Chernenko, S.; Díaz, J.; Dybczak, A.; Epple, E.; Fabbietti, L.; Fateev, O.V.; Finocchiaro, P.; Fonte, P.; Friese, J.; Froehlich, I.; Galatyuk, T.; Garzón, J.A.; Gernhäuser, R.; Gil, A.; Golubeva, M.; Gonzalez-Diaz, D.; Guber, F.; Hennino, T.; Huck, P.; Ierusalimov, A.P.; Iori, I.; Ivashkin, A.; Jurkovic, M.; Kampfer, B.; Karavicheva, T.; Koenig, I.; Koenig, W.; Kolb, B.W.; Kopp, A.; Kotte, R.; Kozuch, A.; Krása, Antonín; Křížek, Filip; Krücken, R.; Kuhn, W.; Kugler, Andrej; Kurepin, A.; Khlitz, P.K.; Lamas-Valverde, J.; Lang, S.; Lange, J.S.; Lapidus, K.; Liu, T.; Lopes, L.; Lorenz, M.; Maier, L.; Mangiarotti, A.; Markert, J.; Metag, V.; Michalska, B.; Michel, J.; Moriniére, E.; Mousa, J.; Muntz, C.; Naumann, L.; Pachmayer, Y.C.; Palka, M.; Parpottas, Y.; Pechenov, V.; Pietraszko, J.; Przygoda, W.; Ramstein, B.; Reshetin, A.; Roskoss, J.; Rustamov, A.; Sadovsky, A.; Salabura, P.; Schmah, A.; Siebenson, J.; Simon, R. S.; Sobolev, Yuri, G.; Spruck, B.; Strobele, H.; Stroth, J.; Sturm, C.; Sudol, M.; Tarantola, A.; Teilab, K.; Tlustý, Pavel; Traxler, M.; Trebacz, R.; Tsertos, H.; Veretenkin, I.; Wagner, Vladimír; Weber, M.; Wisniowski, M.; Wüstenfeld, J.; Yurevich, S.; Zanevsky, Y.V.
2010-01-01
Vol. 834, No. 1-4 (2010), pp. 298C-302C ISSN 0375-9474. [10th International Conference on Nucleus-Nucleus Collisions (NN2009). Beijing, 16.08.2009-21.08.2009] Institutional research plan: CEZ:AV0Z10480505 Keywords: COLLISIONS * DLS * HADES Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 1.986, year: 2010
Dilepton production in pp and CC collisions with HADES
Czech Academy of Sciences Publication Activity Database
Frohlich, I.; Křížek, Filip; Kugler, Andrej; Tlustý, Pavel; Wagner, Vladimír
2007-01-01
Vol. 31, No. 4 (2007), pp. 831-835 ISSN 1434-6001 R&D Projects: GA AV ČR IAA1048304; GA MŠk LC07050 Institutional research plan: CEZ:AV0Z10480505 Keywords: HADES Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 1.801, year: 2007
Test of Gb Ethernet with FPGA for HADES upgrade
Energy Technology Data Exchange (ETDEWEB)
Gilardi, C. [II. Physikalisches Inst., Giessen Univ. (Germany)
2007-07-01
Within the HADES experiment, we are investigating a trigger upgrade in order to run heavier systems (Au + Au). We investigate Gigabit Ethernet transfers with Xilinx Virtex II FPGA on the commercial board Celoxica RC300E. We implement the transfer protocols (UDP, ICMP, ARP) with Handel-C. First results of bandwidth and latency will be presented. (orig.)
Fatigue modelling according to the JCSS Probabilistic model code
Vrouwenvelder, A.C.W.M.
2007-01-01
The Joint Committee on Structural Safety is working on a Model Code for full probabilistic design. The code consists of three major parts: Basis of Design, Load Models, and Models for Material and Structural Properties. The code is intended as the operational counterpart of codes like ISO,
Coding with partially hidden Markov models
DEFF Research Database (Denmark)
Forchhammer, Søren; Rissanen, J.
1995-01-01
Partially hidden Markov models (PHMM) are introduced. They are a variation of hidden Markov models (HMM) combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general… The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt…
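The combination with arithmetic coding can be sketched by accumulating the ideal code length −log₂ p(symbol | context) under an adaptive model. The sketch below conditions only on observed past symbols, i.e. it omits the hidden state that distinguishes a PHMM, so it is a simplified stand-in for the full model.

```python
import math

def code_length(bits, context_order=2):
    """Ideal code length (in bits) of a binary sequence under an adaptive
    model conditioning each symbol on the previous `context_order`
    observed symbols (a PHMM would additionally condition on a hidden
    state).  The returned value is what an ideal arithmetic coder,
    driven by these probabilities, would spend.
    """
    counts = {}                                # context -> [zeros, ones]
    total = 0.0
    for i, b in enumerate(bits):
        ctx = tuple(bits[max(0, i - context_order):i])
        zeros, ones = counts.get(ctx, [1, 1])  # Laplace-smoothed counts
        p = (ones if b else zeros) / (zeros + ones)
        total += -math.log2(p)                 # ideal arithmetic-coding cost
        counts.setdefault(ctx, [1, 1])[b] += 1 # adaptive count update
    return total

# A highly repetitive bi-level-like sequence compresses well below 1 bit/symbol
bits = [0, 1] * 64
rate = code_length(bits) / len(bits)
```

For image coding the context would be a template of previously coded pixels rather than a 1-D window, but the probability bookkeeping is the same.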
A COME and KISS QDC read-out scheme for the HADES Electromagnetic Calorimeter
Energy Technology Data Exchange (ETDEWEB)
Rost, Adrian [Technische Universitaet Darmstadt, Darmstadt (Germany); Collaboration: HADES-Collaboration
2014-07-01
At the future FAIR facility in Darmstadt, the High Acceptance Di-Electron Spectrometer (HADES) will continue its physics program. For beam energies between 2 and 40 GeV/u the database for pion and eta production is not complete; interpretation of future di-electron data would therefore have to depend on interpolations or on theoretical models. The addition of an electromagnetic calorimeter to HADES would allow such measurements and would additionally improve the electron-to-pion separation at large momenta, p > 0.4 GeV/c. Furthermore, photon measurements would be of large interest for the HADES strangeness program. An 8-channel QDC front-end electronics (FEE) board was developed for the signals of photomultipliers (PMTs) from lead-glass calorimeter modules. The measurement principle is to convert the charge of the PMT signal into a pulse whose width encodes the charge. The width of the pulse is then measured by the already well-established TRBv3 platform. To that end, simple electronics are used, with the complex operations hidden inside a commercial FPGA. In this contribution the current status and future perspectives of this read-out concept are shown.
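The charge-to-width measurement principle can be sketched as a pair of linear conversions: the front-end stretches the pulse proportionally to the integrated charge, and the TDC (here, the TRBv3) measures the resulting width. The pedestal and gain values below are made up for illustration; the real FEE calibration is per-channel.

```python
def pulse_width_ns(charge_pc, pedestal_pc=2.0, ns_per_pc=5.0):
    """Charge-to-width conversion of the QDC front-end: the PMT charge
    (pC) is encoded in the duration (ns) of the output pulse.
    Pedestal and gain are illustrative, not measured values.
    """
    return (pedestal_pc + charge_pc) * ns_per_pc

def charge_from_width(width_ns, pedestal_pc=2.0, ns_per_pc=5.0):
    """Invert a TDC-measured width back to the deposited charge (pC)."""
    return width_ns / ns_per_pc - pedestal_pc

# Round trip: the charge survives the conversion to a time measurement
q = 37.5                                   # pC deposited by the PMT pulse
q_back = charge_from_width(pulse_width_ns(q))
```

The attraction of the scheme is that a precise time measurement (which the TRBv3 TDC already provides) replaces a dedicated ADC per channel.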
Genetic coding and gene expression - new Quadruplet genetic coding model
Shankar Singh, Rama
2012-07-01
The successful demonstration of the Human Genome Project has opened the door not only to developing personalized medicine and cures for genetic diseases, but may also answer the complex and difficult question of the origin of life; it may make the 21st century a century of the biological sciences as well. According to the central dogma of biology, genetic codons in conjunction with tRNA play a key role in translating RNA bases into the sequence of amino acids of a synthesized protein. This is the most critical step in synthesizing the right protein, needed for personalized medicine and for curing genetic diseases. So far, only triplet codons, involving three bases of RNA transcribed from DNA bases, have been used. Since this approach has several inconsistencies and limitations, even the promise of personalized medicine has not been realized. The new quadruplet genetic coding model proposed and developed here involves all four RNA bases, which in conjunction with tRNA will synthesize the right protein. The transcription and translation process used will be the same, but the quadruplet codons will help overcome most of the inconsistencies and limitations of the triplet codes. Details of this new quadruplet genetic coding model and its potential applications, including its relevance to the origin of life, will be presented.
Resonance production and decay in pion induced collisions with HADES
Directory of Open Access Journals (Sweden)
Scozzi Federico
2017-01-01
The main goal of the High Acceptance Di-Electron experiment (HADES) at GSI is the study of hadronic matter in the 1-3.5 GeV/nucleon incident energy range. HADES results on e+e− production in proton-nucleus reactions and in nucleus-nucleus collisions demonstrate a strong enhancement of the dilepton yield relative to a reference spectrum obtained from elementary nucleon-nucleon reactions. These observations point to a strong modification of the in-medium ρ spectral function driven by the coupling of the ρ to baryon resonance-hole states. However, to scrutinize this conjecture, a precise study of the role of the ρ meson in electromagnetic baryon-resonance transitions in the time-like region is mandatory. In order to perform this study, HADES has started a dedicated pion-nucleon programme. For the first time, a combined measurement of hadronic and dielectron final states has been performed in π−-N reactions at four different pion beam momenta (0.656, 0.69, 0.748 and 0.8 GeV/c). First results on exclusive channels with one pion (π−p) and two pions (nπ+π−, pπ−π0) in the final state, which are currently being analysed within the partial wave analysis (PWA) framework of the Bonn-Gatchina group, are discussed. Finally, first results of the dielectron analysis are shown.
Improvement of MARS code reflood model
International Nuclear Information System (INIS)
Hwang, Moonkyu; Chung, Bub-Dong
2011-01-01
A specifically designed heat transfer model for the reflood process, which normally occurs at low flow and low pressure, was originally incorporated in the MARS code. The model is essentially identical to that of the RELAP5/MOD3.3 code. The model, however, is known to under-estimate the peak cladding temperature (PCT), with an earlier turn-over. In this study, the original MARS code reflood model is improved. Based on extensive sensitivity studies of both the hydraulic and wall heat transfer models, it is found that dispersed flow film boiling (DFFB) wall heat transfer is the most influential process determining the PCT, whereas the interfacial drag model most affects the quenching time through the liquid carryover phenomenon. The model proposed by Bajorek and Young is incorporated for the DFFB wall heat transfer. Both space-grid and droplet-enhancement models are incorporated. Inverted annular film boiling (IAFB) is modeled using the original PSI model of the code. The flow transition between DFFB and IAFB is modeled using the TRACE code interpolation. A gas velocity threshold is also added to limit the top-down quenching effect. Assessment calculations are performed with the original and modified MARS codes for the Flecht-Seaset and RBHT tests. Improvements are observed in the PCT and quenching time predictions in the Flecht-Seaset assessment. In the case of the RBHT assessment, the improvement over the original MARS code is found to be marginal. A space-grid effect, however, is clearly seen with the modified version of the MARS code. (author)
Diagnosis code assignment: models and evaluation metrics.
Perotte, Adler; Pivovarov, Rimma; Natarajan, Karthik; Weiskopf, Nicole; Wood, Frank; Elhadad, Noémie
2014-01-01
The volume of healthcare data is growing rapidly with the adoption of health information technology. We focus on automated ICD9 code assignment from discharge summary content and on methods for evaluating such assignments. We study ICD9 diagnosis codes and discharge summaries from the publicly available Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC II) repository. We experiment with two coding approaches: one that treats each ICD9 code independently of the others (flat classifier), and one that leverages the hierarchical nature of ICD9 codes in its modeling (hierarchy-based classifier). We propose novel evaluation metrics, which reflect the distances between gold-standard and predicted codes and their locations in the ICD9 tree. Experimental setup, code for modeling, and evaluation scripts are made available to the research community. The hierarchy-based classifier outperforms the flat classifier, with F-measures of 39.5% and 27.6%, respectively, when trained on 20,533 documents and tested on 2,282 documents. While recall is improved at the expense of precision, our novel evaluation metrics show a more refined assessment: for instance, the hierarchy-based classifier identifies the correct sub-tree of gold-standard codes more often than the flat classifier. Error analysis reveals that gold-standard codes are not perfect, and as such the recall and precision are likely underestimated. Hierarchy-based classification yields better ICD9 coding than flat classification for MIMIC patients. Automated ICD9 coding is an example of a task for which data and tools can be shared and for which the research community can work together to build on shared models and advance the state of the art.
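The hierarchy-aware evaluation idea described above can be illustrated with a toy sketch. The tiny code tree, the example codes, and the scoring rule below are hypothetical, chosen only to show how credit can decay with distance in an ICD9-like tree; they are not the paper's actual metric definitions.

```python
# Toy sketch of hierarchy-aware evaluation for ICD9-style codes.
# The tree, codes, and scoring rule are illustrative, not the paper's metrics.

# parent map for a tiny hypothetical ICD9-like tree: root -> chapters -> codes
PARENT = {"250": "endocrine", "250.0": "250", "250.1": "250",
          "401": "circulatory", "401.9": "401",
          "endocrine": "root", "circulatory": "root", "root": None}

def ancestors(code):
    """Return the path from a code up to the root (inclusive)."""
    path = []
    while code is not None:
        path.append(code)
        code = PARENT[code]
    return path

def tree_distance(a, b):
    """Number of edges between two nodes via their lowest common ancestor."""
    pa, pb = ancestors(a), ancestors(b)
    common = set(pa) & set(pb)
    lca = min(common, key=lambda n: pa.index(n) + pb.index(n))
    return pa.index(lca) + pb.index(lca)

def hierarchy_score(gold, predicted):
    """Average closeness of each gold code to its nearest predicted code:
    1.0 for an exact match, decaying with tree distance."""
    return sum(1.0 / (1 + min(tree_distance(g, p) for p in predicted))
               for g in gold) / len(gold)
```

Under this toy rule, predicting a sibling code (e.g. 250.1 for 250.0) still earns partial credit, whereas a flat 0/1 metric would score it the same as a code from a different chapter.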
Generation of Java code from Alvis model
Matyasik, Piotr; Szpyrka, Marcin; Wypych, Michał
2015-12-01
Alvis is a formal language that combines graphical modelling of interconnections between system entities (called agents) with a high-level programming language to describe the behaviour of each individual agent. An Alvis model can be verified formally with model checking techniques applied to the model's LTS graph, which represents the model state space. This paper presents the transformation of an Alvis model into executable Java code. Thus, the approach provides a method for the automatic generation of a Java application from a formally verified Alvis model.
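The general shape of such a model-to-code transformation can be sketched with a minimal template-based generator. The agent representation and the Java template below are simplified illustrations in the spirit of agent-to-thread translation; they are not the actual Alvis toolchain output.

```python
# Minimal sketch of template-based code generation for an agent model.
# The agent representation and template are hypothetical simplifications.

JAVA_AGENT_TEMPLATE = """public class {name} extends Thread {{
    // ports connecting this agent to others
{ports}
    @Override
    public void run() {{
        // behaviour body would be generated from the agent's code layer
    }}
}}"""

def generate_agent_class(name, ports):
    """Render one agent as a Java class skeleton with one field per port."""
    port_fields = "\n".join(f"    private Port {p};" for p in ports)
    return JAVA_AGENT_TEMPLATE.format(name=name, ports=port_fields)
```

A real generator would additionally translate each agent's behaviour into the `run()` body and emit the channel classes wiring the ports together.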
PetriCode: A Tool for Template-Based Code Generation from CPN Models
DEFF Research Database (Denmark)
Simonsen, Kent Inge
2014-01-01
levels of abstraction. The elements of the models are annotated with code generation pragmatics enabling PetriCode to use a template-based approach to generate code while keeping the models uncluttered from implementation artefacts. PetriCode is the realization of our code generation approach which has...
Steam condensation modelling in aerosol codes
International Nuclear Information System (INIS)
Dunbar, I.H.
1986-01-01
The principal subject of this study is the modelling of the condensation of steam into and evaporation of water from aerosol particles. These processes introduce a new type of term into the equation for the development of the aerosol particle size distribution. This new term confronts the code developer with three major problems: the physical modelling of the condensation/evaporation process, the discretisation of the new term, and the separate accounting for the masses of the water and of the other components. This study has considered four codes which model the condensation of steam into and its evaporation from aerosol particles: AEROSYM-M (UK), AEROSOLS/B1 (France), NAUA (Federal Republic of Germany) and CONTAIN (USA). The modelling in the codes has been addressed under three headings: the physical modelling of condensation, the mathematics of the discretisation of the equations, and the methods for modelling the separate behaviour of different chemical components of the aerosol. The codes are least advanced in the area of solute-effect modelling. At present only AEROSOLS/B1 includes the effect. The effect is greater for more concentrated solutions. Codes without the effect will be more in error (underestimating the total airborne mass) the less condensation they predict. Data are needed on the water vapour pressure above concentrated solutions of the substances of interest (especially CsOH and CsI) if the extent to which aerosols retain water under superheated conditions is to be modelled. 15 refs
Economic aspects and models for building codes
DEFF Research Database (Denmark)
Bonke, Jens; Pedersen, Dan Ove; Johnsen, Kjeld
It is the purpose of this bulletin to present an economic model for estimating the consequences of new or changed building codes. The object is to allow comparative analysis in order to improve the basis for decisions in this field. The model is applied in a case study.
Modeling report of DYMOND code (DUPIC version)
Energy Technology Data Exchange (ETDEWEB)
Park, Joo Hwan [KAERI, Taejon (Korea, Republic of); Yacout, Abdellatif M. [Argonne National Laboratory, Illinois (United States)
2003-04-01
The DYMOND code employs the ITHINK dynamic modeling platform to assess 100-year dynamic evolution scenarios for postulated global nuclear energy parks. DYMOND was originally developed by ANL (Argonne National Laboratory) to perform fuel cycle analyses of LWR once-through and mixed LWR-FBR plants. Since extensive application of the DYMOND code has been requested, the first version has been modified to accommodate the DUPIC, MSR and RTF fuel cycles. The DYMOND code is composed of three parts: the source language platform, input supply and output, although these are not clearly separated. This report describes all the equations modeled in the modified DYMOND code (called the DYMOND-DUPIC version). It is divided into five parts. Part A models the reactor history, including the amounts of fuel requested and of spent fuel. Part B describes the fuel cycle model, covering fuel flow from the beginning to the end of the cycle. Part C models reprocessing, including the recovery of burned uranium, plutonium, minor actinides and fission products, as well as the amount of spent fuel in storage and disposal. Part D models other fuel cycles, in particular the thorium fuel cycle for the MSR and RTF reactors. Part E models economics, giving all cost information such as uranium mining cost, reactor operating cost, fuel cost, etc.
High burnup models in computer code fair
International Nuclear Information System (INIS)
Dutta, B.K.; Swami Prasad, P.; Kushwaha, H.S.; Mahajan, S.C.; Kakodar, A.
1997-01-01
An advanced fuel analysis code, FAIR, has been developed for analyzing the behavior of fuel rods of water-cooled reactors under severe power transients and high burnups. The code is capable of analyzing fuel pins with both collapsible clad, as in PHWRs, and free-standing clad, as in LWRs. The main emphasis in the development of this code is on evaluating fuel performance at extended burnups and on modelling fuel rods for advanced fuel cycles. For this purpose, a number of suitable models have been incorporated in FAIR. For fission gas release, three different models are implemented: a physically based mechanistic model, the standard ANS 5.4 model and the Halden model. Similarly, the pellet thermal conductivity can be modelled by the MATPRO equation, the SIMFUEL relation or the Halden equation. The flux distribution across the pellet is modelled using the RADAR model. For modelling pellet clad interaction (PCMI)/stress corrosion cracking (SCC) induced failure of the sheath, the necessary routines are provided in FAIR. The validation of the code FAIR is based on the analysis of fuel rods from the EPRI project ''Light water reactor fuel rod modelling code evaluation'' and on the analytical simulation of the threshold power ramp criteria of fuel rods of pressurized heavy water reactors. In the present work, a study is carried out by analysing three CRP-FUMEX rods to show the effect of various combinations of fission gas release models and pellet conductivity models on the fuel analysis parameters. The satisfactory performance of FAIR may be concluded from these case studies. (author). 12 refs, 5 figs
The slow control system of the HADES RPC wall
International Nuclear Information System (INIS)
Gil, A.; Blanco, A.; Castro, E.; Díaz, J.; Garzón, J.A.; Gonzalez-Diaz, D.; Fouedjio, L.; Kolb, B.W.; Palka, M.; Traxler, M.; Trebacz, R.; Zumbruch, P.
2012-01-01
The control and monitoring system for the new HADES RPC time-of-flight wall installed at GSI Helmholtzzentrum für Schwerionenforschung GmbH (Darmstadt, Germany) is described. The slow control system controls and monitors about 6000 variables from different physical devices via a distributed architecture, which makes intensive use of the 1-wire® bus. The software implementation is based on the Experimental Physics and Industrial Control System (EPICS) software tool kit, providing low cost, reliability and adaptability without requiring large hardware resources. The control and monitoring system serves five different subsystems: front-end electronics, low voltage, high voltage, gases, and the detector.
V0 Reconstruction of Strange Hadrons in Au+Au Collisions at 1.23 AGeV with HADES
International Nuclear Information System (INIS)
Scheib, T
2015-01-01
Preliminary results on the production of weakly decaying strange hadrons are reported for collisions of Au+Au at 1.23 AGeV beam energy studied with the HADES detector at GSI in Darmstadt. At this collision energy all strange particles are created below their elementary threshold. The reconstruction of the investigated particles (i.e. Λ and K0S) via the topology of their charged decay products (V0 reconstruction) is presented in detail. From the corrected yields of Λ and K0S the ratio K0S/Λ can be calculated and included in a statistical model fit. (paper)
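The kinematic core of V0 reconstruction, combining the two charged daughter tracks into an invariant mass, can be sketched as follows. The topological selection cuts of the actual analysis (vertex displacement, pointing angle, etc.) are omitted; only the relativistic invariant-mass calculation is shown.

```python
import math

# Invariant-mass reconstruction of a Lambda candidate from a proton and a
# pi- track (momenta in MeV/c, masses in MeV/c^2). Topology cuts omitted.

M_PROTON = 938.272   # MeV/c^2
M_PION   = 139.570   # MeV/c^2 (charged pion)

def invariant_mass(track1, track2):
    """track = (px, py, pz, rest_mass); returns the pair mass in MeV/c^2."""
    def energy(px, py, pz, m):
        return math.sqrt(px*px + py*py + pz*pz + m*m)
    e = energy(*track1) + energy(*track2)
    px = track1[0] + track2[0]
    py = track1[1] + track2[1]
    pz = track1[2] + track2[2]
    return math.sqrt(e*e - (px*px + py*py + pz*pz))
```

For back-to-back daughters with momentum near the two-body decay value of about 101 MeV/c, the reconstructed mass comes out close to the Λ mass of ~1115.7 MeV/c², which is how a peak over the combinatorial background is identified.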
Hydrogen recycle modeling in transport codes
International Nuclear Information System (INIS)
Howe, H.C.
1979-01-01
The hydrogen recycling models now used in Tokamak transport codes are reviewed and the method by which realistic recycling models are being added is discussed. Present models use arbitrary recycle coefficients and therefore do not model the actual recycling processes at the wall. A model for the hydrogen concentration in the wall serves two purposes: (1) it allows a better understanding of the density behavior in present gas puff, pellet, and neutral beam heating experiments; and (2) it allows one to extrapolate to long pulse devices such as EBT, ISX-C and reactors where the walls are observed or expected to saturate. Several wall models are presently being studied for inclusion in transport codes
Chemistry models in the Victoria code
International Nuclear Information System (INIS)
Grimley, A.J. III
1988-01-01
The VICTORIA computer code consists of the fission product release and chemistry models for the MELPROG severe accident analysis code. The chemistry models in VICTORIA are used to treat multi-phase interactions in four separate physical regions: fuel grains, gap/open porosity/clad, coolant/aerosols, and structure surfaces. The physical and chemical environment of each region is very different from the others, and different models are required for each. The common thread in the modelling is the use of a chemical equilibrium assumption. The validity of this assumption, along with a description of the various physical constraints applicable to each region, will be discussed. The models that result from the assumptions and constraints will be presented along with sample calculations for each region
HADES-1: A rapid prototyping environment based on advanced FPGAs
Aguirre Echanove, Miguel Ángel; Torralba Silgado, Antonio Jesús; García Franquelo, Leopoldo; Tombs, J. N.
2001-01-01
Rapid prototyping of large digital systems is becoming better supported by the use of new advanced FPGAs. These FPGAs can give more information than functional simulation and emulation tasks, due to their inner inspection features. This paper presents HADES-1, a new environment for rapid prototyping and hardware debugging. HADES-1 is based on one FPGA of the VIRTEX family, exploiting the advanced features of the SelectMap port and a fast link with the host PC.
Modeling peripheral olfactory coding in Drosophila larvae.
Directory of Open Access Journals (Sweden)
Derek J Hoare
Full Text Available The Drosophila larva possesses just 21 unique and identifiable pairs of olfactory sensory neurons (OSNs), enabling investigation of the contribution of individual OSN classes to the peripheral olfactory code. We combined electrophysiological and computational modeling to explore the nature of the peripheral olfactory code in situ. We recorded firing responses of 19/21 OSNs to a panel of 19 odors. This was achieved by creating larvae expressing just one functioning class of odorant receptor, and hence OSN. Odor response profiles of each OSN class were highly specific and unique. However, many OSN-odor pairs yielded variable responses, some of which were statistically indistinguishable from background activity. We used these electrophysiological data, incorporating both responses and spontaneous firing activity, to develop a Bayesian decoding model of olfactory processing. The model was able to accurately predict odor identity from raw OSN responses; prediction accuracy ranged from 12% to 77% (mean for all odors 45.2%) but was always significantly above chance (5.6%). However, there was no correlation between prediction accuracy for a given odor and the strength of responses of wild-type larvae to the same odor in a behavioral assay. We also used the model to predict the ability of the code to discriminate between pairs of odors. Some of these predictions were supported in a behavioral discrimination (masking) assay but others were not. We conclude that our model of the peripheral code represents basic features of odor detection and discrimination, yielding insights into the information available to higher processing structures in the brain.
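A Bayesian decoder of the kind described above can be sketched in a few lines: each OSN's spike count is modeled as Poisson with an odor-specific mean, and the decoder picks the odor with the highest posterior under a flat prior. The odor names and rate table below are hypothetical stand-ins, not the recorded larval data.

```python
import math

# Toy Bayesian odor decoder: spike counts are Poisson with odor-specific
# means. The rate table is a hypothetical illustration, not recorded data.

RATES = {  # odor -> mean spike count for each of three OSN classes
    "ethyl_acetate": [12.0, 2.0, 0.5],
    "benzaldehyde":  [1.0, 9.0, 3.0],
}

def log_poisson(k, lam):
    """Log-probability of observing k spikes under a Poisson mean lam."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def decode(counts):
    """Return the maximum a posteriori odor under a flat prior."""
    def loglik(odor):
        return sum(log_poisson(k, lam) for k, lam in zip(counts, RATES[odor]))
    return max(RATES, key=loglik)
```

Incorporating spontaneous firing, as the paper does, would amount to adding a background rate to each mean, so that responses indistinguishable from baseline contribute little evidence either way.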
Sodium pool fire model for CONACS code
International Nuclear Information System (INIS)
Yung, S.C.
1982-01-01
The modeling of sodium pool fires constitutes an important ingredient in conducting LMFBR accident analysis. Such modeling capability has recently come under scrutiny at Westinghouse Hanford Company (WHC) within the context of developing CONACS, the Containment Analysis Code System. One of the efforts in the CONACS program is to model various combustion processes anticipated to occur during postulated accident paths. This effort includes the selection or modification of an existing model and development of a new model if it clearly contributes to the program purpose. As part of this effort, a new sodium pool fire model has been developed that is directed at removing some of the deficiencies in the existing models, such as SOFIRE-II and FEUNA
Dynamic alignment models for neural coding.
Directory of Open Access Journals (Sweden)
Sepp Kollmorgen
2014-03-01
Full Text Available Recently, there have been remarkable advances in modeling the relationships between the sensory environment, neuronal responses, and behavior. However, most models cannot encompass variable stimulus-response relationships such as varying response latencies and state or context dependence of the neural code. Here, we consider response modeling as a dynamic alignment problem and model stimulus and response jointly by a mixed pair hidden Markov model (MPH). In MPHs, multiple stimulus-response relationships (e.g., receptive fields) are represented by different states or groups of states in a Markov chain. Each stimulus-response relationship features temporal flexibility, allowing modeling of variable response latencies, including noisy ones. We derive algorithms for learning of MPH parameters and for inference of spike response probabilities. We show that some linear-nonlinear Poisson cascade (LNP) models are a special case of MPHs. We demonstrate the efficiency and usefulness of MPHs in simulations of both jittered and switching spike responses to white noise and natural stimuli. Furthermore, we apply MPHs to extracellular single and multi-unit data recorded in cortical brain areas of singing birds to showcase a novel method for estimating response lag distributions. MPHs allow simultaneous estimation of receptive fields, latency statistics, and hidden state dynamics and so can help to uncover complex stimulus response relationships that are subject to variable timing and involve diverse neural codes.
Dynamic alignment models for neural coding.
Kollmorgen, Sepp; Hahnloser, Richard H R
2014-03-01
Recently, there have been remarkable advances in modeling the relationships between the sensory environment, neuronal responses, and behavior. However, most models cannot encompass variable stimulus-response relationships such as varying response latencies and state or context dependence of the neural code. Here, we consider response modeling as a dynamic alignment problem and model stimulus and response jointly by a mixed pair hidden Markov model (MPH). In MPHs, multiple stimulus-response relationships (e.g., receptive fields) are represented by different states or groups of states in a Markov chain. Each stimulus-response relationship features temporal flexibility, allowing modeling of variable response latencies, including noisy ones. We derive algorithms for learning of MPH parameters and for inference of spike response probabilities. We show that some linear-nonlinear Poisson cascade (LNP) models are a special case of MPHs. We demonstrate the efficiency and usefulness of MPHs in simulations of both jittered and switching spike responses to white noise and natural stimuli. Furthermore, we apply MPHs to extracellular single and multi-unit data recorded in cortical brain areas of singing birds to showcase a novel method for estimating response lag distributions. MPHs allow simultaneous estimation of receptive fields, latency statistics, and hidden state dynamics and so can help to uncover complex stimulus response relationships that are subject to variable timing and involve diverse neural codes.
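The hidden Markov machinery underlying MPHs can be illustrated with a minimal two-state forward algorithm: a "latency" state with low spike probability and a "response" state with high spike probability. The transition and emission tables below are hypothetical, and a real MPH couples stimulus and response streams with much richer state structure.

```python
# Minimal forward-algorithm sketch for a two-state hidden Markov model,
# illustrating the machinery behind mixed pair HMMs (MPHs). All tables
# are hypothetical; a real MPH models stimulus and response jointly.

STATES = ("latency", "response")
INIT   = {"latency": 0.9, "response": 0.1}
TRANS  = {"latency":  {"latency": 0.7, "response": 0.3},
          "response": {"latency": 0.1, "response": 0.9}}
EMIT   = {"latency":  {0: 0.9, 1: 0.1},   # P(spike observation | state)
          "response": {0: 0.3, 1: 0.7}}

def sequence_probability(spikes):
    """P(observed 0/1 spike sequence) computed by the forward algorithm."""
    alpha = {s: INIT[s] * EMIT[s][spikes[0]] for s in STATES}
    for obs in spikes[1:]:
        alpha = {s: sum(alpha[r] * TRANS[r][s] for r in STATES) * EMIT[s][obs]
                 for s in STATES}
    return sum(alpha.values())
```

Variable response latency corresponds to how long the chain dwells in the "latency" state before entering "response"; inference over the hidden state path is what lets the model align responses with flexible timing.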
The MESORAD dose assessment model: Computer code
International Nuclear Information System (INIS)
Ramsdell, J.V.; Athey, G.F.; Bander, T.J.; Scherpelz, R.I.
1988-10-01
MESORAD is a dose equivalent model for emergency response applications that is designed to be run on minicomputers. It has been developed by the Pacific Northwest Laboratory for use as part of the Intermediate Dose Assessment System in the US Nuclear Regulatory Commission Operations Center in Washington, DC, and the Emergency Management System in the US Department of Energy Unified Dose Assessment Center in Richland, Washington. This volume describes the MESORAD computer code and contains a listing of the code. The technical basis for MESORAD is described in the first volume of this report (Scherpelz et al. 1986). A third volume of the documentation is planned; it will contain utility programs and input and output files that can be used to check the implementation of MESORAD. 18 figs., 4 tabs
Energy Technology Data Exchange (ETDEWEB)
Tarantola Peloni, Attilio
2011-06-15
The HADES (High Acceptance DiElectron Spectrometer) is an experimental apparatus installed at the heavy-ion synchrotron SIS-18 at GSI, Darmstadt. The main physics motivation of the HADES experiment is the measurement of e+e− pairs in the invariant-mass range up to 1 GeV/c² in heavy-ion collisions as well as in pion- and proton-induced reactions. The HADES physics program is focused on the in-medium properties of the light vector mesons ρ(770), ω(783) and φ(1020), which decay with a small branching ratio into dileptons. Dileptons are penetrating probes which allow one to study the in-medium properties of hadrons. However, in heavy-ion collisions the measurement of such lepton pairs is difficult, because they are rare and sit on a very large combinatorial background. Recently, HADES has been upgraded with new detectors and new electronics in order to handle higher-intensity beams and reactions with heavy nuclei up to Au. HADES will continue its rich physics program for a few more years at its current place at SIS-18 and then move to the upcoming international Facility for Antiproton and Ion Research (FAIR) accelerator complex. In this context, the physics results presented in this work are important prerequisites for the investigation of in-medium vector meson properties in p+A and A+A collisions. This work consists of five chapters. The first chapter introduces the physics motivation and reviews recent physics results. In the second chapter, the HADES spectrometer is described and its sub-detectors are presented. Chapter three deals with lepton identification, and the reconstruction of the dielectron spectra in p+p collisions is presented. Here, two reactions are characterized: inclusive and exclusive dilepton production. From the spectra obtained, the corresponding cross sections are presented with the respective statistical and systematic errors. A comparison with theoretical models is included as well.
International Nuclear Information System (INIS)
Tarantola Peloni, Attilio
2011-06-01
The HADES (High Acceptance DiElectron Spectrometer) is an experimental apparatus installed at the heavy-ion synchrotron SIS-18 at GSI, Darmstadt. The main physics motivation of the HADES experiment is the measurement of e+e− pairs in the invariant-mass range up to 1 GeV/c² in heavy-ion collisions as well as in pion- and proton-induced reactions. The HADES physics program is focused on the in-medium properties of the light vector mesons ρ(770), ω(783) and φ(1020), which decay with a small branching ratio into dileptons. Dileptons are penetrating probes which allow one to study the in-medium properties of hadrons. However, in heavy-ion collisions the measurement of such lepton pairs is difficult, because they are rare and sit on a very large combinatorial background. Recently, HADES has been upgraded with new detectors and new electronics in order to handle higher-intensity beams and reactions with heavy nuclei up to Au. HADES will continue its rich physics program for a few more years at its current place at SIS-18 and then move to the upcoming international Facility for Antiproton and Ion Research (FAIR) accelerator complex. In this context, the physics results presented in this work are important prerequisites for the investigation of in-medium vector meson properties in p+A and A+A collisions. This work consists of five chapters. The first chapter introduces the physics motivation and reviews recent physics results. In the second chapter, the HADES spectrometer is described and its sub-detectors are presented. Chapter three deals with lepton identification, and the reconstruction of the dielectron spectra in p+p collisions is presented. Here, two reactions are characterized: inclusive and exclusive dilepton production. From the spectra obtained, the corresponding cross sections are presented with the respective statistical and systematic errors. A comparison with theoretical models is included as well. Conclusions are given in chapter
Measurement of low-mass e+e− pair production in 2 AGeV C-C collisions with HADES
Energy Technology Data Exchange (ETDEWEB)
Sudol, Malgorzata
2007-07-01
The search for a modification of hadron properties inside nuclear matter at normal and/or high temperature and density is one of the most interesting issues of modern nuclear physics. Dilepton experiments give insight into the properties of the strong interaction and the nature of hadron mass generation. One of these research tools is the HADES spectrometer. HADES is a high-acceptance dilepton spectrometer installed at the heavy-ion synchrotron (SIS) at GSI, Darmstadt. The main physics motivation of HADES is the measurement of e+e− pairs in the invariant-mass range up to 1 GeV/c² in pion- and proton-induced reactions, as well as in heavy-ion collisions. The goal is to investigate the properties of the vector mesons ρ and ω and of other hadrons reconstructed from e+e− decay pairs. Dileptons are penetrating probes allowing one to study the in-medium properties of hadrons. However, the measurement of such dilepton pairs is difficult because of a very large background from other processes in which leptons are created. This thesis presents the analysis of the data provided by the first physics run with the HADES spectrometer. For the first time, e+e− pairs produced in C+C collisions at an incident energy of 2 GeV per nucleon have been collected with sufficient statistics. This experiment is of particular importance since it allows one to address the puzzling pair excess measured by the former DLS experiment at a beam energy of 1.04 AGeV. The thesis consists of five chapters. The first chapter presents the physics case addressed in this work. In the second chapter, the HADES spectrometer is introduced, with a description of the specific detectors that form the spectrometer. Chapter three focuses on charged-particle identification. The fourth chapter discusses the reconstruction of the di-electron spectra in C+C collisions. In this part of the thesis a comparison with theoretical models is included as well
Gap Conductance model Validation in the TASS/SMR-S code using MARS code
International Nuclear Information System (INIS)
Ahn, Sang Jun; Yang, Soo Hyung; Chung, Young Jong; Lee, Won Jae
2010-01-01
Korea Atomic Energy Research Institute (KAERI) has been developing the TASS/SMR-S (Transient and Setpoint Simulation/Small and Medium Reactor) code, a thermal-hydraulic code for the safety analysis of the advanced integral reactor. The applicability of the thermal-hydraulic models within the code needs to be validated. Among these models, the gap conductance model, which describes the thermal conductance of the gap between fuel and cladding, was validated through comparison with the MARS code. The validation was performed by evaluating the variation of the gap temperature and gap width as they change with the power fraction. In this paper, a brief description of the gap conductance model in the TASS/SMR-S code is presented. In addition, results calculated to validate the gap conductance model are demonstrated by comparison with the results of the MARS code for the test case
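The physical idea behind a gap conductance model can be sketched with the simplest conduction-only form, h_gap ≈ k_gas/(gap width + jump distances). The function below is a textbook-style illustration with hypothetical numbers; the actual TASS/SMR-S and MARS models include additional radiation and contact terms and detailed gas mixture conductivities.

```python
# Simplified gap conductance sketch: conduction through the gas film between
# fuel and cladding only. Radiation and contact terms are omitted, so this
# is an illustration of the concept, not the TASS/SMR-S or MARS model.

def gap_conductance(k_gas, gap_width, jump_distance=0.0):
    """k_gas in W/m-K, widths in m; returns conductance in W/m^2-K."""
    return k_gas / (gap_width + jump_distance)

def gap_temperature_drop(heat_flux, k_gas, gap_width):
    """Temperature drop (K) across the gap for a given heat flux (W/m^2)."""
    return heat_flux / gap_conductance(k_gas, gap_width)
```

This also shows why the validation tracks both gap temperature and gap width: as power changes, thermal expansion changes the gap width, which feeds directly back into the conductance and hence the temperature drop.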
A graph model for opportunistic network coding
Sorour, Sameh
2015-08-12
Recent advancements in graph-based analysis and solutions of instantly decodable network coding (IDNC) trigger interest in extending them to more complicated opportunistic network coding (ONC) scenarios, with limited increase in complexity. In this paper, we design a simple IDNC-like graph model for a specific subclass of ONC by introducing a more generalized definition of its vertices and the notion of vertex aggregation, in order to represent the storage of non-instantly-decodable packets in ONC. Based on this representation, we determine the set of pairwise vertex adjacency conditions that can populate this graph with edges so as to guarantee decodability or aggregation for the vertices of each clique in this graph. We then develop the algorithmic procedures that can be applied to the designed graph model to optimize any performance metric for this ONC subclass. A case study on reducing the completion time shows that the proposed framework improves on the performance of IDNC and gets very close to the optimal performance.
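The basic IDNC graph construction that this work generalizes can be sketched as follows: a vertex (r, p) means receiver r still wants packet p, and two vertices are joined when the two packets can be XORed into one transmission that both receivers decode instantly. The receiver "has/wants" sets below are an illustrative example, not data from the paper, and the aggregation extension for non-instantly-decodable packets is omitted.

```python
import itertools

# Sketch of an IDNC-style coding graph. A vertex (r, p) = receiver r wants
# packet p; an edge means one XOR transmission serves both vertices. The
# Has/Wants sets are illustrative; vertex aggregation for ONC is omitted.

HAS   = {"r1": {2, 3}, "r2": {1, 3}, "r3": {1, 2}}  # packets already held
WANTS = {"r1": {1}, "r2": {2}, "r3": {3}}           # packets still missing

def vertices():
    return [(r, p) for r, ps in WANTS.items() for p in ps]

def adjacent(v, w):
    (r1, p1), (r2, p2) = v, w
    if r1 == r2:
        return False
    # same packet serves both receivers, or each can cancel the other's packet
    return p1 == p2 or (p2 in HAS[r1] and p1 in HAS[r2])

def is_decodable_clique(vs):
    """A clique's packets can be XORed into one instantly decodable coded packet."""
    return all(adjacent(v, w) for v, w in itertools.combinations(vs, 2))
```

In this example all three vertices form a clique, so the single coded packet 1 XOR 2 XOR 3 lets every receiver recover its missing packet in one transmission; maximum-clique selection is then the scheduling problem.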
Conservation of concrete structures in fib Model Code 2010
Matthews, S.L.; Ueda, T.; Bigaj-Van Vliet, A.J.
2010-01-01
Chapter 9: Conservation of Concrete Structures forms part of the forthcoming fib new Model Code. As it is expected that the fib new Model Code will be largely completed in 2010, it is being referred to as fib Model Code 2010 (fib MC2010) and it will soon become available for wider review by the
Conservation of concrete structures according to fib Model Code 2010
Matthews, S.; Bigaj-Van Vliet, A.; Ueda, T.
2013-01-01
Conservation of concrete structures forms an essential part of the fib Model Code for Concrete Structures 2010 (fib Model Code 2010). In particular, Chapter 9 of fib Model Code 2010 addresses issues concerning conservation strategies and tactics, conservation management, condition surveys, condition
“More supervision and feedback would have been good”
Directory of Open Access Journals (Sweden)
Åsa Mickwitz
2017-09-01
Full Text Available This study examines students' experiences of their capacity for self-regulation during a course in academic writing at the University of Helsinki. The course used the autonomous learning format, in which students must take responsibility for their own learning to a greater degree than in a traditional course. The material consisted of 31 questionnaires with open questions, collected in 2014 and 2015. Above all, I wanted to know whether the students felt that they could (a) set realistic goals for their studies, (b) find learning strategies to reach these goals, (c) use their time effectively, and (d) evaluate their own learning. The questionnaire responses were analysed using qualitative content analysis. The results showed that the students considered themselves able to use primarily two self-regulation skills: they could manage their time effectively, even if some of them found this challenging, and they managed to set their own learning goals. Some students also spontaneously reported becoming more motivated to learn because they had to take so much responsibility for their own learning. Another result was that almost half of the students on the course had problems finding learning strategies to develop their writing competence, and the same students felt that they would, in general, have needed more support from a teacher. They felt that they would have needed feedback on their texts and supervision by a teacher during the course in order to develop their writing. Although a majority (23 of 30) of the students said that they prefer to learn on their own, a large proportion of these also felt that they needed the teacher's support in combination with self-study. The reason teacher support is perceived as so important in writing courses in particular is that it is difficult for students to work out on their own how academic texts are written, since the norms are implicit.
ER@CEBAF: Modeling code developments
Energy Technology Data Exchange (ETDEWEB)
Meot, F. [Brookhaven National Lab. (BNL), Upton, NY (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Roblin, Y. [Brookhaven National Lab. (BNL), Upton, NY (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)
2016-04-13
A proposal for a multiple-pass, high-energy, energy-recovery experiment using CEBAF is under preparation in the frame of a JLab-BNL collaboration. In view of beam dynamics investigations regarding this project, in addition to the existing model in use in Elegant, a model of CEBAF is being developed in the stepwise ray-tracing code Zgoubi. Beyond the ER experiment, it is also planned to use the Zgoubi model for the study of polarization transport in the presence of synchrotron radiation, down to the Hall D line, where a 12 GeV polarized beam can be delivered. This Note briefly reports on the preliminary steps, and preliminary outcomes, based on an Elegant-to-Zgoubi translation.
Improved choked flow model for MARS code
International Nuclear Information System (INIS)
Chung, Moon Sun; Lee, Won Jae; Ha, Kwi Seok; Hwang, Moon Kyu
2002-01-01
Choked flow calculation is improved by using a new sound speed criterion for bubbly flow, derived from the characteristic analysis of a hyperbolic two-fluid model. This model is based on the notion of surface tension for the interfacial pressure jump terms in the momentum equations. The real eigenvalues obtained as the closed-form solution of the characteristic polynomial represent the sound speed in the bubbly flow regime and agree well with existing experimental data. In the extreme cases, the present sound speed gives more reasonable results than Nguyen's does. The choked flow criterion derived from the present sound speed is implemented in the MARS code and assessed against the Marviken choked flow tests. The assessment results, obtained without any adjustment by discharge coefficients, demonstrate more accurate predictions of choked flow rate in the bubbly flow regime than earlier choked flow calculations. By calculating a typical PWR SBLOCA problem, we confirm that the present model reproduces reasonable transients of an integral reactor system
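The link between characteristic eigenvalues and sound speed can be illustrated with a far simpler 1-D model than the two-fluid system of the abstract; this is a sketch of the general idea, not the paper's model:

```python
import numpy as np

# Characteristic analysis of 1-D isentropic flow in variables (rho, u):
#   rho_t + (rho u)_x = 0
#   u_t + u u_x + (c^2/rho) rho_x = 0
# The quasilinear Jacobian is A = [[u, rho], [c^2/rho, u]], whose
# eigenvalues u +- c identify c as the sound speed, just as the
# closed-form eigenvalues of the two-fluid model do for bubbly flow.
def characteristic_speeds(rho, u, c):
    A = np.array([[u, rho],
                  [c**2 / rho, u]])
    return np.sort(np.linalg.eigvals(A).real)

rho, u, c = 1.2, 3.0, 340.0
print(characteristic_speeds(rho, u, c))  # approximately [u - c, u + c]
```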
Simulink Code Generation: Tutorial for Generating C Code from Simulink Models using Simulink Coder
MolinaFraticelli, Jose Carlos
2012-01-01
This document explains the steps necessary to generate optimized C code from Simulink models. It also covers general information on good programming practices, the selection of variable types, how to organize models and subsystems, and finally how to test the generated C code and compare it with data from MATLAB.
RPC HADES-TOF wall cosmic ray test performance
International Nuclear Information System (INIS)
Blanco, A.; Belver, D.; Cabanelas, P.; Díaz, J.; Fonte, P.; Garzon, J.A.; Gil, A.; Gonzalez-Díaz, D.; Koenig, W.; Kolb, B.; Lopes, L.; Palka, M.; Pereira, A.
2012-01-01
In this work we present results of the cosmic ray test performed prior to the final installation and commissioning of the new Resistive Plate Chamber (RPC) Time of Flight (TOF) wall for the High-Acceptance DiElectron Spectrometer (HADES) at GSI. The TOF wall is composed of six equal sectors, each constituted by 186 individual 4-gap glass-aluminium shielded RPC cells distributed in six columns and 31 rows in two partially overlapping layers, covering an area of 1.26 m². All sectors were tested with the final Front End Electronics (FEE) and Data AcQuisition system (DAQ), together with the Low Voltage (LV) and High Voltage (HV) systems. The results confirm a very uniform average system time resolution of 77 ps (sigma), together with an average multi-hit time resolution of 83 ps. Crosstalk levels below 1% (on average), moderate timing tails, and an average longitudinal position resolution of 8.4 mm (sigma) are also confirmed.
Geomechanical research in the test drift of the Hades underground research facility at Mol
International Nuclear Information System (INIS)
Neerdael, B.; Bruyn, D. De
1989-01-01
In the framework of the Hades project, managed by the Belgian Nuclear Research Establishment (CEN-SCK), the Underground Research Facility (URF) was extended by the construction of a test drift. This test drift is a gallery of circular cross section composed of a 5.6 m long access gallery (2.64 m inner diameter), followed by a 42 m long test zone (3.5 m in diameter) lined with 60 cm thick precast concrete segments. An alternative gallery lining concept (sliding steel ribs) has been developed by Andra and tested in a 12 m long experimental gallery dug in the prolongation of the concrete-lined test drift. Based on model predictions and on previous investigations at smaller scale, a geotechnical investigation programme, the so-called mine-by test, was designed and developed in the clay surrounding the volume to be excavated. One more experiment was performed after the construction period (April 1988). It consists of quantifying the mechanical perturbation around the drift by performing Self-Boring Pressuremeter Tests at different distances from the gallery. The project is sponsored by the Commission of the European Communities, in the frame of part B of the CEC programme on radioactive waste management and disposal, and by ONDRAF-NIRAS, the Belgian Waste Management Authority
40 CFR 194.23 - Models and computer codes.
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
Fuel analysis code FAIR and its high burnup modelling capabilities
International Nuclear Information System (INIS)
Prasad, P.S.; Dutta, B.K.; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.
1995-01-01
A computer code, FAIR, has been developed for analysing the performance of water-cooled reactor fuel pins. It is capable of analysing high-burnup fuels and has recently been used to analyse ten high-burnup fuel rods irradiated at the Halden reactor. In the present paper, the code FAIR and its various high-burnup models are described. The performance of FAIR in analysing high-burnup fuels and its other applications are highlighted. (author). 21 refs., 12 figs
International Nuclear Information System (INIS)
Berger-Chen, Jia Chii
2015-01-01
The present work deals with an inclusive and an exclusive K0 analysis of the p+p data, recorded with the HADES experiment at 3.5 GeV, for the determination of the K0 production dynamics, production cross sections, and angular distributions, in particular in the context of resonances (e.g. Δ(1232)++). The exclusive results, which show the presence of a dominant resonance contribution, were then implemented in theoretical models, allowing the reproduction of the inclusive K0 kinematics.
Image Coding using Markov Models with Hidden States
DEFF Research Database (Denmark)
Forchhammer, Søren Otto
1999-01-01
The Cylinder Partially Hidden Markov Model (CPH-MM) is applied to lossless coding of bi-level images. The original CPH-MM is relaxed for the purpose of coding by not imposing stationarity, but otherwise the model description is the same.
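As a hedged illustration of why context modelling helps for bi-level images (not the CPH-MM itself, whose cylinder contexts are more elaborate), compare the ideal code length of a pixel sequence under an order-1 Markov model with a memoryless model:

```python
import numpy as np

# Ideal code length (in bits) of a binary sequence under either a
# memoryless model or an order-1 Markov model fitted to the sequence
# (Laplace smoothing on the transition counts). Context modelling pays
# off when pixels are spatially correlated, the premise of the paper.
def ideal_bits(seq, order1=True):
    seq = list(seq)
    if not order1:
        p1 = sum(seq) / len(seq)
        probs = [p1 if s else 1 - p1 for s in seq]
    else:
        counts = {(a, b): 1 for a in (0, 1) for b in (0, 1)}
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
        probs = []
        for a, b in zip(seq, seq[1:]):
            tot = counts[(a, 0)] + counts[(a, 1)]
            probs.append(counts[(a, b)] / tot)
    return -sum(np.log2(p) for p in probs)

runs = [0] * 50 + [1] * 50  # highly correlated bi-level "row"
print(ideal_bits(runs, order1=True) < ideal_bits(runs, order1=False))  # True
```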
MARS code manual volume I: code structure, system models, and solution methods
International Nuclear Information System (INIS)
Chung, Bub Dong; Kim, Kyung Doo; Bae, Sung Won; Jeong, Jae Jun; Lee, Seung Wook; Hwang, Moon Kyu; Yoon, Churl
2010-02-01
The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art, realistic thermal-hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF code. The integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation Of State (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This theory manual provides complete overall information on the code structure and major functions of MARS, including the code architecture, hydrodynamic model, heat structures, trip/control systems, and the point reactor kinetics model; it should therefore be very useful for code users. The overall structure of the manual is modeled on that of RELAP5, and as such its layout is very similar to that of RELAP5. This similarity to the RELAP5 input is intentional, as the input scheme allows minimum modification between RELAP5 and MARS 3.1 inputs. The MARS 3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible
Interfacial and Wall Transport Models for SPACE-CAP Code
Energy Technology Data Exchange (ETDEWEB)
Hong, Soon Joon; Choo, Yeon Joon; Han, Tae Young; Hwang, Su Hyun; Lee, Byung Chul [FNC Tech., Seoul (Korea, Republic of); Choi, Hoon; Ha, Sang Jun [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)
2009-10-15
A development project for a domestic design code was launched for the safety and performance analysis of pressurized light water reactors, and the CAP (Containment Analysis Package) code has been developed for containment safety and performance analysis side by side with SPACE. The CAP code treats three fields (gas, continuous liquid, and dispersed drops) for the assessment of containment-specific phenomena, and is featured by its multidimensional assessment capabilities. The thermal-hydraulics solver has already been developed and is now being tested for stability and soundness. As a next step, interfacial and wall transport models were set up. In order to develop the best model and correlation package for the CAP code, various models currently used in the major containment analysis codes GOTHIC, CONTAIN 2.0, and CONTEMPT-LT have been reviewed. The origins of the selected models used in these codes have also been examined, to verify that the models do not conflict with proprietary rights. In addition, a literature survey of recent studies has been performed in order to incorporate better models into the CAP code. The models and correlations of SPACE were also reviewed. The CAP models and correlations comprise interfacial heat/mass and momentum transport models, and wall heat/mass and momentum transport models. This paper discusses those transport models in the CAP code.
Dielectron production in C+C collisions at 2 AGeV with HADES
Czech Academy of Sciences Publication Activity Database
Przygoda, W.; Kugler, Andrej; Wagner, Vladimír
-, č. 783 (2007), s. 583-583 ISSN 0375-9474 R&D Projects: GA AV ČR IAA1048304 Institutional research plan: CEZ:AV0Z10480505 Keywords : HADES spectrometer Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 3.096, year: 2007
A facility for pion-induced nuclear reaction studies with HADES
Czech Academy of Sciences Publication Activity Database
Adamczewski-Musch, J.; Arnold, O.; Behnke, C.; Chlad, Lukáš; Kugler, Andrej; Rodriguez Ramos, Pablo; Sobolev, Yuri, G.; Svoboda, Ondřej; Tlustý, Pavel; Wagner, Vladimír
2017-01-01
Roč. 53, č. 9 (2017), č. článku 188. ISSN 1434-6001 R&D Projects: GA ČR GA13-06759S; GA MŠk LM2015049 Institutional support: RVO:61389005 Keywords : HADES collaboration Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders OBOR OECD: Nuclear physics Impact factor: 2.833, year: 2016
Czech Academy of Sciences Publication Activity Database
Andreeva, O. V.; Golubeva, M. B.; Guber, F. F.; Ivashkin, A. P.; Krása, Antonín; Kugler, Andrej; Kurepin, A. B.; Petukhov, O. A.; Reshetin, A. I.; Sadovsky, A. S.; Svoboda, Ondřej; Sobolev, Yuri, G.; Tlustý, Pavel; Usenko, E. A.
Roč. 57, č. 2 (2014), s. 103-119 ISSN 0020-4412 R&D Projects: GA ČR GA13-06759S Institutional support: RVO:61389005 Keywords: HADES * collisions * detectors Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 0.331, year: 2014
Dielectron spectroscopy at 1-2 AGeV with HADES
Czech Academy of Sciences Publication Activity Database
Spataro, S.; Agakishiev, G.; Agodi, C.; Balanda, A.; Bellia, G.; Belver, D.; Belyaev, A.; Blanco, A.; Böhmer, M.; Boyard, J. L.; Braun-Munzinger, P.; Cabanelas, P.; Castro, E.; Chernenko, S.; Christ, T.; Destefanis, M.; Díaz, J.; Dohrmann, F.; Dybczak, A.; Eberl, T.; Fabbietti, L.; Fateev, O.; Finocchiaro, P.; Fonte, P.; Friese, J.; Frohlich, I.; Galatyuk, T.; Garzón, J.A.; Gernhäuser, R.; Gil, A.; Gilardi, C.; Golubeva, M.; González-Díaz, D.; Grosse, E.; Guber, F.; Heilmann, M.; Hennino, T.; Holzmann, R.; Ierusalimov, A.; Iori, I.; Ivashkin, A.; Jurkovic, M.; Kämpfer, B.; Kanaki, K.; Karavicheva, T.; Kirschner, D.; Koenig, I.; Koenig, W.; Kolb, B.W.; Kotte, R.; Kozuch, A.; Krása, Antonín; Křížek, Filip; Krücken, R.; Kuhn, W.; Kugler, Andrej; Kurepin, A.; Lamas-Valverde, J.; Lang, S.; Lange, J.S.; Lapidus, K.; Lopes, L.; Lorenz, M.; Maier, L.; Mangiarotti, A.; Marín, J.; Markert, J.; Metag, V.; Micel, J.; Michalska, B.; Mishra, D.; Morinière, E.; Mousa, J.; Muntz, C.; Naumann, L.; Novotny, R.; Otwinowski, J.; Pachmayer, Y.C.; Palka, M.; Parpottas, Y.; Pechenov, V.; Pechenova, O.; Pérez Cavalcanti, T.; Pietraszko, J.; Przygoda, W.; Ramstein, B.; Reshetin, A.; Roy-Stephan, M.; Rustamov, A.; Sadovsky, A.; Sailer, B.; Salabura, P.; Schmah, A.; Simon, R. S.; Sobolev, Yu. G.; Spruck, B.; Strobele, H.; Stroth, J.; Sturm, C.; Sudol, M.; Tarantola, A.; Teilab, K.; Tlustý, Pavel; Traxler, M.; Trebacz, R.; Tsertos, H.; Veretenkin, I.; Wagner, Vladimír; Wen, H.; Wisniowski, M.; Wojcik, T.; Wüstenfeld, J.; Yurevich, S.; Zanevsky, Yu.; Zhou, P.; Zumbruch, P.
2008-01-01
Roč. 38, č. 2 (2008), s. 163-166 ISSN 1434-6001 R&D Projects: GA AV ČR IAA100480803; GA MŠk LC07050 Institutional research plan: CEZ:AV0Z10480505 Keywords : HADES * GeV * GSI Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 2.015, year: 2008
Neothada hades n.sp. from South Africa, with notes on the genus ...
African Journals Online (AJOL)
Neothada hades n.sp. from South Africa is characterized by the possession of small but distinct stylet knobs, 149-175 body annules intersected by 14 longitudinal lines (incisures), protruding lateral fields marked by four incisures, and an elongate-conoid tail with a bluntly rounded tip. Reasons are given why N. canceilata ...
Homer and the dog of Hades (Iliad VIII 368 and Odyssey XI 623)
Directory of Open Access Journals (Sweden)
H. Thiry
1974-06-01
Full Text Available The dog of Hades, later called Cerberus, is in Homer just a watchdog of the house. It is neither a monstrous nor a deformed dog, but has the same characteristics as the ordinary watch dogs of Greek houses.
Code Generation for Protocols from CPN models Annotated with Pragmatics
DEFF Research Database (Denmark)
Simonsen, Kent Inge; Kristensen, Lars Michael; Kindler, Ekkart
of the same model and sufficiently detailed to serve as a basis for automated code generation when annotated with code generation pragmatics. Pragmatics are syntactical annotations designed to make the CPN models descriptive and to address the problem that models with enough details for generating code from them tend to be verbose and cluttered. Our code generation approach consists of three main steps, starting from a CPN model that the modeller has annotated with a set of pragmatics that make the protocol structure and the control-flow explicit. The first step is to compute for the CPN model a set of derived pragmatics that identify control-flow structures and operations, e.g., for sending and receiving packets, and for manipulating the state. In the second step, an abstract template tree (ATT) is constructed, providing an association between pragmatics and code generation templates. The ATT...
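The template-instantiation step can be sketched as follows; the pragmatic names and template strings here are invented for illustration and are not the tool's actual annotation language:

```python
# Hypothetical pragmatics mapped to code templates: each annotated
# model node selects a template, which is instantiated with the
# node's parameters to yield protocol code (a toy version of the
# pragmatics-to-template association held by the ATT).
TEMPLATES = {
    "send":    "channel.send({packet})",
    "receive": "{packet} = channel.receive()",
    "state":   "state['{field}'] = {value}",
}

def generate(annotated_nodes):
    return [TEMPLATES[n["pragmatic"]].format(**n["params"])
            for n in annotated_nodes]

nodes = [
    {"pragmatic": "state",   "params": {"field": "seq", "value": 0}},
    {"pragmatic": "send",    "params": {"packet": "data_pkt"}},
    {"pragmatic": "receive", "params": {"packet": "ack_pkt"}},
]
for line in generate(nodes):
    print(line)
```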
Nuclear model computer codes available from the NEA Data Bank
International Nuclear Information System (INIS)
Sartori, E.
1989-01-01
A library of computer codes for nuclear model calculations, a subset of a library covering the different aspects of reactor physics and technology applications, has been established at the NEA Data Bank. These codes are listed and classified according to the model used in the text. Copies of the programs can be obtained from the NEA Data Bank. (author). 8 refs
The GNASH preequilibrium-statistical nuclear model code
International Nuclear Information System (INIS)
Arthur, E. D.
1988-01-01
The following report is based on material presented in a series of lectures at the International Centre for Theoretical Physics, Trieste, designed to describe the GNASH preequilibrium-statistical model code and its use. An overview of the code is provided, with emphasis on its calculational capabilities and the theoretical models implemented in it. Two sample problems are discussed: the first deals with neutron reactions on 58Ni; the second illustrates the fission model capabilities implemented in the code and involves n + 235U reactions. Finally, a description is provided of theoretical model and code development currently underway. Examples of calculated results using these new capabilities are also given. 19 refs., 17 figs., 3 tabs
WWER radial reflector modeling by diffusion codes
International Nuclear Information System (INIS)
Petkov, P. T.; Mittag, S.
2005-01-01
The two approaches commonly used to describe WWER radial reflectors in diffusion codes, by albedo on the core-reflector boundary and by a ring of diffusive assembly-size nodes, are discussed. The advantages and disadvantages of the first approach are presented first; then Koebke's equivalence theory is outlined and its implementation for WWER radial reflectors is discussed. Results for the WWER-1000 reactor are presented. The boundary conditions on the outer reflector boundary are then discussed, as is the possibility of dividing the library into fuel assembly and reflector parts and generating each part with a separate code package. Finally, the homogenization errors for rodded assemblies are presented and discussed (Author)
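The albedo approach rests on standard diffusion-theory partial-current algebra; as a sketch (not taken from the paper), the net-current-to-flux ratio imposed at the core-reflector boundary follows from the albedo alpha = J-/J+:

```python
# With incoming/outgoing partial currents J- and J+, diffusion theory
# gives phi = 2(J+ + J-) and net current J = J+ - J-, so for albedo
# alpha = J-/J+ the boundary condition imposed on the core edge is
#   J/phi = (1 - alpha) / (2 * (1 + alpha)).
def current_to_flux_ratio(alpha):
    return (1.0 - alpha) / (2.0 * (1.0 + alpha))

print(current_to_flux_ratio(0.0))  # vacuum (no return current): 0.5
print(current_to_flux_ratio(1.0))  # perfect reflector: 0.0
```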
Expanding of reactor power calculation model of RELAP5 code
International Nuclear Information System (INIS)
Lin Meng; Yang Yanhua; Chen Yuqing; Zhang Hong; Liu Dingming
2007-01-01
To better analyze nuclear power transients in a rod-controlled reactor core with the RELAP5 code, a best-estimate nuclear reactor thermal-hydraulic system code, it is desirable to compute the nuclear power using not only the point neutron kinetics model but also a one-dimensional neutron kinetics model. Thus an existing one-dimensional reactor physics code was modified to couple its neutron kinetics model with the RELAP5 thermal-hydraulic model. A detailed example test proves that the coupling is valid and correct. (authors)
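The point neutron kinetics model mentioned above can be sketched with a single delayed-neutron group; the parameter values below are illustrative, not from the paper:

```python
# Point kinetics with one delayed group (illustrative parameters):
#   dn/dt = ((rho - beta)/Lambda) n + lam * c
#   dc/dt = (beta/Lambda) n - lam * c
beta, Lambda, lam = 0.0065, 1e-4, 0.08

def step(n, c, rho, dt):
    dn = ((rho - beta) / Lambda) * n + lam * c
    dc = (beta / Lambda) * n - lam * c
    return n + dt * dn, c + dt * dc

# At zero reactivity with equilibrium precursors c = beta*n/(Lambda*lam),
# the power stays steady under the explicit update.
n, c = 1.0, beta / (Lambda * lam) * 1.0
for _ in range(1000):
    n, c = step(n, c, rho=0.0, dt=1e-6)
print(round(n, 6))  # stays at 1.0
```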
RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1
International Nuclear Information System (INIS)
1995-08-01
The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and for operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal-hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. The RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes
Strangeness at high μB: Recent data from FOPI and HADES
Leifels, Yvonne
2018-02-01
Strangeness production in heavy-ion reactions at incident energies at or below the threshold in NN collisions gives access to the characteristics of bulk nuclear matter and to the properties of strange particles inside the hot and dense nuclear medium, such as potentials and interaction cross sections. At these energies strangeness is produced in multi-step processes, potentially via the excitation of intermediate heavy resonances. The amount of experimental data on strangeness production at these energies has increased substantially in recent years thanks to the FOPI and HADES experiments at SIS18 at GSI. Experimental data on K+ and K0 production support the assumption that particles with an s quark feel a moderate repulsive potential in the nuclear medium. The situation is not as clear in the case of K-. Here, the spectra and flow of K- mesons are influenced by the contribution of φ mesons, which decay into K+K- pairs with a branching ratio of 48.9%. Depending on the incident energy, up to 30% of all K- mesons measured in heavy-ion collisions originate from φ decays. Strangeness production yields, except the yield of Ξ-, are described by thermal hadronisation models. Experimental data measured not only in heavy-ion collisions but also in proton-induced reactions are described by sets of temperature T and baryon chemical potential μB that lie close to a universal freeze-out curve that also fits experimental data obtained at lower baryon chemical potential. Despite the good description of most particle production yields, the question of how this is achieved is still not settled and should be the focus of further investigations.
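The quoted φ feed-down is simple arithmetic: each φ → K+K- decay (branching ratio 48.9%) contributes one K-, so the feed-down fraction follows from the φ yield. A sketch with a hypothetical φ/K- yield ratio, not a measured value:

```python
# Fraction of measured K- that originate from phi decays, given the
# phi and total K- yields; BR(phi -> K+K-) = 48.9% as in the abstract.
BR_PHI_KK = 0.489

def kminus_from_phi_fraction(n_phi, n_kminus_total):
    return n_phi * BR_PHI_KK / n_kminus_total

# A hypothetical phi yield of ~0.6 of the K- yield would give ~29%
# feed-down, near the upper end of the range quoted in the abstract.
print(round(kminus_from_phi_fraction(0.6, 1.0), 3))
```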
In situ water and gas injection experiments performed in the Hades Underground Research Facility
International Nuclear Information System (INIS)
Volckaert, G.; Ortiz, L.; Put, M.
1995-01-01
The movement of water and gas through plastic clay is an important subject in the research at SCK-CEN on the possible disposal of high-level radioactive waste in the Boom clay layer at Mol. Since the construction of the Hades underground research facility in 1983, SCK-CEN has developed and installed numerous piezometers for geohydrological characterization and for in situ radionuclide migration experiments. In situ gas and water injection experiments have been performed at two different locations in the underground laboratory. The first location is a multi-filter piezometer installed vertically at the bottom of the shaft in 1986. The second location is a three-dimensional configuration of four horizontal multi-filter piezometers installed from the gallery. This piezometer configuration was designed for the MEGAS (Modelling and Experiments on GAS migration through argillaceous rocks) project and installed in 1992. It contains 29 filters at distances between 10 m and 15 m from the gallery in the clay. The gas injection experiments show that gas breakthrough occurs at a gas overpressure of about 0.6 MPa. The breakthrough occurs through the creation of gas pathways along the direction of lowest resistance, i.e. the zone of low effective stress resulting from the drilling of the borehole. The water injections performed in a filter not used for gas injection show that the flow of water is also influenced by the mechanical stress conditions: low effective stress leads to higher hydraulic conductivity. However, water overpressures up to 1.3 MPa did not cause hydrofracturing. Water injections performed in a filter previously used for gas injection show that the occluded gas hinders the water flow and reduces the hydraulic conductivity by a factor of two
Fuel behavior modeling using the MARS computer code
International Nuclear Information System (INIS)
Faya, S.C.S.; Faya, A.J.G.
1983-01-01
The fuel behaviour modeling code MARS was evaluated against experimental data. Two cases were selected: an early commercial PWR rod (Maine Yankee rod) and an experimental rod from the Canadian BWR program (Canadian rod). The MARS predictions are compared with experimental data and with predictions made by other fuel modeling codes. Improvements are suggested for some fuel behaviour models. The MARS results are satisfactory based on the data available. (Author) [pt
Coupling a Basin Modeling and a Seismic Code using MOAB
Yan, Mi
2012-06-02
We report on a demonstration of loose multiphysics coupling between a basin modeling code and a seismic code running on a large parallel machine. Multiphysics coupling, which is one critical capability for a high performance computing (HPC) framework, was implemented using the MOAB open-source mesh and field database. MOAB provides for code coupling by storing mesh data and input and output field data for the coupled analysis codes and interpolating the field values between different meshes used by the coupled codes. We found it straightforward to use MOAB to couple the PBSM basin modeling code and the FWI3D seismic code on an IBM Blue Gene/P system. We describe how the coupling was implemented and present benchmarking results for up to 8 racks of Blue Gene/P with 8192 nodes and MPI processes. The coupling code is fast compared to the analysis codes and it scales well up to at least 8192 nodes, indicating that a mesh and field database is an efficient way to implement loose multiphysics coupling for large parallel machines.
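The coupling pattern described above can be sketched in miniature; `np.interp` on 1-D meshes stands in for MOAB's mesh-and-field database and parallel interpolation service, which this sketch does not attempt to reproduce:

```python
import numpy as np

# Two "codes" discretize the same domain on different 1-D meshes; the
# coupler transfers a nodal field from one mesh to the other between
# time steps by interpolation, the essence of the loose coupling above.
mesh_a = np.linspace(0.0, 1.0, 11)   # coarse mesh (e.g. basin code)
mesh_b = np.linspace(0.0, 1.0, 37)   # fine mesh (e.g. seismic code)

def exchange(field_on_a):
    """Transfer a nodal field from mesh A to mesh B by interpolation."""
    return np.interp(mesh_b, mesh_a, field_on_a)

velocity = 1500.0 + 200.0 * mesh_a   # linear field, exactly transferable
on_b = exchange(velocity)
print(np.allclose(on_b, 1500.0 + 200.0 * mesh_b))  # True
```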
EM modeling for GPIR using 3D FDTD modeling codes
Energy Technology Data Exchange (ETDEWEB)
Nelson, S.D.
1994-10-01
An analysis of the one-, two-, and three-dimensional electrical characteristics of structural cement and concrete is presented. This work connects experimental efforts in characterizing cement and concrete in the frequency and time domains with the Finite Difference Time Domain (FDTD) modeling efforts for these substances. These efforts include electromagnetic (EM) modeling of simple lossless homogeneous materials with aggregate and targets, and the modeling of dispersive and lossy materials with aggregate and complex target geometries for Ground Penetrating Imaging Radar (GPIR). Two- and three-dimensional FDTD codes (developed at LLNL) were used for the modeling efforts. The purpose of the experimental and modeling efforts is to gain knowledge about the electrical properties of the concrete typically used in the construction industry for bridges and other load-bearing structures. The goal is to optimize the performance of a high-sample-rate impulse radar and data acquisition system, and to design an antenna system matched to the characteristics of this material. Results show agreement to within 2 dB between the amplitudes of the experimental and modeled data, while the frequency peaks correlate to within 10%, the differences being due to the unknown exact nature of the aggregate placement.
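The core FDTD leapfrog update used by such codes can be sketched in one dimension (vacuum, normalized units; the LLNL codes handle lossy, dispersive materials in 2-D and 3-D, which this sketch omits):

```python
import numpy as np

# Minimal 1-D FDTD (Yee) sketch: E and H live on staggered grids and
# are updated alternately; a soft Gaussian source excites a pulse.
nz, nt = 200, 150
ez = np.zeros(nz)
hy = np.zeros(nz - 1)
courant = 0.5  # dt*c/dz; must be <= 1 for stability in 1-D

for t in range(nt):
    hy += courant * (ez[1:] - ez[:-1])          # H half-step update
    ez[1:-1] += courant * (hy[1:] - hy[:-1])    # E half-step update
    ez[100] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source

print(np.isfinite(ez).all())  # stable propagation: True
```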
RELAP5/MOD3 code coupling model
International Nuclear Information System (INIS)
Martin, R.P.; Johnsen, G.W.
1994-01-01
A new capability has been incorporated into RELAP5/MOD3 that enables the coupling of RELAP5/MOD3 to other computer codes. It has been designed to support analysis of the new advanced reactor concepts. Its user features rely solely on new RELAP5 "styled" input and the Parallel Virtual Machine (PVM) software, which facilitates process management and distributed communication in multiprocess problems. RELAP5/MOD3 manages the input processing, communication instruction, process synchronization, and its own send and receive data processing. This flexible capability requires that an explicit coupling be established, which updates boundary conditions at discrete time intervals. Two test cases are presented that demonstrate the functionality and applicability of this capability, and the issues involved in its use
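The explicit coupling scheme, in which boundary conditions are updated only at discrete synchronization times, can be sketched with two toy "codes"; here plain function evaluations stand in for the PVM message passing between real processes:

```python
# Explicit (loose) coupling sketch: each toy "code" advances one
# synchronization interval using the partner's boundary value frozen
# from the previous exchange, then both exchange updated values.
def run_coupled(t_end, dt_sync):
    t, bc_a, bc_b = 0.0, 1.0, 0.0
    while t < t_end:
        bc_a_new = 0.5 * (bc_a + bc_b)   # stand-in for code A's step
        bc_b_new = 0.5 * (bc_a + bc_b)   # stand-in for code B's step
        bc_a, bc_b = bc_a_new, bc_b_new  # explicit exchange at sync time
        t += dt_sync
    return bc_a, bc_b

print(run_coupled(t_end=1.0, dt_sync=0.1))  # both relax to 0.5
```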
Case studies in Gaussian process modelling of computer codes
International Nuclear Information System (INIS)
Kennedy, Marc C.; Anderson, Clive W.; Conti, Stefano; O'Hagan, Anthony
2006-01-01
In this paper we present a number of recent applications in which an emulator of a computer code is created using a Gaussian process model. Tools are then applied to the emulator to perform sensitivity analysis and uncertainty analysis. Sensitivity analysis is used both as an aid to model improvement and as a guide to how much the output uncertainty might be reduced by learning about specific inputs. Uncertainty analysis allows us to reflect output uncertainty due to unknown input parameters, when the finished code is used for prediction. The computer codes themselves are currently being developed within the UK Centre for Terrestrial Carbon Dynamics
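A minimal Gaussian-process emulator of the kind described can be sketched with a squared-exponential kernel and noise-free "code runs"; the kernel parameters here are fixed by hand, whereas a real study would estimate them from the design:

```python
import numpy as np

# GP emulator sketch: condition a zero-mean GP on a small set of
# expensive "code runs" and predict the mean at new inputs. With no
# observation noise, the emulator interpolates the training runs.
def kernel(x1, x2, length=0.1):
    return np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / length) ** 2)

def emulate(x_train, y_train, x_new, jitter=1e-10):
    K = kernel(x_train, x_train) + jitter * np.eye(len(x_train))
    weights = np.linalg.solve(K, y_train)
    return kernel(x_new, x_train) @ weights

x = np.linspace(0.0, 1.0, 8)   # design points ("code runs")
y = np.sin(2 * np.pi * x)      # stand-in for expensive code output
print(np.allclose(emulate(x, y, x), y, atol=1e-6))  # interpolates: True
```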
The HADES-RICH upgrade using Hamamatsu H12700 MAPMTs with DiRICH FEE + Readout
Patel, V.; Traxler, M.
2018-03-01
The High Acceptance Di-Electron Spectrometer (HADES) has been operational since the year 2000 and uses a hadron-blind RICH detector for electron identification. The RICH photon detector is currently being replaced by Hamamatsu H12700 MAPMTs with a readout system based on the DiRICH front-end module. The electronic readout chain is being developed as a joint effort of the HADES, CBM and PANDA collaborations and will also be used in the photon detectors of the upcoming Compressed Baryonic Matter (CBM) and PANDA experiments at FAIR. This article gives a brief overview of the photomultipliers and their quality assurance test measurements, as well as first measurements with the new DiRICH front-end module in its final configuration.
ATHENA code manual. Volume 1. Code structure, system models, and solution methods
International Nuclear Information System (INIS)
Carlson, K.E.; Roth, P.A.; Ransom, V.H.
1986-09-01
The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code has been developed to perform transient simulation of the thermal hydraulic systems which may be found in fusion reactors, space reactors, and other advanced systems. A generic modeling approach is utilized which permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of a complete facility. Several working fluids are available to be used in one or more interacting loops. Different loops may have different fluids with thermal connections between loops. The modeling theory and associated numerical schemes are documented in Volume I in order to acquaint the user with the modeling base and thus aid effective use of the code. The second volume contains detailed instructions for input data preparation
The HADES mission concept - astrobiological survey of Jupiter's icy moon Europa
Böttcher, Thomas; Huber, Liliane; Le Corre, Lucille; Leitner, Johannes; McCarthy, David; Nilsson, Ricky; Teixeira, Carlos; Vaquer Araujo, Sergi; Wilson, Rebecca C.; Adjali, Fatah; Altenburg, Martin; Briani, Giacomo; Buchas, Peter; Le Postollec, Aurélie; Meier, Teresa
2017-01-01
The HADES Europa mission concept aims to provide a framework for an astrobiological in-depth investigation of the Jupiter moon Europa, relying on existing technologies and feasibility. This mission study proposes a system consisting of an orbiter, lander and cryobot as a platform for detailed exploration of Europa. While the orbiter will investigate the presence of a liquid ocean and characterize Europa's internal structure, the lander will survey local dynamics of the ice layer and the surfa...
Energy Technology Data Exchange (ETDEWEB)
Lalik, Rafal Tomasz
2016-06-02
pp center-of-mass system and pin down the contributions of the various production channels, with special focus on the resonances contributing to the total production. The first goal is achieved by finding the momentum and spatial distributions of the produced Λs and applying corrections for detector efficiency and acceptance to the data. The second can be verified by comparing the obtained results with the Λ production model applied in the analysis. In the end, a total Λ-hyperon production cross section of σ(pp→Λ+X) = 201.3 ± 1.3 (+6.0/-7.3) ± 8.1 (+0.4/-0.4) μb, with the angular anisotropy expressed in terms of Legendre polynomials, has been extracted from the data. In the first chapter of this part, the general physics motivating this analysis is presented, together with a brief description of the HADES detector. In the next chapter, the procedures for reconstructing the signal from the collected experimental data are described. The third section describes the production model used for comparison with the data and for preparing all data corrections. The fourth section presents the differential analysis of the data and of the model; at the end, a comparison of the two data samples is presented together with the final result of the cross-section extraction.
International Nuclear Information System (INIS)
Lalik, Rafal Tomasz
2016-01-01
of mass system and pin down contributions of various production channels, with special focus on the resonances contributing to the total production. The first goal is achieved by finding the momentum and spatial distributions of the produced Λs and applying corrections for detector efficiency and acceptance to the data. The second is verified by comparing the obtained results with the Λ production model applied in the analysis. In the end, a total Λ-hyperon production cross section of σ(pp→Λ+X) = 201.3 ± 1.3 (+6.0/−7.3) ± 8.1 (+0.4/−0.4) μb, together with the angular anisotropy expressed in terms of Legendre polynomials, has been extracted from the data. The first chapter of this part presents the general physics motivating this analysis and a brief description of the HADES detector. The next chapter describes the procedures for reconstructing the signal from the collected experimental data. The third section describes the production model used for comparison with the data and for preparing all data corrections. The fourth section presents the differential analysis of the data and of the model; finally, the two data samples are compared and the final results of the cross-section extraction are presented.
CMCpy: Genetic Code-Message Coevolution Models in Python.
Becich, Peter J; Stark, Brian P; Bhat, Harish S; Ardell, David H
2013-01-01
Code-message coevolution (CMC) models represent coevolution of a genetic code and a population of protein-coding genes ("messages"). Formally, CMC models are sets of quasispecies coupled together for fitness through a shared genetic code. Although CMC models display plausible explanations for the origin of multiple genetic code traits by natural selection, useful modern implementations of CMC models are not currently available. To meet this need we present CMCpy, an object-oriented Python API and command-line executable front-end that can reproduce all published results of CMC models. CMCpy implements multiple solvers for leading eigenpairs of quasispecies models. We also present novel analytical results that extend and generalize applications of perturbation theory to quasispecies models and pioneer the application of a homotopy method for quasispecies with non-unique maximally fit genotypes. Our results therefore facilitate the computational and analytical study of a variety of evolutionary systems. CMCpy is free open-source software available from http://pypi.python.org/pypi/CMCpy/.
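The leading-eigenpair computation that CMCpy performs for quasispecies models can be sketched with simple power iteration. This is an illustrative stand-in, not CMCpy's actual solver or API; the mutation matrix and fitness values below are toy numbers.

```python
import numpy as np

# Power iteration for the leading eigenpair of a quasispecies matrix
# W[i, j] = Q[i, j] * f[j] (mutation matrix Q, genotype fitnesses f).
def leading_eigenpair(W, tol=1e-12, max_iter=10_000):
    x = np.ones(W.shape[0]) / W.shape[0]
    lam_old = 0.0
    for _ in range(max_iter):
        y = W @ x
        lam = np.linalg.norm(y)     # eigenvalue estimate (Perron root)
        x = y / lam
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, x

Q = np.array([[0.9, 0.1],
              [0.1, 0.9]])          # toy mutation matrix
f = np.array([2.0, 1.0])            # toy fitnesses
lam, x = leading_eigenpair(Q * f)   # broadcasting gives Q[i,j]*f[j]
```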
Noise Residual Learning for Noise Modeling in Distributed Video Coding
DEFF Research Database (Denmark)
Luong, Huynh Van; Forchhammer, Søren
2012-01-01
Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The noise model is one of the inherently difficult challenges in DVC. This paper considers Transform Domain Wyner-Ziv (TDWZ) coding and proposes...... noise residual learning techniques that take residues from previously decoded frames into account to estimate the decoding residue more precisely. Moreover, the techniques calculate a number of candidate noise residual distributions within a frame to adaptively optimize the soft side information during...
A fully scalable motion model for scalable video coding.
Kao, Meng-Ping; Nguyen, Truong
2008-06-01
Motion information scalability is an important requirement for a fully scalable video codec, especially for decoding scenarios with low bit rates or small image sizes. Several scalable coding techniques for motion information have been proposed, including progressive motion vector precision coding and motion vector field layered coding. However, it remains unclear which functionalities motion scalability requires and how it can collaborate flawlessly with other scalabilities, such as spatial, temporal, and quality, in a scalable video codec. In this paper, we first define the functionalities required for motion scalability. Based on these requirements, a fully scalable motion model is proposed, along with tailored encoding techniques to minimize the coding overhead of scalability. Moreover, an associated rate-distortion-optimized motion estimation algorithm is provided to achieve better efficiency across various decoding scenarios. Simulation results verify the superiority of the proposed scalable motion model over nonscalable ones.
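A rough sketch of the progressive motion vector precision idea mentioned in the abstract (an assumed interpretation, not the paper's exact scheme, and the helper names are hypothetical): the integer part of a motion vector goes in a base layer, and each enhancement layer adds one bit of fractional precision.

```python
# Base layer carries the integer part of the motion vector; each
# enhancement layer adds one bit of fractional precision
# (half-pel, then quarter-pel, ...). Nonnegative vectors only.
def encode_layers(mv, n_frac_bits=2):
    scaled = round(mv * (1 << n_frac_bits))           # quarter-pel units
    base = scaled >> n_frac_bits
    frac = scaled & ((1 << n_frac_bits) - 1)
    bits = [(frac >> (n_frac_bits - 1 - i)) & 1 for i in range(n_frac_bits)]
    return base, bits

def decode(base, bits):
    frac = 0.0
    for i, b in enumerate(bits):                      # each bit halves the error
        frac += b / (1 << (i + 1))
    return base + frac

base, bits = encode_layers(3.75)      # base 3, refinement bits [1, 1]
half_pel = decode(base, bits[:1])     # truncated stream decodes to 3.5
full = decode(base, bits)             # all layers decode to 3.75
```

A decoder that receives only the base layer (or a prefix of the refinement bits) still reconstructs a usable, coarser motion vector, which is the point of motion scalability.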
Modeling Guidelines for Code Generation in the Railway Signaling Context
Ferrari, Alessio; Bacherini, Stefano; Fantechi, Alessandro; Zingoni, Niccolo
2009-01-01
Modeling guidelines constitute one of the fundamental cornerstones of Model-Based Development. Their relevance is essential when dealing with code generation in the safety-critical domain. This article presents the experience of a railway signaling systems manufacturer on this issue. The introduction of Model-Based Development (MBD) and code generation in the industrial safety-critical sector created a crucial paradigm shift in the development process of dependable systems. While traditional software development focuses on the code, with MBD practices the focus shifts to model abstractions. The change has fundamental implications for safety-critical systems, which still need to guarantee a high degree of confidence at the code level. Use of the Simulink/Stateflow platform for modeling, which is a de facto standard in control software development, does not by itself ensure production of high-quality dependable code. Companies have addressed this issue by defining modeling rules that impose restrictions on the usage of design tool components, in order to enable production of qualified code. The MAAB Control Algorithm Modeling Guidelines (MathWorks Automotive Advisory Board) [3] are a well-established set of publicly available rules for modeling with Simulink/Stateflow. This set of recommendations was developed by a group of OEMs and suppliers in the automotive sector with the objective of enforcing and easing the usage of the MathWorks tools within the automotive industry. The guidelines were published in 2001 and afterwards revised in 2007 in order to integrate some additional rules developed by the Japanese division of MAAB [5]. The scope of the current edition of the guidelines ranges from model maintainability and readability to code generation issues. The rules are conceived as a reference baseline and therefore need to be tailored to the characteristics of each industrial context. Customization of these
A Single Model Explains both Visual and Auditory Precortical Coding
Shan, Honghao; Tong, Matthew H.; Cottrell, Garrison W.
2016-01-01
Precortical neural systems encode information collected by the senses, but the driving principles of the encoding used have remained a subject of debate. We present a model of retinal coding that is based on three constraints: information preservation, minimization of the neural wiring, and response equalization. The resulting novel version of sparse principal components analysis successfully captures a number of known characteristics of the retinal coding system, such as center-surround rece...
MARS CODE MANUAL VOLUME V: Models and Correlations
International Nuclear Information System (INIS)
Chung, Bub Dong; Bae, Sung Won; Lee, Seung Wook; Yoon, Churl; Hwang, Moon Kyu; Kim, Kyung Doo; Jeong, Jae Jun
2010-02-01
The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art, realistic thermal-hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF code. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the equation of state (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This models and correlations manual provides a complete, detailed list of the thermal-hydraulic models used in MARS, so the report should be very useful for code users. The overall structure of the manual is modeled on that of the RELAP5 manual, and as such its layout is very similar to RELAP5's. This similarity to the RELAP5 input is intentional, as the input scheme allows minimal modification between RELAP5 and MARS3.1 inputs. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible.
Fuel rod modelling during transients: The TOUTATIS code
International Nuclear Information System (INIS)
Bentejac, F.; Bourreau, S.; Brochard, J.; Hourdequin, N.; Lansiart, S.
2001-01-01
The TOUTATIS code is devoted to the simulation of local PCI phenomena, in conjunction with the METEOR code for the global behaviour of the fuel rod. More specifically, the TOUTATIS objective is to evaluate the mechanical stresses on the cladding during a power transient, thus predicting its behaviour in terms of stress corrosion cracking. Based upon the finite element computation code CASTEM 2000, TOUTATIS is a set of modules written in a macro language. The aim of this paper is to present both code modules: the axisymmetric two-dimensional module, modeling a single-block pellet, and the three-dimensional module, modeling a radially fragmented pellet. Having shown the boundary conditions and the algorithms used, the application will be illustrated by: a short presentation of the performance of the two-dimensional axisymmetric modeling, as well as its limits; and the enhancement due to the three-dimensional modeling, displayed through sensitivity studies on the geometry, in this case the pellet height/diameter ratio. Finally, we will show the ease of development inherent to the CASTEM 2000 system by describing the process of a modeling enhancement: adding the possibility of axial (horizontal) cracking of the pellet. In conclusion, the future improvements planned for the code are described. (author)
Compositional Model Based Fisher Vector Coding for Image Classification.
Liu, Lingqiao; Wang, Peng; Shen, Chunhua; Wang, Lei; Hengel, Anton van den; Wang, Chao; Shen, Heng Tao
2017-12-01
Deriving from the gradient vector of a generative model of local features, Fisher vector coding (FVC) has been identified as an effective coding method for image classification. Most, if not all, FVC implementations employ the Gaussian mixture model (GMM) as the generative model for local features. However, the representative power of a GMM can be limited because it essentially assumes that local features can be characterized by a fixed number of feature prototypes, and the number of prototypes is usually small in FVC. To alleviate this limitation, in this work, we break the convention which assumes that a local feature is drawn from one of a few Gaussian distributions. Instead, we adopt a compositional mechanism which assumes that a local feature is drawn from a Gaussian distribution whose mean vector is composed as a linear combination of multiple key components, and the combination weight is a latent random variable. In doing so we greatly enhance the representative power of the generative model underlying FVC. To implement our idea, we design two particular generative models following this compositional approach. In our first model, the mean vector is sampled from the subspace spanned by a set of bases and the combination weight is drawn from a Laplace distribution. In our second model, we further assume that a local feature is composed of a discriminative part and a residual part. As a result, a local feature is generated by the linear combination of discriminative part bases and residual part bases. The decomposition of the discriminative and residual parts is achieved via the guidance of a pre-trained supervised coding method. By calculating the gradient vector of the proposed models, we derive two new Fisher vector coding strategies. The first is termed Sparse Coding-based Fisher Vector Coding (SCFVC) and can be used as the substitute of traditional GMM based FVC. The second is termed Hybrid Sparse Coding-based Fisher vector coding (HSCFVC) since it
The JCSS probabilistic model code: Experience and recent developments
Chryssanthopoulos, M.; Diamantidis, D.; Vrouwenvelder, A.C.W.M.
2003-01-01
The JCSS Probabilistic Model Code (JCSS-PMC) has been available for public use on the JCSS website (www.jcss.ethz.ch) for over two years. During this period, several examples have been worked out and new probabilistic models have been added. Since the engineering community has already been exposed
An improved thermal model for the computer code NAIAD
International Nuclear Information System (INIS)
Rainbow, M.T.
1982-12-01
An improved thermal model, based on the concept of heat slabs, has been incorporated as an option into the thermal-hydraulic computer code NAIAD. The heat slabs are one-dimensional thermal conduction models with temperature-independent thermal properties, which may be internal and/or external to the fluid. Thermal energy may be added to or removed from the fluid via heat slabs, and passed across the external boundary of external heat slabs at a rate which is a linear function of the external surface temperatures. The code input for the new option has been restructured to simplify data preparation. A full description of the current input requirements is presented.
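A heat slab of the kind described, one-dimensional conduction with temperature-independent properties, can be sketched with an explicit finite-difference update. NAIAD's actual numerics are not specified in the abstract, so this is purely illustrative.

```python
# One explicit finite-difference step of 1-D transient conduction in a
# slab with fixed (Dirichlet) face temperatures.
def step(T, alpha, dx, dt):
    r = alpha * dt / dx**2          # must satisfy r <= 0.5 for stability
    Tn = T[:]
    for i in range(1, len(T) - 1):
        Tn[i] = T[i] + r * (T[i+1] - 2.0*T[i] + T[i-1])
    return Tn

T = [100.0] + [0.0] * 9             # hot left face, initially cold slab
for _ in range(200):
    T = step(T, alpha=1e-5, dx=0.01, dt=1.0)   # r = 0.1, stable
```

After the loop the profile decreases monotonically from the hot face to the cold face, as expected for pure conduction with fixed boundaries.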
Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach
Directory of Open Access Journals (Sweden)
W. Bastiaan Kleijn
2005-06-01
Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
Thermal-hydraulic models and correlations for the SPACE code
International Nuclear Information System (INIS)
Kim, K. D.; Lee, S. W.; Bae, S. W.; Moon, S. K.; Kim, S. Y.; Lee, Y. H.
2009-01-01
The SPACE code, which is based on a multi-dimensional two-fluid, three-field model, is under development to be used for licensing future pressurized water reactors. Several research and industrial organizations are participating in the development program, including KAERI, KHNP, KOPEC, KNF, and KEPRI. KAERI has been assigned to develop the thermal-hydraulic models and correlations required to close the field equations. This task can be categorized into five packages: i) a flow regime selection package, ii) a wall and interfacial friction package, iii) an interfacial heat and mass transfer package, iv) a droplet entrainment and de-entrainment package, and v) a wall heat transfer package. Since the SPACE code, unlike other major best-estimate nuclear reactor system analysis codes (RELAP5, TRAC-M and CATHARE, which consider only liquid and vapor phases), incorporates a dispersed liquid field in addition to the vapor and continuous liquid fields, interfacial interaction models between the continuous liquid, dispersed liquid and vapor phases have to be developed separately. Proper physical models can significantly improve the accuracy of predictions of nuclear reactor system behavior under many different transient conditions, because those models constitute the source terms of the governing equations. In this paper, the development program for the physical models and correlations of the SPACE code is introduced briefly.
COCOA code for creating mock observations of star cluster models
Askar, Abbas; Giersz, Mirek; Pych, Wojciech; Dalessandro, Emanuele
2018-04-01
We introduce and present results from the COCOA (Cluster simulatiOn Comparison with ObservAtions) code that has been developed to create idealized mock photometric observations using results from numerical simulations of star cluster evolution. COCOA is able to present the output of realistic numerical simulations of star clusters carried out using Monte Carlo or N-body codes in a way that is useful for direct comparison with photometric observations. In this paper, we describe the COCOA code and demonstrate its different applications by utilizing globular cluster (GC) models simulated with the MOCCA (MOnte Carlo Cluster simulAtor) code. COCOA is used to synthetically observe these different GC models with optical telescopes, perform point spread function photometry, and subsequently produce observed colour-magnitude diagrams. We also use COCOA to compare the results from synthetic observations of a cluster model that has the same age and metallicity as the Galactic GC NGC 2808 with observations of the same cluster carried out with a 2.2 m optical telescope. We find that COCOA can effectively simulate realistic observations and recover photometric data. COCOA has numerous scientific applications that may be helpful for both theoreticians and observers who work on star clusters. Plans for further improving and developing the code are also discussed in this paper.
Code Development for Control Design Applications: Phase I: Structural Modeling
International Nuclear Information System (INIS)
Bir, G. S.; Robinson, M.
1998-01-01
The design of integrated controls for a complex system like a wind turbine relies on a system model in an explicit format, e.g., state-space format. Current wind turbine codes focus on turbine simulation and not on system characterization, which is desired for controls design as well as applications like operating turbine model analysis, optimal design, and aeroelastic stability analysis. This paper reviews structural modeling that comprises three major steps: formation of component equations, assembly into system equations, and linearization
Verification and Validation of Heat Transfer Model of AGREE Code
Energy Technology Data Exchange (ETDEWEB)
Tak, N. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Seker, V.; Drzewiecki, T. J.; Downar, T. J. [Department of Nuclear Engineering and Radiological Sciences, Univ. of Michigan, Michigan (United States); Kelly, J. M. [US Nuclear Regulatory Commission, Washington (United States)
2013-05-15
The AGREE code was originally developed as a multi-physics simulation code to perform design and safety analysis of Pebble Bed Reactors (PBR). Currently, additional capability for the analysis of Prismatic Modular Reactor (PMR) cores is in progress. The newly implemented fluid model for a PMR core is based on a subchannel approach, which has been widely used in the analysis of light water reactor (LWR) cores. A hexagonal fuel (or graphite) block is discretized into triangular prism nodes having effective conductivities. Then, a meso-scale heat transfer model is applied to the unit cell geometry of a prismatic fuel block. Both unit cell geometries of multi-hole and pin-in-hole types of prismatic fuel blocks are considered in AGREE. The main objective of this work is to verify and validate the heat transfer model newly implemented for a PMR core in the AGREE code. The measured data from the HENDEL experiment were used for the validation of the heat transfer model for a pin-in-hole fuel block. However, the HENDEL tests were limited to steady-state conditions of pin-in-hole fuel blocks, and no experimental data are available on heat transfer in multi-hole fuel blocks. Therefore, numerical benchmarks using conceptual problems are considered to verify the heat transfer model of AGREE for multi-hole fuel blocks as well as transient conditions. The CORONA and GAMMA+ codes were used to compare the numerical results. In this work, the verification and validation study was performed for the heat transfer model of the AGREE code using the HENDEL experiment and numerical benchmarks of selected conceptual problems. The results of the present work show that the heat transfer model of AGREE is accurate and reliable for prismatic fuel blocks. Further validation of AGREE is in progress for a whole-reactor problem using HTTR safety test data such as control rod withdrawal tests and loss-of-forced-convection tests.
Code Shift: Grid Specifications and Dynamic Wind Turbine Models
DEFF Research Database (Denmark)
Ackermann, Thomas; Ellis, Abraham; Fortmann, Jens
2013-01-01
Grid codes (GCs) and dynamic wind turbine (WT) models are key tools to allow increasing renewable energy penetration without challenging security of supply. In this article, the state of the art and the further development of both tools are discussed, focusing on the European and North American e...
Testing geochemical modeling codes using New Zealand hydrothermal systems
International Nuclear Information System (INIS)
Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.
1993-12-01
Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of selected portions of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will: (1) ensure that we are providing adequately for all significant processes occurring in natural systems; (2) determine the adequacy of the mathematical descriptions of the processes; (3) check the adequacy and completeness of thermodynamic data as a function of temperature for solids, aqueous species and gases; and (4) determine the sensitivity of model results to the manner in which the problem is conceptualized by the user and then translated into constraints in the code input. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions. The kinetics of silica precipitation in EQ6 will be tested using field data from silica-lined drain channels carrying hot water away from the Wairakei borefield
Dynamic Model on the Transmission of Malicious Codes in Network
Bimal Kumar Mishra; Apeksha Prajapati
2013-01-01
This paper introduces a differential susceptibility e-epidemic model S_iIR (susceptible class 1 for viruses (S1), susceptible class 2 for worms (S2), susceptible class 3 for Trojan horses (S3), infectious (I), recovered (R)) for the transmission of malicious codes in a computer network. We derive the formula for the reproduction number (R0) to study the spread of malicious codes in a computer network. We show that the infection-free equilibrium is globally asymptotically stable and the endemic equilibrium...
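The threshold role of the reproduction number can be illustrated with a minimal single-susceptible-class SIR sketch. The paper's S_iIR model has three susceptible classes and its own dynamics; this toy version, with invented parameter values, only shows why R0 > 1 means the malicious code spreads.

```python
# Minimal SIR sketch: beta = infection rate, gamma = recovery rate,
# and R0 = beta / gamma. Fractions of the network population.
def simulate(beta, gamma, s0=0.99, i0=0.01, dt=0.01, steps=5_000):
    s, i, r = s0, i0, 0.0
    for _ in range(steps):
        new_inf = beta * s * i * dt     # forward-Euler integration
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

r0 = 3.0 / 1.0                          # R0 = 3 > 1: the outbreak spreads
s, i, r = simulate(beta=3.0, gamma=1.0)
```

With R0 = 3 the epidemic runs its course: by the end of the simulated window most hosts have passed through the infectious class into the recovered class.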
A compressible Navier-Stokes code for turbulent flow modeling
Coakley, T. J.
1984-01-01
An implicit, finite volume code for solving two dimensional, compressible turbulent flows is described. Second order upwind differencing of the inviscid terms of the equations is used to enhance stability and accuracy. A diagonal form of the implicit algorithm is used to improve efficiency. Several zero and two equation turbulence models are incorporated to study their impact on overall flow modeling accuracy. Applications to external and internal flows are discussed.
Thermohydraulic modeling of nuclear thermal rockets: The KLAXON code
International Nuclear Information System (INIS)
Hall, M.L.; Rider, W.J.; Cappiello, M.W.
1992-01-01
The hydrogen flow from the storage tanks, through the reactor core, and out the nozzle of a Nuclear Thermal Rocket is an integral design consideration. To provide an analysis and design tool for this phenomenon, the KLAXON code is being developed. A shock-capturing numerical methodology is used to model the gas flow (the Harten, Lax, and van Leer method, as implemented by Einfeldt). Preliminary results of modeling the flow through the reactor core and nozzle are given in this paper
An improved steam generator model for the SASSYS code
International Nuclear Information System (INIS)
Pizzica, P.A.
1989-01-01
A new steam generator model has been developed for the SASSYS computer code, which analyzes accident conditions in a liquid-metal-cooled fast reactor. It has been incorporated into the new SASSYS balance-of-plant model, but it can also function as a stand-alone model. The model provides a full solution of the steady-state condition before the transient calculation begins for given sodium and water flow rates, inlet and outlet sodium temperatures, and inlet enthalpy and region lengths on the water side
The 2010 fib Model Code for Structural Concrete: A new approach to structural engineering
Walraven, J.C.; Bigaj-Van Vliet, A.
2011-01-01
The fib Model Code is a recommendation for the design of reinforced and prestressed concrete which is intended to be a guiding document for future codes. Model Codes have been published before, in 1978 and 1990. The draft for fib Model Code 2010 was published in May 2010. The most important new
Plutonium explosive dispersal modeling using the MACCS2 computer code
Energy Technology Data Exchange (ETDEWEB)
Steele, C.M.; Wald, T.L.; Chanin, D.I.
1998-11-01
The purpose of this paper is to derive the necessary parameters to be used to establish a defensible methodology to perform explosive dispersal modeling of respirable plutonium using Gaussian methods. A particular code, MACCS2, has been chosen for this modeling effort due to its application of sophisticated meteorological statistical sampling in accordance with the philosophy of Nuclear Regulatory Commission (NRC) Regulatory Guide 1.145, "Atmospheric Dispersion Models for Potential Accident Consequence Assessments at Nuclear Power Plants". A second advantage supporting the selection of the MACCS2 code for modeling purposes is that meteorological data sets are readily available at most Department of Energy (DOE) and NRC sites. This particular MACCS2 modeling effort focuses on the calculation of respirable doses and not ground deposition. Once the necessary parameters for the MACCS2 modeling are developed and presented, the model is benchmarked against empirical test data from the Double Tracks shot of project Roller Coaster (Shreve 1965) and applied to a hypothetical plutonium explosive dispersal scenario. Further modeling with the MACCS2 code is performed to determine a defensible method of treating the effects of building structure interaction on the respirable fraction distribution as a function of height. These results are related to the Clean Slate 2 and Clean Slate 3 bunkered shots of Project Roller Coaster. Lastly a method is presented to determine the peak 99.5% sector doses on an irregular site boundary in the manner specified in NRC Regulatory Guide 1.145 (1983). Parametric analyses are performed on the major analytic assumptions in the MACCS2 model to define the potential errors that are possible in using this methodology.
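The Gaussian methods referred to can be illustrated with the textbook ground-level, centerline χ/Q plume formula. The linear σ(x) fits below are placeholders for illustration, not MACCS2's stability-class parameterizations.

```python
import math

# Ground-level, centerline chi/Q for a Gaussian plume with full ground
# reflection. sigma_y = a_y * x and sigma_z = a_z * x are crude
# placeholder fits, not a real stability-class parameterization.
def chi_over_q(x, u, h, a_y=0.08, a_z=0.06):
    """chi/Q (s/m^3) at downwind distance x (m), wind speed u (m/s),
    release height h (m)."""
    sigma_y, sigma_z = a_y * x, a_z * x
    return (1.0 / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-h * h / (2.0 * sigma_z * sigma_z)))

# Ground release: concentration falls off with downwind distance.
near, far = chi_over_q(100.0, 2.0, 0.0), chi_over_q(200.0, 2.0, 0.0)
```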
Plutonium explosive dispersal modeling using the MACCS2 computer code
International Nuclear Information System (INIS)
Steele, C.M.; Wald, T.L.; Chanin, D.I.
1998-01-01
The purpose of this paper is to derive the necessary parameters to be used to establish a defensible methodology to perform explosive dispersal modeling of respirable plutonium using Gaussian methods. A particular code, MACCS2, has been chosen for this modeling effort due to its application of sophisticated meteorological statistical sampling in accordance with the philosophy of Nuclear Regulatory Commission (NRC) Regulatory Guide 1.145, ''Atmospheric Dispersion Models for Potential Accident Consequence Assessments at Nuclear Power Plants''. A second advantage supporting the selection of the MACCS2 code for modeling purposes is that meteorological data sets are readily available at most Department of Energy (DOE) and NRC sites. This particular MACCS2 modeling effort focuses on the calculation of respirable doses and not ground deposition. Once the necessary parameters for the MACCS2 modeling are developed and presented, the model is benchmarked against empirical test data from the Double Tracks shot of project Roller Coaster (Shreve 1965) and applied to a hypothetical plutonium explosive dispersal scenario. Further modeling with the MACCS2 code is performed to determine a defensible method of treating the effects of building structure interaction on the respirable fraction distribution as a function of height. These results are related to the Clean Slate 2 and Clean Slate 3 bunkered shots of Project Roller Coaster. Lastly a method is presented to determine the peak 99.5% sector doses on an irregular site boundary in the manner specified in NRC Regulatory Guide 1.145 (1983). Parametric analyses are performed on the major analytic assumptions in the MACCS2 model to define the potential errors that are possible in using this methodology
The WARP Code: Modeling High Intensity Ion Beams
Grote, David P.; Friedman, Alex; Vay, Jean-Luc; Haber, Irving
2005-03-01
The Warp code, developed for heavy-ion driven inertial fusion energy studies, is used to model high intensity ion (and electron) beams. Significant capability has been incorporated in Warp, allowing nearly all sections of an accelerator to be modeled, beginning with the source. Warp has as its core an explicit, three-dimensional, particle-in-cell model. Alongside this is a rich set of tools for describing the applied fields of the accelerator lattice, and embedded conducting surfaces (which are captured at sub-grid resolution). Also incorporated are models with reduced dimensionality: an axisymmetric model and a transverse "slice" model. The code takes advantage of modern programming techniques, including object orientation, parallelism, and scripting (via Python). It is at the forefront in the use of the computational technique of adaptive mesh refinement, which has been particularly successful in the area of diode and injector modeling, both steady-state and time-dependent. In the presentation, some of the major aspects of Warp will be overviewed, especially those that could be useful in modeling ECR sources. Warp has been benchmarked against both theory and experiment. Recent results will be presented showing good agreement of Warp with experimental results from the STS500 injector test stand. Additional information can be found on the web page http://hif.lbl.gov/theory/WARP_summary.html.
The WARP Code: Modeling High Intensity Ion Beams
International Nuclear Information System (INIS)
Grote, David P.; Friedman, Alex; Vay, Jean-Luc; Haber, Irving
2005-01-01
The Warp code, developed for heavy-ion driven inertial fusion energy studies, is used to model high intensity ion (and electron) beams. Significant capability has been incorporated in Warp, allowing nearly all sections of an accelerator to be modeled, beginning with the source. Warp has as its core an explicit, three-dimensional, particle-in-cell model. Alongside this is a rich set of tools for describing the applied fields of the accelerator lattice, and embedded conducting surfaces (which are captured at sub-grid resolution). Also incorporated are models with reduced dimensionality: an axisymmetric model and a transverse ''slice'' model. The code takes advantage of modern programming techniques, including object orientation, parallelism, and scripting (via Python). It is at the forefront in the use of the computational technique of adaptive mesh refinement, which has been particularly successful in the area of diode and injector modeling, both steady-state and time-dependent. In the presentation, some of the major aspects of Warp will be overviewed, especially those that could be useful in modeling ECR sources. Warp has been benchmarked against both theory and experiment. Recent results will be presented showing good agreement of Warp with experimental results from the STS500 injector test stand
Low mass dielectrons radiated off cold nuclear matter measured with HADES
Directory of Open Access Journals (Sweden)
Lorenz M.
2014-03-01
The High Acceptance DiElectron Spectrometer (HADES) [1] is installed at the Helmholtzzentrum für Schwerionenforschung (GSI) accelerator facility in Darmstadt. It investigates dielectron emission and strangeness production in the 1-3 AGeV regime. A recent experiment series focuses on medium modifications of light vector mesons in cold nuclear matter. In two runs, p+p and p+Nb reactions were investigated at 3.5 GeV beam energy; about 9·10⁹ events have been registered. In contrast to other experiments, the high acceptance of HADES allows for a detailed analysis of electron pairs with low momenta relative to the nuclear matter, where modifications of the spectral functions of vector mesons are predicted to be most prominent. Comparing these low-momentum electron pairs to the reference measurement in the elementary p+p reaction, we indeed find a strong modification of the spectral distribution in the whole vector-meson region.
Galactic Cosmic Ray Event-Based Risk Model (GERM) Code
Cucinotta, Francis A.; Plante, Ianik; Ponomarev, Artem L.; Kim, Myung-Hee Y.
2013-01-01
This software describes the transport and energy deposition of galactic cosmic rays passing through astronaut tissues during space travel, or of heavy ion beams in patients during cancer therapy. Space radiation risk is a probability distribution, and time-dependent biological events must be accounted for in the physical description of space radiation transport in tissues and cells. A stochastic model can calculate the probability density directly, without unverified assumptions about the shape of the probability density function. Prior-art transport codes calculate the average flux and dose of particles behind spacecraft and tissue shielding. Because of the signaling times for activation and relaxation in the cell and tissue, a transport code must describe the temporal and microspatial density functions needed to correlate DNA and oxidative damage with non-targeted effects such as bystander signaling; these effects are ignored by, or impossible to treat in, the prior art. The GERM code provides scientists with data interpretation of experiments; modeling of the beam line, shielding of target samples, and sample holders; and estimation of the basic physical and biological outputs of their experiments. For mono-energetic ion beams, basic physical and biological properties are calculated for a selected ion type, such as kinetic energy, mass, charge number, absorbed dose, or fluence. Evaluated quantities are linear energy transfer (LET), range (R), absorption and fragmentation cross-sections, and the probability of nuclear interactions after 1 or 5 cm of water-equivalent material. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution of hits for a specified cellular area, cell survival curves, and DNA damage yields per cell. The GERM code also calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle in a selected material. The GERM code makes the numerical estimates of basic
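The Poisson hit statistics mentioned in the abstract relate a particle fluence to the number of traversals of a cellular target area. A hedged sketch of that relation (the fluence and area values below are illustrative, not GERM defaults):

```python
import math

def poisson_hits(fluence_per_cm2, area_um2, k):
    """Probability that a cell of the given area is traversed exactly k times.

    Mean traversals = fluence (particles/cm^2) x target area (converted
    from um^2 to cm^2); hits are assumed Poisson-distributed.
    """
    mean_hits = fluence_per_cm2 * area_um2 * 1e-8   # 1 um^2 = 1e-8 cm^2
    return math.exp(-mean_hits) * mean_hits**k / math.factorial(k)

# e.g. 1e6 ions/cm^2 through a 100 um^2 nucleus gives a mean of 1 traversal,
# so a fraction exp(-1) ~ 0.37 of cells receive no hit at all
p_no_hit = poisson_hits(1e6, 100.0, 0)
```

This is the kind of quantity that lets beam-line experimenters decide what fluence delivers, say, one traversal per cell nucleus on average.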
Geochemical modelling of groundwater evolution using chemical equilibrium codes
International Nuclear Information System (INIS)
Pitkaenen, P.; Pirhonen, V.
1991-01-01
Geochemical equilibrium codes are a modern tool for studying the interaction between groundwater and solid phases. The most commonly used programs and their fields of application are briefly presented in this article. The main emphasis is on the approach of using calculated results to evaluate groundwater evolution in a hydrogeological system. At present, geochemical equilibrium modelling also takes kinetic as well as hydrologic constraints along a flow path into consideration
Level 3 trigger algorithm and hardware platform for the HADES experiment
Energy Technology Data Exchange (ETDEWEB)
Kirschner, Daniel Georg
2007-10-26
One focus of the HADES experiment is the investigation of the decay of light vector mesons inside a dense medium into lepton pairs. These decays provide a conceptually ideal tool to study the invariant mass of the vector meson in-medium, since the lepton pairs of these meson decays leave the reaction zone without further strong interaction. Thus, no final-state interaction affects the measurement. Unfortunately, the branching ratios of vector mesons into lepton pairs are very small (≈ 10⁻⁵). This calls for a high-rate, high-acceptance experiment. In addition, a sophisticated real-time trigger system is used in HADES to enrich the interesting events in the recorded data. The focus of this thesis is the development of a next-generation real-time trigger method to improve the enrichment of lepton events in the HADES trigger. In addition, a flexible hardware platform (GE-MN) was developed to implement and test the trigger method. The GE-MN features two Gigabit-Ethernet interfaces for data transport, a VMEbus for slow control and configuration, and a TigerSHARC DSP for data processing. It provides the experience to discuss the challenges and benefits of using a commercial-standard network technology based system in an experiment. The developed and tested trigger method correlates the ring information of the HADES RICH with the fired wires (cells) of the HADES MDC detector. This correlation method operates by calculating, for each event, the cells which should have seen the signal of a traversing lepton, and comparing these calculated cells to all the cells that did see a signal. The cells which should have fired are calculated from the polar and azimuthal angle information of the RICH rings by assuming a straight line in space, starting at the target and extending in the direction given by the ring angles. The line extends through the inner MDC chambers, and the traversed cells are those that should have been hit. To compensate different sources for
Level 3 trigger algorithm and hardware platform for the HADES experiment
International Nuclear Information System (INIS)
Kirschner, Daniel Georg
2007-01-01
One focus of the HADES experiment is the investigation of the decay of light vector mesons inside a dense medium into lepton pairs. These decays provide a conceptually ideal tool to study the invariant mass of the vector meson in-medium, since the lepton pairs of these meson decays leave the reaction zone without further strong interaction. Thus, no final-state interaction affects the measurement. Unfortunately, the branching ratios of vector mesons into lepton pairs are very small (≈ 10⁻⁵). This calls for a high-rate, high-acceptance experiment. In addition, a sophisticated real-time trigger system is used in HADES to enrich the interesting events in the recorded data. The focus of this thesis is the development of a next-generation real-time trigger method to improve the enrichment of lepton events in the HADES trigger. In addition, a flexible hardware platform (GE-MN) was developed to implement and test the trigger method. The GE-MN features two Gigabit-Ethernet interfaces for data transport, a VMEbus for slow control and configuration, and a TigerSHARC DSP for data processing. It provides the experience to discuss the challenges and benefits of using a commercial-standard network technology based system in an experiment. The developed and tested trigger method correlates the ring information of the HADES RICH with the fired wires (cells) of the HADES MDC detector. This correlation method operates by calculating, for each event, the cells which should have seen the signal of a traversing lepton, and comparing these calculated cells to all the cells that did see a signal. The cells which should have fired are calculated from the polar and azimuthal angle information of the RICH rings by assuming a straight line in space, starting at the target and extending in the direction given by the ring angles. The line extends through the inner MDC chambers, and the traversed cells are those that should have been hit. To compensate different sources for inaccuracies not
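The correlation method described in the thesis abstract can be sketched in a few lines: project a straight track from the target in the direction of a RICH ring's polar and azimuthal angles, compute which MDC wire cell it would cross, and test that prediction against the set of cells that actually fired. The geometry constants below (plane distance, cell pitch, a one-cell tolerance) are invented for illustration, not HADES survey data.

```python
import math

PLANE_Z_MM = 600.0     # assumed distance of an inner MDC plane from the target
CELL_PITCH_MM = 5.0    # assumed wire-cell pitch

def predicted_cell(theta_deg, phi_deg):
    """Cell index where a straight track from the target crosses the plane."""
    r = PLANE_Z_MM * math.tan(math.radians(theta_deg))
    x = r * math.cos(math.radians(phi_deg))   # 1D projection for the sketch
    return int(round(x / CELL_PITCH_MM))

def lepton_candidate(ring_angles, fired_cells, tolerance=1):
    """True if the predicted cell lies within `tolerance` cells of a fired one.

    The tolerance absorbs the various sources of inaccuracy the thesis
    compensates for (alignment, ring-angle resolution, track curvature).
    """
    cell = predicted_cell(*ring_angles)
    return any(abs(cell - fired) <= tolerance for fired in fired_cells)
```

An event would be enriched (kept) when enough RICH rings find a matching fired cell in this sense.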
Direct containment heating models in the CONTAIN code
Energy Technology Data Exchange (ETDEWEB)
Washington, K.E.; Williams, D.C.
1995-08-01
The potential exists in a nuclear reactor core melt severe accident for molten core debris to be dispersed under high pressure into the containment building. If this occurs, the set of phenomena that result in the transfer of energy to the containment atmosphere and its surroundings is referred to as direct containment heating (DCH). Because of the potential for DCH to lead to early containment failure, the U.S. Nuclear Regulatory Commission (USNRC) has sponsored an extensive research program consisting of experimental, analytical, and risk integration components. An important element of the analytical research has been the development and assessment of direct containment heating models in the CONTAIN code. This report documents the DCH models in the CONTAIN code. DCH models in CONTAIN for representing debris transport, trapping, chemical reactions, and heat transfer from debris to the containment atmosphere and surroundings are described. The descriptions include the governing equations and input instructions in CONTAIN unique to performing DCH calculations. Modifications made to the combustion models in CONTAIN for representing the combustion of DCH-produced and pre-existing hydrogen under DCH conditions are also described. Input table options for representing the discharge of debris from the RPV and the entrainment phase of the DCH process are also described. A sample calculation is presented to demonstrate the functionality of the models. The results show that reasonable behavior is obtained when the models are used to predict the sixth Zion geometry integral effects test at 1/10th scale.
Channel modeling, signal processing and coding for perpendicular magnetic recording
Wu, Zheng
With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by
Temporal perceptual coding using a visual acuity model
Adzic, Velibor; Cohen, Robert A.; Vetro, Anthony
2014-02-01
This paper describes research and results in which a visual acuity (VA) model of the human visual system (HVS) is used to reduce the bitrate of coded video sequences, by eliminating the need to signal transform coefficients when their corresponding frequencies will not be detected by the HVS. The VA model is integrated into the state of the art HEVC HM codec. Compared to the unmodified codec, up to 45% bitrate savings are achieved while maintaining the same subjective quality of the video sequences. Encoding times are reduced as well.
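The core idea of withholding coefficients the HVS cannot resolve can be sketched as a transform-domain threshold. The orthonormal DCT and the fixed `cutoff` index below are illustrative stand-ins: HEVC's actual transforms and signalling are far more involved, and the real VA model makes the cutoff depend on viewing conditions and motion.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis; row k holds the cosine terms of frequency k."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def acuity_filter(block, cutoff):
    """Zero 2D-DCT coefficients whose frequency index u + v exceeds cutoff."""
    D = dct_matrix(block.shape[0])
    coeffs = D @ block @ D.T                # forward 2D transform
    u, v = np.indices(coeffs.shape)
    coeffs[u + v > cutoff] = 0.0            # frequencies the model says are invisible
    return D.T @ coeffs @ D                 # inverse transform
```

Coefficients that are zeroed this way never need to be signalled, which is the source of the bitrate savings the paper reports.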
Systematic effects in CALOR simulation code to model experimental configurations
International Nuclear Information System (INIS)
Job, P.K.; Proudfoot, J.; Handler, T.
1991-01-01
The CALOR89 code system is being used to simulate test-beam results and the design parameters of several calorimeter configurations. It has been benchmarked against the ZEUS, DØ, and HELIOS data. This study identifies the systematic effects in CALOR simulations of experimental configurations. Five major systematic effects are identified: the choice of high-energy nuclear collision model, the material composition, scintillator saturation, the shower integration time, and the shower containment. Quantitative estimates of these systematic effects are presented. 23 refs., 6 figs., 7 tabs
Modelling of LOCA Tests with the BISON Fuel Performance Code
Energy Technology Data Exchange (ETDEWEB)
Williamson, Richard L [Idaho National Laboratory; Pastore, Giovanni [Idaho National Laboratory; Novascone, Stephen Rhead [Idaho National Laboratory; Spencer, Benjamin Whiting [Idaho National Laboratory; Hales, Jason Dean [Idaho National Laboratory
2016-05-01
BISON is a modern finite-element based, multidimensional nuclear fuel performance code that is under development at Idaho National Laboratory (USA). Recent advances of BISON include the extension of the code to the analysis of LWR fuel rod behaviour during loss-of-coolant accidents (LOCAs). In this work, BISON models for the phenomena relevant to LWR cladding behaviour during LOCAs are described, followed by presentation of code results for the simulation of LOCA tests. Analysed experiments include separate effects tests of cladding ballooning and burst, as well as the Halden IFA-650.2 fuel rod test. Two-dimensional modelling of the experiments is performed, and calculations are compared to available experimental data. Comparisons include cladding burst pressure and temperature in separate effects tests, as well as the evolution of fuel rod inner pressure during ballooning and time to cladding burst. Furthermore, BISON three-dimensional simulations of separate effects tests are performed, which demonstrate the capability to reproduce the effect of azimuthal temperature variations in the cladding. The work has been carried out in the frame of the collaboration between Idaho National Laboratory and Halden Reactor Project, and the IAEA Coordinated Research Project FUMAC.
Toward a Probabilistic Automata Model of Some Aspects of Code-Switching.
Dearholt, D. W.; Valdes-Fallis, G.
1978-01-01
The purpose of the model is to select either Spanish or English as the language to be used; its goals at this stage of development include modeling code-switching for lexical need, apparently random code-switching, dependency of code-switching upon sociolinguistic context, and code-switching within syntactic constraints. (EJS)
Benchmarking of computer codes and approaches for modeling exposure scenarios
International Nuclear Information System (INIS)
Seitz, R.R.; Rittmann, P.D.; Wood, M.I.; Cook, J.R.
1994-08-01
The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided
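The spreadsheet-level comparison the subteam performed reduces, for a single pathway, to a product of a unit concentration, an intake rate, and a dose conversion factor. A hedged sketch of that fundamental ingestion calculation (the intake and DCF values below are placeholders, not numbers from the report's tables):

```python
def ingestion_dose_rem(conc_pci_per_l, intake_l_per_yr, dcf_rem_per_pci):
    """Annual committed dose from drinking water at a given concentration.

    dose [rem/yr] = concentration [pCi/L] x intake [L/yr] x DCF [rem/pCi]
    """
    return conc_pci_per_l * intake_l_per_yr * dcf_rem_per_pci

# unit concentration (1 pCi/L), 730 L/yr drinking-water intake,
# hypothetical DCF of 5e-8 rem/pCi for some radionuclide
dose = ingestion_dose_rem(1.0, 730.0, 5e-8)
```

Codes like GENII wrap many such pathway products (ingestion, inhalation, external) with radionuclide-specific factors, which is exactly why the subteam's spreadsheet check is a useful baseline.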
Finite element code development for modeling detonation of HMX composites
Duran, Adam V.; Sundararaghavan, Veera
2017-01-01
In this work, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable, efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well-calibrated equations of state for the solid unreacted material and the gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Grüneisen equation of state, calibrated from experiments, was employed for the unreacted HMX. The JWL form was used to model the EOS of the gaseous reaction products. It is assumed that the unreacted explosive and the reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy were computed using the rule of mixtures. An Arrhenius kinetics scheme was used to model the chemical reactions. A locally controlled dissipation was introduced that yields a non-oscillatory stabilized scheme at the shock front. The FE model was validated using analytical solutions of the Sod shock-tube and ZND strong-detonation problems. Benchmark problems are presented for geometries in which a single HMX crystal is subjected to a shock condition.
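The JWL product equation of state named in the abstract has the standard form p(V, E) = A(1 − ω/(R₁V))e^(−R₁V) + B(1 − ω/(R₂V))e^(−R₂V) + ωE/V, with V the relative volume and E the internal energy per unit initial volume. A sketch with commonly quoted HMX-like coefficients used as placeholders; a real simulation would use the code's own calibrated set:

```python
import math

def jwl_pressure(V, E, A=778.3, B=7.071, R1=4.2, R2=1.0, w=0.30):
    """JWL product pressure (GPa) at relative volume V and energy E (GPa).

    Coefficients are handbook-style HMX values, taken here as illustrative
    defaults rather than the paper's calibration.
    """
    return (A * (1 - w / (R1 * V)) * math.exp(-R1 * V)
            + B * (1 - w / (R2 * V)) * math.exp(-R2 * V)
            + w * E / V)

p_cj_region = jwl_pressure(1.0, 10.5)   # near the initial volume, detonation-scale pressure
```

In the mixture model described above, this pressure is equilibrated against the unreacted Grüneisen EOS for partially reacted states.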
A model code for the radiative theta pinch
Energy Technology Data Exchange (ETDEWEB)
Lee, S., E-mail: leesing@optusnet.com.au [INTI International University, 71800 Nilai (Malaysia); Institute for Plasma Focus Studies, 32 Oakpark Drive, Chadstone 3148 Australia (Australia); Physics Department, University of Malaya, Kuala Lumpur (Malaysia); Saw, S. H. [INTI International University, 71800 Nilai (Malaysia); Institute for Plasma Focus Studies, 32 Oakpark Drive, Chadstone 3148 Australia (Australia); Lee, P. C. K. [Nanyang Technological University, National Institute of Education, Singapore 637616 (Singapore); Akel, M. [Department of Physics, Atomic Energy Commission, Damascus, P. O. Box 6091, Damascus (Syrian Arab Republic); Damideh, V. [INTI International University, 71800 Nilai (Malaysia); Khattak, N. A. D. [Department of Physics, Gomal University, Dera Ismail Khan (Pakistan); Mongkolnavin, R.; Paosawatyanyong, B. [Department of Physics, Faculty of Science, Chulalongkorn University, Bangkok 10140 (Thailand)
2014-07-15
A model for the theta pinch is presented with three modelled phases: a radial inward shock phase, a reflected shock phase, and a final pinch phase. The governing equations for the phases are derived incorporating thermodynamics, radiation, and radiation-coupled dynamics in the pinch phase. A code is written incorporating corrections for the effects of transit delay of small disturbing speeds and the effects of plasma self-absorption on the radiation. Two model parameters are incorporated into the model: the coupling coefficient f between the primary loop current and the induced plasma current, and the mass swept-up factor f_m. These values are taken from experiments carried out in the Chulalongkorn theta pinch.
MMA, A Computer Code for Multi-Model Analysis
Energy Technology Data Exchange (ETDEWEB)
Eileen P. Poeter and Mary C. Hill
2007-08-20
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations.
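The ranking step MMA performs with its Kullback-Leibler-based defaults can be illustrated by converting per-model criterion values (AIC, AICc, BIC, or KIC) into posterior model probabilities via Akaike-type weights, w_i = exp(−Δ_i/2) / Σ_j exp(−Δ_j/2) with Δ_i the difference from the best score. The three criterion values below are invented for illustration:

```python
import math

def model_probabilities(criteria):
    """Akaike-type weights from information-criterion values (smaller is better)."""
    best = min(criteria)
    raw = [math.exp(-(c - best) / 2.0) for c in criteria]  # exp(-Delta_i / 2)
    total = sum(raw)
    return [r / total for r in raw]

# hypothetical AIC values for three calibrated alternative models
probs = model_probabilities([210.4, 212.1, 218.9])
```

These probabilities are what MMA then uses for model-averaged parameter estimates and for quantifying between-model uncertainty.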
Improved Flow Modeling in Transient Reactor Safety Analysis Computer Codes
International Nuclear Information System (INIS)
Holowach, M.J.; Hochreiter, L.E.; Cheung, F.B.
2002-01-01
A method of accounting for fluid-to-fluid shear in between calculational cells over a wide range of flow conditions envisioned in reactor safety studies has been developed such that it may be easily implemented into a computer code such as COBRA-TF for more detailed subchannel analysis. At a given nodal height in the calculational model, equivalent hydraulic diameters are determined for each specific calculational cell using either laminar or turbulent velocity profiles. The velocity profile may be determined from a separate CFD (Computational Fluid Dynamics) analysis, experimental data, or existing semi-empirical relationships. The equivalent hydraulic diameter is then applied to the wall drag force calculation so as to determine the appropriate equivalent fluid-to-fluid shear caused by the wall for each cell based on the input velocity profile. This means of assigning the shear to a specific cell is independent of the actual wetted perimeter and flow area for the calculational cell. The use of this equivalent hydraulic diameter for each cell within a calculational subchannel results in a representative velocity profile which can further increase the accuracy and detail of heat transfer and fluid flow modeling within the subchannel when utilizing a thermal hydraulics systems analysis computer code such as COBRA-TF. Utilizing COBRA-TF with the flow modeling enhancement results in increased accuracy for a coarse-mesh model without the significantly greater computational and time requirements of a full-scale 3D (three-dimensional) transient CFD calculation. (authors)
Auditory information coding by modeled cochlear nucleus neurons.
Wang, Huan; Isik, Michael; Borst, Alexander; Hemmert, Werner
2011-06-01
In this paper we use information theory to quantify the information in the output spike trains of modeled cochlear nucleus globular bushy cells (GBCs). GBCs are part of the sound localization pathway. They are known for their precise temporal processing, and they code amplitude modulations with high fidelity. Here we investigated the information transmission for a natural sound, a recorded vowel. We conclude that the maximum information transmission rate for a single neuron was close to 1,050 bits/s, which corresponds to a value of approximately 5.8 bits per spike. For quasi-periodic signals like voiced speech, the transmitted information saturated as word duration increased. In general, approximately 80% of the available information from the spike trains was transmitted within about 20 ms. Transmitted information for speech signals concentrated around formant frequency regions. The efficiency of neural coding was above 60% up to the highest temporal resolution we investigated (20 μs). The increase in transmitted information at that precision indicates that these neurons are able to code information with extremely high fidelity, which is required for sound localization. On the other hand, only 20% of the information was captured when the temporal resolution was reduced to 4 ms. As the temporal resolution of most speech recognition systems is limited to less than 10 ms, this massive information loss might be one of the reasons for the lack of noise robustness of these systems.
Modeling experimental plasma diagnostics in the FLASH code: proton radiography
Flocke, Norbert; Weide, Klaus; Feister, Scott; Tzeferacos, Petros; Lamb, Donald
2017-10-01
Proton radiography is an important diagnostic tool for laser plasma experiments and for studying magnetized plasmas. We describe a new synthetic proton radiography diagnostic recently implemented into the FLASH code. FLASH is an open source, finite-volume Eulerian, spatially adaptive radiation hydrodynamics and magneto-hydrodynamics code that incorporates capabilities for a broad range of physical processes. Proton radiography is modeled through the use of the (relativistic) Lorentz force equation governing the motion of protons through 3D domains. Both instantaneous (one time step) and time-resolved (over many time steps) proton radiography can be simulated. The code module is also equipped with several different setup options (beam structure and detector screen placements) to reproduce a large variety of experimental proton radiography designs. FLASH's proton radiography diagnostic unit can be used either during runtime or in post-processing of simulation results. FLASH is publicly available at flash.uchicago.edu. U.S. DOE NNSA, U.S. DOE NNSA ASC, U.S. DOE Office of Science and NSF.
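The diagnostic described above amounts to integrating the relativistic Lorentz force for each proton through the simulated field region and recording where it lands on a detector plane. A hedged sketch with a simple explicit step (a production code would use a Boris-type push and trace through the simulation's own 3D MHD fields); the field layout, proton energy, and geometry are invented:

```python
import numpy as np

Q, M, C = 1.602e-19, 1.673e-27, 2.998e8   # proton charge, mass, speed of light

def trace_proton(x0, v0, B, z_field_end, z_detector, dt=1e-12):
    """Push a proton through a uniform-B region, then drift to the screen."""
    x, v = np.array(x0, float), np.array(v0, float)
    while x[2] < z_field_end:                             # inside the field region
        gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v) / C**2)
        v = v + (Q / (gamma * M)) * np.cross(v, B) * dt   # Lorentz force, E = 0 here
        x = x + v * dt
    t_drift = (z_detector - x[2]) / v[2]                  # ballistic flight to screen
    return x + v * t_drift                                # impact point on detector

# ~15 MeV proton (v ~ 0.18c) through 5 mm of a 10 T transverse field,
# detector screen 10 cm downstream of the target
hit = trace_proton([0, 0, 0], [0, 0, 5.3e7], [10.0, 0, 0], 5e-3, 0.1)
```

Binning many such impact points on the screen yields the synthetic radiograph; FLASH's module additionally supports time-resolved accumulation over many hydro steps.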
Dataset of coded handwriting features for use in statistical modelling
Directory of Open Access Journals (Sweden)
Anna Agius
2018-02-01
The data presented here are related to the article titled “Using handwriting to infer a writer's country of origin for forensic intelligence purposes” (Agius et al., 2017) [1]. That article reports original writer, spatial, and construction characteristic data for thirty-seven English Australian writers and thirty-seven Vietnamese writers. All of these characteristics were coded and recorded in Microsoft Excel 2013 (version 15.31). The construction characteristics were extracted from only seven characters: ‘g’, ‘h’, ‘th’, ‘M’, ‘0’, ‘7’ and ‘9’. The coded writer, spatial, and construction characteristics are made available in this Data in Brief to allow others to perform statistical analyses and modelling to investigate whether there is a relationship between the handwriting features and the nationality of the writer, and whether the two nationalities can be differentiated; and furthermore, to employ mathematical techniques capable of characterising the features extracted from each participant.
Physics models in the toroidal transport code PROCTR
Energy Technology Data Exchange (ETDEWEB)
Howe, H.C.
1990-08-01
The physics models that are contained in the toroidal transport code PROCTR are described in detail. Time- and space-dependent models are included for the plasma hydrogenic-ion, helium, and impurity densities, the electron and ion temperatures, the toroidal rotation velocity, and the toroidal current profile. Time- and depth-dependent models for the trapped and mobile hydrogenic particle concentrations in the wall and a time-dependent point model for the number of particles in the limiter are also included. Time-dependent models for neutral particle transport, neutral beam deposition and thermalization, fusion heating, impurity radiation, pellet injection, and the radial electric potential are included and recalculated periodically as the time-dependent models evolve. The plasma solution is obtained either in simple flux coordinates, where the radial shift of each elliptical, toroidal flux surface is included to maintain an approximate pressure equilibrium, or in general three-dimensional torsatron coordinates represented by series of helical harmonics. The detailed coupling of the plasma, scrape-off layer, limiter, and wall models through the neutral transport model makes PROCTR especially suited for modeling of recycling and particle control in toroidal plasmas. The model may also be used in a steady-state profile analysis mode for studying energy and particle balances starting with measured plasma profiles.
MMA, A Computer Code for Multi-Model Analysis
Poeter, Eileen P.; Hill, Mary C.
2007-01-01
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will
An improved steam generator model for the SASSYS code
International Nuclear Information System (INIS)
Pizzica, P.A.
1989-01-01
A new steam generator model has been developed for the SASSYS computer code, which analyzes accident conditions in a liquid-metal-cooled fast reactor. It has been incorporated into the new SASSYS balance-of-plant model, but it can also function on a stand-alone basis. The steam generator can be used in a once-through mode, or a variant of the model can be used as a separate evaporator and superheater with a recirculation loop. The new model provides an exact steady-state solution as well as the transient calculation. There was a need for a faster and more flexible model than the old steam generator model. The new model provides more detail with its multi-node treatment, as opposed to the previous model's one-node-per-region approach. Numerical instability problems, which were the result of cell-centered spatial differencing, fully explicit time differencing, and the moving-boundary treatment of the boiling-crisis point in the boiling region, have been reduced. This leads to an increase in speed, as larger time steps can now be taken. The new model is an improvement in many respects. 2 refs., 3 figs
Modeling of the CTEx subcritical unit using MCNPX code
International Nuclear Information System (INIS)
Santos, Avelino; Silva, Ademir X. da; Rebello, Wilson F.; Cunha, Victor L. Lassance
2011-01-01
The present work aims at simulating the subcritical unit of the Army Technology Center (CTEx), namely the ARGUS pile (subcritical uranium-graphite arrangement), by using the computational code MCNPX. Once such modeling is finished, it could be used in k-effective calculations for systems using natural uranium as fuel, for instance. ARGUS is a subcritical assembly which uses reactor-grade graphite as moderator of fission neutrons and metallic uranium fuel rods with aluminum cladding. The pile is driven by an Am-Be spontaneous neutron source. In order to achieve a higher value of k-eff, a higher concentration of U-235 can be proposed, provided k-eff safely remains below one. (author)
Status of emergency spray modelling in the integral code ASTEC
International Nuclear Information System (INIS)
Plumecocq, W.; Passalacqua, R.
2001-01-01
Containment spray systems are emergency systems that would be used in very low probability events which may lead to severe accidents in Light Water Reactors. In most cases, the primary function of the spray would be to remove heat and condense steam in order to reduce pressure and temperature in the containment building. Spray would also wash out fission products (aerosols and gaseous species) from the containment atmosphere. The efficiency of the spray system in the containment depressurization, as well as in the removal of aerosols during a severe accident, depends on the evolution of the spray droplet size distribution with height in the containment, due to kinetic and thermal relaxation, gravitational agglomeration and mass transfer with the gas. A model has been developed taking into account all of these phenomena. This model has been implemented in the ASTEC code, with validation of the droplet relaxation against the CARAIDAS experiment (IPSN). Applications of this modelling to a PWR 900 during a severe accident, with special emphasis on the effect of spray on containment hydrogen distribution, have been performed in multi-compartment configuration with the ASTEC V0.3 code. (author)
Model comparisons of the reactive burn model SURF in three ASC codes
Energy Technology Data Exchange (ETDEWEB)
Whitley, Von Howard [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Stalsberg, Krista Lynn [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Reichelt, Benjamin Lee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shipley, Sarah Jayne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2018-01-12
A study of the SURF reactive burn model was performed in FLAG, PAGOSA and XRAGE. In this study, three different shock-to-detonation transition experiments were modeled in each code. All three codes produced similar model results for all the experiments modeled and at all resolutions. Buildup-to-detonation time, particle velocities and resolution dependence of the models were notably similar between the codes. Given the current PBX 9502 equations of state and SURF calibrations, each code is equally capable of predicting the correct detonation time and distance when impacted by a 1D impactor at pressures ranging from 10-16 GPa, as long as the resolution of the mesh is not too coarse.
Development of a condensation modeling and simulation code for IRWST
Energy Technology Data Exchange (ETDEWEB)
Kim, Sang Nyung; Jang, Wan Ho; Ko, Jong Hyun; Ha, Jong Baek; Yang, Chang Keun; Son, Myung Seong [Kyung Hee Univ., Seoul (Korea)
1997-07-01
One of the design improvements of the KNGR (Korean Next Generation Reactor), which is advanced in safety and economy, is the adoption of the IRWST (In-Containment Refueling Water Storage Tank). The IRWST, installed inside the containment building, serves more design purposes than a mere relocation of the tank. Since the design functions of the IRWST are similar to those of the BWR's suppression pool, theoretical models applicable to the BWR's suppression pool can mostly be applied to the IRWST. But for the PWR, the geometry of the sparger, the operation mode, and the quantity, temperature, and pressure of the steam discharged from the primary system to the IRWST through the PSV or SDV may differ from those of the BWR. There are also some defects in detailed parts of the condensation model. Therefore, as the first nation to construct a PWR with an IRWST, we must carry out thorough research on these problems so that the results can be utilized and localized as an exclusive technology. All kinds of thermal-hydraulic phenomena were investigated, and the existing condensation models by Hideki Nariai and Izuo Aya were analyzed. Also, through a rigorous review of the literature, including operation experience, experimental data, and design documents of the KNGR, items which need modification and supplementation were derived. An analytical model for the chugging phenomenon is also presented. 15 refs., 18 figs., 4 tabs. (Author)
Inclusive reconstruction of hadron resonances in elementary and heavy-ion collisions with HADES
Directory of Open Access Journals (Sweden)
Kornakov Georgy
2016-01-01
The unambiguous identification of hadron modifications in hot and dense QCD matter is one of the important goals in nuclear physics. In the regime of 1-2 GeV kinetic energy per nucleon, HADES has measured rare and penetrating probes in elementary and heavy-ion collisions. The main creation mechanism of mesons is the excitation and decay of baryonic resonances throughout the fireball evolution. The reconstruction of short-lived (≈ 1 fm/c) resonance states through their decay products is notoriously difficult. We have developed a new iterative algorithm, which builds the best hypothesis of signal and background by distortion of individual particle properties. This allows the extraction of signals with signal-to-background ratios of <1%.
7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes
2010-01-01
... Voluntary National Model Building Codes E Exhibit E... National Model Building Codes. The following documents address the health and safety aspects of buildings and related structures and are voluntary national model building codes as defined in § 1924.4(h)(2) of...
A MATLAB based 3D modeling and inversion code for MT data
Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.
2017-07-01
The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form - forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.
24 CFR 200.925c - Model codes.
2010-04-01
... Hills, Illinois 60478. (ii) Standard Building Code, 1991 Edition, including 1992/1993 revisions... (Chapter 6) of the Standard Building Code, but including Appendices A, C, E, J, K, M, and R. Available from...
Fission product core release model evaluation in MELCOR code
International Nuclear Information System (INIS)
Song, Y. M.; Kim, D. H.; Kim, H. D.
2003-01-01
The fission product core release in the MELCOR code is based on the CORSOR models developed by Battelle Memorial Institute. Release of radionuclides can occur from the fuel-cladding gap when a failure temperature criterion is exceeded or intact geometry is lost, and various CORSOR empirical release correlations based on fuel temperatures are used for the release. Released masses into the core may exist as aerosols and/or vapors, depending on the vapor pressure of the radionuclide class and the surrounding temperature. This paper shows a release analysis for selected representative volatile and non-volatile radionuclides during conservative high- and low-pressure sequences in the APR1400 plant. Three core release models (CORSOR, CORSOR-M, CORSOR-Booth) in the latest MELCOR 1.8.5 version are used. In the analysis, the option of the fuel component surface-to-volume ratio in the CORSOR and CORSOR-M models and the option of high and low burn-up in the CORSOR-Booth model are considered together. The results show that the CORSOR-M release rate is high for volatile radionuclides and the CORSOR release rate is high for non-volatile radionuclides, with insufficient consistency between the models. Because the uncertainty range for the release rate extends from a factor of several (volatile radionuclides) to more than 10,000 (non-volatile radionuclides), careful user selection among the core release models is needed.
Mathematical modeling of wiped-film evaporators. [MAIN codes
Energy Technology Data Exchange (ETDEWEB)
Sommerfeld, J.T.
1976-05-01
A mathematical model and associated computer program were developed to simulate the steady-state operation of wiped-film evaporators for the concentration of typical waste solutions produced at the Savannah River Plant. In this model, which treats either a horizontal or a vertical wiped-film evaporator as a plug-flow device with no backmixing, three fundamental phenomena are described: sensible heating of the waste solution, vaporization of water, and crystallization of solids from solution. Physical property data were coded into the computer program, which performs the calculations of this model. Physical properties of typical waste solutions and of the heating steam, generally as analytical functions of temperature, were obtained from published data or derived by regression analysis of tabulated or graphical data. Preliminary results from tests of the Savannah River Laboratory semiworks wiped-film evaporators were used to select a correlation for the inside film heat transfer coefficient. This model should be a useful aid in the specification, operation, and control of the full-scale wiped-film evaporators proposed for application under plant conditions. In particular, it should be of value in the development and analysis of feed-forward control schemes for the plant units. Also, this model can be readily adapted, with only minor changes, to simulate the operation of wiped-film evaporators for other conceivable applications, such as the concentration of acid wastes.
International Nuclear Information System (INIS)
Chung, Young Jong; Kim, Soo Hyoung; Kim, See Darl
2008-10-01
The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant, which is an integral-type pressurized water reactor. It can be applied to the analysis of design basis accidents of the SMART plant, including non-LOCA (loss of coolant accident) and LOCA events. The TASS/SMR code can be applied to any plant regardless of the structural characteristics of the reactor, since the code solves the same governing equations for both the primary and secondary system. The code has been developed to meet the requirements of a safety analysis code. This report describes the overall structure of TASS/SMR, input processing, and the procedures for steady-state and transient calculations. In addition, basic differential equations, finite difference equations, state relationships, and constitutive models are described in the report. First, the conservation equations, the discretization process for numerical analysis, and the search method for state relationships are described. Then, a core power model, heat transfer models, physical models for various components, and control and trip models are explained.
Energy Technology Data Exchange (ETDEWEB)
Chung, Young Jong; Kim, Soo Hyoung; Kim, See Darl (and others)
2008-10-15
The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant, which is an integral-type pressurized water reactor. It can be applied to the analysis of design basis accidents of the SMART plant, including non-LOCA (loss of coolant accident) and LOCA events. The TASS/SMR code can be applied to any plant regardless of the structural characteristics of the reactor, since the code solves the same governing equations for both the primary and secondary system. The code has been developed to meet the requirements of a safety analysis code. This report describes the overall structure of TASS/SMR, input processing, and the procedures for steady-state and transient calculations. In addition, basic differential equations, finite difference equations, state relationships, and constitutive models are described in the report. First, the conservation equations, the discretization process for numerical analysis, and the search method for state relationships are described. Then, a core power model, heat transfer models, physical models for various components, and control and trip models are explained.
Model code for energy conservation in new building construction
Energy Technology Data Exchange (ETDEWEB)
None
1977-12-01
In response to the recognized lack of existing consensus standards directed to the conservation of energy in building design and operation, the preparation and publication of such a standard was accomplished with the issuance of ASHRAE Standard 90-75 ''Energy Conservation in New Building Design,'' by the American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc., in 1975. This standard addressed itself to recommended practices for energy conservation, using both depletable and non-depletable sources. A model code for energy conservation in building construction has been developed, setting forth the minimum regulations found necessary to mandate such conservation. The code addresses itself to the administration, design criteria, systems elements, controls, service water heating and electrical distribution and use, both for depletable and non-depletable energy sources. The technical provisions of the document are based on ASHRAE 90-75 and it is intended for use by state and local building officials in the implementation of a statewide energy conservation program.
Laser Energy Deposition Model for the ICF3D Code
Kaiser, Thomas B.; Byers, Jack A.
1996-11-01
We have built a laser deposition module for the new ICF physics design code ICF3D ("3D Unstructured Mesh ALE Hydrodynamics with the Upwind Discontinuous Finite Element Method," D. S. Kershaw, M. K. Prasad and M. J. Shaw, LLNL Report UCRL-JC-122104, 1995), being developed at LLNL. The code uses a 3D unstructured grid on which hydrodynamic quantities are represented in terms of discontinuous linear finite elements (hexahedrons, prisms, tetrahedrons or pyramids). Because of the complex mesh geometry and (in general) non-uniform index of refraction (i.e., plasma density), the geometrical-optical ray-tracing problem is quite complicated. To solve it we have developed a grid-cell-face-crossing detection algorithm, an integrator for the ray equations of motion and a path-length calculator that are encapsulated in a C++ class that is used to create ray-bundle objects. Additional classes are being developed for inverse-bremsstrahlung and resonance-absorption heating models. A quasi-optical technique will be used to include diffractive effects. We use the ICF3D Python shell, a very flexible interface that allows command-line invocation of member functions.
Development of Eulerian Code Modeling for ICF Experiments
Energy Technology Data Exchange (ETDEWEB)
Bradley, Paul A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-02-27
One of the most pressing unexplained phenomena standing in the way of ICF ignition is understanding mix and how it interacts with burn. Experiments were designed and fielded as part of the Defect-Induced Mix Experiment (DIME) project to obtain data about the extent of material mix and how this mix influenced burn. Experiments on the Omega laser and National Ignition Facility (NIF) provided detailed data for comparison to the Eulerian code RAGE. The Omega experiments were able to resolve the mix and provide "proof of principle" support for subsequent NIF experiments, which were fielded from July 2012 through June 2013. The Omega shots were fired at least once per year between 2009 and 2012. RAGE was not originally designed to model inertial confinement fusion (ICF) implosions. It still lacks a laser package, so the code has been validated using an energy source. To test RAGE, the simulation output is compared to data by means of postprocessing tools that were developed. Here, the various postprocessing tools are described with illustrative examples.
International Nuclear Information System (INIS)
Pijlgroms, B.J.; Oppe, J.; Oudshoorn, H.L.; Slobben, J.
1991-06-01
A brief description is given of the PASC-3 (Petten-AMPX-SCALE) Reactor Physics code system and associated UNIPASC work environment. The PASC-3 code system is used for criticality and reactor calculations and consists of a selection from the Oak Ridge National Laboratory AMPX-SCALE-3 code collection complemented with a number of additional codes and nuclear data bases. The original codes have been adapted to run under the UNIX operating system. The recommended nuclear data base is a complete 219 group cross section library derived from JEF-1 of which some benchmark results are presented. By the addition of the UNIPASC work environment the usage of the code system is greatly simplified. Complex chains of programs can easily be coupled together to form a single job. In addition, the model parameters can be represented by variables instead of literal values which enhances the readability and may improve the integrity of the code inputs. (author). 8 refs.; 6 figs.; 1 tab
CODE's new solar radiation pressure model for GNSS orbit determination
Arnold, D.; Meindl, M.; Beutler, G.; Dach, R.; Schaer, S.; Lutz, S.; Prange, L.; Sośnica, K.; Mervart, L.; Jäggi, A.
2015-08-01
The Empirical CODE Orbit Model (ECOM) of the Center for Orbit Determination in Europe (CODE), which was developed in the early 1990s, is widely used in the International GNSS Service (IGS) community. For a rather long time, spurious spectral lines have been known to exist in geophysical parameters, in particular in the Earth Rotation Parameters (ERPs) and in the estimated geocenter coordinates, and these could recently be attributed to the ECOM. These effects grew gradually with the increasing influence of the GLONASS system in recent years in the CODE analysis, which has been based on a rigorous combination of GPS and GLONASS since May 2003. In a first step we show that the problems associated with the ECOM are to the largest extent caused by GLONASS, which reached full deployment by the end of 2011. GPS-only, GLONASS-only, and combined GPS/GLONASS solutions using the observations in the years 2009-2011 of a global network of 92 combined GPS/GLONASS receivers were analyzed for this purpose. In a second step we review direct solar radiation pressure (SRP) models for GNSS satellites. We demonstrate that only even-order short-period harmonic perturbations acting along the Sun-satellite direction occur for GPS and GLONASS satellites, and only odd-order perturbations acting along the direction perpendicular to both the Sun-satellite vector and the spacecraft's solar panel axis. Based on this insight we assess in the third step the performance of four candidate orbit models for the future ECOM. The geocenter coordinates, the ERP differences w.r.t. the IERS 08 C04 series of ERPs, the misclosures for the midnight epochs of the daily orbital arcs, and scale parameters of Helmert transformations for station coordinates serve as quality criteria. The old and updated ECOM are validated in addition with satellite laser ranging (SLR) observations and by comparing the orbits to those of the IGS and other analysis centers. Based on all tests, we present a new extended ECOM which
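The harmonic structure described in the abstract (even-order terms along the Sun-satellite direction D, odd-order terms along the perpendicular direction B) can be sketched as follows. This is a simplified illustration of an ECOM-style empirical acceleration, not CODE's actual parameterization; all coefficient values are placeholders:

```python
import math

def ecom_accel(du, c):
    """Empirical SRP acceleration components (D, Y, B) in a
    Sun-oriented frame, as a function of the argument of latitude
    du (rad) relative to the Sun direction. Per the abstract:
    even-order harmonics along D, odd-order harmonics along B,
    a constant along the solar-panel axis Y."""
    d = (c["D0"]
         + c["D2c"] * math.cos(2 * du) + c["D2s"] * math.sin(2 * du)
         + c["D4c"] * math.cos(4 * du) + c["D4s"] * math.sin(4 * du))
    y = c["Y0"]
    b = c["B0"] + c["B1c"] * math.cos(du) + c["B1s"] * math.sin(du)
    return d, y, b

# Placeholder coefficients in m/s^2 (not fitted ECOM values):
coeffs = {"D0": -1.0e-7, "D2c": 2.0e-9, "D2s": 1.0e-9,
          "D4c": 5.0e-10, "D4s": 3.0e-10, "Y0": 8.0e-10,
          "B0": 2.0e-9, "B1c": 4.0e-10, "B1s": 6.0e-10}
d, y, b = ecom_accel(0.3, coeffs)
```

A consequence of the parity structure is visible directly: the D component repeats with period pi (even harmonics only), while the first-order B terms change sign half an orbit later.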
A Realistic Model Under Which the Genetic Code is Optimal
Buhrman, Harry; van der Gulik, Peter T. S.; Klau, Gunnar W.; Schaffner, Christian; Speijer, Dave; Stougie, Leen
2013-01-01
The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By
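The error-robustness measure sketched in the abstract (mean squared change of a hydrophobicity proxy over single-nucleotide substitutions) can be illustrated generically. The toy two-letter code and hydrophobicity values below are invented purely for demonstration; they are not the standard genetic code or a published scale:

```python
from itertools import product

def error_robustness(code, hydro, alphabet="ACGU"):
    """Mean squared change in hydrophobicity over all single-nucleotide
    substitutions that map one sense codon to another (smaller value =
    more error-robust code). `code` maps codons to amino acids,
    `hydro` maps amino acids to hydrophobicity values."""
    total, count = 0.0, 0
    for codon in code:
        for pos in range(len(codon)):
            for base in alphabet:
                if base == codon[pos]:
                    continue
                mutant = codon[:pos] + base + codon[pos + 1:]
                if mutant in code:
                    total += (hydro[code[codon]] - hydro[code[mutant]]) ** 2
                    count += 1
    return total / count

# Toy code over a two-letter alphabet with length-2 codons and
# made-up amino acids W, X, Y, Z:
toy_code = {c: aa for c, aa in zip(
    ("".join(p) for p in product("AC", repeat=2)), "WXYZ")}
toy_hydro = {"W": 0.0, "X": 1.0, "Y": 2.0, "Z": 3.0}
value = error_robustness(toy_code, toy_hydro, alphabet="AC")
```

Comparing such values between the real code and randomized codon assignments is the usual way the abstract's "high level of error robustness" claim is quantified.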
A Realistic Model under which the Genetic Code is Optimal
Buhrman, H.; van der Gulik, P.T.S.; Klau, G.W.; Schaffner, C.; Speijer, D.; Stougie, L.
2013-01-01
The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By
C code generation applied to nonlinear model predictive control for an artificial pancreas
DEFF Research Database (Denmark)
Boiroux, Dimitri; Jørgensen, John Bagterp
2017-01-01
This paper presents a method to generate C code from MATLAB code applied to a nonlinear model predictive control (NMPC) algorithm. The C code generation uses the MATLAB Coder Toolbox. It can drastically reduce the time required for development compared to a manual porting of code from MATLAB to C...... of glucose regulation for people with type 1 diabetes as a case study. The average computation time when using generated C code is 0.21 s (MATLAB: 1.5 s), and the maximum computation time when using generated C code is 0.97 s (MATLAB: 5.7 s). Compared to the MATLAB implementation, generated C code can run...
Resonance production in p+p, p+A and A+A collisions measured with HADES
Directory of Open Access Journals (Sweden)
Reshetin A.
2012-11-01
The knowledge of baryonic resonance properties and production cross sections plays an important role in the extraction and understanding of medium modifications of mesons in hot and/or dense nuclear matter. We present and discuss systematics on dielectron and strangeness production obtained with HADES in p+p, p+A and A+A collisions in the few-GeV energy regime with respect to these resonances.
Analysis of pion production data measured by HADES in proton-proton collisions at 1.25 GeV
Czech Academy of Sciences Publication Activity Database
Agakishiev, G.; Balanda, A.; Belver, D.; Belyaev, A.; Krása, Antonín; Křížek, Filip; Kugler, Andrej; Sobolev, Yuri, G.; Tlustý, Pavel; Wagner, Vladimír
2015-01-01
Vol. 51, X (2015), Article No. 137. ISSN 1434-6001. R&D Projects: GA MŠk LG12007; GA ČR GA13-06759S. Institutional support: RVO:61389005. Keywords: proton-proton collisions * HADES collaboration * baryon resonance. Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders. Impact factor: 2.373, year: 2015
American Inst. of Architects, Washington, DC.
A model building code for fallout shelters was drawn up for inclusion in four national model building codes. Discussion is given of fallout shelters with respect to (1) nuclear radiation, (2) national policies, and (3) community planning. Fallout shelter requirements for shielding, space, ventilation, construction, and services such as electrical…
Modeling Vortex Generators in a Navier-Stokes Code
Dudek, Julianne C.
2011-01-01
A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.
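The idea of representing a vane by a lift-force source term can be sketched as below. This uses thin-airfoil lift (Cl = 2*pi*alpha) spread evenly over the covered grid points as a stand-in; it is not the actual Wind-US formulation, whose force model and distribution differ:

```python
import math

def vg_source_term(rho, v, planform_area, incidence_deg, n_cells):
    """Per-cell momentum source (N) approximating a vane-type vortex
    generator: thin-airfoil lift distributed evenly over the n_cells
    grid points spanned by the vane. Inputs: local density rho
    (kg/m^3), local speed v (m/s), vane planform area (m^2), and
    angle of incidence (deg). Illustrative only."""
    alpha = math.radians(incidence_deg)
    cl = 2.0 * math.pi * alpha          # thin-airfoil lift slope
    lift = 0.5 * rho * v * v * planform_area * cl
    return lift / n_cells

# Hypothetical single vane in a duct flow:
f = vg_source_term(rho=1.2, v=50.0, planform_area=4e-4,
                   incidence_deg=16.0, n_cells=8)
```

The microramp case mentioned in the abstract then follows naturally: two such calls with opposite signs of incidence, one per equivalent vane.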
Maximizing entropy of image models for 2-D constrained coding
DEFF Research Database (Denmark)
Forchhammer, Søren; Danieli, Matteo; Burini, Nino
2010-01-01
This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). The PRF are 2-D causal finite...... context models, which define stationary probability distributions on finite rectangles and thus allow for calculation of the entropy. We consider two binary constraints and revisit the hard square constraint given by forbidding neighboring 1s, and provide novel results for the constraint that no uniform 2 × 2 square contains all 0s or all 1s. The maximum values of the entropy for the constraints are estimated and binary PRF satisfying the constraint are characterized and optimized w.r.t. the entropy. The maximum binary PRF entropy is 0.839 bits/symbol for the no uniform squares constraint. The entropy...
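The entropy of the hard-square constraint mentioned in the abstract can be estimated numerically with a standard transfer-matrix calculation on width-limited strips (a different, simpler route than the paper's PRF construction); a minimal sketch:

```python
import math

def hard_square_entropy(width, iters=300):
    """Entropy in bits/symbol of binary strips of the given width
    satisfying the hard-square constraint (no two 1s adjacent
    horizontally or vertically), from the dominant eigenvalue of the
    row-transfer matrix, found by power iteration."""
    # Valid rows: bitmasks with no two adjacent 1s.
    rows = [r for r in range(1 << width) if r & (r >> 1) == 0]
    v = [1.0] * len(rows)
    lam = 1.0
    for _ in range(iters):
        # Row r may follow row s iff they share no vertical 1-1 pair.
        w = [sum(v[j] for j, s in enumerate(rows) if r & s == 0)
             for r in rows]
        lam = max(w)
        v = [x / lam for x in w]
    return math.log2(lam) / width

# Strip estimates approach the 2-D hard-square capacity
# (about 0.5879 bits/symbol) as the width grows.
est = hard_square_entropy(8)
```

For width 1 this reduces to the 1-D run-length constraint, whose capacity is log2 of the golden ratio (about 0.694 bits/symbol).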
Large-Signal Code TESLA: Improvements in the Implementation and in the Model
National Research Council Canada - National Science Library
Chernyavskiy, Igor A; Vlasov, Alexander N; Anderson, Jr., Thomas M; Cooke, Simon J; Levush, Baruch; Nguyen, Khanh T
2006-01-01
We describe the latest improvements made in the large-signal code TESLA, which include transformation of the code to a Fortran-90/95 version with dynamical memory allocation and extension of the model...
Code-switched English pronunciation modeling for Swahili spoken term detection
CSIR Research Space (South Africa)
Kleynhans, N
2016-05-01
We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Code switching, where speakers switch language in a conversation, occurs frequently in multilingual environments, and typically...
Modeling Vortex Generators in the Wind-US Code
Dudek, Julianne C.
2010-01-01
A source term model which simulates the effects of vortex generators was implemented into the Wind-US Navier Stokes code. The source term added to the Navier-Stokes equations simulates the lift force which would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, supersonic flow in a rectangular duct with a counterrotating vortex generator pair, and subsonic flow in an S-duct with 22 co-rotating vortex generators. The validation results indicate that the source term vortex generator model provides a useful tool for screening vortex generator configurations and gives comparable results to solutions computed using a gridded vane.
Methodology Using MELCOR Code to Model Proposed Hazard Scenario
Energy Technology Data Exchange (ETDEWEB)
Gavin Hawkley
2010-07-01
This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of the leak path factor (LPF), the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the fraction of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and on other pathways from the building, such as doorways, both open and closed. This study shows how the multiple LPFs within the building can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a receptor placed downwind can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). This study also briefly addresses the particle characteristics which affect atmospheric particle dispersion, and compares this dispersion with the LPF methodology.
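The combinatory evaluation described, in which per-barrier fractions multiply along each pathway and parallel pathways combine weighted by their flow split, can be sketched as follows. The pathway values and flow fractions are illustrative, not the study's MELCOR results:

```python
def series_lpf(factors):
    """LPF along one release pathway: the fractions surviving each
    barrier in series multiply."""
    total = 1.0
    for f in factors:
        total *= f
    return total

def total_lpf(pathways):
    """Combine parallel pathways; each pathway is a
    (flow_fraction, [barrier factors]) pair, with flow fractions
    summing to at most 1 over all pathways."""
    return sum(frac * series_lpf(factors) for frac, factors in pathways)

# The assumed standard 0.5 x 0.5 multiplication as one pathway
# through a filter, plus a small unfiltered doorway pathway
# (all numbers invented for illustration):
lpf = total_lpf([(0.9, [0.5, 0.5, 0.01]),   # ventilation, filtered
                 (0.1, [0.5, 0.5])])        # open doorway, unfiltered
```

Even a small unfiltered pathway dominates the total here, which is the kind of effect the combinatory treatment is meant to expose relative to a flat 0.5 × 0.5 assumption.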
A New Approach to Model Pitch Perception Using Sparse Coding.
Directory of Open Access Journals (Sweden)
Oded Barzelay
2017-01-01
Our acoustical environment abounds with repetitive sounds, some of which are related to pitch perception. It is still unknown how the auditory system, in processing these sounds, relates a physical stimulus and its percept. Since, in mammals, all auditory stimuli are conveyed into the nervous system through the auditory nerve (AN) fibers, a model should explain the perception of pitch as a function of this particular input. However, pitch perception is invariant to certain features of the physical stimulus. For example, a missing-fundamental stimulus with resolved or unresolved harmonics, or low- and high-level amplitude stimuli with the same spectral content: these all give rise to the same percept of pitch. In contrast, the AN representations of these different stimuli are not invariant to these effects. In fact, due to saturation and non-linearity of both cochlear and inner hair cell responses, these differences are enhanced by the AN fibers. Thus there is a difficulty in explaining how the pitch percept arises from the activity of the AN fibers. We introduce a novel approach for extracting pitch cues from the AN population activity for a given arbitrary stimulus. The method is based on a technique known as sparse coding (SC): the representation of pitch cues by a few spatiotemporal atoms (templates) from among a large set of possible ones (a dictionary). The amount of activity of each atom is represented by a non-zero coefficient, analogous to an active neuron. Such a technique has been successfully applied to other modalities, particularly vision. The model is composed of a cochlear model, an SC processing unit, and a harmonic sieve. We show that the model copes with different pitch phenomena: extracting resolved and unresolved harmonics, missing-fundamental pitches, stimuli with both high and low amplitudes, iterated rippled noise, and recorded musical instruments.
Modeling ion exchange in clinoptilolite using the EQ3/6 geochemical modeling code
International Nuclear Information System (INIS)
Viani, B.E.; Bruton, C.J.
1992-06-01
Assessing the suitability of Yucca Mtn., NV as a potential repository for high-level nuclear waste requires the means to simulate ion-exchange behavior of zeolites. Vanselow and Gapon convention cation-exchange models have been added to geochemical modeling codes EQ3NR/EQ6, allowing exchange to be modeled for up to three exchangers or a single exchanger with three independent sites. Solid-solution models that are numerically equivalent to the ion-exchange models were derived and also implemented in the code. The Gapon model is inconsistent with experimental adsorption isotherms of trace components in clinoptilolite. A one-site Vanselow model can describe adsorption of Cs or Sr on clinoptilolite, but a two-site Vanselow exchange model is necessary to describe K contents of natural clinoptilolites
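For a binary homovalent exchange, the Vanselow convention reduces to a closed form for the exchanger-phase mole fractions; a minimal sketch, with a made-up selectivity coefficient and solution activities (not fitted clinoptilolite constants):

```python
def vanselow_binary(kv, a_a, a_b):
    """Equilibrium mole fractions (x_A, x_B) on a one-site exchanger
    for the homovalent exchange A-clino + B+ = B-clino + A+, where the
    Vanselow selectivity coefficient is kv = (x_B * a_A) / (x_A * a_B)
    and a_a, a_b are the solution-phase activities of A+ and B+."""
    ratio = kv * a_b / a_a          # x_B / x_A at equilibrium
    x_b = ratio / (1.0 + ratio)     # with the closure x_A + x_B = 1
    return 1.0 - x_b, x_b

# Illustrative trace-component case: strong preference for B (kv = 100)
# but B present at much lower activity than A.
x_a, x_b = vanselow_binary(kv=100.0, a_a=1e-3, a_b=1e-6)
```

This one-site form corresponds to the model the abstract reports as adequate for Cs or Sr on clinoptilolite; the two-site case adds a second such balance with its own kv and site fraction.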
International Nuclear Information System (INIS)
Li Hai; Huang Chen; Du Aibing; Xu Baoyu
2014-01-01
Thermal conductivity is one of the most important parameters in computer codes for fuel rod performance prediction. Several fuel thermal conductivity models used in foreign computer codes, including models for MOX fuel and UO2 fuel, are introduced in this paper. Thermal conductivities were calculated using these models, and the results were compared and analyzed. Finally, a thermal conductivity model was recommended for the native computer code for fast reactor fuel rod performance prediction. (authors)
Coding conventions and principles for a National Land-Change Modeling Framework
Donato, David I.
2017-07-14
This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.
Cross-band noise model refinement for transform domain Wyner–Ziv video coding
DEFF Research Database (Denmark)
Huang, Xin; Forchhammer, Søren
2012-01-01
TDWZ video coding trails that of conventional video coding solutions, mainly due to the quality of side information, inaccurate noise modeling and loss in the final coding step. The major goal of this paper is to enhance the accuracy of the noise modeling, which is one of the most important aspects influencing the coding performance of DVC. A TDWZ video decoder with a novel cross-band based adaptive noise model is proposed, and a noise residue refinement scheme is introduced to successively update the estimated noise residue for noise modeling after each bit-plane. Experimental results show that the proposed noise model and noise residue refinement scheme can improve the rate-distortion (RD) performance of TDWZ video coding significantly. The quality of the side information modeling is also evaluated by a measure of the ideal code length.
Three-field modeling for MARS 1-D code
International Nuclear Information System (INIS)
Hwang, Moonkyu; Lim, Ho-Gon; Jeong, Jae-Jun; Chung, Bub-Dong
2006-01-01
In this study, a three-field model of the two-phase mixture is developed, and the corresponding finite difference equations are devised. The solution scheme has been implemented into the MARS 1-D code. The three-field formulations adopted are similar to those of the MARS 3-D module, in the sense that mass and momentum are treated separately for the entrained liquid and the continuous liquid. As in the MARS 3-D module, the entrained liquid and continuous liquid are combined into one for the energy equation, assuming thermal equilibrium between the two. All the non-linear terms are linearized to arrange the finite difference equation set into a linear matrix form with respect to the unknown arguments. The problems chosen for the assessment of the newly added entrained field consist of basic conceptual tests, among them a gas-only test, a liquid-only test, a gas-only test with supplied entrained liquid, the Edwards pipe problem, and the GE level swell problem. The conceptual tests performed confirm the sound integrity of the three-field solver.
2016-05-03
resourced Languages, SLTU 2016, 9-12 May 2016, Yogyakarta, Indonesia (Procedia Computer Science 81 (2016) 128-135)
Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection
Neil...
We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Code switching... Our research focuses on pronunciation modeling of English (embedded language) words within
A Perceptual Model for Sinusoidal Audio Coding Based on Spectral Integration
Van de Par, S.; Kohlrausch, A.; Heusdens, R.; Jensen, J.; Holdt Jensen, S.
2005-01-01
Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of
A perceptual model for sinusoidal audio coding based on spectral integration
Van de Par, S.; Kohlrausch, A.; Heusdens, R.; Jensen, J.; Jensen, S.H.
2005-01-01
Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of
Energy Technology Data Exchange (ETDEWEB)
Brannon, R.M.; Wong, M.K.
1996-08-01
A set of model interface guidelines, called MIG, is presented as a means by which any compliant numerical material model can be rapidly installed into any parent code without having to modify the model subroutines. Here, "model" usually means a material model such as one that computes stress as a function of strain, though the term may be extended to any numerical operation. "Parent code" means a hydrocode, finite element code, etc. which uses the model and enforces, say, the fundamental laws of motion and thermodynamics. MIG requires the model developer (who creates the model package) to specify model needs in a standardized but flexible way. MIG includes a dictionary of technical terms that allows developers and parent code architects to share a common vocabulary when specifying field variables. For portability, database management is the responsibility of the parent code. Input/output occurs via structured calling arguments. As much model information as possible (such as the lists of required inputs, as well as lists of precharacterized material data and special needs) is supplied by the model developer in an ASCII text file. Every MIG-compliant model also has three required subroutines to check data, to request extra field variables, and to perform model physics. To date, the MIG scheme has proven flexible in beta installations of a simple yield model, plus a more complicated viscodamage yield model, three electromechanical models, and a complicated anisotropic microcrack constitutive model. The MIG yield model has been successfully installed using identical subroutines in three vectorized parent codes and one parallel C++ code, all predicting comparable results. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, thereby reducing the cost of installing and sharing models in diverse new codes.
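The MIG structure described above (data check, extra-field-variable request, model physics, with the parent code owning all storage) can be sketched roughly as follows. The class and method names here are hypothetical, and the one-parameter elastic "model" stands in for a real constitutive routine.

```python
# Hypothetical sketch of the MIG pattern: a material model exposes three
# entry points, while the parent code owns the database and drives the
# model through structured calling arguments.
class ElasticModel:
    """Toy MIG-style model: linear elasticity, stress from strain."""

    def check_data(self, params):
        # Required subroutine 1: validate user input before the run starts.
        if params.get("youngs_modulus", 0.0) <= 0.0:
            raise ValueError("youngs_modulus must be positive")
        return params

    def request_variables(self):
        # Required subroutine 2: ask the parent code to allocate extra fields.
        return ["stress"]

    def update(self, params, strain):
        # Required subroutine 3: model physics (parent supplies the inputs).
        return params["youngs_modulus"] * strain


class ParentCode:
    """Minimal host: owns storage, enforces the calling convention."""

    def __init__(self, model, params):
        self.model = model
        self.params = model.check_data(params)
        self.fields = {name: 0.0 for name in model.request_variables()}

    def step(self, strain):
        self.fields["stress"] = self.model.update(self.params, strain)
        return self.fields["stress"]


host = ParentCode(ElasticModel(), {"youngs_modulus": 200e9})
print(host.step(1e-3))  # stress for 0.1% strain
```

The point of the pattern, as in MIG, is that `ParentCode` never needs to know what `ElasticModel` computes internally; swapping in a different compliant model requires no changes to the host.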
Mechanistic modelling of gaseous fission product behaviour in UO2 fuel by Rtop code
International Nuclear Information System (INIS)
Kanukova, V.D.; Khoruzhii, O.V.; Kourtchatov, S.Y.; Likhanskii, V.V.; Matveew, L.V.
2002-01-01
The current status of mechanistic modelling by the RTOP code of fission product behaviour in polycrystalline UO2 fuel is described. An outline of the code and the implemented physical models is presented. The general approach to code validation is discussed and exemplified by the results of validating the models of fuel oxidation and grain growth. Different models of intragranular and intergranular gas bubble behaviour have been tested, and the sensitivity of the code within the framework of these models has been analysed. An analysis of available models of the resolution of grain face bubbles is also presented. The possibilities of the RTOP code are illustrated by modelling the behaviour of WWER fuel over the course of a comparative WWER-PWR experiment performed at Halden and by comparison with Yanagisawa experiments. (author)
General Description of Fission Observables: GEF Model Code
Schmidt, K.-H.; Jurado, B.; Amouroux, C.; Schmitt, C.
2016-01-01
The GEF ("GEneral description of Fission observables") model code is documented. It describes the observables for spontaneous fission, neutron-induced fission and, more generally, for fission of a compound nucleus from any other entrance channel, with given excitation energy and angular momentum. The GEF model is applicable for a wide range of isotopes from Z = 80 to Z = 112 and beyond, up to excitation energies of about 100 MeV. The results of the GEF model are compared with fission barriers, fission probabilities, fission-fragment mass- and nuclide distributions, isomeric ratios, total kinetic energies, and prompt-neutron and prompt-gamma yields and energy spectra from neutron-induced and spontaneous fission. Derived properties of delayed neutrons and decay heat are also considered. The GEF model is based on a general approach to nuclear fission that explains a great part of the complex appearance of fission observables on the basis of fundamental laws of physics and general properties of microscopic systems and mathematical objects. The topographic theorem is used to estimate the fission-barrier heights from theoretical macroscopic saddle-point and ground-state masses and experimental ground-state masses. Motivated by the theoretically predicted early localisation of nucleonic wave functions in a necked-in shape, the properties of the relevant fragment shells are extracted. These are used to determine the depths and the widths of the fission valleys corresponding to the different fission channels and to describe the fission-fragment distributions and deformations at scission by a statistical approach. A modified composite nuclear-level-density formula is proposed. It respects some features in the superfluid regime that are in accordance with new experimental findings and with theoretical expectations. These are a constant-temperature behaviour that is consistent with a considerably increased heat capacity and an increased pairing condensation energy that is
Improved virtual channel noise model for transform domain Wyner-Ziv video coding
DEFF Research Database (Denmark)
Huang, Xin; Forchhammer, Søren
2009-01-01
Distributed video coding (DVC) has been proposed as a new video coding paradigm to deal with lossy source coding using side information to exploit the statistics at the decoder to reduce computational demands at the encoder. A virtual channel noise model is utilized at the decoder to estimate the noise distribution between the side information frame and the original frame. This is one of the most important aspects influencing the coding performance of DVC. Noise models with different granularity have been proposed. In this paper, an improved noise model for transform domain Wyner-Ziv video...
Application of the thermal-hydraulic codes in VVER-440 steam generators modelling
Energy Technology Data Exchange (ETDEWEB)
Matejovic, P.; Vranca, L.; Vaclav, E. [Nuclear Power Plant Research Inst. VUJE (Slovakia)
1995-12-31
Applications of CATHARE2 V1.3U and RELAP5/MOD3.0 to VVER-440 SG modelling during normal conditions and during a transient with secondary water lowering are described. A similar recirculation model was chosen for both codes. In the CATHARE calculation, no special measures were taken to artificially optimize the flow rate distribution coefficients for the junction between the SG riser and the steam dome. Contrary to the RELAP code, the CATHARE code is able to reasonably predict the secondary swell level in nominal conditions. Both codes are able to properly model natural phase separation on the SG water level. 6 refs.
Linear-Time Non-Malleable Codes in the Bit-Wise Independent Tampering Model
DEFF Research Database (Denmark)
Cramer, Ronald; Damgård, Ivan Bjerre; Döttling, Nico
Non-malleable codes were introduced by Dziembowski et al. (ICS 2010) as coding schemes that protect a message against tampering attacks. Roughly speaking, a code is non-malleable if decoding an adversarially tampered encoding of a message m produces the original message m or a value m' (eventuall...... non-malleable codes of Agrawal et al. (TCC 2015) and of Cheraghchi and Guruswami (TCC 2014) and improves the previous result in the bit-wise tampering model: it builds the first non-malleable codes with linear-time complexity and optimal rate (i.e. rate 1 - o(1)).
Santos, José; Monteagudo, Angel
2011-02-21
As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid by another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to gauge the difficulty of finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel use of evolutionary computing provides a new perspective in the open debate between the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the fact that the best possible codes show the patterns of the
Directory of Open Access Journals (Sweden)
Monteagudo Ángel
2011-02-01
Background: As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid by another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Results: Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to gauge the difficulty of finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel use of evolutionary computing provides a new perspective in the open debate between the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Conclusions: Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the
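The engineering-approach search described in the abstract above can be caricatured in a few lines. This is a deliberately toy version (a 16-"codon" alphabet, a one-dimensional amino-acid property, and a hill climb in place of a full genetic algorithm), not the authors' fitness function or code model.

```python
import random

# Toy sketch: search over hypothetical genetic codes by swapping amino-acid
# assignments between codon blocks, scoring each code by the damage caused
# by single-point mutations. The two-letter "codons" and the 1-D property
# scale are stand-ins for the real 64-codon translation table.

BASES = "ACGU"
CODONS = [a + b for a in BASES for b in BASES]      # 16 two-letter "codons"

def mutation_cost(code, prop):
    """Sum of squared property changes over all single-point mutations."""
    cost = 0.0
    for codon in CODONS:
        for pos in range(2):
            for base in BASES:
                if base != codon[pos]:
                    mutant = codon[:pos] + base + codon[pos + 1:]
                    cost += (prop[code[codon]] - prop[code[mutant]]) ** 2
    return cost

def hill_climb(code, prop, steps=2000, seed=0):
    """Greedy search for a better-adapted hypothetical code."""
    rng = random.Random(seed)
    best, best_cost = dict(code), mutation_cost(code, prop)
    for _ in range(steps):
        cand = dict(best)
        c1, c2 = rng.sample(CODONS, 2)              # swap two assignments
        cand[c1], cand[c2] = cand[c2], cand[c1]
        cost = mutation_cost(cand, prop)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

prop = {aa: i for i, aa in enumerate("ABCDEFGH")}   # fake 1-D property scale
start = {codon: "ABCDEFGH"[i % 8] for i, codon in enumerate(CODONS)}
best, best_cost = hill_climb(start, prop)
print(mutation_cost(start, prop), "->", best_cost)
```

How far such a search can improve on a starting code, and how hard the improvement is to find, is exactly the kind of evidence the paper uses to situate the canonical code in the fitness landscape.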
Experimental data bases useful for quantification of model uncertainties in best estimate codes
International Nuclear Information System (INIS)
Wilson, G.E.; Katsma, K.R.; Jacobson, J.L.; Boodry, K.S.
1988-01-01
A data base is necessary for assessment of thermal hydraulic codes within the context of the new NRC ECCS Rule. Separate effect tests examine particular phenomena that may be used to develop and/or verify models and constitutive relationships in the code. Integral tests are used to demonstrate the capability of codes to model global characteristics and sequence of events for real or hypothetical transients. The nuclear industry has developed a large experimental data base of fundamental nuclear, thermal-hydraulic phenomena for code validation. Given a particular scenario, and recognizing the scenario's important phenomena, selected information from this data base may be used to demonstrate applicability of a particular code to simulate the scenario and to determine code model uncertainties. LBLOCA experimental data bases useful to this objective are identified in this paper. 2 tabs
INTRA/Mod3.2. Manual and Code Description. Volume I - Physical Modelling
Energy Technology Data Exchange (ETDEWEB)
Andersson, Jenny; Edlund, O.; Hermann, J.; Johansson, Lise-Lotte
1999-01-01
The INTRA Manual consists of two volumes. Volume I is a thorough description of the INTRA code, its physical modelling, and the governing numerical methods; Volume II, the User's Manual, is an input description. This document, the Physical Modelling of INTRA, contains code characteristics, integration methods and applications.
Improving system modeling accuracy with Monte Carlo codes
International Nuclear Information System (INIS)
Johnson, A.S.
1996-01-01
The use of computer codes based on Monte Carlo methods to perform criticality calculations has become commonplace. Although results frequently published in the literature report calculated k-eff values to four decimal places, people who use the codes in their everyday work say that they only believe the first two decimal places of any result. The lack of confidence in the computed k-eff values may be due to the tendency of the reported standard deviation to underestimate errors associated with the Monte Carlo process. The standard deviation as reported by the codes is the standard deviation of the mean of the k-eff values for individual generations in the computer simulation, not the standard deviation of the computed k-eff value compared with the physical system. A more subtle problem with the standard deviation of the mean as reported by the codes is that all the k-eff values from the separate generations are not statistically independent, since the k-eff of a given generation is a function of the k-eff of the previous generation, which is ultimately based on the starting source. To produce a standard deviation that is more representative of the physical system, statistically independent values of k-eff are needed.
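A minimal sketch of why the reported standard deviation can mislead, and of the batch-means remedy often used for correlated generation estimates. The AR(1) correlation model and all numbers below are assumptions for illustration, not from the abstract.

```python
import random
import statistics

def simulate_keff(n, rho=0.8, seed=1):
    """AR(1) series mimicking correlated per-generation k-eff estimates:
    each generation's estimate depends on the previous one."""
    rng = random.Random(seed)
    k, series = 1.0, []
    for _ in range(n):
        k = 1.0 + rho * (k - 1.0) + rng.gauss(0.0, 0.001)
        series.append(k)
    return series

def stderr_naive(xs):
    """Standard deviation of the mean, treating generations as independent."""
    return statistics.stdev(xs) / len(xs) ** 0.5

def stderr_batch_means(xs, batch=50):
    """Average consecutive generations into batches that are closer to
    independent, then take the standard error over the batch means."""
    means = [statistics.mean(xs[i:i + batch]) for i in range(0, len(xs), batch)]
    return statistics.stdev(means) / len(means) ** 0.5

ks = simulate_keff(5000)
print("naive:", stderr_naive(ks), "batch means:", stderr_batch_means(ks))
```

With positive generation-to-generation correlation, the batch-means estimate comes out noticeably larger than the naive one, which is the understatement the abstract describes.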
Model document for code officials on solar heating and cooling of buildings. First draft
Energy Technology Data Exchange (ETDEWEB)
1979-03-01
The primary purpose of this document is to promote the use and further development of solar energy through a systematic categorizing of all the attributes in a solar energy system that may impact on those requirements in the nationally recognized model codes relating to the safeguard of life or limb, health, property, and public welfare. Administrative provisions have been included to integrate this document with presently adopted codes, so as to allow incorporation into traditional building, plumbing, mechanical, and electrical codes. In those areas where model codes are not used it is recommended that the requirements, references, and standards herein be adopted to regulate all solar energy systems. (MOW)
Gupta, H. V.
2014-12-01
There is a clear need for comprehensive quantification of simulation uncertainty when using geophysical models to support and inform decision-making. Further, it is clear that the nature of such uncertainty depends on the quality of information in (a) the forcing data (driver information), (b) the model code (prior information), and (c) the specific values of inferred model components that localize the model to the system of interest (inferred information). Of course, the relative quality of each varies with geophysical discipline and specific application. In this talk I will discuss a structured approach to characterizing how 'Information', and hence 'Uncertainty', is coded into the structures of physics-based geophysical models. I propose that a better understanding of what is meant by "Information", and how it is embodied in models and data, can offer a structured (less ad-hoc), robust and insightful basis for diagnostic learning through the model-data juxtaposition. In some fields, a natural consequence may be to emphasize the a priori role of System Architecture (Process Modeling) over that of the selection of System Parameterization, thereby emphasizing the more creative aspect of scientific investigation - the use of models for Discovery and Learning.
The modeling of core melting and in-vessel corium relocation in the APRIL code
Energy Technology Data Exchange (ETDEWEB)
Kim, S.W.; Podowski, M.Z.; Lahey, R.T. [Rensselaer Polytechnic Institute, Troy, NY (United States)] [and others]
1995-09-01
This paper is concerned with the modeling of severe accident phenomena in boiling water reactors (BWR). New models of core melting and in-vessel corium debris relocation are presented, developed for implementation in the APRIL computer code. The results of model testing and validations are given, including comparisons against available experimental data and parametric/sensitivity studies. Also, the application of these models, as parts of the APRIL code, is presented to simulate accident progression in a typical BWR reactor.
Modelling of Cold Water Hammer with WAHA code
International Nuclear Information System (INIS)
Gale, J.; Tiselj, I.
2003-01-01
The Cold Water Hammer experiment described in the present paper is a simple facility where overpressure accelerates a column of liquid water into the steam bubble at the closed vertical end of the pipe. Severe water hammer with high pressure peak occurs when the vapor bubble condenses and the liquid column hits the closed end of the pipe. Experimental data of Forschungszentrum Rossendorf are being used to test the newly developed computer code WAHA and the computer code RELAP5. Results show that a small amount of noncondensable air in the steam bubble significantly affects the magnitude of the calculated pressure peak, while the wall friction and condensation rate only slightly affect the simulated phenomena. (author)
A high burnup model developed for the DIONISIO code
Soba, A.; Denis, A.; Romero, L.; Villarino, E.; Sardella, F.
2013-02-01
A group of subroutines, designed to extend the application range of the fuel performance code DIONISIO to high burn up, has recently been included in the code. The new calculation tools, which are tuned for UO2 fuels in LWR conditions, predict the radial distribution of power density, burnup, and concentration of diverse nuclides within the pellet. The balance equations of all the isotopes involved in the fission process are solved in a simplified manner, and the one-group effective cross sections of all of them are obtained as functions of the radial position in the pellet, burnup, and enrichment in 235U. In this work, the subroutines are described and the results of the simulations performed with DIONISIO are presented. The good agreement with the data provided in the FUMEX II/III NEA data bank can be easily recognized.
A predictive transport modeling code for ICRF-heated tokamaks
International Nuclear Information System (INIS)
Phillips, C.K.; Hwang, D.Q.
1992-02-01
In this report, a detailed description of the physics included in the WHIST/RAZE package, as well as a few illustrative examples of the capabilities of the package, is presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be discussed in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package in section 5.
Development and implementation of a new trigger and data acquisition system for the HADES detector
Energy Technology Data Exchange (ETDEWEB)
Michel, Jan
2012-11-16
One of the crucial points of instrumentation in modern nuclear and particle physics is the setup of data acquisition systems (DAQ). In collisions of heavy ions, particles of special interest for research are often produced at very low rates resulting in the need for high event rates and a fast data acquisition. Additionally, the identification and precise tracking of particles requires fast and highly granular detectors. Both requirements result in very high data rates that have to be transported within the detector read-out system: Typical experiments produce data at rates of 200 to 1,000 MByte/s. The structure of the trigger and read-out systems of such experiments is quite similar: A central instance generates a signal that triggers read-out of all sub-systems. The signals from each detector system are then processed and digitized by front-end electronics before they are transported to a computing farm where data is analyzed and prepared for long-term storage. Some systems introduce additional steps (high level triggers) in this process to select only special types of events to reduce the amount of data to be processed later. The main focus of this work is put on the development of a new data acquisition system for the High Acceptance Di-Electron Spectrometer HADES located at the GSI Helmholtz Center for Heavy Ion Research in Darmstadt, Germany. Fully operational since 2002, its front-end electronics and data transport system were subject to a major upgrade program. The goal was an increase of the event rate capabilities by a factor of more than 20 to reach event rates of 20 kHz in heavy ion collisions and more than 50 kHz in light collision systems. The new electronics are based on FPGA-equipped platforms distributed throughout the detector. Data is transported over optical fibers to reduce the amount of electromagnetic noise induced in the sensitive front-end electronics. Besides the high data rates of up to 500 MByte/s at the design event rate of 20 kHz, the
A computer code for calculations in the algebraic collective model of the atomic nucleus
Welsh, T. A.; Rowe, D. J.
2014-01-01
A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) x SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed. This range includes essentially all Hamiltonians that are rational functi...
Delta (1232) Dalitz decay in proton-proton collisions at T=1.25 GeV measured with HADES at GSI
Czech Academy of Sciences Publication Activity Database
Adamczewski-Musch, J.; Arnold, O.; Atomssa, E. T.; Chlad, Lukáš; Kugler, Andrej; Rodriguez Ramos, Pablo; Sobolev, Yuri, G.; Svoboda, Ondřej; Tlustý, Pavel; Wagner, Vladimír
2017-01-01
Roč. 95, č. 6 (2017), č. článku 065205. ISSN 2469-9985 R&D Projects: GA ČR GA13-06759S Institutional support: RVO:61389005 Keywords : HADES collaboration * heavy ion collisions * Delta(1232) Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders OBOR OECD: Nuclear physics Impact factor: 3.820, year: 2016
Nodal kinetics model upgrade in the Penn State coupled TRAC/NEM codes
International Nuclear Information System (INIS)
Beam, Tara M.; Ivanov, Kostadin N.; Baratta, Anthony J.; Finnemann, Herbert
1999-01-01
The Pennsylvania State University currently maintains and does development and verification work for its own versions of the coupled three-dimensional kinetics/thermal-hydraulics codes TRAC-PF1/NEM and TRAC-BF1/NEM. The subject of this paper is nodal model enhancements in the above mentioned codes. Because of the numerous validation studies that have been performed on almost every aspect of these codes, this upgrade is done without a major code rewrite. The upgrade consists of four steps. The first two steps are designed to improve the accuracy of the kinetics model, based on the nodal expansion method. The polynomial expansion solution of the 1D transverse integrated diffusion equation is replaced with a solution that uses a semi-analytic expansion. Further, the standard parabolic polynomial representation of the transverse leakage in the above 1D equations is replaced with an improved approximation. The last two steps of the upgrade address the code efficiency by improving the solution of the time-dependent NEM equations and implementing a multi-grid solver. These four improvements are implemented into the standalone NEM kinetics code. Verification of this code was accomplished based on the original verification studies. The results show that the new methods improve the accuracy and efficiency of the code. The verification of the upgraded NEM model in the TRAC-PF1/NEM and TRAC-BF1/NEM coupled codes is underway.
Evaluation of the analysis models in the ASTRA nuclear design code system
Energy Technology Data Exchange (ETDEWEB)
Cho, Nam Jin; Park, Chang Jea; Kim, Do Sam; Lee, Kyeong Taek; Kim, Jong Woon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)
2000-11-15
In the field of nuclear reactor design, the main practice has been the application of improved design code systems. In that process, a great deal of experience and knowledge was accumulated in processing input data, nuclear fuel reload design, and the production and analysis of design data. However, less effort was devoted to analyzing the methodology and to developing or improving those code systems. Recently, KEPCO Nuclear Fuel Company (KNFC) developed the ASTRA (Advanced Static and Transient Reactor Analyzer) code system for nuclear reactor design and analysis. In this code system, two-group constants are generated from the CASMO-3 code system. The objective of this research is to analyze the analysis models used in the ASTRA/CASMO-3 code system. This evaluation requires an in-depth comprehension of the models, which is as important as the development of the code system itself. Currently, most of the code systems used in domestic nuclear power plants were imported, so it is very difficult to maintain them and to adapt them to changing circumstances. Therefore, the evaluation of the analysis models in the ASTRA nuclear reactor design code system is very important.
International Nuclear Information System (INIS)
Blanchet, Y.; Obry, P.; Louvet, J.; Deshayes, M.; Phalip, C.
1979-01-01
The explicit 2-D axisymmetric Lagrangian code SIRIUS, developed at the CEA/DRNR, Cadarache, deals with transient compressive flows in deformable primary tanks with more or less complex internal component geometries. The code has been subjected to a two-year intensive validation program against scale-model experiments, and a number of improvements have been incorporated. This paper presents a recent calculation of one of these experiments using the SIRIUS code; the comparison with experimental results shows the encouraging possibilities of this Lagrangian code.
Modeling Proton- and Light Ion-Induced Reactions at Low Energies in the MARS15 Code
Energy Technology Data Exchange (ETDEWEB)
Rakhno, I. L. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Mokhov, N. V. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Gudima, K. K. [National Academy of Sciences, Chisinau (Moldova)
2015-04-25
An implementation of the ALICE code and the TENDL evaluated nuclear data library in the Monte Carlo code MARS15, used to describe nuclear reactions induced by low-energy projectiles, is presented. Comparisons between modeling results and experimental data on reaction cross sections and secondary particle distributions are shown.
A Test of Two Alternative Cognitive Processing Models: Learning Styles and Dual Coding
Cuevas, Joshua; Dawson, Bryan L.
2018-01-01
This study tested two cognitive models, learning styles and dual coding, which make contradictory predictions about how learners process and retain visual and auditory information. Learning styles-based instructional practices are common in educational environments despite a questionable research base, while the use of dual coding is less…
Coupling of 3D neutronics models with the system code ATHLET
International Nuclear Information System (INIS)
Langenbuch, S.; Velkov, K.
1999-01-01
The system code ATHLET for plant transient and accident analysis has been coupled with 3D neutronics models, like QUABOX/CUBBOX, for the realistic evaluation of some specific safety problems under discussion. The considerations for the coupling approach and its realization are discussed. The specific features of the coupled code system established are explained and experience from first applications is presented. (author)
Directory of Open Access Journals (Sweden)
Shangdi Chen
2014-01-01
Multisender authentication codes allow a group of senders to construct an authenticated message for a receiver such that the receiver can verify the authenticity of the received message. In this paper, we construct multisender authentication codes with a sequential model from symplectic geometry over finite fields; the parameters and the maximum probabilities of deception are also calculated.
Implementation of the critical points model in a SFM-FDTD code working in oblique incidence
Energy Technology Data Exchange (ETDEWEB)
Hamidi, M; Belkhir, A; Lamrous, O [Laboratoire de Physique et Chimie Quantique, Universite Mouloud Mammeri, Tizi-Ouzou (Algeria); Baida, F I, E-mail: omarlamrous@mail.ummto.dz [Departement d' Optique P.M. Duffieux, Institut FEMTO-ST UMR 6174 CNRS Universite de Franche-Comte, 25030 Besancon Cedex (France)
2011-06-22
We describe the implementation of the critical points model in a finite-difference time-domain (FDTD) code working at oblique incidence and dealing with dispersive media through the split-field method. Several tests are presented to validate our code, in addition to an application devoted to the plasmon resonance of a gold nanoparticle grating.
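The critical points model referred to here combines a Drude background with a sum of "critical point" resonances in the permittivity. A minimal sketch of such a dispersion function follows; the parameter values are illustrative only, sign conventions differ between authors, and this is not the SFM-FDTD implementation described in the abstract.

```python
import cmath

def eps_drude_cp(w, eps_inf, wp, gamma_d, cps):
    """Drude term plus 'critical point' resonances.

    cps is a list of (A, phi, Omega, Gamma) tuples; all frequencies share
    one unit (e.g. eV). Values chosen by the caller are illustrative only.
    """
    # Drude free-electron background
    eps = eps_inf - wp**2 / (w * (w + 1j*gamma_d))
    # critical-point (interband) resonance pairs
    for A, phi, Omega, Gamma in cps:
        eps += A*Omega*(cmath.exp(1j*phi)/(Omega - w - 1j*Gamma)
                        + cmath.exp(-1j*phi)/(Omega + w + 1j*Gamma))
    return eps

# Drude-limit sanity check: eps_inf=1, wp=2, no damping, no CP terms -> 1 - 4 = -3
eps_check = eps_drude_cp(1.0, 1.0, 2.0, 0.0, [])
```

The phase phi rotates the resonance line shape; in time-domain codes each such term is typically turned into an auxiliary differential equation rather than evaluated directly as above.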
FISIC - a full-wave code to model ion cyclotron resonance heating of tokamak plasmas
International Nuclear Information System (INIS)
Kruecken, T.
1988-08-01
We present a user manual for the FISIC code which solves the integrodifferential wave equation in the finite Larmor radius approximation in fully toroidal geometry to simulate ICRF heating experiments. The code models the electromagnetic wave field as well as antenna coupling and power deposition profiles in axisymmetric plasmas. (orig.)
Implementation of a Model of Turbulence into a System Code, GAMMA+
International Nuclear Information System (INIS)
Kim, Hyeonil; Lim, Hong-Sik; No, Hee-Cheon
2015-01-01
The Launder-Sharma (LS) model was selected as the best model for predicting heat transfer performance, offsetting the lack of accuracy of even recently updated empirical correlations, based on both an extensive review of numerical analyses and a validation process. An application of the Launder-Sharma model to the system analysis code GAMMA+ for gas-cooled reactors is presented: 1) governing equations, discretization, and algebraic equations; 2) application results of GAMMA''T, an integrated GAMMA+ code with CFD capability of low-Re resolution incorporated. The numerical foundation was formulated and implemented such that the capability of the LS model was incorporated into GAMMA+, a system code for gas-cooled reactors, on the same backbone of the ICE scheme on a staggered mesh, that is, the code structure and numerical schemes used in the original code. The GAMMA''T code, an integrated system code with low-Re CFD capability on board, was suitably verified using an available set of data covering turbulent flow and turbulent forced convection. In addition, it gave solutions of the same quality of prediction with far fewer meshes. This is a considerable advantage of the application in a system code.
The Nuremberg Code subverts human health and safety by requiring animal modeling
Greek, Ray; Pippus, Annalea; Hansen, Lawrence A
2012-01-01
Background: The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion: We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive...
Energy Technology Data Exchange (ETDEWEB)
Joshua J. Cogliati; Abderrafi M. Ougouag
2006-10-01
A comprehensive, high fidelity model for pebble flow has been developed and embodied in the PEBBLES computer code. In this paper, a description of the physical artifacts included in the model is presented and some results from using the computer code for predicting the features of pebble flow and packing in a realistic pebble bed reactor design are shown. The sensitivity of models to various physical parameters is also discussed.
A combined N-body and hydrodynamic code for modeling disk galaxies
International Nuclear Information System (INIS)
Schroeder, M.C.
1989-01-01
A combined N-body and hydrodynamic computer code for the modeling of two-dimensional galaxies is described. The N-body portion of the code calculates the motion of the particle component of a galaxy, while the hydrodynamics portion follows the motion and evolution of the fluid component. A complete description of the numerical methods used in each portion of the code is given. Additionally, proof tests of the separate and combined portions of the code are presented and discussed. Finally, the topics researched with the code and the results obtained are discussed. These include: the measurement of stellar relaxation times in disk galaxy simulations; the effects of two-armed spiral perturbations on stable axisymmetric disks; the effects of the inclusion of an interstellar medium (ISM) on the stability of disk galaxies; and the effect of the inclusion of stellar evolution on disk galaxy simulations.
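The N-body portion of such a code can be illustrated with a minimal 2-D leapfrog (kick-drift-kick) integrator. The sketch below is generic, with arbitrary units and an arbitrary softening length, and is not the code described in the abstract.

```python
import math

def accelerations(pos, mass, G=1.0, soft=1e-3):
    # pairwise softened gravitational accelerations in 2-D
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r2 = dx*dx + dy*dy + soft*soft
            inv_r3 = 1.0 / (r2 * math.sqrt(r2))
            acc[i][0] += G * mass[j] * dx * inv_r3
            acc[i][1] += G * mass[j] * dy * inv_r3
    return acc

def leapfrog(pos, vel, mass, dt, steps):
    # kick-drift-kick leapfrog: symplectic, good long-term energy behaviour
    acc = accelerations(pos, mass)
    for _ in range(steps):
        for i in range(len(pos)):
            vel[i][0] += 0.5*dt*acc[i][0]; vel[i][1] += 0.5*dt*acc[i][1]
            pos[i][0] += dt*vel[i][0];     pos[i][1] += dt*vel[i][1]
        acc = accelerations(pos, mass)
        for i in range(len(pos)):
            vel[i][0] += 0.5*dt*acc[i][0]; vel[i][1] += 0.5*dt*acc[i][1]
    return pos, vel

def energy(pos, vel, mass, G=1.0):
    # kinetic plus (unsoftened) pairwise potential energy
    n = len(pos)
    ke = sum(0.5*mass[i]*(vel[i][0]**2 + vel[i][1]**2) for i in range(n))
    pe = 0.0
    for i in range(n):
        for j in range(i+1, n):
            r = math.hypot(pos[j][0]-pos[i][0], pos[j][1]-pos[i][1])
            pe -= G*mass[i]*mass[j]/r
    return ke + pe

# demo: equal-mass binary on a circular orbit (G = 1 units)
pos = [[-0.5, 0.0], [0.5, 0.0]]
v = math.sqrt(2.0) / 2.0            # circular speed for separation 1, total mass 2
vel = [[0.0, -v], [0.0, v]]
mass = [1.0, 1.0]
e0 = energy(pos, vel, mass)
pos, vel = leapfrog(pos, vel, mass, dt=0.005, steps=2000)
drift = abs(energy(pos, vel, mass) - e0) / abs(e0)
```

Monitoring the relative energy drift, as in the demo, is the standard proof test for the particle component of such codes.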
Subgroup A: nuclear model codes report to the Sixteenth Meeting of the WPEC
International Nuclear Information System (INIS)
Talou, P.; Chadwick, M.B.; Dietrich, F.S.; Herman, M.; Kawano, T.; Koning, A.; Oblozinsky, P.
2004-01-01
The Subgroup A activities focus on the development of nuclear reaction models and codes used in evaluation work for nuclear reactions from the unresolved resonance region up to the pion production threshold, and for target nuclides from mass numbers in the low teens and heavier. Much of the effort is devoted by each participant to the continuing development of their own institution's codes. Progress in this arena is reported in detail for each code in the present document. EMPIRE-II is publicly accessible. The release of the TALYS code has been announced for the ND2004 Conference in Santa Fe, NM, October 2004. McGNASH is still under development and is not expected to be released in the very near future. In addition, Subgroup A members have demonstrated a growing interest in working on common modeling and code capabilities, which would significantly reduce the amount of duplicated work, help manage the growing number of existing codes efficiently, and render code intercomparisons much easier. A recent and important activity of Subgroup A has therefore been to develop the framework and first bricks of the ModLib library, which consists of mostly independent pieces of code written in Fortran 90 (and above) to be used in existing and future nuclear reaction codes. Significant progress in the development of ModLib has been made during the past year. Several physics modules have been added to the library, and a few more have been planned in detail for the coming year.
Energy Technology Data Exchange (ETDEWEB)
Gould, R K; Srivastava, R
1979-12-01
Models and computer codes which may be used to describe flow reactors in which high-purity, solar-grade silicon is produced via reduction of gaseous silicon halides are described. A prominent example of the type of process which may be studied using the codes developed in this program is the SiCl/sub 4//Na reactor currently being developed by the Westinghouse Electric Corp. During this program two large computer codes were developed. The first is the CHEMPART code, an axisymmetric, marching code which treats two-phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. This code, based on the AeroChem LAPP (Low Altitude Plume Program) code, can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Also, deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail than can be afforded using CHEMPART. These two codes have been used in this program to predict particle formation characteristics and wall collection efficiencies for SiCl/sub 4//Na flow reactors. Results are described.
Energy Technology Data Exchange (ETDEWEB)
Farmer, M. T. [Argonne National Lab. (ANL), Argonne, IL (United States)
2017-09-01
MELTSPREAD3 is a transient one-dimensional computer code that has been developed to predict the gravity-driven flow and freezing behavior of molten reactor core materials (corium) in containment geometries. Predictions can be made for corium flowing across surfaces under either dry or wet cavity conditions. The spreading surfaces that can be selected are steel, concrete, a user-specified material (e.g., a ceramic), or an arbitrary combination thereof. The corium can have a wide range of compositions of reactor core materials that includes distinct oxide phases (predominantly Zr and steel oxides) plus metallic phases (predominantly Zr and steel). The code requires input that describes the containment geometry, melt “pour” conditions, and cavity atmospheric conditions (i.e., pressure, temperature, and cavity flooding information). For cases in which the cavity contains a preexisting water layer at the time of RPV failure, melt jet breakup and particle bed formation can be calculated mechanistically given the time-dependent melt pour conditions (input data) as well as the heatup and boiloff of water in the melt impingement zone (calculated). For core debris impacting either the containment floor or previously spread material, the code calculates the transient hydrodynamics and heat transfer which determine the spreading and freezing behavior of the melt. The code predicts conditions at the end of the spreading stage, including melt relocation distance, depth and material composition profiles, substrate ablation profile, and wall heatup. Code output can be used as input to other models such as CORQUENCH that evaluate long-term core-concrete interaction behavior following the transient spreading stage. MELTSPREAD3 was originally developed to investigate BWR Mark I liner vulnerability, but has been substantially upgraded and applied to other reactor designs (e.g., the EPR), and more recently to the plant accidents at Fukushima Daiichi. The most recent round of
Realistic edge field model code REFC for designing and study of isochronous cyclotron
International Nuclear Information System (INIS)
Ismail, M.
1989-01-01
The focusing properties and the requirements for isochronism in a cyclotron magnet configuration are well known in the hard-edge field model. The fact that they quite often change considerably in a realistic field can be attributed mainly to the influence of the edge field. A solution to this problem requires a field model which allows a simple construction of the equilibrium orbit and yields simple formulae. This can be achieved by using a fitted realistic edge field (Hudson et al 1975) in the region of the pole edge; such a field model is therefore called a realistic edge field model. A code REFC based on the realistic edge field model has been developed to design cyclotron sectors, and the code FIELDER has been used to study the beam properties. In this report the REFC code is described, along with some relevant explanation of the FIELDER code. (author). 11 refs., 6 figs
The production of {eta} and {omega} mesons in 3.5 GeV p+p interaction in HADES
Energy Technology Data Exchange (ETDEWEB)
Teilab, Khaled
2011-08-31
The study of meson production in proton-proton collisions in the energy range up to one GeV above the production threshold provides valuable information about the nature of the nucleon-nucleon interaction. Theoretical models describe the interaction between nucleons via the exchange of mesons. In such models, different mechanisms contribute to the production of the mesons in nucleon-nucleon collisions. The measurement of total and differential production cross sections provides information which can help in determining the magnitude of the various mechanisms. Moreover, such cross section information serves as input to the transport calculations which describe e.g. the production of e{sup +}e{sup -} pairs in proton- and pion-induced reactions as well as in heavy ion collisions. In this thesis, the production of {omega} and {eta} mesons in proton-proton collisions at 3.5 GeV beam energy was studied using the High Acceptance DiElectron Spectrometer (HADES) installed at the Schwerionensynchrotron (SIS 18) at the Helmholtzzentrum fuer Schwerionenforschung in Darmstadt. About 80 000 {omega} mesons and 35 000 {eta} mesons were reconstructed. Total production cross sections of both mesons were determined. Furthermore, the collected statistics allowed for the extraction of angular distributions of both mesons as well as Dalitz plot studies. The {omega} and {eta} mesons were reconstructed via their decay into three pions ({pi}{sup +}{pi}{sup -}{pi}{sup 0}) in the exclusive reaction pp {yields} pp{pi}{sup +}{pi}{sup -}{pi}{sup 0}. The charged particles were identified via their characteristic energy loss, via the measurement of their time of flight and momentum, or using kinematics. The neutral pion was reconstructed using the missing mass method. A kinematic fit was applied to improve the resolution and to select events in which a {pi}{sup 0} was produced. The correction of the measured yields for the effects of spectrometer acceptance was done as a function of four
International Nuclear Information System (INIS)
Serfontein, Dawid E.; Mulder, Eben J.; Reitsma, Frederik
2014-01-01
A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. The task of, for instance, nuclear regulators to verify the accuracy of such translated files can therefore be very difficult and cumbersome. Translation errors may then go unnoticed, which may have disastrous consequences later on if a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It also removes human error from the process, which may significantly enhance its accuracy and reliability. The developed algorithm automatically creates a verification log file which permanently records the name and value of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.
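The translate-plus-verification-log idea generalizes readily. The sketch below is a minimal Python illustration; the card names, target names, and log format are all hypothetical and have nothing to do with the actual VSOP formats.

```python
import json

# Hypothetical card mapping from an "old" input format to a "new" one;
# the names and meanings below are invented for illustration.
CARD_MAP = {
    "XLEN":  ("core_height_cm", float),
    "NREG":  ("n_regions",      int),
    "TITLE": ("title",          str),
}

def translate(old_model, log_path):
    """Translate an input model dict and write a verification log that
    permanently records every variable name and value that was mapped."""
    new_model, log = {}, []
    for old_name, value in old_model.items():
        if old_name not in CARD_MAP:
            # unmapped entries are logged rather than silently dropped
            log.append({"old": old_name, "status": "UNMAPPED", "value": value})
            continue
        new_name, cast = CARD_MAP[old_name]
        new_model[new_name] = cast(value)
        log.append({"old": old_name, "new": new_name,
                    "value": new_model[new_name], "status": "OK"})
    with open(log_path, "w") as fh:
        json.dump(log, fh, indent=2)
    return new_model
```

A reviewer then audits the generated log rather than thousands of raw numeric entries, which is the verification benefit the abstract describes.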
Improvement on reaction model for sodium-water reaction jet code and application analysis
International Nuclear Information System (INIS)
Itooka, Satoshi; Saito, Yoshinori; Okabe, Ayao; Fujimata, Kazuhiro; Murata, Shuuichi
2000-03-01
In selecting the reasonable DBL for a steam generator (SG), it is necessary to improve the analytical method for estimating the sodium temperature during failure propagation due to overheating. Improvements to the sodium-water reaction (SWR) jet code (LEAP-JET ver. 1.30) and application analyses of water injection tests for confirmation of the code's propriety were performed. In improving the code, a gas-liquid interface area density model was introduced to develop a chemical reaction model with little dependence on the calculation mesh size. Test calculations using the improved code (LEAP-JET ver. 1.40) were carried out for the conditions of the SWAT-3·Run-19 test and for an actual-scale SG. It is confirmed that the SWR jet behavior in the results and the influence of the model on the analysis results are reasonable. For the application analysis of the water injection tests, water injection behavior and SWR jet behavior analyses for the new SWAT-1 (SWAT-1R) and SWAT-3 (SWAT-3R) tests were performed using the LEAP-BLOW code and the LEAP-JET code. In the application analysis with the LEAP-BLOW code, a parameter survey study was performed; as a result, the injection nozzle diameter needed to simulate the water leak rate was confirmed. In the application analysis with the LEAP-JET code, the temperature behavior of the SWR jet was investigated. (author)
Development of seismic analysis model for HTGR core on commercial FEM code
International Nuclear Information System (INIS)
Tsuji, Nobumasa; Ohashi, Kazutaka
2015-01-01
The aftermath of the Great East Japan Earthquake prompted a severe revision of the design-basis earthquake intensity. In the aseismic design of a block-type HTGR, securing the structural integrity of the core blocks and other structures made of graphite becomes more important. For the aseismic design of a block-type HTGR, it is necessary to predict the motion of core blocks that collide with adjacent blocks. Some seismic analysis codes were developed in the 1970s, but they are special-purpose codes with poor interoperability with other structural analysis codes. We developed a vertical two-dimensional analytical model on a multi-purpose commercial FEM code which, by using contact elements, takes into account the multiple impacts and friction between block interfaces and the rocking motion on contact with the dowel pins of the HTGR core. This model is verified by comparison with the experimental results of a 12-column vertical-slice vibration test. (author)
Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes
DeWitt, Kenneth; Garg, Vijay; Ameri, Ali
2005-01-01
The aim of this research project is to make use of NASA Glenn on-site computational facilities in order to develop, validate, and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include: application of the Glenn-HT code to specific configurations made available under the Turbine Based Combined Cycle (TBCC) and Ultra Efficient Engine Technology (UEET) projects; and validation of the use of a multi-block code for the time-accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code. This will enable the design of more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.
An Advanced simulation Code for Modeling Inductive Output Tubes
Energy Technology Data Exchange (ETDEWEB)
Thuc Bui; R. Lawrence Ives
2012-04-27
During the Phase I program, CCR completed several major building blocks for a 3D large-signal, inductive output tube (IOT) code using modern computer languages and programming techniques. These included a 3D, Helmholtz, time-harmonic field solver with a fully functional graphical user interface (GUI), automeshing, and adaptivity. Other building blocks included an improved electrostatic Poisson solver with temporal boundary conditions to provide temporal fields for the time-stepping particle pusher, as well as the self electric field caused by time-varying space charge. The magnetostatic field solver was also updated to solve for the self magnetic field caused by the time-changing current density in the output cavity gap. The goal function to optimize an IOT cavity was also formulated, and optimization methodologies were investigated.
GEMSFITS: Code package for optimization of geochemical model parameters and inverse modeling
International Nuclear Information System (INIS)
Miron, George D.; Kulik, Dmitrii A.; Dmytrieva, Svitlana V.; Wagner, Thomas
2015-01-01
Highlights: • Tool for generating consistent parameters against various types of experiments. • Handles a large number of experimental data and parameters (is parallelized). • Has a graphical interface and can perform statistical analysis on the parameters. • Tested on fitting the standard state Gibbs free energies of aqueous Al species. • Example on fitting interaction parameters of mixing models and thermobarometry. - Abstract: GEMSFITS is a new code package for fitting internally consistent input parameters of GEM (Gibbs Energy Minimization) geochemical–thermodynamic models against various types of experimental or geochemical data, and for performing inverse modeling tasks. It consists of the gemsfit2 (parameter optimizer) and gfshell2 (graphical user interface) programs, both accessing a NoSQL database, all developed with flexibility, generality, efficiency, and user friendliness in mind. The parameter optimizer gemsfit2 includes the GEMS3K chemical speciation solver (http://gems.web.psi.ch/GEMS3K), which features a comprehensive suite of non-ideal activity- and equation-of-state models of solution phases (aqueous electrolyte, gas and fluid mixtures, solid solutions, (ad)sorption). The gemsfit2 code uses the robust open-source NLopt library for parameter fitting, which provides a selection between several nonlinear optimization algorithms (global, local, gradient-based), and supports large-scale parallelization. The gemsfit2 code can also perform comprehensive statistical analysis of the fitted parameters (basic statistics, sensitivity, Monte Carlo confidence intervals), thus supporting the user with powerful tools for evaluating the quality of the fits and the physical significance of the model parameters. The gfshell2 code provides menu-driven setup of optimization options (data selection, properties to fit and their constraints, measured properties to compare with computed counterparts, and statistics). The practical utility, efficiency, and
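The inverse-modeling task such a package addresses can be illustrated in miniature: recover a model parameter by minimizing the misfit against experimental data. The sketch below uses a plain-Python golden-section search on a synthetic decay-rate problem; no GEMSFITS or NLopt API is used, and all names are invented for illustration.

```python
import math

def sse(k, data):
    # sum of squared residuals between the model exp(-k*t) and observations
    return sum((y - math.exp(-k*t))**2 for t, y in data)

def fit_decay_rate(data, lo=0.0, hi=5.0, iters=60):
    # golden-section search for the k minimising the misfit
    # (assumes the misfit is unimodal on [lo, hi])
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - g*(b - a), a + g*(b - a)
    for _ in range(iters):
        if sse(c, data) < sse(d, data):
            b, d = d, c
            c = b - g*(b - a)
        else:
            a, c = c, d
            d = a + g*(b - a)
    return 0.5*(a + b)

# synthetic "experiment" generated with a known rate constant k = 0.7
data = [(t/10.0, math.exp(-0.7*t/10.0)) for t in range(0, 21)]
k_fit = fit_decay_rate(data)
```

Real fitting engines add weighting, multi-parameter algorithms, and confidence-interval statistics on top of this basic minimize-the-misfit loop.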
Simulation of Two-group IATE models with EAGLE code
International Nuclear Information System (INIS)
Nguyen, V. T.; Bae, B. U.; Song, C. H.
2011-01-01
The two-group transport equation should be employed in order to correctly describe interfacial area transport in various two-phase flow regimes, especially at the bubbly-to-slug flow transition. This is because differences in bubble sizes and shapes cause substantial differences in their transport mechanisms and interaction phenomena. The basic concept of the two-group interfacial area transport equations has been formulated and demonstrated for the vertical gas-liquid bubbly-to-slug flow transition by Hibiki and his coworkers. More than twelve adjustable parameters need to be determined from an extensive experimental database. It should be noted that these parameters were adjusted only in a one-dimensional approach, using area-averaged flow parameters in a vertical pipe under adiabatic and steady conditions. This brings up the following experimental issue: how to adjust all these parameters as independently as possible by considering experiments where a single physical phenomenon is of importance. The vertical air-water loop (VAWL) has been used for investigating the transport phenomena of two-phase flow at the Korea Atomic Energy Research Institute (KAERI). The data for local void fraction and interfacial area concentration are measured using the five-sensor conductivity probe method and classified into two groups, the small spherical bubble group and the cap/slug one. The initial bubble size, which has a big influence on the interaction mechanism between phases, was controlled. In the present work, the two-group interfacial area transport equation (IATE) was implemented in the EAGLE code and assessed against the VAWL data. The purpose of this study is to investigate the capability of the coefficients derived by Hibiki for the two-group interfacial area transport equations in a CFD code.
Documentation for grants equal to tax model: Volume 3, Source code
International Nuclear Information System (INIS)
Boryczka, M.K.
1986-01-01
The GETT model is capable of forecasting the amount of tax liability associated with all property owned and all activities undertaken by the US Department of Energy (DOE) in site characterization and repository development. The GETT program is a user-friendly, menu-driven model developed using dBASE III™, a relational data base management system. The data base for GETT consists primarily of eight separate dBASE III™ files corresponding to each of the eight taxes (real property, personal property, corporate income, franchise, sales, use, severance, and excise) levied by State and local jurisdictions on business property and activity. Additional smaller files help to control model inputs and reporting options. Volume 3 of the GETT model documentation is the source code. The code is arranged primarily by the eight tax types. Other code files include those for JURISDICTION, SIMULATION, VALIDATION, TAXES, CHANGES, REPORTS, GILOT, and GETT. The code has been verified through hand calculations.
Field-based tests of geochemical modeling codes: New Zealand hydrothermal systems
International Nuclear Information System (INIS)
Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.
1993-12-01
Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions
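The fluid-mineral comparison underlying such predictions can be reduced to a saturation index and a chemical affinity. A minimal sketch of these generic formulas follows; the numerical values are illustrative only and are not Wairakei or Kawerau data.

```python
import math

def saturation_index(ion_activity_product, log10_K):
    # SI = log10(Q/K): < 0 undersaturated, 0 at equilibrium, > 0 supersaturated
    return math.log10(ion_activity_product) - log10_K

def chemical_affinity(si, temperature_K):
    # affinity A = 2.303*R*T*SI (J/mol), the quantity plotted in
    # affinity-temperature diagrams
    R = 8.314  # gas constant, J/(mol K)
    return math.log(10.0) * R * temperature_K * si

# illustrative case: a fluid whose ion activity product Q equals the
# mineral's equilibrium constant K (log K = -9.5) is exactly saturated
si = saturation_index(10**-9.5, -9.5)
```

Plotting the affinity of each candidate mineral against temperature, rather than comparing single SI values, is what mitigates the thermodynamic-data uncertainties mentioned in the abstract.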
Field-based tests of geochemical modeling codes using New Zealand hydrothermal systems
International Nuclear Information System (INIS)
Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.
1994-06-01
Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions
Implementing the WebSocket Protocol Based on Formal Modelling and Automated Code Generation
DEFF Research Database (Denmark)
Simonsen, Kent Inge; Kristensen, Lars Michael
2014-01-01
Model-based software engineering offers several attractive benefits for the implementation of protocols, including automated code generation for different platforms from design-level models. In earlier work, we have proposed a template-based approach using Coloured Petri Net formal models with pragmatic annotations for automated code generation of protocol software. The contribution of this paper is an application of the approach, as implemented in the PetriCode tool, to obtain protocol software implementing the IETF WebSocket protocol. This demonstrates the scalability of our approach to real protocols. Furthermore, we perform formal verification of the CPN model prior to code generation, and test the implementation for interoperability against the Autobahn WebSocket test-suite, resulting in 97% and 99% success rates for the client and server implementations, respectively. The tests show
Context discovery using attenuated Bloom codes: model description and validation
Liu, F.; Heijenk, Geert
A novel approach to performing context discovery in ad-hoc networks based on the use of attenuated Bloom filters is proposed in this report. In order to investigate the performance of this approach, a model has been developed. This document describes the model and its validation. The model has been
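The attenuated-Bloom-filter idea can be sketched compactly. In the toy Python below, layer i of a node's filter advertises services reachable in i hops, and a neighbor's filters are shifted down one layer when merged; the sizes, hash counts, and service names are invented, and this is not the report's model.

```python
import hashlib

class AttenuatedBloomFilter:
    """A stack of Bloom filters; layer i advertises services reachable
    in i hops. The depth, width, and hash count are illustrative choices."""

    def __init__(self, depth=3, bits=256, hashes=3):
        self.depth, self.bits, self.hashes = depth, bits, hashes
        self.layers = [0] * depth          # each layer is a bit mask

    def _positions(self, item):
        # derive k bit positions from a SHA-256 digest of the item
        digest = hashlib.sha256(item.encode()).digest()
        return [int.from_bytes(digest[4*i:4*i+4], "big") % self.bits
                for i in range(self.hashes)]

    def add_local(self, service):
        # a service offered by this node goes into layer 0
        for p in self._positions(service):
            self.layers[0] |= 1 << p

    def merge_neighbor(self, neighbor):
        # incoming filters are attenuated: the neighbor's layer i
        # becomes our layer i+1, so hop counts grow with distance
        for i in range(self.depth - 1):
            self.layers[i + 1] |= neighbor.layers[i]

    def query(self, service):
        # returns the smallest hop count whose layer may contain the
        # service; Bloom filters admit false positives, never false negatives
        for i, layer in enumerate(self.layers):
            if all(layer >> p & 1 for p in self._positions(service)):
                return i
        return None
```

A query answer is therefore a lower-bound hop estimate that may occasionally be a false positive, which is exactly the accuracy/overhead trade-off such a performance model needs to quantify.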
Recent progress of an integrated implosion code and modeling of element physics
International Nuclear Information System (INIS)
Nagatomo, H.; Takabe, H.; Mima, K.; Ohnishi, N.; Sunahara, A.; Takeda, T.; Nishihara, K.; Nishiguchu, A.; Sawada, K.
2001-01-01
Physics of inertial fusion is based on a variety of elements such as compressible hydrodynamics, radiation transport, non-ideal equation of state, non-LTE atomic processes, and relativistic laser-plasma interaction. In addition, the implosion process is not stationary, and fluid dynamics, energy transport and instabilities should be solved simultaneously. In order to study such complex physics, an integrated implosion code including all physics important in the implosion process should be developed. The details of the physics elements should be studied and the resulting numerical models should be installed in the integrated code so that the implosion can be simulated on available computers within realistic CPU time. This task can therefore be separated into two parts. One is to integrate all physics elements into a code, which is strongly related to the development of the hydrodynamic equation solver. We have developed a 2-D integrated implosion code which solves mass, momentum, electron energy, ion energy, equation of state, laser ray-trace, laser absorption, radiation, surface tracing and so on. Reasonable results in simulating Rayleigh-Taylor instability and cylindrical implosions are obtained using this code. The other is code development for each element of the physics and verification of these codes. We have made progress in developing a nonlocal electron transport code and 2- and 3-dimensional radiation hydrodynamics codes. (author)
Modelling of blackout sequence at Atucha-1 using the MARCH3 code
International Nuclear Information System (INIS)
Baron, J.; Bastianelli, B.
1997-01-01
This paper presents the modelling of a complete blackout at the Atucha-1 NPP as a preliminary phase for a Level II probabilistic safety analysis. The MARCH3 code of the STCP (Source Term Code Package) is used, based on a plant model made in accordance with the particularities of the plant design. The analysis covers all the severe accident phases. The results allow one to view the time sequence of the events and provide the basis for source term studies. (author). 6 refs., 2 figs
Coupled neutronics and thermal hydraulics modelling in reactor dynamics codes TRAB-3D and HEXTRAN
International Nuclear Information System (INIS)
Kyrki-Rajamaeki, R.; Raety, H.
1999-01-01
The reactor dynamics codes for transient and accident analyses inherently include coupled modelling of neutronics and thermal hydraulics. In Finland a number of codes with 1D and 3D neutronics models have been developed, which also include models for the cooling circuits. They have been used mainly for the needs of Finnish power plants, but some of the codes have also been utilized elsewhere. Continuous validation, simultaneous development, and experience gained in commercial applications have considerably improved the performance and range of application of the codes. The fast operation of the codes has enabled realistic analysis of a 3D core combined with a full model of the cooling circuit, even in such long reactivity scenarios as ATWS. The reactor dynamics methods are being developed further, and new, more detailed models are created for tasks related to increased safety requirements. For thermal hydraulics calculations, an accurate general flow model based on a new solution method has been developed. Although mainly intended for analysis purposes, the reactor dynamics codes also provide reference solutions for simulator applications. As computer capability increases, these more sophisticated methods can be taken into use in simulator environments as well. (author)
Pechorro, Ed; Lecuyot, Arnaud; Bacon, Andrew; Chalkley, Simon; Milnes, Martin; Williams, Ivan; Williams, Stuart; Muthu, Kavitha
2014-05-01
The Hidden Aquifer & Deep Earth Sounder (HADES) is a ground penetrating radar mission concept for identifying new saline aquifer sites suitable for Carbon Capture & Storage (CCS). HADES uses a newly proposed type of Earth Observation technique, previously deployed in Mars orbit to search for water. It has been proposed to globally map the sub-surface layers of Earth's land area down to a maximum depth of 3km to detect underground aquifers of suitable depth and geophysical conditions for CCS. We present the mission concept together with the approach and findings of the project from which the concept has arisen, a European Space Agency (ESA) study on "Future Earth Observation Missions & Techniques for the Energy Sector" performed by a consortium of partners comprising CGI and SEA. The study aims to improve and increase the current and future application of Earth Observation in provision of data and services to directly address long term energy sector needs for a de-carbonised economy. This is part of ESA's cross-agency "Space and Energy" initiative. The HADES mission concept is defined by our specification of (i) mission requirements, reflecting the challenges and opportunities with identifying CCS sites from space, (ii) the observation technique, derived from ground penetrating radar, and (iii) the preliminary system concept, including specification of the resulting satellite, ground and launch segments. Activities have also included a cost-benefit analysis of the mission, a defined route to technology maturation, and a preliminary strategic plan towards proposed implementation. Moreover, the mission concept maps to a stakeholder analysis forming the initial part of the study. Its method has been to first identify the user needs specific to the energy sector in the global transition towards a de-carbonised economy. This activity revealed the energy sector requirements geared to the identification of suitable CCS sites. Subsequently, a qualitative and quantitative
Drobny, Jon; Curreli, Davide; Ruzic, David; Lasa, Ane; Green, David; Canik, John; Younkin, Tim; Blondel, Sophie; Wirth, Brian
2017-10-01
Surface roughness greatly impacts material erosion, and thus plays an important role in plasma-surface interactions. Developing strategies for efficiently introducing rough surfaces into ion-solid interaction codes will be an important step towards whole-device modeling of plasma devices and future fusion reactors such as ITER. Fractal TRIDYN (F-TRIDYN) is an upgraded version of the Monte Carlo binary-collision-approximation (BCA) program TRIDYN developed for this purpose; it includes an explicit fractal model of surface roughness and extended input and output options for file-based code coupling. Code coupling with both plasma and material codes has been achieved and allows for multi-scale, whole-device modeling of plasma experiments. These code coupling results will be presented. F-TRIDYN has been further upgraded with an alternative, statistical model of surface roughness. The statistical model is significantly faster than, and compares favorably to, the fractal model. Additionally, the statistical model compares well to alternative computational surface roughness models and to experiments. Theoretical links between the fractal and statistical models are made, and further connections to experimental measurements of surface roughness are explored. This work was supported by the PSI-SciDAC Project funded by the U.S. Department of Energy through contract DOE-DE-SC0008658.
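The details of F-TRIDYN's fractal surface model are not given in the abstract, but the flavour of an explicit fractal roughness model can be sketched with one-dimensional midpoint displacement, where the Hurst exponent sets the fractal dimension (D = 2 - H in 1D). All parameters below are illustrative, not F-TRIDYN's.

```python
import random

def fractal_surface(n_levels: int = 8, hurst: float = 0.7,
                    sigma0: float = 1.0, seed: int = 42):
    """Generate a 1D self-affine rough surface by midpoint displacement.
    hurst in (0, 1): smaller values give a rougher surface."""
    rng = random.Random(seed)
    pts = [0.0, 0.0]          # start from a flat segment, pinned at both ends
    sigma = sigma0
    for _ in range(n_levels):
        sigma *= 0.5 ** hurst  # displacement amplitude shrinks with segment length
        new = []
        for a, b in zip(pts, pts[1:]):
            new += [a, 0.5 * (a + b) + rng.gauss(0.0, sigma)]
        pts = new + [pts[-1]]
    return pts

def rms_roughness(heights):
    mean = sum(heights) / len(heights)
    return (sum((h - mean) ** 2 for h in heights) / len(heights)) ** 0.5

surface = fractal_surface(hurst=0.3)   # low Hurst exponent: visibly rough
print(len(surface), round(rms_roughness(surface), 3))
```

A BCA code would then launch ions at such a surface so that local incidence angles, and hence sputtering yields, vary point to point; the statistical alternative mentioned above instead samples surface slopes or heights from a fitted distribution without constructing the geometry explicitly.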
A predictive coding account of bistable perception - a model-based fMRI study.
Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina
2017-05-01
In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we turned to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison against established models of bistable perception based on mutual inhibition and adaptation, noise, or a combination of adaptation and noise was used to validate the predictive coding model. Most importantly, model-based analyses of the fMRI data revealed that prediction-error time courses derived from the predictive coding model correlated with neural signal time courses in the bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together, our current work
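The mechanism described, prediction error accumulating from residual evidence for the suppressed percept until a transition occurs, can be caricatured in a few lines. This is a toy drift-to-bound sketch, not the authors' fitted model; all parameter values are hypothetical.

```python
import random

def simulate_bistable(steps: int = 20000, gain: float = 0.02,
                      noise: float = 0.05, threshold: float = 1.0,
                      seed: int = 1):
    """Toy predictive-coding account of bistability: the suppressed
    interpretation keeps generating prediction error (residual evidence);
    when the accumulated error crosses a threshold, perception switches.
    Returns the list of dominance durations between transitions."""
    rng = random.Random(seed)
    percept = 0          # which of the two interpretations currently dominates
    err, start = 0.0, 0
    durations = []
    for t in range(steps):
        err += gain + rng.gauss(0.0, noise)  # noisy residual evidence accumulates
        err = max(err, 0.0)
        if err >= threshold:                 # prediction error wins: transition
            durations.append(t - start)
            start, percept, err = t, 1 - percept, 0.0
    return durations

durations = simulate_bistable()
mean = sum(durations) / len(durations)
print(f"{len(durations)} transitions, mean dominance {mean:.1f} steps")
```

Even this caricature reproduces a key qualitative signature of bistability: stochastic dominance durations with a skewed, gamma-like distribution rather than a fixed alternation period.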
A predictive coding account of bistable perception - a model-based fMRI study.
Directory of Open Access Journals (Sweden)
Veith Weilnhammer
2017-05-01
Full Text Available In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we turned to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison against established models of bistable perception based on mutual inhibition and adaptation, noise, or a combination of adaptation and noise was used to validate the predictive coding model. Most importantly, model-based analyses of the fMRI data revealed that prediction-error time courses derived from the predictive coding model correlated with neural signal time courses in the bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together
An Eulerian transport-dispersion model of passive effluents: the Difeul code
International Nuclear Information System (INIS)
Wendum, D.
1994-11-01
R and D has decided to develop an Eulerian diffusion model that is easy to adapt to meteorological data coming from different sources: for instance, the ARPEGE code of Meteo-France or the MERCURE code of EDF. This is required in order to be able to apply the code in independent cases: a posteriori studies of accidental releases from nuclear power plants at large or medium scale, or simulation of urban pollution episodes within the ''Reactive Atmospheric Flows'' research project. For simplicity, the numerical formulation of our code is the same as the one used in Meteo-France's MEDIA model. The numerical tests presented in this report show the good performance of these schemes. In order to illustrate the method with a concrete example, a fictitious release from Saint-Laurent has been simulated at national scale: the results of this simulation agree quite well with those of the trajectory model DIFTRA. (author). 6 figs., 4 tabs
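The Difeul code follows MEDIA's numerical schemes, which are not reproduced here; as a generic illustration of the Eulerian transport-dispersion approach, a minimal explicit step combining upwind advection and central diffusion of a passive tracer on a periodic 1D grid looks like the sketch below (all grid and flow parameters are illustrative).

```python
def advect_diffuse(c, u, D, dx, dt, steps):
    """One-dimensional explicit upwind advection plus central diffusion
    of a passive tracer on a periodic grid.
    Stability requires |u|*dt/dx <= 1 and 2*D*dt/dx**2 <= 1."""
    n = len(c)
    for _ in range(steps):
        new = c[:]
        for i in range(n):
            left, right = c[(i - 1) % n], c[(i + 1) % n]  # periodic neighbours
            # upwind advection: difference taken on the side the flow comes from
            adv = -u * (c[i] - left) / dx if u >= 0 else -u * (right - c[i]) / dx
            dif = D * (right - 2.0 * c[i] + left) / dx ** 2
            new[i] = c[i] + dt * (adv + dif)
        c = new
    return c

# Point release advected around a periodic domain (illustrative parameters)
c0 = [0.0] * 50
c0[10] = 1.0
c1 = advect_diffuse(c0, u=1.0, D=0.01, dx=1.0, dt=0.5, steps=40)
print(round(sum(c1), 6))  # total mass is conserved on the periodic domain
```

Operational codes use higher-order, less diffusive schemes in three dimensions with real wind fields, but the structure, advective transport plus turbulent diffusion applied cell by cell per time step, is the same.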
Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.
2012-01-01
An object-oriented parameter estimation code was developed to incorporate the benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers to entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
UCODE, a computer code for universal inverse modeling
Poeter, Eileen P.; Hill, Mary C.
1999-05-01
This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced. Simulated equivalent values are calculated using values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters also can be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences, and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values. UCODE is intended for use on any computer operating
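The core regression loop can be sketched as follows. This is a bare, undamped Gauss-Newton iteration with forward-difference sensitivities applied to a hypothetical "application model"; UCODE itself adds observation weighting, Marquardt-style damping, convergence diagnostics, and the file-based coupling described above.

```python
import math

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting for the normal equations."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gauss_newton(model, p, observations, tol=1e-10, max_iter=50, h=1e-6):
    """Least-squares fit by plain Gauss-Newton with forward-difference
    sensitivities (no weighting or damping, unlike UCODE proper)."""
    n_obs, n_par = len(observations), len(p)
    for _ in range(max_iter):
        base = model(p)
        r = [obs - sim for obs, sim in zip(observations, base)]
        # forward-difference sensitivity matrix J[i][j] = d sim_i / d p_j
        J = [[0.0] * n_par for _ in range(n_obs)]
        for j in range(n_par):
            pj = p[:]
            pj[j] += h
            pert = model(pj)
            for i in range(n_obs):
                J[i][j] = (pert[i] - base[i]) / h
        # normal equations (J^T J) dp = J^T r
        A = [[sum(J[i][a] * J[i][b] for i in range(n_obs)) for b in range(n_par)]
             for a in range(n_par)]
        g = [sum(J[i][a] * r[i] for i in range(n_obs)) for a in range(n_par)]
        dp = solve(A, g)
        p = [pi + di for pi, di in zip(p, dp)]
        if max(abs(d) for d in dp) < tol:
            break
    return p

# Hypothetical application model: exponential decay y = a * exp(-b * t)
ts = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
def decay(p):
    return [p[0] * math.exp(-p[1] * t) for t in ts]

obs = decay([2.0, 0.5])                 # synthetic, noise-free observations
fit = gauss_newton(decay, [1.5, 0.8], obs)
print([round(v, 4) for v in fit])
```

With noise-free synthetic data the iteration recovers the generating parameters; with real observations the printed regression statistics are what reveal insensitive or non-identifiable parameters.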
Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF
Blyth, Taylor S.
The research described in this PhD thesis contributes to the development of efficient methods for the utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects introduced by the presence of spacer grids in light water reactor (LWR) cores are dissected into four basic building-block processes, and the corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.
Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF
Energy Technology Data Exchange (ETDEWEB)
Blyth, Taylor S. [Pennsylvania State Univ., University Park, PA (United States); Avramova, Maria [North Carolina State Univ., Raleigh, NC (United States)
2017-04-01
The research described in this PhD thesis contributes to the development of efficient methods for the utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects introduced by the presence of spacer grids in light water reactor (LWR) cores are dissected into four basic building-block processes, and the corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.
Stimulus-dependent maximum entropy models of neural population codes.
Directory of Open Access Journals (Sweden)
Einat Granot-Atedgi
Full Text Available Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and, in particular, significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.
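For a small population the SDME distribution can be written down exactly by enumerating all binary codewords. The sketch below uses made-up filters and couplings for three cells; in the paper's 100-cell setting the partition function is intractable and approximate fitting methods are required.

```python
import itertools
import math

def sdme_distribution(stimulus, filters, J):
    """Stimulus-dependent maximum-entropy model over binary words x:
        P(x | s)  ∝  exp( sum_i h_i(s) x_i + sum_{i<j} J_ij x_i x_j ),
    where h_i(s) is cell i's linear drive to the stimulus (the linear
    stage of a linear-nonlinear neuron) and J_ij are pairwise couplings."""
    n = len(filters)
    h = [sum(f * s for f, s in zip(filters[i], stimulus)) for i in range(n)]
    weights = {}
    for x in itertools.product([0, 1], repeat=n):
        E = sum(h[i] * x[i] for i in range(n)) \
            + sum(J[i][j] * x[i] * x[j]
                  for i in range(n) for j in range(i + 1, n))
        weights[x] = math.exp(E)
    Z = sum(weights.values())            # partition function (exact for small n)
    return {x: w / Z for x, w in weights.items()}

# Three hypothetical cells, toy stimulus filters and couplings
filters = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
J = [[0.0, 0.8, -0.4],
     [0.0, 0.0, 0.8],
     [0.0, 0.0, 0.0]]
P = sdme_distribution([1.0, -0.5], filters, J)
print(max(P, key=P.get), round(sum(P.values()), 6))
```

Setting all J_ij to zero recovers the uncoupled (conditionally independent) model, so the improvement the paper reports can be probed directly by comparing the two codeword distributions.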
User's guide for waste tank corrosion data model code
International Nuclear Information System (INIS)
Mackey, D.B.; Divine, J.R.
1986-12-01
Corrosion tests were conducted on A-516 and A-537 carbon steel in simulated Double Shell Slurry, Future PUREX, and Hanford Facilities wastes. The corrosion rate data, gathered between 25 and 180°C, were statistically ''modeled'' for each waste; a fourth model was developed that utilized the combined data. The report briefly describes the modeling procedure and details how to access the information through a computerized data system. Copies of the report and operating information may be obtained from the author (DB Mackey) at 509-376-9844 or FTS 444-9844
The aeroelastic code HawC - model and comparisons
Energy Technology Data Exchange (ETDEWEB)
Thirstrup Petersen, J. [Risoe National Lab., The Test Station for Wind Turbines, Roskilde (Denmark)
1996-09-01
A general aeroelastic finite element model for simulation of the dynamic response of horizontal axis wind turbines is presented. The model has been developed with the aim to establish an effective research tool, which can support the general investigation of wind turbine dynamics and research in specific areas of wind turbine modelling. The model concentrates on the correct representation of the inertia forces in a form, which makes it possible to recognize and isolate effects originating from specific degrees of freedom. The turbine structure is divided into substructures, and nonlinear kinematic terms are retained in the equations of motion. Moderate geometric nonlinearities are allowed for. Gravity and a full wind field including 3-dimensional 3-component turbulence are included in the loading. Simulation results for a typical three bladed, stall regulated wind turbine are presented and compared with measurements. (au)
Uncertainty modelling and code calibration for composite materials
DEFF Research Database (Denmark)
Toft, Henrik Stensgaard; Branner, Kim; Mishnaevsky, Leon, Jr
2013-01-01
Uncertainties related to the material properties of a composite material can be determined from the micro-, meso- or macro-scales. These three starting points for a stochastic modelling of the material properties are investigated. The uncertainties are divided into physical, model, statistical... between risk of failure and cost of the structure. Considerations related to the calibration of partial safety factors for composite materials are described, including the probability of failure, the format for the partial safety factor method, and weight factors for different load cases. In a numerical example, it is demonstrated how probabilistic models for the material properties formulated on the micro-scale can be calibrated using tests on the meso- and macro-scales. The results are compared to probabilistic models estimated directly from tests on the macro-scale. In another example, partial safety factors for application...
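One step of such a calibration can be sketched: for a lognormally distributed material strength, the partial safety factor follows from the spread between the characteristic fractile (e.g. the 5% value) and the FORM design-point fractile Φ(-αβ). The target reliability index β, sensitivity factor α, and COV values below are illustrative defaults, not the paper's calibrated numbers.

```python
import math
from statistics import NormalDist

def material_partial_safety_factor(cov, beta, alpha=0.8, k_fractile=0.05):
    """Partial safety factor gamma_m = x_k / x_d for a lognormal strength X:
    x_k is the characteristic (k_fractile) value and x_d the design value
    at the FORM design point, i.e. the Phi(-alpha*beta) fractile."""
    zeta = math.sqrt(math.log(1.0 + cov ** 2))   # std of ln X
    z_k = NormalDist().inv_cdf(k_fractile)       # about -1.645 for the 5% fractile
    z_d = -alpha * beta                          # design-point fractile
    return math.exp((z_k - z_d) * zeta)

# Illustrative: the required safety factor grows with material scatter
for cov in (0.05, 0.10, 0.15):
    g = material_partial_safety_factor(cov, beta=3.3)
    print(f"COV = {cov:.2f}: gamma_m = {g:.2f}")
```

This is why calibrating the stochastic material model on the micro-, meso- or macro-scale matters: each choice yields a different COV, and hence a different partial safety factor for the same target reliability.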
Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach
International Nuclear Information System (INIS)
2014-12-01
In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise:
· the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population,
· the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included,
· the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps,
· the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details,
· a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and
· an overview of the
An implicit Navier-Stokes code for turbulent flow modeling
Huang, P. G.; Coakley, T. J.
1992-01-01
This paper presents a numerical approach to calculating turbulent flows employing advanced turbulence models. The main features include a line-by-line Gauss-Seidel algorithm using Roe's approximate Riemann solver, TVD numerical schemes, implicit boundary conditions and a decoupled turbulence-model solver. Based on the problems tested so far, the method has consistently demonstrated its ability to offer accuracy, boundedness and a fast rate of convergence to the steady-state solution.
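A line-by-line Gauss-Seidel sweep, one of the ingredients named above, solves each grid line implicitly with a tridiagonal (Thomas) solve while using the latest values from the neighbouring lines. The sketch below applies it to a model Poisson problem on a unit square rather than to the Navier-Stokes system of the paper; grid size and sweep count are illustrative.

```python
def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system:
    a = sub-diagonal, b = diagonal, c = super-diagonal, d = right-hand side."""
    n = len(d)
    b, d = b[:], d[:]
    for i in range(1, n):                      # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

def line_gauss_seidel(f, nx, ny, h, sweeps=300):
    """Line-by-line Gauss-Seidel for -laplace(u) = f on an nx-by-ny grid
    with u = 0 on the boundary: each horizontal grid line is solved
    implicitly, coupling to the latest values of the adjacent lines."""
    u = [[0.0] * nx for _ in range(ny)]
    for _ in range(sweeps):
        for j in range(1, ny - 1):
            n = nx - 2
            a, b, c = [-1.0] * n, [4.0] * n, [-1.0] * n
            d = [h * h * f[j][i] + u[j - 1][i] + u[j + 1][i]
                 for i in range(1, nx - 1)]
            row = thomas(a, b, c, d)
            for i in range(1, nx - 1):
                u[j][i] = row[i - 1]
    return u

f = [[1.0] * 9 for _ in range(9)]              # uniform source term
u = line_gauss_seidel(f, 9, 9, 1.0 / 8.0)
print(f"centre value: {u[4][4]:.4f}")
```

Transporting whole lines of updated values per sweep propagates boundary information much faster than point-wise Gauss-Seidel, which is the reason such solvers pair well with implicit boundary conditions.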
Improved gap conductance model for the TRAC code
International Nuclear Information System (INIS)
Hatch, S.W.; Mandell, D.A.
1980-01-01
The purpose of the present work, as indicated earlier, is to improve on the constant fuel-clad spacing in TRAC-P1A without significantly increasing the computing costs. It is realized that the simple model proposed may not be accurate enough for some cases, but for the initial calculations made, the DELTAR model improves the predictions over the constant Δr results of TRAC-P1A, and the additional computing costs are negligible
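The leading-order physics behind a variable-gap model is simply conduction across the fuel-clad gap, so the conductance responds strongly to the gap width Δr. The sketch below illustrates that sensitivity; the fill-gas conductivity is a rough helium figure, and TRAC's actual gap-conductance model includes additional terms (radiation, contact) beyond this one.

```python
def gap_conductance(k_gas, gap_width):
    """Conduction-only fuel-clad gap conductance, h = k_gas / dr (W/m^2*K).
    This is just the leading term of a full gap-conductance model."""
    return k_gas / gap_width

k_he = 0.30  # W/(m*K): approximate helium conductivity at operating temperature
for gap_um in (30.0, 60.0, 90.0):
    h = gap_conductance(k_he, gap_um * 1e-6)
    print(f"gap {gap_um:4.0f} um -> h = {h:8.0f} W/m2K")
```

Because h scales as 1/Δr, tracking the changing gap (the DELTAR approach) rather than holding Δr constant can shift the predicted fuel-to-coolant heat transfer by large factors at essentially no computational cost.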
Code modernization and modularization of APEX and SWAT watershed simulation models
SWAT (Soil and Water Assessment Tool) and APEX (Agricultural Policy / Environmental eXtender) are respectively large and small watershed simulation models derived from EPIC (Environmental Policy Integrated Climate), a field-scale agroecology simulation model. All three models are coded in FORTRAN an...
PWR hot leg natural circulation modeling with MELCOR code
Energy Technology Data Exchange (ETDEWEB)
Park, Jae Hong; Lee, Jong In [Korea Institute of Nuclear Safety, Taejon (Korea, Republic of)
1997-12-31
Previous MELCOR and SCDAP/RELAP5 nodalizations for simulating the counter-current natural circulation of vapor flow within the RCS hot legs and SG U-tubes as core damage progresses cannot be applied to the steady-state, water-filled conditions of the initial period of accident progression, because of the artificially high loss coefficients in the hot legs and SG U-tubes, which were chosen from the results of COMMIX calculations and the Westinghouse natural circulation experiments in a 1/7-scale facility simulating steam natural circulation behavior in the vessel. This work therefore presents a hot leg circulation modeling that can be used both for the liquid flow condition at steady state and for the vapor flow condition in the later period of in-vessel core damage. For this, the drag forces resulting from the momentum exchange between the two vapor streams in the hot leg were modeled as a pressure drop using the pump model. This hot leg natural circulation modeling in MELCOR was able to reproduce mass flow rates similar to those predicted by previous models. 6 refs., 2 figs. (Author)
The improvement of the heat transfer model for sodium-water reaction jet code
International Nuclear Information System (INIS)
Hashiguchi, Yoshirou; Yamamoto, Hajime; Kamoshida, Norio; Murata, Shuuichi
2001-02-01
For confirming a reasonable DBL (Design Base Leak) for the steam generator (SG), it is necessary to evaluate the phenomena of sodium-water reaction (SWR) in an actual steam generator realistically. The heat transfer model of the sodium-water reaction (SWR) jet code (LEAP-JET ver.1.40) was improved, and application analyses of the water injection tests were performed to confirm the validity of the code. In the improvement of the code, a heat transfer model between the fluid inside a tube and the tube wall was introduced in place of the prior model, which was a heat capacity model lumping together the heat capacity of the tube wall and the inside fluid. The new model allows the fluid inside the heat exchange tube to be treated as either water or sodium, and typical heat transfer correlations used in SG design were also introduced. Further work was carried out to improve the stability of the calculation over long calculation times. Test calculations using the improved code (LEAP-JET ver.1.50) were carried out under the conditions of the SWAT-IR Run-HT-2 test. It was confirmed that the computed SWR jet behavior and the influence of the heat transfer model on the results were reasonable. The user's manual for the improved code (LEAP-JET ver.1.50) was also revised, with an additional I/O manual and explanations of the heat transfer model and the new variable names. (author)
Recommendations for computer modeling codes to support the UMTRA groundwater restoration project
International Nuclear Information System (INIS)
Tucker, M.D.; Khan, M.A.
1996-04-01
The Uranium Mill Tailings Remediation Action (UMTRA) Project is responsible for the assessment and remedial action at the 24 former uranium mill tailings sites located in the US. The surface restoration phase, which includes containment and stabilization of the abandoned uranium mill tailings piles, has a specific termination date and is nearing completion. Therefore, attention has now turned to the groundwater restoration phase, which began in 1991. Regulated constituents in groundwater whose concentrations or activities exceed maximum contaminant levels (MCLs) or background levels at one or more sites include, but are not limited to, uranium, selenium, arsenic, molybdenum, nitrate, gross alpha, radium-226 and radium-228. The purpose of this report is to recommend computer codes that can be used to assist the UMTRA groundwater restoration effort. The report includes a survey of applicable codes in each of the following areas: (1) groundwater flow and contaminant transport modeling codes, (2) hydrogeochemical modeling codes, (3) pump and treat optimization codes, and (4) decision support tools. Following the survey of the applicable codes, specific codes that can best meet the needs of the UMTRA groundwater restoration program in each of the four areas are recommended
Recommendations for computer modeling codes to support the UMTRA groundwater restoration project
Energy Technology Data Exchange (ETDEWEB)
Tucker, M.D. [Sandia National Labs., Albuquerque, NM (United States); Khan, M.A. [IT Corp., Albuquerque, NM (United States)
1996-04-01
The Uranium Mill Tailings Remediation Action (UMTRA) Project is responsible for the assessment and remedial action at the 24 former uranium mill tailings sites located in the US. The surface restoration phase, which includes containment and stabilization of the abandoned uranium mill tailings piles, has a specific termination date and is nearing completion. Therefore, attention has now turned to the groundwater restoration phase, which began in 1991. Regulated constituents in groundwater whose concentrations or activities exceed maximum contaminant levels (MCLs) or background levels at one or more sites include, but are not limited to, uranium, selenium, arsenic, molybdenum, nitrate, gross alpha, radium-226 and radium-228. The purpose of this report is to recommend computer codes that can be used to assist the UMTRA groundwater restoration effort. The report includes a survey of applicable codes in each of the following areas: (1) groundwater flow and contaminant transport modeling codes, (2) hydrogeochemical modeling codes, (3) pump and treat optimization codes, and (4) decision support tools. Following the survey of the applicable codes, specific codes that can best meet the needs of the UMTRA groundwater restoration program in each of the four areas are recommended.
International Nuclear Information System (INIS)
Yokoyama, Kenji; Uto, Nariaki; Kasahara, Naoto; Ishikawa, Makoto
2003-04-01
In fast reactor development, numerical simulation using analysis codes plays an important role in complementing theory and experiment. The engineering models and analysis methods must be easy to change flexibly, because the phenomena to be investigated become more complicated as research needs diversify, and combining physical properties and engineering models from many different fields poses large problems. Aiming at the realization of a next-generation code system that can solve those problems, the authors adopted three approaches to build a prototype: (1) multi-language programming (SoftWIRE.NET, Visual Basic.NET and Fortran), (2) Fortran 90, and (3) Python. As a result, the following was confirmed. (1) It is possible to reuse a function of the existing codes written in Fortran as an object of the next-generation code system by using Visual Basic.NET. (2) The maintainability of existing code written in Fortran 77 can be improved by using the new features of Fortran 90. (3) A toolbox-type code system can be built by using Python. (author)
Collective flow measurements with HADES in Au+Au collisions at 1.23A GeV
Kardan, Behruz; Hades Collaboration
2017-11-01
HADES has a large acceptance combined with a good mass-resolution and therefore allows the study of dielectron and hadron production in heavy-ion collisions with unprecedented precision. With the statistics of seven billion Au-Au collisions at 1.23A GeV recorded in 2012, the investigation of higher-order flow harmonics is possible. At the BEVALAC and SIS18 directed and elliptic flow has been measured for pions, charged kaons, protons, neutrons and fragments, but higher-order harmonics have not yet been studied. They provide additional important information on the properties of the dense hadronic medium produced in heavy-ion collisions. We present here a high-statistics, multidifferential measurement of v1 and v2 for protons in Au+Au collisions at 1.23A GeV.
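As background to the flow observables quoted above: the harmonics vn are the Fourier coefficients of the particle azimuthal distribution relative to the reaction plane Ψ, dN/dφ ∝ 1 + 2·v1·cos(φ−Ψ) + 2·v2·cos(2(φ−Ψ)) + .... The sketch below estimates v2 as the event average ⟨cos(2(φ−Ψ))⟩ on a toy sample with a built-in v2 = 0.1; the sample is purely illustrative, not HADES data.

```python
# Estimate a flow harmonic vn = <cos(n*(phi - Psi))> from particle azimuths.
# Toy data only: a sample is drawn with v2 = 0.1 by accept-reject sampling.
import math
import random

def vn(phis, psi, n):
    """Average of cos(n*(phi - psi)) over all particle azimuths."""
    return sum(math.cos(n * (phi - psi)) for phi in phis) / len(phis)

random.seed(1)
phis = []
while len(phis) < 200000:
    phi = random.uniform(-math.pi, math.pi)
    # Weight proportional to 1 + 2*v2*cos(2*phi), with v2 = 0.1 and Psi = 0;
    # the maximum weight is 1.2, so divide by 1.2 for accept-reject.
    if random.random() < (1 + 2 * 0.1 * math.cos(2 * phi)) / 1.2:
        phis.append(phi)

print(round(vn(phis, 0.0, 2), 2))  # close to the built-in value 0.1
print(round(vn(phis, 0.0, 1), 2))  # close to 0: no directed flow was put in
```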
Numerical and modeling techniques used in the EPIC code
International Nuclear Information System (INIS)
Pizzica, P.A.; Abramson, P.B.
1977-01-01
EPIC models fuel and coolant motion which result from internal fuel pin pressure (from fission gas or fuel vapor) and/or from the generation of sodium vapor pressures in the coolant channel subsequent to pin failure in an LMFBR. The modeling includes the ejection of molten fuel from the pin into a coolant channel with any amount of voiding through a clad rip which may be of any length or which may expand with time. One-dimensional Eulerian hydrodynamics is used to model both the motion of fuel and fission gas inside a molten fuel cavity and the mixture of two-phase sodium and fission gas in the channel. Motion of molten fuel particles in the coolant channel is tracked with a particle-in-cell technique
Energy Technology Data Exchange (ETDEWEB)
Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng
2011-03-01
This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are
Defect Detection: Combining Bounded Model Checking and Code Contracts
Directory of Open Access Journals (Sweden)
Marat Akhin
2013-01-01
Bounded model checking (BMC) of C/C++ programs is a matter of scientific enquiry that has attracted great attention in the last few years. In this paper, we present our approach to this problem. It is based on combining several recent results in BMC, namely, the use of LLVM as a baseline for model generation, the employment of the high-performance Z3 SMT solver to do the formula heavy-lifting, and the use of various function summaries to improve analysis efficiency and expressive power. We have implemented a basic prototype; experimental results on a set of simple BMC test problems are satisfactory.
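For readers unfamiliar with the technique: BMC asks whether a safety property can be violated within some bounded number of transition steps. Tools like the one described encode that question as an SMT formula and hand it to a solver such as Z3; the self-contained sketch below substitutes explicit state enumeration for the SMT encoding to show the idea, on an invented toy transition system.

```python
# Conceptual illustration of bounded model checking: explore every state
# reachable within k transition steps and check a safety property on each.
# Real BMC tools instead encode the unrolled transition relation as an SMT
# formula and let a solver (e.g. Z3) search for a violating execution.

def bmc(initial, transitions, safe, k):
    """Return a counterexample trace violating `safe` within k steps, else None."""
    frontier = [(s, [s]) for s in initial]
    for _ in range(k + 1):
        next_frontier = []
        for state, trace in frontier:
            if not safe(state):
                return trace                 # property violated: report the trace
            for nxt in transitions(state):
                next_frontier.append((nxt, trace + [nxt]))
        frontier = next_frontier
    return None                              # no violation within the bound

# Toy system: a counter that may step by 1 or by 3; safety: never reach 5.
trace = bmc(initial=[0],
            transitions=lambda x: [x + 1, x + 3] if x < 10 else [],
            safe=lambda x: x != 5,
            k=6)
print(trace)  # a shortest-depth path from 0 to the bad state 5
```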
A Secure Network Coding-based Data Gathering Model and Its Protocol in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Qian Xiao
2012-09-01
To provide security for data gathering based on network coding in wireless sensor networks (WSNs), a secure network coding-based data gathering model is proposed, and a data-privacy preserving and pollution preventing (DPP&PP) protocol using network coding is designed. DPP&PP makes use of a newly proposed pollution symbol selection and pollution (PSSP) scheme, based on a new obfuscation idea, to pollute existing symbols. Analyses of DPP&PP show that it not only requires low computation and communication overhead, but also provides high security against brute-force attacks.
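The network-coding idea the protocol builds on can be shown in a few lines: relay nodes forward linear combinations of packets (over GF(2), a combination is just an XOR), and a sink holding enough independent combinations recovers the originals. The DPP&PP obfuscation and pollution-detection layers are not modeled in this sketch.

```python
# Minimal linear network coding over GF(2): a relay forwards an XOR
# combination of two packets; the sink decodes by XORing again.

def xor(a, b):
    """Bytewise XOR of two equal-length packets (addition over GF(2))."""
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"sensor-A", b"sensor-B"

# The relay emits one plain packet and one coded packet.
coded = xor(p1, p2)

# The sink receives (p1, coded): two independent combinations of two packets,
# so it can recover p2 by XORing the coded packet with p1.
recovered = xor(p1, coded)
print(recovered)  # b'sensor-B'
```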
Comparison for the interfacial and wall friction models in thermal-hydraulic system analysis codes
International Nuclear Information System (INIS)
Hwang, Moon Kyu; Park, Jee Won; Chung, Bub Dong; Kim, Soo Hyung; Kim, See Dal
2007-07-01
The average equations employed in the current thermal-hydraulic analysis codes need to be closed with appropriate models and correlations to specify the interphase phenomena along with fluid/structure interactions. This includes both thermal and mechanical interactions. Among the closure laws, the interfacial and wall frictions, which are included in the momentum equations, not only affect pressure drops along the fluid flow, but also have great effects on the numerical stability of the codes. In this study, the interfacial and wall frictions are reviewed for the commonly applied thermal-hydraulic system analysis codes, i.e. RELAP5-3D, MARS-3D, TRAC-M, and CATHARE
ABAREX -- A neutron spherical optical-statistical-model code -- A user's manual
Energy Technology Data Exchange (ETDEWEB)
Smith, A.B. [ed.]; Lawson, R.D.
1998-06-01
The contemporary version of the neutron spherical optical-statistical-model code ABAREX is summarized with the objective of providing detailed operational guidance for the user. The physical concepts involved are very briefly outlined. The code is described in some detail and a number of explicit examples are given. With this document one should very quickly become fluent with the use of ABAREX. While the code has operated on a number of computing systems, this version is specifically tailored for the VAX/VMS work station and/or the IBM-compatible personal computer.
Nuclear model codes and related software distributed by the OECD/NEA Data Bank
International Nuclear Information System (INIS)
Sartori, E.
1993-01-01
Software and data for nuclear energy applications is acquired, tested and distributed by several information centres; in particular, relevant computer codes are distributed internationally by the OECD/NEA Data Bank (France) and by ESTSC and EPIC/RSIC (United States). This activity is coordinated among the centres and is extended outside the OECD area through an arrangement with the IAEA. This article covers more specifically the availability of nuclear model codes and also those codes which further process their results into data sets needed for specific nuclear application projects. (author). 2 figs
Energy Technology Data Exchange (ETDEWEB)
Veshchunov, M.S.; Kisselev, A.E.; Palagin, A.V. [Nuclear Safety Institute, Moscow (Russian Federation)] [and others]
1995-09-01
The code package SVECHA for the modeling of in-vessel core degradation (CD) phenomena in severe accidents is being developed in the Nuclear Safety Institute, Russian Academy of Science (NSI RAS). The code package presents a detailed mechanistic description of the phenomenology of severe accidents in a reactor core. The modules of the package were developed and validated on separate effect test data. These modules were then successfully implemented in the ICARE2 code and validated against a wide range of integral tests. Validation results have shown good agreement with separate effect tests data and with the integral tests CORA-W1/W2, CORA-13, PHEBUS-B9+.
International Nuclear Information System (INIS)
King, C.M.; Wilhite, E.L.; Root, R.W. Jr.
1985-01-01
The Savannah River Laboratory DOSTOMAN code has been used since 1978 for environmental pathway analysis of potential migration of radionuclides and hazardous chemicals. The DOSTOMAN work is reviewed including a summary of historical use of compartmental models, the mathematical basis for the DOSTOMAN code, examples of exact analytical solutions for simple matrices, methods for numerical solution of complex matrices, and mathematical validation/calibration of the SRL code. The review includes the methodology for application to nuclear and hazardous chemical waste disposal, examples of use of the model in contaminant transport and pathway analysis, a user's guide for computer implementation, peer review of the code, and use of DOSTOMAN at other Department of Energy sites. 22 refs., 3 figs
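As an illustration of the compartmental approach reviewed here: for simple transfer matrices, the system dC/dt = AC has an exact analytical solution. The hypothetical two-compartment chain below (rate constants invented, not taken from the SRL work) is the classic case where the exact solution can be checked by hand.

```python
# A two-compartment transfer chain of the kind compartmental pathway codes
# model: compartment 1 decays into compartment 2, which loses material to a
# sink. Exact analytical solution of dC/dt = A C for this simple matrix.
import math

k12, k_out = 0.3, 0.1   # illustrative rate constants [1/yr]: 1 -> 2, 2 -> sink
c1_0 = 100.0            # initial inventory in compartment 1

def inventories(t):
    """Exact inventories (c1, c2) at time t for c2(0) = 0."""
    c1 = c1_0 * math.exp(-k12 * t)
    # Solution of dc2/dt = k12*c1 - k_out*c2 (the Bateman-type result):
    c2 = c1_0 * k12 / (k12 - k_out) * (math.exp(-k_out * t) - math.exp(-k12 * t))
    return c1, c2

c1, c2 = inventories(10.0)
print(round(c1, 2), round(c2, 2))
```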
Energy Technology Data Exchange (ETDEWEB)
Carbajo, Juan (Oak Ridge National Laboratory, Oak Ridge, TN); Jeong, Hae-Yong (Korea Atomic Energy Research Institute, Daejeon, Korea); Wigeland, Roald (Idaho National Laboratory, Idaho Falls, ID); Corradini, Michael (University of Wisconsin, Madison, WI); Schmidt, Rodney Cannon; Thomas, Justin (Argonne National Laboratory, Argonne, IL); Wei, Tom (Argonne National Laboratory, Argonne, IL); Sofu, Tanju (Argonne National Laboratory, Argonne, IL); Ludewig, Hans (Brookhaven National Laboratory, Upton, NY); Tobita, Yoshiharu (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Ohshima, Hiroyuki (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Serre, Frederic (Centre d'études nucléaires de Cadarache - CEA, France)
2011-06-01
This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and to identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, KAERI, JAEA, and CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions is drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the
Assessing the LWR codes capability to address SFR BDBAs: Modeling of the ABCOVE tests
International Nuclear Information System (INIS)
Garcia, M.; Herranz, L. E.
2012-01-01
The present paper is aimed at assessing the current capability of LWR codes to model aerosol transport within an SFR containment under BDBA conditions. Through a systematic application of the ASTEC and MELCOR codes to relevant ABCOVE tests, insights have been gained into the drawbacks and capabilities of these computational tools. Hypotheses and approximations have been adopted so that differences in boundary conditions between LWR and SFR containments under BDBA can be accommodated to some extent.
Radiation transport phenomena and modeling. Part A: Codes; Part B: Applications with examples
Energy Technology Data Exchange (ETDEWEB)
Lorence, L.J. Jr.; Beutler, D.E. [Sandia National Labs., Albuquerque, NM (United States). Simulation Technology Research Dept.
1997-09-01
This report contains the notes from the second session of the 1997 IEEE Nuclear and Space Radiation Effects Conference Short Course on Applying Computer Simulation Tools to Radiation Effects Problems. Part A discusses the physical phenomena modeled in radiation transport codes and various types of algorithmic implementations. Part B gives examples of how these codes can be used to design experiments whose results can be easily analyzed and describes how to calculate quantities of interest for electronic devices.
Domain-specific modeling enabling full code generation
Kelly, Steven
2007-01-01
Domain-Specific Modeling (DSM) is the latest approach to software development, promising to greatly increase the speed and ease of software creation. Early adopters of DSM have been enjoying productivity increases of 500–1000% in production for over a decade. This book introduces DSM and offers examples from various fields to illustrate to experienced developers how DSM can improve software development in their teams. Two authorities in the field explain what DSM is, why it works, and how to successfully create and use a DSM solution to improve productivity and quality. Divided into four parts, the book covers: background and motivation; fundamentals; in-depth examples; and creating DSM solutions. There is an emphasis throughout the book on practical guidelines for implementing DSM, including how to identify the necessary language constructs, how to generate full code from models, and how to provide tool support for a new DSM language. The example cases described in the book are available on the book's Website, www.dsmbook....
Intercomparison of radiation codes for Mars Models: SW and LW
Savijarvi, H. I.; Crisp, D.; Harri, A.-M.
2002-09-01
We have enlarged our radiation scheme intercomparison for Mars models into the SW region. A reference mean case is introduced, with a T(z) profile based on Mariner 9 IRIS observations at 35 fixed-altitude points, for a 95.3 per cent CO2 atmosphere plus optional trace gases and well-mixed dust at visible optical depths of 0, 0.3, 0.6, 1.0 and 5.0. A Spectrum Resolving (line-by-line) multiple-scattering multi-stream Model (SRM, by Crisp) is used as the first-principles reference calculation. The University of Helsinki (UH) old and new (improved) Mars model schemes are also included. The intercomparisons have pointed out the importance of dust and water vapour in the LW, while the effects of differences in CO2 spectral line data were minimal but nonzero. In the shortwave, the results show that the CO2 absorption of solar radiation by the line-by-line scheme is relatively intense, especially at low solar elevation angles. This is attributed to the (often neglected) very weak lines and bands in the near-infrared. The other trace gases are not important, but dust, of course, scatters and absorbs strongly in the shortwave. The old, very simple UH SW scheme was surprisingly good at low dust concentrations compared to SRM. It was, however, considerably improved for both low and high dust amounts by using the SRM results as a benchmark. Other groups are welcome to join.
Pre-engineering Spaceflight Validation of Environmental Models and the 2005 HZETRN Simulation Code
Nealy, John E.; Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.; Dachev, Ts. P.; Tomov, B. T.; Walker, Steven A.; DeAngelis, Giovanni; Blattnig, Steve R.; Atwell, William
2006-01-01
The HZETRN code has been identified by NASA for engineering design in the next phase of space exploration, highlighting a return to the Moon in preparation for a Mars mission. In response, a new series of algorithms, beginning with 2005 HZETRN, will be issued, correcting some prior limitations and improving control of propagated errors, along with established code verification processes. Code validation processes will use new/improved low Earth orbit (LEO) environmental models with a recently improved International Space Station (ISS) shield model to validate computational models and procedures using measured data aboard the ISS. These validated models will provide a basis for flight-testing the designs of future space vehicles and systems of the Constellation program in the LEO environment.
The Nuremberg Code subverts human health and safety by requiring animal modeling
Directory of Open Access Journals (Sweden)
Greek Ray
2012-07-01
Background: The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion: We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. Summary: We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented.
Modeling of BWR core meltdown accidents - for application in the MELRPI.MOD2 computer code
Energy Technology Data Exchange (ETDEWEB)
Koh, B.R.; Kim, S.H.; Taleyarkhan, R.P.; Podowski, M.Z.; Lahey, R.T. Jr.
1985-04-01
This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.
Modeling of BWR core meltdown accidents - for application in the MELRPI.MOD2 computer code
International Nuclear Information System (INIS)
Koh, B.R.; Kim, S.H.; Taleyarkhan, R.P.; Podowski, M.Z.; Lahey, R.T. Jr.
1985-04-01
This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing
The Nuremberg Code subverts human health and safety by requiring animal modeling.
Greek, Ray; Pippus, Annalea; Hansen, Lawrence A
2012-07-08
The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented.
Modelling Chemical Equilibrium Partitioning with the GEMS-PSI Code
Energy Technology Data Exchange (ETDEWEB)
Kulik, D.; Berner, U.; Curti, E
2004-03-01
Sorption, co-precipitation and re-crystallisation are important retention processes for dissolved contaminants (radionuclides) migrating through the sub-surface. The retention of elements is usually measured by empirical partition coefficients (Kd), which vary in response to many factors: temperature, solid/liquid ratio, total contaminant loading, water composition, host-mineral composition, etc. The Kd values can be predicted for in-situ conditions from thermodynamic modelling of solid solution, aqueous solution or sorption equilibria, provided that stoichiometry, thermodynamic stability and mixing properties of the pure components are known (Example 1). Unknown thermodynamic properties can be retrieved from experimental Kd values using inverse modelling techniques (Example 2). An efficient, advanced tool for performing both tasks is the Gibbs Energy Minimization (GEM) approach, implemented in the user-friendly GEM-Selector (GEMS) program package, which includes the Nagra-PSI chemical thermodynamic database. The package is being further developed at PSI and used extensively in studies relating to nuclear waste disposal. (author)
Modelling Chemical Equilibrium Partitioning with the GEMS-PSI Code
International Nuclear Information System (INIS)
Kulik, D.; Berner, U.; Curti, E.
2004-01-01
Sorption, co-precipitation and re-crystallisation are important retention processes for dissolved contaminants (radionuclides) migrating through the sub-surface. The retention of elements is usually measured by empirical partition coefficients (Kd), which vary in response to many factors: temperature, solid/liquid ratio, total contaminant loading, water composition, host-mineral composition, etc. The Kd values can be predicted for in-situ conditions from thermodynamic modelling of solid solution, aqueous solution or sorption equilibria, provided that stoichiometry, thermodynamic stability and mixing properties of the pure components are known (Example 1). Unknown thermodynamic properties can be retrieved from experimental Kd values using inverse modelling techniques (Example 2). An efficient, advanced tool for performing both tasks is the Gibbs Energy Minimization (GEM) approach, implemented in the user-friendly GEM-Selector (GEMS) program package, which includes the Nagra-PSI chemical thermodynamic database. The package is being further developed at PSI and used extensively in studies relating to nuclear waste disposal. (author)
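The role of the Kd in such modelling can be made concrete with a small batch mass balance: given an equilibrium Kd (which GEMS would predict by minimizing the Gibbs energy of the full chemical system rather than assume), the dissolved and sorbed concentrations follow directly. All numbers below are illustrative, not from the PSI work.

```python
# Batch partitioning with an empirical partition coefficient Kd [L/kg]:
# total contaminant M splits between solution (C, mol/L) and solid
# (S = Kd*C, mol/kg), so M = C*V + Kd*C*m.

M_total = 1.0e-6   # mol of contaminant in the batch (illustrative)
V = 1.0            # L of solution
m = 0.01           # kg of solid
Kd = 500.0         # L/kg, assumed equilibrium partition coefficient

dissolved = M_total / (V + Kd * m)   # mol/L, solving the mass balance for C
sorbed = Kd * dissolved              # mol/kg on the solid
print(dissolved, sorbed)
```

Note how strongly the solid/liquid ratio m/V matters: here only 1% solid by mass still holds five sixths of the contaminant, which is why the abstract stresses that Kd varies with in-situ conditions.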
An improved UO2 thermal conductivity model in the ELESTRES computer code
International Nuclear Information System (INIS)
Chassie, G.G.; Tochaie, M.; Xu, Z.
2010-01-01
This paper describes the improved UO2 thermal conductivity model for use in the ELESTRES (ELEment Simulation and sTRESses) computer code. The ELESTRES computer code models the thermal, mechanical and microstructural behaviour of a CANDU® fuel element under normal operating conditions. The main purpose of the code is to calculate fuel temperatures, fission gas release, internal gas pressure, fuel pellet deformation, and fuel sheath strains for fuel element design and assessment. It is also used to provide initial conditions for evaluating fuel behaviour during high temperature transients. The thermal conductivity of UO2 fuel is one of the key parameters that affect ELESTRES calculations. The existing ELESTRES thermal conductivity model has been assessed and improved based on a large amount of thermal conductivity data from measurements of irradiated and un-irradiated UO2 fuel with different densities. The UO2 thermal conductivity data cover 90% to 99% theoretical density of UO2, temperatures up to 3027 K, and burnup up to 1224 MW·h/kg U. The improved thermal conductivity model, which is recommended for a full implementation in the ELESTRES computer code, has reduced the ELESTRES code prediction biases of temperature, fission gas release, and fuel sheath strains when compared with the available experimental data. This improved thermal conductivity model has also been checked with a test version of ELESTRES over the full ranges of fuel temperature, fuel burnup, and fuel density expected in CANDU fuel. (author)
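The ELESTRES correlation itself is not given in the abstract. As a hedged stand-in, the sketch below evaluates the open-literature Fink (2000) correlation for 95% dense unirradiated UO2, which has the lattice-plus-electronic structure typical of such models; the improved ELESTRES model additionally corrects for density and burnup effects that are not captured here.

```python
# Fink (2000) thermal conductivity correlation for 95% TD unirradiated UO2,
# shown as an illustration of the general form of UO2 conductivity models:
# a phonon (lattice) term that falls with temperature plus a small-polaron
# (electronic) term that grows at high temperature.
import math

def uo2_conductivity(T_kelvin):
    """Thermal conductivity of 95% TD UO2 in W/(m*K)."""
    t = T_kelvin / 1000.0
    lattice = 100.0 / (7.5408 + 17.692 * t + 3.6142 * t * t)   # phonon term
    electronic = 6400.0 / t**2.5 * math.exp(-16.35 / t)        # polaron term
    return lattice + electronic

for T in (500.0, 1000.0, 2000.0):
    print(T, round(uo2_conductivity(T), 2))  # conductivity falls, then flattens
```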
Energy Technology Data Exchange (ETDEWEB)
Kanaki, K.
2007-03-15
The HADES spectrometer is a high resolution detector installed at the SIS/GSI, Darmstadt. It was primarily designed for studying dielectron decay channels of vector mesons. However, its high accuracy capabilities make it an attractive tool for investigating other rare probes at these beam energies, like strange baryons. Multiwire drift chambers providing high spatial resolution were developed and investigated. One of the early experimental runs of HADES was analyzed and the Λ hyperon signal was successfully reconstructed for the first time in C+C collisions at 2 AGeV beam kinetic energy. The total Λ production cross section is contrasted with expectations from simulations and compared with measurements of the Λ yield in heavier systems at the same energy. In addition, the result is considered in the context of strangeness balance and the relative strangeness content of the reaction products is determined. (orig.)
Study of Λ hyperon production in C+C collisions at 2 AGeV beam energy with the HADES spectrometer
International Nuclear Information System (INIS)
Kanaki, K.
2007-03-01
The HADES spectrometer is a high resolution detector installed at the SIS/GSI, Darmstadt. It was primarily designed for studying dielectron decay channels of vector mesons. However, its high accuracy capabilities make it an attractive tool for investigating other rare probes at these beam energies, like strange baryons. Development and investigation of Multiwire Drift Chambers for high spatial resolution have been provided. One of the early experimental runs of HADES was analyzed and the Λ hyperon signal was successfully reconstructed for the first time in C+C collisions at 2 AGeV beam kinetic energy. The total Λ production cross section is contrasted with expectations from simulations and compared with measurements of the Λ yield in heavier systems at the same energy. In addition, the result is considered in the context of strangeness balance and the relative strangeness content of the reaction products is determined. (orig.)
Quantitative analysis of crossflow model of the COBRA-IV.1 code
International Nuclear Information System (INIS)
Lira, C.A.B.O.
1983-01-01
Based on experimental data from a rod bundle test section, the crossflow model of the COBRA-IV.1 code was quantitatively analysed. The analysis showed that it is possible to establish some operational conditions in which the results of the theoretical model are acceptable. (author) [pt
A model for bootstrap current calculations with bounce averaged Fokker-Planck codes
Westerhof, E.; Peeters, A.G.
1996-01-01
A model is presented that allows the calculation of the neoclassical bootstrap current originating from the radial electron density and pressure gradients in standard (2+1)D bounce averaged Fokker-Planck codes. The model leads to an electron momentum source located almost exclusively at the
OWL: A code for the two-center shell model with spherical Woods-Saxon potentials
Diaz-Torres, Alexis
2018-03-01
A Fortran-90 code for solving the two-center nuclear shell model problem is presented. The model is based on two spherical Woods-Saxon potentials and the potential separable expansion method. It describes the single-particle motion in low-energy nuclear collisions, and is useful for characterizing a broad range of phenomena from fusion to nuclear molecular structures.
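The spherical Woods-Saxon form factor at the heart of the model is simple to write down. The sketch below evaluates V(r) = V0 / (1 + exp((r − R)/a)) with R = r0·A^(1/3), using typical textbook parameter values that are assumptions here, not necessarily those used in OWL.

```python
# Spherical Woods-Saxon mean-field potential: flat and deep inside the
# nucleus, falling smoothly to zero over a surface diffuseness a.
import math

def woods_saxon(r_fm, A, V0=-51.0, r0=1.27, a=0.67):
    """V(r) in MeV at radius r_fm [fm] for a nucleus of mass number A."""
    R = r0 * A ** (1.0 / 3.0)                 # half-density radius [fm]
    return V0 / (1.0 + math.exp((r_fm - R) / a))

# Near-full depth at the centre of a 40Ca-like nucleus, ~zero well outside it:
print(round(woods_saxon(0.0, 40), 1), round(woods_saxon(8.0, 40), 1))
```

By construction the potential is exactly half its depth at r = R, which is a convenient sanity check on any implementation.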
Pitch control of wind turbines using model-free adaptive control based on wind turbine code
DEFF Research Database (Denmark)
Zhang, Yunqian; Chen, Zhe; Cheng, Ming
2011-01-01
value is only based on I/O data of the wind turbine is identified and then the wind turbine system is replaced by a dynamic linear time-varying model. In order to verify the correctness and robustness of the proposed model-free adaptive pitch controller, the wind turbine code FAST, which can predict...
Ciotti, M.; Nijhuis, Arend; Ribani, P.L.; Savoldi Richard, L.; Zanino, R.
2006-01-01
The new THELMA code, including a thermal-hydraulic (TH) and an electro-magnetic (EM) model of a cable-in-conduit conductor (CICC), has been developed. The TH model is at this stage relatively conventional, with two fluid components (He flowing in the annular cable region and He flowing in the
The modelling of wall condensation with noncondensable gases for the containment codes
Energy Technology Data Exchange (ETDEWEB)
Leduc, C.; Coste, P.; Barthel, V.; Deslandes, H. [Commissariat a l'Energie Atomique, Grenoble (France)
1995-09-01
This paper presents several approaches to the modelling of wall condensation in the presence of noncondensable gases for containment codes. Lumped-parameter modelling and local modelling by 3-D codes are discussed. Containment analysis codes should be able to predict the spatial distributions of steam, air, and hydrogen as well as the efficiency of cooling by wall condensation in both natural-convection and forced-convection situations. 3-D calculations with turbulent diffusion modelling are necessary, since diffusion controls the local condensation whereas wall condensation may redistribute the air and hydrogen mass in the containment. A fine-mesh model of film condensation in forced convection has been developed, taking into account the influence of the suction velocity at the liquid-gas interface. It is associated with the 3-D model of the TRIO code for the gas mixture, where a k-ξ turbulence model is used. The predictions are compared to Huhtiniemi's experimental data. The modelling of condensation in natural or mixed convection is more complex. As no universal velocity and temperature profiles exist for such boundary layers, a very fine nodalization is necessary. Simpler models integrate the equations over the boundary layer thickness, using the heat and mass transfer analogy. The model predictions are compared with an MIT experiment. For the containment compartments, a two-node model is proposed using the lumped-parameter approach. Heat and mass transfer coefficients are tested on separate-effect tests and containment experiments. The CATHARE code has been adapted to perform such calculations and shows reasonable agreement with the data.
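The heat and mass transfer analogy mentioned above can be sketched in a few lines. The fragment below is a minimal lumped-parameter illustration in a Chilton-Colburn form: a convective heat transfer coefficient is converted into a mass transfer coefficient that drives the steam mass-fraction difference between bulk and interface. All numerical values (gas density, heat capacity, Lewis number, latent heat) are illustrative assumptions, not the paper's correlations.

```python
def condensation_flux(h_conv, t_gas, t_wall, w_steam_bulk, w_steam_interface,
                      rho=1.2, cp=1050.0, le=0.85):
    """Wall condensation estimate from the heat/mass transfer analogy
    (Chilton-Colburn form). h_conv in W/m^2/K, temperatures in K, w_* are
    steam mass fractions. Returns (condensate mass flux kg/m^2/s,
    total wall heat flux W/m^2). Illustrative property values."""
    h_m = h_conv / (rho * cp * le ** (2.0 / 3.0))          # mass transfer coeff., m/s
    flux = h_m * rho * (w_steam_bulk - w_steam_interface)  # condensate flux
    q_latent = flux * 2.26e6                               # latent heat release
    q_sensible = h_conv * (t_gas - t_wall)                 # convective part
    return flux, q_latent + q_sensible

flux, q_wall = condensation_flux(10.0, 320.0, 300.0, 0.3, 0.1)
print(flux, q_wall)
```

The same structure underlies two-node compartment models: one such balance per wall surface, coupled to the bulk gas inventory.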
Slag transport models for vertical and horizontal surfaces. [SLGTR code
Energy Technology Data Exchange (ETDEWEB)
Chow, L S.H.; Johnson, T R
1978-01-01
In a coal-fired MHD system, all downstream component surfaces that are exposed to combustion gases will be covered by a solid, liquid, or solid-liquid film of slag, seed, or a mixture of the two, the specific nature of the film depending on the physical properties of the slag and seed and on local conditions. An analysis was made of a partly-liquid slag film flowing on a cooled vertical or horizontal wall of a large duct, through which passed slag-laden combustion gases. The model is applicable to the high-temperature steam generators in the downstream system of an MHD power plant and was used in calculations for a radiant-boiler concept similar to that in the 1000-MWe Gilbert-STD Baseline Plant study and also for units large enough for 230 and 8 lb/s (104.3 and 3.5 kg/s) of combustion gas. The qualitative trends of the results are similar for both vertical and horizontal surfaces. The results show the effects of the slag film, slag properties, and gas emissivity on the heat flux to the steam tubes. The slag film does not reduce the rate of heat transfer in proportion to its surface temperature, because most of the heat is radiated from the gas and particles suspended in it to the slag surface.
International Nuclear Information System (INIS)
Cheng, Z.D.; He, Y.L.; Cui, F.Q.
2013-01-01
Highlights: ► A general-purpose method or design/simulation tool needs to be developed for CSCs. ► A new modelling method and homemade unified code with MCRT are presented. ► The photo-thermal conversion processes in three typical CSCs were analyzed. ► The results show that the proposed method and model are feasible and reliable. -- Abstract: The main objective of the present work is to develop a general-purpose numerical method for improving design/simulation tools for the concentrating solar collectors (CSCs) of concentrated solar power (CSP) systems. A new modelling method and a homemade unified code based on the Monte Carlo Ray-Trace (MCRT) method for CSCs are presented first. The details of this design method and of the unified code for numerical investigation of the solar concentrating and collecting characteristics of CSCs are introduced. Three coordinate systems are used in the MCRT program and can be totally independent of each other. Solar radiation in participating and/or non-participating media can be taken into account simultaneously or separately in the simulation. Criteria for data processing and method/code checking are also proposed in detail. Finally, the proposed method and code are applied to simulate and analyze the complex photo-thermal conversion processes in three typical CSCs. The results show that the proposed method and model are reliable for simulating various types of CSCs.
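To give a flavour of the MCRT idea described above, here is a deliberately minimal Monte Carlo sketch: it estimates the fraction of reflected rays intercepted by a flat receiver when each reflected ray direction is perturbed by a Gaussian optical error. The geometry, the error magnitude, and the single lumped error source are illustrative assumptions, far simpler than the three-coordinate-system code the paper describes.

```python
import numpy as np

rng = np.random.default_rng(0)

def intercept_factor(n_rays=100_000, slope_error=2e-3, focal_len=10.0, half_width=0.05):
    """Monte Carlo estimate of the fraction of reflected rays hitting a flat
    receiver of given half-width at the focal plane, assuming Gaussian mirror
    slope errors (a reflection doubles the slope error in the ray direction).
    An illustrative stand-in for a full MCRT concentrator model."""
    theta = rng.normal(0.0, 2 * slope_error, n_rays)  # reflected-ray angular error, rad
    x = focal_len * np.tan(theta)                     # lateral miss at the focal plane, m
    return np.mean(np.abs(x) <= half_width)

print(intercept_factor())
```

A production MCRT code adds sunshape sampling, true surface geometry, and per-surface coordinate transforms, but the hit-counting estimator is the same.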
The implementation of a toroidal limiter model into the gyrokinetic code ELMFIRE
Energy Technology Data Exchange (ETDEWEB)
Leerink, S.; Janhunen, S.J.; Kiviniemi, T.P.; Nora, M. [Euratom-Tekes Association, Helsinki University of Technology (Finland); Heikkinen, J.A. [Euratom-Tekes Association, VTT, P.O. Box 1000, FI-02044 VTT (Finland); Ogando, F. [Universidad Nacional de Educacion a Distancia, Madrid (Spain)
2008-03-15
The ELMFIRE full nonlinear gyrokinetic simulation code has been developed for calculations of plasma evolution and turbulence dynamics in tokamak geometry. The code is applicable to calculations of strong perturbations in the particle distribution function, rapid transients, and steep gradients in the plasma. Benchmarking against experimental reflectometry data from the FT2 tokamak is discussed, and a model for comparing and studying the poloidal velocity is presented in this paper. To make the ELMFIRE code suitable for scrape-off layer simulations, a simplified toroidal limiter model has been implemented. The model is discussed and first results are presented. (copyright 2008 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
Inclusion of models to describe severe accident conditions in the fuel simulation code DIONISIO
Energy Technology Data Exchange (ETDEWEB)
Lemes, Martín; Soba, Alejandro [Sección Códigos y Modelos, Gerencia Ciclo del Combustible Nuclear, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina); Daverio, Hernando [Gerencia Reactores y Centrales Nucleares, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina); Denis, Alicia [Sección Códigos y Modelos, Gerencia Ciclo del Combustible Nuclear, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina)
2017-04-15
The simulation of fuel rod behavior is a complex task that demands not only accurate models of the numerous phenomena occurring in the pellet, cladding, and internal rod atmosphere but also an adequate interconnection between them. In recent years several models have been incorporated into the DIONISIO code with the purpose of increasing its precision and reliability. After the regrettable events at Fukushima, the need for codes capable of simulating nuclear fuels under accident conditions has come to the fore. Heat removal occurs in a quite different way than during normal operation, and this fact determines a completely new set of conditions for the fuel materials. A detailed description of the different regimes the coolant may exhibit in such a wide variety of scenarios requires a thermal-hydraulic formulation that is not suitable for inclusion in a fuel performance code; moreover, a number of reliable, well-established codes already perform this task. Nevertheless, keeping in mind the purpose of building a code focused on fuel behavior, a subroutine was developed for the DIONISIO code that performs a simplified analysis of the coolant in a PWR, restricted to the more representative situations, and provides the fuel simulation with the boundary conditions necessary to reproduce accident situations. In the present work this subroutine is described, and the results of comparisons with experimental data and with thermal-hydraulic codes are presented. It is verified that, in spite of its comparative simplicity, the predictions of this module of DIONISIO do not differ significantly from those of the specific, complex codes.
International Nuclear Information System (INIS)
Hiergesell, R.; Taylor, G.
2010-01-01
An investigation was conducted to compare and evaluate the contaminant transport results of two model codes, GoldSim and Porflow, using a simple 1-D string of elements in each code. The model domains were constructed to be identical with respect to cell numbers and dimensions, matrix material, flow boundary, and saturation conditions. One of the codes, GoldSim, does not simulate advective movement of water; therefore the water flux term was specified as a boundary condition. In the other code, Porflow, a steady-state flow field was computed and contaminant transport was simulated within that flow field. The comparisons were made solely in terms of the ability of each code to perform contaminant transport. The purpose of the investigation was to establish a basis for, and to validate, follow-on work in which a 1-D GoldSim model was developed by abstracting information from Porflow 2-D and 3-D unsaturated- and saturated-zone models and then benchmarked to produce equivalent contaminant transport results. A handful of contaminants was selected for the code-to-code comparison simulations, including a non-sorbing tracer and several long- and short-lived radionuclides ranging from non-sorbing to strongly sorbing with respect to the matrix material, several of which required simulating the in-growth of daughter radionuclides. The same diffusion and partitioning coefficients for each contaminant and the same half-lives for each radionuclide were incorporated into each model. A string of 10 elements, having identical spatial dimensions and properties, was constructed within each code. GoldSim's basic contaminant transport elements, mixing cells, were utilized in this construction. Sand was established as the matrix material and was assigned identical properties (e.g. bulk density, porosity, saturated hydraulic conductivity) in both codes. Boundary conditions applied included an influx of water at the rate of 40 cm/yr at one
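The mixing-cell construction described above is easy to reproduce in miniature. The sketch below is a hedged illustration of a 10-cell chain: a prescribed water flux pushes contaminant from cell to cell while first-order decay acts in every cell. Only the 40 cm/yr influx comes from the abstract; the column length, porosity, half-life, and time step are illustrative assumptions, not the benchmark's values.

```python
import numpy as np

def mixing_cell_transport(n_cells=10, flux_cm_yr=40.0, length_cm=100.0,
                          porosity=0.35, half_life_yr=30.0, years=0.5, dt=0.01):
    """Explicit 1-D mixing-cell chain: each cell drains into the next at a rate
    set by its advective residence time; radioactive decay removes mass in
    every cell. Returns the cell concentrations after `years` of simulation."""
    v = flux_cm_yr / porosity              # pore-water velocity, cm/yr
    tau = (length_cm / n_cells) / v        # residence time per cell, yr
    lam = np.log(2.0) / half_life_yr       # decay constant, 1/yr
    c = np.zeros(n_cells)
    c[0] = 1.0                             # unit pulse injected in the first cell
    for _ in range(int(years / dt)):
        out = c / tau                      # advective outflow of each cell
        inflow = np.concatenate(([0.0], out[:-1]))
        c += dt * (inflow - out - lam * c)
    return c

c = mixing_cell_transport()
print(c)
```

After half a year the pulse has migrated about halfway down the chain, with total mass below one because of outflow and decay; this is the behavior a code-to-code comparison would check cell by cell.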
Aquelarre. A computer code for fast neutron cross sections from the statistical model
International Nuclear Information System (INIS)
Guasp, J.
1974-01-01
A FORTRAN V computer code for UNIVAC 1108/6 using the partial statistical (or compound nucleus) model is described. The code calculates fast neutron cross sections for the (n, n'), (n, p), (n, d) and (n, α) reactions and the angular distributions and Legendre moments for the (n, n) and (n, n') processes in heavy and intermediate spherical nuclei. A local Optical Model with spin-orbit interaction for each level is employed, allowing for the width fluctuation and Moldauer corrections, as well as the inclusion of discrete and continuous levels. (Author) 67 refs
Self-shielding models of MICROX-2 code: Review and updates
International Nuclear Information System (INIS)
Hou, J.; Choi, H.; Ivanov, K.N.
2014-01-01
Highlights: • The MICROX-2 code has been improved to expand its application to advanced reactors. • New fine-group cross section libraries based on ENDF/B-VII have been generated. • Resonance self-shielding and spatial self-shielding models have been improved. • The improvements were assessed by a series of benchmark calculations against MCNPX. - Abstract: MICROX-2 is a transport theory code that solves the neutron slowing-down and thermalization equations of a two-region lattice cell. The MICROX-2 code has been updated to expand its application to advanced reactor concepts and fuel cycle simulations, including the generation of new fine-group cross section libraries based on ENDF/B-VII. In continuation of previous work, the MICROX-2 methods are reviewed and updated in this study, focusing on the resonance self-shielding and spatial self-shielding models for neutron spectrum calculations. The improvement of the self-shielding method was assessed by a series of benchmark calculations against the Monte Carlo code MCNPX, using homogeneous and heterogeneous pin cell models. The results show that the implementation of the updated self-shielding models is correct and that the accuracy of the physics calculation is improved. Compared to the existing models, the updates reduced the prediction error of the infinite multiplication factor by ∼0.1% and ∼0.2% for the homogeneous and heterogeneous pin cell models, respectively.
Sensitivity Analysis of the TRIGA IPR-R1 Reactor Models Using the MCNP Code
Directory of Open Access Journals (Sweden)
C. A. M. Silva
2014-01-01
In the process of verification and validation of code modelling, sensitivity analysis, including systematic variations in code input variables, must be used to help identify the parameters relevant for a given type of analysis. The aim of this work is to determine how much the code results are affected by two different aspects of the TRIGA IPR-R1 reactor modelling process performed with the MCNP (Monte Carlo N-Particle Transport) code. The sensitivity analyses included small differences in the core and rod dimensions and different levels of model detail. Four models were simulated, and neutronic parameters such as the effective multiplication factor (keff), reactivity (ρ), and the thermal and total neutron flux in the central thimble under several reactor operating conditions were analysed. The simulated models showed good agreement among themselves, as well as with available experimental data. The sensitivity analyses thus demonstrated that simulations of the TRIGA IPR-R1 reactor can be performed with any of the four investigated MCNP models to obtain the referenced neutronic parameters.
Development of the Monju core safety analysis numerical models by super-COPD code
International Nuclear Information System (INIS)
Yamada, Fumiaki; Minami, Masaki
2010-12-01
Japan Atomic Energy Agency constructed a computational model for safety analysis of the Monju reactor core, built into the modularized plant dynamics analysis code Super-COPD, for the purpose of evaluating heat removal capability for the 21 transients defined in the annex to the construction permit application. The applicability of this model to core heat removal evaluation was assessed by back-to-back comparisons of the constituent models with conventionally applied codes and by application of the unified model. The numerical model for core safety analysis was built on the best-estimate model validated against plant behavior measured up to 40% rated power, taking over the safety analysis models of the conventionally applied COPD and HARHO-IN codes, and is capable of overall calculations of the entire plant including the safety protection and control systems. Among the constituents of the analytical model, the neutronic-thermal model and the heat transfer and hydraulic models of the PHTS, SHTS, and water/steam system were individually verified by comparison with the conventional calculations. Comparisons were also made with plant behavior measured up to 40% rated power to confirm the adequacy and conservativeness of the input data. The unified analytical model was applied to analyses of eight anomaly events in total: reactivity insertion, abnormal power distribution, and decrease and increase of coolant flow rate in the PHTS, SHTS, and water/steam systems. The resulting maximum values and temporal variations of the key safety evaluation parameters (temperatures of fuel, cladding, in-core sodium coolant, and RV inlet and outlet coolant) show negligible discrepancies against the existing analysis results in the annex to the construction permit application, verifying the unified analytical model. These works have enabled analytical evaluation of Monju core heat removal capability by Super-COPD utilizing the
Off-take Model of the SPACE Code and Its Validation
International Nuclear Information System (INIS)
Oh, Myung Taek; Park, Chan Eok; Sohn, Jong Joo
2011-01-01
Liquid entrainment and vapor pull-through models for a horizontal pipe have been implemented in the SPACE code. The model accounts for phase separation phenomena and computes the flux of mass and energy through an off-take attached to a horizontal pipe when stratified conditions occur in the pipe. This model is referred to as the off-take model. The importance of predicting the fluid conditions through an off-take in a small-break LOCA is well known: the occurrence of stratification can affect the break node void fraction and thus the break flow discharged from the primary system. In order to validate the off-take model newly developed for the SPACE code, a simulation of the HDU experiments has been performed. The main features of the off-take model and its application results are presented in this paper.
A Perceptual Model for Sinusoidal Audio Coding Based on Spectral Integration
Directory of Open Access Journals (Sweden)
Jensen Søren Holdt
2005-01-01
Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio, and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of audio signals. In this paper, we present a new perceptual model that predicts masked thresholds for sinusoidal distortions. The model relies on signal detection theory and incorporates more recent insights about spectral and temporal integration in auditory masking. As a consequence, the model is able to predict the distortion detectability. In fact, the distortion detectability defines a (perceptually relevant) norm on the underlying signal space which is beneficial for optimisation algorithms such as rate-distortion optimisation or linear predictive coding. We evaluate the merits of the model by combining it with a sinusoidal extraction method and compare the results with those obtained with the ISO MPEG-1 Layer I-II recommended model. Listening tests show a clear preference for the new model. More specifically, the model presented here leads to a reduction of more than 20% in the number of sinusoids needed to represent signals at a given quality level.
Energy Technology Data Exchange (ETDEWEB)
Kroeger, P.G.; Kennett, R.J.; Colman, J.; Ginsberg, T. (Brookhaven National Lab., Upton, NY (United States))
1991-10-01
This report documents the THATCH code, which can be used to model general thermal and flow networks of solids and coolant channels in two-dimensional r-z geometries. The main application of THATCH is to model reactor thermo-hydraulic transients in High-Temperature Gas-Cooled Reactors (HTGRs). The available modules simulate pressurized or depressurized core heatup transients and heat transfer to general exterior sinks or to specific passive Reactor Cavity Cooling Systems, which can be air- or water-cooled. Graphite oxidation during air or water ingress can be modelled, including the effects of combustion products added to the gas flow and the additional chemical energy release. A point kinetics model is available for analyzing reactivity excursions, for instance due to water ingress, and also for hypothetical no-scram scenarios. For most HTGR transients, which generally range over hours, a user-selected nodalization of the core in r-z geometry is used. However, a separate model of heat transfer in the symmetry element of each fuel element is also available for very rapid transients; this model can be coupled to the traditional coarser r-z nodalization. This report describes the mathematical models used in the code and the method of solution. It describes the code and its various sub-elements. Details of the input data and file usage, with file formats, are given for the code, as well as for several preprocessing and postprocessing options. The THATCH model of the currently applicable 350 MW(th) reactor is described. Input data for four sample cases are given, with output available in fiche form. Installation requirements and code limitations, as well as the most common error indications, are listed. 31 refs., 23 figs., 32 tabs.
Energy Technology Data Exchange (ETDEWEB)
Burke, G.J.
1988-04-08
Computer modeling of antennas, since its start in the late 1960s, has become a powerful and widely used tool for antenna design. Computer codes have been developed based on the Method of Moments, the Geometrical Theory of Diffraction, or direct integration of Maxwell's equations. Of such tools, the Numerical Electromagnetics Code-Method of Moments (NEC) has become one of the most widely used codes for modeling resonant-sized antennas. There are several reasons for this, including the systematic updating and extension of its capabilities, extensive user-oriented documentation, and the accessibility of its developers for user assistance. The result is that there are estimated to be several hundred users of various versions of NEC worldwide. 23 refs., 10 figs.
International Nuclear Information System (INIS)
Luneville, L.; Chiron, M.; Toubon, H.; Dogny, S.; Huver, M.; Berger, L.
2001-01-01
Over the last three years, the French Atomic Energy Commission (CEA), COGEMA, and Eurisys Mesures have jointly developed a complete modelization tool covering the widest possible range of realistic cases: the Pascalys modelization software. The main purpose of the modelization is to calculate the global measurement efficiency, i.e. the most accurate relationship between the photons emitted by a nuclear source (volume, point, or deposited) and the high-purity germanium detector that detects and analyzes the received photons. It has long been recognized that experimental determination of the global measurement efficiency is becoming more and more difficult, especially for complex scenes such as those encountered in decommissioning and dismantling, or for high activities, where high-activity reference sources are difficult to use for both health physics and regulatory reasons. The choice of the calculation code is fundamental if an accurate modelization is sought. MCNP is the reference code, but it is computationally time-consuming and therefore not practicable on-line in the field. A direct line-of-sight point-kernel code such as the CEA 3-D analysis code Mercure can offer a practical compromise between the most accurate MCNP reference code and the performance needed for realistic modelization. This paper presents a comparison between the results of Pascalys-Mercure and the MCNP code, taking into account the latest improvements of Mercure in the low-energy range where the largest errors can occur, with the Mercure code supported on-line by the recent Pascalys 3-D scene modelization software. The influence of the intrinsic efficiency of the germanium detector on the total measurement efficiency is also addressed. (authors)
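The direct line-of-sight point-kernel method mentioned above reduces to a short sum. The sketch below discretizes a volume source into point kernels, each contributing S·exp(-μt)/(4πr²) at the detector, with t the shield thickness crossed along the ray. The attenuation coefficient, density, geometry, and the omission of build-up factors are all illustrative simplifications, not Mercure's data or models.

```python
import numpy as np

# Hypothetical shielding data: mass attenuation coefficient (cm^2/g) at the
# photon energy of interest, and material density (g/cm^3) - illustrative only.
MU_RHO = 0.06
RHO = 7.8  # e.g. steel

def point_kernel_flux(source_points, activities, detector, thickness_fn):
    """Direct line-of-sight point-kernel estimate: each source point contributes
    S * exp(-mu * t) / (4 pi r^2), where t is the shield path length along the
    source-detector ray (build-up factors omitted for brevity)."""
    mu = MU_RHO * RHO  # linear attenuation coefficient, 1/cm
    flux = 0.0
    for p, s in zip(source_points, activities):
        r = np.linalg.norm(np.asarray(detector) - np.asarray(p))
        t = thickness_fn(p, detector)       # shield thickness crossed, cm
        flux += s * np.exp(-mu * t) / (4.0 * np.pi * r**2)
    return flux

# A line of 11 unit-strength kernels viewed through a 2 cm steel plate
pts = [(x, 0.0, 0.0) for x in np.linspace(-5, 5, 11)]
f = point_kernel_flux(pts, [1.0] * 11, (0.0, 100.0, 0.0), lambda p, d: 2.0)
print(f)
```

Summing kernels like this is why point-kernel codes run orders of magnitude faster than Monte Carlo transport, at the cost of approximating scattering through build-up factors.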
Implementation of a structural dependent model for the superalloy IN738LC in ABAQUS-code
International Nuclear Information System (INIS)
Wolters, J.; Betten, J.; Penkalla, H.J.
1994-05-01
Superalloys, mainly consisting of nickel, are used in aerospace applications as well as in stationary gas turbines. In the temperature range above 800 C, blades manufactured from these superalloys are subjected to high centrifugal forces and thermally induced loads. Computer-based analysis of the thermo-mechanical behaviour of the blades requires models of the stress-strain behaviour. These models have to give a reliable description of the stress-strain behaviour, with emphasis on inelastic effects. Implementing such a model in a finite element code requires numerical treatment of the constitutive equations with respect to the interface provided by the code. In this paper, constitutive equations for the superalloy IN738LC are presented, and their implementation in the finite element code ABAQUS, including the numerical preparation of the model, is described. To validate the model, calculations were performed for simple uniaxial loading conditions as well as for a complete cross section of a turbine blade under combined thermal and mechanical loading. The results were compared with those of additional ABAQUS calculations using Norton's law, which was already implemented in this code. (orig.)
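Norton's law, the reference model used for comparison above, is the simplest creep constitutive equation: the secondary creep strain rate is a power law in stress. The sketch below integrates it per time increment, mirroring how a user-defined constitutive routine accumulates inelastic strain step by step. The coefficient A and exponent n are illustrative placeholders; real IN738LC constants are temperature dependent and are not given in the abstract.

```python
def norton_creep_rate(stress_mpa, A=1e-20, n=7.0):
    """Norton power law for secondary creep: strain rate = A * sigma^n
    (1/hour for sigma in MPa). A and n are illustrative values only."""
    return A * stress_mpa ** n

def creep_strain(stress_mpa, hours, dt=1.0):
    """Creep strain accumulated under constant stress by explicit time stepping,
    the way a constitutive subroutine integrates the law per increment."""
    strain, t = 0.0, 0.0
    while t < hours:
        strain += norton_creep_rate(stress_mpa) * dt
        t += dt
    return strain

print(creep_strain(300.0, 100.0))
```

Under constant stress the increments simply sum to rate x time; the incremental form matters once stress varies during the step, which is the situation a blade cross-section analysis produces.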
van Walraven, Carl
2017-04-01
Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status, and it is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results in which renal failure status was determined using surrogate measures, including: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and of its association with covariates was minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can thus be minimized by using bootstrap methods to impute condition status from multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
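The contrast the study draws between categorizing probabilities and bootstrap imputation can be demonstrated on simulated data. In the sketch below, each patient's model-derived probability is either thresholded at a cutoff (which distorts prevalence) or used to draw Bernoulli imputations of status averaged over bootstrap replicates (which recovers prevalence when the probabilities are calibrated). The cohort size matches the abstract, but the probability distribution, cutoff, and replicate count are illustrative assumptions, not the paper's data or procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

def prevalence_estimates(p, n_boot=200, threshold=0.5):
    """Two surrogate prevalence estimates from model-based probabilities p:
    (a) categorizing each patient at a cutoff, and (b) averaging Bernoulli
    imputations of disease status over bootstrap replicates."""
    categorized = float(np.mean(p >= threshold))
    imputed = float(np.mean([rng.binomial(1, p).mean() for _ in range(n_boot)]))
    return categorized, imputed

# Simulated cohort: by construction the true prevalence equals mean(p)
p = rng.beta(1.0, 6.0, size=50_074)   # skewed probabilities, mean ~0.14
cat, imp = prevalence_estimates(p)
print(cat, imp, p.mean())
```

With a skewed probability distribution, thresholding collapses most moderate-risk patients to "no disease" and badly underestimates prevalence, while the imputation average tracks the true value; this mirrors the bias pattern the study reports.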
Seepage and Piping through Levees and Dikes using 2D and 3D Modeling Codes
2016-06-01
Flood & Coastal Storm Damage Reduction Program, ERDC/CHL TR-16-6, June 2016. The purpose of this Technical Report is to evaluate the benefits of three-dimensional (3D) modeling of common seepage and piping issues along embankments over traditional two-dimensional (2D) models. To facilitate the evaluation, one 3D model, two 2D cross-sectional models, and one 2D plan-view model were
On models of the genetic code generated by binary dichotomic algorithms.
Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz
2015-02-01
In this paper we introduce the concept of a BDA-generated model of the genetic code, which is based on binary dichotomic algorithms (BDAs). A BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes, ranging from 2 to 64. We analyzed whether there are models that map the codons to their amino acids; a perfect matching is not possible, but we present models that describe the standard genetic code with only a few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
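The core property of a dichotomic question, that it splits the 64 codons into two classes of 32, can be checked in a few lines. The sketch below implements one illustrative binary question ("does the base at a given position belong to a given pair of bases?"), which is only a toy stand-in for the paper's BDAs (the tool Beady-A chains such questions and supports more general question forms).

```python
from itertools import product

CODONS = ["".join(c) for c in product("ACGU", repeat=3)]  # all 64 RNA codons

def bda(question_pos, base_pair):
    """An illustrative binary dichotomic question: does the base at position
    question_pos belong to base_pair? Any such question splits the 64 codons
    into two disjoint classes of 32 codons each."""
    def classify(codon):
        return codon[question_pos] in base_pair
    return classify

classify = bda(0, {"C", "G"})        # e.g. "is the first base strong (C/G)?"
class_1 = [c for c in CODONS if classify(c)]
class_0 = [c for c in CODONS if not classify(c)]
print(len(class_1), len(class_0))    # 32 32
```

Applying several such classifiers in sequence refines the partition, which is how the paper's models reach anywhere from 2 to 64 classes.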
International Nuclear Information System (INIS)
Yoon, C.; Rhee, B. W.; Chung, B. D.; Ahn, S. H.; Kim, M. W.
2009-01-01
Korea currently operates four CANDU-6 reactors at Wolsong. However, a safety assessment system for CANDU reactors has not been fully established in Korea due to a lack of independent domestic technology. Although the CATHENA code was introduced from AECL, it is undesirable to use a vendor's code for regulatory auditing analyses. In Korea, the MARS code has been developed over decades and is being considered by the Korea Institute of Nuclear Safety (KINS) as a thermal-hydraulic regulatory auditing tool for nuclear power plants. Before this decision, KINS had developed the RELAP5/MOD3/CANDU code for CANDU safety analyses by modifying the models of the existing PWR auditing tool, RELAP5/MOD3. The main purpose of this study is to transplant the CANDU models of the RELAP5/MOD3/CANDU code into the MARS code, including quality assurance of the developed models.
Solar optical codes evaluation for modeling and analyzing complex solar receiver geometries
Yellowhair, Julius; Ortega, Jesus D.; Christian, Joshua M.; Ho, Clifford K.
2014-09-01
Solar optical modeling tools are valuable for modeling and predicting the performance of solar technology systems. Four optical modeling tools were evaluated using the National Solar Thermal Test Facility heliostat field combined with a flat plate receiver geometry as a benchmark. The four optical modeling tools evaluated were DELSOL, HELIOS, SolTrace, and Tonatiuh; all are available for free from their respective developers. DELSOL and HELIOS both use a convolution of the sunshape and optical errors for rapid calculation of the incident irradiance profiles on the receiver surfaces. SolTrace and Tonatiuh use ray-tracing methods to intersect the reflected solar rays with the receiver surfaces and construct irradiance profiles. We found the ray-tracing tools, although slower in computation speed, to be more flexible for modeling complex receiver geometries, whereas DELSOL and HELIOS were limited to standard receiver geometries such as flat plate, cylinder, and cavity receivers. We also list the strengths and deficiencies of the tools to show tool preference depending on the modeling and design needs. We provide an example of using SolTrace for modeling nonconventional receiver geometries. The goal is to transfer the irradiance profiles on the receiver surfaces calculated in an optical code to a computational fluid dynamics (CFD) code such as ANSYS Fluent. This approach eliminates the need for discrete ordinates or discrete transfer radiation models, which are computationally intensive, within the CFD code. The irradiance profiles on the receiver surfaces then allow for thermal and fluid analysis of the receiver.
A study on the dependency between turbulent models and mesh configurations of CFD codes
International Nuclear Information System (INIS)
Bang, Jungjin; Heo, Yujin; Jerng, Dong-Wook
2015-01-01
This paper focuses on the analysis of the behavior of hydrogen mixing and hydrogen stratification, using the GOTHIC code and a CFD code. Specifically, we examined the mesh sensitivity and how the turbulence model affects hydrogen stratification or hydrogen mixing, depending on the mesh configuration. In this work, sensitivity analyses for the meshes and the turbulence models were conducted for mixing and stratification phenomena. During severe accidents in a nuclear power plant, hydrogen may be generated, and this will complicate the atmospheric condition of the containment by causing stratification of air, steam, and hydrogen. This could significantly impact containment integrity analyses, as hydrogen could accumulate in a local region; hence the importance of research on the stratification of gases in the containment. Two computational fluid dynamics codes, GOTHIC and STAR-CCM+, were adopted, and the computational results were benchmarked against experimental data from the PANDA facility. The main findings of the present work can be summarized as follows: 1) For the GOTHIC code, the aspect ratio of the mesh was found to be more important than the mesh size. Also, if the number of mesh cells is over 3,000, the effects of the turbulence models were marginal. 2) For STAR-CCM+, the tendency is quite different from the GOTHIC code. That is, the effects of the turbulence models were small for coarser meshes; however, as the number of cells increases, the effects of the turbulence models become significant. Another observation is that away from the injection orifice, the role of the turbulence models tended to be important due to the nature of the mixing process and the induced jet stream.
A study on the dependency between turbulent models and mesh configurations of CFD codes
Energy Technology Data Exchange (ETDEWEB)
Bang, Jungjin; Heo, Yujin; Jerng, Dong-Wook [CAU, Seoul (Korea, Republic of)
2015-10-15
This paper focuses on the analysis of the behavior of hydrogen mixing and hydrogen stratification, using the GOTHIC code and a CFD code. Specifically, we examined the mesh sensitivity and how the turbulence model affects hydrogen stratification or hydrogen mixing, depending on the mesh configuration. In this work, sensitivity analyses for the meshes and the turbulence models were conducted for mixing and stratification phenomena. During severe accidents in a nuclear power plant, hydrogen may be generated, and this will complicate the atmospheric condition of the containment by causing stratification of air, steam, and hydrogen. This could significantly impact containment integrity analyses, as hydrogen could accumulate in a local region; hence the importance of research on the stratification of gases in the containment. Two computational fluid dynamics codes, GOTHIC and STAR-CCM+, were adopted, and the computational results were benchmarked against experimental data from the PANDA facility. The main findings of the present work can be summarized as follows: 1) For the GOTHIC code, the aspect ratio of the mesh was found to be more important than the mesh size. Also, if the number of mesh cells is over 3,000, the effects of the turbulence models were marginal. 2) For STAR-CCM+, the tendency is quite different from the GOTHIC code. That is, the effects of the turbulence models were small for coarser meshes; however, as the number of cells increases, the effects of the turbulence models become significant. Another observation is that away from the injection orifice, the role of the turbulence models tended to be important due to the nature of the mixing process and the induced jet stream.
INDOSE V2.1.1, Internal Dosimetry Code Using Biokinetics Models
International Nuclear Information System (INIS)
Silverman, Ido
2002-01-01
A - Description of program or function: InDose is an internal dosimetry code developed to enable dose estimations using the new biokinetic models (presented in ICRP-56 to ICRP-71) as well as the old ones. The code is written in FORTRAN90 and uses the ICRP-66 respiratory tract model and the ICRP-30 gastrointestinal tract model, as well as the new and old biokinetic models. The code has been written in such a way that the user is able to change any of the parameters of any one of the models without recompiling the code. All the parameters are given in well-annotated parameter files that the user may change and that the code reads during invocation. By default, these files contain the values listed in ICRP publications. The full InDose code is planned to have three parts: 1) the main part, which includes the uptake and systemic models and is used to calculate the activities in the body tissues and excretion as a function of time for a given intake; 2) an optimization module for automatic estimation of the intake for a specific exposure case; 3) a module to calculate the dose due to the estimated intake. Currently, the code is able to perform only its main task (part 1), while the other two have to be done externally using other tools. In the future we would like to add these modules in order to provide a complete solution for the people in the laboratory. The code has been tested extensively to verify the accuracy of its results. The verification procedure was divided into three parts: 1) verification of the implementation of each model, 2) verification of the integrity of the whole code, and 3) a usability test. The first two parts consisted of comparing results obtained with InDose to published results for the same cases, for example ICRP-78 monitoring data. The last part consisted of participating in the 3rd EIE-IDA and assessing some of the scenarios provided in this exercise. These tests were presented in a few publications. It has been found that there is very good agreement.
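The core of part 1, computing compartment activities over time, reduces to solving a linear system of first-order transfer equations. A minimal sketch follows, assuming a hypothetical three-compartment model; the compartment names and transfer coefficients are illustrative placeholders, not ICRP values.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-compartment systemic model (blood, thyroid, excreta).
# Transfer coefficients in 1/day; values are illustrative, not ICRP's.
lam_decay = np.log(2) / 8.02       # e.g. an 8.02-day half-life, like I-131
k_blood_to_thyroid = 1.9
k_blood_to_excreta = 4.5
k_thyroid_to_blood = np.log(2) / 80.0
# A[i, j] = transfer rate from compartment j into i; diagonals = total loss.
A = np.array([
    [-(k_blood_to_thyroid + k_blood_to_excreta), k_thyroid_to_blood, 0.0],
    [k_blood_to_thyroid, -k_thyroid_to_blood, 0.0],
    [k_blood_to_excreta, 0.0, 0.0],
]) - lam_decay * np.eye(3)

def activities(q0, t_days):
    """Activity in each compartment at time t for initial content q0,
    via the matrix exponential of the transfer matrix."""
    return expm(A * t_days) @ q0

q0 = np.array([1.0, 0.0, 0.0])     # unit intake into blood at t = 0
q1 = activities(q0, 1.0)           # compartment activities after one day
```

Because the transfer part of `A` has zero column sums, total activity decays exactly as exp(-lam_decay * t), which is a convenient sanity check on any such implementation.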
Reliability in the performance-based concept of fib Model Code 2010
Bigaj-van Vliet, A.; Vrouwenvelder, T.
2013-01-01
The design philosophy of the new fib Model Code for Concrete Structures 2010 represents the state of the art with regard to performance-based approach to the design and assessment of concrete structures. Given the random nature of quantities determining structural behaviour, the assessment of
Turbomachinery Heat Transfer and Loss Modeling for 3D Navier-Stokes Codes
DeWitt, Kenneth; Ameri, Ali
2005-01-01
This report's contents focus on making use of NASA Glenn on-site computational facilities to develop, validate, and apply models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes, to enhance the capability to compute heat transfer and losses in turbomachinery.
ALICE-87 (Livermore). Precompound Nuclear Model Code. Version for Personal Computer IBM/AT
International Nuclear Information System (INIS)
Blann, M.
1988-05-01
The precompound nuclear model code ALICE-87 from the Lawrence Livermore National Laboratory (USA) was implemented for use on a personal computer. It is available on a set of high-density diskettes from the Data Bank of the Nuclear Energy Agency (Saclay) and the IAEA Nuclear Data Section. (author). Refs and figs
Wang, Yanqing; Li, Hang; Feng, Yuqiang; Jiang, Yu; Liu, Ying
2012-01-01
The traditional assessment approach, in which one single written examination counts toward a student's total score, no longer meets new demands of programming language education. Based on a peer code review process model, we developed an online assessment system called "EduPCR" and used a novel approach to assess the learning of computer…
Energy Technology Data Exchange (ETDEWEB)
Blann, M.
1976-01-01
The most recent edition of an evaporation code originally written earlier, with frequent updating and improvement. This version replaces the previously described ALICE version. A brief summary is given of the types of calculations which can be done. A listing of the code and the results of several sample calculations are presented. (JFP)
Reduced-order LPV model of flexible wind turbines from high fidelity aeroelastic codes
DEFF Research Database (Denmark)
Adegas, Fabiano Daher; Sønderby, Ivan Bergquist; Hansen, Morten Hartvig
2013-01-01
of high-order linear time invariant (LTI) models. Firstly, the high-order LTI models are locally approximated using modal and balanced truncation and residualization. Then, an appropriate coordinate transformation is applied to allow interpolation of the model matrices between points on the parameter space. The obtained LPV model is of suitable size for designing modern gain-scheduling controllers based on recently developed LPV control design techniques. Results are thoroughly assessed on a set of industrial wind turbine models generated by the recently developed aeroelastic code HAWCStab2.
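The balanced-truncation step of the local approximation can be sketched with standard dense linear algebra. This is the generic square-root balanced-truncation algorithm on a made-up toy system, not the paper's exact procedure (which also uses modal truncation and residualization).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system (A, B, C):
    solve the Gramian Lyapunov equations, SVD the cross product of their
    Cholesky factors, and keep the r largest Hankel singular values."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                      # s = Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    Tl = S @ U[:, :r].T @ Lo.T                     # left projector
    Tr = Lc @ Vt[:r, :].T @ S                      # right projector
    return Tl @ A @ Tr, Tl @ B, C @ Tr, s

# Toy 4th-order stable system reduced to order 2.
A = np.diag([-1.0, -2.0, -3.0, -4.0])
B = np.ones((4, 1)); C = np.ones((1, 4))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, 2)
```

The classical error bound guarantees the reduced model's frequency response deviates by at most twice the sum of the discarded Hankel singular values, which makes the truncation order a principled choice.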
Non-volatile fission product core release model evaluation in ISAAC 2.0 code
International Nuclear Information System (INIS)
Song, Y. M.; Park, S. Y.; Kim, H. D.
2004-01-01
To evaluate fission product core release behavior in the ISAAC 2.0 code, an integrated severe accident computer code for PHWR plants, release fractions according to core release models and/or options are analyzed for major non-volatile fission product species under severe accident conditions. The upgraded models in the ISAAC 2.0 beta version (2003), revised from ISAAC 1.0 (1995), are used as simulation tools, and the reference plant is Wolsong units 2/3/4. For the analyzed sequence, a hypothetical conservative large LOCA is selected, initiated by a guillotine break in the reactor outlet header with total loss of feedwater, assuming that most safety systems are not available. The release fractions of the upgraded models were higher in the order of the ORNL-B, CORSOR-M and CORSOR-O models, and the release fractions of the existing models were similar to the CORSOR-O case. In conclusion, most non-volatile fission products, except the Sb species whose initial inventory is very small, are transported together with corium under severe accident conditions, while only a small amount (at most several percent) is released and distributed into other regions, owing to their non-volatile characteristics. This model evaluation will help users to predict the difference and uncertainty among core release models, allowing easier comparison with other competitive codes.
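Release correlations of the CORSOR-M family express the fractional release-rate coefficient as an Arrhenius function of fuel temperature and integrate it over the temperature history. A hedged sketch follows; the coefficients below are hypothetical placeholders, not the values used in ISAAC or CORSOR-M.

```python
import numpy as np

# CORSOR-M-style release: rate coefficient k = k0 * exp(-Q / (R*T)),
# integrated over a stepwise core temperature history.
R = 8.314e-3          # kJ/(mol*K)
k0, Q = 2.0e5, 300.0  # 1/min and kJ/mol; illustrative placeholders

def release_fraction(temps_K, dt_min):
    """Cumulative release fraction for a stepwise temperature history:
    the retained fraction decays by exp(-k*dt) in each time step."""
    retained = 1.0
    for T in temps_K:
        k = k0 * np.exp(-Q / (R * T))
        retained *= np.exp(-k * dt_min)
    return 1.0 - retained

# Heat-up from 1000 K to 2500 K over 100 minutes, 1-minute steps.
history = np.linspace(1000.0, 2500.0, 100)
f = release_fraction(history, 1.0)
```

Because the rate is exponential in temperature, almost all of the release accumulates in the hottest part of the transient, which is why different correlation coefficients (ORNL-B vs. CORSOR-M vs. CORSOR-O) can rank so differently.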
Modeling developments for the SAS4A and SASSYS computer codes
International Nuclear Information System (INIS)
Cahalan, J.E.; Wei, T.Y.C.
1990-01-01
The SAS4A and SASSYS computer codes are being developed at Argonne National Laboratory for transient analysis of liquid metal cooled reactors. The SAS4A code is designed to analyse severe loss-of-coolant-flow and overpower accidents involving coolant boiling, cladding failures, and fuel melting and relocation. Recent SAS4A modeling developments include extension of the coolant boiling model to treat sudden fission gas release upon pin failure, expansion of the DEFORM fuel behavior model to handle advanced cladding materials and metallic fuel, and addition of metallic fuel modeling capability to the PINACLE and LEVITATE fuel relocation models. The SASSYS code is intended for the analysis of operational and beyond-design-basis transients, and provides a detailed transient thermal and hydraulic simulation of the core, the primary and secondary coolant circuits, and the balance-of-plant, in addition to a detailed model of the plant control and protection systems. Recent SASSYS modeling developments have resulted in detailed representations of the balance-of-plant piping network and components, including steam generators, feedwater heaters and pumps, and the turbine. 12 refs., 2 tabs
Energy Technology Data Exchange (ETDEWEB)
Schultz, Peter Andrew
2011-12-01
The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum-scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and the constitutive models they inform or generate. This report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum-scale M&S, and a plan to incrementally incorporate effective V&V into subcontinuum-scale M&S destined for use in the NEAMS Waste IPSC workflow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum-scale phenomena.
Improvement of reflood model in RELAP5 code based on sensitivity analysis
Energy Technology Data Exchange (ETDEWEB)
Li, Dong; Liu, Xiaojing; Yang, Yanhua, E-mail: yanhuay@sjtu.edu.cn
2016-07-15
Highlights: • Sensitivity analysis is performed on the reflood model of RELAP5. • The selected influential models are discussed and modified. • The modifications are assessed against the FEBA experiment and better predictions are obtained. - Abstract: Reflooding is an important and complex process for the safety of a nuclear reactor during a loss of coolant accident (LOCA). Accurate prediction of reflooding behavior is one of the challenging tasks for current system code development. RELAP5, as a widely used system code, has the capability to simulate this process, but with limited accuracy, especially for low inlet flow rate reflooding conditions. Through a preliminary assessment with six FEBA (Flooding Experiments with Blocked Arrays) tests, it is observed that the peak cladding temperature (PCT) is generally underestimated and bundle quench is predicted too early compared to the experimental data. In this paper, the improvement of constitutive models related to reflooding is carried out based on single-parameter sensitivity analysis. The film boiling heat transfer model and the interfacial friction model of dispersed flow are identified as the models most influential on the results of interest. Studies and discussions are then focused on these sensitive models, and proper modifications are recommended. The proposed improvements are implemented in the RELAP5 code and assessed against the FEBA experiment. Better agreement between calculations and measured data for both cladding temperature and quench time is obtained.
SPIDERMAN: an open-source code to model phase curves and secondary eclipses
Louden, Tom; Kreidberg, Laura
2018-03-01
We present SPIDERMAN (Secondary eclipse and Phase curve Integrator for 2D tempERature MAppiNg), a fast code for calculating exoplanet phase curves and secondary eclipses with arbitrary surface brightness distributions in two dimensions. Using a geometrical algorithm, the code solves exactly the area of sections of the disc of the planet that are occulted by the star. The code is written in C with a user-friendly Python interface, and is optimised to run quickly, with no loss in numerical precision. Approximately 1000 models can be generated per second in typical use, making Markov Chain Monte Carlo analyses practicable. The modular nature of the code allows easy comparison of the effect of multiple different brightness distributions for the dataset. As a test case we apply the code to archival data on the phase curve of WASP-43b using a physically motivated analytical model for the two dimensional brightness map. The model provides a good fit to the data; however, it overpredicts the temperature of the nightside. We speculate that this could be due to the presence of clouds on the nightside of the planet, or additional reflected light from the dayside. When testing a simple cloud model we find that the best fitting model has a geometric albedo of 0.32 ± 0.02 and does not require a hot nightside. We also test for variation of the map parameters as a function of wavelength and find no statistically significant correlations. SPIDERMAN is available for download at https://github.com/tomlouden/spiderman.
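The geometric core of a secondary-eclipse calculation, the exact area of one disc occulted by another, is the standard circular-segment ("lens") formula. The sketch below is that textbook formula, not necessarily SPIDERMAN's internal decomposition, which subdivides the planetary disc into surface-brightness regions.

```python
import math

def occulted_area(r_p, r_s, d):
    """Exact area of a planetary disc of radius r_p hidden behind a
    stellar disc of radius r_s when their centres are separated by d
    (standard circle-circle intersection, the 'lens' formula)."""
    if d >= r_p + r_s:
        return 0.0                           # discs do not overlap
    if d <= abs(r_s - r_p):
        return math.pi * min(r_p, r_s) ** 2  # smaller disc fully covered
    a1 = r_p**2 * math.acos((d**2 + r_p**2 - r_s**2) / (2.0 * d * r_p))
    a2 = r_s**2 * math.acos((d**2 + r_s**2 - r_p**2) / (2.0 * d * r_s))
    a3 = 0.5 * math.sqrt((-d + r_p + r_s) * (d + r_p - r_s)
                         * (d - r_p + r_s) * (d + r_p + r_s))
    return a1 + a2 - a3
```

Evaluating this for each brightness zone of the planet and summing gives the eclipse light curve without any numerical integration over the disc, which is what makes the closed-form geometric approach fast.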
Description and assessment of deformation and temperature models in the FRAP-T6 code
International Nuclear Information System (INIS)
Siefken, L.J.
1983-01-01
The FRAP-T6 code was developed at the Idaho National Engineering Laboratory (INEL) for the purpose of calculating the transient performance of light water reactor fuel rods during reactor transients ranging from mild operational transients to severe hypothetical loss-of-coolant accidents. An important application of the FRAP-T6 code is to calculate the structural performance of fuel rod cladding. During a reactor transient, the cladding may be overstressed by mechanical interaction with thermally expanding fuel, overpressurized by fill and fission gases, embrittled by oxidation, and weakened by temperature increase. If the cladding does not crack, rupture or melt, the cladding has achieved the important objective of containing radioactive fission products. A combination of first principle models and empirical correlations are used to solve for fuel rod performance. The incremental cladding plastic strains are calculated by the Prandtl-Reuss flow rule. The total strains and stresses are solved iteratively by Mendelson's method of successive substitution. A circumferentially nonuniform cladding temperature is taken into account in the modeling of cladding ballooning. Iodine concentration and irradiation are taken into account in the modelling of stress corrosion cracking. The gap heat transfer model takes into account the size of the fuel-cladding gap, fission gas release, fuel-cladding interface pressure, and the incomplete thermal accommodation of gas molecules to surface temperature. The heat transfer at the cladding surface is determined using a set of empirical correlations which cover a wide range of heat transfer regimes. The capabilities of the FRAP-T6 code are assessed by comparisons of code calculations with the measurements of several hundred in-pile experiments on fuel rods. The results of the assessments show that the code accurately and efficiently models the structural and thermal response of fuel rods. (orig.)
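The incremental elastic-plastic update at the heart of a Prandtl-Reuss-type solution can be caricatured in one dimension with an elastic-predictor/plastic-corrector step. This is a textbook sketch under strong simplifications (uniaxial stress, perfect plasticity, made-up moduli), far simpler than FRAP-T6's multiaxial successive-substitution scheme.

```python
def radial_return_1d(strain_inc, stress, eps_p, E=200e3, sigma_y=300.0):
    """One increment of a 1D elastic-perfectly-plastic stress update
    (return mapping). E and sigma_y in MPa are illustrative values.
    Returns the updated stress and accumulated plastic strain."""
    trial = stress + E * strain_inc          # elastic predictor
    f = abs(trial) - sigma_y                 # yield function
    if f <= 0.0:
        return trial, eps_p                  # purely elastic step
    dgamma = f / E                           # plastic strain increment
    sign = 1.0 if trial > 0.0 else -1.0
    return trial - E * dgamma * sign, eps_p + dgamma  # return to yield surface
```

In the multiaxial Prandtl-Reuss case the same predictor/corrector structure applies, but the corrector acts on the deviatoric stress tensor and the iteration over total strains is what Mendelson's successive substitution resolves.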
TRIPOLI-4{sup ®} Monte Carlo code ITER A-lite neutronic model validation
Energy Technology Data Exchange (ETDEWEB)
Jaboulay, Jean-Charles, E-mail: jean-charles.jaboulay@cea.fr [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France); Cayla, Pierre-Yves; Fausser, Clement [MILLENNIUM, 16 Av du Québec Silic 628, F-91945 Villebon sur Yvette (France); Damian, Frederic; Lee, Yi-Kang; Puma, Antonella Li; Trama, Jean-Christophe [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France)
2014-10-15
3D Monte Carlo transport codes are extensively used in neutronic analysis, especially in radiation protection and shielding analyses for fission and fusion reactors. TRIPOLI-4{sup ®} is a Monte Carlo code developed by CEA. The aim of this paper is to show its capability to model a large-scale fusion reactor with a complex neutron source and geometry. A benchmark between MCNP5 and TRIPOLI-4{sup ®} on the ITER A-lite model was carried out; neutron flux, nuclear heating in the blankets and tritium production rate in the European TBMs were evaluated and compared. The methodology to build the TRIPOLI-4{sup ®} A-lite model is based on MCAM and the MCNP A-lite model. Simplified TBMs, from KIT, were integrated in the equatorial port. A good agreement between MCNP and TRIPOLI-4{sup ®} is shown; discrepancies are mainly within the statistical errors.
Energy Technology Data Exchange (ETDEWEB)
Coste, P.; Bestion, D. [Commissariat a l Energie Atomique, Grenoble (France)
1995-09-01
This paper presents a simple modelling of mass diffusion effects on condensation. In the presence of noncondensable gases, the mass diffusion near the interface is modelled using the heat and mass transfer analogy and normally requires an iterative procedure to calculate the interface temperature. Simplifications of the model and of the solution procedure are used without important degradation of the predictions. The model is assessed on experimental data for both film condensation in vertical tubes and direct contact condensation in horizontal tubes, including air-steam, nitrogen-steam and helium-steam data. It is implemented in the Cathare code, a French system code for nuclear reactor thermal hydraulics developed by CEA, EDF, and FRAMATOME.
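The iterative interface-temperature calculation mentioned above can be sketched as a damped fixed-point (successive substitution) loop: gas-side sensible heat plus condensation heat must balance the heat conducted to the wall. The saturation curve, the linearised mass-transfer law, and all coefficients below are illustrative placeholders, not Cathare's models.

```python
import math

def interface_temperature(T_bulk, T_wall, h_gas, h_wall, h_fg, beta):
    """Fixed-point iteration for the interface temperature T_i (K).
    Condensation mass flux is taken proportional to the saturation
    pressure difference (a crude stand-in for the heat/mass transfer
    analogy); h_gas, h_wall in W/(m2 K), h_fg in J/kg, beta in kg/(m2 s Pa)."""
    def psat(T):                          # Magnus-type saturation pressure, Pa
        return 611.0 * math.exp(17.27 * (T - 273.15) / (T - 35.85))
    T_i = 0.5 * (T_bulk + T_wall)         # initial guess
    for _ in range(100):
        m_cond = beta * max(psat(T_bulk) - psat(T_i), 0.0)
        q_gas = h_gas * (T_bulk - T_i) + h_fg * m_cond   # gas-side heat flux
        T_new = T_wall + q_gas / h_wall                  # wall-side balance
        if abs(T_new - T_i) < 1e-6:
            break
        T_i = 0.5 * (T_i + T_new)         # damped successive substitution
    return T_i

Ti = interface_temperature(373.15, 300.0, 50.0, 5000.0, 2.26e6, 1e-8)
```

With more noncondensable gas (a smaller effective `beta`), the condensation term shrinks and the interface temperature drops toward the wall temperature, which is the degradation effect the paper models.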
Validation of the Thermal-Hydraulic Model in the SACAP Code with the ISP Tests
Energy Technology Data Exchange (ETDEWEB)
Park, Soon-Ho; Kim, Dong-Min; Park, Chang-Hwan [FNC Technology Co., Yongin (Korea, Republic of)
2016-10-15
From a safety viewpoint, the containment pressure is an important parameter; the local hydrogen concentration is also of major concern because of its flammability and the risk of detonation. In Korea, there has been an extensive effort to develop a computer code that can analyze the severe accident behavior of pressurized water reactors. The development has been done in a modularized manner, and the SACAP (Severe Accident Containment Analysis Package) code is now in the final stage of development. SACAP adopts an LP (lumped parameter) model and is applicable to analyzing the synthetic behavior of the containment during severe accidents caused by thermal-hydraulic transients, combustible gas burn, direct containment heating by high-pressure melt ejection, steam explosion, and molten core-concrete interaction. Analyses of a number of ISP (International Standard Problem) experiments were performed as part of SACAP code V&V (verification and validation). In this paper, the SACAP analysis results for ISP-35 NUPEC and ISP-47 TOSQAN are presented, including comparison with other existing NPP simulation codes; these two problems were selected to confirm the computational performance of the SACAP code currently under development. The multi-node analysis for ISP-47 is in progress. As a result of the simulations, SACAP predicts the thermal-hydraulic variables such as temperature and pressure well. We also verify that the SACAP code is properly equipped to analyze gas distribution and condensation.
Physics Based Model for Cryogenic Chilldown and Loading. Part IV: Code Structure
Luchinsky, D. G.; Smelyanskiy, V. N.; Brown, B.
2014-01-01
This is the fourth report in a series of technical reports that describe the application of a separated two-phase flow model to the cryogenic loading operation. In this report we present the structure of the code. The code consists of five major modules: (1) geometry; (2) solver; (3) material properties; (4) correlations; and (5) stability control. The two key modules - solver and correlations - are further divided into a number of submodules. Most of the physics and knowledge databases related to the properties of cryogenic two-phase flow are included in the cryogenic correlations module. The functional form of those correlations is not well established and is a subject of extensive research; multiple parametric forms for various correlations are currently available. Some of them are included in the correlations module and will be described in detail in a separate technical report. Here we describe the overall structure of the code and focus on the details of the solver and stability control modules.
Novel models of controller parts for the SSC-L code
International Nuclear Information System (INIS)
Schubert, B.; Moench, D.
1986-01-01
The simulation of system control of LMFBRs is quite essential for design and licensing of such facilities. This report presents the numerical models for the simulation of controller parts, which have been tested in conjunction with the SSC-L code for the experimental breeder KNK II. The good agreement between calculation and measurement proves the principal usability of the proposed models for the required task. (orig.) [de
Applications of the 3-dim ICRH global wave code FISIC and comparison with other models
International Nuclear Information System (INIS)
Kruecken, T.; Brambilla, M.
1989-01-01
Numerical simulations of two ICRF heating experiments in ASDEX are presented, using the FISIC code, which solves the integrodifferential wave equations in the finite Larmor radius (FLR) approximation, and a ray-tracing model. The different models show good agreement on the whole; we can, however, identify a few interesting toroidal effects, in particular on the efficiency of mode conversion and on the propagation of ion Bernstein waves. (author)
Applications of the 3-dim ICRH global wave code FISIC and comparison with other models
Energy Technology Data Exchange (ETDEWEB)
Kruecken, T.; Brambilla, M. (Association Euratom-Max-Planck-Institut fuer Plasmaphysik, Garching (Germany, F.R.))
1989-02-01
Numerical simulations of two ICRF heating experiments in ASDEX are presented, using the FISIC code, which solves the integrodifferential wave equations in the finite Larmor radius (FLR) approximation, and a ray-tracing model. The different models show good agreement on the whole; we can, however, identify a few interesting toroidal effects, in particular on the efficiency of mode conversion and on the propagation of ion Bernstein waves. (author).
International Nuclear Information System (INIS)
Falck, W.E.
1991-01-01
There is a growing need to consider the influence of organic macromolecules on the speciation of ions in natural waters. It is recognized that a simple discrete ligand approach to the binding of protons/cations to organic macromolecules is not appropriate to represent heterogeneities of binding site distributions. A more realistic approach has been incorporated into the speciation code PHREEQE which retains the discrete ligand approach but modifies the binding intensities using an electrostatic (surface complexation) model. To allow for different conformations of natural organic material two alternative concepts have been incorporated: it is assumed that (a) the organic molecules form rigid, impenetrable spheres, and (b) the organic molecules form flat surfaces. The former concept will be more appropriate for molecules in the smaller size range, while the latter will be more representative for larger size molecules or organic surface coatings. The theoretical concept is discussed and the relevant changes to the standard PHREEQE code are explained. The modified codes are called PHREEQEO-RS and PHREEQEO-FS for the rigid-sphere and flat-surface models respectively. Improved output facilities for data transfer to other computers, e.g. the Macintosh, are introduced. Examples where the model is tested against literature data are shown and practical problems are discussed. Appendices contain listings of the modified subroutines GAMMA and PTOT, an example input file and an example command procedure to run the codes on VAX computers
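The electrostatic correction described above can be illustrated with a Boltzmann factor applied to an intrinsic binding constant: the apparent constant seen in solution is the intrinsic one shifted by the work of bringing a charged ion to the molecule's surface potential. The log K and potential values below are illustrative; in the modified PHREEQE the potential itself follows from the rigid-sphere or flat-surface geometry and the ionic strength.

```python
import math

# Boltzmann correction of a discrete-ligand binding constant by the
# surface potential psi, as in surface-complexation models.
F = 96485.0    # Faraday constant, C/mol
R = 8.314      # gas constant, J/(mol*K)
T = 298.15     # temperature, K

def apparent_logK(logK_intrinsic, z, psi):
    """Apparent log K for an ion of charge z binding at a surface with
    potential psi (V): logK_app = logK_int + log10(exp(-z*F*psi/(R*T)))."""
    return logK_intrinsic + math.log10(math.exp(-z * F * psi / (R * T)))

# A negative surface potential strengthens apparent binding of a cation:
logK_app = apparent_logK(4.0, +1, -0.05)   # hypothetical values
```

As more cations bind, the negative surface charge is neutralised, psi relaxes toward zero, and the apparent constant falls back toward the intrinsic value; this feedback is what reproduces the apparent heterogeneity of binding-site strengths without invoking many discrete ligands.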
MELMRK 2.0: A description of computer models and results of code testing
Energy Technology Data Exchange (ETDEWEB)
Wittman, R.S. (ed.) (Westinghouse Savannah River Co., Aiken, SC (United States)); Denny, V.; Mertol, A. (Science Applications International Corp., Los Altos, CA (United States))
1992-05-31
An advanced version of the MELMRK computer code has been developed that provides detailed models for conservation of mass, momentum, and thermal energy within relocating streams of molten metallics during meltdown of Savannah River Site (SRS) reactor assemblies. In addition to a mechanistic treatment of transport phenomena within a relocating stream, MELMRK 2.0 retains the MOD1 capability for real-time coupling of the in-depth thermal response of participating assembly heat structure and, further, augments this capability with models for self-heating of relocating melt owing to steam oxidation of metallics and fission product decay power. As was the case for MELMRK 1.0, the MOD2 version offers state-of-the-art numerics for solving coupled sets of nonlinear differential equations. Principal features include application of multi-dimensional Newton-Raphson techniques to accelerate convergence behavior and direct matrix inversion to advance primitive variables from one iterate to the next. Additionally, MELMRK 2.0 provides logical event flags for managing the broad range of code options available for treating such features as (1) coexisting flow regimes, (2) dynamic transitions between flow regimes, and (3) linkages between heatup and relocation code modules. The purpose of this report is to provide a detailed description of the MELMRK 2.0 computer models for melt relocation. Also included are illustrative results for code testing, as well as an integrated calculation for meltdown of a Mark 31a assembly.
MELMRK 2.0: A description of computer models and results of code testing
Energy Technology Data Exchange (ETDEWEB)
Wittman, R.S. [ed.] [Westinghouse Savannah River Co., Aiken, SC (United States)]; Denny, V.; Mertol, A. [Science Applications International Corp., Los Altos, CA (United States)]
1992-05-31
An advanced version of the MELMRK computer code has been developed that provides detailed models for conservation of mass, momentum, and thermal energy within relocating streams of molten metallics during meltdown of Savannah River Site (SRS) reactor assemblies. In addition to a mechanistic treatment of transport phenomena within a relocating stream, MELMRK 2.0 retains the MOD1 capability for real-time coupling of the in-depth thermal response of participating assembly heat structure and, further, augments this capability with models for self-heating of relocating melt owing to steam oxidation of metallics and fission product decay power. As was the case for MELMRK 1.0, the MOD2 version offers state-of-the-art numerics for solving coupled sets of nonlinear differential equations. Principal features include application of multi-dimensional Newton-Raphson techniques to accelerate convergence behavior and direct matrix inversion to advance primitive variables from one iterate to the next. Additionally, MELMRK 2.0 provides logical event flags for managing the broad range of code options available for treating such features as (1) coexisting flow regimes, (2) dynamic transitions between flow regimes, and (3) linkages between heatup and relocation code modules. The purpose of this report is to provide a detailed description of the MELMRK 2.0 computer models for melt relocation. Also included are illustrative results for code testing, as well as an integrated calculation for meltdown of a Mark 31a assembly.
A cortical sparse distributed coding model linking mini- and macrocolumn-scale functionality
Directory of Open Access Journals (Sweden)
Gerard J Rinkus
2010-06-01
No generic function for the minicolumn—i.e., one that would apply equally well to all cortical areas and species—has yet been proposed. I propose that the minicolumn does have a generic functionality, which only becomes clear when seen in the context of the function of the higher-level, subsuming unit, the macrocolumn. I propose that: (a) a macrocolumn's function is to store sparse distributed representations of its inputs and to be a recognizer of those inputs; and (b) the generic function of the minicolumn is to enforce macrocolumnar code sparseness. The minicolumn, defined here as a physically localized pool of ~20 L2/3 pyramidals, does this by acting as a winner-take-all (WTA) competitive module, implying that macrocolumnar codes consist of ~70 active L2/3 cells, assuming ~70 minicolumns per macrocolumn. I describe an algorithm for activating these codes during both learning and retrieval, which causes more similar inputs to map to more highly intersecting codes, a property which yields ultra-fast (immediate, first-shot) storage and retrieval. The algorithm achieves this by adding an amount of randomness (noise) into the code selection process, which is inversely proportional to an input's familiarity. I propose a possible mapping of the algorithm onto cortical circuitry, and adduce evidence for a neuromodulatory implementation of this familiarity-contingent noise mechanism. The model is distinguished from other recent columnar cortical circuit models in proposing a generic minicolumnar function in which a group of cells within the minicolumn, the L2/3 pyramidals, compete (WTA) to be part of the macrocolumnar code.
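The proposed code-selection step can be caricatured in a few lines: each minicolumn runs a winner-take-all competition over its ~20 cells, with noise injected in proportion to the input's unfamiliarity. Everything here (the weight shapes, the familiarity proxy, the noise law) is an illustrative stand-in for the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)

def macrocolumn_code(inp, W, noise_scale):
    """Select one winner (WTA) per minicolumn. W has shape
    (n_minicolumns, n_cells_per_minicolumn, input_dim). Noise amplitude
    shrinks as the input becomes more familiar (higher peak activation),
    so novel inputs get more random, hence more distinct, codes."""
    n_mini, n_cells, _ = W.shape
    u = np.einsum('mcd,d->mc', W, inp)            # input-driven activations
    familiarity = u.max(axis=1).mean()            # crude familiarity proxy
    sigma = noise_scale * (1.0 - min(familiarity, 1.0))
    noise = rng.normal(0.0, sigma, (n_mini, n_cells)) if sigma > 0 else 0.0
    return (u + noise).argmax(axis=1)             # one winner per minicolumn

# 70 minicolumns of 20 L2/3 cells each, 64-dimensional input.
W = rng.random((70, 20, 64)) * 0.01
code = macrocolumn_code(rng.random(64), W, noise_scale=1.0)
```

The returned vector of 70 winning-cell indices is the sparse macrocolumnar code: exactly one active cell per minicolumn, so sparseness is enforced structurally rather than by a threshold.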
Energy Technology Data Exchange (ETDEWEB)
Moriniere, E
2008-03-15
The most recent analyses of dilepton spectra produced in heavy ion collisions have shown the need for a precise knowledge of all dilepton production channels. The experimental HADES facility, installed on the GSI accelerator site, is appropriate for that goal. Thus, the Dalitz decay branching ratio of the {delta} resonance ({delta} {yields} Ne{sup +}e{sup -}), which has never been measured, is studied in this work. Moreover, the pp {yields} p{delta}{sup +} {yields} ppe{sup +}e{sup -} reaction could provide some information about the internal structure of the resonance and, more precisely, about the electromagnetic N-{delta} transition form factors. The analysis of simulations shows the feasibility of this experiment and estimates the counting yield as well as the signal-over-background ratio. This analysis also shows the great importance of the momentum resolution of the detector for the success of this experiment. The momentum resolution must therefore be investigated. In this work, an attempt to identify the most important contributions to the measured resolution is presented. The calibration step, which provides the relation between electronic time and physical time, the detector alignment (global, relative or internal), and the tracking method are studied. Methods for improving these different contributions are proposed in order to reach the optimal resolution. (author)
Energy Technology Data Exchange (ETDEWEB)
Santos-Villalobos, Hector J [ORNL]; Gregor, Jens [University of Tennessee, Knoxville (UTK)]; Bingham, Philip R [ORNL]
2014-01-01
At present, neutron sources cannot be fabricated small and powerful enough to achieve high resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded-mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded-mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps around 50 µm. To overcome this challenge, the coded-mask and object are magnified by making the distance from the coded-mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
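The least squares reconstruction at the heart of such a system amounts to minimizing ||Ax - b||², where A is the system model and b the detector measurements. A toy sketch, with an invented 3x2 system standing in for a real coded-mask/source/detector model:

```python
def lstsq_gd(a, b, iters=2000, lr=0.05):
    """Solve min ||Ax - b||^2 by gradient descent. A toy stand-in for
    model-based least squares reconstruction, where A would encode the
    coded-mask system model (here a tiny invented system)."""
    rows, cols = len(a), len(a[0])
    x = [0.0] * cols
    for _ in range(iters):
        # residual r = Ax - b
        r = [sum(a[i][j] * x[j] for j in range(cols)) - b[i]
             for i in range(rows)]
        # gradient of the objective is 2 A^T r
        for j in range(cols):
            x[j] -= lr * 2.0 * sum(a[i][j] * r[i] for i in range(rows))
    return x

# Invented 3x2 system with known exact solution x = (2, 3)
A = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]
b = [2.0, 6.0, 5.0]
x_hat = lstsq_gd(A, b)
```

In the real system A is far too large for dense solvers, which is why the modeling of each CSI component (source flux included) matters for the conditioning of this minimization.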
New high burnup fuel models for NRC's licensing audit code, FRAPCON
International Nuclear Information System (INIS)
Lanning, D.D.; Beyer, C.E.; Painter, C.L.
1996-01-01
Fuel behavior models have recently been updated within the U.S. Nuclear Regulatory Commission steady-state FRAPCON code used for auditing of fuel vendor/utility codes and analyses. These modeling updates have concentrated on providing a best estimate prediction of steady-state fuel behavior up to the maximum burnup levels of current data (60 to 65 GWd/MTU rod-average). A decade has passed since these models were last updated. Currently, some U.S. utilities and fuel vendors are requesting approval for rod-average burnups greater than 60 GWd/MTU; however, until these recent updates the NRC did not have valid fuel performance models at these higher burnup levels. Pacific Northwest Laboratory (PNL) has reviewed 15 separate effects models within the FRAPCON fuel performance code (References 1 and 2) and identified nine models that needed updating for improved prediction of fuel behavior at high burnup levels. The six separate effects models not updated were the cladding thermal properties, cladding thermal expansion, cladding creepdown, fuel specific heat, fuel thermal expansion and open gap conductance. Comparison of these models to the currently available data indicates that these models still adequately predict the data within data uncertainties. The nine models identified as needing improvement for predicting high-burnup behavior are fission gas release (FGR), fuel thermal conductivity (accounting for both high burnup effects and burnable poison additions), fuel swelling, fuel relocation, radial power distribution, fuel-cladding contact gap conductance, cladding corrosion, cladding mechanical properties and cladding axial growth. Each of the updated models will be described in the following sections and the model predictions will be compared to currently available high burnup data.
Using finite mixture models in thermal-hydraulics system code uncertainty analysis
Energy Technology Data Exchange (ETDEWEB)
Carlos, S., E-mail: scarlos@iqn.upv.es [Department d’Enginyeria Química i Nuclear, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Sánchez, A. [Department d’Estadística Aplicada i Qualitat, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Ginestar, D. [Department de Matemàtica Aplicada, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Martorell, S. [Department d’Enginyeria Química i Nuclear, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain)
2013-09-15
Highlights: • Best estimate code simulation needs uncertainty quantification. • The output variables can present multimodal probability distributions. • The analysis of multimodal distributions is performed using finite mixture models. • Two methods to reconstruct output variable probability distributions are used. -- Abstract: Nuclear Power Plant safety analysis is mainly based on the use of best estimate (BE) codes that predict the plant behavior under normal or accidental conditions. As BE codes introduce uncertainties due to uncertainty in input parameters and modeling, it is necessary to perform uncertainty assessment (UA), and eventually sensitivity analysis (SA), of the results obtained. These analyses are part of the appropriate treatment of uncertainties imposed by current regulation based on the adoption of the best estimate plus uncertainty (BEPU) approach. The most popular approach for uncertainty assessment, based on Wilks’ method, obtains a tolerance/confidence interval, but it does not completely characterize the output variable behavior, which is required for an extended UA and SA. However, the development of standard UA and SA imposes a high computational cost due to the large number of simulations needed. In order to obtain more information about the output variable and, at the same time, to keep the computational cost as low as possible, there has been a recent shift toward developing metamodels (models of models), or surrogate models, that approximate or emulate complex computer codes. In this context, different techniques exist to reconstruct the probability distribution using the information provided by a sample of values, such as finite mixture models. In this paper, the Expectation Maximization and the k-means algorithms are used to obtain a finite mixture model that reconstructs the output variable probability distribution from data obtained with RELAP-5 simulations. Both methodologies have been applied to a separated
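As a concrete illustration of the finite-mixture idea, the following sketch fits a two-component 1D Gaussian mixture by Expectation-Maximization to a bimodal sample; it is a minimal stand-in for the paper's method, and the data, initialization, and tolerances are invented:

```python
import math
import random

def em_gmm_1d(data, k=2, iters=60):
    """Fit a k-component 1D Gaussian mixture by Expectation-Maximization,
    a minimal sketch of reconstructing a multimodal output distribution
    from a sample of code results. Returns (weights, means, variances)."""
    srt = sorted(data)
    # quantile initialization: spread initial means across the sample
    means = [srt[(2 * j + 1) * len(srt) // (2 * k)] for j in range(k)]
    varis = [1.0] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            p = [w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
                 for w, m, v in zip(weights, means, varis)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M-step: re-estimate weights, means and variances
        for j in range(k):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / len(data)
            means[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            varis[j] = max(sum(r[j] * (x - means[j]) ** 2
                               for r, x in zip(resp, data)) / nj, 1e-9)
    return weights, means, varis

# Bimodal sample standing in for an output variable with two behavior modes
rng = random.Random(1)
sample = ([rng.gauss(0.0, 0.5) for _ in range(300)]
          + [rng.gauss(5.0, 0.5) for _ in range(300)])
w_hat, m_hat, v_hat = em_gmm_1d(sample)
```

The fitted component means recover the two modes, which a single tolerance interval from Wilks' method would not distinguish.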
Joint ICTP-IAEA advanced workshop on model codes for spallation reactions
International Nuclear Information System (INIS)
Filges, D.; Leray, S.; Yariv, Y.; Mengoni, A.; Stanculescu, A.; Mank, G.
2008-08-01
The International Atomic Energy Agency (IAEA) and the Abdus Salam International Centre for Theoretical Physics (ICTP) organised an expert meeting at the ICTP from 4 to 8 February 2008 to discuss model codes for spallation reactions. These nuclear reactions play an important role in a wide domain of applications ranging from neutron sources for condensed matter and material studies, transmutation of nuclear waste and rare isotope production to astrophysics, simulation of detector set-ups in nuclear and particle physics experiments, and radiation protection near accelerators or in space. The simulation tools developed for these domains use nuclear model codes to compute the production yields and characteristics of all the particles and nuclei generated in these reactions. These codes are generally Monte-Carlo implementations of Intra-Nuclear Cascade (INC) or Quantum Molecular Dynamics (QMD) models, followed by de-excitation (principally evaporation/fission) models. Experts have discussed in depth the physics contained within the different models in order to understand their strengths and weaknesses. Such codes need to be validated against experimental data in order to determine their accuracy and reliability with respect to all forms of application. Agreement was reached during the course of the workshop to organise an international benchmark of the different models developed by different groups around the world. The specifications of the benchmark, including the set of selected experimental data to be compared to the models, were also defined during the workshop. The benchmark will be organised under the auspices of the IAEA in 2008, and the first results will be discussed at the next Accelerator Applications Conference (AccApp'09) to be held in Vienna in May 2009. (author)
NRMC - A GPU code for N-Reverse Monte Carlo modeling of fluids in confined media
Sánchez-Gil, Vicente; Noya, Eva G.; Lomba, Enrique
2017-08-01
NRMC is a parallel code for performing N-Reverse Monte Carlo modeling of fluids in confined media [V. Sánchez-Gil, E.G. Noya, E. Lomba, J. Chem. Phys. 140 (2014) 024504]. This method is an extension of the usual Reverse Monte Carlo method to obtain structural models of confined fluids compatible with experimental diffraction patterns, specifically designed to overcome the problem of slow diffusion that can appear under conditions of tight confinement. Most of the computational time in N-Reverse Monte Carlo modeling is spent in the evaluation of the structure factor for each trial configuration, a calculation that can be easily parallelized. Implementation of the structure factor evaluation in NVIDIA® CUDA so that the code can be run on GPUs leads to a speed up of up to two orders of magnitude.
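The structure-factor kernel that dominates the run time can be illustrated with the Debye formula for an isotropic sample; this is a plain CPU sketch of the kind of O(N²) pair loop the abstract describes offloading to the GPU (the formula choice and names here are assumptions, not NRMC's actual implementation):

```python
import math

def structure_factor(positions, q_values):
    """Debye formula for the isotropic structure factor S(q) of N particles:
    S(q) = 1 + (2/N) * sum over pairs of sin(q*r_ij)/(q*r_ij).
    The double loop over pairs is the hot spot: every q value (and every
    pair) is independent, which is what makes a GPU port attractive."""
    n = len(positions)
    s_of_q = []
    for q in q_values:
        acc = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                r = math.dist(positions[i], positions[j])
                acc += math.sin(q * r) / (q * r)
        s_of_q.append(1.0 + 2.0 * acc / n)
    return s_of_q

# Two particles at unit separation: S(q) = 1 + sin(q)/q
s = structure_factor([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)], [math.pi / 2])
```

In Reverse Monte Carlo, this evaluation runs once per trial move, so a two-orders-of-magnitude speed-up of this kernel translates almost directly into overall throughput.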
Maxwell: A semi-analytic 4D code for earthquake cycle modeling of transform fault systems
Sandwell, David; Smith-Konter, Bridget
2018-05-01
We have developed a semi-analytic approach (and computational code) for rapidly calculating 3D time-dependent deformation and stress caused by screw dislocations embedded within an elastic layer overlying a Maxwell viscoelastic half-space. The Maxwell model is developed in the Fourier domain to exploit the computational advantages of the convolution theorem, hence substantially reducing the computational burden associated with an arbitrarily complex distribution of force couples necessary for fault modeling. The new aspect of this development is the ability to model lateral variations in shear modulus. Ten benchmark examples are provided for testing and verification of the algorithms and code. One final example simulates interseismic deformation along the San Andreas Fault System where lateral variations in shear modulus are included to simulate lateral variations in lithospheric structure.
Incorporation of the capillary hysteresis model HYSTR into the numerical code TOUGH
International Nuclear Information System (INIS)
Niemi, A.; Bodvarsson, G.S.; Pruess, K.
1991-11-01
As part of the work performed to model flow in the unsaturated zone at Yucca Mountain Nevada, a capillary hysteresis model has been developed. The computer program HYSTR has been developed to compute the hysteretic capillary pressure -- liquid saturation relationship through interpolation of tabulated data. The code can be easily incorporated into any numerical unsaturated flow simulator. A complete description of HYSTR, including a brief summary of the previous hysteresis literature, detailed description of the program, and instructions for its incorporation into a numerical simulator are given in the HYSTR user's manual (Niemi and Bodvarsson, 1991a). This report describes the incorporation of HYSTR into the numerical code TOUGH (Transport of Unsaturated Groundwater and Heat; Pruess, 1986). The changes made and procedures for the use of TOUGH for hysteresis modeling are documented
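The core of such a hysteresis module is interpolation in tabulated Pc(S) data plus a branch choice based on saturation history. A much-simplified sketch (the table values are invented, and the branch rule is far cruder than HYSTR's scanning-curve treatment):

```python
import bisect

# Hypothetical tabulated capillary pressure curves, Pc(S) in Pa, on the
# main drainage and main wetting branches (all values invented).
SATURATIONS = [0.2, 0.4, 0.6, 0.8, 1.0]
PC_DRAINAGE = [9.0e4, 5.0e4, 3.0e4, 2.0e4, 0.0]
PC_WETTING  = [7.0e4, 3.5e4, 2.0e4, 1.0e4, 0.0]

def interp(table_s, table_pc, s):
    """Piecewise-linear interpolation of Pc in tabulated saturation."""
    s = min(max(s, table_s[0]), table_s[-1])          # clamp to table range
    i = max(bisect.bisect_right(table_s, s) - 1, 0)   # bracketing interval
    i = min(i, len(table_s) - 2)
    f = (s - table_s[i]) / (table_s[i + 1] - table_s[i])
    return table_pc[i] + f * (table_pc[i + 1] - table_pc[i])

def capillary_pressure(s, s_prev):
    """Pick the branch from saturation history: drainage curve if saturation
    is decreasing, wetting curve if increasing (a simplified HYSTR-like rule)."""
    table = PC_DRAINAGE if s < s_prev else PC_WETTING
    return interp(SATURATIONS, table, s)

pc_drain = capillary_pressure(0.5, 0.6)  # saturation falling -> drainage branch
pc_wet = capillary_pressure(0.5, 0.4)    # saturation rising -> wetting branch
```

The same saturation thus yields two different capillary pressures depending on history, which is exactly the behavior a flow simulator such as TOUGH must query at every Newton iteration.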
International Nuclear Information System (INIS)
Bujan, A.; Adamik, V.; Misak, J.
1986-01-01
A brief description is presented of the extension of the SICHTA-83 computer code, which analyses the thermal history of the fuel channel during large LOCAs, to model the mechanical behaviour of the fuel element cladding. The new version of the code has a more detailed treatment of heat transfer in the fuel-cladding gap because it also accounts for mechanical (plastic) deformations of the cladding and for fuel-cladding interaction (magnitude of contact pressure). It also accounts for the change in pressure of the fuel element fill gas, considers a mechanical criterion for cladding failure, and evaluates the degree of blockage of the coolant flow cross section in the fuel channel. A model LOCA computation for a WWER-440 compares the new SICHTA-85/MOD 1 code with the results of the original 83 version of SICHTA. (author)
SCDAP/RELAP5/MOD 3.1 code manual: Damage progression model theory. Volume 2
Energy Technology Data Exchange (ETDEWEB)
Davis, K.L. (ed.); Allison, C.M.; Berna, G.A. [Lockheed Idaho Technologies Co., Idaho Falls, ID (United States)]; and others
1995-06-01
The SCDAP/RELAP5 code has been developed for best estimate transient simulation of light water reactor coolant systems during a severe accident. The code models the coupled behavior of the reactor coolant system, the core, and fission products released during a severe accident transient, as well as large and small break loss of coolant accidents and operational transients such as anticipated transient without SCRAM, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater conditioning systems. This volume contains detailed descriptions of the severe accident models and correlations. It provides the user with the underlying assumptions and simplifications used to generate and implement the basic equations into the code, so an intelligent assessment of the applicability and accuracy of the resulting calculation can be made.
A Mathematical Model and MATLAB Code for Muscle-Fluid-Structure Simulations.
Battista, Nicholas A; Baird, Austin J; Miller, Laura A
2015-11-01
This article provides models and code for numerically simulating muscle-fluid-structure interactions (FSIs). This work was presented as part of the symposium on Leading Students and Faculty to Quantitative Biology through Active Learning at the society-wide meeting of the Society for Integrative and Comparative Biology in 2015. Muscle mechanics and simple mathematical models to describe the forces generated by muscular contractions are introduced in most biomechanics and physiology courses. Often, however, the models are derived for simplifying cases such as isometric or isotonic contractions. In this article, we present a simple model of the force generated through active contraction of muscles. The muscles' forces are then used to drive the motion of flexible structures immersed in a viscous fluid. An example of an elastic band immersed in a fluid is first presented to illustrate a fully-coupled FSI in the absence of any external driving forces. In the second example, we present a valveless tube with model muscles that drive the contraction of the tube. We provide a brief overview of the numerical method used to generate these results. We also include as Supplementary Material a MATLAB code to generate these results. The code was written for flexibility so as to be easily modified to many other biological applications for educational purposes. © The Author 2015. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.
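A minimal Hill-type sketch of the kind of active muscle force model such a course introduces (all parameter values and curve shapes here are illustrative, not taken from the article's MATLAB code):

```python
import math

def muscle_force(activation, length, velocity,
                 f_max=1.0, l_opt=1.0, v_max=5.0):
    """Hill-type sketch: force = activation x length-tension x force-velocity,
    all in normalized units. Parameter values are invented for illustration."""
    # Gaussian length-tension curve peaking at the optimal fiber length
    fl = math.exp(-((length - l_opt) / 0.45) ** 2)
    # Hyperbolic force-velocity relation for shortening (velocity > 0);
    # isometric and lengthening cases are simplified to fv = 1
    fv = (v_max - velocity) / (v_max + 3.0 * velocity) if velocity > 0 else 1.0
    return f_max * activation * fl * max(fv, 0.0)

# Isometric (velocity = 0) force peaks at the optimal fiber length:
f_opt = muscle_force(1.0, 1.0, 0.0)
f_short = muscle_force(1.0, 0.7, 0.0)
```

In the article's simulations, forces of this kind are applied along model muscles and then drive the immersed flexible structures through the fluid solver; here only the force law itself is sketched.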
SCDAP/RELAP5/MOD 3.1 code manual: Damage progression model theory. Volume 2
International Nuclear Information System (INIS)
Davis, K.L.
1995-06-01
The SCDAP/RELAP5 code has been developed for best estimate transient simulation of light water reactor coolant systems during a severe accident. The code models the coupled behavior of the reactor coolant system, the core, and fission products released during a severe accident transient, as well as large and small break loss of coolant accidents and operational transients such as anticipated transient without SCRAM, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater conditioning systems. This volume contains detailed descriptions of the severe accident models and correlations. It provides the user with the underlying assumptions and simplifications used to generate and implement the basic equations into the code, so an intelligent assessment of the applicability and accuracy of the resulting calculation can be made.
Beyond the Business Model: Incentives for Organizations to Publish Software Source Code
Lindman, Juho; Juutilainen, Juha-Pekka; Rossi, Matti
The software stack opened under Open Source Software (OSS) licenses is growing rapidly. Commercial actors have released considerable amounts of previously proprietary source code. These actions raise the question of why companies choose a strategy based on giving away software assets. Research on the outbound OSS approach has tried to answer this question with the concept of the “OSS business model”. When studying the reasons for code release, we have observed that the business model concept is too generic to capture the many incentives organizations have. Instead, in this paper we investigate empirically what the companies’ incentives are by means of an exploratory case study of three organizations in different stages of their code release. Our results indicate that the companies aim to promote standardization, obtain development resources, gain cost savings, improve the quality of software, increase the trustworthiness of software, or steer OSS communities. We conclude that future research on outbound OSS could benefit from focusing on the heterogeneous incentives for code release rather than on revenue models.
Directory of Open Access Journals (Sweden)
Wang Yanqing
2016-03-01
A good assignment of code reviewers can effectively utilize intellectual resources, assure code quality and improve programmers’ skills in software development. However, little research on reviewer assignment for code review has been found. In this study, a code reviewer assignment model is created based on participants’ preferences regarding reviewing assignments. With a constraint on the smallest size of a review group, the model is optimized to maximize review outcomes and avoid the negative impact of a “mutual admiration society”. This study shows that reviewer assignment strategies incorporating either the reviewers’ preferences or the authors’ preferences yield much greater improvement than a random assignment. The strategy incorporating authors’ preferences yields a higher improvement than that incorporating reviewers’ preferences. However, when the reviewers’ and authors’ preference matrices are merged, the improvement becomes moderate. The study indicates that the majority of the participants have a strong wish to work with reviewers and authors of the highest competence. If we want to satisfy the preferences of both reviewers and authors at the same time, the overall improvement of learning outcomes may not be the best.
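A greedy sketch of preference-based reviewer assignment with a group-size constraint; this illustrates the problem setup only and is not the study's optimization model (the scores and names are invented):

```python
def assign_reviewers(preference, group_size=2):
    """Greedy sketch: for each author, pick the `group_size` reviewers with
    the highest preference score, never assigning an author to themselves.
    preference[r][a] = how much reviewer r wants to review author a's code.
    Returns {author: [reviewers]}."""
    n = len(preference)
    assignment = {}
    for author in range(n):
        candidates = [r for r in range(n) if r != author]
        candidates.sort(key=lambda r: preference[r][author], reverse=True)
        assignment[author] = candidates[:group_size]
    return assignment

# 4 participants; preference[r][a] is a hypothetical score in [0, 1]
prefs = [[0.0, 0.9, 0.2, 0.4],
         [0.8, 0.0, 0.6, 0.1],
         [0.3, 0.7, 0.0, 0.9],
         [0.5, 0.2, 0.8, 0.0]]
groups = assign_reviewers(prefs, group_size=2)
```

A greedy pass like this can concentrate the most-preferred reviewers on a few authors, which is one reason the study's global optimization (and its "mutual admiration society" safeguard) matters.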
Comparative study of boron transport models in NRC Thermal-Hydraulic Code Trace
Energy Technology Data Exchange (ETDEWEB)
Olmo-Juan, Nicolás; Barrachina, Teresa; Miró, Rafael; Verdú, Gumersindo; Pereira, Claubia, E-mail: nioljua@iqn.upv.es, E-mail: tbarrachina@iqn.upv.es, E-mail: rmiro@iqn.upv.es, E-mail: gverdu@iqn.upv.es, E-mail: claubia@nuclear.ufmg.br [Institute for Industrial, Radiophysical and Environmental Safety (ISIRYM). Universitat Politècnica de València (Spain); Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear
2017-07-01
Recently, interest in the study of various types of transients involving changes in the boron concentration inside the reactor has led to increased interest in developing and studying new models and tools that allow a correct study of boron transport. Accordingly, a significant variety of boron transport models and spatial difference schemes are available in thermal-hydraulic codes such as TRACE. In this work we compare the results obtained using the different boron transport models implemented in the NRC thermal-hydraulic code TRACE. To do this, a set of models has been created using the different options and configurations that could influence boron transport. These models reproduce a simple event of filling or emptying the boron concentration in a long pipe. Moreover, with the aim of comparing the differences obtained when one-dimensional or three-dimensional components are chosen, many different cases have been modeled using only pipe components or a mix of pipe and vessel components. In addition, the influence of the void fraction on boron transport has been studied and compared under conditions close to a commercial BWR model. Finally, the different cases and boron transport models are compared among themselves and with the analytical solution provided by the Burgers equation. From this comparison, important conclusions are drawn that will be the basis for adequately modeling boron transport in TRACE. (author)
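The simplest spatial difference scheme for the pipe-filling event is first-order upwind advection, for which the CFL = 1 case reproduces the advancing front exactly; a sketch (this is an illustration of the scheme family, not TRACE's actual implementation):

```python
def advect_upwind(c, velocity, dx, dt, inlet, steps):
    """First-order upwind advection of a boron concentration profile c(x)
    along a pipe, with a fixed inlet concentration. A minimal sketch of one
    spatial difference scheme a system code might use."""
    cfl = velocity * dt / dx
    assert 0.0 < cfl <= 1.0, "explicit upwind is stable only for CFL <= 1"
    c = list(c)
    for _ in range(steps):
        new = [inlet]  # boundary node held at the inlet concentration
        for i in range(1, len(c)):
            new.append(c[i] - cfl * (c[i] - c[i - 1]))
        c = new
    return c

# Fill an initially clean 10 m pipe (dx = 0.1 m) at 1 m/s with 1000 ppm boron.
# With CFL = 1 the upwind update is an exact shift, so after 5 s the front
# sits exactly at x = 5 m, matching the analytical solution.
nx, dx, dt = 101, 0.1, 0.1
conc = advect_upwind([0.0] * nx, velocity=1.0, dx=dx, dt=dt,
                     inlet=1000.0, steps=50)
```

With CFL below 1 the same scheme smears the front through numerical diffusion, which is exactly the kind of scheme-dependent behavior the abstract's comparison against the analytical solution is designed to expose.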
Correction Model of BeiDou Code Systematic Multipath Errors and Its Impacts on Single-frequency PPP
Directory of Open Access Journals (Sweden)
WANG Jie
2017-07-01
There are systematic multipath errors in BeiDou code measurements, which range from several decimeters to more than 1 meter. They can be divided into two categories: systematic variations in IGSO/MEO code measurements and in GEO code measurements. In this contribution, a methodology for correcting BeiDou GEO code multipath is proposed based on a Kalman filter algorithm. The standard deviation of the GEO multipath (MP) series decreases by about 10%~16% after correction. Code measurements carry a large weight in single-frequency PPP; therefore, systematic code multipath errors have an impact on single-frequency PPP. Our analysis indicates that a bias of about 1 m will be caused by these systematic errors. We then evaluated the improvement in single-frequency PPP accuracy after code multipath correction. The systematic errors of GEO code measurements are corrected by applying our proposed Kalman filter method. The systematic errors of IGSO and MEO code measurements are corrected by applying the elevation-dependent model proposed by Wanninger and Beer. Ten days of observations from four MGEX (Multi-GNSS Experiment) stations are processed. The results indicate that single-frequency PPP accuracy can be improved remarkably by applying code multipath correction. The accuracy in the up direction can be improved by 65% after IGSO and MEO code multipath correction. By applying GEO code multipath correction, the accuracy in the up direction can be further improved by 15%.
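A scalar Kalman filter with a random-walk state illustrates the kind of filtering such a correction is based on; the noise variances and the test series are invented, and this is not the paper's actual model:

```python
def kalman_smooth(series, process_var=1e-4, meas_var=0.25):
    """Scalar Kalman filter with a random-walk state: a minimal sketch of
    estimating a slowly varying systematic code multipath bias from a noisy
    MP series. Variances are illustrative, in m^2."""
    x, p = 0.0, 1.0  # state estimate (bias, m) and its variance
    out = []
    for z in series:
        p += process_var                 # predict: random walk inflates variance
        k = p / (p + meas_var)           # Kalman gain
        x += k * (z - x)                 # update with measurement z
        p *= (1.0 - k)
        out.append(x)
    return out

# A constant 0.8 m systematic bias is recovered from repeated observations
est = kalman_smooth([0.8] * 200)
```

The small process variance encodes the assumption that the GEO multipath bias varies slowly, so the filter averages down measurement noise while still tracking gradual drift.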
Model-based coding of facial images based on facial muscle motion through isodensity maps
So, Ikken; Nakamura, Osamu; Minami, Toshi
1991-11-01
A model-based coding system has come under serious consideration for the next generation of image coding schemes, aimed at greater efficiency in TV telephone and TV conference systems. In this model-based coding system, the sender's model image is transmitted and stored at the receiving side before the start of the conversation. During the conversation, feature points are extracted from the facial image of the sender and are transmitted to the receiver. The facial expression of the sender is reconstructed from the feature points received and a wireframe model constructed at the receiving side. However, the conventional methods have the following problems: (1) Extreme changes of the gray level, such as in wrinkles caused by change of expression, cannot be reconstructed at the receiving side. (2) Extraction of stable feature points from facial images with irregular features such as spectacles or facial hair is very difficult. To cope with the first problem, a new algorithm based on isodensity lines which can represent detailed changes in expression by density correction has already been proposed and good results obtained. As for the second problem, we propose in this paper a new algorithm to reconstruct facial images by transmitting other feature points extracted from isodensity maps.
Modeling compositional dynamics based on GC and purine contents of protein-coding sequences
Zhang, Zhang
2010-11-08
Background: Understanding the compositional dynamics of genomes and their coding sequences is of great significance in gaining clues into molecular evolution, and a large number of publicly available genome sequences have allowed us to quantitatively predict deviations of empirical data from their theoretical counterparts. However, the quantification of theoretical compositional variations for a wide diversity of genomes remains a major challenge. Results: To model the compositional dynamics of protein-coding sequences, we propose two simple models that take into account both mutation and selection effects, which act differently at the three codon positions, and use both GC and purine contents as compositional parameters. The two models concern the theoretical composition of nucleotides, codons, and amino acids, with no prerequisite of homologous sequences or their alignments. We evaluated the two models by quantifying theoretical compositions of a large collection of protein-coding sequences (including 46 of Archaea, 686 of Bacteria, and 826 of Eukarya), yielding consistent theoretical compositions across all the collected sequences. Conclusions: We show that the compositions of nucleotides, codons, and amino acids are largely determined by both GC and purine contents and suggest that deviations of the observed from the expected compositions may reflect compositional signatures that arise from a complex interplay between mutation and selection via DNA replication and repair mechanisms. Reviewers: This article was reviewed by Zhaolei Zhang (nominated by Mark Gerstein), Guruprasad Ananda (nominated by Kateryna Makova), and Daniel Haft. 2010 Zhang and Yu; licensee BioMed Central Ltd.
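The two compositional parameters the models rest on, GC and purine content at each of the three codon positions, can be computed directly from a coding sequence; a minimal sketch (the function name is invented):

```python
def composition_by_codon_position(cds):
    """Compute GC content and purine (A+G) content at each of the three
    codon positions of a protein-coding sequence, the two compositional
    parameters used by the models. Returns (gc_by_pos, purine_by_pos),
    each a list of three fractions."""
    gc = [0, 0, 0]
    purine = [0, 0, 0]
    n_codons = len(cds) // 3  # ignore any trailing partial codon
    for i in range(n_codons * 3):
        pos = i % 3
        base = cds[i].upper()
        gc[pos] += base in "GC"
        purine[pos] += base in "AG"
    return ([g / n_codons for g in gc], [p / n_codons for p in purine])

# Example: the two codons ATG and GCA
gc_content, purine_content = composition_by_codon_position("ATGGCA")
```

Tracking the two quantities separately per position matters because, as the abstract notes, mutation and selection act differently at the three codon positions.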
Source-term model for the SYVAC3-NSURE performance assessment code
International Nuclear Information System (INIS)
Rowat, J.H.; Rattan, D.S.; Dolinar, G.M.
1996-11-01
Radionuclide contaminants in wastes emplaced in disposal facilities will not remain in those facilities indefinitely. Engineered barriers will eventually degrade, allowing radioactivity to escape from the vault. The radionuclide release rate from a low-level radioactive waste (LLRW) disposal facility, the source term, is a key component in the performance assessment of the disposal system. This report describes the source-term model that has been implemented in Ver. 1.03 of the SYVAC3-NSURE (Systems Variability Analysis Code generation 3-Near Surface Repository) code. NSURE is a performance assessment code that evaluates the impact of near-surface disposal of LLRW through the groundwater pathway. The source-term model described here was developed for the Intrusion Resistant Underground Structure (IRUS) disposal facility, which is a vault that is to be located in the unsaturated overburden at AECL's Chalk River Laboratories. The processes included in the vault model are roof and waste package performance, and diffusion, advection and sorption of radionuclides in the vault backfill. The model presented here was developed for the IRUS vault; however, it is applicable to other near-surface disposal facilities. (author). 40 refs., 6 figs
Directory of Open Access Journals (Sweden)
Marcus Völp
2012-11-01
Reliability in terms of functional properties from the safety-liveness spectrum is an indispensable requirement of low-level operating-system (OS) code. However, with ever more complex and thus less predictable hardware, quantitative and probabilistic guarantees become more and more important. Probabilistic model checking is one technique to automatically obtain these guarantees. First experiences with the automated quantitative analysis of low-level operating-system code confirm the expectation that the naive probabilistic model checking approach rapidly reaches its limits when increasing the number of processes. This paper reports on our work in progress to tackle the state explosion problem for low-level OS code caused by the exponential blow-up of the model size when the number of processes grows. We studied the symmetry reduction approach and carried out our experiments with a simple test-and-test-and-set lock case study as a representative example for a wide range of protocols with natural inter-process dependencies and long-run properties. We quickly see a state-space explosion for scenarios where inter-process dependencies are insignificant. However, once inter-process dependencies dominate the picture, models with a hundred or more processes can be constructed and analysed.
Groundwater flow analyses in preliminary site investigations. Modelling strategy and computer codes
International Nuclear Information System (INIS)
Taivassalo, V.; Koskinen, L.; Meling, K.
1994-02-01
The analyses of groundwater flow comprised a part of the preliminary site investigations which were carried out by Teollisuuden Voima Oy (TVO) for five areas in Finland during 1987-1992. The main objective of the flow analyses was to characterize groundwater flow at the sites. The flow simulations were also used to identify and study uncertainties and inadequacies which are inherent in the results of earlier modelling phases. The flow analyses were performed for flow conditions similar to the present conditions. The modelling approach was based on the concept of an equivalent continuum. Each fracture zone and the rock matrix among the zones was, however, considered separately as a hydrogeologic unit. The numerical calculations were carried out with a computer code package, FEFLOW, which is based upon the finite element method. With the code, two- and one-dimensional elements can also be used by embedding them in a three-dimensional element mesh. A set of new algorithms was developed and employed to create element meshes for FEFLOW. The most useful program in the preliminary site investigations was PAAWI, which adds two-dimensional elements for fracture zones to an existing three-dimensional element mesh. The new algorithms significantly reduced the time required to create spatial discretizations for complex geometries. Three element meshes were created for each site. The boundaries of the regional models coincide with those of the flow models. (55 refs., 40 figs., 1 tab.)
Comparison of EAS simulation results using CORSIKA code for different high-energy interaction models
International Nuclear Information System (INIS)
Thakuria, Chabin; Boruah, K.
2011-01-01
Interpretation of EAS characteristics for primary energies above those attained by man-made accelerators is dominated by model predictions and detailed simulations. Differences and uncertainties in these hadronic interaction models play a crucial role in the prediction of EAS characteristics. This work presents a comparative study of the muon number to primary energy ratio and the depth of shower maximum for the hadronic interaction models available in the Monte Carlo simulation code CORSIKA, viz. QGSJET01, QGSJET-II, DPMJET, EPOS, SYBILL, and VENUS, for two primaries (proton and iron) in the energy range 10^14 eV to 10^17 eV.
EBT time-dependent point model code: description and user's guide
International Nuclear Information System (INIS)
Roberts, J.F.; Uckan, N.A.
1977-07-01
A D-T time-dependent point model has been developed to assess the energy balance in an EBT reactor plasma. Flexibility is retained in the model to permit more recent data to be incorporated as they become available from the theoretical and experimental studies. This report includes the physics models involved, the program logic, and a description of the variables and routines used. All the files necessary for execution are listed, and the code, including a post-execution plotting routine, is discussed
Development of SSUBPIC code for modeling the neutral gas depletion effect in helicon discharges
Kollasch, Jeffrey; Sovenic, Carl; Schmitz, Oliver
2017-10-01
The SSUBPIC (steady-state unstructured-boundary particle-in-cell) code is being developed to model helicon plasma devices. The envisioned modeling framework incorporates (1) a kinetic neutral particle model, (2) a kinetic ion model, (3) a fluid electron model, and (4) an RF power deposition model. The models are loosely coupled and iterated until convergence to steady state. Of the four required solvers, the kinetic ion and neutral particle simulations can now be done within the SSUBPIC code. Recent SSUBPIC modifications include implementation and testing of a Coulomb collision model (Lemons et al., JCP, 228(5), pp. 1391-1403) allowing efficient coupling of kinetically treated ions to fluid electrons, and implementation of a neutral particle tracking mode with charge-exchange and electron impact ionization physics. These new simulation capabilities are demonstrated working independently and coupled to "dummy" profiles for RF power deposition to converge on steady-state plasma and neutral profiles. The geometry and conditions considered are similar to those of the MARIA experiment at UW-Madison. Initial results qualitatively show the expected neutral gas depletion effect, in which neutrals in the plasma core are not replenished at a sufficient rate to sustain a higher plasma density. This work is funded by the NSF CAREER award PHY-1455210 and NSF Grant PHY-1206421.
MINI-TRAC code: a driver program for assessment of constitutive equations of two-fluid model
International Nuclear Information System (INIS)
Akimoto, Hajime; Abe, Yutaka; Ohnuki, Akira; Murao, Yoshio
1991-05-01
The MINI-TRAC code, a driver program for the assessment of constitutive equations of the two-fluid model, has been developed to assess and improve such equations widely and efficiently. The MINI-TRAC code uses one-dimensional conservation equations for mass, momentum, and energy based on the two-fluid model. The code can run on a personal computer because it requires less than 640 KB of core memory. The MINI-TRAC code includes the constitutive equations of the TRAC-PF1/MOD1, TRAC-BF1, and RELAP5/MOD2 codes, and is modularized so that constitutive equations can easily be exchanged for test calculations. This report is a manual for the MINI-TRAC code. The basic equations, numerics, and constitutive equations included in the MINI-TRAC code are described. The user's information, such as the input description, is presented, and the program structure and the contents of the main variables are also outlined. (author)
Development and applications of the interim direct heating model for the CONTAIN computer code
International Nuclear Information System (INIS)
Bergeron, K.D.; Carroll, D.E.; Tills, J.L.; Washington, K.E.; Williams, D.C.
1987-01-01
A model for Direct Containment Heating has been developed for the CONTAIN code which takes account of the competing processes of melt ejection, steam blowdown, heat transfer among debris, gas and walls, chemical reactions in the debris droplets, intercell gas and debris transport, and de-entrainment of the suspended debris. Calculations of accident sequences with the model have shown that a qualitatively different understanding of the Direct Heating phenomenon emerges when more detailed multi-cell calculations are performed within a system-level containment code, compared to the relatively simplistic single cell, adiabatic calculations which have been used in the past. Results are presented for the TMLB sequence at the Surry and Sequoyah plants, and a sensitivity study for some of the most important uncertain parameters is described
Blackout sequence modeling for Atucha-I with MARCH3 code
International Nuclear Information System (INIS)
Baron, J.; Bastianelli, B.
1997-01-01
The modeling of a blackout sequence in the Atucha I nuclear power plant is presented in this paper as a preliminary phase of a Level II probabilistic safety assessment. The sequence is analyzed with the MARCH3 code from the STCP (Source Term Code Package), based on a specific model developed for Atucha that takes into account its peculiarities. The analysis covers all severe accident phases: the initial transient (loss of heat sink), loss of coolant through the safety valves, core uncovery, heatup, metal-water reaction, melting and relocation, heatup and failure of the pressure vessel, core-concrete interaction in the reactor cavity, and heatup and failure of the (multi-compartment) containment building due to quasi-static overpressurization. The results obtained make it possible to visualize the time sequence of these events and provide a basis for source term studies. (author)
Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret
2003-12-01
A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
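The idea of dynamic-programming event localization can be illustrated generically. The sketch below (our own toy formulation, not the paper's SBEL/MELP specifics) finds the globally optimal split of a 1-D feature track into k contiguous segments by minimizing total within-segment squared deviation; all names and the sample data are invented for illustration.

```python
import numpy as np

def optimal_segmentation(frames, k):
    """Split a 1-D feature track into k contiguous segments minimizing
    total within-segment squared deviation, via dynamic programming."""
    n = len(frames)
    # Prefix sums let sse(i, j) be computed in O(1).
    prefix = np.concatenate([[0.0], np.cumsum(frames)])
    prefix2 = np.concatenate([[0.0], np.cumsum(np.square(frames))])

    def sse(i, j):
        # Sum of squared deviations from the mean of frames[i:j].
        m = j - i
        s = prefix[j] - prefix[i]
        return (prefix2[j] - prefix2[i]) - s * s / m

    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for seg in range(1, k + 1):
        for j in range(seg, n + 1):
            for i in range(seg - 1, j):
                c = dp[seg - 1][i] + sse(i, j)
                if c < dp[seg][j]:
                    dp[seg][j] = c
                    back[seg][j] = i

    # Recover the optimal segment boundaries by backtracking.
    bounds = [n]
    j = n
    for seg in range(k, 0, -1):
        j = back[seg][j]
        bounds.append(j)
    return bounds[::-1], dp[k][n]

track = np.array([0.0, 0.1, 0.0, 5.0, 5.1, 4.9, 9.0, 9.2])
bounds, cost = optimal_segmentation(track, 3)
print(bounds)  # [0, 3, 6, 8]
```

Unlike a greedy spectral-stability criterion, the DP guarantees the minimum-cost segmentation for the chosen cost function, which is the sense of "optimality" the abstract contrasts with the earlier SBEL approach.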
Analytical model for bottom reflooding heat transfer in light water reactors (the UCFLOOD code)
International Nuclear Information System (INIS)
Arrieta, L.; Yadigaroglu, G.
1978-08-01
The UCFLOOD code is based on mechanistic models developed to analyze bottom reflooding of a single flow channel and its associated fuel rod, or a tubular test section with internal flow. From the hydrodynamic point of view the flow channel is divided into a single-phase liquid region, a continuous-liquid two-phase region, and a dispersed-liquid region. The void fraction is obtained from drift flux models. For heat transfer calculations, the channel is divided into regions of single-phase-liquid heat transfer, nucleate boiling and forced-convection vaporization, inverted-annular film boiling, and dispersed-flow film boiling. The heat transfer coefficients are functions of the local flow conditions. Good agreement of calculated and experimental results has been obtained. A code user's manual is appended
The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code
Energy Technology Data Exchange (ETDEWEB)
Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.
1999-07-01
capability of performing iterated-source (criticality), multiplied-fixed-source, and fixed-source calculations. MCV uses a highly detailed continuous-energy (as opposed to multigroup) representation of neutron histories and cross section data. The spatial modeling is fully three-dimensional (3-D), and any geometrical region that can be described by quadric surfaces may be represented. The primary results are region-wise reaction rates, neutron production rates, slowing-down densities, fluxes, leakages, and, when appropriate, the eigenvalue or multiplication factor. Region-wise nuclidic reaction rates are also computed, which may then be used by other modules in the system to determine time-dependent nuclide inventories so that RACER can perform depletion calculations. Furthermore, derived quantities such as ratios and sums of primary quantities and/or other derived quantities may also be calculated. MCV performs statistical analyses on output quantities, computing estimates of the 95% confidence intervals as well as indicators of the reliability of these estimates. The remainder of this chapter provides an overview of the MCV algorithm. The following three chapters describe the MCV mathematical, physical, and statistical treatments in more detail. Specifically, Chapter 2 discusses topics related to tracking the histories, including geometry modeling, how histories are moved through the geometry, and variance reduction techniques related to the tracking process. Chapter 3 describes the nuclear data and physical models employed by MCV. Chapter 4 discusses the tallies, statistical analyses, and edits. Chapter 5 provides some guidance as to how to run the code, and Chapter 6 is a list of the code input options.
PREMOR: a point reactor exposure model computer code for survey analysis of power plant performance
International Nuclear Information System (INIS)
Vondy, D.R.
1979-10-01
The PREMOR computer code was written to exploit a simple, two-group point nuclear reactor power plant model for survey analysis. Up to thirteen actinides, fourteen fission products, and one lumped absorber nuclide density are followed over a reactor history. Successive feed batches are accounted for with provision for from one to twenty batches resident. The effect of exposure of each of the batches to the same neutron flux is determined
PREMOR: a point reactor exposure model computer code for survey analysis of power plant performance
Energy Technology Data Exchange (ETDEWEB)
Vondy, D.R.
1979-10-01
The PREMOR computer code was written to exploit a simple, two-group point nuclear reactor power plant model for survey analysis. Up to thirteen actinides, fourteen fission products, and one lumped absorber nuclide density are followed over a reactor history. Successive feed batches are accounted for with provision for from one to twenty batches resident. The effect of exposure of each of the batches to the same neutron flux is determined.
Verification of a simulation model with the COBRA-IIIP code by comparison with experimental results
International Nuclear Information System (INIS)
Silva Galetti, M.R. da; Pontedeiro, A.C.; Oliveira Barroso, A.C. de
1985-01-01
An evaluation of the COBRA-IIIP/MIT code (for thermal-hydraulic analysis by subchannels) is presented, comparing its results with experimental data obtained in steady-state and transient regimes. A study was carried out to calculate the spatial and temporal critical heat flux. A sensitivity study of the simulation model with respect to turbulent mixing and the number of axial intervals is also presented. (M.C.K.)
A finite-temperature Hartree-Fock code for shell-model Hamiltonians
Bertsch, G. F.; Mehlhaff, J. M
2016-01-01
The codes HFgradZ.py and HFgradT.py find axially symmetric minima of a Hartree-Fock energy functional for a Hamiltonian supplied in a shell-model basis. The functional to be minimized is the Hartree-Fock energy for zero-temperature properties or the Hartree-Fock grand potential for finite-temperature properties. Various constraints may be imposed on the minima.
Directory of Open Access Journals (Sweden)
Michael J. Pelosi
2014-12-01
Development teams and programmers must retain critical information about their work during work intervals and gaps in order to improve future performance when work resumes. Despite time lapses, project managers want to maximize coding efficiency and effectiveness. By developing a mathematically justified, practically useful, and computationally tractable quantitative and cognitive model of learning and memory retention, this study establishes calculations designed to maximize scheduling payoff and optimize developer efficiency and effectiveness.
Dysregulation of the long non-coding RNA transcriptome in a Rett syndrome mouse model.
Petazzi, Paolo; Sandoval, Juan; Szczesna, Karolina; Jorge, Olga C; Roa, Laura; Sayols, Sergi; Gomez, Antonio; Huertas, Dori; Esteller, Manel
2013-07-01
Mecp2 is a transcriptional repressor protein that is mutated in Rett syndrome, a neurodevelopmental disorder that is the second most common cause of mental retardation in women. It has been shown that the loss of the Mecp2 protein in Rett syndrome cells alters the transcriptional silencing of coding genes and microRNAs. Herein, we have studied the impact of Mecp2 impairment in a Rett syndrome mouse model on the global transcriptional patterns of long non-coding RNAs (lncRNAs). Using a microarray platform that assesses 41,232 unique lncRNA transcripts, we have identified the aberrant lncRNA transcriptome that is present in the brain of Rett syndrome mice. The study of the most relevant lncRNAs altered in the assay highlighted the upregulation of the AK081227 and AK087060 transcripts in Mecp2-null mice brains. Chromatin immunoprecipitation demonstrated the Mecp2 occupancy in the 5'-end genomic loci of the described lncRNAs and its absence in Rett syndrome mice. Most importantly, we were able to show that the overexpression of AK081227 mediated by the Mecp2 loss was associated with the downregulation of its host coding protein gene, the gamma-aminobutyric acid receptor subunit Rho 2 (Gabrr2). Overall, our findings indicate that the transcriptional dysregulation of lncRNAs upon Mecp2 loss contributes to the neurological phenotype of Rett syndrome and highlights the complex interaction between ncRNAs and coding-RNAs.
Dysregulation of the long non-coding RNA transcriptome in a Rett syndrome mouse model
Petazzi, Paolo; Sandoval, Juan; Szczesna, Karolina; Jorge, Olga C.; Roa, Laura; Sayols, Sergi; Gomez, Antonio; Huertas, Dori; Esteller, Manel
2013-01-01
Mecp2 is a transcriptional repressor protein that is mutated in Rett syndrome, a neurodevelopmental disorder that is the second most common cause of mental retardation in women. It has been shown that the loss of the Mecp2 protein in Rett syndrome cells alters the transcriptional silencing of coding genes and microRNAs. Herein, we have studied the impact of Mecp2 impairment in a Rett syndrome mouse model on the global transcriptional patterns of long non-coding RNAs (lncRNAs). Using a microarray platform that assesses 41,232 unique lncRNA transcripts, we have identified the aberrant lncRNA transcriptome that is present in the brain of Rett syndrome mice. The study of the most relevant lncRNAs altered in the assay highlighted the upregulation of the AK081227 and AK087060 transcripts in Mecp2-null mice brains. Chromatin immunoprecipitation demonstrated the Mecp2 occupancy in the 5′-end genomic loci of the described lncRNAs and its absence in Rett syndrome mice. Most importantly, we were able to show that the overexpression of AK081227 mediated by the Mecp2 loss was associated with the downregulation of its host coding protein gene, the gamma-aminobutyric acid receptor subunit Rho 2 (Gabrr2). Overall, our findings indicate that the transcriptional dysregulation of lncRNAs upon Mecp2 loss contributes to the neurological phenotype of Rett syndrome and highlights the complex interaction between ncRNAs and coding-RNAs. PMID:23611944
Development of computer code models for analysis of subassembly voiding in the LMFBR
International Nuclear Information System (INIS)
Hinkle, W.
1979-12-01
The research program discussed in this report was started in FY1979 under the combined sponsorship of the US Department of Energy (DOE), General Electric (GE) and Hanford Engineering Development Laboratory (HEDL). The objective of the program is to develop multi-dimensional computer codes which can be used for the analysis of subassembly voiding incoherence under postulated accident conditions in the LMFBR. Two codes are being developed in parallel. The first will use a two fluid (6 equation) model which is more difficult to develop but has the potential for providing a code with the utmost in flexibility and physical consistency for use in the long term. The other will use a mixture (< 6 equation) model which is less general but may be more amenable to interpretation and use of experimental data and therefore, easier to develop for use in the near term. To assure that the models developed are not design dependent, geometries and transient conditions typical of both foreign and US designs are being considered
Comparison of a Coupled Near and Far Wake Model With a Free Wake Vortex Code
DEFF Research Database (Denmark)
Pirrung, Georg; Riziotis, Vasilis; Aagaard Madsen, Helge
2016-01-01
to be updated during the computation. Further, the effect of simplifying the exponential function approximation of the near wake model to increase the computation speed is investigated in this work. A modification of the dynamic inflow weighting factors of the far wake model is presented that ensures good...... computations performed using a free wake panel code. The focus of the description of the aerodynamics model is on the numerical stability, the computation speed and the accuracy of 5 unsteady simulations. To stabilize the near wake model, it has to be iterated to convergence, using a relaxation factor that has...... and a BEM model is centered around the NREL 5 MW reference turbine. The response to pitch steps at different pitching speeds is compared. By means of prescribed vibration cases, the effect of the aerodynamic model on the predictions of the aerodynamic work is investigated. The validation shows that a BEM...
Energy Technology Data Exchange (ETDEWEB)
Park, Ju Yeop; In, Wang Kee; Chun, Tae Hyun; Oh, Dong Seok [Korea Atomic Energy Research Institute, Taejeon (Korea)
2000-02-01
An orthogonal two-dimensional numerical code has been developed. The present code contains nine widely used turbulence models: a standard k-{epsilon} model and eight low-Reynolds-number models. It also contains six numerical schemes: five low-order schemes and one high-order scheme, QUICK. To verify the present numerical code, pipe flow, channel flow, and expansion pipe flow are solved with various combinations of turbulence models and numerical schemes, and the calculated outputs are compared to experimental data. Furthermore, the discretization error originating from the use of the standard k-{epsilon} turbulence model with a wall function is much diminished by introducing a new grid system in place of the conventional one. 23 refs., 58 figs., 6 tabs. (Author)
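For reference, the QUICK scheme mentioned above interpolates a cell-face value quadratically from two upstream nodes and one downstream node; on a uniform grid the standard weights are 6/8, 3/8, and -1/8. The following is a generic sketch of that interpolation, not this code's actual implementation:

```python
def quick_face_value(phi_uu, phi_u, phi_d):
    """QUICK face value on a uniform grid, flow from the 'u' side.

    phi_uu: far-upstream node, phi_u: upstream node, phi_d: downstream node.
    The quadratic upstream-weighted interpolant is exact for quadratics.
    """
    return 0.75 * phi_u + 0.375 * phi_d - 0.125 * phi_uu

# For phi = x**2 sampled at x = -1, 0, 1, the face value at x = 0.5
# is recovered exactly:
print(quick_face_value(1.0, 0.0, 1.0))  # 0.25
```

The upstream bias is what distinguishes QUICK from central differencing and gives it better behavior in convection-dominated flows, at the cost of a wider stencil.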
International Nuclear Information System (INIS)
Bezrukov, Yu.A.; Shchekoldin, V.I.
2002-01-01
On the basis of the experimental data bank of OKB Gidropress (Special Design Bureau), the design models and correlations of the KORSAR code for heat transfer crisis and post-crisis heat transfer were verified as applied to WWER reactor normal and emergency conditions. The V1.006.000 base version of the KORSAR code is shown to describe the conducted experiments adequately, deviating only insignificantly towards the conservative side, so it may be considered one of the codes ensuring a more precise estimation.
A wide-range model of two-group cross sections in the dynamics code HEXTRAN
International Nuclear Information System (INIS)
Kaloinen, E.; Peltonen, J.
2002-01-01
In dynamic analyses the thermal-hydraulic conditions within the reactor core may vary widely, which places special requirements on the modeling of cross sections. The standard model in the dynamics code HEXTRAN is the same as in the static design code HEXBU-3D/MOD5. It is based on a linear and second-order fitting of the two-group cross sections in fuel and moderator temperature, moderator density, and boron density. A new, wide-range model of cross sections developed at Fortum Nuclear Services for HEXBU-3D/MOD6 has been included as an option in HEXTRAN. In this model the nodal cross sections are constructed from seven state variables in a polynomial of more than 40 terms. The coefficients of the polynomial are created by a least-squares fit to the results of a large number of fuel assembly calculations. Depending on the choice of state variables for the spectrum calculations, the new cross-section model can cover local conditions from cold zero power to boiling at full power. The 5th dynamic benchmark problem of AER is analyzed with the new option and the results are compared to calculations with the standard cross-section model in HEXTRAN. (Authors)
Sodium spray and jet fire model development within the CONTAIN-LMR code
International Nuclear Information System (INIS)
Scholtyssek, W.
1993-01-01
An assessment was made of the sodium spray fire model implemented in the CONTAIN code. The original droplet burn model, which was based on the NACOM code, was improved in several aspects, especially concerning evaluation of the droplet burning rate, reaction chemistry and heat balance, spray geometry and droplet motion, and consistency with CONTAIN standards of gas property evaluation. An additional droplet burning model based on a proposal by Krolikowski was made available to include the effect of the chemical equilibrium conditions at the flame temperature. The models were validated against single-droplet burn experiments as well as spray and jet fire experiments. Reasonable agreement was found between the two burn models and experimental data. When the gas temperature in the burning compartment reaches high values, the Krolikowski model seems to be preferable. Critical parameters for spray fire evaluation were found to be the spray characterization, especially the droplet size, which largely determines the burning efficiency, and the heat transfer conditions at the interface between the atmosphere and structures, which control the thermal-hydraulic behavior in the burn compartment.
Dispersed Two-Phase Flow Modelling for Nuclear Safety in the NEPTUNE_CFD Code
Directory of Open Access Journals (Sweden)
Stephane Mimouni
2017-01-01
The objective of this paper is to give an overview of the capabilities of the Eulerian bifluid approach to meet the needs of nuclear safety studies regarding hydrogen risk, boiling crisis, and pipe and valve maintenance. The Eulerian bifluid approach has been implemented in a CFD code named NEPTUNE_CFD, a three-dimensional multifluid code developed especially for nuclear reactor applications by EDF, CEA, AREVA, and IRSN. The first set of models is dedicated to wall vapor condensation and spray modelling. Moreover, boiling crisis remains a major limiting phenomenon for the analysis of operation and safety of both nuclear reactors and conventional thermal power systems; the paper presents the generalization of the previous DNB model and its validation against 1500 validation cases. The modelling and numerical simulation of cavitation phenomena are of relevant interest in many industrial applications, especially regarding pipe and valve maintenance, where cavitating flows are responsible for harmful acoustic effects. In the last section, models are validated against experimental data of pressure profiles and void fraction visualisations obtained downstream of an orifice with the EPOCA facility (EDF R&D). Finally, a multifield approach is presented as an efficient tool to run all models together.
Directory of Open Access Journals (Sweden)
Fei Guo
2016-06-01
The BeiDou satellite-induced code biases have been confirmed to be orbit type-, frequency-, and elevation-dependent. Such code-phase divergences (code bias variations) severely affect absolute precise applications which use code measurements. To reduce their adverse effects, an improved correction model is proposed in this paper. Different from the model proposed by Wanninger and Beer (2015), more datasets (a time span of almost two years) were used to produce the correction values. More importantly, the stochastic information, i.e., the precision indexes, is given together with the correction values in the improved model, whereas the traditional model provides only correction values with no precision indexes at all. With the improved correction model, users may have a better understanding of their corrections, especially the uncertainty of the corrections. Thus, it is helpful for refining the stochastic model of code observations. Validation tests in precise point positioning (PPP) reveal that a proper stochastic model is critical. The actual precision of the corrected code observations can be reflected in a more objective manner if the stochastic model of the corrections is taken into account. As a consequence, PPP solutions with the improved model outperform the traditional one in terms of positioning accuracy as well as convergence speed. In addition, the Melbourne-Wübbena (MW) combination, which serves for ambiguity fixing, was verified as well. The uncorrected MW values show strong systematic variations with an amplitude of half a wide-lane cycle, which prevents precise ambiguity determination and successful ambiguity resolution. After application of the code bias correction models, the systematic variations can be greatly removed and the resulting wide-lane ambiguities are more likely to be fixed, whether the traditional or the improved model is used.
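A minimal sketch of applying such an elevation-dependent code bias correction together with its precision index follows. All node values below are made up for illustration; they are not the published corrections, and the function names are our own.

```python
import numpy as np

# Hypothetical correction table for one satellite group / frequency:
# elevation nodes (deg), code bias corrections (m), and their 1-sigma
# precision indexes (m). Values are illustrative only.
elev_nodes = np.arange(0.0, 91.0, 10.0)
corr_m = np.array([-0.6, -0.5, -0.35, -0.2, -0.05, 0.1, 0.2, 0.3, 0.35, 0.4])
sigma_m = np.array([0.10, 0.08, 0.06, 0.05, 0.05, 0.04, 0.04, 0.04, 0.05, 0.06])

def correct_code_obs(p_obs, elev_deg, sigma_code=0.3):
    """Apply the elevation-interpolated bias and propagate its precision."""
    dcorr = np.interp(elev_deg, elev_nodes, corr_m)
    dsig = np.interp(elev_deg, elev_nodes, sigma_m)
    p_corrected = p_obs - dcorr
    # Inflate the observation sigma with the correction uncertainty,
    # refining the stochastic model of the code observation.
    sigma_total = np.hypot(sigma_code, dsig)
    return p_corrected, sigma_total
```

The second return value is the point of the improved model: downstream PPP filtering can down-weight observations whose corrections are themselves uncertain, instead of treating all corrected observations as equally precise.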
A new nuclide transport model in soil in the GENII-LIN health physics code
Teodori, F.
2017-11-01
The nuclide soil transfer model originally included in the GENII-LIN software system was intended for residual contamination from long-term activities and from waste form degradation. Short-life nuclides were assumed to be absent or in equilibrium with their long-life parents. Here we present an enhanced soil transport model in which short-life nuclide contributions are correctly accounted for. This improvement extends the code's capability to handle accidental releases of contaminants to soil, by evaluating exposure from the very beginning of the contamination event, before radioactive decay chain equilibrium is reached.
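The short-life-nuclide effect described above follows directly from the Bateman equations. A two-member sketch (hypothetical half-lives, not GENII-LIN's nuclide library) shows the daughter far below equilibrium shortly after a release and near secular equilibrium later:

```python
import math

def bateman_two(n1_0, lam1, lam2, t):
    """Parent/daughter inventories for a two-member decay chain.

    N1' = -lam1*N1 ; N2' = lam1*N1 - lam2*N2, with N2(0) = 0
    (classical Bateman solution, lam1 != lam2).
    """
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return n1, n2

# Hypothetical long-lived parent feeding a short-lived daughter (times in years).
lam1 = math.log(2) / 30.0   # parent half-life 30 y
lam2 = math.log(2) / 0.1    # daughter half-life ~37 d

# Shortly after the release the daughter activity is far below equilibrium...
_, n2_early = bateman_two(1.0, lam1, lam2, 0.01)
# ...while after several daughter half-lives secular equilibrium
# (lam2*N2 approaching lam1*N1) is reached.
n1_late, n2_late = bateman_two(1.0, lam1, lam2, 1.0)
print(lam2 * n2_late / (lam1 * n1_late))  # close to 1
```

An equilibrium-only model would overestimate the daughter's dose contribution during the early phase of an incidental release, which is precisely the regime the enhanced model addresses.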
DISCRETE DYNAMIC MODEL OF BEVEL GEAR – VERIFICATION THE PROGRAM SOURCE CODE FOR NUMERICAL SIMULATION
Directory of Open Access Journals (Sweden)
Krzysztof TWARDOCH
2014-06-01
The article presents a new physical and mathematical model of a bevel gear for studying the influence of design parameters and operating factors on the dynamic state of the gear transmission. It discusses the process of verifying the correct operation of the authors' calculation program used to determine the solutions of the dynamic model of the bevel gear, and presents the block diagram of the computing algorithm that was used to create the program for the numerical simulation. The program source code is written in MATLAB, an interactive environment for scientific and engineering calculations.
The perception of moving stimuli: a model of spatiotemporal coding in human vision.
Harris, M G
1986-01-01
An empirically plausible extension of the Marr and Ullman (1981) [Proc. R. Soc. Lond. B 211, 151-180 (1981)] model for coding movement direction uses the magnitude of spatial and temporal response derivatives to encode the speed as well as the direction of movement. The performance of the model is illustrated for drifting edges of different spatial scales and the properties of the components which separately compute the two derivatives are respectively compared to individual channels of the conventional pattern and flicker systems.
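The derivative-ratio idea can be sketched numerically: for a drifting pattern I(x, t) = f(x - vt), the ratio of temporal to spatial response derivatives recovers the speed v. This is a generic gradient-model sketch under our own toy stimulus, not the paper's specific channel model.

```python
import numpy as np

# A drifting luminance edge sampled on a 1-D "retina": I(x, t) = f(x - v*t).
v_true = 2.0
x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]
dt = 1e-3

def edge(x):
    return np.tanh(x)   # smooth edge profile

I_t0 = edge(x)
I_t1 = edge(x - v_true * dt)

# Spatial and temporal response derivatives (finite differences).
Ix = np.gradient(I_t0, dx)
It = (I_t1 - I_t0) / dt

# Speed estimate v = -It/Ix, evaluated only where the spatial
# derivative is reliable (near the edge).
mask = np.abs(Ix) > 0.1
v_est = np.mean(-It[mask] / Ix[mask])
print(v_est)  # close to 2.0
```

Since It is approximately -v*Ix for a rigidly translating pattern, the magnitude of the two derivatives encodes speed while their relative sign encodes direction, which is the extension of the Marr-Ullman direction scheme the abstract describes.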
Auxiliary plasma heating and fueling models for use in particle simulation codes
International Nuclear Information System (INIS)
Procassini, R.J.; Cohen, B.I.
1989-01-01
Computational models of a radiofrequency (RF) heating system and a neutral-beam injector are presented. These physics packages, when incorporated into a particle simulation code, allow one to simulate the auxiliary heating and fueling of fusion plasmas. The RF-heating package is based upon a quasilinear diffusion equation which describes the slow evolution of the heated particle distribution. The neutral-beam injector package models the charge exchange and impact ionization processes which transfer energy and particles from the beam to the background plasma. Particle simulations of an RF-heated and a neutral-beam-heated simple-mirror plasma are presented. 8 refs., 5 figs.
Coding Model and Mapping Method of Spherical Diamond Discrete Grids Based on Icosahedron
Directory of Open Access Journals (Sweden)
LIN Bingxian
2016-12-01
Discrete Global Grids (DGGs) provide a fundamental environment for the organization and management of global-scale spatial data. A DGG's encoding scheme, which avoids coordinate transformation between different coordinate reference frames and reduces the complexity of spatial analysis, contributes greatly to the multi-scale expression and unified modeling of spatial data. Compared with other kinds of DGGs, the Diamond Discrete Global Grid (DDGG) based on the icosahedron benefits the integration and expression of spherical spatial data thanks to its better geometric properties. However, its structure is more complicated than that of the DDGG based on the octahedron, because the edges of its initial diamonds do not fit the meridians and parallels. This poses new challenges for the construction of a hierarchical encoding system and of the mapping relationship with geographic coordinates. To address this issue, this paper presents a DDGG coding system based on the Hilbert curve and designs conversion methods between codes and geographical coordinates. The results indicate that this encoding system expresses scale and location information implicitly, carries the similarity between the DDGG and a planar grid into practice, and balances the efficiency and accuracy of conversion between codes and geographical coordinates, in order to support the modeling, integrated management, and spatial analysis of global massive spatial data.
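The planar backbone of such a Hilbert-curve encoding is the classical (x, y) to index mapping sketched below. This is the standard textbook routine for a 2^k-by-2^k grid; the spherical DDGG additionally carries an initial-diamond index and the pole treatments discussed above.

```python
def xy2d(n, x, y):
    """Map planar cell (x, y) on an n-by-n grid (n a power of 2) to its
    1-D index along the Hilbert curve."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/flip the quadrant so the recursion always sees the
        # canonical curve orientation.
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Cells that are consecutive along the curve are spatially adjacent,
# which is what makes the code locality-preserving:
print([xy2d(2, x, y) for x, y in [(0, 0), (0, 1), (1, 1), (1, 0)]])  # [0, 1, 2, 3]
```

The locality property is the reason Hilbert indices implicitly express scale and location: truncating the index to fewer levels yields the index of the enclosing coarser cell.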
Multiresolution Source/Filter Model for Low Bitrate Coding of Spot Microphone Signals
Directory of Open Access Journals (Sweden)
Panagiotis Tsakalides
2008-04-01
A multiresolution source/filter model for coding of audio source signals (spot recordings) is proposed. Spot recordings are a subset of the multimicrophone recordings of a music performance, before the mixing process is applied for producing the final multichannel audio mix. The technique enables low bitrate coding of spot signals with good audio quality (above 3.0 perceptual grade compared to the original). It is demonstrated that this particular model separates the various microphone recordings of a multimicrophone recording into a part that mainly characterizes a specific microphone signal and a part that is common to all signals of the same recording (and can thus be omitted during transmission). Our interest in low bitrate coding of spot recordings is related to applications such as remote mixing and real-time collaboration of musicians who are geographically distributed. Using the proposed approach, it is shown that it is possible to encode a multimicrophone audio recording using a single audio channel only, with additional information for each spot microphone signal in the order of 5 kbps, for good-quality resynthesis. This is verified by employing both objective and subjective measures of performance.
Multiresolution Source/Filter Model for Low Bitrate Coding of Spot Microphone Signals
Directory of Open Access Journals (Sweden)
Mouchtaris Athanasios
2008-01-01
Full Text Available A multiresolution source/filter model for coding of audio source signals (spot recordings) is proposed. Spot recordings are a subset of the multimicrophone recordings of a music performance, captured before the mixing process is applied to produce the final multichannel audio mix. The technique enables low bitrate coding of spot signals with good audio quality (above 3.0 perceptual grade compared to the original). It is demonstrated that this model separates the various microphone recordings of a multimicrophone recording into a part that mainly characterizes a specific microphone signal and a part that is common to all signals of the same recording (and can thus be omitted during transmission). Our interest in low bitrate coding of spot recordings is related to applications such as remote mixing and real-time collaboration of musicians who are geographically distributed. Using the proposed approach, it is shown that a multimicrophone audio recording can be encoded using a single audio channel only, with additional information for each spot microphone signal on the order of 5 kbps, for good-quality resynthesis. This is verified by employing both objective and subjective measures of performance.
An Efficient Code-Based Threshold Ring Signature Scheme with a Leader-Participant Model
Directory of Open Access Journals (Sweden)
Guomin Zhou
2017-01-01
Full Text Available Digital signature schemes with additional properties have broad applications, such as protecting the identity of signers by allowing a signer to anonymously sign a message within a group of signers (also known as a ring). While the underlying number-theoretic problems remain secure at the time of this research, the situation could change with advances in quantum computing. There is therefore a pressing need to design PKC schemes that are secure against quantum attacks. In this paper, we propose a novel code-based threshold ring signature scheme with a leader-participant model. A leader is appointed who chooses shared parameters for the other signers to participate in the signing process. This leader-participant model enhances performance because every participant, including the leader, can execute the decoding algorithm (as part of the signing process) upon receiving the shared parameters from the leader. The time complexity of our scheme is close to that of Courtois et al.'s (2001) scheme, which is often used as a basis for constructing other types of code-based signature schemes. Moreover, as a threshold ring signature scheme, ours is as efficient as a normal code-based ring signature.
Evolutionary modeling and prediction of non-coding RNAs in Drosophila.
Directory of Open Access Journals (Sweden)
Robert K Bradley
2009-08-01
Full Text Available We performed benchmarks of phylogenetic grammar-based ncRNA gene prediction, experimenting with eight different models of structural evolution and two different programs for genome alignment. We evaluated our models using alignments of twelve Drosophila genomes. We find that ncRNA prediction performance can vary greatly between different gene predictors and subfamilies of ncRNA genes. Our estimates of false positive rates are based on simulations which preserve local islands of conservation; using these simulations, we predict a higher rate of false positives than previous computational ncRNA screens have reported. Using one of the tested prediction grammars, we provide an updated set of ncRNA predictions for D. melanogaster and compare them to previously published predictions and experimental data. Many of our predictions show correlations with protein-coding genes: we found significant depletion of intergenic predictions near the 3' end of coding regions and, furthermore, depletion of predictions in the first intron of protein-coding genes. Some of our predictions are co-located with larger putative unannotated genes: for example, 17 of our predictions showing homology to the RFAM family snoR28 appear in a tandem array on the X chromosome; the 4.5 kbp spanned by the predicted tandem array is contained within a FlyBase-annotated cDNA.
International Nuclear Information System (INIS)
Russell, S.B.; Kempe, T.F.; Donnelly, K.J.
1985-05-01
Computer codes for modelling the dispersion and transfer of tritium released to the atmosphere were compared. The codes originated from Canada, the United States, Sweden and Japan. The comparisons include acute and chronic emissions of tritiated water vapour or elemental tritium from a hypothetical nuclear facility. Individual and collective doses to the population within 100 km of the site were calculated. The discrepancies among the code predictions were about one order of magnitude for the HTO emissions but were significantly more varied for the HT emissions. Codes that did not account for HT to HTO conversion and cycling of tritium in the environment predicted doses that were several orders of magnitude less than codes that incorporate this feature into the model. A field experiment consisting of the release of tritium gas and subsequent measurements of tritium concentrations in the environment has been recommended to validate the tritium transport models and code predictions and to refine the values of key parameters in the models. It is further recommended that the results from a field experiment be used in a follow-up code validation study to confirm which conceptual model and derived computer code provides the best representation of tritium transport in the real system
A new balance-of-plant model for the SASSYS-1 LMR systems analysis code
International Nuclear Information System (INIS)
Briggs, L.L.
1989-01-01
A balance-of-plant model has been added to the SASSYS-1 liquid metal reactor systems analysis code. Until this addition, the only waterside component which SASSYS-1 could explicitly model was the water side of a steam generator, with the remainder of the water side represented by boundary conditions on the steam generator. The balance-of-plant model is based on the model used for the sodium side of the plant. It will handle subcooled liquid water, superheated steam, and saturated two-phase fluid. With the exception of heated flow paths in heaters, the model assumes adiabatic conditions along flow paths; this assumption simplifies the solution procedure while introducing very little error for a wide range of reactor plant problems. Only adiabatic flow is discussed in this report. 3 refs., 4 figs
International Nuclear Information System (INIS)
Yoon, C.; Rhee, B. W.; Chung, B. D.; Cho, Y. J.; Kim, M. W.
2008-01-01
Korea currently operates four CANDU-6 type reactor units at Wolsong. However, a safety assessment system for CANDU reactors has not been fully established, due to a lack of self-reliant technology. Although the CATHENA code was introduced from AECL, it is undesirable to use a vendor's code for regulatory auditing analysis. In Korea, the MARS code has been developed over decades and is being considered by KINS (Korea Institute of Nuclear Safety) as a thermal-hydraulic regulatory auditing tool for nuclear power plants. Before this decision, KINS had developed the RELAP5/MOD3/CANDU code for CANDU safety analyses by modifying the models of the existing PWR auditing tool, RELAP5/MOD3. The main purpose of this study is to transplant the CANDU models of the RELAP5/MOD3/CANDU code into the MARS code, including quality assurance of the developed models. This first part of the research series presents the implementation and verification of the Wolsong pump model, the pressure tube deformation model, and the off-take model for arbitrarily angled branch pipes
Wang, S.; Peters-Lidard, C. D.; Mocko, D. M.; Kumar, S.; Nearing, G. S.; Arsenault, K. R.; Geiger, J. V.
2014-12-01
Model integration bridges the data flow between modeling frameworks and models. However, models usually do not fit directly into a particular modeling environment, if not designed for it. An example includes implementing different types of models into the NASA Land Information System (LIS), a software framework for land-surface modeling and data assimilation. Model implementation requires scientific knowledge and software expertise and may take a developer months to learn LIS and model software structure. Debugging and testing of the model implementation is also time-consuming due to not fully understanding LIS or the model. This time spent is costly for research and operational projects. To address this issue, an approach has been developed to automate model integration into LIS. With this in mind, a general model interface was designed to retrieve forcing inputs, parameters, and state variables needed by the model and to provide as state variables and outputs to LIS. Every model can be wrapped to comply with the interface, usually with a FORTRAN 90 subroutine. Development efforts need only knowledge of the model and basic programming skills. With such wrappers, the logic is the same for implementing all models. Code templates defined for this general model interface could be re-used with any specific model. Therefore, the model implementation can be done automatically. An automated model implementation toolkit was developed with Microsoft Excel and its built-in VBA language. It allows model specifications in three worksheets and contains FORTRAN 90 code templates in VBA programs. According to the model specification, the toolkit generates data structures and procedures within FORTRAN modules and subroutines, which transfer data between LIS and the model wrapper. Model implementation is standardized, and about 80 - 90% of the development load is reduced. In this presentation, the automated model implementation approach is described along with LIS programming
International Nuclear Information System (INIS)
Guillermier, Pierre; Daniel, Lucile; Gauthier, Laurent
2009-01-01
To support AREVA NP in its design of the HTR reactor and its HTR fuel R and D program, the Commissariat a l'Energie Atomique developed the ATLAS code (Advanced Thermal mechanicaL Analysis Software) with two objectives: - to quantify, with a statistical approach, the failed particle fraction and fission product release of an HTR fuel core under normal and accident conditions (compact or pebble design); - to simulate irradiation tests or benchmarks, in order to compare measurements or other codes' results with ATLAS evaluations. These two objectives aim at qualifying the code in order to predict fuel behaviour and to design fuel according to core performance and safety requirements. A statistical calculation uses numerous deterministic calculations. The finite element method is used for these deterministic calculations, in order to be able to choose among three types of meshes, depending on what must be simulated: - one-dimensional calculation of a single particle, for intact particles or particles with fully debonded layers; - two-dimensional calculations of a single particle, in the case of particles which are cracked, partially debonded, or shaped in various ways; - three-dimensional calculations of a whole compact slice, in order to simulate the interactions between the particles, the thermal gradient, and the transport of fission products up to the coolant. Some calculations of a whole pebble, using homogenization methods, are also being studied. The temperatures, displacements, stresses, strains and fission product concentrations are calculated on each mesh of the model. Statistical calculations are done using these results, taking into account the ceramic failure mode, but also fabrication tolerances and material property uncertainties, variations of the loads (fluence, temperature, burn-up) and core data parameters. The statistical method used in ATLAS is importance sampling. The model of migration of long-lived fission products in the coated particle and more
Trading speed and accuracy by coding time: a coupled-circuit cortical model.
Standage, Dominic; You, Hongzhi; Wang, Da-Hui; Dorris, Michael C
2013-04-01
Our actions take place in space and time, but despite the role of time in decision theory and the growing acknowledgement that the encoding of time is crucial to behaviour, few studies have considered the interactions between neural codes for objects in space and for elapsed time during perceptual decisions. The speed-accuracy trade-off (SAT) provides a window into spatiotemporal interactions. Our hypothesis is that temporal coding determines the rate at which spatial evidence is integrated, controlling the SAT by gain modulation. Here, we propose that local cortical circuits are inherently suited to the relevant spatial and temporal coding. In simulations of an interval estimation task, we use a generic local-circuit model to encode time by 'climbing' activity, seen in cortex during tasks with a timing requirement. The model is a network of simulated pyramidal cells and inhibitory interneurons, connected by conductance synapses. A simple learning rule enables the network to quickly produce new interval estimates, which show signature characteristics of estimates by experimental subjects. Analysis of network dynamics formally characterizes this generic, local-circuit timing mechanism. In simulations of a perceptual decision task, we couple two such networks. Network function is determined only by spatial selectivity and NMDA receptor conductance strength; all other parameters are identical. To trade speed and accuracy, the timing network simply learns longer or shorter intervals, driving the rate of downstream decision processing by spatially non-selective input, an established form of gain modulation. Like the timing network's interval estimates, decision times show signature characteristics of those by experimental subjects. Overall, we propose, demonstrate and analyse a generic mechanism for timing, a generic mechanism for modulation of decision processing by temporal codes, and we make predictions for experimental verification.
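The gain-modulation principle above can be caricatured with a bare evidence accumulator (not the authors' coupled spiking-circuit model; all parameter values are arbitrary): scaling the input to a noisy integrator, as a timing signal might, makes decisions faster but less accurate.

```python
import random

def decide(gain, drift=0.02, noise=0.1, threshold=1.0, seed=0):
    """Integrate noisy evidence until |x| crosses threshold; gain scales the input."""
    rng = random.Random(seed)
    x, t = 0.0, 0
    while abs(x) < threshold:
        x += gain * (drift + noise * rng.gauss(0.0, 1.0))
        t += 1
    return t, x > 0  # (decision time, reached the correct side?)

def mean_rt_and_accuracy(gain, trials=400):
    """Average decision time and fraction correct over many trials."""
    results = [decide(gain, seed=s) for s in range(trials)]
    return (sum(t for t, _ in results) / trials,
            sum(ok for _, ok in results) / trials)
```

Because gain multiplies signal and noise alike, raising it is equivalent to lowering the decision threshold: responses come sooner, but the integrator averages away less noise before committing, which is exactly the speed-accuracy trade-off.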
Directory of Open Access Journals (Sweden)
Peter eVuust
2014-10-01
Full Text Available Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of predictive coding, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning are manifested through the brain’s Bayesian minimization of the error between the input to the brain and the brain’s prior expectations. Third, we develop a predictive coding model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard (‘rhythm’) and the brain’s anticipatory structuring of music (‘meter’). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the predictive coding theory. We argue that musical rhythm exploits the brain’s general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms.
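A toy version of the prediction-error account (an illustrative error-driven update, not the authors' neurocomputational model; the metrical "prior" and the two patterns are invented): a syncopated pattern, whose onsets contradict the metrical expectation, generates larger average prediction error than an on-beat pattern.

```python
def prediction_error(pattern, prior, lr=0.1, passes=4):
    """Average absolute prediction error while the prior adapts to a looped pattern."""
    p = prior[:]
    errors = []
    for _ in range(passes):
        for i, onset in enumerate(pattern):
            e = onset - p[i]      # prediction error at this metrical position
            errors.append(abs(e))
            p[i] += lr * e        # error-driven update of the expectation
    return sum(errors) / len(errors)

METER_PRIOR = [1.0, 0.2, 0.6, 0.2] * 4   # strong-weak-medium-weak expectation
straight    = [1, 0, 1, 0] * 4           # onsets on the strong positions
syncopated  = [0, 1, 0, 1] * 4           # onsets off the beat
```

In this caricature the residual error for the syncopated pattern shrinks as the "meter" is revised but stays higher throughout, mirroring the paper's point that syncopation sustains prediction error and hence rhythmic tension.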
Trading speed and accuracy by coding time: a coupled-circuit cortical model.
Directory of Open Access Journals (Sweden)
Dominic Standage
2013-04-01
Full Text Available Our actions take place in space and time, but despite the role of time in decision theory and the growing acknowledgement that the encoding of time is crucial to behaviour, few studies have considered the interactions between neural codes for objects in space and for elapsed time during perceptual decisions. The speed-accuracy trade-off (SAT) provides a window into spatiotemporal interactions. Our hypothesis is that temporal coding determines the rate at which spatial evidence is integrated, controlling the SAT by gain modulation. Here, we propose that local cortical circuits are inherently suited to the relevant spatial and temporal coding. In simulations of an interval estimation task, we use a generic local-circuit model to encode time by 'climbing' activity, seen in cortex during tasks with a timing requirement. The model is a network of simulated pyramidal cells and inhibitory interneurons, connected by conductance synapses. A simple learning rule enables the network to quickly produce new interval estimates, which show signature characteristics of estimates by experimental subjects. Analysis of network dynamics formally characterizes this generic, local-circuit timing mechanism. In simulations of a perceptual decision task, we couple two such networks. Network function is determined only by spatial selectivity and NMDA receptor conductance strength; all other parameters are identical. To trade speed and accuracy, the timing network simply learns longer or shorter intervals, driving the rate of downstream decision processing by spatially non-selective input, an established form of gain modulation. Like the timing network's interval estimates, decision times show signature characteristics of those by experimental subjects. Overall, we propose, demonstrate and analyse a generic mechanism for timing, a generic mechanism for modulation of decision processing by temporal codes, and we make predictions for experimental verification.
Energy Technology Data Exchange (ETDEWEB)
Jonkman, J.; Butterfield, S.; Passon, P.; Larsen, T.; Camp, T.; Nichols, J.; Azcona, J.; Martinez, A.
2008-01-01
This paper presents an overview and describes the latest findings of the code-to-code verification activities of the Offshore Code Comparison Collaboration, which operates under Subtask 2 of the International Energy Agency Wind Annex XXIII.
International Nuclear Information System (INIS)
Dominguez, L.; Camargo, C.T.M.
1984-09-01
The first step of the project to implement two non-symmetric cooling loops in the ALMOD3 computer code is presented. This step consists of introducing a simplified model for simulating the steam generator: the GEVAP computer code, an integral part of the LOOP code, which simulates the primary coolant circuit of PWR nuclear power plants during transients. The ALMOD3 computer code already has a very detailed steam generator model, called UTSG, with spatial dependence, correlations for two-phase flow, and distinct correlations for the different heat transfer processes. The GEVAP model assumes thermal equilibrium between phases (a homogeneous gas-liquid mixture), has no spatial dependence, and uses a single generalized correlation to treat several heat transfer processes. (Author) [pt
Coding coarse grained polymer model for LAMMPS and its application to polymer crystallization
Luo, Chuanfu; Sommer, Jens-Uwe
2009-08-01
We present a patch code for LAMMPS that implements a coarse-grained (CG) model of poly(vinyl alcohol) (PVA). LAMMPS is a powerful molecular dynamics (MD) simulator developed at Sandia National Laboratories. Our patch code implements a tabulated angular potential and Lennard-Jones 9-6 (LJ96) style interactions for PVA. Benefiting from the excellent parallel efficiency of LAMMPS, the patch is suitable for large-scale simulations. This CG-PVA code is used to study polymer crystallization, a long-standing unsolved problem in polymer physics. Using parallel computing, cooling and heating processes for long chains are simulated. The results show that chain-folded structures resembling the lamellae of polymer crystals form during the cooling process. The evolution of the static structure factor during the crystallization transition indicates that long-range density order appears before local crystalline packing, consistent with experimental observations by small/wide-angle X-ray scattering (SAXS/WAXS). During the heating process, the crystalline regions continue to grow until they are fully melted, as confirmed by the evolution of both the static structure factor and the average stem length formed by the chains. This two-stage behavior indicates that the melting of polymer crystals is far from thermodynamic equilibrium. Our results concur with various experiments, and this is the first time such growth/reorganization behavior has been clearly observed in MD simulations. Our code can easily be used to model other types of polymers by providing a file containing the tabulated angle potential data and a set of appropriate parameters.
Program summary
Program title: lammps-cgpva
Catalogue identifier: AEDE_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDE_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU's GPL
No. of lines in distributed program
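A sketch of producing the tabulated-angle input such a patch consumes, assuming the standard LAMMPS angle-table layout (a keyword line, an "N <points>" line, then index/angle/energy/force rows; check this against the LAMMPS manual). The harmonic well is a placeholder; the real CG-PVA table comes from coarse-graining PVA, which is not reproduced here.

```python
def write_angle_table(path, keyword="CGPVA", n=181, k=0.003, theta0=120.0):
    """Write one tabulated angle section; the last column is the force -dE/dtheta."""
    lines = ["# tabulated CG angle potential (illustrative placeholder)",
             "", keyword, f"N {n}", ""]
    for i in range(1, n + 1):
        theta = 180.0 * (i - 1) / (n - 1)          # degrees, 0..180
        energy = k * (theta - theta0) ** 2          # harmonic stand-in
        force = -2.0 * k * (theta - theta0)         # -dE/dtheta
        lines.append(f"{i} {theta:.4f} {energy:.6f} {force:.6f}")
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")
```

An input script would then reference the file with something like `angle_style table linear 181`; treat that, too, as an assumption to verify against the LAMMPS documentation.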
Hickok, Gregory
2012-01-01
Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…
Learning to Act Like a Lawyer: A Model Code of Professional Responsibility for Law Students
Directory of Open Access Journals (Sweden)
David M. Tanovich
2009-02-01
incidents at law schools that raise serious issues about the professionalism of law students. They include, for example, the UofT marks scandal, the Windsor first-year blog, and the proliferation of blogs like www.lawstudents.ca and www.lawbuzz.ca with gratuitous, defamatory and offensive entries. It is not clear that all of this conduct would be caught by university codes of conduct, which often limit their reach to on-campus behaviour or university-sanctioned events. What should a law school code of professional responsibility look like, and what ethical responsibilities should it identify? For example, should there be a mandatory pro bono obligation on students, or a duty to report misconduct? The last part of the article addresses this question by setting out a model code of professional responsibility for law students. Law students are the future of the legal profession. How well prepared are they when they leave law school to assume their professional and ethical obligations to themselves, to the profession, and to the public? This question has led to a growing interest in Canada in the teaching of legal ethics. It has also led to a greater emphasis on the development of clinical and experiential training, as exemplified by the scholarship and teaching of Professor Rose Voyvodic. However, less attention has been devoted to identifying the general ethical responsibilities of law students when they are not working in a clinic or another legal setting. This is reflected in the fact that there are very few Canadian articles on the question and, more importantly, that law schools lack disciplinary policies or codes of conduct setting out the professional obligations of law students. This article develops an id
International Nuclear Information System (INIS)
Simmons, C.S.; Cole, C.R.
1985-05-01
This document was written to provide guidance to managers and site operators on how ground-water transport codes should be selected for assessing burial site performance. There is a need for a formal approach to selecting appropriate codes from the multitude of potentially useful ground-water transport codes that are currently available. Code selection is a problem that requires more than merely considering mathematical equation-solving methods. These guidelines are very general and flexible, and are also meant for developing systems simulation models to be used to assess the environmental safety of low-level waste burial facilities. Code selection is only a single aspect of the overall objective of developing a systems simulation model for a burial site. The guidance given here is mainly directed toward applications-oriented users, but managers and site operators need to be familiar with this information to direct the development of scientifically credible and defensible transport assessment models. Some specific advice for managers and site operators on how to direct a modeling exercise is based on the following five steps: identify specific questions and study objectives; establish costs and schedules for achieving answers; enlist the aid of a professional model applications group; decide on an approach with the applications group and guide code selection; and facilitate the availability of site-specific data. These five steps for managers/site operators are discussed in detail following an explanation of the nine systems-model development steps, which are presented first to clarify what code selection entails
An Empirical Model for Vane-Type Vortex Generators in a Navier-Stokes Code
Dudek, Julianne C.
2005-01-01
An empirical model which simulates the effects of vane-type vortex generators in ducts was incorporated into the Wind-US Navier-Stokes computational fluid dynamics code. The model enables the effects of the vortex generators to be simulated without defining the details of the geometry within the grid, and makes it practical for researchers to evaluate multiple combinations of vortex generator arrangements. The model determines the strength of each vortex based on the generator geometry and the local flow conditions. Validation results are presented for flow in a straight pipe with a counter-rotating vortex generator arrangement, and the results are compared with experimental data and computational simulations using a gridded vane generator. Results are also presented for vortex generator arrays in two S-duct diffusers, along with accompanying experimental data. The effects of grid resolution and turbulence model are also examined.
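The idea of deriving each vortex's strength from the generator geometry and local flow conditions can be sketched with a thin-airfoil estimate (illustrative only; the Wind-US model's actual correlation is not given in this record, and every parameter below is an assumption).

```python
import math

def vane_circulation(u_local, chord, alpha_deg, lift_slope=2.0 * math.pi):
    """Shed circulation Gamma ~ 0.5 * U * c * Cl, with Cl = (dCl/dalpha) * alpha.

    u_local: local streamwise velocity at the vane (m/s)
    chord: vane chord (m); alpha_deg: vane incidence to the local flow (degrees)
    lift_slope: thin-airfoil lift-curve slope per radian (2*pi by default)
    """
    cl = lift_slope * math.radians(alpha_deg)
    return 0.5 * u_local * chord * cl
```

A vortex of this circulation would then be imposed on the flow field as a source term, avoiding the need to grid the vane itself, which is the economy the model is built around.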
H2-O2 supercritical combustion modeling using a CFD code
Directory of Open Access Journals (Sweden)
Benarous Abdallah
2009-01-01
Full Text Available The characteristics of propellant injection, mixing, and combustion have a profound effect on liquid rocket engine performance. The need to raise rocket engine performance often requires the combustion chamber to operate in a supercritical regime. A supercritical combustion model based on a one-phase, multi-component approach is developed and tested on a non-premixed H2-O2 flame configuration. A two-equation turbulence model is used to describe the jet dynamics, with a limited Pope correction added to account for the oxidant spreading rate. Transport properties of the mixture are calculated using extended high-pressure forms of the mixing rules. An equilibrium chemistry scheme is adopted for this combustion case, with both algebraic and stochastic expressions for the chemistry/turbulence coupling. The model was incorporated into a commercial computational fluid dynamics code (Fluent 6.2.16). The validity of the present model was investigated by comparing predictions of temperature, species mass fractions, recirculation zones and visible flame length with the experimental data measured on the Mascotte test rig. The results were also compared with advanced code simulations. The agreement was fairly good in the chamber regions situated downstream of the near-injection zone.
Development of sump model for containment hydrogen distribution calculations using CFD code
Energy Technology Data Exchange (ETDEWEB)
Ravva, Srinivasa Rao, E-mail: srini@aerb.gov.in [Indian Institute of Technology-Bombay, Mumbai (India); Nuclear Safety Analysis Division, Atomic Energy Regulatory Board, Mumbai (India); Iyer, Kannan N. [Indian Institute of Technology-Bombay, Mumbai (India); Gaikwad, A.J. [Nuclear Safety Analysis Division, Atomic Energy Regulatory Board, Mumbai (India)
2015-12-15
Highlights: • A sump evaporation model was implemented in FLUENT using three different approaches. • The implemented sump evaporation models were validated against the TOSQAN facility. • The predictions were found to be in good agreement with the data. • A diffusion-based model can predict both condensation and evaporation. - Abstract: Computational Fluid Dynamics (CFD) simulations are necessary for obtaining accurate predictions and local behaviour when carrying out containment hydrogen distribution studies. However, commercially available CFD codes do not have all the models necessary for hydrogen distribution analysis. One such model is the sump, or suppression pool, evaporation model. The water in the sump may evaporate during accident progression and affect the mixture concentrations in the containment; it is therefore imperative to study sump evaporation and its effects. Sump evaporation is modelled using three different approaches in the present work. The first approach calculates the evaporation flow rate and sump liquid temperature and supplies these quantities as boundary conditions through user-defined functions; mean values of the domain are used. In the second approach, the mass, momentum, energy and species sources arising from sump evaporation are added to the domain through user-defined functions, using the values in cells adjacent to the sump interface; heat transfer between gas and liquid is calculated automatically by the code itself. In these two approaches, the evaporation rate is computed from an experimental correlation. In the third approach, the evaporation rate is estimated directly using a diffusion approximation. The performance of these three models is compared with the sump behaviour experiment conducted in the TOSQAN facility. Classification: K. Thermal hydraulics.
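At its core, the third (diffusion-based) approach reduces to a diffusion-limited flux at the sump interface; the sketch below illustrates that reading only. The FLUENT user-defined-function plumbing and the experimental correlation of the first two approaches are not reproduced, and the diffusivity value is merely representative of steam in air.

```python
def evaporation_flux(rho_v_interface, rho_v_cell, dy, diffusivity=2.6e-5):
    """Mass flux (kg/m^2/s) from the vapour-density gradient at the interface.

    rho_v_interface: saturation vapour density at the sump surface (kg/m^3)
    rho_v_cell: vapour density in the adjacent gas cell (kg/m^3)
    dy: interface-to-cell-centre distance (m); diffusivity in m^2/s
    Positive result = evaporation; negative = condensation.
    """
    return diffusivity * (rho_v_interface - rho_v_cell) / dy

def evaporation_rate(flux, sump_area):
    """Source-term magnitude (kg/s) to distribute over the gas domain."""
    return flux * sump_area
```

Because the sign of the flux follows the gradient, this formulation handles condensation and evaporation with the same expression, which is the advantage the highlights attribute to the diffusion-based model.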
A fifth equation to model the relative velocity in the 3-D thermal-hydraulic code THYC
International Nuclear Information System (INIS)
Jouhanique, T.; Rascle, P.
1995-11-01
E.D.F. has developed, since 1986, a general-purpose code named THYC (Thermal HYdraulic Code) designed to study three-dimensional single- and two-phase flows in rod or tube bundles (pressurised water reactor cores, steam generators, condensers, heat exchangers). In those studies, the relative velocity was calculated by a drift-flux correlation. However, the relative velocity between vapour and liquid is an important parameter for the accuracy of two-phase flow modelling in a three-dimensional code. The range of application of drift-flux correlations is limited mainly by the characteristics of the flow pattern (counter-current flow ...) and by large 3-D effects. The purpose of this paper is to describe a numerical scheme which allows the relative velocity to be computed in the general case. Only the methodology is investigated in this paper, which is not a validation work. The interfacial drag force is an important factor in the stability and accuracy of the results. This force, closely dependent on the flow pattern, is not yet fully established, so a range of multipliers of its expression is used to compare the numerical results with the VATICAN test section measurements. (authors). 13 refs., 6 figs
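For contrast with the fifth-equation approach, the drift-flux closure it replaces fits in a few lines (Zuber-Findlay form; the coefficient values below are generic illustrations, not THYC's).

```python
def gas_velocity(j, c0=1.13, v_gj=0.24):
    """Zuber-Findlay drift-flux relation: u_g = C0 * j + v_gj.

    j: mixture volumetric flux (m/s); c0: distribution parameter;
    v_gj: drift velocity (m/s). Both coefficients are illustrative.
    """
    return c0 * j + v_gj

def relative_velocity(j, alpha, c0=1.13, v_gj=0.24):
    """u_r = u_g - u_l, with u_l recovered from j = alpha*u_g + (1-alpha)*u_l."""
    u_g = gas_velocity(j, c0, v_gj)
    u_l = (j - alpha * u_g) / (1.0 - alpha)
    return u_g - u_l
```

The limitation the paper targets is visible here: `c0` and `v_gj` are fitted for a given flow pattern and co-current flow, whereas a transport equation for the relative velocity needs no such correlation.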
Magy: Time dependent, multifrequency, self-consistent code for modeling electron beam devices
International Nuclear Information System (INIS)
Botton, M.; Antonsen, T.M.; Levush, B.
1997-01-01
A new MAGY code is being developed for three-dimensional modeling of electron beam devices. The code includes a time-dependent, multifrequency description of the electromagnetic fields and a self-consistent analysis of the electrons. The equations of motion are solved with the electromagnetic fields as driving forces, and the resulting trajectories are used as current sources for the fields. The calculations of the electromagnetic fields are based on the waveguide modal representation, which allows the solution of a relatively small number of coupled one-dimensional partial differential equations for the amplitudes of the modes, instead of the full solution of Maxwell's equations. Moreover, the basic time scale for updating the electromagnetic fields is the cavity fill time and not the high frequency of the fields. In MAGY, the coupling among the various modes is determined by the waveguide non-uniformity, the finite conductivity of the walls, and the sources due to the electron beam. The equations of motion of the electrons are solved assuming that all the electrons traverse the cavity in less than the cavity fill time. Therefore, at each time step, a set of trajectories is calculated with the high-frequency and other external fields as the driving forces. The code includes a variety of diagnostics for both electromagnetic fields and particle trajectories. It is simple to operate and requires modest computing resources, and is thus expected to serve as a design tool. copyright 1997 American Institute of Physics
A self-organized internal models architecture for coding sensory-motor schemes
Directory of Open Access Journals (Sweden)
Esaú eEscobar Juárez
2016-04-01
Full Text Available Cognitive robotics research draws inspiration from theories and models of cognition, as conceived by neuroscience or cognitive psychology, to investigate biologically plausible computational models in artificial agents. In this field, the theoretical framework of Grounded Cognition provides epistemological and methodological grounds for the computational modeling of cognition. It has been stressed in the literature that simulation, prediction, and multi-modal integration are key aspects of cognition and that computational architectures capable of putting them into play in a biologically plausible way are a necessity. Research in this direction has brought extensive empirical evidence suggesting that Internal Models are suitable mechanisms for sensory-motor integration. However, current Internal Models architectures show several drawbacks, mainly due to the lack of a unified substrate allowing for a true sensory-motor integration space, enabling flexible and scalable ways to model cognition under the constraints of the embodiment hypothesis. We propose the Self-Organized Internal Models Architecture (SOIMA), a computational cognitive architecture coded by means of a network of self-organized maps, implementing coupled internal models that allow modeling of multi-modal sensory-motor schemes. Our approach integrally addresses the issues of current implementations of Internal Models. We discuss the design and features of the architecture, and provide empirical results on a humanoid robot that demonstrate the benefits and potential of the SOIMA concept for studying cognition in artificial agents.
Code and Solution Verification of 3D Numerical Modeling of Flow in the Gust Erosion Chamber
Yuen, A.; Bombardelli, F. A.
2014-12-01
Erosion microcosms are devices commonly used to investigate the erosion and transport characteristics of sediments at the bed of rivers, lakes, or estuaries. In order to understand the results these devices provide, the bed shear stress and flow field need to be accurately described. In this research, the UMCES Gust Erosion Microcosm System (U-GEMS) is numerically modeled using the Finite Volume Method. The primary aims are to simulate the bed shear stress distribution at the surface of the sediment core/bottom of the microcosm, and to verify that the U-GEMS produces uniform bed shear stress at the bottom of the microcosm. The mathematical model equations are solved on a Cartesian non-uniform grid. Multiple numerical runs were performed with different input conditions and configurations. Prior to developing the U-GEMS model, the General Moving Objects (GMO) model and different momentum algorithms in the code were verified. Code verification of these solvers was done by simulating the flow inside a top-wall-driven square cavity on different mesh sizes to obtain the order of convergence. The GMO model was used to simulate the top wall in the driven square cavity as well as the rotating disk in the U-GEMS. Components simulated with the GMO model were rigid bodies that could have any type of motion. In addition, cross-verification was conducted: results were compared with the numerical results of Ghia et al. (1982), and good agreement was found. Next, CFD results were validated by simulating the flow within the conventional microcosm system without suction and injection. Good agreement was found in comparison with the experimental results of Khalili et al. (2008). After the ability of the CFD solver was established through the above code verification steps, the model was used to simulate the U-GEMS. The solution was verified via a classic mesh convergence study on four consecutive mesh sizes; in addition, the Grid Convergence Index (GCI) was calculated and based on
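The order-of-convergence and GCI calculations mentioned follow the standard Roache procedure, which can be sketched as below for solutions on three systematically refined grids (the safety factor Fs = 1.25 is the conventional value for three-grid studies, assumed here).

```python
import math

def observed_order(f1, f2, f3, r):
    """Observed order of convergence from solutions on three grids with a
    constant refinement ratio r (f1 = finest grid, f3 = coarsest)."""
    return math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)

def gci_fine(f1, f2, r, p, Fs=1.25):
    """Grid Convergence Index on the fine grid (Roache), as a fraction of f1."""
    e = abs((f2 - f1) / f1)      # relative difference between the two finest grids
    return Fs * e / (r ** p - 1.0)
```

For a quantity converging exactly at second order (f = f_exact + C h^2) with r = 2, the recovered order is 2 and the GCI reduces to 1.25 |e| / 3.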
International Nuclear Information System (INIS)
Bott, E.; Frepoli, C.; Monti, R.; Notini, V.; Carcassi, M.; Fineschi, F.; Heitsch, M.
1999-01-01
Large amounts of hydrogen can be generated in the containment of a nuclear power plant following a postulated accident with significant fuel damage. Different strategies have been proposed and implemented to prevent violent hydrogen combustion. An attractive one aims to eliminate hydrogen without burning processes; it is based on the use of catalytic hydrogen recombiners. This paper describes a simulation methodology being developed by Ansaldo to support the application of the above strategy, in the frame of two projects sponsored by the Commission of the European Communities within the IV Framework Program on Reactor Safety. The organizations involved also include the DCMN of Pisa University (Italy), the Battelle Institute and GRS (Germany), and the Polytechnic University of Madrid (Spain). The aim is to make available a simulation approach suitable for containment design at the industrial level (i.e., with reasonable computer running time) and capable of correctly capturing the relevant phenomenologies (e.g., multiple convective flow patterns, and hydrogen, air and steam distribution in the containment atmosphere as determined by containment structures and geometries as well as by heat and mass sources and sinks). Eulerian algorithms provide three-dimensional modelling with fairly accurate prediction, though lower than that of CFD codes with a full Navier-Stokes formulation. Open linking of an Eulerian code such as GOTHIC to a full Navier-Stokes CFD code such as CFX 4.1 allows the solving strategies of the Eulerian code itself to be tuned dynamically. The effort in progress is an application of this innovative methodology to detailed hydrogen recombination simulation and a validation of the approach itself by reproducing experimental data. (author)
Analysis of LMFBR explosion model experiments by means of the SURBOUM-II code
International Nuclear Information System (INIS)
Stievenart, M.; Bouffioux, P.; Egleme, M.; Fabry, J.P.; Lamotte, H.
1975-01-01
Experiments have been carried out to simulate in small-scale vessels the occurrence of a Hypothetical Core Disruptive Accident (HCDA) for the SNR-300 reactor. In parallel with the experimental programme, the (2D) computer code SURBOUM-II has been developed. The code computes the two-dimensional fluid flow within the system in case of a core explosion. The fluid is assumed incompressible and the deformations of the concrete shell(s) and vessel(s) are calculated by means of the thin shell theory. Perforated dip plates which are included in the model are treated as a particular type of boundary condition involving empirical pressure drop versus flow relationships. The first part of the interpretation work was the determination of the pressure-volume relationship of the slow-burning charge used to simulate the HCDA. This has been achieved by an original trial-and-error method built into the code which best fits the experimental impulse time records obtained in bare charge experiments fired in overstrong vessels. Other experiments carried out in overstrong vessels and involving a perforated dip plate above the core to damp out the fluid impact on the roof were calculated, and the comparison of the theoretical and experimental impulse time curves was satisfactory. Further experiments, with or without the perforated dip plate, carried out in yielding vessels were also interpreted. For those experiments the scope of the work was the comparison of calculated and measured deformation of the vessel. The agreement obtained is satisfactory, though the code seems to overestimate slightly the final deformation. (Auth.)
Analysis of LMFBR explosion model experiments by means of the SURBOUM-II code
International Nuclear Information System (INIS)
Stievenart, M.; Bouffioux, P.; Egleme, M.; Fabry, J.P.; Lamotte, H.
1975-01-01
During the last four years experiments were carried out at JRC-Ispra in the frame of a collaboration contract between BELGONUCLEAIRE and EURATOM to simulate in small-scale vessels the occurrence of a Hypothetical Core Disruptive Accident (HCDA) of the SNR-300 reactor. The 2D computer code SURBOUM-II has been developed in parallel with the experimental programme. That code computes the two-dimensional fluid flow within the system in case of a core explosion. The fluid is assumed incompressible and the deformations of the concentric shell(s) and vessel(s) are calculated by means of the thin shell theory. Perforated dip plates which are included in the model are treated as a particular type of boundary condition involving empirical pressure drop versus flow relationships. The first part of the interpretation work was the determination of the pressure-volume relationship of the slow-burning charge used to simulate the HCDA. This has been achieved by an original trial-and-error method built into the code which best fits the experimental impulse time records obtained in bare charge experiments fired in overstrong vessels. Other experiments carried out in overstrong vessels and involving a perforated dip plate above the core to damp out the fluid impact on the roof were calculated, and the comparison of the theoretical and experimental impulse time curves was satisfactory. Further experiments, with or without the perforated dip plate, carried out in yielding vessels were also interpreted. For those experiments the scope of the work was the comparison of calculated and measured deformation of the vessel. The agreement obtained is satisfactory, though the code seems to overestimate slightly the final deformation
International Nuclear Information System (INIS)
Carvalho, F. de A.T. de.
1985-01-01
Some anticipated transients without scram (ATWS) for a pressurized water-cooled reactor, model KWU 1300 MWe, are studied by coupling the containment code CORAN to the system model code ALMOD, under severe random conditions. This coupling has the objective of including the containment model as part of a unified code system. These severe conditions include failure of reactor scram following a station blackout and emergency power initiation, for the burn-up status at the beginning and end of the cycle. Furthermore, for the burn-up status at the end of the cycle, a failure in the closure of the pressurizer relief valve was also investigated. For the beginning of the cycle, the containment participates actively during the transient. It is noted that the effect of fuel burn-up is to reduce the severity of these transients. On the other hand, the failure in the closure of the pressurizer relief valve makes these transients more severe. In none of the cases, however, is containment safety or radiological public safety affected. (Author) [pt
International Nuclear Information System (INIS)
Park, Hyun Sik; Choi, Ki Yong; Moon, Sang Ki; Kim, Jung Woo; Kim, Kyung Doo
2009-01-01
The wall condensation heat transfer models are developed for the SPACE code and are assessed for various condensation conditions. Both default and alternative models were selected through an extensive literature survey. For pure steam condensation, the maximum value among the Nusselt, Chato, and Shah correlations is used in order to account for geometric and turbulence effects. In the presence of non-condensable gases, the Colburn-Hougen diffusion model is used as the default model, and a non-iterative condensation model proposed by No and Park was selected as the alternative model. The wall condensation heat transfer models were assessed preliminarily using arbitrary test conditions. Both wall condensation models simulate the heat transfer coefficients and heat fluxes quite reasonably for pure steam condensation in vertical, horizontal and turbulent conditions. Both the default and alternative wall condensation models were also verified for the condensation heat transfer coefficient and heat flux in the presence of non-condensable gas. However, some improvements and further detailed verification are necessary for condensation phenomena in the presence of non-condensable gas.
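One ingredient of the pure-steam default model described above, the Nusselt laminar film condensation correlation for a vertical wall, can be sketched as follows; the code takes the maximum of the Nusselt, Chato and Shah values, but only Nusselt is implemented here, with assumed water properties at roughly 1 bar.

```python
def nusselt_vertical(T_sat, T_wall, L, rho_l=958.0, rho_v=0.6,
                     k_l=0.68, mu_l=2.82e-4, h_fg=2.257e6, g=9.81):
    """Nusselt laminar film condensation HTC on a vertical wall [W/m^2/K].

    T_sat, T_wall : saturation and wall temperatures [K]
    L             : wall height [m]
    Default property values are assumptions for saturated water near 1 bar.
    h = 0.943 * [ g*rho_l*(rho_l - rho_v)*h_fg*k_l^3 / (mu_l*L*dT) ]^(1/4)
    """
    dT = T_sat - T_wall
    return 0.943 * (g * rho_l * (rho_l - rho_v) * h_fg * k_l**3
                    / (mu_l * L * dT)) ** 0.25
```

Note the characteristic dT^(-1/4) dependence: a larger wall subcooling thickens the condensate film and lowers the heat transfer coefficient.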
A sliding point contact model for the finite element structures code EURDYN
International Nuclear Information System (INIS)
Smith, B.L.
1986-01-01
A method is developed by which sliding point contact between two moving deformable structures may be incorporated within a lumped-mass finite element formulation based on displacements. The method relies on a simple mechanical interpretation of the contact constraint in terms of equivalent nodal forces and avoids the use of nodal connectivity via a master-slave arrangement or pseudo contact element. The methodology has been implemented in the (2D axisymmetric) version of the EURDYN finite element program coupled to the hydro code SEURBNUK. Sample calculations are presented illustrating the use of the model in various contact situations. Effects due to separation and impact of structures are also included. (author)
International Nuclear Information System (INIS)
Strenge, D.L.; Watson, E.C.; Droppo, J.G.
1976-06-01
The development of technological bases for siting nuclear fuel cycle facilities requires calculational models and computer codes for the evaluation of risks and the assessment of environmental impact of radioactive effluents. A literature search and review of available computer programs revealed that no one program was capable of performing all of the great variety of calculations (i.e., external dose, internal dose, population dose, chronic release, accidental release, etc.). Available literature on existing computer programs has been reviewed and a description of each program reviewed is given
Energy Technology Data Exchange (ETDEWEB)
Strenge, D.L.; Watson, E.C.; Droppo, J.G.
1976-06-01
The development of technological bases for siting nuclear fuel cycle facilities requires calculational models and computer codes for the evaluation of risks and the assessment of environmental impact of radioactive effluents. A literature search and review of available computer programs revealed that no one program was capable of performing all of the great variety of calculations (i.e., external dose, internal dose, population dose, chronic release, accidental release, etc.). Available literature on existing computer programs has been reviewed and a description of each program reviewed is given.
Beutner, Thomas J.; Celik, Zeki Z.; Roberts, Leonard
1992-01-01
A computational study has been undertaken to investigate methods of modeling solid and porous wall boundary conditions in computational fluid dynamics (CFD) codes. The procedure utilizes experimental measurements at the walls to develop a flow field solution based on the method of singularities. This flow field solution is then imposed as a boundary condition in a CFD simulation of the internal flow field. The effectiveness of this method in describing the boundary conditions at the wind tunnel walls using only sparse experimental measurements has been investigated. The position and refinement of the experimental measurement locations required to describe porous wall boundary conditions have also been considered.
In silico discovery and modeling of non-coding RNA structure in viruses.
Moss, Walter N; Steitz, Joan A
2015-12-01
This review covers several computational methods for discovering structured non-coding RNAs in viruses and modeling their putative secondary structures. Here we use examples from two target viruses to highlight these approaches: influenza A virus, a relatively small, segmented RNA virus; and Epstein-Barr virus, a relatively large DNA virus with a complex transcriptome. Each system has unique challenges to overcome and unique characteristics to exploit. From these particular cases, generically useful approaches can be derived for the study of additional viral targets. Copyright © 2015 Elsevier Inc. All rights reserved.
FOI-PERFECT code: 3D relaxation MHD modeling and Applications
Wang, Gang-Hua; Duan, Shu-Chao; Computational Physics Team
2016-10-01
One of the challenges in numerical simulations of electromagnetically driven high energy density (HED) systems is the existence of a vacuum region. The FOI-PERFECT code adopts a full relaxation magnetohydrodynamic (MHD) model. The electromagnetic part of the conventional model adopts the magnetic diffusion approximation, and the vacuum region is approximated by artificially increasing the resistivity. On the one hand, the phase/group velocity is then superluminal and hence non-physical in the vacuum region; on the other hand, a diffusion equation with a large diffusion coefficient can only be solved by an implicit scheme, which is difficult to parallelize and slow to converge. A better alternative is to solve the full electromagnetic equations. Maxwell's equations coupled with the constitutive equation, the generalized Ohm's law, constitute a relaxation model. The dispersion relation is given to show its transition from electromagnetic propagation in vacuum to resistive MHD in plasma in a natural way. The phase and group velocities are finite for this system. An improved time-stepping scheme is adopted to give full third-order convergence in the time domain without the stiff relaxation term restriction, making the method convenient for explicit and parallel computation. Some numerical results of the FOI-PERFECT code are also given. Project supported by the National Natural Science Foundation of China (Grant No. 11571293) and the Foundation of China Academy of Engineering Physics (Grant No. 2015B0201023).
Modeling a TRIGA Mark II reactor using the Attila three-dimensional deterministic transport code
International Nuclear Information System (INIS)
Keller, S.T.; Palmer, T.S.; Wareing, T.A.
2005-01-01
A benchmark model of a TRIGA reactor constructed using materials and dimensions similar to existing TRIGA reactors was analyzed using MCNP and the recently developed deterministic transport code Attila™. The benchmark reactor requires no MCNP modeling approximations, yet is sufficiently complex to validate the new modeling techniques. Geometric properties of the benchmark reactor are specified for use by Attila™ with CAD software. Materials are treated individually in MCNP; materials used in Attila™ that are clad are homogenized. Attila™ uses multigroup energy discretization. Two cross section libraries were constructed for comparison. A 16-group library collapsed from the SCALE 4.4a 238-group library provided better results than a seven-group library calculated with WIMS-ANL. Values of the k-effective eigenvalue and the scalar flux as a function of location and energy were calculated by the two codes. The calculated values for k-effective and spatially averaged neutron flux were found to be in good agreement, and the flux distributions in space and energy also agreed well. Attila™ results could be improved with increased spatial and angular resolution and a revised energy group structure. (authors)
Cole, Gary L.; Richard, Jacques C.
1991-01-01
An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.
Development Of A Parallel Performance Model For The THOR Neutral Particle Transport Code
Energy Technology Data Exchange (ETDEWEB)
Yessayan, Raffi; Azmy, Yousry; Schunert, Sebastian
2017-02-01
The THOR neutral particle transport code enables simulation of complex geometries for various problems, from reactor simulations to nuclear non-proliferation. It is undergoing a thorough V&V effort, which requires computational efficiency. This has motivated various improvements, including angular parallelization, outer iteration acceleration, and the development of peripheral tools. To guide future improvements to the code's efficiency, a better characterization of its parallel performance is useful. A parallel performance model (PPM) can be used to evaluate the benefits of modifications and to identify performance bottlenecks. Using INL's Falcon HPC, the PPM development incorporates an evaluation of network communication behavior over heterogeneous links and a functional characterization of the per-cell/angle/group runtime of each major code component. After evaluating several possible sources of variability, this resulted in a communication model and a parallel portion model. The former's accuracy is bounded by the variability of communication on Falcon, while the latter has an error on the order of 1%.
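The shape of such a parallel performance model can be illustrated with a toy runtime model: a per-process compute term plus a log-scaling, reduction-style communication term. The constants, message size, and tree-reduction pattern are assumptions for illustration, not measurements from THOR or Falcon.

```python
import math

def runtime_model(p, w_cell, n_cells, alpha=1e-6, beta=8e-10, msg_bytes=4096):
    """Toy parallel performance model.

    p         : number of processes
    w_cell    : per-cell/angle/group work time [s] (assumed constant)
    n_cells   : total work units
    alpha     : per-message latency [s], beta : per-byte cost [s/B] (assumed)
    Communication is modeled as a log2(p)-depth reduction.
    """
    t_comp = w_cell * n_cells / p
    t_comm = math.ceil(math.log2(p)) * (alpha + beta * msg_bytes) if p > 1 else 0.0
    return t_comp + t_comm

def speedup(p, **kw):
    """Predicted speedup relative to a single process."""
    return runtime_model(1, **kw) / runtime_model(p, **kw)
```

A model of this form exposes the bottleneck directly: once the compute term shrinks below the communication term, adding processes stops paying off.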
Modelling of the Gadolinium Fuel Test IFA-681 using the BISON Code
Energy Technology Data Exchange (ETDEWEB)
Pastore, Giovanni [Idaho National Laboratory; Hales, Jason Dean [Idaho National Laboratory; Novascone, Stephen Rhead [Idaho National Laboratory; Spencer, Benjamin Whiting [Idaho National Laboratory; Williamson, Richard L [Idaho National Laboratory
2016-05-01
In this work, application of Idaho National Laboratory's fuel performance code BISON to modelling of fuel rods from the Halden IFA-681 gadolinium fuel test is presented. First, an overview is given of the BISON models, focusing on UO2/UO2-Gd2O3 fuel and Zircaloy cladding. Then, BISON analyses of selected fuel rods from the IFA-681 test are performed. For the first time in a BISON application to integral fuel rod simulations, the analysis is informed by detailed neutronics calculations in order to accurately capture the radial power profile throughout the fuel, which is strongly affected by the complex evolution of the absorber Gd isotopes. In particular, radial power profiles calculated at the IFE-Halden Reactor Project with the HELIOS code are used. The work has been carried out in the frame of the collaboration between Idaho National Laboratory and the Halden Reactor Project. Some slides have been added as an Appendix to present the newly developed PolyPole-1 algorithm for modelling of intra-granular fission gas release.
LHC-GCS a model-driven approach for automatic PLC and SCADA code generation
Thomas, Geraldine; Barillère, Renaud; Cabaret, Sebastien; Kulman, Nikolay; Pons, Xavier; Rochez, Jacques
2005-01-01
The LHC experiments’ Gas Control System (LHC GCS) project [1] aims to provide the four LHC experiments (ALICE, ATLAS, CMS and LHCb) with control for their 23 gas systems. To ease the production and maintenance of 23 control systems, a model-driven approach has been adopted to generate automatically the code for the Programmable Logic Controllers (PLCs) and for the Supervision Control And Data Acquisition (SCADA) systems. The first milestones of the project have been achieved. The LHC GCS framework [4] and the generation tools have been produced. A first control application has actually been generated and is in production, and a second is in preparation. This paper describes the principle and the architecture of the model-driven solution. It will in particular detail how the model-driven solution fits with the LHC GCS framework and with the UNICOS [5] data-driven tools.
An evolutionary model for protein-coding regions with conserved RNA structure
DEFF Research Database (Denmark)
Pedersen, Jakob Skou; Forsberg, Roald; Meyer, Irmtraud Margret
2004-01-01
Here we present a model of nucleotide substitution in protein-coding regions that also encode the formation of conserved RNA structures. In such regions, apparent evolutionary context dependencies exist, both between nucleotides occupying the same codon and between nucleotides forming a base pair... in the RNA structure. The overlap of these fundamental dependencies is sufficient to cause "contagious" context dependencies which cascade across many nucleotide sites. Such large-scale dependencies challenge the use of traditional phylogenetic models in evolutionary inference because they explicitly assume... evolutionary independence between short nucleotide tuples. In our model we address this by replacing context dependencies within codons by annotation-specific heterogeneity in the substitution process. Through a general procedure, we fragment the alignment into sets of short nucleotide tuples based on both...
The Energy Coding of a Structural Neural Network Based on the Hodgkin-Huxley Model.
Zhu, Zhenyu; Wang, Rubin; Zhu, Fengyun
2018-01-01
Based on the Hodgkin-Huxley model, the present study established a fully connected structural neural network to simulate the neural activity and energy consumption of the network by neural energy coding theory. The numerical simulation result showed that the periodicity of the network energy distribution was positively correlated to the number of neurons and coupling strength, but negatively correlated to signal transmitting delay. Moreover, a relationship was established between the energy distribution feature and the synchronous oscillation of the neural network, which showed that when the proportion of negative energy in power consumption curve was high, the synchronous oscillation of the neural network was apparent. In addition, comparison with the simulation result of structural neural network based on the Wang-Zhang biophysical model of neurons showed that both models were essentially consistent.
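A minimal single-neuron version of the simulation described above might look like the following Hodgkin-Huxley sketch, using the classic squid-axon parameters. The power measure I_ext * V recorded here is an illustrative stand-in for the paper's neural energy coding functional, chosen only to show that the instantaneous electrical power can take negative values, as in the "proportion of negative energy" discussed.

```python
import math

# Classic Hodgkin-Huxley squid-axon parameters (mV, ms, uF/cm^2, mS/cm^2)
C_m = 1.0
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

def alpha_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * math.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * math.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + math.exp(-(V + 35) / 10))

def simulate(I_ext=10.0, T=50.0, dt=0.01):
    """Forward-Euler HH integration; returns (spike_count, power_trace)."""
    V, n, m, h = -65.0, 0.317, 0.053, 0.596   # resting state
    spikes, power, above = 0, [], False
    for _ in range(int(T / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K  = g_K * n**4 * (V - E_K)
        I_L  = g_L * (V - E_L)
        dV = (I_ext - I_Na - I_K - I_L) / C_m
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        V += dt * dV
        power.append(I_ext * V)   # illustrative instantaneous power term
        if V > 0 and not above:   # count upward zero-crossings as spikes
            spikes += 1
            above = True
        elif V < -30:
            above = False
    return spikes, power
```

With a 10 uA/cm^2 step over 50 ms the neuron fires repetitively, and the power trace spends most of its time negative because the membrane potential sits below zero between spikes.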
Applications of Transport/Reaction Codes to Problems in Cell Modeling; TOPICAL
International Nuclear Information System (INIS)
MEANS, SHAWN A.; RINTOUL, MARK DANIEL; SHADID, JOHN N.
2001-01-01
We demonstrate two specific examples that show how our existing capabilities for solving large systems of partial differential equations associated with transport/reaction systems can be easily applied to outstanding problems in computational biology. First, we examine a three-dimensional model of calcium wave propagation in a Xenopus laevis frog egg and verify that a proposed model for the distribution of calcium release sites agrees with experimental results as a function of both space and time. Next, we create a model of the neuron's terminus based on experimental observations and show that the sodium-calcium exchanger is not the route of sodium's modulation of neurotransmitter release. These state-of-the-art simulations were performed on massively parallel platforms and required almost no modification of existing Sandia codes.
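The calcium-wave mechanism can be illustrated with a one-dimensional "fire-diffuse-fire" sketch: discrete release sites dump calcium once the local concentration crosses a threshold, and diffusion carries the signal to the next site. All parameters are illustrative and unrelated to the Sandia codes or the 3D egg model.

```python
def simulate_wave(n=200, dt=0.01, dx=1.0, D=30.0, thresh=0.2, dump=5.0,
                  site_every=10, steps=4000):
    """Explicit 1D fire-diffuse-fire sketch of a calcium wave.

    Returns the index of the farthest release site that fired.
    Diffusion number r = D*dt/dx^2 is kept below 0.5 for stability.
    """
    c = [0.0] * n
    fired = [False] * n
    c[0] = dump              # trigger the wave at the left boundary
    fired[0] = True
    r = D * dt / dx**2
    front = 0
    for _ in range(steps):
        new = c[:]
        for i in range(1, n - 1):                 # explicit diffusion step
            new[i] = c[i] + r * (c[i-1] - 2*c[i] + c[i+1])
        c = new
        for i in range(0, n, site_every):         # threshold-triggered release
            if not fired[i] and c[i] > thresh:
                c[i] += dump
                fired[i] = True
                front = max(front, i)
    return front
```

The regenerative release is what lets the wave travel much farther than pure diffusion would carry the initial trigger in the same time.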
OPT13B and OPTIM4 - computer codes for optical model calculations
International Nuclear Information System (INIS)
Pal, S.; Srivastava, D.K.; Mukhopadhyay, S.; Ganguly, N.K.
1975-01-01
OPT13B is a computer code in FORTRAN for optical model calculations with automatic search. A summary of the different formulae used for computation is given, and the numerical methods are discussed. The 'search' technique, which obtains the set of optical model parameters producing the best fit to experimental data in a least-squares sense, is also discussed. The different subroutines of the program are briefly described, and input-output specifications are given in detail. A modified version of OPT13B is OPTIM4. It can be used for optical model calculations where the form factors of the different parts of the optical potential are known point by point. A brief description of the modifications is given. (author)
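The automatic-search idea, minimizing a least-squares figure of merit over potential parameters, can be sketched generically as below. The coordinate-descent search is a stand-in for illustration only; the abstract does not specify OPT13B's actual search algorithm, and the toy model in the usage example is not an optical-model form factor.

```python
def chi_square(params, model, data):
    """Least-squares figure of merit for an automatic parameter search:
    chi^2 = sum_i ((y_exp_i - y_calc_i) / err_i)^2
    where data is a list of (x, y_exp, err) tuples."""
    return sum(((y - model(x, params)) / e) ** 2 for x, y, e in data)

def coordinate_search(model, data, params, steps, n_iter=50):
    """Naive fixed-step coordinate descent on chi^2 (illustrative only)."""
    best = chi_square(params, model, data)
    for _ in range(n_iter):
        for i in range(len(params)):
            for sign in (+1.0, -1.0):
                trial = list(params)
                trial[i] += sign * steps[i]
                c = chi_square(trial, model, data)
                if c < best:
                    best, params = c, trial
    return params, best
```

Fitting a toy linear model y = 2x + 1 from a zero start recovers the parameters to within the step size, which is the essence of the "best fit in a least-squares sense" search the abstract describes.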
DEFF Research Database (Denmark)
Elesin, Y; Gerya, T; Artemieva, Irina
2010-01-01
We present a new 2D finite difference code, Samovar, for high-resolution numerical modeling of complex geodynamic processes. Examples are collision of lithospheric plates (including mountain building and subduction) and lithosphere extension (including formation of sedimentary basins, regions...
International Nuclear Information System (INIS)
Rattan, D.S.
1993-11-01
NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, the governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault, their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a User's Manual, describing simulation procedures, input data preparation, output and example test cases.
Directory of Open Access Journals (Sweden)
Abdul Latif Memon
2014-01-01
Full Text Available Many encoding schemes are used in OCDMA (Optical Code Division Multiple Access) networks, but SAC (Spectral Amplitude Coding) is widely used. It is considered an effective arrangement for eliminating the dominant noise called MAI (Multiple Access Interference). Various codes are evaluated with respect to their performance against three noise sources, namely shot noise, thermal noise and PIIN (Phase Induced Intensity Noise). Mathematical models for SNR (Signal to Noise Ratio) and BER (Bit Error Rate) are discussed, where the SNRs are calculated and the BERs are computed under the Gaussian distribution assumption. After analyzing the results mathematically, it is concluded that ZCC (Zero Cross Correlation) code performs better than the other selected SAC codes and can serve a larger number of active users. Analysis at various receiver power levels points out that RDC (Random Diagonal Code) also performs better than the other codes; for the power interval between -10 and -20 dBm, the performance of RDC is better than that of ZCC. Their low BER values suggest that these codes should be part of efficient and cost-effective OCDMA access networks in the future.
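The Gaussian-assumption BER computation described can be sketched as follows. The relation BER = (1/2) erfc(sqrt(SNR/8)) is the form commonly used in the SAC-OCDMA literature; the SNR helper simply assumes the shot, thermal and PIIN variances are independent and add, which is the usual modeling assumption rather than anything specific to this paper.

```python
import math

def ber_from_snr(snr):
    """Bit error rate under the Gaussian approximation, using the form
    common in the SAC-OCDMA literature: BER = (1/2) erfc(sqrt(SNR / 8))."""
    return 0.5 * math.erfc(math.sqrt(snr / 8.0))

def snr_total(i_sig, var_shot, var_thermal, var_piin=0.0):
    """SNR = (signal photocurrent)^2 / total noise variance, with the
    shot, thermal and PIIN contributions assumed independent."""
    return i_sig ** 2 / (var_shot + var_thermal + var_piin)
```

For example, an SNR of 144 already pushes the BER below 1e-8, which is why code designs that suppress PIIN (and hence raise SNR) can support more simultaneous active users.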
Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code
Waithe, Kenrick A.
2005-01-01
A source term model for steady micro jets was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet. The model is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested by comparison with data from numerical simulations of a single, steady micro jet on a flat plate in two and three dimensions. In two dimensions, the source term model's predicted velocity distribution compared well with that of a flat-plate simulation that used a steady mass flow boundary condition to represent the micro jet. The model was also compared with two three-dimensional flat plate cases that used a steady mass flow boundary condition to simulate the micro jet. The three-dimensional comparison included a case with a grid generated to capture the circular shape of the jet and a case without a grid generated for the micro jet; the latter mimics the application of the source term. The source term model compared well with both three-dimensional cases. Velocity distributions were compared before and after the jet, and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of individual or multiple steady micro jets, enabling a preliminary investigation with minimal grid generation and computational time.
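The core idea of the source term model, adding the jet's mass flow and momentum to the right-hand side of the conservation equations, can be sketched as follows. The function and variable names are hypothetical, not OVERFLOW's:

```python
def add_jet_sources(rhs_mass, rhs_mom, cell_ids, mdot, velocity, cell_volume):
    """Distribute a steady jet of total mass flow `mdot` (kg/s) and exit
    velocity `velocity` (m/s) over the cells containing the jet orifice,
    as volumetric source-density contributions to the mass and momentum
    right-hand sides of a finite-volume Navier-Stokes solver."""
    per_cell = mdot / len(cell_ids)
    for c in cell_ids:
        rhs_mass[c] += per_cell / cell_volume             # mass source density
        rhs_mom[c]  += per_cell * velocity / cell_volume  # momentum source density
    return rhs_mass, rhs_mom
```

Because the jet enters only through these source terms, no grid refinement around the orifice is needed, which is the practical advantage the abstract emphasizes.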
International Nuclear Information System (INIS)
Viswanathan, H.S.
1995-01-01
The finite element code FEHMN is a three-dimensional finite element heat and mass transport simulator that can handle complex stratigraphy and nonlinear processes such as vadose zone flow, heat flow and solute transport. Scientists at LANL have been developing hydrologic flow and transport models of the Yucca Mountain site using FEHMN. Previous FEHMN simulations have used an equivalent Kd model for solute transport. In this thesis, FEHMN is modified to make it possible to simulate the transport of a species with a rigorous chemical model. Including the rigorous chemical equations in FEHMN simulations should provide more representative transport models for highly reactive chemical species. A fully kinetic formulation is chosen for the FEHMN reactive transport model. Several methods are available to computationally implement a fully kinetic formulation, and different numerical algorithms are investigated in order to optimize the computational efficiency and memory requirements of the reactive transport model. The best algorithm of those investigated is then incorporated into FEHMN. The chosen algorithm requires the user to place strongly coupled species into groups, which are then solved for simultaneously. The complete reactive transport model is verified over a wide variety of problems and is shown to be working properly. The simulations demonstrate that gas flow and carbonate chemistry can significantly affect 14C transport at Yucca Mountain. The simulations also show that the new capabilities of FEHMN can be used to refine and buttress already existing Yucca Mountain radionuclide transport studies.
Energy Technology Data Exchange (ETDEWEB)
Jonkman, J.; Larsen, T.; Hansen, A.; Nygaard, T.; Maus, K.; Karimirad, M.; Gao, Z.; Moan, T.; Fylling, I.
2010-04-01
Offshore wind turbines are designed and analyzed using comprehensive simulation codes that account for the coupled dynamics of the wind inflow, aerodynamics, elasticity, and controls of the turbine, along with the incident waves, sea current, hydrodynamics, and foundation dynamics of the support structure. This paper describes the latest findings of the code-to-code verification activities of the Offshore Code Comparison Collaboration, which operates under Subtask 2 of the International Energy Agency Wind Task 23. In the latest phase of the project, participants used an assortment of codes to model the coupled dynamic response of a 5-MW wind turbine installed on a floating spar buoy in 320 m of water. Code predictions were compared from load-case simulations selected to test different model features. The comparisons have resulted in a greater understanding of offshore floating wind turbine dynamics and modeling techniques, and better knowledge of the validity of various approximations. The lessons learned from this exercise have improved the participants' codes, thus improving the standard of offshore wind turbine modeling.
Analysis of different containment models for IRIS small break LOCA, using GOTHIC and RELAP5 codes
International Nuclear Information System (INIS)
Papini, Davide; Grgic, Davor; Cammi, Antonio; Ricotti, Marco E.
2011-01-01
Advanced nuclear water reactors rely on containment behaviour in the realization of some of their passive safety functions. Steam condensation on containment walls, where non-condensable gas effects are significant, is an important feature of the new passive containment concepts, like the AP600/1000 ones. In this work the International Reactor Innovative and Secure (IRIS) was taken as reference, and the relevant condensation phenomena involved within its containment were investigated with different computational tools. In particular, the IRIS containment response to a small break LOCA (SBLOCA) was calculated with the GOTHIC and RELAP5 codes. A simplified model of the IRIS containment drywell was implemented in RELAP5 according to a sliced approach, based on the two-pipe-with-junction concept, while it was addressed with GOTHIC using several modelling options regarding both heat transfer correlations and volume and thermal structure nodalization. The influence on containment behaviour prediction was investigated in terms of drywell temperature and pressure response, heat transfer coefficient (HTC) and steam volume fraction distribution, and internal recirculating mass flow rate. The objective of the paper is to preliminarily compare the capability of the two codes in modelling the same postulated accident, and thus to check the results obtained with RELAP5 when applied in a situation not covered by its validation matrix (comprising SBLOCA and to some extent LBLOCA transients, but not explicitly the modelling of large dry containment volumes). The option to include or not droplets in the fluid mass flow discharged to the containment was the most influential parameter for the GOTHIC simulations. Despite some drawbacks, due, e.g., to a marked overestimation of internal natural recirculation, RELAP5 confirmed its capability to satisfactorily model the basic processes in the IRIS containment following an SBLOCA.
Atucha II NPP full scope simulator modelling with the thermal hydraulic code TRACRT
International Nuclear Information System (INIS)
Alonso, Pablo Rey; Ruiz, Jose Antonio; Rivero, Norberto
2011-01-01
In February 2010 NA-SA (Nucleoelectrica Argentina S.A.) awarded Tecnatom the Atucha II full scope simulator project. NA-SA is a public company that owns the Argentinean nuclear power plants. Atucha II is due to enter operation shortly. Atucha II NPP is a PHWR type plant cooled by the water of the Parana River and has the same design as the Atucha I unit, doubling its power capacity. Atucha II will produce 745 MWe utilizing heavy water as coolant and moderator, and natural uranium as fuel. A singular feature of the plant is its permanent core refueling. TRACRT is the first real-time thermal hydraulic six-equation code used in the training simulation industry for NSSS modeling. It is the result of adapting the best estimate code TRACG to real time. TRACRT is based on first-principles conservation equations for mass, energy and momentum for the liquid and steam phases, with two-phase flows under non-homogeneous and non-equilibrium conditions. At present, it has been successfully implemented in twelve full scope replica simulators in different training centers throughout the world. To ease the modeling task, TRACRT includes a graphical pre-processing tool designed to optimize this process and alleviate the burden of entering alphanumeric data in an input file. (author)
Development of new physical models devoted to internal dosimetry using the EGS4 Monte Carlo code
International Nuclear Information System (INIS)
Clairand, I.
1999-01-01
In the framework of diagnostic and therapeutic applications of nuclear medicine, the calculation of the absorbed dose at the organ scale is necessary for evaluating the risks incurred by patients after the intake of radiopharmaceuticals. Classical calculation methods provide only a rough estimate of this dose because they use dosimetric models based on anthropomorphic phantoms of average corpulence (reference adult man and woman). The aim of this work is to improve these models by better accounting for the physical characteristics of the patient in order to refine the dosimetric estimates. Several mathematical anthropomorphic phantoms representative of the morphological variations encountered in the adult population have been developed. The corresponding dosimetric parameters have been determined using the Monte Carlo method. The calculation code, based on the EGS4 Monte Carlo code, has been validated against literature data for reference phantoms. Several phantoms of different corpulence have been developed from the analysis of anthropometric data from medico-legal autopsies. The corresponding dosimetric estimates show the influence of morphological variations on the absorbed dose. Two examples of application, based on clinical data, confirm the interest of this approach with respect to classical methods. (J.S.)
Zhou, Jian; Troyanskaya, Olga G.
2016-01-01
Interpreting the functional state of chromatin from the combinatorial binding patterns of chromatin factors, that is, the chromatin codes, is crucial for decoding the epigenetic state of the cell. Here we present a systematic map of Drosophila chromatin states derived from data-driven probabilistic modelling of dependencies between chromatin factors. Our model not only recapitulates enhancer-like chromatin states as indicated by widely used enhancer marks but also divides these states into three functionally distinct groups, of which only one specific group possesses active enhancer activity. Moreover, we discover a strong association between one specific enhancer state and RNA Polymerase II pausing, linking transcription regulatory potential and chromatin organization. We also observe that with the exception of long-intron genes, chromatin state transition positions in transcriptionally active genes align with an absolute distance to their corresponding transcription start site, regardless of gene length. Using our method, we provide a resource that helps elucidate the functional and spatial organization of the chromatin code landscape. PMID:26841971
Modeling of Al wire-array implosions on existing generators with the ''ZPIMP'' Code
International Nuclear Information System (INIS)
Giuliani, J.L. Jr.; Rogerson, J.; Davis, J.; Spielman, R.
1994-01-01
Aluminum wire-array implosion experiments have been performed on a number of pulsed-power generators, including Double EAGLE, SATURN, PHOENIX, and BLACKJACK 5. Recently, the measured K-shell emissions (> 1 keV) from the first three machines were compared with predictions of the Whitney-Thornhill scaling law. The present report employs a numerical simulation code to make a similar comparison. The code ZPIMP models the imploding Al plasma shell as a single, but variable width, zone. Two MHD momentum equations, an electron energy equation, an ion energy equation, and a magnetic diffusion equation are solved in coupled fashion with a lumped driver circuit model. Artificial viscosity is included in this Lagrangian formulation. Al ionization dynamics and probability-of-escape radiation transport are calculated self-consistently. Atomic structure is included for all ionization stages of the Al. Results of SATURN simulations and comparisons with data will be presented in an initial radius-mass loading plane. When the region interior to the primary plasma shell is treated as an adiabatic gas, the simulations match the existing data remarkably well for energetic implosions. The significant differences for the few small-η implosions are discussed. Comparisons with the other three generators mentioned above are anticipated. The report will focus on significant trends among the studied generators.
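A drastically simplified single-zone ("slug") implosion model in the spirit described above might look like the following sketch. The thin-shell force law and the sin² current pulse are generic assumptions for illustration, not ZPIMP's actual shell or circuit treatment:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (SI)

def slug_implosion(mass_per_length, r0, i_peak, t_rise, dt=1e-10, r_min_frac=0.1):
    """Minimal slug model: a thin shell of line mass (kg/m) at radius r is
    accelerated inward by the magnetic pressure of a sin^2 current ramp,
    a = -mu0 * I(t)^2 / (4 * pi * r * m).  Returns the time at which the
    shell radius reaches r_min_frac * r0 (a crude implosion time)."""
    r, v, t = r0, 0.0, 0.0
    while r > r_min_frac * r0:
        current = i_peak * math.sin(0.5 * math.pi * min(t / t_rise, 1.0)) ** 2
        accel = -MU0 * current**2 / (4.0 * math.pi * r * mass_per_length)
        v += accel * dt
        r += v * dt
        t += dt
    return t
```

With illustrative SATURN-like numbers (1 mg/cm array mass, 1 cm initial radius, 7 MA peak current, 50 ns rise), such a model yields implosion times of order 100 ns, which is the kind of initial-radius/mass-loading trend the report maps out.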
An Optimization Model for Design of Asphalt Pavements Based on IHAP Code Number 234
Directory of Open Access Journals (Sweden)
Ali Reza Ghanizadeh
2016-01-01
Full Text Available Pavement construction is one of the most costly parts of transportation infrastructure. Incommensurate design and construction of pavements, in addition to the loss of the initial investment, imposes indirect costs on road users and reduces road safety. This paper proposes an optimization model to determine the optimal configuration as well as the optimum thickness of the different pavement layers based on the Iran Highway Asphalt Paving Code Number 234 (IHAP Code 234). After developing the optimization model, the optimum thickness of pavement layers for secondary rural roads, major rural roads, and freeways was determined based on the recommended prices in the “Basic Price List for Road, Runway and Railway” of Iran in 2015, and several charts were developed to determine the optimum thickness of pavement layers, including asphalt concrete, granular base, and granular subbase, with respect to road classification, design traffic, and resilient modulus of subgrade. The design charts confirm that in the current situation (material prices in 2015), application of an asphalt treated layer in the pavement structure is not cost effective. It was also shown that, as the strength of the subgrade soil increases, the subbase layer may be removed from the optimum pavement structure.
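The optimization described above can be sketched as a constrained cost minimization over layer thicknesses. The layer coefficients, unit prices, and candidate thicknesses below are illustrative placeholders, not the IHAP Code 234 values:

```python
import itertools

def optimize_thickness(sn_required, layers):
    """Enumerate candidate thicknesses (cm) for each layer and keep the
    cheapest design whose weighted structural capacity meets the requirement.
    `layers` is a list of (structural coefficient per cm, cost per cm,
    iterable of candidate thicknesses); returns (cost, thicknesses) or None."""
    best = None
    for combo in itertools.product(*(cands for _, _, cands in layers)):
        sn = sum(a * d for (a, _, _), d in zip(layers, combo))
        cost = sum(c * d for (_, c, _), d in zip(layers, combo))
        if sn >= sn_required and (best is None or cost < best[0]):
            best = (cost, combo)
    return best
```

Repeating such a search over a range of design traffic levels and subgrade moduli is one way the design charts in the paper could be generated; it also makes visible why a layer drops out of the optimum when its cost per unit of structural contribution is dominated by the others.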
Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence
Pastawski, Fernando; Yoshida, Beni; Harlow, Daniel; Preskill, John
2015-06-01
We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed in [1].
Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence
Energy Technology Data Exchange (ETDEWEB)
Pastawski, Fernando; Yoshida, Beni [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States); Harlow, Daniel [Princeton Center for Theoretical Science, Princeton University,400 Jadwin Hall, Princeton NJ 08540 (United States); Preskill, John [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States)
2015-06-23
We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed in http://dx.doi.org/10.1007/JHEP04(2015)163.
Directory of Open Access Journals (Sweden)
Anthony McCosker
2014-03-01
Full Text Available As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.
Development Status of the PEBBLES Code for Pebble Mechanics: Improved Physical Models and Speed-up
Energy Technology Data Exchange (ETDEWEB)
Joshua J. Cogliati; Abderrafi M. Ougouag
2009-09-01
PEBBLES is a code for simulating the motion of all the pebbles in a pebble bed reactor. Since pebble bed reactors are packed randomly and not precisely placed, the locations of the fuel elements in the reactor are not deterministically known. Instead, when determining operating parameters, the motion of the pebbles can be simulated and stochastic locations can be found. The PEBBLES code can output information relevant to other simulations of pebble bed reactors, such as the positions of the pebbles in the reactor, the packing fraction change in an earthquake, and the velocity profiles created by recirculation. The goal for this level three milestone was to speed up the PEBBLES code through implementation on massively parallel computers. Work on this goal has resulted in speeding up the single processor version and in the creation of a new parallel version of PEBBLES. Both the single processor version and the parallel running capability of the PEBBLES code have improved since the start of the fiscal year. The hybrid MPI/OpenMP version of PEBBLES was created this year to run on the increasingly common cluster hardware profile that combines nodes with multiple processors that share memory with a cluster of nodes that are networked together. The OpenMP portions use the Open Multi-Processing shared memory parallel processing model to split the task across processors in a single node that shares memory. The Message Passing Interface (MPI) portion uses messages to communicate between different nodes over a network. The following are wall-clock speed-ups for simulating an NGNP-600 sized reactor. The single processor version runs 1.5 times faster than the single processor version at the beginning of the fiscal year, a speedup primarily due to the improved static friction model described in the report. When running on 64 processors, the new MPI/OpenMP hybrid version has a wall-clock speed-up of 22 times compared to the current single processor version. When using 88 processors, a
Development Status of the PEBBLES Code for Pebble Mechanics: Improved Physical Models and Speed-up
Energy Technology Data Exchange (ETDEWEB)
Joshua J. Cogliati; Abderrafi M. Ougouag
2009-12-01
PEBBLES is a code for simulating the motion of all the pebbles in a pebble bed reactor. Since pebble bed reactors are packed randomly and not precisely placed, the locations of the fuel elements in the reactor are not deterministically known. Instead, when determining operating parameters, the motion of the pebbles can be simulated and stochastic locations can be found. The PEBBLES code can output information relevant to other simulations of pebble bed reactors, such as the positions of the pebbles in the reactor, the packing fraction change in an earthquake, and the velocity profiles created by recirculation. The goal for this level three milestone was to speed up the PEBBLES code through implementation on massively parallel computers. Work on this goal has resulted in speeding up the single processor version and in the creation of a new parallel version of PEBBLES. Both the single processor version and the parallel running capability of the PEBBLES code have improved since the start of the fiscal year. The hybrid MPI/OpenMP version of PEBBLES was created this year to run on the increasingly common cluster hardware profile that combines nodes with multiple processors that share memory with a cluster of nodes that are networked together. The OpenMP portions use the Open Multi-Processing shared memory parallel processing model to split the task across processors in a single node that shares memory. The Message Passing Interface (MPI) portion uses messages to communicate between different nodes over a network. The following are wall-clock speed-ups for simulating an NGNP-600 sized reactor. The single processor version runs 1.5 times faster than the single processor version at the beginning of the fiscal year, a speedup primarily due to the improved static friction model described in the report. When running on 64 processors, the new MPI/OpenMP hybrid version has a wall-clock speed-up of 22 times compared to the current single processor version. When using 88 processors, a
Modelling transient 3D multi-phase criticality in fluidised granular materials - the FETCH code
International Nuclear Information System (INIS)
Pain, C.C.; Gomes, J.L.M.A.; Eaton, M.D.; Ziver, A.K.; Umpleby, A.P.; Oliveira, C.R.E. de; Goddard, A.J.H.
2003-01-01
The development and application of a generic model for modelling criticality in fluidised granular materials is described within the Finite Element Transient Criticality (FETCH) code, which models criticality transients in spatial and temporal detail from fundamental principles, as far as is currently possible. The neutronics model in FETCH solves the neutron transport equation in full phase space, with a spherical harmonics representation of the angle of travel, a multi-group treatment of neutron energy, Crank-Nicolson based time stepping, and finite elements in space. The fluids model coupled with the neutronics model is a two-fluid granular-temperature model, also finite element based. Separate fluids are used to represent the liquid/vapour-gas phase and the solid fuel-particle phase. Particle-particle and particle-wall interactions are modelled using a kinetic theory approach, based on an analogy between the motion of gas molecules subject to binary collisions and granular flows. This model has been extensively validated by comparison with fluidised bed experimental results. Gas-fluidised beds involve particles that are often extremely agitated (as measured by granular temperature) and can thus be viewed as a particularly demanding application of the two-fluid model. Liquid fluidised systems are of criticality interest, but these can become demanding with the production of gases (e.g. radiolytic and water vapour) and large fluid/particle velocities in energetic transients. We present results from a test transient model in which fissile material (239Pu) is present as spherical granules subsiding in water, located in a tank initially at constant temperature and at two alternative over-pressures, in order to verify the theoretical model implemented in FETCH. (author)
The effect of hot spots upon swelling of Zircaloy cladding as modelled by the code CANSWEL-2
International Nuclear Information System (INIS)
Haste, T.J.; Gittus, J.H.
1980-12-01
The code CANSWEL-2 models cladding creep deformation under conditions relevant to a loss-of-coolant accident (LOCA) in a pressurised-water reactor (PWR). It can treat azimuthal non-uniformities in cladding thickness and temperature, and model the mechanical restraint imposed by the nearest neighbouring rods, including situations where cladding is forced into non-circular shapes. The physical and mechanical models used in the code are presented. Applications of the code are described, both as a stand-alone version and as part of the PWR LOCA code MABEL-2. Comparison with a limited number of relevant out-of-reactor creep strain experiments has generally shown encouraging agreement with the data. (author)
Modeling of High pressure core spray system (HPCS) with TRAC-BF1 code
International Nuclear Information System (INIS)
Angel M, E. Del.
1993-01-01
In this work we present a model of the HPCS system of the Laguna Verde Nuclear Power Plant (CNLV), which consists of a condensate storage tank (TAC), a vertical surge pipe (TOS) implemented by the Comision Federal de Electricidad (CFE), and the suppression pool (PSP), as well as the associated piping and valves for the HPCS pump suction, to study the system under transient conditions. The analysis results show that the implemented surge pipe allows a normal HPCS pump start without automatic inadvertent transfer of the HPCS pump suction from the condensate storage tank to the suppression pool. We briefly mention some problems found during the simulation and their solutions, and we point out some deficiencies of the code for this type of study. The present model can be used to simulate other transients with only minor modifications. (Author)
Model of U3Si2 Fuel System using BISON Fuel Code
Energy Technology Data Exchange (ETDEWEB)
K. E. Metzger; T. W. Knight; R. L. Williamson
2014-04-01
This research considers the proposed advanced fuel system: U3Si2 combined with an advanced cladding. U3Si2 has a number of advantageous thermophysical properties, which motivate its use as an accident tolerant fuel. This preliminary model evaluates the behavior of U3Si2, using available thermophysical data to predict the cladding and fuel pellet temperature and stress with the fuel performance code BISON. The preliminary results obtained from the U3Si2 fuel model describe the mechanism of Pellet-Clad Mechanical Interaction for this system, while more extensive testing, including creep testing of U3Si2, is planned for improved understanding of the thermophysical properties needed to predict fuel performance.
Improvements of Physical Models in TRITGO code for Tritium Behavior Analysis in VHTR
International Nuclear Information System (INIS)
Yoo, Jun Soo; Tak, Nam Il; Lim, Hong Sik
2010-01-01
Since tritium is a radioactive material with a half-life of 12.32 years and is generated by ternary fission in the fuel as well as by neutron absorption reactions of impurities in the Very High Temperature gas-cooled Reactor (VHTR) core, accurate prediction of tritium behavior and of its concentration in the product hydrogen is important for public safety. In this respect, the TRITGO code was developed by General Atomics (GA) for estimating tritium production and distribution in high temperature gas-cooled reactors. However, some of its models are hard-wired to a specific reactor type or are too simplified, which makes the analysis results less applicable. Thus, major improvements need to be considered for better predictions. In this study, several model improvements are suggested and their effects are evaluated through analysis of the PMR600 design concept.
Fuel relocation modeling in the SAS4A accident analysis code system
International Nuclear Information System (INIS)
Tentner, A.M.; Miles, K.J.; Kalimullah; Hill, D.J.
1986-01-01
The SAS4A code system has been designed for the analysis of the initial phase of Hypothetical Core Disruptive Accidents (HCDAs), up to gross melting or failure of the subassembly walls. During such postulated accident scenarios as the Loss-of-Flow (LOF) and Transient-Overpower (TOP) events, the relocation of the fuel plays a key role in determining the sequence of events and the amount of energy produced before neutronic shutdown. This paper discusses the general strategy used in modeling the various phenomena that lead to fuel relocation and presents the key fuel relocation models used in SAS4A. The implications of these models for whole-core accident analysis, as well as recent fuel relocation results, are emphasized. 12 refs
New Source Term Model for the RESRAD-OFFSITE Code Version 3
Energy Technology Data Exchange (ETDEWEB)
Yu, Charley [Argonne National Lab. (ANL), Argonne, IL (United States); Gnanapragasam, Emmanuel [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, Jing-Jy [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, Sunita [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Shih-Yew [Argonne National Lab. (ANL), Argonne, IL (United States)
2013-06-01
This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.
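The "first order release with transport" option described in (1) can be sketched analytically: the inventory declines by leaching plus radioactive decay, and the release rate is the leach rate times the remaining inventory. This is a sketch of the stated behavior, not RESRAD-OFFSITE source code:

```python
import math

def release_rate(t, inventory0, leach_rate, decay_const):
    """First-order release: inventory I(t) = I0 * exp(-(L + lam) * t),
    release rate R(t) = L * I(t), where L is the user-specified leach rate
    (the proportionality constant in the abstract) and lam the decay constant."""
    inventory = inventory0 * math.exp(-(leach_rate + decay_const) * t)
    return leach_rate * inventory
```

The "equilibrium desorption" and "uniform release" options would replace this rate law with a Kd partitioning relation and a constant-fraction release per time interval, respectively.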
Coding a Weather Model: DOE-FIU Science & Technology Workforce Development Program.
Energy Technology Data Exchange (ETDEWEB)
Bradley, Jon David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-12-01
DOE Fellow, Andres Cremisini, completed a 10-week internship with Sandia National Laboratories (SNL) in Albuquerque, New Mexico. Under the management of Kristopher Klingler and the mentorship of Jon Bradley, he was tasked with conceiving and coding a realistic weather model for use in physical security applications. The objective was to make a weather model that could use real data to accurately predict wind and precipitation conditions at any location of interest on the globe at any user-determined time. The intern received guidance on software design, the C++ programming language and clear communication of project goals and ongoing progress. In addition, Mr. Cremisini was given license to structure the program however he best saw fit, an experience that will benefit ongoing research endeavors.
A physiologically-inspired model of numerical classification based on graded stimulus coding
Directory of Open Access Journals (Sweden)
John Pearson
2010-01-01
Full Text Available In most natural decision contexts, the process of selecting among competing actions takes place in the presence of informative, but potentially ambiguous, stimuli. Decisions about magnitudes—quantities like time, length, and brightness that are linearly ordered—constitute an important subclass of such decisions. It has long been known that perceptual judgments about such quantities obey Weber’s Law, wherein the just-noticeable difference in a magnitude is proportional to the magnitude itself. Current physiologically inspired models of numerical classification assume discriminations are made via a labeled-line code of neurons selectively tuned for numerosity, a pattern observed in the firing rates of neurons in the ventral intraparietal area (VIP of the macaque. By contrast, neurons in the contiguous lateral intraparietal area (LIP signal numerosity in a graded fashion, suggesting the possibility that numerical classification could be achieved in the absence of neurons tuned for number. Here, we consider the performance of a decision model based on this analog coding scheme in a paradigmatic discrimination task—numerosity bisection. We demonstrate that a basic two-neuron classifier model, derived from experimentally measured monotonic responses of LIP neurons, is sufficient to reproduce the numerosity bisection behavior of monkeys, and that the threshold of the classifier can be set by reward maximization via a simple learning rule. In addition, our model predicts deviations from Weber’s Law scaling of choice behavior at high numerosity. Together, these results suggest both a generic neuronal framework for magnitude-based decisions and a role for reward contingency in the classification of such stimuli.
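The two-neuron classifier described above can be sketched with generic monotonic response functions; the saturating forms and the parameter k below are illustrative assumptions, not fitted LIP data:

```python
def classify(n, threshold, k=8.0):
    """Two-neuron read-out for numerosity bisection: one unit's response
    rises monotonically with numerosity n, the other falls, and the choice
    compares their difference to a threshold.  In the paper's account the
    threshold would be adapted by a reward-maximizing learning rule;
    here it is simply a parameter."""
    r_up = n / (n + k)      # monotonically increasing (graded) response
    r_down = k / (n + k)    # monotonically decreasing response
    return "large" if r_up - r_down > threshold else "small"
```

Because both responses saturate at high n, the difference signal compresses, which is the kind of mechanism that can produce the predicted deviations from Weber's Law scaling at high numerosity.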
Adjudicating between face-coding models with individual-face fMRI responses.
Directory of Open Access Journals (Sweden)
Johan D Carlin
2017-07-01
Full Text Available The perceptual representation of individual faces is often explained with reference to a norm-based face space. In such spaces, individuals are encoded as vectors where identity is primarily conveyed by direction and distinctiveness by eccentricity. Here we measured human fMRI responses and psychophysical similarity judgments of individual face exemplars, which were generated as realistic 3D animations using a computer-graphics model. We developed and evaluated multiple neurobiologically plausible computational models, each of which predicts a representational distance matrix and a regional-mean activation profile for 24 face stimuli. In the fusiform face area, a face-space coding model with sigmoidal ramp tuning provided a better account of the data than one based on exemplar tuning. However, an image-processing model with weighted banks of Gabor filters performed similarly. Accounting for the data required the inclusion of a measurement-level population averaging mechanism that approximates how fMRI voxels locally average distinct neuronal tunings. Our study demonstrates the importance of comparing multiple models and of modeling the measurement process in computational neuroimaging.
Revision of Collisional-Radiative Models and Neutral-Transport Code for Hydrogen and Helium Species
International Nuclear Information System (INIS)
Sawada, Keiji; Goto, Motoshi
2013-01-01
We have been developing collisional-radiative models and a neutral-transport code for hydrogen and helium species, which are used to investigate fusion plasmas. Collisional-radiative models of atomic hydrogen and helium have been applied to a helium-hydrogen RF plasma at Shinshu University, Japan, to test whether these models reproduce the observed emission intensities. The electron temperature and density are determined from visible emission line intensities of helium atoms, considering photoexcitation from the ground state to the singlet P states, which is accompanied by radiation trapping. From the observed hydrogen Balmer γ line intensity, which is hardly affected by photoexcitation, the atomic hydrogen density is determined using a hydrogen collisional-radiative model that ignores photoexcitation. The atomic hydrogen temperature, which reproduces the Balmer α and β line intensities, is determined using an iterative hydrogen atom collisional-radiative model that calculates photoexcitation rates. R-matrix cross sections for n≤5 are used in the model. It is hoped that precise cross sections for higher-lying levels will be produced to determine the atomic density in fusion plasmas.
A computational code for resolution of general compartment models applied to internal dosimetry
International Nuclear Information System (INIS)
Claro, Thiago R.; Todo, Alberto S.
2011-01-01
The dose resulting from internal contamination can be estimated with the use of biokinetic models combined with experimental results obtained from bioanalysis and knowledge of the incorporation time. Biokinetic models can be represented by a set of compartments expressing the transport, retention and elimination of radionuclides from the body. ICRP Publications 66, 78 and 100 present compartmental models for the respiratory tract, the gastrointestinal tract and the systemic distribution of an array of radionuclides of interest for radiological protection. The objective of this work is to develop a computational code for the design, visualization and resolution of compartmental models of any nature. Four different techniques are available for the resolution of the system of differential equations, including semi-analytical and numerical methods. The software was developed in the C# programming language, using a Microsoft Access database and XML standards for file exchange with other applications. Compartmental models for uranium, thorium and iodine radionuclides were generated for the validation of the CBT software. The models were subsequently solved by the SSID software and the results compared with the values published in ICRP Publication 78. In all cases the results are in accordance with the values published by the ICRP. (author)
A computational code for resolution of general compartment models applied to internal dosimetry
Energy Technology Data Exchange (ETDEWEB)
Claro, Thiago R.; Todo, Alberto S., E-mail: claro@usp.br, E-mail: astodo@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2011-07-01
The dose resulting from internal contamination can be estimated with the use of biokinetic models combined with experimental results obtained from bioanalysis and knowledge of the incorporation time. Biokinetic models can be represented by a set of compartments expressing the transport, retention and elimination of radionuclides from the body. ICRP Publications 66, 78 and 100 present compartmental models for the respiratory tract, the gastrointestinal tract and the systemic distribution of an array of radionuclides of interest for radiological protection. The objective of this work is to develop a computational code for the design, visualization and resolution of compartmental models of any nature. Four different techniques are available for the resolution of the system of differential equations, including semi-analytical and numerical methods. The software was developed in the C# programming language, using a Microsoft Access database and XML standards for file exchange with other applications. Compartmental models for uranium, thorium and iodine radionuclides were generated for the validation of the CBT software. The models were subsequently solved by the SSID software and the results compared with the values published in ICRP Publication 78. In all cases the results are in accordance with the values published by the ICRP. (author)
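A minimal sketch of the kind of compartment system such a code resolves: a two-compartment chain solved semi-analytically (a Bateman-type solution, one of several possible solution techniques). The structure and rate constants below are illustrative, not an ICRP model:

```python
import math

def bateman_two_compartment(q0, k1, k2, t):
    """Analytic retention for a simple chain:
    compartment 1 -> compartment 2 -> excretion.

    q0 : initial activity in compartment 1
    k1 : transfer rate from 1 to 2 (per day)
    k2 : elimination rate from compartment 2 (per day)
    Returns (q1, q2), the contents of the two compartments at time t.
    """
    q1 = q0 * math.exp(-k1 * t)
    if abs(k1 - k2) < 1e-12:                     # degenerate case k1 == k2
        q2 = q0 * k1 * t * math.exp(-k1 * t)
    else:
        q2 = q0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    return q1, q2
```

Larger ICRP-style systems couple many such compartments; they are solved either by matrix-exponential (semi-analytical) methods or by numerical ODE integration, which is why a general code offers several techniques.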
A wavelet-based neural model to optimize and read out a temporal population code
Luvizotto, Andre; Rennó-Costa, César; Verschure, Paul F. M. J.
2012-01-01
It has been proposed that the dense excitatory local connectivity of the neo-cortex plays a specific role in the transformation of spatial stimulus information into a temporal representation or a temporal population code (TPC). TPC provides for a rapid, robust, and high-capacity encoding of salient stimulus features with respect to position, rotation, and distortion. The TPC hypothesis gives a functional interpretation to a core feature of the cortical anatomy: its dense local and sparse long-range connectivity. Thus far, the question of how the TPC encoding can be decoded in downstream areas has not been addressed. Here, we present a neural circuit that decodes the spectral properties of the TPC using a biologically plausible implementation of a Haar transform. We perform a systematic investigation of our model in a recognition task using a standardized stimulus set. We consider alternative implementations using either regular spiking or bursting neurons and a range of spectral bands. Our results show that our wavelet readout circuit provides for the robust decoding of the TPC and further compresses the code without losing speed or quality of decoding. We show that in the TPC signal the relevant stimulus information is present in the frequencies around 100 Hz. Our results show that the TPC is constructed around a small number of coding components that can be well decoded by wavelet coefficients in a neuronal implementation. The solution to the TPC decoding problem proposed here suggests that cortical processing streams might well consist of sequential operations in which spatio-temporal transformations at lower levels form a compact stimulus encoding using TPC that is subsequently decoded back to a spatial representation using wavelet transforms. In addition, the results presented here show that different properties of the stimulus might be transmitted to further processing stages using different frequency components that are captured by appropriately tuned
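The Haar readout at the heart of the model can be illustrated with a plain orthonormal Haar transform. This is a generic signal-processing sketch, not the paper's neuronal implementation:

```python
import math

def haar_transform(signal):
    """Full orthonormal Haar wavelet transform (length must be a power
    of two). At each level, pairwise averages capture slow components
    and pairwise differences capture fast ones -- a crude stand-in for
    reading out different spectral bands of a temporal code.
    """
    out = list(signal)
    n = len(out)
    while n > 1:
        half = n // 2
        avg = [(out[2*i] + out[2*i+1]) / math.sqrt(2) for i in range(half)]
        dif = [(out[2*i] - out[2*i+1]) / math.sqrt(2) for i in range(half)]
        out[:n] = avg + dif
        n = half
    return out
```

Because the transform is orthonormal, it preserves signal energy while concentrating it in a few coefficients, which is the sense in which such a readout "compresses" a temporal code.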
International Nuclear Information System (INIS)
Viswanathan, H.S.
1996-08-01
The finite element code FEHMN, developed by scientists at Los Alamos National Laboratory (LANL), is a three-dimensional finite element heat and mass transport simulator that can handle complex stratigraphy and nonlinear processes such as vadose-zone flow, heat flow and solute transport. Scientists at LANL have been developing hydrologic flow and transport models of the Yucca Mountain site using FEHMN. Previous FEHMN simulations have used an equivalent Kd model for solute transport. In this thesis, FEHMN is modified to make it possible to simulate the transport of a species with a rigorous chemical model. Including the rigorous chemical equations in FEHMN simulations should provide more representative transport models for highly reactive chemical species. A fully kinetic formulation is chosen for the FEHMN reactive transport model. Several methods are available to computationally implement a fully kinetic formulation. Different numerical algorithms are investigated in order to optimize the computational efficiency and memory requirements of the reactive transport model. The best algorithm of those investigated is then incorporated into FEHMN. The chosen algorithm requires the user to place strongly coupled species into groups, which are then solved simultaneously. The complete reactive transport model is verified over a wide variety of problems and is shown to be working properly. The new chemical capabilities of FEHMN are illustrated by using Los Alamos National Laboratory's site-scale model of Yucca Mountain to model two-dimensional, vadose-zone 14C transport. The simulations demonstrate that gas flow and carbonate chemistry can significantly affect 14C transport at Yucca Mountain. The simulations also show that the new capabilities of FEHMN can be used to refine and buttress existing Yucca Mountain radionuclide transport studies.
International Nuclear Information System (INIS)
Calloo, A.A.
2012-01-01
In reactor physics, calculation schemes with deterministic codes are validated with respect to a reference Monte Carlo code. The remaining biases are attributed to the approximations and models induced by multigroup theory (self-shielding models and the expansion of the scattering law using Legendre polynomials) to represent physical phenomena (resonant absorption and scattering anisotropy, respectively). This work focuses on the relevance of a polynomial expansion to model the scattering law. Since the outset of reactor physics, the latter has been expanded on a truncated Legendre polynomial basis. However, the transfer cross sections are highly anisotropic, with non-zero values for only a very small range of the cosine of the scattering angle. Moreover, the finer the energy mesh and the lighter the scattering nucleus, the more exacerbated is the peaked shape of this cross section. As such, the Legendre expansion is ill-suited to represent the scattering law. Furthermore, this model induces negative values, which are non-physical. In this work, various scattering laws are briefly described and the limitations of the existing model are pointed out. Hence, piecewise-constant functions have been used to represent the multigroup scattering cross section. This representation requires a different model for the scattering source. The discrete ordinates method, which is widely employed to solve the transport equation, has been adapted. Thus, a finite volume method for angular discretization has been developed and implemented in the PARIS environment, which hosts the Sn solver Snatch. The angular finite volume method has been compared to the collocation method with Legendre moments to ensure its proper performance. Moreover, unlike the latter, this method is suitable for both the Legendre-moment and piecewise-constant representations of the scattering cross section. This hybrid-source method has been validated for different cases: fuel cell in infinite lattice
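The limitation discussed above, that a truncated Legendre expansion of a sharply peaked transfer cross section takes non-physical negative values, can be reproduced with a toy kernel. The step function below is purely illustrative, not a real cross section:

```python
def legendre(l, x):
    """Legendre polynomial P_l(x) via the Bonnet recurrence."""
    p0, p1 = 1.0, x
    if l == 0:
        return p0
    for k in range(2, l + 1):
        p0, p1 = p1, ((2*k - 1) * x * p1 - (k - 1) * p0) / k
    return p1

def truncated_expansion(f, order, x, nquad=2000):
    """Project f onto P_0..P_order over [-1, 1] (trapezoid quadrature)
    and evaluate the truncated Legendre series at x."""
    h = 2.0 / nquad
    total = 0.0
    for l in range(order + 1):
        # moment: f_l = (2l+1)/2 * integral of f(mu) P_l(mu) d(mu)
        s = sum(f(-1 + i*h) * legendre(l, -1 + i*h) for i in range(nquad + 1))
        s -= 0.5 * (f(-1.0) * legendre(l, -1.0) + f(1.0) * legendre(l, 1.0))
        total += (2*l + 1) / 2.0 * s * h * legendre(l, x)
    return total

# A sharply forward-peaked "transfer kernel": nonzero only near mu = 1.
peak = lambda mu: 1.0 if mu > 0.95 else 0.0
```

Evaluating the order-5 truncated series of this kernel at backward angles yields negative values, while a piecewise-constant (multigroup-in-angle) representation is non-negative by construction; this is precisely the motivation for the hybrid-source approach.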
Modeling the reactor core of MNSR to simulate its dynamic behavior using the code PARET
International Nuclear Information System (INIS)
Hainoun, A.; Alhabet, F.
2004-02-01
Using the computer code PARET, the core of the MNSR reactor was modelled, and the neutronics and thermal-hydraulic behaviour of the reactor core was simulated for the steady state and for selected transients involving step reactivity changes, including control rod withdrawal starting from steady state at various low power levels. For this purpose a PARET input model of the MNSR core has been developed, enabling the simulation of the neutron kinetics and thermal hydraulics of the reactor core, including reactivity feedback effects. The neutron kinetic model is based on point kinetics with 15 delayed-neutron groups, including the photoneutrons of the beryllium reflector. The effect of the photoneutrons on the dynamic behaviour has been analysed through two additional calculations: in the first, the photoneutron yield was neglected completely, and in the second, its share was added to the sixth group of delayed neutrons. In the thermal-hydraulic model the fuel elements with their cooling channels were distributed into 4 groups with different radial power factors. The pressure loss factors for friction, flow direction change, expansion and contraction were estimated using suitable approaches. The calculated relative neutron flux change and core average temperature were found to be consistent with the experimental measurements. Furthermore, the simulation has indicated the influence of the photoneutrons of the beryllium reflector on the neutron flux behaviour. For the reliability of the results, a sensitivity analysis was carried out to consider the uncertainty in some important parameters such as the temperature feedback coefficient and the flow velocity. In addition, the application of PARET to the simulation of void formation in the subcooled boiling regime was tested. The calculation indicates the capability of PARET in modelling this phenomenon, although a large discrepancy between the calculated and measured axial void distributions was observed.
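The point-kinetics model underlying such simulations can be sketched with a single effective delayed-neutron group (the abstract's model uses 15 groups including photoneutrons; all numbers below are illustrative, not MNSR data):

```python
def point_kinetics(rho, beta, lam, Lambda, t_end, dt=1e-4):
    """Point kinetics with one effective delayed group, explicit Euler.

    rho    : step reactivity (dk/k), inserted at t = 0
    beta   : delayed-neutron fraction
    lam    : precursor decay constant (1/s)
    Lambda : prompt-neutron generation time (s)
    Returns the relative power n(t_end) for n(0) = 1.
    """
    n = 1.0
    c = beta * n / (Lambda * lam)   # equilibrium precursor concentration
    t = 0.0
    while t < t_end:
        dn = ((rho - beta) / Lambda * n + lam * c) * dt
        dc = (beta / Lambda * n - lam * c) * dt
        n += dn
        c += dc
        t += dt
    return n
```

With zero reactivity the power stays flat; a positive step below beta produces the familiar prompt jump followed by a slow rise on the stable period.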
Assessment of PWR fuel degradation by post-irradiation examinations and modeling in DEGRAD-1 code
International Nuclear Information System (INIS)
Castanheira, Myrthes; Lucki, Georgi; Silva, Jose Eduardo Rosa da; Terremoto, Luis A.A.; Silva, Antonio Teixeira e; Teodoro, Celso A.; Damy, Margaret de A.
2005-01-01
In the majority of cases, investigations of primary and secondary failures in PWR fuel rods are based on the analysis of non-destructive examination results (coolant activity monitoring, sipping tests, visual examination). The complementary analysis methodology proposed in this work includes a modeling approach to characterize the physical effects of the individual chemical mechanisms that constitute the incubation phase of the degradation phenomenon after a primary failure, integrated over the reactor operational history under stationary operation and normal power transients. The computational program DEGRAD-1 was developed based on this modeling approach. The practical outcome of the program is to predict cladding regions susceptible to massive hydriding. The applications presented demonstrate the validity of the proposed method and models through the simulation of actual cases in which the positions of the (primary and secondary) defects were known and their formation time was estimated. By using the modeling approach, a relationship between the hydrogen concentration in the gap and the inner cladding oxide thickness has been identified which, when satisfied, will induce massive hydriding. The novelty of this work is the integrated methodology, which supplements the traditional analysis methods (using data from non-destructive techniques) with mathematical models for hydrogen evolution, oxidation and hydriding that include refined approaches and criteria for PWR fuel, using the FRAPCON-3 fuel performance code as the basic tool. (author)
A model of polarized-beam AGS in the ray-tracing code Zgoubi
Energy Technology Data Exchange (ETDEWEB)
Meot, F. [Brookhaven National Lab. (BNL), Upton, NY (United States); Ahrens, L. [Brookhaven National Lab. (BNL), Upton, NY (United States); Brown, K. [Brookhaven National Lab. (BNL), Upton, NY (United States); Dutheil, Y. [Brookhaven National Lab. (BNL), Upton, NY (United States); Glenn, J. [Brookhaven National Lab. (BNL), Upton, NY (United States); Huang, H. [Brookhaven National Lab. (BNL), Upton, NY (United States); Roser, T. [Brookhaven National Lab. (BNL), Upton, NY (United States); Shoefer, V. [Brookhaven National Lab. (BNL), Upton, NY (United States); Tsoupas, N. [Brookhaven National Lab. (BNL), Upton, NY (United States)
2016-07-12
A model of the Alternating Gradient Synchrotron, based on the AGS snapramps, has been developed in the stepwise ray-tracing code Zgoubi. It has been used over the past 5 years in a number of accelerator studies aimed at enhancing RHIC proton beam polarization. It is also used to study and optimize proton and Helion beam polarization in view of future RHIC and eRHIC programs. The AGS model in Zgoubi is operational on-line via three different applications, ’ZgoubiFromSnaprampCmd’, ’AgsZgoubiModel’ and ’AgsModelViewer’, with the latter two essentially interfaces to the former which is the actual model ’engine’. All three commands are available from the controls system application launcher in the AGS ’StartUp’ menu, or from eponymous commands on shell terminals. Main aspects of the model and of its operation are presented in this technical note, brief excerpts from various studies performed so far are given for illustration, means and methods entering in ZgoubiFromSnaprampCmd are developed further in appendix.
Modelling of high burnup structure in UO2 fuel with the RTOP code
International Nuclear Information System (INIS)
Likhanskii, V.; Zborovskii, V.; Evdokimov, I.; Kanyukova, V.; Sorokin, A.
2008-01-01
The present work deals with a self-consistent physical approach aimed at deriving a criterion for fuel restructuring without resorting to empirical correlations. The approach is based on the study of the formation of large over-pressurized bubbles on dislocations, at grain boundaries and in the grain volume. First, the formation of bubbles non-destroyable by fission fragments is examined using consistent modelling of point-defect and fission-gas behaviour near dislocations and in the grain volume. Then the evolution of the formed large non-destroyable bubbles is considered, using the results of the previous step as initial values. Finally, the condition for the punching of dislocation loops by sufficiently large over-pressurized bubbles is regarded as the criterion for the onset of fuel restructuring. In the present work this treatment of large over-pressurized bubbles is applied to modelling the restructuring threshold as a function of temperature, burnup and grain size. The effect of grain size predicted by the model is in qualitative agreement with experimental observations. A restructuring threshold criterion as an analytical function of local burnup and fuel temperature is derived and compared with HBRP project data. To predict the rim-layer width as a function of fuel burnup and irradiation conditions, the model is implemented in the mechanistic fuel performance code RTOP. The calculated dependencies give an upper estimate for the width of the restructured region. The calculations show that, in order to model rim-structure formation, one needs to consider the temperature distribution within the pellet, which depends on the irradiation history.
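The loop-punching condition used as the restructuring-onset criterion is classically estimated by balancing the bubble gas pressure against the surface-tension and dislocation-loop line-tension terms. The formula and parameter values below are a generic textbook-style estimate, not the RTOP model itself:

```python
def loop_punching_pressure(gamma, mu, b, r):
    """Classical estimate of the gas pressure needed for an
    over-pressurized bubble of radius r to punch out a dislocation loop:
    a surface-tension term plus a loop line-tension term.

    gamma : surface energy (J/m^2)
    mu    : shear modulus (Pa)
    b     : Burgers vector (m)
    r     : bubble radius (m)
    """
    return 2.0 * gamma / r + mu * b / r
```

Both terms scale as 1/r, so larger bubbles punch loops at lower pressure, which is consistent with restructuring being driven by sufficiently large over-pressurized bubbles.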
Mathematical model and computer code for coated particles performance at normal operating conditions
International Nuclear Information System (INIS)
Golubev, I.; Kadarmetov, I.; Makarov, V.
2002-01-01
Computer modeling of the thermo-mechanical behavior of coated particles during operation, at both normal and off-normal conditions, plays a very significant role, particularly at the stage of development of new reactors. In Russia, considerable experience has been accumulated on the fabrication and reactor testing of coated particles (CP) and fuel elements with UO2 kernels. However, this experience cannot be used in full for the development of the new reactor installation GT-MHR, because of the very deep burn-up of its fuel, which is based on plutonium oxide (up to 70% FIMA). Therefore the mathematical modeling of CP thermo-mechanical behavior and failure prediction becomes particularly important. The authors have a clear understanding that the serviceability of fuel at high burn-up is determined not only by thermo-mechanics, but also by structural changes in the coating materials, the thermodynamics of chemical processes, the 'amoeba effect', CO formation, etc. This report presents the first steps in the development of an integrated code for the numerical modeling of coated-particle behavior, together with some calculated results concerning the influence of various design parameters on coated-particle endurance under GT-MHR normal operating conditions. A failure model is developed to predict the failure fraction of TRISO-coated particles. In this model it is assumed that the failure of a CP depends not only on the probability of SiC-layer fracture but also on damage to the PyC layers. The coated particle is considered as a single integral structure. (author)
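A thin-shell pressure-vessel estimate of the SiC hoop stress is the usual starting point for such failure models (the statistical failure fraction then compares this stress against, e.g., a Weibull strength distribution). This sketch is a generic estimate with invented numbers, not the code's actual stress model:

```python
def sic_hoop_stress(p_internal, p_external, r, t):
    """Thin-shell estimate of the tensile hoop stress in the SiC layer
    of a TRISO coated particle.

    p_internal : fission-gas + CO pressure inside the particle (Pa)
    p_external : external counter-pressure (Pa)
    r : mean radius of the SiC shell (m)
    t : SiC layer thickness (m)
    """
    return (p_internal - p_external) * r / (2.0 * t)
```

The linear growth of the stress with net internal pressure is why deep burn-up (more fission gas, more CO) makes failure prediction so much more demanding.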
Numerical modelling of pressure suppression pools with CFD and FEM codes
Energy Technology Data Exchange (ETDEWEB)
Paettikangas, T.; Niemi, J.; Timperi, A. (VTT Technical Research Centre of Finland (Finland))
2011-06-15
Experiments on a large-break loss-of-coolant accident for a BWR are modeled with computational fluid dynamics (CFD) and finite element calculations. In the CFD calculations, the direct-contact condensation in the pressure suppression pool is studied. The heat transfer in the liquid phase is modeled with the Hughes-Duffey correlation, which is based on the surface renewal model; the heat transfer is proportional to the square root of the turbulence kinetic energy. The condensation models are implemented with user-defined functions in the Euler-Euler two-phase model of the Fluent 12.1 CFD code. The rapid collapse of a large steam bubble and the resulting pressure source are studied analytically and numerically. The pressure source obtained from simplified calculations is used for studying the structural effects and fluid-structure interaction (FSI) in a realistic BWR containment. The collapse results in a volume acceleration, which induces pressure loads on the pool walls. In the case of a spherical bubble, the velocity term of the volume acceleration is responsible for the largest pressure load. As the amount of air in the bubble is decreased, the peak pressure increases. However, when the water compressibility is accounted for, the finite speed of sound becomes a limiting factor. (Author)
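The simplified bubble-collapse estimate can be illustrated with Rayleigh's classical collapse time for an empty spherical cavity, a textbook result used here only as an order-of-magnitude sketch (the report's actual pressure-source model includes air content and compressibility effects):

```python
import math

def rayleigh_collapse_time(r0, rho_liquid, delta_p):
    """Rayleigh's estimate of the collapse time of a spherical vapour
    cavity of initial radius r0, driven by pressure difference delta_p.

    r0         : initial bubble radius (m)
    rho_liquid : liquid density (kg/m^3)
    delta_p    : ambient minus internal pressure (Pa)
    """
    return 0.915 * r0 * math.sqrt(rho_liquid / delta_p)
```

For a 10 cm steam bubble in water against 1 bar, this gives a collapse time of roughly 9 ms, which sets the time scale of the resulting pressure pulse on the pool walls.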
Synaptic learning rules and sparse coding in a model sensory system.
Directory of Open Access Journals (Sweden)
Luca A Finelli
2008-04-01
Full Text Available Neural circuits exploit numerous strategies for encoding information. Although the functional significance of individual coding mechanisms has been investigated, ways in which multiple mechanisms interact and integrate are not well understood. The locust olfactory system, in which dense, transiently synchronized spike trains across ensembles of antennal lobe (AL) neurons are transformed into a sparse representation in the mushroom body (MB; a region associated with memory), provides a well-studied preparation for investigating the interaction of multiple coding mechanisms. Recordings made in vivo from the insect MB demonstrated highly specific responses to odors in Kenyon cells (KCs). Typically, only a few KCs from the recorded population of neurons responded reliably when a specific odor was presented. Different odors induced responses in different KCs. Here, we explored with a biologically plausible model the possibility that a form of plasticity may control and tune synaptic weights of inputs to the mushroom body to ensure the specificity of KCs' responses to familiar or meaningful odors. We found that plasticity at the synapses between the AL and the MB efficiently regulated the delicate tuning necessary to selectively filter the intense AL oscillatory output and condense it to a sparse representation in the MB. Activity-dependent plasticity drove the observed specificity, reliability, and expected persistence of odor representations, suggesting a role for plasticity in information processing and making a testable prediction about synaptic plasticity at AL-MB synapses.
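The idea that plasticity at AL-MB synapses sparsens KC responses can be caricatured with a toy potentiation/depression rule. The learning rates, weight clipping, and spiking threshold below are all assumptions for illustration, not the paper's equations:

```python
def respond(weights, pattern, theta=1.5):
    """Does the model KC spike to this binary input pattern?"""
    return sum(w * x for w, x in zip(weights, pattern)) > theta

def train_sparse(weights, target, distractors, lr=0.1, rounds=50):
    """Toy activity-dependent rule: synapses active during the
    'meaningful' odor are potentiated, synapses active during other
    odors are depressed, with weights clipped to [0, 1].
    """
    for _ in range(rounds):
        for p in [target] + distractors:
            for i, x in enumerate(p):
                if x:
                    delta = lr if p is target else -0.5 * lr
                    weights[i] = min(1.0, max(0.0, weights[i] + delta))
    return weights
```

Starting from uniform weights, the trained cell ends up responding to its target odor but not to overlapping distractors, which is the qualitative sparsening effect the model attributes to AL-MB plasticity.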
Hemmen, J; Schulten, Klaus
1994-01-01
Since the appearance of Vol. 1 of Models of Neural Networks in 1991, the theory of neural nets has focused on two paradigms: information coding through coherent firing of the neurons and functional feedback. Information coding through coherent neuronal firing exploits time as a cardinal degree of freedom. This capacity of a neural network rests on the fact that the neuronal action potential is a short, say 1 ms, spike, localized in space and time. Spatial as well as temporal correlations of activity may represent different states of a network. In particular, temporal correlations of activity may express that neurons process the same "object" of, for example, a visual scene by spiking at the very same time. The traditional description of a neural network through a firing rate, the famous S-shaped curve, presupposes a wide time window of, say, at least 100 ms. It thus fails to exploit the capacity to "bind" sets of coherently firing neurons for the purpose of both scene segmentation and figure-ground segregatio...
Energy Technology Data Exchange (ETDEWEB)
Gill, Katharina
2013-06-06
In April and May 2012, 7.3 × 10^9 Au+Au collisions at a kinetic beam energy of E_kin = 1.23 GeV per nucleon were recorded by the HADES detector, installed at the GSI Helmholtzzentrum für Schwerionenforschung at Darmstadt, Germany. Based on this experiment, this thesis focuses on the reconstruction of the lightest strangeness-containing mesons, K+, K- and K0. Due to their production far below the threshold energy in nucleon-nucleon collisions, particles carrying strangeness have turned out to be very valuable messengers for the state of nuclear matter under extreme conditions. The K0/K+ production ratio in isospin-asymmetric relativistic heavy-ion collisions has been suggested as a promising observable for the symmetry energy term of the nuclear equation of state. However, the high-density behavior of the symmetry energy is at present largely unconstrained. Measurements of the K0/K+ ratio in the high-density region are now possible with the HADES detector. In addition, the production rates of the different kaons provide information about their production mechanism and propagation in the baryon-dominated matter. The results presented in this thesis are derived from an analysis of the total statistics of the April and May 2012 measuring campaign. The charged kaons are identified via cuts on the quality of their track reconstruction, their energy-loss distribution in the multi-wire drift chambers, and their reconstructed momentum. In the analysis a total of 3.6 × 10^6 positively charged kaons and 1.1 × 10^4 negatively charged kaons were reconstructed. For small polar angles of the detector (18°-45°, Resistive Plate Chamber wall), the K+ signal has been found at a mass of M = (488.8 ± 0.7) MeV/c^2 with a width of σ = (17.4 ± 2.2) MeV/c^2. At large polar angles (44°-85°, Time of Flight wall) a mass of M = (501.6 ± 0.5) MeV/c^2 was measured.
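Charged-hadron identification from tracking momentum plus a velocity measurement can be sketched as follows; the path length, momentum, and flight time in the example are invented for illustration, not HADES values:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def tof_mass(p_mev, path_m, tof_ns):
    """Reconstruct a particle mass (MeV/c^2) from its momentum and
    time of flight: tracking gives p, a TOF/RPC wall gives beta,
    and m = p / (beta * gamma).
    """
    beta = path_m / (tof_ns * 1e-9) / C
    if beta >= 1.0:
        raise ValueError("unphysical velocity")
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return p_mev / (beta * gamma)
```

With consistent inputs (e.g. a 700 MeV/c track over a 2.1 m flight path in about 8.57 ns), the reconstructed mass lands near the charged-kaon mass of roughly 494 MeV/c^2, which is the principle behind the mass peaks quoted in the abstract.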
Committed to the Honor Code: An Investment Model Analysis of Academic Integrity
Dix, Emily L.; Emery, Lydia F.; Le, Benjamin
2014-01-01
Educators worldwide face challenges surrounding academic integrity. The development of honor codes can promote academic integrity, but understanding how and why honor codes affect behavior is critical to their successful implementation. To date, research has not examined how students' "relationship" to an honor code predicts…
Energy Technology Data Exchange (ETDEWEB)
Grazevicius, Audrius; Kaliatka, Algirdas [Lithuanian Energy Institute, Kaunas (Lithuania). Lab. of Nuclear Installation Safety
2017-07-15
The main functions of spent fuel pools are to remove the residual heat from spent fuel assemblies and to provide biological shielding. In the case of loss of heat removal from the spent fuel pool, the fuel rod and pool water temperatures would increase continuously. After the saturation temperature is reached, evaporation would cause the pool water level to drop, eventually uncovering the spent fuel assemblies and leading to fuel overheating and fuel rod failure. This paper presents an analysis of a loss-of-heat-removal accident in the spent fuel pool of a BWR 4 and a comparison of two different modelling approaches. The one-dimensional system thermal-hydraulic code RELAP5 and the CFD tool ANSYS Fluent were used for the analysis. The results are similar, but local effects cannot be simulated using a one-dimensional code. The ANSYS Fluent calculation demonstrated that the three-dimensional treatment avoids the need for many of the modelling assumptions of a one-dimensional pool model and reduces the uncertainties associated with the natural-circulation flow calculation.
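The accident sequence described (heat-up to saturation, then boil-off) admits a simple lumped-parameter estimate. All numbers below are illustrative assumptions, not BWR 4 plant data:

```python
def pool_heatup(q_mw, mass_t, cp=4186.0, t0=40.0, h_fg=2.26e6):
    """Back-of-envelope estimates for a spent fuel pool after loss of
    heat removal: time to reach saturation, then the boil-off rate.

    q_mw   : decay heat load (MW)
    mass_t : pool water inventory (tonnes)
    cp     : water specific heat (J/kg/K)
    t0     : initial pool temperature (deg C)
    h_fg   : latent heat of vaporization (J/kg)
    Returns (hours to reach 100 deg C, evaporation rate in kg/s).
    """
    q = q_mw * 1e6
    m = mass_t * 1000.0
    heatup_s = m * cp * (100.0 - t0) / q
    boiloff_kg_s = q / h_fg
    return heatup_s / 3600.0, boiloff_kg_s
```

Such lumped estimates give the overall time scale of the accident, but they cannot resolve thermal stratification or local natural-circulation effects, which is exactly where the 3D CFD treatment adds value.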
Energy Technology Data Exchange (ETDEWEB)
Delbecq, J.M
1999-07-01
The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R&D division of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (material behaviour, large deformations, specific loads, unloading and loss-of-load-proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy release rate, release rate in thermo-elasto-plasticity, 3D local energy release rate, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and random dynamics, non-linear dynamics, dynamic sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and the metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)
Thermal modeling of tanks 241-AW-101 and 241-AN-104 with the TEMPEST code
International Nuclear Information System (INIS)
Antoniak, Z.I.; Recknagle, K.P.
1995-07-01
The TEMPEST code was exercised in a preliminary study of the thermal behavior of double-shell Tanks 241-AW-101 and 241-AN-104. The two-dimensional model used is derived from earlier studies of heat transfer from Tank 241-SY-101. Several changes were made to the model to simulate the waste and conditions in 241-AW-101 and 241-AN-104. The nonconvective waste layer was assumed to be 254 cm (100 in.) thick for Tank 241-AW-101, and 381 cm (150 in.) for Tank 241-AN-104. The remaining waste was assumed, for each tank, to consist of a convective layer with a 7.6-cm (3-in.) crust on top. The waste heat loads for 241-AW-101 and 241-AN-104 were taken to be 10 kW (3.4E4 Btu/hr) and 12 kW (4.0E4 Btu/hr), respectively. Present model predictions of maximum and convecting waste temperatures are within 1.7 degrees C (3 degrees F) of those measured in Tanks 241-AW-101 and 241-AN-104. The difference between the predicted and measured temperatures is comparable to the uncertainty of the measurement equipment. These models, therefore, are suitable for estimating the temperatures within the tanks in the event of changing air flows, waste levels, and/or waste configurations.
A Study of Performance in Low-Power Tokamak Reactor with Integrated Predictive Modeling Code
International Nuclear Information System (INIS)
Pianroj, Y.; Onjun, T.; Suwanna, S.; Picha, R.; Poolyarat, N.
2009-07-01
Full text: A fusion hybrid or a small fusion power output with a low-power tokamak reactor is presented as another useful application of nuclear fusion. Such a tokamak can be used for fuel breeding, high-level waste transmutation, hydrogen production at high temperature, and testing of nuclear fusion technology components. In this work, an investigation of the plasma performance in a small fusion power output design is carried out using the BALDUR predictive integrated modeling code. The simulations of the plasma performance in this design are carried out using the empirical-based Mixed Bohm/gyro-Bohm (B/gB) model, whereas the pedestal temperature model is based on magnetic and flow shear (Δ ∝ ρs²) stabilization pedestal width scaling. The preliminary results using this core transport model show that the central ion and electron temperatures are rather pessimistic. To improve the performance, an optimization approach is carried out by varying some parameters, such as the plasma current and the auxiliary heating power, which results in some improvement of plasma performance.
Energy Technology Data Exchange (ETDEWEB)
Conner, C.C.; Lucas, R.G.
1993-02-01
This report documents the development of the proposed revision of the Council of American Building Officials' (CABO) 1993 supplement to the 1992 Model Energy Code (MEC) (referred to as the 1993 MEC) building thermal envelope requirements for single-family and low-rise multifamily residences. The goal of this analysis was to develop revised guidelines based on an objective methodology that determined the most cost-effective (least total life-cycle cost [LCC]) combination of energy conservation measures (ECMs) for residences in different locations. The ECMs with the lowest LCC were used as a basis for proposing revised MEC maximum U_o-value (thermal transmittance) curves in the MEC format. The changes proposed here affect the requirements for "group R" residences. The group R residences are detached one- and two-family dwellings (referred to as single-family) and all other residential buildings three stories or less (referred to as multifamily).
Energy Technology Data Exchange (ETDEWEB)
Conner, C.C.; Lucas, R.G.
1993-02-01
This report documents the development of the proposed revision of the Council of American Building Officials' (CABO) 1993 supplement to the 1992 Model Energy Code (MEC) (referred to as the 1993 MEC) building thermal envelope requirements for single-family and low-rise multifamily residences. The goal of this analysis was to develop revised guidelines based on an objective methodology that determined the most cost-effective (least total life-cycle cost [LCC]) combination of energy conservation measures (ECMs) for residences in different locations. The ECMs with the lowest LCC were used as a basis for proposing revised MEC maximum U_o-value (thermal transmittance) curves in the MEC format. The changes proposed here affect the requirements for "group R" residences. The group R residences are detached one- and two-family dwellings (referred to as single-family) and all other residential buildings three stories or less (referred to as multifamily).
Modelling and Simulation of National Electronic Product Code Network Demonstrator Project
Mo, John P. T.
The National Electronic Product Code (EPC) Network Demonstrator Project (NDP) was the first large-scale consumer goods track and trace investigation in the world to use the full EPC protocol system for applying RFID technology in supply chains. The NDP demonstrated methods of sharing information securely using the EPC Network, providing authentication to interacting parties, and enhancing the ability to track and trace the movement of goods within the entire supply chain involving transactions among multiple enterprises. Due to project constraints, the actual run of the NDP lasted only 3 months and could not be consolidated with quantitative results. This paper discusses the modelling and simulation of activities in the NDP in a discrete event simulation environment and provides an estimate of the potential benefits that could have been derived from the NDP had it continued for one whole year.
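A discrete event simulation of such a track-and-trace flow can be sketched in a few lines. The toy model below (hypothetical stage names, delays, and launch rate, not the NDP's actual parameters) schedules items through successive supply-chain stages, with each arrival standing in for an EPC read event:

```python
import heapq

def simulate_epc_flow(n_items=100, stages=("factory", "distribution", "retail"),
                      stage_delay=2.0, launch_interval=1.0):
    """Minimal discrete-event simulation of items moving through supply-chain
    stages; each arrival generates an EPC read event (illustrative only)."""
    events = []  # priority queue of (time, item_id, stage_index)
    for item in range(n_items):
        heapq.heappush(events, (item * launch_interval, item, 0))
    reads = []  # log of (time, item_id, stage_name)
    while events:
        t, item, stage = heapq.heappop(events)
        reads.append((t, item, stages[stage]))
        if stage + 1 < len(stages):
            # item moves on to the next stage after a fixed transit delay
            heapq.heappush(events, (t + stage_delay, item, stage + 1))
    return reads

reads = simulate_epc_flow()
```

From such a log, throughput and dwell-time statistics can be accumulated over an arbitrary simulated horizon, which is how a short physical trial can be extrapolated to a full year of operation.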
Nummer, Brian A
2013-11-01
Kombucha is a fermented beverage made from brewed tea and sugar. The taste is slightly sweet and acidic and it may have residual carbon dioxide. Kombucha is consumed in many countries as a health beverage and it is gaining in popularity in the U.S. Consequently, many retailers and food service operators are seeking to brew this beverage on site. As a fermented beverage, kombucha would be categorized in the Food and Drug Administration model Food Code as a specialized process and would require a variance with submission of a food safety plan. This special report was created to assist both operators and regulators in preparing or reviewing a kombucha food safety plan.
From test data to FE code: a straightforward strategy for modelling the structural bonding interface
Directory of Open Access Journals (Sweden)
M. A. Lepore
2017-01-01
Full Text Available A straightforward methodology for modelling the cohesive zone (CZM) of an adhesively bonded joint is developed, using a commercial finite element code and experimental outcomes from standard fracture tests, without explicitly defining a damage law. The in-house developed algorithm implements a linearly interpolated cohesive relationship, obtained from literature data, and calculates the damage at each step increment. The algorithm is applicable to both dominant mode I and dominant mode II debonding simulations. The possible occurrence of unloading stages is also considered, employing an irreversible behaviour with elastic damaged reloading. A case study for validation is presented, implementing the algorithm in the commercial finite element method (FEM) software Abaqus®. Numerical simulation of dominant mode I fracture loading provides satisfactory results.
The theoretical and computational models of the GASFLOW-II code
International Nuclear Information System (INIS)
Travis, J.R.
1999-01-01
GASFLOW-II is a finite-volume computer code that solves the time-dependent compressible Navier-Stokes equations for multiple gas species in a dispersed liquid water two-phase medium. The fluid-dynamics algorithm is coupled to the chemical kinetics of combusting gases to simulate diffusion or propagating flames in complex geometries of nuclear containments. GASFLOW-II is therefore able to predict gaseous distributions and thermal and pressure loads on containment structures and safety related equipment in the event combustion occurs. Current developments of GASFLOW-II are focused on hydrogen distribution, mitigation measures including carbon dioxide inerting, and possible combustion events in nuclear reactor containments. Fluid turbulence is calculated to enhance the transport and mixing of gases in rooms and volumes that may be connected by a ventilation system. Condensation, vaporization, and heat transfer to walls, floors, ceilings, internal structures, and within the fluid are calculated to model the appropriate mass and energy sinks. (author)
Modeling of hydrogen behaviour in a PWR nuclear power plant containment with the CONTAIN code
International Nuclear Information System (INIS)
Bobovnik, G.; Kljenak, I.
2001-01-01
Hydrogen behavior in the containment during a severe accident in a two-loop Westinghouse-type PWR nuclear power plant was simulated with the CONTAIN code. The accident was initiated by a cold-leg break of the reactor coolant system in a steam generator compartment. In the input model, the containment is represented with 34 cells. Besides the hydrogen concentration, the containment atmosphere temperature and pressure and the carbon monoxide concentration were observed as well. Simulations were carried out for two different scenarios: with and without successful actuation of the containment spray system. The highest hydrogen concentration occurs in the containment dome and near the hydrogen release location in the early stages of the accident. Containment sprays do not have a significant effect on hydrogen stratification. (author)
A simplified treatment of the boundary conditions of the k- ε model in coarse-mesh CFD-type codes
International Nuclear Information System (INIS)
Analytis, G.Th.; Andreani, M.
1999-01-01
In coarse-mesh, CFD-type codes such as the containment analysis code GOTHIC, one of the options that can be used for modelling turbulence is the k-ε model. However, in contrast to most other CFD codes, which are designed to perform detailed CFD calculations with a large number of spatial meshes, codes such as GOTHIC are primarily aimed at simplified calculation of transients in large spaces (e.g., reactor containments) and generally use coarse meshes. The solution of the two parabolic equations of the k-ε model requires the definition of boundary conditions at physical boundaries, and this, in turn, requires very small spatial meshes near these boundaries. Hence, while in codes like CFX this is done in a rigorous and consistent manner, codes like GOTHIC adopt an indirect and heuristic approach, due to the fact that the spatial meshes are usually large. This can have adverse consequences during the calculation of a transient; in this work, we give some examples of this and outline a method by which this problem can be avoided. (author)
Energy Technology Data Exchange (ETDEWEB)
Lindemuth, I.R.
1979-02-28
This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code, and its underlying physical model. Temporal and spatial finite-difference equations are formulated in a manner that facilitates implementation of the algorithm, and the functions of the algorithm's FORTRAN subroutines and variables are outlined.
International Nuclear Information System (INIS)
Lindemuth, I.R.
1979-01-01
This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code, and its underlying physical model. Temporal and spatial finite-difference equations are formulated in a manner that facilitates implementation of the algorithm, and the functions of the algorithm's FORTRAN subroutines and variables are outlined.
DEFF Research Database (Denmark)
Berger, Michael Stübert; Soler, José; Yu, Hao
2013-01-01
The MODUS project aims to provide a pragmatic and viable solution that will allow SMEs to substantially improve their positioning in the embedded-systems development market. The MODUS tool will provide a model verification and Hardware/Software co-simulation tool (TRIAL) and a performance...... optimisation and customisable source-code generation tool (TUNE). The concept is depicted in automated modelling and optimisation of embedded-systems development. The tool will enable model verification by guiding the selection of existing open-source model verification engines, based on the automated analysis...... of system properties, and producing inputs to be fed into these engines, interfacing with standard (SystemC) simulation platforms for HW/SW co-simulation, customisable source-code generation towards respecting coding standards and conventions and software performance-tuning optimisation through automated...
DEFF Research Database (Denmark)
Wu, Mo; Forchhammer, Søren
2007-01-01
This paper considers a method for evaluation of the Rate-Distortion-Complexity (R-D-C) performance of video coding. A statistical model of the transformed coefficients is used to estimate the Rate-Distortion (R-D) performance. A model framework for rate, distortion and slope of the R-D curve for inter...... and intra frame is presented. Assumptions are given for analyzing an R-D model for fast R-D-C evaluation. The theoretical expressions are combined with H.264 video coding, and confirmed by experimental results. The complexity framework is applied to the integer motion estimation....
Aseev, N.; Shprits, Y.; Drozdov, A.; Kellerman, A. C.; Wang, D.
2017-12-01
Ring current and radiation belts are key elements in the global dynamics of the Earth's magnetosphere. Comprehensive mathematical models are useful tools that allow us to understand the multiscale dynamics of these charged-particle populations. In this work, we present results of simulations of combined ring current and radiation belt electron dynamics using the four-dimensional Versatile Electron Radiation Belt (VERB-4D) code. The VERB-4D code solves the modified Fokker-Planck equation including convective terms and simultaneously models ring current (1-100 keV) and radiation belt (100 keV to several MeV) electron dynamics. We apply the code to a number of geomagnetic storms that occurred in the past, compare the results with different satellite observations, and show how low-energy particles can affect the high-energy populations. In particular, we use data from the Polar Operational Environmental Satellite (POES) mission, which provides very good MLT coverage with 1.5-hour time resolution. The POES data allow us to validate the approach of the VERB-4D code for modeling MLT-dependent processes such as electron drift, wave-particle interactions, and magnetopause shadowing. We also show how different simulation parameters and empirical models can affect the results, placing particular emphasis on the electric and magnetic field models. This work will help us reveal advantages and disadvantages of the approach behind the code and determine its prediction efficiency.
International Nuclear Information System (INIS)
Yun, B. J.; Song, C. H.; Splawski, A.; Lo, S.
2010-01-01
Subcooled boiling is one of the crucial phenomena for the design, operation and safety analysis of a nuclear power plant. It occurs due to the thermally nonequilibrium state in the two-phase heat transfer system. Many complicated phenomena, such as bubble generation, bubble departure, bubble growth, and bubble condensation, are created by this thermally nonequilibrium condition in the subcooled boiling flow. However, it has been revealed that most of the existing best-estimate safety analysis codes have a weakness in the prediction of subcooled boiling phenomena in which multi-dimensional flow behavior is dominant. In recent years, many investigators have been trying to apply CFD (Computational Fluid Dynamics) codes for an accurate prediction of the subcooled boiling flow. In the CFD codes, the evaporation heat flux from the heated wall is one of the key parameters to be modeled for an accurate prediction of the subcooled boiling flow. The evaporation heat flux for the CFD codes is typically expressed as q'_e = (π D_d³/6) ρ_g h_fg f N', where D_d, f, and N' are the bubble departure size, bubble departure frequency and active nucleation site density, respectively. In most of the commercial CFD codes, the Tolubinsky bubble departure size model, the Kurul and Podowski active nucleation site density model and the Ceumem-Lindenstjerna bubble departure frequency model are adopted as the basic wall boiling model. However, these models do not consider their dependency on the flow, pressure and fluid type. In this paper, an advanced wall boiling model is proposed in order to improve the subcooled boiling model for the CFD codes.
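As a numerical illustration, the evaporation heat flux partition above can be evaluated directly. The input values below are assumed, order-of-magnitude figures for water near atmospheric pressure, not parameters taken from the paper:

```python
import math

def evaporation_heat_flux(d_dep, rho_g, h_fg, freq, site_density):
    """Wall evaporation heat flux from the partitioning formula
    q_e = (pi * D_d**3 / 6) * rho_g * h_fg * f * N'
    i.e. one departing bubble volume of vapour per cycle, per active site."""
    bubble_volume = math.pi * d_dep**3 / 6.0
    return bubble_volume * rho_g * h_fg * freq * site_density

# Illustrative (assumed) values, roughly water near atmospheric pressure:
q_e = evaporation_heat_flux(d_dep=0.6e-3,      # bubble departure diameter [m]
                            rho_g=0.60,        # vapour density [kg/m^3]
                            h_fg=2.257e6,      # latent heat [J/kg]
                            freq=50.0,         # departure frequency [1/s]
                            site_density=5e4)  # active sites [1/m^2]
```

The formula makes the cubic sensitivity to the departure diameter explicit, which is why the departure-size model dominates the wall boiling closure.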
Testing and Modeling of a 3-MW Wind Turbine Using Fully Coupled Simulation Codes (Poster)
Energy Technology Data Exchange (ETDEWEB)
LaCava, W.; Guo, Y.; Van Dam, J.; Bergua, R.; Casanovas, C.; Cugat, C.
2012-06-01
This poster describes the NREL/Alstom Wind testing and model verification of the Alstom 3-MW wind turbine located at NREL's National Wind Technology Center. NREL, in collaboration with ALSTOM Wind, is studying a 3-MW wind turbine installed at the National Wind Technology Center (NWTC). The project analyzes the turbine design using a state-of-the-art simulation code validated with detailed test data. This poster describes the testing and the model validation effort, and provides conclusions about the performance of the unique drive train configuration used in this wind turbine. The 3-MW machine has been operating at the NWTC since March 2011, and drive train measurements will be collected through the spring of 2012. The NWTC testing site has particularly turbulent wind patterns that allow for the measurement of large transient loads and the resulting turbine response. This poster describes the 3-MW turbine test project, the instrumentation installed, and the load cases captured. The design of a reliable wind turbine drive train increasingly relies on the use of advanced simulation to predict structural responses in a varying wind field. This poster presents a fully coupled, aero-elastic and dynamic model of the wind turbine. It also shows the methodology used to validate the model, including the use of measured tower modes, model-to-model comparisons of the power curve, and mainshaft bending predictions for various load cases. The drivetrain is designed to transmit only torque to the gearbox, eliminating non-torque moments that are known to cause gear misalignment. Preliminary results show that the drivetrain is able to divert bending loads in extreme loading cases, and that a significantly smaller bending moment is induced on the mainshaft compared to a three-point mounting design.
International Nuclear Information System (INIS)
Liao, J.; Cao, L.; Ohkawa, K.; Frepoli, C.
2012-01-01
A model for condensation suppression by non-condensable gases is important for a realistic LOCA safety analysis code. A condensation suppression model for direct contact condensation was previously developed by Westinghouse from first principles. The model is believed to be an accurate description of the direct contact condensation process in the presence of non-condensable gases. The Westinghouse condensation suppression model is further revised here by applying a more physical model. The revised condensation suppression model is then implemented into the WCOBRA/TRAC-TF2 LOCA safety evaluation code for both the 3-D module (COBRA-TF) and the 1-D module (TRAC-PF1). A parametric study using the revised Westinghouse condensation suppression model is conducted. Additionally, the performance of the condensation suppression model is examined against the ACHILLES (ISP-25) separate effects test and the LOFT L2-5 (ISP-13) integral effects test. (authors)
Biomedical time series clustering based on non-negative sparse coding and probabilistic topic model.
Wang, Jin; Liu, Ping; F H She, Mary; Nahavandi, Saeid; Kouzani, Abbas
2013-09-01
Biomedical time series clustering, which groups a set of unlabelled temporal signals according to their underlying similarity, is very useful for biomedical records management and analysis, such as biosignal archiving and diagnosis. In this paper, a new framework for clustering of long-term biomedical time series such as electrocardiography (ECG) and electroencephalography (EEG) signals is proposed. Specifically, local segments extracted from the time series are projected as a combination of a small number of basis elements in a trained dictionary by non-negative sparse coding. A Bag-of-Words (BoW) representation is then constructed by summing up all the sparse coefficients of local segments in a time series. Based on the BoW representation, a probabilistic topic model that was originally developed for text document analysis is extended to discover the underlying similarity of a collection of time series. The underlying similarity of biomedical time series is well captured owing to the statistical nature of the probabilistic topic model. Experiments on three datasets constructed from publicly available EEG and ECG signals demonstrate that the proposed approach achieves better accuracy than existing state-of-the-art methods, and is insensitive to model parameters such as the length of local segments and the dictionary size. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
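The sparse-coding and pooling steps can be sketched as follows. This is a simple projected-gradient stand-in for a proper non-negative sparse solver, with a synthetic dictionary and synthetic segments, not the paper's trained dictionary:

```python
import numpy as np

def nn_sparse_code(D, x, l1=0.1, steps=200, lr=0.01):
    """Approximate non-negative sparse coding of segment x over dictionary D
    by projected gradient descent on ||x - D a||^2 / 2 + l1 * sum(a), a >= 0.
    (A simple stand-in for a proper NNLS/lasso solver.)"""
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        grad = D.T @ (D @ a - x) + l1
        a = np.maximum(a - lr * grad, 0.0)  # gradient step, then project to a >= 0
    return a

def bow_representation(D, segments):
    """Bag-of-Words vector: sum of the sparse codes of all local segments."""
    return sum(nn_sparse_code(D, x) for x in segments)

rng = np.random.default_rng(0)
D = rng.random((16, 8))             # 8 basis elements (atoms) of length 16
D /= np.linalg.norm(D, axis=0)      # unit-norm atoms
segments = [rng.random(16) for _ in range(5)]
bow = bow_representation(D, segments)
```

The resulting `bow` vector is a non-negative histogram over dictionary atoms; these histograms are what the topic model then clusters.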
Development and application of a multi-fluid simulation code for modeling interpenetrating plasmas
Khodak, M.; Berger, R. L.; Chapman, T.; Hittinger, J. A. F.
2015-11-01
A multi-fluid model, with independent velocities for all species, is developed and implemented for the numerical simulation of the interpenetration of colliding plasmas. The Euler equations for fluid flow, coupled through electron-ion and ion-ion collisional drag terms, thermal equilibration terms, and the electric field, are solved for each ion species with the electrons treated under a quasineutrality assumption. Fourth-order spatial convergence in smooth regions is achieved using flux-conservative iterative time integration and a Weighted Essentially Non-Oscillatory (WENO) finite volume scheme employing an approximate Riemann solver. Analytic solutions of well-known shock tube tests and spectral solutions of the linearized coupled system are used to test the implementation, and the model is further numerically compared to interpenetration experiments such as those of J.S. Ross et al. [Phys. Rev. Lett. 110 145005 (2013)]. This work has applications to laser-plasma interactions, specifically to hohlraum physics, as well as to modeling laboratory experiments of collisionless shocks important in astrophysical plasmas. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344 and funded by the Laboratory Research and Development Program at LLNL under project code 15-ERD-038.
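As a minimal sketch of the ion-ion collisional drag coupling described above (a zero-dimensional toy with assumed parameters, not the paper's fourth-order WENO scheme), two counter-streaming ion fluids can be relaxed toward a common velocity while conserving total momentum:

```python
def relax_two_ion_streams(v1, v2, n1, n2, m1, m2, nu, dt, steps):
    """Explicit integration of the momentum-exchange (drag) terms between two
    interpenetrating ion fluids:

        m1 n1 dv1/dt = -nu * mu * n1 * (v1 - v2)
        m2 n2 dv2/dt = +nu * mu * n1 * (v1 - v2)

    with mu the reduced mass, so total momentum is conserved exactly.
    Units and the collision rate nu are illustrative, not physical."""
    mu = m1 * m2 / (m1 + m2)
    history = [(v1, v2)]
    for _ in range(steps):
        drag = nu * mu * n1 * (v1 - v2)   # momentum exchange rate density
        v1 -= dt * drag / (m1 * n1)
        v2 += dt * drag / (m2 * n2)
        history.append((v1, v2))
    return history

hist = relax_two_ion_streams(v1=+1.0, v2=-1.0, n1=1.0, n2=1.0,
                             m1=1.0, m2=2.0, nu=5.0, dt=0.01, steps=500)
```

The two streams converge to the center-of-momentum velocity at a rate set by the collision frequency; in the weakly collisional regime this relaxation is slow compared to the transit time, which is what produces the interpenetration the code is built to capture.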
Modeling Monte Carlo of multileaf collimators using the code GEANT4
Energy Technology Data Exchange (ETDEWEB)
Oliveira, Alex C.H.; Lima, Fernando R.A., E-mail: oliveira.ach@yahoo.com, E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil); Lima, Luciano S.; Vieira, Jose W., E-mail: lusoulima@yahoo.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Pernambuco (IFPE), Recife, PE (Brazil)
2014-07-01
Radiotherapy uses various techniques and equipment for the local treatment of cancer. The equipment most often used in radiotherapy for patient irradiation is the linear accelerator (Linac). Among the many algorithms developed for the evaluation of dose distributions in radiotherapy planning, algorithms based on Monte Carlo (MC) methods have proven to be very promising in terms of accuracy by providing more realistic results. MC simulations for applications in radiotherapy are divided into two parts. In the first, the simulation of the production of the radiation beam by the Linac is performed and the phase space is generated. The phase space contains information such as the energy, position, direction, etc. of millions of particles (photons, electrons, positrons). In the second part, the simulation of the transport of particles (sampled from the phase space) in certain configurations of the irradiation field is performed to assess the dose distribution in the patient (or phantom). Accurate modeling of the Linac head is of particular interest in the calculation of dose distributions for intensity modulated radiation therapy (IMRT), where complex intensity distributions are delivered using a multileaf collimator (MLC). The objective of this work is to describe a methodology for MC modeling of MLCs using the Geant4 code. To exemplify this methodology, the Varian Millennium 120-leaf MLC was modeled, whose physical description is available in the BEAMnrc Users Manual (2011). The dosimetric characteristics (i.e., penumbra, leakage, and tongue-and-groove effect) of this MLC were evaluated. The results agreed with data published in the literature concerning the same MLC. (author)
Models of multi-rod code FRETA-B for transient fuel behavior analysis
International Nuclear Information System (INIS)
Uchida, Masaaki; Otsubo, Naoaki.
1984-11-01
This paper is the final report of the development of the FRETA-B code, which analyzes LWR fuel behavior during accidents, particularly the Loss-of-Coolant Accident (LOCA). The very high temperature induced by a LOCA causes oxidation of the cladding by steam and, as a combined effect with low external pressure, extensive swelling of the cladding. The latter may reach a level at which the rods block the coolant channel. To analyze these phenomena, a single-rod model is insufficient; FRETA-B has the capability to handle multiple fuel rods in a bundle simultaneously, including the interaction between them. In the development work, therefore, efforts were made to avoid excessive increases in calculation time and core memory requirements. Because of the strong dependency of the in-LOCA fuel behavior on the coolant state, FRETA-B places emphasis on heat transfer to the coolant as well as on cladding deformation. In the final version, a capability was added to analyze the fuel behavior under reflooding using empirical models. The present report describes the basic models of FRETA-B, and also gives its input manual in the appendix. (author)
Development and validation of a model TRIGA Mark III reactor with code MCNP5
International Nuclear Information System (INIS)
Galicia A, J.; Francois L, J. L.; Aguilar H, F.
2015-09-01
The main purpose of this paper is to obtain a model of the TRIGA Mark III reactor core that accurately represents the real operating conditions at 1 MWth, using the Monte Carlo code MCNP5. To provide a more detailed analysis, different models of the reactor core were produced by simulating the control rods extracted and inserted under cold conditions (293 K), including a shutdown margin analysis, so as to satisfy the Operation Technical Specifications. The positions the control rods must have to reach a power of 1 MWth were obtained from the practice entitled Operation in Manual Mode performed at the Instituto Nacional de Investigaciones Nucleares (ININ). Later, the behavior of k-eff was analyzed considering different temperatures in the fuel elements, subsequently calculating the values that best represent the actual reactor operation. Finally, calculations with the developed model to obtain the distribution of the average thermal, epithermal and fast neutron fluxes in the six new experimental facilities are presented. (Author)
Generalized rate-code model for neuron ensembles with finite populations
International Nuclear Information System (INIS)
Hasegawa, Hideo
2007-01-01
We have proposed a generalized Langevin-type rate-code model subject to multiplicative noise, in order to study stationary and dynamical properties of an ensemble containing a finite number N of neurons. Calculations using the Fokker-Planck equation have shown that, owing to the multiplicative noise, our rate model yields various kinds of stationary non-Gaussian distributions such as Γ, inverse-Gaussian-like, and log-normal-like distributions, which have been experimentally observed. The dynamical properties of the rate model have been studied with the use of the augmented moment method (AMM), which was previously proposed by the author from a macroscopic point of view for finite-unit stochastic systems. In the AMM, the original N-dimensional stochastic differential equations (DEs) are transformed into three-dimensional deterministic DEs for the means and fluctuations of local and global variables. The dynamical responses of the neuron ensemble to pulse and sinusoidal inputs calculated by the AMM are in good agreement with those obtained by direct simulation. The synchronization in the neuronal ensemble is discussed. The variabilities of the firing rate and of the interspike interval are shown to increase with increasing magnitude of multiplicative noise, which may be a conceivable origin of the observed large variability in cortical neurons.
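A direct simulation of such an ensemble can be sketched with Euler-Maruyama integration. The model below is an illustrative Langevin-type rate equation with multiplicative noise and assumed parameters, not the paper's exact model or the AMM equations:

```python
import numpy as np

def simulate_rate_ensemble(N=100, steps=5000, dt=1e-3, lam=1.0, I=1.0,
                           beta=0.5, seed=1):
    """Euler-Maruyama integration of an illustrative Langevin rate model
    with multiplicative noise:

        dr_i = (-lam * r_i + I) dt + beta * r_i dW_i

    Returns macroscopic variables (ensemble mean and fluctuation) in the
    spirit of the augmented moment method's reduced description."""
    rng = np.random.default_rng(seed)
    r = np.full(N, I / lam)                 # start at the deterministic fixed point
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt), N)
        r = r + (-lam * r + I) * dt + beta * r * dW
        r = np.maximum(r, 0.0)              # firing rates stay non-negative
    return r.mean(), r.var()

mean_rate, fluct = simulate_rate_ensemble()
```

The multiplicative term `beta * r * dW` is what skews the stationary distribution away from Gaussian; increasing `beta` broadens the fluctuations, mirroring the variability trend described in the abstract.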
Modeling and Analysis of FCM UN TRISO Fuel Using the PARFUME Code
Energy Technology Data Exchange (ETDEWEB)
Blaise Collin
2013-09-01
The PARFUME (PARticle Fuel ModEl) modeling code was used to assess the overall fuel performance of uranium nitride (UN) tri-structural isotropic (TRISO) ceramic fuel within the framework of the design and development of Fully Ceramic Matrix (FCM) fuel. A specific model of a TRISO particle with a UN kernel was developed with PARFUME, and its behavior was assessed under irradiation conditions typical of a Light Water Reactor (LWR). The calculations were used to assess the dimensional changes of the fuel particle layers and kernel, including the formation of an internal gap. The survivability of the UN TRISO particle was estimated depending on the strain behavior of the constituent materials at high fast fluence and burn-up. For nominal cases, internal gas pressure and representative thermal profiles across the kernel and layers were determined along with stress levels in the pyrolytic carbon (PyC) and silicon carbide (SiC) layers. These parameters were then used to evaluate fuel particle failure probabilities. Results of the study show that the survivability of UN TRISO fuel under LWR irradiation conditions might only be guaranteed if the kernel and PyC swelling rates are limited at high fast fluence and burn-up. These material properties are unknown at the irradiation levels expected to be reached by UN TRISO fuel in LWRs. Therefore, more effort is needed to determine them before firm conclusions can be drawn on the applicability of FCM fuel to LWRs.
Basic data, computer codes and integral experiments: The tools for modelling in nuclear technology
International Nuclear Information System (INIS)
Sartori, E.
2001-01-01
When studying applications in nuclear technology, we need to understand and be able to predict the behavior of systems manufactured by human enterprise. First, the underlying basic physical and chemical phenomena need to be understood. We then have to predict the results of the interplay of a large number of different basic events: i.e., the macroscopic effects. To build confidence in our modelling capability, we then need to compare these results against measurements carried out on such systems. The different levels of modelling require the solution of different types of equations using different types of parameters. The tools required for carrying out a complete validated analysis are: - the basic nuclear or chemical data; - the computer codes; and - the integral experiments. This article describes the role each component plays in a computational scheme designed for modelling purposes. It also describes which tools have been developed and are internationally available. The roles that the OECD/NEA Data Bank, the Radiation Shielding Information Computational Center (RSICC), and the IAEA Nuclear Data Section play in making these elements available to the community of scientists and engineers are also described. (author)
Laser-Plasma Modeling Using PERSEUS Extended-MHD Simulation Code for HED Plasmas
Hamlin, Nathaniel; Seyler, Charles
2017-10-01
We discuss the use of the PERSEUS extended-MHD simulation code for high-energy-density (HED) plasmas in modeling the influence of Hall and electron inertial physics on laser-plasma interactions. By formulating the extended-MHD equations as a relaxation system in which the current is semi-implicitly time-advanced using the Generalized Ohm's Law, PERSEUS enables modeling of extended-MHD phenomena (Hall and electron inertial physics) without the need to resolve the smallest electron time scales, which would otherwise be computationally prohibitive in HED plasma simulations. We first consider a laser-produced plasma plume pinched by an applied magnetic field parallel to the laser axis in axisymmetric cylindrical geometry, forming a conical shock structure and a jet above the flow convergence. The Hall term produces low-density outer plasma, a helical field structure, flow rotation, and field-aligned current, rendering the shock structure dispersive. We then model a laser-foil interaction by explicitly driving the oscillating laser fields, and examine the essential physics governing the interaction. This work is supported by the National Nuclear Security Administration stewardship sciences academic program under Department of Energy cooperative agreements DE-FOA-0001153 and DE-NA0001836.
DEFF Research Database (Denmark)
Paramanathan, Achuthan; Rasmussen, Ulrik Wilken; Hundebøll, Martin
2012-01-01
This paper presents an energy model and energy measurements for network-coding-enabled wireless meshed networks based on IEEE 802.11 technology. The energy model and the energy measurement testbed are limited to a simple Alice and Bob scenario. For this toy scenario we compare the energy usages...
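The transmission-count saving behind the Alice and Bob scenario can be sketched as follows. The constant per-packet energy cost is an assumption for illustration; the real 802.11 costs (idle listening, overhearing, ACKs) are exactly what such a testbed measures.

```python
# Transmission counts for one packet exchange in the two-way relay
# ("Alice and Bob") scenario, with and without network coding.
# E_TX is an assumed constant energy per transmission.

E_TX = 1.0  # arbitrary energy units

def energy_plain():
    # Alice->relay, relay->Bob, Bob->relay, relay->Alice
    return 4 * E_TX

def energy_coded():
    # Alice->relay, Bob->relay, then the relay broadcasts XOR(A, B);
    # each end decodes using the packet it sent itself.
    return 3 * E_TX

saving = 1 - energy_coded() / energy_plain()  # 25% fewer transmissions
```

Whether this idealized 25% saving survives real radio overheads is precisely the kind of question the measurement testbed addresses.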
Modeling post-transcriptional regulation activity of small non-coding RNAs in Escherichia coli.
Wang, Rui-Sheng; Jin, Guangxu; Zhang, Xiang-Sun; Chen, Luonan
2009-04-29
Transcriptional regulation is a fundamental process in biological systems, in which transcription factors (TFs) have been shown to play crucial roles. In recent years, in addition to TFs, an increasing number of non-coding RNAs (ncRNAs) have been shown to mediate post-transcriptional processes and to regulate many critical pathways in both prokaryotes and eukaryotes. At the same time, with more and more high-throughput biological data becoming available, it is both possible and imperative to study gene regulation quantitatively, in a systematic and detailed manner. Most existing studies that infer transcriptional regulatory interactions and TF activity ignore the possible post-transcriptional effects of ncRNAs. In this work, we propose a novel framework to infer the activity of regulators, including both TFs and ncRNAs, by exploring the expression profiles of target genes and (post)transcriptional regulatory relationships. We model the integrated regulatory system by a set of biochemical reactions, which lead to a log-bilinear problem. The inference is carried out by an iterative algorithm in which two linear programming models are efficiently solved. In contrast to available related studies, the effects of ncRNAs on the transcription process are considered in this work, so a more reasonable and accurate reconstruction can be expected. In addition, the approach is computationally suitable for large-scale problems. Experiments on two synthetic data sets and on a model system of the Escherichia coli (E. coli) carbon-source transition from glucose to acetate illustrate the effectiveness of our model and algorithm. Our results show that incorporating the post-transcriptional regulation of ncRNAs into the system model can reveal hidden effects in the estimated regulatory activity of TFs and thus uncover the biological mechanisms of gene regulation more accurately. The software for the algorithm in this paper is available
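The iterative structure of a log-bilinear inference can be sketched on synthetic data. The paper alternates two linear programs; here ordinary least squares is substituted purely for illustration, and all names (`C`, `A`, `logE`) and dimensions are assumptions, not the authors' formulation.

```python
import numpy as np

# Alternating sketch of a log-bilinear decomposition logE ~ C @ A:
# log-expression of n_genes targets across n_cond conditions, factored
# into connectivity weights C (genes x regulators) and regulator
# log-activities A (regulators x conditions).  Least squares stands in
# for the paper's two linear-programming steps.

rng = np.random.default_rng(0)
n_genes, n_regs, n_cond = 40, 3, 10
C_true = rng.uniform(0.1, 1.0, (n_genes, n_regs))
A_true = rng.normal(size=(n_regs, n_cond))
logE = C_true @ A_true                      # noiseless synthetic data

C = rng.uniform(0.1, 1.0, (n_genes, n_regs))  # random initial weights
for _ in range(10):
    A, *_ = np.linalg.lstsq(C, logE, rcond=None)       # fix C, fit activities
    Ct, *_ = np.linalg.lstsq(A.T, logE.T, rcond=None)  # fix A, refit weights
    C = Ct.T

residual = np.linalg.norm(logE - C @ A) / np.linalg.norm(logE)
```

On noiseless exact-rank data the alternating fit reproduces the expression matrix; the recovered factors are only determined up to the usual scaling ambiguity, which is why the real method adds biochemical constraints.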
Turbine model for the Laguna Verde nuclear power plant based on the RELAP code
International Nuclear Information System (INIS)
Cortes M, F.S.; Ramos P, J.C.; Salazar S, E.; Chavez M, C.
2003-01-01
Nuclear power stations such as Laguna Verde occupy an increasingly important place as a non-polluting, economical, and reliable alternative for generating electricity. For this reason, the Nuclear Engineering Group of the Engineering Faculty (GrINFI) of the National Autonomous University of Mexico (UNAM) develops research projects applied to nuclear power plants. One of the projects in progress is the development of a classroom simulator, which can be configured to access diverse models of nuclear systems for training purposes in normal operation, or to access specialized nuclear codes for the analysis of transient events and severe accidents. This work describes the development, implementation, and testing of a simplified model of the main turbine to be integrated into the set of models of the classroom simulator. It is part of the current GrINFI effort aimed at obtaining a representation of all the dynamic models necessary to simulate the balance of plant of the Laguna Verde plant. The work includes the development of the graphic displays that represent the modeling of the main turbine, and of the control interface that allows the user to manipulate this device in a simple, direct, and interactive way during training or analysis. This work is intended to contribute to the training of new technicians and to support the operating personnel of the plants. The infrastructure developed is also aimed at contributing to the design and analysis of new nuclear power stations through new computational tools for process visualization, simulation, and control. (Author)
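A classroom-style simplified turbine component often reduces to a first-order lag between steam-admission valve position and mechanical power. The sketch below is a generic illustration under that assumption; the time constant, gain, and valve signal are invented for the example and are not the actual GrINFI/RELAP model.

```python
# First-order-lag sketch of a simplified main-turbine model:
# tau * dP/dt = gain * valve(t) - P, integrated with forward Euler.
# tau, gain, dt, and the valve trajectory are illustrative assumptions.

def simulate_turbine(valve, tau=2.0, gain=1.0, dt=0.1, p0=0.0):
    """Return the power trace P(t) for a given valve-position signal."""
    p, trace = p0, []
    for v in valve:
        p += dt * (gain * v - p) / tau   # Euler step of the lag equation
        trace.append(p)
    return trace

# Step in valve position from 0 to 1: power rises exponentially
# toward the steady-state value gain * valve.
trace = simulate_turbine([1.0] * 200)
```

A model of this size is what makes interactive classroom use practical: the user moves the valve through the control interface and sees the lagged power response immediately.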
Energy Technology Data Exchange (ETDEWEB)
Chung, Ji Bum [Institute for Advanced Engineering, Yongin (Korea, Republic of); Park, Jong Woon [Korea Electric Power Research Institute, Taejon (Korea, Republic of)
1998-12-31
In order to enhance the dynamic and interactive simulation capability of a system thermal hydraulic code for nuclear power plants, the applicability of the flow network models in SINDA/FLUINT{sup TM} has been tested by modeling the feedwater system and coupling it to DSNP, one of the system thermal hydraulic simulation codes for a pressurized heavy water reactor. The feedwater system is selected since it is one of the most important balance-of-plant systems, with a potential to greatly affect the behavior of the nuclear steam supply system. The flow network model of this feedwater system consists of the condenser, condensate pumps, low and high pressure heaters, deaerator, feedwater pumps, and control valves. This complicated flow network is modeled, coupled to DSNP, and tested for several normal and abnormal transient conditions such as turbine load maneuvering, turbine trip, and loss of class IV power. The results show reasonable behavior of the coupled code and also demonstrate good dynamic and interactive simulation capability for the several mild transient conditions. It has been found that coupling a system thermal hydraulic code with a flow network code is a proper way of upgrading the simulation capability of DSNP toward a mature nuclear plant analyzer (NPA). 5 refs., 10 figs. (Author)
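The coupling pattern described above, in which the system code hands boundary conditions to the flow-network model each time step and receives the achieved flow back, can be sketched generically. The function names, the first-order-lag network response, and the load-maneuver signal are assumptions for illustration, not the actual DSNP/SINDA/FLUINT interface.

```python
# Toy explicit coupling loop: per system-code time step, pass the
# feedwater demand to the flow-network model and take back the
# achieved flow as the boundary condition for the next step.
# The lag model and all parameters are illustrative assumptions.

def network_step(demand, flow, tau=5.0, dt=0.5):
    """Flow network responds to the demanded flow with a first-order lag."""
    return flow + dt * (demand - flow) / tau

def coupled_run(demands, flow0=0.0):
    flow, history = flow0, []
    for demand in demands:                 # one system-code time step
        flow = network_step(demand, flow)  # flow-network code advances
        history.append(flow)               # returned boundary condition
    return history

# Load maneuver: demand steps from 0 to 100%; the delivered feedwater
# flow lags the demand, as the coupled transient tests exercise.
hist = coupled_run([100.0] * 100)
```

Keeping the exchange explicit, one boundary-condition handoff per step, is what lets two independently developed codes be coupled without merging their solvers, at the cost of requiring time steps short enough for the interface lag to stay stable.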