WorldWideScience

Sample records for modeling codes capable

  1. Hitch code capabilities for modeling AVT chemistry

    International Nuclear Information System (INIS)

    Leibovitz, J.

    1985-01-01

Several types of corrosion have damaged alloy 600 tubing in the secondary side of steam generators. The types of corrosion include wastage, denting, intergranular attack, stress corrosion, erosion-corrosion, etc. The environments which cause attack may originate from leaks of cooling water into the condensate, etc. When the contaminated feedwater is pumped into the generator, the impurities may concentrate first 200- to 400-fold in the bulk water, depending on the blowdown, and then further to saturation and dryness in heated tube support plate crevices. Characterization of local solution chemistries is the first step toward predicting and correcting the type of corrosion that can occur. The pH is of particular importance because it is a major factor governing the rate of corrosion reactions. The pH of a solution at high temperature is not the same as at ambient temperature, since ionic dissociation constants, solubility and solubility products, activity coefficients, etc., all change with temperature. Because the high temperature chemistry of such solutions is not readily characterized experimentally, modeling techniques were developed under EPRI sponsorship to calculate the high temperature chemistry of the relevant solutions. In many cases, the effects of cooling water impurities on steam generator water chemistry with all-volatile treatment (AVT), upon concentration by boiling, and in particular the resulting acid or base concentration, can be calculated by a simple code, the HITCH code, which is very easy to use. The scope and applicability of the HITCH code are summarized.
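The temperature shift the abstract describes can be illustrated with the water ion product: pKw falls from about 14 at 25 °C to roughly 11-12 at steam generator temperatures, so even the neutral point of pH moves. The sketch below is not the HITCH code; it only interpolates illustrative pKw values (approximate numbers, for demonstration only) to show why a room-temperature pH reading misleads at operating temperature.

```python
import bisect

# Illustrative water ion product pKw vs. temperature (deg C) along the
# saturation line; values are approximate and for demonstration only.
PKW_TABLE = [(25, 13.99), (100, 12.25), (200, 11.30), (250, 11.20), (300, 11.30)]

def pkw_at(temp_c):
    """Linearly interpolate pKw at a given temperature (deg C)."""
    temps = [t for t, _ in PKW_TABLE]
    if temp_c <= temps[0]:
        return PKW_TABLE[0][1]
    if temp_c >= temps[-1]:
        return PKW_TABLE[-1][1]
    i = bisect.bisect_left(temps, temp_c)
    (t0, p0), (t1, p1) = PKW_TABLE[i - 1], PKW_TABLE[i]
    return p0 + (p1 - p0) * (temp_c - t0) / (t1 - t0)

def neutral_ph(temp_c):
    """Neutral pH is half of pKw, since [H+] = [OH-] = sqrt(Kw)."""
    return pkw_at(temp_c) / 2.0

print(neutral_ph(25))   # close to 7
print(neutral_ph(300))  # markedly lower than 7
```

A solution that reads pH 7 at room temperature is thus not neutral at 300 °C, which is why high-temperature chemistry must be calculated rather than inferred from ambient measurements.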

  2. Fuel analysis code FAIR and its high burnup modelling capabilities

    International Nuclear Information System (INIS)

    Prasad, P.S.; Dutta, B.K.; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.

    1995-01-01

A computer code FAIR has been developed for analysing the performance of water-cooled reactor fuel pins, and it is capable of analysing high burnup fuels. The code has recently been used for analysing ten high burnup fuel rods irradiated in the Halden reactor. In the present paper, the code FAIR and its various high burnup models are described. The performance of the code FAIR in analysing high burnup fuels, and its other applications, are highlighted. (author). 21 refs., 12 figs.

  3. Relativistic modeling capabilities in PERSEUS extended MHD simulation code for HED plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Hamlin, Nathaniel D., E-mail: nh322@cornell.edu [438 Rhodes Hall, Cornell University, Ithaca, NY, 14853 (United States); Seyler, Charles E., E-mail: ces7@cornell.edu [Cornell University, Ithaca, NY, 14853 (United States)

    2014-12-15

    We discuss the incorporation of relativistic modeling capabilities into the PERSEUS extended MHD simulation code for high-energy-density (HED) plasmas, and present the latest hybrid X-pinch simulation results. The use of fully relativistic equations enables the model to remain self-consistent in simulations of such relativistic phenomena as X-pinches and laser-plasma interactions. By suitable formulation of the relativistic generalized Ohm’s law as an evolution equation, we have reduced the recovery of primitive variables, a major technical challenge in relativistic codes, to a straightforward algebraic computation. Our code recovers expected results in the non-relativistic limit, and reveals new physics in the modeling of electron beam acceleration following an X-pinch. Through the use of a relaxation scheme, relativistic PERSEUS is able to handle nine orders of magnitude in density variation, making it the first fluid code, to our knowledge, that can simulate relativistic HED plasmas.
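For a cold fluid, the primitive-variable recovery the authors reduce to algebra can be shown in its simplest special-relativistic form: the spatial four-velocity follows directly from the ratio of conserved momentum to conserved density, with no Newton iteration. This toy sketch is not the PERSEUS scheme (which treats the full extended-MHD system); it only illustrates the algebraic character of the recovery under a cold-fluid assumption.

```python
import math

C = 1.0  # speed of light in code units

def recover_primitives_cold(D, S):
    """Recover (rho, v) from conserved (D, S) for a cold relativistic fluid.

    Assumed conserved variables: D = gamma * rho (lab-frame density) and
    S = D * u, where u = gamma * v is the spatial four-velocity component.
    Then u = S / D, and gamma follows algebraically.
    """
    u = S / D                          # four-velocity component
    gamma = math.sqrt(1.0 + (u / C) ** 2)
    v = u / gamma                      # three-velocity, always < c
    rho = D / gamma                    # rest-frame density
    return rho, v, gamma

rho, v, gamma = recover_primitives_cold(D=2.0, S=2.0)
```

With pressure included, the recovery generally becomes a root-finding problem, which is why reducing it to an algebraic computation is highlighted as a technical achievement in the abstract.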

  4. Present capabilities and new developments in antenna modeling with the numerical electromagnetics code NEC

    Energy Technology Data Exchange (ETDEWEB)

    Burke, G.J.

    1988-04-08

Computer modeling of antennas, since its start in the late 1960s, has become a powerful and widely used tool for antenna design. Computer codes have been developed based on the Method of Moments, the Geometrical Theory of Diffraction, or integration of Maxwell's equations. Of such tools, the Numerical Electromagnetics Code - Method of Moments (NEC) has become one of the most widely used codes for modeling resonant-sized antennas. There are several reasons for this, including the systematic updating and extension of its capabilities, extensive user-oriented documentation, and the accessibility of its developers for user assistance. The result is that there are estimated to be several hundred users of various versions of NEC worldwide. 23 refs., 10 figs.

  5. SCANAIR a transient fuel performance code Part two: Assessment of modelling capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Georgenthum, Vincent, E-mail: vincent.georgenthum@irsn.fr; Moal, Alain; Marchand, Olivier

    2014-12-15

Highlights: • The SCANAIR code is devoted to the study of irradiated fuel rod behaviour during RIA. • The paper deals with the status of the code validation for PWR rods. • During the PCMI stage there is a good agreement between calculations and experiments. • The boiling crisis occurrence is rather well predicted. • The code assessment during the boiling crisis still has to be improved. - Abstract: In the frame of their research programmes on fuel safety, the French Institut de Radioprotection et de Sûreté Nucléaire develops the SCANAIR code, devoted to the study of irradiated fuel rod behaviour during reactivity initiated accidents (RIA). A first paper focused on detailed modelling and code description. This second paper deals with the status of the code validation for pressurised water reactor rods, performed against the available experimental results. About 60 integral tests carried out in the CABRI and NSRR experimental reactors and 24 separate tests performed in the PATRICIA facility (devoted to thermal-hydraulics studies) have been recalculated and compared to experimental data. During the first stage of the transient, the pellet-clad mechanical interaction (PCMI) phase, there is a good agreement between calculations and experiments: the clad residual elongation and hoop strain of non-failed tests, but also the failure occurrence and failure enthalpy of failed tests, are correctly calculated. After this first stage, the increase of cladding temperature can lead to Departure from Nucleate Boiling. During the film boiling regime, the clad temperature can reach very high values (>700 °C). While the boiling crisis occurrence is rather well predicted, the calculation of the clad temperature and the clad hoop strain during this stage still has to be improved.

  6. Benchmarking LWR codes capability to model radionuclide deposition within SFR containments: An analysis of the Na ABCOVE tests

    Energy Technology Data Exchange (ETDEWEB)

    Herranz, Luis E., E-mail: luisen.herranz@ciemat.es [CIEMAT, Unit of Nuclear Safety Research, Av. Complutense, 40, 28040 Madrid (Spain); Garcia, Monica, E-mail: monica.gmartin@ciemat.es [CIEMAT, Unit of Nuclear Safety Research, Av. Complutense, 40, 28040 Madrid (Spain); Morandi, Sonia, E-mail: sonia.morandi@rse-web.it [Nuclear and Industrial Plant Safety Team, Power Generation System Department, RSE, via Rubattino 54, 20134 Milano (Italy)

    2013-12-15

Highlights: • Assessment of LWR codes capability to model aerosol deposition within SFR containments. • Original hypotheses proposed to partially accommodate drawbacks from Na oxidation reactions. • A defined methodology to derive a more accurate characterization of Na-based particles. • Key missing models in LWR codes for SFR applications are identified. - Abstract: Postulated BDBAs in SFRs might result in contaminated-coolant discharge at high temperature into the containment. A full scope safety analysis of this reactor type requires computation tools properly validated in all the related fields. Radionuclide deposition, particularly within the containment, is one of those fields. This sets two major challenges: to have reliable codes available and to build up a sound data base. Development of SFR source term codes was abandoned in the 80's and few data are available at present. The ABCOVE experimental programme conducted in the 80's is still a reference in the field. The present paper is aimed at assessing the current capability of LWR codes to model aerosol deposition within a SFR containment under BDBA conditions. Through a systematic application of the ASTEC, ECART and MELCOR codes to relevant ABCOVE tests, insights have been gained into drawbacks and capabilities of these computation tools. Hypotheses and approximations have

  7. Benchmarking LWR codes capability to model radionuclide deposition within SFR containments: An analysis of the Na ABCOVE tests

    International Nuclear Information System (INIS)

    Herranz, Luis E.; Garcia, Monica; Morandi, Sonia

    2013-01-01

Highlights: • Assessment of LWR codes capability to model aerosol deposition within SFR containments. • Original hypotheses proposed to partially accommodate drawbacks from Na oxidation reactions. • A defined methodology to derive a more accurate characterization of Na-based particles. • Key missing models in LWR codes for SFR applications are identified. - Abstract: Postulated BDBAs in SFRs might result in contaminated-coolant discharge at high temperature into the containment. A full scope safety analysis of this reactor type requires computation tools properly validated in all the related fields. Radionuclide deposition, particularly within the containment, is one of those fields. This sets two major challenges: to have reliable codes available and to build up a sound data base. Development of SFR source term codes was abandoned in the 80's and few data are available at present. The ABCOVE experimental programme conducted in the 80's is still a reference in the field. The present paper is aimed at assessing the current capability of LWR codes to model aerosol deposition within a SFR containment under BDBA conditions. Through a systematic application of the ASTEC, ECART and MELCOR codes to relevant ABCOVE tests, insights have been gained into drawbacks and capabilities of these computation tools. Hypotheses and approximations have been adopted so that

  8. Validation and comparison of two-phase flow modeling capabilities of CFD, sub channel and system codes by means of post-test calculations of BFBT transient tests

    Energy Technology Data Exchange (ETDEWEB)

    Jaeger, Wadim; Manes, Jorge Perez; Imke, Uwe; Escalante, Javier Jimenez; Espinoza, Victor Sanchez, E-mail: victor.sanchez@kit.edu

    2013-10-15

Highlights: • Simulation of BFBT turbine and pump transients at multiple scales. • CFD, sub-channel and system codes are used for the comparative study. • Heat transfer models are compared to identify differences between the code predictions. • All three scales predict results in good agreement with experiment. • Subcooled boiling models are identified as a field for future research. -- Abstract: The Institute for Neutron Physics and Reactor Technology (INR) at the Karlsruhe Institute of Technology (KIT) is involved in the validation and qualification of modern thermal-hydraulic simulation tools at various scales. In the present paper, the prediction capabilities of four codes from three different scales – NEPTUNE_CFD as a fine-mesh computational fluid dynamics code, SUBCHANFLOW and COBRA-TF as sub-channel codes and TRACE as a system code – are assessed with respect to their two-phase flow modeling capabilities. The subject of the investigations is the well-known and widely used database provided within the NUPEC BFBT benchmark related to BWRs. Void fraction measurements simulating a turbine trip and a re-circulation pump trip are provided at several axial levels of the bundle. The prediction capabilities of the codes for transient conditions with various combinations of boundary conditions are validated by comparing the code predictions with the experimental data. In addition, the physical models of the different codes are described and compared to each other in order to explain the different results and to identify areas for further improvement.

  9. Development of covariance capabilities in EMPIRE code

    Energy Technology Data Exchange (ETDEWEB)

    Herman,M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.

    2008-06-24

The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range, our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on ⁸⁹Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
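The two propagation routes the abstract compares can be sketched on a toy model. This is not EMPIRE: the "model" below is an invented two-parameter function and the prior covariance is assumed, purely to show that sampling (Monte Carlo) and first-order sensitivity propagation give comparable output covariances when the model is nearly linear over the parameter uncertainties.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_model(params):
    """Stand-in for a reaction-model calculation: maps two model parameters
    to 'cross sections' at three energies. Purely illustrative."""
    a, b = params
    energies = np.array([1.0, 2.0, 5.0])
    return a * np.exp(-b * energies)

# Assumed prior parameter mean and covariance (invented numbers)
p_mean = np.array([10.0, 0.3])
p_cov = np.array([[0.25, 0.0], [0.0, 0.0004]])

# Stochastic route: sample parameters, rerun the model, take the
# sample covariance of the outputs.
samples = rng.multivariate_normal(p_mean, p_cov, size=20000)
outputs = np.array([toy_model(p) for p in samples])
mc_cov = np.cov(outputs, rowvar=False)

# Deterministic route: first-order "sandwich rule" C_y = S C_p S^T,
# with S the sensitivity matrix from finite differences.
def sensitivity(model, p, eps=1e-6):
    y0 = model(p)
    cols = []
    for i in range(len(p)):
        dp = p.copy()
        dp[i] += eps
        cols.append((model(dp) - y0) / eps)
    return np.column_stack(cols)

S = sensitivity(toy_model, p_mean)
lin_cov = S @ p_cov @ S.T
```

For a mildly nonlinear model and modest parameter uncertainties the two covariance estimates agree to within sampling noise, consistent with the abstract's observation that the procedures yield comparable results.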

  10. Group Capability Model

    Science.gov (United States)

    Olejarski, Michael; Appleton, Amy; Deltorchio, Stephen

    2009-01-01

The Group Capability Model (GCM) is a software tool that allows an organization, from first-line management to senior executive, to monitor and track the health (capability) of various groups in performing their contractual obligations. GCM calculates a Group Capability Index (GCI) by comparing actual head counts, certifications, and/or skills within a group. The model can also be used to simulate the effects of employee usage, training, and attrition on the GCI. A universal tool and common method was required due to the high risk of losing skills necessary to complete the Space Shuttle Program and meet the needs of the Constellation Program. During this transition from one space vehicle to another, the uncertainty among the critical skilled workforce is high and attrition has the potential to be unmanageable. GCM allows managers to establish requirements for their group in the form of head counts, certification requirements, or skills requirements. GCM then calculates a Group Capability Index (GCI), where a score of 1 indicates that the group is at the appropriate level; anything less than 1 indicates a potential for improvement. This shows the health of a group, both currently and over time. GCM accepts as input head count, certification needs, critical needs, competency needs, and competency critical needs. In addition, team members are categorized by years of experience, percentage of contribution, ex-members and their skills, availability, function, and in-work requirements. Outputs are several reports, including actual vs. required head count, actual vs. required certificates, GCI change over time (by month), and more. The program stores historical data for summary and historical reporting, which is done via an Excel spreadsheet that is color-coded to show health statistics at a glance. GCM has provided the Shuttle Ground Processing team with a quantifiable, repeatable approach to assessing and managing the skills in their organization. They now have a common
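The abstract does not publish GCM's actual aggregation formula, but an index with the stated property (1 means the group meets requirements, below 1 flags a shortfall) can be sketched as the minimum actual-to-required ratio across tracked categories. The function and its weighting are an assumption for illustration, not the NASA tool's algorithm.

```python
def group_capability_index(actual, required):
    """Assumed capability index: minimum ratio of actual to required
    resources across all tracked categories, capped at 1.0.
    (The real GCM aggregation is not described in the abstract.)"""
    ratios = [min(actual[k] / required[k], 1.0) for k in required]
    return min(ratios)

# Hypothetical requirements and staffing for one group
required = {"head_count": 40, "certifications": 25, "critical_skills": 10}
actual = {"head_count": 38, "certifications": 25, "critical_skills": 9}

gci = group_capability_index(actual, required)
print(f"GCI = {gci:.2f}")  # below 1 flags a potential shortfall
```

Tracking this value month by month would reproduce the "GCI change over time" report the abstract mentions.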

  11. Assessment of capability for modeling the core degradation in 2D geometry with ASTEC V2 integral code for VVER type of reactor

    International Nuclear Information System (INIS)

    Dimov, D.

    2011-01-01

The ASTEC code is progressively becoming the reference European severe accident integral code, in particular through the intensification of research activities carried out since 2004. The purpose of this analysis is to assess ASTEC code modelling of the main phenomena arising during hypothetical severe accidents, particularly in-vessel degradation in 2D geometry. The investigation covers both the early and late phases of reactor core degradation, as well as determination of the corium which will enter the reactor cavity. The initiating event is a station blackout. In order to obtain severe accident conditions, failure of all active components of the emergency core cooling system is assumed. The analysis focuses on the ICARE module of the ASTEC code and particularly on the so-called MAGMA model. The aim of the study is to determine the capability of the integral code to simulate core degradation and to determine the corium composition entering the reactor cavity. (author)

  12. New capabilities of the lattice code WIMS-AECL

    International Nuclear Information System (INIS)

    Altiparmakov, Dimitar

    2008-01-01

    The lattice code WIMS-AECL has been restructured and rewritten in Fortran 95 in order to increase the accuracy of its responses and extend its capabilities. Significant changes of computing algorithms have been made in the following two areas: geometric calculations and resonance self-shielding. Among various geometry enhancements, the code is no longer restricted to deal with single lattice cell problems. The multi-cell capability allows modelling of various lattice structures such as checkerboard lattices, a de-fuelled channel, and core-reflector interface problems. The new resonance method performs distributed resonance self-shielding including the skin effect. This paper describes the main code changes and presents selected code verification results. (authors)

  13. People Capability Maturity Model. SM.

    Science.gov (United States)

    1995-09-01

People Capability Maturity Model (CMU/SEI-95-MM-02), Carnegie Mellon University, Software Engineering Institute. Only fragments of the abstract survive in this record; they refer to assessments that can be tailored to consume less time and resources than a traditional software process assessment, and to Level 5 (Optimizing) coaching activities associated with improved reputation and customer loyalty.

  14. User's manual for a process model code

    International Nuclear Information System (INIS)

    Kern, E.A.; Martinez, D.P.

    1981-03-01

    The MODEL code has been developed for computer modeling of materials processing facilities associated with the nuclear fuel cycle. However, it can also be used in other modeling applications. This report provides sufficient information for a potential user to apply the code to specific process modeling problems. Several examples that demonstrate most of the capabilities of the code are provided

  15. Reactivity Insertion Accident (RIA) Capability Status in the BISON Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, Richard L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Folsom, Charles Pearson [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pastore, Giovanni [Idaho National Lab. (INL), Idaho Falls, ID (United States); Veeraraghavan, Swetha [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-05-01

One of the Challenge Problems being considered within CASL relates to modelling and simulation of Light Water Reactor (LWR) fuel under Reactivity Insertion Accident (RIA) conditions. BISON is the fuel performance code used within CASL for LWR fuel under both normal operating and accident conditions, and thus must be capable of addressing the RIA challenge problem. This report outlines required BISON capabilities for RIAs and describes the current status of the code. Information on recent accident capability enhancements, application of BISON to a RIA benchmark exercise, and plans for validation against RIA behavior are included.

  16. Verification of the depletion capabilities of the MCNPX code on a LWR MOX fuel assembly

    International Nuclear Information System (INIS)

    Cerba, S.; Hrncir, M.; Necas, V.

    2012-01-01

The study deals with the verification of the depletion capabilities of the MCNPX code, a linked Monte Carlo depletion code. For this purpose, Phase IV-B of the OECD NEA Burnup Credit benchmark has been chosen. That benchmark is a code-to-code comparison of the multiplication factor k_eff and of the isotopic composition of a LWR MOX fuel assembly at three given burnup levels and after five years of cooling. The benchmark consists of 6 cases (2 different Pu vectors and 3 geometry models); in this study, however, only the fuel assembly calculations with the two Pu vectors were performed. The aim of this study was to compare the obtained results with data from the participants of the OECD NEA Burnup Credit project and to confirm the burnup capability of the MCNPX code. (Authors)
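In code-to-code comparisons of this kind, differences in k_eff are conventionally reported as a reactivity difference in pcm (1e-5 dk/k). The helper below uses the standard formula; the two k_eff values are invented for illustration and are not benchmark results.

```python
def pcm_difference(k_ref, k_calc):
    """Reactivity difference between two multiplication factors in pcm:
    (1/k_ref - 1/k_calc) * 1e5."""
    return (1.0 / k_ref - 1.0 / k_calc) * 1.0e5

# Illustrative values only: a reference solution vs. a calculated one
d = pcm_difference(1.11350, 1.11520)
print(f"difference: {d:.0f} pcm")
```

A spread of a few hundred pcm between participants is the scale on which such benchmark agreement is typically judged.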

  17. Code Verification Capabilities and Assessments in Support of ASC V&V Level 2 Milestone #6035

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Budzien, Joanne Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ferguson, Jim Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Harwell, Megan Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hickmann, Kyle Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Israel, Daniel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Magrogan, William Richard III [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Singleton, Jr., Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Srinivasan, Gowri [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Walter, Jr, John William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Woods, Charles Nathan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-26

    This document provides a summary of the code verification activities supporting the FY17 Level 2 V&V milestone entitled “Deliver a Capability for V&V Assessments of Code Implementations of Physics Models and Numerical Algorithms in Support of Future Predictive Capability Framework Pegposts.” The physics validation activities supporting this milestone are documented separately. The objectives of this portion of the milestone are: 1) Develop software tools to support code verification analysis; 2) Document standard definitions of code verification test problems; and 3) Perform code verification assessments (focusing on error behavior of algorithms). This report and a set of additional standalone documents serve as the compilation of results demonstrating accomplishment of these objectives.
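Code verification assessments that focus on "error behavior of algorithms" commonly compute an observed order of accuracy from errors against an exact (often manufactured) solution on successively refined grids. The one-liner below is the standard formula; the sample error values are invented to illustrate a second-order scheme, not taken from the milestone results.

```python
import math

def observed_order(e_coarse, e_fine, refinement=2.0):
    """Observed order of accuracy from errors on two grids:
    p = log(e_coarse / e_fine) / log(refinement_ratio)."""
    return math.log(e_coarse / e_fine) / math.log(refinement)

# Illustrative errors on grids of spacing h and h/2 for a nominally
# second-order scheme: halving h should quarter the error.
p = observed_order(4.0e-3, 1.0e-3)
print(p)
```

Comparing the observed p against the algorithm's theoretical order is precisely the kind of assessment such verification test problems are designed to support.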

  18. System Reliability Analysis Capability and Surrogate Model Application in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Huang, Dongli [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gleicher, Frederick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Bei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Adbel-Khalik, Hany S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pascucci, Valerio [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-11-01

This report collects the effort performed to improve the reliability analysis capabilities of the RAVEN code and explores new opportunities in the usage of surrogate models, by extending the current RAVEN capabilities to multi-physics surrogate models and to the construction of surrogate models for high-dimensionality fields.

  19. Numerical modeling capabilities to predict repository performance

    International Nuclear Information System (INIS)

    1979-09-01

This report presents a summary of current numerical modeling capabilities that are applicable to the design and performance evaluation of underground repositories for the storage of nuclear waste. The report includes codes that are available in-house, within Golder Associates and Lawrence Livermore Laboratories, as well as those that are generally available within the industry and universities. The first listing of programs covers in-house codes in the subject areas of hydrology, solute transport, thermal and mechanical stress analysis, and structural geology. The second listing of programs is divided by subject into the following categories: site selection, structural geology, mine structural design, mine ventilation, hydrology, and mine design/construction/operation. These programs are not specifically designed for use in the design and evaluation of an underground repository for nuclear waste, but several or most of them may be so used.

  20. Development of an integrated thermal-hydraulics capability incorporating RELAP5 and PANTHER neutronics code

    Energy Technology Data Exchange (ETDEWEB)

    Page, R.; Jones, J.R.

    1997-07-01

Ensuring that safety analysis needs are met in the future is likely to lead to the development of new codes and the further development of existing codes. It is therefore advantageous to define standards for data interfaces and to develop software interfacing techniques which can readily accommodate changes when they are made. Defining interface standards is beneficial but is necessarily restricted in application if future requirements are not known in detail. Code interfacing methods are of particular relevance with the move towards automatic grid frequency response operation where the integration of plant dynamic, core follow and fault study calculation tools is considered advantageous. This paper describes the background and features of a new code TALINK (Transient Analysis code LINKage program) used to provide a flexible interface to link the RELAP5 thermal hydraulics code with the PANTHER neutron kinetics and the SIBDYM whole plant dynamic modelling codes used by Nuclear Electric. The complete package enables the codes to be executed in parallel and provides an integrated whole plant thermal-hydraulics and neutron kinetics model. In addition the paper discusses the capabilities and pedigree of the component codes used to form the integrated transient analysis package and the details of the calculation of a postulated Sizewell 'B' Loss of offsite power fault transient.
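The essence of such a coupling interface is a per-timestep exchange: the thermal-hydraulics code takes the latest power field, and the neutronics code takes the resulting temperature feedback. The stubs below are hypothetical; neither RELAP5 nor PANTHER exposes a Python API, and the physics inside each stub is a toy response, so this only illustrates the data-exchange loop a linkage program like TALINK manages.

```python
class ThermalHydraulicsStub:
    """Stands in for the thermal-hydraulics code: advances the coolant/fuel
    state given a power level. The response law here is invented."""
    def __init__(self):
        self.fuel_temperature = 600.0  # K
    def advance(self, dt, power):
        self.fuel_temperature += 0.01 * dt * power  # toy heat-up
        return self.fuel_temperature

class NeutronicsStub:
    """Stands in for the neutronics code: returns power given fuel
    temperature feedback (a toy Doppler-like negative coefficient)."""
    def advance(self, dt, fuel_temperature):
        return 100.0 * (1.0 - 1.0e-4 * (fuel_temperature - 600.0))

th, nk = ThermalHydraulicsStub(), NeutronicsStub()
power = 100.0
for step in range(10):
    dt = 0.1
    t_fuel = th.advance(dt, power)   # TH code receives the latest power
    power = nk.advance(dt, t_fuel)   # neutronics receives the TH feedback
```

In a real linkage the two solvers run as separate processes in parallel, with the interface program marshalling these exchanges against an agreed data-interface standard.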

  1. Development of an integrated thermal-hydraulics capability incorporating RELAP5 and PANTHER neutronics code

    International Nuclear Information System (INIS)

    Page, R.; Jones, J.R.

    1997-01-01

Ensuring that safety analysis needs are met in the future is likely to lead to the development of new codes and the further development of existing codes. It is therefore advantageous to define standards for data interfaces and to develop software interfacing techniques which can readily accommodate changes when they are made. Defining interface standards is beneficial but is necessarily restricted in application if future requirements are not known in detail. Code interfacing methods are of particular relevance with the move towards automatic grid frequency response operation where the integration of plant dynamic, core follow and fault study calculation tools is considered advantageous. This paper describes the background and features of a new code TALINK (Transient Analysis code LINKage program) used to provide a flexible interface to link the RELAP5 thermal hydraulics code with the PANTHER neutron kinetics and the SIBDYM whole plant dynamic modelling codes used by Nuclear Electric. The complete package enables the codes to be executed in parallel and provides an integrated whole plant thermal-hydraulics and neutron kinetics model. In addition the paper discusses the capabilities and pedigree of the component codes used to form the integrated transient analysis package and the details of the calculation of a postulated Sizewell 'B' Loss of offsite power fault transient.

  2. Updated Covariance Processing Capabilities in the AMPX Code System

    International Nuclear Information System (INIS)

    Wiarda, Dorothea; Dunn, Michael E.

    2007-01-01

A concerted effort is in progress within the nuclear data community to provide new cross-section covariance data evaluations to support sensitivity/uncertainty analyses of fissionable systems. The objective of this work is to update processing capabilities of the AMPX library to process the latest Evaluated Nuclear Data File (ENDF/B) formats to generate covariance data libraries for radiation transport software such as SCALE. The module PUFF-IV was updated to allow processing of new ENDF covariance formats in the resolved resonance region. In the resolved resonance region, covariance matrices are given in terms of resonance parameters, which need to be processed into covariance matrices with respect to the group-averaged cross-section data. The parameter covariance matrix can be quite large if the evaluation has many resonances. The PUFF-IV code has recently been used to process an evaluation of ²³⁵U, which was prepared in collaboration between Oak Ridge National Laboratory and Los Alamos National Laboratory.

  3. Prediction Capability of SPACE Code about the Loop Seal Clearing on ATLAS SBLOCA

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Sung Won; Lee, Jong Hyuk; Chung, Bub Dong; Kim, Kyung Doo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

The most probable break size for loop seal reforming was determined to be 4 inches by pre-calculations conducted with RELAP5 and MARS. Many organizations have participated with various system analysis codes, for example RELAP5, MARS and TRACE. KAERI participated with the SPACE code. The SPACE code has been developed for use in the design and safety analysis of nuclear thermal-hydraulic systems. KHNP and other organizations have collaborated on it during the last 10 years, and it is currently under certification procedures. SPACE has the capability to analyze the droplet field with a full governing equation set: continuity, momentum, and energy. The SPACE code has participated in the PKL-3 benchmark programme as an international activity; the DSP-04 benchmark problem is an application of SPACE within the domestic activities. The cold leg top slot break accident of the APR1400 reactor has been modeled and analyzed with the SPACE code. The benchmark experiment, part of the DSP-04 programme, has been performed with the ATLAS facility. The break size has been selected as 4 inches in APR1400 and the corresponding scaled-down break size has been modeled in the SPACE code. Loop seal reforming occurred in all 4 loops, but the PCT showed no significant behavior.

  4. An Enhanced GINGER Simulation Code with Harmonic Emission and HDF5 IO Capabilities

    International Nuclear Information System (INIS)

    Fawley, William M.

    2006-01-01

    GINGER [1] is an axisymmetric, polychromatic (r-z-t) FEL simulation code originally developed in the mid-1980s to model the performance of single-pass amplifiers. Over the past 15 years GINGER's capabilities have been extended to include more complicated configurations such as undulators with drift spaces, dispersive sections, and vacuum chamber wakefield effects; multi-pass oscillators; and multi-stage harmonic cascades. Its code base has been tuned to run effectively on platforms ranging from desktop PCs to massively parallel processors such as the IBM-SP. Recently, we have made significant changes to GINGER by replacing the original predictor-corrector field solver with a new direct implicit algorithm, adding harmonic emission capability, and switching to the HDF5 IO library [2] for output diagnostics. In this paper, we discuss some details regarding these changes and also present simulation results for LCLS SASE emission at λ = 0.15 nm and higher harmonics.

  5. NGNP Component Test Capability Design Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    S.L. Austad; D.S. Ferguson; L.E. Guillen; C.W. McKnight; P.J. Petersen

    2009-09-01

    The Next Generation Nuclear Plant Project is conducting a trade study to select a preferred approach for establishing a capability whereby NGNP technology development testing—through large-scale, integrated tests—can be performed for critical HTGR structures, systems, and components (SSCs). The mission of this capability includes enabling the validation of interfaces, interactions, and performance for critical systems and components prior to installation in the NGNP prototype.

  6. Studies on DANESS Code Modeling

    International Nuclear Information System (INIS)

    Jeong, Chang Joon

    2009-09-01

    A DANESS code modeling study has been performed. The DANESS code is widely used for dynamic fuel cycle analysis, and the Korea Atomic Energy Research Institute (KAERI) has used it for Korean national nuclear fuel cycle scenario analyses. In this report, the important models, such as the Energy-Demand Scenario Model, the New Reactor Capacity Decision Model, the Reactor and Fuel Cycle Facility History Model, and the Fuel Cycle Model, are investigated. In addition, some models in the interface module are refined and extended for the Korean nuclear fuel cycle model. Application studies have also been performed for GNEP cases and for US fast reactor scenarios with various conversion ratios.

  7. Los Alamos neutral particle transport codes: New and enhanced capabilities

    International Nuclear Information System (INIS)

    Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Clark, B.A.; Koch, K.R.; Marr, D.R.

    1992-01-01

    We present new developments in the Los Alamos discrete-ordinates transport codes and introduce THREEDANT, the latest in the series. THREEDANT solves the multigroup, neutral-particle transport equation in X-Y-Z and R-Θ-Z geometries. THREEDANT uses computationally efficient algorithms: Diffusion Synthetic Acceleration (DSA) accelerates the convergence of the transport iterations, and the DSA solution is itself accelerated using a multigrid technique. THREEDANT runs on a wide range of computers, from scientific workstations to CRAY supercomputers, and its algorithms are highly vectorized on CRAY machines. Recently, the THREEDANT transport algorithm was implemented on the massively parallel CM-2 computer, with performance comparable to a single-processor CRAY Y-MP. We present the results of THREEDANT analyses of test problems.

  8. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar past investigations. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies specified application requirements.
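    The six contributing elements and four maturity levels lend themselves to a simple tabular representation. The sketch below is illustrative only: the element names come from the abstract, but the 0-3 scores and the aggregation rule (reporting the minimum across elements) are assumptions, not the PCMM's actual scoring method.

    ```python
    # Illustrative sketch only: the six PCMM elements, each scored on an
    # assumed 0-3 maturity scale. Taking the minimum across elements as the
    # overall maturity is an assumption for illustration, not the PCMM rule.
    scores = {
        "representation and geometric fidelity": 2,
        "physics and material model fidelity": 1,
        "code verification": 3,
        "solution verification": 2,
        "model validation": 1,
        "uncertainty quantification and sensitivity analysis": 0,
    }

    overall = min(scores.values())  # weakest element limits overall maturity
    weakest = [name for name, level in scores.items() if level == overall]
    print(f"overall maturity level: {overall}; limited by: {', '.join(weakest)}")
    ```

    A table like this makes visible at a glance which element would repay further investment before the M&S effort is applied to the engineering application of interest.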

  9. Towards a national cybersecurity capability development model

    CSIR Research Space (South Africa)

    Jacobs, Pierre C

    2017-06-01

    … to be broken down into its components, a model serves as a blueprint to ensure that those building the capability consider all components, allows for cost estimation, and facilitates the evaluation of trade-offs. One national cybersecurity capability…

  10. Geospatial Information System Capability Maturity Models

    Science.gov (United States)

    2017-06-01

    To explore how State departments of transportation (DOTs) evaluate geospatial tool applications and services within their own agencies, particularly their experiences using capability maturity models (CMMs) such as the Urban and Regional Information ...

  11. Discrete rod burnup analysis capability in the Westinghouse advanced nodal code

    International Nuclear Information System (INIS)

    Buechel, R.J.; Fetterman, R.J.; Petrunyak, M.A.

    1992-01-01

    Core design analysis in the last several years has evolved toward the adoption of nodal-based methods to replace traditional fine-mesh models as the standard neutronic tool for first core and reload design applications throughout the nuclear industry. The accuracy, speed, and reduction in computation requirements associated with the nodal methods have made three-dimensional modeling the preferred approach to obtain the most realistic core model. These methods incorporate detailed rod power reconstruction as well. Certain design applications such as confirmation of fuel rod design limits and fuel reconstitution considerations, for example, require knowledge of the rodwise burnup distribution to avoid unnecessary conservatism in design analyses. The Westinghouse Advanced Nodal Code (ANC) incorporates the capability to generate the intra-assembly pin burnup distribution using an efficient algorithm

  12. MELCOR code modeling for APR1400

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Young; Park, S. Y.; Kim, D. H.; Ahn, K. I.; Song, Y. M.; Kim, S. D.; Park, J. H

    2001-11-01

    The severe accident phenomena of nuclear power plants involve large uncertainties. To maintain containment integrity and improve nuclear reactor safety against severe accidents, it is essential to understand severe accident phenomena and to be able to assess the accident progression accurately using a computer code. Furthermore, it is important to attain a capability for developing techniques and assessment tools for advanced nuclear reactor designs as well as for severe accident prevention and mitigation. The objective of this report is to establish the technical bases for an application of the MELCOR code to the Korean Next Generation Reactor (APR1400) by modeling the plant and analyzing the plant steady state. This report presents the data and input preparation for the MELCOR code as well as steady-state assessment results obtained with it.

  13. A user's guide to the SASSYS-1 control system modeling capability

    International Nuclear Information System (INIS)

    Vilim, R.B.

    1987-06-01

    This report describes a control system modeling capability that has been developed for the analysis of control schemes for advanced liquid metal reactors. The general class of control equations that can be represented using the modeling capability is identified, and the numerical algorithms used to solve these equations are described. The modeling capability has been implemented in the SASSYS-1 systems analysis code. A description of the card input, a sample input deck and some guidelines for running the code are given

  14. Cheetah: Starspot modeling code

    Science.gov (United States)

    Walkowicz, Lucianne; Thomas, Michael; Finkestein, Adam

    2014-12-01

    Cheetah models starspots in photometric data (lightcurves) by calculating the modulation of a light curve due to starspots. The main parameters of the program are the linear and quadratic limb darkening coefficients, stellar inclination, spot locations and sizes, and the intensity ratio of the spots to the stellar photosphere. Cheetah uses uniform spot contrast and the minimum number of spots needed to produce a good fit and ignores bright regions for the sake of simplicity.
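    The quadratic limb-darkening law named above is standard; a minimal sketch follows. The single-spot flux deficit is a schematic small-spot approximation, not Cheetah's actual algorithm, and the coefficient values are illustrative.

    ```python
    def quad_limb_darkening(mu, u1, u2):
        """Quadratic limb-darkening law: I(mu)/I(center) = 1 - u1*(1-mu) - u2*(1-mu)**2,
        where mu is the cosine of the angle from disk center."""
        return 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2

    def spot_flux_deficit(mu_spot, area_frac, contrast, u1, u2):
        """Schematic flux deficit from one small spot: local limb-darkened
        intensity times foreshortened fractional area times (1 - intensity
        ratio of spot to photosphere). Illustrative approximation only."""
        if mu_spot <= 0.0:  # spot on the hidden hemisphere contributes nothing
            return 0.0
        return quad_limb_darkening(mu_spot, u1, u2) * area_frac * mu_spot * (1.0 - contrast)

    # A 1% spot at disk center with a 0.3 intensity ratio, solar-like coefficients
    print(spot_flux_deficit(1.0, 0.01, 0.3, 0.4, 0.26))
    ```

    Summing such deficits over the visible spots as the star rotates (so each spot's mu varies with phase) yields the lightcurve modulation the abstract describes.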

  15. High burnup models in computer code fair

    Energy Technology Data Exchange (ETDEWEB)

    Dutta, B K; Swami Prasad, P; Kushwaha, H S; Mahajan, S C; Kakodkar, A [Bhabha Atomic Research Centre, Bombay (India)]

    1997-08-01

    An advanced fuel analysis code, FAIR, has been developed for analyzing the behavior of fuel rods of water-cooled reactors under severe power transients and high burnups. The code is capable of analyzing fuel pins with both collapsible cladding, as in PHWRs, and free-standing cladding, as in LWRs. The main emphasis in the development of this code is on evaluating fuel performance at extended burnups and modelling fuel rods for advanced fuel cycles. For this purpose, a number of suitable models have been incorporated in FAIR. For modelling fission gas release, three different models are implemented: a physically based mechanistic model, the standard ANS 5.4 model, and the Halden model. Similarly, the pellet thermal conductivity can be modelled by the MATPRO equation, the SIMFUEL relation, or the Halden equation. The flux distribution across the pellet is modelled using the RADAR model. For modelling pellet-clad mechanical interaction (PCMI)/stress corrosion cracking (SCC) induced failure of the sheath, the necessary routines are provided in FAIR. The validation of the code FAIR is based on the analysis of fuel rods from the EPRI project "Light water reactor fuel rod modelling code evaluation" and on the analytical simulation of the threshold power ramp criteria of fuel rods of pressurized heavy water reactors. In the present work, a study is carried out by analysing three CRP-FUMEX rods to show the effect of various combinations of fission gas release models and pellet conductivity models on the fuel analysis parameters. The satisfactory performance of FAIR may be concluded from these case studies. (author). 12 refs, 5 figs.

  16. High burnup models in computer code fair

    International Nuclear Information System (INIS)

    Dutta, B.K.; Swami Prasad, P.; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.

    1997-01-01

    An advanced fuel analysis code, FAIR, has been developed for analyzing the behavior of fuel rods of water-cooled reactors under severe power transients and high burnups. The code is capable of analyzing fuel pins with both collapsible cladding, as in PHWRs, and free-standing cladding, as in LWRs. The main emphasis in the development of this code is on evaluating fuel performance at extended burnups and modelling fuel rods for advanced fuel cycles. For this purpose, a number of suitable models have been incorporated in FAIR. For modelling fission gas release, three different models are implemented: a physically based mechanistic model, the standard ANS 5.4 model, and the Halden model. Similarly, the pellet thermal conductivity can be modelled by the MATPRO equation, the SIMFUEL relation, or the Halden equation. The flux distribution across the pellet is modelled using the RADAR model. For modelling pellet-clad mechanical interaction (PCMI)/stress corrosion cracking (SCC) induced failure of the sheath, the necessary routines are provided in FAIR. The validation of the code FAIR is based on the analysis of fuel rods from the EPRI project "Light water reactor fuel rod modelling code evaluation" and on the analytical simulation of the threshold power ramp criteria of fuel rods of pressurized heavy water reactors. In the present work, a study is carried out by analysing three CRP-FUMEX rods to show the effect of various combinations of fission gas release models and pellet conductivity models on the fuel analysis parameters. The satisfactory performance of FAIR may be concluded from these case studies. (author). 12 refs, 5 figs

  17. The analysis of thermal-hydraulic models in MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, M H; Hur, C; Kim, D K; Cho, H J [Pohang University of Science and Technology, Pohang (Korea, Republic of)]

    1996-07-15

    The objective of the present work is to verify the prediction and analysis capability of the MELCOR code for the progression of severe accidents in light water reactors, and to evaluate the appropriateness of the thermal-hydraulic models used in the code. To achieve this objective, experimental results are compared with MELCOR calculations. In particular, the CORA-13 experiment was compared with the MELCOR code calculation.

  18. Strategies for developing subchannel capability in an advanced system thermalhydraulic code: a literature review

    International Nuclear Information System (INIS)

    Cheng, J.; Rao, Y.F.

    2015-01-01

    In the framework of developing next generation safety analysis tools, Canadian Nuclear Laboratories (CNL) has planned to incorporate subchannel analysis capability into its advanced system thermalhydraulic code CATHENA 4. This paper provides a literature review and an assessment of current subchannel codes. It also evaluates three code-development methods: (i) static coupling of CATHENA 4 with the subchannel code ASSERT-PV, (ii) dynamic coupling of the two codes, and (iii) fully implicit implementation for a new, standalone CATHENA 4 version with subchannel capability. Results of the review and assessment suggest that the current ASSERT-PV modules can be used as the base for the fully implicit implementation of subchannel capability in CATHENA 4, and that this option may be the most cost-effective in the long run, resulting in savings in user application and maintenance costs. In addition, improved versatility of the tool could be accomplished by the addition of new features that could be added as part of its development. The new features would improve the capabilities of the existing subchannel code in handling low, reverse, and stagnant flows often encountered in system thermalhydraulic analysis. Therefore, the method of fully implicit implementation is preliminarily recommended for further exploration. A feasibility study will be performed in an attempt to extend the present work into a preliminary development plan. (author)

  19. The GNASH preequilibrium-statistical nuclear model code

    International Nuclear Information System (INIS)

    Arthur, E. D.

    1988-01-01

    The following report is based on materials presented in a series of lectures at the International Center for Theoretical Physics, Trieste, which were designed to describe the GNASH preequilibrium-statistical model code and its use. An overview of the code is provided, with emphasis upon the code's calculational capabilities and the theoretical models that have been implemented in it. Two sample problems are discussed: the first deals with neutron reactions on 58Ni; the second illustrates the fission model capabilities implemented in the code and involves n + 235U reactions. Finally, a description is provided of current theoretical model and code development underway. Examples of calculated results using these new capabilities are also given. 19 refs., 17 figs., 3 tabs

  20. Validation of Heat Transfer and Film Cooling Capabilities of the 3-D RANS Code TURBO

    Science.gov (United States)

    Shyam, Vikram; Ameri, Ali; Chen, Jen-Ping

    2010-01-01

    The capabilities of the 3-D unsteady RANS code TURBO have been extended to include heat transfer and film cooling applications. The results of simulations performed with the modified code are compared to experiment and to theory, where applicable. Wilcox's k-ω turbulence model has been implemented to close the RANS equations. Two simulations are conducted: (1) flow over a flat plate and (2) flow over an adiabatic flat plate cooled by one hole inclined at 35° to the free stream. For (1), agreement with theory is found to be excellent for heat transfer, represented by the local Nusselt number, and quite good for momentum, as represented by the local skin friction coefficient. This report compares the local skin friction coefficients and Nusselt numbers on a flat plate obtained using Wilcox's k-ω model with the theory of Blasius. The study looks at laminar and turbulent flows over an adiabatic flat plate and over an isothermal flat plate for two different wall temperatures. It is shown that TURBO is able to accurately predict heat transfer on a flat plate. For (2), TURBO shows good qualitative agreement with film cooling experiments performed on a flat plate with one cooling hole. Quantitatively, film effectiveness is underpredicted downstream of the hole.
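    The laminar flat-plate theory used in comparison (1) can be sketched with the standard Blasius/Pohlhausen correlations. The Reynolds and Prandtl numbers below are illustrative values, not the conditions of the TURBO runs.

    ```python
    import math

    def cf_blasius(re_x):
        """Local skin friction coefficient, laminar flat-plate boundary layer:
        Cf = 0.664 / sqrt(Re_x) (Blasius solution)."""
        return 0.664 / math.sqrt(re_x)

    def nu_laminar(re_x, pr):
        """Local Nusselt number, laminar isothermal flat plate (valid for Pr >~ 0.6):
        Nu_x = 0.332 * Re_x**0.5 * Pr**(1/3)."""
        return 0.332 * math.sqrt(re_x) * pr ** (1.0 / 3.0)

    re_x, pr = 1.0e5, 0.71  # illustrative air-like conditions
    print(f"Cf = {cf_blasius(re_x):.5f}")
    print(f"Nu_x = {nu_laminar(re_x, pr):.1f}")
    ```

    Plotting these correlations against the computed local values along the plate is the kind of comparison the abstract describes for verifying the momentum and heat transfer predictions.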

  1. New Three-Dimensional Neutron Transport Calculation Capability in STREAM Code

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Youqi [Xi'an Jiaotong University, Xi'an (China); Choi, Sooyoung; Lee, Deokjung [UNIST, Ulsan (Korea, Republic of)]

    2016-10-15

    The method of characteristics (MOC) is one of the best choices for neutron transport owing to its powerful geometry-modeling capability. To reduce the large computational burden of 3D MOC, 2D/1D schemes were proposed and have achieved great success over the past 10 years. However, such methods suffer from instability problems during the iterations when the axial neutron leakage is large. Therefore, full 3D MOC methods were developed, and much effort has been devoted to reducing their computational cost. Even so, full 3D MOC still requires too much memory and computational time for practical modeling of a commercial-size reactor core. Recently, a new approach to 3D MOC calculation without transverse integration has been implemented in the STREAM code. In this approach, the angular flux is expressed as a basis-function expansion in the axial variable z only. This approach, based on the axial expansion and 2D MOC sweeping, avoids the transverse integration used in the traditional 2D/1D scheme. By converting the 3D equation into a 2D form for the angular flux expansion coefficients, it also avoids complex 3D ray tracing. Current numerical tests using two benchmarks show good accuracy of the new method.

  2. Evacuation emergency response model coupling atmospheric release advisory capability output

    International Nuclear Information System (INIS)

    Rosen, L.C.; Lawver, B.S.; Buckley, D.W.; Finn, S.P.; Swenson, J.B.

    1983-01-01

    A Federal Emergency Management Agency (FEMA) sponsored project to develop a coupled set of models between those of the Lawrence Livermore National Laboratory (LLNL) Atmospheric Release Advisory Capability (ARAC) system and candidate evacuation models is discussed herein. This report describes the ARAC system and discusses the rapid computer code developed and the coupling with ARAC output. The computer code is adapted to the use of color graphics as a means to display and convey the dynamics of an emergency evacuation. The model is applied to a specific case of an emergency evacuation of individuals surrounding the Rancho Seco Nuclear Power Plant, located approximately 25 miles southeast of Sacramento, California. The graphics available to the model user for the Rancho Seco example are displayed and noted in detail. Suggestions for future, potential improvements to the emergency evacuation model are presented

  3. Cross checking of the new capabilities of the fuel cycle scenario code TR-EVOL - 5229

    International Nuclear Information System (INIS)

    Merino-Rodriguez, I.; Garcia-Martinez, M.; Alvarez-Velarde, F.

    2015-01-01

    This work cross-checks the new capabilities of the fuel cycle scenario code TR-EVOL by comparing its results with those published in the literature. The process has been divided into two stages. The first stage is dedicated to checking the improvements in the material management part of the fuel cycle code (the nuclear fuel mass balance estimation). The Spanish nuclear fuel cycle has been chosen as the model for the mass balance comparison, given that the fuel mass per reactor is available in the literature. The second stage has focused on verifying the validity of the TR-EVOL economic module. The economic model verification has been carried out using the ARCAS EU project and its economic assessments for advanced reactors and scenarios involving fast reactors and ADS. The main finding from the first stage is that, after the improvements and when using the proper input parameters, TR-EVOL predicts mass values quite accurately. For the second stage, results were highly satisfactory, with differences smaller than 3% relative to the results published by the ARCAS project (NRG estimations). Furthermore, concerning the decommissioning, dismantling, and disposal costs, results are highly acceptable (a 7% difference in the comparison for final disposal in a once-through scenario and around 11% for final disposal with a reprocessing strategy), given the difficulty of finding detailed published information about final disposal costs and the significant uncertainties involved in design concepts and related unit costs.

  4. Capability maturity models for offshore organisational management.

    Science.gov (United States)

    Strutt, J E; Sharp, J V; Terry, E; Miles, R

    2006-12-01

    The goal setting regime imposed by the UK safety regulator has important implications for an organisation's ability to manage health and safety related risks. Existing approaches to safety assurance based on risk analysis and formal safety assessments are increasingly considered unlikely to create the step change improvement in safety to which the offshore industry aspires and alternative approaches are being considered. One approach, which addresses the important issue of organisational behaviour and which can be applied at a very early stage of design, is the capability maturity model (CMM). The paper describes the development of a design safety capability maturity model, outlining the key processes considered necessary to safety achievement, definition of maturity levels and scoring methods. The paper discusses how CMM is related to regulatory mechanisms and risk based decision making together with the potential of CMM to environmental risk management.

  5. Sodium pool fire model for CONACS code

    International Nuclear Information System (INIS)

    Yung, S.C.

    1982-01-01

    The modeling of sodium pool fires constitutes an important ingredient in conducting LMFBR accident analysis. Such modeling capability has recently come under scrutiny at Westinghouse Hanford Company (WHC) within the context of developing CONACS, the Containment Analysis Code System. One of the efforts in the CONACS program is to model various combustion processes anticipated to occur during postulated accident paths. This effort includes the selection or modification of an existing model and development of a new model if it clearly contributes to the program purpose. As part of this effort, a new sodium pool fire model has been developed that is directed at removing some of the deficiencies in the existing models, such as SOFIRE-II and FEUNA

  6. Overview of current RFSP-code capabilities for CANDU core analysis

    International Nuclear Information System (INIS)

    Rouben, B.

    1996-01-01

    RFSP (Reactor Fuelling Simulation Program) is the major finite-reactor computer code in use at Atomic Energy of Canada Limited for the design and analysis of CANDU reactor cores. An overview is given of the major computational capabilities available in RFSP. (author) 11 refs., 29 figs.

  7. Pump Component Model in SPACE Code

    International Nuclear Information System (INIS)

    Kim, Byoung Jae; Kim, Kyoung Doo

    2010-08-01

    This technical report describes the pump component model in the SPACE code. A literature survey was made of pump models in existing system codes. The models embedded in the SPACE code were examined to check for conflicts with intellectual property rights. Design specifications, computer coding implementation, and test results are included in this report.

  8. U.S. Sodium Fast Reactor Codes and Methods: Current Capabilities and Path Forward

    Energy Technology Data Exchange (ETDEWEB)

    Brunett, A. J.; Fanning, T. H.

    2017-06-26

    The United States has extensive experience with the design, construction, and operation of sodium cooled fast reactors (SFRs) over the last six decades. Despite the closure of various facilities, the U.S. continues to dedicate research and development (R&D) efforts to the design of innovative experimental, prototype, and commercial facilities. Accordingly, in support of the rich operating history and ongoing design efforts, the U.S. has been developing and maintaining a series of tools with capabilities that envelope all facets of SFR design and safety analyses. This paper provides an overview of the current U.S. SFR analysis toolset, including codes such as SAS4A/SASSYS-1, MC2-3, SE2-ANL, PERSENT, NUBOW-3D, and LIFE-METAL, as well as the higher-fidelity tools (e.g. PROTEUS) being integrated into the toolset. Current capabilities of the codes are described and key ongoing development efforts are highlighted for some codes.

  9. Experimental modeling of eddy current inspection capabilities

    International Nuclear Information System (INIS)

    Junker, W.R.; Clark, W.G.

    1984-01-01

    This chapter examines the experimental modeling of eddy current inspection capabilities based upon the use of liquid mercury samples designed to represent metal components containing discontinuities. A brief summary of past work with mercury modeling and a detailed discussion of recent experiments designed to further evaluate the technique are presented. The main disadvantages of the mercury modeling concept are that mercury is toxic and must be handled carefully, liquid mercury can only be used to represent nonferromagnetic materials, and wetting and meniscus problems can distort the effective size of artificial discontinuities. Artificial discontinuities placed in a liquid mercury sample can be used to represent discontinuities in solid metallic structures. Discontinuity size and type cannot be characterized from phase angle and signal amplitude data developed with a surface scanning, pancake-type eddy current probe. It is concluded that the mercury model approach can greatly enhance the overall understanding and applicability of eddy current inspection techniques

  10. Extending the capabilities of CFD codes to assess ash related problems

    DEFF Research Database (Denmark)

    Kær, Søren Knudsen; Rosendahl, Lasse Aistrup; Baxter, B. B.

    2004-01-01

    This paper discusses the application of FLUENT in the analysis of grate-fired biomass boilers. A short description is offered of the concept used to model fuel conversion on the grate and its coupling to the CFD code. The development and implementation of a CFD-based deposition model is presented in the remainder of the paper. The growth of deposits on furnace walls and superheater tubes is treated, including the impact on heat transfer rates determined by the CFD code. Based on the commercial CFD code FLUENT, the overall model is fully implemented through the User Defined Functions. The model is configured…

  11. Development of burnup methods and capabilities in Monte Carlo code RMC

    International Nuclear Information System (INIS)

    She, Ding; Liu, Yuxuan; Wang, Kan; Yu, Ganglin; Forget, Benoit; Romano, Paul K.; Smith, Kord

    2013-01-01

    Highlights: ► The RMC code has been developed aiming at large-scale burnup calculations. ► Matrix exponential methods are employed to solve the depletion equations. ► The Energy-Bin method reduces the time expense of treating ACE libraries. ► The Cell-Mapping method is efficient to handle massive amounts of tally cells. ► Parallelized depletion is necessary for massive amounts of burnup regions. -- Abstract: The Monte Carlo burnup calculation has always been a challenging problem because of its large time consumption when applied to full-scale assembly or core calculations, and thus its application in routine analysis is limited. Most existing MC burnup codes are usually external wrappers between a MC code, e.g. MCNP, and a depletion code, e.g. ORIGEN. The code RMC is a newly developed MC code with an embedded depletion module aimed at performing burnup calculations of large-scale problems with high efficiency. Several measures have been taken to strengthen the burnup capabilities of RMC. Firstly, an accurate and efficient depletion module called DEPTH has been developed and built in, which employs the rational approximation and polynomial approximation methods. Secondly, the Energy-Bin method and the Cell-Mapping method are implemented to speed up the transport calculations with large numbers of nuclides and tally cells. Thirdly, the batch tally method and the parallelized depletion module have been utilized to better handle cases with massive amounts of burnup regions in parallel calculations. Burnup cases including a PWR pin and a 5 × 5 assembly group are calculated, thereby demonstrating the burnup capabilities of the RMC code. In addition, the computational time and memory requirements of RMC are compared with other MC burnup codes.
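    The depletion equations that such a module solves form a linear system dn/dt = A·n with solution n(t) = exp(A·t)·n(0), where A collects decay and transmutation rates. A minimal numpy sketch on a toy two-nuclide chain follows; the decay constant and the plain truncated-Taylor matrix exponential are illustrative stand-ins, since DEPTH's rational and polynomial approximations are considerably more sophisticated.

    ```python
    import numpy as np

    # Toy chain A -> B (stable): dn/dt = M n, solved as n(t) = exp(M t) n(0).
    lam = 0.1  # decay constant of nuclide A [1/s] (illustrative value)
    M = np.array([[-lam, 0.0],
                  [ lam, 0.0]])

    def expm_taylor(X, terms=30):
        """Matrix exponential via truncated Taylor series (adequate for small ||X||)."""
        E = np.eye(X.shape[0])
        T = np.eye(X.shape[0])
        for k in range(1, terms):
            T = T @ X / k  # T now holds X**k / k!
            E = E + T
        return E

    t = 5.0
    n0 = np.array([1.0, 0.0])      # start with pure nuclide A
    n_t = expm_taylor(M * t) @ n0
    print(n_t)  # analytic solution: [exp(-lam*t), 1 - exp(-lam*t)]
    ```

    Real depletion matrices are large, sparse, and stiff, which is precisely why production codes use rational approximations rather than a plain Taylor series; the sketch only shows the mathematical form being approximated.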

  12. Toward a CFD nose-to-tail capability - Hypersonic unsteady Navier-Stokes code validation

    Science.gov (United States)

    Edwards, Thomas A.; Flores, Jolen

    1989-01-01

    Computational fluid dynamics (CFD) research for hypersonic flows presents new problems in code validation because of the added complexity of the physical models. This paper surveys code validation procedures applicable to hypersonic flow models that include real gas effects. The current status of hypersonic CFD flow analysis is assessed with the Compressible Navier-Stokes (CNS) code as a case study. The methods of code validation discussed go beyond comparison with experimental data to include comparisons with other codes and formulations, component analyses, and estimation of numerical errors. Current results indicate that predicting hypersonic flows of perfect gases and equilibrium air is well in hand. Pressure, shock location, and integrated quantities are relatively easy to predict accurately, while surface quantities such as heat transfer are more sensitive to the solution procedure. Modeling transition to turbulence needs refinement, though preliminary results are promising.

  13. Exploiting the Error-Correcting Capabilities of Low Density Parity Check Codes in Distributed Video Coding using Optical Flow

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau; Søgaard, Jacob; Salmistraro, Matteo

    2012-01-01

    We consider Distributed Video Coding (DVC) in the presence of communication errors. First, we present DVC side information generation based on a new method of optical-flow-driven frame interpolation, where a highly optimized TV-L1 algorithm is used for the flow calculations. Thereafter, methods for exploiting the error-correcting capabilities of the LDPCA code in DVC are investigated. The proposed frame interpolation adds a symmetric flow constraint to the standard forward-backward frame interpolation scheme, which improves quality and the handling of large motion, and the three flows are combined in one solution. The proposed frame interpolation method consistently outperforms an overlapped block motion compensation scheme and a previous TV-L1 optical flow frame interpolation method, with average PSNR improvements of 1.3 dB and 2.3 dB respectively. For a GOP size of 2...

  14. Assessment of predictive capability of REFLA/TRAC code for large break LOCA transient in PWR using LOFT L2-5 test data

    International Nuclear Information System (INIS)

    Akimoto, Hajime; Ohnuki, Akira; Murao, Yoshio

    1994-03-01

    The REFLA/TRAC code is a best estimate code developed at Japan Atomic Energy Research Institute (JAERI) to provide advanced predictions of thermal hydraulic transient in light water reactors (LWRs). The REFLA/TRAC code uses the TRAC-PF1/MOD1 code as the framework of the code. The REFLA/TRAC code is expected to be used for the calibration of licensing codes, accident analysis, accident simulation of LWRs, and design of advanced LWRs. Several models have been implemented to the TRAC-PF1/MOD1 code at JAERI including reflood model, condensation model, interfacial and wall friction models, etc. These models have been verified using data from various separate effect tests. This report describes an assessment result of the REFLA/TRAC code, which was performed to assess the predictive capability for integral system behavior under large break loss of coolant accident (LBLOCA) using data from the LOFT L2-5 test. The assessment calculation confirmed that the REFLA/TRAC code can predict break mass flow rate, emergency core cooling water bypass and clad temperature excellently in the LOFT L2-5 test. The CPU time of the REFLA/TRAC code was about 1/3 of the TRAC-PF1/MOD1 code. The REFLA/TRAC code can perform stable and fast simulation of thermal hydraulic behavior in PWR LBLOCA with enough accuracy for practical use. (author)

  15. Assessment of Prediction Capabilities of COCOSYS and CFX Code for Simplified Containment

    Directory of Open Access Journals (Sweden)

    Jia Zhu

    2016-01-01

    Full Text Available Acceptable accuracy in the simulation of severe accident scenarios in containments of nuclear power plants is required to investigate the consequences of severe accidents and the effectiveness of potential countermeasures. For this purpose, the actual capability of the CFX tool and the COCOSYS code is assessed in prototypical geometries for a simplified physical process: a plume (due to a heat source) under adiabatic and convection boundary conditions, respectively. Results of the comparison under the adiabatic boundary condition show that good agreement is obtained among the analytical solution, the COCOSYS prediction, and the CFX prediction for zone temperature. The general trend of the temperature distribution along the vertical direction predicted by COCOSYS agrees with the CFX prediction except in the dome, where the behavior is predicted well by CFX but fails to be reproduced by COCOSYS. Both COCOSYS and CFX indicate that there is no temperature stratification inside the dome. The CFX prediction shows that a temperature stratification area occurs beneath the dome and away from the heat source, and that this area is larger under the adiabatic boundary condition than under the convection boundary condition. The results indicate that the average temperature inside the containment predicted with the COCOSYS model is overestimated under the adiabatic boundary condition, while it is underestimated under the convection boundary condition, compared to the CFX prediction.

  16. MORSE-CGT Monte Carlo radiation transport code with the capability of the torus geometric treatment

    International Nuclear Information System (INIS)

    Deng Li

    1990-01-01

    The combinatorial geometry package CGT, with the capability of torus geometric treatment, is introduced. It was obtained by extending the combinatorial geometry package CG, and it can be transplanted into any code in which the CG package is used, giving that code the same capability. The MORSE-CGT code can be used to solve neutron, gamma-ray or coupled neutron-gamma-ray transport problems, including time dependence, for both shielding and criticality problems in a torus system, or in a system produced by arbitrarily combining a finite number of tori with other tori or with other bodies of the CG package. It can also be used to design the blanket and compute shielding for a TOKAMAK Fusion-Fission Hybrid Reactor

  17. Milagro Version 2 An Implicit Monte Carlo Code for Thermal Radiative Transfer: Capabilities, Development, and Usage

    Energy Technology Data Exchange (ETDEWEB)

    T.J. Urbatsch; T.M. Evans

    2006-02-15

    We have released Version 2 of Milagro, an object-oriented, C++ code that performs radiative transfer using Fleck and Cummings' Implicit Monte Carlo method. Milagro, a part of the Jayenne program, is a stand-alone driver code used as a methods research vehicle and to verify its underlying classes. These underlying classes are used to construct Implicit Monte Carlo packages for external customers. Milagro-2 represents a design overhaul that allows better parallelism and extensibility. New features in Milagro-2 include verified momentum deposition, restart capability, graphics capability, exact energy conservation, and improved load balancing and parallel efficiency. A users' guide also describes how to configure, make, and run Milagro-2.

  18. Model-Based Military Scenario Management for Defence Capability

    National Research Council Canada - National Science Library

    Gori, Ronnie; Chen, Pin; Pozgay, Angela

    2004-01-01

    .... This paper describes initial work towards the development of an information model that links scenario and capability related information, and the results of capability analysis and experimentation...

  19. Interfacial and Wall Transport Models for SPACE-CAP Code

    International Nuclear Information System (INIS)

    Hong, Soon Joon; Choo, Yeon Joon; Han, Tae Young; Hwang, Su Hyun; Lee, Byung Chul; Choi, Hoon; Ha, Sang Jun

    2009-01-01

    The development project for the domestic design code was launched for use in the safety and performance analysis of pressurized light water reactors. The CAP (Containment Analysis Package) code has also been developed for containment safety and performance analysis, side by side with SPACE. The CAP code treats three fields (gas, continuous liquid, and dispersed drop) for the assessment of containment-specific phenomena, and is featured by its multidimensional assessment capabilities. The thermal hydraulics solver has already been developed and is now being tested for stability and soundness. As a next step, interfacial and wall transport models were set up. In order to develop the best model and correlation package for the CAP code, various models currently used in the major containment analysis codes GOTHIC, CONTAIN2.0, and CONTEMPT-LT have been reviewed. The origins of the selected models used in these codes have also been examined to verify that the models do not conflict with proprietary rights. In addition, a literature survey of recent studies has been performed in order to incorporate better models into the CAP code. The models and correlations of SPACE were also reviewed. CAP models and correlations are composed of interfacial heat/mass and momentum transport models, and wall heat/mass and momentum transport models. This paper discusses those transport models in the CAP code

  20. Interfacial and Wall Transport Models for SPACE-CAP Code

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Soon Joon; Choo, Yeon Joon; Han, Tae Young; Hwang, Su Hyun; Lee, Byung Chul [FNC Tech., Seoul (Korea, Republic of); Choi, Hoon; Ha, Sang Jun [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)

    2009-10-15

    The development project for the domestic design code was launched for use in the safety and performance analysis of pressurized light water reactors. The CAP (Containment Analysis Package) code has also been developed for containment safety and performance analysis, side by side with SPACE. The CAP code treats three fields (gas, continuous liquid, and dispersed drop) for the assessment of containment-specific phenomena, and is featured by its multidimensional assessment capabilities. The thermal hydraulics solver has already been developed and is now being tested for stability and soundness. As a next step, interfacial and wall transport models were set up. In order to develop the best model and correlation package for the CAP code, various models currently used in the major containment analysis codes GOTHIC, CONTAIN2.0, and CONTEMPT-LT have been reviewed. The origins of the selected models used in these codes have also been examined to verify that the models do not conflict with proprietary rights. In addition, a literature survey of recent studies has been performed in order to incorporate better models into the CAP code. The models and correlations of SPACE were also reviewed. CAP models and correlations are composed of interfacial heat/mass and momentum transport models, and wall heat/mass and momentum transport models. This paper discusses those transport models in the CAP code.

  1. Conceptual Model of IT Infrastructure Capability and Its Empirical Justification

    Institute of Scientific and Technical Information of China (English)

    QI Xianfeng; LAN Boxiong; GUO Zhenwei

    2008-01-01

    Increasing importance has been attached to the value of information technology (IT) infrastructure in today's organizations. The development of efficacious IT infrastructure capability enhances business performance and brings sustainable competitive advantage. This study analyzed IT infrastructure capability in a holistic way and then presented a conceptual model of IT capability. IT infrastructure capability was categorized into sharing capability, service capability, and flexibility. This study then empirically tested the model using a set of survey data collected from 145 firms. Three factors emerge from the factor analysis as IT flexibility, IT service capability, and IT sharing capability, which agree with those in the conceptual model built in this study.

  2. Impacts of Model Building Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Athalye, Rahul A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sivaraman, Deepak [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elliott, Douglas B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Bing [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bartlett, Rosemarie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-10-31

    The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states which have codes which are fundamentally different from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.

  3. MIDAS/PK code development using point kinetics model

    International Nuclear Information System (INIS)

    Song, Y. M.; Park, S. H.

    1999-01-01

    In this study, the MIDAS/PK code has been developed for analyzing ATWS (Anticipated Transients Without Scram) events, which can be severe accident initiating events. MIDAS is an integrated computer code based on the MELCOR code, developed by the Korea Atomic Energy Research Institute to support a severe accident risk reduction strategy. Meanwhile, the Chexal-Layman correlation in the current MELCOR, which was developed under BWR conditions, appears to be inappropriate for a PWR. To provide ATWS analysis capability to the MIDAS code, a point kinetics module, PKINETIC, was first developed as a stand-alone code whose reference model was selected from the current accident analysis codes. In the next step, the MIDAS/PK code was developed by coupling PKINETIC with the MIDAS code, inter-connecting several thermal hydraulic parameters between the two codes. Since the major concern in ATWS analysis is the primary peak pressure during the first few minutes of the accident, the peak pressures from the PKINETIC module and from MIDAS/PK are compared with RETRAN calculations, showing good agreement between them. The MIDAS/PK code is considered valuable for deterministically analyzing the plant response during ATWS, especially for the early domestic Westinghouse plants, which rely on operator procedures instead of an AMSAC (ATWS Mitigating System Actuation Circuitry) against ATWS. This ATWS analysis capability is also important from the viewpoint of accident management and mitigation
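A point kinetics module of the kind described above solves a small stiff ODE system for the neutron density and delayed-neutron precursors. The sketch below is illustrative only (not the PKINETIC module): one delayed-neutron group integrated with classic RK4, with textbook-style parameter values (β, λ, Λ) chosen purely for demonstration.

```python
# Minimal point-kinetics sketch (illustrative, NOT the PKINETIC module):
# one delayed-neutron group, constant reactivity rho, classic RK4.
#   dn/dt = ((rho - beta)/Lambda) * n + lam * C
#   dC/dt = (beta/Lambda) * n - lam * C

def point_kinetics(rho, n0=1.0, beta=0.0065, lam=0.08, Lam=2.0e-5,
                   t_end=1.0, dt=1.0e-4):
    """Return neutron density n(t_end) for constant reactivity rho
    (absolute units, not dollars). Parameter values are illustrative."""
    n = n0
    C = beta * n0 / (Lam * lam)  # equilibrium precursor concentration

    def f(n, C):
        dn = (rho - beta) / Lam * n + lam * C
        dC = beta / Lam * n - lam * C
        return dn, dC

    t = 0.0
    while t < t_end - 1e-12:
        k1 = f(n, C)
        k2 = f(n + 0.5 * dt * k1[0], C + 0.5 * dt * k1[1])
        k3 = f(n + 0.5 * dt * k2[0], C + 0.5 * dt * k2[1])
        k4 = f(n + dt * k3[0], C + dt * k3[1])
        n += dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        C += dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
    return n
```

Two sanity checks follow directly from the equations: at ρ = 0 with precursors initialized at equilibrium the power stays flat, and a small positive reactivity below prompt critical (ρ < β) gives slow growth governed by the delayed neutrons.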

  4. Computer code structure for evaluation of fire protection measures and fighting capability at nuclear plants

    International Nuclear Information System (INIS)

    Anton, V.

    1997-01-01

    In this work a computer code structure for evaluating Fire Protection Measures (FPM) and Fire Fighting Capability (FFC) at Nuclear Power Plants (NPPs) is presented. It allows evaluation of the category (satisfactory (s), needs further evaluation (n), unsatisfactory (u)) to which a given NPP belongs, as a self-check in view of an IAEA inspection. This possibility of self-assessment follows from IAEA documents. Our approach is based on international experience gained in this field and stated in IAEA recommendations. As an illustration, FORTRAN programming language statements are used to make the structure of the computer code for the problem clear. The computer program can be designed so that literal messages in the English and Romanian languages are displayed beside the percentage assessments. (author)

  5. New strategies of sensitivity analysis capabilities in continuous-energy Monte Carlo code RMC

    International Nuclear Information System (INIS)

    Qiu, Yishu; Liang, Jingang; Wang, Kan; Yu, Jiankai

    2015-01-01

    Highlights: • Data decomposition techniques are proposed for memory reduction. • New strategies are put forward and implemented in RMC code to improve efficiency and accuracy for sensitivity calculations. • A capability to compute region-specific sensitivity coefficients is developed in RMC code. - Abstract: The iterated fission probability (IFP) method has been demonstrated to be an accurate alternative for estimating the adjoint-weighted parameters in continuous-energy Monte Carlo forward calculations. However, the memory requirements of this method are huge especially when a large number of sensitivity coefficients are desired. Therefore, data decomposition techniques are proposed in this work. Two parallel strategies based on the neutron production rate (NPR) estimator and the fission neutron population (FNP) estimator for adjoint fluxes, as well as a more efficient algorithm which has multiple overlapping blocks (MOB) in a cycle, are investigated and implemented in the continuous-energy Reactor Monte Carlo code RMC for sensitivity analysis. Furthermore, a region-specific sensitivity analysis capability is developed in RMC. These new strategies, algorithms and capabilities are verified against analytic solutions of a multi-group infinite-medium problem and against results from other software packages including MCNP6, TSUNAMI-1D and multi-group TSUNAMI-3D. While the results generated by the NPR and FNP strategies agree within 0.1% of the analytic sensitivity coefficients, the MOB strategy surprisingly produces sensitivity coefficients exactly equal to the analytic ones. Meanwhile, the results generated by the three strategies in RMC are in agreement with those produced by other codes within a few percent. Moreover, the MOB strategy performs the most efficient sensitivity coefficient calculations (offering as much as an order of magnitude gain in FoMs over MCNP6), followed by the NPR and FNP strategies, and then MCNP6. The results also reveal that these

  6. MARS code manual volume I: code structure, system models, and solution methods

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Kim, Kyung Doo; Bae, Sung Won; Jeong, Jae Jun; Lee, Seung Wook; Hwang, Moon Kyu; Yoon, Churl

    2010-02-01

    The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF code. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-Of-State (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This theory manual provides complete overall information on the code structure and major functions of MARS, including the code architecture, hydrodynamic model, heat structures, trip/control systems and point reactor kinetics model, and should therefore be very useful for code users. The overall structure of the manual is modeled on that of the RELAP5 manual, and as such its layout is very similar to RELAP's. This similarity to RELAP5 input is intentional, as the shared input scheme allows minimum modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible

  7. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    International Nuclear Information System (INIS)

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2015-01-01

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results

  8. Establishing an infrared measurement and modelling capability

    CSIR Research Space (South Africa)

    Willers, CJ

    2011-04-01

    Full Text Available The protection of own aircraft assets against infrared missile threats requires a deep understanding of the vulnerability of these assets with regard to specific threats and specific environments of operation. A key capability in the protection...

  9. Pressure vessel code construction capabilities for a nickel-chromium-tungsten-molybdenum alloy

    International Nuclear Information System (INIS)

    Rothman, M.F.

    1990-01-01

    HAYNES alloy 230 (UNS N06230) has achieved wide usage in a variety of high-temperature aerospace, chemical process industry and industrial heating applications since its introduction in 1981. Combining high elevated-temperature strength with excellent metallurgical stability, environment resistance and relatively straightforward fabrication characteristics, this Ni-Cr-W-Mo alloy was an excellent candidate for ASME Pressure Vessel Code applications. Coverage under Case No. 2063 was granted in July 1989 for both Section I and Section VIII Division 1 construction. In this paper, the metallurgy of 230 alloy is described, and its design strength capabilities are contrasted with those of more established code materials. Other important performance capabilities, such as long-term thermal stability, oxidation resistance, fatigue resistance, and resistance to other forms of environmental degradation, are discussed. It is shown that the combined properties of 230 alloy offer significant advantages over other materials for applications such as expansion bellows, heat exchangers, valves and other components in the fossil energy, nuclear energy and chemical process industries, among others

  10. Assessing the Predictive Capability of the LIFEIV Nuclear Fuel Performance Code using Sequential Calibration

    International Nuclear Information System (INIS)

    Stull, Christopher J.; Williams, Brian J.; Unal, Cetin

    2012-01-01

    This report considers the problem of calibrating a numerical model to data from an experimental campaign (or series of experimental tests). The issue is that when an experimental campaign is proposed, only the input parameters associated with each experiment are known (i.e. outputs are not known because the experiments have yet to be conducted). Faced with such a situation, it would be beneficial from the standpoint of resource management to carefully consider the sequence in which the experiments are conducted. In this way, the resources available for experimental tests may be allocated in a way that best 'informs' the calibration of the numerical model. To address this concern, the authors propose decomposing the input design space of the experimental campaign into its principal components. Subsequently, the utility (to be explained) of each experimental test to the principal components of the input design space is used to formulate the sequence in which the experimental tests will be used for model calibration purposes. The results reported herein build on those presented and discussed in (1,2) wherein Verification and Validation and Uncertainty Quantification (VU) capabilities were applied to the nuclear fuel performance code LIFEIV. In addition to the raw results from the sequential calibration studies derived from the above, a description of the data within the context of the Predictive Maturity Index (PMI) will also be provided. The PMI (3,4) is a metric initiated and developed at Los Alamos National Laboratory to quantitatively describe the ability of a numerical model to make predictions in the absence of experimental data, where it is noted that 'predictions in the absence of experimental data' is not synonymous with extrapolation. This simply reflects the fact that resources do not exist such that each and every execution of the numerical model can be compared against experimental data. If such resources existed, the justification for numerical models

  11. User Instructions for the Systems Assessment Capability, Rev. 0, Computer Codes Volume 2: Impact Modules

    International Nuclear Information System (INIS)

    Eslinger, Paul W.; Arimescu, Carmen; Kanyid, Beverly A.; Miley, Terri B.

    2001-01-01

    One activity of the Department of Energy's Groundwater/Vadose Zone Integration Project is an assessment of cumulative impacts from Hanford Site wastes on the subsurface environment and the Columbia River. Through the application of a system assessment capability (SAC), decisions for each cleanup and disposal action will be able to take into account the composite effect of other cleanup and disposal actions. The SAC has developed a suite of computer programs to simulate the migration of contaminants (analytes) present on the Hanford Site and to assess the potential impacts of the analytes, including dose to humans, socio-cultural impacts, economic impacts, and ecological impacts. The general approach to handling uncertainty in the SAC computer codes is a Monte Carlo approach. Conceptually, one generates a value for every stochastic parameter in the code (the entire sequence of modules from inventory through transport and impacts) and then executes the simulation, obtaining an output value, or result. This document provides user instructions for the SAC codes that generate human, ecological, economic, and cultural impacts
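The Monte Carlo approach described here — draw one value for every stochastic parameter, execute the simulation, collect the result, and repeat — can be sketched with a toy surrogate. Everything below is hypothetical (the model function, parameter names, and distributions are invented for illustration and are not the SAC impact modules).

```python
# Sketch of the Monte Carlo uncertainty approach described above.
# toy_transport_model is a HYPOTHETICAL stand-in for the inventory ->
# transport -> impact module chain; it is not the SAC code.
import random

def toy_transport_model(inventory_ci, kd_ml_per_g, infiltration_m_per_yr):
    """Crude dilution/retardation expression, for illustration only."""
    retardation = 1.0 + 4.0 * kd_ml_per_g
    return inventory_ci * infiltration_m_per_yr / retardation

def monte_carlo(n_realizations, seed=42):
    rng = random.Random(seed)
    results = []
    for _ in range(n_realizations):
        # One value per stochastic parameter per realization
        inventory = rng.lognormvariate(0.0, 0.5)   # source inventory (Ci)
        kd = rng.uniform(0.5, 5.0)                 # sorption coefficient
        infil = rng.triangular(0.001, 0.1, 0.01)   # infiltration (m/yr)
        results.append(toy_transport_model(inventory, kd, infil))
    return results

results = monte_carlo(1000)
mean = sum(results) / len(results)
```

The collection of outputs, rather than a single deterministic run, is what allows the downstream impact modules to report distributions and confidence bounds instead of point estimates.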

  12. Fatigue modelling according to the JCSS Probabilistic model code

    NARCIS (Netherlands)

    Vrouwenvelder, A.C.W.M.

    2007-01-01

    The Joint Committee on Structural Safety is working on a Model Code for full probabilistic design. The code consists of three major parts: Basis of Design, Load Models and Models for Material and Structural Properties. The code is intended as the operational counterpart of codes like ISO,

  13. Towards a heavy-ion transport capability in the MARS15 Code

    International Nuclear Information System (INIS)

    Mokhov, N.V.; Gudima, K.K.; Mashnik, S.G.; Rakhno, I.L.; Striganov, S.

    2004-01-01

    In order to meet the challenges of new accelerator and space projects and further improve modelling of radiation effects in microscopic objects, heavy-ion interaction and transport physics have been recently incorporated into the MARS15 Monte Carlo code. A brief description of new modules is given in comparison with experimental data. The MARS Monte Carlo code is widely used in numerous accelerator, detector, shielding and cosmic ray applications. The needs of the Relativistic Heavy-Ion Collider, Large Hadron Collider, Rare Isotope Accelerator and NASA projects have recently induced adding heavy-ion interaction and transport physics to the MARS15 code. The key modules of the new implementation are described below along with their comparisons to experimental data.

  14. Development of a system code with CFD capability for analyzing turbulent mixed convection in gas-cooled reactors

    International Nuclear Information System (INIS)

    Kim, Hyeon Il

    2010-02-01

    convection regime, and (4) recently conducted experiments in a deteriorated turbulent heat transfer regime. The validation proved that the Launder-Sharma model can supply improved solutions and much better knowledge of not only the wall temperature but also the heat transfer phenomena in the turbulent mixed convection regime, the DTHT, compared to that offered by a one-dimensional empirical correlation. A set of modules providing Computational Fluid Dynamics (CFD) capability, able to handle multi-dimensional heat transfer, is incorporated into a system code for GCRs, GAMMA+, by adopting the Launder-Sharma turbulence model. We implemented the model in the original system code based on the same schemes — the Implicit Continuous-fluid Eulerian (ICE) scheme in a staggered mesh layout and Newton linearization, as constructed in the original code — in such a way that the model did not interfere with the numerical stability. The extended code, GAMMA T, was successfully verified and validated: the model is well formulated on a firmly established numerical foundation, as shown through comparisons with an available set of data covering the turbulent forced convection regime. The GAMMA T code shows strong potential for future use as a robust integrated system code with multi-scale analysis capability

  15. A study on the prediction capability of GOTHIC and HYCA3D code for local hydrogen concentrations

    International Nuclear Information System (INIS)

    Choi, Y. S.; Lee, W. J.; Lee, J. J.; Park, K. C.

    2002-01-01

    In this study the prediction capability of the GOTHIC and HYCA3D codes for local hydrogen concentrations was verified against experimental results. Among the experiments, executed by SNU and other organizations inside and outside the country, the fast-transient and obstacle cases were selected. In the case of a large subcompartment, both codes show good agreement with the experimental data. But in the case of small and complex geometry or fast transients, the results of the GOTHIC code differ substantially from the experimental ones, which indicates that the GOTHIC code is unsuitable for these cases. On the contrary, the HYCA3D code agrees well with all the experimental data

  16. Coupling a Basin Modeling and a Seismic Code using MOAB

    KAUST Repository

    Yan, Mi; Jordan, Kirk; Kaushik, Dinesh; Perrone, Michael; Sachdeva, Vipin; Tautges, Timothy J.; Magerlein, John

    2012-01-01

    We report on a demonstration of loose multiphysics coupling between a basin modeling code and a seismic code running on a large parallel machine. Multiphysics coupling, which is one critical capability for a high performance computing (HPC) framework, was implemented using the MOAB open-source mesh and field database. MOAB provides for code coupling by storing mesh data and input and output field data for the coupled analysis codes and interpolating the field values between different meshes used by the coupled codes. We found it straightforward to use MOAB to couple the PBSM basin modeling code and the FWI3D seismic code on an IBM Blue Gene/P system. We describe how the coupling was implemented and present benchmarking results for up to 8 racks of Blue Gene/P with 8192 nodes and MPI processes. The coupling code is fast compared to the analysis codes and it scales well up to at least 8192 nodes, indicating that a mesh and field database is an efficient way to implement loose multiphysics coupling for large parallel machines.

  17. Coupling a Basin Modeling and a Seismic Code using MOAB

    KAUST Repository

    Yan, Mi

    2012-06-02

    We report on a demonstration of loose multiphysics coupling between a basin modeling code and a seismic code running on a large parallel machine. Multiphysics coupling, which is one critical capability for a high performance computing (HPC) framework, was implemented using the MOAB open-source mesh and field database. MOAB provides for code coupling by storing mesh data and input and output field data for the coupled analysis codes and interpolating the field values between different meshes used by the coupled codes. We found it straightforward to use MOAB to couple the PBSM basin modeling code and the FWI3D seismic code on an IBM Blue Gene/P system. We describe how the coupling was implemented and present benchmarking results for up to 8 racks of Blue Gene/P with 8192 nodes and MPI processes. The coupling code is fast compared to the analysis codes and it scales well up to at least 8192 nodes, indicating that a mesh and field database is an efficient way to implement loose multiphysics coupling for large parallel machines.
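The core service a mesh-and-field database such as MOAB provides to coupled codes is interpolating field values between the different meshes the codes use. The sketch below illustrates the idea in one dimension with invented names; it does not use the MOAB API, and the meshes and field values are purely illustrative.

```python
import bisect

def transfer_field(src_nodes, src_vals, dst_nodes):
    """Linearly interpolate a nodal field from a source mesh onto a
    destination mesh -- the field-transfer step of loose coupling."""
    out = []
    for x in dst_nodes:
        j = bisect.bisect_left(src_nodes, x)
        if j == 0:                       # clamp below the source mesh
            out.append(src_vals[0])
        elif j == len(src_nodes):        # clamp above the source mesh
            out.append(src_vals[-1])
        else:
            x0, x1 = src_nodes[j - 1], src_nodes[j]
            f = (x - x0) / (x1 - x0)
            out.append((1 - f) * src_vals[j - 1] + f * src_vals[j])
    return out

# coarse basin-model mesh -> finer seismic mesh (illustrative values)
src = [0.0, 1.0, 2.0]
vals = [10.0, 20.0, 40.0]
dst = [0.0, 0.5, 1.5, 2.0]
print(transfer_field(src, vals, dst))  # [10.0, 15.0, 30.0, 40.0]
```

In a real coupling each code keeps writing its output field into the shared database after every step, and the partner code reads the interpolated values back as boundary or source data.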

  18. Error detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3

    Science.gov (United States)

    Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1989-09-01

    The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
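The weight-distribution calculation the abstract describes can be illustrated on a code small enough to enumerate. The sketch below uses the classic (7,4) Hamming code rather than the much longer shortened code of IEEE 802.3, but the computation is the same: obtain the weight distribution A_i, then sum A_i p^i (1-p)^(n-i) over the nonzero weights to get the probability of undetectable error on a binary symmetric channel.

```python
from itertools import product

# Generator matrix of the (7,4) Hamming code (stand-in for the 802.3
# code, which is far too long to enumerate; the principle is identical).
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1],
     [0, 0, 0, 1, 1, 0, 1]]

def weight_distribution():
    """Count codewords of each Hamming weight by brute-force encoding."""
    A = [0] * 8
    for msg in product([0, 1], repeat=4):
        cw = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        A[sum(cw)] += 1
    return A

def p_undetected(p, A, n=7):
    """An error pattern goes undetected iff it equals a nonzero codeword."""
    return sum(A[i] * p**i * (1 - p)**(n - i) for i in range(1, n + 1))

A = weight_distribution()
print(A)                      # [1, 0, 0, 7, 7, 0, 0, 1]
print(p_undetected(1e-3, A))  # ~7e-9, dominated by the 7 weight-3 codewords
```

For the shortened codes of the standard, the weight distributions cannot be enumerated this way and must be computed by the algebraic methods the paper develops.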

  19. Coding with partially hidden Markov models

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Rissanen, J.

    1995-01-01

    Partially hidden Markov models (PHMM) are introduced. They are a variation of hidden Markov models (HMM) that combines the power of explicit conditioning on past observations with the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general 2-part coding scheme for a given model order but unknown parameters, based on PHMM, is presented. A forward-backward re-estimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters, and a proof of convergence of this re-estimation is given. The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt...

  20. Enhancement of Pre-and Post-Processing Capability of the CUPID code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jongtae; Park, Ik Kyu; Yoon, Hanyoung [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    To simulate heat transfer and fluid flow in a field with a complicated geometry, unstructured meshes are popularly used, and most commercial CFD (Computational Fluid Dynamics) solvers are based on unstructured-mesh technology. An advantage of using unstructured meshes for a field simulation is the reduction in man-hours from automatic mesh generation, whereas traditional structured mesh generation requires a huge amount of man-hours to discretize a complex geometry. Automatically generated unstructured meshes were initially limited to regular cell elements such as tetrahedra, pyramids, prisms, or hexahedra. The multi-dimensional multi-phase flow solver CUPID has been developed in the context of an unstructured-mesh finite volume method (FVM). Its numerical formulation and programming structure are independent of the number of faces surrounding the computational cells, so it can be easily extended to polyhedral unstructured meshes. In this study, new tools for enhancing the pre- and post-processing capabilities of CUPID are proposed, based on the open-source CFD toolbox OpenFOAM. The goal of this study is to extend the applicability of the CUPID code by improving the mesh and solution treatment of the code.

  1. Movable geometry and eigenvalue search capability in the MC21 Monte Carlo code

    International Nuclear Information System (INIS)

    Gill, D. F.; Nease, B. R.; Griesheimer, D. P.

    2013-01-01

    A robust and flexible movable-geometry implementation in the Monte Carlo code MC21 is described, along with a search algorithm that can be used in conjunction with the movable-geometry capability to perform eigenvalue searches based on the position of some geometric component. The natural use of the combined movement and search capability is searching to critical through variation of control rod (or control drum) position. The movable-geometry discussion provides the mathematical framework for moving surfaces in the MC21 combinatorial solid geometry description. The interface between the movable-geometry system and the user is also described, particularly the ability to create a hierarchy of movable groups. Combined with the hierarchical geometry description in MC21, the movable-group framework provides a very powerful system for inline geometry modification. The eigenvalue search algorithm implemented in MC21 is also described. The foundation of this algorithm is a regula falsi search, though several considerations are made in an effort to increase the efficiency of the algorithm for use with Monte Carlo. Specifically, criteria are developed to determine after each batch whether the Monte Carlo calculation should be continued, the search iteration can be rejected, or the search iteration has converged. These criteria seek to minimize the amount of time spent per iteration. Results for the regula falsi method are shown, illustrating that the method as implemented is indeed convergent and that the optimizations made ultimately reduce the total computational expense. (authors)

  2. Movable geometry and eigenvalue search capability in the MC21 Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Gill, D. F.; Nease, B. R.; Griesheimer, D. P. [Bettis Atomic Power Laboratory, PO Box 79, West Mifflin, PA 15122 (United States)

    2013-07-01

    A robust and flexible movable-geometry implementation in the Monte Carlo code MC21 is described, along with a search algorithm that can be used in conjunction with the movable-geometry capability to perform eigenvalue searches based on the position of some geometric component. The natural use of the combined movement and search capability is searching to critical through variation of control rod (or control drum) position. The movable-geometry discussion provides the mathematical framework for moving surfaces in the MC21 combinatorial solid geometry description. The interface between the movable-geometry system and the user is also described, particularly the ability to create a hierarchy of movable groups. Combined with the hierarchical geometry description in MC21, the movable-group framework provides a very powerful system for inline geometry modification. The eigenvalue search algorithm implemented in MC21 is also described. The foundation of this algorithm is a regula falsi search, though several considerations are made in an effort to increase the efficiency of the algorithm for use with Monte Carlo. Specifically, criteria are developed to determine after each batch whether the Monte Carlo calculation should be continued, the search iteration can be rejected, or the search iteration has converged. These criteria seek to minimize the amount of time spent per iteration. Results for the regula falsi method are shown, illustrating that the method as implemented is indeed convergent and that the optimizations made ultimately reduce the total computational expense. (authors)
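The regula falsi search at the heart of the algorithm can be sketched in a few lines. The mock reactivity function and its coefficients below are invented stand-ins, not anything from MC21; in the real code each "function evaluation" is a Monte Carlo batch estimate of k_eff - 1, with the per-batch continue/reject/converge criteria the abstract describes layered on top.

```python
def regula_falsi(f, a, b, tol=1e-6, max_iter=50):
    """Find x with f(x) = 0 by the method of false position: keep a
    bracketing interval and replace one endpoint with the secant-line
    root each iteration."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    for _ in range(max_iter):
        x = b - fb * (b - a) / (fb - fa)   # secant through the bracket
        fx = f(x)
        if abs(fx) < tol:
            return x
        if fa * fx < 0:                    # root lies in [a, x]
            b, fb = x, fx
        else:                              # root lies in [x, b]
            a, fa = x, fx
    return x

# hypothetical excess reactivity vs. control-rod position z (cm)
k_excess = lambda z: 0.04 - 0.0003 * z
z_crit = regula_falsi(k_excess, 0.0, 200.0)
print(round(z_crit, 3))   # 133.333, where k_eff = 1
```

With noisy Monte Carlo estimates the extra machinery matters: an iteration may be rejected when its statistical uncertainty still straddles zero, which is exactly what the batch-wise criteria in the paper decide.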

  3. Random geometry capability in RMC code for explicit analysis of polytype particle/pebble and applications to HTR-10 benchmark

    International Nuclear Information System (INIS)

    Liu, Shichang; Li, Zeguang; Wang, Kan; Cheng, Quan; She, Ding

    2018-01-01

    Highlights: •A new random geometry was developed in RMC for mixed and polytype particles/pebbles. •This capability was applied to full-core calculations of the HTR-10 benchmark. •Reactivity, temperature coefficient and control rod worth of HTR-10 were compared. •This method can explicitly model different packing fractions of different pebbles. •A Monte Carlo code with this method can simulate polytype particle/pebble type reactors. -- Abstract: With the increasing demands of high-fidelity neutronics analysis and the development of computer technology, the Monte Carlo method is becoming more and more attractive for accurate simulation of the pebble-bed High Temperature gas-cooled Reactor (HTR), owing to its advantages of flexible geometry modeling and the use of continuous-energy nuclear cross sections. Traditional Monte Carlo codes can treat the double-heterogeneous geometry of the pebble bed by explicit geometry description. However, packing methods such as Random Sequential Addition (RSA) can only produce a sphere packing up to a 38% volume packing fraction, while the Discrete Element Method (DEM) is troublesome and time consuming. Moreover, traditional Monte Carlo codes are difficult and inconvenient to use for simulating mixed and polytype particles or pebbles. A new random geometry method was developed in the Monte Carlo code RMC to simulate particle transport in polytype particle/pebble double-heterogeneous geometry systems. This method was verified by several test cases and applied to full-core calculations of the HTR-10 benchmark. The reactivity, temperature coefficient and control rod worth of HTR-10 were compared for the full core and the initial core in helium and air atmospheres, respectively, and the results agree well with the benchmark results and experimental results. This work provides an efficient tool for the innovative design of pebble-bed and prismatic HTRs and of molten salt reactors with polytype particles or pebbles using the Monte Carlo method.
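The RSA jamming limit mentioned in the abstract is easy to demonstrate. The sketch below is a generic Random Sequential Addition packer with illustrative box and sphere sizes, not RMC's implementation: candidate centers are drawn uniformly and rejected if they overlap any accepted sphere, and the process stalls well short of the ~61% fraction of a real pebble bed.

```python
import random

def rsa_pack(box=10.0, r=0.5, attempts=5000, seed=1):
    """Random Sequential Addition of equal spheres in a cubic box.
    Returns the accepted centers and the achieved volume packing
    fraction, which for RSA saturates around 0.38 at best."""
    rng = random.Random(seed)
    centers = []
    for _ in range(attempts):
        p = [rng.uniform(r, box - r) for _ in range(3)]   # stay inside box
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) >= (2 * r) ** 2
               for q in centers):
            centers.append(p)          # no overlap: accept
        # else: rejected, never reconsidered (the hallmark of RSA)
    vol = len(centers) * (4.0 / 3.0) * 3.141592653589793 * r ** 3
    return centers, vol / box ** 3

centers, phi = rsa_pack()
print(len(centers), round(phi, 3))     # packing fraction well below 0.38
```

Because rejected spheres are never revisited, no amount of extra attempts pushes RSA past its jamming limit, which is why DEM or the paper's new random-geometry treatment is needed for realistic pebble beds.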

  4. Genetic coding and gene expression - new Quadruplet genetic coding model

    Science.gov (United States)

    Shankar Singh, Rama

    2012-07-01

    The successful completion of the Human Genome Project has opened the door not only to personalized medicine and cures for genetic diseases; it may also answer the complex and difficult question of the origin of life, and it may make the 21st century a century of the biological sciences as well. According to the central dogma of biology, genetic codons, in conjunction with tRNA, play a key role in translating the RNA bases into the sequence of amino acids that becomes a synthesized protein. This is the most critical step in synthesizing the right protein needed for personalized medicine and for curing genetic diseases. So far, only triplet codons, involving three bases of RNA transcribed from DNA bases, have been used. Since this approach has several inconsistencies and limitations, even the promise of personalized medicine has not been realized. The new Quadruplet genetic coding model proposed and developed here involves all four RNA bases, which in conjunction with tRNA will synthesize the right protein. The transcription and translation process will be the same, but the Quadruplet codons will help overcome most of the inconsistencies and limitations of the triplet codes. Details of this new Quadruplet genetic coding model and its potential applications, including its relevance to the origin of life, will be presented.

  5. Capabilities and accuracy of energy modelling software

    CSIR Research Space (South Africa)

    Osburn, L

    2010-11-01

    Full Text Available Energy modelling can be used in a number of different ways to fulfill different needs, including certification within building regulations or green building rating tools. Energy modelling can also be used in order to try and predict what the energy...

  6. Frameworks for Assessing the Quality of Modeling and Simulation Capabilities

    Science.gov (United States)

    Rider, W. J.

    2012-12-01

    The importance of assuring quality in modeling and simulation has spawned several frameworks for structuring the examination of quality. The format and content of these frameworks provide an emphasis, completeness and flow to assessment activities. I will examine four frameworks that have been developed and describe how they can be improved and applied to a broader set of high-consequence applications. Perhaps the first of these frameworks was CSAU [Boyack] (code scaling, applicability and uncertainty), used for nuclear reactor safety and endorsed by the United States Nuclear Regulatory Commission (USNRC). This framework was shaped by nuclear safety practice and the practical structure needed after the Three Mile Island accident. It incorporated the dominant experimental program, the dominant analysis approach, and concerns about the quality of modeling. The USNRC gave it the force of law, which made the nuclear industry take it seriously. After the cessation of nuclear weapons testing, the United States began a program of examining the reliability of these weapons without testing. This program utilizes science including theory, modeling, simulation and experimentation to replace underground testing. The emphasis on modeling and simulation necessitated attention to the quality of these simulations. Sandia developed the PCMM (predictive capability maturity model) to structure this attention [Oberkampf]. PCMM divides simulation into six core activities to be examined and graded relative to the needs of the modeling activity. NASA [NASA] has built yet another framework in response to the tragedy of the space shuttle accidents. Finally, Ben-Haim and Hemez focus upon modeling robustness and predictive fidelity in another approach. These frameworks are similar, and applied in a similar fashion. The adoption of these frameworks at Sandia and NASA has been slow and arduous because the force of law has not assisted acceptance. All existing frameworks are

  7. Transmutation Fuel Performance Code Thermal Model Verification

    Energy Technology Data Exchange (ETDEWEB)

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    The FRAPCON fuel performance code is being modified to model the performance of the nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the verification of the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model's temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the methodology of the verification, the code input, and the calculation results.
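A flavor of this kind of thermal-model verification, checking a code's temperature calculation against a closed-form solution, can be given with the standard analytic result for a solid fuel pellet with uniform heat generation and constant conductivity. The numbers below are illustrative, not FRAPCON data.

```python
import math

def pellet_temperature(r, q_lin, k, R, T_surf):
    """Steady-state temperature in a solid cylindrical fuel pellet with
    uniform volumetric heating and constant conductivity k:
        T(r) = T_s + (q' / (4*pi*k)) * (1 - (r/R)**2)
    where q' is the linear heat rate (W/m) and T_s the pellet surface
    temperature. Analytic solutions like this are the usual yardstick
    for verifying a code's temperature calculation."""
    return T_surf + q_lin / (4.0 * math.pi * k) * (1.0 - (r / R) ** 2)

# illustrative values: q' = 20 kW/m, k = 3 W/m-K, R = 4.1 mm, T_s = 700 K
T_cl = pellet_temperature(0.0, 20e3, 3.0, 0.0041, 700.0)
print(round(T_cl, 1))   # centerline temperature, about 1230.5 K
```

A code result that reproduces this parabolic profile within numerical tolerance passes the simplest verification case; the report's comparison against ABAQUS plays the analogous role for more general configurations.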

  8. RELAP5/MOD3 code coupling model

    International Nuclear Information System (INIS)

    Martin, R.P.; Johnsen, G.W.

    1994-01-01

    A new capability has been incorporated into RELAP5/MOD3 that enables the coupling of RELAP5/MOD3 to other computer codes. The new capability has been designed to support analysis of the new advanced reactor concepts. Its user features rely solely on new RELAP5 "styled" input and the Parallel Virtual Machine (PVM) software, which facilitates process management and distributed communication for multiprocess problems. RELAP5/MOD3 manages the input processing, communication instruction, process synchronization, and its own send and receive data processing. The flexible capability requires that an explicit coupling be established, which updates boundary conditions at discrete time intervals. Two test cases are presented that demonstrate the functionality and applicability of this capability and the issues involving its use
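The explicit coupling scheme, in which each code advances independently and boundary conditions are exchanged only at discrete intervals, can be sketched generically. The Tank class and its relaxation law below are invented stand-ins for the coupled codes, and PVM send/receive is reduced to plain function calls.

```python
def run_coupled(code_a, code_b, t_end, dt_couple):
    """Explicit (loose) coupling: each code advances over a coupling
    interval while seeing the other's boundary state frozen, then the
    boundary conditions are exchanged (the PVM send/receive step)."""
    t = 0.0
    bc_a, bc_b = code_a.boundary(), code_b.boundary()
    while t < t_end:
        code_a.advance(dt_couple, bc_b)
        code_b.advance(dt_couple, bc_a)
        bc_a, bc_b = code_a.boundary(), code_b.boundary()  # exchange
        t += dt_couple

class Tank:
    """Toy 'code': a lumped temperature relaxing toward its neighbour."""
    def __init__(self, T):
        self.T = T
    def boundary(self):
        return self.T
    def advance(self, dt, T_other):
        self.T += 0.5 * dt * (T_other - self.T)

a, b = Tank(600.0), Tank(300.0)
run_coupled(a, b, t_end=10.0, dt_couple=0.5)
print(round(a.T, 1), round(b.T, 1))   # both relax toward the mean, 450
```

The frozen-boundary assumption is what makes the coupling explicit: too large a dt_couple introduces lag error, which is one of the usage issues such a capability must manage.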

  9. MC21 v.6.0 - A continuous-energy Monte Carlo particle transport code with integrated reactor feedback capabilities

    International Nuclear Information System (INIS)

    Griesheimer, D.P.; Gill, D.F.; Nease, B.R.; Carpenter, D.C.; Joo, H.; Millman, D.L.; Sutton, T.M.; Stedry, M.H.; Dobreff, P.S.; Trumbull, T.H.; Caro, E.

    2013-01-01

    MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional models. The code supports neutron and photon transport in fixed source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21 has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code also supports many in-line reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy neutron/nucleus interaction physics over the range from 10^-5 eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering, pair production, and photoelectric interactions. All of the nuclear data required by MC21 is provided by the NDEX system of codes, which extracts and processes data from EPDL-, ENDF-, and ACE-formatted source files. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create spatial cells from first- and second-order surfaces. The system also allows models to be built up as hierarchical collections of previously defined spatial cells, with interior detail provided by grids and template overlays. Results are collected by a generalized tally capability which allows users to edit integral flux and reaction rate information. Results can be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score each

  10. Improvement of MARS code reflood model

    International Nuclear Information System (INIS)

    Hwang, Moonkyu; Chung, Bub-Dong

    2011-01-01

    A specifically designed heat transfer model for the reflood process, which normally occurs at low flow and low pressure, was originally incorporated in the MARS code. The model is essentially identical to that of the RELAP5/MOD3.3 code. The model, however, is known to have underestimated the peak cladding temperature (PCT) with an earlier turn-over. In this study, the original MARS code reflood model is improved. Based on extensive sensitivity studies for both the hydraulic and wall heat transfer models, it is found that the dispersed flow film boiling (DFFB) wall heat transfer is the most influential process determining the PCT, whereas the interfacial drag model most affects the quenching time through the liquid carryover phenomenon. The model proposed by Bajorek and Young is incorporated for the DFFB wall heat transfer. Both space grid and droplet enhancement models are incorporated. Inverted annular film boiling (IAFB) is modeled by using the original PSI model of the code. The flow transition between the DFFB and IAFB is modeled using the TRACE code interpolation. A gas velocity threshold is also added to limit the top-down quenching effect. Assessment calculations are performed for the original and modified MARS codes for the Flecht-Seaset test and RBHT test. Improvements are observed in terms of the PCT and quenching time predictions in the Flecht-Seaset assessment. In the case of the RBHT assessment, the improvement over the original MARS code is found to be marginal. A space grid effect, however, is clearly seen from the modified version of the MARS code. (author)

  11. Facility Modeling Capability Demonstration Summary Report

    International Nuclear Information System (INIS)

    Key, Brian P.; Sadasivan, Pratap; Fallgren, Andrew James; Demuth, Scott Francis; Aleman, Sebastian E.; Almeida, Valmor F. de; Chiswell, Steven R.; Hamm, Larry; Tingey, Joel M.

    2017-01-01

    A joint effort has been initiated by Los Alamos National Laboratory (LANL), Oak Ridge National Laboratory (ORNL), Savannah River National Laboratory (SRNL), and Pacific Northwest National Laboratory (PNNL), sponsored by the National Nuclear Security Administration's (NNSA's) Office of Proliferation Detection, to develop and validate a flexible framework for simulating effluents and emissions from spent fuel reprocessing facilities. These effluents and emissions can be measured by various on-site and/or off-site means, and the inverse problem can then ideally be solved through modeling and simulation to estimate characteristics of facility operation such as the nuclear material production rate. The flexible framework, called the Facility Modeling Toolkit, focused on the forward modeling of PUREX reprocessing facility operating conditions from fuel storage and chopping to effluent and emission measurements.

  12. Facility Modeling Capability Demonstration Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Key, Brian P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sadasivan, Pratap [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Fallgren, Andrew James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Demuth, Scott Francis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Aleman, Sebastian E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); de Almeida, Valmor F. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Chiswell, Steven R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hamm, Larry [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Tingey, Joel M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-02-01

    A joint effort has been initiated by Los Alamos National Laboratory (LANL), Oak Ridge National Laboratory (ORNL), Savannah River National Laboratory (SRNL), and Pacific Northwest National Laboratory (PNNL), sponsored by the National Nuclear Security Administration's (NNSA's) Office of Proliferation Detection, to develop and validate a flexible framework for simulating effluents and emissions from spent fuel reprocessing facilities. These effluents and emissions can be measured by various on-site and/or off-site means, and the inverse problem can then ideally be solved through modeling and simulation to estimate characteristics of facility operation such as the nuclear material production rate. The flexible framework, called the Facility Modeling Toolkit, focused on the forward modeling of PUREX reprocessing facility operating conditions from fuel storage and chopping to effluent and emission measurements.

  13. Computable general equilibrium model fiscal year 2014 capability development report

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Laboratory; Boero, Riccardo [Los Alamos National Laboratory

    2016-05-11

    This report provides an overview of the development of the NISAC CGE economic modeling capability since 2012. This capability enhances NISAC's economic modeling and analysis capabilities, allowing it to answer a broader set of questions than was possible with the previous economic analysis capability. In particular, CGE modeling captures how the different sectors of the economy (for example, households, businesses, and government) interact to allocate resources, and it preserves these interactions when used to estimate the economic impacts of the kinds of events NISAC often analyzes.

  14. A Thermo-Optic Propagation Modeling Capability.

    Energy Technology Data Exchange (ETDEWEB)

    Schrader, Karl; Akau, Ron

    2014-10-01

    A new theoretical basis is derived for tracing optical rays within a finite-element (FE) volume. The ray-trajectory equations are cast into the local element coordinate frame, and the full finite-element interpolation is used to determine the instantaneous index gradient for the ray-path integral equation. The FE methodology (FEM) is also used to interpolate local surface deformations and the surface normal vector for computing the refraction angle when launching rays into the volume, and again when rays exit the medium. The method is implemented in the Matlab(TM) environment and compared to closed-form gradient-index models. A software architecture is also developed for implementing the algorithms in the Zemax(TM) commercial ray-trace application. A controlled thermal environment was constructed in the laboratory, and measured data were collected to validate the structural, thermal, and optical modeling methods.
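The ray-path integration described above can be illustrated in its simplest, paraxial form, where the closed-form gradient-index solution used for comparison is a cosine. The index profile and coefficients below are textbook assumptions, not values from the report; an FE code evaluates the same gradient from the interpolated index field instead of an analytic formula.

```python
import math

def trace_paraxial(y0, alpha, z_end, dz=1e-4):
    """Paraxial ray trace in a gradient-index medium with
    n(y) = n0 * (1 - alpha**2 * y**2 / 2). The ray equation reduces to
    y'' = (1/n) dn/dy ~= -alpha**2 * y, integrated here with
    semi-implicit Euler. A ray launched parallel to the axis follows
    y(z) = y0 * cos(alpha * z) in closed form."""
    y, slope, z = y0, 0.0, 0.0
    while z < z_end:
        slope += -alpha ** 2 * y * dz   # bend toward higher index
        y += slope * dz
        z += dz
    return y

alpha, y0, z = 2.0, 0.1, 1.0
print(trace_paraxial(y0, alpha, z))     # numerical ray height
print(y0 * math.cos(alpha * z))         # closed-form comparison
```

Comparing a numerical trace against such a closed-form gradient-index model is exactly the kind of check the report performs before moving to thermally deformed FE index fields.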

  15. Recent improvements of the TNG statistical model code

    International Nuclear Information System (INIS)

    Shibata, K.; Fu, C.Y.

    1986-08-01

    The applicability of the nuclear model code TNG to cross-section evaluations has been extended. The new TNG is capable of using variable bins for outgoing particle energies. Moreover, three additional quantities can now be calculated: capture gamma-ray spectrum, the precompound mode of the (n,γ) reaction, and fission cross section. In this report, the new features of the code are described together with some sample calculations and a brief explanation of the input data. 15 refs., 6 figs., 2 tabs

  16. Steam condensation modelling in aerosol codes

    International Nuclear Information System (INIS)

    Dunbar, I.H.

    1986-01-01

    The principal subject of this study is the modelling of the condensation of steam onto, and the evaporation of water from, aerosol particles. These processes introduce a new type of term into the equation for the development of the aerosol particle size distribution. This new term faces the code developer with three major problems: the physical modelling of the condensation/evaporation process, the discretisation of the new term, and separate accounting for the masses of the water and of the other components. This study has considered four codes which model the condensation of steam onto and its evaporation from aerosol particles: AEROSYM-M (UK), AEROSOLS/B1 (France), NAUA (Federal Republic of Germany) and CONTAIN (USA). The modelling in the codes has been addressed under three headings: the physical modelling of condensation, the mathematics of the discretisation of the equations, and the methods for modelling the separate behaviour of different chemical components of the aerosol. The codes are least advanced in the area of solute-effect modelling; at present only AEROSOLS/B1 includes the effect. The effect is greater for more concentrated solutions, so codes without it will be more in error (underestimating the total airborne mass) the less condensation they predict. Data are needed on the water vapour pressure above concentrated solutions of the substances of interest (especially CsOH and CsI) if the extent to which aerosols retain water under superheated conditions is to be modelled. 15 refs
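The solute effect discussed above can be made concrete with the textbook Köhler form for the equilibrium saturation ratio over a solution droplet: a Kelvin curvature term raises the equilibrium vapour pressure, while a Raoult solute term lowers it. The coefficients below are purely illustrative, not values for CsOH or CsI.

```python
import math

def kohler_saturation(r, A=1.1e-9, B=1.0e-26):
    """Equilibrium saturation ratio over a solution droplet of radius r (m):
    the Kelvin term exp(A/r) raises it, the Raoult (solute) term B/r**3
    lowers it. Textbook Koehler form with illustrative coefficients,
    valid only while B/r**3 < 1 (dilute-solution regime)."""
    return math.exp(A / r) * (1.0 - B / r ** 3)

# For small, solute-rich droplets the equilibrium sits below S = 1,
# so the aerosol can hold water even in subsaturated (superheated) steam.
for r in (3e-9, 5e-9, 2e-8, 1e-7):
    print(f"r = {r:.0e} m  S_eq = {kohler_saturation(r):.4f}")
```

A code without this term treats every droplet as pure water (S_eq >= 1), which is precisely why it underestimates the airborne water mass under superheated conditions.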

  17. Plaspp: A New X-Ray Postprocessing Capability for ASCI Codes

    International Nuclear Information System (INIS)

    Pollak, Gregory

    2003-01-01

    This report announces the availability of the beta version of a (partly) new code, Plaspp (Plasma Postprocessor). This code postprocesses graphics dumps from at least two ASCI code suites: the Crestone Project and the Shavano Project. The basic structure of the code follows that of TDG, the equivalent postprocessor code for LASNEX. In addition to some new commands, the basic differences between TDG and Plaspp are the following: Plaspp uses a graphics dump instead of the unique TDG dump, it handles the unstructured meshes that the ASCI codes produce, and it can use its own multigroup opacity data. Because of the dump format, this code should be usable by any code that produces Cartesian, cylindrical, or spherical graphics formats. This report details the new commands; the required information to be placed on the dumps; some new commands and edits that are applicable to TDG as well but have not been documented elsewhere; and general information about execution on the open and secure networks.

  18. Economic aspects and models for building codes

    DEFF Research Database (Denmark)

    Bonke, Jens; Pedersen, Dan Ove; Johnsen, Kjeld

    It is the purpose of this bulletin to present an economic model for estimating the consequences of new or changed building codes. The object is to allow comparative analysis in order to improve the basis for decisions in this field. The model is applied in a case study.

  19. MARS CODE MANUAL VOLUME V: Models and Correlations

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Bae, Sung Won; Lee, Seung Wook; Yoon, Churl; Hwang, Moon Kyu; Kim, Kyung Doo; Jeong, Jae Jun

    2010-02-01

    The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art, realistic thermal-hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF code. The integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the equation of state (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This models-and-correlations manual provides a complete and detailed description of the thermal-hydraulic models used in MARS, and should therefore be very useful for code users. The overall structure of the manual is modeled on that of the RELAP5 manual, and as such its layout is very similar to that of RELAP5. This similarity to the RELAP5 input is intentional, as it allows minimal modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible

  20. Modeling report of DYMOND code (DUPIC version)

    International Nuclear Information System (INIS)

    Park, Joo Hwan; Yacout, Abdellatif M.

    2003-04-01

    The DYMOND code employs the ITHINK dynamic modeling platform to assess 100-year dynamic evolution scenarios for postulated global nuclear energy parks. DYMOND was originally developed by ANL (Argonne National Laboratory) to perform fuel cycle analysis of LWR once-through and mixed LWR-FBR plants. Since extensive application of the DYMOND code has been requested, the first version of DYMOND has been modified to accommodate the DUPIC, MSR and RTF fuel cycles. The DYMOND code is composed of three parts (the source-language platform, the input supply, and the output), although these parts are not clearly separated. This report describes all the equations modeled in the modified DYMOND code (called the DYMOND-DUPIC version). It is divided into five parts. Part A covers the reactor-history model, which includes the amounts of requested fuel and spent fuel. Part B describes the fuel cycle model, covering fuel flow from the beginning to the end of the fuel cycle. Part C covers the reprocessing model, which includes the recovery of burned uranium, plutonium, minor actinides and fission products, as well as the amount of spent fuel in storage and disposal. Part D covers the models for other fuel cycles, in particular the thorium fuel cycle for the MSR and RTF reactors. Part E covers the economics model, which provides all cost information, such as uranium mining cost, reactor operating cost, fuel cost, etc

  1. Modeling report of DYMOND code (DUPIC version)

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo Hwan [KAERI, Taejon (Korea, Republic of); Yacout, Abdellatif M [Argonne National Laboratory, Ilinois (United States)

    2003-04-01

    The DYMOND code employs the ITHINK dynamic modeling platform to assess 100-year dynamic evolution scenarios for postulated global nuclear energy parks. DYMOND was originally developed by ANL (Argonne National Laboratory) to perform fuel cycle analyses of LWR once-through and mixed LWR-FBR fleets. Since extensive application of the DYMOND code has been requested, the first version has been modified to accommodate the DUPIC, MSR and RTF fuel cycles. DYMOND is composed of three parts, the source language platform, input supply and output, although these parts are not clearly separated. This report describes all the equations modeled in the modified DYMOND code (referred to as the DYMOND-DUPIC version). It is divided into five parts. Part A covers the reactor history model, which includes the amounts of requested fuel and spent fuel. Part B describes the fuel cycle model, covering fuel flow from the beginning to the end of the cycle. Part C covers the reprocessing model, which includes the recovery of burned uranium, plutonium, minor actinides and fission products, as well as the amounts of spent fuel in storage and disposal. Part D covers other fuel cycles, in particular the thorium fuel cycle for the MSR and RTF reactors. Part E covers the economics model, which provides cost information such as uranium mining cost, reactor operating cost and fuel cost.

  2. Proposing a Capability Perspective on Digital Business Models

    OpenAIRE

    Bärenfänger, Rieke; Otto, Boris

    2015-01-01

    Business models comprehensively describe the functioning of businesses in contemporary economic, technological, and societal environments. This paper focuses on the characteristics of digital business models from the perspective of capability research and develops a capability model for digital businesses. Following the design science research (DSR) methodology, multiple evaluation and design iterations were performed. Contributions to the design process came from IS/IT practice and the resea...

  3. Models and applications of the UEDGE code

    International Nuclear Information System (INIS)

    Rensink, M.E.; Knoll, D.A.; Porter, G.D.; Rognlien, T.D.; Smith, G.R.; Wising, F.

    1996-09-01

    The transport of particles and energy from the core of a tokamak to nearby material surfaces is an important problem for understanding present experiments and for designing reactor-grade devices. A number of fluid transport codes have been developed to model the plasma in the edge and scrape-off layer (SOL) regions. This report will focus on recent model improvements and illustrative results from the UEDGE code. Some geometric and mesh considerations are introduced, followed by a general description of the plasma and neutral fluid models. A few comments on computational issues are given and then two important applications are illustrated concerning benchmarking and the ITER radiative divertor. Finally, we report on some recent work to improve the models in UEDGE by coupling to a Monte Carlo neutrals code and by utilizing an adaptive grid

  4. The nuclear reaction model code MEDICUS

    International Nuclear Information System (INIS)

    Ibishia, A.I.

    2008-01-01

    The new computer code MEDICUS has been used to calculate cross sections of nuclear reactions. The code, implemented in the MATLAB 6.5, Mathematica 5, and Fortran 95 programming languages, can be run in graphical and command-line mode. A Graphical User Interface (GUI) has been built that allows the user to perform calculations and to plot results just by mouse clicking. The MS Windows XP and Red Hat Linux platforms are supported. MEDICUS is a modern nuclear reaction code that can compute charged-particle-, photon-, and neutron-induced reactions in the energy range from thresholds to about 200 MeV. The calculation of the cross sections of nuclear reactions is done in the framework of the Exact Many-Body Nuclear Cluster Model (EMBNCM), Direct Nuclear Reactions, Pre-equilibrium Reactions, the Optical Model, DWBA, and the Exciton Model with Cluster Emission. The code can also be used for the calculation of the nuclear cluster structure of nuclei. We have calculated nuclear cluster models for some nuclei such as 177Lu, 90Y, and 27Al. It has been found that the nucleus 27Al can be represented by two different nuclear cluster models: 25Mg + d and 24Na + 3He. Cross sections as a function of energy for the reaction 27Al(3He,x)22Na, established as a production method of 22Na, are calculated with the MEDICUS code. Theoretical calculations of cross sections are in good agreement with experimental results. Reaction mechanisms are taken into account. (author)

  5. RCS modeling with the TSAR FDTD code

    Energy Technology Data Exchange (ETDEWEB)

    Pennock, S.T.; Ray, S.L.

    1992-03-01

    The TSAR electromagnetic modeling system consists of a family of related codes that have been designed to work together to provide users with a practical way to set up, run, and interpret the results from complex 3-D finite-difference time-domain (FDTD) electromagnetic simulations. The software has been in development at the Lawrence Livermore National Laboratory (LLNL) and at other sites since 1987. Active internal use of the codes began in 1988, with limited external distribution and use beginning in 1991. TSAR was originally developed to analyze high-power microwave and EMP coupling problems. However, the general-purpose nature of the tools has enabled us to use the codes to solve a broader class of electromagnetic applications and has motivated the addition of new features. In particular, a family of near-to-far-field transformation routines has been added to the codes, enabling TSAR to be used for radar cross-section and antenna analysis problems.
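
    The core of any FDTD code is a leapfrog update in which the electric and magnetic fields are advanced from each other's spatial differences. As a hedged one-dimensional illustration of what TSAR does on a full 3-D grid (normalized units, a Courant number of 0.5, and a soft Gaussian source are all assumptions of this sketch):

```python
# Toy 1-D FDTD (Yee) leapfrog, illustrating the E/H curl updates that 3-D
# FDTD codes such as TSAR perform on a full grid. Normalized units with a
# Courant number of 0.5 and a soft Gaussian source are assumptions of this
# sketch, not TSAR's actual scheme.
import math

def fdtd_1d(nx=200, nt=150, src=100):
    ez = [0.0] * nx   # electric field
    hy = [0.0] * nx   # magnetic field, staggered half a cell
    for n in range(nt):
        for i in range(nx - 1):          # H update from curl of E
            hy[i] += 0.5 * (ez[i + 1] - ez[i])
        for i in range(1, nx):           # E update from curl of H
            ez[i] += 0.5 * (hy[i] - hy[i - 1])
        ez[src] += math.exp(-0.5 * ((n - 30) / 10.0) ** 2)  # soft source
    return ez

ez = fdtd_1d()
```

    After 150 steps the injected Gaussian has split into two counter-propagating pulses, which is the qualitative behavior any stable FDTD update must reproduce.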

  6. ITER Dynamic Tritium Inventory Modeling Code

    International Nuclear Information System (INIS)

    Cristescu, Ioana-R.; Doerr, L.; Busigin, A.; Murdoch, D.

    2005-01-01

    A tool for tritium inventory evaluation within each sub-system of the ITER fuel cycle is vital with respect to both the process of licensing ITER and its operation. It is very likely that measurements of total tritium inventories may not be possible for all sub-systems; however, tritium accounting may be achieved by modeling the hold-up within each sub-system and by validating these models in real time against the monitored flows and tritium streams between the systems. To get reliable results, accurate dynamic modeling of the tritium content in each sub-system is necessary. In order to optimize the configuration and operation of the ITER fuel cycle, a dynamic fuel cycle model was developed progressively in the decade up to 2000-2001. As the designs of some sub-systems of the fuel cycle (i.e. vacuum pumping, Neutral Beam Injectors (NBI)) have substantially progressed in the meantime, a new code was developed under a different platform to incorporate these modifications. The new code takes over the models and algorithms for some subsystems, such as the Isotope Separation System (ISS); where simplified models had previously been used, more detailed ones have been introduced, as for the Water Detritiation System (WDS). To reflect all these changes, the new code developed within the EU participating team was named TRIMO (Tritium Inventory Modeling), to emphasize the use of the code for assessing the tritium inventory within ITER

  7. Science version 2: the most recent capabilities of the Framatome 3-D nuclear code package

    International Nuclear Information System (INIS)

    Girieud, P.; Daudin, L.; Garat, C.; Marotte, P.; Tarle, S.

    2001-01-01

    The Framatome nuclear code package SCIENCE developed in the 1990's has been fully operational for nuclear design since 1997. Results obtained using the package demonstrate the high accuracy of its physical models. Nevertheless, since the first release of the SCIENCE package, continuous improvement work has been carried out at Framatome, which leads today to Version 2 of the package. The intensive use of the package by Framatome teams, for example while performing reload calculations and the associated core follow, is a permanent opportunity to point out any trend or scatter in the results, however small. Thus the main objective of the improvements was to take advantage of the progress in computer performance by using more sophisticated calculation schemes, leading to more accurate results. Besides the implementation of more accurate physical models, SCIENCE Version 2 also exploits developments conducted in other fields, mainly for transient calculations using 3-D kinetics or coupling with open-channel core thermal-hydraulics and the plant simulator. These developments allow Framatome to perform accident analyses with advanced methodologies using the SCIENCE package. (author)

  8. State-of-the-art modeling capabilities for Orimulsion modeling

    International Nuclear Information System (INIS)

    Cekirge, H.M.; Palmer, S.L.; Convery, K.; Ileri, L.

    1996-01-01

    The pollution response of Orimulsion was discussed. Orimulsion is an inexpensive alternative to fuel oil No. 6 that can be used to fire large industrial and electric utility boilers. It is an emulsion composed of approximately 70% bitumen (a heavy hydrocarbon) and 30% water to which a surfactant has been added. It has a specific gravity of one or higher, so it is of particular concern in the event of a spill. The physical and chemical processes that would take place in an Orimulsion spill were studied and incorporated into the design of the model ORI SLIK, a fate and transport model for marine environments. The most critical decision in using ORI SLIK is the assignment of the number of parcels into which the initial spill volume will be divided, since underspecification would produce inaccurate results. However, no reliable method for determining this number, other than trial and error, has been found. It was concluded that while many of the complex processes of Orimulsion in marine environments are approximated in currently available models, some areas still need further study. Among these are the effect of current shear, changing particle densities, and differential settling. 24 refs., 1 tab., 5 figs
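
    The parcel-based approach mentioned above can be sketched as a simple Lagrangian random-walk model: each parcel is advected by a mean current and diffused by Gaussian steps, with a crude settling rule standing in for Orimulsion's near-neutral buoyancy. All parameter values and the settling rule below are invented for illustration; this is not the ORI SLIK formulation.

```python
# Lagrangian parcel sketch: advection by a mean current plus Gaussian
# random-walk diffusion, with a crude random settling rule standing in for
# Orimulsion's near-neutral buoyancy. All parameters are invented; this is
# an illustration of the parcel approach, not the ORI SLIK model.
import random

def transport(n_parcels=500, steps=48, dt_h=1.0, seed=7):
    rng = random.Random(seed)
    u_kmh = 0.2    # assumed eastward current, km/h
    kd_km = 0.05   # assumed random-walk step scale, km
    parcels = [{"x": 0.0, "y": 0.0, "settled": False}
               for _ in range(n_parcels)]
    for _ in range(steps):
        for prc in parcels:
            if prc["settled"]:
                continue               # settled parcels stop moving
            prc["x"] += u_kmh * dt_h + rng.gauss(0.0, kd_km)
            prc["y"] += rng.gauss(0.0, kd_km)
            if rng.random() < 0.001:   # small settling chance per step
                prc["settled"] = True
    return parcels

parcels = transport()
```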

  9. Are Hydrostatic Models Still Capable of Simulating Oceanic Fronts

    Science.gov (United States)

    2016-11-10

    Are Hydrostatic Models Still Capable of Simulating Oceanic Fronts? Yalin Fan, Zhitao Yu, and Fengyan Shi (Ocean Dynamics and Prediction Branch, Oceanography Division, Naval Research Laboratory). ... mixed layer and thermocline simulations as well as large-scale circulations. Numerical experiments are conducted using hydrostatic (HY) and

  10. Neural network modeling of a dolphin's sonar discrimination capabilities

    OpenAIRE

    Andersen, Lars Nonboe; René Rasmussen, A; Au, WWL; Nachtigall, PE; Roitblat, H.

    1994-01-01

    The capability of an echo-locating dolphin to discriminate differences in the wall thickness of cylinders was previously modeled by a counterpropagation neural network using only spectral information of the echoes [W. W. L. Au, J. Acoust. Soc. Am. 95, 2728–2735 (1994)]. In this study, both time and frequency information were used to model the dolphin discrimination capabilities. Echoes from the same cylinders were digitized using a broadband simulated dolphin sonar signal with the transducer ...

  11. Modelling of fluid-solid interaction using two stand-alone codes

    CSIR Research Space (South Africa)

    Grobler, Jan H

    2010-01-01

    Full Text Available A method is proposed for the modelling of fluid-solid interaction in applications where fluid forces dominate. Data are transferred between two stand-alone codes: a dedicated computational fluid dynamics (CFD) code capable of free surface modelling...

  12. Hydrogen recycle modeling in transport codes

    International Nuclear Information System (INIS)

    Howe, H.C.

    1979-01-01

    The hydrogen recycling models now used in Tokamak transport codes are reviewed and the method by which realistic recycling models are being added is discussed. Present models use arbitrary recycle coefficients and therefore do not model the actual recycling processes at the wall. A model for the hydrogen concentration in the wall serves two purposes: (1) it allows a better understanding of the density behavior in present gas puff, pellet, and neutral beam heating experiments; and (2) it allows one to extrapolate to long pulse devices such as EBT, ISX-C and reactors where the walls are observed or expected to saturate. Several wall models are presently being studied for inclusion in transport codes
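
    The arbitrary-recycle-coefficient model discussed above amounts to a zero-dimensional particle balance, dn/dt = S + (R - 1)n/tau, whose steady state is n* = S tau / (1 - R). A minimal sketch with illustrative values:

```python
# Zero-dimensional particle balance with a constant recycle coefficient R:
#   dn/dt = S + (R - 1) * n / tau
# Steady state: n* = S * tau / (1 - R). All values are illustrative.

def evolve(n0=0.0, S=1.0e19, tau=0.1, R=0.9, dt=1.0e-3, t_end=5.0):
    n = n0
    t = 0.0
    while t < t_end:
        n += dt * (S + (R - 1.0) * n / tau)   # explicit Euler step
        t += dt
    return n

n_final = evolve()
n_steady = 1.0e19 * 0.1 / (1.0 - 0.9)   # analytic steady state
assert abs(n_final - n_steady) / n_steady < 1e-2
```

    The recycle coefficient stretches the effective confinement time by 1/(1 - R), which is why density behavior is so sensitive to the wall model.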

  13. Evaluating the habitat capability model for Merriam's turkeys

    Science.gov (United States)

    Mark A. Rumble; Stanley H. Anderson

    1995-01-01

    Habitat capability (HABCAP) models for wildlife assist land managers in predicting the consequences of their management decisions. Models must be tested and refined prior to using them in management planning. We tested the predicted patterns of habitat selection of the R2 HABCAP model using observed patterns of habitats selected by radio-marked Merriam’s turkey (

  14. Radiation transport phenomena and modeling - part A: Codes

    International Nuclear Information System (INIS)

    Lorence, L.J.

    1997-01-01

    The need to understand how particle radiation (high-energy photons and electrons) from a variety of sources affects materials and electronics has motivated the development of sophisticated computer codes that describe how radiation with energies from 1.0 keV to 100.0 GeV propagates through matter. Predicting radiation transport is the necessary first step in predicting radiation effects. The radiation transport codes that are described here are general-purpose codes capable of analyzing a variety of radiation environments including those produced by nuclear weapons (x-rays, gamma rays, and neutrons), by sources in space (electrons and ions) and by accelerators (x-rays, gamma rays, and electrons). Applications of these codes include the study of radiation effects on electronics, nuclear medicine (imaging and cancer treatment), and industrial processes (food disinfestation, waste sterilization, manufacturing). The primary focus will be on coupled electron-photon transport codes, with some brief discussion of proton transport. These codes model a radiation cascade in which electrons produce photons and vice versa. This coupling between particles of different types is important for radiation effects. For instance, in an x-ray environment, electrons are produced that drive the response in electronics. In an electron environment, dose due to bremsstrahlung photons can be significant once the source electrons have been stopped
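
    The basic sampling step underlying such transport codes, drawing the distance to the next interaction from an exponential distribution, can be shown in a few lines. The sketch below treats mono-energetic photons in a purely absorbing slab (a deliberate simplification that ignores the electron-photon coupling the abstract emphasizes) and checks the sampled transmission against the analytic exp(-mu*x):

```python
# Monte Carlo photon attenuation: sample the free path from an exponential
# distribution and count photons crossing a slab. Mono-energetic photons and
# pure absorption are deliberate simplifications (real codes track the
# coupled electron-photon cascade). mu and x are illustrative values.
import math
import random

def transmitted_fraction(mu=2.0, x=1.0, n=200_000, seed=1):
    rng = random.Random(seed)
    passed = 0
    for _ in range(n):
        s = -math.log(1.0 - rng.random()) / mu   # distance to interaction
        if s > x:
            passed += 1
    return passed / n

frac = transmitted_fraction()
assert abs(frac - math.exp(-2.0)) < 0.01   # analytic: exp(-mu * x)
```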

  15. Chemistry models in the Victoria code

    International Nuclear Information System (INIS)

    Grimley, A.J. III

    1988-01-01

    The VICTORIA computer code consists of the fission product release and chemistry models for the MELPROG severe accident analysis code. The chemistry models in VICTORIA are used to treat multi-phase interactions in four separate physical regions: fuel grains, gap/open porosity/clad, coolant/aerosols, and structure surfaces. The physical and chemical environment of each region is very different from the others, and different models are required for each. The common thread in the modelling is the use of a chemical equilibrium assumption. The validity of this assumption, along with a description of the various physical constraints applicable to each region, will be discussed. The models that result from the assumptions and constraints will be presented, along with samples of calculations in each region

  16. Qualification and application of nuclear reactor accident analysis code with the capability of internal assessment of uncertainty

    International Nuclear Information System (INIS)

    Borges, Ronaldo Celem

    2001-10-01

    This thesis presents an independent qualification of the CIAU code ('Code with the capability of Internal Assessment of Uncertainty'), which is part of the process of internal uncertainty evaluation with a thermal-hydraulic system code on a realistic basis. This is done by combining the uncertainty methodology UMAE ('Uncertainty Methodology based on Accuracy Extrapolation') with the RELAP5/Mod3.2 code. This allows associating uncertainty band estimates with the results obtained by the realistic calculation of the code, meeting the licensing requirements of safety analysis. The independent qualification is supported by simulations with RELAP5/Mod3.2 of accident condition tests in the LOBI experimental facility and of an event which occurred in the Angra 1 nuclear power plant, by comparison with measured results and by establishing uncertainty bands on the calculated time trends of safety parameters. These bands have indeed enveloped the measured trends. The results of this independent qualification of CIAU make it possible to ascertain the adequate application of a systematic realistic code procedure to analyse accidents with uncertainties incorporated in the results, although there is an evident need to extend the uncertainty database. It has been verified that use of the code with this internal assessment of uncertainty is feasible in the design and licensing stages of a NPP. (author)
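
    The envelope test described (uncertainty bands enveloping the measured trends) reduces to a simple coverage check. A minimal sketch with invented data, not CIAU's actual procedure:

```python
# Coverage check: what fraction of measured points fall inside the
# calculated trend's uncertainty band? All data values are invented.

def envelope_fraction(calc, half_width, measured):
    inside = sum(
        1 for c, w, m in zip(calc, half_width, measured) if abs(m - c) <= w
    )
    return inside / len(measured)

calc = [10.0, 9.0, 8.0, 7.5, 7.0]       # best-estimate time trend
half_w = [1.5, 1.5, 1.0, 1.0, 1.0]      # uncertainty band half-widths
measured = [10.8, 8.2, 8.6, 7.2, 6.4]   # measured trend
coverage = envelope_fraction(calc, half_w, measured)
assert coverage == 1.0   # the band envelopes every measured point
```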

  17. Advanced capabilities for materials modelling with Quantum ESPRESSO

    Science.gov (United States)

    Giannozzi, P.; Andreussi, O.; Brumme, T.; Bunau, O.; Buongiorno Nardelli, M.; Calandra, M.; Car, R.; Cavazzoni, C.; Ceresoli, D.; Cococcioni, M.; Colonna, N.; Carnimeo, I.; Dal Corso, A.; de Gironcoli, S.; Delugas, P.; DiStasio, R. A., Jr.; Ferretti, A.; Floris, A.; Fratesi, G.; Fugallo, G.; Gebauer, R.; Gerstmann, U.; Giustino, F.; Gorni, T.; Jia, J.; Kawamura, M.; Ko, H.-Y.; Kokalj, A.; Küçükbenli, E.; Lazzeri, M.; Marsili, M.; Marzari, N.; Mauri, F.; Nguyen, N. L.; Nguyen, H.-V.; Otero-de-la-Roza, A.; Paulatto, L.; Poncé, S.; Rocca, D.; Sabatini, R.; Santra, B.; Schlipf, M.; Seitsonen, A. P.; Smogunov, A.; Timrov, I.; Thonhauser, T.; Umari, P.; Vast, N.; Wu, X.; Baroni, S.

    2017-11-01

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows one to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.

  18. Advanced capabilities for materials modelling with Quantum ESPRESSO.

    Science.gov (United States)

    Andreussi, Oliviero; Brumme, Thomas; Bunau, Oana; Buongiorno Nardelli, Marco; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Cococcioni, Matteo; Colonna, Nicola; Carnimeo, Ivan; Dal Corso, Andrea; de Gironcoli, Stefano; Delugas, Pietro; DiStasio, Robert; Ferretti, Andrea; Floris, Andrea; Fratesi, Guido; Fugallo, Giorgia; Gebauer, Ralph; Gerstmann, Uwe; Giustino, Feliciano; Gorni, Tommaso; Jia, Junteng; Kawamura, Mitsuaki; Ko, Hsin-Yu; Kokalj, Anton; Küçükbenli, Emine; Lazzeri, Michele; Marsili, Margherita; Marzari, Nicola; Mauri, Francesco; Nguyen, Ngoc Linh; Nguyen, Huy-Viet; Otero-de-la-Roza, Alberto; Paulatto, Lorenzo; Poncé, Samuel; Giannozzi, Paolo; Rocca, Dario; Sabatini, Riccardo; Santra, Biswajit; Schlipf, Martin; Seitsonen, Ari Paavo; Smogunov, Alexander; Timrov, Iurii; Thonhauser, Timo; Umari, Paolo; Vast, Nathalie; Wu, Xifan; Baroni, Stefano

    2017-09-27

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudo-potential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows one to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software. © 2017 IOP Publishing Ltd.

  19. A line code with quick-resynchronization capability and low latency for the optical data links of LHC experiments

    International Nuclear Information System (INIS)

    Deng, B; He, M; Chen, J; Guo, D; Hou, S; Teng, P-K; Li, X; Liu, C; Xiang, A C; Ye, J; Gong, D; Liu, T; You, Y

    2014-01-01

    We propose a line code that has fast resynchronization capability and low latency. Both the encoder and decoder have been implemented in FPGAs. The encoder has also been implemented in an ASIC. The latency of the whole optical link (not including the optical fiber) is estimated to be less than 73.9 ns. In the case of radiation-induced link synchronization loss, the decoder can recover the synchronization in 25 ns. The line code will be used in the ATLAS liquid argon calorimeter Phase-I trigger upgrade and can also be potentially used in other LHC experiments

  20. Constructing a justice model based on Sen's capability approach

    OpenAIRE

    Yüksel, Sevgi; Yuksel, Sevgi

    2008-01-01

    The thesis provides a possible justice model based on Sen's capability approach. For this goal, we first analyze the general structure of a theory of justice, identifying the main variables and issues. Furthermore, based on Sen (2006) and Kolm (1998), we look at 'transcendental' and 'comparative' approaches to justice and concentrate on the sufficiency condition for the comparative approach. Then, taking Rawls' theory of justice as a starting point, we present how Sen's capability approach em...

  1. Model comparisons of the reactive burn model SURF in three ASC codes

    Energy Technology Data Exchange (ETDEWEB)

    Whitley, Von Howard [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Stalsberg, Krista Lynn [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Reichelt, Benjamin Lee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shipley, Sarah Jayne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-12

    A study of the SURF reactive burn model was performed in FLAG, PAGOSA and XRAGE. In this study, three different shock-to-detonation transition experiments were modeled in each code. All three codes produced similar model results for all the experiments modeled and at all resolutions. Buildup-to-detonation time, particle velocities and the resolution dependence of the models were notably similar between the codes. Given the current PBX 9502 equations of state and SURF calibrations, each code is equally capable of predicting the correct detonation time and distance when impacted by a 1D impactor at pressures ranging from 10-16 GPa, as long as the mesh resolution is not too coarse.

  2. Three Models of Education: Rights, Capabilities and Human Capital

    Science.gov (United States)

    Robeyns, Ingrid

    2006-01-01

    This article analyses three normative accounts that can underlie educational policies, with special attention to gender issues. These three models of education are human capital theory, rights discourses and the capability approach. I first outline five different roles that education can play. Then I analyse these three models of educational…

  3. Verification and Validation of Heat Transfer Model of AGREE Code

    Energy Technology Data Exchange (ETDEWEB)

    Tak, N. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Seker, V.; Drzewiecki, T. J.; Downar, T. J. [Department of Nuclear Engineering and Radiological Sciences, Univ. of Michigan, Michigan (United States); Kelly, J. M. [US Nuclear Regulatory Commission, Washington (United States)

    2013-05-15

    The AGREE code was originally developed as a multi-physics simulation code to perform design and safety analyses of Pebble Bed Reactors (PBR). Currently, additional capability for the analysis of Prismatic Modular Reactor (PMR) cores is in progress. The newly implemented fluid model for a PMR core is based on a subchannel approach, which has been widely used in the analysis of light water reactor (LWR) cores. A hexagonal fuel (or graphite) block is discretized into triangular prism nodes having effective conductivities. Then, a meso-scale heat transfer model is applied to the unit cell geometry of a prismatic fuel block. Both unit cell geometries of the multi-hole and pin-in-hole types of prismatic fuel blocks are considered in AGREE. The main objective of this work is to verify and validate the heat transfer model newly implemented for a PMR core in the AGREE code. The data measured in the HENDEL experiment were used for the validation of the heat transfer model for a pin-in-hole fuel block. However, the HENDEL tests were limited to steady-state conditions of pin-in-hole fuel blocks, and no experimental data are available regarding heat transfer in multi-hole fuel blocks. Therefore, numerical benchmarks using conceptual problems are considered to verify the heat transfer model of AGREE for multi-hole fuel blocks as well as transient conditions. The CORONA and GAMMA+ codes were used to compare the numerical results. In this work, the verification and validation study was performed for the heat transfer model of the AGREE code using the HENDEL experiment and numerical benchmarks of selected conceptual problems. The results of the present work show that the heat transfer model of AGREE is accurate and reliable for prismatic fuel blocks. Further validation of AGREE is in progress for a whole-reactor problem using HTTR safety test data such as control rod withdrawal tests and loss-of-forced-convection tests.

  4. Capabilities needed for the next generation of thermo-hydraulic codes for use in real time applications

    Energy Technology Data Exchange (ETDEWEB)

    Arndt, S.A.

    1997-07-01

    The real-time reactor simulation field is currently at a crossroads in terms of the capability to perform real-time analysis using the most sophisticated computer codes. Current generation safety analysis codes are being modified to replace simplified codes that were specifically designed to meet the competing requirements of real-time applications. The next generation of thermo-hydraulic codes will need to include in their specifications the requirement for use in a real-time environment. Use of the codes in real-time applications imposes much stricter requirements on robustness, reliability and repeatability than do design and analysis applications. In addition, the need for code use by a variety of users is a critical issue for real-time users, trainers and emergency planners who currently use real-time simulation, and PRA practitioners who will increasingly use real-time simulation for evaluating PRA success criteria in near real-time to validate PRA results for specific configurations and plant system unavailabilities.

  5. Capabilities needed for the next generation of thermo-hydraulic codes for use in real time applications

    International Nuclear Information System (INIS)

    Arndt, S.A.

    1997-01-01

    The real-time reactor simulation field is currently at a crossroads in terms of the capability to perform real-time analysis using the most sophisticated computer codes. Current generation safety analysis codes are being modified to replace simplified codes that were specifically designed to meet the competing requirements of real-time applications. The next generation of thermo-hydraulic codes will need to include in their specifications the requirement for use in a real-time environment. Use of the codes in real-time applications imposes much stricter requirements on robustness, reliability and repeatability than do design and analysis applications. In addition, the need for code use by a variety of users is a critical issue for real-time users, trainers and emergency planners who currently use real-time simulation, and PRA practitioners who will increasingly use real-time simulation for evaluating PRA success criteria in near real-time to validate PRA results for specific configurations and plant system unavailabilities

  6. Capability of the RELAP5 code to simulate natural circulation behaviour in test facilities

    International Nuclear Information System (INIS)

    Mangal, Amit; Jain, Vikas; Nayak, A.K.

    2011-01-01

    In the present study, the extensively used best-estimate code RELAP5 has been used for simulation of the steady-state, transient and stability behaviour of natural-circulation-based experimental facilities, such as the High-Pressure Natural Circulation Loop (HPNCL) and the Parallel Channel Loop (PCL) installed and operating at BARC. Test data have been generated for a range of pressure, power and subcooling conditions. The computer code RELAP5/MOD3.2 was applied to predict the transient natural circulation characteristics under single-phase and two-phase conditions, the thresholds of flow instability, and the amplitude and frequency of flow oscillations for different operating conditions of the loops. This paper presents the effect of nodalization on the prediction of natural circulation behaviour in the test facilities and a comparison of the experimental data with the code predictions. The errors associated with the predictions are also characterized

  7. Modeling peripheral olfactory coding in Drosophila larvae.

    Directory of Open Access Journals (Sweden)

    Derek J Hoare

    Full Text Available The Drosophila larva possesses just 21 unique and identifiable pairs of olfactory sensory neurons (OSNs), enabling investigation of the contribution of individual OSN classes to the peripheral olfactory code. We combined electrophysiological and computational modeling to explore the nature of the peripheral olfactory code in situ. We recorded firing responses of 19/21 OSNs to a panel of 19 odors. This was achieved by creating larvae expressing just one functioning class of odorant receptor, and hence OSN. Odor response profiles of each OSN class were highly specific and unique. However, many OSN-odor pairs yielded variable responses, some of which were statistically indistinguishable from background activity. We used these electrophysiological data, incorporating both responses and spontaneous firing activity, to develop a Bayesian decoding model of olfactory processing. The model was able to accurately predict odor identity from raw OSN responses; prediction accuracy ranged from 12%-77% (mean for all odors: 45.2%) but was always significantly above chance (5.6%). However, there was no correlation between prediction accuracy for a given odor and the strength of responses of wild-type larvae to the same odor in a behavioral assay. We also used the model to predict the ability of the code to discriminate between pairs of odors. Some of these predictions were supported in a behavioral discrimination (masking) assay but others were not. We conclude that our model of the peripheral code represents basic features of odor detection and discrimination, yielding insights into the information available to higher processing structures in the brain.
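
    A Bayesian decoder of the kind described can be sketched with a Poisson likelihood per OSN class and a flat prior over odors: the decoder picks the odor whose mean firing-rate vector makes the observed spike counts most likely. The rate table below is invented (three odors, three OSN classes) rather than the paper's 19-odor, 21-OSN data set.

```python
# Poisson naive-Bayes decoder with a flat prior: pick the odor whose mean
# OSN firing-rate vector makes the observed spike counts most likely. The
# rate table (3 odors x 3 OSN classes) is invented for illustration.
import math

RATES = {
    "odor_A": [12.0, 2.0, 5.0],
    "odor_B": [3.0, 10.0, 1.0],
    "odor_C": [1.0, 4.0, 9.0],
}

def log_poisson(k, lam):
    """Log of the Poisson pmf P(k | lam)."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def decode(counts):
    def log_lik(odor):
        return sum(log_poisson(k, lam)
                   for k, lam in zip(counts, RATES[odor]))
    return max(RATES, key=log_lik)

assert decode([11, 1, 6]) == "odor_A"
assert decode([2, 11, 2]) == "odor_B"
```

    Working in log-likelihoods keeps the products of small probabilities numerically stable, which matters once realistic numbers of OSN classes are included.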

  8. The discrete-dipole-approximation code ADDA: capabilities and known limitations

    NARCIS (Netherlands)

    Yurkin, M.A.; Hoekstra, A.G.

    2011-01-01

    The open-source code ADDA is described, which implements the discrete dipole approximation (DDA), a method to simulate light scattering by finite 3D objects of arbitrary shape and composition. Besides standard sequential execution, ADDA can run on a multiprocessor distributed-memory system,

  9. Stochastic and simulation models of maritime intercept operations capabilities

    OpenAIRE

    Sato, Hiroyuki

    2005-01-01

    The research formulates and exercises stochastic and simulation models to assess the Maritime Intercept Operations (MIO) capabilities. The models focus on the surveillance operations of the Maritime Patrol Aircraft (MPA). The analysis using the models estimates the probability with which a terrorist vessel (Red) is detected, correctly classified, and escorted for intensive investigation and neutralization before it leaves an area of interest (AOI). The difficulty of obtaining adequate int...
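    The kind of estimate such a simulation model produces can be sketched with a small Monte Carlo routine; the event probabilities and number of patrol opportunities below are illustrative placeholders, not the thesis's calibrated inputs:

    ```python
    import random

    def mio_success_prob(p_detect, p_classify, p_escort,
                         n_patrols=2, trials=100_000, seed=1):
        """Monte Carlo estimate of the probability that a Red vessel is
        detected, correctly classified, and escorted before leaving the AOI,
        given n_patrols independent MPA surveillance opportunities."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            for _ in range(n_patrols):
                # the intercept succeeds only if all three events occur
                if (rng.random() < p_detect and rng.random() < p_classify
                        and rng.random() < p_escort):
                    hits += 1
                    break
        return hits / trials
    ```

    With independent opportunities the analytic answer is 1 - (1 - p_detect*p_classify*p_escort)**n_patrols, which the simulation should approach as trials grow.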

  10. Towards Product Lining Model-Driven Development Code Generators

    OpenAIRE

    Roth, Alexander; Rumpe, Bernhard

    2015-01-01

    A code generator systematically transforms compact models to detailed code. Today, code generation is regarded as an integral part of model-driven development (MDD). Despite its relevance, the development of code generators is an inherently complex task, and common methodologies and architectures are lacking. Additionally, reuse and extension of existing code generators exist only for individual parts. A systematic development and reuse based on a code generator product line is still in its inf...

  11. Rapid installation of numerical models in multiple parent codes

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, R.M.; Wong, M.K.

    1996-10-01

    A set of "model interface guidelines", called MIG, is offered as a means to more rapidly install numerical models (such as stress-strain laws) into any parent code (hydrocode, finite element code, etc.) without having to modify the model subroutines. The model developer (who creates the model package in compliance with the guidelines) specifies the model's input and storage requirements in a standardized way. For portability, database management (such as saving user inputs and field variables) is handled by the parent code. To date, MIG has proved viable in beta installations of several diverse models in vectorized and parallel codes written in different computer languages. A MIG-compliant model can be installed in different codes without modifying the model's subroutines. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, potentially reducing the cost of installing and sharing models.
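    The division of labor that MIG prescribes (the model declares its inputs and storage needs; the parent code owns the database and drives the model) can be sketched as follows. Class and attribute names are hypothetical, and MIG itself targets Fortran-era hydrocodes rather than Python:

    ```python
    class MIGModel:
        """Sketch of a MIG-style contract: the model declares its user
        inputs and per-cell state; the parent code owns storage and
        calls update() each step."""
        name = "elastic"
        inputs = {"youngs_modulus": 200e9}   # user inputs with defaults
        state_vars = ("stress",)             # per-cell storage the host allocates

        def update(self, inputs, state, strain_increment):
            # stress update for a 1-D linear elastic law
            state["stress"] += inputs["youngs_modulus"] * strain_increment
            return state

    class ParentCode:
        """Host code: allocates the storage the model declared and
        handles the database, as the guidelines require."""
        def __init__(self, model, overrides=None):
            self.model = model
            self.inputs = {**model.inputs, **(overrides or {})}
            self.state = {v: 0.0 for v in model.state_vars}

        def step(self, d_strain):
            self.state = self.model.update(self.inputs, self.state, d_strain)
    ```

    Because the host only reads the model's declared requirements, the same `MIGModel` package could be dropped into a different host without touching its subroutines, which is the point of the guidelines.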

  12. Experiences with the Capability Maturity Model in a research environment

    NARCIS (Netherlands)

    Velden, van der M.J.; Vreke, J.; Wal, van der B.; Symons, A.

    1996-01-01

    The project described here was aimed at evaluating the Capability Maturity Model (CMM) in the context of a research organization. Part of the evaluation was a standard CMM assessment. It was found that CMM could be applied to a research organization, although its five maturity levels were considered

  13. Neural network modeling of a dolphin's sonar discrimination capabilities

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; René Rasmussen, A; Au, WWL

    1994-01-01

    The capability of an echo-locating dolphin to discriminate differences in the wall thickness of cylinders was previously modeled by a counterpropagation neural network using only spectral information of the echoes [W. W. L. Au, J. Acoust. Soc. Am. 95, 2728–2735 (1994)]. In this study, both time a...

  14. Space Weather Models at the CCMC And Their Capabilities

    Science.gov (United States)

    Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha

    2007-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides to researchers the use of space science models, even if they are not model owners themselves. The second focus of CCMC activities is on validation and verification of space weather models, and on the transition of appropriate models to space weather forecast centers. As part of the latter activity, the CCMC develops real-time simulation systems that stress models through routine execution. A by-product of these real-time calculations is the ability to derive model products, which may be useful for space weather operators. In this presentation, we will provide an overview of the community-provided, space weather-relevant, model suite, which resides at CCMC. We will discuss current capabilities, and analyze expected future developments of space weather related modeling.

  15. MEMOPS: data modelling and automatic code generation.

    Science.gov (United States)

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.
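    The core idea, generating validated data-access code from an abstract model definition, can be sketched in a few lines. The dictionary-based specification below is a toy stand-in for Memops's UML input, not its actual API:

    ```python
    def generate_class(spec):
        """Emit Python source for a data-access class from an abstract
        attribute specification. Each attribute gets a validating
        property, analogous to the checking code Memops generates."""
        lines = [f"class {spec['name']}:"]
        for attr, typ in spec["attributes"].items():
            lines += [
                f"    @property",
                f"    def {attr}(self):",
                f"        return self._{attr}",
                f"    @{attr}.setter",
                f"    def {attr}(self, value):",
                f"        if not isinstance(value, {typ.__name__}):",
                f"            raise TypeError('{attr} must be {typ.__name__}')",
                f"        self._{attr} = value",
            ]
        return "\n".join(lines)

    # hypothetical NMR-flavored specification
    spec = {"name": "Peak", "attributes": {"height": float, "assignment": str}}
    namespace = {}
    exec(generate_class(spec), namespace)   # compile the generated API
    Peak = namespace["Peak"]
    ```

    Regenerating the class whenever the specification changes keeps the access code and the data model consistent by construction, which is the maintenance benefit the abstract describes.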

  16. Development of the coupled 'system thermal-hydraulics, 3D reactor kinetics, and hot channel' analysis capability of the MARS code

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, J. J.; Chung, B. D.; Lee, W.J

    2005-02-01

    The subchannel analysis capability of the MARS 3D module has been improved. In particular, the turbulent mixing and void drift models for flow mixing phenomena in rod bundles have been assessed using some well-known rod bundle test data. Then, the subchannel analysis feature was combined with the existing coupled 'system thermal-hydraulics (T/H) and 3D reactor kinetics' calculation capability of MARS. Together, these features provide a coupled 'system T/H, 3D reactor kinetics, and hot channel' analysis capability and, thus, allow realistic simulations of hot channel behavior as well as global system T/H behavior. In this report, the MARS code features for the coupled analysis capability are described first. The code modifications relevant to these features are also given. Then, a coupled analysis of a Main Steam Line Break (MSLB) is carried out for demonstration. The results of the coupled calculations are very reasonable and realistic, and show that these methods can be used to reduce the over-conservatism in conventional safety analysis.

  17. Hydrological model in STEALTH 2-D code

    International Nuclear Information System (INIS)

    Hart, R.; Hofmann, R.

    1979-10-01

    Porous media fluid flow logic has been added to the two-dimensional version of the STEALTH explicit finite-difference code. It is a first-order hydrological model based upon Darcy's Law. Anisotropic permeability can be prescribed through x and y directional permeabilities. The fluid flow equations are formulated for either two-dimensional translation symmetry or two-dimensional axial symmetry. The addition of the hydrological model to STEALTH is a first step toward analyzing a physical system's response to the coupling of thermal, mechanical, and fluid flow phenomena
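    A first-order Darcy flow model with directional permeabilities reduces, for slightly compressible flow, to an anisotropic pressure-diffusion equation. A minimal explicit finite-difference step in the spirit of STEALTH's scheme, with the storage coefficients lumped into one illustrative constant, might look like:

    ```python
    def darcy_step(p, kx, ky, dx, dy, dt, phi_mu_c=1.0):
        """One explicit finite-difference step of Darcy pressure diffusion,
        dp/dt = (1/(phi*mu*c)) * (kx * d2p/dx2 + ky * d2p/dy2),
        with anisotropic permeabilities kx, ky.  Boundary values are held
        fixed; p is a list of rows (toy grid, uniform properties).
        """
        ny, nx = len(p), len(p[0])
        new = [row[:] for row in p]
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                d2x = (p[j][i + 1] - 2 * p[j][i] + p[j][i - 1]) / dx**2
                d2y = (p[j + 1][i] - 2 * p[j][i] + p[j - 1][i]) / dy**2
                new[j][i] = p[j][i] + dt / phi_mu_c * (kx * d2x + ky * d2y)
        return new
    ```

    As with any explicit scheme, the time step must respect the diffusion stability limit set by dt, dx, dy and the permeabilities.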

  18. Modeling RIA scenarios with the FRAPTRAN and SCANAIR codes

    International Nuclear Information System (INIS)

    Sagrado Garcia, I. C.; Vallejo, I.; Herranz, L. E.

    2013-01-01

    The need of defining new RIA safety criteria has pointed out the importance of performing a rigorous assessment of the transient codes capabilities. The present work is a comparative exercise devoted to identify the origin of the key deviations found between the predictions of FRAPTRAN-1.4 and SCANAIR-7.1. To do so, the calculations submitted by CIEMAT to the OECD/NEA RIA benchmark have been exploited. This work shows that deviations in clad temperatures mainly come from the treatment of the oxide layer. The systematically higher deformations calculated by FRAPTRAN-1.4 in early failed tests are caused by the different gap closure estimation. Besides, the dissimilarities observed in the FGR predictions are inherent to the different modeling strategies adopted in each code.

  19. Modeling RIA scenarios with the FRAPTRAN and SCANAIR codes

    Energy Technology Data Exchange (ETDEWEB)

    Sagrado Garcia, I. C.; Vallejo, I.; Herranz, L. E.

    2013-07-01

    The need of defining new RIA safety criteria has pointed out the importance of performing a rigorous assessment of the transient codes capabilities. The present work is a comparative exercise devoted to identify the origin of the key deviations found between the predictions of FRAPTRAN-1.4 and SCANAIR-7.1. To do so, the calculations submitted by CIEMAT to the OECD/NEA RIA benchmark have been exploited. This work shows that deviations in clad temperatures mainly come from the treatment of the oxide layer. The systematically higher deformations calculated by FRAPTRAN-1.4 in early failed tests are caused by the different gap closure estimation. Besides, the dissimilarities observed in the FGR predictions are inherent to the different modeling strategies adopted in each code.

  20. The MESORAD dose assessment model: Computer code

    International Nuclear Information System (INIS)

    Ramsdell, J.V.; Athey, G.F.; Bander, T.J.; Scherpelz, R.I.

    1988-10-01

    MESORAD is a dose equivalent model for emergency response applications that is designed to be run on minicomputers. It has been developed by the Pacific Northwest Laboratory for use as part of the Intermediate Dose Assessment System in the US Nuclear Regulatory Commission Operations Center in Washington, DC, and the Emergency Management System in the US Department of Energy Unified Dose Assessment Center in Richland, Washington. This volume describes the MESORAD computer code and contains a listing of the code. The technical basis for MESORAD is described in the first volume of this report (Scherpelz et al. 1986). A third volume of the documentation is planned; it will contain utility programs and input and output files that can be used to check the implementation of MESORAD. 18 figs., 4 tabs

  1. Tokamak Simulation Code modeling of NSTX

    International Nuclear Information System (INIS)

    Jardin, S.C.; Kaye, S.; Menard, J.; Kessel, C.; Glasser, A.H.

    2000-01-01

    The Tokamak Simulation Code [TSC] is widely used for the design of new axisymmetric toroidal experiments. In particular, TSC was used extensively in the design of the National Spherical Torus eXperiment [NSTX]. The authors have now benchmarked TSC with initial NSTX results and find excellent agreement for plasma and vessel currents and magnetic flux loops when the experimental coil currents are used in the simulations. TSC has also been coupled with a ballooning stability code and with DCON to provide stability predictions for NSTX operation. TSC has also been used to model initial CHI experiments where a large poloidal voltage is applied to the NSTX vacuum vessel, causing a force-free current to appear in the plasma. This is a phenomenon that is similar to the plasma halo current that sometimes develops during a plasma disruption

  2. INTEGRATION OF FACILITY MODELING CAPABILITIES FOR NUCLEAR NONPROLIFERATION ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Gorensek, M.; Hamm, L.; Garcia, H.; Burr, T.; Coles, G.; Edmunds, T.; Garrett, A.; Krebs, J.; Kress, R.; Lamberti, V.; Schoenwald, D.; Tzanos, C.; Ward, R.

    2011-07-18

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  3. Integration of facility modeling capabilities for nuclear nonproliferation analysis

    International Nuclear Information System (INIS)

    Garcia, Humberto; Burr, Tom; Coles, Garill A.; Edmunds, Thomas A.; Garrett, Alfred; Gorensek, Maximilian; Hamm, Luther; Krebs, John; Kress, Reid L.; Lamberti, Vincent; Schoenwald, David; Tzanos, Constantine P.; Ward, Richard C.

    2012-01-01

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  4. Integration Of Facility Modeling Capabilities For Nuclear Nonproliferation Analysis

    International Nuclear Information System (INIS)

    Gorensek, M.; Hamm, L.; Garcia, H.; Burr, T.; Coles, G.; Edmunds, T.; Garrett, A.; Krebs, J.; Kress, R.; Lamberti, V.; Schoenwald, D.; Tzanos, C.; Ward, R.

    2011-01-01

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  5. NGNP Data Management and Analysis System Modeling Capabilities

    International Nuclear Information System (INIS)

    Gentillon, Cynthia D.

    2009-01-01

    Projects for the very-high-temperature reactor (VHTR) program provide data in support of Nuclear Regulatory Commission licensing of the VHTR. Fuel and materials to be used in the reactor are tested and characterized to quantify performance in high temperature and high fluence environments. In addition, thermal-hydraulic experiments are conducted to validate codes used to assess reactor safety. The VHTR Program has established the NGNP Data Management and Analysis System (NDMAS) to ensure that VHTR data are (1) qualified for use, (2) stored in a readily accessible electronic form, and (3) analyzed to extract useful results. This document focuses on the third NDMAS objective. It describes capabilities for displaying the data in meaningful ways and identifying relationships among the measured quantities that contribute to their understanding.

  6. NGNP Data Management and Analysis System Modeling Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Cynthia D. Gentillon

    2009-09-01

    Projects for the very-high-temperature reactor (VHTR) program provide data in support of Nuclear Regulatory Commission licensing of the VHTR. Fuel and materials to be used in the reactor are tested and characterized to quantify performance in high temperature and high fluence environments. In addition, thermal-hydraulic experiments are conducted to validate codes used to assess reactor safety. The VHTR Program has established the NGNP Data Management and Analysis System (NDMAS) to ensure that VHTR data are (1) qualified for use, (2) stored in a readily accessible electronic form, and (3) analyzed to extract useful results. This document focuses on the third NDMAS objective. It describes capabilities for displaying the data in meaningful ways and identifying relationships among the measured quantities that contribute to their understanding.

  7. Simulation modeling on the growth of firm's safety management capability

    Institute of Scientific and Technical Information of China (English)

    LIU Tie-zhong; LI Zhi-xiang

    2008-01-01

    Aiming at the deficiencies of existing safety management measures, a simulation model of a firm's safety management capability (FSMC) was established based on organizational learning theory. The system dynamics (SD) method was used, from which the level-and-rate system, the variable equations and the system structure flow diagram were derived. The simulation model was verified in two respects: first, the model's sensitivity to variables was tested with respect to the gross amount of safety investment and the proportion of safety investment; second, the dependencies among variables were checked for the correlated variables of FSMC and organizational learning. The feasibility of the simulation model is verified through these processes.
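    A minimal system-dynamics sketch of such a model, with one level for capability, an inflow from learning driven by safety investment, and an outflow from organizational forgetting, can be written directly. All parameter names and values below are illustrative, not the paper's calibrated equations:

    ```python
    def simulate_fsmc(steps=50, dt=0.25, invest_rate=0.3, decay=0.05, learn=0.4):
        """System-dynamics sketch of safety-management-capability growth:
        a single level (capability, normalized to [0, 1]) fed by a learning
        inflow proportional to safety investment and eroded by forgetting."""
        capability, history = 0.0, []
        for _ in range(steps):
            inflow = learn * invest_rate * (1.0 - capability)   # diminishing returns
            outflow = decay * capability                        # organizational forgetting
            capability += dt * (inflow - outflow)               # Euler-integrated level
            history.append(capability)
        return history
    ```

    Sensitivity testing of the kind the abstract describes amounts to re-running the simulation while sweeping `invest_rate` and comparing the capability trajectories.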

  8. The WARP Code: Modeling High Intensity Ion Beams

    International Nuclear Information System (INIS)

    Grote, David P.; Friedman, Alex; Vay, Jean-Luc; Haber, Irving

    2005-01-01

    The Warp code, developed for heavy-ion driven inertial fusion energy studies, is used to model high intensity ion (and electron) beams. Significant capability has been incorporated in Warp, allowing nearly all sections of an accelerator to be modeled, beginning with the source. Warp has as its core an explicit, three-dimensional, particle-in-cell model. Alongside this is a rich set of tools for describing the applied fields of the accelerator lattice, and embedded conducting surfaces (which are captured at sub-grid resolution). Also incorporated are models with reduced dimensionality: an axisymmetric model and a transverse "slice" model. The code takes advantage of modern programming techniques, including object orientation, parallelism, and scripting (via Python). It is at the forefront in the use of the computational technique of adaptive mesh refinement, which has been particularly successful in the area of diode and injector modeling, both steady-state and time-dependent. In the presentation, some of the major aspects of Warp will be overviewed, especially those that could be useful in modeling ECR sources. Warp has been benchmarked against both theory and experiment. Recent results will be presented showing good agreement of Warp with experimental results from the STS500 injector test stand.

  9. The WARP Code: Modeling High Intensity Ion Beams

    International Nuclear Information System (INIS)

    Grote, D P; Friedman, A; Vay, J L; Haber, I

    2004-01-01

    The Warp code, developed for heavy-ion driven inertial fusion energy studies, is used to model high intensity ion (and electron) beams. Significant capability has been incorporated in Warp, allowing nearly all sections of an accelerator to be modeled, beginning with the source. Warp has as its core an explicit, three-dimensional, particle-in-cell model. Alongside this is a rich set of tools for describing the applied fields of the accelerator lattice, and embedded conducting surfaces (which are captured at sub-grid resolution). Also incorporated are models with reduced dimensionality: an axisymmetric model and a transverse "slice" model. The code takes advantage of modern programming techniques, including object orientation, parallelism, and scripting (via Python). It is at the forefront in the use of the computational technique of adaptive mesh refinement, which has been particularly successful in the area of diode and injector modeling, both steady-state and time-dependent. In the presentation, some of the major aspects of Warp will be overviewed, especially those that could be useful in modeling ECR sources. Warp has been benchmarked against both theory and experiment. Recent results will be presented showing good agreement of Warp with experimental results from the STS500 injector test stand. Additional information can be found on the web page http://hif.lbl.gov/theory/WARP( ) summary.html

  10. Implementation of JAERI's reflood model into TRAC-PF1/MOD1 code

    International Nuclear Information System (INIS)

    Akimoto, Hajime; Ohnuki, Akira; Murao, Yoshio

    1993-02-01

    Selected physical models of the REFLA code, a reflood analysis code developed at JAERI, were implemented into the TRAC-PF1/MOD1 code in order to improve the predictive capability of TRAC-PF1/MOD1 for core thermal-hydraulic behavior during the reflood phase of a PWR LOCA. Through comparisons of the physical models of both codes, (1) the Murao-Iguchi void fraction correlation, (2) the correlation for the drag coefficient acting on drops, (3) the correlation for the wall heat transfer coefficient in the film boiling regime, (4) the quench velocity correlation, and (5) the heat transfer correlations for the dispersed flow regime were selected from the REFLA code for implementation into the TRAC-PF1/MOD1 code. A method for transforming the void fraction correlation into an equivalent interfacial friction model was developed, and the effect of the transformation method on the stability of the solution was discussed. Through assessment calculations using data from the CCTF (Cylindrical Core Test Facility) flat power test, it was confirmed that the predictive capability of the TRAC code for core thermal-hydraulic behavior during reflood can be improved by implementing the selected physical models of the REFLA code. Several user guidelines for the modified TRAC code were proposed based on sensitivity studies on the number of fluid cells in the hydraulic calculation, and on the node number and the effect of axial heat conduction in the heat conduction calculation of the fuel rod. (author)

  11. Capability maturity models in engineering companies: case study analysis

    Directory of Open Access Journals (Sweden)

    Titov Sergei

    2016-01-01

    Full Text Available In the conditions of the current economic downturn, engineering companies in Russia and worldwide are searching for new approaches and frameworks to improve their strategic position, increase the efficiency of internal business processes and enhance the quality of their final products. Capability maturity models are well-known tools used by many foreign engineering companies to assess the productivity of processes, to elaborate programs of business process improvement and to prioritize efforts to optimize overall company performance. The impact of capability maturity model implementation on cost and time is documented and analyzed in the existing research. However, the potential of maturity models as tools of quality management is less well known. This article attempts to analyze the impact of CMM implementation on quality issues. The research is based on a case study methodology and investigates a real-life situation in a Russian engineering company.

  12. The discrete-dipole-approximation code ADDA: Capabilities and known limitations

    International Nuclear Information System (INIS)

    Yurkin, Maxim A.; Hoekstra, Alfons G.

    2011-01-01

    The open-source code ADDA is described, which implements the discrete dipole approximation (DDA), a method to simulate light scattering by finite 3D objects of arbitrary shape and composition. Besides standard sequential execution, ADDA can run on a multiprocessor distributed-memory system, parallelizing a single DDA calculation. Hence the size parameter of the scatterer is in principle limited only by total available memory and computational speed. ADDA is written in C99 and is highly portable. It provides full control over the scattering geometry (particle morphology and orientation, and incident beam) and allows one to calculate a wide variety of integral and angle-resolved scattering quantities (cross sections, the Mueller matrix, etc.). Moreover, ADDA incorporates a range of state-of-the-art DDA improvements, aimed at increasing the accuracy and computational speed of the method. We discuss both physical and computational aspects of the DDA simulations and provide a practical introduction into performing such simulations with the ADDA code. We also present several simulation results, in particular, for a sphere with size parameter 320 (100-wavelength diameter) and refractive index 1.05.
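    At its core the DDA reduces the scattering problem to a large linear system for the dipole polarizations. The toy below keeps that structure but replaces ADDA's dyadic Green's tensor with a scalar kernel and its iterative solvers with Gaussian elimination, so it is a structural sketch only, not a usable light-scattering calculation:

    ```python
    import cmath

    def dda_polarizations(positions, alpha, e_inc, k=2 * 3.141592653589793):
        """Toy scalar DDA: solve (1/alpha) P_i - sum_j G_ij P_j = E_inc(r_i)
        for the dipole polarizations P, with a scalar kernel
        G_ij = exp(i k r)/r (a drastic simplification of the dyadic
        Green's tensor ADDA uses)."""
        n = len(positions)
        a = [[0j] * n for _ in range(n)]              # interaction matrix
        b = [complex(e_inc(p)) for p in positions]    # incident field
        for i in range(n):
            a[i][i] = 1.0 / alpha
            for j in range(n):
                if i != j:
                    r = sum((pi - pj) ** 2
                            for pi, pj in zip(positions[i], positions[j])) ** 0.5
                    a[i][j] = -cmath.exp(1j * k * r) / r
        # solve a @ P = b by Gaussian elimination with partial pivoting
        for col in range(n):
            piv = max(range(col, n), key=lambda row: abs(a[row][col]))
            a[col], a[piv] = a[piv], a[col]
            b[col], b[piv] = b[piv], b[col]
            for row in range(col + 1, n):
                f = a[row][col] / a[col][col]
                b[row] -= f * b[col]
                for c in range(col, n):
                    a[row][c] -= f * a[col][c]
        p_out = [0j] * n
        for i in range(n - 1, -1, -1):
            s = sum(a[i][j] * p_out[j] for j in range(i + 1, n))
            p_out[i] = (b[i] - s) / a[i][i]
        return p_out
    ```

    For an isolated dipole the system collapses to P = alpha * E, which is a useful sanity check; real DDA codes solve the same kind of system with millions of unknowns, which is why ADDA's parallelization across distributed memory matters.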

  13. Modeling of the YALINA booster facility by the Monte Carlo code MONK

    International Nuclear Information System (INIS)

    Talamo, A.; Gohar, Y.; Kondev, F.; Kiyavitskaya, H.; Serafimovich, I.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2007-01-01

    The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity, without any geometrical homogenization, using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model matches the MCNP one exactly. The computational analyses have been extended through the MCB code, an extension of the MCNP code with burnup capability and an additional feature for analyzing source-driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1.

  14. Modeling of fission product release in integral codes

    International Nuclear Information System (INIS)

    Obaidurrahman, K.; Raman, Rupak K.; Gaikwad, Avinash J.

    2014-01-01

    The Great Tohoku earthquake and tsunami that struck the Fukushima-Daiichi nuclear power station on March 11, 2011 have intensified the need for detailed nuclear safety research, and with this objective all streams associated with severe accident phenomenology are being revisited thoroughly. The present paper covers an overview of the state-of-the-art FP release models in use, the important phenomena considered in semi-mechanistic models, and the knowledge gaps in present FP release modeling. The capability of ELSA, the FP release module of the ASTEC integral code, to appropriately predict FP release under several diversified core-degradation conditions will also be demonstrated. The use of semi-mechanistic fission product release models at AERB for source-term estimation will be briefed. (author)

  15. Development of a coupled dynamics code with transport theory capability and application to accelerator driven systems transients

    International Nuclear Information System (INIS)

    Cahalan, J.E.; Ama, T.; Palmiotti, G.; Taiwo, T.A.; Yang, W.S.

    2000-01-01

    The VARIANT-K and DIF3D-K nodal spatial kinetics computer codes have been coupled to the SAS4A and SASSYS-1 liquid metal reactor accident and systems analysis codes. SAS4A and SASSYS-1 have been extended with the addition of heavy liquid metal (Pb and Pb-Bi) thermophysical properties, heat transfer correlations, and fluid dynamics correlations. The coupling methodology and heavy liquid metal modeling additions are described. The new computer code suite has been applied to analysis of neutron source and thermal-hydraulics transients in a model of an accelerator-driven minor actinide burner design proposed in an OECD/NEA/NSC benchmark specification. Modeling assumptions and input data generation procedures are described. Results of transient analyses are reported, with emphasis on comparison of P1 and P3 variational nodal transport theory results with nodal diffusion theory results, and on significance of spatial kinetics effects

  15. Time-dependent anisotropic distributed source capability in the transient 3-D transport code TORT-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    The transient 3-D discrete ordinates transport code TORT-TD has been extended to account for time-dependent anisotropic distributed external sources. The extension aims at the simulation of the pulsed neutron source in the YALINA-Thermal subcritical assembly. Since feedback effects are not relevant in this zero-power configuration, this offers a unique opportunity to validate the time-dependent neutron kinetics of TORT-TD with experimental data. The extensions made in TORT-TD to incorporate a time-dependent anisotropic external source are described. The steady state of the YALINA-Thermal assembly and its response to an artificial square-wave source pulse sequence have been analysed with TORT-TD using pin-wise homogenised cross sections in 18 prompt energy groups with P1 scattering order and 8 delayed neutron groups. The results demonstrate the applicability of TORT-TD to subcritical problems with a time-dependent external source. (authors)
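
    The response of a subcritical system to a square-wave source pulse can be illustrated with a zero-dimensional analogue: point kinetics with one delayed neutron group driven by an external source. All parameter values below are hypothetical, and the model ignores the spatial and angular effects that TORT-TD actually resolves.

```python
def pulse_response(rho=-0.05, beta=0.0065, lam=0.08, Lambda=1.0e-4,
                   S_on=1.0e5, t_on=1.0, t_end=2.0, dt=1.0e-4):
    """Point kinetics with one delayed group and a square-wave
    external source S(t): S = S_on for t < t_on, else 0.
    Hypothetical parameters; explicit Euler integration."""
    n = 0.0   # neutron density (arbitrary units)
    C = 0.0   # delayed neutron precursor concentration
    t = 0.0
    history = []
    while t < t_end:
        S = S_on if t < t_on else 0.0
        dn = ((rho - beta) / Lambda * n + lam * C + S) * dt
        dC = (beta / Lambda * n - lam * C) * dt
        n, C, t = n + dn, C + dC, t + dt
        history.append((t, n))
    return history
```

While the source is on, the flux settles near the subcritical equilibrium (roughly -Lambda*S/rho once precursors saturate); when the source switches off, it collapses to a small delayed-neutron tail.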

  17. Simulation and Modeling Capability for Standard Modular Hydropower Technology

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Kevin M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Smith, Brennan T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Witt, Adam M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); DeNeale, Scott T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bevelhimer, Mark S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Pries, Jason L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Burress, Timothy A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kao, Shih-Chieh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Mobley, Miles H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lee, Kyutae [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Curd, Shelaine L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Tsakiris, Achilleas [Univ. of Tennessee, Knoxville, TN (United States); Mooneyham, Christian [Univ. of Tennessee, Knoxville, TN (United States); Papanicolaou, Thanos [Univ. of Tennessee, Knoxville, TN (United States); Ekici, Kivanc [Univ. of Tennessee, Knoxville, TN (United States); Whisenant, Matthew J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Welch, Tim [US Department of Energy, Washington, DC (United States); Rabon, Daniel [US Department of Energy, Washington, DC (United States)

    2017-08-01

    Grounded in the stakeholder-validated framework established in Oak Ridge National Laboratory’s SMH Exemplary Design Envelope Specification, this report on Simulation and Modeling Capability for Standard Modular Hydropower (SMH) Technology provides insight into the concepts, use cases, needs, gaps, and challenges associated with modeling and simulating SMH technologies. The SMH concept envisions a network of generation, passage, and foundation modules that achieve environmentally compatible, cost-optimized hydropower using standardization and modularity. The development of standardized modeling approaches and simulation techniques for SMH (as described in this report) will pave the way for reliable, cost-effective methods for technology evaluation, optimization, and verification.

  18. Top flooding modeling with MAAP4 code

    International Nuclear Information System (INIS)

    Brunet-Thibault, E.; Marguet, S.

    2006-01-01

    An engineering top flooding model was developed in MAAP4.04d.4, the severe accident code used at EDF, to simulate the thermal-hydraulic phenomena that should take place if emergency core cooling (ECC) water were injected into the hot leg during quenching. In the framework of the ISTC (International Science and Technology Centre), a top flooding test was proposed in the PARAMETER facility (Podolsk, Russia). The MAAP calculation of the PARAMETER top flooding test is presented in this paper. A comparison between top and bottom flooding was made on the bundle test geometry. According to this study, top flooding appears to cool the upper plenum internals quickly and effectively. (author)

  19. Development of a fourth generation predictive capability maturity model.

    Energy Technology Data Exchange (ETDEWEB)

    Hills, Richard Guy; Witkowski, Walter R.; Urbina, Angel; Rider, William J.; Trucano, Timothy Guy

    2013-09-01

    The Predictive Capability Maturity Model (PCMM) is an expert elicitation tool designed to characterize and communicate the completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. The primary application of this tool at Sandia National Laboratories (SNL) has been for physics-based computational simulations in support of nuclear weapons applications. The two main goals of a PCMM evaluation are 1) the communication of computational simulation capability, accurately and transparently, and 2) the development of input for effective planning. As a result of the increasing importance of computational simulation to SNL's mission, the PCMM has evolved through multiple generations with the goal of providing more clarity, rigor, and completeness in its application. This report describes the approach used to develop the fourth generation of the PCMM.

  20. The Aviation System Analysis Capability Airport Capacity and Delay Models

    Science.gov (United States)

    Lee, David A.; Nelson, Caroline; Shapiro, Gerald

    1998-01-01

    The ASAC Airport Capacity Model and the ASAC Airport Delay Model support analyses of technologies addressing airport capacity. NASA's Aviation System Analysis Capability (ASAC) Airport Capacity Model estimates the capacity of an airport as a function of weather, Federal Aviation Administration (FAA) procedures, traffic characteristics, and the level of technology available. Airport capacity is presented as a Pareto frontier of arrivals per hour versus departures per hour. The ASAC Airport Delay Model allows the user to estimate the minutes of arrival delay for an airport, given its (weather dependent) capacity. Historical weather observations and demand patterns are provided by ASAC as inputs to the delay model. The ASAC economic models can translate a reduction in delay minutes into benefit dollars.
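
    The capacity/delay relationship described above can be sketched as a deterministic cumulative queueing calculation: arrivals in excess of the hourly capacity carry over and wait for later hours. The function below is a hypothetical simplification in hourly buckets, not the actual ASAC Airport Delay Model.

```python
def arrival_delay_minutes(hourly_demand, hourly_capacity):
    """Deterministic cumulative-diagram delay estimate: arrivals in
    excess of the (weather-dependent) hourly capacity queue up and
    are served in later hours.  Illustrative simplification only."""
    queue = 0.0
    total_delay = 0.0
    for demand, capacity in zip(hourly_demand, hourly_capacity):
        queue = max(0.0, queue + demand - capacity)
        total_delay += queue * 60.0  # each queued aircraft waits one more hour
    return total_delay
```

With demand [30, 50, 40, 20] against a flat capacity of 40 arrivals/hour, 10 aircraft remain queued for two successive hours, giving 1200 aircraft-minutes of delay.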

  1. 24 CFR 200.925c - Model codes.

    Science.gov (United States)

    2010-04-01

    ... below. (1) Model Building Codes—(i) The BOCA National Building Code, 1993 Edition, The BOCA National..., Administration, for the Building, Plumbing and Mechanical Codes and the references to fire retardant treated wood... number 2 (Chapter 7) of the Building Code, but including the Appendices of the Code. Available from...

  2. Off-Gas Adsorption Model Capabilities and Recommendations

    Energy Technology Data Exchange (ETDEWEB)

    Lyon, Kevin L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Welty, Amy K. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Law, Jack [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ladshaw, Austin [Georgia Inst. of Technology, Atlanta, GA (United States); Yiacoumi, Sotira [Georgia Inst. of Technology, Atlanta, GA (United States); Tsouris, Costas [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-03-01

    Off-gas treatment is required to reduce emissions from aqueous fuel reprocessing. Evaluating the products of innovative gas adsorption research requires increased computational simulation capability to more effectively transition from fundamental research to operational design. Early modeling efforts produced the Off-Gas SeParation and REcoverY (OSPREY) model that, while efficient in terms of computation time, was of limited value for complex systems. However, the computational and programming lessons learned in development of the initial model were used to develop Discontinuous Galerkin OSPREY (DGOSPREY), a more effective model. Initial comparisons between OSPREY and DGOSPREY show that, while OSPREY does reasonably well at capturing the initial breakthrough time, it displays far too much numerical dispersion to accurately capture the real shape of the breakthrough curves. DGOSPREY is a much better tool as it utilizes a more stable set of numerical methods. In addition, DGOSPREY has shown the capability to capture complex, multispecies adsorption behavior, while OSPREY currently only works for a single adsorbing species. This capability makes DGOSPREY ultimately a more practical tool for real-world simulations involving many different gas species. While DGOSPREY has initially performed very well, there is still need for improvement. The current state of DGOSPREY does not include any micro-scale adsorption kinetics and therefore assumes instantaneous adsorption. This is a major source of error in predicting water vapor breakthrough because the kinetics of that adsorption mechanism is particularly slow. However, this deficiency can be remedied by building kinetic kernels into DGOSPREY. Another source of error in DGOSPREY stems from data gaps in single-species isotherms, such as those for Kr and Xe. Since isotherm data for each gas is currently available at only a single temperature, the model is unable to predict adsorption at temperatures outside the set of data currently available.
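
    The numerical dispersion issue noted for OSPREY can be illustrated with a toy 1-D problem: first-order upwind differencing smears a sharp inlet front as it advects down a column, just as a low-order discretization smears a computed breakthrough curve. This sketch is illustrative only and is unrelated to the actual OSPREY or DGOSPREY discretizations.

```python
def advect_upwind(n_cells=100, cfl=0.5, steps=120):
    """Advect a unit inlet concentration through a column with
    first-order upwind differencing.  The exact solution is a sharp
    step; the scheme smears it, mimicking the numerical dispersion
    that distorts computed breakthrough curves."""
    c = [0.0] * n_cells
    for _ in range(steps):
        new = c[:]
        new[0] = c[0] + cfl * (1.0 - c[0])      # inlet held at c = 1
        for i in range(1, n_cells):
            new[i] = c[i] + cfl * (c[i - 1] - c[i])
        c = new
    return c
```

After 120 steps at CFL 0.5 the front has travelled about 60 cells, but instead of a sharp jump the profile is spread over many cells around that position; higher-order (e.g. discontinuous Galerkin) schemes keep the front much sharper.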

  3. Containment Modelling with the ASTEC Code

    International Nuclear Information System (INIS)

    Sadek, Sinisa; Grgic, Davor

    2014-01-01

    ASTEC is an integral computer code jointly developed by Institut de Radioprotection et de Surete Nucleaire (IRSN, France) and Gesellschaft fur Anlagen-und Reaktorsicherheit (GRS, Germany) to assess the nuclear power plant behaviour during a severe accident (SA). It consists of 13 coupled modules which compute various SA phenomena in primary and secondary circuits of the nuclear power plants (NPP), and in the containment. The ASTEC code was used to model and to simulate NPP behaviour during a postulated station blackout accident in the NPP Krsko, a two-loop pressurized water reactor (PWR) plant. The primary system of the plant was modelled with 110 thermal hydraulic (TH) volumes, 113 junctions and 128 heat structures. The secondary system was modelled with 76 TH volumes, 77 junctions and 87 heat structures. The containment was modelled with 10 TH volumes by taking into account containment representation as a set of distinctive compartments, connected with 23 junctions. A total of 79 heat structures were used to simulate outer containment walls and internal steel and concrete structures. Prior to the transient calculation, a steady state analysis was performed. In order to achieve correct plant initial conditions, the operation of regulation systems was modelled. Parameters which were subjected to regulation were the pressurizer pressure, the pressurizer narrow range level and steam mass flow rates in the steam lines. The accident analysis was focused on containment behaviour, however the complete integral NPP analysis was carried out in order to provide correct boundary conditions for the containment calculation. During the accident, the containment integrity was challenged by release of reactor system coolant through degraded coolant pump seals and, later in the accident following release of the corium out of the reactor pressure vessel, by the molten corium concrete interaction and direct containment heating mechanisms. Impact of those processes on relevant

  4. Modelling of LOCA Tests with the BISON Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, Richard L [Idaho National Laboratory; Pastore, Giovanni [Idaho National Laboratory; Novascone, Stephen Rhead [Idaho National Laboratory; Spencer, Benjamin Whiting [Idaho National Laboratory; Hales, Jason Dean [Idaho National Laboratory

    2016-05-01

    BISON is a modern finite-element based, multidimensional nuclear fuel performance code that is under development at Idaho National Laboratory (USA). Recent advances of BISON include the extension of the code to the analysis of LWR fuel rod behaviour during loss-of-coolant accidents (LOCAs). In this work, BISON models for the phenomena relevant to LWR cladding behaviour during LOCAs are described, followed by presentation of code results for the simulation of LOCA tests. Analysed experiments include separate effects tests of cladding ballooning and burst, as well as the Halden IFA-650.2 fuel rod test. Two-dimensional modelling of the experiments is performed, and calculations are compared to available experimental data. Comparisons include cladding burst pressure and temperature in separate effects tests, as well as the evolution of fuel rod inner pressure during ballooning and time to cladding burst. Furthermore, BISON three-dimensional simulations of separate effects tests are performed, which demonstrate the capability to reproduce the effect of azimuthal temperature variations in the cladding. The work has been carried out in the frame of the collaboration between Idaho National Laboratory and Halden Reactor Project, and the IAEA Coordinated Research Project FUMAC.

  5. A graph model for opportunistic network coding

    KAUST Repository

    Sorour, Sameh

    2015-08-12

    © 2015 IEEE. Recent advancements in graph-based analysis and solutions of instantly decodable network coding (IDNC) trigger the interest to extend them to more complicated opportunistic network coding (ONC) scenarios, with limited increase in complexity. In this paper, we design a simple IDNC-like graph model for a specific subclass of ONC, by introducing a more generalized definition of its vertices and the notion of vertex aggregation in order to represent the storage of non-instantly-decodable packets in ONC. Based on this representation, we determine the set of pairwise vertex adjacency conditions that can populate this graph with edges so as to guarantee decodability or aggregation for the vertices of each clique in this graph. We then develop the algorithmic procedures that can be applied on the designed graph model to optimize any performance metric for this ONC subclass. A case study on reducing the completion time shows that the proposed framework improves on the performance of IDNC and gets very close to the optimal performance.
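
    The vertex and adjacency construction described above can be sketched for plain IDNC (the paper's ONC extension adds generalized vertices and vertex aggregation on top of this). Receiver names and packet labels below are illustrative.

```python
from itertools import combinations

def idnc_graph(wants, has):
    """Basic IDNC graph: one vertex per (receiver, wanted packet);
    two vertices are adjacent iff XORing their two packets is
    instantly decodable by both receivers.  Standard construction,
    sketched with illustrative inputs."""
    vertices = [(r, p) for r, missing in wants.items() for p in missing]
    edges = set()
    for (r1, p1), (r2, p2) in combinations(vertices, 2):
        if r1 == r2:
            continue  # a receiver contributes one packet per transmission
        same_packet = p1 == p2
        mutually_decodable = p1 in has[r2] and p2 in has[r1]
        if same_packet or mutually_decodable:
            edges.add(frozenset([(r1, p1), (r2, p2)]))
    return vertices, edges
```

Any clique in this graph corresponds to an XOR combination that every receiver in the clique can decode immediately, which is why clique selection drives the completion-time optimization mentioned in the abstract.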

  6. Development of random geometry capability in RMC code for stochastic media analysis

    International Nuclear Information System (INIS)

    Liu, Shichang; She, Ding; Liang, Jin-gang; Wang, Kan

    2015-01-01

    Highlights: • Monte Carlo method plays an important role in modeling of particle transport in random media. • Three stochastic geometry modeling methods have been developed in RMC. • The stochastic effects of the randomly dispersed fuel particles are analyzed. • Investigation of accuracy and efficiency of three methods has been carried out. • All the methods are effective, and explicit modeling is regarded as the best choice. - Abstract: Simulation of particle transport in random media poses a challenge for traditional deterministic transport methods, due to the significant effects of spatial and energy self-shielding. Monte Carlo method plays an important role in accurate simulation of random media, owing to its flexible geometry modeling and the use of continuous-energy nuclear cross sections. Three stochastic geometry modeling methods including Random Lattice Method, Chord Length Sampling and explicit modeling approach with mesh acceleration technique, have been developed in RMC to simulate the particle transport in the dispersed fuels. The verifications of the accuracy and the investigations of the calculation efficiency have been carried out. The stochastic effects of the randomly dispersed fuel particles are also analyzed. The results show that all three stochastic geometry modeling methods can account for the effects of the random dispersion of fuel particles, and the explicit modeling method can be regarded as the best choice
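
    Of the three methods, Chord Length Sampling can be sketched in a few lines: a neutron track alternates between matrix and fuel-particle segments whose lengths are drawn from exponential chord-length distributions, so particle positions never need to be stored explicitly. The function below is a toy illustration with hypothetical parameters, not the RMC implementation.

```python
import random

def fuel_fraction_cls(mean_chord_matrix, mean_chord_particle,
                      total_length, seed=1):
    """Chord Length Sampling sketch: alternate matrix/particle
    segments with exponentially distributed lengths and return the
    fraction of the track spent inside fuel particles."""
    rng = random.Random(seed)
    in_particle = False
    travelled = in_fuel = 0.0
    while travelled < total_length:
        mean = mean_chord_particle if in_particle else mean_chord_matrix
        seg = min(rng.expovariate(1.0 / mean), total_length - travelled)
        if in_particle:
            in_fuel += seg
        travelled += seg
        in_particle = not in_particle
    return in_fuel / total_length
```

Over a long track the fuel fraction converges to mean_chord_particle / (mean_chord_matrix + mean_chord_particle), which is the consistency check usually applied to a CLS sampler.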

  7. Epitaxial growth of hetero-Ln-MOF hierarchical single crystals for domain- and orientation-controlled multicolor luminescence 3D coding capability

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Mei; Zhu, Yi-Xuan; Wu, Kai; Chen, Ling; Hou, Ya-Jun; Yin, Shao-Yun; Wang, Hai-Ping; Fan, Ya-Nan [MOE Laboratory of Bioinorganic and Synthetic Chemistry, Lehn Institute of Functional Materials, School of Chemistry, Sun Yat-Sen University, Guangzhou (China); Su, Cheng-Yong [MOE Laboratory of Bioinorganic and Synthetic Chemistry, Lehn Institute of Functional Materials, School of Chemistry, Sun Yat-Sen University, Guangzhou (China); State Key Laboratory of Applied Organic Chemistry, Lanzhou University, Lanzhou (China)

    2017-11-13

    Core-shell or striped heteroatomic lanthanide metal-organic framework hierarchical single crystals were obtained by liquid-phase anisotropic epitaxial growth, maintaining identical periodic organization while simultaneously exhibiting spatially segregated structure. Different types of domain and orientation-controlled multicolor photophysical models are presented, which show either visually distinguishable or visible/near infrared (NIR) emissive colors. This provides a new bottom-up strategy toward the design of hierarchical molecular systems, offering high-throughput and multiplexed luminescence color tunability and readability. The unique capability of combining spectroscopic coding with 3D (three-dimensional) microscale spatial coding is established, providing potential applications in anti-counterfeiting, color barcoding, and other types of integrated and miniaturized optoelectronic materials and devices. (copyright 2017 Wiley-VCH Verlag GmbH and Co. KGaA, Weinheim)

  8. Epitaxial Growth of Hetero-Ln-MOF Hierarchical Single Crystals for Domain- and Orientation-Controlled Multicolor Luminescence 3D Coding Capability.

    Science.gov (United States)

    Pan, Mei; Zhu, Yi-Xuan; Wu, Kai; Chen, Ling; Hou, Ya-Jun; Yin, Shao-Yun; Wang, Hai-Ping; Fan, Ya-Nan; Su, Cheng-Yong

    2017-11-13

    Core-shell or striped heteroatomic lanthanide metal-organic framework hierarchical single crystals were obtained by liquid-phase anisotropic epitaxial growth, maintaining identical periodic organization while simultaneously exhibiting spatially segregated structure. Different types of domain and orientation-controlled multicolor photophysical models are presented, which show either visually distinguishable or visible/near infrared (NIR) emissive colors. This provides a new bottom-up strategy toward the design of hierarchical molecular systems, offering high-throughput and multiplexed luminescence color tunability and readability. The unique capability of combining spectroscopic coding with 3D (three-dimensional) microscale spatial coding is established, providing potential applications in anti-counterfeiting, color barcoding, and other types of integrated and miniaturized optoelectronic materials and devices. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Epitaxial growth of hetero-Ln-MOF hierarchical single crystals for domain- and orientation-controlled multicolor luminescence 3D coding capability

    International Nuclear Information System (INIS)

    Pan, Mei; Zhu, Yi-Xuan; Wu, Kai; Chen, Ling; Hou, Ya-Jun; Yin, Shao-Yun; Wang, Hai-Ping; Fan, Ya-Nan; Su, Cheng-Yong

    2017-01-01

    Core-shell or striped heteroatomic lanthanide metal-organic framework hierarchical single crystals were obtained by liquid-phase anisotropic epitaxial growth, maintaining identical periodic organization while simultaneously exhibiting spatially segregated structure. Different types of domain and orientation-controlled multicolor photophysical models are presented, which show either visually distinguishable or visible/near infrared (NIR) emissive colors. This provides a new bottom-up strategy toward the design of hierarchical molecular systems, offering high-throughput and multiplexed luminescence color tunability and readability. The unique capability of combining spectroscopic coding with 3D (three-dimensional) microscale spatial coding is established, providing potential applications in anti-counterfeiting, color barcoding, and other types of integrated and miniaturized optoelectronic materials and devices. (copyright 2017 Wiley-VCH Verlag GmbH and Co. KGaA, Weinheim)

  10. Conservation of concrete structures according to fib Model Code 2010

    NARCIS (Netherlands)

    Matthews, S.; Bigaj-Van Vliet, A.; Ueda, T.

    2013-01-01

    Conservation of concrete structures forms an essential part of the fib Model Code for Concrete Structures 2010 (fib Model Code 2010). In particular, Chapter 9 of fib Model Code 2010 addresses issues concerning conservation strategies and tactics, conservation management, condition surveys, condition

  11. Gap Conductance model Validation in the TASS/SMR-S code using MARS code

    International Nuclear Information System (INIS)

    Ahn, Sang Jun; Yang, Soo Hyung; Chung, Young Jong; Lee, Won Jae

    2010-01-01

    Korea Atomic Energy Research Institute (KAERI) has been developing the TASS/SMR-S (Transient and Setpoint Simulation/Small and Medium Reactor) code, a thermal hydraulic code for the safety analysis of the advanced integral reactor. Appropriate work is required to validate the applicability of the thermal hydraulic models within the code. Among the models, the gap conductance model, which describes the thermal conductance of the gap between fuel and cladding, was validated through comparison with the MARS code. The validation of the gap conductance model was performed by evaluating the variation of the gap temperature and gap width as they changed with the power fraction. In this paper, a brief description of the gap conductance model in the TASS/SMR-S code is presented. In addition, calculated results to validate the gap conductance model are demonstrated by comparison with the results of the MARS code for the test case.
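
    A minimal sketch of a conduction-only gap conductance model is shown below, assuming a Ross-Stoute-like form h = k / (d + g). The jump-distance value and the omission of the radiation and contact terms are simplifying assumptions for illustration; this is not the TASS/SMR-S or MARS model.

```python
def gap_conductance(k_gas, gap_width, jump_distance=1.0e-6):
    """Conduction component of fuel-cladding gap conductance,
    h = k / (d + g) in W/m^2-K.  Radiation and contact terms are
    omitted; the jump distance g is an assumed placeholder."""
    return k_gas / (gap_width + jump_distance)

def gap_temperature_drop(heat_flux, k_gas, gap_width):
    """Temperature difference across the gap (K) for a given rod
    surface heat flux (W/m^2)."""
    return heat_flux / gap_conductance(k_gas, gap_width)
```

For a helium-like fill gas (k of about 0.3 W/m-K at temperature) and a 50 micron gap this gives roughly 5.9 kW/m^2-K, so a 0.5 MW/m^2 heat flux drops about 85 K across the gap; as the gap closes with power, the conductance rises and the drop shrinks, which is the trend such a validation exercise compares between codes.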

  12. Climbing the ladder: capability maturity model integration level 3

    Science.gov (United States)

    Day, Bryce; Lutteroth, Christof

    2011-02-01

    This article details the attempt to form a complete workflow model for an information and communication technologies (ICT) company in order to achieve a capability maturity model integration (CMMI) maturity rating of 3. During this project, business processes across the company's core and auxiliary sectors were documented and extended using modern enterprise modelling tools and The Open Group Architecture Framework (TOGAF) methodology. Different challenges were encountered with regard to process customisation and tool support for enterprise modelling. In particular, there were problems with the reuse of process models, the integration of different project management methodologies and the integration of the Rational Unified Process development process framework that had to be solved. We report on these challenges and the perceived effects of the project on the company. Finally, we point out research directions that could help to improve the situation in the future.

  13. Development of CAP code for nuclear power plant containment: Lumped model

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Soon Joon, E-mail: sjhong90@fnctech.com [FNC Tech. Co. Ltd., Heungdeok 1 ro 13, Giheung-gu, Yongin-si, Gyeonggi-do 446-908 (Korea, Republic of); Choo, Yeon Joon; Hwang, Su Hyun; Lee, Byung Chul [FNC Tech. Co. Ltd., Heungdeok 1 ro 13, Giheung-gu, Yongin-si, Gyeonggi-do 446-908 (Korea, Republic of); Ha, Sang Jun [Central Research Institute, Korea Hydro & Nuclear Power Company, Ltd., 70, 1312-gil, Yuseong-daero, Yuseong-gu, Daejeon 305-343 (Korea, Republic of)

    2015-09-15

    Highlights: • State-of-the-art containment analysis code, CAP, has been developed. • CAP uses 3-field equations, a water-level-oriented upwind scheme, and a local head model. • CAP has a function for linked calculation with a reactor coolant system code. • CAP code assessments showed appropriate prediction capabilities. - Abstract: The CAP (nuclear Containment Analysis Package) code has been developed in the Korean nuclear society for the analysis of nuclear containment thermal hydraulic behaviors, including pressure and temperature trends and hydrogen concentration. The lumped model of the CAP code uses 2-phase, 3-field equations for fluid behaviors, and has appropriate constitutive equations, a 1-dimensional heat conductor model, component models, trip and control models, and special process models. CAP can run in a standalone mode or in a linked mode with a reactor coolant system analysis code. The linked mode enables a more realistic calculation of the containment response and is expected to be applicable to more complicated advanced plant design calculations. CAP code assessments were carried out by gradual approaches: conceptual problems, fundamental phenomena, component and principal phenomena, experimental validation, and finally comparison with other code calculations on the basis of important phenomena identification. The assessments showed appropriate prediction capabilities of CAP.

  14. Development of CAP code for nuclear power plant containment: Lumped model

    International Nuclear Information System (INIS)

    Hong, Soon Joon; Choo, Yeon Joon; Hwang, Su Hyun; Lee, Byung Chul; Ha, Sang Jun

    2015-01-01

    Highlights: • State-of-the-art containment analysis code, CAP, has been developed. • CAP uses 3-field equations, a water-level-oriented upwind scheme, and a local head model. • CAP has a function for linked calculation with a reactor coolant system code. • CAP code assessments showed appropriate prediction capabilities. - Abstract: The CAP (nuclear Containment Analysis Package) code has been developed in the Korean nuclear society for the analysis of nuclear containment thermal hydraulic behaviors, including pressure and temperature trends and hydrogen concentration. The lumped model of the CAP code uses 2-phase, 3-field equations for fluid behaviors, and has appropriate constitutive equations, a 1-dimensional heat conductor model, component models, trip and control models, and special process models. CAP can run in a standalone mode or in a linked mode with a reactor coolant system analysis code. The linked mode enables a more realistic calculation of the containment response and is expected to be applicable to more complicated advanced plant design calculations. CAP code assessments were carried out by gradual approaches: conceptual problems, fundamental phenomena, component and principal phenomena, experimental validation, and finally comparison with other code calculations on the basis of important phenomena identification. The assessments showed appropriate prediction capabilities of CAP.

  15. Capability to model reactor regulating system in RFSP

    Energy Technology Data Exchange (ETDEWEB)

    Chow, H C; Rouben, B; Younis, M H; Jenkins, D A [Atomic Energy of Canada Ltd., Mississauga, ON (Canada); Baudouin, A [Hydro-Quebec, Montreal, PQ (Canada); Thompson, P D [New Brunswick Electric Power Commission, Point Lepreau, NB (Canada). Point Lepreau Generating Station

    1996-12-31

    The Reactor Regulating System package extracted from SMOKIN-G2 was linked within RFSP to the spatial kinetics calculation. The objective is to use this new capability in safety analysis to model the actions of RRS in hypothetical events such as in-core LOCA or moderator drain scenarios. This paper describes the RRS modelling in RFSP and its coupling to the neutronics calculations, verification of the RRS control routine functions, sample applications and comparisons to SMOKIN-G2 results for the same transient simulations. (author). 7 refs., 6 figs.

  16. Nuclear Hybrid Energy System Modeling: RELAP5 Dynamic Coupling Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Piyush Sabharwall; Nolan Anderson; Haihua Zhao; Shannon Bragg-Sitton; George Mesina

    2012-09-01

    The nuclear hybrid energy systems (NHES) research team is currently developing a dynamic simulation of an integrated hybrid energy system. A detailed simulation of proposed NHES architectures will allow initial computational demonstration of a tightly coupled NHES to identify key reactor subsystem requirements, identify candidate reactor technologies for a hybrid system, and identify key challenges to operation of the coupled system. This work will provide a baseline for later coupling of design-specific reactor models through industry collaboration. The modeling capability addressed in this report focuses on the reactor subsystem simulation.

  17. Experimental data bases useful for quantification of model uncertainties in best estimate codes

    International Nuclear Information System (INIS)

    Wilson, G.E.; Katsma, K.R.; Jacobson, J.L.; Boodry, K.S.

    1988-01-01

    A data base is necessary for assessment of thermal hydraulic codes within the context of the new NRC ECCS Rule. Separate effect tests examine particular phenomena that may be used to develop and/or verify models and constitutive relationships in the code. Integral tests are used to demonstrate the capability of codes to model global characteristics and sequence of events for real or hypothetical transients. The nuclear industry has developed a large experimental data base of fundamental nuclear, thermal-hydraulic phenomena for code validation. Given a particular scenario, and recognizing the scenario's important phenomena, selected information from this data base may be used to demonstrate applicability of a particular code to simulate the scenario and to determine code model uncertainties. LBLOCA experimental data bases useful to this objective are identified in this paper. 2 tabs

  18. Assessing reactor physics codes capabilities to simulate fast reactors on the example of the BN-600 benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, Vladimir [Scientific and Engineering Centre for Nuclear and Radiation Safety (SES NRS), Moscow (Russian Federation); Bousquet, Jeremy [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Garching (Germany)

    2016-11-15

    This work aims to assess the capabilities of reactor physics codes (initially validated for thermal reactors) to simulate fast sodium cooled reactors. The BFS-62-3A critical experiment from the BN-600 Hybrid Core Benchmark Analyses was chosen for the investigation. Monte-Carlo codes (KENO from SCALE and SERPENT 2.1.23) and the deterministic diffusion code DYN3D-MG are applied to calculate the neutronic parameters. It was found that the multiplication factor and reactivity effects calculated by KENO and SERPENT using the ENDF/B-VII.0 continuous energy library are in good agreement with each other and with the measured benchmark values. Few-group macroscopic cross sections, required for DYN3D-MG, were prepared by applying different methods implemented in SCALE and SERPENT. The DYN3D-MG results for a simplified benchmark show reasonable agreement with results from Monte-Carlo calculations and measured values. The former results are used to justify the use of DYN3D-MG for coupled deterministic analysis of sodium cooled fast reactors.

  19. User Instructions for the Systems Assessment Capability, Rev. 0, Computer Codes Volume 1: Inventory, Release, and Transport Modules

    International Nuclear Information System (INIS)

    Eslinger, Paul W.; Engel, David W.; Gerhardstein, Lawrence H.; Lopresti, Charles A.; Nichols, William E.; Strenge, Dennis L.

    2001-12-01

    One activity of the Department of Energy's Groundwater/Vadose Zone Integration Project is an assessment of cumulative impacts from Hanford Site wastes on the subsurface environment and the Columbia River. Through the application of a system assessment capability (SAC), decisions for each cleanup and disposal action will be able to take into account the composite effect of other cleanup and disposal actions. The SAC has developed a suite of computer programs to simulate the migration of contaminants (analytes) present on the Hanford Site and to assess the potential impacts of the analytes, including dose to humans, socio-cultural impacts, economic impacts, and ecological impacts. The general approach to handling uncertainty in the SAC computer codes is a Monte Carlo approach. Conceptually, one generates a value for every stochastic parameter in the code (the entire sequence of modules from inventory through transport and impacts) and then executes the simulation, obtaining an output value, or result. This document provides user instructions for the SAC codes that handle inventory tracking, release of contaminants to the environment, and transport of contaminants through the unsaturated zone, saturated zone, and the Columbia River
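
    The Monte Carlo approach described above (draw a value for every stochastic parameter, execute the module chain once per realization, record the result) can be sketched generically. The `model` and `samplers` arguments below are hypothetical stand-ins for the SAC inventory/release/transport sequence, not the actual codes.

```python
import random

def monte_carlo_runs(model, samplers, n_runs, seed=42):
    """One Monte Carlo realization = one draw of every stochastic
    parameter followed by one execution of the whole model chain.
    `samplers` maps parameter names to functions of an RNG."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_runs):
        params = {name: draw(rng) for name, draw in samplers.items()}
        results.append(model(**params))
    return results
```

The collected results then form the empirical output distribution from which impact statistics (percentiles, means) are derived; a fixed seed keeps a realization set reproducible.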

  20. Assessment of the prediction capability of the TRANSURANUS fuel performance code on the basis of power ramp tested LWR fuel rods

    International Nuclear Information System (INIS)

    Pastore, G.; Botazzoli, P.; Di Marcello, V.; Luzzi, L.

    2009-01-01

    The present work is aimed at assessing the prediction capability of the TRANSURANUS code for the performance analysis of LWR fuel rods under power ramp conditions. The analysis refers to all the power ramp tested fuel rods belonging to the Studsvik PWR Super-Ramp and BWR Inter-Ramp Irradiation Projects, and is focused on some integral quantities (i.e., burn-up, fission gas release, cladding creep-down and failure due to pellet cladding interaction) through a systematic comparison between the code predictions and the experimental data. To this end, a suitable setup of the code is established on the basis of previous works. Besides, with reference to literature indications, a sensitivity study is carried out, which considers the 'ITU model' for fission gas burst release and modifications in the treatment of the fuel solid swelling and the cladding stress corrosion cracking. The performed analyses allow to individuate some issues, which could be useful for the future development of the code. Keywords: Light Water Reactors, Fuel Rod Performance, Power Ramps, Fission Gas Burst Release, Fuel Swelling, Pellet Cladding Interaction, Stress Corrosion Cracking

  1. A graph model for opportunistic network coding

    KAUST Repository

    Sorour, Sameh; Aboutoraby, Neda; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2015-01-01

    © 2015 IEEE. Recent advancements in graph-based analysis and solutions of instantly decodable network coding (IDNC) trigger the interest to extend them to more complicated opportunistic network coding (ONC) scenarios, with limited increase

  2. Code Differentiation for Hydrodynamic Model Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Henninger, R.J.; Maudlin, P.J.

    1999-06-27

    Use of a hydrodynamics code for experimental data fitting purposes (an optimization problem) requires information about how a computed result changes when the model parameters change. These so-called sensitivities provide the gradient that determines the search direction for modifying the parameters to find an optimal result. Here, the authors apply code-based automatic differentiation (AD) techniques applied in the forward and adjoint modes to two problems with 12 parameters to obtain these gradients and compare the computational efficiency and accuracy of the various methods. They fit the pressure trace from a one-dimensional flyer-plate experiment and examine the accuracy for a two-dimensional jet-formation problem. For the flyer-plate experiment, the adjoint mode requires similar or less computer time than the forward methods. Additional parameters will not change the adjoint mode run time appreciably, which is a distinct advantage for this method. Obtaining ''accurate'' sensitivities for the j et problem parameters remains problematic.

  3. Spent fuel reprocessing system security engineering capability maturity model

    International Nuclear Information System (INIS)

    Liu Yachun; Zou Shuliang; Yang Xiaohua; Ouyang Zigen; Dai Jianyong

    2011-01-01

    In the field of nuclear safety, traditional work places extra emphasis on risk assessment related to technical skills, production operations, accident consequences through deterministic or probabilistic analysis, and on the basis of which risk management and control are implemented. However, high quality of product does not necessarily mean good safety quality, which implies a predictable degree of uniformity and dependability suited to the specific security needs. In this paper, we make use of the system security engineering - capability maturity model (SSE-CMM) in the field of spent fuel reprocessing, establish a spent fuel reprocessing systems security engineering capability maturity model (SFR-SSE-CMM). The base practices in the model are collected from the materials of the practice of the nuclear safety engineering, which represent the best security implementation activities, reflect the regular and basic work of the implementation of the security engineering in the spent fuel reprocessing plant, the general practices reveal the management, measurement and institutional characteristics of all process activities. The basic principles that should be followed in the course of implementation of safety engineering activities are indicated from 'what' and 'how' aspects. The model provides a standardized framework and evaluation system for the safety engineering of the spent fuel reprocessing system. As a supplement to traditional methods, this new assessment technique with property of repeatability and predictability with respect to cost, procedure and quality control, can make or improve the activities of security engineering to become a serial of mature, measurable and standard activities. (author)

  4. Human, Social, Cultural Behavior (HSCB) Modeling Workshop I: Characterizing the Capability Needs for HSCB Modeling

    Science.gov (United States)

    2008-07-01

    The expectations correspond to different roles individuals perform SocialConstructionis Social constructionism is a school of thought Peter L...HUMAN, SOCIAL , CULTURAL BEHAVIOR (HSCB) MODELING WORKSHOP I: CHARACTERIZING THE CAPABILITY NEEDS FOR HSCB MODELING FINAL REPORT... Social , Cultural Behavior (HSCB) Modeling Workshop I: Characterizing the Capability Needs for HSCB Modeling 5a. CONTRACT NUMBER 5b. GRANT NUMBER 5c

  5. Dataset of coded handwriting features for use in statistical modelling

    Directory of Open Access Journals (Sweden)

    Anna Agius

    2018-02-01

    Full Text Available The data presented here is related to the article titled, “Using handwriting to infer a writer's country of origin for forensic intelligence purposes” (Agius et al., 2017 [1]. This article reports original writer, spatial and construction characteristic data for thirty-seven English Australian11 In this study, English writers were Australians whom had learnt to write in New South Wales (NSW. writers and thirty-seven Vietnamese writers. All of these characteristics were coded and recorded in Microsoft Excel 2013 (version 15.31. The construction characteristics coded were only extracted from seven characters, which were: ‘g’, ‘h’, ‘th’, ‘M’, ‘0’, ‘7’ and ‘9’. The coded format of the writer, spatial and construction characteristics is made available in this Data in Brief in order to allow others to perform statistical analyses and modelling to investigate whether there is a relationship between the handwriting features and the nationality of the writer, and whether the two nationalities can be differentiated. Furthermore, to employ mathematical techniques that are capable of characterising the extracted features from each participant.

  6. ER@CEBAF: Modeling code developments

    Energy Technology Data Exchange (ETDEWEB)

    Meot, F. [Brookhaven National Lab. (BNL), Upton, NY (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Roblin, Y. [Brookhaven National Lab. (BNL), Upton, NY (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)

    2016-04-13

    A proposal for a multiple-pass, high-energy, energy-recovery experiment using CEBAF is under preparation in the frame of a JLab-BNL collaboration. In view of beam dynamics investigations regarding this project, in addition to the existing model in use in Elegant a version of CEBAF is developed in the stepwise ray-tracing code Zgoubi, Beyond the ER experiment, it is also planned to use the latter for the study of polarization transport in the presence of synchrotron radiation, down to Hall D line where a 12 GeV polarized beam can be delivered. This Note briefly reports on the preliminary steps, and preliminary outcomes, based on an Elegant to Zgoubi translation.

  7. Boiling water reactor modeling capabilities of MMS-02

    International Nuclear Information System (INIS)

    May, R.S.; Abdollahian, D.A.; Elias, E.; Shak, D.P.

    1987-01-01

    During the development period for the Modular Modeling System (MMS) library modules, the Boiling Water Reactor (BWR) has been the last major component to be addressed. The BWRX module includes models of the reactor core, reactor vessel, and recirculation loop. A pre-release version was made available for utility use in September 1983. Since that time a number of changes have been incorporated in BWRX to (1) improve running time for most transient events of interest, (2) extend its capability to include certain events of interest in reactor safety analysis, and (3) incorporate a variety of improvements to the module interfaces and user input formats. The purposes of this study were to briefly review the module structure and physical models, to point the differences between the MMS-02 BWRX module and the BWRX version previously available in the TESTREV1 library, to provide guidelines for choosing among the various user options, and to present some representative results

  8. EASEWASTE-life cycle modeling capabilities for waste management technologies

    DEFF Research Database (Denmark)

    Bhander, Gurbakhash Singh; Christensen, Thomas Højlund; Hauschild, Michael Zwicky

    2010-01-01

    Background, Aims and Scope The management of municipal solid waste and the associated environmental impacts are subject of growing attention in industrialized countries. EU has recently strongly emphasized the role of LCA in its waste and resource strategies. The development of sustainable solid...... waste management systems applying a life-cycle perspective requires readily understandable tools for modelling the life cycle impacts of waste management systems. The aim of the paper is to demonstrate the structure, functionalities and LCA modelling capabilities of the PC-based life cycle oriented...... waste management model EASEWASTE, developed at the Technical University of Denmark specifically to meet the needs of the waste system developer with the objective to evaluate the environmental performance of the various elements of existing or proposed solid waste management systems. Materials...

  9. A review of the Melcor Accident Consequence Code System (MACCS): Capabilities and applications

    International Nuclear Information System (INIS)

    Young, M.

    1995-01-01

    MACCS was developed at Sandia National Laboratories (SNL) under U.S. Nuclear Regulatory Commission (NRC) sponsorship to estimate the offsite consequences of potential severe accidents at nuclear power plants (NPPs). MACCS was publicly released in 1990. MACCS was developed to support the NRC's probabilistic safety assessment (PSA) efforts. PSA techniques can provide a measure of the risk of reactor operation. PSAs are generally divided into three levels. Level one efforts identify potential plant damage states that lead to core damage and the associated probabilities, level two models damage progression and containment strength for establishing fission-product release categories, and level three efforts evaluate potential off-site consequences of radiological releases and the probabilities associated with the consequences. MACCS was designed as a tool for level three PSA analysis. MACCS performs probabilistic health and economic consequence assessments of hypothetical accidental releases of radioactive material from NPPs. MACCS includes models for atmospheric dispersion and transport, wet and dry deposition, the probabilistic treatment of meteorology, environmental transfer, countermeasure strategies, dosimetry, health effects, and economic impacts. The computer systems MACCS is designed to run on are the 386/486 PC, VAX/VMS, E3M RISC S/6000, Sun SPARC, and Cray UNICOS. This paper provides an overview of MACCS, reviews some of the applications of MACCS, international collaborations which have involved MACCS, current developmental efforts, and future directions

  10. Improved choked flow model for MARS code

    International Nuclear Information System (INIS)

    Chung, Moon Sun; Lee, Won Jae; Ha, Kwi Seok; Hwang, Moon Kyu

    2002-01-01

    Choked flow calculation is improved by using a new sound speed criterion for bubbly flow that is derived by the characteristic analysis of hyperbolic two-fluid model. This model was based on the notion of surface tension for the interfacial pressure jump terms in the momentum equations. Real eigenvalues obtained as the closed-form solution of characteristic polynomial represent the sound speed in the bubbly flow regime that agrees well with the existing experimental data. The present sound speed shows more reasonable result in the extreme case than the Nguyens did. The present choked flow criterion derived by the present sound speed is employed in the MARS code and assessed by using the Marviken choked flow tests. The assessment results without any adjustment made by some discharge coefficients demonstrate more accurate predictions of choked flow rate in the bubbly flow regime than those of the earlier choked flow calculations. By calculating the Typical PWR (SBLOCA) problem, we make sure that the present model can reproduce the reasonable transients of integral reactor system

  11. PetriCode: A Tool for Template-Based Code Generation from CPN Models

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge

    2014-01-01

    Code generation is an important part of model driven methodologies. In this paper, we present PetriCode, a software tool for generating protocol software from a subclass of Coloured Petri Nets (CPNs). The CPN subclass is comprised of hierarchical CPN models describing a protocol system at different...

  12. The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability

    Science.gov (United States)

    Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.; hide

    2016-01-01

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users.The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.

  13. Systems Security Engineering Capability Maturity Model SSE-CMM Model Description Document

    National Research Council Canada - National Science Library

    1999-01-01

    The Systems Security Engineering Capability Maturity Model (SSE-CMM) describes the essential characteristics of an organization's security engineering process that must exist to ensure good security engineering...

  14. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC): gap analysis for high fidelity and performance assessment code development

    International Nuclear Information System (INIS)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-01-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were are performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  15. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were are performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  16. Fourth and last part in the modelling implementation project of two assimetric cooling systems for ALMOD 3 computer codes

    International Nuclear Information System (INIS)

    Dominguez, L.; Camargo, C.T.M.

    1985-01-01

    A two loop simulation capability was developed to the ALMOD 3W2 code through modelling the steam header line connecting the secondary side of steam generators. A brief description of the model is presented and two test cases are shown. Basic code changes are addressed. (Author) [pt

  17. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR algorithm to solve the problem of the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using the genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.

  18. Progress in the development of a reactivity capability in the SAM-CE system for validating fuel management codes. Interim report

    International Nuclear Information System (INIS)

    Lichtenstein, H.; Steinberg, H.; Troubetzkoy, E.; Cohen, M.O.; Chui, C.

    1978-02-01

    The SAM-CE Monte Carlo system (for three dimensional neutron, gamma ray and electron transport) has been expanded to include a reactivity capability. The implemented code modifications have effected the following improvements: (a) Doppler broadening of ENDF/B-IV based nuclear data (including fission); (b) probability table representation for the unresolved resonance range; (c) utilization of thermal scattering law data for the moderator; (d) free gas model in the absence of thermal scattering law data; (e) generalization of the nuclear element data tape structure to facilitate data management; (f) generalization of data management routines; (g) extension of the SAM-CE Complex Combinatorial Geometry capability to facilitate treatment of hexagonal lattices; (h) simultaneous use of 4 different eigenvalue estimators; (i) estimation of the eigenfunction in user prescribed spatial domains; and (j) variance reduction via stratification of source (position, energy, direction) and absorption (based on a quota sampling technique), as well as optional suppression of absorption. The new coding has undergone extensive testing, both specific (via drivers and idealized data) and integral (via comparison with previous computations). Base data have been examined for internal consistency and checked for reasonableness. A documented TRX-1 benchmark calculation has been performed. Agreement with other calculations, as well as with experiment, has served to validate the reactivity mode of SAM-CE. Further refinement of the cross section data processing component of SAM-CE (i.e., SAM-X) is suggested

  19. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

    Energy Technology Data Exchange (ETDEWEB)

    Carbajo, Juan (Oak Ridge National Laboratory, Oak Ridge, TN); Jeong, Hae-Yong (Korea Atomic Energy Research Institute, Daejeon, Korea); Wigeland, Roald (Idaho National Laboratory, Idaho Falls, ID); Corradini, Michael (University of Wisconsin, Madison, WI); Schmidt, Rodney Cannon; Thomas, Justin (Argonne National Laboratory, Argonne, IL); Wei, Tom (Argonne National Laboratory, Argonne, IL); Sofu, Tanju (Argonne National Laboratory, Argonne, IL); Ludewig, Hans (Brookhaven National Laboratory, Upton, NY); Tobita, Yoshiharu (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Ohshima, Hiroyuki (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Serre, Frederic (Centre d' %C3%94etudes nucl%C3%94eaires de Cadarache %3CU%2B2013%3E CEA, France)

    2011-06-01

    This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions are drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the

  20. 40 CFR 194.23 - Models and computer codes.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  1. A cellular automata model for traffic flow based on kinetics theory, vehicles capabilities and driver reactions

    Science.gov (United States)

    Guzmán, H. A.; Lárraga, M. E.; Alvarez-Icaza, L.; Carvajal, J.

    2018-02-01

    In this paper, a reliable cellular automata model oriented to faithfully reproduce deceleration and acceleration according to realistic reactions of drivers, when vehicles with different deceleration capabilities are considered is presented. The model focuses on describing complex traffic phenomena by coding in its rules the basic mechanisms of drivers behavior, vehicles capabilities and kinetics, while preserving simplicity. In particular, vehiclés kinetics is based on uniform accelerated motion, rather than in impulsive accelerated motion as in most existing CA models. Thus, the proposed model calculates in an analytic way three safe preserving distances to determine the best action a follower vehicle can take under a worst case scenario. Besides, the prediction analysis guarantees that under the proper assumptions, collision between vehicles may not happen at any future time. Simulations results indicate that all interactions of heterogeneous vehicles (i.e., car-truck, truck-car, car-car and truck-truck) are properly reproduced by the model. In addition, the model overcomes one of the major limitations of CA models for traffic modeling: the inability to perform smooth approach to slower or stopped vehicles. Moreover, the model is also capable of reproducing most empirical findings including the backward speed of the downstream front of the traffic jam, and different congested traffic patterns induced by a system with open boundary conditions with an on-ramp. Like most CA models, integer values are used to make the model run faster, which makes the proposed model suitable for real time traffic simulation of large networks.

  2. Noise Residual Learning for Noise Modeling in Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Forchhammer, Søren

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The noise model is one of the inherently difficult challenges in DVC. This paper considers Transform Domain Wyner-Ziv (TDWZ) coding and proposes...

  3. The Community WRF-Hydro Modeling System Version 4 Updates: Merging Toward Capabilities of the National Water Model

    Science.gov (United States)

    McAllister, M.; Gochis, D.; Dugger, A. L.; Karsten, L. R.; McCreight, J. L.; Pan, L.; Rafieeinasab, A.; Read, L. K.; Sampson, K. M.; Yu, W.

    2017-12-01

    The community WRF-Hydro modeling system is publicly available and provides researchers and operational forecasters a flexible and extensible capability for performing multi-scale, multi-physics options for hydrologic modeling that can be run independent or fully-interactive with the WRF atmospheric model. The core WRF-Hydro physics model contains very high-resolution descriptions of terrestrial hydrologic process representations such as land-atmosphere exchanges of energy and moisture, snowpack evolution, infiltration, terrain routing, channel routing, basic reservoir representation and hydrologic data assimilation. Complementing the core physics components of WRF-Hydro are an ecosystem of pre- and post-processing tools that facilitate the preparation of terrain and meteorological input data, an open-source hydrologic model evaluation toolset (Rwrfhydro), hydrologic data assimilation capabilities with DART and advanced model visualization capabilities. The National Center for Atmospheric Research (NCAR), through collaborative support from the National Science Foundation and other funding partners, provides community support for the entire WRF-Hydro system through a variety of mechanisms. This presentation summarizes the enhanced user support capabilities that are being developed for the community WRF-Hydro modeling system. These products and services include a new website, open-source code repositories, documentation and user guides, test cases, online training materials, live, hands-on training sessions, an email list serve, and individual user support via email through a new help desk ticketing system. The WRF-Hydro modeling system and supporting tools which now include re-gridding scripts and model calibration have recently been updated to Version 4 and are merging toward capabilities of the National Water Model.

  4. Systems Modeling to Implement Integrated System Health Management Capability

    Science.gov (United States)

    Figueroa, Jorge F.; Walker, Mark; Morris, Jonathan; Smith, Harvey; Schmalzel, John

    2007-01-01

ISHM capability includes: detection of anomalies, diagnosis of causes of anomalies, prediction of future anomalies, and user interfaces that enable integrated awareness (past, present, and future) by users. This is achieved by focused management of data, information and knowledge (DIaK) that will likely be distributed across networks. Management of DIaK implies storage, sharing (timely availability), maintaining, evolving, and processing. Processing of DIaK encapsulates strategies, methodologies, algorithms, etc. focused on achieving a high ISHM Functional Capability Level (FCL). High FCL means a high degree of success in detecting anomalies, diagnosing causes, predicting future anomalies, and enabling integrated health awareness by the user. A model that enables ISHM capability, and hence DIaK management, is termed the ISHM Model of the System (IMS). We describe aspects of the IMS that focus on processing of DIaK. Strategies, methodologies, and algorithms require proper context. We describe an approach to define and use contexts, implementation in an object-oriented software environment (G2), and validation using actual test data from a methane thruster test program at NASA SSC. Context is linked to the existence of relationships among elements of a system. For example, the context to use a strategy to detect a leak is to identify closed subsystems (e.g. bounded by closed valves and by tanks) that include pressure sensors, and check if the pressure is changing. We call these subsystems Pressurizable Subsystems. If pressure changes are detected, then all members of the closed subsystem become suspected of leakage. In this case, the context is defined by identifying a subsystem that is suitable for applying a strategy. Contexts are defined in many ways. Often, a context is defined by relationships of function (e.g. liquid flow, maintaining pressure, etc.), form (e.g. part of the same component, connected to other components, etc.), or space (e.g. physically close
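The pressurizable-subsystem strategy described in this record (seal a subsystem behind closed valves, then flag every member as a leak suspect if the pressure drifts) can be sketched in a few lines. This is an illustrative sketch only, not the actual G2 implementation; the data layout, the `leak_suspects` helper, and the tolerance value are all hypothetical.

```python
def leak_suspects(subsystem, readings, tol=0.5):
    # A "pressurizable subsystem" is bounded by closed valves; if its
    # pressure drifts while sealed, every member becomes a leak suspect.
    if any(readings[v] == "open" for v in subsystem["valves"]):
        return []  # not sealed: the strategy's context does not apply
    trace = readings[subsystem["sensor"]]
    drift = abs(trace[-1] - trace[0])
    return subsystem["members"] if drift > tol else []

# Hypothetical subsystem bounded by valves V1 and V2, with sensor P1
sub = {"valves": ["V1", "V2"], "sensor": "P1",
       "members": ["V1", "V2", "tank-A", "line-3"]}
data = {"V1": "closed", "V2": "closed", "P1": [101.3, 100.9, 100.2]}
suspects = leak_suspects(sub, data)  # pressure fell while sealed
```

If any bounding valve is open, the context does not apply and no suspects are reported; otherwise a drift beyond the tolerance implicates every member of the sealed subsystem.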

  5. Tardos fingerprinting codes in the combined digit model

    NARCIS (Netherlands)

    Skoric, B.; Katzenbeisser, S.; Schaathun, H.G.; Celik, M.U.

    2009-01-01

    We introduce a new attack model for collusion-secure codes, called the combined digit model, which represents signal processing attacks against the underlying watermarking level better than existing models. In this paper, we analyze the performance of two variants of the Tardos code and show that

  6. Turbomachinery Heat Transfer and Loss Modeling for 3D Navier-Stokes Codes

    Science.gov (United States)

    DeWitt, Kenneth; Ameri, Ali

    2005-01-01

This report's contents focus on making use of NASA Glenn on-site computational facilities to develop, validate, and apply models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes to enhance the capability to compute heat transfer and losses in turbomachinery.

  7. In-vessel retention modeling capabilities in MAAP5

    International Nuclear Information System (INIS)

    Paik, Chan Y.; Lee, Sung Jin; Zhou, Quan; Luangdilok, W.; Reeves, R.W.; Henry, R.E.; Plys, M.; Scobel, J.H.

    2012-01-01

Modular Accident Analysis Program (MAAP) is an integrated severe accident analysis code for both light water and heavy water reactors. New and improved models to address the complex phenomena associated with in-vessel retention (IVR) were incorporated into MAAP5.01. They include: (a) time-dependent volatile and non-volatile decay heat, (b) material properties at high temperatures, (c) finer vessel wall nodalization, (d) new correlations for natural convection heat transfer in the oxidic pool, (e) refined metal layer heat transfer to the reactor vessel wall and surroundings, (f) formation of a heavy metal layer, and (g) an insulation cooling channel model and associated ex-vessel heat transfer and critical heat flux correlations. In this paper, the new and improved models in MAAP5.01 are described and sample calculation results are presented for the AP1000 passive plant. For the IVR evaluation, a transient calculation is useful because the timing of corium relocation, the decaying heat load, and the formation of separate layers in the lower plenum all affect the integrity of the lower head. The key parameters affecting IVR success are the metal layer emissivity and the thickness of the top metal layer, which depends on the amount of steel in the oxidic pool and in the heavy metal layer. With best-estimate inputs for the debris mixing parameters in a conservative IVR scenario, the AP1000 plant results show that the maximum ex-vessel heat flux to CHF ratio is about 0.7, which occurs before 10,000 seconds, when the decay heat is high. The AP1000 plant results demonstrate how MAAP5.01 can be used to evaluate IVR and to gain insight into the responses of the lower head during a severe accident.

  8. MMA, A Computer Code for Multi-Model Analysis

    Science.gov (United States)

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

be well served by the default methods provided. To use the default methods, the only required input for MMA is a list of directories where the files for the alternative models are located. Evaluation and development of model-analysis methods are active areas of research. To facilitate exploration and innovation, MMA allows the user broad discretion to define alternatives to the default procedures. For example, MMA allows the user to (a) rank models based on model criteria defined using a wide range of provided and user-defined statistics in addition to the default AIC, AICc, BIC, and KIC criteria, (b) create their own criteria using model measures available from the code, and (c) define how each model criterion is used to calculate the related posterior model probabilities. The default model criteria rate models based on model fit to observations; the number of observations and estimated parameters; and, for KIC, the Fisher information matrix. In addition, MMA allows the analysis to include an evaluation of estimated parameter values. This is accomplished by allowing the user to define unreasonable estimated parameter values or relative estimated parameter values. An example of the latter is that one parameter value may be expected to be less than another, as might be the case if the two parameters represented the hydraulic conductivity of distinct materials such as fine and coarse sand. Models with parameter values that violate the user-defined conditions are excluded from further consideration by MMA. Ground-water models are used as examples in this report, but MMA can be used to evaluate any set of models for which the required files have been produced. MMA needs to read files from a separate directory for each alternative model considered. The needed files are produced when using the Sensitivity-Analysis or Parameter-Estimation mode of UCODE_2005, or, possibly, the equivalent capability of another program. MMA is constructed using
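The record above describes ranking models by information criteria such as AIC and BIC and converting the criteria to posterior model probabilities. The arithmetic can be sketched as follows; this is not MMA's actual code. The least-squares AIC/BIC forms and the exponential-weight conversion are standard, but the function names and the example data here are invented.

```python
import math

def aic(n, sse, k):
    # AIC for a least-squares fit: n*ln(SSE/n) + 2k
    return n * math.log(sse / n) + 2 * k

def bic(n, sse, k):
    # BIC penalizes parameters more strongly as n grows
    return n * math.log(sse / n) + k * math.log(n)

def model_probabilities(criteria):
    # Convert criterion values to posterior model probabilities
    # (Akaike-weight style): w_i = exp(-d_i/2) / sum_j exp(-d_j/2),
    # where d_i is the criterion distance from the best model
    best = min(criteria)
    weights = [math.exp(-(c - best) / 2) for c in criteria]
    total = sum(weights)
    return [w / total for w in weights]

# Two hypothetical models fitted to n = 30 observations: the second
# fits slightly better (lower SSE) but uses twice as many parameters
scores = [aic(30, sse=12.4, k=3), aic(30, sse=11.9, k=6)]
probs = model_probabilities(scores)
```

In this example the small improvement in fit does not justify the extra parameters, so the simpler model receives the larger posterior probability.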

  9. Documentation for grants equal to tax model: Volume 3, Source code

    International Nuclear Information System (INIS)

    Boryczka, M.K.

    1986-01-01

The GETT model is capable of forecasting the amount of tax liability associated with all property owned and all activities undertaken by the US Department of Energy (DOE) in site characterization and repository development. The GETT program is a user-friendly, menu-driven model developed using dBASE III™, a relational data base management system. The data base for GETT consists primarily of eight separate dBASE III™ files corresponding to each of the eight taxes (real property, personal property, corporate income, franchise, sales, use, severance, and excise) levied by State and local jurisdictions on business property and activity. Additional smaller files help to control model inputs and reporting options. Volume 3 of the GETT model documentation is the source code. The code is arranged primarily by the eight tax types. Other code files include those for JURISDICTION, SIMULATION, VALIDATION, TAXES, CHANGES, REPORTS, GILOT, and GETT. The code has been verified through hand calculations.

  10. Improving the quality of clinical coding: a comprehensive audit model

    Directory of Open Access Journals (Sweden)

    Hamid Moghaddasi

    2014-04-01

Full Text Available Introduction: The review of medical records with the aim of assessing the quality of codes has long been conducted in different countries. Auditing medical coding, as an instructive approach, could help to review the quality of codes objectively using defined attributes, and this in turn would lead to improvement of the quality of codes. Method: The current study aimed to present a model for auditing the quality of clinical codes. The audit model was formed after reviewing other audit models, considering their strengths and weaknesses. A clear definition was presented for each quality attribute, and more detailed criteria were then set for assessing the quality of codes. Results: The audit tool, based on the quality attributes of legibility, relevancy, completeness, accuracy, definition and timeliness, led to the development of an audit model for assessing the quality of medical coding. The Delphi technique was then used to assure the validity of the model. Conclusion: The comprehensive audit model designed here could provide a reliable and valid basis for assessing the quality of codes, considering more quality attributes and their clear definitions. The inter-observer check suggested in the auditing method is of particular importance for assuring the reliability of coding.

  11. Development of the Monju core safety analysis numerical models by super-COPD code

    International Nuclear Information System (INIS)

    Yamada, Fumiaki; Minami, Masaki

    2010-12-01

Japan Atomic Energy Agency constructed a computational model for safety analysis of the Monju reactor core, to be built into the modularized plant dynamics analysis code Super-COPD, for the purpose of evaluating heat removal capability in the 21 transients defined in the annex to the construction permit application. The applicability of this model to core heat removal capability evaluation has been estimated by back-to-back comparisons of the results of the constituent models with conventionally applied codes and by application of the unified model. The numerical model for core safety analysis has been built on the best-estimate model validated by the actually measured plant behavior up to 40% rated power conditions, taking over the safety analysis models of the conventionally applied COPD and HARHO-IN codes, to be capable of overall calculations of the entire plant with the safety protection and control systems. Among the constituents of the analytical model, the neutronic-thermal model and the heat transfer and hydraulic models of the PHTS, SHTS, and water/steam system are individually verified by comparisons with the conventional calculations. Comparisons are also made with the actually measured plant behavior up to 40% rated power conditions to confirm the calculation adequacy and the conservativeness of the input data. The unified analytical model was applied to analyses of eight anomaly events in total: reactivity insertion, abnormal power distribution, and decrease and increase of coolant flow rate in the PHTS, SHTS and water/steam systems. The resulting maximum values and temporal variations of the key parameters in safety evaluation (temperatures of fuel, cladding, in-core sodium coolant, and RV inlet and outlet coolant) have negligible discrepancies against the existing analysis results in the annex to the construction permit application, verifying the unified analytical model. These works have enabled analytical evaluation of Monju core heat removal capability by Super-COPD utilizing the

  12. A model based lean approach to capability management

    CSIR Research Space (South Africa)

    Venter, Jacobus P

    2017-09-01

Full Text Available It is argued that the definition of the required operational capabilities in the short and long term is an essential element of command. Defence Capability Management can be a cumbersome, long and very resource-intensive activity. Given the new...

  13. [The detection of occurrence rate of genes coding capability to form pili binding in auto-strains of Escherichia coli].

    Science.gov (United States)

    Ivanova, E I; Popkova, S M; Dzhioev, Iu P; Rakova, E B; Dolgikh, V V; Savel'kaeva, M V; Nemchenko, U M; Bukharova, E V; Serdiuk, L V

    2015-01-01

E. coli is a commensal of the intestine of vertebrates. The exchange of genetic material among different types of bacteria and with other representatives of the family Enterobacteriaceae in the intestinal ecosystem results in the development of types of normal colibacillus with genetic characteristics of pathogenicity, which can serve as a theoretical substantiation for attributing such strains to pathobionts. Entero-pathogenic colibacillus continues to be an important cause of diarrhea in children in developing countries. The gene responsible for the formation of pili binding is a necessary condition for the virulence of entero-pathogenic colibacillus. The polymerase chain reaction was applied to examine 316 strains of different types of E. coli (normal, with weak enzyme activity, and with hemolytic activity) isolated from healthy children and children with functional disorders of the gastro-intestinal tract for the presence of genes coding the capability to form pili binding. The presence of this gene in different biochemical types of E. coli permits establishing the fact of formation of a reservoir of pathogenicity in the indigenous microbiota of the intestinal biocenosis.

  14. Modelling guidelines for core exit temperature simulations with system codes

    Energy Technology Data Exchange (ETDEWEB)

    Freixa, J., E-mail: jordi.freixa-terradas@upc.edu [Department of Physics and Nuclear Engineering, Technical University of Catalonia (UPC) (Spain); Paul Scherrer Institut (PSI), 5232 Villigen (Switzerland); Martínez-Quiroga, V., E-mail: victor.martinez@nortuen.com [Department of Physics and Nuclear Engineering, Technical University of Catalonia (UPC) (Spain); Zerkak, O., E-mail: omar.zerkak@psi.ch [Paul Scherrer Institut (PSI), 5232 Villigen (Switzerland); Reventós, F., E-mail: francesc.reventos@upc.edu [Department of Physics and Nuclear Engineering, Technical University of Catalonia (UPC) (Spain)

    2015-05-15

Highlights: • Core exit temperature is used in PWRs as an indication of core heat up. • Modelling guidelines for the CET response with system codes. • Modelling of heat transfer processes in the core and UP regions. - Abstract: Core exit temperature (CET) measurements play an important role in the sequence of actions under accidental conditions in pressurized water reactors (PWR). Given the difficulties in placing measurements in the core region, CET readings are used as the criterion for the initiation of accident management (AM) procedures because they can indicate a core heat-up scenario. However, the CET response has some limitations in detecting inadequate core cooling and core uncovery, simply because the measurement is not placed inside the core. It is therefore of major importance in the field of nuclear safety for PWR power plants to assess the capabilities of system codes for simulating the relation between the CET and the peak cladding temperature (PCT). The work presented in this paper intends to address this open question by making use of experimental work at integral test facilities (ITF) where experiments related to the evolution of the CET and the PCT during transient conditions have been carried out. In particular, simulations of two experiments performed at the ROSA/LSTF and PKL facilities are presented. The two experiments are part of a counterpart exercise between the OECD/NEA ROSA-2 and OECD/NEA PKL-2 projects. The simulations are used to derive guidelines on how to correctly reproduce the CET response during a core heat-up scenario. Three aspects have been identified as being of main importance: (1) the need for a 3-dimensional representation of the core and Upper Plenum (UP) regions in order to model the heterogeneity of the power zones and axial areas, (2) the detailed representation of the active and passive heat structures, and (3) the use of simulated thermocouples instead of steam temperatures to represent the CET readings.

  15. Subgroup A: nuclear model codes report to the Sixteenth Meeting of the WPEC

    International Nuclear Information System (INIS)

    Talou, P.; Chadwick, M.B.; Dietrich, F.S.; Herman, M.; Kawano, T.; Konig, A.; Oblozinsky, P.

    2004-01-01

The Subgroup A activities focus on the development of nuclear reaction models and codes, used in evaluation work for nuclear reactions from the unresolved energy region up to the pion production threshold, and for target nuclides from the low teens and heavier. Much of the effort is devoted by each participant to the continuing development of their own institution's codes. Progress in this arena is reported in detail for each code in the present document. EMPIRE-II is publicly accessible. The release of the TALYS code was announced for the ND2004 Conference in Santa Fe, NM, October 2004. McGNASH is still under development and is not expected to be released in the very near future. In addition, Subgroup A members have demonstrated a growing interest in working on common modeling and code capabilities, which would significantly reduce the amount of duplicated work, help manage the growing lines of existing codes efficiently, and render code inter-comparisons much easier. A recent and important activity of Subgroup A has therefore been to develop the framework and the first bricks of the ModLib library, which consists of mostly independent pieces of code written in Fortran 90 (and above) to be used in existing and future nuclear reaction codes. Significant progress in the development of ModLib has been made during the past year. Several physics modules have been added to the library, and a few more have been planned in detail for the coming year.

  16. Quantitative Model for Supply Chain Visibility: Process Capability Perspective

    Directory of Open Access Journals (Sweden)

    Youngsu Lee

    2016-01-01

Full Text Available Currently, the intensity of enterprise competition has increased as a result of a greater diversity of customer needs as well as the persistence of a long-term recession. The results of competition are becoming severe enough to determine the survival of a company. To survive global competition, each firm must focus on achieving innovation excellence and operational excellence as core competencies for sustainable competitive advantage. Supply chain management is now regarded as one of the most effective innovation initiatives to achieve operational excellence, and its importance has become ever more apparent. However, few companies effectively manage their supply chains, and the greatest difficulty is in achieving supply chain visibility. Many companies still suffer from a lack of visibility, and in spite of extensive research and the availability of modern technologies, the concepts and quantification methods to increase supply chain visibility are still ambiguous. Based on the extant research on supply chain visibility, this study proposes an extended visibility concept focusing on a process capability perspective and suggests a more quantitative model using the Z score from Six Sigma methodology to evaluate and improve the level of supply chain visibility.
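The Six Sigma Z score mentioned in this abstract measures, in standard deviations, how far a process mean sits from its nearest specification limit. A minimal sketch of that computation, assuming a hypothetical visibility metric (order-tracking latency) and invented specification limits; the paper's own model is not reproduced here:

```python
import statistics

def z_score(samples, lower_spec, upper_spec):
    # Six Sigma-style Z: distance from the process mean to the
    # nearest specification limit, expressed in standard deviations
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return min(upper_spec - mu, mu - lower_spec) / sigma

# Hypothetical: daily order-tracking latency (hours) for one
# supply chain visibility process, with spec limits of 2 and 6 hours
latency = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.3, 3.7]
z = z_score(latency, lower_spec=2.0, upper_spec=6.0)
```

A larger Z means the process is more capable: its natural variation sits comfortably inside the specification limits.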

  17. Integration of facility modeling capabilities for nuclear nonproliferation analysis

    International Nuclear Information System (INIS)

    Burr, Tom; Gorensek, M.B.; Krebs, John; Kress, Reid L.; Lamberti, Vincent; Schoenwald, David; Ward, Richard C.

    2012-01-01

Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  18. Repairing business process models as retrieved from source code

    NARCIS (Netherlands)

    Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.

    2013-01-01

    The static analysis of source code has become a feasible solution to obtain underlying business process models from existing information systems. Due to the fact that not all information can be automatically derived from source code (e.g., consider manual activities), such business process models

  19. Models and Correlations of Interfacial and Wall Frictions for the SPACE code

    International Nuclear Information System (INIS)

    Kim, Soo Hyung; Hwang, Moon Kyu; Chung, Bub Dong

    2010-04-01

This report describes the models and correlations for the interfacial and wall frictions implemented in the SPACE code, which has the capability to predict the thermal-hydraulic behavior of nuclear power plants. The interfacial and wall frictions are essential for solving the momentum conservation equations of gas, continuous liquid and droplets. The interfacial and wall frictions are dealt with in Chapters 2 and 3, respectively. In Chapter 4, the selection criteria for models and correlations are explained. In Chapter 5, the origins of the selected models and correlations used in this code are examined to check whether they are in conflict with intellectual property rights.

  20. Evaluation of Advanced Models for PAFS Condensation Heat Transfer in SPACE Code

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Byoung-Uhn; Kim, Seok; Park, Yu-Sun; Kang, Kyung Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Ahn, Tae-Hwan; Yun, Byong-Jo [Pusan National University, Busan (Korea, Republic of)

    2015-10-15

The PAFS (Passive Auxiliary Feedwater System) is operated by natural circulation to remove the core decay heat through the PCHX (Passive Condensation Heat Exchanger), which is composed of nearly horizontal tubes. For validation of the cooling and operational performance of the PAFS, the PASCAL (PAFS Condensing Heat Removal Assessment Loop) facility was constructed, and the condensation heat transfer and natural convection phenomena in the PAFS were experimentally investigated at KAERI (Korea Atomic Energy Research Institute). From the PASCAL experimental results, it was found that the conventional system analysis code underestimated the condensation heat transfer. In this study, advanced condensation heat transfer models which can treat the heat transfer mechanisms of the different flow regimes in a nearly horizontal heat exchanger tube were analyzed. The models were implemented in a thermal-hydraulic safety analysis code, SPACE (Safety and Performance Analysis Code for Nuclear Power Plant), and evaluated against the PASCAL experimental data. With the aim of enhancing the prediction capability for the condensation phenomenon inside the PCHX tube of the PAFS, advanced models for condensation heat transfer were implemented into the wall condensation model of the SPACE code, and the PASCAL experimental results were utilized to validate the condensation models. Calculation results showed that the improved model for the condensation heat transfer coefficient enhanced the prediction capability of the SPACE code. This result confirms that the mechanistic modeling of film condensation in the steam phase and convection in the condensate liquid contributed to enhancing the prediction capability of the wall condensation model of the SPACE code and to reducing conservatism in the prediction of condensation heat transfer.

  1. Description and assessment of structural and temperature models in the FRAP-T6 code

    International Nuclear Information System (INIS)

    Siefken, L.J.

    1983-01-01

The FRAP-T6 code was developed at the Idaho National Engineering Laboratory (INEL) for the purpose of calculating the transient performance of light water reactor fuel rods during reactor transients ranging from mild operational transients to severe hypothetical loss-of-coolant accidents. An important application of the FRAP-T6 code is to calculate the structural performance of fuel rod cladding. The capabilities of the FRAP-T6 code are assessed by comparisons of code calculations with the measurements of several hundred in-pile experiments on fuel rods. The results of the assessments show that the code accurately and efficiently models the structural and thermal response of fuel rods.

  2. Innovation and dynamic capabilities of the firm: Defining an assessment model

    Directory of Open Access Journals (Sweden)

    André Cherubini Alves

    2017-05-01

Full Text Available Innovation and dynamic capabilities have gained considerable attention in both academia and practice. While one of the oldest inquiries in economic and strategy literature involves understanding the features that drive business success and a firm’s perpetuity, the literature still lacks a comprehensive model of innovation and dynamic capabilities. This study presents a model that assesses firms’ innovation and dynamic capabilities perspectives based on four essential capabilities: development, operations, management, and transaction capabilities. Data from a survey of 1,107 Brazilian manufacturing firms were used for empirical testing and discussion of the dynamic capabilities framework. Regression and factor analyses validated the model; we discuss the results, contrasting them with the dynamic capabilities framework. Operations capability is the least dynamic of all the capabilities, with the least influence on innovation. This reinforces the notion of operations capabilities as “ordinary capabilities,” whereas management, development, and transaction capabilities better explain firms’ dynamics and innovation.

  3. Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.

    Science.gov (United States)

    Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic

    2017-03-01

Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and an inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street Press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) of the receiver operating characteristic curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes, with average AUC scores of 0.79 and 0.70, respectively. For fine-grained coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.
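The session-level comparison in this record rests on the AUC, which for a binary code equals the probability that a randomly chosen positive session is scored above a randomly chosen negative one (ties counting half). A self-contained rank-based sketch of that computation; the labels and model scores below are invented for illustration and are not data from the study:

```python
def auc(labels, scores):
    # Rank-based AUC (Mann-Whitney form): the fraction of
    # positive/negative pairs in which the positive is scored
    # higher, counting ties as half a win
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical session-level predictions for one code from two models
y       = [1, 1, 1, 0, 0, 0]
model_a = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]   # e.g. an L-LDA-style scorer
model_b = [0.7, 0.6, 0.2, 0.6, 0.4, 0.3]   # e.g. a baseline scorer
```

Here `auc(y, model_a)` exceeds `auc(y, model_b)`, mirroring the kind of session-level comparison the abstract reports (0.79 vs 0.70 on the real corpus).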

  4. Verification of NUREC Code Transient Calculation Capability Using OECD NEA/US NRC PWR MOX/UO2 Core Transient Benchmark Problem

    International Nuclear Information System (INIS)

    Joo, Hyung Kook; Noh, Jae Man; Lee, Hyung Chul; Yoo, Jae Woon

    2006-01-01

In this report, we verified the transient calculation capability of the NUREC code using the OECD NEA/US NRC PWR MOX/UO2 Core Transient Benchmark Problem. The benchmark problem consists of Part 1, a 2-D problem with given T/H conditions; Part 2, a 3-D problem at HFP condition; Part 3, a 3-D problem at HZP condition; and Part 4, a transient state initiated by a control rod ejection at the HZP condition of Part 3. In Part 1, the results of the NUREC code agreed well with the reference solution obtained from the DeCART calculation, except for the pin power distributions in the rodded assemblies. In Part 2, the results of the NUREC code agreed well with the reference DeCART solutions. In Part 3, some results of the NUREC code, such as the critical boron concentration and the core-averaged delayed neutron fraction, agreed well with the reference PARCS 2G solutions, but the error in the assembly power at the core center was quite large. The pin power errors of the NUREC code at the rodded assemblies were much smaller than those of the PARCS code. The axial power distribution also agreed well with the reference solution. In Part 4, the results of the NUREC code agreed well with those of the PARCS 2G code, which was taken as the reference solution. From the above results we can conclude that the results of the NUREC code for steady states and transient states of the MOX-loaded LWR core agree well with those of the other codes.

  5. The SCEC Community Modeling Environment (SCEC/CME) - An Overview of its Architecture and Current Capabilities

    Science.gov (United States)

    Maechling, P. J.; Jordan, T. H.; Minster, B.; Moore, R.; Kesselman, C.; SCEC ITR Collaboration

    2004-12-01

The Southern California Earthquake Center (SCEC), in collaboration with the San Diego Supercomputer Center, the USC Information Sciences Institute, the Incorporated Research Institutions for Seismology, and the U.S. Geological Survey, is developing the Southern California Earthquake Center Community Modeling Environment (CME) under a five-year grant from the National Science Foundation's Information Technology Research (ITR) Program, jointly funded by the Geosciences and Computer and Information Science & Engineering Directorates. The CME system is an integrated geophysical simulation modeling framework that automates the process of selecting, configuring, and executing models of earthquake systems. During the Project's first three years, we have performed fundamental geophysical and information technology research and have also developed substantial system capabilities, software tools, and data collections that can help scientists perform systems-level earthquake science. The CME system provides collaborative tools to facilitate distributed research and development. These collaborative tools are primarily communication tools, providing researchers with access to information in ways that are convenient and useful. The CME system provides collaborators with access to significant computing and storage resources. The computing resources of the Project include in-house servers and Project allocations on the USC High Performance Computing Linux Cluster, as well as allocations on NPACI supercomputers and the TeraGrid. The CME system provides access to SCEC community geophysical models such as the Community Velocity Model, Community Fault Model, Community Crustal Motion Model, and Community Block Model. The organizations that develop these models often provide access to them, so it is not necessary to use the CME system to access these models. However, in some cases, the CME system supplements the SCEC community models with utility codes that make it easier to use or access

  6. Physical model of the nuclear fuel cycle simulation code SITON

    International Nuclear Information System (INIS)

    Brolly, Á.; Halász, M.; Szieberth, M.; Nagy, L.; Fehér, S.

    2017-01-01

    Finding answers to the main challenges of nuclear energy, such as resource utilisation and waste minimisation, calls for transient fuel cycle modelling. This motivation led to the development of SITON v2.0, a dynamic, discrete-facility/discrete-material and discrete-event fuel cycle simulation code. The physical model of the code includes the most important fuel cycle facilities. Facilities can be connected flexibly; their number is not limited. Material transfer between facilities is tracked by taking into account 52 nuclides. The composition of discharged fuel is determined using burnup tables, except for the 2400 MW thermal power design of the Gas-Cooled Fast Reactor (GFR2400). For the GFR2400 the FITXS method is used, which fits one-group microscopic cross-sections as polynomial functions of the fuel composition. This method is accurate and fast enough to be used in fuel cycle simulations. Operation of the fuel cycle, i.e. material requests and transfers, is described by discrete events. In advance of the simulation, reactors and plants formulate their requests as events; triggered requests are tracked. After that, the events are simulated, i.e. the requests are fulfilled and the composition of the material flow between facilities is calculated. To demonstrate the capabilities of SITON v2.0, a hypothetical transient fuel cycle is presented in which a 4-unit VVER-440 reactor park was replaced by one GFR2400 that recycled its own spent fuel. It is found that the GFR2400 can be started if the cooling time of its spent fuel is 2 years. However, if the cooling time is 5 years, it needs an additional plutonium feed, which can be covered from the spent fuel of a Generation III light water reactor.
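
    As a rough illustration of the cross-section treatment described above (not the actual FITXS implementation), the sketch below fits a one-group cross-section as a polynomial in two hypothetical fuel-composition variables by linear least squares; the data and basis are invented for the example.

```python
import numpy as np

# Hypothetical FITXS-style fit: represent a one-group microscopic
# cross-section as a polynomial in two fuel-composition variables
# (e.g. nuclide number-density fractions). All numbers are invented.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 0.3, size=(200, 2))            # composition samples
# Synthetic "reference" cross-sections from a smooth made-up law (barns).
sigma = 1.8 + 0.9 * x[:, 0] - 0.4 * x[:, 1] + 0.5 * x[:, 0] * x[:, 1]

# Design matrix: constant, linear, and bilinear terms.
A = np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1], x[:, 0] * x[:, 1]])
coef, *_ = np.linalg.lstsq(A, sigma, rcond=None)    # least-squares fit

# Inside a fuel-cycle simulation the cheap polynomial evaluation
# replaces a full lattice-physics calculation.
sigma_fit = A @ coef
print(np.max(np.abs(sigma_fit - sigma)))            # maximum fit residual
```

    Because the synthetic law lies in the span of the chosen basis, the residual here is at machine precision; for real lattice-code data the basis and fit quality would have to be assessed per nuclide and reaction.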

  7. Systems Security Engineering Capability Maturity Model (SSECMM), Model Description, Version 1.1

    National Research Council Canada - National Science Library

    1997-01-01

    This document is designed to acquaint the reader with the SSE-CMM Project as a whole and present the project's major work product - the Systems Security Engineering Capability Maturity Model (SSE-CMM...

  8. WWER radial reflector modeling by diffusion codes

    International Nuclear Information System (INIS)

    Petkov, P. T.; Mittag, S.

    2005-01-01

    The two commonly used approaches to describing WWER radial reflectors in diffusion codes, by albedo on the core-reflector boundary and by a ring of diffusive assembly-size nodes, are discussed. The advantages and disadvantages of the first approach are presented first; then Koebke's equivalence theory is outlined and its implementation for the WWER radial reflectors is discussed. Results for the WWER-1000 reactor are presented. The boundary conditions on the outer reflector boundary are then discussed, as is the possibility of dividing the library into fuel assembly and reflector parts and generating each part with a separate code package. Finally, the homogenization errors for rodded assemblies are presented and discussed (Author)

  9. COMPBRN III: a computer code for modeling compartment fires

    International Nuclear Information System (INIS)

    Ho, V.; Siu, N.; Apostolakis, G.; Flanagan, G.F.

    1986-07-01

    The computer code COMPBRN III deterministically models the behavior of compartment fires. This code is an improvement of the original COMPBRN codes. It employs a different air entrainment model and numerical scheme to estimate the properties of the ceiling hot gas layer. Moreover, COMPBRN III incorporates a number of improvements in shape factor calculations and error checking, which distinguish it from the COMPBRN II code. This report presents the ceiling hot gas layer model employed by COMPBRN III as well as several other modifications. Information necessary to run COMPBRN III, including descriptions of required input and resulting output, is also presented. Simulations of experiments and a sample problem are included to demonstrate the usage of the code. 37 figs., 46 refs

  10. The burnup capabilities of the Deep Burn Modular Helium Reactor analyzed by the Monte Carlo Continuous Energy Code MCB

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, Alberto E-mail: alby@neutron.kth.se; Gudowski, Waclaw E-mail: wacek@neutron.kth.se; Venneri, Francesco E-mail: venneri@lanl.gov

    2004-01-01

    We have investigated the waste actinide burnup capabilities of a Gas Turbine Modular Helium Reactor (GT-MHR, similar to the reactor being designed by General Atomics and Minatom for surplus weapons plutonium destruction) with the Monte Carlo Continuous Energy Burnup Code MCB, an extension of MCNP developed at the Royal Institute of Technology in Stockholm and the University of Mining and Metallurgy in Krakow. The GT-MHR is a gas-cooled, graphite-moderated reactor, which can be powered with a wide variety of fuels, such as thorium, uranium or plutonium. In the present work, the GT-MHR is fueled with the transuranic actinides contained in Light Water Reactor (LWR) spent fuel for the purpose of destroying them as completely as possible with minimum reliance on multiple reprocessing steps. After uranium extraction from the LWR spent fuel (UREX), the remaining waste actinides, including plutonium, are partitioned into two distinct types of fuel for use in the GT-MHR: Driver Fuel (DF) and Transmutation Fuel (TF). The DF supplies the neutrons to maintain the fission chain reaction, whereas the TF emphasizes neutron capture to induce a deep burn transmutation and provide reactivity control by a negative feedback. When used in this mode, the GT-MHR is called the Deep Burn Modular Helium Reactor (DB-MHR). Both fuels are contained in a structure of triple isotropic coated layers, the TRISO coating, which has been proven to retain fission products up to 1600 deg. C and is expected to remain intact for hundreds of thousands of years after irradiation. Other benefits of this reactor include a well-developed technology, both for the graphite-moderated core and the TRISO structure, a high energy conversion efficiency (about 50%), a well-established passive safety mechanism and a competitive cost. The destruction of more than 94% of 239Pu and the other geologically problematic actinide species makes this reactor a valid proposal for the reduction of nuclear waste and the prevention of

  11. The burnup capabilities of the Deep Burn Modular Helium Reactor analyzed by the Monte Carlo Continuous Energy Code MCB

    International Nuclear Information System (INIS)

    Talamo, Alberto; Gudowski, Waclaw; Venneri, Francesco

    2004-01-01

    We have investigated the waste actinide burnup capabilities of a Gas Turbine Modular Helium Reactor (GT-MHR, similar to the reactor being designed by General Atomics and Minatom for surplus weapons plutonium destruction) with the Monte Carlo Continuous Energy Burnup Code MCB, an extension of MCNP developed at the Royal Institute of Technology in Stockholm and the University of Mining and Metallurgy in Krakow. The GT-MHR is a gas-cooled, graphite-moderated reactor, which can be powered with a wide variety of fuels, such as thorium, uranium or plutonium. In the present work, the GT-MHR is fueled with the transuranic actinides contained in Light Water Reactor (LWR) spent fuel for the purpose of destroying them as completely as possible with minimum reliance on multiple reprocessing steps. After uranium extraction from the LWR spent fuel (UREX), the remaining waste actinides, including plutonium, are partitioned into two distinct types of fuel for use in the GT-MHR: Driver Fuel (DF) and Transmutation Fuel (TF). The DF supplies the neutrons to maintain the fission chain reaction, whereas the TF emphasizes neutron capture to induce a deep burn transmutation and provide reactivity control by a negative feedback. When used in this mode, the GT-MHR is called the Deep Burn Modular Helium Reactor (DB-MHR). Both fuels are contained in a structure of triple isotropic coated layers, the TRISO coating, which has been proven to retain fission products up to 1600 deg. C and is expected to remain intact for hundreds of thousands of years after irradiation. Other benefits of this reactor include a well-developed technology, both for the graphite-moderated core and the TRISO structure, a high energy conversion efficiency (about 50%), a well-established passive safety mechanism and a competitive cost. The destruction of more than 94% of 239Pu and the other geologically problematic actinide species makes this reactor a valid proposal for the reduction of nuclear waste and the prevention of

  12. RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1

    International Nuclear Information System (INIS)

    1995-08-01

    The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients, such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal-hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes

  13. GASFLOW computer code (physical models and input data)

    International Nuclear Information System (INIS)

    Muehlbauer, Petr

    2007-11-01

    The GASFLOW computer code was developed jointly by the Los Alamos National Laboratory, USA, and Forschungszentrum Karlsruhe, Germany. The code is primarily intended for calculations of the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and in other facilities. The physical models and the input data are described, and a commented simple calculation is presented

  14. Fuel behavior modeling using the MARS computer code

    International Nuclear Information System (INIS)

    Faya, S.C.S.; Faya, A.J.G.

    1983-01-01

    The fuel behaviour modeling code MARS was evaluated against experimental data. Two cases were selected: an early commercial PWR rod (Maine Yankee rod) and an experimental rod from the Canadian BWR program (Canadian rod). The MARS predictions are compared with experimental data and with predictions made by other fuel modeling codes. Improvements are suggested for some fuel behaviour models. MARS results are satisfactory based on the data available. (Author) [pt

  15. PCCS model development for SBWR using the CONTAIN code

    International Nuclear Information System (INIS)

    Tills, J.; Murata, K.K.; Washington, K.E.

    1994-01-01

    The General Electric Simplified Boiling Water Reactor (SBWR) employs a passive containment cooling system (PCCS) to maintain long-term containment gas pressure and temperature below design limits during accidents. This system consists of a steam supply line that connects the upper portion of the drywell with a vertical shell-and-tube single pass heat exchanger located in an open water pool outside of the containment safety envelope. The heat exchanger tube outlet is connected to a vent line that is submerged below the suppression pool surface but above the main suppression pool horizontal vents. Steam generated in the post-shutdown period flows into the heat exchanger tubes as the result of suction and/or a low pressure differential between the drywell and suppression chamber. Operation of the PCCS is complicated by the presence of noncondensables in the flow stream. Build-up of noncondensables in the exchanger and vent line for the periods when the vent is not cleared causes a reduction in the exchanger heat removal capacity. As flow to the exchanger is reduced due to the noncondensable gas build-up, the drywell pressure increases until the vent line is cleared and the noncondensables are purged into the suppression chamber, restoring the heat removal capability of the PCCS. This paper reports on progress made in modeling SBWR containment loads using the CONTAIN code. As a central part of this effort, a PCCS model development effort has recently been undertaken to implement an appropriate model in CONTAIN. The CONTAIN PCCS modeling approach is discussed and validated. A full SBWR containment input deck has also been developed for CONTAIN. The plant response to a postulated design basis accident (DBA) has been calculated with the CONTAIN PCCS model and plant deck, and the preliminary results are discussed

  16. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate the benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barrier to entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
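
    The estimation strategy such codes automate can be sketched in a few lines (an illustration only, not the PEST++ API): run the model, build a finite-difference Jacobian with one extra model run per parameter, and take a Gauss-Newton step. The exponential test model and all numbers below are invented.

```python
import numpy as np

def model(p, t):
    """Stand-in 'forward model' with two parameters (illustrative)."""
    return p[0] * np.exp(-p[1] * t)

t_obs = np.linspace(0.0, 4.0, 20)
p_true = np.array([2.0, 0.7])
d_obs = model(p_true, t_obs)                  # synthetic observations

p = np.array([1.0, 0.2])                      # initial parameter guess
for _ in range(50):
    r = d_obs - model(p, t_obs)               # residual vector
    # Finite-difference Jacobian: one extra forward run per parameter,
    # which is why PEST-family codes parallelize these runs.
    J = np.empty((t_obs.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = 1e-6
        J[:, j] = (model(p + dp, t_obs) - model(p, t_obs)) / 1e-6
    p = p + np.linalg.lstsq(J, r, rcond=None)[0]  # Gauss-Newton step

print(p)  # should approach p_true = [2.0, 0.7]
```

    Real PEST-style runs add Marquardt damping, regularization, and parallel run management on top of this core loop.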

  17. EM modeling for GPIR using 3D FDTD modeling codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, S.D.

    1994-10-01

    An analysis of the one-, two-, and three-dimensional electrical characteristics of structural cement and concrete is presented. This work connects experimental efforts in characterizing cement and concrete in the frequency and time domains with the Finite Difference Time Domain (FDTD) modeling efforts for these substances. These efforts include electromagnetic (EM) modeling of simple lossless homogeneous materials with aggregate and targets, and the modeling of dispersive and lossy materials with aggregate and complex target geometries for Ground Penetrating Imaging Radar (GPIR). Two- and three-dimensional FDTD codes (developed at LLNL) were used for the modeling efforts. The purpose of the experimental and modeling efforts is to gain knowledge about the electrical properties of concrete typically used in the construction industry for bridges and other load-bearing structures. The goal is to optimize the performance of a high-sample-rate impulse radar and data acquisition system and to design an antenna system matched to the characteristics of this material. Results show agreement to within 2 dB between the amplitudes of the experimental and modeled data, while the frequency peaks correlate to within 10%, the differences being due to the unknown exact nature of the aggregate placement.

  18. Cavitation Modeling in Euler and Navier-Stokes Codes

    Science.gov (United States)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    Many previous researchers have modeled sheet cavitation by means of a constant pressure solution in the cavity region coupled with a velocity potential formulation for the outer flow. The present paper discusses the issues involved in extending these cavitation models to Euler or Navier-Stokes codes. The approach taken is to start from a velocity potential model to ensure our results are compatible with those of previous researchers and available experimental data, and then to implement this model in both Euler and Navier-Stokes codes. The model is then augmented in the Navier-Stokes code by the inclusion of the energy equation, which allows the effect of subcooling in the vicinity of the cavity interface to be modeled, taking into account the experimentally observed reduction in cavity pressures that occurs in cryogenic fluids such as liquid hydrogen. Although our goal is to assess the practicality of implementing these cavitation models in existing three-dimensional turbomachinery codes, the emphasis in the present paper centers on two-dimensional computations, most specifically isolated airfoils and cascades. Comparisons between the velocity potential, Euler and Navier-Stokes implementations indicate they all produce consistent predictions. Comparisons with experimental results also indicate that the predictions are qualitatively correct and give a reasonable first estimate of sheet cavitation effects in both cryogenic and non-cryogenic fluids. The impact on CPU time and the code modifications required suggest that these models are appropriate for incorporation in current-generation turbomachinery codes.

  19. Development of the three dimensional flow model in the SPACE code

    International Nuclear Information System (INIS)

    Oh, Myung Taek; Park, Chan Eok; Kim, Shin Whan

    2014-01-01

    SPACE (Safety and Performance Analysis CodE) is a nuclear plant safety analysis code, which has been developed in the Republic of Korea through a joint research between the Korean nuclear industry and research institutes. The SPACE code has been developed with multi-dimensional capabilities as a requirement of the next generation safety code. It allows users to more accurately model the multi-dimensional flow behavior that can be exhibited in components such as the core, lower plenum, upper plenum and downcomer region. Based on generalized models, the code can model any configuration or type of fluid system. All the geometric quantities of mesh are described in terms of cell volume, centroid, face area, and face center, so that it can naturally represent not only the one dimensional (1D) or three dimensional (3D) Cartesian system, but also the cylindrical mesh system. It is possible to simulate large and complex domains by modelling the complex parts with a 3D approach and the rest of the system with a 1D approach. By 1D/3D co-simulation, more realistic conditions and component models can be obtained, providing a deeper understanding of complex systems, and it is expected to overcome the shortcomings of 1D system codes. (author)
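
    The generalized geometry description mentioned above (cell volume and centroid, face area and center) can be pictured as plain data structures; the names below are illustrative, not taken from SPACE.

```python
from dataclasses import dataclass
import numpy as np

# Any 1-D, Cartesian 3-D, or cylindrical mesh reduces to this generic
# cell/face description, which is what lets 1-D and 3-D regions be
# coupled in one model. Names and numbers are illustrative only.
@dataclass
class Cell:
    volume: float          # cell volume
    centroid: np.ndarray   # cell centroid (x, y, z)

@dataclass
class Face:
    area: float            # face area
    center: np.ndarray     # face center (x, y, z)
    cells: tuple           # ids of the two cells the face connects

# A two-cell "1-D" pipe of unit cross-section in the general form:
cells = [Cell(1.0, np.array([0.5, 0.0, 0.0])),
         Cell(1.0, np.array([1.5, 0.0, 0.0]))]
faces = [Face(1.0, np.array([1.0, 0.0, 0.0]), (0, 1))]
print(len(cells), len(faces))
```

    A 3-D region simply contributes more cells and faces to the same lists, so a 1D/3D co-simulation needs no special-case geometry handling.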

  20. Assessment of horizontal in-tube condensation models using MARS code. Part I: Stratified flow condensation

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Seong-Su [Department of Engineering Project, FNC Technology Co., Ltd., Bldg. 135-308, Seoul National University, Gwanak-gu, Seoul 151-744 (Korea, Republic of); Department of Nuclear Engineering, Seoul National University, Gwanak-gu, Seoul 151-744 (Korea, Republic of); Hong, Soon-Joon, E-mail: sjhong90@fnctech.com [Department of Engineering Project, FNC Technology Co., Ltd., Bldg. 135-308, Seoul National University, Gwanak-gu, Seoul 151-744 (Korea, Republic of); Park, Ju-Yeop; Seul, Kwang-Won [Korea Institute of Nuclear Safety, 19 Kuseong-dong, Yuseong-gu, Daejon (Korea, Republic of); Park, Goon-Cherl [Department of Nuclear Engineering, Seoul National University, Gwanak-gu, Seoul 151-744 (Korea, Republic of)

    2013-01-15

    Highlights: • This study collected 11 horizontal in-tube condensation models for stratified flow. • This study assessed the predictive capability of the models for steam condensation. • Purdue-PCCS experiments were simulated using the MARS code incorporating the models. • The Cavallini et al. (2006) model predicts the data well for stratified flow conditions. • Results of this study can be used to improve the condensation model in RELAP5 or MARS. - Abstract: The accurate prediction of horizontal in-tube condensation heat transfer is a primary concern in the optimum design and safety analysis of horizontal heat exchangers of passive safety systems such as the passive containment cooling system (PCCS), the emergency condenser system (ECS) and the passive auxiliary feed-water system (PAFS). It is essential to analyze and assess the predictive capability of previous horizontal in-tube condensation models for each flow regime using various experimental data. This study assessed a total of 11 condensation models for the stratified flow, one of the main flow regimes encountered in the horizontal condenser, against the heat transfer data from the Purdue-PCCS experiment using the multi-dimensional analysis of reactor safety (MARS) code. From the assessments, it was found that the models by Akers and Rosson, Chato, Tandon et al., Sweeney and Chato, and Cavallini et al. (2002) under-predicted the data in the main condensation heat transfer region; on the contrary, the models by Rosson and Meyers, Jaster and Kosky, Fujii, Dobson and Chato, and Thome et al. similarly- or over-predicted the data, and especially, the Cavallini et al. (2006) model shows good predictive capability for all test conditions. The results of this study can be used to improve the condensation models in thermal-hydraulic codes such as RELAP5 or MARS.

  1. An Empirical Competence-Capability Model of Supply Chain Innovation

    OpenAIRE

    Mandal, Santanu

    2016-01-01

    Supply chain innovation has become the new pre-requisite for the survival of firms in developing capabilities and strategies for sustaining their operations and performance in the market. This study investigates the influence of supply and demand competence on supply chain innovation and its influence on a firm’s operational and relational performance. While the former competence refers to production and supply management related activities, the latter refers to distribution and demand manage...

  2. Case studies in Gaussian process modelling of computer codes

    International Nuclear Information System (INIS)

    Kennedy, Marc C.; Anderson, Clive W.; Conti, Stefano; O'Hagan, Anthony

    2006-01-01

    In this paper we present a number of recent applications in which an emulator of a computer code is created using a Gaussian process model. Tools are then applied to the emulator to perform sensitivity analysis and uncertainty analysis. Sensitivity analysis is used both as an aid to model improvement and as a guide to how much the output uncertainty might be reduced by learning about specific inputs. Uncertainty analysis allows us to reflect output uncertainty due to unknown input parameters, when the finished code is used for prediction. The computer codes themselves are currently being developed within the UK Centre for Terrestrial Carbon Dynamics
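
    A minimal numerical sketch of such an emulator, assuming a squared-exponential covariance and a cheap stand-in for the expensive code (real applications would also estimate hyperparameters and report predictive variance):

```python
import numpy as np

def code(x):                        # cheap stand-in for the simulator
    return np.sin(3.0 * x) + x

def k(a, b, ell=0.3):               # squared-exponential covariance
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

x_train = np.linspace(0.0, 2.0, 12)          # small design of code runs
y_train = code(x_train)

K = k(x_train, x_train) + 1e-8 * np.eye(x_train.size)  # jitter for stability
w = np.linalg.solve(K, y_train)

x_new = np.array([0.55, 1.25])               # inputs never run through the code
y_pred = k(x_new, x_train) @ w               # GP posterior-mean prediction
print(np.abs(y_pred - code(x_new)))          # emulator error at new inputs
```

    Once trained, the emulator is cheap enough to evaluate thousands of times, which is what makes the sensitivity and uncertainty analyses described above affordable.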

  3. Capabilities and requirements for modelling radionuclide transport in the geosphere

    International Nuclear Information System (INIS)

    Paige, R.W.; Piper, D.

    1989-02-01

    This report gives an overview of geosphere flow and transport models suitable for use by the Department of the Environment in the performance assessment of radioactive waste disposal sites. An outline methodology for geosphere modelling is proposed, consisting of a number of different types of model. A brief description of each of the component models is given, indicating the purpose of the model, the processes being modelled and the methodologies adopted. Areas requiring development are noted. (author)

  4. ATHENA code manual. Volume 1. Code structure, system models, and solution methods

    International Nuclear Information System (INIS)

    Carlson, K.E.; Roth, P.A.; Ransom, V.H.

    1986-09-01

    The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code has been developed to perform transient simulation of the thermal hydraulic systems which may be found in fusion reactors, space reactors, and other advanced systems. A generic modeling approach is utilized which permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of a complete facility. Several working fluids are available to be used in one or more interacting loops. Different loops may have different fluids with thermal connections between loops. The modeling theory and associated numerical schemes are documented in Volume I in order to acquaint the user with the modeling base and thus aid effective use of the code. The second volume contains detailed instructions for input data preparation

  5. Capabilities of a mechanistic model for containment condenser simulation

    International Nuclear Information System (INIS)

    Broxtermann, Philipp; Cron, Daniel von der; Allelein, Hans-Josef

    2011-01-01

    In this paper the first application of the new containment COndenser MOdule (COMO) is presented. COMO, currently under development, is a newly introduced part of the German containment code system COCOSYS and evaluates the contribution of containment condensers (CC) to passive containment cooling. Several next-generation Light Water Reactors (LWR) will feature innovative systems for passive heat removal during accidents from both the primary circuit and the containment. The passivity is based on natural driving forces only, such as gravity and natural circulation. To investigate the complex thermal-hydraulics and their propagation within a containment during accidents, containment code systems have been developed and validated against a wide variety of experiments. Furthermore, these codes are constantly improved to meet the intensified interest and knowledge in single phenomena and the technological evolution of the actual containment systems, e.g. the installation of passive devices. Accordingly, COCOSYS has been developed and is being continuously enhanced by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) and other partners like LRST. As has been shown by GRS and the calculation of the PANDA BC4 experiment, the interaction between a CC and the atmosphere of the surrounding vessel can be reproduced with the help of COCOSYS. However, up to now this has only been achieved on account of good knowledge of the outcome of the experiment, the user's skills and a complex input deck. The main goal of the newly introduced COMO is to improve the simulation of the physical processes of CCs. This will be achieved by considering the passive driving forces on which the CCs are based. Up to now, a natural circulation within the CC tubes has been realized. The simulation of boiling conditions and the impact on the flow will be addressed in future studies. Additionally, the application of CCs to reactor simulation is being simplified and thus is supposed to reduce

  6. ADVANCED ELECTRIC AND MAGNETIC MATERIAL MODELS FOR FDTD ELECTROMAGNETIC CODES

    Energy Technology Data Exchange (ETDEWEB)

    Poole, B R; Nelson, S D; Langdon, S

    2005-05-05

    The modeling of dielectric and magnetic materials in the time domain is required for pulsed power applications, pulsed induction accelerators, and advanced transmission lines. For example, most induction accelerator modules require the use of magnetic materials to provide adequate Volt-seconds during the acceleration pulse. These models require hysteresis and saturation to simulate the saturation wavefront in a multipulse environment. In high-voltage transmission line applications such as shock or soliton lines, the dielectric operates in a highly nonlinear regime, which requires nonlinear models. Simple 1-D models are developed for fast parameterization of transmission line structures. In the case of nonlinear dielectrics, a simple analytic model describing the permittivity in terms of the electric field is used in a 3-D finite difference time domain (FDTD) code. In the case of magnetic materials, both rate-independent and rate-dependent Hodgdon magnetic material models have been implemented in 3-D FDTD codes and 1-D codes.

  7. ADVANCED ELECTRIC AND MAGNETIC MATERIAL MODELS FOR FDTD ELECTROMAGNETIC CODES

    International Nuclear Information System (INIS)

    Poole, B R; Nelson, S D; Langdon, S

    2005-01-01

    The modeling of dielectric and magnetic materials in the time domain is required for pulsed power applications, pulsed induction accelerators, and advanced transmission lines. For example, most induction accelerator modules require the use of magnetic materials to provide adequate Volt-seconds during the acceleration pulse. These models require hysteresis and saturation to simulate the saturation wavefront in a multipulse environment. In high-voltage transmission line applications such as shock or soliton lines, the dielectric operates in a highly nonlinear regime, which requires nonlinear models. Simple 1-D models are developed for fast parameterization of transmission line structures. In the case of nonlinear dielectrics, a simple analytic model describing the permittivity in terms of the electric field is used in a 3-D finite difference time domain (FDTD) code. In the case of magnetic materials, both rate-independent and rate-dependent Hodgdon magnetic material models have been implemented in 3-D FDTD codes and 1-D codes
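
    A toy 1-D FDTD loop with a field-dependent permittivity illustrates the kind of nonlinear dielectric update described above; the Kerr-like eps(E) law, grid, and source below are illustrative assumptions, not the models from the report.

```python
import numpy as np

nz, nt = 200, 300
dz = 1e-3                                   # 1 mm cells (illustrative)
c0 = 3.0e8                                  # speed of light (m/s)
dt = 0.5 * dz / c0                          # Courant-stable time step
eps0, mu0 = 8.854e-12, 4e-7 * np.pi

def eps_r(E):                               # assumed Kerr-like permittivity
    return 2.0 + 0.5 * E * E                # >= 2 everywhere, so the chosen
                                            # Courant number stays stable

Ex = np.zeros(nz)                           # E on the integer grid
Hy = np.zeros(nz - 1)                       # H on the staggered half grid
for n in range(nt):
    Hy += dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])
    # The permittivity in the E-update is evaluated from the local field,
    # which is where the nonlinearity enters the leapfrog scheme.
    Ex[1:-1] += dt / (eps0 * eps_r(Ex[1:-1]) * dz) * (Hy[1:] - Hy[:-1])
    Ex[100] += np.exp(-((n - 60) / 15.0) ** 2)   # soft Gaussian source

print(np.max(np.abs(Ex)))                   # field remains bounded
```

    Hysteretic magnetic models such as Hodgdon's would replace the algebraic eps(E) with a differential update of B(H) carrying memory between time steps.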

  8. Code Generation for Protocols from CPN models Annotated with Pragmatics

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael; Kindler, Ekkart

    ... modelling languages, MDE further has the advantage that models are amenable to model checking, which allows key behavioural properties of the software design to be verified. The combination of formally verified models and automated code generation contributes to a high degree of assurance that the resulting software implementation satisfies the properties verified for the model. Coloured Petri Nets (CPNs) have been widely used to model and verify protocol software, but limited work exists on using CPN models of protocol software as a basis for automated code generation. In this report, we present an approach for generating protocol software from a restricted class of CPN models. The class of CPN models considered aims at being descriptive in that the models are intended to be helpful in understanding and conveying the operation of the protocol. At the same time, a descriptive model is close to a verifiable version...

  9. Fusion safety codes International modeling with MELCOR and ATHENA- INTRA

    CERN Document Server

    Marshall, T; Topilski, L; Merrill, B

    2002-01-01

    For a number of years, the world fusion safety community has been involved in benchmarking their safety analysis codes against experiment data to support regulatory approval of a next step fusion device. This paper discusses the benchmarking of two prominent fusion safety thermal-hydraulic computer codes. The MELCOR code was developed in the US for fission severe accident safety analyses and has been modified for fusion safety analyses. The ATHENA code is a multifluid version of the US-developed RELAP5 code that is also widely used for fusion safety analyses. The ENEA Fusion Division uses ATHENA in conjunction with the INTRA code for its safety analyses. The INTRA code was developed in Germany and predicts containment building pressures, temperatures and fluid flow. ENEA employs the French-developed ISAS system to couple ATHENA and INTRA. This paper provides a brief introduction of the MELCOR and ATHENA-INTRA codes and presents their modeling results for the following breaches of a water cooling line into the...

  10. On the Delay Characteristics for Point-to-Point links using Random Linear Network Coding with On-the-fly Coding Capabilities

    DEFF Research Database (Denmark)

    Tömösközi, Máté; Fitzek, Frank; Roetter, Daniel Enrique Lucani

    2014-01-01

    Video surveillance and similar real-time applications on wireless networks require increased reliability and high performance of the underlying transmission layer. Classical solutions, such as Reed-Solomon codes, increase the reliability, but typically have the negative side-effect of additional ...

  11. Development of Parallel Computing Framework to Enhance Radiation Transport Code Capabilities for Rare Isotope Beam Facility Design

    Energy Technology Data Exchange (ETDEWEB)

    Kostin, Mikhail [Michigan State Univ., East Lansing, MI (United States); Mokhov, Nikolai [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Niita, Koji [Research Organization for Information Science and Technology, Ibaraki-ken (Japan)

    2013-09-25

    A parallel computing framework has been developed to use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran77, Fortran 90 or C. The module is largely independent of the radiation transport codes it is used with, and is connected to the codes by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It is possible to use it with other codes such as PHITS, FLUKA and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations with a saved checkpoint file. The checkpoint facility can be used in single process calculations as well as in the parallel regime. The framework corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where interference from other users is possible.
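    The checkpoint facility described above can be sketched as a generic restart mechanism: tally state is saved after every batch of histories, and a restarted run resumes from the last saved point. The names (`Checkpoint`, `run`) and the Python/pickle implementation are illustrative assumptions, not the actual C++/MPI module interface.

```python
import os
import pickle
import tempfile

class Checkpoint:
    """Minimal checkpoint facility sketch: atomically saves and reloads
    the state of a long-running particle-history loop."""
    def __init__(self, path):
        self.path = path

    def save(self, state):
        # Write atomically: dump to a temp file, then rename over the target,
        # so a crash mid-write never corrupts the previous checkpoint.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
        with os.fdopen(fd, "wb") as f:
            pickle.dump(state, f)
        os.replace(tmp, self.path)

    def load(self):
        if os.path.exists(self.path):
            with open(self.path, "rb") as f:
                return pickle.load(f)
        return None

def run(total_histories, ckpt, batch=1000):
    """Resume from the checkpoint if one exists, then process histories in
    batches, saving a restart point after each batch."""
    state = ckpt.load() or {"done": 0, "tally": 0.0}
    while state["done"] < total_histories:
        n = min(batch, total_histories - state["done"])
        state["tally"] += n * 0.5      # stand-in for transporting n histories
        state["done"] += n
        ckpt.save(state)               # restart point after every batch
    return state
```

A killed run restarted with the same checkpoint path simply continues from the last completed batch rather than from zero.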

  12. LMFBR models for the ORIGEN2 computer code

    International Nuclear Information System (INIS)

    Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.

    1981-10-01

    Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-238U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given
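    The abstract cites the ORIGEN2 flux parameters THERM, RES, and FAST without stating their role. As a hedged sketch (the authoritative definition should be taken from the ORIGEN2 manual), these parameters weight a nuclide's thermal (2200 m/s) cross section, resonance integral, and fast threshold cross section into the single one-group value used in depletion; for an LMFBR spectrum THERM is small and FAST dominates. All numbers below are made up for illustration.

```python
def one_group_xs(sigma_thermal, resonance_integral, sigma_fast,
                 THERM, RES, FAST):
    # Hedged sketch of the ORIGEN2-style spectral fold; consult the
    # ORIGEN2 manual for the exact definitions of THERM, RES, and FAST.
    return THERM * sigma_thermal + RES * resonance_integral + FAST * sigma_fast

# Illustrative (made-up) numbers: in a fast spectrum the large thermal
# cross section contributes almost nothing to the one-group value.
xs_fast_spectrum = one_group_xs(100.0, 50.0, 2.0, THERM=0.0, RES=0.1, FAST=1.0)
```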

  13. CMCpy: Genetic Code-Message Coevolution Models in Python

    Science.gov (United States)

    Becich, Peter J.; Stark, Brian P.; Bhat, Harish S.; Ardell, David H.

    2013-01-01

    Code-message coevolution (CMC) models represent coevolution of a genetic code and a population of protein-coding genes (“messages”). Formally, CMC models are sets of quasispecies coupled together for fitness through a shared genetic code. Although CMC models display plausible explanations for the origin of multiple genetic code traits by natural selection, useful modern implementations of CMC models are not currently available. To meet this need we present CMCpy, an object-oriented Python API and command-line executable front-end that can reproduce all published results of CMC models. CMCpy implements multiple solvers for leading eigenpairs of quasispecies models. We also present novel analytical results that extend and generalize applications of perturbation theory to quasispecies models and pioneer the application of a homotopy method for quasispecies with non-unique maximally fit genotypes. Our results therefore facilitate the computational and analytical study of a variety of evolutionary systems. CMCpy is free open-source software available from http://pypi.python.org/pypi/CMCpy/. PMID:23532367
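    The leading-eigenpair computation at the heart of a quasispecies model can be sketched with power iteration; the bit-string genotype space, fitness values, and per-site mutation kernel below are illustrative, not CMCpy's API.

```python
import numpy as np

def quasispecies_eigenpair(fitness, mu, n_iter=2000):
    """Leading eigenpair of a quasispecies model by power iteration.
    Genotypes are L-bit strings; Q[i, j] is the probability that genotype j
    mutates into genotype i with independent per-site error rate mu."""
    L = int(np.log2(len(fitness)))
    idx = np.arange(len(fitness))
    # Hamming distance between every pair of genotypes
    d = np.array([[bin(int(i) ^ int(j)).count("1") for j in idx] for i in idx])
    Q = (mu ** d) * ((1 - mu) ** (L - d))
    W = Q @ np.diag(fitness)
    # Power iteration for the leading eigenvector (the quasispecies).
    x = np.ones(len(fitness)) / len(fitness)
    for _ in range(n_iter):
        x = W @ x
        x /= x.sum()
    # At mutation-selection equilibrium, mean fitness = leading eigenvalue.
    return float(x @ fitness), x
```

With zero mutation the population concentrates on the fittest genotype; a small error rate pulls the mean fitness below the fitness peak, the basic quasispecies effect.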

  14. WDEC: A Code for Modeling White Dwarf Structure and Pulsations

    Science.gov (United States)

    Bischoff-Kim, Agnès; Montgomery, Michael H.

    2018-05-01

    The White Dwarf Evolution Code (WDEC), written in Fortran, makes models of white dwarf stars. It is fast, versatile, and includes the latest physics. The code evolves hot (∼100,000 K) input models down to a chosen effective temperature by relaxing the models to be solutions of the equations of stellar structure. The code can also be used to obtain g-mode oscillation modes for the models. WDEC has a long history going back to the late 1960s. Over the years, it has been updated and re-packaged for modern computer architectures and has specifically been used in computationally intensive asteroseismic fitting. Generations of white dwarf astronomers and dozens of publications have made use of the WDEC, although the last true instrument paper is the original one, published in 1975. This paper discusses the history of the code, necessary to understand why it works the way it does, details the physics and features in the code today, and points the reader to where to find the code and a user guide.

  15. Dynamic Simulation of Human Gait Model With Predictive Capability.

    Science.gov (United States)

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2018-03-01

    In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control, rather than classical feedback control alone, which acts only on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate the kinematic output close to experimental data.
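    The receding-horizon idea the abstract describes can be illustrated with a minimal unconstrained linear MPC on a double integrator: an internal model predicts the output N steps ahead, the input sequence minimizing predicted tracking error is found by least squares, and only the first input is applied. This is a generic sketch with an illustrative plant, not the paper's nine-DOF gait model.

```python
import numpy as np

def mpc_step(A, B, x0, ref, N=10, rho=1e-2):
    """One receding-horizon step: predict X = F x0 + G U over N steps and
    return the first input of the least-squares-optimal sequence."""
    n, m = B.shape
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    R = np.tile(ref, N)
    # Minimize ||G U + F x0 - R||^2 + rho ||U||^2 over the input sequence U.
    H = G.T @ G + rho * np.eye(N * m)
    U = np.linalg.solve(H, G.T @ (R - F @ x0))
    return U[:m]          # apply only the first input (receding horizon)

# Closed loop: drive a double-integrator "plant" to the origin.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
x = np.array([1.0, 0.0])
for _ in range(100):
    u = mpc_step(A, B, x, ref=np.zeros(2))
    x = A @ x + B @ u
```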

  16. Improvement of a combustion model in MELCOR code

    International Nuclear Information System (INIS)

    Ogino, Masao; Hashimoto, Takashi

    1999-01-01

    NUPEC has been improving a hydrogen combustion model in MELCOR code for severe accident analysis. In the proposed combustion model, the flame velocity in a node was predicted using five different flame front shapes of fireball, prism, bubble, spherical jet, and plane jet. For validation of the proposed model, the results of the Battelle multi-compartment hydrogen combustion test were used. The selected test cases for the study were Hx-6, 13, 14, 20 and Ix-2 which had two, three or four compartments under homogeneous hydrogen concentration of 5 to 10 vol%. The proposed model could predict well the combustion behavior in multi-compartment containment geometry on the whole. MELCOR code, incorporating the present combustion model, can simulate combustion behavior during severe accident with acceptable computing time and some degree of accuracy. The applicability study of the improved MELCOR code to the actual reactor plants will be further continued. (author)

  17. NRC model simulations in support of the hydrologic code intercomparison study (HYDROCOIN): Level 1-code verification

    International Nuclear Information System (INIS)

    1988-03-01

    HYDROCOIN is an international study for examining ground-water flow modeling strategies and their influence on safety assessments of geologic repositories for nuclear waste. This report summarizes only the combined NRC project teams' simulation efforts on the computer code bench-marking problems. The codes used to simulate these seven problems were SWIFT II, FEMWATER, UNSAT2, USGS-3D, and TOUGH. In general, linear problems involving scalars such as hydraulic head were accurately simulated by both finite-difference and finite-element solution algorithms. Both types of codes produced accurate results even for complex geometries such as intersecting fractures. Difficulties were encountered in solving problems that involved nonlinear effects such as density-driven flow and unsaturated flow. In order to fully evaluate the accuracy of these codes, post-processing of results using particle tracking algorithms and calculating fluxes was examined. This proved very valuable by uncovering disagreements among code results even though the hydraulic-head solutions had been in agreement. 9 refs., 111 figs., 6 tabs
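    The value of flux post-processing can be illustrated on a small 1D verification problem of the Level 1 kind: solve the steady head equation by finite differences, then compute the Darcy flux q = -K dh/dx as a post-processing step and check it against the closed-form series-resistance value. The two-layer column below is an assumed example, not one of the seven benchmark problems.

```python
import numpy as np

def solve_head_and_flux(K, h_left, h_right):
    """Steady 1D flow d/dx(K(x) dh/dx) = 0 on a unit-length column with
    fixed-head boundaries. K holds the conductivity of each of the len(K)
    intervals between the len(K)+1 nodes."""
    n = len(K) + 1
    dx = 1.0 / (n - 1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0            # fixed-head boundary conditions
    b[0], b[-1] = h_left, h_right
    for i in range(1, n - 1):            # interface flux balance at node i
        A[i, i - 1] = K[i - 1]
        A[i, i] = -(K[i - 1] + K[i])
        A[i, i + 1] = K[i]
    h = np.linalg.solve(A, b)
    q = -K * np.diff(h) / dx             # post-processed Darcy flux per interval
    return h, q
```

At steady state the flux must be identical in every interval, and for a two-layer column it equals the head drop divided by the summed resistance of the layers; agreement of head profiles alone would not reveal a flux error.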

  18. JPEG2000 COMPRESSION CODING USING HUMAN VISUAL SYSTEM MODEL

    Institute of Scientific and Technical Information of China (English)

    Xiao Jiang; Wu Chengke

    2005-01-01

    In order to apply the Human Visual System (HVS) model to the JPEG2000 standard, several implementation alternatives are discussed and a new scheme of visual optimization is introduced that modifies the slope of the rate-distortion curve. The novelty is that visual weighting is applied not by lifting the coefficients in the wavelet domain, but through code-stream organization. The scheme retains all the features of Embedded Block Coding with Optimized Truncation (EBCOT), such as resolution-progressive decoding, robustness against bit-error spreading, and compatibility with lossless compression. Performing better than other methods, it keeps the shortest standard codestream and decompression time while supporting VIsual Progressive (VIP) coding.

  19. The ELOCA fuel modelling code: past, present and future

    International Nuclear Information System (INIS)

    Williams, A.F.

    2005-01-01

    ELOCA is the Industry Standard Toolset (IST) computer code for modelling CANDU fuel under the transient coolant conditions typical of an accident scenario. Since its original inception in the early 1970's, the code has undergone continual development and improvement. The code now embodies much of the knowledge and experience of fuel behaviour gained by the Canadian nuclear industry over this period. ELOCA has proven to be a valuable tool for the safety analyst, and continues to be used extensively to support the licensing cases of CANDU reactors. This paper provides a brief and much simplified view of this development history, its current status, and plans for future development. (author)

  20. Hamming Code Based Watermarking Scheme for 3D Model Verification

    Directory of Open Access Journals (Sweden)

    Jen-Tse Wang

    2014-01-01

    Due to the explosive growth of the Internet and the maturing of 3D hardware techniques, protecting 3D objects is becoming an increasingly important issue. In this paper, a public Hamming code based fragile watermarking technique is proposed for 3D object verification. An adaptive watermark is generated from each cover model by using the Hamming code technique. A simple least significant bit (LSB) substitution technique is employed for watermark embedding. In the extraction stage, the Hamming code based watermark can be verified by using the Hamming code check, without embedding any verification information. Experimental results show that 100% of the vertices of the cover model can be watermarked, extracted, and verified. They also show that the proposed method improves security and achieves low distortion of the stego object.
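    The scheme's three stages (Hamming encoding, LSB embedding in quantized vertex coordinates, syndrome verification) can be sketched as follows. The quantization scale and the embedding layout (one codeword bit per coordinate) are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

# Standard Hamming(7,4) generator and parity-check matrices
# (codeword layout p1 p2 d1 p3 d2 d3 d4).
G = np.array([[1, 1, 0, 1],   # p1 = d1 + d2 + d4
              [1, 0, 1, 1],   # p2 = d1 + d3 + d4
              [1, 0, 0, 0],   # d1
              [0, 1, 1, 1],   # p3 = d2 + d3 + d4
              [0, 1, 0, 0],   # d2
              [0, 0, 1, 0],   # d3
              [0, 0, 0, 1]])  # d4
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(msg4):
    return (G @ np.asarray(msg4)) % 2

def syndrome(code7):
    # Zero syndrome == intact; otherwise it encodes the flipped bit position.
    return (H @ np.asarray(code7)) % 2

def embed(coords, code7, scale=1024):
    """LSB substitution of one codeword bit into each of the first 7
    quantized coordinates."""
    q = np.round(np.asarray(coords) * scale).astype(np.int64)
    q[:7] = (q[:7] & ~np.int64(1)) | code7
    return q / scale

def extract(stego, scale=1024):
    return np.round(np.asarray(stego) * scale).astype(np.int64)[:7] & 1
```

Verification needs no side information: any single flipped LSB yields a nonzero syndrome, while the embedding distortion stays below two quantization steps.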

  1. Modeling Guidelines for Code Generation in the Railway Signaling Context

    Science.gov (United States)

    Ferrari, Alessio; Bacherini, Stefano; Fantechi, Alessandro; Zingoni, Niccolo

    2009-01-01

    Modeling guidelines constitute one of the fundamental cornerstones for Model Based Development. Their relevance is essential when dealing with code generation in the safety-critical domain. This article presents the experience of a railway signaling systems manufacturer on this issue. Introduction of Model-Based Development (MBD) and code generation in the industrial safety-critical sector created a crucial paradigm shift in the development process of dependable systems. While traditional software development focuses on the code, with MBD practices the focus shifts to model abstractions. The change has fundamental implications for safety-critical systems, which still need to guarantee a high degree of confidence also at code level. Usage of the Simulink/Stateflow platform for modeling, which is a de facto standard in control software development, does not ensure by itself production of high-quality dependable code. This issue has been addressed by companies through the definition of modeling rules imposing restrictions on the usage of design tools components, in order to enable production of qualified code. The MAAB Control Algorithm Modeling Guidelines (MathWorks Automotive Advisory Board)[3] is a well established set of publicly available rules for modeling with Simulink/Stateflow. This set of recommendations has been developed by a group of OEMs and suppliers of the automotive sector with the objective of enforcing and easing the usage of the MathWorks tools within the automotive industry. The guidelines were published in 2001 and afterwards revised in 2007 in order to integrate some additional rules developed by the Japanese division of MAAB [5]. The scope of the current edition of the guidelines ranges from model maintainability and readability to code generation issues. The rules are conceived as a reference baseline and therefore they need to be tailored to comply with the characteristics of each industrial context. Customization of these

  2. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC).

    Energy Technology Data Exchange (ETDEWEB)

    Schultz, Peter Andrew

    2011-12-01

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and the constitutive models they inform or generate. This Report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum scale modeling and simulation (M&S), and a plan to incrementally incorporate effective V&V into subcontinuum scale M&S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum scale phenomena.

  3. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC)

    International Nuclear Information System (INIS)

    Schultz, Peter Andrew

    2011-01-01

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and the constitutive models they inform or generate. This Report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum scale modeling and simulation (M&S), and a plan to incrementally incorporate effective V&V into subcontinuum scale M&S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum scale phenomena.

  4. Long-term predictive capability of erosion models

    Science.gov (United States)

    Veerabhadra, P.; Buckley, D. H.

    1983-01-01

    A brief overview is presented of long-term cavitation and liquid impingement erosion and of the modeling methods proposed by different investigators, including the curve-fit approach. A table was prepared to highlight the number of variables necessary for each model in order to compute the erosion-versus-time curves. A power law relation based on the average erosion rate is suggested which may solve several modeling problems.
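    A power-law relation of the suggested kind reduces a long-term erosion-versus-time curve to two parameters, which can be fitted by linear least squares in log-log space. The data below are synthetic, generated from a known law purely as a self-check; they are not measurements from the paper.

```python
import numpy as np

def fit_power_law(t, erosion):
    """Fit E(t) = k * t**n by least squares on log E vs log t, so the whole
    long-term curve is summarized by the pair (k, n)."""
    logt, logE = np.log(t), np.log(erosion)
    n, logk = np.polyfit(logt, logE, 1)   # slope = exponent, intercept = log k
    return np.exp(logk), n

# Synthetic long-term erosion data from a known law as a self-check.
t = np.linspace(1.0, 100.0, 50)
E = 0.8 * t**0.6
k, n = fit_power_law(t, E)
```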

  5. Description of codes and models to be used in risk assessment

    International Nuclear Information System (INIS)

    1991-09-01

    Human health and environmental risk assessments will be performed as part of the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) remedial investigation/feasibility study (RI/FS) activities at the Hanford Site. Analytical and computer encoded numerical models are commonly used during both the remedial investigation (RI) and feasibility study (FS) to predict or estimate the concentration of contaminants at the point of exposure to humans and/or the environment. This document has been prepared to identify the computer codes that will be used in support of RI/FS human health and environmental risk assessments at the Hanford Site. In addition to the CERCLA RI/FS process, it is recommended that these computer codes be used when fate and transport analysis is required for other activities. Additional computer codes may be used for other purposes (e.g., design of tracer tests, location of observation wells, etc.). This document provides guidance for unit managers in charge of RI/FS activities. Use of the same computer codes for all analytical activities at the Hanford Site will promote consistency, reduce the effort required to develop, validate, and implement models to simulate Hanford Site conditions, and expedite regulatory review. The discussion provides a description of how models will likely be developed and utilized at the Hanford Site. It is intended to summarize previous environmental-related modeling at the Hanford Site and provide background for future model development. It also identifies the modeling capabilities that are desirable for the Hanford Site and the codes that were evaluated. The recommendations include the codes proposed to support future risk assessment modeling at the Hanford Site, and provide the rationale for the codes selected. 27 refs., 3 figs., 1 tab

  6. MELMRK 2.0: A description of computer models and results of code testing

    International Nuclear Information System (INIS)

    Wittman, R.S.; Denny, V.; Mertol, A.

    1992-01-01

    An advanced version of the MELMRK computer code has been developed that provides detailed models for conservation of mass, momentum, and thermal energy within relocating streams of molten metallics during meltdown of Savannah River Site (SRS) reactor assemblies. In addition to a mechanistic treatment of transport phenomena within a relocating stream, MELMRK 2.0 retains the MOD1 capability for real-time coupling of the in-depth thermal response of participating assembly heat structure and, further, augments this capability with models for self-heating of relocating melt owing to steam oxidation of metallics and fission product decay power. As was the case for MELMRK 1.0, the MOD2 version offers state-of-the-art numerics for solving coupled sets of nonlinear differential equations. Principal features include application of multi-dimensional Newton-Raphson techniques to accelerate convergence behavior and direct matrix inversion to advance primitive variables from one iterate to the next. Additionally, MELMRK 2.0 provides logical event flags for managing the broad range of code options available for treating such features as (1) coexisting flow regimes, (2) dynamic transitions between flow regimes, and (3) linkages between heatup and relocation code modules. The purpose of this report is to provide a detailed description of the MELMRK 2.0 computer models for melt relocation. Also included are illustrative results for code testing, as well as an integrated calculation for meltdown of a Mark 31a assembly

  7. The Global Modeling Test Bed - Building a New National Capability for Advancing Operational Global Modeling in the United States.

    Science.gov (United States)

    Toepfer, F.; Cortinas, J. V., Jr.; Kuo, W.; Tallapragada, V.; Stajner, I.; Nance, L. B.; Kelleher, K. E.; Firl, G.; Bernardet, L.

    2017-12-01

    NOAA develops, operates, and maintains an operational global modeling capability for weather, subseasonal and seasonal prediction for the protection of life and property and fostering the US economy. In order to substantially improve the overall performance and accelerate advancements of the operational modeling suite, NOAA is partnering with NCAR to design and build the Global Modeling Test Bed (GMTB). The GMTB has been established to provide a platform and a capability for researchers to contribute to this advancement, primarily through the development of physical parameterizations needed to improve operational NWP. The strategy to achieve this goal relies on effectively leveraging global expertise through a modern collaborative software development framework. This framework consists of a repository of vetted and supported physical parameterizations known as the Common Community Physics Package (CCPP), a common well-documented interface known as the Interoperable Physics Driver (IPD) for combining schemes into suites and for their configuration and connection to dynamic cores, and an open evidence-based governance process for managing the development and evolution of CCPP. In addition, a physics test harness designed to work within this framework has been established in order to facilitate easier like-to-like comparison of physics advancements. This paper will present an overview of the design of the CCPP and test platform. Additionally, it will present an overview of new opportunities for physics developers to engage in the process, from implementing code for CCPP/IPD compliance to testing their developments within an operational-like software environment. In addition, insight will be given as to how development gets elevated to CCPP-supported status, the precursor to broad availability and use within operational NWP. An overview of how the GMTB can be expanded to support other global or regional modeling capabilities will also be presented.

  8. The drift flux model in the ASSERT subchannel code

    International Nuclear Information System (INIS)

    Carver, M.B.; Judd, R.A.; Kiteley, J.C.; Tahir, A.

    1987-01-01

    The ASSERT subchannel code has been developed specifically to model flow and phase distributions within CANDU fuel bundles. ASSERT uses a drift-flux model that permits the phases to have unequal velocities, and can thus model phase separation tendencies that may occur in horizontal flow. The basic principles of ASSERT are outlined, and computed results are compared against data from various experiments for validation purposes. The paper concludes with an example of the use of the code to predict critical heat flux in CANDU geometries
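    The core drift-flux closure that codes of this type build on can be sketched in one line: the gas phase travels at C0*j + Vgj rather than at the mixture superficial velocity j, so the void fraction follows directly. The distribution parameter C0 and drift velocity Vgj below are illustrative placeholders, not ASSERT's correlations.

```python
def void_fraction(j_g, j_l, C0=1.13, Vgj=0.2):
    """Drift-flux void fraction: alpha = j_g / (C0 * j + Vgj), where
    j = j_g + j_l is the total superficial velocity. C0 captures the
    radial distribution of void and velocity; Vgj is the local drift
    of the gas relative to the mixture (values here are illustrative)."""
    j = j_g + j_l
    return j_g / (C0 * j + Vgj)
```

In the homogeneous no-slip limit (C0 = 1, Vgj = 0) this reduces to the volumetric quality j_g / j; with drift, the gas moves faster and the predicted void fraction drops below that value.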

  9. The Creation and Use of an Analysis Capability Maturity Model (trademark) (ACMM)

    National Research Council Canada - National Science Library

    Covey, R. W; Hixon, D. J

    2005-01-01

    .... Capability Maturity Models (trademark) (CMMs) are being used in several intellectual endeavors, such as software engineering, software acquisition, and systems engineering. This Analysis CMM (ACMM...

  10. Fuel rod modelling during transients: The TOUTATIS code

    International Nuclear Information System (INIS)

    Bentejac, F.; Bourreau, S.; Brochard, J.; Hourdequin, N.; Lansiart, S.

    2001-01-01

    The TOUTATIS code is devoted to simulating local PCI phenomena, in conjunction with the METEOR code for the global behaviour of the fuel rod. More specifically, the objective of TOUTATIS is to evaluate the mechanical stresses on the cladding during a power transient, and thus predict its behaviour in terms of stress corrosion cracking. Built upon the finite element computation code CASTEM 2000, TOUTATIS is a set of modules written in a macro language. The aim of this paper is to present both code modules: the axisymmetric two-dimensional module, which models a single unfragmented pellet; and the three-dimensional module, which models a radially fragmented pellet. Having described the boundary conditions and the algorithms used, the application is illustrated by: a short presentation of the performance of the two-dimensional axisymmetric modelling as well as its limits; and the enhancement due to three-dimensional modelling, demonstrated through sensitivity studies on the geometry, in this case the pellet height/diameter ratio. Finally, we show the ease of development inherent to the CASTEM 2000 system by describing how the modelling was enhanced to allow axial (horizontal) cracking of the pellet. In conclusion, the future improvements planned for the code are described. (author)

  11. On the predictive capabilities of multiphase Darcy flow models

    KAUST Repository

    Icardi, Matteo; Prudhomme, Serge

    2016-01-01

    Darcy's law is a widely used model and the limit of its validity is fairly well known. When the flow is sufficiently slow and the porosity relatively homogeneous and low, Darcy's law is the homogenized equation arising from the Stokes and Navier-Stokes equations and depends on a single effective parameter (the absolute permeability). However when the model is extended to multiphase flows, the assumptions are much more restrictive and less realistic. Therefore it is often used in conjunction with empirical models (such as relative permeability and capillary pressure curves), derived usually from phenomenological speculations and experimental data fitting. In this work, we present the results of a Bayesian calibration of a two-phase flow model, using high-fidelity DNS numerical simulation (at the pore-scale) in a realistic porous medium. These reference results have been obtained from a Navier-Stokes solver coupled with an explicit interphase-tracking scheme. The Bayesian inversion is performed on a simplified 1D model in Matlab by using adaptive spectral method. Several data sets are generated and considered to assess the validity of this 1D model.
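    The workflow of calibrating an empirical closure against higher-fidelity reference data can be sketched with a toy relative-permeability model and random-walk Metropolis sampling. The closure form, noise level, prior range, and synthetic "DNS" observations below are assumptions for illustration; the paper uses a 1D two-phase model and an adaptive spectral method instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(n, S):
    # Toy relative-permeability closure k_r(S) = S**n; the exponent n plays
    # the role of the empirical parameter calibrated against pore-scale data.
    return S ** n

# Synthetic 'high-fidelity' observations with noise; true exponent 2.5.
S_obs = np.linspace(0.1, 0.9, 20)
y_obs = model(2.5, S_obs) + 0.01 * rng.standard_normal(S_obs.size)

def log_post(n, sigma=0.01):
    if not (0.5 < n < 6.0):          # flat prior on a plausible range
        return -np.inf
    r = y_obs - model(n, S_obs)
    return -0.5 * np.sum(r**2) / sigma**2

# Random-walk Metropolis sampling of the posterior over the exponent.
n_cur, lp_cur = 1.0, log_post(1.0)
samples = []
for _ in range(5000):
    n_prop = n_cur + 0.1 * rng.standard_normal()
    lp_prop = log_post(n_prop)
    if np.log(rng.random()) < lp_prop - lp_cur:
        n_cur, lp_cur = n_prop, lp_prop
    samples.append(n_cur)
posterior_mean = np.mean(samples[1000:])
```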


  13. Lattice Boltzmann model capable of mesoscopic vorticity computation

    Science.gov (United States)

    Peng, Cheng; Guo, Zhaoli; Wang, Lian-Ping

    2017-11-01

    It is well known that standard lattice Boltzmann (LB) models allow the strain-rate components to be computed mesoscopically (i.e., through the local particle distributions) and as such possess a second-order accuracy in strain rate. This is one of the appealing features of the lattice Boltzmann method (LBM) which is of only second-order accuracy in hydrodynamic velocity itself. However, no known LB model can provide the same quality for vorticity and pressure gradients. In this paper, we design a multiple-relaxation time LB model on a three-dimensional 27-discrete-velocity (D3Q27) lattice. A detailed Chapman-Enskog analysis is presented to illustrate all the necessary constraints in reproducing the isothermal Navier-Stokes equations. The remaining degrees of freedom are carefully analyzed to derive a model that accommodates mesoscopic computation of all the velocity and pressure gradients from the nonequilibrium moments. This way of vorticity calculation naturally ensures a second-order accuracy, which is also proven through an asymptotic analysis. We thus show, with enough degrees of freedom and appropriate modifications, the mesoscopic vorticity computation can be achieved in LBM. The resulting model is then validated in simulations of a three-dimensional decaying Taylor-Green flow, a lid-driven cavity flow, and a uniform flow passing a fixed sphere. Furthermore, it is shown that the mesoscopic vorticity computation can be realized even with single relaxation parameter.
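    The mesoscopic strain-rate extraction that the paper generalizes to vorticity can be sketched for the standard D2Q9 BGK case (lattice units, dt = 1): the strain rate is read off locally from the second moment of the non-equilibrium distributions, with no finite differences of the velocity field. The D3Q27 MRT construction in the paper is more involved; this is only the well-known second-order baseline.

```python
import numpy as np

# D2Q9 lattice: rest velocity, 4 axis directions, 4 diagonals.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
cs2 = 1.0 / 3.0                      # lattice speed of sound squared

def feq(rho, u):
    """Second-order equilibrium distributions."""
    cu = c @ u
    return w * rho * (1 + cu/cs2 + cu**2 / (2 * cs2**2) - (u @ u) / (2 * cs2))

def strain_rate_from_fneq(f, rho, u, tau):
    """BGK relation S_ab = -sum_i f_i^neq c_ia c_ib / (2 rho cs2 tau):
    the strain rate comes from the local non-equilibrium moments alone."""
    fneq = f - feq(rho, u)
    Pi_neq = np.einsum('i,ia,ib->ab', fneq, c, c)
    return -Pi_neq / (2 * rho * cs2 * tau)
```

A quick self-check: build distributions whose non-equilibrium part corresponds (via the first-order Chapman-Enskog form) to a prescribed shear, and verify the moment formula recovers it.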

  14. Computable general equilibrium model fiscal year 2013 capability development report

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-17

    This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC refined the treatment of the labor market and performed tests with the model to examine the properties of the solutions it computes. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine the simulation properties in more detail.

  15. Uncertainty quantification's role in modeling and simulation planning, and credibility assessment through the predictive capability maturity model

    Energy Technology Data Exchange (ETDEWEB)

    Rider, William J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Witkowski, Walter R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mousseau, Vincent Andrew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-04-13

    The importance of credible, trustworthy numerical simulations is obvious especially when using the results for making high-consequence decisions. Determining the credibility of such numerical predictions is much more difficult and requires a systematic approach to assessing predictive capability, associated uncertainties and overall confidence in the computational simulation process for the intended use of the model. This process begins with an evaluation of the computational modeling of the identified, important physics of the simulation for its intended use. This is commonly done through a Phenomena Identification Ranking Table (PIRT). Then an assessment of the evidence basis supporting the ability to computationally simulate these physics can be performed using various frameworks such as the Predictive Capability Maturity Model (PCMM). Several critical activities follow in the areas of code and solution verification, validation, and uncertainty quantification; these are described in detail in the following sections. Here, we introduce the subject matter for general applications but specifics are given for the failure prediction project. In addition, the first task that must be completed in the verification & validation procedure is to perform a credibility assessment to fully understand the requirements and limitations of the current computational simulation capability for the specific application's intended use. The PIRT and PCMM are tools used at Sandia National Laboratories (SNL) to provide a consistent manner to perform such an assessment. Ideally, all stakeholders should be represented and contribute to perform an accurate credibility assessment. PIRTs and PCMMs are both described in brief detail below and the resulting assessments for an example project are given.

  16. A predictive transport modeling code for ICRF-heated tokamaks

    International Nuclear Information System (INIS)

    Phillips, C.K.; Hwang, D.Q.

    1992-02-01

    In this report, a detailed description of the physics included in the WHIST/RAZE package, as well as a few illustrative examples of the capabilities of the package, is presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be discussed in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package in section 5.

  17. Capabilities For Modelling Of Conversion Processes In Life Cycle Assessment

    DEFF Research Database (Denmark)

    Damgaard, Anders; Zarrin, Bahram; Tonini, Davide

    considering how the biochemical parameters change through a process chain. A good example of this is bio-refinery processes where different residual biomass products are converted through different steps into the final energy product. Here it is necessary to know the stoichiometry of the different products...... little focus on the chemical composition of the functional flows, as flows in the models have mainly been tracked on a mass basis, as emphasis was the function of the product and not the chemical composition of said product. Conversely, in modelling of environmental technologies, such as wastewater...... varies considerably. To address this, EASETECH (Clavreul et al., 2014) was developed which integrates a matrix approach for the reference flow which contains the full chemical composition for different material fractions, and also the number of different material fractions present in the overall mass...

  18. Capabilities for modelling of conversion processes in LCA

    DEFF Research Database (Denmark)

    Damgaard, Anders; Zarrin, Bahram; Tonini, Davide

    2015-01-01

    substances themselves change through a process chain. A good example of this is bio-refinery processes where different residual biomass products are converted through different steps into the final energy product. Here it is necessary to know the stoichiometry of the different products going in, and being...... little focus on the chemical composition of the functional flows, as flows in the models have mainly been tracked on a mass basis, as focus was on the function of the product and not the chemical composition of said product. Conversely modelling environmental technologies, such as wastewater treatment......, EASETECH (Clavreul et al., 2014) was developed which integrates a matrix approach for the functional unit which contains the full chemical composition for different material fractions, and also the number of different material fractions present in the overall mass being handled. These chemical substances...
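
    The matrix approach mentioned in these two abstracts can be sketched in a few lines. The example below is a hypothetical illustration, not EASETECH's actual data model: a flow is stored as a material-fraction-by-chemical-parameter matrix, so a conversion step can apply transfer coefficients per substance rather than to total mass only. All fraction names, parameters, and coefficients are invented.

```python
import numpy as np

# Hypothetical sketch of matrix-based flow bookkeeping: rows are material
# fractions, columns are chemical parameters, entries are masses (kg).
fractions = ["food waste", "paper"]
params = ["TS", "C", "N"]  # total solids, carbon, nitrogen (illustrative)

flow = np.array([
    [30.0, 12.0, 1.0],   # food waste: TS, C, N
    [50.0, 22.0, 0.3],   # paper: TS, C, N
])

# A made-up conversion step: 60% of the carbon in every fraction is
# converted (e.g. degraded to biogas); TS and N pass through unchanged.
transfer = np.array([1.0, 0.4, 1.0])  # per-parameter transfer coefficients
residual = flow * transfer            # broadcasts across both fractions

print("C leaving in residual:", residual[:, 1].sum())  # (12+22)*0.4 = 13.6
```

    Tracking stoichiometry per fraction like this is what lets a chain of conversion steps (as in the bio-refinery example above) carry chemical composition through to the final energy product, instead of tracking mass alone.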

  19. Research Opportunities from Emerging Atmospheric Observing and Modeling Capabilities.

    Science.gov (United States)

    Dabberdt, Walter F.; Schlatter, Thomas W.

    1996-02-01

    The Second Prospectus Development Team (PDT-2) of the U.S. Weather Research Program was charged with identifying research opportunities that are best matched to emerging operational and experimental measurement and modeling methods. The overarching recommendation of PDT-2 is that inputs for weather forecast models can best be obtained through the use of composite observing systems together with adaptive (or targeted) observing strategies employing both in situ and remote sensing. Optimal observing systems and strategies are best determined through a three-part process: observing system simulation experiments, pilot field measurement programs, and model-assisted data sensitivity experiments. Furthermore, the mesoscale research community needs easy and timely access to the new operational and research datasets in a form that can readily be reformatted into existing software packages for analysis and display. The value of these data is diminished to the extent that they remain inaccessible.The composite observing system of the future must combine synoptic observations, routine mobile observations, and targeted observations, as the current or forecast situation dictates. High costs demand fuller exploitation of commercial aircraft, meteorological and navigation [Global Positioning System (GPS)] satellites, and Doppler radar. Single observing systems must be assessed in the context of a composite system that provides complementary information. Maintenance of the current North American rawinsonde network is critical for progress in both research-oriented and operational weather forecasting.Adaptive sampling strategies are designed to improve large-scale and regional weather prediction but they will also improve diagnosis and prediction of flash flooding, air pollution, forest fire management, and other environmental emergencies. Adaptive measurements can be made by piloted or unpiloted aircraft. Rawinsondes can be launched and satellites can be programmed to make

  20. Multi-phase model development to assess RCIC system capabilities under severe accident conditions

    Energy Technology Data Exchange (ETDEWEB)

    Kirkland, Karen Vierow [Texas A & M Univ., College Station, TX (United States); Ross, Kyle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Beeny, Bradley [Texas A & M Univ., College Station, TX (United States); Luthman, Nicholas [Texas A& M Engineering Experiment Station, College Station, TX (United States); Strater, Zachary [Texas A & M Univ., College Station, TX (United States)

    2017-12-23

    The Reactor Core Isolation Cooling (RCIC) System is a safety-related system that provides makeup water for core cooling of some Boiling Water Reactors (BWRs) with a Mark I containment. The RCIC System consists of a steam-driven Terry turbine that powers a centrifugal, multi-stage pump for providing water to the reactor pressure vessel. The Fukushima Dai-ichi accidents demonstrated that the RCIC System can play an important role under accident conditions in removing core decay heat. The unexpectedly sustained, good performance of the RCIC System in the Fukushima reactor demonstrates, firstly, that its capabilities are not well understood, and secondly, that the system has high potential for extended core cooling in accident scenarios. Better understanding and analysis tools would allow for more options to cope with a severe accident situation and to reduce the consequences. The objectives of this project were to develop physics-based models of the RCIC System, incorporate them into a multi-phase code and validate the models. This Final Technical Report details the progress throughout the project duration and the accomplishments.

  1. Universal Regularizers For Robust Sparse Coding and Modeling

    OpenAIRE

    Ramirez, Ignacio; Sapiro, Guillermo

    2010-01-01

    Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical in the success of such models. Based on a codelength minimization interpretation of sparse coding, and using tools from universal coding...
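
    For context, the baseline the paper builds on, sparse coding with a plain l1 sparsity regularizer, can be sketched with the standard ISTA iteration (the paper's contribution is to replace this regularizer with universal ones derived from codelength minimization; the dictionary, signal, and parameters below are synthetic).

```python
import numpy as np

# Minimal l1 sparse coding sketch: minimize 0.5*||x - D a||^2 + lam*||a||_1
# via ISTA (gradient step + soft-thresholding). Synthetic data throughout.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)              # unit-norm dictionary atoms
x = D[:, :3] @ np.array([1.0, -2.0, 0.5])   # signal built from 3 atoms

a = np.zeros(50)
step = 1.0 / np.linalg.norm(D.T @ D, 2)     # 1 / Lipschitz constant
lam = 0.05
for _ in range(500):
    g = a - step * D.T @ (D @ a - x)        # gradient step on the data term
    a = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage

print("nonzero coefficients:", np.count_nonzero(a))
```

    The soft-thresholding line is exactly where the choice of regularizer enters; a different sparsity prior changes that proximal step while the rest of the iteration stays the same.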

  2. Model-Driven Engineering of Machine Executable Code

    Science.gov (United States)

    Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira

    Implementing static analyses of machine-level executable code is labor-intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs that perform static analyses. Further, we report on important lessons learned about the benefits and drawbacks of the following technologies: using the Scala programming language as the target of code generation, using XML Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint-like tool. Finally, we report on the use of Prolog for writing model transformations.

  3. An improved thermal model for the computer code NAIAD

    International Nuclear Information System (INIS)

    Rainbow, M.T.

    1982-12-01

    An improved thermal model, based on the concept of heat slabs, has been incorporated as an option into the thermal hydraulic computer code NAIAD. The heat slabs are one-dimensional thermal conduction models with temperature independent thermal properties which may be internal and/or external to the fluid. Thermal energy may be added to or removed from the fluid via heat slabs and passed across the external boundary of external heat slabs at a rate which is a linear function of the external surface temperatures. The code input for the new option has been restructured to simplify data preparation. A full description of current input requirements is presented
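
    A heat-slab option of the kind described can be sketched as a 1-D explicit finite-difference update with temperature-independent properties and a boundary heat-transfer rate that is linear in the surface temperature. All coefficients below are invented for illustration and are not NAIAD's.

```python
import numpy as np

# 1-D heat-slab sketch: interior conduction, insulated inner face, and an
# external boundary losing heat at a rate linear in surface temperature.
alpha = 1e-5         # thermal diffusivity (m^2/s), temperature-independent
dx, dt = 1e-3, 1e-2  # grid spacing (m) and time step (s)
h_lin = 0.02         # made-up lumped linear boundary coefficient (1/s)
T_env = 300.0        # external temperature (K)

T = np.full(11, 400.0)   # initial slab temperature (K)
r = alpha * dt / dx**2   # explicit-scheme stability requires r <= 0.5
assert r <= 0.5

for _ in range(1000):    # advance 10 s
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])  # conduction
    Tn[0] = T[0] + 2 * r * (T[1] - T[0])                     # insulated face
    Tn[-1] = T[-1] + 2 * r * (T[-2] - T[-1]) \
             + dt * h_lin * (T_env - T[-1])                  # linear boundary
    T = Tn

print("surface temperature after 10 s (K):", round(T[-1], 1))
```

    The last line of the update is the "linear function of the external surface temperature" mentioned in the abstract; everything else is plain one-dimensional conduction.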

  4. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    Directory of Open Access Journals (Sweden)

    W. Bastiaan Kleijn

    2005-06-01

    Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.

  5. Data model description for the DESCARTES and CIDER codes

    International Nuclear Information System (INIS)

    Miley, T.B.; Ouderkirk, S.J.; Nichols, W.E.; Eslinger, P.W.

    1993-01-01

    The primary objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation dose that individuals could have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. One of the major objectives of the HEDR Project is to develop several computer codes to model the airborne releases, transport, and environmental accumulation of radionuclides resulting from Hanford operations from 1944 through 1972. In July 1992, the HEDR Project Manager determined that the computer codes being developed (DESCARTES, calculation of environmental accumulation from airborne releases, and CIDER, dose calculations from environmental accumulation) were not sufficient to create accurate models. A team of HEDR staff members developed a plan to assure that the computer codes would meet HEDR Project goals. The plan consists of five tasks: (1) code requirements definition, (2) scoping studies, (3) design specifications, (4) benchmarking, and (5) data modeling. This report defines the data requirements for the DESCARTES and CIDER codes.

  6. COCOA code for creating mock observations of star cluster models

    Science.gov (United States)

    Askar, Abbas; Giersz, Mirek; Pych, Wojciech; Dalessandro, Emanuele

    2018-04-01

    We introduce and present results from the COCOA (Cluster simulatiOn Comparison with ObservAtions) code that has been developed to create idealized mock photometric observations using results from numerical simulations of star cluster evolution. COCOA is able to present the output of realistic numerical simulations of star clusters carried out using Monte Carlo or N-body codes in a way that is useful for direct comparison with photometric observations. In this paper, we describe the COCOA code and demonstrate its different applications by utilizing globular cluster (GC) models simulated with the MOCCA (MOnte Carlo Cluster simulAtor) code. COCOA is used to synthetically observe these different GC models with optical telescopes, perform point spread function photometry, and subsequently produce observed colour-magnitude diagrams. We also use COCOA to compare the results from synthetic observations of a cluster model that has the same age and metallicity as the Galactic GC NGC 2808 with observations of the same cluster carried out with a 2.2 m optical telescope. We find that COCOA can effectively simulate realistic observations and recover photometric data. COCOA has numerous scientific applications that may be helpful for both theoreticians and observers who work on star clusters. Plans for further improving and developing the code are also discussed in this paper.

  7. NASA Air Force Cost Model (NAFCOM): Capabilities and Results

    Science.gov (United States)

    McAfee, Julie; Culver, George; Naderi, Mahmoud

    2011-01-01

    NAFCOM is a parametric cost-estimating tool for space hardware. It uses cost estimating relationships (CERs), which correlate historical costs to mission characteristics, to predict new project costs. It is based on historical NASA and Air Force space projects and is intended to be used in the very early phases of a development project. NAFCOM can be used at the subsystem or component level and estimates development and production costs. NAFCOM is applicable to various types of missions (crewed spacecraft, uncrewed spacecraft, and launch vehicles). There are two versions of the model: a restricted government version and a contractor-releasable version.
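
    A cost estimating relationship of the kind NAFCOM uses is commonly a power law fit to historical cost data. The sketch below fits cost = a * mass^b to made-up data points; the numbers are illustrative and are not actual NAFCOM CERs or data.

```python
import numpy as np

# Fit a power-law CER, cost = a * mass^b, by least squares in log space.
mass = np.array([150.0, 400.0, 900.0, 2000.0])  # subsystem dry mass (kg)
cost = np.array([12.0, 25.0, 46.0, 80.0])       # development cost ($M)

# log(cost) = log(a) + b * log(mass); polyfit returns [slope, intercept]
b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)
a = np.exp(log_a)

def cer(m):
    """Predict development cost ($M) for a new subsystem of mass m (kg)."""
    return a * m**b

print(f"cost = {a:.2f} * mass^{b:.2f}")
print(f"predicted cost for a 1200 kg subsystem: {cer(1200):.1f} $M")
```

    An exponent below 1, as in this fit, encodes the economy of scale typical of weight-based CERs: doubling mass less than doubles cost.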

  8. Code-code comparisons of DIVIMP's 'onion-skin model' and the EDGE2D fluid code

    International Nuclear Information System (INIS)

    Stangeby, P.C.; Elder, J.D.; Horton, L.D.; Simonini, R.; Taroni, A.; Matthews, O.F.; Monk, R.D.

    1997-01-01

    In onion-skin modelling (O-SM) of the edge plasma, the cross-field power and particle flows are treated very simply, e.g. as spatially uniform. The validity of O-S modelling requires demonstration that such approximations can still result in reasonable solutions for the edge plasma. This is demonstrated here by comparison of O-SM with full 2D fluid edge solutions generated by the EDGE2D code. The target boundary conditions for the O-SM are taken from the EDGE2D output and the complete O-SM solutions are then compared with the EDGE2D ones. Agreement is generally within 20% for n_e, T_e, T_i and parallel particle flux density Γ for the medium and high recycling JET cases examined, and somewhat less good for a strongly detached CMOD example. (orig.)

  9. A review of MAAP4 code structure and core T/H model

    International Nuclear Information System (INIS)

    Song, Yong Mann; Park, Soo Yong

    1998-03-01

    The modular accident analysis program (MAAP) version 4 is a computer code that can simulate the response of LWR plants during severe accident sequences; it includes models for all of the important phenomena which might occur during such sequences. In this report, the MAAP4 code structure and the core thermal-hydraulic (T/H) model, which models the T/H behavior of the reactor core and the response of core components during all accident phases involving degraded cores, are reviewed and then reorganized. This reorganization is performed by gathering the related models together under each topic, whose contents and order are the same as in two companion reports on MELCOR and SCDAP/RELAP5 to be published simultaneously. The major purpose of the report is to provide information about the characteristics of the MAAP4 core T/H models for an integrated severe accident computer code development being performed under one of the on-going mid/long-term nuclear development projects. The basic characteristics required of the new integrated severe accident code include: 1) flexible simulation capability of the primary side, secondary side, and the containment under severe accident conditions, 2) detailed plant simulation, 3) convenient user interfaces, 4) high modularization for easy maintenance/improvement, and 5) state-of-the-art model selection. In conclusion, the MAAP4 code appears to be superior on items 3) and 4) but somewhat inferior on items 1) and 2). For item 5), more effort should be made in the future to compare the separate models in detail not only with other codes but also with recent world-wide work. (author). 17 refs., 1 tab., 12 figs

  10. Advanced Electric and Magnetic Material Models for FDTD Electromagnetic Codes

    CERN Document Server

    Poole, Brian R; Nelson, Scott D

    2005-01-01

    The modeling of dielectric and magnetic materials in the time domain is required for pulse power applications, pulsed induction accelerators, and advanced transmission lines. For example, most induction accelerator modules require the use of magnetic materials to provide adequate Volt-sec during the acceleration pulse. These models require hysteresis and saturation to simulate the saturation wavefront in a multipulse environment. In high voltage transmission line applications such as shock or soliton lines the dielectric is operating in a highly nonlinear regime, which requires nonlinear models. Simple 1-D models are developed for fast parameterization of transmission line structures. In the case of nonlinear dielectrics, a simple analytic model describing the permittivity in terms of electric field is used in a 3-D finite difference time domain code (FDTD). In the case of magnetic materials, both rate independent and rate dependent Hodgdon magnetic material models have been implemented into 3-D FDTD codes an...

  11. Assessment of one dimensional reflood model in REFLA/TRAC code

    International Nuclear Information System (INIS)

    Akimoto, Hajime; Ohnuki, Akira; Murao, Yoshio

    1993-12-01

    Post-test calculations for twelve selected SSRTF, SCTF and CCTF tests were performed to assess the predictive capability of the one-dimensional reflood model in the REFLA/TRAC code for core thermal behavior during reflood in a PWR LOCA. Both the core void fraction profile and the clad temperature transients were predicted well by the REFLA/TRAC code, including the parametric effects of core inlet subcooling, core flooding rate, core configuration, core power, system pressure, initial clad temperature and so on. The peak clad temperature was predicted within an error of 50 K. Based on these assessment results, it is verified that core thermal-hydraulic behavior during reflood can be predicted accurately with the REFLA/TRAC code under the various conditions in which reflood may occur in a PWR LOCA. (author)

  12. Capabilities of current wildfire models when simulating topographical flow

    Science.gov (United States)

    Kochanski, A.; Jenkins, M.; Krueger, S. K.; McDermott, R.; Mell, W.

    2009-12-01

    Accurate predictions of the growth, spread and suppression of wild fires rely heavily on the correct prediction of the local wind conditions and the interactions between the fire and the local ambient airflow. Resolving local flows, often strongly affected by topographical features like hills, canyons and ridges, is a prerequisite for accurate simulation and prediction of fire behaviors. In this study, we present the results of high-resolution numerical simulations of the flow over a smooth hill, performed using (1) the NIST WFDS (WUI or Wildland-Urban-Interface version of the FDS or Fire Dynamic Simulator), and (2) the LES version of the NCAR Weather Research and Forecasting (WRF-LES) model. The WFDS model is in the initial stages of development for application to wind flow and fire spread over complex terrain. The focus of the talk is to assess how well simple topographical flow is represented by WRF-LES and the current version of WFDS. If sufficient progress has been made prior to the meeting then the importance of the discrepancies between the predicted and measured winds, in terms of simulated fire behavior, will be examined.

  13. Three-dimensional modeling with finite element codes

    Energy Technology Data Exchange (ETDEWEB)

    Druce, R.L.

    1986-01-17

    This paper describes work done to model magnetostatic field problems in three dimensions. Finite element codes, available at LLNL, and pre- and post-processors were used in the solution of the mathematical model, the output from which agreed well with the experimentally obtained data. The geometry used in this work was a cylinder with ports in the periphery and no current sources in the space modeled. 6 refs., 8 figs.

  14. Code Development for Control Design Applications: Phase I: Structural Modeling

    International Nuclear Information System (INIS)

    Bir, G. S.; Robinson, M.

    1998-01-01

    The design of integrated controls for a complex system like a wind turbine relies on a system model in an explicit format, e.g., state-space format. Current wind turbine codes focus on turbine simulation and not on system characterization, which is desired for controls design as well as applications like operating turbine model analysis, optimal design, and aeroelastic stability analysis. This paper reviews structural modeling that comprises three major steps: formation of component equations, assembly into system equations, and linearization
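
    The last of the three steps described, linearization into an explicit state-space format, can be sketched numerically: given assembled system equations x' = f(x, u), central differences about an operating point yield the A and B matrices that controls design needs. The one-state "rotor" model below is invented purely for illustration and is not the paper's turbine model.

```python
import numpy as np

def f(x, u):
    # Made-up nonlinear rotor dynamics: quadratic aero drag plus control torque.
    return np.array([-0.1 * x[0]**2 + 0.5 * u[0]])

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference Jacobians A = df/dx, B = df/du at (x0, u0)."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

A, B = linearize(f, x0=np.array([2.0]), u0=np.array([0.0]))
print(A, B)  # A ≈ [[-0.4]] (d/dx of -0.1*x^2 at x=2), B ≈ [[0.5]]
```

    The resulting (A, B) pair is the state-space form x' = A x + B u valid near the operating point, which is exactly what aeroelastic stability analysis and controller synthesis tools consume.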

  15. CFD modeling of combustion processes using KIVA3V Code with partially stirred reactor model for turbulence-combustion interactions

    International Nuclear Information System (INIS)

    Jarnicki, R.; Sobiesiak, A.

    2002-01-01

    In order to solve the averaged conservation equations for turbulent reacting flow, one is faced with the task of specifying the averaged chemical reaction rate. This is due to the influence of turbulence on the mean reaction rates that appear in the Reynolds-averaged species concentration equations. In order to investigate the capabilities of the Partially Stirred Reactor (PaSR) combustion model, CFD modeling of two very different combustion processes was performed using the KIVA3V code with the PaSR model for turbulence-combustion interactions. The modeling results were compared with experimental data.
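
    The core idea of a PaSR closure can be sketched in a few lines (illustrative constants, not KIVA3V's implementation): the mean reaction rate is the kinetic rate scaled by the fraction of each computational cell treated as perfectly mixed, built from chemical and mixing time scales.

```python
# Sketch of the PaSR closure: kappa = tau_chem / (tau_chem + tau_mix)
# scales the laminar (kinetic) rate to give a turbulence-limited mean rate.
# All numbers below are made up for illustration.

def pasr_rate(laminar_rate, tau_chem, tau_mix):
    """Mean reaction rate under a PaSR closure."""
    kappa = tau_chem / (tau_chem + tau_mix)  # reacting fraction of the cell
    return kappa * laminar_rate

# Fast mixing: kappa near 1, mean rate approaches the kinetic rate.
print(pasr_rate(100.0, tau_chem=1e-3, tau_mix=1e-5))
# Slow mixing: turbulence limits the mean rate well below the kinetic rate.
print(pasr_rate(100.0, tau_chem=1e-3, tau_mix=1e-2))
```

    This is why a PaSR model interpolates between kinetics-limited and mixing-limited combustion, which is what makes it applicable to the "two very different combustion processes" compared in the study.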

  16. Expanding the modeling capabilities of the cognitive environment simulation

    International Nuclear Information System (INIS)

    Roth, E.M.; Mumaw, R.J.; Pople, H.E. Jr.

    1991-01-01

    The Nuclear Regulatory Commission has been conducting a research program to develop more effective tools to model the cognitive activities that underlie intention formation during nuclear power plant (NPP) emergencies. Under this program an artificial intelligence (AI) computer simulation called Cognitive Environment Simulation (CES) has been developed. CES simulates the cognitive activities involved in responding to a NPP accident situation. It is intended to provide an analytic tool for predicting likely human responses, and the kinds of errors that can plausibly arise under different accident conditions to support human reliability analysis. Recently CES was extended to handle a class of interfacing loss of coolant accidents (ISLOCAs). This paper summarizes the results of these exercises and describes follow-on work currently underway

  17. Inclusion of models to describe severe accident conditions in the fuel simulation code DIONISIO

    Energy Technology Data Exchange (ETDEWEB)

    Lemes, Martín; Soba, Alejandro [Sección Códigos y Modelos, Gerencia Ciclo del Combustible Nuclear, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina); Daverio, Hernando [Gerencia Reactores y Centrales Nucleares, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina); Denis, Alicia [Sección Códigos y Modelos, Gerencia Ciclo del Combustible Nuclear, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina)

    2017-04-15

    The simulation of fuel rod behavior is a complex task that demands not only accurate models to describe the numerous phenomena occurring in the pellet, cladding and internal rod atmosphere but also an adequate interconnection between them. In recent years several models have been incorporated into the DIONISIO code with the purpose of increasing its precision and reliability. After the regrettable events at Fukushima, the need for codes capable of simulating nuclear fuels under accident conditions has come to the fore. Heat removal occurs in a quite different way than during normal operation, and this fact determines a completely new set of conditions for the fuel materials. A detailed description of the different regimes the coolant may exhibit in such a wide variety of scenarios requires a thermal-hydraulic formulation not suitable for inclusion in a fuel performance code; moreover, a number of reliable and well-known codes already perform this task. Nevertheless, keeping in mind the purpose of building a code focused on fuel behavior, a subroutine was developed for the DIONISIO code that performs a simplified analysis of the coolant in a PWR, restricted to the more representative situations, and provides the fuel simulation with the boundary conditions necessary to reproduce accident situations. In the present work this subroutine is described and the results of different comparisons with experimental data and with thermal-hydraulic codes are offered. It is verified that, in spite of its comparative simplicity, the predictions of this module of DIONISIO do not differ significantly from those of the specific, complex codes.

  18. ORIGEN-ARP 2.00, Isotope Generation and Depletion Code System-Matrix Exponential Method with GUI and Graphics Capability

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Description of program or function: ORIGEN-ARP was developed for the Nuclear Regulatory Commission and the Department of Energy to satisfy a need for an easy-to-use standardized method of isotope depletion/decay analysis for spent fuel, fissile material, and radioactive material. It can be used to solve for spent fuel characterization, isotopic inventory, radiation source terms, and decay heat. This release of ORIGEN-ARP is a standalone code package that contains an updated version of the SCALE-4.4a ORIGEN-S code. It contains a subset of the modules, data libraries, and miscellaneous utilities in SCALE-4.4a. This package is intended for users who do not need the entire SCALE package. ORIGEN-ARP 2.00 (2-12-2002) differs from the previous release ORIGEN-ARP 1.0 (July 2001) in the following ways: 1. The neutron source and energy spectrum routines were replaced with computational algorithms and data from the SOURCES-4B code (RSICC package CCC-661) to provide more accurate spontaneous fission and (alpha,n) neutron sources, and a delayed neutron source capability was added. 2. The printout of the fixed energy group structure photon tables was removed. Gamma sources and spectra are now printed for calculations using the Master Photon Library only. 2 - Methods: ORIGEN-ARP is an automated sequence to perform isotopic depletion/decay calculations using the ARP and ORIGEN-S codes of the SCALE system. The sequence includes the OrigenArp for Windows graphical user interface (GUI) that prepares input for ARP (Automated Rapid Processing) and ORIGEN-S. ARP automatically interpolates cross sections for the ORIGEN-S depletion/decay analysis using enrichment, burnup, and, optionally, moderator density, from a set of libraries generated with the SCALE SAS2 depletion sequence. Library sets for four LWR fuel assembly designs (BWR 8 x 8, PWR 14 x 14, 15 x 15, 17 x 17) are included. The libraries span enrichments from 1.5 to 5 wt% U-235 and burnups of 0 to 60,000 MWD/MTU. Other
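
    The matrix exponential method named in the package title solves the depletion/decay balance dN/dt = A·N as N(t) = exp(At)·N(0). The toy three-nuclide decay chain below (made-up half-lives, naive Taylor-series exponential) illustrates the idea; it is not an ORIGEN-S library, data set, or algorithm.

```python
import numpy as np

def mat_exp(M, terms=60):
    """Naive Taylor series for exp(M); adequate for this small, mild matrix."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

lam1, lam2 = np.log(2) / 8.0, np.log(2) / 2.0  # decay constants (1/day)
A = np.array([
    [-lam1,   0.0, 0.0],  # parent decays away
    [ lam1, -lam2, 0.0],  # daughter fed by parent, decays itself
    [  0.0,  lam2, 0.0],  # stable granddaughter accumulates
])

N0 = np.array([1.0e6, 0.0, 0.0])  # initial atom inventory
t = 8.0                           # days = one parent half-life
N = mat_exp(A * t) @ N0

print("atoms after one parent half-life:", N.round(0))
```

    Because each column of A sums to zero, the total atom count is conserved, and after one parent half-life exactly half of the parent inventory remains, which is a convenient sanity check on the exponential.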

  19. Performance Theories for Sentence Coding: Some Quantitative Models

    Science.gov (United States)

    Aaronson, Doris; And Others

    1977-01-01

    This study deals with the patterns of word-by-word reading times over a sentence when the subject must code the linguistic information sufficiently for immediate verbatim recall. A class of quantitative models is considered that would account for reading times at phrase breaks. (Author/RM)

  20. Modeling of PHWR fuel elements using FUDA code

    International Nuclear Information System (INIS)

    Tripathi, Rahul Mani; Soni, Rakesh; Prasad, P.N.; Pandarinathan, P.R.

    2008-01-01

    The computer code FUDA (Fuel Design Analysis) is used for modeling PHWR fuel bundle operation history and carry out fuel element thermo-mechanical analysis. The radial temperature profile across fuel and sheath, fission gas release, internal gas pressure, sheath stress and strains during the life of fuel bundle are estimated

  1. 28 CFR 36.608 - Guidance concerning model codes.

    Science.gov (United States)

    2010-07-01

    ... Section 36.608 Judicial Administration DEPARTMENT OF JUSTICE NONDISCRIMINATION ON THE BASIS OF DISABILITY BY PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES Certification of State Laws or Local Building... private entity responsible for developing a model code, the Assistant Attorney General may review the...

  2. Code Shift: Grid Specifications and Dynamic Wind Turbine Models

    DEFF Research Database (Denmark)

    Ackermann, Thomas; Ellis, Abraham; Fortmann, Jens

    2013-01-01

    Grid codes (GCs) and dynamic wind turbine (WT) models are key tools to allow increasing renewable energy penetration without challenging security of supply. In this article, the state of the art and the further development of both tools are discussed, focusing on the European and North American e...

  3. EMPIRE-II statistical model code for nuclear reaction calculations

    Energy Technology Data Exchange (ETDEWEB)

    Herman, M [International Atomic Energy Agency, Vienna (Austria)

    2001-12-15

    EMPIRE II is a nuclear reaction code, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be any nucleon or Heavy Ion. The energy range starts just above the resonance region, in the case of a neutron projectile, and extends up to a few hundred MeV for Heavy Ion induced reactions. The code accounts for the major nuclear reaction mechanisms, such as the optical model (SCATB), Multistep Direct (ORION + TRISTAN), NVWY Multistep Compound, and the full featured Hauser-Feshbach model. Heavy Ion fusion cross sections can be calculated within the simplified coupled channels approach (CCFUS). A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers (BARFIT), moments of inertia (MOMFIT), and {gamma}-ray strength functions. Effects of the dynamic deformation of a fast rotating nucleus can be taken into account in the calculations. The results can be converted into the ENDF-VI format using the accompanying code EMPEND. The package contains the full EXFOR library of experimental data. Relevant EXFOR entries are automatically retrieved during the calculations. Plots comparing experimental results with the calculated ones can be produced using the X4TOC4 and PLOTC4 codes, linked to the rest of the system through bash-shell (UNIX) scripts. A graphic user interface written in Tcl/Tk is provided. (author)

  4. Developing Materials Processing to Performance Modeling Capabilities and the Need for Exascale Computing Architectures (and Beyond)

    Energy Technology Data Exchange (ETDEWEB)

    Schraad, Mark William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Physics and Engineering Models; Luscher, Darby Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Advanced Simulation and Computing

    2016-09-06

    Additive Manufacturing techniques are presenting the Department of Energy and the NNSA Laboratories with new opportunities to consider novel component production and repair processes, and to manufacture materials with tailored response and optimized performance characteristics. Additive Manufacturing technologies already are being applied to primary NNSA mission areas, including Nuclear Weapons. These mission areas are adapting to these new manufacturing methods, because of potential advantages, such as smaller manufacturing footprints, reduced needs for specialized tooling, an ability to embed sensing, novel part repair options, an ability to accommodate complex geometries, and lighter weight materials. To realize the full potential of Additive Manufacturing as a game-changing technology for the NNSA’s national security missions, however, significant progress must be made in several key technical areas. In addition to advances in engineering design, process optimization and automation, and accelerated feedstock design and manufacture, significant progress must be made in modeling and simulation. First and foremost, a more mature understanding of the process-structure-property-performance relationships must be developed. Because Additive Manufacturing processes change the nature of a material’s structure below the engineering scale, new models are required to predict materials response across the spectrum of relevant length scales, from the atomistic to the continuum. New diagnostics will be required to characterize materials response across these scales. Not just models but also advanced algorithms, next-generation codes, and advanced computer architectures will be required to complement the associated modeling activities. Based on preliminary work in each of these areas, a strong argument for the need for Exascale computing architectures can be made if a legitimate predictive capability is to be developed.

  5. Coupling Hydrodynamic and Wave Propagation Codes for Modeling of Seismic Waves recorded at the SPE Test.

    Science.gov (United States)

    Larmat, C. S.; Rougier, E.; Delorey, A.; Steedman, D. W.; Bradley, C. R.

    2016-12-01

    The goal of the Source Physics Experiment (SPE) is to bring empirical and theoretical advances to the problem of detection and identification of underground nuclear explosions. For this, the SPE program includes a strong modeling effort based on first principles calculations, with the challenge to capture both the source and near-source processes and those taking place later in time as seismic waves propagate within complex 3D geologic environments. In this paper, we report on results of modeling that uses hydrodynamic simulation codes (Abaqus and CASH) coupled with a 3D full waveform propagation code, SPECFEM3D. For modeling the near source region, we employ a fully-coupled Euler-Lagrange (CEL) modeling capability with a new continuum-based visco-plastic fracture model for simulation of damage processes, called AZ_Frac. These capabilities produce high-fidelity models of various factors believed to be key in the generation of seismic waves: the explosion dynamics, a weak grout-filled borehole, the surrounding jointed rock, and damage creation and deformations happening around the source and the free surface. SPECFEM3D, based on the Spectral Element Method (SEM), is a direct numerical method for full wave modeling with mathematical accuracy. The coupling interface consists of a series of grid points of the SEM mesh situated inside of the hydrodynamic code's domain. Displacement time series at these points are computed using output data from CASH or Abaqus (by interpolation if needed) and fed into the time marching scheme of SPECFEM3D. We will present validation tests with the Sharpe's model and comparisons of modeled waveforms with Rg waves (2-8 Hz) recorded up to 2 km from the SPE shots. We especially show effects of the local topography, velocity structure and spallation. Our models predict smaller amplitudes of Rg waves for the first five SPE shots compared to pure elastic models such as Denny & Johnson (1991).
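
    The hand-off step described above (displacement histories from the hydrocode resampled onto the spectral-element solver's finer time grid at each coupling point) can be sketched as a simple one-way interpolation. The time grids and the displacement record below are illustrative stand-ins, not SPE data.

```python
import numpy as np

# Hydrocode output: displacement history on a coarse time grid (invented data).
t_hydro = np.linspace(0.0, 1.0, 101)            # hydrocode output times (s)
disp_hydro = np.sin(2 * np.pi * 4.0 * t_hydro)  # stand-in displacement record

# SEM time-marching grid is finer; resample the record onto it.
t_sem = np.linspace(0.0, 1.0, 1001)
disp_sem = np.interp(t_sem, t_hydro, disp_hydro)

# disp_sem would then be imposed as boundary motion at the SEM coupling points.
```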

  6. Data engineering systems: Computerized modeling and data bank capabilities for engineering analysis

    Science.gov (United States)

    Kopp, H.; Trettau, R.; Zolotar, B.

    1984-01-01

    The Data Engineering System (DES) is a computer-based system that organizes technical data and provides automated mechanisms for storage, retrieval, and engineering analysis. The DES combines the benefits of a structured data base system with automated links to large-scale analysis codes. While the DES provides the user with many of the capabilities of a computer-aided design (CAD) system, the systems are actually quite different in several respects. A typical CAD system emphasizes interactive graphics capabilities and organizes data in a manner that optimizes these graphics. On the other hand, the DES is a computer-aided engineering system intended for the engineer who must operationally understand an existing or planned design or who desires to carry out additional technical analysis based on a particular design. The DES emphasizes data retrieval in a form that not only provides the engineer access to search and display the data but also links the data automatically with the computer analysis codes.

  7. Modeling RERTR experimental fuel plates using the PLATE code

    International Nuclear Information System (INIS)

    Hayes, S.L.; Meyer, M.K.; Hofman, G.L.; Snelgrove, J.L.; Brazener, R.A.

    2003-01-01

    Modeling results using the PLATE dispersion fuel performance code are presented for the U-Mo/Al experimental fuel plates from the RERTR-1, -2, -3 and -5 irradiation tests. Agreement of the calculations with experimental data obtained in post-irradiation examinations of these fuels, where available, is shown to be good. Use of the code to perform a series of parametric evaluations highlights the sensitivity of U-Mo dispersion fuel performance to fabrication variables, especially fuel particle shape and size distributions. (author)

  8. Thermohydraulic modeling of nuclear thermal rockets: The KLAXON code

    International Nuclear Information System (INIS)

    Hall, M.L.; Rider, W.J.; Cappiello, M.W.

    1992-01-01

    The hydrogen flow from the storage tanks, through the reactor core, and out the nozzle of a Nuclear Thermal Rocket is an integral design consideration. To provide an analysis and design tool for this phenomenon, the KLAXON code is being developed. A shock-capturing numerical methodology is used to model the gas flow (the Harten, Lax, and van Leer method, as implemented by Einfeldt). Preliminary results of modeling the flow through the reactor core and nozzle are given in this paper
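
    The shock-capturing method named above (Harten, Lax and van Leer, with Einfeldt's wave-speed bounds, often written HLLE) can be sketched for the scalar Burgers equation rather than KLAXON's hydrogen-flow equations; the grid, time step, and initial step profile below are illustrative.

```python
import numpy as np

def hlle_flux(uL, uR):
    """HLLE numerical flux for Burgers' equation f(u) = u^2/2."""
    fL, fR = 0.5 * uL**2, 0.5 * uR**2
    sL, sR = min(uL, uR, 0.0), max(uL, uR, 0.0)  # simple wave-speed bounds
    if sL >= 0.0:
        return fL
    if sR <= 0.0:
        return fR
    return (sR * fL - sL * fR + sL * sR * (uR - uL)) / (sR - sL)

# One explicit finite-volume update of a right-moving step (u = 1 -> u = 0).
u = np.where(np.arange(100) < 50, 1.0, 0.0)
dx, dt = 1.0, 0.5
flux = np.array([hlle_flux(u[i], u[i + 1]) for i in range(len(u) - 1)])
u[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
```

    The interface flux reduces to pure upwinding when all waves move one way, which is what makes the scheme capture the shock without oscillations.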

  9. Automatic modeling for the monte carlo transport TRIPOLI code

    International Nuclear Information System (INIS)

    Zhang Junjun; Zeng Qin; Wu Yican; Wang Guozhong; FDS Team

    2010-01-01

    TRIPOLI, developed by CEA, France, is a Monte Carlo particle transport simulation code. It has been widely applied to nuclear physics, shielding design, and nuclear safety evaluation. However, it is time-consuming and error-prone to describe the TRIPOLI input file manually. This paper implements bi-directional conversion between CAD models and TRIPOLI models. Its feasibility and efficiency have been demonstrated by several benchmarking examples. (authors)

  10. Steam generator and circulator model for the HELAP code

    International Nuclear Information System (INIS)

    Ludewig, H.

    1975-07-01

    An outline is presented of the work carried out in the 1974 fiscal year on the GCFBR safety research project, consisting of the development of improved steam generator and circulator (steam-turbine-driven helium compressor) models which will eventually be inserted in the HELAP (1) code. Furthermore, a code was developed which will be used to generate steady state input for the primary and secondary sides of the steam generator. The following conclusions and suggestions for further work are made: (1) the steam-generator and circulator models are consistent with the volume and junction layout used in HELAP; (2) with minor changes these models, when incorporated in HELAP, could be used to simulate a direct cycle plant; (3) an explicit control valve model is still to be developed and would be very desirable to control the flow to the turbine during a transient (initially this flow will be controlled by using the existing check valve model); (4) the friction factor in the laminar flow region is computed inaccurately, which might cause significant errors in loss-of-flow accidents; and (5) it is felt that HELAP will still use a large amount of computer time and will thus be limited to design basis accidents without scram or loss-of-flow transients with and without scram. Finally, it may also be used as a test bed for the development of prototype component models to be incorporated in a more sophisticated system code developed specifically for GCFBRs.

  11. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    Science.gov (United States)

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with only a slight increase in encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
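
    The block classification driving the BRP/BDP choice above can be sketched with a toy rule: compare each block to the modeled background and classify by the fraction of changed pixels. Block size, thresholds, and category boundaries below are invented; the actual BMAP operates inside a full video codec.

```python
import numpy as np

BLOCK = 16  # illustrative block size

def classify(block, bg_block, pix_thresh=10, lo=0.05, hi=0.95):
    """Classify a block against the background model (toy thresholds)."""
    diff = np.abs(block.astype(float) - bg_block.astype(float))
    frac_changed = np.mean(diff > pix_thresh)
    if frac_changed < lo:
        return "background"   # -> background reference prediction (BRP)
    elif frac_changed < hi:
        return "hybrid"       # -> background difference prediction (BDP)
    return "foreground"

bg = np.full((BLOCK, BLOCK), 100, dtype=np.uint8)   # modeled background
static = bg.copy()                                   # unchanged block
moving = bg.copy()
moving[4:12, 4:12] = 200                             # object covers part of block

print(classify(static, bg), classify(moving, bg))
```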

  12. Application of flow network models of SINDA/FLUINT{sup TM} to a nuclear power plant system thermal hydraulic code

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Ji Bum [Institute for Advanced Engineering, Yongin (Korea, Republic of); Park, Jong Woon [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    In order to enhance the dynamic and interactive simulation capability of a system thermal hydraulic code for nuclear power plants, the applicability of the flow network models in SINDA/FLUINT{sup TM} has been tested by modeling the feedwater system and coupling it to DSNP, a system thermal hydraulic simulation code for a pressurized heavy water reactor. The feedwater system is selected since it is one of the most important balance of plant systems, with a potential to greatly affect the behavior of the nuclear steam supply system. The flow network model of this feedwater system consists of condenser, condensate pumps, low and high pressure heaters, deaerator, feedwater pumps, and control valves. This complicated flow network is modeled and coupled to DSNP, and it is tested for several normal and abnormal transient conditions such as turbine load maneuvering, turbine trip, and loss of class IV power. The results show reasonable behavior of the coupled code and also give good dynamic and interactive simulation capabilities for several mild transient conditions. It has been found that coupling a system thermal hydraulic code with a flow network code is a proper way of upgrading the simulation capability of DSNP to a mature nuclear plant analyzer (NPA). 5 refs., 10 figs. (Author)

  13. Modeling developments for the SAS4A and SASSYS computer codes

    International Nuclear Information System (INIS)

    Cahalan, J.E.; Wei, T.Y.C.

    1990-01-01

    The SAS4A and SASSYS computer codes are being developed at Argonne National Laboratory for transient analysis of liquid metal cooled reactors. The SAS4A code is designed to analyse severe loss-of-coolant flow and overpower accidents involving coolant boiling, cladding failures, and fuel melting and relocation. Recent SAS4A modeling developments include extension of the coolant boiling model to treat sudden fission gas release upon pin failure, expansion of the DEFORM fuel behavior model to handle advanced cladding materials and metallic fuel, and addition of metallic fuel modeling capability to the PINACLE and LEVITATE fuel relocation models. The SASSYS code is intended for the analysis of operational and beyond-design-basis transients, and provides a detailed transient thermal and hydraulic simulation of the core, the primary and secondary coolant circuits, and the balance-of-plant, in addition to a detailed model of the plant control and protection systems. Recent SASSYS modeling developments have resulted in detailed representations of the balance of plant piping network and components, including steam generators, feedwater heaters and pumps, and the turbine. 12 refs., 2 tabs

  14. An improved steam generator model for the SASSYS code

    International Nuclear Information System (INIS)

    Pizzica, P.A.

    1989-01-01

    A new steam generator model has been developed for the SASSYS computer code, which analyzes accident conditions in a liquid-metal-cooled fast reactor. It has been incorporated into the new SASSYS balance-of-plant model, but it can also function as a stand-alone model. The model provides a full solution of the steady-state condition before the transient calculation begins for given sodium and water flow rates, inlet and outlet sodium temperatures, and inlet enthalpy and region lengths on the water side

  15. Multi-Hypothesis Modelling Capabilities for Robust Data-Model Integration

    Science.gov (United States)

    Walker, A. P.; De Kauwe, M. G.; Lu, D.; Medlyn, B.; Norby, R. J.; Ricciuto, D. M.; Rogers, A.; Serbin, S.; Weston, D. J.; Ye, M.; Zaehle, S.

    2017-12-01

    Large uncertainty is often inherent in model predictions due to imperfect knowledge of how to describe the mechanistic processes (hypotheses) that a model is intended to represent. Yet this model hypothesis uncertainty (MHU) is often overlooked or informally evaluated, as methods to quantify and evaluate MHU are limited. MHU increases as models become more complex, because each additional process added to a model comes with inherent MHU as well as parametric uncertainty. With the current trend of adding more processes to Earth System Models (ESMs), we are adding uncertainty, which can be quantified for parameters but not for MHU. Model inter-comparison projects do allow for some consideration of hypothesis uncertainty, but in an ad hoc and non-independent fashion. This has stymied efforts to evaluate ecosystem models against data and interpret the results mechanistically: because models combine sub-models of many systems and processes, each of which may be conceptualised and represented mathematically in various ways, it is not simple to identify exactly why a model produces the results it does or which model assumptions are key. We present a novel modelling framework, the multi-assumption architecture and testbed (MAAT), that automates the combination, generation, and execution of a model ensemble built with different representations of process. We will present the argument that multi-hypothesis modelling needs to be considered in conjunction with other capabilities (e.g. the Predictive Ecosystem Analyser; PecAn) and statistical methods (e.g. sensitivity analysis, data assimilation) to aid efforts in robust data model integration to enhance our predictive understanding of biological systems.
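
    The core MAAT idea, assembling a model ensemble from every combination of alternative process representations, can be sketched in a few lines. The process names and formulas below are invented placeholders, not MAAT's actual process library.

```python
from itertools import product

# Two alternative representations for each of two processes (invented).
photosynthesis = {
    "farquhar_like": lambda light: 0.9 * light,
    "empirical":     lambda light: 0.7 * light + 2.0,
}
respiration = {
    "q10":    lambda temp: 2.0 ** ((temp - 25.0) / 10.0),
    "linear": lambda temp: 0.1 * temp,
}

# The ensemble is the cross product of all hypothesis choices.
ensemble = []
for (p_name, p_fn), (r_name, r_fn) in product(photosynthesis.items(),
                                              respiration.items()):
    def model(light, temp, p=p_fn, r=r_fn):
        return p(light) - r(temp)   # net flux under this hypothesis pair
    ensemble.append(((p_name, r_name), model))

# 2 photosynthesis x 2 respiration representations -> 4 model variants,
# each of which can then be run, calibrated, and compared against data.
```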

  16. Dual coding: a cognitive model for psychoanalytic research.

    Science.gov (United States)

    Bucci, W

    1985-01-01

    Four theories of mental representation derived from current experimental work in cognitive psychology have been discussed in relation to psychoanalytic theory. These are: verbal mediation theory, in which language determines or mediates thought; perceptual dominance theory, in which imagistic structures are dominant; common code or propositional models, in which all information, perceptual or linguistic, is represented in an abstract, amodal code; and dual coding, in which nonverbal and verbal information are each encoded, in symbolic form, in separate systems specialized for such representation, and connected by a complex system of referential relations. The weight of current empirical evidence supports the dual code theory. However, psychoanalysis has implicitly accepted a mixed model-perceptual dominance theory applying to unconscious representation, and verbal mediation characterizing mature conscious waking thought. The characterization of psychoanalysis, by Schafer, Spence, and others, as a domain in which reality is constructed rather than discovered, reflects the application of this incomplete mixed model. The representations of experience in the patient's mind are seen as without structure of their own, needing to be organized by words, thus vulnerable to distortion or dissolution by the language of the analyst or the patient himself. In these terms, hypothesis testing becomes a meaningless pursuit; the propositions of the theory are no longer falsifiable; the analyst is always more or less "right." This paper suggests that the integrated dual code formulation provides a more coherent theoretical framework for psychoanalysis than the mixed model, with important implications for theory and technique. In terms of dual coding, the problem is not that the nonverbal representations are vulnerable to distortion by words, but that the words that pass back and forth between analyst and patient will not affect the nonverbal schemata at all. Using the dual code

  17. Phenomenological optical potentials and optical model computer codes

    International Nuclear Information System (INIS)

    Prince, A.

    1980-01-01

    An introduction to the Optical Model is presented. Starting with the purpose and nature of the physical problems to be analyzed, a general formulation and the various phenomenological methods of solution are discussed. This includes the calculation of observables based on assumed potentials such as local and non-local and their forms, e.g. Woods-Saxon, folded model etc. Also discussed are the various calculational methods and model codes employed to describe nuclear reactions in the spherical and deformed regions (e.g. coupled-channel analysis). An examination of the numerical solutions and minimization techniques associated with the various codes, is briefly touched upon. Several computer programs are described for carrying out the calculations. The preparation of input, (formats and options), determination of model parameters and analysis of output are described. The class is given a series of problems to carry out using the available computer. Interpretation and evaluation of the samples includes the effect of varying parameters, and comparison of calculations with the experimental data. Also included is an intercomparison of the results from the various model codes, along with their advantages and limitations. (author)

  18. Improvement of blow down model for LEAP code

    International Nuclear Information System (INIS)

    Itooka, Satoshi; Fujimata, Kazuhiro

    2003-03-01

    At the Japan Nuclear Cycle Development Institute, the analysis method for overheated tube rupture during sodium-water reaction accidents in the steam generator of a fast breeder reactor was improved, and the heat transfer conditions in the tube were evaluated based on studies of critical heat flux (CHF) and post-CHF heat transfer correlations for light water reactors. In this study, the blow down model of the LEAP code was improved taking the above-mentioned heat transfer evaluation into consideration. The improvements to the LEAP code were the following: the addition of critical heat flux (CHF) correlations by the formulas of Katto and of Tong; the addition of post-CHF heat transfer correlations by the formulas of Condie-Bengston IV and of Groeneveld 5.9; the extension of the physical properties of water and steam up to the critical conditions of water; the expansion of the total number of sections and improvement of the input form; and the addition of a function to control the valve setting by a PID control model. Calculations and verification were performed with the improved LEAP code in order to confirm these functions. (author)
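
    The PID valve-control function mentioned above can be sketched generically; the gains, time step, and first-order valve response below are invented for illustration and are not LEAP's actual model.

```python
def pid_step(setpoint, measured, state, kp=0.5, ki=0.1, kd=0.05, dt=0.1):
    """One update of a textbook PID controller; state holds the integral
    and previous error between calls."""
    error = setpoint - measured
    integral = state["integral"] + error * dt
    derivative = (error - state["prev_error"]) / dt
    state.update(integral=integral, prev_error=error)
    return kp * error + ki * integral + kd * derivative

# Drive a crude first-order valve toward 60% opening.
state = {"integral": 0.0, "prev_error": 0.0}
opening = 0.0
for _ in range(2000):
    command = pid_step(60.0, opening, state)
    opening += 0.1 * command   # toy valve response per time step

print(round(opening, 3))
```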

  19. BWR modeling capability and Scale/Triton lattice-to-core integration of the Nestle nodal simulator - 331

    International Nuclear Information System (INIS)

    Galloway, J.; Hernandez, H.; Maldonado, G.I.; Jessee, M.; Popov, E.; Clarno, K.

    2010-01-01

    This article reports the status of recent and substantial enhancements made to the NESTLE nodal core simulator, a code originally developed at North Carolina State University (NCSU) of which version 5.2.1 has been available for several years through the Oak Ridge National Laboratory (ORNL) Radiation Safety Information Computational Center (RSICC) software repository. In its released and available form, NESTLE is a seasoned, well-developed and extensively tested code system particularly useful to model PWRs. In collaboration with NCSU, University of Tennessee (UT) and ORNL researchers have recently developed new enhancements for the NESTLE code, including the implementation of a two-phase drift-flux thermal hydraulic and flow redistribution model to facilitate modeling of Boiling Water Reactors (BWRs), as well as the development of an integrated coupling of SCALE/TRITON lattice physics to NESTLE so as to produce an end-to-end capability for reactor simulations. These latest advancements implemented into NESTLE, as well as an update on other ongoing efforts of this project, are herein reported. (authors)
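
    The drift-flux closure at the heart of such a two-phase model relates void fraction to the phase superficial velocities through the standard Zuber-Findlay form alpha = j_g / (C0 j + V_gj). The sketch below uses illustrative values of the distribution parameter C0 and drift velocity V_gj, not NESTLE's actual correlations.

```python
def void_fraction(j_g, j_f, C0=1.13, V_gj=0.24):
    """Drift-flux void fraction (Zuber-Findlay form).

    j_g, j_f: gas and liquid superficial velocities (m/s);
    C0: distribution parameter; V_gj: drift velocity (m/s). Illustrative values.
    """
    j = j_g + j_f                   # total volumetric flux
    return j_g / (C0 * j + V_gj)

print(round(void_fraction(0.5, 1.0), 3))
```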

  20. The 2010 fib Model Code for Structural Concrete: A new approach to structural engineering

    NARCIS (Netherlands)

    Walraven, J.C.; Bigaj-Van Vliet, A.

    2011-01-01

    The fib Model Code is a recommendation for the design of reinforced and prestressed concrete which is intended to be a guiding document for future codes. Model Codes have been published before, in 1978 and 1990. The draft for fib Model Code 2010 was published in May 2010. The most important new

  1. Plutonium explosive dispersal modeling using the MACCS2 computer code

    International Nuclear Information System (INIS)

    Steele, C.M.; Wald, T.L.; Chanin, D.I.

    1998-01-01

    The purpose of this paper is to derive the necessary parameters to be used to establish a defensible methodology to perform explosive dispersal modeling of respirable plutonium using Gaussian methods. A particular code, MACCS2, has been chosen for this modeling effort due to its application of sophisticated meteorological statistical sampling in accordance with the philosophy of Nuclear Regulatory Commission (NRC) Regulatory Guide 1.145, ''Atmospheric Dispersion Models for Potential Accident Consequence Assessments at Nuclear Power Plants''. A second advantage supporting the selection of the MACCS2 code for modeling purposes is that meteorological data sets are readily available at most Department of Energy (DOE) and NRC sites. This particular MACCS2 modeling effort focuses on the calculation of respirable doses and not ground deposition. Once the necessary parameters for the MACCS2 modeling are developed and presented, the model is benchmarked against empirical test data from the Double Tracks shot of project Roller Coaster (Shreve 1965) and applied to a hypothetical plutonium explosive dispersal scenario. Further modeling with the MACCS2 code is performed to determine a defensible method of treating the effects of building structure interaction on the respirable fraction distribution as a function of height. These results are related to the Clean Slate 2 and Clean Slate 3 bunkered shots of Project Roller Coaster. Lastly a method is presented to determine the peak 99.5% sector doses on an irregular site boundary in the manner specified in NRC Regulatory Guide 1.145 (1983). Parametric analyses are performed on the major analytic assumptions in the MACCS2 model to define the potential errors that are possible in using this methodology
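
    The Gaussian dispersion calculation that MACCS2 automates can be illustrated with a bare reflected Gaussian plume. The dispersion-coefficient curves and release values below are invented stand-ins; MACCS2 itself samples real meteorological data and uses regulatory stability-class sigma curves.

```python
import numpy as np

def plume_concentration(Q, u, x, y, z, H):
    """Ground-reflected Gaussian plume concentration chi (g/m^3) at (x, y, z).

    Q: release rate (g/s), u: wind speed (m/s), x: downwind distance (m),
    y: crosswind offset (m), z: height (m), H: effective release height (m).
    Sigma curves below are illustrative, not a regulatory stability class.
    """
    sigma_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)
    sigma_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centerline ground-level concentration 1 km downwind of a 10 m release.
chi = plume_concentration(Q=1.0, u=3.0, x=1000.0, y=0.0, z=0.0, H=10.0)
```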

  3. A variable capacitance based modeling and power capability predicting method for ultracapacitor

    Science.gov (United States)

    Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang

    2018-01-01

    Accurate modeling and power capability prediction methods for ultracapacitors are of great significance in the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error coming from constant capacitance models, an improved ultracapacitor model based on variable capacitance is proposed, where the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction is developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed for tracking the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal voltage simulation results under different temperatures, and the effectiveness of the designed observer is proved by various test conditions. Additionally, the power capability prediction results of different time scales and temperatures are compared, to study their effects on the ultracapacitor's power capability.
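
    The variable-capacitance idea can be sketched directly: with the main capacitance a piecewise linear function of voltage, the stored charge is the integral of C(V), and state of charge follows as charge relative to the rated-voltage charge. The breakpoints and capacitance values below are invented, not the paper's fitted parameters.

```python
import numpy as np

# Piecewise linear C(V): invented breakpoints (V) and capacitances (F).
v_pts = np.array([0.0, 1.0, 2.0, 2.7])
c_pts = np.array([80.0, 90.0, 110.0, 120.0])

def charge(v):
    """Stored charge Q(v) = integral of C(V) dV from 0 to v (coulombs),
    by trapezoidal quadrature over the piecewise linear curve."""
    vs = np.linspace(0.0, v, 1001)
    cs = np.interp(vs, v_pts, c_pts)
    return float(np.sum(0.5 * (cs[1:] + cs[:-1]) * np.diff(vs)))

def soc(v, v_max=2.7):
    """State of charge: stored charge relative to charge at rated voltage."""
    return charge(v) / charge(v_max)

print(round(soc(1.5), 3))
```

    A constant-capacitance model would make SOC linear in voltage; the piecewise C(V) bends that relationship, which is the simulation error the paper sets out to remove.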

  4. Guidelines for Applying the Capability Maturity Model Analysis to Connected and Automated Vehicle Deployment

    Science.gov (United States)

    2017-11-23

    The Federal Highway Administration (FHWA) has adapted the Transportation Systems Management and Operations (TSMO) Capability Maturity Model (CMM) to describe the operational maturity of Infrastructure Owner-Operator (IOO) agencies across a range of i...

  5. A Stochastic Model for the Landing Dispersion of Hazard Detection and Avoidance Capable Flight Systems

    Science.gov (United States)

    Witte, L.

    2014-06-01

    To support landing site assessments for HDA-capable flight systems, and to facilitate trade studies between potential HDA architectures and the resulting probability of safe landing, a stochastic landing dispersion model has been developed.

  6. A new open-source pin power reconstruction capability in DRAGON5 and DONJON5 neutronic codes

    Energy Technology Data Exchange (ETDEWEB)

    Chambon, R., E-mail: richard-pierre.chambon@polymtl.ca; Hébert, A., E-mail: alain.hebert@polymtl.ca

    2015-08-15

    In order to better optimize the fuel energy efficiency in PWRs, the burnup distribution has to be known as accurately as possible, ideally in each pin. However, this level of detail is lost when core calculations are performed with homogenized cross-sections. The pin power reconstruction (PPR) method can be used to recover those levels of detail as accurately as possible, within a small additional computing time compared to classical core calculations. Such a de-homogenization technique for core calculations using arbitrarily homogenized fuel assembly geometries was presented originally by Fliscounakis et al. In our work, the same methodology was implemented in the open-source neutronic codes DRAGON5 and DONJON5. The new type of Selengut homogenization, called macro-calculation water gap, also proposed by Fliscounakis et al., was implemented. Some important details of the methodology were emphasized in order to obtain precise results. Validation tests were performed on 12 configurations of 3×3 clusters, comparing simulations in transport theory against simulations in diffusion theory followed by pin power reconstruction. The results show that the pin power reconstruction and the Selengut macro-calculation water gap methods were correctly implemented. The accuracy of the simulations depends on the SPH method and on the homogenization geometry choices. Results show that heterogeneous homogenization is highly recommended. SPH techniques were investigated with flux-volume and Selengut normalization, but the former leads to inaccurate results. Even though the new Selengut macro-calculation water gap method gives promising results regarding flux continuity at assembly interfaces, the classical Selengut approach is more reliable in terms of maximum and average errors over the whole range of configurations.
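    The SPH correction mentioned above can be sketched as a fixed-point iteration: factors μᵢ rescale the homogenized cross sections until the macro calculation reproduces the reference transport fluxes, with the macro flux renormalized on each pass. This is a generic sketch, not the DRAGON5/DONJON5 implementation; `macro_solver` is a caller-supplied stand-in for the corrected diffusion solve:

```python
import numpy as np

def sph_factors(phi_ref, volumes, macro_solver, n_iter=50, tol=1e-8):
    """phi_ref: reference (transport) fluxes per homogenized region.
    macro_solver(mu): macro-calculation fluxes recomputed with the
    cross sections scaled by the SPH factors mu. Returns converged mu."""
    phi_ref = np.asarray(phi_ref, dtype=float)
    volumes = np.asarray(volumes, dtype=float)
    mu = np.ones_like(phi_ref)
    for _ in range(n_iter):
        phi_mac = np.asarray(macro_solver(mu), dtype=float)
        # flux-volume normalization: preserve the volume-integrated flux
        phi_mac = phi_mac * (phi_ref @ volumes) / (phi_mac @ volumes)
        mu_new = phi_ref / phi_mac
        if np.max(np.abs(mu_new - mu)) < tol:
            return mu_new
        mu = mu_new
    return mu
```

    The Selengut variants differ in the normalization step (surface-flux rather than flux-volume preservation), which is exactly where the abstract reports the accuracy differences.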

  7. Geochemical modelling of groundwater evolution using chemical equilibrium codes

    International Nuclear Information System (INIS)

    Pitkaenen, P.; Pirhonen, V.

    1991-01-01

    Geochemical equilibrium codes are a modern tool for studying the interaction between groundwater and solid phases. The most commonly used programs and their application areas are briefly presented in this article. The main emphasis is placed on how calculated results are used to evaluate groundwater evolution in a hydrogeological system. At present, geochemical equilibrium modelling also takes kinetic as well as hydrologic constraints along a flow path into consideration.

  8. Optical model calculations with the code ECIS95

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, B V [Departamento de Fisica, Instituto Tecnologico da Aeronautica, Centro Tecnico Aeroespacial (Brazil)

    2001-12-15

    The basic features of elastic and inelastic scattering within the framework of the spherical and deformed nuclear optical models are discussed. The calculation of cross sections, angular distributions and other scattering quantities using J. Raynal's code ECIS95 is described. The use of the ECIS method (Equations Couplees en Iterations Sequentielles) in coupled-channels and distorted-wave Born approximation calculations is also reviewed. (author)

  9. International codes and model intercomparison for intermediate energy activation yields

    International Nuclear Information System (INIS)

    Rolf, M.; Nagel, P.

    1997-01-01

    The motivation for this intercomparison came from the data needs of accelerator-based waste transmutation, energy amplification and medical therapy. The aim of the exercise is to determine the degree of reliability of current nuclear reaction models and codes when calculating activation yields in the intermediate energy range up to 5000 MeV. Emphasis has been placed on a wide range of target elements (O, Al, Fe, Co, Zr and Au). This work is mainly based on the calculation of (p,xpyn) integral cross sections for incident protons. A qualitative description of some of the nuclear models and code options employed is given. The systematic graphical presentation of the results allows a quick quantitative measure of agreement or deviation. This code intercomparison highlights the fact that modeling calculations of activation yields may at best have uncertainties of a factor of two. The causes of such discrepancies are multi-factorial. Problems are encountered which are connected with the calculation of nuclear masses, binding energies, Q-values, shell effects, medium energy fission and Fermi break-up. (A.C.)

  10. Film grain noise modeling in advanced video coding

    Science.gov (United States)

    Oh, Byung Tae; Kuo, C.-C. Jay; Sun, Shijun; Lei, Shawmin

    2007-01-01

    A new technique for film grain noise extraction, modeling and synthesis is proposed and applied to the coding of high definition video in this work. The film grain noise is viewed as a part of artistic presentation by people in the movie industry. On one hand, since the film grain noise can boost the natural appearance of pictures in high definition video, it should be preserved in high-fidelity video processing systems. On the other hand, video coding with film grain noise is expensive. It is desirable to extract film grain noise from the input video as a pre-processing step at the encoder and re-synthesize the film grain noise and add it back to the decoded video as a post-processing step at the decoder. Under this framework, the coding gain of the denoised video is higher while the quality of the final reconstructed video can still be well preserved. Following this idea, we present a method to remove film grain noise from image/video without distorting its original content. Besides, we describe a parametric model containing a small set of parameters to represent the extracted film grain noise. The proposed model generates the film grain noise that is close to the real one in terms of power spectral density and cross-channel spectral correlation. Experimental results are shown to demonstrate the efficiency of the proposed scheme.
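    The synthesis side of such a framework can be sketched as spectral shaping: white noise is filtered so that its power spectral density approximates a target grain PSD. The flat target used below is a placeholder; the actual parametric model in the paper also matches cross-channel spectral correlation:

```python
import numpy as np

def synthesize_grain(target_psd, seed=0):
    """Generate zero-mean noise whose power spectral density approximates
    target_psd (a 2-D array, one value per DFT bin) by shaping white noise."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(target_psd.shape)
    # scale the white spectrum by |H| = sqrt(PSD); the result stays real
    return np.fft.ifft2(np.fft.fft2(white) * np.sqrt(target_psd)).real

# Flat target PSD of 2.0 -> output variance close to 2.0 (Parseval)
grain = synthesize_grain(np.full((64, 64), 2.0))
```

    At the decoder, a block of grain synthesized this way would be added back to the denoised, decoded picture.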

  11. Models to enhance research capacity and capability in clinical nurses: a narrative review.

    Science.gov (United States)

    O'Byrne, Louise; Smith, Sheree

    2011-05-01

    To identify models used as local initiatives to build capability and capacity in clinical nurses. The National Health Service, the Nursing and Midwifery Council and the United Kingdom Clinical Research Collaboration all support the development of research capability and capacity in clinical nurses in the UK. Narrative review. A literature search of databases (including Medline and Pubmed) using the search terms nursing research, research capacity and research capability combined with building, development, model and collaboration. Publications which included a description or methodological study of a structured initiative to tackle research capacity and capability development in clinical nurses were selected. Three models were found to be dominant in the literature: evidence-based practice, facilitative and experiential learning models. Strong leadership, organisational need and management support were elements found in all three models. Methodological issues were evident and pertain to small sample sizes, and inconsistent and poorly defined outcomes, along with a lack of data. Whilst the vision of a research-ready and active National Health Service is to be applauded, to date there appears to be limited research on the best approach to support local initiatives for nurses that build research capability and capacity. Future studies will need to focus on well-defined objectives and outcomes to enable robust evidence to support local initiatives. To build research capability and capacity in clinical nurses, there is a need to evaluate models and determine the best approach that will provide clinical nurses with research opportunities. © 2010 Blackwell Publishing Ltd.

  12. Review of calculational models and computer codes for environmental dose assessment of radioactive releases

    International Nuclear Information System (INIS)

    Strenge, D.L.; Watson, E.C.; Droppo, J.G.

    1976-06-01

    The development of technological bases for siting nuclear fuel cycle facilities requires calculational models and computer codes for the evaluation of risks and the assessment of environmental impact of radioactive effluents. A literature search and review of available computer programs revealed that no one program was capable of performing all of the great variety of calculations (i.e., external dose, internal dose, population dose, chronic release, accidental release, etc.). Available literature on existing computer programs has been reviewed and a description of each program reviewed is given

  13. Review of calculational models and computer codes for environmental dose assessment of radioactive releases

    Energy Technology Data Exchange (ETDEWEB)

    Strenge, D.L.; Watson, E.C.; Droppo, J.G.

    1976-06-01

    The development of technological bases for siting nuclear fuel cycle facilities requires calculational models and computer codes for the evaluation of risks and the assessment of environmental impact of radioactive effluents. A literature search and review of available computer programs revealed that no one program was capable of performing all of the great variety of calculations (i.e., external dose, internal dose, population dose, chronic release, accidental release, etc.). Available literature on existing computer programs has been reviewed and a description of each program reviewed is given.

  14. Security Process Capability Model Based on ISO/IEC 15504 Conformant Enterprise SPICE

    Directory of Open Access Journals (Sweden)

    Mitasiunas Antanas

    2014-07-01

    In the context of modern information systems, security has become one of the most critical quality attributes. The purpose of this paper is to address the problem of the quality of information security. The approach taken rests on the assumption that security is a process-oriented activity: product quality can be achieved by means of process quality, i.e., process capability. The SPICE-conformant information security process capability model introduced in the paper builds on the process capability modeling elaborated by the worldwide software engineering community over the last 25 years, namely ISO/IEC 15504, which defines the capability dimension and the requirements for process definition, and on Enterprise SPICE, a domain-independent integrated model for enterprise-wide assessment and improvement.

  15. Direct containment heating models in the CONTAIN code

    International Nuclear Information System (INIS)

    Washington, K.E.; Williams, D.C.

    1995-08-01

    The potential exists in a nuclear reactor core melt severe accident for molten core debris to be dispersed under high pressure into the containment building. If this occurs, the set of phenomena that result in the transfer of energy to the containment atmosphere and its surroundings is referred to as direct containment heating (DCH). Because of the potential for DCH to lead to early containment failure, the U.S. Nuclear Regulatory Commission (USNRC) has sponsored an extensive research program consisting of experimental, analytical, and risk integration components. An important element of the analytical research has been the development and assessment of direct containment heating models in the CONTAIN code. This report documents the DCH models in the CONTAIN code. DCH models in CONTAIN for representing debris transport, trapping, chemical reactions, and heat transfer from debris to the containment atmosphere and surroundings are described. The descriptions include the governing equations and input instructions in CONTAIN unique to performing DCH calculations. Modifications made to the combustion models in CONTAIN for representing the combustion of DCH-produced and pre-existing hydrogen under DCH conditions are also described. Input table options for representing the discharge of debris from the RPV and the entrainment phase of the DCH process are also described. A sample calculation is presented to demonstrate the functionality of the models. The results show that reasonable behavior is obtained when the models are used to predict the sixth Zion geometry integral effects test at 1/10th scale

  16. Direct containment heating models in the CONTAIN code

    Energy Technology Data Exchange (ETDEWEB)

    Washington, K.E.; Williams, D.C.

    1995-08-01

    The potential exists in a nuclear reactor core melt severe accident for molten core debris to be dispersed under high pressure into the containment building. If this occurs, the set of phenomena that result in the transfer of energy to the containment atmosphere and its surroundings is referred to as direct containment heating (DCH). Because of the potential for DCH to lead to early containment failure, the U.S. Nuclear Regulatory Commission (USNRC) has sponsored an extensive research program consisting of experimental, analytical, and risk integration components. An important element of the analytical research has been the development and assessment of direct containment heating models in the CONTAIN code. This report documents the DCH models in the CONTAIN code. DCH models in CONTAIN for representing debris transport, trapping, chemical reactions, and heat transfer from debris to the containment atmosphere and surroundings are described. The descriptions include the governing equations and input instructions in CONTAIN unique to performing DCH calculations. Modifications made to the combustion models in CONTAIN for representing the combustion of DCH-produced and pre-existing hydrogen under DCH conditions are also described. Input table options for representing the discharge of debris from the RPV and the entrainment phase of the DCH process are also described. A sample calculation is presented to demonstrate the functionality of the models. The results show that reasonable behavior is obtained when the models are used to predict the sixth Zion geometry integral effects test at 1/10th scale.

  17. Development of Parallel Code for the Alaska Tsunami Forecast Model

    Science.gov (United States)

    Bahng, B.; Knight, W. R.; Whitmore, P.

    2014-12-01

    The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion. That is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communications between domains of each parent-child pair as waves get closer to coastal waters. Even with the pre-computation the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest resolution Digital Elevation Models (DEM) used by ATFM are 1/3 arc-seconds. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase the model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results with the long term aim of tsunami forecasts from source to high resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast model database with new DEMs; and, will make possible future inclusion of new physics such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.

  18. Channel modeling, signal processing and coding for perpendicular magnetic recording

    Science.gov (United States)

    Wu, Zheng

    With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. 
Moreover, the system performance can be further improved by

  19. Induction generator model in phase coordinates for fault ride-through capability studies of wind turbines

    DEFF Research Database (Denmark)

    Fajardo, L.A.; Iov, F.; Medina, R.J.A.

    2007-01-01

    A phase coordinates induction generator model with time varying electrical parameters as influenced by magnetic saturation and rotor deep bar effects, is presented in this paper. The model exhibits a per-phase formulation, uses standard data sheet for characterization of the electrical parameters...... are conducted in a representative sized system and results show aptness of the proposed model over other two models. This approach is also constructive to support grid code requirements....

  20. Modeling of pipe break accident in a district heating system using RELAP5 computer code

    International Nuclear Information System (INIS)

    Kaliatka, A.; Valinčius, M.

    2012-01-01

    Reliability of a district heat supply system is a very important factor. However, accidents are inevitable and occur for various reasons, so it is necessary to be able to evaluate the consequences of possible accidents. This paper demonstrates the capabilities of the developed district heating network model (for the RELAP5 code) to analyze dynamic processes taking place in the network. A pipe break in a water supply line accident scenario in the Kaunas city (Lithuania) heating network is presented in this paper. The results of this case study were used to demonstrate the possibility of identifying the break location from the propagation of the pressure decrease through the network. -- Highlights: ► The nuclear reactor accident analysis code RELAP5 was applied to accident analysis in a district heating network. ► A pipe break accident scenario in the Kaunas city (Lithuania) district heating network has been analyzed. ► An innovative method of pipe break location identification from pressure-time data is proposed.
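    The break-location idea, timing when the pressure decrease reaches measurement points at both ends of a line, reduces to simple wave-arrival arithmetic. A hypothetical two-sensor sketch (not the RELAP5 model itself):

```python
def locate_break(pipe_length, wave_speed, t_arrival_1, t_arrival_2):
    """Distance (m) of the break from sensor 1, given the pipe length (m),
    the pressure-wave propagation speed (m/s), and the times (s) at which
    the pressure drop reaches the sensors at the two pipe ends.
    From d1 + d2 = L and d1 - d2 = a * (t1 - t2):"""
    return 0.5 * (pipe_length + wave_speed * (t_arrival_1 - t_arrival_2))
```

    For example, with a 1000 m line, a 1000 m/s wave speed, and arrivals at 0.3 s and 0.7 s, the break lies 300 m from sensor 1. In a meshed network, the same comparison would be repeated over sensor pairs along candidate paths.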

  1. The top-down reflooding model in the Cathare code

    International Nuclear Information System (INIS)

    Bartak, J.; Bestion, D.; Haapalehto, T.

    1993-01-01

    A top-down reflooding model was developed for the French best-estimate thermalhydraulic code CATHARE. The paper presents the current state of development of this model. Based on a literature survey and on compatibility considerations with respect to the existing CATHARE bottom reflooding package, a falling film top-down reflooding model was developed and implemented into CATHARE version 1.3E. Following a brief review of previous work, the paper describes the most important features of the model. The model was validated against the WINFRITH single-tube top-down reflooding experiment and the REWET-II simultaneous bottom and top-down reflooding experiment in rod bundle geometry. The results demonstrate the ability of the new package to describe falling film rewetting phenomena and the main parametric trends, both in a simple analytical experimental setup and in a much more complex rod bundle reflooding experiment. (authors). 9 figs., 28 refs

  2. Toward a Probabilistic Automata Model of Some Aspects of Code-Switching.

    Science.gov (United States)

    Dearholt, D. W.; Valdes-Fallis, G.

    1978-01-01

    The purpose of the model is to select either Spanish or English as the language to be used; its goals at this stage of development include modeling code-switching for lexical need, apparently random code-switching, dependency of code-switching upon sociolinguistic context, and code-switching within syntactic constraints. (EJS)

  3. A method for modeling co-occurrence propensity of clinical codes with application to ICD-10-PCS auto-coding.

    Science.gov (United States)

    Subotin, Michael; Davis, Anthony R

    2016-09-01

    Natural language processing methods for medical auto-coding, or automatic generation of medical billing codes from electronic health records, generally assign each code independently of the others. They may thus assign codes for closely related procedures or diagnoses to the same document, even when they do not tend to occur together in practice, simply because the right choice can be difficult to infer from the clinical narrative. We propose a method that injects awareness of the propensities for code co-occurrence into this process. First, a model is trained to estimate the conditional probability that one code is assigned by a human coder, given that another code is known to have been assigned to the same document. Then, at runtime, an iterative algorithm is used to apply this model to the output of an existing statistical auto-coder to modify the confidence scores of the codes. We tested this method in combination with a primary auto-coder for International Statistical Classification of Diseases-10 procedure codes, achieving a 12% relative improvement in F-score over the primary auto-coder baseline. The proposed method can be used, with appropriate features, in combination with any auto-coder that generates codes with different levels of confidence. The promising results obtained for International Statistical Classification of Diseases-10 procedure codes suggest that the proposed method may have wider applications in auto-coding. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
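    The runtime step, iteratively reweighting each code's confidence by the co-occurrence propensities of the other candidate codes, can be sketched as follows. The log-odds interaction weights and the damping factor `alpha` are hypothetical; the paper's actual model estimates conditional probabilities from training data:

```python
import math

def adjust_scores(scores, cooc_logodds, n_iter=5, alpha=0.5):
    """scores: {code: confidence in (0, 1)} from the primary auto-coder.
    cooc_logodds[(a, b)]: log-odds bump for code a when code b is assigned
    (positive = the codes tend to co-occur, negative = they tend not to).
    alpha is a hypothetical damping weight."""
    base = {c: math.log(p / (1.0 - p)) for c, p in scores.items()}
    cur = dict(scores)
    for _ in range(n_iter):
        nxt = {}
        for c in base:
            # expected contribution of the other candidate codes
            bump = sum(cur[o] * cooc_logodds.get((c, o), 0.0)
                       for o in base if o != c)
            nxt[c] = 1.0 / (1.0 + math.exp(-(base[c] + alpha * bump)))
        cur = nxt
    return cur
```

    Codes that rarely co-occur with the document's strong candidates are pushed down, while mutually supportive codes are pushed up, which is the behavior the abstract describes.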

  4. Benchmarking of computer codes and approaches for modeling exposure scenarios

    International Nuclear Information System (INIS)

    Seitz, R.R.; Rittmann, P.D.; Wood, M.I.; Cook, J.R.

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided

  5. Nuclear Energy Advanced Modeling and Simulation (NEAMS) Waste Integrated Performance and Safety Codes (IPSC) : FY10 development and integration.

    Energy Technology Data Exchange (ETDEWEB)

    Criscenti, Louise Jacqueline; Sassani, David Carl; Arguello, Jose Guadalupe, Jr.; Dewers, Thomas A.; Bouchard, Julie F.; Edwards, Harold Carter; Freeze, Geoffrey A.; Wang, Yifeng; Schultz, Peter Andrew

    2011-02-01

    This report describes the progress in fiscal year 2010 in developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. Waste IPSC activities in fiscal year 2010 focused on specifying a challenge problem to demonstrate proof of concept, developing a verification and validation plan, and performing an initial gap analyses to identify candidate codes and tools to support the development and integration of the Waste IPSC. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. This year-end progress report documents the FY10 status of acquisition, development, and integration of thermal-hydrologic-chemical-mechanical (THCM) code capabilities, frameworks, and enabling tools and infrastructure.

  6. Simplified model for radioactive contaminant transport: the TRANSS code

    International Nuclear Information System (INIS)

    Simmons, C.S.; Kincaid, C.T.; Reisenauer, A.E.

    1986-09-01

    A simplified ground-water transport model called TRANSS was devised to estimate the rate of migration of a decaying radionuclide that is subject to sorption governed by a linear isotherm. Transport is modeled as a contaminant mass transmitted along a collection of streamlines constituting a streamtube, which connects a source release zone with an environmental arrival zone. The probability-weighted contaminant arrival distribution along each streamline is represented by an analytical solution of the one-dimensional advection-dispersion equation with constant velocity and dispersion coefficient. The appropriate effective constant velocity for each streamline is based on the exact travel time required to traverse a streamline with a known length. An assumption used in the model to facilitate the mathematical simplification is that transverse dispersion within a streamtube is negligible. Release of contaminant from a source is described in terms of a fraction-remaining curve provided as input information. However, an option included in the code is the calculation of a fraction-remaining curve based on four specialized release models: (1) constant release rate, (2) solubility-controlled release, (3) adsorption-controlled release, and (4) diffusion-controlled release from beneath an infiltration barrier. To apply the code, a user supplies only a certain minimal number of parameters: a probability-weighted list of travel times for streamlines, a local-scale dispersion coefficient, a sorption distribution coefficient, total initial radionuclide inventory, radioactive half-life, a release model choice, and size dimensions of the source. The code is intended to provide scoping estimates of contaminant transport and does not predict the evolution of a concentration distribution in a ground-water flow field. Moreover, the required travel times along streamlines must be obtained from a prior ground-water flow simulation
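    The streamline superposition described above can be sketched directly: each streamline contributes a one-dimensional advection-dispersion pulse whose constant velocity is set by its travel time, with linear-isotherm sorption entering as a retardation factor and decay as exp(-λt). A generic sketch under those assumptions, not the TRANSS source:

```python
import math

def pulse_arrival(t, length, travel_time, disp, retard=1.0, lam=0.0):
    """Unit-mass arrival concentration at the streamline outlet x = length,
    from the 1-D advection-dispersion equation with constant velocity
    v = length / travel_time, dispersion coefficient disp, linear-isotherm
    retardation factor retard, and decay constant lam."""
    v_eff = (length / travel_time) / retard
    d_eff = disp / retard
    gauss = math.exp(-(length - v_eff * t) ** 2 / (4.0 * d_eff * t))
    return gauss / math.sqrt(4.0 * math.pi * d_eff * t) * math.exp(-lam * t)

def breakthrough(t, streamlines, disp, retard=1.0, lam=0.0):
    """Probability-weighted superposition over streamlines given as
    (weight, length, travel_time) tuples."""
    return sum(w * pulse_arrival(t, L, tt, disp, retard, lam)
               for w, L, tt in streamlines)
```

    The neglect of transverse dispersion in the abstract is what allows the streamlines to be superposed independently like this.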

  7. A critical flow model for the Cathena thermalhydraulic code

    International Nuclear Information System (INIS)

    Popov, N.K.; Hanna, B.N.

    1990-01-01

    The calculation of critical flow rate, e.g., of choked flow through a break, is required for simulating a loss-of-coolant transient in a reactor or reactor-like experimental facility. A model was developed to calculate the flow rate through the break for given geometrical parameters near the break and fluid parameters upstream of the break, for ordinary water as well as heavy water, with or without non-condensible gases. This model has been incorporated in CATHENA, a one-dimensional, two-fluid thermalhydraulic code. In the CATHENA code a standard staggered-mesh, finite-difference representation is used to solve the thermalhydraulic equations. The model compares the fluid mixture velocity, calculated using the CATHENA momentum equations, with a critical velocity. When the mixture velocity is smaller than the critical velocity, the flow is assumed to be subcritical, and the model remains passive. When the fluid mixture velocity is higher than the critical velocity, the model sets the fluid mixture velocity equal to the critical velocity. In this paper the critical velocity at a link (momentum cell) is first estimated separately for single-phase liquid, two-phase, and single-phase gas flow conditions at the upstream node (mass/energy cell). In all three regimes non-condensible gas can be present in the flow. For single-phase liquid flow, the critical velocity is estimated using a Bernoulli-type equation, with the pressure at the link estimated by the pressure undershoot method.
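    The limiter logic described, comparing the computed mixture velocity with a critical velocity and capping it when choked, can be sketched as follows. The Bernoulli-type liquid estimate and the ideal-gas sonic speed are simplified stand-ins for the code's actual correlations:

```python
import math

def critical_velocity(phase, p_up, rho_up, p_throat=None, gamma=1.3):
    """Critical (choking) velocity estimate at the break link.
    phase='liquid': Bernoulli-type estimate from the upstream-to-throat
      pressure drop (p_throat from a pressure-undershoot estimate).
    phase='gas': ideal-gas sonic speed (assumed stand-in)."""
    if phase == "liquid":
        return math.sqrt(2.0 * (p_up - p_throat) / rho_up)
    return math.sqrt(gamma * p_up / rho_up)

def apply_choking(v_mix, v_crit):
    """Limiter: leave subcritical flow untouched, cap supercritical
    flow at the critical velocity (sign of the flow preserved)."""
    return v_mix if abs(v_mix) <= v_crit else math.copysign(v_crit, v_mix)
```

    In the code proper this check is made per momentum cell each time step, so the limiter only engages while the break flow would otherwise exceed the choking condition.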

  8. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    Science.gov (United States)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-01-01

    This paper presents an overview of vertically integrated comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall predictive models consist of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model and a residual stress model, which can be used for predicting the mechanical properties of parts additively manufactured by directed energy deposition with blown powder, as well as by other additive manufacturing processes. The critical governing equations of each model and how the various modules are connected are illustrated. Various illustrative results along with corresponding experimental validation results are presented to illustrate the capabilities and fidelity of the models. The good correlations with experimental results prove that the integrated models can be used to design metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.

  10. Comprehensive nuclear model calculations: theory and use of the GNASH code

    International Nuclear Information System (INIS)

    Young, P.G.; Arthur, E.D.; Chadwick, M.B.

    1998-01-01

    The theory and operation of the nuclear reaction theory computer code GNASH is described, and detailed instructions are presented for code users. The code utilizes statistical Hauser-Feshbach theory with full angular momentum conservation and includes corrections for preequilibrium effects. This version is expected to be applicable for incident particle energies between 1 keV and 150 MeV and for incident photon energies to 140 MeV. General features of the code, the nuclear models that are utilized, input parameters needed to perform calculations, and the output quantities from typical problems are described in detail. A number of new features compared to previous versions are described in this manual, including the following: (1) inclusion of multiple preequilibrium processes, which allows the model calculations to be performed above 50 MeV; (2) a capability to calculate photonuclear reactions; (3) a method for determining the spin distribution of residual nuclei following preequilibrium reactions; and (4) a description of how preequilibrium spectra calculated with the FKK theory can be utilized (the 'FKK-GNASH' approach). The computational structure of the code and the subroutines and functions that are called are summarized as well. Two detailed examples are considered: 14-MeV neutrons incident on 93Nb and 12-MeV neutrons incident on 238U. The former example illustrates a typical calculation aimed at determining neutron, proton, and alpha emission spectra from 14-MeV reactions, and the latter example demonstrates use of the fission model in GNASH. Results from a variety of other cases are illustrated. (author)

  11. Surface Modeling, Solid Modeling and Finite Element Modeling. Analysis Capabilities of Computer-Assisted Design and Manufacturing Systems.

    Science.gov (United States)

    Nee, John G.; Kare, Audhut P.

    1987-01-01

    Explores several concepts in computer-assisted design/computer-assisted manufacturing (CAD/CAM). Defines, evaluates, reviews and compares advanced computer-aided geometric modeling and analysis techniques. Presents the results of a survey to establish the capabilities of minicomputer-based systems with the CAD/CAM packages evaluated. (CW)

  12. Improved Flow Modeling in Transient Reactor Safety Analysis Computer Codes

    International Nuclear Information System (INIS)

    Holowach, M.J.; Hochreiter, L.E.; Cheung, F.B.

    2002-01-01

    A method of accounting for fluid-to-fluid shear between calculational cells over the wide range of flow conditions envisioned in reactor safety studies has been developed such that it may be easily implemented in a computer code such as COBRA-TF for more detailed subchannel analysis. At a given nodal height in the calculational model, equivalent hydraulic diameters are determined for each specific calculational cell using either laminar or turbulent velocity profiles. The velocity profile may be determined from a separate CFD (Computational Fluid Dynamics) analysis, experimental data, or existing semi-empirical relationships. The equivalent hydraulic diameter is then applied to the wall drag force calculation so as to determine the appropriate equivalent fluid-to-fluid shear caused by the wall for each cell based on the input velocity profile. This means of assigning the shear to a specific cell is independent of the actual wetted perimeter and flow area of the calculational cell. The use of this equivalent hydraulic diameter for each cell within a calculational subchannel results in a representative velocity profile, which can further increase the accuracy and detail of heat transfer and fluid flow modeling within the subchannel when utilizing a thermal-hydraulics systems analysis computer code such as COBRA-TF. Utilizing COBRA-TF with the flow modeling enhancement results in increased accuracy for a coarse-mesh model without the significantly greater computational and time requirements of a full-scale 3D (three-dimensional) transient CFD calculation. (authors)
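
    The role of the equivalent hydraulic diameter in the wall-drag closure can be illustrated with a toy version; the laminar friction law, function name and property values below are our assumptions, not the COBRA-TF closure:

```python
def wall_drag_per_volume(v_cell, d_h_eq, rho=1000.0, mu=1.0e-3):
    """Wall drag per unit volume for one calculational cell [N/m^3].

    d_h_eq is the cell's equivalent hydraulic diameter, derived from a
    velocity profile (CFD, data or a correlation) rather than from the
    cell's geometric wetted perimeter and flow area, as the abstract
    describes.  A laminar Darcy friction factor f = 64/Re is assumed.
    """
    re = rho * abs(v_cell) * d_h_eq / mu        # cell Reynolds number
    f = 64.0 / max(re, 1.0)                     # guard against v_cell = 0
    return f * rho * v_cell * abs(v_cell) / (2.0 * d_h_eq)
```

    The drag opposes the flow direction (the v*|v| form keeps the sign), and changing only d_h_eq retunes the shear assigned to a cell without touching its geometry.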

  13. Co-firing biomass and coal - progress in CFD modelling capabilities

    DEFF Research Database (Denmark)

    Kær, Søren Knudsen; Rosendahl, Lasse Aistrup; Yin, Chungen

    2005-01-01

    This paper discusses the development of user-defined FLUENT™ sub-models to improve the modelling capabilities in the area of large biomass particle motion and conversion. Focus is put on a model that includes the influence of particle size and shape on the reactivity by resolving intra-particle gradients. The advanced reaction model predicts moisture and volatiles release characteristics that differ significantly from those found with a 0-dimensional model, partly due to the processes occurring in parallel rather than sequentially. This is demonstrated for a test case that illustrates single...

  14. Validation of film dryout model in a three-fluid code FIDAS

    International Nuclear Information System (INIS)

    Sugawara, Satoru

    1989-11-01

    An analytical prediction model for the critical heat flux (CHF) has been developed on the basis of a film dryout criterion due to droplet deposition and entrainment in annular mist flow. CHF in round tubes was analyzed by the Film Dryout Analysis Code in Subchannels, FIDAS, which is based on a three-fluid, three-field formulation and the newly developed film dryout model. Predictions by FIDAS were compared with worldwide experimental data on CHF obtained in water and Freon for uniformly and non-uniformly heated tubes under vertical upward flow conditions. Furthermore, the CHF prediction capability of FIDAS was compared with those of other film dryout models for annular flow and with Katto's CHF correlation. The predictions of FIDAS are in sufficient agreement with the experimental CHF data, and show better agreement than the other film dryout models and the empirical correlation of Katto. (author)
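
    The film dryout criterion amounts to marching up the heated tube, integrating the liquid-film mass balance, and locating CHF where the film flow first vanishes. The sketch below uses constant deposition, entrainment and evaporation rates as a stand-in for the three-field closures of FIDAS; all names and numbers are illustrative:

```python
def film_dryout_length(w_f0, dep, ent, evap, dz=0.01, z_max=5.0):
    """March up the tube integrating dW_f/dz = D - E - G_evap.

    w_f0 -- film flow rate at the start of the annular region [kg/s]
    dep, ent, evap -- deposition, entrainment and evaporation rates,
    here taken constant per unit length [kg/(s*m)].
    Returns the dryout (CHF) location in metres, or None if the film
    survives to the end of the tube.
    """
    w_f, z = w_f0, 0.0
    while z < z_max:
        w_f += dz * (dep - ent - evap)   # film mass balance over one step
        z += dz
        if w_f <= 0.0:
            return z                     # film exhausted: dryout here
    return None
```

    In the real model the three rates vary along the tube with the local three-fluid solution, which is what makes the criterion sensitive to non-uniform heating.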

  15. Auditory information coding by modeled cochlear nucleus neurons.

    Science.gov (United States)

    Wang, Huan; Isik, Michael; Borst, Alexander; Hemmert, Werner

    2011-06-01

    In this paper we use information theory to quantify the information in the output spike trains of modeled cochlear nucleus globular bushy cells (GBCs). GBCs are part of the sound localization pathway. They are known for their precise temporal processing, and they code amplitude modulations with high fidelity. Here we investigated the information transmission for a natural sound, a recorded vowel. We conclude that the maximum information transmission rate for a single neuron was close to 1,050 bits/s, which corresponds to approximately 5.8 bits per spike. For quasi-periodic signals like voiced speech, the transmitted information saturated as word duration increased. In general, approximately 80% of the available information from the spike trains was transmitted within about 20 ms. Transmitted information for speech signals concentrated around formant frequency regions. The efficiency of neural coding was above 60% up to the highest temporal resolution we investigated (20 μs). The increase in transmitted information at that precision indicates that these neurons are able to code information with extremely high fidelity, which is required for sound localization. On the other hand, only 20% of the information was captured when the temporal resolution was reduced to 4 ms. As the temporal resolution of most speech recognition systems is limited to less than 10 ms, this massive information loss might be one of the reasons for the lack of noise robustness of these systems.
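
    The resolution dependence reported here is typically quantified with direct-method entropy estimates over binned spike "words". A minimal sketch of such an estimator (not the authors' exact procedure, and with no bias correction) is:

```python
import math
from collections import Counter

def word_entropy_rate(spike_train, word_len, dt):
    """Entropy rate (bits/s) of a binary spike train.

    The train is cut into non-overlapping words of word_len bins of
    width dt seconds; the naive plug-in word entropy is divided by the
    word duration.  Finer dt resolves more temporal structure, which is
    how resolution-dependent information transmission is probed.
    """
    words = [tuple(spike_train[i:i + word_len])
             for i in range(0, len(spike_train) - word_len + 1, word_len)]
    n = len(words)
    counts = Counter(words)
    h_bits = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h_bits / (word_len * dt)
```

    A perfectly repetitive train carries zero entropy rate, while a train with two equally likely words of duration 20 ms carries 50 bits/s; real estimates must additionally subtract the noise entropy and correct for finite sampling.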

  16. An improved steam generator model for the SASSYS code

    International Nuclear Information System (INIS)

    Pizzica, P.A.

    1989-01-01

    A new steam generator model has been developed for the SASSYS computer code, which analyzes accident conditions in a liquid-metal-cooled fast reactor. It has been incorporated into the new SASSYS balance-of-plant model, but it can also function on a stand-alone basis. The steam generator can be used in a once-through mode, or a variant of the model can be used as a separate evaporator and a superheater with a recirculation loop. The new model provides for an exact steady-state solution as well as the transient calculation. There was a need for a faster and more flexible model than the old steam generator model. The new model provides more detail with its multi-node treatment, as opposed to the previous model's one-node-per-region approach. Numerical instability problems, which were the result of cell-centered spatial differencing, fully explicit time differencing, and the moving-boundary treatment of the boiling crisis point in the boiling region, have been reduced. This leads to an increase in speed, as larger time steps can now be taken. The new model is an improvement in many respects. 2 refs., 3 figs

  17. Modeling of the CTEx subcritical unit using MCNPX code

    International Nuclear Information System (INIS)

    Santos, Avelino; Silva, Ademir X. da; Rebello, Wilson F.; Cunha, Victor L. Lassance

    2011-01-01

    The present work aims at simulating the subcritical unit of the Army Technology Center (CTEx), namely the ARGUS pile (subcritical uranium-graphite arrangement), by using the computational code MCNPX. Once such modeling is finished, it can be used in k-effective calculations for systems using natural uranium as fuel, for instance. ARGUS is a subcritical assembly which uses reactor-grade graphite as moderator of fission neutrons and metallic uranium fuel rods with aluminum cladding. The pile is driven by an Am-Be spontaneous neutron source. In order to achieve a higher value of k-eff, a higher concentration of U-235 could be proposed, provided k-eff safely remains below one. (author)

  18. Status of emergency spray modelling in the integral code ASTEC

    International Nuclear Information System (INIS)

    Plumecocq, W.; Passalacqua, R.

    2001-01-01

    Containment spray systems are emergency systems that would be used in very low probability events which may lead to severe accidents in Light Water Reactors. In most cases, the primary function of the spray would be to remove heat and condense steam in order to reduce pressure and temperature in the containment building. The spray would also wash out fission products (aerosols and gaseous species) from the containment atmosphere. The efficiency of the spray system in containment depressurization, as well as in the removal of aerosols during a severe accident, depends on the evolution of the spray droplet size distribution with height in the containment, due to kinetic and thermal relaxation, gravitational agglomeration and mass transfer with the gas. A model has been developed taking into account all of these phenomena. This model has been implemented in the ASTEC code, with a validation of the droplet relaxation against the CARAIDAS experiment (IPSN). Applications of this modelling to a PWR 900 during a severe accident, with special emphasis on the effect of spray on containment hydrogen distribution, have been performed in a multi-compartment configuration with the ASTEC V0.3 code. (author)

  19. 49 CFR 41.120 - Acceptable model codes.

    Science.gov (United States)

    2010-10-01

    ... 1991 International Conference of Building Officials (ICBO) Uniform Building Code, published by the... Supplement to the Building Officials and Code Administrators International (BOCA) National Building Code, published by the Building Officials and Code Administrators, 4051 West Flossmoor Rd., Country Club Hills...

  20. Influential input parameters for reflood model of MARS code

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Deog Yeon; Bang, Young Seok [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2012-10-15

    Best Estimate (BE) calculation has been used more broadly in nuclear industries and regulations to reduce the significant conservatism in evaluating Loss of Coolant Accidents (LOCA). The reflood model has been identified as one of the problems in BE calculation. The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) program of OECD/NEA is to make progress on the issue of quantifying the uncertainty of the physical models in system thermal-hydraulic codes by considering experimental results, especially for reflood. It is important to establish a methodology to identify and select the parameters influential on the response of reflood phenomena following a Large Break LOCA. To this end, a reference calculation and a sensitivity analysis to select the dominant influential parameters for the FEBA experiment are performed.

  1. Modeling the PUSPATI TRIGA Reactor using MCNP code

    International Nuclear Information System (INIS)

    Mohamad Hairie Rabir; Mark Dennis Usang; Naim Syauqi Hamzah; Julia Abdul Karim; Mohd Amin Sharifuldin Salleh

    2012-01-01

    The 1 MW TRIGA MARK II research reactor at the Malaysian Nuclear Agency achieved initial criticality on June 28, 1982. The reactor is designed to effectively support various fields of basic nuclear research, manpower training, and production of radioisotopes. This paper describes the reactor parameter calculations for the PUSPATI TRIGA Reactor (RTP), focusing on the application of the developed 3D reactor model for criticality calculation, analysis of power and neutron flux distributions, and a depletion study of TRIGA fuel. The 3D continuous-energy Monte Carlo code MCNP was used to develop a versatile and accurate full model of the TRIGA reactor. The model represents in detail all important components of the core and shielding with literally no physical approximation. (author)

  2. The capability and constraint model of recoverability: An integrated theory of continuity planning.

    Science.gov (United States)

    Lindstedt, David

    2017-01-01

    While there are best practices, good practices, regulations and standards for continuity planning, there is no single model to collate and sort their various recommended activities. To address this deficit, this paper presents the capability and constraint model of recoverability - a new model to provide an integrated foundation for business continuity planning. The model is non-linear in both construct and practice, thus allowing practitioners to remain adaptive in its application. The paper presents each facet of the model, outlines the model's use in both theory and practice, suggests a subsequent approach that arises from the model, and discusses some possible ramifications to the industry.

  3. Effective modeling of hydrogen mixing and catalytic recombination in containment atmosphere with an Eulerian Containment Code

    International Nuclear Information System (INIS)

    Bott, E.; Frepoli, C.; Monti, R.; Notini, V.; Carcassi, M.; Fineschi, F.; Heitsch, M.

    1999-01-01

    Large amounts of hydrogen can be generated in the containment of a nuclear power plant following a postulated accident with significant fuel damage. Different strategies have been proposed and implemented to prevent violent hydrogen combustion. An attractive one aims to eliminate hydrogen without burning processes; it is based on the use of catalytic hydrogen recombiners. This paper describes a simulation methodology being developed by Ansaldo to support the application of the above strategy, in the frame of two projects sponsored by the Commission of the European Communities within the IV Framework Program on Reactor Safety. The organizations involved also include the DCMN of Pisa University (Italy), the Battelle Institute and GRS (Germany), and the Polytechnic University of Madrid (Spain). The aim is to make available a simulation approach suitable for containment design at the industrial level (i.e., with reasonable computer running times) and capable of correctly capturing the relevant phenomena (e.g., multiple convective flow patterns, and the hydrogen, air and steam distribution in the containment atmosphere as determined by containment structures and geometries as well as by heat and mass sources and sinks). Eulerian algorithms provide the capability of three-dimensional modelling with fairly accurate prediction, although lower than that of CFD codes with a full Navier-Stokes formulation. Open linking of an Eulerian code such as GOTHIC to a full Navier-Stokes CFD code such as CFX 4.1 allows the solving strategies of the Eulerian code itself to be tuned dynamically. The effort in progress is an application of this innovative methodology to detailed hydrogen recombination simulation and a validation of the approach itself by reproducing experimental data. (author)

  4. A wide-range model of two-group cross sections in the dynamics code HEXTRAN

    International Nuclear Information System (INIS)

    Kaloinen, E.; Peltonen, J.

    2002-01-01

    In dynamic analyses the thermal-hydraulic conditions within the reactor core may vary widely, which sets special requirements on the modeling of cross sections. The standard model in the dynamics code HEXTRAN is the same as in the static design code HEXBU-3D/MOD5. It is based on a linear and second-order fitting of two-group cross sections to fuel and moderator temperature, moderator density and boron density. A new, wide-range model of cross sections, developed at Fortum Nuclear Services for HEXBU-3D/MOD6, has been included as an option in HEXTRAN. In this model the nodal cross sections are constructed from seven state variables in a polynomial of more than 40 terms. The coefficients of the polynomial are created by a least-squares fit to the results of a large number of fuel assembly calculations. Depending on the choice of state variables for the spectrum calculations, the new cross-section model is capable of covering local conditions from cold zero power to boiling at full power. The fifth dynamic benchmark problem of AER is analyzed with the new option and the results are compared to calculations with the standard cross-section model in HEXTRAN. (Authors)
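
    The fitting step at the heart of such a model is an ordinary least-squares fit of a polynomial in the state variables to assembly-calculation results. A one-variable toy version (the actual HEXTRAN option uses seven state variables and more than 40 terms) might look like:

```python
import numpy as np

def fit_xsec(states, sigma, degree=2):
    """Least-squares polynomial fit of a cross section to one state variable.

    states -- sampled values of the state variable (e.g. moderator density)
    sigma  -- cross sections from assembly (spectrum) calculations
    Returns polynomial coefficients, highest degree first.
    """
    A = np.vander(states, degree + 1)               # design matrix
    coef, *_ = np.linalg.lstsq(A, sigma, rcond=None)
    return coef

def eval_xsec(coef, state):
    """Evaluate the fitted nodal cross section at a given local state."""
    return np.polyval(coef, state)
```

    With many state variables the design matrix simply gains columns for the extra terms; the wide range of the model comes from where the assembly calculations are sampled, not from the fitting machinery itself.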

  5. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    International Nuclear Information System (INIS)

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.

    1999-01-01

    MCV has the capability of performing iterated-source (criticality), multiplied-fixed-source, and fixed-source calculations. It uses a highly detailed continuous-energy (as opposed to multigroup) representation of neutron histories and cross-section data. The spatial modeling is fully three-dimensional (3-D), and any geometrical region that can be described by quadric surfaces may be represented. The primary results are region-wise reaction rates, neutron production rates, slowing-down densities, fluxes, leakages, and, when appropriate, the eigenvalue or multiplication factor. Region-wise nuclidic reaction rates are also computed, which may then be used by other modules in the system to determine time-dependent nuclide inventories so that RACER can perform depletion calculations. Furthermore, derived quantities such as ratios and sums of primary quantities and/or other derived quantities may also be calculated. MCV performs statistical analyses on output quantities, computing estimates of the 95% confidence intervals as well as indicators of the reliability of these estimates. The remainder of this chapter provides an overview of the MCV algorithm. The following three chapters describe the MCV mathematical, physical, and statistical treatments in more detail. Specifically, Chapter 2 discusses topics related to tracking the histories, including geometry modeling, how histories are moved through the geometry, and variance reduction techniques related to the tracking process. Chapter 3 describes the nuclear data and physical models employed by MCV. Chapter 4 discusses the tallies, statistical analyses, and edits. Chapter 5 provides some guidance as to how to run the code, and Chapter 6 is a list of the code input options.
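
    The basic shape of the confidence-interval estimate mentioned above is the familiar mean ± z·s/√n on a batch of tally samples; MCV's actual estimators and reliability indicators are more elaborate than this sketch, and all names here are ours:

```python
import math

def tally_mean_ci(samples, z95=1.96):
    """Sample mean of a Monte Carlo tally and its 95% half-width.

    Uses the unbiased sample variance and a normal-theory interval,
    which is the standard batch-statistics approximation.
    """
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, z95 * math.sqrt(var / n)
```

    Reliability indicators then ask whether the half-width is still shrinking like 1/√n as batches accumulate; if not, the interval itself is suspect.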

  6. Development of multidimensional two-fluid model code ACE-3D for evaluation of constitutive equations

    Energy Technology Data Exchange (ETDEWEB)

    Ohnuki, Akira; Akimoto, Hajime [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kamo, Hideki

    1996-11-01

    In order to perform design calculations for a passive safety reactor with good accuracy using a multidimensional two-fluid model, we developed an analysis code, ACE-3D, which can be applied to the evaluation of constitutive equations. The developed code has the following features: 1. The basic equations are based on a 3-dimensional two-fluid model, and either the orthogonal or the cylindrical coordinate system can be selected. The fluid system is air-water or steam-water. 2. The basic equations are formulated by the finite-difference scheme on a staggered mesh. The convection term is formulated by an upwind scheme and the diffusion term by a central-difference scheme. 3. A semi-implicit numerical scheme is adopted, and the mass and energy equations are treated equally in the convergence steps for the Jacobi equations. 4. The interfacial stress term consists of drag force, lift force, turbulent dispersion force, wall force and virtual mass force. 5. A κ-ε turbulence model for bubbly flow is incorporated as the turbulence model. The predictive capability of ACE-3D has been verified using a database for bubbly flow in a small-scale vertical pipe. In future, the constitutive equations will be improved with a database for a large vertical pipe developed in our laboratory, and we plan to construct a reliable analytical tool through this improvement work, the increase of calculational speed with vector and parallel processing, the assessment of the phase change terms, and so on. This report describes the outline of the basic equations and the finite-difference equations in the ACE-3D code, and also the outline of the program structure. In addition, the results of the assessments of the ACE-3D code for the small-scale pipe are summarized. (author)

  7. Development of multidimensional two-fluid model code ACE-3D for evaluation of constitutive equations

    International Nuclear Information System (INIS)

    Ohnuki, Akira; Akimoto, Hajime; Kamo, Hideki.

    1996-11-01

    In order to perform design calculations for a passive safety reactor with good accuracy using a multidimensional two-fluid model, we developed an analysis code, ACE-3D, which can be applied to the evaluation of constitutive equations. The developed code has the following features: 1. The basic equations are based on a 3-dimensional two-fluid model, and either the orthogonal or the cylindrical coordinate system can be selected. The fluid system is air-water or steam-water. 2. The basic equations are formulated by the finite-difference scheme on a staggered mesh. The convection term is formulated by an upwind scheme and the diffusion term by a central-difference scheme. 3. A semi-implicit numerical scheme is adopted, and the mass and energy equations are treated equally in the convergence steps for the Jacobi equations. 4. The interfacial stress term consists of drag force, lift force, turbulent dispersion force, wall force and virtual mass force. 5. A κ-ε turbulence model for bubbly flow is incorporated as the turbulence model. The predictive capability of ACE-3D has been verified using a database for bubbly flow in a small-scale vertical pipe. In future, the constitutive equations will be improved with a database for a large vertical pipe developed in our laboratory, and we plan to construct a reliable analytical tool through this improvement work, the increase of calculational speed with vector and parallel processing, the assessment of the phase change terms, and so on. This report describes the outline of the basic equations and the finite-difference equations in the ACE-3D code, and also the outline of the program structure. In addition, the results of the assessments of the ACE-3D code for the small-scale pipe are summarized. (author)

  8. C code generation applied to nonlinear model predictive control for an artificial pancreas

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Jørgensen, John Bagterp

    2017-01-01

    This paper presents a method to generate C code from MATLAB code applied to a nonlinear model predictive control (NMPC) algorithm. The C code generation uses the MATLAB Coder Toolbox. It can drastically reduce the time required for development compared to a manual porting of code from MATLAB to C...

  9. RAMONA-4B a computer code with three-dimensional neutron kinetics for BWR and SBWR system transient - models and correlations

    Energy Technology Data Exchange (ETDEWEB)

    Rohatgi, U.S.; Cheng, H.S.; Khan, H.J.; Mallen, A.N.; Neymotin, L.Y.

    1998-03-01

    This document describes the major modifications and improvements made to the modeling of the RAMONA-3B/MOD0 code since 1981, when the code description and assessment report was completed. The new version of the code is RAMONA-4B. RAMONA-4B is a system transient code for application to different versions of the Boiling Water Reactor (BWR), such as the current BWR, the Advanced Boiling Water Reactor (ABWR), and the Simplified Boiling Water Reactor (SBWR). The code uses a three-dimensional neutron kinetics model coupled with a multichannel, non-equilibrium, drift-flux, two-phase flow formulation of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients and instability issues. Chapter 1 is an overview of the code's capabilities and limitations; Chapter 2 discusses the neutron kinetics modeling and the implementation of reactivity edits. Chapter 3 is an overview of the heat conduction calculations. Chapter 4 presents modifications to the thermal-hydraulics model of the vessel, recirculation loop, steam separators, boron transport, and SBWR-specific components. Chapter 5 describes the modeling of the plant control and safety systems. Chapter 6 presents the modeling of the balance of plant (BOP). Chapter 7 describes the mechanistic containment model in the code. The content of this report is complementary to the RAMONA-3B code description and assessment document. 53 refs., 81 figs., 13 tabs.

  10. Kinetic models of gene expression including non-coding RNAs

    Energy Technology Data Exchange (ETDEWEB)

    Zhdanov, Vladimir P., E-mail: zhdanov@catalysis.r

    2011-03-15

    In cells, genes are transcribed into mRNAs, and the latter are translated into proteins. Due to the feedbacks between these processes, the kinetics of gene expression may be complex even in the simplest genetic networks. The corresponding models have already been reviewed in the literature. A new avenue in this field is related to the recognition that the conventional scenario of gene expression is fully applicable only to prokaryotes whose genomes consist of tightly packed protein-coding sequences. In eukaryotic cells, in contrast, such sequences are relatively rare, and the rest of the genome includes numerous transcript units representing non-coding RNAs (ncRNAs). During the past decade, it has become clear that such RNAs play a crucial role in gene expression and accordingly influence a multitude of cellular processes both in the normal state and during diseases. The numerous biological functions of ncRNAs are based primarily on their abilities to silence genes via pairing with a target mRNA and subsequently preventing its translation or facilitating degradation of the mRNA-ncRNA complex. Many other abilities of ncRNAs have been discovered as well. Our review is focused on the available kinetic models describing the mRNA, ncRNA and protein interplay. In particular, we systematically present the simplest models without kinetic feedbacks, models containing feedbacks and predicting bistability and oscillations in simple genetic networks, and models describing the effect of ncRNAs on complex genetic networks. Mathematically, the presentation is based primarily on temporal mean-field kinetic equations. The stochastic and spatio-temporal effects are also briefly discussed.
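
    The silencing mechanism at the core of these models, an ncRNA pairing with its target mRNA and removing both from play, can be written as a small set of mean-field kinetic equations. The sketch below takes one explicit Euler step; the function name and rate constants are illustrative, not taken from the review:

```python
def step_gene_circuit(state, dt, k_m=1.0, k_s=0.8, k_p=2.0,
                      d_m=0.1, d_s=0.1, d_p=0.05, q=1.5):
    """One Euler step of a minimal mRNA (m) / ncRNA (s) / protein (p) model.

    dm/dt = k_m - d_m*m - q*m*s   (synthesis, decay, pairing with ncRNA)
    ds/dt = k_s - d_s*s - q*m*s   (synthesis, decay, pairing with mRNA)
    dp/dt = k_p*m - d_p*p         (translation from free mRNA, decay)
    """
    m, s, p = state
    return (m + dt * (k_m - d_m * m - q * m * s),
            s + dt * (k_s - d_s * s - q * m * s),
            p + dt * (k_p * m - d_p * p))
```

    Iterating to steady state with and without ncRNA synthesis shows the silencing effect: the free-mRNA level settles far below the k_m/d_m value it would reach with k_s = 0.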

  11. Applications of Transport/Reaction Codes to Problems in Cell Modeling; TOPICAL

    International Nuclear Information System (INIS)

    MEANS, SHAWN A.; RINTOUL, MARK DANIEL; SHADID, JOHN N.

    2001-01-01

    We demonstrate two specific examples that show how our existing capabilities in solving large systems of partial differential equations associated with transport/reaction systems can be easily applied to outstanding problems in computational biology. First, we examine a three-dimensional model for calcium wave propagation in a Xenopus laevis frog egg and verify that a proposed model for the distribution of calcium release sites agrees with experimental results as a function of both space and time. Next, we create a model of the neuron's terminus based on experimental observations and show that the sodium-calcium exchanger is not the route of sodium's modulation of neurotransmitter release. These state-of-the-art simulations were performed on massively parallel platforms and required almost no modification of existing Sandia codes.

  12. On boundary layer modelling using the ASTEC code

    International Nuclear Information System (INIS)

    Smith, B.L.

    1991-07-01

    The modelling of fluid boundary layers adjacent to non-slip, heated surfaces using the ASTEC code is described. The principal boundary layer characteristics are derived using simple dimensional arguments, and these are developed into criteria for optimum placement of the computational mesh to achieve realistic simulation. In particular, the need for externally imposed drag and heat transfer correlations as a function of the local mesh concentration is discussed in the context of both laminar and turbulent flow conditions. Special emphasis is placed in the latter case on the (k-ε) turbulence model, which is standard in the code. As far as possible, the analyses are pursued from first principles, so that no comprehensive knowledge of the history of the subject is required for the general ASTEC user to derive practical advice from the document. Some attention is paid to the use of heat transfer correlations for internal solid/fluid surfaces, whose treatment is not straightforward in ASTEC. It is shown that three formulations are possible to effect the heat transfer, called Explicit, Jacobian and Implicit. The particular advantages and disadvantages of each are discussed with regard to numerical stability and computational efficiency. (author) 18 figs., 1 tab., 39 refs

  13. Physicochemical analog for modeling superimposed and coded memories

    Science.gov (United States)

    Ensanian, Minas

    1992-07-01

    The mammalian brain is distinguished by a lifetime of memories being stored within the same general region of physicochemical space, with two extraordinary features. First, memories are, to varying degrees, superimposed as well as coded. Second, instantaneous recall of past events can often be triggered by relatively simple, and seemingly unrelated, sensory cues. For the purposes of attempting to mathematically model such complex behavior, and for gaining additional insights, it would be highly advantageous to be able to simulate or mimic similar behavior in a nonbiological entity where some analogical parameters of interest can reasonably be controlled. It has recently been discovered that in nonlinear accumulative metal fatigue, memories (related to mechanical deformation) can be superimposed and coded in the crystal lattice, and that memory, that is, the total number of stress cycles, can be recalled (determined) by scanning not the surfaces but the 'edges' of the objects. The new scanning technique, known as electrotopography (ETG), now makes the state-space modeling of metallic networks possible. The author provides an overview of the new field and outlines the areas that are of immediate interest to the science of artificial neural networks.

  14. An Observation Capability Metadata Model for EO Sensor Discovery in Sensor Web Enablement Environments

    Directory of Open Access Journals (Sweden)

    Chuli Hu

    2014-10-01

    Accurate and fine-grained discovery of diverse Earth observation (EO) sensors ensures a comprehensive response to collaborative, observation-required emergency tasks. Such discovery remains a challenge in an EO sensor web environment. In this study, we propose an EO sensor observation capability metadata model that reuses and extends existing sensor observation-related metadata standards to enable the accurate and fine-grained discovery of EO sensors. The proposed model is composed of five sub-modules: ObservationBreadth, ObservationDepth, ObservationFrequency, ObservationQuality and ObservationData. The model is applied to different types of EO sensors and is formalized in the Open Geospatial Consortium Sensor Model Language 1.0. The GeosensorQuery prototype retrieves the qualified EO sensors based on the provided geo-event. An actual application to flood emergency observation in the Yangtze River Basin in China is conducted, and the results indicate that the sensor inquiry can accurately achieve fine-grained discovery of qualified EO sensors and obtain enriched observation capability information. In summary, the proposed model enables an efficient encoding system that ensures a minimum unification to represent the observation capabilities of EO sensors, and it functions as a foundation for their efficient discovery. In addition, the definition and development of this EO sensor observation capability metadata model is a helpful step toward extending the Sensor Model Language (SensorML) 2.0 profile for describing the observation capabilities of EO sensors.
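
    The five sub-modules can be sketched as a simple data model with a coarse query filter in the spirit of the GeosensorQuery prototype. Field names and the `qualifies` criteria are illustrative assumptions, not the SensorML encoding used in the paper.

    ```python
    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical sketch of the five sub-modules of the observation
    # capability metadata model; fields are invented for illustration.

    @dataclass
    class ObservationBreadth:
        swath_width_km: float
        spatial_coverage: str

    @dataclass
    class ObservationDepth:
        spatial_resolution_m: float
        spectral_bands: List[str] = field(default_factory=list)

    @dataclass
    class ObservationFrequency:
        revisit_period_h: float

    @dataclass
    class ObservationQuality:
        cloud_cover_max_pct: float

    @dataclass
    class ObservationData:
        formats: List[str] = field(default_factory=list)

    @dataclass
    class ObservationCapability:
        sensor_id: str
        breadth: ObservationBreadth
        depth: ObservationDepth
        frequency: ObservationFrequency
        quality: ObservationQuality
        data: ObservationData

        def qualifies(self, max_resolution_m: float, max_revisit_h: float) -> bool:
            """Coarse filter: does this sensor meet the task's requirements?"""
            return (self.depth.spatial_resolution_m <= max_resolution_m
                    and self.frequency.revisit_period_h <= max_revisit_h)
    ```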

  15. 7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes

    Science.gov (United States)

    2010-01-01

    Exhibit E to Subpart A - Voluntary National Model Building Codes. The following documents address the health and safety aspects of buildings and related structures and are voluntary national model building codes as defined in § 1924.4(h)(2) of...

  16. Verification of the New FAST v8 Capabilities for the Modeling of Fixed-Bottom Offshore Wind Turbines: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Barahona, B.; Jonkman, J.; Damiani, R.; Robertson, A.; Hayman, G.

    2014-12-01

    Coupled dynamic analysis has an important role in the design of offshore wind turbines because the systems are subject to complex operating conditions from the combined action of waves and wind. The aero-hydro-servo-elastic tool FAST v8 is framed in a novel modularization scheme that facilitates such analysis. Here, we present the verification of new capabilities of FAST v8 for modeling fixed-bottom offshore wind turbines. We analyze a series of load cases with both wind and wave loads and compare the results against those from the previous international code comparison projects: the International Energy Agency (IEA) Wind Task 23 Subtask 2 Offshore Code Comparison Collaboration (OC3) and the IEA Wind Task 30 Offshore Code Comparison Collaboration Continuation (OC4). The verification is performed using the NREL 5-MW reference turbine supported by monopile, tripod, and jacket substructures. The substructure structural-dynamics models are built within the new SubDyn module of FAST v8, which uses a linear finite-element beam model with Craig-Bampton dynamic system reduction. This allows the modal properties of the substructure to be synthesized and coupled to hydrodynamic loads and tower dynamics. The hydrodynamic loads are calculated using a new strip-theory approach for multimember substructures in the updated HydroDyn module of FAST v8. These modules are linked to the rest of FAST through the new coupling scheme involving mapping between module-independent spatial discretizations and a numerically rigorous implicit solver. The results show that the new structural dynamics, hydrodynamics, and coupled solutions compare well to the results from the previous code comparison projects.

  17. Isotopic modelling using the ENIGMA-B fuel performance code

    International Nuclear Information System (INIS)

    Rossiter, G.D.; Cook, P.M.A.; Weston, R.

    2001-01-01

    A number of experimental programmes by BNFL and other MOX fabricators have now shown that the in-pile performance of MOX fuel is generally similar to that of conventional UO2 fuel. Models based on UO2 fuel experience form a good basis for a description of MOX fuel behaviour. However, an area where the performance of MOX fuel is sufficiently different from that of UO2 to warrant model changes is in the radial power and burnup profile. The differences in radial power and burnup profile arise from the presence of significant concentrations of plutonium in MOX fuel, at beginning of life, and their subsequent evolution with burnup. Amongst other effects, plutonium has a greater neutron absorption cross-section than uranium. This paper focuses on the development of a new model for the radial power and burnup profile within a UO2 or MOX fuel rod, in which the underlying fissile isotope concentration distributions are tracked during irradiation. The new model has been incorporated into the ENIGMA-B fuel performance code and has been extended to track the isotopic concentrations of the fission gases, xenon and krypton. The calculated distributions have been validated against results from rod puncture measurements and electron probe micro-analysis (EPMA) linescans, performed during the M501 post irradiation examination (PIE) programme. The predicted gas inventory of the fuel/clad gap is compared with the isotopic composition measured during rod puncture and the measured radial distributions of burnup (from neodymium measurements) and plutonium in the fuel are compared with the calculated distributions. It is shown that there is good agreement between the code predictions and the measurements. (author)
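
    The idea of tracking fissile-isotope concentrations during irradiation can be sketched with a toy one-group depletion step: U-238 capture feeds Pu-239, which is itself depleted by absorption. The cross-sections and flux below are illustrative placeholders, not ENIGMA-B data or its actual solution scheme.

    ```python
    # Toy fissile-isotope tracking sketch (illustrative one-group constants).
    SIGMA_C_U238 = 0.9e-24   # U-238 capture cross-section, cm^2 (assumed)
    SIGMA_A_PU239 = 1.8e-21  # Pu-239 absorption cross-section, cm^2 (assumed)

    def burn(n_u238, n_pu239, flux, dt, steps):
        """Explicit Euler depletion: U-238 capture feeds Pu-239,
        Pu-239 is removed by absorption. Concentrations in atoms/cm^3."""
        for _ in range(steps):
            capture = SIGMA_C_U238 * flux * n_u238
            absorb = SIGMA_A_PU239 * flux * n_pu239
            n_u238 += -capture * dt
            n_pu239 += (capture - absorb) * dt
        return n_u238, n_pu239
    ```

    Over a long irradiation the Pu-239 concentration tends toward a quasi-equilibrium where production balances absorption, which is the behaviour driving the radial power profile changes described above.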

  18. Capability-based Access Control Delegation Model on the Federated IoT Network

    DEFF Research Database (Denmark)

    Anggorojati, Bayu; Mahalle, Parikshit N.; Prasad, Neeli R.

    2012-01-01

    Flexibility is an important property for a general access control system, and especially in the Internet of Things (IoT), where it can be achieved by access or authority delegation. Delegation mechanisms in access control that have been studied until now have been intended mainly for systems without resource constraints, such as web-based systems, and are not very suitable for a highly pervasive system such as the IoT. To this end, this paper presents an access delegation method with security considerations, based on the Capability-based Context Aware Access Control (CCAAC) model, intended for federated machine-to-machine communication or IoT networks. The main idea of our proposed model is that access delegation is realized by means of a capability propagation mechanism, incorporating context information as well as secure capability propagation under federated IoT environments. By using...
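
    The core of capability propagation can be sketched as follows: a capability names a subject and a set of rights, delegation may only pass on a subset of those rights, and an integrity tag protects each token. This is a hypothetical minimal illustration of the idea only; the actual CCAAC protocol, token fields, and cryptography differ, and the shared key here is an assumption.

    ```python
    import hashlib
    import hmac

    # Minimal capability-delegation sketch (not the CCAAC protocol).
    SECRET = b"issuer-key"  # assumed shared with the verifying authority

    def mint(subject, rights):
        """Create a capability token with an HMAC integrity tag."""
        payload = f"{subject}:{','.join(sorted(rights))}".encode()
        tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return {"subject": subject, "rights": set(rights), "tag": tag, "parent": None}

    def delegate(cap, new_subject, rights):
        """Delegate: the child may only receive a subset of the parent's rights."""
        granted = cap["rights"] & set(rights)
        child = mint(new_subject, granted)
        child["parent"] = cap  # keep the propagation chain for auditing
        return child

    def allows(cap, right):
        return right in cap["rights"]
    ```

    Attenuation on delegation (the set intersection) is what keeps authority from growing as a capability propagates through a federation.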

  19. Testing an integrated model of operations capabilities: an empirical study of Australian airlines

    NARCIS (Netherlands)

    Nand, Alka Ashwini; Singh, Prakash J.; Power, Damien

    2013-01-01

    Purpose - The purpose of this paper is to test the integrated model of operations strategy proposed by Schmenner and Swink to explain whether firms trade off or accumulate capabilities, taking into account their positions relative to their asset and operating frontiers.

  20. University-Industry Research Collaboration: A Model to Assess University Capability

    Science.gov (United States)

    Abramo, Giovanni; D'Angelo, Ciriaco Andrea; Di Costa, Flavia

    2011-01-01

    Scholars and policy makers recognize that collaboration between industry and the public research institutions is a necessity for innovation and national economic development. This work presents an econometric model which expresses the university capability for collaboration with industry as a function of size, location and research quality. The…

  1. Analysis of different containment models for IRIS small break LOCA, using GOTHIC and RELAP5 codes

    International Nuclear Information System (INIS)

    Papini, Davide; Grgic, Davor; Cammi, Antonio; Ricotti, Marco E.

    2011-01-01

    Advanced nuclear water reactors rely on containment behaviour in the realization of some of their passive safety functions. Steam condensation on containment walls, where non-condensable gas effects are significant, is an important feature of the new passive containment concepts, like the AP600/1000 ones. In this work the international reactor innovative and secure (IRIS) was taken as reference, and the relevant condensation phenomena involved within its containment were investigated with different computational tools. In particular, the IRIS containment response to a small break LOCA (SBLOCA) was calculated with the GOTHIC and RELAP5 codes. A simplified model of the IRIS containment drywell was implemented with RELAP5 according to a sliced approach, based on the two-pipe-with-junction concept, while it was addressed with GOTHIC using several modelling options, regarding both heat transfer correlations and volume and thermal structure nodalization. The influence on containment behaviour prediction was investigated in terms of drywell temperature and pressure response, heat transfer coefficient (HTC) and steam volume fraction distribution, and internal recirculating mass flow rate. The objective of the paper is to compare, in a preliminary way, the capability of the two codes in modelling the same postulated accident, and thus to check the results obtained with RELAP5 when applied in a situation not covered by its validation matrix (comprising SBLOCA and, to some extent, LBLOCA transients, but not explicitly the modelling of large dry containment volumes). The option to include or not droplets in the fluid mass flow discharged to the containment was the most influential parameter for the GOTHIC simulations. Despite some drawbacks, due, e.g., to a marked overestimation of internal natural recirculation, RELAP5 confirmed its capability to satisfactorily model the basic processes in the IRIS containment following a SBLOCA.

  2. Impurity seeding in ASDEX upgrade tokamak modeled by COREDIV code

    Energy Technology Data Exchange (ETDEWEB)

    Galazka, K.; Ivanova-Stanik, I.; Czarnecka, A.; Zagoerski, R. [Institute of Plasma Physics and Laser Microfusion, Warsaw (Poland); Bernert, M.; Kallenbach, A. [Max-Planck-Institut fuer Plasmaphysik, Garching (Germany); Collaboration: ASDEX Upgrade Team

    2016-08-15

    The self-consistent COREDIV code is used to simulate discharges in a tokamak plasma, especially the influence of impurities during nitrogen and argon seeding on the key plasma parameters. The calculations are performed with and without taking into account W prompt redeposition in the divertor area and are compared to the experimental results acquired on the ASDEX Upgrade tokamak (shots 29254 and 29257). For both impurities the modeling shows better agreement with the experiment in the case without prompt redeposition. This is attributed to the higher average tungsten concentration, which on the other hand seriously exceeds the experimental value. By turning the prompt redeposition process on, the W concentration is lowered, which, in turn, results in an underestimation of the radiative power losses. By analyzing the influence of the transport coefficients on the radiative power loss and average W concentration, it is concluded that the way to reconcile the opposing tendencies is to include the edge-localized mode flushing mechanism in the code, since it dominates the experimental particle and energy balance. Performing the calculations with both anomalous and neoclassical diffusion transport mechanisms included is also suggested. (copyright 2016 The Authors. Contributions to Plasma Physics published by Wiley-VCH Verlag GmbH and Co. KGaA, Weinheim.)

  3. The Advanced Modeling, Simulation and Analysis Capability Roadmap Vision for Engineering

    Science.gov (United States)

    Zang, Thomas; Lieber, Mike; Norton, Charles; Fucik, Karen

    2006-01-01

    This paper summarizes a subset of the Advanced Modeling, Simulation and Analysis (AMSA) Capability Roadmap that was developed for NASA in 2005. The AMSA Capability Roadmap Team was chartered "to identify what is needed to enhance NASA's capabilities to produce leading-edge exploration and science missions by improving engineering system development, operations, and science understanding through broad application of advanced modeling, simulation and analysis techniques." The AMSA roadmap stressed the need for integration, not just within the science, engineering and operations domains themselves, but also across these domains. Here we discuss the roadmap element pertaining to integration within the engineering domain, with a particular focus on implications for future observatory missions. The AMSA products supporting the system engineering function are mission information, bounds on information quality, and system validation guidance. The engineering roadmap element contains five sub-elements: (1) Large-Scale Systems Models, (2) Anomalous Behavior Models, (3) Advanced Uncertainty Models, (4) Virtual Testing Models, and (5) Space-Based Robotics Manufacture and Servicing Models.

  4. CODE's new solar radiation pressure model for GNSS orbit determination

    Science.gov (United States)

    Arnold, D.; Meindl, M.; Beutler, G.; Dach, R.; Schaer, S.; Lutz, S.; Prange, L.; Sośnica, K.; Mervart, L.; Jäggi, A.

    2015-08-01

    The Empirical CODE Orbit Model (ECOM) of the Center for Orbit Determination in Europe (CODE), which was developed in the early 1990s, is widely used in the International GNSS Service (IGS) community. For a rather long time, spurious spectral lines have been known to exist in geophysical parameters, in particular in the Earth Rotation Parameters (ERPs) and in the estimated geocenter coordinates, and these could recently be attributed to the ECOM. The effects grew creepingly with the increasing influence of the GLONASS system in recent years in the CODE analysis, which has been based on a rigorous combination of GPS and GLONASS since May 2003. In a first step we show that the problems associated with the ECOM are to the largest extent caused by GLONASS, which reached full deployment by the end of 2011. GPS-only, GLONASS-only, and combined GPS/GLONASS solutions using the observations in the years 2009-2011 of a global network of 92 combined GPS/GLONASS receivers were analyzed for this purpose. In a second step we review direct solar radiation pressure (SRP) models for GNSS satellites. We demonstrate that only even-order short-period harmonic perturbations occur along the Sun-satellite direction for GPS and GLONASS satellites, and only odd-order perturbations along the direction perpendicular to both the Sun-satellite vector and the spacecraft's solar panel axis. Based on this insight we assess in the third step the performance of four candidate orbit models for the future ECOM. The geocenter coordinates, the ERP differences w.r.t. the IERS 08 C04 series of ERPs, the misclosures for the midnight epochs of the daily orbital arcs, and scale parameters of Helmert transformations for station coordinates serve as quality criteria. The old and updated ECOM are validated in addition with satellite laser ranging (SLR) observations and by comparing the orbits to those of the IGS and other analysis centers. Based on all tests, we present a new extended ECOM which...
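
    The structure described above can be sketched as empirical accelerations in the D (Sun-satellite), Y (solar-panel axis) and B directions, with even-order harmonics of the Sun-referred argument of latitude in D and odd-order harmonics in B, as the perturbation analysis motivates. The coefficient values below are placeholders, not estimated ECOM parameters.

    ```python
    import math

    # Sketch of an extended-ECOM-style SRP acceleration (illustrative
    # coefficients; D carries even harmonics, B odd harmonics).

    def ecom_accel(du, D0=1e-7, D2c=1e-9, D2s=0.0,
                   Y0=5e-10, B0=2e-10, B1c=1e-10, B1s=0.0):
        """du: argument of latitude of the satellite w.r.t. the Sun (rad).
        Returns the (D, Y, B) acceleration components in m/s^2."""
        aD = D0 + D2c * math.cos(2 * du) + D2s * math.sin(2 * du)
        aY = Y0
        aB = B0 + B1c * math.cos(du) + B1s * math.sin(du)
        return aD, aY, aB
    ```

    Note the symmetry this encodes: the D component repeats after half an orbit (even harmonics), while the B component changes between the two orbit halves (odd harmonics).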

  5. TASS/SMR Code Topical Report for SMART Plant, Vol. I: Code Structure, System Models, and Solution Methods

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Young Jong; Kim, Soo Hyoung; Kim, See Darl (and others)

    2008-10-15

    The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant, which is an integral-type pressurized water reactor. It can be applied to the analysis of design basis accidents, including non-LOCA events and LOCAs (loss of coolant accidents), of the SMART plant. The TASS/SMR code can be applied to any plant regardless of the structural characteristics of the reactor, since the code solves the same governing equations for both the primary and secondary systems. The code has been developed to meet the requirements of a safety analysis code. This report describes the overall structure of TASS/SMR, the input processing, and the processes of steady-state and transient calculations. In addition, the basic differential equations, finite difference equations, state relationships, and constitutive models are described in the report. First, the conservation equations, the discretization process for numerical analysis, and the search method for state relationships are described. Then, the core power model, heat transfer models, physical models for various components, and control and trip models are explained.

  6. A Realistic Model under which the Genetic Code is Optimal

    NARCIS (Netherlands)

    Buhrman, H.; van der Gulik, P.T.S.; Klau, G.W.; Schaffner, C.; Speijer, D.; Stougie, L.

    2013-01-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By

  7. Simplified modeling and code usage in the PASC-3 code system by the introduction of a programming environment

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Oppe, J.; Oudshoorn, H.L.; Slobben, J.

    1991-06-01

    A brief description is given of the PASC-3 (Petten-AMPX-SCALE) Reactor Physics code system and associated UNIPASC work environment. The PASC-3 code system is used for criticality and reactor calculations and consists of a selection from the Oak Ridge National Laboratory AMPX-SCALE-3 code collection complemented with a number of additional codes and nuclear data bases. The original codes have been adapted to run under the UNIX operating system. The recommended nuclear data base is a complete 219 group cross section library derived from JEF-1 of which some benchmark results are presented. By the addition of the UNIPASC work environment the usage of the code system is greatly simplified. Complex chains of programs can easily be coupled together to form a single job. In addition, the model parameters can be represented by variables instead of literal values which enhances the readability and may improve the integrity of the code inputs. (author). 8 refs.; 6 figs.; 1 tab

  8. RELAP5-3D Code Includes ATHENA Features and Models

    International Nuclear Information System (INIS)

    Riemke, Richard A.; Davis, Cliff B.; Schultz, Richard R.

    2006-01-01

    Version 2.3 of the RELAP5-3D computer program includes all features and models previously available only in the ATHENA version of the code. These include the addition of new working fluids (i.e., ammonia, blood, carbon dioxide, glycerol, helium, hydrogen, lead-bismuth, lithium, lithium-lead, nitrogen, potassium, sodium, and sodium-potassium) and a magnetohydrodynamic model that expands the capability of the code to model many more thermal-hydraulic systems. In addition to the new working fluids along with the standard working fluid water, one or more noncondensable gases (e.g., air, argon, carbon dioxide, carbon monoxide, helium, hydrogen, krypton, nitrogen, oxygen, SF6, xenon) can be specified as part of the vapor/gas phase of the working fluid. These noncondensable gases were in previous versions of RELAP5-3D. Recently four molten salts have been added as working fluids to RELAP5-3D Version 2.4, which has had limited release. These molten salts will be in RELAP5-3D Version 2.5, which will have a general release like RELAP5-3D Version 2.3. Applications that use these new features and models are discussed in this paper. (authors)

  9. IT-enabled dynamic capability on performance: An empirical study of BSC model

    Directory of Open Access Journals (Sweden)

    Adilson Carlos Yoshikuni

    2017-05-01

    Few studies have investigated the influence of "information capital," through IT-enabled dynamic capability, on corporate performance, particularly in economic turbulence. Our study investigates the causal relationship between performance perspectives of the balanced scorecard using partial least squares path modeling. Using data on 845 Brazilian companies, we conduct a quantitative empirical study of firms during an economic crisis and observe the following interesting results. Operational and analytical IT-enabled dynamic capability had positive effects on business process improvement and corporate performance. Results pertaining to mediation (endogenous variables) and moderation (control variables) clarify IT's role in and benefits for corporate performance.

  10. A self-organized internal models architecture for coding sensory-motor schemes

    Directory of Open Access Journals (Sweden)

    Esaú eEscobar Juárez

    2016-04-01

    Cognitive robotics research draws inspiration from theories and models of cognition, as conceived by neuroscience or cognitive psychology, to investigate biologically plausible computational models in artificial agents. In this field, the theoretical framework of Grounded Cognition provides epistemological and methodological grounds for the computational modeling of cognition. It has been stressed in the literature that simulation, prediction, and multi-modal integration are key aspects of cognition, and that computational architectures capable of putting them into play in a biologically plausible way are a necessity. Research in this direction has brought extensive empirical evidence suggesting that Internal Models are suitable mechanisms for sensory-motor integration. However, current Internal Models architectures show several drawbacks, mainly due to the lack of a unified substrate allowing for a true sensory-motor integration space, enabling flexible and scalable ways to model cognition under the constraints of the embodiment hypothesis. We propose the Self-Organized Internal Models Architecture (SOIMA), a computational cognitive architecture coded by means of a network of self-organized maps, implementing coupled internal models that allow modeling multi-modal sensory-motor schemes. Our approach integrally addresses the issues of current implementations of Internal Models. We discuss the design and features of the architecture, and provide empirical results on a humanoid robot that demonstrate the benefits and potentialities of the SOIMA concept for studying cognition in artificial agents.
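
    The self-organized-map substrate that SOIMA builds on can be illustrated with a toy 1-D map: units on a lattice adapt their weights toward the inputs, with neighbors of the winning unit dragged along. This is a generic SOM sketch under invented parameters, not the SOIMA implementation.

    ```python
    import random

    # Toy 1-D self-organized map: a lattice of units whose 2-D weights
    # adapt toward the inputs; the winner's neighbors are updated too.

    def train_som(data, n_units=10, epochs=50, lr=0.3, radius=2):
        random.seed(0)
        w = [[random.random(), random.random()] for _ in range(n_units)]
        for _ in range(epochs):
            for x in data:
                # best-matching unit = nearest weight vector
                bmu = min(range(n_units),
                          key=lambda i: (w[i][0] - x[0]) ** 2 + (w[i][1] - x[1]) ** 2)
                for i in range(n_units):
                    if abs(i - bmu) <= radius:
                        h = 1.0 / (1 + abs(i - bmu))  # neighborhood weighting
                        w[i][0] += lr * h * (x[0] - w[i][0])
                        w[i][1] += lr * h * (x[1] - w[i][1])
        return w
    ```

    After training on two clusters, some units settle near each cluster, which is the property a coupled-map architecture exploits to form a shared sensory-motor space.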

  11. A MODEL BUILDING CODE ARTICLE ON FALLOUT SHELTERS WITH RECOMMENDATIONS FOR INCLUSION OF REQUIREMENTS FOR FALLOUT SHELTER CONSTRUCTION IN FOUR NATIONAL MODEL BUILDING CODES.

    Science.gov (United States)

    American Inst. of Architects, Washington, DC.

    A MODEL BUILDING CODE FOR FALLOUT SHELTERS WAS DRAWN UP FOR INCLUSION IN FOUR NATIONAL MODEL BUILDING CODES. DISCUSSION IS GIVEN OF FALLOUT SHELTERS WITH RESPECT TO--(1) NUCLEAR RADIATION, (2) NATIONAL POLICIES, AND (3) COMMUNITY PLANNING. FALLOUT SHELTER REQUIREMENTS FOR SHIELDING, SPACE, VENTILATION, CONSTRUCTION, AND SERVICES SUCH AS ELECTRICAL…

  12. Modeling Vortex Generators in a Navier-Stokes Code

    Science.gov (United States)

    Dudek, Julianne C.

    2011-01-01

    A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.
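
    The three user inputs described above (grid-point range, planform area, incidence angle) suffice for a back-of-envelope version of such a source term: a thin-airfoil lift estimate for the vane, distributed over the selected cells. The 2πα lift slope and the uniform distribution are simplifying assumptions for illustration, not the Wind-US implementation.

    ```python
    import math

    # Hedged sketch of a vane vortex-generator source term: lift force from
    # a flat vane at incidence, spread over the user-selected cells.
    # Thin-airfoil slope (2*pi*alpha) and equal splitting are assumptions.

    def vg_force(rho, U, planform_area, alpha_deg):
        """Magnitude of the lift force of one vane (N)."""
        alpha = math.radians(alpha_deg)
        cl = 2.0 * math.pi * alpha
        return 0.5 * rho * U ** 2 * planform_area * cl

    def distribute(total_force, n_cells):
        """Equal per-cell source term over the specified grid-point range."""
        return [total_force / n_cells] * n_cells
    ```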

  13. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross-field devices (magnetrons, cross-field amplifiers, etc.) and pencil-beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  16. Maximizing entropy of image models for 2-D constrained coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Danieli, Matteo; Burini, Nino

    2010-01-01

    This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields, with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). The PRF are 2-D causal finite context models, which define stationary probability distributions on finite rectangles and thus allow for calculation of the entropy. We consider two binary constraints: we revisit the hard-square constraint, given by forbidding neighboring 1s, and provide novel results for the constraint that no uniform 2 × 2 square contains all 0s or all 1s. The maximum values of the entropy for the constraints are estimated, and binary PRF satisfying the constraints are characterized and optimized w.r.t. the entropy. The maximum binary PRF entropy is 0.839 bits/symbol for the no-uniform-squares constraint. The entropy...
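
    The capacity computation underlying such constraints can be sketched with a standard transfer-matrix estimate for the hard-square constraint: for strips of width n, the entropy per symbol is log2(λ_n)/n, where λ_n is the largest eigenvalue of the column transfer matrix, and the estimates decrease toward the known capacity of about 0.5879 bits/symbol. This is a generic calculation illustrating the setting, not the paper's PRF construction.

    ```python
    import math

    # Transfer-matrix entropy estimate for the hard-square constraint
    # (no two horizontally or vertically adjacent 1s) on width-n strips.

    def hard_square_entropy(n, iters=200):
        # column states with no vertically adjacent 1s
        states = [s for s in range(1 << n) if s & (s << 1) == 0]
        v = {s: 1.0 for s in states}
        lam = 0.0
        for _ in range(iters):
            # one transfer step: neighboring columns must satisfy s & t == 0
            new = {s: sum(v[t] for t in states if s & t == 0) for s in states}
            lam = sum(new.values()) / sum(v.values())  # growth ratio -> lambda_n
            total = sum(new.values())
            v = {s: x / total for s, x in new.items()}  # renormalize
        return math.log2(lam) / n  # bits per symbol
    ```

    The width-2 value log2(1 + √2)/2 ≈ 0.636 already overshoots the 2-D capacity by only about 0.05 bits/symbol, and wider strips tighten the estimate.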

  17. How do dynamic capabilities transform external technologies into firms’ renewed technological resources? – A mediation model

    DEFF Research Database (Denmark)

    Li-Ying, Jason; Wang, Yuandi; Ning, Lutao

    2016-01-01

    How may externally acquired resources become valuable, rare, hard-to-imitate, and non-substitutable resource bundles through the development of dynamic capabilities? This study proposes and tests a mediation model in which firms' internal technological diversification and R&D, as two distinctive microfoundations of dynamic technological capabilities, mediate the relationship between external technology breadth and firms' technological innovation performance, based on the resource-based view and the dynamic capability view. Using a sample of listed Chinese licensee firms, we find that firms must broadly explore external technologies to ignite the dynamism in internal technological diversity and in-house R&D, which play their crucial roles differently to transform and reconfigure firms' technological resources.

  18. Structural reliability assessment capability in NESSUS

    Science.gov (United States)

    Millwater, H.; Wu, Y.-T.

    1992-07-01

    The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessment capability is described. The code combines flexible structural modeling tools with advanced probabilistic algorithms to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.

  19. Modification of the finite element heat and mass transfer code (FEHM) to model multicomponent reactive transport

    International Nuclear Information System (INIS)

    Viswanathan, H.S.

    1996-08-01

    The finite element code FEHMN, developed by scientists at Los Alamos National Laboratory (LANL), is a three-dimensional finite element heat and mass transport simulator that can handle complex stratigraphy and nonlinear processes such as vadose zone flow, heat flow and solute transport. Scientists at LANL have been developing hydrologic flow and transport models of the Yucca Mountain site using FEHMN. Previous FEHMN simulations have used an equivalent Kd model for solute transport. In this thesis, FEHMN is modified to make it possible to simulate the transport of a species with a rigorous chemical model. Including the rigorous chemical equations in FEHMN simulations should provide more representative transport models for highly reactive chemical species. A fully kinetic formulation is chosen for the FEHMN reactive transport model. Several methods are available to implement a fully kinetic formulation computationally. Different numerical algorithms are investigated in order to optimize the computational efficiency and memory requirements of the reactive transport model. The best algorithm of those investigated is then incorporated into FEHMN. The chosen algorithm requires the user to place strongly coupled species into groups, which are then solved for simultaneously using FEHMN. The complete reactive transport model is verified over a wide variety of problems and is shown to be working properly. The new chemical capabilities of FEHMN are illustrated by using Los Alamos National Laboratory's site-scale model of Yucca Mountain to model two-dimensional, vadose zone 14C transport. The simulations demonstrate that gas flow and carbonate chemistry can significantly affect 14C transport at Yucca Mountain. The simulations also show that the new capabilities of FEHMN can be used to refine and buttress already existing Yucca Mountain radionuclide transport studies
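
    The grouped, fully kinetic solution strategy described above can be illustrated with a toy operator-split step: advect the aqueous species, then apply a first-order kinetic exchange with a sorbed phase. A minimal sketch with invented grid spacing, velocity and rate constants, not FEHMN's numerics:

```python
# Toy fully kinetic transport step: explicit upwind advection of an
# aqueous species, operator-split with first-order kinetic sorption.

def transport_step(c_aq, c_sorb, v, dx, dt, k_f, k_r, c_in=1.0):
    n = len(c_aq)
    # 1) advection: first-order upwind with a fixed inlet concentration
    adv = []
    for i in range(n):
        upstream = c_aq[i - 1] if i > 0 else c_in
        adv.append(c_aq[i] - v * dt / dx * (c_aq[i] - upstream))
    # 2) kinetics: d(c_aq)/dt = -k_f*c_aq + k_r*c_sorb, mass-conserving
    new_aq, new_sorb = [], []
    for a, s in zip(adv, c_sorb):
        rate = -k_f * a + k_r * s
        new_aq.append(a + dt * rate)
        new_sorb.append(s - dt * rate)
    return new_aq, new_sorb

c_aq, c_sorb = [0.0] * 20, [0.0] * 20
for _ in range(100):
    c_aq, c_sorb = transport_step(c_aq, c_sorb, v=1.0, dx=1.0, dt=0.5,
                                  k_f=0.1, k_r=0.05)
# Sorption retards the front: the inlet cell approaches the feed
# concentration while the outlet cell still lags behind it.
print(round(c_aq[0], 3), round(c_aq[-1], 3))
```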

  20. Nuclear model codes available at the Nuclear Energy Agency Computer Program Library (NEA-CPL)

    International Nuclear Information System (INIS)

    Sartori, E.; Garcia Viedma, L. de

    1976-01-01

    This paper briefly outlines the objectives of the NEA-CPL and its activities in the field of Nuclear Model Computer Codes. A short description of the computer codes available from the CPL in this field is also presented. (author)

  1. Large-Signal Code TESLA: Improvements in the Implementation and in the Model

    National Research Council Canada - National Science Library

    Chernyavskiy, Igor A; Vlasov, Alexander N; Anderson, Jr., Thomas M; Cooke, Simon J; Levush, Baruch; Nguyen, Khanh T

    2006-01-01

    We describe the latest improvements made in the large-signal code TESLA, which include transformation of the code to a Fortran-90/95 version with dynamical memory allocation and extension of the model...

  2. A Maturity Model Concept for Internet Protocol version 6 Adoption (Capability Maturity Model for IPv6 Implementation)

    Directory of Open Access Journals (Sweden)

    Riza Azmi

    2015-03-01

    Full Text Available. The Internet Protocol (IP) is the world's internet numbering standard, and the supply of addresses is finite. Globally, IP allocation is managed by the Internet Assigned Numbers Authority (IANA) and delegated through the regional authority of each continent. IP exists in two versions, IPv4 and IPv6; the IPv4 allocation was declared exhausted at the IANA level in April 2011. Consequently, IP usage is being directed toward IPv6. To assess how mature an organization is with respect to IPv6 implementation, this study develops a maturity model for IPv6 adoption. The basic concept of the model is taken from the Capability Maturity Model Integration (CMMI), with several additions: the IPv6 migration roadmap in Indonesia, the IPv6-related Requests for Comments (RFC), and several best practices for IPv6 implementation. With this concept, the study produces a Capability Maturity for IPv6 Implementation model.

  3. MINIMARS interim report appendix halo model and computer code

    International Nuclear Information System (INIS)

    Santarius, J.F.; Barr, W.L.; Deng, B.Q.; Emmert, G.A.

    1985-01-01

    A tenuous, cool plasma called the halo shields the core plasma in a tandem mirror from neutral gas and impurities. The neutral particles are ionized and then pumped by the halo to the end tanks of the device, since flow of plasma along field lines is much faster than radial flow. Plasma reaching the end tank walls recombines, and the resulting neutral gas is vacuum pumped. The basic geometry of the MINIMARS halo is shown. For halo modeling purposes, the core plasma and cold gas regions may be treated as single radial zones leading to halo source and sink terms. The halo itself is divided into two major radial zones: the halo scraper and the halo dump. The halo scraper zone is defined by the radial distance required for the ion end plugging potential to drop to the central cell value, and thus have no effect on axial confinement; this distance is typically a sloshing plug ion Larmor diameter. The outer edge of the halo dump zone is defined by the last central cell flux tube to pass through the choke coil. This appendix summarizes the halo model that has been developed for MINIMARS and the methodology used in implementing that model as a computer code

  4. Accident consequence assessment code development

    International Nuclear Information System (INIS)

    Homma, T.; Togawa, O.

    1991-01-01

    This paper describes OSCAAR, a new computer code system developed for off-site consequence assessment of a potential nuclear accident. OSCAAR consists of several modules with modeling capabilities in atmospheric transport, foodchain transport, dosimetry, emergency response and radiological health effects. The major modules of the consequence assessment code are described, highlighting the validation and verification of the models. (author)
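
    For context on the atmospheric transport module mentioned above, simple consequence codes often base ground-level concentration estimates on the textbook Gaussian plume formula with ground reflection. A minimal sketch with invented release and dispersion parameters, not OSCAAR's models:

```python
import math

# Textbook Gaussian plume concentration with ground reflection:
# C = Q / (2*pi*u*sy*sz) * exp(-y^2/2sy^2)
#     * [exp(-(z-H)^2/2sz^2) + exp(-(z+H)^2/2sz^2)]

def plume_concentration(Q, u, sigma_y, sigma_z, y, z, H):
    """Concentration (Bq/m^3) at (y, z) for a release rate Q (Bq/s)."""
    lateral = math.exp(-y * y / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2 * sigma_z ** 2))
                + math.exp(-(z + H) ** 2 / (2 * sigma_z ** 2)))  # reflection
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centreline, ground level, 50 m release height; sigma values are
# placeholders for whatever a stability-class correlation would give.
c = plume_concentration(Q=1e9, u=5.0, sigma_y=80.0, sigma_z=40.0,
                        y=0.0, z=0.0, H=50.0)
print(c)
```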

  5. Modification of the finite element heat and mass transfer code (FEHMN) to model multicomponent reactive transport

    International Nuclear Information System (INIS)

    Viswanathan, H.S.

    1995-01-01

    The finite element code FEHMN is a three-dimensional finite element heat and mass transport simulator that can handle complex stratigraphy and nonlinear processes such as vadose zone flow, heat flow and solute transport. Scientists at LANL have been developing hydrologic flow and transport models of the Yucca Mountain site using FEHMN. Previous FEHMN simulations have used an equivalent Kd model for solute transport. In this thesis, FEHMN is modified to make it possible to simulate the transport of a species with a rigorous chemical model. Including the rigorous chemical equations in FEHMN simulations should provide more representative transport models for highly reactive chemical species. A fully kinetic formulation is chosen for the FEHMN reactive transport model. Several methods are available to implement a fully kinetic formulation computationally. Different numerical algorithms are investigated in order to optimize the computational efficiency and memory requirements of the reactive transport model. The best algorithm of those investigated is then incorporated into FEHMN. The chosen algorithm requires the user to place strongly coupled species into groups, which are then solved for simultaneously using FEHMN. The complete reactive transport model is verified over a wide variety of problems and is shown to be working properly. The simulations demonstrate that gas flow and carbonate chemistry can significantly affect 14C transport at Yucca Mountain. The simulations also show that the new capabilities of FEHMN can be used to refine and buttress already existing Yucca Mountain radionuclide transport studies

  6. Models of multi-rod code FRETA-B for transient fuel behavior analysis

    International Nuclear Information System (INIS)

    Uchida, Masaaki; Otsubo, Naoaki.

    1984-11-01

    This paper is the final report on the development of the FRETA-B code, which analyzes LWR fuel behavior during accidents, particularly the loss-of-coolant accident (LOCA). The very high temperature induced by a LOCA causes oxidation of the cladding by steam and, combined with the low external pressure, extensive swelling of the cladding. The latter may reach the point where the rods block the coolant channel. To analyze these phenomena, a single-rod model is insufficient; FRETA-B has the capability to handle multiple fuel rods in a bundle simultaneously, including the interaction between them. In the development work, therefore, efforts were made to avoid excessive increases in calculation time and core memory requirements. Because of the strong dependency of in-LOCA fuel behavior on the coolant state, FRETA-B places emphasis on heat transfer to the coolant as well as on cladding deformation. In the final version, a capability was added to analyze fuel behavior under reflooding using empirical models. The present report describes the basic models of FRETA-B and gives its input manual in the appendix. (author)

  7. ETFOD: a point model physics code with arbitrary input

    International Nuclear Information System (INIS)

    Rothe, K.E.; Attenberger, S.E.

    1980-06-01

    ETFOD is a zero-dimensional code which solves a set of physics equations by minimization. The technique used differs from the usual approach in that the input is arbitrary: the user is supplied with a set of variables, from which he specifies which variables are input (unchanging); the remaining variables become the output. Presently the code is being used for ETF reactor design studies. The code was written in a manner that allows easy modification of equations, variables, and physics calculations. The solution technique is presented along with hints for using the code
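
    The idea of arbitrary input, marking some variables as fixed and solving the residual equations for the rest, can be sketched with two toy relations and a Newton iteration on a numerical Jacobian. The relations below are illustrative stand-ins, not ETF physics:

```python
# "Arbitrary input" sketch: the user chooses which variables are
# fixed; the remaining ones are solved for by driving the residuals
# of the physics relations to zero.

def residuals(v):
    return [v["P"] - v["beta"] * v["B"],   # toy power relation
            v["beta"] + v["B"] - 7.0]      # toy operating constraint

def solve(fixed, free, guess, iters=50):
    """Newton iteration; this sketch assumes exactly two free variables."""
    v = dict(fixed, **guess)
    for _ in range(iters):
        r = residuals(v)
        eps = 1e-7
        # Numerical Jacobian: one column per free variable.
        cols = []
        for k in free:
            v2 = dict(v)
            v2[k] += eps
            r2 = residuals(v2)
            cols.append([(a - b) / eps for a, b in zip(r2, r)])
        m00, m10 = cols[0]
        m01, m11 = cols[1]
        det = m00 * m11 - m01 * m10
        # delta = -J^{-1} r for the 2x2 system
        v[free[0]] -= (m11 * r[0] - m01 * r[1]) / det
        v[free[1]] -= (-m10 * r[0] + m00 * r[1]) / det
    return v

# Fix P; beta and B become outputs of the same equation set.
sol = solve(fixed={"P": 10.0}, free=["beta", "B"],
            guess={"beta": 1.0, "B": 4.0})
print(round(sol["beta"], 6), round(sol["B"], 6))  # 2.0 5.0
```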

  8. Modeling ion exchange in clinoptilolite using the EQ3/6 geochemical modeling code

    International Nuclear Information System (INIS)

    Viani, B.E.; Bruton, C.J.

    1992-06-01

    Assessing the suitability of Yucca Mtn., NV as a potential repository for high-level nuclear waste requires the means to simulate ion-exchange behavior of zeolites. Vanselow and Gapon convention cation-exchange models have been added to geochemical modeling codes EQ3NR/EQ6, allowing exchange to be modeled for up to three exchangers or a single exchanger with three independent sites. Solid-solution models that are numerically equivalent to the ion-exchange models were derived and also implemented in the code. The Gapon model is inconsistent with experimental adsorption isotherms of trace components in clinoptilolite. A one-site Vanselow model can describe adsorption of Cs or Sr on clinoptilolite, but a two-site Vanselow exchange model is necessary to describe K contents of natural clinoptilolites
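
    The one-site Vanselow model mentioned above reduces, for a homovalent binary pair, to a simple isotherm in the exchanger mole fractions. A minimal sketch with an invented selectivity coefficient, not a fitted clinoptilolite value:

```python
# One-site Vanselow exchange sketch for a homovalent pair, e.g. trace
# Cs+ replacing Na+ on a zeolite.

def vanselow_fraction(K_v, a_A, a_B):
    """Equilibrium mole fraction of cation A on the exchanger.

    Vanselow convention for A+ + B-X = A-X + B+:
        K_v = (x_A * a_B) / (x_B * a_A),  with  x_A + x_B = 1,
    where x are exchanger mole fractions and a are solution activities.
    """
    r = K_v * a_A / a_B
    return r / (1.0 + r)

# Trace-level A in a B-dominated solution with a strong preference
# for A (large K_v): the exchanger takes up A far in excess of its
# solution fraction.
x_cs = vanselow_fraction(K_v=100.0, a_A=1e-4, a_B=0.1)
print(x_cs)  # ~0.09: 0.1% of solution cations, ~9% of exchange sites
```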

  9. Methodology Using MELCOR Code to Model Proposed Hazard Scenario

    Energy Technology Data Exchange (ETDEWEB)

    Gavin Hawkley

    2010-07-01

    This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of the leak path factor (LPF), the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the fraction of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and upon other pathways from the building, such as doorways, both open and closed. This study shows how the multiple LPFs from the building interior can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a receptor placed downwind can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). This study also briefly addresses particle characteristics that affect atmospheric particle dispersion, and compares this dispersion with the LPF methodology.
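
    The combinatory evaluation of leak path factors can be sketched as follows: factors along one serial path multiply, and parallel pathways out of the building sum, weighted by the fraction of flow each carries. The numbers below are illustrative, not MELCOR results:

```python
# Combining leak path factors (LPFs): serial stages multiply, parallel
# exit pathways sum with flow-fraction weights.

def serial_lpf(factors):
    """Total LPF along one path: fractions surviving each stage multiply."""
    total = 1.0
    for f in factors:
        total *= f
    return total

def total_lpf(paths):
    """paths: list of (flow_fraction, [stage LPFs]) leaving the building."""
    return sum(w * serial_lpf(fs) for w, fs in paths)

# The assumed 0.5 x 0.5 standard: one room-to-room transfer followed
# by one building-to-environment transfer.
lpf_std = serial_lpf([0.5, 0.5])
print(lpf_std)  # 0.25

# A more detailed split: 80% of flow through a filtered HVAC path,
# 20% through an open doorway (all factors invented for illustration).
lpf_tot = total_lpf([(0.8, [0.5, 0.001]),   # HEPA-filtered path
                     (0.2, [0.5, 0.3])])    # unfiltered doorway
print(lpf_tot)  # 0.0304
```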

  10. Aspects of a generic photovoltaic model examined under the German grid code for medium voltage

    Energy Technology Data Exchange (ETDEWEB)

    Theologitis, Ioannis-Thomas; Troester, Eckehard; Ackermann, Thomas [Energynautics GmbH, Langen (Germany)

    2011-07-01

    The increasing penetration of photovoltaic power systems into the power grid has attracted attention to the issue of ensuring the smooth absorption of solar energy while also securing the normal and steady operation of the grid. Nowadays, PV systems must meet a number of technical requirements to address this issue. This paper investigates a generic grid-connected photovoltaic model that was developed by DIgSILENT and is part of the library in the new version of the PowerFactory v.14.1 software used in this study. The model has a nominal rated peak power of 0.5 MVA and a design power factor cos φ = 0.95. The study focuses on the description of the model, its control system and its ability to reflect important requirements that a grid-connected PV system should have met by January 2011 according to the German grid code for medium voltage. The model undergoes various simulations: static voltage support, active power control and dynamic voltage support - Fault Ride-Through (FRT) - are examined. The results show that the generic model is capable of active power reduction under over-frequency conditions and of FRT behavior in cases of voltage dips. The reactive power control added to the model improves the control system and makes the model capable of static voltage support during sudden changes of active power injection at the point of common coupling. Despite the simplifications and shortcomings of this generic model, basic requirements of modern PV systems can be addressed. Further improvements could make it more complete and applicable for more detailed studies. (orig.)
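
    The over-frequency active power reduction examined above follows the German medium-voltage grid code requirement of a 40 %/Hz reduction gradient once the frequency exceeds 50.2 Hz, with disconnection above 51.5 Hz. A minimal sketch; ramp rates, hysteresis and the generic model's actual controls are omitted:

```python
# Over-frequency active power curtailment per the German MV grid code:
# above 50.2 Hz, reduce the instantaneous power with a 40 %/Hz
# gradient; disconnect above 51.5 Hz.

def active_power(p_inst, f):
    """Allowed active power (same units as p_inst) at grid frequency f."""
    if f > 51.5:
        return 0.0                         # disconnect
    if f <= 50.2:
        return p_inst                      # no curtailment required
    reduction = 0.4 * p_inst * (f - 50.2)  # 40% of P_inst per Hz
    return max(p_inst - reduction, 0.0)

p_nominal = active_power(100.0, 50.0)
p_curtailed = active_power(100.0, 50.7)
p_tripped = active_power(100.0, 52.0)
print(p_nominal, p_curtailed, p_tripped)  # 100.0 80.0 0.0
```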

  11. ASTEC V2 severe accident integral code: Fission product modelling and validation

    International Nuclear Information System (INIS)

    Cantrel, L.; Cousin, F.; Bosland, L.; Chevalier-Jabet, K.; Marchetto, C.

    2014-01-01

    One main goal of the severe accident integral code ASTEC V2, jointly developed for more than 15 years by IRSN and GRS, is to simulate the overall behaviour of fission products (FP) in a damaged nuclear facility. ASTEC applications are source term determinations, level 2 Probabilistic Safety Assessment (PSA2) studies including the determination of uncertainties, accident management studies and physical analyses of FP experiments to improve the understanding of the phenomenology. ASTEC is a modular code, and models of a part of the phenomenology are implemented in each module: the release of FPs and structural materials from degraded fuel in the ELSA module; the transport through the reactor coolant system, approximated as a sequence of control volumes, in the SOPHAEROS module; and the radiochemistry inside the containment building in the IODE module. Three other modules, CPA, ISODOP and DOSE, compute, respectively, the deposition rate of aerosols inside the containment, the activities of the isotopes as a function of time, and the gaseous dose rate, which is needed to model radiochemistry in the gaseous phase. In ELSA, release models are semi-mechanistic and have been validated against a wide range of experimental data, notably the VERCORS experiments. For SOPHAEROS, the models can be divided into two parts: vapour-phase phenomena and aerosol-phase phenomena. For IODE, iodine and ruthenium chemistry are modelled with a semi-mechanistic approach; these FPs can form volatile species and are particularly important in terms of potential radiological consequences. The models in these three modules are based on a wide experimental database, resulting in large part from international programmes, and they are considered to represent the state of the art of R and D knowledge. This paper illustrates some FP modelling capabilities of ASTEC, and computed values are compared to experimental results that are part of the validation matrix

  12. Lost opportunities: Modeling commercial building energy code adoption in the United States

    International Nuclear Information System (INIS)

    Nelson, Hal T.

    2012-01-01

    This paper models the adoption of commercial building energy codes in the US between 1977 and 2006. Energy code adoption typically results in an increase in aggregate social welfare by cost effectively reducing energy expenditures. Using a Cox proportional hazards model, I test if relative state funding, a new, objective, multivariate regression-derived measure of government capacity, as well as a vector of control variables commonly used in comparative state research, predict commercial building energy code adoption. The research shows little political influence over historical commercial building energy code adoption in the sample. Colder climates and higher electricity prices also do not predict more frequent code adoptions. I do find evidence of high government capacity states being 60 percent more likely than low capacity states to adopt commercial building energy codes in the following year. Wealthier states are also more likely to adopt commercial codes. Policy recommendations to increase building code adoption include increasing access to low cost capital for the private sector and providing noncompetitive block grants to the states from the federal government. - Highlights: ► Model the adoption of commercial building energy codes from 1977–2006 in the US. ► Little political influence over historical building energy code adoption. ► High capacity states are over 60 percent more likely than low capacity states to adopt codes. ► Wealthier states are more likely to adopt commercial codes. ► Access to capital and technical assistance is critical to increase code adoption.
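
    The Cox proportional hazards model used in the paper scores each state's adoption hazard against its covariates through a partial likelihood. A minimal sketch with a single "government capacity" covariate and invented data (four states, no tied adoption years); the paper's sample and covariate vector are much richer:

```python
import math

# Toy Cox proportional-hazards sketch: states with higher capacity
# (x = 1) adopt energy codes earlier in this invented data set.

def cox_partial_loglik(beta, times, events, x):
    """Breslow partial log-likelihood for one covariate, no ties."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    ll = 0.0
    for k, i in enumerate(order):
        if not events[i]:
            continue  # censored observations only shape the risk sets
        risk = order[k:]  # subjects still at risk at times[i]
        denom = sum(math.exp(beta * x[j]) for j in risk)
        ll += beta * x[i] - math.log(denom)
    return ll

times  = [1.0, 2.0, 3.0, 4.0]   # year of adoption
events = [1, 1, 1, 1]           # all four states eventually adopt
x      = [1, 1, 0, 0]           # 1 = high government capacity

# A positive coefficient (capacity raises the adoption hazard) fits
# these data better than a negative one.
better = cox_partial_loglik(1.0, times, events, x)
worse = cox_partial_loglik(-1.0, times, events, x)
print(better > worse)  # True
```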

  13. Discovery of rare protein-coding genes in model methylotroph Methylobacterium extorquens AM1.

    Science.gov (United States)

    Kumar, Dhirendra; Mondal, Anupam Kumar; Yadav, Amit Kumar; Dash, Debasis

    2014-12-01

    Proteogenomics involves the use of MS to refine annotation of protein-coding genes and discover genes in a genome. We carried out comprehensive proteogenomic analysis of Methylobacterium extorquens AM1 (ME-AM1) from publicly available proteomics data with a motive to improve annotation for methylotrophs; organisms capable of surviving in reduced carbon compounds such as methanol. Besides identifying 2482(50%) proteins, 29 new genes were discovered and 66 annotated gene models were revised in ME-AM1 genome. One such novel gene is identified with 75 peptides, lacks homolog in other methylobacteria but has glycosyl transferase and lipopolysaccharide biosynthesis protein domains, indicating its potential role in outer membrane synthesis. Many novel genes are present only in ME-AM1 among methylobacteria. Distant homologs of these genes in unrelated taxonomic classes and low GC-content of few genes suggest lateral gene transfer as a potential mode of their origin. Annotations of methylotrophy related genes were also improved by the discovery of a short gene in methylotrophy gene island and redefining a gene important for pyrroquinoline quinone synthesis, essential for methylotrophy. The combined use of proteogenomics and rigorous bioinformatics analysis greatly enhanced the annotation of protein-coding genes in model methylotroph ME-AM1 genome. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. COBRA-SFS [Spent Fuel Storage]: A thermal-hydraulic analysis computer code: Volume 1, Mathematical models and solution method

    International Nuclear Information System (INIS)

    Rector, D.R.; Wheeler, C.L.; Lombardo, N.J.

    1986-11-01

    COBRA-SFS (Spent Fuel Storage) is a general thermal-hydraulic analysis computer code used to predict temperatures and velocities in a wide variety of systems. The code was refined and specialized for spent fuel storage system analyses for the US Department of Energy's Commercial Spent Fuel Management Program. The finite-volume equations governing mass, momentum, and energy conservation are written for an incompressible, single-phase fluid. The flow equations model a wide range of conditions including natural circulation. The energy equations include the effects of solid and fluid conduction, natural convection, and thermal radiation. The COBRA-SFS code is structured to perform both steady-state and transient calculations; however, the transient capability has not yet been validated. This volume describes the finite-volume equations and the method used to solve these equations. It is directed toward the user who is interested in gaining a more complete understanding of these methods
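
    For steady one-dimensional conduction with fixed end temperatures, finite-volume energy equations of the kind described above reduce to a tridiagonal system, which the classic Thomas algorithm solves in O(n). A minimal sketch with illustrative temperatures, not a COBRA-SFS calculation:

```python
# Thomas algorithm for the tridiagonal system produced by a 1-D
# steady-conduction finite-volume discretization.

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a=sub-, b=main, c=super-diagonal, d=rhs."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Steady conduction between walls at 400 K and 300 K, no heat source:
# -T[i-1] + 2*T[i] - T[i+1] = 0 at each of 9 interior volumes.
n = 9
a, b, c, d = [-1.0] * n, [2.0] * n, [-1.0] * n, [0.0] * n
d[0] += 400.0   # left boundary temperature folded into the RHS
d[-1] += 300.0  # right boundary temperature
T = thomas(a, b, c, d)
print(T)  # linear profile: 390, 380, ..., 310
```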

  15. Code Coupling via Jacobian-Free Newton-Krylov Algorithms with Application to Magnetized Fluid Plasma and Kinetic Neutral Models

    Energy Technology Data Exchange (ETDEWEB)

    Joseph, Ilon [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-05-27

    Jacobian-free Newton-Krylov (JFNK) algorithms are a potentially powerful class of methods for solving the problem of coupling codes that address different physics models. As the communication capability between individual submodules varies, different choices of coupling algorithm are required. The more communication that is available, the more possible it becomes to exploit the simple sparsity pattern of the Jacobian, albeit of a large system. The less communication that is available, the denser the Jacobian matrices become, and new types of preconditioners must be sought to take large time steps efficiently. In general, methods that use constrained or reduced subsystems can offer a compromise in complexity. The specific problem of coupling a fluid plasma code to a kinetic neutrals code is discussed as an example.
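
    The core trick of JFNK coupling is that Krylov solvers never need the Jacobian itself, only its action on a vector, which a finite difference of the coupled residual supplies. A minimal sketch with an invented two-field residual standing in for the plasma and neutrals codes:

```python
# Matrix-free Jacobian action, the building block of JFNK coupling:
# J(u) @ v is approximated from two residual evaluations, so the
# coupled codes need only return residuals, never derivatives.

def F(u):
    x, y = u
    return [x * x + y - 3.0,   # "plasma" residual (illustrative)
            x + y * y - 5.0]   # "neutrals" residual (illustrative)

def jacobian_action(F, u, v, eps=1e-7):
    """Finite-difference approximation of J(u) @ v."""
    Fu = F(u)
    Fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(Fp, Fu)]

u = [1.0, 2.0]
Jv = jacobian_action(F, u, [1.0, 0.0])
# Analytic Jacobian at (1, 2) is [[2x, 1], [1, 2y]] = [[2, 1], [1, 4]];
# its first column [2, 1] matches the finite-difference action.
print(Jv)
```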

  16. Coding conventions and principles for a National Land-Change Modeling Framework

    Science.gov (United States)

    Donato, David I.

    2017-07-14

    This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.

  17. Spatial Preference Modelling for equitable infrastructure provision: an application of Sen's Capability Approach

    Science.gov (United States)

    Wismadi, Arif; Zuidgeest, Mark; Brussel, Mark; van Maarseveen, Martin

    2014-01-01

    To determine whether the inclusion of spatial neighbourhood comparison factors in Preference Modelling allows spatial decision support systems (SDSSs) to better address spatial equity, we introduce Spatial Preference Modelling (SPM). To evaluate the effectiveness of this model in addressing equity, various standardisation functions in both Non-Spatial Preference Modelling and SPM are compared. The evaluation involves applying the model to a resource location-allocation problem for transport infrastructure in the Special Province of Yogyakarta in Indonesia. We apply Amartya Sen's Capability Approach to define opportunity to mobility as a non-income indicator. Using the extended Moran's I interpretation for spatial equity, we evaluate the distribution output regarding, first, 'the spatial distribution patterns of priority targeting for allocation' (SPT) and, second, 'the effect of new distribution patterns after location-allocation' (ELA). The Moran's I index of the initial map and its comparison with six patterns for SPT as well as ELA consistently indicates that SPM is more effective for addressing spatial equity. We conclude that the inclusion of spatial neighbourhood comparison factors in Preference Modelling improves the capability of SDSSs to address spatial equity. This study thus proposes a new formal method for SDSSs, with specific attention to resource location-allocation, to address spatial equity.
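
    The Moran's I statistic used above for spatial equity can be computed directly from cell values and contiguity weights. A minimal sketch on a line of four cells; the paper's extended interpretation and data are not reproduced:

```python
# Global Moran's I with binary rook-adjacency weights: positive for
# spatially clustered values, negative for dispersed ones.

def morans_i(values, coords):
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = w_sum = 0.0
    for i in range(n):
        for j in range(n):
            if i != j and (abs(coords[i][0] - coords[j][0])
                           + abs(coords[i][1] - coords[j][1])) == 1:
                num += dev[i] * dev[j]   # only adjacent pairs contribute
                w_sum += 1.0
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)

coords = [(0, 0), (0, 1), (0, 2), (0, 3)]       # four cells in a row
clustered = morans_i([10.0, 10.0, 0.0, 0.0], coords)
dispersed = morans_i([10.0, 0.0, 10.0, 0.0], coords)
print(clustered, dispersed)  # positive, then negative
```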

  18. Demonstration of the Recent Additions in Modeling Capabilities for the WEC-Sim Wave Energy Converter Design Tool: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Tom, N.; Lawson, M.; Yu, Y. H.

    2015-03-01

    WEC-Sim is a mid-fidelity numerical tool for modeling wave energy conversion (WEC) devices. The code uses the MATLAB SimMechanics package to solve the multi-body dynamics and models the wave interactions using hydrodynamic coefficients derived from frequency domain boundary element methods. In this paper, the new modeling features introduced in the latest release of WEC-Sim are presented. The first feature discussed is the conversion of the fluid memory kernel to a state-space approximation that provides significant gains in computational speed. The benefit of the state-space calculation becomes even greater after the hydrodynamic body-to-body coefficients are introduced, as the number of interactions increases exponentially with the number of floating bodies. The final feature discussed is the capability to add Morison elements to provide additional hydrodynamic damping and inertia. This is generally used as a tuning feature, because performance is highly dependent on the chosen coefficients. In this paper, a review of the hydrodynamic theory for each of the features is provided and successful implementation is verified using test cases.
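
    The Morison-element option described above adds a drag term quadratic in the relative velocity plus an inertia term proportional to acceleration. A minimal sketch with placeholder coefficients; in WEC-Sim these are user-tuned:

```python
# Morison-element force sketch:
#   F = 0.5*rho*Cd*A*u*|u| + rho*Cm*V*du/dt
# (drag quadratic in velocity plus an inertia term).

RHO = 1025.0  # seawater density, kg/m^3

def morison_force(u, dudt, Cd, A, Cm, V):
    """Drag plus inertia force in N; u in m/s, du/dt in m/s^2."""
    drag = 0.5 * RHO * Cd * A * u * abs(u)
    inertia = RHO * Cm * V * dudt
    return drag + inertia

# Placeholder coefficients and kinematics, purely for illustration.
f = morison_force(u=1.0, dudt=0.5, Cd=1.0, A=2.0, Cm=1.0, V=3.0)
print(f)  # 1025.0 N drag + 1537.5 N inertia = 2562.5 N
```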

  19. Landscape capability models as a tool to predict fine-scale forest bird occupancy and abundance

    Science.gov (United States)

    Loman, Zachary G.; DeLuca, William; Harrison, Daniel J.; Loftin, Cynthia S.; Rolek, Brian W.; Wood, Petra B.

    2018-01-01

    Context: Species-specific models of landscape capability (LC) can inform landscape conservation design. Landscape capability is “the ability of the landscape to provide the environment […] and the local resources […] needed for survival and reproduction […] in sufficient quantity, quality and accessibility to meet the life history requirements of individuals and local populations.” Landscape capability incorporates species’ life histories, ecologies, and distributions to model habitat for current and future landscapes and climates as a proactive strategy for conservation planning. Objectives: We tested the ability of a set of LC models to explain variation in point occupancy and abundance for seven bird species representative of spruce-fir, mixed conifer-hardwood, and riparian and wooded wetland macrohabitats. Methods: We compiled point count data sets used for biological inventory, species monitoring, and field studies across the northeastern United States to create an independent validation data set. Our validation explicitly accounted for underestimation in validation data using joint distance and time removal sampling. Results: Blackpoll warbler (Setophaga striata), wood thrush (Hylocichla mustelina), and Louisiana (Parkesia motacilla) and northern waterthrush (P. noveboracensis) models were validated as predicting variation in abundance, although this varied from not biologically meaningful (1%) to strongly meaningful (59%). We verified all seven species models [including ovenbird (Seiurus aurocapilla), blackburnian (Setophaga fusca) and cerulean warbler (Setophaga cerulea)], as all were positively related to occupancy data. Conclusions: LC models represent a useful tool for conservation planning owing to their predictive ability over a regional extent. As improved remote-sensed data become available, LC layers are updated, which will improve predictions.

  20. Three-field modeling for MARS 1-D code

    International Nuclear Information System (INIS)

    Hwang, Moonkyu; Lim, Ho-Gon; Jeong, Jae-Jun; Chung, Bub-Dong

    2006-01-01

    In this study, three-field modeling of the two-phase mixture is developed. Finite difference equations for the three-field model are then devised, and the solution scheme has been implemented into the MARS 1-D code. The three-field formulations adopted are similar to those of the MARS 3-D module, in the sense that mass and momentum are treated separately for the entrained liquid and the continuous liquid. As in the MARS 3-D module, the entrained liquid and continuous liquid are combined into one field for the energy equation, assuming thermal equilibrium between the two. All the non-linear terms are linearized to arrange the finite difference equation set into a linear matrix form with respect to the unknown arguments. The problems chosen for the assessment of the newly added entrained field consist of basic conceptual tests: a gas-only test, a liquid-only test, a gas-only test with supplied entrained liquid, the Edwards pipe problem, and the GE level swell problem. The conceptual tests performed confirm the sound integrity of the three-field solver

  1. Cross-band noise model refinement for transform domain Wyner–Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2012-01-01

    The coding performance of TDWZ video coding trails that of conventional video coding solutions, mainly due to the quality of the side information, inaccurate noise modeling and loss in the final coding step. The major goal of this paper is to enhance the accuracy of the noise modeling, which is one of the most important aspects influencing the coding performance of DVC. A TDWZ video decoder with a novel cross-band based adaptive noise model is proposed, and a noise residue refinement scheme is introduced to successively update the estimated noise residue for noise modeling after each bit-plane. Experimental results show that the proposed noise model and noise residue refinement scheme can improve the rate-distortion (RD) performance of TDWZ video coding significantly. The quality of the side information modeling is also evaluated by a measure of the ideal code length.
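
    The correlation noise between the side information and the original frame is commonly modeled as Laplacian in TDWZ coding, with its scale parameter set from the residue variance (alpha = sqrt(2/variance)); refining the residue after each bit-plane therefore sharpens the model. A minimal sketch with invented residues, not the paper's cross-band estimator:

```python
import math

# Laplacian scale parameter for a DVC correlation-noise model:
# smaller residue variance after refinement -> larger (more
# confident) alpha.

def laplacian_alpha(residues):
    n = len(residues)
    mean = sum(residues) / n
    var = sum((r - mean) ** 2 for r in residues) / n
    return math.sqrt(2.0 / var)

coarse  = [4.0, -3.0, 5.0, -6.0, 2.0]    # residue before refinement
refined = [1.0, -0.5, 1.5, -1.0, 0.5]    # smaller residue after update

a_coarse = laplacian_alpha(coarse)
a_refined = laplacian_alpha(refined)
print(a_coarse < a_refined)  # True: refinement tightens the model
```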

  2. A model of the Alto Lazio boiling water reactor nuclear steam supply system using the LEGO code simulation

    International Nuclear Information System (INIS)

    Garbossa, G.B.; Spelta, S.; Cori, R.; Mosca, R.; Cento, P.

    1989-01-01

    An extensive effort has been made at the Italian National Electricity Board (ENEL) to construct and validate a LEGO model capable of simulating the operational transients of the Alto Lazio Nuclear Station, a site with two twin BWR/6-class units rated at 2894 MWt, with Mark III containment. The desired end product of this effort is an overall plant model, consisting of the Nuclear Steam Supply System (NSSS) model described in this paper and the Balance of Plant model, capable of simulating the transient response of the Alto Lazio Station. The models utilize the in-house-developed LEGO code, a modular package oriented to power plant modeling and suitable for transient analyses in support of power plant design, control system design, and operating procedure verification. The ability of the NSSS model to correctly predict the plant response is demonstrated through comparison with results calculated by the vendor using the REDY code and by an in-house RETRAN-02 model

  3. On the development of LWR fuel analysis code (1). Analysis of the FEMAXI code and proposal of a new model

    International Nuclear Information System (INIS)

    Lemehov, Sergei; Suzuki, Motoe

    2000-01-01

    This report summarizes a review of the modeling features of the FEMAXI code and proposes a new theoretical model of clad creep based on irradiation-induced microstructure change. It was pointed out that plutonium build-up in the fuel matrix and the non-uniform radial power profile at high burn-up significantly affect fuel behavior through interconnected effects with phenomena such as clad irradiation-induced creep, fission gas release, fuel thermal conductivity degradation, rim porous band formation, and the associated fuel swelling. These combined effects should therefore be properly incorporated into the models of the FEMAXI code so that the code can carry out numerical analysis at the level of accuracy and elaboration of modern experimental data obtained in test reactors. Also, the proposed mechanistic clad creep model has a general formalism that allows it to be applied flexibly to clad behavior analysis for Zr-based clad materials under normal operating conditions as well as power transients, using established out-of-pile mechanical properties. The model has been tested against experimental data, while further verification is needed, with specific emphasis on power ramps and transients. (author)

  4. IMPACT OF CO-CREATION ON INNOVATION CAPABILITY AND FIRM PERFORMANCE: A STRUCTURAL EQUATION MODELING

    Directory of Open Access Journals (Sweden)

    FATEMEH HAMIDI

    Full Text Available ABSTRACT Traditional firms used to design products, evaluate marketing messages, and control product distribution channels with no customer interface. With advances in interaction technologies, however, users can easily influence firms; the interaction between customers and firms is now at a peak compared to the past and is no longer controlled by firms. Customers play the two roles of value creators and consumers simultaneously. We examine the role of co-creation in the influence of innovation capability on firm performance. We develop hypotheses and test them using survey data. The results suggest that the implementation of co-creation partially mediates the effect of process innovation capability. We discuss the implications of these findings for research and practice on the design and implementation of a unique value co-creation model.

  5. Building a Conceptual Model of Routines, Capabilities, and Absorptive Capacity Interplay

    Directory of Open Access Journals (Sweden)

    Ivan Stefanovic

    2014-05-01

    Full Text Available Researchers have often used constructs such as routines, operational capability, dynamic capability, and absorptive capacity to explain various organizational phenomena, especially the competitive advantage of firms. As a consequence of their frequent use in different contexts, these constructs have become extremely broad and blurred, leaving a void in the strategic management literature. In this paper we attempt to bring a holistic perspective to these constructs by briefly reviewing the current state of the research and presenting a conceptual model that explains the causal relationships between them. The final section of the paper sheds some light on this topic from the econophysics perspective. The authors hope that the findings in this paper may serve as a foundation for other research endeavours on how firms achieve competitive advantage and thrive in their environments.

  6. Recent extensions and use of the statistical model code EMPIRE-II - version: 2.17 Millesimo

    International Nuclear Information System (INIS)

    Herman, M.

    2003-01-01

    These lecture notes describe new features of the modular code EMPIRE-2.17, designed to perform comprehensive calculations of nuclear reactions using a variety of nuclear reaction models. Compared to version 2.13, the current release has been extended to include a coupled-channels mechanism, the exciton model, a Monte Carlo approach to preequilibrium emission, the use of microscopic level densities, the width fluctuation correction, detailed calculation of recoil spectra, and powerful plotting capabilities provided by the ZVView package. The second part of the lecture concentrates on the use of the code in practical calculations, with emphasis on aspects relevant to nuclear data evaluation. In particular, the adjustment of model parameters is discussed in detail. (author)

  7. User's manual for a measurement simulation code

    International Nuclear Information System (INIS)

    Kern, E.A.

    1982-07-01

    The MEASIM code has been developed primarily for modeling process measurements in materials processing facilities associated with the nuclear fuel cycle. In addition, the code computes materials balances and the summation of materials balances, along with the associated variances. The code has been used primarily in the performance assessment of materials accounting systems. This report provides the information necessary for a potential user to employ the code in these applications. A number of examples that demonstrate most of the capabilities of the code are provided

  8. Transient Mathematical Modeling for Liquid Rocket Engine Systems: Methods, Capabilities, and Experience

    Science.gov (United States)

    Seymour, David C.; Martin, Michael A.; Nguyen, Huy H.; Greene, William D.

    2005-01-01

    The subject of mathematical modeling of the transient operation of liquid rocket engines is presented in overview form from the perspective of engineers working at the NASA Marshall Space Flight Center. The necessity of creating and utilizing accurate mathematical models as part of the liquid rocket engine development process is well established and is likely to increase in importance in the future. The issues of design considerations for transient operation, development testing, and failure scenario simulation are discussed. An overview of the derivation of the basic governing equations is presented, along with a discussion of the computational and numerical issues associated with implementing these equations in computer codes. Also, work in the field of generating usable fluid property tables is presented, along with an overview of future efforts to improve the tools used in the mathematical modeling process.

  9. A Computational Model of the SC Multisensory Neurons: Integrative Capabilities, Maturation, and Plasticity

    Directory of Open Access Journals (Sweden)

    Cristiano Cuppini

    2011-10-01

    Full Text Available Different cortical and subcortical structures contain neurons able to integrate stimuli of different sensory modalities. Among the most investigated integrative regions is the Superior Colliculus (SC), a midbrain structure whose role is to guide attentive behaviour and motor responses toward external events. Despite the large amount of experimental data in the literature, the neural mechanisms underlying the SC response are not completely understood. Moreover, recent data indicate that multisensory integration ability is the result of maturation after birth, depending on sensory experience. Mathematical models and computer simulations can be of value in investigating and clarifying these phenomena. In the last few years, several models have been implemented to shed light on these mechanisms and to gain a deeper comprehension of SC capabilities. Here, a neural network model (Cuppini et al., 2010) is extensively discussed. The model considers visual-auditory interaction and is able to reproduce and explain the main physiological features of multisensory integration in SC neurons, and their acquisition during postnatal life. To reproduce a neonatal condition, the model assumes that during early life: (1) cortical-SC synapses are present but not active; (2) in this phase, responses are driven by non-cortical inputs with very large receptive fields (RFs) and little spatial tuning; (3) a slight spatial preference for visual inputs is present. Sensory experience is modeled by a "training phase" in which the network is repeatedly exposed to modality-specific and cross-modal stimuli at different locations. As a result, cortical-SC synapses are crafted during this period thanks to Hebbian rules of potentiation and depression, RFs are reduced in size, and neurons exhibit integrative capabilities for cross-modal stimuli, such as multisensory enhancement, inverse effectiveness, and multisensory depression. The utility of the modelling

  10. Capabilities and performance of Elmer/Ice, a new-generation ice sheet model

    Directory of Open Access Journals (Sweden)

    O. Gagliardini

    2013-08-01

    Full Text Available The Fourth IPCC Assessment Report concluded that ice sheet flow models, in their state at the time, were unable to provide accurate forecasts of the increase in polar ice sheet discharge and the associated contribution to sea level rise. Since then, the glaciological community has undertaken a huge effort to develop and improve a new generation of ice flow models, and as a result a significant number of new ice sheet models have emerged. Among them is the parallel finite-element model Elmer/Ice, based on the open-source multi-physics code Elmer. It was one of the first full-Stokes models used to make projections of the evolution of the whole Greenland ice sheet for the coming two centuries. Originally developed to solve local ice flow problems of high mechanical and physical complexity, Elmer/Ice has today reached the maturity to solve larger-scale problems, earning the status of an ice sheet model. Here, we summarise almost 10 years of development performed by different groups. Elmer/Ice solves the full-Stokes equations for isotropic as well as anisotropic ice rheology, resolves grounding line dynamics as a contact problem, and contains various basal friction laws. Derived fields, such as the age of the ice, the strain rate, or the stress, can also be computed. Elmer/Ice includes two recently proposed inverse methods to infer poorly known parameters. Elmer is a highly parallelised code thanks to recent developments and the implementation of a block-preconditioned solver for the Stokes system. In this paper, all these components are presented in detail, as well as the numerical performance of the Stokes solver and developments planned for the future.

  11. Sodium/water pool-deposit bed model of the CONACS code

    International Nuclear Information System (INIS)

    Peak, R.D.

    1983-01-01

    A new pool-bed model of the CONACS (Containment Analysis Code System) code represents a major advance over the pool models of other containment analysis codes (the NABE code of France, the CEDAN code of Japan, and the CACECO and CONTAIN codes of the United States). This new model advances pool-bed modeling in the number of significant materials and processes that are included with appropriate rigor. The CONACS pool-bed model maintains material balances for eight chemical species (C, H2O, Na, NaH, Na2O, Na2O2, Na2CO3 and NaOH) that collect in the stationary liquid pool on the floor and in the deposit bed on the elevated shelf of the standard CONACS analysis cell

  12. Transitioning Enhanced Land Surface Initialization and Model Verification Capabilities to the Kenya Meteorological Department (KMD)

    Science.gov (United States)

    Case, Jonathan L.; Mungai, John; Sakwa, Vincent; Zavodsky, Bradley T.; Srikishen, Jayanthi; Limaye, Ashutosh; Blankenship, Clay B.

    2016-01-01

    Flooding, severe weather, and drought are key forecasting challenges for the Kenya Meteorological Department (KMD), based in Nairobi, Kenya. Atmospheric processes leading to convection, excessive precipitation and/or prolonged drought can be strongly influenced by land cover, vegetation, and soil moisture content, especially during anomalous conditions and dry/wet seasonal transitions. It is thus important to represent accurately land surface state variables (green vegetation fraction, soil moisture, and soil temperature) in Numerical Weather Prediction (NWP) models. The NASA SERVIR and the Short-term Prediction Research and Transition (SPoRT) programs in Huntsville, AL have established a working partnership with KMD to enhance its regional modeling capabilities. SPoRT and SERVIR are providing experimental land surface initialization datasets and model verification capabilities for capacity building at KMD. To support its forecasting operations, KMD is running experimental configurations of the Weather Research and Forecasting (WRF; Skamarock et al. 2008) model on a 12-km/4-km nested regional domain over eastern Africa, incorporating the land surface datasets provided by NASA SPoRT and SERVIR. SPoRT, SERVIR, and KMD participated in two training sessions in March 2014 and June 2015 to foster the collaboration and use of unique land surface datasets and model verification capabilities. Enhanced regional modeling capabilities have the potential to improve guidance in support of daily operations and high-impact weather and climate outlooks over Eastern Africa. For enhanced land-surface initialization, the NASA Land Information System (LIS) is run over Eastern Africa at 3-km resolution, providing real-time land surface initialization data in place of interpolated global model soil moisture and temperature data available at coarser resolutions. Additionally, real-time green vegetation fraction (GVF) composites from the Suomi-NPP VIIRS instrument are being incorporated

  13. A Perceptual Model for Sinusoidal Audio Coding Based on Spectral Integration

    NARCIS (Netherlands)

    Van de Par, S.; Kohlrausch, A.; Heusdens, R.; Jensen, J.; Holdt Jensen, S.

    2005-01-01

    Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of

  14. A perceptual model for sinusoidal audio coding based on spectral integration

    NARCIS (Netherlands)

    Van de Par, S.; Kohlrauch, A.; Heusdens, R.; Jensen, J.; Jensen, S.H.

    2005-01-01

    Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of

  15. COCOA Code for Creating Mock Observations of Star Cluster Models

    OpenAIRE

    Askar, Abbas; Giersz, Mirek; Pych, Wojciech; Dalessandro, Emanuele

    2017-01-01

    We introduce and present results from the COCOA (Cluster simulatiOn Comparison with ObservAtions) code that has been developed to create idealized mock photometric observations using results from numerical simulations of star cluster evolution. COCOA is able to present the output of realistic numerical simulations of star clusters carried out using Monte Carlo or N-body codes in a way that is useful for direct comparison with photometric observations. In this paper, we describe the C...

  16. MIG version 0.0 model interface guidelines: Rules to accelerate installation of numerical models into any compliant parent code

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, R.M.; Wong, M.K.

    1996-08-01

    A set of model interface guidelines, called MIG, is presented as a means by which any compliant numerical material model can be rapidly installed into any parent code without having to modify the model subroutines. Here, "model" usually means a material model, such as one that computes stress as a function of strain, though the term may be extended to any numerical operation. "Parent code" means a hydrocode, finite element code, etc., which uses the model and enforces, say, the fundamental laws of motion and thermodynamics. MIG requires the model developer (who creates the model package) to specify model needs in a standardized but flexible way. MIG includes a dictionary of technical terms that allows developers and parent code architects to share a common vocabulary when specifying field variables. For portability, database management is the responsibility of the parent code. Input/output occurs via structured calling arguments. As much model information as possible (such as the lists of required inputs, as well as lists of precharacterized material data and special needs) is supplied by the model developer in an ASCII text file. Every MIG-compliant model also has three required subroutines: to check data, to request extra field variables, and to perform model physics. To date, the MIG scheme has proven flexible in beta installations of a simple yield model, plus a more complicated viscodamage yield model, three electromechanical models, and a complicated anisotropic microcrack constitutive model. The MIG yield model has been successfully installed, using identical subroutines, in three vectorized parent codes and one parallel C++ code, all predicting comparable results. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, thereby reducing the cost of installing and sharing models in diverse new codes.
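The contract this record describes (three required entry points for checking data, requesting extra field variables, and performing model physics, with the parent code owning all storage) can be sketched as a minimal hypothetical example. The class, method, and property names below are illustrative, not the actual MIG specification.

```python
# Hypothetical sketch of a MIG-style contract: the model package exposes three
# required entry points; the parent code owns all state storage and passes
# field variables through structured calling arguments.

class ElasticModel:
    """Toy material model: 1-D linear elastic stress update (illustrative)."""

    def check_data(self, props):
        # required entry point 1: validate user input before the run starts
        if props.get("youngs_modulus", 0.0) <= 0.0:
            raise ValueError("youngs_modulus must be positive")
        return props

    def request_variables(self):
        # required entry point 2: tell the parent which field variables to allocate
        return ["stress"]

    def update(self, props, state, strain_increment):
        # required entry point 3: model physics; the parent supplies and stores state
        state["stress"] += props["youngs_modulus"] * strain_increment
        return state

# The parent code drives the model without knowing its internals.
model = ElasticModel()
props = model.check_data({"youngs_modulus": 200e9})
state = {name: 0.0 for name in model.request_variables()}
for de in (1e-4, 1e-4):
    state = model.update(props, state, de)
print(state["stress"])  # ~4.0e7 Pa after two 1e-4 strain increments
```

Because the parent owns the `state` dictionary, the same model object can be installed unchanged in any driver that honors the three-call protocol, which is the portability argument the guidelines make.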

  17. Hybrid modeling approach to improve the forecasting capability for the gaseous radionuclide in a nuclear site

    International Nuclear Information System (INIS)

    Jeong, Hyojoon; Hwang, Wontae; Kim, Eunhan; Han, Moonhee

    2012-01-01

    Highlights: ► This study aims to improve the reliability of air dispersion modeling. ► Tracer experiments assuming gaseous radionuclides were conducted at a nuclear site. ► The performance of a hybrid model combining ISC with ANFIS was investigated. ► The hybrid modeling approach performs better than the single ISC model. - Abstract: Predicted air concentrations of radioactive materials are important in environmental impact assessments for public health. In this study, the performance of a hybrid model combining the industrial source complex (ISC) model with an adaptive neuro-fuzzy inference system (ANFIS) for predicting tracer concentrations was investigated. Tracer dispersion experiments were performed to produce field data simulating the accidental release of radioactive material. ANFIS was trained so that the outputs of the ISC model approximate the measured data. Judging from the higher correlation coefficients between the measured and calculated concentrations, the hybrid modeling approach could be an appropriate technique for improving the capability of models to predict air concentrations of radioactive materials.
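The hybrid idea in this record (a physics-based dispersion model whose outputs are corrected by a data-driven stage trained against measurements) can be sketched under a strong simplification: a one-dimensional least-squares fit stands in for the ANFIS stage, and the data are synthetic. Nothing here reproduces the ISC model or the paper's experiments.

```python
# Sketch of a hybrid physics + data-driven correction. A plain least-squares
# fit (stand-in for ANFIS) maps the dispersion model's output toward measurements.

def fit_correction(model_out, measured):
    """Fit measured ~= a * model_out + b by 1-D least squares (stdlib only)."""
    n = len(model_out)
    mx = sum(model_out) / n
    my = sum(measured) / n
    a = sum((x - mx) * (y - my) for x, y in zip(model_out, measured)) / \
        sum((x - mx) ** 2 for x in model_out)
    b = my - a * mx
    return lambda x: a * x + b

# Synthetic example: the physics model is biased low by roughly a factor of two.
model_out = [1.0, 2.0, 3.0, 4.0]   # raw dispersion-model concentrations
measured  = [2.1, 4.0, 6.1, 8.0]   # tracer measurements at the same points
correct = fit_correction(model_out, measured)
print(round(correct(2.5), 2))  # corrected prediction, ~5.05
```

The trained correction is then applied to new physics-model outputs; the same training-against-field-data loop is what the record describes, with ANFIS supplying a far more flexible (fuzzy-rule-based) mapping than this linear stand-in.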

  18. A model of turbocharger radial turbines appropriate to be used in zero- and one-dimensional gas dynamics codes for internal combustion engines modelling

    Energy Technology Data Exchange (ETDEWEB)

    Serrano, J.R.; Arnau, F.J.; Dolz, V.; Tiseira, A. [CMT-Motores Termicos, Universidad Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia (Spain); Cervello, C. [Conselleria de Cultura, Educacion y Deporte, Generalitat Valenciana (Spain)

    2008-12-15

    The paper presents a model of fixed and variable geometry turbines. The aim of this model is to provide an efficient boundary condition for modeling turbocharged internal combustion engines with zero- and one-dimensional gas dynamic codes. The model is based, from its very conception, on the measured characteristics of the turbine. Nevertheless, it is capable of extrapolating to operating conditions that differ from those included in the turbine maps, since engines usually work within these zones. The presented model has been implemented in a one-dimensional gas dynamic code and has been used to calculate unsteady operating conditions for several turbines. The results obtained have been compared successfully against pressure-time histories measured upstream and downstream of the turbine during on-engine operation. (author)

  19. A model of turbocharger radial turbines appropriate to be used in zero- and one-dimensional gas dynamics codes for internal combustion engines modelling

    International Nuclear Information System (INIS)

    Serrano, J.R.; Arnau, F.J.; Dolz, V.; Tiseira, A.; Cervello, C.

    2008-01-01

    The paper presents a model of fixed and variable geometry turbines. The aim of this model is to provide an efficient boundary condition for modeling turbocharged internal combustion engines with zero- and one-dimensional gas dynamic codes. The model is based, from its very conception, on the measured characteristics of the turbine. Nevertheless, it is capable of extrapolating to operating conditions that differ from those included in the turbine maps, since engines usually work within these zones. The presented model has been implemented in a one-dimensional gas dynamic code and has been used to calculate unsteady operating conditions for several turbines. The results obtained have been compared successfully against pressure-time histories measured upstream and downstream of the turbine during on-engine operation

  20. Status Report on Modelling and Simulation Capabilities for Nuclear-Renewable Hybrid Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, C. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Epiney, A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Talbot, P. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kim, J. S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bragg-Sitton, S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Yigitoglu, A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Greenwood, S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cetiner, S. M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ganda, F. [Argonne National Lab. (ANL), Argonne, IL (United States); Maronati, G. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-09-01

    This report summarizes the current status of the modeling and simulation capabilities developed for the economic assessment of Nuclear-Renewable Hybrid Energy Systems (N-R HES). The increasing penetration of variable renewables is altering the profile of the net demand with which the other generators on the grid have to cope. N-R HES analyses are being conducted to determine the potential feasibility of mitigating the resultant volatility in the net electricity demand by adding industrial processes that utilize either thermal or electrical energy as stabilizing loads. This coordination of energy generators and users is proposed to mitigate the increase in electricity cost and cost volatility through the production of a saleable commodity. Overall, the financial performance of a system comprising peaking units (e.g. a gas turbine), baseload supply (e.g. a nuclear power plant), and an industrial process (e.g. a hydrogen plant) should be optimized under the constraint of satisfying an electricity demand profile with a certain level of variable renewable (wind) penetration. The optimization should entail both the sizing of the components/subsystems that comprise the system and the optimal dispatch strategy (the output at any given moment in time from the different subsystems). Some of the capabilities described here have been reported separately in [1, 2, 3]. The purpose of this report is to provide an update on the improvement and extension of those capabilities and to illustrate their integrated application in the economic assessment of N-R HES.

  1. The new MCNP6 depletion capability

    International Nuclear Information System (INIS)

    Fensin, M. L.; James, M. R.; Hendricks, J. S.; Goorley, J. T.

    2012-01-01

    The first MCNP-based in-line Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) a new performance-enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology. (authors)

  2. The New MCNP6 Depletion Capability

    International Nuclear Information System (INIS)

    Fensin, Michael Lorne; James, Michael R.; Hendricks, John S.; Goorley, John T.

    2012-01-01

    The first MCNP-based in-line Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) a new performance-enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology.

  3. Adaptive Planning: Understanding Organizational Workload to Capability/ Capacity through Modeling and Simulation

    Science.gov (United States)

    Hase, Chris

    2010-01-01

    In August 2003, the Secretary of Defense (SECDEF) established the Adaptive Planning (AP) initiative [1] with the objective of reducing the time necessary to develop and revise Combatant Commander (COCOM) contingency plans and increasing SECDEF plan visibility. In addition to reducing the traditional plan development timeline from twenty-four months to less than twelve months (with a goal of six months) [2], AP increased plan visibility to Department of Defense (DoD) leadership through In-Progress Reviews (IPRs). The IPR process, as well as the increased number of campaign and contingency plans COCOMs had to develop, increased the workload while the number of planners remained fixed. Several efforts, from collaborative planning tools to streamlined processes, were initiated to compensate for the increased workload, enabling COCOMs to better meet shorter planning timelines. This paper examines the Joint Strategic Capabilities Plan (JSCP) directed contingency planning and staffing requirements assigned to a combatant commander staff through the lens of modeling and simulation. The dynamics of developing a COCOM plan are captured with an ExtendSim [3] simulation. The resulting analysis provides a quantifiable means by which to measure a combatant commander staff's workload associated with developing and staffing JSCP [4] directed contingency plans against COCOM capability/capacity. Modeling and simulation bring significant opportunities in measuring the sensitivity of key variables in the assessment of workload against capability/capacity. Understanding the relationship between plan complexity, the number of plans, planning processes, and the number of planners on the one hand and the time required for plan development on the other provides valuable information to DoD leadership. Through modeling and simulation, AP leadership can gain greater insight when making key decisions on where best to allocate scarce resources in an effort to meet DoD planning objectives.
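The workload-versus-capacity question in this record can be sketched with a deliberately tiny capacity model: plans of given effort assigned greedily to the least-loaded planner, with the resulting makespan as the workload measure. This is an assumed toy heuristic (longest-processing-time scheduling), not the paper's ExtendSim simulation; all numbers are illustrative.

```python
import heapq

# Toy capacity model (assumption, not the paper's ExtendSim model): assign each
# plan's effort to the least-loaded planner and report the staff's makespan.

def makespan(plan_efforts_months, n_planners):
    """Longest-processing-time heuristic: largest plans first, least-loaded planner."""
    loads = [0.0] * n_planners          # cumulative effort per planner
    heapq.heapify(loads)                # min-heap keyed on current load
    for effort in sorted(plan_efforts_months, reverse=True):
        lightest = heapq.heappop(loads)
        heapq.heappush(loads, lightest + effort)
    return max(loads)

# Six contingency plans (effort in planner-months) shared by three planners.
print(makespan([12, 9, 6, 6, 3, 3], 3))  # 15.0 months to clear the queue
```

Varying the plan count, effort distribution, or planner count in such a model is the simplest version of the sensitivity analysis the paper performs on workload against capability/capacity.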

  4. Expand the Modeling Capabilities of DOE's EnergyPlus Building Energy Simulation Program

    Energy Technology Data Exchange (ETDEWEB)

    Don Shirey

    2008-02-28

    EnergyPlus™ is a new-generation computer software analysis tool that has been developed, tested, and commercialized to support DOE's Building Technologies (BT) Program in terms of whole-building, component, and systems R&D (http://www.energyplus.gov). It is also being used to support evaluation and decision making for zero energy building (ZEB) energy efficiency and supply technologies during new building design and existing building retrofits. Version 1.0 of EnergyPlus was released in April 2001, followed by semiannual updated versions over the ensuing seven-year period. This report summarizes work performed by the University of Central Florida's Florida Solar Energy Center (UCF/FSEC) to expand the modeling capabilities of EnergyPlus. The project tasks involved implementing, testing, and documenting the following new features or enhancements of existing features: (1) a model for packaged terminal heat pumps; (2) a model for gas engine-driven heat pumps with waste heat recovery; (3) proper modeling of window screens; (4) integrating and streamlining EnergyPlus air flow modeling capabilities; (5) comfort-based controls for cooling and heating systems; and (6) an improved model for microturbine power generation with heat recovery. UCF/FSEC located existing mathematical models or generated new models for these features and incorporated them into EnergyPlus. The existing or new models were (re)written in Fortran 90/95 and integrated within EnergyPlus in accordance with the EnergyPlus Programming Standard and Module Developer's Guide. Each model/feature was thoroughly tested and identified errors were repaired. Upon completion of each model implementation, the existing EnergyPlus documentation (e.g., Input Output Reference and Engineering Document) was updated with information describing the new or enhanced feature. Reference data sets were generated for several of the features to aid program users in selecting proper

  5. Mechanistic modelling of gaseous fission product behaviour in UO2 fuel by Rtop code

    International Nuclear Information System (INIS)

    Kanukova, V.D.; Khoruzhii, O.V.; Kourtchatov, S.Y.; Likhanskii, V.V.; Matveew, L.V.

    2002-01-01

    The current status of a mechanistic modelling by the RTOP code of the fission product behaviour in polycrystalline UO2 fuel is described. An outline of the code and implemented physical models is presented. The general approach to code validation is discussed. It is exemplified by the results of validation of the models of fuel oxidation and grain growth. The different models of intragranular and intergranular gas bubble behaviour have been tested and the sensitivity of the code in the framework of these models has been analysed. An analysis of available models of the resolution of grain face bubbles is also presented. The possibilities of the RTOP code are presented through the example of modelling behaviour of WWER fuel over the course of a comparative WWER-PWR experiment performed at Halden and by comparison with Yanagisawa experiments. (author)

  6. Evaluation of the Predictive Capabilities of a Phenomenological Combustion Model for Natural Gas SI Engine

    Directory of Open Access Journals (Sweden)

    Toman Rastislav

    2017-12-01

    The current study evaluates the predictive capabilities of a new phenomenological combustion model, available as part of the GT-Suite software package. It comprises two main sub-models: a 0D model of in-cylinder flow and turbulence, and a turbulent SI combustion model. The 0D in-cylinder flow model (EngCylFlow) uses a combined K-k-ε kinetic energy cascade approach to predict the evolution of the in-cylinder charge motion and turbulence, where K and k are the mean and turbulent kinetic energies and ε is the turbulent dissipation rate. The subsequent turbulent combustion model (EngCylCombSITurb) gives the in-cylinder burn rate, based on the calculation of flame speeds and flame kernel development. This phenomenological approach significantly reduces the overall computational effort compared to 3D-CFD, thus allowing computation of the full engine operating map and vehicle driving cycles. The model was calibrated using a full-map measurement from a turbocharged natural gas SI engine with swirl intake ports. Sensitivity studies on different calibration methods and laminar flame speed sub-models were conducted. The validation process for both the calibration and sensitivity studies compared in-cylinder pressure traces and burn rates for several engine operating points, achieving good overall results.

  7. Using a Simple Binomial Model to Assess Improvement in Predictive Capability: Sequential Bayesian Inference, Hypothesis Testing, and Power Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sigeti, David E. [Los Alamos National Laboratory; Pelak, Robert A. [Los Alamos National Laboratory

    2012-09-11

    We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, θ, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for θ may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of θ and, in particular, how there will always be a region around θ = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used as a kind of 'plan B metric' in the case that the analysis shows that θ is close to 1/2, and argue that such a plan B should generally be part of hypothesis testing. All the analysis presented in the paper is done with a
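
    The conjugate Beta-binomial update at the heart of this methodology is simple to sketch. The following illustrative Python (the function names are ours, not from the paper) computes the posterior probability that θ > 1/2 — i.e., that the new code genuinely predicts better — from the number of paired comparisons it wins, assuming a uniform Beta(1, 1) prior:

```python
import math

def beta_pdf(x, a, b):
    """Beta(a, b) density, computed via log-gamma for numerical stability."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1.0) * math.log(x) + (b - 1.0) * math.log(1.0 - x))

def prob_new_code_better(successes, n, a0=1.0, b0=1.0, steps=20000):
    """Posterior P(theta > 1/2) for the binomial parameter theta.

    Under a Beta(a0, b0) prior the posterior is Beta(a0 + s, b0 + n - s);
    we integrate its density over (1/2, 1) with the trapezoidal rule."""
    a = a0 + successes
    b = b0 + (n - successes)
    h = 0.5 / steps
    total = 0.0
    for i in range(steps + 1):
        x = 0.5 + i * h
        weight = 0.5 if i in (0, steps) else 1.0  # trapezoid endpoint weights
        total += weight * beta_pdf(x, a, b)
    return total * h

# Example: the new code gave the better prediction in 16 of 20 experiments.
confidence = prob_new_code_better(16, 20)
```

    With 16 wins out of 20 the posterior mass above θ = 1/2 is large, supporting a claim of improvement; with 10 of 20 it sits at 0.5, illustrating the indeterminate region around θ = 1/2 discussed in the abstract.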

  8. Development of an Aeroelastic Modeling Capability for Transient Nozzle Side Load Analysis

    Science.gov (United States)

    Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen

    2013-01-01

    Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development during testing. While three-dimensional, transient, turbulent, chemically reacting computational fluid dynamics methodology has been demonstrated to capture major side load physics with rigid nozzles, hot-fire tests often show nozzle structural deformation during major side load events, leading to structural damage if strengthening measures are not taken. The modeling picture is incomplete without the capability to address the two-way response between the structure and the fluid. The objective of this study is to develop a coupled aeroelastic modeling capability by implementing the necessary structural dynamics component into an anchored computational fluid dynamics methodology. The computational fluid dynamics component is based on an unstructured-grid, pressure-based formulation, while the computational structural dynamics component is developed in the framework of modal analysis. Transient aeroelastic nozzle startup analyses of the Block I Space Shuttle Main Engine at sea level were performed. The computed results from the aeroelastic nozzle modeling are presented.

  9. Initiative-taking, Improvisational Capability and Business Model Innovation in Emerging Market

    DEFF Research Database (Denmark)

    Cao, Yangfeng

    Business model innovation plays a very important role in developing competitive advantage when multinational small and medium-sized enterprises (SMEs) from developed countries enter emerging markets, because of the large contextual distances or gaps between the emerging and developed economies....... Many prior studies have shown that foreign subsidiaries play an important role in shaping the overall strategy of the parent company. However, little is known about how a subsidiary specifically facilitates business model innovation (BMI) in emerging markets. Adopting the method of comparative...... innovation in emerging markets. We find that high initiative-taking and strong improvisational capability can accelerate business model innovation. Our research contributes to the literature on international and strategic entrepreneurship....

  10. Entry into new markets: the development of the business model and dynamic capabilities

    Directory of Open Access Journals (Sweden)

    Victor Wolowski Kenski

    2017-12-01

    This work shows the path through which companies enter new markets or bring new propositions to established ones. It presents the market analysis process, the strategic decisions that determine the company's position in the market, and the required changes in configuration for this new action. It also studies the process of selecting the business model, the conditions for its definition, and the adoption and subsequent development of the resources and capabilities required to conquer this new market. The conditions necessary to remain in the market and maintain its position are also presented. These concepts are illustrated through a case study of a business group that takes part in different franchises.

  11. Coupling the severe accident code SCDAP with the system thermal hydraulic code MARS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Jin; Chung, Bub Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2004-07-01

    MARS is a best-estimate system thermal hydraulics code with multi-dimensional modeling capability. One of the aims of MARS code development is to make it a multi-functional code system with the analysis capability to cover the entire accident spectrum. For this purpose, the MARS code has been coupled with a number of other specialized codes, such as CONTEMPT for containment analysis and MASTER for 3-dimensional kinetics. In this study, the SCDAP code has been coupled with MARS to endow the MARS code system with severe accident analysis capability. With SCDAP, the MARS code system has now acquired the capability to simulate such severe accident related phenomena as cladding oxidation, melting and slumping of fuel and reactor structures.

  12. Coupling the severe accident code SCDAP with the system thermal hydraulic code MARS

    International Nuclear Information System (INIS)

    Lee, Young Jin; Chung, Bub Dong

    2004-01-01

    MARS is a best-estimate system thermal hydraulics code with multi-dimensional modeling capability. One of the aims of MARS code development is to make it a multi-functional code system with the analysis capability to cover the entire accident spectrum. For this purpose, the MARS code has been coupled with a number of other specialized codes, such as CONTEMPT for containment analysis and MASTER for 3-dimensional kinetics. In this study, the SCDAP code has been coupled with MARS to endow the MARS code system with severe accident analysis capability. With SCDAP, the MARS code system has now acquired the capability to simulate such severe accident related phenomena as cladding oxidation, melting and slumping of fuel and reactor structures.

  13. Application of the thermal-hydraulic codes in VVER-440 steam generators modelling

    Energy Technology Data Exchange (ETDEWEB)

    Matejovic, P.; Vranca, L.; Vaclav, E. [Nuclear Power Plant Research Inst. VUJE (Slovakia)

    1995-12-31

    The application of CATHARE2 V1.3U and RELAP5/MOD3.0 to VVER-440 SG modelling during normal conditions and during a transient with secondary water lowering is described. A similar recirculation model was chosen for both codes. In the CATHARE calculation, no special measures were taken to artificially optimize the flow rate distribution coefficients for the junction between the SG riser and the steam dome. Unlike the RELAP code, the CATHARE code is able to reasonably predict the secondary swell level in nominal conditions. Both codes are able to properly model natural phase separation at the SG water level. 6 refs.

  14. Application of the thermal-hydraulic codes in VVER-440 steam generators modelling

    Energy Technology Data Exchange (ETDEWEB)

    Matejovic, P; Vranca, L; Vaclav, E [Nuclear Power Plant Research Inst. VUJE (Slovakia)

    1996-12-31

    The application of CATHARE2 V1.3U and RELAP5/MOD3.0 to VVER-440 SG modelling during normal conditions and during a transient with secondary water lowering is described. A similar recirculation model was chosen for both codes. In the CATHARE calculation, no special measures were taken to artificially optimize the flow rate distribution coefficients for the junction between the SG riser and the steam dome. Unlike the RELAP code, the CATHARE code is able to reasonably predict the secondary swell level in nominal conditions. Both codes are able to properly model natural phase separation at the SG water level. 6 refs.

  15. Implementation of a Monte Carlo based inverse planning model for clinical IMRT with MCNP code

    International Nuclear Information System (INIS)

    He, Tongming Tony

    2003-01-01

    Inaccurate dose calculations and limitations of optimization algorithms in inverse planning introduce systematic and convergence errors into treatment plans. The aim of this work was to implement a Monte Carlo based inverse planning model for clinical IMRT that minimizes these errors. The strategy was to precalculate the dose matrices of beamlets using a Monte Carlo based method, followed by optimization of the beamlet intensities. The MCNP 4B (Monte Carlo N-Particle version 4B) code was modified to implement selective particle transport and dose tallying in voxels and efficient estimation of statistical uncertainties. The resulting performance gain was over eleven thousand fold. Due to concurrent calculation of multiple beamlets of individual ports, hundreds of beamlets in an IMRT plan could be calculated within a practical length of time. A finite-sized point source model provided simple and accurate modeling of the treatment beams. The dose matrix calculations were validated through measurements in phantoms; agreement was better than 1.5% or 0.2 cm. The beamlet intensities were optimized using a parallel-platform based optimization algorithm capable of escaping local minima and preventing premature convergence. The Monte Carlo based inverse planning model was applied to clinical cases, demonstrating the feasibility and capability of Monte Carlo based inverse planning for clinical IMRT. Systematic errors in the treatment plans of a commercial inverse planning system were assessed in comparison with the Monte Carlo based calculations; discrepancies in tumor doses and critical structure doses were up to 12% and 17%, respectively. The clinical importance of Monte Carlo based inverse planning for IMRT was demonstrated.
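
    The precompute-then-optimize strategy can be illustrated with a much-simplified convex stand-in: given a precomputed beamlet dose matrix D, nonnegative beamlet intensities w are fitted to a prescription by projected gradient descent. This sketch is hypothetical (names, sizes, and values are ours) and omits the escape-from-local-minima machinery of the actual algorithm; it only shows the structure of the optimization step:

```python
import numpy as np

def optimize_beamlets(D, d_presc, iters=2000):
    """Fit nonnegative beamlet intensities w to a dose prescription by
    projected gradient descent on ||D w - d_presc||^2.

    D has shape (n_voxels, n_beamlets): column j is the precomputed dose
    per unit intensity of beamlet j."""
    lr = 1.0 / np.linalg.norm(D, 2) ** 2   # safe step size from the spectral norm
    w = np.zeros(D.shape[1])
    for _ in range(iters):
        residual = D @ w - d_presc         # current dose error per voxel
        grad = D.T @ residual              # descent direction (half-gradient)
        w = np.maximum(w - lr * grad, 0.0) # project onto the constraint w >= 0
    return w

# Toy setup: two beamlets, three voxels, uniform prescription of 2 Gy
D = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])
d_presc = np.array([2.0, 2.0, 2.0])
w = optimize_beamlets(D, d_presc)
```

    In this toy case the exactly feasible solution is w = [2, 2], which the iteration recovers; in practice D has thousands of columns and the objective carries dose-volume penalties rather than a plain least-squares residual.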

  16. Development and assessment of Multi-dimensional flow models in the thermal-hydraulic system analysis code MARS

    Energy Technology Data Exchange (ETDEWEB)

    Chung, B. D.; Bae, S. W.; Jeong, J. J.; Lee, S. M

    2005-04-15

    A new multi-dimensional component has been developed to allow for more flexible 3D capabilities in the system code MARS. This component can be applied in Cartesian and cylindrical coordinates. For this model, the 3D convection and diffusion terms were implemented in the momentum and energy equations, and a simple Prandtl mixing-length model is applied for the turbulent viscosity. The developed multi-dimensional component was assessed against five conceptual problems with analytic solutions, and several separate-effect tests (SETs) were calculated and compared with experimental data. With this newly developed multi-dimensional flow module, the MARS code can realistically calculate the flow fields in pools such as those occurring in the core, steam generators and IRWST.
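
    The Prandtl mixing-length closure mentioned above is compact enough to state directly: the turbulent viscosity is ν_t = l_m² |du/dy|, with the mixing length l_m typically taken proportional to the wall distance. A minimal illustrative sketch (not code from MARS; the near-wall form l_m = κy is an assumption):

```python
def mixing_length_viscosity(du_dy, y, kappa=0.41):
    """Prandtl mixing-length model: nu_t = l_m**2 * |du/dy|.

    Near a wall the mixing length is commonly taken as l_m = kappa * y,
    with kappa ~ 0.41 the von Karman constant. Units: du_dy in 1/s,
    y in m, returned eddy viscosity in m^2/s."""
    l_m = kappa * y
    return l_m ** 2 * abs(du_dy)

# e.g. a shear rate of 100 1/s at 1 cm from the wall
nu_t = mixing_length_viscosity(du_dy=100.0, y=0.01)
```

    The appeal for a system code is exactly this algebraic simplicity: no extra transport equations are solved for the turbulence quantities.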

  17. Development and assessment of Multi-dimensional flow models in the thermal-hydraulic system analysis code MARS

    International Nuclear Information System (INIS)

    Chung, B. D.; Bae, S. W.; Jeong, J. J.; Lee, S. M.

    2005-04-01

    A new multi-dimensional component has been developed to allow for more flexible 3D capabilities in the system code MARS. This component can be applied in Cartesian and cylindrical coordinates. For this model, the 3D convection and diffusion terms were implemented in the momentum and energy equations, and a simple Prandtl mixing-length model is applied for the turbulent viscosity. The developed multi-dimensional component was assessed against five conceptual problems with analytic solutions, and several separate-effect tests (SETs) were calculated and compared with experimental data. With this newly developed multi-dimensional flow module, the MARS code can realistically calculate the flow fields in pools such as those occurring in the core, steam generators and IRWST.

  18. Development of realistic thermal-hydraulic system analysis codes ; development of thermal hydraulic test requirements for multidimensional flow modeling

    Energy Technology Data Exchange (ETDEWEB)

    Suh, Kune Yull; Yoon, Sang Hyuk; Noh, Sang Woo; Lee, Il Suk [Seoul National University, Seoul (Korea)

    2002-03-01

    This study is concerned with developing a multidimensional flow model required for the system analysis code MARS to more mechanistically simulate a variety of thermal hydraulic phenomena in the nuclear steam supply system. The capability of the MARS code as a thermal hydraulic analysis tool for optimized system design can be expanded by improving the current calculational methods and adding new models. In this study the relevant literature on multidimensional flow models that may potentially be applied to the multidimensional analysis code was surveyed. Research items were critically reviewed and suggested in order to better predict the multidimensional thermal hydraulic behavior and to identify test requirements. A small-scale preliminary test was performed in the downcomer formed by two vertical plates to analyze the multidimensional flow pattern in a simple geometry. The experimental result may be applied to the code for analysis of fluid impingement on the reactor downcomer wall. Also, data were collected to identify the controlling parameters for one-dimensional and multidimensional flow behavior. 22 refs., 40 figs., 7 tabs. (Author)

  19. Progress in nuclear well logging modeling using deterministic transport codes

    International Nuclear Information System (INIS)

    Kodeli, I.; Aldama, D.L.; Maucec, M.; Trkov, A.

    2002-01-01

    Further studies in continuation of the work presented in 2001 in Portoroz were performed in order to study and improve the performance, precision and domain of application of deterministic transport codes with respect to oil well logging analysis. These codes are in particular expected to complement Monte Carlo solutions, since they can provide a detailed particle flux distribution in the whole geometry in a very reasonable CPU time; real-time calculation can be envisaged. The performance of deterministic transport methods was compared to that of the Monte Carlo method. The IRTMBA generic benchmark was analysed using the codes MCNP-4C and DORT/TORT. Centred as well as eccentric casings were considered, using a 14 MeV point neutron source and NaI scintillation detectors. Neutron and gamma spectra were compared at two detector positions. (author)

  20. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments.

    Science.gov (United States)

    Santos, José; Monteagudo, Angel

    2011-02-21

    As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization, or level of adaptation, of the canonical genetic code was measured taking into account the harmful consequences of point mutations leading to the replacement of one amino acid by another. There are two basic approaches to measuring the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternatives, and the engineering approach, which compares the canonical code with the best possible alternative. Here we used a genetic algorithm to search for better adapted hypothetical codes and, as a method, to gauge the difficulty of finding such alternative codes, allowing the canonical code to be clearly situated in the fitness landscape. This novel use of evolutionary computing provides a new perspective in the open debate between the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in both approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum under both models, when the more realistic model of codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Simulated evolution clearly reveals that the canonical genetic code is far from optimal. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account in the two models, as indicated by the fact that the best possible codes show the patterns of the
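
    The evolutionary-computing approach can be sketched with a toy genetic algorithm: candidate codes are permutations of amino-acid assignments, the fitness penalizes large property changes under single-step mutations, and elitist truncation selection with swap mutation searches the landscape. Everything below (property values, fitness definition, parameters) is an illustrative stand-in, not the authors' actual model:

```python
import random

def fitness(code, prop):
    """Toy cost: summed squared change in an amino-acid property between
    'mutationally adjacent' positions, a stand-in for the point-mutation
    cost used in code-optimality studies. Lower is better."""
    return sum((prop[code[i]] - prop[code[i + 1]]) ** 2
               for i in range(len(code) - 1))

def evolve(prop, pop_size=30, generations=200, seed=0):
    """Elitist GA over permutations: keep the best half, mutate by swaps."""
    rng = random.Random(seed)
    n = len(prop)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, prop))
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.randrange(n), rng.randrange(n)
            child[i], child[j] = child[j], child[i]  # swap mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda c: fitness(c, prop))

# Hypothetical property values for 20 'amino acids'
prop_rng = random.Random(1)
props = [prop_rng.uniform(0.0, 1.0) for _ in range(20)]
best = evolve(props)
```

    Because the best individuals always survive, the best fitness is monotonically nonincreasing over generations; how quickly the GA closes in on good codes is exactly the "difficulty" signal the study exploits.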

  1. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    Directory of Open Access Journals (Sweden)

    Monteagudo Ángel

    2011-02-01

    Abstract. Background: As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization, or level of adaptation, of the canonical genetic code was measured taking into account the harmful consequences of point mutations leading to the replacement of one amino acid by another. There are two basic approaches to measuring the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternatives, and the engineering approach, which compares the canonical code with the best possible alternative. Results: Here we used a genetic algorithm to search for better adapted hypothetical codes and, as a method, to gauge the difficulty of finding such alternative codes, allowing the canonical code to be clearly situated in the fitness landscape. This novel use of evolutionary computing provides a new perspective in the open debate between the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in both approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum under both models, when the more realistic model of codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Conclusions: Simulated evolution clearly reveals that the canonical genetic code is far from optimal. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account in the two models, as indicated by the

  2. Hybrid microscopic depletion model in nodal code DYN3D

    International Nuclear Information System (INIS)

    Bilodid, Y.; Kotlyar, D.; Shwageraus, E.; Fridman, E.; Kliem, S.

    2016-01-01

    Highlights: • A new hybrid method of accounting for spectral history effects is proposed. • Local concentrations of over 1000 nuclides are calculated using micro depletion. • The new method is implemented in the nodal code DYN3D and verified. - Abstract: The paper presents a general hybrid method that combines the micro-depletion technique with correction of micro- and macro-diffusion parameters to account for spectral history effects. The fuel in a core is subjected to time- and space-dependent operational conditions (e.g. coolant density), which cannot be predicted in advance. However, lattice codes assume some average conditions to generate cross sections (XS) for nodal diffusion codes such as DYN3D. Deviation of the local operational history from the average conditions leads to accumulation of errors in the XS, which are referred to as spectral history effects. Various methods to account for spectral history effects, such as the spectral index, burnup-averaged operational parameters and micro-depletion, have been implemented in some nodal codes. Recently, an alternative method, which characterizes the fuel depletion state by burnup and 239Pu concentration (denoted the Pu-correction method), was proposed, implemented in the nodal code DYN3D and verified for a wide range of history effects. The method is computationally efficient; however, it has applicability limitations. The current study seeks to improve the accuracy and applicability range of the Pu-correction method. The proposed hybrid method combines the micro-depletion method with a XS characterization technique similar to the Pu-correction method. The method was implemented in DYN3D and verified on multiple test cases. The results obtained with DYN3D were compared to those obtained with the Monte Carlo code Serpent, which was also used to generate the XS. The observed differences are within the statistical uncertainties.

  3. Predictive capabilities of a two-dimensional model in the ground water transport of radionuclides

    International Nuclear Information System (INIS)

    Gureghian, A.B.; Beskid, N.J.; Marmer, G.J.

    1978-01-01

    The discharge of low-level radioactive waste into tailings ponds is a potential source of ground water contamination. The estimation of the radiological hazards related to the ground water transport of radionuclides from tailings retention systems depends on reasonably accurate estimates of the movement of both water and solute. A two-dimensional mathematical model with predictive capability for ground water flow and solute transport has been developed. The flow equation is solved under steady-state conditions and the mass transport equation under transient conditions. The simultaneous solution of both equations is achieved through the finite element technique using isoparametric elements, based on the Galerkin formulation. However, in contrast to the flow equation solution, the weighting functions used in the solution of the mass transport equation have a non-symmetric form. The predictive capability of the model is demonstrated using an idealized case based on analyses of field data obtained from the sites of operating uranium mills. The pH of the solution, which governs the variation of the distribution coefficient (Kd) at a particular site, appears to be the most important factor in assessing the rate of migration of the elements considered herein.
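
    The role of the distribution coefficient Kd in such assessments is conveniently expressed through the standard linear-sorption retardation factor, R = 1 + (ρ_b/n)·Kd, which divides the pore-water velocity to give the solute migration velocity. A minimal sketch, with assumed illustrative units and values (bulk density in g/cm³, Kd in mL/g, Darcy flux in m/d); this is textbook hydrogeology, not the paper's finite element model:

```python
def retardation_factor(bulk_density, porosity, k_d):
    """Linear-sorption retardation: R = 1 + (rho_b / n) * K_d.

    Assumed units: bulk_density in g/cm^3, k_d in mL/g, porosity
    dimensionless, so (rho_b / n) * K_d is dimensionless."""
    return 1.0 + (bulk_density / porosity) * k_d

def solute_velocity(darcy_flux, porosity, retardation):
    """Radionuclide migration velocity: pore-water velocity v = q / n,
    slowed by the retardation factor R."""
    return (darcy_flux / porosity) / retardation

# Hypothetical values: rho_b = 1.6 g/cm^3, n = 0.4, K_d = 2.0 mL/g
R = retardation_factor(1.6, 0.4, 2.0)   # R = 9: solute moves 9x slower than water
v = solute_velocity(0.2, 0.4, R)        # q = 0.2 m/d -> ~0.056 m/d
```

    Because Kd varies strongly with pH, even modest pH shifts in the tailings solution move R, and hence the migration rate, by large factors, which is why the abstract singles out pH as the controlling parameter.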

  4. INTRA/Mod3.2. Manual and Code Description. Volume I - Physical Modelling

    International Nuclear Information System (INIS)

    Andersson, Jenny; Edlund, O.; Hermann, J.; Johansson, Lise-Lotte

    1999-01-01

    The INTRA Manual consists of two volumes. Volume I of the manual is a thorough description of the INTRA code, its physical modelling and the governing numerical methods; Volume II, the User's Manual, is an input description. This document, the Physical Modelling of INTRA, contains code characteristics, integration methods and applications.

  5. INTRA/Mod3.2. Manual and Code Description. Volume I - Physical Modelling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Jenny; Edlund, O; Hermann, J; Johansson, Lise-Lotte

    1999-01-01

    The INTRA Manual consists of two volumes. Volume I of the manual is a thorough description of the INTRA code, its physical modelling and the governing numerical methods; Volume II, the User's Manual, is an input description. This document, the Physical Modelling of INTRA, contains code characteristics, integration methods and applications.

  6. Modelling of the RA-1 reactor using a Monte Carlo code

    International Nuclear Information System (INIS)

    Quinteiro, Guillermo F.; Calabrese, Carlos R.

    2000-01-01

    A model of the Argentine RA-1 reactor was developed for the first time using the MCNP Monte Carlo code. This model was validated using experimental neutron and gamma measurements at different energy ranges and locations. In addition, the resulting fluxes were compared with data obtained using a 3D diffusion code. (author)

  7. Full optical model of micro-endoscope with optical coherence microscopy, multiphoton microscopy and visible capabilities

    Science.gov (United States)

    Vega, David; Kiekens, Kelli C.; Syson, Nikolas C.; Romano, Gabriella; Baker, Tressa; Barton, Jennifer K.

    2018-02-01

    While Optical Coherence Microscopy (OCM), Multiphoton Microscopy (MPM), and narrowband imaging are powerful imaging techniques that can be used to detect cancer, each imaging technique has limitations when used by itself. Combining them into an endoscope to work in synergy can help achieve high sensitivity and specificity for diagnosis at the point of care. Such complex endoscopes have an elevated risk of failure, and proper modelling ensures functionality and minimizes risk. We present full 2D and 3D models of a multimodality optical micro-endoscope, called a salpingoscope, designed to provide real-time detection of carcinomas. The models evaluate the illumination and light collection capabilities of the various modalities. The design features two optical paths with different numerical apertures (NA) through a single lens system with a scanning optical fiber. The dual path is achieved using dichroic coatings embedded in a triplet. A high-NA optical path is designed to perform OCM and MPM, while a low-NA optical path is designed for the visible spectrum, for navigating the endoscope to areas of interest and for narrowband imaging. Various tests, such as computing the reflectance profile of homogeneous epithelial tissue, were performed to adjust the models properly. Light collection models for the different modalities were created and tested for efficiency. While it is challenging to evaluate the efficiency of multimodality endoscopes, the models ensure that the system is designed for the expected light collection levels and provides a detectable signal for the intended imaging.

  8. A static VAR compensator model for improved ride-through capability of wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Akhmatov, V.; Soebrink, K.

    2004-12-01

    Dynamic reactive compensation is associated with reactive power and voltage control of induction generator based wind turbines. With regard to wind power, application areas of dynamic reactive compensation include improvement of power quality and voltage stability, control of the reactive power exchange between the wind farm and the power grid at the wind farm connection point, and improvement of the ride-through capability of the wind farm. This article presents a model of a Static VAR Compensator (SVC), a kind of dynamic reactive compensation device, with dynamic generic control. The term 'generic' implies that the model is general and must cover a variety of SVC units and their specific controls from different manufacturers. The SVC model with dynamic generic control was implemented by Eltra in the simulation tool Powerfactory and validated against the SVC model in the tool PSCAD/EMTDC. Implementation in Powerfactory makes it possible to apply the SVC model with dynamic generic control in investigations of power system stability related to the establishment of large wind farms, without restrictions on the model size of the power grid. (Author)

  9. Functional capabilities of the breadboard model of SIDRA satellite-borne instrument

    International Nuclear Information System (INIS)

    Dudnik, O.V.; Kurbatov, E.V.; Titov, K.G.; Prieto, M.; Sanchez, S.; Sylwester, J.; Gburek, S.; Podgorski, P.

    2013-01-01

    This paper presents the structure, principles of operation and functional capabilities of the breadboard model of the SIDRA compact satellite-borne instrument. SIDRA is intended for monitoring fluxes of high-energy charged particles under outer-space conditions. We present the reasons for developing a particle spectrometer and list the main objectives to be achieved with this instrument. The paper describes the major specifications of the analog and digital signal processing units of the breadboard model. A specially designed and developed data processing module based on the Actel ProASIC3E A3PE3000 FPGA is presented and compared with the all-in-one digital signal processing board based on the Xilinx Spartan 3 XC3S1500 FPGA.

  10. Programming with models: modularity and abstraction provide powerful capabilities for systems biology.

    Science.gov (United States)

    Mallavarapu, Aneil; Thomson, Matthew; Ullian, Benjamin; Gunawardena, Jeremy

    2009-03-06

    Mathematical models are increasingly used to understand how phenotypes emerge from systems of molecular interactions. However, their current construction as monolithic sets of equations presents a fundamental barrier to progress. Overcoming this requires modularity, enabling sub-systems to be specified independently and combined incrementally, and abstraction, enabling generic properties of biological processes to be specified independently of specific instances. These, in turn, require models to be represented as programs rather than as datatypes. Programmable modularity and abstraction enable libraries of modules to be created, which can be instantiated and reused repeatedly in different contexts with different components. We have developed a computational infrastructure that accomplishes this. We show here why such capabilities are needed, what is required to implement them and what can be accomplished with them that could not be done previously.

  11. Concerns over modeling and warning capabilities in wake of Tohoku Earthquake and Tsunami

    Science.gov (United States)

    Showstack, Randy

    2011-04-01

    Improved earthquake models, better tsunami modeling and warning capabilities, and a review of nuclear power plant safety are all greatly needed following the 11 March Tohoku earthquake and tsunami, according to scientists at the European Geosciences Union's (EGU) General Assembly, held 3-8 April in Vienna, Austria. EGU quickly organized a morning session of oral presentations and an afternoon panel discussion less than 1 month after the earthquake and the tsunami and the resulting crisis at Japan's Fukushima nuclear power plant, which has now been identified as having reached the same level of severity as the 1986 Chernobyl disaster. Many of the scientists at the EGU sessions expressed concern about the inability to have anticipated the size of the earthquake and the resulting tsunami, which appears likely to have caused most of the fatalities and damage, including damage to the nuclear plant.

  12. The Aviation System Analysis Capability Air Carrier Cost-Benefit Model

    Science.gov (United States)

    Gaier, Eric M.; Edlich, Alexander; Santmire, Tara S.; Wingrove, Earl R., III

    1999-01-01

    To meet its objective of assisting the U.S. aviation industry with the technological challenges of the future, NASA must identify research areas that have the greatest potential for improving the operation of the air transportation system. Therefore, NASA is developing the ability to evaluate the potential impact of various advanced technologies. By thoroughly understanding the economic impact of advanced aviation technologies and by evaluating how the new technologies will be used in the integrated aviation system, NASA aims to balance its aeronautical research program and help speed the introduction of high-leverage technologies. To meet these objectives, NASA is building the Aviation System Analysis Capability (ASAC). NASA envisions ASAC primarily as a process for understanding and evaluating the impact of advanced aviation technologies on the U.S. economy. ASAC consists of a diverse collection of models and databases used by analysts and other individuals from the public and private sectors brought together to work on issues of common interest to organizations in the aviation community. ASAC also will be a resource available to the aviation community to analyze, inform, and assist scientists, engineers, analysts, and program managers in their daily work. The ASAC differs from previous NASA modeling efforts in that the economic behavior of buyers and sellers in the air transportation and aviation industries is central to its conception. Commercial air carriers, in particular, are an important stakeholder in this community. Therefore, to fully evaluate the implications of advanced aviation technologies, ASAC requires a flexible financial analysis tool that credibly links the technology of flight with the financial performance of commercial air carriers. By linking technical and financial information, NASA ensures that its technology programs will continue to benefit the user community.
In addition, the analysis tool must be capable of being incorporated into the

  13. Improving system modeling accuracy with Monte Carlo codes

    International Nuclear Information System (INIS)

    Johnson, A.S.

    1996-01-01

    The use of computer codes based on Monte Carlo methods to perform criticality calculations has become commonplace. Although results frequently published in the literature report calculated k-eff values to four decimal places, people who use the codes in their everyday work say that they only believe the first two decimal places of any result. The lack of confidence in the computed k-eff values may be due to the tendency of the reported standard deviation to underestimate errors associated with the Monte Carlo process. The standard deviation as reported by the codes is the standard deviation of the mean of the k-eff values for individual generations in the computer simulation, not the standard deviation of the computed k-eff value compared with the physical system. A more subtle problem with the standard deviation of the mean as reported by the codes is that the k-eff values from the separate generations are not statistically independent, since the k-eff of a given generation is a function of the k-eff of the previous generation, which is ultimately based on the starting source. To produce a standard deviation that is more representative of the physical system, statistically independent values of k-eff are needed.
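The generation-to-generation correlation described in this abstract can be illustrated with a small sketch. Below, a synthetic AR(1) series stands in for generation-wise k-eff values (all parameters are invented for illustration, not from any real criticality code); a batch-means estimate, one common way to obtain approximately independent values, is compared with the naive standard deviation of the mean:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic AR(1) series standing in for generation-wise k-eff values:
# each generation's k-eff depends on the previous one, as the abstract
# notes for real Monte Carlo runs (rho and sigma are illustrative only).
n_gen, rho, sigma = 20_000, 0.8, 0.002
k = np.empty(n_gen)
k[0] = 1.0
for i in range(1, n_gen):
    k[i] = 1.0 + rho * (k[i - 1] - 1.0) + sigma * rng.standard_normal()

# Naive standard deviation of the mean: treats generations as independent.
naive_sd = k.std(ddof=1) / np.sqrt(n_gen)

# Batch means: average over blocks long compared with the correlation
# length, so the block averages are approximately independent.
batch = 200
means = k[: n_gen // batch * batch].reshape(-1, batch).mean(axis=1)
batch_sd = means.std(ddof=1) / np.sqrt(len(means))

print(f"naive SD of mean: {naive_sd:.2e}")
print(f"batch-means SD:   {batch_sd:.2e}")  # noticeably larger
```

For this autocorrelation the batch-means estimate comes out several times larger than the naive one, which is exactly the underestimation the abstract warns about.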

  14. The APS SASE FEL: modeling and code comparison

    International Nuclear Information System (INIS)

    Biedron, S. G.

    1999-01-01

    A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL

  15. XSOR codes users manual

    International Nuclear Information System (INIS)

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named ''XSOR''. The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes were developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena, and their uncertainty, which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of the source terms.

  16. Basic data, computer codes and integral experiments: The tools for modelling in nuclear technology

    International Nuclear Information System (INIS)

    Sartori, E.

    2001-01-01

    When studying applications in nuclear technology we need to understand and be able to predict the behavior of systems manufactured by human enterprise. First, the underlying basic physical and chemical phenomena need to be understood. We then have to predict the results of the interplay of the large number of different basic events: i.e. the macroscopic effects. In order to build confidence in our modelling capability, we then need to compare these results against measurements carried out on such systems. The different levels of modelling require the solution of different types of equations using different types of parameters. The tools required for carrying out a complete validated analysis are: - The basic nuclear or chemical data; - The computer codes; and - The integral experiments. This article describes the role each component plays in a computational scheme designed for modelling purposes. It also describes which tools have been developed and are internationally available. The role that the OECD/NEA Data Bank, the Radiation Shielding Information Computational Center (RSICC), and the IAEA Nuclear Data Section play in making these elements available to the community of scientists and engineers is described. (author)

  17. Qualification and application of nuclear reactor accident analysis code with the capability of internal assessment of uncertainty; Qualificacao e aplicacao de codigo de acidentes de reatores nucleares com capacidade interna de avaliacao de incerteza

    Energy Technology Data Exchange (ETDEWEB)

    Borges, Ronaldo Celem

    2001-10-15

    This thesis presents an independent qualification of the CIAU code ('Code with the capability of Internal Assessment of Uncertainty'), which implements an internal uncertainty evaluation process in a thermal-hydraulic system code on a realistic basis. This is done by combining the uncertainty methodology UMAE ('Uncertainty Methodology based on Accuracy Extrapolation') with the RELAP5/Mod3.2 code, which makes it possible to associate uncertainty band estimates with the results of realistic code calculations, meeting the licensing requirements of safety analysis. The independent qualification is supported by RELAP5/Mod3.2 simulations of accident-condition tests in the LOBI experimental facility and of an event which occurred in the Angra 1 nuclear power plant, by comparison with measured results, and by establishing uncertainty bands on the calculated time trends of safety parameters. These bands have indeed enveloped the measured trends. The results of this independent qualification of CIAU demonstrate the adequate application of a systematic realistic code procedure to analyse accidents with uncertainties incorporated in the results, although there is an evident need to extend the uncertainty database. It has been verified that use of the code with this internal assessment of uncertainty is feasible in the design and licensing stages of a NPP. (author)

  18. Improving National Capability in Biogeochemical Flux Modelling: the UK Environmental Virtual Observatory (EVOp)

    Science.gov (United States)

    Johnes, P.; Greene, S.; Freer, J. E.; Bloomfield, J.; Macleod, K.; Reaney, S. M.; Odoni, N. A.

    2012-12-01

    The best outcomes from watershed management arise where policy and mitigation efforts are underpinned by strong science evidence, but there are major resourcing problems associated with the scale of monitoring needed to effectively characterise the sources, rates and impacts of nutrient enrichment nationally. The challenge is to increase national capability in predictive modelling of nutrient flux to waters, securing an effective mechanism for transferring knowledge and management tools from data-rich to data-poor regions. The inadequacy of existing tools and approaches to address these challenges provided the motivation for the Environmental Virtual Observatory programme (EVOp), an innovation from the UK Natural Environment Research Council (NERC). EVOp is exploring the use of a cloud-based infrastructure in catchment science, developing an exemplar to explore N and P fluxes to inland and coastal waters in the UK from grid to catchment and national scale. EVOp is bringing together for the first time national data sets, models and uncertainty analysis into cloud computing environments to explore and benchmark current predictive capability for national scale biogeochemical modelling. The objective is to develop national biogeochemical modelling capability, capitalising on extensive national investment in the development of science understanding and modelling tools to support integrated catchment management, and supporting knowledge transfer from data-rich to data-poor regions. The AERC export coefficient model (Johnes et al., 2007) has been adapted to function within the EVOp cloud environment, and on a geoclimatic basis, using a range of high-resolution, geo-referenced digital datasets as an initial demonstration of the enhanced national capacity for N and P flux modelling using cloud computing infrastructure. Geoclimatic regions are landscape units displaying homogeneous or quasi-homogeneous functional behaviour in terms of process controls on N and P cycling.
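The core arithmetic of an export coefficient model of the kind cited above is a sum over land-use classes of area multiplied by a per-class export coefficient. A minimal sketch (every number below is hypothetical, not a value from the AERC model):

```python
# Toy export-coefficient calculation: total nitrogen load is the sum of
# each land-use area multiplied by its export coefficient (kg N/ha/yr).
# All coefficients and areas are invented for illustration.
export_coeff_kg_per_ha = {"arable": 20.0, "grassland": 8.0, "woodland": 2.0}
area_ha = {"arable": 1500.0, "grassland": 900.0, "woodland": 600.0}

total_load_kg = sum(
    area_ha[lu] * export_coeff_kg_per_ha[lu] for lu in area_ha
)
print(f"annual N load: {total_load_kg:.0f} kg")  # 1500*20 + 900*8 + 600*2 = 38400
```

Regionalising such a model, as EVOp does on a geoclimatic basis, amounts to swapping in coefficient sets appropriate to each landscape unit.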

  19. Development of a three-dimensional computer code for reconstructing power distributions by means of side reflector instrumentation and determination of the capabilities and limitations of this method

    International Nuclear Information System (INIS)

    Knob, P.J.

    1982-07-01

    This work is concerned with the detection of flux disturbances in pebble bed high temperature reactors by means of flux measurements in the side reflector. Included among the disturbances studied are xenon oscillations, rod group insertions, and individual rod insertions. Using the three-dimensional diffusion code CITATION, core calculations for both a very small reactor (KAHTER) and a large reactor (PNP-3000) were carried out to determine the neutron fluxes at the detector positions. These flux values were then used in flux mapping codes for reconstructing the flux distribution in the core. As an extension of the already existing two-dimensional MOFA code, which maps azimuthal disturbances, a new three-dimensional flux mapping code ZELT was developed for handling axial disturbances as well. It was found that both flux mapping programs give satisfactory results for small and large pebble bed reactors alike. (orig.) [de]

  20. Modeling the reactor core of MNSR to simulate its dynamic behavior using the code PARET

    International Nuclear Information System (INIS)

    Hainoun, A.; Alhabet, F.

    2004-02-01

    Using the computer code PARET, the core of the MNSR reactor was modelled, and the neutronics and thermal-hydraulic behaviour of the reactor core was simulated for the steady state and for selected transients involving step changes of reactivity, including control rod withdrawal starting from steady state at various low power levels. For this purpose a PARET input model for the MNSR core was developed, enabling simulation of the neutron kinetics and thermal hydraulics of the reactor core, including reactivity feedback effects. The neutron kinetics model is based on point kinetics with 15 groups of delayed neutrons, including the photoneutrons of the beryllium reflector. The effect of the photoneutrons on the dynamic behaviour was analysed through two additional calculations: in the first, the yield of photoneutrons was neglected completely, and in the second, its share was added to the sixth group of delayed neutrons. In the thermal-hydraulic model the fuel elements with their cooling channels were distributed among 4 groups with different radial power factors. The pressure loss factors for friction, flow direction change, expansion and contraction were estimated using suitable approaches. The calculated relative neutron flux change and core average temperature were found to be consistent with the experimental measurements. Furthermore, the simulation indicated the influence of the photoneutrons of the beryllium reflector on the neutron flux behaviour. To establish the reliability of the results, a sensitivity analysis was carried out to consider the uncertainty in some important parameters, such as the temperature feedback coefficient and the flow velocity. The application of PARET to the simulation of void formation in the subcooled boiling regime was also tested. The calculation indicates the capability of PARET to model this phenomenon; however, a large discrepancy between the calculated results and measurements of the axial void distribution was observed.
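The point-kinetics formulation underlying this kind of model can be sketched with a single delayed-neutron group (the abstract's model uses 15 groups including beryllium photoneutrons; one group keeps the sketch short). All constants below are illustrative, not MNSR data:

```python
# Point-kinetics sketch, one delayed-neutron group, explicit Euler.
# dn/dt = ((rho - beta)/Lambda) * n + lam * c
# dc/dt = (beta/Lambda) * n - lam * c
# Constants are illustrative placeholders, not MNSR parameters.
beta, lam, Lambda = 0.0065, 0.08, 1e-4  # delayed fraction, decay const (1/s), generation time (s)
rho = 0.001                             # step reactivity insertion (< beta, so no prompt criticality)

n = 1.0                                 # relative power, initially at equilibrium
c = beta / (lam * Lambda)               # equilibrium precursor concentration

dt = 1e-5
for _ in range(int(2.0 / dt)):          # integrate 2 s
    dn = ((rho - beta) / Lambda) * n + lam * c
    dc = (beta / Lambda) * n - lam * c
    n, c = n + dt * dn, c + dt * dc

print(f"relative power after 2 s: {n:.2f}")  # rises above 1 for a positive step
```

A production code like PARET adds the remaining delayed groups, reactivity feedback, and the coupled thermal-hydraulic channels on top of this skeleton.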

  1. Modular Modeling System (MMS) code: a versatile power plant analysis package

    International Nuclear Information System (INIS)

    Divakaruni, S.M.; Wong, F.K.L.

    1987-01-01

    The basic version of the Modular Modeling System (MMS-01), a power plant systems analysis computer code jointly developed by the Nuclear Power and the Coal Combustion Systems Divisions of the Electric Power Research Institute (EPRI), was released to the utility power industry in April 1983 at a code release workshop held in Charlotte, North Carolina. Since then, additional modules have been developed to analyze Pressurized Water Reactors (PWRs) and Boiling Water Reactors (BWRs) when the safety systems are activated. Also, a selected number of modules in the MMS-01 library have been modified to allow code users more flexibility in constructing plant-specific systems for analysis. These new PWR and BWR modules constitute the new MMS library, which includes the modifications to the MMS-01 library. An extensive year-and-a-half-long qualification program of this new version of the MMS code at EPRI and contractor sites, backed by further code testing in a user-group environment, is culminating in the MMS-02 code release announcement seminar. At this seminar, the results of the user-group efforts and of the code qualification program will be presented in a series of technical sessions. A total of forty-nine papers will describe the new code features and the code qualification efforts. For the sake of completeness, an overview of the code is presented, covering the history of the code development, a description of the MMS code and its structure, utility engineers' involvement in the MMS-01 and MMS-02 validations, the enhancements made to the code in the last 18 months, and finally a perspective on the code's future in the fossil and nuclear industry.

  2. The modeling of core melting and in-vessel corium relocation in the APRIL code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S.W.; Podowski, M.Z.; Lahey, R.T. [Rensselaer Polytechnic Institute, Troy, NY (United States)] [and others]

    1995-09-01

    This paper is concerned with the modeling of severe accident phenomena in boiling water reactors (BWR). New models of core melting and in-vessel corium debris relocation are presented, developed for implementation in the APRIL computer code. The results of model testing and validations are given, including comparisons against available experimental data and parametric/sensitivity studies. Also, the application of these models, as parts of the APRIL code, is presented to simulate accident progression in a typical BWR reactor.

  3. Extending the Lunar Mapping and Modeling Portal - New Capabilities and New Worlds

    Science.gov (United States)

    Day, B. H.; Law, E.; Arevalo, E.; Bui, B.; Chang, G.; Dodge, K.; Kim, R. M.; Malhotra, S.; Sadaqathullah, S.

    2015-12-01

    NASA's Lunar Mapping and Modeling Portal (LMMP) provides a web-based Portal and a suite of interactive visualization and analysis tools to enable mission planners, lunar scientists, and engineers to access mapped lunar data products from past and current lunar missions (http://lmmp.nasa.gov). During the past year, the capabilities and data served by LMMP have been significantly expanded. New interfaces are providing improved ways to access and visualize data. Many of the recent enhancements to LMMP have been specifically in response to the requirements of NASA's proposed Resource Prospector lunar rover, and as such, provide an excellent example of the application of LMMP to mission planning. At the request of NASA's Science Mission Directorate, LMMP's technology and capabilities are now being extended to additional planetary bodies. New portals for Vesta and Mars are the first of these new products to be released. On March 31, 2015, the LMMP team released Vesta Trek (http://vestatrek.jpl.nasa.gov), a web-based application applying LMMP technology to visualizations of the asteroid Vesta. Data gathered from multiple instruments aboard Dawn have been compiled into Vesta Trek's user-friendly set of tools, enabling users to study the asteroid's features. With an initial release on July 1, 2015, Mars Trek replicates the functionality of Vesta Trek for the surface of Mars. While the entire surface of Mars is covered, higher levels of resolution and greater numbers of data products are provided for special areas of interest. Early releases focus on past, current, and future robotic sites of operation. Future releases will add many new data products and analysis tools as Mars Trek has been selected for use in site selection for the Mars 2020 rover and in identifying potential human landing sites on Mars. Other destinations will follow soon. 
The user community is invited to provide suggestions and requests as the development team continues to expand the capabilities of LMMP

  4. Modelling of Cold Water Hammer with WAHA code

    International Nuclear Information System (INIS)

    Gale, J.; Tiselj, I.

    2003-01-01

    The Cold Water Hammer experiment described in the present paper is a simple facility where overpressure accelerates a column of liquid water into the steam bubble at the closed vertical end of the pipe. Severe water hammer with high pressure peak occurs when the vapor bubble condenses and the liquid column hits the closed end of the pipe. Experimental data of Forschungszentrum Rossendorf are being used to test the newly developed computer code WAHA and the computer code RELAP5. Results show that a small amount of noncondensable air in the steam bubble significantly affects the magnitude of the calculated pressure peak, while the wall friction and condensation rate only slightly affect the simulated phenomena. (author)

  5. Modelling of the Rod Control System in the coupled code RELAP5-QUABOX/CUBBOX

    International Nuclear Information System (INIS)

    Bencik, V.; Feretic, D.; Grgic, D.

    1999-01-01

    There is a general agreement that for many light water reactor transient calculations, it is necessary to use a multidimensional neutron kinetics model coupled to sophisticated thermal-hydraulic models in order to obtain satisfactory results. These calculations are needed for a variety of applications in licensing safety analyses, probabilistic risk assessment, operational support, and training. At FER, Zagreb, a coupling of the 3D neutronics code QUABOX/CUBBOX and the system code RELAP5 was performed. This paper presents the Rod Control System model in the RELAP5 part of the coupled code. The model was first tested by calculating a reactor trip from full power for NPP Krsko. Results of the 3D neutronics calculation obtained with the coupled QUABOX/CUBBOX code were compared with a point kinetics calculation performed with the stand-alone RELAP5 code. (author)

  6. Improved virtual channel noise model for transform domain Wyner-Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2009-01-01

    Distributed video coding (DVC) has been proposed as a new video coding paradigm to deal with lossy source coding using side information, exploiting the statistics at the decoder to reduce computational demands at the encoder. A virtual channel noise model is utilized at the decoder to estimate the noise distribution between the side information frame and the original frame. This is one of the most important aspects influencing the coding performance of DVC. Noise models with different granularity have been proposed. In this paper, an improved noise model for transform domain Wyner-Ziv video coding is proposed, which utilizes cross-band correlation to estimate the Laplacian parameters more accurately. Experimental results show that the proposed noise model can improve the rate-distortion (RD) performance.
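The noise model in question treats the residual between the side-information frame and the original frame as Laplacian distributed, so the basic estimation step is fitting a Laplacian scale parameter to residual samples. A minimal sketch (synthetic residuals stand in for real frame differences; real DVC decoders estimate the parameter per transform band, which this sketch does not do):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic residual between a side-information frame and the original,
# drawn from a Laplacian with known scale so the estimate can be checked.
b_true = 4.0
residual = rng.laplace(loc=0.0, scale=b_true, size=100_000)

# Maximum-likelihood estimate of the Laplacian scale b is the mean
# absolute residual; alpha = 1/b is the common correlation-noise
# parameterization in the DVC literature.
b_hat = np.mean(np.abs(residual))
alpha_hat = 1.0 / b_hat
print(f"estimated b = {b_hat:.2f}, alpha = {alpha_hat:.3f}")
```

The cross-band refinement proposed in the paper would adjust these per-band estimates using correlation between transform bands rather than fitting each band in isolation.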

  7. DESTINY: A Comprehensive Tool with 3D and Multi-Level Cell Memory Modeling Capability

    Directory of Open Access Journals (Sweden)

    Sparsh Mittal

    2017-09-01

    To enable the design of large capacity memory structures, novel memory technologies such as non-volatile memory (NVM) and novel fabrication approaches, e.g., 3D stacking and multi-level cell (MLC) design, have been explored. The existing modeling tools, however, cover only a few memory technologies, technology nodes and fabrication approaches. We present DESTINY, a tool for modeling 2D/3D memories designed using SRAM, resistive RAM (ReRAM), spin transfer torque RAM (STT-RAM), phase change RAM (PCM) and embedded DRAM (eDRAM), and 2D memories designed using spin orbit torque RAM (SOT-RAM), domain wall memory (DWM) and Flash memory. In addition to single-level cell (SLC) designs for all of these memories, DESTINY also supports modeling MLC designs for NVMs. We have extensively validated DESTINY against commercial and research prototypes of these memories. DESTINY is very useful for performing design-space exploration across several dimensions, such as optimizing for a target (e.g., latency, area or energy-delay product) for a given memory technology, choosing the suitable memory technology or fabrication method (i.e., 2D vs. 3D) for a given optimization target, etc. We believe that DESTINY will boost studies of next-generation memory architectures used in systems ranging from mobile devices to extreme-scale supercomputers. The latest source code of DESTINY is available from the following git repository: https://bitbucket.org/sparshmittal/destinyv2.

  8. Assessment of the Effects on PCT Evaluation of Enhanced Fuel Model Facilitated by Coupling the MARS Code with the FRAPTRAN Code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyong Chol; Lee, Young Jin; Han, Sam Hee [NSE Technology Inc., Daejeon (Korea, Republic of)

    2016-10-15

    The principal objectives of the two safety criteria, peak cladding temperature (PCT) and total oxidation limits, are to ensure that the fuel rod claddings remain sufficiently ductile so that they do not crack and fragment during a LOCA. Another important purpose of the PCT limit is to ensure that the fuel cladding does not enter the regime of runaway oxidation and uncontrollable heat-up. However, even when the PCT limit is satisfied, it is known that cladding failures may still occur in a certain percentage of the fuel rods during a LOCA. This is largely why a 100% fuel failure is assumed for the radiological consequence analysis in US regulatory practice. In this study, we analyze the effects of cladding failure and other fuel model features on PCT during a LOCA using the MARS-FRAPTRAN coupled code. The MARS code has been coupled with the FRAPTRAN code to extend its fuel modeling capability; the coupling allows feedback of FRAPTRAN results in real time. Because of the significant impact of fuel models on key safety parameters such as PCT, detailed and accurate fuel models should be employed when evaluating PCT in LOCA analysis. It is noteworthy that the ECCS evaluation models laid out in Appendix K to 10CFR50 require a provision for predicting cladding swelling and rupture and require the assumption that the inside of the cladding reacts with steam after the rupture. The metal-water reaction energy can have a significantly large effect on the reflood PCT, especially when fuel failure occurs. The effects of applying an advanced fuel model on the PCT evaluation can be clearly seen when comparing the MARS and the FRAPTRAN results in both the one-way calculation and the feedback calculation. As long as MARS and FRAPTRAN are used in the ranges where they have been validated, the coupled calculation results are expected to be valid and to reveal various aspects of phenomena which have not been discovered in previous uncoupled calculations by MARS or FRAPTRAN.

  9. Overview of the development of a biosphere modelling capability for UK DoE (HMIP)

    International Nuclear Information System (INIS)

    Nancarrow, D.J.; Ashton, J.; Little, R.H.

    1990-01-01

    A programme of research has been funded, since 1982, by the United Kingdom Department of the Environment (Her Majesty's Inspectorate of Pollution, HMIP), to develop a procedure for post-closure radiological assessment of underground disposal facilities for low and intermediate level radioactive wastes. It is conventional to regard the disposal system as comprising the engineered barriers of the repository, the geological setting which provides natural barriers to migration, and the surface environment or biosphere. The requirement of a biosphere submodel, therefore, is to provide estimates, for given radionuclide inputs, of the dose or probability distribution function of dose to a maximally exposed individual as a function of time. This paper describes the development of the capability for biosphere modelling for HMIP in the context of the development of other assessment procedures. 11 refs., 3 figs., 2 tabs

  10. New Modelling Capabilities in Commercial Software for High-Gain Antennas

    DEFF Research Database (Denmark)

    Jørgensen, Erik; Lumholt, Michael; Meincke, Peter

    2012-01-01

    characterization of the reflectarray element, an initial phase-only synthesis, followed by a full optimization procedure taking into account the near-field from the feed and the finite extent of the array. Another interesting new modelling capability is made available through the DIATOOL software, which is a new type of EM software tool aimed at extending the ways engineers can use antenna measurements in the antenna design process. The tool allows reconstruction of currents and near fields on a 3D surface conformal to the antenna, by using the measured antenna field as input. The currents on the antenna surface can provide valuable information about the antenna performance, or undesired contributions, e.g. currents on a cable, can be artificially removed. Finally, the CHAMP software will be extended to cover reflector shaping and more complex materials, which combined with a much faster execution speed...

  11. A vectorized Monte Carlo code for modeling photon transport in SPECT

    International Nuclear Information System (INIS)

    Smith, M.F.; Floyd, C.E. Jr.; Jaszczak, R.J.

    1993-01-01

    A vectorized Monte Carlo computer code has been developed for modeling photon transport in single photon emission computed tomography (SPECT). The code models photon transport in a uniform attenuating region and photon detection by a gamma camera. It is adapted from a history-based Monte Carlo code in which photon history data are stored in scalar variables and photon histories are computed sequentially. The vectorized code is written in FORTRAN77 and uses an event-based algorithm in which photon history data are stored in arrays and photon history computations are performed within DO loops. The indices of the DO loops range over the number of photon histories, and these loops may take advantage of the vector processing unit of our Stellar GS1000 computer for pipelined computations. Without the use of the vector processor the event-based code is faster than the history-based code because of numerical optimization performed during conversion to the event-based algorithm. When only the detection of unscattered photons is modeled, the event-based code executes 5.1 times faster with the use of the vector processor than without; when the detection of scattered and unscattered photons is modeled the speed increase is a factor of 2.9. Vectorization is a valuable way to increase the performance of Monte Carlo code for modeling photon transport in SPECT
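The history-based versus event-based distinction the abstract draws can be shown with a toy transport problem: sampling photon free paths in a uniform attenuating slab. The scalar loop mirrors the history-based style (one photon at a time, scalar variables); the array version mirrors the event-based style (all histories stored in arrays, processed at once). Coefficients are illustrative; the real code also models gamma-camera detection, which is omitted here:

```python
import numpy as np

mu = 0.15          # linear attenuation coefficient (1/cm), illustrative
thickness = 10.0   # slab thickness (cm)
n_photons = 20_000
rng = np.random.default_rng(7)
u = rng.random(n_photons)

# History-based style: photon histories computed sequentially,
# history data held in scalar variables.
transmitted_scalar = 0
for ui in u:
    depth = -np.log(ui) / mu          # sampled free path (exponential)
    if depth > thickness:
        transmitted_scalar += 1

# Event-based (vectorized) style: history data stored in arrays and the
# whole ensemble processed in one pass, the pattern the abstract's
# DO loops over photon histories express.
depths = -np.log(u) / mu
transmitted_vec = int(np.count_nonzero(depths > thickness))

print(transmitted_scalar, transmitted_vec)  # identical counts
```

Both styles give identical results from the same random stream; the speedups quoted in the abstract come purely from how the computation is organized for the vector hardware.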

  12. Evaluation of the analysis models in the ASTRA nuclear design code system

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Nam Jin; Park, Chang Jea; Kim, Do Sam; Lee, Kyeong Taek; Kim, Jong Woon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    2000-11-15

    In the field of nuclear reactor design, the main practice has been the application of improved design code systems. In the process, a substantial base of experience and knowledge was accumulated in processing input data, nuclear fuel reload design, and the production and analysis of design data; however, less effort was devoted to analyzing the methodology or to developing and improving those code systems. Recently, KEPCO Nuclear Fuel Company (KNFC) developed the ASTRA (Advanced Static and Transient Reactor Analyzer) code system for nuclear reactor design and analysis. In this code system, two-group constants are generated by the CASMO-3 code system. The objective of this research is to analyze the analysis models used in the ASTRA/CASMO-3 code system. This evaluation requires an in-depth comprehension of the models, which is as important as the development of the code system itself. Currently, most of the code systems used in domestic nuclear power plants are imported, which makes them very difficult to maintain and to adapt as circumstances change. Therefore, the evaluation of the analysis models in the ASTRA nuclear reactor design code system is very important.
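    To make concrete what "two-group constants" feed into, the sketch below evaluates the standard two-group infinite-medium multiplication factor from homogenized constants of the kind a lattice code such as CASMO-3 passes to a nodal code. All cross-section values are illustrative placeholders, not actual CASMO-3 output, and this is not the ASTRA solver itself.

```python
# Two-group infinite-medium multiplication factor from homogenized
# two-group constants. All values below are illustrative, not CASMO data.

nu_sigf1 = 0.008   # nu * fission cross section, fast group (1/cm)
nu_sigf2 = 0.130   # nu * fission cross section, thermal group (1/cm)
siga1    = 0.010   # absorption, fast group (1/cm)
siga2    = 0.080   # absorption, thermal group (1/cm)
sig12    = 0.018   # fast-to-thermal downscatter (1/cm)

# Fast-group removal = absorption + downscatter; the thermal flux per unit
# fast flux follows from the thermal-group balance sig12*phi1 = siga2*phi2.
removal1 = siga1 + sig12
phi2_over_phi1 = sig12 / siga2

k_inf = (nu_sigf1 + nu_sigf2 * phi2_over_phi1) / removal1
print(f"k_inf = {k_inf:.4f}")
```

    A nodal code like ASTRA solves the coupled two-group diffusion equations node by node with such constants; the infinite-medium balance above is the zero-leakage limit of that problem.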

  13. Nodal kinetics model upgrade in the Penn State coupled TRAC/NEM codes

    International Nuclear Information System (INIS)

    Beam, Tara M.; Ivanov, Kostadin N.; Baratta, Anthony J.; Finnemann, Herbert

    1999-01-01

    The Pennsylvania State University currently maintains, and performs development and verification work for, its own versions of the coupled three-dimensional kinetics/thermal-hydraulics codes TRAC-PF1/NEM and TRAC-BF1/NEM. The subject of this paper is the nodal model enhancements in these codes. Because of the numerous validation studies that have been performed on almost every aspect of these codes, the upgrade is carried out without a major code rewrite. It consists of four steps. The first two steps improve the accuracy of the kinetics model, which is based on the nodal expansion method: the polynomial expansion solution of the 1D transverse-integrated diffusion equations is replaced with a solution that uses a semi-analytic expansion, and the standard parabolic polynomial representation of the transverse leakage in these 1D equations is replaced with an improved approximation. The last two steps address code efficiency by improving the solution of the time-dependent NEM equations and by implementing a multi-grid solver. These four improvements were first implemented in the standalone NEM kinetics code, which was verified against the original verification studies; the results show that the new methods improve both the accuracy and the efficiency of the code. Verification of the upgraded NEM model in the coupled TRAC-PF1/NEM and TRAC-BF1/NEM codes is underway.
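    The multi-grid idea mentioned in the efficiency upgrade can be sketched on a much simpler model problem than the NEM equations: a two-grid correction cycle for the 1D Poisson equation -u'' = f with zero Dirichlet boundaries. This is a generic illustration of the technique (smooth on the fine grid, solve the residual equation on a coarser grid, correct, smooth again), not the solver used in TRAC/NEM.

```python
import numpy as np

def jacobi(u, f, h, iters, omega=2.0 / 3.0):
    """Weighted-Jacobi smoothing for -u'' = f with zero Dirichlet BCs."""
    for _ in range(iters):
        interior = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        u = u.copy()
        u[1:-1] = (1 - omega) * u[1:-1] + omega * interior
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid_cycle(u, f, h, nu=3):
    """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = jacobi(u, f, h, nu)                       # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros(len(r) // 2 + 1)                # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    nc, hc = len(rc) - 1, 2 * h
    A = (2 * np.eye(nc - 1) - np.eye(nc - 1, k=1) - np.eye(nc - 1, k=-1)) / hc**2
    ec = np.zeros(nc + 1)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])       # exact coarse-grid solve
    e = np.zeros_like(u)                          # linear prolongation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return jacobi(u + e, f, h, nu)                # post-smoothing

n = 64
x = np.linspace(0.0, 1.0, n + 1)
h = 1.0 / n
f = np.pi**2 * np.sin(np.pi * x)  # manufactured problem, exact u = sin(pi*x)
u = np.zeros(n + 1)
for _ in range(10):
    u = two_grid_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))
```

    The attraction, as in the NEM upgrade, is that the smoother cheaply damps high-frequency error on the fine grid while the coarse-grid solve removes the smooth error components that the smoother handles poorly.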

  14. Research Capabilities Directed to all Electric Engineering Teachers, from an Alternative Energy Model

    Directory of Open Access Journals (Sweden)

    Víctor Hugo Ordóñez Navea

    2017-08-01

    Full Text Available The purpose of this work was to examine research capabilities directed at electrical engineering teachers, based on an alternative-energy model for explaining semiconductors in the National Training Program in Electricity. Some authors, such as Vidal (2016), Atencio (2014), and Camilo (2012), point to technological applications of semiconductor electrical devices. Accordingly, a diagnostic phase is presented, based on descriptive field research, addressing: a) how to identify the need for alternative energies, and b) the alternative-energy research competences of researchers working from a solar-cell model, in order to stimulate and innovate academic practice and technological ingenuity. A survey was administered to a group of 15 teachers in the National Training Program in Electricity to diagnose deficiencies in the area of alternative-energy research, and the data were analyzed using descriptive statistics. The conclusions point to the need to generate strategies that stimulate the exploration of alternative energies and develop the research competences of electrical engineering teachers, based on an alternative-energy model, and to boost technological research in the renewable-energies field.

  15. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    Science.gov (United States)

    Galelli, S.; Castelletti, A.

    2013-07-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this p