WorldWideScience

Sample records for modeling codes capable

  1. System Code Models and Capabilities

    International Nuclear Information System (INIS)

    Bestion, D.

    2008-01-01

System thermalhydraulic codes such as RELAP, TRACE, CATHARE or ATHLET are now commonly used for reactor transient simulations. The whole methodology of code development is described, including the derivation of the system of equations, the analysis of experimental data to obtain closure relations, and the validation process. The characteristics of the models are briefly presented, starting with the basic assumptions, the system of equations and the derivation of closure relationships. Extensive work was devoted during the last three decades to the improvement and validation of these models, which resulted in some homogenisation of the different codes although they were separately developed. The so-called two-fluid model is the common basis of these codes, and it is shown how it can describe both thermal and mechanical nonequilibrium. A review of some important physical models illustrates the main capabilities and limitations of system codes. Attention is drawn to the role of flow regime maps, to the various methods for developing closure laws, and to the role of interfacial area and turbulence in interfacial and wall transfers. More details are given for interfacial friction laws and their relation with drift flux models. Prediction of choked flow and CCFL is also addressed. Based on some limitations of the present generation of codes, perspectives for the future are drawn.
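The drift-flux relation mentioned above links void fraction to the phase superficial velocities. A minimal sketch of the Zuber-Findlay form follows; the distribution parameter and drift velocity values are illustrative round numbers, not correlations from any of the codes named in the abstract:

```python
def void_fraction(j_g, j_f, c0=1.2, v_gj=0.25):
    """Zuber-Findlay drift-flux relation: alpha = j_g / (C0*(j_g + j_f) + Vgj).

    j_g, j_f: gas and liquid superficial velocities (m/s).
    c0 (distribution parameter) and v_gj (drift velocity, m/s) are
    illustrative defaults; system codes select them per flow regime.
    """
    return j_g / (c0 * (j_g + j_f) + v_gj)
```

In practice, the interfacial friction laws discussed in the abstract play the role that c0 and v_gj play here: they determine how strongly the phases are mechanically coupled in each flow regime.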

  2. Fuel analysis code FAIR and its high burnup modelling capabilities

    International Nuclear Information System (INIS)

    Prasad, P.S.; Dutta, B.K.; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.

    1995-01-01

A computer code FAIR has been developed for analysing the performance of water-cooled reactor fuel pins. It is capable of analysing high-burnup fuels. This code has recently been used for analysing ten high-burnup fuel rods irradiated at the Halden reactor. In the present paper, the code FAIR and its various high-burnup models are described. The performance of code FAIR in analysing high-burnup fuels and its other applications are highlighted. (author). 21 refs., 12 figs

  3. Assessing the LWR codes capability to address SFR BDBAs: Modeling of the ABCOVE tests

    International Nuclear Information System (INIS)

    Garcia, M.; Herranz, L. E.

    2012-01-01

The present paper is aimed at assessing the current capability of LWR codes to model aerosol transport within a SFR containment under BDBA conditions. Through a systematic application of the ASTEC and MELCOR codes to relevant ABCOVE tests, insights have been gained into drawbacks and capabilities of these computation tools. Hypotheses and approximations have been adopted so that differences in boundary conditions between LWR and SFR containments under BDBA can be accommodated to some extent.

  4. Present capabilities and new developments in antenna modeling with the numerical electromagnetics code NEC

    Energy Technology Data Exchange (ETDEWEB)

    Burke, G.J.

    1988-04-08

    Computer modeling of antennas, since its start in the late 1960's, has become a powerful and widely used tool for antenna design. Computer codes have been developed based on the Method-of-Moments, Geometrical Theory of Diffraction, or integration of Maxwell's equations. Of such tools, the Numerical Electromagnetics Code-Method of Moments (NEC) has become one of the most widely used codes for modeling resonant sized antennas. There are several reasons for this including the systematic updating and extension of its capabilities, extensive user-oriented documentation and accessibility of its developers for user assistance. The result is that there are estimated to be several hundred users of various versions of NEC world wide. 23 refs., 10 figs.

  5. Benchmarking LWR codes capability to model radionuclide deposition within SFR containments: An analysis of the Na ABCOVE tests

    International Nuclear Information System (INIS)

    Herranz, Luis E.; Garcia, Monica; Morandi, Sonia

    2013-01-01

Highlights: • Assessment of LWR codes capability to model aerosol deposition within SFR containments. • Original hypotheses proposed to partially accommodate drawbacks from Na oxidation reactions. • A defined methodology to derive a more accurate characterization of Na-based particles. • Key missing models in LWR codes for SFR applications are identified. - Abstract: Postulated BDBAs in SFRs might result in contaminated-coolant discharge at high temperature into the containment. A full scope safety analysis of this reactor type requires computation tools properly validated in all the related fields. Radionuclide deposition, particularly within the containment, is one of those fields. This sets two major challenges: to have reliable codes available and to build up a sound data base. Development of SFR source term codes was abandoned in the 80's and few data are available at present. The ABCOVE experimental programme conducted in the 80's is still a reference in the field. The present paper is aimed at assessing the current capability of LWR codes to model aerosol deposition within a SFR containment under BDBA conditions. Through a systematic application of the ASTEC, ECART and MELCOR codes to relevant ABCOVE tests, insights have been gained into drawbacks and capabilities of these computation tools. Hypotheses and approximations have been adopted so that

  6. Development of covariance capabilities in EMPIRE code

    Energy Technology Data Exchange (ETDEWEB)

    Herman,M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.

    2008-06-24

The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least-squares fitting procedures are employed to incorporate experimental information. We compare the two approaches by analyzing results for the major reaction channels on {sup 89}Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
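The stochastic (Monte Carlo) propagation of model parameter uncertainties described above can be sketched as follows. The two-parameter "cross section" model, the parameter means, and their covariance are purely illustrative stand-ins, not EMPIRE's physics:

```python
import numpy as np

def mc_covariance(model, p_mean, p_cov, n_samples=5000, seed=0):
    """Propagate model-parameter uncertainties to an output covariance
    matrix by sampling parameters and evaluating the model."""
    rng = np.random.default_rng(seed)
    params = rng.multivariate_normal(p_mean, p_cov, size=n_samples)
    outputs = np.array([model(p) for p in params])
    return np.cov(outputs, rowvar=False)

# Toy "cross section" model: sigma(E) = a * exp(-b * E) on a small energy grid.
energies = np.array([0.1, 1.0, 5.0])

def toy_model(p):
    a, b = p
    return a * np.exp(-b * energies)

cov = mc_covariance(toy_model, p_mean=[10.0, 0.3],
                    p_cov=[[0.04, 0.0], [0.0, 1e-4]])
```

The deterministic (Kalman) alternative mentioned in the abstract would instead linearize the model around p_mean and propagate p_cov through the sensitivity matrix.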

  7. Group Capability Model

    Science.gov (United States)

    Olejarski, Michael; Appleton, Amy; Deltorchio, Stephen

    2009-01-01

The Group Capability Model (GCM) is a software tool that allows an organization, from first line management to senior executive, to monitor and track the health (capability) of various groups in performing their contractual obligations. GCM calculates a Group Capability Index (GCI) by comparing actual head counts, certifications, and/or skills within a group. The model can also be used to simulate the effects of employee usage, training, and attrition on the GCI. A universal tool and common method was required due to the high risk of losing skills necessary to complete the Space Shuttle Program and meet the needs of the Constellation Program. During this transition from one space vehicle to another, the uncertainty among the critical skilled workforce is high and attrition has the potential to be unmanageable. GCM allows managers to establish requirements for their group in the form of head counts, certification requirements, or skills requirements. GCM then calculates a Group Capability Index (GCI), where a score of 1 indicates that the group is at the appropriate level; anything less than 1 indicates a potential for improvement. This shows the health of a group, both currently and over time. GCM accepts as input head count, certification needs, critical needs, competency needs, and competency critical needs. In addition, team members are categorized by years of experience, percentage of contribution, ex-members and their skills, availability, function, and in-work requirements. Outputs are several reports, including actual vs. required head count, actual vs. required certificates, GCI change over time (by month), and more. The program stores historical data for summary and historical reporting, which is done via an Excel spreadsheet that is color-coded to show health statistics at a glance. GCM has provided the Shuttle Ground Processing team with a quantifiable, repeatable approach to assessing and managing the skills in their organization. They now have a common
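The abstract specifies only that a GCI of 1 means the group is at the appropriate level and anything less flags room for improvement. One plausible aggregation consistent with that description, offered purely as a hypothetical reading of how such an index could be computed:

```python
def group_capability_index(actual, required):
    """Hypothetical GCI: per-requirement ratio of actual to required
    (head counts, certifications, skills), capped at 1, then averaged.
    The real GCM formula is not given in the abstract; this is a sketch."""
    ratios = [min(a / r, 1.0) for a, r in zip(actual, required) if r > 0]
    return sum(ratios) / len(ratios)
```

For example, a group with 10 of 10 required heads but only 5 of 10 required certifications would score 0.75, flagging a potential shortfall.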

  8. New capabilities of the lattice code WIMS-AECL

    International Nuclear Information System (INIS)

    Altiparmakov, Dimitar

    2008-01-01

The lattice code WIMS-AECL has been restructured and rewritten in Fortran 95 in order to increase the accuracy of its responses and extend its capabilities. Significant changes to computing algorithms have been made in two areas: geometric calculations and resonance self-shielding. Among the various geometry enhancements, the code is no longer restricted to single-lattice-cell problems. The multi-cell capability allows modelling of various lattice structures such as checkerboard lattices, a de-fuelled channel, and core-reflector interface problems. The new resonance method performs distributed resonance self-shielding, including the skin effect. This paper describes the main code changes and presents selected code verification results. (authors)

  9. Demonstration of capabilities of high temperature composites analyzer code HITCAN

    Science.gov (United States)

    Singhal, Surendra N.; Lackney, Joseph J.; Chamis, Christos C.; Murthy, Pappu L. N.

    1990-01-01

The capabilities of a high temperature composites analyzer code, HITCAN, which predicts the global structural and local stress-strain response of multilayered metal matrix composite structures, are demonstrated. The response can be determined both at the constituent (fiber, matrix, and interphase) level and at the structure level, and includes fabrication process effects. The thermo-mechanical properties of the constituents are considered to be nonlinearly dependent on several parameters, including temperature, stress, and stress rate. The computational procedure employs an incremental iterative nonlinear approach utilizing a multifactor-interactive constituent material behavior model. Various features of the code are demonstrated through example problems for typical structures.
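The "incremental iterative nonlinear approach" named above is, in outline, a load-stepping scheme with Newton iterations at each load level. A scalar sketch under a toy nonlinear law follows; the function names, tolerances, and the cubic "material law" are illustrative assumptions, not HITCAN's formulation:

```python
def incremental_newton(residual, jacobian, u0, load_steps, tol=1e-10, max_iter=25):
    """Ramp the load in increments; at each load level, Newton-iterate
    the nonlinear residual to equilibrium before advancing."""
    u = u0
    for lam in load_steps:
        for _ in range(max_iter):
            r = residual(u, lam)
            if abs(r) < tol:
                break
            u -= r / jacobian(u, lam)
    return u

# Toy nonlinear law: find u with u**3 = lam, ramping lam up to 8 (root: u = 2).
u_final = incremental_newton(lambda u, lam: u**3 - lam,
                             lambda u, lam: 3.0 * u**2,
                             u0=1.0, load_steps=[1.0, 2.0, 4.0, 8.0])
```

Stepping the load keeps each Newton solve close to its starting guess, which is why codes of this kind converge on strongly nonlinear (e.g. temperature- and stress-dependent) material models.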

  10. User Instructions for the Systems Assessment Capability, Rev. 1, Computer Codes Volume 3: Utility Codes

    Energy Technology Data Exchange (ETDEWEB)

    Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.; Miley, Terri B.; Nichols, William E.; Strenge, Dennis L.

    2004-09-14

This document contains detailed user instructions for a suite of utility codes developed for Rev. 1 of the Systems Assessment Capability. This suite of computer codes performs many functions in support of Rev. 1 of the Systems Assessment Capability.

  11. Model Children's Code.

    Science.gov (United States)

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  12. Model and code development

    International Nuclear Information System (INIS)

    Anon.

    1977-01-01

Progress in model and code development for reactor physics calculations is summarized. The codes included CINDER-10, PHROG, RAFFLE, GAPP, DCFMR, RELAP/4, PARET, and KENO. Kinetics models for the PBF were developed.

  13. Business models and dynamic capabilities

    OpenAIRE

    Teece, DJ

    2017-01-01

© 2017 The Author. Business models, dynamic capabilities, and strategy are interdependent. The strength of a firm's dynamic capabilities helps shape its proficiency at business model design. Through its effect on organization design, a business model influences the firm's dynamic capabilities and places bounds on the feasibility of particular strategies. While these relationships are understood at a theoretical level, there is a need for future empirical work to flesh out the details. In parti...

  14. Reactivity Insertion Accident (RIA) Capability Status in the BISON Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, Richard L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Folsom, Charles Pearson [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pastore, Giovanni [Idaho National Lab. (INL), Idaho Falls, ID (United States); Veeraraghavan, Swetha [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-05-01

One of the Challenge Problems being considered within CASL relates to modelling and simulation of Light Water Reactor (LWR) fuel under Reactivity Insertion Accident (RIA) conditions. BISON is the fuel performance code used within CASL for LWR fuel under both normal operating and accident conditions, and thus must be capable of addressing the RIA challenge problem. This report outlines the required BISON capabilities for RIAs and describes the current status of the code. Information on recent accident capability enhancements, application of BISON to a RIA benchmark exercise, and plans for validation against RIA behavior is included.

  15. Code Verification Capabilities and Assessments in Support of ASC V&V Level 2 Milestone #6035

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Budzien, Joanne Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ferguson, Jim Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Harwell, Megan Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hickmann, Kyle Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Israel, Daniel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Magrogan, William Richard III [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Singleton, Jr., Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Srinivasan, Gowri [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Walter, Jr, John William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Woods, Charles Nathan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-26

    This document provides a summary of the code verification activities supporting the FY17 Level 2 V&V milestone entitled “Deliver a Capability for V&V Assessments of Code Implementations of Physics Models and Numerical Algorithms in Support of Future Predictive Capability Framework Pegposts.” The physics validation activities supporting this milestone are documented separately. The objectives of this portion of the milestone are: 1) Develop software tools to support code verification analysis; 2) Document standard definitions of code verification test problems; and 3) Perform code verification assessments (focusing on error behavior of algorithms). This report and a set of additional standalone documents serve as the compilation of results demonstrating accomplishment of these objectives.

  16. Monte Carlo capabilities of the SCALE code system

    International Nuclear Information System (INIS)

    Rearden, B.T.; Petrie, L.M.; Peplow, D.E.; Bekar, K.B.; Wiarda, D.; Celik, C.; Perfetti, C.M.; Ibrahim, A.M.; Hart, S.W.D.; Dunn, M.E.; Marshall, W.J.

    2015-01-01

Highlights: • Foundational Monte Carlo capabilities of SCALE are described. • Improvements in continuous-energy treatments are detailed. • New methods for problem-dependent temperature corrections are described. • New methods for sensitivity analysis and depletion are described. • Nuclear data, user interfaces, and quality assurance activities are summarized. - Abstract: SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  17. System Reliability Analysis Capability and Surrogate Model Application in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Huang, Dongli [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gleicher, Frederick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Bei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Abdel-Khalik, Hany S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pascucci, Valerio [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-11-01

This report collects the efforts performed to improve the reliability analysis capabilities of the RAVEN code and explores new opportunities in the usage of surrogate models, by extending the current RAVEN capabilities to multi-physics surrogate models and the construction of surrogate models for high-dimensionality fields.

  18. Monte Carlo capabilities of the SCALE code system

    International Nuclear Information System (INIS)

    Rearden, B.T.; Petrie, L.M.; Peplow, D.E.; Bekar, K.B.; Wiarda, D.; Celik, C.; Perfetti, C.M.; Ibrahim, A.M.; Dunn, M.E.; Hart, S.W.D.

    2013-01-01

    SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a 'plug-and-play' framework that includes three deterministic and three Monte Carlo radiation transport solvers (KENO, MAVRIC, TSUNAMI) that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2, to be released in 2014, will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2. (authors)

  19. Development of an integrated thermal-hydraulics capability incorporating RELAP5 and PANTHER neutronics code

    Energy Technology Data Exchange (ETDEWEB)

    Page, R.; Jones, J.R.

    1997-07-01

Ensuring that safety analysis needs are met in the future is likely to lead to the development of new codes and the further development of existing codes. It is therefore advantageous to define standards for data interfaces and to develop software interfacing techniques which can readily accommodate changes when they are made. Defining interface standards is beneficial but is necessarily restricted in application if future requirements are not known in detail. Code interfacing methods are of particular relevance with the move towards automatic grid frequency response operation where the integration of plant dynamic, core follow and fault study calculation tools is considered advantageous. This paper describes the background and features of a new code TALINK (Transient Analysis code LINKage program) used to provide a flexible interface to link the RELAP5 thermal hydraulics code with the PANTHER neutron kinetics and the SIBDYM whole plant dynamic modelling codes used by Nuclear Electric. The complete package enables the codes to be executed in parallel and provides an integrated whole plant thermal-hydraulics and neutron kinetics model. In addition the paper discusses the capabilities and pedigree of the component codes used to form the integrated transient analysis package and the details of the calculation of a postulated Sizewell B Loss of offsite power fault transient.
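The kind of lock-step coupling an interface layer like TALINK provides, where the thermal-hydraulics and neutron-kinetics solvers run in parallel and exchange fields each time step, can be sketched schematically. The toy solvers and feedback coefficient below are invented stand-ins, not the RELAP5 or PANTHER models:

```python
def coupled_transient(th_step, nk_step, power0, fuel_temp0, dt, n_steps):
    """Lock-step explicit coupling: each dt, the thermal-hydraulics step
    advances with the current power, then the kinetics step advances with
    the updated fuel temperature (Doppler-style feedback)."""
    power, fuel_temp = power0, fuel_temp0
    history = []
    for _ in range(n_steps):
        fuel_temp = th_step(fuel_temp, power, dt)
        power = nk_step(power, fuel_temp, dt)
        history.append((power, fuel_temp))
    return history

# Toy stand-in solvers: linear heat-up with cooling, and negative
# temperature feedback on power (coefficients are illustrative).
hist = coupled_transient(
    th_step=lambda T, P, dt: T + dt * (P - 0.05 * T),
    nk_step=lambda P, T, dt: P * (1.0 - 1e-4 * (T - 300.0) * dt),
    power0=100.0, fuel_temp0=300.0, dt=0.1, n_steps=50)
```

In the real package the "exchange" is a data-interface transaction between separately executing codes rather than two Python callables, but the ordering of the per-step handshake is the same idea.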

  20. Numerical modeling capabilities to predict repository performance

    International Nuclear Information System (INIS)

    1979-09-01

This report presents a summary of current numerical modeling capabilities that are applicable to the design and performance evaluation of underground repositories for the storage of nuclear waste. The report includes codes that are available in-house, within Golder Associates and Lawrence Livermore Laboratories, as well as those that are generally available within the industry and universities. The first listing of programs comprises in-house codes in the subject areas of hydrology, solute transport, thermal and mechanical stress analysis, and structural geology. The second listing of programs is divided by subject into the following categories: site selection, structural geology, mine structural design, mine ventilation, hydrology, and mine design/construction/operation. These programs are not specifically designed for use in the design and evaluation of an underground repository for nuclear waste, but several or most of them may be so used.

  1. Alquimia: Exposing mature biogeochemistry capabilities for easier benchmarking and development of next-generation subsurface codes

    Science.gov (United States)

    Johnson, J. N.; Molins, S.

    2015-12-01

    The complexity of subsurface models is increasing in order to address pressing scientific questions in hydrology and climate science. In particular, models that attempt to explore the coupling between microbial metabolic activity and hydrology at larger scales need an accurate representation of their underlying biogeochemical systems. These systems tend to be very complicated, and they result in large nonlinear systems that have to be coupled with flow and transport algorithms in reactive transport codes. The complexity inherent in implementing a robust treatment of biogeochemistry is a significant obstacle in the development of new codes. Alquimia is an open-source software library intended to help developers of these codes overcome this obstacle by exposing tried-and-true biogeochemical capabilities in existing software. It provides an interface through which a reactive transport code can access and evolve a chemical system, using one of several supported geochemical "engines." We will describe Alquimia's current capabilities, and how they can be used for benchmarking reactive transport codes. We will also discuss upcoming features that will facilitate the coupling of biogeochemistry to other processes in new codes.
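The engine interface idea described above (transport handled by the host code, chemistry delegated through a narrow API) can be illustrated with an operator-splitting toy. Alquimia itself is a compiled library with C/Fortran interfaces; the class and function names here are invented purely for illustration:

```python
class FirstOrderDecayEngine:
    """Stand-in for a geochemical 'engine': the transport code hands over
    a cell's chemical state and a time step, and the engine reacts it."""
    def __init__(self, rate):
        self.rate = rate

    def react(self, concentration, dt):
        # Explicit first-order decay as the simplest possible "chemistry".
        return concentration * (1.0 - self.rate * dt)

def operator_split_step(cells, engine, dt, inflow):
    """One coupled step: upwind advection (at CFL = 1 for simplicity),
    then a per-cell call into the chemistry engine."""
    advected = [inflow] + cells[:-1]
    return [engine.react(c, dt) for c in advected]

engine = FirstOrderDecayEngine(rate=0.1)
cells = [1.0, 0.5, 0.0]
cells = operator_split_step(cells, engine, dt=1.0, inflow=2.0)
```

The point of the narrow `react`-style boundary is the one the abstract makes: a new transport code can swap in a mature geochemical engine without reimplementing the nonlinear chemistry itself.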

  2. Towards a national cybersecurity capability development model

    CSIR Research Space (South Africa)

    Jacobs, Pierre C

    2017-06-01

One capability, the incident management cybersecurity capability, is selected to illustrate the application of the national cybersecurity capability development model. This model was developed as part of previous research, and is called the Embryonic Cyberdefence Monitoring...

  3. An Enhanced GINGER Simulation Code with Harmonic Emission and HDF5 IO Capabilities

    International Nuclear Information System (INIS)

    Fawley, William M.

    2006-01-01

GINGER [1] is an axisymmetric, polychromatic (r-z-t) FEL simulation code originally developed in the mid-1980s to model the performance of single-pass amplifiers. Over the past 15 years GINGER's capabilities have been extended to include more complicated configurations such as undulators with drift spaces, dispersive sections, and vacuum chamber wakefield effects; multi-pass oscillators; and multi-stage harmonic cascades. Its code base has been tuned to permit running effectively on platforms ranging from desktop PCs to massively parallel processors such as the IBM-SP. Recently, we have made significant changes to GINGER by replacing the original predictor-corrector field solver with a new direct implicit algorithm, adding harmonic emission capability, and switching to the HDF5 IO library [2] for output diagnostics. In this paper, we discuss some details regarding these changes and also present simulation results for LCLS SASE emission at λ = 0.15 nm and higher harmonics.

  4. Studies on DANESS Code Modeling

    International Nuclear Information System (INIS)

    Jeong, Chang Joon

    2009-09-01

The DANESS code modeling study has been performed. The DANESS code is widely used in dynamic fuel cycle analysis. The Korea Atomic Energy Research Institute (KAERI) has used the DANESS code for the Korean national nuclear fuel cycle scenario analysis. In this report, the important models, such as the Energy-demand Scenario Model, New Reactor Capacity Decision Model, Reactor and Fuel Cycle Facility History Model, and Fuel Cycle Model, are investigated. Some models in the interface module are refined and inserted for the Korean nuclear fuel cycle model. Some application studies have also been performed for GNEP cases and for US fast reactor scenarios with various conversion ratios.

  5. NGNP Component Test Capability Design Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    S.L. Austad; D.S. Ferguson; L.E. Guillen; C.W. McKnight; P.J. Petersen

    2009-09-01

    The Next Generation Nuclear Plant Project is conducting a trade study to select a preferred approach for establishing a capability whereby NGNP technology development testing—through large-scale, integrated tests—can be performed for critical HTGR structures, systems, and components (SSCs). The mission of this capability includes enabling the validation of interfaces, interactions, and performance for critical systems and components prior to installation in the NGNP prototype.

  6. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.
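The six PCMM elements each receive one of four maturity levels (0-3). A small sketch that records element scores and reports the weakest element as the limiting maturity follows; note the single-number summary is a simplification for illustration, since the PCMM itself presents the six scores side by side rather than aggregating them:

```python
PCMM_ELEMENTS = (
    "representation and geometric fidelity",
    "physics and material model fidelity",
    "code verification",
    "solution verification",
    "model validation",
    "uncertainty quantification and sensitivity analysis",
)

def pcmm_limiting_maturity(scores):
    """Return the weakest element's level, the bottleneck of the M&S effort.
    scores: dict mapping each PCMM element to a maturity level in {0,1,2,3}."""
    assert set(scores) == set(PCMM_ELEMENTS), "score every element exactly once"
    assert all(s in (0, 1, 2, 3) for s in scores.values())
    return min(scores.values())
```

Reporting the minimum makes the point that maturity in five elements cannot compensate for, say, absent code verification.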

  7. Additions and improvements to the high energy density physics capabilities in the FLASH code

    Science.gov (United States)

    Lamb, D.; Bogale, A.; Feister, S.; Flocke, N.; Graziani, C.; Khiar, B.; Laune, J.; Tzeferacos, P.; Walker, C.; Weide, K.

    2017-10-01

FLASH is an open-source, finite-volume Eulerian, spatially-adaptive radiation magnetohydrodynamics code that has the capabilities to treat a broad range of physical processes. FLASH performs well on a wide range of computer architectures, and has a broad user base. Extensive high energy density physics (HEDP) capabilities exist in FLASH, which make it a powerful open toolset for the academic HEDP community. We summarize these capabilities, emphasizing recent additions and improvements. We describe several non-ideal MHD capabilities that are being added to FLASH, including the Hall and Nernst effects, implicit resistivity, and a circuit model, which will allow modeling of Z-pinch experiments. We showcase the ability of FLASH to simulate Thomson scattering polarimetry, which measures Faraday rotation due to the presence of magnetic fields, as well as proton radiography, proton self-emission, and Thomson scattering diagnostics. Finally, we describe several collaborations with the academic HEDP community in which FLASH simulations were used to design and interpret HEDP experiments. This work was supported in part at U. Chicago by DOE NNSA ASC through the Argonne Institute for Computing in Science under FWP 57789; DOE NNSA under NLUF Grant DE-NA0002724; DOE SC OFES Grant DE-SC0016566; and NSF Grant PHY-1619573.

  8. Cheetah: Starspot modeling code

    Science.gov (United States)

    Walkowicz, Lucianne; Thomas, Michael; Finkestein, Adam

    2014-12-01

    Cheetah models starspots in photometric data (lightcurves) by calculating the modulation of a light curve due to starspots. The main parameters of the program are the linear and quadratic limb darkening coefficients, stellar inclination, spot locations and sizes, and the intensity ratio of the spots to the stellar photosphere. For the sake of simplicity, Cheetah assumes uniform spot contrast, uses the minimum number of spots needed to produce a good fit, and ignores bright regions.
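    The kind of model the abstract describes can be sketched with a quadratic limb-darkening law and a crude small-spot approximation for a single dark spot. This is a hedged illustration of the general technique, not Cheetah's actual algorithm; the parameterization and the small-spot approximation are assumptions.

```python
import math

def limb_darkening(mu, u1, u2):
    # Quadratic limb-darkening law: I(mu)/I(center) = 1 - u1*(1-mu) - u2*(1-mu)^2
    return 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2

def spot_lightcurve(phases, inclination_deg, spot_lat_deg, spot_lon_deg,
                    spot_area, contrast, u1=0.4, u2=0.2):
    """Relative flux vs. rotational phase for one dark spot (small-spot approx.).

    spot_area is the spot area as a fraction of the stellar disk; contrast is
    the spot-to-photosphere intensity ratio (< 1 for a dark spot).
    """
    i = math.radians(inclination_deg)
    lat = math.radians(spot_lat_deg)
    flux = []
    for phase in phases:
        lon = math.radians(spot_lon_deg) + 2.0 * math.pi * phase
        # mu = cosine of the angle between the local surface normal at the
        # spot and the line of sight
        mu = (math.cos(i) * math.sin(lat)
              + math.sin(i) * math.cos(lat) * math.cos(lon))
        if mu > 0.0:  # spot is on the visible hemisphere
            deficit = (1.0 - contrast) * spot_area * mu * limb_darkening(mu, u1, u2)
        else:
            deficit = 0.0
        flux.append(1.0 - deficit)
    return flux
```

    With the spot facing the observer the flux dips by (1 - contrast) times the projected, limb-darkened spot area; when the spot rotates out of view the flux returns to unity, which is the modulation a fitter would match against the observed lightcurve.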

  9. Geospatial Information System Capability Maturity Models

    Science.gov (United States)

    2017-06-01

    To explore how State departments of transportation (DOTs) evaluate geospatial tool applications and services within their own agencies, particularly their experiences using capability maturity models (CMMs) such as the Urban and Regional Information ...

  10. High burnup models in computer code fair

    International Nuclear Information System (INIS)

    Dutta, B.K.; Swami Prasad, P.; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.

    1997-01-01

    An advanced fuel analysis code FAIR has been developed for analyzing the behavior of fuel rods of water cooled reactors under severe power transients and high burnups. The code is capable of analyzing fuel pins of both collapsible clad, as in PHWRs, and free-standing clad, as in LWRs. The main emphasis in the development of this code is on evaluating fuel performance at extended burnups and modelling of fuel rods for advanced fuel cycles. For this purpose, a number of suitable models have been incorporated in FAIR. For modelling fission gas release, three different models are implemented, namely a physically based mechanistic model, the standard ANS 5.4 model and the Halden model. Similarly, the pellet thermal conductivity can be modelled by the MATPRO equation, the SIMFUEL relation or the Halden equation. The flux distribution across the pellet is modelled by using the model RADAR. For modelling pellet-clad mechanical interaction (PCMI)/stress corrosion cracking (SCC) induced failure of the sheath, necessary routines are provided in FAIR. The validation of the code FAIR is based on the analysis of fuel rods of the EPRI project ''Light water reactor fuel rod modelling code evaluation'' and also the analytical simulation of threshold power ramp criteria of fuel rods of pressurized heavy water reactors. In the present work, a study is carried out by analysing three CRP-FUMEX rods to show the effect of various combinations of fission gas release models and pellet conductivity models on the fuel analysis parameters. These case studies demonstrate the satisfactory performance of FAIR. (author). 12 refs, 5 figs

  11. The GNASH preequilibrium-statistical nuclear model code

    International Nuclear Information System (INIS)

    Arthur, E. D.

    1988-01-01

    The following report is based on materials presented in a series of lectures at the International Center for Theoretical Physics, Trieste, which were designed to describe the GNASH preequilibrium statistical model code and its use. An overview is provided of the code with emphasis upon the code's calculational capabilities and the theoretical models that have been implemented in it. Two sample problems are discussed: the first deals with neutron reactions on 58Ni; the second illustrates the fission model capabilities implemented in the code and involves n + 235U reactions. Finally, a description is provided of theoretical model and code development currently underway. Examples of calculated results using these new capabilities are also given. 19 refs., 17 figs., 3 tabs

  12. Validation of Heat Transfer and Film Cooling Capabilities of the 3-D RANS Code TURBO

    Science.gov (United States)

    Shyam, Vikram; Ameri, Ali; Chen, Jen-Ping

    2010-01-01

    The capabilities of the 3-D unsteady RANS code TURBO have been extended to include heat transfer and film cooling applications. The results of simulations performed with the modified code are compared to experiment and to theory, where applicable. Wilcox's k-ω turbulence model has been implemented to close the RANS equations. Two simulations are conducted: (1) flow over a flat plate and (2) flow over an adiabatic flat plate cooled by one hole inclined at 35° to the free stream. For (1), agreement with theory is found to be excellent for heat transfer, represented by local Nusselt number, and quite good for momentum, as represented by the local skin friction coefficient. This report compares the local skin friction coefficients and Nusselt numbers on a flat plate obtained using Wilcox's k-ω model with the theory of Blasius. The study looks at laminar and turbulent flows over an adiabatic flat plate and over an isothermal flat plate for two different wall temperatures. It is shown that TURBO is able to accurately predict heat transfer on a flat plate. For (2), TURBO shows good qualitative agreement with film cooling experiments performed on a flat plate with one cooling hole. Quantitatively, film effectiveness is underpredicted downstream of the hole.
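    The flat-plate theory referred to above is the classical Blasius solution and its standard heat-transfer analogue. For reference, the textbook local correlations are sketched below; these are the usual comparison baselines for such a validation, though the report's exact formulas may differ.

```python
def laminar_flat_plate(re_x, pr):
    """Classical laminar flat-plate relations (Blasius; Re_x < ~5e5).

    Returns (local skin-friction coefficient, local Nusselt number).
    """
    cf = 0.664 / re_x ** 0.5
    nu = 0.332 * re_x ** 0.5 * pr ** (1.0 / 3.0)
    return cf, nu

def turbulent_flat_plate(re_x, pr):
    """Standard turbulent flat-plate correlations (~5e5 < Re_x < 1e7)."""
    cf = 0.0592 / re_x ** 0.2
    nu = 0.0296 * re_x ** 0.8 * pr ** (1.0 / 3.0)
    return cf, nu
```

    A code-to-theory comparison of the kind described simply evaluates these at each streamwise station and overlays the CFD-extracted Cf and Nu.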

  13. Demonstration of a Model Averaging Capability in FRAMES

    Science.gov (United States)

    Meyer, P. D.; Castleton, K. J.

    2009-12-01

    Uncertainty in model structure can be incorporated in risk assessment using multiple alternative models and model averaging. To facilitate application of this approach to regulatory applications based on risk or dose assessment, a model averaging capability was integrated with the Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) version 2 software. FRAMES is a software platform that allows the non-parochial communication between disparate models, databases, and other frameworks. Users have the ability to implement and select environmental models for specific risk assessment and management problems. Standards are implemented so that models produce information that is readable by other downstream models and accept information from upstream models. Models can be linked across multiple media and from source terms to quantitative risk/dose estimates. Parameter sensitivity and uncertainty analysis tools are integrated. A model averaging module was implemented to accept output from multiple models and produce average results. These results can be deterministic quantities or probability distributions obtained from an analysis of parameter uncertainty. Output from alternative models is averaged using weights determined from user input and/or model calibration results. A model calibration module based on the PEST code was implemented to provide FRAMES with a general calibration capability. An application illustrates the implementation, user interfaces, execution, and results of the FRAMES model averaging capabilities.
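    The core averaging step described above, combining outputs from alternative conceptual models using normalized weights, can be sketched as follows. This is an illustration of the general technique, not the FRAMES implementation; the function name and inputs are assumptions.

```python
def model_average(outputs, weights):
    """Weighted average of predictions from alternative models.

    outputs: {model_name: predicted value}
    weights: {model_name: raw weight, e.g. from user input or calibration}
    Weights are normalized to sum to one before averaging.
    """
    total = sum(weights[m] for m in outputs)
    if total <= 0:
        raise ValueError("model weights must sum to a positive value")
    return sum(outputs[m] * weights[m] / total for m in outputs)

# Two alternative models predicting a dose, weighted by hypothetical
# calibration-based scores:
avg = model_average({"modelA": 2.0, "modelB": 4.0},
                    {"modelA": 3.0, "modelB": 1.0})
# avg == 2.5
```

    The same weighting applies sample-by-sample when the outputs are Monte Carlo realizations from a parameter-uncertainty analysis, yielding an averaged distribution rather than a single number.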

  14. Evacuation emergency response model coupling atmospheric release advisory capability output

    International Nuclear Information System (INIS)

    Rosen, L.C.; Lawver, B.S.; Buckley, D.W.; Finn, S.P.; Swenson, J.B.

    1983-01-01

    A Federal Emergency Management Agency (FEMA) sponsored project to develop a coupled set of models between those of the Lawrence Livermore National Laboratory (LLNL) Atmospheric Release Advisory Capability (ARAC) system and candidate evacuation models is discussed herein. This report describes the ARAC system, the rapid computer code developed, and the coupling of that code with ARAC output. The computer code is adapted to the use of color graphics as a means to display and convey the dynamics of an emergency evacuation. The model is applied to a specific case of an emergency evacuation of individuals surrounding the Rancho Seco Nuclear Power Plant, located approximately 25 miles southeast of Sacramento, California. The graphics available to the model user for the Rancho Seco example are displayed and described in detail. Suggestions for future, potential improvements to the emergency evacuation model are presented

  15. Sodium pool fire model for CONACS code

    International Nuclear Information System (INIS)

    Yung, S.C.

    1982-01-01

    The modeling of sodium pool fires constitutes an important ingredient in conducting LMFBR accident analysis. Such modeling capability has recently come under scrutiny at Westinghouse Hanford Company (WHC) within the context of developing CONACS, the Containment Analysis Code System. One of the efforts in the CONACS program is to model various combustion processes anticipated to occur during postulated accident paths. This effort includes the selection or modification of an existing model and development of a new model if it clearly contributes to the program purpose. As part of this effort, a new sodium pool fire model has been developed that is directed at removing some of the deficiencies in the existing models, such as SOFIRE-II and FEUNA

  16. Pump Component Model in SPACE Code

    International Nuclear Information System (INIS)

    Kim, Byoung Jae; Kim, Kyoung Doo

    2010-08-01

    This technical report describes the pump component model in the SPACE code. A literature survey was made of pump models in existing system codes. The models embedded in the SPACE code were examined to check for conflicts with intellectual property rights. Design specifications, computer coding implementation, and test results are included in this report

  17. Business Models for Cost Sharing & Capability Sustainment

    Science.gov (United States)

    2012-08-18

    Masanell and Ricart (2010), we can arrive at the working definition of a business model used in this report, namely, that a business model is a...capabilities over a long time frame. In order to identify the key factors in the Harrier RTI success, a SWOT analysis was carried out. The results are shown in...Table 1. Table 1. SWOT Analysis of Harrier Strengths - Small team - UK/BAE controlled - RTI Weaknesses - Small program—little

  18. U.S. Sodium Fast Reactor Codes and Methods: Current Capabilities and Path Forward

    Energy Technology Data Exchange (ETDEWEB)

    Brunett, A. J.; Fanning, T. H.

    2017-06-26

    The United States has extensive experience with the design, construction, and operation of sodium cooled fast reactors (SFRs) over the last six decades. Despite the closure of various facilities, the U.S. continues to dedicate research and development (R&D) efforts to the design of innovative experimental, prototype, and commercial facilities. Accordingly, in support of the rich operating history and ongoing design efforts, the U.S. has been developing and maintaining a series of tools with capabilities that envelope all facets of SFR design and safety analyses. This paper provides an overview of the current U.S. SFR analysis toolset, including codes such as SAS4A/SASSYS-1, MC2-3, SE2-ANL, PERSENT, NUBOW-3D, and LIFE-METAL, as well as the higher-fidelity tools (e.g. PROTEUS) being integrated into the toolset. Current capabilities of the codes are described and key ongoing development efforts are highlighted for some codes.

  19. Assessment of Prediction Capabilities of COCOSYS and CFX Code for Simplified Containment

    Directory of Open Access Journals (Sweden)

    Jia Zhu

    2016-01-01

    The acceptable accuracy for simulation of severe accident scenarios in containments of nuclear power plants is required to investigate the consequences of severe accidents and the effectiveness of potential countermeasures. For this purpose, the actual capability of the CFX tool and the COCOSYS code is assessed in prototypical geometries for a simplified physical process (a plume due to a heat source) under adiabatic and convection boundary conditions, respectively. Results of the comparison under the adiabatic boundary condition show that good agreement is obtained among the analytical solution, the COCOSYS prediction, and the CFX prediction for zone temperature. The general trend of the temperature distribution along the vertical direction predicted by COCOSYS agrees with the CFX prediction except in the dome, where the behavior predicted by CFX fails to be reproduced by COCOSYS. Both COCOSYS and CFX indicate that there is no temperature stratification inside the dome. The CFX prediction shows that a temperature stratification area occurs beneath the dome and away from the heat source. The temperature stratification area under the adiabatic boundary condition is bigger than that under the convection boundary condition. The results indicate that the average temperature inside the containment predicted with the COCOSYS model is overestimated under the adiabatic boundary condition, while it is underestimated under the convection boundary condition, compared to the CFX prediction.

  20. Milagro Version 2 An Implicit Monte Carlo Code for Thermal Radiative Transfer: Capabilities, Development, and Usage

    Energy Technology Data Exchange (ETDEWEB)

    T.J. Urbatsch; T.M. Evans

    2006-02-15

    We have released Version 2 of Milagro, an object-oriented, C++ code that performs radiative transfer using Fleck and Cummings' Implicit Monte Carlo method. Milagro, a part of the Jayenne program, is a stand-alone driver code used as a methods research vehicle and to verify its underlying classes. These underlying classes are used to construct Implicit Monte Carlo packages for external customers. Milagro-2 represents a design overhaul that allows better parallelism and extensibility. New features in Milagro-2 include verified momentum deposition, restart capability, graphics capability, exact energy conservation, and improved load balancing and parallel efficiency. A users' guide also describes how to configure, make, and run Milagro-2.

  1. Interfacial and Wall Transport Models for SPACE-CAP Code

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Soon Joon; Choo, Yeon Joon; Han, Tae Young; Hwang, Su Hyun; Lee, Byung Chul [FNC Tech., Seoul (Korea, Republic of); Choi, Hoon; Ha, Sang Jun [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)

    2009-10-15

    The development project for the domestic design code was launched to be used for the safety and performance analysis of pressurized light water reactors. The CAP (Containment Analysis Package) code has been developed for containment safety and performance analysis side by side with SPACE. The CAP code treats three fields (gas, continuous liquid, and dispersed drop) for the assessment of containment-specific phenomena, and is featured by its multidimensional assessment capabilities. The thermal hydraulics solver has already been developed and is now being tested for stability and soundness. As a next step, interfacial and wall transport models were set up. In order to develop the best model and correlation package for the CAP code, various models currently used in major containment analysis codes (GOTHIC, CONTAIN 2.0, and CONTEMPT-LT) have been reviewed. The origins of the selected models used in these codes have also been examined to confirm that the models do not conflict with proprietary rights. In addition, a literature survey of recent studies has been performed in order to incorporate better models into the CAP code. The models and correlations of SPACE were also reviewed. The CAP models and correlations are composed of interfacial heat/mass and momentum transport models, and wall heat/mass and momentum transport models. This paper discusses those transport models in the CAP code.

  2. Impacts of Model Building Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Athalye, Rahul A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sivaraman, Deepak [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elliott, Douglas B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Bing [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bartlett, Rosemarie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-10-31

    The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states which have codes which are fundamentally different from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.
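    The three-phase erosion of potential savings by delayed adoption and incomplete compliance can be illustrated with a deliberately simple toy model. This is not PNNL's assessment methodology; the linear form, function name, and parameters are all assumptions for illustration.

```python
def realized_savings(potential_per_year, adoption_lag_years, compliance_rate,
                     horizon_years):
    """Toy model: savings a new model code delivers over a fixed horizon.

    potential_per_year: savings if the code were adopted immediately and
    complied with fully; adoption_lag_years: delay before adoption;
    compliance_rate: fraction of new construction actually meeting the code.
    """
    effective_years = max(horizon_years - adoption_lag_years, 0)
    return potential_per_year * compliance_rate * effective_years

# A code worth 10 units/yr, adopted after 3 years, complied with 80% of the
# time, over a 10-year horizon:
print(realized_savings(10.0, 3, 0.8, 10))  # 56.0
```

    Even this crude sketch shows the abstract's point: both the adoption lag and the compliance shortfall scale down the 100 units of nominal potential, here to 56.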

  3. Large-Scale Condensed Matter DFT Simulations: Performance and Capabilities of the CRYSTAL Code.

    Science.gov (United States)

    Erba, A; Baima, J; Bush, I; Orlando, R; Dovesi, R

    2017-10-10

    Nowadays, the efficient exploitation of high-performance computing resources is crucial to extend the applicability of first-principles theoretical methods to the description of large, progressively more realistic molecular and condensed matter systems. This can be achieved only by devising effective parallelization strategies for the most time-consuming steps of a calculation, which requires some effort given the usual complexity of quantum-mechanical algorithms, particularly so if parallelization is to be extended to all properties and not just to the basic functionalities of the code. In this Article, the performance and capabilities of the massively parallel version of the Crystal17 package for first-principles calculations on solids are discussed. In particular, we present: (i) recent developments allowing for a further improvement of the code scalability (up to 32 768 cores); (ii) a quantitative analysis of the scaling and memory requirements of the code when running calculations with several thousands (up to about 14 000) of atoms per cell; (iii) a documentation of the high numerical size consistency of the code; and (iv) an overview of recent ab initio studies of several physical properties (structural, energetic, electronic, vibrational, spectroscopic, thermodynamic, elastic, piezoelectric, topological) of large systems investigated with the code.

  4. Extending the capabilities of CFD codes to assess ash related problems

    DEFF Research Database (Denmark)

    Kær, Søren Knudsen; Rosendahl, Lasse Aistrup; Baxter, B. B.

    2004-01-01

    This paper discusses the application of FLUENT in the analysis of grate-fired biomass boilers. A short description of the concept used to model fuel conversion on the grate and the coupling to the CFD code is offered. The development and implementation of a CFD-based deposition model is presented in the remainder of the paper. The growth of deposits on furnace walls and superheater tubes is treated, including the impact on heat transfer rates determined by the CFD code. Based on the commercial CFD code FLUENT, the overall model is fully implemented through the User Defined Functions. The model is configured entirely through a graphical user interface integrated in the standard FLUENT interface. The model is applied to the straw-fired Masnedø boiler. Results are in good qualitative agreement with both measurements and observations at the plants.

  5. DOE International Collaboration; Seismic Modeling and Simulation Capability Project

    Energy Technology Data Exchange (ETDEWEB)

    Leininger, Lara D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Settgast, Randolph R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2010-10-12

    The following report describes the development and exercise of a new capability at LLNL to model complete, non-linear, seismic events in 3-dimensions with a fully-coupled soil structure interaction response. This work is specifically suited to nuclear reactor design because this design space is exempt from the Seismic Design requirements of International Building Code (IBC) and the American Society of Civil Engineers (ASCE) [4,2]. Both IBC and ASCE-7 exempt nuclear reactors because they are considered “structures that require special consideration” and their design is governed only by “other regulations”. In the case of nuclear reactors, the regulations are from both the Nuclear Regulatory Commission (NRC) [10] and ASCE 43 [3]. This current framework of design guidance, coupled to this new and evolving capability to provide high fidelity design solutions as presented in this report, enables the growing field of Performance-Based Design (PBD) for nuclear reactors subjected to earthquake ground motions.

  6. Dual-code quantum computation model

    Science.gov (United States)

    Choi, Byung-Soo

    2015-08-01

    In this work, we propose the dual-code quantum computation model—a fault-tolerant quantum computation scheme which alternates between two different quantum error-correction codes. Since the chosen two codes have different sets of transversal gates, we can implement a universal set of gates transversally, thereby reducing the overall cost. We use code teleportation to convert between quantum states in different codes. The overall cost is decreased if code teleportation requires fewer resources than the fault-tolerant implementation of the non-transversal gate in a specific code. To analyze the cost reduction, we investigate two cases with different base codes, namely the Steane and Bacon-Shor codes. For the Steane code, neither the proposed dual-code model nor another variation of it achieves any cost reduction since the conventional approach is simple. For the Bacon-Shor code, the three proposed variations of the dual-code model reduce the overall cost. However, as the encoding level increases, the cost reduction decreases and becomes negative. Therefore, the proposed dual-code model is advantageous only when the encoding level is low and the cost of the non-transversal gate is relatively high.

  7. Extending the capabilities of CFD codes to assess ash related problems

    DEFF Research Database (Denmark)

    Kær, Søren Knudsen; Rosendahl, Lasse Aistrup; Baxter, B. B.

    2004-01-01

    This paper discusses the application of FLUENT in the analysis of grate-fired biomass boilers. A short description of the concept used to model fuel conversion on the grate and the coupling to the CFD code is offered. The development and implementation of a CFD-based deposition model is presented in the remainder of the paper. The growth of deposits on furnace walls and superheater tubes is treated, including the impact on heat transfer rates determined by the CFD code. Based on the commercial CFD code FLUENT, the overall model is fully implemented through the User Defined Functions. The model is configured entirely through a graphical user interface integrated in the standard FLUENT interface. The model considers fine and coarse mode ash deposition and sticking mechanisms for the complete deposit growth, as well as an influence on the local boundary conditions for heat transfer due to thermal resistance changes...

  8. MARS code manual volume I: code structure, system models, and solution methods

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Kim, Kyung Doo; Bae, Sung Won; Jeong, Jae Jun; Lee, Seung Wook; Hwang, Moon Kyu; Yoon, Churl

    2010-02-01

    The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF codes. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-Of-State (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This theory manual provides complete information on the code structure and major functions of MARS, including code architecture, the hydrodynamic model, heat structures, the trip/control system, and the point reactor kinetics model. Therefore, this report should be very useful for code users. The overall structure of the manual is modeled on that of RELAP5, and as such the layout of the manual is very similar to that of RELAP. This similarity to RELAP5 input is intentional, as the input scheme allows minimum modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible

  9. SIMMER-3 and SIMMER-4 safety code development for reactors with transmutation capability

    International Nuclear Information System (INIS)

    Maschek, W.; Rineiski, A.; Suzuki, T.; Wang, S.; Mori, M.; Wiegner, E.; Wilhelm, D.; Kretzschmar, F.; Tobita, Y.; Yamano, H.; Fujita, S.; Coste, P.; Pigny, S.; Henriques, A.; Cadiou, T.; Morita, K.; Bandini, G.

    2005-01-01

    The SIMMER code family has played an outstanding role in the framework and history of mechanistic safety code development for liquid metal cooled reactors with a fast neutron spectrum. Its unmatched position is related to its extended simulation range from normal operation conditions, via transients and accidents, core melting and core destruction, to in-vessel relocation phenomena and post accident heat removal conditions. SIMMER-III is a two-dimensional and SIMMER-IV a three-dimensional, multi-velocity-field, multi-phase, multi-component, Eulerian, fluid-dynamics code system coupled with a structure model for fuel pins, hex-cans and general structures, and a space-, time- and energy-dependent transport theory neutron dynamics model. An elaborate analytical equation-of-state closes the fluid-dynamics conservation equations. The code was originally developed for the severe accident simulation of sodium cooled fast reactors. However, the general philosophy behind the SIMMER development was to generate a very versatile and flexible tool, applicable to the safety analysis of various reactor types with different neutron spectra and coolants. The multi-physics and multi-scale SIMMER code family is developed in a joint effort of different laboratories, where currently the main application area in Europe is in the framework of accelerator-driven systems. The most recent code developments and SIMMER application areas are discussed in this paper. (authors)

  10. New strategies of sensitivity analysis capabilities in continuous-energy Monte Carlo code RMC

    International Nuclear Information System (INIS)

    Qiu, Yishu; Liang, Jingang; Wang, Kan; Yu, Jiankai

    2015-01-01

    Highlights: • Data decomposition techniques are proposed for memory reduction. • New strategies are put forward and implemented in RMC code to improve efficiency and accuracy for sensitivity calculations. • A capability to compute region-specific sensitivity coefficients is developed in RMC code. - Abstract: The iterated fission probability (IFP) method has been demonstrated to be an accurate alternative for estimating the adjoint-weighted parameters in continuous-energy Monte Carlo forward calculations. However, the memory requirements of this method are huge especially when a large number of sensitivity coefficients are desired. Therefore, data decomposition techniques are proposed in this work. Two parallel strategies based on the neutron production rate (NPR) estimator and the fission neutron population (FNP) estimator for adjoint fluxes, as well as a more efficient algorithm which has multiple overlapping blocks (MOB) in a cycle, are investigated and implemented in the continuous-energy Reactor Monte Carlo code RMC for sensitivity analysis. Furthermore, a region-specific sensitivity analysis capability is developed in RMC. These new strategies, algorithms and capabilities are verified against analytic solutions of a multi-group infinite-medium problem and against results from other software packages including MCNP6, TSUANAMI-1D and multi-group TSUNAMI-3D. While the results generated by the NPR and FNP strategies agree within 0.1% of the analytic sensitivity coefficients, the MOB strategy surprisingly produces sensitivity coefficients exactly equal to the analytic ones. Meanwhile, the results generated by the three strategies in RMC are in agreement with those produced by other codes within a few percent. Moreover, the MOB strategy performs the most efficient sensitivity coefficient calculations (offering as much as an order of magnitude gain in FoMs over MCNP6), followed by the NPR and FNP strategies, and then MCNP6. The results also reveal that these
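    For context, the sensitivity coefficient that the IFP method estimates is the fractional change in the multiplication factor per fractional change in an input cross section. The sketch below illustrates that definition the brute-force way, from a nominal and a perturbed calculation; it is not the adjoint-weighted RMC implementation, and the function name is an assumption.

```python
def relative_sensitivity(k_nominal, k_perturbed, rel_perturbation):
    """Relative sensitivity coefficient S = (dk/k) / (dsigma/sigma).

    rel_perturbation is the fractional change applied to the input
    parameter (e.g. 0.01 for a 1% cross-section perturbation).
    """
    return ((k_perturbed - k_nominal) / k_nominal) / rel_perturbation

# A 1% cross-section perturbation that moves k from 1.00000 to 1.00250
# corresponds to a sensitivity coefficient of about 0.25:
s = relative_sensitivity(1.00000, 1.00250, 0.01)
```

    Direct perturbation needs one extra transport run per parameter, which is exactly why adjoint-weighted estimators such as IFP, computing many coefficients in a single forward calculation, are attractive despite their memory cost.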

  11. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    Science.gov (United States)

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2016-03-01

    This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package authored at Oak Ridge National Laboratory. Shift has been developed to scale well from laptops to small computing clusters to advanced supercomputers and includes features such as support for multiple geometry and physics engines, hybrid capabilities for variance reduction methods such as the Consistent Adjoint-Driven Importance Sampling methodology, advanced parallel decompositions, and tally methods optimized for scalability on supercomputing architectures. The scaling studies presented in this paper demonstrate good weak and strong scaling behavior for the implemented algorithms. Shift has also been validated and verified against various reactor physics benchmarks, including the Consortium for Advanced Simulation of Light Water Reactors' Virtual Environment for Reactor Analysis criticality test suite and several Westinghouse AP1000® problems presented in this paper. These benchmark results compare well to those from other contemporary Monte Carlo codes such as MCNP5 and KENO.

  12. Stochastic Capability Models for Degrading Satellite Constellations

    National Research Council Canada - National Science Library

    Gulyas, Cole W

    2005-01-01

    This thesis proposes and analyzes a new measure of functional capability for satellite constellations that incorporates the instantaneous availability and mission effectiveness of individual satellites...

  13. Overview of ASC Capability Computing System Governance Model

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott W. [Los Alamos National Laboratory

    2012-07-11

    This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.

  14. Fatigue modelling according to the JCSS Probabilistic model code

    NARCIS (Netherlands)

    Vrouwenvelder, A.C.W.M.

    2007-01-01

    The Joint Committee on Structural Safety is working on a Model Code for full probabilistic design. The code consists of three major parts: Basis of design, Load Models, and Models for Material and Structural Properties. The code is intended as the operational counterpart of codes like ISO,

  15. Assessing the Predictive Capability of the LIFEIV Nuclear Fuel Performance Code using Sequential Calibration

    International Nuclear Information System (INIS)

    Stull, Christopher J.; Williams, Brian J.; Unal, Cetin

    2012-01-01

    This report considers the problem of calibrating a numerical model to data from an experimental campaign (or series of experimental tests). The issue is that when an experimental campaign is proposed, only the input parameters associated with each experiment are known (i.e. outputs are not known because the experiments have yet to be conducted). Faced with such a situation, it would be beneficial from the standpoint of resource management to carefully consider the sequence in which the experiments are conducted. In this way, the resources available for experimental tests may be allocated in a way that best 'informs' the calibration of the numerical model. To address this concern, the authors propose decomposing the input design space of the experimental campaign into its principal components. Subsequently, the utility (to be explained) of each experimental test to the principal components of the input design space is used to formulate the sequence in which the experimental tests will be used for model calibration purposes. The results reported herein build on those presented and discussed in (1,2) wherein Verification and Validation and Uncertainty Quantification (VU) capabilities were applied to the nuclear fuel performance code LIFEIV. In addition to the raw results from the sequential calibration studies derived from the above, a description of the data within the context of the Predictive Maturity Index (PMI) will also be provided. The PMI (3,4) is a metric initiated and developed at Los Alamos National Laboratory to quantitatively describe the ability of a numerical model to make predictions in the absence of experimental data, where it is noted that 'predictions in the absence of experimental data' is not synonymous with extrapolation. This simply reflects the fact that resources do not exist such that each and every execution of the numerical model can be compared against experimental data. If such resources existed, the justification for numerical models

  16. Coding with partially hidden Markov models

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Rissanen, J.

    1995-01-01

    Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM) combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general ... The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt ...

  17. Assessment of Static Delamination Propagation Capabilities in Commercial Finite Element Codes Using Benchmark Analysis

    Science.gov (United States)

    Orifici, Adrian C.; Krueger, Ronald

    2010-01-01

    With capabilities for simulating delamination growth in composite materials becoming available, the need for benchmarking and assessing these capabilities is critical. In this study, benchmark analyses were performed to assess the delamination propagation simulation capabilities of the VCCT implementations in Marc™ and MD Nastran™. Benchmark delamination growth results for Double Cantilever Beam, Single Leg Bending and End Notched Flexure specimens were generated using a numerical approach. This numerical approach was developed previously, and involves comparing results from a series of analyses at different delamination lengths to a single analysis with automatic crack propagation. Specimens were analyzed with three-dimensional and two-dimensional models, and compared with previous analyses using Abaqus. The results demonstrated that the VCCT implementations in Marc™ and MD Nastran™ were capable of accurately replicating the benchmark delamination growth results and that the use of the numerical benchmarks offers advantages over benchmarking using experimental and analytical results.
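    The VCCT evaluations above reduce, at each front node, to the standard virtual crack closure formula; the sketch below shows the mode I form with hypothetical inputs (the function names and numbers are illustrative, not taken from the paper):

```python
def vcct_mode_i(f_z, delta_w, delta_a, b):
    """Mode I energy release rate via the virtual crack closure technique:
    G_I = F_z * delta_w / (2 * delta_a * b), where F_z is the nodal force
    at the delamination front, delta_w the relative opening displacement
    behind the front, delta_a the element length, and b the width."""
    return f_z * delta_w / (2.0 * delta_a * b)

def front_advances(g, g_c):
    """Growth criterion driving automatic propagation: the node at the
    delamination front is released once G reaches the toughness G_c."""
    return g >= g_c

# hypothetical numbers: 10 N nodal force, 0.002 mm opening displacement,
# 0.5 mm element length, 25 mm specimen width
g_i = vcct_mode_i(10.0, 0.002, 0.5, 25.0)
```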

  18. Coupling a Basin Modeling and a Seismic Code using MOAB

    KAUST Repository

    Yan, Mi

    2012-06-02

    We report on a demonstration of loose multiphysics coupling between a basin modeling code and a seismic code running on a large parallel machine. Multiphysics coupling, which is one critical capability for a high performance computing (HPC) framework, was implemented using the MOAB open-source mesh and field database. MOAB provides for code coupling by storing mesh data and input and output field data for the coupled analysis codes and interpolating the field values between different meshes used by the coupled codes. We found it straightforward to use MOAB to couple the PBSM basin modeling code and the FWI3D seismic code on an IBM Blue Gene/P system. We describe how the coupling was implemented and present benchmarking results for up to 8 racks of Blue Gene/P with 8192 nodes and MPI processes. The coupling code is fast compared to the analysis codes and it scales well up to at least 8192 nodes, indicating that a mesh and field database is an efficient way to implement loose multiphysics coupling for large parallel machines.

  19. Development of a system code with CFD capability for analyzing turbulent mixed convection in gas-cooled reactors

    International Nuclear Information System (INIS)

    Kim, Hyeon Il

    2010-02-01

    convection regime, and (4) recently conducted experiments in a deteriorated turbulent heat transfer regime. The validation proved that the Launder-Sharma model can supply improved solutions and much better knowledge about not only the wall temperature but also the heat transfer phenomena in the turbulent mixed convection regime, the DTHT, compared to that offered by a one-dimensional empirical correlation. A set of modules providing Computational Fluid Dynamics (CFD) capability able to handle multi-dimensional heat transfer is incorporated into a system code for GCRs, GAMMA+, by adopting the Launder-Sharma model of turbulence. We implemented the model into the original system code based on the same schemes, that is, the Implicit Continuous-fluid Eulerian (ICE) scheme in a staggered mesh layout, and Newton linearization as constructed in the original code, in such a way that the model did not interfere with the numerical stability. The extended code, GAMMA T, was successfully verified and validated in that the model was well formulated with a firmly established numerical foundation through comparisons with an available set of data covering the turbulent forced convection regime. The GAMMA T code showed strong potential for future use as a robust integrated system code with the capability of multi-scale analysis in it

  20. Source-optimized irregular repeat accumulate codes with inherent unequal error protection capabilities and their application to scalable image transmission.

    Science.gov (United States)

    Lan, Ching-Fu; Xiong, Zixiang; Narayanan, Krishna R

    2006-07-01

    The common practice for achieving unequal error protection (UEP) in scalable multimedia communication systems is to design rate-compatible punctured channel codes before computing the UEP rate assignments. This paper proposes a new approach to designing powerful irregular repeat accumulate (IRA) codes that are optimized for the multimedia source and to exploiting the inherent irregularity in IRA codes for UEP. Using the end-to-end distortion due to the first error bit in channel decoding as the cost function, which is readily given by the operational distortion-rate function of embedded source codes, we incorporate this cost function into the channel code design process via density evolution and obtain IRA codes that minimize the average cost function instead of the usual probability of error. Because the resulting IRA codes have inherent UEP capabilities due to irregularity, the new IRA code design effectively integrates channel code optimization and UEP rate assignments, resulting in source-optimized channel coding or joint source-channel coding. We simulate our source-optimized IRA codes for transporting SPIHT-coded images over a binary symmetric channel with crossover probability p. When p = 0.03 and the channel code length is long (e.g., with one codeword for the whole 512 x 512 image), we are able to operate at only 9.38% away from the channel capacity with code length 132380 bits, achieving the best published results in terms of average peak signal-to-noise ratio (PSNR). Compared to conventional IRA code design (that minimizes the probability of error) with the same code rate, the performance gain in average PSNR from using our proposed source-optimized IRA code design is 0.8759 dB when p = 0.1 and the code length is 12800 bits. As predicted by Shannon's separation principle, we observe that this performance gain diminishes as the code length increases.
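    The cost function described above (end-to-end distortion due to the first error bit in channel decoding) can be sketched as follows, assuming independent per-bit error probabilities as an idealization; the names and values are illustrative, not from the paper:

```python
def first_error_distribution(bit_error_probs):
    """Pr(first channel-decoding error occurs at bit i), assuming
    independent per-bit error probabilities (an idealization)."""
    dist, survive = [], 1.0
    for p in bit_error_probs:
        dist.append(survive * p)
        survive *= 1.0 - p
    return dist

def expected_cost(bit_error_probs, distortions):
    """End-to-end cost driving the code design: sum over bit positions of
    Pr(first error at i) times D_i, the operational distortion incurred by
    truncating the embedded source bitstream at bit i."""
    return sum(p * d for p, d in
               zip(first_error_distribution(bit_error_probs), distortions))
```

    For an embedded (e.g. SPIHT) stream, D_i decreases with i, so minimizing this cost automatically weights early bits more heavily, which is where the inherent unequal error protection comes from.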

  1. A study on the prediction capability of GOTHIC and HYCA3D code for local hydrogen concentrations

    International Nuclear Information System (INIS)

    Choi, Y. S.; Lee, W. J.; Lee, J. J.; Park, K. C.

    2002-01-01

    In this study, the prediction capability of the GOTHIC and HYCA3D codes for local hydrogen concentrations was verified against experimental results. Among the experiments, executed by SNU and other organizations inside and outside of the country, the fast transient and the obstacle cases were selected. In the case of a large subcompartment, both codes show good agreement with the experimental data. But in the case of small and complex geometry or fast transients, the results of the GOTHIC code differ greatly from the experimental ones. This indicates that the GOTHIC code is unsuitable for these cases. On the contrary, the HYCA3D code agrees well with all the experimental data.

  2. Error detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3

    Science.gov (United States)

    Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1989-09-01

    The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
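    The probability of undetectable error follows directly from a code's weight distribution: on a binary symmetric channel an error pattern goes undetected exactly when it equals a nonzero codeword, so P_ud(eps) = sum over i > 0 of A_i * eps^i * (1-eps)^(n-i). The sketch below uses the classic [7,4] Hamming code for illustration rather than the 802.3 shortened codes, whose weight distributions are the subject of the paper:

```python
def p_undetected(weights, n, eps):
    """Probability of undetected error on a binary symmetric channel with
    crossover probability eps, given the weight distribution A (a map
    from codeword weight i to the count A_i of codewords of that weight).
    Only nonzero weights contribute."""
    return sum(a * eps ** i * (1.0 - eps) ** (n - i)
               for i, a in weights.items() if i > 0)

# weight distribution of the [7,4] Hamming code:
# A_0 = 1, A_3 = 7, A_4 = 7, A_7 = 1
A = {0: 1, 3: 7, 4: 7, 7: 1}
p_half = p_undetected(A, 7, 0.5)   # equals (2^4 - 1) / 2^7 at eps = 1/2
```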

  3. Genetic coding and gene expression - new Quadruplet genetic coding model

    Science.gov (United States)

    Shankar Singh, Rama

    2012-07-01

    The successful demonstration of the human genome project has opened the door not only for developing personalized medicine and cures for genetic diseases, but it may also answer the complex and difficult question of the origin of life. It may lead to making the 21st century a century of the Biological Sciences as well. Based on the central dogma of Biology, genetic codons in conjunction with tRNA play a key role in translating the RNA bases, forming the sequence of amino acids leading to a synthesized protein. This is the most critical step in synthesizing the right protein needed for personalized medicine and curing genetic diseases. So far, only triplet codons involving three bases of RNA, transcribed from DNA bases, have been used. Since this approach has several inconsistencies and limitations, even the promise of personalized medicine has not been realized. The new Quadruplet genetic coding model proposed and developed here involves all four RNA bases, which in conjunction with tRNA will synthesize the right protein. The transcription and translation process used will be the same, but the Quadruplet codons will help overcome most of the inconsistencies and limitations of the triplet codes. Details of this new Quadruplet genetic coding model and its subsequent potential applications, including relevance to the origin of life, will be presented.

  4. RELAP5/MOD3 code coupling model

    International Nuclear Information System (INIS)

    Martin, R.P.; Johnsen, G.W.

    1994-01-01

    A new capability has been incorporated into RELAP5/MOD3 that enables the coupling of RELAP5/MOD3 to other computer codes. The new capability has been designed to support analysis of the new advanced reactor concepts. Its user features rely solely on new RELAP5 "styled" input and the Parallel Virtual Machine (PVM) software, which facilitates process management and distributed communication of multiprocess problems. RELAP5/MOD3 manages the input processing, communication instruction, process synchronization, and its own send and receive data processing. The flexible capability requires that an explicit coupling be established, which updates boundary conditions at discrete time intervals. Two test cases are presented that demonstrate the functionality, applicability, and issues involving use of this capability.
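    The explicit coupling scheme described above, in which boundary conditions are exchanged only at discrete time intervals, can be illustrated with two toy "codes" relaxing toward each other; everything here is a hypothetical stand-in, not the RELAP5/PVM interface:

```python
class ToyCode:
    """Stand-in for one coupled analysis code: it holds a state variable
    and advances one explicit coupling interval using the partner's
    boundary value (a simple relaxation toward it)."""
    def __init__(self, state):
        self.state = state
    def advance(self, boundary, dt):
        self.state += 0.5 * (boundary - self.state) * dt

# Explicit coupling: boundary data are frozen at each discrete
# synchronization time and only then exchanged between the two codes.
a, b = ToyCode(1.0), ToyCode(0.0)
dt = 0.1
for _ in range(100):
    bc_a, bc_b = a.state, b.state   # exchange at the sync point
    a.advance(bc_b, dt)
    b.advance(bc_a, dt)
```

    Because each code sees only the partner's value from the last synchronization point, the coupling is explicit: stability then constrains the interval length, which is one of the "issues involving use of this capability" the abstract alludes to.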

  5. Improvement of MARS code reflood model

    International Nuclear Information System (INIS)

    Hwang, Moonkyu; Chung, Bub-Dong

    2011-01-01

    A specifically designed heat transfer model for the reflood process, which normally occurs at low flow and low pressure, was originally incorporated in the MARS code. The model is essentially identical to that of the RELAP5/MOD3.3 code. The model, however, is known to have under-estimated the peak cladding temperature (PCT) with an earlier turn-over. In this study, the original MARS code reflood model is improved. Based on extensive sensitivity studies for both the hydraulic and wall heat transfer models, it is found that the dispersed flow film boiling (DFFB) wall heat transfer is the most influential process determining the PCT, whereas the interfacial drag model most affects the quenching time through the liquid carryover phenomenon. The model proposed by Bajorek and Young is incorporated for the DFFB wall heat transfer. Both space grid and droplet enhancement models are incorporated. Inverted annular film boiling (IAFB) is modeled by using the original PSI model of the code. The flow transition between the DFFB and IAFB is modeled using the TRACE code interpolation. A gas velocity threshold is also added to limit the top-down quenching effect. Assessment calculations are performed for the original and modified MARS codes for the Flecht-Seaset test and RBHT test. Improvements are observed in terms of the PCT and quenching time predictions in the Flecht-Seaset assessment. In the case of the RBHT assessment, the improvement over the original MARS code is found to be marginal. A space grid effect, however, is clearly seen from the modified version of the MARS code. (author)

  6. Diagnosis code assignment: models and evaluation metrics.

    Science.gov (United States)

    Perotte, Adler; Pivovarov, Rimma; Natarajan, Karthik; Weiskopf, Nicole; Wood, Frank; Elhadad, Noémie

    2014-01-01

    The volume of healthcare data is growing rapidly with the adoption of health information technology. We focus on automated ICD9 code assignment from discharge summary content and methods for evaluating such assignments. We study ICD9 diagnosis codes and discharge summaries from the publicly available Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC II) repository. We experiment with two coding approaches: one that treats each ICD9 code independently of each other (flat classifier), and one that leverages the hierarchical nature of ICD9 codes into its modeling (hierarchy-based classifier). We propose novel evaluation metrics, which reflect the distances among gold-standard and predicted codes and their locations in the ICD9 tree. Experimental setup, code for modeling, and evaluation scripts are made available to the research community. The hierarchy-based classifier outperforms the flat classifier with F-measures of 39.5% and 27.6%, respectively, when trained on 20,533 documents and tested on 2282 documents. While recall is improved at the expense of precision, our novel evaluation metrics show a more refined assessment: for instance, the hierarchy-based classifier identifies the correct sub-tree of gold-standard codes more often than the flat classifier. Error analysis reveals that gold-standard codes are not perfect, and as such the recall and precision are likely underestimated. Hierarchy-based classification yields better ICD9 coding than flat classification for MIMIC patients. Automated ICD9 coding is an example of a task for which data and tools can be shared and for which the research community can work together to build on shared models and advance the state of the art.
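    The distance-aware evaluation metrics rest on tree distance between predicted and gold-standard codes; below is a minimal sketch over a toy fragment of the ICD9 hierarchy (the codes, parent links, and function names are illustrative, not the paper's released evaluation scripts):

```python
def ancestors(code, parent):
    """Path from a code up to its root in the (toy) hierarchy."""
    path = [code]
    while code in parent:
        code = parent[code]
        path.append(code)
    return path

def tree_distance(a, b, parent):
    """Number of edges between two codes in the hierarchy: the basis of
    the distance-aware evaluation metrics described above."""
    pa, pb = ancestors(a, parent), ancestors(b, parent)
    common = set(pa) & set(pb)
    da = next(i for i, c in enumerate(pa) if c in common)
    db = next(i for i, c in enumerate(pb) if c in common)
    return da + db

# toy fragment of the tree: 250 -> 250.1 -> {250.11, 250.13}
parent = {"250.1": "250", "250.11": "250.1", "250.13": "250.1"}
```

    Under such a metric, predicting 250.11 when the gold code is 250.13 (distance 2, same sub-tree) is penalized less than a prediction from an unrelated chapter, which is how the hierarchy-based classifier's advantage shows up.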

  7. Generation of Java code from Alvis model

    Science.gov (United States)

    Matyasik, Piotr; Szpyrka, Marcin; Wypych, Michał

    2015-12-01

    Alvis is a formal language that combines graphical modelling of interconnections between system entities (called agents) with a high-level programming language to describe the behaviour of any individual agent. An Alvis model can be verified formally with model checking techniques applied to the model's LTS graph, which represents the model state space. This paper presents the transformation of an Alvis model into executable Java code. Thus, the approach provides a method for automatic generation of a Java application from a formally verified Alvis model.

  8. MC21 v.6.0 - A continuous-energy Monte Carlo particle transport code with integrated reactor feedback capabilities

    International Nuclear Information System (INIS)

    Griesheimer, D.P.; Gill, D.F.; Nease, B.R.; Carpenter, D.C.; Joo, H.; Millman, D.L.; Sutton, T.M.; Stedry, M.H.; Dobreff, P.S.; Trumbull, T.H.; Caro, E.

    2013-01-01

    MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional models. The code supports neutron and photon transport in fixed source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21 has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code also supports many in-line reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy neutron/nucleus interaction physics over the range from 10^-5 eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering, pair production, and photoelectric interactions. All of the nuclear data required by MC21 is provided by the NDEX system of codes, which extracts and processes data from EPDL-, ENDF-, and ACE-formatted source files. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create spatial cells from first- and second-order surfaces. The system also allows models to be built up as hierarchical collections of previously defined spatial cells, with interior detail provided by grids and template overlays. Results are collected by a generalized tally capability which allows users to edit integral flux and reaction rate information. 
Results can be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score each

  9. PetriCode: A Tool for Template-Based Code Generation from CPN Models

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge

    2014-01-01

    levels of abstraction. The elements of the models are annotated with code generation pragmatics enabling PetriCode to use a template-based approach to generate code while keeping the models uncluttered from implementation artefacts. PetriCode is the realization of our code generation approach which has...

  10. Frameworks for Assessing the Quality of Modeling and Simulation Capabilities

    Science.gov (United States)

    Rider, W. J.

    2012-12-01

    The importance of assuring quality in modeling and simulation has spawned several frameworks for structuring the examination of quality. The format and content of these frameworks provides an emphasis, completeness and flow to assessment activities. I will examine four frameworks that have been developed and describe how they can be improved and applied to a broader set of high consequence applications. Perhaps the first of these frameworks was known as CSAU [Boyack] (code scaling, applicability and uncertainty), used for nuclear reactor safety and endorsed by the United States Nuclear Regulatory Commission (USNRC). This framework was shaped by nuclear safety practice and the practical structure needed after the Three Mile Island accident. It incorporated the dominant experimental program, the dominant analysis approach, and concerns about the quality of modeling. The USNRC gave it the force of law, which made the nuclear industry take it seriously. After the cessation of nuclear weapons testing, the United States began a program of examining the reliability of these weapons without testing. This program utilizes science including theory, modeling, simulation and experimentation to replace the underground testing. The emphasis on modeling and simulation necessitated attention to the quality of these simulations. Sandia developed the PCMM (predictive capability maturity model) to structure this attention [Oberkampf]. PCMM divides simulation into six core activities to be examined and graded relative to the needs of the modeling activity. NASA [NASA] has built yet another framework in response to the tragedy of the space shuttle accidents. Finally, Ben-Haim and Hemez focus upon modeling robustness and predictive fidelity in another approach. These frameworks are similar, and applied in a similar fashion. The adoption of these frameworks at Sandia and NASA has been slow and arduous because the force of law has not assisted acceptance. All existing frameworks are

  11. Steam condensation modelling in aerosol codes

    International Nuclear Information System (INIS)

    Dunbar, I.H.

    1986-01-01

    The principal subject of this study is the modelling of the condensation of steam into and evaporation of water from aerosol particles. These processes introduce a new type of term into the equation for the development of the aerosol particle size distribution. This new term faces the code developer with three major problems: the physical modelling of the condensation/evaporation process, the discretisation of the new term and the separate accounting for the masses of the water and of the other components. This study has considered four codes which model the condensation of steam into and its evaporation from aerosol particles: AEROSYM-M (UK), AEROSOLS/B1 (France), NAUA (Federal Republic of Germany) and CONTAIN (USA). The modelling in the codes has been addressed under three headings. These are the physical modelling of condensation, the mathematics of the discretisation of the equations, and the methods for modelling the separate behaviour of different chemical components of the aerosol. The codes are least advanced in the area of solute-effect modelling. At present only AEROSOLS/B1 includes the effect. The effect is greater for more concentrated solutions. Codes without the effect will be more in error (underestimating the total airborne mass) the less condensation they predict. Data are needed on the water vapour pressure above concentrated solutions of the substances of interest (especially CsOH and CsI) if the extent to which aerosols retain water under superheated conditions is to be modelled. 15 refs
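    The solute effect mentioned above is commonly modelled with a Köhler-type expression, in which a curvature (Kelvin) term raises and a solute term lowers the equilibrium vapour pressure over a droplet. The sketch below is a generic illustration with placeholder constants, not values for CsOH or CsI, and not necessarily the AEROSOLS/B1 formulation:

```python
from math import exp

def equilibrium_saturation(r, a_kelvin=1.2e-9, b_solute=1.0e-26):
    """Kohler-type equilibrium saturation ratio over a solution droplet of
    radius r (metres): exp(A/r - B/r**3). The curvature term A/r raises,
    and the solute term B/r**3 lowers, the equilibrium vapour pressure.
    A and B are placeholder constants chosen only for illustration."""
    return exp(a_kelvin / r - b_solute / r ** 3)
```

    A small (concentrated) droplet is thus in equilibrium below 100% relative humidity, which is why codes that omit the effect underestimate the airborne water mass under superheated conditions, as the abstract notes.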

  12. Capabilities and accuracy of energy modelling software

    CSIR Research Space (South Africa)

    Osburn, L

    2010-11-01

    Full Text Available Energy modelling can be used in a number of different ways to fulfill different needs, including certification within building regulations or green building rating tools. Energy modelling can also be used in order to try and predict what the energy...

  13. Economic aspects and models for building codes

    DEFF Research Database (Denmark)

    Bonke, Jens; Pedersen, Dan Ove; Johnsen, Kjeld

    It is the purpose of this bulletin to present an economic model for estimating the consequences of new or changed building codes. The object is to allow comparative analysis in order to improve the basis for decisions in this field. The model is applied in a case study.

  14. Modeling report of DYMOND code (DUPIC version)

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo Hwan [KAERI, Taejon (Korea, Republic of); Yacout, Abdellatif M. [Argonne National Laboratory, Ilinois (United States)

    2003-04-01

    The DYMOND code employs the ITHINK dynamic modeling platform to assess 100-year dynamic evolution scenarios for postulated global nuclear energy parks. DYMOND was originally developed by ANL (Argonne National Laboratory) to perform fuel cycle analyses of LWR once-through and mixed LWR-FBR plants. Since extensive application of the DYMOND code has been requested, the first version has been modified to accommodate the DUPIC, MSR and RTF fuel cycles. The DYMOND code is composed of three parts (the source language platform, input supply and output), but these platforms are not clearly distinguished. This report describes all the equations modeled in the modified DYMOND code (called the DYMOND-DUPIC version). It is divided into five parts: Part A covers the reactor history model, which includes the amounts of requested fuel and spent fuel. Part B describes the fuel cycle model, covering fuel flow from the beginning to the end of the fuel cycle. Part C covers the reprocessing model, which includes the recovery of burned uranium, plutonium, minor actinides and fission products, as well as the amount of spent fuel in storage and disposal. Part D covers other fuel cycles, in particular the thorium fuel cycle for the MSR and RTF reactors. Part E covers the economics model, which gives cost information such as uranium mining cost, reactor operating cost, fuel cost, etc.

  15. MARS CODE MANUAL VOLUME V: Models and Correlations

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Bae, Sung Won; Lee, Seung Wook; Yoon, Churl; Hwang, Moon Kyu; Kim, Kyung Doo; Jeong, Jae Jun

    2010-02-01

    The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal-hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF codes. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-Of-State (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This models and correlations manual provides a complete list of detailed information on the thermal-hydraulic models used in MARS, so this report should be very useful for code users. The overall structure of the manual is modeled on that of the RELAP5 manual, and as such the layout is very similar to that of RELAP5. This similarity to the RELAP5 input is intentional, as the input scheme allows minimum modification between the inputs of RELAP5 and MARS 3.1. The MARS 3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible.

  16. Predictive capabilities of various constitutive models for arterial tissue.

    Science.gov (United States)

    Schroeder, Florian; Polzer, Stanislav; Slažanský, Martin; Man, Vojtěch; Skácel, Pavel

    2018-02-01

    The aim of this study is to validate several constitutive models by assessing their capabilities in describing and predicting the uniaxial and biaxial behavior of porcine aortic tissue. 14 samples from porcine aortas were used to perform 2 uniaxial and 5 biaxial tensile tests. Transversal strains were furthermore recorded for the uniaxial data. The experimental data were fitted by four constitutive models: the Holzapfel-Gasser-Ogden model (HGO), a model based on the generalized structure tensor (GST), the Four-Fiber-Family model (FFF) and the Microfiber model. Fitting was performed to the uniaxial and biaxial data sets separately and the descriptive capabilities of the models were compared. Their predictive capabilities were assessed in two ways. First, each model was fitted to the biaxial data and its accuracy (in terms of R² and NRMSE) in predicting both uniaxial responses was evaluated. Then this procedure was performed conversely: each model was fitted to both uniaxial tests and its accuracy in predicting the 5 biaxial responses was observed. The descriptive capabilities of all models were excellent. In predicting the uniaxial response from biaxial data, the Microfiber model was the most accurate, while the other models also showed reasonable accuracy. The Microfiber and FFF models were capable of reasonably predicting biaxial responses from uniaxial data, while the HGO and GST models failed completely in this task. The HGO and GST models are not capable of predicting biaxial arterial wall behavior, while the FFF model is the most robust of the investigated constitutive models. Knowledge of transversal strains in uniaxial tests improves the robustness of constitutive models. Copyright © 2017 Elsevier Ltd. All rights reserved.
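    The R-squared and NRMSE measures used above to score fit quality can be stated compactly; a minimal sketch follows, noting that normalization of the RMSE by the data range is one common convention and may differ from the paper's:

```python
def r_squared(y, yhat):
    """Coefficient of determination R^2 of a fit: 1 - SS_res / SS_tot."""
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

def nrmse(y, yhat):
    """Root-mean-square error normalized by the observed data range."""
    mse = sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)
    return mse ** 0.5 / (max(y) - min(y))
```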

  17. Facility Modeling Capability Demonstration Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Key, Brian P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sadasivan, Pratap [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Fallgren, Andrew James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Demuth, Scott Francis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Aleman, Sebastian E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); de Almeida, Valmor F. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Chiswell, Steven R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hamm, Larry [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Tingey, Joel M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-02-01

    A joint effort has been initiated by Los Alamos National Laboratory (LANL), Oak Ridge National Laboratory (ORNL), Savannah River National Laboratory (SRNL), and Pacific Northwest National Laboratory (PNNL), sponsored by the National Nuclear Security Administration’s (NNSA’s) Office of Proliferation Detection, to develop and validate a flexible framework for simulating effluents and emissions from spent fuel reprocessing facilities. These effluents and emissions can be measured by various on-site and/or off-site means, and the inverse problem can then ideally be solved through modeling and simulation to estimate characteristics of facility operation such as the nuclear material production rate. The flexible framework, called the Facility Modeling Toolkit, focused on forward modeling of PUREX reprocessing facility operating conditions from fuel storage and chopping through effluent and emission measurements.

  18. Computable general equilibrium model fiscal year 2014 capability development report

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Laboratory; Boero, Riccardo [Los Alamos National Laboratory

    2016-05-11

    This report provides an overview of the development of the NISAC CGE economic modeling capability since 2012. This capability enhances NISAC's economic modeling and analysis capabilities, allowing it to answer a broader set of questions than was possible with the previous economic analysis capability. In particular, CGE modeling captures how the different sectors of the economy (households, businesses, government, etc.) interact to allocate resources, and it retains these interactions when used to estimate the economic impacts of the kinds of events NISAC often analyzes.

  19. Plaspp: A New X-Ray Postprocessing Capability for ASCI Codes

    International Nuclear Information System (INIS)

    Pollak, Gregory

    2003-01-01

    This report announces the availability of the beta version of a (partly) new code, Plaspp (Plasma Postprocessor). This code postprocesses (graphics) dumps from at least two ASCI code suites: the Crestone Project and the Shavano Project. The basic structure of the code follows that of TDG, the equivalent postprocessor code for LASNEX. In addition to some new commands, the basic differences between TDG and Plaspp are the following: Plaspp uses a graphics dump instead of the unique TDG dump, it handles the unstructured meshes that the ASCI codes produce, and it can use its own multigroup opacity data. Because of the dump format, this code should be usable by any code that produces Cartesian, cylindrical, or spherical graphics formats. This report details the new commands; the required information to be placed on the dumps; some new commands and edits that are applicable to TDG as well but have not been documented elsewhere; and general information about execution on the open and secure networks.

  20. A MATLAB based 3D modeling and inversion code for MT data

    Science.gov (United States)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

    The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: a grid generator code and a modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs the core computations in modular form: forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems such as Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementing new applications and inversion algorithms. The use of MATLAB and its libraries makes it compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities, the results of inversion for two complex models are presented.
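To illustrate the style of forward computation at the core of MT modeling codes, here is the standard 1D impedance recursion for a layered half-space (a generic sketch, not part of AP3DMT; the layer values in the sanity check are arbitrary):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, SI

def mt1d(resistivities, thicknesses, freq):
    """Apparent resistivity (ohm-m) and impedance phase (degrees) at the
    surface of a 1D layered earth; len(thicknesses) == len(resistivities)-1."""
    omega = 2.0 * np.pi * freq
    k = np.sqrt(1j * omega * MU0 / np.asarray(resistivities, dtype=complex))
    Z = 1j * omega * MU0 / k[-1]              # basement half-space impedance
    for j in range(len(thicknesses) - 1, -1, -1):
        Zj = 1j * omega * MU0 / k[j]          # intrinsic impedance of layer j
        t = np.tanh(k[j] * thicknesses[j])
        Z = Zj * (Z + Zj * t) / (Zj + Z * t)  # recurse upward through layer j
    rho_a = abs(Z) ** 2 / (omega * MU0)
    phase = np.degrees(np.angle(Z))
    return rho_a, phase
```

A homogeneous half-space is the usual sanity check: the apparent resistivity equals the true resistivity at every frequency and the phase is 45 degrees.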

  1. A Thermo-Optic Propagation Modeling Capability.

    Energy Technology Data Exchange (ETDEWEB)

    Schrader, Karl; Akau, Ron

    2014-10-01

    A new theoretical basis is derived for tracing optical rays within a finite-element (FE) volume. The ray-trajectory equations are cast into the local element coordinate frame, and the full finite-element interpolation is used to determine the instantaneous index gradient for the ray-path integral equation. The FE methodology (FEM) is also used to interpolate local surface deformations and the surface normal vector for computing the refraction angle when launching rays into the volume, and again when rays exit the medium. The method is implemented in the Matlab(TM) environment and compared to closed-form gradient-index models. A software architecture is also developed for implementing the algorithms in the Zemax(TM) commercial ray-trace application. A controlled thermal environment was constructed in the laboratory, and measured data were collected to validate the structural, thermal, and optical modeling methods.
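On the closed-form gradient-index comparison mentioned: for a parabolic profile n(y) ≈ n0(1 − A·y²/2), the paraxial ray equation reduces to y″ = −A·y, with closed-form solution y(z) = y0·cos(√A·z). A small RK4 integrator (an illustrative stand-in for the report's finite-element formulation, not code from it) reproduces that solution:

```python
import numpy as np

def trace_paraxial_grin(y0, slope0, A, z_end, n_steps=1000):
    """Integrate the paraxial ray equation y'' = -A*y (parabolic-index
    medium) with classic fourth-order Runge-Kutta; returns (y, y') at z_end."""
    h = z_end / n_steps
    state = np.array([y0, slope0], dtype=float)

    def deriv(s):
        return np.array([s[1], -A * s[0]])

    for _ in range(n_steps):
        k1 = deriv(state)
        k2 = deriv(state + 0.5 * h * k1)
        k3 = deriv(state + 0.5 * h * k2)
        k4 = deriv(state + h * k3)
        state = state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state
```

With A = 1 the ray is y(z) = cos(z), so after z = π it returns to height −1 with zero slope, matching the closed form.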

  2. Development of SFR Primary System Simulation Capability for Advanced System Codes

    Energy Technology Data Exchange (ETDEWEB)

    Hu, R. [Argonne National Lab. (ANL), Argonne, IL (United States); Thomas, J. W. [Argonne National Lab. (ANL), Argonne, IL (United States); Munkhzul, E. [Argonne National Lab. (ANL), Argonne, IL (United States); Fanning, T. H. [Argonne National Lab. (ANL), Argonne, IL (United States); Zhang, H. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Martineau, R. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-01-14

    Under the Reactor Product Line (RPL) of DOE/NE’s Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, an SFR System Analysis Module is being developed at Argonne National Laboratory for whole-plant safety analysis. This tool will simulate tightly coupled physical phenomena – including nuclear fission, heat transfer, fluid dynamics, and thermal-mechanical response – in SFR structures, systems, and components. It is based on the MOOSE (Multi-physics Object-Oriented Simulation Environment) framework, which relies upon open-source libraries such as libMesh and PETSc for mesh generation, finite element analysis, and numerical solutions. This development is coordinated with the development of RELAP-7, an advanced safety analysis tool for light-water reactors developed at Idaho National Laboratory. The SFR Module aims to model and simulate SFR systems with much higher fidelity and with well-defined and validated prediction capabilities. It will provide a fast-running, modest-fidelity, whole-plant transient analysis capability, which is essential for fast-turnaround design scoping and engineering analyses.

  3. Hydrogen recycle modeling in transport codes

    International Nuclear Information System (INIS)

    Howe, H.C.

    1979-01-01

    The hydrogen recycling models now used in Tokamak transport codes are reviewed and the method by which realistic recycling models are being added is discussed. Present models use arbitrary recycle coefficients and therefore do not model the actual recycling processes at the wall. A model for the hydrogen concentration in the wall serves two purposes: (1) it allows a better understanding of the density behavior in present gas puff, pellet, and neutral beam heating experiments; and (2) it allows one to extrapolate to long pulse devices such as EBT, ISX-C and reactors where the walls are observed or expected to saturate. Several wall models are presently being studied for inclusion in transport codes

  4. Science version 2: the most recent capabilities of the Framatome 3-D nuclear code package

    International Nuclear Information System (INIS)

    Girieud, P.; Daudin, L.; Garat, C.; Marotte, P.; Tarle, S.

    2001-01-01

    The Framatome nuclear code package SCIENCE, developed in the 1990s, has been fully operational for nuclear design since 1997. Results obtained using the package demonstrate the high accuracy of its physical models. Nevertheless, since the first release of the SCIENCE package, continuous improvement work has been carried out at Framatome, leading today to Version 2 of the package. The intensive use of the package by Framatome teams, for example while performing reload calculations and the associated core follow, provides a permanent opportunity to detect any trend or scatter in the results, however small. Thus the main objective of the improvements was to take advantage of progress in computer performance by using more sophisticated calculation schemes leading to more accurate results. Besides the implementation of more accurate physical models, SCIENCE Version 2 also exploits developments conducted in other fields, mainly for transient calculations using 3-D kinetics or coupling with open-channel core thermal-hydraulics and the plant simulator. These developments allow Framatome to perform accident analyses with advanced methodologies using the SCIENCE package. (author)

  5. Chemistry models in the Victoria code

    International Nuclear Information System (INIS)

    Grimley, A.J. III

    1988-01-01

    The VICTORIA computer code consists of the fission product release and chemistry models for the MELPROG severe accident analysis code. The chemistry models in VICTORIA are used to treat multi-phase interactions in four separate physical regions: fuel grains, gap/open porosity/clad, coolant/aerosols, and structure surfaces. The physical and chemical environment of each region is very different from the others, and different models are required for each. The common thread in the modelling is the use of a chemical equilibrium assumption. The validity of this assumption, along with a description of the various physical constraints applicable to each region, will be discussed. The models that result from the assumptions and constraints will be presented along with samples of calculations in each region

  6. Model comparisons of the reactive burn model SURF in three ASC codes

    Energy Technology Data Exchange (ETDEWEB)

    Whitley, Von Howard [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Stalsberg, Krista Lynn [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Reichelt, Benjamin Lee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shipley, Sarah Jayne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-12

    A study of the SURF reactive burn model was performed in FLAG, PAGOSA and XRAGE. In this study, three different shock-to-detonation transition experiments were modeled in each code. All three codes produced similar model results for all the experiments modeled and at all resolutions. Buildup-to-detonation time, particle velocities, and the resolution dependence of the models were notably similar between the codes. Given the current PBX 9502 equations of state and SURF calibrations, each code is equally capable of predicting the correct detonation time and distance when impacted by a 1D impactor at pressures ranging from 10-16 GPa, as long as the mesh resolution is not too coarse.

  7. Hybrid Modeling Capability for Aircraft Electrical Propulsion Systems, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — PC Krause and Associates is partnering with Purdue University, EleQuant, and GridQuant to create a hybrid modeling capability. The combination of PCKA's extensive...

  8. Qualification and application of nuclear reactor accident analysis code with the capability of internal assessment of uncertainty

    International Nuclear Information System (INIS)

    Borges, Ronaldo Celem

    2001-10-01

    This thesis presents an independent qualification of the CIAU code ('Code with the capability of Internal Assessment of Uncertainty'), which is part of the process of internal uncertainty evaluation with a thermal-hydraulic system code on a realistic basis. This is done by combining the uncertainty methodology UMAE ('Uncertainty Methodology based on Accuracy Extrapolation') with the RELAP5/Mod3.2 code. This allows uncertainty band estimates to be associated with the results obtained by the realistic calculation of the code, meeting licensing requirements for safety analysis. The independent qualification is supported by simulations with RELAP5/Mod3.2 of accident condition tests at the LOBI experimental facility and of an event which occurred in the Angra 1 nuclear power plant, by comparison with measured results, and by establishing uncertainty bands on calculated time trends of safety parameters. These bands have indeed enveloped the measured trends. Results from this independent qualification of CIAU have confirmed the adequate application of a systematic realistic code procedure to analyse accidents with uncertainties incorporated in the results, although there is an evident need to extend the uncertainty database. It has been verified that use of the code with this internal assessment of uncertainty is feasible in the design and licensing stages of an NPP. (author)

  9. Verification and Validation of Heat Transfer Model of AGREE Code

    Energy Technology Data Exchange (ETDEWEB)

    Tak, N. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Seker, V.; Drzewiecki, T. J.; Downar, T. J. [Department of Nuclear Engineering and Radiological Sciences, Univ. of Michigan, Michigan (United States); Kelly, J. M. [US Nuclear Regulatory Commission, Washington (United States)

    2013-05-15

    The AGREE code was originally developed as a multi-physics simulation code to perform design and safety analysis of Pebble Bed Reactors (PBR). Currently, additional capability for the analysis of Prismatic Modular Reactor (PMR) cores is in progress. The newly implemented fluid model for a PMR core is based on a subchannel approach, which has been widely used in the analysis of light water reactor (LWR) cores. A hexagonal fuel (or graphite) block is discretized into triangular prism nodes having effective conductivities. Then, a meso-scale heat transfer model is applied to the unit cell geometry of a prismatic fuel block. Both unit cell geometries of the multi-hole and pin-in-hole types of prismatic fuel blocks are considered in AGREE. The main objective of this work is to verify and validate the heat transfer model newly implemented for a PMR core in the AGREE code. The measured data from the HENDEL experiment were used for the validation of the heat transfer model for a pin-in-hole fuel block. However, the HENDEL tests were limited to steady-state conditions of pin-in-hole fuel blocks, and no experimental data are available regarding heat transfer in multi-hole fuel blocks. Therefore, numerical benchmarks using conceptual problems are considered to verify the heat transfer model of AGREE for multi-hole fuel blocks as well as transient conditions. The CORONA and GAMMA+ codes were used to compare the numerical results. In this work, the verification and validation study was performed for the heat transfer model of the AGREE code using the HENDEL experiment and numerical benchmarks of selected conceptual problems. The results of the present work show that the heat transfer model of AGREE is accurate and reliable for prismatic fuel blocks. Further validation of AGREE is in progress for a whole-reactor problem using HTTR safety test data such as control rod withdrawal tests and loss-of-forced-convection tests.

  10. Modeling peripheral olfactory coding in Drosophila larvae.

    Directory of Open Access Journals (Sweden)

    Derek J Hoare

    Full Text Available The Drosophila larva possesses just 21 unique and identifiable pairs of olfactory sensory neurons (OSNs), enabling investigation of the contribution of individual OSN classes to the peripheral olfactory code. We combined electrophysiological and computational modeling to explore the nature of the peripheral olfactory code in situ. We recorded firing responses of 19/21 OSNs to a panel of 19 odors. This was achieved by creating larvae expressing just one functioning class of odorant receptor, and hence OSN. Odor response profiles of each OSN class were highly specific and unique. However, many OSN-odor pairs yielded variable responses, some of which were statistically indistinguishable from background activity. We used these electrophysiological data, incorporating both responses and spontaneous firing activity, to develop a Bayesian decoding model of olfactory processing. The model was able to accurately predict odor identity from raw OSN responses; prediction accuracy ranged from 12%-77% (mean for all odors 45.2%) but was always significantly above chance (5.6%). However, there was no correlation between prediction accuracy for a given odor and the strength of responses of wild-type larvae to the same odor in a behavioral assay. We also used the model to predict the ability of the code to discriminate between pairs of odors. Some of these predictions were supported in a behavioral discrimination (masking) assay but others were not. We conclude that our model of the peripheral code represents basic features of odor detection and discrimination, yielding insights into the information available to higher processing structures in the brain.
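The decoding idea described — predict odor identity as the stimulus that maximizes the likelihood of the observed spike counts — can be sketched with an independent-Poisson (naive Bayes) model. The rates, trial counts, and independence assumption below are synthetic stand-ins, not the authors' recorded data or fitted model:

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(1)
n_odors, n_osns = 5, 19                      # 19 recorded OSN classes
rates = rng.uniform(1.0, 20.0, size=(n_odors, n_osns))  # mean counts/trial

def decode(counts, rates):
    """Maximum-likelihood odor identity assuming independent Poisson firing."""
    loglik = np.array([
        sum(k * np.log(r) - r - lgamma(k + 1.0) for k, r in zip(counts, row))
        for row in rates
    ])
    return int(np.argmax(loglik))

# simulate trials and measure decoding accuracy (chance here is 1/5 = 20%)
n_trials = 200
hits = 0
for _ in range(n_trials):
    odor = rng.integers(n_odors)
    counts = rng.poisson(rates[odor])
    hits += decode(counts, rates) == odor
accuracy = hits / n_trials
```

With well-separated synthetic rates the decoder is far above chance; the interesting regime in the paper is precisely where responses are variable and overlap background activity.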

  11. Advanced capabilities for materials modelling with Quantum ESPRESSO

    Science.gov (United States)

    Giannozzi, P.; Andreussi, O.; Brumme, T.; Bunau, O.; Buongiorno Nardelli, M.; Calandra, M.; Car, R.; Cavazzoni, C.; Ceresoli, D.; Cococcioni, M.; Colonna, N.; Carnimeo, I.; Dal Corso, A.; de Gironcoli, S.; Delugas, P.; DiStasio, R. A., Jr.; Ferretti, A.; Floris, A.; Fratesi, G.; Fugallo, G.; Gebauer, R.; Gerstmann, U.; Giustino, F.; Gorni, T.; Jia, J.; Kawamura, M.; Ko, H.-Y.; Kokalj, A.; Küçükbenli, E.; Lazzeri, M.; Marsili, M.; Marzari, N.; Mauri, F.; Nguyen, N. L.; Nguyen, H.-V.; Otero-de-la-Roza, A.; Paulatto, L.; Poncé, S.; Rocca, D.; Sabatini, R.; Santra, B.; Schlipf, M.; Seitsonen, A. P.; Smogunov, A.; Timrov, I.; Thonhauser, T.; Umari, P.; Vast, N.; Wu, X.; Baroni, S.

    2017-11-01

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows one to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.

  12. Capabilities needed for the next generation of thermo-hydraulic codes for use in real time applications

    Energy Technology Data Exchange (ETDEWEB)

    Arndt, S.A.

    1997-07-01

    The real-time reactor simulation field is currently at a crossroads in terms of the capability to perform real-time analysis using the most sophisticated computer codes. Current-generation safety analysis codes are being modified to replace simplified codes that were specifically designed to meet the competing requirements of real-time applications. The next generation of thermo-hydraulic codes will need to include in their specifications an explicit requirement for use in a real-time environment. Use of the codes in real-time applications imposes much stricter requirements on robustness, reliability and repeatability than do design and analysis applications. In addition, the need for code use by a variety of users is a critical issue for real-time users, trainers and emergency planners who currently use real-time simulation, and for PRA practitioners who will increasingly use real-time simulation to evaluate PRA success criteria in near real-time to validate PRA results for specific configurations and plant system unavailabilities.

  13. Capabilities needed for the next generation of thermo-hydraulic codes for use in real time applications

    International Nuclear Information System (INIS)

    Arndt, S.A.

    1997-01-01

    The real-time reactor simulation field is currently at a crossroads in terms of the capability to perform real-time analysis using the most sophisticated computer codes. Current-generation safety analysis codes are being modified to replace simplified codes that were specifically designed to meet the competing requirements of real-time applications. The next generation of thermo-hydraulic codes will need to include in their specifications an explicit requirement for use in a real-time environment. Use of the codes in real-time applications imposes much stricter requirements on robustness, reliability and repeatability than do design and analysis applications. In addition, the need for code use by a variety of users is a critical issue for real-time users, trainers and emergency planners who currently use real-time simulation, and for PRA practitioners who will increasingly use real-time simulation to evaluate PRA success criteria in near real-time to validate PRA results for specific configurations and plant system unavailabilities

  14. Dynamic alignment models for neural coding.

    Directory of Open Access Journals (Sweden)

    Sepp Kollmorgen

    2014-03-01

    Full Text Available Recently, there have been remarkable advances in modeling the relationships between the sensory environment, neuronal responses, and behavior. However, most models cannot encompass variable stimulus-response relationships such as varying response latencies and state or context dependence of the neural code. Here, we consider response modeling as a dynamic alignment problem and model stimulus and response jointly by a mixed pair hidden Markov model (MPH). In MPHs, multiple stimulus-response relationships (e.g., receptive fields) are represented by different states or groups of states in a Markov chain. Each stimulus-response relationship features temporal flexibility, allowing modeling of variable response latencies, including noisy ones. We derive algorithms for learning of MPH parameters and for inference of spike response probabilities. We show that some linear-nonlinear Poisson cascade (LNP) models are a special case of MPHs. We demonstrate the efficiency and usefulness of MPHs in simulations of both jittered and switching spike responses to white noise and natural stimuli. Furthermore, we apply MPHs to extracellular single and multi-unit data recorded in cortical brain areas of singing birds to showcase a novel method for estimating response lag distributions. MPHs allow simultaneous estimation of receptive fields, latency statistics, and hidden state dynamics and so can help to uncover complex stimulus response relationships that are subject to variable timing and involve diverse neural codes.
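The MPH extends standard hidden-Markov machinery, whose core likelihood computation is the forward pass. A generic sketch with discrete emissions (illustrative only, not the authors' mixed-pair formulation):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm.
    pi: initial state probs (S,); A: transition matrix (S, S);
    B: emission probs (S, K) for K observation symbols."""
    alpha = pi * B[:, obs[0]]            # forward variable at t = 0
    scale = alpha.sum()
    loglik = np.log(scale)
    alpha /= scale                       # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate, then weight by emission
        scale = alpha.sum()
        loglik += np.log(scale)
        alpha /= scale
    return loglik
```

For a single-state chain the recursion collapses to summing log emission probabilities, which makes a convenient correctness check.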

  15. Dynamic alignment models for neural coding.

    Science.gov (United States)

    Kollmorgen, Sepp; Hahnloser, Richard H R

    2014-03-01

    Recently, there have been remarkable advances in modeling the relationships between the sensory environment, neuronal responses, and behavior. However, most models cannot encompass variable stimulus-response relationships such as varying response latencies and state or context dependence of the neural code. Here, we consider response modeling as a dynamic alignment problem and model stimulus and response jointly by a mixed pair hidden Markov model (MPH). In MPHs, multiple stimulus-response relationships (e.g., receptive fields) are represented by different states or groups of states in a Markov chain. Each stimulus-response relationship features temporal flexibility, allowing modeling of variable response latencies, including noisy ones. We derive algorithms for learning of MPH parameters and for inference of spike response probabilities. We show that some linear-nonlinear Poisson cascade (LNP) models are a special case of MPHs. We demonstrate the efficiency and usefulness of MPHs in simulations of both jittered and switching spike responses to white noise and natural stimuli. Furthermore, we apply MPHs to extracellular single and multi-unit data recorded in cortical brain areas of singing birds to showcase a novel method for estimating response lag distributions. MPHs allow simultaneous estimation of receptive fields, latency statistics, and hidden state dynamics and so can help to uncover complex stimulus response relationships that are subject to variable timing and involve diverse neural codes.

  16. The discrete-dipole-approximation code ADDA: capabilities and known limitations

    NARCIS (Netherlands)

    Yurkin, M.A.; Hoekstra, A.G.

    2011-01-01

    The open-source code ADDA is described, which implements the discrete dipole approximation (DDA), a method to simulate light scattering by finite 3D objects of arbitrary shape and composition. Besides standard sequential execution, ADDA can run on a multiprocessor distributed-memory system,

  17. Capability Maturity Model (CMM) for Software Process Improvements

    Science.gov (United States)

    Ling, Robert Y.

    2000-01-01

    This slide presentation reviews the Avionic Systems Division's implementation of the Capability Maturity Model (CMM) for improvements in the software development process. The presentation reviews the process involved in implementing the model and the benefits of using CMM to improve the software development process.

  18. The MESORAD dose assessment model: Computer code

    International Nuclear Information System (INIS)

    Ramsdell, J.V.; Athey, G.F.; Bander, T.J.; Scherpelz, R.I.

    1988-10-01

    MESORAD is a dose equivalent model for emergency response applications that is designed to be run on minicomputers. It has been developed by the Pacific Northwest Laboratory for use as part of the Intermediate Dose Assessment System in the US Nuclear Regulatory Commission Operations Center in Washington, DC, and the Emergency Management System in the US Department of Energy Unified Dose Assessment Center in Richland, Washington. This volume describes the MESORAD computer code and contains a listing of the code. The technical basis for MESORAD is described in the first volume of this report (Scherpelz et al. 1986). A third volume of the documentation is planned. That volume will contain utility programs and input and output files that can be used to check the implementation of MESORAD. 18 figs., 4 tabs

  19. The WARP Code: Modeling High Intensity Ion Beams

    Science.gov (United States)

    Grote, David P.; Friedman, Alex; Vay, Jean-Luc; Haber, Irving

    2005-03-01

    The Warp code, developed for heavy-ion driven inertial fusion energy studies, is used to model high intensity ion (and electron) beams. Significant capability has been incorporated in Warp, allowing nearly all sections of an accelerator to be modeled, beginning with the source. Warp has as its core an explicit, three-dimensional, particle-in-cell model. Alongside this is a rich set of tools for describing the applied fields of the accelerator lattice, and embedded conducting surfaces (which are captured at sub-grid resolution). Also incorporated are models with reduced dimensionality: an axisymmetric model and a transverse "slice" model. The code takes advantage of modern programming techniques, including object orientation, parallelism, and scripting (via Python). It is at the forefront in the use of the computational technique of adaptive mesh refinement, which has been particularly successful in the area of diode and injector modeling, both steady-state and time-dependent. In the presentation, some of the major aspects of Warp will be overviewed, especially those that could be useful in modeling ECR sources. Warp has been benchmarked against both theory and experiment. Recent results will be presented showing good agreement of Warp with experimental results from the STS500 injector test stand. Additional information can be found on the web page http://hif.lbl.gov/theory/WARP_summary.html.
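Two of the core particle-in-cell ingredients named above — linear ("cloud-in-cell") charge deposition and an explicit leapfrog push — can be sketched generically as follows (illustrative 1D routines, not Warp code; grid size, spacing, and charge values are arbitrary):

```python
import numpy as np

def deposit_cic(x, q, nx, dx):
    """Cloud-in-cell (linear-weighting) charge deposition on a periodic
    1D grid. Returns a charge density rho with sum(rho)*dx == sum(q)."""
    xg = x / dx
    i0 = np.floor(xg).astype(int)
    w = xg - i0                              # fractional distance to left node
    rho = np.zeros(nx)
    np.add.at(rho, i0 % nx, q * (1.0 - w) / dx)
    np.add.at(rho, (i0 + 1) % nx, q * w / dx)
    return rho

def push_leapfrog(x, v, e_at_x, qm, dt, length):
    """Explicit leapfrog: kick by the gathered field, then drift (periodic)."""
    v = v + qm * e_at_x * dt
    x = (x + v * dt) % length
    return x, v
```

Linear weighting conserves charge exactly, which the deposition routine makes easy to verify: summing rho times dx recovers the total particle charge.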

  20. The WARP Code: Modeling High Intensity Ion Beams

    International Nuclear Information System (INIS)

    Grote, David P.; Friedman, Alex; Vay, Jean-Luc; Haber, Irving

    2005-01-01

    The Warp code, developed for heavy-ion driven inertial fusion energy studies, is used to model high intensity ion (and electron) beams. Significant capability has been incorporated in Warp, allowing nearly all sections of an accelerator to be modeled, beginning with the source. Warp has as its core an explicit, three-dimensional, particle-in-cell model. Alongside this is a rich set of tools for describing the applied fields of the accelerator lattice, and embedded conducting surfaces (which are captured at sub-grid resolution). Also incorporated are models with reduced dimensionality: an axisymmetric model and a transverse "slice" model. The code takes advantage of modern programming techniques, including object orientation, parallelism, and scripting (via Python). It is at the forefront in the use of the computational technique of adaptive mesh refinement, which has been particularly successful in the area of diode and injector modeling, both steady-state and time-dependent. In the presentation, some of the major aspects of Warp will be overviewed, especially those that could be useful in modeling ECR sources. Warp has been benchmarked against both theory and experiment. Recent results will be presented showing good agreement of Warp with experimental results from the STS500 injector test stand

  1. Neural network modeling of a dolphin's sonar discrimination capabilities

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; René Rasmussen, A; Au, WWL

    1994-01-01

    The capability of an echo-locating dolphin to discriminate differences in the wall thickness of cylinders was previously modeled by a counterpropagation neural network using only spectral information of the echoes [W. W. L. Au, J. Acoust. Soc. Am. 95, 2728–2735 (1994)]. In this study, both time and frequency information were used to model the dolphin discrimination capabilities. Echoes from the same cylinders were digitized using a broadband simulated dolphin sonar signal with the transducer mounted on the dolphin's pen. The echoes were filtered by a bank of continuous constant-Q digital filters...

  2. Development of the coupled 'system thermal-hydraulics, 3D reactor kinetics, and hot channel' analysis capability of the MARS code

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, J. J.; Chung, B. D.; Lee, W.J

    2005-02-01

    The subchannel analysis capability of the MARS 3D module has been improved. In particular, the turbulent mixing and void drift models for flow mixing phenomena in rod bundles have been assessed using well-known rod bundle test data. The subchannel analysis feature was then combined with the existing coupled 'system Thermal-Hydraulics (T/H) and 3D reactor kinetics' calculation capability of MARS. Together these features provide a coupled 'system T/H, 3D reactor kinetics, and hot channel' analysis capability and, thus, realistic simulations of hot channel behavior as well as global system T/H behavior. In this report, the MARS code features for the coupled analysis capability are described first. The code modifications relevant to these features are also given. Then, a coupled analysis of a Main Steam Line Break (MSLB) is carried out for demonstration. The results of the coupled calculations are very reasonable and realistic, and show that these methods can be used to reduce the over-conservatism in conventional safety analysis.

  3. Implementation of a Model of Turbulence into a System Code, GAMMA+

    International Nuclear Information System (INIS)

    Kim, Hyeonil; Lim, Hong-Sik; No, Hee-Cheon

    2015-01-01

The Launder-Sharma model was selected as the best model for predicting heat transfer performance, offsetting the lack of accuracy of even recently updated empirical correlations, on the basis of both an extensive review of numerical analyses and a validation process. An application of the Launder-Sharma model to the system analysis code GAMMA+ for gas-cooled reactors is presented: 1) governing equations, discretization, and algebraic equations; 2) application results for GAMMA''T, an integrated GAMMA+ code incorporating CFD capability with low-Re resolution. The numerical foundation was formulated and implemented such that the capability of the LS model was incorporated into GAMMA+ on the same backbone of the ICE scheme on a staggered mesh, that is, the code structure and numerical schemes used in the original code. The GAMMA''T code, an integrated system code with low-Re CFD capability on board, was verified against an available set of data covering turbulent flow and turbulent forced convection. In addition, it gave solutions of the same predictive quality with far fewer meshes, which is a considerable advantage of the application within a system code
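For reference, the Launder-Sharma low-Re model damps the standard k-epsilon eddy viscosity with the wall-damping function f_mu = exp(-3.4 / (1 + Re_t/50)^2), where Re_t = k^2/(nu*eps) is the turbulence Reynolds number. A minimal sketch of the eddy-viscosity evaluation:

```python
import math

def launder_sharma_nut(k, eps, nu, c_mu=0.09):
    """Eddy viscosity nu_t = C_mu * f_mu * k^2/eps with Launder-Sharma damping.

    k   : turbulent kinetic energy (m^2/s^2)
    eps : dissipation rate (m^2/s^3)
    nu  : molecular kinematic viscosity (m^2/s)
    """
    re_t = k**2 / (nu * eps)                          # turbulence Reynolds number
    f_mu = math.exp(-3.4 / (1.0 + re_t / 50.0)**2)    # wall-damping function
    return c_mu * f_mu * k**2 / eps
```

Far from walls (large Re_t) f_mu tends to 1 and the standard k-epsilon value C_mu*k^2/eps is recovered; near walls (small Re_t) the viscosity is strongly damped, which is what lets the model resolve near-wall heat transfer.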

  4. The discrete-dipole-approximation code ADDA: Capabilities and known limitations

    International Nuclear Information System (INIS)

    Yurkin, Maxim A.; Hoekstra, Alfons G.

    2011-01-01

    The open-source code ADDA is described, which implements the discrete dipole approximation (DDA), a method to simulate light scattering by finite 3D objects of arbitrary shape and composition. Besides standard sequential execution, ADDA can run on a multiprocessor distributed-memory system, parallelizing a single DDA calculation. Hence the size parameter of the scatterer is in principle limited only by total available memory and computational speed. ADDA is written in C99 and is highly portable. It provides full control over the scattering geometry (particle morphology and orientation, and incident beam) and allows one to calculate a wide variety of integral and angle-resolved scattering quantities (cross sections, the Mueller matrix, etc.). Moreover, ADDA incorporates a range of state-of-the-art DDA improvements, aimed at increasing the accuracy and computational speed of the method. We discuss both physical and computational aspects of the DDA simulations and provide a practical introduction into performing such simulations with the ADDA code. We also present several simulation results, in particular, for a sphere with size parameter 320 (100-wavelength diameter) and refractive index 1.05.
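The coupled-dipole system at the heart of the DDA can be illustrated with a drastically simplified scalar (non-polarization) toy version: each dipole's polarization responds to the incident field plus the fields radiated by all other dipoles, giving a dense linear system. Real DDA codes, ADDA included, use the full 3x3 interaction tensor and FFT-accelerated iterative solvers rather than this dense direct solve; all names here are ours.

```python
import numpy as np

def scalar_dda_polarizations(points, alpha, k, e0=1.0):
    """Solve a scalar toy version of the coupled-dipole equations.

    points : (N, 3) dipole positions
    alpha  : scalar polarizability
    k      : wavenumber; the incident field is a plane wave exp(i k z)
    Returns (P, E_inc) where P solves  P_i/alpha - sum_{j!=i} g_ij P_j = E_inc,i."""
    n = len(points)
    e_inc = e0 * np.exp(1j * k * points[:, 2])        # plane wave along z
    a = np.zeros((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            if i == j:
                a[i, i] = 1.0 / alpha
            else:
                r = np.linalg.norm(points[i] - points[j])
                a[i, j] = -np.exp(1j * k * r) / r     # scalar Green's function
    return np.linalg.solve(a, e_inc), e_inc
```

For a single dipole the system collapses to P = alpha * E_inc, which is a handy sanity check; scattering quantities (cross sections, Mueller matrix) are then computed from the solved polarizations.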

  5. Experiences with the Capability Maturity Model in a research environment

    NARCIS (Netherlands)

    Velden, van der M.J.; Vreke, J.; Wal, van der B.; Symons, A.

    1996-01-01

    The project described here was aimed at evaluating the Capability Maturity Model (CMM) in the context of a research organization. Part of the evaluation was a standard CMM assessment. It was found that CMM could be applied to a research organization, although its five maturity levels were considered

  6. Gap Conductance model Validation in the TASS/SMR-S code using MARS code

    International Nuclear Information System (INIS)

    Ahn, Sang Jun; Yang, Soo Hyung; Chung, Young Jong; Lee, Won Jae

    2010-01-01

Korea Atomic Energy Research Institute (KAERI) has been developing the TASS/SMR-S (Transient and Setpoint Simulation/Small and Medium Reactor) code, a thermal-hydraulic code for the safety analysis of the advanced integral reactor. Work to validate the applicability of the thermal-hydraulic models within the code is required. Among these models, the gap conductance model, which describes the thermal conductance of the gap between fuel and cladding, was validated through comparison with the MARS code. The validation was performed by evaluating how the gap temperature and gap width vary with the power fraction. In this paper, a brief description of the gap conductance model in the TASS/SMR-S code is presented. In addition, results calculated to validate the gap conductance model are demonstrated by comparison with the results of the MARS code for the test case.
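Gap conductance models of this general kind combine gas conduction across the fuel-cladding gap with a radiative term, so the conductance rises sharply as the gap closes. The sketch below uses a generic form; the temperature-jump distance and emissivity values are assumptions for illustration, not the TASS/SMR-S correlation.

```python
def gap_conductance(gap_m, k_gas, t_fuel, t_clad, jump=1e-6, emissivity=0.8):
    """Gas-conduction plus radiation components of fuel-cladding gap conductance.

    gap_m  : radial gap width (m)
    k_gas  : gap gas thermal conductivity (W/m-K)
    t_fuel, t_clad : surface temperatures (K)
    Returns h_gap in W/m^2-K."""
    sigma = 5.670e-8                        # Stefan-Boltzmann constant, W/m^2-K^4
    h_gas = k_gas / (gap_m + jump)          # conduction across the gas film
    h_rad = emissivity * sigma * (t_fuel**2 + t_clad**2) * (t_fuel + t_clad)
    return h_gas + h_rad
```

Evaluating this over the gap widths predicted at each power fraction is essentially the comparison exercise the abstract describes.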

  7. A graph model for opportunistic network coding

    KAUST Repository

    Sorour, Sameh

    2015-08-12

    © 2015 IEEE. Recent advancements in graph-based analysis and solutions of instantly decodable network coding (IDNC) trigger the interest to extend them to more complicated opportunistic network coding (ONC) scenarios, with limited increase in complexity. In this paper, we design a simple IDNC-like graph model for a specific subclass of ONC, by introducing a more generalized definition of its vertices and the notion of vertex aggregation in order to represent the storage of non-instantly-decodable packets in ONC. Based on this representation, we determine the set of pairwise vertex adjacency conditions that can populate this graph with edges so as to guarantee decodability or aggregation for the vertices of each clique in this graph. We then develop the algorithmic procedures that can be applied on the designed graph model to optimize any performance metric for this ONC subclass. A case study on reducing the completion time shows that the proposed framework improves on the performance of IDNC and gets very close to the optimal performance.
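The baseline IDNC graph that this work generalizes can be sketched directly: one vertex per (receiver, wanted packet) pair, and an edge between two vertices whenever a single coded (XOR) transmission serves both instantly, i.e. the packets are identical or each receiver already holds the other's packet. The function and variable names below are ours.

```python
from itertools import combinations

def idnc_graph(wants, has):
    """Build the basic IDNC graph.

    wants[r] / has[r] : sets of packet ids receiver r still wants / already holds.
    Returns (vertices, edges); any clique in this graph corresponds to one
    instantly decodable coded transmission."""
    vertices = [(r, p) for r in wants for p in wants[r]]
    edges = set()
    for (r1, p1), (r2, p2) in combinations(vertices, 2):
        if r1 == r2:
            continue                      # one transmission serves a receiver once
        if p1 == p2 or (p1 in has[r2] and p2 in has[r1]):
            edges.add(((r1, p1), (r2, p2)))
    return vertices, edges
```

The paper's extension keeps this machinery but generalizes the vertex definition (vertex aggregation) so non-instantly-decodable packets stored for later can also be represented.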

  8. Modelling of LOCA Tests with the BISON Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, Richard L [Idaho National Laboratory; Pastore, Giovanni [Idaho National Laboratory; Novascone, Stephen Rhead [Idaho National Laboratory; Spencer, Benjamin Whiting [Idaho National Laboratory; Hales, Jason Dean [Idaho National Laboratory

    2016-05-01

    BISON is a modern finite-element based, multidimensional nuclear fuel performance code that is under development at Idaho National Laboratory (USA). Recent advances of BISON include the extension of the code to the analysis of LWR fuel rod behaviour during loss-of-coolant accidents (LOCAs). In this work, BISON models for the phenomena relevant to LWR cladding behaviour during LOCAs are described, followed by presentation of code results for the simulation of LOCA tests. Analysed experiments include separate effects tests of cladding ballooning and burst, as well as the Halden IFA-650.2 fuel rod test. Two-dimensional modelling of the experiments is performed, and calculations are compared to available experimental data. Comparisons include cladding burst pressure and temperature in separate effects tests, as well as the evolution of fuel rod inner pressure during ballooning and time to cladding burst. Furthermore, BISON three-dimensional simulations of separate effects tests are performed, which demonstrate the capability to reproduce the effect of azimuthal temperature variations in the cladding. The work has been carried out in the frame of the collaboration between Idaho National Laboratory and Halden Reactor Project, and the IAEA Coordinated Research Project FUMAC.
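The ballooning/burst phenomenology can be illustrated with a thin-wall hoop-stress check against a temperature-dependent burst-stress curve: as the cladding heats up its burst strength drops until the stress from the rod's inner overpressure exceeds it. The exponential fit and its coefficients below are purely illustrative stand-ins, not BISON's actual correlations.

```python
import math

def hoop_stress(p_in, p_out, radius, thickness):
    """Thin-wall hoop stress in the cladding (Pa)."""
    return (p_in - p_out) * radius / thickness

def burst_stress(temp_k, a=830e6, b=9.0e-3):
    """Illustrative exponential fit of burst stress vs temperature (assumed coefficients)."""
    return a * math.exp(-b * (temp_k - 600.0))

def bursts(p_in, p_out, radius, thickness, temp_k):
    """True when the hoop stress reaches the (temperature-dependent) burst stress."""
    return hoop_stress(p_in, p_out, radius, thickness) >= burst_stress(temp_k)
```

In a full code the geometry itself evolves (ballooning thins the wall and grows the radius), which feeds back into the hoop stress; the comparison with burst data in the abstract exercises exactly that coupling.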

  9. Conservation of concrete structures in fib Model Code 2010

    NARCIS (Netherlands)

    Matthews, S.L.; Ueda, T.; Bigaj-Van Vliet, A.J.

    2010-01-01

    Chapter 9: Conservation of Concrete Structures forms part of the forthcoming fib new Model Code. As it is expected that the fib new Model Code will be largely completed in 2010, it is being referred to as fib Model Code 2010 (fib MC2010) and it will soon become available for wider review by the

  10. Conservation of concrete structures according to fib Model Code 2010

    NARCIS (Netherlands)

    Matthews, S.; Bigaj-Van Vliet, A.; Ueda, T.

    2013-01-01

    Conservation of concrete structures forms an essential part of the fib Model Code for Concrete Structures 2010 (fib Model Code 2010). In particular, Chapter 9 of fib Model Code 2010 addresses issues concerning conservation strategies and tactics, conservation management, condition surveys, condition

  11. INTEGRATION OF FACILITY MODELING CAPABILITIES FOR NUCLEAR NONPROLIFERATION ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Gorensek, M.; Hamm, L.; Garcia, H.; Burr, T.; Coles, G.; Edmunds, T.; Garrett, A.; Krebs, J.; Kress, R.; Lamberti, V.; Schoenwald, D.; Tzanos, C.; Ward, R.

    2011-07-18

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.
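The inverse problem mentioned here, inferring facility conditions from collected observations, can be sketched in its simplest Bayesian form: score candidate operating modes against a noisy observable. This is a toy illustration with assumed names and a single scalar observable, not any of the cited tools.

```python
import math

def posterior_modes(observation, modes, sigma):
    """Posterior probability of each candidate operating mode.

    observation : one noisy scalar measurement (e.g., apparent thermal power)
    modes       : dict mapping mode name -> observable predicted by a facility model
    sigma       : measurement noise standard deviation
    Assumes Gaussian noise and a uniform prior over modes."""
    likelihoods = {m: math.exp(-0.5 * ((observation - pred) / sigma) ** 2)
                   for m, pred in modes.items()}
    z = sum(likelihoods.values())
    return {m: l / z for m, l in likelihoods.items()}
```

An integrated framework would replace the dict of predictions with runs of the facility models themselves and fuse many heterogeneous observables, but the inference step has this shape.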

  12. Integration Of Facility Modeling Capabilities For Nuclear Nonproliferation Analysis

    International Nuclear Information System (INIS)

    Gorensek, M.; Hamm, L.; Garcia, H.; Burr, T.; Coles, G.; Edmunds, T.; Garrett, A.; Krebs, J.; Kress, R.; Lamberti, V.; Schoenwald, D.; Tzanos, C.; Ward, R.

    2011-01-01

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  13. NGNP Data Management and Analysis System Modeling Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Cynthia D. Gentillon

    2009-09-01

    Projects for the very-high-temperature reactor (VHTR) program provide data in support of Nuclear Regulatory Commission licensing of the VHTR. Fuel and materials to be used in the reactor are tested and characterized to quantify performance in high temperature and high fluence environments. In addition, thermal-hydraulic experiments are conducted to validate codes used to assess reactor safety. The VHTR Program has established the NGNP Data Management and Analysis System (NDMAS) to ensure that VHTR data are (1) qualified for use, (2) stored in a readily accessible electronic form, and (3) analyzed to extract useful results. This document focuses on the third NDMAS objective. It describes capabilities for displaying the data in meaningful ways and identifying relationships among the measured quantities that contribute to their understanding.

  14. Experimental data bases useful for quantification of model uncertainties in best estimate codes

    International Nuclear Information System (INIS)

    Wilson, G.E.; Katsma, K.R.; Jacobson, J.L.; Boodry, K.S.

    1988-01-01

    A data base is necessary for assessment of thermal hydraulic codes within the context of the new NRC ECCS Rule. Separate effect tests examine particular phenomena that may be used to develop and/or verify models and constitutive relationships in the code. Integral tests are used to demonstrate the capability of codes to model global characteristics and sequence of events for real or hypothetical transients. The nuclear industry has developed a large experimental data base of fundamental nuclear, thermal-hydraulic phenomena for code validation. Given a particular scenario, and recognizing the scenario's important phenomena, selected information from this data base may be used to demonstrate applicability of a particular code to simulate the scenario and to determine code model uncertainties. LBLOCA experimental data bases useful to this objective are identified in this paper. 2 tabs

  15. Epitaxial growth of hetero-Ln-MOF hierarchical single crystals for domain- and orientation-controlled multicolor luminescence 3D coding capability

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Mei; Zhu, Yi-Xuan; Wu, Kai; Chen, Ling; Hou, Ya-Jun; Yin, Shao-Yun; Wang, Hai-Ping; Fan, Ya-Nan [MOE Laboratory of Bioinorganic and Synthetic Chemistry, Lehn Institute of Functional Materials, School of Chemistry, Sun Yat-Sen University, Guangzhou (China); Su, Cheng-Yong [MOE Laboratory of Bioinorganic and Synthetic Chemistry, Lehn Institute of Functional Materials, School of Chemistry, Sun Yat-Sen University, Guangzhou (China); State Key Laboratory of Applied Organic Chemistry, Lanzhou University, Lanzhou (China)

    2017-11-13

    Core-shell or striped heteroatomic lanthanide metal-organic framework hierarchical single crystals were obtained by liquid-phase anisotropic epitaxial growth, maintaining identical periodic organization while simultaneously exhibiting spatially segregated structure. Different types of domain and orientation-controlled multicolor photophysical models are presented, which show either visually distinguishable or visible/near infrared (NIR) emissive colors. This provides a new bottom-up strategy toward the design of hierarchical molecular systems, offering high-throughput and multiplexed luminescence color tunability and readability. The unique capability of combining spectroscopic coding with 3D (three-dimensional) microscale spatial coding is established, providing potential applications in anti-counterfeiting, color barcoding, and other types of integrated and miniaturized optoelectronic materials and devices. (copyright 2017 Wiley-VCH Verlag GmbH and Co. KGaA, Weinheim)

  16. Simulation and Modeling Capability for Standard Modular Hydropower Technology

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Kevin M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Smith, Brennan T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Witt, Adam M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); DeNeale, Scott T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bevelhimer, Mark S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Pries, Jason L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Burress, Timothy A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kao, Shih-Chieh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Mobley, Miles H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lee, Kyutae [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Curd, Shelaine L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Tsakiris, Achilleas [Univ. of Tennessee, Knoxville, TN (United States); Mooneyham, Christian [Univ. of Tennessee, Knoxville, TN (United States); Papanicolaou, Thanos [Univ. of Tennessee, Knoxville, TN (United States); Ekici, Kivanc [Univ. of Tennessee, Knoxville, TN (United States); Whisenant, Matthew J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Welch, Tim [US Department of Energy, Washington, DC (United States); Rabon, Daniel [US Department of Energy, Washington, DC (United States)

    2017-08-01

    Grounded in the stakeholder-validated framework established in Oak Ridge National Laboratory’s SMH Exemplary Design Envelope Specification, this report on Simulation and Modeling Capability for Standard Modular Hydropower (SMH) Technology provides insight into the concepts, use cases, needs, gaps, and challenges associated with modeling and simulating SMH technologies. The SMH concept envisions a network of generation, passage, and foundation modules that achieve environmentally compatible, cost-optimized hydropower using standardization and modularity. The development of standardized modeling approaches and simulation techniques for SMH (as described in this report) will pave the way for reliable, cost-effective methods for technology evaluation, optimization, and verification.

  17. Modeling experimental plasma diagnostics in the FLASH code: proton radiography

    Science.gov (United States)

    Flocke, Norbert; Weide, Klaus; Feister, Scott; Tzeferacos, Petros; Lamb, Donald

    2017-10-01

    Proton radiography is an important diagnostic tool for laser plasma experiments and for studying magnetized plasmas. We describe a new synthetic proton radiography diagnostic recently implemented into the FLASH code. FLASH is an open source, finite-volume Eulerian, spatially adaptive radiation hydrodynamics and magneto-hydrodynamics code that incorporates capabilities for a broad range of physical processes. Proton radiography is modeled through the use of the (relativistic) Lorentz force equation governing the motion of protons through 3D domains. Both instantaneous (one time step) and time-resolved (over many time steps) proton radiography can be simulated. The code module is also equipped with several different setup options (beam structure and detector screen placements) to reproduce a large variety of experimental proton radiography designs. FLASH's proton radiography diagnostic unit can be used either during runtime or in post-processing of simulation results. FLASH is publicly available at flash.uchicago.edu. U.S. DOE NNSA, U.S. DOE NNSA ASC, U.S. DOE Office of Science and NSF.
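Tracing protons through the simulated fields amounts to integrating the Lorentz force equation along each proton's path. A minimal non-relativistic Boris-scheme sketch is below; FLASH's diagnostic uses the relativistic form, and all names and values here are illustrative.

```python
import numpy as np

def boris_push(x, v, e_field, b_field, q, m, dt, steps):
    """Advance one charged particle through uniform E and B fields (Boris scheme).

    The Boris rotation conserves |v| exactly when E = 0, which makes it the
    standard choice for long particle-tracing integrations."""
    for _ in range(steps):
        v_minus = v + (q * dt / (2 * m)) * e_field        # half electric kick
        t = (q * dt / (2 * m)) * b_field
        s = 2 * t / (1 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)          # magnetic rotation
        v_plus = v_minus + np.cross(v_prime, s)
        v = v_plus + (q * dt / (2 * m)) * e_field         # second half kick
        x = x + v * dt
    return x, v
```

A synthetic radiograph is then built by binning where each traced proton lands on a virtual detector plane.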

  18. Dataset of coded handwriting features for use in statistical modelling

    Directory of Open Access Journals (Sweden)

    Anna Agius

    2018-02-01

    Full Text Available The data presented here is related to the article titled, “Using handwriting to infer a writer's country of origin for forensic intelligence purposes” (Agius et al., 2017 [1]. This article reports original writer, spatial and construction characteristic data for thirty-seven English Australian writers and thirty-seven Vietnamese writers. All of these characteristics were coded and recorded in Microsoft Excel 2013 (version 15.31. The construction characteristics coded were only extracted from seven characters, which were: ‘g’, ‘h’, ‘th’, ‘M’, ‘0’, ‘7’ and ‘9’. The coded format of the writer, spatial and construction characteristics is made available in this Data in Brief in order to allow others to perform statistical analyses and modelling to investigate whether there is a relationship between the handwriting features and the nationality of the writer, and whether the two nationalities can be differentiated. Furthermore, to employ mathematical techniques that are capable of characterising the extracted features from each participant.

  19. ER@CEBAF: Modeling code developments

    Energy Technology Data Exchange (ETDEWEB)

    Meot, F. [Brookhaven National Lab. (BNL), Upton, NY (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Roblin, Y. [Brookhaven National Lab. (BNL), Upton, NY (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)

    2016-04-13

A proposal for a multiple-pass, high-energy, energy-recovery experiment using CEBAF is under preparation in the frame of a JLab-BNL collaboration. In view of beam dynamics investigations for this project, a model of CEBAF is being developed in the stepwise ray-tracing code Zgoubi, in addition to the existing model in use in Elegant. Beyond the ER experiment, it is also planned to use the Zgoubi model for the study of polarization transport in the presence of synchrotron radiation, down to the Hall D line, where a 12 GeV polarized beam can be delivered. This note briefly reports on the preliminary steps and preliminary outcomes, based on an Elegant-to-Zgoubi translation.

  20. Development of a fourth generation predictive capability maturity model.

    Energy Technology Data Exchange (ETDEWEB)

    Hills, Richard Guy; Witkowski, Walter R.; Urbina, Angel; Rider, William J.; Trucano, Timothy Guy

    2013-09-01

The Predictive Capability Maturity Model (PCMM) is an expert elicitation tool designed to characterize and communicate the completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. The primary application of this tool at Sandia National Laboratories (SNL) has been for physics-based computational simulations in support of nuclear weapons applications. The two main goals of a PCMM evaluation are 1) the accurate and transparent communication of computational simulation capability and 2) the development of input for effective planning. As a result of the increasing importance of computational simulation to SNL's mission, the PCMM has evolved through multiple generations with the goal of providing more clarity, rigor, and completeness in its application. This report describes the approach used to develop the fourth generation of the PCMM.

  1. The interpersonal circumplex as a model of interpersonal capabilities.

    Science.gov (United States)

    Hofsess, Christy D; Tracey, Terence J G

    2005-04-01

    In this study, we sought to challenge the existing conceptualization of interpersonal capabilities as a distinct construct from interpersonal traits by explicitly taking into account the general factor inherent within most models of circumplexes. A sample of 206 college students completed a battery of measures including the Battery of Interpersonal Capabilities (BIC; Paulhus & Martin, 1987). Principal components analysis and the randomization test of hypothesized order relations demonstrated that contrary to previous findings, the BIC adhered to a circular ordering. Joint analysis of the BIC with the Interpersonal Adjective Scale (Wiggins, 1995) using principal components analysis and structural equation modeling demonstrated that the 2 measures represented similar constructs. Furthermore, the general factor in the BIC was not correlated with measures of general self-competence, satisfaction with life, or general pathology.

  2. User Instructions for the Systems Assessment Capability, Rev. 0, Computer Codes Volume 1: Inventory, Release, and Transport Modules

    International Nuclear Information System (INIS)

    Eslinger, Paul W.; Engel, David W.; Gerhardstein, Lawrence H.; Lopresti, Charles A.; Nichols, William E.; Strenge, Dennis L.

    2001-12-01

    One activity of the Department of Energy's Groundwater/Vadose Zone Integration Project is an assessment of cumulative impacts from Hanford Site wastes on the subsurface environment and the Columbia River. Through the application of a system assessment capability (SAC), decisions for each cleanup and disposal action will be able to take into account the composite effect of other cleanup and disposal actions. The SAC has developed a suite of computer programs to simulate the migration of contaminants (analytes) present on the Hanford Site and to assess the potential impacts of the analytes, including dose to humans, socio-cultural impacts, economic impacts, and ecological impacts. The general approach to handling uncertainty in the SAC computer codes is a Monte Carlo approach. Conceptually, one generates a value for every stochastic parameter in the code (the entire sequence of modules from inventory through transport and impacts) and then executes the simulation, obtaining an output value, or result. This document provides user instructions for the SAC codes that handle inventory tracking, release of contaminants to the environment, and transport of contaminants through the unsaturated zone, saturated zone, and the Columbia River
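The Monte Carlo approach described here can be sketched generically: sample a value for every stochastic parameter, run the full module chain (inventory through transport and impacts) once, and collect one result per realization. The driver below is an illustration; the parameter names, uniform distributions, and callback interface are placeholders, not the SAC codes' actual input format.

```python
import random

def monte_carlo(run_model, stochastic_params, n_realizations, seed=42):
    """Generic Monte Carlo driver.

    run_model         : callable taking a dict of sampled parameter values
    stochastic_params : dict mapping parameter name -> (low, high) uniform bounds
    Returns one model result per realization; a fixed seed keeps runs reproducible."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_realizations):
        sample = {name: rng.uniform(lo, hi)
                  for name, (lo, hi) in stochastic_params.items()}
        results.append(run_model(sample))
    return results
```

The spread of the collected results is then what quantifies the uncertainty in the simulated impacts.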

  3. Hybriding CMMI and requirement engineering maturity and capability models

    OpenAIRE

    Buglione, Luigi; Hauck, Jean Carlo R.; Gresse von Wangenheim, Christiane; Mc Caffery, Fergal

    2012-01-01

Estimation represents one of the most critical processes for any project, and it is highly dependent on the quality of requirements elicitation and management. Therefore, the management of requirements should be prioritised in any process improvement program, because the less precise the requirements gathering, analysis, and sizing, the greater the error in terms of time and cost estimation. Maturity and Capability Models (MCM) represent a good tool for assessing the status of ...

  4. Improved choked flow model for MARS code

    International Nuclear Information System (INIS)

    Chung, Moon Sun; Lee, Won Jae; Ha, Kwi Seok; Hwang, Moon Kyu

    2002-01-01

Choked flow calculation is improved by using a new sound speed criterion for bubbly flow, derived by characteristic analysis of a hyperbolic two-fluid model. The model is based on the notion of surface tension for the interfacial pressure jump terms in the momentum equations. The real eigenvalues obtained as the closed-form solution of the characteristic polynomial represent the sound speed in the bubbly flow regime and agree well with existing experimental data. The present sound speed gives a more reasonable result in the extreme cases than Nguyen's did. The choked flow criterion derived from the present sound speed is implemented in the MARS code and assessed against the Marviken choked flow tests. The assessment results, obtained without any adjustment by discharge coefficients, demonstrate more accurate predictions of choked flow rate in the bubbly flow regime than earlier choked flow calculations. By calculating a typical PWR SBLOCA problem, we confirm that the present model reproduces reasonable transients of an integral reactor system
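The strong depression of the two-phase sound speed that makes bubbly-flow choking hard to predict can be illustrated with Wood's classic homogeneous-mixture formula. This is not the two-fluid characteristic analysis used in MARS, only a simpler model that shows the same qualitative dip, with illustrative water/air properties.

```python
import math

def wood_sound_speed(alpha, rho_l=1000.0, c_l=1500.0, rho_g=1.2, c_g=340.0):
    """Homogeneous (Wood) two-phase sound speed for void fraction alpha.

    1/(rho_m c_m^2) = alpha/(rho_g c_g^2) + (1-alpha)/(rho_l c_l^2)."""
    rho_m = alpha * rho_g + (1.0 - alpha) * rho_l
    inv = alpha / (rho_g * c_g**2) + (1.0 - alpha) / (rho_l * c_l**2)
    return 1.0 / math.sqrt(rho_m * inv)

def choked_mass_flux(alpha, rho_l=1000.0, c_l=1500.0, rho_g=1.2, c_g=340.0):
    """Critical mass flux G = rho_m * c_m for the homogeneous mixture."""
    rho_m = alpha * rho_g + (1.0 - alpha) * rho_l
    return rho_m * wood_sound_speed(alpha, rho_l, c_l, rho_g, c_g)
```

At mid-range void fractions the mixture sound speed falls to a few tens of m/s, far below either pure-phase value, which is why the choked mass flux in bubbly flow is so sensitive to the sound speed model.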

  5. Off-Gas Adsorption Model Capabilities and Recommendations

    Energy Technology Data Exchange (ETDEWEB)

    Lyon, Kevin L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Welty, Amy K. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Law, Jack [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ladshaw, Austin [Georgia Inst. of Technology, Atlanta, GA (United States); Yiacoumi, Sotira [Georgia Inst. of Technology, Atlanta, GA (United States); Tsouris, Costas [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-03-01

Off-gas treatment is required to reduce emissions from aqueous fuel reprocessing. Evaluating the products of innovative gas adsorption research requires increased computational simulation capability to more effectively transition from fundamental research to operational design. Early modeling efforts produced the Off-Gas SeParation and REcoverY (OSPREY) model that, while efficient in terms of computation time, was of limited value for complex systems. However, the computational and programming lessons learned in development of the initial model were used to develop Discontinuous Galerkin OSPREY (DGOSPREY), a more effective model. Initial comparisons between OSPREY and DGOSPREY show that, while OSPREY does reasonably well to capture the initial breakthrough time, it displays far too much numerical dispersion to accurately capture the real shape of the breakthrough curves. DGOSPREY is a much better tool as it utilizes a more stable set of numerical methods. In addition, DGOSPREY has shown the capability to capture complex, multispecies adsorption behavior, while OSPREY currently only works for a single adsorbing species. This capability makes DGOSPREY ultimately a more practical tool for real world simulations involving many different gas species. While DGOSPREY has initially performed very well, there is still a need for improvement. The current state of DGOSPREY does not include any micro-scale adsorption kinetics and therefore assumes instantaneous adsorption. This is a major source of error in predicting water vapor breakthrough because the kinetics of that adsorption mechanism is particularly slow. However, this deficiency can be remedied by building kinetic kernels into DGOSPREY. Another source of error in DGOSPREY stems from data gaps in single-species isotherms, such as those for Kr and Xe. Since isotherm data for each gas is currently available at a single temperature, the model is unable to predict adsorption at temperatures outside of the set of data currently
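The numerical dispersion issue can be reproduced with a minimal first-order upwind model of an adsorbing bed, where instantaneous linear adsorption enters as a retardation factor on the front velocity. The treatment and all parameters below are illustrative; OSPREY and DGOSPREY solve a more complete coupled system.

```python
import numpy as np

def breakthrough(n_cells=200, velocity=0.01, retardation=5.0, dt=0.5, steps=4000):
    """Outlet concentration history for first-order upwind transport through a bed.

    Instantaneous linear adsorption retards the front by a constant factor.
    The first-order scheme smears the front -- the numerical dispersion that
    motivated the discontinuous Galerkin rewrite."""
    dx = 1.0 / n_cells
    u_eff = velocity / retardation          # adsorption slows the front
    cfl = u_eff * dt / dx
    assert cfl <= 1.0                       # explicit stability limit
    c = np.zeros(n_cells)
    outlet = []
    for _ in range(steps):
        c[1:] = c[1:] - cfl * (c[1:] - c[:-1])   # upwind advection update
        c[0] = 1.0                               # constant feed concentration
        outlet.append(c[-1])
    return np.array(outlet)
```

The ideal solution is a sharp step at the retarded breakthrough time; the smooth S-curve this produces instead is pure numerical dispersion, which higher-order discontinuous Galerkin discretizations suppress.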

  6. Simulink Code Generation: Tutorial for Generating C Code from Simulink Models using Simulink Coder

    Science.gov (United States)

    MolinaFraticelli, Jose Carlos

    2012-01-01

    This document explains all the necessary steps in order to generate optimized C code from Simulink Models. This document also covers some general information on good programming practices, selection of variable types, how to organize models and subsystems, and finally how to test the generated C code and compare it with data from MATLAB.

  7. Climbing the ladder: capability maturity model integration level 3

    Science.gov (United States)

    Day, Bryce; Lutteroth, Christof

    2011-02-01

This article details the attempt to form a complete workflow model for an information and communication technologies (ICT) company in order to achieve a Capability Maturity Model Integration (CMMI) maturity rating of 3. During this project, business processes across the company's core and auxiliary sectors were documented and extended using modern enterprise modelling tools and The Open Group Architecture Framework (TOGAF) methodology. Various challenges were encountered with regard to process customisation and tool support for enterprise modelling. In particular, there were problems with the reuse of process models, the integration of different project management methodologies, and the integration of the Rational Unified Process development framework that had to be solved. We report on these challenges and the perceived effects of the project on the company. Finally, we point out research directions that could help to improve the situation in the future.

  8. Capability to model reactor regulating system in RFSP

    International Nuclear Information System (INIS)

    Chow, H.C.; Rouben, B.; Younis, M.H.; Jenkins, D.A.; Baudouin, A.; Thompson, P.D.

    1995-01-01

    The Reactor Regulating System package extracted from SMOKIN-G2 was linked within RFSP to the spatial kinetics calculation. The objective is to use this new capability in safety analysis to model the actions of RRS in hypothetical events such as in-core LOCA or moderator drain scenarios. This paper describes the RRS modelling in RFSP and its coupling to the neutronics calculations, verification of the RRS control routine functions, sample applications and comparisons to SMOKIN-G2 results for the same transient simulations. (author). 7 refs., 6 figs

  9. Nuclear Hybrid Energy System Modeling: RELAP5 Dynamic Coupling Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Piyush Sabharwall; Nolan Anderson; Haihua Zhao; Shannon Bragg-Sitton; George Mesina

    2012-09-01

    The nuclear hybrid energy systems (NHES) research team is currently developing a dynamic simulation of an integrated hybrid energy system. A detailed simulation of proposed NHES architectures will allow initial computational demonstration of a tightly coupled NHES to identify key reactor subsystem requirements, identify candidate reactor technologies for a hybrid system, and identify key challenges to operation of the coupled system. This work will provide a baseline for later coupling of design-specific reactor models through industry collaboration. The modeling capability addressed in this report focuses on the reactor subsystem simulation.

  10. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in existing thermal-hydrologic-chemical (THC) codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  11. 40 CFR 194.23 - Models and computer codes.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  12. Spent fuel reprocessing system security engineering capability maturity model

    International Nuclear Information System (INIS)

    Liu Yachun; Zou Shuliang; Yang Xiaohua; Ouyang Zigen; Dai Jianyong

    2011-01-01

    In the field of nuclear safety, traditional work places extra emphasis on risk assessment related to technical skills, production operations, and accident consequences through deterministic or probabilistic analysis, on the basis of which risk management and control are implemented. However, high product quality does not necessarily mean good safety quality, which implies a predictable degree of uniformity and dependability suited to specific security needs. In this paper, we make use of the system security engineering capability maturity model (SSE-CMM) in the field of spent fuel reprocessing and establish a spent fuel reprocessing system security engineering capability maturity model (SFR-SSE-CMM). The base practices in the model are collected from nuclear safety engineering practice; they represent the best security implementation activities and reflect the regular, basic work of implementing security engineering in a spent fuel reprocessing plant, while the general practices reveal the management, measurement, and institutional characteristics of all process activities. The basic principles that should be followed in the course of implementing safety engineering activities are indicated from the 'what' and 'how' perspectives. The model provides a standardized framework and evaluation system for the safety engineering of the spent fuel reprocessing system. As a supplement to traditional methods, this new assessment technique, with repeatability and predictability with respect to cost, procedure, and quality control, can turn security engineering into a series of mature, measurable, and standardized activities. (author)

  13. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

    Energy Technology Data Exchange (ETDEWEB)

    Carbajo, Juan (Oak Ridge National Laboratory, Oak Ridge, TN); Jeong, Hae-Yong (Korea Atomic Energy Research Institute, Daejeon, Korea); Wigeland, Roald (Idaho National Laboratory, Idaho Falls, ID); Corradini, Michael (University of Wisconsin, Madison, WI); Schmidt, Rodney Cannon; Thomas, Justin (Argonne National Laboratory, Argonne, IL); Wei, Tom (Argonne National Laboratory, Argonne, IL); Sofu, Tanju (Argonne National Laboratory, Argonne, IL); Ludewig, Hans (Brookhaven National Laboratory, Upton, NY); Tobita, Yoshiharu (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Ohshima, Hiroyuki (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Serre, Frederic (Centre d'études nucléaires de Cadarache - CEA, France)

    2011-06-01

    This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, KAERI, JAEA, and CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions is drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the

  14. Image Coding using Markov Models with Hidden States

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto

    1999-01-01

    The Cylinder Partially Hidden Markov Model (CPH-MM) is applied to lossless coding of bi-level images. The original CPH-MM is relaxed for the purpose of coding by not imposing stationarity, but otherwise the model description is the same.
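    How a Markov model drives lossless coding can be sketched in a few lines: the ideal code length of each symbol is -log2 of its conditional model probability. This is a generic first-order Markov illustration, not the CPH-MM itself; the transition probabilities are invented:

```python
import math

# Ideal code length of a bi-level pixel sequence under a first-order Markov
# model: each symbol costs -log2 P(x_t | x_{t-1}) bits.
def markov_code_length(seq, trans):
    """trans[a][b] = P(next = b | current = a); returns total ideal bits."""
    return sum(-math.log2(trans[prev][cur]) for prev, cur in zip(seq, seq[1:]))

# A strongly persistent source (long runs), as in bi-level image rows.
trans = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}
seq = [0] * 50 + [1] * 50
print("ideal bits for 100 pixels:", round(markov_code_length(seq, trans), 1))
```

    An arithmetic coder driven by such a model approaches this bound; the better the model fits the image statistics, the shorter the code.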

  15. Documentation for grants equal to tax model: Volume 3, Source code

    International Nuclear Information System (INIS)

    Boryczka, M.K.

    1986-01-01

    The GETT model is capable of forecasting the amount of tax liability associated with all property owned and all activities undertaken by the US Department of Energy (DOE) in site characterization and repository development. The GETT program is a user-friendly, menu-driven model developed using dBASE III™, a relational data base management system. The data base for GETT consists primarily of eight separate dBASE III™ files corresponding to each of the eight taxes (real property, personal property, corporate income, franchise, sales, use, severance, and excise) levied by State and local jurisdictions on business property and activity. Additional smaller files help to control model inputs and reporting options. Volume 3 of the GETT model documentation is the source code. The code is arranged primarily by the eight tax types. Other code files include those for JURISDICTION, SIMULATION, VALIDATION, TAXES, CHANGES, REPORTS, GILOT, and GETT. The code has been verified through hand calculations.

  16. MMA, A Computer Code for Multi-Model Analysis

    Science.gov (United States)

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    be well served by the default methods provided. To use the default methods, the only required input for MMA is a list of directories where the files for the alternate models are located. Evaluation and development of model-analysis methods are active areas of research. To facilitate exploration and innovation, MMA allows the user broad discretion to define alternatives to the default procedures. For example, MMA allows the user to (a) rank models based on model criteria defined using a wide range of provided and user-defined statistics in addition to the default AIC, AICc, BIC, and KIC criteria, (b) create their own criteria using model measures available from the code, and (c) define how each model criterion is used to calculate related posterior model probabilities. The default model criteria rate models based on model fit to observations, the number of observations and estimated parameters, and, for KIC, the Fisher information matrix. In addition, MMA allows the analysis to include an evaluation of estimated parameter values. This is accomplished by allowing the user to define unreasonable estimated parameter values or relative estimated parameter values. An example of the latter is that it may be expected that one parameter value will be less than another, as might be the case if two parameters represented the hydraulic conductivity of distinct materials such as fine and coarse sand. Models with parameter values that violate the user-defined conditions are excluded from further consideration by MMA. Ground-water models are used as examples in this report, but MMA can be used to evaluate any set of models for which the required files have been produced. MMA needs to read files from a separate directory for each alternative model considered. The needed files are produced when using the Sensitivity-Analysis or Parameter-Estimation mode of UCODE_2005, or, possibly, the equivalent capability of another program. MMA is constructed using
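    The default criteria MMA ranks models by can be illustrated with their standard least-squares forms (textbook expressions; MMA's exact definitions, and the example numbers below, are not taken from the report):

```python
import math

# Standard least-squares forms of the information criteria used for model
# ranking. sse: sum of squared residuals, n: observations, k: parameters.
def model_criteria(sse, n, k):
    aic = n * math.log(sse / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)     # small-sample correction
    bic = n * math.log(sse / n) + k * math.log(n)  # heavier parameter penalty
    return aic, aicc, bic

# Two hypothetical ground-water models fit to the same 30 observations:
# model B fits slightly better but uses three more parameters.
a = model_criteria(sse=12.0, n=30, k=4)
b = model_criteria(sse=11.0, n=30, k=7)
print("model A (AIC, AICc, BIC):", tuple(round(v, 2) for v in a))
print("model B (AIC, AICc, BIC):", tuple(round(v, 2) for v in b))
```

    Here the small improvement in fit does not justify the extra parameters, so the simpler model A scores lower (better) on every criterion.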

  17. Turbomachinery Heat Transfer and Loss Modeling for 3D Navier-Stokes Codes

    Science.gov (United States)

    DeWitt, Kenneth; Ameri, Ali

    2005-01-01

    This report focuses on the use of NASA Glenn's on-site computational facilities to develop, validate, and apply models for advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes, enhancing the capability to compute heat transfer and losses in turbomachinery.

  18. A cellular automata model for traffic flow based on kinetics theory, vehicles capabilities and driver reactions

    Science.gov (United States)

    Guzmán, H. A.; Lárraga, M. E.; Alvarez-Icaza, L.; Carvajal, J.

    2018-02-01

    This paper presents a reliable cellular automata (CA) model oriented to faithfully reproducing deceleration and acceleration according to realistic driver reactions when vehicles with different deceleration capabilities are considered. The model focuses on describing complex traffic phenomena by coding in its rules the basic mechanisms of drivers' behavior, vehicle capabilities, and kinetics, while preserving simplicity. In particular, vehicle kinetics is based on uniformly accelerated motion, rather than on the impulsive accelerated motion of most existing CA models. Thus, the proposed model calculates analytically three safety-preserving distances to determine the best action a follower vehicle can take under a worst-case scenario. Moreover, the prediction analysis guarantees that, under the proper assumptions, collisions between vehicles cannot happen at any future time. Simulation results indicate that all interactions of heterogeneous vehicles (i.e., car-truck, truck-car, car-car, and truck-truck) are properly reproduced by the model. In addition, the model overcomes one of the major limitations of CA models for traffic modeling: the inability to perform a smooth approach to slower or stopped vehicles. The model is also capable of reproducing most empirical findings, including the backward speed of the downstream front of a traffic jam and the different congested traffic patterns induced by a system with open boundary conditions and an on-ramp. Like most CA models, integer values are used to make the model run faster, which makes the proposed model suitable for real-time traffic simulation of large networks.
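    For context, the classic Nagel-Schreckenberg rule set that such CA models build on, with the impulsive acceleration this paper replaces by uniformly accelerated motion, fits in a few lines of Python (all parameter values are illustrative):

```python
import random

# Classic Nagel-Schreckenberg cellular automaton: the impulsive-acceleration
# baseline that refined CA models improve on. Cars sit on a circular road
# and are indexed in driving order, so car (i+1) is always car i's leader.
def nasch_step(pos, vel, road_len, vmax, p_slow, rng):
    n = len(pos)
    new_vel = []
    for i in range(n):
        gap = (pos[(i + 1) % n] - pos[i] - 1) % road_len  # empty cells ahead
        v = min(vel[i] + 1, vmax)            # 1) accelerate toward vmax
        v = min(v, gap)                      # 2) brake: never enter the gap
        if v > 0 and rng.random() < p_slow:  # 3) random slowdown
            v -= 1
        new_vel.append(v)
    new_pos = [(pos[i] + new_vel[i]) % road_len for i in range(n)]
    return new_pos, new_vel

rng = random.Random(42)
road_len, n_cars = 60, 12
pos = [5 * i for i in range(n_cars)]   # evenly spaced start
vel = [0] * n_cars
for _ in range(100):
    pos, vel = nasch_step(pos, vel, road_len, 5, 0.3, rng)

print("mean speed after 100 steps:", sum(vel) / n_cars)
```

    Because velocity is capped by the gap, the update is collision-free by construction; the abrupt velocity jumps of rules 1-3 are exactly the "impulsive" kinetics the proposed model smooths out.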

  19. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm, which addresses the problem of overfitting the training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using a genetic algorithm, to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information, of 44 electronics and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
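    The parameter-search idea can be sketched with a minimal genetic algorithm. Here a quadratic stands in for the actual SVR train/validate loss, and its optimum location is invented; in the real model each fitness evaluation would train an SVR and score it on validation data:

```python
import random

# Minimal genetic algorithm searching two hyperparameters (stand-ins for
# SVR's C and epsilon). The quadratic validation_loss is a placeholder for
# a real SVR train/validate cycle; its optimum (C=10, eps=0.5) is invented.
def validation_loss(c, eps):
    return (c - 10.0) ** 2 + 4.0 * (eps - 0.5) ** 2

def evolve(pop_size=30, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 100), rng.uniform(0, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: validation_loss(*g))
        parents = pop[: pop_size // 2]               # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)   # crossover
            child = (child[0] + rng.gauss(0, 1.0),
                     child[1] + rng.gauss(0, 0.05))          # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda g: validation_loss(*g))

best_c, best_eps = evolve()
print(f"best C ~ {best_c:.2f}, best epsilon ~ {best_eps:.2f}")
```

    Keeping the best half unmutated (elitism) guarantees the search never loses its best candidate between generations.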

  20. Code Generation for Protocols from CPN models Annotated with Pragmatics

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael; Kindler, Ekkart

    of the same model and sufficiently detailed to serve as a basis for automated code generation when annotated with code generation pragmatics. Pragmatics are syntactical annotations designed to make the CPN models descriptive and to address the problem that models with enough details for generating code from them tend to be verbose and cluttered. Our code generation approach consists of three main steps, starting from a CPN model that the modeller has annotated with a set of pragmatics that make the protocol structure and the control-flow explicit. The first step is to compute for the CPN model a set of derived pragmatics that identify control-flow structures and operations, e.g., for sending and receiving packets, and for manipulating the state. In the second step, an abstract template tree (ATT) is constructed, providing an association between pragmatics and code generation templates. The ATT

  1. Development of the Monju core safety analysis numerical models by super-COPD code

    International Nuclear Information System (INIS)

    Yamada, Fumiaki; Minami, Masaki

    2010-12-01

    Japan Atomic Energy Agency constructed a computational model for safety analysis of the Monju reactor core, built into the modularized plant dynamics analysis code Super-COPD, for the purpose of evaluating heat removal capability in the 21 transients defined in the annex to the construction permit application. The applicability of this model to core heat removal capability evaluation has been assessed by back-to-back comparisons of the constituent models with conventionally applied codes and by application of the unified model. The numerical model for core safety analysis was built on the best-estimate model validated against actually measured plant behavior up to 40% rated power conditions, taking over the safety analysis models of the conventionally applied COPD and HARHO-IN codes, and is capable of overall calculations of the entire plant including the safety protection and control systems. Among the constituents of the analytical model, the neutronic-thermal model and the heat transfer and hydraulic models of the PHTS, SHTS, and water/steam system are individually verified by comparisons with the conventional calculations. Comparisons are also made with the actually measured plant behavior up to 40% rated power conditions to confirm the adequacy of the calculations and the conservativeness of the input data. The unified analytical model was applied to analyses of eight anomaly events in total: reactivity insertion, abnormal power distribution, and decrease and increase of coolant flow rate in the PHTS, SHTS, and water/steam systems. The resulting maximum values and temporal variations of the key safety evaluation parameters (temperatures of the fuel, cladding, in-core sodium coolant, and RV inlet and outlet coolant) show negligible discrepancies against the existing analysis results in the annex to the construction permit application, verifying the unified analytical model. These works have enabled analytical evaluation of Monju core heat removal capability by Super-COPD utilizing the

  2. Subgroup A: nuclear model codes report to the Sixteenth Meeting of the WPEC

    International Nuclear Information System (INIS)

    Talou, P.; Chadwick, M.B.; Dietrich, F.S.; Herman, M.; Kawano, T.; Konig, A.; Oblozinsky, P.

    2004-01-01

    The Subgroup A activities focus on the development of nuclear reaction models and codes used in evaluation work for nuclear reactions from the unresolved energy region up to the pion production threshold, and for target nuclides from the low teens in mass and heavier. Much of the effort is devoted by each participant to the continuing development of their own institution's codes. Progress in this arena is reported in detail for each code in the present document. EMPIRE-II is publicly accessible. The release of the TALYS code has been announced for the ND2004 Conference in Santa Fe, NM, October 2004. McGNASH is still under development and is not expected to be released in the very near future. In addition, Subgroup A members have demonstrated a growing interest in working on common modeling and code capabilities, which would significantly reduce the amount of duplicated work, help manage the growing lines of existing codes efficiently, and render code intercomparison much easier. A recent and important activity of Subgroup A has therefore been to develop the framework and first bricks of the ModLib library, which consists of mostly independent pieces of code written in Fortran 90 (and above) to be used in existing and future nuclear reaction codes. Significant progress in the development of ModLib has been made during the past year. Several physics modules have been added to the library, and a few more have been planned in detail for the coming year.

  3. Isotope-coded N-terminal sulfonation of peptides allows quantitative proteomic analysis with increased de novo peptide sequencing capability.

    Science.gov (United States)

    Lee, Yong Ho; Han, Hoon; Chang, Seok-Bok; Lee, Sang-Won

    2004-01-01

    Recently various methods for the N-terminal sulfonation of peptides have been developed for the mass spectrometric analyses of proteomic samples to facilitate de novo sequencing of the peptides produced. This paper describes the isotope-coded N-terminal sulfonation (ICenS) of peptides; this procedure allows both de novo peptide sequencing and quantitative proteomics to be studied simultaneously. As N-terminal sulfonation reagents, 13C-labeled 4-sulfophenyl[13C6]isothiocyanate (13C-SPITC) and unlabeled 4-sulfophenyl isothiocyanate (12C-SPITC) were synthesized. The experimental and reference peptide mixtures were derivatized independently using 13C-SPITC and 12C-SPITC and then combined to generate an isotopically labeled peptide mixture in which each isotopic pair differs in mass by 6 Da. Capillary reverse-phase liquid chromatography/tandem mass spectrometry experiments on the resulting peptide mixtures revealed several immediate advantages of ICenS in addition to the de novo sequencing capability of N-terminal sulfonation, namely, differentiation between N-terminal sulfonated peptides and unmodified peptides in mass spectra, differentiation between N- and C-terminal fragments in tandem mass spectra of multiply protonated peptides by comparing fragmentations of the isotopic pairs, and relative peptide quantification between proteome samples. We demonstrate that the combination of N-terminal sulfonation and isotope coding in the mass spectrometric analysis of proteomic samples is a viable method that overcomes many problems associated with current N-terminal sulfonation methods. Copyright 2004 John Wiley & Sons, Ltd.
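    The pairing step the method relies on, finding peptide signals spaced by the 6 Da 13C6/12C label difference and taking their intensity ratio, can be sketched as follows (illustrative peak list, not data from the paper):

```python
# Toy sketch of the ICenS pairing step: find peptide peaks whose masses
# differ by the 6 Da 13C-SPITC / 12C-SPITC label spacing and quantify the
# heavy/light abundance ratio from their intensities.
def find_icens_pairs(peaks, delta=6.0, tol=0.02):
    """peaks: list of (mass, intensity). Returns (light, heavy, ratio)."""
    pairs = []
    for m1, i1 in peaks:
        for m2, i2 in peaks:
            if abs((m2 - m1) - delta) <= tol:
                pairs.append((m1, m2, i2 / i1))  # heavy/light ratio
    return pairs

peaks = [(1024.52, 8.0e4), (1030.52, 1.6e5),    # a labeled pair, ratio 2.0
         (1187.61, 5.0e4), (1193.60, 4.9e4),    # a labeled pair, ratio ~1
         (1402.77, 7.0e4)]                      # unpaired background peak
for light, heavy, ratio in find_icens_pairs(peaks):
    print(f"{light:.2f} / {heavy:.2f} -> ratio {ratio:.2f}")
```

    Peaks with no partner 6 Da away are flagged as unmodified, which is the mass-spectrum differentiation the abstract describes.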

  4. Nuclear model computer codes available from the NEA Data Bank

    International Nuclear Information System (INIS)

    Sartori, E.

    1989-01-01

    A library of computer codes for nuclear model calculations, a subset of a library covering the different aspects of reactor physics and technology applications has been established at the NEA Data Bank. These codes are listed and classified according to the model used in the text. Copies of the programs can be obtained from the NEA Data Bank. (author). 8 refs

  5. WWER radial reflector modeling by diffusion codes

    International Nuclear Information System (INIS)

    Petkov, P. T.; Mittag, S.

    2005-01-01

    The two commonly used approaches to describing the WWER radial reflectors in diffusion codes, by albedo on the core-reflector boundary and by a ring of diffusive assembly-size nodes, are discussed. The advantages and disadvantages of the first approach are presented first; then Koebke's equivalence theory is outlined and its implementation for the WWER radial reflectors is discussed. Results for the WWER-1000 reactor are presented. Then the boundary conditions on the outer reflector boundary are discussed. The possibility to divide the library into fuel assembly and reflector parts and to generate each library by a separate code package is discussed. Finally, the homogenization errors for rodded assemblies are presented and discussed (Author)

  6. Aviation System Analysis Capability Air Carrier Investment Model-Cargo

    Science.gov (United States)

    Johnson, Jesse; Santmire, Tara

    1999-01-01

    The purpose of the Aviation System Analysis Capability (ASAC) Air Carrier Investment Model-Cargo (ACIMC) is to examine the economic effects of technology investment on the air cargo market, particularly the market for new cargo aircraft. To do so, we have built an econometrically based model designed to operate like the ACIM. Two main drivers account for virtually all of the demand: the growth rate of Gross Domestic Product (GDP) and changes in the fare yield (a proxy for the price charged, or fare). Differences from the passenger market arise from a combination of the nature of air cargo demand and the peculiarities of the air cargo market. The net effect of these two factors is that sales of new cargo aircraft are much less sensitive than sales of new passenger aircraft to either increases in GDP or changes in the costs of labor, capital, fuel, materials, and energy associated with aircraft production. This, in conjunction with the relatively small size of the cargo aircraft market, means technology improvements to cargo aircraft will do relatively little to spur increased sales of new cargo aircraft.
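    A constant-elasticity (log-linear) demand form is typical of such econometric models and shows how the two drivers combine; the elasticity values below are illustrative, not ASAC's estimates:

```python
# Sketch of a constant-elasticity demand response to the two drivers the
# text names: GDP growth and fare-yield changes. Elasticities are invented.
def demand_growth(gdp_growth, yield_change, e_gdp=2.0, e_yield=-0.8):
    """All inputs are fractions (0.03 = 3%); returns demand growth fraction."""
    return (1 + gdp_growth) ** e_gdp * (1 + yield_change) ** e_yield - 1

# 3% GDP growth combined with a 2% decline in fare yield:
g = demand_growth(0.03, -0.02)
print(f"air cargo demand growth ~ {g * 100:.1f}%")
```

    The exponents are the elasticities an econometric fit would estimate: demand rises with GDP and falls as yield (price) rises.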

  7. Expanding of reactor power calculation model of RELAP5 code

    International Nuclear Information System (INIS)

    Lin Meng; Yang Yanhua; Chen Yuqing; Zhang Hong; Liu Dingming

    2007-01-01

    To better analyze nuclear power transients in a rod-controlled reactor core with the RELAP5 code, a best-estimate nuclear reactor thermal-hydraulic system code, it is desirable to compute the nuclear power using not only a point neutron kinetics model but also a one-dimensional neutron kinetics model. To this end, an existing one-dimensional reactor physics code was modified to couple its neutron kinetics model with the RELAP5 thermal-hydraulic model. A detailed example test proves that the coupling is valid and correct. (authors)
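    The simpler of the two options, the point neutron kinetics model, can be sketched with one delayed-neutron group (illustrative parameter values, not RELAP5 data):

```python
# One-delayed-group point kinetics, integrated with explicit Euler.
# Parameter values (beta, lambda, generation time) are illustrative.
def point_kinetics(rho, beta=0.0065, lam=0.08, gen_time=1e-4,
                   dt=1e-5, t_end=0.5):
    """Return relative power n(t_end) after a step reactivity insertion rho."""
    n = 1.0
    c = beta * n / (lam * gen_time)        # precursors start in equilibrium
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / gen_time) * n + lam * c
        dc = (beta / gen_time) * n - lam * c
        n, c = n + dn * dt, c + dc * dt
    return n

# A +0.1 dollar step (rho = 0.00065): prompt jump, then a slow rise.
n_final = point_kinetics(0.00065)
print("relative power after 0.5 s:", round(n_final, 3))
```

    The one-dimensional model the paper adds replaces this single scalar power with an axial flux shape, so rod-driven shape changes feed back into the thermal-hydraulics.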

  8. RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1

    International Nuclear Information System (INIS)

    1995-08-01

    The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and for operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal-hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes.

  9. Lidar Remote Sensing of Forests: New Instruments and Modeling Capabilities

    Science.gov (United States)

    Cook, Bruce D.

    2012-01-01

    Lidar instruments provide scientists with the unique opportunity to characterize the 3D structure of forest ecosystems. This information allows us to estimate properties such as wood volume, biomass density, stocking density, canopy cover, and leaf area. Structural information also can be used as drivers for photosynthesis and ecosystem demography models to predict forest growth and carbon sequestration. All lidars use time-of-flight measurements to compute accurate ranging measurements; however, there is a wide range of instruments and data types that are currently available, and instrument technology continues to advance at a rapid pace. This seminar will present new technologies that are in use and under development at NASA for airborne and space-based missions. Opportunities for instrument and data fusion will also be discussed, as Dr. Cook is the PI for G-LiHT, Goddard's LiDAR, Hyperspectral, and Thermal airborne imager. Lastly, this talk will introduce radiative transfer models that can simulate interactions between laser light and forest canopies. Developing modeling capabilities is important for providing continuity between observations made with different lidars, and to assist the design of new instruments. Dr. Bruce Cook is a research scientist in NASA's Biospheric Sciences Laboratory at Goddard Space Flight Center, and has more than 25 years of experience conducting research on ecosystem processes, soil biogeochemistry, and exchange of carbon, water vapor and energy between the terrestrial biosphere and atmosphere. His research interests include the combined use of lidar, hyperspectral, and thermal data for characterizing ecosystem form and function. He is Deputy Project Scientist for the Landsat Data Continuity Mission (LDCM); Project Manager for NASA's Carbon Monitoring System (CMS) pilot project for local-scale forest biomass; and PI of Goddard's LiDAR, Hyperspectral, and Thermal (G-LiHT) airborne imager.
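    The time-of-flight principle behind all of these instruments reduces to range = c·t/2, since the pulse travels out and back:

```python
# Lidar ranging: distance = c * t / 2, because the pulse makes a round trip.
C_LIGHT = 299_792_458.0  # speed of light, m/s

def lidar_range_m(time_of_flight_ns):
    return C_LIGHT * (time_of_flight_ns * 1e-9) / 2.0

# A canopy return arriving 200 ns after the outgoing pulse is ~30 m away;
# each nanosecond of timing resolution corresponds to ~15 cm of range.
print(f"range: {lidar_range_m(200):.2f} m")
```

    Waveform and photon-counting lidars differ mainly in how they detect and timestamp those returns, not in this underlying geometry.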

  10. Fuel behavior modeling using the MARS computer code

    International Nuclear Information System (INIS)

    Faya, S.C.S.; Faya, A.J.G.

    1983-01-01

    The fuel behaviour modeling code MARS was evaluated against experimental data. Two cases were selected: an early commercial PWR rod (the Maine Yankee rod) and an experimental rod from the Canadian BWR program (the Canadian rod). The MARS predictions are compared with experimental data and with predictions made by other fuel modeling codes. Improvements are suggested for some fuel behaviour models. MARS results are satisfactory based on the data available. (Author) [pt

  11. Coupled neutronics and thermal hydraulics modelling in reactor dynamics codes TRAB-3D and HEXTRAN

    International Nuclear Information System (INIS)

    Kyrki-Rajamaeki, R.; Raety, H.

    1999-01-01

    The reactor dynamics codes for transient and accident analyses inherently include coupled neutronics and thermal hydraulics modelling. In Finland a number of codes with 1D and 3D neutronic models have been developed, which also include models for the cooling circuits. They have been used mainly for the needs of Finnish power plants, but some of the codes have also been utilized elsewhere. Continuous validation, simultaneous development, and experience obtained in commercial applications have considerably improved the performance and range of application of the codes. The fast operation of the codes has enabled realistic analysis of a 3D core combined with a full model of the cooling circuit, even in such long reactivity scenarios as ATWS. The reactor dynamics methods are being developed further, and new, more detailed models are created for tasks related to increased safety requirements. For thermal hydraulics calculations, an accurate general flow model based on a new solution method has been developed. Although mainly intended for analysis purposes, the reactor dynamics codes also provide reference solutions for simulator applications. As computer capability increases, these more sophisticated methods can also be taken into use in simulator environments. (author)

  12. Quantitative Model for Supply Chain Visibility: Process Capability Perspective

    Directory of Open Access Journals (Sweden)

    Youngsu Lee

    2016-01-01

    Full Text Available Currently, the intensity of enterprise competition has increased as a result of a greater diversity of customer needs as well as the persistence of a long-term recession. The results of competition are becoming severe enough to determine the survival of company. To survive global competition, each firm must focus on achieving innovation excellence and operational excellence as core competency for sustainable competitive advantage. Supply chain management is now regarded as one of the most effective innovation initiatives to achieve operational excellence, and its importance has become ever more apparent. However, few companies effectively manage their supply chains, and the greatest difficulty is in achieving supply chain visibility. Many companies still suffer from a lack of visibility, and in spite of extensive research and the availability of modern technologies, the concepts and quantification methods to increase supply chain visibility are still ambiguous. Based on the extant researches in supply chain visibility, this study proposes an extended visibility concept focusing on a process capability perspective and suggests a more quantitative model using Z score in Six Sigma methodology to evaluate and improve the level of supply chain visibility.
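    The Z-score quantification the abstract proposes can be illustrated with a toy process-capability calculation. This is a hedged sketch only: the metric (order-tracking latency), the specification limit, and the function name are invented, not taken from the paper.

    ```python
    # Illustrative Six Sigma style Z score for one supply-chain metric.
    # A higher Z means a more capable, hence more predictable/"visible",
    # process step. All numbers below are invented.

    def z_score(mean, std_dev, upper_spec):
        """Short-term process-capability Z score: (USL - mean) / sigma."""
        if std_dev <= 0:
            raise ValueError("std_dev must be positive")
        return (upper_spec - mean) / std_dev

    # Example: mean order-tracking latency 24 h, sigma 6 h, 48 h service limit.
    z = z_score(mean=24.0, std_dev=6.0, upper_spec=48.0)
    print(z)  # 4.0
    ```

    Under this reading, improving visibility means driving the Z score of each monitored process step upward, either by shifting the mean or by reducing its variability.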

  13. Innovation and dynamic capabilities of the firm: Defining an assessment model

    Directory of Open Access Journals (Sweden)

    André Cherubini Alves

    2017-05-01

    Full Text Available Innovation and dynamic capabilities have gained considerable attention in both academia and practice. While one of the oldest inquiries in economic and strategy literature involves understanding the features that drive business success and a firm’s perpetuity, the literature still lacks a comprehensive model of innovation and dynamic capabilities. This study presents a model that assesses firms’ innovation and dynamic capabilities perspectives based on four essential capabilities: development, operations, management, and transaction capabilities. Data from a survey of 1,107 Brazilian manufacturing firms were used for empirical testing and discussion of the dynamic capabilities framework. Regression and factor analyses validated the model; we discuss the results, contrasting them with the dynamic capabilities framework. Operations Capability is the least dynamic of all capabilities, with the least influence on innovation. This reinforces the notion that operations capabilities are “ordinary capabilities,” whereas management, development, and transaction capabilities better explain firms’ dynamics and innovation.

  14. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
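    The core inversion algorithm behind PEST-family codes, a damped Gauss-Newton (Gauss-Levenberg-Marquardt) iteration, can be sketched compactly. The toy linear model, data, and settings below are invented for illustration; real PEST++ adds regularization, uncertainty analysis, and a parallel run manager, among much else.

    ```python
    # Hedged sketch of one damped Gauss-Newton (Levenberg-Marquardt) step,
    # the kind of update at the heart of PEST-style parameter estimation.
    # Toy model and all settings invented; not the PEST++ implementation.
    import numpy as np

    def glm_step(params, model, obs, lam=0.01, eps=1e-6):
        """One damped Gauss-Newton update minimizing sum((obs - model(p))**2)."""
        r = obs - model(params)                   # residual vector
        J = np.empty((len(obs), len(params)))     # finite-difference Jacobian
        for j in range(len(params)):
            dp = np.zeros_like(params)
            dp[j] = eps
            J[:, j] = (model(params + dp) - model(params)) / eps
        A = J.T @ J + lam * np.eye(len(params))   # damped normal equations
        return params + np.linalg.solve(A, J.T @ r)

    # Calibrate y = a*x + b against synthetic "observations" with a=2, b=1.
    x = np.linspace(0.0, 1.0, 20)
    obs = 2.0 * x + 1.0
    model = lambda p: p[0] * x + p[1]
    p = np.array([0.0, 0.0])
    for _ in range(50):
        p = glm_step(p, model, obs)
    # p converges toward [2.0, 1.0]
    ```

    In a real highly parameterized inversion, each `model(params)` evaluation is a full run of the environmental model, which is why run management and parallelization dominate the design of such codes.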

  15. EM modeling for GPIR using 3D FDTD modeling codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, S.D.

    1994-10-01

    An analysis of the one-, two-, and three-dimensional electrical characteristics of structural cement and concrete is presented. This work connects experimental efforts in characterizing cement and concrete in the frequency and time domains with the Finite Difference Time Domain (FDTD) modeling efforts for these substances. These efforts include electromagnetic (EM) modeling of simple lossless homogeneous materials with aggregate and targets, and the modeling of dispersive and lossy materials with aggregate and complex target geometries for Ground Penetrating Imaging Radar (GPIR). Two- and three-dimensional FDTD codes (developed at LLNL) were used for the modeling efforts. The purpose of the experimental and modeling efforts is to gain knowledge about the electrical properties of concrete typically used in the construction industry for bridges and other load-bearing structures. The goal is to optimize the performance of a high-sample-rate impulse radar and data acquisition system and to design an antenna system to match the characteristics of this material. Results show agreement to within 2 dB of the amplitudes of the experimental and modeled data, while the frequency peaks correlate to within 10%, the differences being due to the unknown exact nature of the aggregate placement.
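    The FDTD scheme underlying such codes is easy to illustrate in one dimension. A minimal sketch, assuming free space and normalized units with a Courant number of 0.5; this is not the LLNL code, and all parameters are invented.

    ```python
    # Minimal 1-D FDTD (Yee-style) leapfrog update loop: staggered electric
    # and magnetic fields, advanced in alternating half-steps, with a soft
    # Gaussian source injected at one grid cell. Illustration only.
    import numpy as np

    def fdtd_1d(steps=200, n=200, src=50):
        ez = np.zeros(n)                 # electric field on integer nodes
        hy = np.zeros(n)                 # magnetic field on half nodes
        for t in range(steps):
            hy[:-1] += 0.5 * (ez[1:] - ez[:-1])           # H-field update
            ez[1:] += 0.5 * (hy[1:] - hy[:-1])            # E-field update
            ez[src] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft Gaussian source
        return ez

    ez = fdtd_1d()
    # After 200 steps the injected pulse has propagated away from the source.
    ```

    Modeling dispersive, lossy concrete essentially amounts to replacing the free-space update coefficients with material-dependent ones (and auxiliary equations for dispersion), which is where the experimental characterization feeds in.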

  16. Case studies in Gaussian process modelling of computer codes

    International Nuclear Information System (INIS)

    Kennedy, Marc C.; Anderson, Clive W.; Conti, Stefano; O'Hagan, Anthony

    2006-01-01

    In this paper we present a number of recent applications in which an emulator of a computer code is created using a Gaussian process model. Tools are then applied to the emulator to perform sensitivity analysis and uncertainty analysis. Sensitivity analysis is used both as an aid to model improvement and as a guide to how much the output uncertainty might be reduced by learning about specific inputs. Uncertainty analysis allows us to reflect output uncertainty due to unknown input parameters, when the finished code is used for prediction. The computer codes themselves are currently being developed within the UK Centre for Terrestrial Carbon Dynamics
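    The emulator idea is straightforward to sketch: fit a Gaussian process to a small number of code runs, then use its cheap posterior mean in place of the expensive code. The kernel, design points, and stand-in "simulator" below are invented; this is a minimal sketch, not the authors' tooling.

    ```python
    # Illustrative Gaussian-process emulator of an "expensive" computer code:
    # condition an RBF-kernel GP on a handful of runs, then predict cheaply
    # at new inputs. All settings invented.
    import numpy as np

    def rbf(a, b, length=0.5):
        """Squared-exponential kernel between two 1-D input sets."""
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length) ** 2)

    def gp_posterior_mean(x_train, y_train, x_test, jitter=1e-6):
        K = rbf(x_train, x_train) + jitter * np.eye(len(x_train))
        return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

    simulator = np.sin                         # stand-in for a costly code
    x_train = np.linspace(0.0, np.pi, 15)      # a small design of code runs
    y_train = simulator(x_train)
    pred = float(gp_posterior_mean(x_train, y_train, np.array([1.0])))
    print(round(pred, 3))                      # close to sin(1.0) ~ 0.841
    ```

    Sensitivity and uncertainty analyses are then run against the emulator rather than the code itself, which is what makes them affordable.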

  17. Expanded rock blast modeling capabilities of DMC{_}BLAST, including buffer blasting

    Energy Technology Data Exchange (ETDEWEB)

    Preece, D.S. [Sandia National Labs., Albuquerque, NM (United States); Tidman, J.P.; Chung, S.H. [ICI Explosives (Canada)

    1996-12-31

    A discrete element computer program named DMC{_}BLAST (Distinct Motion Code) has been under development since 1987 for modeling rock blasting. This program employs explicit time integration and uses spherical or cylindrical elements that are represented as circles in 2-D. DMC{_}BLAST calculations compare favorably with data from actual bench blasts. The blast modeling capabilities of DMC{_}BLAST have been expanded to include independently dipping geologic layers, top surface, bottom surface and pit floor. The pit can also now be defined using coordinates based on the toe of the bench. A method for modeling decked explosives has been developed which allows accurate treatment of the inert materials (stemming) in the explosive column and approximate treatment of different explosives in the same blasthole. A DMC{_}BLAST user can specify decking through a specific geologic layer with either inert material or a different explosive. Another new feature of DMC{_}BLAST is specification of an uplift angle which is the angle between the normal to the blasthole and a vector defining the direction of explosive loading on particles adjacent to the blasthole. A buffer (choke) blast capability has been added for situations where previously blasted material is adjacent to the free face of the bench preventing any significant lateral motion during the blast.

  18. ATHENA code manual. Volume 1. Code structure, system models, and solution methods

    International Nuclear Information System (INIS)

    Carlson, K.E.; Roth, P.A.; Ransom, V.H.

    1986-09-01

    The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code has been developed to perform transient simulation of the thermal hydraulic systems which may be found in fusion reactors, space reactors, and other advanced systems. A generic modeling approach is utilized which permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of a complete facility. Several working fluids are available to be used in one or more interacting loops. Different loops may have different fluids with thermal connections between loops. The modeling theory and associated numerical schemes are documented in Volume I in order to acquaint the user with the modeling base and thus aid effective use of the code. The second volume contains detailed instructions for input data preparation

  19. A model based lean approach to capability management

    CSIR Research Space (South Africa)

    Venter, Jacobus P

    2017-09-01

    Full Text Available for cyberwar and counter-terrorism capabilities, as these are fairly new and rapidly changing environments. It is therefore necessary to employ a Capability Management mechanism that can provide answers in the short term and is able to handle continuous changes... is only included or excluded from the Mission Plan. A further refinement is to indicate the role that the FSC plays in the mission. The following classification is used for this purpose: • c = the FSC can / should command (directly or indirectly, taking...

  20. Using Genome-scale Models to Predict Biological Capabilities

    DEFF Research Database (Denmark)

    O’Brien, Edward J.; Monk, Jonathan M.; Palsson, Bernhard O.

    2015-01-01

    growth capabilities on various substrates and the effect of gene knockouts at the genome scale. Thus, much interest has developed in understanding and applying these methods to areas such as metabolic engineering, antibiotic design, and organismal and enzyme evolution. This Primer will get you started....

  1. CMCpy: Genetic Code-Message Coevolution Models in Python.

    Science.gov (United States)

    Becich, Peter J; Stark, Brian P; Bhat, Harish S; Ardell, David H

    2013-01-01

    Code-message coevolution (CMC) models represent coevolution of a genetic code and a population of protein-coding genes ("messages"). Formally, CMC models are sets of quasispecies coupled together for fitness through a shared genetic code. Although CMC models display plausible explanations for the origin of multiple genetic code traits by natural selection, useful modern implementations of CMC models are not currently available. To meet this need we present CMCpy, an object-oriented Python API and command-line executable front-end that can reproduce all published results of CMC models. CMCpy implements multiple solvers for leading eigenpairs of quasispecies models. We also present novel analytical results that extend and generalize applications of perturbation theory to quasispecies models and pioneer the application of a homotopy method for quasispecies with non-unique maximally fit genotypes. Our results therefore facilitate the computational and analytical study of a variety of evolutionary systems. CMCpy is free open-source software available from http://pypi.python.org/pypi/CMCpy/.
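    The leading-eigenpair computation at the core of such quasispecies models can be illustrated with power iteration on a toy two-genotype system. The mutation matrix, fitness values, and function names below are invented for illustration and are not CMCpy's API.

    ```python
    # Sketch of the central numerical task in quasispecies models: the
    # leading eigenpair of W = Q @ diag(f), where Q is a mutation matrix and
    # f the genotype fitnesses. Found here by simple power iteration.
    import numpy as np

    def leading_eigenpair(W, iters=200):
        v = np.ones(W.shape[0]) / W.shape[0]
        for _ in range(iters):
            v = W @ v
            v /= v.sum()                 # keep v a frequency distribution
        lam = (W @ v).sum()              # growth rate once v has converged
        return lam, v

    mu = 0.05                                     # toy mutation probability
    Q = np.array([[1 - mu, mu], [mu, 1 - mu]])    # two-genotype mutations
    f = np.array([1.0, 0.5])                      # genotype fitnesses
    lam, v = leading_eigenpair(Q @ np.diag(f))
    # lam is the mean population fitness; v the equilibrium genotype mix.
    ```

    CMCpy's solvers address the same eigenproblem for much larger coupled systems, where the choice of eigensolver and the handling of non-unique maximally fit genotypes become the hard part.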

  2. Noise Residual Learning for Noise Modeling in Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Forchhammer, Søren

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The noise model is one of the inherently difficult challenges in DVC. This paper considers Transform Domain Wyner-Ziv (TDWZ) coding and proposes...... noise residual learning techniques that take residues from previously decoded frames into account to estimate the decoding residue more precisely. Moreover, the techniques calculate a number of candidate noise residual distributions within a frame to adaptively optimize the soft side information during...

  3. A fully scalable motion model for scalable video coding.

    Science.gov (United States)

    Kao, Meng-Ping; Nguyen, Truong

    2008-06-01

    Motion information scalability is an important requirement for a fully scalable video codec, especially for decoding scenarios at low bit rates or small image sizes. Several scalable coding techniques for motion information have been proposed, including progressive motion vector precision coding and motion vector field layered coding. However, it remains unclear which functionalities motion scalability must provide and how it can work seamlessly with the other scalabilities, such as spatial, temporal, and quality, in a scalable video codec. In this paper, we first define the functionalities required for motion scalability. Based on these requirements, a fully scalable motion model is proposed, along with tailored encoding techniques that minimize the coding overhead of scalability. Moreover, an associated rate-distortion optimized motion estimation algorithm is provided to achieve better efficiency across various decoding scenarios. Simulation results verify the superiority of the proposed scalable motion model over nonscalable ones.
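    Progressive motion-vector precision coding, one of the prior techniques the abstract mentions, can be illustrated with a toy layered representation: an integer-pel base layer plus half- and quarter-pel refinement bits. The representation below is invented for illustration (nonnegative quarter-pel components only) and is not the authors' scheme.

    ```python
    # Toy layered motion-vector representation: a decoder that receives only
    # the base layer gets integer-pel motion; each extra bit refines precision.

    def encode_mv(mv_quarter_pel):
        """Split a quarter-pel motion component into base + refinement bits."""
        base = mv_quarter_pel >> 2           # integer-pel base layer
        half = (mv_quarter_pel >> 1) & 1     # half-pel refinement bit
        quarter = mv_quarter_pel & 1         # quarter-pel refinement bit
        return base, half, quarter

    def decode_mv(base, half=0, quarter=0):
        return (base << 2) | (half << 1) | quarter

    b, h, q = encode_mv(13)                  # 13/4 = 3.25 pel
    assert decode_mv(b) == 12                # base layer only: 3.0 pel
    assert decode_mv(b, h, q) == 13          # full precision restored
    ```

    A fully scalable motion model must make such precision layers consistent with spatial and temporal layer extraction, which is the collaboration problem the paper addresses.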

  4. Modeling Guidelines for Code Generation in the Railway Signaling Context

    Science.gov (United States)

    Ferrari, Alessio; Bacherini, Stefano; Fantechi, Alessandro; Zingoni, Niccolo

    2009-01-01

    Modeling guidelines constitute one of the fundamental cornerstones for Model-Based Development. Their relevance is essential when dealing with code generation in the safety-critical domain. This article presents the experience of a railway signaling systems manufacturer on this issue. Introduction of Model-Based Development (MBD) and code generation in the industrial safety-critical sector created a crucial paradigm shift in the development process of dependable systems. While traditional software development focuses on the code, with MBD practices the focus shifts to model abstractions. The change has fundamental implications for safety-critical systems, which still need to guarantee a high degree of confidence also at code level. Usage of the Simulink/Stateflow platform for modeling, which is a de facto standard in control software development, does not by itself ensure production of high-quality dependable code. This issue has been addressed by companies through the definition of modeling rules imposing restrictions on the usage of design tool components, in order to enable production of qualified code. The MAAB Control Algorithm Modeling Guidelines (MathWorks Automotive Advisory Board) [3] is a well-established set of publicly available rules for modeling with Simulink/Stateflow. This set of recommendations has been developed by a group of OEMs and suppliers of the automotive sector with the objective of enforcing and easing the usage of the MathWorks tools within the automotive industry. The guidelines were published in 2001 and afterwards revised in 2007 in order to integrate some additional rules developed by the Japanese division of MAAB [5]. The scope of the current edition of the guidelines ranges from model maintainability and readability to code generation issues. The rules are conceived as a reference baseline and therefore they need to be tailored to comply with the characteristics of each industrial context. Customization of these

  5. A Single Model Explains both Visual and Auditory Precortical Coding

    OpenAIRE

    Shan, Honghao; Tong, Matthew H.; Cottrell, Garrison W.

    2016-01-01

    Precortical neural systems encode information collected by the senses, but the driving principles of the encoding used have remained a subject of debate. We present a model of retinal coding that is based on three constraints: information preservation, minimization of the neural wiring, and response equalization. The resulting novel version of sparse principal components analysis successfully captures a number of known characteristics of the retinal coding system, such as center-surround rece...

  6. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC).

    Energy Technology Data Exchange (ETDEWEB)

    Schultz, Peter Andrew

    2011-12-01

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum-scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and the constitutive models they inform or generate. This report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum-scale modeling and simulation (M&S), and a plan to incrementally incorporate effective V&V into subcontinuum-scale M&S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum-scale phenomena.

  7. On the Delay Characteristics for Point-to-Point links using Random Linear Network Coding with On-the-fly Coding Capabilities

    DEFF Research Database (Denmark)

    Tömösközi, Máté; Fitzek, Frank; Roetter, Daniel Enrique Lucani

    2014-01-01

    . This metric captures the elapsed time between (network) encoding RTP packets and completely decoding the packets in-order on the receiver side. Our solutions are implemented and evaluated on a point-to-point link between a Raspberry Pi device and a network (de)coding enabled software running on a regular PC...

  8. Nascap-2k Spacecraft-Plasma Environment Interactions Modeling: New Capabilities and Verification

    National Research Council Canada - National Science Library

    Davis, V. A; Mandell, M. J; Cooke, D. L; Ferguson, D. C

    2007-01-01

    .... Here we examine the accuracy and limitations of two new capabilities of Nascap-2k: modeling of plasma plumes such as generated by electric thrusters and enhanced PIC computational capabilities...

  9. Development of Parallel Computing Framework to Enhance Radiation Transport Code Capabilities for Rare Isotope Beam Facility Design

    Energy Technology Data Exchange (ETDEWEB)

    Kostin, Mikhail [Michigan State Univ., East Lansing, MI (United States); Mokhov, Nikolai [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Niita, Koji [Research Organization for Information Science and Technology, Ibaraki-ken (Japan)

    2013-09-25

    A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework is implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran 77, Fortran 90, or C. The module is largely independent of the radiation transport code it is used with, and is connected to the code by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It can be used with other codes, such as PHITS, FLUKA, and MCNP, after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows calculations to be restarted from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known scheduling and load-balancing problems found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where interference from other users is possible.

  10. MELMRK 2.0: A description of computer models and results of code testing

    Energy Technology Data Exchange (ETDEWEB)

    Wittman, R.S. (ed.) (Westinghouse Savannah River Co., Aiken, SC (United States)); Denny, V.; Mertol, A. (Science Applications International Corp., Los Altos, CA (United States))

    1992-05-31

    An advanced version of the MELMRK computer code has been developed that provides detailed models for conservation of mass, momentum, and thermal energy within relocating streams of molten metallics during meltdown of Savannah River Site (SRS) reactor assemblies. In addition to a mechanistic treatment of transport phenomena within a relocating stream, MELMRK 2.0 retains the MOD1 capability for real-time coupling of the in-depth thermal response of participating assembly heat structure and, further, augments this capability with models for self-heating of relocating melt owing to steam oxidation of metallics and fission product decay power. As was the case for MELMRK 1.0, the MOD2 version offers state-of-the-art numerics for solving coupled sets of nonlinear differential equations. Principal features include application of multi-dimensional Newton-Raphson techniques to accelerate convergence behavior and direct matrix inversion to advance primitive variables from one iterate to the next. Additionally, MELMRK 2.0 provides logical event flags for managing the broad range of code options available for treating such features as (1) coexisting flow regimes, (2) dynamic transitions between flow regimes, and (3) linkages between heatup and relocation code modules. The purpose of this report is to provide a detailed description of the MELMRK 2.0 computer models for melt relocation. Also included are illustrative results for code testing, as well as an integrated calculation for meltdown of a Mark 31a assembly.

  12. Fuel rod modelling during transients: The TOUTATIS code

    International Nuclear Information System (INIS)

    Bentejac, F.; Bourreau, S.; Brochard, J.; Hourdequin, N.; Lansiart, S.

    2001-01-01

    The TOUTATIS code is devoted to the simulation of local PCI (pellet-cladding interaction) phenomena, in correlation with the METEOR code for the global behaviour of the fuel rod. More specifically, the TOUTATIS objective is to evaluate the mechanical stresses on the cladding during a power transient, thus predicting its behaviour in terms of stress corrosion cracking. Based upon the finite element computation code CASTEM 2000, TOUTATIS is a set of modules written in a macro language. The aim of this paper is to present both code modules: the axisymmetric two-dimensional module, modelling a single one-block pellet; and the three-dimensional module, modelling a radially fragmented pellet. Having shown the boundary conditions and the algorithms used, the application is illustrated by: a short presentation of the performance of the two-dimensional axisymmetric modelling as well as its limits; and the enhancement due to the three-dimensional modelling, displayed by sensitivity studies on the geometry, in this case the pellet height/diameter ratio. Finally, we show the ease of development inherent to the CASTEM 2000 system by depicting the process of a modelling enhancement: adding the possibility of an axial (horizontal) cracking of the pellet. In conclusion, the future improvements planned for the code are depicted. (author)

  13. Capabilities and requirements for modelling radionuclide transport in the geosphere

    International Nuclear Information System (INIS)

    Paige, R.W.; Piper, D.

    1989-02-01

    This report gives an overview of geosphere flow and transport models suitable for use by the Department of the Environment in the performance assessment of radioactive waste disposal sites. An outline methodology for geosphere modelling is proposed, consisting of a number of different types of model. A brief description of each of the component models is given, indicating the purpose of the model, the processes being modelled and the methodologies adopted. Areas requiring development are noted. (author)

  14. Compositional Model Based Fisher Vector Coding for Image Classification.

    Science.gov (United States)

    Liu, Lingqiao; Wang, Peng; Shen, Chunhua; Wang, Lei; Hengel, Anton van den; Wang, Chao; Shen, Heng Tao

    2017-12-01

    Deriving from the gradient vector of a generative model of local features, Fisher vector coding (FVC) has been identified as an effective coding method for image classification. Most, if not all, FVC implementations employ the Gaussian mixture model (GMM) as the generative model for local features. However, the representative power of a GMM can be limited because it essentially assumes that local features can be characterized by a fixed number of feature prototypes, and the number of prototypes is usually small in FVC. To alleviate this limitation, in this work, we break the convention which assumes that a local feature is drawn from one of a few Gaussian distributions. Instead, we adopt a compositional mechanism which assumes that a local feature is drawn from a Gaussian distribution whose mean vector is composed as a linear combination of multiple key components, and the combination weight is a latent random variable. In doing so we greatly enhance the representative power of the generative model underlying FVC. To implement our idea, we design two particular generative models following this compositional approach. In our first model, the mean vector is sampled from the subspace spanned by a set of bases and the combination weight is drawn from a Laplace distribution. In our second model, we further assume that a local feature is composed of a discriminative part and a residual part. As a result, a local feature is generated by the linear combination of discriminative part bases and residual part bases. The decomposition of the discriminative and residual parts is achieved via the guidance of a pre-trained supervised coding method. By calculating the gradient vector of the proposed models, we derive two new Fisher vector coding strategies. The first is termed Sparse Coding-based Fisher Vector Coding (SCFVC) and can be used as the substitute of traditional GMM based FVC. The second is termed Hybrid Sparse Coding-based Fisher vector coding (HSCFVC) since it
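    The GMM-based FVC baseline that this work extends can be sketched compactly. The sketch below is a hedged illustration of the standard Fisher vector gradient with respect to the Gaussian means only (a full encoding also includes variance terms and normalization steps); the local features and GMM parameters are invented.

    ```python
    # Illustrative GMM-based Fisher vector coding, mean-gradient part only:
    # soft-assign each local feature to the Gaussians, then accumulate
    # normalized first-order differences per component. Data invented.
    import numpy as np

    def fisher_vector_means(X, pi, mu, sigma):
        """Normalized log-likelihood gradient w.r.t. GMM means (N x D -> K*D)."""
        # Soft assignment (posterior) of each feature to each Gaussian.
        logp = -0.5 * (((X[:, None, :] - mu[None]) / sigma[None]) ** 2).sum(-1)
        logp += np.log(pi)[None] - np.log(sigma).sum(-1)[None]
        p = np.exp(logp - logp.max(1, keepdims=True))
        gamma = p / p.sum(1, keepdims=True)                       # N x K
        # First-order differences, weighted by the posteriors.
        fv = (gamma[..., None] * (X[:, None, :] - mu[None]) / sigma[None]).sum(0)
        fv /= (len(X) * np.sqrt(pi))[:, None]
        return fv.ravel()                                         # length K*D

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))                 # toy local features
    pi = np.array([0.5, 0.5])                     # mixture weights
    mu = np.array([[-1.0, 0.0], [1.0, 0.0]])      # component means
    sigma = np.ones((2, 2))                       # diagonal std deviations
    fv = fisher_vector_means(X, pi, mu, sigma)
    print(fv.shape)  # (4,)
    ```

    The paper's compositional models replace the fixed set of Gaussian means above with means composed from latent combinations of bases, and then differentiate through that richer generative model instead.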

  15. Extension of the simulation capabilities of the 1D system code ATHLET by coupling with the 3D CFD software package ANSYS CFX

    International Nuclear Information System (INIS)

    Papukchiev, Angel; Lerchl, Georg; Waata, Christine; Frank, Thomas

    2009-01-01

    The thermal-hydraulic system code ATHLET (Analysis of THermal-hydraulics of LEaks and Transients) is developed at Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) for the analysis of anticipated and abnormal plant transients, small and intermediate leaks as well as large breaks in light water reactors. The aim of the code development is to cover the whole spectrum of design basis and beyond design basis accidents (without core degradation) for PWRs and BWRs. In order to extend the simulation capabilities of the 1D system code ATHLET, different approaches are applied at GRS to enable multidimensional thermal-hydraulic representation of relevant primary circuit geometries. One of the current major strategies at the technical safety organization is the coupling of ATHLET with the commercial 3D Computational Fluid Dynamics (CFD) software package ANSYS CFX. This code is a general purpose CFD software program that combines an advanced solver with powerful pre- and post-processing capabilities. It is an efficient tool for simulating the behavior of systems involving fluid flow, heat transfer, and other related physical processes. In the frame of the German CFD Network on Nuclear Reactor Safety, GRS and ANSYS Germany developed a general computer interface for the coupling of both codes. This paper focuses on the methodology and the challenges related to the coupling process. A great number of simulations including test cases with closed loop configurations have been carried out to evaluate and improve the performance of the coupled code system. Selected results of the 1D-3D thermal-hydraulic calculations are presented and analyzed. Preliminary comparative calculations with CFX-ATHLET and ATHLET stand alone showed very good agreement. Nevertheless, an extensive validation of the developed coupled code is planned. Finally, the optimization potential of the coupling methodology is discussed. (author)

  16. The JCSS probabilistic model code: Experience and recent developments

    NARCIS (Netherlands)

    Chryssanthopoulos, M.; Diamantidis, D.; Vrouwenvelder, A.C.W.M.

    2003-01-01

    The JCSS Probabilistic Model Code (JCSS-PMC) has been available for public use on the JCSS website (www.jcss.ethz.ch) for over two years. During this period, several examples have been worked out and new probabilistic models have been added. Since the engineering community has already been exposed

  17. Are Hydrostatic Models Still Capable of Simulating Oceanic Fronts

    Science.gov (United States)

    2016-11-10

    stress components which can be modeled by a turbulence closure model. In the present study, the standard Smagorinsky LES model is used. The conservation...is used to solve the pressure Poisson equation. The model is parallelized with Message Passing Interface (MPI). 2.2 Modification to NHWAVE

  18. Communications, Navigation, and Surveillance Models in ACES: Design Implementation and Capabilities

    Science.gov (United States)

    Kubat, Greg; Vandrei, Don; Satapathy, Goutam; Kumar, Anil; Khanna, Manu

    2006-01-01

    Presentation objectives include: a) Overview of the ACES/CNS System Models Design and Integration; b) Configuration Capabilities available for Models and Simulations using ACES with CNS Modeling; c) Descriptions of recently added, Enhanced CNS Simulation Capabilities; and d) General Concepts Ideas that Utilize CNS Modeling to Enhance Concept Evaluations.

  19. A predictive transport modeling code for ICRF-heated tokamaks

    International Nuclear Information System (INIS)

    Phillips, C.K.; Hwang, D.Q.

    1992-02-01

    In this report, a detailed description of the physics included in the WHIST/RAZE package as well as a few illustrative examples of the capabilities of the package will be presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be discussed in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package discussed in section 5.

  20. An improved thermal model for the computer code NAIAD

    International Nuclear Information System (INIS)

    Rainbow, M.T.

    1982-12-01

    An improved thermal model, based on the concept of heat slabs, has been incorporated as an option into the thermal hydraulic computer code NAIAD. The heat slabs are one-dimensional thermal conduction models with temperature independent thermal properties which may be internal and/or external to the fluid. Thermal energy may be added to or removed from the fluid via heat slabs and passed across the external boundary of external heat slabs at a rate which is a linear function of the external surface temperatures. The code input for the new option has been restructured to simplify data preparation. A full description of current input requirements is presented
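    The heat-slab concept described above, one-dimensional conduction with temperature-independent properties and prescribed surface temperatures, can be sketched with an explicit finite-difference scheme. Grid size, diffusion number, and temperatures below are illustrative, not NAIAD input.

```python
# Sketch of a one-dimensional "heat slab": explicit finite-difference
# conduction with temperature-independent properties and fixed surface
# temperatures. All values are illustrative.

def slab_step(T, r):
    """One explicit time step; r = alpha*dt/dx^2 must be <= 0.5 for stability."""
    Tn = T[:]
    for i in range(1, len(T) - 1):
        Tn[i] = T[i] + r * (T[i+1] - 2.0*T[i] + T[i-1])
    return Tn  # boundary nodes keep their prescribed surface temperatures

T = [100.0] + [20.0] * 10       # hot face at 100 C, cold face at 20 C
for _ in range(3000):
    T = slab_step(T, r=0.4)
# the steady state is a linear profile between the two faces
```

    At steady state the interior temperatures fall on the straight line between the two surface temperatures, so the midpoint of an 11-node slab sits at 60 C here.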

  1. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    Directory of Open Access Journals (Sweden)

    W. Bastiaan Kleijn

    2005-06-01

    Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.

  2. Thermal-hydraulic models and correlations for the SPACE code

    International Nuclear Information System (INIS)

    Kim, K. D.; Lee, S. W.; Bae, S. W.; Moon, S. K.; Kim, S. Y.; Lee, Y. H.

    2009-01-01

    The SPACE code, which is based on a multi-dimensional two-fluid, three-field model, is under development to be used for licensing future pressurized water reactors. Several research and industrial organizations participate in the collaborative development program, including KAERI, KHNP, KOPEC, KNF, and KEPRI. KAERI has been assigned to develop the thermal-hydraulic models and correlations which are required to solve the field equations as the closure relationships. This task can be categorized into five packages: i) a flow regime selection package, ii) a wall and interfacial friction package, iii) an interfacial heat and mass transfer package, iv) a droplet entrainment and de-entrainment package and v) a wall heat transfer package. Since the SPACE code, unlike other major best-estimate nuclear reactor system analysis codes such as RELAP5, TRAC-M and CATHARE, which consider only liquid and vapor phases, incorporates a dispersed liquid field in addition to the vapor and continuous liquid fields, interfacial interaction models between the continuous liquid, dispersed liquid and vapor phases have to be developed separately. Proper physical models can significantly improve the accuracy of the prediction of nuclear reactor system behavior under many different transient conditions because those models compose the source terms of the governing equations. In this paper, a development program for the physical models and correlations for the SPACE code is introduced briefly.
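    As a purely hypothetical illustration of the first closure package (flow regime selection), the sketch below picks a regime from the void fraction alone. Real system codes use much richer maps that also depend on mass flux, pressure, and geometry; the thresholds and regime names here are invented.

```python
# Hypothetical flow-regime selection: choose a regime from void fraction
# alone. The thresholds below are invented for illustration and are NOT
# the SPACE code's flow regime map.

def flow_regime(void_fraction):
    if not 0.0 <= void_fraction <= 1.0:
        raise ValueError("void fraction must lie in [0, 1]")
    if void_fraction < 0.25:
        return "bubbly"
    if void_fraction < 0.75:
        return "slug/churn"
    if void_fraction < 0.95:
        return "annular"
    return "dispersed droplet"   # continuous vapor carrying a droplet field
```

    The selected regime then dictates which interfacial friction and heat/mass transfer correlations are applied, which is why regime selection sits first among the closure packages.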

  3. COCOA code for creating mock observations of star cluster models

    Science.gov (United States)

    Askar, Abbas; Giersz, Mirek; Pych, Wojciech; Dalessandro, Emanuele

    2018-04-01

    We introduce and present results from the COCOA (Cluster simulatiOn Comparison with ObservAtions) code that has been developed to create idealized mock photometric observations using results from numerical simulations of star cluster evolution. COCOA is able to present the output of realistic numerical simulations of star clusters carried out using Monte Carlo or N-body codes in a way that is useful for direct comparison with photometric observations. In this paper, we describe the COCOA code and demonstrate its different applications by utilizing globular cluster (GC) models simulated with the MOCCA (MOnte Carlo Cluster simulAtor) code. COCOA is used to synthetically observe these different GC models with optical telescopes, perform point spread function photometry, and subsequently produce observed colour-magnitude diagrams. We also use COCOA to compare the results from synthetic observations of a cluster model that has the same age and metallicity as the Galactic GC NGC 2808 with observations of the same cluster carried out with a 2.2 m optical telescope. We find that COCOA can effectively simulate realistic observations and recover photometric data. COCOA has numerous scientific applications that may be helpful for both theoreticians and observers who work on star clusters. Plans for further improving and developing the code are also discussed in this paper.

  4. TRIPOLI-4® Monte Carlo code ITER A-lite neutronic model validation

    Energy Technology Data Exchange (ETDEWEB)

    Jaboulay, Jean-Charles, E-mail: jean-charles.jaboulay@cea.fr [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France); Cayla, Pierre-Yves; Fausser, Clement [MILLENNIUM, 16 Av du Québec Silic 628, F-91945 Villebon sur Yvette (France); Damian, Frederic; Lee, Yi-Kang; Puma, Antonella Li; Trama, Jean-Christophe [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France)

    2014-10-15

    3D Monte Carlo transport codes are extensively used in neutronic analysis, especially in radiation protection and shielding analyses for fission and fusion reactors. TRIPOLI-4® is a Monte Carlo code developed by CEA. The aim of this paper is to show its capability to model a large-scale fusion reactor with complex neutron source and geometry. A benchmark between MCNP5 and TRIPOLI-4®, on the ITER A-lite model was carried out; neutron flux, nuclear heating in the blankets and tritium production rate in the European TBMs were evaluated and compared. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model. Simplified TBMs, from KIT, were integrated in the equatorial-port. A good agreement between MCNP and TRIPOLI-4® is shown; discrepancies are mainly included in the statistical error.

  5. Code Development for Control Design Applications: Phase I: Structural Modeling

    International Nuclear Information System (INIS)

    Bir, G. S.; Robinson, M.

    1998-01-01

    The design of integrated controls for a complex system like a wind turbine relies on a system model in an explicit format, e.g., state-space format. Current wind turbine codes focus on turbine simulation and not on system characterization, which is desired for controls design as well as applications like operating turbine model analysis, optimal design, and aeroelastic stability analysis. This paper reviews structural modeling that comprises three major steps: formation of component equations, assembly into system equations, and linearization

  6. Atmospheric disturbance model for aircraft and space capable vehicles

    Science.gov (United States)

    Chimene, Beau C.; Park, Young W.; Bielski, W. P.; Shaughnessy, John D.; Mcminn, John D.

    1992-01-01

    An atmospheric disturbance model (ADM) is developed that considers the requirements of advanced aerospace vehicles and balances algorithmic assumptions with computational constraints. The requirements for an ADM include a realistic power spectrum, inhomogeneity, and the cross-correlation of atmospheric effects. The baseline models examined include the Global Reference Atmospheric Model Perturbation-Modeling Technique, the Dryden Small-Scale Turbulence Description, and the Patchiness Model. The Program to Enhance Random Turbulence (PERT) is developed based on the previous models but includes a revised formulation of large-scale atmospheric disturbance, an inhomogeneous Dryden filter, turbulence statistics, and the cross-correlation between Dryden Turbulence Filters and small-scale thermodynamics. Verification with the Monte Carlo approach demonstrates that the PERT software provides effective simulations of inhomogeneous atmospheric parameters.
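    A first-order Gauss-Markov filter gives the flavor of Dryden-type turbulence generation: white noise is filtered so the output has a prescribed standard deviation and a correlation length L at airspeed V. This is a schematic stand-in written for illustration, not the PERT formulation or the actual Dryden filter shapes.

```python
import math
import random

# Dryden-like gust generator (first-order Gauss-Markov process): white
# noise filtered to a target standard deviation sigma and correlation
# length L at airspeed V. A schematic stand-in, not the PERT filters.

def dryden_like_series(n, dt, V, L, sigma, seed=1):
    rng = random.Random(seed)
    a = math.exp(-V * dt / L)            # per-step correlation
    b = sigma * math.sqrt(1.0 - a * a)   # keeps output variance at sigma^2
    u, out = 0.0, []
    for _ in range(n):
        u = a * u + b * rng.gauss(0.0, 1.0)
        out.append(u)
    return out

gusts = dryden_like_series(5000, dt=0.01, V=50.0, L=50.0, sigma=2.0)
```

    Over a long run the sample standard deviation approaches sigma, which is the property the coefficient b is chosen to enforce.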

  7. Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS

    Science.gov (United States)

    2015-09-30

    wind- and thermohaline-forced isopycnic coordinate model of the North Atlantic. J. Phys. Oceanogr. 22, 1486–1505. Bleck, R., 2002. An oceanic general... circulation model framed in hybrid isopycnic-Cartesian coordinates. Ocean Modell. 4, 55–88. Buijsman, M.C., Kanarska, Y., McWilliams, J.C., 2010...continental margin. Cont. Shelf Res. 24 (6), 693–720. Nakayama, K. and Imberger, J. 2010 Residual circulation due to internal waves shoaling on a slope

  8. The Global Modeling Test Bed - Building a New National Capability for Advancing Operational Global Modeling in the United States.

    Science.gov (United States)

    Toepfer, F.; Cortinas, J. V., Jr.; Kuo, W.; Tallapragada, V.; Stajner, I.; Nance, L. B.; Kelleher, K. E.; Firl, G.; Bernardet, L.

    2017-12-01

    NOAA develops, operates, and maintains an operational global modeling capability for weather, subseasonal, and seasonal prediction for the protection of life and property and fostering the US economy. In order to substantially improve the overall performance and accelerate advancements of the operational modeling suite, NOAA is partnering with NCAR to design and build the Global Modeling Test Bed (GMTB). The GMTB has been established to provide a platform and a capability for researchers to contribute to this advancement, primarily through the development of physical parameterizations needed to improve operational NWP. The strategy to achieve this goal relies on effectively leveraging global expertise through a modern collaborative software development framework. This framework consists of a repository of vetted and supported physical parameterizations known as the Common Community Physics Package (CCPP), a common well-documented interface known as the Interoperable Physics Driver (IPD) for combining schemes into suites and for their configuration and connection to dynamic cores, and an open evidence-based governance process for managing the development and evolution of the CCPP. In addition, a physics test harness designed to work within this framework has been established in order to facilitate easier like-to-like comparison of physics advancements. This paper will present an overview of the design of the CCPP and test platform. Additionally, an overview of new opportunities for physics developers to engage in the process, from implementing code for CCPP/IPD compliance to testing their development within an operational-like software environment, will be presented. Insight will also be given as to how development gets elevated to CCPP-supported status, the precursor to broad availability and use within operational NWP. An overview of how the GMTB can be expanded to support other global or regional modeling capabilities will also be presented.
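    The scheme-registry idea behind a common physics package can be sketched in a few lines: parameterizations register under a name, and a driver composes a named suite and runs it over a model state. The decorator, state layout, and scheme names below are invented for illustration and are not the CCPP/IPD API.

```python
# Toy "common physics package": schemes register under a name; a driver
# composes them into a suite. Names and state layout are invented and do
# not reflect the real CCPP/IPD interfaces.

SCHEMES = {}

def register(name):
    def wrap(fn):
        SCHEMES[name] = fn
        return fn
    return wrap

@register("radiation_stub")
def radiation(state):
    state["T"] -= 0.1          # crude radiative cooling tendency (K)
    return state

@register("surface_flux_stub")
def surface_flux(state):
    state["T"] += 0.15         # crude surface heating tendency (K)
    return state

def run_suite(suite, state):
    """Apply the named schemes in order, as a driver composing a suite."""
    for name in suite:
        state = SCHEMES[name](state)
    return state

state = run_suite(["radiation_stub", "surface_flux_stub"], {"T": 288.0})
```

    Because suites are just ordered lists of registered names, swapping or reordering parameterizations requires no change to the driver, which is the interoperability property the abstract emphasizes.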

  9. Inclusion of models to describe severe accident conditions in the fuel simulation code DIONISIO

    Energy Technology Data Exchange (ETDEWEB)

    Lemes, Martín; Soba, Alejandro [Sección Códigos y Modelos, Gerencia Ciclo del Combustible Nuclear, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina); Daverio, Hernando [Gerencia Reactores y Centrales Nucleares, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina); Denis, Alicia [Sección Códigos y Modelos, Gerencia Ciclo del Combustible Nuclear, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina)

    2017-04-15

    The simulation of fuel rod behavior is a complex task that demands not only accurate models of the numerous phenomena occurring in the pellet, cladding and internal rod atmosphere but also an adequate interconnection between them. In recent years several models have been incorporated into the DIONISIO code with the purpose of increasing its precision and reliability. After the regrettable events at Fukushima, the need for codes capable of simulating nuclear fuels under accident conditions has become evident. Heat removal occurs in a quite different way than during normal operation, and this fact determines a completely new set of conditions for the fuel materials. A detailed description of the different regimes the coolant may exhibit in such a wide variety of scenarios requires a thermal-hydraulic formulation not suitable for inclusion in a fuel performance code; moreover, a number of reliable, well-established codes already perform this task. Nevertheless, keeping in mind the purpose of building a code focused on fuel behavior, a subroutine was developed for the DIONISIO code that performs a simplified analysis of the coolant in a PWR, restricted to the more representative situations, and provides the fuel simulation with the boundary conditions necessary to reproduce accident situations. In the present work this subroutine is described and the results of comparisons with experimental data and with thermal-hydraulic codes are offered. It is verified that, in spite of its comparative simplicity, the predictions of this module of DIONISIO do not differ significantly from those of the specific, complex codes.

  10. Capabilities For Modelling Of Conversion Processes In Life Cycle Assessment

    DEFF Research Database (Denmark)

    Damgaard, Anders; Zarrin, Bahram; Tonini, Davide

    Life cycle assessment was traditionally used for modelling of product design and optimization. This is also seen in the conventional LCA software which is optimized for the modelling of single materials streams of a homogeneous nature that is assembled into a final product. There has therefore been...

  11. The Creation and Use of an Analysis Capability Maturity Model (trademark) (ACMM)

    National Research Council Canada - National Science Library

    Covey, R. W; Hixon, D. J

    2005-01-01

    .... Capability Maturity Models (trademark) (CMMs) are being used in several intellectual endeavors, such as software engineering, software acquisition, and systems engineering. This Analysis CMM (ACMM...

  12. Code Shift: Grid Specifications and Dynamic Wind Turbine Models

    DEFF Research Database (Denmark)

    Ackermann, Thomas; Ellis, Abraham; Fortmann, Jens

    2013-01-01

    Grid codes (GCs) and dynamic wind turbine (WT) models are key tools to allow increasing renewable energy penetration without challenging security of supply. In this article, the state of the art and the further development of both tools are discussed, focusing on the European and North American e...

  13. Testing geochemical modeling codes using New Zealand hydrothermal systems

    International Nuclear Information System (INIS)

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1993-12-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of selected portions of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will: (1) ensure that we are providing adequately for all significant processes occurring in natural systems; (2) determine the adequacy of the mathematical descriptions of the processes; (3) check the adequacy and completeness of thermodynamic data as a function of temperature for solids, aqueous species and gases; and (4) determine the sensitivity of model results to the manner in which the problem is conceptualized by the user and then translated into constraints in the code input. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions. The kinetics of silica precipitation in EQ6 will be tested using field data from silica-lined drain channels carrying hot water away from the Wairakei borefield
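    The fluid-solid equilibrium comparisons described above hinge on the saturation index, SI = log10(IAP/Ksp), evaluated for each mineral: positive SI means the fluid is supersaturated and the mineral tends to precipitate, negative means it tends to dissolve. A minimal sketch, with purely illustrative activity products:

```python
import math

# Saturation-index test applied by geochemical equilibrium codes:
# compare the ion activity product (IAP) with the solubility product
# (Ksp) via SI = log10(IAP/Ksp). Values below are illustrative only.

def saturation_index(iap, ksp):
    return math.log10(iap / ksp)

def saturation_state(si, tol=0.05):
    if si > tol:
        return "supersaturated"   # mineral tends to precipitate
    if si < -tol:
        return "undersaturated"   # mineral tends to dissolve
    return "near equilibrium"

# e.g. a silica phase with hypothetical IAP and Ksp
si = saturation_index(iap=10**-2.5, ksp=10**-2.7)
```

    Plotting SI (or mineral affinity, which is proportional to it) against temperature is exactly the kind of affinity-temperature diagram the abstract recommends using alongside EQ6.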

  14. On the predictive capabilities of multiphase Darcy flow models

    KAUST Repository

    Icardi, Matteo

    2016-01-09

    Darcy's law is a widely used model and the limits of its validity are fairly well known. When the flow is sufficiently slow and the porosity relatively homogeneous and low, Darcy's law is the homogenized equation arising from the Stokes and Navier-Stokes equations and depends on a single effective parameter (the absolute permeability). However, when the model is extended to multiphase flows, the assumptions are much more restrictive and less realistic. Therefore it is often used in conjunction with empirical models (such as relative permeability and capillary pressure curves), usually derived from phenomenological speculation and experimental data fitting. In this work, we present the results of a Bayesian calibration of a two-phase flow model, using high-fidelity direct numerical simulation (DNS) at the pore scale in a realistic porous medium. These reference results have been obtained from a Navier-Stokes solver coupled with an explicit interphase-tracking scheme. The Bayesian inversion is performed on a simplified 1D model in Matlab using an adaptive spectral method. Several data sets are generated and considered to assess the validity of this 1D model.
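    The empirical closures mentioned above can be sketched as a per-phase Darcy flux with power-law (Brooks-Corey-type) relative permeability curves. The exponents and parameter values below are illustrative fitting choices, not results from the paper.

```python
# Two-phase Darcy closure sketch: relative permeabilities as power laws
# of water saturation, then a phase Darcy flux q = -(k*kr/mu) dp/dx.
# Exponents, endpoints, and values are illustrative only.

def rel_perm(sw, n_w=4.0, n_nw=2.0):
    """Brooks-Corey-type relative permeabilities vs. water saturation sw."""
    sw = min(max(sw, 0.0), 1.0)
    return sw ** n_w, (1.0 - sw) ** n_nw   # (wetting, non-wetting)

def darcy_flux(k_abs, kr, mu, dp_dx):
    """Phase Darcy flux (m/s): q = -(k_abs*kr/mu) * dp/dx."""
    return -k_abs * kr / mu * dp_dx

krw, krn = rel_perm(0.5)
qw = darcy_flux(k_abs=1e-12, kr=krw, mu=1e-3, dp_dx=-1e5)   # water phase
```

    The Bayesian calibration described in the abstract amounts to inferring parameters like the exponents n_w and n_nw from the pore-scale reference data instead of fixing them a priori.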

  15. Uncertainty quantification's role in modeling and simulation planning, and credibility assessment through the predictive capability maturity model

    Energy Technology Data Exchange (ETDEWEB)

    Rider, William J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Witkowski, Walter R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mousseau, Vincent Andrew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-04-13

    The importance of credible, trustworthy numerical simulations is obvious, especially when the results are used to make high-consequence decisions. Determining the credibility of such numerical predictions is much more difficult and requires a systematic approach to assessing predictive capability, associated uncertainties, and overall confidence in the computational simulation process for the intended use of the model. This process begins with an evaluation of the computational modeling of the identified, important physics of the simulation for its intended use, commonly done through a Phenomena Identification Ranking Table (PIRT). Then an assessment of the evidence basis supporting the ability to computationally simulate these physics can be performed using frameworks such as the Predictive Capability Maturity Model (PCMM). Several critical activities follow in the areas of code and solution verification, validation, and uncertainty quantification, which are described in detail in the following sections. Here we introduce the subject matter for general applications, but specifics are given for the failure prediction project. In addition, the first task that must be completed in the verification and validation procedure is a credibility assessment to fully understand the requirements and limitations of the current computational simulation capability for the specific intended use. The PIRT and PCMM are tools used at Sandia National Laboratories (SNL) to provide a consistent manner of performing such an assessment. Ideally, all stakeholders should be represented and contribute to an accurate credibility assessment. PIRTs and PCMMs are both described briefly below, and the resulting assessments for an example project are given.

  16. Computable general equilibrium model fiscal year 2013 capability development report

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-17

    This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), first developed in fiscal year 2012. In fiscal year 2013, NISAC extended the treatment of the labor market and performed tests to examine the properties of the solutions computed by the model. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine the simulation properties of the model in more detail.

  17. Lattice Boltzmann model capable of mesoscopic vorticity computation.

    Science.gov (United States)

    Peng, Cheng; Guo, Zhaoli; Wang, Lian-Ping

    2017-11-01

    It is well known that standard lattice Boltzmann (LB) models allow the strain-rate components to be computed mesoscopically (i.e., through the local particle distributions) and as such possess a second-order accuracy in strain rate. This is one of the appealing features of the lattice Boltzmann method (LBM) which is of only second-order accuracy in hydrodynamic velocity itself. However, no known LB model can provide the same quality for vorticity and pressure gradients. In this paper, we design a multiple-relaxation time LB model on a three-dimensional 27-discrete-velocity (D3Q27) lattice. A detailed Chapman-Enskog analysis is presented to illustrate all the necessary constraints in reproducing the isothermal Navier-Stokes equations. The remaining degrees of freedom are carefully analyzed to derive a model that accommodates mesoscopic computation of all the velocity and pressure gradients from the nonequilibrium moments. This way of vorticity calculation naturally ensures a second-order accuracy, which is also proven through an asymptotic analysis. We thus show, with enough degrees of freedom and appropriate modifications, the mesoscopic vorticity computation can be achieved in LBM. The resulting model is then validated in simulations of a three-dimensional decaying Taylor-Green flow, a lid-driven cavity flow, and a uniform flow passing a fixed sphere. Furthermore, it is shown that the mesoscopic vorticity computation can be realized even with single relaxation parameter.
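    The mesoscopic moment computation the abstract refers to can be illustrated on the simpler D2Q9 lattice: density and momentum are recovered exactly as the zeroth and first moments of the particle distributions. (The paper's model is a D3Q27 multiple-relaxation-time scheme; this sketch shows only the moment machinery, not the vorticity extension.)

```python
# D2Q9 illustration of mesoscopic moments: build equilibrium
# distributions for a given density and velocity, then recover density
# and momentum as the zeroth and first moments. Values are illustrative.

W = [4/9] + [1/9]*4 + [1/36]*4                     # lattice weights
C = [(0,0), (1,0), (0,1), (-1,0), (0,-1),
     (1,1), (-1,1), (-1,-1), (1,-1)]               # lattice velocities

def feq(rho, ux, uy):
    """Standard second-order D2Q9 equilibrium distributions."""
    u2 = ux*ux + uy*uy
    f = []
    for w, (cx, cy) in zip(W, C):
        cu = cx*ux + cy*uy
        f.append(w * rho * (1.0 + 3.0*cu + 4.5*cu*cu - 1.5*u2))
    return f

f = feq(rho=1.2, ux=0.05, uy=-0.02)
rho = sum(f)                                       # zeroth moment
jx = sum(fi*cx for fi, (cx, cy) in zip(f, C))      # first moments
jy = sum(fi*cy for fi, (cx, cy) in zip(f, C))
```

    The lattice symmetries make these moments exact (rho and rho*u are recovered to round-off); the paper's contribution is arranging the extra D3Q27 degrees of freedom so that gradients such as vorticity are likewise available from the nonequilibrium moments.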

  19. Dynamic Model on the Transmission of Malicious Codes in Network

    OpenAIRE

    Bimal Kumar Mishra; Apeksha Prajapati

    2013-01-01

    This paper introduces the differential susceptible e-epidemic model S_iIR (susceptible class 1 for viruses (S1), susceptible class 2 for worms (S2), susceptible class 3 for Trojan horses (S3), infectious (I), recovered (R)) for the transmission of malicious codes in a computer network. We derive the formula for the reproduction number (R0) to study the spread of malicious codes in the network. We show that the infection-free equilibrium is globally asymptotically stable and the endemic equilibrium...
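    A single-susceptible-class SIR analogue makes the role of the reproduction number concrete: R0 = beta/gamma, and an outbreak occurs when R0 > 1. The parameters and Euler integration below are illustrative and are not the paper's multi-class model.

```python
# Toy SIR analogue of the e-epidemic model above: compute R0 = beta/gamma
# and integrate the ODEs with forward Euler. Parameters are illustrative.

def simulate_sir(beta, gamma, s0=0.99, i0=0.01, dt=0.01, steps=20000):
    s, i, r = s0, i0, 0.0
    for _ in range(steps):
        new_inf = beta * s * i * dt     # S -> I transitions this step
        new_rec = gamma * i * dt        # I -> R transitions this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

R0 = 0.4 / 0.2                          # beta/gamma = 2 > 1: outbreak
s, i, r = simulate_sir(beta=0.4, gamma=0.2)
```

    With R0 = 2 the infection burns through most of the network and then dies out, leaving a final susceptible fraction near 0.2, consistent with the classical final-size relation ln(s_inf/s0) = -R0 (1 - s_inf).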

  20. Multi-phase model development to assess RCIC system capabilities under severe accident conditions

    Energy Technology Data Exchange (ETDEWEB)

    Kirkland, Karen Vierow [Texas A & M Univ., College Station, TX (United States); Ross, Kyle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Beeny, Bradley [Texas A & M Univ., College Station, TX (United States); Luthman, Nicholas [Texas A& M Engineering Experiment Station, College Station, TX (United States); Strater, Zachary [Texas A & M Univ., College Station, TX (United States)

    2017-12-23

    The Reactor Core Isolation Cooling (RCIC) System is a safety-related system that provides makeup water for core cooling of some Boiling Water Reactors (BWRs) with a Mark I containment. The RCIC System consists of a steam-driven Terry turbine that powers a centrifugal, multi-stage pump for providing water to the reactor pressure vessel. The Fukushima Dai-ichi accidents demonstrated that the RCIC System can play an important role under accident conditions in removing core decay heat. The unexpectedly sustained, good performance of the RCIC System in the Fukushima reactor demonstrates, firstly, that its capabilities are not well understood, and secondly, that the system has high potential for extended core cooling in accident scenarios. Better understanding and analysis tools would allow for more options to cope with a severe accident situation and to reduce the consequences. The objectives of this project were to develop physics-based models of the RCIC System, incorporate them into a multi-phase code and validate the models. This Final Technical Report details the progress throughout the project duration and the accomplishments.

  1. A compressible Navier-Stokes code for turbulent flow modeling

    Science.gov (United States)

    Coakley, T. J.

    1984-01-01

    An implicit, finite volume code for solving two dimensional, compressible turbulent flows is described. Second order upwind differencing of the inviscid terms of the equations is used to enhance stability and accuracy. A diagonal form of the implicit algorithm is used to improve efficiency. Several zero and two equation turbulence models are incorporated to study their impact on overall flow modeling accuracy. Applications to external and internal flows are discussed.
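    Upwind differencing is easiest to see on the simplest case, linear advection with positive wave speed, where the scheme takes its stencil from the upwind side. The sketch below is first-order and explicit, whereas the code described above is second-order and implicit; it only illustrates the stabilizing idea.

```python
# First-order upwind differencing for u_t + a*u_x = 0 with a > 0 on a
# periodic grid: the spatial difference uses the upwind (left) neighbor,
# which is what gives the scheme its stability. Values are illustrative.

def upwind_step(u, cfl):
    """One step of first-order upwind; cfl = a*dt/dx (stable for cfl <= 1).
    Python's negative indexing makes the grid periodic for free."""
    return [u[i] - cfl * (u[i] - u[i-1]) for i in range(len(u))]

n = 100
u = [1.0 if 10 <= i < 20 else 0.0 for i in range(n)]   # square pulse
for _ in range(50):
    u = upwind_step(u, cfl=1.0)   # cfl = 1: exact shift of one cell/step
# the pulse has been advected 50 cells to the right without distortion
```

    At cfl = 1 the scheme reproduces the exact solution (a pure shift); for cfl < 1 it remains stable but smears the pulse through numerical diffusion, which is the price of first-order upwinding.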

  2. Thermohydraulic modeling of nuclear thermal rockets: The KLAXON code

    International Nuclear Information System (INIS)

    Hall, M.L.; Rider, W.J.; Cappiello, M.W.

    1992-01-01

    The hydrogen flow from the storage tanks, through the reactor core, and out the nozzle of a Nuclear Thermal Rocket is an integral design consideration. To provide an analysis and design tool for this phenomenon, the KLAXON code is being developed. A shock-capturing numerical methodology is used to model the gas flow (the Harten, Lax, and van Leer method, as implemented by Einfeldt). Preliminary results of modeling the flow through the reactor core and nozzle are given in this paper
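    The Harten-Lax-van Leer (HLL) approach approximates each interface Riemann problem by a single intermediate state between two wave-speed estimates. Below is a sketch for the scalar Burgers flux f(u) = u^2/2 with the simple min/max speed estimates; the actual KLAXON code applies Einfeldt's variant to the gas-dynamics equations, so this is only the flavor of the method.

```python
# HLL-type approximate Riemann flux for the scalar Burgers equation
# f(u) = u^2/2, using min/max wave-speed estimates. Illustrative sketch
# of the shock-capturing flux family named in the abstract.

def hll_flux(uL, uR):
    """HLL numerical flux at an interface with left/right states uL, uR."""
    f = lambda u: 0.5 * u * u
    sL, sR = min(uL, uR), max(uL, uR)   # simple wave-speed estimates
    if sL >= 0.0:
        return f(uL)                    # all waves move right
    if sR <= 0.0:
        return f(uR)                    # all waves move left
    # intermediate state straddles the interface
    return (sR * f(uL) - sL * f(uR) + sL * sR * (uR - uL)) / (sR - sL)
```

    Summing such fluxes over cell faces gives a conservative update, which is what lets the method capture shocks in the nozzle flow without explicitly tracking them.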

  3. ORIGEN-ARP 2.00, Isotope Generation and Depletion Code System-Matrix Exponential Method with GUI and Graphics Capability

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Description of program or function: ORIGEN-ARP was developed for the Nuclear Regulatory Commission and the Department of Energy to satisfy a need for an easy-to-use standardized method of isotope depletion/decay analysis for spent fuel, fissile material, and radioactive material. It can be used to solve for spent fuel characterization, isotopic inventory, radiation source terms, and decay heat. This release of ORIGEN-ARP is a standalone code package that contains an updated version of the SCALE-4.4a ORIGEN-S code. It contains a subset of the modules, data libraries, and miscellaneous utilities in SCALE-4.4a. This package is intended for users who do not need the entire SCALE package. ORIGEN-ARP 2.00 (2-12-2002) differs from the previous release ORIGEN-ARP 1.0 (July 2001) in the following ways: 1.The neutron source and energy spectrum routines were replaced with computational algorithms and data from the SOURCES-4B code (RSICC package CCC-661) to provide more accurate spontaneous fission and (alpha,n) neutron sources, and a delayed neutron source capability was added. 2.The printout of the fixed energy group structure photon tables was removed. Gamma sources and spectra are now printed for calculations using the Master Photon Library only. 2 - Methods: ORIGEN-ARP is an automated sequence to perform isotopic depletion / decay calculations using the ARP and ORIGEN-S codes of the SCALE system. The sequence includes the OrigenArp for Windows graphical user interface (GUI) that prepares input for ARP (Automated Rapid Processing) and ORIGEN-S. ARP automatically interpolates cross sections for the ORIGEN-S depletion/decay analysis using enrichment, burnup, and, optionally moderator density, from a set of libraries generated with the SCALE SAS2 depletion sequence. Library sets for four LWR fuel assembly designs (BWR 8 x 8, PWR 14 x 14, 15 x 15, 17 x 17) are included. The libraries span enrichments from 1.5 to 5 wt% U-235 and burnups of 0 to 60,000 MWD/MTU. Other
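    The matrix exponential method named in the package title advances nuclide inventories as N(t) = exp(At) N(0), where A collects decay (and, in the real code, transmutation) rates. The toy below treats a two-nuclide decay chain with invented decay constants and a plain Taylor series for the 2x2 exponential; ORIGEN-S itself handles thousands of nuclides with cross-section terms.

```python
# Sketch of the matrix-exponential depletion/decay method for a tiny
# chain A -> B -> (stable). Decay constants are invented; the real code
# also carries transmutation cross sections and thousands of nuclides.

def expm2(a, t, terms=60):
    """exp(A*t) for a 2x2 matrix A (list of lists) via Taylor series."""
    m = [[a[i][j] * t for j in range(2)] for i in range(2)]
    result = [[1.0, 0.0], [0.0, 1.0]]          # identity = zeroth term
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        # term <- term * m / k, i.e. (A t)^k / k!
        term = [[sum(term[i][p] * m[p][j] for p in range(2)) / k
                 for j in range(2)] for i in range(2)]
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

lam_a, lam_b = 0.1, 0.02                       # decay constants (1/s)
A = [[-lam_a, 0.0], [lam_a, -lam_b]]           # chain A -> B
E = expm2(A, t=10.0)
N0 = [1000.0, 0.0]                             # initial atoms of A, B
N = [E[0][0]*N0[0] + E[0][1]*N0[1],
     E[1][0]*N0[0] + E[1][1]*N0[1]]
```

    For this small chain the result matches the closed-form Bateman solution; the advantage of the matrix formulation is that it extends unchanged to arbitrarily branched chains.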

  4. A new nuclide transport model in soil in the GENII-LIN health physics code

    Science.gov (United States)

    Teodori, F.

    2017-11-01

    The nuclide soil transfer model originally included in the GENII-LIN software system was intended for residual contamination from long-term activities and from waste-form degradation. Short-lived nuclides were assumed to be absent or in equilibrium with long-lived parents. Here we present an enhanced soil transport model in which short-lived nuclide contributions are correctly accounted for. This improvement extends the code's capabilities to handle accidental releases of contaminants to soil, by evaluating exposure from the very beginning of the contamination event, before radioactive decay chain equilibrium is reached.
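
    The key point of the enhanced model, tracking short-lived daughters before chain equilibrium, can be illustrated with the two-member Bateman solution (a standard textbook result, not code from GENII-LIN; the half-lives below are invented):

```python
import math

def bateman_two_member(n1_0, lam1, lam2, t):
    """Atoms of parent and daughter at time t for the chain
    parent -> daughter -> (stable), daughter initially absent.

    Two-member Bateman solution; illustrates why short-lived daughters
    must be tracked explicitly until equilibrium is reached.
    """
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return n1, n2

# Long-lived parent (half-life 30 y) feeding a short-lived daughter (half-life 2 d)
lam_p = math.log(2) / (30 * 365.25)   # decay constant, 1/day
lam_d = math.log(2) / 2.0             # decay constant, 1/day
n1, n2 = bateman_two_member(1.0e20, lam_p, lam_d, 20.0)
# After ~10 daughter half-lives the two activities approach equilibrium:
print(lam_d * n2 / (lam_p * n1))  # activity ratio, close to 1
```

    At early times the same ratio is far below 1, which is exactly the regime the enhanced model must resolve after an accidental release.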

  5. EASEWASTE-life cycle modeling capabilities for waste management technologies

    DEFF Research Database (Denmark)

    Bhander, Gurbakhash Singh; Christensen, Thomas Højlund; Hauschild, Michael Zwicky

    2010-01-01

    Background, Aims and Scope The management of municipal solid waste and the associated environmental impacts are subject of growing attention in industrialized countries. EU has recently strongly emphasized the role of LCA in its waste and resource strategies. The development of sustainable solid...... waste management model EASEWASTE, developed at the Technical University of Denmark specifically to meet the needs of the waste system developer with the objective to evaluate the environmental performance of the various elements of existing or proposed solid waste management systems. Materials...... and quantities as well as for the waste technologies mentioned above. The model calculates environmental impacts and resource consumptions and allows the user to trace all impacts to their source in a waste treatment processes or in a specific waste material fraction. In addition to the traditional impact...

  6. Capabilities for modelling of conversion processes in LCA

    DEFF Research Database (Denmark)

    Damgaard, Anders; Zarrin, Bahram; Tonini, Davide

    2015-01-01

    , EASETECH (Clavreul et al., 2014) was developed which integrates a matrix approach for the functional unit which contains the full chemical composition for different material fractions, and also the number of different material fractions present in the overall mass being handled. These chemical substances...... able to set constraints for a possible flow on basis of other flows, and also do return flows for some material streams. We have therefore developed a new editor for the EASETECH software, which allows the user to make specific process modules where the actual chemical conversion processes can...... be modelled and then integrated into the overall LCA model. This allows for flexible modules which automatically will adjust the material flows it is handling on basis of its chemical information, which can be set for multiple input materials at the same time. A case example of this was carried out for a bio...

  7. Application of flow network models of SINDA/FLUINT{sup TM} to a nuclear power plant system thermal hydraulic code

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Ji Bum [Institute for Advanced Engineering, Yongin (Korea, Republic of); Park, Jong Woon [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    In order to enhance the dynamic and interactive simulation capability of a system thermal hydraulic code for nuclear power plants, the applicability of the flow network models in SINDA/FLUINT{sup TM} has been tested by modeling the feedwater system and coupling it to DSNP, a system thermal hydraulic simulation code for a pressurized heavy water reactor. The feedwater system was selected since it is one of the most important balance-of-plant systems, with the potential to greatly affect the behavior of the nuclear steam supply system. The flow network model of the feedwater system consists of the condenser, condensate pumps, low- and high-pressure heaters, deaerator, feedwater pumps, and control valves. This complicated flow network was modeled, coupled to DSNP, and tested for several normal and abnormal transient conditions such as turbine load maneuvering, turbine trip, and loss of class IV power. The results show reasonable behavior of the coupled code and good dynamic and interactive simulation capability for the milder transient conditions. It has been found that coupling a system thermal hydraulic code with a flow network code is a proper way of upgrading the simulation capability of DSNP to a mature nuclear plant analyzer (NPA). 5 refs., 10 figs. (Author)

  8. Subsurface flow and transport of organic chemicals: an assessment of current modeling capability and priority directions for future research (1987-1995)

    Energy Technology Data Exchange (ETDEWEB)

    Streile, G.P.; Simmons, C.S.

    1986-09-01

    Theoretical and computer modeling capability for assessing the subsurface movement and fate of organic contaminants in groundwater was examined. This study is particularly concerned with energy-related organic compounds that could enter a subsurface environment and move as components of a liquid phase separate from groundwater. The migration of organic chemicals that exist in an aqueous dissolved state is certainly a part of this more general scenario; however, modeling of the transport of chemicals in aqueous solution has already been the subject of several reviews. Hence, this study emphasizes the multiphase scenario. The study was initiated to focus on the important physicochemical processes that control the behavior of organic substances in groundwater systems, to evaluate the theory describing these processes, and to search for and evaluate computer codes that implement models that correctly conceptualize the problem situation. This study is not a code inventory, and no effort was made to identify every available code capable of representing a particular process.

  9. An improved steam generator model for the SASSYS code

    International Nuclear Information System (INIS)

    Pizzica, P.A.

    1989-01-01

    A new steam generator model has been developed for the SASSYS computer code, which analyzes accident conditions in a liquid-metal-cooled fast reactor. It has been incorporated into the new SASSYS balance-of-plant model, but it can also function as a stand-alone model. The model provides a full solution of the steady-state condition before the transient calculation begins, for given sodium and water flow rates, inlet and outlet sodium temperatures, and inlet enthalpy and region lengths on the water side.

  10. Modeling developments for the SAS4A and SASSYS computer codes

    International Nuclear Information System (INIS)

    Cahalan, J.E.; Wei, T.Y.C.

    1990-01-01

    The SAS4A and SASSYS computer codes are being developed at Argonne National Laboratory for transient analysis of liquid metal cooled reactors. The SAS4A code is designed to analyse severe loss-of-coolant-flow and overpower accidents involving coolant boiling, cladding failures, and fuel melting and relocation. Recent SAS4A modeling developments include extension of the coolant boiling model to treat sudden fission gas release upon pin failure, expansion of the DEFORM fuel behavior model to handle advanced cladding materials and metallic fuel, and addition of metallic fuel modeling capability to the PINACLE and LEVITATE fuel relocation models. The SASSYS code is intended for the analysis of operational and beyond-design-basis transients, and provides a detailed transient thermal and hydraulic simulation of the core, the primary and secondary coolant circuits, and the balance-of-plant, in addition to a detailed model of the plant control and protection systems. Recent SASSYS modeling developments have resulted in detailed representations of the balance-of-plant piping network and components, including steam generators, feedwater heaters and pumps, and the turbine. 12 refs., 2 tabs

  11. Improvement of reflood model in RELAP5 code based on sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dong; Liu, Xiaojing; Yang, Yanhua, E-mail: yanhuay@sjtu.edu.cn

    2016-07-15

    Highlights: • Sensitivity analysis is performed on the reflood model of RELAP5. • The selected influential models are discussed and modified. • The modifications are assessed against the FEBA experiment and better predictions are obtained. - Abstract: Reflooding is an important and complex process for the safety of nuclear reactors during a loss-of-coolant accident (LOCA). Accurate prediction of reflooding behavior is one of the challenging tasks in current system code development. RELAP5, a widely used system code, has the capability to simulate this process but with limited accuracy, especially for low-inlet-flow-rate reflooding conditions. Through a preliminary assessment with six FEBA (Flooding Experiments with Blocked Arrays) tests, it was observed that the peak cladding temperature (PCT) is generally underestimated and bundle quench is predicted too early compared to the experimental data. In this paper, the improvement of constitutive models related to reflooding is carried out based on single-parameter sensitivity analysis. The film boiling heat transfer model and the interfacial friction model of dispersed flow are selected as the models most influential on the results of interest. Studies and discussions are then focused on these sensitive models and proper modifications are recommended. The proposed improvements are implemented in the RELAP5 code and assessed against the FEBA experiment. Better agreement between calculations and measured data for both cladding temperature and quench time is obtained.
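
    The single-parameter (one-at-a-time) screening used to rank the constitutive models can be sketched generically; the toy surrogate model and its coefficients below are invented stand-ins for a full RELAP5 reflood calculation:

```python
def peak_temperature(params):
    """Toy surrogate for a figure of merit such as peak cladding temperature.
    Purely illustrative; stands in for a full system-code calculation."""
    return 1200.0 - 150.0 * params["h_film"] + 80.0 * params["f_int"]

def one_at_a_time(model, base, factor=1.2):
    """Perturb each parameter by `factor` in turn, holding the others at
    their base values, and report the relative change in the output.
    This is the screening step used to rank candidate models."""
    y0 = model(base)
    sens = {}
    for name in base:
        perturbed = dict(base)
        perturbed[name] = base[name] * factor
        sens[name] = (model(perturbed) - y0) / y0
    return sens

base = {"h_film": 1.0, "f_int": 1.0}  # multipliers on two closure models
print(one_at_a_time(peak_temperature, base))
```

    Parameters with the largest relative effect (here the film-boiling multiplier) are the ones singled out for model improvement.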

  12. NASA Air Force Cost Model (NAFCOM): Capabilities and Results

    Science.gov (United States)

    McAfee, Julie; Culver, George; Naderi, Mahmoud

    2011-01-01

    NAFCOM is a parametric estimating tool for space hardware. It uses cost estimating relationships (CERs), which correlate historical costs to mission characteristics, to predict new project costs. It is based on historical NASA and Air Force space projects and is intended to be used in the very early phases of a development project. NAFCOM can be used at the subsystem or component level and estimates development and production costs. It is applicable to various types of missions (crewed spacecraft, uncrewed spacecraft, and launch vehicles). There are two versions of the model: a restricted government version and a contractor-releasable version.
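
    CERs of the kind NAFCOM uses are commonly power laws, cost = a·x^b, fitted in log-log space to historical data points. A minimal sketch with invented data (not NAFCOM's actual CERs, coefficients, or historical database):

```python
import math

def fit_power_law_cer(masses, costs):
    """Fit cost = a * mass**b by least squares in log-log space, the
    typical functional form of a weight-based CER. Data are invented."""
    lx = [math.log(m) for m in masses]
    ly = [math.log(c) for c in costs]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical historical points: (subsystem mass in kg, cost in $M)
masses = [100.0, 200.0, 400.0, 800.0]
costs = [10.0, 16.0, 25.6, 40.96]   # cost grows 1.6x per mass doubling
a, b = fit_power_law_cer(masses, costs)
print(a, b)  # estimated CER coefficients
```

    A new project's cost is then estimated by evaluating `a * mass**b` at the proposed subsystem mass, ideally with uncertainty bounds from the regression residuals.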

  13. Expanding the modeling capabilities of the cognitive environment simulation

    International Nuclear Information System (INIS)

    Roth, E.M.; Mumaw, R.J.; Pople, H.E. Jr.

    1991-01-01

    The Nuclear Regulatory Commission has been conducting a research program to develop more effective tools to model the cognitive activities that underlie intention formation during nuclear power plant (NPP) emergencies. Under this program, an artificial intelligence (AI) computer simulation called the Cognitive Environment Simulation (CES) has been developed. CES simulates the cognitive activities involved in responding to an NPP accident situation. It is intended to provide an analytic tool for predicting likely human responses, and the kinds of errors that can plausibly arise under different accident conditions, to support human reliability analysis. Recently, CES was extended to handle a class of interfacing-systems loss-of-coolant accidents (ISLOCAs). This paper summarizes the results of these exercises and describes follow-on work currently underway

  14. The 2010 fib Model Code for Structural Concrete: A new approach to structural engineering

    NARCIS (Netherlands)

    Walraven, J.C.; Bigaj-Van Vliet, A.

    2011-01-01

    The fib Model Code is a recommendation for the design of reinforced and prestressed concrete which is intended to be a guiding document for future codes. Model Codes have been published before, in 1978 and 1990. The draft for fib Model Code 2010 was published in May 2010. The most important new

  15. Developing Materials Processing to Performance Modeling Capabilities and the Need for Exascale Computing Architectures (and Beyond)

    Energy Technology Data Exchange (ETDEWEB)

    Schraad, Mark William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Physics and Engineering Models; Luscher, Darby Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Advanced Simulation and Computing

    2016-09-06

    Additive Manufacturing techniques are presenting the Department of Energy and the NNSA Laboratories with new opportunities to consider novel component production and repair processes, and to manufacture materials with tailored response and optimized performance characteristics. Additive Manufacturing technologies already are being applied to primary NNSA mission areas, including Nuclear Weapons. These mission areas are adapting to these new manufacturing methods, because of potential advantages, such as smaller manufacturing footprints, reduced needs for specialized tooling, an ability to embed sensing, novel part repair options, an ability to accommodate complex geometries, and lighter weight materials. To realize the full potential of Additive Manufacturing as a game-changing technology for the NNSA’s national security missions, however, significant progress must be made in several key technical areas. In addition to advances in engineering design, process optimization and automation, and accelerated feedstock design and manufacture, significant progress must be made in modeling and simulation. First and foremost, a more mature understanding of the process-structure-property-performance relationships must be developed. Because Additive Manufacturing processes change the nature of a material’s structure below the engineering scale, new models are required to predict materials response across the spectrum of relevant length scales, from the atomistic to the continuum. New diagnostics will be required to characterize materials response across these scales. And not just models, but advanced algorithms, next-generation codes, and advanced computer architectures will be required to complement the associated modeling activities. Based on preliminary work in each of these areas, a strong argument for the need for Exascale computing architectures can be made, if a legitimate predictive capability is to be developed.

  16. Plutonium explosive dispersal modeling using the MACCS2 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Steele, C.M.; Wald, T.L.; Chanin, D.I.

    1998-11-01

    The purpose of this paper is to derive the necessary parameters to be used to establish a defensible methodology to perform explosive dispersal modeling of respirable plutonium using Gaussian methods. A particular code, MACCS2, has been chosen for this modeling effort due to its application of sophisticated meteorological statistical sampling in accordance with the philosophy of Nuclear Regulatory Commission (NRC) Regulatory Guide 1.145, ``Atmospheric Dispersion Models for Potential Accident Consequence Assessments at Nuclear Power Plants``. A second advantage supporting the selection of the MACCS2 code for modeling purposes is that meteorological data sets are readily available at most Department of Energy (DOE) and NRC sites. This particular MACCS2 modeling effort focuses on the calculation of respirable doses and not ground deposition. Once the necessary parameters for the MACCS2 modeling are developed and presented, the model is benchmarked against empirical test data from the Double Tracks shot of project Roller Coaster (Shreve 1965) and applied to a hypothetical plutonium explosive dispersal scenario. Further modeling with the MACCS2 code is performed to determine a defensible method of treating the effects of building structure interaction on the respirable fraction distribution as a function of height. These results are related to the Clean Slate 2 and Clean Slate 3 bunkered shots of Project Roller Coaster. Lastly a method is presented to determine the peak 99.5% sector doses on an irregular site boundary in the manner specified in NRC Regulatory Guide 1.145 (1983). Parametric analyses are performed on the major analytic assumptions in the MACCS2 model to define the potential errors that are possible in using this methodology.
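
    The Gaussian dispersion at the heart of such modeling can be sketched with the standard plume equation including ground reflection (a textbook form, not MACCS2's implementation; all parameter values below are illustrative):

```python
import math

def gaussian_plume(q, u, h, y, z, sigma_y, sigma_z):
    """Gaussian plume concentration at crosswind distance y and height z
    for release rate q, wind speed u, and effective release height h,
    with ground reflection via an image source. The dispersion
    coefficients sigma_y, sigma_z are supplied directly here; a full code
    derives them from downwind distance and atmospheric stability class."""
    lateral = math.exp(-y * y / (2 * sigma_y ** 2))
    vertical = (math.exp(-((z - h) ** 2) / (2 * sigma_z ** 2))
                + math.exp(-((z + h) ** 2) / (2 * sigma_z ** 2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centerline, ground-level value for an illustrative elevated release
print(gaussian_plume(q=1.0, u=4.0, h=10.0, y=0.0, z=0.0,
                     sigma_y=30.0, sigma_z=15.0))
```

    A consequence code samples many weather sequences, evaluates expressions like this along each trajectory, and folds the result into inhalation dose via breathing rate and dose conversion factors.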

  17. Plutonium explosive dispersal modeling using the MACCS2 computer code

    International Nuclear Information System (INIS)

    Steele, C.M.; Wald, T.L.; Chanin, D.I.

    1998-01-01

    The purpose of this paper is to derive the necessary parameters to be used to establish a defensible methodology to perform explosive dispersal modeling of respirable plutonium using Gaussian methods. A particular code, MACCS2, has been chosen for this modeling effort due to its application of sophisticated meteorological statistical sampling in accordance with the philosophy of Nuclear Regulatory Commission (NRC) Regulatory Guide 1.145, ''Atmospheric Dispersion Models for Potential Accident Consequence Assessments at Nuclear Power Plants''. A second advantage supporting the selection of the MACCS2 code for modeling purposes is that meteorological data sets are readily available at most Department of Energy (DOE) and NRC sites. This particular MACCS2 modeling effort focuses on the calculation of respirable doses and not ground deposition. Once the necessary parameters for the MACCS2 modeling are developed and presented, the model is benchmarked against empirical test data from the Double Tracks shot of project Roller Coaster (Shreve 1965) and applied to a hypothetical plutonium explosive dispersal scenario. Further modeling with the MACCS2 code is performed to determine a defensible method of treating the effects of building structure interaction on the respirable fraction distribution as a function of height. These results are related to the Clean Slate 2 and Clean Slate 3 bunkered shots of Project Roller Coaster. Lastly a method is presented to determine the peak 99.5% sector doses on an irregular site boundary in the manner specified in NRC Regulatory Guide 1.145 (1983). Parametric analyses are performed on the major analytic assumptions in the MACCS2 model to define the potential errors that are possible in using this methodology

  18. Galactic Cosmic Ray Event-Based Risk Model (GERM) Code

    Science.gov (United States)

    Cucinotta, Francis A.; Plante, Ianik; Ponomarev, Artem L.; Kim, Myung-Hee Y.

    2013-01-01

    This software describes the transport and energy deposition of galactic cosmic rays passing through astronaut tissues during space travel, or of heavy ion beams in patients undergoing cancer therapy. Space radiation risk is a probability distribution, and time-dependent biological events must be accounted for in the physical description of space radiation transport in tissues and cells. A stochastic model can calculate the probability density directly, without unverified assumptions about the shape of the probability density function. Prior-art transport codes calculate the average flux and dose of particles behind spacecraft and tissue shielding. Because of the signaling times for activation and relaxation in the cell and tissue, a transport code must describe temporal and microspatial density functions in order to correlate DNA and oxidative damage with non-targeted effects such as bystander signaling; these effects are ignored by, or impossible in, the prior art. The GERM code provides scientists with data interpretation of experiments; modeling of the beam line, shielding of target samples, and sample holders; and estimation of the basic physical and biological outputs of their experiments. For mono-energetic ion beams, basic physical and biological properties are calculated for a selected ion type, such as kinetic energy, mass, charge number, absorbed dose, or fluence. Evaluated quantities include linear energy transfer (LET), range (R), absorption and fragmentation cross-sections, and the probability of nuclear interactions after 1 or 5 cm of water-equivalent material. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution of hits for a specified cellular area, cell survival curves, and DNA damage yields per cell. The GERM code also calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle in a selected material. The GERM code makes the numerical estimates of basic
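
    The Poisson traversal statistics mentioned above can be sketched directly (a generic track-structure calculation, not GERM's code; the fluence and nuclear cross-sectional area are illustrative):

```python
import math

def traversal_probability(fluence, area, k):
    """Probability that a cell nucleus of cross-sectional `area` (um^2)
    receives exactly k particle traversals at a given `fluence`
    (particles per um^2), assuming Poisson statistics."""
    mean = fluence * area
    return math.exp(-mean) * mean ** k / math.factorial(k)

fluence = 0.01   # particles per um^2
area = 100.0     # um^2 nucleus cross-section -> mean of 1 traversal
p0 = traversal_probability(fluence, area, 0)
print(p0)        # fraction of unhit cells, exp(-1) ~ 0.368
```

    Even at a mean of one traversal per nucleus, over a third of cells are never hit, which is why event-based (rather than average-dose) descriptions matter for risk.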

  19. Geochemical modelling of groundwater evolution using chemical equilibrium codes

    International Nuclear Information System (INIS)

    Pitkaenen, P.; Pirhonen, V.

    1991-01-01

    Geochemical equilibrium codes are a modern tool for studying the interaction between groundwater and solid phases. The most commonly used programs and their application areas are briefly presented in this article. The main emphasis is on the use of calculated results in evaluating groundwater evolution in a hydrogeological system. At present, kinetic as well as hydrologic constraints along a flow path are also taken into consideration in geochemical equilibrium modelling.
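
    A basic quantity such codes report for each mineral along a flow path is the saturation index, SI = log10(IAP/Ksp); a minimal sketch (generic definition; the example ion activity product and solubility product are illustrative):

```python
import math

def saturation_index(iap, ksp):
    """SI = log10(IAP / Ksp): positive means supersaturated (mineral may
    precipitate), near zero means equilibrium, negative means
    undersaturated (mineral may dissolve)."""
    return math.log10(iap / ksp)

# Illustrative values for a calcite-like mineral
si = saturation_index(iap=10 ** -8.3, ksp=10 ** -8.48)
print(si)  # slightly supersaturated
```

    Tracking how SI for key minerals changes along a flow path is the "approach method" referred to above: the sequence of minerals reaching saturation constrains the plausible evolution of the groundwater.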

  20. Description and assessment of deformation and temperature models in the FRAP-T6 code

    International Nuclear Information System (INIS)

    Siefken, L.J.

    1983-01-01

    The FRAP-T6 code was developed at the Idaho National Engineering Laboratory (INEL) for the purpose of calculating the transient performance of light water reactor fuel rods during reactor transients ranging from mild operational transients to severe hypothetical loss-of-coolant accidents. An important application of the FRAP-T6 code is to calculate the structural performance of fuel rod cladding. During a reactor transient, the cladding may be overstressed by mechanical interaction with thermally expanding fuel, overpressurized by fill and fission gases, embrittled by oxidation, and weakened by temperature increase. If the cladding does not crack, rupture, or melt, it has achieved the important objective of containing radioactive fission products. A combination of first-principle models and empirical correlations is used to solve for fuel rod performance. The incremental cladding plastic strains are calculated by the Prandtl-Reuss flow rule. The total strains and stresses are solved iteratively by Mendelson's method of successive substitution. A circumferentially nonuniform cladding temperature is taken into account in the modeling of cladding ballooning. Iodine concentration and irradiation are taken into account in the modeling of stress corrosion cracking. The gap heat transfer model takes into account the size of the fuel-cladding gap, fission gas release, fuel-cladding interface pressure, and the incomplete thermal accommodation of gas molecules to surface temperature. The heat transfer at the cladding surface is determined using a set of empirical correlations covering a wide range of heat transfer regimes. The capabilities of the FRAP-T6 code are assessed by comparing code calculations with measurements from several hundred in-pile experiments on fuel rods. The results of the assessments show that the code accurately and efficiently models the structural and thermal response of fuel rods. (orig.)

  1. Review of calculational models and computer codes for environmental dose assessment of radioactive releases

    International Nuclear Information System (INIS)

    Strenge, D.L.; Watson, E.C.; Droppo, J.G.

    1976-06-01

    The development of technological bases for siting nuclear fuel cycle facilities requires calculational models and computer codes for the evaluation of risks and the assessment of environmental impact of radioactive effluents. A literature search and review of available computer programs revealed that no one program was capable of performing all of the great variety of calculations (i.e., external dose, internal dose, population dose, chronic release, accidental release, etc.). Available literature on existing computer programs has been reviewed and a description of each program reviewed is given

  2. Review of calculational models and computer codes for environmental dose assessment of radioactive releases

    Energy Technology Data Exchange (ETDEWEB)

    Strenge, D.L.; Watson, E.C.; Droppo, J.G.

    1976-06-01

    The development of technological bases for siting nuclear fuel cycle facilities requires calculational models and computer codes for the evaluation of risks and the assessment of environmental impact of radioactive effluents. A literature search and review of available computer programs revealed that no one program was capable of performing all of the great variety of calculations (i.e., external dose, internal dose, population dose, chronic release, accidental release, etc.). Available literature on existing computer programs has been reviewed and a description of each program reviewed is given.

  3. Direct containment heating models in the CONTAIN code

    Energy Technology Data Exchange (ETDEWEB)

    Washington, K.E.; Williams, D.C.

    1995-08-01

    The potential exists in a nuclear reactor core melt severe accident for molten core debris to be dispersed under high pressure into the containment building. If this occurs, the set of phenomena that result in the transfer of energy to the containment atmosphere and its surroundings is referred to as direct containment heating (DCH). Because of the potential for DCH to lead to early containment failure, the U.S. Nuclear Regulatory Commission (USNRC) has sponsored an extensive research program consisting of experimental, analytical, and risk integration components. An important element of the analytical research has been the development and assessment of direct containment heating models in the CONTAIN code. This report documents the DCH models in the CONTAIN code. DCH models in CONTAIN for representing debris transport, trapping, chemical reactions, and heat transfer from debris to the containment atmosphere and surroundings are described. The descriptions include the governing equations and input instructions in CONTAIN unique to performing DCH calculations. Modifications made to the combustion models in CONTAIN for representing the combustion of DCH-produced and pre-existing hydrogen under DCH conditions are also described. Input table options for representing the discharge of debris from the RPV and the entrainment phase of the DCH process are also described. A sample calculation is presented to demonstrate the functionality of the models. The results show that reasonable behavior is obtained when the models are used to predict the sixth Zion geometry integral effects test at 1/10th scale.

  4. Channel modeling, signal processing and coding for perpendicular magnetic recording

    Science.gov (United States)

    Wu, Zheng

    With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. 
Moreover, the system performance can be further improved by
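
    The kind of analytical bit-error-rate estimate mentioned above is, in its simplest textbook form, a Gaussian Q-function of the normalized minimum distance (a generic AWGN estimate, not the dissertation's exact method; the distance and noise values are illustrative):

```python
import math

def ber_awgn(dmin, sigma):
    """Bit-error-rate estimate for a binary decision with minimum signal
    distance dmin in additive white Gaussian noise of std dev sigma:
    BER ~ Q(dmin / (2*sigma)), with Q(x) = 0.5 * erfc(x / sqrt(2))."""
    x = dmin / (2.0 * sigma)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

print(ber_awgn(dmin=2.0, sigma=0.5))  # Q(2) ~ 0.023
```

    Sweeping `dmin` as a function of a precompensation value turns an estimate like this into a cheap proxy for choosing the write precompensation that minimizes BER, before confirming with full channel simulation.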

  5. Temporal perceptual coding using a visual acuity model

    Science.gov (United States)

    Adzic, Velibor; Cohen, Robert A.; Vetro, Anthony

    2014-02-01

    This paper describes research and results in which a visual acuity (VA) model of the human visual system (HVS) is used to reduce the bitrate of coded video sequences by eliminating the need to signal transform coefficients when their corresponding frequencies will not be detected by the HVS. The VA model is integrated into the state-of-the-art HEVC HM codec. Compared to the unmodified codec, up to 45% bitrate savings are achieved while maintaining the same subjective quality of the video sequences. Encoding times are reduced as well.
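
    The core idea, skipping coefficients whose frequencies the VA model predicts the HVS cannot detect, can be caricatured as masking high-frequency transform coefficients of a block (a toy sketch, not the paper's HM integration; the block values and cutoff are invented):

```python
def suppress_coefficients(block, cutoff):
    """Zero transform coefficients whose (row + col) frequency index
    exceeds `cutoff`, a crude stand-in for not signaling coefficients a
    visual-acuity model deems undetectable. `block` is an 8x8 list of
    lists of transform coefficients."""
    return [[c if (i + j) <= cutoff else 0 for j, c in enumerate(row)]
            for i, row in enumerate(block)]

# Illustrative 8x8 block of nonzero coefficients
block = [[(i * 8 + j) % 7 + 1 for j in range(8)] for i in range(8)]
kept = suppress_coefficients(block, cutoff=3)
nonzero = sum(1 for row in kept for c in row if c != 0)
print(nonzero)  # only the low-frequency corner survives
```

    In a real codec the cutoff would vary per block with viewing distance, local motion, and contrast, which is what the VA model supplies.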

  6. Systematic effects in CALOR simulation code to model experimental configurations

    International Nuclear Information System (INIS)

    Job, P.K.; Proudfoot, J.; Handler, T.

    1991-01-01

    The CALOR89 code system is being used to simulate test beam results and the design parameters of several calorimeter configurations. It has been benchmarked against ZEUS, D0, and HELIOS data. This study identifies the systematic effects in CALOR simulations of the experimental configurations. Five major systematic effects are identified: the choice of high-energy nuclear collision model, material composition, scintillator saturation, shower integration time, and shower containment. Quantitative estimates of these systematic effects are presented. 23 refs., 6 figs., 7 tabs

  7. Toward a Probabilistic Automata Model of Some Aspects of Code-Switching.

    Science.gov (United States)

    Dearholt, D. W.; Valdes-Fallis, G.

    1978-01-01

    The purpose of the model is to select either Spanish or English as the language to be used; its goals at this stage of development include modeling code-switching for lexical need, apparently random code-switching, dependency of code-switching upon sociolinguistic context, and code-switching within syntactic constraints. (EJS)
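
    A minimal probabilistic automaton for language selection is a two-state Markov chain over {English, Spanish}; the transition probability below is invented for illustration and stands in for the sociolinguistic and syntactic conditioning the full model aims at:

```python
import random

def generate_language_sequence(n, p_stay=0.8, start="EN", rng=None):
    """Generate n language choices from a two-state Markov automaton:
    with probability p_stay the next word keeps the current language,
    otherwise it switches. A toy sketch of probabilistic code-switching;
    the transition probability is invented."""
    rng = rng or random.Random()
    seq, state = [], start
    for _ in range(n):
        seq.append(state)
        if rng.random() >= p_stay:
            state = "ES" if state == "EN" else "EN"
    return seq

seq = generate_language_sequence(20, rng=random.Random(42))
print(seq)
```

    Conditioning `p_stay` on context (lexical need, interlocutor, syntactic position) would move this toy chain toward the model's stated goals.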

  8. Benchmarking of computer codes and approaches for modeling exposure scenarios

    International Nuclear Information System (INIS)

    Seitz, R.R.; Rittmann, P.D.; Wood, M.I.; Cook, J.R.

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided
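
    The "spreadsheet-level" baseline in such a comparison reduces to simple pathway arithmetic, e.g. annual ingestion dose = concentration x intake rate x exposure time x dose conversion factor (the DCF and intake values below are illustrative, not recommended values):

```python
def ingestion_dose(conc, intake_rate, dcf, years=1.0):
    """Committed dose from drinking-water ingestion at the 'spreadsheet
    level' used as a fundamental baseline for code intercomparison:
    dose (Sv) = conc (Bq/L) * intake (L/d) * days * DCF (Sv/Bq).
    All numerical values here are illustrative."""
    return conc * intake_rate * 365.25 * years * dcf

# Unit concentration, 2 L/day intake, hypothetical DCF of 1e-8 Sv/Bq
dose = ingestion_dose(conc=1.0, intake_rate=2.0, dcf=1.0e-8)
print(dose)  # Sv per year at unit concentration
```

    Comparing a code's unit-concentration dose factors against hand calculations like this is exactly the most fundamental level of the benchmarking described above.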

  9. Finite element code development for modeling detonation of HMX composites

    Science.gov (United States)

    Duran, Adam V.; Sundararaghavan, Veera

    2017-01-01

    In this work, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable, efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well-calibrated equations of state for the solid unreacted material and gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Gruneisen equation of state, calibrated from experiments, was employed for the unreacted HMX. The JWL form was used to model the EOS of the gaseous reaction products. It is assumed that the unreacted explosive and reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy were computed using the rule of mixtures. An Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that yields a non-oscillatory stabilized scheme for the shock front. The FE model was validated using analytical solutions for the Sod shock tube and ZND strong detonation models. Benchmark problems are presented for geometries in which a single HMX crystal is subjected to a shock condition.
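
The JWL products EOS mentioned in the abstract has the standard form p(V, E) = A(1 - ω/(R1·V))·e^(-R1·V) + B(1 - ω/(R2·V))·e^(-R2·V) + ω·E/V. A minimal sketch of its evaluation, using illustrative HMX-product-like constants rather than the paper's calibrated values:

```python
import math

def jwl_pressure(v_rel, e, A, B, R1, R2, omega):
    """Jones-Wilkins-Lee (JWL) equation of state for detonation products.
    v_rel: relative volume V = v/v0; e: internal energy per unit initial
    volume, in the same pressure units as A and B."""
    return (A * (1.0 - omega / (R1 * v_rel)) * math.exp(-R1 * v_rel)
            + B * (1.0 - omega / (R2 * v_rel)) * math.exp(-R2 * v_rel)
            + omega * e / v_rel)

# Illustrative HMX-like product constants (pressures in GPa); these are
# placeholders, not the calibration used in the paper.
A, B, R1, R2, OMEGA = 778.3, 7.07, 4.2, 1.0, 0.30
p_cj_region = jwl_pressure(1.0, 10.5, A, B, R1, R2, OMEGA)  # near CJ volume
p_expanded = jwl_pressure(3.0, 10.5, A, B, R1, R2, OMEGA)   # expanded gas
```

As expected for an expansion isentrope, the pressure falls steeply as the relative volume grows; the exponential A and B terms dominate near the CJ state, while the ω·E/V term governs the late expansion.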

  10. Nuclear Energy Advanced Modeling and Simulation (NEAMS) Waste Integrated Performance and Safety Codes (IPSC): FY10 development and integration

    International Nuclear Information System (INIS)

    Criscenti, Louise Jacqueline; Sassani, David Carl; Arguello, Jose Guadalupe Jr.; Dewers, Thomas A.; Bouchard, Julie F.; Edwards, Harold Carter; Freeze, Geoffrey A.; Wang, Yifeng; Schultz, Peter Andrew

    2011-01-01

    This report describes the progress in fiscal year 2010 in developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. Waste IPSC activities in fiscal year 2010 focused on specifying a challenge problem to demonstrate proof of concept, developing a verification and validation plan, and performing an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. This year-end progress report documents the FY10 status of acquisition, development, and integration of thermal-hydrologic-chemical-mechanical (THCM) code capabilities, frameworks, and enabling tools and infrastructure.

  11. Nuclear Energy Advanced Modeling and Simulation (NEAMS) Waste Integrated Performance and Safety Codes (IPSC) : FY10 development and integration.

    Energy Technology Data Exchange (ETDEWEB)

    Criscenti, Louise Jacqueline; Sassani, David Carl; Arguello, Jose Guadalupe, Jr.; Dewers, Thomas A.; Bouchard, Julie F.; Edwards, Harold Carter; Freeze, Geoffrey A.; Wang, Yifeng; Schultz, Peter Andrew

    2011-02-01

    This report describes the progress in fiscal year 2010 in developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. Waste IPSC activities in fiscal year 2010 focused on specifying a challenge problem to demonstrate proof of concept, developing a verification and validation plan, and performing an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. This year-end progress report documents the FY10 status of acquisition, development, and integration of thermal-hydrologic-chemical-mechanical (THCM) code capabilities, frameworks, and enabling tools and infrastructure.

  12. Development of SSUBPIC code for modeling the neutral gas depletion effect in helicon discharges

    Science.gov (United States)

    Kollasch, Jeffrey; Sovenic, Carl; Schmitz, Oliver

    2017-10-01

    The SSUBPIC (steady-state unstructured-boundary particle-in-cell) code is being developed to model helicon plasma devices. The envisioned modeling framework incorporates (1) a kinetic neutral particle model, (2) a kinetic ion model, (3) a fluid electron model, and (4) an RF power deposition model. The models are loosely coupled and iterated until convergence to steady state. Of the four required solvers, the kinetic ion and neutral particle simulation can now be done within the SSUBPIC code. Recent SSUBPIC modifications include implementation and testing of a Coulomb collision model (Lemons et al., JCP, 228(5), pp. 1391-1403) allowing efficient coupling of kinetically treated ions to fluid electrons, and implementation of a neutral particle tracking mode with charge-exchange and electron impact ionization physics. These new simulation capabilities are demonstrated working independently and coupled to "dummy" profiles for RF power deposition to converge on steady-state plasma and neutral profiles. The geometry and conditions considered are similar to those of the MARIA experiment at UW-Madison. Initial results qualitatively show the expected neutral gas depletion effect, in which neutrals in the plasma core are not replenished at a rate sufficient to sustain a higher plasma density. This work is funded by the NSF CAREER award PHY-1455210 and NSF Grant PHY-1206421.

  13. Estimating Heat and Mass Transfer Processes in Green Roof Systems: Current Modeling Capabilities and Limitations (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Tabares Velasco, P. C.

    2011-04-01

    This presentation discusses estimating heat and mass transfer processes in green roof systems: current modeling capabilities and limitations. Green roofs are 'specialized roofing systems that support vegetation growth on rooftops.'

  14. A variable capacitance based modeling and power capability predicting method for ultracapacitor

    Science.gov (United States)

    Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang

    2018-01-01

    Accurate modeling and power capability prediction methods for ultracapacitors are of great significance in the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error arising from a constant-capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, in which the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. A multi-constraint power capability prediction is then developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed to track the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal-voltage simulation results at different temperatures, and the effectiveness of the designed observer is proved under various test conditions. Additionally, the power capability prediction results for different time scales and temperatures are compared to study their effects on the ultracapacitor's power capability.

  15. Guidelines for Applying the Capability Maturity Model Analysis to Connected and Automated Vehicle Deployment

    Science.gov (United States)

    2017-11-23

    The Federal Highway Administration (FHWA) has adapted the Transportation Systems Management and Operations (TSMO) Capability Maturity Model (CMM) to describe the operational maturity of Infrastructure Owner-Operator (IOO) agencies across a range of i...

  16. A model code for the radiative theta pinch

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S., E-mail: leesing@optusnet.com.au [INTI International University, 71800 Nilai (Malaysia); Institute for Plasma Focus Studies, 32 Oakpark Drive, Chadstone 3148 Australia (Australia); Physics Department, University of Malaya, Kuala Lumpur (Malaysia); Saw, S. H. [INTI International University, 71800 Nilai (Malaysia); Institute for Plasma Focus Studies, 32 Oakpark Drive, Chadstone 3148 Australia (Australia); Lee, P. C. K. [Nanyang Technological University, National Institute of Education, Singapore 637616 (Singapore); Akel, M. [Department of Physics, Atomic Energy Commission, Damascus, P. O. Box 6091, Damascus (Syrian Arab Republic); Damideh, V. [INTI International University, 71800 Nilai (Malaysia); Khattak, N. A. D. [Department of Physics, Gomal University, Dera Ismail Khan (Pakistan); Mongkolnavin, R.; Paosawatyanyong, B. [Department of Physics, Faculty of Science, Chulalongkorn University, Bangkok 10140 (Thailand)

    2014-07-15

    A model for the theta pinch is presented with three modelled phases of radial inward shock phase, reflected shock phase, and a final pinch phase. The governing equations for the phases are derived incorporating thermodynamics and radiation and radiation-coupled dynamics in the pinch phase. A code is written incorporating correction for the effects of transit delay of small disturbing speeds and the effects of plasma self-absorption on the radiation. Two model parameters are incorporated into the model, the coupling coefficient f between the primary loop current and the induced plasma current and the mass swept up factor f{sub m}. These values are taken from experiments carried out in the Chulalongkorn theta pinch.

  17. MMA, A Computer Code for Multi-Model Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Eileen P. Poeter and Mary C. Hill

    2007-08-20

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations.
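
The model-ranking step described above can be illustrated with normalized information-criterion weights: for each calibrated model, compute its criterion value, its difference from the best (smallest) value, and an exponential weight. A generic sketch; the criterion values are made up, and this shows the weighting form rather than MMA's actual implementation:

```python
import math

def model_weights(criteria):
    """Turn information-criterion values (AIC, AICc, BIC, or KIC;
    smaller is better) into normalized model weights of the kind used
    for model-averaged parameter estimates and predictions."""
    best = min(criteria)
    raw = [math.exp(-0.5 * (c - best)) for c in criteria]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical AIC values for three alternative models of one system.
weights = model_weights([230.1, 232.4, 238.0])
```

A model 2 criterion units worse than the best receives roughly a third of the best model's weight, so predictions averaged with these weights are dominated by, but not exclusively drawn from, the best-ranked model.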

  18. Improved Flow Modeling in Transient Reactor Safety Analysis Computer Codes

    International Nuclear Information System (INIS)

    Holowach, M.J.; Hochreiter, L.E.; Cheung, F.B.

    2002-01-01

    A method of accounting for fluid-to-fluid shear between calculational cells, over the wide range of flow conditions envisioned in reactor safety studies, has been developed such that it may be easily implemented into a computer code such as COBRA-TF for more detailed subchannel analysis. At a given nodal height in the calculational model, equivalent hydraulic diameters are determined for each specific calculational cell using either laminar or turbulent velocity profiles. The velocity profile may be determined from a separate CFD (Computational Fluid Dynamics) analysis, experimental data, or existing semi-empirical relationships. The equivalent hydraulic diameter is then applied to the wall drag force calculation so as to determine the appropriate equivalent fluid-to-fluid shear caused by the wall for each cell based on the input velocity profile. This means of assigning the shear to a specific cell is independent of the actual wetted perimeter and flow area of the calculational cell. The use of this equivalent hydraulic diameter for each cell within a calculational subchannel results in a representative velocity profile, which can further increase the accuracy and detail of heat transfer and fluid flow modeling within the subchannel when utilizing a thermal-hydraulic systems analysis computer code such as COBRA-TF. Utilizing COBRA-TF with this flow modeling enhancement results in increased accuracy for a coarse-mesh model without the significantly greater computational and time requirements of a full-scale 3D (three-dimensional) transient CFD calculation. (authors)
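
For reference, the conventional geometric definition that the equivalent-diameter method departs from is D_h = 4A/P_w, computed from the cell's actual flow area and wetted perimeter. A minimal sketch:

```python
import math

def hydraulic_diameter(flow_area, wetted_perimeter):
    """Conventional geometric hydraulic diameter D_h = 4*A / P_w."""
    return 4.0 * flow_area / wetted_perimeter

# Sanity check: for a circular pipe of diameter d, the definition
# recovers the pipe diameter itself.
d = 0.02  # m
dh_pipe = hydraulic_diameter(math.pi * d**2 / 4.0, math.pi * d)
```

The abstract's point is that its equivalent diameter is instead inferred from the input velocity profile, so the shear assigned to a cell no longer depends on this purely geometric quantity.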

  19. Auditory information coding by modeled cochlear nucleus neurons.

    Science.gov (United States)

    Wang, Huan; Isik, Michael; Borst, Alexander; Hemmert, Werner

    2011-06-01

    In this paper we use information theory to quantify the information in the output spike trains of modeled cochlear nucleus globular bushy cells (GBCs). GBCs are part of the sound localization pathway. They are known for their precise temporal processing, and they code amplitude modulations with high fidelity. Here we investigated the information transmission for a natural sound, a recorded vowel. We conclude that the maximum information transmission rate for a single neuron was close to 1,050 bits/s, which corresponds to approximately 5.8 bits per spike. For quasi-periodic signals like voiced speech, the transmitted information saturated as word duration increased. In general, approximately 80% of the available information from the spike trains was transmitted within about 20 ms. Transmitted information for speech signals concentrated around formant frequency regions. The efficiency of neural coding was above 60% up to the highest temporal resolution we investigated (20 μs). The increase in transmitted information at that precision indicates that these neurons are able to code information with extremely high fidelity, which is required for sound localization. On the other hand, only 20% of the information was captured when the temporal resolution was reduced to 4 ms. As the temporal resolution of most speech recognition systems is limited to less than 10 ms, this massive information loss might be one of the reasons for the lack of noise robustness of these systems.

  20. Physics models in the toroidal transport code PROCTR

    Energy Technology Data Exchange (ETDEWEB)

    Howe, H.C.

    1990-08-01

    The physics models that are contained in the toroidal transport code PROCTR are described in detail. Time- and space-dependent models are included for the plasma hydrogenic-ion, helium, and impurity densities, the electron and ion temperatures, the toroidal rotation velocity, and the toroidal current profile. Time- and depth-dependent models for the trapped and mobile hydrogenic particle concentrations in the wall and a time-dependent point model for the number of particles in the limiter are also included. Time-dependent models for neutral particle transport, neutral beam deposition and thermalization, fusion heating, impurity radiation, pellet injection, and the radial electric potential are included and recalculated periodically as the time-dependent models evolve. The plasma solution is obtained either in simple flux coordinates, where the radial shift of each elliptical, toroidal flux surface is included to maintain an approximate pressure equilibrium, or in general three-dimensional torsatron coordinates represented by series of helical harmonics. The detailed coupling of the plasma, scrape-off layer, limiter, and wall models through the neutral transport model makes PROCTR especially suited for modeling of recycling and particle control in toroidal plasmas. The model may also be used in a steady-state profile analysis mode for studying energy and particle balances starting with measured plasma profiles.

  1. An improved steam generator model for the SASSYS code

    International Nuclear Information System (INIS)

    Pizzica, P.A.

    1989-01-01

    A new steam generator model has been developed for the SASSYS computer code, which analyzes accident conditions in a liquid metal cooled fast reactor. It has been incorporated into the new SASSYS balance-of-plant model, but it can also function on a stand-alone basis. The steam generator can be used in a once-through mode, or a variant of the model can be used as a separate evaporator and superheater with a recirculation loop. The new model provides for an exact steady-state solution as well as the transient calculation. There was a need for a faster and more flexible model than the old steam generator model. The new model provides more detail with its multi-node treatment, as opposed to the previous model's one-node-per-region approach. Numerical instability problems, which were the result of cell-centered spatial differencing, fully explicit time differencing, and the moving-boundary treatment of the boiling crisis point in the boiling region, have been reduced. This leads to an increase in speed, as larger time steps can now be taken. The new model is an improvement in many respects. 2 refs., 3 figs

  2. Simulation of Two-group IATE models with EAGLE code

    International Nuclear Information System (INIS)

    Nguyen, V. T.; Bae, B. U.; Song, C. H.

    2011-01-01

    The two-group transport equation should be employed in order to describe correctly the interfacial area transport in various two-phase flow regimes, especially at the bubbly-to-slug flow transition. This is because the differences in bubble sizes and shapes cause substantial differences in their transport mechanisms and interaction phenomena. The basic concept of two-group interfacial area transport equations has been formulated and demonstrated for the vertical gas-liquid bubbly-to-slug flow transition by Hibiki and his coworkers. More than twelve adjustable parameters need to be determined based on an extensive experimental database. It should be noted that these parameters were adjusted only in a one-dimensional approach, using area-averaged flow parameters in a vertical pipe under adiabatic and steady conditions. This raises the following experimental issue: how to adjust all these parameters as independently as possible by considering experiments in which a single physical phenomenon is of importance. The vertical air-water loop (VAWL) has been used for investigating the transport phenomena of two-phase flow at the Korea Atomic Energy Research Institute (KAERI). The data for local void fraction and interfacial area concentration are measured using the five-sensor conductivity probe method and classified into two groups, the small spherical bubble group and the cap/slug one. The initial bubble size, which has a large influence on the interaction mechanism between phases, was controlled. In the present work, the two-group interfacial area transport equation (IATE) was implemented in the EAGLE code and assessed against VAWL data. The purpose of this study is to investigate the capability of the coefficients derived by Hibiki for the two-group interfacial area transport equations within a CFD code

  3. Modeling of the CTEx subcritical unit using MCNPX code

    International Nuclear Information System (INIS)

    Santos, Avelino; Silva, Ademir X. da; Rebello, Wilson F.; Cunha, Victor L. Lassance

    2011-01-01

    The present work aims at simulating the subcritical unit of the Army Technology Center (CTEx), namely the ARGUS pile (subcritical uranium-graphite arrangement), by using the computational code MCNPX. Once such modeling is finished, it could be used in k-effective calculations for systems using natural uranium as fuel, for instance. ARGUS is a subcritical assembly which uses reactor-grade graphite as moderator of fission neutrons and metallic uranium fuel rods with aluminum cladding. The pile is driven by an Am-Be spontaneous neutron source. In order to achieve a higher value of k-effective, a higher concentration of U-235 can be proposed, provided k-effective safely remains below unity. (author)

  4. Status of emergency spray modelling in the integral code ASTEC

    International Nuclear Information System (INIS)

    Plumecocq, W.; Passalacqua, R.

    2001-01-01

    Containment spray systems are emergency systems that would be used in very low probability events which may lead to severe accidents in Light Water Reactors. In most cases, the primary function of the spray would be to remove heat and condense steam in order to reduce pressure and temperature in the containment building. Spray would also wash out fission products (aerosols and gaseous species) from the containment atmosphere. The efficiency of the spray system in the containment depressurization as well as in the removal of aerosols, during a severe accident, depends on the evolution of the spray droplet size distribution with the height in the containment, due to kinetic and thermal relaxation, gravitational agglomeration and mass transfer with the gas. A model has been developed taking into account all of these phenomena. This model has been implemented in the ASTEC code with a validation of the droplets relaxation against the CARAIDAS experiment (IPSN). Applications of this modelling to a PWR 900, during a severe accident, with special emphasis on the effect of spray on containment hydrogen distribution have been performed in multi-compartment configuration with the ASTEC V0.3 code. (author)

  5. Security Process Capability Model Based on ISO/IEC 15504 Conformant Enterprise SPICE

    Directory of Open Access Journals (Sweden)

    Mitasiunas Antanas

    2014-07-01

    In the context of modern information systems, security has become one of the most critical quality attributes. The purpose of this paper is to address the problem of the quality of information security. An approach to solving this problem is based on the main assumption that security is a process-oriented activity. According to this approach, product quality can be achieved by means of process quality - process capability. The SPICE-conformant information security process capability model introduced in the paper is based on the process capability modeling elaborated by the worldwide software engineering community during the last 25 years, namely ISO/IEC 15504, which defines the capability dimension and the requirements for process definition, and the domain-independent integrated model for enterprise-wide assessment and improvement, Enterprise SPICE

  6. Co-firing biomass and coal-progress in CFD modelling capabilities

    DEFF Research Database (Denmark)

    Kær, Søren Knudsen; Rosendahl, Lasse Aistrup; Yin, Chungen

    2005-01-01

    This paper discusses the development of user-defined FLUENT™ sub-models to improve the modelling capabilities in the area of large biomass particle motion and conversion. Focus is put on a model that includes the influence of particle size and shape on the reactivity by resolving intra-particle...

  7. Effective modeling of hydrogen mixing and catalytic recombination in containment atmosphere with an Eulerian Containment Code

    International Nuclear Information System (INIS)

    Bott, E.; Frepoli, C.; Monti, R.; Notini, V.; Carcassi, M.; Fineschi, F.; Heitsch, M.

    1999-01-01

    Large amounts of hydrogen can be generated in the containment of a nuclear power plant following a postulated accident with significant fuel damage. Different strategies have been proposed and implemented to prevent violent hydrogen combustion. An attractive one aims to eliminate hydrogen without burning processes; it is based on the use of catalytic hydrogen recombiners. This paper describes a simulation methodology being developed by Ansaldo to support the application of the above strategy, in the frame of two projects sponsored by the Commission of the European Communities within the IV Framework Program on Reactor Safety. Involved organizations also include the DCMN of Pisa University (Italy), the Battelle Institute and GRS (Germany), and the Politechnical University of Madrid (Spain). The aim is to make available a simulation approach suitable for containment design at the industrial level (i.e., with reasonable computer running time) and capable of correctly capturing the relevant phenomenologies (e.g., multiple convective flow patterns; hydrogen, air, and steam distribution in the containment atmosphere as determined by containment structures and geometries as well as by heat and mass sources and sinks). Eulerian algorithms provide the capability of three-dimensional modelling with fairly accurate prediction, although lower than CFD codes with a full Navier-Stokes formulation. Open linking of an Eulerian code such as GOTHIC to a full Navier-Stokes CFD code such as CFX 4.1 allows the solving strategies of the Eulerian code itself to be dynamically tuned. The effort in progress is an application of this innovative methodology to detailed hydrogen recombination simulation and a validation of the approach itself by reproducing experimental data. (author)

  8. A wide-range model of two-group cross sections in the dynamics code HEXTRAN

    International Nuclear Information System (INIS)

    Kaloinen, E.; Peltonen, J.

    2002-01-01

    In dynamic analyses the thermal hydraulic conditions within the reactor core may have a large variation, which sets a special requirement on the modeling of cross sections. The standard model in the dynamics code HEXTRAN is the same as in the static design code HEXBU-3D/MOD5. It is based on a linear and second-order fitting of two-group cross sections to fuel and moderator temperature, moderator density, and boron density. A new, wide-range model of cross sections, developed at Fortum Nuclear Services for HEXBU-3D/MOD6, has been included as an option in HEXTRAN. In this model the nodal cross sections are constructed from seven state variables in a polynomial of more than 40 terms. Coefficients of the polynomial are created by a least-squares fitting to the results of a large number of fuel assembly calculations. Depending on the choice of state variables for the spectrum calculations, the new cross section model is capable of covering local conditions from cold zero power to boiling at full power. The fifth dynamic benchmark problem of AER is analyzed with the new option and the results are compared to calculations with the standard model of cross sections in HEXTRAN (Authors)
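
The least-squares construction of such a cross-section polynomial can be illustrated on a small scale: fit a nodal cross section to a few state variables from a set of lattice-calculation samples. The variables, polynomial order, and data below are synthetic stand-ins for the seven-variable, 40-plus-term fit described above:

```python
import numpy as np

# Hypothetical training data: a two-group cross section sampled over
# fuel temperature T_f (K) and moderator density rho (g/cm^3), playing
# the role of assembly-code results used as the fitting target.
rng = np.random.default_rng(0)
t_f = rng.uniform(500.0, 1200.0, 50)
rho = rng.uniform(0.60, 0.78, 50)
sigma = 0.025 - 1.5e-6 * t_f + 0.01 * rho  # synthetic lattice output

# Least-squares fit of sigma ~ a0 + a1*T_f + a2*rho (a tiny version of
# the multi-term polynomial; more columns add more terms).
X = np.column_stack([np.ones_like(t_f), t_f, rho])
coeffs, *_ = np.linalg.lstsq(X, sigma, rcond=None)
```

In a production model, the design matrix would also carry higher-order and cross terms in all seven state variables, but the fitting step is the same normal-equations least-squares solve.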

  9. Dispersed Two-Phase Flow Modelling for Nuclear Safety in the NEPTUNE_CFD Code

    Directory of Open Access Journals (Sweden)

    Stephane Mimouni

    2017-01-01

    The objective of this paper is to give an overview of the capabilities of the Eulerian bifluid approach to meet the needs of studies for nuclear safety regarding hydrogen risk, boiling crisis, and pipe and valve maintenance. The Eulerian bifluid approach has been implemented in a CFD code named NEPTUNE_CFD. NEPTUNE_CFD is a three-dimensional multifluid code developed especially for nuclear reactor applications by EDF, CEA, AREVA, and IRSN. The first set of models is dedicated to wall vapor condensation and spray modelling. Moreover, boiling crisis remains a major limiting phenomenon for the analysis of operation and safety of both nuclear reactors and conventional thermal power systems. The paper aims at presenting the generalization of the previous DNB model and its validation against 1500 validation cases. The modelling and numerical simulation of cavitation phenomena are of relevant interest in many industrial applications, especially regarding pipe and valve maintenance, where cavitating flows are responsible for harmful acoustic effects. In the last section, models are validated against experimental data of pressure profiles and void fraction visualisations obtained downstream of an orifice with the EPOCA facility (EDF R&D). Finally, a multifield approach is presented as an efficient tool to run all models together.

  10. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.

    1999-07-01

    capability of performing iterated-source (criticality), multiplied-fixed-source, and fixed-source calculations. MCV uses a highly detailed continuous-energy (as opposed to multigroup) representation of neutron histories and cross section data. The spatial modeling is fully three-dimensional (3-D), and any geometrical region that can be described by quadric surfaces may be represented. The primary results are region-wise reaction rates, neutron production rates, slowing-down-densities, fluxes, leakages, and when appropriate the eigenvalue or multiplication factor. Region-wise nuclidic reaction rates are also computed, which may then be used by other modules in the system to determine time-dependent nuclide inventories so that RACER can perform depletion calculations. Furthermore, derived quantities such as ratios and sums of primary quantities and/or other derived quantities may also be calculated. MCV performs statistical analyses on output quantities, computing estimates of the 95% confidence intervals as well as indicators as to the reliability of these estimates. The remainder of this chapter provides an overview of the MCV algorithm. The following three chapters describe the MCV mathematical, physical, and statistical treatments in more detail. Specifically, Chapter 2 discusses topics related to tracking the histories including: geometry modeling, how histories are moved through the geometry, and variance reduction techniques related to the tracking process. Chapter 3 describes the nuclear data and physical models employed by MCV. Chapter 4 discusses the tallies, statistical analyses, and edits. Chapter 5 provides some guidance as to how to run the code, and Chapter 6 is a list of the code input options.
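
The 95% confidence-interval estimates that MCV reports on output quantities follow the usual sample-statistics form: a mean over histories plus a half-width proportional to the standard error. A generic sketch under that assumption, not RACER's actual edit routine:

```python
import math
import random

def mean_and_ci95(samples):
    """Sample mean and ~95% confidence half-width (1.96 * standard
    error), the form a Monte Carlo code reports for a tallied
    quantity such as a reaction rate or the multiplication factor."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # unbiased
    return mean, 1.96 * math.sqrt(var / n)

# Synthetic per-batch tallies standing in for Monte Carlo histories.
random.seed(1)
batches = [random.gauss(1.0, 0.05) for _ in range(10000)]
k_est, ci = mean_and_ci95(batches)
```

The half-width shrinks like 1/sqrt(n), which is why reliability indicators on the interval estimate matter: with too few (or correlated) histories, the reported interval itself is poorly estimated.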

  11. Development of condensation modeling modeling and simulation code for IRWST

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sang Nyung; Jang, Wan Ho; Ko, Jong Hyun; Ha, Jong Baek; Yang, Chang Keun; Son, Myung Seong [Kyung Hee Univ., Seoul (Korea)

    1997-07-01

    One of the design improvements of the KNGR (Korean Next Generation Reactor), which advances both safety and economy, is the adoption of the IRWST (In-Containment Refueling Water Storage Tank). The IRWST, installed inside the containment building, serves more design purposes than a mere relocation of the tank. Since the design functions of the IRWST are similar to those of the BWR suppression pool, theoretical models applicable to the BWR suppression pool can mostly be applied to the IRWST. For a PWR, however, the sparger geometry, the operation mode, and the quantity, temperature, and pressure of the steam discharged from the primary system to the IRWST through the PSV or SDV may differ from those of a BWR. There are also defects in detailed parts of the condensation models. Therefore, as the first nation to construct a PWR with an IRWST, we must carry out thorough research on these problems so that the results can be utilized and localized as a proprietary technology. All relevant thermal-hydraulic phenomena were investigated, and the existing condensation models of Hideki Nariai and Izuo Aya were analyzed. In addition, through a rigorous literature review covering operating experience, experimental data, and KNGR design documents, items requiring modification and supplementation were identified. An analytical model for the chugging phenomenon is also presented. 15 refs., 18 figs., 4 tabs. (Author)

  12. 7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 12 2010-01-01 2010-01-01 false Voluntary National Model Building Codes E Exhibit E... National Model Building Codes The following documents address the health and safety aspects of buildings and related structures and are voluntary national model building codes as defined in § 1924.4(h)(2) of...

  13. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    Science.gov (United States)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-01-01

    This paper presents an overview of vertically integrated comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall predictive models consist of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model and a residual stress model, which can be used to predict the mechanical properties of parts additively manufactured by directed energy deposition with blown powder, as well as by other additive manufacturing processes. The critical governing equations of each model and the connections between the various modules are illustrated. Illustrative results, along with corresponding experimental validation, are presented to demonstrate the capabilities and fidelity of the models. The good correlation with experimental results shows that the integrated models can be used to design metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.

  14. Applications of Transport/Reaction Codes to Problems in Cell Modeling; TOPICAL

    International Nuclear Information System (INIS)

    MEANS, SHAWN A.; RINTOUL, MARK DANIEL; SHADID, JOHN N.

    2001-01-01

    We demonstrate two specific examples that show how our existing capabilities in solving large systems of partial differential equations associated with transport/reaction systems can be easily applied to outstanding problems in computational biology. First, we examine a three-dimensional model for calcium wave propagation in a Xenopus laevis frog egg and verify that a proposed model for the distribution of calcium release sites agrees with experimental results as a function of both space and time. Next, we create a model of the neuron's terminus based on experimental observations and show that the sodium-calcium exchanger is not the route of sodium's modulation of neurotransmitter release. These state-of-the-art simulations were performed on massively parallel platforms and required almost no modification of existing Sandia codes.

  15. SIR rumor spreading model considering the effect of difference in nodes’ identification capabilities

    Science.gov (United States)

    Wang, Ya-Qi; Wang, Jing

    In this paper, we study the effect of differences in network nodes’ identification capabilities on rumor propagation. A novel susceptible-infected-removed (SIR) model is proposed, based on mean-field theory, to investigate the dynamical behaviors of the model on homogeneous networks and inhomogeneous networks, respectively. Theoretical analysis and simulation results demonstrate that when the influence of differences in nodes’ identification capabilities is considered, the critical thresholds increase markedly while the final rumor sizes are clearly reduced. We also find that differences in nodes’ identification capabilities prolong the time for rumor propagation to reach a steady state and decrease the number of nodes that finally accept rumors. Additionally, under this influence, the rumor transmission rate on inhomogeneous networks is relatively large compared with that on homogeneous networks.
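
    As a concrete illustration, a minimal homogeneous mean-field SIR integration reproduces the reported trend: raising an assumed identification-capability parameter mu lowers the final rumor size. The functional form lam*(1-mu) and all parameter values are our illustrative assumptions, not the paper's exact equations.

```python
def final_rumor_size(lam, mu, delta=0.2, k=6, dt=0.01, steps=20000):
    """Homogeneous mean-field SIR rumor dynamics (illustrative sketch).

    mu in [0, 1] stands in for the average identification capability of
    nodes: susceptibles refuse the rumor with probability mu, so the
    effective transmission rate is lam * (1 - mu).  This functional form
    and the parameter values are assumptions for illustration only.
    """
    s, i, r = 0.99, 0.01, 0.0
    for _ in range(steps):
        new_inf = lam * (1.0 - mu) * k * s * i   # spreading term
        rec = delta * i                          # spreaders become stiflers
        s -= new_inf * dt
        i += (new_inf - rec) * dt
        r += rec * dt
    return r  # final density of nodes that accepted the rumor

size_no_id = final_rumor_size(lam=0.1, mu=0.0)   # no identification capability
size_id = final_rumor_size(lam=0.1, mu=0.5)      # stronger identification
```

    With these parameters the effective reproduction number drops from 3 to 1.5, and the final rumor size shrinks accordingly.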

  16. Exploring a capability-demand interaction model for inclusive design evaluation

    OpenAIRE

    Persad, Umesh

    2012-01-01

    Designers are required to evaluate their designs against the needs and capabilities of their target user groups in order to achieve successful, inclusive products. This dissertation presents exploratory research into the specific problem of supporting analytical design evaluation for Inclusive Design. The analytical evaluation process involves evaluating products with user data rather than testing with actual users. The work focuses on the exploration of a capability-demand model of product i...

  17. 24 CFR 200.925c - Model codes.

    Science.gov (United States)

    2010-04-01

    ... Hills, Illinois 60478. (ii) Standard Building Code, 1991 Edition, including 1992/1993 revisions... (Chapter 6) of the Standard Building Code, but including Appendices A, C, E, J, K, M, and R. Available from...

  18. Fission product core release model evaluation in MELCOR code

    International Nuclear Information System (INIS)

    Song, Y. M.; Kim, D. H.; Kim, H. D.

    2003-01-01

    The fission product core release modeling in the MELCOR code is based on the CORSOR models developed by Battelle Memorial Institute. Release of radionuclides can occur from the fuel-cladding gap when a failure temperature criterion is exceeded or the intact geometry is lost, and various empirical CORSOR release correlations based on fuel temperature are used for the release. Masses released into the core may exist as aerosols and/or vapors, depending on the vapor pressure of the radionuclide class and the surrounding temperature. This paper presents a release analysis for selected representative volatile and non-volatile radionuclides during conservative high- and low-pressure sequences in the APR1400 plant. Three core release models (CORSOR, CORSOR-M, CORSOR-Booth) in the latest MELCOR 1.8.5 version are used. The analysis also considers the fuel component surface-to-volume ratio option in the CORSOR and CORSOR-M models and the high/low burn-up option in the CORSOR-Booth model. The results show that the CORSOR-M release rate is higher for volatile radionuclides, while the CORSOR release rate is higher for non-volatile radionuclides, with insufficient consistency between the models. Because the uncertainty range for the release rate spans from a factor of a few (volatile radionuclides) to more than 10,000 (non-volatile radionuclides), users must choose the core release model carefully.
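
    The CORSOR-M correlation mentioned above is an Arrhenius-type fractional release rate. A minimal sketch of how such a rate integrates into a released fraction is below; the coefficients k0 and Q are illustrative placeholders, not MELCOR's class-specific values.

```python
import math

def corsor_m_release(k0, Q, T, dt=60.0, t_end=3600.0):
    """Fractional fission-product release from a CORSOR-M style rate law.

    CORSOR-M expresses the fractional release rate as an Arrhenius form
    k = k0 * exp(-Q / (R * T)), in 1/min with T in kelvin.  The k0 and Q
    passed in below are illustrative placeholders only.
    """
    R = 1.987e-3  # kcal/(mol*K)
    f, t = 0.0, 0.0
    while t < t_end:
        k = k0 * math.exp(-Q / (R * T)) / 60.0  # convert 1/min -> 1/s
        f += k * (1.0 - f) * dt                 # release depletes remaining inventory
        t += dt
    return min(f, 1.0)

# Hotter fuel releases a much larger fraction over the same hour.
f_cool = corsor_m_release(k0=2.0e5, Q=63.8, T=1500.0)
f_hot = corsor_m_release(k0=2.0e5, Q=63.8, T=2200.0)
```

    The strong temperature sensitivity of the exponential is what drives the wide uncertainty ranges quoted in the abstract.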

  19. Mathematical modeling of wiped-film evaporators. [MAIN codes

    Energy Technology Data Exchange (ETDEWEB)

    Sommerfeld, J.T.

    1976-05-01

    A mathematical model and associated computer program were developed to simulate the steady-state operation of wiped-film evaporators for the concentration of typical waste solutions produced at the Savannah River Plant. In this model, which treats either a horizontal or a vertical wiped-film evaporator as a plug-flow device with no backmixing, three fundamental phenomena are described: sensible heating of the waste solution, vaporization of water, and crystallization of solids from solution. Physical property data were coded into the computer program, which performs the calculations of this model. Physical properties of typical waste solutions and of the heating steam, generally as analytical functions of temperature, were obtained from published data or derived by regression analysis of tabulated or graphical data. Preliminary results from tests of the Savannah River Laboratory semiworks wiped-film evaporators were used to select a correlation for the inside film heat transfer coefficient. This model should be a useful aid in the specification, operation, and control of the full-scale wiped-film evaporators proposed for application under plant conditions. In particular, it should be of value in the development and analysis of feed-forward control schemes for the plant units. Also, this model can be readily adapted, with only minor changes, to simulate the operation of wiped-film evaporators for other conceivable applications, such as the concentration of acid wastes.
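
    The three phenomena are marched along the evaporator in plug-flow fashion. A minimal sketch covering the first two (sensible heating to the boiling point, then vaporization of water; crystallization omitted) is below, with generic property values rather than Savannah River solution data.

```python
def wiped_film_profile(m_dot, T_in, T_boil, U, area, T_steam,
                       cp=3500.0, h_vap=2.26e6, n=200):
    """March a waste stream through a wiped-film evaporator, plug-flow style.

    Mirrors the model's first two phenomena: sensible heating to the
    boiling point, then vaporization of water.  Crystallization is
    omitted and all property values are generic placeholders.
    """
    dA = area / n          # heat-transfer area per axial segment
    T = T_in
    liquid, vapor = m_dot, 0.0   # kg/s of solution and of vapor produced
    for _ in range(n):
        q = U * dA * (T_steam - T)                 # local heat duty, W
        if T < T_boil:
            T = min(T_boil, T + q / (liquid * cp)) # sensible heating
        else:
            dm = min(q / h_vap, liquid * 0.99)     # vaporization of water
            liquid -= dm
            vapor += dm
    return T, vapor / m_dot   # outlet temperature, vaporized fraction

T_out, frac = wiped_film_profile(m_dot=0.5, T_in=25.0, T_boil=100.0,
                                 U=1500.0, area=4.0, T_steam=130.0)
```

    For these placeholder values the stream reaches the boiling point partway along the unit and about a tenth of the feed is vaporized.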

  20. TASS/SMR Code Topical Report for SMART Plant, Vol. I: Code Structure, System Models, and Solution Methods

    International Nuclear Information System (INIS)

    Chung, Young Jong; Kim, Soo Hyoung; Kim, See Darl

    2008-10-01

    The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant which is an integral type pressurized water reactor. It can be applied to the analysis of design basis accidents including non-LOCA (loss of coolant accident) and LOCA of the SMART plant. The TASS/SMR code can be applied to any plant regardless of the structural characteristics of a reactor since the code solves the same governing equations for both the primary and secondary system. The code has been developed to meet the requirements of the safety analysis code. This report describes the overall structure of the TASS/SMR, input processing, and the processes of a steady state and transient calculations. In addition, basic differential equations, finite difference equations, state relationships, and constitutive models are described in the report. First, the conservation equations, a discretization process for numerical analysis, search method for state relationship are described. Then, a core power model, heat transfer models, physical models for various components, and control and trip models are explained

  1. TASS/SMR Code Topical Report for SMART Plant, Vol. I: Code Structure, System Models, and Solution Methods

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Young Jong; Kim, Soo Hyoung; Kim, See Darl (and others)

    2008-10-15

    The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant which is an integral type pressurized water reactor. It can be applied to the analysis of design basis accidents including non-LOCA (loss of coolant accident) and LOCA of the SMART plant. The TASS/SMR code can be applied to any plant regardless of the structural characteristics of a reactor since the code solves the same governing equations for both the primary and secondary system. The code has been developed to meet the requirements of the safety analysis code. This report describes the overall structure of the TASS/SMR, input processing, and the processes of a steady state and transient calculations. In addition, basic differential equations, finite difference equations, state relationships, and constitutive models are described in the report. First, the conservation equations, a discretization process for numerical analysis, search method for state relationship are described. Then, a core power model, heat transfer models, physical models for various components, and control and trip models are explained.

  2. Model code for energy conservation in new building construction

    Energy Technology Data Exchange (ETDEWEB)

    None

    1977-12-01

    In response to the recognized lack of existing consensus standards directed to the conservation of energy in building design and operation, the preparation and publication of such a standard was accomplished with the issuance of ASHRAE Standard 90-75 ''Energy Conservation in New Building Design,'' by the American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc., in 1975. This standard addressed itself to recommended practices for energy conservation, using both depletable and non-depletable sources. A model code for energy conservation in building construction has been developed, setting forth the minimum regulations found necessary to mandate such conservation. The code addresses itself to the administration, design criteria, systems elements, controls, service water heating and electrical distribution and use, both for depletable and non-depletable energy sources. The technical provisions of the document are based on ASHRAE 90-75 and it is intended for use by state and local building officials in the implementation of a statewide energy conservation program.

  3. Laser Energy Deposition Model for the ICF3D Code

    Science.gov (United States)

    Kaiser, Thomas B.; Byers, Jack A.

    1996-11-01

    We have built a laser deposition module for the new ICF physics design code ICF3D ("3D Unstructured Mesh ALE Hydrodynamics with the Upwind Discontinuous Finite Element Method," D. S. Kershaw, M. K. Prasad and M. J. Shaw, LLNL Report UCRL-JC-122104, (1995)), being developed at LLNL. The code uses a 3D unstructured grid on which hydrodynamic quantities are represented in terms of discontinuous linear finite elements (hexahedrons, prisms, tetrahedrons or pyramids). Because of the complex mesh geometry and the (in general) non-uniform index of refraction (i.e., plasma density), the geometrical-optics ray-tracing problem is quite complicated. To solve it we have developed a grid-cell-face-crossing detection algorithm, an integrator for the ray equations of motion and a path-length calculator, which are encapsulated in a C++ class that is used to create ray-bundle objects. Additional classes are being developed for inverse-bremsstrahlung and resonance-absorption heating models. A quasi-optical technique will be used to include diffractive effects. We use the ICF3D Python shell, a very flexible interface that allows command-line invocation of member functions.
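
    The ray equations of motion referred to above can be sketched for the simplest case of a linear density ramp. The code below integrates dv/dt = -(c^2/2)·grad(ne/nc) with c scaled to 1; the geometry and step scheme are illustrative only and bear no relation to ICF3D's unstructured-mesh implementation.

```python
import math

def trace_ray(x0, z0, angle, dt=1e-3, steps=4000):
    """Trace a geometrical-optics ray through a linear plasma density ramp.

    Illustrative ramp: ne/nc = z, with the critical surface at z = 1.
    The ray obeys dv/dt = -(1/2) * grad(ne/nc) in units where c = 1,
    so it bends back toward the underdense side (z < 0 means vacuum).
    """
    def ne_over_nc(x, z):
        return max(0.0, z)  # linear ramp toward the critical surface

    # Initial group speed: v = c * sqrt(1 - ne/nc).
    v = math.sqrt(max(0.0, 1.0 - ne_over_nc(x0, z0)))
    vx, vz = v * math.cos(angle), v * math.sin(angle)
    x, z = x0, z0
    for _ in range(steps):
        vz += -0.5 * dt          # grad(ne/nc) = (0, 1) for this ramp
        x += vx * dt
        z += vz * dt
        if z <= 0.0:
            break                # ray has refracted back out of the plasma
    return x, z

x_exit, z_exit = trace_ray(x0=0.0, z0=0.0, angle=math.radians(30))
```

    A ray launched 60° from the density gradient turns at ne/nc = cos²60° = 0.25 and exits a lateral distance of about 1.73 ramp lengths away, as geometrical optics predicts for this profile.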

  4. Development of Eulerian Code Modeling for ICF Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, Paul A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-02-27

    One of the most pressing unexplained phenomena standing in the way of ICF ignition is understanding mix and how it interacts with burn. Experiments were designed and fielded as part of the Defect-Induced Mix Experiment (DIME) project to obtain data about the extent of material mix and how this mix influenced burn. Experiments on the Omega laser and the National Ignition Facility (NIF) provided detailed data for comparison to the Eulerian code RAGE. The Omega experiments were able to resolve the mix and provide proof-of-principle support for the subsequent NIF experiments, which were fielded from July 2012 through June 2013. The Omega shots were fired at least once per year between 2009 and 2012. RAGE was not originally designed to model inertial confinement fusion (ICF) implosions; it still lacks lasers, so the code has been validated using an energy source. To test RAGE, the simulation output is compared to data by means of postprocessing tools that were developed. Here, the various postprocessing tools are described with illustrative examples.

  5. Simplified modeling and code usage in the PASC-3 code system by the introduction of a programming environment

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Oppe, J.; Oudshoorn, H.L.; Slobben, J.

    1991-06-01

    A brief description is given of the PASC-3 (Petten-AMPX-SCALE) Reactor Physics code system and associated UNIPASC work environment. The PASC-3 code system is used for criticality and reactor calculations and consists of a selection from the Oak Ridge National Laboratory AMPX-SCALE-3 code collection complemented with a number of additional codes and nuclear data bases. The original codes have been adapted to run under the UNIX operating system. The recommended nuclear data base is a complete 219 group cross section library derived from JEF-1 of which some benchmark results are presented. By the addition of the UNIPASC work environment the usage of the code system is greatly simplified. Complex chains of programs can easily be coupled together to form a single job. In addition, the model parameters can be represented by variables instead of literal values which enhances the readability and may improve the integrity of the code inputs. (author). 8 refs.; 6 figs.; 1 tab

  6. CODE's new solar radiation pressure model for GNSS orbit determination

    Science.gov (United States)

    Arnold, D.; Meindl, M.; Beutler, G.; Dach, R.; Schaer, S.; Lutz, S.; Prange, L.; Sośnica, K.; Mervart, L.; Jäggi, A.

    2015-08-01

    The Empirical CODE Orbit Model (ECOM) of the Center for Orbit Determination in Europe (CODE), which was developed in the early 1990s, is widely used in the International GNSS Service (IGS) community. For a rather long time, spurious spectral lines have been known to exist in geophysical parameters, in particular in the Earth Rotation Parameters (ERPs) and in the estimated geocenter coordinates, which could recently be attributed to the ECOM. These effects grew creepingly with the increasing influence of the GLONASS system in recent years in the CODE analysis, which has been based on a rigorous combination of GPS and GLONASS since May 2003. In a first step we show that the problems associated with the ECOM are to the largest extent caused by GLONASS, which reached full deployment by the end of 2011. GPS-only, GLONASS-only, and combined GPS/GLONASS solutions using the observations in the years 2009-2011 of a global network of 92 combined GPS/GLONASS receivers were analyzed for this purpose. In a second step we review direct solar radiation pressure (SRP) models for GNSS satellites. We demonstrate that only even-order short-period harmonic perturbations acting along the Sun-satellite direction occur for GPS and GLONASS satellites, and only odd-order perturbations acting along the direction perpendicular to both the Sun-satellite vector and the spacecraft's solar panel axis. Based on this insight we assess in the third step the performance of four candidate orbit models for the future ECOM. The geocenter coordinates, the ERP differences w.r.t. the IERS 08 C04 series of ERPs, the misclosures for the midnight epochs of the daily orbital arcs, and scale parameters of Helmert transformations for station coordinates serve as quality criteria. The old and updated ECOM are validated in addition with satellite laser ranging (SLR) observations and by comparing the orbits to those of the IGS and other analysis centers. Based on all tests, we present a new extended ECOM which
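
    The functional form of such an empirical SRP model can be sketched directly. The snippet below evaluates a D/Y/B acceleration with even-order harmonics of the orbital angle in D and odd-order harmonics in B, matching the perturbation structure derived in the paper; the coefficient values and dictionary layout are illustrative, not CODE's estimates.

```python
import math

def ecom_accel(u, p):
    """Empirical SRP acceleration in the Sun-oriented D, Y, B frame.

    u is the orbital angle argument (radians).  Even-order harmonics are
    allowed along D and odd-order harmonics along B, as the paper's
    perturbation analysis suggests; coefficients are illustrative only.
    """
    D = p["D0"] + p["D2c"] * math.cos(2 * u) + p["D2s"] * math.sin(2 * u)
    Y = p["Y0"]                                   # constant along the panel axis
    B = p["B0"] + p["B1c"] * math.cos(u) + p["B1s"] * math.sin(u)
    return D, Y, B

# Placeholder coefficients in m/s^2 (not estimated values).
p = {"D0": -1.0e-7, "D2c": 5.0e-9, "D2s": 0.0,
     "Y0": 1.0e-9, "B0": 0.0, "B1c": 2.0e-9, "B1s": 0.0}
aD, aY, aB = ecom_accel(math.radians(45), p)
```

    Restricting D to even and B to odd harmonics is exactly the parameterization choice the paper motivates: other terms generate perturbations that are not observed for GPS and GLONASS satellites.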

  7. Analysis of different containment models for IRIS small break LOCA, using GOTHIC and RELAP5 codes

    International Nuclear Information System (INIS)

    Papini, Davide; Grgic, Davor; Cammi, Antonio; Ricotti, Marco E.

    2011-01-01

    Advanced nuclear water reactors rely on containment behaviour in realization of some of their passive safety functions. Steam condensation on containment walls, where non-condensable gas effects are significant, is an important feature of the new passive containment concepts, like the AP600/1000 ones. In this work the international reactor innovative and secure (IRIS) was taken as reference, and the relevant condensation phenomena involved within its containment were investigated with different computational tools. In particular, IRIS containment response to a small break LOCA (SBLOCA) was calculated with GOTHIC and RELAP5 codes. A simplified model of IRIS containment drywell was implemented with RELAP5 according to a sliced approach, based on the two-pipe-with-junction concept, while it was addressed with GOTHIC using several modelling options, regarding both heat transfer correlations and volume and thermal structure nodalization. The influence on containment behaviour prediction was investigated in terms of drywell temperature and pressure response, heat transfer coefficient (HTC) and steam volume fraction distribution, and internal recirculating mass flow rate. The objective of the paper is to preliminarily compare the capability of the two codes in modelling of the same postulated accident, thus to check the results obtained with RELAP5, when applied in a situation not covered by its validation matrix (comprising SBLOCA and to some extent LBLOCA transients, but not explicitly the modelling of large dry containment volumes). The option to include or not droplets in fluid mass flow discharged to the containment was the most influencing parameter for GOTHIC simulations. Despite some drawbacks, due, e.g. to a marked overestimation of internal natural recirculation, RELAP5 confirmed its capability to satisfactorily model the basic processes in IRIS containment following SBLOCA.

  8. A Realistic Model Under Which the Genetic Code is Optimal

    NARCIS (Netherlands)

    Buhrman, Harry; van der Gulik, Peter T. S.; Klau, Gunnar W.; Schaffner, Christian; Speijer, Dave; Stougie, Leen

    2013-01-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By

  9. A Realistic Model under which the Genetic Code is Optimal

    NARCIS (Netherlands)

    Buhrman, H.; van der Gulik, P.T.S.; Klau, G.W.; Schaffner, C.; Speijer, D.; Stougie, L.

    2013-01-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By

  10. C code generation applied to nonlinear model predictive control for an artificial pancreas

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Jørgensen, John Bagterp

    2017-01-01

    This paper presents a method to generate C code from MATLAB code applied to a nonlinear model predictive control (NMPC) algorithm. The C code generation uses the MATLAB Coder Toolbox. It can drastically reduce the time required for development compared to a manual porting of code from MATLAB to C...... of glucose regulation for people with type 1 diabetes as a case study. The average computation time when using generated C code is 0.21 s (MATLAB: 1.5 s), and the maximum computation time when using generated C code is 0.97 s (MATLAB: 5.7 s). Compared to the MATLAB implementation, generated C code can run...

  11. The capability and constraint model of recoverability: An integrated theory of continuity planning.

    Science.gov (United States)

    Lindstedt, David

    2017-01-01

    While there are best practices, good practices, regulations and standards for continuity planning, there is no single model to collate and sort their various recommended activities. To address this deficit, this paper presents the capability and constraint model of recoverability - a new model to provide an integrated foundation for business continuity planning. The model is non-linear in both construct and practice, thus allowing practitioners to remain adaptive in its application. The paper presents each facet of the model, outlines the model's use in both theory and practice, suggests a subsequent approach that arises from the model, and discusses some possible ramifications to the industry.

  12. A MODEL BUILDING CODE ARTICLE ON FALLOUT SHELTERS WITH RECOMMENDATIONS FOR INCLUSION OF REQUIREMENTS FOR FALLOUT SHELTER CONSTRUCTION IN FOUR NATIONAL MODEL BUILDING CODES.

    Science.gov (United States)

    American Inst. of Architects, Washington, DC.

    A model building code article on fallout shelters was drawn up for inclusion in four national model building codes. Fallout shelters are discussed with respect to (1) nuclear radiation, (2) national policies, and (3) community planning. Fallout shelter requirements are given for shielding, space, ventilation, construction, and services such as electrical…

  13. Modeling Vortex Generators in a Navier-Stokes Code

    Science.gov (United States)

    Dudek, Julianne C.

    2011-01-01

    A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.
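
    The three user inputs named above are enough to sketch the magnitude of such a source term. The closure below uses thin-airfoil lift (CL = 2πα) as an illustrative assumption; the Wind-US model's actual calibration may differ.

```python
import math

def vg_source_term(rho, vel, planform_area, incidence_deg, n_cells):
    """Per-cell lift-force source term for a vane-type vortex generator.

    Follows the idea in the abstract: only the planform area, the angle
    of incidence, and the set of cells receiving the force are needed.
    Thin-airfoil lift (CL = 2*pi*alpha) is an illustrative closure, not
    the Wind-US model's actual calibration.
    """
    alpha = math.radians(incidence_deg)
    cl = 2.0 * math.pi * alpha                    # thin-airfoil lift slope
    lift = 0.5 * rho * vel**2 * planform_area * cl
    return lift / n_cells                         # distribute evenly over cells

# A small vane in a 60 m/s duct flow, force spread over 8 grid cells.
f_cell = vg_source_term(rho=1.2, vel=60.0, planform_area=2.0e-4,
                        incidence_deg=16.0, n_cells=8)
```

    Treating a microramp as a pair of such vanes with opposite incidence angles, as the abstract describes, just means adding two of these source terms with mirrored force directions.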

  14. A self-organized internal models architecture for coding sensory-motor schemes

    Directory of Open Access Journals (Sweden)

    Esaú eEscobar Juárez

    2016-04-01

    Full Text Available Cognitive robotics research draws inspiration from theories and models on cognition, as conceived by neuroscience or cognitive psychology, to investigate biologically plausible computational models in artificial agents. In this field, the theoretical framework of Grounded Cognition provides epistemological and methodological grounds for the computational modeling of cognition. It has been stressed in the literature that textit{simulation}, textit{prediction}, and textit{multi-modal integration} are key aspects of cognition and that computational architectures capable of putting them into play in a biologically plausible way are a necessity.Research in this direction has brought extensive empirical evidencesuggesting that textit{Internal Models} are suitable mechanisms forsensory-motor integration. However, current Internal Models architectures show several drawbacks, mainly due to the lack of a unified substrate allowing for a true sensory-motor integration space, enabling flexible and scalable ways to model cognition under the embodiment hypothesis constraints.We propose the Self-Organized Internal ModelsArchitecture (SOIMA, a computational cognitive architecture coded by means of a network of self-organized maps, implementing coupled internal models that allow modeling multi-modal sensory-motor schemes. Our approach addresses integrally the issues of current implementations of Internal Models.We discuss the design and features of the architecture, and provide empirical results on a humanoid robot that demonstrate the benefits and potentialities of the SOIMA concept for studying cognition in artificial agents.

  15. Maximizing entropy of image models for 2-D constrained coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Danieli, Matteo; Burini, Nino

    2010-01-01

    This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). The PRF are 2-D causal finite context models, which define stationary probability distributions on finite rectangles and thus allow for calculation of the entropy. We consider two binary constraints: we revisit the hard square constraint, given by forbidding neighboring 1s, and provide novel results for the constraint that no uniform 2 × 2 square contains all 0s or all 1s. The maximum values of the entropy for the constraints are estimated, and binary PRF satisfying the constraints are characterized and optimized w.r.t. the entropy. The maximum binary PRF entropy is 0.839 bits/symbol for the no uniform squares constraint. The entropy...
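
    The hard square constraint revisited above has a standard transfer-matrix treatment that makes the entropy concrete. The sketch below estimates the entropy from finite-width strips by power iteration; strip widths around 8 already land near the known 2-D capacity of about 0.588 bits/symbol (the finite-width value overshoots slightly). This is generic background, not the paper's PRF construction.

```python
import math

def hard_square_entropy(width, iters=200):
    """Estimate the entropy of the 2-D hard square constraint (no adjacent 1s).

    Builds the row-to-row transfer matrix for strips of the given width
    (rows may not contain horizontally adjacent 1s; consecutive rows may
    not share a 1 in the same column) and power-iterates to the dominant
    eigenvalue.  Entropy per symbol is log2(lambda) / width.
    """
    rows = [r for r in range(1 << width) if r & (r << 1) == 0]
    v = [1.0] * len(rows)
    lam = 1.0
    for _ in range(iters):
        w = [sum(v[j] for j, r2 in enumerate(rows) if r1 & r2 == 0)
             for r1 in rows]
        lam = max(w)                 # infinity-norm power iteration
        v = [x / lam for x in w]
    return math.log2(lam) / width

h = hard_square_entropy(8)   # strip estimate, slightly above the 2-D limit
```

    Increasing the width tightens the estimate toward the 2-D capacity at the cost of an exponentially larger state space (Fibonacci-many admissible rows).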

  16. RAMONA-4B a computer code with three-dimensional neutron kinetics for BWR and SBWR system transient - models and correlations

    International Nuclear Information System (INIS)

    Rohatgi, U.S.; Cheng, H.S.; Khan, H.J.; Mallen, A.N.; Neymotin, L.Y.

    1998-03-01

    This document describes the major modifications and improvements made to the modeling of the RAMONA-3B/MOD0 code since 1981, when the code description and assessment report was completed. The new version of the code is RAMONA-4B. RAMONA-4B is a systems transient code for application to different versions of Boiling Water Reactors (BWR) such as the current BWR, the Advanced Boiling Water Reactor (ABWR), and the Simplified Boiling Water Reactor (SBWR). This code uses a three-dimensional neutron kinetics model coupled with a multichannel, non-equilibrium, drift-flux, two-phase flow formulation of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients and instability issues. Chapter 1 is an overview of the code's capabilities and limitations; Chapter 2 discusses the neutron kinetics modeling and the implementation of reactivity edits. Chapter 3 is an overview of the heat conduction calculations. Chapter 4 presents modifications to the thermal-hydraulics model of the vessel, recirculation loop, steam separators, boron transport, and SBWR-specific components. Chapter 5 describes modeling of the plant control and safety systems. Chapter 6 presents the modeling of the Balance of Plant (BOP). Chapter 7 describes the mechanistic containment model in the code. The content of this report is complementary to the RAMONA-3B code description and assessment document. 53 refs., 81 figs., 13 tabs

  17. Verification of the New FAST v8 Capabilities for the Modeling of Fixed-Bottom Offshore Wind Turbines: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Barahona, B.; Jonkman, J.; Damiani, R.; Robertson, A.; Hayman, G.

    2014-12-01

    Coupled dynamic analysis has an important role in the design of offshore wind turbines because the systems are subject to complex operating conditions from the combined action of waves and wind. The aero-hydro-servo-elastic tool FAST v8 is framed in a novel modularization scheme that facilitates such analysis. Here, we present the verification of new capabilities of FAST v8 to model fixed-bottom offshore wind turbines. We analyze a series of load cases with both wind and wave loads and compare the results against those from the previous international code comparison projects: the International Energy Agency (IEA) Wind Task 23 Subtask 2 Offshore Code Comparison Collaboration (OC3) and the IEA Wind Task 30 OC3 Continued (OC4) projects. The verification is performed using the NREL 5-MW reference turbine supported by monopile, tripod, and jacket substructures. The substructure structural-dynamics models are built within the new SubDyn module of FAST v8, which uses a linear finite-element beam model with Craig-Bampton dynamic system reduction. This allows the modal properties of the substructure to be synthesized and coupled to hydrodynamic loads and tower dynamics. The hydrodynamic loads are calculated using a new strip theory approach for multimember substructures in the updated HydroDyn module of FAST v8. These modules are linked to the rest of FAST through the new coupling scheme involving mapping between module-independent spatial discretizations and a numerically rigorous implicit solver. The results show that the new structural dynamics, hydrodynamics, and coupled solutions compare well to the results from the previous code comparison projects.
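
    The Craig-Bampton reduction mentioned above can be illustrated on a toy matrix system: keep the boundary DOFs physically and condense the interior into static constraint modes plus a few fixed-interface normal modes. The 4-DOF spring-mass chain below is our own example, not SubDyn data.

```python
import numpy as np

def craig_bampton(K, M, boundary, n_modes):
    """Craig-Bampton reduction of a structural (K, M) system.

    Keeps the boundary DOFs physically and represents the interior with
    static constraint modes plus n_modes fixed-interface normal modes,
    analogous to the reduction SubDyn applies to substructure models.
    """
    n = K.shape[0]
    interior = [d for d in range(n) if d not in boundary]
    Kib = K[np.ix_(interior, boundary)]
    Kii = K[np.ix_(interior, interior)]
    Mii = M[np.ix_(interior, interior)]
    # Static constraint modes: interior response to unit boundary motion.
    Psi = -np.linalg.solve(Kii, Kib)
    # Fixed-interface normal modes of the interior partition.
    evals, evecs = np.linalg.eig(np.linalg.solve(Mii, Kii))
    order = np.argsort(evals.real)
    Phi = evecs[:, order[:n_modes]].real
    # Transformation T maps [boundary DOFs; modal coordinates] -> full DOFs.
    nb = len(boundary)
    T = np.zeros((n, nb + n_modes))
    T[boundary, :nb] = np.eye(nb)
    T[np.ix_(interior, range(nb))] = Psi
    T[np.ix_(interior, range(nb, nb + n_modes))] = Phi
    return T.T @ K @ T, T.T @ M @ T

# 4-DOF fixed-fixed spring-mass chain (toy example): keep the two end
# DOFs, use one fixed-interface mode for the interior.
K = np.array([[2., -1, 0, 0], [-1, 2, -1, 0], [0, -1, 2, -1], [0, 0, -1, 2]])
M = np.eye(4)
Kr, Mr = craig_bampton(K, M, boundary=[0, 3], n_modes=1)
lam_min = min(np.linalg.eigvals(np.linalg.solve(Mr, Kr)).real)  # lowest reduced eigenvalue
```

    For this chain the lowest full-model eigenvalue, 2 - 2cos(36°) ≈ 0.382, happens to lie exactly in the reduced subspace, so the 3-DOF reduced model recovers it to machine precision.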

  18. Large-Signal Code TESLA: Improvements in the Implementation and in the Model

    National Research Council Canada - National Science Library

    Chernyavskiy, Igor A; Vlasov, Alexander N; Anderson, Jr., Thomas M; Cooke, Simon J; Levush, Baruch; Nguyen, Khanh T

    2006-01-01

    We describe the latest improvements made in the large-signal code TESLA, which include transformation of the code to a Fortran-90/95 version with dynamical memory allocation and extension of the model...

  19. Code-switched English pronunciation modeling for Swahili spoken term detection

    CSIR Research Space (South Africa)

    Kleynhans, N

    2016-05-01

    Full Text Available We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Code switching, where speakers switch language in a conversation, occurs frequently in multilingual environments, and typically...

  20. An Observation Capability Metadata Model for EO Sensor Discovery in Sensor Web Enablement Environments

    Directory of Open Access Journals (Sweden)

    Chuli Hu

    2014-10-01

    Full Text Available Accurate and fine-grained discovery of diverse Earth observation (EO) sensors ensures a comprehensive response to collaborative observation-required emergency tasks. This discovery remains a challenge in an EO sensor web environment. In this study, we propose an EO sensor observation capability metadata model that reuses and extends the existing sensor observation-related metadata standards to enable the accurate and fine-grained discovery of EO sensors. The proposed model is composed of five sub-modules, namely, ObservationBreadth, ObservationDepth, ObservationFrequency, ObservationQuality and ObservationData. The model is applied to different types of EO sensors and is formalized by the Open Geospatial Consortium Sensor Model Language 1.0. The GeosensorQuery prototype retrieves the qualified EO sensors based on the provided geo-event. An actual application to flood emergency observation in the Yangtze River Basin in China is conducted, and the results indicate that sensor inquiry can accurately achieve fine-grained discovery of qualified EO sensors and obtain enriched observation capability information. In summary, the proposed model enables an efficient encoding system that ensures minimum unification to represent the observation capabilities of EO sensors. The model functions as a foundation for the efficient discovery of EO sensors. In addition, the definition and development of this proposed EO sensor observation capability metadata model is a helpful step in extending the Sensor Model Language (SensorML) 2.0 Profile for the description of the observation capabilities of EO sensors.

  1. An overview of the transient thermal-hydraulic analysis code, GINKGO. Fluid model, numerical solution and phenomenal tests

    International Nuclear Information System (INIS)

    Ren Zhihao; Kong Xiangyin; Tsai Chiungwen; Ruan Jialei; Li Jinggang; Ma Zhongying; Yan Jianxing; Ma Yinxiang

    2015-01-01

    A system transient thermal-hydraulic analysis code for PWRs named GINKGO is being developed as part of the indigenous effort of China General Nuclear Power Corp. (CGN) to develop a full-spectrum software package for reactor design and safety analysis. Implemented using Object-Oriented programming technology, GINKGO is designed to be used for simulating all PWR transients except LBLOCA. The primary physical models and key algorithms applied in GINKGO and also the preliminary validation with the phenomena cases are introduced in the paper. To account for reactor coolant transients, the separated phase model under thermal equilibrium is used in the code. The three governing mixture balance equations augmented with the Chexal-Lellouche drift-flux model to determine phase velocities are solved at each time step. Thermal equilibrium between the vapor and liquid phases is assumed with the exception of the upper head volume and pressurizer. Both two-region and multi-region non-equilibrium models are available for the pressurizer simulation. The reactor point kinetics model with six groups of delayed neutrons, the partial derivative approximation of the DNBR model and decay heat model are combined to give a full description for the reactor core. The additional component model, engineered safety system model and models for other auxiliary systems in GINKGO demonstrate a complete capability for PWR safety analysis and thermal-hydraulic design. A fully implicit solution algorithm involving pressure search is applied in GINKGO to improve the stability of the solution method, especially when two-phase conditions with unequal phase velocities exist. Different phenomena cases are set up to demonstrate the capability of GINKGO under different boundary conditions, steady-state achievement, reverse and branch flow, etc. The GINKGO code uses the C/C++ programming language to take advantage of the language's inherent Object Oriented characteristic and to
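
    The drift-flux closure at the heart of the three-equation mixture model can be sketched generically. The relation below is the standard form v_g = C0·j + v_gj with illustrative constant coefficients; it is a stand-in for, not a reproduction of, the Chexal-Lellouche correlation actually used in GINKGO:

```python
def gas_velocity(jf, jg, C0=1.13, vgj=0.24):
    """Drift-flux closure v_g = C0*j + v_gj.

    jf, jg : superficial liquid/vapor velocities [m/s]
    C0     : distribution parameter (illustrative constant)
    vgj    : drift velocity [m/s] (illustrative constant)
    """
    j = jf + jg  # total volumetric flux
    return C0 * j + vgj

def void_fraction(jf, jg, C0=1.13, vgj=0.24):
    """Void fraction alpha = jg / v_g from the drift-flux relation."""
    return jg / gas_velocity(jf, jg, C0, vgj)
```

    Given the mixture balance solution (which supplies jf and jg), the closure determines the relative phase velocities without separate phasic momentum equations.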

  2. Modification of the finite element heat and mass transfer code (FEHM) to model multicomponent reactive transport

    International Nuclear Information System (INIS)

    Viswanathan, H.S.

    1996-08-01

    The finite element code FEHMN, developed by scientists at Los Alamos National Laboratory (LANL), is a three-dimensional finite element heat and mass transport simulator that can handle complex stratigraphy and nonlinear processes such as vadose zone flow, heat flow and solute transport. Scientists at LANL have been developing hydrologic flow and transport models of the Yucca Mountain site using FEHMN. Previous FEHMN simulations have used an equivalent Kd model to model solute transport. In this thesis, FEHMN is modified making it possible to simulate the transport of a species with a rigorous chemical model. Including the rigorous chemical equations into FEHMN simulations should provide for more representative transport models for highly reactive chemical species. A fully kinetic formulation is chosen for the FEHMN reactive transport model. Several methods are available to computationally implement a fully kinetic formulation. Different numerical algorithms are investigated in order to optimize computational efficiency and memory requirements of the reactive transport model. The best algorithm of those investigated is then incorporated into FEHMN. The algorithm chosen requires the user to place strongly coupled species into groups which are then solved for simultaneously using FEHMN. The complete reactive transport model is verified over a wide variety of problems and is shown to be working properly. The new chemical capabilities of FEHMN are illustrated by using Los Alamos National Laboratory's site scale model of Yucca Mountain to model two-dimensional, vadose zone 14C transport. The simulations demonstrate that gas flow and carbonate chemistry can significantly affect 14C transport at Yucca Mountain. The simulations also prove that the new capabilities of FEHMN can be used to refine and buttress already existing Yucca Mountain radionuclide transport studies

  3. Modeling Vortex Generators in the Wind-US Code

    Science.gov (United States)

    Dudek, Julianne C.

    2010-01-01

    A source term model which simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force which would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, supersonic flow in a rectangular duct with a counterrotating vortex generator pair, and subsonic flow in an S-duct with 22 co-rotating vortex generators. The validation results indicate that the source term vortex generator model provides a useful tool for screening vortex generator configurations and gives comparable results to solutions computed using a gridded vane.
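
    The vane force entering the momentum equations can be sketched from the two user inputs the abstract names (planform area and angle of incidence). The following is a simplified thin-airfoil estimate for illustration only, not the Wind-US source-term formulation:

```python
import math

def vg_lift_force(rho, u, planform_area, incidence_deg):
    """Magnitude of the side force added as a momentum source term.

    Simplified sketch in the spirit of vane-type vortex-generator
    source-term models: the thin-airfoil lift slope 2*pi is applied to
    the vane's planform area at its angle of incidence. Illustrative
    only; the actual Wind-US model distributes the force over a range
    of grid points.
    """
    alpha = math.radians(incidence_deg)
    cl = 2.0 * math.pi * math.sin(alpha)  # thin-airfoil lift coefficient
    return 0.5 * rho * u**2 * planform_area * cl
```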

  4. A theoretical model for developing core capabilities from an intellectual capital perspective (Part 1

    Directory of Open Access Journals (Sweden)

    Marius Ungerer

    2005-10-01

    Full Text Available One of the basic assumptions associated with the theoretical model as described in this article is that an organisation (a system can acquire capabilities through intentional strategic and operational initiatives. This intentional capability-building process also implies that the organisation intends to use these capabilities in a constructive way to increase competitive advantage for the firm. Opsomming Een van die basiese aannames wat geassosieer word met die teoretiese model wat in hierdie artikel beskryf word, is dat ’n organisasie (’n stelsel vermoëns deur doelgerigte strategiese en operasionele inisiatiewe kan bekom. Hierdie voorgenome vermoë-skeppingsproses, veronderstel ook dat die onderneming daarop ingestel is om hierdie vermoëns op ’n konstruktiewe wyse te benut om die mededingende voordeel van die organisasie te verhoog.

  5. A theoretical model for developing core capabilities from an intellectual capital perspective (Part 2

    Directory of Open Access Journals (Sweden)

    Marius Ungerer

    2005-10-01

    Full Text Available One of the basic assumptions associated with the theoretical model as described in this article is that an organization (a system can acquire capabilities through intentional strategic and operational initiatives. This intentional capability-building process also implies that the organisation intends to use these capabilities in a constructive way to increase competitive advantage for the firm. Opsomming Een van die basiese aannames wat geassosieer word met die teoretiese model wat in hierdie artikel beskryf word, is dat ’n organisasie (’n stelsel vermoëns deur doelgerigte strategiese en operasionele inisiatiewe kan bekom. Hierdie voorgenome vermoë-skeppingsproses, veronderstel ook dat die onderneming daarop ingestel is om hierdie vermoëns op ’n konstruktiewe wyse te benut om die mededingende voordeel van die organisasie te verhoog.

  6. Capability-based Access Control Delegation Model on the Federated IoT Network

    DEFF Research Database (Denmark)

    Anggorojati, Bayu; Mahalle, Parikshit N.; Prasad, Neeli R.

    2012-01-01

    Flexibility is an important property for general access control system and especially in the Internet of Things (IoT), which can be achieved by access or authority delegation. Delegation mechanisms in access control that have been studied until now have been intended mainly for a system that has...... no resource constraint, such as a web-based system, which is not very suitable for a highly pervasive system such as IoT. To this end, this paper presents an access delegation method with security considerations based on Capability-based Context Aware Access Control (CCAAC) model intended for federated...... machine-to-machine communication or IoT networks. The main idea of our proposed model is that the access delegation is realized by means of a capability propagation mechanism, and incorporating the context information as well as secure capability propagation under federated IoT environments. By using...

  7. Existing and Required Modeling Capabilities for Evaluating ATM Systems and Concepts

    Science.gov (United States)

    Odoni, Amedeo R.; Bowman, Jeremy; Delahaye, Daniel; Deyst, John J.; Feron, Eric; Hansman, R. John; Khan, Kashif; Kuchar, James K.; Pujet, Nicolas; Simpson, Robert W.

    1997-01-01

    ATM systems throughout the world are entering a period of major transition and change. The combination of important technological developments and of the globalization of the air transportation industry has necessitated a reexamination of some of the fundamental premises of existing Air Traffic Management (ATM) concepts. New ATM concepts have to be examined, concepts that may place more emphasis on: strategic traffic management; planning and control; partial decentralization of decision-making; and added reliance on the aircraft to carry out strategic ATM plans, with ground controllers confined primarily to a monitoring and supervisory role. 'Free Flight' is a case in point. In order to study, evaluate and validate such new concepts, the ATM community will have to rely heavily on models and computer-based tools/utilities, covering a wide range of issues and metrics related to safety, capacity and efficiency. The state of the art in such modeling support is adequate in some respects, but clearly deficient in others. It is the objective of this study to assist in: (1) assessing the strengths and weaknesses of existing fast-time models and tools for the study of ATM systems and concepts and (2) identifying and prioritizing the requirements for the development of additional modeling capabilities in the near future. A three-stage process has been followed to this purpose: 1. Through the analysis of two case studies involving future ATM system scenarios, as well as through expert assessment, modeling capabilities and supporting tools needed for testing and validating future ATM systems and concepts were identified and described. 2. Existing fast-time ATM models and support tools were reviewed and assessed with regard to the degree to which they offer the capabilities identified under Step 1. 3. The findings of 1 and 2 were combined to draw conclusions about (1) the best capabilities currently existing, (2) the types of concept testing and validation that can be carried

  8. Reactive transport simulation via combination of a multiphase-capable transport code for unstructured meshes with a Gibbs energy minimization solver of geochemical equilibria

    Science.gov (United States)

    Fowler, S. J.; Driesner, T.; Hingerl, F. F.; Kulik, D. A.; Wagner, T.

    2011-12-01

    We apply a new, C++-based computational model for hydrothermal fluid-rock interaction and scale formation in geothermal reservoirs. The model couples the Complex System Modelling Platform (CSMP++) code for fluid flow in porous and fractured media (Matthai et al., 2007) with the Gibbs energy minimization numerical kernel GEMS3K of the GEM-Selektor (GEMS3) geochemical modelling package (Kulik et al., 2010) in a modular fashion. CSMP++ includes interfaces to commercial file formats, accommodating complex geometry construction using CAD (Rhinoceros) and meshing (ANSYS) software. The CSMP++ approach employs finite element-finite volume spatial discretization, implicit or explicit time discretization, and operator splitting. GEMS3K can calculate complex fluid-mineral equilibria based on a variety of equation of state and activity models. A selection of multi-electrolyte aqueous solution models, such as extended Debye-Huckel, Pitzer (Harvie et al., 1984), EUNIQUAC (Thomsen et al., 1996), and the new ELVIS model (Hingerl et al., this conference), makes it well-suited for application to a wide range of geothermal conditions. An advantage of the GEMS3K solver is simultaneous consideration of complex solid solutions (e.g., clay minerals), gases, fluids, and aqueous solutions. Each coupled simulation results in a thermodynamically-based description of the geochemical and physical state of a hydrothermal system evolving along a complex P-T-X path. The code design allows efficient, flexible incorporation of numerical and thermodynamic database improvements. We demonstrate the coupled code workflow and applicability to compositionally and physically complex natural systems relevant to enhanced geothermal systems, where temporally and spatially varying chemical interactions may take place within diverse lithologies of varying geometry. Engesgaard, P. & Kipp, K. L. (1992). Water Res. Res. 28: 2829-2843. Harvie, C. E.; Møller, N. & Weare, J. H. (1984). Geochim. Cosmochim. Acta 48

  9. Modification of the finite element heat and mass transfer code (FEHMN) to model multicomponent reactive transport

    International Nuclear Information System (INIS)

    Viswanathan, H.S.

    1995-01-01

    The finite element code FEHMN is a three-dimensional finite element heat and mass transport simulator that can handle complex stratigraphy and nonlinear processes such as vadose zone flow, heat flow and solute transport. Scientists at LANL have developed hydrologic flow and transport models of the Yucca Mountain site using FEHMN. Previous FEHMN simulations have used an equivalent Kd model to model solute transport. In this thesis, FEHMN is modified making it possible to simulate the transport of a species with a rigorous chemical model. Including the rigorous chemical equations into FEHMN simulations should provide for more representative transport models for highly reactive chemical species. A fully kinetic formulation is chosen for the FEHMN reactive transport model. Several methods are available to computationally implement a fully kinetic formulation. Different numerical algorithms are investigated in order to optimize computational efficiency and memory requirements of the reactive transport model. The best algorithm of those investigated is then incorporated into FEHMN. The algorithm chosen requires the user to place strongly coupled species into groups which are then solved for simultaneously using FEHMN. The complete reactive transport model is verified over a wide variety of problems and is shown to be working properly. The simulations demonstrate that gas flow and carbonate chemistry can significantly affect 14C transport at Yucca Mountain. The simulations also show that the new capabilities of FEHMN can be used to refine and buttress already existing Yucca Mountain radionuclide transport studies

  10. Models of multi-rod code FRETA-B for transient fuel behavior analysis

    International Nuclear Information System (INIS)

    Uchida, Masaaki; Otsubo, Naoaki.

    1984-11-01

    This paper is a final report of the development of the FRETA-B code, which analyzes LWR fuel behavior during accidents, particularly the Loss-of-Coolant Accident (LOCA). The very high temperature induced by a LOCA causes oxidation of the cladding by steam and, as a combined effect with low external pressure, extensive swelling of the cladding. The latter may reach the point where the rods block the coolant channel. To analyze these phenomena, a single-rod model is insufficient; FRETA-B has the capability to handle multiple fuel rods in a bundle simultaneously, including the interaction between them. In the development work, therefore, efforts were made to avoid excessive increases in calculation time and core memory requirements. Because of the strong dependency of in-LOCA fuel behavior on the coolant state, FRETA-B places emphasis on heat transfer to the coolant as well as on cladding deformation. In the final version, a capability was added to analyze fuel behavior under reflooding using empirical models. The present report describes the basic models of FRETA-B, and also gives its input manual in the appendix. (author)

  11. University-Industry Research Collaboration: A Model to Assess University Capability

    Science.gov (United States)

    Abramo, Giovanni; D'Angelo, Ciriaco Andrea; Di Costa, Flavia

    2011-01-01

    Scholars and policy makers recognize that collaboration between industry and the public research institutions is a necessity for innovation and national economic development. This work presents an econometric model which expresses the university capability for collaboration with industry as a function of size, location and research quality. The…

  12. Semantic Model of Variability and Capabilities of IoT Applications for Embedded Software Ecosystems

    DEFF Research Database (Denmark)

    Tomlein, Matus; Grønbæk, Kaj

    2016-01-01

    Applications in embedded open software ecosystems for Internet of Things devices open new challenges regarding how their variability and capabilities should be modeled. In collaboration with an industrial partner, we have recognized that such applications have complex constraints on the context. We...

  13. Methodology Using MELCOR Code to Model Proposed Hazard Scenario

    Energy Technology Data Exchange (ETDEWEB)

    Gavin Hawkley

    2010-07-01

    This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of the leak path factor (LPF), i.e., the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the fraction of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and on other pathways from the building, such as doorways, both open and closed. The study shows how the multiple LPFs from the building interior can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a receptor placed downwind can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). The study also briefly addresses particle characteristics that affect atmospheric particle dispersion, and compares this dispersion with the LPF methodology.
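
    For compartments in series, the combinatory evaluation of leak path factors reduces to a product of per-stage fractions; the assumed 0.5 × 0.5 multiplication is the two-stage special case. A minimal sketch:

```python
def total_lpf(stage_lpfs):
    """Total leak path factor for material passing through compartments in series.

    Each entry is the fraction of respirable material that leaves one
    compartment and enters the next. Serial pathways multiply; the
    0.5 x 0.5 assumption discussed in the abstract is the two-stage case.
    """
    total = 1.0
    for lpf in stage_lpfs:
        if not 0.0 <= lpf <= 1.0:
            raise ValueError("an LPF is a fraction between 0 and 1")
        total *= lpf
    return total
```

    The resulting total LPF scales the building inventory into the respirable source term used for downwind dose estimation.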

  14. A New Approach to Model Pitch Perception Using Sparse Coding.

    Directory of Open Access Journals (Sweden)

    Oded Barzelay

    2017-01-01

    Full Text Available Our acoustical environment abounds with repetitive sounds, some of which are related to pitch perception. It is still unknown how the auditory system, in processing these sounds, relates a physical stimulus and its percept. Since, in mammals, all auditory stimuli are conveyed into the nervous system through the auditory nerve (AN) fibers, a model should explain the perception of pitch as a function of this particular input. However, pitch perception is invariant to certain features of the physical stimulus. For example, a missing fundamental stimulus with resolved or unresolved harmonics, or low- and high-amplitude stimuli with the same spectral content: these all give rise to the same percept of pitch. In contrast, the AN representations for these different stimuli are not invariant to these effects. In fact, due to saturation and non-linearity of both cochlear and inner hair cell responses, these differences are enhanced by the AN fibers. Thus there is a difficulty in explaining how pitch percept arises from the activity of the AN fibers. We introduce a novel approach for extracting pitch cues from the AN population activity for a given arbitrary stimulus. The method is based on a technique known as sparse coding (SC). It is the representation of pitch cues by a few spatiotemporal atoms (templates) from among a large set of possible ones (a dictionary). The amount of activity of each atom is represented by a non-zero coefficient, analogous to an active neuron. Such a technique has been successfully applied to other modalities, particularly vision. The model is composed of a cochlear model, an SC processing unit, and a harmonic sieve. We show that the model copes with different pitch phenomena: extracting resolved and non-resolved harmonics, missing fundamental pitches, stimuli with both high and low amplitudes, iterated rippled noises, and recorded musical instruments.
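
    The sparse-coding stage can be illustrated with the classic matching-pursuit algorithm, which greedily picks the dictionary atom best correlated with the current residual. This toy version is a generic stand-in for the model's spatiotemporal dictionary, not the authors' implementation:

```python
import numpy as np

def matching_pursuit(x, D, n_atoms):
    """Greedy sparse coding: represent x with a few atoms of dictionary D.

    D has unit-norm columns (the 'atoms'); returns a coefficient vector
    with at most n_atoms non-zero entries, each analogous to an active
    neuron in the abstract's description.
    """
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        scores = D.T @ residual          # correlation with each atom
        k = np.argmax(np.abs(scores))    # best-matching atom
        coeffs[k] += scores[k]
        residual -= scores[k] * D[:, k]  # remove its contribution
    return coeffs
```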

  15. Modeling ion exchange in clinoptilolite using the EQ3/6 geochemical modeling code

    International Nuclear Information System (INIS)

    Viani, B.E.; Bruton, C.J.

    1992-06-01

    Assessing the suitability of Yucca Mtn., NV as a potential repository for high-level nuclear waste requires the means to simulate ion-exchange behavior of zeolites. Vanselow and Gapon convention cation-exchange models have been added to geochemical modeling codes EQ3NR/EQ6, allowing exchange to be modeled for up to three exchangers or a single exchanger with three independent sites. Solid-solution models that are numerically equivalent to the ion-exchange models were derived and also implemented in the code. The Gapon model is inconsistent with experimental adsorption isotherms of trace components in clinoptilolite. A one-site Vanselow model can describe adsorption of Cs or Sr on clinoptilolite, but a two-site Vanselow exchange model is necessary to describe K contents of natural clinoptilolites
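
    For a homovalent binary exchange, the Vanselow convention yields a closed-form isotherm in the exchanger mole fractions. A minimal sketch (the selectivity coefficient and activities are illustrative, not fitted clinoptilolite values):

```python
def vanselow_binary(Kv, a_A, a_B):
    """Equilibrium exchanger mole fraction of cation A for the homovalent
    exchange A+ + B-X <=> A-X + B+ under the Vanselow convention:
    Kv = (X_A * a_B) / (X_B * a_A).

    Kv       : selectivity coefficient (illustrative value)
    a_A, a_B : aqueous activities of A+ and B+
    """
    r = Kv * a_A / a_B       # X_A / X_B at equilibrium
    return r / (1.0 + r)     # exchanger mole fractions sum to 1
```

    In a code like EQ3/6 this balance is solved simultaneously with the aqueous speciation rather than in closed form, and a two-site model applies a separate Kv to each site.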

  16. Analysis on fuel thermal conductivity model of the computer code for performance prediction of fuel rods

    International Nuclear Information System (INIS)

    Li Hai; Huang Chen; Du Aibing; Xu Baoyu

    2014-01-01

    The thermal conductivity is one of the most important parameters in the computer code for performance prediction for fuel rods. Several fuel thermal conductivity models used in foreign computer codes, including thermal conductivity models for MOX fuel and UO2 fuel, were introduced in this paper. Thermal conductivities were calculated by using these models, and the results were compared and analyzed. Finally, the thermal conductivity model for the native computer code for performance prediction for fuel rods in fast reactors was recommended. (authors)
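
    Fuel performance codes typically express the phonon-scattering contribution to oxide fuel conductivity in a 1/(A + B·T) form. A hedged sketch with purely illustrative coefficients (real correlations add electronic, porosity, and burnup corrections):

```python
def fuel_conductivity(T, A=0.04, B=2.2e-4):
    """Phonon term of a generic oxide-fuel conductivity model,
    k = 1/(A + B*T) [W/m/K], T in kelvin.

    A and B are illustrative placeholders, not coefficients from any
    of the models compared in the paper; the 1/(A+BT) shape captures
    the characteristic decrease of conductivity with temperature.
    """
    return 1.0 / (A + B * T)
```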

  17. Development Status of the PEBBLES Code for Pebble Mechanics: Improved Physical Models and Speed-up

    Energy Technology Data Exchange (ETDEWEB)

    Joshua J. Cogliati; Abderrafi M. Ougouag

    2009-09-01

    PEBBLES is a code for simulating the motion of all the pebbles in a pebble bed reactor. Since pebble bed reactors are packed randomly and not precisely placed, the location of the fuel elements in the reactor is not deterministically known. Instead, when determining operating parameters the motion of the pebbles can be simulated and stochastic locations can be found. The PEBBLES code can output information relevant for other simulations of the pebble bed reactors such as the positions of the pebbles in the reactor, packing fraction change in an earthquake, and velocity profiles created by recirculation. The goal for this level three milestone was to speed up the PEBBLES code through implementation on massively parallel computers. Work on this goal has resulted in both speeding up the single processor version and creating a new parallel version of PEBBLES. Both the single processor version and the parallel running capability of the PEBBLES code have improved since the fiscal year start. The hybrid MPI/OpenMP PEBBLES version was created this year to run on the increasingly common cluster hardware profile that combines nodes with multiple processors that share memory and a cluster of nodes that are networked together. The OpenMP portions use the Open Multi-Processing shared memory parallel processing model to split the task across processors in a single node that shares memory. The Message Passing Interface (MPI) portion uses messages to communicate between different nodes over a network. The following are wall clock speed up for simulating an NGNP-600 sized reactor. The single processor version runs 1.5 times faster compared to the single processor version at the beginning of the fiscal year. This speedup is primarily due to the improved static friction model described in the report. When running on 64 processors, the new MPI/OpenMP hybrid version has a wall clock speed up of 22 times compared to the current single processor version. When using 88 processors, a

  18. Development Status of the PEBBLES Code for Pebble Mechanics: Improved Physical Models and Speed-up

    Energy Technology Data Exchange (ETDEWEB)

    Joshua J. Cogliati; Abderrafi M. Ougouag

    2009-12-01

    PEBBLES is a code for simulating the motion of all the pebbles in a pebble bed reactor. Since pebble bed reactors are packed randomly and not precisely placed, the location of the fuel elements in the reactor is not deterministically known. Instead, when determining operating parameters the motion of the pebbles can be simulated and stochastic locations can be found. The PEBBLES code can output information relevant for other simulations of the pebble bed reactors such as the positions of the pebbles in the reactor, packing fraction change in an earthquake, and velocity profiles created by recirculation. The goal for this level three milestone was to speed up the PEBBLES code through implementation on massively parallel computers. Work on this goal has resulted in both speeding up the single processor version and creating a new parallel version of PEBBLES. Both the single processor version and the parallel running capability of the PEBBLES code have improved since the fiscal year start. The hybrid MPI/OpenMP PEBBLES version was created this year to run on the increasingly common cluster hardware profile that combines nodes with multiple processors that share memory and a cluster of nodes that are networked together. The OpenMP portions use the Open Multi-Processing shared memory parallel processing model to split the task across processors in a single node that shares memory. The Message Passing Interface (MPI) portion uses messages to communicate between different nodes over a network. The following are wall clock speed up for simulating an NGNP-600 sized reactor. The single processor version runs 1.5 times faster compared to the single processor version at the beginning of the fiscal year. This speedup is primarily due to the improved static friction model described in the report. When running on 64 processors, the new MPI/OpenMP hybrid version has a wall clock speed up of 22 times compared to the current single processor version. When using 88 processors, a

  19. Coding conventions and principles for a National Land-Change Modeling Framework

    Science.gov (United States)

    Donato, David I.

    2017-07-14

    This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.

  20. Code Coupling via Jacobian-Free Newton-Krylov Algorithms with Application to Magnetized Fluid Plasma and Kinetic Neutral Models

    Energy Technology Data Exchange (ETDEWEB)

    Joseph, Ilon [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-05-27

    Jacobian-free Newton-Krylov (JFNK) algorithms are a potentially powerful class of methods for solving the problem of coupling codes that address different physics models. As communication capability between individual submodules varies, different choices of coupling algorithms are required. The more communication that is available, the more possible it becomes to exploit the simple sparsity pattern of the Jacobian, albeit of a large system. The less communication that is available, the more dense the Jacobian matrices become and new types of preconditioners must be sought to efficiently take large time steps. In general, methods that use constrained or reduced subsystems can offer a compromise in complexity. The specific problem of coupling a fluid plasma code to a kinetic neutrals code is discussed as an example.
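
    The key ingredient of JFNK coupling is that the Krylov solver never forms the Jacobian; it only needs Jacobian-vector products, approximated by a finite difference of the coupled residual. A minimal sketch (the test residual F is a made-up two-equation system, not a plasma-neutrals model):

```python
import numpy as np

def jfnk_matvec(F, u, v, eps=1e-7):
    """Matrix-free Jacobian-vector product: J(u) @ v ~= (F(u + h*v) - F(u)) / h.

    F can be the residual of an entire coupled multi-code system; the
    Krylov iteration only ever calls this routine, so no submodule needs
    to expose its Jacobian explicitly.
    """
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return np.zeros_like(u, dtype=float)
    h = eps / nv  # scale the perturbation to the size of v
    return (F(u + h * v) - F(u)) / h
```

    In practice this matvec is handed to a preconditioned GMRES iteration inside each Newton step; the choice of preconditioner is where the communication constraints discussed in the abstract enter.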

  1. How do dynamic capabilities transform external technologies into firms’ renewed technological resources? – A mediation model

    DEFF Research Database (Denmark)

    Li-Ying, Jason; Wang, Yuandi; Ning, Lutao

    2016-01-01

    How do externally acquired resources become valuable, rare, hard-to-imitate, and non-substitutable resource bundles through the development of dynamic capabilities? This study proposes and tests a mediation model of how firms’ internal technological diversification and R&D, as two distinctive...... microfoundations of dynamic technological capabilities, mediate the relationship between external technology breadth and firms’ technological innovation performance, based on the resource-based view and the dynamic capability view. Using a sample of listed Chinese licensee firms, we find that firms must broadly...... explore external technologies to ignite the dynamism in internal technological diversity and in-house R&D, which play their crucial roles differently to transform and reconfigure firms’ technological resources....

  2. Development, Verification and Use of Gust Modeling in the NASA Computational Fluid Dynamics Code FUN3D

    Science.gov (United States)

    Bartels, Robert E.

    2012-01-01

    This paper presents the implementation of gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp-edged gust, and the result is compared with the theoretical result. The present simulations are also compared with other CFD gust simulations. This paper additionally serves as a user's manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced order gust model using a gust with a Gaussian profile in the FUN3D code is presented. ARMA simulated results of a sequence of one-minus-cosine gusts are shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time-varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced order model, and compared with a direct simulation of the system in the FUN3D code. The two results are found to agree very well.

  3. Cross-band noise model refinement for transform domain Wyner–Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2012-01-01

    Transform domain Wyner-Ziv (TDWZ) video coding trails conventional video coding solutions, mainly due to the quality of the side information, inaccurate noise modeling, and loss in the final coding step. The major goal of this paper is to enhance the accuracy of the noise modeling, which is one of the most important aspects...... influencing the coding performance of DVC. A TDWZ video decoder with a novel cross-band based adaptive noise model is proposed, and a noise residue refinement scheme is introduced to successively update the estimated noise residue for noise modeling after each bit-plane. Experimental results show...... that the proposed noise model and noise residue refinement scheme can improve the rate-distortion (RD) performance of TDWZ video coding significantly. The quality of the side information modeling is also evaluated by a measure of the ideal code length....

  4. On the Generalization Capabilities of the Ten-Parameter Jiles-Atherton Model

    Directory of Open Access Journals (Sweden)

    Gabriele Maria Lozito

    2015-01-01

    Full Text Available This work proposes an analysis on the generalization capabilities for the modified version of the classic Jiles-Atherton model for magnetic hysteresis. The modified model takes into account the use of dynamic parameterization, as opposed to the classic model where the parameters are constant. Two different dynamic parameterizations are taken into account: a dependence on the excitation and a dependence on the response. The identification process is performed by using a novel nonlinear optimization technique called Continuous Flock-of-Starling Optimization Cube (CFSO3, an algorithm belonging to the class of swarm intelligence. The algorithm exploits parallel architecture and uses a supervised strategy to alternate between exploration and exploitation capabilities. Comparisons between the obtained results are presented at the end of the paper.
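The classic constant-parameter Jiles-Atherton model that the abstract takes as its baseline can be integrated in a few lines. The sketch below uses the irreversible component only (omitting the reversible c-weighted term of the full five-parameter model) and the standard sign guard against non-physical branches; the parameter values are illustrative, not fitted to any material.

```python
import numpy as np

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with its small-x series."""
    if abs(x) < 1e-4:
        return x / 3.0
    return 1.0 / np.tanh(x) - 1.0 / x

def ja_sweep(H_path, Ms=1.6e6, a=1100.0, alpha=1.6e-3, k=400.0):
    """Forward-Euler integration of the irreversible Jiles-Atherton ODE
    dM/dH = (Man - M) / (delta*k - alpha*(Man - M))."""
    M = np.zeros_like(H_path)
    for i in range(1, len(H_path)):
        dH = H_path[i] - H_path[i - 1]
        He = H_path[i - 1] + alpha * M[i - 1]      # effective field
        Man = Ms * langevin(He / a)                # anhysteretic curve
        gap = Man - M[i - 1]
        if gap * dH > 0:                           # guard: M only moves toward Man
            delta = 1.0 if dH > 0 else -1.0
            M[i] = M[i - 1] + dH * gap / (delta * k - alpha * gap)
        else:
            M[i] = M[i - 1]
    return M

# magnetize up to 5 kA/m, then return the field to zero: remanence survives
H = np.concatenate([np.linspace(0.0, 5e3, 2000), np.linspace(5e3, 0.0, 2000)])
M = ja_sweep(H)
```

Dynamic parameterization, as proposed in the paper, would replace the constants `a`, `alpha`, and `k` with functions of the excitation or the response.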

  5. Konsep Tingkat Kematangan penerapan Internet Protokol versi 6 (Capability Maturity Model for IPv6 Implementation

    Directory of Open Access Journals (Sweden)

    Riza Azmi

    2015-03-01

    Full Text Available The Internet Protocol (IP) is the world's internet numbering standard, and its address space is finite. Globally, IP allocation is governed by the Internet Assigned Numbers Authority (IANA) and delegated through the regional authority of each continent. IP comes in two versions, IPv4 and IPv6; the IPv4 pool was declared exhausted at the IANA level in April 2011. The use of IP is therefore being directed toward IPv6. To assess how mature an organisation's IPv6 implementation is, this study develops a maturity model for IPv6 adoption. The basic concept of the model is taken from the Capability Maturity Model Integration (CMMI), with several additions: the IPv6 migration roadmap in Indonesia, the Requests for Comments (RFCs) related to IPv6, and several best practices for IPv6 implementation. With this concept, the study produces a Capability Maturity Model for IPv6 Implementation.

  6. Three-field modeling for MARS 1-D code

    International Nuclear Information System (INIS)

    Hwang, Moonkyu; Lim, Ho-Gon; Jeong, Jae-Jun; Chung, Bub-Dong

    2006-01-01

    In this study, the three-field modeling of the two-phase mixture is developed. The finite difference equations for the three-field equations thereafter are devised. The solution scheme has been implemented into the MARS 1-D code. The three-field formulations adopted are similar to those for the MARS 3-D module, in the sense that the mass and momentum are treated separately for the entrained liquid and continuous liquid. As in the MARS 3-D module, the entrained liquid and continuous liquid are combined into one for the energy equation, assuming thermal equilibrium between the two. All the non-linear terms are linearized to arrange the finite difference equation set into a linear matrix form with respect to the unknown arguments. The problems chosen for the assessment of the newly added entrained field consist of basic conceptual tests. Among the tests are a gas-only test, a liquid-only test, a gas-only test with supplied entrained liquid, the Edwards pipe problem, and the GE level swell problem. The conceptual tests performed confirm the sound integrity of the three-field solver

  7. Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection (Pub Version, Open Access)

    Science.gov (United States)

    2016-05-03

    Workshop on Spoken Language Technologies for Under-resourced Languages (SLTU 2016), 9-12 May 2016, Yogyakarta, Indonesia; Procedia Computer Science 81 (2016) 128-135. Abstract: We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Our research focuses on pronunciation modeling of English (embedded language) words within Swahili speech...

  8. User's manual for a measurement simulation code

    International Nuclear Information System (INIS)

    Kern, E.A.

    1982-07-01

    The MEASIM code has been developed primarily for modeling process measurements in materials processing facilities associated with the nuclear fuel cycle. In addition, the code computes materials balances and the summation of materials balances along with associated variances. The code has been used primarily in performance assessment of materials' accounting systems. This report provides the necessary information for a potential user to employ the code in these applications. A number of examples that demonstrate most of the capabilities of the code are provided
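The materials-balance bookkeeping such a code performs is standard, even though MEASIM's internals are not reproduced here. The sketch below (function and variable names are hypothetical) computes the balance for one accounting period and propagates measurement variances under the usual independence assumption.

```python
def material_balance(begin_inv, inputs, outputs, end_inv):
    """Materials balance (MUF, 'material unaccounted for') and its variance.

    Each argument is a (measured_value, standard_deviation) pair, or a list
    of such pairs for the flows; measurement errors are assumed independent,
    so the variances simply add.
    """
    mb = (begin_inv[0] + sum(v for v, _ in inputs)
          - sum(v for v, _ in outputs) - end_inv[0])
    var = (begin_inv[1] ** 2 + sum(s ** 2 for _, s in inputs)
           + sum(s ** 2 for _, s in outputs) + end_inv[1] ** 2)
    return mb, var

# one balance period: 100 kg on hand, 50 kg received, 49 kg shipped,
# 100.5 kg measured at closing (values and uncertainties illustrative)
mb, var = material_balance((100.0, 0.5), [(50.0, 0.2)],
                           [(49.0, 0.2)], (100.5, 0.5))
```

Here the apparent imbalance of 0.5 kg carries a standard deviation of about 0.76 kg, so it is well within one sigma; that comparison is the essence of performance assessment for an accounting system.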

  9. Recent extensions and use of the statistical model code EMPIRE-II - version: 2.17 Millesimo

    International Nuclear Information System (INIS)

    Herman, M.

    2003-01-01

    These lecture notes describe new features of the modular code EMPIRE-2.17, designed to perform comprehensive calculations of nuclear reactions using a variety of nuclear reaction models. Compared to version 2.13, the current release has been extended by including the coupled-channel mechanism, the exciton model, a Monte Carlo approach to preequilibrium emission, use of microscopic level densities, the width fluctuation correction, detailed calculation of the recoil spectra, and powerful plotting capabilities provided by the ZVView package. The second part of this lecture concentrates on the use of the code in practical calculations, with emphasis on the aspects relevant to nuclear data evaluation. In particular, adjusting model parameters is discussed in detail. (author)

  10. A Perceptual Model for Sinusoidal Audio Coding Based on Spectral Integration

    NARCIS (Netherlands)

    Van de Par, S.; Kohlrausch, A.; Heusdens, R.; Jensen, J.; Holdt Jensen, S.

    2005-01-01

    Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of

  11. A perceptual model for sinusoidal audio coding based on spectral integration

    NARCIS (Netherlands)

    Van de Par, S.; Kohlrauch, A.; Heusdens, R.; Jensen, J.; Jensen, S.H.

    2005-01-01

    Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of

  12. New Modelling Capabilities in Commercial Software for High-Gain Antennas

    DEFF Research Database (Denmark)

    Jørgensen, Erik; Lumholt, Michael; Meincke, Peter

    2012-01-01

    This paper presents an overview of selected new modelling algorithms and capabilities in commercial software tools developed by TICRA. A major new area is design and analysis of printed reflectarrays, where a fully integrated design environment is under development, allowing fast and accurate characterization of the reflectarray element, an initial phase-only synthesis, followed by a full optimization procedure taking into account the near-field from the feed and the finite extent of the array. Another interesting new modelling capability is made available through the DIATOOL software, which is a new type of EM software tool aimed at extending the ways engineers can use antenna measurements in the antenna design process. The tool allows reconstruction of currents and near fields on a 3D surface conformal to the antenna, by using the measured antenna field as input. The currents on the antenna...

  13. MIG version 0.0 model interface guidelines: Rules to accelerate installation of numerical models into any compliant parent code

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, R.M.; Wong, M.K.

    1996-08-01

    A set of model interface guidelines, called MIG, is presented as a means by which any compliant numerical material model can be rapidly installed into any parent code without having to modify the model subroutines. Here, "model" usually means a material model such as one that computes stress as a function of strain, though the term may be extended to any numerical operation. "Parent code" means a hydrocode, finite element code, etc. which uses the model and enforces, say, the fundamental laws of motion and thermodynamics. MIG requires the model developer (who creates the model package) to specify model needs in a standardized but flexible way. MIG includes a dictionary of technical terms that allows developers and parent code architects to share a common vocabulary when specifying field variables. For portability, database management is the responsibility of the parent code. Input/output occurs via structured calling arguments. As much model information as possible (such as the lists of required inputs, as well as lists of precharacterized material data and special needs) is supplied by the model developer in an ASCII text file. Every MIG-compliant model also has three required subroutines to check data, to request extra field variables, and to perform model physics. To date, the MIG scheme has proven flexible in beta installations of a simple yield model, plus a more complicated viscodamage yield model, three electromechanical models, and a complicated anisotropic microcrack constitutive model. The MIG yield model has been successfully installed using identical subroutines in three vectorized parent codes and one parallel C++ code, all predicting comparable results. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, thereby reducing the cost of installing and sharing models in diverse new codes.
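The three required entry points described above (check data, request extra field variables, perform model physics) can be sketched as an interface contract. This is a Python analogue, not the MIG Fortran/C calling convention itself; the class and method names, and the one-dimensional yield model inside, are purely illustrative.

```python
class SimpleYieldModel:
    """Hypothetical sketch of a MIG-style model package: three required
    entry points, with database management left to the parent code."""
    name = "simple_yield"

    def check_data(self, params):
        # validate user-supplied material data before the run starts
        if params["yield_stress"] <= 0.0 or params["shear_modulus"] <= 0.0:
            raise ValueError("material constants must be positive")
        return params

    def request_extra_variables(self):
        # field variables the parent code must allocate, store, and advect
        return ["eqps"]  # equivalent plastic strain

    def run_physics(self, stress_trial, params, state):
        # 1-D radial-return sketch: cap the trial stress at the yield surface
        y = params["yield_stress"]
        if abs(stress_trial) > y:
            state["eqps"] += (abs(stress_trial) - y) / params["shear_modulus"]
            return (y if stress_trial > 0 else -y), state
        return stress_trial, state

# the parent code drives the model through the fixed interface
model = SimpleYieldModel()
params = model.check_data({"yield_stress": 100.0, "shear_modulus": 1000.0})
state = {var: 0.0 for var in model.request_extra_variables()}
stress, state = model.run_physics(150.0, params, state)
```

Because the parent code owns the data and calls only these entry points, the same model package can be dropped into any compliant host, which is the portability argument of the abstract.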

  14. Applicability of SEI's Capability Maturity Model in Joint Information Technology, Supreme Command Headquarters

    OpenAIRE

    Thongmuang, Jitti.

    1995-01-01

    The Software Engineering Institute's (SEI) Capability Maturity Model (CMM) is analyzed to identify its technological and economic applicability for the Joint Information Technology (JIT), Supreme Command Headquarters, Royal Thai Ministry of Defense. Kurt Lewin's force field theory was used to analyze different dimensions of CMM's applicability for JIT's organizational environment (defined by the stakeholder concept). It suggests that introducing CMM technology into JIT is unwarranted at this ...

  15. Three Quality Journeys - Capability Maturity Model Integration, Baldrige Performance Excellence Program, and ISO 9000 Series

    Science.gov (United States)

    2012-04-26

    ...management; also demands involvement by upper executives in order to integrate quality into the business. The ISO 9004:2000 standard provided a method for... previously used methods. Indicated that ISO 9000:2008 provided a roadmap for creating a quality management system that addressed issues specific to this... Abbreviations: CMMI - Capability Maturity Model Integration; CMMI-DEV - CMMI for Development; PDCA - Plan-Do-Check-Act; SCAMPI - Standard CMMI Appraisal Method for Process Improvement

  16. A model of turbocharger radial turbines appropriate to be used in zero- and one-dimensional gas dynamics codes for internal combustion engines modelling

    Energy Technology Data Exchange (ETDEWEB)

    Serrano, J.R.; Arnau, F.J.; Dolz, V.; Tiseira, A. [CMT-Motores Termicos, Universidad Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia (Spain); Cervello, C. [Conselleria de Cultura, Educacion y Deporte, Generalitat Valenciana (Spain)

    2008-12-15

    The paper presents a model of fixed and variable geometry turbines. The aim of this model is to provide an efficient boundary condition to model turbocharged internal combustion engines with zero- and one-dimensional gas dynamic codes. The model is based from its very conception on the measured characteristics of the turbine. Nevertheless, it is capable of extrapolating operating conditions that differ from those included in the turbine maps, since the engines usually work within these zones. The presented model has been implemented in a one-dimensional gas dynamic code and has been used to calculate unsteady operating conditions for several turbines. The results obtained have been compared with success against pressure-time histories measured upstream and downstream of the turbine during on-engine operation. (author)

  17. A model of turbocharger radial turbines appropriate to be used in zero- and one-dimensional gas dynamics codes for internal combustion engines modelling

    International Nuclear Information System (INIS)

    Serrano, J.R.; Arnau, F.J.; Dolz, V.; Tiseira, A.; Cervello, C.

    2008-01-01

    The paper presents a model of fixed and variable geometry turbines. The aim of this model is to provide an efficient boundary condition to model turbocharged internal combustion engines with zero- and one-dimensional gas dynamic codes. The model is based from its very conception on the measured characteristics of the turbine. Nevertheless, it is capable of extrapolating operating conditions that differ from those included in the turbine maps, since the engines usually work within these zones. The presented model has been implemented in a one-dimensional gas dynamic code and has been used to calculate unsteady operating conditions for several turbines. The results obtained have been compared with success against pressure-time histories measured upstream and downstream of the turbine during on-engine operation

  18. Mechanistic modelling of gaseous fission product behaviour in UO2 fuel by Rtop code

    International Nuclear Information System (INIS)

    Kanukova, V.D.; Khoruzhii, O.V.; Kourtchatov, S.Y.; Likhanskii, V.V.; Matveew, L.V.

    2002-01-01

    The current status of a mechanistic modelling by the RTOP code of the fission product behaviour in polycrystalline UO2 fuel is described. An outline of the code and implemented physical models is presented. The general approach to code validation is discussed. It is exemplified by the results of validation of the models of fuel oxidation and grain growth. The different models of intragranular and intergranular gas bubble behaviour have been tested and the sensitivity of the code in the framework of these models has been analysed. An analysis of available models of the resolution of grain face bubbles is also presented. The possibilities of the RTOP code are presented through the example of modelling behaviour of WWER fuel over the course of a comparative WWER-PWR experiment performed at Halden and by comparison with Yanagisawa experiments. (author)

  19. Demonstration of the Recent Additions in Modeling Capabilities for the WEC-Sim Wave Energy Converter Design Tool: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Tom, N.; Lawson, M.; Yu, Y. H.

    2015-03-01

    WEC-Sim is a mid-fidelity numerical tool for modeling wave energy conversion (WEC) devices. The code uses the MATLAB SimMechanics package to solve the multi-body dynamics and models the wave interactions using hydrodynamic coefficients derived from frequency domain boundary element methods. In this paper, the new modeling features introduced in the latest release of WEC-Sim will be presented. The first feature discussed is the conversion of the fluid memory kernel to a state-space approximation that provides significant gains in computational speed. The benefit of the state-space calculation becomes even greater after the hydrodynamic body-to-body coefficients are introduced, as the number of interactions increases exponentially with the number of floating bodies. The final feature discussed is the capability to add Morison elements to provide additional hydrodynamic damping and inertia. This is generally used as a tuning feature, because performance is highly dependent on the chosen coefficients. In this paper, a review of the hydrodynamic theory for each of the features is provided and successful implementation is verified using test cases.
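The speed gain from the state-space conversion comes from replacing an O(N²) convolution of the fluid memory kernel with an O(N) linear-system update. The sketch below is not WEC-Sim's fitted realization; it uses a single exponential kernel, for which a one-state realization is exact, so the two evaluations can be compared directly (all parameter values are illustrative).

```python
import numpy as np

dt, lam, c = 0.01, 2.0, 3.0
t = np.arange(0.0, 10.0, dt)
v = np.sin(1.5 * t)                      # body velocity (illustrative)

# direct evaluation of the memory integral y(t) = ∫ K(t-τ) v(τ) dτ, O(N²)
K = c * np.exp(-lam * t)                 # radiation kernel K(t) = c e^{-λt}
y_conv = np.convolve(K, v)[:t.size] * dt

# equivalent state-space realization x' = -λ x + v, y = c x, O(N)
y_ss = np.zeros_like(t)
x = 0.0
phi = np.exp(-lam * dt)                  # exact one-step state transition
for i in range(1, t.size):
    x = phi * x + (1.0 - phi) / lam * v[i - 1]
    y_ss[i] = c * x
```

For a realistic radiation-damping kernel the state-space matrices must be identified (e.g. by a frequency- or time-domain fit), but the per-step cost advantage shown here is the same.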

  20. General Description of Fission Observables: GEF Model Code

    Science.gov (United States)

    Schmidt, K.-H.; Jurado, B.; Amouroux, C.; Schmitt, C.

    2016-01-01

    The GEF ("GEneral description of Fission observables") model code is documented. It describes the observables for spontaneous fission, neutron-induced fission and, more generally, for fission of a compound nucleus from any other entrance channel, with given excitation energy and angular momentum. The GEF model is applicable for a wide range of isotopes from Z = 80 to Z = 112 and beyond, up to excitation energies of about 100 MeV. The results of the GEF model are compared with fission barriers, fission probabilities, fission-fragment mass- and nuclide distributions, isomeric ratios, total kinetic energies, and prompt-neutron and prompt-gamma yields and energy spectra from neutron-induced and spontaneous fission. Derived properties of delayed neutrons and decay heat are also considered. The GEF model is based on a general approach to nuclear fission that explains a great part of the complex appearance of fission observables on the basis of fundamental laws of physics and general properties of microscopic systems and mathematical objects. The topographic theorem is used to estimate the fission-barrier heights from theoretical macroscopic saddle-point and ground-state masses and experimental ground-state masses. Motivated by the theoretically predicted early localisation of nucleonic wave functions in a necked-in shape, the properties of the relevant fragment shells are extracted. These are used to determine the depths and the widths of the fission valleys corresponding to the different fission channels and to describe the fission-fragment distributions and deformations at scission by a statistical approach. A modified composite nuclear-level-density formula is proposed. It respects some features in the superfluid regime that are in accordance with new experimental findings and with theoretical expectations. These are a constant-temperature behaviour that is consistent with a considerably increased heat capacity and an increased pairing condensation energy that is

  1. Spatial Preference Modelling for equitable infrastructure provision: an application of Sen's Capability Approach

    Science.gov (United States)

    Wismadi, Arif; Zuidgeest, Mark; Brussel, Mark; van Maarseveen, Martin

    2014-01-01

    To determine whether the inclusion of spatial neighbourhood comparison factors in Preference Modelling allows spatial decision support systems (SDSSs) to better address spatial equity, we introduce Spatial Preference Modelling (SPM). To evaluate the effectiveness of this model in addressing equity, various standardisation functions in both Non-Spatial Preference Modelling and SPM are compared. The evaluation involves applying the model to a resource location-allocation problem for transport infrastructure in the Special Province of Yogyakarta in Indonesia. We apply Amartya Sen's Capability Approach to define opportunity to mobility as a non-income indicator. Using the extended Moran's I interpretation for spatial equity, we evaluate the distribution output regarding, first, 'the spatial distribution patterns of priority targeting for allocation' (SPT) and, second, 'the effect of new distribution patterns after location-allocation' (ELA). The Moran's I index of the initial map and its comparison with six patterns for SPT as well as ELA consistently indicates that the SPM is more effective for addressing spatial equity. We conclude that the inclusion of spatial neighbourhood comparison factors in Preference Modelling improves the capability of SDSS to address spatial equity. This study thus proposes a new formal method for SDSS with specific attention on resource location-allocation to address spatial equity.
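The paper's extended interpretation of Moran's I is not reproduced here, but the standard global Moran's I that such an equity evaluation builds on is short to state: it is a spatially weighted correlation of deviations from the mean, positive for clustered values and negative for checkerboard-like dispersion. A minimal sketch with an illustrative adjacency matrix:

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I: spatial autocorrelation of values x under a
    symmetric spatial weight matrix W with zero diagonal."""
    z = np.asarray(x, dtype=float) - np.mean(x)
    return z.size / W.sum() * (z @ W @ z) / (z @ z)

# four zones on a line, rook adjacency (illustrative weights)
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

smooth = morans_i([1.0, 2.0, 3.0, 4.0], W)         # clustered gradient
alternating = morans_i([1.0, -1.0, 1.0, -1.0], W)  # checkerboard pattern
```

In an equity analysis, `x` would hold the per-zone opportunity indicator before and after allocation, and the change in the index summarises whether the allocation clustered or dispersed opportunity.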

  2. Improved virtual channel noise model for transform domain Wyner-Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2009-01-01

    Distributed video coding (DVC) has been proposed as a new video coding paradigm to deal with lossy source coding using side information to exploit the statistics at the decoder to reduce computational demands at the encoder. A virtual channel noise model is utilized at the decoder to estimate...... the noise distribution between the side information frame and the original frame. This is one of the most important aspects influencing the coding performance of DVC. Noise models with different granularity have been proposed. In this paper, an improved noise model for transform domain Wyner-Ziv video...
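DVC decoders commonly model the residue between the side information and the original frame as zero-mean Laplacian, with a scale parameter estimated per transform band. The snippet below sketches only that estimation step under the usual moment-matching rule (the function name is hypothetical, and the residue is synthetic):

```python
import numpy as np

def laplacian_alpha(residue):
    """Scale parameter of a zero-mean Laplacian fit to a residue band:
    f(x) = (α/2) e^{-α|x|}; since Var = 2/α², take α = sqrt(2 / Var)."""
    var = float(np.mean(np.square(residue, dtype=float)))
    return np.sqrt(2.0 / var)

# synthetic residue band with known scale b = 4, so the true α = 1/b = 0.25
rng = np.random.default_rng(0)
residue = rng.laplace(0.0, 4.0, size=100_000)
alpha = laplacian_alpha(residue)
```

In a refinement scheme like the one proposed, this estimate would be recomputed for each band and updated as successive bit-planes are decoded, sharpening the soft-input probabilities for the remaining planes.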

  3. Application of the thermal-hydraulic codes in VVER-440 steam generators modelling

    Energy Technology Data Exchange (ETDEWEB)

    Matejovic, P.; Vranca, L.; Vaclav, E. [Nuclear Power Plant Research Inst. VUJE (Slovakia)

    1995-12-31

    The performance of the CATHARE2 V1.3U and RELAP5/MOD3.0 codes applied to VVER-440 steam generator modelling under normal conditions and during a transient with secondary water lowering is described. A similar recirculation model was chosen for both codes. In the CATHARE calculation, no special measures were taken to artificially optimize the flow rate distribution coefficients for the junction between the SG riser and the steam dome. Contrary to the RELAP code, the CATHARE code is able to predict the secondary swell level reasonably well in nominal conditions. Both codes are able to properly model natural phase separation at the SG water level. 6 refs.

  4. Coupling the severe accident code SCDAP with the system thermal hydraulic code MARS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Jin; Chung, Bub Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2004-07-01

    MARS is a best-estimate system thermal hydraulics code with multi-dimensional modeling capability. One of the aims in MARS code development is to make it a multi-functional code system with the analysis capability to cover the entire accident spectrum. For this purpose, MARS code has been coupled with a number of other specialized codes such as CONTEMPT for containment analysis, and MASTER for 3-dimensional kinetics. And in this study, the SCDAP code has been coupled with MARS to endow the MARS code system with severe accident analysis capability. With the SCDAP, MARS code system now has acquired the capability to simulate such severe accident related phenomena as cladding oxidation, melting and slumping of fuel and reactor structures.

  5. Coupling the severe accident code SCDAP with the system thermal hydraulic code MARS

    International Nuclear Information System (INIS)

    Lee, Young Jin; Chung, Bub Dong

    2004-01-01

    MARS is a best-estimate system thermal hydraulics code with multi-dimensional modeling capability. One of the aims in MARS code development is to make it a multi-functional code system with the analysis capability to cover the entire accident spectrum. For this purpose, MARS code has been coupled with a number of other specialized codes such as CONTEMPT for containment analysis, and MASTER for 3-dimensional kinetics. And in this study, the SCDAP code has been coupled with MARS to endow the MARS code system with severe accident analysis capability. With the SCDAP, MARS code system now has acquired the capability to simulate such severe accident related phenomena as cladding oxidation, melting and slumping of fuel and reactor structures

  6. Landscape capability models as a tool to predict fine-scale forest bird occupancy and abundance

    Science.gov (United States)

    Loman, Zachary G.; DeLuca, William; Harrison, Daniel J.; Loftin, Cynthia S.; Rolek, Brian W.; Wood, Petra

    2018-01-01

    Context: Species-specific models of landscape capability (LC) can inform landscape conservation design. Landscape capability is “the ability of the landscape to provide the environment […] and the local resources […] needed for survival and reproduction […] in sufficient quantity, quality and accessibility to meet the life history requirements of individuals and local populations.” Landscape capability incorporates species’ life histories, ecologies, and distributions to model habitat for current and future landscapes and climates as a proactive strategy for conservation planning. Objectives: We tested the ability of a set of LC models to explain variation in point occupancy and abundance for seven bird species representative of spruce-fir, mixed conifer-hardwood, and riparian and wooded wetland macrohabitats. Methods: We compiled point count data sets used for biological inventory, species monitoring, and field studies across the northeastern United States to create an independent validation data set. Our validation explicitly accounted for underestimation in validation data using joint distance and time removal sampling. Results: Blackpoll warbler (Setophaga striata), wood thrush (Hylocichla mustelina), and Louisiana (Parkesia motacilla) and northern waterthrush (P. noveboracensis) models were validated as predicting variation in abundance, although this varied from not biologically meaningful (1%) to strongly meaningful (59%). We verified all seven species models [including ovenbird (Seiurus aurocapilla), blackburnian (Setophaga fusca) and cerulean warbler (Setophaga cerulea)], as all were positively related to occupancy data. Conclusions: LC models represent a useful tool for conservation planning owing to their predictive ability over a regional extent. As improved remote-sensed data become available, LC layers are updated, which will improve predictions.

  7. Linear-Time Non-Malleable Codes in the Bit-Wise Independent Tampering Model

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Döttling, Nico

    Non-malleable codes were introduced by Dziembowski et al. (ICS 2010) as coding schemes that protect a message against tampering attacks. Roughly speaking, a code is non-malleable if decoding an adversarially tampered encoding of a message m produces the original message m or a value m' (eventually...... non-malleable codes of Agrawal et al. (TCC 2015) and of Cheraghchi and Guruswami (TCC 2014) and improves the previous result in the bit-wise tampering model: it builds the first non-malleable codes with linear-time complexity and optimal rate (i.e. rate 1 - o(1))....

  8. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments.

    Science.gov (United States)

    Santos, José; Monteagudo, Angel

    2011-02-21

    As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to gauge the difficulty in finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel proposal of the use of evolutionary computing provides a new perspective in the open debate between the use of the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment and the model most used in the two approaches which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of the codon reassignments was used, the evolutionary algorithm had more difficulty to overcome the efficiency of the canonical genetic code. Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the fact that the best possible codes show the patterns of the
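The evaluation loop underlying both approaches can be sketched in miniature: score a codon table by the mean squared change of an amino-acid property over all single-point mutations, then search for better tables. The sketch below is a heavily simplified stand-in for the paper's method: the property values are toy numbers (not real polarity data), all 64 codons are treated as sense codons with no stop codons, and the search is a plain hill climb rather than the paper's genetic algorithm with codon-reassignment constraints.

```python
import itertools
import random

random.seed(1)
BASES = "UCAG"
CODONS = ["".join(c) for c in itertools.product(BASES, repeat=3)]
AA = list(range(20))                   # 20 amino acids, toy property = index
PROP = {a: float(a) for a in AA}

def random_code():
    """Random codon table in which every amino acid appears at least once."""
    table = AA + [random.choice(AA) for _ in range(len(CODONS) - len(AA))]
    random.shuffle(table)
    return dict(zip(CODONS, table))

def cost(code):
    """Mean squared property change over all single-point mutations."""
    total, count = 0.0, 0
    for codon, aa in code.items():
        for pos in range(3):
            for b in BASES:
                if b != codon[pos]:
                    mut = codon[:pos] + b + codon[pos + 1:]
                    total += (PROP[aa] - PROP[code[mut]]) ** 2
                    count += 1
    return total / count

def hill_climb(code, iters=1000):
    """Engineering-approach sketch: keep a swap of two codons' amino
    acids whenever it lowers the mutation cost."""
    best = cost(code)
    for _ in range(iters):
        a, b = random.sample(CODONS, 2)
        code[a], code[b] = code[b], code[a]
        c = cost(code)
        if c < best:
            best = c
        else:
            code[a], code[b] = code[b], code[a]   # revert the swap
    return code, best

code = random_code()
base = cost(code)
code, best = hill_climb(code)
```

The statistical approach would instead compare one fixed code's cost against the distribution of `cost(random_code())` over many draws; the hill climb shows how far below that distribution an optimized code can be pushed.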

  9. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    Directory of Open Access Journals (Sweden)

    Monteagudo Ángel

    2011-02-01

    Full Text Available Abstract Background As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization, or level of adaptation, of the canonical genetic code was measured taking into account the harmful consequences of point mutations that lead to the replacement of one amino acid by another. There are two basic approaches to measuring the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternatives, and the engineering approach, which compares the canonical code with the best possible alternative. Results Here we used a genetic algorithm to search for better-adapted hypothetical codes and, as a method to gauge the difficulty of finding such alternative codes, to situate the canonical code clearly in the fitness landscape. This novel use of evolutionary computing provides a new perspective in the open debate between the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum under both models, when the more realistic model of codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Conclusions Simulated evolution clearly reveals that the canonical genetic code is far from optimal. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the
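The engineering-approach search described above can be illustrated with a toy hill climb, a stand-in for the paper's genetic algorithm; the codon table layout, the hydropathy-style property values, and the cost function below are illustrative assumptions, not the authors' model:

```python
# Illustrative sketch (not the authors' code): hill-climbing search for
# genetic codes that better conserve an amino acid property under
# single-base mutations, in the spirit of the "engineering approach".
import itertools
import random

BASES = "UCAG"
CODONS = ["".join(c) for c in itertools.product(BASES, repeat=3)]

# Toy hydropathy-like values per amino acid ("*" = stop), assumed for
# illustration only.
HYDRO = {"F": 2.8, "L": 3.8, "I": 4.5, "M": 1.9, "V": 4.2, "S": -0.8,
         "P": -1.6, "T": -0.7, "A": 1.8, "Y": -1.3, "H": -3.2, "Q": -3.5,
         "N": -3.5, "K": -3.9, "D": -3.5, "E": -3.5, "C": 2.5, "W": -0.9,
         "R": -4.5, "G": -0.4, "*": 0.0}

def standard_like_code():
    """Assign amino acids to codons in contiguous blocks (a stand-in table)."""
    aas = list(HYDRO)  # 21 symbols including stop
    return {c: aas[i * len(aas) // len(CODONS)] for i, c in enumerate(CODONS)}

def cost(code):
    """Mean squared property change over all single-base mutations."""
    total, n = 0.0, 0
    for codon, aa in code.items():
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mut = codon[:pos] + b + codon[pos + 1:]
                total += (HYDRO[aa] - HYDRO[code[mut]]) ** 2
                n += 1
    return total / n

random.seed(1)
code = standard_like_code()
best = cost(code)
for _ in range(2000):                 # simple hill climb: swap two codons
    a, b = random.sample(CODONS, 2)
    code[a], code[b] = code[b], code[a]
    c = cost(code)
    if c < best:
        best = c                      # keep the improving swap
    else:
        code[a], code[b] = code[b], code[a]  # revert
print(round(best, 3))
```

A real genetic algorithm would maintain a population with crossover and mutation; the greedy swap above only shows how "better adapted" codes are scored and found.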

  10. Development and assessment of Multi-dimensional flow models in the thermal-hydraulic system analysis code MARS

    International Nuclear Information System (INIS)

    Chung, B. D.; Bae, S. W.; Jeong, J. J.; Lee, S. M.

    2005-04-01

    A new multi-dimensional component has been developed to allow for more flexible 3D capabilities in the system code MARS. This component can be applied in Cartesian and cylindrical coordinates. For the development of this model, the 3D convection and diffusion terms are implemented in the momentum and energy equations, and a simple Prandtl mixing-length model is applied for the turbulent viscosity. The developed multi-dimensional component was assessed against five conceptual problems with analytic solutions, and some separate-effect tests (SETs) were calculated and compared with experimental data. With this newly developed multi-dimensional flow module, the MARS code can realistically calculate the flow fields in pools such as those occurring in the core, steam generators, and IRWST.
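The Prandtl mixing-length closure mentioned above computes an eddy viscosity as nu_t = l_m^2 |dU/dy|; a minimal sketch (the velocity profile, length scale, and numbers are assumptions for illustration, not values taken from MARS):

```python
# Sketch of the Prandtl mixing-length closure for turbulent viscosity:
#   nu_t = l_m**2 * |dU/dy|
# All numerical values below are illustrative assumptions.
import numpy as np

def turbulent_viscosity(u, y, l_m):
    """Eddy viscosity from a mixing length l_m and velocity profile u(y)."""
    dudy = np.gradient(u, y)          # finite-difference velocity gradient
    return l_m ** 2 * np.abs(dudy)

y = np.linspace(0.0, 0.1, 51)         # wall-normal coordinate [m]
u = 1.0 * (y / 0.1) ** (1 / 7)        # 1/7th-power-law profile [m/s]
nu_t = turbulent_viscosity(u, y, l_m=0.01)
print(nu_t.max())
```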

  11. IMPACT OF CO-CREATION ON INNOVATION CAPABILITY AND FIRM PERFORMANCE: A STRUCTURAL EQUATION MODELING

    Directory of Open Access Journals (Sweden)

    FATEMEH HAMIDI

    Full Text Available ABSTRACT Traditional firms used to design products, evaluate marketing messages and control product distribution channels with no customer interface. With the advancements in interaction technologies, however, users can easily influence firms; the interaction between customers and firms is now at a peak compared with the past and is no longer controlled by firms. Customers play the two roles of value creator and consumer simultaneously. We examine the influence of co-creation on innovation capability and firm performance. We develop hypotheses and test them using survey data. The results suggest that the implementation of co-creation partially mediates the effect of process innovation capability. We discuss the implications of these findings for research and practice on the design and implementation of a unique value co-creation model.

  12. INTRA/Mod3.2. Manual and Code Description. Volume I - Physical Modelling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Jenny; Edlund, O.; Hermann, J.; Johansson, Lise-Lotte

    1999-01-01

    The INTRA Manual consists of two volumes. Volume I is a thorough description of the code INTRA, the physical modelling of INTRA, and the governing numerical methods; Volume II, the User's Manual, is an input description. This document, the Physical Modelling of INTRA, contains code characteristics, integration methods and applications.

  13. Transient Mathematical Modeling for Liquid Rocket Engine Systems: Methods, Capabilities, and Experience

    Science.gov (United States)

    Seymour, David C.; Martin, Michael A.; Nguyen, Huy H.; Greene, William D.

    2005-01-01

    The subject of mathematical modeling of the transient operation of liquid rocket engines is presented in overview form from the perspective of engineers working at the NASA Marshall Space Flight Center. The necessity of creating and utilizing accurate mathematical models as part of the liquid rocket engine development process has become well established and is likely to increase in importance in the future. The issues of design considerations for transient operation, development testing, and failure scenario simulation are discussed. An overview of the derivation of the basic governing equations is presented, along with a discussion of computational and numerical issues associated with implementing these equations in computer codes. Work in the field of generating usable fluid property tables is also presented, along with an overview of efforts to be undertaken in the future to improve the tools used for the mathematical modeling process.

  14. XSOR codes users manual

    International Nuclear Information System (INIS)

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named ''XSOR''. The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena, and their uncertainty, that are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms.

  15. Capabilities and performance of Elmer/Ice, a new-generation ice sheet model

    Directory of Open Access Journals (Sweden)

    O. Gagliardini

    2013-08-01

    Full Text Available The Fourth IPCC Assessment Report concluded that ice sheet flow models, in their current state, were unable to provide accurate forecasts of the increase in polar ice sheet discharge and the associated contribution to sea level rise. Since then, the glaciological community has undertaken a huge effort to develop and improve a new generation of ice flow models, and as a result a significant number of new ice sheet models have emerged. Among them is the parallel finite-element model Elmer/Ice, based on the open-source multi-physics code Elmer. It was one of the first full-Stokes models used to make projections for the evolution of the whole Greenland ice sheet over the coming two centuries. Originally developed to solve local ice flow problems of high mechanical and physical complexity, Elmer/Ice has today reached the maturity to solve larger-scale problems, earning the status of an ice sheet model. Here, we summarise almost 10 years of development performed by different groups. Elmer/Ice solves the full-Stokes equations, for isotropic as well as anisotropic ice rheology, resolves the grounding line dynamics as a contact problem, and contains various basal friction laws. Derived fields, like the age of the ice, the strain rate or the stress, can also be computed. Elmer/Ice includes two recently proposed inverse methods to infer poorly known parameters. Elmer is a highly parallelised code thanks to recent developments and the implementation of a block preconditioned solver for the Stokes system. In this paper, all these components are presented in detail, as well as the numerical performance of the Stokes solver and developments planned for the future.

  16. Improving system modeling accuracy with Monte Carlo codes

    International Nuclear Information System (INIS)

    Johnson, A.S.

    1996-01-01

    The use of computer codes based on Monte Carlo methods to perform criticality calculations has become commonplace. Although results frequently published in the literature report calculated k-eff values to four decimal places, people who use the codes in their everyday work say that they only believe the first two decimal places of any result. The lack of confidence in the computed k-eff values may be due to the tendency of the reported standard deviation to underestimate errors associated with the Monte Carlo process. The standard deviation as reported by the codes is the standard deviation of the mean of the k-eff values for individual generations in the computer simulation, not the standard deviation of the computed k-eff value compared with the physical system. A more subtle problem with the standard deviation of the mean as reported by the codes is that the k-eff values from the separate generations are not statistically independent, since the k-eff of a given generation is a function of the k-eff of the previous generation, which is ultimately based on the starting source. To produce a standard deviation that is more representative of the physical system, statistically independent values of k-eff are needed.
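The inter-generation correlation described above, and why it makes the naive standard error of the mean too small, can be demonstrated with a toy batch-means experiment; the AR(1) sequence below is an illustrative stand-in for correlated per-generation k-eff values, not output of any criticality code:

```python
# Illustrative sketch (not a criticality code): autocorrelated generation
# values make the naive standard error of the mean an underestimate;
# averaging over batches gives nearly independent samples and a more
# honest uncertainty estimate.
import random
import statistics

random.seed(0)
rho, mu = 0.8, 1.0                       # AR(1) correlation, true mean
k = [mu]
for _ in range(9999):                    # correlated "generations"
    k.append(mu + rho * (k[-1] - mu) + random.gauss(0, 0.003))

# Naive estimate: treats every generation as independent.
naive_se = statistics.stdev(k) / len(k) ** 0.5

# Batch means: averages over blocks long enough to be nearly independent.
batch = 100
means = [statistics.fmean(k[i:i + batch]) for i in range(0, len(k), batch)]
batch_se = statistics.stdev(means) / len(means) ** 0.5

print(naive_se < batch_se)               # the naive estimate is too small
```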

  17. The new MCNP6 depletion capability

    International Nuclear Information System (INIS)

    Fensin, M. L.; James, M. R.; Hendricks, J. S.; Goorley, J. T.

    2012-01-01

    The first MCNP based in-line Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology. (authors)

  18. The New MCNP6 Depletion Capability

    International Nuclear Information System (INIS)

    Fensin, Michael Lorne; James, Michael R.; Hendricks, John S.; Goorley, John T.

    2012-01-01

    The first MCNP based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology.

  19. Transitioning Enhanced Land Surface Initialization and Model Verification Capabilities to the Kenya Meteorological Department (KMD)

    Science.gov (United States)

    Case, Jonathan L.; Mungai, John; Sakwa, Vincent; Zavodsky, Bradley T.; Srikishen, Jayanthi; Limaye, Ashutosh; Blankenship, Clay B.

    2016-01-01

    Flooding, severe weather, and drought are key forecasting challenges for the Kenya Meteorological Department (KMD), based in Nairobi, Kenya. Atmospheric processes leading to convection, excessive precipitation and/or prolonged drought can be strongly influenced by land cover, vegetation, and soil moisture content, especially during anomalous conditions and dry/wet seasonal transitions. It is thus important to accurately represent land surface state variables (green vegetation fraction, soil moisture, and soil temperature) in Numerical Weather Prediction (NWP) models. The NASA SERVIR and the Short-term Prediction Research and Transition (SPoRT) programs in Huntsville, AL have established a working partnership with KMD to enhance its regional modeling capabilities. SPoRT and SERVIR are providing experimental land surface initialization datasets and model verification capabilities for capacity building at KMD. To support its forecasting operations, KMD is running experimental configurations of the Weather Research and Forecasting (WRF; Skamarock et al. 2008) model on a 12-km/4-km nested regional domain over eastern Africa, incorporating the land surface datasets provided by NASA SPoRT and SERVIR. SPoRT, SERVIR, and KMD participated in two training sessions in March 2014 and June 2015 to foster the collaboration and use of unique land surface datasets and model verification capabilities. Enhanced regional modeling capabilities have the potential to improve guidance in support of daily operations and high-impact weather and climate outlooks over Eastern Africa. For enhanced land-surface initialization, the NASA Land Information System (LIS) is run over Eastern Africa at 3-km resolution, providing real-time land surface initialization data in place of interpolated global model soil moisture and temperature data available at coarser resolutions. Additionally, real-time green vegetation fraction (GVF) composites from the Suomi-NPP VIIRS instrument are being incorporated

  20. Model document for code officials on solar heating and cooling of buildings. First draft

    Energy Technology Data Exchange (ETDEWEB)

    1979-03-01

    The primary purpose of this document is to promote the use and further development of solar energy through a systematic categorizing of all the attributes in a solar energy system that may impact on those requirements in the nationally recognized model codes relating to the safeguard of life or limb, health, property, and public welfare. Administrative provisions have been included to integrate this document with presently adopted codes, so as to allow incorporation into traditional building, plumbing, mechanical, and electrical codes. In those areas where model codes are not used it is recommended that the requirements, references, and standards herein be adopted to regulate all solar energy systems. (MOW)

  1. Basic data, computer codes and integral experiments: The tools for modelling in nuclear technology

    International Nuclear Information System (INIS)

    Sartori, E.

    2001-01-01

    When studying applications in nuclear technology we need to understand and be able to predict the behavior of systems manufactured by human enterprise. First, the underlying basic physical and chemical phenomena need to be understood. We then have to predict the results of the interplay of a large number of different basic events, i.e. the macroscopic effects. In order to build confidence in our modelling capability, we then need to compare these results against measurements carried out on such systems. The different levels of modelling require the solution of different types of equations using different types of parameters. The tools required for carrying out a complete validated analysis are: the basic nuclear or chemical data; the computer codes; and the integral experiments. This article describes the role each component plays in a computational scheme designed for modelling purposes. It also describes which tools have been developed and are internationally available, and the roles that the OECD/NEA Data Bank, the Radiation Shielding Information Computational Center (RSICC), and the IAEA Nuclear Data Section play in making these elements available to the community of scientists and engineers. (author)

  2. Uncertainty Quantification and Learning in Geophysical Modeling: How Information is Coded into Dynamical Models

    Science.gov (United States)

    Gupta, H. V.

    2014-12-01

    There is a clear need for comprehensive quantification of simulation uncertainty when using geophysical models to support and inform decision-making. Further, it is clear that the nature of such uncertainty depends on the quality of information in (a) the forcing data (driver information), (b) the model code (prior information), and (c) the specific values of inferred model components that localize the model to the system of interest (inferred information). Of course, the relative quality of each varies with geophysical discipline and specific application. In this talk I will discuss a structured approach to characterizing how 'Information', and hence 'Uncertainty', is coded into the structures of physics-based geophysical models. I propose that a better understanding of what is meant by "Information", and how it is embodied in models and data, can offer a structured (less ad-hoc), robust and insightful basis for diagnostic learning through the model-data juxtaposition. In some fields, a natural consequence may be to emphasize the a priori role of System Architecture (Process Modeling) over that of the selection of System Parameterization, thereby emphasizing the more creative aspect of scientific investigation - the use of models for Discovery and Learning.

  3. Using a Simple Binomial Model to Assess Improvement in Predictive Capability: Sequential Bayesian Inference, Hypothesis Testing, and Power Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sigeti, David E. [Los Alamos National Laboratory; Pelak, Robert A. [Los Alamos National Laboratory

    2012-09-11

    We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, θ, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for θ may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of θ and, in particular, how there will always be a region around θ = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used, as a kind of 'plan B metric', in the case that the analysis shows that θ is close to 1/2, and argue that such a plan B should generally be part of hypothesis testing. All the analysis presented in the paper is done with a
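The binomial set-up above has a standard conjugate treatment: with a uniform Beta(1,1) prior, s "new code wins" outcomes in n experiments give a Beta(1+s, 1+n-s) posterior for θ, and the confidence that the new code is better is P(θ > 1/2). A minimal sketch (the counts are hypothetical; the tail probability is computed by simple midpoint-rule quadrature rather than a special-function library):

```python
# Sketch of the binomial Bayesian analysis: posterior tail probability
# P(theta > 1/2) under a uniform prior. Counts s and n are hypothetical.
import math

def posterior_prob_gt_half(s, n, grid=100000):
    """P(theta > 1/2) for a Beta(1+s, 1+n-s) posterior, by midpoint rule."""
    a, b = 1 + s, 1 + (n - s)
    ln_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    total = 0.0
    for i in range(grid):
        x = 0.5 + 0.5 * (i + 0.5) / grid      # midpoints on (1/2, 1)
        total += math.exp((a - 1) * math.log(x)
                          + (b - 1) * math.log(1 - x) - ln_beta)
    return total * 0.5 / grid                  # times the interval width

# Hypothetical data: the new code predicted better in 15 of 20 experiments.
print(round(posterior_prob_gt_half(15, 20), 3))
```

With s = n/2 the posterior is symmetric about 1/2 and the tail probability is exactly 0.5, which is the "region around θ = 1/2" where no improvement can be claimed.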

  4. Status Report on Modelling and Simulation Capabilities for Nuclear-Renewable Hybrid Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, C. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Epiney, A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Talbot, P. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kim, J. S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bragg-Sitton, S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Yigitoglu, A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Greenwood, S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cetiner, S. M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ganda, F. [Argonne National Lab. (ANL), Argonne, IL (United States); Maronati, G. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-09-01

    This report summarizes the current status of the modeling and simulation capabilities developed for the economic assessment of Nuclear-Renewable Hybrid Energy Systems (N-R HES). The increasing penetration of variable renewables is altering the profile of the net demand, with which the other generators on the grid have to cope. N-R HES analyses are being conducted to determine the potential feasibility of mitigating the resultant volatility in the net electricity demand by adding industrial processes that utilize either thermal or electrical energy as stabilizing loads. This coordination of energy generators and users is proposed to mitigate the increase in electricity cost and cost volatility through the production of a saleable commodity. Overall, the financial performance of a system that comprises peaking units (i.e. gas turbines), baseload supply (i.e. a nuclear power plant), and an industrial process (e.g. a hydrogen plant) should be optimized under the constraint of satisfying an electricity demand profile with a certain level of variable renewable (wind) penetration. The optimization should entail both the sizing of the components/subsystems that comprise the system and the optimal dispatch strategy (output at any given moment in time from the different subsystems). Some of the capabilities described here have been reported separately in [1, 2, 3]. The purpose of this report is to provide an update on the improvement and extension of those capabilities and to illustrate their integrated application in the economic assessment of N-R HES.

  5. Modeling the reactor core of MNSR to simulate its dynamic behavior using the code PARET

    International Nuclear Information System (INIS)

    Hainoun, A.; Alhabet, F.

    2004-02-01

    Using the computer code PARET, the core of the MNSR reactor was modelled, and the neutronic and thermal-hydraulic behaviour of the reactor core was simulated for the steady state and for selected transients involving step changes of reactivity, including control rod withdrawal starting from steady state at various low power levels. For this purpose a PARET input model for the core of the MNSR reactor was developed, enabling the simulation of the neutron kinetics and thermal hydraulics of the reactor core, including reactivity feedback effects. The neutron kinetic model is based on point kinetics with 15 groups of delayed neutrons, including the photoneutrons of the beryllium reflector. The effect of the photoneutrons on the dynamic behaviour was analysed through two additional calculations: in the first, the photoneutron yield was neglected completely, and in the second, its share was added to the sixth group of delayed neutrons. In the thermal-hydraulic model, the fuel elements with their cooling channels were distributed into four groups with different radial power factors. The pressure loss factors for friction, flow direction change, expansion, and contraction were estimated using suitable approaches. The post-calculations of the relative neutron flux change and core average temperature were found to be consistent with the experimental measurements. Furthermore, the simulation indicated the influence of the photoneutrons of the beryllium reflector on the neutron flux behaviour. To establish the reliability of the results, a sensitivity analysis was carried out to consider the uncertainty in some important parameters, such as the temperature feedback coefficient and the flow velocity. The application of PARET to the simulation of void formation in the subcooled boiling regime was also tested. The calculation indicates the capability of PARET to model this phenomenon; however, a large discrepancy between the calculated and measured axial void distributions was observed
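Point kinetics with delayed-neutron groups, as used above, reduces to a small ODE system for the neutron density and the precursor concentrations; a minimal explicit-Euler sketch (the six-group constants, generation time, and reactivity step are illustrative assumptions, not MNSR data, and the photoneutron groups are omitted):

```python
# Minimal point-kinetics sketch (not the PARET model): neutron density n(t)
# with G delayed-neutron precursor groups under a small reactivity step.
# All numerical values are illustrative assumptions.

G = 6
beta_i = [0.00021, 0.00142, 0.00127, 0.00257, 0.00075, 0.00027]
lam_i = [0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01]   # decay constants [1/s]
beta = sum(beta_i)
LAMBDA = 1e-4                                        # generation time [s]
rho = 0.1 * beta                                     # +0.1 dollar step

n = 1.0
# Start from equilibrium precursor concentrations: dC_i/dt = 0.
C = [b * n / (LAMBDA, l)[1] if False else b * n / (LAMBDA * l)
     for b, l in zip(beta_i, lam_i)]

dt = 1e-5
for _ in range(int(1.0 / dt)):                       # integrate 1 s
    dn = ((rho - beta) / LAMBDA) * n + sum(l * c for l, c in zip(lam_i, C))
    dC = [b * n / LAMBDA - l * c for b, l, c in zip(beta_i, lam_i, C)]
    n += dt * dn
    C = [c + dt * d for c, d in zip(C, dC)]
print(n > 1.0)                                       # prompt jump, slow rise
```

For a positive step well below one dollar, the solution shows the classic prompt jump to roughly beta/(beta - rho) followed by a slow rise on the stable reactor period.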

  6. The modeling of core melting and in-vessel corium relocation in the APRIL code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S.W.; Podowski, M.Z.; Lahey, R.T. [Rensselaer Polytechnic Institute, Troy, NY (United States)] [and others]

    1995-09-01

    This paper is concerned with the modeling of severe accident phenomena in boiling water reactors (BWR). New models of core melting and in-vessel corium debris relocation are presented, developed for implementation in the APRIL computer code. The results of model testing and validations are given, including comparisons against available experimental data and parametric/sensitivity studies. Also, the application of these models, as parts of the APRIL code, is presented to simulate accident progression in a typical BWR reactor.

  7. Evaluation of the Predictive Capabilities of a Phenomenological Combustion Model for Natural Gas SI Engine

    Directory of Open Access Journals (Sweden)

    Toman Rastislav

    2017-12-01

    Full Text Available The current study evaluates the predictive capabilities of a new phenomenological combustion model, available as part of the GT-Suite software package. It comprises two main sub-models: a 0D model of in-cylinder flow and turbulence, and a turbulent SI combustion model. The 0D in-cylinder flow model (EngCylFlow) uses a combined K-k-ε kinetic energy cascade approach to predict the evolution of the in-cylinder charge motion and turbulence, where K and k are the mean and turbulent kinetic energies and ε is the turbulent dissipation rate. The subsequent turbulent combustion model (EngCylCombSITurb) gives the in-cylinder burn rate, based on the calculation of flame speeds and flame kernel development. This phenomenological approach significantly reduces the overall computational effort compared to 3D-CFD, thus allowing the computation of a full engine operating map and vehicle driving cycles. The model was calibrated using a full-map measurement from a turbocharged natural gas SI engine with swirl intake ports. Sensitivity studies on different calibration methods and laminar flame speed sub-models were conducted. The validation process for both the calibration and sensitivity studies compared in-cylinder pressure traces and burn rates for several engine operating points, achieving good overall results.
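The K-k-ε cascade idea above can be caricatured as a 0D system in which mean kinetic energy K is transferred to turbulent kinetic energy k, which in turn dissipates at rate ε. The functional forms and constants below are assumptions for illustration only, not the GT-Suite equations:

```python
# Hedged 0D sketch of a K-k-eps energy cascade (forms and constants
# assumed for illustration; not the EngCylFlow model): mean kinetic
# energy K decays into turbulent kinetic energy k, which dissipates.
import math

K, k = 50.0, 5.0          # mean / turbulent kinetic energy [m^2/s^2]
L = 0.05                  # characteristic length scale [m]
dt, T = 1e-4, 0.05        # time step and duration [s]
for _ in range(int(T / dt)):
    transfer = 0.3 * K * math.sqrt(k) / L    # K -> k cascade (assumed form)
    eps = 0.09 * k ** 1.5 / L                # dissipation (assumed form)
    K += dt * (-transfer)
    k += dt * (transfer - eps)
print(K > 0 and k > 0)
```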

  8. Modelling of Cold Water Hammer with WAHA code

    International Nuclear Information System (INIS)

    Gale, J.; Tiselj, I.

    2003-01-01

    The Cold Water Hammer experiment described in the present paper is a simple facility where overpressure accelerates a column of liquid water into the steam bubble at the closed vertical end of the pipe. Severe water hammer with high pressure peak occurs when the vapor bubble condenses and the liquid column hits the closed end of the pipe. Experimental data of Forschungszentrum Rossendorf are being used to test the newly developed computer code WAHA and the computer code RELAP5. Results show that a small amount of noncondensable air in the steam bubble significantly affects the magnitude of the calculated pressure peak, while the wall friction and condensation rate only slightly affect the simulated phenomena. (author)

  9. A high burnup model developed for the DIONISIO code

    Science.gov (United States)

    Soba, A.; Denis, A.; Romero, L.; Villarino, E.; Sardella, F.

    2013-02-01

    A group of subroutines, designed to extend the application range of the fuel performance code DIONISIO to high burnup, has recently been included in the code. The new calculation tools, which are tuned for UO2 fuels under LWR conditions, predict the radial distribution of power density, burnup, and concentration of diverse nuclides within the pellet. The balance equations of all the isotopes involved in the fission process are solved in a simplified manner, and the one-group effective cross sections of all of them are obtained as functions of the radial position in the pellet, burnup, and enrichment in 235U. In this work the subroutines are described and the results of the simulations performed with DIONISIO are presented. The good agreement with the data provided in the FUMEX II/III NEA data bank is readily apparent.

  10. Initiative-taking, Improvisational Capability and Business Model Innovation in Emerging Market

    DEFF Research Database (Denmark)

    Cao, Yangfeng

    Business model innovation plays a very important role in developing competitive advantage when multinational small and medium-sized enterprises (SMEs) from a developed country enter emerging markets, because of the large contextual distances or gaps between the emerging and developed economies. Many prior researches have shown that foreign subsidiaries play an important role in shaping the overall strategy of the parent company. However, little is known about how a subsidiary specifically facilitates business model innovation (BMI) in emerging markets. Adopting the method of comparative ... innovation in emerging markets. We find that high initiative-taking and strong improvisational capability can accelerate business model innovation. Our research contributes to the literatures on international and strategic entrepreneurship.

  12. Entry into new markets: the development of the business model and dynamic capabilities

    Directory of Open Access Journals (Sweden)

    Victor Wolowski Kenski

    2017-12-01

    Full Text Available This work shows the paths through which companies enter new markets or bring new propositions to established ones. It presents the market analysis process, the strategic decisions that determine the company's position in the market, and the changes in configuration required for this new action. It also studies the process of selecting the business model and the conditions for its definition, adoption and subsequent development of the resources and capabilities required to conquer this new market. The conditions necessary to remain in and maintain the new market position are also presented. These concepts are illustrated through a case study of a business group that takes part in different franchises.

  13. Development of NSSS Thermal-Hydraulic Model for KNPEC-2 Simulator Using the Best-Estimate Code RETRAN-3D

    International Nuclear Information System (INIS)

    Kim, Kyung-Doo; Jeong, Jae-Jun; Lee, Seung-Wook; Lee, Myeong-Soo; Suh, Jae-Seung; Hong, Jin-Hyuk; Lee, Yong-Kwan

    2004-01-01

    The Nuclear Steam Supply System (NSSS) thermal-hydraulic model adopted in the Korea Nuclear Plant Education Center (KNPEC)-2 simulator was provided in the early 1980s. The reference plant for KNPEC-2 is Yong Gwang Nuclear Unit 1, a Westinghouse-type 3-loop, 950 MW(electric) pressurized water reactor. Because of the limited computational capability available at that time, the model uses overly simplified physical models and assumptions for real-time simulation of NSSS thermal-hydraulic transients. This may entail inaccurate results and thus the possibility of so-called 'negative training', especially for complicated two-phase flows in the reactor coolant system. To resolve the problem, we developed a realistic NSSS thermal-hydraulic program (named the ARTS code) based on the best-estimate code RETRAN-3D. A systematic assessment of ARTS was conducted by both a stand-alone test and an integrated test in the simulator environment. The non-integrated stand-alone test (NIST) results were reasonable in terms of accuracy, real-time simulation capability, and robustness. After successful completion of the NIST, ARTS was integrated with a 3-D reactor kinetics model and other system models. The site acceptance test (SAT) was then completed successfully and confirmed compliance with the ANSI/ANS-3.5-1998 simulator software performance criteria. This paper presents our efforts in the ARTS development and some test results of the NIST and SAT.

  14. Accurate modeling of a DOI capable small animal PET scanner using GATE

    International Nuclear Information System (INIS)

    Zagni, F.; D'Ambrosio, D.; Spinelli, AE.; Cicoria, G.; Fanti, S.; Marengo, M.

    2013-01-01

    In this work we developed a Monte Carlo (MC) model of the Sedecal Argus pre-clinical PET scanner, using GATE (Geant4 Application for Tomographic Emission). This is a dual-ring scanner which features DOI compensation by means of two layers of detector crystals (LYSO and GSO). The geometry of detectors and sources, pulse readout and selection of coincidence events were modeled with GATE, while a separate code was developed to emulate the processing of digitized data (for example, customized time windows and data flow saturation), to perform the final binning of the lines of response, and to reproduce the data output format of the scanner's acquisition software. Validation of the model was performed by modeling several phantoms used in experimental measurements, in order to compare the results of the simulations. Spatial resolution, sensitivity, scatter fraction, count rates and NECR were tested. Moreover, the NEMA NU-4 phantom was modeled in order to check the image quality yielded by the model. Noise, contrast of cold and hot regions and recovery coefficient were calculated and compared using images of the NEMA phantom acquired with our scanner. The energy spectrum of coincidence events due to the small amount of 176Lu in the LYSO crystals, which was suitably included in our model, was also compared with experimental measurements. Spatial resolution, sensitivity and scatter fraction showed an agreement within 7%. Comparison of the count-rate curves was satisfactory, with values agreeing within the uncertainties over the range of activities practically used in research scans. Analysis of the NEMA phantom images also showed a good agreement between simulated and acquired data, within 9% for all the tested parameters. This work shows that basic MC modeling of this kind of system is possible using GATE as a base platform; extension through suitably written customized code allows for an adequate level of accuracy in the results, as demonstrated by careful validation against experimental measurements.

  15. Qualification and application of nuclear reactor accident analysis code with the capability of internal assessment of uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Borges, Ronaldo Celem

    2001-10-15

    This thesis presents an independent qualification of the CIAU code ('Code with the capability of Internal Assessment of Uncertainty'), which performs internal uncertainty evaluation with a thermal-hydraulic system code on a realistic basis. This is done by combining the uncertainty methodology UMAE ('Uncertainty Methodology based on Accuracy Extrapolation') with the RELAP5/Mod3.2 code, which allows associating uncertainty band estimates with the results obtained by realistic code calculations, meeting the licensing requirements of safety analysis. The independent qualification is supported by RELAP5/Mod3.2 simulations of accident-condition tests in the LOBI experimental facility and of an event that occurred in the Angra 1 nuclear power plant, by comparison with measured results, and by establishing uncertainty bands on the calculated time trends of safety parameters. These bands have indeed enveloped the measured trends. The results of this independent qualification of CIAU demonstrate the adequate application of a systematic realistic code procedure to analyse accidents with uncertainties incorporated in the results, although there is an evident need to extend the uncertainty database. Use of the code with this internal assessment of uncertainty has been verified to be feasible in the design and licensing stages of an NPP. (author)

  16. A computer code for calculations in the algebraic collective model of the atomic nucleus

    OpenAIRE

    Welsh, T. A.; Rowe, D. J.

    2014-01-01

    A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) x SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed; this range includes essentially all Hamiltonians that are rational functions ...

  17. Nodal kinetics model upgrade in the Penn State coupled TRAC/NEM codes

    International Nuclear Information System (INIS)

    Beam, Tara M.; Ivanov, Kostadin N.; Baratta, Anthony J.; Finnemann, Herbert

    1999-01-01

    The Pennsylvania State University currently maintains, and performs development and verification work for, its own versions of the coupled three-dimensional kinetics/thermal-hydraulics codes TRAC-PF1/NEM and TRAC-BF1/NEM. The subject of this paper is the nodal model enhancements in the above-mentioned codes. Because numerous validation studies have been performed on almost every aspect of these codes, this upgrade was done without a major code rewrite. The upgrade consists of four steps. The first two steps are designed to improve the accuracy of the kinetics model, which is based on the nodal expansion method. The polynomial expansion solution of the 1D transverse-integrated diffusion equation is replaced with a solution that uses a semi-analytic expansion. Further, the standard parabolic polynomial representation of the transverse leakage in the above 1D equations is replaced with an improved approximation. The last two steps of the upgrade address code efficiency by improving the solution of the time-dependent NEM equations and implementing a multi-grid solver. These four improvements were implemented into the standalone NEM kinetics code, whose verification was accomplished based on the original verification studies. The results show that the new methods improve both the accuracy and the efficiency of the code. Verification of the upgraded NEM model in the coupled TRAC-PF1/NEM and TRAC-BF1/NEM codes is underway.

  18. Evaluation of the analysis models in the ASTRA nuclear design code system

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Nam Jin; Park, Chang Jea; Kim, Do Sam; Lee, Kyeong Taek; Kim, Jong Woon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    2000-11-15

    In the field of nuclear reactor design, the main practice has been the application of improved design code systems. In that process, much knowledge and experience were accumulated in processing input data, nuclear fuel reload design, and the production and analysis of design data. However, less effort was devoted to analysing the methodology and to developing or improving those code systems. Recently, KEPCO Nuclear Fuel Company (KNFC) developed the ASTRA (Advanced Static and Transient Reactor Analyzer) code system for nuclear reactor design and analysis. In this code system, two-group constants are generated by the CASMO-3 code system. The objective of this research is to analyse the analysis models used in the ASTRA/CASMO-3 code system. This evaluation requires an in-depth comprehension of the models, which is as important as the development of the code system itself. Currently, most of the code systems used in domestic nuclear power plants are imported, making it very difficult to maintain them and to adapt to changing circumstances. The evaluation of the analysis models in the ASTRA nuclear reactor design code system is therefore very important.

  19. Development of a three-dimensional computer code for reconstructing power distributions by means of side reflector instrumentation and determination of the capabilities and limitations of this method

    International Nuclear Information System (INIS)

    Knob, P.J.

    1982-07-01

    This work is concerned with the detection of flux disturbances in pebble bed high temperature reactors by means of flux measurements in the side reflector. Included among the disturbances studied are xenon oscillations, rod group insertions, and individual rod insertions. Using the three-dimensional diffusion code CITATION, core calculations for both a very small reactor (KAHTER) and a large reactor (PNP-3000) were carried out to determine the neutron fluxes at the detector positions. These flux values were then used in flux mapping codes for reconstructing the flux distribution in the core. As an extension of the already existing two-dimensional MOFA code, which maps azimuthal disturbances, a new three-dimensional flux mapping code ZELT was developed for handling axial disturbances as well. It was found that both flux mapping programs give satisfactory results for small and large pebble bed reactors alike. (orig.) [de
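    The flux-mapping idea, reconstructing an in-core disturbance from a handful of side-reflector detector readings, can be illustrated with a least-squares fit of a truncated azimuthal Fourier series. The detector layout and disturbance below are synthetic, and this is a sketch of the general approach, not the actual MOFA or ZELT algorithm:

```python
import numpy as np

# Assumed layout: 8 equally spaced detectors in the side reflector, each
# reporting the local flux relative to the unperturbed state.
theta_det = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)

def perturbed_flux(t):
    return 1.0 + 0.15 * np.cos(t - 0.7)   # synthetic m = 1 flux tilt

readings = perturbed_flux(theta_det)

def design(theta, mmax=2):
    """Design matrix for a truncated azimuthal Fourier series, m = 0..mmax."""
    cols = [np.ones_like(theta)]
    for m in range(1, mmax + 1):
        cols += [np.cos(m * theta), np.sin(m * theta)]
    return np.column_stack(cols)

# Least-squares fit of the Fourier coefficients to the detector readings
coef, *_ = np.linalg.lstsq(design(theta_det), readings, rcond=None)

# Reconstruct the azimuthal flux shape at arbitrary angles
theta = np.linspace(0.0, 2.0 * np.pi, 361)
phi = design(theta) @ coef
```

    Because the synthetic m = 1 disturbance lies in the span of the fitted harmonics, the reconstruction here is exact; with real detectors the residual of the fit indicates how well the assumed harmonic content describes the disturbance.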

  20. Validation of the containment code Sirius: interpretation of an explosion experiment on a scale model

    International Nuclear Information System (INIS)

    Blanchet, Y.; Obry, P.; Louvet, J.; Deshayes, M.; Phalip, C.

    1979-01-01

    The explicit 2-D axisymmetric Lagrangian code SIRIUS, developed at the CEA/DRNR, Cadarache, deals with transient compressive flows in deformable primary tanks with more or less complex internal component geometries. This code has been subjected to a two-year intensive validation program on scale model experiments and a number of improvements have been incorporated. This paper presents a recent calculation of one of these experiments using the SIRIUS code, and the comparison with experimental results shows the encouraging possibilities of this Lagrangian code

  1. Modeling Proton- and Light Ion-Induced Reactions at Low Energies in the MARS15 Code

    Energy Technology Data Exchange (ETDEWEB)

    Rakhno, I. L. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Mokhov, N. V. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Gudima, K. K. [National Academy of Sciences, Chisinau (Moldova)

    2015-04-25

    An implementation of both the ALICE code and the TENDL evaluated nuclear data library for describing nuclear reactions induced by low-energy projectiles in the Monte Carlo code MARS15 is presented. Comparisons between modeling results and experimental data on reaction cross sections and secondary particle distributions are shown.

  2. A Test of Two Alternative Cognitive Processing Models: Learning Styles and Dual Coding

    Science.gov (United States)

    Cuevas, Joshua; Dawson, Bryan L.

    2018-01-01

    This study tested two cognitive models, learning styles and dual coding, which make contradictory predictions about how learners process and retain visual and auditory information. Learning styles-based instructional practices are common in educational environments despite a questionable research base, while the use of dual coding is less…

  3. Coupling of 3D neutronics models with the system code ATHLET

    International Nuclear Information System (INIS)

    Langenbuch, S.; Velkov, K.

    1999-01-01

    The system code ATHLET for plant transient and accident analysis has been coupled with 3D neutronics models, such as QUABOX/CUBBOX, for the realistic evaluation of some specific safety problems under discussion. The considerations behind the coupling approach and its realization are discussed. The specific features of the established coupled code system are explained, and experience from first applications is presented. (author)
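    The essence of such a coupling is an exchange of power and thermal-hydraulic feedback between the two codes at each time step. The toy fixed-point iteration below stands in for that exchange, with a point model on each side and invented feedback coefficients; the real ATHLET-QUABOX/CUBBOX coupling exchanges full 3-D fields:

```python
# Toy operator-split coupling loop: the "neutronics" side returns power as
# a function of fuel temperature (Doppler feedback), the "thermal-hydraulics"
# side returns fuel temperature as a function of power. All coefficients are
# invented for illustration.

def neutronics_power(fuel_temp, p0=1.0, alpha=-2.0e-5, t_ref=600.0):
    """Normalized power responding to a linear Doppler feedback (assumed)."""
    return p0 * (1.0 + alpha * (fuel_temp - t_ref))

def thermal_hydraulics(power, coolant_temp=300.0, r_th=300.0):
    """Lumped fuel temperature for a given power (assumed thermal resistance)."""
    return coolant_temp + r_th * power

power, fuel_temp = 1.0, 650.0        # start away from the steady state
for step in range(20):               # fixed-point iteration per time step
    power = neutronics_power(fuel_temp)
    fuel_temp = thermal_hydraulics(power)
print(power, fuel_temp)              # converges to the coupled steady state
```

    With these coefficients the iteration is strongly contracting, so a few sweeps suffice; production couplings use the same feedback loop per time step but with spatially resolved fields and convergence checks.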

  4. A Construction of Multisender Authentication Codes with Sequential Model from Symplectic Geometry over Finite Fields

    Directory of Open Access Journals (Sweden)

    Shangdi Chen

    2014-01-01

    Full Text Available Multisender authentication codes allow a group of senders to construct an authenticated message for a receiver such that the receiver can verify the authenticity of the received message. In this paper, we construct multisender authentication codes with a sequential model from symplectic geometry over finite fields; the parameters and the maximum probabilities of deception are also calculated.

  5. Implementation of the critical points model in a SFM-FDTD code working in oblique incidence

    Energy Technology Data Exchange (ETDEWEB)

    Hamidi, M; Belkhir, A; Lamrous, O [Laboratoire de Physique et Chimie Quantique, Universite Mouloud Mammeri, Tizi-Ouzou (Algeria); Baida, F I, E-mail: omarlamrous@mail.ummto.dz [Departement d' Optique P.M. Duffieux, Institut FEMTO-ST UMR 6174 CNRS Universite de Franche-Comte, 25030 Besancon Cedex (France)

    2011-06-22

    We describe the implementation of the critical points model in a finite-difference time-domain code working in oblique incidence and dealing with dispersive media through the split field method. Some tests are presented to validate our code, in addition to an application devoted to the plasmon resonance of a gold nanoparticle grating.
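    The critical points model referred to here represents the metal permittivity as a Drude term plus one or more "critical point" resonances. A sketch of evaluating such a permittivity is given below; the functional form is the commonly used Drude plus critical-points expression, but the parameter values are illustrative assumptions, not a fitted data set for gold:

```python
import numpy as np

def eps_drude_cp(omega, eps_inf, omega_d, gamma_d, cps):
    """Drude + critical-points permittivity (e^{-i*omega*t} convention).

    omega : angular frequency in rad/s
    cps   : list of (A, phi, Omega, Gamma) tuples, one per critical point
    """
    # Drude free-electron contribution
    eps = eps_inf - omega_d**2 / (omega**2 + 1j * gamma_d * omega)
    # Critical-point (interband) contributions
    for A, phi, Omega, Gamma in cps:
        eps += A * Omega * (np.exp(1j * phi) / (Omega - omega - 1j * Gamma)
                            + np.exp(-1j * phi) / (Omega + omega + 1j * Gamma))
    return eps

# Example: evaluate at an optical angular frequency with the Drude term only
# (parameter values are assumed, for illustration)
omega = 2.4e15
eps = eps_drude_cp(omega, eps_inf=5.0, omega_d=1.3e16, gamma_d=1.1e14, cps=[])
print(eps)  # metallic response: negative real part, positive imaginary part
```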

  6. FISIC - a full-wave code to model ion cyclotron resonance heating of tokamak plasmas

    International Nuclear Information System (INIS)

    Kruecken, T.

    1988-08-01

    We present a user manual for the FISIC code which solves the integrodifferential wave equation in the finite Larmor radius approximation in fully toroidal geometry to simulate ICRF heating experiments. The code models the electromagnetic wave field as well as antenna coupling and power deposition profiles in axisymmetric plasmas. (orig.)

  7. HADES. A computer code for fast neutron cross section from the Optical Model

    International Nuclear Information System (INIS)

    Guasp, J.; Navarro, C.

    1973-01-01

    A FORTRAN V computer code for the UNIVAC 1108/6, using a local Optical Model with spin-orbit interaction, is described. The code calculates fast neutron cross sections, angular distributions, and Legendre moments for heavy and intermediate spherical nuclei. It allows automatic variation of the potential parameters for fitting experimental data. (Author) 55 refs

  8. The Nuremberg Code subverts human health and safety by requiring animal modeling

    OpenAIRE

    Greek, Ray; Pippus, Annalea; Hansen, Lawrence A

    2012-01-01

    Background: The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion: We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive...

  9. Functional capabilities of the breadboard model of SIDRA satellite-borne instrument

    International Nuclear Information System (INIS)

    Dudnik, O.V.; Kurbatov, E.V.; Titov, K.G.; Prieto, M.; Sanchez, S.; Sylwester, J.; Gburek, S.; Podgorski, P.

    2013-01-01

    This paper presents the structure, principles of operation and functional capabilities of the breadboard model of the SIDRA compact satellite-borne instrument. SIDRA is intended for monitoring fluxes of high-energy charged particles under outer-space conditions. We present the reasons for developing a particle spectrometer and list the main objectives to be achieved with this instrument. The paper describes the major specifications of the analog and digital signal processing units of the breadboard model. A specially designed and developed data processing module based on the Actel ProASIC3E A3PE3000 FPGA is presented and compared with the all-in-one digital signal processing board based on the Xilinx Spartan 3 XC3S1500 FPGA.

  10. PEBBLES: A COMPUTER CODE FOR MODELING PACKING, FLOW AND RECIRCULATION OF PEBBLES IN A PEBBLE BED REACTOR

    Energy Technology Data Exchange (ETDEWEB)

    Joshua J. Cogliati; Abderrafi M. Ougouag

    2006-10-01

    A comprehensive, high fidelity model for pebble flow has been developed and embodied in the PEBBLES computer code. In this paper, a description of the physical artifacts included in the model is presented and some results from using the computer code for predicting the features of pebble flow and packing in a realistic pebble bed reactor design are shown. The sensitivity of models to various physical parameters is also discussed.

  11. A combined N-body and hydrodynamic code for modeling disk galaxies

    International Nuclear Information System (INIS)

    Schroeder, M.C.

    1989-01-01

    A combined N-body and hydrodynamic computer code for the modeling of two-dimensional galaxies is described. The N-body portion of the code is used to calculate the motion of the particle component of a galaxy, while the hydrodynamics portion is used to follow the motion and evolution of the fluid component. A complete description of the numerical methods used for each portion of the code is given. Additionally, the proof tests of the separate and combined portions of the code are presented and discussed. Finally, the topics researched with the code and the results obtained are discussed. These include: the measurement of stellar relaxation times in disk galaxy simulations; the effects of two-armed spiral perturbations on stable axisymmetric disks; the effects of the inclusion of an interstellar medium (ISM) on the stability of disk galaxies; and the effect of the inclusion of stellar evolution on disk galaxy simulations
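    The N-body portion of such a code can be sketched with a direct-summation gravity kernel and a leapfrog (kick-drift-kick) integrator. This is only a minimal 2-D illustration of the particle component, with assumed units (G = 1) and parameters, not the scheme of the code described here:

```python
import numpy as np

def accelerations(pos, mass, soft=0.05):
    """Direct-summation gravitational accelerations with Plummer softening."""
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]   # diff[i, j] = r_j - r_i
    r2 = (diff**2).sum(axis=2) + soft**2
    inv_r3 = r2**-1.5
    np.fill_diagonal(inv_r3, 0.0)                          # remove self-force
    return (diff * (mass[np.newaxis, :, None] * inv_r3[:, :, None])).sum(axis=1)

def leapfrog(pos, vel, mass, dt, steps):
    """Kick-drift-kick integration of the collisionless particle component."""
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc          # half kick
        pos += dt * vel                # drift
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc          # half kick
    return pos, vel

rng = np.random.default_rng(0)
N = 100
pos = rng.normal(scale=1.0, size=(N, 2))
vel = np.zeros((N, 2))
mass = np.full(N, 1.0 / N)
pos, vel = leapfrog(pos, vel, mass, dt=0.01, steps=10)
```

    Because the pairwise forces are antisymmetric, total momentum is conserved to rounding error, which is a convenient sanity check for this kind of integrator.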

  12. Improving National Capability in Biogeochemical Flux Modelling: the UK Environmental Virtual Observatory (EVOp)

    Science.gov (United States)

    Johnes, P.; Greene, S.; Freer, J. E.; Bloomfield, J.; Macleod, K.; Reaney, S. M.; Odoni, N. A.

    2012-12-01

    The best outcomes from watershed management arise where policy and mitigation efforts are underpinned by strong science evidence, but there are major resourcing problems associated with the scale of monitoring needed to effectively characterise the sources, rates and impacts of nutrient enrichment nationally. The challenge is to increase national capability in predictive modelling of nutrient flux to waters, securing an effective mechanism for transferring knowledge and management tools from data-rich to data-poor regions. The inadequacy of existing tools and approaches to address these challenges provided the motivation for the Environmental Virtual Observatory programme (EVOp), an innovation from the UK Natural Environment Research Council (NERC). EVOp is exploring the use of a cloud-based infrastructure in catchment science, developing an exemplar to explore N and P fluxes to inland and coastal waters in the UK from grid to catchment and national scale. EVOp is bringing together for the first time national data sets, models and uncertainty analysis in cloud computing environments to explore and benchmark current predictive capability for national-scale biogeochemical modelling. The objective is to develop national biogeochemical modelling capability, capitalising on extensive national investment in the development of science understanding and modelling tools to support integrated catchment management, and supporting knowledge transfer from data-rich to data-poor regions. The AERC export coefficient model (Johnes et al., 2007) has been adapted to function within the EVOp cloud environment, on a geoclimatic basis, using a range of high-resolution, geo-referenced digital datasets as an initial demonstration of the enhanced national capacity for N and P flux modelling using cloud computing infrastructure. Geoclimatic regions are landscape units displaying homogeneous or quasi-homogeneous functional behaviour in terms of process controls on N and P cycling.
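    The export-coefficient approach underlying the AERC model can be illustrated in a few lines: the annual nutrient flux is the sum, over land-use classes, of an export coefficient times the class area, plus point-source inputs. The coefficients, areas and class names below are invented for illustration; the real model uses calibrated, geoclimate-specific values:

```python
# Toy export-coefficient calculation in the spirit of Johnes et al. (2007).
# All numbers are assumed values for illustration only.

export_coeff_p = {        # kg P per hectare per year (assumed values)
    "arable": 0.65,
    "grassland": 0.30,
    "woodland": 0.02,
    "urban": 0.83,
}

catchment_areas = {       # hectares in each land-use class (assumed values)
    "arable": 4200.0,
    "grassland": 6100.0,
    "woodland": 1500.0,
    "urban": 800.0,
}

point_sources_kg = 950.0  # e.g. sewage treatment works discharge (assumed)

# Diffuse load: sum of coefficient x area over all land-use classes
diffuse = sum(export_coeff_p[lu] * catchment_areas[lu] for lu in catchment_areas)
total_p_load = diffuse + point_sources_kg
print(f"Total P flux: {total_p_load:.0f} kg/yr")
```

    Transferring the model between regions then amounts to swapping in the coefficient set calibrated for the relevant geoclimatic region, which is what makes the approach attractive for data-poor catchments.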

  13. Development of a model and computer code to describe solar grade silicon production processes. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Gould, R K; Srivastava, R

    1979-12-01

    Models and computer codes which may be used to describe flow reactors in which high-purity, solar grade silicon is produced via reduction of gaseous silicon halides are described. A prominent example of the type of process which may be studied using the codes developed in this program is the SiCl/sub 4//Na reactor currently being developed by the Westinghouse Electric Corp. During this program two large computer codes were developed. The first is the CHEMPART code, an axisymmetric, marching code which treats two-phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. This code, based on the AeroChem LAPP (Low Altitude Plume Program) code, can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Also, deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail. These two codes have been used in this program to predict particle formation characteristics and wall collection efficiencies for SiCl/sub 4//Na flow reactors. Results are described.

  14. The MELTSPREAD Code for Modeling of Ex-Vessel Core Debris Spreading Behavior, Code Manual – Version3-beta

    Energy Technology Data Exchange (ETDEWEB)

    Farmer, M. T. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-09-01

    MELTSPREAD3 is a transient one-dimensional computer code that has been developed to predict the gravity-driven flow and freezing behavior of molten reactor core materials (corium) in containment geometries. Predictions can be made for corium flowing across surfaces under either dry or wet cavity conditions. The spreading surfaces that can be selected are steel, concrete, a user-specified material (e.g., a ceramic), or an arbitrary combination thereof. The corium can have a wide range of compositions of reactor core materials that includes distinct oxide phases (predominantly Zr, and steel oxides) plus metallic phases (predominantly Zr and steel). The code requires input that describes the containment geometry, melt “pour” conditions, and cavity atmospheric conditions (i.e., pressure, temperature, and cavity flooding information). For cases in which the cavity contains a preexisting water layer at the time of RPV failure, melt jet breakup and particle bed formation can be calculated mechanistically given the time-dependent melt pour conditions (input data) as well as the heatup and boiloff of water in the melt impingement zone (calculated). For core debris impacting either the containment floor or previously spread material, the code calculates the transient hydrodynamics and heat transfer which determine the spreading and freezing behavior of the melt. The code predicts conditions at the end of the spreading stage, including melt relocation distance, depth and material composition profiles, substrate ablation profile, and wall heatup. Code output can be used as input to other models such as CORQUENCH that evaluate long term core-concrete interaction behavior following the transient spreading stage. MELTSPREAD3 was originally developed to investigate BWR Mark I liner vulnerability, but has been substantially upgraded and applied to other reactor designs (e.g., the EPR), and more recently to the plant accidents at Fukushima Daiichi. The most recent round of

  15. Realistic edge field model code REFC for designing and study of isochronous cyclotron

    International Nuclear Information System (INIS)

    Ismail, M.

    1989-01-01

    The focussing properties of, and the requirements for isochronism in, a cyclotron magnet configuration are well known in the hard-edge field model. The fact that they quite often change considerably in a realistic field can be attributed mainly to the influence of the edge field. A solution to this problem requires a field model which allows a simple construction of the equilibrium orbit and yields simple formulae. This can be achieved by using a fitted realistic edge field (Hudson et al., 1975) in the region of the pole edge; such a field model is therefore called a realistic edge field model. A code REFC, based on the realistic edge field model, has been developed to design the cyclotron sectors, and the FIELDER code has been used to study the beam properties. In this report the REFC code is described along with some relevant explanation of the FIELDER code. (author). 11 refs., 6 figs

  16. Implementation and validation of the extended Hill-type muscle model with robust routing capabilities in LS-DYNA for active human body models.

    Science.gov (United States)

    Kleinbach, Christian; Martynenko, Oleksandr; Promies, Janik; Haeufle, Daniel F B; Fehr, Jörg; Schmitt, Syn

    2017-09-02

    In state-of-the-art finite element AHBMs for car crash analysis in the LS-DYNA software, the material model *MAT_MUSCLE (*MAT_156) is used for active muscle modeling. It has three elements in a parallel configuration, which has several major drawbacks: a restrictive approximation of the physical reality, complicated parameterization, and the absence of integrated activation dynamics. This study presents the implementation of an extended four-element Hill-type muscle model with serial damping and an eccentric force-velocity relation, including [Formula: see text] dependent activation dynamics and an internal method for physiological muscle routing. The proposed model was implemented in the general-purpose finite element (FE) simulation software LS-DYNA as a user material for truss elements. This material model is verified and validated against three different sets of mammalian experimental data taken from the literature. It is compared to the *MAT_MUSCLE (*MAT_156) Hill-type muscle model already existing in LS-DYNA, which is currently used in finite element human body models (HBMs). An application example with an arm model extracted from the FE ViVA OpenHBM is given, taking into account physiological muscle paths. The simulation results show better material model accuracy, calculation robustness and improved muscle routing capability compared to *MAT_156. The FORTRAN source code for the user material subroutine dyn21.f and the muscle parameters for all simulations conducted in the study are available at https://zenodo.org/record/826209 under an open source license. This enables quick application of the proposed material model in LS-DYNA, especially in active human body models (AHBMs) for applications in automotive safety.
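    The contractile-element behaviour that a Hill-type model of this kind builds on can be sketched in a few lines. The curve shapes and constants below are common textbook choices, not the parameters of the model in this record (which additionally includes a serial damping element and muscle routing):

```python
import numpy as np

def f_length(l_norm, width=0.45):
    """Bell-shaped force-length relation, peaking at optimal fibre length.

    l_norm is fibre length normalized to optimal length; width is an
    assumed shape constant.
    """
    return np.exp(-((l_norm - 1.0) / width) ** 2)

def f_velocity(v_norm, a_hill=0.25, f_ecc_max=1.5):
    """Hyperbolic concentric branch plus a saturating eccentric branch.

    v_norm is fibre velocity normalized to maximum shortening velocity
    (negative = shortening); a_hill and f_ecc_max are assumed constants.
    """
    if v_norm <= 0.0:  # concentric (shortening), v_norm in [-1, 0]
        return (1.0 + v_norm) / (1.0 - v_norm / a_hill)
    # eccentric (lengthening): force rises above isometric, then saturates
    return f_ecc_max - (f_ecc_max - 1.0) / (1.0 + v_norm / a_hill)

def activation_step(a, u, dt, tau_act=0.01, tau_deact=0.04):
    """One Euler step of first-order activation dynamics driven by excitation u."""
    tau = tau_act if u > a else tau_deact
    return a + dt * (u - a) / tau

def ce_force(a, l_norm, v_norm, f_max=1000.0):
    """Contractile-element force: activation x f_max x f_l x f_v."""
    return a * f_max * f_length(l_norm) * f_velocity(v_norm)

print(ce_force(a=1.0, l_norm=1.0, v_norm=0.0))  # isometric case: equals f_max
```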

  17. A generic method for automatic translation between input models for different versions of simulation codes

    International Nuclear Information System (INIS)

    Serfontein, Dawid E.; Mulder, Eben J.; Reitsma, Frederik

    2014-01-01

    A computer code was developed for the semi-automatic translation of input models from the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, the algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. The task of, for instance, nuclear regulators to verify the accuracy of such translated files can therefore be very difficult and cumbersome. Translation errors may consequently go unnoticed, which may have disastrous consequences later on when a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It also removes human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm automatically creates a verification log file which permanently records the name and value of each variable used, as well as a list of the meanings of all possible values. This should greatly facilitate reactor licensing applications
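    The core of such a translation algorithm can be sketched as a mapping table that drives the conversion and writes a verification log as it goes. All variable names and formats below are invented for illustration; real VSOP input decks are far larger and more intricate:

```python
# Minimal sketch of a table-driven input-model translator with a
# verification log. The mapping rules and keys are hypothetical.

MAPPING = {
    # old_key: (new_key, converter)
    "PWR": ("thermal_power_MW", float),
    "NBATCH": ("n_fuel_batches", int),
    "TIN": ("inlet_temp_C", float),
}

def translate(old_model: dict, log_path: str) -> dict:
    """Translate old-format entries to the new format, logging every value."""
    new_model = {}
    with open(log_path, "w") as log:
        log.write("old_name -> new_name : value\n")
        for old_key, raw in old_model.items():
            if old_key not in MAPPING:
                # No rule: keep verbatim, but record it so a reviewer sees it
                log.write(f"{old_key} -> (no rule, kept verbatim) : {raw}\n")
                new_model[old_key] = raw
                continue
            new_key, conv = MAPPING[old_key]
            value = conv(raw)
            new_model[new_key] = value
            log.write(f"{old_key} -> {new_key} : {value}\n")
    return new_model

new = translate({"PWR": "200.0", "NBATCH": "4"}, "translate.log")
print(new)  # {'thermal_power_MW': 200.0, 'n_fuel_batches': 4}
```

    The log file plays the role of the verification record described above: every translated name and value is written out once, so a reviewer can audit the translation without re-deriving it.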

  18. Improvement on reaction model for sodium-water reaction jet code and application analysis

    International Nuclear Information System (INIS)

    Itooka, Satoshi; Saito, Yoshinori; Okabe, Ayao; Fujimata, Kazuhiro; Murata, Shuuichi

    2000-03-01

    In selecting a reasonable design basis leak (DBL) for the steam generator (SG), it is necessary to improve the analytical method for estimating the sodium temperature during failure propagation due to overheating. Improvements to the sodium-water reaction (SWR) jet code (LEAP-JET ver. 1.30) and application analyses of water injection tests were performed to confirm the propriety of the code. In the code improvement, a gas-liquid interface area density model was introduced to obtain a chemical reaction model with little dependence on the calculation mesh size. Test calculations using the improved code (LEAP-JET ver. 1.40) were carried out for the conditions of the SWAT-3 Run-19 test and an actual-scale SG, and it was confirmed that the predicted SWR jet behavior and the influence of the new model on the analysis results are reasonable. For the application analysis of the water injection tests, water injection behavior and SWR jet behavior analyses of the new SWAT-1 (SWAT-1R) and SWAT-3 (SWAT-3R) tests were performed using the LEAP-BLOW code and the LEAP-JET code. In the application analysis with the LEAP-BLOW code, a parameter survey study was performed, and the injection nozzle diameter needed to simulate the water leak rate was confirmed. In the application analysis with the LEAP-JET code, the temperature behavior of the SWR jet was investigated. (author)
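The motivation for an interface area density model is that the reaction source term in each cell scales with the interfacial area that cell actually contains, so the domain-integrated reaction rate stays essentially independent of mesh refinement. The Python sketch below demonstrates only that invariance property, with an assumed linear rate law that is unrelated to the actual LEAP-JET chemistry.

```python
def total_reaction_rate(n_cells, total_interface_area, k):
    """Sum of per-cell reaction rates when each cell carries its share of
    the gas-liquid interfacial area; refining the mesh (more cells) leaves
    the total unchanged because the per-cell area shrinks proportionally."""
    cell_area = total_interface_area / n_cells  # interfacial area per cell
    return sum(k * cell_area for _ in range(n_cells))

coarse = total_reaction_rate(10, total_interface_area=2.0, k=5.0)
fine = total_reaction_rate(1000, total_interface_area=2.0, k=5.0)
```

A rate law tied to cell volume alone, by contrast, would change the total as the mesh is refined, which is the mesh-size dependence the improved model removes.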

  19. Development of seismic analysis model for HTGR core on commercial FEM code

    International Nuclear Information System (INIS)

    Tsuji, Nobumasa; Ohashi, Kazutaka

    2015-01-01

    The aftermath of the Great East Japan Earthquake prompted a severe revision of the design basis earthquake intensity. In the aseismic design of a block-type HTGR, securing the structural integrity of the core blocks and other structures made of graphite becomes more important. For the aseismic design of a block-type HTGR, it is necessary to predict the motion of core blocks colliding with adjacent blocks. Several seismic analysis codes were developed in the 1970s, but these are special purpose-built codes with poor interoperability with other structural analysis codes. We develop a vertical two-dimensional analytical model on a multi-purpose commercial FEM code that takes into account multiple impacts and friction between block interfaces, as well as rocking motion on contact with the dowel pins of the HTGR core, by using contact elements. This model is verified by comparison with the experimental results of a 12-column vertical-slice vibration test. (author)
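Contact elements of the kind mentioned here typically work as penalty contacts: when a body penetrates a stop (such as a dowel-pin clearance), a stiff spring force pushes it back. A one-degree-of-freedom Python sketch with semi-implicit Euler integration follows; the mass, gap and stiffness values are illustrative assumptions, not the paper's model.

```python
def simulate_block(x0=0.0, v0=0.5, gap=0.01, k=1.0e6, m=10.0,
                   dt=1.0e-5, steps=20000):
    """Rigid block of mass m bouncing between stops at +/-gap.
    A penalty spring of stiffness k acts only on the penetration depth.
    Semi-implicit Euler (update v, then x) keeps the energy bounded."""
    x, v = x0, v0
    for _ in range(steps):
        f = 0.0
        if x > gap:          # penetration of the right stop
            f = -k * (x - gap)
        elif x < -gap:       # penetration of the left stop
            f = -k * (x + gap)
        v += (f / m) * dt
        x += v * dt
    return x, v

x_end, v_end = simulate_block()
```

Without damping the block keeps bouncing at (nearly) its initial speed, and the penetration stays small because the penalty stiffness is high; a multi-block FEM model strings many such contacts together with friction.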

  20. Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes

    Science.gov (United States)

    DeWitt, Kenneth; Garg, Vijay; Ameri, Ali

    2005-01-01

    The aim of this research project is to make use of NASA Glenn on-site computational facilities in order to develop, validate and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include: application of the Glenn-HT code to specific configurations made available under the Turbine Based Combined Cycle (TBCC) and Ultra Efficient Engine Technology (UEET) projects, and validation of the use of a multi-block code for the time-accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code. This will enable one to design more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.

  1. DESTINY: A Comprehensive Tool with 3D and Multi-Level Cell Memory Modeling Capability

    Directory of Open Access Journals (Sweden)

    Sparsh Mittal

    2017-09-01

    To enable the design of large-capacity memory structures, novel memory technologies such as non-volatile memory (NVM) and novel fabrication approaches, e.g., 3D stacking and multi-level cell (MLC) design, have been explored. The existing modeling tools, however, cover only a few memory technologies, technology nodes and fabrication approaches. We present DESTINY, a tool for modeling 2D/3D memories designed using SRAM, resistive RAM (ReRAM), spin transfer torque RAM (STT-RAM), phase change RAM (PCM) and embedded DRAM (eDRAM), and 2D memories designed using spin orbit torque RAM (SOT-RAM), domain wall memory (DWM) and Flash memory. In addition to single-level cell (SLC) designs for all of these memories, DESTINY also supports modeling MLC designs for NVMs. We have extensively validated DESTINY against commercial and research prototypes of these memories. DESTINY is very useful for performing design-space exploration across several dimensions, such as optimizing for a target metric (e.g., latency, area or energy-delay product) for a given memory technology, or choosing the most suitable memory technology or fabrication method (i.e., 2D vs. 3D) for a given optimization target. We believe that DESTINY will boost studies of next-generation memory architectures used in systems ranging from mobile devices to extreme-scale supercomputers. The latest source code of DESTINY is available from the following git repository: https://bitbucket.org/sparshmittal/destinyv2.
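Design-space exploration of the kind DESTINY performs can be pictured as evaluating every candidate configuration against a target metric and keeping the best one. The Python toy below optimizes energy-delay product (EDP) over a handful of candidates; the technology names and numbers are made up for illustration, not DESTINY output.

```python
# Hypothetical modeled results per candidate configuration (made-up numbers)
candidates = {
    "SRAM-2D":   {"latency_ns": 1.2, "energy_nj": 0.50},
    "STTRAM-2D": {"latency_ns": 2.0, "energy_nj": 0.20},
    "ReRAM-3D":  {"latency_ns": 1.6, "energy_nj": 0.30},
}

def best_by_edp(cands):
    """Return the configuration minimizing energy-delay product."""
    return min(cands, key=lambda n: cands[n]["latency_ns"] * cands[n]["energy_nj"])

winner = best_by_edp(candidates)
```

A real exploration sweeps thousands of such points (technology, node, 2D vs. 3D, SLC vs. MLC) produced by the modeling tool rather than a hand-written table.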

  2. Extending the Lunar Mapping and Modeling Portal - New Capabilities and New Worlds

    Science.gov (United States)

    Day, B. H.; Law, E.; Arevalo, E.; Bui, B.; Chang, G.; Dodge, K.; Kim, R. M.; Malhotra, S.; Sadaqathullah, S.

    2015-12-01

    NASA's Lunar Mapping and Modeling Portal (LMMP) provides a web-based Portal and a suite of interactive visualization and analysis tools to enable mission planners, lunar scientists, and engineers to access mapped lunar data products from past and current lunar missions (http://lmmp.nasa.gov). During the past year, the capabilities and data served by LMMP have been significantly expanded. New interfaces are providing improved ways to access and visualize data. Many of the recent enhancements to LMMP have been specifically in response to the requirements of NASA's proposed Resource Prospector lunar rover, and as such, provide an excellent example of the application of LMMP to mission planning. At the request of NASA's Science Mission Directorate, LMMP's technology and capabilities are now being extended to additional planetary bodies. New portals for Vesta and Mars are the first of these new products to be released. On March 31, 2015, the LMMP team released Vesta Trek (http://vestatrek.jpl.nasa.gov), a web-based application applying LMMP technology to visualizations of the asteroid Vesta. Data gathered from multiple instruments aboard Dawn have been compiled into Vesta Trek's user-friendly set of tools, enabling users to study the asteroid's features. With an initial release on July 1, 2015, Mars Trek replicates the functionality of Vesta Trek for the surface of Mars. While the entire surface of Mars is covered, higher levels of resolution and greater numbers of data products are provided for special areas of interest. Early releases focus on past, current, and future robotic sites of operation. Future releases will add many new data products and analysis tools, as Mars Trek has been selected for use in site selection for the Mars 2020 rover and in identifying potential human landing sites on Mars. Other destinations will follow soon. The user community is invited to provide suggestions and requests as the development team continues to expand the capabilities of LMMP.

  3. An Advanced simulation Code for Modeling Inductive Output Tubes

    Energy Technology Data Exchange (ETDEWEB)

    Thuc Bui; R. Lawrence Ives

    2012-04-27

    During the Phase I program, CCR completed several major building blocks for a 3D large-signal inductive output tube (IOT) code using modern computer languages and programming techniques. These included a 3D, time-harmonic, Helmholtz field solver with a fully functional graphical user interface (GUI), automeshing and adaptivity. Other building blocks included an improved electrostatic Poisson solver with temporal boundary conditions to provide time-dependent fields for the time-stepping particle pusher, as well as the self electric field caused by time-varying space charge. The magnetostatic field solver was also updated to solve for the self magnetic field caused by the time-changing current density in the output cavity gap. The goal function for optimizing an IOT cavity was also formulated, and optimization methodologies were investigated.

  4. GEMSFITS: Code package for optimization of geochemical model parameters and inverse modeling

    International Nuclear Information System (INIS)

    Miron, George D.; Kulik, Dmitrii A.; Dmytrieva, Svitlana V.; Wagner, Thomas

    2015-01-01

    Highlights: • Tool for generating consistent parameters against various types of experiments. • Handles a large number of experimental data and parameters (is parallelized). • Has a graphical interface and can perform statistical analysis on the parameters. • Tested on fitting the standard state Gibbs free energies of aqueous Al species. • Example on fitting interaction parameters of mixing models and thermobarometry. - Abstract: GEMSFITS is a new code package for fitting internally consistent input parameters of GEM (Gibbs Energy Minimization) geochemical–thermodynamic models against various types of experimental or geochemical data, and for performing inverse modeling tasks. It consists of the gemsfit2 (parameter optimizer) and gfshell2 (graphical user interface) programs, both accessing a NoSQL database, all developed with flexibility, generality, efficiency, and user friendliness in mind. The parameter optimizer gemsfit2 includes the GEMS3K chemical speciation solver (http://gems.web.psi.ch/GEMS3K), which features a comprehensive suite of non-ideal activity- and equation-of-state models of solution phases (aqueous electrolyte, gas and fluid mixtures, solid solutions, (ad)sorption). The gemsfit2 code uses the robust open-source NLopt library for parameter fitting, which provides a selection between several nonlinear optimization algorithms (global, local, gradient-based), and supports large-scale parallelization. The gemsfit2 code can also perform comprehensive statistical analysis of the fitted parameters (basic statistics, sensitivity, Monte Carlo confidence intervals), thus supporting the user with powerful tools for evaluating the quality of the fits and the physical significance of the model parameters. The gfshell2 code provides menu-driven setup of optimization options (data selection, properties to fit and their constraints, measured properties to compare with computed counterparts, and statistics). The practical utility, efficiency, and
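GEMSFITS delegates the actual optimization to NLopt; purely as an illustration of the underlying idea of fitting a model parameter by minimizing a sum of squared residuals, here is a stdlib-only Python sketch using ternary search on a unimodal objective. The toy linear model and the synthetic "measurements" are invented, not geochemical data.

```python
def sum_sq(param, data, model):
    """Sum of squared residuals between model(param, x) and measured y."""
    return sum((model(param, x) - y) ** 2 for x, y in data)

def fit(data, model, lo, hi, iters=200):
    """Minimize sum_sq over [lo, hi]; valid for a unimodal objective."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if sum_sq(m1, data, model) < sum_sq(m2, data, model):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

model = lambda k, x: k * x                      # toy linear model y = k * x
data = [(1.0, 2.05), (2.0, 3.95), (3.0, 6.02)]  # synthetic measurements
k_fit = fit(data, model, 0.0, 10.0)
```

Real inverse modeling replaces the toy model with a full chemical speciation calculation per data point, which is why parallelization and robust optimizers matter there.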

  5. Improving high-altitude emp modeling capabilities by using a non-equilibrium electron swarm model to monitor conduction electron evolution

    Science.gov (United States)

    Pusateri, Elise Noel

    abruptly. The objective of the PhD research is to mitigate this effect by integrating a conduction electron model into CHAP-LA which can calculate the conduction current based on a non-equilibrium electron distribution. We propose to use an electron swarm model to monitor the time evolution of conduction electrons in the EMP environment, which is characterized by electric field and pressure. Swarm theory uses various collision frequencies and reaction rates to study how the electron distribution and the resultant transport coefficients change with time, ultimately reaching an equilibrium distribution. Validation of the swarm model we develop is a necessary step for completion of the thesis work. After validation, the swarm model is integrated into the air chemistry model CHAP-LA employs for conduction electron simulations. We test high-altitude EMP simulations with the swarm model option in the air chemistry model to show improvements in the computational capability of CHAP-LA. A swarm model has been developed that is based on a previous swarm model developed by Higgins, Longmire and O'Dell (1973), hereinafter HLO. The code used for the swarm model calculation solves a system of coupled differential equations for electric field, electron temperature, electron number density, and drift velocity. Important swarm parameters, including the momentum transfer collision frequency, energy transfer collision frequency, and ionization rate, are recalculated and compared to the previously reported empirical results given by HLO. These swarm parameters are found using BOLSIG+, a two-term Boltzmann solver developed by Hagelaar and Pitchford (2005). BOLSIG+ utilizes updated electron scattering cross sections, defined over an expanded energy range, from the atomic and molecular cross section database published by Phelps (Phelps Database, 2014) on the LXcat website created by Pancheshnyi et al. (2012).
The swarm model is also updated from the original HLO model by including
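The swarm approach described here amounts to integrating rate equations for the electron population under given field and pressure conditions. As a deliberately minimal stand-in, the sketch below integrates dn/dt = (nu_ion - nu_att) * n with forward Euler; the rate constants and values are illustrative assumptions, not HLO or BOLSIG+ results.

```python
def evolve_density(n0, nu_ion, nu_att, t_end, dt=1e-9):
    """Forward-Euler integration of the electron number density n(t)
    under a net growth rate (ionization minus attachment)."""
    n, t = n0, 0.0
    while t < t_end - 0.5 * dt:   # half-step guard against float drift
        n += (nu_ion - nu_att) * n * dt
        t += dt
    return n

# Net growth rate 1e7 1/s over 100 ns roughly multiplies n by e ~ 2.72
n_final = evolve_density(1e6, nu_ion=2e7, nu_att=1e7, t_end=1e-7)
```

The full model couples several such equations (field, temperature, density, drift velocity), with the rates themselves depending on the evolving electron energy distribution.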

  6. Defining Building Information Modeling implementation activities based on capability maturity evaluation: a theoretical model

    Directory of Open Access Journals (Sweden)

    Romain Morlhon

    2015-01-01

    Building Information Modeling (BIM) has become a widely accepted tool to overcome the many hurdles that currently face the Architecture, Engineering and Construction industries. However, implementing such a system is always complex, and the recent introduction of BIM does not allow organizations to build their experience on acknowledged standards and procedures. Moreover, data on implementation projects is still scattered and fragmentary. The objective of this study is to develop an assistance model for BIM implementation. The solutions proposed will help develop BIM that is better integrated and better used, taking into account the different maturity levels of each organization. Indeed, based on Critical Success Factors, concrete activities that help implementation are identified and can be undertaken according to a prior maturity evaluation of the organization. The result of this research is a structured model linking maturity, success factors and actions, which operates on the following principle: once an organization has assessed its BIM maturity, it can identify various weaknesses and find relevant answers in the success factors and the associated actions.

  7. Field-based tests of geochemical modeling codes: New Zealand hydrothermal systems

    International Nuclear Information System (INIS)

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1993-12-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions
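The affinity-temperature diagrams mentioned above rest on comparing the ion activity product Q of a mineral dissolution reaction with its equilibrium constant K at the temperature of interest. The Python sketch below states the standard definitions (saturation index SI = log10(Q/K) and affinity A = 2.303 R T SI); this is a common convention for illustration, not necessarily EQ3/6's internal formulation.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def saturation_index(Q, K):
    """SI = log10(Q/K): > 0 supersaturated, < 0 undersaturated, 0 at equilibrium."""
    return math.log10(Q / K)

def affinity(Q, K, T_kelvin):
    """Chemical affinity (J/mol) corresponding to the saturation index."""
    return math.log(10.0) * R * T_kelvin * saturation_index(Q, K)

# A fluid tenfold supersaturated with respect to a mineral at 25 deg C
a = affinity(Q=10.0, K=1.0, T_kelvin=298.15)
```

Plotting A against temperature for each candidate mineral is what lets one see which assemblage the sampled fluid is closest to equilibrium with, and how sensitive that conclusion is to uncertainties in K(T).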

  8. Field-based tests of geochemical modeling codes using New Zealand hydrothermal systems

    International Nuclear Information System (INIS)

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1994-06-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions

  9. Implementing the WebSocket Protocol Based on Formal Modelling and Automated Code Generation

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael

    2014-01-01

    Model-based software engineering offers several attractive benefits for the implementation of protocols, including automated code generation for different platforms from design-level models. In earlier work, we have proposed a template-based approach using Coloured Petri Net formal models with pragmatic annotations for automated code generation of protocol software. The contribution of this paper is an application of the approach, as implemented in the PetriCode tool, to obtain protocol software implementing the IETF WebSocket protocol. This demonstrates the scalability of our approach to real protocols. Furthermore, we perform formal verification of the CPN model prior to code generation, and test the implementation for interoperability against the Autobahn WebSocket test-suite, resulting in 97% and 99% success rates for the client and server implementation, respectively. The tests show ...

  10. Context discovery using attenuated Bloom codes: model description and validation

    NARCIS (Netherlands)

    Liu, F.; Heijenk, Geert

    A novel approach to performing context discovery in ad-hoc networks based on the use of attenuated Bloom filters is proposed in this report. In order to investigate the performance of this approach, a model has been developed. This document describes the model and its validation. The model has been
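An attenuated Bloom filter is an array of plain Bloom filters in which layer d summarizes the items (here: context/services) reachable in d hops, so a query returns the smallest plausible hop distance. The following minimal Python sketch is an illustration of the data structure only; the filter size, hash construction and service names are assumptions, not taken from the report.

```python
import hashlib

class AttenuatedBloomFilter:
    """Array of `depth` Bloom filters; layer d holds items reachable in d hops."""

    def __init__(self, depth=3, m=256, k=4):
        self.depth, self.m, self.k = depth, m, k
        self.layers = [[0] * m for _ in range(depth)]

    def _bits(self, item):
        # k independent bit positions derived from salted SHA-256 digests
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item, hops):
        for b in self._bits(item):
            self.layers[hops][b] = 1

    def query(self, item):
        """Smallest hop count whose layer may contain `item`, else None."""
        for d, layer in enumerate(self.layers):
            if all(layer[b] for b in self._bits(item)):
                return d
        return None

abf = AttenuatedBloomFilter()
abf.add("printer", hops=1)
abf.add("scanner", hops=2)
```

As with any Bloom filter, a query can produce false positives (an item wrongly reported at some layer) but never false negatives, which is the trade-off the report's performance model quantifies.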

  11. Recent progress of an integrated implosion code and modeling of element physics

    International Nuclear Information System (INIS)

    Nagatomo, H.; Takabe, H.; Mima, K.; Ohnishi, N.; Sunahara, A.; Takeda, T.; Nishihara, K.; Nishiguchu, A.; Sawada, K.

    2001-01-01

    Physics of inertial fusion is based on a variety of elements such as compressible hydrodynamics, radiation transport, non-ideal equation of state, non-LTE atomic processes, and relativistic laser-plasma interaction. In addition, the implosion process is not in a stationary state, and fluid dynamics, energy transport and instabilities should be solved simultaneously. In order to study such complex physics, an integrated implosion code including all physics important in the implosion process should be developed. The details of the physics elements should be studied, and the resultant numerical modeling should be installed in the integrated code so that the implosion can be simulated with available computers within realistic CPU time. This task can basically be separated into two parts. One is to integrate all physics elements into a code, which is strongly related to the development of the hydrodynamic equation solver. We have developed a 2-D integrated implosion code which solves mass, momentum, electron energy, ion energy, equation of state, laser ray-trace, laser absorption, radiation, surface tracing and so on. Reasonable results were obtained in simulating Rayleigh-Taylor instability and cylindrical implosions using this code. The other part is code development for each element of the physics and verification of these codes. We have made progress in developing a nonlocal electron transport code and 2- and 3-dimensional radiation hydrodynamic codes. (author)

  12. Further assessment of the chemical modelling of iodine in IMPAIR 3 code using ACE/RTF data

    Energy Technology Data Exchange (ETDEWEB)

    Cripps, R.C.; Guentay, S. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1996-12-01

    This paper introduces the assessment of the computer code IMPAIR 3 (Iodine Matter Partitioning And Iodine Release), which simulates physical and chemical iodine processes in an LWR containment with one or more compartments under conditions relevant to a severe accident in a nuclear reactor. The first version was published in 1992 to replace both the multi-compartment code IMPAIR 2/M and the single-compartment code IMPAIR 2.2. IMPAIR 2.2 was restricted to a single pH value specified before programme execution and precluded any variation of pH or calculation of H+ changes during program execution. This restriction is removed in IMPAIR 3. Results of the IMPAIR 2.2 assessment using ACE/RTF Test 2 and the acidic phase of Test 3B data were presented at the 3rd CSNI Workshop. The purpose of the current assessment is to verify the capability of IMPAIR 3 to follow the whole test duration with changing boundary conditions. Besides revisiting ACE/RTF Test 3B, Test 4 data were also used for the current assessment. A limited data analysis was conducted using the outcome of the current ACEX iodine work to understand the iodine behaviour observed during these tests. This paper presents comparisons of the predicted results with the test data. The capabilities of the code are demonstrated, with focus on still-unresolved modelling problems. The unclear behaviour observed for gaseous molecular iodine, its inconclusive effect on the calculated behaviour in the acidic phase of Test 4, and the importance of the catalytic effect of stainless steel are also indicated. (author) 18 figs., 1 tab., 11 refs.

  13. Further assessment of the chemical modelling of iodine in IMPAIR 3 code using ACE/RTF data

    International Nuclear Information System (INIS)

    Cripps, R.C.; Guentay, S.

    1996-01-01

    This paper introduces the assessment of the computer code IMPAIR 3 (Iodine Matter Partitioning And Iodine Release), which simulates physical and chemical iodine processes in an LWR containment with one or more compartments under conditions relevant to a severe accident in a nuclear reactor. The first version was published in 1992 to replace both the multi-compartment code IMPAIR 2/M and the single-compartment code IMPAIR 2.2. IMPAIR 2.2 was restricted to a single pH value specified before programme execution and precluded any variation of pH or calculation of H+ changes during program execution. This restriction is removed in IMPAIR 3. Results of the IMPAIR 2.2 assessment using ACE/RTF Test 2 and the acidic phase of Test 3B data were presented at the 3rd CSNI Workshop. The purpose of the current assessment is to verify the capability of IMPAIR 3 to follow the whole test duration with changing boundary conditions. Besides revisiting ACE/RTF Test 3B, Test 4 data were also used for the current assessment. A limited data analysis was conducted using the outcome of the current ACEX iodine work to understand the iodine behaviour observed during these tests. This paper presents comparisons of the predicted results with the test data. The capabilities of the code are demonstrated, with focus on still-unresolved modelling problems. The unclear behaviour observed for gaseous molecular iodine, its inconclusive effect on the calculated behaviour in the acidic phase of Test 4, and the importance of the catalytic effect of stainless steel are also indicated. (author) 18 figs., 1 tab., 11 refs.

  14. Overview of the development of a biosphere modelling capability for UK DoE (HMIP)

    International Nuclear Information System (INIS)

    Nancarrow, D.J.; Ashton, J.; Little, R.H.

    1990-01-01

    A programme of research has been funded, since 1982, by the United Kingdom Department of the Environment (Her Majesty's Inspectorate of Pollution, HMIP), to develop a procedure for post-closure radiological assessment of underground disposal facilities for low and intermediate level radioactive wastes. It is conventional to regard the disposal system as comprising the engineered barriers of the repository, the geological setting which provides natural barriers to migration, and the surface environment or biosphere. The requirement of a biosphere submodel, therefore, is to provide estimates, for given radionuclide inputs, of the dose or probability distribution function of dose to a maximally exposed individual as a function of time. This paper describes the development of the biosphere modelling capability for HMIP in the context of the development of the other assessment procedures. 11 refs., 3 figs., 2 tabs

  15. Predictions for the drive capabilities of the RancheroS Flux Compression Generator into various load inductances using the Eulerian AMR Code Roxane

    Energy Technology Data Exchange (ETDEWEB)

    Watt, Robert Gregory [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-06

    The Ranchero Magnetic Flux Compression Generator (FCG) has been used to create current pulses in the 10-100 MA range for driving both "static" low-inductance (0.5 nH) loads [1] for generator demonstration purposes and high-inductance (10-20 nH) imploding liner loads [2] for ultimate use in physics experiments at very high energy density. Simulations of the standard Ranchero generator have recently shown that it had a design issue that could lead to flux trapping in the generator, and a non-robust predictability in its use in high energy density experiments. A re-examination of the design concept for the standard Ranchero generator, prompted by the possible appearance of an aneurism at the output glide plane, has led to a new generation of Ranchero generators designated the RancheroS (for swooped). This generator has removed the problematic output glide plane and replaced it with a region of constantly increasing diameter in the output end of the FCG cavity, in which the armature is driven outward under the influence of an additional HE load not present in the original Ranchero. The resultant RancheroS generator, to be tested in LA43S-L13, probably in early FY17, has a significantly increased initial inductance and may be able to drive a somewhat higher load inductance than the standard Ranchero. This report will use the Eulerian AMR code Roxane to study the ability of the new design to drive static loads, with a goal of providing a database corresponding to the load inductances for which the generator might be used and the anticipated peak currents such loads might produce in physics experiments. Such a database, combined with a simple analytic model of an ideal generator, where d(LI)/dt = 0, and supplemented by earlier estimates of losses in actual use of the standard Ranchero, scaled to estimate the increase in losses due to the longer current-carrying perimeter in the RancheroS, can then be used to bound the expectations for the current drive one may

  16. Modelling of blackout sequence at Atucha-1 using the MARCH3 code

    International Nuclear Information System (INIS)

    Baron, J.; Bastianelli, B.

    1997-01-01

    This paper presents the modelling of a complete blackout at the Atucha-1 NPP as a preliminary phase for a Level II probabilistic safety analysis. The MARCH3 code of the STCP (Source Term Code Package) is used, based on a plant model made in accordance with the particularities of the plant design. The analysis covers all the severe accident phases. The results allow one to view the time sequence of the events, and provide the basis for source term studies. (author). 6 refs., 2 figs

  17. Capabilities of stochastic rainfall models as data providers for urban hydrology

    Science.gov (United States)

    Haberlandt, Uwe

    2017-04-01

    For the planning of urban drainage systems using hydrological models, long, continuous precipitation series with high temporal resolution are needed. Since observed time series are often too short or not available everywhere, the use of synthetic precipitation is a common alternative. This contribution compares three precipitation models regarding their suitability to provide 5-minute continuous rainfall time series for a) sizing of drainage networks for urban flood protection and b) dimensioning of combined sewage systems for pollution reduction. The rainfall models are a parametric stochastic model (Haberlandt et al., 2008), a non-parametric probabilistic approach (Bárdossy, 1998) and a stochastic downscaling of dynamically simulated rainfall (Berg et al., 2013); all models are operated both as single-site and multi-site generators. The models are applied with regionalised parameters, assuming that there is no station at the target location. Rainfall and discharge characteristics are utilised for evaluation of the model performance. The simulation results are compared against results obtained from reference rainfall stations not used for parameter estimation. The rainfall simulations are carried out for the federal states of Baden-Württemberg and Lower Saxony in Germany, and the discharge simulations for the drainage networks of the cities of Hamburg, Brunswick and Freiburg. Altogether, the results show comparable simulation performance for the three models, with good capabilities for single-site simulations but low skill for multi-site simulations. Remarkably, there is no significant difference in simulation performance between the tasks of flood protection and pollution reduction, so the models are able to simulate both the extremes and the long-term characteristics of rainfall equally well. Bárdossy, A., 1998. Generating precipitation time series using simulated annealing. Wat. Resour. Res., 34(7): 1737-1744. Berg, P., Wagner, S., Kunstmann, H., Schädler, G
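Parametric stochastic rainfall generators produce synthetic high-resolution series from a few fitted parameters. As a deliberately crude stand-in (not any of the three models compared above), the Python sketch below marks each 5-minute step wet with a fixed probability and draws exponentially distributed depths for wet steps; both parameters are illustrative.

```python
import random

def simple_rainfall(n_steps, p_wet=0.05, mean_depth=2.0, seed=42):
    """Synthetic 5-min rainfall series (mm per step): each step is wet with
    probability p_wet; wet-step depths are exponential with the given mean."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / mean_depth) if rng.random() < p_wet else 0.0
            for _ in range(n_steps)]

series = simple_rainfall(10000)
```

A real generator additionally reproduces clustering of wet spells, seasonality and (for multi-site use) spatial correlation, which is exactly where the compared models differ in skill.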

  18. Development of an operational waterborne weaponized chemical agent transport modeling capability

    International Nuclear Information System (INIS)

    Ward, M.C.; Cragan, J.A.; Mueller, C.

    2009-01-01

    The fate of chemical warfare agents (CWAs) in aqueous environments is not well characterized. Limited physical and kinetic data are available for these chemicals in the open literature, partly due to their inherent lethality. As a result, the development of methods for determining the persistence and extent of impact of waterborne chemical agent releases is a significant challenge. In this study, a hydrolysis model was developed to track the fate of several critical CWAs. VX, sarin, soman, tabun, and cyclosarin modeling capabilities were developed for an instantaneous aqueous point-source release. Hydrolysis products were tracked, and the resulting change in pH was calculated for the local dispersive environment. Using these data, instantaneous hydrolysis rates were calculated. This framework was applied to assess the persistence and fate of the CWAs in different turbulent environments. From this hydrolysis model, estimates of the time and extent of lethality of an aqueous release can be made. Refinement of these estimates requires further investigation into the impact of potential catalysts on these chemicals. Enhanced understanding of equivalent acute percutaneous toxicity for solutions requires changes to current testing and estimation methods. (author)
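Hydrolysis of a dissolved compound is commonly approximated as a first-order reaction, which reduces persistence estimates to a single rate constant. The Python sketch below shows only that textbook kinetics; the example rate constant is invented for illustration and is not a measured value for any agent.

```python
import math

def residual_fraction(k_hydrolysis, t):
    """First-order hydrolysis: fraction of the compound remaining after time t,
    C(t)/C0 = exp(-k * t)."""
    return math.exp(-k_hydrolysis * t)

def half_life(k_hydrolysis):
    """Time for half of the compound to hydrolyze: t_1/2 = ln(2) / k."""
    return math.log(2.0) / k_hydrolysis

# An assumed rate constant of 1e-4 1/s gives a half-life of roughly 1.9 hours
t_half = half_life(1e-4)
```

In practice the effective rate constant depends strongly on pH, temperature and catalysts, which is why the study tracks the evolving pH of the local environment rather than using a fixed constant.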

  19. Research Capabilities Directed to all Electric Engineering Teachers, from an Alternative Energy Model

    Directory of Open Access Journals (Sweden)

    Víctor Hugo Ordóñez Navea

    2017-08-01

    The purpose of this work was to examine research competences directed at electrical engineering teachers, based on an alternative energy model used to explain semiconductors in the National Training Program in Electricity. Authors such as Vidal (2016), Atencio (2014) and Camilo (2012) point to technological applications of semiconductor electrical devices. Accordingly, a diagnostic phase is presented, conducted as descriptive field research, addressing: a) how to identify the needs for alternative energies, and b) the research competences in alternative energies of researchers working from a solar cell model, in order to boost and innovate academic praxis and technological ingenuity. A survey was applied to a group of 15 teachers in the National Training Program in Electricity to diagnose deficiencies in the area of alternative energy research. Data analysis was carried out through descriptive statistics. The conclusions present the need to generate strategies that stimulate the exploration of alternative energies, so as to develop the research competences of electrical engineering teachers and to boost technological research in the renewable energy field.

  20. The fast code

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, L.N.; Wilson, R.E. [Oregon State Univ., Dept. of Mechanical Engineering, Corvallis, OR (United States)

    1996-09-01

    The FAST Code which is capable of determining structural loads on a flexible, teetering, horizontal axis wind turbine is described and comparisons of calculated loads with test data are given at two wind speeds for the ESI-80. The FAST Code models a two-bladed HAWT with degrees of freedom for blade bending, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffnesses, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth averaged bin plots. It is concluded that agreement between the FAST Code and test results is good. (au)

  1. Fast, Statistical Model of Surface Roughness for Ion-Solid Interaction Simulations and Efficient Code Coupling

    Science.gov (United States)

    Drobny, Jon; Curreli, Davide; Ruzic, David; Lasa, Ane; Green, David; Canik, John; Younkin, Tim; Blondel, Sophie; Wirth, Brian

    2017-10-01

    Surface roughness greatly impacts material erosion, and thus plays an important role in Plasma-Surface Interactions. Developing strategies for efficiently introducing rough surfaces into ion-solid interaction codes will be an important step towards whole-device modeling of plasma devices and future fusion reactors such as ITER. Fractal TRIDYN (F-TRIDYN) is an upgraded version of the Monte Carlo, BCA program TRIDYN developed for this purpose that includes an explicit fractal model of surface roughness and extended input and output options for file-based code coupling. Code coupling with both plasma and material codes has been achieved and allows for multi-scale, whole-device modeling of plasma experiments. These code coupling results will be presented. F-TRIDYN has been further upgraded with an alternative, statistical model of surface roughness. The statistical model is significantly faster than and compares favorably to the fractal model. Additionally, the statistical model compares well to alternative computational surface roughness models and experiments. Theoretical links between the fractal and statistical models are made, and further connections to experimental measurements of surface roughness are explored. This work was supported by the PSI-SciDAC Project funded by the U.S. Department of Energy through contract DOE-DE-SC0008658.

  2. A predictive coding account of bistable perception - a model-based fMRI study.

    Science.gov (United States)

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. 
Taken together, our current work
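The mechanism described above can be caricatured in a few lines: residual evidence for the suppressed percept feeds a prediction-error signal that accumulates until perception switches. This deterministic toy (invented units, no noise, no fitted parameters) merely illustrates the switch logic, not the authors' fitted model.

```python
def simulate_bistable(n_steps=2000, gain=1.0, threshold=50.0):
    """Toy predictive-coding dynamics: each step adds a fixed prediction-error
    increment driven by residual evidence for the suppressed percept; when the
    accumulated error crosses the threshold, the dominant percept switches."""
    percept = 0
    error = 0.0
    switches = []        # time steps at which perception flips
    for t in range(n_steps):
        error += gain
        if error >= threshold:
            percept = 1 - percept
            error = 0.0
            switches.append(t)
    return switches

switches = simulate_bistable()   # regular flips every threshold/gain steps
```

Real dominance durations are stochastic with gamma-like distributions; adding noise to the per-step gain recovers that qualitative behaviour.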

  3. A predictive coding account of bistable perception - a model-based fMRI study.

    Directory of Open Access Journals (Sweden)

    Veith Weilnhammer

    2017-05-01

    Full Text Available In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. 
Taken together

  4. An Eulerian transport-dispersion model of passive effluents: the Difeul code

    International Nuclear Information System (INIS)

    Wendum, D.

    1994-11-01

    R and D has decided to develop an Eulerian diffusion model easy to adapt to meteorological data coming from different sources: for instance the ARPEGE code of Meteo-France or the MERCURE code of EDF. This requirement ensures that the code can be applied in independent cases: a posteriori studies of accidental releases from nuclear power plants at large or medium scale, and simulation of urban pollution episodes within the ''Reactive Atmospheric Flows'' research project. For simplicity, the numerical formulation of our code is the same as that used in Meteo-France's MEDIA model. The numerical tests presented in this report show the good performance of those schemes. To illustrate the method with a concrete example, a fictitious release from Saint-Laurent has been simulated at national scale; the results of this simulation agree quite well with those of the trajectory model DIFTRA. (author). 6 figs., 4 tabs
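For intuition about Eulerian transport of a passive effluent (this is a generic textbook scheme, not DIFEUL's actual numerics), a one-dimensional explicit upwind-advection plus central-diffusion sketch with assumed parameters:

```python
def step(c, u, D, dx, dt):
    """One explicit step of 1D upwind advection + central diffusion,
    with zero-gradient boundaries (u assumed >= 0)."""
    n = len(c)
    new = c[:]
    for i in range(n):
        left = c[i - 1] if i > 0 else c[i]
        right = c[i + 1] if i < n - 1 else c[i]
        adv = -u * (c[i] - left) / dx          # first-order upwind
        dif = D * (right - 2 * c[i] + left) / dx ** 2
        new[i] = c[i] + dt * (adv + dif)
    return new

# assumed illustrative parameters; explicit stability requires
# u*dt/dx <= 1 and 2*D*dt/dx**2 <= 1
c = [0.0] * 50
c[10] = 1.0                      # instantaneous point release
for _ in range(100):
    c = step(c, u=0.5, D=0.1, dx=1.0, dt=0.5)
```

After 100 steps the plume has advected about 25 cells downstream and spread by diffusion, while total mass stays essentially conserved until the plume reaches the outflow boundary.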

  5. UCODE, a computer code for universal inverse modeling

    Science.gov (United States)

    Poeter, Eileen P.; Hill, Mary C.

    1999-05-01

    This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced; simulated equivalent values are calculated from values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters also can be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences, and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values. 
UCODE is intended for use on any computer operating
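The regression machinery the abstract describes, a weighted least-squares objective minimized by Gauss-Newton with finite-difference sensitivities, can be sketched generically. This is not UCODE itself, just a minimal illustration of the same iteration on synthetic data:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (small systems)."""
    n = len(b)
    M = [row[:] + [v] for row, v in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for cc in range(col, n + 1):
                M[r][cc] -= f * M[col][cc]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][cc] * x[cc] for cc in range(r + 1, n))) / M[r][r]
    return x

def gauss_newton(model, p, obs, weights, tol=1e-8, max_iter=50, h=1e-6):
    """Gauss-Newton for weighted least squares: minimize
    sum_k w_k * (obs_k - model(p)_k)**2, with sensitivities
    approximated by forward differences."""
    n = len(p)
    for _ in range(max_iter):
        base = model(p)
        r = [o - m for o, m in zip(obs, base)]
        # J[j][k] = d model_k / d p_j, by forward differences
        J = []
        for j in range(n):
            pj = p[:]
            pj[j] += h
            J.append([(a - b) / h for a, b in zip(model(pj), base)])
        # normal equations: (J W J^T) dp = J W r
        A = [[sum(w * J[i][k] * J[j][k] for k, w in enumerate(weights))
              for j in range(n)] for i in range(n)]
        g = [sum(w * J[i][k] * rk for k, (w, rk) in enumerate(zip(weights, r)))
             for i in range(n)]
        dp = solve(A, g)
        p = [pi + d for pi, d in zip(p, dp)]
        if max(abs(d) for d in dp) < tol:
            break
    return p

# illustrative use on synthetic data: estimate p0, p1 in y = p0 + p1*x
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
obs = [2.0 + 3.0 * x for x in xs]
fit = gauss_newton(lambda q: [q[0] + q[1] * x for x in xs],
                   [0.0, 0.0], obs, [1.0] * len(xs))
```

In UCODE the `model` call is replaced by running the external application model and parsing its text output files, which is what makes the approach model-independent.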

  6. CERT Resilience Management Model Capability Appraisal Method (CAM) Version 1.1

    Science.gov (United States)

    2011-10-01

    used codes of practice such as the ISO27000 series, NIST special publications, ITIL, BS25999, or COBIT. These point-in-time reviews using codes of...CAP) − SANS Institute (GIAC, GSEC) − Disaster Recovery Institute (CBCP, MBCP) − Business Continuity Institute (CBCI, MBCI) − itSMF (ITIL) − PMI

  7. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    Science.gov (United States)

    Blyth, Taylor S.

    The research described in this PhD thesis contributes to the development of efficient methods for utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects, which are introduced by the presence of spacer grids in light water reactor (LWR) cores, are dissected into four corresponding basic building processes, and corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.
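One of the four informed models is the spacer grid pressure loss model. In subchannel practice such a loss is a local form loss, dp = K * rho * v**2 / 2, with the loss coefficient K supplied here by CFD-informed data files. The sketch below uses invented numbers and is not CTF code:

```python
def grid_pressure_loss(K, rho, v):
    """Local (form) pressure loss across a spacer grid, in Pa:
    dp = K * rho * v**2 / 2."""
    return 0.5 * K * rho * v ** 2

# invented illustrative values: loss coefficient K as a CFD-informed fit
# might supply it, with water density and axial velocity roughly typical
# of PWR subchannel conditions
dp = grid_pressure_loss(K=1.1, rho=720.0, v=5.0)
```

The point of the high-to-low methodology is precisely that K need not be a single constant: the CFD results can resolve its dependence on grid design and local flow conditions.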

  8. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    Energy Technology Data Exchange (ETDEWEB)

    Blyth, Taylor S. [Pennsylvania State Univ., University Park, PA (United States); Avramova, Maria [North Carolina State Univ., Raleigh, NC (United States)

    2017-04-01

    The research described in this PhD thesis contributes to the development of efficient methods for utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects, which are introduced by the presence of spacer grids in light water reactor (LWR) cores, are dissected into four corresponding basic building processes, and corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.

  9. Geochemical computer codes. A review

    International Nuclear Information System (INIS)

    Andersson, K.

    1987-01-01

    In this report a review of available codes is performed and some code intercomparisons are also discussed. The number of codes treating natural waters (groundwater, lake water, sea water) is large. Most geochemical computer codes treat equilibrium conditions, although some codes with kinetic capability are available. A geochemical equilibrium model consists of a computer code, solving a set of equations by some numerical method, and a data base of the thermodynamic data required for the calculations. Some codes treat coupled geochemical and transport modeling; of these, some solve the equilibrium and transport equations simultaneously while others solve the equations separately. The coupled codes require a large computer capacity and have thus far seen limited use. Three code intercomparisons have been found in the literature. It may be concluded that many codes are available for geochemical calculations, but most of them require a user who is quite familiar with the code. The user also has to know the geochemical system in order to judge the reliability of the results. A high quality data base is necessary to obtain a reliable result. The best results may be expected for the major species of natural waters. For more complicated problems, including trace elements, precipitation/dissolution, adsorption, etc., the results seem to be less reliable. (With 44 refs.) (author)

  10. Stimulus-dependent maximum entropy models of neural population codes.

    Directory of Open Access Journals (Sweden)

    Einat Granot-Atedgi

    Full Text Available Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.
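A pairwise maximum entropy model of this kind assigns P(sigma) proportional to exp(sum_i h_i*sigma_i + sum_{i<j} J_ij*sigma_i*sigma_j), with the stimulus dependence entering through the fields h_i(s). For a toy population small enough to enumerate exactly (all parameters invented, far from the 100-cell fits in the paper):

```python
import itertools
import math

def sdme_probs(h, J):
    """Exact codeword probabilities for a pairwise maximum entropy model:
    P(sigma) ~ exp(sum_i h[i]*s_i + sum_{i<j} J[i][j]*s_i*s_j),
    computed by enumerating all binary patterns."""
    n = len(h)
    patterns = list(itertools.product([0, 1], repeat=n))
    energies = []
    for s in patterns:
        e = sum(h[i] * s[i] for i in range(n))
        e += sum(J[i][j] * s[i] * s[j]
                 for i in range(n) for j in range(i + 1, n))
        energies.append(e)
    z = sum(math.exp(e) for e in energies)      # partition function
    return {p: math.exp(e) / z for p, e in zip(patterns, energies)}

# invented fields and couplings; in the SDME the h_i would be functions
# of the stimulus (e.g. a linear filter output), refit per stimulus
probs = sdme_probs(h=[0.5, -0.2, 0.1],
                   J=[[0, 0.3, 0], [0, 0, -0.4], [0, 0, 0]])
```

Enumeration scales as 2^n, which is exactly why the paper's 100-cell setting needs model fitting rather than direct sampling.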

  11. User's guide for waste tank corrosion data model code

    International Nuclear Information System (INIS)

    Mackey, D.B.; Divine, J.R.

    1986-12-01

    Corrosion tests were conducted on A-516 and A-537 carbon steel in simulated Double Shell Slurry, Future PUREX, and Hanford Facilities wastes. The corrosion rate data, gathered between 25 and 180 °C, were statistically ''modeled'' for each waste; a fourth model was developed that utilized the combined data. The report briefly describes the modeling procedure and details on how to access information through a computerized data system. Copies of the report and operating information may be obtained from the author (DB Mackey) at 509-376-9844 or FTS 444-9844

  12. Underwater Shock-Induced Responses of Stiffened Flat Plates: An Investigation Into the Predictive Capabilities of the USA-STAGS Code.

    Science.gov (United States)

    1984-12-01

    significant. C. BUBBLE EFFECT The foregoing discussion of the development of the pressure wave and its impact upon the target assumes that a single...MAR Source MACRO program for STAGS utility library. 5 STAGS1 FOR Source FORTRAN program for STAGS1 library 6 MOUNTEL FOR Source FORTRAN programs for...test platform dimensions makes it impossible to use any computer code in its preshot capacity and impacts negatively upon the reproducibility of test

  13. The influence of ligament modelling strategies on the predictive capability of finite element models of the human knee joint.

    Science.gov (United States)

    Naghibi Beidokhti, Hamid; Janssen, Dennis; van de Groes, Sebastiaan; Hazrati, Javad; Van den Boogaard, Ton; Verdonschot, Nico

    2017-12-08

    In finite element (FE) models knee ligaments can be represented either by a group of one-dimensional springs, or by three-dimensional continuum elements based on segmentations. Continuum models more closely approximate the anatomy and facilitate ligament wrapping, while spring models are computationally less expensive. The mechanical properties of ligaments can be based on literature, or adjusted specifically for the subject. In the current study we investigated the effect of ligament modelling strategy on the predictive capability of FE models of the human knee joint. The effect of literature-based versus specimen-specific optimized material parameters was evaluated. Experiments were performed on three human cadaver knees, which were modelled in FE models with ligaments represented either using springs or using continuum representations. In the spring representation, collateral ligaments were each modelled with three single-element bundles and cruciate ligaments with two. Stiffness parameters and pre-strains were optimized based on laxity tests for both approaches. Validation experiments were conducted to evaluate the outcomes of the FE models. Models (both spring and continuum) with subject-specific properties improved the predicted kinematics and contact outcome parameters. Models incorporating literature-based parameters, and particularly the spring models (with the representations implemented in this study), led to relatively high errors in kinematics and contact pressures. Using a continuum modelling approach resulted in more accurate contact outcome variables than the spring representation with two (cruciate ligaments) and three (collateral ligaments) single-element bundles. However, when the prediction of joint kinematics is of main interest, spring ligament models provide a faster option with acceptable outcome. Copyright © 2017 Elsevier Ltd. All rights reserved.
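To make the spring representation concrete: a single-element ligament bundle is commonly a tension-only spring whose pre-strain sets the load carried at the reference pose. A generic sketch with assumed stiffness and pre-strain (not the paper's calibrated, subject-specific values):

```python
def ligament_force(k, strain, prestrain):
    """Tension-only single-element bundle: force is k times the total strain
    (current strain + pre-strain) when taut, and zero when slack."""
    total = strain + prestrain
    return k * total if total > 0 else 0.0

# assumed illustrative values
f_taut = ligament_force(k=100.0, strain=0.03, prestrain=0.02)    # taut bundle
f_slack = ligament_force(k=100.0, strain=-0.03, prestrain=0.02)  # slack bundle
```

Real ligament springs are usually nonlinear, with a compliant toe region before the linear range; the linear form is kept here only for brevity. Optimizing k and the pre-strain against laxity tests is what the abstract calls specimen-specific calibration.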

  14. The aeroelastic code HawC - model and comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Thirstrup Petersen, J. [Risoe National Lab., The Test Station for Wind Turbines, Roskilde (Denmark)

    1996-09-01

    A general aeroelastic finite element model for simulation of the dynamic response of horizontal axis wind turbines is presented. The model has been developed with the aim to establish an effective research tool, which can support the general investigation of wind turbine dynamics and research in specific areas of wind turbine modelling. The model concentrates on the correct representation of the inertia forces in a form, which makes it possible to recognize and isolate effects originating from specific degrees of freedom. The turbine structure is divided into substructures, and nonlinear kinematic terms are retained in the equations of motion. Moderate geometric nonlinearities are allowed for. Gravity and a full wind field including 3-dimensional 3-component turbulence are included in the loading. Simulation results for a typical three bladed, stall regulated wind turbine are presented and compared with measurements. (au)

  15. Uncertainty modelling and code calibration for composite materials

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Branner, Kim; Mishnaevsky, Leon, Jr

    2013-01-01

    Uncertainties related to the material properties of a composite material can be determined from the micro-, meso- or macro-scales. These three starting points for a stochastic modelling of the material properties are investigated. The uncertainties are divided into physical, model, statistical...... between risk of failure and cost of the structure. Considerations related to calibration of partial safety factors for composite materials are described, including the probability of failure, the format for the partial safety factor method and weight factors for different load cases. In a numerical example......, it is demonstrated how probabilistic models for the material properties formulated on micro-scale can be calibrated using tests on the meso- and macro-scales. The results are compared to probabilistic models estimated directly from tests on the macro-scale. In another example, partial safety factors for application...
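Partial safety factor calibration is anchored to a target probability of failure. As a hedged reminder of the standard link used in such calibrations, pf = Phi(-beta) relates the failure probability to the reliability index beta (the target value below is illustrative, not taken from this paper):

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def failure_probability(beta):
    """Probability of failure for reliability index beta: pf = Phi(-beta)."""
    return normal_cdf(-beta)

pf = failure_probability(3.3)   # illustrative annual reliability target
```

Calibration then searches for the partial safety factors that bring the design equation's implied beta as close as possible to the target across the weighted load cases.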

  16. Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach

    International Nuclear Information System (INIS)

    2014-12-01

    In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise:
    · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population,
    · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included,
    · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps,
    · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details,
    · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and
    · an overview of the

  17. An implicit Navier-Stokes code for turbulent flow modeling

    Science.gov (United States)

    Huang, P. G.; Coakley, T. J.

    1992-01-01

    This paper presents a numerical approach to calculating turbulent flows employing advanced turbulence models. The main features include a line-by-line Gauss-Seidel algorithm using Roe's approximate Riemann solver, TVD numerical schemes, implicit boundary conditions and a decoupled turbulence-model solver. Based on the problems tested so far, the method has consistently demonstrated its ability in offering accuracy, boundedness and a fast rate of convergence to steady-state solution.
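A line-by-line Gauss-Seidel scheme implicitly couples the unknowns along each grid line, which reduces the update to solving one tridiagonal system per line. As a generic illustration of that building block (not the paper's Roe/TVD solver), a Thomas-algorithm sketch:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system A x = d, where a is the sub-diagonal
    (a[0] unused), b the main diagonal and c the super-diagonal
    (c[-1] unused), via the Thomas algorithm: forward elimination
    followed by back substitution."""
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# one "line" of a line-by-line sweep: a small tridiagonal solve
x = thomas(a=[0.0, 1.0, 1.0], b=[2.0, 2.0, 2.0],
           c=[1.0, 1.0, 0.0], d=[4.0, 8.0, 8.0])
```

Sweeping such solves line by line across the grid, with the latest neighbouring values on the right-hand side, is what gives the method its fast convergence to steady state.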

  18. Improved gap conductance model for the TRAC code

    International Nuclear Information System (INIS)

    Hatch, S.W.; Mandell, D.A.

    1980-01-01

    The purpose of the present work, as indicated earlier, is to improve the present constant fuel clad spacing in TRAC-P1A without significantly increasing the computer costs. It is realized that the simple model proposed may not be accurate enough for some cases, but for the initial calculations made the DELTAR model improves the predictions over the constant Δr results of TRAC-P1A and the additional computing costs are negligible
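A varying fuel-clad gap matters because gap conductance is inversely proportional to the gap width. As a hedged, textbook-style illustration (conduction across the fill gas only, contact and radiation terms neglected; this is not the TRAC DELTAR model and the property values are assumed):

```python
def gap_conductance(k_gas, gap):
    """Conduction-only fuel-clad gap conductance, W/m^2-K: h = k_gas / gap."""
    return k_gas / gap

# assumed values: helium fill gas conductivity ~0.3 W/m-K near operating
# temperature, with two illustrative gap widths in metres
h_open = gap_conductance(k_gas=0.3, gap=50e-6)
h_closed = gap_conductance(k_gas=0.3, gap=10e-6)
```

Shrinking the gap from 50 to 10 micrometres raises the conductance fivefold, which is why letting the gap width respond to thermal expansion improves on a constant-spacing assumption.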

  19. Code modernization and modularization of APEX and SWAT watershed simulation models

    Science.gov (United States)

    SWAT (Soil and Water Assessment Tool) and APEX (Agricultural Policy / Environmental eXtender) are respectively large and small watershed simulation models derived from EPIC (Environmental Policy Integrated Climate), a field-scale agroecology simulation model. All three models are coded in FORTRAN an...

  20. Enhancing Interoperability and Capabilities of Earth Science Data using the Observations Data Model 2 (ODM2)

    Directory of Open Access Journals (Sweden)

    Leslie Hsu

    2017-02-01

    Full Text Available Earth Science researchers require access to integrated, cross-disciplinary data in order to answer critical research questions. Partially due to these science drivers, it is common for disciplinary data systems to expand from their original scope in order to accommodate collaborative research. The result is multiple disparate databases with overlapping but incompatible data. In order to enable more complete data integration and analysis, the Observations Data Model Version 2 (ODM2) was developed to be a general information model, with one of its major goals being to integrate data collected by 'in situ' sensors with those from 'ex situ' analyses of field specimens. Four use cases with different science drivers and disciplines have adopted ODM2 because of benefits to their users. The disciplines behind the four cases are diverse – hydrology, rock geochemistry, soil geochemistry, and biogeochemistry. For each case, we outline the benefits, challenges, and rationale for adopting ODM2. In each case, the decision to implement ODM2 was made to increase interoperability and expand data and metadata capabilities. One of the common benefits was the ability to use the flexible handling and comprehensive description of specimens and data collection sites in ODM2's sampling feature concept. We also summarize best practices for implementing ODM2 based on the experience of these initial adopters. The descriptions here should help other potential adopters of ODM2 implement their own instances or to modify ODM2 to suit their needs.

  1. Capability of Spaceborne Hyperspectral EnMAP Mission for Mapping Fractional Cover for Soil Erosion Modeling

    Directory of Open Access Journals (Sweden)

    Sarah Malec

    2015-09-01

    Full Text Available Soil erosion can be linked to the relative fractional cover of photosynthetic-active vegetation (PV), non-photosynthetic-active vegetation (NPV) and bare soil (BS), which can be integrated into erosion models as the cover-management C-factor. This study investigates the capability of EnMAP imagery to map fractional cover in a region near San Jose, Costa Rica, characterized by spatially extensive coffee plantations and grazing in a mountainous terrain. Simulated EnMAP imagery is based on airborne hyperspectral HyMap data. Fractional cover estimates are derived in an automated fashion by extracting image endmembers to be used with a Multiple Endmember Spectral Mixture Analysis approach. The C-factor is calculated based on the fractional cover estimates determined independently for EnMAP and HyMap. Results demonstrate that with EnMAP imagery it is possible to extract quality endmember classes with important spectral features related to PV, NPV and soil, and to estimate relative cover fractions. This spectral information is critical to separate BS and NPV, which can greatly impact the C-factor derivation. From a regional perspective, we can use EnMAP to provide good fractional cover estimates that can be integrated into soil erosion modeling.
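Fractional cover estimation rests on the linear mixing assumption: each pixel spectrum is a fraction-weighted sum of endmember spectra. A minimal unmixing sketch with invented reflectances (three bands, three endmembers; the per-pixel endmember selection and constraints that MESMA adds are omitted):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems)."""
    n = len(b)
    M = [row[:] + [v] for row, v in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for cc in range(col, n + 1):
                M[r][cc] -= f * M[col][cc]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][cc] * x[cc] for cc in range(r + 1, n))) / M[r][r]
    return x

# assumed endmember reflectances in three bands (rows: PV, NPV, BS)
E = [[0.05, 0.45, 0.30],
     [0.20, 0.30, 0.40],
     [0.25, 0.30, 0.35]]
# synthetic mixed pixel: 0.5*PV + 0.2*NPV + 0.3*BS, band by band
pixel = [0.5 * E[0][b] + 0.2 * E[1][b] + 0.3 * E[2][b] for b in range(3)]
# unmix: solve E^T f = pixel for the cover fractions f
A = [[E[j][b] for j in range(3)] for b in range(3)]
fractions = solve(A, pixel)
```

The recovered fractions are what feed the C-factor; real imagery additionally needs non-negativity and sum-to-one constraints and per-pixel endmember selection, which is exactly what the MESMA approach provides.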

  2. Meeting Capability Goals through Effective Modelling and Experimentation of C4ISTAR Options

    Science.gov (United States)

    2011-06-01

    connection, management and visualisation capability provided by Salamander's MooD® software [13]. MooD has been chosen for this central role as it offers...suggested the use of capability 'bullseye charts' as the visualisation tool, using different colours to indicate the level of capability available at...based information environment (using Salamander's MooD technology), containing all of the visualisations delivered by the project and the linkages

  3. PWR hot leg natural circulation modeling with MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jae Hong; Lee, Jong In [Korea Institute of Nuclear Safety, Taejon (Korea, Republic of)

    1997-12-31

    Previous MELCOR and SCDAP/RELAP5 nodalizations for simulating the counter-current natural circulation behavior of vapor flow within the RCS hot legs and SG U-tubes as core damage progresses cannot be applied to the steady-state, water-filled conditions during the initial period of accident progression, because of the artificially high loss coefficients in the hot legs and SG U-tubes, which were chosen from the results of COMMIX calculations and the Westinghouse natural circulation experiments in a 1/7-scale facility simulating steam natural circulation behavior in the vessel. A circulation modeling approach was therefore developed which can be used both for the liquid flow condition at steady state and for the vapor flow condition in the later period of in-vessel core damage. For this, the drag forces resulting from the momentum exchange effects between the two vapor streams in the hot leg were modeled as a pressure drop using a pump model. This hot leg natural circulation modeling in MELCOR was able to reproduce mass flow rates similar to those predicted by previous models. 6 refs., 2 figs. (Author)

  4. The improvement of the heat transfer model for sodium-water reaction jet code

    International Nuclear Information System (INIS)

    Hashiguchi, Yoshirou; Yamamoto, Hajime; Kamoshida, Norio; Murata, Shuuichi

    2001-02-01

    For confirming a reasonable DBL (Design Base Leak) for the steam generator (SG), it is necessary to evaluate the phenomena of sodium-water reaction (SWR) in an actual steam generator realistically. The heat transfer model of the sodium-water reaction jet code (LEAP-JET ver.1.40) was improved, and the code was applied to water injection tests to confirm its validity. In the improved code, a heat transfer model between the inside fluid and the tube wall was introduced in place of the prior model, which was a heat capacity model lumping together the heat capacities of the tube wall and the inside fluid. The fluid inside the heat exchange tube can now be treated as either water or sodium, and typical heat transfer correlations used in SG design were also introduced in the new model. Additional work was carried out to improve the stability of the calculation over long calculation times. Test calculations using the improved code (LEAP-JET ver.1.50) were carried out under the conditions of the SWAT-IR·Run-HT-2 test. It was confirmed that the calculated SWR jet behavior and the influence of the heat transfer model on the results were reasonable. For the improved code (LEAP-JET ver.1.50), the user's manual was also revised with an additional I/O manual and explanations of the heat transfer model and new variable names. (author)

  5. Capability Paternalism

    NARCIS (Netherlands)

    Claassen, R.J.G.

    A capability approach prescribes paternalist government actions to the extent that it requires the promotion of specific functionings, instead of the corresponding capabilities. Capability theorists have argued that their theories do not have much of these paternalist implications, since promoting

  6. Recommendations for computer modeling codes to support the UMTRA groundwater restoration project

    International Nuclear Information System (INIS)

    Tucker, M.D.; Khan, M.A.

    1996-04-01

    The Uranium Mill Tailings Remedial Action (UMTRA) Project is responsible for the assessment and remedial action at the 24 former uranium mill tailings sites located in the US. The surface restoration phase, which includes containment and stabilization of the abandoned uranium mill tailings piles, has a specific termination date and is nearing completion. Therefore, attention has now turned to the groundwater restoration phase, which began in 1991. Regulated constituents in groundwater whose concentrations or activities exceed maximum contaminant levels (MCLs) or background levels at one or more sites include, but are not limited to, uranium, selenium, arsenic, molybdenum, nitrate, gross alpha, radium-226 and radium-228. The purpose of this report is to recommend computer codes that can be used to assist the UMTRA groundwater restoration effort. The report includes a survey of applicable codes in each of the following areas: (1) groundwater flow and contaminant transport modeling codes, (2) hydrogeochemical modeling codes, (3) pump and treat optimization codes, and (4) decision support tools. Following the survey of the applicable codes, specific codes that can best meet the needs of the UMTRA groundwater restoration program in each of the four areas are recommended

  7. Recommendations for computer modeling codes to support the UMTRA groundwater restoration project

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, M.D. [Sandia National Labs., Albuquerque, NM (United States)]; Khan, M.A. [IT Corp., Albuquerque, NM (United States)]

    1996-04-01

    The Uranium Mill Tailings Remedial Action (UMTRA) Project is responsible for the assessment and remedial action at the 24 former uranium mill tailings sites located in the US. The surface restoration phase, which includes containment and stabilization of the abandoned uranium mill tailings piles, has a specific termination date and is nearing completion. Therefore, attention has now turned to the groundwater restoration phase, which began in 1991. Regulated constituents in groundwater whose concentrations or activities exceed maximum contaminant levels (MCLs) or background levels at one or more sites include, but are not limited to, uranium, selenium, arsenic, molybdenum, nitrate, gross alpha, radium-226 and radium-228. The purpose of this report is to recommend computer codes that can be used to assist the UMTRA groundwater restoration effort. The report includes a survey of applicable codes in each of the following areas: (1) groundwater flow and contaminant transport modeling codes, (2) hydrogeochemical modeling codes, (3) pump and treat optimization codes, and (4) decision support tools. Following the survey of the applicable codes, specific codes that can best meet the needs of the UMTRA groundwater restoration program in each of the four areas are recommended.

  8. Development of the next generation code system as an engineering modeling language. (2). Study with prototyping

    International Nuclear Information System (INIS)

    Yokoyama, Kenji; Uto, Nariaki; Kasahara, Naoto; Ishikawa, Makoto

    2003-04-01

    In fast reactor development, numerical simulation using analysis codes plays an important role in complementing theory and experiment. The engineering models and analysis methods must be flexibly changeable, because the phenomena to be investigated become more complicated owing to the diversity of research needs, and there are large problems in combining physical properties and engineering models from many different fields. Aiming at the realization of a next generation code system which can solve those problems, the authors adopted three methods to build a prototype: (1) a multi-language approach (SoftWIRE.NET, Visual Basic.NET and Fortran), (2) Fortran 90, and (3) Python. As a result, the following points were confirmed. (1) It is possible to reuse a function of the existing codes written in Fortran as an object of the next generation code system by using Visual Basic.NET. (2) The maintainability of existing code written in Fortran 77 can be improved by using the new features of Fortran 90. (3) A toolbox-type code system can be built by using Python. (author)
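The third finding, a toolbox-type code system in Python, can be illustrated with a minimal sketch; the registry mechanism and the simple reactor-physics "tool" below are hypothetical, not taken from the report:

```python
# Minimal sketch of a "toolbox-type" code system in Python: analysis
# modules register themselves under a name and can then be combined
# flexibly. The example tool is a hypothetical illustration only.

class Toolbox:
    def __init__(self):
        self._tools = {}

    def register(self, name):
        # decorator that adds a function to the toolbox under `name`
        def wrapper(func):
            self._tools[name] = func
            return func
        return wrapper

    def run(self, name, *args, **kwargs):
        return self._tools[name](*args, **kwargs)

toolbox = Toolbox()

@toolbox.register("doppler_feedback")
def doppler_feedback(temp_k, coeff=-2.5e-5):
    # hypothetical engineering model: reactivity change vs. temperature
    return coeff * (temp_k - 300.0)

@toolbox.register("total_reactivity")
def total_reactivity(temp_k):
    # tools can call each other through the toolbox, so a model
    # can be swapped by re-registering a different function
    return toolbox.run("doppler_feedback", temp_k)

print(toolbox.run("total_reactivity", 900.0))
```

Because tools are looked up by name at call time, an engineering model can be replaced without touching the code that uses it, which is the flexibility the prototype study is after.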

  9. Numerical and modeling techniques used in the EPIC code

    International Nuclear Information System (INIS)

    Pizzica, P.A.; Abramson, P.B.

    1977-01-01

    EPIC models the fuel and coolant motion which results from internal fuel pin pressure (from fission gas or fuel vapor) and/or from the generation of sodium vapor pressure in the coolant channel subsequent to pin failure in an LMFBR. The modeling includes the ejection of molten fuel from the pin into a coolant channel with any amount of voiding, through a clad rip which may be of any length or which may expand with time. One-dimensional Eulerian hydrodynamics is used to model both the motion of fuel and fission gas inside a molten fuel cavity and the mixture of two-phase sodium and fission gas in the channel. The motion of molten fuel particles in the coolant channel is tracked with a particle-in-cell technique
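As a hedged illustration of the particle-in-cell idea mentioned above (not EPIC's actual implementation), the sketch below pushes marker particles through a fixed one-dimensional Eulerian velocity field; the mesh, velocities and positions are invented for the example:

```python
# Illustrative particle-in-cell-style tracking in one dimension:
# particles carry position, the velocity lives on an Eulerian mesh,
# and each step gathers the cell velocity to the particle position.

NX, LENGTH = 10, 1.0
DX = LENGTH / NX
u_cells = [2.0] * NX            # cell-centred channel velocity [m/s]
particles = [0.05, 0.15, 0.25]  # marker particle positions [m]
DT = 0.01                       # time step [s]

def advect(positions, u, dx, dt):
    # gather step: look up the Eulerian velocity for each particle
    # (nearest-cell interpolation here), then push the particle forward
    new_positions = []
    for x in positions:
        cell = min(int(x / dx), len(u) - 1)
        new_positions.append(x + u[cell] * dt)
    return new_positions

for _ in range(10):
    particles = advect(particles, u_cells, DX, DT)

print(particles)   # each particle has moved 2.0 m/s * 0.1 s = 0.2 m
```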

  10. Demonstrations and verification of debris bed models in the MEDICI reactor cavity code

    International Nuclear Information System (INIS)

    Trebilcock, W.R.; Bergeron, K.D.; Gorham-Bergeron, E.D.

    1984-01-01

    The MEDICI reactor cavity model is under development at Sandia National Laboratories to provide a simple but realistic treatment of ex-vessel severe accident phenomena. Several demonstration cases have been run and are discussed as illustrations of the model's capabilities. Verification of the model with experiments has supplied confidence in the model

  11. Defect Detection: Combining Bounded Model Checking and Code Contracts

    Directory of Open Access Journals (Sweden)

    Marat Akhin

    2013-01-01

    Full Text Available Bounded model checking (BMC) of C/C++ programs is a topic of scientific enquiry that has attracted great attention in the last few years. In this paper, we present our approach to this problem. It is based on combining several recent results in BMC, namely, the use of LLVM as a baseline for model generation, the employment of the high-performance Z3 SMT solver to do the formula heavy-lifting, and the use of various function summaries to improve analysis efficiency and expressive power. We have implemented a basic prototype; experimental results on a set of simple BMC test problems are satisfactory.
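The essence of bounded model checking, unrolling a program to a finite depth and searching that bounded space for an assertion violation, can be shown with a toy example; here a brute-force search over small inputs stands in for the authors' LLVM/Z3 pipeline, and the checked program is invented:

```python
# Toy bounded model checking: unroll a small transition system for at
# most k steps and search for an input that violates an assertion.
# Real BMC encodes the unrolled program as an SMT formula instead.

def program(x, k):
    # toy transition system: y starts at 0 and becomes 2*y + 1 each step;
    # the assertion under check is "y never equals x"
    y = 0
    for _ in range(k):
        y = 2 * y + 1
        if y == x:
            return False   # assertion violated within the bound
    return True

def bounded_check(k, input_range):
    # return a counterexample input if one exists within bound k
    for x in input_range:
        if not program(x, k):
            return x
    return None

print(bounded_check(3, range(10)))   # 1, since y takes the values 1, 3, 7
```

An SMT-based checker does the same search symbolically, handing the unrolled formula to a solver such as Z3 instead of enumerating inputs.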

  12. A Secure Network Coding-based Data Gathering Model and Its Protocol in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Qian Xiao

    2012-09-01

    Full Text Available To provide security for data gathering based on network coding in wireless sensor networks (WSNs), a secure network coding-based data gathering model is proposed, and a data-privacy preserving and pollution preventing (DPP&PP) protocol using network coding is designed. DPP&PP makes use of a newly proposed pollution symbol selection and pollution (PSSP) scheme, based on a new obfuscation idea, to pollute existing symbols. Analyses of DPP&PP show that it not only requires low overhead in computation and communication, but also provides high security against brute-force attacks.
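The network-coding primitive underlying such schemes is linear mixing of packets. A minimal sketch with XOR combining at a relay node (the packets and topology are illustrative only, not part of the DPP&PP protocol):

```python
# Minimal network-coding primitive: a relay transmits the XOR of two
# packets; any sink that already holds one packet recovers the other
# from the coded combination. This halves the relay's transmissions.

def xor_bytes(a, b):
    # bytewise XOR of two equal-length packets
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"sensor-A"           # packet from node A (8 bytes)
p2 = b"sensor-B"           # packet from node B (8 bytes)
coded = xor_bytes(p1, p2)  # the relay broadcasts this single packet

# a sink that overheard p1 decodes p2 from the coded packet:
recovered = xor_bytes(coded, p1)
print(recovered)           # b'sensor-B'
```

Pollution attacks target exactly this mixing step: one corrupted symbol contaminates every combination it enters, which is why schemes like PSSP focus on controlling which symbols get polluted.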

  13. Comparison for the interfacial and wall friction models in thermal-hydraulic system analysis codes

    International Nuclear Information System (INIS)

    Hwang, Moon Kyu; Park, Jee Won; Chung, Bub Dong; Kim, Soo Hyung; Kim, See Dal

    2007-07-01

    The averaged equations employed in current thermal-hydraulic analysis codes need to be closed with appropriate models and correlations to specify the interphase phenomena along with fluid/structure interactions, including both thermal and mechanical interactions. Among the closure laws, the interfacial and wall frictions, which are included in the momentum equations, not only affect pressure drops along the fluid flow but also have great effects on the numerical stability of the codes. In this study, the interfacial and wall friction models are reviewed for the commonly applied thermal-hydraulic system analysis codes, i.e. RELAP5-3D, MARS-3D, TRAC-M, and CATHARE
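As a generic illustration of a wall-friction closure (not the specific correlation of any of the codes named above), a smooth-tube Blasius friction factor feeding a Darcy-Weisbach pressure drop might look like this; the fluid properties in the example call are hypothetical:

```python
import math

# Generic single-phase wall-friction closure: Blasius smooth-tube
# friction factor plus Darcy-Weisbach pressure drop. System codes use
# more elaborate flow-regime-dependent correlations of the same shape.

def blasius_friction_factor(reynolds):
    # Darcy friction factor for turbulent smooth-tube flow (Blasius)
    return 0.316 * reynolds ** -0.25

def wall_pressure_drop(rho, v, d_hyd, length, reynolds):
    # Darcy-Weisbach: dp = f * (L/D) * rho * v^2 / 2  [Pa]
    f = blasius_friction_factor(reynolds)
    return f * (length / d_hyd) * 0.5 * rho * v ** 2

# hypothetical channel: 1 m of 10 mm hydraulic diameter, Re = 1e5
dp = wall_pressure_drop(rho=750.0, v=5.0, d_hyd=0.01, length=1.0,
                        reynolds=1.0e5)
print(round(dp, 1))
```

The closure enters the discretized momentum equation as a flow-dependent source term, which is why a poorly behaved friction model can destabilize the whole numerical scheme.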

  14. ABAREX -- A neutron spherical optical-statistical-model code -- A user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Smith, A.B. [ed.]; Lawson, R.D.

    1998-06-01

    The contemporary version of the neutron spherical optical-statistical-model code ABAREX is summarized with the objective of providing detailed operational guidance for the user. The physical concepts involved are very briefly outlined. The code is described in some detail and a number of explicit examples are given. With this document one should very quickly become fluent with the use of ABAREX. While the code has operated on a number of computing systems, this version is specifically tailored for the VAX/VMS work station and/or the IBM-compatible personal computer.

  15. Nuclear model codes and related software distributed by the OECD/NEA Data Bank

    International Nuclear Information System (INIS)

    Sartori, E.

    1993-01-01

    Software and data for nuclear energy applications is acquired, tested and distributed by several information centres; in particular, relevant computer codes are distributed internationally by the OECD/NEA Data Bank (France) and by ESTSC and EPIC/RSIC (United States). This activity is coordinated among the centres and is extended outside the OECD area through an arrangement with the IAEA. This article covers more specifically the availability of nuclear model codes and also those codes which further process their results into data sets needed for specific nuclear application projects. (author). 2 figs

  16. Code package "SVECHA": Modeling of core degradation phenomena at severe accidents

    Energy Technology Data Exchange (ETDEWEB)

    Veshchunov, M.S.; Kisselev, A.E.; Palagin, A.V. [Nuclear Safety Institute, Moscow (Russian Federation)] [and others]

    1995-09-01

    The code package SVECHA for the modeling of in-vessel core degradation (CD) phenomena in severe accidents is being developed in the Nuclear Safety Institute, Russian Academy of Sciences (NSI RAS). The code package presents a detailed mechanistic description of the phenomenology of severe accidents in a reactor core. The modules of the package were developed and validated on separate effect test data. These modules were then successfully implemented in the ICARE2 code and validated against a wide range of integral tests. Validation results have shown good agreement with separate effect test data and with the integral tests CORA-W1/W2, CORA-13, PHEBUS-B9+.

  17. Savannah River Laboratory DOSTOMAN code: a compartmental pathways computer model of contaminant transport

    International Nuclear Information System (INIS)

    King, C.M.; Wilhite, E.L.; Root, R.W. Jr.

    1985-01-01

    The Savannah River Laboratory DOSTOMAN code has been used since 1978 for environmental pathway analysis of potential migration of radionuclides and hazardous chemicals. The DOSTOMAN work is reviewed including a summary of historical use of compartmental models, the mathematical basis for the DOSTOMAN code, examples of exact analytical solutions for simple matrices, methods for numerical solution of complex matrices, and mathematical validation/calibration of the SRL code. The review includes the methodology for application to nuclear and hazardous chemical waste disposal, examples of use of the model in contaminant transport and pathway analysis, a user's guide for computer implementation, peer review of the code, and use of DOSTOMAN at other Department of Energy sites. 22 refs., 3 figs
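The compartmental-pathways idea behind such codes can be sketched with a two-compartment first-order system, comparing the exact analytical solution against a simple numerical one; the rate constants below are hypothetical, and DOSTOMAN's actual compartment matrices are far larger:

```python
import math

# Two-compartment contaminant transport with first-order transfer:
#   dA/dt = -k1*A          (e.g. waste form -> soil)
#   dB/dt =  k1*A - k2*B   (soil, with removal at rate k2)
# Solved exactly and by explicit Euler steps, mirroring the "exact
# analytical solutions for simple matrices" vs. "numerical solution
# of complex matrices" distinction in the review.

k1, k2 = 0.5, 0.2   # hypothetical transfer/removal constants [1/yr]
a0 = 100.0          # initial inventory in compartment 1

def analytic(t):
    # exact solution (Bateman-type) with B(0) = 0
    a = a0 * math.exp(-k1 * t)
    b = a0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    return a, b

def euler(t, n=100000):
    # simple explicit time stepping, as one would do for a large matrix
    dt = t / n
    a, b = a0, 0.0
    for _ in range(n):
        a, b = a + dt * (-k1 * a), b + dt * (k1 * a - k2 * b)
    return a, b

print(analytic(2.0))
print(euler(2.0))   # should agree closely with the exact values
```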

  18. Radiation transport phenomena and modeling. Part A: Codes; Part B: Applications with examples

    Energy Technology Data Exchange (ETDEWEB)

    Lorence, L.J. Jr.; Beutler, D.E. [Sandia National Labs., Albuquerque, NM (United States). Simulation Technology Research Dept.

    1997-09-01

    This report contains the notes from the second session of the 1997 IEEE Nuclear and Space Radiation Effects Conference Short Course on Applying Computer Simulation Tools to Radiation Effects Problems. Part A discusses the physical phenomena modeled in radiation transport codes and various types of algorithmic implementations. Part B gives examples of how these codes can be used to design experiments whose results can be easily analyzed and describes how to calculate quantities of interest for electronic devices.

  19. Domain-specific modeling enabling full code generation

    CERN Document Server

    Kelly, Steven

    2007-01-01

    Domain-Specific Modeling (DSM) is the latest approach to software development, promising to greatly increase the speed and ease of software creation. Early adopters of DSM have been enjoying productivity increases of 500–1000% in production for over a decade. This book introduces DSM and offers examples from various fields to illustrate to experienced developers how DSM can improve software development in their teams. Two authorities in the field explain what DSM is, why it works, and how to successfully create and use a DSM solution to improve productivity and quality. Divided into four parts, the book covers: background and motivation; fundamentals; in-depth examples; and creating DSM solutions. There is an emphasis throughout the book on practical guidelines for implementing DSM, including how to identify the necessary language constructs, how to generate full code from models, and how to provide tool support for a new DSM language. The example cases described in the book are available on the book's Website, www.dsmbook...

  20. Atmospheric Transport Modeling with 3D Lagrangian Dispersion Codes Compared with SF6 Tracer Experiments at Regional Scale

    Directory of Open Access Journals (Sweden)

    François Van Dorpe

    2007-01-01

    Full Text Available The results of four gas tracer experiments on atmospheric dispersion at regional scale are used for benchmarking two atmospheric dispersion modeling codes, MINERVE-SPRAY (CEA) and NOSTRADAMUS (IBRAE). The main topic of this comparison is to estimate the capability of Lagrangian codes to predict radionuclide atmospheric transfer over a large field, for example in risk assessments for nuclear power plants. For the four experiments, the calculation results show rather good agreement between the two codes, and the order of magnitude of the concentrations measured at the soil is predicted. The simulation is best for sampling points located ten kilometers from the source, while the results diverge for more distant points (concentrations differ by a factor of 2 to 5). This divergence may be explained by the fact that, for these four experiments, only one weather station (near the source point) was used over a field of 10 000 km2, so that a uniform wind field was simulated throughout the calculation domain.

  1. Advancements in reactor physics modelling methodology of Monte Carlo Burnup Code MCB dedicated to higher simulation fidelity of HTR cores

    International Nuclear Information System (INIS)

    Cetnar, Jerzy

    2014-01-01

    The recent development of MCB, the Monte Carlo Continuous Energy Burn-up code, is directed towards the advanced description of modern reactors, including the double heterogeneity structures that exist in HTRs. Here we exploit the advantages of the MCB methodology in an integrated approach, where physics, neutronics, burnup, reprocessing, non-stationary process modeling (control rod operation) and refined spatial modeling are carried out in a single flow. This approach allows for the implementation of advanced statistical options such as analysis of error propagation, perturbation in the time domain, and sensitivity and source convergence analyses. It includes statistical analysis of the burnup process, emitted particle collection, thermal-hydraulic coupling, automatic power profile calculations, advanced procedures of burnup step normalization and enhanced post-processing capabilities. (author)

  2. Intercomparison of radiation codes for Mars Models: SW and LW

    Science.gov (United States)

    Savijarvi, H. I.; Crisp, D.; Harri, A.-M.

    2002-09-01

    We have enlarged our radiation scheme intercomparison for Mars models into the SW region. A reference mean case is introduced by having a T(z)-profile based on Mariner 9 IRIS observations at 35 fixed-altitude points for a 95.3 per cent CO2-atmosphere plus optional trace gases and well-mixed dust at visible optical depths of 0, 0.3, 0.6, 1.0 and 5.0. A Spectrum Resolving (line-by-line) multiple scattering multi-stream Model (SRM, by Crisp) is used as the first-principles reference calculation. The University of Helsinki (UH) old and new (improved) Mars model schemes are also included. The intercomparisons have pointed out the importance of dust and water vapour in the LW, while the CO2 spectral line data difference effects were minimal but nonzero. In the shortwave, the results show that the CO2 absorption of solar radiation by the line-by-line scheme is relatively intense, especially so at low solar height angles. This is attributed to the (often neglected) very weak lines and bands in the near-infrared. The other trace gases are not important but dust, of course, scatters and absorbs strongly in the shortwave. The old, very simple, UH SW scheme was surprisingly good at low dust concentrations, compared to SRM. It was however considerably improved for both low and high dust amounts by using the SRM results as benchmark. Other groups are welcome to join.

  3. Immune Modulating Capability of Two Exopolysaccharide-Producing Bifidobacterium Strains in a Wistar Rat Model

    Directory of Open Access Journals (Sweden)

    Nuria Salazar

    2014-01-01

    Full Text Available Fermented dairy products are the usual carriers for the delivery of probiotics to humans, Bifidobacterium and Lactobacillus being the most frequently used bacteria. In this work, the strains Bifidobacterium animalis subsp. lactis IPLA R1 and Bifidobacterium longum IPLA E44 were tested for their capability to modulate immune response and the insulin-dependent glucose homeostasis using male Wistar rats fed with a standard diet. Three intervention groups were fed daily for 24 days with 10% skimmed milk, or with 10^9 cfu of the corresponding strain suspended in the same vehicle. A significant increase of the suppressor-regulatory TGF-β cytokine occurred with both strains in comparison with a control (no intervention) group of rats; the highest levels were reached in rats fed IPLA R1. This strain presented an immune protective profile, as it was able to reduce the production of the proinflammatory IL-6. Moreover, phosphorylated Akt kinase decreased in the gastrocnemius muscle of rats fed the strain IPLA R1, without affecting the glucose, insulin, and HOMA index in blood, or levels of Glut-4 located in the membrane of muscle and adipose tissue cells. Therefore, the strain B. animalis subsp. lactis IPLA R1 is a probiotic candidate to be tested in mild grade inflammation animal models.

  4. Aerodynamic modelling of a Cretaceous bird reveals thermal soaring capabilities during early avian evolution.

    Science.gov (United States)

    Serrano, Francisco José; Chiappe, Luis María

    2017-07-01

    Several flight modes are thought to have evolved during the early evolution of birds. Here, we use a combination of computational modelling and morphofunctional analyses to infer the flight properties of the raven-sized, Early Cretaceous bird Sapeornis chaoyangensis -a likely candidate to have evolved soaring capabilities. Specifically, drawing information from (i) mechanical inferences of the deltopectoral crest of the humerus, (ii) wing shape (i.e. aspect ratio), (iii) estimations of power margin (i.e. difference between power required for flight and available power from muscles), (iv) gliding behaviour (i.e. forward speed and sinking speed), and (v) palaeobiological evidence, we conclude that S. chaoyangensis was a thermal soarer with an ecology similar to that of living South American screamers. Our results indicate that as early as 125 Ma, some birds evolved the morphological and aerodynamic requirements for soaring on continental thermals, a conclusion that highlights the degree of ecological, functional and behavioural diversity that resulted from the first major evolutionary radiation of birds. © 2017 The Author(s).

  5. Model-based Assessment for Balancing Privacy Requirements and Operational Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Knirsch, Fabian [Salzburg Univ. (Austria)]; Engel, Dominik [Salzburg Univ. (Austria)]; Frincu, Marc [Univ. of Southern California, Los Angeles, CA (United States)]; Prasanna, Viktor [Univ. of Southern California, Los Angeles, CA (United States)]

    2015-02-17

    The smart grid changes the way energy is produced and distributed. In addition, both energy and information are exchanged bidirectionally among the participating parties. Heterogeneous systems therefore have to cooperate effectively in order to achieve a common high-level use case, such as smart metering for billing or demand response for load curtailment. Furthermore, a substantial amount of personal data is often needed to achieve that goal. Capturing and processing personal data in the smart grid increases customer concerns about privacy, and in addition, certain statutory and operational requirements regarding privacy-aware data processing and storage have to be met. An increase in privacy constraints, however, often limits the operational capabilities of the system. In this paper, we present an approach that automates the process of finding an optimal balance between privacy requirements and operational requirements in a smart grid use case and application scenario. This is achieved by formally describing use cases in an abstract model and by finding an algorithm that determines the optimum balance by forward-mapping privacy and operational impacts. For this optimal balancing algorithm, both a numeric approximation and, where feasible, an analytic assessment are presented and investigated. The system is evaluated by applying the tool to a real-world use case from the University of Southern California (USC) microgrid.
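A deliberately simplified sketch of the balancing step: each candidate configuration maps to a privacy impact and an operational capability, and the algorithm picks the configuration maximizing a weighted trade-off. The configurations, scores and weight below are hypothetical; the paper's abstract model and impact mappings are richer:

```python
# Toy privacy/operations balancing: score each candidate configuration
# as (operational capability) minus (weighted privacy impact) and pick
# the maximizer. All names and numbers are illustrative placeholders.

configurations = {
    # name: (privacy_impact, operational_capability), both on [0, 1]
    "15-min metering": (0.9, 0.95),
    "hourly metering": (0.5, 0.80),
    "daily metering":  (0.1, 0.40),
}

def best_balance(configs, weight=0.5):
    # weight controls how strongly privacy impact penalizes a config
    def utility(item):
        privacy, capability = item[1]
        return capability - weight * privacy
    return max(configs.items(), key=utility)[0]

print(best_balance(configurations, weight=0.5))   # hourly metering
```

Sweeping `weight` recovers the trade-off curve: a weight of zero selects the most capable configuration regardless of privacy, while large weights push the optimum toward the most privacy-preserving one.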

  6. Pre-engineering Spaceflight Validation of Environmental Models and the 2005 HZETRN Simulation Code

    Science.gov (United States)

    Nealy, John E.; Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.; Dachev, Ts. P.; Tomov, B. T.; Walker, Steven A.; DeAngelis, Giovanni; Blattnig, Steve R.; Atwell, William

    2006-01-01

    The HZETRN code has been identified by NASA for engineering design in the next phase of space exploration highlighting a return to the Moon in preparation for a Mars mission. In response, a new series of algorithms beginning with 2005 HZETRN, will be issued by correcting some prior limitations and improving control of propagated errors along with established code verification processes. Code validation processes will use new/improved low Earth orbit (LEO) environmental models with a recently improved International Space Station (ISS) shield model to validate computational models and procedures using measured data aboard ISS. These validated models will provide a basis for flight-testing the designs of future space vehicles and systems of the Constellation program in the LEO environment.

  7. The Nuremberg Code subverts human health and safety by requiring animal modeling

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2012-07-01

    Full Text Available Abstract. Background: The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion: We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. Summary: We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented.

  8. Modeling of BWR core meltdown accidents - for application in the MELRPI. MOD2 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Koh, B R; Kim, S H; Taleyarkhan, R P; Podowski, M Z; Lahey, Jr, R T

    1985-04-01

    This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.

  9. Modeling of BWR core meltdown accidents - for application in the MELRPI.MOD2 computer code

    International Nuclear Information System (INIS)

    Koh, B.R.; Kim, S.H.; Taleyarkhan, R.P.; Podowski, M.Z.; Lahey, R.T. Jr.

    1985-04-01

    This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing

  10. The Nuremberg Code subverts human health and safety by requiring animal modeling.

    Science.gov (United States)

    Greek, Ray; Pippus, Annalea; Hansen, Lawrence A

    2012-07-08

    The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented.

  11. Coded Random Access

    DEFF Research Database (Denmark)

    Paolini, Enrico; Stefanovic, Cedomir; Liva, Gianluigi

    2015-01-01

    The rise of machine-to-machine communications has rekindled the interest in random access protocols as a support for a massive number of uncoordinatedly transmitting devices. The legacy ALOHA approach is developed under a collision model, where slots containing collided packets are considered as waste. However, if the common receiver (e.g., base station) is capable of storing the collision slots and using them in a transmission recovery process based on successive interference cancellation, the design space for access protocols is radically expanded. We present the paradigm of coded random access, in which the structure of the access protocol can be mapped to the structure of an erasure-correcting code defined on a graph. This opens the possibility of using coding theory and tools for designing efficient random access protocols, offering markedly better performance than ALOHA. Several instances of coded...
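The mapping from access protocol to erasure decoding can be illustrated with a small successive-interference-cancellation "peeling" decoder, in which singleton slots are resolved and the decoded packets are cancelled from all other slots; the slot assignment below is invented for the example, not any specific published scheme:

```python
# Toy coded random access receiver: users place packet replicas in
# several slots; the receiver repeatedly decodes any slot holding a
# single packet and cancels that packet everywhere else (SIC peeling),
# exactly like iterative erasure decoding on a bipartite graph.

def sic_decode(slots):
    # slots: iterable of sets of user ids transmitting in each slot
    slots = [set(s) for s in slots]
    decoded = set()
    progress = True
    while progress:
        progress = False
        for s in slots:
            if len(s) == 1:                 # singleton slot: decodable
                user = next(iter(s))
                decoded.add(user)
                for t in slots:             # cancel this user's replicas
                    t.discard(user)
                progress = True
    return decoded

# users 1-3 transmit two replicas each across four slots
slots = [{1, 2}, {2, 3}, {3}, {1}]
print(sorted(sic_decode(slots)))   # [1, 2, 3]
```

Decoding slot {3} frees user 2 in slot {2, 3}, and so on: the chain reaction is the same peeling process used to analyze LDPC-style erasure codes, which is what lets coding theory predict the protocol's throughput.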

  12. Modelling Chemical Equilibrium Partitioning with the GEMS-PSI Code

    Energy Technology Data Exchange (ETDEWEB)

    Kulik, D.; Berner, U.; Curti, E

    2004-03-01

    Sorption, co-precipitation and re-crystallisation are important retention processes for dissolved contaminants (radionuclides) migrating through the sub-surface. The retention of elements is usually measured by empirical partition coefficients (Kd), which vary in response to many factors: temperature, solid/liquid ratio, total contaminant loading, water composition, host-mineral composition, etc. The Kd values can be predicted for in-situ conditions from thermodynamic modelling of solid solution, aqueous solution or sorption equilibria, provided that stoichiometry, thermodynamic stability and mixing properties of the pure components are known (Example 1). Unknown thermodynamic properties can be retrieved from experimental Kd values using inverse modelling techniques (Example 2). An efficient, advanced tool for performing both tasks is the Gibbs Energy Minimization (GEM) approach, implemented in the user-friendly GEM-Selector (GEMS) program package, which includes the Nagra-PSI chemical thermodynamic database. The package is being further developed at PSI and used extensively in studies relating to nuclear waste disposal. (author)

  13. Modelling Chemical Equilibrium Partitioning with the GEMS-PSI Code

    International Nuclear Information System (INIS)

    Kulik, D.; Berner, U.; Curti, E.

    2004-01-01

    Sorption, co-precipitation and re-crystallisation are important retention processes for dissolved contaminants (radionuclides) migrating through the sub-surface. The retention of elements is usually measured by empirical partition coefficients (Kd), which vary in response to many factors: temperature, solid/liquid ratio, total contaminant loading, water composition, host-mineral composition, etc. The Kd values can be predicted for in-situ conditions from thermodynamic modelling of solid solution, aqueous solution or sorption equilibria, provided that stoichiometry, thermodynamic stability and mixing properties of the pure components are known (Example 1). Unknown thermodynamic properties can be retrieved from experimental Kd values using inverse modelling techniques (Example 2). An efficient, advanced tool for performing both tasks is the Gibbs Energy Minimization (GEM) approach, implemented in the user-friendly GEM-Selector (GEMS) program package, which includes the Nagra-PSI chemical thermodynamic database. The package is being further developed at PSI and used extensively in studies relating to nuclear waste disposal. (author)

  14. Incorporating numerical modeling into estimates of the detection capability of the IMS infrasound network

    Science.gov (United States)

    Le Pichon, A.; Ceranna, L.; Vergoz, J.

    2012-03-01

    To monitor compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT), a dedicated International Monitoring System (IMS) is being deployed. Recent global-scale observations recorded by this network confirm that its detection capability is highly variable in space and time. Previous studies estimated the radiated source energy from remote observations using empirical yield-scaling relations which account for the along-path stratospheric winds. Although the empirical wind correction reduces the variance in the explosive energy versus pressure relationship, strong variability remains in the yield estimate. Today, numerical modeling techniques provide a basis to better understand the role of the different factors describing the source and the atmosphere that influence propagation predictions. In this study, the effects of the source frequency and the stratospheric wind speed are simulated. In order to characterize fine-scale atmospheric structures which are excluded from the current atmospheric specifications, model predictions are further enhanced by the addition of perturbation terms. A theoretical attenuation relation is thus developed from massive numerical simulations using the Parabolic Equation method. Compared with previous studies, our approach provides a more realistic physical description of long-range infrasound propagation. We obtain a new relation combining a near-field and a far-field term, which account for the effects of both geometrical spreading and absorption. In the context of the future verification of the CTBT, the derived attenuation relation quantifies the spatial and temporal variability of the IMS infrasound network performance at higher resolution, and will be helpful for designing and prioritizing the maintenance of any infrasound monitoring network.
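    The structure of such a relation can be illustrated as a near-field geometrical-spreading term plus a far-field ducted term whose decay depends on frequency and on stratospheric wind support. The functional forms and all coefficients below are hypothetical placeholders, not the fitted relation of the study:

```python
import math

def attenuation(R_km, f_hz, v_eff_ratio):
    """Illustrative two-term infrasound attenuation law: near-field
    spherical spreading plus a stratospheric-duct far-field term with
    frequency-dependent absorption, moderated by the effective sound
    speed ratio (wind support). Coefficients are hypothetical."""
    near = 1.0 / R_km  # spherical spreading close to the source
    # Absorption rate grows with frequency, shrinks with wind support.
    beta = 2e-3 * f_hz / max(v_eff_ratio, 0.1)
    far = (1.0 / math.sqrt(R_km)) * 10.0 ** (-beta * R_km / 20.0)
    return near + far

# Attenuation versus range for a 0.5 Hz source under favourable winds.
print(attenuation(1000.0, 0.5, 1.1))
```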

  15. Incorporating numerical modelling into estimates of the detection capability of the IMS infrasound network

    Science.gov (United States)

    Le Pichon, A.; Ceranna, L.

    2011-12-01

    To monitor compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT), a dedicated International Monitoring System (IMS) is being deployed. Recent global scale observations recorded by this network confirm that its detection capability is highly variable in space and time. Previous studies estimated the radiated source energy from remote observations using empirical yield-scaling relations which account for the along-path stratospheric winds. Although the empirical wind correction reduces the variance in the explosive energy versus pressure relationship, strong variability remains in the yield estimate. Today, numerical modelling techniques provide a basis to better understand the role of different factors describing the source and the atmosphere that influence propagation predictions. In this study, the effects of the source frequency and the stratospheric wind speed are simulated. In order to characterize fine-scale atmospheric structures which are excluded from the current atmospheric specifications, model predictions are further enhanced by the addition of perturbation terms. Thus, a theoretical attenuation relation is developed from massive numerical simulations using the Parabolic Equation method. Compared with previous studies, our approach provides a more realistic physical description of infrasound propagation. We obtain a new relation combining a near-field and far-field term which account for the effects of both geometrical spreading and dissipation on the pressure wave attenuation. By incorporating real ambient infrasound noise at the receivers which significantly limits the ability to detect and identify signals of interest, the minimum detectable source amplitude can be derived in a broad frequency range. Empirical relations between the source spectrum and the yield of explosions are used to infer detection thresholds in tons of TNT equivalent. In the context of the future verification of the CTBT, the obtained attenuation relation quantifies

  16. An improved UO2 thermal conductivity model in the ELESTRES computer code

    International Nuclear Information System (INIS)

    Chassie, G.G.; Tochaie, M.; Xu, Z.

    2010-01-01

    This paper describes the improved UO2 thermal conductivity model for use in the ELESTRES (ELEment Simulation and sTRESses) computer code. The ELESTRES computer code models the thermal, mechanical and microstructural behaviour of a CANDU® fuel element under normal operating conditions. The main purpose of the code is to calculate fuel temperatures, fission gas release, internal gas pressure, fuel pellet deformation, and fuel sheath strains for fuel element design and assessment. It is also used to provide initial conditions for evaluating fuel behaviour during high temperature transients. The thermal conductivity of UO2 fuel is one of the key parameters that affect ELESTRES calculations. The existing ELESTRES thermal conductivity model has been assessed and improved based on a large amount of thermal conductivity data from measurements of irradiated and un-irradiated UO2 fuel with different densities. The UO2 thermal conductivity data cover 90% to 99% of the theoretical density of UO2, temperatures up to 3027 K, and burnups up to 1224 MW·h/kg U. The improved thermal conductivity model, which is recommended for full implementation in the ELESTRES computer code, has reduced the code's prediction biases for temperature, fission gas release, and fuel sheath strains when compared with the available experimental data. The improved model has also been checked with a test version of ELESTRES over the full ranges of fuel temperature, fuel burnup, and fuel density expected in CANDU fuel. (author)
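    A typical UO2 conductivity correlation of the kind discussed here combines a phonon term degraded by burnup, an electronic term that matters at high temperature, and a porosity correction about the 95% theoretical-density reference. The sketch below uses illustrative coefficients of roughly the right magnitude; it is not the ELESTRES model:

```python
import math

def uo2_conductivity(T_K, burnup_MWh_per_kgU=0.0, porosity=0.05):
    """Illustrative UO2 thermal conductivity (W/m·K): a phonon term
    degraded by burnup, an electronic (small-polaron) term at high
    temperature, and a linear porosity correction about 95% TD.
    Coefficients are rough placeholders, NOT the ELESTRES fit."""
    # Phonon (lattice) term; fission products add defect scattering.
    phonon = 1.0 / (0.0375 + 2.165e-4 * T_K + 1.0e-4 * burnup_MWh_per_kgU)
    # Electronic contribution, significant above ~1900 K.
    electronic = 4.715e9 / T_K**2 * math.exp(-16361.0 / T_K)
    k95 = phonon + electronic
    return k95 * (1.0 - 2.5 * (porosity - 0.05))

print(uo2_conductivity(1000.0))          # un-irradiated, 95% TD
print(uo2_conductivity(1000.0, 500.0))   # degraded by burnup
```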

  17. Quantitative analysis of crossflow model of the COBRA-IV.1 code

    International Nuclear Information System (INIS)

    Lira, C.A.B.O.

    1983-01-01

    Based on experimental data from a rod bundle test section, the crossflow model of the COBRA-IV.1 code was quantitatively analysed. The analysis showed that it is possible to establish operational conditions in which the results of the theoretical model are acceptable. (author)

  18. A model for bootstrap current calculations with bounce averaged Fokker-Planck codes

    NARCIS (Netherlands)

    Westerhof, E.; Peeters, A.G.

    1996-01-01

    A model is presented that allows the calculation of the neoclassical bootstrap current originating from the radial electron density and pressure gradients in standard (2+1)D bounce averaged Fokker-Planck codes. The model leads to an electron momentum source located almost exclusively at the

  19. OWL: A code for the two-center shell model with spherical Woods-Saxon potentials

    Science.gov (United States)

    Diaz-Torres, Alexis

    2018-03-01

    A Fortran-90 code for solving the two-center nuclear shell model problem is presented. The model is based on two spherical Woods-Saxon potentials and the potential separable expansion method. It describes the single-particle motion in low-energy nuclear collisions, and is useful for characterizing a broad range of phenomena from fusion to nuclear molecular structures.

  20. Pitch control of wind turbines using model free adaptive control based on wind turbine code

    DEFF Research Database (Denmark)

    Zhang, Yunqian; Chen, Zhe; Cheng, Ming

    2011-01-01

    value is only based on I/O data of the wind turbine is identified and then the wind turbine system is replaced by a dynamic linear time-varying model. In order to verify the correctness and robustness of the proposed model free adaptive pitch controller, the wind turbine code FAST which can predict...

  1. THELMA code electromagnetic model of ITER superconducting cables and application to the ENEA Stability Experiment

    NARCIS (Netherlands)

    Ciotti, M.; Nijhuis, Arend; Ribani, P.L.; Savoldi Richard, L.; Zanino, R.

    2006-01-01

    The new THELMA code, including a thermal-hydraulic (TH) and an electro-magnetic (EM) model of a cable-in-conduit conductor (CICC), has been developed. The TH model is at this stage relatively conventional, with two fluid components (He flowing in the annular cable region and He flowing in the

  2. The modelling of wall condensation with noncondensable gases for the containment codes

    Energy Technology Data Exchange (ETDEWEB)

    Leduc, C.; Coste, P.; Barthel, V.; Deslandes, H. [Commissariat a l'Energie Atomique, Grenoble (France)]

    1995-09-01

    This paper presents several approaches to the modelling of wall condensation in the presence of noncondensable gases for containment codes. The lumped-parameter modelling and the local modelling by 3-D codes are discussed. Containment analysis codes should be able to predict the spatial distributions of steam, air, and hydrogen as well as the efficiency of cooling by wall condensation in both natural convection and forced convection situations. 3-D calculations with a turbulent diffusion modelling are necessary, since the diffusion controls the local condensation whereas the wall condensation may redistribute the air and hydrogen mass in the containment. A fine-mesh modelling of film condensation in forced convection has been developed taking into account the influence of the suction velocity at the liquid-gas interface. It is associated with the 3-D model of the TRIO code for the gas mixture, where a k-ξ turbulence model is used. The predictions are compared to Huhtiniemi's experimental data. The modelling of condensation in natural convection or mixed convection is more complex. As no universal velocity and temperature profiles exist for such boundary layers, a very fine nodalization is necessary. Simpler models integrate the equations over the boundary layer thickness, using the heat and mass transfer analogy. The model predictions are compared with an MIT experiment. For the containment compartments, a two-node model is proposed using the lumped-parameter approach. Heat and mass transfer coefficients are tested on separate-effect tests and containment experiments. The CATHARE code has been adapted to perform such calculations and shows a reasonable agreement with data.
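    The heat and mass transfer analogy used by such integral models can be sketched in one step: a mass transfer coefficient is obtained from the sensible heat transfer coefficient via a Lewis-number (Chilton-Colburn) correction, and the condensation flux follows from the steam mass-fraction difference between bulk and interface. This is a generic textbook form with hypothetical inputs, not the TRIO/CATHARE implementation:

```python
def condensation_flux(h_conv, rho_gas, cp_gas, Le, w_steam_bulk, w_steam_iface):
    """Condensation mass flux (kg/m^2/s) from the heat/mass transfer
    analogy: mass transfer coefficient h_m derived from the sensible
    heat transfer coefficient h_conv via a Chilton-Colburn
    Lewis-number correction."""
    h_m = h_conv / (rho_gas * cp_gas * Le ** (2.0 / 3.0))  # m/s
    # Driving force: steam mass-fraction difference, bulk -> interface.
    return h_m * rho_gas * (w_steam_bulk - w_steam_iface)

# Example: air-steam mixture with the interface held near saturation.
print(condensation_flux(10.0, 1.0, 1000.0, 1.0, 0.3, 0.05))
```

Noncondensables enter through the reduced interface steam fraction: as air accumulates at the wall, `w_steam_iface` approaches `w_steam_bulk` and the flux collapses.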

  3. Slag transport models for vertical and horizontal surfaces. [SLGTR code

    Energy Technology Data Exchange (ETDEWEB)

    Chow, L S.H.; Johnson, T R

    1978-01-01

    In a coal-fired MHD system, all downstream component surfaces that are exposed to combustion gases will be covered by a solid, liquid, or solid-liquid film of slag, seed, or a mixture of the two, the specific nature of the film depending on the physical properties of the slag and seed and on local conditions. An analysis was made of a partly-liquid slag film flowing on a cooled vertical or horizontal wall of a large duct, through which passed slag-laden combustion gases. The model is applicable to the high-temperature steam generators in the downstream system of an MHD power plant and was used in calculations for a radiant-boiler concept similar to that in the 1000-MWe Gilbert-STD Baseline Plant study and also for units large enough for 230 and 8 lb/s (104.3 and 3.5 kg/s) of combustion gas. The qualitative trends of the results are similar for both vertical and horizontal surfaces. The results show the effects of the slag film, slag properties, and gas emissivity on the heat flux to the steam tubes. The slag film does not reduce the rate of heat transfer in proportion to its surface temperature, because most of the heat is radiated from the gas and particles suspended in it to the slag surface.

  4. Development and Verification of a Pilot Code based on Two-fluid Three-field Model

    International Nuclear Information System (INIS)

    Hwang, Moon Kyu; Bae, S. W.; Lee, Y. J.; Chung, B. D.; Jeong, J. J.; Ha, K. S.; Kang, D. H.

    2006-09-01

    In this study, a semi-implicit pilot code is developed for a one-dimensional channel flow with three fields. The three fields comprise a gas field, a continuous liquid field and an entrained liquid field. All three fields are allowed to have their own velocities. The temperatures of the continuous liquid and the entrained liquid are, however, assumed to be in equilibrium. The interphase phenomena include heat and mass transfer, as well as momentum transfer. Fluid/structure interaction generally includes both heat and momentum transfer. Assuming an adiabatic system, only momentum transfer is considered in this study, leaving the wall heat transfer for a future study. Using 10 conceptual problems, the basic pilot code has been verified. The results of the verification are summarized below: It was confirmed that the basic pilot code can simulate various flow conditions (such as single-phase liquid flow, bubbly flow, slug/churn turbulent flow, annular-mist flow, and single-phase vapor flow) and transitions between them. The pilot code was programmed so that the source terms of the governing equations and the numerical solution schemes can be easily tested. Mass and energy conservation was confirmed for single-phase liquid and single-phase vapor flows. It was confirmed that the inlet pressure and velocity boundary conditions work properly. It was confirmed that, for single- and two-phase flows, the velocity and temperature of a non-existing phase are calculated as intended. Complete phase depletion, which might occur during a phase change, was found to adversely affect the code stability. A further study would be required to enhance the code's capability in this regard.
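    The field mass-conservation equations solved by such a code can be illustrated with a single explicit donor-cell (upwind) step for one field on a periodic 1-D staggered grid; the pilot code itself is semi-implicit and couples three fields through interphase terms, so this is only a structural sketch:

```python
def donor_cell_step(q, vel, dx, dt):
    """One explicit donor-cell (upwind) step of dq/dt + d(q*u)/dx = 0
    on a periodic 1-D grid. q is cell-centred (e.g. alpha*rho of one
    field); vel is face-centred, with vel[i] the velocity at the left
    face of cell i. The scheme conserves total mass by construction."""
    n = len(q)
    flux = []
    for i in range(n):
        u = vel[i]
        donor = q[i - 1] if u > 0 else q[i]  # upwind donor cell
        flux.append(u * donor)
    # Cell update: divergence of face fluxes (right face of cell i is
    # the left face of cell i+1, periodic wrap-around).
    return [q[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]

q = [1.0, 2.0, 3.0, 4.0]
print(donor_cell_step(q, [0.5, 0.5, 0.5, 0.5], dx=1.0, dt=0.1))
```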

  5. A new modelling method and unified code with MCRT for concentrating solar collectors and its applications

    International Nuclear Information System (INIS)

    Cheng, Z.D.; He, Y.L.; Cui, F.Q.

    2013-01-01

    Highlights: ► A general-purpose method or design/simulation tool needs to be developed for CSCs. ► A new modelling method and homemade unified code with MCRT are presented. ► The photo-thermal conversion processes in three typical CSCs were analyzed. ► The results show that the proposed method and model are feasible and reliable. -- Abstract: The main objective of the present work is to develop a general-purpose numerical method for improving design/simulation tools for the concentrating solar collectors (CSCs) of concentrated solar power (CSP) systems. A new modelling method and a homemade unified code based on the Monte Carlo Ray-Trace (MCRT) method for the CSCs are presented first. The details of the new design method and the unified code for numerical investigations of the solar concentrating and collecting characteristics of the CSCs are introduced. Three coordinate systems are used in the MCRT program and can be totally independent from each other. Solar radiation in participating and/or non-participating media can be taken into account simultaneously or separately in the simulation. The criteria for data processing and method/code checking are also proposed in detail. Finally, the proposed method and code are applied to simulate and analyze the involved photo-thermal conversion processes in three typical CSCs. The results show that the proposed method and model are reliable for simulating various types of CSCs.

  6. The implementation of a toroidal limiter model into the gyrokinetic code ELMFIRE

    Energy Technology Data Exchange (ETDEWEB)

    Leerink, S.; Janhunen, S.J.; Kiviniemi, T.P.; Nora, M. [Euratom-Tekes Association, Helsinki University of Technology (Finland); Heikkinen, J.A. [Euratom-Tekes Association, VTT, P.O. Box 1000, FI-02044 VTT (Finland); Ogando, F. [Universidad Nacional de Educacion a Distancia, Madrid (Spain)

    2008-03-15

    The ELMFIRE full nonlinear gyrokinetic simulation code has been developed for calculations of plasma evolution and turbulence dynamics in tokamak geometry. The code is applicable for calculations of strong perturbations in the particle distribution function, rapid transients and steep gradients in the plasma. Benchmarking against experimental reflectometry data from the FT2 tokamak is discussed, and a model for comparing and studying the poloidal velocity is presented. To make the ELMFIRE code suitable for scrape-off layer simulations, a simplified toroidal limiter model has been implemented. The model is discussed and first results are presented. (copyright 2008 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  7. Code-To-Code Benchmarking Of The Porflow And GoldSim Contaminant Transport Models Using A Simple 1-D Domain - 11191

    International Nuclear Information System (INIS)

    Hiergesell, R.; Taylor, G.

    2010-01-01

    An investigation was conducted to compare and evaluate the contaminant transport results of two model codes, GoldSim and Porflow, using a simple 1-D string of elements in each code. Model domains were constructed to be identical with respect to cell numbers and dimensions, matrix material, flow boundary and saturation conditions. One of the codes, GoldSim, does not simulate advective movement of water; therefore the water flux term was specified as a boundary condition. In the other code, Porflow, a steady-state flow field was computed and contaminant transport was simulated within that flow field. The comparisons were made solely in terms of the ability of each code to perform contaminant transport. The purpose of the investigation was to establish a basis for, and to validate, follow-on work in which a 1-D GoldSim model was developed by abstracting information from Porflow 2-D and 3-D unsaturated and saturated zone models and then benchmarked to produce equivalent contaminant transport results. A handful of contaminants were selected for the code-to-code comparison simulations, including a non-sorbing tracer and several long- and short-lived radionuclides exhibiting non-sorbing to strongly-sorbing characteristics with respect to the matrix material, including several requiring the simulation of in-growth of daughter radionuclides. The same diffusion and partitioning coefficients associated with each contaminant and the half-lives associated with each radionuclide were incorporated into each model. A string of 10 elements, having identical spatial dimensions and properties, was constructed within each code. GoldSim's basic contaminant transport elements, mixing cells, were utilized in this construction. Sand was established as the matrix material and was assigned identical properties (e.g. bulk density, porosity, saturated hydraulic conductivity) in both codes.
Boundary conditions applied included an influx of water at the rate of 40 cm/yr at one
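    The mixing-cell construction can be sketched as a chain of well-mixed cells with advective throughflow, first-order radioactive decay and a retardation factor from linear (Kd) sorption. Parameter names and values below are hypothetical illustrations, not those of the benchmarked SRS models:

```python
import math

def mixing_cell_transport(n_cells, flow_Lpy, cell_vol_L, c_in, years, dt,
                          half_life_y=None, R=1.0):
    """Sketch of 1-D contaminant transport through a chain of well-mixed
    cells (GoldSim-style): advective exchange at rate flow/volume,
    retarded by R (linear sorption), with optional first-order decay.
    Returns the cell concentrations after `years` of constant inflow."""
    lam = math.log(2.0) / half_life_y if half_life_y else 0.0
    k = flow_Lpy / (cell_vol_L * R)  # retarded transfer rate, 1/yr
    c = [0.0] * n_cells
    for _ in range(int(years / dt)):
        prev = c[:]
        for i in range(n_cells):
            upstream = c_in if i == 0 else prev[i - 1]
            c[i] += dt * (k * (upstream - prev[i]) - lam * prev[i])
    return c

# Non-sorbing, non-decaying tracer: the whole chain approaches the
# inlet concentration at steady state.
out = mixing_cell_transport(10, flow_Lpy=40.0, cell_vol_L=100.0,
                            c_in=1.0, years=200.0, dt=0.01)
print(out[-1])
```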

  8. Neutron-photon energy deposition in CANDU reactor fuel channels: a comparison of modelling techniques using ANISN and MCNP computer codes

    International Nuclear Information System (INIS)

    Bilanovic, Z.; McCracken, D.R.

    1994-12-01

    In order to assess irradiation-induced corrosion effects, coolant radiolysis and the degradation of the physical properties of reactor materials and components, it is necessary to determine the neutron, photon, and electron energy deposition profiles in the fuel channels of the reactor core. At present, several different computer codes must be used to do this. The most recent, advanced and versatile of these is the latest version of MCNP, which may be capable of replacing all the others. Different codes have different assumptions and different restrictions on the way they can model the core physics and geometry. This report presents the results of ANISN and MCNP models of neutron and photon energy deposition. The results validate the use of MCNP for simplified geometrical modelling of energy deposition by neutrons and photons in the complex geometry of the CANDU reactor fuel channel. Discrete ordinates codes such as ANISN were the benchmark codes used in previous work. The results of calculations using various models are presented, and they show very good agreement for fast-neutron energy deposition. In the case of photon energy deposition, however, some modifications to the modelling procedures had to be incorporated. Problems with the use of reflective boundaries were solved by either including the eight surrounding fuel channels in the model, or using a boundary source at the bounding surface of the problem. Once these modifications were incorporated, consistent results between the computer codes were achieved. Historically, simple annular representations of the core were used, because of the difficulty of doing detailed modelling with older codes. It is demonstrated that modelling by MCNP, using more accurate and more detailed geometry, gives significantly different and improved results. (author). 9 refs., 12 tabs., 20 figs

  9. Aquelarre. A computer code for fast neutron cross sections from the statistical model

    International Nuclear Information System (INIS)

    Guasp, J.

    1974-01-01

    A Fortran V computer code for the Univac 1108/6, using the partial statistical (or compound nucleus) model, is described. The code calculates fast neutron cross sections for the (n, n'), (n, p), (n, d) and (n, α) reactions, as well as the angular distributions and Legendre moments for the (n, n) and (n, n') processes, in heavy and intermediate spherical nuclei. A local Optical Model with spin-orbit interaction for each level is employed, allowing for the width fluctuation and Moldauer corrections, as well as the inclusion of discrete and continuous levels. (Author) 67 refs

  10. Self-shielding models of MICROX-2 code: Review and updates

    International Nuclear Information System (INIS)

    Hou, J.; Choi, H.; Ivanov, K.N.

    2014-01-01

    Highlights: • The MICROX-2 code has been improved to expand its application to advanced reactors. • New fine-group cross section libraries based on ENDF/B-VII have been generated. • Resonance self-shielding and spatial self-shielding models have been improved. • The improvements were assessed by a series of benchmark calculations against MCNPX. - Abstract: The MICROX-2 is a transport theory code that solves for the neutron slowing-down and thermalization equations of a two-region lattice cell. The MICROX-2 code has been updated to expand its application to advanced reactor concepts and fuel cycle simulations, including generation of new fine-group cross section libraries based on ENDF/B-VII. In continuation of previous work, the MICROX-2 methods are reviewed and updated in this study, focusing on its resonance self-shielding and spatial self-shielding models for neutron spectrum calculations. The improvement of self-shielding method was assessed by a series of benchmark calculations against the Monte Carlo code, using homogeneous and heterogeneous pin cell models. The results have shown that the implementation of the updated self-shielding models is correct and the accuracy of physics calculation is improved. Compared to the existing models, the updates reduced the prediction error of the infinite multiplication factor by ∼0.1% and ∼0.2% for the homogeneous and heterogeneous pin cell models, respectively, considered in this study

  11. Sensitivity Analysis of the TRIGA IPR-R1 Reactor Models Using the MCNP Code

    Directory of Open Access Journals (Sweden)

    C. A. M. Silva

    2014-01-01

    In the process of verification and validation of code modelling, sensitivity analysis including systematic variations in code input variables must be used to help identify the relevant parameters for a given type of analysis. The aim of this work is to identify how much the code results are affected by two different types of modelling processes for the TRIGA IPR-R1 reactor, performed using the MCNP (Monte Carlo N-Particle Transport) code. The sensitivity analyses included small differences in the core and rod dimensions and different levels of model detailing. Four models were simulated, and neutronic parameters such as the effective multiplication factor (keff), reactivity (ρ), and the thermal and total neutron flux in the central thimble under different reactor operating conditions were analysed. The simulated models presented good agreement among themselves, as well as in comparison with available experimental data. In this way, the sensitivity analyses demonstrated that simulations of the TRIGA IPR-R1 reactor can be performed using any one of the four investigated MCNP models to obtain the referenced neutronic parameters.
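    The model-to-model comparison above is naturally expressed in reactivity units; reactivity in pcm follows directly from keff, so the spread between two model variants can be quantified as below (the keff values are hypothetical examples, not results from the paper):

```python
def reactivity_pcm(keff):
    """Reactivity rho = (keff - 1) / keff, expressed in pcm (1e-5)."""
    return (keff - 1.0) / keff * 1e5

# Spread between two hypothetical model variants of the same core:
print(reactivity_pcm(1.00250) - reactivity_pcm(1.00175))  # ≈ 75 pcm
```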

  12. Off-take Model of the SPACE Code and Its Validation

    International Nuclear Information System (INIS)

    Oh, Myung Taek; Park, Chan Eok; Sohn, Jong Joo

    2011-01-01

    Liquid entrainment and vapor pull-through models of horizontal pipe have been implemented in the SPACE code. The model of SPACE accounts for the phase separation phenomena and computes the flux of mass and energy through an off-take attached to a horizontal pipe when stratified conditions occur in the horizontal pipe. This model is referred to as the off-take model. The importance of predicting the fluid conditions through an off-take in a small-break LOCA has been well known. In this case, the occurrence of the stratification can affect the break node void fraction and thus the break flow discharged from the primary system. In order to validate the off-take model newly developed for the SPACE code, a simulation of the HDU experiments has been performed. The main feature of the off-take model and its application results will be presented in this paper

  13. A Perceptual Model for Sinusoidal Audio Coding Based on Spectral Integration

    Directory of Open Access Journals (Sweden)

    Jensen Søren Holdt

    2005-01-01

    Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio, and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of audio signals. In this paper, we present a new perceptual model that predicts masked thresholds for sinusoidal distortions. The model relies on signal detection theory and incorporates more recent insights about spectral and temporal integration in auditory masking. As a consequence, the model is able to predict the distortion detectability. In fact, the distortion detectability defines a (perceptually relevant) norm on the underlying signal space, which is beneficial for optimisation algorithms such as rate-distortion optimisation or linear predictive coding. We evaluate the merits of the model by combining it with a sinusoidal extraction method and compare the results with those obtained with the ISO MPEG-1 Layer I-II recommended model. Listening tests show a clear preference for the new model. More specifically, the model presented here leads to a reduction of more than 20% in the number of sinusoids needed to represent signals at a given quality level.
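    The norm interpretation can be sketched directly: with per-band distortion powers normalised by the corresponding masked thresholds, spectral integration reduces to a weighted sum, and a distortion whose total falls below a criterion value is predicted inaudible. The weights and the unit criterion below are illustrative, not the calibrated model of the paper:

```python
def detectability(error_power, masked_threshold):
    """Perceptual distortion detectability via spectral integration:
    per-band distortion power divided by the band's masked threshold,
    summed over bands. This defines a weighted (perceptual) norm; a
    value below 1 is taken here as 'inaudible'. Illustrative only."""
    return sum(e / m for e, m in zip(error_power, masked_threshold))

# Distortion spread over bands with generous masking is less detectable
# than the same power concentrated where masking is weak.
print(detectability([0.2, 0.2], [10.0, 10.0]))  # well below threshold
print(detectability([0.4], [0.1]))              # clearly audible
```

In a rate-distortion loop, sinusoids whose removal keeps this measure below the criterion are the natural candidates to drop first.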

  14. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) verification and validation plan. version 1.

    Energy Technology Data Exchange (ETDEWEB)

    Bartlett, Roscoe Ainsworth; Arguello, Jose Guadalupe, Jr.; Urbina, Angel; Bouchard, Julie F.; Edwards, Harold Carter; Freeze, Geoffrey A.; Knupp, Patrick Michael; Wang, Yifeng; Schultz, Peter Andrew; Howard, Robert (Oak Ridge National Laboratory, Oak Ridge, TN); McCornack, Marjorie Turner

    2011-01-01

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. To meet this objective, NEAMS Waste IPSC M&S capabilities will be applied to challenging spatial domains, temporal domains, multiphysics couplings, and multiscale couplings. A strategic verification and validation (V&V) goal is to establish evidence-based metrics for the level of confidence in M&S codes and capabilities. Because it is economically impractical to apply the maximum V&V rigor to each and every M&S capability, M&S capabilities will be ranked for their impact on the performance assessments of various components of the repository systems. Those M&S capabilities with greater impact will require a greater level of confidence and a correspondingly greater investment in V&V. This report includes five major components: (1) a background summary of the NEAMS Waste IPSC to emphasize M&S challenges; (2) the conceptual foundation for verification, validation, and confidence assessment of NEAMS Waste IPSC M&S capabilities; (3) specifications for the planned verification, validation, and confidence-assessment practices; (4) specifications for the planned evidence information management system; and (5) a path forward for the incremental implementation of this V&V plan.

  15. THATCH: A computer code for modelling thermal networks of high-temperature gas-cooled nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Kroeger, P.G.; Kennett, R.J.; Colman, J.; Ginsberg, T. (Brookhaven National Lab., Upton, NY (United States))

    1991-10-01

    This report documents the THATCH code, which can be used to model general thermal and flow networks of solids and coolant channels in two-dimensional r-z geometries. The main application of THATCH is to model reactor thermo-hydraulic transients in High-Temperature Gas-Cooled Reactors (HTGRs). The available modules simulate pressurized or depressurized core heatup transients, with heat transfer to general exterior sinks or to specific passive Reactor Cavity Cooling Systems, which can be air- or water-cooled. Graphite oxidation during air or water ingress can be modelled, including the effects of the added combustion products on the gas flow and the additional chemical energy release. A point kinetics model is available for analyzing reactivity excursions, for instance due to water ingress, and also for hypothetical no-scram scenarios. For most HTGR transients, which generally range over hours, a user-selected nodalization of the core in r-z geometry is used. However, a separate model of heat transfer in the symmetry element of each fuel element is also available for very rapid transients; this model can be applied coupled to the traditional coarser r-z nodalization. The report describes the mathematical models used in the code, the method of solution, and the code and its various sub-elements. Details of the input data and file usage, with file formats, are given for the code, as well as for several preprocessing and postprocessing options. The THATCH model of the currently applicable 350 MW(th) reactor is described. Input data for four sample cases are given, with output available in fiche form. Installation requirements and code limitations, as well as the most common error indications, are listed. 31 refs., 23 figs., 32 tabs.
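    The thermal-network formulation can be illustrated with a lumped-node sketch: node heat capacities, inter-node conductances and source terms define a system of ODEs that a code like THATCH integrates over its r-z nodalization. A single explicit Euler step with hypothetical two-node data (THATCH's own solution method and nodalization are more elaborate):

```python
def thermal_network_step(T, C, G, Q, dt):
    """One explicit Euler step of a lumped thermal network:
        C_i dT_i/dt = Q_i + sum_j G_ij * (T_j - T_i)
    T: node temperatures (K), C: heat capacities (J/K), G: symmetric
    conductance matrix (W/K), Q: source terms (W). With Q = 0 and a
    symmetric G, total energy sum(C_i * T_i) is conserved."""
    n = len(T)
    return [
        T[i] + dt * (Q[i] + sum(G[i][j] * (T[j] - T[i])
                                for j in range(n) if j != i)) / C[i]
        for i in range(n)
    ]

# Two nodes exchanging heat: the hot node cools, the cold node warms.
print(thermal_network_step([100.0, 0.0], [1.0, 1.0],
                           [[0.0, 0.5], [0.5, 0.0]], [0.0, 0.0], 0.1))
```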

  16. Reactor physics modelling of accident tolerant fuel for LWRs using ANSWERS codes

    Directory of Open Access Journals (Sweden)

    Lindley Benjamin A.

    2016-01-01

    The I2S-LWR adopts an integral configuration and a fully passive decay heat removal system to provide indefinite cooling capability for a class of accidents. This paper presents the equilibrium cycle core design and reactor physics behaviour of the I2S-LWR with U3Si2 fuel and the advanced steel cladding. The results were obtained using the traditional two-stage approach, in which homogenized macroscopic cross-section sets were generated by WIMS and applied in a full 3D core solution with PANTHER. The results obtained with WIMS/PANTHER were compared against the Monte Carlo Serpent code developed by VTT and against previously reported results for the I2S-LWR. The results were found to be in good agreement (e.g. <200 pcm in reactivity) among the compared codes, giving confidence that the WIMS/PANTHER reactor physics package can be reliably used in modelling advanced LWR systems.

  17. Gamma spectroscopy modelling: intercomparison of modelling results using two different codes (MCNP and Pascalys-Mercure)

    International Nuclear Information System (INIS)

    Luneville, L.; Chiron, M.; Toubon, H.; Dogny, S.; Huver, M.; Berger, L.

    2001-01-01

    The research performed jointly over the last three years by the French Atomic Energy Commission (CEA), COGEMA and Eurisys Mesures had as its main objective a complete modelling tool covering the largest possible range of realistic cases: the Pascalys modelling software. The main purpose of the modelling is to calculate the global measurement efficiency, i.e. the most accurate relationship between the photons emitted by the nuclear source (in volume, point or deposited form) and the high-purity germanium detector that detects and analyzes the received photons. It has long been recognized that determining the global measurement efficiency experimentally is increasingly difficult, especially for complex scenes such as those found in decommissioning and dismantling, or for high activities, where high-activity reference sources are difficult to use from both a health physics and a regulatory standpoint. The choice of the calculation code is fundamental if accurate modelling is sought. MCNP is the reference code, but its computation times are too long to be practicable in line in the field. A direct line-of-sight point-kernel code such as the CEA 3-D Mercure code can offer a practical compromise between the accuracy of the MCNP reference code and the performance required for realistic modelling. This paper presents the comparison between the results of Pascalys-Mercure and MCNP, taking into account the latest improvements of Mercure in the low-energy range, where the largest errors can occur; the Mercure code is driven in line by the recent Pascalys 3-D scene modelling software. The effect of the intrinsic efficiency of the germanium detector on the total measurement efficiency is also addressed. (authors)

  18. Numerical Stability and Control Analysis Towards Falling-Leaf Prediction Capabilities of Splitflow for Two Generic High-Performance Aircraft Models

    Science.gov (United States)

    Charlton, Eric F.

    1998-01-01

    Aerodynamic analyses are performed using the Lockheed-Martin Tactical Aircraft Systems (LMTAS) Splitflow computational fluid dynamics code to investigate the computational prediction capabilities for vortex-dominated flow fields of two different tailless aircraft models at large angles of attack and sideslip. These computations are performed with the goal of providing useful stability and control data to designers of high-performance aircraft. Appropriate metrics for accuracy, time, and ease of use were determined in consultation with both the LMTAS Advanced Design and Stability and Control groups. Results are obtained and compared to wind-tunnel data for all six components of forces and moments. Moment data are combined to form a "falling leaf" stability analysis. Finally, a handful of viscous simulations were also performed to further investigate nonlinearities and possible viscous effects in the differences between the accumulated inviscid computational and experimental data.

  19. Implementation of a structural dependent model for the superalloy IN738LC in ABAQUS-code

    International Nuclear Information System (INIS)

    Wolters, J.; Betten, J.; Penkalla, H.J.

    1994-05-01

    Superalloys, mainly nickel-based, are used for applications in aerospace as well as in stationary gas turbines. In the temperature range above 800 C, the blades manufactured from these superalloys are subjected to high centrifugal forces and thermally induced loads. For computer-based analysis of the thermo-mechanical behaviour of the blades, models of the stress-strain behaviour are necessary. These models must give a reliable description of the stress-strain behaviour, with emphasis on inelastic effects. Implementation of such a model in finite element codes requires numerical treatment of the constitutive equations with respect to the interface of the code used. In this paper, constitutive equations for the superalloy IN738LC are presented, and their implementation in the finite element code ABAQUS, together with the numerical preparation of the model, is described. To validate the model, calculations were performed for simple uniaxial loading conditions as well as for a complete cross-section of a turbine blade under combined thermal and mechanical loading. The results were compared with those of additional ABAQUS calculations using Norton's law, which was already implemented in this code. (orig.)
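    As a point of comparison for the Norton's-law option mentioned above, here is a minimal sketch of power-law creep integration under constant stress; the material constants are illustrative placeholders, not IN738LC data.

```python
# Norton's power-law creep, d(eps)/dt = A * sigma**n, integrated with
# explicit Euler under constant stress. Constants are assumed values
# chosen for illustration only.
A_CONST = 1.0e-20    # creep coefficient, 1/(MPa^n * h) -- placeholder
N_EXP = 5.0          # stress exponent -- placeholder

def creep_strain(sigma_mpa, t_hours, steps=10000):
    """Accumulated creep strain for a constant stress history."""
    dt = t_hours / steps
    eps = 0.0
    for _ in range(steps):
        eps += A_CONST * sigma_mpa ** N_EXP * dt
    return eps

eps = creep_strain(300.0, 1000.0)    # 300 MPa held for 1000 h
```

    For constant stress the explicit integration reproduces the analytic result eps = A * sigma**n * t; in a finite element setting the same rate law is evaluated per integration point with the local, time-varying stress.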

  20. Bootstrap imputation with a disease probability model minimized bias from misclassification due to administrative database codes.

    Science.gov (United States)

    van Walraven, Carl

    2017-04-01

    Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status. It is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results in which renal failure status was determined using surrogate measures, including: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Biases in estimates of severe renal failure prevalence and its association with covariates were minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which the model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized using bootstrap methods to impute condition status using multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
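    The bootstrap imputation idea can be sketched as follows: instead of thresholding a model-derived probability into a yes/no code, each bootstrap replicate draws disease status from Bernoulli(p), and estimates are averaged over replicates. The data below are synthetic, and this is a schematic reading of the method, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_prevalence(probs, n_boot=200):
    """Average prevalence over bootstrap replicates in which each
    patient's status is imputed as a Bernoulli draw from probs."""
    estimates = []
    for _ in range(n_boot):
        status = rng.random(len(probs)) < probs   # impute disease status
        estimates.append(status.mean())
    return float(np.mean(estimates))

# Synthetic model-derived probabilities for 5000 patients (all below 0.2)
probs = rng.uniform(0.0, 0.2, size=5000)
prev = bootstrap_prevalence(probs)        # close to the true mean probability
naive = float((probs > 0.5).mean())       # categorization at 0.5: collapses to 0
```

    The contrast illustrates the paper's finding: Bernoulli imputation is unbiased for the prevalence, while hard categorization of low probabilities drives the estimate to zero.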

  1. A Deep Penetration Problem Calculation Using AETIUS: An Easy Modeling Discrete Ordinates Transport Code UsIng Unstructured Tetrahedral Mesh, Shared Memory Parallel

    Science.gov (United States)

    KIM, Jong Woon; LEE, Young-Ouk

    2017-09-01

    As computing power improves, computer codes that use deterministic methods can seem less attractive than those using the Monte Carlo method. In addition, users do not like to think about space, angle, and energy discretization for deterministic codes. However, a deterministic method is still powerful in that it yields the flux throughout the problem domain, particularly when particles can barely penetrate, such as in a deep penetration problem with small detection volumes. Recently, a state-of-the-art discrete-ordinates code, ATTILA, was developed and has been widely used in several applications. ATTILA provides the capability to solve geometrically complex 3-D transport problems by using an unstructured tetrahedral mesh. Since 2009, we have been developing our own code by benchmarking ATTILA. AETIUS is a discrete-ordinates code that uses an unstructured tetrahedral mesh, like ATTILA. For pre- and post-processing, Gmsh is used to generate the unstructured tetrahedral mesh from an imported CAD file (*.step) and to visualize the calculation results of AETIUS. Using a CAD tool, the geometry can be modeled very easily. In this paper, we give a brief overview of AETIUS and provide numerical results from both AETIUS and a Monte Carlo code, MCNP5, for a deep penetration problem with small detection volumes. The results demonstrate the effectiveness and efficiency of AETIUS for such calculations.
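    The deterministic transport sweep at the heart of a discrete-ordinates code can be illustrated in one dimension. The sketch below performs a diamond-difference sweep for a single direction through a purely absorbing slab; AETIUS itself sweeps unstructured tetrahedra in 3-D, and all numbers here are illustrative.

```python
def sn_sweep(sigma_t, width, n_cells, mu=0.5774, psi_in=1.0):
    """Sweep one discrete direction (cosine mu > 0) across a 1D slab
    with total cross section sigma_t, no scattering and no source,
    using the diamond-difference closure psi_cell = (psi_l + psi_r)/2."""
    dx = width / n_cells
    psi = psi_in                   # incoming edge angular flux at the left
    cell_flux = []
    for _ in range(n_cells):
        p = (2 * mu / dx) * psi / (2 * mu / dx + sigma_t)  # cell balance
        cell_flux.append(p)
        psi = 2 * p - psi          # diamond difference: outgoing edge flux
    return cell_flux

# One mean free path of absorber resolved by 50 cells
flux = sn_sweep(sigma_t=1.0, width=1.0, n_cells=50)
```

    With no scattering, a single sweep is exact for this discretization and the cell fluxes decay nearly exponentially with depth, which is exactly the deep-penetration behaviour where a pointwise deterministic solution beats Monte Carlo tallies in small volumes.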

  2. Development and verification of 'system thermalhydraulics - 3 dimensional reactor kinetics' coupled calculation capability using the MARS 1D module and MASTER code

    International Nuclear Information System (INIS)

    Jeong, J. J.; Joo, H. G.; Lee, W. J.; Chung, B. D.; Zee, S. Q.

    2002-07-01

    In this study, we performed the coupling of the MARS 1D module and MASTER to develop the 'system thermal-hydraulics - 3D reactor kinetics' coupled calculation capability. The new feature has been assessed with the OECD NEA MSLB benchmark exercise III simulations. Four different calculations were carried out for comparisons: - 1D base case calculation: MARS 1D Module + MASTER, - 1D refined calculation: MARS 1D Module + MASTER + COBRA-III/CP, - 3D base case calculation: MARS 3D Module + MASTER, - 3D refined calculation: MARS 3D Module + MASTER + COBRA-III/CP. The comparison of the results shows that the coupled calculation using 'MARS 1D module and MASTER' worked well as intended and that the results were very similar and consistent with those of the MARS 3D module. In particular, it is shown that the new feature can be utilized efficiently for analyzing the transients, which are characterized by multi-dimensional reactor kinetics and one-dimensional core thermal-hydraulics

  3. Seepage and Piping through Levees and Dikes using 2D and 3D Modeling Codes

    Science.gov (United States)

    2016-06-01

    Flood & Coastal Storm Damage Reduction Program, ERDC/CHL TR-16-6, June 2016. The purpose of this Technical Report is to evaluate the benefits of three-dimensional (3D) modeling of common seepage and piping issues along embankments over traditional two-dimensional (2D) models. To facilitate the evaluation, one 3D model, two 2D cross-sectional models, and one 2D plan-view model were

  4. On models of the genetic code generated by binary dichotomic algorithms.

    Science.gov (United States)

    Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz

    2015-02-01

    In this paper we introduce the concept of a BDA-generated model of the genetic code, which is based on binary dichotomic algorithms (BDAs). Such a BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate which partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes, ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible. However, we present models that describe the standard genetic code with only a few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
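    A BDA in the sense above can be sketched as a sequence of binary questions on codon positions; each added question can at most double the number of classes. The specific questions below are illustrative choices, not the dichotomies analyzed in the paper.

```python
from itertools import product

# All 64 RNA codons
CODONS = [''.join(c) for c in product('ACGU', repeat=3)]

def classify(codon, questions):
    """Answer each binary question: is the base at `pos` in `bases`?
    The tuple of answers is the codon's class label."""
    return tuple(codon[pos] in bases for pos, bases in questions)

def partition(questions):
    """Group the 64 codons by their answer tuples."""
    classes = {}
    for codon in CODONS:
        classes.setdefault(classify(codon, questions), []).append(codon)
    return classes

one = partition([(0, 'AC')])                          # a single dichotomy
three = partition([(0, 'AC'), (1, 'GU'), (2, 'AG')])  # three in sequence
```

    A single balanced question yields two classes of 32 codons, matching the dichotomy described in the abstract; three independent questions refine this to eight classes of eight.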

  5. Heavy Ion Fusion Science Virtual National Laboratory 1st Quarter FY08 Milestone Report: Initial Work on Developing Plasma Modeling Capability in WARP for NDCX Experiments

    International Nuclear Information System (INIS)

    Friedman, A.; Cohen, R.H.; Grote, D.P.; Vay, J.-L.

    2007-01-01

    This milestone has been accomplished. The Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL) has developed and implemented an initial beam-in-plasma implicit modeling capability in Warp; has carried out tests validating the behavior of the models employed; has compared the results of electrostatic and electromagnetic models when applied to beam expansion in an NDCX-I relevant regime; has compared Warp and LSP results on a problem relevant to NDCX-I; has modeled wave excitation by a rigid beam propagating through plasma; and has implemented and begun testing a more advanced implicit method that correctly captures electron drift motion even when timesteps too large to resolve the electron gyro-period are employed. The HIFS-VNL is well on its way toward having a state-of-the-art source-to-target simulation capability that will enable more effective support of ongoing experiments in the NDCX series and allow more confident planning for future ones

  6. Re-framing Inclusive Education Through the Capability Approach: An Elaboration of the Model of Relational Inclusion

    Directory of Open Access Journals (Sweden)

    Maryam Dalkilic

    2016-09-01

    Scholars have called for the articulation of new frameworks in special education that are responsive to culture and context and that address the limitations of medical and social models of disability. In this article, we advance a theoretical and practical framework for inclusive education based on the integration of a model of relational inclusion with Amartya Sen's (1985) Capability Approach. This integrated framework engages children, educators, and families in principled practices that acknowledge differences, rather than deficits, and enable attention to enhancing the capabilities of children with disabilities in inclusive educational environments. Implications include the development of policy that clarifies the process required to negotiate capabilities and valued functionings and the types of resources required to permit children, educators, and families to create relationally inclusive environments.

  7. Development of Off-take Model, Subcooled Boiling Model, and Radiation Heat Transfer Input Model into the MARS Code for a Regulatory Auditing of CANDU Reactors

    International Nuclear Information System (INIS)

    Yoon, C.; Rhee, B. W.; Chung, B. D.; Ahn, S. H.; Kim, M. W.

    2009-01-01

    Korea currently has four operating units of the CANDU-6 type reactor in Wolsong. However, the safety assessment system for CANDU reactors has not been fully established due to a lack of self-reliant technology. Although the CATHENA code has been introduced from AECL, it is undesirable to use a vendor's code for regulatory auditing analysis. In Korea, the MARS code has been developed for decades and is being considered by KINS (Korea Institute of Nuclear Safety) as a thermal-hydraulic regulatory auditing tool for nuclear power plants. Before this decision, KINS had developed the RELAP5/MOD3/CANDU code for CANDU safety analyses by modifying the models of the existing PWR auditing tool, RELAP5/MOD3. The main purpose of this study is to transplant the CANDU models of the RELAP5/MOD3/CANDU code to the MARS code, including quality assurance of the developed models.

  8. Analysis of PWR control rod ejection accident with the coupled code system SKETCH-INS/TRACE by incorporating pin power reconstruction model

    International Nuclear Information System (INIS)

    Nakajima, T.; Sakai, T.

    2010-01-01

    The pin power reconstruction model was incorporated in the 3-D nodal kinetics code SKETCH-INS in order to produce accurate calculations of three-dimensional pin power distributions throughout the reactor core. To verify the employed pin power reconstruction model, the PWR MOX/UO2 core transient benchmark problem was analyzed with the coupled code system SKETCH-INS/TRACE incorporating the model, and the influence of the pin power reconstruction model was studied. SKETCH-INS pin power distributions for 3 benchmark problems were compared with the PARCS solutions, which were provided by the host organisation of the benchmark. SKETCH-INS results were in good agreement with the PARCS results, and the capability of the employed pin power reconstruction model was confirmed through the analysis of the benchmark problems. A PWR control rod ejection benchmark problem was then analyzed with the coupled code system SKETCH-INS/TRACE incorporating the pin power reconstruction model. The influence of the pin power reconstruction model was studied by comparing with the results of the conventional node-averaged flux model. The results indicate that the pin power reconstruction model has a significant effect on the pin powers during the transient and hence on the fuel enthalpy.
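    The reconstruction step can be sketched schematically: a node-averaged power is modulated by an intranodal shape and by lattice pin form factors, then renormalized so the pin map preserves the node average. The shapes and factors below are invented for illustration and are not the SKETCH-INS implementation.

```python
import numpy as np

def reconstruct(node_power, shape, form_factors):
    """Pin map = intranodal shape x pin form factors, rescaled so that
    the mean pin power equals the node-averaged power."""
    raw = shape * form_factors
    return raw * (node_power * raw.size / raw.sum())

# Smooth intranodal flux shape over a 4x4 pin node (assumed)
shape = np.outer(np.linspace(0.9, 1.1, 4), np.linspace(0.95, 1.05, 4))
form = np.ones((4, 4))
form[1, 2] = 1.08          # a hotter pin from an assumed lattice calculation
pins = reconstruct(1.0, shape, form)
```

    The renormalization is what keeps the reconstruction consistent with the nodal solution: however the shape or form factors are chosen, the pins still integrate back to the node power.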

  9. Solar optical codes evaluation for modeling and analyzing complex solar receiver geometries

    Science.gov (United States)

    Yellowhair, Julius; Ortega, Jesus D.; Christian, Joshua M.; Ho, Clifford K.

    2014-09-01

    Solar optical modeling tools are valuable for modeling and predicting the performance of solar technology systems. Four optical modeling tools were evaluated using the National Solar Thermal Test Facility heliostat field combined with a flat plate receiver geometry as a benchmark. The four optical modeling tools evaluated were DELSOL, HELIOS, SolTrace, and Tonatiuh. All are available for free from their respective developers. DELSOL and HELIOS both use a convolution of the sunshape and optical errors for rapid calculation of the incident irradiance profiles on the receiver surfaces. SolTrace and Tonatiuh use ray-tracing methods to intersect the reflected solar rays with the receiver surfaces and construct irradiance profiles. We found the ray-tracing tools, although slower in computation speed, to be more flexible for modeling complex receiver geometries, whereas DELSOL and HELIOS were limited to standard receiver geometries such as flat plate, cylinder, and cavity receivers. We also list the strengths and deficiencies of the tools to show tool preference depending on the modeling and design needs. We provide an example of using SolTrace for modeling nonconventional receiver geometries. The goal is to transfer the irradiance profiles on the receiver surfaces calculated in an optical code to a computational fluid dynamics code such as ANSYS Fluent. This approach eliminates the need for using discrete ordinates or discrete radiation transfer models, which are computationally intensive, within the CFD code. The irradiance profiles on the receiver surfaces then allow for thermal and fluid analysis of the receiver.
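    The ray-tracing approach can be sketched with a minimal Monte Carlo trace: reflected rays leave a mirror toward an aim point with Gaussian angular scatter (sunshape and slope error lumped together) and are binned where they cross the receiver plane. The geometry and error values below are illustrative assumptions, not SolTrace's model.

```python
import numpy as np

rng = np.random.default_rng(1)

def trace(n_rays=100_000, sigma=2e-3):
    """Trace rays from one mirror to a flat receiver plane at z = 0 and
    return the hit points plus a 2D count histogram (irradiance map)."""
    mirror = np.array([0.0, 0.0, 100.0])   # heliostat location (m), assumed
    aim = np.array([0.0, 0.0, 0.0])        # aim point on the receiver plane
    d = aim - mirror
    d = d / np.linalg.norm(d)
    # Gaussian angular scatter (rad) in two transverse directions
    a1 = rng.normal(0.0, sigma, n_rays)
    a2 = rng.normal(0.0, sigma, n_rays)
    dirs = d + np.outer(a1, [1.0, 0.0, 0.0]) + np.outer(a2, [0.0, 1.0, 0.0])
    s = -mirror[2] / dirs[:, 2]            # ray parameter at plane z = 0
    hits = mirror + s[:, None] * dirs
    # Bin counts over a 2 m x 2 m receiver; counts per bin ~ irradiance
    H, _, _ = np.histogram2d(hits[:, 0], hits[:, 1],
                             bins=50, range=[[-1, 1], [-1, 1]])
    return hits, H

hits, H = trace()
```

    The resulting histogram is the kind of receiver irradiance profile that, per the abstract, can be handed to a CFD code as a surface heat-flux boundary condition.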

  10. Contact and Impact Dynamic Modeling Capabilities of LS-DYNA for Fluid-Structure Interaction Problems

    Science.gov (United States)

    2010-12-02

    2003, providing a summary of the major theoretical, experimental and numerical accomplishments in the field. Melis and Khanh Bui (2003) studied the ALE capability to predict splashdown loads on a proposed replacement/upgrade of the hydrazine tanks on the thrust

  11. A study on the dependency between turbulent models and mesh configurations of CFD codes

    International Nuclear Information System (INIS)

    Bang, Jungjin; Heo, Yujin; Jerng, Dong-Wook

    2015-01-01

    This paper focuses on the analysis of hydrogen mixing and hydrogen stratification behaviour using the GOTHIC code and a CFD code. Specifically, we examined the mesh sensitivity and how the turbulence model affects hydrogen stratification or hydrogen mixing, depending on the mesh configuration. In this work, sensitivity analyses for the meshes and the turbulence models were conducted for mixing and stratification phenomena. During severe accidents in a nuclear power plant, hydrogen may be generated, and this will complicate the atmospheric conditions in the containment by causing stratification of air, steam, and hydrogen. This could significantly impact containment integrity analyses, as hydrogen could accumulate in a local region. From this need arises the importance of research on the stratification of gases in the containment. Two computational fluid dynamics codes, i.e. GOTHIC and STAR-CCM+, were adopted, and the computational results were benchmarked against the experimental data from the PANDA facility. The main findings observed through the present work can be summarized as follows: 1) In the case of the GOTHIC code, the aspect ratio of the mesh was found to be more important than the mesh size. Also, if the number of mesh cells is over 3,000, the effects of the turbulence models were marginal. 2) For STAR-CCM+, the tendency is quite different from the GOTHIC code. That is, the effects of the turbulence models were small for a smaller number of mesh cells; however, as the number of mesh cells increases, the effects of the turbulence models become significant. Another observation is that away from the injection orifice, the role of the turbulence models tended to be important due to the nature of the mixing process and the induced jet stream.

  12. A study on the dependency between turbulent models and mesh configurations of CFD codes

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Jungjin; Heo, Yujin; Jerng, Dong-Wook [CAU, Seoul (Korea, Republic of)

    2015-10-15

    This paper focuses on the analysis of hydrogen mixing and hydrogen stratification behaviour using the GOTHIC code and a CFD code. Specifically, we examined the mesh sensitivity and how the turbulence model affects hydrogen stratification or hydrogen mixing, depending on the mesh configuration. In this work, sensitivity analyses for the meshes and the turbulence models were conducted for mixing and stratification phenomena. During severe accidents in a nuclear power plant, hydrogen may be generated, and this will complicate the atmospheric conditions in the containment by causing stratification of air, steam, and hydrogen. This could significantly impact containment integrity analyses, as hydrogen could accumulate in a local region. From this need arises the importance of research on the stratification of gases in the containment. Two computational fluid dynamics codes, i.e. GOTHIC and STAR-CCM+, were adopted, and the computational results were benchmarked against the experimental data from the PANDA facility. The main findings observed through the present work can be summarized as follows: 1) In the case of the GOTHIC code, the aspect ratio of the mesh was found to be more important than the mesh size. Also, if the number of mesh cells is over 3,000, the effects of the turbulence models were marginal. 2) For STAR-CCM+, the tendency is quite different from the GOTHIC code. That is, the effects of the turbulence models were small for a smaller number of mesh cells; however, as the number of mesh cells increases, the effects of the turbulence models become significant. Another observation is that away from the injection orifice, the role of the turbulence models tended to be important due to the nature of the mixing process and the induced jet stream.

  13. INDOSE V2.1.1, Internal Dosimetry Code Using Biokinetics Models

    International Nuclear Information System (INIS)

    Silverman, Ido

    2002-01-01

    A - Description of program or function: InDose is an internal dosimetry code developed to enable dose estimation using the new biokinetic models (presented in ICRP-56 to ICRP-71) as well as the old ones. The code is written in FORTRAN90 and uses the ICRP-66 respiratory tract model and the ICRP-30 gastrointestinal tract model, as well as the new and old biokinetic models. The code has been written in such a way that the user is able to change any of the parameters of any one of the models without recompiling the code. All the parameters are given in well-annotated parameter files that the user may change and that the code reads during invocation. By default, these files contain the values listed in ICRP publications. The full InDose code is planned to have three parts: 1) the main part, which includes the uptake and systemic models and is used to calculate the activities in the body tissues and excretion as a function of time for a given intake; 2) an optimization module for automatic estimation of the intake for a specific exposure case; 3) a module to calculate the dose due to the estimated intake. Currently, the code is able to perform only its main task (part 1), while the other two have to be done externally using other tools. In the future we would like to add these modules in order to provide a complete solution for the people in the laboratory. The code has been tested extensively to verify the accuracy of its results. The verification procedure was divided into three parts: 1) verification of the implementation of each model, 2) verification of the integrity of the whole code, and 3) a usability test. The first two parts consisted of comparing results obtained with InDose to published results for the same cases, for example ICRP-78 monitoring data. The last part consisted of participating in the 3rd EIE-IDA and assessing some of the scenarios provided in this exercise. These tests were presented in a few publications. It has been found that there is very good agreement
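    The kind of biokinetic retention calculation such a code steps through can be illustrated with a single compartment cleared by both biological removal and radioactive decay; real ICRP systemic models chain many coupled compartments. The 10-day biological half-life below is an assumed value paired with an I-131-like 8.02-day radioactive half-life.

```python
import math

def retention(a0, lam_bio, lam_rad, t_days, steps=20000):
    """Activity remaining in a single compartment after t_days,
    integrating dA/dt = -(lam_bio + lam_rad) * A with explicit Euler."""
    lam = lam_bio + lam_rad          # total removal rate (1/day)
    dt = t_days / steps
    a = a0
    for _ in range(steps):
        a -= lam * a * dt            # explicit Euler step
    return a

# 8.02 d radioactive half-life; 10 d biological half-life (assumed)
a = retention(1.0, math.log(2) / 10.0, math.log(2) / 8.02, t_days=5.0)
```

    The numerical result tracks the analytic solution A0 * exp(-(lam_bio + lam_rad) * t); multi-compartment models generalize this to a system of coupled linear ODEs with transfer rates between compartments.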

  14. A Comparison of Capability Assessment Using the LOGRAM and Dyna-METRIC Computer Models.

    Science.gov (United States)

    1983-09-01


  15. Reliability in the performance-based concept of fib Model Code 2010

    NARCIS (Netherlands)

    Bigaj-van Vliet, A.; Vrouwenvelder, T.

    2013-01-01

    The design philosophy of the new fib Model Code for Concrete Structures 2010 represents the state of the art with regard to performance-based approach to the design and assessment of concrete structures. Given the random nature of quantities determining structural behaviour, the assessment of

  16. ALICE-87 (Livermore). Precompound Nuclear Model Code. Version for Personal Computer IBM/AT

    International Nuclear Information System (INIS)

    Blann, M.

    1988-05-01

    The precompound nuclear model code ALICE-87 from the Lawrence Livermore National Laboratory (USA) was implemented for use on a personal computer. It is available on a set of high-density diskettes from the Data Bank of the Nuclear Energy Agency (Saclay) and the IAEA Nuclear Data Section. (author). Refs and figs

  17. Assessment of Programming Language Learning Based on Peer Code Review Model: Implementation and Experience Report

    Science.gov (United States)

    Wang, Yanqing; Li, Hang; Feng, Yuqiang; Jiang, Yu; Liu, Ying

    2012-01-01

    The traditional assessment approach, in which one single written examination counts toward a student's total score, no longer meets new demands of programming language education. Based on a peer code review process model, we developed an online assessment system called "EduPCR" and used a novel approach to assess the learning of computer…

  18. Overlaid Alice: a statistical model computer code including fission and preequilibrium models. [FORTRAN, cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Blann, M.

    1976-01-01

    The most recent edition of an evaporation code written earlier, with frequent updating and improvement. This version replaces the Alice version described previously. A brief summary is given of the types of calculations that can be done. A listing of the code and the results of several sample calculations are presented. (JFP)

  19. Comparison of the containment codes used in the benchmark exercise from the modelling and numerical treatment point of view

    International Nuclear Information System (INIS)

    Washby, V.

    1987-01-01

    This report is the subject of a study contract sponsored by the containment loading and response group (CONT), a sub-group of the safety working group of the fast reactor co-ordinating committee - CEC. The analyses provided here will form part of a final report on containment codes, sensitivity analysis, and benchmark comparisons performed by the group in recent years. The contribution of this study contract is to assess the six different containment codes used in the benchmark comparison with regard to their procedures and methods, and also to assess their benchmark calculation results, so that an overall assessment of their effectiveness for use in containment problems can be made. Each code description, which has been provided by the relevant user, contains a large amount of detailed information and a large number of equations, which would be unwieldy and probably unnecessary to reproduce. For this reason the report concentrates on a fuller description of the SEURBNUK code, this being the code most familiar to the author; the other code descriptions concentrate on noting variations and differences. Also, the code SEURBNUK/EURDYN has been used for the sensitivity analysis, this code being an extension of the original SEURBNUK with the addition of axisymmetric finite element capabilities. The six containment codes described and assessed in this report are those that were being actively used within the European Community at the time

  20. Reduced-order LPV model of flexible wind turbines from high fidelity aeroelastic codes

    DEFF Research Database (Denmark)

    Adegas, Fabiano Daher; Sønderby, Ivan Bergquist; Hansen, Morten Hartvig

    2013-01-01

    of high-order linear time-invariant (LTI) models. Firstly, the high-order LTI models are locally approximated using modal and balanced truncation and residualization. Then, an appropriate coordinate transformation is applied to allow interpolation of the model matrices between points in the parameter space. The obtained LPV model is of suitable size for designing modern gain-scheduling controllers based on recently developed LPV control design techniques. Results are thoroughly assessed on a set of industrial wind turbine models generated by the recently developed aeroelastic code HAWCStab2.
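    The balanced-truncation step mentioned above can be sketched on a toy stable LTI model: Gramians from Lyapunov equations, square-root balancing, and truncation of small Hankel singular values. The system matrices below are illustrative, not a wind turbine model, and this is a generic textbook procedure rather than the paper's exact reduction.

```python
import numpy as np

def lyap(A, Q):
    """Solve A X + X A^T + Q = 0 (A stable) by Kronecker vectorization."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    x = np.linalg.solve(M, -Q.flatten(order="F"))
    X = x.reshape((n, n), order="F")
    return (X + X.T) / 2.0               # enforce symmetry

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of (A, B, C) to order r."""
    P = lyap(A, B @ B.T)                 # controllability Gramian
    Qo = lyap(A.T, C.T @ C)              # observability Gramian
    Lp = np.linalg.cholesky(P)
    Lq = np.linalg.cholesky(Qo)
    U, s, Vt = np.linalg.svd(Lq.T @ Lp)  # s = Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    T = Lp @ Vt[:r].T @ S                # balancing projectors
    Ti = S @ U[:, :r].T @ Lq.T
    return Ti @ A @ T, Ti @ B, C @ T, s

A = np.diag([-1.0, -2.0, -50.0])         # third mode fast and weakly coupled
B = np.array([[1.0], [1.0], [0.01]])
C = np.array([[1.0, 1.0, 0.01]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
dc_full = (-(C @ np.linalg.solve(A, B))).item()    # DC gain, full model
dc_red = (-(Cr @ np.linalg.solve(Ar, Br))).item()  # DC gain, reduced model
```

    Because the discarded Hankel singular value is tiny, the reduced second-order model reproduces the full model's DC gain almost exactly; applying such a local reduction at each operating point, followed by a consistent coordinate choice, is what makes the interpolation into an LPV model feasible.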