Sample records for modeling codes capable

  1. Current Capabilities of the Fuel Performance Modeling Code PARFUME

    Energy Technology Data Exchange (ETDEWEB)

    G. K. Miller; D. A. Petti; J. T. Maki; D. L. Knudson


    The success of gas reactors depends upon the safety and quality of the coated particle fuel. A fuel performance modeling code (called PARFUME), which simulates the mechanical and physico-chemical behavior of fuel particles during irradiation, is under development at the Idaho National Engineering and Environmental Laboratory. Among current capabilities in the code are: 1) various options for calculating CO production and fission product gas release, 2) a thermal model that calculates a time-dependent temperature profile through a pebble bed sphere or a prismatic block core, as well as through the layers of each analyzed particle, 3) simulation of multi-dimensional particle behavior associated with cracking in the IPyC layer, partial debonding of the IPyC from the SiC, particle asphericity, kernel migration, and thinning of the SiC caused by interaction of fission products with the SiC, 4) two independent methods for determining particle failure probabilities, 5) a model for calculating release-to-birth (R/B) ratios of gaseous fission products, that accounts for particle failures and uranium contamination in the fuel matrix, and 6) the evaluation of an accident condition, where a particle experiences a sudden change in temperature following a period of normal irradiation. This paper presents an overview of the code.
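The record above lists two independent methods for determining particle failure probabilities. As a purely illustrative sketch (the Weibull parameters, applied stress, and single-layer treatment are invented for this example and are not PARFUME's models), a Monte Carlo failure fraction for a coating layer could be estimated like this:

```python
import math
import random

def weibull_failure_probability(stress_mpa, sigma0_mpa=350.0, modulus=6.0,
                                n_samples=100_000, seed=42):
    """Monte Carlo estimate: fraction of particles whose sampled Weibull
    strength falls below the applied stress."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        # Inverse-CDF sampling of a two-parameter Weibull strength
        strength = sigma0_mpa * (-math.log(1.0 - rng.random())) ** (1.0 / modulus)
        if strength < stress_mpa:
            failures += 1
    return failures / n_samples

def weibull_cdf(stress_mpa, sigma0_mpa=350.0, modulus=6.0):
    """Analytic failure probability for cross-checking the sampled estimate."""
    return 1.0 - math.exp(-((stress_mpa / sigma0_mpa) ** modulus))

p_mc = weibull_failure_probability(300.0)
p_exact = weibull_cdf(300.0)
```

With these illustrative numbers the sampled and analytic estimates agree to within Monte Carlo noise (about 0.33), which is the kind of consistency check one would want between two independent failure-probability methods.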

  2. Relativistic modeling capabilities in PERSEUS extended MHD simulation code for HED plasmas

    Energy Technology Data Exchange (ETDEWEB)

Hamlin, Nathaniel D. (438 Rhodes Hall, Cornell University, Ithaca, NY 14853, United States); Seyler, Charles E. (Cornell University, Ithaca, NY 14853, United States)


    We discuss the incorporation of relativistic modeling capabilities into the PERSEUS extended MHD simulation code for high-energy-density (HED) plasmas, and present the latest hybrid X-pinch simulation results. The use of fully relativistic equations enables the model to remain self-consistent in simulations of such relativistic phenomena as X-pinches and laser-plasma interactions. By suitable formulation of the relativistic generalized Ohm’s law as an evolution equation, we have reduced the recovery of primitive variables, a major technical challenge in relativistic codes, to a straightforward algebraic computation. Our code recovers expected results in the non-relativistic limit, and reveals new physics in the modeling of electron beam acceleration following an X-pinch. Through the use of a relaxation scheme, relativistic PERSEUS is able to handle nine orders of magnitude in density variation, making it the first fluid code, to our knowledge, that can simulate relativistic HED plasmas.
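As a minimal illustration of why an algebraic primitive-variable recovery matters: in the simplest cold-fluid case, recovering the three-velocity from a spatial four-velocity component is pure algebra with no iterative root-finding. (This toy special-relativistic example is chosen for clarity only; it is not the generalized Ohm's law formulation used in PERSEUS.)

```python
import math

C = 299_792_458.0  # speed of light in m/s

def recover_velocity(u):
    """Algebraically recover the three-velocity v and Lorentz factor gamma
    from the spatial four-velocity component u = gamma * v."""
    gamma = math.sqrt(1.0 + (u / C) ** 2)
    return u / gamma, gamma

v, gamma = recover_velocity(2.0 * C)  # relativistic four-velocity u = 2c
```

Here gamma = sqrt(5) and v ≈ 0.894c; no matter how large u grows, the recovered v stays below c, which is why evolving such quantities keeps the recovery step well-behaved.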

  3. Verification of the predictive capabilities of the 4C code cryogenic circuit model (United States)

Zanino, R.; Bonifetto, R.; Hoa, C.; Savoldi Richard, L.


The 4C code was developed to model thermal-hydraulics in superconducting magnet systems and related cryogenic circuits. It consists of three coupled modules: a quasi-3D thermal-hydraulic model of the winding; a quasi-3D model of heat conduction in the magnet structures; and an object-oriented, acausal model of the cryogenic circuit. Over the last couple of years the code and its modules have undergone a series of validation exercises against experimental data, including data from the supercritical He loop HELIOS at CEA Grenoble. However, all of this analysis was performed after the experiments had been run. In this paper a first demonstration is given of the predictive capabilities of the 4C code cryogenic circuit module. To that end, a set of ad hoc experimental scenarios was designed, including different heating and control strategies, and simulations with the cryogenic circuit module of 4C were performed before the experiments. The comparison presented here between the code predictions and the HELIOS measurements gives the first proof of the excellent predictive capability of the 4C code cryogenic circuit module.

  4. Present capabilities and new developments in antenna modeling with the numerical electromagnetics code NEC

    Energy Technology Data Exchange (ETDEWEB)

    Burke, G.J.


Computer modeling of antennas, since its start in the late 1960s, has become a powerful and widely used tool for antenna design. Computer codes have been developed based on the Method of Moments, the Geometrical Theory of Diffraction, or direct integration of Maxwell's equations. Of such tools, the Numerical Electromagnetics Code-Method of Moments (NEC) has become one of the most widely used codes for modeling resonant-sized antennas. There are several reasons for this, including the systematic updating and extension of its capabilities, extensive user-oriented documentation, and the accessibility of its developers for user assistance. The result is that there are estimated to be several hundred users of various versions of NEC worldwide. 23 refs., 10 figs.

  5. Capabilities of the ATHENA computer code for modeling the SP-100 space reactor concept (United States)

    Fletcher, C. D.


The capability to perform thermal-hydraulic analyses of an SP-100 space reactor was demonstrated using the ATHENA computer code. The preliminary General Electric SP-100 design was modeled with ATHENA. The model simulates the fast reactor, liquid-lithium coolant loops, and lithium-filled heat pipes of this design. Two ATHENA demonstration calculations were performed simulating accident scenarios. A mask for the SP-100 model and an interface with the Nuclear Plant Analyzer (NPA) were developed, allowing a graphic display of the calculated results on the NPA.

  6. Validation and comparison of two-phase flow modeling capabilities of CFD, sub channel and system codes by means of post-test calculations of BFBT transient tests

    Energy Technology Data Exchange (ETDEWEB)

Jaeger, Wadim; Manes, Jorge Perez; Imke, Uwe; Escalante, Javier Jimenez; Espinoza, Victor Sanchez


Highlights: • Simulation of BFBT turbine and pump transients at multiple scales. • CFD, sub-channel and system codes are used for the comparative study. • Heat transfer models are compared to identify differences between the code predictions. • All three scales predict results in good agreement with experiment. • Subcooled boiling models are identified as a field for future research. -- Abstract: The Institute for Neutron Physics and Reactor Technology (INR) at the Karlsruhe Institute of Technology (KIT) is involved in the validation and qualification of modern thermal-hydraulic simulation tools at various scales. In the present paper, the prediction capabilities of four codes from three different scales – NEPTUNE_CFD as a fine-mesh computational fluid dynamics code, SUBCHANFLOW and COBRA-TF as sub-channel codes, and TRACE as a system code – are assessed with respect to their two-phase flow modeling capabilities. The subject of the investigations is the well-known and widely used database provided within the NUPEC BFBT benchmark related to BWRs. Void fraction measurements simulating a turbine trip and a re-circulation pump trip are provided at several axial levels of the bundle. The prediction capabilities of the codes for transient conditions with various combinations of boundary conditions are validated by comparing the code predictions with the experimental data. In addition, the physical models of the different codes are described and compared to each other in order to explain the different results and to identify areas for further improvement.

  7. Development of covariance capabilities in EMPIRE code

    Energy Technology Data Exchange (ETDEWEB)

Herman, M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.


The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as the primary source of information on uncertainties at low energies. Care is taken to ensure consistency between the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or generalized least-squares fitting procedures are employed to incorporate experimental information. We compare the two approaches by analyzing results for the major reaction channels of ⁸⁹Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
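The stochastic (Monte Carlo) propagation mentioned above can be sketched generically: sample the model parameters, evaluate the model at each energy point, and accumulate a covariance matrix of the outputs. The two-parameter linear "cross-section model" below is a stand-in invented for illustration, not an EMPIRE calculation:

```python
import random

def monte_carlo_covariance(model, param_means, param_sigmas,
                           n_samples=5000, seed=1):
    """Propagate independent Gaussian parameter uncertainties into a
    covariance matrix of the model output by brute-force sampling."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        params = [rng.gauss(m, s) for m, s in zip(param_means, param_sigmas)]
        samples.append(model(params))
    n_pts = len(samples[0])
    means = [sum(s[i] for s in samples) / n_samples for i in range(n_pts)]
    return [[sum((s[i] - means[i]) * (s[j] - means[j]) for s in samples)
             / (n_samples - 1) for j in range(n_pts)] for i in range(n_pts)]

def toy_model(p):
    """Stand-in 'cross section' that is linear in two parameters."""
    a, b = p
    return [a + b * energy for energy in (1.0, 2.0, 3.0)]

cov = monte_carlo_covariance(toy_model, param_means=[1.0, 0.5],
                             param_sigmas=[0.1, 0.05])
```

For this linear toy model the exact answer is cov[i][j] = σa² + Ei·Ej·σb², so the sampled matrix can be verified term by term before trusting the machinery on a nonlinear model.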

  8. Establishment and assessment of code scaling capability (United States)

    Lim, Jaehyok

    In this thesis, a method for using RELAP5/MOD3.3 (Patch03) code models is described to establish and assess the code scaling capability and to corroborate the scaling methodology that has been used in the design of the Purdue University Multi-Dimensional Integral Test Assembly for ESBWR applications (PUMA-E) facility. It was sponsored by the United States Nuclear Regulatory Commission (USNRC) under the program "PUMA ESBWR Tests". PUMA-E facility was built for the USNRC to obtain data on the performance of the passive safety systems of the General Electric (GE) Nuclear Energy Economic Simplified Boiling Water Reactor (ESBWR). Similarities between the prototype plant and the scaled-down test facility were investigated for a Gravity-Driven Cooling System (GDCS) Drain Line Break (GDLB). This thesis presents the results of the GDLB test, i.e., the GDLB test with one Isolation Condenser System (ICS) unit disabled. The test is a hypothetical multi-failure small break loss of coolant (SB LOCA) accident scenario in the ESBWR. The test results indicated that the blow-down phase, Automatic Depressurization System (ADS) actuation, and GDCS injection processes occurred as expected. The GDCS as an emergency core cooling system provided adequate supply of water to keep the Reactor Pressure Vessel (RPV) coolant level well above the Top of Active Fuel (TAF) during the entire GDLB transient. The long-term cooling phase, which is governed by the Passive Containment Cooling System (PCCS) condensation, kept the reactor containment system that is composed of Drywell (DW) and Wetwell (WW) below the design pressure of 414 kPa (60 psia). In addition, the ICS continued participating in heat removal during the long-term cooling phase. A general Code Scaling, Applicability, and Uncertainty (CSAU) evaluation approach was discussed in detail relative to safety analyses of Light Water Reactor (LWR). The major components of the CSAU methodology that were highlighted particularly focused on the

  9. User Instructions for the Systems Assessment Capability, Rev. 1, Computer Codes Volume 3: Utility Codes

    Energy Technology Data Exchange (ETDEWEB)

    Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.; Miley, Terri B.; Nichols, William E.; Strenge, Dennis L.


This document contains detailed user instructions for the suite of utility codes developed for Rev. 1 of the Systems Assessment Capability; these codes perform many supporting functions for the assessment calculations.

  10. Model Children's Code. (United States)

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  11. Towards Petaflops Capability of the VERTEX Supernova Code

    CERN Document Server

    Marek, Andreas; Hanke, Florian; Janka, Hans-Thomas


    The VERTEX code is employed for multi-dimensional neutrino-radiation hydrodynamics simulations of core-collapse supernova explosions from first principles. The code is considered state-of-the-art in supernova research and it has been used for modeling for more than a decade, resulting in numerous scientific publications. The computational performance of the code, which is currently deployed on several high-performance computing (HPC) systems up to the Tier-0 class (e.g. in the framework of the European PRACE initiative and the German GAUSS program), however, has so far not been extensively documented. This paper presents a high-level overview of the relevant algorithms and parallelization strategies and outlines the technical challenges and achievements encountered along the evolution of the code from the gigaflops scale with the first, serial simulations in 2000, up to almost petaflops capabilities, as demonstrated lately on the SuperMUC system of the Leibniz Supercomputing Centre (LRZ). In particular, we sh...

  12. Waste package performance assessment code with automated sensitivity-calculation capability

    Energy Technology Data Exchange (ETDEWEB)

    Worley, B.A.; Horwedel, J.E.


    WAPPA-C is a waste package performance assessment code that predicts the temporal and spatial extent of the loss of containment capability of a given waste package design. This code was enhanced by the addition of the capability to calculate the sensitivity of model results to any parameter. The GRESS automated procedure was used to add this capability in only two man-months of effort. The verification analysis of the enhanced code, WAPPAG, showed that the sensitivities calculated using GRESS were accurate to within the precision of perturbation results against which the sensitivities were compared. Sensitivities of all summary table values to eight diverse data values were verified.
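GRESS works by instrumenting existing Fortran source so that derivatives propagate alongside values. The same forward-mode idea can be shown in miniature with a dual-number class (a generic sketch of the technique, not the GRESS implementation; the toy model is invented):

```python
class Dual:
    """Minimal forward-mode automatic differentiation value:
    carries f and df/dx together through arithmetic."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)
    __rmul__ = __mul__

def toy_result(x):
    """Stand-in model result whose sensitivity to the input we want."""
    return 3.0 * x * x + 2.0 * x + 1.0

x = Dual(2.0, 1.0)      # seed the chosen parameter's derivative with 1
result = toy_result(x)  # result.value = 17.0, result.deriv = 6*2 + 2 = 14.0
```

One pass through the unmodified model function yields both the result and its exact sensitivity, which is the appeal of automated approaches like GRESS over hand-coded perturbation runs.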

  13. Reactivity Insertion Accident (RIA) Capability Status in the BISON Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, Richard L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Folsom, Charles Pearson [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pastore, Giovanni [Idaho National Lab. (INL), Idaho Falls, ID (United States); Veeraraghavan, Swetha [Idaho National Lab. (INL), Idaho Falls, ID (United States)


One of the Challenge Problems being considered within CASL relates to modeling and simulation of Light Water Reactor (LWR) fuel under Reactivity Insertion Accident (RIA) conditions. BISON is the fuel performance code used within CASL for LWR fuel under both normal operating and accident conditions, and thus must be capable of addressing the RIA challenge problem. This report outlines the BISON capabilities required for RIAs and describes the current status of the code. Information on recent accident capability enhancements, application of BISON to a RIA benchmark exercise, and plans for validation against RIA behavior is included.

  14. System Reliability Analysis Capability and Surrogate Model Application in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Rabiti, Cristian; Alfonsi, Andrea; Huang, Dongli; Gleicher, Frederick; Wang, Bei; Adbel-Khalik, Hany S.; Pascucci, Valerio; Smith, Curtis L.


This report collects the work performed to improve the reliability analysis capabilities of the RAVEN code and to explore new opportunities in the use of surrogate models, by extending the current RAVEN capabilities to multi-physics surrogate models and the construction of surrogate models for high-dimensionality fields.
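A minimal sketch of the surrogate-model idea (not RAVEN's actual machinery; the linear stand-in model and sample points are illustrative assumptions): sample an "expensive" model a few times, fit a cheap stand-in, and query the stand-in thereafter.

```python
def build_linear_surrogate(model, xs):
    """Least-squares linear surrogate y ~= a + b*x trained on model samples."""
    ys = [model(x) for x in xs]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return lambda x: a + b * x

# Stand-in for an expensive physics calculation (invented for illustration)
expensive_model = lambda power: 0.5 + 0.02 * power

surrogate = build_linear_surrogate(expensive_model, [0.0, 50.0, 100.0])
```

Because the underlying stand-in is itself linear, the surrogate reproduces it exactly; for real multi-physics models one would instead measure the surrogate's error on held-out samples.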

  15. Development of an integrated thermal-hydraulics capability incorporating RELAP5 and PANTHER neutronics code

    Energy Technology Data Exchange (ETDEWEB)

    Page, R.; Jones, J.R.


Ensuring that safety analysis needs are met in the future is likely to lead to the development of new codes and the further development of existing codes. It is therefore advantageous to define standards for data interfaces and to develop software interfacing techniques which can readily accommodate changes when they are made. Defining interface standards is beneficial but is necessarily restricted in application if future requirements are not known in detail. Code interfacing methods are of particular relevance with the move towards automatic grid frequency response operation, where the integration of plant dynamic, core follow and fault study calculation tools is considered advantageous. This paper describes the background and features of a new code, TALINK (Transient Analysis code LINKage program), used to provide a flexible interface linking the RELAP5 thermal-hydraulics code with the PANTHER neutron kinetics code and the SIBDYM whole-plant dynamic modelling code used by Nuclear Electric. The complete package enables the codes to be executed in parallel and provides an integrated whole-plant thermal-hydraulics and neutron kinetics model. In addition the paper discusses the capabilities and pedigree of the component codes used to form the integrated transient analysis package and the details of the calculation of a postulated Sizewell B loss-of-offsite-power fault transient.

  16. Alquimia: Exposing mature biogeochemistry capabilities for easier benchmarking and development of next-generation subsurface codes (United States)

    Johnson, J. N.; Molins, S.


    The complexity of subsurface models is increasing in order to address pressing scientific questions in hydrology and climate science. In particular, models that attempt to explore the coupling between microbial metabolic activity and hydrology at larger scales need an accurate representation of their underlying biogeochemical systems. These systems tend to be very complicated, and they result in large nonlinear systems that have to be coupled with flow and transport algorithms in reactive transport codes. The complexity inherent in implementing a robust treatment of biogeochemistry is a significant obstacle in the development of new codes. Alquimia is an open-source software library intended to help developers of these codes overcome this obstacle by exposing tried-and-true biogeochemical capabilities in existing software. It provides an interface through which a reactive transport code can access and evolve a chemical system, using one of several supported geochemical "engines." We will describe Alquimia's current capabilities, and how they can be used for benchmarking reactive transport codes. We will also discuss upcoming features that will facilitate the coupling of biogeochemistry to other processes in new codes.
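The engine abstraction described above can be sketched as a small interface: the transport code owns the grid and time stepping, and delegates chemistry in each cell to a pluggable engine. Class and method names below are illustrative assumptions; Alquimia's real interface is a C/Fortran API carrying far richer state (aqueous complexes, minerals, sorption, and so on):

```python
from abc import ABC, abstractmethod

class GeochemistryEngine(ABC):
    """Uniform interface a reactive transport code could call, in the
    spirit of a pluggable-engine design (names are illustrative)."""
    @abstractmethod
    def react(self, concentrations, dt):
        """Advance the chemical state of one grid cell by dt."""

class FirstOrderDecayEngine(GeochemistryEngine):
    """Trivial stand-in engine: every species decays with rate k."""
    def __init__(self, k):
        self.k = k
    def react(self, concentrations, dt):
        # Implicit Euler step for dc/dt = -k*c
        return [c / (1.0 + self.k * dt) for c in concentrations]

def chemistry_step(engine, cells, dt):
    """Operator-split step: (transport omitted) then cell-by-cell chemistry."""
    return [engine.react(c, dt) for c in cells]

cells = [[1.0, 2.0], [0.5, 0.25]]
cells = chemistry_step(FirstOrderDecayEngine(k=1.0), cells, dt=1.0)
```

Because the transport code only sees the abstract interface, swapping one geochemical engine for another (or benchmarking two against each other) requires no change to the flow and transport machinery.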

  17. Numerical modeling capabilities to predict repository performance

    Energy Technology Data Exchange (ETDEWEB)


This report presents a summary of current numerical modeling capabilities that are applicable to the design and performance evaluation of underground repositories for the storage of nuclear waste. The report includes codes that are available in-house, within Golder Associates and Lawrence Livermore Laboratories, as well as those that are generally available within the industry and universities. The first listing of programs covers in-house codes in the subject areas of hydrology, solute transport, thermal and mechanical stress analysis, and structural geology. The second listing of programs is divided by subject into the following categories: site selection, structural geology, mine structural design, mine ventilation, hydrology, and mine design/construction/operation. These programs are not specifically designed for use in the design and evaluation of an underground repository for nuclear waste, but several or most of them may be so used.

  18. GPEC, a real-time capable Tokamak equilibrium code

    CERN Document Server

    Rampp, Markus; Fischer, Rainer


A new parallel equilibrium reconstruction code for tokamak plasmas is presented. GPEC computes equilibrium flux distributions sufficiently accurately to derive parameters for plasma control within 1 ms of runtime, which enables real-time applications at the ASDEX Upgrade experiment (AUG) and other machines with a control cycle of at least this length. The underlying algorithms are based on the well-established offline-analysis code CLISTE, following the classical concept of iteratively solving the Grad-Shafranov equation and feeding in diagnostic signals from the experiment. The new code adopts a hybrid parallelization scheme for computing the equilibrium flux distribution and extends the fast, shared-memory-parallel Poisson solver which we have described previously by a distributed computation of the individual Poisson problems corresponding to different basis functions. The code is based entirely on open-source software components and runs on standard server hardware and software environments. The real-...

  19. Development, Verification and Validation of Enclosure Radiation Capabilities in the CHarring Ablator Response (CHAR) Code (United States)

    Salazar, Giovanni; Droba, Justin C.; Oliver, Brandon; Amar, Adam J.


With the recent development of multi-dimensional thermal protection system (TPS) material response codes, the capability to account for radiative heating has become a requirement. This paper presents recent efforts to implement such capabilities in the CHarring Ablator Response (CHAR) code developed at NASA's Johnson Space Center. This work also describes the different numerical methods implemented in the code to compute view factors for radiation problems involving multiple surfaces. Furthermore, verification and validation of the code's radiation capabilities are demonstrated by comparing solutions to analytical results, to other codes, and to radiant test data.

  20. NGNP Component Test Capability Design Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    S.L. Austad; D.S. Ferguson; L.E. Guillen; C.W. McKnight; P.J. Petersen


    The Next Generation Nuclear Plant Project is conducting a trade study to select a preferred approach for establishing a capability whereby NGNP technology development testing—through large-scale, integrated tests—can be performed for critical HTGR structures, systems, and components (SSCs). The mission of this capability includes enabling the validation of interfaces, interactions, and performance for critical systems and components prior to installation in the NGNP prototype.

  1. Extending the code generation capabilities of the Together CASE tool to support Data Definition languages

    CERN Document Server

    Marino, M


Together is the recommended software development tool in the Atlas collaboration. The programmatic API, which provides the capability to use and augment Together's internal functionality, comprises three major components: IDE, RWI and SCI. IDE is a read-only interface used to generate custom outputs based on the information contained in a Together model. RWI allows one both to extract information from and to write information to a Together model. SCI is the Source Code Interface; as the name implies, it allows one to work at the level of the source code. Together is extended by writing modules (Java classes) that make extensive use of the relevant API. We exploited Together's extensibility to add support for the Atlas Dictionary Language (ADL), an extended subset of OMG IDL. The implemented module (ADLModule) makes Together support ADL keywords, enables options and generates ADL object descriptions directly from UML class diagrams. The module thoroughly accesses a Together reverse engineered C++ project - and/or design only class ...

  2. Activity-based resource capability modeling

    Institute of Scientific and Technical Information of China (English)

    CHENG Shao-wu; XU Xiao-fei; WANG Gang; SUN Xue-dong


To analyse and optimize an enterprise process in a wide scope, an activity-based method of modeling resource capabilities is presented. It models resource capabilities by means of the same structure as an activity; that is, resource capabilities are defined by input objects, actions and output objects. A set of activity-based resource capability modeling rules and matching rules between an activity and a resource are introduced. This method can be used to describe not only the capability of manufacturing tools, but also the capability of persons, applications, etc. It unifies the methods of modeling the capability of all kinds of resources in an enterprise and supports the optimization of the resource allocation of a process.
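A minimal sketch of the core idea (class names and the containment-style matching rule are illustrative assumptions; the paper's actual rules may be richer): both activities and resource capabilities share one structure of input objects, actions, and output objects, so matching reduces to set comparisons.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    """Shared structure for activities and resource capabilities:
    input objects, actions, output objects."""
    inputs: frozenset
    actions: frozenset
    outputs: frozenset

def resource_can_perform(resource: Capability, activity: Capability) -> bool:
    """Illustrative matching rule: the resource covers everything
    the activity requires."""
    return (activity.inputs <= resource.inputs
            and activity.actions <= resource.actions
            and activity.outputs <= resource.outputs)

# Invented example objects
drill_hole = Capability(frozenset({"steel blank"}), frozenset({"drill"}),
                        frozenset({"drilled part"}))
drill_press = Capability(frozenset({"steel blank", "aluminium blank"}),
                         frozenset({"drill", "ream"}),
                         frozenset({"drilled part", "reamed part"}))
milling_machine = Capability(frozenset({"steel blank"}), frozenset({"mill"}),
                             frozenset({"milled part"}))
```

The same structure can describe tools, people, or software applications, which is what lets a single allocation algorithm treat all resource kinds uniformly.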

  3. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.


The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.
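The six elements with four maturity levels lend themselves to a simple tabulation. The sketch below records a level of 0-3 per element; the "overall = minimum" convention is an assumption made for illustration, not something the PCMM report prescribes:

```python
PCMM_ELEMENTS = [
    "representation and geometric fidelity",
    "physics and material model fidelity",
    "code verification",
    "solution verification",
    "model validation",
    "uncertainty quantification and sensitivity analysis",
]

def pcmm_summary(scores):
    """Tabulate maturity levels (0-3) for the six PCMM elements.
    Unscored elements default to level 0."""
    unknown = set(scores) - set(PCMM_ELEMENTS)
    if unknown:
        raise ValueError(f"unknown PCMM element(s): {unknown}")
    levels = {e: scores.get(e, 0) for e in PCMM_ELEMENTS}
    return {"levels": levels, "overall": min(levels.values())}

summary = pcmm_summary({"code verification": 2, "model validation": 1})
```

Taking the minimum makes the weakest element dominate the overall score, which matches the spirit of maturity assessment: a single unaddressed element limits confidence in the whole effort.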

  4. Modeling of Network Identification Capability. (United States)



  5. Exploiting the Error-Correcting Capabilities of Low Density Parity Check Codes in Distributed Video Coding using Optical Flow

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau; Søgaard, Jacob; Salmistraro, Matteo;


…Thereafter, methods for exploiting the error-correcting capabilities of the LDPCA code in DVC are investigated. The proposed frame interpolation includes a symmetric flow constraint to the standard forward-backward frame interpolation scheme, which improves quality and handling of large motion. The three…, an average bitrate saving of more than 40% is achieved compared to DISCOVER on Wyner-Ziv frames. In addition we also exploit and investigate the internal error-correcting capabilities of the LDPCA code in order to make it more robust to errors. We investigate how to achieve this goal by only modifying…

  6. Integrated simulation and modeling capability for alternate magnetic fusion concepts

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, B. I.; Hooper, E.B.; Jarboe, T. R.; LoDestro, L. L.; Pearlstein, L. D.; Prager, S. C.; Sarff, J. S.


This document summarizes a strategic study addressing the development of a comprehensive modeling and simulation capability for magnetic fusion experiments, with particular emphasis on devices that are alternatives to the mainline tokamak. A code development project in this area supports two defined strategic thrust areas in the Magnetic Fusion Energy Program: (1) comprehensive simulation and modeling of magnetic fusion experiments and (2) development, operation, and modeling of magnetic fusion alternate-concept experiments.

  7. The analysis of thermal-hydraulic models in MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

Kim, M. H.; Hur, C.; Kim, D. K.; Cho, H. J. [Pohang University of Science and Technology (POSTECH), Pohang (Korea, Republic of)]


The objective of the present work is to verify the prediction and analysis capability of the MELCOR code for the progression of severe accidents in light water reactors, and also to evaluate the appropriateness of the thermal-hydraulic models used in the MELCOR code. To achieve this objective, the results of experiments are compared with MELCOR calculations. Specifically, the comparison between the CORA-13 experiment and the MELCOR code calculation was performed.

  8. Cheetah: Starspot modeling code (United States)

    Walkowicz, Lucianne; Thomas, Michael; Finkestein, Adam


    Cheetah models starspots in photometric data (lightcurves) by calculating the modulation of a light curve due to starspots. The main parameters of the program are the linear and quadratic limb darkening coefficients, stellar inclination, spot locations and sizes, and the intensity ratio of the spots to the stellar photosphere. Cheetah uses uniform spot contrast and the minimum number of spots needed to produce a good fit and ignores bright regions for the sake of simplicity.
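A toy version of such a forward model, reduced to one equatorial dark spot on a star viewed equator-on with limb darkening omitted (Cheetah itself fits limb-darkening coefficients, inclination, and multiple spots; the numbers below are invented):

```python
import math

def spot_lightcurve(phases, spot_lon_deg, spot_area, contrast):
    """Relative flux of a rotating star with one equatorial dark spot,
    viewed equator-on; limb darkening omitted for brevity.
    contrast = spot/photosphere intensity ratio (< 1 for dark spots)."""
    lon = math.radians(spot_lon_deg)
    flux = []
    for phase in phases:
        mu = math.cos(2.0 * math.pi * phase - lon)  # foreshortening factor
        deficit = spot_area * (1.0 - contrast) * max(mu, 0.0)
        flux.append(1.0 - deficit)
    return flux

curve = spot_lightcurve([i / 100.0 for i in range(100)],
                        spot_lon_deg=0.0, spot_area=0.05, contrast=0.7)
```

The dip bottoms out when the spot crosses the line of sight (phase 0 here) and the curve sits flat at 1.0 while the spot is on the far hemisphere, which is the basic modulation signature such codes fit to photometric data.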

  9. Strategies for developing subchannel capability in an advanced system thermalhydraulic code: a literature review

    Energy Technology Data Exchange (ETDEWEB)

Cheng, J.; Rao, Y.F. [Canadian Nuclear Laboratories, Chalk River, Ontario (Canada)]


    In the framework of developing next generation safety analysis tools, Canadian Nuclear Laboratories (CNL) has planned to incorporate subchannel analysis capability into its advanced system thermalhydraulic code CATHENA 4. This paper provides a literature review and an assessment of current subchannel codes. It also evaluates three code-development methods: (i) static coupling of CATHENA 4 with the subchannel code ASSERT-PV, (ii) dynamic coupling of the two codes, and (iii) fully implicit implementation for a new, standalone CATHENA 4 version with subchannel capability. Results of the review and assessment suggest that the current ASSERT-PV modules can be used as the base for the fully implicit implementation of subchannel capability in CATHENA 4, and that this option may be the most cost-effective in the long run, resulting in savings in user application and maintenance costs. In addition, improved versatility of the tool could be accomplished by the addition of new features that could be added as part of its development. The new features would improve the capabilities of the existing subchannel code in handling low, reverse, and stagnant flows often encountered in system thermalhydraulic analysis. Therefore, the method of fully implicit implementation is preliminarily recommended for further exploration. A feasibility study will be performed in an attempt to extend the present work into a preliminary development plan. (author)

  10. Validation of Heat Transfer and Film Cooling Capabilities of the 3-D RANS Code TURBO (United States)

    Shyam, Vikram; Ameri, Ali; Chen, Jen-Ping


    The capabilities of the 3-D unsteady RANS code TURBO have been extended to include heat transfer and film cooling applications. The results of simulations performed with the modified code are compared to experiment and to theory, where applicable. Wilcox's k-ω turbulence model has been implemented to close the RANS equations. Two simulations are conducted: (1) flow over a flat plate and (2) flow over an adiabatic flat plate cooled by one hole inclined at 35° to the free stream. For (1), agreement with theory is found to be excellent for heat transfer, represented by the local Nusselt number, and quite good for momentum, as represented by the local skin friction coefficient. This report compares the local skin friction coefficients and Nusselt numbers on a flat plate obtained using Wilcox's k-ω model with the theory of Blasius. The study looks at laminar and turbulent flows over an adiabatic flat plate and over an isothermal flat plate for two different wall temperatures. It is shown that TURBO is able to accurately predict heat transfer on a flat plate. For (2), TURBO shows good qualitative agreement with film cooling experiments performed on a flat plate with one cooling hole. Quantitatively, film effectiveness is underpredicted downstream of the hole.
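
    The laminar flat-plate theory used as the benchmark above can be evaluated directly. A minimal sketch of the standard Blasius skin-friction and Pohlhausen Nusselt-number correlations follows; the Reynolds and Prandtl numbers are illustrative, not taken from the report:

```python
import math

def blasius_flat_plate(re_x, pr):
    """Laminar flat-plate correlations (Blasius / Pohlhausen):
    local skin friction coefficient and local Nusselt number."""
    cf = 0.664 / math.sqrt(re_x)                       # Cf = 0.664 / Re_x^0.5
    nu = 0.332 * math.sqrt(re_x) * pr ** (1.0 / 3.0)   # Nu_x = 0.332 Re_x^0.5 Pr^(1/3)
    return cf, nu

cf, nu = blasius_flat_plate(re_x=1.0e5, pr=0.71)
print(f"Cf = {cf:.5f}, Nu_x = {nu:.1f}")
```

    A CFD result such as TURBO's can then be compared pointwise against these values along the plate.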

  11. Business Models for Cost Sharing & Capability Sustainment (United States)


    business models need to adapt in a continuous process in most cases, notably the major platforms and technologies featured in this research. Demil and...capability or availability. The business model, as seen by Demil and Lecocq (2010), delivers dynamic consistency by ensuring that profitability of projects. Cambridge, UK: Cambridge University Press. Demil, B., & Lecocq, X. (2010). Business model evolution: In search of dynamic

  12. U.S. Sodium Fast Reactor Codes and Methods: Current Capabilities and Path Forward

    Energy Technology Data Exchange (ETDEWEB)

    Brunett, A. J.; Fanning, T. H.


    The United States has extensive experience with the design, construction, and operation of sodium cooled fast reactors (SFRs) over the last six decades. Despite the closure of various facilities, the U.S. continues to dedicate research and development (R&D) efforts to the design of innovative experimental, prototype, and commercial facilities. Accordingly, in support of the rich operating history and ongoing design efforts, the U.S. has been developing and maintaining a series of tools with capabilities that envelop all facets of SFR design and safety analyses. This paper provides an overview of the current U.S. SFR analysis toolset, including codes such as SAS4A/SASSYS-1, MC2-3, SE2-ANL, PERSENT, NUBOW-3D, and LIFE-METAL, as well as the higher-fidelity tools (e.g. PROTEUS) being integrated into the toolset. Current capabilities of the codes are described and key ongoing development efforts are highlighted for some codes.

  13. Capability maturity models for offshore organisational management. (United States)

    Strutt, J E; Sharp, J V; Terry, E; Miles, R


    The goal setting regime imposed by the UK safety regulator has important implications for an organisation's ability to manage health and safety related risks. Existing approaches to safety assurance based on risk analysis and formal safety assessments are increasingly considered unlikely to create the step-change improvement in safety to which the offshore industry aspires, and alternative approaches are being considered. One approach, which addresses the important issue of organisational behaviour and which can be applied at a very early stage of design, is the capability maturity model (CMM). The paper describes the development of a design safety capability maturity model, outlining the key processes considered necessary to safety achievement, the definition of maturity levels, and scoring methods. The paper discusses how CMM is related to regulatory mechanisms and risk-based decision making, together with the potential of applying CMM to environmental risk management.

  14. Capabilities and accessibility: a model for progress

    Directory of Open Access Journals (Sweden)

    Nick Tyler


    Full Text Available Accessibility is seen to be a core issue which relates directly to quality of life: if a person cannot reach and use a facility, they cannot take advantage of the benefits that the facility is seeking to provide. In some cases this is about being able to take part in an activity for enjoyment, but in others it is a question of the exercise of human rights – access to healthcare, education, voting and other citizens’ rights. This paper argues that such an equitable accessibility approach requires understanding of the relationships between the capabilities that a person has and the capabilities required of them by society in order to achieve the accessibility they seek. The Capabilities Model, which has been developed at UCL, is an attempt to understand this relationship, and the paper sets out an approach to quantifying the capabilities in a way that allows designers and implementers of environmental construction and operation to take a more robust approach to their decisions about providing accessibility.

  15. Milagro Version 2 An Implicit Monte Carlo Code for Thermal Radiative Transfer: Capabilities, Development, and Usage

    Energy Technology Data Exchange (ETDEWEB)

    T.J. Urbatsch; T.M. Evans


    We have released Version 2 of Milagro, an object-oriented, C++ code that performs radiative transfer using Fleck and Cummings' Implicit Monte Carlo method. Milagro, a part of the Jayenne program, is a stand-alone driver code used as a methods research vehicle and to verify its underlying classes. These underlying classes are used to construct Implicit Monte Carlo packages for external customers. Milagro-2 represents a design overhaul that allows better parallelism and extensibility. New features in Milagro-2 include verified momentum deposition, restart capability, graphics capability, exact energy conservation, and improved load balancing and parallel efficiency. A users' guide also describes how to configure, make, and run Milagro-2.

  16. Interfacial and Wall Transport Models for SPACE-CAP Code

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Soon Joon; Choo, Yeon Joon; Han, Tae Young; Hwang, Su Hyun; Lee, Byung Chul [FNC Tech., Seoul (Korea, Republic of); Choi, Hoon; Ha, Sang Jun [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)


    The development project for the domestic design code was launched to support the safety and performance analysis of pressurized light water reactors, and the CAP (Containment Analysis Package) code has been developed for containment safety and performance analysis side by side with SPACE. The CAP code treats three fields (gas, continuous liquid, and dispersed drop) for the assessment of containment-specific phenomena, and is featured by its multidimensional assessment capabilities. The thermal hydraulics solver has already been developed and is now being tested for stability and soundness. As a next step, interfacial and wall transport models were set up. In order to develop the best model and correlation package for the CAP code, various models currently used in major containment analysis codes (GOTHIC, CONTAIN 2.0, and CONTEMPT-LT) have been reviewed. The origins of the selected models used in these codes have also been examined to confirm that the models do not conflict with any proprietary rights. In addition, a literature survey of recent studies has been performed in order to incorporate better models into the CAP code. The models and correlations of SPACE were also reviewed. The CAP models and correlations comprise interfacial heat/mass and momentum transport models, and wall heat/mass and momentum transport models. This paper discusses those transport models in the CAP code.

  17. Impacts of Model Building Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Athalye, Rahul A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sivaraman, Deepak [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elliott, Douglas B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Bing [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bartlett, Rosemarie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)


    The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states whose codes differ fundamentally from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.
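
    The erosion of potential savings by delayed adoption and incomplete compliance described above can be sketched as simple arithmetic. All the numbers below are illustrative stand-ins, not BECP figures:

```python
def realized_savings(potential_ej, adoption_delay_yr, cycle_yr, compliance):
    """Toy erosion model: potential savings are scaled by the fraction of
    the code cycle remaining after the adoption delay, then by the
    compliance rate. All inputs are hypothetical, for illustration only."""
    fraction_of_cycle = max(0.0, 1.0 - adoption_delay_yr / cycle_yr)
    return potential_ej * fraction_of_cycle * compliance

# A 2-year adoption delay in a 3-year cycle with 80% compliance
# erodes about three quarters of a 1.0 EJ potential.
print(realized_savings(potential_ej=1.0, adoption_delay_yr=2, cycle_yr=3, compliance=0.8))
```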

  18. Object-Oriented MDAO Tool with Aeroservoelastic Model Tuning Capability (United States)

    Pak, Chan-gi; Li, Wesley; Lung, Shun-fat


    An object-oriented multi-disciplinary analysis and optimization (MDAO) tool has been developed at the NASA Dryden Flight Research Center to automate the design and analysis process and to leverage existing commercial as well as in-house codes, enabling true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. Once the structural analysis discipline is finalized and integrated completely into the MDAO process, other disciplines such as aerodynamics and flight controls will be integrated as well. Simple and efficient model tuning capabilities based on an optimization problem have been successfully integrated with the MDAO tool. Better synchronization of all phases of experimental testing (ground and flight), analytical model updating, high-fidelity simulations for model validation, and integrated design may reduce uncertainties in the aeroservoelastic model and increase flight safety.

  19. Conceptual Model of IT Infrastructure Capability and Its Empirical Justification

    Institute of Scientific and Technical Information of China (English)

    QI Xianfeng; LAN Boxiong; GUO Zhenwei


    Increasing importance has been attached to the value of information technology (IT) infrastructure in today's organizations. The development of efficacious IT infrastructure capability enhances business performance and brings sustainable competitive advantage. This study analyzed IT infrastructure capability in a holistic way and then presented a conceptual model of IT capability. IT infrastructure capability was categorized into sharing capability, service capability, and flexibility. The study then empirically tested the model using a set of survey data collected from 145 firms. Three factors emerged from the factor analysis, identified as IT flexibility, IT service capability, and IT sharing capability, which agree with those in the conceptual model built in this study.

  20. Assessing the Predictive Capability of the LIFEIV Nuclear Fuel Performance Code using Sequential Calibration

    Energy Technology Data Exchange (ETDEWEB)

    Stull, Christopher J. [Los Alamos National Laboratory; Williams, Brian J. [Los Alamos National Laboratory; Unal, Cetin [Los Alamos National Laboratory


    This report considers the problem of calibrating a numerical model to data from an experimental campaign (or series of experimental tests). The issue is that when an experimental campaign is proposed, only the input parameters associated with each experiment are known (i.e. outputs are not known because the experiments have yet to be conducted). Faced with such a situation, it would be beneficial from the standpoint of resource management to carefully consider the sequence in which the experiments are conducted. In this way, the resources available for experimental tests may be allocated in a way that best 'informs' the calibration of the numerical model. To address this concern, the authors propose decomposing the input design space of the experimental campaign into its principal components. Subsequently, the utility (to be explained) of each experimental test to the principal components of the input design space is used to formulate the sequence in which the experimental tests will be used for model calibration purposes. The results reported herein build on those presented and discussed in [1,2] wherein Verification & Validation and Uncertainty Quantification (VU) capabilities were applied to the nuclear fuel performance code LIFEIV. In addition to the raw results from the sequential calibration studies derived from the above, a description of the data within the context of the Predictive Maturity Index (PMI) will also be provided. The PMI [3,4] is a metric initiated and developed at Los Alamos National Laboratory to quantitatively describe the ability of a numerical model to make predictions in the absence of experimental data, where it is noted that 'predictions in the absence of experimental data' is not synonymous with extrapolation. This simply reflects the fact that resources do not exist such that each and every execution of the numerical model can be compared against experimental data. If such resources existed, the justification for
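
    The proposed decomposition of the input design space can be illustrated with a small numerical sketch: extract the principal components of a matrix of candidate experiment inputs, then rank experiments by the magnitude of their projection on the leading component. The data and this notion of "utility" are illustrative stand-ins for the report's formal definition:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 4))           # 12 candidate experiments, 4 input parameters

# Principal components of the centered input design space.
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)

# Illustrative utility: magnitude of each experiment's projection on the first PC.
utility = np.abs(Xc @ vt[0])
order = np.argsort(utility)[::-1]      # run the most "informative" experiments first
print(order)
```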

  1. Assessment of Static Delamination Propagation Capabilities in Commercial Finite Element Codes Using Benchmark Analysis (United States)

    Orifici, Adrian C.; Krueger, Ronald


    With capabilities for simulating delamination growth in composite materials becoming available, the need for benchmarking and assessing these capabilities is critical. In this study, benchmark analyses were performed to assess the delamination propagation simulation capabilities of the VCCT implementations in Marc™ and MD Nastran™. Benchmark delamination growth results for Double Cantilever Beam, Single Leg Bending and End Notched Flexure specimens were generated using a numerical approach. This numerical approach was developed previously, and involves comparing results from a series of analyses at different delamination lengths to a single analysis with automatic crack propagation. Specimens were analyzed with three-dimensional and two-dimensional models, and compared with previous analyses using Abaqus. The results demonstrated that the VCCT implementations in Marc™ and MD Nastran™ were capable of accurately replicating the benchmark delamination growth results, and that the use of numerical benchmarks offers advantages over benchmarking against experimental and analytical results.
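
    The quantity the VCCT implementations compute can be sketched with the standard one-step virtual crack closure estimate of the mode-I energy release rate, G_I = F·Δu / (2·Δa·b); the load, opening, element length, and width below are illustrative values, not from the benchmark specimens:

```python
def vcct_mode_i(force_n, opening_m, da_m, width_m):
    """One-step VCCT estimate of the mode-I energy release rate:
    G_I = F * du / (2 * da * b), with nodal force F at the crack tip,
    relative opening du behind the tip, element length da, and width b.
    The numbers passed below are illustrative only."""
    return force_n * opening_m / (2.0 * da_m * width_m)

g_i = vcct_mode_i(force_n=50.0, opening_m=2.0e-5, da_m=0.5e-3, width_m=25.0e-3)
print(f"G_I = {g_i:.1f} J/m^2")
```

    Comparing G_I against the material's critical value G_Ic is what drives the automatic propagation step in the benchmark procedure.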

  2. Overview of ASC Capability Computing System Governance Model

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott W. [Los Alamos National Laboratory


    This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.

  3. Evaluation of help model replacement codes

    Energy Technology Data Exchange (ETDEWEB)

    Whiteside, Tad [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hang, Thong [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Flach, Gregory [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)


    This work evaluates the computer codes that are proposed to be used to predict percolation of water through the closure cap and into the waste containment zone at Department of Energy closure sites. It compares the currently used water-balance code (HELP) with newly developed computer codes that use unsaturated flow (Richards’ equation). It provides a literature review of the HELP model and the proposed codes, which resulted in two codes being recommended for further evaluation: HYDRUS-2D3D and VADOSE/W. This further evaluation involved performing simulations on a simple model and comparing the results of those simulations to those obtained with the HELP code and to field data. From the results of this work, we conclude that the new codes perform nearly the same; moving forward, we recommend HYDRUS-2D3D.
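
    The water-balance approach that HELP represents can be sketched as a single-bucket storage model in which rainfall fills soil storage, evapotranspiration drains it, and the excess percolates downward. The capacity, forcing, and time stepping below are hypothetical, not site data:

```python
def percolation(rain_mm, et_mm, soil_mm, capacity_mm):
    """One time step of a toy HELP-style water balance: storage is updated
    by rain minus evapotranspiration, and anything above the storage
    capacity percolates. All parameters are illustrative."""
    soil_mm = max(0.0, soil_mm + rain_mm - et_mm)
    perc = max(0.0, soil_mm - capacity_mm)
    return soil_mm - perc, perc

store = 50.0           # initial soil storage (mm), hypothetical
total_perc = 0.0
for rain in [10, 0, 30, 5, 60]:          # illustrative daily rainfall (mm)
    store, perc = percolation(rain, et_mm=4.0, soil_mm=store, capacity_mm=80.0)
    total_perc += perc
print(round(total_perc, 1))              # total percolation over the period
```

    Richards'-equation codes such as HYDRUS-2D3D replace this lumped bucket with a spatially resolved unsaturated-flow solution.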

  4. Additions and improvements to the high energy density physics capabilities in the FLASH code (United States)

    Lamb, D. Q.; Flocke, N.; Graziani, C.; Tzeferacos, P.; Weide, K.


    FLASH is an open source, finite-volume Eulerian, spatially adaptive radiation magnetohydrodynamics code that has the capabilities to treat a broad range of physical processes. FLASH performs well on a wide range of computer architectures, and has a broad user base. Extensive high energy density physics (HEDP) capabilities have been added to FLASH to make it an open toolset for the academic HEDP community. We summarize these capabilities, emphasizing recent additions and improvements. In particular, we showcase the ability of FLASH to simulate the Faraday Rotation Measure produced by the presence of magnetic fields; and proton radiography, proton self-emission, and Thomson scattering diagnostics with and without the presence of magnetic fields. We also describe several collaborations with the academic HEDP community in which FLASH simulations were used to design and interpret HEDP experiments. This work was supported in part at the University of Chicago by the DOE NNSA ASC through the Argonne Institute for Computing in Science under field work proposal 57789; and the NSF under Grant PHY-0903997.

  5. Coupling a Basin Modeling and a Seismic Code using MOAB

    KAUST Repository

    Yan, Mi


    We report on a demonstration of loose multiphysics coupling between a basin modeling code and a seismic code running on a large parallel machine. Multiphysics coupling, which is one critical capability for a high performance computing (HPC) framework, was implemented using the MOAB open-source mesh and field database. MOAB provides for code coupling by storing mesh data and input and output field data for the coupled analysis codes and interpolating the field values between different meshes used by the coupled codes. We found it straightforward to use MOAB to couple the PBSM basin modeling code and the FWI3D seismic code on an IBM Blue Gene/P system. We describe how the coupling was implemented and present benchmarking results for up to 8 racks of Blue Gene/P with 8192 nodes and MPI processes. The coupling code is fast compared to the analysis codes and it scales well up to at least 8192 nodes, indicating that a mesh and field database is an efficient way to implement loose multiphysics coupling for large parallel machines.
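
    The field-interpolation role that MOAB plays between the two codes' meshes can be sketched in one dimension, assuming (for illustration only) a linear density profile passed from a coarse basin-code grid to a finer seismic-code grid:

```python
import numpy as np

# Field defined on the "basin" code's coarse grid (illustrative profile).
x_basin = np.linspace(0.0, 10.0, 11)
density = 2000.0 + 50.0 * x_basin

# Loose coupling step: interpolate the field onto the "seismic" code's finer
# grid -- the role a mesh/field database like MOAB plays between real meshes.
x_seis = np.linspace(0.0, 10.0, 101)
density_seis = np.interp(x_seis, x_basin, density)
print(density_seis[50])                 # value at x = 5.0
```

    In the actual coupling, this transfer happens on 3D parallel meshes, but the store-then-interpolate pattern is the same.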

  6. Real-time capable first principle based modelling of tokamak turbulent transport (United States)

    Citrin, Jonathan; Breton, Sarah; Felici, Federico; Imbeaux, Frederic; Redondo, Juan; Aniel, Thierry; Artaud, Jean-Francois; Baiocchi, Benedetta; Bourdelle, Clarisse; Camenen, Yann; Garcia, Jeronimo


    Transport in the tokamak core is dominated by turbulence driven by plasma microinstabilities. When calculating turbulent fluxes, maintaining both a first-principle-based model and computational tractability is a strong constraint. We present a pathway to circumvent this constraint by emulating quasilinear gyrokinetic transport code output through a nonlinear regression using multilayer perceptron neural networks. This recovers the original code output while accelerating the computing time by five orders of magnitude, allowing real-time applications. A proof-of-principle is presented based on the QuaLiKiz quasilinear transport model, using a training set of five input dimensions, relevant for ITG turbulence. The model is implemented in the RAPTOR real-time capable tokamak simulator, and simulates a 300 s ITER discharge in 10 s. Progress in generalizing the emulation to include 12 input dimensions is presented. This opens up new possibilities for interpretation of present-day experiments, scenario preparation and open-loop optimization, real-time controller design, real-time discharge supervision, and closed-loop trajectory optimization.
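
    The emulation strategy described above, regressing expensive transport-code output with a multilayer perceptron, can be sketched with a cheap stand-in target function. The network size, training settings, and two-input target below are illustrative, not QuaLiKiz's actual input space or the published network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for expensive transport-code output (the real target is a
# quasilinear gyrokinetic flux; this analytic surrogate is illustrative).
def expensive_model(x):
    return np.sin(x[:, :1]) + 0.5 * x[:, 1:2] ** 2

X = rng.uniform(-2, 2, size=(500, 2))
Y = expensive_model(X)

# One-hidden-layer perceptron trained by plain full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2
    dP = 2 * (P - Y) / len(X)            # gradient of mean squared error
    dW2 = H.T @ dP; db2 = dP.sum(0)
    dH = (dP @ W2.T) * (1 - H ** 2)      # backprop through tanh
    dW1 = X.T @ dH; db1 = dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
print(mse)
```

    Once trained, evaluating the network costs a few matrix multiplies, which is the source of the orders-of-magnitude speedup over running the original code.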

  7. Coding with partially hidden Markov models

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Rissanen, J.


    Partially hidden Markov models (PHMM) are introduced. They are a variation of hidden Markov models (HMM) combining the power of explicit conditioning on past observations with the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general 2-part coding scheme for given model order but unknown parameters, based on PHMM, is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters; a proof of convergence of this reestimation is given. The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given, and indicate that the PHMM can adapt...
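
    The flavor of combining a conditional model with arithmetic coding can be sketched by computing the ideal code length an adaptive, fully observed Markov model assigns to a binary string. A true PHMM adds hidden states on top of this explicit conditioning, which the toy below omits:

```python
from math import log2
from collections import defaultdict

def markov_code_length(bits, k=1):
    """Ideal (arithmetic-coding) code length, in bits, of a binary string
    under an adaptive order-k Markov model with a Laplace +1 prior."""
    counts = defaultdict(lambda: [1, 1])   # per-context counts of '0' and '1'
    ctx = "0" * k
    total = 0.0
    for b in bits:
        c0, c1 = counts[ctx]
        p = (c1 if b == "1" else c0) / (c0 + c1)
        total += -log2(p)                  # bits an ideal arithmetic coder spends
        counts[ctx][int(b)] += 1
        ctx = (ctx + b)[-k:]
    return total

# A highly predictable alternating string compresses far below its raw length.
print(round(markov_code_length("0101" * 32), 1), "bits for 128 symbols")
```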

  8. Transmutation Fuel Performance Code Thermal Model Verification

    Energy Technology Data Exchange (ETDEWEB)

    Gregory K. Miller; Pavel G. Medvedev


    The FRAPCON fuel performance code is being modified to model the performance of the nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the effort to verify the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model's temperature calculations agree with those of the commercial software ABAQUS (Version 6.4-4). This report outlines the methodology of the verification, the code input, and the calculation results.

  9. MC21 v.6.0 - A Continuous-Energy Monte Carlo Particle Transport Code with Integrated Reactor Feedback Capabilities (United States)

    Griesheimer, D. P.; Gill, D. F.; Nease, B. R.; Sutton, T. M.; Stedry, M. H.; Dobreff, P. S.; Carpenter, D. C.; Trumbull, T. H.; Caro, E.; Joo, H.; Millman, D. L.


    MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional models. The code supports neutron and photon transport in fixed source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21 has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code also supports many in-line reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy neutron/nucleus interaction physics over the range from 10^-5 eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering, pair production, and photoelectric interactions. All of the nuclear data required by MC21 is provided by the NDEX system of codes, which extracts and processes data from EPDL-, ENDF-, and ACE-formatted source files. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create spatial cells from first- and second-order surfaces. The system also allows models to be built up as hierarchical collections of previously defined spatial cells, with interior detail provided by grids and template overlays. Results are collected by a generalized tally capability which allows users to edit integral flux and reaction rate information. Results can be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score each
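
    The analog Monte Carlo idea underlying such transport codes can be sketched in a one-group, purely absorbing slab, where the transmitted fraction has the known answer exp(-Σt·d). The cross section and thickness below are illustrative values:

```python
import random

def transmission(mu_t, thickness, n=200_000, seed=7):
    """Toy analog Monte Carlo: fraction of particles crossing a purely
    absorbing slab. Free paths are sampled from the exponential
    distribution with total cross section mu_t (illustrative units)."""
    rng = random.Random(seed)
    crossed = sum(rng.expovariate(mu_t) > thickness for _ in range(n))
    return crossed / n

est = transmission(mu_t=1.0, thickness=2.0)
print(est)   # should approach exp(-2) ~ 0.1353 as n grows
```

    Production codes like MC21 layer continuous-energy physics, full 3D geometry tracking, and variance-reduced tallies on top of this same sampling loop.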

  10. Modelling and Analysis Capabilities for Lightweight Masts (United States)


    needed in DREA’s VAST finite element code in order to be able to analyse composite mast structures. Future Plans: The second phase of this...Results: The work performed for this project provides the basis for the future development of the MASTAS modelling tool in fiscal years...be able to analyse the structures of composite masts. Future plans: The second phase of this technology application project is currently in

  11. Generation of Java code from Alvis model (United States)

    Matyasik, Piotr; Szpyrka, Marcin; Wypych, Michał


    Alvis is a formal language that combines graphical modelling of the interconnections between system entities (called agents) with a high-level programming language for describing the behaviour of each individual agent. An Alvis model can be verified formally with model checking techniques applied to the model's LTS graph, which represents the model state space. This paper presents the transformation of an Alvis model into executable Java code. The approach thus provides a method for automatic generation of a Java application from a formally verified Alvis model.
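
    The model-to-code step can be sketched with a toy generator that emits a Java class skeleton from an agent description. The `Agent`/`Port` class names and the template are hypothetical stand-ins, not the actual Alvis translator's output:

```python
def agent_to_java(name, ports):
    """Sketch of model-to-code generation: emit a Java class skeleton for
    an Alvis-style agent. The runtime classes referenced (Agent, Port)
    and the template itself are illustrative assumptions."""
    fields = "\n".join(
        f"    private final Port {p} = new Port(\"{p}\");" for p in ports
    )
    return (f"public class {name} extends Agent {{\n"
            f"{fields}\n"
            f"    @Override public void run() {{ /* behaviour goes here */ }}\n"
            f"}}\n")

src = agent_to_java("Producer", ["out", "ack"])
print(src)
```

    A real generator would additionally translate each agent's verified behaviour specification into the body of `run()`.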

  12. NBC Hazard Prediction Model Capability Analysis (United States)


    Puff (SCIPUFF) Model Verification and Evaluation Study, Air Resources Laboratory, NOAA, May 1998. Based on the NOAA review, the VLSTRACK developers...TO SUBSTANTIAL DIFFERENCES IN PREDICTIONS HPAC uses a transport and dispersion (T&D) model called SCIPUFF and an associated mean wind field model... SCIPUFF is a model for atmospheric dispersion that uses the Gaussian puff method - an arbitrary time-dependent concentration field is represented

  13. Advanced Modeling, Simulation and Analysis (AMSA) Capability Roadmap Progress Review (United States)

    Antonsson, Erik; Gombosi, Tamas


    Contents include the following: NASA capability roadmap activity. Advanced modeling, simulation, and analysis overview. Scientific modeling and simulation. Operations modeling. Multi-spectral sensing (UV-gamma). System integration. M&S environments and infrastructure.

  14. Capabilities and accuracy of energy modelling software

    CSIR Research Space (South Africa)

    Osburn, L


    Full Text Available Energy modelling can be used in a number of different ways to fulfill different needs, including certification within building regulations or green building rating tools. Energy modelling can also be used in order to try and predict what the energy...

  15. Economic aspects and models for building codes

    DEFF Research Database (Denmark)

    Bonke, Jens; Pedersen, Dan Ove; Johnsen, Kjeld

    It is the purpose of this bulletin to present an economic model for estimating the consequence of new or changed building codes. The object is to allow comparative analysis in order to improve the basis for decisions in this field. The model is applied in a case study.

  16. Mathematical models for the EPIC code

    Energy Technology Data Exchange (ETDEWEB)

    Buchanan, H.L.


    EPIC is a fluid/envelope type computer code designed to study the energetics and dynamics of a high energy, high current electron beam passing through a gas. The code is essentially two dimensional (x, r, t) and assumes an axisymmetric beam whose r.m.s. radius is governed by an envelope model. Electromagnetic fields, background gas chemistry, and gas hydrodynamics (density channel evolution) are all calculated self-consistently as functions of r, x, and t. The code is a collection of five major subroutines, each of which is described in some detail in this report.

  17. A MATLAB based 3D modeling and inversion code for MT data (United States)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.


    The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: a grid generator code and a modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs the core computations in modular form: forward modeling, data functionals, sensitivity computations, and regularization. These modules can be readily extended to other similar inverse problems such as Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities, the results of inversion for two complex models are presented.
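
    The modular structure described (forward model, data functionals, sensitivity, regularization) can be sketched with a linear toy inverse problem driven by gradient descent. The linear operator and Tikhonov setup below are illustrative stand-ins for the real 3D MT solver and regularization:

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.normal(size=(20, 10))          # forward operator module (stand-in solver)
m_true = np.zeros(10); m_true[3] = 1.0
d_obs = G @ m_true                     # synthetic observed data

def misfit_grad(m, lam=0.1):
    """Gradient of 0.5*||Gm - d||^2 + 0.5*lam*||m||^2, assembled from the
    residual (data functional), sensitivity (G^T), and Tikhonov modules."""
    r = G @ m - d_obs
    return G.T @ r + lam * m

m = np.zeros(10)
for _ in range(500):                   # simple gradient-descent inversion loop
    m -= 0.01 * misfit_grad(m)
print(float(np.linalg.norm(G @ m - d_obs)))   # final data misfit
```

    Swapping in a nonlinear forward solver and a Gauss-Newton update, while keeping the same module boundaries, is the design the AP3DMT abstract describes.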

  18. Facility Modeling Capability Demonstration Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Key, Brian P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sadasivan, Pratap [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Fallgren, Andrew James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Demuth, Scott Francis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Aleman, Sebastian E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); de Almeida, Valmor F. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Chiswell, Steven R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hamm, Larry [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Tingey, Joel M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)


    A joint effort has been initiated by Los Alamos National Laboratory (LANL), Oak Ridge National Laboratory (ORNL), Savannah River National Laboratory (SRNL), and Pacific Northwest National Laboratory (PNNL), sponsored by the National Nuclear Security Administration’s (NNSA’s) Office of Proliferation Detection, to develop and validate a flexible framework for simulating effluents and emissions from spent fuel reprocessing facilities. These effluents and emissions can be measured by various on-site and/or off-site means, and then the inverse problem can ideally be solved through modeling and simulation to estimate characteristics of facility operation such as the nuclear material production rate. The flexible framework, called the Facility Modeling Toolkit, focuses on the forward modeling of PUREX reprocessing facility operating conditions, from fuel storage and chopping to effluent and emission measurements.

  19. Towards a national cybersecurity capability development model

    CSIR Research Space (South Africa)

    Jacobs, Pierre C


    Full Text Available on organisational strategy (D. Cooper, S. Dhiri & J. Root, 2012). There are many industry-standard operating models, such as the TM Forum’s Enhanced Telecom Operations Map (eTOM) Business Process Framework (Cisco Systems, 2009). Other industry-standard operating..., the eTOM Business Process Framework is chosen as the operating model for the E-CMIRC. The eTOM framework is a complete framework addressing marketing and sales; strategy, infrastructure and product; as well as operations and enterprise management...

  20. Computable general equilibrium model fiscal year 2014 capability development report

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Laboratory; Boero, Riccardo [Los Alamos National Laboratory


    This report provides an overview of the development of the NISAC CGE economic modeling capability since 2012. This capability enhances NISAC's economic modeling and analysis capabilities, enabling it to answer a broader set of questions than was possible with the previous economic analysis capability. In particular, CGE modeling captures how the different sectors of the economy (households, businesses, government, etc.) interact to allocate resources in an economy, and this approach captures these interactions when used to estimate the economic impacts of the kinds of events NISAC often analyzes.

  1. Real-time capable first principle based modelling of tokamak turbulent transport

    CERN Document Server

    Breton, S; Felici, F; Imbeaux, F; Aniel, T; Artaud, J F; Baiocchi, B; Bourdelle, C; Camenen, Y; Garcia, J


    A real-time capable core turbulence tokamak transport model is developed. This model is constructed by regularized nonlinear regression of quasilinear gyrokinetic transport code output. The regression is performed with a multilayer perceptron neural network. The transport code input for the neural network training set consists of five dimensions and is limited to adiabatic electrons. The neural network model successfully reproduces the transport fluxes predicted by the original quasilinear model, while gaining five orders of magnitude in computation time. The model is implemented in a real-time capable tokamak simulator and simulates a 300 s ITER discharge in 10 s. This proof of principle for regression-based transport models anticipates a significant widening of input space dimensionality and physics realism for future training sets, aiming to provide unprecedented computational speed coupled with first-principle-based physics for real-time control and integrated modelling applications.
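    The core idea above (regressing expensive transport-code output with a multilayer perceptron surrogate) can be sketched in miniature. This is not the authors' network or training set; it is a small NumPy MLP with one hidden layer, fitted by gradient descent to a made-up scalar "flux" function of five inputs, mirroring the five-dimensional input space mentioned in the abstract:

```python
import numpy as np

# Illustrative surrogate-regression sketch: a tiny one-hidden-layer MLP
# trained on a toy stand-in for transport-code output (not the real model).

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(256, 5))      # five input dimensions, as in the text
y = np.sin(X.sum(axis=1, keepdims=True))   # toy stand-in for code output

W1 = rng.standard_normal((5, 16)) * 0.5    # input -> hidden weights
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5    # hidden -> output weights
b2 = np.zeros(1)

lr = 0.05
losses = []
for _ in range(500):
    H = np.tanh(X @ W1 + b1)               # hidden activations
    pred = H @ W2 + b2
    err = pred - y
    losses.append(float((err ** 2).mean()))
    # manual backpropagation of the mean-squared-error loss
    dpred = 2 * err / len(X)
    dW2 = H.T @ dpred
    db2 = dpred.sum(axis=0)
    dH = (dpred @ W2.T) * (1 - H ** 2)     # tanh' = 1 - tanh^2
    dW1 = X.T @ dH
    db1 = dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

    Once trained, evaluating the network is a handful of matrix products, which is the source of the orders-of-magnitude speedup over rerunning the original code.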

  2. Distributed generation capabilities of the national energy modeling system

    Energy Technology Data Exchange (ETDEWEB)

    LaCommare, Kristina Hamachi; Edwards, Jennifer L.; Marnay, Chris


    the number of years to a positive cash flow. Some important technologies, e.g. thermally activated cooling, are absent, and ceilings on DG adoption are determined by somewhat arbitrary caps on the number of buildings that can adopt DG. These caps are particularly severe for existing buildings, where the maximum penetration for any one technology is 0.25 percent. On the other hand, competition among technologies is not fully considered, and this may result in double-counting for certain applications. A series of sensitivity runs shows greater penetration with net metering enhancements and aggressive tax credits, and a more limited response to lowered DG technology costs. Discussion of alternatives to the current code is presented in Section 4. Alternatives or improvements to how DG is modeled in NEMS cover three basic areas: expanding the existing total market for DG, both by changing existing parameters in NEMS and by adding new capabilities, such as for missing technologies; enhancing the cash flow analysis by incorporating aspects of DG economics that are not currently represented, e.g. complex tariffs; and using an external geographic information system (GIS) driven analysis that can better and more intuitively identify niche markets.

  3. A Thermo-Optic Propagation Modeling Capability.

    Energy Technology Data Exchange (ETDEWEB)

    Schrader, Karl; Akau, Ron


    A new theoretical basis is derived for tracing optical rays within a finite-element (FE) volume. The ray-trajectory equations are cast into the local element coordinate frame, and the full finite-element interpolation is used to determine the instantaneous index gradient for the ray-path integral equation. The FE methodology (FEM) is also used to interpolate local surface deformations and the surface normal vector for computing the refraction angle when launching rays into the volume, and again when rays exit the medium. The method is implemented in the Matlab(TM) environment and compared to closed-form gradient index models. A software architecture is also developed for implementing the algorithms in the Zemax(TM) commercial ray-trace application. A controlled thermal environment was constructed in the laboratory, and measured data were collected to validate the structural, thermal, and optical modeling methods.

  4. Model Description of TASS/SMR Code

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Y. D.; Yang, S. H.; Kim, S. H.; Lee, S. W.; Kim, H. K.; Yoon, H. Y.; Lee, G. H.; Bae, K. H.; Chung, Y. J


    TASS/SMR (Transient And Setpoint Simulation/System-integrated Modular Reactor) code has been developed for the safety analysis of the SMART-P reactor. The TASS/SMR code can be applied to the analysis of design basis accidents, including the small break loss of coolant accident, of the SMART research reactor. TASS/SMR models the primary and secondary systems using nodes and flow paths. A node represents the control volume, which defines the fluid mass and energy. A flow path connects the nodes to define the momentum of the fluid. The mass and energy conservation equations are applied to the nodes, and the momentum conservation equation is applied to the flow paths. In TASS/SMR, the governing equations are applied to both the primary and the secondary coolant systems and are solved simultaneously. The governing equations of TASS/SMR are based on the drift-flux model, so that accidents or transients accompanied by two-phase flow can be analyzed. Also, SMART-P reactor specific thermal-hydraulic models are incorporated, such as the non-condensable gas model, the helical steam generator heat transfer model, and the passive residual heat removal system (PRHRS) heat transfer model. This technical report describes the governing equations, solution method, and the thermal-hydraulic, reactor core, and control system models used in the TASS/SMR code. The steady-state simulation method and the calculation methods for the minimum CHFR and hottest fuel temperature are also described.
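    The node/flow-path discretization described above can be illustrated with a minimal sketch (hypothetical data structures, not the TASS/SMR implementation): nodes carry the conserved mass and energy of a control volume, while flow paths carry the mass flow that the momentum equation would supply, and an explicit update moves mass and the energy it convects between nodes:

```python
from dataclasses import dataclass

# Hypothetical illustration of a node/flow-path hydraulic network:
# nodes hold control-volume mass and energy; flow paths connect them.

@dataclass
class Node:
    mass: float      # kg, control-volume fluid mass
    energy: float    # J, control-volume fluid energy

@dataclass
class FlowPath:
    upstream: Node
    downstream: Node
    mdot: float      # kg/s, mass flow rate (from the momentum equation)

def advance(paths, dt):
    """Explicit update of the node mass and energy conservation equations."""
    for p in paths:
        dm = p.mdot * dt
        h = p.upstream.energy / p.upstream.mass  # specific energy convected
        p.upstream.mass -= dm
        p.downstream.mass += dm
        p.upstream.energy -= dm * h
        p.downstream.energy += dm * h

a, b = Node(10.0, 1000.0), Node(5.0, 400.0)
advance([FlowPath(a, b, mdot=1.0)], dt=1.0)
```

    By construction the update conserves total mass and energy across the network, which is the property the node/flow-path formulation is built to guarantee.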

  5. Mapping Initial Hydrostatic Models in Godunov Codes

    CERN Document Server

    Zingale, M A; Zu Hone, J; Calder, A C; Fryxell, B; Plewa, T; Truran, J W; Caceres, A; Olson, K; Ricker, P M; Riley, K; Rosner, R; Siegel, A; Timmes, F X; Vladimirova, N


    We look in detail at the process of mapping an astrophysical initial model from a stellar evolution code onto the computational grid of an explicit, Godunov-type code while maintaining hydrostatic equilibrium. This mapping process is common in astrophysical simulations, when it is necessary to follow short-timescale dynamics after a period of long-timescale buildup. We look at the effects of spatial resolution, boundary conditions, the treatment of the gravitational source terms in the hydrodynamics solver, and the initialization process itself. We conclude with a summary detailing the mapping process that yields the lowest ambient velocities in the mapped model.

  6. Source coding model for repeated snapshot imaging

    CERN Document Server

    Li, Junhui; Yang, Dongyue; Wu, Guohua; Yin, Longfei; Guo, Hong


    Imaging based on successive repeated snapshot measurement is modeled as a source coding process in information theory. The number of measurements necessary to maintain a certain error rate is characterized by the rate-distortion function of the source coding. A quantitative formula relating the error rate to the number of measurements is derived, based on the information capacity of the imaging system. A second-order fluctuation correlation imaging (SFCI) experiment with pseudo-thermal light verifies this formula, paving the way for introducing information theory into the study of ghost imaging (GI), both conventional and computational.
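    The rate-distortion function invoked above can be made concrete with the textbook special case (not the paper's imaging-specific formula): for a binary source with Hamming distortion, R(D) = H(p) - H(D) for 0 <= D <= min(p, 1-p), which lower-bounds the bits, and hence measurements, needed per symbol at error level D:

```python
from math import log2

# Textbook binary rate-distortion sketch (illustrative; the paper derives
# an imaging-specific relation, not reproduced here).

def H(x):
    """Binary entropy in bits."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def rate_distortion_binary(p, D):
    """R(D) = H(p) - H(D) for a Bernoulli(p) source, Hamming distortion."""
    if D >= min(p, 1 - p):
        return 0.0
    return H(p) - H(D)
```

    The monotone decrease of R(D) in D captures the trade-off the abstract describes: tolerating a higher error rate reduces the required number of measurements.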

  7. Cyber Capability Development Centre (CCDC): Proposed Governance Model (United States)


    Canada. Contract Report DRDC-RDDC-2014-C170, December 2013. Cyber Capability Development Centre (CCDC): Proposed governance model. Douglas... Figure 1: CCDC organization and infrastructure

  8. Verification and Validation of Heat Transfer Model of AGREE Code

    Energy Technology Data Exchange (ETDEWEB)

    Tak, N. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Seker, V.; Drzewiecki, T. J.; Downar, T. J. [Department of Nuclear Engineering and Radiological Sciences, Univ. of Michigan, Michigan (United States); Kelly, J. M. [US Nuclear Regulatory Commission, Washington (United States)


    The AGREE code was originally developed as a multi-physics simulation code to perform design and safety analysis of Pebble Bed Reactors (PBR). Currently, additional capability for the analysis of Prismatic Modular Reactor (PMR) cores is in progress. The newly implemented fluid model for a PMR core is based on a subchannel approach, which has been widely used in the analyses of light water reactor (LWR) cores. A hexagonal fuel (or graphite) block is discretized into triangular prism nodes having effective conductivities. Then, a meso-scale heat transfer model is applied to the unit cell geometry of a prismatic fuel block. Both unit cell geometries of multi-hole and pin-in-hole types of prismatic fuel blocks are considered in AGREE. The main objective of this work is to verify and validate the heat transfer model newly implemented for a PMR core in the AGREE code. The measured data from the HENDEL experiment were used for the validation of the heat transfer model for a pin-in-hole fuel block. However, the HENDEL tests were limited to steady-state conditions of pin-in-hole fuel blocks, and no experimental data are available regarding heat transfer in multi-hole fuel blocks. Therefore, numerical benchmarks using conceptual problems are considered to verify the heat transfer model of AGREE for multi-hole fuel blocks as well as transient conditions. The CORONA and GAMMA+ codes were used to compare the numerical results. In this work, the verification and validation study was performed for the heat transfer model of the AGREE code using the HENDEL experiment and the numerical benchmarks of selected conceptual problems. The results of the present work show that the heat transfer model of AGREE is accurate and reliable for prismatic fuel blocks. Further validation of AGREE is in progress for a whole-reactor problem using the HTTR safety test data, such as control rod withdrawal tests and loss-of-forced-convection tests.

  9. Capabilities needed for the next generation of thermo-hydraulic codes for use in real time applications

    Energy Technology Data Exchange (ETDEWEB)

    Arndt, S.A.


    The real-time reactor simulation field is currently at a crossroads in terms of the capability to perform real-time analysis using the most sophisticated computer codes. Current-generation safety analysis codes are being modified to replace the simplified codes that were specifically designed to meet the competing requirements of real-time applications. The next generation of thermo-hydraulic codes will need to include in their specifications the specific requirement for use in a real-time environment. Use of the codes in real-time applications imposes much stricter requirements on robustness, reliability, and repeatability than do design and analysis applications. In addition, the need for code use by a variety of users is a critical issue for real-time users, trainers, and emergency planners who currently use real-time simulation, and for PRA practitioners who will increasingly use real-time simulation to evaluate PRA success criteria in near real time, validating PRA results for specific configurations and plant system unavailabilities.

  10. TRIPOLI capabilities proved by a set of solved problems. [Monte Carlo neutron and gamma ray transport code

    Energy Technology Data Exchange (ETDEWEB)

    Vergnaud, T.; Nimal, J.C. (CEA Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France))


    The three-dimensional polykinetic Monte Carlo particle transport code TRIPOLI has been under development in the French Shielding Laboratory at Saclay since 1965. TRIPOLI-1 began running in 1970 and became TRIPOLI-2 in 1978; since then its capabilities have been improved and many studies have been performed. TRIPOLI can treat stationary or time-dependent problems in shielding and in neutronics. Some examples of solved problems are presented to demonstrate the many capabilities of the system. (author).

  11. Modeling peripheral olfactory coding in Drosophila larvae.

    Directory of Open Access Journals (Sweden)

    Derek J Hoare

    Full Text Available The Drosophila larva possesses just 21 unique and identifiable pairs of olfactory sensory neurons (OSNs), enabling investigation of the contribution of individual OSN classes to the peripheral olfactory code. We combined electrophysiological and computational modeling to explore the nature of the peripheral olfactory code in situ. We recorded firing responses of 19/21 OSNs to a panel of 19 odors. This was achieved by creating larvae expressing just one functioning class of odorant receptor, and hence OSN. Odor response profiles of each OSN class were highly specific and unique. However, many OSN-odor pairs yielded variable responses, some of which were statistically indistinguishable from background activity. We used these electrophysiological data, incorporating both responses and spontaneous firing activity, to develop a Bayesian decoding model of olfactory processing. The model was able to accurately predict odor identity from raw OSN responses; prediction accuracy ranged from 12%-77% (mean for all odors 45.2%) but was always significantly above chance (5.6%). However, there was no correlation between prediction accuracy for a given odor and the strength of responses of wild-type larvae to the same odor in a behavioral assay. We also used the model to predict the ability of the code to discriminate between pairs of odors. Some of these predictions were supported in a behavioral discrimination (masking) assay but others were not. We conclude that our model of the peripheral code represents basic features of odor detection and discrimination, yielding insights into the information available to higher processing structures in the brain.
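    The Bayesian decoding idea above can be sketched with a toy maximum-posterior classifier. The odors, OSN count, and firing rates below are invented for illustration (the study fitted its model to recorded responses and spontaneous activity); spike counts are modeled as Poisson and odor identity is decoded under a uniform prior:

```python
import math

# Toy Bayesian (maximum-posterior) odor decoder: each OSN's spike count is
# Poisson with an odor-specific mean rate. All rates here are hypothetical.

RATES = {  # mean spike counts for three OSNs under each made-up odor
    "odor_A": [20.0, 2.0, 5.0],
    "odor_B": [3.0, 18.0, 4.0],
    "odor_C": [6.0, 5.0, 15.0],
}

def log_poisson(k, lam):
    """Log-probability of observing k spikes given mean rate lam."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def decode(counts):
    """Return the odor with the highest posterior given OSN spike counts
    (uniform prior, so this is the maximum-likelihood odor)."""
    def loglik(odor):
        return sum(log_poisson(k, lam) for k, lam in zip(counts, RATES[odor]))
    return max(RATES, key=loglik)
```

    Because the likelihood includes the rate for every OSN, a decoder of this form naturally uses both strong responses and near-spontaneous activity, as the abstract emphasizes.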

  12. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions. (United States)


    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Model code provisions for use in partially accepted code jurisdictions. 200.926c Section 200.926c Housing and Urban Development Regulations... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted...

  13. MEMOPS: data modelling and automatic code generation. (United States)

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D


    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.

  14. Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS (United States)


    Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS. Oliver B. Fringer, Dept. of Civil and Environmental Engineering, Stanford University, 473 Via Ortega, Room 187. DISTRIBUTION STATEMENT A: Approved for public release; distribution is unlimited. High-resolution simulations using nonhydrostatic models like SUNTANS are crucial for understanding multiscale processes that are unresolved, and...

  15. Creating Models for the ORIGEN Codes (United States)

    Louden, G. D.; Mathews, K. A.


    Our research focused on the development of a methodology for creating reactor-specific cross-section libraries for the nuclear reactor and nuclear fuel cycle analysis codes available from the Radiation Safety Information Computational Center. The creation of problem-specific models allows more detailed analysis than is possible using the generic models provided with ORIGEN2 and ORIGEN-S. A model of the Ohio State University Research Reactor was created using the Coupled 1-D Shielding Analysis (SAS2H) module of the Modular Code System for Performing Standardized Computer Analysis for Licensing Evaluation (SCALE4.3). Six different reactor core models were compared to identify the effect of changing the SAS2H Larger Unit Cell on the predicted isotopic composition of spent fuel. Seven different power histories were then applied to a core-average model to determine the ability of ORIGEN-S to distinguish spent fuel produced under varying operating conditions. Several actinide and fission product concentrations were identified that were sensitive to the power history; however, the majority of the isotope concentrations did not depend on operating history.
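    The depletion bookkeeping that ORIGEN-type codes perform reduces, in its simplest form, to solving decay/transmutation chains. The following sketch (nuclides and decay constants are hypothetical, and real codes handle hundreds of coupled nuclides with irradiation terms) integrates a two-member chain A -> B -> (stable) with the exact Bateman solution:

```python
import math

# Toy Bateman-equation sketch of depletion: two-member decay chain
# A -> B -> (stable), with made-up decay constants.

lam_a, lam_b = 0.1, 0.02   # decay constants, 1/day (hypothetical)
n_a0 = 1000.0              # initial atoms of A; B starts at zero

def bateman(t):
    """Exact number of A and B atoms at time t (days)."""
    n_a = n_a0 * math.exp(-lam_a * t)
    n_b = (n_a0 * lam_a / (lam_b - lam_a)
           * (math.exp(-lam_a * t) - math.exp(-lam_b * t)))
    return n_a, n_b
```

    Sensitivity of final inventories to the power history, the effect studied in the abstract, enters through time-varying production terms that this constant-coefficient sketch omits.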

  16. Business Models for Cost Sharing and Capability Sustainment (United States)


    business models need to adapt in a continuous process in most cases, notably the major platforms and technologies featured in this research. Demil and... capability or availability. The business model, as seen by Demil and Lecocq (2010), delivers dynamic consistency by ensuring that profitability and... Cambridge, UK: Cambridge University Press. Demil, B., & Lecocq, X. (2010). Business model evolution: In search of dynamic consistency. Long Range

  17. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding (United States)

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry


    A Hyperspectral (HS) image provides observational power beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder, for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because the characteristics of HS images in their spectral and spatial domains differ from those of traditional videos. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) within the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data, where every pixel is considered a vector across the spectral bands. By quantitative comparison and analysis of the pixel vector distribution along the spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when we apply HEVC. Every spectral band of an HS image is treated as an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of the rate-distortion performance of HS image compression. PMID:27695102
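    The band-prediction idea above can be illustrated in a greatly simplified form. Instead of the paper's Gaussian-mixture model, this sketch (synthetic data, invented coefficients) predicts the current spectral band as a least-squares linear combination of the two previous bands; the residual, which a video coder would then encode, carries far less energy than the raw band:

```python
import numpy as np

# Simplified residual-based band prediction: fit the current band from the
# two previous bands by least squares (stand-in for Gaussian-mixture RPM).

rng = np.random.default_rng(2)
n_pixels = 500
band0 = rng.uniform(0, 1, n_pixels)
band1 = 0.9 * band0 + 0.05 * rng.standard_normal(n_pixels)
band2 = 0.8 * band1 + 0.1 * band0 + 0.02 * rng.standard_normal(n_pixels)

# Design matrix: previous two bands plus an intercept column.
A = np.column_stack([band0, band1, np.ones(n_pixels)])
coef, *_ = np.linalg.lstsq(A, band2, rcond=None)
pred = A @ coef
residual = band2 - pred   # this is what a residual-based coder would encode
```

    The strong inter-band correlation of HS data is exactly what makes the residual so much cheaper to code than the original intensities, which is the abstract's central argument for the residual-based approach.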

  18. Neural network modeling of a dolphin's sonar discrimination capabilities

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; René Rasmussen, A; Au, WWL


    and frequency information were used to model the dolphin discrimination capabilities. Echoes from the same cylinders were digitized using a broadband simulated dolphin sonar signal with the transducer mounted on the dolphin's pen. The echoes were filtered by a bank of continuous constant-Q digital filters...

  19. Experiences with the Capability Maturity Model in a research environment

    NARCIS (Netherlands)

    Velden, van der M.J.; Vreke, J.; Wal, van der B.; Symons, A.


    The project described here was aimed at evaluating the Capability Maturity Model (CMM) in the context of a research organization. Part of the evaluation was a standard CMM assessment. It was found that CMM could be applied to a research organization, although its five maturity levels were considered

  20. Capable of Suicide: A Functional Model of the Acquired Capability Component of the Interpersonal-Psychological Theory of Suicide (United States)

    Smith, Phillip N.; Cukrowicz, Kelly C.


    A functional model of the acquired capability for suicide, a component of Joiner's (2005) Interpersonal-Psychological Theory of Suicide, is presented. The model integrates the points discussed by…

  1. Modelling of LOCA Tests with the BISON Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, Richard L [Idaho National Laboratory; Pastore, Giovanni [Idaho National Laboratory; Novascone, Stephen Rhead [Idaho National Laboratory; Spencer, Benjamin Whiting [Idaho National Laboratory; Hales, Jason Dean [Idaho National Laboratory


    BISON is a modern finite-element based, multidimensional nuclear fuel performance code that is under development at Idaho National Laboratory (USA). Recent advances of BISON include the extension of the code to the analysis of LWR fuel rod behaviour during loss-of-coolant accidents (LOCAs). In this work, BISON models for the phenomena relevant to LWR cladding behaviour during LOCAs are described, followed by presentation of code results for the simulation of LOCA tests. Analysed experiments include separate effects tests of cladding ballooning and burst, as well as the Halden IFA-650.2 fuel rod test. Two-dimensional modelling of the experiments is performed, and calculations are compared to available experimental data. Comparisons include cladding burst pressure and temperature in separate effects tests, as well as the evolution of fuel rod inner pressure during ballooning and time to cladding burst. Furthermore, BISON three-dimensional simulations of separate effects tests are performed, which demonstrate the capability to reproduce the effect of azimuthal temperature variations in the cladding. The work has been carried out within the framework of the collaboration between Idaho National Laboratory and the Halden Reactor Project, and the IAEA Coordinated Research Project FUMAC.

  2. SIMULATE-4 multigroup nodal code with microscopic depletion model

    Energy Technology Data Exchange (ETDEWEB)

    Bahadir, T. [Studsvik Scandpower, Inc., Newton, MA (United States); Lindahl, St.O. [Studsvik Scandpower AB, Vasteras (Sweden); Palmtag, S.P. [Studsvik Scandpower, Inc., Idaho Falls, ID (United States)


    SIMULATE-4 is a three-dimensional multigroup analytical nodal code with microscopic depletion capability. It has been developed employing 'first-principles models', thus avoiding ad hoc approximations. The multigroup diffusion equations or, optionally, the simplified P{sub 3} equations are solved. Cross sections are described by a hybrid microscopic-macroscopic model that includes approximately 50 heavy nuclides and fission products. Heterogeneities in the axial direction of an assembly are treated systematically. Radially, the assembly is divided into heterogeneous sub-meshes, thereby overcoming the shortcomings of spatially-averaged assembly cross sections and discontinuity factors generated with zero net-current boundary conditions. Numerical tests against higher order transport methods and critical experiments show substantial improvements compared to results of existing nodal models. (authors)

  3. A graph model for opportunistic network coding

    KAUST Repository

    Sorour, Sameh


    © 2015 IEEE. Recent advancements in graph-based analysis and solutions of instantly decodable network coding (IDNC) trigger the interest to extend them to more complicated opportunistic network coding (ONC) scenarios, with limited increase in complexity. In this paper, we design a simple IDNC-like graph model for a specific subclass of ONC, by introducing a more generalized definition of its vertices and the notion of vertex aggregation in order to represent the storage of non-instantly-decodable packets in ONC. Based on this representation, we determine the set of pairwise vertex adjacency conditions that can populate this graph with edges so as to guarantee decodability or aggregation for the vertices of each clique in this graph. We then develop the algorithmic procedures that can be applied on the designed graph model to optimize any performance metric for this ONC subclass. A case study on reducing the completion time shows that the proposed framework improves on the performance of IDNC and gets very close to the optimal performance.
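    The graph construction described above can be illustrated for plain IDNC (the paper's vertex-aggregation extension for ONC is not reproduced here; the side information below is invented). Vertices are (receiver, missing packet) pairs, two vertices are adjacent when one XOR transmission serves both receivers instantly, and a maximal clique yields one coded packet:

```python
from itertools import combinations

# Toy IDNC-style graph: vertices are (receiver, wanted packet) pairs;
# an edge means a single XOR serves both vertices instantly.

HAS = {  # packets already held by each receiver (hypothetical side info)
    "r1": {"p2", "p3"},
    "r2": {"p1", "p3"},
    "r3": {"p1", "p2"},
}
WANTS = {"r1": "p1", "r2": "p2", "r3": "p3"}

vertices = [(r, WANTS[r]) for r in HAS]

def adjacent(u, v):
    (ru, pu), (rv, pv) = u, v
    # Instantly decodable together: same packet wanted, or each receiver
    # already holds the packet the other one wants.
    return pu == pv or (pv in HAS[ru] and pu in HAS[rv])

edges = {frozenset((u, v)) for u, v in combinations(vertices, 2) if adjacent(u, v)}

def greedy_clique(vs):
    """Greedy maximal clique: the XOR of its packets is one transmission."""
    clique = []
    for v in vs:
        if all(adjacent(v, u) for u in clique):
            clique.append(v)
    return clique

coded = {p for _, p in greedy_clique(vertices)}  # packets to XOR together
```

    Here one transmission of p1 XOR p2 XOR p3 serves all three receivers, since each can cancel the two packets it already holds; ONC relaxes the instant-decodability requirement via the vertex aggregation the abstract describes.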

  4. A Systems Engineering Capability Maturity Model, Version 1.1, (United States)


    ...The Systems Engineering Capability Maturity Model (SE-CMM) was developed as a response to industry requests for assistance in coordinating and publishing a model that would foster improvement

  5. NGNP Data Management and Analysis System Modeling Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Cynthia D. Gentillon


    Projects for the very-high-temperature reactor (VHTR) program provide data in support of Nuclear Regulatory Commission licensing of the VHTR. Fuel and materials to be used in the reactor are tested and characterized to quantify performance in high temperature and high fluence environments. In addition, thermal-hydraulic experiments are conducted to validate codes used to assess reactor safety. The VHTR Program has established the NGNP Data Management and Analysis System (NDMAS) to ensure that VHTR data are (1) qualified for use, (2) stored in a readily accessible electronic form, and (3) analyzed to extract useful results. This document focuses on the third NDMAS objective. It describes capabilities for displaying the data in meaningful ways and identifying relationships among the measured quantities that contribute to their understanding.


    Energy Technology Data Exchange (ETDEWEB)

    Gorensek, M.; Hamm, L.; Garcia, H.; Burr, T.; Coles, G.; Edmunds, T.; Garrett, A.; Krebs, J.; Kress, R.; Lamberti, V.; Schoenwald, D.; Tzanos, C.; Ward, R.


    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  7. Simulation modeling on the growth of firm's safety management capability

    Institute of Scientific and Technical Information of China (English)

    LIU Tie-zhong; LI Zhi-xiang


    To address deficiencies in safety management measures, a simulation model of a firm's safety management capability (FSMC) was established based on organizational learning theory. The system dynamics (SD) method was used, in which a level-and-rate system, variable equations, and a system structure flow diagram were developed. The simulation model was verified in two respects: first, the model's sensitivity to variables was tested with respect to the gross amount of safety investment and the proportion of safety investment; second, variable dependency was checked using the correlated variables of FSMC and organizational learning. These processes verify the feasibility of the simulation model.
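    The level-and-rate structure of a system dynamics model like the one described can be sketched minimally. Everything below is hypothetical (the paper's actual equations and parameters are not given in the abstract): FSMC is a level (stock) raised by a learning inflow that depends on the safety-investment share, with a forgetting outflow:

```python
# Hypothetical system-dynamics sketch: FSMC as a level updated by rates.

def simulate_fsmc(investment_share, steps=50, dt=1.0):
    """Return the FSMC level trajectory for a given safety-investment share."""
    fsmc = 1.0                       # initial capability level (arbitrary units)
    learn_gain, forget = 0.4, 0.05   # assumed model parameters
    levels = []
    for _ in range(steps):
        learning = learn_gain * investment_share * (10.0 - fsmc)  # saturating inflow
        rate = learning - forget * fsmc                           # net rate of change
        fsmc += rate * dt                                         # level integrates rate
        levels.append(fsmc)
    return levels

low, high = simulate_fsmc(0.1), simulate_fsmc(0.3)
```

    Sensitivity testing of the kind the abstract describes amounts to rerunning the loop while varying the investment parameters and comparing the resulting level trajectories.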

  8. Capability maturity models in engineering companies: case study analysis

    Directory of Open Access Journals (Sweden)

    Titov Sergei


    Full Text Available In the conditions of the current economic downturn, engineering companies in Russia and worldwide are searching for new approaches and frameworks to improve their strategic position, increase the efficiency of their internal business processes, and enhance the quality of their final products. Capability maturity models are well-known tools used by many foreign engineering companies to assess the productivity of processes, to elaborate programs of business process improvement, and to prioritize efforts to optimize overall company performance. The impact of capability maturity model implementation on cost and time is documented and analyzed in the existing research. However, the potential of maturity models as tools of quality management is less well known. This article attempts to analyze the impact of CMM implementation on quality issues. The research is based on a case study methodology and investigates a real-life situation in a Russian engineering company.

  9. Assessing reactor physics codes capabilities to simulate fast reactors on the example of the BN-600 benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, Vladimir [Scientific and Engineering Centre for Nuclear and Radiation Safety (SES NRS), Moscow (Russian Federation); Bousquet, Jeremy [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Garching (Germany)


This work aims to assess the capabilities of reactor physics codes (initially validated for thermal reactors) to simulate sodium-cooled fast reactors. The BFS-62-3A critical experiment from the BN-600 Hybrid Core Benchmark Analyses was chosen for the investigation. Monte Carlo codes (KENO from SCALE and SERPENT 2.1.23) and the deterministic diffusion code DYN3D-MG were applied to calculate the neutronic parameters. It was found that the multiplication factor and reactivity effects calculated by KENO and SERPENT using the ENDF/B-VII.0 continuous-energy library are in good agreement with each other and with the measured benchmark values. Few-group macroscopic cross sections, required for DYN3D-MG, were prepared by applying different methods implemented in SCALE and SERPENT. The DYN3D-MG results for a simplified benchmark show reasonable agreement with the Monte Carlo results and the measured values. These results are used to justify the use of DYN3D-MG for coupled deterministic analysis of sodium-cooled fast reactors.

  10. Aeroheating Mapping to Thermal Model for Autonomous Aerobraking Capability (United States)

    Amundsen, Ruth M.


Thermal modeling has been performed to evaluate the potential for autonomous aerobraking of a spacecraft in the atmosphere of a planet. As part of this modeling, the aeroheating flux during aerobraking must be applied to the spacecraft solar arrays to evaluate their thermal response. On the Mars Reconnaissance Orbiter (MRO) mission, this was done via two separate thermal models and an extensive suite of mapping scripts. That method has been revised, and the thermal analysis of an aerobraking pass can now be accomplished with a single thermal model, using a new capability in the Thermal Desktop software. This capability, the Boundary Condition Mapper, can input heating flux files that vary with time, with position on the solar array, and with skin temperature. A recently added feature of the Boundary Condition Mapper is that it can also use files that describe the variation of aeroheating over the surface with atmospheric density (rather than time); this is the format of the MRO aeroheating files. This capability has greatly streamlined the MRO thermal process, simplifying the procedure for importing new aeroheating files and trajectory information. The new process, as well as the quantified time savings, is described.
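The density-indexed mapping idea can be sketched as a small interpolation exercise. This is a hypothetical illustration, not the Thermal Desktop implementation: a flux table tabulated against atmospheric density is queried at each trajectory point's density to produce per-node boundary-condition fluxes.

```python
import math

# Hypothetical density-indexed aeroheating table: rows are density values,
# columns are two solar-array nodes. All numbers are illustrative.
densities = [1e-11, 1e-10, 1e-9]                      # kg/m^3
flux_table = [[5.0, 8.0], [50.0, 80.0], [500.0, 800.0]]  # W/m^2 per node

def interp(x, xs, ys):
    """Piecewise-linear interpolation with end clamping."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            w = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + w * (ys[i + 1] - ys[i])

def flux_at(density):
    """Per-node flux at the current atmospheric density (log-linear in density)."""
    logd = math.log10(density)
    logds = [math.log10(d) for d in densities]
    return [interp(logd, logds, [row[j] for row in flux_table])
            for j in range(len(flux_table[0]))]

# A trajectory supplies density vs time; each point maps to node fluxes.
for t, rho in [(0.0, 3e-11), (10.0, 2e-10)]:
    print(t, flux_at(rho))
```

Indexing by density rather than time is what decouples the heating table from any particular trajectory, which is the streamlining the abstract describes.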

  11. ER@CEBAF: Modeling code developments

    Energy Technology Data Exchange (ETDEWEB)

    Meot, F. [Brookhaven National Lab. (BNL), Upton, NY (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Roblin, Y. [Brookhaven National Lab. (BNL), Upton, NY (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)


A proposal for a multiple-pass, high-energy, energy-recovery experiment using CEBAF is under preparation within the framework of a JLab-BNL collaboration. In view of beam dynamics investigations for this project, in addition to the existing model in use in Elegant, a version of CEBAF is being developed in the stepwise ray-tracing code Zgoubi. Beyond the ER experiment, it is also planned to use the latter for the study of polarization transport in the presence of synchrotron radiation, down to the Hall D line where a 12 GeV polarized beam can be delivered. This Note briefly reports on the preliminary steps and preliminary outcomes, based on an Elegant-to-Zgoubi translation.

  12. Characteristic Analysis of Fire Modeling Codes

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Hwan; Yang, Joon Eon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Jong Hoon [Kyeongmin College, Ujeongbu (Korea, Republic of)


This report documents and compares key features of four zone models: CFAST, COMPBRN IIIE, MAGIC and the Fire Induced Vulnerability Evaluation (FIVE) methodology. CFAST and MAGIC handle multi-compartment, multi-fire problems, using many equations; COMPBRN and FIVE handle single-compartment, single-fire-source problems, using simpler equations. The increased rigor of the formulation of CFAST and MAGIC does not mean that these codes are more accurate in every domain; for instance, the FIVE methodology uses a single-zone approximation with a plume/ceiling jet sublayer, while the other models use a two-zone treatment without a plume/ceiling jet sublayer. Comparisons with enclosure fire data indicate that inclusion of plume/ceiling jet sublayer temperatures is more conservative, and generally more accurate, than neglecting them. Adding a plume/ceiling jet sublayer to the two-zone models should be relatively straightforward, but it has not yet been done for any of the two-zone models. Such an improvement is in progress for MAGIC.
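To make the plume/ceiling jet sublayer concrete, here is Alpert's classical ceiling-jet correlation for the excess gas temperature under a ceiling. This is a standard textbook correlation, offered as an illustration of what such a sublayer estimate looks like; it is not necessarily the exact form used in FIVE or MAGIC.

```python
# Alpert (1972) ceiling-jet correlation for maximum excess temperature.
# Q_kW: fire heat release rate (kW); H_m: ceiling height above the fire (m);
# r_m: radial distance from the plume axis (m). Returns delta-T in degrees C.

def alpert_ceiling_jet_dT(Q_kW, H_m, r_m):
    if r_m / H_m <= 0.18:        # turning region directly under the plume
        return 16.9 * Q_kW ** (2 / 3) / H_m ** (5 / 3)
    # ceiling-jet region: temperature rise decays with radial distance
    return 5.38 * (Q_kW / r_m) ** (2 / 3) / H_m

# 1 MW fire under a 5 m ceiling: hot near the plume, cooler far out.
for r in (0.5, 2.0, 4.0):
    print(r, round(alpert_ceiling_jet_dT(1000.0, 5.0, r), 1))
```

The two branches meet continuously at r/H = 0.18, and the radial decay is why neglecting the ceiling-jet sublayer (as in a plain two-zone treatment) underpredicts temperatures at targets near the ceiling.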

  13. Development of a fourth generation predictive capability maturity model.

    Energy Technology Data Exchange (ETDEWEB)

    Hills, Richard Guy; Witkowski, Walter R.; Urbina, Angel; Rider, William J.; Trucano, Timothy Guy


The Predictive Capability Maturity Model (PCMM) is an expert elicitation tool designed to characterize and communicate the completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. The primary application of this tool at Sandia National Laboratories (SNL) has been for physics-based computational simulations in support of nuclear weapons applications. The two main goals of a PCMM evaluation are 1) the accurate and transparent communication of computational simulation capability, and 2) the development of input for effective planning. As a result of the increasing importance of computational simulation to SNL's mission, the PCMM has evolved through multiple generations with the goal of providing more clarity, rigor, and completeness in its application. This report describes the approach used to develop the fourth generation of the PCMM.

  14. Towards Preserving Model Coverage and Structural Code Coverage

    Directory of Open Access Journals (Sweden)

    Raimund Kirner


Full Text Available Embedded systems are often used in safety-critical environments. Thus, thorough testing of them is mandatory. To achieve a required structural code-coverage criterion it is beneficial to derive the test data at a higher program-representation level than machine code. Higher program-representation levels include, besides the source-code level, the languages of domain-specific modeling environments with automatic code generation. For a testing framework with automatic generation of test data this enables high retargetability of the framework. In this article we address the challenge of ensuring that the structural code coverage achieved at a higher program-representation level is preserved during the code generation and code transformations down to machine code. We define the formal properties that have to be fulfilled by a code transformation to guarantee preservation of structural code coverage. Based on these properties we discuss how to preserve code coverage achieved at source-code level. Additionally, we discuss how structural code coverage at model level could be preserved. The results presented in this article are aimed toward the integration of support for preserving structural code coverage into compilers and code generators.
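The coverage-preservation property can be sketched as a small set-based check. This is a deliberately simplified toy, not the paper's formalism: assume the code generator records, for each branch in the generated code, which source-level branches determine it; source-level coverage is then "preserved" if every generated branch is controlled by at least one covered source branch.

```python
# Toy coverage-preservation check (hypothetical simplification).
# covered_source_branches: branch IDs exercised by the test suite at source level.
# generated_to_source: for each generated-code branch, the set of source-level
# branches whose outcomes determine it (a traceability map from the generator).

def coverage_preserved(covered_source_branches, generated_to_source):
    return all(any(s in covered_source_branches for s in sources)
               for sources in generated_to_source.values())

covered = {"b1", "b2"}
mapping = {"m1": {"b1"}, "m2": {"b2"}, "m3": {"b1", "b2"}}
print(coverage_preserved(covered, mapping))   # every generated branch is traceable
print(coverage_preserved({"b1"}, mapping))    # m2 depends only on uncovered b2
```

A transformation that introduces a generated branch with no covered source counterpart (e.g. a compiler-inserted range check) is exactly the case where source-level coverage stops being a valid proxy for machine-code coverage.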

  15. Simulink Code Generation: Tutorial for Generating C Code from Simulink Models using Simulink Coder (United States)

    MolinaFraticelli, Jose Carlos


    This document explains all the necessary steps in order to generate optimized C code from Simulink Models. This document also covers some general information on good programming practices, selection of variable types, how to organize models and subsystems, and finally how to test the generated C code and compare it with data from MATLAB.

  16. PetriCode: A Tool for Template-Based Code Generation from CPN Models

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge


    Code generation is an important part of model driven methodologies. In this paper, we present PetriCode, a software tool for generating protocol software from a subclass of Coloured Petri Nets (CPNs). The CPN subclass is comprised of hierarchical CPN models describing a protocol system at different...

  17. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng


This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary capabilities wherever feasible, and to develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  18. Off-Gas Adsorption Model Capabilities and Recommendations

    Energy Technology Data Exchange (ETDEWEB)

    Lyon, Kevin L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Welty, Amy K. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Law, Jack [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ladshaw, Austin [Georgia Inst. of Technology, Atlanta, GA (United States); Yiacoumi, Sotira [Georgia Inst. of Technology, Atlanta, GA (United States); Tsouris, Costas [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)


Off-gas treatment is required to reduce emissions from aqueous fuel reprocessing. Evaluating the products of innovative gas adsorption research requires increased computational simulation capability to more effectively transition from fundamental research to operational design. Early modeling efforts produced the Off-Gas SeParation and REcoverY (OSPREY) model that, while efficient in terms of computation time, was of limited value for complex systems. However, the computational and programming lessons learned in developing the initial model were used to develop Discontinuous Galerkin OSPREY (DGOSPREY), a more effective model. Initial comparisons between OSPREY and DGOSPREY show that, while OSPREY does reasonably well at capturing the initial breakthrough time, it displays far too much numerical dispersion to accurately capture the real shape of the breakthrough curves. DGOSPREY is a much better tool as it utilizes a more stable set of numerical methods. In addition, DGOSPREY has shown the capability to capture complex, multispecies adsorption behavior, while OSPREY currently only works for a single adsorbing species. This capability makes DGOSPREY ultimately a more practical tool for real-world simulations involving many different gas species. While DGOSPREY has initially performed very well, there is still need for improvement. The current state of DGOSPREY does not include any micro-scale adsorption kinetics and therefore assumes instantaneous adsorption. This is a major source of error in predicting water vapor breakthrough because the kinetics of that adsorption mechanism is particularly slow. However, this deficiency can be remedied by building kinetic kernels into DGOSPREY. Another source of error in DGOSPREY stems from gaps in single-species isotherm data, such as those for Kr and Xe. Since isotherm data for each gas are currently available at only a single temperature, the model is unable to predict adsorption at temperatures outside of the set of data currently
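The numerical-dispersion symptom attributed to OSPREY above is easy to demonstrate on a toy problem. The sketch below (illustrative only; not the OSPREY or DGOSPREY equations) advects a sharp inlet front with a first-order upwind scheme: the exact solution stays a step, but the low-order scheme smears it over many cells, distorting the breakthrough-curve shape exactly as described.

```python
# First-order upwind advection of a sharp concentration front.
# Low-order spatial discretization introduces artificial (numerical) diffusion,
# which smears what should remain a sharp breakthrough front.

def upwind_advect(n=100, steps=120, cfl=0.5):
    c = [0.0] * n
    inlet = 1.0
    for _ in range(steps):
        new = c[:]
        new[0] = c[0] - cfl * (c[0] - inlet)          # inlet boundary
        for i in range(1, n):
            new[i] = c[i] - cfl * (c[i] - c[i - 1])   # upwind difference
        c = new
    return c

profile = upwind_advect()
# The exact solution is a step near cell 60 (= cfl * steps); upwind spreads
# it over a wide band of cells -- the "numerical dispersion" in the curve.
front_width = sum(1 for x in profile if 0.01 < x < 0.99)
print(front_width)
```

Higher-order schemes such as discontinuous Galerkin (the "DG" in DGOSPREY) reduce this artificial smearing, which is why they capture breakthrough shapes more faithfully.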

  19. Climbing the ladder: capability maturity model integration level 3 (United States)

    Day, Bryce; Lutteroth, Christof


This article details the attempt to form a complete workflow model for an information and communication technologies (ICT) company in order to achieve a capability maturity model integration (CMMI) maturity rating of 3. During this project, business processes across the company's core and auxiliary sectors were documented and extended using modern enterprise modelling tools and The Open Group Architecture Framework (TOGAF) methodology. Different challenges were encountered with regard to process customisation and tool support for enterprise modelling. In particular, there were problems with the reuse of process models, the integration of different project management methodologies, and the integration of the Rational Unified Process development framework that had to be solved. We report on these challenges and the perceived effects of the project on the company. Finally, we point out research directions that could help to improve the situation in the future.

  20. Nuclear Hybrid Energy System Modeling: RELAP5 Dynamic Coupling Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Piyush Sabharwall; Nolan Anderson; Haihua Zhao; Shannon Bragg-Sitton; George Mesina


    The nuclear hybrid energy systems (NHES) research team is currently developing a dynamic simulation of an integrated hybrid energy system. A detailed simulation of proposed NHES architectures will allow initial computational demonstration of a tightly coupled NHES to identify key reactor subsystem requirements, identify candidate reactor technologies for a hybrid system, and identify key challenges to operation of the coupled system. This work will provide a baseline for later coupling of design-specific reactor models through industry collaboration. The modeling capability addressed in this report focuses on the reactor subsystem simulation.

  1. FREYA-a new Monte Carlo code for improved modeling of fission chains

    Energy Technology Data Exchange (ETDEWEB)

    Hagmann, C A; Randrup, J; Vogt, R L


A new simulation capability for modeling individual fission events and chains and the transport of fission products in materials is presented. FREYA (Fission Reaction Event Yield Algorithm) is a Monte Carlo code for generating fission events, providing correlated kinematic information for prompt neutrons, gammas, and fragments. As a standalone code, FREYA calculates quantities such as multiplicity-energy, angular, and gamma-neutron energy-sharing correlations. To study materials with multiplication, shielding effects, and detectors, we have integrated FREYA into the general-purpose Monte Carlo code MCNP. This new tool will allow more accurate modeling of detector responses, including correlations, and the development of SNM detectors with increased sensitivity.
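The event-by-event flavor of such a generator can be sketched with a toy multiplicity sampler: each fission event draws its own prompt-neutron count from a discrete distribution. The probabilities below are illustrative round numbers, not FREYA's evaluated data.

```python
import random

# Toy per-event prompt-neutron multiplicity sampler, in the spirit of a
# fission event generator. P_NU values are illustrative (mean ~2.46),
# not an evaluated multiplicity distribution.

P_NU = {0: 0.03, 1: 0.16, 2: 0.34, 3: 0.30, 4: 0.13, 5: 0.04}

def sample_multiplicity(rng):
    """Inverse-CDF sampling from the discrete multiplicity distribution."""
    u, acc = rng.random(), 0.0
    for nu, p in P_NU.items():
        acc += p
        if u < acc:
            return nu
    return max(P_NU)

rng = random.Random(42)
events = [sample_multiplicity(rng) for _ in range(100_000)]
print(sum(events) / len(events))  # sample nu-bar, close to 2.46 here
```

Sampling integer multiplicities per event, rather than using only the mean, is what lets a downstream transport code (e.g. MCNP with FREYA) reproduce event-by-event correlations in detector responses.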

  2. EMPIRE: Nuclear Reaction Model Code System for Data Evaluation (United States)

    Herman, M.; Capote, R.; Carlson, B. V.; Obložinský, P.; Sin, M.; Trkov, A.; Wienke, H.; Zerkin, V.


accompanying code EMPEND and completed with neutron resonances extracted from the existing evaluations. The package contains the full EXFOR (CSISRS) library of experimental reaction data that are automatically retrieved during the calculations. Publication-quality graphs can be obtained using the powerful and flexible plotting package ZVView. The graphical user interface, written in Tcl/Tk, provides for easy operation of the system. This paper describes the capabilities of the code, outlines the physical models, and indicates the parameter libraries used by EMPIRE to predict reaction cross sections and spectra, mainly for nucleon-induced reactions. Selected applications of EMPIRE are discussed, the most important being the extensive use of the code in evaluations of neutron reactions for the new US library ENDF/B-VII.0. Future extensions of the system are outlined, including a neutron resonance module as well as capabilities for generating covariances, using both KALMAN and Monte Carlo methods, which are still being advanced and refined.

  3. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

    Energy Technology Data Exchange (ETDEWEB)

Carbajo, Juan (Oak Ridge National Laboratory, Oak Ridge, TN); Jeong, Hae-Yong (Korea Atomic Energy Research Institute, Daejeon, Korea); Wigeland, Roald (Idaho National Laboratory, Idaho Falls, ID); Corradini, Michael (University of Wisconsin, Madison, WI); Schmidt, Rodney Cannon; Thomas, Justin (Argonne National Laboratory, Argonne, IL); Wei, Tom (Argonne National Laboratory, Argonne, IL); Sofu, Tanju (Argonne National Laboratory, Argonne, IL); Ludewig, Hans (Brookhaven National Laboratory, Upton, NY); Tobita, Yoshiharu (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Ohshima, Hiroyuki (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Serre, Frederic (Centre d'études nucléaires de Cadarache - CEA, France)


This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and to identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of the three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions is drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the

  4. 40 CFR 194.23 - Models and computer codes. (United States)


    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e.,...

  5. Validation of an electroseismic and seismoelectric modeling code, for layered earth models, by the explicit homogeneous space solutions

    NARCIS (Netherlands)

    Grobbe, N.; Slob, E.C.


We have developed an analytically based, energy flux-normalized numerical modeling code (ESSEMOD), capable of modeling the wave propagation of all existing ElectroSeismic and SeismoElectric source-receiver combinations in horizontally layered configurations. We compare the results of several of these

  6. EASEWASTE-life cycle modeling capabilities for waste management technologies

    DEFF Research Database (Denmark)

    Bhander, Gurbakhash Singh; Christensen, Thomas Højlund; Hauschild, Michael Zwicky


    Background, Aims and Scope The management of municipal solid waste and the associated environmental impacts are subject of growing attention in industrialized countries. EU has recently strongly emphasized the role of LCA in its waste and resource strategies. The development of sustainable solid...... waste management systems applying a life-cycle perspective requires readily understandable tools for modelling the life cycle impacts of waste management systems. The aim of the paper is to demonstrate the structure, functionalities and LCA modelling capabilities of the PC-based life cycle oriented...... waste management model EASEWASTE, developed at the Technical University of Denmark specifically to meet the needs of the waste system developer with the objective to evaluate the environmental performance of the various elements of existing or proposed solid waste management systems. Materials...

  7. Image Coding using Markov Models with Hidden States

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto


The Cylinder Partially Hidden Markov Model (CPH-MM) is applied to lossless coding of bi-level images. The original CPH-MM is relaxed for the purpose of coding by not imposing stationarity, but otherwise the model description is the same.

  8. Challenges in Developing Strategic Capabilities in SEP Event Modeling (United States)

    Luhmann, J. G.; S., L.; Krauss-Varban, D.; Li, G.; Odstrcil, D.; Riley, P.; Owens, M.; Sokolov, I.; Manchester, W.; Kota, J.


    Realistic major SEP event modeling lags behind current efforts in CME/ICME modeling at this point in time. While on the surface the implementation of such models might seem straightforward, in practice there are many aspects of their construction that make progress slow. Part of the difficulty stems from the complex physics of the problem, some of which remains controversial, and part is simply related to the logistics of coupling the various concepts of acceleration and transport to the increasingly realistic MHD models of CMEs and ICMEs. Before a Strategic Capability in SEP event modeling can begin to be realized, the latter challenge must be addressed. Several groups, including CISM, CSEM and the LWS TR&T Focus Science Team on the subject have been grappling with this ostensibly more tractable part of the problem. This presentation is an effort to collect and communicate the challenges faced in the course of applying MHD results in various approaches. One goal is to suggest what MHD model improvements and products would be most useful toward this goal. Another is to highlight the realities of compromises that must necessarily be made in the SEP event models regardless of the perfection of the MHD descriptions of CMEs and ICMEs.

  9. Assessment of Modeling Capability for Reproducing Storm Impacts on TEC (United States)

    Shim, J. S.; Kuznetsova, M. M.; Rastaetter, L.; Bilitza, D.; Codrescu, M.; Coster, A. J.; Emery, B. A.; Foerster, M.; Foster, B.; Fuller-Rowell, T. J.; Huba, J. D.; Goncharenko, L. P.; Mannucci, A. J.; Namgaladze, A. A.; Pi, X.; Prokhorov, B. E.; Ridley, A. J.; Scherliess, L.; Schunk, R. W.; Sojka, J. J.; Zhu, L.


During a geomagnetic storm, the energy transfer from the solar wind to the magnetosphere-ionosphere system adversely affects communication and navigation systems. Quantifying storm impacts on TEC (Total Electron Content) and assessing modeling capability for reproducing those impacts are important for specifying and forecasting space weather. In order to quantify storm impacts on TEC, we considered several parameters: TEC changes compared to quiet time (the day before the storm), TEC differences between 24-hour intervals, and the maximum increase/decrease during the storm. We investigated the spatial and temporal variations of the parameters during the 2006 AGU storm event (14-15 Dec. 2006) using ground-based GPS TEC measurements in eight selected 5-degree longitude sectors. The latitudinal variations were also studied in the two of these sectors where data coverage is relatively better. We obtained modeled TEC from various ionosphere/thermosphere (IT) models. The parameters from the models were compared with each other and with the observed values. We quantified the performance of the models in reproducing the TEC variations during the storm using skill scores. This study has been supported by the Community Coordinated Modeling Center (CCMC) at the Goddard Space Flight Center. Model outputs and observational data used for the study will be permanently posted at the CCMC website for the space science communities to use.
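One common skill-score definition (an assumption here; the study may use a different formula) compares the model's mean squared error against that of a reference prediction, for which the quiet-time TEC of the previous day is a natural choice:

```python
# Skill score SS = 1 - MSE(model) / MSE(reference).
# SS = 1 is a perfect model; SS <= 0 means the model does no better than
# simply assuming the storm had no effect (quiet-day reference).
# All TEC values below are made-up illustrative numbers (TECU).

def skill_score(observed, modeled, reference):
    n = len(observed)
    mse_m = sum((o - m) ** 2 for o, m in zip(observed, modeled)) / n
    mse_r = sum((o - r) ** 2 for o, r in zip(observed, reference)) / n
    return 1.0 - mse_m / mse_r

obs   = [20.0, 35.0, 50.0, 42.0]   # storm-time observations
model = [22.0, 30.0, 46.0, 40.0]   # modeled TEC
quiet = [18.0, 19.0, 20.0, 19.0]   # quiet-day reference
print(round(skill_score(obs, model, quiet), 3))
```

Scoring against a quiet-time reference is what makes the metric specific to storm response: a model that merely reproduces climatology scores near zero during a large storm.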

  10. Subgrid Combustion Modeling for the Next Generation National Combustion Code (United States)

    Menon, Suresh; Sankaran, Vaidyanathan; Stone, Christopher


In the first year of this research, a subgrid turbulent mixing and combustion methodology developed earlier at Georgia Tech was provided to researchers at NASA/GRC for incorporation into the next-generation National Combustion Code (called NCCLES hereafter). A key feature of this approach is that scalar mixing and combustion processes are simulated within the LES grid using a stochastic 1D model. The subgrid simulation approach recovers molecular diffusion and reaction kinetics locally and exactly without requiring closure and thus provides an attractive means to simulate complex, highly turbulent reacting flows of interest. Data acquisition algorithms and statistical analysis strategies and routines to analyze NCCLES results have also been provided to NASA/GRC. The overall goal of this research is to systematically develop and implement LES capability into the current NCC. For this purpose, issues regarding initializing and running LES are also addressed in the collaborative effort. In parallel to this ongoing technology transfer effort, research has also been underway at Georgia Tech to enhance the LES capability to tackle more complex flows. In particular, the subgrid scalar mixing and combustion method has been evaluated in three distinctly different flow fields in order to demonstrate its generality: (a) flame-turbulence interactions using premixed combustion, (b) spatially evolving supersonic mixing layers, and (c) temporal single- and two-phase mixing layers. The configurations chosen are such that they can be implemented in NCCLES and used to evaluate the ability of the new code. Future development and validation will be in spray combustion in gas turbine engines and supersonic scalar mixing.
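The stochastic 1D subgrid idea can be illustrated with a linear-eddy-style "triplet map", the standard rearrangement event used in such models. This is a hypothetical toy, not the NCCLES implementation: random eddy events permute cells of a 1D scalar field, increasing scalar gradients without creating or destroying scalar.

```python
import random

# Toy linear-eddy-style subgrid mixing: stochastic triplet-map eddy events
# on a 1D scalar field. Illustrative sketch only.

def triplet_map(field, start, k):
    """Discrete triplet map on cells [start, start+3k): three compressed
    copies of the segment, with the middle copy mirrored (a permutation)."""
    seg = field[start:start + 3 * k]
    new = ([seg[3 * i] for i in range(k)]               # compressed copy 1
           + [seg[3 * (k - i) - 2] for i in range(k)]   # copy 2, mirrored
           + [seg[3 * i + 2] for i in range(k)])        # compressed copy 3
    field[start:start + 3 * k] = new

field = [0.0] * 30 + [1.0] * 30      # sharp scalar interface
total_before = sum(field)
rng = random.Random(1)
for _ in range(20):                   # random eddy events
    k = rng.choice([2, 3, 4])
    start = rng.randrange(0, len(field) - 3 * k)
    triplet_map(field, start, k)
print(sum(field))  # conserved: the map only permutes cells
```

Because each event is a pure permutation, the scalar is conserved exactly; molecular diffusion and reaction kinetics then act on the rearranged field at resolved scales, which is why no mixing closure is needed.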

  11. Coarse-grained DNA model capable of simulating ribose flexibility

    CERN Document Server

    Kovaleva, Natalya A; Mazo, Mikhail A; Zubova, Elena A


    We propose a "sugar" coarse-grained (CG) DNA model capable of simulating both biologically significant B- and A-DNA forms. The number of degrees of freedom is reduced to six grains per nucleotide. We show that this is the minimal number sufficient for this purpose. The key features of the sugar CG DNA model are: (1) simulation of sugar repuckering between C2'-endo and C3'-endo by the use of one non-harmonic potential and one three-particle potential, (2) explicit representation of sodium counterions and (3) implicit solvent approach. Effects of solvation and of partial charge screening at small distances are taken into account through the shape of potentials of interactions between charged particles. We obtain parameters of the sugar CG DNA model from the all-atom AMBER model. The suggested model allows adequate simulation of the transitions between A- and B-DNA forms, as well as of large deformations of long DNA molecules, for example, in binding with proteins. Small modifications of the model can provide th...

  12. MMA, A Computer Code for Multi-Model Analysis (United States)

    Poeter, Eileen P.; Hill, Mary C.


be well served by the default methods provided. To use the default methods, the only required input for MMA is a list of directories where the files for the alternative models are located. Evaluation and development of model-analysis methods are active areas of research. To facilitate exploration and innovation, MMA allows the user broad discretion to define alternatives to the default procedures. For example, MMA allows the user to (a) rank models based on model criteria defined using a wide range of provided and user-defined statistics in addition to the default AIC, AICc, BIC, and KIC criteria, (b) create their own criteria using model measures available from the code, and (c) define how each model criterion is used to calculate related posterior model probabilities. The default model criteria rate models based on model fit to observations; the number of observations and estimated parameters; and, for KIC, the Fisher information matrix. In addition, MMA allows the analysis to include an evaluation of estimated parameter values. This is accomplished by allowing the user to define unreasonable estimated parameter values or relative estimated parameter values. An example of the latter is that one parameter value may be expected to be less than another, as might be the case if two parameters represented the hydraulic conductivity of distinct materials such as fine and coarse sand. Models with parameter values that violate the user-defined conditions are excluded from further consideration by MMA. Ground-water models are used as examples in this report, but MMA can be used to evaluate any set of models for which the required files have been produced. MMA needs to read files from a separate directory for each alternative model considered. The needed files are produced when using the Sensitivity-Analysis or Parameter-Estimation mode of UCODE_2005, or, possibly, the equivalent capability of another program. MMA is constructed using
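The step from information criteria to posterior model probabilities can be sketched with the standard exp(-delta/2) weighting used for AIC-style criteria. The AIC values below are made up for illustration; MMA itself supports AIC, AICc, BIC, and KIC as well as user-defined criteria.

```python
import math

# Convert information-criterion values into normalized posterior model
# probabilities via the standard delta-weights: w_i = exp(-(c_i - c_min)/2).
# The criterion values here are illustrative only.

def model_probabilities(criteria):
    best = min(criteria.values())
    weights = {m: math.exp(-0.5 * (c - best)) for m, c in criteria.items()}
    total = sum(weights.values())
    return {m: w / total for m, w in weights.items()}

aic = {"model_A": 210.3, "model_B": 212.1, "model_C": 219.8}
probs = model_probabilities(aic)
print({m: round(p, 3) for m, p in probs.items()})
```

Subtracting the best criterion value before exponentiating keeps the computation numerically stable; only the differences between criteria matter, not their absolute values.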

  13. Research on regional capability construction models of the clean development mechanism

    Institute of Scientific and Technical Information of China (English)

    Chen Leishan; Liu Qingqiang; Geng Jie; Lu Genfa


    Global climate change has been identified as the first of the top ten environmental problems in the world. As climate change will have serious effects on social and economic development and on people's everyday lives, many countries and governments are making untiring efforts to combat it. As one of the important mechanisms for reducing greenhouse gas (GHG) emissions under the Kyoto Protocol, the Clean Development Mechanism (CDM) has not only provided a chance for developed countries to fulfill their greenhouse gas emission reduction obligations, but also provided an opportunity for developing countries to combat climate change within a sustainable development framework. The dual objectives of meeting developed countries' GHG emission reduction obligations and advancing developing countries' sustainable development can both be achieved under the CDM. As a responsible country, China has been actively developing CDM projects and promoting energy saving and emissions reduction in the three years since the Kyoto Protocol came into force, and its CDM project development has consistently ranked among the leaders worldwide. However, given the vast territory of China, notable differences occur among its regions. To promote CDM development in China, it is necessary to carry out regional CDM capability construction in accordance with the practical conditions of each region. Based on a SWOT analysis of developed CDM projects and the current CDM development status in China, this paper identifies problems in China's CDM development, including inefficiency in small and medium-sized CDM project development, over-centralization of the CDM development scope, and especially the differentiated provincial capability for developing CDM projects. Furthermore, the reasons for these problems are analyzed in terms of the leading factors, including policy orientation, information asymmetry, and weak CDM capability. To promote CDM project development in China, a new CDM capability construction model is put forward.

  14. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee


    Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm, which resists overfitting of the training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using a genetic algorithm, to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information, of 44 electronics and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
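
    The genetic-algorithm-over-SVR-parameters idea can be sketched independently of any SVR library: below, a toy `validation_error` function stands in for the SVR cross-validation error, and a minimal genetic algorithm searches a (C, epsilon) box for its minimum. The bounds, population settings, and error surface are illustrative assumptions, not the authors' implementation.

```python
import random

random.seed(42)

def validation_error(c, eps):
    # Stand-in for an SVR cross-validation error; a smooth bowl with optimum near (10, 0.1)
    return (c - 10.0) ** 2 + 100.0 * (eps - 0.1) ** 2

def genetic_search(fitness, bounds, pop_size=20, generations=40, mutation=0.1):
    """Minimize `fitness` over the box `bounds` with a simple GA:
    truncation selection, uniform crossover, Gaussian mutation."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda ind: fitness(*ind))
        parents = scored[: pop_size // 2]          # keep the best half unchanged (elitism)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
            for i, (lo, hi) in enumerate(bounds):  # Gaussian mutation, clipped to bounds
                child[i] = min(hi, max(lo, child[i] + random.gauss(0, mutation * (hi - lo))))
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda ind: fitness(*ind))

best_c, best_eps = genetic_search(validation_error, bounds=[(0.1, 100.0), (0.01, 1.0)])
print(f"C={best_c:.2f}, epsilon={best_eps:.3f}")
```

In the paper's setting, `validation_error` would instead train an SVR with the candidate parameters and return its validation-set error.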

  15. Subgroup A : nuclear model codes report to the Sixteenth Meeting of the WPEC

    Energy Technology Data Exchange (ETDEWEB)

    Talou, P. (Patrick); Chadwick, M. B. (Mark B.); Dietrich, F. S.; Herman, M.; Kawano, T. (Toshihiko); Konig, A.; Obložinský, P.


    The Subgroup A activities focus on the development of nuclear reaction models and codes, used in evaluation work for nuclear reactions from the unresolved energy region up to the pion production threshold, and for target nuclides from the low teens and heavier. Much of the effort is devoted by each participant to the continuing development of their own institution's codes. Progress in this arena is reported in detail for each code in the present document. EMPIRE-II is publicly accessible. The release of the TALYS code has been announced for the ND2004 Conference in Santa Fe, NM, October 2004. McGNASH is still under development and is not expected to be released in the very near future. In addition, Subgroup A members have demonstrated a growing interest in working on common modeling and code capabilities, which would significantly reduce the amount of duplicated work, help manage the growing number of existing codes efficiently, and render code inter-comparison much easier. A recent and important activity of Subgroup A has therefore been to develop the framework and first bricks of the ModLib library, which is constituted of mostly independent pieces of code written in Fortran 90 (and above) to be used in existing and future nuclear reaction codes. Significant progress in the development of ModLib has been made during the past year. Several physics modules have been added to the library, and a few more have been planned in detail for the coming year.

  16. Code Generation for Protocols from CPN models Annotated with Pragmatics

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael; Kindler, Ekkart

    Model-driven engineering (MDE) provides a foundation for automatically generating software based on models. Models allow software designs to be specified focusing on the problem domain and abstracting from the details of underlying implementation platforms. When applied in the context of formal...... modelling languages, MDE further has the advantage that models are amenable to model checking which allows key behavioural properties of the software design to be verified. The combination of formally verified models and automated code generation contributes to a high degree of assurance that the resulting...... of the same model and sufficiently detailed to serve as a basis for automated code generation when annotated with code generation pragmatics. Pragmatics are syntactical annotations designed to make the CPN models descriptive and to address the problem that models with enough details for generating code from...

  17. Modelling force deployment from army intelligence using the transportation system capability (TRANSCAP) model : a standardized approach.

    Energy Technology Data Exchange (ETDEWEB)

    Burke, J. F., Jr.; Love, R. J.; Macal, C. M.; Decision and Information Sciences


    Argonne National Laboratory (Argonne) developed the transportation system capability (TRANSCAP) model to simulate the deployment of forces from Army bases, in collaboration with and under the sponsorship of the Military Transportation Management Command Transportation Engineering Agency (MTMCTEA). TRANSCAP's design separates its pre- and post-processing modules (developed in Java) from its simulation module (developed in MODSIM III). This paper describes TRANSCAP's modelling approach, emphasizing Argonne's highly detailed, object-oriented, multilanguage software design principles. Fundamental to these design principles is TRANSCAP's implementation of an improved method for standardizing the transmission of simulated data to output analysis tools and the implementation of three Army deployment/redeployment community standards, all of which are in the final phases of community acceptance. The first is the extensive hierarchy and object representation for transport simulations (EXHORT), which is a reusable, object-oriented deployment simulation source code framework of classes. The second and third are algorithms for rail deployment operations at a military base.

  18. ISO 9000 and/or Systems Engineering Capability Maturity Model? (United States)

    Gholston, Sampson E.


    For businesses and organizations to remain competitive today, they must have processes and systems in place that allow them first to identify customer needs and then to develop products and processes that meet or exceed the customers' needs and expectations. Customer needs, once identified, are normally stated as requirements. Designers can then develop products/processes that will meet these requirements. Several functions, such as quality management and systems engineering management, are used to assist product development teams in the development process. Both functions exist in all organizations, and both have a similar objective, which is to ensure that developed processes will meet customer requirements. Are efforts in these organizations being duplicated? Are both functions needed by organizations? What are the similarities and differences between the functions listed above? ISO 9000 is an international standard for goods and services. It sets broad requirements for the assurance of quality and for management's involvement. It requires organizations to document their processes and to follow these documented processes. ISO 9000 gives customers assurance that suppliers have control of the process for product development. Systems engineering can broadly be defined as a discipline that seeks to ensure that all requirements for a system are satisfied throughout the life of the system by preserving their interrelationship. The key activities of systems engineering include requirements analysis, functional analysis/allocation, design synthesis and verification, and system analysis and control. The systems engineering process, when followed properly, will lead to higher quality products, lower cost products, and shorter development cycles. The Systems Engineering Capability Maturity Model (SE-CMM) allows companies to measure their systems engineering capability and continuously improve those capabilities. 
ISO 9000 and SE-CMM seem to have a similar objective, which

  19. Conservation of concrete structures in fib model code 2010

    NARCIS (Netherlands)

    Matthews, S.L.; Ueda, T.; Bigaj-van Vliet, A.


    Chapter 9: Conservation of concrete structures forms part of fib Model Code 2010, the first draft of which was published for comment as fib Bulletins 55 and 56 (fib 2010). Numerous comments were received and considered by fib Special Activity Group 5 responsible for the preparation of fib Model Code

  1. Evaluation of Advanced Models for PAFS Condensation Heat Transfer in SPACE Code

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Byoung-Uhn; Kim, Seok; Park, Yu-Sun; Kang, Kyung Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Ahn, Tae-Hwan; Yun, Byong-Jo [Pusan National University, Busan (Korea, Republic of)


    The PAFS (Passive Auxiliary Feedwater System) is operated by natural circulation to remove the core decay heat through the PCHX (Passive Condensation Heat Exchanger), which is composed of nearly horizontal tubes. For validation of the cooling and operational performance of the PAFS, the PASCAL (PAFS Condensing Heat Removal Assessment Loop) facility was constructed, and the condensation heat transfer and natural convection phenomena in the PAFS were experimentally investigated at KAERI (Korea Atomic Energy Research Institute). From the PASCAL experimental results, it was found that conventional system analysis codes underestimate the condensation heat transfer. In this study, advanced condensation heat transfer models that can treat the heat transfer mechanisms of the different flow regimes in a nearly horizontal heat exchanger tube were analyzed. With the aim of enhancing the prediction capability for the condensation phenomenon inside the PCHX tube of the PAFS, these models were implemented into the wall condensation model of the thermal-hydraulic safety analysis code SPACE (Safety and Performance Analysis Code for Nuclear Power Plant) and validated against the PASCAL experimental data. Calculation results showed that the improved condensation heat transfer model enhanced the prediction capability of the SPACE code. This confirms that mechanistic modeling of film condensation in the steam phase and of convection in the condensate liquid enhances the prediction capability of the wall condensation model of the SPACE code and reduces conservatism in the prediction of condensation heat transfer.
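
    The report does not reproduce the implemented correlations, but the classical starting point for mechanistic film condensation models of this kind is Nusselt's laminar-film result, shown here in its horizontal-tube form as a baseline only (flow-regime-dependent models such as those in SPACE refine this expression; the coefficient 0.729 and the property values below are textbook/illustrative, not from the paper):

```python
def nusselt_horizontal_tube(rho_l, rho_v, h_fg, k_l, mu_l, d, dT, g=9.81):
    """Classical Nusselt laminar film condensation coefficient (W/m^2-K)
    for a horizontal tube of diameter d with wall subcooling dT."""
    return 0.729 * (g * rho_l * (rho_l - rho_v) * h_fg * k_l**3
                    / (mu_l * d * dT)) ** 0.25

# Saturated steam near 1 atm condensing on a 50 mm tube, 10 K wall subcooling
h = nusselt_horizontal_tube(rho_l=958.0, rho_v=0.6, h_fg=2.257e6,
                            k_l=0.68, mu_l=2.8e-4, d=0.05, dT=10.0)
print(f"h = {h:.0f} W/m^2-K")
```

Values on the order of 10 kW/m^2-K are typical for steam at these conditions, which is why underprediction by a system code is readily visible against facility data.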

  2. Code Generation for Embedded Software for Modeling Clear Box Structures

    Directory of Open Access Journals (Sweden)

    V. Chandra Prakash


    Full Text Available Cleanroom Software Engineering (CRSE) recommends that the code of application systems be generated either manually or through code generation models, or be represented as a hierarchy of clear box structures. CRSE has even advocated that the code be developed using state models that model the internal behavior of the systems. No framework has been recommended by any author for designing clear boxes using code generation methods. Code generation is one of the important quality issues addressed in Cleanroom Software Engineering. It has been shown that CRSE can be used for life-cycle management of embedded systems when hardware-software co-design is built in as part and parcel of CRSE, by adding suitable models to CRSE and redefining it accordingly. The design of embedded systems involves code generation for both hardware and embedded software. In this paper, a framework is proposed for generating the embedded software. The method is unique in that it considers various aspects of code generation, including code segments, code functions, classes, globalization, variable propagation, etc. The proposed framework has been applied to a pilot project, and the experimental results are presented.

  3. Automatic code generation from the OMT-based dynamic model

    Energy Technology Data Exchange (ETDEWEB)

    Ali, J.; Tanaka, J.


    The OMT object-oriented software development methodology suggests creating three models of the system, i.e., object model, dynamic model and functional model. We have developed a system that automatically generates implementation code from the dynamic model. The system first represents the dynamic model as a table and then generates executable Java language code from it. We used inheritance for super-substate relationships. We considered that transitions relate to states in a state diagram exactly as operations relate to classes in an object diagram. In the generated code, each state in the state diagram becomes a class and each event on a state becomes an operation on the corresponding class. The system is implemented and can generate executable code for any state diagram. This makes the role of the dynamic model more significant and the job of designers even simpler.
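
    The table-driven scheme described above (each state a class, each event a method returning the successor state) can be sketched in Python rather than the Java the system emits; the transition table is a hypothetical example, not taken from the paper.

```python
# Hypothetical transition table distilled from a state diagram:
# (state, event) -> next state
TRANSITIONS = {
    ("Idle", "start"): "Running",
    ("Running", "pause"): "Paused",
    ("Paused", "start"): "Running",
    ("Running", "stop"): "Idle",
}

def generate_state_classes(transitions):
    """Emit one class per state; each event on a state becomes a method
    returning the successor state, mirroring the table-driven scheme."""
    states = {s for s, _ in transitions} | set(transitions.values())
    lines = []
    for state in sorted(states):
        lines.append(f"class {state}(State):")
        events = [(e, nxt) for (s, e), nxt in transitions.items() if s == state]
        if not events:
            lines.append("    pass")
        for event, nxt in sorted(events):
            lines.append(f"    def {event}(self):")
            lines.append(f"        return {nxt}()")
        lines.append("")
    return "\n".join(lines)

code = "class State:\n    pass\n\n" + generate_state_classes(TRANSITIONS)
namespace = {}
exec(code, namespace)                     # compile the generated classes
machine = namespace["Idle"]().start()     # Idle --start--> Running
print(type(machine).__name__)             # prints "Running"
```

An undefined event on a state simply has no method, so an illegal transition fails loudly with `AttributeError`, which matches the state-diagram semantics.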

  4. Aviation System Analysis Capability Air Carrier Investment Model-Cargo (United States)

    Johnson, Jesse; Santmire, Tara


    The purpose of the Aviation System Analysis Capability (ASAC) Air Carrier Investment Model-Cargo (ACIMC) is to examine the economic effects of technology investment on the air cargo market, particularly the market for new cargo aircraft. To do so, we have built an econometrically based model designed to operate like the ACIM. Two main drivers account for virtually all of the demand: the growth rate of the Gross Domestic Product (GDP) and changes in the fare yield (a proxy for the price charged, or fare). Differences from the passenger market arise from a combination of the nature of air cargo demand and the peculiarities of the air cargo market. The net effect of these two factors is that sales of new cargo aircraft are much less sensitive than sales of new passenger aircraft to either increases in GDP or changes in the costs of labor, capital, fuel, materials, and energy associated with the production of new cargo aircraft. This, in conjunction with the relatively small size of the cargo aircraft market, means technology improvements to cargo aircraft will do relatively little to spur increased sales of new cargo aircraft.

  5. RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1

    Energy Technology Data Exchange (ETDEWEB)



    The RELAP5 code has been developed for best estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients, such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal-hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes.

  6. Noise Residual Learning for Noise Modeling in Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Forchhammer, Søren


    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The noise model is one of the inherently difficult challenges in DVC. This paper considers Transform Domain Wyner-Ziv (TDWZ) coding and proposes...... decoding. A residual refinement step is also introduced to take advantage of correlation of DCT coefficients. Experimental results show that the proposed techniques robustly improve the coding efficiency of TDWZ DVC and for GOP=2 bit-rate savings up to 35% on WZ frames are achieved compared with DISCOVER....

  7. Lidar Remote Sensing of Forests: New Instruments and Modeling Capabilities (United States)

    Cook, Bruce D.


    Lidar instruments provide scientists with the unique opportunity to characterize the 3D structure of forest ecosystems. This information allows us to estimate properties such as wood volume, biomass density, stocking density, canopy cover, and leaf area. Structural information also can be used as drivers for photosynthesis and ecosystem demography models to predict forest growth and carbon sequestration. All lidars use time-of-flight measurements to compute accurate ranging measurements; however, there is a wide range of instruments and data types that are currently available, and instrument technology continues to advance at a rapid pace. This seminar will present new technologies that are in use and under development at NASA for airborne and space-based missions. Opportunities for instrument and data fusion will also be discussed, as Dr. Cook is the PI for G-LiHT, Goddard's LiDAR, Hyperspectral, and Thermal airborne imager. Lastly, this talk will introduce radiative transfer models that can simulate interactions between laser light and forest canopies. Developing modeling capabilities is important for providing continuity between observations made with different lidars, and to assist the design of new instruments. Dr. Bruce Cook is a research scientist in NASA's Biospheric Sciences Laboratory at Goddard Space Flight Center, and has more than 25 years of experience conducting research on ecosystem processes, soil biogeochemistry, and exchange of carbon, water vapor and energy between the terrestrial biosphere and atmosphere. His research interests include the combined use of lidar, hyperspectral, and thermal data for characterizing ecosystem form and function. He is Deputy Project Scientist for the Landsat Data Continuity Mission (LDCM); Project Manager for NASA's Carbon Monitoring System (CMS) pilot project for local-scale forest biomass; and PI of Goddard's LiDAR, Hyperspectral, and Thermal (G-LiHT) airborne imager.
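
    The time-of-flight ranging principle common to all these instruments is a one-line calculation: the round-trip time of the pulse multiplied by the speed of light, halved. The 200 ns return time below is an illustrative value.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_return_time(t_seconds):
    """Round-trip time of flight -> one-way range to the scattering surface."""
    return C * t_seconds / 2.0

# A canopy return arriving 200 ns after the outgoing pulse: ~30 m range
r = range_from_return_time(200e-9)
print(f"{r:.2f} m")
```

The same relation sets the vertical resolution of waveform lidars: a 1 ns digitizer bin corresponds to about 15 cm of range.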

  8. Multiscale Modeling of Nano-scale Phenomena: Towards a Multiphysics Simulation Capability for Design and Optimization of Sensor Systems

    Energy Technology Data Exchange (ETDEWEB)

    Becker, R; McElfresh, M; Lee, C; Balhorn, R; White, D


    In this white paper, a road map is presented to establish a multiphysics simulation capability for the design and optimization of sensor systems that incorporate nanomaterials and technologies. The Engineering Directorate's solid/fluid mechanics and electromagnetic computer codes will play an important role in both multiscale modeling and integration of required physics issues to achieve a baseline simulation capability. Molecular dynamics simulations performed primarily in the BBRP, CMS and PAT directorates will provide information for the construction of multiscale models. All of the theoretical developments will require closely coupled experimental work to develop material models and validate simulations. The plan is synergistic and complementary with the Laboratory's emerging core competency of multiscale modeling. The first application of the multiphysics computer code is the simulation of a "simple" biological system (protein recognition utilizing synthesized ligands) that has a broad range of applications including detection of biological threats, presymptomatic detection of illnesses, and drug therapy. While the overall goal is to establish a simulation capability, the near-term work is mainly focused on (1) multiscale modeling, i.e., the development of "continuum" representations of nanostructures based on information from molecular dynamics simulations, and (2) experiments for model development and validation. A list of LDRDER proposals and ongoing projects that could be coordinated to achieve these near-term objectives and demonstrate the feasibility and utility of a multiphysics simulation capability is given.

  9. The SCEC Community Modeling Environment (SCEC/CME) - An Overview of its Architecture and Current Capabilities (United States)

    Maechling, P. J.; Jordan, T. H.; Minster, B.; Moore, R.; Kesselman, C.; SCEC ITR Collaboration


    The Southern California Earthquake Center (SCEC), in collaboration with the San Diego Supercomputer Center, the USC Information Sciences Institute, the Incorporated Research Institutions for Seismology, and the U.S. Geological Survey, is developing the Southern California Earthquake Center Community Modeling Environment (CME) under a five-year grant from the National Science Foundation's Information Technology Research (ITR) Program, jointly funded by the Geosciences and Computer and Information Science & Engineering Directorates. The CME system is an integrated geophysical simulation modeling framework that automates the process of selecting, configuring, and executing models of earthquake systems. During the Project's first three years, we have performed fundamental geophysical and information technology research and have also developed substantial system capabilities, software tools, and data collections that can help scientists perform systems-level earthquake science. The CME system provides collaborative tools to facilitate distributed research and development. These collaborative tools are primarily communication tools, providing researchers with access to information in ways that are convenient and useful. The CME system provides collaborators with access to significant computing and storage resources. The computing resources of the Project include in-house servers, Project allocations on the USC High Performance Computing Linux Cluster, as well as allocations on NPACI supercomputers and the TeraGrid. The CME system provides access to SCEC community geophysical models such as the Community Velocity Model, Community Fault Model, Community Crustal Motion Model, and the Community Block Model. The organizations that develop these models often provide access to them, so it is not necessary to use the CME system to access these models. However, in some cases, the CME system supplements the SCEC community models with utility codes that make it easier to use or access

  10. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models (United States)

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.


    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
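
    The iterative least-squares machinery that PEST-style codes wrap can be illustrated with a bare Gauss-Newton loop using finite-difference sensitivities. This is a generic sketch of the method's core on a two-parameter exponential model, not PEST++ code (which adds regularization, parameter bounds, and parallel run management, and perturbs an external model rather than an inline function).

```python
import math

def model(a, b, x):
    return a * math.exp(-b * x)

def gauss_newton(xs, ys, a=1.0, b=0.1, iters=20, h=1e-6):
    """Gauss-Newton with finite-difference sensitivities: the core loop of
    PEST-style parameter estimation, written out for two parameters."""
    for _ in range(iters):
        r = [y - model(a, b, x) for x, y in zip(xs, ys)]               # residuals
        ja = [(model(a + h, b, x) - model(a, b, x)) / h for x in xs]   # d model / d a
        jb = [(model(a, b + h, x) - model(a, b, x)) / h for x in xs]   # d model / d b
        # Solve the 2x2 normal equations (J^T J) delta = J^T r directly
        aa = sum(v * v for v in ja); ab = sum(u * v for u, v in zip(ja, jb))
        bb = sum(v * v for v in jb)
        ra = sum(u * v for u, v in zip(ja, r)); rb = sum(u * v for u, v in zip(jb, r))
        det = aa * bb - ab * ab
        a += (bb * ra - ab * rb) / det
        b += (aa * rb - ab * ra) / det
    return a, b

# Synthetic noise-free observations from a=5, b=0.3, so the fit should recover them
xs = [0.0, 1.0, 2.0, 4.0, 8.0]
ys = [5.0 * math.exp(-0.3 * x) for x in xs]
a, b = gauss_newton(xs, ys)
print(round(a, 3), round(b, 3))
```

In PEST and PEST++ the "model" is an external executable, each Jacobian column costs one forward run, and the run manager distributes those runs, which is why an extensible object-oriented design matters for large problems.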

  11. Quantitative Model for Supply Chain Visibility: Process Capability Perspective

    Directory of Open Access Journals (Sweden)

    Youngsu Lee


    Full Text Available Currently, the intensity of enterprise competition has increased as a result of a greater diversity of customer needs as well as the persistence of a long-term recession. The results of competition are becoming severe enough to determine the survival of a company. To survive global competition, each firm must focus on achieving innovation excellence and operational excellence as core competencies for sustainable competitive advantage. Supply chain management is now regarded as one of the most effective innovation initiatives to achieve operational excellence, and its importance has become ever more apparent. However, few companies effectively manage their supply chains, and the greatest difficulty is in achieving supply chain visibility. Many companies still suffer from a lack of visibility, and in spite of extensive research and the availability of modern technologies, the concepts and quantification methods to increase supply chain visibility are still ambiguous. Based on the extant research in supply chain visibility, this study proposes an extended visibility concept focusing on a process capability perspective and suggests a more quantitative model using the Z score from Six Sigma methodology to evaluate and improve the level of supply chain visibility.
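
    The Z-score idea borrowed from Six Sigma can be illustrated with the usual short-term capability calculation: the distance from the process mean to the nearer specification limit, in standard deviations. The spec limits and cycle-time data below are hypothetical, and this simplified form is not the paper's full visibility model.

```python
import statistics

def z_score(samples, lsl, usl):
    """Process capability Z: distance from the mean to the nearer spec
    limit, in standard deviations (a simplified Six Sigma 'Z bench')."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)       # sample standard deviation
    return min(usl - mu, mu - lsl) / sigma

# Hypothetical order-cycle times (days) against a 2-10 day service window
times = [5.1, 6.3, 5.8, 7.2, 6.0, 5.5, 6.8, 6.1]
z = z_score(times, lsl=2.0, usl=10.0)
print(round(z, 2))
```

A larger Z means the measured process (here, a visibility-related metric such as order-status latency) sits further inside its specification, i.e., higher capability.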

  12. Innovation and dynamic capabilities of the firm: Defining an assessment model

    Directory of Open Access Journals (Sweden)

    André Cherubini Alves


    Full Text Available Innovation and dynamic capabilities have gained considerable attention in both academia and practice. While one of the oldest inquiries in the economics and strategy literature involves understanding the features that drive business success and a firm’s perpetuity, the literature still lacks a comprehensive model of innovation and dynamic capabilities. This study presents a model that assesses firms’ innovation and dynamic capabilities perspectives based on four essential capabilities: development, operations, management, and transaction capabilities. Data from a survey of 1,107 Brazilian manufacturing firms were used for empirical testing and discussion of the dynamic capabilities framework. Regression and factor analyses validated the model; we discuss the results, contrasting them with the dynamic capabilities framework. Operations capability is the least dynamic of all the capabilities, with the least influence on innovation. This reinforces the notion of operations capabilities as “ordinary capabilities,” whereas management, development, and transaction capabilities better explain firms’ dynamics and innovation.

  13. Two Phase Flow Models and Numerical Methods of the Commercial CFD Codes

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Sung Won; Jeong, Jae Jun; Chang, Seok Kyu; Cho, Hyung Kyu


    The use of commercial CFD codes extends to various fields of engineering. Thermal-hydraulic analysis is one of the promising engineering fields for application of CFD codes. Up to now, the main application of commercial CFD codes has been focused on single-phase, single-composition fluid dynamics. Nuclear thermal hydraulics, however, deals with abrupt pressure changes, high heat fluxes, and phase-change heat transfer. In order to overcome these CFD limitations and to extend the capability of nuclear thermal-hydraulics analysis, research efforts are being made to combine CFD and nuclear thermal hydraulics. To achieve this goal, the models and correlations currently used in commercial CFD codes should be reviewed and investigated. This report summarizes the constitutive relationships used in FLUENT, STAR-CD, and CFX. Brief information on the solution technologies is also included.

  14. On the Delay Characteristics for Point-to-Point links using Random Linear Network Coding with On-the-fly Coding Capabilities

    DEFF Research Database (Denmark)

    Tömösközi, Máté; Fitzek, Frank; Roetter, Daniel Enrique Lucani


    Video surveillance and similar real-time applications on wireless networks require increased reliability and high performance of the underlying transmission layer. Classical solutions, such as Reed-Solomon codes, increase the reliability, but typically have the negative side-effect of additional...... overall delays due to processing overheads. This paper describes the delay reduction achieved through online network coding approaches with a limit on the number of packets to be mixed before decoding and a systematic encoding structure. We use the in-order per packet delay as our key performance metric....... This metric captures the elapsed time between (network) encoding RTP packets and completely decoding the packets in-order on the receiver side. Our solutions are implemented and evaluated on a point-to-point link between a Raspberry Pi device and a network (de)coding enabled software running on a regular PC......

  15. A unified model of the standard genetic code. (United States)

    José, Marco V; Zamudio, Gabriel S; Morgado, Eberto R


    The Rodin-Ohno (RO) and the Delarue models divide the table of the genetic code into two classes of aminoacyl-tRNA synthetases (aaRSs I and II) with recognition from the minor or major groove sides of the tRNA acceptor stem, respectively. These models are asymmetric but they are biologically meaningful. On the other hand, the standard genetic code (SGC) can be derived from the primeval RNY code (R stands for purines, Y for pyrimidines, and N for any of them). In this work, the RO model is derived by means of group actions, namely, symmetries represented by automorphisms, assuming that the SGC originated from a primeval RNY code. It turns out that the RO model is symmetric in a six-dimensional (6D) hypercube. Conversely, using the same automorphisms, we show that the RO model can lead to the SGC. In addition, the asymmetric Delarue model becomes symmetric by means of quotient group operations. We formulate isometric functions that convert the class aaRS I into the class aaRS II and vice versa. We show that the four polar requirement categories display a symmetrical arrangement in our 6D hypercube. Altogether, these results cannot be attained in either two or three dimensions. We discuss the present unified 6D algebraic model, which is compatible with both the SGC (based upon the primeval RNY code) and the RO model.
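
    The 6D-hypercube picture follows from assigning two bits to each nucleotide, so a codon becomes a 6-bit vertex and single-bit changes are hypercube edges. The particular bit assignment below is one common convention shown for illustration; the paper's algebraic structure (automorphisms, quotient groups) is built on top of such an encoding.

```python
# A 2-bit assignment per base (an illustrative convention): one bit for the
# chemical class (purine/pyrimidine), one for the H-bonding pattern.
BASE_BITS = {"C": (0, 0), "U": (0, 1), "G": (1, 0), "A": (1, 1)}

def codon_vertex(codon):
    """Map a codon to its vertex of the 6D hypercube (a 6-bit tuple)."""
    return tuple(bit for base in codon for bit in BASE_BITS[base])

def hamming(u, v):
    """Edges of the hypercube connect vertices at Hamming distance 1."""
    return sum(x != y for x, y in zip(u, v))

# AUG and GUG differ by one transition (A -> G), i.e. a single bit flip
v = codon_vertex("AUG")
d = hamming(v, codon_vertex("GUG"))
print(v, d)
```

Under this encoding the 64 codons exhaust the 2^6 vertices, which is why the symmetry arguments live naturally in six dimensions and not in two or three.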

  16. Fusion safety codes: International modeling with MELCOR and ATHENA-INTRA

    CERN Document Server

    Marshall, T; Topilski, L; Merrill, B


    For a number of years, the world fusion safety community has been involved in benchmarking their safety analyses codes against experiment data to support regulatory approval of a next step fusion device. This paper discusses the benchmarking of two prominent fusion safety thermal-hydraulic computer codes. The MELCOR code was developed in the US for fission severe accident safety analyses and has been modified for fusion safety analyses. The ATHENA code is a multifluid version of the US-developed RELAP5 code that is also widely used for fusion safety analyses. The ENEA Fusion Division uses ATHENA in conjunction with the INTRA code for its safety analyses. The INTRA code was developed in Germany and predicts containment building pressures, temperatures and fluid flow. ENEA employs the French-developed ISAS system to couple ATHENA and INTRA. This paper provides a brief introduction of the MELCOR and ATHENA-INTRA codes and presents their modeling results for the following breaches of a water cooling line into the...

  17. RHOCUBE: 3D density distributions modeling code (United States)

    Nikutta, Robert; Agliozzo, Claudia


    RHOCUBE models 3D density distributions on a discrete Cartesian grid and their integrated 2D maps. It can be used for a range of applications, including modeling the electron number density in LBV shells and computing the emission measure. The RHOCUBE Python package provides several 3D density distributions, including a powerlaw shell, truncated Gaussian shell, constant-density torus, dual cones, and spiralling helical tubes, and can accept additional distributions. RHOCUBE provides convenient methods for shifts and rotations in 3D, and if necessary, an arbitrary number of density distributions can be combined into the same model cube and the integration ∫ dz performed through the joint density field.
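    The core idea of the record, sampling an analytic density on a Cartesian grid and collapsing it along the line of sight, can be sketched as follows (grid size and shell parameters are arbitrary, and none of the names below are RHOCUBE's actual API):

```python
import numpy as np

def gaussian_shell(n=64, r0=0.6, sigma=0.1, extent=1.0):
    """Sample a spherically symmetric Gaussian shell on an n^3 grid."""
    ax = np.linspace(-extent, extent, n)
    x, y, z = np.meshgrid(ax, ax, ax, indexing='ij')
    r = np.sqrt(x * x + y * y + z * z)
    return np.exp(-0.5 * ((r - r0) / sigma) ** 2), ax

rho, ax = gaussian_shell()
dz = ax[1] - ax[0]
image = rho.sum(axis=2) * dz   # the projected 2D map, i.e. the integral of rho dz
```

    A second distribution (say, a torus) could simply be added into the same `rho` cube before projecting, mirroring RHOCUBE's ability to combine several density distributions in one model cube.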

  18. On the Delay Characteristics for Point-to-Point links using Random Linear Network Coding with On-the-fly Coding Capabilities

    DEFF Research Database (Denmark)

    Tömösközi, Máté; Fitzek, Frank; Roetter, Daniel Enrique Lucani


    Video surveillance and similar real-time applications on wireless networks require increased reliability and high performance of the underlying transmission layer. Classical solutions, such as Reed-Solomon codes, increase the reliability, but typically have the negative side-effect of additional ...

  19. LMFBR models for the ORIGEN2 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.


    Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-²³³U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.

  1. Information Theoretic Authentication and Secrecy Codes in the Splitting Model

    CERN Document Server

    Huber, Michael


    In the splitting model, information theoretic authentication codes allow non-deterministic encoding, that is, several messages can be used to communicate a particular plaintext. Certain applications require that the aspect of secrecy should hold simultaneously. Ogata-Kurosawa-Stinson-Saido (2004) have constructed optimal splitting authentication codes achieving perfect secrecy for the special case when the number of keys equals the number of messages. In this paper, we establish a construction method for optimal splitting authentication codes with perfect secrecy in the more general case when the number of keys may differ from the number of messages. To the best of our knowledge, this is the first result of this type.


    Institute of Scientific and Technical Information of China (English)

    Xiao Jiang; Wu Chengke


    In order to apply the Human Visual System (HVS) model to the JPEG2000 standard, several implementation alternatives are discussed and a new scheme of visual optimization is introduced that modifies the slope of the rate-distortion curve. The novelty is that visual weighting is applied not by lifting the coefficients in the wavelet domain, but through code stream organization. The scheme retains all the features of Embedded Block Coding with Optimized Truncation (EBCOT), such as resolution-progressive decoding, good robustness against error-bit spread, and compatibility with lossless compression. Performing better than other methods, it keeps the shortest standard codestream and decompression time and supports VIsual Progressive (VIP) coding.
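    The slope-modification idea can be illustrated with a hypothetical helper: in EBCOT-style rate allocation, each code-block keeps coding passes while its rate-distortion slope stays above a threshold, and visual weighting simply scales that slope so perceptually important blocks retain more passes. This is a toy, not JPEG2000 reference code.

```python
def weighted_truncation(passes, weight, slope_threshold):
    """Pick a truncation point for one code-block: keep coding passes while
    the visually weighted R-D slope (weight * dD/dR) stays above threshold."""
    kept = 0
    for i in range(1, len(passes)):
        d_rate = passes[i][0] - passes[i - 1][0]
        d_dist = passes[i - 1][1] - passes[i][1]   # distortion decreases
        if d_rate > 0 and weight * d_dist / d_rate >= slope_threshold:
            kept = i                               # keep this pass too
        else:
            break
    return kept

# Cumulative (rate, distortion) after each pass of one hypothetical block:
passes = [(0, 100.0), (10, 60.0), (25, 40.0), (50, 35.0)]
plain = weighted_truncation(passes, weight=1.0, slope_threshold=0.3)
salient = weighted_truncation(passes, weight=2.0, slope_threshold=0.3)
```

    With the numbers above, a visually salient block (`weight=2.0`) keeps one more coding pass than a plain block, without ever touching the wavelet coefficients themselves.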

  3. Differences between the 1993 and 1995 CABO Model Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Conover, D.R.; Lucas, R.G.


    The Energy Policy Act of 1992 requires the US DOE to determine if changes to the Council of American Building Officials' (CABO) 1993 Model Energy Code (MEC) (CABO 1993), published in the 1995 edition of the MEC (CABO 1995), will improve energy efficiency in residential buildings. The DOE, the states, and others have expressed an interest in the differences between the 1993 and 1995 editions of the MEC. This report describes each change to the 1993 MEC, and its impact. Referenced publications are also listed along with discrepancies between code changes approved in the 1994 and 1995 code-change cycles and what actually appears in the 1995 MEC.

  4. Development of Parallel Computing Framework to Enhance Radiation Transport Code Capabilities for Rare Isotope Beam Facility Design

    Energy Technology Data Exchange (ETDEWEB)

    Kostin, Mikhail [FRIB, MSU]; Mokhov, Nikolai [FNAL]; Niita, Koji [RIST, Japan]


    A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran 77, Fortran 90 or C. The module is largely independent of the radiation transport codes it is used with, and is connected to the codes by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It is possible to use it with other codes such as PHITS, FLUKA and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where interference from other users is possible.
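    The checkpoint facility described above amounts to persisting the tally and the random-number-generator state so a restarted job continues the exact same history stream. A minimal single-process sketch of that idea (names and the toy "history" are assumptions, not the framework's API):

```python
import pickle
import random

def simulate(n_histories, state=None):
    """Advance a toy Monte Carlo tally to n_histories total histories.
    `state` is a resumable checkpoint: tally, history count and RNG state."""
    if state is None:
        state = {'done': 0, 'tally': 0.0, 'rng': random.Random(42).getstate()}
    rng = random.Random()
    rng.setstate(state['rng'])
    while state['done'] < n_histories:
        state['tally'] += rng.random()      # stand-in for one particle history
        state['done'] += 1
    state['rng'] = rng.getstate()           # capture the stream position
    return state

full = simulate(5000)                               # one uninterrupted run
ckpt = pickle.loads(pickle.dumps(simulate(2500)))   # write, then read, a checkpoint
resumed = simulate(5000, ckpt)                      # resume from the checkpoint
assert resumed['tally'] == full['tally']            # bitwise-identical result
```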

  5. Modeling Guidelines for Code Generation in the Railway Signaling Context (United States)

    Ferrari, Alessio; Bacherini, Stefano; Fantechi, Alessandro; Zingoni, Niccolo


    Modeling guidelines constitute one of the fundamental cornerstones for Model Based Development. Their relevance is essential when dealing with code generation in the safety-critical domain. This article presents the experience of a railway signaling systems manufacturer on this issue. Introduction of Model-Based Development (MBD) and code generation in the industrial safety-critical sector created a crucial paradigm shift in the development process of dependable systems. While traditional software development focuses on the code, with MBD practices the focus shifts to model abstractions. The change has fundamental implications for safety-critical systems, which still need to guarantee a high degree of confidence also at code level. Usage of the Simulink/Stateflow platform for modeling, which is a de facto standard in control software development, does not ensure by itself production of high-quality dependable code. This issue has been addressed by companies through the definition of modeling rules imposing restrictions on the usage of design tool components, in order to enable production of qualified code. The MAAB Control Algorithm Modeling Guidelines (MathWorks Automotive Advisory Board) [3] is a well-established set of publicly available rules for modeling with Simulink/Stateflow. This set of recommendations has been developed by a group of OEMs and suppliers of the automotive sector with the objective of enforcing and easing the usage of the MathWorks tools within the automotive industry. The guidelines were published in 2001 and afterwards revised in 2007 in order to integrate some additional rules developed by the Japanese division of MAAB [5]. The scope of the current edition of the guidelines ranges from model maintainability and readability to code generation issues. The rules are conceived as a reference baseline and therefore they need to be tailored to comply with the characteristics of each industrial context. Customization of these

  6. Description of codes and models to be used in risk assessment

    Energy Technology Data Exchange (ETDEWEB)


    Human health and environmental risk assessments will be performed as part of the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) remedial investigation/feasibility study (RI/FS) activities at the Hanford Site. Analytical and computer-encoded numerical models are commonly used during both the remedial investigation (RI) and feasibility study (FS) to predict or estimate the concentration of contaminants at the point of exposure to humans and/or the environment. This document has been prepared to identify the computer codes that will be used in support of RI/FS human health and environmental risk assessments at the Hanford Site. In addition to the CERCLA RI/FS process, it is recommended that these computer codes be used when fate and transport analyses are required for other activities. Additional computer codes may be used for other purposes (e.g., design of tracer tests, location of observation wells, etc.). This document provides guidance for unit managers in charge of RI/FS activities. Use of the same computer codes for all analytical activities at the Hanford Site will promote consistency, reduce the effort required to develop, validate, and implement models to simulate Hanford Site conditions, and expedite regulatory review. The discussion provides a description of how models will likely be developed and utilized at the Hanford Site. It is intended to summarize previous environmental-related modeling at the Hanford Site and provide background for future model development. The modeling capabilities that are desirable for the Hanford Site are identified, along with the codes that were evaluated. The recommendations include the codes proposed to support future risk assessment modeling at the Hanford Site, and provide the rationale for the codes selected. 27 refs., 3 figs., 1 tab.

  7. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC).

    Energy Technology Data Exchange (ETDEWEB)

    Schultz, Peter Andrew


    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and the constitutive models they inform or generate. This report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum scale modeling and simulation (M&S), and a plan to incrementally incorporate effective V&V into subcontinuum scale M&S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum scale phenomena.

  8. Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture (United States)

    Meng, Chunfang


    We present Defmod, an open source (linear) finite element code that enables us to efficiently model the crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow and frictional fault slip. Ali (2015) provides the original code introducing an implicit solver for the (quasi-)static problem, and an explicit solver for the dynamic problem. The fault constraint is implemented via Lagrange multipliers. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static state and dynamic state. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g. due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against established results.
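    The adaptive switch between quasi-static and dynamic states can be caricatured with a one-degree-of-freedom stick-slip model: stress accrues slowly until a failure criterion is met, then the "dynamic" event relieves it. This is purely a cartoon of the idea (all parameters are arbitrary), not Defmod's formulation.

```python
def stick_slip(k=1.0, v_load=0.25, mu_s=1.0, mu_d=0.5, n_steps=100, dt=1.0):
    """Toy 1D fault: stress accrues quasi-statically until the failure
    criterion (static strength mu_s) is met, then the solver 'switches'
    to the dynamic state, modeled here as an instantaneous drop to mu_d."""
    stress, events, history = 0.0, 0, []
    for _ in range(n_steps):
        stress += k * v_load * dt          # quasi-static tectonic loading
        if stress >= mu_s:                 # failure criterion met
            stress = mu_d                  # dynamic rupture: stress drop
            events += 1
        history.append(stress)
    return events, history

events, history = stick_slip()             # a regular sequence of slip events
```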

  9. Numerical modeling of immiscible two-phase flow in micro-models using a commercial CFD code

    Energy Technology Data Exchange (ETDEWEB)

    Crandall, Dustin; Ahmadia, Goodarz; Smith, Duane H.


    Off-the-shelf CFD software is being used to analyze everything from flow over airplanes to lab-on-a-chip designs. How accurately, then, can immiscible two-phase flow through small-scale models of porous media be simulated? We evaluate the capability of the CFD code FLUENT™ to model immiscible flow in micro-scale, bench-top stereolithography models. By comparing the simulations to experimental results we show that accurate 3D modeling is possible.

  10. Student Model Tools Code Release and Documentation

    DEFF Research Database (Denmark)

    Johnson, Matthew; Bull, Susan; Masci, Drew

    This document contains a wealth of information about the design and implementation of the Next-TELL open learner model. Information is included about the final specification (Section 3), the interfaces and features (Section 4), its implementation and technical design (Section 5) and also a summary...

  11. Code-to-Code Comparison, and Material Response Modeling of Stardust and MSL using PATO and FIAT (United States)

    Omidy, Ali D.; Panerai, Francesco; Martin, Alexandre; Lachaud, Jean R.; Cozmuta, Ioana; Mansour, Nagi N.


    This report provides a code-to-code comparison between PATO, a recently developed high fidelity material response code, and FIAT, NASA's legacy code for ablation response modeling. The goal is to demonstrate that FIAT and PATO generate the same results when using the same models. Test cases of increasing complexity are used, from both arc-jet testing and flight experiments. When using the exact same physical models, material properties and boundary conditions, the two codes give results that agree to within 2%. The minor discrepancy is attributed to the inclusion of the gas phase heat capacity (cp) in the energy equation in PATO, and not in FIAT.

  12. The JCSS probabilistic model code: Experience and recent developments

    NARCIS (Netherlands)

    Chryssanthopoulos, M.; Diamantidis, D.; Vrouwenvelder, A.C.W.M.


    The JCSS Probabilistic Model Code (JCSS-PMC) has been available for public use on the JCSS website for over two years. During this period, several examples have been worked out and new probabilistic models have been added. Since the engineering community has already been exposed t

  13. A Mathematical Model for Comparing Holland's Personality and Environmental Codes. (United States)

    Kwak, Junkyu Christopher; Pulvino, Charles J.


    Presents a mathematical model utilizing three-letter codes of personality patterns determined from the Self Directed Search. This model compares personality types over time or determines relationships between personality types and person-environment interactions. This approach is consistent with Holland's theory yet more comprehensive than one- or…
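    One way such a comparison of three-letter Holland codes can work is a position-weighted agreement score, where matches in the first (dominant) letter count more than matches in later letters. The function below is a toy illustration with assumed weights, not the paper's actual formula.

```python
def code_agreement(code_a, code_b, weights=(3, 2, 1)):
    """Toy position-weighted agreement between two three-letter Holland
    codes; the weights are an illustrative assumption, not the paper's."""
    return sum(w for a, b, w in zip(code_a, code_b, weights) if a == b)

# Identical codes score the maximum (6); agreement only on the
# dominant first letter scores 3; disjoint codes score 0.
same = code_agreement("RIA", "RIA")
first_only = code_agreement("RIA", "RSE")
```

    Tracking this score over time for one person, or between a person code and an environment code, gives a simple numeric handle on person-environment congruence.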

  14. Model-building codes for membrane proteins.

    Energy Technology Data Exchange (ETDEWEB)

    Shirley, David Noyes; Hunt, Thomas W.; Brown, W. Michael; Schoeniger, Joseph S. (Sandia National Laboratories, Livermore, CA); Slepoy, Alexander; Sale, Kenneth L. (Sandia National Laboratories, Livermore, CA); Young, Malin M. (Sandia National Laboratories, Livermore, CA); Faulon, Jean-Loup Michel; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA)


    We have developed a novel approach to modeling the transmembrane spanning helical bundles of integral membrane proteins using only a sparse set of distance constraints, such as those derived from MS3-D, dipolar-EPR and FRET experiments. Algorithms have been written for searching the conformational space of membrane protein folds matching the set of distance constraints, which provides initial structures for local conformational searches. Local conformation search is achieved by optimizing these candidates against a custom penalty function that incorporates both measures derived from statistical analysis of solved membrane protein structures and distance constraints obtained from experiments. This results in refined helical bundles to which the interhelical loops and amino acid side-chains are added. Using a set of only 27 distance constraints extracted from the literature, our methods successfully recover the structure of dark-adapted rhodopsin to within 3.2 Å of the crystal structure.
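    The distance-constraint part of such a penalty function is simple to sketch: each experimental cross-link contributes a squared violation whenever the model distance falls outside its allowed window. The coordinates and bounds below are made-up toy values, not the paper's rhodopsin data.

```python
import numpy as np

def constraint_penalty(coords, constraints):
    """Sum of squared violations of sparse distance constraints.
    coords: (n, 3) array of marker positions; constraints: tuples
    (i, j, d_min, d_max), e.g. cross-link distance windows."""
    penalty = 0.0
    for i, j, d_min, d_max in constraints:
        d = float(np.linalg.norm(coords[i] - coords[j]))
        if d < d_min:
            penalty += (d_min - d) ** 2     # too close: steric-like violation
        elif d > d_max:
            penalty += (d - d_max) ** 2     # too far: cross-link violated
    return penalty

helix_ends = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [3.0, 4.0, 0.0]])
links = [(0, 1, 1.0, 2.0),   # violated: actual distance is 3.0
         (0, 2, 4.0, 6.0)]   # satisfied: actual distance is 5.0
```

    A conformational search then minimizes this penalty (plus the knowledge-based terms) over candidate helix placements.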

  15. Multiview coding mode decision with hybrid optimal stopping model. (United States)

    Zhao, Tiesong; Kwong, Sam; Wang, Hanli; Wang, Zhou; Pan, Zhaoqing; Kuo, C-C Jay


    In a generic decision process, optimal stopping theory aims to achieve a good tradeoff between decision performance and time consumed, with the advantages of theoretical decision-making and predictable decision performance. In this paper, optimal stopping theory is employed to develop an effective hybrid model for the mode decision problem, which aims to theoretically achieve a good tradeoff between the two interrelated measurements in mode decision, namely computational complexity reduction and rate-distortion degradation. The proposed hybrid model is implemented and examined with a multiview encoder. To support the model and further promote coding performance, the multiview coding mode characteristics, including predicted mode probability and estimated coding time, are jointly investigated with inter-view correlations. Exhaustive experimental results with a wide range of video resolutions reveal the efficiency and robustness of our method, with high decision accuracy, negligible computational overhead, and almost intact rate-distortion performance compared to the original encoder.
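    The flavor of an early-terminating mode decision can be shown with a much-simplified stand-in for the paper's optimal-stopping rule: scan candidate modes in descending prior probability and stop once the probability that an unchecked mode is best falls below a threshold. The costs, priors and stopping rule below are all illustrative assumptions.

```python
def mode_decision(costs, prior, stop_threshold=0.1):
    """Check modes in descending prior probability and stop once the
    probability mass of the unchecked modes drops below the threshold
    (a toy surrogate for an optimal-stopping mode-decision rule)."""
    order = sorted(range(len(prior)), key=lambda m: -prior[m])
    best, best_cost, remaining, checked = None, float('inf'), 1.0, 0
    for m in order:
        checked += 1                       # one expensive R-D evaluation
        if costs[m] < best_cost:
            best, best_cost = m, costs[m]
        remaining -= prior[m]
        if remaining < stop_threshold:     # unlikely a better mode remains
            break
    return best, checked

# Four candidate modes: the early stop skips the rarely-best last mode.
best, checked = mode_decision(costs=[10.0, 8.0, 12.0, 7.0],
                              prior=[0.5, 0.3, 0.15, 0.05])
```

    The tradeoff is visible in the example: the cheapest mode overall (cost 7.0) is skipped because its prior probability is tiny, saving one R-D evaluation at a small rate-distortion penalty.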

  16. Using Genome-scale Models to Predict Biological Capabilities

    DEFF Research Database (Denmark)

    O’Brien, Edward J.; Monk, Jonathan M.; Palsson, Bernhard O.


    Constraint-based reconstruction and analysis (COBRA) methods at the genome scale have been under development since the first whole-genome sequences appeared in the mid-1990s. A few years ago, this approach began to demonstrate the ability to predict a range of cellular functions, including cellular growth capabilities on various substrates and the effect of gene knockouts at the genome scale. Thus, much interest has developed in understanding and applying these methods to areas such as metabolic engineering, antibiotic design, and organismal and enzyme evolution. This Primer will get you started.
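    At its core, a COBRA growth prediction is a linear program: maximize the biomass flux subject to steady-state mass balance S·v = 0 and flux bounds. A minimal three-reaction toy network makes this concrete (scipy's generic LP solver stands in here for a dedicated COBRA toolbox; the network itself is invented).

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake (v0), conversion A -> B (v1), biomass drain (v2).
# Steady state S v = 0 for the internal metabolites A and B.
S = np.array([[1.0, -1.0, 0.0],     # A: produced by v0, consumed by v1
              [0.0, 1.0, -1.0]])    # B: produced by v1, consumed by v2
bounds = [(0.0, 10.0), (0.0, None), (0.0, None)]   # substrate uptake capped at 10

# Maximize the biomass flux v2 (linprog minimizes, hence the sign flip).
res = linprog(c=[0.0, 0.0, -1.0], A_eq=S, b_eq=[0.0, 0.0], bounds=bounds)
growth = res.x[2]    # optimal growth equals the uptake limit here
```

    A gene-knockout prediction then amounts to re-solving with the corresponding reaction's bounds forced to zero and seeing whether the optimal biomass flux collapses.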

  17. An Evacuation Emergency Response Model Coupling Atmospheric Release Advisory Capability Output. (United States)


    Concentration contours coupled with the SMI evacuation model were calculated using the MATHEW and ADPIC codes. The two models, described briefly in the report, can be run on a CDC 7600 computer within a matter of minutes after the computer center is notified.

  18. TRIPOLI-4® Monte Carlo code ITER A-lite neutronic model validation

    Energy Technology Data Exchange (ETDEWEB)

    Jaboulay, Jean-Charles, E-mail: [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France); Cayla, Pierre-Yves; Fausser, Clement [MILLENNIUM, 16 Av du Québec Silic 628, F-91945 Villebon sur Yvette (France); Damian, Frederic; Lee, Yi-Kang; Puma, Antonella Li; Trama, Jean-Christophe [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France)


    3D Monte Carlo transport codes are extensively used in neutronic analysis, especially in radiation protection and shielding analyses for fission and fusion reactors. TRIPOLI-4® is a Monte Carlo code developed by CEA. The aim of this paper is to show its capability to model a large-scale fusion reactor with complex neutron source and geometry. A benchmark between MCNP5 and TRIPOLI-4® on the ITER A-lite model was carried out; neutron flux, nuclear heating in the blankets and tritium production rate in the European TBMs were evaluated and compared. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model. Simplified TBMs, from KIT, were integrated in the equatorial port. A good agreement between MCNP and TRIPOLI-4® is shown; the discrepancies mostly fall within the statistical error.

  19. Modelling spread of Bluetongue in Denmark: The code

    DEFF Research Database (Denmark)

    Græsbøll, Kaare

    This technical report was produced to make public the code produced as the main project of the PhD project by Kaare Græsbøll, with the title: "Modelling spread of Bluetongue and other vector borne diseases in Denmark and evaluation of intervention strategies".

  20. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    Directory of Open Access Journals (Sweden)

    W. Bastiaan Kleijn


    Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
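    The transform-quantize-invert pipeline can be demonstrated with a drastically simplified stand-in: here an FFT plays the role of the paper's invertible auditory model, quantization happens in the transform domain, and the inverse transform maps back to the signal domain. Everything below is a toy assumption, not the auditory model itself.

```python
import numpy as np

def code_and_decode(signal, step):
    """Toy invertible front end: an orthogonal transform (the FFT stands in
    for an invertible auditory model), uniform quantization of the transform
    coefficients, then exact inversion back to the time domain."""
    spec = np.fft.rfft(signal)
    quantized = np.round(spec / step) * step   # coarser step = fewer bits
    return np.fft.irfft(quantized, n=len(signal))

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
fine = code_and_decode(x, step=1e-6)     # near-transparent reconstruction
coarse = code_and_decode(x, step=20.0)   # heavy distortion, far fewer bits
```

    The point of the paper's auditory-domain version is that the quantization step can then be chosen against a simple distortion criterion, because the perceptually complex criterion has already been absorbed into the transform.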

  1. A review of MAAP4 code structure and core T/H model

    Energy Technology Data Exchange (ETDEWEB)

    Song, Yong Mann; Park, Soo Yong


    The modular accident analysis program (MAAP) version 4 is a computer code that can simulate the response of LWR plants during severe accident sequences and includes models for all of the important phenomena which might occur during accident sequences. In this report, the MAAP4 code structure and the core thermal hydraulic (T/H) model, which models the T/H behavior of the reactor core and the response of core components during all accident phases involving degraded cores, are specifically reviewed and then reorganized. This reorganization is performed by gathering the related models under each topic, whose contents and order are the same as in the other two reports, for MELCOR and SCDAP/RELAP5, to be published simultaneously. The major purpose of the report is to provide information about the characteristics of the MAAP4 core T/H models for an integrated severe accident computer code development being performed under one of the on-going mid/long-term nuclear development projects. The basic characteristics of the new integrated severe accident code include: 1) Flexible simulation capability of the primary side, secondary side, and the containment under severe accident conditions, 2) Detailed plant simulation, 3) Convenient user-interfaces, 4) High modularization for easy maintenance/improvement, and 5) State-of-the-art model selection. In conclusion, the MAAP4 code appears superior with respect to items 3) and 4) but somewhat inferior with respect to items 1) and 2). For item 5), more effort should be made in the future to compare the separate models in detail with both other codes and recent world-wide work. (author). 17 refs., 1 tab., 12 figs.

  2. Inclusion of models to describe severe accident conditions in the fuel simulation code DIONISIO

    Energy Technology Data Exchange (ETDEWEB)

    Lemes, Martín; Soba, Alejandro [Sección Códigos y Modelos, Gerencia Ciclo del Combustible Nuclear, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina); Daverio, Hernando [Gerencia Reactores y Centrales Nucleares, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina); Denis, Alicia [Sección Códigos y Modelos, Gerencia Ciclo del Combustible Nuclear, Comisión Nacional de Energía Atómica, Avenida General Paz 1499, 1650 San Martín, Provincia de Buenos Aires (Argentina)


    The simulation of fuel rod behavior is a complex task that demands not only accurate models to describe the numerous phenomena occurring in the pellet, cladding and internal rod atmosphere but also an adequate interconnection between them. In recent years several models have been incorporated into the DIONISIO code with the purpose of increasing its precision and reliability. After the regrettable events at Fukushima, the need for codes capable of simulating nuclear fuels under accident conditions has become evident. Heat removal occurs in a quite different way than during normal operation and this fact determines a completely new set of conditions for the fuel materials. A detailed description of the different regimes the coolant may exhibit in such a wide variety of scenarios requires a thermal-hydraulic formulation not suitable for inclusion in a fuel performance code. Moreover, a number of reliable, well-known codes already perform this task. Nevertheless, and keeping in mind the purpose of building a code focused on the fuel behavior, a subroutine was developed for the DIONISIO code that performs a simplified analysis of the coolant in a PWR, restricted to the more representative situations, and provides the fuel simulation with the boundary conditions necessary to reproduce accident situations. In the present work this subroutine is described and the results of different comparisons with experimental data and with thermal-hydraulic codes are presented. It is verified that, in spite of its comparative simplicity, the predictions of this module of DIONISIO do not differ significantly from those of the specific, complex codes.
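    The simplest boundary condition such a coolant subroutine can hand to the fuel simulation is the axial coolant temperature from a steady single-phase energy balance, T(z) = T_in + q'z/(ṁ·c_p). The sketch below uses round, illustrative PWR-like numbers; none of them are DIONISIO's.

```python
def coolant_temperature(z, t_in=565.0, q_lin=20e3, mdot=0.3, cp=5.5e3):
    """Axial coolant temperature (K) in a single channel from a steady
    single-phase energy balance: T(z) = T_in + q' * z / (mdot * cp).
    t_in: inlet temperature [K], q_lin: linear power [W/m],
    mdot: channel mass flow [kg/s], cp: specific heat [J/(kg K)].
    All parameter values are illustrative assumptions."""
    return t_in + q_lin * z / (mdot * cp)

outlet = coolant_temperature(3.6)   # K, at the top of a 3.6 m active length
```

    Accident regimes are exactly where this picture breaks down (boiling, degraded heat transfer), which is why the actual subroutine distinguishes between representative coolant regimes rather than using a single formula.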

  3. Development of explosive event scale model testing capability at Sandia's large scale centrifuge facility

    Energy Technology Data Exchange (ETDEWEB)

    Blanchat, T.K.; Davie, N.T.; Calderone, J.J. [and others]


    Geotechnical structures such as underground bunkers, tunnels, and building foundations are subjected to stress fields produced by the gravity load on the structure and/or any overlying strata. These stress fields may be reproduced on a scaled model of the structure by proportionally increasing the gravity field through the use of a centrifuge. This technology can then be used to assess the vulnerability of various geotechnical structures to explosive loading. Applications of this technology include assessing the effectiveness of earth penetrating weapons, evaluating the vulnerability of various structures, counter-terrorism, and model validation. This document describes the development of expertise in scale model explosive testing on geotechnical structures using Sandia's large scale centrifuge facility. This study focused on buried structures such as hardened storage bunkers or tunnels. Data from this study was used to evaluate the predictive capabilities of existing hydrocodes and structural dynamics codes developed at Sandia National Laboratories (such as Pronto/SPH, Pronto/CTH, and ALEGRA). 7 refs., 50 figs., 8 tabs.
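    The scaling argument is short enough to compute directly: a 1/N-scale model spun to an acceleration of N·g reproduces the prototype's self-weight stresses, because σ = ρ·(N·g)·(h/N) = ρ·g·h. The required spin rate follows from ω²r = N·g (the scale factor and arm radius below are example values, not the facility's specifications).

```python
import math

def required_omega(scale_n, radius, g=9.81):
    """Angular speed (rad/s) so that a 1/N-scale model at effective radius
    `radius` experiences N g, matching prototype self-weight stresses."""
    return math.sqrt(scale_n * g / radius)

omega = required_omega(50, 8.0)          # e.g. a 1/50 model on an 8 m arm
rpm = omega * 60.0 / (2.0 * math.pi)     # roughly 75 rpm for these numbers

# Stress equivalence check: model stress at depth h/N under N g equals
# prototype stress at depth h under 1 g (density rho cancels the scale).
rho, h, n = 1800.0, 10.0, 50
assert abs(rho * (n * 9.81) * (h / n) - rho * 9.81 * h) < 1e-6
```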

  4. Capabilities for modelling of conversion processes in LCA

    DEFF Research Database (Denmark)

    Damgaard, Anders; Zarrin, Bahram; Tonini, Davide


    Life cycle assessment was traditionally used for modelling of product design and optimization. This is also seen in the conventional LCA software which is optimized for the modelling of single materials streams of a homogeneous nature that is assembled into a final product. There has therefore been...... be modelled and then integrated into the overall LCA model. This allows for flexible modules which automatically will adjust the material flows it is handling on basis of its chemical information, which can be set for multiple input materials at the same time. A case example of this was carried out for a bio...

  5. Code Shift: Grid Specifications and Dynamic Wind Turbine Models

    DEFF Research Database (Denmark)

    Ackermann, Thomas; Ellis, Abraham; Fortmann, Jens


    Grid codes (GCs) and dynamic wind turbine (WT) models are key tools to allow increasing renewable energy penetration without challenging security of supply. In this article, the state of the art and the further development of both tools are discussed, focusing on the European and North American...

  6. Capabilities For Modelling Of Conversion Processes In Life Cycle Assessment

    DEFF Research Database (Denmark)

    Damgaard, Anders; Zarrin, Bahram; Tonini, Davide

    Life cycle assessment was traditionally used for modelling of product design and optimization. This is also seen in the conventional LCA software, which is optimized for the modelling of single material streams of a homogeneous nature that are assembled into a final product. There has therefore been...... little focus on the chemical composition of the functional flows, as flows in the models have mainly been tracked on a mass basis, since the emphasis was on the function of the product and not the chemical composition of said product. Conversely, in modelling of environmental technologies, such as wastewater...... considering how the biochemical parameters change through a process chain. A good example of this is bio-refinery processes where different residual biomass products are converted through different steps into the final energy product. Here it is necessary to know the stoichiometry of the different products...

  7. On the predictive capabilities of multiphase Darcy flow models

    KAUST Repository

    Icardi, Matteo


    Darcy's law is a widely used model and the limit of its validity is fairly well known. When the flow is sufficiently slow and the porosity relatively homogeneous and low, Darcy's law is the homogenized equation arising from the Stokes and Navier-Stokes equations and depends on a single effective parameter (the absolute permeability). However, when the model is extended to multiphase flows, the assumptions are much more restrictive and less realistic. Therefore it is often used in conjunction with empirical models (such as relative permeability and capillary pressure curves), usually derived from phenomenological speculations and experimental data fitting. In this work, we present the results of a Bayesian calibration of a two-phase flow model, using high-fidelity DNS numerical simulation (at the pore scale) in a realistic porous medium. These reference results have been obtained from a Navier-Stokes solver coupled with an explicit interphase-tracking scheme. The Bayesian inversion is performed on a simplified 1D model in Matlab using an adaptive spectral method. Several data sets are generated and considered to assess the validity of this 1D model.
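
The calibration workflow the abstract describes can be illustrated with a deliberately simple stand-in: a grid-based Bayesian update of a Corey-type relative-permeability exponent against synthetic "reference" data. All names, values, and the forward model here are illustrative assumptions, not the paper's actual models or data.

```python
import math
import random

# Hypothetical sketch: Bayesian calibration of a Corey-type exponent n in
# kr(S) = S**n, standing in for calibrating a 1D two-phase Darcy model
# against high-fidelity pore-scale reference results.
random.seed(0)

def kr(S, n):
    """Corey-type relative permeability (illustrative forward model)."""
    return S ** n

# Synthetic "reference" data playing the role of the DNS results.
n_true, sigma = 2.0, 0.01
sats = [0.1 * i for i in range(1, 10)]
obs = [kr(S, n_true) + random.gauss(0.0, sigma) for S in sats]

def log_likelihood(n):
    # Gaussian measurement-error model with known standard deviation sigma.
    return sum(-0.5 * ((o - kr(S, n)) / sigma) ** 2 for S, o in zip(sats, obs))

# Uniform prior over a grid of candidate exponents; posterior by Bayes' rule.
grid = [1.0 + 0.05 * k for k in range(41)]          # n in [1.0, 3.0]
logpost = [log_likelihood(n) for n in grid]
m = max(logpost)
weights = [math.exp(lp - m) for lp in logpost]      # stabilized exponentials
Z = sum(weights)
posterior = [w / Z for w in weights]

n_map = grid[posterior.index(max(posterior))]       # MAP estimate, near n_true
```

The paper's actual inversion uses an adaptive spectral method on a full 1D two-phase model; this sketch only shows the Bayesian structure of the problem.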

  8. Modelling binary rotating stars by new population synthesis code BONNFIRES

    CERN Document Server

    Lau, Herbert H B; Schneider, Fabian R N


    BONNFIRES, a new generation of population synthesis code, can calculate nuclear reactions, various mixing processes, and binary interactions in a timely fashion. We use this new population synthesis code to study the interplay between binary mass transfer and rotation. We aim to compare theoretical models with observations, in particular the surface nitrogen abundance and rotational velocity. Preliminary results show binary interactions may explain the formation of nitrogen-rich slow rotators and nitrogen-poor fast rotators, but more work needs to be done to estimate whether the observed frequencies of those stars can be matched.

  9. Uncertainty quantification's role in modeling and simulation planning, and credibility assessment through the predictive capability maturity model

    Energy Technology Data Exchange (ETDEWEB)

    Rider, William J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Witkowski, Walter R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mousseau, Vincent Andrew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)


    The importance of credible, trustworthy numerical simulations is obvious, especially when the results are used for making high-consequence decisions. Determining the credibility of such numerical predictions is much more difficult and requires a systematic approach to assessing predictive capability, associated uncertainties, and overall confidence in the computational simulation process for the intended use of the model. This process begins with an evaluation of the computational modeling of the identified, important physics of the simulation for its intended use. This is commonly done through a Phenomena Identification Ranking Table (PIRT). Then an assessment of the evidence basis supporting the ability to computationally simulate these physics can be performed using various frameworks such as the Predictive Capability Maturity Model (PCMM). Several critical activities follow in the areas of code and solution verification, validation, and uncertainty quantification, which are described in detail in the following sections. Here, we introduce the subject matter for general applications, but specifics are given for the failure prediction project. In addition, the first task that must be completed in the verification & validation procedure is to perform a credibility assessment to fully understand the requirements and limitations of the current computational simulation capability for the specific application's intended use. The PIRT and PCMM are tools used at Sandia National Laboratories (SNL) to provide a consistent manner to perform such an assessment. Ideally, all stakeholders should be represented and contribute to performing an accurate credibility assessment. PIRTs and PCMMs are both described briefly below, and the resulting assessments for an example project are given.

  10. Modeling of Anomalous Transport in Tokamaks with FACETS code (United States)

    Pankin, A. Y.; Batemann, G.; Kritz, A.; Rafiq, T.; Vadlamani, S.; Hakim, A.; Kruger, S.; Miah, M.; Rognlien, T.


    The FACETS code, a whole-device integrated modeling code that self-consistently computes plasma profiles for the plasma core and edge in tokamaks, has been recently developed as a part of the SciDAC project for core-edge simulations. A choice of transport models is available in FACETS through the FMCFM interface [1]. Transport models included in FMCFM have specific ranges of applicability, which can limit their use to parts of the plasma. In particular, the GLF23 transport model does not include the resistive ballooning effects that can be important in the tokamak pedestal region and GLF23 typically under-predicts the anomalous fluxes near the magnetic axis [2]. The TGLF and GYRO transport models have similar limitations [3]. A combination of transport models that covers the entire discharge domain is studied using FACETS in a realistic tokamak geometry. Effective diffusivities computed with the FMCFM transport models are extended to the region near the separatrix to be used in the UEDGE code within FACETS. [1] S. Vadlamani et al. (2009), First time-dependent transport simulations using GYRO and NCLASS within FACETS (this meeting). [2] T. Rafiq et al. (2009), Simulation of electron thermal transport in H-mode discharges, submitted to Phys. Plasmas. [3] C. Holland et al. (2008), Validation of gyrokinetic transport simulations using DIII-D core turbulence measurements, Proc. of IAEA FEC (Switzerland, 2008).

  11. Model classification rate control algorithm for video coding

    Institute of Scientific and Technical Information of China (English)


    A model classification rate control method for video coding is proposed. The macro-blocks are classified according to their prediction errors, and different parameters are used in the rate-quantization and distortion-quantization models. The model parameters for each class are calculated from the previous frame of the same type during coding. These models are used to estimate the relations among rate, distortion, and quantization for the current frame. Further steps, such as R-D optimization based quantization adjustment and smoothing of quantization across adjacent macroblocks, are used to improve quality. Experimental results show that the technique is effective and can be realized easily. The method presented in the paper provides a good approach to MPEG and H.264 rate control.
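
As a concrete illustration of the rate-quantization side of such a scheme (a hedged sketch, not the authors' exact models), the classic quadratic model R(Q) = a/Q + b/Q^2 can be fit from previously coded frames of the same type and then inverted to choose a quantizer for a target bit budget:

```python
# Illustrative quadratic rate-quantization model used in MPEG-style rate
# control (the paper's per-class model parameters are analogous but not
# necessarily identical to this form).

def fit_rq(points):
    """Least-squares fit of R = a/Q + b/Q^2 from (Q, R) samples."""
    # The model is linear in the regressors (1/Q, 1/Q^2); solve the
    # 2x2 normal equations directly.
    s11 = s12 = s22 = t1 = t2 = 0.0
    for Q, R in points:
        x1, x2 = 1.0 / Q, 1.0 / Q ** 2
        s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
        t1 += x1 * R;   t2 += x2 * R
    det = s11 * s22 - s12 * s12
    a = (s22 * t1 - s12 * t2) / det
    b = (s11 * t2 - s12 * t1) / det
    return a, b

def quantizer_for_rate(a, b, target_R):
    """Invert R = a/Q + b/Q^2: positive root of R*Q^2 - a*Q - b = 0."""
    disc = a * a + 4.0 * target_R * b
    return (a + disc ** 0.5) / (2.0 * target_R)

# Hypothetical (Q, bits) history from previously coded frames of one class.
history = [(10, 1200.0), (20, 550.0), (40, 262.5)]
a, b = fit_rq(history)
Q = quantizer_for_rate(a, b, target_R=800.0)   # quantizer for an 800-bit budget
```

The history above was generated from a = 10000, b = 20000 exactly, so the fit recovers those values and the chosen Q hits the target rate under the model.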

  12. Finite Element Modeling, Simulation, Tools, and Capabilities at Superform (United States)

    Raman, Hari; Barnes, A. J.


    Over the past thirty years Superform has been a pioneer in the SPF arena, having developed a keen understanding of the process and a range of unique forming techniques to meet varying market needs. Superform’s high-profile list of customers includes Boeing, Airbus, Aston Martin, Ford, and Rolls Royce. One of the more recent additions to Superform’s technical know-how is finite element modeling and simulation. Finite element modeling is a powerful numerical technique which, when applied to SPF, provides a host of benefits, including accurate prediction of strain levels in a part, of the presence of wrinkles, and of pressure cycles optimized for time and part thickness. This paper outlines a brief history of finite element modeling applied to SPF and then reviews some of the modeling tools and techniques that Superform has applied, and continues to apply, to successfully form complex-shaped parts superplastically. The advantages of employing modeling at the design stage are discussed and illustrated with real-world examples.

  13. Computable general equilibrium model fiscal year 2013 capability development report

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    This report documents progress made on continued developments of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC refined the treatment of the labor market and performed tests with the model to examine the properties of the solutions it computes. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine its simulation properties in more detail.

  14. NLTE solar irradiance modeling with the COSI code

    CERN Document Server

    Shapiro, A I; Schoell, M; Haberreiter, M; Rozanov, E


    Context. The solar irradiance is known to change on time scales of minutes to decades, and it is suspected that its substantial fluctuations are partially responsible for climate variations. Aims. We are developing a solar atmosphere code that allows the physical modeling of the entire solar spectrum composed of quiet Sun and active regions. This code is a tool for modeling the variability of the solar irradiance and understanding its influence on Earth. Methods. We exploit further development of the radiative transfer code COSI that now incorporates the calculation of molecular lines. We validated COSI under the conditions of local thermodynamic equilibrium (LTE) against the synthetic spectra calculated with the ATLAS code. The synthetic solar spectra were also calculated in non-local thermodynamic equilibrium (NLTE) and compared to the available measured spectra. In doing so we have defined the main problems of the modeling, e.g., the lack of opacity in the UV part of the spectrum and the inconsistency in...

  15. A semianalytic Monte Carlo code for modelling LIDAR measurements (United States)

    Palazzi, Elisa; Kostadinov, Ivan; Petritoli, Andrea; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Premuda, Margherita; Giovanelli, Giorgio


    LIDAR (LIght Detection and Ranging) is an optical active remote sensing technology with many applications in atmospheric physics. Modelling of LIDAR measurements appears to be a useful approach for evaluating the effects of various environmental variables and scenarios as well as of different measurement geometries and instrumental characteristics. In this regard a Monte Carlo simulation model can provide a reliable answer to these important requirements. A semianalytic Monte Carlo code for modelling LIDAR measurements has been developed at ISAC-CNR. The backscattered laser signal detected by the LIDAR system is calculated in the code taking into account the contributions due to the main atmospheric molecular constituents and aerosol particles through processes of single and multiple scattering. The contributions of molecular absorption and of ground and cloud reflection are evaluated too. The code can perform simulations of both monostatic and bistatic LIDAR systems. To enhance the efficiency of the Monte Carlo simulation, analytical estimates and expected value calculations are performed. Variance-reduction devices (such as forced collision, local forced collision, splitting and Russian roulette) are moreover provided by the code, enabling the user to drastically reduce the variance of the calculation.
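
In the single-scatter limit such a code must reproduce the standard elastic LIDAR equation, P(R) = C · beta(R)/R^2 · exp(−2 ∫ alpha dr). The following numeric sketch (illustrative; not the ISAC-CNR code) evaluates it and checks against the analytic answer for a homogeneous atmosphere:

```python
import math

# Single-scattering elastic LIDAR equation evaluated numerically.
# All profile values below are illustrative assumptions.

def lidar_return(R, beta, alpha, C=1.0, dr=1.0):
    """Backscattered power from range R for profiles beta(r), alpha(r)."""
    # Two-way optical depth via trapezoidal integration of extinction.
    steps = int(R / dr)
    tau = sum(0.5 * (alpha(i * dr) + alpha((i + 1) * dr)) * dr
              for i in range(steps))
    return C * beta(R) / R ** 2 * math.exp(-2.0 * tau)

# Homogeneous atmosphere: analytic result is C*beta/R^2 * exp(-2*alpha*R).
alpha0, beta0 = 1e-4, 1e-6    # extinction [1/m] and backscatter [1/(m sr)]
R = 2000.0                    # range [m]
p = lidar_return(R, lambda r: beta0, lambda r: alpha0)
p_exact = beta0 / R ** 2 * math.exp(-2.0 * alpha0 * R)
```

A semianalytic Monte Carlo model adds multiple scattering and geometry effects on top of this baseline; the variance-reduction devices the abstract lists exist to make that Monte Carlo part affordable.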

  16. The Hadronic Models for Cosmic Ray Physics: the FLUKA Code Solutions

    Energy Technology Data Exchange (ETDEWEB)

    Battistoni, G.; Garzelli, M.V.; Gadioli, E.; Muraro, S.; Sala, P.R.; Fasso, A.; Ferrari, A.; Roesler, S.; Cerutti, F.; Ranft, J.; Pinsky, L.S.; Empl, A.; Pelliccioni, M.; Villari, R.; /INFN, Milan /Milan U. /SLAC /CERN /Siegen U. /Houston U. /Frascati /ENEA, Frascati


    FLUKA is a general purpose Monte Carlo transport and interaction code used for fundamental physics and for a wide range of applications. These include Cosmic Ray Physics (muons, neutrinos, EAS, underground physics), both for basic research and applied studies in space and atmospheric flight dosimetry and radiation damage. A review of the hadronic models available in FLUKA and relevant for the description of cosmic ray air showers is presented in this paper. Recent updates concerning these models are discussed. The FLUKA capabilities in the simulation of the formation and propagation of EM and hadronic showers in the Earth's atmosphere are shown.

  17. Subsurface flow and transport of organic chemicals: an assessment of current modeling capability and priority directions for future research (1987-1995)

    Energy Technology Data Exchange (ETDEWEB)

    Streile, G.P.; Simmons, C.S.


    Theoretical and computer modeling capability for assessing the subsurface movement and fate of organic contaminants in groundwater was examined. This study is particularly concerned with energy-related organic compounds that could enter a subsurface environment and move as components of a liquid phase separate from groundwater. The migration of organic chemicals that exist in an aqueous dissolved state is certainly a part of this more general scenario. However, modeling of the transport of chemicals in aqueous solution has already been the subject of several reviews; hence, this study emphasizes the multiphase scenario. The study was initiated to focus on the important physicochemical processes that control the behavior of organic substances in groundwater systems, to evaluate the theory describing these processes, and to search for and evaluate computer codes that implement models that correctly conceptualize the problem situation. This study is not a code inventory, and no effort was made to identify every available code capable of representing a particular process.

  18. Application of flow network models of SINDA/FLUINT{sup TM} to a nuclear power plant system thermal hydraulic code

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Ji Bum [Institute for Advanced Engineering, Yongin (Korea, Republic of); Park, Jong Woon [Korea Electric Power Research Institute, Taejon (Korea, Republic of)


    In order to enhance the dynamic and interactive simulation capability of a system thermal hydraulic code for nuclear power plants, the applicability of flow network models in SINDA/FLUINT{sup TM} has been tested by modeling a feedwater system and coupling it to DSNP, a system thermal hydraulic simulation code for a pressurized heavy water reactor. The feedwater system is selected since it is one of the most important balance-of-plant systems, with a potential to greatly affect the behavior of the nuclear steam supply system. The flow network model of this feedwater system consists of the condenser, condensate pumps, low and high pressure heaters, deaerator, feedwater pumps, and control valves. This complicated flow network is modeled and coupled to DSNP, and it is tested for several normal and abnormal transient conditions such as turbine load maneuvering, turbine trip, and loss of class IV power. The results show reasonable behavior of the coupled code and also demonstrate good dynamic and interactive simulation capabilities for the several mild transient conditions. It has been found that coupling a system thermal hydraulic code with a flow network code is a proper way of upgrading the simulation capability of DSNP to a mature nuclear plant analyzer (NPA). 5 refs., 10 figs. (Author)
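
A minimal sketch of the kind of lumped flow-network solve involved (illustrative resistances and pressures only; this is not the SINDA/FLUINT API): each link carries a flow w = dp / R, and mass conservation fixes the pressures at the internal nodes.

```python
# Toy linearized flow network: components in series between a fixed
# high-pressure node and a fixed low-pressure node. Values are made up
# for illustration; real feedwater components (pumps, heaters, valves)
# have far richer characteristics.

def series_network(p_in, p_out, resistances):
    """Return the common mass flow and the pressure at each internal node."""
    # Mass conservation in a series chain: the same flow passes every link.
    w = (p_in - p_out) / sum(resistances)
    pressures, p = [], p_in
    for R in resistances[:-1]:
        p -= w * R                     # pressure drop across each component
        pressures.append(p)
    return w, pressures

# Three resistive components in series (illustrative values).
w, nodes = series_network(p_in=5.0e6, p_out=1.0e6,
                          resistances=[2.0e4, 1.0e4, 1.0e4])
```

Coupling such a network to a system code then amounts to exchanging boundary pressures and flows with the system code at each time step.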

  19. The Capabilities-Complexity Model. CALA Report 108 (United States)

    Oosterhof, Albert; Rohani, Faranak; Sanfilippo, Carol; Stillwell, Peggy; Hawkins, Karen


    In assessment, the ability to construct test items that measure a targeted skill is fundamental to validity and alignment. The ability to do the reverse is also important: determining what skill an existing test item measures. This paper presents a model for classifying test items that builds on procedures developed by others, including Bloom…

  20. Improvement of reflood model in RELAP5 code based on sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dong; Liu, Xiaojing; Yang, Yanhua, E-mail:


    Highlights: • Sensitivity analysis is performed on the reflood model of RELAP5. • The selected influential models are discussed and modified. • The modifications are assessed by FEBA experiment and better predictions are obtained. - Abstract: Reflooding is an important and complex process for the safety of nuclear reactors during a loss of coolant accident (LOCA). Accurate prediction of reflooding behavior is one of the challenging tasks for current system code development. RELAP5, a widely used system code, has the capability to simulate this process but with limited accuracy, especially for low inlet flow rate reflooding conditions. Through a preliminary assessment with six FEBA (Flooding Experiments with Blocked Arrays) tests, it is observed that the peak cladding temperature (PCT) is generally underestimated and bundle quench is predicted too early compared to the experimental data. In this paper, the improvement of constitutive models related to reflooding is carried out based on single-parameter sensitivity analysis. The film boiling heat transfer model and the interfacial friction model of dispersed flow are selected as the models most influential on the results of interest. Studies and discussions are then focused on these sensitive models and proper modifications are recommended. The proposed improvements are implemented in the RELAP5 code and assessed against the FEBA experiment. Better agreement between calculations and measured data for both cladding temperature and quench time is obtained.
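
The single-parameter ("one-at-a-time") sensitivity screening described above can be sketched as follows, with a toy algebraic surrogate standing in for RELAP5's peak cladding temperature response; the coefficients and model names are invented for illustration only.

```python
# One-at-a-time sensitivity screening: perturb one constitutive-model
# multiplier at a time and rank models by normalized sensitivity.

def pct_surrogate(film_boiling_htc, interfacial_friction, grid_loss):
    # Hypothetical smooth response standing in for a code run: PCT drops
    # as heat transfer improves. Coefficients are illustrative.
    return 1100.0 - 180.0 * film_boiling_htc \
                  - 60.0 * interfacial_friction \
                  - 10.0 * grid_loss

def normalized_sensitivity(f, base, index, rel_step=0.1):
    """S_i = (dY/Y) / (dX_i/X_i) via a central difference on multiplier i."""
    up, dn = list(base), list(base)
    up[index] *= 1.0 + rel_step
    dn[index] *= 1.0 - rel_step
    y0 = f(*base)
    return (f(*up) - f(*dn)) / y0 / (2.0 * rel_step)

base = [1.0, 1.0, 1.0]   # multipliers on each constitutive model
names = ["film boiling", "interfacial friction", "grid loss"]
S = [normalized_sensitivity(pct_surrogate, base, i) for i in range(3)]
ranking = sorted(zip(names, S), key=lambda t: abs(t[1]), reverse=True)
```

In the paper's workflow, each surrogate evaluation corresponds to a full RELAP5 run of a FEBA test; the ranking then tells the analyst which constitutive models deserve modification.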

  1. Enhancements to the SSME transfer function modeling code (United States)

    Irwin, R. Dennis; Mitchell, Jerrel R.; Bartholomew, David L.; Glenn, Russell D.


    This report details the results of a one year effort by Ohio University to apply the transfer function modeling and analysis tools developed under NASA Grant NAG8-167 (Irwin, 1992), (Bartholomew, 1992) to attempt the generation of Space Shuttle Main Engine High Pressure Turbopump transfer functions from time domain data. In addition, new enhancements to the transfer function modeling codes which enhance the code functionality are presented, along with some ideas for improved modeling methods and future work. Section 2 contains a review of the analytical background used to generate transfer functions with the SSME transfer function modeling software. Section 2.1 presents the 'ratio method' developed for obtaining models of systems that are subject to single unmeasured excitation sources and have two or more measured output signals. Since most of the models developed during the investigation use the Eigensystem Realization Algorithm (ERA) for model generation, Section 2.2 presents an introduction of ERA, and Section 2.3 describes how it can be used to model spectral quantities. Section 2.4 details the Residue Identification Algorithm (RID) including the use of Constrained Least Squares (CLS) and Total Least Squares (TLS). Most of this information can be found in the report (and is repeated for convenience). Section 3 chronicles the effort of applying the SSME transfer function modeling codes to the a51p394.dat and a51p1294.dat time data files to generate transfer functions from the unmeasured input to the 129.4 degree sensor output. Included are transfer function modeling attempts using five methods. The first method is a direct application of the SSME codes to the data files and the second method uses the underlying trends in the spectral density estimates to form transfer function models with less clustering of poles and zeros than the models obtained by the direct method. 
In the third approach, the time data is low pass filtered prior to the modeling process in an

  2. Video Coding and Modeling with Applications to ATM Multiplexing (United States)

    Nguyen, Hien

    A new vector quantization (VQ) coding method based on optimized concentric shell partitioning of the image space is proposed. The advantages of using the concentric shell partition vector quantizer (CSPVQ) are that it is very fast and the image patterns found in each different subspace can be more effectively coded by using a codebook that is best matched to that particular subspace. For intra-frame coding, the CSPVQ is shown to have the same performance, if not better, than the optimized gain-shape VQ in terms of encoded picture quality, while it definitely surpasses the gain-shape VQ in terms of computational complexity. A variable bit rate (VBR) video coder for moving video is then proposed where the idea of CSPVQ is coupled with the idea of regular quadtree decomposition to further reduce the bit rate of the encoded picture sequence. The usefulness of a quadtree coding technique comes from the fact that different homogeneous regions occurring within an image can be compactly represented by various nodes in a quadtree. It is found that this image representation technique is particularly useful in providing a low bit rate video encoder without compromising image quality when it is used in conjunction with the CSPVQ. The characteristics of the VBR coder's output as applied to ATM transmission are investigated. Three video models are used to study the performance of the ATM multiplexer. These models are the auto regressive (AR) model, the auto regressive hidden Markov model (AR-HMM), and the fluid flow uniform arrival and service (UAS) model. The AR model is allowed to have arbitrary order and is used to model a video source which has a constant amount of motion, that is, a stationary video source. The AR-HMM is a more general video model which is based on the idea of the auto regressive hidden Markov chain formulated by Baum and is used to describe highly non-stationary sources. Hence, it is expected that the AR-HMM model may also be used to represent a video
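
A first-order AR source model of the kind referenced above can be sketched in a few lines; the parameters here are illustrative, not fitted to the thesis data, and the clipping at zero is a common practical convention rather than part of the formal model.

```python
import random

# First-order autoregressive model for a VBR video bit-rate trace:
#   X[n] = mean + a*(X[n-1] - mean) + b*w[n],  w ~ N(0, 1).
# Parameter values are illustrative assumptions.
random.seed(1)

A, B, MEAN = 0.88, 0.14, 0.52   # correlation, innovation gain, mean rate

def ar_trace(n, a=A, b=B, mean=MEAN):
    """Generate n samples of a stationary AR(1) bit-rate process."""
    x, out = mean, []
    for _ in range(n):
        x = mean + a * (x - mean) + b * random.gauss(0.0, 1.0)
        out.append(max(x, 0.0))  # bit rates cannot be negative
    return out

trace = ar_trace(10000)
avg = sum(trace) / len(trace)    # sample mean, close to MEAN
```

Feeding such traces into a multiplexer queue model is the standard way to estimate buffer occupancy and loss for ATM transmission; the AR-HMM extends this by letting the AR parameters switch with a hidden Markov state to capture scene changes.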

  3. Discovering binary codes for documents by learning deep generative models. (United States)

    Hinton, Geoffrey; Salakhutdinov, Ruslan


    We describe a deep generative model in which the lowest layer represents the word-count vector of a document and the top layer represents a learned binary code for that document. The top two layers of the generative model form an undirected associative memory and the remaining layers form a belief net with directed, top-down connections. We present efficient learning and inference procedures for this type of generative model and show that it allows more accurate and much faster retrieval than latent semantic analysis. By using our method as a filter for a much slower method called TF-IDF we achieve higher accuracy than TF-IDF alone and save several orders of magnitude in retrieval time. By using short binary codes as addresses, we can perform retrieval on very large document sets in a time that is independent of the size of the document set using only one word of memory to describe each document.
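
Retrieval with short binary codes as memory addresses can be sketched as follows (a minimal illustration of the idea, not the authors' implementation): documents sharing a code land in one bucket, and near matches are found by probing all addresses within a small Hamming radius, in time independent of collection size.

```python
from itertools import combinations

# "Semantic hashing" style lookup: binary codes index a hash table, and
# neighbours are retrieved by flipping up to `radius` bits of the query
# code. The example codes below are made up for illustration.

def bucket_index(docs):
    """docs: mapping doc_id -> binary code (tuple of 0/1 bits)."""
    table = {}
    for doc_id, code in docs.items():
        table.setdefault(code, []).append(doc_id)
    return table

def probe(table, code, radius=1):
    """All documents whose code is within the given Hamming distance."""
    hits = list(table.get(code, []))
    bits = len(code)
    for r in range(1, radius + 1):
        for flips in combinations(range(bits), r):
            probed = tuple(b ^ 1 if i in flips else b
                           for i, b in enumerate(code))
            hits.extend(table.get(probed, []))
    return hits

docs = {"d1": (0, 1, 1, 0), "d2": (0, 1, 1, 1), "d3": (1, 0, 0, 1)}
table = bucket_index(docs)
```

The number of probed addresses depends only on the code length and radius, not on how many documents are stored, which is the source of the constant-time retrieval claim.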

  4. Using cryptology models for protecting PHP source code (United States)

    Jevremović, Aleksandar; Ristić, Nenad; Veinović, Mladen


    Protecting PHP scripts from unwanted use, copying and modification is a big issue today. Existing solutions on the source code level mostly work as obfuscators; they are free, but they do not provide any serious protection. Solutions that encode opcode are more secure, but they are commercial and require a closed-source proprietary extension of the PHP interpreter. Additionally, encoded opcode is not compatible with future versions of interpreters, which implies re-buying encoders from the authors. Finally, if the extension source code is compromised, all scripts encoded with that solution are compromised too. In this paper, we present a new model for a free and open-source PHP script protection solution. The protection level provided by the proposed solution is equal to the protection level of commercial solutions. The model is based on conclusions drawn from the use of standard cryptology models to analyze the strengths and weaknesses of the existing solutions, when script protection is seen as a secure communication channel in the cryptology sense.

  5. A spectral synthesis code for rapid modelling of supernovae

    CERN Document Server

    Kerzendorf, Wolfgang E


    We present TARDIS - an open-source code for rapid spectral modelling of supernovae (SNe). Our goal is to develop a tool that is sufficiently fast to allow exploration of the complex parameter spaces of models for SN ejecta. This can be used to analyse the growing number of high-quality SN spectra being obtained by transient surveys. The code uses Monte Carlo methods to obtain a self-consistent description of the plasma state and to compute a synthetic spectrum. It has a modular design to facilitate the implementation of a range of physical approximations that can be compared to assess both accuracy and computational expediency. This will allow users to choose a level of sophistication appropriate for their application. Here, we describe the operation of the code and make comparisons with alternative radiative transfer codes of differing levels of complexity (SYN++, PYTHON, and ARTIS). We then explore the consequence of adopting simple prescriptions for the calculation of atomic excitation, focussing on four sp...

  6. Transform Coding for Point Clouds Using a Gaussian Process Model. (United States)

    De Queiroz, Ricardo; Chou, Philip A


    We propose using stationary Gaussian Processes (GPs) to model the statistics of the signal on points in a point cloud, which can be considered samples of a GP at the positions of the points. Further, we propose using Gaussian Process Transforms (GPTs), which are Karhunen-Loève transforms of the GP, as the basis of transform coding of the signal. Focusing on colored 3D point clouds, we propose a transform coder that breaks the point cloud into blocks, transforms the blocks using GPTs, and entropy codes the quantized coefficients. The GPT for each block is derived from both the covariance function of the GP and the locations of the points in the block, which are separately encoded. The covariance function of the GP is parameterized, and its parameters are sent as side information. The quantized coefficients are sorted by eigenvalues of the GPTs, binned, and encoded using an arithmetic coder with bin-dependent Laplacian models whose parameters are also sent as side information. Results indicate that transform coding of 3D point cloud colors using the proposed GPT and entropy coding achieves superior compression performance on most of our data sets.
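
For intuition, here is a minimal sketch of a GPT on a two-point block, assuming a squared-exponential covariance (the paper parameterizes the covariance and sends its parameters as side information; the kernel choice and all values below are illustrative). For two points the covariance matrix is [[v, c], [c, v]], whose Karhunen-Loève basis is analytic: the sum and difference directions, with eigenvalues v + c and v − c.

```python
import math

# Two-point Gaussian Process Transform with an assumed squared-exponential
# covariance. Correlated signal values on nearby points compact into the
# first (sum) coefficient, which is what makes the transform useful for
# coding.

def gpt_2pt(p0, p1, variance=1.0, length=0.5):
    """Eigenvalues and orthonormal KLT basis for a 2-point block."""
    d2 = sum((a - b) ** 2 for a, b in zip(p0, p1))
    c = variance * math.exp(-d2 / (2.0 * length ** 2))
    s = 1.0 / math.sqrt(2.0)
    evals = [variance + c, variance - c]   # descending, since c >= 0
    basis = [[s, s], [s, -s]]              # rows are eigenvectors
    return evals, basis

def transform(basis, x):
    """Apply the (symmetric orthogonal) basis; it is its own inverse."""
    return [sum(r * v for r, v in zip(row, x)) for row in basis]

# Similar colors on two nearby points -> energy compacts into coeff 0.
evals, basis = gpt_2pt((0.0, 0.0, 0.0), (0.1, 0.0, 0.0))
coeffs = transform(basis, [0.80, 0.82])
recon = transform(basis, coeffs)           # perfect reconstruction
```

For blocks of many points the basis comes from a numerical eigendecomposition of the full covariance matrix evaluated at the point locations, but the structure is the same.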

  7. The 2010 fib Model Code for Structural Concrete: A new approach to structural engineering

    NARCIS (Netherlands)

    Walraven, J.C.; Bigaj-Van Vliet, A.


    The fib Model Code is a recommendation for the design of reinforced and prestressed concrete which is intended to be a guiding document for future codes. Model Codes have been published before, in 1978 and 1990. The draft for fib Model Code 2010 was published in May 2010. The most important new elem

  8. Numerical modelling of spallation in 2D hydrodynamics codes (United States)

    Maw, J. R.; Giles, A. R.


    A model for spallation based on the void growth model of Johnson has been implemented in 2D Lagrangian and Eulerian hydrocodes. The model has been extended to treat complete separation of material when voids coalesce and to describe the effects of elevated temperatures and melting. The capabilities of the model are illustrated by comparison with data from explosively generated spall experiments. Particular emphasis is placed on the prediction of multiple spall effects in weak, low melting point, materials such as lead. The correlation between the model predictions and observations on the strain rate dependence of spall strength is discussed.

  9. Developing Materials Processing to Performance Modeling Capabilities and the Need for Exascale Computing Architectures (and Beyond)

    Energy Technology Data Exchange (ETDEWEB)

    Schraad, Mark William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Physics and Engineering Models; Luscher, Darby Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Advanced Simulation and Computing


    Additive Manufacturing techniques are presenting the Department of Energy and the NNSA Laboratories with new opportunities to consider novel component production and repair processes, and to manufacture materials with tailored response and optimized performance characteristics. Additive Manufacturing technologies already are being applied to primary NNSA mission areas, including Nuclear Weapons. These mission areas are adapting to these new manufacturing methods because of potential advantages, such as smaller manufacturing footprints, reduced needs for specialized tooling, an ability to embed sensing, novel part repair options, an ability to accommodate complex geometries, and lighter weight materials. To realize the full potential of Additive Manufacturing as a game-changing technology for the NNSA’s national security missions, however, significant progress must be made in several key technical areas. In addition to advances in engineering design, process optimization and automation, and accelerated feedstock design and manufacture, significant progress must be made in modeling and simulation. First and foremost, a more mature understanding of the process-structure-property-performance relationships must be developed. Because Additive Manufacturing processes change the nature of a material’s structure below the engineering scale, new models are required to predict materials response across the spectrum of relevant length scales, from the atomistic to the continuum. New diagnostics will be required to characterize materials response across these scales. And not just models, but advanced algorithms, next-generation codes, and advanced computer architectures will be required to complement the associated modeling activities. Based on preliminary work in each of these areas, a strong argument for the need for Exascale computing architectures can be made, if a legitimate predictive capability is to be developed.

  10. Improvement of Basic Fluid Dynamics Models for the COMPASS Code (United States)

    Zhang, Shuai; Morita, Koji; Shirakawa, Noriyuki; Yamamoto, Yuichi

    The COMPASS code is a new next-generation safety analysis code, based on the moving particle semi-implicit (MPS) method, that provides local information for various key phenomena in core disruptive accidents of sodium-cooled fast reactors. In this study, the basic fluid dynamics models of the COMPASS code were improved and verified with fundamental test calculations. A fully implicit pressure solution algorithm was introduced to improve the numerical stability of MPS simulations. With a newly developed free-surface model, the numerical difficulty caused by poor pressure solutions is overcome by involving free-surface particles in the pressure Poisson equation. In addition, the applicability of the MPS method to interactions between a fluid and multiple solid bodies was investigated in comparison with dam-break experiments with solid balls. It was found that the PISO algorithm and the free-surface model make simulations with the passively moving solid model numerically stable. The characteristic behavior of the solid balls was successfully reproduced by the present numerical simulations.
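    The free-surface treatment described above can be sketched in one dimension: free-surface particles stay in the pressure Poisson system as zero-pressure Dirichlet rows while interior particles satisfy the discretized Laplacian. This is a minimal illustrative solve, not the COMPASS implementation; the grid, source term, and Gauss-Seidel iteration are assumptions.

```python
def solve_pressure_poisson(n, src, free_surface, dx=1.0, tol=1e-10, max_iter=20000):
    """Gauss-Seidel solve of d2p/dx2 = src on n nodes, p = 0 at both ends.
    Nodes listed in free_surface are kept in the system as Dirichlet rows
    with p = 0, the idea behind the free-surface model described above."""
    p = [0.0] * n
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(1, n - 1):
            if i in free_surface:
                p[i] = 0.0  # free-surface particle: zero-pressure Dirichlet row
                continue
            new = 0.5 * (p[i - 1] + p[i + 1] - dx * dx * src[i])
            max_change = max(max_change, abs(new - p[i]))
            p[i] = new
        if max_change < tol:
            break
    return p
```

    With a uniform source term and no free-surface nodes, the solve reproduces the exact parabolic pressure profile; marking a node as free surface pins its pressure to zero.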

  11. New Mechanical Model for the Transmutation Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Gregory K. Miller


    A new mechanical model has been developed for implementation into the TRU fuel performance code. The new model differs from the existing FRAPCON 3 model, which it is intended to replace, in that it will include structural deformations (elasticity, plasticity, and creep) of the fuel. Also, the plasticity algorithm is based on the “plastic strain–total strain” approach, which should allow for more rapid and assured convergence. The model treats three situations relative to interaction between the fuel and cladding: (1) an open gap between the fuel and cladding, such that there is no contact, (2) contact between the fuel and cladding where the contact pressure is below a threshold value, such that axial slippage occurs at the interface, and (3) contact between the fuel and cladding where the contact pressure is above a threshold value, such that axial slippage is prevented at the interface. The first stage of development of the model included only the fuel. In this stage, results obtained from the model were compared with those obtained from finite element analysis using ABAQUS on a problem involving elastic, plastic, and thermal strains. Results from the two analyses showed essentially exact agreement through both loading and unloading of the fuel. After the cladding and fuel/clad contact were added, the model demonstrated expected behavior through all potential phases of fuel/clad interaction, and convergence was achieved without difficulty in all plastic analyses performed. The code is currently in stand-alone form. Prior to implementation into the TRU fuel performance code, creep strains will have to be added to the model. The model will also have to be verified against an ABAQUS analysis that involves contact between the fuel and cladding.
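    The three fuel/cladding interaction situations enumerated above reduce to a simple state classification. A minimal sketch, with hypothetical argument names and threshold semantics:

```python
def fuel_clad_interaction(gap, contact_pressure, slip_threshold):
    """Classify the fuel/cladding interaction state described above.
    gap > 0 means an open gap; otherwise the contact pressure relative to
    the slip threshold decides whether axial slippage occurs. All names
    and units are illustrative, not those of the TRU code."""
    if gap > 0.0:
        return "open gap"             # (1) no contact
    if contact_pressure < slip_threshold:
        return "contact, axial slip"  # (2) slippage at the interface
    return "contact, locked"          # (3) slippage prevented
```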

  12. Galactic Cosmic Ray Event-Based Risk Model (GERM) Code (United States)

    Cucinotta, Francis A.; Plante, Ianik; Ponomarev, Artem L.; Kim, Myung-Hee Y.


    This software describes the transport and energy deposition of galactic cosmic rays passing through astronaut tissues during space travel, or of heavy ion beams in patients in cancer therapy. Space radiation risk is a probability distribution, and time-dependent biological events must be accounted for in the physical description of space radiation transport in tissues and cells. A stochastic model can calculate the probability density directly, without unverified assumptions about the shape of the probability density function. Prior transport codes calculate the average flux and dose of particles behind spacecraft and tissue shielding. Because of the signaling times for activation and relaxation in the cell and tissue, a transport code must describe the temporal and microspatial density functions needed to correlate DNA and oxidative damage with non-targeted effects such as bystander signaling; these effects are ignored or cannot be represented in prior codes. The GERM code provides scientists with data interpretation of experiments; modeling of the beam line, the shielding of target samples, and sample holders; and estimation of the basic physical and biological outputs of their experiments. For mono-energetic ion beams, basic physical and biological properties are calculated for a selected ion type, such as kinetic energy, mass, charge number, absorbed dose, or fluence. Evaluated quantities are linear energy transfer (LET), range (R), absorption and fragmentation cross-sections, and the probability of nuclear interactions after 1 or 5 cm of water-equivalent material. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution of hits for a specified cellular area, cell survival curves, and DNA damage yields per cell. The GERM code also calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle in a selected material. The GERM code makes the numerical estimates of basic
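    One of the biophysical quantities mentioned, the Poisson distribution of ion traversals for a specified cellular area, can be sketched directly. Units and parameter values here are illustrative assumptions, not GERM's actual interface:

```python
import math

def hit_probability(fluence, cell_area, k):
    """Poisson probability that a cell of the given area is traversed by
    exactly k ions. Assumed units: fluence in ions/um^2, area in um^2,
    so the mean number of traversals is fluence * area."""
    mean = fluence * cell_area
    return math.exp(-mean) * mean ** k / math.factorial(k)
```

    At a fluence of 0.05 ions/um^2 over a 100 um^2 nucleus, the mean is 5 traversals, and the distribution peaks near that value.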

  13. An integrated PHY-MAC analytical model for IEEE 802.15.7 VLC network with MPR capability (United States)

    Yu, Hai-feng; Chi, Xue-fen; Liu, Jian


    Because collisions caused by hidden terminals are particularly serious due to the narrow beams of optical devices, multi-packet reception (MPR) is introduced to mitigate collisions in the IEEE 802.15.7 visible light communication (VLC) system. To explore the impact of MPR on system performance and investigate the interaction between the physical (PHY) layer and the media access control (MAC) layer, a three-dimensional (3D) integrated PHY-MAC analytical model of carrier sense multiple access/collision avoidance (CSMA/CA) is established for the VLC system based on Markov chain theory, in which MPR is implemented through the use of orthogonal code sequences. Throughput is derived to evaluate the performance of a VLC system with MPR capability under an imperfect optical channel. The results can be used for the performance optimization of a VLC system with MPR capability.
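    The benefit of MPR can be illustrated with a much simpler abstraction than the paper's 3D Markov chain: if a receiver can decode up to k simultaneous packets, the per-slot success probability becomes a binomial tail sum. The node count and transmit probability below are assumptions for illustration:

```python
from math import comb

def mpr_success_probability(n, p_tx, k):
    """Probability that a slot is successful when the receiver can decode
    up to k simultaneous packets (multi-packet reception). n contending
    nodes each transmit independently with probability p_tx. This is a toy
    slotted-access abstraction, not the 802.15.7 CSMA/CA chain itself."""
    return sum(comb(n, i) * p_tx ** i * (1 - p_tx) ** (n - i)
               for i in range(1, k + 1))
```

    With k equal to the number of nodes, every nonempty slot succeeds; raising k from 1 to 2 already recovers the slots lost to two-packet collisions.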

  14. Mercury + VisIt: Integration of a Real-Time Graphical Analysis Capability into a Monte Carlo Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, M J; Procassini, R J; Joy, K I


    Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry, prior to running the transport calculation, can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the tally results as the simulation is running.

  15. Direct containment heating models in the CONTAIN code

    Energy Technology Data Exchange (ETDEWEB)

    Washington, K.E.; Williams, D.C.


    The potential exists in a nuclear reactor core melt severe accident for molten core debris to be dispersed under high pressure into the containment building. If this occurs, the set of phenomena that result in the transfer of energy to the containment atmosphere and its surroundings is referred to as direct containment heating (DCH). Because of the potential for DCH to lead to early containment failure, the U.S. Nuclear Regulatory Commission (USNRC) has sponsored an extensive research program consisting of experimental, analytical, and risk integration components. An important element of the analytical research has been the development and assessment of direct containment heating models in the CONTAIN code. This report documents the DCH models in the CONTAIN code for representing debris transport, trapping, chemical reactions, and heat transfer from debris to the containment atmosphere and surroundings. The descriptions include the governing equations and the input instructions in CONTAIN unique to performing DCH calculations. Modifications made to the combustion models in CONTAIN for representing the combustion of DCH-produced and pre-existing hydrogen under DCH conditions are also described, as are the input table options for representing the discharge of debris from the RPV and the entrainment phase of the DCH process. A sample calculation is presented to demonstrate the functionality of the models. The results show that reasonable behavior is obtained when the models are used to predict the sixth Zion-geometry integral effects test at 1/10th scale.

  16. Assessment of wall condensation model in the presence of noncondensable gas for the SPACE code

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Hyuk; Kim, Byong Jae; Lee, Seung Wook; Kim, Kyung Doo [KAERI, Daejeon (Korea, Republic of); Cho, Hyung Kyu [Seoul National University, Seoul (Korea, Republic of)


    In many postulated light water reactor accidents, the vapor present contains noncondensable (NC) gases. NC gases reduce the heat transfer and condensation rates even when they are present in the bulk vapor in only small amounts. A large number of analytical and experimental studies have been performed to understand the characteristics of condensation heat transfer in the presence of NC gases. The SPACE code, which has been developed since 2006 as a thermal-hydraulic system analysis code, also has the capability to analyze wall condensation with NC gases. To assess the model, three experiments are considered: the COPAIN test, the University of Wisconsin condensation test, and the KAIST reflux condensation test. The Colburn-Hougen model has been widely used in thermal-hydraulic system codes for the wall condensation problem in the presence of NC gases; however, we noticed a mistake in the derived equation used. The modified Colburn-Hougen model was assessed by validation against these experiments. Comparison of the SPACE results with the experimental data shows that the modified Colburn-Hougen model simulates wall condensation heat transfer more precisely, and the calculated results agree better with the data. In particular, the calculated heat flux and vapor mass flux for the higher air mass fraction cases increase and show better agreement with the experimental data.
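    The diffusion-layer idea behind the Colburn-Hougen model can be sketched as a condensation mass flux driven by the logarithmic ratio of the noncondensable mole fractions at the liquid-gas interface and in the bulk. This is an illustrative form with assumed parameters, not the SPACE implementation:

```python
import math

def condensation_mass_flux(h_m, rho, x_nc_bulk, x_nc_iface):
    """Diffusion-layer (Colburn-Hougen type) condensation mass flux.
    h_m: mass transfer coefficient, rho: gas mixture density,
    x_nc_bulk / x_nc_iface: noncondensable mole fractions in the bulk and
    at the interface (NC gas accumulates at the interface, x_nc_iface >
    x_nc_bulk). All values are illustrative assumptions."""
    return h_m * rho * math.log(x_nc_iface / x_nc_bulk)
```

    Raising the bulk NC fraction toward the interface value shrinks the log driving force, which is how even a small amount of air suppresses the condensation rate.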

  17. An Evaluating Model for Enterprise's Innovation Capability Based on BP Neural Network

    Institute of Scientific and Technical Information of China (English)

    HU Wei-qiang; WANG Li-xin


    To meet the challenge of the knowledge-based economy in the 21st century, scientifically evaluating innovation capability is important for Chinese enterprises seeking to strengthen their international competence and acquire long-term competitive advantage. In this article, based on a description of the concept and structure of an enterprise's innovation capability, an evaluation index system for innovation capability is established using the Analytic Hierarchy Process (AHP). An evaluation model based on a Back Propagation (BP) neural network is then put forward, which provides theoretical guidance for scientifically evaluating the innovation capability of Chinese enterprises.
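    A minimal sketch of the BP evaluation idea (not the authors' model): AHP-weighted index scores are mapped to a capability rating by a tiny sigmoid network trained with plain backpropagation. The data, the 3-2-1 architecture, and the learning rate are all illustrative assumptions:

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# toy data: AHP-weighted index scores in [0, 1] -> capability rating
data = [([0.9, 0.8, 0.85], 0.9), ([0.2, 0.1, 0.3], 0.1),
        ([0.7, 0.6, 0.8], 0.8), ([0.3, 0.4, 0.2], 0.3)]

w1 = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1.0, 1.0) for _ in range(2)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j]) for j in range(2)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(2)) + b2)
    return h, y

lr = 1.0
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        d_out = (y - t) * y * (1.0 - y)               # output delta, squared error
        for j in range(2):
            d_hid = d_out * w2[j] * h[j] * (1.0 - h[j])  # hidden delta (old w2)
            w2[j] -= lr * d_out * h[j]
            for i in range(3):
                w1[j][i] -= lr * d_hid * x[i]
            b1[j] -= lr * d_hid
        b2 -= lr * d_out
```

    After training, an enterprise with uniformly high index scores receives a higher predicted rating than one with low scores.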

  18. Improved solidification influence modelling for Eulerian fuel-coolant interaction codes

    Energy Technology Data Exchange (ETDEWEB)

    Ursic, Mitja, E-mail: mitja.ursic@ijs.s [Jozef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Leskovar, Matjaz; Mavko, Borut [Jozef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia)


    considered as an important property, because it enables the correct prediction of the amount of droplets participating in the fine fragmentation process during the explosion phase. Second, the heat flux from the droplet interior to the surface was considered as an important feature, because it makes it possible to improve the determination of the surface temperature and reflects the history of the droplet's cooling. The last objective was to implement the improved solidification influence modelling into the Eulerian code MC3D. The first demonstrative simulations with the implemented modelling are promising and show improvements in the simulation capability.

  19. Nuclear Energy Advanced Modeling and Simulation (NEAMS) Waste Integrated Performance and Safety Codes (IPSC) : FY10 development and integration.

    Energy Technology Data Exchange (ETDEWEB)

    Criscenti, Louise Jacqueline; Sassani, David Carl; Arguello, Jose Guadalupe, Jr.; Dewers, Thomas A.; Bouchard, Julie F.; Edwards, Harold Carter; Freeze, Geoffrey A.; Wang, Yifeng; Schultz, Peter Andrew


    This report describes the progress in fiscal year 2010 in developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. Waste IPSC activities in fiscal year 2010 focused on specifying a challenge problem to demonstrate proof of concept, developing a verification and validation plan, and performing an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and to develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. This year-end progress report documents the FY10 status of acquisition, development, and integration of thermal-hydrologic-chemical-mechanical (THCM) code capabilities, frameworks, and enabling tools and infrastructure.

  20. Benchmarking of computer codes and approaches for modeling exposure scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, R.R. [EG and G Idaho, Inc., Idaho Falls, ID (United States); Rittmann, P.D.; Wood, M.I. [Westinghouse Hanford Co., Richland, WA (United States); Cook, J.R. [Westinghouse Savannah River Co., Aiken, SC (United States)


    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.

  1. Dynamic Model on the Transmission of Malicious Codes in Network

    Directory of Open Access Journals (Sweden)

    Bimal Kumar Mishra


    Full Text Available This paper introduces a differential-susceptibility e-epidemic model, S_iIR (susceptible class 1 for viruses (S_1), susceptible class 2 for worms (S_2), susceptible class 3 for Trojan horses (S_3), infectious (I), recovered (R)), for the transmission of malicious codes in a computer network. We derive the formula for the reproduction number (R_0) to study the spread of malicious codes in the network. We show that the infection-free equilibrium is globally asymptotically stable when the reproduction number is less than one, and that the endemic equilibrium is locally asymptotically stable. An analysis has also been made of the effect of antivirus software on the infectious nodes. Numerical methods are employed to solve and simulate the system of equations developed.
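    The qualitative role of the reproduction number can be illustrated with a single-susceptible-class SIR reduction of such a model, integrated by forward Euler; beta (infection rate) and gamma (recovery rate) below are assumed values, not fitted parameters:

```python
def simulate_sir(beta, gamma, s0=0.99, i0=0.01, dt=0.01, steps=20000):
    """Forward-Euler integration of a single-class SIR model for
    malicious-code spread. R0 = beta / gamma decides whether the
    infection dies out (R0 < 1) or produces an outbreak (R0 > 1)."""
    s, i, r = s0, i0, 0.0
    for _ in range(steps):
        new_inf = beta * s * i * dt   # S -> I transitions this step
        new_rec = gamma * i * dt      # I -> R transitions this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r
```

    With beta = 0.5 and gamma = 1 (R0 = 0.5) the infectious fraction decays to zero; with beta = 2 and gamma = 1 (R0 = 2) most of the network is eventually infected and recovers.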

  2. Finite element code development for modeling detonation of HMX composites (United States)

    Duran, Adam V.; Sundararaghavan, Veera


    In this work, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable, efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well-calibrated equations of state for the solid unreacted material and the gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Grüneisen equation of state, calibrated from experiments, was employed for the unreacted HMX. The JWL form was used to model the EOS of the gaseous reaction products. It is assumed that the unreacted explosive and the reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy were computed using the rule of mixtures. An Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that yields a non-oscillatory stabilized scheme at the shock front. The FE model was validated using analytical solutions for the Sod shock tube and ZND strong detonation models. Benchmark problems are presented for geometries in which a single HMX crystal is subjected to a shock condition.
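    The JWL product EOS mentioned above has a standard closed form; a sketch with illustrative HMX-like coefficients (assumed for demonstration, not the paper's calibration):

```python
import math

# JWL equation of state for gaseous reaction products:
#   p(V, E) = A (1 - w/(R1 V)) exp(-R1 V)
#           + B (1 - w/(R2 V)) exp(-R2 V) + w E / V
# with V the relative volume and E the internal energy per unit
# initial volume. Coefficients below are illustrative values in GPa.
A, B = 778.3, 7.071
R1, R2, w = 4.2, 1.0, 0.3
E0 = 10.5

def jwl_pressure(V, E=E0):
    return (A * (1 - w / (R1 * V)) * math.exp(-R1 * V)
            + B * (1 - w / (R2 * V)) * math.exp(-R2 * V)
            + w * E / V)
```

    As the products expand (V increasing), the two exponential terms die off and the pressure decays toward the ideal-gas-like w*E/V tail.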

  3. A Mutation Model from First Principles of the Genetic Code. (United States)

    Thorvaldsen, Steinar


    The paper presents a neutral Codons Probability Mutations (CPM) model of molecular evolution and genetic decay of an organism. The CPM model uses a Markov process with a 20-dimensional state space of probability distributions over amino acids. The transition matrix of the Markov process includes the mutation rate and those single point mutations compatible with the genetic code. This is an alternative to the standard Point Accepted Mutation (PAM) and BLOcks of amino acid SUbstitution Matrix (BLOSUM). Genetic decay is quantified as a similarity between the amino acid distribution of proteins from a (group of) species on one hand, and the equilibrium distribution of the Markov chain on the other. Amino acid data for the eukaryote, bacterium, and archaea families are used to illustrate how both the CPM and PAM models predict their genetic decay towards the equilibrium value of 1. A family of bacteria is studied in more detail. It is found that warm environment organisms on average have a higher degree of genetic decay compared to those species that live in cold environments. The paper addresses a new codon-based approach to quantify genetic decay due to single point mutations compatible with the genetic code. The present work may be seen as a first approach to use codon-based Markov models to study how genetic entropy increases with time in an effectively neutral biological regime. Various extensions of the model are also discussed.
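    The genetic-decay measure rests on the stationary (equilibrium) distribution of the mutation Markov chain. A sketch using a 3-state stand-in for the 20-dimensional amino-acid state space; the matrix entries are invented for illustration:

```python
# Row-stochastic transition matrix (each row sums to 1): a small
# illustrative stand-in for the CPM model's amino-acid chain.
P = [[0.90, 0.08, 0.02],
     [0.05, 0.90, 0.05],
     [0.02, 0.08, 0.90]]

def step(dist, P):
    """One application of the transition matrix to a distribution."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]      # start concentrated in one state
for _ in range(2000):        # power iteration toward the equilibrium
    dist = step(dist, P)
```

    After enough steps the distribution no longer changes: for this matrix it converges to (5/18, 8/18, 5/18), independent of the starting state, which is the equilibrium against which genetic decay would be measured.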

  4. On Network-Error Correcting Convolutional Codes under the BSC Edge Error Model

    CERN Document Server

    Prasad, K


    Convolutional network-error correcting codes (CNECCs) are known to provide error correcting capability in acyclic instantaneous networks within the network coding paradigm under small field size conditions. In this work, we investigate the performance of CNECCs under the error model of the network where the edges are assumed to be statistically independent binary symmetric channels, each with the same probability of error $p_e$($0\\leq p_e<0.5$). We obtain bounds on the performance of such CNECCs based on a modified generating function (the transfer function) of the CNECCs. For a given network, we derive a mathematical condition on how small $p_e$ should be so that only single edge network-errors need to be accounted for, thus reducing the complexity of evaluating the probability of error of any CNECC. Simulations indicate that convolutional codes are required to possess different properties to achieve good performance in low $p_e$ and high $p_e$ regimes. For the low $p_e$ regime, convolutional codes with g...
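    The single-edge-error condition can be made concrete: with E statistically independent BSC edges, only single-edge network-errors need to be accounted for when the probability of two or more edge errors in a network use is negligible next to the single-error term. E and p_e below are illustrative values:

```python
def prob_multi_edge_error(E, p_e):
    """Probability that more than one of E independent BSC edges errs
    in a single network use. When this is negligible, the analysis can
    be restricted to single edge network-errors as described above."""
    p0 = (1 - p_e) ** E                  # no edge errs
    p1 = E * p_e * (1 - p_e) ** (E - 1)  # exactly one edge errs
    return 1.0 - p0 - p1
```

    For small p_e the result scales like C(E, 2) * p_e^2, so halving p_e cuts the multi-error probability by roughly a factor of four.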

  5. Noise Feedback Coding Revisited: Refurbished Legacy Codecs and New Coding Models

    Institute of Scientific and Technical Information of China (English)

    Stephane Ragot; Balazs Kovesi; Alain Le Guyader


    Noise feedback coding (NFC) has attracted renewed interest with the recent standardization of backward-compatible enhancements for ITU-T G.711 and G.722. It has also been revisited with the emergence of proprietary speech codecs, such as BV16, BV32, and SILK, that have structures different from CELP coding. In this article, we review NFC and describe a novel coding technique that optimally shapes coding noise in embedded pulse-code modulation (PCM) and embedded adaptive differential PCM (ADPCM). We describe how this new technique was incorporated into the recent ITU-T G.711.1, G.711 App. III, and G.722 Annex B (G.722B) speech-coding standards.

  6. Estimating Heat and Mass Transfer Processes in Green Roof Systems: Current Modeling Capabilities and Limitations (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Tabares Velasco, P. C.


    This presentation discusses estimating heat and mass transfer processes in green roof systems: current modeling capabilities and limitations. Green roofs are 'specialized roofing systems that support vegetation growth on rooftops.'

  7. Contractor Development Models for Promoting Sustainable Building – a case for developing management capabilities of contractors

    CSIR Research Space (South Africa)

    Dlungwana, Wilkin S


    Full Text Available practices and thereby grow and prosper. Furthermore, the authors argue that improvement, adequate resourcing and the implementation of models and programmes that embrace effective contractor development can greatly enhance the management capabilities...

  8. A Stochastic Model for the Landing Dispersion of Hazard Detection and Avoidance Capable Flight Systems (United States)

    Witte, L.


    To support landing site assessments for HDA-capable flight systems and to facilitate trade studies between potential HDA architectures and the resulting probability of safe landing, a stochastic landing-dispersion model has been developed.

  9. Modelling of aspherical nebulae. I. A quick pseudo-3D photoionization code

    CERN Document Server

    Morisset, C; Peña, M


    We describe a pseudo-3D photoionization code, NEBU_3D, and its associated visualization tool, VIS_NEB3D, which are able to easily and rapidly treat a wide variety of nebular geometries by combining models obtained with a 1D photoionization code. The only requirement for the code to work is that the ionization source is unique and not extended. It is applicable as long as the diffuse ionizing radiation field is not dominant and strongly inhomogeneous. As examples of the capabilities of these new tools, we consider two very different theoretical cases. One is that of a high-excitation planetary nebula that has an ellipsoidal shape with two polar density knots. The other one is that of a blister HII region, for which we have also constructed a spherical model (the spherical impostor) which has exactly the same Hbeta surface brightness distribution as the blister model and the same ionizing star. These two examples warn against preconceived ideas when interpreting spectroscopic and imaging data of HII regi...

  10. MMA, A Computer Code for Multi-Model Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Eileen P. Poeter and Mary C. Hill


    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and the system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression, and calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that, as more data become available, they tend to favor more complicated models than do the other methods, which makes sense in many situations.
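    The information-criterion ranking can be sketched by converting AIC values for alternative calibrated models into Akaike weights, which act as posterior-like model probabilities. The log-likelihoods and parameter counts below are invented for illustration and are not MMA output:

```python
import math

def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2 ln L."""
    return 2 * n_params - 2 * log_likelihood

def akaike_weights(aics):
    """Convert AIC values to normalized model weights via exp(-delta/2)."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# hypothetical calibration results: (log-likelihood, number of parameters)
models = {"steady-state": (-120.4, 4), "transient": (-118.9, 6)}
aics = [aic(ll, k) for ll, k in models.values()]
weights = akaike_weights(aics)
```

    The weights sum to one, and the model with the smaller AIC receives the larger weight; the same weights can then drive model-averaged parameter estimates and predictions.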

  11. Advances in National Capabilities for Consequence Assessment Modeling of Airborne Hazards

    Energy Technology Data Exchange (ETDEWEB)

    Nasstrom, J; Sugiyama, G; Foster, K; Larsen, S; Kosovic, B; Eme, B; Walker, H; Goldstein, P; Lundquist, J; Pobanz, B; Fulton, J


    This paper describes the ongoing advancement of airborne hazard modeling capabilities in support of multiple agencies through the National Atmospheric Release Advisory Center (NARAC) and the Interagency Atmospheric Modeling and Atmospheric Assessment Center (IMAAC). A suite of software tools developed by Lawrence Livermore National Laboratory (LLNL) and collaborating organizations includes simple stand-alone, local-scale plume modeling tools for end users' computers; Web- and Internet-based software to access advanced 3-D flow and atmospheric dispersion modeling tools and expert analysis from the national center at LLNL; and state-of-the-science high-resolution urban models and event reconstruction capabilities.

  12. Code-to-code benchmark tests for 3D simulation models dedicated to the extraction region in negative ion sources (United States)

    Nishioka, S.; Mochalskyy, S.; Taccogna, F.; Hatayama, A.; Fantz, U.; Minelli, P.


    The development of kinetic particle models for the extraction region in negative hydrogen ion sources is indispensable and helpful for clarifying the physics of H- beam extraction. Recently, various 3D kinetic particle codes have been developed to study the extraction mechanism, but a direct comparison between them had not yet been made. We have therefore carried out a code-to-code benchmark activity to validate our codes. In the present study, the progress of this benchmark activity is summarized. At present, reasonable agreement between the codes has been obtained using realistic plasma parameters, at least for the following items: (1) the potential profile in the vacuum case; (2) the temporal evolution of the extracted current densities and the profiles of the electric potential for a plasma consisting of only electrons and positive ions.

  13. Acoustic Gravity Wave Chemistry Model for the RAYTRACE Code. (United States)


    Report DNA-TR-84-127 (Mission Research Corp., Santa Barbara, CA). Subject terms: High Frequency Radio Propagation; Acoustic Gravity Waves. The abstract text of the scanned report is not recoverable.

  14. Combustion chamber analysis code (United States)

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.


    A three-dimensional, time-dependent, Favre-averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinate (BFC) capability. Higher-order differencing methodologies such as the MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulence models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite-rate reaction models are available to model chemically reacting flows. The discrete ordinates method is used to model the effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.

  15. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.


    capability of performing iterated-source (criticality), multiplied-fixed-source, and fixed-source calculations. MCV uses a highly detailed continuous-energy (as opposed to multigroup) representation of neutron histories and cross section data. The spatial modeling is fully three-dimensional (3-D), and any geometrical region that can be described by quadric surfaces may be represented. The primary results are region-wise reaction rates, neutron production rates, slowing-down-densities, fluxes, leakages, and when appropriate the eigenvalue or multiplication factor. Region-wise nuclidic reaction rates are also computed, which may then be used by other modules in the system to determine time-dependent nuclide inventories so that RACER can perform depletion calculations. Furthermore, derived quantities such as ratios and sums of primary quantities and/or other derived quantities may also be calculated. MCV performs statistical analyses on output quantities, computing estimates of the 95% confidence intervals as well as indicators as to the reliability of these estimates. The remainder of this chapter provides an overview of the MCV algorithm. The following three chapters describe the MCV mathematical, physical, and statistical treatments in more detail. Specifically, Chapter 2 discusses topics related to tracking the histories including: geometry modeling, how histories are moved through the geometry, and variance reduction techniques related to the tracking process. Chapter 3 describes the nuclear data and physical models employed by MCV. Chapter 4 discusses the tallies, statistical analyses, and edits. Chapter 5 provides some guidance as to how to run the code, and Chapter 6 is a list of the code input options.
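    The tally statistics described (estimates of the 95% confidence interval plus a reliability indicator) can be sketched from a set of per-history scores; the normal-approximation half-width and the sample data are illustrative, not MCV's actual estimators:

```python
import math

def tally_statistics(scores):
    """Sample mean, its 95% confidence interval (normal approximation),
    and the relative error of the mean for a list of per-history tally
    scores. Relative error is a common reliability indicator: small
    values suggest the confidence-interval estimate can be trusted."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    sem = math.sqrt(var / n)        # standard error of the mean
    half = 1.96 * sem               # 95% half-width
    rel_err = sem / mean if mean else float("inf")
    return mean, (mean - half, mean + half), rel_err
```

    Because the standard error shrinks like 1/sqrt(n), quadrupling the number of histories roughly halves both the confidence-interval width and the relative error.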

  16. RAMONA-4B a computer code with three-dimensional neutron kinetics for BWR and SBWR system transient - models and correlations

    Energy Technology Data Exchange (ETDEWEB)

    Rohatgi, U.S.; Cheng, H.S.; Khan, H.J.; Mallen, A.N.; Neymotin, L.Y.


This document describes the major modifications and improvements made to the modeling of the RAMONA-3B/MOD0 code since 1981, when the code description and assessment report was completed. The new version of the code is RAMONA-4B. RAMONA-4B is a systems transient code for application to different versions of Boiling Water Reactors (BWR) such as the current BWR, the Advanced Boiling Water Reactor (ABWR), and the Simplified Boiling Water Reactor (SBWR). This code uses a three-dimensional neutron kinetics model coupled with a multichannel, non-equilibrium, drift-flux, two-phase flow formulation of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients and instability issues. Chapter 1 is an overview of the code's capabilities and limitations; Chapter 2 discusses the neutron kinetics modeling and the implementation of reactivity edits. Chapter 3 is an overview of the heat conduction calculations. Chapter 4 presents modifications to the thermal-hydraulics model of the vessel, recirculation loop, steam separators, boron transport, and SBWR-specific components. Chapter 5 describes the modeling of the plant control and safety systems. Chapter 6 presents the modeling of the Balance of Plant (BOP). Chapter 7 describes the mechanistic containment model in the code. The content of this report is complementary to the RAMONA-3B code description and assessment document. 53 refs., 81 figs., 13 tabs.

  18. C code generation applied to nonlinear model predictive control for an artificial pancreas

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Jørgensen, John Bagterp


This paper presents a method to generate C code from MATLAB code applied to a nonlinear model predictive control (NMPC) algorithm. The C code generation uses the MATLAB Coder Toolbox and can drastically reduce the development time compared to manually porting the code from MATLAB to C...

  19. Review of Hydrologic Models for Evaluating Use of Remote Sensing Capabilities (United States)

    Peck, E. L.; Mcquivey, R. S.; Keefer, T.; Johnson, E. R.; Erekson, J. L.


    Hydrologic models most commonly used by federal agencies for hydrologic forecasting are reviewed. Six catchment models and one snow accumulation and ablation model are reviewed. Information on the structure, parameters, states, and required inputs is presented in schematic diagrams and in tables. The primary and secondary roles of parameters and state variables with respect to their function in the models are identified. The information will be used to evaluate the usefulness of remote sensing capabilities in the operational use of hydrologic models.

  20. Improvements on the ice cloud modeling capabilities of the Community Radiative Transfer Model (United States)

    Yi, Bingqi; Yang, Ping; Liu, Quanhua; Delst, Paul; Boukabara, Sid-Ahmed; Weng, Fuzhong


    Noticeable improvements on the ice cloud modeling capabilities of the Community Radiative Transfer Model (CRTM) are reported, which are based on the most recent advances in understanding ice cloud microphysical (particularly, ice particle habit/shape characteristics) and optical properties. The new CRTM ice cloud model is derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) collection 6 ice cloud habit model, which represents ice particles as severely roughened hexagonal ice column aggregates with a gamma size distribution. The single-scattering properties of the new ice particle model are derived from a state-of-the-art ice optical property library and are constructed as look-up tables for rapid CRTM computations. Various sensitivity studies concerning instrument-specific applications and simulations are performed to validate CRTM against satellite observations. In particular, radiances in a spectral region covering the infrared wavelengths are simulated. Comparisons of brightness temperatures between CRTM simulations and observations (from MODIS, the Atmospheric Infrared Sounder, and the Advanced Microwave Sounding Unit) show that the new ice cloud optical property look-up table substantially enhances the performance of the CRTM under ice cloud conditions.

  1. Kinetic models of gene expression including non-coding RNAs (United States)

    Zhdanov, Vladimir P.


    In cells, genes are transcribed into mRNAs, and the latter are translated into proteins. Due to the feedbacks between these processes, the kinetics of gene expression may be complex even in the simplest genetic networks. The corresponding models have already been reviewed in the literature. A new avenue in this field is related to the recognition that the conventional scenario of gene expression is fully applicable only to prokaryotes whose genomes consist of tightly packed protein-coding sequences. In eukaryotic cells, in contrast, such sequences are relatively rare, and the rest of the genome includes numerous transcript units representing non-coding RNAs (ncRNAs). During the past decade, it has become clear that such RNAs play a crucial role in gene expression and accordingly influence a multitude of cellular processes both in the normal state and during diseases. The numerous biological functions of ncRNAs are based primarily on their abilities to silence genes via pairing with a target mRNA and subsequently preventing its translation or facilitating degradation of the mRNA-ncRNA complex. Many other abilities of ncRNAs have been discovered as well. Our review is focused on the available kinetic models describing the mRNA, ncRNA and protein interplay. In particular, we systematically present the simplest models without kinetic feedbacks, models containing feedbacks and predicting bistability and oscillations in simple genetic networks, and models describing the effect of ncRNAs on complex genetic networks. Mathematically, the presentation is based primarily on temporal mean-field kinetic equations. The stochastic and spatio-temporal effects are also briefly discussed.
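As a concrete, purely illustrative instance of the temporal mean-field kinetic equations discussed above, the simplest mRNA-ncRNA silencing scheme can be integrated numerically. All rate constants below are hypothetical, and the single term k*m*n stands for pairing followed by degradation of the mRNA-ncRNA complex:

```python
def simulate(steps=20000, dt=0.01,
             am=1.0, bm=0.1,    # mRNA synthesis / degradation rates
             an=0.5, bn=0.1,    # ncRNA synthesis / degradation rates
             k=1.0,             # mRNA-ncRNA pairing (mutual degradation) rate
             ap=1.0, bp=0.05):  # translation / protein degradation rates
    """Forward-Euler integration of mean-field kinetics for mRNA (m),
    ncRNA (n) and protein (p); all parameter values are illustrative."""
    m = n = p = 0.0
    for _ in range(steps):
        pair = k * m * n  # loss of one molecule of each species via pairing
        m += dt * (am - bm * m - pair)
        n += dt * (an - bn * n - pair)
        p += dt * (ap * m - bp * p)
    return m, n, p
```

Because the pairing term removes one molecule of each species, the steady-state difference m - n is set purely by the synthesis/degradation balance (here (am - an)/bm = 5 since bm = bn), which is a handy sanity check on such models.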

  2. A simple model of optimal population coding for sensory systems. (United States)

    Doi, Eizaburo; Lewicki, Michael S


    A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustness to noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because it contains distortion and noise. The optimal solution would be constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, such as in the peripheral retina. We show that there exist multiple, equally-optimal solutions whose receptive field structure and organization vary significantly. Among these, the one which maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with known basic properties of retinal receptive fields. The model further predicts that receptive field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery.

  3. A Study on an Evaluation Model of the IT Industry's Innovation Capability Based on Variable Weight Theory

    Institute of Scientific and Technical Information of China (English)

    LI Zi-biao; WANG Lei; HU Bao-min


    In this paper, the IT industry's innovation capability is considered to be the innovation output capability resulting from the complex operation of industry inputs in the industry system. In this complex process, R&D personnel input and R&D expense input are un-substitutable, and for the evaluation of innovation capability, innovation input and innovation output are likewise un-substitutable. Based on this theory, an evaluation model with a sustaining strength index is put forward. Considering both the input scale and the output contribution of the IT industry's innovation system, this model reflects the un-substitutability of each evaluation aspect. The measurement result not only shows the industry's innovation capability, but also reflects its degree of support to the economy. Finally, data on China's IT industry from 1994 to 2004 are used for an empirical study.

  4. Development of condensation modeling and simulation code for IRWST

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sang Nyung; Jang, Wan Ho; Ko, Jong Hyun; Ha, Jong Baek; Yang, Chang Keun; Son, Myung Seong [Kyung Hee Univ., Seoul (Korea)


    One of the design improvements of the KNGR (Korean Next Generation Reactor), which is advanced in safety and economy, is the adoption of the IRWST (In-Containment Refueling Water Storage Tank). The IRWST, installed inside the containment building, serves more design purposes than a mere relocation of the tank. Since the design functions of the IRWST are similar to those of the BWR's suppression pool, theoretical models applicable to the BWR suppression pool can mostly be applied to the IRWST. For the PWR, however, the geometry of the sparger, the operation mode, and the quantity, temperature, and pressure of the steam discharged from the primary system to the IRWST through the PSV or SDV may differ from those of the BWR. There are also some defects in detailed parts of the condensation model. Therefore, as the first nation to construct a PWR with an IRWST, we must carry out thorough research on these problems so that the results can be utilized and localized as a proprietary technology. All kinds of thermal-hydraulic phenomena were investigated, and the existing condensation models by Hideki Nariai and Izuo Aya were analyzed. In addition, through a rigorous literature review covering operating experience, experimental data, and KNGR design documents, items needing modification and supplementation were derived. An analytical model for the chugging phenomenon is also presented. 15 refs., 18 figs., 4 tabs. (Author)

  5. Exploring a capability-demand interaction model for inclusive design evaluation


    Persad, Umesh


    Designers are required to evaluate their designs against the needs and capabilities of their target user groups in order to achieve successful, inclusive products. This dissertation presents exploratory research into the specific problem of supporting analytical design evaluation for Inclusive Design. The analytical evaluation process involves evaluating products with user data rather than testing with actual users. The work focuses on the exploration of a capability-demand model of product i...

  6. Secondary neutron source modelling using MCNPX and ALEPH codes (United States)

    Trakas, Christos; Kerkar, Nordine


    Monitoring the subcritical state and the divergence of reactors requires the presence of neutron sources. The ex-core detectors (SRD, Source Range Detector), whose counting rate is correlated with the level of subcriticality of the reactor, are fed mainly by secondary neutrons from these sources. In cycle 1, primary neutrons are provided by sources activated outside of the reactor (e.g. Cf-252); part of this source can be used for the divergence of cycle 2 (not systematically). A second family of neutron sources is used for the second cycle: the spontaneous neutrons of actinides produced after irradiation of the fuel in the first cycle. In most reactors, neither family of sources is sufficient to efficiently monitor the divergence of the second and subsequent cycles. Secondary source clusters (SSC) fulfil this role. In the present case, the SSC [Sb, Be], after activation in the first cycle (production of unstable Sb-124), produces in subsequent cycles a photo-neutron source through the gamma (from Sb-124)-neutron (on Be-9) reaction. This paper presents the model of the process between irradiation in cycle 1 and the cycle 2 results for the SRD counting rate at the beginning of cycle 2, using the MCNPX code and the ALEPH-V1 depletion chain (a coupling of the MCNPX and ORIGEN codes). The results of this simulation are compared with two experimental results from PWR 1450 MWe-N4 reactors. Good agreement is observed between these results and the simulations. The subcriticality of the reactors is at about -15,000 pcm. Discrepancies in the SRD counting rate between calculations and measurements are on the order of 10%, lower than the combined uncertainty of the measurements and the code simulation. This comparison validates the AREVA methodology, which provides a best-estimate SRD counting rate for cycle 2 and subsequent cycles and allows optimizing the position of the SSC, depending on the geographic location of the sources, the main parameter for optimal monitoring of subcritical states.
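The photo-neutron source strength at the start of cycle 2 is governed by the decay of the Sb-124 produced in cycle 1 (half-life about 60.2 days). A minimal sketch of the outage decay factor, with hypothetical names:

```python
import math

SB124_HALF_LIFE_DAYS = 60.2  # Sb-124 half-life

def decayed_activity(a0, cooldown_days):
    """Sb-124 activity remaining after an outage of the given length,
    assuming pure exponential decay (illustrative helper)."""
    lam = math.log(2.0) / SB124_HALF_LIFE_DAYS
    return a0 * math.exp(-lam * cooldown_days)

# Fraction of the Sb-124 activity surviving a 30-day outage between cycles
frac = decayed_activity(1.0, 30.0)
```

A full model like the one described also needs the in-core Sb-124 production during cycle 1 (the depletion-chain part) and the gamma-to-neutron conversion efficiency on Be-9; the decay factor above is only the between-cycles piece.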

  7. TASS/SMR Code Topical Report for SMART Plant, Vol. I: Code Structure, System Models, and Solution Methods

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Young Jong; Kim, Soo Hyoung; Kim, See Darl (and others)


    The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant, which is an integral-type pressurized water reactor. It can be applied to the analysis of design basis accidents of the SMART plant, including both non-LOCA events and LOCA (loss-of-coolant accident) events. The TASS/SMR code can be applied to any plant regardless of the structural characteristics of the reactor, since the code solves the same governing equations for both the primary and secondary systems. The code has been developed to meet the requirements of a safety analysis code. This report describes the overall structure of TASS/SMR, its input processing, and the procedures for steady-state and transient calculations. In addition, the basic differential equations, finite difference equations, state relationships, and constitutive models are described in the report. First, the conservation equations, the discretization process for numerical analysis, and the search method for state relationships are described. Then, the core power model, heat transfer models, physical models for various components, and control and trip models are explained.

  8. Surface Modeling, Solid Modeling and Finite Element Modeling. Analysis Capabilities of Computer-Assisted Design and Manufacturing Systems. (United States)

    Nee, John G.; Kare, Audhut P.


    Explores several concepts in computer-aided design/computer-aided manufacturing (CAD/CAM). Defines, evaluates, reviews and compares advanced computer-aided geometric modeling and analysis techniques. Presents the results of a survey to establish the capabilities of minicomputer-based systems with the CAD/CAM packages evaluated. (CW)

  9. Joint intelligence operations centers (JIOC) business process model & capabilities evaluation methodology


    Schacher, Gordon; Irvine, Nelson; Hoyt, Roger


    A JIOC Business Process Model has been developed for use in evaluating JIOC capabilities. The model is described and depicted through OV5 and organization swim-lane diagrams. Individual intelligence activities diagrams are included. A JIOC evaluation methodology is described.

  10. Analysis of different containment models for IRIS small break LOCA, using GOTHIC and RELAP5 codes

    Energy Technology Data Exchange (ETDEWEB)

    Papini, Davide, E-mail: davide.papini@mail.polimi.i [Department of Energy, CeSNEF - Nuclear Engineering Division, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Grgic, Davor [Department of Power Systems, FER, University of Zagreb, Unska 3, 10000 Zagreb (Croatia); Cammi, Antonio; Ricotti, Marco E. [Department of Energy, CeSNEF - Nuclear Engineering Division, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy)


    Advanced nuclear water reactors rely on containment behaviour for the realization of some of their passive safety functions. Steam condensation on containment walls, where non-condensable gas effects are significant, is an important feature of the new passive containment concepts, like the AP600/1000 ones. In this work the international reactor innovative and secure (IRIS) was taken as reference, and the relevant condensation phenomena involved within its containment were investigated with different computational tools. In particular, the IRIS containment response to a small break LOCA (SBLOCA) was calculated with the GOTHIC and RELAP5 codes. A simplified model of the IRIS containment drywell was implemented with RELAP5 according to a sliced approach, based on the two-pipe-with-junction concept, while it was addressed with GOTHIC using several modelling options, regarding both heat transfer correlations and volume and thermal structure nodalization. The influence on containment behaviour prediction was investigated in terms of drywell temperature and pressure response, heat transfer coefficient (HTC) and steam volume fraction distribution, and internal recirculating mass flow rate. The objective of the paper is to preliminarily compare the capability of the two codes in modelling the same postulated accident, and thus to check the results obtained with RELAP5 when applied in a situation not covered by its validation matrix (comprising SBLOCA and to some extent LBLOCA transients, but not explicitly the modelling of large dry containment volumes). The option of whether to include droplets in the fluid mass flow discharged to the containment was the most influential parameter for the GOTHIC simulations. Despite some drawbacks, due, e.g., to a marked overestimation of the internal natural recirculation, RELAP5 confirmed its capability to satisfactorily model the basic processes in the IRIS containment following a SBLOCA.

  11. Model code for energy conservation in new building construction

    Energy Technology Data Exchange (ETDEWEB)



    In response to the recognized lack of existing consensus standards directed to the conservation of energy in building design and operation, the preparation and publication of such a standard was accomplished with the issuance of ASHRAE Standard 90-75, "Energy Conservation in New Building Design," by the American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc., in 1975. This standard addressed itself to recommended practices for energy conservation, using both depletable and non-depletable sources. A model code for energy conservation in building construction has been developed, setting forth the minimum regulations found necessary to mandate such conservation. The code addresses itself to the administration, design criteria, systems elements, controls, service water heating and electrical distribution and use, both for depletable and non-depletable energy sources. The technical provisions of the document are based on ASHRAE 90-75 and it is intended for use by state and local building officials in the implementation of a statewide energy conservation program.

  12. Mathematical modeling of wiped-film evaporators. [MAIN codes

    Energy Technology Data Exchange (ETDEWEB)

    Sommerfeld, J.T.


    A mathematical model and associated computer program were developed to simulate the steady-state operation of wiped-film evaporators for the concentration of typical waste solutions produced at the Savannah River Plant. In this model, which treats either a horizontal or a vertical wiped-film evaporator as a plug-flow device with no backmixing, three fundamental phenomena are described: sensible heating of the waste solution, vaporization of water, and crystallization of solids from solution. Physical property data were coded into the computer program, which performs the calculations of this model. Physical properties of typical waste solutions and of the heating steam, generally as analytical functions of temperature, were obtained from published data or derived by regression analysis of tabulated or graphical data. Preliminary results from tests of the Savannah River Laboratory semiworks wiped-film evaporators were used to select a correlation for the inside film heat transfer coefficient. This model should be a useful aid in the specification, operation, and control of the full-scale wiped-film evaporators proposed for application under plant conditions. In particular, it should be of value in the development and analysis of feed-forward control schemes for the plant units. Also, this model can be readily adapted, with only minor changes, to simulate the operation of wiped-film evaporators for other conceivable applications, such as the concentration of acid wastes.
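The plug-flow picture described above can be sketched as a marching energy balance: each axial slice first heats the solution sensibly, then boils off water once the boiling point is reached. All parameter values below are illustrative (not Savannah River data), and the crystallization step is omitted:

```python
def evaporator_profile(m_dot=0.5, cp=3500.0, t_in=25.0, t_boil=102.0,
                       h_vap=2.25e6, q_per_m=8.0e4, length=4.0, n=400):
    """March a plug-flow energy balance along a wiped-film evaporator:
    sensible heating of the solution up to its boiling point, then
    vaporization of water. Units: SI; temperatures in deg C.
    Returns (outlet temperature, water vaporized in kg/s)."""
    dz = length / n
    t, vaporized = t_in, 0.0
    for _ in range(n):
        q = q_per_m * dz  # heat added over this axial slice [W]
        if t < t_boil:
            # Sensible heating, clamped at the boiling point
            t = min(t_boil, t + q / (m_dot * cp))
        else:
            # All slice heat goes into vaporizing water
            vaporized += q / h_vap
    return t, vaporized
```

A production model would add the solids crystallization balance, a boiling-point rise that grows with concentration, and an inside-film heat transfer correlation in place of the fixed heat flux assumed here.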

  13. CODE's new solar radiation pressure model for GNSS orbit determination (United States)

    Arnold, D.; Meindl, M.; Beutler, G.; Dach, R.; Schaer, S.; Lutz, S.; Prange, L.; Sośnica, K.; Mervart, L.; Jäggi, A.


    The Empirical CODE Orbit Model (ECOM) of the Center for Orbit Determination in Europe (CODE), which was developed in the early 1990s, is widely used in the International GNSS Service (IGS) community. Spurious spectral lines have long been known to exist in geophysical parameters, in particular in the Earth Rotation Parameters (ERPs) and in the estimated geocenter coordinates, and could recently be attributed to the ECOM. These effects grew gradually with the increasing influence of the GLONASS system in recent years in the CODE analysis, which has been based on a rigorous combination of GPS and GLONASS since May 2003. In a first step we show that the problems associated with the ECOM are to the largest extent caused by GLONASS, which reached full deployment by the end of 2011. GPS-only, GLONASS-only, and combined GPS/GLONASS solutions using the observations in the years 2009-2011 of a global network of 92 combined GPS/GLONASS receivers were analyzed for this purpose. In a second step we review direct solar radiation pressure (SRP) models for GNSS satellites. We demonstrate that only even-order short-period harmonic perturbations occur along the Sun-satellite direction for GPS and GLONASS satellites, and only odd-order perturbations along the direction perpendicular to both the Sun-satellite vector and the spacecraft's solar panel axis. Based on this insight we assess in the third step the performance of four candidate orbit models for the future ECOM. The geocenter coordinates, the ERP differences w.r.t. the IERS 08 C04 series of ERPs, the misclosures for the midnight epochs of the daily orbital arcs, and scale parameters of Helmert transformations for station coordinates serve as quality criteria. The old and updated ECOM are validated in addition with satellite laser ranging (SLR) observations and by comparing the orbits to those of the IGS and other analysis centers. Based on all tests, we present a new extended ECOM which
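For orientation, the classical ECOM discussed here parameterizes the SRP acceleration along three axes: e_D (Sun-satellite direction), e_Y (solar panel axis), and e_B (completing the right-handed frame). In the standard notation of the literature (not necessarily this paper's):

```latex
\[
\mathbf{a}_{\mathrm{SRP}} = a_D\,\mathbf{e}_D + a_Y\,\mathbf{e}_Y + a_B\,\mathbf{e}_B,
\]
\[
a_D = D_0 + D_c\cos u + D_s\sin u,\quad
a_Y = Y_0 + Y_c\cos u + Y_s\sin u,\quad
a_B = B_0 + B_c\cos u + B_s\sin u,
\]
```

where u is the satellite's argument of latitude. The perturbation analysis summarized in the abstract motivates restricting the D-direction terms to even-order harmonics and the B-direction terms to odd-order harmonics of the satellite's angular argument relative to the Sun in the extended model.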

  14. RELAP5-3D Code Includes Athena Features and Models

    Energy Technology Data Exchange (ETDEWEB)

    Richard A. Riemke; Cliff B. Davis; Richard R. Schultz


    Version 2.3 of the RELAP5-3D computer program includes all features and models previously available only in the ATHENA version of the code. These include the addition of new working fluids (i.e., ammonia, blood, carbon dioxide, glycerol, helium, hydrogen, lead-bismuth, lithium, lithium-lead, nitrogen, potassium, sodium, and sodium-potassium) and a magnetohydrodynamic model that expands the capability of the code to model many more thermal-hydraulic systems. In addition to the standard working fluid (water) and the new working fluids, one or more noncondensable gases (e.g., air, argon, carbon dioxide, carbon monoxide, helium, hydrogen, krypton, nitrogen, oxygen, SF6, xenon) can be specified as part of the vapor/gas phase of the working fluid. These noncondensable gases were in previous versions of RELAP5-3D. Recently four molten salts have been added as working fluids to RELAP5-3D Version 2.4, which has had a limited release. These molten salts will be in RELAP5-3D Version 2.5, which will have a general release like RELAP5-3D Version 2.3. Applications that use these new features and models are discussed in this paper.

  15. A self-organized internal models architecture for coding sensory-motor schemes

    Directory of Open Access Journals (Sweden)

    Esaú eEscobar Juárez


    Cognitive robotics research draws inspiration from theories and models on cognition, as conceived by neuroscience or cognitive psychology, to investigate biologically plausible computational models in artificial agents. In this field, the theoretical framework of Grounded Cognition provides epistemological and methodological grounds for the computational modeling of cognition. It has been stressed in the literature that simulation, prediction, and multi-modal integration are key aspects of cognition and that computational architectures capable of putting them into play in a biologically plausible way are a necessity. Research in this direction has brought extensive empirical evidence suggesting that Internal Models are suitable mechanisms for sensory-motor integration. However, current Internal Models architectures show several drawbacks, mainly due to the lack of a unified substrate allowing for a true sensory-motor integration space, enabling flexible and scalable ways to model cognition under the embodiment hypothesis constraints. We propose the Self-Organized Internal Models Architecture (SOIMA), a computational cognitive architecture coded by means of a network of self-organized maps, implementing coupled internal models that allow modeling multi-modal sensory-motor schemes. Our approach integrally addresses the issues of current implementations of Internal Models. We discuss the design and features of the architecture, and provide empirical results on a humanoid robot that demonstrate the benefits and potentialities of the SOIMA concept for studying cognition in artificial agents.

  16. Verification of the New FAST v8 Capabilities for the Modeling of Fixed-Bottom Offshore Wind Turbines: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Barahona, B.; Jonkman, J.; Damiani, R.; Robertson, A.; Hayman, G.


    Coupled dynamic analysis has an important role in the design of offshore wind turbines because the systems are subject to complex operating conditions from the combined action of waves and wind. The aero-hydro-servo-elastic tool FAST v8 is framed in a novel modularization scheme that facilitates such analysis. Here, we present the verification of new capabilities of FAST v8 to model fixed-bottom offshore wind turbines. We analyze a series of load cases with both wind and wave loads and compare the results against those from the previous international code comparison projects-the International Energy Agency (IEA) Wind Task 23 Subtask 2 Offshore Code Comparison Collaboration (OC3) and the IEA Wind Task 30 OC3 Continued (OC4) projects. The verification is performed using the NREL 5-MW reference turbine supported by monopile, tripod, and jacket substructures. The substructure structural-dynamics models are built within the new SubDyn module of FAST v8, which uses a linear finite-element beam model with Craig-Bampton dynamic system reduction. This allows the modal properties of the substructure to be synthesized and coupled to hydrodynamic loads and tower dynamics. The hydrodynamic loads are calculated using a new strip theory approach for multimember substructures in the updated HydroDyn module of FAST v8. These modules are linked to the rest of FAST through the new coupling scheme involving mapping between module-independent spatial discretizations and a numerically rigorous implicit solver. The results show that the new structural dynamics, hydrodynamics, and coupled solutions compare well to the results from the previous code comparison projects.
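In the standard notation of the structural dynamics literature (not taken from the SubDyn documentation), the Craig-Bampton reduction mentioned above partitions the substructure degrees of freedom into boundary (b) and interior (i) sets and retains a few fixed-interface modes:

```latex
\[
\begin{pmatrix} u_b \\ u_i \end{pmatrix} \approx
T \begin{pmatrix} u_b \\ q \end{pmatrix},
\qquad
T = \begin{pmatrix} I & 0 \\ \Psi & \Phi \end{pmatrix},
\qquad
\Psi = -K_{ii}^{-1} K_{ib},
\]
```

where the columns of \(\Phi\) are the retained fixed-interface normal modes (solving \(K_{ii}\Phi = M_{ii}\Phi\Lambda\)) and the reduced matrices \(\tilde{M} = T^{\mathsf{T}} M T\), \(\tilde{K} = T^{\mathsf{T}} K T\) are what get coupled to the hydrodynamic loads and tower dynamics.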

  17. Molecular Code Division Multiple Access: Gaussian Mixture Modeling (United States)

    Zamiri-Jafarian, Yeganeh

    Communication between nano-devices is an emerging research field in nanotechnology. Molecular Communication (MC), which is a bio-inspired paradigm, is a promising technique for communication in nano-networks. In MC, molecules are administered to exchange information among nano-devices. Due to the nature of molecular signals, traditional communication methods cannot be directly applied to the MC framework. The objective of this thesis is to present novel diffusion-based MC methods when multiple nano-devices communicate with each other in the same environment. A new channel model and detection technique, along with a molecular-based access method, are proposed here for communication between asynchronous users. In this work, the received molecular signal is modeled as a Gaussian mixture distribution when the MC system undergoes Brownian noise and inter-symbol interference (ISI). This novel approach provides a suitable model for the diffusion-based MC system. Using the proposed Gaussian mixture model, a simple receiver is designed by minimizing the error probability. To determine an optimum detection threshold, an iterative algorithm is derived which minimizes a linear approximation of the error probability function. Also, a memory-based receiver is proposed to improve the performance of the MC system by considering previously detected symbols in obtaining the threshold value. Numerical evaluations reveal that theoretical analysis of the bit error rate (BER) performance based on the Gaussian mixture model matches simulation results very closely. Furthermore, in this thesis, molecular code division multiple access (MCDMA) is proposed to overcome the inter-user interference (IUI) caused by asynchronous users communicating in a shared propagation environment. Based on the selected molecular codes, a chip detection scheme with an adaptable threshold value is developed for the MCDMA system when the proposed Gaussian mixture model is considered. Results indicate that the
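For equal priors and Gaussian class-conditional densities, the error probability being minimized can be written down and searched directly. The grid search below is a simple stand-in for the thesis's iterative linear-approximation algorithm, and all names are hypothetical:

```python
import math

def norm_cdf(x, mu, sigma):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def error_probability(tau, mu0, s0, mu1, s1):
    """Equal-prior error probability for threshold tau, with symbol 0
    distributed N(mu0, s0^2) and symbol 1 distributed N(mu1, s1^2)."""
    miss0 = 1.0 - norm_cdf(tau, mu0, s0)  # symbol 0 read as 1
    miss1 = norm_cdf(tau, mu1, s1)        # symbol 1 read as 0
    return 0.5 * miss0 + 0.5 * miss1

def optimal_threshold(mu0, s0, mu1, s1, n=10001):
    """Grid-search the detection threshold between the two means
    that minimizes the error probability."""
    grid = (mu0 + (mu1 - mu0) * i / (n - 1) for i in range(n))
    pe, tau = min((error_probability(t, mu0, s0, mu1, s1), t) for t in grid)
    return tau, pe
```

For equal variances the optimum reduces to the midpoint between the means; with the unequal variances of a Gaussian mixture channel (or with ISI memory, as in the proposed receiver), the optimum shifts and a numerical search or iterative scheme is needed.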

  18. Modeling of the CTEx subcritical unit using MCNPX code

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Avelino [Divisao de Defesa Quimica, Biologica e Nuclear. Centro Tecnologico do Exercito - CTEx, Guaratiba, Rio de Janeiro, RJ (Brazil); Silva, Ademir X. da, E-mail: [Programa de Engenharia Nuclear. Universidade Federal do Rio de Janeiro - UFRJ Centro de Tecnologia, Rio de Janeiro, RJ (Brazil); Rebello, Wilson F. [Secao de Engenharia Nuclear - SE/7 Instituto Militar de Engenharia - IME Rio de Janeiro, RJ (Brazil); Cunha, Victor L. Lassance [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)


    The present work aims at simulating the subcritical unit of the Army Technology Center (CTEx), namely the ARGUS pile (a subcritical uranium-graphite arrangement), using the computational code MCNPX. Once such modeling is complete, it can be used in k-effective calculations for systems using natural uranium as fuel, for instance. ARGUS is a subcritical assembly that uses reactor-grade graphite as the moderator of fission neutrons and metallic uranium fuel rods with aluminum cladding. The pile is driven by an Am-Be neutron source. To achieve a higher value of k{sub eff}, a higher concentration of U235 could be proposed, provided k{sub eff} safely remains below one. (author)

  19. Djehuty, a Code for Modeling Stars in Three Dimensions

    CERN Document Server

    Bazán, G; Dossa, D D; Eggleton, P P; Taylor, A; Castor, J I; Murray, S; Cook, K H; Eltgroth, P G; Cavallo, R M; Turcotte, S; Keller, S C; Pudliner, B S


    Current practice in stellar evolution is to employ one-dimensional calculations that quantitatively apply only to a minority of the observed stars (single non-rotating stars, or well-detached binaries). Even in these systems, astrophysicists depend on approximations to handle complex three-dimensional processes like convection. Binary stars, like those that lead to the Type Ia supernovae used to measure the expansion of the universe, are grossly non-spherical and await a 3D treatment. To approach very large problems like multi-dimensional modeling of stars, Lawrence Livermore National Laboratory has invested in massively parallel computers and invested even more in developing the algorithms to utilize them on complex physics problems. We have leveraged skills from across the laboratory to develop a 3D stellar evolution code, Djehuty (after the Egyptian god of writing and calculation), that operates efficiently on platforms with thousands of nodes, with the best available phy...

  20. Maximizing entropy of image models for 2-D constrained coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Danieli, Matteo; Burini, Nino


    This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). The PRF are 2-D causal finite context models, which define stationary probability distributions on finite rectangles and thus allow for calculation of the entropy. We consider two binary constraints: we revisit the hard square constraint, given by forbidding neighboring 1s, and provide novel results for the constraint that no uniform 2 × 2 square contains all 0s or all 1s. The maximum values of the entropy for the constraints are estimated, and binary PRF satisfying the constraints are characterized and optimized w.r.t. the entropy. The maximum binary PRF entropy is 0.839 bits/symbol for the no uniform squares constraint. The entropy...
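
    For context on the hard square constraint mentioned above (no two horizontally or vertically adjacent 1s), its entropy can be estimated numerically with a strip transfer matrix. This is a minimal sketch of that standard technique, not the PRF machinery of the paper; the function name and strip width are illustrative:

```python
import itertools
import math

def hard_square_entropy(width=8, iters=200):
    """Estimate the hard-square entropy (bits/symbol) on a strip of the
    given width: enumerate rows with no adjacent 1s, build the transfer
    matrix from vertical compatibility, and power-iterate for the
    dominant eigenvalue."""
    rows = [r for r in itertools.product((0, 1), repeat=width)
            if not any(a and b for a, b in zip(r, r[1:]))]
    idx = range(len(rows))
    # Two stacked rows are compatible if no column has 1s in both.
    compat = [[all(not (a and b) for a, b in zip(rows[i], rows[j]))
               for j in idx] for i in idx]
    v = [1.0] * len(rows)
    lam = 1.0
    for _ in range(iters):
        w = [sum(v[j] for j in idx if compat[i][j]) for i in idx]
        lam = max(w)
        v = [x / lam for x in w]  # renormalize; lam -> dominant eigenvalue
    return math.log2(lam) / width
```

    Widening the strip drives the estimate toward the known hard-square entropy of about 0.588 bits/symbol, well below the 0.839 bits/symbol the paper reports for the no-uniform-squares constraint.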

  1. Modeling Vortex Generators in a Navier-Stokes Code (United States)

    Dudek, Julianne C.


    A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.
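
    The vane force described above can be illustrated with a minimal sketch. The thin-airfoil lift coefficient and the volume-weighted distribution below are generic assumptions for illustration only, not the specific source-term formulation implemented in Wind-US:

```python
import math

def vg_lift_force(rho, u, planform_area, alpha_deg):
    """Lift magnitude of one vane-type vortex generator, using the
    thin-airfoil approximation Cl = 2*pi*alpha (alpha in radians)."""
    alpha = math.radians(alpha_deg)
    cl = 2.0 * math.pi * alpha
    return 0.5 * rho * u * u * planform_area * cl

def distribute_source(total_force, cell_volumes):
    """Spread the force over the user-selected grid cells in proportion
    to cell volume, so the summed momentum source equals the vane force."""
    vtot = sum(cell_volumes)
    return [total_force * v / vtot for v in cell_volumes]

# Illustrative values: sea-level air, 50 m/s, a 2 cm^2 vane at 16 degrees.
f = vg_lift_force(rho=1.225, u=50.0, planform_area=2e-4, alpha_deg=16.0)
src = distribute_source(f, [1e-6, 2e-6, 1e-6])
```

    This mirrors the three user inputs the abstract names: the range of grid points (here, the list of cell volumes), the planform area, and the angle of incidence.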

  2. Transformation invariant sparse coding

    DEFF Research Database (Denmark)

    Mørup, Morten; Schmidt, Mikkel Nørgaard


    Sparse coding is a well established principle for unsupervised learning. Traditionally, features in sparse coding are extracted at specific locations; however, we often prefer an invariant representation. This paper introduces a general transformation invariant sparse coding (TISC) model. The model decomposes images into features invariant to location and general transformation by a set of specified operators, as well as a sparse coding matrix indicating where, and to what degree, these features are present in the original image. The TISC model is in general overcomplete, and we therefore invoke sparse coding to estimate its parameters. We demonstrate how the model can correctly identify components of non-trivial artificial as well as real image data. Thus, the model is capable of reducing feature redundancies in terms of pre-specified transformations, improving component identification.

  3. Co-firing biomass and coal-progress in CFD modelling capabilities

    DEFF Research Database (Denmark)

    Kær, Søren Knudsen; Rosendahl, Lasse Aistrup; Yin, Chungen


    This paper discusses the development of user-defined FLUENT™ sub-models to improve the modelling capabilities in the area of large biomass particle motion and conversion. Focus is put on a model that includes the influence of particle size and shape on the reactivity by resolving intra-particle gradients. The advanced reaction model predicts moisture and volatiles release characteristics that differ significantly from those found from a 0-dimensional model, partly due to the processes occurring in parallel rather than sequentially. This is demonstrated for a test case that illustrates single particle conversion patterns. The improved model will impact the simulation capabilities of biomass-fired boilers in the areas of thermal conditions, NOx formation and particle deposition behaviour.

  4. Modeling Vortex Generators in the Wind-US Code (United States)

    Dudek, Julianne C.


    A source term model which simulates the effects of vortex generators was implemented into the Wind-US Navier Stokes code. The source term added to the Navier-Stokes equations simulates the lift force which would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, supersonic flow in a rectangular duct with a counterrotating vortex generator pair, and subsonic flow in an S-duct with 22 co-rotating vortex generators. The validation results indicate that the source term vortex generator model provides a useful tool for screening vortex generator configurations and gives comparable results to solutions computed using a gridded vane.

  5. Evaluation Model for Capability of Enterprise Agent Coalition Based on Information Fusion and Attribute Reduction

    Institute of Scientific and Technical Information of China (English)

    Dongjun Liu; Li Li; Jiayang Wang


    For the evaluation of the capability of enterprise agent coalitions, an evaluation model based on information fusion and the entropy weighting method is presented. An attribute reduction method based on rough set theory is used to reduce the capability indicators, from which a new indicator system is determined. Attribute reduction also reduces the workload and removes redundant information when there are too many indicators or the indicators are strongly correlated, so research complexity is reduced and efficiency is improved. The entropy weighting method is used to determine the weights of the remaining indicators, and the importance of the indicators is analyzed. An information fusion model based on the nearest neighbor method is developed and used to evaluate the capability of multiple agent coalitions, and is compared with a cloud evaluation model and the D-S evidence method. Simulation results are reasonable and clearly distinguish the coalitions, verifying the effectiveness and feasibility of the model. The information fusion model can provide more scientific and rational decision support for choosing the best agent coalition, and offers a novel procedure for evaluating the capability of agent coalitions.
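
    The entropy weighting step referred to above is a standard, well-defined calculation; a minimal sketch, assuming a positive decision matrix (the function name is illustrative, not from the paper):

```python
import math

def entropy_weights(matrix):
    """Entropy weighting: indicators whose values vary more across
    alternatives (lower information entropy) receive larger weights.
    matrix[i][j] = positive value of indicator j for alternative i."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)  # normalizes entropy to [0, 1]
    degrees = []
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        s = sum(col)
        p = [x / s for x in col]
        e = -k * sum(x * math.log(x) for x in p if x > 0)
        degrees.append(1.0 - e)  # divergence degree of indicator j
    total = sum(degrees)
    return [d / total for d in degrees]
```

    A constant indicator column has entropy 1 and therefore weight 0: it carries no information for distinguishing the coalitions.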

  6. Modelling of sprays in containment applications with A CMFD code

    Energy Technology Data Exchange (ETDEWEB)

    Mimouni, S., E-mail: stephane.mimouni@edf.f [Electricite de France R and D Division, 6 Quai Watier, F-78400 Chatou (France); Lamy, J.-S. [Electricite de France R and D Division, 1 av. du General de Gaulle, F-92140 Clamart (France); Lavieville, J. [Electricite de France R and D Division, 6 Quai Watier, F-78400 Chatou (France); Guieu, S.; Martin, M. [Electricite de France SEPTEN Division, 12-14 av. Dutrievoz, 69628 Villeurbanne (France)


    During the course of a hypothetical severe accident in a Pressurized Water Reactor (PWR), spray systems are used in the containment to prevent overpressure in case of a steam line break, and to enhance gas mixing in case hydrogen is present. Within the framework of the Severe Accident Research Network (SARNET) of the 6th EC Framework Programme, two tests were performed in the TOSQAN facility to study spray behaviour under severe accident conditions: TOSQAN 101 and TOSQAN 113. The TOSQAN facility is a closed cylindrical vessel. The inner spray system is located at the top of the enclosure on the vertical axis. For the TOSQAN 101 case, an initial pressurization of the vessel is performed with superheated steam up to 2.5 bar. Then, steam injection is stopped and spraying starts simultaneously at a given water temperature (around 25 {sup o}C) and water mass flow-rate (around 30 g/s). The depressurization transient starts and continues until the equilibrium phase, which corresponds to the stabilization of the average temperature and pressure of the gaseous mixture inside the vessel. The purpose of the TOSQAN 113 cold spray test is to study helium mixing due to spray activation without heat and mass transfer between gas and droplets. We present in this paper the spray modelling implemented in NEPTUNE{sub C}FD, a three-dimensional multi-fluid code developed especially for nuclear reactor applications. A new model dedicated to droplet evaporation at the wall is also detailed. Keeping in mind the Best Practice Guidelines, closure laws have been selected to ensure as weak a grid dependence as possible. For the TOSQAN 113 case, the calculated time evolution of the helium volume fraction shows that the physical approach described in the paper is able to reproduce the mixing of helium by the spray. The prediction of the transient behaviour should be improved by including in the model corrections based on better understanding of the influence of the

  7. Capability-based Access Control Delegation Model on the Federated IoT Network

    DEFF Research Database (Denmark)

    Anggorojati, Bayu; Mahalle, Parikshit N.; Prasad, Neeli R.


    Flexibility is an important property for a general access control system, and especially in the Internet of Things (IoT); it can be achieved by access or authority delegation. Delegation mechanisms in access control that have been studied until now have been intended mainly for systems with no resource constraints, such as web-based systems, and are not very suitable for a highly pervasive system such as the IoT. To this end, this paper presents an access delegation method with security considerations, based on the Capability-based Context Aware Access Control (CCAAC) model, intended for federated machine-to-machine communication or IoT networks. The main idea of our proposed model is that access delegation is realized by means of a capability propagation mechanism, incorporating context information as well as secure capability propagation under federated IoT environments. By using...

  8. Capability Model for Case-Based Reasoning in Collaborative Commerce Environment

    Institute of Scientific and Technical Information of China (English)


    Collaborative commerce (c-commerce) has become an innovative business paradigm that helps companies achieve high operational performance through inter-organizational collaboration. This paper presents an effective case-based reasoning (CBR) capability model for solution selection in c-commerce applications, as CBR is widely used in knowledge management and electronic commerce. Based on the case-based competence model suggested by Smyth and McKenna, a directed graph is used to represent the collaborative reasoning history of CBR systems, from which information on reasoning process ability is extracted. Experiments were carried out on a travel dataset. By integrating case-based competence and reasoning process ability, the capability measure better reflects the real ability of CBR systems. The results show that the proposed method can effectively evaluate the capability of CBR systems and enhance the performance of collaborative case-based reasoning in a c-commerce environment.

  9. Existing and Required Modeling Capabilities for Evaluating ATM Systems and Concepts (United States)

    Odoni, Amedeo R.; Bowman, Jeremy; Delahaye, Daniel; Deyst, John J.; Feron, Eric; Hansman, R. John; Khan, Kashif; Kuchar, James K.; Pujet, Nicolas; Simpson, Robert W.


    ATM systems throughout the world are entering a period of major transition and change. The combination of important technological developments and of the globalization of the air transportation industry has necessitated a reexamination of some of the fundamental premises of existing Air Traffic Management (ATM) concepts. New ATM concepts have to be examined, concepts that may place more emphasis on: strategic traffic management; planning and control; partial decentralization of decision-making; and added reliance on the aircraft to carry out strategic ATM plans, with ground controllers confined primarily to a monitoring and supervisory role. 'Free Flight' is a case in point. In order to study, evaluate and validate such new concepts, the ATM community will have to rely heavily on models and computer-based tools/utilities, covering a wide range of issues and metrics related to safety, capacity and efficiency. The state of the art in such modeling support is adequate in some respects, but clearly deficient in others. It is the objective of this study to assist in: (1) assessing the strengths and weaknesses of existing fast-time models and tools for the study of ATM systems and concepts and (2) identifying and prioritizing the requirements for the development of additional modeling capabilities in the near future. A three-stage process has been followed to this purpose: 1. Through the analysis of two case studies involving future ATM system scenarios, as well as through expert assessment, modeling capabilities and supporting tools needed for testing and validating future ATM systems and concepts were identified and described. 2. Existing fast-time ATM models and support tools were reviewed and assessed with regard to the degree to which they offer the capabilities identified under Step 1. 3. The findings of 1 and 2 were combined to draw conclusions about (1) the best capabilities currently existing, (2) the types of concept testing and validation that can be carried

  10. DiskFit: a code to fit simple non-axisymmetric galaxy models either to photometric images or to kinematic maps

    CERN Document Server

    Sellwood, J A


    This posting announces public availability of version 1.2 of the DiskFit software package developed by the authors, which may be used to fit simple non-axisymmetric models either to images or to velocity fields of disk galaxies. Here we give an outline of the capability of the code and provide the link to downloading executables, the source code, and a comprehensive on-line manual. We argue that in important respects the code is superior to rotcur for fitting kinematic maps and to galfit for fitting multi-component models to photometric images.

  11. Supply Chain Modeling: Downstream Risk Assessment Methodology (DRAM) Demonstration of Capability (United States)


    This work was done for Defense Logistics Agency Strategic Materials (DLA SM) to provide the capability to analyze supply chains of strategic and...

  12. University-Industry Research Collaboration: A Model to Assess University Capability (United States)

    Abramo, Giovanni; D'Angelo, Ciriaco Andrea; Di Costa, Flavia


    Scholars and policy makers recognize that collaboration between industry and the public research institutions is a necessity for innovation and national economic development. This work presents an econometric model which expresses the university capability for collaboration with industry as a function of size, location and research quality. The…

  13. Using a Capability Maturity Model to Build on the Generational Approach to Student Engagement Practices (United States)

    Nelson, K.; Clarke, J.; Stoodley, I.; Creagh, T.


    The generational approach to conceptualising first-year student learning behaviour has made a useful contribution to understanding student engagement. It has an explicit focus on student behaviour and we suggest that a Capability Maturity Model interpretation may provide a complementary extension of that understanding as it builds on the…

  14. Methodology Using MELCOR Code to Model Proposed Hazard Scenario

    Energy Technology Data Exchange (ETDEWEB)

    Gavin Hawkley


    This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of the leak path factor (LPF), the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the fraction of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and on other pathways from the building, such as doorways, both open and closed. This study shows how the multiple LPFs from the building interior can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a downwind receptor can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). The study also briefly addresses particle characteristics that affect atmospheric particle dispersion, and compares this dispersion with the LPF methodology.
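
    The combinatory evaluation of leak path factors can be sketched with simple bookkeeping: fractions multiply through compartments in series and add, flow-weighted, across parallel pathways. This is a generic illustration of that arithmetic, not the MELCOR methodology itself; all names and values are illustrative:

```python
def serial_lpf(stage_lpfs):
    """Total leak path factor through compartments in series is the
    product of the per-stage fractions that escape each one."""
    total = 1.0
    for f in stage_lpfs:
        total *= f
    return total

def parallel_lpf(paths):
    """paths: list of (fraction_of_flow, lpf) for pathways in parallel,
    e.g. a filtered exhaust stack versus an open doorway."""
    return sum(frac * lpf for frac, lpf in paths)

# The assumed standard multiplication, as two 0.5 stages in series:
assumed = serial_lpf([0.5, 0.5])  # 0.25
```

    Comparing such a product against pathway-resolved values is the kind of check the study describes for the 0.5 × 0.5 assumption.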

  15. The Advanced Modeling, Simulation and Analysis Capability Roadmap Vision for Engineering (United States)

    Zang, Thomas; Lieber, Mike; Norton, Charles; Fucik, Karen


    This paper summarizes a subset of the Advanced Modeling, Simulation and Analysis (AMSA) Capability Roadmap that was developed for NASA in 2005. The AMSA Capability Roadmap Team was chartered "to identify what is needed to enhance NASA's capabilities to produce leading-edge exploration and science missions by improving engineering system development, operations, and science understanding through broad application of advanced modeling, simulation and analysis techniques." The AMSA roadmap stressed the need for integration, not just within the science, engineering and operations domains themselves, but also across these domains. Here we discuss the roadmap element pertaining to integration within the engineering domain, with a particular focus on implications for future observatory missions. The AMSA products supporting the system engineering function are mission information, bounds on information quality, and system validation guidance. The Engineering roadmap element contains five sub-elements: (1) Large-Scale Systems Models, (2) Anomalous Behavior Models, (3) Advanced Uncertainty Models, (4) Virtual Testing Models, and (5) Space-Based Robotics Manufacture and Servicing Models.

  16. Coding conventions and principles for a National Land-Change Modeling Framework (United States)

    Donato, David I.


    This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.

  17. ASTEC V2 severe accident integral code: Fission product modelling and validation

    Energy Technology Data Exchange (ETDEWEB)

    Cantrel, L., E-mail:; Cousin, F.; Bosland, L.; Chevalier-Jabet, K.; Marchetto, C.


    One main goal of the severe accident integral code ASTEC V2, jointly developed for more than 15 years by IRSN and GRS, is to simulate the overall behaviour of fission products (FP) in a damaged nuclear facility. ASTEC applications are source term determinations, level 2 Probabilistic Safety Assessment (PSA2) studies including the determination of uncertainties, accident management studies and physical analyses of FP experiments to improve the understanding of the phenomenology. ASTEC is a modular code, and models of one part of the phenomenology are implemented in each module: the release of FPs and structural materials from degraded fuel in the ELSA module; the transport through the reactor coolant system, approximated as a sequence of control volumes, in the SOPHAEROS module; and the radiochemistry inside the containment building in the IODE module. Three other modules, CPA, ISODOP and DOSE, allow computing, respectively, the deposition rate of aerosols inside the containment, the activities of the isotopes as a function of time, and the gaseous dose rate, which is needed to model radiochemistry in the gaseous phase. In ELSA, release models are semi-mechanistic and have been validated against a wide range of experimental data, notably the VERCORS experiments. For SOPHAEROS, the models can be divided into two parts: vapour-phase phenomena and aerosol-phase phenomena. For IODE, iodine and ruthenium chemistry are modelled with a semi-mechanistic approach; these FPs can form volatile species and are particularly important in terms of potential radiological consequences. The models in these three modules are based on a wide experimental database, resulting in large part from international programmes, and they are considered to represent the state of the art of R and D knowledge. This paper illustrates some FP modelling capabilities of ASTEC, and computed values are compared with experimental results that are part of the validation matrix.

  18. Modeling ion exchange in clinoptilolite using the EQ3/6 geochemical modeling code

    Energy Technology Data Exchange (ETDEWEB)

    Viani, B.E.; Bruton, C.J.


    Assessing the suitability of Yucca Mtn., NV as a potential repository for high-level nuclear waste requires the means to simulate ion-exchange behavior of zeolites. Vanselow and Gapon convention cation-exchange models have been added to geochemical modeling codes EQ3NR/EQ6, allowing exchange to be modeled for up to three exchangers or a single exchanger with three independent sites. Solid-solution models that are numerically equivalent to the ion-exchange models were derived and also implemented in the code. The Gapon model is inconsistent with experimental adsorption isotherms of trace components in clinoptilolite. A one-site Vanselow model can describe adsorption of Cs or Sr on clinoptilolite, but a two-site Vanselow exchange model is necessary to describe K contents of natural clinoptilolites.
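
    As an illustration of the Vanselow convention the report builds on, here is a minimal sketch of a one-site model for univalent exchange. The selectivity value and function names are hypothetical, for illustration only, and do not reproduce the EQ3NR/EQ6 implementation:

```python
def vanselow_fraction(kv, a_in, a_out):
    """Equilibrium mole fraction of the sorbing cation on a one-site
    exchanger for univalent exchange  In+ + Out-X = In-X + Out+,
    with Vanselow selectivity  kv = (X_in * a_out) / (X_out * a_in),
    where X are mole fractions on the exchanger and a are solution
    activities. Solving X_in/(1 - X_in) = kv * a_in / a_out gives:"""
    r = kv * a_in / a_out
    return r / (1.0 + r)

# Illustrative isotherm: a trace cation against a 0.1 molal background,
# with a hypothetical selectivity of 10.
isotherm = [(a, vanselow_fraction(10.0, a, 0.1)) for a in (1e-5, 1e-4, 1e-3)]
```

    In the trace region this gives the linear (Henry's-law) adsorption behavior that a one-site Vanselow model can fit for Cs or Sr on clinoptilolite, while a single site cannot simultaneously capture the K contents of natural samples, hence the two-site model in the abstract.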

  19. Modeling ion exchange in clinoptilolite using the EQ3/6 geochemical modeling code

    Energy Technology Data Exchange (ETDEWEB)

    Viani, B.E.; Bruton, C.J. [Lawrence Livermore National Lab., CA (United States)


    Potential disposal of high-level nuclear waste at Yucca Mtn., Nevada requires the means to simulate ion-exchange behavior of clays and zeolites. Vanselow and Gapon convention cation-exchange models have been added to geochemical modeling codes EQ3NR/EQ6, allowing exchange to be modeled for up to three exchangers or a single exchanger with three independent sites. Solid-solution models that are numerically equivalent to the ion-exchange models were derived and also implemented in the code. The Gapon model is inconsistent with experimental adsorption isotherms of trace components in clinoptilolite. A one-site Vanselow model can describe adsorption of Cs and Sr on clinoptilolite, but a two-site Vanselow exchange model is necessary to describe K contents of natural clinoptilolites. 15 refs., 5 figs., 1 tab.


    Energy Technology Data Exchange (ETDEWEB)



    Recent extensions and improvements of the EMPIRE code system are outlined. They add new capabilities to the code, such as prompt fission neutron spectra calculations using Hauser-Feshbach plus pre-equilibrium pre-fission spectra, cross section covariance matrix calculations by Monte Carlo method, fitting of optical model parameters, extended set of optical model potentials including new dispersive coupled channel potentials, parity-dependent level densities and transmission through numerically defined fission barriers. These features, along with improved and validated ENDF formatting, exclusive/inclusive spectra, and recoils make the current EMPIRE release a complete and well validated tool for evaluation of nuclear data at incident energies above the resonance region. The current EMPIRE release has been used in evaluations of neutron induced reaction files for {sup 232}Th and {sup 231,233}Pa nuclei in the fast neutron region at IAEA. Triple-humped fission barriers and exclusive pre-fission neutron spectra were considered for the fission data evaluation. Total, fission, capture and neutron emission cross section, average resonance parameters and angular distributions of neutron scattering are in excellent agreement with the available experimental data.

  1. IT-enabled dynamic capability on performance: An empirical study of BSC model

    Directory of Open Access Journals (Sweden)

    Adilson Carlos Yoshikuni


    Full Text Available Few studies have investigated the influence of "information capital," through IT-enabled dynamic capability, on corporate performance, particularly in economic turbulence. Our study investigates the causal relationship between performance perspectives of the balanced scorecard using partial least squares path modeling. Using data on 845 Brazilian companies, we conduct a quantitative empirical study of firms during an economic crisis and observe the following interesting results. Operational and analytical IT-enabled dynamic capability had positive effects on business process improvement and corporate performance. Results pertaining to mediation (endogenous variables) and moderation (control variables) clarify IT's role in and benefits for corporate performance.

  2. Improved virtual channel noise model for transform domain Wyner-Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren


    Distributed video coding (DVC) has been proposed as a new video coding paradigm to deal with lossy source coding using side information to exploit the statistics at the decoder to reduce computational demands at the encoder. A virtual channel noise model is utilized at the decoder to estimate...

  3. A Mathematical Model Accounting for the Organisation in Multiplets of the Genetic Code


    Sciarrino, A.


    Requiring stability of the genetic code against translation errors, modeled by suitable mathematical operators in the crystal basis model of the genetic code, the main features of the organisation in multiplets of the mitochondrial and of the standard genetic code are explained.

  4. A realistic model under which the genetic code is optimal. (United States)

    Buhrman, Harry; van der Gulik, Peter T S; Klau, Gunnar W; Schaffner, Christian; Speijer, Dave; Stougie, Leen


    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By comparing this value with a distribution of values belonging to codes generated by random permutations of amino acid assignments, the level of error robustness of a genetic code can be quantified. We present a calculation in which the standard genetic code is shown to be optimal. We obtain this result by (1) using recently updated values of polar requirement as input; (2) fixing seven assignments (Ile, Trp, His, Phe, Tyr, Arg, and Leu) based on aptamer considerations; and (3) using known biosynthetic relations of the 20 amino acids. This last point is reflected in an approach of subdivision (restricting the random reallocation of assignments to amino acid subgroups, the set of 20 being divided into four such subgroups). The three approaches to explaining the robustness of the code (specific selection for robustness, amino acid-RNA interactions leading to assignments, or a slow growth process of assignment patterns) are reexamined in light of our findings. We offer a comprehensive hypothesis, stressing the importance of biosynthetic relations, with the code evolving from an early stage with just glycine and alanine, via intermediate stages, towards 64 codons carrying today's meaning.
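
    The permutation test described above can be sketched as follows. This is a simplified illustration only: it permutes whole amino-acid codon blocks with no fixed assignments or biosynthetic subdivision, and it substitutes the Kyte-Doolittle hydropathy scale for the polar-requirement values the paper uses:

```python
import itertools
import random

# Standard genetic code (translation table 1), codons ordered T,C,A,G
# in each position; '*' marks stop codons.
AA = ("FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
      "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG")
CODONS = ["".join(c) for c in itertools.product("TCAG", repeat=3)]
CODE = dict(zip(CODONS, AA))

# Kyte-Doolittle hydropathy, standing in for polar requirement.
KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9,
      "A": 1.8, "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9,
      "Y": -1.3, "P": -1.6, "H": -3.2, "E": -3.5, "Q": -3.5,
      "D": -3.5, "N": -3.5, "K": -3.9, "R": -4.5}

def ms_error(code, values):
    """Mean squared change in amino acid value over all single-nucleotide
    substitutions connecting two sense codons (lower = more robust)."""
    total, n = 0.0, 0
    for c in CODONS:
        if code[c] == "*":
            continue
        for pos in range(3):
            for b in "TCAG":
                if b == c[pos]:
                    continue
                d = c[:pos] + b + c[pos + 1:]
                if code[d] == "*":
                    continue
                total += (values[code[c]] - values[code[d]]) ** 2
                n += 1
    return total / n

def shuffled_code(rng):
    """Random code: permute which amino acid owns which codon block."""
    aas = sorted(set(AA) - {"*"})
    perm = dict(zip(aas, rng.sample(aas, len(aas))))
    return {c: ("*" if a == "*" else perm[a]) for c, a in CODE.items()}

rng = random.Random(1)
real = ms_error(CODE, KD)
samples = [ms_error(shuffled_code(rng), KD) for _ in range(200)]
frac_better = sum(s < real for s in samples) / len(samples)
```

    `frac_better` estimates the fraction of random codes more robust than the standard one; under measures like polar requirement the paper's refined procedure drives this toward zero.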

  5. Semantic-preload video model based on VOP coding (United States)

    Yang, Jianping; Zhang, Jie; Chen, Xiangjun


    In recent years, to reduce the semantic gap that exists between high-level semantics and low-level features when humans interpret images or video, most work has taken the route of video annotation downstream of the signal, i.e. attaching labels to content already stored in a video database. Few have pursued the alternative: using limited interaction and comprehensive segmentation (including optical technologies) at the front end of video capture (the camera), together with video semantic analysis techniques, domain-specific concept sets (i.e. ontologies), shooting scripts, and scene-task descriptions, to attach semantic descriptions at different levels that enrich the attributes of video objects and image regions. This yields a new video model based on Video Object Plane (VOP) coding. The model has potentially intelligent features, carries a large amount of metadata, and embeds intermediate-level semantic concepts in every object. This paper focuses on the latter approach and presents a framework for the new model, provisionally named the Semantic-Preload Video Model (SPVM, also written VMoSP). The model addresses how to label video objects and image regions in real time, usually with intermediate-level semantic labels, placing this work upstream of the signal (i.e. in the video capture and production stage). As part of this work, the paper also analyses the hierarchical structure of video, dividing it into nine semantic levels covering the video production process, and notes that the semantic-level tagging discussed here (i.e. semantic preloading) involves only the four middle levels.

  6. Development, Verification and Use of Gust Modeling in the NASA Computational Fluid Dynamics Code FUN3D (United States)

    Bartels, Robert E.


    This paper presents the implementation of a gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp-edged gust and comparing it with the theoretical result; the present simulations will also be compared with other CFD gust simulations. The paper additionally serves as a user's manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced-order gust model using a gust with a Gaussian profile in the FUN3D code is presented. ARMA-simulated results for a sequence of one-minus-cosine gusts are shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time-varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced-order model and compared with a direct simulation of the system in the FUN3D code. The two results are found to agree very well.
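As a rough illustration of the gust profiles and reduced-order modeling discussed above, the sketch below generates a one-minus-cosine gust and feeds it through a generic ARMA difference equation. The ARMA coefficients are arbitrary illustrative values, not quantities fitted to FUN3D data.

```python
import math

def one_minus_cosine_gust(t, amplitude, length, speed):
    """Vertical gust velocity for a one-minus-cosine gust of given spatial
    length, traversed at the given flight speed; zero outside the gust."""
    duration = length / speed
    if not 0.0 <= t <= duration:
        return 0.0
    return 0.5 * amplitude * (1.0 - math.cos(2.0 * math.pi * t / duration))

def arma_response(u, a, b):
    """ARMA difference equation y[n] = sum_i a[i]*y[n-1-i] + sum_j b[j]*u[n-j]."""
    y = []
    for n in range(len(u)):
        yn = sum(ai * y[n - 1 - i] for i, ai in enumerate(a) if n - 1 - i >= 0)
        yn += sum(bj * u[n - j] for j, bj in enumerate(b) if n - j >= 0)
        y.append(yn)
    return y

dt = 0.01
gust = [one_minus_cosine_gust(n * dt, amplitude=10.0, length=20.0, speed=50.0)
        for n in range(100)]
# Illustrative stable ARMA(2,2) coefficients, not fitted values.
lift = arma_response(gust, a=[0.5, -0.2], b=[0.1, 0.05])
```

In practice the `a` and `b` coefficients would be identified from high-fidelity CFD responses, after which the difference equation replaces the expensive simulation for new gust inputs.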

  7. How do dynamic capabilities transform external technologies into firms’ renewed technological resources? – A mediation model

    DEFF Research Database (Denmark)

    Li-Ying, Jason; Wang, Yuandi; Ning, Lutao


    How might externally acquired resources become valuable, rare, hard-to-imitate, and non-substitutable resource bundles through the development of dynamic capabilities? Drawing on the resource-based view and the dynamic capability view, this study proposes and tests a mediation model in which firms’ internal technological diversification and R&D, as two distinctive microfoundations of dynamic technological capabilities, mediate the relationship between external technology breadth and firms’ technological innovation performance. Using a sample of listed Chinese licensee firms, we find that firms must broadly explore external technologies to ignite the dynamism in internal technological diversity and in-house R&D, which play their crucial roles differently in transforming and reconfiguring firms’ technological resources.

  8. Extending the Lunar Mapping and Modeling Portal - New Capabilities and New Worlds (United States)

    Day, B.; Law, E.; Arevalo, E.; Bui, B.; Chang, G.; Dodge, K.; Kim, R.; Malhotra, S.; Sadaqathullah, S.; Schmidt, G.; Bailey, B.


    NASA's Lunar Mapping and Modeling Portal (LMMP) provides a web-based portal and a suite of interactive visualization and analysis tools that enable mission planners, lunar scientists, and engineers to access mapped lunar data products from past and current lunar missions. During the past year, the capabilities and data served by LMMP have been significantly expanded. New interfaces provide improved ways to access and visualize data. At the request of NASA's Science Mission Directorate, LMMP's technology and capabilities are now being extended to additional planetary bodies. New portals for Vesta and Mars are the first of these new products to be released. This presentation will provide an overview of LMMP, Vesta Trek, and Mars Trek, demonstrate their uses and capabilities, highlight new features, and preview coming enhancements.

  10. Implementation of Lumped Plasticity Models and Developments in an Object Oriented Nonlinear Finite Element Code (United States)

    Segura, Christopher L.

    Numerical simulation tools capable of modeling nonlinear material and geometric behavior are important to structural engineers concerned with approximating the strength and deformation capacity of a structure. While structures are typically designed to behave linearly elastically when subjected to building code design loads, exceedance of the linear elastic range is often an important consideration, especially with regard to structural response during hazard-level events (i.e. earthquakes, hurricanes, floods), where collapse prevention is the primary goal. This thesis addresses developments made to Mercury, a nonlinear finite element program developed in MATLAB for numerical simulation and in C++ for real-time hybrid simulation. Developments include the addition of three new constitutive models to extend Mercury's lumped plasticity modeling capabilities, a constitutive driver tool for testing and implementing Mercury constitutive models, and Mercury pre- and post-processing tools. Mercury has been developed as a tool for transient analysis of distributed plasticity models, offering accurate nonlinear results on the material, element, and structural levels. When only structural-level response is desired (collapse prevention), obtaining material-level results leads to unnecessarily lengthy computational time. To address this issue in Mercury, lumped plasticity capabilities are developed by implementing two lumped plasticity flexural response constitutive models and a column shear failure constitutive model. The models are chosen for implementation to address two critical issues evident in structural testing: column shear failure and strength and stiffness degradation under reverse cyclic loading. These tools make it possible to model post-peak behavior, capture strength and stiffness degradation, and predict global collapse. During the implementation process, a need was identified to create a simple program, separate from Mercury, to simplify the process of

  11. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    CERN Document Server

    Blazewicz, Marek; Koppelman, David M; Brandt, Steven R; Ciznicki, Milosz; Kierzynka, Michal; Löffler, Frank; Tao, Jian


    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of va...

  12. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    Directory of Open Access Journals (Sweden)

    Marek Blazewicz


    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  13. On the development of LWR fuel analysis code (1). Analysis of the FEMAXI code and proposal of a new model

    Energy Technology Data Exchange (ETDEWEB)

    Lemehov, Sergei; Suzuki, Motoe [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment


    This report summarizes a review of the modeling features of the FEMAXI code and proposes a new theoretical equation model of clad creep on the basis of irradiation-induced microstructure change. It was pointed out that plutonium build-up in the fuel matrix and the non-uniform radial power profile at high burn-up significantly affect fuel behavior through interconnected effects with such phenomena as clad irradiation-induced creep, fission gas release, fuel thermal conductivity degradation, rim porous band formation, and associated fuel swelling. Therefore, these combined effects should be properly incorporated into the models of the FEMAXI code so that the code can carry out numerical analysis at the level of accuracy and elaboration of modern experimental data obtained in test reactors. Also, the proposed new mechanistic clad creep model has a general formalism which allows it to be flexibly applied to clad behavior analysis under normal operating conditions and power transients, as well as to Zr-based clad materials, by the use of established out-of-pile mechanical properties. The model has been tested against experimental data, while further verification is needed with specific emphasis on power ramps and transients. (author)

  14. On the Generalization Capabilities of the Ten-Parameter Jiles-Atherton Model

    Directory of Open Access Journals (Sweden)

    Gabriele Maria Lozito


    This work proposes an analysis of the generalization capabilities of a modified version of the classic Jiles-Atherton model for magnetic hysteresis. The modified model uses dynamic parameterization, as opposed to the classic model where the parameters are constant. Two different dynamic parameterizations are considered: a dependence on the excitation and a dependence on the response. The identification process is performed using a novel nonlinear optimization technique called Continuous Flock-of-Starling Optimization Cube (CFSO3), an algorithm belonging to the class of swarm intelligence. The algorithm exploits parallel architecture and uses a supervised strategy to alternate between exploration and exploitation capabilities. Comparisons between the obtained results are presented at the end of the paper.
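For reference, the classic constant-parameter Jiles-Atherton model that the paper modifies can be integrated in a few lines. The parameters below are illustrative iron-like values, and the explicit-Euler step is capped to sidestep the model's well-known numerical instability; the paper's dynamic parameterizations and CFSO3 identification are not reproduced here.

```python
import math

# Illustrative iron-like Jiles-Atherton parameters (classic constant-
# parameter model): saturation Ms, Langevin scale a, coupling alpha,
# pinning k, reversibility c.
MS, A_JA, ALPHA, K, C = 1.6e6, 1100.0, 1.6e-3, 400.0, 0.2

def anhysteretic(he):
    """Langevin anhysteretic magnetization, series-expanded near zero."""
    if abs(he) < 1e-3 * A_JA:
        return MS * he / (3.0 * A_JA)
    x = he / A_JA
    return MS * (1.0 / math.tanh(x) - 1.0 / x)

def sweep(h_values):
    """Explicit-Euler integration of dM/dH along a sequence of field values."""
    m = m_irr = 0.0
    trace = []
    for h_prev, h in zip(h_values, h_values[1:]):
        dh = h - h_prev
        if dh != 0.0:
            delta = 1.0 if dh > 0.0 else -1.0
            man = anhysteretic(h + ALPHA * m)
            diff = man - m_irr
            if diff * delta <= 0.0:
                dm_irr = 0.0              # irreversible change is one-way
            elif K - ALPHA * abs(diff) <= 0.0:
                dm_irr = diff             # unstable regime: jump to anhysteretic
            else:
                dm_irr = diff / (delta * K - ALPHA * diff) * dh
                if abs(dm_irr) > abs(diff):
                    dm_irr = diff         # cap Euler overshoot at the curve
            m_irr += dm_irr
            m = C * man + (1.0 - C) * m_irr
        trace.append((h, m))
    return trace

steps, cycles, h_amp = 2000, 3, 5000.0
h_seq = [h_amp * math.sin(2.0 * math.pi * n / steps)
         for n in range(steps * cycles + 1)]
trace = sweep(h_seq)

# Hysteresis loss per cycle (> 0 for a lossy loop): integral of H dM
# over the last full cycle.
last = trace[-(steps + 1):]
loss = sum(0.5 * (h1 + h2) * (m2 - m1)
           for (h1, m1), (h2, m2) in zip(last, last[1:]))
m_max = max(abs(m) for _, m in trace)
```

A dynamic parameterization in the spirit of the paper would replace the constants above with functions of the excitation or the response before each step.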

  15. Konsep Tingkat Kematangan penerapan Internet Protokol versi 6 (Capability Maturity Model for IPv6 Implementation)

    Directory of Open Access Journals (Sweden)

    Riza Azmi


    The Internet Protocol (IP) is the worldwide internet numbering standard, and its address space is finite. Globally, IP allocation is managed by the Internet Assigned Numbers Authority (IANA) and delegated through the regional authority for each continent. IP exists in two versions, IPv4 and IPv6; the IPv4 pool was declared exhausted at the IANA level in April 2011, so IP usage is now being directed toward IPv6. To assess how mature an organisation's IPv6 implementation is, this study develops a maturity model for IPv6 adoption. The model's basic concept is taken from the Capability Maturity Model Integration (CMMI), with several additions: the IPv6 migration roadmap in Indonesia, the Requests for Comments (RFCs) related to IPv6, and several best practices for IPv6 implementation. On this basis, the study produces a Capability Maturity for IPv6 Implementation concept.

  16. A Parallel Ocean Model With Adaptive Mesh Refinement Capability For Global Ocean Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Herrnstein, Aaron R. [Univ. of California, Davis, CA (United States)


    An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model with some components incorporated from other well known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No
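A toy example of why leapfrog-type integration suits long integrations like those above: on a harmonic oscillator, a kick-drift-kick leapfrog keeps the energy error bounded rather than secularly growing. This is a generic numerical illustration, not part of the ocean model itself.

```python
# Kick-drift-kick leapfrog for x'' = -omega^2 x. Being symplectic, the
# scheme's energy error oscillates within a bounded band, which is one
# reason leapfrog-type schemes are popular for decade-scale geophysical
# integrations.
omega, dt, steps = 1.0, 0.1, 10000
x, v = 1.0, 0.0
e0 = 0.5 * v * v + 0.5 * (omega * x) ** 2   # initial energy

max_drift = 0.0
for _ in range(steps):
    v -= 0.5 * dt * omega ** 2 * x          # half kick
    x += dt * v                             # drift
    v -= 0.5 * dt * omega ** 2 * x          # half kick
    e = 0.5 * v * v + 0.5 * (omega * x) ** 2
    max_drift = max(max_drift, abs(e - e0) / e0)
```

Over 10,000 steps the relative energy error stays at the level of (omega*dt)^2, far below what a non-symplectic explicit scheme would accumulate.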

  17. Djehuty: A Code for Modeling Whole Stars in Three Dimensions

    CERN Document Server

    Turcotte, S; Castor, J I; Cavallo, R M; Cohl, H S; Cook, K; Dearborn, D S P; Dossa, D D; Eastman, R; Eggleton, P P; Eltgroth, P; Keller, S; Murray, S; Taylor, A


    The DJEHUTY project is an intensive effort at the Lawrence Livermore National Laboratory (LLNL) to produce a general purpose 3-D stellar structure and evolution code to study dynamic processes in whole stars.

  18. Algorithms and Regolith Erosion Models for the ALERT Code Project (United States)

    National Aeronautics and Space Administration — ORBITEC and Duke University have teamed on this STTR to develop the ALERT (Advanced Lunar Exhaust-Regolith Transport) code which will include new developments in...

  19. Error threshold in topological quantum-computing models with color codes (United States)

    Katzgraber, Helmut; Bombin, Hector; Martin-Delgado, Miguel A.


    Dealing with errors in quantum computing systems is possibly one of the hardest tasks when attempting to realize physical devices. By encoding the qubits in topological properties of a system, an inherent protection of the quantum states can be achieved. Traditional topologically-protected approaches are based on the braiding of quasiparticles. Recently, a braid-less implementation using brane-net condensates in 3-colexes has been proposed. In 2D it allows the transversal implementation of the whole Clifford group of quantum gates. In this work, we compute the error threshold for this topologically-protected quantum computing system in 2D, by means of mapping its error correction process onto a random 3-body Ising model on a triangular lattice. Errors manifest themselves as random perturbation of the plaquette interaction terms thus introducing frustration. Our results from Monte Carlo simulations suggest that these topological color codes are similarly robust to perturbations as the toric codes. Furthermore, they provide more computational capabilities and the possibility of having more qubits encoded in the quantum memory.
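The mapping strategy above can be illustrated with a toy disordered-spin simulation. The sketch below runs Metropolis Monte Carlo on a two-body random-bond Ising model on a square lattice; the paper's actual mapping is to a three-body Ising model on a triangular lattice, so the lattice, couplings, size, and temperature here are purely illustrative of the method.

```python
import math
import random

L = 16              # linear lattice size (toy 2D square lattice)
P_FLIP = 0.05       # probability a coupling is flipped; plays the role of the error rate
rng = random.Random(1)

# Random-bond couplings: ferromagnetic (+1), flipped to -1 with probability P_FLIP.
Jx = [[-1 if rng.random() < P_FLIP else 1 for _ in range(L)] for _ in range(L)]
Jy = [[-1 if rng.random() < P_FLIP else 1 for _ in range(L)] for _ in range(L)]
spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def local_field(i, j):
    """Sum of J*s over the four neighbours of site (i, j), periodic boundaries."""
    return (Jx[i][j] * spins[i][(j + 1) % L]
            + Jx[i][j - 1] * spins[i][j - 1]
            + Jy[i][j] * spins[(i + 1) % L][j]
            + Jy[i - 1][j] * spins[i - 1][j])

def energy():
    """Total energy E = -sum_bonds J s s (each bond counted once)."""
    e = 0
    for i in range(L):
        for j in range(L):
            e -= spins[i][j] * (Jx[i][j] * spins[i][(j + 1) % L]
                                + Jy[i][j] * spins[(i + 1) % L][j])
    return e

def metropolis_sweep(T):
    """One Metropolis sweep at temperature T."""
    for i in range(L):
        for j in range(L):
            dE = 2 * spins[i][j] * local_field(i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] = -spins[i][j]

e_start = energy()
for _ in range(50):
    metropolis_sweep(T=0.5)
e_end = energy()
```

In a threshold study one would scan `P_FLIP` and temperature and locate the phase boundary of the disordered model, which maps back to the error threshold of the code.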

  20. New Modelling Capabilities in Commercial Software for High-Gain Antennas

    DEFF Research Database (Denmark)

    Jørgensen, Erik; Lumholt, Michael; Meincke, Peter


    This paper presents an overview of selected new modelling algorithms and capabilities in commercial software tools developed by TICRA. A major new area is design and analysis of printed reflectarrays, where a fully integrated design environment is under development, allowing fast and accurate characterization of the reflectarray element, an initial phase-only synthesis, followed by a full optimization procedure taking into account the near field from the feed and the finite extent of the array. Another interesting new modelling capability is made available through the DIATOOL software, which is a new type of EM software tool aimed at extending the ways engineers can use antenna measurements in the antenna design process. The tool allows reconstruction of currents and near fields on a 3D surface conformal to the antenna, by using the measured antenna field as input. The currents on the antenna...

  1. Mass transport and direction dependent battery modeling for accurate on-line power capability prediction

    Energy Technology Data Exchange (ETDEWEB)

    Wiegman, H.L.N. [General Electric Corporate Research and Development, Schenectady, NY (United States)


    Some recent advances in battery modeling are discussed with reference to on-line impedance estimates and power performance predictions for aqueous-solution, porous-electrode cell structures. The objective was to determine which methods accurately estimate a battery's internal state and power capability while operating in charge-sustaining mode in a hybrid electric vehicle (HEV) over a wide range of driving conditions. Enhancements to the Randles-Ershler equivalent electrical model of common cells with lead-acid, nickel-cadmium, and nickel-metal hydride chemistries are described. The study also investigated which impedances are sensitive to boundary-layer charge concentrations and mass-transport limitations. Non-linear impedances were shown to significantly affect the battery's ability to process power. The main advantage of estimating a battery's impedance state and power capability on-line is that the battery can be optimally sized for any application. refs., tabs., figs., append.
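A minimal sketch of on-line power-capability prediction with an equivalent-circuit model, assuming a first-order Randles-type cell (series resistance plus one RC polarization branch) with invented parameter values; the mass-transport and direction-dependent effects the paper adds are not modeled.

```python
import math

# Hypothetical first-order Randles-type cell: open-circuit voltage OCV
# behind a series resistance R0 and one RC polarization branch (R1, C1).
OCV, R0, R1, C1 = 3.7, 0.010, 0.015, 2000.0   # V, ohm, ohm, farad

def terminal_voltage(i_dis, t, v1_0=0.0):
    """Terminal voltage after drawing constant current i_dis for t seconds."""
    tau = R1 * C1
    v1 = i_dis * R1 + (v1_0 - i_dis * R1) * math.exp(-t / tau)  # RC branch
    return OCV - i_dis * R0 - v1

def power_capability(v_min, horizon, i_max=500.0, tol=1e-3):
    """Largest constant discharge current (found by bisection) that keeps the
    terminal voltage above v_min over the horizon, and the resulting power."""
    lo, hi = 0.0, i_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if terminal_voltage(mid, horizon) >= v_min:
            lo = mid
        else:
            hi = mid
    return lo, lo * terminal_voltage(lo, horizon)

i_cap, p_cap = power_capability(v_min=3.0, horizon=10.0)
```

On-line schemes of the kind discussed above continuously re-estimate `R0`, `R1`, and `C1` from current/voltage data, so the predicted capability tracks the battery's actual internal state.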

  2. MIG version 0.0 model interface guidelines: Rules to accelerate installation of numerical models into any compliant parent code

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, R.M.; Wong, M.K.


    A set of model interface guidelines, called MIG, is presented as a means by which any compliant numerical material model can be rapidly installed into any parent code without having to modify the model subroutines. Here, "model" usually means a material model such as one that computes stress as a function of strain, though the term may be extended to any numerical operation. "Parent code" means a hydrocode, finite element code, etc. which uses the model and enforces, say, the fundamental laws of motion and thermodynamics. MIG requires the model developer (who creates the model package) to specify model needs in a standardized but flexible way. MIG includes a dictionary of technical terms that allows developers and parent code architects to share a common vocabulary when specifying field variables. For portability, database management is the responsibility of the parent code. Input/output occurs via structured calling arguments. As much model information as possible (such as the lists of required inputs, as well as lists of precharacterized material data and special needs) is supplied by the model developer in an ASCII text file. Every MIG-compliant model also has three required subroutines to check data, to request extra field variables, and to perform model physics. To date, the MIG scheme has proven flexible in beta installations of a simple yield model, plus a more complicated viscodamage yield model, three electromechanical models, and a complicated anisotropic microcrack constitutive model. The MIG yield model has been successfully installed using identical subroutines in three vectorized parent codes and one parallel C++ code, all predicting comparable results. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, thereby reducing the cost of installing and sharing models in diverse new codes.
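The three required subroutines described above suggest a simple plug-in contract. The sketch below is a hypothetical Python rendering of such a contract (MIG itself targets Fortran-style subroutines, and all names here are invented): the parent code owns the database, queries the model for its needs, and drives its physics routine through structured arguments.

```python
class LinearElasticModel:
    """A minimal 'material model' plug-in: 1-D linear elasticity.
    Hypothetical MIG-style contract; names are invented for illustration."""

    required_inputs = ["youngs_modulus"]      # declared, not hard-coded

    def check_data(self, params):
        """Required routine 1: validate and precondition user input."""
        e = params["youngs_modulus"]
        if e <= 0.0:
            raise ValueError("youngs_modulus must be positive")
        return {"youngs_modulus": float(e)}

    def request_extra_variables(self):
        """Required routine 2: extra field variables the parent must allocate."""
        return ["stress"]

    def update_state(self, params, state, strain_increment):
        """Required routine 3: model physics -- advance stress by d_sigma = E*d_eps."""
        state["stress"] += params["youngs_modulus"] * strain_increment
        return state

def parent_code_step(model, raw_params, state, strain_increment):
    """How a parent code might drive any compliant model: it manages the
    database (the 'state' dict) and calls the three required routines."""
    params = model.check_data(raw_params)
    for var in model.request_extra_variables():
        state.setdefault(var, 0.0)
    return model.update_state(params, state, strain_increment)

state = parent_code_step(LinearElasticModel(), {"youngs_modulus": 200e9}, {}, 1e-4)
```

Because the parent only touches the model through this fixed contract, swapping in a different constitutive model requires no change to the parent code, which is the portability property MIG is after.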


    Institute of Scientific and Technical Information of China (English)

    Zhang Daoqiang; Chen Songcan


    A Hyperbolic Tangent multi-valued Bi-directional Associative Memory (HTBAM) model is proposed in this letter. Two general energy functions are defined to prove the stability of one class of multi-valued Bi-directional Associative Memories (BAMs), with HTBAM being the special case. Simulation results show that HTBAM has a competitive storage capacity and much more error-correcting capability than other multi-valued BAMs.
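A two-valued (bipolar) simplification of such a BAM with tanh activation can be sketched as follows; the stored patterns are invented, and the multi-valued encoding and energy-function analysis of the letter are not reproduced.

```python
import math

# Bipolar pattern pairs to store (a two-valued simplification of the
# multi-valued model in the abstract).
X = [[1, 1, 1, 1, -1, -1, -1, -1], [1, -1, 1, -1, 1, -1, 1, -1]]
Y = [[1, 1, 1, -1, -1, -1], [1, -1, 1, -1, 1, -1]]

# Hebbian outer-product weights: W[i][j] = sum_k X[k][i] * Y[k][j].
n, m = len(X[0]), len(Y[0])
W = [[sum(x[i] * y[j] for x, y in zip(X, Y)) for j in range(m)] for i in range(n)]

def recall(x, beta=5.0, iters=5):
    """Bidirectional recall x -> y -> x ... with tanh activation, then threshold."""
    for _ in range(iters):
        y = [math.tanh(beta * sum(W[i][j] * x[i] for i in range(n)))
             for j in range(m)]
        x = [math.tanh(beta * sum(W[i][j] * y[j] for j in range(m)))
             for i in range(n)]
    sign = lambda v: 1 if v >= 0 else -1
    return [sign(v) for v in x], [sign(v) for v in y]

noisy = list(X[0])
noisy[0] = -noisy[0]                  # flip one bit of a stored pattern
x_rec, y_rec = recall(noisy)
```

The error-correcting behaviour the letter measures shows up even here: one flipped input bit is absorbed and the stored pair is recovered exactly.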

  4. Adaptive Partially Hidden Markov Models with Application to Bilevel Image Coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Rasmussen, Tage


    Adaptive Partially Hidden Markov Models (APHMM) are introduced, extending the PHMM models. The new models are applied to lossless coding of bi-level images, achieving results which are better than the JBIG standard.
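To give the flavor of adaptive model-based bi-level coding, the sketch below computes the ideal codelength of a simple adaptive order-1 context model (a Krichevsky-Trofimov estimator) on a biased binary row; this is a generic illustration of adaptive context modeling, not the APHMM construction itself.

```python
import math

def adaptive_codelength(bits, context_order=1):
    """Ideal codelength (in bits) of an adaptive order-k context model using
    the Krichevsky-Trofimov estimate p(1) = (c1 + 1/2) / (c0 + c1 + 1)."""
    counts = {}                           # context -> [count of 0s, count of 1s]
    total = 0.0
    history = (0,) * context_order
    for b in bits:
        c = counts.setdefault(history, [0, 0])
        p1 = (c[1] + 0.5) / (c[0] + c[1] + 1.0)
        p = p1 if b == 1 else 1.0 - p1
        total += -math.log2(p)            # bits an ideal arithmetic coder spends
        c[b] += 1
        history = history[1:] + (b,)
    return total

# A strongly biased bi-level "image row": mostly 0s with periodic 1s.
row = ([0] * 9 + [1]) * 50                # 500 pixels, 10% ones
bits_needed = adaptive_codelength(row)
rate = bits_needed / len(row)             # bits per pixel, well below 1
```

An arithmetic coder driven by such an adaptive model approaches this codelength; APHMM-style models improve on plain context models by also conditioning on hidden state.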

  5. Biocomputational prediction of non-coding RNAs in model cyanobacteria

    Directory of Open Access Journals (Sweden)

    Ude Susanne


    Background: In bacteria, non-coding RNAs (ncRNA) are crucial regulators of gene expression, controlling various stress responses, virulence, and motility. Previous work revealed a relatively high number of ncRNAs in some marine cyanobacteria. However, for efficient genetic and biochemical analysis it would be desirable to identify a set of ncRNA candidate genes in model cyanobacteria that are easy to manipulate and for which extended mutant, transcriptomic and proteomic data sets are available. Results: Here we have used comparative genome analysis for the biocomputational prediction of ncRNA genes and other sequence/structure-conserved elements in intergenic regions of the three unicellular model cyanobacteria Synechocystis PCC6803, Synechococcus elongatus PCC6301 and Thermosynechococcus elongatus BP1 plus the toxic Microcystis aeruginosa NIES843. The unfiltered numbers of predicted elements in these strains are 383, 168, 168, and 809, respectively, combined into 443 sequence clusters, whereas the numbers of individual elements with high support are 94, 56, 64, and 406, respectively. After also removing transposon-associated repeats, 78, 53, 42 and 168 sequences, respectively, are left, belonging to 109 different clusters in the data set. Experimental analysis of selected ncRNA candidates in Synechocystis PCC6803 validated new ncRNAs originating from the fabF-hoxH and apcC-prmA intergenic spacers and three highly expressed ncRNAs belonging to the Yfr2 family of ncRNAs. Yfr2a promoter-luxAB fusions confirmed a very strong activity of this promoter and indicated a stimulation of expression if the cultures were exposed to elevated light intensities. Conclusion: Comparison to entries in Rfam and experimental testing of selected ncRNA candidates in Synechocystis PCC6803 indicate a high reliability of the current prediction, despite some contamination by the high number of repetitive sequences in some of these species. In particular, we

  6. A model of turbocharger radial turbines appropriate to be used in zero- and one-dimensional gas dynamics codes for internal combustion engines modelling

    Energy Technology Data Exchange (ETDEWEB)

    Serrano, J.R.; Arnau, F.J.; Dolz, V.; Tiseira, A. [CMT-Motores Termicos, Universidad Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia (Spain); Cervello, C. [Conselleria de Cultura, Educacion y Deporte, Generalitat Valenciana (Spain)


    The paper presents a model of fixed- and variable-geometry turbines. The aim of this model is to provide an efficient boundary condition for modelling turbocharged internal combustion engines with zero- and one-dimensional gas dynamic codes. The model is based, from its very conception, on the measured characteristics of the turbine. Nevertheless, it is capable of extrapolating to operating conditions that differ from those included in the turbine maps, since engines usually work within these zones. The presented model has been implemented in a one-dimensional gas dynamic code and has been used to calculate unsteady operating conditions for several turbines. The results obtained have been successfully compared against pressure-time histories measured upstream and downstream of the turbine during on-engine operation. (author)

  7. Demonstration of the Recent Additions in Modeling Capabilities for the WEC-Sim Wave Energy Converter Design Tool: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Tom, N.; Lawson, M.; Yu, Y. H.


    WEC-Sim is a mid-fidelity numerical tool for modeling wave energy conversion (WEC) devices. The code uses the MATLAB SimMechanics package to solve the multi-body dynamics and models the wave interactions using hydrodynamic coefficients derived from frequency-domain boundary element methods. In this paper, the new modeling features introduced in the latest release of WEC-Sim are presented. The first feature discussed is the conversion of the fluid memory kernel to a state-space approximation that provides significant gains in computational speed. The benefit of the state-space calculation becomes even greater once hydrodynamic body-to-body coefficients are introduced, as the number of interactions increases exponentially with the number of floating bodies. The final feature discussed is the capability to add Morison elements to provide additional hydrodynamic damping and inertia. This is generally used as a tuning feature, because performance is highly dependent on the chosen coefficients. In this paper, a review of the hydrodynamic theory for each of the features is provided and successful implementation is verified using test cases.
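A Morison element combines quadratic drag with an inertia term. A common textbook form for a submerged cylinder segment is sketched below with invented coefficients; WEC-Sim's own implementation and coefficient handling are device-dependent, so this only illustrates the underlying formula.

```python
import math

RHO = 1025.0  # seawater density, kg/m^3

def morison_force(u, du_dt, diameter, length, cd, cm):
    """Morison-equation force on a circular cylinder segment:
    quadratic drag plus inertia (added mass folded into cm)."""
    area = diameter * length                        # projected area for drag
    volume = math.pi * (diameter / 2.0) ** 2 * length
    drag = 0.5 * RHO * cd * area * u * abs(u)
    inertia = RHO * cm * volume * du_dt
    return drag + inertia

# Example: 1 m diameter, 2 m long element in a 1.5 m/s flow
# accelerating at 0.5 m/s^2, with illustrative cd and cm.
f = morison_force(u=1.5, du_dt=0.5, diameter=1.0, length=2.0, cd=1.0, cm=2.0)
```

Because the force scales with `cd` and `cm` directly, tuning these coefficients (as the abstract notes) shifts the predicted damping and inertia proportionally.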

  8. Application of the thermal-hydraulic codes in VVER-440 steam generators modelling

    Energy Technology Data Exchange (ETDEWEB)

    Matejovic, P.; Vranca, L.; Vaclav, E. [Nuclear Power Plant Research Inst. VUJE (Slovakia)


    The application of the CATHARE2 V1.3U and RELAP5/MOD3.0 codes to VVER-440 steam generator (SG) modelling during normal conditions and during a transient with secondary-side water-level lowering is described. A similar recirculation model was chosen for both codes. In the CATHARE calculation, no special measures were taken to artificially optimize the flow-rate distribution coefficients for the junction between the SG riser and the steam dome. Contrary to the RELAP code, the CATHARE code is able to predict the secondary swell level reasonably well in nominal conditions. Both codes are able to properly model natural phase separation at the SG water level. 6 refs.

  9. Linear-Time Non-Malleable Codes in the Bit-Wise Independent Tampering Model

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Döttling, Nico

    Non-malleable codes were introduced by Dziembowski et al. (ICS 2010) as coding schemes that protect a message against tampering attacks. Roughly speaking, a code is non-malleable if decoding an adversarially tampered encoding of a message m produces the original message m or a value m' (eventuall...... non-malleable codes of Agrawal et al. (TCC 2015) and of Cheraghchi and Guruswami (TCC 2014) and improves the previous result in the bit-wise tampering model: it builds the first non-malleable codes with linear-time complexity and optimal rate (i.e. rate 1 - o(1)).

  10. General Description of Fission Observables: GEF Model Code

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, K.-H. [CENBG, CNRS/IN2 P3, Chemin du Solarium, B.P. 120, F-33175 Gradignan (France); Jurado, B., E-mail: [CENBG, CNRS/IN2 P3, Chemin du Solarium, B.P. 120, F-33175 Gradignan (France); Amouroux, C. [CEA, DSM-Saclay (France); Schmitt, C., E-mail: [GANIL, Bd. Henri Becquerel, B.P. 55027, F-14076 Caen Cedex 05 (France)


    The GEF (“GEneral description of Fission observables”) model code is documented. It describes the observables for spontaneous fission, neutron-induced fission and, more generally, for fission of a compound nucleus from any other entrance channel, with given excitation energy and angular momentum. The GEF model is applicable for a wide range of isotopes from Z = 80 to Z = 112 and beyond, up to excitation energies of about 100 MeV. The results of the GEF model are compared with fission barriers, fission probabilities, fission-fragment mass- and nuclide distributions, isomeric ratios, total kinetic energies, and prompt-neutron and prompt-gamma yields and energy spectra from neutron-induced and spontaneous fission. Derived properties of delayed neutrons and decay heat are also considered. The GEF model is based on a general approach to nuclear fission that explains a great part of the complex appearance of fission observables on the basis of fundamental laws of physics and general properties of microscopic systems and mathematical objects. The topographic theorem is used to estimate the fission-barrier heights from theoretical macroscopic saddle-point and ground-state masses and experimental ground-state masses. Motivated by the theoretically predicted early localisation of nucleonic wave functions in a necked-in shape, the properties of the relevant fragment shells are extracted. These are used to determine the depths and the widths of the fission valleys corresponding to the different fission channels and to describe the fission-fragment distributions and deformations at scission by a statistical approach. A modified composite nuclear-level-density formula is proposed. It respects some features in the superfluid regime that are in accordance with new experimental findings and with theoretical expectations. These are a constant-temperature behaviour that is consistent with a considerably increased heat capacity and an increased pairing condensation energy that is

  11. The New MCNP6 Depletion Capability

    Energy Technology Data Exchange (ETDEWEB)

    Fensin, Michael Lorne [Los Alamos National Laboratory; James, Michael R. [Los Alamos National Laboratory; Hendricks, John S. [Los Alamos National Laboratory; Goorley, John T. [Los Alamos National Laboratory


The first MCNP-based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial-geometry-based, continuous-energy Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between the two codes rather than unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capabilities of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code, from the official RSICC release MCNPX 2.6.0, reported previously, to the current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion enhancements beyond MCNPX 2.6.0 reported here include: (1) a new performance-enhancing parallel architecture that implements both shared- and distributed-memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here provide a powerful capability and chart a path forward for future development to improve the usefulness of the technology.

  12. Modelling and Implementation of Network Coding for Video

    Directory of Open Access Journals (Sweden)

    Can Eyupoglu


Full Text Available In this paper, we investigate Network Coding for Video (NCV), which we apply to video streaming over wireless networks. NCV builds on network coding, and we use the NCV algorithm to increase throughput and video quality. When designing the NCV algorithm, we take into account the deadline as well as the decodability of each video packet at the receiver. In network coding, video packets from different flows are combined into a single packet at intermediate nodes and forwarded to other nodes over the wireless network. Many problems can occur during transmission on the wireless channel, and network coding plays an important role in dealing with them. We observe the throughput benefits of network coding, which stem from the broadcast nature of the wireless medium. The aim of this study is to implement the NCV algorithm in the C programming language, taking as input the video packets generated by the H.264 codec. In our experiments, we investigated improvements in video quality and throughput under different scenarios.
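The packet-combining idea can be sketched with the classic XOR form of inter-flow network coding (a minimal illustration, not the NCV algorithm itself; the packet contents are hypothetical):

```python
# Minimal sketch of inter-flow network coding via XOR: an intermediate
# node combines one packet from each of two flows into a single
# broadcast packet; each receiver recovers the packet it is missing by
# XOR-ing the coded packet with the packet it already overheard.

def xor_packets(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length packets byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

# Packets destined for receiver 1 and receiver 2, respectively.
p1 = b"frame-A"
p2 = b"frame-B"

coded = xor_packets(p1, p2)      # one broadcast instead of two unicasts

# Receiver 1 already overheard p2, so it decodes p1:
assert xor_packets(coded, p2) == p1
# Receiver 2 already overheard p1, so it decodes p2:
assert xor_packets(coded, p1) == p2
```

The throughput gain comes from the single broadcast replacing two unicast transmissions; NCV additionally weighs packet deadlines and decodability when choosing which packets to combine.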

  13. Spatial Preference Modelling for equitable infrastructure provision: an application of Sen's Capability Approach (United States)

    Wismadi, Arif; Zuidgeest, Mark; Brussel, Mark; van Maarseveen, Martin


    To determine whether the inclusion of spatial neighbourhood comparison factors in Preference Modelling allows spatial decision support systems (SDSSs) to better address spatial equity, we introduce Spatial Preference Modelling (SPM). To evaluate the effectiveness of this model in addressing equity, various standardisation functions in both Non-Spatial Preference Modelling and SPM are compared. The evaluation involves applying the model to a resource location-allocation problem for transport infrastructure in the Special Province of Yogyakarta in Indonesia. We apply Amartya Sen's Capability Approach to define opportunity to mobility as a non-income indicator. Using the extended Moran's I interpretation for spatial equity, we evaluate the distribution output regarding, first, `the spatial distribution patterns of priority targeting for allocation' (SPT) and, second, `the effect of new distribution patterns after location-allocation' (ELA). The Moran's I index of the initial map and its comparison with six patterns for SPT as well as ELA consistently indicates that the SPM is more effective for addressing spatial equity. We conclude that the inclusion of spatial neighbourhood comparison factors in Preference Modelling improves the capability of SDSS to address spatial equity. This study thus proposes a new formal method for SDSS with specific attention on resource location-allocation to address spatial equity.
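The Moran's I index used above as the spatial-equity measure can be computed directly from its definition; the four-cell transect values and adjacency weights below are hypothetical, not the study's data:

```python
# Illustrative computation of Moran's I, the spatial autocorrelation
# index used to evaluate the distribution outputs (SPT and ELA).
# I = (n / W) * sum_ij w_ij (x_i - mean)(x_j - mean) / sum_i (x_i - mean)^2

def morans_i(x, w):
    """Moran's I for values x and a spatial weight matrix w."""
    n = len(x)
    mean = sum(x) / n
    num = sum(w[i][j] * (x[i] - mean) * (x[j] - mean)
              for i in range(n) for j in range(n))
    den = sum((xi - mean) ** 2 for xi in x)
    w_sum = sum(sum(row) for row in w)
    return (n / w_sum) * (num / den)

# Hypothetical 4-cell transect: each cell is adjacent to its neighbours.
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
x = [1.0, 1.0, 5.0, 5.0]   # similar values cluster together
print(round(morans_i(x, w), 3))   # → 0.333 (positive autocorrelation)
```

A positive index indicates clustering of similar values; in the paper's extended interpretation, the sign and magnitude of I over the allocation maps is what signals (in)equity of the spatial distribution.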

  14. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    Directory of Open Access Journals (Sweden)

    Monteagudo Ángel


Full Text Available Abstract Background As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization, or level of adaptation, of the canonical genetic code has been measured by taking into account the harmful consequences of point mutations that lead to the replacement of one amino acid by another. There are two basic approaches to measuring the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternatives, and the engineering approach, which compares the canonical code with the best possible alternative. Results Here we used a genetic algorithm to search for better-adapted hypothetical codes and, at the same time, to gauge the difficulty of finding such alternative codes, allowing us to situate the canonical code clearly in the fitness landscape. This novel use of evolutionary computing provides a new perspective on the open debate between the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in both approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum under both models, when the more realistic codon-reassignment model was used, the evolutionary algorithm had more difficulty in overcoming the efficiency of the canonical genetic code. Conclusions Simulated evolution clearly reveals that the canonical genetic code is far from optimal. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the
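The error-minimisation fitness that both approaches rely on can be illustrated on a deliberately tiny toy code (hypothetical two-letter codons and amino-acid properties, not the real translation table):

```python
# Toy illustration of the adaptation measure described above: 16
# two-letter codons over {A,C,G,U} are mapped to 4 amino acids carrying
# a hypothetical numeric property, and fitness is the mean squared
# property change over all single-base point mutations (lower = more
# robust). A genetic algorithm would search over such assignments.
import itertools

BASES = "ACGU"
CODONS = ["".join(p) for p in itertools.product(BASES, repeat=2)]
PROPERTY = [0.0, 1.0, 2.0, 3.0]   # hypothetical amino-acid property values

def fitness(code):
    """Mean squared property change over all single-base point mutations."""
    total, count = 0.0, 0
    for codon in CODONS:
        for pos in range(2):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mutant = codon[:pos] + b + codon[pos + 1:]
                total += (PROPERTY[code[codon]] - PROPERTY[code[mutant]]) ** 2
                count += 1
    return total / count

# A "block" code: codons sharing a first base share an amino acid,
# so second-base mutations are silent.
block_code = {c: BASES.index(c[0]) for c in CODONS}
# A scrambled code without that structure.
scrambled_code = {c: (2 * BASES.index(c[0]) + BASES.index(c[1])) % 4
                  for c in CODONS}

print(fitness(block_code), fitness(scrambled_code))  # block code is more robust
```

The block structure mimics how the real canonical code groups chemically similar amino acids under similar codons, which is exactly what the fitness measure rewards.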

  15. Development of realistic thermal-hydraulic system analysis codes ; development of thermal hydraulic test requirements for multidimensional flow modeling

    Energy Technology Data Exchange (ETDEWEB)

    Suh, Kune Yull; Yoon, Sang Hyuk; Noh, Sang Woo; Lee, Il Suk [Seoul National University, Seoul (Korea)


This study is concerned with developing a multidimensional flow model required by the system analysis code MARS to simulate more mechanistically a variety of thermal-hydraulic phenomena in the nuclear steam supply system. The capability of the MARS code as a thermal-hydraulic analysis tool for optimized system design can be expanded by improving the current calculational methods and adding new models. In this study the relevant literature was surveyed on the multidimensional flow models that may potentially be applied to the multidimensional analysis code. Research items were critically reviewed and suggested to better predict the multidimensional thermal-hydraulic behavior and to identify test requirements. A small-scale preliminary test was performed in a downcomer formed by two vertical plates to analyze the multidimensional flow pattern in a simple geometry. The experimental results may be applied to the code for analysis of fluid impingement on the reactor downcomer wall. Also, data were collected to identify the controlling parameters for one-dimensional and multidimensional flow behavior. 22 refs., 40 figs., 7 tabs. (Author)

  16. Development and assessment of Multi-dimensional flow models in the thermal-hydraulic system analysis code MARS

    Energy Technology Data Exchange (ETDEWEB)

    Chung, B. D.; Bae, S. W.; Jeong, J. J.; Lee, S. M


A new multi-dimensional component has been developed to provide more flexible 3D capabilities in the system code MARS. This component can be applied in Cartesian and cylindrical coordinates. For the development of this model, the 3D convection and diffusion terms were implemented in the momentum and energy equations, and a simple Prandtl mixing-length model is applied for the turbulent viscosity. The developed multi-dimensional component was assessed against five conceptual problems with analytic solutions, and several separate-effect tests (SETs) were calculated and compared with experimental data. With this newly developed multi-dimensional flow module, the MARS code can realistically calculate the flow fields in pools, such as those occurring in the core, steam generators, and IRWST.
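The Prandtl mixing-length closure mentioned above has a one-line form, nu_t = l_m^2 |du/dy|; here is a minimal sketch with hypothetical near-wall values, not taken from the MARS implementation:

```python
# Sketch of the Prandtl mixing-length model for turbulent (eddy)
# viscosity: nu_t = l_m**2 * |du/dy|. Values below are illustrative.

def eddy_viscosity(l_m: float, du_dy: float) -> float:
    """Turbulent kinematic viscosity from Prandtl's mixing-length model."""
    return l_m ** 2 * abs(du_dy)

# Near-wall example: mixing length l_m = kappa * y (kappa ~ 0.41),
# with a finite-difference estimate of the velocity gradient.
kappa = 0.41
y = [0.001, 0.002]          # wall distances [m]
u = [0.10, 0.18]            # axial velocity [m/s]
du_dy = (u[1] - u[0]) / (y[1] - y[0])   # ~ 80 1/s
l_m = kappa * y[0]
print(eddy_viscosity(l_m, du_dy))       # l_m**2 * |du/dy| [m^2/s]
```

The appeal of this closure in a system code is exactly its simplicity: one algebraic expression per cell, with no additional transport equations to solve.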

  17. Evaluation of remote-sensing-based rainfall products through predictive capability in hydrological runoff modelling

    DEFF Research Database (Denmark)

    Stisen, Simon; Sandholt, Inge


    The emergence of regional and global satellite-based rainfall products with high spatial and temporal resolution has opened up new large-scale hydrological applications in data-sparse or ungauged catchments. Particularly, distributed hydrological models can benefit from the good spatial coverage...... and distributed nature of satellite-based rainfall estimates (SRFE). In this study, five SRFEs with temporal resolution of 24 h and spatial resolution between 8 and 27 km have been evaluated through their predictive capability in a distributed hydrological model of the Senegal River basin in West Africa. The main...

  18. A new formal model for privilege control with supporting POSIX capability mechanism

    Institute of Scientific and Technical Information of China (English)

    JI Qingguang; QING Sihan; HE Yeping


To enforce the least-privilege principle in an operating system, process privileges must be controlled effectively; this is difficult because a process's privileges change over time. In this paper, based on an analysis of how process privileges are generated and how they work, a three-layer hierarchy implementing the least-privilege principle is proposed, consisting of an administration layer, a functionality control layer, and a performance layer. It is clearly demonstrated that bounding a privilege's working scope is a critical part of controlling privilege, yet this is only implicit, and not supported, in the POSIX capability mechanism. Based on an analysis of existing privilege control mechanisms, the paper introduces both an improved capability-inheritance formula and a new, complete formal model for process control that integrates RBAC, DTE, and the POSIX capability mechanism. The new invariants in the model show that this privilege control mechanism differs from those of RBAC, DTE, and POSIX; it generalizes the subdomain control mechanism and makes it dynamic.
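The capability-inheritance formula the paper improves upon can be sketched as the exec-time transform of draft POSIX.1e, shown here with the Linux bounding-set extension; capability sets are modelled as plain bitmasks, and the example assumes Linux's bit numbering:

```python
# Sketch of the baseline capability-inheritance rule (draft POSIX.1e /
# Linux) applied when a process executes a file:
#   P'(permitted)   = (P(inheritable) & F(inheritable)) | (F(permitted) & bounding)
#   P'(effective)   = P'(permitted) if F(effective) else 0
#   P'(inheritable) = P(inheritable)

def exec_transform(p_inh: int, f_inh: int, f_perm: int,
                   f_eff: bool, bounding: int):
    """Return (permitted', effective', inheritable') after exec."""
    new_perm = (p_inh & f_inh) | (f_perm & bounding)
    new_eff = new_perm if f_eff else 0
    new_inh = p_inh               # the inheritable set is preserved
    return new_perm, new_eff, new_inh

CAP_NET_BIND = 1 << 10            # bit positions as in Linux numbering
CAP_SYS_ADMIN = 1 << 21

# File grants CAP_NET_BIND in its permitted set; process inherits nothing;
# the bounding set excludes CAP_SYS_ADMIN.
perm, eff, inh = exec_transform(p_inh=0, f_inh=0,
                                f_perm=CAP_NET_BIND, f_eff=True,
                                bounding=~CAP_SYS_ADMIN)
assert perm == CAP_NET_BIND and eff == CAP_NET_BIND and inh == 0
```

Note how the formula lets a privilege escape any scoping the parent intended: nothing here bounds *where* `f_perm` capabilities may be used, which is the gap the paper's improved inheritance formula and formal model address.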

  19. INTRA/Mod3.2. Manual and Code Description. Volume I - Physical Modelling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Jenny; Edlund, O.; Hermann, J.; Johansson, Lise-Lotte


The INTRA Manual consists of two volumes. Volume I is a thorough description of the INTRA code, the physical modelling of INTRA, and the governing numerical methods; Volume II, the User's Manual, is an input description. This document, the physical modelling of INTRA, covers code characteristics, integration methods, and applications.

  20. Code generation by model transformation: a case study in transformation modularity

    NARCIS (Netherlands)

Hemel, Z.; Kats, L.C.L.; Groenewegen, D.M.; Visser, E.


The realization of model-driven software development requires effective techniques for implementing code generators for domain-specific languages. This paper identifies techniques for improving separation of concerns in the implementation of generators. The core technique is code generation by model transformation.

  1. Capabilities and performance of Elmer/Ice, a new-generation ice sheet model

    Directory of Open Access Journals (Sweden)

    O. Gagliardini


Full Text Available The Fourth IPCC Assessment Report concluded that ice sheet flow models, in their current state, were unable to provide accurate forecasts for the increase of polar ice sheet discharge and the associated contribution to sea level rise. Since then, the glaciological community has undertaken a huge effort to develop and improve a new generation of ice flow models, and as a result a significant number of new ice sheet models have emerged. Among them is the parallel finite-element model Elmer/Ice, based on the open-source multi-physics code Elmer. It was one of the first full-Stokes models used to make projections for the evolution of the whole Greenland ice sheet for the coming two centuries. Originally developed to solve local ice flow problems of high mechanical and physical complexity, Elmer/Ice has today reached the maturity to solve larger-scale problems, earning the status of an ice sheet model. Here, we summarise almost 10 yr of development performed by different groups. Elmer/Ice solves the full-Stokes equations, for isotropic as well as anisotropic ice rheology, resolves the grounding line dynamics as a contact problem, and contains various basal friction laws. Derived fields, like the age of the ice, the strain rate or stress, can also be computed. Elmer/Ice includes two recently proposed inverse methods to infer poorly known parameters. Elmer is a highly parallelised code thanks to recent developments and the implementation of a block preconditioned solver for the Stokes system. In this paper, all these components are presented in detail, as well as the numerical performance of the Stokes solver and developments planned for the future.

  2. A Cycle Model of Co-evolution between Emerging Technology and Firm’s Capabilities Based on Case Study

    Institute of Scientific and Technical Information of China (English)

Wang Min; Li Limiao; Yin Lu


This study explores the mechanism of co-evolution between emerging technology and firm capabilities. Our research focus is how a firm's capabilities affect the evolution of an emerging technology through strategy. Based on theoretical analysis and a case study, this paper builds a theoretical framework: firm capability is classified into static capability and dynamic capability, and the evolution of emerging technology is summarized by a cycle model. Further, strategy is treated as a mediating variable. The conclusion is that static capability affects the evolution of emerging technology through strategy implementation, while dynamic capability affects it through strategy change. In both situations, organizational learning is a key capability for the evolution of emerging technology.

  3. 49 CFR 41.120 - Acceptable model codes. (United States)


    ... Natural Hazards, Federal Emergency Management Agency, 500 C Street, SW., Washington, DC 20472.): (i) The..., published by the Building Officials and Code Administrators, 4051 West Flossmoor Rd., Country Club Hills... Disaster Relief and Emergency Assistance Act (Stafford Act), 42 U.S.C. 5170a, 5170b, 5192, and 5193, or...

  4. M3: An Open Model for Measuring Code Artifacts

    NARCIS (Netherlands)

    Izmaylova, A.; Klint, P.; Shahi, A.; Vinju, J.J.


In the context of the EU FP7 project "OSSMETER" we are developing an infrastructure for measuring source code. The goal of OSSMETER is to obtain insight into the quality of open-source projects from all possible perspectives, including product, process and community. This is a "white paper" on M3,

  5. Dark Current and Multipacting Capabilities in OPAL: Model Benchmarks and Applications

    CERN Document Server

    Wang, C; Yin, Z G; Zhang, T J


Dark current and multiple electron impacts (multipacting), as observed for example in radio-frequency (RF) structures of accelerators, are usually harmful to the equipment and the beam quality, and need to be suppressed to guarantee efficient and stable operation. Large-scale simulations can be used to understand their causes and to develop suppression strategies. We extend OPAL, a parallel framework for charged particle optics in accelerator structures and beam lines, with the physics models necessary to simulate multipacting phenomena efficiently and precisely. We added a Fowler-Nordheim field emission model, two secondary electron emission models, developed by Furman-Pivi and Vaughan respectively, as well as efficient 3D boundary geometry handling capabilities. The models and their implementation are carefully benchmarked against a non-stationary multipacting theory for the classic parallel-plate geometry. A dedicated parallel-plate experiment is sketched.

  6. Qualification and application of nuclear reactor accident analysis code with the capability of internal assessment of uncertainty; Qualificacao e aplicacao de codigo de acidentes de reatores nucleares com capacidade interna de avaliacao de incerteza

    Energy Technology Data Exchange (ETDEWEB)

    Borges, Ronaldo Celem


This thesis presents an independent qualification of the CIAU code ('Code with the capability of Internal Assessment of Uncertainty'), which supports internal uncertainty evaluation with a thermal-hydraulic system code on a realistic basis. This is done by combining the uncertainty methodology UMAE ('Uncertainty Methodology based on Accuracy Extrapolation') with the RELAP5/Mod3.2 code, which allows associating uncertainty-band estimates with the results of a realistic code calculation, meeting the licensing requirements of safety analysis. The independent qualification is supported by RELAP5/Mod3.2 simulations of accident-condition tests at the LOBI experimental facility and of an event that occurred at the Angra 1 nuclear power plant, by comparison with measured results, and by establishing uncertainty bands on the calculated time trends of safety parameters. These bands did indeed envelop the measured trends. The results of this independent qualification of CIAU demonstrate the adequacy of a systematic realistic code procedure for analysing accidents with uncertainties incorporated in the results, although there is an evident need to extend the uncertainty database. Use of the code with this internal assessment of uncertainty has been verified to be feasible in the design and licensing stages of an NPP. (author)

  7. Transitioning Enhanced Land Surface Initialization and Model Verification Capabilities to the Kenya Meteorological Department (KMD) (United States)

    Case, Jonathan L.; Mungai, John; Sakwa, Vincent; Zavodsky, Bradley T.; Srikishen, Jayanthi; Limaye, Ashutosh; Blankenship, Clay B.


Flooding, severe weather, and drought are key forecasting challenges for the Kenya Meteorological Department (KMD), based in Nairobi, Kenya. Atmospheric processes leading to convection, excessive precipitation and/or prolonged drought can be strongly influenced by land cover, vegetation, and soil moisture content, especially during anomalous conditions and dry/wet seasonal transitions. It is thus important to represent land surface state variables (green vegetation fraction, soil moisture, and soil temperature) accurately in Numerical Weather Prediction (NWP) models. The NASA SERVIR and Short-term Prediction Research and Transition (SPoRT) programs in Huntsville, AL have established a working partnership with KMD to enhance its regional modeling capabilities. SPoRT and SERVIR are providing experimental land surface initialization datasets and model verification capabilities for capacity building at KMD. To support its forecasting operations, KMD is running experimental configurations of the Weather Research and Forecasting (WRF; Skamarock et al. 2008) model on a 12-km/4-km nested regional domain over eastern Africa, incorporating the land surface datasets provided by NASA SPoRT and SERVIR. SPoRT, SERVIR, and KMD participated in two training sessions in March 2014 and June 2015 to foster the collaboration and use of unique land surface datasets and model verification capabilities. Enhanced regional modeling capabilities have the potential to improve guidance in support of daily operations and high-impact weather and climate outlooks over Eastern Africa. For enhanced land-surface initialization, the NASA Land Information System (LIS) is run over Eastern Africa at 3-km resolution, providing real-time land surface initialization data in place of interpolated global model soil moisture and temperature data available at coarser resolutions. Additionally, real-time green vegetation fraction (GVF) composites from the Suomi-NPP VIIRS instrument are being incorporated

  8. Model for the extension of the processing and memory capabilities of Java Card smartcards

    Directory of Open Access Journals (Sweden)

    Susana María Ramírez Brey


Full Text Available Smartcards have distinctive features, such as portability, small size, and low cost, that allow them to be used on a large scale. Associated with these characteristics are hardware resource limitations, related mainly to memory and processing capabilities. These and other limitations of Java Card technology are significant constraints for smartcard application developers. In this work, a smartcard application development model with Java Card technology is presented that extends memory and processing capabilities by making use of the host computer's hardware resources, while preserving the safe environment characteristic of this device type. The proposed development model provides a mechanism for storing data associated with smartcard applications off-card, and for executing computationally expensive algorithms that, because of their runtime or complexity, are more feasible to perform off-card. This new model is intended to significantly increase the applications and use of smartcards in connected and controlled environments such as companies and institutions.

  9. Using a Simple Binomial Model to Assess Improvement in Predictive Capability: Sequential Bayesian Inference, Hypothesis Testing, and Power Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sigeti, David E. [Los Alamos National Laboratory; Pelak, Robert A. [Los Alamos National Laboratory


We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, θ, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for θ may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of θ and, in particular, how there will always be a region around θ = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used, as a kind of 'plan B metric' in the case that the analysis shows that θ is close to 1/2 and argue that such a plan B should generally be part of hypothesis testing. All the analysis presented in the paper is done with a
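The posterior for theta in this binomial model has a closed form: with a uniform Beta(1, 1) prior and k "new code better" outcomes in n experiments, theta follows Beta(1 + k, 1 + n - k). A minimal numerical sketch, where the 0.95 decision threshold and the 14-of-20 sample are illustrative and not the paper's:

```python
# With a uniform prior, P(theta > 1/2 | k of n) has an exact expression
# via the Beta-Binomial identity: for Beta(a, b) with integer a, b and
# m = a + b - 1,  P(Theta > 1/2) = sum_{j=0}^{a-1} C(m, j) / 2**m.
from math import comb

def posterior_prob_theta_gt_half(k: int, n: int) -> float:
    """P(theta > 1/2) after k successes in n trials, uniform Beta(1,1) prior."""
    a, b = k + 1, n - k + 1
    m = a + b - 1
    return sum(comb(m, j) for j in range(a)) / 2 ** m

# Illustrative decision rule: declare improvement if this probability
# exceeds 0.95. Suppose the new code predicted better in 14 of 20 runs:
k, n = 14, 20
p = posterior_prob_theta_gt_half(k, n)
print(round(p, 4), p > 0.95)   # → 0.9608 True
```

With no data the probability is exactly 1/2, as the uniform prior demands; the sample-size analysis in the paper amounts to asking how large n must be before such a threshold is likely to be crossed for a given true theta.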

  10. The modeling of core melting and in-vessel corium relocation in the APRIL code

    Energy Technology Data Exchange (ETDEWEB)

Kim, S.W.; Podowski, M.Z.; Lahey, R.T. [Rensselaer Polytechnic Institute, Troy, NY (United States)] [and others]


    This paper is concerned with the modeling of severe accident phenomena in boiling water reactors (BWR). New models of core melting and in-vessel corium debris relocation are presented, developed for implementation in the APRIL computer code. The results of model testing and validations are given, including comparisons against available experimental data and parametric/sensitivity studies. Also, the application of these models, as parts of the APRIL code, is presented to simulate accident progression in a typical BWR reactor.

  11. A computer code for calculations in the algebraic collective model of the atomic nucleus

    CERN Document Server

    Welsh, T A


A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) x SO(5) dynamical group. This, in particular, obviates the use of coefficients of fractional parentage. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [pi x q x pi]_0 and [pi x pi]_{LM}, where q_M are the model's quadrupole moments, and pi_N are corresponding conjugate momenta (-2 <= M,N <= 2). The code also provides ready access to SO(3)-reduced SO(5) Clebsch-Gordan coefficients through data files provided with the code.

  12. Modelling the biogeochemical cycle of silicon in soils using the reactive transport code MIN3P (United States)

    Gerard, F.; Mayer, K. U.; Hodson, M. J.; Meunier, J.


We investigated the biogeochemical cycling of Si in an acidic brown soil covered by a coniferous forest (Douglas fir), based on a comprehensive data set and reactive transport modelling. Both published and original data enabled us to construct a conceptual model on which the development of a numerical model is based. We modified the reactive transport code MIN3P, which solves thermodynamic and kinetic reactions coupled with vadose-zone flow and solute transport. Simulations were performed for a one-dimensional heterogeneous soil profile and were constrained by observed data, including daily soil temperature, plant transpiration, throughfall, and dissolved Si in solutions collected beneath the organic layer. Reactive transport modelling was first used to test the validity of the hypothesis that a dynamic balance between Si uptake by plants and release by weathering controls aqueous Si concentrations. We were able to calibrate the model quite accurately by stepwise adjustment of the relevant parameters, and its capability to predict Si concentrations was good. Mass balance calculations indicate that only 40% of the biogeochemical cycle of Si is controlled by weathering and that about 60% of Si cycling is related to biological processes (i.e. Si uptake by plants and dissolution of biogenic Si). Such a large contribution of biological processes was not anticipated under a temperate climate regime, but may be explained by the high biomass productivity of the planted coniferous species. The large contribution of passive Si uptake by vegetation permits the conservation of seasonal concentration variations caused by temperature-induced weathering, although the modelling suggests that the latter process is of lesser importance relative to biological Si cycling.

  13. Adaptive Planning: Understanding Organizational Workload to Capability/ Capacity through Modeling and Simulation (United States)

    Hase, Chris


In August 2003, the Secretary of Defense (SECDEF) established the Adaptive Planning (AP) initiative [1] with the objective of reducing the time necessary to develop and revise Combatant Commander (COCOM) contingency plans and increasing SECDEF plan visibility. In addition to reducing the traditional plan development timeline from twenty-four months to less than twelve months (with a goal of six months) [2], AP increased plan visibility to Department of Defense (DoD) leadership through In-Progress Reviews (IPRs). The IPR process, as well as the increased number of campaign and contingency plans COCOMs had to develop, increased the workload while the number of planners remained fixed. Several efforts, from collaborative planning tools to streamlined processes, were initiated to compensate for the increased workload, enabling COCOMs to better meet shorter planning timelines. This paper examines the Joint Strategic Capabilities Plan (JSCP) directed contingency planning and staffing requirements assigned to a combatant commander staff through the lens of modeling and simulation. The dynamics of developing a COCOM plan are captured with an ExtendSim [3] simulation. The resulting analysis provides a quantifiable means by which to measure a combatant commander staff's workload, associated with developing and staffing JSCP [4] directed contingency plans, against COCOM capability/capacity. Modeling and simulation bring significant opportunities in measuring the sensitivity of key variables in the assessment of workload against capability/capacity. Gaining an understanding of the relationship between plan complexity, number of plans, planning processes, and number of planners on the one hand, and the time required for plan development on the other, provides valuable information to DoD leadership. Through modeling and simulation, AP leadership can gain greater insight for key decisions on where best to allocate scarce resources in an effort to meet DoD planning objectives.

  14. Expand the Modeling Capabilities of DOE's EnergyPlus Building Energy Simulation Program

    Energy Technology Data Exchange (ETDEWEB)

    Don Shirey


EnergyPlus™ is a new-generation computer software analysis tool that has been developed, tested, and commercialized to support DOE's Building Technologies (BT) Program in terms of whole-building, component, and systems R&D. It is also being used to support evaluation and decision making of zero energy building (ZEB) energy efficiency and supply technologies during new building design and existing building retrofits. Version 1.0 of EnergyPlus was released in April 2001, followed by semiannual updated versions over the ensuing seven-year period. This report summarizes work performed by the University of Central Florida's Florida Solar Energy Center (UCF/FSEC) to expand the modeling capabilities of EnergyPlus. The project tasks involved implementing, testing, and documenting the following new features or enhancements of existing features: (1) a model for packaged terminal heat pumps; (2) a model for gas engine-driven heat pumps with waste heat recovery; (3) proper modeling of window screens; (4) integrating and streamlining EnergyPlus air flow modeling capabilities; (5) comfort-based controls for cooling and heating systems; and (6) an improved model for microturbine power generation with heat recovery. UCF/FSEC located existing mathematical models or generated new models for these features and incorporated them into EnergyPlus. The existing or new models were (re)written in Fortran 90/95 and integrated within EnergyPlus in accordance with the EnergyPlus Programming Standard and Module Developer's Guide. Each model/feature was thoroughly tested and identified errors were repaired. Upon completion of each model implementation, the existing EnergyPlus documentation (e.g., Input Output Reference and Engineering Document) was updated with information describing the new or enhanced feature. Reference data sets were generated for several of the features to aid program users in selecting proper

  15. Accelerating scientific codes by performance and accuracy modeling

    CERN Document Server

    Fabregat-Traver, Diego; Bientinesi, Paolo


    Scientific software is often driven by multiple parameters that affect both accuracy and performance. Since finding the optimal configuration of these parameters is a highly complex task, it is extremely common for the software to be used suboptimally. In a typical scenario, accuracy requirements are imposed and attained through suboptimal performance. In this paper, we present a methodology for the automatic selection of parameters for simulation codes, and a corresponding prototype tool. To be amenable to our methodology, the target code must expose the parameters affecting accuracy and performance, and formulas must be available for the error bounds and computational complexity of the underlying methods. As a case study, we consider the particle-particle particle-mesh method (PPPM) from the LAMMPS suite for molecular dynamics, and use our tool to identify configurations of the input parameters that achieve a given accuracy in the shortest execution time. When compared with the configurations suggested by exp...
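The selection step this abstract describes (meet an accuracy requirement, then minimize execution time) can be sketched as follows. The candidate grid, the error formula, and the cost formula below are invented placeholders, not the actual PPPM bounds or the prototype tool's interface:

```python
# Hypothetical sketch: among candidate configurations, keep those whose
# predicted error meets the accuracy requirement, then pick the one with
# the lowest predicted cost. Error/cost models are illustrative only.
def select_parameters(candidates, error_bound, cost_model, error_model):
    feasible = [c for c in candidates if error_model(c) <= error_bound]
    if not feasible:
        raise ValueError("no candidate satisfies the accuracy requirement")
    return min(feasible, key=cost_model)

# Hypothetical search space: (grid size, real-space cutoff) pairs.
candidates = [(grid, cutoff) for grid in (16, 32, 64, 128)
              for cutoff in (2.0, 3.0, 4.0)]
error_model = lambda c: 1.0 / (c[0] * c[1])           # placeholder error bound
cost_model = lambda c: c[0] ** 3 + 50.0 * c[1] ** 3   # placeholder time model

best = select_parameters(candidates, 1e-2, cost_model, error_model)
```

The same skeleton applies to any code that exposes its parameters together with error-bound and complexity formulas.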

  16. A high burnup model developed for the DIONISIO code

    Energy Technology Data Exchange (ETDEWEB)

    Soba, A. [U.A. Combustibles Nucleares, Comisión Nacional de Energía Atómica, Avenida del Libertador 8250, 1429 Buenos Aires (Argentina); Denis, A., E-mail: [U.A. Combustibles Nucleares, Comisión Nacional de Energía Atómica, Avenida del Libertador 8250, 1429 Buenos Aires (Argentina); Romero, L. [U.A. Reactores Nucleares, Comisión Nacional de Energía Atómica, Avenida del Libertador 8250, 1429 Buenos Aires (Argentina); Villarino, E.; Sardella, F. [Departamento Ingeniería Nuclear, INVAP SE, Comandante Luis Piedra Buena 4950, 8430 San Carlos de Bariloche, Río Negro (Argentina)


    A group of subroutines, designed to extend the application range of the fuel performance code DIONISIO to high burnup, has recently been included in the code. The new calculation tools, which are tuned for UO{sub 2} fuels under LWR conditions, predict the radial distribution of power density, burnup, and concentration of diverse nuclides within the pellet. The balance equations of all the isotopes involved in the fission process are solved in a simplified manner, and the one-group effective cross sections of all of them are obtained as functions of the radial position in the pellet, burnup, and enrichment in {sup 235}U. In this work, the subroutines are described and the results of the simulations performed with DIONISIO are presented. They show good agreement with the data provided in the FUMEX II/III NEA data bank.

  17. A high burnup model developed for the DIONISIO code (United States)

    Soba, A.; Denis, A.; Romero, L.; Villarino, E.; Sardella, F.


    A group of subroutines, designed to extend the application range of the fuel performance code DIONISIO to high burnup, has recently been included in the code. The new calculation tools, which are tuned for UO2 fuels under LWR conditions, predict the radial distribution of power density, burnup, and concentration of diverse nuclides within the pellet. The balance equations of all the isotopes involved in the fission process are solved in a simplified manner, and the one-group effective cross sections of all of them are obtained as functions of the radial position in the pellet, burnup, and enrichment in 235U. In this work, the subroutines are described and the results of the simulations performed with DIONISIO are presented. They show good agreement with the data provided in the FUMEX II/III NEA data bank.

  18. Development of an Aeroelastic Modeling Capability for Transient Nozzle Side Load Analysis (United States)

    Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen


    Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development during testing. While three-dimensional, transient, turbulent, chemically reacting computational fluid dynamics methodology has been demonstrated to capture major side load physics with rigid nozzles, hot-fire tests often show nozzle structure deformation during major side load events, leading to structural damage if structural strengthening measures are not taken. The modeling picture is incomplete without the capability to address the two-way responses between the structure and the fluid. The objective of this study is to develop a coupled aeroelastic modeling capability by implementing the necessary structural dynamics component into an anchored computational fluid dynamics methodology. The computational fluid dynamics component is based on an unstructured-grid, pressure-based formulation, while the computational structural dynamics component is developed in the framework of modal analysis. Transient aeroelastic nozzle startup analyses of the Block I Space Shuttle Main Engine at sea level were performed. The computed results from the aeroelastic nozzle modeling are presented.
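As a minimal illustration of the modal-analysis framework mentioned for the structural dynamics component, the sketch below integrates a single modal oscillator q'' + 2ζωq' + ω²q = f(t); the frequency, damping, and forcing values are illustrative, not SSME nozzle data:

```python
# Illustrative modal-analysis sketch: the structure is projected onto a
# few mode shapes and each modal coordinate is integrated independently.
def modal_response(omega, zeta, force, dt, steps):
    """Semi-implicit Euler integration of one modal oscillator:
    q'' + 2*zeta*omega*q' + omega**2 * q = f(t)."""
    q, v = 0.0, 0.0
    for i in range(steps):
        a = force(i * dt) - 2.0 * zeta * omega * v - omega ** 2 * q
        v += dt * a
        q += dt * v
    return q

# A constant modal force f0 should settle at the static deflection f0/omega^2:
omega, f0 = 50.0, 1.0e4
q_final = modal_response(omega, zeta=0.3, force=lambda t: f0, dt=1e-4, steps=50000)
```

In a coupled aeroelastic run, f(t) would come from projecting the CFD surface pressures onto each mode at every time step.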

  19. Test code for the assessment and improvement of Reynolds stress models (United States)

    Rubesin, M. W.; Viegas, J. R.; Vandromme, D.; Minh, H. HA


    An existing two-dimensional, compressible flow, Navier-Stokes computer code, containing a full Reynolds stress turbulence model, was adapted for use as a test bed for assessing and improving turbulence models based on turbulence simulation experiments. To date, the results of using the code in comparison with simulated channel flow and with flow over an oscillating flat plate have shown that the turbulence model used in the code needs improvement for these flows. It is also shown that direct simulations of turbulent flows over a range of Reynolds numbers are needed to guide subsequent improvement of turbulence models.

  20. Towards enhancing Sandia's capabilities in multiscale materials modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Aidun, John Bahram; Fang, Huei Eliot; Barbour, John Charles; Westrich, Henry Roger; Chen, Er-Ping


    We report our conclusions in support of the FY 2003 Science and Technology Milestone ST03-3.5. The goal of the milestone was to develop a research plan for expanding Sandia's capabilities in materials modeling and simulation. From inquiries and discussion with technical staff during FY 2003 we conclude that it is premature to formulate the envisioned coordinated research plan. The more appropriate goal is to develop a set of computational tools for making scale transitions and accumulate experience with applying these tools to real test cases so as to enable us to attack each new problem with higher confidence of success.

  1. Initiative-taking, Improvisational Capability and Business Model Innovation in Emerging Market

    DEFF Research Database (Denmark)

    Cao, Yangfeng

    Business model innovation plays a very important role in developing competitive advantage when multinational small and medium-sized enterprises (SMEs) from developed countries enter emerging markets, because of the large contextual distances or gaps between the emerging and developed economies....... Much prior research has shown that foreign subsidiaries play an important role in shaping the overall strategy of the parent company. However, little is known about how a subsidiary specifically facilitates business model innovation (BMI) in emerging markets. Adopting the method of comparative...... innovation in emerging markets. We find that high initiative-taking and strong improvisational capability can accelerate business model innovation. Our research contributes to the literatures on international and strategic entrepreneurship....

  2. A Scalable Model for the Performance Evaluation of ROADMs with Generic Switching Capabilities

    Directory of Open Access Journals (Sweden)

    Athanasios S Tsokanos


    In order to evaluate the performance of Reconfigurable Optical Add/Drop Multiplexers (ROADMs) consisting of a single large switch in circuit-switched Wavelength-Division Multiplexing (WDM) networks, a theoretical Queuing Network Model (QNM) is developed, which consists of two M/M/c/c loss systems, each of which is analyzed in isolation. An overall analytical blocking probability of a ROADM is obtained. This model can also be used for the performance optimization of ROADMs with a single switch capable of switching all or a subset of the wavelengths in use. It is demonstrated how the proposed model can be used for the performance evaluation of a ROADM for different numbers of wavelengths inside the switch, under various traffic intensity conditions, producing an exact blocking probability solution. The accuracy of the analytical results is validated by simulation.
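The blocking probability of an M/M/c/c loss system is given by the Erlang B formula, sketched below using its numerically stable recurrence. The rule for combining the two stages into an overall ROADM blocking probability is an illustrative independence assumption, not necessarily the paper's exact derivation:

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Erlang B blocking probability for an M/M/c/c loss system,
    computed with the stable recurrence B_k = a*B_{k-1} / (k + a*B_{k-1})."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

def roadm_blocking(c1, a1, c2, a2):
    """Illustrative overall blocking for two independent M/M/c/c stages
    analyzed in isolation: blocked if either stage blocks."""
    b1, b2 = erlang_b(c1, a1), erlang_b(c2, a2)
    return 1.0 - (1.0 - b1) * (1.0 - b2)
```

The recurrence avoids the factorials of the closed-form expression, so it stays accurate even for large wavelength counts.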

  3. Evaluation of the analysis models in the ASTRA nuclear design code system

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Nam Jin; Park, Chang Jea; Kim, Do Sam; Lee, Kyeong Taek; Kim, Jong Woon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)


    In the field of nuclear reactor design, the main practice has been the application of improved design code systems. In that process, a great deal of experience and knowledge has been accumulated in processing input data, nuclear fuel reload design, and the production and analysis of design data. However, less effort has gone into analyzing the methodology and into developing or improving those code systems. Recently, the Korea Nuclear Fuel Company (KNFC) developed the ASTRA (Advanced Static and Transient Reactor Analyzer) code system for nuclear reactor design and analysis. In this code system, two-group constants are generated by the CASMO-3 code system. The objective of this research is to analyze the analysis models used in the ASTRA/CASMO-3 code system. This evaluation requires an in-depth comprehension of the models, which is as important as the development of the code system itself. Currently, most of the code systems used in domestic nuclear power plants are imported, so it is very difficult to maintain them and adapt them to changing circumstances. Therefore, the evaluation of the analysis models in the ASTRA nuclear reactor design code system is very important.

  4. Implementing a Nuclear Power Plant Model for Evaluating Load-Following Capability on a Small Grid (United States)

    Arda, Samet Egemen

    A pressurized water reactor (PWR) nuclear power plant (NPP) model is introduced into Positive Sequence Load Flow (PSLF) software by General Electric in order to evaluate the load-following capability of NPPs. The nuclear steam supply system (NSSS) consists of a reactor core, hot and cold legs, plenums, and a U-tube steam generator. The physical systems listed above are represented by mathematical models utilizing a state-variable, lumped-parameter approach. A steady-state control program for the reactor, and simple turbine and governor models, are also developed. The adequacy of the isolated reactor core, isolated steam generator, and complete PWR models is tested in Matlab/Simulink, and dynamic responses are compared with test results obtained from the H. B. Robinson NPP. The test results illustrate that the developed models represent the dynamic features of real physical systems and are capable of predicting responses to small perturbations of external reactivity and steam valve opening. Subsequently, the NSSS representation is incorporated into PSLF and coupled with built-in excitation system and generator models. Different simulation cases are run in which sudden loss of generation occurs in a small power system that includes hydroelectric and natural gas power plants besides the developed PWR NPP. The conclusion is that the NPP can respond to a disturbance in the power system without exceeding any design and safety limits if appropriate operational conditions, such as achieving NPP turbine control by adjusting the speed of the steam valve, are met. In other words, the NPP can participate in the control of system frequency and improve the overall power system performance.
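As a minimal illustration of the state-variable, lumped-parameter approach described above, the sketch below integrates one-delayed-group point kinetics with forward Euler; all parameter values are generic textbook-style illustrations, not the H. B. Robinson plant data:

```python
# Illustrative point-kinetics sketch with one delayed-neutron group:
#   dn/dt = ((rho - beta)/Lambda) * n + lambda * C
#   dC/dt = (beta/Lambda) * n - lambda * C
# Parameter values are generic illustrations only.
def step_response(rho, beta=0.0065, lam=0.08, gen_time=2e-5,
                  dt=1e-4, t_end=1.0):
    n = 1.0                          # normalized neutron density
    c = beta * n / (gen_time * lam)  # delayed precursors at equilibrium
    t = 0.0
    while t < t_end:
        dn = ((rho - beta) / gen_time) * n + lam * c
        dc = (beta / gen_time) * n - lam * c
        n += dt * dn
        c += dt * dc
        t += dt
    return n
```

A zero reactivity step leaves the normalized power at 1.0, while a small positive step (below prompt critical) produces the familiar prompt jump followed by a slow rise.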

  5. Geared rotor dynamic methodologies for advancing prognostic modeling capabilities in rotary-wing transmission systems (United States)

    Stringer, David Blake

    The overarching objective in this research is the development of a robust, rotor dynamic, physics based model of a helicopter drive train as a foundation for the prognostic modeling for rotary-wing transmissions. Rotorcrafts rely on the integrity of their drive trains for their airworthiness. Drive trains rely on gear technology for their integrity and function. Gears alter the vibration characteristics of a mechanical system and significantly contribute to noise, component fatigue, and personal discomfort prevalent in rotorcraft. This research effort develops methodologies for generating a rotor dynamic model of a rotary-wing transmission based on first principles, through (i) development of a three-dimensional gear-mesh stiffness model for helical and spur gears and integration of this model in a finite element rotor dynamic model, (ii) linear and nonlinear analyses of a geared system for comparison and validation of the gear-mesh model, (iii) development of a modal synthesis technique for potentially providing model reduction and faster analysis capabilities for geared systems, and (iv) extension of the gear-mesh model to bevel and epicyclic configurations. In addition to model construction and validation, faults indigenous to geared systems are presented and discussed. Two faults are selected for analysis and seeded into the transmission model. Diagnostic vibration parameters are presented and used as damage indicators in the analysis. The fault models produce results consistent with damage experienced during experimental testing. The results of this research demonstrate the robustness of the physics-based approach in simulating multiple normal and abnormal conditions. The advantages of this physics-based approach, when combined with contemporary probabilistic and time-series techniques, provide a useful method for improving health monitoring technologies in mechanical systems.

  6. Modeling Proton- and Light Ion-Induced Reactions at Low Energies in the MARS15 Code

    Energy Technology Data Exchange (ETDEWEB)

    Rakhno, I. L. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Mokhov, N. V. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Gudima, K. K. [National Academy of Sciences, Cisineu (Moldova)


    An implementation of both the ALICE code and the TENDL evaluated nuclear data library to describe nuclear reactions induced by low-energy projectiles in the Monte Carlo code MARS15 is presented. Comparisons between modeling results and experimental data on reaction cross sections and secondary particle distributions are shown.

  7. The Nuremberg Code subverts human health and safety by requiring animal modeling


    Greek, Ray; Pippus, Annalea; Hansen, Lawrence A


    Background: The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion: We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive...

  8. Implementation of the critical points model in a SFM-FDTD code working in oblique incidence

    Energy Technology Data Exchange (ETDEWEB)

    Hamidi, M; Belkhir, A; Lamrous, O [Laboratoire de Physique et Chimie Quantique, Universite Mouloud Mammeri, Tizi-Ouzou (Algeria); Baida, F I, E-mail: [Departement d' Optique P.M. Duffieux, Institut FEMTO-ST UMR 6174 CNRS Universite de Franche-Comte, 25030 Besancon Cedex (France)


    We describe the implementation of the critical points model in a finite-difference-time-domain code working in oblique incidence and dealing with dispersive media through the split field method. Some tests are presented to validate our code in addition to an application devoted to plasmon resonance of a gold nanoparticles grating.

  9. Addressing Hate Speech and Hate Behaviors in Codes of Conduct: A Model for Public Institutions. (United States)

    Neiger, Jan Alan; Palmer, Carolyn; Penney, Sophie; Gehring, Donald D.


    As part of a larger study, researchers collected campus codes prohibiting hate crimes, which were then reviewed to determine whether the codes presented constitutional problems. Based on this review, the authors develop and present a model policy that is content neutral and does not use language that could be viewed as unconstitutionally vague or…


    Energy Technology Data Exchange (ETDEWEB)

    Joshua J. Cogliati; Abderrafi M. Ougouag


    A comprehensive, high fidelity model for pebble flow has been developed and embodied in the PEBBLES computer code. In this paper, a description of the physical artifacts included in the model is presented and some results from using the computer code for predicting the features of pebble flow and packing in a realistic pebble bed reactor design are shown. The sensitivity of models to various physical parameters is also discussed.

  11. Water structure-forming capabilities are temperature shifted for different models. (United States)

    Shevchuk, Roman; Prada-Gracia, Diego; Rao, Francesco


    A large number of water models exist for molecular simulations. They differ in their ability to reproduce certain features of real water rather than others, such as the correct temperature of the density maximum or the diffusion coefficient. Past analyses mostly concentrated on ensemble quantities, while few data were reported on differences in microscopic behavior. Here, we compare seven widely used classical water models (SPC, SPC/E, TIP3P, TIP4P, TIP4P-Ew, TIP4P/2005, and TIP5P) in terms of their local structure-forming capabilities through hydrogen bonds for temperatures ranging from 210 to 350 K by introducing a set of order parameters that take into account the configuration of up to the second solvation shell. We found that all models share the same structural pattern up to a temperature shift. When this shift is applied, all models overlap onto a master curve. Interestingly, increased stabilization of fully coordinated structures extending to at least two solvation shells is found for models that are able to reproduce the correct position of the density maximum. Our results provide a self-consistent atomic-level structural comparison protocol, which can be of help in elucidating the influence of different water models on protein structure and dynamics.
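The "temperature shift" that collapses the models onto a master curve can be estimated by least squares, as in the sketch below; the curves are synthetic stand-ins for a structural order parameter versus temperature, not simulation data:

```python
import math

def interp(x, xs, ys):
    """Piecewise-linear interpolation (xs ascending); clamps at the ends."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            w = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + w * (ys[i] - ys[i - 1])

def best_shift(t_ref, y_ref, t_model, y_model, shifts):
    """Shift that best superimposes the model curve onto the reference."""
    def mismatch(dt):
        shifted = [t + dt for t in t_model]
        return sum((interp(t, shifted, y_model) - y) ** 2
                   for t, y in zip(t_ref, y_ref))
    return min(shifts, key=mismatch)

# Synthetic order-parameter curves standing in for simulation data:
t_ref = [210.0 + 2.0 * i for i in range(71)]              # 210..350 K
reference = [math.tanh((t - 280.0) / 30.0) for t in t_ref]
t_model = [170.0 + 2.0 * i for i in range(111)]           # 170..390 K
model = [math.tanh((t - 300.0) / 30.0) for t in t_model]  # same pattern, +20 K

dt = best_shift(t_ref, reference, t_model, model, range(-40, 41))
```

When every model's curve aligns with the reference after such a shift, the shared structural pattern the abstract describes becomes a single master curve.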

  12. Programming with models: modularity and abstraction provide powerful capabilities for systems biology. (United States)

    Mallavarapu, Aneil; Thomson, Matthew; Ullian, Benjamin; Gunawardena, Jeremy


    Mathematical models are increasingly used to understand how phenotypes emerge from systems of molecular interactions. However, their current construction as monolithic sets of equations presents a fundamental barrier to progress. Overcoming this requires modularity, enabling sub-systems to be specified independently and combined incrementally, and abstraction, enabling generic properties of biological processes to be specified independently of specific instances. These, in turn, require models to be represented as programs rather than as datatypes. Programmable modularity and abstraction enables libraries of modules to be created, which can be instantiated and reused repeatedly in different contexts with different components. We have developed a computational infrastructure that accomplishes this. We show here why such capabilities are needed, what is required to implement them and what can be accomplished with them that could not be done previously.
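The "models as programs" idea above can be illustrated with a tiny sketch: a module is a function that, given its components, returns its equations (here, a rate law), so one module can be instantiated repeatedly in different contexts. The rate law and species names are generic illustrations, not the authors' actual infrastructure:

```python
# Hypothetical module: a parameterized Michaelis-Menten rate law that can
# be instantiated for any enzyme/substrate pair (modularity + abstraction).
def michaelis_menten(enzyme, substrate, kcat, km):
    """Return a rate function v = kcat*[E]*[S]/(km + [S]) for this instance."""
    def rate(conc):
        return kcat * conc[enzyme] * conc[substrate] / (km + conc[substrate])
    return rate

# Two instances of the same module, bound to different components:
v1 = michaelis_menten("E1", "S", kcat=10.0, km=0.5)
v2 = michaelis_menten("E2", "P", kcat=3.0, km=2.0)
conc = {"E1": 1.0, "E2": 0.2, "S": 0.5, "P": 2.0}
```

Because each instance is produced by a program rather than written out as a monolithic equation set, a library of such modules can be composed incrementally.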

  13. MESSOC capabilities and results. [Model for Estimating Space Station Operations Costs] (United States)

    Shishko, Robert


    MESSOC (Model for Estimating Space Station Operations Costs) is the result of a multi-year effort by NASA to understand and model the mature operations cost of Space Station Freedom. This paper focuses on MESSOC's ability to contribute to life-cycle cost analyses through its logistics equations and databases. Together, these afford MESSOC the capability to project not only annual logistics costs for a variety of Space Station scenarios, but critical non-cost logistics results such as annual Station maintenance crewhours, upweight/downweight, and on-orbit sparing availability as well. MESSOC results using current logistics databases and baseline scenario have already shown important implications for on-orbit maintenance approaches, space transportation systems, and international operations cost sharing.

  14. Mathematical model and computer code for the analysis of advanced fast reactor dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Schukin, N.V. (Moscow Engineering Physics Inst. (Russian Federation)); Korsun, A.S. (Moscow Engineering Physics Inst. (Russian Federation)); Vitruk, S.G. (Moscow Engineering Physics Inst. (Russian Federation)); Zimin, V.G. (Moscow Engineering Physics Inst. (Russian Federation)); Romanin, S.D. (Moscow Engineering Physics Inst. (Russian Federation))


    Efficient algorithms for mathematical modeling of 3-D neutron kinetics and thermal hydraulics are described. The model and appropriate computer code make it possible to analyze a variety of transient events ranging from normal operational states to catastrophic accident excursions. To verify the code, a number of calculations of different kinds of transients were carried out. The results of the calculations show that the model and the computer code could be used for conceptual design of advanced liquid metal reactors. The detailed description of calculations of the TOP WS accident is presented. (orig./DG)

  15. The Aviation System Analysis Capability Air Carrier Cost-Benefit Model (United States)

    Gaier, Eric M.; Edlich, Alexander; Santmire, Tara S.; Wingrove, Earl R., III


    To meet its objective of assisting the U.S. aviation industry with the technological challenges of the future, NASA must identify research areas that have the greatest potential for improving the operation of the air transportation system. Therefore, NASA is developing the ability to evaluate the potential impact of various advanced technologies. By thoroughly understanding the economic impact of advanced aviation technologies and by evaluating how the new technologies will be used in the integrated aviation system, NASA aims to balance its aeronautical research program and help speed the introduction of high-leverage technologies. To meet these objectives, NASA is building the Aviation System Analysis Capability (ASAC). NASA envisions ASAC primarily as a process for understanding and evaluating the impact of advanced aviation technologies on the U.S. economy. ASAC consists of a diverse collection of models and databases used by analysts and other individuals from the public and private sectors brought together to work on issues of common interest to organizations in the aviation community. ASAC also will be a resource available to the aviation community to analyze, inform, and assist scientists, engineers, analysts, and program managers in their daily work. The ASAC differs from previous NASA modeling efforts in that the economic behavior of buyers and sellers in the air transportation and aviation industries is central to its conception. Commercial air carriers, in particular, are an important stakeholder in this community. Therefore, to fully evaluate the implications of advanced aviation technologies, ASAC requires a flexible financial analysis tool that credibly links the technology of flight with the financial performance of commercial air carriers. By linking technical and financial information, NASA ensures that its technology programs will continue to benefit the user community.
In addition, the analysis tool must be capable of being incorporated into the

  16. A Relationship Framework for Building Information Modeling (BIM) Capability in Quantity Surveying Practice and Project Performance

    Directory of Open Access Journals (Sweden)

    Wong, P. F.


    The construction industry has suffered from poor project performance, and it is crucial to find solutions to improve this situation. Quantity surveyors (QSs) play a key role in managing project cost. However, their method of performing tasks is tedious to the point of affecting project performance. Building information modeling (BIM) is attracting attention in the construction industry as a means to improve project performance. However, adoption is low among QSs because of the limited study of BIM's capabilities in their profession. This research aims to identify the BIM capabilities in quantity surveying practice and examine their relationship with project performance by developing a relationship framework. Data were collected through a questionnaire survey and interviews in Malaysia. The questionnaire results revealed that several BIM capabilities were significantly correlated with project performance, and these were validated through the interviews. The relationship framework will guide QSs to focus on the identified BIM capabilities for better project outcomes.

  17. Development of thermal hydraulic models for the reliable regulatory auditing code

    Energy Technology Data Exchange (ETDEWEB)

    Chung, B. D.; Song, C. H.; Lee, Y. J.; Kwon, T. S.; Lee, S. W. [Korea Automic Energy Research Institute, Taejon (Korea, Republic of)


    The objective of this project is to develop thermal hydraulic models for use in improving the reliability of the regulatory auditing codes. The current year falls under the second step of the three-year project, and the main research was focused on the development of a downcomer boiling model. During the current year, the bubble stream model of the downcomer was developed and installed in the auditing code. A model sensitivity analysis was performed for the APR1400 LBLOCA scenario using the modified code. A preliminary calculation was performed for the experimental test facility using the FLUENT and MARS codes. The facility for the air bubble experiment was installed. The thermal hydraulic phenomena for the VHTR and supercritical reactor have been identified for future application and model development.

  18. Improving National Capability in Biogeochemical Flux Modelling: the UK Environmental Virtual Observatory (EVOp) (United States)

    Johnes, P.; Greene, S.; Freer, J. E.; Bloomfield, J.; Macleod, K.; Reaney, S. M.; Odoni, N. A.


    The best outcomes from watershed management arise where policy and mitigation efforts are underpinned by strong science evidence, but there are major resourcing problems associated with the scale of monitoring needed to effectively characterise the sources, rates, and impacts of nutrient enrichment nationally. The challenge is to increase national capability in predictive modelling of nutrient flux to waters, securing an effective mechanism for transferring knowledge and management tools from data-rich to data-poor regions. The inadequacy of existing tools and approaches to address these challenges provided the motivation for the Environmental Virtual Observatory programme (EVOp), an innovation from the UK Natural Environment Research Council (NERC). EVOp is exploring the use of a cloud-based infrastructure in catchment science, developing an exemplar to explore N and P fluxes to inland and coastal waters in the UK from grid to catchment and national scale. EVOp is bringing together for the first time national data sets, models and uncertainty analysis into cloud computing environments to explore and benchmark current predictive capability for national scale biogeochemical modelling. The objective is to develop national biogeochemical modelling capability, capitalising on extensive national investment in the development of science understanding and modelling tools to support integrated catchment management, and supporting knowledge transfer from data-rich to data-poor regions. The AERC export coefficient model (Johnes et al., 2007) has been adapted to function within the EVOp cloud environment, and on a geoclimatic basis, using a range of high resolution, geo-referenced digital datasets as an initial demonstration of the enhanced national capacity for N and P flux modelling using cloud computing infrastructure. Geoclimatic regions are landscape units displaying homogenous or quasi-homogenous functional behaviour in terms of process controls on N and P cycling
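The core of an export-coefficient calculation in the spirit of the model referenced above is a sum over land-use classes of area times export coefficient, as in this sketch. All areas and coefficients below are made-up numbers, not the AERC model's values:

```python
# Illustrative export-coefficient sketch: annual nutrient load as a sum
# over land-use classes of (area x export coefficient). Values invented.
def nutrient_load(areas_ha, export_kg_per_ha):
    """Total annual load (kg/yr) = sum over land uses of area * coefficient."""
    return sum(areas_ha[use] * export_kg_per_ha[use] for use in areas_ha)

catchment_ha = {"arable": 1200.0, "pasture": 800.0, "woodland": 500.0}
p_coeffs = {"arable": 0.65, "pasture": 0.30, "woodland": 0.02}  # hypothetical kg P/ha/yr
total_p = nutrient_load(catchment_ha, p_coeffs)
```

Running such a calculation per geoclimatic region is what lets coefficients calibrated in data-rich catchments be transferred to data-poor ones.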

  19. A generic method for automatic translation between input models for different versions of simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Serfontein, Dawid E., E-mail: [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)


    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, often are very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of verifying the accuracy of such translated files, for instance by nuclear regulators, can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. Therefore a generic algorithm for producing such automatic translation codes may ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the name and value of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.
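The essence of such a translator can be sketched as a table-driven mapping: each entry maps a field in the source format to its name (and optional unit conversion) in the target format, with every translated value written to a verification log. The field names and conversion below are invented for illustration, not actual VSOP input variables:

```python
# Hypothetical table-driven input-model translator with a verification log.
# Each mapping entry: source field -> (target field, optional converter).
MAPPING = {
    "CORE_HEIGHT": ("core_height_cm", lambda v: v * 100.0),  # m -> cm
    "N_BATCHES":   ("fuel_batches", None),
}

def translate(source, log):
    target = {}
    for old_name, value in source.items():
        new_name, convert = MAPPING[old_name]
        new_value = convert(value) if convert else value
        target[new_name] = new_value
        log.append(f"{old_name}={value} -> {new_name}={new_value}")
    return target
```

Keeping the mapping as data rather than code is what makes the approach generic: supporting a new code version means writing a new table, not a new program, and the log gives regulators a permanent record of every translated value.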

  20. Predictions for the drive capabilities of the RancheroS Flux Compression Generator into various load inductances using the Eulerian AMR Code Roxane

    Energy Technology Data Exchange (ETDEWEB)

    Watt, Robert Gregory [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


The Ranchero Magnetic Flux Compression Generator (FCG) has been used to create current pulses in the 10-100 MA range for driving both “static” low inductance (0.5 nH) loads [1] for generator demonstration purposes and high inductance (10-20 nH) imploding liner loads [2] for ultimate use in physics experiments at very high energy density. Simulations of the standard Ranchero generator have recently shown that it had a design issue that could lead to flux trapping in the generator, and a non-robust predictability in its use in high energy density experiments. A re-examination of the design concept for the standard Ranchero generator, prompted by the possible appearance of an aneurism at the output glide plane, has led to a new generation of Ranchero generators designated the RancheroS (for swooped). This generator has removed the problematic output glide plane and replaced it with a region of constantly increasing diameter in the output end of the FCG cavity in which the armature is driven outward under the influence of an additional HE load not present in the original Ranchero. The resultant RancheroS generator, to be tested in LA43S-L13, probably in early FY17, has a significantly increased initial inductance and may be able to drive a somewhat higher load inductance than the standard Ranchero. This report will use the Eulerian AMR code Roxane to study the ability of the new design to drive static loads, with a goal of providing a database corresponding to the load inductances for which the generator might be used and the anticipated peak currents such loads might produce in physics experiments. Such a database, combined with a simple analytic model of an ideal generator, where d(LI)/dt = 0, and supplemented by earlier estimates of losses in actual use of the standard Ranchero, scaled to estimate the increase in losses due to the longer current carrying perimeter in the RancheroS, can then be used to bound the expectations for the current drive one may
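The ideal-generator relation d(LI)/dt = 0 quoted above gives a quick upper bound on deliverable current: with the flux LI conserved, current multiplication is set purely by the ratio of initial to final total inductance. A minimal sketch, with illustrative numbers that are not RancheroS design values:

```python
# Flux-conserving bound: d(LI)/dt = 0 implies (L_gen + L_load) * I = const,
# so the final current into a static load inductance follows from the ratio
# of initial to residual generator inductance. All numbers are illustrative.

def ideal_peak_current(i_seed, l_initial, l_residual, l_load):
    """Ideal (lossless) current multiplication into a fixed load inductance (H)."""
    return i_seed * (l_initial + l_load) / (l_residual + l_load)

# e.g. 1 MA seed current, 100 nH initial, 2 nH residual, 5 nH static load
i_peak = ideal_peak_current(1.0e6, 100e-9, 2e-9, 5e-9)   # amperes
```

Real generators fall short of this bound because of the resistive and geometric losses the report scales from standard-Ranchero experience.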

  1. The fast code

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, L.N.; Wilson, R.E. [Oregon State Univ., Dept. of Mechanical Engineering, Corvallis, OR (United States)


    The FAST Code which is capable of determining structural loads on a flexible, teetering, horizontal axis wind turbine is described and comparisons of calculated loads with test data are given at two wind speeds for the ESI-80. The FAST Code models a two-bladed HAWT with degrees of freedom for blade bending, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffnesses, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth averaged bin plots. It is concluded that agreement between the FAST Code and test results is good. (au)

  2. DESTINY: A Comprehensive Tool with 3D and Multi-Level Cell Memory Modeling Capability

    Directory of Open Access Journals (Sweden)

    Sparsh Mittal


Full Text Available To enable the design of large-capacity memory structures, novel memory technologies such as non-volatile memory (NVM) and novel fabrication approaches, e.g. 3D stacking and multi-level cell (MLC) design, have been explored. The existing modeling tools, however, cover only a few memory technologies, technology nodes and fabrication approaches. We present DESTINY, a tool for modeling 2D/3D memories designed using SRAM, resistive RAM (ReRAM), spin transfer torque RAM (STT-RAM), phase change RAM (PCM) and embedded DRAM (eDRAM), and 2D memories designed using spin orbit torque RAM (SOT-RAM), domain wall memory (DWM) and Flash memory. In addition to single-level cell (SLC) designs for all of these memories, DESTINY also supports modeling MLC designs for NVMs. We have extensively validated DESTINY against commercial and research prototypes of these memories. DESTINY is very useful for performing design-space exploration across several dimensions, such as optimizing for a target (e.g. latency, area or energy-delay product) for a given memory technology, choosing the suitable memory technology or fabrication method (i.e. 2D vs. 3D) for a given optimization target, etc. We believe that DESTINY will boost studies of next-generation memory architectures used in systems ranging from mobile devices to extreme-scale supercomputers. The latest source code of DESTINY is available from the following git repository:
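The kind of design-space query described above, optimizing for a target such as energy-delay product across candidate technologies, can be illustrated with a toy example. The candidate list and numbers below are invented; DESTINY itself is driven by configuration files and detailed technology models.

```python
# Toy design-space exploration: pick, from hypothetical candidate designs,
# the one minimizing energy-delay product (EDP). Latency/energy numbers are
# made up for illustration only.

candidates = [  # (technology, latency_ns, energy_nJ)
    ("SRAM",    1.2, 0.9),
    ("STT-RAM", 2.5, 0.4),
    ("ReRAM",   4.0, 0.3),
]

def best_by_edp(designs):
    """Return the design with the smallest latency * energy product."""
    return min(designs, key=lambda d: d[1] * d[2])

tech, latency, energy = best_by_edp(candidates)
```

Swapping the key function (e.g. to latency alone, or area) changes the optimization target in the same way DESTINY's exploration dimensions do.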

  3. Verification and Validation Strategy for Implementation of Hybrid Potts-Phase Field Hydride Modeling Capability in MBM

    Energy Technology Data Exchange (ETDEWEB)

    Jason D. Hales; Veena Tikare


The Used Fuel Disposition (UFD) program has initiated a project to develop a hydride formation modeling tool using a hybrid Potts-phase field approach. The Potts model is incorporated in the SPPARKS code from Sandia National Laboratories. The phase field model is provided through MARMOT from Idaho National Laboratory.

  4. Further assessment of the chemical modelling of iodine in IMPAIR 3 code using ACE/RTF data

    Energy Technology Data Exchange (ETDEWEB)

    Cripps, R.C.; Guentay, S. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)


This paper introduces the assessment of the computer code IMPAIR 3 (Iodine Matter Partitioning And Iodine Release), which simulates physical and chemical iodine processes in a LWR containment with one or more compartments under conditions relevant to a severe accident in a nuclear reactor. The first version was published in 1992 to replace both the multi-compartment code IMPAIR 2/M and the single-compartment code IMPAIR 2.2. IMPAIR 2.2 was restricted to a single pH value specified before programme execution and precluded any variation of pH or calculation of H{sup +} changes during program execution. This restriction is removed in IMPAIR 3. Results of the IMPAIR 2.2 assessment using ACE/RTF Test 2 and the acidic-phase Test 3B data were presented at the 3rd CSNI Workshop. The purpose of the current assessment is to verify the capability of IMPAIR 3 to follow the whole test duration with changing boundary conditions. Besides revisiting ACE/RTF Test 3B, Test 4 data were also used for the current assessment. A limited data analysis was conducted using the outcome of the current ACEX iodine work to understand the iodine behaviour observed during these tests. This paper presents comparisons of the predicted results with the test data. The demonstration of the code capabilities focuses on still-unresolved modelling problems. The unexplained gaseous molecular iodine behaviour, its inconclusive effect on the calculated behaviour in the acidic phase of Test 4, and the importance of the catalytic effect of stainless steel are also indicated. (author) 18 figs., 1 tab., 11 refs.

  5. An Advanced simulation Code for Modeling Inductive Output Tubes

    Energy Technology Data Exchange (ETDEWEB)

    Thuc Bui; R. Lawrence Ives


    During the Phase I program, CCR completed several major building blocks for a 3D large signal, inductive output tube (IOT) code using modern computer language and programming techniques. These included a 3D, Helmholtz, time-harmonic, field solver with a fully functional graphical user interface (GUI), automeshing and adaptivity. Other building blocks included the improved electrostatic Poisson solver with temporal boundary conditions to provide temporal fields for the time-stepping particle pusher as well as the self electric field caused by time-varying space charge. The magnetostatic field solver was also updated to solve for the self magnetic field caused by time changing current density in the output cavity gap. The goal function to optimize an IOT cavity was also formulated, and the optimization methodologies were investigated.

  6. Modeling of Ionization Physics with the PIC Code OSIRIS

    Energy Technology Data Exchange (ETDEWEB)

Deng, S.; Tsung, F.; Lee, S.; Lu, W.; Mori, W.B.; Katsouleas, T.; Muggli, P.; Blue, B.E.; Clayton, C.E.; O'Connell, C.; Dodd, E.; Decker, F.J.; Huang, C.; Hogan, M.J.; Hemker, R.; Iverson, R.H.; Joshi, C.; Ren, C.; Raimondi, P.; Wang, S.; Walz, D.; /Southern California U. /UCLA /SLAC


    When considering intense particle or laser beams propagating in dense plasma or gas, ionization plays an important role. Impact ionization and tunnel ionization may create new plasma electrons, altering the physics of wakefield accelerators, causing blue shifts in laser spectra, creating and modifying instabilities, etc. Here we describe the addition of an impact ionization package into the 3-D, object-oriented, fully parallel PIC code OSIRIS. We apply the simulation tool to simulate the parameters of the upcoming E164 Plasma Wakefield Accelerator experiment at the Stanford Linear Accelerator Center (SLAC). We find that impact ionization is dominated by the plasma electrons moving in the wake rather than the 30 GeV drive beam electrons. Impact ionization leads to a significant number of trapped electrons accelerated from rest in the wake.

  7. Predictability of the geospace variations and measuring the capability to model the state of the system (United States)

    Pulkkinen, A.


Empirical modeling has been the workhorse of the past decades in predicting the state of the geospace. For example, numerous empirical studies have shown that global geoeffectiveness indices such as Kp and Dst are generally well predictable from the solar wind input. These successes have been facilitated partly by the strongly externally driven nature of the system. Although characterizing the general state of the system is valuable and empirical modeling will continue playing an important role, refined physics-based quantification of the state of the system has been the obvious next step in moving toward more mature science. Importantly, more refined and localized products are also needed for space weather purposes. Predictions of local physical quantities are necessary to make physics-based links to the impacts on specific systems. As more localized predictions of the geospace state are introduced, one central question is how predictable these local quantities are. This complex question can be addressed by rigorously measuring model performance against observed data. The space sciences community has made great advances on this topic over the past few years, and there are ongoing efforts in SHINE, CEDAR and GEM to carry out community-wide evaluations of the state-of-the-art solar and heliospheric, ionosphere-thermosphere and geospace models, respectively. These efforts will help establish benchmarks and thus provide means to measure progress in the field, analogous to the monitoring of improvements in lower-atmospheric weather predictions carried out rigorously since the 1980s. In this paper we discuss some of the latest advancements in predicting local geospace parameters and give an overview of community efforts to rigorously measure model performance. We also briefly discuss future opportunities for advancing geospace modeling capability. These will include further development in data assimilation and ensemble
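The rigorous model-observation comparison discussed above is commonly expressed as a skill score against a reference forecast. The following is a minimal sketch with synthetic data; the metric choice (RMSE skill relative to a persistence-like reference) is illustrative, not the community benchmarks themselves.

```python
# Skill score of a model forecast relative to a reference forecast:
# 1 = perfect, 0 = no better than the reference, < 0 = worse than reference.
# All series below are synthetic.

def rmse(pred, obs):
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

def skill_score(model_pred, ref_pred, obs):
    return 1.0 - rmse(model_pred, obs) / rmse(ref_pred, obs)

obs   = [0.0, 1.0, 2.0, 3.0]
model = [0.1, 0.9, 2.2, 2.8]
ref   = [0.0, 0.0, 1.0, 2.0]   # persistence-like reference forecast
s = skill_score(model, ref, obs)
```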

  8. A p-Adic Model of DNA Sequence and Genetic Code

    CERN Document Server

    Dragovich, Branko


Using basic properties of p-adic numbers, we consider a simple new approach to describe the main aspects of DNA sequences and the genetic code. A central role in our investigation is played by an ultrametric p-adic information space whose basic elements are nucleotides, codons and genes. We show that a 5-adic model is appropriate for DNA sequences. This 5-adic model, combined with 2-adic distance, is also suitable for the genetic code and for a more advanced employment in genomics. We find that genetic code degeneracy is related to the p-adic distance between codons.
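The 5-adic distance between codons can be made concrete with a short sketch. The digit assignment below is illustrative of the general scheme rather than a quotation of the paper's exact convention; it shows how codons differing only in the third position end up 5-adically close, which is the property tied to code degeneracy.

```python
# A codon maps to the integer n = d0 + d1*5 + d2*25, with nucleotides as
# nonzero 5-adic digits (illustrative assignment). The 5-adic distance
# between codons x, y is 5**(-v), where v is the 5-adic valuation of x - y.

DIGIT = {"C": 1, "A": 2, "T": 3, "G": 4}   # illustrative digit assignment

def codon_to_int(codon):
    return sum(DIGIT[b] * 5 ** i for i, b in enumerate(codon))

def padic_distance(x, y, p=5):
    if x == y:
        return 0.0
    n, v = abs(x - y), 0
    while n % p == 0:
        n //= p
        v += 1
    return float(p) ** (-v)

# codons differing only in the third position differ by a multiple of 25,
# so they are 5-adically close; a first-position change leaves them far apart
d_close = padic_distance(codon_to_int("CTA"), codon_to_int("CTG"))
d_far   = padic_distance(codon_to_int("CTA"), codon_to_int("ATA"))
```

Degenerate codons (typically differing in the third position) thus cluster into small 5-adic balls.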

  9. Application of Capability Maturity Model Integration to Innovation Management for Software and Service Companies

    Institute of Scientific and Technical Information of China (English)

    LI Jing


To look at innovation management in small project-based firms, such as software engineering companies, which are service firms that conduct projects for their clients, capability maturity model integration (CMMI) is introduced. It is a process improvement approach that provides organizations with the essential elements of effective processes. Taking ABC Software Company as an example, the performances before and after the introduction of CMMI in the firm were compared. The results indicated that after two years of application, productivity increased by 92% and the ability to detect errors improved by 26.45%, while the rate of faults and the cost of software development dropped by 12.45% and 77.55%, respectively. To conclude, small project-based firms benefit considerably if they incorporate CMMI into their innovation management process, particularly R&D firms, since the implementation of CMMI leads them to a promising future with higher efficiency and better effects.

  10. Trace-Based Code Generation for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.-G.


    Paper Submitted for review at the Eighth International Conference on Generative Programming and Component Engineering. Model-based testing can be a powerful means to generate test cases for the system under test. However, creating a useful model for model-based testing requires expertise in the (fo

  12. Defining Building Information Modeling implementation activities based on capability maturity evaluation: a theoretical model

    Directory of Open Access Journals (Sweden)

    Romain Morlhon


Full Text Available Building Information Modeling (BIM) has become a widely accepted tool to overcome the many hurdles that currently face the Architecture, Engineering and Construction industries. However, implementing such a system is always complex, and the recent introduction of BIM does not allow organizations to build their experience on acknowledged standards and procedures. Moreover, data on implementation projects are still scattered and fragmentary. The objective of this study is to develop an assistance model for BIM implementation. Solutions that are proposed will help develop BIM that is better integrated and better used, and take into account the different maturity levels of each organization. Indeed, based on Critical Success Factors, concrete activities that help in implementation are identified and can be undertaken according to an organization's prior maturity evaluation. The result of this research consists of a structured model linking maturity, success factors and actions, which operates on the following principle: once an organization has assessed its BIM maturity, it can identify various weaknesses and find relevant answers in the success factors and the associated actions.

  13. FRED fuel behaviour code: Main models and analysis of Halden IFA-503.2 tests

    Energy Technology Data Exchange (ETDEWEB)

Mikityuk, K., E-mail: [Paul Scherrer Institute, 5232 Villigen PSI (Switzerland); Shestopalov, A., E-mail: [RRC 'Kurchatov Institute', Kurchatov Sq., 123182 Moscow (Russian Federation)


    Highlights: > We developed a new fuel rod behaviour code named FRED. > Main models and assumptions are described. > The code was checked using the IFA-503.2 tests performed at the Halden reactor. - Abstract: The FRED fuel rod code is being developed for thermal and mechanical simulation of fast breeder reactor (FBR) and light-water reactor (LWR) fuel behaviour under base-irradiation and accident conditions. The current version of the code calculates temperature distribution in fuel rods, stress-strain condition of cladding, fuel deformation, fuel-cladding gap conductance, and fuel rod inner pressure. The code was previously evaluated in the frame of two OECD mixed plutonium-uranium oxide (MOX) fuel performance benchmarks and then integrated into PSI's FAST code system to provide the fuel rod temperatures necessary for the neutron kinetics and thermal-hydraulic modules in transient calculations. This paper briefly overviews basic models and material property database of the FRED code used to assess the fuel behaviour under steady-state conditions. In addition, the code was used to simulate the IFA-503.2 tests, performed at the Halden reactor for two PWR and twelve VVER fuel samples under base-irradiation conditions. This paper presents the results of this simulation for two cases using a code-to-data comparison of fuel centreline temperatures, internal gas pressures, and fuel elongations. This comparison has demonstrated that the code adequately describes the important physical mechanisms of the uranium oxide (UOX) fuel rod thermal performance under steady-state conditions. Future activity should be concentrated on improving the model and extending the validation range, especially to the MOX fuel steady-state and transient behaviour.

  14. RELAP5/MOD3 code manual. Volume 4, Models and correlations

    Energy Technology Data Exchange (ETDEWEB)



    The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I presents modeling theory and associated numerical schemes; Volume II details instructions for code application and input data preparation; Volume III presents the results of developmental assessment cases that demonstrate and verify the models used in the code; Volume IV discusses in detail RELAP5 models and correlations; Volume V presents guidelines that have evolved over the past several years through the use of the RELAP5 code; Volume VI discusses the numerical scheme used in RELAP5; and Volume VII presents a collection of independent assessment calculations.

  15. Capabilities of stochastic rainfall models as data providers for urban hydrology (United States)

    Haberlandt, Uwe


For planning of urban drainage systems using hydrological models, long continuous precipitation series with high temporal resolution are needed. Since observed time series are often too short or not available everywhere, the use of synthetic precipitation is a common alternative. This contribution compares three precipitation models regarding their suitability to provide 5-minute continuous rainfall time series for a) sizing of drainage networks for urban flood protection and b) dimensioning of combined sewage systems for pollution reduction. The rainfall models are a parametric stochastic model (Haberlandt et al., 2008), a non-parametric probabilistic approach (Bárdossy, 1998) and a stochastic downscaling of dynamically simulated rainfall (Berg et al., 2013); all models are operated both as single-site and multi-site generators. The models are applied with regionalised parameters, assuming that there is no station at the target location. Rainfall and discharge characteristics are utilised to evaluate model performance. The simulation results are compared against results obtained from reference rainfall stations not used for parameter estimation. The rainfall simulations are carried out for the federal states of Baden-Württemberg and Lower Saxony in Germany, and the discharge simulations for the drainage networks of the cities of Hamburg, Brunswick and Freiburg. Altogether, the results show comparable simulation performance for the three models, with good capabilities for single-site simulations but low skill for multi-site simulations. Remarkably, there is no significant difference in simulation performance between the flood-protection and pollution-reduction tasks, so the models are able to simulate both the extremes and the long-term characteristics of rainfall equally well. Bárdossy, A., 1998. Generating precipitation time series using simulated annealing. Wat. Resour. Res., 34(7): 1737-1744. Berg, P., Wagner, S., Kunstmann, H., Schädler, G
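The evaluation strategy above, comparing rainfall characteristics of simulated series against reference stations withheld from parameter estimation, can be sketched as follows. The series and the three statistics are toy examples, not the study's full characteristic set.

```python
# Compare summary characteristics of a synthetic 5-min rainfall series (mm)
# against an observed reference series; the relative volume error is one
# simple performance measure. All values below are synthetic.

def characteristics(series):
    wet = [v for v in series if v > 0]
    return {
        "total": sum(series),           # total depth over the window
        "wet_fraction": len(wet) / len(series),
        "max_5min": max(series),        # peak 5-min intensity
    }

observed  = [0, 0, 0.2, 1.1, 0.4, 0, 0, 0.1]
simulated = [0, 0.1, 0.3, 0.9, 0.5, 0, 0, 0]

obs_stats, sim_stats = characteristics(observed), characteristics(simulated)
bias = sim_stats["total"] / obs_stats["total"] - 1.0   # relative volume error
```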

  16. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    Directory of Open Access Journals (Sweden)

    S. Galelli


Full Text Available Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modeling. In this paper we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modeling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalization property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally very efficient; and (iii) allows one to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analyzed on two real-world case studies, the Marina catchment (Singapore) and the Canning River (Western Australia), representing two different morphoclimatic contexts, comparatively with other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirement when adopted on large datasets. In addition, the ranking of the input variables provided can be given a physically meaningful interpretation.

  17. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling (United States)

    Galelli, S.; Castelletti, A.


Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this paper, we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modelling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalisation property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally efficient; and (iii) allows one to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analysed on two real-world case studies - Marina catchment (Singapore) and Canning River (Western Australia) - representing two different morphoclimatic contexts. The evaluation is performed against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirement when adopted on large datasets. In addition, the ranking of the input variables provided can be given a physically meaningful interpretation.
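The key Extra-Trees ingredient, choosing cut points at random rather than by exhaustive search, can be demonstrated from scratch in a few lines. This one-feature stump ensemble is a deliberately minimal sketch, not the authors' setup; real Extra-Trees grow full multi-feature trees.

```python
import random

# Each "tree" here is a depth-1 stump whose cut point is drawn uniformly at
# random (the extremely-randomized split rule); predictions are averaged
# over the ensemble. Data are a synthetic step-like response.

def fit_random_stump(x, y, rng):
    cut = rng.uniform(min(x), max(x))          # randomized cut point
    left  = [yi for xi, yi in zip(x, y) if xi <= cut]
    right = [yi for xi, yi in zip(x, y) if xi > cut]
    mean = lambda v: sum(v) / len(v) if v else sum(y) / len(y)
    return cut, mean(left), mean(right)

def predict_ensemble(stumps, x0):
    preds = [(l if x0 <= cut else r) for cut, l, r in stumps]
    return sum(preds) / len(preds)

rng = random.Random(42)
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.0, 0.1, 0.2, 2.0, 2.1, 2.2]            # step-like response
stumps = [fit_random_stump(x, y, rng) for _ in range(200)]
low, high = predict_ensemble(stumps, 0.5), predict_ensemble(stumps, 4.5)
```

Averaging many cheap randomized splits is what gives the method its combination of low variance and low computational cost.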

  18. An analytical model for source code distributability verification

    Institute of Scientific and Technical Information of China (English)



One way to speed up the execution of sequential programs is to divide them into concurrent segments and execute such segments in parallel over a distributed computing environment. We argue that the execution speedup primarily depends on the degree of concurrency between the identified segments as well as the communication overhead between them. To guarantee the best speedup, we have to obtain the maximum possible concurrency degree between the identified segments while taking communication overhead into consideration. Existing code distributor and multi-threading approaches do not fulfill such requirements; hence, they cannot provide expected distributability gains in advance. To overcome such limitations, we propose a novel approach for verifying the distributability of sequential object-oriented programs. The proposed approach enables users to see the maximum speedup gains before the actual distributability implementation, as it computes an objective function used to measure different distribution values for the same program, taking into consideration both remote and sequential calls. Experimental results showed that the proposed approach successfully determines the distributability of different real-life software applications compared with their real-life sequential and distributed implementations.
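The trade-off described above, concurrency degree versus communication overhead, can be captured by a toy objective function. The cost model and numbers are illustrative, not the paper's actual objective function.

```python
# Estimated speedup of one candidate segmentation: sequential time divided by
# the critical-path time, where the critical path is the slowest segment plus
# the cost of the remote calls it incurs. Cost model and numbers illustrative.

def estimated_speedup(segment_times, comm_cost_per_call, remote_calls):
    sequential = sum(segment_times)
    parallel = max(segment_times) + comm_cost_per_call * remote_calls
    return sequential / parallel

# three concurrent segments, 4 remote calls costing 0.05 s each
s = estimated_speedup([1.0, 0.8, 0.9], 0.05, 4)
```

Evaluating this objective over alternative segmentations lets one rank them, and reject any whose estimated speedup falls below 1, before investing in a distributed implementation.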

  19. Modelling the CANMET and Marchwood furnaces using the PCOC code

    Energy Technology Data Exchange (ETDEWEB)

    Stopford, P.J.; Marriott, N. (AEA Decommissioning and Radioactive Waste, Harwell (UK). Theoretical Studies Dept.)


    Pulverised coal combustion models are validated by detailed comparison with in-flame measurements of velocity, temperatures and species concentrations on two axisymmetric tunnel furnaces. Nitric oxide formation by the thermal and fuel nitrogen mechanisms is also calculated and compared with experiment. The sensitivity of the predictions to the various aspects of the model and the potential for modelling full-scale, power-generating furnaces are discussed. 18 refs., 13 figs.

  20. Review of release models used in source-term codes

    Energy Technology Data Exchange (ETDEWEB)

Song, Jongsoon [Department of Nuclear Engineering, Chosun University, Kwangju (Korea, Republic of)


Throughout this review, the limitations of current release models are identified and ways of improving them are suggested. By incorporating recent experimental results, recommendations for future release modeling activities can be made. All release models under review were compared with respect to the following six items: scenario, assumptions, mathematical formulation, solution method, radioactive decay chains considered, and geometry. The following nine models are considered for review: SOTEC and SCCEX (CNWRA), DOE/INTERA, TSPA (SNL), Vault Model (AECL), CCALIBRE (SKI), AREST (PNL), Risk Assessment (EPRI), and TOSPAC (SNL). (author)

  1. Comparison of a Coupled Near and Far Wake Model With a Free Wake Vortex Code

    DEFF Research Database (Denmark)

    Pirrung, Georg; Riziotis, Vasilis; Aagaard Madsen, Helge


This paper presents the integration of a near wake model for trailing vorticity, which is based on a prescribed wake lifting line model proposed by Beddoes, with a BEM-based far wake model and a 2D shed vorticity model. The resulting coupled aerodynamics model is validated against lifting surface ... computations performed using a free wake panel code. The focus of the description of the aerodynamics model is on the numerical stability, the computation speed and the accuracy of unsteady simulations. To stabilize the near wake model, it has to be iterated to convergence, using a relaxation factor that has ... induction modeling at slow time scales. Finally, the unsteady airfoil aerodynamics model is extended to provide the unsteady bound circulation for the near wake model and to improve the modeling of the unsteady behavior of cambered airfoils. The model comparison with results from a free wake panel code ...

  2. CERT Resilience Management Model Capability Appraisal Method (CAM) Version 1.1 (United States)


used codes of practice such as the ISO27000 series, NIST special publications, ITIL, BS25999, or COBIT. These point-in-time reviews using codes of ... (CAP); SANS Institute (GIAC, GSEC); Disaster Recovery Institute (CBCP, MBCP); Business Continuity Institute (CBCI, MBCI); itSMF (ITIL); PMI

  3. A computer code for calculations in the algebraic collective model of the atomic nucleus (United States)

    Welsh, T. A.; Rowe, D. J.


A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) × SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed. This range includes essentially all Hamiltonians that are rational functions of the model's quadrupole moments q̂_M and are at most quadratic in the corresponding conjugate momenta π̂_N (−2 ≤ M, N ≤ 2). The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [π̂ ⊗ q̂ ⊗ π̂]_0 and [π̂ ⊗ π̂]_{LM}. The code is made efficient by the use of an analytical expression for the needed SO(5)-reduced matrix elements, and the use of SO(5) ⊃ SO(3) Clebsch-Gordan coefficients obtained from precomputed data files provided with the code.

  4. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    Energy Technology Data Exchange (ETDEWEB)

    Blyth, Taylor S. [Pennsylvania State Univ., University Park, PA (United States); Avramova, Maria [North Carolina State Univ., Raleigh, NC (United States)


The research described in this PhD thesis contributes to the development of efficient methods for the utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named the Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects introduced by the presence of spacer grids in light water reactor (LWR) cores are dissected into four basic building processes, and the corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.

  5. Dense Coding in a Two-Spin Squeezing Model with Intrinsic Decoherence (United States)

    Zhang, Bing-Bing; Yang, Guo-Hui


Quantum dense coding in a two-spin squeezing model under intrinsic decoherence is investigated for different initial states (the Werner state and the Bell state). The dense coding capacity χ oscillates with time and finally reaches different stable values. χ can be enhanced by decreasing the magnetic field Ω and the intrinsic decoherence rate γ or by increasing the squeezing interaction μ; moreover, a valid dense coding capacity (χ > 1) can be obtained by modulating these parameters. The stable value of χ reveals that decoherence cannot entirely destroy the dense coding capacity. In addition, decreasing Ω or increasing μ not only enhances the stable value of χ but also weakens the effects of decoherence. When the initial state is the Werner state, the purity r of the initial state plays a key role in adjusting the dense coding capacity, and χ can be significantly increased by improving the purity. When the initial state is the Bell state, a spin squeezing interaction that is large compared with the magnetic field guarantees optimal dense coding. A valid dense coding capacity cannot always be achieved for the Werner state, whereas for the Bell state χ remains above 1.
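The capacity discussed above can be illustrated numerically for the two-qubit case. The sketch below (a toy illustration, not the paper's code) uses the standard Holevo-type expression χ = log₂ d + S(ρ_B) − S(ρ_AB) and the Werner-state family; for r = 1 (a Bell state) it gives the ideal χ = 2:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -sum_i lambda_i log2(lambda_i) over nonzero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def werner_state(r):
    """Two-qubit Werner state: r |psi-><psi-| + (1 - r)/4 * I."""
    psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
    return r * np.outer(psi_minus, psi_minus) + (1.0 - r) / 4.0 * np.eye(4)

def dense_coding_capacity(rho):
    """chi = log2(d_A) + S(rho_B) - S(rho_AB), with d_A = 2 for a qubit sender."""
    rho4 = rho.reshape(2, 2, 2, 2)
    rho_B = np.einsum('ijik->jk', rho4)   # partial trace over subsystem A
    return 1.0 + von_neumann_entropy(rho_B) - von_neumann_entropy(rho)
```

For the Werner state the reduced state ρ_B is maximally mixed, so χ = 2 − S(ρ_AB), which is above 1 only for sufficiently high purity r, in line with the abstract's observation.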

  6. Stimulus-dependent maximum entropy models of neural population codes. (United States)

    Granot-Atedgi, Einat; Tkačik, Gašper; Segev, Ronen; Schneidman, Elad


Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.
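For a toy population, the form of such a stimulus-dependent pairwise maximum entropy distribution can be written down by direct enumeration. In this sketch the couplings and the stimulus-driven fields are random illustrative values, not fitted parameters; the real model fits them to 100-cell retinal data:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
N = 5                                   # toy population of 5 cells
J = np.triu(rng.normal(0.0, 0.3, size=(N, N)), 1)  # illustrative pairwise couplings

def sdme_distribution(h):
    """P(sigma | s) ~ exp(h(s).sigma + sigma.J.sigma) over all 2^N binary words.
    h is the stimulus-dependent single-cell field, e.g. from a linear filter of s."""
    words = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
    energies = words @ h + np.einsum('wi,ij,wj->w', words, J, words)
    p = np.exp(energies)
    return words, p / p.sum()

# the stimulus enters only through the single-cell fields, as in an LN model
stimulus = 1.5
h = stimulus * rng.normal(size=N)       # hypothetical linear filters applied to s
words, p = sdme_distribution(h)
```

Enumeration is feasible only for small N; for populations of ~100 cells the paper must rely on approximate inference rather than summing 2^100 codewords.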

  7. Stimulus-dependent maximum entropy models of neural population codes.

    Directory of Open Access Journals (Sweden)

    Einat Granot-Atedgi

Full Text Available Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.

  8. Code modernization and modularization of APEX and SWAT watershed simulation models (United States)

SWAT (Soil and Water Assessment Tool) and APEX (Agricultural Policy / Environmental eXtender) are respectively large and small watershed simulation models derived from EPIC (Environmental Policy Integrated Climate), a field-scale agroecology simulation model. All three models are coded in FORTRAN an...

  9. A Hidden Markov Model method, capable of predicting and discriminating β-barrel outer membrane proteins

    Directory of Open Access Journals (Sweden)

    Hamodrakas Stavros J


Full Text Available Abstract Background Integral membrane proteins constitute about 20–30% of all proteins in the fully sequenced genomes. They come in two structural classes, the α-helical and the β-barrel membrane proteins, demonstrating different physicochemical characteristics, structure and localization. While transmembrane segment prediction for the α-helical integral membrane proteins appears to be an easy task nowadays, the same is much more difficult for the β-barrel membrane proteins. We developed a method, based on a Hidden Markov Model, capable of predicting the transmembrane β-strands of the outer membrane proteins of gram-negative bacteria, and discriminating those from water-soluble proteins in large datasets. The model is trained in a discriminative manner, aiming at maximizing the probability of correct predictions rather than the likelihood of the sequences. Results The training has been performed on a non-redundant database of 14 outer membrane proteins with structures known at atomic resolution; it has been tested with a jackknife procedure, yielding a per-residue accuracy of 84.2% and a correlation coefficient of 0.72, whereas for the self-consistency test the per-residue accuracy was 88.1% and the correlation coefficient 0.824. The total number of correctly predicted topologies is 10 out of 14 in the self-consistency test, and 9 out of 14 in the jackknife test. Furthermore, the model is capable of discriminating outer membrane from water-soluble proteins in large-scale applications, with success rates of 88.8% and 89.2% for the correct classification of outer membrane and water-soluble proteins, respectively, the highest rates obtained in the literature. That test has been performed independently on a set of known outer membrane proteins with low sequence identity with each other and also with the proteins of the training set. Conclusion Based on the above, we developed a strategy that enabled us to screen the entire proteome of E. coli for
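The decoding step of such an HMM can be illustrated with a minimal two-state Viterbi sketch, labeling residues as membrane strand (M) or loop (L) from a toy hydrophobic/polar alphabet. The probabilities below are invented for illustration; the actual model has a far richer, discriminatively trained state architecture:

```python
import numpy as np

# toy two-state HMM: 'M' (transmembrane strand) vs 'L' (loop)
states = ['M', 'L']
trans = np.log(np.array([[0.8, 0.2],     # M->M, M->L
                         [0.3, 0.7]]))   # L->M, L->L
# emissions over a 2-letter alphabet: H (hydrophobic), P (polar)
emit = {'M': np.log(np.array([0.7, 0.3])),
        'L': np.log(np.array([0.2, 0.8]))}
start = np.log(np.array([0.5, 0.5]))

def viterbi(seq):
    """Most probable state path for a sequence over {'H', 'P'}."""
    obs = [0 if c == 'H' else 1 for c in seq]
    V = start + np.array([emit[s][obs[0]] for s in states])
    back = []
    for o in obs[1:]:
        scores = V[:, None] + trans            # scores[i, j]: state i -> state j
        back.append(scores.argmax(axis=0))
        V = scores.max(axis=0) + np.array([emit[s][o] for s in states])
    path = [int(V.argmax())]
    for bp in reversed(back):                  # follow back-pointers
        path.append(int(bp[path[-1]]))
    return ''.join(states[i] for i in reversed(path))
```

A run of hydrophobic residues followed by polar ones decodes to a strand followed by a loop, the same qualitative behavior the trained model exploits at scale.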

  10. Capability of Spaceborne Hyperspectral EnMAP Mission for Mapping Fractional Cover for Soil Erosion Modeling

    Directory of Open Access Journals (Sweden)

    Sarah Malec


Full Text Available Soil erosion can be linked to the relative fractional cover of photosynthetic-active vegetation (PV), non-photosynthetic-active vegetation (NPV) and bare soil (BS), which can be integrated into erosion models as the cover-management C-factor. This study investigates the capability of EnMAP imagery to map fractional cover in a region near San Jose, Costa Rica, characterized by spatially extensive coffee plantations and grazing in mountainous terrain. Simulated EnMAP imagery is based on airborne hyperspectral HyMap data. Fractional cover estimates are derived in an automated fashion by extracting image endmembers to be used with a Multiple Endmember Spectral Mixture Analysis approach. The C-factor is calculated based on the fractional cover estimates determined independently for EnMAP and HyMap. Results demonstrate that with EnMAP imagery it is possible to extract quality endmember classes with important spectral features related to PV, NPV and soil, and to estimate relative cover fractions. This spectral information is critical to separate BS and NPV, which can greatly impact the C-factor derivation. From a regional perspective, EnMAP can provide good fractional cover estimates that can be integrated into soil erosion modeling.
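The core of a spectral-mixture fractional-cover estimate is a constrained linear unmixing. Below is a minimal sketch with hypothetical four-band endmember spectra and a soft sum-to-one constraint, not the multiple-endmember (MESMA) implementation used in the study:

```python
import numpy as np

# hypothetical endmember spectra: rows are bands, columns are PV, NPV, BS
E = np.array([[0.05, 0.30, 0.40],
              [0.45, 0.35, 0.42],
              [0.10, 0.32, 0.45],
              [0.50, 0.28, 0.38]])

def unmix(pixel, E, w=100.0):
    """Linear unmixing with a soft sum-to-one constraint of weight w.
    Solves min ||E f - pixel||^2 + w^2 (sum(f) - 1)^2 by least squares."""
    A = np.vstack([E, w * np.ones(E.shape[1])])
    b = np.append(pixel, w)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

# a synthetic pixel that is 60% PV, 30% NPV, 10% BS by construction
truth = np.array([0.6, 0.3, 0.1])
pixel = E @ truth
f = unmix(pixel, E)
```

MESMA additionally searches over candidate endmember sets per pixel and enforces non-negativity; the sketch only shows the linear-mixing kernel those refinements wrap around.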

  11. The aeroelastic code HawC - model and comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Thirstrup Petersen, J. [Risoe National Lab., The Test Station for Wind Turbines, Roskilde (Denmark)


    A general aeroelastic finite element model for simulation of the dynamic response of horizontal axis wind turbines is presented. The model has been developed with the aim to establish an effective research tool, which can support the general investigation of wind turbine dynamics and research in specific areas of wind turbine modelling. The model concentrates on the correct representation of the inertia forces in a form, which makes it possible to recognize and isolate effects originating from specific degrees of freedom. The turbine structure is divided into substructures, and nonlinear kinematic terms are retained in the equations of motion. Moderate geometric nonlinearities are allowed for. Gravity and a full wind field including 3-dimensional 3-component turbulence are included in the loading. Simulation results for a typical three bladed, stall regulated wind turbine are presented and compared with measurements. (au)

  12. Integrated Analysis Capability (IAC) (United States)

    Frisch, H. P.


    The objective of the Integrated Analysis Capability (IAC) system is to provide a highly effective, interactive analysis tool for the integrated design of large structures. With the goal of supporting the unique needs of engineering analysis groups concerned with interdisciplinary problems, IAC was developed to interface programs from the fields of structures, thermodynamics, controls, and system dynamics with an executive system and database to yield a highly efficient multi-disciplinary system. Special attention is given to user requirements such as data handling and on-line assistance with operational features, and the ability to add new modules of the user's choice at a future date. IAC contains an executive system, a data base, general utilities, interfaces to various engineering programs, and a framework for building interfaces to other programs. IAC has shown itself to be effective in automatic data transfer among analysis programs. IAC 2.5, designed to be compatible as far as possible with Level 1.5, contains a major upgrade in executive and database management system capabilities, and includes interfaces to enable thermal, structures, optics, and control interaction dynamics analysis. The IAC system architecture is modular in design. 1) The executive module contains an input command processor, an extensive data management system, and driver code to execute the application modules. 2) Technical modules provide standalone computational capability as well as support for various solution paths or coupled analyses. 3) Graphics and model generation interfaces are supplied for building and viewing models. Advanced graphics capabilities are provided within particular analysis modules such as INCA and NASTRAN. 4) Interface modules provide for the required data flow between IAC and other modules. 5) User modules can be arbitrary executable programs or JCL procedures with no pre-defined relationship to IAC. 
6) Special purpose modules are included, such as MIMIC (Model

  13. Evaluation of Computational Codes for Underwater Hull Analysis Model Applications (United States)


vice versa, or scaling the model to be physical scale model (PSM) size instead of full size. Another common geometric change is translating the...anodes and cathodes are generally the boundary conditions to the solution. The ability to link anodes to reference cells to mimic PSM testing and full...could be developed that allows the user to mimic how anode values are set on shipboard systems. Since much PSM experimental work uses shipboard system

  14. A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects (United States)

    Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.


Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as x86 architecture, existing numerical codes cannot be easily migrated to run on GPU. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in speed of astrophysical simulations with SPH and self-gravity at low costs for new hardware. Methods: We have implemented the SPH equations to model gas, liquids and elastic, and plastic solid bodies and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. miluph is pronounced [maɪlʌv]. We do not support the use of the code for military purposes.
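The density summation at the heart of any SPH code can be sketched compactly. The kernel below is the standard M4 cubic spline in 3D, and the direct O(N²) summation stands in for the Barnes-Hut tree and CUDA kernels of the actual code:

```python
import numpy as np

def cubic_spline_W(r, h):
    """Standard M4 cubic spline SPH kernel in 3D (support radius 2h)."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)          # 3D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(pos, m, h):
    """rho_i = sum_j m_j W(|r_i - r_j|, h) by direct summation (no tree)."""
    diff = pos[:, None, :] - pos[None, :, :]
    r = np.sqrt((diff**2).sum(axis=-1))
    return (m[None, :] * cubic_spline_W(r, h)).sum(axis=1)
```

The kernel integrates to unity over its support, which is what makes the summation a consistent density estimate; GPU codes replace the all-pairs distance matrix with neighbor lists.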

  15. Once-through CANDU reactor models for the ORIGEN2 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Croff, A.G.; Bjerke, M.A.


Reactor physics calculations have led to the development of two CANDU reactor models for the ORIGEN2 computer code. The model CANDUs are based on (1) the existing once-through fuel cycle with feed comprised of natural uranium and (2) a projected slightly enriched (1.2 wt % ²³⁵U) fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models, as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST, are given.
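The role of the flux parameters can be illustrated by the one-group folding they enter. The sketch below follows the spectral treatment as commonly described for ORIGEN2 (thermal 2200 m/s cross section scaled by THERM, resonance integral by RES, fast threshold cross section by FAST); the exact form should be confirmed against the code's manual:

```python
def effective_cross_section(sigma_2200, RI, sigma_fast, THERM, RES, FAST):
    """One-group effective cross section (sketch of the ORIGEN2-style
    spectral treatment): 2200 m/s thermal value scaled by THERM, the
    resonance integral scaled by RES, and the fission-spectrum-averaged
    threshold cross section scaled by FAST. Cross sections in barns;
    the flux parameters are dimensionless spectrum descriptors."""
    return THERM * sigma_2200 + RES * RI + FAST * sigma_fast
```

For example, illustrative values σ₂₂₀₀ = 10 b, RI = 100 b, σ_fast = 0.5 b with THERM = 1.0, RES = 0.2, FAST = 1.5 fold to 30.75 b.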

  16. New higher-order Godunov code for modelling performance of two-stage light gas guns (United States)

    Bogdanoff, D. W.; Miller, R. J.


    A new quasi-one-dimensional Godunov code for modeling two-stage light gas guns is described. The code is third-order accurate in space and second-order accurate in time. A very accurate Riemann solver is used. Friction and heat transfer to the tube wall for gases and dense media are modeled and a simple nonequilibrium turbulence model is used for gas flows. The code also models gunpowder burn in the first-stage breech. Realistic equations of state (EOS) are used for all media. The code was validated against exact solutions of Riemann's shock-tube problem, impact of dense media slabs at velocities up to 20 km/sec, flow through a supersonic convergent-divergent nozzle and burning of gunpowder in a closed bomb. Excellent validation results were obtained. The code was then used to predict the performance of two light gas guns (1.5 in. and 0.28 in.) in service at the Ames Research Center. The code predictions were compared with measured pressure histories in the powder chamber and pump tube and with measured piston and projectile velocities. Very good agreement between computational fluid dynamics (CFD) predictions and measurements was obtained. Actual powder-burn rates in the gun were found to be considerably higher (60-90 percent) than predicted by the manufacturer and the behavior of the piston upon yielding appears to differ greatly from that suggested by low-strain rate tests.
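The building block of such a code is the Godunov finite-volume update. Below is a first-order sketch for scalar linear advection, where the exact Riemann solver reduces to upwinding; the code described above is third-order accurate and solves the full gas-dynamics equations with realistic equations of state:

```python
import numpy as np

def godunov_advection(u0, a, dx, dt, nsteps):
    """First-order Godunov scheme for u_t + a u_x = 0 on a periodic domain.
    For linear advection the exact Riemann problem at each interface is
    solved by taking the upwind state."""
    u = u0.copy()
    nu = a * dt / dx                     # CFL number, must satisfy |nu| <= 1
    assert abs(nu) <= 1.0
    for _ in range(nsteps):
        if a >= 0.0:
            flux = a * u                 # interface flux from the left state
        else:
            flux = a * np.roll(u, -1)    # interface flux from the right state
        u -= dt / dx * (flux - np.roll(flux, 1))
    return u
```

With a unit CFL number the update is exact (each step shifts the profile one cell), and the scheme is conservative by construction, properties the higher-order gun-code solver inherits from the same flux-difference form.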

  17. On models of the genetic code generated by binary dichotomic algorithms. (United States)

    Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz


In this paper we introduce the concept of a BDA-generated model of the genetic code, which is based on binary dichotomic algorithms (BDAs). Such a BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes, ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible. However, we present models that describe the standard genetic code with only a few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available online and requires a JVM version 6 or higher.
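A single binary dichotomic question already produces the 32/32 partition described above. The Rumer-like question below (is a given base C or G?) is one illustrative BDA, not the paper's full chain of sequentially applied algorithms:

```python
from itertools import product

codons = [''.join(p) for p in product('ACGU', repeat=3)]   # all 64 codons

def bda(codon, pos=0, question=frozenset('CG')):
    """One binary dichotomic question: is the base at `pos` in `question`?
    Chains of such questions generate the BDA models of the paper; this
    single strong/weak-base question is just an illustrative example."""
    return int(codon[pos] in question)

class0 = [c for c in codons if bda(c) == 0]   # first base A or U
class1 = [c for c in codons if bda(c) == 1]   # first base C or G
```

Applying further questions to each class splits the code table into 4, 8, ... classes, which is how the models with between 2 and 64 classes arise.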

  18. Coded Random Access

    DEFF Research Database (Denmark)

    Paolini, Enrico; Stefanovic, Cedomir; Liva, Gianluigi


The rise of machine-to-machine communications has rekindled the interest in random access protocols as a support for a massive number of uncoordinatedly transmitting devices. The legacy ALOHA approach is developed under a collision model, where slots containing collided packets are considered as waste. However, if the common receiver (e.g., base station) is capable of storing the collision slots and using them in a transmission recovery process based on successive interference cancellation, the design space for access protocols is radically expanded. We present the paradigm of coded random access, in which the structure of the access protocol can be mapped to a structure of an erasure-correcting code defined on a graph. This opens the possibility to use coding theory and tools for designing efficient random access protocols, offering markedly better performance than ALOHA. Several instances of coded...
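The successive interference cancellation receiver that expands the design space can be sketched as iterative peeling of singleton slots, exactly analogous to erasure decoding on a graph. A toy frame-based example with two replicas per user (slot choices and frame sizes are illustrative):

```python
import random

def sic_decode(transmissions, n_slots):
    """Iterative successive interference cancellation: repeatedly find a
    slot with exactly one remaining packet, decode that user, and cancel
    (remove) all of that user's replicas from the stored frame."""
    slots = [set() for _ in range(n_slots)]
    for user, chosen in enumerate(transmissions):
        for s in chosen:
            slots[s].add(user)
    decoded = set()
    progress = True
    while progress:
        progress = False
        for s in range(n_slots):
            if len(slots[s]) == 1:                 # singleton slot: decodable
                user = next(iter(slots[s]))
                decoded.add(user)
                for t in transmissions[user]:      # cancel all replicas
                    slots[t].discard(user)
                progress = True
    return decoded

random.seed(1)
n_users, n_slots = 6, 12
tx = [random.sample(range(n_slots), 2) for _ in range(n_users)]
decoded = sic_decode(tx, n_slots)
```

Two users colliding in exactly the same slots form a stopping set and remain undecoded, which is the random-access counterpart of an uncorrectable erasure pattern.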

  19. Representing Resources in Petri Net Models: Hardwiring or Soft-coding?



This paper presents an interesting design problem encountered in developing a new tool for discrete-event dynamic systems (DEDS). The new tool, known as GPenSIM, was developed for modeling and simulation of DEDS and is based on Petri nets. The design issue discussed in this paper is whether to represent resources in DEDS hardwired as part of the Petri net structure (which is the widespread practice) or to soft-code them as common variables in the program code. This paper shows that soft-coding resources giv...
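The design alternative can be made concrete: below, a resource is soft-coded as a plain variable consulted by the firing rule, rather than hardwired as an extra place in the net structure. This is an illustrative sketch, not GPenSIM code (GPenSIM itself is MATLAB-based):

```python
class PetriNet:
    """Minimal place/transition net with a soft-coded resource pool."""

    def __init__(self, marking, resources):
        self.marking = dict(marking)        # tokens per place (net structure)
        self.resources = dict(resources)    # soft-coded resources (variables)

    def fire(self, pre, post, needs):
        """Fire a transition if every input place holds a token AND every
        needed resource is free; resources are claimed outside the net."""
        if all(self.marking.get(p, 0) > 0 for p in pre) and \
           all(self.resources.get(r, 0) > 0 for r in needs):
            for p in pre:
                self.marking[p] -= 1
            for p in post:
                self.marking[p] = self.marking.get(p, 0) + 1
            for r in needs:
                self.resources[r] -= 1
            return True
        return False

net = PetriNet({'queue': 2, 'done': 0}, {'machine': 1})
```

With one machine, a second firing attempt blocks even though tokens remain in `queue`, reproducing the resource constraint without adding a resource place and its arcs to the net graph.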

  20. Implementing the WebSocket Protocol Based on Formal Modelling and Automated Code Generation

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael


...with pragmatic annotations for automated code generation of protocol software. The contribution of this paper is an application of the approach as implemented in the PetriCode tool to obtain protocol software implementing the IETF WebSocket protocol. This demonstrates the scalability of our approach to real protocols. Furthermore, we perform formal verification of the CPN model prior to code generation, and test the implementation for interoperability against the Autobahn WebSocket test-suite, resulting in 97% and 99% success rates for the client and server implementations, respectively. The tests show...
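One concrete, fully specified piece of the protocol implemented here is the opening-handshake accept key from RFC 6455 (section 4.2.2). The sketch below is independent of the PetriCode-generated software and uses only the standard library:

```python
import base64
import hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"   # fixed GUID from RFC 6455

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept header value a server must return
    in the opening handshake: base64(SHA-1(key + GUID))."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")
```

The RFC's own example key `dGhlIHNhbXBsZSBub25jZQ==` must map to `s3pPLMBiTxaQ9kYGzzhZRbK+xOo=`, which is the kind of deterministic behavior the Autobahn interoperability suite checks.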

  1. ABAREX -- A neutron spherical optical-statistical-model code -- A user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Smith, A.B. [ed.; Lawson, R.D.


    The contemporary version of the neutron spherical optical-statistical-model code ABAREX is summarized with the objective of providing detailed operational guidance for the user. The physical concepts involved are very briefly outlined. The code is described in some detail and a number of explicit examples are given. With this document one should very quickly become fluent with the use of ABAREX. While the code has operated on a number of computing systems, this version is specifically tailored for the VAX/VMS work station and/or the IBM-compatible personal computer.

  2. Improving high-altitude EMP modeling capabilities by using a non-equilibrium electron swarm model to monitor conduction electron evolution (United States)

    Pusateri, Elise Noel

    abruptly. The objective of the PhD research is to mitigate this effect by integrating a conduction electron model into CHAP-LA which can calculate the conduction current based on a non-equilibrium electron distribution. We propose to use an electron swarm model to monitor the time evolution of conduction electrons in the EMP environment which is characterized by electric field and pressure. Swarm theory uses various collision frequencies and reaction rates to study how the electron distribution and the resultant transport coefficients change with time, ultimately reaching an equilibrium distribution. Validation of the swarm model we develop is a necessary step for completion of the thesis work. After validation, the swarm model is integrated in the air chemistry model CHAP-LA employs for conduction electron simulations. We test high altitude EMP simulations with the swarm model option in the air chemistry model to show improvements in the computational capability of CHAP-LA. A swarm model has been developed that is based on a previous swarm model developed by Higgins, Longmire and O'Dell 1973, hereinafter HLO. The code used for the swarm model calculation solves a system of coupled differential equations for electric field, electron temperature, electron number density, and drift velocity. Important swarm parameters, including the momentum transfer collision frequency, energy transfer collision frequency, and ionization rate, are recalculated and compared to the previously reported empirical results given by HLO. These swarm parameters are found using BOLSIG+, a two term Boltzmann solver developed by Hagelaar and Pitchford 2005. BOLSIG+ utilizes updated electron scattering cross sections that are defined over an expanded energy range found in the atomic and molecular cross section database published by Phelps in the Phelps Database 2014 on the LXcat website created by Pancheshnyi et al. 2012. 
The swarm model is also updated from the original HLO model by including

  3. User's manual for MODCAL: Bounding surface soil plasticity model calibration and prediction code, volume 2 (United States)

DeNatale, J. S.; Herrmann, L. R.; Dafalias, Y. F.


    In order to reduce the complexity of the model calibration process, a computer-aided automated procedure has been developed and tested. The computer code employs a Quasi-Newton optimization strategy to locate that set of parameter values which minimizes the discrepancy between the model predictions and the experimental observations included in the calibration data base. Through application to a number of real soils, the automated procedure has been found to be an efficient, reliable and economical means of accomplishing model calibration. Although the code was developed specifically for use with the Bounding Surface plasticity model, it can readily be adapted to other constitutive formulations. Since the code greatly reduces the dependence of calibration success on user expertise, it significantly increases the accessibility and usefulness of sophisticated material models to the general engineering community.
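The calibration loop can be illustrated with a Gauss-Newton iteration on a toy two-parameter model. The response function below is a hypothetical stand-in for the Bounding Surface model, and the scheme is a simplified relative of MODCAL's Quasi-Newton strategy:

```python
import numpy as np

def calibrate(x, y_obs, theta0, n_iter=100):
    """Gauss-Newton calibration sketch: fit parameters (a, b) of the
    illustrative response model y = a * (1 - exp(-b * x)) by minimizing
    the squared misfit between predictions and observations."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        a, b = theta
        pred = a * (1.0 - np.exp(-b * x))
        r = y_obs - pred
        # Jacobian of the predictions with respect to (a, b)
        J = np.column_stack([1.0 - np.exp(-b * x), a * x * np.exp(-b * x)])
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        theta += step
    return theta

x = np.linspace(0.1, 5.0, 40)
true_params = np.array([2.0, 0.7])
y = true_params[0] * (1.0 - np.exp(-true_params[1] * x))   # noiseless "test data"
theta = calibrate(x, y, theta0=[1.0, 1.0])
```

On noiseless synthetic data the iteration recovers the generating parameters; real calibration data adds noise and model error, which is where the optimization-based framing pays off over manual tuning.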

  4. Validation of vortex code viscous models using lidar wake measurements and CFD

    DEFF Research Database (Denmark)

    Branlard, Emmanuel; Machefaux, Ewan; Gaunaa, Mac;


The newly implemented vortex code Omnivor, coupled to the aero-servo-elastic tool hawc2, is described in this paper. Vortex wake improvements through the implementation of viscous effects are considered. Different viscous models are implemented and compared with each other. Turbulent flow fields with sheared inflow are used to compare the vortex code performance with CFD and lidar measurements. Laminar CFD computations are used to evaluate the performance of the viscous models. Consistent results between the vortex code and the CFD tool are obtained up to three diameters downstream. The modelling of viscous boundaries appears more important than the modelling of viscosity in the wake. External turbulence and shear appear sufficient, but their full potential-flow modelling would be preferred.
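A classic viscous-core model of the kind compared in such vortex codes is the Lamb-Oseen vortex, whose tangential velocity regularizes the potential-vortex singularity at the axis. A minimal sketch (the circulation and viscosity values are illustrative, not from the paper):

```python
import numpy as np

def lamb_oseen_utheta(r, t, Gamma=1.0, nu=1e-3):
    """Tangential velocity of a viscous (Lamb-Oseen) vortex:
    u_theta = Gamma / (2 pi r) * (1 - exp(-r^2 / (4 nu t))).
    The core radius grows diffusively as sqrt(4 nu t)."""
    return Gamma / (2.0 * np.pi * r) * (1.0 - np.exp(-r**2 / (4.0 * nu * t)))
```

Far from the core the profile recovers the inviscid Γ/(2πr) law, while inside the core the velocity goes smoothly to zero, which is what keeps induced velocities bounded in a vortex-filament wake.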

  5. Improvement of Interfacial Heat Transfer Model and Correlations in SPACE Code

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Sung Won; Kim, Kyung Du [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)


The SPACE code development project has proceeded successfully since 2006. The first stage of the development program was finished in April 2010. During the first stage, the main logic and conceptual structure were established under the support of the Korea Ministry of Knowledge and Economy. The second stage focuses on assessing the physical models and correlations of the SPACE code using well-known SET problems. A problem selection process was performed under the leadership of KEPRI, which listed suitable SET problems according to the individual assessment purposes. Among the SET problems, the MIT pressurizer test revealed improper results from the SPACE code. This paper introduces the problem found during the MIT pressurizer test assessment and the process of resolving it in the interfacial heat transfer model and correlations of the SPACE code.

  6. User manual for ATILA, a finite-element code for modeling piezoelectric transducers (United States)

    Decarpigny, Jean-Noel; Debus, Jean-Claude


This manual for the user of the finite-element code ATILA provides instruction for entering information and running the code on a VAX computer. The manual does not include the code. The finite element code ATILA has been specifically developed to aid the design of piezoelectric devices, mainly for sonar applications. Thus, it is able to perform the modal analyses of both axisymmetrical and fully three-dimensional piezoelectric transducers. It can also provide their harmonic response under radiating conditions: nearfield and farfield pressure, transmitting voltage response, directivity pattern, electrical impedance, as well as displacement field, nodal plane positions, stress field and various stress criteria... Its accuracy and its ability to describe the physical behavior of various transducers (Tonpilz transducers, double headmass symmetrical length expanders, free flooded rings, flextensional transducers, bender bars, cylindrical and trilaminar hydrophones...) have been checked by modelling more than twenty different structures and comparing numerical and experimental results.

  7. A model of a code of ethics for tissue banks operating in developing countries. (United States)

    Morales Pedraza, Jorge


    Ethical practice in the field of tissue banking requires the setting of principles, the identification of possible deviations and the establishment of mechanisms that will detect and hinder abuses that may occur during the procurement, processing and distribution of tissues for transplantation. This model of a Code of Ethics has been prepared with the purpose of being used for the elaboration of a Code of Ethics for tissue banks operating in the Latin American and the Caribbean, Asia and the Pacific and the African regions in order to guide the day-to-day operation of these banks. The purpose of this model of Code of Ethics is to assist interested tissue banks in the preparation of their own Code of Ethics towards ensuring that the tissue bank staff support with their actions the mission and values associated with tissue banking.

  8. PWR hot leg natural circulation modeling with MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jae Hong; Lee, Jong In [Korea Institute of Nuclear Safety, Taejon (Korea, Republic of)


Previous MELCOR and SCDAP/RELAP5 nodalizations for simulating the counter-current natural circulation behavior of vapor flow within the RCS hot legs and SG U-tubes as core damage progresses cannot be applied to the steady-state, water-filled conditions during the initial period of accident progression. This is because of the artificially high loss coefficients in the hot legs and SG U-tubes, which were chosen from the results of COMMIX calculations and the Westinghouse natural circulation experiments in a 1/7-scale facility for simulating steam natural circulation behavior in the vessel. This work presents a circulation modeling which can be used both for the liquid flow condition at steady state and for the vapor flow condition in the later period of in-vessel core damage. For this, the drag forces resulting from the momentum exchange effects between the two vapor streams in the hot leg were modeled as a pressure drop by a pump model. This hot leg natural circulation modeling of MELCOR was able to reproduce mass flow rates similar to those predicted by previous models. 6 refs., 2 figs. (Author)

  9. Preliminary Modeling of Air Breakdown with the ICEPIC code

    CERN Document Server

    Schulz, A E; Cartwright, K L; Mardahl, P J; Peterkin, R E; Bruner, N; Genoni, T; Hughes, T P; Welch, D


    Interest in air breakdown phenomena has recently been re-kindled with the advent of advanced virtual prototyping of radio frequency (RF) sources for use in high power microwave (HPM) weapons technology. Air breakdown phenomena are of interest because the formation of a plasma layer at the aperture of an RF source decreases the transmitted power to the target, and in some cases can cause significant reflection of RF radiation. Understanding the mechanisms behind the formation of such plasma layers will aid in the development of maximally effective sources. This paper begins with some of the basic theory behind air breakdown, and describes two independent approaches to modeling the formation of plasmas, the dielectric fluid model and the Particle in Cell (PIC) approach. Finally we present the results of preliminary studies in numerical modeling and simulation of breakdown.

  10. Atmospheric Transport Modeling with 3D Lagrangian Dispersion Codes Compared with SF6 Tracer Experiments at Regional Scale

    Directory of Open Access Journals (Sweden)

    François Van Dorpe


Full Text Available The results of four gas tracer experiments of atmospheric dispersion at regional scale are used for benchmarking two atmospheric dispersion modeling codes, MINERVE-SPRAY (CEA) and NOSTRADAMUS (IBRAE). The main aim of this comparison is to estimate the capability of Lagrangian codes to predict radionuclide atmospheric transfer over a large area, for example in risk assessments for nuclear power plants. For the four experiments, the results of the calculations show rather good agreement between the two codes, and the order of magnitude of the concentrations measured at ground level is predicted. Simulation is best for sampling points located ten kilometers from the source, while a divergence is noted for more distant points (differences in concentration by a factor of 2 to 5). This divergence may be explained by the fact that, for these four experiments, only one weather station (near the point source) was used over a field of 10,000 km2, resulting in the simulation of a uniform wind field throughout the calculation domain.
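The Lagrangian dispersion idea being benchmarked can be sketched as a particle random walk in a wind field. The uniform wind below mirrors the single-weather-station limitation noted above; all parameter values are illustrative, and real codes use 3D heterogeneous winds and turbulence closures:

```python
import numpy as np

rng = np.random.default_rng(42)

def lagrangian_puff(n_particles, n_steps, dt, wind, sigma):
    """Minimal Lagrangian particle dispersion: each particle is advected by
    a (here uniform) wind plus an uncorrelated Gaussian turbulent step of
    standard deviation sigma * sqrt(dt) per component and per step."""
    pos = np.zeros((n_particles, 3))
    for _ in range(n_steps):
        pos += wind * dt + sigma * np.sqrt(dt) * rng.normal(size=pos.shape)
    return pos

wind = np.array([5.0, 0.0, 0.0])        # m/s, single-station uniform wind
pos = lagrangian_puff(20000, 100, 1.0, wind, sigma=0.5)
```

After time T the particle cloud is centered at wind·T with crosswind spread σ√T, the Gaussian-plume-like behavior against which tracer measurements are compared; spatially varying winds would bend and distort the cloud, which is what the uniform-wind assumption misses at distant sampling points.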

  11. Application of Sγ Model for the Mechanistic Bubble Size Prediction in the Subcooled Boiling Flow with CFD Code

    Energy Technology Data Exchange (ETDEWEB)

    Yun, B. J.; Song, C. H. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Splawski, A.; Lo, S. [CD-adapco, Melville (United States)


    Accurate simulation of subcooled boiling flow is essential for the operation and safety of nuclear power plants (NPPs). In recent years, the use of computational fluid dynamics (CFD) codes has been extended to the analysis of multi-dimensional two-phase flow in NPPs. Among the applications of CFD codes to NPP analysis, the first target selected was the mechanistic prediction of DNB (Departure from Nucleate Boiling) in PWRs. In DNB-type CHF (Critical Heat Flux), the expected flow regime is bubbly or churn-turbulent flow under high mass flux and high heat flux conditions, and thus subcooled boiling is also one of the key phenomena for the precise prediction of DNB. In this paper, Sγ, a mechanistic transport equation for the bubble parameters, was examined in a CFD code with the objective of enhancing the prediction capability for subcooled boiling flows. The models were applied in the STAR-CD 4.12 software.

  12. A Dual Coding Theoretical Model of Decoding in Reading: Subsuming the LaBerge and Samuels Model (United States)

    Sadoski, Mark; McTigue, Erin M.; Paivio, Allan


    In this article we present a detailed Dual Coding Theory (DCT) model of decoding. The DCT model reinterprets and subsumes the LaBerge and Samuels (1974) model of the reading process, which has served well to account for decoding behaviors and the processes that underlie them. However, the LaBerge and Samuels model has had little to say about…

  13. The non-power model of the genetic code: a paradigm for interpreting genomic information. (United States)

    Gonzalez, Diego Luis; Giannerini, Simone; Rosa, Rodolfo


    In this article, we present a mathematical framework based on redundant (non-power) representations of integer numbers as a paradigm for the interpretation of genomic information. The core of the approach relies on modelling the degeneracy of the genetic code. The model allows one to explain many features and symmetries of the genetic code and to uncover hidden symmetries. It also provides new tools for the analysis of genomic sequences. We briefly review three main areas: (i) the Euplotid nuclear code, (ii) the vertebrate mitochondrial code, and (iii) the main coding/decoding strategies used in the three domains of life. In every case, we show how the non-power model is a natural unified framework for describing degeneracy and deriving sound biological hypotheses on protein coding. The approach is rooted in number theory and group theory; nevertheless, we have kept the technical level to a minimum by focusing on key concepts and on the biological implications. © 2016 The Author(s).
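The notion of a redundant (non-power) positional system can be illustrated with a small toy example (a sketch for intuition only; the Fibonacci weights below are an illustrative redundant basis, not the authors' actual representation scheme):

```python
from itertools import product

def representations(value, weights):
    """All binary digit strings over the given positional weights that
    sum to `value`; more than one string means the encoding of `value`
    is degenerate (redundant)."""
    return [d for d in product((0, 1), repeat=len(weights))
            if sum(di * wi for di, wi in zip(d, weights)) == value]

# A power (binary) system encodes each integer uniquely...
unique = representations(3, (1, 2, 4, 8, 16))     # only 3 = 1 + 2
# ...while Fibonacci weights form a redundant, non-power system:
degenerate = representations(3, (1, 2, 3, 5, 8))  # 3 = 1 + 2, or 3 itself
```

In the authors' framework it is this kind of many-to-one mapping, rather than the specific weights used here, that serves as the model for codon degeneracy.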

  14. Domain-specific modeling enabling full code generation

    CERN Document Server

    Kelly, Steven


    Domain-Specific Modeling (DSM) is the latest approach to software development, promising to greatly increase the speed and ease of software creation. Early adopters of DSM have been enjoying productivity increases of 500–1000% in production for over a decade. This book introduces DSM and offers examples from various fields to illustrate to experienced developers how DSM can improve software development in their teams. Two authorities in the field explain what DSM is, why it works, and how to successfully create and use a DSM solution to improve productivity and quality. Divided into four parts, the book covers: background and motivation; fundamentals; in-depth examples; and creating DSM solutions. There is an emphasis throughout the book on practical guidelines for implementing DSM, including how to identify the necessary language constructs, how to generate full code from models, and how to provide tool support for a new DSM language. The example cases described in the book are available at the book's Website, www.dsmbook....

  15. A Review of Equation of State Models, Chemical Equilibrium Calculations and CERV Code Requirements for SHS Detonation Modelling (United States)


    Beattie–Bridgeman and virial-expansion equations of state are suitable for moderate pressures and are usually based on empirical constants... (Defence R&D Canada, CR 2010-013, October 2009)

  16. Model study of the thermal storage system by FEHM code

    Energy Technology Data Exchange (ETDEWEB)

    Tenma, N.; Yasukawa, Kasumi [National Institute of Advanced Industrial Science and Technology, Ibaraki (Japan); Zyvoloski, G. [Los Alamos National Laboratory, Los Alamos, NM (United States). Earth and Environmental Science Division


    The use of low-temperature geothermal resources is important from the viewpoint of global warming. In order to evaluate various underground projects that use low-temperature geothermal resources, we have estimated the parameters of a typical underground system using the two-well model. By changing the parameters of the system, six different heat extraction scenarios have been studied. One of these six scenarios is recommended because of its small energy loss. (author)

  17. Model based code generation for distributed embedded systems


    Raghav, Gopal; Gopalswamy, Swaminathan; Radhakrishnan, Karthikeyan; Hugues, Jérôme; Delange, Julien


    Embedded systems are becoming increasingly complex and more distributed. Cost and quality requirements necessitate reuse of the functional software components for multiple deployment architectures. An important step is the allocation of software components to hardware. During this process, the differences between the hardware and application software architectures must be reconciled. In this paper, we discuss an architecture-driven approach involving model-based techniques to resolve these diff...

  18. Pre-engineering Spaceflight Validation of Environmental Models and the 2005 HZETRN Simulation Code (United States)

    Nealy, John E.; Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.; Dachev, Ts. P.; Tomov, B. T.; Walker, Steven A.; DeAngelis, Giovanni; Blattnig, Steve R.; Atwell, William


    The HZETRN code has been identified by NASA for engineering design in the next phase of space exploration, highlighting a return to the Moon in preparation for a Mars mission. In response, a new series of algorithms, beginning with 2005 HZETRN, will be issued, correcting some prior limitations and improving control of propagated errors, along with established code verification processes. Code validation processes will use new/improved low Earth orbit (LEO) environmental models with a recently improved International Space Station (ISS) shield model to validate computational models and procedures against measured data aboard the ISS. These validated models will provide a basis for flight-testing the designs of future space vehicles and systems of the Constellation program in the LEO environment.

  19. Reduced Fast Ion Transport Model For The Tokamak Transport Code TRANSP

    Energy Technology Data Exchange (ETDEWEB)

    Podesta, Mario; Gorelenkova, Marina; White, Roscoe


    Fast ion transport models presently implemented in the tokamak transport code TRANSP [R. J. Hawryluk, in Physics of Plasmas Close to Thermonuclear Conditions, CEC Brussels, 1, 19 (1980)] do not capture important aspects of the physics associated with resonant transport caused by instabilities such as Toroidal Alfvén Eigenmodes (TAEs). This work describes the implementation of a fast ion transport model consistent with the basic mechanisms of resonant mode-particle interaction. The model is formulated in terms of a probability distribution function for the particle's steps in phase space, which is consistent with the Monte Carlo approach used in TRANSP. The proposed model is based on the analysis of fast ion response to TAE modes through the ORBIT code [R. B. White et al., Phys. Fluids 27, 2455 (1984)], but it can be generalized to higher-frequency modes (e.g. Compressional and Global Alfvén Eigenmodes) and to other numerical codes or theories.

  20. The Nuremberg Code subverts human health and safety by requiring animal modeling (United States)


    Background The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. Summary We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented. PMID:22769234

  1. The Nuremberg Code subverts human health and safety by requiring animal modeling

    Directory of Open Access Journals (Sweden)

    Greek Ray


    Full Text Available Abstract Background The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. Summary We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented.

  2. Modeling of BWR core meltdown accidents - for application in the MELRPI. MOD2 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Koh, B R; Kim, S H; Taleyarkhan, R P; Podowski, M Z; Lahey, Jr, R T


    This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.

  3. Immune Modulating Capability of Two Exopolysaccharide-Producing Bifidobacterium Strains in a Wistar Rat Model

    Directory of Open Access Journals (Sweden)

    Nuria Salazar


    Full Text Available Fermented dairy products are the usual carriers for the delivery of probiotics to humans, Bifidobacterium and Lactobacillus being the most frequently used bacteria. In this work, the strains Bifidobacterium animalis subsp. lactis IPLA R1 and Bifidobacterium longum IPLA E44 were tested for their capability to modulate the immune response and insulin-dependent glucose homeostasis in male Wistar rats fed a standard diet. Three intervention groups were fed daily for 24 days with 10% skimmed milk, or with 10⁹ cfu of the corresponding strain suspended in the same vehicle. A significant increase of the suppressor-regulatory TGF-β cytokine occurred with both strains in comparison with a control (no intervention) group of rats; the highest levels were reached in rats fed IPLA R1. This strain presented an immune-protective profile, as it was able to reduce the production of the proinflammatory IL-6. Moreover, phosphorylated Akt kinase decreased in the gastrocnemius muscle of rats fed the strain IPLA R1, without affecting glucose, insulin, and the HOMA index in blood, or the levels of Glut-4 located in the membrane of muscle and adipose tissue cells. Therefore, the strain B. animalis subsp. lactis IPLA R1 is a probiotic candidate to be tested in mild-grade inflammation animal models.

  4. Model-based Assessment for Balancing Privacy Requirements and Operational Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Knirsch, Fabian [Salzburg Univ. (Austria); Engel, Dominik [Salzburg Univ. (Austria); Frincu, Marc [Univ. of Southern California, Los Angeles, CA (United States); Prasanna, Viktor [Univ. of Southern California, Los Angeles, CA (United States)


    The smart grid changes the way energy is produced and distributed. In addition, both energy and information are exchanged bidirectionally among the participating parties. Heterogeneous systems therefore have to cooperate effectively in order to achieve a common high-level use case, such as smart metering for billing or demand response for load curtailment. Furthermore, a substantial amount of personal data is often needed to achieve that goal. Capturing and processing personal data in the smart grid increases customer concerns about privacy, and certain statutory and operational requirements regarding privacy-aware data processing and storage have to be met. An increase in privacy constraints, however, often limits the operational capabilities of the system. In this paper, we present an approach that automates the process of finding an optimal balance between privacy requirements and operational requirements in a smart grid use case and application scenario. This is achieved by formally describing use cases in an abstract model and by finding an algorithm that determines the optimum balance by forward-mapping privacy and operational impacts. For this optimal balancing algorithm, both a numeric approximation and, where feasible, an analytic assessment are presented and investigated. The system is evaluated by applying the tool to a real-world use case from the University of Southern California (USC) microgrid.
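One simple way such a numeric approximation can work is a grid search over a scalar constraint level whose two impact scores pull in opposite directions (a minimal sketch; the impact curves below are hypothetical, and the paper's actual model and mapping are more elaborate):

```python
def balance(privacy_impact, operational_impact, steps=1000):
    """Grid-search the constraint level t in [0, 1] that maximizes the
    worse of the two normalized impact scores -- one simple notion of an
    'optimal balance' between competing requirements."""
    best_t, best_score = 0.0, float("-inf")
    for i in range(steps + 1):
        t = i / steps
        score = min(privacy_impact(t), operational_impact(t))
        if score > best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Hypothetical monotone impact curves for a smart-metering scenario:
t_opt, score = balance(lambda t: t ** 0.5,   # privacy improves with t
                       lambda t: 1.0 - t)    # capability degrades with t
```

With these curves the optimum sits where the two scores cross, at roughly t ≈ 0.38; any analytic assessment would solve the same crossing condition in closed form.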

  5. Development of a model and computer code to describe solar grade silicon production processes (United States)

    Gould, R. K.; Srivastava, R.


    Two computer codes were developed for describing flow reactors in which high-purity, solar-grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric, marching code which treats two-phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is also described. The second code is a modified version of the GENMIX boundary layer code, which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of CHEMPART but has the virtue of running much more rapidly, while treating the phenomena occurring in the boundary layer in more detail.

  6. Varian 2100C/D Clinac 18 MV photon phase space file characterization and modeling by using MCNP Code (United States)

    Ezzati, Ahad Ollah


    A multiple-point and spatial-mesh-based surface source model (MPSMBSS) was generated for the 18 MV Varian 2100C/D Clinac phase space file (PSF) and implemented in the MCNP code. The generated source model (SM) was benchmarked against the PSF and against measurements. PDDs and profiles were calculated using the SM and the original PSF for field sizes from 5 × 5 to 20 × 20 cm². Agreement was within 2% of the maximum dose at 100 cm SSD for beam profiles at depths of 4 cm and 15 cm with respect to the original PSF. Differences between measured and calculated points were less than 2% of the maximum dose or 2 mm distance to agreement (DTA) at 100 cm SSD. It can thus be concluded that the modified MCNP code can be used for radiotherapy calculations including multiple source models (MSM), and that using the source biasing capability of MPSMBSS can increase the simulation speed by a factor of up to 3600 for field sizes smaller than 5 × 5 cm².
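The 2%/2 mm acceptance criterion cited above can be sketched as a simplified pass-rate check (illustrative only; clinical QA tools use the full gamma-index analysis, and the profiles below are invented):

```python
def profile_agreement(x, ref, ev, dmax, tol=0.02, dta=2.0):
    """Fraction of reference points that pass a simplified 2%/2 mm test:
    a point passes if some evaluated point within `dta` mm agrees with
    the reference dose to within tol * dmax (dose difference OR
    distance-to-agreement, combined in one search)."""
    passed = 0
    for xi, ri in zip(x, ref):
        passed += any(abs(xj - xi) <= dta and abs(ej - ri) <= tol * dmax
                      for xj, ej in zip(x, ev))
    return passed / len(x)

x = [float(i) for i in range(10)]        # positions in mm (made-up grid)
flat = [100.0] * 10                      # reference profile
shifted = [100.0] * 5 + [50.0] * 5       # evaluated profile with a step
frac = profile_agreement(x, flat, shifted, dmax=100.0)  # fails near the step
```

Identical profiles give a pass fraction of 1.0, while the artificial step above fails only the points more than 2 mm from any matching dose.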

  7. Data-driven inference of network connectivity for modeling the dynamics of neural codes in the insect antennal lobe

    Directory of Open Access Journals (Sweden)

    Eli Shlizerman


    Full Text Available The antennal lobe (AL), the olfactory processing center in insects, is able to process stimuli into distinct neural activity patterns, called olfactory neural codes. To model their dynamics we perform multichannel recordings from the projection neurons in the AL driven by different odorants. We then derive a dynamic neuronal network from the electrophysiological data. The network consists of lateral-inhibitory neurons and excitatory neurons (modeled as firing-rate units), and is capable of producing unique olfactory neural codes for the tested odorants. To construct the network, we (i) design a projection, an odor space, for the neural recording from the AL, which discriminates between distinct odorant trajectories; (ii) characterize scent recognition, i.e., decision-making based on olfactory signals; and (iii) infer the wiring of the neural circuit, the connectome of the AL. We show that the constructed model is consistent with biological observations, such as contrast enhancement and robustness to noise. The study suggests a data-driven approach to answering a key biological question: identifying how lateral inhibitory neurons can be wired to excitatory neurons to permit robust activity patterns.
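The lateral-inhibition mechanism described above can be sketched as a minimal firing-rate network (a toy model, not the inferred AL connectome; the weights and inputs are invented for illustration):

```python
def simulate(W, I, steps=2000, dt=0.01):
    """Euler-integrate the firing-rate dynamics dr/dt = -r + [W r + I]+,
    where lateral inhibition appears as negative entries of W."""
    relu = lambda v: [max(0.0, x) for x in v]
    r = [0.0] * len(I)
    for _ in range(steps):
        drive = relu([sum(W[i][j] * r[j] for j in range(len(r))) + I[i]
                      for i in range(len(r))])
        r = [r[i] + dt * (-r[i] + drive[i]) for i in range(len(r))]
    return r

# Mutual lateral inhibition among three "projection neuron" units:
W = [[0.0, -0.6, -0.6],
     [-0.6, 0.0, -0.6],
     [-0.6, -0.6, 0.0]]
odor_a = simulate(W, [1.0, 0.2, 0.2])   # distinct inputs settle into
odor_b = simulate(W, [0.2, 1.0, 0.2])   # distinct activity patterns
```

The inhibition sharpens the contrast: the most strongly driven unit suppresses the others, so each "odor" yields its own near winner-take-all pattern, a toy analogue of the contrast enhancement the study reports.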

  8. Development of RBMK-1500 Model for BDBA Analysis Using RELAP/SCDAPSIM Code (United States)

    Uspuras, Eugenijus; Kaliatka, Algirdas

    This article discusses the specific features of RBMK (channel-type, boiling-water, graphite-moderated) reactors and the problems of modelling the Reactor Cooling System (RCS) with computer codes. The article presents how the RELAP/SCDAPSIM code, originally designed for modelling accidents in vessel-type reactors, has been adapted to simulate the phenomena in the RBMK reactor core and RCS in the case of Beyond Design Basis Accidents. For this purpose, the use of two RELAP/SCDAPSIM models is recommended. The first model, describing the complete geometry of the RCS, is recommended for analysis of the initial phase of an accident. The calculation results obtained with this model are then used as boundary conditions in a simplified model for simulation of the later phases of severe accidents. The simplified model was tested by comparing results of simulations performed with the RELAP5 and RELAP/SCDAPSIM codes. As a typical example of a BDBA, a large-break LOCA in the reactor cooling system with failure of the emergency core cooling system was analyzed. Use of the developed models yields the behaviour of thermal-hydraulic parameters, the temperatures of core components, and the amount of hydrogen generated by the steam-zirconium reaction. These parameters will be used as input for other special codes designed for the analysis of processes in the reactor containment.

  9. Reduced-order LPV model of flexible wind turbines from high fidelity aeroelastic codes

    DEFF Research Database (Denmark)

    Adegas, Fabiano Daher; Sønderby, Ivan Bergquist; Hansen, Morten Hartvig


    space. The obtained LPV model is of suitable size for designing modern gain-scheduling controllers based on recently developed LPV control design techniques. Results are thoroughly assessed on a set of industrial wind turbine models generated by the recently developed aeroelastic code HAWCStab2....

  10. Fine-Grained Energy Modeling for the Source Code of a Mobile Application

    DEFF Research Database (Denmark)

    Li, Xueliang; Gallagher, John Patrick


    The goal of an energy model for source code is to lay a foundation for the application of energy-aware programming techniques. State of the art solutions are based on source-line energy information. In this paper, we present an approach to constructing a fine-grained energy model which is able...

  11. The modelling of wall condensation with noncondensable gases for the containment codes

    Energy Technology Data Exchange (ETDEWEB)

    Leduc, C.; Coste, P.; Barthel, V.; Deslandes, H. [Commissariat à l'Énergie Atomique, Grenoble (France)


    This paper presents several approaches to the modelling of wall condensation in the presence of noncondensable gases for containment codes. Lumped-parameter modelling and local modelling by 3-D codes are discussed. Containment analysis codes should be able to predict the spatial distributions of steam, air, and hydrogen as well as the efficiency of cooling by wall condensation in both natural convection and forced convection situations. 3-D calculations with turbulent diffusion modelling are necessary, since diffusion controls the local condensation, whereas wall condensation may redistribute the air and hydrogen mass in the containment. A fine-mesh model of film condensation in forced convection has been developed, taking into account the influence of the suction velocity at the liquid-gas interface. It is associated with the 3-D model of the TRIO code for the gas mixture, where a k-ξ turbulence model is used. The predictions are compared to Huhtiniemi's experimental data. The modelling of condensation in natural or mixed convection is more complex. As no universal velocity and temperature profiles exist for such boundary layers, a very fine nodalization is necessary. Simpler models integrate the equations over the boundary layer thickness, using the heat and mass transfer analogy. The model predictions are compared with an MIT experiment. For the containment compartments, a two-node model is proposed using the lumped-parameter approach. Heat and mass transfer coefficients are tested on separate effect tests and containment experiments. The CATHARE code has been adapted to perform such calculations and shows reasonable agreement with the data.
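The heat and mass transfer analogy invoked for the integral models is commonly written in Chilton–Colburn form (a standard textbook relation, quoted here for context rather than taken from the paper):

```latex
\frac{\mathrm{Sh}}{\mathrm{Sc}^{1/3}} = \frac{\mathrm{Nu}}{\mathrm{Pr}^{1/3}}
\qquad\Longrightarrow\qquad
h_m = \frac{h}{\rho\, c_p}\left(\frac{\mathrm{Pr}}{\mathrm{Sc}}\right)^{2/3}
```

This lets a condensation (mass transfer) coefficient $h_m$ be estimated from a known convective heat transfer coefficient $h$, which is the basic idea behind the integrated boundary-layer models mentioned above.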

  12. Development and Verification of a Pilot Code based on Two-fluid Three-field Model

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Moon Kyu; Bae, S. W.; Lee, Y. J.; Chung, B. D.; Jeong, J. J.; Ha, K. S.; Kang, D. H.


    In this study, a semi-implicit pilot code is developed for one-dimensional channel flow with three fields. The three fields comprise a gas field, a continuous liquid field, and an entrained liquid field. All three fields are allowed to have their own velocities. The temperatures of the continuous liquid and the entrained liquid are, however, assumed to be in equilibrium. The interphase phenomena include heat and mass transfer as well as momentum transfer. Fluid/structure interaction generally includes both heat and momentum transfer; assuming an adiabatic system, only momentum transfer is considered in this study, leaving wall heat transfer for a future study. Using 10 conceptual problems, the basic pilot code has been verified. The results of the verification are summarized below. It was confirmed that the basic pilot code can simulate various flow conditions (such as single-phase liquid flow, bubbly flow, slug/churn turbulent flow, annular-mist flow, and single-phase vapor flow) and transitions between them. The pilot code was programmed so that the source terms of the governing equations and the numerical solution schemes can be easily tested. Mass and energy conservation was confirmed for single-phase liquid and single-phase vapor flows. It was confirmed that the inlet pressure and velocity boundary conditions work properly, and that, for single- and two-phase flows, the velocity and temperature of a non-existing phase are calculated as intended. Complete phase depletion, which might occur during a phase change, was found to adversely affect code stability; a further study would be required to enhance the code's capability in this regard.
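The kind of mass-conservation check reported for the pilot code can be illustrated on a much simpler conservative scheme (a sketch, not the pilot code's semi-implicit three-field discretization):

```python
def upwind_step(rho, u, dx, dt):
    """One first-order donor-cell (upwind) step of the conservation law
    d(rho)/dt + d(u*rho)/dx = 0 on a periodic grid, assuming u > 0.
    Each face flux leaves one cell and enters its neighbor, so the total
    mass sum(rho)*dx is preserved up to round-off."""
    n = len(rho)
    flux = [u * r for r in rho]   # donor-cell face flux for u > 0
    return [rho[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]

rho = [1.0, 2.0, 3.0, 2.0, 1.0, 1.0]
mass0 = sum(rho)
for _ in range(100):              # CFL = u*dt/dx = 0.5, stable
    rho = upwind_step(rho, u=1.0, dx=1.0, dt=0.5)
```

Because the flux differences telescope around the periodic grid, the total mass after any number of steps matches the initial mass, which is the property a conservation check in a verification suite asserts.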

  13. Comparison of current state residential energy codes with the 1992 model energy code for one- and two-family dwellings; 1994

    Energy Technology Data Exchange (ETDEWEB)

    Klevgard, L.A.; Taylor, Z.T.; Lucas, R.G.


    This report is one in a series of documents describing research activities in support of the US Department of Energy (DOE) Building Energy Codes Program. The Pacific Northwest Laboratory (PNL) leads the program for DOE. The goal of the program is to develop and support the adoption, implementation, and enforcement of Federal, State, and local energy codes for new buildings. The program's approach to meeting this goal is to initiate and manage individual research, standards, and guidelines development efforts that are planned and conducted in cooperation with representatives from throughout the buildings community. Projects under way involve practicing architects and engineers, professional societies and code organizations, industry representatives, and researchers from the private sector and national laboratories. Research results and technical justifications for standards criteria are provided to standards development and model code organizations and to Federal, State, and local jurisdictions as a basis for updating their codes and standards. This effort helps to ensure that building standards incorporate the latest research results to achieve maximum energy savings in new buildings, yet remain responsive to the needs of the affected professions, organizations, and jurisdictions. The implementation, deployment, and use of energy-efficient codes and standards are also supported. This report documents findings from an analysis conducted by PNL of the States' building codes to determine whether the codes meet or exceed the 1992 MEC energy efficiency requirements (CABO 1992a).

  14. Assessment of Turbulence-Chemistry Interaction Models in the National Combustion Code (NCC) - Part I (United States)

    Wey, Thomas Changju; Liu, Nan-suey


    This paper describes the implementations of the linear-eddy model (LEM) and an Eulerian FDF/PDF model in the National Combustion Code (NCC) for the simulation of turbulent combustion. The impacts of these two models, along with the so called laminar chemistry model, are then illustrated via the preliminary results from two combustion systems: a nine-element gas fueled combustor and a single-element liquid fueled combustor.

  15. Model of a neural network inertial satellite navigation system capable of estimating the earth's gravitational field gradient (United States)

    Devyatisil'nyi, A. S.


    A model for recognizing inertial and satellite data on an object's motion that are delivered by a set of distributed onboard sensors (newtonmeters, gyros, satellite receivers) has been described. Specifically, the model is capable of estimating the parameters of the gravitational field.

  16. The implementation of a toroidal limiter model into the gyrokinetic code ELMFIRE

    Energy Technology Data Exchange (ETDEWEB)

    Leerink, S.; Janhunen, S.J.; Kiviniemi, T.P.; Nora, M. [Euratom-Tekes Association, Helsinki University of Technology (Finland); Heikkinen, J.A. [Euratom-Tekes Association, VTT, P.O. Box 1000, FI-02044 VTT (Finland); Ogando, F. [Universidad Nacional de Educacion a Distancia, Madrid (Spain)


    The ELMFIRE full nonlinear gyrokinetic simulation code has been developed for calculations of plasma evolution and the dynamics of turbulence in tokamak geometry. The code is applicable to calculations of strong perturbations in the particle distribution function, rapid transients, and steep gradients in plasma. Benchmarking against experimental reflectometry data from the FT2 tokamak is discussed, and in this paper a model for comparison and for studying the poloidal velocity is presented. To make the ELMFIRE code suitable for scrape-off layer simulations, a simplified toroidal limiter model has been implemented. The model is discussed and first results are presented. (copyright 2008 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  17. Full Wave Parallel Code for Modeling RF Fields in Hot Plasmas (United States)

    Spencer, Joseph; Svidzinski, Vladimir; Evstatiev, Evstati; Galkin, Sergei; Kim, Jin-Soo


    FAR-TECH, Inc. is developing a suite of full-wave RF codes for hot plasmas. It is based on a formulation in configuration space with grid adaptation capability. The conductivity kernel (which includes a nonlocal dielectric response) is calculated by integrating the linearized Vlasov equation along unperturbed test-particle orbits. For tokamak applications, a 2-D version of the code is being developed. Progress on this work will be reported. This suite of codes has the following advantages over existing spectral codes: 1) it utilizes the localized nature of the plasma dielectric response to the RF field and calculates this response numerically without approximations; 2) it uses an adaptive grid to better resolve resonances in plasma and antenna structures; 3) it uses an efficient sparse matrix solver to solve the formulated linear equations. The linear wave equation is formulated using two approaches: for cold plasmas the local cold-plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel is calculated. Work is supported by the U.S. DOE SBIR program.

  18. Transport Corrections in Nodal Diffusion Codes for HTR Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Abderrafi M. Ougouag; Frederick N. Gleicher


    The cores and reflectors of High Temperature Reactors (HTRs) of the Next Generation Nuclear Plant (NGNP) type are predominantly diffusive media from the point of view of neutron behavior and migration between the various structures of the reactor. This means that neutron diffusion theory is sufficient for modeling most features of such reactors, and transport theory may not be needed for most applications. Of course, this statement assumes the availability of homogenized diffusion theory data, and it holds for most situations but not all. Two features of NGNP-type HTRs require that the diffusion theory-based solution be corrected for local transport effects: the treatment of burnable poisons (BP) in prismatic block reactors and, for both pebble bed reactor (PBR) and prismatic block reactor (PMR) designs, that of control rods (CR) embedded in non-multiplying regions near the interface between fueled zones and those non-multiplying zones. The need for a transport correction arises because diffusion theory-based solutions do not appear to provide sufficient fidelity in these situations.
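For reference, the one-group neutron diffusion eigenvalue equation that such nodal solvers discretize can be written in its standard form (quoted for context, not from the paper):

```latex
-\nabla \cdot D(\mathbf{r})\,\nabla \phi(\mathbf{r})
  + \Sigma_a(\mathbf{r})\,\phi(\mathbf{r})
  = \frac{1}{k_{\mathrm{eff}}}\,\nu\Sigma_f(\mathbf{r})\,\phi(\mathbf{r})
```

Here $D$ is the diffusion coefficient, $\Sigma_a$ and $\nu\Sigma_f$ the homogenized absorption and fission-production cross sections, and $\phi$ the scalar flux; the transport corrections discussed above adjust the homogenized data (or add discontinuity factors) where this diffusion description loses fidelity, such as near burnable poisons and control rods.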

  19. An evolutionary model for protein-coding regions with conserved RNA structure

    DEFF Research Database (Denmark)

    Pedersen, Jakob Skou; Forsberg, Roald; Meyer, Irmtraud Margret


    components of traditional phylogenetic models. We applied this to a data set of full-genome sequences from the hepatitis C virus where five RNA structures are mapped within the coding region. This allowed us to partition the effects of selection on different structural elements and to test various hypotheses...... concerning the relation of these effects. Of particular interest, we found evidence of a functional role of loop and bulge regions, as these were shown to evolve according to a different and more constrained selective regime than the nonpairing regions outside the RNA structures. Other potential applications...... of the model include comparative RNA structure prediction in coding regions and RNA virus phylogenetics....

  20. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) verification and validation plan. version 1.

    Energy Technology Data Exchange (ETDEWEB)

    Bartlett, Roscoe Ainsworth; Arguello, Jose Guadalupe, Jr.; Urbina, Angel; Bouchard, Julie F.; Edwards, Harold Carter; Freeze, Geoffrey A.; Knupp, Patrick Michael; Wang, Yifeng; Schultz, Peter Andrew; Howard, Robert (Oak Ridge National Laboratory, Oak Ridge, TN); McCornack, Marjorie Turner


    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. To meet this objective, NEAMS Waste IPSC M&S capabilities will be applied to challenging spatial domains, temporal domains, multiphysics couplings, and multiscale couplings. A strategic verification and validation (V&V) goal is to establish evidence-based metrics for the level of confidence in M&S codes and capabilities. Because it is economically impractical to apply the maximum V&V rigor to each and every M&S capability, M&S capabilities will be ranked for their impact on the performance assessments of various components of the repository systems. Those M&S capabilities with greater impact will require a greater level of confidence and a correspondingly greater investment in V&V. This report includes five major components: (1) a background summary of the NEAMS Waste IPSC to emphasize M&S challenges; (2) the conceptual foundation for verification, validation, and confidence assessment of NEAMS Waste IPSC M&S capabilities; (3) specifications for the planned verification, validation, and confidence-assessment practices; (4) specifications for the planned evidence information management system; and (5) a path forward for the incremental implementation of this V&V plan.


    Institute of Scientific and Technical Information of China (English)

    Hu Xiaofei; Zhu Xiuchang


    In Wyner-Ziv (WZ) Distributed Video Coding (DVC), a correlation noise model is often used to describe the error distribution between the WZ frame and the side information. The accuracy of the model directly influences the performance of the video coder. A mixture correlation noise model in the Discrete Cosine Transform (DCT) domain for WZ video coding is established in this paper. Different correlation noise estimation methods are used for the direct current and alternating current coefficients. A parameter estimation method based on the expectation maximization algorithm is used to estimate the Laplace distribution center of the direct current frequency band, and a Mixture Laplace-Uniform Distribution Model (MLUDM) is established for the alternating current coefficients. Experimental results suggest that the proposed mixture correlation noise model accurately describes the heavy tail and sudden changes of the noise at high rates and significantly improves coding efficiency compared with the noise model presented by DIStributed COding for Video sERvices (DISCOVER).
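    The expectation-maximization fitting step described above can be illustrated with a small sketch. The residual data, the one-dimensional Laplace-plus-uniform mixture, and all initial values below are illustrative stand-ins, not the paper's actual DCT-band statistics or its exact MLUDM update:

```python
import math
import random

random.seed(0)

# Hypothetical residuals between WZ-frame DCT coefficients and side
# information: mostly Laplacian, plus a few large "sudden change" outliers
# that the uniform mixture component is meant to absorb.
residuals = [random.choice([-1, 1]) * random.expovariate(1 / 2.0)
             for _ in range(5000)]
residuals += [random.uniform(-50, 50) for _ in range(250)]

def laplace_pdf(x, mu, b):
    return math.exp(-abs(x - mu) / b) / (2 * b)

# EM for a Laplace + uniform mixture (uniform support [-50, 50] assumed
# known; the Laplace center is fixed at 0 for simplicity).
mu, b, w = 0.0, 1.0, 0.9            # w = weight of the Laplace component
uniform_pdf = 1 / 100.0
for _ in range(20):
    # E-step: responsibility of the Laplace component for each residual.
    resp = [w * laplace_pdf(x, mu, b)
            / (w * laplace_pdf(x, mu, b) + (1 - w) * uniform_pdf)
            for x in residuals]
    # M-step: weighted ML updates (Laplace scale = weighted mean |x - mu|).
    total = sum(resp)
    b = sum(r * abs(x - mu) for r, x in zip(resp, residuals)) / total
    w = total / len(residuals)

print(round(b, 2), round(w, 3))
```

    The synthetic data have true Laplace scale 2.0 and Laplace weight about 0.95, which the fit should approximately recover.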

  2. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    Directory of Open Access Journals (Sweden)

    Dragutin KERMEK


    Full Text Available In programming courses there are various ways in which students attempt to cheat. The most common method is copying source code from other students and making minimal changes to it, such as renaming variables. Several tools, like Sherlock, JPlag and Moss, have been devised to detect source code plagiarism. However, for larger student assignments and projects that involve many source code files these tools are less effective. Issues also occur when source code is given to students in class so they can copy it; in such cases these tools do not provide satisfactory results and reports. In this study, we present an improved process model for plagiarism detection when multiple student files exist and allowed source code is present. The research in this paper uses the Sherlock detection tool, although the presented process model can be combined with any plagiarism detection engine. The proposed model is tested on assignments in three courses over two subsequent academic years.
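    The core idea of handling allowed source code can be sketched independently of Sherlock: subtract the instructor-provided code from both submissions before measuring similarity. Everything here (line-level tokenization, Jaccard score, the sample snippets) is an illustrative simplification, not the paper's actual process model:

```python
# Minimal sketch: similarity between two submissions, with instructor-
# provided "allowed" code removed first so shared boilerplate does not
# inflate the score. Names and snippets are hypothetical.

def tokens(source: str) -> set:
    # Crude normalization: strip whitespace, lowercase, drop blank lines.
    return {line.strip().lower() for line in source.splitlines() if line.strip()}

def similarity(a: str, b: str, allowed: str = "") -> float:
    """Jaccard similarity of two submissions, ignoring allowed code."""
    base = tokens(allowed)
    ta, tb = tokens(a) - base, tokens(b) - base
    if not ta and not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

allowed = "def main():\n    data = read_input()"
student1 = "def main():\n    data = read_input()\n    total = sum(data)\n    print(total)"
student2 = "def main():\n    data = read_input()\n    result = max(data)\n    print(result)"

naive = similarity(student1, student2)             # allowed lines inflate this
filtered = similarity(student1, student2, allowed) # only original work compared
print(naive, filtered)
```

    Here the naive score is inflated purely by the two allowed lines, while the filtered score correctly reports no overlap in the students' own work.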

  3. SENR, A Super-Efficient Code for Gravitational Wave Source Modeling: Latest Results (United States)

    Ruchlin, Ian; Etienne, Zachariah; Baumgarte, Thomas


    The science we extract from gravitational wave observations will be limited by our theoretical understanding, so with the recent breakthroughs by LIGO, reliable gravitational wave source modeling has never been more critical. Due to efficiency considerations, current numerical relativity codes are very limited in their applicability to direct LIGO source modeling, so it is important to develop new strategies for making our codes more efficient. We introduce SENR, a Super-Efficient, open-development numerical relativity (NR) code aimed at improving the efficiency of moving-puncture-based LIGO gravitational wave source modeling by 100x. SENR builds upon recent work, in which the BSSN equations are evolved in static spherical coordinates, to allow dynamical coordinates with arbitrary spatial distributions. The physical domain is mapped to a uniform-resolution grid on which derivative operations are approximated using standard central finite difference stencils. The source code is designed to be human-readable, efficient, parallelized, and readily extensible. We present the latest results from the SENR code.

  4. Capability Paternalism

    NARCIS (Netherlands)

    Claassen, R.J.G.


    A capability approach prescribes paternalist government actions to the extent that it requires the promotion of specific functionings, instead of the corresponding capabilities. Capability theorists have argued that their theories do not have much of these paternalist implications, since promoting c

  5. Self-consistent modeling of DEMOs with 1.5D BALDUR integrated predictive modeling code (United States)

    Wisitsorasak, A.; Somjinda, B.; Promping, J.; Onjun, T.


    Self-consistent simulations of four DEMO designs proposed by teams from China, Europe, India, and Korea are carried out using the BALDUR integrated predictive modeling code, in which theory-based models are used for both core transport and boundary conditions. In these simulations, a combination of the NCLASS neoclassical transport model and the multimode (MMM95) anomalous transport model is used to compute core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a pedestal temperature model based on a combination of magnetic and flow shear stabilization, pedestal width scaling and an infinite-n ballooning pressure gradient model, together with a pedestal density model based on a line average density. Even though an optimistic scenario is considered, the simulation results suggest that, with the exclusion of ELMs, the fusion gain Q obtained for these reactors is pessimistic compared with the original designs: 52% for the Chinese design, 63% for the European design, 22% for the Korean design, and 26% for the Indian design. In addition, the predicted bootstrap current fractions are also found to be lower than the original designs, reaching 0.49 (China), 0.66 (Europe), and 0.58 (India) of the design values. Regarding sensitivity, increasing the auxiliary heating power and the electron line average density above their design values enhances fusion performance. Inclusion of sawtooth oscillation effects has a positive impact on plasma and fusion performance in the European, Indian and Korean DEMOs, but degrades performance in the Chinese DEMO.

  6. Capability of particle inspection on patterned EUV mask using model EBEYE M (United States)

    Naka, Masato; Yoshikawa, Ryoji; Yamaguchi, Shinji; Hirano, Takashi; Itoh, Masamitsu; Terao, Kenji; Hatakeyama, Masahiro; Watanabe, Kenji; Sobukawa, Hiroshi; Murakami, Takeshi; Tsukamoto, Kiwamu; Hayashi, Takehide; Tajima, Ryo; Kimura, Norio; Hayashi, Naoya


    According to the road map shown in ITRS [1], the EUV mask requirement for defect inspection will soon be to detect defect sizes below 20 nm. EB (Electron Beam) inspection with high resolution is one of the promising candidates to meet such severe defect inspection requirements. However, conventional EB inspection using the SEM method suffers from low throughput. Therefore, we have developed an EB inspection tool, named Model EBEYE M. The tool uses the PEM (Projection Electron Microscope) technique and an image acquisition technique with a TDI (Time Delay Integration) sensor while moving the stage continuously to achieve high throughput [2]. In our previous study, we showed the performance of the tool applied to the half pitch (hp) 2X nm node in a production phase for particle inspection on an EUV blank. In that study, a sensitivity of 20 nm with a capture rate of 100% and a throughput of 1 hour per 100 mm square were achieved, which exceeded the conventional optical inspection tool for EUV mask inspection [3]-[5]. Such particle inspection is called for not only on the EUV blank but also on the patterned EUV mask; it is required after defect repair and final cleaning in EUV mask fabrication. Moreover, it is useful as a particle monitoring tool between a certain number of exposures during wafer fabrication, because an EUV pellicle is not yet available. However, since the patterned EUV mask has a 3D structure, particle inspection on it is more difficult than on the EUV blank. In this paper, we evaluated particle inspection on the patterned EUV mask using the tool previously applied to the EUV blank. Moreover, the capability of particle inspection on the patterned EUV mask for the hp 2X nm node, with a sensitivity target of 25 nm, was confirmed.
    As a result, the inspection and SEM review results of the patterned EUV masks revealed that the sensitivity on hp 100 nm Line/Space (LS) patterns was 25 nm and that of the hp 140-160 nm Contact Hole

  7. Basic Pilot Code Development for Two-Fluid, Three-Field Model

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Jae Jun; Bae, S. W.; Lee, Y. J.; Chung, B. D.; Hwang, M.; Ha, K. S.; Kang, D. H


    A basic pilot code for a one-dimensional, transient, two-fluid, three-field model has been developed and verified using nine conceptual problems. The results of the verification are summarized below: - It was confirmed that the basic pilot code can simulate various flow conditions (such as single-phase liquid flow, bubbly flow, slug/churn turbulent flow, annular-mist flow, and single-phase vapor flow) and transitions between them. Mist flow was not simulated, but the basic pilot code appears capable of simulating mist flow conditions. - The pilot code was programmed so that the source terms of the governing equations and the numerical solution schemes can be easily tested. - Mass and energy conservation was confirmed for single-phase liquid and single-phase vapor flows. - It was confirmed that the inlet pressure and velocity boundary conditions work properly. - It was confirmed that, for single- and two-phase flows, the velocity and temperature of the non-existing phase are calculated as intended. - During the simulation of a two-phase flow, the calculation reaches a quasi-steady state with small-amplitude oscillations, which appear to be numerically induced. The research items for the improvement of the basic pilot code are listed in the last section of this report.

  8. A Perceptual Model for Sinusoidal Audio Coding Based on Spectral Integration

    Directory of Open Access Journals (Sweden)

    Jensen Søren Holdt


    Full Text Available Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of audio signals. In this paper, we present a new perceptual model that predicts masked thresholds for sinusoidal distortions. The model relies on signal detection theory and incorporates more recent insights about spectral and temporal integration in auditory masking. As a consequence, the model is able to predict the distortion detectability. In fact, the distortion detectability defines a (perceptually relevant) norm on the underlying signal space which is beneficial for optimisation algorithms such as rate-distortion optimisation or linear predictive coding. We evaluate the merits of the model by combining it with a sinusoidal extraction method and compare the results with those obtained with the ISO MPEG-1 Layer I-II recommended model. Listening tests show a clear preference for the new model. More specifically, the model presented here leads to a reduction of more than 20% in terms of the number of sinusoids needed to represent signals at a given quality level.

  9. THATCH: A computer code for modelling thermal networks of high-temperature gas-cooled nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Kroeger, P.G.; Kennett, R.J.; Colman, J.; Ginsberg, T. (Brookhaven National Lab., Upton, NY (United States))


    This report documents the THATCH code, which can be used to model general thermal and flow networks of solids and coolant channels in two-dimensional r-z geometries. The main application of THATCH is to model reactor thermo-hydraulic transients in High-Temperature Gas-Cooled Reactors (HTGRs). The available modules simulate pressurized or depressurized core heatup transients, heat transfer to general exterior sinks or to specific passive Reactor Cavity Cooling Systems, which can be air or water-cooled. Graphite oxidation during air or water ingress can be modelled, including the effects of added combustion products on the gas flow and the additional chemical energy release. A point kinetics model is available for analyzing reactivity excursions, for instance due to water ingress, and also for hypothetical no-scram scenarios. For most HTGR transients, which generally range over hours, a user-selected nodalization of the core in r-z geometry is used. However, a separate model of heat transfer in the symmetry element of each fuel element is also available for very rapid transients; this model can be applied coupled to the traditional coarser r-z nodalization. This report describes the mathematical models used in the code and the method of solution. It describes the code and its various sub-elements. Details of the input data and file usage, with file formats, are given for the code, as well as for several preprocessing and postprocessing options. The THATCH model of the currently applicable 350 MW{sub th} reactor is described. Input data for four sample cases are given, with output available in fiche form. Installation requirements and code limitations, as well as the most common error indications, are listed. 31 refs., 23 figs., 32 tabs.

  10. Reactor physics modelling of accident tolerant fuel for LWRs using ANSWERS codes

    Directory of Open Access Journals (Sweden)

    Lindley Benjamin A.


    adopts an integral configuration and a fully passive decay heat removal system to provide indefinite cooling capability for a class of accidents. This paper presents the equilibrium cycle core design and reactor physics behaviour of the I2S-LWR with U3Si2 and the advanced steel cladding. The results were obtained using the traditional two-stage approach, in which homogenized macroscopic cross-section sets were generated by WIMS and applied in a full 3D core solution with PANTHER. The results obtained with WIMS/PANTHER were compared against the Monte Carlo Serpent code developed by VTT and previously reported results for the I2S-LWR. The results were found to be in good agreement (e.g. <200 pcm in reactivity) among the compared codes, giving confidence that the WIMS/PANTHER reactor physics package can be reliably used in modelling advanced LWR systems.

  11. A Neuronal Model of Predictive Coding Accounting for the Mismatch Negativity


    Wacongne, Catherine; Changeux, Jean-Pierre; Dehaene, Stanislas


    The mismatch negativity (MMN) is thought to index the activation of specialized neural networks for active prediction and deviance detection. However, a detailed neuronal model of the neurobiological mechanisms underlying the MMN is still lacking, and its computational foundations remain debated. We propose here a detailed neuronal model of auditory cortex, based on predictive coding, that accounts for the critical features of MMN. The model is entirely composed of spi...

  12. Bootstrap imputation with a disease probability model minimized bias from misclassification due to administrative database codes. (United States)

    van Walraven, Carl


    Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status, and it is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results in which renal failure status was determined using surrogate measures: (1) diagnostic codes; (2) categorization of probability estimates of renal failure from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and its association with covariates was minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which the model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized by using bootstrap methods to impute condition status from multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
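    The bootstrap-imputation idea can be sketched as follows. The probabilities, cohort size, and 0.5 categorization threshold are synthetic stand-ins for the study's validated-model estimates, used only to show why imputing from probabilities avoids the bias that categorization introduces:

```python
import random

random.seed(42)

# Synthetic stand-in for validated-model disease probabilities across a
# cohort (the real study used probabilities from a previously validated
# renal-failure model, not a Beta distribution).
probs = [random.betavariate(0.5, 5) for _ in range(5000)]

def bootstrap_prevalence(probs, n_boot=100):
    """Impute disease status by Bernoulli draws from each patient's
    model probability, inside a patient-level bootstrap."""
    estimates = []
    for _ in range(n_boot):
        sample = [random.choice(probs) for _ in probs]   # resample patients
        imputed = sum(1 for p in sample if random.random() < p)
        estimates.append(imputed / len(sample))
    return sum(estimates) / len(estimates)

true_prev = sum(probs) / len(probs)     # expected prevalence under the model
boot_prev = bootstrap_prevalence(probs)
# Categorizing the probability (threshold at 0.5) discards every
# low-probability case wholesale and badly biases the prevalence estimate.
thresh_prev = sum(1 for p in probs if p >= 0.5) / len(probs)
print(round(true_prev, 3), round(boot_prev, 3), round(thresh_prev, 3))
```

    The bootstrap estimate tracks the model-implied prevalence closely, while the categorized estimate is far too low, mirroring the bias pattern reported in the abstract.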

  13. Development Of Sputtering Models For Fluids-Based Plasma Simulation Codes (United States)

    Veitzer, Seth; Beckwith, Kristian; Stoltz, Peter


    RF-driven plasma devices such as ion sources and plasma processing devices for many industrial and research applications benefit from detailed numerical modeling. Simulation of these devices using explicit PIC codes is difficult due to inherent separations of time and spatial scales. An alternative is fluid-based codes coupled with electromagnetics, which are applicable to modeling higher-density plasmas in the time domain and can relax time step requirements. To accurately model plasma-surface processes, such as physical sputtering and secondary electron emission, kinetic particle models have been developed, in which particles are emitted from a material surface due to plasma ion bombardment. In fluid models, plasma properties are defined on a cell-by-cell basis, and distributions for individual particle properties are assumed. This adds a complexity to surface process modeling, which we describe here. We describe the implementation of sputtering models into the hydrodynamic plasma simulation code USim, as well as methods to improve the accuracy of fluid-based simulation of plasma-surface interactions through better modeling of heat fluxes. This work was performed under the auspices of the Department of Energy, Office of Basic Energy Sciences Award #DE-SC0009585.

  14. Motion-compensated coding and frame rate up-conversion: models and analysis. (United States)

    Dar, Yehuda; Bruckstein, Alfred M


    Block-based motion estimation (ME) and motion compensation (MC) techniques are widely used in modern video processing algorithms and compression systems. The great variety of video applications and devices results in diverse compression specifications, such as frame rates and bit rates. In this paper, we study the effect of frame rate and compression bit rate on block-based ME and MC as commonly utilized in inter-frame coding and frame rate up-conversion (FRUC). This joint examination yields a theoretical foundation for comparing MC procedures in coding and FRUC. First, the video signal is locally modeled as a noisy translational motion of an image. Then, we theoretically model the motion-compensated prediction of available and absent frames, as in coding and FRUC applications, respectively. The theoretic MC-prediction error is studied further and its autocorrelation function is calculated, yielding useful separable simplifications for the coding application. We argue that a linear relation exists between the variance of the MC-prediction error and temporal distance. While the relevant distance in MC coding is between the predicted and reference frames, MC-FRUC is affected by the distance between the frames available for interpolation. We compare our estimates with experimental results and show that the theory explains the empirical behavior qualitatively. Then, we use the proposed models to analyze a system for improving video coding at low bit rates using spatio-temporal scaling. Although this concept is employed in practice in various forms, it has so far lacked a theoretical justification. We here harness the proposed MC models and present a comprehensive analysis of the system to qualitatively predict the experimental results.
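    The claimed linear relation between prediction-error variance and temporal distance can be checked on a toy model. The paper derives it for block-based motion-compensated prediction under a translational-motion signal model; the sketch below substitutes a much simpler per-pixel innovations model, where the linearity is exact, purely as an illustration:

```python
import random

random.seed(1)

# Toy check: frames evolve by adding i.i.d. per-pixel innovations, so the
# error of predicting frame d from frame 0 has variance d * SIGMA**2,
# i.e. linear in temporal distance.
N_PIXELS, N_FRAMES, SIGMA = 4000, 9, 1.0

frames = [[0.0] * N_PIXELS]
for _ in range(N_FRAMES - 1):
    frames.append([x + random.gauss(0, SIGMA) for x in frames[-1]])

def pred_error_var(d):
    """Sample variance of the (zero-motion) prediction error at distance d."""
    errs = [a - b for a, b in zip(frames[d], frames[0])]
    mean = sum(errs) / len(errs)
    return sum((e - mean) ** 2 for e in errs) / len(errs)

variances = [pred_error_var(d) for d in range(1, N_FRAMES)]
# Under the linear model, variance / distance should hover around SIGMA**2.
ratios = [v / d for d, v in zip(range(1, N_FRAMES), variances)]
print([round(v, 2) for v in variances])
```

    The measured variances grow in near-perfect proportion to the frame distance, the same qualitative trend the paper establishes for MC prediction.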

  15. Incorporation of Electrical Systems Models Into an Existing Thermodynamic Cycle Code (United States)

    Freeh, Josh


    Integration of the entire system includes: fuel cells, motors, propulsors, thermal/power management, compressors, etc. Use of existing, pre-developed NPSS capabilities includes: 1) optimization tools; 2) gas turbine models for hybrid systems; 3) increased interplay between subsystems; 4) off-design modeling capabilities; 5) altitude effects; and 6) existing transient modeling architecture. Other factors include: 1) easier transfer between users and groups of users; 2) general aerospace industry acceptance and familiarity; and 3) a flexible analysis tool that can also be used for ground power applications.

  16. Re-framing Inclusive Education Through the Capability Approach: An Elaboration of the Model of Relational Inclusion

    Directory of Open Access Journals (Sweden)

    Maryam Dalkilic


    Full Text Available Scholars have called for the articulation of new frameworks in special education that are responsive to culture and context and that address the limitations of medical and social models of disability. In this article, we advance a theoretical and practical framework for inclusive education based on the integration of a model of relational inclusion with Amartya Sen’s (1985) Capability Approach. This integrated framework engages children, educators, and families in principled practices that acknowledge differences, rather than deficits, and enable attention to enhancing the capabilities of children with disabilities in inclusive educational environments. Implications include the development of policy that clarifies the process required to negotiate capabilities and valued functionings and the types of resources required to permit children, educators, and families to create relationally inclusive environments.

  17. Solar optical codes evaluation for modeling and analyzing complex solar receiver geometries (United States)

    Yellowhair, Julius; Ortega, Jesus D.; Christian, Joshua M.; Ho, Clifford K.


    Solar optical modeling tools are valuable for modeling and predicting the performance of solar technology systems. Four optical modeling tools were evaluated using the National Solar Thermal Test Facility heliostat field combined with flat plate receiver geometry as a benchmark. The four optical modeling tools evaluated were DELSOL, HELIOS, SolTrace, and Tonatiuh. All are available for free from their respective developers. DELSOL and HELIOS both use a convolution of the sunshape and optical errors for rapid calculation of the incident irradiance profiles on the receiver surfaces. SolTrace and Tonatiuh use ray-tracing methods to intersect the reflected solar rays with the receiver surfaces and construct irradiance profiles. We found the ray-tracing tools, although slower in computation speed, to be more flexible for modeling complex receiver geometries, whereas DELSOL and HELIOS were limited to standard receiver geometries such as flat plate, cylinder, and cavity receivers. We also list the strengths and deficiencies of the tools to show tool preference depending on the modeling and design needs. We provide an example of using SolTrace for modeling nonconventional receiver geometries. The goal is to transfer the irradiance profiles on the receiver surfaces calculated in an optical code to a computational fluid dynamics code such as ANSYS Fluent. This approach eliminates the need for using discrete ordinates or discrete transfer radiation models, which are computationally intensive, within the CFD code. The irradiance profiles on the receiver surfaces then allow for thermal and fluid analysis of the receiver.

  18. A study on the dependency between turbulent models and mesh configurations of CFD codes

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Jungjin; Heo, Yujin; Jerng, Dong-Wook [CAU, Seoul (Korea, Republic of)


    This paper focuses on the analysis of the behavior of hydrogen mixing and hydrogen stratification, using the GOTHIC code and a CFD code. Specifically, we examined the mesh sensitivity and how the turbulence model affects hydrogen stratification or hydrogen mixing, depending on the mesh configuration. In this work, sensitivity analyses for the meshes and the turbulence models were conducted for mixing and stratification phenomena. During severe accidents in nuclear power plants, hydrogen may be generated, complicating the atmospheric condition of the containment by causing stratification of air, steam, and hydrogen. This could significantly impact containment integrity analyses, as hydrogen could accumulate in local regions. From this need arises the importance of research on the stratification of gases in the containment. Two computational fluid dynamics codes, GOTHIC and STAR-CCM+, were adopted, and the computational results were benchmarked against experimental data from the PANDA facility. The main findings of the present work can be summarized as follows: 1) For the GOTHIC code, the aspect ratio of the mesh was found to be more important than the mesh size. Also, if the number of mesh cells is over 3,000, the effects of the turbulence models were marginal. 2) For STAR-CCM+, the tendency is quite different: the effects of the turbulence models were small for smaller numbers of mesh cells, but as the number of mesh cells increases, the effects of the turbulence models become significant. Another observation is that away from the injection orifice, the role of the turbulence models tended to be important due to the nature of the mixing process and the induced jet stream.

  19. A model code on co-determination and CSR : The Netherlands: A bottom-up approach

    NARCIS (Netherlands)

    Lambooy, T.E.


    This article discusses the works council’s role in the determination of a company’s CSR strategy and the implementation thereof throughout the organisation. The association of the works councils of multinational companies with a base in the Netherlands has recently developed a ‘Model Code on Co-Dete

  20. Code-switched English pronunciation modeling for Swahili spoken term detection

    CSIR Research Space (South Africa)

    Kleynhans, N


    Full Text Available Computer Science 81 (2016) 128–135. 5th Workshop on Spoken Language Technology for Under-resourced Languages, SLTU 2016, 9-12 May 2016, Yogyakarta, Indonesia. Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection. Neil...

  1. Dynamic Model for the Z Accelerator Vacuum Section Based on Transmission Line Code

    Institute of Scientific and Technical Information of China (English)

    呼义翔; 雷天时; 吴撼宇; 郭宁; 韩娟娟; 邱爱慈; 王亮平; 黄涛; 丛培天; 张信军; 李岩; 曾正中; 孙铁平


    The transmission-line-circuit model of the Z accelerator, developed originally by W. A. STYGAR, P. A. CORCORAN, et al., is revised. The revised model uses different calculations for the electron loss and flow impedance in the magnetically insulated transmission line system of the Z accelerator before and after magnetic insulation is established. By including electron pressure and zero electric field at the cathode, a closed set of equations is obtained at each time step, and dynamic shunt resistance (used to represent any electron loss to the anode) and flow impedance are solved, which have been incorporated into the transmission line code for simulations of the vacuum section in the Z accelerator. Finally, the results are discussed in comparison with earlier findings to show the effectiveness and limitations of the model.

  2. Model document for code officials on solar heating and cooling of buildings. Second draft

    Energy Technology Data Exchange (ETDEWEB)

    Trant, B. S.


    Guidelines and codes for the construction, alteration, moving, demolition, repair and use of solar energy systems and parts thereof used for space heating and cooling, for water heating and for processing purposes in, on, or adjacent to buildings and appurtenant structures are presented. The necessary references are included wherever these provisions affect or are affected by the requirements of nationally recognized standards or model codes. The purpose of this document is to safeguard life and limb, health, property and public welfare by regulating and controlling the design, construction, quality of materials, location and maintenance of solar energy systems in, on, or adjacent to buildings and appurtenant structures.

  3. Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence


    Pastawski, Fernando; Yoshida, Beni; Harlow, Daniel; Preskill, John


    We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be ...

  4. Underwater Acoustic Networks: Channel Models and Network Coding based Lower Bound to Transmission Power for Multicast

    CERN Document Server

    Lucani, Daniel E; Stojanovic, Milica


    The goal of this paper is two-fold. First, to establish a tractable model for the underwater acoustic channel useful for network optimization in terms of convexity. Second, to propose a network coding based lower bound for transmission power in underwater acoustic networks, and compare this bound to the performance of several network layer schemes. The underwater acoustic channel is characterized by a path loss that depends strongly on transmission distance and signal frequency. The exact relationship among power, transmission band, distance and capacity for the Gaussian noise scenario is a complicated one. We provide a closed-form approximate model for 1) transmission power and 2) optimal frequency band to use, as functions of distance and capacity. The model is obtained through numerical evaluation of analytical results that take into account physical models of acoustic propagation loss and ambient noise. Network coding is applied to determine a lower bound to transmission power for a multicast scenario, fo...
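    A common closed-form starting point for such underwater path-loss models combines spreading loss with Thorp's empirical absorption formula; the sketch below uses the standard Thorp coefficients and an assumed spreading factor, whereas the paper's own fitted closed forms differ in detail:

```python
import math

# Thorp's empirical absorption formula: frequency f in kHz, result in dB/km.
def thorp_db_per_km(f_khz: float) -> float:
    f2 = f_khz ** 2
    return (0.11 * f2 / (1 + f2) + 44 * f2 / (4100 + f2)
            + 2.75e-4 * f2 + 0.003)

def path_loss_db(dist_km: float, f_khz: float, k: float = 1.5) -> float:
    """10*log10 A(d, f) = spreading term + absorption term.

    k is the spreading factor (1.5 = "practical" spreading, assumed here).
    """
    dist_m = dist_km * 1000
    return k * 10 * math.log10(dist_m) + dist_km * thorp_db_per_km(f_khz)

# Attenuation grows with both distance and frequency, which is why the
# optimal operating band shrinks and shifts down as range increases.
loss_short = path_loss_db(1.0, 10.0)    # 1 km at 10 kHz
loss_long = path_loss_db(10.0, 10.0)    # 10 km at 10 kHz
loss_hi_f = path_loss_db(10.0, 50.0)    # 10 km at 50 kHz
print(round(loss_short, 1), round(loss_long, 1), round(loss_hi_f, 1))
```

    At 10 km, moving from 10 kHz to 50 kHz adds well over 100 dB of absorption, illustrating the strong distance-frequency coupling that the paper's closed-form power and band models approximate.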

  5. A simple modelling of mass diffusion effects on condensation with noncondensable gases for the CATHARE Code

    Energy Technology Data Exchange (ETDEWEB)

    Coste, P.; Bestion, D. [Commissariat a l'Energie Atomique, Grenoble (France)]


    This paper presents a simple modelling of mass diffusion effects on condensation. In the presence of noncondensable gases, the mass diffusion near the interface is modelled using the heat and mass transfer analogy, which normally requires an iterative procedure to calculate the interface temperature. Simplifications of the model and of the solution procedure are used without significant degradation of the predictions. The model is assessed against experimental data for both film condensation in vertical tubes and direct contact condensation in horizontal tubes, including air-steam, nitrogen-steam and helium-steam data. It is implemented in the CATHARE code, a French system code for nuclear reactor thermal hydraulics developed by CEA, EDF and FRAMATOME.

  6. Distortion-rate models for entropy-coded lattice vector quantization. (United States)

    Raffy, P; Antonini, M; Barlaud, M


    The increasing demand for real-time applications requires variable-rate quantizers with good performance in the low bit rate domain. In order to minimize the complexity of quantization while maintaining a reasonably high PSNR, we propose to use an entropy-coded lattice vector quantizer (ECLVQ). These quantizers have been shown to outperform the well-known EZW algorithm in terms of rate-distortion tradeoff. In this paper, we focus on modeling the mean squared error (MSE) distortion and the prefix code rate for ECLVQ. First, we generalize the distortion model of Jeong and Gibson (1993) for fixed-rate cubic quantizers to lattices under a high-rate assumption. Second, we derive new rate models for ECLVQ that are accurate at low bit rates without any high-rate assumptions. Simulation results demonstrate the precision of our models.
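The high-rate distortion modeling referred to above rests on the classic result that a fine uniform (cubic-lattice) quantizer has MSE of approximately delta^2/12 per dimension. A quick numerical check of that baseline, illustrative only and not the paper's generalized model:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=200_000)   # source samples

delta = 0.02                               # quantizer step size
xq = delta * np.round(x / delta)           # per-dimension cubic (Z^n) lattice quantization
mse = float(np.mean((x - xq) ** 2))

model = delta ** 2 / 12                    # high-rate model prediction
```

For fine quantization the empirical MSE lands within a few percent of delta^2/12; the lattice generalizations in the paper refine this with lattice-specific normalized second moments.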

  7. A finite-temperature Hartree-Fock code for shell-model Hamiltonians (United States)

    Bertsch, G. F.; Mehlhaff, J. M.


    The codes find axially symmetric minima of a Hartree-Fock energy functional for a Hamiltonian supplied in a shell-model basis. The functional to be minimized is the Hartree-Fock energy for zero-temperature properties or the Hartree-Fock grand potential for finite-temperature properties (thermal energy, entropy). The minimization may be subject to additional constraints besides axial symmetry and nucleon numbers. A single-particle operator can be used to constrain the minimization by adding it to the single-particle Hamiltonian with a Lagrange multiplier; in the zero-temperature code one can also constrain its expectation value. The orbital filling can likewise be constrained in the zero-temperature code, fixing the number of nucleons having given Kπ quantum numbers. This is particularly useful for resolving near-degeneracies among distinct minima.

  8. Physics Based Model for Cryogenic Chilldown and Loading. Part IV: Code Structure (United States)

    Luchinsky, D. G.; Smelyanskiy, V. N.; Brown, B.


    This is the fourth report in a series of technical reports describing the application of a separated two-phase flow model to the cryogenic loading operation. In this report we present the structure of the code. The code consists of five major modules: (1) geometry; (2) solver; (3) material properties; (4) correlations; and (5) stability control. The two key modules - solver and correlations - are further divided into a number of submodules. Most of the physics and knowledge databases related to the properties of cryogenic two-phase flow are contained in the cryogenic correlations module. The functional form of those correlations is not well established and is a subject of extensive research; multiple parametric forms for various correlations are currently available. Some of them are included in the correlations module, as will be described in detail in a separate technical report. Here we describe the overall structure of the code and focus on the details of the solver and stability control modules.

  9. Modeling of transient dust events in fusion edge plasmas with DUSTT-UEDGE code (United States)

    Smirnov, R. D.; Krasheninnikov, S. I.; Pigarov, A. Yu.; Rognlien, T. D.


    It is well known that dust can be produced in fusion devices by various processes involving structural damage of plasma-exposed materials. Recent computational and experimental studies have demonstrated that dust production and the associated plasma contamination can present serious challenges to achieving sustained fusion reactions in future fusion devices such as ITER. To analyze the impact that dust can have on the performance of fusion plasmas, the authors use coupled dust and plasma transport modeling with the DUSTT-UEDGE code. In the past, only steady-state computational studies, presuming a continuous source of dust influx, were performed, owing to the iterative nature of the DUSTT-UEDGE code coupling. However, experimental observations demonstrate that intermittent injection of large quantities of dust, often associated with transient plasma events, may severely impact fusion plasma conditions and even lead to discharge termination. In this work we report on progress in coupling the DUSTT-UEDGE codes in a time-dependent regime, which allows modeling of transient dust-plasma transport processes. The methodology and details of the time-dependent code coupling are presented, together with example simulations of transient dust-plasma transport phenomena. These include time-dependent modeling of the impact of short outbursts of different quantities of tungsten dust in the ITER divertor on the edge plasma parameters. The plasma response to outbursts of various durations, locations, and ejected dust sizes is analyzed.

  10. Development of full wave code for modeling RF fields in hot non-uniform plasmas (United States)

    Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo


    FAR-TECH, Inc. is developing a full wave RF modeling code to model RF fields in fusion devices and in general plasma applications. As an important component of the code, an adaptive meshless technique is introduced to solve the wave equations, which allows resolving plasma resonances efficiently and adapting to the complexity of antenna geometry and device boundary. The computational points are generated using either a point elimination method or a force balancing method based on the monitor function, which is calculated by solving the cold plasma dispersion equation locally. Another part of the code is the conductivity kernel calculation, used for modeling the nonlocal hot plasma dielectric response. The conductivity kernel is calculated on a coarse grid of test points and then interpolated linearly onto the computational points. All the components of the code are parallelized using MPI and OpenMP libraries to optimize the execution speed and memory. The algorithm and the results of our numerical approach to solving 2-D wave equations in a tokamak geometry will be presented. Work is supported by the U.S. DOE SBIR program.

  11. Large Discriminative Structured Set Prediction Modeling With Max-Margin Markov Network for Lossless Image Coding. (United States)

    Dai, Wenrui; Xiong, Hongkai; Wang, Jia; Zheng, Yuan F


    Inherent statistical correlations for context-based prediction and structural interdependencies for local coherence are not fully exploited in existing lossless image coding schemes. This paper proposes a novel prediction model in which the optimal correlated prediction for a set of pixels is obtained in the sense of the least code length. It not only exploits spatial statistical correlations for optimal prediction directly based on 2D contexts, but also formulates data-driven structural interdependencies to make the prediction error coherent with the underlying probability distribution for coding. Under joint constraints for local coherence, max-margin Markov networks are incorporated to combine support vector machines structurally, yielding a max-margin estimate for a correlated region. Specifically, the model produces multiple predictions in each block, with parameters learned so that the distinction between the actual pixel and all possible estimates is maximized. It is proved that, as the sample size grows, the prediction error is asymptotically upper bounded by the training error under a decomposable loss function. Incorporated into a lossless image coding framework, the proposed model outperforms most reported prediction schemes.

  12. CPAT: Coding-Potential Assessment Tool using an alignment-free logistic regression model. (United States)

    Wang, Liguo; Park, Hyun Jung; Dasari, Surendra; Wang, Shengqin; Kocher, Jean-Pierre; Li, Wei


    Thousands of novel transcripts have been identified using deep transcriptome sequencing. This discovery of large and 'hidden' transcriptome rejuvenates the demand for methods that can rapidly distinguish between coding and noncoding RNA. Here, we present a novel alignment-free method, Coding Potential Assessment Tool (CPAT), which rapidly recognizes coding and noncoding transcripts from a large pool of candidates. To this end, CPAT uses a logistic regression model built with four sequence features: open reading frame size, open reading frame coverage, Fickett TESTCODE statistic and hexamer usage bias. CPAT software outperformed (sensitivity: 0.96, specificity: 0.97) other state-of-the-art alignment-based software such as Coding-Potential Calculator (sensitivity: 0.99, specificity: 0.74) and Phylo Codon Substitution Frequencies (sensitivity: 0.90, specificity: 0.63). In addition to high accuracy, CPAT is approximately four orders of magnitude faster than Coding-Potential Calculator and Phylo Codon Substitution Frequencies, enabling its users to process thousands of transcripts within seconds. The software accepts input sequences in either FASTA- or BED-formatted data files. We also developed a web interface for CPAT that allows users to submit sequences and receive the prediction results almost instantly.
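CPAT's classifier is a logistic regression over four sequence features. A toy sketch of two of those features, ORF size and ORF coverage, combined with a logistic function is shown below; the weights are hypothetical illustrations, not CPAT's trained coefficients, and the ORF scan covers only the forward frames:

```python
import math

STOPS = {"TAA", "TAG", "TGA"}

def longest_orf(seq):
    """Length (nt) of the longest ATG..stop ORF in the three forward frames."""
    best = 0
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i + 3] == "ATG":
                j = i + 3
                while j + 3 <= len(seq) and seq[j:j + 3] not in STOPS:
                    j += 3
                if j + 3 <= len(seq):          # stop codon found
                    best = max(best, j + 3 - i)
                i = j + 3
            else:
                i += 3
    return best

def coding_probability(seq, w0=-6.0, w_size=0.01, w_cov=4.0):
    """Logistic score from ORF size and ORF coverage (weights are hypothetical)."""
    size = longest_orf(seq)
    cov = size / len(seq) if seq else 0.0
    z = w0 + w_size * size + w_cov * cov
    return 1.0 / (1.0 + math.exp(-z))
```

The real tool adds the Fickett TESTCODE statistic and hexamer usage bias as features and fits the weights on labeled coding/noncoding training sets.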

  13. Applications of the 3-dim ICRH global wave code FISIC and comparison with other models

    Energy Technology Data Exchange (ETDEWEB)

    Kruecken, T.; Brambilla, M. (Association Euratom-Max-Planck-Institut fuer Plasmaphysik, Garching (Germany, F.R.))


    Numerical simulations of two ICRF heating experiments in ASDEX are presented, comparing the FISIC code, which solves the integrodifferential wave equations in the finite Larmor radius (FLR) approximation, with a ray-tracing model. The different models show on the whole good agreement; we can, however, identify a few interesting toroidal effects, in particular on the efficiency of mode conversion and on the propagation of ion Bernstein waves. (author).

  14. AMICON: A multi-model interpretative code for two phase flow instrumentation with uncertainty analysis (United States)

    Teague, J. W., II


    The code was designed to calculate mass fluxes and mass flux standard deviations, as well as certain other fluid physical properties. Several models are used to compute mass fluxes and uncertainties since some models provide more reliable results than others under certain flow situations. The program was specifically prepared to compute these variables using data gathered from spoolpiece instrumentation on the Thermal-Hydraulic Test Facility (THTF) and written to an Engineering Units (EU) data set.
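The abstract does not specify how AMICON combines its models, but a standard way to fuse several estimates that each carry an uncertainty is inverse-variance weighting; as a sketch:

```python
def combine_estimates(values, sigmas):
    """Inverse-variance weighted mean of several model estimates,
    plus the standard deviation of the combined estimate."""
    weights = [1.0 / s ** 2 for s in sigmas]
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    return mean, (1.0 / total) ** 0.5
```

Models judged unreliable in a given flow regime simply enter with a large sigma and contribute little to the combined mass flux.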

  15. Complexity modeling for context-based adaptive binary arithmetic coding (CABAC) in H.264/AVC decoder (United States)

    Lee, Szu-Wei; Kuo, C.-C. Jay


    One way to save power in the H.264 decoder is for the H.264 encoder to generate decoder-friendly bit streams. Following this idea, a decoding complexity model of context-based adaptive binary arithmetic coding (CABAC) for H.264/AVC is investigated in this research. Since different coding modes affect the number of quantized transformed coefficients (QTCs) and motion vectors (MVs) and, consequently, the complexity of entropy decoding, an encoder equipped with a complexity model can estimate the complexity of entropy decoding and choose the coding mode that yields the best tradeoff between rate, distortion and decoding complexity. The complexity model consists of two parts: one for source data (i.e. QTCs) and the other for header data (i.e. the macroblock (MB) type and MVs). Thus, the proposed CABAC decoding complexity model of an MB is a function of the QTCs and associated MVs, which is verified experimentally. The proposed model provides good estimates for a variety of bit streams. Practical applications of this complexity model are also discussed.
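A per-macroblock complexity model of the kind described, a function of the QTC and MV counts, can be fit by linear least squares. The sketch below uses synthetic, exactly linear "measurements" (all numbers are hypothetical, chosen only to illustrate the fit):

```python
import numpy as np

# Hypothetical per-macroblock samples: (#QTCs, #MVs, measured decode cost),
# generated from cost = 5 + 1.5 * n_qtc + 2 * n_mv for illustration.
samples = np.array([
    [12.0, 2.0,  27.0],
    [40.0, 4.0,  73.0],
    [ 5.0, 1.0,  14.5],
    [60.0, 8.0, 111.0],
    [25.0, 2.0,  46.5],
])

# Fit cost ~ c0 + c1 * n_qtc + c2 * n_mv by linear least squares.
A = np.column_stack([np.ones(len(samples)), samples[:, 0], samples[:, 1]])
coeffs, *_ = np.linalg.lstsq(A, samples[:, 2], rcond=None)

def predicted_cost(n_qtc, n_mv):
    """Predicted CABAC decoding cost (arbitrary units) for one macroblock."""
    c0, c1, c2 = coeffs
    return c0 + c1 * n_qtc + c2 * n_mv
```

In practice the coefficients would be calibrated per decoder platform from profiled bit streams, and the encoder would evaluate the model inside its mode-decision loop.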

  16. Benchmarking of Heavy Ion Transport Codes

    Energy Technology Data Exchange (ETDEWEB)

    Remec, Igor [ORNL]; Ronningen, Reginald M. [Michigan State University, East Lansing]; Heilbronn, Lawrence [University of Tennessee, Knoxville (UTK)]


    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in designing and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  17. HYDROïD humanoid robot head with perception and emotion capabilities :Modeling, Design and Experimental Results

    Directory of Open Access Journals (Sweden)

    Samer Alfayad


    In the framework of the HYDROïD humanoid robot project, this paper describes the modeling and design of an electrically actuated head mechanism. Perception and emotion capabilities are considered in the design process. Since the HYDROïD humanoid robot is hydraulically actuated, the choice of electrical actuation for the head mechanism addressed in this paper is justified. Considering perception and emotion capabilities leads to a total of 15 degrees of freedom for the head mechanism, split across four main sub-mechanisms: the neck, the mouth, the eyes and the eyebrows. Biological data and the kinematic performance of the human head are taken as inputs to the design process. A new solution of uncoupled eyes is developed to address the master-slave process that links the human eyes, as well as vergence capabilities. Each sub-system is modeled in order to obtain its equations of motion, frequency response and transfer function; the neck pitch rotation is given as a study example. The head mechanism's performance is then presented through a comparison between model and experimental results, validating the hardware capabilities. Finally, the head mechanism is integrated on the HYDROïD upper body. An object-tracking experiment coupled with emotional expressions is carried out to validate the synchronization of the eye rotations with the body motions.

  18. Capability ethics

    NARCIS (Netherlands)

    I.A.M. Robeyns (Ingrid)


    textabstractThe capability approach is one of the most recent additions to the landscape of normative theories in ethics and political philosophy. Yet in its present stage of development, the capability approach is not a full-blown normative theory, in contrast to utilitarianism, deontological

  1. New high burnup fuel models for NRC's licensing audit code, FRAPCON

    Energy Technology Data Exchange (ETDEWEB)

    Lanning, D.D.; Beyer, C.E.; Painter, C.L. [Pacific Northwest Laboratory, Richland, WA (United States)]


    Fuel behavior models have recently been updated within the U.S. Nuclear Regulatory Commission's steady-state FRAPCON code, which is used for auditing fuel vendor/utility codes and analyses. These modeling updates have concentrated on providing a best-estimate prediction of steady-state fuel behavior up to the maximum burnup levels of current data (60 to 65 GWd/MTU rod-average). A decade has passed since these models were last updated. Currently, some U.S. utilities and fuel vendors are requesting approval for rod-average burnups greater than 60 GWd/MTU; however, until these recent updates the NRC did not have valid fuel performance models at these higher burnup levels. Pacific Northwest Laboratory (PNL) has reviewed 15 separate-effects models within the FRAPCON fuel performance code (References 1 and 2) and identified nine models that needed updating for improved prediction of fuel behavior at high burnup levels. The six separate-effects models not updated were the cladding thermal properties, cladding thermal expansion, cladding creepdown, fuel specific heat, fuel thermal expansion and open gap conductance; comparison of these models to the currently available data indicates that they still adequately predict the data within data uncertainties. The nine models identified as needing improvement for predicting high-burnup behavior are fission gas release (FGR), fuel thermal conductivity (accounting for both high-burnup effects and burnable poison additions), fuel swelling, fuel relocation, radial power distribution, fuel-cladding contact gap conductance, cladding corrosion, cladding mechanical properties and cladding axial growth. Each of the updated models is described in the following sections, and the model predictions are compared to currently available high-burnup data.

  2. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL]; Gregor, Jens [University of Tennessee, Knoxville (UTK)]; Bingham, Philip R [ORNL]


    At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 µm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we showed via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because it models the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model into our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
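The model-based least squares reconstruction amounts to solving min over x of ||Ax - b||^2, where A encodes the coded-mask system response. A 1-D toy version with a hypothetical binary mask pattern (noiseless, so the recovery is exact):

```python
import numpy as np

# 1-D toy: object x, coded mask (binary aperture), measurement b = A @ x.
mask = np.array([1.0, 0.0, 1.0, 1.0, 0.0])      # hypothetical coded-aperture pattern
n = 12                                           # number of object pixels

# Build the convolution (system) matrix: each object pixel spreads through the mask.
A = np.zeros((n + len(mask) - 1, n))
for i in range(n):
    A[i:i + len(mask), i] = mask

rng = np.random.default_rng(1)
x_true = rng.uniform(0.0, 1.0, n)
b = A @ x_true                                   # noiseless forward projection

x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)    # least squares reconstruction
```

The paper's contribution is making A faithful to the real system, including the measured flux distribution of the neutron source, rather than assuming an idealized point source.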


  3. Integrated Analysis Capability (IAC)


    Frisch, H. P.


    The objective of the Integrated Analysis Capability (IAC) system is to provide a highly effective, interactive analysis tool for the integrated design of large structures. With the goal of supporting the unique needs of engineering analysis groups concerned with interdisciplinary problems, IAC was developed to interface programs from the fields of structures, thermodynamics, controls, and system dynamics with an executive system and database to yield a highly efficient multi-disciplinary system. Special attention is given to user requirements such as data handling and on-line assistance with operational features, and the ability to add new modules of the user's choice at a future date. IAC contains an executive system, a data base, general utilities, interfaces to various engineering programs, and a framework for building interfaces to other programs. IAC has shown itself to be effective in automatic data transfer among analysis programs. IAC 2.5, designed to be compatible as far as possible with Level 1.5, contains a major upgrade in executive and database management system capabilities, and includes interfaces to enable thermal, structures, optics, and control interaction dynamics analysis. The IAC system architecture is modular in design. 1) The executive module contains an input command processor, an extensive data management system, and driver code to execute the application modules. 2) Technical modules provide standalone computational capability as well as support for various solution paths or coupled analyses. 3) Graphics and model generation interfaces are supplied for building and viewing models. Advanced graphics capabilities are provided within particular analysis modules such as INCA and NASTRAN. 4) Interface modules provide for the required data flow between IAC and other modules. 5) User modules can be arbitrary executable programs or JCL procedures with no pre-defined relationship to IAC. 
6) Special purpose modules are included, such as MIMIC (Model

  4. Production of a New Model for Evaluation of Iran Ecological Capabilities in Order to Establish Services and Civil Development Application

    Directory of Open Access Journals (Sweden)

    J. Nouri


    This study aims to design a new model for service and civil development applications for use in evaluating Iran's ecological capability. For this purpose, in the first step, the frequency of sustainable and unsustainable ecological factors in Iran was determined. In the next step the Delphi method, itself a branch of phase theory methods, was used. Priorities among effective ecological factors and the frequency value of each factor were determined by completing 750 questionnaires for the desired branches (Delphi group). Questionnaire data were analyzed using SPSS 11.0 software. After being designed, the model was introduced into a geographical information system using the ArcInfo program. A sensitivity analysis of the model was performed, using the simplex method in Lingo software, to determine how sensitive the favorable replies are to specific changes in the target function. This model is applied in evaluating ecological capability during the analysis of ecological resources in the field under examination, after the preparation of environmental unit maps; indeed, the environmental unit map is the base map for ecological capability evaluation in this study. To assess the capabilities of the new method, the ecological capability of District 22, Municipality of Tehran, was evaluated as a case study, and a service and civil development application map was prepared using the ArcView GIS 3.2a program. According to the new method, the scores given to environmental units in this district vary from zero to sixty-five. Restricting factors, such as some environmental units along the river path, fold passages and hilly areas, hinder these units from receiving a service and civil development designation.

  5. Designing a Customer Agility Model Based on Dynamic Capabilities: Effect of IT Competence, Entrepreneurial Alertness and Market Acuity

    Directory of Open Access Journals (Sweden)

    Seyed Hamid Khodadad Hoseini


    Today organizations face great environmental turbulence, and high levels of turbulence can paralyze a firm's operations. The outputs of organizational processes in such environments depend on the firm's ability to manage change and on its flexibility. Since one of the important changes in turbulent environments is the shifting needs and preferences of customers, and since the new marketing approach centers on customers' needs, customer agility has been identified as a vital competency for the survival of organizations. Given the critical role of customer agility in turbulent competitive environments, this concept has attracted many management researchers in recent years. It is therefore vital for organizations to learn how to achieve this capability, yet very little research has been done in this regard. Because dynamic capabilities are an important tool for achieving customer agility, this research proposes a model of the formation of customer agility based on dynamic capabilities and examines it in the electronics industry of Iran. The model includes IT competence, entrepreneurial alertness and market acuity for improving the output of organizational processes, and has been developed from three streams of management literature: strategic management work on dynamic capabilities, entrepreneurship, and information technology. Findings from the sample support the research model. In addition, it is concluded that the organization's dynamic capabilities help shape customer agility, and that customer agility has a positive effect on the quality and efficiency of process outputs.

  6. Designing Customer Agility Model Based on Organizational Dynamic Capabilities: Effect of IT Competence, Entrepreneurial Alertness and Market Acuity

    Directory of Open Access Journals (Sweden)

    Soheila Khoddami


    Today organizations face great environmental turbulence, and high levels of turbulence can paralyze a firm's operations. The outputs of organizational processes in such environments depend on the firm's ability to manage change and on its flexibility. Since one of the important changes in turbulent environments is the shifting needs and preferences of customers, and since the new marketing approach centers on customers' needs, customer agility has been identified as a vital competency for the survival of organizations. Given the critical role of customer agility in turbulent competitive environments, this concept has attracted many management researchers in recent years. It is therefore vital for organizations to learn how to achieve this capability, yet very little research has been done in this regard. Because dynamic capabilities are an important tool for achieving customer agility, this research proposes a model of the formation of customer agility based on dynamic capabilities and examines it in the electronics industry of Iran. The model includes IT competence, entrepreneurial alertness and market acuity for improving the output of organizational processes, and has been developed from three streams of management literature: strategic management work on dynamic capabilities, entrepreneurship, and information technology. Findings from the sample support the research model. In addition, it is concluded that the organization's dynamic capabilities help shape customer agility, and that customer agility has a positive effect on the quality and efficiency of process outputs.


    Directory of Open Access Journals (Sweden)

    Yerriswamy Wooluru


    Process capability indices (PCIs) have been widely used as a means of summarizing process performance relative to a set of specification limits. The proper use of process capability indices is based on assumptions which may not always hold, so whether these indices can truly reflect the performance of a process is sometimes questionable. Most PCIs, including Cp, Cpk, Cpm and Cpmk, neglect changes in the shape of the distribution, which is an important indicator of problems in skewness-prone processes. Wright proposed a process capability index, Cs, to detect shape changes in a process due to skewness by incorporating a penalty for skewness. In this paper, the effect of skewness on the accuracy of Wright's capability index Cs is studied, and a comparison is made with the Cp, Cpk, Cpm and Cpmk indices when the distribution of the quality characteristic considered (spring force) is slightly skewed. This paper also discusses modelling the non-normal data using statistical software, and the results are compared with those of other methods.
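The four standard indices compared in the paper have simple closed forms. A minimal sketch (population standard deviation, no subgrouping; Wright's Cs, which adds a skewness penalty under the radical, is omitted here):

```python
import math

def capability_indices(data, lsl, usl, target):
    """Cp, Cpk, Cpm and Cpmk from sample mean and (population) standard deviation."""
    n = len(data)
    mu = sum(data) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / n)
    # tau penalizes an off-target mean in the Taguchi-style indices Cpm/Cpmk.
    tau = math.sqrt(sigma ** 2 + (mu - target) ** 2)
    return {
        "Cp":   (usl - lsl) / (6 * sigma),
        "Cpk":  min(usl - mu, mu - lsl) / (3 * sigma),
        "Cpm":  (usl - lsl) / (6 * tau),
        "Cpmk": min(usl - mu, mu - lsl) / (3 * tau),
    }
```

For a process centered on the target, all four indices coincide; they diverge as the mean drifts, and none of them reacts to a change in skewness alone, which is the gap Cs is meant to fill.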

  8. THELMA code electromagnetic model of ITER superconducting cables and application to the ENEA stability experiment (United States)

    Ciotti, M.; Nijhuis, A.; Ribani, P. L.; Savoldi Richard, L.; Zanino, R.


    The new THELMA code, including a thermal-hydraulic (TH) and an electro-magnetic (EM) model of a cable-in-conduit conductor (CICC), has been developed. The TH model is at this stage relatively conventional, with two fluid components (He flowing in the annular cable region and He flowing in the central channel) being particular to the CICC of the International Thermonuclear Experimental Reactor (ITER), and two solid components (superconducting strands and jacket/conduit). In contrast, the EM model is novel and will be presented here in full detail. The results obtained from this first version of the code are compared with experimental results from pulsed tests of the ENEA stability experiment (ESE), showing good agreement between computed and measured deposited energy and subsequent temperature increase.

  9. Incorporation of the capillary hysteresis model HYSTR into the numerical code TOUGH

    Energy Technology Data Exchange (ETDEWEB)

    Niemi, A.; Bodvarsson, G.S.; Pruess, K.


    As part of the work performed to model flow in the unsaturated zone at Yucca Mountain, Nevada, a capillary hysteresis model has been developed. The computer program HYSTR computes the hysteretic capillary pressure -- liquid saturation relationship through interpolation of tabulated data, and can easily be incorporated into any numerical unsaturated flow simulator. A complete description of HYSTR, including a brief summary of the previous hysteresis literature, a detailed description of the program, and instructions for its incorporation into a numerical simulator, is given in the HYSTR user's manual (Niemi and Bodvarsson, 1991a). This report describes the incorporation of HYSTR into the numerical code TOUGH (Transport of Unsaturated Groundwater and Heat; Pruess, 1986). The changes made and the procedures for using TOUGH for hysteresis modeling are documented.
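HYSTR's core operation, interpolating tabulated capillary pressure -- saturation data on the branch matching the current process direction, can be sketched as follows (the table values are hypothetical and the branch switching logic is reduced to a flag):

```python
import numpy as np

# Hypothetical tabulated Pc(S) branches (liquid saturation S, capillary pressure Pc in Pa).
S_TAB = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
PC_DRAINAGE = np.array([9.0e4, 5.0e4, 3.0e4, 1.5e4, 5.0e3])    # drying branch
PC_IMBIBITION = np.array([6.0e4, 3.5e4, 2.0e4, 1.0e4, 3.0e3])  # wetting branch

def capillary_pressure(s, wetting):
    """Interpolate Pc from the branch selected by the current process direction."""
    table = PC_IMBIBITION if wetting else PC_DRAINAGE
    return float(np.interp(s, S_TAB, table))
```

A full implementation additionally tracks reversal points so that scanning curves between the two main branches can be constructed, which is where most of HYSTR's bookkeeping lies.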

  10. Revised uranium--plutonium cycle PWR and BWR models for the ORIGEN computer code

    Energy Technology Data Exchange (ETDEWEB)

    Croff, A. G.; Bjerke, M. A.; Morrison, G. W.; Petrie, L. M.


    Reactor physics calculations and literature searches have been conducted, leading to the creation of revised enriched-uranium and enriched-uranium/mixed-oxide-fueled PWR and BWR reactor models for the ORIGEN computer code. These ORIGEN reactor models are based on cross sections that have been taken directly from the reactor physics codes and eliminate the need to make adjustments in uncorrected cross sections in order to obtain correct depletion results. Revised values of the ORIGEN flux parameters THERM, RES, and FAST were calculated along with new parameters related to the activation of fuel-assembly structural materials not located in the active fuel zone. Recommended fuel and structural material masses and compositions are presented. A summary of the new ORIGEN reactor models is given.

  11. NRMC - A GPU code for N-Reverse Monte Carlo modeling of fluids in confined media (United States)

    Sánchez-Gil, Vicente; Noya, Eva G.; Lomba, Enrique


    NRMC is a parallel code for performing N-Reverse Monte Carlo modeling of fluids in confined media [V. Sánchez-Gil, E.G. Noya, E. Lomba, J. Chem. Phys. 140 (2014) 024504]. This method is an extension of the usual Reverse Monte Carlo method to obtain structural models of confined fluids compatible with experimental diffraction patterns, specifically designed to overcome the problem of slow diffusion that can appear under conditions of tight confinement. Most of the computational time in N-Reverse Monte Carlo modeling is spent in the evaluation of the structure factor for each trial configuration, a calculation that can be easily parallelized. Implementation of the structure factor evaluation in NVIDIA® CUDA so that the code can be run on GPUs leads to a speed up of up to two orders of magnitude.
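
    The pair sum dominating the cost can be illustrated with the Debye formula for an isotropic structure factor. This is our sketch, not the NRMC implementation: the O(N^2) loop over particle pairs is the hot spot that NRMC offloads to the GPU with CUDA, whereas here it is merely vectorized with NumPy.

```python
import numpy as np

# Illustrative Debye formula: S(q) = 1 + (2/N) * sum_{i<j} sin(q*r_ij)/(q*r_ij).
# The pair sum is the expensive part that would be parallelized on a GPU.

def structure_factor(positions, q_values):
    n = len(positions)
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    i, j = np.triu_indices(n, k=1)          # unique pairs i < j
    rij = dist[i, j]
    s = np.empty(len(q_values))
    for k, q in enumerate(q_values):
        # np.sinc(x) = sin(pi*x)/(pi*x), so sinc(q*r/pi) = sin(q*r)/(q*r)
        s[k] = 1.0 + (2.0 / n) * np.sum(np.sinc(q * rij / np.pi))
    return s
```

In a Reverse Monte Carlo loop this function would be called once per trial move, which is why evaluating it on the GPU yields the reported speed-up.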

  12. Modeling precipitation from concentrated solutions with the EQ3/6 chemical speciation codes

    Energy Technology Data Exchange (ETDEWEB)

    Brown, L.F.; Ebinger, M.H.


    One of the more important uncertainties in using chemical speciation codes to study the dissolution and precipitation of compounds is that the modeling results depend on the particular thermodynamic database being used. The authors' goal is to investigate the effects of different thermodynamic databases on modeling precipitation from concentrated solutions. In this paper they used the EQ3/6 codes and their supplied databases to model precipitation. One aspect of this goal is to compare predictions of precipitation from ideal solutions with similar predictions from nonideal solutions. The largest thermodynamic databases available for use by EQ3/6 assume that solutions behave ideally. However, two databases exist that allow modeling of nonideal solutions. These two databases are much less extensive than the ideal-solution data, so the authors investigated the comparability of modeling ideal and nonideal solutions. They defined four fundamental problems to test the EQ3/6 codes in concentrated solutions. Two problems precipitate Ca(OH){sub 2} from solutions concentrated in Ca{sup ++}. A third problem tests the precipitation of Ca(OH){sub 2} from high-ionic-strength solutions that are low in the concentrations of precipitating species (Ca{sup ++} in this case). The fourth problem evaporates the supernatant of the problem with low concentrations of precipitating species. The specific problems are discussed.
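
    The ideal-solution check underlying such precipitation calculations can be sketched briefly. This is a hedged illustration, not EQ3/6 code: the Ksp value is a nominal 25 °C literature figure rather than one from an EQ3/6 database, and activities are equated with molalities, which is exactly the ideal-solution assumption discussed above.

```python
import math

# Portlandite, Ca(OH)2 <-> Ca++ + 2 OH-, precipitates when the ion activity
# product exceeds the solubility product. Illustrative Ksp; ideal solution
# assumed (activity = molality), so no activity-coefficient model is applied.

KSP_PORTLANDITE = 5.5e-6   # nominal 25 C value, for illustration only

def saturation_index(m_ca, m_oh):
    """log10(IAP / Ksp); positive means supersaturated."""
    iap = m_ca * m_oh ** 2
    return math.log10(iap / KSP_PORTLANDITE)

def will_precipitate(m_ca, m_oh):
    return saturation_index(m_ca, m_oh) > 0.0
```

A nonideal database replaces the molalities above with activities computed from an activity-coefficient model (e.g. Pitzer), which is precisely where the two database families compared in the paper diverge.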

  13. Wind turbine control systems: Dynamic model development using system identification and the fast structural dynamics code

    Energy Technology Data Exchange (ETDEWEB)

    Stuart, J.G.; Wright, A.D.; Butterfield, C.P.


    Mitigating the effects of damaging wind turbine loads and responses extends the lifetime of the turbine and, consequently, reduces the associated Cost of Energy (COE). Active control of aerodynamic devices is one option for achieving wind turbine load mitigation. Generally speaking, control system design and analysis requires a reasonable dynamic model of the "plant" (i.e., the system being controlled). This paper extends the wind turbine aileron control research, previously conducted at the National Wind Technology Center (NWTC), by presenting a more detailed development of the wind turbine dynamic model. In prior research, active aileron control designs were implemented in an existing wind turbine structural dynamics code, FAST (Fatigue, Aerodynamics, Structures, and Turbulence). In this paper, the FAST code is used, in conjunction with system identification, to generate a wind turbine dynamic model for use in active aileron control system design. The FAST code is described and an overview of the system identification technique is presented. An aileron control case study is used to demonstrate this modeling technique. The results of the case study are then used to propose ideas for generalizing this technique for creating dynamic models for other wind turbine control applications.
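
    The system-identification step can be illustrated with the simplest case: fitting a first-order ARX model to input/output data by least squares. This is our sketch, not the NWTC procedure; in the paper's setting the input would be an aileron command and the output a load channel simulated by FAST, whereas here both are synthetic.

```python
import numpy as np

# Fit y[t] = a*y[t-1] + b*u[t-1] by ordinary least squares (first-order ARX).

def fit_arx(u, y):
    phi = np.column_stack([y[:-1], u[:-1]])      # regressor matrix
    theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
    return theta                                  # [a, b]

# Synthetic "plant" with known coefficients, then recover them from data.
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1]

a_hat, b_hat = fit_arx(u, y)
```

Real identification from FAST output would use higher model orders and noisy data, but the regressor-and-least-squares structure is the same.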

  14. A Mathematical Model and MATLAB Code for Muscle-Fluid-Structure Simulations. (United States)

    Battista, Nicholas A; Baird, Austin J; Miller, Laura A


    This article provides models and code for numerically simulating muscle-fluid-structure interactions (FSIs). This work was presented as part of the symposium on Leading Students and Faculty to Quantitative Biology through Active Learning at the society-wide meeting of the Society for Integrative and Comparative Biology in 2015. Muscle mechanics and simple mathematical models to describe the forces generated by muscular contractions are introduced in most biomechanics and physiology courses. Often, however, the models are derived for simplifying cases such as isometric or isotonic contractions. In this article, we present a simple model of the force generated through active contraction of muscles. The muscles' forces are then used to drive the motion of flexible structures immersed in a viscous fluid. An example of an elastic band immersed in a fluid is first presented to illustrate a fully-coupled FSI in the absence of any external driving forces. In the second example, we present a valveless tube with model muscles that drive the contraction of the tube. We provide a brief overview of the numerical method used to generate these results. We also include as Supplementary Material a MATLAB code to generate these results. The code was written for flexibility so as to be easily modified to many other biological applications for educational purposes.
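
    A minimal version of the active-contraction force model the article builds on can be sketched as follows. This is our illustration in Python, not the authors' MATLAB code: force is the maximal isometric force scaled by a time-varying activation and a Gaussian length-tension curve, with arbitrary parameter values.

```python
import math

# Simple active muscle force model: F = F_max * a(t) * f_L(l).
# All parameter values are illustrative, not from the article.

F_MAX = 1.0        # peak isometric force (arbitrary units)
L_OPT = 1.0        # optimal fiber length

def length_tension(l):
    """Gaussian length-tension relation, maximal at the optimal length."""
    return math.exp(-((l - L_OPT) / 0.45) ** 2)

def activation(t, period=1.0):
    """Periodic activation signal in [0, 1] driving the contraction."""
    return 0.5 * (1.0 + math.sin(2.0 * math.pi * t / period))

def muscle_force(l, t):
    return F_MAX * activation(t) * length_tension(l)
```

In the article's simulations, forces of this kind drive the deformation of immersed boundaries whose motion is then coupled to the surrounding viscous fluid.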

  15. Beyond the Business Model: Incentives for Organizations to Publish Software Source Code (United States)

    Lindman, Juho; Juutilainen, Juha-Pekka; Rossi, Matti

    The software stack opened under Open Source Software (OSS) licenses is growing rapidly. Commercial actors have released considerable amounts of previously proprietary source code. These actions raise the question of why companies choose a strategy based on giving away software assets. Research on the outbound OSS approach has tried to answer this question with the concept of the “OSS business model”. When studying the reasons for code release, we have observed that the business model concept is too generic to capture the many incentives organizations have. Conversely, in this paper we investigate empirically what the companies’ incentives are by means of an exploratory case study of three organizations in different stages of their code release. Our results indicate that the companies aim to promote standardization, obtain development resources, gain cost savings, improve the quality of software, increase the trustworthiness of software, or steer OSS communities. We conclude that future research on outbound OSS could benefit from focusing on the heterogeneous incentives for code release rather than on revenue models.

  16. A code reviewer assignment model incorporating the competence differences and participant preferences

    Directory of Open Access Journals (Sweden)

    Wang Yanqing


    Full Text Available A good assignment of code reviewers can effectively utilize intellectual resources, assure code quality and improve programmers’ skills in software development. However, little research on reviewer assignment for code review has been reported. In this study, a code reviewer assignment model is created based on participants’ preferences regarding review assignment. With a constraint on the smallest size of a review group, the model is optimized to maximize review outcomes and avoid the negative impact of a “mutual admiration society”. This study shows that reviewer assignment strategies incorporating either the reviewers’ preferences or the authors’ preferences yield much greater improvement than a random assignment. The strategy incorporating the authors’ preferences yields greater improvement than the one incorporating the reviewers’ preferences. However, when the reviewers’ and authors’ preference matrices are merged, the improvement becomes moderate. The study indicates that the majority of participants have a strong wish to work with the reviewers and authors of highest competence. If the preferences of both reviewers and authors are to be satisfied at the same time, the overall improvement in learning outcomes may not be the best.
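
    The core optimization, choosing an assignment that maximizes total preference, can be shown on a toy scale. This is our sketch, not the paper's model, which additionally constrains review-group sizes and merges author and reviewer preference matrices; here a brute-force search over one-to-one assignments suffices and self-review is forbidden.

```python
from itertools import permutations

# pref[a][r] is author a's preference for reviewer r. We enumerate all
# one-to-one assignments (perm[a] = reviewer of author a), skip self-review,
# and keep the assignment with the highest total preference.

def best_assignment(pref):
    n = len(pref)
    best, best_score = None, float("-inf")
    for perm in permutations(range(n)):
        if any(perm[a] == a for a in range(n)):   # no one reviews own code
            continue
        score = sum(pref[a][perm[a]] for a in range(n))
        if score > best_score:
            best, best_score = perm, score
    return best, best_score
```

For realistic class sizes the factorial search would be replaced by an assignment algorithm (e.g. the Hungarian method), but the objective being maximized is the same.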

  17. Dual coding with STDP in a spiking recurrent neural network model of the hippocampus.

    Directory of Open Access Journals (Sweden)

    Daniel Bush

    Full Text Available The firing rate of single neurons in the mammalian hippocampus has been demonstrated to encode for a range of spatial and non-spatial stimuli. It has also been demonstrated that the phase of firing, with respect to the theta oscillation that dominates the hippocampal EEG during stereotypical learning behaviour, correlates with an animal's spatial location. These findings have led to the hypothesis that the hippocampus operates using a dual (rate and temporal) coding system. To investigate the phenomenon of dual coding in the hippocampus, we examine a spiking recurrent network model with theta-coded neural dynamics and an STDP rule that mediates rate-coded Hebbian learning when pre- and post-synaptic firing is stochastic. We demonstrate that this plasticity rule can generate both symmetric and asymmetric connections between neurons that fire at concurrent or successive theta phase, respectively, and subsequently produce both pattern completion and sequence prediction from partial cues. This unifies previously disparate auto- and hetero-associative network models of hippocampal function and provides them with a firmer basis in modern neurobiology. Furthermore, the encoding and reactivation of activity in mutually exciting Hebbian cell assemblies demonstrated here is believed to represent a fundamental mechanism of cognitive processing in the brain.
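
    The exponential STDP window that such rules build on can be written in a few lines. This is a generic textbook form, not the paper's exact plasticity rule: the weight change is potentiating when the presynaptic spike precedes the postsynaptic one and depressing otherwise, decaying exponentially with the spike-time difference; amplitudes and time constant are illustrative.

```python
import math

# Pair-based STDP window: dw = A+ * exp(-dt/tau) for dt >= 0 (pre before post),
# dw = -A- * exp(dt/tau) for dt < 0 (post before pre). Parameters illustrative.

A_PLUS, A_MINUS = 0.01, 0.012
TAU = 20.0  # ms

def stdp_dw(t_pre, t_post):
    dt = t_post - t_pre
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU)     # potentiation
    return -A_MINUS * math.exp(dt / TAU)        # depression
```

Neurons firing at the same theta phase (dt near zero in both orders) develop symmetric connections under such a window, while neurons firing at successive phases develop asymmetric ones, which is the mechanism behind the pattern completion and sequence prediction described above.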

  18. Summary of Interfacial Heat Transfer Model and Correlations in SPACE Code

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Sung Won; Lee, Seung Wook; Kim, Kyung Du [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)


    The first stage of the development program for a nuclear reactor safety analysis code named SPACE, which will be used by utility bodies, was finished in April 2010. During the first stage, the main logic and conceptual structure were established successfully under the support of the Korea Ministry of Knowledge and Economy. The code has been designed to solve the multi-dimensional, 3-field, 2-phase equations. From the beginning of the second stage of development, KNF has moved to concentrate on methodology evaluation using the SPACE code. Thus, KAERI, KOPEC and KEPRI have remained as the major development organizations. The second stage focuses on assessing the physical models and correlations of the SPACE code using well-known SET problems. For a successful SET assessment procedure, a problem selection process has been performed under the lead of KEPRI, which has listed suitable SET problems according to the individual assessment purpose. For the interfacial area concentration, the models and correlations are continuously being modified and verified.

  19. Motion-adaptive model-assisted compatible coding with spatiotemporal scalability (United States)

    Lee, JaeBeom; Eleftheriadis, Alexandros


    We introduce the concept of motion-adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye, in terms of space and time, in moving images, taking object motion into consideration. The previous STMAC approach was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas, including the eyes and lips, need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.


    Institute of Scientific and Technical Information of China (English)

    樊荣; 段会川


    2-D bar codes have become one of the most popular techniques in recent years, among which QR Code is one of the most widely applied in practice. However, present applications mainly involve mid or lower version codes, which can only accommodate a moderate amount of information, and applications of higher version QR Codes, i.e. version 20 or above, are seldom reported. To improve teaching in programming languages and algorithms, this paper proposes employing higher version QR Code images, printed on handouts or textbooks, to represent program and algorithm code that may require a large number of characters. When learning, students can photograph the QR Code images with smart phones, which can then decode the program or algorithm code and display it in colored, highlighted and structured form by means of special code editors. Moreover, the code can also be run in a browser to show the running process as well as the results dynamically. This new model brings students plenty of convenience compared with the traditional textbook model, and the dynamic running also greatly increases learning efficiency and augments learning effects.

  1. Development of a plate-type fuel model for the neutronics and thermal-hydraulics coupled code - SIMMER-III - and its application to the analyses of SPERT

    Energy Technology Data Exchange (ETDEWEB)

    Liu Ping, E-mail: [Karlsruhe Institute of Technology (KIT), Institute for Nuclear and Energy Technologies (IKET), P.O. Box 3640, D-76021 Karlsruhe (Germany); Gabrielli, Fabrizio; Rineiski, Andrei; Maschek, Werner [Karlsruhe Institute of Technology (KIT), Institute for Nuclear and Energy Technologies (IKET), P.O. Box 3640, D-76021 Karlsruhe (Germany); Bruna, Giovanni B. [Reactor Safety Division, French Institute for Radioprotection and Nuclear Safety (IRSN), B.P. 17, 92262 Fontenay aux Roses Cedex (France)


    SIMMER-III, a coupled neutronics and thermal-hydraulics code, was originally developed for core disruptive accident analyses of liquid-metal-cooled fast reactors. Due to its versatility in investigating scenarios of core disruption, the code has also been extended to the simulation of transients in thermal neutron systems, such as the criticality accident at the JCO fuel fabrication plant, and, in recent years, applied to water-moderated thermal research reactor transient studies as well. Originally, SIMMER considered only cylindrical fuel pin geometry. Therefore, implementation of a plate-type fuel model in the SIMMER-III code is important for the analysis of research reactors adopting this kind of fuel. Furthermore, validation of the SIMMER-III modeling of light-water-cooled thermal reactor reactivity-initiated transients is necessary. This paper presents the work carried out on the SIMMER-III code in the framework of a KIT and IRSN joint activity aimed at providing the code with experimental reactor transient study capabilities. The first step of the work was the implementation of a new fuel model in SIMMER-III. Verification of this new model indicates that it simulates the steady-state temperature profile in the fuel well. Secondly, the three cases with the shortest reactor periods, 5.0 ms, 4.6 ms and 3.2 ms, among the Special Power Excursion Reactor Tests (SPERT) performed in the SPERT I D-12/25 facility have been simulated. Comparison between the SIMMER-III simulation and the reported SPERT results indicates that, although there is room for further improvement in the modeling of negative feedback mechanisms, with this plate-type fuel model SIMMER-III can represent the transient phenomena of a reactivity-driven power excursion well.
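
    The steady-state verification mentioned above has a simple analytical counterpart for plate geometry: 1-D conduction across a plate of half-thickness a with uniform volumetric heat rate q''' and conductivity k gives T(x) = Ts + q'''(a^2 - x^2)/(2k). The sketch below uses this textbook result with illustrative values, not SPERT data or SIMMER-III code.

```python
# Analytic steady-state temperature profile across a heat-generating plate.
# Symmetric cooling on both faces is assumed; x = 0 is the midplane.

def plate_temp(x, q_vol, k, a, t_surface):
    return t_surface + q_vol * (a ** 2 - x ** 2) / (2.0 * k)

# Peak-to-surface temperature rise at the midplane (illustrative values).
q_vol = 5.0e8      # W/m^3, volumetric heat rate
k = 170.0          # W/(m K), of the order of an aluminum-clad plate
a = 0.0005         # m, 0.5 mm half-thickness
dT = plate_temp(0.0, q_vol, k, a, 300.0) - 300.0
```

A numerical plate-fuel model can be checked against this parabola before being exercised on transients, which is the kind of steady-state verification the paper reports.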

  2. A Causal Model of the Quality Activities Process: Exploring the Links between Quality Capabilities, Competitiveness and Organizational Performance

    Directory of Open Access Journals (Sweden)

    Cheng-tao Yu


    Full Text Available The purpose of this study is to examine the relationship between Total Quality Management (TQM) practices, quality capabilities, competitiveness and firm performance. In this study, TQM has been conceptualized as soft and hard practices. An empirical analysis based upon an extensive validation process was applied to refine the construct scales. The sample consists of 423 valid responses, analyzed with Structural Equation Modeling (SEM). Results derived from this study show that soft TQM practices have a direct, positive and significant relationship with quality capabilities, competitive strategies and organizational performance. In addition, an indirect, positive and significant relationship with organizational performance through quality capabilities and competitive strategies was observed. The findings show that hypotheses H3b, H4b and H6b are not supported, while the rest are in line with the model inference. In particular, the results indicate that soft TQM practices are the most important resource, with strong effects on organizational performance. Results derived from this study might help managers to implement TQM practices in order to effectively allocate resources and improve financial performance. Thus, managers should consider that improvement in soft TQM would support the successful implementation of quality capabilities, competitive advantage and organizational performance. Efforts relating to the social aspects of TQM activities are particularly key to improving performance.

  3. Code package MAG - a user tool for numerical modeling of 1D shock wave and dynamic processes in solids (United States)

    Rudenko, Vladimir; Shaburov, Michail


    Design and theoretical/numerical preparation of shock wave experiments require, as a rule, a large number of calculations. Usually, preparation of a problem for numerical solution, calculation, and processing of the results is done by programmers and mathematicians. The appearance of powerful personal computers and interface tools allows the development of user-oriented programs that a researcher can handle without the help of a mathematician, even without a special programming background. The code package MAG provides numerical solution of the 1D system of equations of hydrodynamics, elastoplasticity, heat conduction and magnetohydrodynamics. A number of modern models of elastoplasticity and kinetics of power materials are implemented in it. The package includes libraries of equations of state and of thermal-physical and electromagnetic properties of substances. The code package is an interactive visual medium providing a user with the following capabilities: input and editing of initial data for a problem; calculation of separate problems, as well as series of problems with automatic variation of parameters; dynamic viewing of the modeled phenomena using visualization tools; control of the calculation process (terminating the calculation, changing parameters, express-processing of the results, continuing the calculation, etc.); processing of the numerical results into final plots and tables; recording and storage of numerical results in databases, including formats supported by Microsoft Word, Access and Excel; dynamic visual comparison of the results of several simultaneous calculations; and automatic numerical optimization of a selected experimental scheme. The package is easy to use and allows prompt input and convenient information processing. The validity of numerical results obtained with the package MAG has been proved by numerous hydrodynamic experiments and comparisons with numerical results from similar programs.
The package was

  4. Application of RS Codes in Decoding QR Code

    Institute of Scientific and Technical Information of China (English)

    Zhu Suxia(朱素霞); Ji Zhenzhou; Cao Zhiyan


    The QR Code is a 2-dimensional matrix code with high error correction capability. It employs RS codes to generate error correction codewords in encoding and to recover from errors and damage in decoding. This paper presents several of the QR Code's virtues, analyzes the RS decoding algorithm and gives a software flow chart for decoding the QR Code with the RS decoding algorithm.
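
    The first step of any RS decoder, computing syndromes of the received codeword, can be sketched as follows. This is a generic GF(2^8) implementation using the QR Code field polynomial 0x11d, our illustration rather than the authors' software: encoding appends parity bytes by polynomial division, and the syndromes are all zero exactly when the received codeword is error-free.

```python
# Build GF(2^8) exp/log tables for the QR Code field polynomial 0x11d.
GF_EXP, GF_LOG = [0] * 512, [0] * 256
_x = 1
for _i in range(255):
    GF_EXP[_i], GF_LOG[_x] = _x, _i
    _x <<= 1
    if _x & 0x100:
        _x ^= 0x11D
for _i in range(255, 512):
    GF_EXP[_i] = GF_EXP[_i - 255]

def gf_mul(a, b):
    return 0 if 0 in (a, b) else GF_EXP[GF_LOG[a] + GF_LOG[b]]

def gf_poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] ^= gf_mul(pi, qj)
    return r

def rs_generator_poly(nsym):
    g = [1]
    for i in range(nsym):                # roots alpha^0 .. alpha^(nsym-1)
        g = gf_poly_mul(g, [1, GF_EXP[i]])
    return g

def rs_encode(msg, nsym):
    gen = rs_generator_poly(nsym)
    rem = list(msg) + [0] * nsym         # synthetic division by gen
    for i in range(len(msg)):
        coef = rem[i]
        if coef:
            for j in range(1, len(gen)):
                rem[i + j] ^= gf_mul(gen[j], coef)
    return list(msg) + rem[len(msg):]

def rs_syndromes(codeword, nsym):
    def poly_eval(p, x):                 # Horner evaluation over GF(2^8)
        y = p[0]
        for c in p[1:]:
            y = gf_mul(y, x) ^ c
        return y
    return [poly_eval(codeword, GF_EXP[i]) for i in range(nsym)]
```

Nonzero syndromes feed the remaining decoder stages (error locator, Chien search, Forney algorithm), which the paper's flow chart covers.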

  5. Modeling compositional dynamics based on GC and purine contents of protein-coding sequences

    KAUST Repository

    Zhang, Zhang


    Background: Understanding the compositional dynamics of genomes and their coding sequences is of great significance in gaining clues into molecular evolution, and a large number of publicly available genome sequences have allowed us to quantitatively predict deviations of empirical data from their theoretical counterparts. However, the quantification of theoretical compositional variations for a wide diversity of genomes remains a major challenge. Results: To model the compositional dynamics of protein-coding sequences, we propose two simple models that take into account both mutation and selection effects, which act differently at the three codon positions, and use both GC and purine contents as compositional parameters. The two models concern the theoretical composition of nucleotides, codons, and amino acids, with no prerequisite of homologous sequences or their alignments. We evaluated the two models by quantifying theoretical compositions of a large collection of protein-coding sequences (including 46 of Archaea, 686 of Bacteria, and 826 of Eukarya), yielding consistent theoretical compositions across all the collected sequences. Conclusions: We show that the compositions of nucleotides, codons, and amino acids are largely determined by both GC and purine contents and suggest that deviations of the observed from the expected compositions may reflect compositional signatures that arise from a complex interplay between mutation and selection via DNA replication and repair mechanisms. Reviewers: This article was reviewed by Zhaolei Zhang (nominated by Mark Gerstein), Guruprasad Ananda (nominated by Kateryna Makova), and Daniel Haft. © 2010 Zhang and Yu; licensee BioMed Central Ltd.
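
    The two compositional parameters the models use can be computed directly from a coding sequence. This short sketch is our illustration of the inputs, not the authors' models themselves: GC and purine (A+G) content are evaluated separately at each of the three codon positions of an in-frame CDS.

```python
# Per-codon-position GC and purine content of an in-frame coding sequence.

def position_contents(cds):
    """Return [(gc, purine), ...] for codon positions 1-3."""
    assert len(cds) % 3 == 0, "sequence must be a whole number of codons"
    out = []
    for pos in range(3):
        bases = cds[pos::3].upper()
        gc = sum(b in "GC" for b in bases) / len(bases)
        purine = sum(b in "AG" for b in bases) / len(bases)
        out.append((gc, purine))
    return out
```

Because mutation and selection act differently at the three positions, the models treat these six numbers (GC and purine content at each position) as the parameters from which theoretical nucleotide, codon, and amino-acid compositions are derived.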

  6. Chiefly Symmetric: Results on the Scalability of Probabilistic Model Checking for Operating-System Code

    Directory of Open Access Journals (Sweden)

    Marcus Völp


    Full Text Available Reliability in terms of functional properties from the safety-liveness spectrum is an indispensable requirement of low-level operating-system (OS) code. However, with ever more complex and thus less predictable hardware, quantitative and probabilistic guarantees become more and more important. Probabilistic model checking is one technique to automatically obtain these guarantees. First experiences with the automated quantitative analysis of low-level operating-system code confirm the expectation that the naive probabilistic model checking approach rapidly reaches its limits as the number of processes increases. This paper reports on our work in progress to tackle the state explosion problem for low-level OS code caused by the exponential blow-up of the model size when the number of processes grows. We studied the symmetry reduction approach and carried out our experiments with a simple test-and-test-and-set lock case study as a representative example for a wide range of protocols with natural inter-process dependencies and long-run properties. We quickly see a state-space explosion for scenarios where inter-process dependencies are insignificant. However, once inter-process dependencies dominate the picture, models with a hundred or more processes can be constructed and analysed.

  7. Dynamic capabilities

    DEFF Research Database (Denmark)

    Grünbaum, Niels Nolsøe; Stenger, Marianne


    The consequences of dynamic capabilities (i.e. innovation performance and profitability) are an under-researched area in the growing body of literature on dynamic capabilities and innovation management. This study aims to examine the relationship between dynamic capabilities, innovation performance and profitability of small and medium-sized manufacturing enterprises operating in volatile environments. A multi-case study design was adopted as the research strategy. The findings reveal a positive relationship between dynamic capabilities and innovation performance in the case companies, as we would expect. The sphere was, however, dominated by a lack of systematism, assessment, monitoring, marketing speculations and feasibility calculation, and dictated by asymmetric supplier-customer relationships and negotiation power, leading, among other possible factors, to meager profitability.

  8. Transfer of ocean modelling capability to two scientists of the National Institute of Oceanography of India

    NARCIS (Netherlands)

    Holthuijsen, L.H.; Booij, N.


    Two scientists from the National Institute of Oceanography of India have been trained to use the storm surge model DUCHESS and the wave model DOLPHIN. The results are published separately in two reports. This is the first of them.

  10. Comparison of different methods used in integral codes to model coagulation of aerosols (United States)

    Beketov, A. I.; Sorokin, A. A.; Alipchenkov, V. M.; Mosunova, N. A.


    The methods for calculating coagulation of particles in the carrying phase that are used in the integral codes SOCRAT, ASTEC, and MELCOR, as well as the Hounslow and Jacobson methods used to model aerosol processes in the chemical industry and in atmospheric investigations are compared on test problems and against experimental results in terms of their effectiveness and accuracy. It is shown that all methods are characterized by a significant error in modeling the distribution function for micrometer particles if calculations are performed using rather "coarse" spectra of particle sizes, namely, when the ratio of the volumes of particles from neighboring fractions is equal to or greater than two. With reference to the problems considered, the Hounslow method and the method applied in the aerosol module used in the ASTEC code are the most efficient ones for carrying out calculations.
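
    The equation all of the compared methods discretize can be illustrated with the simplest textbook scheme: an explicit Euler step of the discrete Smoluchowski coagulation equation with a constant kernel. This is our sketch, far simpler than the sectional methods in the paper, but it exhibits the property every such method must respect: coagulation conserves total particle mass while reducing particle number.

```python
import numpy as np

# Discrete Smoluchowski step: dn_k/dt = (1/2) * sum_{i+j=k} K n_i n_j
#                                        - n_k * K * sum_j n_j
# n[k] is the number density of particles made of (k+1) monomers.

def coagulation_step(n, kernel, dt):
    kmax = len(n)
    gain = np.zeros(kmax)
    for i in range(kmax):
        for j in range(kmax):
            k = i + j + 1                # (i+1)-mer + (j+1)-mer -> (k+1)-mer
            if k < kmax:
                gain[k] += 0.5 * kernel * n[i] * n[j]
    loss = kernel * n * n.sum()
    return n + dt * (gain - loss)
```

The sectional methods compared in the paper use geometric size bins (neighboring fractions differing by a factor of two or more in volume), which is exactly where the distribution-function errors discussed above arise.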

  11. Bayesian Regularization in a Neural Network Model to Estimate Lines of Code Using Function Points

    Directory of Open Access Journals (Sweden)

    K. K. Aggarwal


    Full Text Available It is a well-known fact that at the beginning of any project, the software industry needs to know how much it will cost to develop and how much time will be required. This paper examines the potential of using a neural network model for estimating the lines of code, once the functional requirements are known. Using the International Software Benchmarking Standards Group (ISBSG) repository data (Release 9) for the experiment, this paper examines the performance of a back-propagation feed-forward neural network in estimating the Source Lines of Code. Multiple training algorithms are used in the experiments. Results demonstrate that the neural network models trained using Bayesian Regularization provide the best results and are suitable for this purpose.
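
    Bayesian regularization penalizes the sum of squared network weights during training. The same idea has a closed form for a linear model, namely ridge regression, which the sketch below uses as a stand-in for the paper's neural network; the data are synthetic, not from the ISBSG repository.

```python
import numpy as np

# Ridge regression: w = (X^T X + lam*I)^-1 X^T y, the L2-penalized analogue
# of Bayesian regularization, fit here to a toy "function points -> LOC" map.

def ridge_fit(x, y, lam=1.0):
    X = np.column_stack([np.ones_like(x), x])    # intercept + slope
    A = X.T @ X + lam * np.eye(2)
    return np.linalg.solve(A, X.T @ y)

# Synthetic data around LOC = 50 * FP (illustrative productivity ratio).
fp = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
loc = 50.0 * fp
b0, b1 = ridge_fit(fp, loc, lam=1e-6)
```

In the full Bayesian-regularization scheme (MacKay's framework, as used by the paper's training algorithm), the penalty strength is additionally inferred from the data rather than fixed by hand as `lam` is here.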

  12. A model-based, multichannel, real-time capable sawtooth crash detector

    NARCIS (Netherlands)

    van den Brand, H.; de Baar, M. R.; van Berkel, M.; Blanken, T. C.; Felici, F.; Westerhof, E.; Willensdorfer, M.; ASDEX Upgrade team,; EUROfusion MST1 Team,


    Control of the time between sawtooth crashes, necessary for ITER and DEMO, requires real-time detection of the moment of the sawtooth crash. In this paper, estimation of sawtooth crash times is demonstrated using the model-based interacting multiple model (IMM) estimator, based on simplified models

  13. Development of thermal hydraulic models for the reliable regulatory auditing code

    Energy Technology Data Exchange (ETDEWEB)

    Chung, B. D.; Song, C. H.; Lee, Y. J.; Kwon, T. S. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)


    The objective of this project is to develop thermal hydraulic models for use in improving the reliability of the regulatory auditing codes. The current year falls under the first step of the three-year project, and the main research focused on identifying candidate thermal hydraulic models for improvement and on prototypical model development. During the current year, the verification calculations submitted for the APR 1400 design certification were reviewed, the experimental data from the MIDAS DVI experiment facility at KAERI were analyzed and evaluated, candidate thermal hydraulic models for improvement were identified, prototypical models for the improved thermal hydraulic models were developed, items for experiments in connection with the model development were identified, and a preliminary design of the experiment was carried out.

  14. Dysregulation of the long non-coding RNA transcriptome in a Rett syndrome mouse model. (United States)

    Petazzi, Paolo; Sandoval, Juan; Szczesna, Karolina; Jorge, Olga C; Roa, Laura; Sayols, Sergi; Gomez, Antonio; Huertas, Dori; Esteller, Manel


    Mecp2 is a transcriptional repressor protein that is mutated in Rett syndrome, a neurodevelopmental disorder that is the second most common cause of mental retardation in women. It has been shown that the loss of the Mecp2 protein in Rett syndrome cells alters the transcriptional silencing of coding genes and microRNAs. Herein, we have studied the impact of Mecp2 impairment in a Rett syndrome mouse model on the global transcriptional patterns of long non-coding RNAs (lncRNAs). Using a microarray platform that assesses 41,232 unique lncRNA transcripts, we have identified the aberrant lncRNA transcriptome that is present in the brain of Rett syndrome mice. The study of the most relevant lncRNAs altered in the assay highlighted the upregulation of the AK081227 and AK087060 transcripts in Mecp2-null mice brains. Chromatin immunoprecipitation demonstrated the Mecp2 occupancy in the 5'-end genomic loci of the described lncRNAs and its absence in Rett syndrome mice. Most importantly, we were able to show that the overexpression of AK081227 mediated by the Mecp2 loss was associated with the downregulation of its host coding protein gene, the gamma-aminobutyric acid receptor subunit Rho 2 (Gabrr2). Overall, our findings indicate that the transcriptional dysregulation of lncRNAs upon Mecp2 loss contributes to the neurological phenotype of Rett syndrome and highlights the complex interaction between ncRNAs and coding-RNAs.

  15. Full SED fitting with the KOSMA-τ PDR code - I. Dust modelling

    CERN Document Server

    Röllig, M; Ossenkopf, V; Glück, C


    We revised the treatment of interstellar dust in the KOSMA-τ PDR model code to achieve a consistent description of the dust-related physics in the code. The detailed knowledge of the dust properties is then used to compute the dust continuum emission together with the line emission of chemical species. We coupled the KOSMA-τ PDR code with the MCDRT (multi-component dust radiative transfer) code to solve the frequency-dependent radiative transfer equations and the thermal balance equation in a dusty clump under the assumption of spherical symmetry, computing the dust temperatures in thermal equilibrium and neglecting non-equilibrium effects. We updated the calculation of the photoelectric heating and extended the parametrization range for the photoelectric heating toward high densities and UV fields. We revised the computation of the H2 formation on grain surfaces to include the Eley-Rideal effect, thus allowing for high-temperature H2 formation. We demonstrate how the different optical propert...

  16. Efficient Work Team Scheduling: Using Psychological Models of Knowledge Retention to Improve Code Writing Efficiency

    Directory of Open Access Journals (Sweden)

    Michael J. Pelosi


    Development teams and programmers must retain critical information about their work during work intervals and gaps in order to improve future performance when work resumes. Despite time lapses, project managers want to maximize coding efficiency and effectiveness. By developing a mathematically justified, practically useful, and computationally tractable quantitative and cognitive model of learning and memory retention, this study establishes calculations designed to maximize scheduling payoff and optimize developer efficiency and effectiveness.
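
    The kind of retention model the abstract describes can be sketched with a simple exponential forgetting curve; the function names and the `stability` parameter below are illustrative assumptions, not the paper's actual formulation:

```python
import math

def retention(gap_days: float, stability: float = 5.0) -> float:
    """Fraction of project context retained after a work gap, modelled
    as an exponential forgetting curve; `stability` is an assumed
    per-developer decay constant in days."""
    return math.exp(-gap_days / stability)

def effective_velocity(base_velocity: float, gap_days: float,
                       stability: float = 5.0) -> float:
    """Expected output when work resumes: the retained fraction of
    context scales the developer's base velocity, letting a scheduler
    weigh short frequent gaps against long ones."""
    return base_velocity * retention(gap_days, stability)
```

    Under this toy model, scheduling payoff comes from keeping gaps short relative to each developer's retention constant.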

  17. PREMOR: a point reactor exposure model computer code for survey analysis of power plant performance

    Energy Technology Data Exchange (ETDEWEB)

    Vondy, D.R.


    The PREMOR computer code was written to exploit a simple, two-group point nuclear reactor power plant model for survey analysis. Up to thirteen actinides, fourteen fission products, and one lumped absorber nuclide density are followed over a reactor history. Successive feed batches are accounted for, with provision for one to twenty resident batches. The effect of exposure of each of the batches to the same neutron flux is determined.
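
    The building block such point-model codes chain together, exposure of a single nuclide to a constant flux, can be sketched as follows (a hypothetical illustration, not PREMOR's actual routine):

```python
import math

def deplete(n0: float, sigma_a_barns: float, flux_n_cm2_s: float,
            days: float) -> float:
    """Single-nuclide depletion under a constant neutron flux:
    N(t) = N0 * exp(-sigma_a * phi * t). A survey code follows many
    such coupled equations for actinides, fission products, and
    lumped absorbers, with production terms linking them."""
    sigma_cm2 = sigma_a_barns * 1.0e-24   # barn -> cm^2
    t_s = days * 86400.0
    return n0 * math.exp(-sigma_cm2 * flux_n_cm2_s * t_s)
```

    The full model adds production from capture and fission in parent nuclides, turning the single exponential into a coupled linear system.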

  18. Modeling of Vector Quantization Image Coding in an Ant Colony System

    Institute of Scientific and Technical Information of China (English)

    LI Xia; LUO Xuehui; ZHANG Jihong


    The ant colony algorithm is a stochastic search optimization algorithm that has emerged in recent years. In this paper, vector quantization image coding is modeled as a stochastic optimization problem in an ant colony system (ACS). An appropriately adapted ant colony algorithm is proposed for vector quantization codebook design. Experimental results show that the ACS-based algorithm can produce a better codebook, with an improvement in peak signal-to-noise ratio (PSNR) exceeding 1 dB compared with the conventional LBG algorithm.
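
    The conventional LBG (generalized Lloyd) baseline that the ACS-based design is benchmarked against can be sketched as follows; this is a toy implementation on random data, not the paper's code:

```python
import random

def lbg_codebook(vectors, size, iters=20):
    """LBG codebook design: repeatedly split every codeword with a small
    perturbation, then alternate nearest-neighbour assignment and
    centroid update until the target codebook size is reached."""
    dim = len(vectors[0])
    mean = [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]
    codebook = [mean]
    while len(codebook) < size:
        # split step: perturb each codeword in two directions
        codebook = ([[x + 1e-3 for x in c] for c in codebook]
                    + [[x - 1e-3 for x in c] for c in codebook])
        for _ in range(iters):
            # assignment step: each vector joins its nearest codeword
            clusters = [[] for _ in codebook]
            for v in vectors:
                k = min(range(len(codebook)),
                        key=lambda j: sum((a - b) ** 2
                                          for a, b in zip(v, codebook[j])))
                clusters[k].append(v)
            # update step: move each codeword to its cluster centroid
            for k, members in enumerate(clusters):
                if members:
                    codebook[k] = [sum(m[i] for m in members) / len(members)
                                   for i in range(dim)]
    return codebook

random.seed(0)
blocks = [[random.gauss(0.0, 1.0) for _ in range(4)] for _ in range(200)]
cb = lbg_codebook(blocks, 8)
```

    LBG converges only to a local optimum of the distortion, which is the weakness a stochastic search such as ACS aims to overcome.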

  19. Modeling of a stair-climbing wheelchair mechanism with high single-step capability. (United States)

    Lawn, Murray J; Ishimatsu, Takakazu


    In the field of providing mobility for the elderly and disabled, the aspect of dealing with stairs remains largely unresolved. This paper focuses on presenting the development of a stair-climbing wheelchair mechanism with high single-step capability. The mechanism is based on front and rear wheel clusters connected to the base (chair) via powered linkages so as to permit both autonomous stair ascent and descent in the forward direction, and high single-step functionality, such as for direct entry to and from a van. Primary considerations were inherent stability, provision of a mechanism that is physically no larger than a standard powered wheelchair, aesthetics, and being based on readily available low-cost components.

  20. Neural Codes: Firing Rates and beyond (United States)

    Gerstner, Wulfram; Kreiter, Andreas K.; Markram, Henry; Herz, Andreas V. M.


    Computational neuroscience has contributed significantly to our understanding of higher brain function by combining experimental neurobiology, psychophysics, modeling, and mathematical analysis. This article reviews recent advances in a key area: neural coding and information processing. It is shown that synapses are capable of supporting computations based on highly structured temporal codes. Such codes could provide a substrate for unambiguous representations of complex stimuli and be used to solve difficult cognitive tasks, such as the binding problem. Unsupervised learning rules could generate the circuitry required for precise temporal codes. Together, these results indicate that neural systems perform a rich repertoire of computations based on action potential timing.
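
    The rate-coding baseline that the article moves beyond can be illustrated with a toy sketch (not from the article): spikes are generated as a Bernoulli approximation of a Poisson process, so only the spike count carries the stimulus, discarding the precise timing that temporal codes exploit.

```python
import random

def poisson_spike_train(rate_hz, duration_s, dt=1e-3, seed=0):
    """Rate code: each time bin of width dt spikes independently with
    probability rate*dt, approximating a homogeneous Poisson process;
    the information is in the count, not the timing."""
    rng = random.Random(seed)
    return [rng.random() < rate_hz * dt
            for _ in range(int(duration_s / dt))]

spikes = poisson_spike_train(50.0, 2.0)
decoded_rate = sum(spikes) / 2.0   # decode by counting spikes per second
```

    Temporal codes, by contrast, assign meaning to the placement of individual spikes, which a count-based decoder like the one above cannot recover.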