WorldWideScience

Sample records for benchmark simulation model

  1. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J.; Batstone, D. J.; et al.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights … temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing…

  2. Benchmark Simulation Model No 2 in Matlab-Simulink

    DEFF Research Database (Denmark)

    Vrecko, Darko; Gernaey, Krist; Rosen, Christian; et al.

    2006-01-01

    In this paper, implementation of the Benchmark Simulation Model No 2 (BSM2) within Matlab-Simulink is presented. The BSM2 is developed for plant-wide WWTP control strategy evaluation on a long-term basis. It consists of a pre-treatment process, an activated sludge process and sludge treatment...

  3. Benchmarking computational fluid dynamics models for lava flow simulation

    Science.gov (United States)

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi

    2016-04-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, and COMSOL. Using the new benchmark scenarios defined in Cordonnier et al. (Geol Soc SP, 2015) as a guide, we model viscous, cooling, and solidifying flows over horizontal and sloping surfaces, topographic obstacles, and digital elevation models of natural topography. We compare model results to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. As natural test cases, we apply these models to reconstruct past lava flows in Hawai'i and Saudi Arabia using parameters assembled from morphology, textural analysis, and eruption observations. Our study highlights the strengths and weaknesses of each code, including accuracy and computational costs, and provides insights regarding code selection.

  4. Benchmark Simulation Model No 2 – finalisation of plant layout and default control strategy

    DEFF Research Database (Denmark)

    Nopens, I.; Benedetti, L.; Jeppsson, U.; et al.

    2010-01-01

    The COST/IWA Benchmark Simulation Model No 1 (BSM1) has been available for almost a decade. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the research work related to the benchmark simulation models has resulted in more than 300 publications worldwide demonstrates the interest in and need of such tools within the research community. … A prolonged evaluation period allows for long-term control strategies to be assessed and enables the use of control handles that cannot be evaluated in a realistic fashion in the one-week BSM1 evaluation period. In this paper, the finalised plant layout is summarised and, as was done for BSM1, a default control strategy is proposed. A demonstration of how BSM2 can be used to evaluate control strategies is also given.

  5. Benchmark Simulation Model No 2: finalisation of plant layout and default control strategy.

    Science.gov (United States)

    Nopens, I; Benedetti, L; Jeppsson, U; Pons, M-N; Alex, J; Copp, J B; Gernaey, K V; Rosen, C; Steyer, J-P; Vanrolleghem, P A

    2010-01-01

    The COST/IWA Benchmark Simulation Model No 1 (BSM1) has been available for almost a decade. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the research work related to the benchmark simulation models has resulted in more than 300 publications worldwide demonstrates the interest in and need of such tools within the research community. Recent efforts within the IWA Task Group on "Benchmarking of control strategies for WWTPs" have focused on an extension of the benchmark simulation model. This extension aims at facilitating control strategy development and performance evaluation at a plant-wide level and, consequently, includes both pretreatment of wastewater as well as the processes describing sludge treatment. The motivation for the extension is the increasing interest and need to operate and control wastewater treatment systems not only at an individual process level but also on a plant-wide basis. To facilitate the changes, the evaluation period has been extended to one year. A prolonged evaluation period allows for long-term control strategies to be assessed and enables the use of control handles that cannot be evaluated in a realistic fashion in the one week BSM1 evaluation period. In this paper, the finalised plant layout is summarised and, as was done for BSM1, a default control strategy is proposed. A demonstration of how BSM2 can be used to evaluate control strategies is also given. PMID:21045320
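
    As context for how such control strategy evaluations are scored, the BSM family aggregates effluent pollutant loads into a flow-weighted effluent quality index (EQI). The Python sketch below illustrates the idea; the weighting factors are values commonly quoted for BSM1/BSM2 and, like the function and variable names, should be treated as illustrative assumptions rather than the benchmark's reference implementation.

      import numpy as np

      # Weighting factors commonly quoted for the BSM effluent quality index;
      # treat the exact values as assumptions to be checked against the report.
      BETA = {"TSS": 2.0, "COD": 1.0, "BOD5": 2.0, "TKN": 30.0, "NOX": 10.0}

      def effluent_quality_index(t, q, conc):
          """Flow-weighted effluent quality index (kg pollution units/d).

          t    : time stamps (d) over the evaluation period
          q    : effluent flow rate at each time stamp (m3/d)
          conc : dict mapping constituent name -> concentration series (g/m3)
          """
          t, q = np.asarray(t), np.asarray(q)
          weighted = sum(BETA[k] * np.asarray(conc[k]) for k in BETA) * q  # g/d
          # trapezoidal integration over time, averaged per day; g -> kg
          integral = np.sum(0.5 * (weighted[1:] + weighted[:-1]) * np.diff(t))
          return integral / (1000.0 * (t[-1] - t[0]))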

  6. Benchmarking of dynamic simulation predictions in two software platforms using an upper limb musculoskeletal model.

    Science.gov (United States)

    Saul, Katherine R; Hu, Xiao; Goehler, Craig M; Vidt, Meghan E; Daly, Melissa; Velisar, Anca; Murray, Wendy M

    2015-01-01

    Several open-source and commercially available software platforms are widely used to develop dynamic simulations of movement. While computational approaches are conceptually similar across platforms, technical differences in implementation may influence output. We present a new upper limb dynamic model as a tool to evaluate potential differences in predictive behavior between platforms. We evaluated to what extent differences in technical implementations in popular simulation software environments result in differences in kinematic predictions for single and multijoint movements, using EMG- and optimization-based approaches for deriving control signals. We illustrate the benchmarking comparison using the SIMM-Dynamics Pipeline-SD/Fast and OpenSim platforms. The most substantial divergence results from differences in muscle models and actuator paths. This model is a valuable resource and is available for download by other researchers. The model, data, and simulation results presented here can be used by future researchers to benchmark other software platforms and software upgrades for these two platforms. PMID:24995410

  7. BSM-MBR: A Benchmark Simulation Model to Compare Control and Operational Strategies for Membrane Bioreactors

    OpenAIRE

    Maere, Thomas; Verrecht, Bart; Moerenhout, Stefanie; Judd, Simon J.; Nopens, Ingmar

    2011-01-01

    A benchmark simulation model for membrane bioreactors (BSM-MBR) was developed to evaluate operational and control strategies in terms of effluent quality and operational costs. The configuration of the existing BSM1 for conventional wastewater treatment plants was adapted using reactor volumes, pumped sludge flows and membrane filtration for the water-sludge separation. The BSM1 performance criteria were extended for an MBR taking into account additional pumping requirements for permeate production…

  8. Quo Vadis Benchmark Simulation Models? 8th IWA Symposium on Systems Analysis and Integrated Assessment

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J.; Batstone, D. J.; et al.

    2011-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for WWTPs is coming towards an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights … process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system…

  9. Simulation of the multiple-fracture model. Phase 1, benchmark test 2 of the DECOVALEX project

    International Nuclear Information System (INIS)

    DECOVALEX is an international co-operative project for the development of coupled models and their validation against experiments in nuclear waste isolation. The emphasis of this project is on the coupled thermo-hydro-mechanical effects in jointed hard rock. In the first phase of DECOVALEX, two benchmark tests and one test case were selected for modelling. This report describes the results of the second benchmark test, the Multiple-Fracture Model, as obtained by the AECL Research team. This problem relates to groundwater flow and coupled thermo-hydro-mechanical deformation in a simple system comprising several blocks of porous medium and several intersecting fractures. The simulation domain is defined to be a rectangular box made up of an assemblage of nine blocks separated by two sets of discontinuities (planar fractures). The rock mass is subjected to in situ stress and thermal loading as well as a hydraulic gradient. No-flow and adiabatic conditions, together with a heat flux acting along a section of one of the lateral boundaries, induce expansion of the rock and cause shearing in the model. The MOTIF finite-element code, developed at AECL, was employed to simulate this problem. The simulation results show that thermal expansion of the solid blocks reduced the aperture and, consequently, the permeability of the fractures. As a result, the fluid velocity along the horizontal fractures decreased with time, except in the vicinity of the heat source, where the velocity initially increased and then decreased as a result of the decrease in permeability. (author)
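
    The aperture-permeability feedback reported here is commonly idealised with the parallel-plate (cubic-law) model, in which fracture permeability scales with the square of the aperture and flow per unit width with its cube. The sketch below is a generic illustration of that scaling, not the MOTIF formulation.

      def fracture_flow_properties(aperture_m, mu=1.0e-3, rho=998.0, g=9.81):
          """Cubic-law properties of a smooth parallel-plate fracture.

          Intrinsic permeability k = a^2/12, and transmissivity scales with
          a^3, so modest thermally induced aperture closure produces a large
          reduction in fracture flow (and hence in fluid velocity).
          """
          k = aperture_m ** 2 / 12.0                   # permeability, m^2
          T = rho * g * aperture_m ** 3 / (12.0 * mu)  # transmissivity, m^2/s
          return k, T

      # a 20% aperture closure from thermal expansion cuts transmissivity ~49%
      _, T0 = fracture_flow_properties(100e-6)
      _, T1 = fracture_flow_properties(80e-6)
      print(T1 / T0)  # 0.8**3 = 0.512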

  10. Towards a plant-wide Benchmark Simulation Model with simultaneous nitrogen and phosphorus removal wastewater treatment processes.

    OpenAIRE

    Flores-Alsina, Xavier; Ikumi, David; Batstone, Damien; Gernaey, Krist; Brouckaert, Chris; Ekama, George A.; Jeppsson, Ulf

    2012-01-01

    It is more than 10 years since the publication of the Benchmark Simulation Model No 1 (BSM1) manual (Copp, 2002). The main objective of BSM1 was creating a platform for benchmarking carbon and nitrogen removal strategies in activated sludge systems. The initial platform evolved into BSM1_LT and BSM2, which allowed the evaluation of monitoring and plant-wide control strategies, respectively. The fact that the BSM platforms have resulted in 300+ publications demonstrates the interest in the tools within the research community…

  11. Benchmarking Model Variants in Development of a Hardware-in-the-Loop Simulation System

    Science.gov (United States)

    Aretskin-Hariton, Eliot D.; Zinnecker, Alicia M.; Kratz, Jonathan L.; Culley, Dennis E.; Thomas, George L.

    2016-01-01

    Distributed engine control architecture presents a significant increase in complexity over traditional implementations when viewed from the perspective of system simulation and hardware design and test. Even if the overall function of the control scheme remains the same, the hardware implementation can have a significant effect on the overall system performance due to differences in the creation and flow of data between control elements. A Hardware-in-the-Loop (HIL) simulation system is under development at NASA Glenn Research Center that enables the exploration of these hardware-dependent issues. The system is based on, but not limited to, the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k). This paper describes the step-by-step conversion from the self-contained baseline model to the Hardware-in-the-Loop model, and the validation of each step. As the control model hardware fidelity was improved during HIL system development, benchmarking simulations were performed to verify that engine system performance characteristics remained the same. The results demonstrate the goal of the effort: the new HIL configurations have similar functionality and performance compared to the baseline C-MAPSS40k system.

  12. A performance benchmark test for geodynamo simulations

    Science.gov (United States)

    Matsui, H.; Heien, E. M.

    2013-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. As new models and numerical methods continue to be developed, it is important to update and extend benchmarks for testing these models. The first dynamo benchmark of Christensen et al. (2001) was applied to models based on spherical harmonic expansion methods. However, only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, spherical harmonic expansion methods perform poorly on massively parallel computers because global data communications are required for the spherical harmonic expansions to evaluate nonlinear terms. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of this benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare as many numerical methods as possible, we consider the model with the insulating magnetic boundary of Christensen et al. (2001) as well as a pseudo-vacuum magnetic boundary, because pseudo-vacuum boundaries are easier to implement with local methods than insulating magnetic boundaries. In the present study, we consider two kinds of benchmarks, a so-called accuracy benchmark and a performance benchmark. In the accuracy benchmark, we compare the dynamo models using the modest Ekman and Rayleigh numbers proposed by Christensen et al. (2001). We investigate the spatial resolution required for each dynamo code to obtain less than 1% difference from the suggested solution of the benchmark test using the two magnetic boundary conditions. In the performance benchmark, we investigate computational performance under the same computational environment. We perform these…

  13. Towards a benchmark simulation model for plant-wide control strategy performance evaluation of WWTPs

    DEFF Research Database (Denmark)

    Jeppsson, Ulf; Rosen, Christian; Alex, Jens; et al.

    2006-01-01

    …worldwide demonstrates the interest in such a tool within the research community. In this paper, an extension of the benchmark simulation model no 1 (BSM1) is proposed. This extension aims at facilitating control strategy development and performance evaluation at a plant-wide level and, consequently, includes both pretreatment of wastewater as well as the processes describing sludge treatment. The motivation for the extension is the increasing interest and need to operate and control wastewater treatment systems not only at an individual process level but also on a plant-wide basis. To facilitate the changes, the evaluation period has been extended to one year. A prolonged evaluation period allows for long-term control strategies to be assessed and enables the use of control handles that cannot be evaluated in a realistic fashion in the one-week BSM1 evaluation period. In the paper, the extended plant…

  14. Simulation Methods for High-Cycle Fatigue-Driven Delamination using Cohesive Zone Models - Fundamental Behavior and Benchmark Studies

    DEFF Research Database (Denmark)

    Bak, Brian Lau Verndal; Lindgaard, Esben; Turon, A.; et al.

    2015-01-01

    A novel computational method for simulating fatigue-driven delamination cracks in composite laminated structures under cyclic loading based on a cohesive zone model [2], and new benchmark studies with four other comparable methods [3-6], are presented. The benchmark studies describe and compare the traction-separation response in the cohesive zone and the transition phase from quasistatic to fatigue loading for each method. Furthermore, the accuracy of the predicted crack growth rate is studied and compared for each method. It is shown that the method described in [2] is significantly more accurate than the other methods [3-6]. Finally, studies are presented of the dependency and sensitivity to changes in different quasi-static material parameters and model-specific fitting parameters. It is shown that all the methods except [2] rely on different parameters which are not possible to determine…

  15. Modelling anaerobic co-digestion in Benchmark Simulation Model No. 2: Parameter estimation, substrate characterisation and plant-wide integration.

    Science.gov (United States)

    Arnell, Magnus; Astals, Sergi; Åmand, Linda; Batstone, Damien J; Jensen, Paul D; Jeppsson, Ulf

    2016-07-01

    Anaerobic co-digestion is an emerging practice at wastewater treatment plants (WWTPs) to improve the energy balance and integrate waste management. Modelling of co-digestion in a plant-wide WWTP model is a powerful tool to assess the impact of co-substrate selection and dose strategy on digester performance and plant-wide effects. A feasible procedure to characterise and fractionate the COD of co-substrates for the Benchmark Simulation Model No. 2 (BSM2) was developed. This procedure is also applicable for the Anaerobic Digestion Model No. 1 (ADM1). Long-chain fatty acid inhibition was included in the ADM1 model to allow for realistic modelling of lipid-rich co-substrates. Sensitivity analysis revealed that, apart from the biodegradable fraction of COD, the protein and lipid fractions are the most important fractions for methane production and digester stability, with at least two major failure modes identified through principal component analysis (PCA). The model and procedure were tested against bio-methane potential (BMP) tests on three substrates, each rich in carbohydrates, proteins or lipids, with good predictive capability in all three cases. The model was then applied to a plant-wide simulation study which confirmed the positive effects of co-digestion on methane production and total operational cost. Simulations also revealed the importance of limiting the protein load to the anaerobic digester to avoid ammonia inhibition in the digester and overloading of the nitrogen removal processes in the water train. In contrast, the digester can treat relatively high loads of lipid-rich substrates without prolonged disturbances. PMID:27088248
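
    The substrate characterisation step described above amounts to splitting each co-substrate's measured COD into ADM1-style influent fractions while conserving the COD balance. The helper below is a hypothetical sketch of such a fractionation; the fraction names follow ADM1 conventions, but the function, its arguments and the example values are illustrative, not the paper's procedure.

      def fractionate_cod(total_cod, f_deg, f_ch, f_pr, f_li):
          """Split a co-substrate's COD (kg COD/m3) into ADM1-style fractions.

          f_deg            : biodegradable share of the total COD (0-1)
          f_ch, f_pr, f_li : carbohydrate/protein/lipid split of the
                             biodegradable COD (must sum to 1)
          Returns carbohydrate (X_ch), protein (X_pr), lipid (X_li) and
          inert (X_I) COD; the values sum to total_cod (COD balance)."""
          assert abs(f_ch + f_pr + f_li - 1.0) < 1e-9, "split must sum to 1"
          biodeg = f_deg * total_cod
          return {"X_ch": f_ch * biodeg,
                  "X_pr": f_pr * biodeg,
                  "X_li": f_li * biodeg,
                  "X_I": total_cod - biodeg}

      # e.g. a lipid-rich co-substrate: 60 kg COD/m3, 80% biodegradable
      print(fractionate_cod(60.0, 0.8, 0.15, 0.10, 0.75))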

  16. Towards a plant-wide Benchmark Simulation Model with simultaneous nitrogen and phosphorus removal wastewater treatment processes

    DEFF Research Database (Denmark)

    Flores-Alsina, Xavier; Ikumi, David; Batstone, Damien; et al.

    2012-01-01

    It is more than 10 years since the publication of the Benchmark Simulation Model No 1 (BSM1) manual (Copp, 2002). The main objective of BSM1 was creating a platform for benchmarking carbon and nitrogen removal strategies in activated sludge systems. The initial platform evolved into BSM1_LT and BSM2, which allowed the evaluation of monitoring and plant-wide control strategies, respectively. … This extension aims at facilitating simultaneous carbon, nitrogen and phosphorus (P) removal process development and performance evaluation at a plant-wide level. The main motivation of the work is that numerous wastewater treatment plants (WWTPs) pursue biological phosphorus removal as an alternative to chemical P removal based on precipitation using metal salts, such as Fe or Al. This paper identifies and discusses important issues that need to be addressed to upgrade the BSM2 to BSM2-P, for example: 1) new influent wastewater characteristics; 2) new (bio)chemical processes to account for; 3)…

  17. Benchmarking hydrological models for low-flow simulation and forecasting on French catchments

    OpenAIRE

    Nicolle, P.; Pushpalatha, R.; Perrin, C.; François, D.; Thiéry, D.; Mathevet, T.; Le Lay, M.; Besson, F.; Soubeyroux, J.-M.; Viel, C.; Regimbeau, F.; V. Andréassian; Maugis, P.; B. Augeard; Morice, E.

    2014-01-01

    Low-flow simulation and forecasting remains a difficult issue for hydrological modellers, and intercomparisons can be extremely instructive for assessing existing low-flow prediction models and for developing more efficient operational tools. This research presents the results of a collaborative experiment conducted to compare low-flow simulation and forecasting models on 21 unregulated catchments in France. Five hydrological models (four lumped storage-type models – Gardenia, GR6J, Mor...

  18. Direct Simulation of a Solidification Benchmark Experiment

    OpenAIRE

    Carozzani, Tommy; Gandin, Charles-André; Digonnet, Hugues; Bellet, Michel; Zaidat, Kader; Fautrelle, Yves

    2013-01-01

    A solidification benchmark experiment is simulated using a three-dimensional cellular automaton-finite element solidification model. The experiment consists of a rectangular cavity containing a Sn-3 wt pct Pb alloy. The alloy is first melted and then solidified in the cavity. A dense array of thermocouples permits monitoring of temperatures in the cavity and in the heat exchangers surrounding the cavity. After solidification, the grain structure is revealed by metallography…

  19. A benchmark simulation model to describe plant-wide phosphorus transformations in WWTPs

    DEFF Research Database (Denmark)

    Flores-Alsina, Xavier; Ikumi, D.; Kazadi-Mbamba, C.; et al.

    It is more than 10 years since the publication of the BSM1 technical report (Copp, 2002). The main objective of BSM1 was to create a platform for benchmarking C and N removal strategies in activated sludge systems. The initial platform evolved into BSM1_LT and BSM2, which allowed for the evaluation … scientific community. In this paper, a highly necessary extension of the BSM2 is proposed. This extension aims at facilitating simultaneous C, N and P removal process development and performance evaluation at a plant-wide level. The main motivation of the work is that numerous wastewater treatment plants (WWTPs) pursue biological/chemical phosphorus removal. However, realistic descriptions of combined C, N and P removal add a major, but unavoidable, degree of complexity to wastewater treatment process models. This paper identifies and discusses important issues that need to be addressed to upgrade the BSM2…

  20. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure the performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performance and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties…
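
    As a concrete reading of metric points (1) and (2), a data-model mismatch can be normalised so that a model that always predicts the observed mean scores exactly 1, and the result can then be mapped onto an ordinal score via a priori thresholds. The sketch below assumes a normalised-mean-error metric and illustrative threshold values; neither is prescribed by the paper.

      import numpy as np

      def normalised_mean_error(model, obs):
          """Mean absolute data-model mismatch, normalised so a model that
          always predicts the observed mean scores exactly 1."""
          model, obs = np.asarray(model), np.asarray(obs)
          return np.abs(model - obs).mean() / np.abs(obs - obs.mean()).mean()

      def threshold_score(nme, thresholds=(0.5, 1.0, 1.5)):
          """Ordinal score from a priori thresholds: 0 (best) to 3 (worst);
          values above 1 are worse than the observed-mean baseline."""
          return sum(nme > t for t in thresholds)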

  1. System-wide Benchmark Simulation Model for integrated analysis of urban wastewater systems

    DEFF Research Database (Denmark)

    Saagi, R.; Flores-Alsina, X.; Gernaey, K. V.; et al.

    …this system is developed. Modelling details for various building blocks of the model are explained in the following sections of this abstract. Preliminary simulation results are used to evaluate the impact of rain events using indirect (emission measures from sewers and WWTPs) and direct (river quality…

  2. Catchment & sewer network simulation model to benchmark control strategies within urban wastewater systems

    DEFF Research Database (Denmark)

    Saagi, Ramesh; Flores Alsina, Xavier; Fu, Guangtao; et al.

    2016-01-01

    …evaluation criteria describing the cumulative and acute effects are presented. Simulation results show that the proposed set of models is capable of generating daily, weekly and seasonal variations as well as describing the effect of rain events on wastewater characteristics. Two sets of case studies…

  3. DOE Commercial Building Benchmark Models: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Torcelini, P.; Deru, M.; Griffith, B.; Benne, K.; Halverson, M.; Winiarski, D.; Crawley, D. B.

    2008-07-01

    To provide a consistent baseline of comparison and to save time conducting building energy simulations, the U.S. Department of Energy (DOE) has developed a set of standard benchmark building models. This paper provides an executive summary overview of these benchmark buildings and how they can save building analysts valuable time. Fully documented and implemented for use with the EnergyPlus energy simulation program, the benchmark models are publicly available, and new versions will be created to maintain compatibility with new releases of EnergyPlus. The benchmark buildings will form the basis for research on specific building technologies, energy code development, appliance standards, and measurement of progress toward DOE energy goals. Having a common starting point allows us to better share and compare research results and move forward to make more energy-efficient buildings.

  4. Simulating pesticides in ditches to assess ecological risk (SPIDER): II. Benchmarking for the drainage model.

    Science.gov (United States)

    Renaud, Fabrice G; Brown, Colin D

    2008-05-01

    SPIDER (simulating pesticides in ditches to assess ecological risk) is a locally distributed, capacitance-based model that accounts for pesticide entry into surface water bodies via spray drift, surface runoff, interlayer flow and drainage. SPIDER was developed for application to small agricultural catchments. Transport of pesticide from the site of application to surface water via subsurface field drains is one of the major routes of entry to surface water. Several pesticide fate models describe transfer of pesticide via drainflow, notably MACRO, which has been evaluated against field data in several studies. The capacity of SPIDER to simulate drainflow and pesticide concentration in drain water was evaluated against two datasets that had been used previously to evaluate MACRO independently of this study: a plot experiment at Cockle Park and a field experiment at Maidwell, both located in the UK. In both circumstances, SPIDER was able to reproduce drain hydrographs relatively well with no or limited calibration. At Cockle Park, simulated and observed drainflow over the season were 240 and 278 mm, respectively, with a Nash and Sutcliffe model efficiency (NSME) coefficient of 0.32, whilst at Maidwell they were 259 and 296 mm, respectively, with a NSME coefficient of 0.55. Predictions of maximum isoproturon concentration at Cockle Park by SPIDER and MACRO were 5.3 and 13.1 µg L⁻¹, respectively, compared to the 3.8 µg L⁻¹ measured in the field, whilst pesticide loads to drains over the season were 0.22 and 1.53 g, respectively, compared to an observed load of 0.35 g. Maximum sulfosulfuron concentrations at Maidwell were 2.3, 3.9 and 5.4 µg L⁻¹ as observed and as simulated by SPIDER and MACRO, respectively, and pesticide loading to drains over the season was 0.77, 5.61 and 4.77 g, respectively. Results from the sensitivity analysis showed that the sensitivity of SPIDER compared favourably to that of several other capacity models but was more sensitive than MACRO to…
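
    The NSME values quoted above are Nash-Sutcliffe efficiencies computed from the full observed and simulated drainflow time series (they cannot be recovered from the seasonal totals alone). A minimal implementation of the standard definition, where 1 is a perfect fit and 0 means the model does no better than the observed mean:

      import numpy as np

      def nsme(observed, simulated):
          """Nash-Sutcliffe model efficiency for, e.g., daily drainflow (mm/d):
          1 is a perfect fit; 0 means no better than the observed mean."""
          observed, simulated = np.asarray(observed), np.asarray(simulated)
          residual = np.sum((observed - simulated) ** 2)
          variance = np.sum((observed - observed.mean()) ** 2)
          return 1.0 - residual / variance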

  5. Mixed quantum-classical simulations of charge transport in organic materials: Numerical benchmark of the Su-Schrieffer-Heeger model

    International Nuclear Information System (INIS)

    The electron-phonon coupling is critical in determining the intrinsic charge carrier and exciton transport properties in organic materials. In this study, we consider a Su-Schrieffer-Heeger (SSH) model for molecular crystals, and perform numerical benchmark studies for different strategies of simulating the mixed quantum-classical dynamics. These methods, which differ in the selection of initial conditions and the representation used to solve the time evolution of the quantum carriers, are shown to yield similar equilibrium diffusion properties. A hybrid approach combining molecular dynamics simulations of nuclear motion and quantum-chemical calculations of the electronic Hamiltonian at each geometric configuration appears as an attractive strategy to model charge dynamics in large systems "on the fly", yet it relies on the assumption that the quantum carriers do not impact the nuclear dynamics. We find that such an approximation systematically results in overestimated charge-carrier mobilities, with the associated error being negligible when the room-temperature mobility exceeds ∼4.8 cm²/Vs (∼0.14 cm²/Vs) in one-dimensional (two-dimensional) crystals.
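
    For orientation, an SSH-type model for molecular crystals modulates the intersite transfer integral linearly with the relative nuclear displacements. The sketch below builds such an electronic Hamiltonian for one nuclear configuration; the parameter values are illustrative, not those used in the study.

      import numpy as np

      def ssh_hamiltonian(u, t0=0.3, alpha=1.0):
          """SSH-type electronic Hamiltonian for a 1D molecular stack: the
          transfer integral between sites n and n+1 is modulated linearly by
          the intermolecular displacements u (classical nuclear coordinates).

          t0 (eV) and alpha (eV/A) are illustrative parameter values."""
          n = len(u)
          h = np.zeros((n, n))
          for i in range(n - 1):
              hop = t0 - alpha * (u[i + 1] - u[i])   # modulated hopping
              h[i, i + 1] = h[i + 1, i] = -hop
          return h

      # adiabatic states for one snapshot of the nuclear configuration
      rng = np.random.default_rng(0)
      energies, states = np.linalg.eigh(ssh_hamiltonian(rng.normal(0.0, 0.05, 50)))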

  6. MCNP simulation of the TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    The complete 3D MCNP model of the TRIGA Mark II reactor is presented. It enables precise calculations of some quantities of interest in a steady-state mode of operation. Calculational results are compared to the experimental results gathered during reactor reconstruction in 1992. Since the operating conditions were well defined at that time, the experimental results can be used as a benchmark. It may be noted that this benchmark is one of very few high-enrichment benchmarks available. In our simulations the experimental conditions were thoroughly reproduced: fuel elements and control rods were precisely modeled, as were the entire core configuration and the vicinity of the core. ENDF/B-VI and ENDF/B-V libraries were used. Partial results of the benchmark calculations are presented. Excellent agreement in core criticality, excess reactivity and control rod worths can be observed. (author)

  7. Concrete Model Descriptions and Summary of Benchmark Studies for Blast Effects Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Noble, C; Kokko, E; Darnell, I; Dunn, T; Hagler, L; Leininger, L

    2005-07-21

    Concrete is perhaps one of the most widely used construction materials in the world. Engineers use it to build massive concrete dams, concrete waterways, highways, bridges, and even nuclear reactors. The advantages of using concrete are that it can be cast into any desired shape, it is durable, and it is very economical compared to structural steel. The disadvantages are its low tensile strength, low ductility, and low strength-to-weight ratio. Concrete is a composite material that consists of a coarse granular material, or aggregate, embedded in a hard matrix of material, or cement, which fills the gaps between the aggregates and binds them together. Concrete properties, however, vary widely. The properties depend on the choice of materials used and the proportions for a particular application, as well as differences in fabrication techniques. Table 1 provides a listing of typical engineering properties for structural concrete. Properties also depend on the level of concrete confinement, or hydrostatic pressure, the material is being subjected to. In general, concrete is rarely subjected to a single axial stress. The material may experience a combination of stresses all acting simultaneously. The behavior of concrete under these combined stresses is, however, extremely difficult to characterize. In addition to the type of loading, one must also consider the stress history of the material. Failure is determined not only by the ultimate stresses, but also by the rate of loading and the order in which these stresses were applied. The concrete model described herein accounts for this complex behavior of concrete. It was developed by Javier Malvar, Jim Wesevich, and John Crawford of Karagozian and Case, and Don Simon of Logicon RDA in support of the Defense Threat Reduction Agency's programs. The model is an enhanced version of the Concrete/Geological Material Model 16 in the Lagrangian finite element code DYNA3D. The modifications that were made to the original model…

  8. Simulation with Different Turbulence Models in an Annex 20 Benchmark Test using Star-CCM+

    DEFF Research Database (Denmark)

    Le Dreau, Jerome; Heiselberg, Per; Nielsen, Peter V.

    The purpose of this investigation is to compare the different flow patterns obtained for the 2D isothermal test case defined in Annex 20 (1990) using different turbulence models. The results are compared with the existing experimental data. A similar study has already been performed by Rong et al. (2008) using Ansys CFX 11.0. In this report, the software Star-CCM+ has been used.

  9. Benchmark simulation model no 2: general protocol and exploratory case studies

    DEFF Research Database (Denmark)

    Jeppsson, U.; Pons, M.N.; Nopens, I.; et al.

    2007-01-01

    …significant new development that is reported on here: rather than only evaluating control strategies at the level of the activated sludge unit (bioreactors and secondary clarifier), the new BSM2 now allows the evaluation of control strategies at the level of the whole plant, including primary clarifier and sludge treatment. … the focus is on control strategy evaluation rather than on modelling issues. Finally, for illustration, twelve simple operational strategies have been implemented in BSM2 and their performance evaluated. Results show that it is an interesting control engineering challenge to further improve the performance of the…

  10. On a new benchmark for the simulation of saltwater intrusion

    Science.gov (United States)

    Stoeckl, Leonard; Graf, Thomas

    2015-04-01

    To date, many different benchmark problems for density-driven flow are available. Benchmarks are necessary to validate numerical models. The benchmark by Henry (1964) describes a saltwater wedge intruding into a freshwater aquifer in a rectangular model. The Henry (1964) problem of saltwater intrusion is one of the most applied benchmarks in hydrogeology. Modelling saltwater intrusion will be of major importance in the future, because the impacts of groundwater overexploitation, climate change and sea-level rise are of key concern. The worthiness of the Henry (1964) problem was questioned by Simpson and Clement (2003), who compared density-coupled and density-uncoupled simulations. Density-uncoupling was achieved by neglecting density effects in the governing equations, and by considering density effects only in the flow boundary conditions. As both of their simulations showed similar results, Simpson and Clement (2003) concluded that flow patterns of the Henry (1964) problem are largely dictated by the applied flow boundary conditions and that density-dependent effects are not adequately represented in the Henry (1964) problem. In the present study, we compare numerical simulations of the physical benchmark of a freshwater lens by Stoeckl and Houben (2012) to the Henry (1964) problem. In this new benchmark, the development of a freshwater lens under an island is simulated by applying freshwater recharge to the model top. Results indicate that density-uncoupling significantly alters the flow patterns of fresh- and saltwater. This leads to the conclusion that, next to the boundary conditions applied, density-dependent effects are important to correctly simulate the flow dynamics of a freshwater lens.
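
    The coupling question at issue can be made concrete with a linear equation of state: in density-coupled runs the fluid density varies with salt mass fraction everywhere in the governing equations, while in uncoupled runs it is retained only in the boundary conditions. A minimal sketch, with an assumed linear EOS and a simple on/off toggle:

      import numpy as np

      RHO_F, RHO_S = 998.0, 1025.0   # fresh- and saltwater densities, kg/m3

      def fluid_density(c, coupled=True):
          """Linear equation of state rho(c) for salt mass fraction c in [0, 1].

          coupled=False freezes the density at the freshwater value in the
          governing equations (density then enters only through the boundary
          conditions), mimicking the uncoupled runs discussed above."""
          c = np.asarray(c, dtype=float)
          if not coupled:
              return np.full_like(c, RHO_F)
          return RHO_F + (RHO_S - RHO_F) * c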

  11. Extending the benchmark simulation model no2 with processes for nitrous oxide production and side-stream nitrogen removal

    DEFF Research Database (Denmark)

    Boiocchi, Riccardo; Sin, Gürkan; Gernaey, Krist V.

    2015-01-01

    In this work the Benchmark Simulation Model No. 2 is extended with processes for nitrous oxide production and for side-stream partial nitritation/Anammox (PN/A) treatment. For these extensions the Activated Sludge Model for Greenhouse gases No. 1 was used to describe the main waterline, whereas the Complete Autotrophic Nitrogen Removal (CANR) model was used to describe the side-stream (PN/A) treatment. Comprehensive simulations were performed to assess the extended model. Steady-state simulation results revealed the following: (i) the implementation of a continuous CANR side-stream reactor increased the total nitrogen removal by 10%; (ii) it reduced the aeration demand by 16% compared to the base case; and (iii) the activity of ammonia-oxidizing bacteria most strongly influences nitrous oxide emissions. The extended model provides a simulation platform to generate, test and compare novel control…

  12. Synthetic benchmark model for parallel agent-based simulation

    Institute of Scientific and Technical Information of China (English)

    余文广; 王维平; 侯洪涛; 李群

    2012-01-01

    In order to evaluate the performance of parallel simulation algorithms, a benchmark model is needed. To address the current lack of such a common, application-independent benchmark model in the parallel agent-based simulation (PABS) research community, a common benchmark model for PABS is proposed, based on the design principles of parallel HOLD (a classic synthetic benchmark model for parallel discrete event simulation, PDES) and on the characteristics of agent-based simulation (ABS). This model can easily synthesize various required workloads based on application characteristics and excludes the impact of application-specific elements on the performance analysis, so as to provide a common benchmark for different PABS researchers. Finally, with this model, the impact of the computation granularity of agents and the number of processors on the speedup is analyzed experimentally.
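
    A synthetic benchmark of this kind replaces the application model with a tunable artificial workload, and speedup is then measured against a single-processor run. The sketch below illustrates the idea with a hypothetical per-agent busy-loop; it is not the published model.

      import time

      def agent_step(granularity):
          """Synthetic per-agent workload: burn a tunable amount of CPU to
          emulate agents of different computational granularity, independent
          of any concrete application."""
          x = 0.0
          for i in range(granularity):
              x += (i % 7) * 1e-9
          return x

      def run_sequential(num_agents, granularity, steps=10):
          """Reference single-processor run; parallel runs of the same
          workload would be timed the same way to compute the speedup."""
          t0 = time.perf_counter()
          for _ in range(steps):
              for _ in range(num_agents):
                  agent_step(granularity)
          return time.perf_counter() - t0

      # speedup = sequential wall-clock time / parallel wall-clock time
      baseline = run_sequential(num_agents=1000, granularity=10_000)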

  13. Optimal design of activated sludge process by means of multi-objective optimization: case study in Benchmark Simulation Model 1 (BSM1).

    Science.gov (United States)

    Chen, Wenliang; Yao, Chonghua; Lu, Xiwu

    2014-01-01

    Optimal design of the activated sludge process (ASP) using multi-objective optimization was studied, with a benchmark process from Benchmark Simulation Model 1 (BSM1) taken as the target process. Four objective indexes were considered: percentage of effluent violation (PEV), overall cost index (OCI), total volume and total suspended solids, making up four cases for comparative analysis. Models were solved by the non-dominated sorting genetic algorithm in MATLAB. Results show that ineffective solutions can be rejected by adding constraints, and newly added objectives can affect the relationship between the existing objectives; taking Pareto solutions as process parameters, the performance indexes of PEV and OCI can be improved more than with the default process parameters of BSM1, especially for N removal and resistance against dynamic NH₄⁺-N in the influent. The results indicate that multi-objective optimization is a useful method for the optimal design of ASPs. PMID:24845320
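
    The Pareto solutions referred to above are the non-dominated designs under simultaneous minimisation of all objectives. Below is a minimal non-dominated filter for candidate objective tuples; it is an illustration of the concept, not the NSGA implementation used in the paper.

      def pareto_front(solutions):
          """Return the non-dominated candidates; each entry is
          (design_name, objectives), with every objective to be minimised
          (e.g. PEV, OCI, total volume, total suspended solids)."""
          front = []
          for name, obj in solutions:
              dominated = any(
                  other != obj and all(o <= s for o, s in zip(other, obj))
                  for _, other in solutions
              )
              if not dominated:
                  front.append((name, obj))
          return front

      # toy candidates: C is dominated by B on both objectives
      cands = [("A", (0.20, 1.0)), ("B", (0.30, 0.8)), ("C", (0.35, 1.1))]
      print(pareto_front(cands))   # -> A and B remain on the front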

  14. Benchmark Data Set for Wheat Growth Models: Field Experiments and AgMIP Multi-Model Simulations.

    Science.gov (United States)

    Asseng, S.; Ewert, F.; Martre, P.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P.J.; Rotter, R. P.

    2015-01-01

    The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments including Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation, maximum and minimum temperature, precipitation, surface wind, dew point temperature, relative humidity, and vapor pressure), soil characteristics, frequent growth, nitrogen in crop and soil, crop and soil water and yield components. Simulations include results from 27 wheat models and a sensitivity analysis with 26 models and 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-century scenario.

  15. BENCHMARKING AND VERIFICATION OF BIPOLAR JUNCTION TRANSISTOR TCAD SIMULATION PROGRAMS

    OpenAIRE

    Niemiec, Carrie Ann; Lundstrom, Mark S.

    1996-01-01

    This thesis contains the results from a benchmarking study, which involves comparing different device modeling approaches. After discussing the philosophical importance of accuracy and truth in science with respect to modeling, modern-day simulation methods are discussed and a methodology defined. The modeling approaches compared are the drift-diffusion method, which is most commonly utilized currently, and the hydrodynamic/energy transport method, which has greater rigor than drift-diffusion but…

  16. BENCHMARKING LEARNER EDUCATION USING ONLINE BUSINESS SIMULATION

    Directory of Open Access Journals (Sweden)

    Alfred H. Miller

    2016-06-01

    For programmatic accreditation by the Accreditation Council of Business Schools and Programs (ACBSP), business programs are required to meet STANDARD #4, Measurement and Analysis of Student Learning and Performance. Business units must demonstrate that outcome assessment systems are in place, using documented evidence that shows how the results are being used to further develop or improve the academic business program. The Higher Colleges of Technology, a 17-campus federal university in the United Arab Emirates, differentiates its applied degree programs through a 'learning by doing' ethos, which permeates the entire curriculum. This paper documents benchmarking of education for managing innovation. Using a business simulation with Year 3 Bachelor of Business learners in a business strategy class, learners explored the following functional areas through a simulated environment: research and development, production, and marketing of a technology product. Student teams were required to use finite resources and compete against other student teams in the same universe. The study employed an instrument, developed in a 60-sample pilot study of business simulation learners, against which subsequent learners participating in the online business simulation could be benchmarked. The results showed incremental improvement in the program due to changes made in assessment strategies, including the oral defense.

  17. Simulation benchmarks for low-pressure plasmas: capacitive discharges

    CERN Document Server

    Turner, M M; Donko, Z; Eremin, D; Kelly, S J; Lafleur, T; Mussenbrock, T

    2012-01-01

    Benchmarking is generally accepted as an important element in demonstrating the correctness of computer simulations. In the modern sense, a benchmark is a computer simulation result that has evidence of correctness, is accompanied by estimates of relevant errors, and which can thus be used as a basis for judging the accuracy and efficiency of other codes. In this paper, we present four benchmark cases related to capacitively coupled discharges. These benchmarks prescribe all relevant physical and numerical parameters. We have simulated the benchmark conditions using five independently developed particle-in-cell codes. We show that the results of these simulations are statistically indistinguishable, within bounds of uncertainty that we define. We therefore claim that the results of these simulations represent strong benchmarks that can be used as a basis for evaluating the accuracy of other codes. These other codes could include approaches other than particle-in-cell simulations, where benchmarking could examine…
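
    The claim of 'statistically indistinguishable' results can be made operational by requiring the profiles from two codes to agree within their combined statistical uncertainty at every grid point. The sketch below assumes a simple n-sigma criterion, which is not necessarily the exact definition used by the authors.

      import numpy as np

      def indistinguishable(prof_a, err_a, prof_b, err_b, n_sigma=2.0):
          """True when two codes' profiles of the same observable (e.g. ion
          density) agree within n_sigma combined statistical uncertainty
          at every grid point."""
          diff = np.abs(np.asarray(prof_a) - np.asarray(prof_b))
          bound = n_sigma * np.hypot(np.asarray(err_a), np.asarray(err_b))
          return bool(np.all(diff <= bound))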

  18. Experiment vs simulation RT WFNDEC 2014 benchmark: CIVA results

    International Nuclear Information System (INIS)

    The French Alternative Energies and Atomic Energy Commission (CEA) has for many years developed the CIVA software dedicated to the simulation of NDE techniques such as Radiographic Testing (RT). RT modelling is achieved in CIVA by combining a deterministic approach based on ray tracing for simulation of the transmitted beam with a Monte Carlo model for computation of the scattered beam. Furthermore, CIVA includes various detector models, in particular common X-ray films and photostimulable phosphor plates. This communication presents the results obtained with the RT models implemented in the CIVA software for the configurations proposed in the World Federation of NDE Centers (WFNDEC) 2014 RT modelling benchmark.

  19. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine whether current benchmark asset pricing models adequately describe the cross-section of stock returns. …

  20. Benchmarking of SIMULATE-3 on engineering workstations

    International Nuclear Information System (INIS)

    The nuclear fuel management department of Arizona Public Service Company (APS) has evaluated various computer platforms for a departmental engineering and business workstation local area network (LAN). Historically, centralized mainframe computer systems have been utilized for engineering calculations. Increasing usage and the resulting longer response times on the company mainframe system, together with the relative cost differential between a mainframe upgrade and workstation technology, justified the examination of current workstations. A primary concern was the time necessary to turn around routine reactor physics reload and analysis calculations. Computers ranging from a Definicon 68020 processing board in an AT-compatible personal computer up to an IBM 3090 mainframe were benchmarked. The SIMULATE-3 advanced nodal code was selected for benchmarking based on its extensive use in nuclear fuel management. SIMULATE-3 is used at APS for reload scoping, design verification, core follow, and providing predictions of reactor behavior under nominal conditions and planned reactor maneuvering, such as axial shape control during start-up and shutdown.

  1. Lower hybrid current drive: an overview of simulation models, benchmarking with experiment, and predictions for future devices

    International Nuclear Information System (INIS)

    This paper reviews the status of lower hybrid current drive (LHCD) simulation and modeling. We first discuss modules used for wave propagation, absorption, and current drive, with particular emphasis placed on comparing exact numerical solutions of the Fokker-Planck equation in two dimensions with solution methods that employ one-dimensional and adjoint approaches. We also survey model predictions for LHCD in past and present experiments, showing detailed comparisons between simulated and observed current drive efficiencies and hard X-ray profiles. Finally, we discuss several model predictions for lower hybrid current profile control in proposed next-step reactor options. (authors)

  2. Benchmark Simulations of Gyro-Kinetic Electron and Fully-Kinetic Ion Model for Lower Hybrid Waves in Linear Region

    International Nuclear Information System (INIS)

    The particle-in-cell (PIC) simulation method has proved to be a good candidate for studying the interactions between plasmas and radio-frequency waves. However, for waves in the lower hybrid range of frequencies, a full PIC simulation is not efficient due to its high computational cost. In this work, a gyro-kinetic electron and fully-kinetic ion (GeFi) particle simulation model is applied to study the propagation and mode conversion processes of lower hybrid waves (LHWs) in plasmas. With this method, the computational efficiency of LHW simulations is greatly increased by using a larger grid size and time step. The simulation results in the linear regime are validated by comparison with linear theory.

  3. A chemical EOR benchmark study of different reservoir simulators

    Science.gov (United States)

    Goudarzi, Ali; Delshad, Mojdeh; Sepehrnoori, Kamy

    2016-09-01

    Interest in chemical EOR processes has intensified in recent years due to advancements in chemical formulations and injection techniques. Polymer (P), surfactant/polymer (SP), and alkaline/surfactant/polymer (ASP) injection are techniques for improving sweep and displacement efficiencies, with the aim of improving oil production in both secondary and tertiary floods. There has been great interest in chemical flooding recently for different challenging situations. These include high-temperature reservoirs, formations with extreme salinity and hardness, naturally fractured carbonates, and sandstone reservoirs with heavy and viscous crude oils. More oil reservoirs are reaching maturity, where secondary polymer floods and tertiary surfactant methods have become increasingly important. This significance has added to the industry's interest in using reservoir simulators as tools for reservoir evaluation and management, to minimize costs and increase process efficiency. Reservoir simulators with special features are needed to represent the coupled chemical and physical processes present in chemical EOR. The simulators need to be first validated against well-controlled lab and pilot scale experiments to reliably predict full field implementations. The available data from the laboratory scale include 1) phase behavior and rheological data; and 2) results of secondary and tertiary coreflood experiments for P, SP, and ASP floods under reservoir conditions, i.e. chemical retentions, pressure drop, and oil recovery. Data collected from corefloods are used as benchmark tests comparing numerical reservoir simulators with chemical EOR modeling capabilities, such as STARS of CMG, ECLIPSE-100 of Schlumberger, and REVEAL of Petroleum Experts. The research UTCHEM simulator from The University of Texas at Austin is also included, since it has been the benchmark for chemical flooding simulation for over 25 years. The results of this benchmark comparison will be utilized to improve…

  4. FRIB driver linac vacuum model and benchmarks

    CERN Document Server

    Durickovic, Bojan; Kersevan, Roberto; Machicoane, Guillaume

    2014-01-01

    The Facility for Rare Isotope Beams (FRIB) is a superconducting heavy-ion linear accelerator that is to produce rare isotopes far from stability for low energy nuclear science. In order to achieve this, its driver linac needs to achieve a very high beam current (up to 400 kW beam power), and this requirement makes vacuum levels of critical importance. Vacuum calculations have been carried out to verify that the vacuum system design meets the requirements. The modeling procedure was benchmarked by comparing models of an existing facility against measurements. In this paper, we present an overview of the methods used for FRIB vacuum calculations and simulation results for some interesting sections of the accelerator.

  5. Simulation of two-region and four-region models for typical PWR pressurizer and benchmark obtained results using available results

    International Nuclear Information System (INIS)

    Highlights: • WWER-1000 pressurizer two-region and four-region modeling is presented. • Simulation of in-surge and out-surge transients is performed using computer codes. • Transient results are compared with predictions of the RELAP5/MOD3 code. - Abstract: In light water pressurized reactors, the pressurizer plays an important role in controlling the coolant pressure during in-surge or out-surge transients. An increase or decrease in coolant mass flow rate or specific volume leads to an in-surge or out-surge transient, which changes the pressure in the pressurizer tank. Different thermodynamic models exist that predict the pressurizer surge tank's pressure and the water and vapor properties over time. In the present article, two-region and four-region thermodynamic models were used for the simulation of a typical PWR pressurizer, and the calculated results were benchmarked against RELAP5/MOD3 code findings. A finite difference method is applied to solve the differential equations. Water and vapor thermodynamic properties are calculated according to the IAPWS-IF97 (International Association for the Properties of Water and Steam) correlations at each time step. A trial-and-error method is used for the pressure estimation. The pressure results are compared and show that the accuracy of the presented models is acceptable and deviations from the RELAP5 code results are justifiable.
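
    The trial-and-error pressure estimation mentioned above can be sketched as a bisection search for the saturation pressure at which the liquid and vapor regions exactly fill the tank. The code below assumes the third-party iapws Python package for IAPWS-IF97 properties; the masses, volumes, bracketing range and bisection scheme are illustrative, not the paper's algorithm.

      from iapws import IAPWS97   # third-party IAPWS-IF97 property package

      def equilibrium_pressure(m_liq, m_vap, v_tank, p_lo=0.1, p_hi=22.0):
          """Bisection ('trial-and-error') search for the saturation pressure
          (MPa) at which the liquid and vapor regions exactly fill the tank.

          m_liq, m_vap : masses of the liquid and vapor regions (kg)
          v_tank       : total tank volume (m3)
          Assumes the total mixture volume decreases monotonically with
          pressure over the bracketing interval.
          """
          def volume_error(p):
              v_f = IAPWS97(P=p, x=0.0).v   # saturated liquid specific volume
              v_g = IAPWS97(P=p, x=1.0).v   # saturated vapor specific volume
              return m_liq * v_f + m_vap * v_g - v_tank

          for _ in range(60):
              p_mid = 0.5 * (p_lo + p_hi)
              if volume_error(p_mid) > 0.0:
                  p_lo = p_mid              # mixture too large: raise pressure
              else:
                  p_hi = p_mid              # mixture too small: lower pressure
          return 0.5 * (p_lo + p_hi)

      # illustrative numbers only: a ~51 m3 tank settles near ~15 MPa
      print(equilibrium_pressure(m_liq=2.0e4, m_vap=1.68e3, v_tank=51.0))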

  6. OECD/NEA Main Steam Line Break Benchmark Problem Exercise I Simulation Using the SPACE Code with the Point Kinetics Model

    International Nuclear Information System (INIS)

    The Safety and Performance Analysis Code for Nuclear Power Plants (SPACE) has been developed in recent years by Korea Hydro & Nuclear Power Co. (KHNP) through collaborative work with other Korean nuclear industries. SPACE is a best-estimate two-phase, three-field thermal-hydraulic analysis code for analyzing the safety and performance of pressurized water reactors (PWRs). The SPACE code has sufficient features to replace outdated vendor-supplied codes and to be used for the safety analysis of operating PWRs and the design of advanced reactors. As a result of the second phase of the development, version 2.14 of the code was released after successive V and V work, and topical reports on the code and related safety analysis methodologies have been prepared for licensing. In this study, the OECD/NEA Main Steam Line Break (MSLB) Benchmark Problem Exercise I was simulated as a V and V exercise, and the results were compared with those of the participants in the benchmark project. Through the simulation, it was concluded that the SPACE code can effectively simulate PWR MSLB accidents.
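
    The point kinetics model named in the title reduces the core neutronics to coupled ODEs for neutron density and delayed-neutron precursor concentrations, driven by a reactivity input. A one-group sketch with illustrative parameters and a toy step reactivity insertion (not the benchmark's specified MSLB forcing):

      from scipy.integrate import solve_ivp

      BETA, GEN_TIME, LAM = 0.0065, 1.0e-4, 0.08  # beta, Lambda (s), lambda (1/s)

      def point_kinetics(t, y, rho):
          """One-group point kinetics: n is neutron density, c the precursor
          concentration; rho(t) is the reactivity forcing."""
          n, c = y
          dn = ((rho(t) - BETA) / GEN_TIME) * n + LAM * c
          dc = (BETA / GEN_TIME) * n - LAM * c
          return [dn, dc]

      # toy forcing: a +0.1 dollar step insertion at t = 1 s
      rho = lambda t: 0.1 * BETA if t >= 1.0 else 0.0
      y0 = [1.0, BETA / (GEN_TIME * LAM)]           # equilibrium precursors
      sol = solve_ivp(point_kinetics, (0.0, 10.0), y0, args=(rho,), max_step=0.01)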

  7. A comprehensive benchmarking system for evaluating global vegetation models

    Directory of Open Access Journals (Sweden)

    D. I. Kelley

    2012-11-01

    We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model, SDBM), and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). SDBM reproduces observed CO2 seasonal cycles, but its simulation of independent measurements of net primary production (NPP) is too high. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
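
    The "random" model baseline mentioned above can be reproduced by scoring bootstrap resamples of the observations with the same metric applied to the models. A sketch, assuming the normalised mean error as the metric:

      import numpy as np

      def nme(model, obs):
          """Normalised mean error; the observed-mean model scores 1."""
          model, obs = np.asarray(model), np.asarray(obs)
          return np.abs(model - obs).mean() / np.abs(obs - obs.mean()).mean()

      def random_model_score(obs, metric=nme, n_boot=1000, seed=0):
          """Mean metric score of 'random' models built by bootstrap
          resampling the observations, used to contextualise model scores."""
          rng = np.random.default_rng(seed)
          obs = np.asarray(obs)
          draws = rng.choice(obs, size=(n_boot, obs.size), replace=True)
          return float(np.mean([metric(d, obs) for d in draws]))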

  8. Benchmarking of proton transport in Super Monte Carlo simulation program

    International Nuclear Information System (INIS)

    … treat the intermediate-energy nuclear reactions for protons. Some other hadronic models are also under development. The benchmarking of proton transport in SuperMC has been performed against Accelerator Driven Subcritical System (ADS) benchmark data and a model released by the IAEA under its Coordinated Research Project (CRP). The incident proton energy is 1.0 GeV. The neutron flux and energy deposition were calculated. The results simulated using SuperMC and FLUKA agree within the statistical uncertainty inherent in the Monte Carlo method. The proton transport capability of SuperMC has also been applied to the China Lead-Alloy cooled Reactor (CLEAR), designed by the FDS Team, for the calculation of spallation reactions in the target.

  9. Reactive transport benchmarks for subsurface environmental simulation

    Energy Technology Data Exchange (ETDEWEB)

    Steefel, Carl I.; Yabusaki, Steven B.; Mayer, K. U.

    2015-06-01

    Over the last 20 years, we have seen firsthand the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of subsurface applications it is being used to address. There is a growing reliance on reactive transport modeling (RTM) to address some of the most compelling issues facing our planet: climate change, nuclear waste management, contaminant remediation, and pollution prevention. While these issues are motivating the development of new and improved capabilities for subsurface environmental modeling using RTM (e.g., biogeochemistry from cell-scale physiology to continental-scale terrestrial ecosystems, nonisothermal multiphase conditions, coupled geomechanics), there remain longstanding challenges in characterizing the natural variability of hydrological, biological, and geochemical properties in subsurface environments and limited success in transferring models between sites and across scales. An equally important trend over the last 20 years is the evolution of modeling from a service sought out after data has been collected to a multifaceted research approach that provides (1) an organizing principle for characterization and monitoring activities; (2) a systematic framework for identifying knowledge gaps, developing and integrating new knowledge; and (3) a mechanistic understanding that represents the collective wisdom of the participating scientists and engineers. There are now large multidisciplinary projects where the research approach is model-driven, and the principal product is a holistic predictive simulation capability that can be used as a test bed for alternative conceptualizations of processes, properties, and conditions. Much of the future growth and expanded role for RTM will depend on its continued ability to exploit technological advancements in the earth and environmental sciences. Advances in measurement technology, particularly in molecular biology (genomics), isotope fractionation, and high

  10. A comprehensive benchmarking system for evaluating global vegetation models

    Directory of Open Access Journals (Sweden)

    D. I. Kelley

    2013-05-01

    Full Text Available We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). In general, the SDBM performs better than either of the DGVMs. It reproduces independent measurements of net primary production (NPP) but underestimates the amplitude of the observed CO2 seasonal cycle. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.

  11. Simulating diffusion processes in discontinuous media: Benchmark tests

    Science.gov (United States)

    Lejay, Antoine; Pichot, Géraldine

    2016-06-01

    We present several benchmark tests for Monte Carlo methods simulating diffusion in one-dimensional discontinuous media. These benchmark tests aim at studying the potential bias of the schemes and their impact on the estimation of micro- or macroscopic quantities (repartition of masses, fluxes, mean residence time, …). These benchmark tests are backed by a statistical analysis to filter out the bias from the unavoidable Monte Carlo error. We apply them to four different algorithms. The results of the numerical tests give valuable insight into the fine behavior of these schemes, as well as rules for choosing between them.
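
    To illustrate the kind of bias these tests are designed to expose, the minimal Python sketch below runs a naive Euler random walk across a diffusivity discontinuity and compares the estimated mass repartition with the exact value for divergence-form (Fickian) diffusion with concentration continuity; the parameters are illustrative assumptions, not taken from the paper.

      # Naive Euler walk in 1-D with diffusivity jumping at x = 0. Naive schemes
      # are biased at the discontinuity; the repartition of mass across the
      # interface exposes that bias. Illustrative parameters only.
      import numpy as np

      rng = np.random.default_rng(0)
      D_minus, D_plus = 1.0, 0.1          # diffusivity for x < 0 and for x > 0
      dt, T, n_paths = 1e-3, 1.0, 100_000

      x = np.zeros(n_paths)               # all particles start at the interface
      for _ in range(int(T / dt)):
          D = np.where(x < 0.0, D_minus, D_plus)
          x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n_paths)

      # Exact repartition for divergence-form diffusion with concentration
      # continuity at the interface: P(X_T > 0) = sqrt(D+)/(sqrt(D+) + sqrt(D-))
      p_exact = np.sqrt(D_plus) / (np.sqrt(D_plus) + np.sqrt(D_minus))
      print(f"naive estimate of P(X_T > 0): {np.mean(x > 0):.3f}; exact: {p_exact:.3f}")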

  12. Discussion of OECD LWR Uncertainty Analysis in Modelling Benchmark

    International Nuclear Information System (INIS)

    The demand for best-estimate calculations in nuclear reactor design and safety evaluations has increased in recent years. Uncertainty quantification has been highlighted as part of best-estimate calculations. The modelling aspects of uncertainty and sensitivity analysis are to be further developed and validated on scientific grounds in support of their performance and application to multi-physics reactor simulations. The Organization for Economic Co-operation and Development (OECD) / Nuclear Energy Agency (NEA) Nuclear Science Committee (NSC) has endorsed the creation of an Expert Group on Uncertainty Analysis in Modelling (EGUAM). Within the framework of the activities of EGUAM/NSC, the OECD/NEA initiated the Benchmark for Uncertainty Analysis in Modelling for Design, Operation, and Safety Analysis of Light Water Reactors (OECD LWR UAM benchmark). The general objective of the benchmark is to propagate the predictive uncertainties of code results through complex coupled multi-physics and multi-scale simulations. The benchmark is divided into three phases, with Phase I highlighting the uncertainty propagation in stand-alone neutronics calculations, while Phases II and III focus on uncertainty analysis of the reactor core and system, respectively. This paper discusses the progress made in the Phase I calculations, the specifications for Phase II, and the upcoming challenges in defining the Phase III exercises. The main challenges of applying uncertainty quantification to complex code systems, in particular to time-dependent coupled-physics models, are the large computational burden and the use of non-linear models (expected due to the physics coupling). (authors)

  13. Shear Strength Measurement Benchmarking Tests for K Basin Sludge Simulants

    Energy Technology Data Exchange (ETDEWEB)

    Burns, Carolyn A.; Daniel, Richard C.; Enderlin, Carl W.; Luna, Maria; Schmidt, Andrew J.

    2009-06-10

    Equipment development and demonstration testing for sludge retrieval is being conducted by the K Basin Sludge Treatment Project (STP) at the MASF (Maintenance and Storage Facility) using sludge simulants. In testing performed at the Pacific Northwest National Laboratory (under contract with the CH2M Hill Plateau Remediation Company), the performance of the Geovane instrument was successfully benchmarked against the M5 Haake rheometer using a series of simulants with shear strengths (τ) ranging from about 700 to 22,000 Pa (shaft corrected). Operating steps for obtaining consistent shear strength measurements with the Geovane instrument during the benchmark testing were refined and documented.

  14. Simulator for SUPO, a Benchmark Aqueous Homogeneous Reactor (AHR)

    Energy Technology Data Exchange (ETDEWEB)

    Klein, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Determan, John C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-10-14

    A simulator has been developed for SUPO (Super Power), an aqueous homogeneous reactor (AHR) that operated at Los Alamos National Laboratory (LANL) from 1951 to 1974. During that period SUPO accumulated approximately 600,000 kWh of operation. It is considered the benchmark for steady-state operation of an AHR. The SUPO simulator was developed using the process that resulted in a simulator for an accelerator-driven subcritical system, which has been previously reported.

  15. Simulator for SUPO, a Benchmark Aqueous Homogeneous Reactor (AHR)

    International Nuclear Information System (INIS)

    A simulator has been developed for SUPO (Super Power), an aqueous homogeneous reactor (AHR) that operated at Los Alamos National Laboratory (LANL) from 1951 to 1974. During that period SUPO accumulated approximately 600,000 kWh of operation. It is considered the benchmark for steady-state operation of an AHR. The SUPO simulator was developed using the process that resulted in a simulator for an accelerator-driven subcritical system, which has been previously reported.

  16. POLCA-T Neutron Kinetics Model Benchmarking

    OpenAIRE

    Kotchoubey, Jurij

    2015-01-01

    The demand for computational tools capable of reliably predicting the behavior of a nuclear reactor core in a variety of static and dynamic conditions inevitably requires a proper qualification of these tools for their intended purposes. One of the qualification methods is verification of the code in question, whereby the correctness of the applied model as well as its flawless implementation in the code are scrutinized. The present work is concerned with benchmarking as a ...

  17. Benchmark of Neutronics and Thermal-hydraulics Coupled Simulation program NTC on beam interruptions in XADS

    International Nuclear Information System (INIS)

    Highlights: • A Neutronics and Thermal-hydraulics Coupled code is developed for transient analysis. • The spatial kinetics model was employed in the benchmark. • The simulation accuracy of NTC is demonstrated by the benchmark. - Abstract: The Neutronics and Thermal-hydraulics Coupled Simulation program (NTC), developed by the FDS Team, is a code for the transient analysis of advanced reactors. To investigate the capability and calculation correctness of NTC for transient simulation, a benchmark on beam interruptions in an 80 MWth LBE-cooled and MOX-fuelled experimental accelerator-driven sub-critical system (XADS) was carried out with NTC. The benchmark on beam interruptions used in this paper was developed by the OECD/NEA Working Party on Scientific Issues in Partitioning and Transmutation (WPPT). The calculation model had minimal phenomenological and computational complexity: a simple model (single-fuel-channel thermal-hydraulics) of the average fuel pin corresponding to the beginning-of-life (BOL) fuel condition. The benchmark was designed to investigate the temperature and power responses caused by beam interruptions of different durations, aiming at a comparative assessment of NTC against other computational methods. A comparison of the NTC results with ten other sets of temperature and power results showed good agreement.

  18. Towards a Benchmark Suite for Modelica Compilers: Large Models

    OpenAIRE

    Frenkel, Jens; Schubert, Christian; Kunze, Günter; Fritzson, Peter; Sjölund, Martin; Pop, Adrian

    2011-01-01

    The paper presents a contribution to a Modelica benchmark suite. Basic ideas for a tool-independent benchmark suite based on Python scripting are given, along with models for testing the performance of Modelica compilers on large systems of equations. The automation of running the benchmark suite is demonstrated, followed by a selection of benchmark results to determine the current limits of Modelica tools and how they scale for an increasing number of equations.

  19. Holistic simulation of geotechnical installation processes: benchmarks and simulations

    CERN Document Server

    2016-01-01

    This book examines in detail the entire process involved in implementing geotechnical projects, from a well-defined initial stress and deformation state, to the completion of the installation process.   The individual chapters provide the fundamental knowledge needed to effectively improve soil-structure interaction models. Further, they present the results of theoretical fundamental research on suitable constitutive models, contact formulations, and efficient numerical implementations and algorithms. Applications of fundamental research on boundary value problems are also considered in order to improve the implementation of the theoretical models developed. Subsequent chapters highlight parametric studies of the respective geotechnical installation process, as well as elementary and large-scale model tests under well-defined conditions, in order to identify the most essential parameters for optimizing the process. The book provides suitable methods for simulating boundary value problems in connection with g...

  20. Benchmarking an Unstructured-Grid Model for Tsunami Current Modeling

    Science.gov (United States)

    Zhang, Yinglong J.; Priest, George; Allan, Jonathan; Stimely, Laura

    2016-06-01

    We present model results derived from a tsunami current benchmarking workshop held by the NTHMP (National Tsunami Hazard Mitigation Program) in February 2015. Modeling was undertaken using our own 3D unstructured-grid model that has been previously certified by the NTHMP for tsunami inundation. Results for two benchmark tests are described here, including: (1) vortex structure in the wake of a submerged shoal and (2) impact of tsunami waves on Hilo Harbor in the 2011 Tohoku event. The modeled current velocities are compared with available lab and field data. We demonstrate that the model is able to accurately capture the velocity field in the two benchmark tests; in particular, the 3D model gives a much more accurate wake structure than the 2D model for the first test, with the root-mean-square error and mean bias no more than 2 cm s-1 and 8 mm s-1, respectively, for the modeled velocity.
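
    For clarity, the two misfit statistics quoted above are the standard ones; the short Python sketch below computes them for co-located modeled and observed current speeds (the values are placeholders, not the benchmark data).

      # Root-mean-square error and signed mean bias of model vs. observation.
      import numpy as np

      u_obs = np.array([0.12, 0.30, 0.25, 0.08])      # observed speeds, m/s (placeholder)
      u_model = np.array([0.10, 0.33, 0.24, 0.09])    # modeled speeds, m/s (placeholder)
      rmse = np.sqrt(np.mean((u_model - u_obs) ** 2))
      bias = np.mean(u_model - u_obs)
      print(f"RMSE = {rmse * 100:.1f} cm/s, mean bias = {bias * 1000:.1f} mm/s")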

  1. Empirical policy functions as benchmarks for evaluation of dynamic models

    OpenAIRE

    Bazdresch, Santiago; Kahn, R. Jay; Whited, Toni

    2011-01-01

    We describe a set of model-dependent statistical benchmarks that can be used to estimate and evaluate dynamic models of firms' investment and financing. The benchmarks characterize the empirical counterparts of the models' policy functions. These empirical policy functions (EPFs) are intuitively related to the corresponding model, their features can be estimated very easily and robustly, and they describe economically important aspects of firms' dynamic behavior. We calculate the benchmarks f...

  2. Benchmark of Space Charge Simulations and Comparison with Experimental Results for High Intensity, Low Energy Accelerators

    CERN Document Server

    Cousineau, Sarah M

    2005-01-01

    Space charge effects are a major contributor to beam halo and emittance growth leading to beam loss in high intensity, low energy accelerators. As future accelerators strive towards unprecedented levels of beam intensity and beam loss control, a more comprehensive understanding of space charge effects is required. A wealth of simulation tools has been developed for modeling beams in linacs and rings, and with the growing availability of high-speed computing systems, computationally expensive problems that were inconceivable a decade ago are now being handled with relative ease. This has opened the field for realistic simulations of space charge effects, including detailed benchmarks with experimental data. A great deal of effort is being focused in this direction, and several recent benchmark studies have produced remarkably successful results. This paper reviews the achievements in space charge benchmarking in the last few years, and discusses the challenges that remain.

  3. Benchmarking of a Markov multizone model of contaminant transport.

    Science.gov (United States)

    Jones, Rachael M; Nicas, Mark

    2014-10-01

    A Markov chain model previously applied to the simulation of advection and diffusion processes of gaseous contaminants is extended to three-dimensional transport of particulates in indoor environments. The model framework and assumptions are described. The performance of the Markov model is benchmarked against simple conventional models of contaminant transport. The Markov model is able to replicate elutriation predictions of particle deposition with distance from a point source, and the stirred settling of respirable particles. Comparisons with turbulent eddy diffusion models indicate that the Markov model exhibits numerical diffusion in the first seconds after release, but over time accurately predicts mean lateral dispersion. The Markov model exhibits some instability with grid-length aspect ratio when turbulence is incorporated by way of the turbulent diffusion coefficient and advection is present. However, the magnitude of the prediction error may be tolerable for some applications and can be avoided by incorporating turbulence by way of fluctuating velocity (e.g. turbulence intensity). PMID:25143517
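
    The essence of the Markov formulation is that contaminant mass distributed over discrete zones is advanced by repeated multiplication with a one-step transition matrix. The Python sketch below shows that mechanic for an arbitrary 3-zone system with an absorbing sink; the paper derives its transition probabilities from advection, turbulent diffusion and particle settling, which this toy matrix does not attempt to reproduce.

      # Markov-chain multizone transport: m[j] after one step is sum_i m[i]*P[i, j].
      # The matrix entries below are arbitrary illustrations, not the paper's.
      import numpy as np

      # P[i, j] = probability that mass in zone i moves to zone j in one step;
      # the last column is an absorbing sink (deposited or exhausted mass).
      P = np.array([
          [0.90, 0.08, 0.00, 0.02],   # zone 0
          [0.05, 0.85, 0.08, 0.02],   # zone 1
          [0.00, 0.06, 0.89, 0.05],   # zone 2 (near floor: higher deposition)
          [0.00, 0.00, 0.00, 1.00],   # sink is absorbing
      ])
      assert np.allclose(P.sum(axis=1), 1.0)   # rows must be probability vectors

      m = np.array([1.0, 0.0, 0.0, 0.0])       # unit release in zone 0
      for _ in range(600):                     # march 600 time steps
          m = m @ P
      print("airborne fraction per zone:", m[:3].round(4), "| removed:", round(m[3], 4))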

  4. A benchmark on computational simulation of a CT fracture experiment

    International Nuclear Information System (INIS)

    For a better understanding of the fracture behavior of cracked welds in piping, FRAMATOME, EDF and CEA have launched an important analytical research program. This program is mainly based on the analysis of the effects of the geometrical parameters (the crack size and the welded joint dimensions) and the yield strength ratio on the fracture behavior of several cracked configurations. Two approaches have been selected for the fracture analyses: on one hand, the global approach based on the concept of crack driving force J, and on the other hand, a local approach of ductile fracture. In this approach, crack initiation and growth are modelled by the nucleation, growth and coalescence of cavities in front of the crack tip. The model selected in this study estimates only the growth of the cavities, using the Rice and Tracey relationship. The present study deals with a benchmark on computational simulation of CT fracture experiments using three computer codes: ALIBABA, developed by EDF; the CEA code CASTEM 2000; and the FRAMATOME code SYSTUS. The paper is split into three parts. First, the authors present the experimental procedure for high-temperature toughness testing of two CT specimens taken from a welded pipe characteristic of pressurized water reactor primary piping. Second, considerations are outlined about the finite element analysis and the application procedure. A detailed description is given of the boundary and loading conditions, the mesh characteristics, the numerical scheme involved and the void growth computation. Finally, the comparisons between numerical and experimental results are presented up to crack initiation, the tearing process not being taken into account in the present study. The variations of J and of the local variables used to estimate the damage around the crack tip (triaxiality and hydrostatic stresses, plastic deformations, void growth ...) are computed as a function of the increasing load.
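
    For reference, the Rice and Tracey void-growth relationship used in the local approach above is commonly written, and integrated along the loading path, as follows (standard textbook form, not quoted from the paper):

      \frac{\dot{R}}{R} = 0.283\,\exp\!\left(\frac{3\sigma_m}{2\sigma_{eq}}\right)\dot{\bar{\varepsilon}}^{p}
      \qquad\Longrightarrow\qquad
      \ln\frac{R}{R_0} = 0.283\int_0^{\bar{\varepsilon}^{p}}\exp\!\left(\frac{3\sigma_m}{2\sigma_{eq}}\right)\mathrm{d}\bar{\varepsilon}^{p}

    where R is the current cavity radius, σ_m the hydrostatic stress, σ_eq the von Mises equivalent stress and ε̄^p the equivalent plastic strain; ductile crack initiation is then typically associated with the growth ratio R/R_0 reaching a critical value calibrated on test specimens.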

  5. Benchmark for evaluation and validation of reactor simulations (BEAVRS)

    International Nuclear Information System (INIS)

    Advances in parallel computing have made possible the development of high-fidelity tools for the design and analysis of nuclear reactor cores, and such tools require extensive verification and validation. This paper introduces BEAVRS, a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading patterns, and numerous in-vessel components. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from fifty-eight instrumented assemblies. Initial comparisons between calculations performed with MIT's OpenMC Monte Carlo neutron transport code and measured cycle 1 HZP test data are presented, and these results display an average deviation of approximately 100 pcm for the various critical configurations and control rod worth measurements. Computed HZP radial fission detector flux maps also agree reasonably well with the available measured data. All results indicate that this benchmark will be extremely useful in validation of coupled-physics codes and uncertainty quantification of in-core physics computational predictions. The detailed BEAVRS specification and its associated data package is hosted online at the MIT Computational Reactor Physics Group web site (http://crpg.mit.edu/), where future revisions and refinements to the benchmark specification will be made publicly available. (authors)

  6. Benchmarking novel approaches for modelling species range dynamics.

    Science.gov (United States)

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E

    2016-08-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results confirm the clear merit of using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches

  7. Benchmarking novel approaches for modelling species range dynamics

    Science.gov (United States)

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H.; Moore, Kara A.; Zimmermann, Niklaus E.

    2016-01-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species’ range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results confirm the clear merit of using dynamic approaches for modelling species’ response to climate change but also emphasise several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches

  8. Towards benchmarking an in-stream water quality model

    OpenAIRE

    Boorman, D. B.

    2007-01-01

    A method of model evaluation is presented which utilises a comparison with a benchmark model. The proposed benchmarking concept is one that can be applied to many hydrological models but, in this instance, is implemented in the context of an in-stream water quality model. The benchmark model is defined in such a way that it is easily implemented within the framework of the test model, i.e. the approach relies on two applications of the same model code rather than the application of two separa...

  9. Coupled Climate Model Appraisal: a Benchmark for Future Studies

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, T J; AchutaRao, K; Bader, D; Covey, C; Doutriaux, C M; Fiorino, M; Gleckler, P J; Sperber, K R; Taylor, K E

    2005-08-22

    The Program for Climate Model Diagnosis and Intercomparison (PCMDI) has produced an extensive appraisal of simulations of present-day climate by eleven representative coupled ocean-atmosphere general circulation models (OAGCMs) which were developed during the period 1995-2002. Because projections of potential future global climate change are derived chiefly from OAGCMs, there is a continuing need to test the credibility of these predictions by evaluating model performance in simulating the historically observed climate. For example, such an evaluation is an integral part of the periodic assessments of climate change that are reported by the Intergovernmental Panel on Climate Change. The PCMDI appraisal thus provides a useful benchmark for future studies of this type. The appraisal mainly analyzed multi-decadal simulations of present-day climate by models that employed diverse representations of climate processes for atmosphere, ocean, sea ice, and land, as well as different techniques for coupling these components (see Table). The selected models were a subset of those entered in phase 2 of the Coupled Model Intercomparison Project (CMIP2, Covey et al. 2003). For these "CMIP2+ models", more atmospheric or oceanic variables were provided than the minimum requirements for participation in CMIP2. However, the appraisal only considered those climate variables that were supplied from most of the CMIP2+ models. The appraisal focused on three facets of the simulations of current global climate: (1) secular trends in simulation time series which would be indicative of a problematic "coupled climate drift"; (2) comparisons of temporally averaged fields of simulated atmospheric and oceanic climate variables with available observational climatologies; and (3) correspondences between simulated and observed modes of climatic variability. Highlights of these climatic aspects manifested by different CMIP2+ simulations are briefly

  10. The NeT bead-on-plate benchmark for weld residual stress simulation

    International Nuclear Information System (INIS)

    Fracture mechanics based structural integrity assessments of pressure vessels and piping are widely used to support the economic and safe management of operating engineering plant. Assessments of defects at weldments can be highly sensitive to the through-thickness distribution of residual stress assumed in the fracture calculations. Increasingly, finite element modelling approaches are applied to predict residual stress in engineering structures arising from the welding process. However, such methods are complex and require analysts to make many assumptions and approximations. Guidelines covering the calculation of residual stresses in weldments are being prepared for inclusion in the R6 defect assessment procedure and will be accompanied by a series of validation benchmarks. The benchmarks will allow analysts to evaluate and improve the accuracy of weld modelling approaches and assess their suitability for use in fracture assessments. The first part of this paper presents an austenitic stainless steel bead-on-plate weldment validation benchmark based on the extensive round robin measurements performed by members of the European NeT project. The benchmark defines thermal and residual stress performance targets against which a weld simulation approach can be evaluated. Guidance is also provided on how to validate predicted residual stress profiles for use in a high integrity fracture assessment. The second part of this paper provides a commentary on how the weld simulation accuracy and performance targets have been established

  11. Microworlds, Simulators, and Simulation: Framework for a Benchmark of Human Reliability Data Sources

    Energy Technology Data Exchange (ETDEWEB)

    Ronald Boring; Dana Kelly; Carol Smidts; Ali Mosleh; Brian Dyre

    2012-06-01

    In this paper, we propose a method to improve the data basis of human reliability analysis (HRA) by extending the data sources used to inform HRA methods. Currently, most HRA methods are based on limited empirical data, and efforts to enhance the empirical basis behind HRA methods have not yet yielded significant new data. This shortage of quality data is partly attributable to the data sources used. Data have been derived from unrelated industries, from infrequent risk-significant events, or from costly control room simulator studies. We propose a benchmark of four data sources: a simplified microworld simulator using unskilled student operators, a full-scope control room simulator using skilled student operators, a full-scope control room simulator using licensed commercial operators, and a human performance modeling and simulation system using virtual operators. The goal of this research is to compare findings across the data sources to determine to what extent data may be used and generalized from cost-effective sources.

  12. Microbially Mediated Kinetic Sulfur Isotope Fractionation: Reactive Transport Modeling Benchmark

    Science.gov (United States)

    Wanner, C.; Druhan, J. L.; Cheng, Y.; Amos, R. T.; Steefel, C. I.; Ajo Franklin, J. B.

    2014-12-01

    Microbially mediated sulfate reduction is a ubiquitous process in many subsurface systems. Isotopic fractionation is characteristic of this anaerobic process, since sulfate-reducing bacteria (SRB) favor the reduction of the lighter sulfate isotopologue (³²SO₄²⁻) over the heavier isotopologue (³⁴SO₄²⁻). Detection of isotopic shifts has been utilized as a proxy for the onset of sulfate reduction in subsurface systems such as oil reservoirs and aquifers undergoing uranium bioremediation. Reactive transport modeling (RTM) of kinetic sulfur isotope fractionation has been applied to field and laboratory studies. These RTM approaches employ different mathematical formulations in the representation of kinetic sulfur isotope fractionation. In order to test the various formulations, we propose a benchmark problem set for the simulation of kinetic sulfur isotope fractionation during microbially mediated sulfate reduction. The benchmark problem set comprises four problem levels and is based on a recent laboratory column experimental study of sulfur isotope fractionation. Pertinent processes impacting sulfur isotopic composition, such as microbial sulfate reduction and dispersion, are included in the problem set. To date, the participating RTM codes are CrunchTope, TOUGHREACT, MIN3P and The Geochemist's Workbench. Preliminary results from the various codes show reasonable agreement for the problem levels simulating sulfur isotope fractionation in 1D.
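
    As a point of reference for the formulations being benchmarked, the closed-system limiting behavior that kinetic sulfur fractionation models are usually checked against is Rayleigh distillation (standard relations, not quoted from the benchmark specification):

      \frac{R}{R_0} = f^{\,\alpha - 1}
      \qquad\text{and}\qquad
      \delta^{34}\mathrm{S} \approx \delta^{34}\mathrm{S}_0 + \varepsilon\,\ln f

    where R is the ³⁴S/³²S ratio of the remaining sulfate, f the fraction of sulfate remaining, α < 1 the kinetic fractionation factor of the SRB, and ε = (α − 1)·1000‰ the enrichment factor; in the RTM codes the same behavior emerges from treating the two isotopologues as separate kinetic species with slightly different rate constants.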

  13. Benchmarking Simulation of Long Term Station Blackout Events

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung Kyum; Lee, John C. [POSTECH, Pohang (Korea, Republic of); Fynan, Douglas A.; Lee, John C. [Univ. of Michigan, Ann Arbor (United States)

    2013-05-15

    The importance of passive cooling systems has been highlighted by station blackout (SBO) events. The turbine-driven auxiliary feedwater (TD-AFW) system is the only passive cooling system for steam generators (SGs) in current PWRs. During SBO events, all alternating current (AC) and direct current (DC) power is lost, and the SG water levels can rise too high. In that case, the turbine blades could be degraded so that the SGs can no longer be cooled. To prevent this kind of degradation, an improved TD-AFW system should be installed in current PWRs, especially OPR 1000 plants. A long-term station blackout (LTSBO) scenario based on the improved TD-AFW system has been benchmarked as a reference input file. The following task is a safety analysis to identify important parameters that cause the peak cladding temperature (PCT) to vary. This task has been initiated with the benchmarked input deck applied to the State-of-the-Art Reactor Consequence Analyses (SOARCA) report. The point of the improved TD-AFW system is to control the SG water level by using an auxiliary battery charged by a generator connected to the auxiliary turbine. However, this battery could also be disconnected from the generator. To analyze the uncertainties associated with failure of the auxiliary battery, simulations of time-dependent TD-AFW failure have been performed. In addition to the cases simulated here, some valves available during SBO events (e.g., the pressurizer safety valve) could be important parameters for assessing uncertainties in the estimated PCTs. The results for these parameters will be included in a future study, together with results for leakage of the RCP seals. After the simulation of several transient cases, the alternating conditional expectation (ACE) algorithm will be used to derive functional relationships between the PCT and several system parameters.

  14. Theory-Motivated Benchmark Models and Superpartners at the Tevatron

    OpenAIRE

    Kane, G. L.; Lykken, J.; Mrenna, Stephen; Nelson, Brent D.; Wang, Lian-Tao; Wang, Ting T.

    2002-01-01

    Recently published benchmark models have contained rather heavy superpartners. To test the robustness of this result, several benchmark models have been constructed based on theoretically well-motivated approaches, particularly string-based ones. These include variations on anomaly and gauge-mediated models, as well as gravity mediation. The resulting spectra often have light gauginos that are produced in significant quantities at the Tevatron collider, or will be at a 500 GeV linear collider...

  15. Development of interfacial area transport equation - modeling and experimental benchmark

    International Nuclear Information System (INIS)

    A dynamic treatment of interfacial area concentration has been studied over the last decade by employing the interfacial area transport equation. When coupled with the two-fluid model, the interfacial area transport equation replaces the flow regime dependent correlations for interfacial area concentration and eliminates potential artificial bifurcation or numerical oscillations stemming from these static correlations. An extensive database has been established to evaluate the model under various two-phase flow conditions. These include adiabatic and heated conditions, vertical and horizontal flow orientations, round, rectangular, annulus and 8×8 rod bundle channel geometries, and normal-gravity and simulated reduced-gravity conditions. This paper reviews the current state-of-the-art in the development of the interfacial area transport equation, available experimental databases and 1D and 3D benchmarking work of the interfacial area transport equation. (author)

  16. Benchmarking of mobile network simulator, with real network data

    OpenAIRE

    Näslund, Lars

    2007-01-01

    In the radio network simulator used in this thesis, the radio network of a specific operator is modeled. The real network model in the simulator uses a 3-D building database, realistic site data (antenna types, feeder loss, ...) and parameter settings from the field. In addition, traffic statistics are collected from the customer’s network for the modeled area. The traffic payload is used as input to the simulator and creates an inhomogeneous traffic distribution over the area. One of the outputs ...

  17. Benchmarking of CAD-based SuperMC with ITER benchmark model

    International Nuclear Information System (INIS)

    Neutronics design and analysis of fusion reactors is highly complex, mainly owing to the geometry and the physical processes of neutron transport. The great challenges posed by advanced nuclear energy systems have promoted the development of the Super Monte Carlo Calculation Program for Nuclear and Radiation Process (SuperMC). The ITER benchmark model, a verification model created by the ITER International Organization, was used for benchmarking the latest SuperMC, which can perform CAD-based neutron and photon transport calculations. The calculation results of SuperMC for the first wall, divertor cassettes, inboard toroidal field coils and equatorial port were compared with the results of MCNP, and the results were consistent. The intelligence and advantages of SuperMC in automatic conversion from a complicated CAD model to a full-format calculation model, complex source construction and its geometry description method were demonstrated. The correctness of neutron and photon transport in the energy range relevant to fusion reactors was also demonstrated.

  18. Enthalpy benchmark experiments for numerical ice sheet models

    Directory of Open Access Journals (Sweden)

    T. Kleiner

    2014-06-01

    Full Text Available We present benchmark experiments to test the implementation of enthalpy and the corresponding boundary conditions in numerical ice sheet models. The first experiment tests particularly the functionality of the boundary condition scheme and the basal melt rate calculation during transient simulations. The second experiment addresses the steady-state enthalpy profile and the resulting position of the cold–temperate transition surface (CTS). For both experiments we assume ice flow in a parallel-sided slab decoupled from the thermal regime. Since we impose several assumptions on the experiment design, analytical solutions can be formulated for the proposed numerical experiments. We compare simulation results obtained by three different ice-flow models with these analytical solutions. The models agree well with the analytical solutions if the change in conductivity between cold and temperate ice is properly considered in the model. In particular, the enthalpy gradient at the cold side of the CTS vanishes in the limit of vanishing conductivity in the temperate ice part, as required by the physical jump conditions at the CTS.

  19. A computer code package for Monte Carlo photon-electron transport simulation Comparisons with experimental benchmarks

    International Nuclear Information System (INIS)

    A computer code package (PTSIM) for particle transport Monte Carlo simulation was developed using object-oriented techniques of design and programming. A flexible system for the simulation of coupled photon-electron transport, facilitating the development of efficient simulation applications, was obtained. For photons, Compton and photo-electric effects, pair production and Rayleigh interactions are simulated, while for electrons a class II condensed history scheme was considered, in which catastrophic interactions (Moeller electron-electron interaction, bremsstrahlung, etc.) are treated in detail and all other interactions with reduced individual effect on electron histories are grouped together using the continuous slowing down approximation and energy straggling theories. Electron angular straggling is simulated using Moliere theory or a mixed model in which scatters at large angles are treated as distinct events. Comparisons with experimental benchmarks for electron transmission, bremsstrahlung emission energy and angular spectra, and dose calculations are presented.

  20. List of benchmarks for simulation tools of steam-water two-phase flows

    International Nuclear Information System (INIS)

    A physical-numerical benchmark matrix was drawn up in the context of the ECUME co-development action. Its purpose is to test the capabilities required of the numerical methods to be used in the codes of the future, which will benefit from advanced physics simulations. This benchmark matrix is to be used for each numerical method in order to answer the following questions: What range of two-phase flow fields can the combination of physics model and numerical scheme handle? What is the accuracy of the scheme for each type of physics situation? What is the numerical efficiency (computing time) of the numerical scheme for each type of physics situation? (author)

  1. Benchmarking a Visual-Basic based multi-component one-dimensional reactive transport modeling tool

    Science.gov (United States)

    Torlapati, Jagadish; Prabhakar Clement, T.

    2013-01-01

    We present the details of a comprehensive numerical modeling tool, RT1D, which can be used for simulating biochemical and geochemical reactive transport problems. The code can be run within the standard Microsoft EXCEL Visual Basic platform, and it does not require any additional software tools. The code can be easily adapted by others for simulating different types of laboratory-scale reactive transport experiments. We illustrate the capabilities of the tool by solving five benchmark problems with varying levels of reaction complexity. These literature-derived benchmarks are used to highlight the versatility of the code for solving a variety of practical reactive transport problems. The benchmarks are described in detail to provide a comprehensive database, which can be used by model developers to test other numerical codes. The VBA code presented in the study is a practical tool that can be used by laboratory researchers for analyzing both batch and column datasets within an EXCEL platform.
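
    The record above describes the tool rather than its numerics, so as an illustration of the operator-splitting structure that one-dimensional reactive transport codes of this kind typically use (a transport sweep followed by a local reaction update each time step), here is a minimal Python sketch with a single solute undergoing first-order decay. This is not the RT1D code itself (which is VBA inside EXCEL); the scheme and parameters are assumptions.

      # Operator splitting for 1-D reactive transport: explicit upwind advection
      # plus centered dispersion, then a local first-order decay update.
      # Scheme and parameters are illustrative, not from RT1D.
      import numpy as np

      nx, L = 100, 1.0                             # number of cells, column length (m)
      dx = L / nx
      v, D, k = 1e-4, 1e-6, 1e-4                   # velocity (m/s), dispersion (m2/s), decay (1/s)
      dt = 0.25 * min(dx / v, dx * dx / (2 * D))   # respect advective and diffusive limits
      c = np.zeros(nx)
      c_in = 1.0                                   # fixed inlet concentration

      for _ in range(20_000):
          cl = np.concatenate(([c_in], c[:-1]))    # left neighbours (inlet ghost cell)
          cr = np.concatenate((c[1:], [c[-1]]))    # right neighbours (zero-gradient outlet)
          c = c - v * dt / dx * (c - cl) + D * dt / dx**2 * (cr - 2 * c + cl)  # transport step
          c *= np.exp(-k * dt)                     # reaction step, integrated exactly over dt

      print("outlet concentration:", round(float(c[-1]), 4))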

  2. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano; Ferrara, Liberato; Krenzer, Knut; Mechtcherine, Viktor; Shyshko, Sergiy; Skocec, Jan; Spangenberg, Jon; Svec, Oldrich; Thrane, Lars Nyholm; Vasilic, Ksenija

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we compare numerical predictions of the concrete sample final shape for these two benchmark flows obtained by various research teams around the world using various numerical techniques. Our results show that all numerical techniques compared here give very similar results suggesting that numerical...

  3. Benchmarking Biological Nutrient Removal in Wastewater Treatment Plants:Influence of Mathematical Model Assumptions

    OpenAIRE

    Flores-Alsina, Xavier; Gernaey, Krist; Jeppsson, Ulf

    2011-01-01

    This paper examines the effect of different model assumptions when describing biological nutrient removal (BNR) by the activated sludge models (ASM) 1, 2d & 3. The performance of a nitrogen removal (WWTP1) and a combined nitrogen and phosphorus removal (WWTP2) benchmark wastewater treatment plant was compared for a series of model assumptions. Three different model approaches describing BNR are considered. In the reference case, the original model implementations are used to simulate WWTP...

  4. Proxy benchmarks for intercomparison of 8.2 ka simulations

    Directory of Open Access Journals (Sweden)

    C. Morrill

    2012-08-01

    Full Text Available The Paleoclimate Modelling Intercomparison Project (PMIP3) now includes the 8.2 ka event as a test of model sensitivity to North Atlantic freshwater forcing. To provide benchmarks for intercomparison, we compiled and analyzed high-resolution records spanning this event. Two previously-described anomaly patterns that emerge are cooling around the North Atlantic and drier conditions in the Northern Hemisphere tropics. Newer to this compilation are more robustly-defined wetter conditions in the Southern Hemisphere tropics and regionally-limited warming in the Southern Hemisphere. Most anomalies around the globe lasted on the order of 100 to 150 yr. More quantitative reconstructions are now available and indicate cooling of 1.0 to 1.2 °C and a ~20% decrease in precipitation in parts of Europe, as well as spatial gradients in δ18O from the high to low latitudes. Unresolved questions remain about the seasonality of the climate response to freshwater forcing and the extent to which the bipolar seesaw operated in the early Holocene.

  5. Proxy benchmarks for intercomparison of 8.2 ka simulations

    Directory of Open Access Journals (Sweden)

    C. Morrill

    2013-02-01

    Full Text Available The Paleoclimate Modelling Intercomparison Project (PMIP3) now includes the 8.2 ka event as a test of model sensitivity to North Atlantic freshwater forcing. To provide benchmarks for intercomparison, we compiled and analyzed high-resolution records spanning this event. Two previously-described anomaly patterns that emerge are cooling around the North Atlantic and drier conditions in the Northern Hemisphere tropics. Newer to this compilation are more robustly-defined wetter conditions in the Southern Hemisphere tropics and regionally-limited warming in the Southern Hemisphere. Most anomalies around the globe lasted on the order of 100 to 150 yr. More quantitative reconstructions are now available and indicate cooling of ~ 1 °C and a ~ 20% decrease in precipitation in parts of Europe as well as spatial gradients in δ18O from the high to low latitudes. Unresolved questions remain about the seasonality of the climate response to freshwater forcing and the extent to which the bipolar seesaw operated in the early Holocene.

  6. Large-Signal Model of Graphene Field-Effect Transistors -- Part II: Circuit Performance Benchmarking

    OpenAIRE

    Pasadas, Francisco; Jiménez, David

    2016-01-01

    This paper presents a circuit performance benchmarking exercise using the large-signal model of the graphene field-effect transistor reported in Part I of this two-part paper. To test the model, it has been implemented in a circuit simulator. Specifically, we have simulated a high-frequency amplifier, together with other circuits that take advantage of the ambipolarity of graphene, such as a frequency doubler, a radio-frequency subharmonic mixer and a multiplier phase detector. A variety of sim...

  7. TCSC impedance regulator applied to the second benchmark model

    Energy Technology Data Exchange (ETDEWEB)

    Hamel, J.P.; Dessaint, L.A. [Ecole de Technologie Superieure, Montreal, PQ (Canada). Dept. of Electrical Engineering; Champagne, R. [Ecole de Technologie Superieure, Montreal, PQ (Canada). Dept. of Software and IT Engineering; Pare, D. [Institut de Recherche d' Hydro-Quebec, Varennes, PQ (Canada)

    2008-07-01

    Due to the combination of electrical demand growth and the high cost of building new power transmission lines, series compensation is increasingly used in power systems all around the world. Series compensation has been proposed as a new way to transfer more power on existing lines. By adding series compensation to an existing line (a relatively small change), the power transfer can be increased significantly. One of the means used for line compensation is the addition of capacitive elements in series with the line. This paper presented a thyristor-controlled series capacitor (TCSC) model that used impedance as reference, had individual controls for each phase, included a linearization module and considered only the fundamental frequency for impedance computations, without using any filter. The model's dynamic behavior was validated by applying it to the second benchmark model for subsynchronous resonance (SSR). Simulation results from the proposed model, obtained using EMTP-RV and SimPowerSystems, were presented. It was concluded that SSR was mitigated by the proposed approach. 19 refs., 19 figs.

  8. Adapting benchmarking to project management : an analysis of project management processes, metrics, and benchmarking process models

    OpenAIRE

    Emhjellen, Kjetil

    1997-01-01

    Since the first publication on benchmarking in 1989 by Robert C. Camp of “Benchmarking: The search for Industry Best Practices that Lead to Superior Performance”, the improvement technique benchmarking has been established as an important tool in the process focused manufacturing or production environment. The use of benchmarking has expanded to other types of industry. Benchmarking has passed the doorstep and is now in early trials in the project and construction environment....

  9. SCALE Modeling of Selected Neutronics Test Problems within the OECD UAM LWR’s Benchmark

    OpenAIRE

    Luigi Mercatali; Kostadin Ivanov; Victor Hugo Sanchez

    2013-01-01

    The OECD UAM benchmark was launched in 2005 with the objective of determining the uncertainty in the simulation of Light Water Reactor (LWR) system calculations at all stages of the coupled reactor physics and thermal-hydraulics modeling. Within the framework of the “Neutronics Phase” of the benchmark, the solutions of some selected test cases at the cell physics and lattice physics levels are presented. The SCALE 6.1 code package has been used for the neutronics modeling of the selected exe...

  10. Towards Financial Cloud Framework - Modelling and Benchmarking of Financial Assets in Public and Private Clouds

    OpenAIRE

    Chang, Victor; Wills, Gary; de Roure, David

    2010-01-01

    Literature identifies two problems in clouds: (i) there are few financial clouds and (ii) portability of financial modelling from desktop to cloud is challenging. To address these two problems, we propose the Financial Cloud Framework (FCF), which contains business models, forecasting, sustainability, modelling, simulation and benchmarking of financial assets. We select Monte Carlo methods for pricing and the Black-Scholes model for risk analysis. Our objective is to demonstrate portability, spee...
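
    The pricing kernel named in this record is a classic benchmarkable one. A minimal Python sketch, assuming standard Black-Scholes dynamics and placeholder parameters (not the paper's), prices a European call by Monte Carlo and checks the result against the closed form:

      # European call: Monte Carlo under geometric Brownian motion vs. the
      # closed-form Black-Scholes price. Parameters are placeholders.
      import numpy as np
      from scipy.stats import norm

      S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.25, 1.0
      rng = np.random.default_rng(1)
      n = 1_000_000

      ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * rng.standard_normal(n))
      mc_price = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

      d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
      d2 = d1 - sigma * np.sqrt(T)
      bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

      print(f"Monte Carlo: {mc_price:.4f}   closed form: {bs_price:.4f}")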

  11. Model evaluation using a community benchmarking system for land surface models

    Science.gov (United States)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Kluzek, E. B.; Koven, C. D.; Randerson, J. T.

    2014-12-01

    Evaluation of atmosphere, ocean, sea ice, and land surface models is an important step in identifying deficiencies in Earth system models and developing improved estimates of future change. For the land surface and carbon cycle, the design of an open-source system has been an important objective of the International Land Model Benchmarking (ILAMB) project. Here we evaluated CMIP5 and CLM models using a benchmarking system that enables users to specify models, data sets, and scoring systems so that results can be tailored to specific model intercomparison projects. Our scoring system used information from four different aspects of global datasets, including climatological mean spatial patterns, seasonal cycle dynamics, interannual variability, and long-term trends. Variable-to-variable comparisons enable investigation of the mechanistic underpinnings of model behavior, and allow for some control of biases in model drivers. Graphics modules allow users to evaluate model performance at local, regional, and global scales. Use of modular structures makes it relatively easy for users to add new variables, diagnostic metrics, benchmarking datasets, or model simulations. Diagnostic results are automatically organized into HTML files, so users can conveniently share results with colleagues. We used this system to evaluate atmospheric carbon dioxide, burned area, global biomass and soil carbon stocks, net ecosystem exchange, gross primary production, ecosystem respiration, terrestrial water storage, evapotranspiration, and surface radiation from CMIP5 historical and ESM historical simulations. We found that the multi-model mean often performed better than many of the individual models for most variables. We plan to publicly release a stable version of the software during fall of 2014 that has land surface, carbon cycle, hydrology, radiation and energy cycle components.

  12. VVER-1000 Coolant Transient Benchmark. Phase 2 (V1000CT-2) Summary Results of Exercise 1 on Vessel Mixing Simulation

    International Nuclear Information System (INIS)

    Recently developed best-estimate computer code systems for modelling 3-D coupled neutronics/thermal-hydraulics transients in nuclear reactors need to be validated against results from experiments and compared with each other to help understand how the different modelling methods adopted affect the accuracy of the simulation. This benchmark was set up for that purpose. This report is one of a series covering benchmarks designed to test modelling methods for a range of transient scenarios in a VVER-1000 reactor. In this case, the transient is initiated by isolation of one steam generator causing asymmetric loop heat-up. The benchmark is based on experiments conducted at the Kozloduy nuclear power plant. (authors)

  13. Fundamental modeling issues on benchmark structure for structural health monitoring

    Institute of Scientific and Technical Information of China (English)

    HU Sau-Lon James

    2009-01-01

    The IASC-ASCE Structural Health Monitoring Task Group developed a series of benchmark problems, and participants of the benchmark study were charged with using a 12-degree-of-freedom (DOF) shear building as their identification model. The present article addresses improperness, including the parameter and modeling errors, of using this particular model for the intended purpose of damage detection, while the measurements of damaged structures are synthesized from a full-order finite-element model. In addressing parameter errors, a model calibration procedure is utilized to tune the mass and stiffness matrices of the baseline identification model, and a 12-DOF shear building model that preserves the first three modes of the full-order model is obtained. Sequentially, this calibrated model is employed as the baseline model while performing the damage detection under various damage scenarios. Numerical results indicate that the 12-DOF shear building model is an over-simplified identification model, through which only idealized damage situations for the benchmark structure can be detected. It is suggested that a more sophisticated 3-dimensional frame structure model should be adopted as the identification model, if one intends to detect local member damages correctly.

  13. Fundamental modeling issues on benchmark structure for structural health monitoring

    Institute of Scientific and Technical Information of China (English)

    LI HuaJun; ZHANG Min; WANG JunRong; HU Sau-Lon James

    2009-01-01

    The IASC-ASCE Structural Health Monitoring Task Group developed a series of benchmark problems, and participants of the benchmark study were charged with using a 12-degree-of-freedom (DOF) shear building as their identification model. The present article addresses the improperness, including parameter and modeling errors, of using this particular model for the intended purpose of damage detection, while the measurements of damaged structures are synthesized from a full-order finite-element model. In addressing parameter errors, a model calibration procedure is utilized to tune the mass and stiffness matrices of the baseline identification model, and a 12-DOF shear building model that preserves the first three modes of the full-order model is obtained. Subsequently, this calibrated model is employed as the baseline model while performing the damage detection under various damage scenarios. Numerical results indicate that the 12-DOF shear building model is an over-simplified identification model, through which only idealized damage situations for the benchmark structure can be detected. It is suggested that a more sophisticated 3-dimensional frame structure model should be adopted as the identification model, if one intends to detect local member damages correctly.

  14. Benchmarking the codes VORPAL, OSIRIS, and QuickPIC with Laser Wakefield Acceleration Simulations

    OpenAIRE

    Paul, Kevin

    2010-01-01

    Three-dimensional laser wakefield acceleration (LWFA) simulations have recently been performed to benchmark the commonly used particle-in-cell (PIC) codes VORPAL, OSIRIS, and QuickPIC. The simulations were run in parallel on over 100 processors, using parameters relevant to LWFA with ultra-short Ti-Sapphire laser pulses propagating in hydrogen gas. Both first-order and second-order particle shapes were employed. We present the results of this benchmarking exercise, and show that accelerating ...

  15. Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model

    CERN Document Server

    Baudron, Anne-Marie; Maday, Yvon; Riahi, Mohamed Kamel; Salomon, Julien

    2014-01-01

    We present a parareal in time algorithm for the simulation of a neutron diffusion transient model. The method is made efficient by means of a coarse solver defined with large time steps and a steady control-rods model. Using finite elements for the space discretization, our implementation provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch-Maurer-Werner (LMW) benchmark [1].
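
    The parareal correction itself is compact. The sketch below applies it to a scalar decay equation standing in for the diffusion transient, with the fine propagator F and the coarse propagator G differing only in step count; it is an illustrative reconstruction of the algorithm, not the authors' finite-element implementation.

        import numpy as np

        # parareal for du/dt = lam * u on [0, T], split into N time slices
        lam, T, N = -1.0, 2.0, 10
        dt = T / N

        def propagate(u, steps):         # explicit Euler over one slice
            h = dt / steps
            for _ in range(steps):
                u = u + h * lam * u
            return u

        F = lambda u: propagate(u, 100)  # fine (expensive) solver
        G = lambda u: propagate(u, 1)    # coarse (cheap) solver

        U = np.empty(N + 1); U[0] = 1.0
        for n in range(N):               # initial coarse sweep
            U[n + 1] = G(U[n])

        for k in range(5):               # parareal iterations
            Fk = [F(U[n]) for n in range(N)]  # fine runs, parallel in time
            Gk = [G(U[n]) for n in range(N)]
            for n in range(N):           # cheap sequential correction
                U[n + 1] = G(U[n]) + Fk[n] - Gk[n]

        print(U[-1], np.exp(lam * T))    # approaches the exact solution

    The speed-up comes from the fact that the expensive F evaluations inside each iteration are independent across time slices and can run concurrently, while only the cheap G sweep remains sequential.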

  16. Computer simulation of Masurca critical and subcritical experiments. Muse-4 benchmark. Final report

    International Nuclear Information System (INIS)

    The efficient and safe management of spent fuel produced during the operation of commercial nuclear power plants is an important issue. In this context, partitioning and transmutation (P and T) of minor actinides and long-lived fission products can play an important role, significantly reducing the burden on geological repositories of nuclear waste and allowing their more effective use. Various systems, including existing reactors, fast reactors and advanced systems, have been considered to optimise the transmutation scheme. Recently, many countries have shown interest in accelerator-driven systems (ADS) due to their potential for transmutation of minor actinides. Much R and D work is still required in order to demonstrate their desired capability as a whole system, and the current analysis methods and nuclear data for minor actinide burners are not as well established as those for conventionally-fuelled systems. Recognizing a need for code and data validation in this area, the Nuclear Science Committee of the OECD/NEA has organised various theoretical benchmarks on ADS burners. Many improvements and clarifications concerning nuclear data and calculation methods have been achieved. However, some significant discrepancies for important parameters are not fully understood and still require clarification. Therefore, this international benchmark based on MASURCA experiments, which were carried out under the auspices of the EC 5th Framework Programme, was launched in December 2001 in co-operation with the CEA (France) and CIEMAT (Spain). The benchmark model was oriented to compare simulation predictions based on available codes and nuclear data libraries with experimental data related to TRU transmutation, criticality constants and time evolution of the neutronic flux following source variation, within liquid metal fast subcritical systems. A total of 16 different institutions participated in this first experiment-based benchmark, providing 34 solutions.

  17. Modelling the benchmark spot curve for the Serbian market

    Directory of Open Access Journals (Sweden)

    Drenovak Mikica

    2010-01-01

    The objective of this paper is to estimate Serbian benchmark spot curves using the Svensson parametric model. The main challenges that we tackle are: sparse data, different currency denominations of short- and longer-term maturities, and infrequent transactions in the short-term market segment vs. the daily traded medium- and long-term segments. We find that the model is flexible enough to account for most of the data variability. The model parameters are interpreted in economic terms.
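
    For reference, the Svensson spot rate fitted in such an estimation is a standard four-factor extension of the Nelson-Siegel model. A minimal sketch follows; the parameter values are illustrative placeholders, not the estimated Serbian ones.

        import numpy as np

        def svensson(t, b0, b1, b2, b3, tau1, tau2):
            # Svensson (1994) spot rate at maturity t (in years)
            x1, x2 = t / tau1, t / tau2
            f1 = (1 - np.exp(-x1)) / x1                 # slope loading
            f2 = f1 - np.exp(-x1)                       # first hump
            f3 = (1 - np.exp(-x2)) / x2 - np.exp(-x2)   # second hump
            return b0 + b1 * f1 + b2 * f2 + b3 * f3

        maturities = np.array([0.25, 1.0, 2.0, 5.0, 10.0])
        print(svensson(maturities, 0.06, -0.02, 0.01, 0.005, 1.5, 8.0))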

  18. Benchmark Finite Element Simulations of Postbuckling Composite Stiffened Panels

    OpenAIRE

    Orifici, Adrian; Thomson, R.; Gunnion, A.J.; Degenhardt, Richard; Abramovich, H.; Bayandor, J.

    2005-01-01

    This paper outlines the CRC-ACS contribution to a software code benchmarking exercise as part of the European Commission Project COCOMAT investigating composite postbuckling stiffened panels. Analysis was carried out using MSC.Nastran (Nastran) solution sequences SOL 106 and SOL 600, Abaqus/Standard (Abaqus) and LS-Dyna, and compared to experimental data generated previously at the Technion, Israel and DLR, Germany. The finite element (FE) analyses generally gave very good comparison u...

  19. Benchmark analyses of prediction models for pipe wall thinning

    International Nuclear Information System (INIS)

    In recent years, the importance of utilizing a prediction model or code for the management of pipe wall thinning has been recognized. In the Japan Society of Mechanical Engineers (JSME), a working group on prediction methods has been set up within a research committee studying the management of pipe wall thinning. Several prediction models for pipe wall thinning were reviewed through benchmark analyses in terms of their prediction characteristics and the specifications required for their use in the management of pipe wall thinning in power generation facilities. This paper introduces the prediction models, selected from the existing flow-accelerated corrosion and/or liquid droplet impingement erosion models. The experimental results and an example of the wall-thickness measurements used as benchmark data are also described. (author)

  1. Simulation of Enhanced Geothermal Systems: A Benchmarking and Code Intercomparison Study

    Energy Technology Data Exchange (ETDEWEB)

    Scheibe, Timothy D.; White, Mark D.; White, Signe K.; Sivaramakrishnan, Chandrika; Purohit, Sumit; Black, Gary D.; Podgorney, Robert; Boyd, Lauren W.; Phillips, Benjamin R.

    2013-06-30

    Numerical simulation codes have become critical tools for understanding complex geologic processes, as applied to technology assessment, system design, monitoring, and operational guidance. Recently the need for quantitatively evaluating coupled Thermodynamic, Hydrologic, geoMechanical, and geoChemical (THMC) processes has grown, driven by new applications such as geologic sequestration of greenhouse gases and development of unconventional energy sources. Here we focus on Enhanced Geothermal Systems (EGS), which are man-made geothermal reservoirs created where hot rock exists but there is insufficient natural permeability and/or pore fluids to allow efficient energy extraction. In an EGS, carefully controlled subsurface fluid injection is performed to enhance the permeability of pre-existing fractures, which facilitates fluid circulation and heat transport. EGS technologies are relatively new, and pose significant simulation challenges. To become a trusted analytical tool for EGS, numerical simulation codes must be tested to demonstrate that they adequately represent the coupled THMC processes of concern. This presentation describes the approach and status of a benchmarking and code intercomparison effort currently underway, supported by the U. S. Department of Energy’s Geothermal Technologies Program. This study is being closely coordinated with a parallel international effort sponsored by the International Partnership for Geothermal Technology (IPGT). We have defined an extensive suite of benchmark problems, test cases, and challenge problems, ranging in complexity and difficulty, and a number of modeling teams are applying various simulation tools to these problems. The descriptions of the problems and modeling results are being compiled using the Velo framework, a scientific workflow and data management environment accessible through a simple web-based interface.

  2. Simulation of thermo-solutal convection-induced macrosegregation in a Sn-10%Pb alloy benchmark during columnar solidification

    Science.gov (United States)

    Zheng, Y.; Wu, M.; Kharicha, A.; Ludwig, A.

    2016-03-01

    In order to investigate the effect of thermo-solutal convection on the formation of macrosegregation during columnar solidification, simulations with a liquid-columnar two-phase model were carried out on a 2D rectangular benchmark of Sn-10%Pb alloy. The solidification direction in the benchmark is unidirectional: (1) downwards from top to bottom, or (2) upwards from bottom to top. The thermal expansion coefficient, solutal expansion coefficient and liquid diffusion coefficient of the melt are found to be key factors influencing the final macrosegregation. The segregation range and distribution are also strongly influenced by the benchmark configuration, e.g. the solidification direction (upwards or downwards) and the boundary conditions. The global macrosegregation range increases with the velocity magnitude of the melt during solidification.
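
    As background for the buoyancy effects discussed above, thermo-solutal convection models of this kind are commonly closed with a Boussinesq-type density law in the momentum equation; the generic textbook form (not necessarily the exact closure used in this study) reads

        \rho(T, c_\ell) \approx \rho_{\mathrm{ref}} \left[ 1 - \beta_T \,(T - T_{\mathrm{ref}}) - \beta_c \,(c_\ell - c_{\mathrm{ref}}) \right]

    where \beta_T and \beta_c are the thermal and solutal expansion coefficients named in the abstract and c_\ell is the local liquid solute concentration. Whether the two buoyancy contributions reinforce or oppose each other controls the flow pattern and hence the final segregation map.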

  3. Benchmark of tyre models for mechatronic application

    OpenAIRE

    Carulla Castellví, Marina

    2010-01-01

    In this paper a comparison matrix is developed in order to examine three tyre models against nine criteria. These criteria are obtained from a requirements study of the main vehicle-dynamics mechatronic applications, such as ABS, ESP, TCS and EPAS. The present study proposes a weight for each criterion related to its importance to the mentioned applications. These weights are obtained by taking into account both practical and theoretical judgement. The former was collected through experts' ...

  4. Theoretical benchmarking of laser-accelerated ion fluxes by 2D-PIC simulations

    CERN Document Server

    Mackenroth, Felix; Marklund, Mattias

    2016-01-01

    There currently exists a number of different schemes for laser based ion acceleration in the literature. Some of these schemes are also partly overlapping, making a clear distinction between the schemes difficult in certain parameter regimes. Here, we provide a systematic numerical comparison between the following schemes and their analytical models: light-sail acceleration, Coulomb explosions, hole boring acceleration, and target normal sheath acceleration (TNSA). We study realistic laser parameters and various different target designs, each optimized for one of the acceleration schemes, respectively. As a means of comparing the schemes, we compute the ion current density generated at different laser powers, using two-dimensional particle-in-cell (PIC) simulations, and benchmark the particular analytical models for the corresponding schemes against the numerical results. Finally, we discuss the consequences for attaining high fluxes through the studied laser ion-acceleration schemes.

  5. Model-Based Engineering and Manufacturing CAD/CAM Benchmark.

    Energy Technology Data Exchange (ETDEWEB)

    Domm, T.C.; Underwood, R.S.

    1999-10-13

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more modern, responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were somewhere between 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a single computer-aided manufacturing (CAM) system. The Internet was a technology that all companies were looking at, either to transport information more easily throughout the corporation or as a conduit for

  6. A benchmark model to assess community structure in evolving networks

    CERN Document Server

    Granell, Clara; Arenas, Alex; Fortunato, Santo; Gómez, Sergio

    2015-01-01

    Detecting the time evolution of the community structure of networks is crucial to identify major changes in the internal organization of many complex systems, which may undergo important endogenous or exogenous events. This analysis can be done in two ways: considering each snapshot as an independent community detection problem or taking into account the whole evolution of the network. In the first case, one can apply static methods on the temporal snapshots, which correspond to configurations of the system in short time windows, and match afterwards the communities across layers. Alternatively, one can develop dedicated dynamic procedures, so that multiple snapshots are simultaneously taken into account while detecting communities, which allows to keep memory of the flow. To check how well a method of any kind could capture the evolution of communities, suitable benchmarks are needed. Here we propose a model for generating simple dynamic benchmark graphs, based on stochastic block models. In them, the time e...

  7. Simulation of the OECD main steam line benchmark using the Westinghouse RAVE (TM) methodology

    International Nuclear Information System (INIS)

    In order to determine the safety of a reactor with respect to reactor system failures, a set of postulated events is analyzed, and the results are presented in Chapter 14 or 15 of the plant Final Safety Analysis Report (FSAR). In the analysis of events that are not initiated by a Loss of Coolant Accident (non-LOCA events), the typical approach has been to make conservative and bounding analysis assumptions, either for analysis expediency or because of simplified modeling assumptions. In some cases, this has resulted in combinations of assumptions that cannot occur in reality. The consistency of the analysis assumptions can be improved by externally linking the reactor coolant system thermal-hydraulic calculation model to a more realistic 3-dimensional core neutronics and heat transfer model that replaces the simple point kinetics model typically used to represent the core neutronics in the system code. Over the past few years, Westinghouse has developed the RAVE (TM) methodology for the application of three-dimensional core neutron kinetics to the analysis of non-LOCA FSAR events. This methodology uses the NRC-approved core neutron kinetics code SPNOVA and the NRC-approved Westinghouse version of the core thermal-hydraulics code VIPRE-01, in conjunction with the NRC-approved Westinghouse version of the reactor coolant system thermal-hydraulic code RETRAN-02. The Westinghouse methodology was submitted to the NRC for approval in April 2004, and the NRC Safety Evaluation Report (SER) is expected to be issued before this paper is presented. As part of the development and licensing of the RAVE (TM) methodology, Westinghouse has performed an analysis of the OECD Main Steam Line Break (MSLB) benchmark. This benchmark problem had been defined in a cooperative program sponsored by the OECD, the NRC, and the Pennsylvania State University, in order to simulate the core response and the reactor coolant system response to a relatively severe steamline break.

  8. Analysis of the OECD/NEA Oskarshamn-2 feedwater transient and stability benchmark with SIMULATE-3K

    International Nuclear Information System (INIS)

    The OECD/NEA recently launched an international benchmark on a combined feedwater transient and stability event that occurred at the Swedish Oskarshamn-2 (O2) nuclear power plant (NPP). The primary benchmark objective is to assess advances in coupled neutronic/thermal-hydraulic codes for simulations of challenging transients including the appearance of unstable power oscillations. The Paul Scherrer Institut (PSI) is participating in this benchmark in order to enlarge the validation basis of its advanced stability analysis methodology currently under development for Swiss BWRs, based on the state-of-the-art SIMULATE-3K (S3K) code. This paper presents the development, optimization and validation of a S3K model for the first phase of the O2 benchmark, namely the analysis of the entire event. With the optimized model, the S3K solution is compared to the available benchmark data at both steady-state and transient conditions. For the latter, the qualitative as well as quantitative behavior of the S3K results is compared and discussed in relation to the experimental observations. The modeling aspects found in this context to most affect the S3K ability to reproduce the event are also presented. Finally, the S3K model indicates that if the reactor scram had not been initiated, the observed diverging oscillations would have reached a maximum amplitude before decaying back into a stable state. However, it was also found that the core could instead have evolved into limit-cycle oscillations if a stabilization of the feedwater flow and temperature had occurred just before the scram signal. (author)

  9. Development of parallel benchmark code by sheet metal forming simulator 'ITAS'

    International Nuclear Information System (INIS)

    This report describes the development of a parallel benchmark code based on the sheet metal forming simulator 'ITAS'. ITAS is a nonlinear elasto-plastic analysis program using the finite element method for simulating sheet metal forming. It adopts a dynamic analysis method that computes the displacement of the sheet metal at every time step, and uses an implicit scheme with a direct linear equation solver; this makes the simulator very robust, but demands considerable computational time and memory capacity. In developing the parallel benchmark code, we used MPI programming to reduce the computational time. In numerical experiments on five parallel supercomputers at CCSE JAERI (SP2, SR2201, SX-4, T94 and VPP300), good performance was observed. The results will be made publicly available on the WWW so that they may serve as a guideline for research and development of parallel programs. (author)

  10. A Benchmarking Initiative for Reactive Transport Modeling Applied to Subsurface Environmental Applications

    Science.gov (United States)

    Steefel, C. I.

    2015-12-01

    Over the last 20 years, we have seen the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of subsurface environmental applications it is being used to address. Reactive transport modeling is being asked to provide accurate assessments of engineering performance and risk for important issues with far-reaching consequences. As a result, the complexity and detail of subsurface processes, properties, and conditions that can be simulated have significantly expanded. Closed form solutions are necessary and useful, but limited to situations that are far simpler than typical applications that combine many physical and chemical processes, in many cases in coupled form. In the absence of closed form and yet realistic solutions for complex applications, numerical benchmark problems with an accepted set of results will be indispensable to qualifying codes for various environmental applications. The intent of this benchmarking exercise, now underway for more than five years, is to develop and publish a set of well-described benchmark problems that can be used to demonstrate simulator conformance with norms established by the subsurface science and engineering community. The objective is not to verify this or that specific code (the reactive transport codes play a supporting role in this regard) but rather to use the codes to verify that a common solution of the problem can be achieved. Thus, the objective of each of the manuscripts is to present an environmentally-relevant benchmark problem that tests the conceptual model capabilities, numerical implementation, process coupling, and accuracy. The benchmark problems developed to date include 1) microbially-mediated reactions, 2) isotopes, 3) multi-component diffusion, 4) uranium fate and transport, 5) metal mobility in mining affected systems, and 6) waste repositories and related aspects.

  11. RESRAD benchmarking against six radiation exposure pathway models

    International Nuclear Information System (INIS)

    A series of benchmarking runs were conducted so that results obtained with the RESRAD code could be compared against those obtained with six pathway analysis models used to determine the radiation dose to an individual living on a radiologically contaminated site. The RESRAD computer code was benchmarked against five other computer codes - GENII-S, GENII, DECOM, PRESTO-EPA-CPG, and PATHRAE-EPA - and the uncodified methodology presented in the NUREG/CR-5512 report. Estimated doses for the external gamma pathway; the dust inhalation pathway; and the soil, food, and water ingestion pathways were calculated for each methodology by matching, to the extent possible, input parameters such as occupancy, shielding, and consumption factors

  12. Experimental verification of boundary conditions for numerical simulation of airflow in a benchmark ventilation channel

    Directory of Open Access Journals (Sweden)

    Lizal Frantisek

    2016-01-01

    Correct definition of boundary conditions is crucial for the appropriate simulation of a flow. It is common practice to simulate a sufficiently long upstream entrance section instead of experimentally investigating the actual conditions at the boundary of the examined area, in cases where the measurement is either impossible or extremely demanding. We focused on the case of a benchmark channel with a ventilation outlet, which models a regular automotive ventilation system. At first, measurements of air velocity and turbulence intensity were performed at the boundary of the examined area, i.e. in the rectangular channel 272.5 mm upstream of the ventilation outlet. Then, the experimentally acquired results were compared with results obtained by numerical simulation of a further upstream entrance section defined according to generally accepted theoretical suggestions. The comparison showed that despite the simple geometry and general agreement of the average axial velocity, a certain difference was found in the shape of the velocity profile. The difference was attributed to the simplifications of the numerical model and the isotropic turbulence assumption of the turbulence model used. Appropriate recommendations were stated for future work.

  13. DOE–CEA Benchmark on SFR ASTRID Innovative Core: Neutronic and Safety Transients Simulation

    International Nuclear Information System (INIS)

    ASTRID is a fast reactor being designed by the CEA to achieve a level of safety that exceeds that of conventional fast reactors. In particular, an axially heterogeneous core with an upper sodium plenum is employed to achieve a non-positive sodium void reactivity worth. In order to address the simulation challenges for this innovative concept, the US Department of Energy's (DOE) laboratories (Argonne National Laboratory and Idaho National Laboratory) and the CEA are performing neutronic and transient benchmark calculations for an ASTRID model based on design specifications provided by the CEA. The blind comparison of the initial DOE and CEA results shows good agreement, enhancing confidence in CEA predictions of key ASTRID safety-relevant parameters and transient behaviour. For several parameters, the compared uncertainties in the computed values are significant, and further studies are needed to reduce them. (author)

  14. Benchmarking the codes VORPAL, OSIRIS, and QuickPIC with Laser Wakefield Acceleration Simulations

    International Nuclear Information System (INIS)

    Three-dimensional laser wakefield acceleration (LWFA) simulations have recently been performed to benchmark the commonly used particle-in-cell (PIC) codes VORPAL, OSIRIS, and QuickPIC. The simulations were run in parallel on over 100 processors, using parameters relevant to LWFA with ultra-short Ti-Sapphire laser pulses propagating in hydrogen gas. Both first-order and second-order particle shapes were employed. We present the results of this benchmarking exercise, and show that accelerating gradients from full PIC agree for all values of a₀ and that full and reduced PIC agree well for values of a₀ approaching 4.

  15. Extension of PENELOPE to protons: Simulation of nuclear reactions and benchmark with Geant4

    International Nuclear Information System (INIS)

    Purpose: To describe the implementation of nuclear reactions in the extension of the Monte Carlo (MC) code PENELOPE to protons (PENH), and to benchmark it against Geant4. Methods: PENH is based on mixed-simulation mechanics for both elastic and inelastic electromagnetic (EM) collisions. The adopted differential cross sections for EM elastic collisions are calculated using the eikonal approximation with the Dirac–Hartree–Fock–Slater atomic potential. Cross sections for EM inelastic collisions are computed within the relativistic Born approximation, using the Sternheimer–Liljequist model of the generalized oscillator strength. Nuclear elastic and inelastic collisions were simulated explicitly using the scattering analysis interactive dial-in (SAID) database for ¹H and ICRU 63 data for ¹²C, ¹⁴N, ¹⁶O, ³¹P, and ⁴⁰Ca. Secondary protons, alphas, and deuterons were all simulated as protons, with the energy adapted to ensure a consistent range. Prompt gamma emission can also be simulated upon user request. Simulations were performed in a water phantom with nuclear interactions switched off or on, and integral depth–dose distributions were compared. Binary-cascade and precompound models were used for Geant4. Initial energies of 100 and 250 MeV were considered. For cases with no nuclear interactions simulated, additional simulations in a water phantom with tight resolution (1 mm in all directions) were performed with FLUKA. Finally, integral depth–dose distributions for a 250 MeV energy were computed with Geant4 and PENH in a homogeneous phantom with, first, ICRU striated muscle and, second, ICRU compact bone. Results: For simulations with EM collisions only, integral depth–dose distributions were within 1%/1 mm for doses higher than 10% of the Bragg-peak dose. For central-axis depth–dose and lateral profiles in a phantom with tight resolution, there are significant deviations between Geant4 and PENH (up to 60%/1 cm for depth–dose distributions). The agreement is much better

  16. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  17. Design and development of a community carbon cycle benchmarking system for CMIP5 models

    Science.gov (United States)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Randerson, J. T.

    2013-12-01

    Benchmarking has been widely used to assess the ability of atmosphere, ocean, sea ice, and land surface models to capture the spatial and temporal variability of observations during the historical period. For the carbon cycle and terrestrial ecosystems, the design and development of an open-source community platform has been an important goal as part of the International Land Model Benchmarking (ILAMB) project. Here we designed and developed a software system that enables the user to specify the models, benchmarks, and scoring systems so that results can be tailored to specific model intercomparison projects. We used this system to evaluate the performance of CMIP5 Earth system models (ESMs). Our scoring system used information from four different aspects of climate, including the climatological mean spatial pattern of gridded surface variables, seasonal cycle dynamics, the amplitude of interannual variability, and long-term decadal trends. We used this system to evaluate burned area, global biomass stocks, net ecosystem exchange, gross primary production, and ecosystem respiration from CMIP5 historical simulations. Initial results indicated that the multi-model mean often performed better than many of the individual models for most of the observational constraints.

  18. In-cylinder diesel spray combustion simulations using parallel computation: A performance benchmarking study

    International Nuclear Information System (INIS)

    Highlights: ► A performance benchmarking exercise is conducted for diesel combustion simulations. ► The reduced chemical mechanism shows its advantages over base and skeletal models. ► High efficiency and great reduction of CPU runtime are achieved through 4-node solver. ► Increasing ISAT memory from 0.1 to 2 GB reduces the CPU runtime by almost 35%. ► Combustion and soot processes are predicted well with minimal computational cost. - Abstract: In the present study, in-cylinder diesel combustion simulation was performed with parallel processing on an Intel Xeon Quad-Core platform to allow both fluid dynamics and chemical kinetics of the surrogate diesel fuel model to be solved simultaneously on multiple processors. Here, Cartesian Z-Coordinate was selected as the most appropriate partitioning algorithm since it computationally bisects the domain such that the dynamic load associated with fuel particle tracking was evenly distributed during parallel computations. Other variables examined included number of compute nodes, chemistry sizes and in situ adaptive tabulation (ISAT) parameters. Based on the performance benchmarking test conducted, parallel configuration of 4-compute node was found to reduce the computational runtime most efficiently whereby a parallel efficiency of up to 75.4% was achieved. The simulation results also indicated that accuracy level was insensitive to the number of partitions or the partitioning algorithms. The effect of reducing the number of species on computational runtime was observed to be more significant than reducing the number of reactions. Besides, the study showed that an increase in the ISAT maximum storage of up to 2 GB reduced the computational runtime by 50%. Also, the ISAT error tolerance of 10⁻³ was chosen to strike a balance between results accuracy and computational runtime. The optimised parameters in parallel processing and ISAT, as well as the use of the in-house reduced chemistry model allowed accurate

  19. The Lexington Benchmarks for Numerical Simulations of Nebulae

    CERN Document Server

    Ferland, G; Contini, M; Harrington, J; Kallman, T; Netzer, H; Péquignot, D; Raymond, J; Rubin, R; Shields, G; Sutherland, R; Viegas, S

    2016-01-01

    We present the results of a meeting on numerical simulations of ionized nebulae held at the University of Kentucky in conjunction with the celebration of the 70th birthdays of Profs. Donald Osterbrock and Michael Seaton.

  20. Benchmark calculation with improved VVER-440/213 RPV CFD model

    International Nuclear Information System (INIS)

    A detailed RPV model of WWER-440/213 type reactors was developed at BME NTI in recent years. This model contains the main structural elements such as the inlet and outlet nozzles, the guide baffles of the hydro-accumulators, alignment drifts, perforated plates, the brake- and guide-tube chamber, and a simplified core. ANSYS software (ICEM 12.0 for meshing and CFX 12.0 for the simulations) was used. With the new vessel model, a series of parameter studies was performed considering turbulence models, discretization schemes, and modeling methods. The main steady-state results were presented at the last AER Symposium in Varna. The model is suitable for different transient calculations as well. The purpose of the suggested new benchmark (the seventh Dynamic AER Benchmark) is to investigate the reactor-dynamic effects of coolant mixing in the WWER-440/213 reactor vessel and to compare the different codes. The task of this benchmark is to investigate the start-up of the sixth main coolant pump. The computation for this transient was carried out with the ATHLET/BIPR-VVER code at the Kurchatov Institute and was repeated with ANSYS CFX 12.0 at our institute. (Authors)

  1. Benchmarking analysis of three multimedia models: RESRAD, MMSOILS, and MEPAS

    International Nuclear Information System (INIS)

    Multimedia modelers from the United States Environmental Protection Agency (EPA) and the United States Department of Energy (DOE) collaborated to conduct a comprehensive and quantitative benchmarking analysis of three multimedia models. The three models, RESRAD (DOE), MMSOILS (EPA), and MEPAS (DOE), represent analytically based tools that are used by the respective agencies for performing human exposure and health risk assessments. The study was performed by individuals who participate directly in the ongoing design, development, and application of the models. A list of physical/chemical/biological processes related to multimedia-based exposure and risk assessment is first presented as a basis for comparing the overall capabilities of RESRAD, MMSOILS, and MEPAS. Model design, formulation, and function are then examined by applying the models to a series of hypothetical problems. Major components of the models (e.g., atmospheric, surface water, groundwater) are evaluated separately and then studied as part of an integrated system for the assessment of a multimedia release scenario, to determine effects due to linking components of the models. Seven modeling scenarios are used in the conduct of this benchmarking study: (1) direct biosphere exposure, (2) direct release to the air, (3) direct release to the vadose zone, (4) direct release to the saturated zone, (5) direct release to surface water, (6) surface water hydrology, and (7) multimedia release. Study results show that the models differ with respect to (1) the environmental processes included (i.e., model features) and (2) the mathematical formulation and assumptions related to the implementation of solutions (i.e., parameterization).

  2. Benchmarking of super Monte Carlo simulation program SuperMC 2

    International Nuclear Information System (INIS)

    The Super Monte Carlo Simulation Program (SuperMC) for fusion, fission and other nuclear applications has been developed. Variance reduction and hybrid parallel computing techniques have been implemented in SuperMC to enhance efficiency. SuperMC is written in the object-oriented programming language C++ with a modular design concept, and is relatively easy to maintain, modify and expand. Continuous-energy cross-section data from the hybrid evaluated nuclear data library HENDL are utilized to support simulation. In SuperMC 2, the constructive solid geometry (CSG) method is mainly employed to describe geometry and support geometry processing. As a way to input the geometrical data of a complex system efficiently, SuperMC 2 adopts a hierarchical modeling option that makes the best use of hierarchical tiers called composition solids. SuperMC 2 can perform fixed-source and critical eigenvalue calculations through Monte Carlo particle simulation algorithms for neutron, photon and coupled neutron-photon transport. SuperMC 2 has been validated on a broad range of international benchmarks. The results of fixed-source calculations, such as flux, surface current, energy deposition and fission energy deposition, corresponded with the MCNP standard within 1% relative error, and the critical eigenvalue k-eff matched within 0.5% relative error.

  3. Information-Theoretic Benchmarking of Land Surface Models

    Science.gov (United States)

    Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong

    2016-04-01

    Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize the information in their boundary conditions [1] several years before Gong et al. [2] discovered the particular type of benchmark that makes it possible to quantify the amount of information lost due to an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. We here extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and boundary conditions, the latter describing the time-dependent details of each prediction scenario. The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is not dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC). Specifically, we looked at the ability of these models to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of the net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of the net uncertainty, and model structures contributed
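
    The kind of score underlying such information-theoretic benchmarks can be sketched with a plain histogram estimate of mutual information; the example below is purely illustrative (synthetic data, not the NLDAS models or the exact estimator of the study).

        import numpy as np

        def mutual_information(x, y, bins=20):
            # histogram estimate of I(X; Y) in nats
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = pxy / pxy.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

        # a model loses information when I(model output; obs) falls short of
        # I(boundary conditions; obs)
        rng = np.random.default_rng(1)
        forcing = rng.normal(size=5000)                        # boundary condition
        obs = np.tanh(forcing) + 0.1 * rng.normal(size=5000)   # "observations"
        model = 0.8 * forcing                                  # imperfect structure
        print(mutual_information(model, obs), mutual_information(forcing, obs))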

  4. OECD/NRC NUPEC PSBT benchmark: simulation of the steady-state sub-channel test-case with NEPTUNE_CFD

    International Nuclear Information System (INIS)

    In this paper, we present numerical results obtained with the multifield CFD code NEPTUNE_CFD in the framework of the OECD/NRC PWR Subchannel and Bundle Tests (PSBT) international benchmark, focusing on the simulation of five selected runs of the steady-state subchannel exercise. The propagation of the estimated experimental uncertainties on the simulation results is investigated, as well as the mesh sensitivity of the axial evolution of the mean void fraction, using three grid levels. Last, calculation results using a dedicated model for the bubble-size distribution are presented. (author)

  5. Benchmarking and scaling studies of pseudospectral code Tarang for turbulence simulations

    Indian Academy of Sciences (India)

    Mahendra K Verma; Anando Chatterjee; K Sandeep Reddy; Rakesh K Yadav; Supriyo Paul; Mani Chandra; Ravi Samtaney

    2013-10-01

    Tarang is a general-purpose pseudospectral parallel code for simulating flows involving fluids, magnetohydrodynamics, and Rayleigh–Bénard convection in turbulence and instability regimes. In this paper we present code validation and benchmarking results for Tarang. We performed our simulations on 1024³, 2048³, and 4096³ grids using the HPC system of IIT Kanpur and Shaheen of KAUST. We observe good 'weak' and 'strong' scaling for Tarang on these systems.
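
    For readers unfamiliar with the terminology, 'strong' scaling fixes the total problem size while 'weak' scaling fixes the work per process; the corresponding efficiencies are typically computed as below (the timings are made-up placeholders, not Tarang's published numbers).

        def strong_scaling_efficiency(t_ref, p_ref, t_p, p):
            # fixed total problem size: ideal runtime falls as 1/p
            return (t_ref * p_ref) / (t_p * p)

        def weak_scaling_efficiency(t_ref, t_p):
            # fixed work per process: ideal runtime stays constant
            return t_ref / t_p

        print(strong_scaling_efficiency(100.0, 64, 14.0, 512))  # ~0.89
        print(weak_scaling_efficiency(100.0, 118.0))            # ~0.85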

  6. Benchmarking and scaling studies of pseudospectral code Tarang for turbulence simulations

    KAUST Repository

    Verma, Mahendra K.

    2013-09-21

    Tarang is a general-purpose pseudospectral parallel code for simulating flows involving fluids, magnetohydrodynamics, and Rayleigh–Bénard convection in turbulence and instability regimes. In this paper we present code validation and benchmarking results for Tarang. We performed our simulations on 1024³, 2048³, and 4096³ grids using the HPC system of IIT Kanpur and Shaheen of KAUST. We observe good 'weak' and 'strong' scaling for Tarang on these systems.

  7. Benchmark model to assess community structure in evolving networks.

    Science.gov (United States)

    Granell, Clara; Darst, Richard K; Arenas, Alex; Fortunato, Santo; Gómez, Sergio

    2015-07-01

    Detecting the time evolution of the community structure of networks is crucial to identify major changes in the internal organization of many complex systems, which may undergo important endogenous or exogenous events. This analysis can be done in two ways: considering each snapshot as an independent community detection problem or taking into account the whole evolution of the network. In the first case, one can apply static methods on the temporal snapshots, which correspond to configurations of the system in short time windows, and match afterward the communities across layers. Alternatively, one can develop dedicated dynamic procedures so that multiple snapshots are simultaneously taken into account while detecting communities, which allows us to keep memory of the flow. To check how well a method of any kind could capture the evolution of communities, suitable benchmarks are needed. Here we propose a model for generating simple dynamic benchmark graphs, based on stochastic block models. In them, the time evolution consists of a periodic oscillation of the system's structure between configurations with built-in community structure. We also propose the extension of quality comparison indices to the dynamic scenario. PMID:26274223
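
    A minimal generator in the spirit of this benchmark can be written with stochastic block models whose strength of community structure oscillates periodically in time; the probability schedule below is an illustrative assumption, not the published parameterization.

        import numpy as np

        def sbm_snapshot(sizes, p_in, p_out, rng):
            # one undirected stochastic-block-model graph as an adjacency matrix
            labels = np.repeat(np.arange(len(sizes)), sizes)
            same = labels[:, None] == labels[None, :]
            probs = np.where(same, p_in, p_out)
            upper = np.triu(rng.random(probs.shape) < probs, 1)
            return (upper | upper.T).astype(int), labels

        def dynamic_benchmark(sizes, t_max, period=20, seed=0):
            # oscillate between strong communities (p_in >> p_out) and a
            # nearly random graph (p_in == p_out)
            rng = np.random.default_rng(seed)
            for t in range(t_max):
                s = 0.5 * (1 + np.cos(2 * np.pi * t / period))  # in [0, 1]
                p_in = 0.05 + 0.25 * s
                yield sbm_snapshot(sizes, p_in, 0.05, rng)[0]

        snapshots = list(dynamic_benchmark([32, 32, 32], t_max=40))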

  8. A Programming Model Performance Study Using the NAS Parallel Benchmarks

    Directory of Open Access Journals (Sweden)

    Hongzhang Shan

    2010-01-01

    Harnessing the power of multicore platforms is challenging due to the additional levels of parallelism present. In this paper we use the NAS Parallel Benchmarks to study three programming models, MPI, OpenMP and PGAS, to understand their performance and memory-usage characteristics on current multicore architectures. To understand these characteristics we use the Integrated Performance Monitoring tool and other means to measure communication versus computation time, as well as the fraction of the run time spent in OpenMP. The benchmarks are run on two different Cray XT5 systems and an InfiniBand cluster. Our results show that, in general, the three programming models exhibit very similar performance characteristics. In a few cases, OpenMP is significantly faster because it explicitly avoids communication. For these particular cases, we were able to re-write the UPC versions and achieve equal performance to OpenMP. Using OpenMP was also the most advantageous in terms of memory usage. We also compare performance differences between the two Cray systems, which have quad-core and hex-core processors. We show that at scale the performance is almost always slower on the hex-core system because of increased contention for network resources.

  9. Developing and Using Benchmarks for Eddy Current Simulation Codes Validation to Address Industrial Issues

    Science.gov (United States)

    Mayos, M.; Buvat, F.; Costan, V.; Moreau, O.; Gilles-Pascaud, C.; Reboud, C.; Foucher, F.

    2011-06-01

    To achieve performance demonstration, which is a legal requirement for the qualification of NDE processes applied on French nuclear power plants, the use of modeling tools is a valuable support, provided that the employed models have been previously validated. To achieve this, in particular for eddy current modeling, a validation methodology based on the use of specific benchmarks close to the actual industrial issue has to be defined. Nonetheless, considering the high variability in code origin and complexity, the feedback from experience on actual cases has shown that it was critical to define simpler generic and public benchmarks in order to perform a preliminary selection. A specific Working Group has been launched in the frame of COFREND, the French Association for NDE, resulting in the definition of several benchmark problems. This action is now ready for mutualization with similar international approaches.

  10. A performance geodynamo benchmark

    Science.gov (United States)

    Matsui, H.; Heien, E. M.

    2014-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. However, to approach the parameter regime of the Earth's outer core, we need a massively parallel computational environment for extremely large spatial resolutions. Local methods are expected to be more suitable for massively parallel computation because they need less data communication than the spherical harmonics expansion, but only a few groups have reported dynamo benchmark results using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, some numerical dynamo models using the spherical harmonics expansion have performed successfully with thousands of processes. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of the present benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare among as many numerical methods as possible, we consider the model with the insulating magnetic boundary by Christensen et al. (2001) and with the pseudo-vacuum magnetic boundary, because the pseudo-vacuum boundaries are easier to implement with local methods than the magnetic insulating boundaries. We consider two kinds of benchmarks, the so-called accuracy benchmark and performance benchmark, and report here the results of the performance benchmark. We run the participating dynamo models under the same computational environment (XSEDE TACC Stampede) and investigate computational performance. To simplify the problem, we choose the same model and parameter regime as the accuracy benchmark test, but perform the simulations with spatial resolutions as fine as possible to investigate computational capability.

  11. Model-Based Engineering and Manufacturing CAD/CAM Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Domm, T.D.; Underwood, R.S.

    1999-04-26

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) and Work For Others into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The method for obtaining the desired information in these areas centered on the creation of a benchmark questionnaire. The questionnaire was used throughout each of the visits as the basis for information gathering. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were using both 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a single computer-aided manufacturing (CAM) system.

  12. Model-Based Engineering and Manufacturing CAD/CAM Benchmark

    International Nuclear Information System (INIS)

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) and Work For Others into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The method for obtaining the desired information in these areas centered on the creation of a benchmark questionnaire. The questionnaire was used throughout each of the visits as the basis for information gathering. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were using both 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a single computer-aided manufacturing (CAM) system.

  13. CFD Simulation of Thermal-Hydraulic Benchmark V1000CT-2 Using ANSYS CFX

    Directory of Open Access Journals (Sweden)

    Thomas Höhne

    2009-01-01

    Plant measured data from VVER-1000 coolant mixing experiments were used within the OECD/NEA and AER coupled code benchmarks for light water reactors to test and validate computational fluid dynamics (CFD) codes. The task is to compare the various calculations with measured data, using specified boundary conditions and core power distributions. The experiments, which are provided for CFD validation, include single-loop cooling down or heating up by disturbing the heat transfer in the steam generator through the steam valves at low reactor power and with all main coolant pumps in operation. CFD calculations have been performed using a numerical grid model of 4.7 million tetrahedral elements. The Best Practice Guidelines for using CFD in nuclear reactor safety applications were followed. Different advanced turbulence models were utilized in the numerical simulation. The results show a clear sector formation of the affected loop at the downcomer, lower plenum and core inlet, which corresponds to the measured values. The maximum local values of the relative temperature rise in the calculation are in the same range as in the experiment. Based on this result, it is now possible to improve the mixing models that are usually used in system codes.

  14. An International Land-Biosphere Model Benchmarking Activity for the IPCC Fifth Assessment Report (AR5)

    Science.gov (United States)

    Hoffman, F. M.; Randerson, J. T.; Thornton, P. E.; Bonan, G. B.; Brooks, B. J.; Erickson, D. J.; Fung, I.

    2009-12-01

    The need to capture important climate feedbacks in general circulation models (GCMs) has resulted in efforts to include atmospheric chemistry and land and ocean biogeochemistry into the next generation of production climate models, called Earth System Models (ESMs). While many terrestrial and ocean carbon models have been coupled to GCMs, recent work has shown that such models can yield a wide range of results (Friedlingstein et al., 2006). This work suggests that a more rigorous set of global offline and partially coupled experiments, along with detailed analyses of processes and comparisons with measurements, are needed. The Carbon-Land Model Intercomparison Project (C-LAMP) was designed to meet this need by providing a simulation protocol and model performance metrics based upon comparisons against best-available satellite- and ground-based measurements (Hoffman et al., 2007). Recently, a similar effort in Europe, called the International Land Model Benchmark (ILAMB) Project, was begun to assess the performance of European land surface models. These two projects will now serve as prototypes for a proposed international land-biosphere model benchmarking activity for those models participating in the IPCC Fifth Assessment Report (AR5). Initially used for model validation for terrestrial biogeochemistry models in the NCAR Community Land Model (CLM), C-LAMP incorporates a simulation protocol for both offline and partially coupled simulations using a prescribed historical trajectory of atmospheric CO2 concentrations. Models are confronted with data through comparisons against AmeriFlux site measurements, MODIS satellite observations, NOAA Globalview flask records, TRANSCOM inversions, and Free Air CO2 Enrichment (FACE) site measurements. Both sets of experiments have been performed using two different terrestrial biogeochemistry modules coupled to the CLM version 3 in the Community Climate System Model version 3 (CCSM3): the CASA model of Fung, et al., and the carbon
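
    The metrics in such benchmarking activities reduce a model-observation comparison to a small set of scores. The toy function below illustrates the general idea with bias, RMSE and an exponential skill score; it is only a sketch with invented flux-tower numbers, not the scoring system actually defined by C-LAMP.

        import numpy as np

        def benchmark_score(model, obs):
            """Toy model-vs-observation metric: bias, RMSE and a 0-1 skill score."""
            bias = np.mean(model - obs)
            rmse = np.sqrt(np.mean((model - obs) ** 2))
            skill = np.exp(-rmse / np.std(obs))  # 1 = perfect; tends to 0 as errors grow
            return bias, rmse, skill

        # Hypothetical monthly GPP (gC/m2/day) at a single flux-tower site.
        obs = np.array([1.2, 1.8, 3.5, 5.9, 8.1, 9.4, 9.0, 7.6, 5.2, 3.1, 1.6, 1.1])
        mod = np.array([1.0, 1.5, 3.9, 6.3, 8.8, 9.9, 9.2, 7.1, 4.7, 2.8, 1.4, 0.9])
        print(benchmark_score(mod, obs))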

  15. An international land-biosphere model benchmarking activity for the IPCC Fifth Assessment Report (AR5)

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, Forrest M [ORNL]; Randerson, James T [ORNL]; Thornton, Peter E [ORNL]; Bonan, Gordon [National Center for Atmospheric Research (NCAR)]; Erickson III, David J [ORNL]; Fung, Inez [University of California, Berkeley]

    2009-12-01

    The need to capture important climate feedbacks in general circulation models (GCMs) has resulted in efforts to include atmospheric chemistry and land and ocean biogeochemistry into the next generation of production climate models, called Earth System Models (ESMs). While many terrestrial and ocean carbon models have been coupled to GCMs, recent work has shown that such models can yield a wide range of results (Friedlingstein et al., 2006). This work suggests that a more rigorous set of global offline and partially coupled experiments, along with detailed analyses of processes and comparisons with measurements, are needed. The Carbon-Land Model Intercomparison Project (C-LAMP) was designed to meet this need by providing a simulation protocol and model performance metrics based upon comparisons against best-available satellite- and ground-based measurements (Hoffman et al., 2007). Recently, a similar effort in Europe, called the International Land Model Benchmark (ILAMB) Project, was begun to assess the performance of European land surface models. These two projects will now serve as prototypes for a proposed international land-biosphere model benchmarking activity for those models participating in the IPCC Fifth Assessment Report (AR5). Initially used for model validation for terrestrial biogeochemistry models in the NCAR Community Land Model (CLM), C-LAMP incorporates a simulation protocol for both offline and partially coupled simulations using a prescribed historical trajectory of atmospheric CO2 concentrations. Models are confronted with data through comparisons against AmeriFlux site measurements, MODIS satellite observations, NOAA Globalview flask records, TRANSCOM inversions, and Free Air CO2 Enrichment (FACE) site measurements. Both sets of experiments have been performed using two different terrestrial biogeochemistry modules coupled to the CLM version 3 in the Community Climate System Model version 3 (CCSM3): the CASA model of Fung, et al., and the carbon

  16. Beam equipment electromagnetic interaction in accelerators: simulation and experimental benchmarking

    CERN Document Server

    Passarelli, Andrea; Vaccaro, Vittorio Giorgio; Massa, Rita; Masullo, Maria Rosaria

    One of the most significant technological problems in achieving the nominal performance of the Large Hadron Collider (LHC) concerns the collimation system for the particle beams. The use of collimator crystals, exploiting the channeling effect on the extracted beam, has been experimentally demonstrated. The first part of this thesis concerns the optimization of the UA9 goniometer at CERN; this device, used for beam collimation, will replace a part of the vacuum chamber. The optimization process, however, requires the calculation of the coupling impedance between the circulating beam and this structure in order to define the admissible intensity threshold below which instability processes are not triggered. Simulations have been performed with electromagnetic codes to evaluate the coupling impedance and to assess the beam-structure interaction. The results clearly showed that the resonance frequencies of concern are due solely to the cavity that opens into the compartment housing the motors and position sensors, considering the crystal in o...

  17. Monte Carlo simulations and benchmark studies at CERN's accelerator chain

    CERN Document Server

    AUTHOR|(CDS)2083190; Brugger, Markus

    2015-01-01

    Mixed particle and energy radiation fields present at the Large Hadron Collider (LHC) and its accelerator chain are responsible for failures of electronic devices located in the vicinity of the accelerator beam lines. These radiation effects on electronics and, more generally, the overall radiation damage issues have a direct impact on component and system lifetimes, as well as on maintenance requirements and radiation exposure of personnel who have to intervene and fix existing faults. The radiation environments and respective radiation damage issues along CERN's accelerator chain were studied in the framework of the CERN Radiation to Electronics (R2E) project and are hereby presented. The important interplay between Monte Carlo simulations and radiation monitoring is also highlighted.

  18. Benchmarking consensus model quality assessment for protein fold recognition

    Directory of Open Access Journals (Sweden)

    McGuffin Liam J

    2007-09-01

    Full Text Available Abstract Background Selecting the highest quality 3D model of a protein structure from a number of alternatives remains an important challenge in the field of structural bioinformatics. Many Model Quality Assessment Programs (MQAPs) have been developed which adopt various strategies in order to tackle this problem, ranging from the so-called "true" MQAPs capable of producing a single energy score based on a single model, to methods which rely on structural comparisons of multiple models or additional information from meta-servers. However, it is clear that no current method can consistently separate the highest accuracy models from the lowest. In this paper, a number of the top performing MQAP methods are benchmarked in the context of the potential value that they add to protein fold recognition. Two novel methods are also described: ModSSEA, which is based on the alignment of predicted secondary structure elements, and ModFOLD, which combines several true MQAP methods using an artificial neural network. Results The ModSSEA method is found to be an effective model quality assessment program for ranking multiple models from many servers; however, further accuracy can be gained by using the consensus approach of ModFOLD. The ModFOLD method is shown to significantly outperform the true MQAPs tested and is competitive with methods which make use of clustering or additional information from multiple servers. Several of the true MQAPs are also shown to add value to most individual fold recognition servers by improving model selection, when applied as a post filter in order to re-rank models. Conclusion MQAPs should be benchmarked appropriately for the practical context in which they are intended to be used. Clustering based methods are the top performing MQAPs where many models are available from many servers; however, they often do not add value to individual fold recognition servers when limited models are available. Conversely, the true MQAP methods
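
    The consensus idea is simple to state: collect per-model scores from several independent MQAPs and combine them before ranking. The sketch below uses a plain mean-score consensus with invented scores; the actual ModFOLD method combines its inputs with an artificial neural network rather than a simple average.

        from statistics import mean

        # Hypothetical quality scores for four candidate models from three MQAPs,
        # each normalised to [0, 1].
        scores = {
            "model_A": [0.71, 0.64, 0.69],
            "model_B": [0.55, 0.60, 0.52],
            "model_C": [0.78, 0.70, 0.74],
            "model_D": [0.40, 0.45, 0.38],
        }
        consensus = {name: mean(vals) for name, vals in scores.items()}
        ranking = sorted(consensus, key=consensus.get, reverse=True)
        print("Consensus ranking:", ranking)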

  19. Benchmark of numerical tools simulating beam propagation and secondary particles in ITER NBI

    Energy Technology Data Exchange (ETDEWEB)

    Sartori, E., E-mail: emanuele.sartori@igi.cnr.it; Veltri, P.; Serianni, G. [Consorzio RFX (CNR, ENEA, INFN, Università di Padova, Acciaierie Venete SpA) Corso Stati Uniti 4 - 35127 Padova (Italy); Dlougach, E. [RRC Kurchatov institute, 1, Kurchatov Sq, Moscow, 123182 (Russian Federation); Hemsworth, R.; Singh, M. [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France)

    2015-04-08

    Injection of high energy beams of neutral particles is a method for plasma heating in fusion devices. The ITER injector and its prototype MITICA (Megavolt ITER Injector and Concept Advancement) are large extrapolations from existing devices; therefore, numerical modeling is needed to set the thermo-mechanical requirements for all beam-facing components. As the power and charge deposition originates from several sources (primary beam, co-accelerated electrons, and secondary production by beam-gas, beam-surface, and electron-surface interaction), the beam propagation along the beam line is simulated by comprehensive 3D models. This paper presents a comparative study between two codes: BTR has been used for several years in the design of the ITER HNB/DNB components; the SAMANTHA code was independently developed and includes additional phenomena, such as secondary particles generated by collisions of beam particles with the background gas. The code comparison is valuable in the perspective of the upcoming experimental operations, in order to prepare reliable numerical support for the interpretation of experimental measurements in the beam test facilities. The power density map calculated on the Electrostatic Residual Ion Dump (ERID) is the chosen benchmark, as it depends on the electric and magnetic fields as well as on the evolution of the beam species via interaction with the gas. Finally, the paper shows additional results provided by SAMANTHA, such as the secondary electrons produced by volume processes accelerated by the ERID fringe-field towards the cryopumps.

  20. Development of common user data model for APOLLO3 and MARBLE and application to benchmark problems

    International Nuclear Information System (INIS)

    A Common User Data Model, CUDM, has been developed for the purpose of benchmark calculations between the APOLLO3 and MARBLE code systems. The current version of CUDM was designed for core calculation benchmark problems with three-dimensional Cartesian (3-D XYZ) geometry. CUDM is able to manage all input/output data such as 3-D XYZ geometry, effective macroscopic cross sections, the effective multiplication factor and neutron flux. In addition, visualization tools for geometry and neutron flux were included. CUDM was designed using object-oriented techniques and implemented in the Python programming language. Based on the CUDM, a prototype system for benchmark calculations, CUDM-benchmark, was also developed. The CUDM-benchmark supports input/output data conversion for the IDT solver in APOLLO3, and the TRITAC and SNT solvers in MARBLE. In order to evaluate the suitability of CUDM, the CUDM-benchmark was applied to benchmark problems proposed by T. Takeda, G. Chiba and I. Zmijarevic. It was verified that the CUDM-benchmark successfully reproduced the results calculated with reference input data files, and provided consistent results among all the solvers by using one common input data set defined by CUDM. In addition, a detailed benchmark calculation for the Chiba benchmark was performed using the CUDM-benchmark. The Chiba benchmark is a neutron transport benchmark problem for a fast criticality assembly without homogenization. This benchmark problem consists of 4 core configurations which have different sodium void regions, and each core configuration is defined by more than 5,000 fuel/material cells. In this application, it was found that the results by the IDT and SNT solvers agreed well with the reference results by a Monte-Carlo code. In addition, model effects such as the quadrature set effect, Sn order effect and mesh size effect were systematically evaluated and summarized in this report. (author)
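
    In outline, such a common user data model is a neutral container that every solver interface reads from and writes to. The following minimal Python sketch shows the shape such a class might take; the class and method names are hypothetical illustrations, not the actual CUDM API.

        import numpy as np

        class CoreData:
            """Minimal sketch of a common data model for 3-D XYZ core calculations."""

            def __init__(self, nx, ny, nz, ngroups):
                self.shape = (nx, ny, nz)
                # Effective macroscopic cross sections and multigroup flux per cell.
                self.xs = np.zeros((nx, ny, nz, ngroups))
                self.flux = np.zeros((nx, ny, nz, ngroups))
                self.keff = None  # effective multiplication factor

            def to_solver_input(self, solver):
                # A real implementation would write native input for IDT, TRITAC
                # or SNT here; this placeholder only reports the target solver.
                print(f"writing {self.shape} core model for solver '{solver}'")

        core = CoreData(nx=16, ny=16, nz=32, ngroups=4)
        core.to_solver_input("IDT")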

  1. Benchmarking the calculation of stochastic heating and emissivity of dust grains in the context of radiative transfer simulations

    Science.gov (United States)

    Camps, Peter; Misselt, Karl; Bianchi, Simone; Lunttila, Tuomas; Pinte, Christophe; Natale, Giovanni; Juvela, Mika; Fischera, Joerg; Fitzgerald, Michael P.; Gordon, Karl; Baes, Maarten; Steinacker, Jürgen

    2015-08-01

    Context. Thermal emission by stochastically heated dust grains (SHGs) plays an important role in the radiative transfer (RT) problem for a dusty medium. It is therefore essential to verify that RT codes properly calculate the dust emission before studying the effects of spatial distribution and other model parameters on the simulated observables. Aims: We define an appropriate problem for benchmarking dust emissivity calculations in the context of RT simulations, specifically including the emission from SHGs. Our aim is to provide a self-contained guide for implementors of such functionality and to offer insight into the effects of the various approximations and heuristics implemented by the participating codes to accelerate the calculations. Methods: The benchmark problem definition includes the optical and calorimetric material properties and the grain size distributions for a typical astronomical dust mixture with silicate, graphite, and PAH components. It also includes a series of analytically defined radiation fields to which the dust population is to be exposed and instructions for the desired output. We processed this problem using six RT codes participating in this benchmark effort and compared the results to a reference solution computed with the publicly available dust emission code DustEM. Results: The participating codes implement different heuristics to keep the calculation time at an acceptable level. We study the effects of these mechanisms on the calculated solutions and report on the level of (dis)agreement between the participating codes. For all but the most extreme input fields, we find agreement within 10% across the important wavelength range 3 μm ≤ λ ≤ 1000 μm. Conclusions: We conclude that the relevant modules in RT codes can and do produce fairly consistent results for the emissivity spectra of SHGs. This work can serve as a reference for implementors of dust RT codes, and it will pave the way for a more extensive benchmark effort
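
    A comparison of this kind boils down to evaluating the fractional deviation of each code's emissivity spectrum from the reference solution over the benchmark wavelength range. The sketch below shows that calculation on invented spectra; the power-law reference is a stand-in, not the DustEM solution.

        import numpy as np

        # Stand-in emissivity spectra on a common wavelength grid (micron).
        wavelengths = np.logspace(np.log10(3.0), np.log10(1000.0), 200)
        reference = 1e-24 * (wavelengths / 100.0) ** -1.8
        candidate = reference * (1.0 + 0.05 * np.sin(np.log(wavelengths)))

        # Maximum fractional deviation over 3-1000 micron, analogous to the
        # 10% agreement criterion quoted above.
        deviation = np.abs(candidate / reference - 1.0)
        print(f"max deviation: {deviation.max():.1%}")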

  2. Benchmarking reactive transport models at a hillslope scale

    Science.gov (United States)

    Kalbacher, T.; He, W.; Nixdorf, E.; Jang, E.; Fleckenstein, J. H.; Kolditz, O.

    2015-12-01

    The hillslope scale is an important transition between the field scale and the catchment scale. The water flow in the unsaturated zone of a hillslope can be highly dynamic, which can lead to dynamic changes of groundwater flow or stream outflow. Additionally, interactions among the host rock formation, soil properties and recharge water from precipitation or anthropogenic activities (mining, agriculture etc.) can influence the water quality of groundwater and streams in the long term. Simulating reactive transport processes at such a scale is a challenging task. On the one hand, simulation of water flow in a coupled soil-aquifer system often involves solving highly non-linear PDEs such as the Richards equation; on the other hand, one has to consider complicated biogeochemical reactions (e.g. water-rock interactions, biological degradation, redox reactions). Both aspects are computationally expensive and place high demands on the numerical precision and stability of the employed code. The primary goals of this study are as follows: i) identify the bottlenecks and quantitatively analyse their influence on the simulation of biogeochemical reactions at a hillslope scale; ii) find or suggest practical strategies to deal with these bottlenecks, and thus provide detailed hints for future improvements of reactive transport simulators. To achieve these goals, the parallelized reactive transport simulator OGS#IPhreeqc has been applied to simulate two benchmark examples. The first example concerns uranium leaching and is based on Šimůnek et al. (2012); it considers the leaching of uranium from a mill tailing and the accompanying mineral dissolution/precipitation. The geochemical system is then extended to include redox reactions in the second example. Based on these examples, the numerical stability and parallel performance of the tool are analysed. Reference: Šimůnek, J., Jacques, D., Šejna, M., van Genuchten, M. T.: The HP2 program for HYDRUS (2D/3D), a coupled code for simulating two
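
    The nonlinearity mentioned above enters because both the water content and the hydraulic conductivity depend on the pressure head. In its commonly used mixed form (with z positive upward and a sink term S), the Richards equation reads

        \frac{\partial \theta(h)}{\partial t} = \nabla \cdot \bigl[ K(h)\, \nabla (h + z) \bigr] - S,

    where \theta is the volumetric water content, h the pressure head and K(h) the unsaturated hydraulic conductivity. Since \theta(h) and K(h) can vary over orders of magnitude within a hillslope, the discretised system is stiff, which is one of the numerical bottlenecks such a benchmark exposes.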

  3. Numerical and computational aspects of the coupled three-dimensional core/plant simulations: Organization for Economic Cooperation and Development/U.S. Nuclear Regulatory Commission pressurized water reactor main-steam-line-break benchmark-II. Neutronics Behavior Comparison Using Different Thermohydraulic Modelizations

    International Nuclear Information System (INIS)

    System codes incorporating a full three-dimensional (3-D) reactor core model allow best-estimate simulations of nuclear power plants; however, two main questions arise with respect to this technology. First, are full 3-D thermohydraulics necessary? Second, what net benefit is gained with the 3-D neutronics? Furthermore, experience with 3-D techniques is currently limited; consequently, the Nuclear Energy Agency Nuclear Science Committee decided to perform a series of plant transient benchmarks in order to verify the 3-D core models. The Main-Steam-Line-Break (MSLB) Benchmark belongs to this initiative. This benchmark consists of three exercises, namely, a point kinetics plant simulation, a coupled 3-D neutronics/core thermohydraulic evaluation of core response, and finally a best-estimate coupled core-plant transient simulation. This paper is based on the experience of performing the second exercise to test the 3-D and point neutron kinetics response with imposed thermohydraulic boundary conditions using TRAC/BF1 (Ref. 2), TRAC/PF1 (Ref. 3), and RETRAN-3D. The following thermohydraulic conditions are provided in the benchmark: radial distribution of mass flow rates and liquid temperatures at the core inlet, and radial distribution of pressure versus time at both the core inlet and outlet. We have developed four different reactor core models: two for TRAC/PF1 and two more for TRAC/BF1. The first one, taken as the reference, models the core using the VESSEL component of TRAC/PF1 (3-D thermohydraulic equations). The second one models the core using the PIPE component in place of the vessel, also with the same code. The third and fourth ones are with TRAC/BF1, and both represent the core using 18 bundles with the CHANNEL component and no vessel (so no crossflow is considered). The third one has no lower plenum mixing, but the fourth one does; it allows all the flow thermohydraulic properties to mix before the core inlet, and it is similar to

  4. A resource for benchmarking the usefulness of protein structure models.

    KAUST Repository

    Carbajo, Daniel

    2012-08-02

    BACKGROUND: Increasingly, biologists and biochemists use computational tools to design experiments to probe the function of proteins and/or to engineer them for a variety of different purposes. The most effective strategies rely on knowledge of the three-dimensional structure of the protein of interest. However, it is often the case that an experimental structure is not available and that models of different quality are used instead. On the other hand, the relationship between the quality of a model and its appropriate use is not easy to derive in general, and so far it has been analyzed in detail only for specific applications. RESULTS: This paper describes a database and related software tools that allow testing of a given structure-based method on models of a protein representing different levels of accuracy. The comparison of the results of a computational experiment on the experimental structure and on a set of its decoy models will allow developers and users to assess the specific threshold of accuracy required to perform the task effectively. CONCLUSIONS: The ModelDB server automatically builds decoy models of different accuracy for a given protein of known structure and provides a set of useful tools for their analysis. Pre-computed data for a non-redundant set of deposited protein structures are available for analysis and download in the ModelDB database. IMPLEMENTATION, AVAILABILITY AND REQUIREMENTS: Project name: A resource for benchmarking the usefulness of protein structure models. Project home page: http://bl210.caspur.it/MODEL-DB/MODEL-DB_web/MODindex.php. Operating system(s): Platform independent. Programming language: Perl-BioPerl (program); mySQL, Perl DBI and DBD modules (database); php, JavaScript, Jmol scripting (web server). Other requirements: Java Runtime Environment v1.4 or later, Perl, BioPerl, CPAN modules, HHsearch, Modeller, LGA, NCBI Blast package, DSSP, Speedfill (Surfnet) and PSAIA. License: Free. Any restrictions to use by

  5. Extension of PENELOPE to protons: Simulation of nuclear reactions and benchmark with Geant4

    Energy Technology Data Exchange (ETDEWEB)

    Sterpin, E. [Center of Molecular Imaging, Radiotherapy and Oncology, Institut de recherche expérimentale et clinique, Université catholique de Louvain, Avenue Hippocrate 54, 1200 Brussels (Belgium); Sorriaux, J. [Center of Molecular Imaging, Radiotherapy and Oncology, Institut de recherche expérimentale et clinique, Université catholique de Louvain, Avenue Hippocrate 54, 1200 Brussels, Belgium and ICTEAM Institute, Université catholique de Louvain, Louvain-la-Neuve (Belgium); Vynckier, S. [Center of Molecular Imaging, Radiotherapy and Oncology, Institut de recherche expérimentale et clinique, Université catholique de Louvain, Avenue Hippocrate 54, 1200 Brussels, Belgium and Département de radiothérapie, Cliniques Universitaires Saint-Luc, Avenue Hippocrate 10, 1200 Brussels (Belgium)

    2013-11-15

    Purpose: To describe the implementation of nuclear reactions in the extension of the Monte Carlo (MC) code PENELOPE to protons (PENH) and its benchmarking against Geant4. Methods: PENH is based on mixed-simulation mechanics for both elastic and inelastic electromagnetic (EM) collisions. The adopted differential cross sections for EM elastic collisions are calculated using the eikonal approximation with the Dirac–Hartree–Fock–Slater atomic potential. Cross sections for EM inelastic collisions are computed within the relativistic Born approximation, using the Sternheimer–Liljequist model of the generalized oscillator strength. Nuclear elastic and inelastic collisions were simulated using explicitly the Scattering Analysis Interactive Dial-in (SAID) database for {sup 1}H and ICRU 63 data for {sup 12}C, {sup 14}N, {sup 16}O, {sup 31}P, and {sup 40}Ca. Secondary protons, alphas, and deuterons were all simulated as protons, with the energy adapted to ensure a consistent range. Prompt gamma emission can also be simulated upon user request. Simulations were performed in a water phantom with nuclear interactions switched off or on, and integral depth–dose distributions were compared. Binary-cascade and precompound models were used for Geant4. Initial energies of 100 and 250 MeV were considered. For cases with no nuclear interactions simulated, additional simulations in a water phantom with tight resolution (1 mm in all directions) were performed with FLUKA. Finally, integral depth–dose distributions for a 250 MeV energy were computed with Geant4 and PENH in a homogeneous phantom with, first, ICRU striated muscle and, second, ICRU compact bone. Results: For simulations with EM collisions only, integral depth–dose distributions were within 1%/1 mm for doses higher than 10% of the Bragg-peak dose. For central-axis depth–dose and lateral profiles in a phantom with tight resolution, there are significant deviations between Geant4 and PENH (up to 60%/1 cm for depth
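
    Agreement statements such as "within 1%/1 mm" combine a dose-difference test with a distance-to-agreement test. The sketch below shows only the simpler dose-difference part on a common depth grid, restricted to the region above 10% of the peak dose; the two curves are invented stand-ins, not the published PENH or Geant4 results.

        import numpy as np

        # Stand-in integral depth-dose curves on a common 1 mm depth grid.
        depth = np.arange(0.0, 380.0, 1.0)  # mm
        dose_a = np.exp(-((depth - 330.0) / 12.0) ** 2) + 0.3 * depth / 380.0
        dose_b = dose_a * (1.0 + 0.004 * np.cos(depth / 40.0))

        # Relative dose difference where dose exceeds 10% of the peak
        # (the distance-to-agreement half of a full gamma test is omitted).
        mask = dose_a > 0.1 * dose_a.max()
        rel_diff = np.abs(dose_b[mask] - dose_a[mask]) / dose_a.max()
        print(f"max dose difference in region of interest: {rel_diff.max():.2%}")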

  6. CALIBRATION METHODS OF A CONSTITUTIVE MODEL FOR PARTIALLY SATURATED SOILS: A BENCHMARKING EXERCISE WITHIN THE MUSE NETWORK

    OpenAIRE

    D'Onza, Francesca

    2008-01-01

    The paper presents a benchmarking exercise comparing different procedures, adopted by seven different teams of constitutive modellers, for the determination of parameter values in the Barcelona Basic Model, which is an elasto-plastic model for unsaturated soils. Each team is asked to determine a set of parameter values based on the same laboratory test data. The different sets of parameters are then employed to simulate soil behaviour along a variety of stress paths. The results are finally co...

  7. Indoor Modelling Benchmark for 3D Geometry Extraction

    Science.gov (United States)

    Thomson, C.; Boehm, J.

    2014-06-01

    A combination of faster, cheaper and more accurate hardware, more sophisticated software, and greater industry acceptance has laid the foundations for an increased desire for accurate 3D parametric models of buildings. Pointclouds are currently the data source of choice, with static terrestrial laser scanning the predominant tool for large, dense volume measurement. The current importance of pointclouds as the primary source of real world representation is endorsed by CAD software vendor acquisitions of pointcloud engines in 2011. Both the capture and modelling of indoor environments require great effort in time by the operator (and therefore cost). Automation is seen as a way to aid this by reducing the workload of the user, and some commercial packages have appeared that provide automation to some degree. In the data capture phase, advances in indoor mobile mapping systems are speeding up the process, albeit currently with a reduction in accuracy. As a result, this paper presents freely accessible pointcloud datasets of two typical areas of a building, each captured with two different capture methods and each with an accurate, wholly manually created model. These datasets are provided as a benchmark for the research community to gauge the performance and improvements of various techniques for indoor geometry extraction. With this in mind, non-proprietary, interoperable formats are provided, such as E57 for the scans and IFC for the reference model. The datasets can be found at: http://indoor-bench.github.io/indoor-bench.

  8. Benchmarks for interface-tracking codes in the consortium for advanced simulation of LWRs (CASL)

    International Nuclear Information System (INIS)

    A major innovation pursued by the Consortium for Advanced Simulation of LWRs (CASL) is the use of Interface Tracking Methods (ITM) to generate high-fidelity closure relations for two-phase flow and heat transfer phenomena (e.g. nucleate boiling, bubble break-up and coalescence, vapor condensation, etc.), to be used in coarser CFD, subchannel and system codes. ITMs do not assume an idealized geometry of the interface between the liquid and vapor phases, but rather calculate it from ‘first principles’. Also, used within the context of high-fidelity turbulence simulations, such as Direct Numerical Simulation (DNS) or Large Eddy Simulation (LES), ITMs can resolve the velocity (including the fluctuating field) and temperature/scalar gradients near the liquid-vapor interface, so prediction of the exchange of momentum, mass and heat at the interface in principle requires no empirical correlations. The physical complexity of the two-phase flow and heat transfer phenomena encountered in LWRs naturally lends itself to an ITM analysis approach. Several codes featuring ITM capabilities are available within CASL. These are TransAT, STAR-CCM+, PHASTA, FTC3D and FELBM. They use a variety of ITMs ranging from Volume-Of-Fluid to Level-Set, from Front-Tracking to Lattice-Boltzmann. A series of benchmark simulations is being developed to test the key capabilities of these codes and their ITMs. In this paper, three such benchmark simulations, testing DNS, LES and interface tracking, respectively, are briefly described. (author)

  9. Benchmarking of Simulation Codes Based on the Montague Resonance in the CERN Proton Synchrotron

    CERN Document Server

    Hofmann, Ingo; Cousineau, Sarah M; Franchetti, Giuliano; Giovannozzi, Massimo; Holmes, Jeffrey Alan; Jones, Frederick W; Luccio, Alfredo U; Machida, Shinji; Métral, E; Qiang, Ji; Ryne, Robert D; Spentzouris, Panagiotis

    2005-01-01

    Experimental data on emittance exchange by the space-charge-driven "Montague resonance" have been obtained at the CERN Proton Synchrotron in 2002-04 as a function of the working point. These data are used to advance the benchmarking of major simulation codes (ACCSIM, IMPACT, MICROMAP, ORBIT, SIMBAD, SIMPSONS, SYNERGIA) currently employed world-wide in the design or performance improvement of high intensity circular accelerators. In this paper we summarize the experimental findings and compare them with the first three steps of simulation results of the still progressing work.

  10. A Study on Benchmarking Models and Frameworks in Industrial SMEs: Challenges and Issues

    Directory of Open Access Journals (Sweden)

    Masoomeh Zeinalnezhad

    2011-01-01

    Full Text Available This paper is based on a literature review of recent publications in the field of benchmarking methodology implemented in small and medium enterprises (SMEs), with regard to measuring and benchmarking the upstream, leading or developmental aspects of organizations. Benchmarking has been recognized as an essential tool for continuous improvement and competitiveness. It can also help SMEs to improve their operational and financial performance. However, only a few entrepreneurs turn to benchmarking implementation, due to a lack of time and resources. In this study, current benchmarking models (2005 onwards) dedicated specifically to SMEs have been identified and their characteristics and objectives discussed. Key findings from this review confirm that this is an under-developed area of research and that most practitioner approaches are focused on benchmarking practices within SMEs. There is a need to extend the theoretical and practical aspects of benchmarking in SMEs by studying the process of benchmarking with regard to the novel concept of lead benchmarking as a possible means of achieving increased radical and innovative transformation in organizational change. From the review it emerged that lead, forward-looking and predictive benchmarking have not been considered in SMEs; future research could address them.

  11. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Directory of Open Access Journals (Sweden)

    Shane Ó Conchúir

    Full Text Available The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  12. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Science.gov (United States)

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available. PMID:26335248

  13. Benchmark hydrogeophysical data from a physical seismic model

    Science.gov (United States)

    Lorenzo, Juan M.; Smolkin, David E.; White, Christopher; Chollett, Shannon R.; Sun, Ting

    2013-01-01

    Theoretical fluid flow models are used regularly to predict and analyze porous media flow but require verification against natural systems. Seismic monitoring in a controlled laboratory setting at a nominal scale of 1:1000 in the acoustic frequency range can help improve fluid flow models as well as elasto-granular models for uncompacted saturated-unsaturated soils. A mid-scale sand tank allows for many highly repeatable, yet flexible, experimental configurations with different material compositions and pump rates while still capturing phenomena such as patchy saturation, flow fingering, or layering. The tank (˜6×9×0.44 m) contains a heterogeneous sand pack (1.52-1.7 phi). In a set of eight benchmark experiments the water table is raised inside the sand body at increments of ˜0.05 m. Seismic events (vertical component) are recorded by a pseudowalkaway 64-channel accelerometer array (20 Hz-20 kHz), at 78 kS/s, in 100-scan stacks so as to optimize signal-to-noise ratio. Three screened well sites monitor water depth (+/-3 mm) inside the sand body. Seismic data sets in SEG Y format are publicly downloadable from the internet (http://github.com/cageo/Lorenzo-2012), in order to allow comparisons of different seismic and fluid flow analyses. The capillary fringe does not appear to completely saturate, as expected, because the interpreted compressional-wave velocity values remain so low (water levels there is no large seismic impedance contrast across the top of the water table to generate a clear reflector. Preliminary results indicate an immediate need for several additional experiments whose data sets will be added to the online database. Future benchmark data sets will grow with a control data set to show conditions in the sand body before water levels rise, and a surface 3D data set. In later experiments, buried sensors will help reduce seismic attenuation effects and in-situ saturation sensors will provide calibration values.
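
    Because the records are distributed as standard SEG Y files, they can be inspected with generic seismic toolkits. A minimal sketch using ObsPy follows; the file name is a hypothetical placeholder for one of the downloadable gathers.

        from obspy import read

        # Load one benchmark gather from the repository cited above.
        stream = read("benchmark_shot_001.segy", format="SEGY")
        print(f"{len(stream)} traces at "
              f"{stream[0].stats.sampling_rate:.0f} Hz sampling rate")
        stream[0].plot()  # quick look at the first accelerometer channel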

  14. Mesoscale Benchmark Demonstration Problem 1: Mesoscale Simulations of Intra-granular Fission Gas Bubbles in UO2 under Post-irradiation Thermal Annealing

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yulan; Hu, Shenyang Y.; Montgomery, Robert; Gao, Fei; Sun, Xin; Tonks, Michael; Biner, Bullent; Millet, Paul; Tikare, Veena; Radhakrishnan, Balasubramaniam; Andersson, David

    2012-04-11

    A study was conducted to evaluate the capabilities of different numerical methods used to represent microstructure behavior at the mesoscale for irradiated material using an idealized benchmark problem. The purpose of the mesoscale benchmark problem was to provide a common basis to assess several mesoscale methods with the objective of identifying the strengths and areas of improvement in the predictive modeling of microstructure evolution. In this work, mesoscale models (phase-field, Potts, and kinetic Monte Carlo) developed by PNNL, INL, SNL, and ORNL were used to calculate the evolution kinetics of intra-granular fission gas bubbles in UO2 fuel under post-irradiation thermal annealing conditions. The benchmark problem was constructed to include important microstructural evolution mechanisms on the kinetics of intra-granular fission gas bubble behavior such as the atomic diffusion of Xe atoms, U vacancies, and O vacancies, the effect of vacancy capture and emission from defects, and the elastic interaction of non-equilibrium gas bubbles. An idealized set of assumptions was imposed on the benchmark problem to simplify the mechanisms considered. The capability and numerical efficiency of different models are compared against selected experimental and simulation results. These comparisons find that the phase-field methods, by the nature of the free energy formulation, are able to represent a larger subset of the mechanisms influencing the intra-granular bubble growth and coarsening mechanisms in the idealized benchmark problem as compared to the Potts and kinetic Monte Carlo methods. It is recognized that the mesoscale benchmark problem as formulated does not specifically highlight the strengths of the discrete particle modeling used in the Potts and kinetic Monte Carlo methods. Future efforts are recommended to construct increasingly more complex mesoscale benchmark problems to further verify and validate the predictive capabilities of the mesoscale modeling

  15. Theory, modeling, and simulation annual report, 1992

    Energy Technology Data Exchange (ETDEWEB)

    1993-05-01

    This report briefly discusses research on the following topics: development of electronic structure methods; modeling molecular processes in clusters; modeling molecular processes in solution; modeling molecular processes in separations chemistry; modeling interfacial molecular processes; modeling molecular processes in the atmosphere; methods for periodic calculations on solids; chemistry and physics of minerals; graphical user interfaces for computational chemistry codes; visualization and analysis of molecular simulations; integrated computational chemistry environment; and benchmark computations.

  16. An Evaluation of Fault Tolerant Wind Turbine Control Schemes applied to a Benchmark Model

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob

    keeping the power grids stable. Advanced Fault Tolerant Control is one of the potential tools to increase the reliability of modern wind turbines. A benchmark model for wind turbine fault detection and isolation and fault tolerant control has previously been proposed, and based on this benchmark an...... benchmark and is especially good at accommodating sensor faults. The two other evaluated solutions also accommodate sensor faults well, but have some issues which should be worked on before they can be considered a full solution to the benchmark problem.

  17. Towards a Core Model for Higher Education IT Management Benchmarking

    OpenAIRE

    Markus Juult, Janne

    2013-01-01

    This study evaluates three European higher education IT benchmarking projects by applying a custom comparison framework that is based on benchmarking literature and IT manager experience. The participating projects are Bencheit (Finland), UCISA (The United Kingdom) and UNIVERSITIC (Spain). EDUCAUSE (The United States of America) is also included as a project outside our geographical focus area due to its size and prominence in North America. Each of these projects is examined to map the data ...

  18. Photochemistry in Terrestrial Exoplanet Atmospheres I: Photochemistry Model and Benchmark Cases

    CERN Document Server

    Hu, Renyu; Bains, William

    2012-01-01

    We present a comprehensive photochemistry model for exploration of the chemical composition of terrestrial exoplanet atmospheres. The photochemistry model is designed from the ground up to have the capacity to treat all types of terrestrial planet atmospheres, ranging from oxidizing through reducing, which makes the code suitable for applications for the wide range of anticipated terrestrial exoplanet compositions. The one-dimensional chemical transport model treats up to 800 chemical reactions, photochemical processes, dry and wet deposition, surface emission and thermal escape of O, H, C, N and S bearing species, as well as formation and deposition of elemental sulfur and sulfuric acid aerosols. We validate the model by computing the atmospheric composition of current Earth and Mars and find agreement with observations of major trace gases in Earth's and Mars' atmospheres. We simulate several plausible atmospheric scenarios of terrestrial exoplanets, and choose three benchmark cases for atmospheres from red...
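
    At its core, a one-dimensional chemical transport model of this kind integrates, for each species i, a continuity equation of the general form

        \frac{\partial n_i}{\partial t} = P_i - L_i - \frac{\partial \Phi_i}{\partial z},
        \qquad
        \Phi_i = -K_{zz}\, N\, \frac{\partial}{\partial z}\!\left(\frac{n_i}{N}\right),

    where n_i is the number density, P_i and L_i are the chemical production and loss rates (including photolysis), \Phi_i is the vertical transport flux written here in its eddy-diffusion form with coefficient K_{zz}, and N is the total number density. This is the generic textbook form of such models, quoted here for orientation rather than as the paper's exact formulation, which also treats molecular diffusion and the boundary fluxes listed above.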

  19. A resource for benchmarking the usefulness of protein structure models

    Directory of Open Access Journals (Sweden)

    Carbajo Daniel

    2012-08-01

    Full Text Available Abstract Background Increasingly, biologists and biochemists use computational tools to design experiments to probe the function of proteins and/or to engineer them for a variety of different purposes. The most effective strategies rely on knowledge of the three-dimensional structure of the protein of interest. However, it is often the case that an experimental structure is not available and that models of different quality are used instead. On the other hand, the relationship between the quality of a model and its appropriate use is not easy to derive in general, and so far it has been analyzed in detail only for specific applications. Results This paper describes a database and related software tools that allow testing of a given structure-based method on models of a protein representing different levels of accuracy. The comparison of the results of a computational experiment on the experimental structure and on a set of its decoy models will allow developers and users to assess the specific threshold of accuracy required to perform the task effectively. Conclusions The ModelDB server automatically builds decoy models of different accuracy for a given protein of known structure and provides a set of useful tools for their analysis. Pre-computed data for a non-redundant set of deposited protein structures are available for analysis and download in the ModelDB database. Implementation, availability and requirements Project name: A resource for benchmarking the usefulness of protein structure models. Project home page: http://bl210.caspur.it/MODEL-DB/MODEL-DB_web/MODindex.php. Operating system(s): Platform independent. Programming language: Perl-BioPerl (program); mySQL, Perl DBI and DBD modules (database); php, JavaScript, Jmol scripting (web server). Other requirements: Java Runtime Environment v1.4 or later, Perl, BioPerl, CPAN modules, HHsearch, Modeller, LGA, NCBI Blast package, DSSP, Speedfill (Surfnet) and PSAIA. License: Free. Any

  20. A unified framework for benchmark dose estimation applied to mixed models and model averaging

    DEFF Research Database (Denmark)

    Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.

    2013-01-01

    This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both...
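
    For orientation, one widely used convention for continuous endpoints (one of several in the literature, and not necessarily the exact definition adopted in this article) specifies the benchmark dose implicitly through the fitted mean function f:

        f(\mathrm{BMD}) = f(0) + \mathrm{BMR} \cdot \sigma_0,

    where BMR is the pre-specified benchmark response and \sigma_0 is the residual standard deviation at dose zero, so the BMD is the dose at which the mean response has shifted from background by a fixed fraction of the natural variability.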

  1. Piloting a Process Maturity Model as an e-Learning Benchmarking Method

    Science.gov (United States)

    Petch, Jim; Calverley, Gayle; Dexter, Hilary; Cappelli, Tim

    2007-01-01

    As part of a national e-learning benchmarking initiative of the UK Higher Education Academy, the University of Manchester is carrying out a pilot study of a method to benchmark e-learning in an institution. The pilot was designed to evaluate the operational viability of a method based on the e-Learning Maturity Model developed at the University of…

  2. Benchmark models, planes, lines and points for future SUSY searches at the LHC

    International Nuclear Information System (INIS)

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  3. Benchmark models, planes, lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  4. Benchmark models, planes, lines and points for future SUSY searches at the LHC

    International Nuclear Information System (INIS)

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data. (orig.)

  5. Development of a benchmarking model for lithium battery electrodes

    Science.gov (United States)

    Bergholz, Timm; Korte, Carsten; Stolten, Detlef

    2016-07-01

    This paper presents a benchmarking model to enable the systematic selection of anode and cathode materials for lithium batteries in stationary applications, hybrid and battery electric vehicles. The model incorporates parameters for energy density, power density, safety, lifetime, costs and raw materials. Carbon, Li4Ti5O12 or TiO2 anodes combined with LiFePO4 cathodes are interesting combinations for application in hybrid power trains. The higher cost and raw-material prioritization of stationary applications hinders the breakthrough of Li4Ti5O12, and a combination of TiO2 and LiFePO4 is suggested instead. The favored combinations resemble state-of-the-art materials, whereas novel cell chemistries must be optimized for cells in battery electric vehicles. In contrast to actual research efforts, sulfur as a cathode material is excluded due to its low volumetric energy density and its known lifetime and safety issues. Lithium as an anode material is discarded due to safety issues linked to electrode melting and dendrite formation. A high-capacity composite Li2MnO3·LiNi0.5Co0.5O2 and a high-voltage spinel LiNi0.5Mn1.5O4 cathode with silicon as the anode material promise high energy densities with sufficient lifetime and safety properties if electrochemical and thermal stabilization of the electrolyte/electrode interfaces and bulk materials is achieved. The model allows a systematic top-down orientation of research on lithium batteries.
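
    A benchmarking model of this type can be read as a weighted multi-criteria score over candidate electrode pairings. The sketch below illustrates the mechanics with a plain weighted sum; the weights and normalised scores are invented for illustration and are not the values or the aggregation rule used in the cited study.

        # Criteria named above; weights and scores are hypothetical.
        weights = {"energy": 0.25, "power": 0.15, "safety": 0.20,
                   "lifetime": 0.15, "cost": 0.15, "raw_materials": 0.10}

        cells = {  # normalised scores in [0, 1] per anode/cathode combination
            "C / LiFePO4": {"energy": 0.6, "power": 0.7, "safety": 0.8,
                            "lifetime": 0.8, "cost": 0.7, "raw_materials": 0.8},
            "Li4Ti5O12 / LiFePO4": {"energy": 0.4, "power": 0.9, "safety": 0.9,
                                    "lifetime": 0.9, "cost": 0.4, "raw_materials": 0.6},
        }

        for name, s in cells.items():
            total = sum(weights[k] * s[k] for k in weights)
            print(f"{name}: {total:.2f}")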

  6. Inter-code comparison benchmark between DINA and TSC for ITER disruption modelling

    International Nuclear Information System (INIS)

    Results of 2D disruption modelling for the validation of benchmark ITER scenarios using two established codes, DINA and TSC, are compared. Although the simulation models employed in these two codes ought to be equivalent on the resistive time scale, quite different defining equations and formulations are adopted in their approaches. Moreover, there are considerable differences in the implemented models of the solid conducting structures placed on the periphery of the plasma, such as the vacuum vessel and blanket modules. Thus it has long been unanswered whether one of the two codes is really able to reproduce the other's results correctly, since a large number of code-wise differences render the comparison task exceedingly complicated. In this paper, it is demonstrated that after the simulations are set up accounting for the model differences, a reasonably good agreement is generally obtained, corroborating the correctness of the code results. When the halo current generation and its poloidal path in the first wall are included, however, the situation is more complicated. Because of the surface-averaged treatment of the magnetic field (current density) diffusion equation, DINA can only approximately handle the poloidal electric currents in the first wall that cross the field lines. Validation is carried out for DINA simulations of the halo current generation by comparing with TSC simulations, where the treatment of halo current dynamics is more justifiable. The specific details of each code that affect the resulting ITER disruption predictions are highlighted and discussed. (paper)

  7. Simulation of hydrogen deflagration experiment – Benchmark exercise with lumped-parameter codes

    International Nuclear Information System (INIS)

    Highlights: • Blind and open simulations of hydrogen combustion experiment in large-scale containment-like facility with different lumped-parameter codes. • Simulation of axial as well as radial flame propagation. • Confirmation of adequacy of lumped-parameter codes for safety analyses of actual nuclear power plants. - Abstract: An experiment on hydrogen deflagration (Upward Flame Propagation Experiment – UFPE) was proposed by the Jozef Stefan Institute (Slovenia) and performed in the HYKA A2 facility at the Karlsruhe Institute of Technology (Germany). The experimental results were used to organize a benchmark exercise for lumped-parameter codes. Six organizations (JSI, AEP, LEI, NUBIKI, RSE and UJD SR) participated in the benchmark exercise, using altogether four different computer codes: ANGAR, ASTEC, COCOSYS and ECART. Both blind and open simulations were performed. In general, all the codes provided satisfactory results of the pressure increase, whereas the results of the temperature show a wider dispersal. Concerning the flame axial and radial velocities, the results may be considered satisfactory, given the inherent simplification of the lumped-parameter description compared to the local instantaneous description

  8. Simulation of hydrogen deflagration experiment – Benchmark exercise with lumped-parameter codes

    Energy Technology Data Exchange (ETDEWEB)

    Kljenak, Ivo, E-mail: ivo.kljenak@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Kuznetsov, Mikhail, E-mail: mike.kuznetsov@kit.edu [Karlsruhe Institute of Technology, Kaiserstraße 12, 76131 Karlsruhe (Germany); Kostka, Pal, E-mail: kostka@nubiki.hu [NUBIKI Nuclear Safety Research Institute, Konkoly-Thege Miklós út 29-33, 1121 Budapest (Hungary); Kubišova, Lubica, E-mail: lubica.kubisova@ujd.gov.sk [Nuclear Regulatory Authority of the Slovak Republic, Bajkalská 27, 82007 Bratislava (Slovakia); Maltsev, Mikhail, E-mail: maltsev_MB@aep.ru [JSC Atomenergoproekt, 1, st. Podolskykh Kursantov, Moscow (Russian Federation); Manzini, Giovanni, E-mail: giovanni.manzini@rse-web.it [Ricerca sul Sistema Energetico, Via Rubattino 54, 20134 Milano (Italy); Povilaitis, Mantas, E-mail: mantas.p@mail.lei.lt [Lithuania Energy Institute, Breslaujos g.3, 44403 Kaunas (Lithuania)

    2015-03-15

    Highlights: • Blind and open simulations of hydrogen combustion experiment in large-scale containment-like facility with different lumped-parameter codes. • Simulation of axial as well as radial flame propagation. • Confirmation of adequacy of lumped-parameter codes for safety analyses of actual nuclear power plants. - Abstract: An experiment on hydrogen deflagration (Upward Flame Propagation Experiment – UFPE) was proposed by the Jozef Stefan Institute (Slovenia) and performed in the HYKA A2 facility at the Karlsruhe Institute of Technology (Germany). The experimental results were used to organize a benchmark exercise for lumped-parameter codes. Six organizations (JSI, AEP, LEI, NUBIKI, RSE and UJD SR) participated in the benchmark exercise, using altogether four different computer codes: ANGAR, ASTEC, COCOSYS and ECART. Both blind and open simulations were performed. In general, all the codes provided satisfactory results of the pressure increase, whereas the results of the temperature show a wider dispersal. Concerning the flame axial and radial velocities, the results may be considered satisfactory, given the inherent simplification of the lumped-parameter description compared to the local instantaneous description.

  9. The PRISM Benchmark Suite

    OpenAIRE

    Kwiatkowska, Marta; Norman, Gethin; Parker, David

    2012-01-01

    We present the PRISM benchmark suite: a collection of probabilistic models and property specifications, designed to facilitate testing, benchmarking and comparisons of probabilistic verification tools and implementations.

  10. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency....

  11. Systematic benchmarking of large molecular dynamics simulations employing GROMACS on massive multiprocessing facilities.

    Science.gov (United States)

    Gruber, Christian C; Pleiss, Jürgen

    2011-03-01

    The influence of the total number of cores, the number of cores dedicated to Particle mesh Ewald (PME) calculation and the choice of single vs. double precision on the performance of molecular dynamics (MD) simulations of systems ranging from 70,000 to 1.7 million atoms was analyzed on three different high-performance computing facilities employing GROMACS 4 by running about 6000 benchmark simulations. Small and medium-sized systems scaled linearly up to 64 and 128 cores, respectively. Systems with half a million to 1.2 million atoms scaled linearly up to 256 cores. The best performance was achieved by dedicating 25% of the total number of cores to PME calculation. Double precision calculations lowered the performance by 30-50%. A database for collecting information about MD simulations and the achieved performance was created; it is freely available online and allows fast estimation of the performance that can be expected in similar environments. PMID:20812321
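
    The 25% PME-core rule of thumb reported above maps directly onto mdrun's -npme option. The helper below is a minimal sketch of that rule, assuming the fixed 25% fraction from the benchmark; the function itself is illustrative and not part of GROMACS.

        # Sketch: choose a PME rank count near 25% of the total cores, the
        # optimum reported above. The helper and the fixed fraction are
        # illustrative assumptions, not part of GROMACS.
        def suggest_pme_ranks(total_cores: int, pme_fraction: float = 0.25) -> int:
            """Return a rank count suitable for mdrun's -npme flag."""
            npme = max(1, round(total_cores * pme_fraction))
            return min(npme, total_cores - 1)  # leave at least one PP rank

        for cores in (64, 128, 256):
            print(cores, "cores -> -npme", suggest_pme_ranks(cores))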

  12. Generation IV benchmarking of TRISO fuel performance models under accident conditions. Modeling input data

    Energy Technology Data Exchange (ETDEWEB)

    Blaise Collin

    2014-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation examination (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects subsist. The first two steps imply the involvement of the benchmark participants in a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison.

  13. A framework for implementation of organ effect models in TOPAS with benchmarks extended to proton therapy

    International Nuclear Information System (INIS)

    The aim of this work was to develop a framework for modeling organ effects within TOPAS (TOol for PArticle Simulation), a wrapper of the Geant4 Monte Carlo toolkit that facilitates particle therapy simulation. The DICOM interface for TOPAS was extended to permit contour input, used to assign voxels to organs. The following dose response models were implemented: the Lyman–Kutcher–Burman model, the critical element model, the population based critical volume model, the parallel-serial model, a sigmoid-based model of Niemierko for normal tissue complication probability and tumor control probability (TCP), and a Poisson-based model for TCP. The framework allows easy manipulation of the parameters of these models and the implementation of other models. As part of the verification, results for the parallel-serial and Poisson model for x-ray irradiation of a water phantom were compared to data from the AAPM Task Group 166. When using the task group dose-volume histograms (DVHs), results were found to be sensitive to the number of points in the DVH, with differences up to 2.4%, some of which are attributable to differences between the implemented models. New results are given with the point spacing specified. When using Monte Carlo calculations with TOPAS, despite the relatively good match to the published DVHs, differences up to 9% were found for the parallel-serial model (for a maximum DVH difference of 2%) and up to 0.5% for the Poisson model (for a maximum DVH difference of 0.5%). However, differences of 74.5% (in Rectangle1), 34.8% (in PTV) and 52.1% (in Triangle) were found for the critical element, critical volume and the sigmoid-based models, respectively. We propose a new benchmark for verification of organ effect models in proton therapy. The benchmark consists of customized structures in the spread out Bragg peak plateau, normal tissue, tumor, penumbra and in the distal region. The DVHs, DVH point spacing, and results of the organ effect models are provided.

  14. A framework for implementation of organ effect models in TOPAS with benchmarks extended to proton therapy

    Science.gov (United States)

    Ramos-Méndez, J.; Perl, J.; Schümann, J.; Shin, J.; Paganetti, H.; Faddegon, B.

    2015-07-01

    The aim of this work was to develop a framework for modeling organ effects within TOPAS (TOol for PArticle Simulation), a wrapper of the Geant4 Monte Carlo toolkit that facilitates particle therapy simulation. The DICOM interface for TOPAS was extended to permit contour input, used to assign voxels to organs. The following dose response models were implemented: the Lyman-Kutcher-Burman model, the critical element model, the population based critical volume model, the parallel-serial model, a sigmoid-based model of Niemierko for normal tissue complication probability and tumor control probability (TCP), and a Poisson-based model for TCP. The framework allows easy manipulation of the parameters of these models and the implementation of other models. As part of the verification, results for the parallel-serial and Poisson model for x-ray irradiation of a water phantom were compared to data from the AAPM Task Group 166. When using the task group dose-volume histograms (DVHs), results were found to be sensitive to the number of points in the DVH, with differences up to 2.4%, some of which are attributable to differences between the implemented models. New results are given with the point spacing specified. When using Monte Carlo calculations with TOPAS, despite the relatively good match to the published DVHs, differences up to 9% were found for the parallel-serial model (for a maximum DVH difference of 2%) and up to 0.5% for the Poisson model (for a maximum DVH difference of 0.5%). However, differences of 74.5% (in Rectangle1), 34.8% (in PTV) and 52.1% (in Triangle) were found for the critical element, critical volume and the sigmoid-based models, respectively. We propose a new benchmark for verification of organ effect models in proton therapy. The benchmark consists of customized structures in the spread out Bragg peak plateau, normal tissue, tumor, penumbra and in the distal region. The DVHs, DVH point spacing, and results of the organ effect models are provided.
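
    Of the dose response models listed above, the Lyman-Kutcher-Burman model is compact enough to state in full. The sketch below computes NTCP from a differential DVH via the generalized equivalent uniform dose; the parameter values are placeholders for illustration, not TOPAS defaults.

        # Minimal sketch of the Lyman-Kutcher-Burman (LKB) NTCP model from a
        # differential DVH. TD50, m and n are illustrative placeholders.
        from math import erf, sqrt

        def lkb_ntcp(doses, volumes, td50=50.0, m=0.3, n=0.1):
            """doses in Gy; volumes are fractional volumes summing to 1."""
            # Generalized equivalent uniform dose with volume parameter n.
            geud = sum(v * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n
            t = (geud - td50) / (m * td50)
            return 0.5 * (1.0 + erf(t / sqrt(2.0)))  # standard normal CDF

        print(lkb_ntcp([30.0, 50.0, 60.0], [0.2, 0.5, 0.3]))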

  15. Tree-ring responses to extreme climate events as benchmarks for terrestrial dynamic vegetation models

    Directory of Open Access Journals (Sweden)

    A. Rammig

    2014-02-01

    Full Text Available Climate extremes can trigger exceptional responses in terrestrial ecosystems, for instance by altering growth or mortality rates. Effects of this kind are often manifested in reductions of the local net primary production (NPP). Investigating a set of European long-term data on annual radial tree growth confirms this pattern: we find that 53% of tree ring width (TRW) indices are more than one standard deviation below the mean, and up to 16% of the TRW values are more than two standard deviations below the mean, in years with extremely high temperatures and low precipitation. Based on these findings we investigate whether climate-driven patterns in long-term tree growth data may serve as benchmarks for state-of-the-art dynamic vegetation models such as LPJmL. The model simulates NPP, but not the radial tree ring growth explicitly, hence a generic method is required to ensure an objective comparison. Here we propose an analysis scheme that quantifies the coincidence rate of climate extremes with some biotic response (here TRW or simulated NPP). We find that the reduction in tree-ring width during drought extremes is lower than the corresponding reduction of simulated NPP. We identify ten extreme years during the 20th century in which both model and measurements indicate high coincidence rates across Europe. However, we detect substantial regional differences in simulated and observed responses to extreme events. One explanation for this discrepancy could be that the tree-ring data have preferentially been sampled at more climatically stressed sites. The model-data difference is amplified by the fact that dynamic vegetation models are designed to simulate mean ecosystem responses at landscape or regional scale. However, we find that both model results and measurements display carry-over effects from the previous year. We conclude that using radial tree growth is a good basis for generic model benchmarks if the data are analyzed by scale-free measures such as coincidence analysis. Our study shows
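
    The coincidence-rate measure proposed above can be made concrete in a few lines: given standardized TRW anomalies and a mask of climatically extreme years, count how often the extremes coincide with strongly reduced growth. The -1 SD threshold and the input format are assumptions for illustration.

        # Sketch of a coincidence rate between climate extremes and growth
        # reductions; thresholds and inputs are illustrative assumptions.
        def coincidence_rate(trw_index, extreme_year, threshold=-1.0):
            """Fraction of extreme years with standardized TRW below threshold."""
            hits = [t < threshold for t, e in zip(trw_index, extreme_year) if e]
            return sum(hits) / len(hits) if hits else float("nan")

        trw = [0.2, -1.4, 0.1, -2.3, 0.5]            # standardized anomalies
        extreme = [False, True, False, True, False]  # hot/dry years
        print(coincidence_rate(trw, extreme))        # -> 1.0 in this toy case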

  16. Summary of FY15 results of benchmark modeling activities

    Energy Technology Data Exchange (ETDEWEB)

    Arguello, J. Guadalupe [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    Sandia is a contributing partner in the third phase of a U.S.-German "Joint Project" entitled "Comparison of current constitutive models and simulation procedures on the basis of model calculations of the thermo-mechanical behavior and healing of rock salt." The first goal of the project is to check the ability of numerical modeling tools to correctly describe the relevant deformation phenomena in rock salt under various influences. Achieving this goal will lead to increased confidence in the results of numerical simulations related to the secure storage of radioactive wastes in rock salt, thereby enhancing the acceptance of the results. These results may ultimately be used to make various assertions regarding both the stability analysis of an underground repository in salt, during the operating phase, and the long-term integrity of the geological barrier against the release of harmful substances into the biosphere, in the post-operating phase.

  17. SCALE Modeling of Selected Neutronics Test Problems within the OECD UAM LWR’s Benchmark

    Directory of Open Access Journals (Sweden)

    Luigi Mercatali

    2013-01-01

    Full Text Available The OECD UAM Benchmark was launched in 2005 with the objective of determining the uncertainty in the simulation of Light Water Reactor (LWR) system calculations at all stages of the coupled reactor physics and thermal-hydraulics modeling. Within the framework of the “Neutronics Phase” of the Benchmark, the solutions of some selected test cases at the cell physics and lattice physics levels are presented. The SCALE 6.1 code package has been used for the neutronics modeling of the selected exercises. Sensitivity and Uncertainty (S/U) analysis based on the generalized perturbation theory has been performed in order to assess the uncertainty of the computation of some selected reactor integral parameters due to the uncertainty in the basic nuclear data. As a general trend, it has been found that the main sources of uncertainty are the 238U (n,γ) cross section and the 239Pu nubar for the UOX- and the MOX-fuelled test cases, respectively. Moreover, the reference solutions for the test cases obtained using Monte Carlo methodologies, together with a comparison between deterministic and stochastic solutions, are presented.
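
    The S/U analysis referred to above rests on first-order uncertainty propagation, the so-called sandwich rule. In the notation below (assumed here for illustration), R is an integral response, S its relative sensitivity vector with respect to the nuclear data σ, and C_σ their relative covariance matrix:

        % Sandwich rule for GPT-based sensitivity/uncertainty analysis.
        \left(\frac{\Delta R}{R}\right)^{2} = S^{T} C_{\sigma}\, S,
        \qquad
        S_{i} = \frac{\sigma_{i}}{R}\,\frac{\partial R}{\partial \sigma_{i}}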

  18. Verification Benchmarks to Assess the Implementation of Computational Fluid Dynamics Based Hemolysis Prediction Models.

    Science.gov (United States)

    Hariharan, Prasanna; D'Souza, Gavin; Horner, Marc; Malinauskas, Richard A; Myers, Matthew R

    2015-09-01

    As part of an ongoing effort to develop verification and validation (V&V) standards for using computational fluid dynamics (CFD) in the evaluation of medical devices, we have developed idealized flow-based verification benchmarks to assess the implementation of commonly cited power-law based hemolysis models in CFD. The verification process ensures that all governing equations are solved correctly and the model is free of user and numerical errors. To perform verification for power-law based hemolysis modeling, analytical solutions for the Eulerian power-law blood damage model (which estimates the hemolysis index (HI) as a function of shear stress and exposure time) were obtained for Couette and inclined Couette flow models, and for Newtonian and non-Newtonian pipe flow models. Subsequently, CFD simulations of fluid flow and HI were performed using Eulerian and three different Lagrangian-based hemolysis models and compared with the analytical solutions. For all the geometries, the blood damage results from the Eulerian-based CFD simulations matched the Eulerian analytical solutions within ∼1%, which indicates successful implementation of the Eulerian hemolysis model. Agreement between the Lagrangian and Eulerian models depended upon the choice of the hemolysis power-law constants. For the commonly used values of power-law constants (α = 1.9-2.42 and β = 0.65-0.80), in the absence of flow acceleration, most of the Lagrangian models matched the Eulerian results within 5%. In the presence of flow acceleration (inclined Couette flow), moderate differences (∼10%) were observed between the Lagrangian and Eulerian models. This difference increased to greater than 100% as the β exponent decreased. These simplified flow problems can be used as standard benchmarks for verifying the implementation of blood damage predictive models in commercial and open-source CFD codes. The current study only used the power-law model as an illustrative example to emphasize the need
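
    The power-law blood damage model being verified has a simple closed form at constant shear stress, HI = C τ^α t^β. The sketch below evaluates it for a Couette-like exposure; α and β are chosen from within the ranges quoted above, and the constant C is an assumed illustrative value.

        # Sketch of the power-law hemolysis index HI = C * tau^alpha * t^beta
        # at constant shear stress (Couette-like exposure). All constants are
        # illustrative, not recommended values.
        def hemolysis_index(tau_pa, t_s, C=3.62e-5, alpha=2.416, beta=0.785):
            """tau_pa: scalar shear stress [Pa]; t_s: exposure time [s]."""
            return C * tau_pa ** alpha * t_s ** beta

        print(hemolysis_index(tau_pa=100.0, t_s=0.1))  # HI in percent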

  19. Physical Model Development and Benchmarking for MHD Flows in Blanket Design

    International Nuclear Information System (INIS)

    An advanced simulation environment to model incompressible MHD flows relevant to blanket conditions in fusion reactors has been developed at HyPerComp in research collaboration with TEXCEL. The goals of this phase-II project are two-fold: The first is the incorporation of crucial physical phenomena such as induced magnetic field modeling, and extending the capabilities beyond fluid flow prediction to model heat transfer with natural convection and mass transfer including tritium transport and permeation. The second is the design of a sequence of benchmark tests to establish code competence for several classes of physical phenomena in isolation as well as in select (termed here as 'canonical') combinations. No previous attempts to develop such a comprehensive MHD modeling capability exist in the literature, and this study represents essentially uncharted territory. During the course of this Phase-II project, a significant breakthrough was achieved in modeling liquid metal flows at high Hartmann numbers. We developed a unique mathematical technique to accurately compute the fluid flow in complex geometries at extremely high Hartmann numbers (10,000 and greater), thus extending the state of the art of liquid metal MHD modeling relevant to fusion reactors at the present time. These developments have been published in noted international journals. A sequence of theoretical and experimental results was used to verify and validate the results obtained. The code was applied to a complete DCLL module simulation study with promising results.

  20. Physical Model Development and Benchmarking for MHD Flows in Blanket Design

    Energy Technology Data Exchange (ETDEWEB)

    Ramakanth Munipalli; P.-Y. Huang; C. Chandler; C. Rowell; M.-J. Ni; N. Morley; S. Smolentsev; M. Abdou

    2008-06-05

    An advanced simulation environment to model incompressible MHD flows relevant to blanket conditions in fusion reactors has been developed at HyPerComp in research collaboration with TEXCEL. The goals of this phase-II project are two-fold: The first is the incorporation of crucial physical phenomena such as induced magnetic field modeling, and extending the capabilities beyond fluid flow prediction to model heat transfer with natural convection and mass transfer including tritium transport and permeation. The second is the design of a sequence of benchmark tests to establish code competence for several classes of physical phenomena in isolation as well as in select (termed here as “canonical”) combinations. No previous attempts to develop such a comprehensive MHD modeling capability exist in the literature, and this study represents essentially uncharted territory. During the course of this Phase-II project, a significant breakthrough was achieved in modeling liquid metal flows at high Hartmann numbers. We developed a unique mathematical technique to accurately compute the fluid flow in complex geometries at extremely high Hartmann numbers (10,000 and greater), thus extending the state of the art of liquid metal MHD modeling relevant to fusion reactors at the present time. These developments have been published in noted international journals. A sequence of theoretical and experimental results was used to verify and validate the results obtained. The code was applied to a complete DCLL module simulation study with promising results.

  1. Upgrading the Benchmark Simulation Model Framework with emerging challenges - A study of N2O emissions and the fate of pharmaceuticals in urban wastewater systems

    DEFF Research Database (Denmark)

    Snip, Laura

    Nowadays a wastewater treatment plant (WWTP) is not only expected to remove traditional pollutants from the wastewater; other emerging challenges have arisen as well. A WWTP is now, among other things, expected to also minimise its carbon footprint and deal with micropollutants. Optimising the...... greenhouse gas (GHG) emissions and the removal rate of micropollutants (MPs), modelling these processes for dynamic simulations and evaluation seems to be a promising tool for optimisation of a WWTP. Therefore, in this thesis the BSM is upgraded with processes describing GHG emissions and MP removal....... Regarding GHG emissions, the focus is placed on the production of nitrous oxide (N2O). As micropollutants comprise a wide range of chemicals, pharmaceuticals are selected here as specific examples to be studied. Different nitrification models containing N2O-producing processes are tested and used for an...

  2. OECD/NEA main steam line break PWR benchmark simulation by TRACE/S3K coupled code

    International Nuclear Information System (INIS)

    A coupling between the TRACE system thermal-hydraulics code and the SIMULATE-3K (S3K) three-dimensional reactor kinetics code has been developed in a collaboration between the Paul Scherrer Institut (PSI) and Studsvik. In order to verify the coupling scheme and the coupled code capabilities with regard to plant transients, the OECD/NEA Main Steam Line Break PWR benchmark was simulated with the coupled TRACE/S3K code. The core/plant system data were taken from the benchmark specifications, while the nuclear data were generated with Studsvik's lattice code CASMO-4 and the core analysis code SIMULATE-3. The TRACE/S3K results were compared with the published results obtained by the 17 participants of the benchmark. The comparison shows that the TRACE/S3K code satisfactorily reproduces the main transient parameters, namely the power and reactivity history, the steam generator inventory, and the pressure response. (author)

  3. Benchmark of the FLUKA model of crystal channeling against the UA9-H8 experiment

    International Nuclear Information System (INIS)

    Channeling in bent crystals is increasingly considered as an option for the collimation of high-energy particle beams. The installation of crystals in the LHC has taken place during this past year and aims at demonstrating the feasibility of crystal collimation and a possible cleaning efficiency improvement. The performance of CERN collimation insertions is evaluated with the Monte Carlo code FLUKA, which is capable of simulating energy deposition in collimators as well as beam loss monitor signals. A new model of crystal channeling was developed specifically so that similar simulations can be conducted in the case of crystal-assisted collimation. In this paper, the most recent results of this model are brought forward in the framework of a joint activity inside the UA9 collaboration to benchmark the different simulation tools available. The performance of crystal STF 45, produced at INFN Ferrara, was measured at the H8 beamline at CERN in 2010 and serves as the basis for the comparison. Distributions of deflected particles are shown to be in very good agreement with experimental data. Calculated dechanneling lengths and crystal performance in the transition region between the amorphous regime and volume reflection are also close to the measured ones.

  4. PHOTOCHEMISTRY IN TERRESTRIAL EXOPLANET ATMOSPHERES. I. PHOTOCHEMISTRY MODEL AND BENCHMARK CASES

    International Nuclear Information System (INIS)

    We present a comprehensive photochemistry model for exploration of the chemical composition of terrestrial exoplanet atmospheres. The photochemistry model is designed from the ground up to have the capacity to treat all types of terrestrial planet atmospheres, ranging from oxidizing through reducing, which makes the code suitable for applications for the wide range of anticipated terrestrial exoplanet compositions. The one-dimensional chemical transport model treats up to 800 chemical reactions, photochemical processes, dry and wet deposition, surface emission, and thermal escape of O, H, C, N, and S bearing species, as well as formation and deposition of elemental sulfur and sulfuric acid aerosols. We validate the model by computing the atmospheric composition of current Earth and Mars and find agreement with observations of major trace gases in Earth's and Mars' atmospheres. We simulate several plausible atmospheric scenarios of terrestrial exoplanets and choose three benchmark cases for atmospheres from reducing to oxidizing. The most interesting finding is that atomic hydrogen is always a more abundant reactive radical than the hydroxyl radical in anoxic atmospheres. Whether atomic hydrogen is the most important removal path for a molecule of interest also depends on the relevant reaction rates. We also find that volcanic carbon compounds (i.e., CH4 and CO2) are chemically long-lived and tend to be well mixed in both reducing and oxidizing atmospheres, and their dry deposition velocities to the surface control the atmospheric oxidation states. Furthermore, we revisit whether photochemically produced oxygen can cause false positives for detecting oxygenic photosynthesis, and find that in 1 bar CO2-rich atmospheres oxygen and ozone may build up to levels that have conventionally been accepted as signatures of life, if there is no surface emission of reducing gases. The atmospheric scenarios presented in this paper can serve as the benchmark atmospheres for
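
    The backbone of such a one-dimensional chemical transport model is a continuity equation per species, balancing chemical production and loss against the divergence of the vertical transport flux. Schematically (eddy diffusion only; molecular diffusion and escape terms are omitted in this sketch):

        % 1-D continuity equation for species i: number density n_i,
        % production P_i, loss rate L_i, vertical flux Phi_i, eddy
        % diffusivity K_zz, total number density N.
        \frac{\partial n_{i}}{\partial t} = P_{i} - L_{i} n_{i}
          - \frac{\partial \Phi_{i}}{\partial z},
        \qquad
        \Phi_{i} = -K_{zz}\, N\, \frac{\partial}{\partial z}\!\left(\frac{n_{i}}{N}\right)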

  5. Benchmark test of drift-kinetic and gyrokinetic codes through neoclassical transport simulations

    International Nuclear Information System (INIS)

    Two simulation codes that solve the drift-kinetic or gyrokinetic equation in toroidal plasmas are benchmarked by comparing their simulation results for neoclassical transport. The two codes are the drift-kinetic δf Monte Carlo code (FORTEC-3D) and the gyrokinetic full-f Vlasov code (GT5D), both of which solve the radially global, five-dimensional kinetic equation including the linear Fokker-Planck collision operator. In a tokamak configuration, the neoclassical radial heat flux and the force balance relation, which relates the parallel mean flow to the radial electric field and the temperature gradient, are compared between these two codes, and their results are also compared with local neoclassical transport theory. It is found that the simulation results of the two codes coincide very well over a wide range of the plasma collisionality parameter ν* = 0.01 - 10 and also agree with the theoretical estimations. The time evolution of the radial electric field and particle flux, and the radial profile of the geodesic acoustic mode frequency, also coincide very well. These facts guarantee the capability of GT5D to simulate plasma turbulence transport including proper neoclassical effects of collisional diffusion and equilibrium radial electric field. (author)

  6. Characterization of Cetyltrimethylammonium Bromide/Hexanol Reverse Micelles by Experimentally Benchmarked Molecular Dynamics Simulations.

    Science.gov (United States)

    Fuglestad, Brian; Gupta, Kushol; Wand, A Joshua; Sharp, Kim A

    2016-02-23

    Encapsulation of small molecules, proteins, and other macromolecules within the protective water core of reverse micelles is emerging as a powerful strategy for a variety of applications. The cationic surfactant cetyltrimethylammonium bromide (CTAB) in combination with hexanol as a cosurfactant is particularly useful in the context of solution NMR spectroscopy of encapsulated proteins. Small-angle X-ray and neutron scattering is employed to investigate the internal structure of the CTAB/hexanol reverse micelle particle under conditions appropriate for high-resolution NMR spectroscopy. The scattering profiles are used to benchmark extensive molecular dynamics simulations of this reverse micelle system and indicate that the parameters used in these simulations recapitulate experimental results. Scattering profiles and simulations indicate formation of homogeneous solutions of small, approximately spherical reverse micelle particles at a water loading of 20, composed of ∼150 CTAB and 240 hexanol molecules. The 3000 waters comprising the reverse micelle core show a gradient of translational diffusion that reaches that of bulk water at the center. Rotational diffusion is slowed relative to bulk throughout the water core, with the greatest slowing near the CTAB headgroups. The 5 Å thick interfacial region of the micelle consists of overlapping layers of Br(-)-enriched water, CTAB headgroups, and hexanol hydroxyl groups, containing about one-third of the total water. This study employs well-parametrized MD simulations, X-ray and neutron scattering, and electrostatic theory to illuminate fundamental properties of CTAB/hexanol reverse micelle size, shape, partitioning, and water behavior. PMID:26840651

  7. Benchmarking electron-cloud simulations and pressure measurements at the LHC

    CERN Document Server

    Dominguez, O

    2013-01-01

    During the beam commissioning of the Large Hadron Collider (LHC) with 150, 75, 50 and 25-ns bunch spacing, important electron-cloud effects, like pressure rise, cryogenic heat load, beam instabilities or emittance growth, were observed. A method has been developed to infer different key beam-pipe surface parameters by benchmarking simulations and pressure rise observed in the machine. This method allows us to monitor the scrubbing process (i.e. the reduction of the secondary emission yield as a function of time) in the regions where the vacuum-pressure gauges are located, in order to decide on the most appropriate strategies for machine operation. In this paper we present the methodology and first results from applying this technique to the LHC.

  8. A software tool for creating simulated outbreaks to benchmark surveillance systems

    Directory of Open Access Journals (Sweden)

    Olson Karen L

    2005-07-01

    Full Text Available Abstract Background Evaluating surveillance systems for the early detection of bioterrorism is particularly challenging when systems are designed to detect events for which there are few or no historical examples. One approach to benchmarking outbreak detection performance is to create semi-synthetic datasets containing authentic baseline patient data (noise) and injected artificial patient clusters (signal). Methods We describe a software tool, the AEGIS Cluster Creation Tool (AEGIS-CCT), that enables users to create simulated clusters with controlled feature sets, varying the desired cluster radius, density, distance, relative location from a reference point, and temporal epidemiological growth pattern. AEGIS-CCT does not require the use of an external geographical information system program for cluster creation. The cluster creation tool is an open source program, implemented in Java, and is freely available under the Lesser GNU Public License at its Sourceforge website. Cluster data are written to files or can be appended to existing files so that the resulting file will include both existing baseline and artificially added cases. Multiple cluster file creation is an automated process in which multiple cluster files are created by varying a single parameter within a user-specified range. To evaluate the output of this software tool, sets of test clusters were created and graphically rendered. Results Based on user-specified parameters describing the location, properties, and temporal pattern of simulated clusters, AEGIS-CCT created clusters accurately and uniformly. Conclusion AEGIS-CCT enables the ready creation of datasets for benchmarking outbreak detection systems. It may be useful for automating the testing and validation of spatial and temporal cluster detection algorithms.
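
    The kind of cluster AEGIS-CCT injects can be illustrated in a few lines: sample case locations uniformly within a disc around a chosen centre and let daily case counts follow an exponential growth pattern. The sketch below illustrates the idea only and is not AEGIS-CCT code.

        # Sketch: synthetic spatio-temporal cluster with uniform spatial
        # density and exponential temporal growth. Illustrative only.
        import math, random

        def make_cluster(cx, cy, radius_km, n_days, growth=1.5, day0_cases=2):
            cases = []
            for day in range(n_days):
                n = round(day0_cases * growth ** day)   # exponential growth
                for _ in range(n):
                    r = radius_km * math.sqrt(random.random())  # uniform disc
                    theta = random.uniform(0.0, 2.0 * math.pi)
                    cases.append((day, cx + r * math.cos(theta),
                                       cy + r * math.sin(theta)))
            return cases

        print(len(make_cluster(0.0, 0.0, radius_km=5.0, n_days=4)))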

  9. Benchmarking biological nutrient removal in wastewater treatment plants: influence of mathematical model assumptions.

    Science.gov (United States)

    Flores-Alsina, Xavier; Gernaey, Krist V; Jeppsson, Ulf

    2012-01-01

    This paper examines the effect of different model assumptions when describing biological nutrient removal (BNR) by the activated sludge models (ASM) 1, 2d & 3. The performance of a nitrogen removal (WWTP1) and a combined nitrogen and phosphorus removal (WWTP2) benchmark wastewater treatment plant was compared for a series of model assumptions. Three different model approaches describing BNR are considered. In the reference case, the original model implementations are used to simulate WWTP1 (ASM1 & 3) and WWTP2 (ASM2d). The second set of models includes a reactive settler, which extends the description of the non-reactive TSS sedimentation and transport in the reference case with the full set of ASM processes. Finally, the third set of models is based on including electron acceptor dependency of biomass decay rates for ASM1 (WWTP1) and ASM2d (WWTP2). The results show that incorporation of a reactive settler: (1) increases the hydrolysis of particulates; (2) increases the plant's overall denitrification efficiency by reducing the S(NOx) concentration at the bottom of the clarifier; (3) increases the oxidation of COD compounds; (4) increases X(OHO) and X(ANO) decay; and, finally, (5) increases the growth of X(PAO) and formation of X(PHA,Stor) for ASM2d, which has a major impact on the whole P removal system. Introduction of electron acceptor dependent decay leads to a substantial increase of the concentration of X(ANO), X(OHO) and X(PAO) in the bottom of the clarifier. The paper ends with a critical discussion of the influence of the different model assumptions, and emphasizes the need for a model user to understand the significant differences in simulation results that are obtained when applying different combinations of 'standard' models. PMID:22466599
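
    The third model variant, electron acceptor dependent decay, amounts to scaling the decay rate with Monod-type switching functions on oxygen and nitrate. A minimal sketch, with assumed half-saturation constants and reduction factors (not the values used in the cited study):

        # Sketch: decay rate reduced under anoxic/anaerobic conditions via
        # Monod switching functions. All constants are assumed values.
        def decay_rate(b_H, S_O2, S_NO3, K_O2=0.2, K_NO3=0.5,
                       eta_anox=0.5, eta_anae=0.1):
            aerobic = S_O2 / (K_O2 + S_O2)
            anoxic = (K_O2 / (K_O2 + S_O2)) * (S_NO3 / (K_NO3 + S_NO3))
            anaerobic = (K_O2 / (K_O2 + S_O2)) * (K_NO3 / (K_NO3 + S_NO3))
            return b_H * (aerobic + eta_anox * anoxic + eta_anae * anaerobic)

        print(decay_rate(b_H=0.62, S_O2=2.0, S_NO3=0.0))  # near-aerobic rate
        print(decay_rate(b_H=0.62, S_O2=0.0, S_NO3=5.0))  # anoxic, reduced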

  10. Development of a benchmarking methodology for evaluating oxidation ditch control strategies

    OpenAIRE

    Abusam, A.A.A.

    2001-01-01

    Keywords: wastewater, oxidation ditch, carrousel, modeling, activated sludge, ASM No. 1, oxygen transfer rate, aeration, parameter estimation, calibration, sensitivity analysis, uncertainty analysis, sensors, horizontal velocity, benchmark, benchmarking, control strategies, simulation. The purpose of this thesis was to develop a benchmarking methodology for evaluating control strategies for oxidation ditch wastewater treatment plants. A benchmark consists of a description of the plant layout,...

  11. Benchmarking the CEMDATA07 database to model chemical degradation of concrete using GEMS and PHREEQC

    International Nuclear Information System (INIS)

    Thermodynamic equilibrium modelling of the degradation of cement and concrete systems by chemically detrimental reactions such as carbonation, sulphate attack and decalcification or leaching processes requires a consistent thermodynamic database with the relevant aqueous species, cement minerals and hydrates. The recent and consistent database CEMDATA07 is used as the basis in the studies of the Belgian near-surface disposal concept being developed by ONDRAF/NIRAS. The database is consistent with the thermodynamic data in the Nagra/PSI Thermodynamic Database. When used with the GEMS thermodynamic code, thermodynamic modelling can be performed at temperatures different from the standard temperature of 25 °C. GEMS calculates thermodynamic equilibrium by minimizing the Gibbs free energy of the system. Alternatively, thermodynamic equilibrium can also be calculated by solving a nonlinear system of mass balance equations and mass action equations, as is done in PHREEQC. A PHREEQC database for the cement systems at temperatures different from 25 °C was derived from the thermodynamic parameters and models from GEMS. A number of benchmark simulations using PHREEQC and GEM-Selektor were done to verify the implementation of the CEMDATA07 database in PHREEQC databases. The simulations address a series of reactions that are relevant to the assessment of long-term cement and concrete durability. Verification calculations were performed for systems of increasing complexity: CaO-SiO2-CO2, CaO-Al2O3-SO3-CO2, and CaO-SiO2-Al2O3-Fe2O3-MgO-SO3-CO2. Three types of chemical degradation processes were simulated: (1) carbonation, by adding CO2 to the bulk composition; (2) sulphate attack, by adding SO3 to the bulk composition; and (3) decalcification/leaching, by putting the cement solid phase sequentially in contact with pure water. An excellent agreement between the simulations with GEMS and PHREEQC was obtained.

  12. Simulation of TRIGA Mark II Benchmark Experiment using WIMSD4 and CITATION codes

    International Nuclear Information System (INIS)

    This paper presents a simulation of the TRIGA Mark II Benchmark Experiment, Part I: Steady-State Operation, and is part of the validation of the calculation methodology developed for the neutronic calculation of CDTN's TRIGA IPR - R1 reactor. A version of WIMSD4, obtained from the Centro de Tecnologia Nuclear in Cuba, was used in the cell calculations. The diffusion code CITATION was adopted for the core calculations, using a 3D representation of the core, and the calculations were carried out with two energy groups. Many of the experiments were simulated, including Keff, control rod reactivity worths, the fuel element reactivity worth distribution and the fuel temperature reactivity coefficient. The comparison of the obtained results with the experimental results shows differences within the accuracy of the measurements for the control rod worths and the fuel temperature reactivity coefficient, and within an acceptable range, according to the literature, for Keff and the fuel element reactivity worth distribution. (author)

  13. Benchmarking Electron-Cloud Build-Up and Heat-Load Simulations against Large-Hadron-Collider Observations

    OpenAIRE

    Dominguez, O; Iriso, U; Maury, H.; Rumolo, G.; Zimmermann, F

    2011-01-01

    After reviewing the basic features of electron clouds in particle accelerators, the pertinent vacuum-chamber surface properties, and the electron-cloud simulation tools in use at CERN, we report recent observations of electron-cloud phenomena at the Large Hadron Collider (LHC) and ongoing attempts to benchmark the measured LHC vacuum pressure increases and heat loads against electron-cloud build-up simulations aimed at determining the actual surface parameters and at monitoring the so-called ...

  14. Viscoelastic silicone oils in analog modeling - a rheological benchmark

    Science.gov (United States)

    Rudolf, Michael; Boutelier, David; Rosenau, Matthias; Schreurs, Guido; Oncken, Onno

    2016-04-01

    Tectonic analog models frequently use silicone oils to simulate viscous flow in the lower crust and mantle. Precise knowledge of the model rheology is required to ensure dynamic similarity with the prototype. We assessed the rheological properties of various silicone oils using rotational and oscillatory tests. Resulting viscosities are in the range of 2-3 × 10^4 Pa s, with a transition from Newtonian viscous to power-law, shear-thinning behavior around shear rates of 10^-2 to 10^-1 s^-1. Maxwell relaxation times are in the range of 10^-1 s. Comparing the rheological properties of chemically similar silicone oils from different laboratories shows that they differ from laboratory to laboratory. Furthermore, we characterized the temperature dependency of viscosity and aging effects. The samples show a reduction in zero-shear viscosity over time. This stabilizes at a certain value over several months. The dynamic moduli decrease as well, but other viscoelastic constants, such as the Maxwell relaxation time, are not affected by aging. We conclude that the aging is mainly controlled by the storage conditions and that a silicone shows no further aging when it has equilibrated with the ambient laboratory conditions. We consider all these differences as minor compared to the much larger uncertainties in estimating the lithosphere rheology. Nevertheless, it is important that the rheological properties of the experimental materials are monitored during an experimental series that spans several weeks to months. Additionally, the viscoelastic properties may be scaled using dimensionless parameters (Deborah number) and show a dynamically similar change from Newtonian to power-law flow, like the natural prototype. In consequence, the viscoelasticity of these silicone oils is able to mimic the change in deformation mechanism from diffusion to dislocation creep.
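
    Whether a given experiment probes the Newtonian or the shear-thinning regime of such an oil can be judged from the Deborah number, the product of the Maxwell relaxation time and the shear rate. A small sketch using the orders of magnitude reported above; the regime threshold is an illustrative rule of thumb, not a measured value.

        # Sketch: Deborah number De = relaxation_time * shear_rate as a
        # scale-free indicator of the Newtonian -> power-law transition.
        def deborah(relaxation_time_s, shear_rate_inv_s):
            return relaxation_time_s * shear_rate_inv_s

        for rate in (1e-3, 1e-2, 1e-1, 1.0):          # shear rates in 1/s
            de = deborah(0.1, rate)                   # Maxwell time ~1e-1 s
            regime = "shear-thinning" if de >= 1e-2 else "Newtonian"
            print(f"shear rate {rate:g} 1/s -> De = {de:g} ({regime})")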

  15. Simulation modeling and arena

    CERN Document Server

    Rossetti, Manuel D

    2015-01-01

    Emphasizes a hands-on approach to learning statistical analysis and model building through the use of comprehensive examples, problems sets, and software applications With a unique blend of theory and applications, Simulation Modeling and Arena®, Second Edition integrates coverage of statistical analysis and model building to emphasize the importance of both topics in simulation. Featuring introductory coverage on how simulation works and why it matters, the Second Edition expands coverage on static simulation and the applications of spreadsheets to perform simulation. The new edition als

  16. An integer programming model and benchmark suite for liner shipping network design

    DEFF Research Database (Denmark)

    Løfstedt, Berit; Alvarez, Jose Fernando; Plum, Christian Edinger Munk;

    along with a rich integer programming model based on the services that constitute the fixed schedule of a liner shipping company. The model may be relaxed as well as decomposed. The design of a benchmark suite of data instances to reflect the business structure of a global liner shipping network...

  17. Benchmarking nuclear models of FLUKA and GEANT4 for carbon ion therapy

    International Nuclear Information System (INIS)

    As carbon ions, at therapeutic energies, penetrate tissue, they undergo inelastic nuclear reactions and give rise to significant yields of secondary fragment fluences. Therefore, an accurate prediction of these fluences resulting from the primary carbon interactions is necessary in the patient's body in order to precisely simulate the spatial dose distribution and the resulting biological effect. In this paper, the performance of nuclear fragmentation models of the Monte Carlo transport codes, FLUKA and GEANT4, in tissue-like media and for an energy regime relevant for therapeutic carbon ions is investigated. The ability of these Monte Carlo codes to reproduce experimental data of charge-changing cross sections and integral and differential yields of secondary charged fragments is evaluated. For the fragment yields, the main focus is on the consideration of experimental approximations and uncertainties such as the energy measurement by time-of-flight. For GEANT4, the hadronic models G4BinaryLightIonReaction and G4QMD are benchmarked together with some recently enhanced de-excitation models. For non-differential quantities, discrepancies of some tens of percent are found for both codes. For differential quantities, even larger deviations are found. Implications of these findings for the therapeutic use of carbon ions are discussed.

  18. Benchmarking nuclear models of FLUKA and GEANT4 for carbon ion therapy.

    Science.gov (United States)

    Böhlen, T T; Cerutti, F; Dosanjh, M; Ferrari, A; Gudowska, I; Mairani, A; Quesada, J M

    2010-10-01

    As carbon ions, at therapeutic energies, penetrate tissue, they undergo inelastic nuclear reactions and give rise to significant yields of secondary fragment fluences. Therefore, an accurate prediction of these fluences resulting from the primary carbon interactions is necessary in the patient's body in order to precisely simulate the spatial dose distribution and the resulting biological effect. In this paper, the performance of nuclear fragmentation models of the Monte Carlo transport codes, FLUKA and GEANT4, in tissue-like media and for an energy regime relevant for therapeutic carbon ions is investigated. The ability of these Monte Carlo codes to reproduce experimental data of charge-changing cross sections and integral and differential yields of secondary charged fragments is evaluated. For the fragment yields, the main focus is on the consideration of experimental approximations and uncertainties such as the energy measurement by time-of-flight. For GEANT4, the hadronic models G4BinaryLightIonReaction and G4QMD are benchmarked together with some recently enhanced de-excitation models. For non-differential quantities, discrepancies of some tens of percent are found for both codes. For differential quantities, even larger deviations are found. Implications of these findings for the therapeutic use of carbon ions are discussed. PMID:20844337

  19. Creating a benchmark of vertical axis wind turbines in dynamic stall for validating numerical models

    DEFF Research Database (Denmark)

    Castelein, D.; Ragni, D.; Tescione, G.; Ferreira, C. J. Simão; Gaunaa, Mac

    2015-01-01

    An experimental campaign using the Particle Image Velocimetry (2C-PIV) technique has been conducted on an H-type Vertical Axis Wind Turbine (VAWT) to create a benchmark for validating and comparing numerical models. The turbine is operated at tip speed ratios (TSR) of 4.5 and 2, at an average chord...... phenomenon is numerically very hard to model, so a solid benchmark for a VAWT in DS is of great interest. The aim of the paper is to present the experimental flow fields, and the validated loads on the blades for both TSR....

  20. PREMIUM - Benchmark on the quantification of the uncertainty of the physical models in the system thermal-hydraulic codes

    International Nuclear Information System (INIS)

    PREMIUM (Post-BEMUSE Reflood Models Input Uncertainty Methods) is an activity launched with the aim of pushing forward the methods of quantification of physical model uncertainties in thermal-hydraulic codes. It is endorsed by OECD/NEA/CSNI/WGAMA. The PREMIUM benchmark is addressed to all who apply uncertainty evaluation methods based on input uncertainty quantification and propagation. The benchmark is based on a selected case of uncertainty analysis application to the simulation of quench front propagation in an experimental test facility. Application to an experiment enables evaluation and confirmation of the quantified probability distribution functions on the basis of experimental data. The scope of the benchmark comprises a review of the existing methods, selection of potentially important uncertain input parameters, preliminary quantification of the ranges and distributions of the identified parameters, evaluation of the probability density functions using experimental results of tests performed on the FEBA test facility, and confirmation/validation of the performed quantification on the basis of a blind calculation of the Reflood 2-D PERICLES experiment. (authors)

  1. Verification of cardiac tissue electrophysiology simulators using an N-version benchmark

    OpenAIRE

    Niederer, Steven A.; Kerfoot, Eric; Benson, Alan P.; Bernabeu, Miguel O.; Bernus, Olivier; Bradley, Chris; Cherry, Elizabeth M.; Clayton, Richard; Fenton, Flavio H.; Garny, Alan; Heidenreich, Elvio; Land, Sander; Maleckar, Mary; Pathmanathan, Pras; Plank, Gernot

    2011-01-01

    Ongoing developments in cardiac modelling have resulted, in particular, in the development of advanced and increasingly complex computational frameworks for simulating cardiac tissue electrophysiology. The goal of these simulations is often to represent the detailed physiology and pathologies of the heart using codes that exploit the computational potential of high-performance computing architectures. These developments have rapidly progressed the simulation capacity of cardiac virtual physio...

  2. Modeling of the ORNL PCA Benchmark Using SCALE6.0 Hybrid Deterministic-Stochastic Methodology

    Directory of Open Access Journals (Sweden)

    Mario Matijević

    2013-01-01

    Full Text Available Revised guidelines with the support of computational benchmarks are needed for the regulation of the allowed neutron irradiation to reactor structures during power plant lifetime. Currently, US NRC Regulatory Guide 1.190 is the effective guideline for reactor dosimetry calculations. The well known international shielding database SINBAD contains a large selection of models for benchmarking neutron transport methods. In this paper, a PCA benchmark has been chosen from SINBAD for qualification of our methodology for pressure vessel neutron fluence calculations, as required by Regulatory Guide 1.190. The SCALE6.0 code package, developed at Oak Ridge National Laboratory, was used for modeling of the PCA benchmark. The CSAS6 criticality sequence of the SCALE6.0 code package, which includes the KENO-VI Monte Carlo code, as well as the MAVRIC/Monaco hybrid shielding sequence, was utilized for calculation of equivalent fission fluxes. The shielding analysis was performed using the multigroup shielding library v7_200n47g derived from the general purpose ENDF/B-VII.0 library. As a source of response functions for reaction rate calculations with MAVRIC we used the international reactor dosimetry libraries (IRDF-2002 and IRDF-90.v2) and appropriate cross-sections from the transport library v7_200n47g. The comparison of calculational results and benchmark data showed good agreement between the calculated and measured equivalent fission fluxes.

  3. On the validity of empirical potentials for simulating radiation damage in graphite: a benchmark.

    Science.gov (United States)

    Latham, C D; McKenna, A J; Trevethan, T P; Heggie, M I; Rayson, M J; Briddon, P R

    2015-08-12

    In this work, the ability of methods based on empirical potentials to simulate the effects of radiation damage in graphite is examined by comparing results for point defects, found using ab initio calculations based on density functional theory (DFT), with those given by two state-of-the-art potentials: the Environment-Dependent Interatomic Potential (EDIP) and the Adaptive Intermolecular Reactive Empirical Bond Order potential (AIREBO). Formation energies for the interstitial, the vacancy and the Stone-Wales (5775) defect are all reasonably close to DFT values. Both EDIP and AIREBO can thus be suitable for the prompt defects in a cascade, for example. Both potentials suffer from artefacts. One is the pinch defect, where two α-atoms adopt a fourfold-coordinated sp(3) configuration that forms a cross-link between neighbouring graphene sheets. Another, for AIREBO only, is that its ground state vacancy structure is close to the transition state found by DFT for migration. The EDIP fails to reproduce the ground state self-interstitial structure given by DFT, but has nearly the same formation energy. Also, for both potentials, the energy barriers that control diffusion and the evolution of a damage cascade are not well reproduced. In particular, the EDIP gives a barrier to removal of the Stone-Wales defect of 0.9 eV against DFT's 4.5 eV. The suite of defect structures used is provided as supplementary information as a benchmark set for future potentials. PMID:26202454
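
    The formation energies compared above follow the standard supercell definition for an intrinsic defect, with E_def the total energy of the defective cell containing N_def atoms and E_bulk that of the pristine cell with N_bulk atoms:

        % Formation energy of an intrinsic defect from supercell totals.
        E_{f} = E_{\mathrm{def}} - \frac{N_{\mathrm{def}}}{N_{\mathrm{bulk}}}\, E_{\mathrm{bulk}}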

  4. Benchmarking of Monte Carlo simulation of bremsstrahlung from thick targets at radiotherapy energies

    Energy Technology Data Exchange (ETDEWEB)

    Faddegon, Bruce A.; Asai, Makoto; Perl, Joseph; Ross, Carl; Sempau, Josep; Tinslay, Jane; Salvat, Francesc [Department of Radiation Oncology, University of California at San Francisco, San Francisco, California 94143 (United States); Stanford Linear Accelerator Center, 2575 Sand Hill Road, Menlo Park, California 94025 (United States); National Research Council Canada, Institute for National Measurement Standards, 1200 Montreal Road, Building M-36, Ottawa, Ontario K1A 0R6 (Canada); Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya and Centro de Investigacion Biomedica en Red en Bioingenieria, Biomateriales y Nanomedicina (CIBER-BBN), Diagonal 647, 08028 Barcelona (Spain); Stanford Linear Accelerator Center, 2575 Sand Hill Road, Menlo Park, California 94025 (United States); Facultat de Fisica (ECM), Universitat de Barcelona, Societat Catalana de Fisica (IEC), Diagonal 647, 08028 Barcelona (Spain)

    2008-10-15

    Several Monte Carlo systems were benchmarked against published measurements of bremsstrahlung yield from thick targets for 10-30 MV beams. The quantity measured was photon fluence at 1 m per unit energy per incident electron (spectra), and total photon fluence, integrated over energy, per incident electron (photon yield). Results were reported at 10-30 MV on the beam axis for Al and Pb targets and at 15 MV at angles out to 90° for Be, Al, and Pb targets. Beam energy was revised with improved accuracy of 0.5% using an improved energy calibration of the accelerator. Recently released versions of the Monte Carlo systems EGSNRC, GEANT4, and PENELOPE were benchmarked against the published measurements using the revised beam energies. Monte Carlo simulation was capable of calculation of photon yield in the experimental geometry to 5% out to 30°, 10% at wider angles, and photon spectra to 10% at intermediate photon energies, 15% at lower energies. Accuracy of measured photon yield from 0 to 30° was 5%, 1 s.d., increasing to 7% for the larger angles. EGSNRC and PENELOPE results were within 2 s.d. of the measured photon yield at all beam energies and angles, GEANT4 within 3 s.d. Photon yield at nonzero angles for angles covering conventional field sizes used in radiotherapy (out to 10°), measured with an accuracy of 3%, was calculated within 1 s.d. of measurement for EGSNRC, 2 s.d. for PENELOPE and GEANT4. Calculated spectra closely matched measurement at photon energies over 5 MeV. Photon spectra near 5 MeV were underestimated by as much as 10% by all three codes. The photon spectra below 2-3 MeV for the Be and Al targets and small angles were overestimated by up to 15% when using EGSNRC and PENELOPE, 20% with GEANT4. EGSNRC results with the NIST option for the bremsstrahlung cross section were preferred over the alternative cross section available in EGSNRC and over EGS4. GEANT4 results calculated with the

  5. Benchmarked Empirical Bayes Methods in Multiplicative Area-level Models with Risk Evaluation

    OpenAIRE

    Ghosh, Malay; Kubokawa, Tatsuya; Kawakubo, Yuki

    2014-01-01

    The paper develops empirical Bayes and benchmarked empirical Bayes estimators of positive small area means under multiplicative models. A simple example will be estimation of per capita income for small areas. It is now well-understood that small area estimation needs explicit, or at least implicit use of models. One potential difficulty with model-based estimators is that the overall estimator for a larger geographical area based on (weighted) sum of the model-based estimators is not necessa...

  6. A Base Integer Programming Model and Benchmark Suite for Liner-Shipping Network Design

    DEFF Research Database (Denmark)

    Brouer, Berit Dangaard; Alvarez, Fernando; Plum, Christian Edinger Munk;

    2014-01-01

    problem to be strongly NP-hard. A benchmark suite of data instances to reflect the business structure of a global liner shipping network is presented. The design of the benchmark suite is discussed in relation to industry standards, business rules, and mathematical programming. The data are based on real...... sources of liner shipping for OR researchers in general. We describe and analyze the liner-shipping domain applied to network design and present a rich integer programming model based on services that constitute the fixed schedule of a liner shipping company. We prove the liner-shipping network design...

  7. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In philosophy of science, the interest in computational models and simulations has increased heavily during the past decades. Different positions regarding the validity of models have emerged, but these views have not succeeded in capturing the diversity of validation methods. The wide variety of...... models has been somewhat narrow-minded, reducing the notion of validation to establishment of truth. This article puts forward the diversity in applications of simulation models that demands a corresponding diversity in the notion of validation....

  8. An Analysis of Academic Research Libraries Assessment Data: A Look at Professional Models and Benchmarking Data

    Science.gov (United States)

    Lewin, Heather S.; Passonneau, Sarah M.

    2012-01-01

    This research provides the first review of publicly available assessment information found on Association of Research Libraries (ARL) members' websites. After providing an overarching review of benchmarking assessment data, and of professionally recommended assessment models, this paper examines if libraries contextualized their assessment…

  9. Developing of Indicators of an E-Learning Benchmarking Model for Higher Education Institutions

    Science.gov (United States)

    Sae-Khow, Jirasak

    2014-01-01

    This study was the development of e-learning indicators used as an e-learning benchmarking model for higher education institutes. Specifically, it aimed to: 1) synthesize the e-learning indicators; 2) examine content validity by specialists; and 3) explore appropriateness of the e-learning indicators. Review of related literature included…

  10. Structural modeling and fuzzy-logic based diagnosis of a ship propulsion benchmark

    DEFF Research Database (Denmark)

    Izadi-Zamanabadi, Roozbeh; Blanke, M.; Katebi, S.D.

    2000-01-01

    An analysis of structural model of a ship propulsion benchmark leads to identifying the subsystems with inherent redundant information. For a nonlinear part of the system, a Fuzzy logic based FD algorithm with adaptive threshold is employed. The results illustrate the applicability of structural...... analysis as well as fuzzy observer....

  12. TRIPOLI-4® - MCNP5 ITER A-lite neutronic model benchmarking

    Science.gov (United States)

    Jaboulay, J.-C.; Cayla, P.-Y.; Fausser, C.; Lee, Y.-K.; Trama, J.-C.; Li-Puma, A.

    2014-06-01

    The aim of this paper is to present the capability of TRIPOLI-4®, the CEA Monte Carlo code, to model a large-scale fusion reactor with a complex neutron source and geometry. In the past, numerous benchmarks were conducted for TRIPOLI-4® assessment on fusion applications. Analyses of experiments (KANT, OKTAVIAN, FNG) and numerical benchmarks (between TRIPOLI-4® and MCNP5) on the HCLL DEMO2007 and ITER models were carried out successively. In this previous ITER benchmark, nevertheless, only the neutron wall loading was analyzed; its main purpose was to present the extension of MCAM (the FDS Team CAD import tool) for TRIPOLI-4®. Starting from this work, a more extended benchmark has been performed on the estimation of neutron flux, nuclear heating in the shielding blankets, and tritium production rate in the European TBMs (HCLL and HCPB), and it is presented in this paper. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model (version 4.1). Simplified TBMs (from KIT) have been integrated in the equatorial port. Comparisons of neutron wall loading, flux, nuclear heating, and tritium production rate show a good agreement between the two codes. Discrepancies are mainly within the statistical errors of the Monte Carlo codes.

  13. Pore-scale and Continuum Simulations of Solute Transport Micromodel Benchmark Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Oostrom, Martinus; Mehmani, Yashar; Romero Gomez, Pedro DJ; Tang, Y.; Liu, H.; Yoon, Hongkyu; Kang, Qinjun; Joekar Niasar, Vahid; Balhoff, Matthew; Dewers, T.; Tartakovsky, Guzel D.; Leist, Emily AE; Hess, Nancy J.; Perkins, William A.; Rakowski, Cynthia L.; Richmond, Marshall C.; Serkowski, John A.; Werth, Charles J.; Valocchi, Albert J.; Wietsma, Thomas W.; Zhang, Changyong

    2016-08-01

    Four sets of micromodel nonreactive solute transport experiments were conducted with flow velocity, grain diameter, pore-aspect ratio, and flow-focusing heterogeneity as the variables. The data sets were offered to pore-scale modeling groups to test their simulators. Each set consisted of two learning experiments, for which all results were made available, and a challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two others were based on a lattice-Boltzmann (LB) approach, and one employed a computational fluid dynamics (CFD) technique. The learning experiments were used by the PN models to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used these experiments to appropriately discretize the grid representations. The continuum model used published nonlinear relations between transverse dispersion coefficients and Peclet numbers to compute the required dispersivity input values. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in less dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models needed up to several days on supercomputers to resolve the more complex problems.
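
    As a rough illustration of the continuum-scale workflow described above, the sketch below evaluates a power-law relation between the transverse dispersion coefficient and the Peclet number and converts it to a dispersivity input; the coefficient values are invented, not the published correlation used in the study.

```python
# Hypothetical power-law correlation D_T/D_m = a * Pe^b (coefficients invented);
# the continuum model in the study used published relations of this general form.
D_m = 1.0e-9          # molecular diffusion coefficient, m^2/s
a, b = 0.1, 1.2       # assumed correlation coefficients
v = 1.0e-4            # pore-water velocity, m/s
d_grain = 3.0e-4      # grain diameter, m

Pe = v * d_grain / D_m            # Peclet number
D_T = a * Pe ** b * D_m           # transverse dispersion coefficient, m^2/s
alpha_T = D_T / v                 # dispersivity input for the continuum code, m
print(Pe, D_T, alpha_T)
```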

  14. Data for model validation summary report. A summary of data for validation and benchmarking of recovery boiler models

    Energy Technology Data Exchange (ETDEWEB)

    Grace, T.; Lien, S.; Schmidl, W.; Salcudean, M.; Abdullah, Z.

    1997-07-01

    One of the tasks in the project was to obtain data from operating recovery boilers for the purpose of model validation. Another task was to obtain water model data and computer output from the University of British Columbia for purposes of benchmarking the UBC model against other codes. In discussions on recovery boiler modeling over the course of this project, it became evident that there would be value in having some common cases for carrying out benchmarking exercises with different recovery boiler models. In order to facilitate such a benchmarking exercise, the data that were obtained on this project for validation and benchmarking purposes have been brought together in a single, separate report. The intent is to make these data available to anyone who may want to use them for model validation. The report contains data from three different cases. Case 1 is an ABBCE recovery boiler which was used for model validation. The data are for a single set of operating conditions. Case 2 is a Babcock & Wilcox recovery boiler that was modified by Tampella. In this data set, several different operating conditions were employed. The third case is water flow data supplied by UBC, along with computational output using the UBC code, for benchmarking purposes.

  15. A benchmark for the validation of solidification modelling algorithms

    Science.gov (United States)

    Kaschnitz, E.; Heugenhauser, S.; Schumacher, P.

    2015-06-01

    This work presents two three-dimensional solidification models, which were solved by several commercial solvers (MAGMASOFT, FLOW-3D, ProCAST, WinCast, ANSYS, and OpenFOAM). Surprisingly, the results show noticeable differences. The results are analyzed similar to a round-robin test procedure to obtain reference values for temperatures and their uncertainties at selected positions in the model. The first model is similar to an adiabatic calorimeter with an aluminum alloy solidifying in a copper block. For this model, an analytical solution for the overall temperature at steady state can be calculated. The second model implements additional heat transfer boundary conditions at outer faces. The geometry of the models, the initial and boundary conditions as well as the material properties are kept as simple as possible but, nevertheless, close to a realistic solidification situation. The gained temperature results can be used to validate self-written solidification solvers and check the accuracy of commercial solidification programs.
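
    For the first model, the steady-state temperature can be checked with a lumped energy balance: the enthalpy (sensible plus latent) released by the solidifying aluminium alloy equals the enthalpy absorbed by the copper block. A minimal sketch with rough textbook property values (not the benchmark's data) follows; it assumes complete solidification and temperature-independent properties.

```python
# Lumped energy balance for an adiabatic-calorimeter-like configuration:
# an aluminium-alloy melt solidifying in a copper block. Property values
# below are rough textbook numbers, not the benchmark's data.
m_al, c_al, T_al, L_al = 1.0, 900.0, 700.0, 4.0e5   # kg, J/(kg K), degC, J/kg
m_cu, c_cu, T_cu = 10.0, 385.0, 20.0                # kg, J/(kg K), degC

# Enthalpy released (sensible + latent) = enthalpy absorbed by the copper.
T_eq = (m_al * (c_al * T_al + L_al) + m_cu * c_cu * T_cu) / \
       (m_al * c_al + m_cu * c_cu)
print(f"steady-state temperature ~ {T_eq:.0f} degC")
```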

  16. Standardizing and benchmarking of modeled DNI data products

    OpenAIRE

    Meyer, Richard; Gueymard, Chris; Ineichen, Pierre

    2011-01-01

    Modeled direct normal irradiance (DNI) can be either derived from satellite data or from numerical weather prediction models. Such modeled datasets are available at continental scale and provide continuous long-term time series, but are known to fall short in quality over some areas, compared to high-quality ground measurements. The uncertainty in DNI may be locally so high that CSP projects cannot be financed. The CSP industry would obviously benefit from a comprehensive and large-scale benc...

  17. Simulation of TRIGA Mark II Benchmark Experiment using WIMSD4 and CITATION codes; Simulacao com WIMSD4 e CITATION do Triga Mark II benchmark experiment

    Energy Technology Data Exchange (ETDEWEB)

    Dalle, Hugo Moura [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN), Belo Horizonte, MG (Brazil); Pereira, Claubia [Minas Gerais Univ., Belo Horizonte, MG (Brazil). Dept. de Engenharia Nuclear

    2000-07-01

    This paper presents a simulation of the TRIGA Mark II Benchmark Experiment, Part I: Steady-State Operation, and is part of the validation of the calculation methodology developed for the neutronic calculation of CDTN's TRIGA IPR - R1 reactor. A version of WIMSD4, obtained from the Centro de Tecnologia Nuclear, in Cuba, was used in the cell calculations. The diffusion code CITATION was adopted for the core calculations. A 3D representation of the core was adopted and the calculations were carried out with two energy groups. Many of the experiments were simulated, including k_eff, control rod reactivity worths, the fuel element reactivity worth distribution and the fuel temperature reactivity coefficient. The comparison of the obtained results with the experimental results shows differences within the range of the accuracy of the measurements for the control rod worths and the fuel temperature reactivity coefficient, or within an acceptable range, according to the literature, for k_eff and the fuel element reactivity worth distribution. (author)

  18. Benchmark Dose Software Development and Maintenance: Ten Berge CxT Models

    Science.gov (United States)

    This report is intended to provide an overview of beta version 1.0 of the implementation of a concentration-time (CxT) model originally programmed and provided by Wil ten Berge (referred to hereafter as the ten Berge model). The recoding and development described here represent ...

  19. Scale resolved simulations of the OECD/NEA–Vattenfall T-junction benchmark

    International Nuclear Information System (INIS)

    Mixing of fluids in T-junction geometries is of significant interest for nuclear safety research. The most prominent example is the thermal striping phenomenon in piping T-junctions, where hot and cold streams join and turbulently mix, however not completely or not immediately at the T-junction. This results in significant temperature fluctuations near the piping wall, either at the side of the secondary pipe branch or at the opposite side of the main branch pipe. The wall temperature fluctuations can cause cyclical thermal stresses and subsequently fatigue cracking of the wall. Thermal mixing in a T-junction has been studied for validation of CFD calculations. A T-junction thermal mixing test was carried out at the Älvkarleby Laboratory of Vattenfall Research and Development (VRD) in Sweden. Data from this test have been reserved specifically for an OECD CFD benchmark exercise. The computational results show that RANS calculations fail to predict a realistic mixing between the fluids. The results were significantly better with scale-resolving methods such as LES, showing fairly good predictions of the velocity field and mean temperatures. The calculation also predicts fluctuations and frequencies similar to those observed in the model test

  20. Upgrading the Benchmark Simulation Model Framework with emerging challenges - A study of N2O emissions and the fate of pharmaceuticals in urban wastewater systems

    OpenAIRE

    Snip, Laura; Plósz, Benedek G.; Flores Alsina, Xavier; Jeppsson, Ulf A. C.; Gernaey, Krist

    2015-01-01

    Today, a wastewater treatment plant is expected to do more than remove the common pollutants from the wastewater. Over recent decades many new challenges have emerged, which have markedly increased the demands placed on treatment plants. For example, a treatment plant is nowadays also expected to minimise its carbon footprint and to remove micropollutants from the wastewater. Optimisation of the operation of a treatment plant can be investigated and improved using mathematical models, ...

  1. Simulation of heavy rainfall events over Indian region: a benchmark skill with a GCM

    Science.gov (United States)

    Goswami, Prashant; Kantha Rao, B.

    2015-10-01

    Extreme rainfall events (ERE) contribute a significant component of the Indian summer monsoon rainfall. Thus an important requirement for regional climate simulations is to attain desirable quality and reliability in simulating extreme rainfall events. While global circulation models (GCMs) with coarse resolution are not preferred for the simulation of extreme events, it is expected that the global domain in a GCM allows a better representation of scale interactions, resulting in adequate skill in simulating localized events in spite of the lower resolution. At the same time, a GCM with skill in the simulation of extreme events will provide a more reliable tool for seamless prediction. The present work provides an assessment of a GCM for simulating 40 ERE that occurred over India during 1998-2013. It is found that, expectedly, the GCM forecasts underestimate the observed (TRMM) rainfall in most cases, but not always. Somewhat surprisingly, the forecasts of location are quite accurate in spite of the low resolution (~50 km). An interesting result is that the highest skill of the forecasts is realized at 48 h lead rather than at 24 or 96 h lead. Diagnostics of dynamical fields like convergence show that the forecasts can capture contrasting features on pre-event, event, and post-event days. The forecast configuration used is similar to one that has been used for long-range monsoon forecasting and tropical cyclones in earlier studies; the present results on ERE forecasting therefore provide an indication of the potential application of the model for seamless prediction.

  2. Scientific Modeling and simulations

    CERN Document Server

    Diaz de la Rubia, Tomás

    2009-01-01

    Showcases the conceptual advantages of modeling which, coupled with the unprecedented computing power of simulations, allows scientists to tackle the formidable problems of our society, such as the search for hydrocarbons, understanding the structure of a virus, or the intersection between simulations and real data in extreme environments

  3. Benchmarking Rapid TLES Simulations of Gas Diffusion in Proteins: Mapping O2 Migration and Escape in Myoglobin as a Case Study.

    Science.gov (United States)

    Shadrina, Maria S; English, Ann M; Peslherbe, Gilles H

    2016-04-12

    Standard molecular dynamics (MD) simulations of gas diffusion consume considerable computational time and resources even for small proteins. To combat this, temperature-controlled locally enhanced sampling (TLES) examines multiple diffusion trajectories per simulation by accommodating multiple noninteracting copies of a gas molecule that diffuse independently, while the protein and water molecules experience an average interaction from all copies. Furthermore, gas migration within a protein matrix can be accelerated without altering protein dynamics by increasing the effective temperature of the TLES copies. These features of TLES enable rapid simulations of gas diffusion within a protein matrix at significantly reduced (∼98%) computational cost. However, the results of TLES and standard MD simulations have not been systematically compared, which limits the adoption of the TLES approach. We address this drawback here by benchmarking TLES against standard MD in the simulation of O2 diffusion in myoglobin (Mb) as a case study since this model system has been extensively characterized. We find that 2 ns TLES and 108 ns standard simulations map the same network of diffusion tunnels in Mb and uncover the same docking sites, barriers, and escape portals. We further discuss the influence of simulation time as well as the number of independent simulations on the O2 population density within the diffusion tunnels and on the sampling of Mb's conformational space as revealed by principal component analysis. Overall, our comprehensive benchmarking reveals that TLES is an appropriate and robust tool for the rapid mapping of gas diffusion in proteins when the kinetic data provided by standard MD are not required. Furthermore, TLES provides explicit ligand diffusion pathways, unlike most rapid methods. PMID:26938707

  4. Application of MPC and sliding mode control to IFAC benchmark models

    OpenAIRE

    McGookin, M.; Anderson, D; McGookin, E.

    2008-01-01

    A comparison of Model Predictive Control (MPC) and Sliding Mode Control (SMC) is presented in this paper. The paper investigates the performance of each controller as the navigation system for IFAC benchmark ship models (a cargo vessel and an oil tanker). In this investigation the navigation system regulates the heading angle of the two types of marine vessel with reference to a desired heading trajectory. The result obtained from MPC is compared with a well-...

  5. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of the scientific practice of high importance and have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes

  6. Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) Benchmark Phase II: Identification of Influential Parameters

    International Nuclear Information System (INIS)

    The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) benchmark is to progress on the issue of the quantification of the uncertainty of the physical models in system thermal-hydraulic codes by considering a concrete case: the physical models involved in the prediction of core reflooding. The PREMIUM benchmark consists of five phases. This report presents the results of Phase II, dedicated to the identification of the uncertain code parameters associated with physical models used in the simulation of reflooding conditions. This identification is made on the basis of Test 216 of the FEBA/SEFLEX programme according to the following steps: identification of influential phenomena; identification of the associated physical models and parameters, depending on the code used; and quantification of the variation range of the identified input parameters through a series of sensitivity calculations. A procedure for the identification of potentially influential code input parameters was set up in the Specifications of Phase II of the PREMIUM benchmark. A set of quantitative criteria was also proposed for the identification of influential input parameters and their respective variation ranges. Thirteen participating organisations, using 8 different codes (7 system thermal-hydraulic codes and 1 sub-channel module of a system thermal-hydraulic code), submitted Phase II results. The base-case calculations show a spread in predicted cladding temperatures and quench front propagation that has been characterized. All the participants, except one, predict a too fast quench front progression. Besides, the cladding temperature time trends obtained by almost all the participants show oscillatory behaviour which may have numeric origins. The criteria adopted for identification of influential input parameters differ between the participants: some organisations used the set of criteria proposed in the Specifications 'as is', some modified the quantitative thresholds

  7. Generic Hockey-Stick Model for Estimating Benchmark Dose and Potency: Performance Relative to BMDS and Application to Anthraquinone.

    Science.gov (United States)

    Bogen, Kenneth T

    2011-01-01

    Benchmark Dose Model software (BMDS), developed by the U.S. Environmental Protection Agency, involves a growing suite of models and decision rules now widely applied to assess noncancer and cancer risk, yet its statistical performance has never been examined systematically. As typically applied, BMDS also ignores the possibility of reduced risk at low doses ("hormesis"). A simpler, proposed Generic Hockey-Stick (GHS) model also estimates benchmark dose and potency, and additionally characterizes and tests objectively for hormetic trend. Using 100 simulated dichotomous-data sets (5 dose groups, 50 animals/group), sampled from each of seven risk functions, GHS estimators performed about as well as or better than BMDS estimators, and a surprising observation was that BMDS mis-specified all six non-hormetic sampled risk functions most or all of the time. When applied to data on rodent tumors induced by the genotoxic chemical carcinogen anthraquinone (AQ), the GHS model yielded significantly negative estimates of the net potency exhibited by the combined rodent data, suggesting that, consistent with the anti-leukemogenic properties of AQ and structurally similar quinones, environmental AQ exposures do not likely increase net cancer risk. In addition to its simplicity and flexibility, the GHS approach offers a unified, consistent approach to quantifying environmental chemical risk. PMID:21731536
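
    A sketch of the hockey-stick idea, using an illustrative parameterization rather than the paper's exact GHS form: extra risk is zero below a threshold dose and linear above it, which gives a closed-form benchmark dose. The threshold, slope, and benchmark response below are invented.

```python
# Illustrative hockey-stick dose-response (not necessarily Bogen's exact form):
# extra risk is zero up to a threshold d0 and linear in dose above it.
def extra_risk(d, d0, beta):
    return max(0.0, beta * (d - d0))

def bmd(bmr, d0, beta):
    """Dose at which extra risk equals the benchmark response BMR."""
    return d0 + bmr / beta

d0, beta = 2.0, 0.01                      # hypothetical threshold and slope
print(bmd(bmr=0.10, d0=d0, beta=beta))    # BMD10 = 12.0 (dose units)
```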

  8. DECOVALEX I - Bench-Mark Test 3: Thermo-hydro-mechanical modelling

    International Nuclear Information System (INIS)

    The bench-mark test concerns the excavation of a tunnel, located 500 m below the ground surface, and the establishment of mechanical equilibrium and steady-state fluid flow. Following this, thermal heating due to the nuclear waste, stored in a borehole below the tunnel, was simulated. The results are reported at (1) 30 days after tunnel excavation, (2) steady state, (3) one year after thermal loading, and (4) the time of maximum temperature. The problem specification included the excavation and waste geometry, material properties for intact rock and joints, the location of more than 6500 joints observed in the 50 by 50 m area, and calculated hydraulic conductivities. However, due to the large number of joints and the lack of dominating orientations, it was decided to treat the problem as a continuum using the computer code FLAC. The problem was modeled using a vertical symmetry plane through the tunnel and the borehole. Flow equilibrium was obtained approx. 40 days after the opening of the tunnel. Since the hydraulic conductivity was set to be stress dependent, a noticeable difference in the horizontal and vertical conductivity and flow was observed. After 40 days, an oedometer-type consolidation of the model was observed. Approx. 4 years after the initiation of the heat source, a maximum temperature of 171 °C was obtained. The stress-dependent hydraulic conductivity and the temperature-dependent dynamic viscosity caused minor changes to the flow pattern. The specified mechanical boundary conditions imply that the tunnel is part of a system of parallel tunnels. However, the fixed temperature at the top boundary maintains the temperature below that anticipated for an equivalent repository. The combination of mechanical and hydraulic boundary conditions causes the model to behave like an oedometer test in which the consolidation rate goes asymptotically to zero. 17 refs, 55 figs, 22 tabs

  9. Analysis of Transitional and Turbulent Flow Through the FDA Benchmark Nozzle Model Using Laser Doppler Velocimetry.

    Science.gov (United States)

    Taylor, Joshua O; Good, Bryan C; Paterno, Anthony V; Hariharan, Prasanna; Deutsch, Steven; Malinauskas, Richard A; Manning, Keefe B

    2016-09-01

    Transitional and turbulent flow through a simplified medical device model is analyzed as part of the FDA's Critical Path Initiative, designed to improve the process of bringing medical products to market. Computational predictions are often used in the development of devices, and reliable in vitro data are needed to validate computational results, particularly estimations of the Reynolds stresses that could play a role in damaging blood elements. The high spatial resolution of laser Doppler velocimetry (LDV) is used to collect two-component velocity data within the FDA benchmark nozzle model. Two flow conditions are used to produce flow encompassing laminar, transitional, and turbulent regimes, and viscous stresses, principal Reynolds stresses, and turbulence intensities are calculated from the measured LDV velocities. Axial velocities and viscous stresses are compared to data from a prior inter-laboratory study conducted with particle image velocimetry. Large velocity gradients are observed near the wall in the nozzle throat and in the jet shear layer located in the expansion downstream of the throat, with axial velocity changing as much as 4.5 m/s over 200 μm. Additionally, maximum Reynolds shear stresses of 1000-2000 Pa are calculated in the high shear regions, an order of magnitude higher than the peak viscous shear stresses in the nozzle model, indicating that hemolysis may occur under certain flow conditions. As such, the presented turbulence quantities from LDV, which are also available for download at https://fdacfd.nci.nih.gov/ , provide an ideal validation test for computational simulations that seek to characterize the flow field and to predict hemolysis within the FDA nozzle geometry. PMID:27350137
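
    The turbulence quantities mentioned above come directly from velocity statistics. The sketch below estimates the Reynolds shear stress and axial turbulence intensity from synthetic two-component velocity samples standing in for LDV records at one point; the density, mean values, and correlation structure are assumed, not the study's data.

```python
import numpy as np

# Synthetic stand-ins for two-component LDV velocity samples at one point;
# the means, fluctuation levels and the u'-v' correlation are invented.
rng = np.random.default_rng(0)
u = 4.0 + 0.5 * rng.standard_normal(10000)                # axial velocity, m/s
v = -0.4 * (u - 4.0) + 0.2 * rng.standard_normal(10000)   # radial velocity, m/s
rho = 1056.0                                              # assumed fluid density, kg/m^3

u_f, v_f = u - u.mean(), v - v.mean()
tau_reynolds = -rho * np.mean(u_f * v_f)      # Reynolds shear stress, Pa
turb_intensity = u_f.std() / u.mean()         # axial turbulence intensity
print(f"{tau_reynolds:.0f} Pa, Ti = {turb_intensity:.2f}")
```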

  10. Models of asthma: density-equalizing mapping and output benchmarking

    Directory of Open Access Journals (Sweden)

    Fischer Tanja C

    2008-02-01

    Despite the large amount of experimental studies already conducted on bronchial asthma, further insights into the molecular basics of the disease are required to establish new therapeutic approaches. As a basis for this research, different animal models of asthma have been developed in the past years. However, precise bibliometric data on the use of different models do not exist so far. Therefore the present study was conducted to establish a database of the existing experimental approaches. Density-equalizing algorithms were used and data were retrieved from a Thomson Institute for Scientific Information database. During the period from 1900 to 2006, 3489 filed items were connected to animal models of asthma, the first being published in the year 1968. The studies were published by 52 countries, with the US, Japan and the UK being the most productive suppliers, participating in 55.8% of all published items. Analyzing the average citation per item as an indicator of research quality, Switzerland ranked first (30.54/item) and New Zealand ranked second among countries with more than 10 published studies. The 10 most productive journals included 4 with a main focus on allergy and immunology and 4 with a main focus on the respiratory system. Two journals focussed on pharmacology or pharmacy. In all assigned subject categories examined for a relation to animal models of asthma, immunology ranked first. Assessing numbers of published items in relation to animal species, it was found that mice were the preferred species, followed by guinea pigs. In summary, it can be concluded from density-equalizing calculations that the use of animal models of asthma is restricted to a relatively small number of countries. There are also differences in the use of species. These differences are based on variations in the research focus as assessed by subject category analysis.

  11. Theory Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Shlachter, Jack [Los Alamos National Laboratory

    2012-08-23

    Los Alamos has a long history in theory, modeling and simulation. We focus on multidisciplinary teams that tackle complex problems. Theory, modeling and simulation are tools to solve problems just like an NMR spectrometer, a gas chromatograph or an electron microscope. Problems should be used to define the theoretical tools needed and not the other way around. Best results occur when theory and experiments are working together in a team.

  12. Simulating Online Business Models

    OpenAIRE

    Schuster, Stephan; Gilbert, Nigel

    2004-01-01

    The online content market for news and music is changing rapidly with the spread of technology and innovative business models (e.g. the online delivery of music, specialised subscription news services). It is correspondingly hard for suppliers of online content to anticipate developments and the effects of their businesses. The paper describes a prototype multiagent simulation to model possible scenarios in this market. The simulation is intended for use by business strategists and has been d...

  13. A Review of Flood Loss Models as Basis for Harmonization and Benchmarking

    Science.gov (United States)

    Kreibich, Heidi; Franco, Guillermo; Marechal, David

    2016-01-01

    Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models as it affects prioritization and investment decisions in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Due to a lack of detailed and reliable flood loss data, first-order validations are difficult to accomplish, so that model comparisons in terms of benchmarking are essential. It is checked whether the models are informed by existing data and knowledge and whether the assumptions made in the models are aligned with the existing knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before these benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss (or flood vulnerability) relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date, containing nearly a thousand vulnerability functions. These functions are highly heterogeneous and only about half of the loss models are found to be accompanied by explicit validation at the time of their proposal. This paper exemplarily

  14. Benchmarks and models for 1-D radiation transport in stochastic participating media

    Energy Technology Data Exchange (ETDEWEB)

    Miller, D S

    2000-08-21

    Benchmark calculations for radiation transport coupled to a material temperature equation in 1-D slab and 1-D spherical geometry binary random media are presented. The mixing statistics are taken to be homogeneous Markov statistics in the 1-D slab but only approximately Markov statistics in the 1-D sphere. The material chunk sizes are described by Poisson distribution functions. The material opacities are first taken to be constant and then allowed to vary as a strong function of material temperature. Benchmark values and variances for the time evolution of the ensemble average of material temperature energy density and radiation transmission are computed via a Monte Carlo type method. These benchmarks are used as a basis for comparison with three other approximate methods of solution. One of these approximate methods is simple atomic mix. The second approximate model is an adaptation of what is commonly called the Levermore-Pomraning model, referred to here as the standard model. It is shown that recasting the temperature coupling as a type of effective scattering can be useful in formulating the third approximate model, an adaptation of a model due to Su and Pomraning which attempts to account for the effects of scattering in a stochastic context. This last adaptation shows consistent improvement over both the atomic mix and standard models when used in the 1-D slab geometry but shows limited improvement in the 1-D spherical geometry. Benchmark values are also computed for radiation transmission from the 1-D sphere without material heating present. This is to evaluate the performance of the standard model on this geometry, something which has never been done before. All of the various tests demonstrate the importance of stochastic structure on the solution. Also demonstrated are the range of usefulness and limitations of a simple atomic mix formulation.
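
    The Markov mixing statistics with Poisson-distributed chunk sizes translate into exponentially distributed segment lengths along the 1-D slab. A minimal sampler for one realization of such a binary medium is sketched below; the slab thickness and mean chord lengths are invented, and for simplicity the first material is chosen with equal probability rather than with the stationary probabilities.

```python
import random

# Sample one realization of a 1-D binary stochastic medium with homogeneous
# Markov mixing statistics: segment lengths of each material are drawn from
# exponential distributions with invented mean chord lengths lam[0], lam[1].
random.seed(1)
L, lam = 10.0, (0.8, 0.3)     # slab thickness (cm) and mean chords (cm)

def sample_realization():
    segments, x, mat = [], 0.0, random.choice((0, 1))
    while x < L:
        ell = random.expovariate(1.0 / lam[mat])   # mean chord = lam[mat]
        segments.append((mat, min(ell, L - x)))    # clip the last segment
        x += ell
        mat = 1 - mat                              # alternate materials
    return segments

print(sample_realization()[:5])
```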

  15. Multichannel Multi Market Media Service Business Model Evaluation and Benchmark

    OpenAIRE

    2012-01-01

    This research was conducted as a part of Next Media’s Multichannel Multimarket Media Services research programme. The project’s members are publishing companies and research institutions in Finland, including AAC Global, SanomaPro, Aalto University, Laurea, and the VTT Technical Research Centre of Finland. This study examines business models for the e-learning industry both in Finland and in international markets. Three complementary research pieces are presented in this report, and each ...

  16. Network Generation Model Based on Evolution Dynamics To Generate Benchmark Graphs

    CERN Document Server

    Pasta, Muhammad Qasim

    2016-01-01

    Network generation models provide an understanding of the dynamics behind the formation and evolution of different networks, including social networks, technological networks and biological networks. Two important applications of these models are to study the evolution dynamics of network formation and to generate benchmark networks with known community structures. Research has been conducted in both these directions relatively independently of the other application area. This creates a disjunct between real world networks and the networks generated to study community detection algorithms. In this paper, we propose to study both these application areas together, i.e., introduce a network generation model based on evolution dynamics of real world networks and generate networks with community structures that can be used as benchmark graphs to study community detection algorithms. The generated networks possess tunable modular structures which can be used to generate networks with known community structures. We stud...
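
    As a contrast to the proposed evolution-dynamics model, the simplest benchmark graphs with known community structure come from a planted-partition construction: nodes in the same community connect with probability p_in, others with p_out. The generic sketch below (not the paper's model) generates such a graph; the sizes and probabilities are arbitrary illustrative choices.

```python
import itertools, random

# Toy planted-partition generator: a graph with known ground-truth communities,
# usable as a benchmark for community-detection algorithms.
random.seed(0)
sizes, p_in, p_out = [10, 10, 10], 0.5, 0.05
community = [c for c, s in enumerate(sizes) for _ in range(s)]

edges = [(i, j)
         for i, j in itertools.combinations(range(sum(sizes)), 2)
         if random.random() < (p_in if community[i] == community[j] else p_out)]
print(len(edges), edges[:5])
```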

  17. Algorithm comparison and benchmarking using a parallel spectral transform shallow water model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)

    1995-04-01

    In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer and how do the most efficient algorithms compare on different computers. In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.

  18. Benchmarking GW against exact diagonalization for semiempirical models

    DEFF Research Database (Denmark)

    Kaasbjerg, Kristen; Thygesen, Kristian Sommer

    2010-01-01

    We calculate ground-state total energies and single-particle excitation energies of seven pi-conjugated molecules described with the semiempirical Pariser-Parr-Pople model using self-consistent many-body perturbation theory at the GW level and exact diagonalization. For the total energies GW...... screening and improve the low-lying excitation energies. The effect of the GW self-energy on the molecular excitation energies is shown to be similar to the inclusion of final-state relaxations in Hartree-Fock theory. We discuss the breakdown of the GW approximation in systems with short-range interactions...

  19. Simulation of the OECD Main-Steam-Line-Break Benchmark Exercise 3 Using the Coupled RELAP5/PANTHER Codes

    International Nuclear Information System (INIS)

    The RELAP5 best-estimate thermal-hydraulic system code has been coupled with the PANTHER three-dimensional neutron kinetics code via the TALINK dynamic data exchange control and processing tool. The coupled RELAP5/PANTHER code package has been qualified and will be used at Tractebel Engineering (TE) for analyzing asymmetric pressurized water reactor (PWR) accidents with strong core-system interactions. The Organization for Economic Cooperation and Development/U.S. Nuclear Regulatory Commission PWR main-steam-line-break benchmark problem was analyzed as part of the qualification efforts to demonstrate the capability of the coupled code package of simulating such transients. This paper reports the main results of TE's contribution to the benchmark Exercise 3

  20. Benchmarking of thermal hydraulic loop models for Lead-Alloy Cooled Advanced Nuclear Energy System (LACANES), phase-I: Isothermal steady state forced convection

    International Nuclear Information System (INIS)

    As a highly promising coolant for new-generation nuclear reactors, liquid lead-bismuth eutectic (LBE) has been investigated extensively worldwide. With high expectations for this advanced coolant, a multi-national systematic study on LBE was proposed in 2007, covering the benchmarking of thermal-hydraulic prediction models for the Lead-Alloy Cooled Advanced Nuclear Energy System (LACANES). This international collaboration has been organized by OECD/NEA, and nine organizations (ENEA, ERSE, GIDROPRESS, IAEA, IPPE, KIT/IKET, KIT/INR, NUTRECK, and RRC KI) contributed their efforts to LACANES benchmarking. To produce experimental data for LACANES benchmarking, thermal-hydraulic tests were conducted using a 12-m tall LBE integral test facility, named the Heavy Eutectic liquid metal loop for integral test of Operability and Safety of PEACER (HELIOS), which was constructed in 2005 at Seoul National University in the Republic of Korea. The LACANES benchmark campaigns consist of a forced convection case (phase-I) and a natural circulation case (phase-II). In the forced convection case, the predictions of pressure losses based on handbook correlations and those obtained by computational fluid dynamics code simulation were compared with the measured data for various components of the HELIOS test facility. Based on comparative analyses of the predictions and the measured data, recommendations for the prediction methods of pressure loss in LACANES were obtained. In this paper, results for the forced convection case (phase-I) of LACANES benchmarking are described.
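
    The handbook-correlation side of the phase-I comparison amounts to standard pressure-loss estimates. The sketch below applies the Darcy-Weisbach equation with a Blasius friction factor to one straight pipe section carrying LBE; the property values, dimensions, and choice of friction-factor correlation are illustrative assumptions only, not the benchmark's specification.

```python
# Handbook-style pressure-loss estimate for one straight pipe section of an
# LBE loop; property values and dimensions are rough illustrations only.
rho, mu = 10300.0, 1.6e-3      # assumed LBE density (kg/m^3) and viscosity (Pa s)
v, D, L = 1.0, 0.05, 2.0       # velocity (m/s), pipe diameter and length (m)

Re = rho * v * D / mu                   # Reynolds number
f = 0.316 * Re ** -0.25                 # Blasius friction factor (illustrative choice)
dp = f * (L / D) * 0.5 * rho * v * v    # Darcy-Weisbach pressure loss
print(f"Re = {Re:.2e}, dp = {dp:.0f} Pa")
```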

  1. Generative Benchmark Models for Mesoscale Structures in Multilayer Networks

    CERN Document Server

    Bazzi, Marya; Arenas, Alex; Howison, Sam D; Porter, Mason A

    2016-01-01

    Multilayer networks allow one to represent diverse and interdependent connectivity patterns (e.g., time-dependence, multiple subsystems, or both) that arise in many applications and which are difficult or awkward to incorporate into standard network representations. In the study of multilayer networks, it is important to investigate "mesoscale" (i.e., intermediate-scale) structures, such as dense sets of nodes known as "communities" that are connected sparsely to each other, to discover network features that are not apparent at the microscale or the macroscale. A variety of methods and algorithms are available to identify communities in multilayer networks, but they differ in their definitions and/or assumptions of what constitutes a community, and many scalable algorithms provide approximate solutions with little or no theoretical guarantee on the quality of their approximations. Consequently, it is crucial to develop generative models of networks to use as a common test of community-detection tools. I...

  2. Benchmarking of computer codes and approaches for modeling exposure scenarios

    International Nuclear Information System (INIS)

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided
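
    The spreadsheet-level comparison mentioned above is essentially a one-line dose calculation per pathway. A sketch for the ingestion pathway at unit water concentration follows; the intake rate and dose coefficient are placeholders, not values from the compared codes or from ICRP.

```python
# Spreadsheet-level dose check of the kind used as the most fundamental
# comparison point: ingestion dose from a unit radionuclide concentration
# in water. All parameter values below are hypothetical placeholders.
C_w = 1.0          # water concentration, Bq/L (unit concentration)
U = 730.0          # annual water intake, L/yr (assumed)
DCF = 2.8e-8       # ingestion dose coefficient, Sv/Bq (hypothetical)

dose = C_w * U * DCF * 1.0e3   # annual dose, mSv/yr
print(f"{dose:.3e} mSv/yr")
```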

  3. Modeling RIA benchmark cases with FRAPTRAN and SCANAIR: A comparative exercise

    International Nuclear Information System (INIS)

    Highlights: • Consistent comparison between FRAPTRAN and SCANAIR RIA estimations. • Meaningful insights for further code enhancement. • Special attention should be paid to the heat transfer across the oxide layer. • Efforts should be made to achieve accurate gap closure estimations. • The FGR predictions reflect the different strategies adopted by the codes. Abstract: The need for defining new RIA safety criteria has demonstrated the importance of performing a rigorous assessment of the capabilities of transient codes. The present work is a comparative exercise devoted to identifying the origin of the key deviations found between the predictions of the FRAPTRAN-1.4 and SCANAIR-7.1 codes. To do so, the results of the calculations submitted by CIEMAT to the OECD/NEA RIA benchmark (CIP0-1, CIP3-1, VA-1 and VA-3 experiments) have been exploited. The advantage of these simulations is that the initial rodlet characterization, the transient power and the user assumptions were defined similarly in the two codes, which is essential for a consistent comparison. The comparative assessment has led to relevant insights about the codes' performance. As for the differences reported in the CIP0-1 clad temperatures, the heat transfer modeling across the oxide layer has been found to be mainly responsible. Likewise, the estimation of the gap closure time is directly related to the systematically higher clad hoop deformations predicted by FRAPTRAN-1.4. The differences between FRAPTRAN-1.4 and SCANAIR-7.1 fission gas release predictions reflect the different strategies adopted by the codes to model the complex processes involved in gas release during the transient

  4. AEGIS geologic simulation model

    International Nuclear Information System (INIS)

    The Geologic Simulation Model (GSM) is used by the AEGIS (Assessment of Effectiveness of Geologic Isolation Systems) program at the Pacific Northwest Laboratory to simulate the dynamic geology and hydrology of a geologic nuclear waste repository site over a million-year period following repository closure. The GSM helps to organize geologic/hydrologic data; to focus attention on active natural processes by requiring their simulation; and, through interactive simulation and calibration, to reduce subjective evaluations of the geologic system. During each computer run, the GSM produces a million-year geologic history that is possible for the region and the repository site. In addition, the GSM records in permanent history files everything that occurred during that time span. Statistical analyses of data in the history files of several hundred simulations are used to classify typical evolutionary paths, to establish the probabilities associated with deviations from the typical paths, and to determine which types of perturbations of the geologic/hydrologic system, if any, are most likely to occur. These simulations will be evaluated by geologists familiar with the repository region to determine the validity of the results. Perturbed systems that are determined to be the most realistic, within whatever probability limits are established, will be used for the analyses that involve radionuclide transport and dose models. The GSM is designed to be continuously refined and updated. Simulation models are site specific, and, although the submodels may have limited general applicability, the input data requirements necessitate detailed characterization of each site before application

  5. Model-Based Engineering and Manufacturing CAD/CAM Benchmark

    International Nuclear Information System (INIS)

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more modern, responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were somewhere between 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a single computer-aided manufacturing (CAM) system. All companies were looking to the Internet either to transport information more easily throughout the corporation or as a conduit for

  6. GEANT4 simulations of the n_TOF spallation source and their benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Lo Meo, S. [Research Centre 'Ezio Clementel', ENEA, Bologna (Italy); Section of Bologna, INFN, Bologna (Italy); Cortes-Giraldo, M.A.; Lerendegui-Marco, J.; Guerrero, C.; Quesada, J.M. [Universidad de Sevilla, Facultad de Fisica, Sevilla (Spain); Massimi, C.; Vannini, G. [Section of Bologna, INFN, Bologna (Italy); University of Bologna, Physics and Astronomy Dept. 'Alma Mater Studiorum', Bologna (Italy); Barbagallo, M.; Colonna, N. [INFN, Section of Bari, Bari (Italy); Mancusi, D. [CEA-Saclay, DEN, DM2S, SERMA, LTSD, Gif-sur-Yvette CEDEX (France); Mingrone, F. [Section of Bologna, INFN, Bologna (Italy); Sabate-Gilarte, M. [Universidad de Sevilla, Facultad de Fisica, Sevilla (Spain); European Organization for Nuclear Research (CERN), Geneva (Switzerland); Vlachoudis, V. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Collaboration: The n_TOF Collaboration

    2015-12-15

    Neutron production and transport in the spallation target of the n_TOF facility at CERN has been simulated with GEANT4. The results obtained with different models of high-energy nucleon-nucleus interaction have been compared with the measured characteristics of the neutron beam, in particular the flux and its dependence on neutron energy, measured in the first experimental area. The best agreement at present, within 20% for the absolute value of the flux, and within a few percent for the energy dependence in the whole energy range from thermal to 1 GeV, is obtained with the INCL++ model coupled with the GEANT4 native de-excitation model. All other available models overestimate by a larger factor, of up to 70%, the n_TOF neutron flux. The simulations are also able to accurately reproduce the neutron beam energy resolution function, which is essentially determined by the moderation time inside the target/moderator assembly. The results here reported provide confidence on the use of GEANT4 for simulations of spallation neutron sources. (orig.)

  7. GEANT4 simulations of the n_TOF spallation source and their benchmarking

    Science.gov (United States)

    Lo Meo, S.; Cortés-Giraldo, M. A.; Massimi, C.; Lerendegui-Marco, J.; Barbagallo, M.; Colonna, N.; Guerrero, C.; Mancusi, D.; Mingrone, F.; Quesada, J. M.; Sabate-Gilarte, M.; Vannini, G.; Vlachoudis, V.

    2015-12-01

    Neutron production and transport in the spallation target of the n_TOF facility at CERN has been simulated with GEANT4. The results obtained with different models of high-energy nucleon-nucleus interaction have been compared with the measured characteristics of the neutron beam, in particular the flux and its dependence on neutron energy, measured in the first experimental area. The best agreement at present, within 20% for the absolute value of the flux, and within few percent for the energy dependence in the whole energy range from thermal to 1 GeV, is obtained with the INCL++ model coupled with the GEANT4 native de-excitation model. All other available models overestimate by a larger factor, of up to 70%, the n_TOF neutron flux. The simulations are also able to accurately reproduce the neutron beam energy resolution function, which is essentially determined by the moderation time inside the target/moderator assembly. The results here reported provide confidence on the use of GEANT4 for simulations of spallation neutron sources.

  8. Peculiarity by Modeling of the Control Rod Movement by the Kalinin-3 Benchmark

    International Nuclear Information System (INIS)

    The paper presents an important part of the results of the OECD/NEA benchmark transient 'Switching off one main circulation pump at nominal power' analyzed as a boundary condition problem by the coupled system code ATHLET-BIPR-VVER. Some observations and comparisons with measured data for integral reactor parameters are discussed. Special attention is paid on the modeling and comparisons performed for the control rod movement and the reactor power history. (Authors)

  9. Logistics Cost Modeling in Strategic Benchmarking Project : cases: CEL Consulting & Cement Group A

    OpenAIRE

    Nguyen Cong Minh, Vu

    2010-01-01

    This thesis deals with logistics cost modeling for a benchmarking project as consulting service from CEL Consulting for Cement Group A. The project aims at providing flows and cost comparison of bagged cement of all cement players to relevant markets in Indonesia. The results of the project yielded strategic elements for Cement Group A in planning their penetration strategy with new investments. Due to the specific needs, Cement Group A requested a flexible costing approach taking into ...

  10. The effect of coupled mass transport and internal reforming on modeling of solid oxide fuel cells part II: Benchmarking transient response and dynamic model fidelity assessment

    Science.gov (United States)

    Albrecht, Kevin J.; Braun, Robert J.

    2016-02-01

    One-dimensional and 'quasi' two-dimensional (2-D) dynamic, interface charge transport models of a solid oxide fuel cell (SOFC), developed previously in a companion paper, are benchmarked against other models and simulated to evaluate the effects of coupled transport and chemistry. Because the reforming reaction can distort the concentration profiles of the species within the anode, a 'quasi' 2-D model that captures porous-media mass transport and electrochemistry is required. The impact of a change in concentration at the triple-phase boundary is twofold: the local Nernst potential and the anode exchange current densities are influenced, thereby altering the current density and temperature distributions of the cell. Thus, the dynamic responses of the cell models are compared and benchmarked against previous channel-level models to gauge the relative importance of capturing in-situ reforming phenomena on cell performance. Simulation results indicate differences in the transient electrochemical response for a step in current density, where the 'quasi' 2-D model predicts a slower rise and fall in cell potential due to the additional volume of the porous media and mass transport dynamics. Delays in fuel flow rate are shown to increase the difference observed in the electrochemical response of the cells.
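
    The first of the two effects described, the influence of triple-phase-boundary concentrations on the local Nernst potential, can be made concrete with a standard Nernst equation for H2 oxidation. The sketch below uses assumed partial pressures and an assumed standard potential; it illustrates the coupling, not the companion paper's model equations.

```python
import math

# Local Nernst potential for H2 oxidation in an SOFC, showing how a change
# in species concentration at the triple-phase boundary feeds back on the
# electrochemistry. All values below are arbitrary illustrative assumptions.
R, F = 8.314, 96485.0                  # gas constant, Faraday constant
T = 1073.0                             # temperature, K
E0 = 0.98                              # standard potential at T, V (assumed)
p_H2, p_H2O, p_O2 = 0.85, 0.15, 0.21   # partial pressures at reaction sites, atm

E = E0 + (R * T) / (2 * F) * math.log(p_H2 * math.sqrt(p_O2) / p_H2O)
print(f"E_Nernst = {E:.3f} V")
```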

  11. Molecular dynamics simulation for modelling plasma spectroscopy

    CERN Document Server

    Talin, B; Calisti, A; Gigosos, M A; González, M A; Gaztelurrutia, T R; Dufty, J W

    2003-01-01

    The ion-electron coupling properties for an ion impurity in an electron gas and for a two-component plasma are investigated on the basis of a regularized electron-ion potential that removes the short-range Coulomb divergence. This work is largely motivated by the study of radiator dipole relaxation in plasmas, which makes a real link between models and experiments. Current radiative-property models for plasmas include single electron collisions, neglecting charge-charge correlations, within the classical quasi-particle approach commonly used in this field. The dipole relaxation simulation based on electron-ion molecular dynamics proposed here will provide a means to benchmark and improve model developments. Benefiting from a detailed study of a single ion embedded in an electron plasma, the challenging two-component ion-electron molecular dynamics simulations are shown to be accurate. They open new possibilities for obtaining reference lineshape data.
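
    A regularized electron-ion potential of the kind referred to above typically removes the short-range Coulomb divergence by smoothing the interaction below a length scale δ, for example of the order of the thermal de Broglie wavelength. One common form, shown here as an assumption rather than the paper's exact potential, stays finite at the origin:

```latex
V_{ie}(r) = -\frac{Z e^{2}}{4\pi\varepsilon_{0}\, r}\left(1 - e^{-r/\delta}\right),
\qquad
V_{ie}(r) \xrightarrow[r \to 0]{} -\frac{Z e^{2}}{4\pi\varepsilon_{0}\,\delta}.
```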

  12. Generic Hockey-Stick Model for Estimating Benchmark Dose and Potency: Performance Relative to BMDS and Application to Anthraquinone

    OpenAIRE

    Kenneth T. Bogen

    2010-01-01

    Benchmark Dose Model software (BMDS), developed by the U.S. Environmental Protection Agency, involves a growing suite of models and decision rules now widely applied to assess noncancer and cancer risk, yet its statistical performance has never been examined systematically. As typically applied, BMDS also ignores the possibility of reduced risk at low doses (“hormesis”). A simpler, proposed Generic Hockey-Stick (GHS) model also estimates benchmark dose and potency, and additionally characteri...

  13. Quality assurance for online adapted treatment plans: Benchmarking and delivery monitoring simulation

    Energy Technology Data Exchange (ETDEWEB)

    Li, Taoran, E-mail: taoran.li.duke@gmail.com; Wu, Qiuwen; Yang, Yun; Rodrigues, Anna; Yin, Fang-Fang; Jackie Wu, Q. [Department of Radiation Oncology, Duke University Medical Center Durham, North Carolina 27710 (United States)

    2015-01-15

    Purpose: An important challenge facing online adaptive radiation therapy is the development of feasible and efficient quality assurance (QA). This project aimed to validate the deliverability of online adapted plans and develop a proof-of-concept online delivery monitoring system for online adaptive radiation therapy QA. Methods: The first part of this project benchmarked automatically online adapted prostate treatment plans using traditional portal dosimetry IMRT QA. The portal dosimetry QA results of online adapted plans were compared to original (unadapted) plans as well as randomly selected prostate IMRT plans from our clinic. In the second part, an online delivery monitoring system was designed and validated via a simulated treatment with intentional multileaf collimator (MLC) errors. This system was based on inputs from the dynamic machine information (DMI), which continuously reports actual MLC positions and machine monitor units (MUs) at intervals of 50 ms or less during delivery. Based on the DMI, the system performed two levels of monitoring/verification during the delivery: (1) dynamic monitoring of cumulative fluence errors resulting from leaf position deviations and visualization using fluence error maps (FEMs); and (2) verification of MLC positions against the treatment plan for potential errors in MLC motion and data transfer at each control point. Validation of the online delivery monitoring system was performed by introducing intentional systematic MLC errors (ranging from 0.5 to 2 mm) to the DMI files for both leaf banks. These DMI files were analyzed by the proposed system to evaluate the system’s performance in quantifying errors and revealing the source of errors, as well as to understand patterns in the FEMs. In addition, FEMs from 210 actual prostate IMRT beams were analyzed using the proposed system to further validate its ability to catch and identify errors, as well as establish error magnitude baselines for prostate IMRT delivery.
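
    As a rough illustration of the fluence error map (FEM) idea described above, the sketch below accumulates fluence between the leaf positions reported at each control point and differences the actual map against the planned one. This is not the authors' implementation; the leaf trajectories, grid and function name are invented.

    ```python
    import numpy as np

    def fluence_map(leaf_left, leaf_right, mu_per_cp, grid):
        """Accumulate a 1D fluence profile for one MLC leaf pair.

        leaf_left, leaf_right : left/right leaf positions (cm) per control point
        mu_per_cp             : monitor units delivered at each control point
        grid                  : 1D positions (cm) at which fluence is scored
        """
        fluence = np.zeros_like(grid)
        for xl, xr, mu in zip(leaf_left, leaf_right, mu_per_cp):
            fluence += mu * ((grid >= xl) & (grid <= xr))  # open between leaves
        return fluence

    # Hypothetical sliding-window trajectory with a 1 mm systematic error on
    # one leaf bank, mimicking the intentional errors described above
    grid = np.linspace(-5.0, 5.0, 201)
    n_cp = 50
    planned_left = np.linspace(-3.0, 0.0, n_cp)
    planned_right = np.linspace(0.0, 3.0, n_cp)
    mu = np.full(n_cp, 1.0)

    actual_left = planned_left + 0.1  # 1 mm offset
    fem = (fluence_map(actual_left, planned_right, mu, grid)
           - fluence_map(planned_left, planned_right, mu, grid))
    print("max absolute fluence error (MU):", np.abs(fem).max())
    ```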

  14. Quality assurance for online adapted treatment plans: Benchmarking and delivery monitoring simulation

    International Nuclear Information System (INIS)

    Purpose: An important challenge facing online adaptive radiation therapy is the development of feasible and efficient quality assurance (QA). This project aimed to validate the deliverability of online adapted plans and develop a proof-of-concept online delivery monitoring system for online adaptive radiation therapy QA. Methods: The first part of this project benchmarked automatically online adapted prostate treatment plans using traditional portal dosimetry IMRT QA. The portal dosimetry QA results of online adapted plans were compared to original (unadapted) plans as well as randomly selected prostate IMRT plans from our clinic. In the second part, an online delivery monitoring system was designed and validated via a simulated treatment with intentional multileaf collimator (MLC) errors. This system was based on inputs from the dynamic machine information (DMI), which continuously reports actual MLC positions and machine monitor units (MUs) at intervals of 50 ms or less during delivery. Based on the DMI, the system performed two levels of monitoring/verification during the delivery: (1) dynamic monitoring of cumulative fluence errors resulting from leaf position deviations and visualization using fluence error maps (FEMs); and (2) verification of MLC positions against the treatment plan for potential errors in MLC motion and data transfer at each control point. Validation of the online delivery monitoring system was performed by introducing intentional systematic MLC errors (ranging from 0.5 to 2 mm) to the DMI files for both leaf banks. These DMI files were analyzed by the proposed system to evaluate the system’s performance in quantifying errors and revealing the source of errors, as well as to understand patterns in the FEMs. In addition, FEMs from 210 actual prostate IMRT beams were analyzed using the proposed system to further validate its ability to catch and identify errors, as well as establish error magnitude baselines for prostate IMRT delivery.

  15. VVER-1000 coolant transient benchmark, Phase 1 (V1000CT-1), Vol. 2: summary results of exercise 1 on point kinetics plant simulation

    International Nuclear Information System (INIS)

    In the field of coupled neutronics/thermal-hydraulics computation there is a need to enhance scientific knowledge in order to develop advanced modelling techniques for new nuclear technologies and concepts, as well as current applications. Recently developed best-estimate computer code systems for modelling 3-D coupled neutronics/thermal-hydraulics transients in nuclear cores and for the coupling of core phenomena and system dynamics need to be compared against each other and validated against results from experiments. International benchmark studies have been set up for this purpose. The present volume, a follow-up to the first volume describing the specification of the benchmark, presents the results of the first exercise that identifies the key parameters and important issues concerning the thermal-hydraulic system modelling of the simulated transient. This exercise aims to achieve the correct initialization and testing of the system code models. The transient chosen for the exercise is caused by the switching on of a main coolant pump while the other three are in operation. It is based on an experiment that was conducted by Bulgarian and Russian engineers during the plant-commissioning phase at the VVER-1000 Kozloduy Unit 6. (author)

  16. Benchmark of SIMULATE5 thermal hydraulics against the Frigg and NUPEC full bundle test experiments

    International Nuclear Information System (INIS)

    SIMULATE5 is Studsvik Scandpower's next generation nodal code. The core portions of the thermal hydraulic models of PWRs and BWRs are treated as essentially identical, with each assembly having an active channel and a number of parallel water channels. In addition, the BWR assembly may be divided into four radial sub-assemblies. For natural circulation reactors, the BWR thermal hydraulic model is capable of modeling an entire vessel loop: core, chimney, upper plenum, standpipes, steam separators, downcomer, recirculation pumps, and lower plenum. This paper presents results of the validation of the BWR thermal hydraulic model against: (1) pressure drop data measured in the Frigg and NUPEC test facilities; (2) void fraction distribution measured in the Frigg and NUPEC loops; (3) quarter-assembly void fraction measured in the NUPEC experiments and (4) natural and forced circulation flow measurements in the Frigg loop. (author)

  17. Benchmark of a new multi-ion-species collision operator for $\\delta f$ Monte Carlo neoclassical simulation

    CERN Document Server

    Satake, Shinsuke; Pianpanit, Theerasarn; Sugama, Hideo; Nunami, Masanori; Matsuoka, Seikichi; Ishiguro, Seiji; Kanno, Ryutaro

    2016-01-01

    A numerical method to implement a linearized Coulomb collision operator for multi-ion-species neoclassical transport simulation using the two-weight $\delta f$ Monte Carlo method is developed. The conservation properties and the adjointness of the operator in collisions between two particle species with different temperatures are verified. The linearized operator in a $\delta f$ Monte Carlo code is benchmarked against two other kinetic simulation codes, i.e., a $\delta f$ continuum gyrokinetic code with the same linearized collision operator and a full-f PIC code with the Nanbu collision operator. The benchmark simulations of the equilibration process of plasma flow and temperature fluctuation among several particle species show very good agreement between the $\delta f$ Monte Carlo code and the other two codes. An error in the H-theorem in the two-weight $\delta f$ Monte Carlo method is found, which is caused by the weight spreading phenomenon inherent in the two-weight $\delta f$ method. It is demonstrated that the w...

  18. CFD Simulation of Thermal-Hydraulic Benchmark V1000CT-2 Using ANSYS CFX

    OpenAIRE

    Thomas Höhne

    2009-01-01

    Plant measured data from VVER-1000 coolant mixing experiments were used within the OECD/NEA and AER coupled code benchmarks for light water reactors to test and validate computational fluid dynamic (CFD) codes. The task is to compare the various calculations with measured data, using specified boundary conditions and core power distributions. The experiments, which are provided for CFD validation, include single loop cooling down or heating-up by disturbing the heat transfer in the steam gene...

  19. Laser-plasma interaction in ignition relevant plasmas: benchmarking our 3D modelling capabilities versus recent experiments

    Energy Technology Data Exchange (ETDEWEB)

    Divol, L; Froula, D H; Meezan, N; Berger, R; London, R A; Michel, P; Glenzer, S H

    2007-09-27

    We have developed a new target platform to study laser-plasma interaction under ignition-relevant conditions at the Omega laser facility (LLE/Rochester) [1]. By shooting an interaction beam along the axis of a gas-filled hohlraum heated by up to 17 kJ of heater beam energy, we were able to create a millimeter-scale underdense uniform plasma at electron temperatures above 3 keV. Extensive Thomson scattering measurements allowed us to benchmark our hydrodynamic simulations performed with HYDRA [1]. As a result of this effort, we can use these simulations with confidence to provide input parameters for our LPI simulation code pF3d [2]. In this paper, we show that by using accurate hydrodynamic profiles and full three-dimensional simulations, including a realistic modeling of the laser intensity pattern generated by various smoothing options, fluid LPI theory reproduces the SBS thresholds and absolute reflectivity values and the absence of measurable SRS. This good agreement was made possible by the recent increase in computing power routinely available for such simulations.

  20. Research Reactor Benchmarks

    International Nuclear Information System (INIS)

    A criticality benchmark experiment performed at the Jozef Stefan Institute TRIGA Mark II research reactor is described. This experiment and its evaluation are given as examples of benchmark experiments at research reactors. For this reason the differences and possible problems compared to other benchmark experiments are particularly emphasized. General guidelines for performing criticality benchmarks in research reactors are given. The criticality benchmark experiment was performed in a normal operating reactor core using commercially available fresh 20% enriched fuel elements containing 12 wt% uranium in uranium-zirconium hydride fuel material. Experimental conditions to minimize experimental errors and to enhance computer modeling accuracy are described. Uncertainties in multiplication factor due to fuel composition and geometry data are analyzed by sensitivity analysis. The simplifications in the benchmark model compared to the actual geometry are evaluated. Sample benchmark calculations with the MCNP and KENO Monte Carlo codes are given

  1. Benchmark Modeling of the Near-Field and Far-Field Wave Effects of Wave Energy Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Rhinefrank, Kenneth E; Haller, Merrick C; Ozkan-Haller, H Tuba

    2013-01-26

    This project is an industry-led partnership between Columbia Power Technologies and Oregon State University that will perform benchmark laboratory experiments and numerical modeling of the near-field and far-field impacts of wave scattering from an array of wave energy devices. These benchmark experimental observations will help to fill a gaping hole in our present knowledge of the near-field effects of multiple, floating wave energy converters and are a critical requirement for estimating the potential far-field environmental effects of wave energy arrays. The experiments will be performed at the Hinsdale Wave Research Laboratory (Oregon State University) and will utilize an array of newly developed 'Buoys' that are realistic, lab-scale floating power converters. The array of Buoys will be subjected to realistic, directional wave forcing (1:33 scale) that will approximate the expected conditions (waves and water depths) to be found off the Central Oregon Coast. Experimental observations will include comprehensive in-situ wave and current measurements as well as a suite of novel optical measurements. These new optical capabilities will include imaging of the 3D wave scattering using a binocular stereo camera system, as well as 3D device motion tracking using a newly acquired LED system. These observing systems will capture the 3D motion history of individual Buoys as well as resolve the 3D scattered wave field, thus resolving the constructive and destructive wave interference patterns produced by the array at high resolution. These data combined with the device motion tracking will provide necessary information for array design in order to balance array performance with the mitigation of far-field impacts. As a benchmark data set, these data will be an important resource for testing of models for wave/buoy interactions, buoy performance, and far-field effects on wave and current patterns due to the presence of arrays. Under the proposed project we will initiate

  2. CFD Analysis of a Void Distribution Benchmark of the NUPEC PSBT Tests: Model Calibration and Influence of Turbulence Modelling

    OpenAIRE

    Krepper, E.; Rzehak, R.

    2012-01-01

    The paper presents CFD calculations of the void distribution tests of the PSBT benchmark using ANSYS CFX-12.1. First, relevant aspects of the implemented wall boiling model are reviewed highlighting the uncertainties in several model parameters. It is then shown that the measured cross-sectionally averaged values can be reproduced well with a single set of calibrated model parameters for different test cases. For the reproduction of patterns of void distribution cross-sections, attention has ...

  3. A parallel high-order accurate finite element nonlinear Stokes ice sheet model and benchmark experiments

    Energy Technology Data Exchange (ETDEWEB)

    Leng, Wei [Chinese Academy of Sciences]; Ju, Lili [University of South Carolina]; Gunzburger, Max [Florida State University]; Price, Stephen [Los Alamos National Laboratory]; Ringler, Todd [Los Alamos National Laboratory]

    2012-01-01

    The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.

  4. Modeling the emetic potencies of food-borne trichothecenes by benchmark dose methodology.

    Science.gov (United States)

    Male, Denis; Wu, Wenda; Mitchell, Nicole J; Bursian, Steven; Pestka, James J; Wu, Felicia

    2016-08-01

    Trichothecene mycotoxins commonly co-contaminate cereal products. They cause immunosuppression, anorexia, and emesis in multiple species. Dietary exposure to such toxins often occurs in mixtures. Hence, if it were possible to determine their relative toxicities and assign toxic equivalency factors (TEFs) to each trichothecene, risk management and regulation of these mycotoxins could become more comprehensive and simple. We used a mink emesis model to compare the toxicities of deoxynivalenol (DON), 3-acetyldeoxynivalenol, 15-acetyldeoxynivalenol, nivalenol (NIV), fusarenon-X (FX), HT-2 toxin, and T-2 toxin. These toxins were administered to mink via gavage and intraperitoneal (IP) injection. The United States Environmental Protection Agency (EPA) benchmark dose software was used to determine benchmark doses for each trichothecene. The relative potencies of these toxins were calculated as the ratios of their benchmark doses to that of DON. Our results showed that mink were more sensitive to orally administered toxins than to those administered by IP injection. T-2 and HT-2 toxins caused the greatest emetic responses, followed by FX, and then by DON, its acetylated derivatives, and NIV. Although these results provide key information on comparative toxicities, there is still a need for more animal-based studies focusing on various endpoints and combined effects of trichothecenes before TEFs can be established. PMID:27292944
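
    A minimal sketch of the potency-ratio step described above, using placeholder benchmark doses rather than the paper's results (the abstract defines relative potency as the ratio of each toxin's benchmark dose to that of DON, so smaller ratios indicate greater emetic potency):

    ```python
    # Placeholder benchmark doses (mg/kg bw); not the paper's values
    bmd = {"DON": 0.10, "3-ADON": 0.12, "15-ADON": 0.11,
           "NIV": 0.14, "FX": 0.05, "HT-2": 0.02, "T-2": 0.015}

    # Relative potency as defined above: BMD(toxin) / BMD(DON)
    ratios = {tox: dose / bmd["DON"] for tox, dose in bmd.items()}
    for tox, r in sorted(ratios.items(), key=lambda kv: kv[1]):
        print(f"{tox:8s} BMD ratio vs DON = {r:.2f}")
    ```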

  5. Monte Carlo simulation of MLC-shaped TrueBeam electron fields benchmarked against measurement

    CERN Document Server

    Lloyd, Samantha AM; Zavgorodni, Sergei

    2014-01-01

    Modulated electron radiotherapy (MERT) and combined, modulated photon/electron radiotherapy (MPERT) have received increased research attention, having shown capacity for reduced low-dose exposure to healthy tissue and comparable, if not improved, target coverage for a number of treatment sites. Accurate dose calculation tools are necessary for clinical treatment planning, and Monte Carlo (MC) is the gold standard for electron field simulation. With many clinics replacing older accelerators, MC source models of the new machines are needed for continued development; however, Varian has kept internal schematics of the TrueBeam confidential, and electron phase-space sources have not been made available. TrueBeam electron fields are not substantially different from those generated by the Clinac 21EX, so we have modified the internal schematics of the Clinac 21EX to simulate TrueBeam electrons. BEAMnrc/DOSXYZnrc were used to simulate 5x5 and 20x20 cm$^2$ electron fields with MLC-shaped apertures. Secondary collimati...

  6. Benchmarking biological nutrient removal in wastewater treatment plants: influence of mathematical model assumptions

    DEFF Research Database (Denmark)

    Flores-Alsina, Xavier; Gernaey, Krist V.; Jeppsson, Ulf

    2012-01-01

    This paper examines the effect of different model assumptions when describing biological nutrient removal (BNR) by the activated sludge models (ASM) 1, 2d & 3. The performance of a nitrogen removal (WWTP1) and a combined nitrogen and phosphorus removal (WWTP2) benchmark wastewater treatment plant...... reactive settler: (1) increases the hydrolysis of particulates; (2) increases the overall plant's denitrification efficiency by reducing the SNOx concentration at the bottom of the clarifier; (3) increases the oxidation of COD compounds; (4) increases XOHO and XANO decay; and, finally, (5) increases the...

  7. Comparison of the results of the fifth dynamic AER benchmark-a benchmark for coupled thermohydraulic system/three-dimensional hexagonal kinetic core models

    International Nuclear Information System (INIS)

    The fifth dynamic benchmark was defined at the seventh AER Symposium, held in Hoernitz, Germany, in 1997. It is the first benchmark for coupled thermohydraulic system/three-dimensional hexagonal neutron kinetic core models. In this benchmark the interaction between the components of a WWER-440 NPP and the reactor core has been investigated. The initiating event is a symmetrical break of the main steam header at the end of the first fuel cycle under hot shutdown conditions, with one control rod group stuck. This break causes an overcooling of the primary circuit. During this overcooling the scram reactivity is compensated and the scrammed reactor becomes recritical. The calculation was continued until the highly borated water from the high-pressure injection system terminated the power excursion. Each participant used their own best-estimate nuclear cross section data; only the initial subcriticality at the beginning of the transient was given. Solutions were received from the Kurchatov Institute, Russia, with the code BIPR8/ATHLET; VTT Energy, Finland, with HEXTRAN/SMABRE; NRI Rez, Czech Republic, with DYN3D/ATHLET; KFKI Budapest, Hungary, with KIKO3D/ATHLET; and from FZR, Germany, with the code DYN3D/ATHLET. In this paper the results are compared. Besides the comparison of global results, the behaviour of several thermohydraulic and neutron kinetic parameters is presented to discuss the revealed differences between the solutions. (Authors)

  8. Benchmark measurements and simulations of dose perturbations due to metallic spheres in proton beams

    International Nuclear Information System (INIS)

    Monte Carlo simulations are increasingly used for dose calculations in proton therapy due to their inherent accuracy. However, dosimetric deviations have been found with Monte Carlo codes when high-density materials are present in the proton beamline. The purpose of this work was to quantify the magnitude of dose perturbation caused by metal objects. We did this by comparing measurements and Monte Carlo predictions of dose perturbations caused by the presence of small metal spheres in several clinical proton therapy beams as functions of proton beam range and drift space. The Monte Carlo codes MCNPX, GEANT4 and Fast Dose Calculator (FDC) were used. Generally good agreement was found between measurements and Monte Carlo predictions, with the average difference within 5% and the maximum difference within 17%. A modification of the multiple Coulomb scattering model in the MCNPX code improved accuracy and provided the best overall agreement with measurements. Our results confirmed that Monte Carlo codes are well suited for predicting multiple Coulomb scattering in proton therapy beams when short drift spaces are involved. - Highlights: • We compared measurements and Monte Carlo predictions of dose perturbations caused by metal objects in proton beams. • Different Monte Carlo codes were used, including MCNPX, GEANT4 and Fast Dose Calculator. • Good agreement was found between measurements and Monte Carlo simulations. • The modification of the multiple Coulomb scattering model in the MCNPX code yielded improved accuracy. • Our results confirmed that Monte Carlo codes are well suited for predicting multiple Coulomb scattering in proton therapy

  9. Electron emission from amorphous solid water after proton impact: Benchmarking PTra and Geant4 track structure Monte Carlo simulations

    International Nuclear Information System (INIS)

    Track structure Monte Carlo simulations of ionising radiation in water are often used to estimate radiation damage to DNA. For this purpose, an accurate simulation of the transport of densely ionising low-energy secondary electrons is particularly important, but is impaired by the high uncertainty of the required physical interaction cross section data for liquid water. A possible tool for the verification of secondary electron transport in a track structure simulation has been suggested by Toburen et al. (2010), who measured the angle-dependent energy spectra of electrons emitted from a thin layer of amorphous solid water (ASW) upon the passage of 6 MeV protons. In this work, simulations were performed for the setup of their experiment, using the PTB track structure code (PTra) and Geant4-DNA. To enable electron transport below the ionisation threshold, additional excitation and dissociative attachment anion states were included in PTra and activated in Geant4. Additionally, a surface potential was considered in both simulations, such that the escape probability for an electron depends on its energy and impact angle at the ASW/vacuum interface. For vanishing surface potential, the simulated spectra are in good agreement with the measured spectra for energies above 50 eV. Below, the simulations overestimate the yield of electrons by a factor of up to 4 (PTra) or 7 (Geant4-DNA), which is still better agreement than that obtained in previous simulations of this experimental situation. The agreement of the simulations with experimental data was significantly improved by using a step-like increase of the potential energy at the ASW surface. - Highlights: ► Benchmarked electron transport in track structure simulations using liquid water. ► Simulated differential electron spectra agree with measured data. ► The agreement was improved by including a 3 eV surface potential step.

  10. Multiple-code benchmark simulation study of coupled THMC processes in the excavation disturbed zone associated with geological nuclear waste repositories

    International Nuclear Information System (INIS)

    An international, multiple-code benchmark test (BMT) study is being conducted within the international DECOVALEX project to analyze coupled thermal, hydrological, mechanical and chemical (THMC) processes in the excavation disturbed zone (EDZ) around emplacement drifts of a nuclear waste repository. This BMT focuses on mechanical responses and long-term chemo-mechanical effects that may lead to changes in mechanical and hydrological properties in the EDZ. This includes time-dependent processes such as creep, subcritical crack growth, or healing of fractures that might cause 'weakening' or 'hardening' of the rock over the long term. Five research teams are studying this BMT using a wide range of model approaches, including boundary element, finite element, finite difference, particle mechanics, and elasto-plastic cellular automata methods. This paper describes the definition of the problem and preliminary simulation results for the initial model inception part, in which time-dependent effects are not yet included

  11. Multiple-Code Benchmark Simulation Study of Coupled THMC Processes IN the EXCAVATION DISTURBED ZONE Associated with Geological Nuclear Waste Repositories

    International Nuclear Information System (INIS)

    An international, multiple-code benchmark test (BMT) study is being conducted within the international DECOVALEX project to analyze coupled thermal, hydrological, mechanical and chemical (THMC) processes in the excavation disturbed zone (EDZ) around emplacement drifts of a nuclear waste repository. This BMT focuses on mechanical responses and long-term chemo-mechanical effects that may lead to changes in mechanical and hydrological properties in the EDZ. This includes time-dependent processes such as creep, subcritical crack growth, or healing of fractures that might cause 'weakening' or 'hardening' of the rock over the long term. Five research teams are studying this BMT using a wide range of model approaches, including boundary element, finite element, finite difference, particle mechanics, and elasto-plastic cellular automata methods. This paper describes the definition of the problem and preliminary simulation results for the initial model inception part, in which time-dependent effects are not yet included

  12. Benchmarking Electron-Cloud Build-Up and Heat-Load Simulations against Large-Hadron-Collider Observations

    CERN Document Server

    Dominguez, O; Maury, H; Rumolo, G; Zimmermann, F

    2011-01-01

    After reviewing the basic features of electron clouds in particle accelerators, the pertinent vacuum-chamber surface properties, and the electron-cloud simulation tools in use at CERN, we report recent observations of electron-cloud phenomena at the Large Hadron Collider (LHC) and ongoing attempts to benchmark the measured LHC vacuum pressure increases and heat loads against electron-cloud build-up simulations aimed at determining the actual surface parameters and at monitoring the so-called scrubbing process. Finally, some other electron-cloud studies related to the LHC are mentioned, and future study plans are described. Presented at MulCoPim2011, Valencia, Spain, 21-23 September 2011.

  13. On the energy conservation electrostatic PIC/MC simulating: benchmark and application to the radio frequency discharges

    CERN Document Server

    Hong-Yu, Wang; Peng, Sun; Ling-Bao, Kong

    2014-01-01

    We benchmark and analyze the error of the energy conservation (EC) scheme for particle-in-cell/Monte Carlo collision (PIC/MCC) algorithms by simulating a radio-frequency discharge. The plasma heating behaviors and electron distribution functions obtained by 1D simulation are analyzed. Both explicit and implicit algorithms are checked. The results show that the EC scheme can eliminate self-heating at wide grid spacing in both cases, with a small reduction in accuracy. For typical parameters, the EC implicit scheme has higher precision than the EC explicit scheme. Some "numerical cooling" behaviors are observed and analyzed. Some other errors are analyzed as well. The analysis shows that the EC implicit scheme can be used for qualitative estimation of some discharge problems at much lower computational cost, without much loss of accuracy.

  14. On the energy conservation electrostatic particle-in-cell/Monte Carlo simulation: Benchmark and application to the radio frequency discharges

    International Nuclear Information System (INIS)

    We benchmark and analyze the error of the energy conservation (EC) scheme in particle-in-cell/Monte Carlo (PIC/MC) algorithms by simulating the radio frequency discharge. The plasma heating behaviors and electron distribution functions obtained by one-dimensional (1D) simulation are analyzed. Both explicit and implicit algorithms are checked. The results show that the EC scheme can eliminate self-heating at wide grid spacing in both cases, with a small reduction in accuracy. For typical parameters, the EC implicit scheme has higher precision than the EC explicit scheme. Some “numerical cooling” behaviors are observed and analyzed. Some other errors are also analyzed. The analysis shows that the EC implicit scheme can be used for qualitative estimation of some discharge problems with much less computational resource cost and without much loss of accuracy.
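
    Numerical self-heating of the kind analyzed above appears as secular growth of the total (kinetic plus electrostatic field) energy. Below is a minimal sketch of the diagnostic one would track per time step in a 1D electrostatic simulation; the function and its inputs are illustrative, not the authors' code:

    ```python
    import numpy as np

    EPS0 = 8.854e-12  # vacuum permittivity (F/m)
    M_E = 9.109e-31   # electron mass (kg)

    def total_energy(v, weight, e_grid, dx):
        """Total kinetic plus field energy of a 1D electrostatic PIC system.

        v      : macro-particle velocities (m/s)
        weight : number of real electrons per macro-particle
        e_grid : electric field on the grid nodes (V/m)
        dx     : grid spacing (m)
        """
        kinetic = 0.5 * M_E * np.sum(weight * v**2)
        field = 0.5 * EPS0 * dx * np.sum(e_grid**2)
        return kinetic + field

    # Example: a ~1 eV Maxwellian of 10^4 macro-particles and a quiet field.
    # An EC scheme should keep the relative drift (E_n - E_0) / E_0 bounded
    # even at wide grid spacing; self-heating shows up as monotonic growth.
    rng = np.random.default_rng(0)
    v = rng.normal(0.0, 4.2e5, 10_000)
    print(total_energy(v, np.full(v.size, 1e7), np.zeros(64), 1e-4))
    ```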

  15. Soil Shielding Benchmark Experiment and its Simulation with Mars using Secondary Particles Produced by 12 GeV Protons

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, T.; Numajiri, M.; Ban, S.; Kanda, Y.; Oki, Y.; Namito, Y.; Miura, T.; Hirayama, H.; Shibata, T.; Kondo, K.; Takasaki, M.; Tanaka, K.H.; Yamanoi, Y.; Minakawa, M.; Noumi, H.; Ieiri, M.; Kato, Y.; Ishii, H.; Suzuki, Y.; Nishikawa, K.; Mokhov, N.

    1998-07-01

    A soil-shielding benchmark experiment was conducted using secondary particles produced by 12 GeV protons, which were injected into an iron rod surrounded by soil. Induced activities of 22Na in aluminium (Al) and soil samples were measured, and the experiment was simulated with the MARS Monte Carlo code. The induced activities in the Al samples were calculated using spallation cross sections and the fluence, where the fluence was calculated by the MARS code. The relation between flux density and induced activities in soil was investigated using calculated flux densities: the distribution of the ratio of induced activities in soil samples to the flux densities showed radial and axial independence. Both the saturated activities and the distribution coincide with the experimental data within the estimated errors. These results indicate a successful simulation by the MARS code. (author)
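
    The activity calculation described above (spallation cross sections combined with the calculated fluence) reduces, for a thin sample, to the standard activation formula; a sketch with hypothetical inputs (the cross section, flux and sample data below are stand-ins, not the experiment's values):

    ```python
    import numpy as np

    AVOGADRO = 6.022e23

    def induced_activity(flux, sigma_mb, mass_g, molar_mass_g,
                         half_life_s, t_irr_s):
        """Activity (Bq) of a product nuclide at the end of irradiation."""
        n_atoms = mass_g / molar_mass_g * AVOGADRO  # target atoms
        sigma_cm2 = sigma_mb * 1e-27                # mb -> cm^2
        production = n_atoms * sigma_cm2 * flux     # product atoms per second
        decay_const = np.log(2.0) / half_life_s
        return production * (1.0 - np.exp(-decay_const * t_irr_s))

    # 1 g Al sample, an assumed 10 mb production cross section for 22Na
    # (half-life about 2.6 y), 10 h irradiation at 1e6 cm^-2 s^-1
    print(induced_activity(1e6, 10.0, 1.0, 26.98, 2.6 * 3.156e7, 10 * 3600.0))
    ```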

  16. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Peiyuan [Univ. of Colorado, Boulder, CO (United States); Brown, Timothy [Univ. of Colorado, Boulder, CO (United States); Fullmer, William D. [Univ. of Colorado, Boulder, CO (United States); Hauser, Thomas [Univ. of Colorado, Boulder, CO (United States); Hrenya, Christine [Univ. of Colorado, Boulder, CO (United States); Grout, Ray [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sitaraman, Hariswaran [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-29

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations, and cover a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approx. 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is spent on particle-particle force calculations, drag force calculations and interpolation between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
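
    Weak scaling of the kind reported above reduces to comparing wall-clock times at fixed per-core load; a minimal sketch with invented timings (the actual MFiX measurements are in the report):

    ```python
    import numpy as np

    # With the problem size per core held fixed, ideal weak scaling keeps
    # runtime constant, so efficiency(N) = T(smallest run) / T(N cores).
    cores = np.array([16, 32, 64, 128, 256, 512, 1024])
    wall_s = np.array([100., 103., 108., 115., 127., 146., 181.])  # invented

    efficiency = wall_s[0] / wall_s
    for n, e in zip(cores, efficiency):
        print(f"{n:5d} cores: weak-scaling efficiency = {e:.2f}")
    ```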

  17. Modeling of shielding benchmark for Na-24 γ-rays using scale code package and QAD-CGGP code

    International Nuclear Information System (INIS)

    The benchmark data were recently published for 1.37 and 2.75 MeV photons emitted by an Na-24 uniform disc source penetrating shields of six two-layer combinations, namely 12''Al+Fe, 12''Al+Pb, 6''Fe+Al, 6''Fe+Pb, 4''Pb+Al, and 4''Pb+Fe. These benchmark data fill a gap in the energy range of practical interest and provide useful reference values for computational method evaluation. In order to evaluate the computational methods incorporated into the widely used shielding codes SCALE and QAD, we compared the benchmark data with the results of benchmark modeling with these codes. Using the functional module SAS4 of the SCALE4 modular code package and the point-kernel gamma-ray shielding code system QAD-CGGP, scalar flux density spectra in the benchmark energy group structure were calculated for three two-layer combinations. The comparison of the benchmark data and the results obtained showed that the QAD-CGGP and SAS4 results are in good agreement, but the benchmark experimental data differ significantly from both of them. (author)

  18. Finite Element Method Modeling of Sensible Heat Thermal Energy Storage with Innovative Concretes and Comparative Analysis with Literature Benchmarks

    Directory of Open Access Journals (Sweden)

    Claudio Ferone

    2014-08-01

    Efficient systems for high-performance buildings are required to improve the integration of renewable energy sources and to reduce primary energy consumption from fossil fuels. This paper is focused on sensible heat thermal energy storage (SHTES) systems using solid media and on numerical simulation of their transient behavior using the finite element method (FEM). Unlike other papers in the literature, the numerical model and simulation approach simultaneously takes into consideration various aspects: thermal properties at high temperature, the actual geometry of the repeated storage element and the actual storage cycle adopted. High-performance thermal storage materials from the literature have been tested and used here as reference benchmarks. Other materials tested are lightweight concretes with recycled aggregates and a geopolymer concrete. Their thermal properties have been measured and used as inputs in the numerical model to preliminarily evaluate their application in thermal storage. The analysis carried out can also be used to optimize the storage system in terms of the thermal properties required of the storage material. The results showed a significant influence of the thermal properties on the performance of the storage elements. Simulation results have provided information for further scale-up from a single differential storage element to the entire module as a function of material thermal properties.
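
    As a heavily simplified stand-in for the FEM simulations described above, the charging of a single slab-shaped storage element can be sketched with an explicit finite-difference scheme; the material properties below are assumed, not the paper's measured values:

    ```python
    import numpy as np

    def charge_cycle(k, rho, cp, L=0.2, nx=41, t_end=3600.0,
                     T0=293.0, T_hot=673.0):
        """Explicit 1D conduction through a slab-shaped storage element.

        k, rho, cp : conductivity (W/m/K), density (kg/m^3), heat capacity
                     (J/kg/K) of the storage concrete
        L          : element thickness (m); one face is held at T_hot and
                     the other is insulated, a crude stand-in for charging
        """
        alpha = k / (rho * cp)
        dx = L / (nx - 1)
        dt = 0.4 * dx**2 / alpha          # below the explicit stability limit
        T = np.full(nx, T0)
        for _ in range(int(t_end / dt)):
            T[0] = T_hot                                              # charged face
            T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
            T[-1] = T[-2]                                             # insulated face
        return T

    # Assumed properties for a lightweight recycled-aggregate concrete
    print(charge_cycle(k=0.8, rho=1800.0, cp=1000.0)[::10])
    ```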

  19. Thick lead target exposed to 660 MeV protons: benchmark model on radioactive nuclides production and heat generation, and beyond

    International Nuclear Information System (INIS)

    The benchmark is proposed on the basis of the first axial residue production distribution measurements in a thick Pb target irradiated with 660 MeV protons. Calculation of heating during irradiation and assessment of decay heat are foreseen as well. First benchmarking results for residue production with MCNPX2.5.0 and FLUKA2006.3 are presented. These new data from thick targets allow a consistent check of physics models against the predictions coming from microscopic residue cross sections (in thin lead targets) over a wide energy range and the use of simulated particle spectra. Results obtained for thin-target data are consistent with this experiment, and a combination of both sets of data could further constrain the INC models, especially regarding the excitation energy of the nucleus before deexcitation. The main difficulties, data accuracy and the contribution of emitted particles to residue production, are analysed and discussed in this work. (authors)

  20. Benchmarking the Higher Education Institutions in Egypt using Composite Index Model

    Directory of Open Access Journals (Sweden)

    Mohamed Rashad M El-Hefnawy

    2014-11-01

    Egypt has the largest and most significant higher education system in the Middle East and North Africa, but it has continuously faced serious, accumulated challenges. The Higher Education Institutions in Egypt are undergoing important changes involving the development of performance; they are implementing strategies to enhance the overall performance of their universities using ICT, but the gap between what exists and what is required for self-regulation and improvement processes is still not entirely clear. The use of strategic comparative analysis models and tools to evaluate current and future states will affect the overall performance of universities and shape new paradigms in the development of the Higher Education System (HES). Several studies have investigated the evaluation of universities through the development and use of ranking and benchmark systems. In this paper, we provide a model to construct a unified Composite Index (CI) based on a set of SMART indicators that emulate the nature of higher education systems in Egypt. The outcomes of the proposed model aim to measure the overall performance of universities and provide a unified benchmarking method in this context. The model is discussed from theoretical and technical perspectives. The research study was conducted with 40 professors from 19 renowned universities in Egypt as education domain experts.
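
    A minimal sketch of the composite index construction proposed above, assuming min-max normalisation of the indicators and invented scores and weights:

    ```python
    import numpy as np

    # Rows: universities; columns: indicators (e.g. graduation rate,
    # papers per staff member, an ICT readiness score). All values invented.
    scores = np.array([
        [0.62, 120.0, 0.75],
        [0.55,  80.0, 0.90],
        [0.71, 150.0, 0.60],
    ])
    weights = np.array([0.4, 0.3, 0.3])  # assumed to sum to 1

    # Min-max normalise each indicator across institutions, then combine
    mins, maxs = scores.min(axis=0), scores.max(axis=0)
    normalised = (scores - mins) / (maxs - mins)
    ci = normalised @ weights
    print("composite index per university:", np.round(ci, 3))
    ```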

  1. Modelling of fission product release from TRISO fuel during accident conditions : benchmark code comparison / Ramlakan A.

    OpenAIRE

    Ramlakan, Alastair Justin

    2011-01-01

    This document gives an overview of the proposed MSc study. The main goal of the study is to model the cases listed in the code benchmark study of the International Atomic Energy Agency CRP-6 fuel performance study (Verfondern & Lee, 2005). The platform that will be employed is the GETTER code (Keshaw & van der Merwe, 2006). GETTER was used at PBMR for the release calculations of metallic and some non-metallic long-lived fission products. GETTER calculates the transport of fission products ...

  2. Benchmarking Gaussian Processes and Random Forests Surrogate Models on the BBOB Noiseless Testbed

    Czech Academy of Sciences Publication Activity Database

    Bajer, Lukáš; Pitra, Z.; Holeňa, Martin

    New York: ACM, 2015 - (Silva, S.), s. 1143-1150 ISBN 978-1-4503-3488-4. [GECCO Companion '15. Genetic and Evolutionary Computation Conference. Madrid (ES), 11.07.2015-15.07.2015] R&D Projects: GA ČR GA13-17187S Grant ostatní: ČVUT(CZ) SGS14/205/OHK4/3T/14; GA MŠk(CZ) ED2.1.00/03.0078; GA MŠk(CZ) LM2010005 Institutional support: RVO:67985807 Keywords : benchmarking * black-box optimization * surrogate model * Gaussian process * random forest Subject RIV: IN - Informatics, Computer Science

  3. Groundwater flow with energy transport and water-ice phase change: Numerical simulations, benchmarks, and application to freezing in peat bogs

    Science.gov (United States)

    McKenzie, J.M.; Voss, C.I.; Siegel, D.I.

    2007-01-01

    In northern peatlands, subsurface ice formation is an important process that can control heat transport, groundwater flow, and biological activity. Temperature was measured over one and a half years in a vertical profile in the Red Lake Bog, Minnesota. To successfully simulate the transport of heat within the peat profile, the U.S. Geological Survey's SUTRA computer code was modified. The modified code simulates fully saturated, coupled porewater-energy transport, with freezing and melting porewater, and includes proportional heat capacity and thermal conductivity of water and ice, decreasing matrix permeability due to ice formation, and latent heat. The model is verified by correctly simulating the Lunardini analytical solution for ice formation in a porous medium with a mixed ice-water zone. The modified SUTRA model correctly simulates the temperature and ice distributions in the peat bog. Two possible benchmark problems for groundwater and energy transport with ice formation and melting are proposed that may be used by other researchers for code comparison. © 2006 Elsevier Ltd. All rights reserved.
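
    A common way to fold latent heat into a porewater-energy solver, conceptually similar to (though not necessarily identical with) the SUTRA modification described above, is an apparent heat capacity that smears the latent heat over a narrow freezing interval; a sketch with assumed property values:

    ```python
    import numpy as np

    LATENT_HEAT = 3.34e5  # J/kg, latent heat of fusion of water

    def apparent_heat_capacity(T, cp_water=4182.0, cp_ice=2108.0,
                               T_f=273.15, dT=0.5):
        """Apparent specific heat (J/kg/K) near the freezing point.

        Latent heat is distributed over the interval [T_f - dT, T_f] so a
        standard conduction solver can march through the phase change; dT
        is a numerical smoothing parameter, assumed here to be 0.5 K.
        """
        cp = np.where(T > T_f, cp_water, cp_ice)
        in_mushy = (T >= T_f - dT) & (T <= T_f)
        return cp + np.where(in_mushy, LATENT_HEAT / dT, 0.0)

    # Above freezing, inside the mixed ice-water zone, and fully frozen
    print(apparent_heat_capacity(np.array([275.0, 273.0, 270.0])))
    ```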

  4. Benchmarking of neutral beam current drive codes as a basis for the integrated modelling for ITER

    International Nuclear Information System (INIS)

    Neutral beam injection is a robust method for heating and current drive because it does not depend on any resonance conditions or coupling conditions at the edge. High-energy neutral beam current drive (NBCD) was experimentally validated for central current drive in JT-60U, giving further confidence in ITER predictions. Recent progress in diagnostics, equilibrium solvers and analysis techniques enables rather detailed comparisons with NBCD codes. However, different codes give somewhat different results. Thus, we need to clarify the physics implementations in NBCD codes, such as the beam model, the ionization process, fast ion diffusion in velocity space, orbit effects and electron shielding. Also, from an integrated modelling viewpoint, an NBCD code benchmark is needed to establish a more solid basis for ITER operations. A benchmark of the Fokker-Planck code ACCOME has been performed against the orbit-following Monte Carlo code OFMC. Although the calculated profiles agree rather well, the OFMC profile is slightly wider than the ACCOME one. The difference in the total fast ion current is ∼ 15%. We have examined fast ion diffusion in the 2D velocity space and observed differences in the diffusion in pitch angle space. We have also examined orbit effects using a point source of fast ions. Comparison of OFMC runs with and without the drift term in the orbit equation shows that the finite banana width effect is not negligible. We have started a new NBCD code benchmark in the frame of the ITPA Steady-State Operation Topical Group with Fokker-Planck codes and orbit-following Monte Carlo codes such as OFMC, ACCOME, SPOT, NEMO, ASTRA, TRANSP/NUBEAM, ONETWO/NUBEAM, DRIFT and TOPICS. (author)

  5. Simulation - modeling - experiment

    International Nuclear Information System (INIS)

    After two workshops held in 2001 on the same topics, and in order to take stock of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state of the art of tools, methods and experiments in the domains of interest of the Gedepeon research group, and the exchange of information about the possibilities of using computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experimental needs. This document gathers 18 of the 19 presentations (slides) given at this workshop, dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); progress status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); perturbation and sensitivity analysis methods for nuclear data in reactor physics, with application to the VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutron and light particle production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F

  6. Gravity for Detecting Caves: Airborne and Terrestrial Simulations Based on a Comprehensive Karstic Cave Benchmark

    Science.gov (United States)

    Braitenberg, Carla; Sampietro, Daniele; Pivetta, Tommaso; Zuliani, David; Barbagallo, Alfio; Fabris, Paolo; Rossi, Lorenzo; Fabbri, Julius; Mansi, Ahmed Hamdi

    2016-04-01

    Underground caves pose a natural hazard due to their possible evolution into a sinkhole. Mapping of all existing caves could be useful for general civil purposes, such as natural deposits, tourism and sports. Natural caves exist globally and are typical in karst areas. We investigate the resolution power of modern gravity campaigns to systematically detect all void caves of a minimum size in a given area. Both aerogravity and terrestrial acquisitions are considered. Positioning of the gravity stations is fastest with GNSS methods, the performance of which is investigated. The estimates are based on a benchmark cave whose geometry is known precisely through a laser-scan survey: the Grotta Gigante cave in NE Italy in the classic karst. The gravity acquisition is discussed, where heights have been acquired with dual-frequency geodetic GNSS receivers and a Total Station. Height acquisitions with non-geodetic low-cost receivers are shown to be useful, although the error on the gravity field is larger. The cave produces a signal of -1.5 × 10^-5 m/s^2, with a clear elliptic geometry. We analyze the feasibility of airborne gravity acquisitions for the purpose of systematically mapping void caves. It is found that observations from fixed-wing aircraft cannot resolve the caves, but observations from slower and low-flying helicopters or drones do. In order to detect the presence of caves the size of the benchmark cave, systematic terrestrial acquisitions require a density of three stations per square tile of 500 by 500 m^2. The question has a large impact on civil and environmental purposes, since it will allow planning of urban development at a safe distance from subsurface caves. The survey shows that a systematic coverage of the karst would have the benefit of recovering the positions of all of the greater existing void caves.
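
    The detectability estimate above can be reproduced to first order by treating the void cave as a negative-density sphere; a sketch with a hypothetical cavity loosely sized on the Grotta Gigante scale (the real cave is far from spherical):

    ```python
    import numpy as np

    G = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)

    def cavity_anomaly(x, depth, radius, rho_rock=2700.0):
        """Vertical gravity anomaly (m/s^2) of a spherical void along a profile.

        x      : horizontal offsets from the point above the cavity centre (m)
        depth  : depth of the cavity centre (m)
        radius : cavity radius (m)
        The void is modelled as a sphere of density deficit -rho_rock.
        """
        dM = -(4.0 / 3.0) * np.pi * radius**3 * rho_rock  # missing mass (kg)
        return G * dM * depth / (x**2 + depth**2) ** 1.5

    # Assumed geometry: 50 m radius cavity, centre at 100 m depth; the peak
    # is of the same order as the -1.5e-5 m/s^2 signal quoted above
    x = np.linspace(-300.0, 300.0, 7)
    print(cavity_anomaly(x, depth=100.0, radius=50.0))
    ```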

  7. A control benchmark on the energy management of a plug-in hybrid electric vehicle

    OpenAIRE

    Sciarretta, Antonio; Serrao, Lorenzo; Dewagan, Prakash Chandra; Tona, Paolino; Bergshoeff, E.N. D.; Bordons, C.; Charmpa, E.; Elbert, P.; Eriksson, L.; Hofman, T.; Hubacher, H.; Isenegger, P.; Lacandia, F.; Laveau, A.; Li, H.

    2014-01-01

    A benchmark control problem was developed for a special session of the IFAC Workshop on Engine and Powertrain Control, Simulation and Modeling (E-COSM 12), held in Rueil-Malmaison, France, in October 2012. The online energy management of a plug-in hybrid-electric vehicle was to be developed by the benchmark participants. The simulator, provided by the benchmark organizers, implements a model of the GM Voltec powertrain. Each solution was evaluated according to severa...

  8. Benchmarking HRD.

    Science.gov (United States)

    Ford, Donald J.

    1993-01-01

    Discusses benchmarking, the continuous process of measuring one's products, services, and practices against those recognized as leaders in that field to identify areas for improvement. Examines ways in which benchmarking can benefit human resources functions. (JOW)

  9. Bayesian Benchmark Dose Analysis

    OpenAIRE

    Fang, Qijun; Piegorsch, Walter W.; Barnes, Katherine Y.

    2014-01-01

    An important objective in environmental risk assessment is estimation of minimum exposure levels, called Benchmark Doses (BMDs) that induce a pre-specified Benchmark Response (BMR) in a target population. Established inferential approaches for BMD analysis typically involve one-sided, frequentist confidence limits, leading in practice to what are called Benchmark Dose Lower Limits (BMDLs). Appeal to Bayesian modeling and credible limits for building BMDLs is far less developed, however. Indee...

  10. Multidimensional benchmarking

    OpenAIRE

    Campbell, Akiko

    2016-01-01

    Benchmarking is a process of comparison between performance characteristics of separate, often competing organizations intended to enable each participant to improve its own performance in the marketplace (Kay, 2007). Benchmarking sets organizations’ performance standards based on what “others” are achieving. Most widely adopted approaches are quantitative and reveal numerical performance gaps where organizations lag behind benchmarks; however, quantitative benchmarking on its own rarely yi...

  11. New LHC benchmarks for the CP-conserving two-Higgs-doublet model

    International Nuclear Information System (INIS)

    We introduce a strategy to study the parameter space of the general, CP-conserving, two-Higgs-doublet Model (2HDM) with a softly broken Z2-symmetry by means of a new 'hybrid' basis. In this basis the input parameters are the measured values of the mass of the observed Standard Model (SM)-like Higgs boson and its coupling strength to vector boson pairs, the mass of the second CP-even Higgs boson, the ratio of neutral Higgs vacuum expectation values, and three additional dimensionless parameters. Using the hybrid basis, we present numerical scans of the 2HDM parameter space where we survey available parameter regions and analyze model constraints. From these results, we define a number of benchmark scenarios that capture different aspects of non-standard Higgs phenomenology that are of interest for future LHC Higgs searches. (orig.)

  12. Benchmark experiment on the model of fusion reactor blanket with uranium neutron multiplier

    International Nuclear Information System (INIS)

    A benchmark experiment on a model of a thermonuclear reactor blanket with a 14 MeV neutron source is described. The model design corresponds to the known concept of a fast hybrid blanket with a 238U neutron multiplier and the main tritium production on 6Li. Detailed measurements of the rates of the following processes were carried out: tritium production on lithium isotopes; reactions modelling tritium production; (n, γ) and (n, 2n) processes for 238U; fission reactions for 235,238U, 239Pu and 237Np. Integral neutron flux measurements were performed with a set of threshold detectors on the basis of the 115In(n, n'), 204Pb(n, n'), 64Zn(n, p), 27Al(n, p), 56Fe(n, p), 107Ag(n, 2n), 63Cu(n, 2n) and 64(n, 2n) reactions

  13. New LHC benchmarks for the CP-conserving two-Higgs-doublet model

    Energy Technology Data Exchange (ETDEWEB)

    Haber, Howard E. [University of California, Santa Cruz Institute for Particle Physics, Santa Cruz, CA (United States); Staal, Oscar [Stockholm University, Department of Physics, The Oskar Klein Centre, Stockholm (Sweden)

    2015-10-15

    We introduce a strategy to study the parameter space of the general, CP-conserving, two-Higgs-doublet Model (2HDM) with a softly broken Z2-symmetry by means of a new 'hybrid' basis. In this basis the input parameters are the measured values of the mass of the observed Standard Model (SM)-like Higgs boson and its coupling strength to vector boson pairs, the mass of the second CP-even Higgs boson, the ratio of neutral Higgs vacuum expectation values, and three additional dimensionless parameters. Using the hybrid basis, we present numerical scans of the 2HDM parameter space where we survey available parameter regions and analyze model constraints. From these results, we define a number of benchmark scenarios that capture different aspects of non-standard Higgs phenomenology that are of interest for future LHC Higgs searches. (orig.)

  14. Local approach of cleavage fracture applied to a vessel with subclad flaw. A benchmark on computational simulation

    International Nuclear Information System (INIS)

    A benchmark on the computational simulation of a cladded vessel with a 6.2 mm sub-clad flaw submitted to a thermal transient has been conducted. Two-dimensional elastic and elastic-plastic finite element computations of the vessel have been performed by the different partners with their respective finite element codes: ASTER (EDF), CASTEM 2000 (CEA), SYSTUS (Framatome) and ABAQUS (AEA Technology). The main results have been compared: temperature field in the vessel, crack opening, opening stress at the crack tips, stress intensity factor in the cladding and base metal, Weibull stress σw and probability of failure in the base metal, and void growth rate R/R0 in the cladding. This comparison shows excellent agreement on the main results, in particular on the results obtained with the local approach. (K.A.)
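
    In the local-approach literature, the Weibull stress σw compared above is commonly computed by integrating the maximum principal stress over the plastic zone (the Beremin model); a sketch under that assumption, with invented element data and material parameters:

    ```python
    import numpy as np

    def weibull_stress(sigma_1, volumes, m=22.0, v0=1e-3):
        """Weibull stress from the plastic zone of a finite element model.

        sigma_1 : maximum principal stress in each plastic element (MPa)
        volumes : element volumes (mm^3)
        m       : Weibull shape modulus (material parameter, assumed)
        v0      : reference volume (mm^3, assumed)
        """
        return (np.sum(sigma_1**m * volumes) / v0) ** (1.0 / m)

    def failure_probability(sigma_w, sigma_u=2600.0, m=22.0):
        """Cleavage probability for a given Weibull stress (sigma_u assumed)."""
        return 1.0 - np.exp(-((sigma_w / sigma_u) ** m))

    # Invented element data from a notional plastic zone at a crack tip
    s1 = np.array([1800.0, 2000.0, 2200.0])  # MPa
    v = np.array([2e-3, 1e-3, 5e-4])         # mm^3
    sw = weibull_stress(s1, v)
    print(f"sigma_w = {sw:.0f} MPa, P_f = {failure_probability(sw):.3f}")
    ```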

  15. Benchmarking of a 1D Scrape-off layer code SOLF1D with SOLPS and its use in modelling long-legged divertors

    CERN Document Server

    Havlickova, E; Subba, F; Coster, D; Wischmeier, M; Fishpool, G

    2013-01-01

    A 1D code modelling SOL transport parallel to the magnetic field (SOLF1D) is benchmarked with 2D simulations of MAST-U SOL performed via the SOLPS code for two different collisionalities. Based on this comparison, SOLF1D is then used to model the effects of divertor leg stretching in 1D, in support of the planned Super-X divertor on MAST. The aim is to separate magnetic flux expansion from volumetric power losses due to recycling neutrals by stretching the divertor leg either vertically or radially.

  16. Benchmarking of a 1D scrape-off layer code SOLF1D with SOLPS and its use in modelling long-legged divertors

    OpenAIRE

    Havlickova, E.; Fundamenski, W.; Subba, F.; Coster, D; Wischmeier, M; Fishpool, G.

    2013-01-01

    A 1D code modelling SOL transport parallel to the magnetic field (SOLF1D) is benchmarked with 2D simulations of MAST-U SOL performed via the SOLPS code for two different collisionalities. Based on this comparison, SOLF1D is then used to model the effects of divertor leg stretching in 1D, in support of the planned Super-X divertor on MAST. The aim is to separate magnetic flux expansion from volumetric power losses due to recycling neutrals by stretching the divertor leg either vertically or ra...

  17. How to Use Benchmark and Cross-section Studies to Improve Data Libraries and Models

    Science.gov (United States)

    Wagner, V.; Suchopár, M.; Vrzalová, J.; Chudoba, P.; Svoboda, O.; Tichý, P.; Krása, A.; Majerle, M.; Kugler, A.; Adam, J.; Baldin, A.; Furman, W.; Kadykov, M.; Solnyshkin, A.; Tsoupko-Sitnikov, S.; Tyutyunikov, S.; Vladimirovna, N.; Závorka, L.

    2016-06-01

    Improvements of the Monte Carlo transport codes and cross-section libraries are very important steps towards the use of accelerator-driven transmutation systems. We have conducted numerous benchmark experiments with different set-ups consisting of lead, natural uranium and moderator irradiated by relativistic protons and deuterons within the framework of the collaboration “Energy and Transmutation of Radioactive Waste”. Unfortunately, knowledge of the total or partial cross-sections of important reactions is insufficient. For this reason we have started extensive studies of different reaction cross-sections. We measure cross-sections of important neutron reactions by means of the quasi-monoenergetic neutron sources based on the cyclotrons at the Nuclear Physics Institute in Řež and at The Svedberg Laboratory in Uppsala. Measurements of partial cross-sections of relativistic deuteron reactions were the second direction of our studies. The new results obtained during recent years will be shown. Possible use of these data for the improvement of libraries, models and benchmark studies will be discussed.

  18. Radioactive decay simulation with Geant4: experimental benchmarks and developments for X-ray astronomy applications

    CERN Document Server

    Hauf, S; Hoffmann, Dieter H H; Bell, Z W; Pia, M G; Weidenspointner, Georg; Zoglauer, A

    2010-01-01

    We present γ spectroscopy validation measurements for the Geant4 radioactive decay simulation for a selected range of isotopes using a simple experimental setup. Using these results we point out problems in the decay simulation and identify where they may originate.

  19. Radioactive decay simulation with Geant4. Experimental benchmarks and developments for X-ray astronomy applications

    International Nuclear Information System (INIS)

    We present γ spectroscopy validation measurements for the Geant4 radioactive decay simulation for a selected range of isotopes using a simple experimental setup. Using these results we point out problems in the decay simulation and identify where they may originate. (author)

  20. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: these calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems.

  1. Studies of accurate multi-component lattice Boltzmann models on benchmark cases required for engineering applications

    CERN Document Server

    Otomo, Hiroshi; Li, Yong; Dressler, Marco; Staroselsky, Ilya; Zhang, Raoyang; Chen, Hudong

    2016-01-01

    We present recent developments in lattice Boltzmann modeling for multi-component flows, implemented on the platform of a general purpose, arbitrary geometry solver PowerFLOW. The presented benchmark cases demonstrate the accuracy and robustness of the method necessary for handling real world engineering applications at practical resolution and computational cost. The key requirement for such an approach is that the relevant physical properties and flow characteristics do not strongly depend on numerics. In particular, the strength of surface tension obtained using our new approach is independent of viscosity and resolution, while spurious currents are significantly suppressed. Using a much improved surface wetting model, undesirable numerical artifacts, including thin films and artificial droplet movement on inclined walls, are significantly reduced.

  2. Flexing computational muscle: modeling and simulation of musculotendon dynamics.

    Science.gov (United States)

    Millard, Matthew; Uchida, Thomas; Seth, Ajay; Delp, Scott L

    2013-02-01

    Muscle-driven simulations of human and animal motion are widely used to complement physical experiments for studying movement dynamics. Musculotendon models are an essential component of muscle-driven simulations, yet neither the computational speed nor the biological accuracy of the simulated forces has been adequately evaluated. Here we compare the speed and accuracy of three musculotendon models: two with an elastic tendon (an equilibrium model and a damped equilibrium model) and one with a rigid tendon. Our simulation benchmarks demonstrate that the equilibrium and damped equilibrium models produce similar force profiles but have different computational speeds. At low activation, the damped equilibrium model is 29 times faster than the equilibrium model when using an explicit integrator and 3 times faster when using an implicit integrator; at high activation, the two models have similar simulation speeds. In the special case of simulating a muscle with a short tendon, the rigid-tendon model produces forces that match those generated by the elastic-tendon models, but simulates 2-54 times faster when an explicit integrator is used and 6-31 times faster when an implicit integrator is used. The equilibrium, damped equilibrium, and rigid-tendon models reproduce forces generated by maximally-activated biological muscle with mean absolute errors less than 8.9%, 8.9%, and 20.9% of the maximum isometric muscle force, respectively. When compared to forces generated by submaximally-activated biological muscle, the forces produced by the equilibrium, damped equilibrium, and rigid-tendon models have mean absolute errors less than 16.2%, 16.4%, and 18.5%, respectively. To encourage further development of musculotendon models, we provide implementations of each of these models in OpenSim version 3.1 and benchmark data online, enabling others to reproduce our results and test their models of musculotendon dynamics. PMID:23445050
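
    To make the modeling choices above concrete, here is a stripped-down, rigid-tendon Hill-type force computation in Python. The curve shapes and constants below are generic textbook forms chosen for illustration, not the exact OpenSim 3.1 implementations distributed with the paper.

        import numpy as np

        def rigid_tendon_force(a, l_mt, v_mt, f_max=1000.0, l_opt=0.1,
                               l_ts=0.2, alpha0=0.0):
            """Hill-type muscle force with a rigid (inextensible) tendon.
            a: activation in [0,1]; l_mt, v_mt: musculotendon length (m)
            and velocity (m/s). All curve shapes are illustrative."""
            l_m = (l_mt - l_ts) / np.cos(alpha0)      # fiber length, rigid tendon
            v_m = v_mt / np.cos(alpha0)               # fiber velocity
            ln = l_m / l_opt                          # normalized fiber length
            vn = v_m / (10.0 * l_opt)                 # norm. by max shortening vel.
            f_l = np.exp(-((ln - 1.0) / 0.45) ** 2)   # active force-length (Gaussian)
            f_v = np.clip(1.0 + vn, 0.0, 1.5)         # crude force-velocity curve
            f_pe = np.where(ln > 1.0, 0.5 * (ln - 1.0) ** 2, 0.0)  # passive elastic
            return f_max * (a * f_l * f_v + f_pe) * np.cos(alpha0)

        # Isometric force at optimal fiber length and half activation: 500 N
        print(rigid_tendon_force(a=0.5, l_mt=0.3, v_mt=0.0))

    The rigid-tendon simplification is visible in the first line of the function: the fiber length follows algebraically from the musculotendon length, so no differential equation for fiber length needs to be integrated, which is the source of the speedup reported above.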

  3. Benchmarking shielding simulations for an accelerator-driven spallation neutron source

    OpenAIRE

    Cherkashyna, Nataliia; DiJulio, Douglas D.; Panzner, Tobias; Rantsiou, Emmanouela; Filges, Uwe; Ehlers, Georg; Bentley, Phillip M.

    2015-01-01

    The shielding at an accelerator-driven spallation neutron facility plays a critical role in the performance of the neutron scattering instruments, the overall safety, and the total cost of the facility. Accurate simulation of shielding components is thus key for the design of upcoming facilities, such as the European Spallation Source (ESS), currently under construction in Lund, Sweden. In this paper, we present a comparative study between the measured and the simulated neutron background at the Swiss Spallation Neutron Source (SINQ), at the Paul Scherrer Institute (PSI), Villigen, Switzerland.

  4. Monte Carlo Simulation of the TRIGA Mark II Benchmark Experiment with Burned Fuel

    International Nuclear Information System (INIS)

    Monte Carlo calculations of a criticality experiment with burned fuel on the TRIGA Mark II research reactor are presented. The main objective was to incorporate the burned fuel composition calculated with the WIMSD4 deterministic code into the MCNP4B Monte Carlo code and compare the calculated keff with the measurements. The criticality experiment was performed in 1998 at the ''Jozef Stefan'' Institute TRIGA Mark II reactor in Ljubljana, Slovenia, with the same fuel elements and loading pattern as in the TRIGA criticality benchmark experiment with fresh fuel performed in 1991. The only difference was that in 1998, the fuel elements had an average burnup of ∼3%, corresponding to 1.3 MWd of energy produced in the core in the period between 1991 and 1998. The fuel element burnup accumulated during 1991-1998 was calculated with TRIGLAV, an in-house two-dimensional multigroup diffusion fuel management code. The burned fuel isotopic composition was calculated with the WIMSD4 code and compared to ORIGEN2 calculations. An extensive comparison of burned fuel material composition was performed for both codes for burnups up to 20% burned 235U, and the differences were evaluated in terms of reactivity. The WIMSD4 and ORIGEN2 results agreed well for all isotopes important in reactivity calculations, giving increased confidence in the WIMSD4 calculation of the burned fuel material composition. The keff calculated with the combined WIMSD4 and MCNP4B calculations showed good agreement with the experimental values. This shows that linking WIMSD4 with MCNP4B for criticality calculations with burned fuel is feasible and gives reliable results.

  5. Analysis of BFS-62-3A critical experiment benchmark model - IGCAR results

    International Nuclear Information System (INIS)

    The BFS-62-3A assembly is a full scale model of the BN-600 hybrid core. The MOX zone is represented as a ring between the medium enriched (MEZ) and high enriched (HEZ) zones. The hybrid core with steel reflector is represented in a 120 deg sector of BFS. For a homogenised 3-D core of BFS, equivalent experimental data for keff and SVRE values were derived by including the following corrections to the actually obtained experimental results: (a) the heterogeneity effect and (b) the 3-D model simplification effect. The nuclear data used was XSET-98, a 26-group set with ABBN-type self-shielding factor tables. The benchmark models were analysed by diffusion theory. 3-D calculations were done with the TREDFR code in 26 groups with 6 triangular meshes per fuel assembly, for a total of 24414 triangles. Axial mesh size corrections were estimated for some cases. The convergence criteria were 0.000001 for keff and 0.0001 for the point-wise fission source. The multiplication factor of the reference core of the benchmark is compared with the measured value and is predicted within the uncertainty margin. The SVRE values were computed as Δk/(k1k2) and compared to measured values. The predictions are found to be within the uncertainty margin except in the MOX region; the reason for this needs to be investigated. As a first step, the axial mesh size effect was estimated for the MOX SVRE (sodium void reactivity effect) case by using finer meshes in the reference core as well as the MOX-voided core. Increasing the number of axial meshes from 35 to 54 reduced both keff values by the same amount, leaving the MOX SVRE worth unchanged.
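
    Written out, the sodium void reactivity effect quoted above is the reactivity difference between the voided core (multiplication factor k2) and the reference core (k1):

        \rho_{\mathrm{SVRE}} = \frac{k_2 - k_1}{k_1 k_2} = \frac{1}{k_1} - \frac{1}{k_2}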

  6. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...
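
    As a sketch of the nonparametric side, the input-oriented CCR DEA efficiency score of one unit can be computed with a small linear program. This is a generic textbook formulation written with scipy, not the actual engine behind the internet-based tool described above; the data at the bottom are invented for illustration.

        import numpy as np
        from scipy.optimize import linprog

        def ccr_efficiency(X, Y, j0):
            """Input-oriented CCR efficiency of unit j0.
            X: (m inputs x n units), Y: (s outputs x n units)."""
            m, n = X.shape
            s, _ = Y.shape
            # Decision variables: [theta, lambda_1 .. lambda_n]
            c = np.zeros(n + 1)
            c[0] = 1.0                      # minimize theta
            A_ub, b_ub = [], []
            for i in range(m):              # sum_j lam_j x_ij <= theta * x_i,j0
                A_ub.append(np.concatenate(([-X[i, j0]], X[i, :])))
                b_ub.append(0.0)
            for r in range(s):              # sum_j lam_j y_rj >= y_r,j0
                A_ub.append(np.concatenate(([0.0], -Y[r, :])))
                b_ub.append(-Y[r, j0])
            bounds = [(0, None)] * (n + 1)  # theta >= 0, lambdas >= 0
            res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                          bounds=bounds)
            return res.x[0]

        # Toy data: two inputs, one output, four units
        X = np.array([[2.0, 3.0, 6.0, 9.0],
                      [3.0, 2.0, 6.0, 5.0]])
        Y = np.array([[1.0, 1.0, 2.0, 2.0]])
        print([round(ccr_efficiency(X, Y, j), 3) for j in range(4)])

    A score of 1 marks a unit on the efficiency frontier; scores below 1 quantify the proportional input reduction an efficient peer combination would achieve, which is exactly the "improvement potential" reported to the user.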

  7. Comparison of Benchmarking Methods with and without a Survey Error Model

    OpenAIRE

    Chen, Zhao-Guo; Ho Wu, Ka

    2006-01-01

    For a target socio-economic variable, two sources of data with different precisions and collecting frequencies may be available. Typically, the less frequent data (e.g., annual report or census) are more reliable and are considered as benchmarks. The process of using them to adjust the more frequent and less reliable data (e.g., repeated monthly surveys) is called benchmarking. In this paper, we show the relationship among three types of benchmarking methods in the literature, namely the De...

  8. Benchmarking the cad-based attila discrete ordinates code with experimental data of fusion experiments and to the results of MCNP code in simulating ITER

    International Nuclear Information System (INIS)

    Attila is a newly developed finite element code based on Sn neutron, gamma, and charged particle transport in 3-D geometry, in which unstructured tetrahedral meshes are generated to describe complex geometry based on CAD input (Solid Works, Pro/Engineer, etc.). In the present work we benchmark its calculation accuracy by comparing its predictions to the data measured inside two experimental mock-ups bombarded with 14 MeV neutrons. The results are also compared to those based on MCNP calculations. The experimental mock-ups simulate parts of the International Thermonuclear Experimental Reactor (ITER) in-vessel components, namely: (1) the Tungsten mockup configuration (54.3 cm x 46.8 cm x 45 cm), and (2) the ITER shielding blanket followed by the SCM region (simulated by alternating layers of SS316 and copper). In the latter configuration, a high aspect ratio rectangular streaming channel was introduced (to simulate streaming paths between ITER blanket modules) which ends with a rectangular cavity. The experiments on these two fusion-oriented integral experiments were performed at the Fusion Neutron Generator (FNG) facility, Frascati, Italy. In addition, the nuclear performance of the ITER MCNP 'Benchmark' CAD model has been assessed with Attila to compare its results to those obtained with the CAD-based MCNP approach developed by several ITER participants. The objective of this paper is to compare results based on two distinctive 3-D calculation tools using the same nuclear data, FENDL2.1, and the same response functions for several reaction rates measured in ITER mock-ups, and to enhance confidence from the international neutronics community in the Attila code and in how precisely it can quantify the nuclear field in large and complex systems, such as ITER. Attila has the advantage of providing full flux mapping visualization everywhere in one run, where components subjected to excessive radiation levels and strong streaming paths can be identified. In addition, the ...

  9. A quantitative CFD benchmark for Sodium Fast Reactor fuel assembly modeling

    International Nuclear Information System (INIS)

    Highlights: • A CFD model is benchmarked against the ORNL 19-pin Sodium Test assembly. • Sensitivity was tested for cell size, turbulence model, and wire contact model. • The CFD model was found to represent the experiment appropriately. • CFD was then used as a predictive tool to help understand experimental uncertainty. • Comparisons to subchannel results were carried out as well. - Abstract: This paper details a CFD model of a 19-pin wire-wrapped sodium fuel assembly experiment conducted at Oak Ridge National Laboratory in the 1970s. Model sensitivities were tested for cell size, turbulence closure, wire-wrap contact, inlet geometry, outlet geometry, and conjugate heat transfer. The final model was compared to the experimental results quantitatively to establish confidence in the approach. The results were also compared to the sub-channel analysis code COBRA-IV-I-MIT. Experiment and CFD computations were consistent inside the bundle. Experimental temperature measurements from thermocouples embedded in the heated length of the bundle are consistently reproduced by CFD code predictions across a wide range of operating conditions. The demonstrated agreement provides confidence in the predictive capabilities of the approach. However, a significant discrepancy between the CFD code predictions and the experimental data was found at the bundle outlet. Further sensitivity studies are presented to support the conclusion that this discrepancy is caused by significant uncertainty associated with the experimental data reported for the bundle outlet.

  10. Systematic effects in CALOR simulation code to model experimental configurations

    International Nuclear Information System (INIS)

    The CALOR89 code system is being used to simulate test beam results and the design parameters of several calorimeter configurations. It has been benchmarked against the ZEUS, D0 and HELIOS data. This study identifies the systematic effects in CALOR simulation of the experimental configurations. Five major systematic effects are identified: the choice of high energy nuclear collision model, material composition, scintillator saturation, shower integration time, and shower containment. Quantitative estimates of these systematic effects are presented. 23 refs., 6 figs., 7 tabs

  11. Chaos and bifurcation control of SSR in the IEEE second benchmark model

    Energy Technology Data Exchange (ETDEWEB)

    Harb, A.M. E-mail: aharb@just.edu.jo; Widyan, M.S

    2004-07-01

    Linear and nonlinear state feedback controllers are proposed to control the bifurcations of a power system phenomenon known as subsynchronous resonance (SSR): the electro-mechanical interaction between series resonant circuits and the torsional mechanical frequencies of the turbine-generator sections. The first system of the IEEE second benchmark model is considered. The dynamics of the two-axis damper windings, automatic voltage regulator and power system stabilizer are included. The linear controller gives a better initial-disturbance response than the nonlinear one, but only in a narrow region of compensation factors. The nonlinear controller not only can be easily implemented, but also stabilizes the operating point for all values of the bifurcation parameter.

  12. A Gross-Margin Model for Defining Technoeconomic Benchmarks in the Electroreduction of CO2.

    Science.gov (United States)

    Verma, Sumit; Kim, Byoungsu; Jhong, Huei-Ru Molly; Ma, Sichao; Kenis, Paul J A

    2016-08-01

    We introduce a gross-margin model to evaluate the technoeconomic feasibility of producing different C1-C2 chemicals such as carbon monoxide, formic acid, methanol, methane, ethanol, and ethylene through the electroreduction of CO2. Key performance benchmarks including the maximum operating cell potential (Vmax), minimum operating current density (jmin), Faradaic efficiency (FE), and catalyst durability (tcatdur) are derived. The Vmax values obtained for the different chemicals indicate that CO and HCOOH are the most economically viable products. Selectivity requirements suggest that the coproduction of an economically less feasible chemical (CH3OH, CH4, C2H5OH, C2H4) with a more feasible chemical (CO, HCOOH) can be a strategy to offset the Vmax requirements for individual products. Other performance requirements such as jmin and tcatdur are also derived, and the feasibility of alternative process designs and operating conditions is evaluated. PMID:27345560
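
    The spirit of the Vmax benchmark can be conveyed by a simplified break-even relation (my notation; the published model also accounts for capital, separation and other operating costs): the electricity cost of making 1 kg of product must not exceed its market price p,

        \frac{n F V}{\mathrm{FE}\cdot M}\, c_e \;\le\; p
        \quad\Longrightarrow\quad
        V_{\max} \;=\; \frac{p \cdot \mathrm{FE} \cdot M}{n F c_e},

    where n is the number of electrons transferred per product molecule, F the Faraday constant (96485 C/mol), M the molar mass of the product (kg/mol), FE the Faradaic efficiency, and c_e the electricity price per joule. Products with few electrons per unit value, such as CO and HCOOH, tolerate the highest cell potentials, consistent with the conclusion above.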

  13. Benchmarking shielding simulations for an accelerator-driven spallation neutron source

    Science.gov (United States)

    Cherkashyna, Nataliia; DiJulio, Douglas D.; Panzner, Tobias; Rantsiou, Emmanouela; Filges, Uwe; Ehlers, Georg; Bentley, Phillip M.

    2015-08-01

    The shielding at an accelerator-driven spallation neutron facility plays a critical role in the performance of the neutron scattering instruments, the overall safety, and the total cost of the facility. Accurate simulation of shielding components is thus key for the design of upcoming facilities, such as the European Spallation Source (ESS), currently under construction in Lund, Sweden. In this paper, we present a comparative study between the measured and the simulated neutron background at the Swiss Spallation Neutron Source (SINQ), at the Paul Scherrer Institute (PSI), Villigen, Switzerland. The measurements were carried out at several positions along the SINQ monolith wall with the neutron dosimeter WENDI-2, which has a well-characterized response up to 5 GeV. The simulations were performed using the Monte-Carlo radiation transport code geant4, and include a complete transport from the proton beam to the measurement locations in a single calculation. The agreement between measurements and simulations is within about a factor of 2 for the points where the measured radiation dose is above the background level, which is a satisfactory result for such simulations spanning many energy regimes, different physics processes and transport through several meters of shielding materials. The neutrons contributing to the radiation field emanating from the monolith were confirmed to originate from neutrons with energies above 1 MeV in the target region. The current work validates geant4 as being well suited for deep-shielding calculations at accelerator-based spallation sources. We also extrapolate what the simulated flux levels might imply for short (several tens of meters) instruments at ESS.

  14. Financial benchmarking

    OpenAIRE

    Boldyreva, Anna

    2014-01-01

    This bachelor's thesis is focused on the financial benchmarking of TULIPA PRAHA s.r.o. The aim of this work is to evaluate the financial situation of the company, identify its strengths and weaknesses, and find out how efficiently the company performs in comparison with top companies in the same field, using the INFA benchmarking diagnostic system of financial indicators. The theoretical part includes the characteristics of financial analysis, which financial benchmarking is based on a...

  15. Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model

    International Nuclear Information System (INIS)

    In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists in numerically solving the time dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with the finite element method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model features moving control rods during the time of the reaction. Therefore, cross-sections (piecewise constant) are taken into account by interpolations with respect to the velocity of the control rods. Parallelism across time is achieved by an adequate application of the parareal in time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines the use of two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-position control rod model, while the fine propagator is assumed to be a high order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch–Maurer–Werner benchmark
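
    The structure of the predictor-corrector iteration, U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) − G(U_n^k), is easy to see in a toy setting. The sketch below applies parareal to the scalar test equation dy/dt = λy, with a one-step explicit Euler coarse propagator and a many-substep Euler fine propagator; it is a minimal illustration of the algorithm, not the finite element neutron diffusion solver of the paper.

        import numpy as np

        lam, T, N, K = -1.0, 2.0, 20, 5     # problem, horizon, time slices, iterations
        dT = T / N

        def coarse(y, dt):                   # G: one explicit Euler step
            return y + dt * lam * y

        def fine(y, dt, substeps=100):       # F: many Euler substeps
            h = dt / substeps
            for _ in range(substeps):
                y = y + h * lam * y
            return y

        # Initial prediction with the coarse propagator alone
        U = np.zeros(N + 1)
        U[0] = 1.0
        for n in range(N):
            U[n + 1] = coarse(U[n], dT)

        # Parareal corrections: fine solves on each slice are independent,
        # so in a real implementation they run in parallel across slices.
        for k in range(K):
            F_prev = np.array([fine(U[n], dT) for n in range(N)])
            G_prev = np.array([coarse(U[n], dT) for n in range(N)])
            Unew = U.copy()
            for n in range(N):
                Unew[n + 1] = coarse(Unew[n], dT) + F_prev[n] - G_prev[n]
            U = Unew

        print("parareal:", U[-1], " exact:", np.exp(lam * T))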

  16. Benchmarking Inverse Statistical Approaches for Protein Structure and Design with Exactly Solvable Models

    Science.gov (United States)

    Jacquin, Hugo; Shakhnovich, Eugene; Cocco, Simona; Monasson, Rémi

    2016-01-01

    Inverse statistical approaches to determine protein structure and function from Multiple Sequence Alignments (MSA) are emerging as powerful tools in computational biology. However, the underlying assumptions of the relationship between the inferred effective Potts Hamiltonian and real protein structure and energetics remain untested so far. Here we use the lattice protein (LP) model to benchmark those inverse statistical approaches. We build MSAs of highly stable sequences in target LP structures, and infer the effective pairwise Potts Hamiltonians from those MSAs. We find that inferred Potts Hamiltonians reproduce many important aspects of ‘true’ LP structures and energetics. Careful analysis reveals that effective pairwise couplings in inferred Potts Hamiltonians depend not only on the energetics of the native structure but also on competing folds; in particular, the coupling values reflect both positive design (stabilization of the native conformation) and negative design (destabilization of competing folds). In addition to providing detailed structural information, the inferred Potts models used as protein Hamiltonians for the design of new sequences are able to generate, with high probability, completely new sequences with the desired folds, which is not possible using independent-site models. These are remarkable results, as the effective LP Hamiltonians used to generate the MSAs are not simple pairwise models due to the competition between the folds. Our findings elucidate the reasons for the success of inverse approaches to the modelling of proteins from sequence data, and their limitations. PMID:27177270
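
    One widely used way to infer pairwise couplings from an MSA is the naive mean-field approximation, where couplings are read off from the inverse of a pseudocount-regularized correlation matrix of one-hot encoded sequences. The sketch below is a generic DCA-style illustration; the paper's own inference procedure may differ in its regularization and gauge choices.

        import numpy as np

        def mean_field_couplings(msa, q, pc=0.5):
            """Naive mean-field inference: J = -C^{-1} on one-hot columns.
            msa: (n_seq, L) integers in 0..q-1; the last state is dropped
            from the encoding so the correlation matrix is invertible."""
            n, L = msa.shape
            X = np.zeros((n, L * (q - 1)))
            for i in range(L):
                for a in range(q - 1):
                    X[:, i * (q - 1) + a] = (msa[:, i] == a)
            C = np.cov(X, rowvar=False, bias=True)
            C = (1 - pc) * C + pc / q * np.eye(L * (q - 1))  # pseudocount reg.
            J = -np.linalg.inv(C)
            return J.reshape(L, q - 1, L, q - 1)

        # Toy MSA: 200 random sequences, length 6, q = 3 states
        rng = np.random.default_rng(0)
        msa = rng.integers(0, 3, size=(200, 6))
        msa[:, 3] = msa[:, 1]                    # plant a perfect correlation
        J = mean_field_couplings(msa, q=3)
        scores = np.linalg.norm(J, axis=(1, 3))  # Frobenius norm per site pair
        np.fill_diagonal(scores, 0.0)
        print("top coupled pair:", np.unravel_index(scores.argmax(), scores.shape))

    On this toy input the top-scoring pair is the planted one, (1, 3), which is the same logic by which inferred couplings are compared against true LP contacts.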

  17. Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model

    Energy Technology Data Exchange (ETDEWEB)

    Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr [Laboratoire de Recherche Conventionné MANON, CEA/DEN/DANS/DM2S and UPMC-CNRS/LJLL (France); CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex (France); Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr [Laboratoire de Recherche Conventionné MANON, CEA/DEN/DANS/DM2S and UPMC-CNRS/LJLL (France); CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex (France); Maday, Yvon, E-mail: maday@ann.jussieu.fr [Sorbonne Universités, UPMC Univ Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions and Institut Universitaire de France, F-75005, Paris (France); Laboratoire de Recherche Conventionné MANON, CEA/DEN/DANS/DM2S and UPMC-CNRS/LJLL (France); Brown Univ, Division of Applied Maths, Providence, RI (United States); Riahi, Mohamed Kamel, E-mail: riahi@cmap.polytechnique.fr [Laboratoire de Recherche Conventionné MANON, CEA/DEN/DANS/DM2S and UPMC-CNRS/LJLL (France); CMAP, Inria-Saclay and X-Ecole Polytechnique, Route de Saclay, 91128 Palaiseau Cedex (France); Salomon, Julien, E-mail: salomon@ceremade.dauphine.fr [CEREMADE, Univ Paris-Dauphine, Pl. du Mal. de Lattre de Tassigny, F-75016, Paris (France)

    2014-12-15

    In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists in numerically solving the time dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with the finite element method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model features moving control rods during the time of the reaction. Therefore, cross-sections (piecewise constant) are taken into account by interpolations with respect to the velocity of the control rods. Parallelism across time is achieved by an adequate application of the parareal in time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines the use of two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-position control rod model, while the fine propagator is assumed to be a high order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch–Maurer–Werner benchmark.

  18. Climate simulations for 1880-2003 with GISS modelE

    CERN Document Server

    Hansen, J; Bauer, S; Baum, E; Cairns, B; Canuto, V; Chandler, M; Cheng, Y; Cohen, A; Faluvegi, G; Fleming, E; Friend, A; Genio, A D; Hall, T; Jackman, C; Jonas, J; Kelley, M; Kharecha, P; Kiang, N Y; Koch, D; Labow, G; Lacis, A; Lerner, J; Lo, K; Menon, S; Miller, R; Nazarenko, L; Novakov, T; Oinas, V; Perlwitz, J; Rind, D; Romanou, A; Ruedy, R; Russell, G; Sato, M; Schmidt, G A; Schmunk, R; Shindell, D; Stone, P; Streets, D; Sun, S; Tausnev, N; Thresher, D; Unger, N; Yao, M; Zhang, S; Perlwitz, Ja.; Perlwitz, Ju.

    2006-01-01

    We carry out climate simulations for 1880-2003 with GISS modelE driven by ten measured or estimated climate forcings. An ensemble of climate model runs is carried out for each forcing acting individually and for all forcing mechanisms acting together. We compare side-by-side simulated climate change for each forcing, all forcings, observations, unforced variability among model ensemble members, and, if available, observed variability. Discrepancies between observations and simulations with all forcings are due to model deficiencies, inaccurate or incomplete forcings, and imperfect observations. Although there are notable discrepancies between model and observations, the fidelity is sufficient to encourage use of the model for simulations of future climate change. By using a fixed well-documented model and accurately defining the 1880-2003 forcings, we aim to provide a benchmark against which the effect of improvements in the model, climate forcings, and observations can be tested. Principal model deficiencies...

  19. 3D simulation of Industrial Hall in case of fire. Benchmark between ABAQUS, ANSYS and SAFIR

    OpenAIRE

    Vassart, Olivier; Cajot, Louis-Guy; O'Connor, Marc; Shenkai, Y.; Fraud, C.; Zhao, Bin; De la Quintana, Jesus; Martinez de Aragon, J.; Franssen, Jean-Marc; Gens, Frederic

    2004-01-01

    For single-storey buildings, the structural behaviour in case of fire is relevant only for the safety of the firemen. The protection of occupants and goods is a matter of fire spread, smoke propagation, active fire fighting measures and evacuation facilities. Brittle failure, progressive collapse and partial failure of façade elements outwards may endanger the fire fighters and have to be avoided. In order to deal with such an objective, the simulation software has to cover the 3D structura...

  20. Effects of Existing Evaluated Nuclear Data Files on Nuclear Parameters of the BFS-62-3A Assembly Benchmark Model

    OpenAIRE

    Mikhail

    2002-01-01

    This report is a continuation of the study of the experiments performed on the BFS-62-3A critical assembly in Russia. The objective of the work is to quantify the effect of cross section uncertainties on reactor neutronics parameters as applied to the hybrid core of the BN-600 reactor of Beloyarskaya NPP. A two-dimensional benchmark model of BFS-62-3A was created specially for these purposes, and the experimental values were reduced to it. Benchmark characteristics for this assembly are (1) criticality; (2) central fiss...

  1. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study, as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve...
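
    The break-insertion recipe described above (a Poisson-distributed number of breakpoints with normally distributed sizes) is simple to reproduce. Here is a minimal sketch for a single station series; all rates and magnitudes are illustrative choices, not the values used by HOME.

        import numpy as np

        rng = np.random.default_rng(42)
        n_years = 100
        t = np.arange(n_years * 12)

        # Clean monthly series: seasonal cycle plus noise (illustrative)
        clean = 10.0 + 5.0 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, t.size)

        # Random break-type inhomogeneities: Poisson-distributed count,
        # uniform break positions, normally distributed step sizes.
        n_breaks = rng.poisson(lam=5)
        positions = rng.choice(t.size, size=n_breaks, replace=False)
        sizes = rng.normal(loc=0.0, scale=0.8, size=n_breaks)

        inhomogeneous = clean.copy()
        for pos, size in zip(positions, sizes):
            inhomogeneous[pos:] += size      # shift everything after the break

        print(f"{n_breaks} breaks inserted; series mean shifted by "
              f"{inhomogeneous.mean() - clean.mean():+.2f}")

    A homogenization algorithm is then judged by how well it recovers the clean series from the inhomogeneous one, which is what the performance metrics listed above quantify.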

  2. Antibiotic reimbursement in a model delinked from sales: a benchmark-based worldwide approach.

    Science.gov (United States)

    Rex, John H; Outterson, Kevin

    2016-04-01

    Despite the life-saving ability of antibiotics and their importance as a key enabler of all of modern health care, their effectiveness is now threatened by a rising tide of resistance. Unfortunately, the antibiotic pipeline does not match health needs because of challenges in discovery and development, as well as the poor economics of antibiotics. Discovery and development are being addressed by a range of public-private partnerships; however, correcting the poor economics of antibiotics will need an overhaul of the present business model on a worldwide scale. Discussions are now converging on delinking reward from antibiotic sales through prizes, milestone payments, or insurance-like models in which innovation is rewarded with a fixed series of payments of a predictable size. Rewarding all drugs with the same payments could create perverse incentives to produce drugs that provide the least possible innovation. Thus, we propose a payment model using a graded array of benchmarked rewards designed to encourage the development of antibiotics with the greatest societal value, together with appropriate worldwide access to antibiotics to maximise human health. PMID:27036356

  3. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added in order to obtain a unique selection...

  4. Precious benchmarking

    International Nuclear Information System (INIS)

    Recently, there has been a new word added to our vocabulary - benchmarking. Because of benchmarking, our colleagues travel to power plants all around the world and guests from the European power plants visit us. We asked Marek Niznansky from the Nuclear Safety Department in Jaslovske Bohunice NPP to explain this term to us. (author)

  5. Benchmark Simulation for the Development of the Regulatory Audit Subchannel Analysis Code

    International Nuclear Information System (INIS)

    For the safe and reliable operation of a reactor, it is important to predict accurately the flow and temperature distributions in the thermal-hydraulic design of a reactor core. A subchannel approach can give reasonable flow and temperature distributions with a short computing time. The Korea Institute of Nuclear Safety (KINS) is presently reviewing a new subchannel code, THALES, which will substitute for both the THINC-IV and TORC codes. To assess the prediction performance of THALES, KINS is developing a subchannel analysis code for independent audit calculations. The code is based on the workstation version of COBRA-IV-I. The main objective of the present study is to assess the performance of the COBRA-IV-I code by comparing the simulation results with experimental ones for the sample problems.

  6. ASPECTS ABOUT SIMULATED MODEL TRUSTINESS

    OpenAIRE

    CRISAN DANIELA ALEXANDRA; STANICA JUSTINA LAVINIA; DESPA RADU; COCULESCU CRISTINA

    2009-01-01

    Nowadays, thanks to the computing possibilities that electronic computers offer, namely large memory volume and high computing speed, modeling methods are improving, with complex system modeling using simulation techniques playing an important role. These o...

  7. OpenMP-accelerated SWAT simulation using Intel C and FORTRAN compilers: Development and benchmark

    Science.gov (United States)

    Ki, Seo Jin; Sugimura, Tak; Kim, Albert S.

    2015-02-01

    We developed a practical method to accelerate the execution of the Soil and Water Assessment Tool (SWAT) using open (free) computational resources. The SWAT source code (rev 622) was recompiled using a non-commercial Intel FORTRAN compiler on the Ubuntu 12.04 LTS Linux platform, and newly named iOMP-SWAT in this study. The GNU utilities make, gprof, and diff were used to develop the iOMP-SWAT package, profile memory usage, and check the identicalness of parallel and serial simulations. Among 302 SWAT subroutines, the slowest routines were identified using GNU gprof, and later modified using the Open Multi-Processing (OpenMP) library on an 8-core shared memory system. In addition, a C wrapping function was used to rapidly set large arrays to zero by cross compiling with the original SWAT FORTRAN package. A universal speedup ratio of 2.3 was achieved using input data sets of a large number of hydrological response units. As we specifically focus on acceleration of a single SWAT run, the use of iOMP-SWAT for parameter calibrations will significantly improve the performance of SWAT optimization.

  8. Benchmarking atomic physics models for magnetically confined fusion plasma physics experiments

    International Nuclear Information System (INIS)

    In present magnetically confined fusion devices, high and intermediate Z impurities are either puffed into the plasma for divertor radiative cooling experiments or are sputtered from the high Z plasma facing armor. The beneficial cooling of the edge as well as the detrimental radiative losses from the core due to these impurities can be properly understood only if the atomic physics used in the modeling of the cooling curves is very accurate. To this end, a comprehensive experimental and theoretical analysis of some relevant impurities is undertaken. Gases (Ne, Ar, Kr, and Xe) are puffed and nongases are introduced through laser ablation into the FTU tokamak plasma. The charge state distributions and total density of these impurities are determined from spatial scans of several photometrically calibrated vacuum ultraviolet and x-ray spectrographs (3-1600 Å), the Multiple Ionization State Transport code (MIST) and a collisional radiative model. The radiative power losses are measured with bolometry, and the emissivity profiles were measured by a visible bremsstrahlung array. The ionization balance, excitation physics, and the radiative cooling curves are computed from the Hebrew University Lawrence Livermore atomic code (HULLAC) and are benchmarked by these experiments. (Supported by U.S. DOE Grant No. DE-FG02-86ER53214 at JHU and Contract No. W-7405-ENG-48 at LLNL.) copyright 1999 American Institute of Physics

  9. Benchmarking atomic physics models for magnetically confined fusion plasma physics experiments

    Science.gov (United States)

    May, M. J.; Finkenthal, M.; Soukhanovskii, V.; Stutman, D.; Moos, H. W.; Pacella, D.; Mazzitelli, G.; Fournier, K.; Goldstein, W.; Gregory, B.

    1999-01-01

    In present magnetically confined fusion devices, high and intermediate Z impurities are either puffed into the plasma for divertor radiative cooling experiments or are sputtered from the high Z plasma facing armor. The beneficial cooling of the edge as well as the detrimental radiative losses from the core due to these impurities can be properly understood only if the atomic physics used in the modeling of the cooling curves is very accurate. To this end, a comprehensive experimental and theoretical analysis of some relevant impurities is undertaken. Gases (Ne, Ar, Kr, and Xe) are puffed and nongases are introduced through laser ablation into the FTU tokamak plasma. The charge state distributions and total density of these impurities are determined from spatial scans of several photometrically calibrated vacuum ultraviolet and x-ray spectrographs (3-1600 Å), the Multiple Ionization State Transport code (MIST) and a collisional radiative model. The radiative power losses are measured with bolometry, and the emissivity profiles were measured by a visible bremsstrahlung array. The ionization balance, excitation physics, and the radiative cooling curves are computed from the Hebrew University Lawrence Livermore atomic code (HULLAC) and are benchmarked by these experiments. (Supported by U.S. DOE Grant No. DE-FG02-86ER53214 at JHU and Contract No. W-7405-ENG-48 at LLNL.)

  10. Benchmarks for Uncertainty Analysis in Modelling (UAM) for the Design, Operation and Safety Analysis of LWRs - Volume I: Specification and Support Data for Neutronics Cases (Phase I)

    International Nuclear Information System (INIS)

    This report presents benchmark specifications for Phase I (Neutronics Phase) of the OECD LWR UAM benchmark in a format similar to the previous OECD/NRC benchmark specifications. Phase I consists of the following exercises: - Exercise 1 (I-1): 'Cell Physics', focused on the derivation of the multi-group microscopic cross-section libraries and their uncertainties. - Exercise 2 (I-2): 'Lattice Physics', focused on the derivation of the few-group macroscopic cross-section libraries and their uncertainties. - Exercise 3 (I-3): 'Core Physics', focused on the core steady-state stand-alone neutronics calculations and their uncertainties. These exercises follow those established in the industry and regulatory routine calculation scheme for LWR design and safety analysis. This phase is focused on understanding uncertainties in the prediction of key reactor core parameters associated with LWR stand-alone neutronics core simulation. Such uncertainties occur due to input data uncertainties, modelling errors, and numerical approximations. The chosen approach in Phase I is to select and propagate the most important contributors for each exercise which can be treated in a practical manner. The cross-section uncertainty information is considered the most important source of input uncertainty for Phase I. The cross-section related uncertainties are propagated through the 3 exercises of Phase I. In Exercise I-1 these are the variance and covariance data associated with continuous energy cross-sections in evaluated nuclear data files. In Exercise I-2 these are the variance and covariance data associated with multi-group cross-sections used as input in the lattice physics codes. In Exercise I-3 these are the variance and covariance data associated with few-group cross-sections used as input in the core simulators. Depending on the availability of different methods in the computer code of choice for a given exercise, the related methodological uncertainties can play a smaller or larger...
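
    A toy illustration of the propagation idea (not any of the actual UAM exercises): sample correlated one-group cross-sections from a covariance matrix and observe the induced spread in k∞ = νΣf/Σa. The mean values and covariances below are invented for illustration, not evaluated data.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy one-group data: means and relative covariance for (nu*Sigma_f, Sigma_a)
        mean = np.array([0.0120, 0.0100])          # cm^-1, illustrative
        rel_cov = np.array([[0.02**2, 0.5 * 0.02 * 0.03],
                            [0.5 * 0.02 * 0.03, 0.03**2]])
        cov = rel_cov * np.outer(mean, mean)       # absolute covariance

        samples = rng.multivariate_normal(mean, cov, size=100_000)
        k_inf = samples[:, 0] / samples[:, 1]      # k_inf = nu*Sigma_f / Sigma_a

        print(f"k_inf = {k_inf.mean():.4f} +/- {k_inf.std():.4f} "
              f"({100 * k_inf.std() / k_inf.mean():.2f}% rel. uncertainty)")

    Note that the positive correlation between the two cross-sections partially cancels in the ratio, which is why propagating full covariance data, rather than independent variances, matters in the exercises above.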

  11. Relative importance of secondary settling tank models in WWTP simulations

    DEFF Research Database (Denmark)

    Ramin, Elham; Flores-Alsina, Xavier; Sin, Gürkan;

    2012-01-01

    Results obtained in a study using the Benchmark Simulation Model No. 1 (BSM1) show that a one-dimensional secondary settling tank (1-D SST) model structure and its parameters are among the most significant sources of uncertainty in wastewater treatment plant (WWTP) simulations [Ramin et al., 2011]. ... simulations (SRC method); and (b) Morris screening. The overall objective of assessing the 1-D SST model selection and parameters in GSA is to provide a parameter sensitivity ranking for WWTP calibration exercises, aiming at predicting key plant performance criteria, including methane production and effluent ... water quality index. Results obtained in this study show that 1-D SST model parameters strongly influence biogas production via anaerobic digestion and the plant’s effluent water quality, but they have limited effect on estimating the quality of nitrogen-rich returns from the digester.

  12. Developing a reference-model for benchmarking: performance improvement in operation and maintenance

    OpenAIRE

    El-Wardani, Riad

    2012-01-01

    Statoil has a major responsibility for “driving simplification and improvement initiatives” by relying on tools such as benchmarking. The aim is to drive business performance based on best practice rather than on compliance. To date, the full potential of benchmarking has not been realized, since the concept is not easy to define, let alone follow up. A great deal of knowledge and practice remains hidden in the Statoil system that can be effectively used to drive performance based on effective ...

  13. Dose mapping simulation using the MCNP code for the Syrian gamma irradiation facility and benchmarking

    International Nuclear Information System (INIS)

    Highlights: • MCNP4C was used to calculate the gamma ray dose rate spatial distribution for the SGIF. • Measurement of the gamma ray dose rate spatial distribution using the Chlorobenzene dosimeter was conducted as well. • Good agreement was noticed between the calculated and measured results. • The maximum relative differences were less than 7%, 4% and 4% in the x, y and z directions, respectively. - Abstract: A three dimensional model of the Syrian gamma irradiation facility (SGIF) is developed in this paper to calculate the gamma ray dose rate spatial distribution in the irradiation room at the 60Co source board using the MCNP-4C code. Measurement of the gamma ray dose rate spatial distribution using the Chlorobenzene dosimeter is conducted as well to compare the calculated and measured results. Good agreement is noticed between the calculated and measured results, with maximum relative differences less than 7%, 4% and 4% in the x, y and z directions, respectively. This agreement indicates that the established model is an accurate representation of the SGIF and can be used in the future to make the calculation design for a new irradiation facility.

  14. Bridging experiments, models and simulations

    DEFF Research Database (Denmark)

    Carusi, Annamaria; Burrage, Kevin; Rodríguez, Blanca

    2012-01-01

    ... study aims at informing strategies for validation by elucidating the complex interrelations among experiments, models, and simulations in cardiac electrophysiology. We describe the processes, data, and knowledge involved in the construction of whole ventricular multiscale models of cardiac electrophysiology. Our analysis reveals that models, simulations, and experiments are intertwined, in an assemblage that is a system itself, namely the model-simulation-experiment (MSE) system. We argue that validation is part of the whole MSE system and is contingent upon 1) understanding and coping with sources...

  15. Simulation - modeling - experiment; Simulation - modelisation - experience

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    After two workshops held in 2001 on the same topics, and in order to take stock of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, and the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop, dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advanced status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); perturbation methods and sensitivity analysis of nuclear data in reactor physics, application to the VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F...

  16. Thermochemistry of organic reactions in microporous oxides by atomistic simulations: benchmarking against periodic B3LYP.

    Science.gov (United States)

    Bleken, Francesca; Svelle, Stian; Lillerud, Karl Petter; Olsbye, Unni; Arstad, Bjørnar; Swang, Ole

    2010-07-15

    The methylation of ethene by methyl chloride and methanol in the microporous materials SAPO-34 and SSZ-13 has been studied using different periodic atomistic modeling approaches based on density functional theory. The RPBE functional, which earlier has been used successfully in studies of surface reactions on metals, fails to yield a qualitatively correct description of the transition states under study. Employing B3LYP as functional gives results in line with experimental data: (1) Methanol is adsorbed more strongly than methyl chloride to the acid site. (2) The activation energies for the methylation of ethene are slightly lower for SSZ-13. Furthermore, the B3LYP activation energies are lower for methyl chloride than for methanol. PMID:20557090

  17. Local Job Accessibility Measurement: When the Model Makes the Results. Methodological Contribution and Empirical Benchmarking on the Paris Region

    OpenAIRE

    Matthieu Bunel; Elisabeth Tovar

    2012-01-01

    This paper focuses on local job accessibility measurement. We propose an original model that uses national exhaustive micro data and allows for i) a full estimation of job availability according to an extensive set of individual characteristics, ii) a full appraisal of job competition on the labour market and iii) a full control of frontier effects. By matching several exhaustive micro data sources on the Paris region municipalities, we compare the results produced by this benchmark model to ...

  18. IVOA Recommendation: Simulation Data Model

    CERN Document Server

    Lemson, Gerard; Cervino, Miguel; Gheller, Claudio; Gray, Norman; LePetit, Franck; Louys, Mireille; Ooghe, Benjamin; Wagner, Rick; Wozniak, Herve

    2014-01-01

    In this document and the accompanying documents we describe a data model (Simulation Data Model) describing numerical computer simulations of astrophysical systems. The primary goal of this standard is to support discovery of simulations by describing those aspects of them that scientists might wish to query on, i.e. it is a model for meta-data describing simulations. This document does not propose a protocol for using this model. IVOA protocols are being developed and are supposed to use the model, either in its original form or in a form derived from the model proposed here, but more suited to the particular protocol. The SimDM has been developed in the IVOA Theory Interest Group with assistance of representatives of relevant working groups, in particular DM and Semantics.

  19. Power law distributions of wildfires across Europe: benchmarking a land surface model with observed data

    OpenAIRE

    Mauro, B.; Fava, F.; Frattini, P.; Camia, A.; Colombo, R.; Migliavacca, M.

    2015-01-01

    Monthly wildfire burned area frequency is here modeled with a power law distribution, and scaling exponents across different European biomes are estimated. The data sets, spanning from 2000 to 2009, comprise the inventory of monthly burned areas from the European Forest Fire Information System (EFFIS) and simulated monthly burned areas from a recent parameterization of a Land Surface Model (LSM), namely the Community Land Model (CLM). Power law exponents are estimated with a Ma...
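
    For reference, the standard continuous maximum-likelihood estimator for a power-law exponent over areas x ≥ x_min is α̂ = 1 + n / Σ ln(x_i / x_min), with standard error (α̂ − 1)/√n. The sketch below is a generic illustration of this estimator, not the exact fitting procedure of the paper, and the synthetic data are invented.

        import numpy as np

        def powerlaw_mle(x, x_min):
            """MLE exponent for p(x) ~ x^-alpha, x >= x_min (continuous case)."""
            x = np.asarray(x, dtype=float)
            x = x[x >= x_min]
            alpha = 1.0 + x.size / np.sum(np.log(x / x_min))
            sigma = (alpha - 1.0) / np.sqrt(x.size)   # standard error
            return alpha, sigma

        # Synthetic burned areas drawn from a true alpha = 1.8 power law
        rng = np.random.default_rng(7)
        u = rng.uniform(size=5000)
        x = 1.0 * (1 - u) ** (-1.0 / (1.8 - 1.0))     # inverse-CDF sampling, x_min = 1
        print(powerlaw_mle(x, x_min=1.0))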

  20. Proton Exchange Membrane Fuel Cell Engineering Model Powerplant. Test Report: Benchmark Tests in Three Spatial Orientations

    Science.gov (United States)

    Loyselle, Patricia; Prokopius, Kevin

    2011-01-01

    Proton exchange membrane (PEM) fuel cell technology is the leading candidate to replace the aging alkaline fuel cell technology, currently used on the Shuttle, for future space missions. This test effort marks the final phase of a 5-yr development program that began under the Second Generation Reusable Launch Vehicle (RLV) Program, transitioned into the Next Generation Launch Technologies (NGLT) Program, and continued under Constellation Systems in the Exploration Technology Development Program. Initially, the engineering model (EM) powerplant was evaluated with respect to its performance as compared to acceptance tests carried out at the manufacturer. This was to determine the sensitivity of the powerplant performance to changes in test environment. In addition, a series of tests were performed with the powerplant in the original standard orientation. This report details the continuing EM benchmark test results in three spatial orientations as well as extended duration testing in the mission profile test. The results from these tests verify the applicability of PEM fuel cells for future NASA missions. The specifics of these different tests are described in the following sections.

  1. Benchmark tests of JENDL-3.3 and ENDF/B-VI data files using Monte Carlo simulation of the 3 MW TRIGA MARK II research reactor

    International Nuclear Information System (INIS)

    The three-dimensional continuous-energy Monte Carlo code MCNP4C was used to develop a versatile and accurate full-core model of the 3 MW TRIGA MARK II research reactor at the Atomic Energy Research Establishment, Savar, Dhaka, Bangladesh. The model represents in detail all components of the core with literally no physical approximation. All fresh fuel and control elements as well as the vicinity of the core were precisely described. Validation of the newly generated continuous energy cross section data from JENDL-3.3 was performed against some well-known benchmark lattices using MCNP4C, and the results were found to be in very good agreement with the experiment and other evaluations. For the TRIGA analysis, continuous energy cross section data from JENDL-3.3 and ENDF/B-VI, in combination with the JENDL-3.2 and ENDF/B-V data files (for natZr, natMo, natCr, natFe, natNi, natSi, and natMg) at 300K, were used. Full S(α, β) scattering functions from ENDF/B-V for Zr in ZrH, H in ZrH and the water molecule, and for graphite were used in both cases. The validation of the model was performed against the criticality and reactivity benchmark experiments of the reactor. The MCNP calculated values for the effective multiplication factor keff were underestimated by 0.0250%Δk/k and 0.2510%Δk/k for the control rods critical positions and overestimated by 0.2098%Δk/k and 0.0966%Δk/k for the all-control-rods-withdrawn positions using JENDL-3.3 and ENDF/B-VI, respectively. The core multiplication factor differs appreciably (∼3.3%) between the no-S(α, β) case (free gas treatment at about 300K) and the 300K S(α, β) case. Moreover, a ∼20.0% decrease of the thermal neutron flux occurs when the thermal library is removed. The effect of the erbium isotopes present in the TRIGA fuel on the criticality analysis of the reactor was also studied. In addition to the keff values, the well-known integral parameters δ28, δ25, ρ25, and C were calculated and compared for both JENDL-3...

  2. WLUP benchmarks

    International Nuclear Information System (INIS)

    The IAEA-WIMS Library Update Project (WLUP) is in its final stage, with the final library to be released in 2002. It is the result of research and development carried out by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for the analysis and plotting of results is described. Some examples are given. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  3. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from the PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduces the excess conservatism and brings the predictions closer to the benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (the Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for burnups of 10, 20, 30, 40, 50, and 60 GWd/MTU and three cooling times (100 hours, 5 years, and 15 years). These benchmark cases are analyzed with PARAGON and the SCALE package, and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to assess whether the 5% decrement approach is conservative for determining the depletion uncertainty.
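
    A minimal sketch of the arithmetic behind the 5% decrement approach as commonly read from the Kopp memo; the function name and k-values below are illustrative, not taken from the paper:

        # Hedged sketch: take 5% of the calculated depletion reactivity
        # decrement as the depletion uncertainty; all numbers are placeholders.
        def depletion_uncertainty(k_fresh, k_depleted, fraction=0.05):
            decrement = k_fresh - k_depleted      # depletion reactivity worth (in units of k)
            return decrement, fraction * decrement

        dk, u = depletion_uncertainty(k_fresh=1.10, k_depleted=0.95)
        print(f"decrement = {dk:.4f}, 5% uncertainty = {u:.4f}")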

  4. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process: the verification process confirms that a code performs its intended functions correctly, while the validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from that set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking; many of the remaining problems were reformulated to permit execution in either multigroup mode or in MCNP's normal continuous-energy mode. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
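
    A minimal sketch of the acceptance check implied by such a verification suite: the Monte Carlo k_eff, with its statistical uncertainty, should reproduce the exact analytical answer. The tolerance and numbers are illustrative assumptions, not the suite's documented criteria:

        # Hedged sketch: pass if the computed k_eff lies within n_sigma
        # statistical uncertainties of the exact analytical benchmark value.
        def verify_keff(k_mc, sigma_mc, k_exact, n_sigma=3.0):
            return abs(k_mc - k_exact) <= n_sigma * sigma_mc

        print(verify_keff(k_mc=0.99985, sigma_mc=0.00010, k_exact=1.0))  # True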

  5. General introduction to simulation models

    DEFF Research Database (Denmark)

    Hisham Beshara Halasa, Tariq; Boklund, Anette

    2012-01-01

    trials to investigate the effect of alternative conditions or actions on a specific system. Nonetheless, field trials are expensive and sometimes not possible to conduct, as in the case of foot-and-mouth disease (FMD). Instead, simulation models can be a good and cheap substitute for experiments and field...... trials. However, if simulation models are to be used, good quality input data must be available. To model FMD, several disease spread models are available. For this project, we chose three simulation models: Davis Animal Disease Spread (DADS), which has been upgraded to DTU-DADS, InterSpread Plus (ISP) and...... model of FMD spread that can provide useful and trustworthy advice, there are four important issues which the model should represent: 1) the herd structure of the country in question, 2) the dynamics of animal movements and contacts between herds, 3) the biology of the disease, and 4) the regulations...

  6. Shielding Integral Benchmark Archive and Database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL; Grove, Robert E [ORNL; Kodeli, I. [International Atomic Energy Agency (IAEA); Sartori, Enrico [ORNL; Gulliford, J. [OECD Nuclear Energy Agency

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  7. Shielding Integral Benchmark Archive and Database (SINBAD)

    International Nuclear Information System (INIS)

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  8. Simulation modeling of estuarine ecosystems

    Science.gov (United States)

    Johnson, R. W.

    1980-01-01

    A simulation model has been developed of the Galveston Bay, Texas, ecosystem. Secondary productivity, measured by harvestable species (such as shrimp and fish), is evaluated in terms of man-related and controllable factors, such as the quantity and quality of inlet fresh water and pollutants. The simulation model used information from an existing physical parameters model as well as pertinent biological measurements obtained by conventional sampling techniques. Predicted results from the model compared favorably with those from comparable investigations. In addition, this paper discusses remotely sensed and conventional measurements in the framework of prospective models that may be used to study estuarine processes and ecosystem productivity.

  9. Benchmarking conflict resolution algorithms

    OpenAIRE

    Vanaret, Charlie; Gianazza, David; Durand, Nicolas; Gotteland, Jean-Baptiste

    2012-01-01

    Applying a benchmarking approach to conflict resolution problems is a hard task, as the analytical form of the constraints is not simple. This is especially the case when using realistic dynamics and models, considering accelerating aircraft that may follow flight paths that are not direct. Currently, there is a lack of common problems and data that would allow researchers to compare the performances of several conflict resolution algorithms. The present paper introduces a benchmarking approa...

  10. Benchmarking and regulation

    OpenAIRE

    Agrell, Per Joakim; Bogetoft, Peter

    2013-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publication...

  11. Wake modeling and simulation

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Madsen Aagaard, Helge; Larsen, Torben J.;

    We present a consistent, physically based theory for the wake meandering phenomenon, which we consider of crucial importance for the overall description of wind turbine loadings in wind farms. In its present version the model is confined to single wake situations. The model philosophy does, however......, have the potential to include also mutual wake interaction phenomena. The basic conjecture behind the dynamic wake meandering (DWM) model is that wake transportation in the atmospheric boundary layer is driven by the large-scale lateral and vertical turbulence components. Based on this conjecture a...... simultaneously. This capability is a direct and attractive consequence of the model being based on the underlying physical process, and it potentially opens for optimization of wind farm topology, of wind farm operation as well as of control strategies for the individual turbine. To establish an integrated...

  12. TREAT Modeling and Simulation Strategy

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, Mark David [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes the four-phase strategy for developing modeling and simulation software for the Transient Reactor Test Facility. The four phases of this research and development task are identified as (1) full core transient calculations with feedback, (2) experiment modeling, (3) full core plus experiment simulation and (4) quality assurance. The document describes the four phases, the relationship between these research phases, and the anticipated needs within each phase.

  13. TREAT Modeling and Simulation Strategy

    International Nuclear Information System (INIS)

    This report summarizes the four-phase strategy for developing modeling and simulation software for the Transient Reactor Test Facility. The four phases of this research and development task are identified as (1) full core transient calculations with feedback, (2) experiment modeling, (3) full core plus experiment simulation and (4) quality assurance. The document describes the four phases, the relationship between these research phases, and the anticipated needs within each phase.

  14. Hadron Production Model Developments and Benchmarking in the 0.7 - 12 GeV Energy Region

    OpenAIRE

    Mokhov, N. V.; Gudima, K. K.; Striganov, S. I.

    2014-01-01

    Driven by the needs of the intensity frontier projects with their Megawatt beams, e.g., ESS, FAIR and Project X, and their experiments, the event generators of the MARS15 code have been recently improved. After thorough analysis and benchmarking against data, including the newest ones by the HARP collaboration, both the exclusive and inclusive particle production models were further developed in the crucial for the above projects - but difficult from a theoretical standpoint - projectile ener...

  15. CEC thermal-hydraulic benchmark exercise on Fiploc verification experiment F2 in Battelle model containment. Experimental phases 2, 3 and 4. Results of comparisons

    International Nuclear Information System (INIS)

    The present final report comprises the major results of Phase II of the CEC thermal-hydraulic benchmark exercise on Fiploc verification experiment F2 in the Battelle model containment, experimental phases 2, 3 and 4, which was organized and sponsored by the Commission of the European Communities for the purpose of furthering the understanding and analysis of long-term thermal-hydraulic phenomena inside containments during and after severe core accidents. This benchmark exercise received high European attention with eight organizations from six countries participating with eight computer codes during phase 2. Altogether 18 results from computer code runs were supplied by the participants and constitute the basis for comparisons with the experimental data contained in this publication. This reflects both the high technical interest in, as well as the complexity of, this CEC exercise. Major comparison results between computations and data are reported on all important quantities relevant for containment analyses during long-term transients. These comparisons comprise pressure, steam and air content, velocities and their directions, heat transfer coefficients and saturation ratios. Agreements and disagreements are discussed for each participating code/institution, conclusions drawn and recommendations provided. The phase 2 CEC benchmark exercise provided an up-to-date state-of-the-art status review of the thermal-hydraulic capabilities of present computer codes for containment analyses. This exercise has shown that all of the participating codes can simulate the important global features of the experiment correctly, like: temperature stratification, pressure and leakage, heat transfer to structures, relative humidity, collection of sump water. Several weaknesses of individual codes were identified, and this may help to promote their development. As a general conclusion it may be said that while there is still a wide area of necessary extensions and improvements, the

  16. Stochastic modeling analysis and simulation

    CERN Document Server

    Nelson, Barry L

    1995-01-01

    A coherent introduction to the techniques for modeling dynamic stochastic systems, this volume also offers a guide to the mathematical, numerical, and simulation tools of systems analysis. Suitable for advanced undergraduates and graduate-level industrial engineers and management science majors, it proposes modeling systems in terms of their simulation, regardless of whether simulation is employed for analysis. Beginning with a view of the conditions that permit a mathematical-numerical analysis, the text explores Poisson and renewal processes, Markov chains in discrete and continuous time, se
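
    As a hedged illustration of the kind of process the text introduces, a textbook construction of a homogeneous Poisson process from exponential inter-arrival times (the function name and parameters are illustrative):

        # Hedged sketch: arrival times of a Poisson process with a given rate,
        # built by accumulating exponential inter-arrival times.
        import random

        def poisson_arrivals(rate, horizon, seed=0):
            random.seed(seed)
            t, arrivals = 0.0, []
            while True:
                t += random.expovariate(rate)   # exponential inter-arrival time
                if t > horizon:
                    return arrivals
                arrivals.append(t)

        print(len(poisson_arrivals(rate=2.0, horizon=10.0)))  # ~20 arrivals on average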

  17. Model reduction for circuit simulation

    CERN Document Server

    Hinze, Michael; Maten, E Jan W Ter

    2011-01-01

    Simulation based on mathematical models plays a major role in computer aided design of integrated circuits (ICs). Decreasing structure sizes, increasing packing densities and driving frequencies require the use of refined mathematical models that take into account secondary, parasitic effects. This leads to very high dimensional problems which nowadays require simulation times too large for the short time-to-market demands in industry. Modern Model Order Reduction (MOR) techniques present a way out of this dilemma by providing surrogate models which keep the main characteristics of the devi

  18. FASTBUS simulation models in VHDL

    International Nuclear Information System (INIS)

    Four hardware simulation models implementing the FASTBUS protocol are described. The models are written in the VHDL hardware description language to obtain portability, i.e. without relations to any specific simulator. They include two complete FASTBUS devices, a full-duplex segment interconnect and ancillary logic for the segment. In addition, master and slave models using a high level interface to describe FASTBUS operations, are presented. With these models different configurations of FASTBUS systems can be evaluated and the FASTBUS transactions of new devices can be verified. (au)

  19. Benchmark Comparison of Dual- and Quad-Core Processor Linux Clusters with Two Global Climate Modeling Workloads

    Science.gov (United States)

    McGalliard, James

    2008-01-01

    This viewgraph presentation details the science and systems environments that the NASA High End Computing program serves. Included is a discussion of the workload involved in the processing for global climate modeling. The Goddard Earth Observing System Model, Version 5 (GEOS-5) is a system of models integrated using the Earth System Modeling Framework (ESMF). The GEOS-5 system was used for the benchmark tests, and the results of the tests are shown and discussed. Tests were also run for the Cubed Sphere system; results for these tests are also shown.

  20. Benchmarking of thermalhydraulic loop models for lead-alloy-cooled advanced nuclear energy systems. Phase I: Isothermal forced convection case

    International Nuclear Information System (INIS)

    Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of the Fuel Cycle (WPFC) has been established to co-ordinate scientific activities regarding various existing and advanced nuclear fuel cycles, including advanced reactor systems, associated chemistry and flowsheets, development and performance of fuel and materials, and accelerators and spallation targets. The WPFC has different expert groups to cover a wide range of scientific issues in the field of the nuclear fuel cycle. The Task Force on Lead-Alloy-Cooled Advanced Nuclear Energy Systems (LACANES) was created in 2006 to study the thermal-hydraulic characteristics of heavy liquid metal coolant loops. The objectives of the task force are to (1) validate thermal-hydraulic loop models for application to LACANES design analysis in participating organisations, by benchmarking with a set of well-characterised lead-alloy coolant loop test data, (2) establish guidelines for quantifying thermal-hydraulic modelling parameters related to friction and heat transfer by lead-alloy coolant and (3) identify specific issues, either in modelling and/or in loop testing, which need to be addressed via possible future work. Nine participants from seven different institutes participated in the first phase of the benchmark. This report provides the benchmark specifications, the characteristics of the methods and codes, and the results of the preliminary study (pressure loss coefficients) and of Phase I. A comparison and analysis of the results will be performed together with Phase II.

  1. Neutronics analysis of the International Thermonuclear Experimental Reactor (ITER) MCNP ''Benchmark CAD Model'' with the ATTILA discrete ordinates code

    International Nuclear Information System (INIS)

    The ITER IT has adopted the newly developed FEM, 3-D, CAD-based discrete ordinates code ATTILA for its neutronics studies, contingent on its success in predicting key neutronics parameters and nuclear fields according to the stringent QA requirements set forth by the Management and Quality Program (MQP). ATTILA has the advantage of providing a full mapping of the flux and response functions everywhere in one run, so that components subjected to excessive radiation levels and strong streaming paths can be identified. The ITER neutronics community agreed to use a standard CAD model of ITER (40 degree sector, denoted the ''Benchmark CAD Model'') to compare results for several responses selected for calculation benchmarking purposes, to test the efficiency and accuracy of the CAD-MCNP approach developed by each party. Since ATTILA lends itself as a powerful design tool with minimal turnaround time, it was decided to benchmark this model with ATTILA as well and compare the results to those obtained with the CAD-MCNP calculations. In this paper we report such a comparison for five responses, namely: (1) neutron wall load on the surface of the 18 shield blanket modules (SBM), (2) neutron flux and nuclear heating rate in the divertor cassette, (3) nuclear heating rate in the winding pack of the inner leg of the TF coil, (4) radial flux profile across the dummy port plug and shield plug placed in the equatorial port, and (5) flux at seven point locations situated behind the equatorial port plug. (orig.)

  2. SIMULATE-3 core model for nuclear reactor training simulators

    International Nuclear Information System (INIS)

    This paper describes the adaptation of the Studsvik nuclear reactor analysis code SIMULATE-3 to nuclear reactor training simulation. This adaptation to real-time applications permits training simulation to be performed using the same 'engineering grade' core model used for core design, loading optimisation, safety analysis, and plant technical support. Use of SIMULATE-3R in training simulation permits simple initialisation of simulator core models (without the need for tuning) and facilitates the application of cycle-specific core models. SIMULATE-3R permits training simulation of reactor cores with the accuracy normally associated with engineering analysis and enhances the simulator's 'plant analyser' functions. (author)

  3. Preliminary assessment of Geant4 HP models and cross section libraries by reactor criticality benchmark calculations

    DEFF Research Database (Denmark)

    Cai, Xiao-Xiao; Llamas-Jansa, Isabel; Mullet, Steven;

    2013-01-01

    Geant4 is an open source general purpose simulation toolkit for particle transportation in matter. Since the extension of the thermal scattering model in Geant4.9.5 and the availability of the IAEA HP model cross section libraries, it is now possible to extend the application area of Geant4 to re...... models and the G4NDL library. However, cross sections of those missing isotopes were made available recently through the IAEA project “new evaluated neutron cross section libraries for Geant4”....

  4. A benchmark test suite for proton transfer energies and its use to test electronic structure model chemistries

    International Nuclear Information System (INIS)

    Highlights: ► We present benchmark calculations of energies of complexation and barriers for proton transfer to water. ► Benchmark calculations are used to test methods suitable for application to large and complex systems. ► Methods tested include hybrid meta-GGAs, M06-L, PW6B95, SOGGA11, MP2, SCC-DFTB, PMO, and NDDO. - Abstract: We present benchmark calculations of nine selected points on potential energy surfaces describing proton transfer processes in three model systems, H5O2+, CH3OH…H+…OH2, and CH3COOH…OH2. The calculated relative energies of these geometries are compared to those calculated by various wave function and density functional methods, including the polarized molecular orbital (PMO) model recently developed in our research group and other semiempirical molecular orbital methods. We found that the SCC-DFTB and PMO methods (the latter available so far only for molecules consisting of only O and H and therefore only for the first of the three model systems) give results that are, on average, within 2 kcal/mol of the benchmark results. Other semiempirical molecular orbital methods have mean unsigned errors (MUEs) of 3–8 kcal/mol, local density functionals have MUEs in the range 0.7–3.7 kcal/mol, and hybrid density functionals have MUEs of only 0.3–1.0 kcal/mol, with the best density functional performance obtained by hybrid meta-GGAs, especially M06 and PW6B95.
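
    A minimal sketch of the mean unsigned error (MUE) statistic used above to rank the methods; the energy values are placeholders, not the paper's benchmark data:

        # Hedged sketch: MUE of a method's relative energies (kcal/mol)
        # against benchmark values for the same geometries.
        benchmark = [0.0, 5.2, 12.7]
        method = [0.0, 6.1, 11.9]
        mue = sum(abs(m - b) for m, b in zip(method, benchmark)) / len(benchmark)
        print(f"MUE = {mue:.2f} kcal/mol")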

  5. Assessment of Static Delamination Propagation Capabilities in Commercial Finite Element Codes Using Benchmark Analysis

    Science.gov (United States)

    Orifici, Adrian C.; Krueger, Ronald

    2010-01-01

    With capabilities for simulating delamination growth in composite materials becoming available, the need for benchmarking and assessing these capabilities is critical. In this study, benchmark analyses were performed to assess the delamination propagation simulation capabilities of the VCCT implementations in Marc™ and MD Nastran™. Benchmark delamination growth results for Double Cantilever Beam, Single Leg Bending and End Notched Flexure specimens were generated using a numerical approach. This numerical approach was developed previously, and involves comparing results from a series of analyses at different delamination lengths to a single analysis with automatic crack propagation. Specimens were analyzed with three-dimensional and two-dimensional models, and compared with previous analyses using Abaqus. The results demonstrated that the VCCT implementation in Marc™ and MD Nastran™ was capable of accurately replicating the benchmark delamination growth results and that the use of the numerical benchmarks offers advantages over benchmarking using experimental and analytical results.

  6. Benchmarking in European Higher Education: A Step beyond Current Quality Models

    Science.gov (United States)

    Burquel, Nadine; van Vught, Frans

    2010-01-01

    This paper presents the findings of a two-year EU-funded project (DG Education and Culture) "Benchmarking in European Higher Education", carried out from 2006 to 2008 by a consortium led by the European Centre for Strategic Management of Universities (ESMU), with the Centre for Higher Education Development, UNESCO-CEPES, and the Universidade de…

  7. Modelling and Simulation: An Overview

    NARCIS (Netherlands)

    M.J. McAleer (Michael); F. Chan (Felix); L. Oxley (Les)

    2013-01-01

    The papers in this special issue of Mathematics and Computers in Simulation cover the following topics: improving judgmental adjustment of model-based forecasts, whether forecast updates are progressive, on a constrained mixture vector autoregressive model, whether all estimators are bor

  8. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    International Nuclear Information System (INIS)

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the 235U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ), except for Cores 5 and 9, which fall below the benchmark eigenvalues but agree within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments
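
    A minimal sketch of how such dominant uncertainty contributions are typically combined into a total 1σ benchmark uncertainty, assuming independent components added in quadrature; the component names and magnitudes are illustrative, not the evaluation's values:

        # Hedged sketch: quadrature combination of independent 1-sigma
        # contributions to the experimental k_eff uncertainty.
        import math

        components = {"U-235 enrichment": 0.0020,
                      "moderator pebble impurities": 0.0015,
                      "reflector density and impurities": 0.0018}
        sigma_total = math.sqrt(sum(v ** 2 for v in components.values()))
        print(f"total 1-sigma on k_eff = {sigma_total:.4f}")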

  9. Monte Carlo simulations and benchmark measurements on the response of TE(TE) and Mg(Ar) ionization chambers in photon, electron and neutron beams

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Yi-Chun [Health Physics Division, Institute of Nuclear Energy Research, Taoyuan County, Taiwan (China); Huang, Tseng-Te, E-mail: huangtt@iner.gov.tw [Health Physics Division, Institute of Nuclear Energy Research, Taoyuan County, Taiwan (China); Liu, Yuan-Hao [Nuclear Science and Technology Development Center, National Tsing Hua University, Hsinchu City, Taiwan (China); Chen, Wei-Lin [Institute of Nuclear Engineering and Science, National Tsing Hua University, Hsinchu City, Taiwan (China); Chen, Yen-Fu [Atomic Energy Council, New Taipei City, Taiwan (China); Wu, Shu-Wei [Dept. of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan (China); Nievaart, Sander [Institute for Energy, Joint Research Centre, European Commission, Petten (Netherlands); Jiang, Shiang-Huei [Dept. of Engineering and System Science, National Tsing Hua University, Hsinchu, Taiwan (China)

    2015-06-01

    reached 7.8–16.5% below 120 kVp X-ray beams. In this study, we were especially interested in BNCT doses where low energy photon contribution is less to ignore, MCNP model is recognized as the most suitable to simulate wide photon–electron and neutron energy distributed responses of the paired ICs. Also, MCNP provides the best prediction of BNCT source adjustment by the detector’s neutron and photon responses. - Highlights: • We established optimal T2 & M2 paired ICs model in benchmark x, γ, e & n fields. • We used MCNP, EGSnrc, FLUKA or GEANT4 for IC current simulations. • In keV photon, MCNP underestimated M2 response, but accurately estimated T2. • On detector response, we commented for source component adjustment. • For BNCT, MCNP still provides the best prediction of n & photon responses.

  10. Benchmarking the RELAP5/MOD2.5 r-Θ model of an SRS [Savannah River Site] reactor to the 1989 L-Reactor tests

    International Nuclear Information System (INIS)

    Benchmarking calculations utilizing RELAP5/MOD2.5 with a detailed multi-dimensional r-θ model of the SRS L-Reactor will be presented. This benchmarking effort has provided much insight into the two-component two-phase behavior of the reactor under isothermal conditions with large quantities of air ingested from the moderator tank to the external loops. Initial benchmarking results have illuminated several model weaknesses which will be discussed in conjunction with proposed modeling changes. The benchmarking work is being performed to provide a fully qualified RELAP5 model for use in computing the system response to a double ended large break LOCA. 5 refs., 14 figs

  11. Efficient use of 3d environment models for mobile robot simulation and localization

    OpenAIRE

    Corominas Murtra, Andreu; Trulls, Eduard; Mirats-Tur, Josep M.; Sanfeliu, Alberto

    2010-01-01

    This paper provides a detailed description of a set of algorithms to efficiently manipulate 3D geometric models to compute physical constraints and range observation models, data that is usually required in real-time mobile robotics or simulation. Our approach uses a standard file format to describe the environment and processes the model using the openGL library, a widely-used programming interface for 3D scene manipulation. The paper also presents results on a test model for benchmarking, a...

  12. Vehicle dynamics modeling and simulation

    CERN Document Server

    Schramm, Dieter; Bardini, Roberto

    2014-01-01

    The authors examine in detail the fundamentals and mathematical descriptions of the dynamics of automobiles. In this context different levels of complexity will be presented, starting with basic single-track models up to complex three-dimensional multi-body models. A particular focus is on the process of establishing mathematical models on the basis of real cars and the validation of simulation results. The methods presented are explained in detail by means of selected application scenarios.

  13. Shielding Benchmark Computational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-09-17

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for the development of more accurate cross-section libraries, the development of radiation transport codes, and the building of accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling the more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC).

  14. Benchmark exercise

    International Nuclear Information System (INIS)

    The motivation to conduct this benchmark exercise, a summary of the results, and a discussion of and conclusions from the intercomparison are given in Section 5.2. This section contains further details of the results of the calculations and intercomparisons, illustrated by tables and figures, but avoiding repetition of Section 5.2 as far as possible. (author)

  15. Stochastic models: theory and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
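
    As a hedged illustration of a sample-generation algorithm of the kind the report describes (not one of the report's own algorithms), an exact-discretization sampler for an Ornstein-Uhlenbeck process:

        # Hedged sketch: exact one-step update for a zero-mean
        # Ornstein-Uhlenbeck process dx = -theta*x dt + sigma dW.
        import math, random

        def ou_path(theta, sigma, x0, dt, n, seed=None):
            rng = random.Random(seed)
            a = math.exp(-theta * dt)                             # decay over one step
            s = sigma * math.sqrt((1.0 - a * a) / (2.0 * theta))  # exact step stdev
            x, path = x0, [x0]
            for _ in range(n):
                x = a * x + s * rng.gauss(0.0, 1.0)
                path.append(x)
            return path

        sample = ou_path(theta=1.0, sigma=0.5, x0=0.0, dt=0.01, n=1000, seed=42)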

  16. Numerical calculation of the ADS target model with AQUA and FLUENT codes. IAHR (10th IWGAR) benchmark calculation

    International Nuclear Information System (INIS)

    A benchmark problem was proposed at the 10th meeting of IWGAR (International Working Group of Advanced Nuclear Reactors Thermal Hydraulics) by the Fluid Phenomena in Energy Exchanges Section of IAHR (International Association of Hydraulic Engineering and Research) to reproduce an experiment on target membrane structure cooling in an Accelerator Driven System. The benchmark calculation has been carried out with the AQUA and FLUENT codes to assess code validity for liquid metal thermal-hydraulics applications. From the comparison between the numerical analyses and the experiment, the following conclusions are drawn. The inlet flow rate at the distributing grid strongly affects the coolant temperature and the temperature pulsation near the membrane: the coolant temperature decreases and the pulsation decays rapidly as the flow rate toward the membrane center increases. Downstream of the distributing grid, the numerical results agree with the experimental data, except that the numerical analysis tends to overestimate the coolant temperature pulsation. The numerical results show that the decrease of the coolant temperature and the dissipation of the pulsation tend to be underestimated when the flow rate toward the membrane center increases. The FLUENT code underestimates the dissipation of the coolant temperature more than the AQUA code does, because it tends to overestimate the flow rate toward the membrane center, although both codes show the same tendency in the dissipation behavior. The turbulence model has little influence on the coolant behavior in this benchmark analysis, because the Prandtl (Pr) number of liquid metal is low and the turbulent flow is not sufficiently developed under the conditions of the experiment. (author)

  17. Plasma modelling and numerical simulation

    International Nuclear Information System (INIS)

    Plasma modelling is an exciting subject in which virtually all physical disciplines are represented. Plasma models combine the electromagnetic, statistical and fluid dynamical theories that have their roots in the 19th century with the modern insights concerning the structure of matter that were developed throughout the 20th century. The present cluster issue consists of 20 invited contributions, which are representative of the state of the art in plasma modelling and numerical simulation. These contributions provide an in-depth discussion of the major theories and modelling and simulation strategies, and their applications to contemporary plasma-based technologies. In this editorial review, we introduce and complement those papers by providing a bird's eye perspective on plasma modelling and discussing the historical context in which it has surfaced. (editorial review)

  18. Global and local scale flood discharge simulations in the Rhine River basin for flood risk reduction benchmarking in the Flagship Project

    Science.gov (United States)

    Gädeke, Anne; Gusyev, Maksym; Magome, Jun; Sugiura, Ai; Cullmann, Johannes; Takeuchi, Kuniyoshi

    2015-04-01

    A global flood risk assessment is a prerequisite for setting measurable global targets of the post-Hyogo Framework for Action (HFA) that mobilize international cooperation and national coordination towards disaster risk reduction (DRR), and it requires the establishment of a uniform flood risk assessment methodology on various scales. To address these issues, the International Flood Initiative (IFI) has initiated a Flagship Project, launched in 2013, to support flood risk reduction benchmarking at global, national and local levels. In the Flagship Project road map, it is planned to identify the original risk (1), to identify the reduced risk (2), and to facilitate the risk reduction actions (3). In order to achieve this goal at global, regional and local scales, international research collaboration is absolutely necessary, involving domestic and international institutes, academia and research networks such as UNESCO International Centres. The joint collaboration by ICHARM and BfG was the first attempt that produced the first step (1a) results on the flood discharge estimates, with inundation maps under way. As a result of this collaboration, we demonstrate the outcomes of the first step of the IFI Flagship Project to identify flood hazard in the Rhine river basin on the global and local scale. In our assessment, we utilized a distributed hydrological Block-wise TOP (BTOP) model on 20-km and 0.5-km scales with local precipitation and temperature input data between 1980 and 2004. We utilized the existing 20-km BTOP model, which is applied globally, and constructed the local scale 0.5-km BTOP model for the Rhine River basin. Both the calibrated 20-km and 0.5-km BTOP models had similar statistical performance and represented the observed flood river discharges, especially for the 1993 and 1995 floods. From the 20-km and 0.5-km BTOP simulations, the flood discharges of the selected return period were estimated using flood frequency analysis and were comparable to
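
    A minimal sketch of the flood frequency analysis step, assuming a method-of-moments Gumbel fit to annual maximum discharges (a common recipe; the discharge values are invented, not Rhine data):

        # Hedged sketch: Gumbel fit by moments and the T-year return-period quantile.
        import math, statistics

        def gumbel_quantile(annual_maxima, T):
            mean = statistics.mean(annual_maxima)
            std = statistics.stdev(annual_maxima)
            beta = std * math.sqrt(6.0) / math.pi   # scale parameter
            mu = mean - 0.5772 * beta               # location (Euler-Mascheroni constant)
            p = 1.0 - 1.0 / T                       # non-exceedance probability
            return mu - beta * math.log(-math.log(p))

        q100 = gumbel_quantile([3200, 4100, 2800, 5200, 3900, 4600, 3500], T=100)
        print(f"100-year discharge ~ {q100:.0f} m3/s")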

  19. Estimating Cross-Industry Cross-Country Models Using Benchmark Industry Characteristics

    OpenAIRE

    Ciccone, Antonio; Papaioannou, Elias

    2010-01-01

    International industry data permits testing whether the industry-specific impact of cross-country differences in institutions or policies is consistent with economic theory. Empirical implementation requires specifying the industry characteristics that determine impact strength. Most of the literature has been using US proxies of the relevant industry characteristics. We show that using industry characteristics in a benchmark country as a proxy of the relevant industry characteri...

  20. Estimating Cross-Industry Cross-Country Models Using Benchmark Industry Characteristics

    OpenAIRE

    Antonio Ciccone; Elias Papaioannou

    2010-01-01

    International industry data permits testing whether the industry-specific impact of cross-country differences in institutions or policies is consistent with economic theory. Empirical implementation requires specifying the industry characteristics that determine impact strength. Most of the literature has been using US proxies of the relevant industry characteristics. We show that using industry characteristics in a benchmark country as a proxy of the relevant industry characteristics can res...

  1. Sparticle Mass Hierarchies, Simplified Models from SUGRA Unification, and Benchmarks for LHC Run-II SUSY Searches

    CERN Document Server

    Francescone, David; Altunkaynak, Baris; Nath, Pran

    2014-01-01

    Sparticle mass hierarchies contain significant information regarding the origin and nature of supersymmetry breaking. The hierarchical patterns are severely constrained by electroweak symmetry breaking as well as by the astrophysical and particle physics data. They are further constrained by the Higgs boson mass measurement. The sparticle mass hierarchies can be used to generate simplified models consistent with the high scale models. In this work we consider supergravity models with universal boundary conditions for soft parameters at the unification scale as well as supergravity models with nonuniversalities and delineate the list of sparticle mass hierarchies for the five lightest sparticles. Simplified models can be obtained by a truncation of these, retaining a smaller set of lightest particles. The mass hierarchies and their truncated versions enlarge significantly the list of simplified models currently being used in the literature. Benchmarks for a variety of supergravity unified models appropriate fo...

  2. A Bootstrap Approach of Benchmarking Organizational Maturity Model of Software Product With Educational Maturity Model

    Directory of Open Access Journals (Sweden)

    R.Manjula

    2012-06-01

    Full Text Available Software product line engineering is an inter-disciplinary concept. It spans the dimensions of business, architecture, process, and organization. Similarly, education system engineering is also an inter-disciplinary concept, which spans the dimensions of academics, infrastructure, facilities, administration etc. Some of the potential benefits of this approach include continuous improvements in system quality and adherence to global standards. The increasing competency in the IT and educational sectors necessitates a process maturity evaluation methodology. Accordingly, this paper presents an organizational maturity model for evaluating the maturity of the multi-dimensional factors and attributes of an education system. Assessment questionnaires and a rating methodology comprise the framework of this educational maturity model. The objective and design of the questionnaires are to collect information about the education system engineering process from the multiple perspectives of academics, infrastructure, administration, facilities etc. Furthermore, we conducted one case study and reported the assessment results using the organizational maturity model presented in this paper.

  3. Model for Simulation Atmospheric Turbulence

    DEFF Research Database (Denmark)

    Lundtang Petersen, Erik

    1976-01-01

    A method that produces realistic simulations of atmospheric turbulence is developed and analyzed. The procedure makes use of a generalized spectral analysis, often called a proper orthogonal decomposition or the Karhunen-Loève expansion. A set of criteria, emphasizing a realistic appearance, a...... eigenfunctions and estimates of the distributions of the corresponding expansion coefficients. The simulation method utilizes the eigenfunction expansion procedure to produce preliminary time histories of the three velocity components simultaneously. As a final step, a spectral shaping procedure is then applied....... The method is unique in modeling the three velocity components simultaneously, and it is found that important cross-statistical features are reasonably well-behaved. It is concluded that the model provides a practical, operational simulator of atmospheric turbulence....
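
    A hedged sketch of the spectral-shaping idea in the final step: impose a target power spectrum on a random-phase sequence so that the inverse FFT yields a real time series. This is a generic construction, not the paper's Karhunen-Loève procedure, and all names are illustrative:

        # Hedged sketch: real-valued series with a prescribed one-sided
        # power spectrum, via random phases and an inverse real FFT.
        import numpy as np

        def shaped_series(target_psd, seed=0):
            rng = np.random.default_rng(seed)
            m = len(target_psd)
            phases = rng.uniform(0.0, 2.0 * np.pi, size=m)
            half = np.sqrt(target_psd) * np.exp(1j * phases)  # frequency bins 1..m
            spec = np.concatenate(([0.0 + 0.0j], half))       # DC bin zero -> zero-mean series
            return np.fft.irfft(spec)                         # Hermitian symmetry -> real output

        x = shaped_series([1.0 / k ** 2 for k in range(1, 65)])  # 128-sample series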

  4. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    to operate a boiler plant dynamically means that the boiler designs must be able to absorb any fluctuations in water level and temperature gradients resulting from the pressure change in the boiler. On the one hand a large water-/steam space may be required, i.e. to build the boiler as big as......This paper describes the modelling, simulating and optimizing including experimental verification as being carried out as part of a Ph.D. project being written resp. supervised by the authors. The work covers dynamic performance of both water-tube boilers and fire tube boilers. A detailed dynamic...... model of the boiler has been developed and simulations carried out by means of the Matlab integration routines. The model is prepared as a dynamic model consisting of both ordinary differential equations and algebraic equations, together formulated as a Differential-Algebraic-Equation system. Being able...

  5. Snowcover Simulation Model : A Review

    Directory of Open Access Journals (Sweden)

    Ashwagosha Ganju

    1999-12-01

    Full Text Available Numerical simulation of seasonal snowcover has attracted the interest of many scientists in the recent past. The present paper summarises chronologically the developments in the understanding of snow properties and discusses various modelling approaches towards simulating the snowpack numerically. The authors describe the evolution of snowcover and the intricate relationship between the evolving snowpack and the atmosphere. The governing equations that describe the evolution of snowcover have been discussed. The merits and limitations of each equation describing a single process have been explained. Modelling strategies adopted by various workers have been analysed, and lastly the requirements of a perfect model have been brought out. In the absence of complete answers to many other processes, a strategy for the development of an operational snowcover model has been discussed.

  6. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    This paper describes the modelling, simulating and optimizing including experimental verification as being carried out as part of a Ph.D. project being written resp. supervised by the authors. The work covers dynamic performance of both water-tube boilers and fire tube boilers. A detailed dynamic...... model of the boiler has been developed and simulations carried out by means of the Matlab integration routines. The model is prepared as a dynamic model consisting of both ordinary differential equations and algebraic equations, together formulated as a Differential-Algebraic-Equation system. Being able...... to operate a boiler plant dynamically means that the boiler designs must be able to absorb any fluctuations in water level and temperature gradients resulting from the pressure change in the boiler. On the one hand a large water-/steam space may be required, i.e. to build the boiler as big as...

  7. Modelling, simulating and optimizing Boilers

    DEFF Research Database (Denmark)

    Sørensen, Kim; Condra, Thomas Joseph; Houbak, Niels

    2003-01-01

    This paper describes the modelling, simulating and optimizing including experimental verification as being carried out as part of a Ph.D. project being written resp. supervised by the authors. The work covers dynamic performance of both water-tube boilers and fire tube boilers. A detailed dynamic model...... of the boiler has been developed and simulations carried out by means of the Matlab integration routines. The model is prepared as a dynamic model consisting of both ordinary differential equations and algebraic equations, together formulated as a Differential-Algebraic-Equation system. Being able...... to operate a boiler plant dynamically means that the boiler designs must be able to absorb any fluctuations in water level and temperature gradients resulting from the pressure change in the boiler. On the one hand a large water-/steam space may be required, i.e. to build the boiler as big as possible...
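
    A minimal sketch of the "differential plus algebraic" structure in Python (the thesis used Matlab): a lumped drum energy balance as the ODE, with saturation pressure recovered algebraically from temperature. All parameters and the Antoine-type fit are illustrative assumptions, not the thesis model:

        # Hedged sketch: lumped boiler drum energy balance with an algebraic
        # saturation-pressure relation; all numbers are placeholders.
        import math
        from scipy.integrate import solve_ivp

        M_w, c_p = 5.0e3, 4.5e3   # water inventory [kg], heat capacity [J/(kg K)]

        def p_sat(T):
            # rough Antoine-type fit for water: ln p[Pa] = A - B / (T - C)
            return math.exp(23.196 - 3816.44 / (T - 46.13))

        def drum(t, y, Q_in=2.0e6, m_steam=0.8, h_fg=2.0e6):
            (T,) = y
            dTdt = (Q_in - m_steam * h_fg) / (M_w * c_p)   # net heating of the inventory
            return [dTdt]

        sol = solve_ivp(drum, (0.0, 600.0), [453.0])       # 10 minutes from 453 K
        T_end = sol.y[0, -1]
        print(f"T = {T_end:.1f} K, p_sat = {p_sat(T_end) / 1e5:.2f} bar")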

  8. A New Infrared Benchmark for Testing the Predictive Capability of Models

    Science.gov (United States)

    Gero, P. J.; Dykema, J. A.; Leroy, S. S.; Anderson, J. G.

    2006-12-01

    Current climate models have large uncertainties that preclude credible climate prediction on decadal scales. Better observations are needed to improve the representation of feedbacks. We propose an SI-traceable high spectral resolution infrared satellite instrument as a component of a climate benchmark system needed to test climate models according to their predictive capability. The longwave forcing of the climate, the climate's response, and the longwave feedbacks involved in that response bear characteristic signatures in a time series of thermal infrared spectra. The longwave water vapor feedback, cloud feedback and temperature change are uniquely discernable. For this reason, high spectral resolution thermal infrared time series should provide powerful constraints for climate models. Initial work shows that a 10 year timeseries will begin to yield sufficient information to arrive at such constraints. Current operational and planned satellite instruments operating in the infrared do not have the ability to observe trends in the climate system to the required accuracy over the next decades. Problems with instrument calibration have been the main factor limiting the reliability of long term satellite records and many in situ measurements. This situation can be improved by appealing to the best measurement practices developed by the metrological community. The Système Internationale (SI) units of measurement are linked to fundamental physical properties of matter, and can be realized anywhere in the world without bias. By doing so, one can make an absolutely accurate observation to within a specified uncertainty. We have designed and built a prototype infrared satellite instrument that exploits the metrological foundation used to define the SI units. The primary objective of this instrument is to achieve the accuracy required to constrain our ability to forecast climate change. The instrument operates in the 200-1750 cm-1 range at 0.6 cm-1 resolution (unapodized

  9. Five model upgrade of the Indian Point-2 simulator

    International Nuclear Information System (INIS)

    The Consolidated Edison (Con Edison) Company of New York is continuing to upgrade its Indian Point-2 (IP-2) nuclear power plant simulator by replacing existing models with higher fidelity simulation models. Phase I of the project added a Gould/SEL32 Concept Series computer to the existing dual PDP 11/45 computer system. The scope of Phase II is to replace five NSSS models. The five models are the reactor core, reactor coolant system, pressurizer, reactor coolant pumps, and steam generators. The addition of these models was aimed at substantially enhancing the quality of training for both normal operation and accident scenarios involving degraded plant conditions. The five model package for Phase II is being provided by Combustion Engineering (C-E). The package is comprised of mature software which has been implemented and verified on other training simulators. These models have produced favorable comparisons to both actual plant data and best estimate analyses in addressing ANSI-ANS-3.5-1985 performance requirements. During the integration and final testing phases of the project, similar benchmark testing will be performed on the IP-2 simulator

  10. Benchmark experiments with global climate models applicable to extra-solar gas giant planets in the shallow atmosphere approximation

    CERN Document Server

    Bending, V L; Kolb, U

    2012-01-01

    The growing field of exoplanetary atmospheric modelling has seen little work on standardised benchmark tests for its models, limiting understanding of the dependence of results on specific models and conditions. With spatially resolved observations as yet difficult to obtain, such a test is invaluable. Although an intercomparison test for models of tidally locked gas giant planets has previously been suggested and carried out, the data provided were limited in terms of comparability. Here, the shallow PUMA model is subjected to such a test, and detailed statistics produced to facilitate comparison, with both time means and the associated standard deviations displayed, removing the time dependence and providing a measure of the variability. Model runs have been analysed to determine the variability between resolutions, and the effect of resolution on the energy spectra studied. Superrotation is a robust and reproducible feature at all resolutions.

  11. Fundamental M-dwarf parameters from high-resolution spectra using PHOENIX ACES models: I. Parameter accuracy and benchmark stars

    CERN Document Server

    Passegger, Vera Maria; Reiners, Ansgar

    2016-01-01

    M-dwarf stars are the most numerous stars in the Universe; they span a wide range in mass and are in the focus of ongoing and planned exoplanet surveys. To investigate and understand their physical nature, detailed spectral information and accurate stellar models are needed. We use a new synthetic atmosphere model generation and compare model spectra to observations. To test the model accuracy, we compared the models to four benchmark stars with atmospheric parameters for which independent information from interferometric radius measurements is available. We used $\chi^2$-based methods to determine parameters from high-resolution spectroscopic observations. Our synthetic spectra are based on the new PHOENIX grid that uses the ACES description for the equation of state. This is a model generation expected to be especially suitable for the low-temperature atmospheres. We identified suitable spectral tracers of atmospheric parameters and determined the uncertainties in $T_{\rm eff}$, $\log g$, and [Fe/H] resul...
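
    A minimal sketch of the $\chi^2$-based grid comparison described above, with a fabricated stand-in for the PHOENIX model grid (the function, grids and noise levels here are illustrative assumptions, not the paper's setup):

        # Hedged sketch: chi^2 over a small (Teff, logg, [Fe/H]) grid against
        # a noisy "observed" spectrum; fake_model stands in for PHOENIX spectra.
        import numpy as np

        rng = np.random.default_rng(1)
        wave = np.linspace(8000.0, 8800.0, 500)   # wavelength grid [Angstrom]

        def fake_model(teff, logg, feh):
            depth = 0.3 + 0.02 * feh - 0.01 * (logg - 5.0)
            width = 2.0 + teff / 3000.0
            return 1.0 - depth * np.exp(-((wave - 8542.0) / width) ** 2)

        observed = fake_model(3300.0, 5.0, 0.0) + rng.normal(0.0, 0.005, wave.size)

        grid = [(teff, logg, feh)
                for teff in (3100.0, 3300.0, 3500.0)
                for logg in (4.5, 5.0, 5.5)
                for feh in (-0.5, 0.0, 0.5)]
        chi2 = {p: float(np.sum(((observed - fake_model(*p)) / 0.005) ** 2)) for p in grid}
        print("best fit (Teff, logg, [Fe/H]):", min(chi2, key=chi2.get))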

  12. The FEBEX benchmark test: case definition and comparison of modelling approaches

    OpenAIRE

    Alonso Pérez de Agreda, Eduardo; Alcoverro Bassols, Jordi; Coste, F.; Selvadurai, A. P. S.; Tsang, C.-F.; Malinsky, L.; Merrien-Soukatchoff, V.; Kadiri, I.; T. Nowak; Shao, H.; Nguyen, T.S.; Armand, G.; Sobolik, S.R.; Itamura, M.; Stone, C.M.

    2005-01-01

    The FEBEX (Full-scale Engineered Barriers Experiment in Crystalline Host Rock) “in situ” test was installed at the Grimsel Test Site underground laboratory (Switzerland) and is a near-to-real-scale simulation of the Spanish reference concept of deep geological storage in crystalline host rock. A modelling exercise, aimed at predicting field behaviour, was divided into three parts. In Part A, predictions for both the total water inflow to the tunnel as well as the water pressure chang...

  13. A simulation study on proton computed tomography (CT) stopping power accuracy using dual energy CT scans as benchmark

    DEFF Research Database (Denmark)

    Hansen, David Christoffer; Seco, Joao; Sørensen, Thomas Sangild;

    2015-01-01

    development) have both been proposed as methods for obtaining patient stopping power maps. The purpose of this work was to assess the accuracy of proton CT using dual energy CT scans of phantoms to establish reference accuracy levels. Material and methods. A CT calibration phantom and an abdomen cross section...... phantom containing inserts were scanned with dual energy and single energy CT with a state-of-the-art dual energy CT scanner. Proton CT scans were simulated using Monte Carlo methods. The simulations followed the setup used in current prototype proton CT scanners and included realistic modeling of...... detectors and the corresponding noise characteristics. Stopping power maps were calculated for all three scans, and compared with the ground truth stopping power from the phantoms. Results. Proton CT gave slightly better stopping power estimates than the dual energy CT method, with root mean square errors...

  14. Surrogate model approach for improving the performance of reactive transport simulations

    Science.gov (United States)

    Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris

    2016-04-01

    Reactive transport models can serve a large number of important geoscientific applications involving underground resources in industry and scientific research. It is common for a reactive transport simulation to consist of at least two coupled simulation models. First is a hydrodynamics simulator responsible for simulating the flow of groundwater and the transport of solutes. Hydrodynamics simulators are a well-established technology and can be very efficient. When hydrodynamics simulations are performed without coupled geochemistry, their spatial geometries can span millions of elements even when running on desktop workstations. Second is a geochemical simulation model that is coupled to the hydrodynamics simulator. Geochemical simulation models are much more computationally costly. This is a problem that makes reactive transport simulations spanning millions of spatial elements very difficult to achieve. To address this problem we propose to replace the coupled geochemical simulation model with a surrogate model. A surrogate is a statistical model created to include only the necessary subset of simulator complexity for a particular scenario. To demonstrate the viability of such an approach we tested it on a popular reactive transport benchmark problem involving 1D calcite transport. This is a published benchmark problem (Kolditz, 2012) for simulation models, and for this reason we use it to test the surrogate model approach. To do this we tried a number of statistical models available through the caret and DiceEval packages for R as surrogate models. These were trained on a randomly sampled subset of the input-output data from the geochemical simulation model used in the original reactive transport simulation. For validation we use the surrogate model to predict the simulator output on the part of the sampled input data that was not used for training the statistical model. For this scenario we find that the multivariate adaptive regression splines
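
    The study used statistical models from R's caret and DiceEval packages; purely to illustrate the train/validate workflow described, here is a sketch in Python with a generic random-forest regressor standing in for those models and synthetic data standing in for simulator samples:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import mean_squared_error
        from sklearn.model_selection import train_test_split

        # Stand-ins for sampled input-output pairs from a geochemical simulator.
        rng = np.random.default_rng(1)
        X = rng.uniform(size=(5000, 4))
        y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + 0.01 * rng.normal(size=5000)

        # Train on a random subset, hold out the rest for validation.
        X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
        surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
        surrogate.fit(X_tr, y_tr)

        # Validate by predicting the simulator output on the held-out inputs.
        rmse = mean_squared_error(y_val, surrogate.predict(X_val)) ** 0.5
        print(f"validation RMSE: {rmse:.4f}")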

  15. Effects of Secondary Circuit Modeling on Results of Pressurized Water Reactor Main Steam Line Break Benchmark Calculations with New Coupled Code TRAB-3D/SMABRE

    International Nuclear Information System (INIS)

    All three exercises of the Organization for Economic Cooperation and Development/Nuclear Regulatory Commission pressurized water reactor main steam line break (PWR MSLB) benchmark were calculated at VTT, the Technical Research Centre of Finland. For the first exercise, the plant simulation with point-kinetic neutronics, the thermal-hydraulics code SMABRE was used. The second exercise was calculated with the three-dimensional reactor dynamics code TRAB-3D, and the third exercise with the combination TRAB-3D/SMABRE. VTT has over ten years' experience of coupling neutronic and thermal-hydraulic codes, but this benchmark was the first time these two codes, both developed at VTT, were coupled together. The coupled code system is fast and efficient; the total computation time of the 100-s transient in the third exercise was 16 min on a modern UNIX workstation. The results of all the exercises are similar to those of the other participants. In order to demonstrate the effect of secondary circuit modeling on the results, three different cases were calculated. In case 1 there is no phase separation in the steam lines and no flow reversal in the aspirator. In case 2 flow reversal in the aspirator is allowed, but there is no phase separation in the steam lines. Finally, in case 3 the drift-flux model is used for phase separation in the steam lines, but aspirator flow reversal is not allowed. These two modeling variations alone cover a remarkably broad range of results: the maximum power level reached after the reactor trip varies from 534 to 904 MW, and the time of the power maximum varies by close to 30 s. Compared to the total calculated transient time of 100 s, the effect of the secondary side modeling is extremely important.

  16. The RAMI On-line Model Checker (ROMC): A web-based benchmarking facility for canopy reflectance models

    OpenAIRE

    Widlowski, J. -L.; Robustelli, M.; Disney, M.; Gastellu-Etchegorry, Jean-Philippe; T. Lavergne; Lewis, P.; North, P.R.J.; Pinty, B.; Thompson, R.; Verstraete, M. M.

    2008-01-01

    The exploitation of global Earth Observation data hinges increasingly on physically-based radiative transfer (RT) models. These models simulate the interactions of solar radiation within a given medium (e.g., clouds, plant canopies) and are used to generate look-up-tables that are embedded into quantitative retrieval algorithms, such as those delivering the operational surface products for MODIS, MISR and MERIS. An assessment of the quality of canopy RT models thus appears essential if accura...

  17. Using in-situ observations of atmospheric water vapor isotopes to benchmark and isotope-enabled General Circulation Models and improve ice core paleo-climate reconstruction

    Science.gov (United States)

    Steen-Larsen, Hans Christian; Sveinbjörnsdottir, Arny; Masson-Delmotte, Valerie; Werner, Martin; Risi, Camille; Yoshimura, Kei

    2016-04-01

    We have since 2010 carried out in-situ continuous water vapor isotope observations on top of the Greenland Ice Sheet (3 seasons at NEEM), in Svalbard (1 year), in Iceland (4 years), and in Bermuda (4 years). This expansive dataset of high-accuracy, high-precision measurements of δ18O, δD, and the d-excess allows us to validate and benchmark the treatment of the atmospheric hydrological cycle's processes in General Circulation Models using simulations nudged to reanalysis products. Recent findings from both Antarctica and Greenland have documented strong interaction between the snow surface isotopes and the near-surface atmospheric water vapor isotopes on diurnal to synoptic time scales. In fact, it has been shown that the snow surface isotopes take up the synoptically driven atmospheric water vapor isotopic signal in between precipitation events, erasing the precipitation isotope signal in the surface snow. This highlights the importance of using General or Regional Climate Models that are able to accurately simulate the atmospheric water vapor isotopic composition in order to understand and interpret the ice core isotope signal. With this in mind we have used three isotope-enabled General Circulation Models (isoGSM, ECHAM5-wiso, and LMDZiso) nudged to reanalysis products. We have compared the simulations of daily mean isotope values directly with our in-situ observations. This has allowed us to characterize the variability of the isotopic composition in the models and compare it to our observations. We have specifically focused on the d-excess in order to characterize why both its mean and its variability are significantly lower than in our observations. We argue that using water vapor isotopes to benchmark General Circulation Models offers an excellent tool for improving the treatment and parameterization of the atmospheric hydrological cycle. Recent studies have documented a very large inter-model dispersion in the treatment of the Arctic water cycle under a future global
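
    The d-excess referred to above is conventionally defined as d = δD − 8·δ18O; a minimal sketch of comparing its mean and variability between a model and observations (the values below are synthetic stand-ins, not the authors' data):

        import numpy as np

        def d_excess(dD, d18O):
            """Deuterium excess, d = dD - 8 * d18O (per mil)."""
            return dD - 8.0 * d18O

        # Stand-ins for one year of daily-mean vapor isotopes (per mil).
        rng = np.random.default_rng(2)
        obs_dD, obs_d18O = rng.normal(-150, 15, 365), rng.normal(-21, 2.0, 365)
        mod_dD, mod_d18O = rng.normal(-152, 10, 365), rng.normal(-21, 1.5, 365)

        d_obs = d_excess(obs_dD, obs_d18O)
        d_mod = d_excess(mod_dD, mod_d18O)

        # Compare the mean and the variability, the two quantities for which
        # the models are reported to fall below the observations.
        print(f"mean d-excess: obs {d_obs.mean():6.1f}, model {d_mod.mean():6.1f}")
        print(f"std  d-excess: obs {d_obs.std():6.1f}, model {d_mod.std():6.1f}")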

  18. International benchmark for coupled codes and uncertainty analysis in modelling: switching-Off of one of the four operating main circulation pumps at nominal reactor power at NPP Kalinin unit 3

    International Nuclear Information System (INIS)

    The paper briefly describes the specification of an international NEA/OECD benchmark based on measured plant data. During the commissioning tests for nominal power at NPP Kalinin Unit 3, many measurements of neutronic and thermal-hydraulic parameters were carried out in the reactor pressure vessel and the primary and secondary circuits. One of the measured data sets, for the transient 'Switching-off of one Main Circulation Pump (MCP) at nominal power', has been chosen for validation of coupled thermal-hydraulic and neutron-kinetic system codes and, additionally, for performing uncertainty analyses as a part of the NEA/OECD Uncertainty Analysis in Modeling Benchmark. The benchmark is open to all countries and institutions. The experimental data and the final specification with the cross section libraries will be provided to the participants by NEA/OECD only after official declaration of participation in the benchmark and delivery of the simulated results of the transient for comparison. (Author)

  19. Benchmarking GEANT4 nuclear models for carbon-therapy at 95 MeV/A

    CERN Document Server

    Dudouet, J; Durand, D; Labalme, M

    2013-01-01

    In carbon-therapy, the interaction of the incoming beam with human tissues may lead to the production of a large amount of nuclear fragments and secondary light particles. An accurate estimation of the biological dose deposited into the tumor and the surrounding healthy tissues thus requires sophisticated simulation tools based on nuclear reaction models. The validity of such models requires intensive comparisons with as many sets of experimental data as possible. Up to now, a rather limited set of double differential carbon fragmentation cross sections have been measured in the energy range used in hadrontherapy (up to 400 MeV/A). However, new data have been recently obtained at intermediate energy (95 MeV/A). The aim of this work is to compare the reaction models embedded in the GEANT4 Monte Carlo toolkit with these new data. The strengths and weaknesses of each tested model, i.e. G4BinaryLightIonReaction, G4QMDReaction and INCL++, coupled to two different de-excitation models, i.e. the generalized evaporat...

  20. Benchmarking nuclear models of FLUKA and GEANT4 for carbon ion therapy

    CERN Document Server

    Bohlen, T T; Quesada, J M; Cerutti, F; Gudowska, I; Ferrari, A; Mairani, A

    2010-01-01

    As carbon ions, at therapeutic energies, penetrate tissue, they undergo inelastic nuclear reactions and give rise to significant yields of secondary fragment fluences. An accurate prediction of the fluences resulting from the primary carbon interactions in the patient's body is therefore necessary in order to precisely simulate the spatial dose distribution and the resulting biological effect. In this paper, the performance of the nuclear fragmentation models of the Monte Carlo transport codes FLUKA and GEANT4 is investigated in tissue-like media and for an energy regime relevant for therapeutic carbon ions. The ability of these Monte Carlo codes to reproduce experimental data of charge-changing cross sections and integral and differential yields of secondary charged fragments is evaluated. For the fragment yields, the main focus is on the consideration of experimental approximations and uncertainties such as the energy measurement by time-of-flight. For GEANT4, the hadronic models G4BinaryLightIonReaction a...

  1. Multiscale Stochastic Simulation and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    James Glimm; Xiaolin Li

    2006-01-10

    Acceleration driven instabilities of fluid mixing layers include the classical cases of Rayleigh-Taylor instability, driven by a steady acceleration, and Richtmyer-Meshkov instability, driven by an impulsive acceleration. Our program starts with high resolution methods of numerical simulation of two (or more) distinct fluids, continues with analytic analysis of these solutions, and proceeds to the derivation of averaged equations. A striking achievement has been the systematic agreement we obtained between simulation and experiment by using a high resolution numerical method and improved physical modeling, with surface tension. Our study is accompanied by analysis using stochastic modeling and averaged equations for the multiphase problem. We have quantified the error and uncertainty using statistical modeling methods.

  2. Validation process of simulation model

    International Nuclear Information System (INIS)

    A methodology for the empirical validation of any detailed simulation model is presented. This kind of validation is always tied to an experimental case. Empirical validation has a residual sense, because the conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides the detection of failures in the simulation model, and it can also be used as a guide in the design of subsequent experiments. Three steps can be clearly differentiated: Sensitivity analysis, which can be made with a DSA, differential sensitivity analysis, and with a MCSA, Monte-Carlo sensitivity analysis. Finding the optimal domains of the input parameters; a procedure based on Monte-Carlo methods and cluster techniques has been developed to find the optimal domains of these parameters. Residual analysis, made in both the time domain and the frequency domain, using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a thermal simulation model of buildings is presented, studying the behavior of building components in a test cell of the LECE laboratory at CIEMAT (Spain). (Author) 17 refs
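
    The residual analysis step can be sketched in a few lines: compute the residual between measurement and simulation, then examine its autocorrelation (time domain) and periodogram (frequency domain); the series below are synthetic stand-ins:

        import numpy as np

        # Stand-ins for a measured and a simulated time series.
        rng = np.random.default_rng(3)
        t = np.arange(1024)
        measured = 20 + 2 * np.sin(2 * np.pi * t / 128) + 0.3 * rng.normal(size=t.size)
        simulated = 20 + 2 * np.sin(2 * np.pi * t / 128)

        r = measured - simulated
        r = r - r.mean()

        # Time domain: for a good model the residual should resemble white
        # noise, i.e. autocorrelation near zero for all lags > 0.
        acf = np.correlate(r, r, mode="full")[r.size - 1:] / (r.var() * r.size)

        # Frequency domain: peaks in the periodogram reveal dynamics the
        # model failed to capture at specific frequencies.
        psd = np.abs(np.fft.rfft(r)) ** 2 / r.size
        freq = np.fft.rfftfreq(r.size)

        print("lag-1 autocorrelation:", round(acf[1], 3))
        print("dominant residual frequency:", freq[1 + np.argmax(psd[1:])])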

  3. Benchmark studies of thermal jet mixing in SFRs using a two-jet model

    Energy Technology Data Exchange (ETDEWEB)

    Omotowa, O. A.; Skifton, R.; Tokuhiro, A. [Univ. of Idaho - Idaho Falls, 1776 Science Center Drive, Idaho Falls, ID 83401 (United States)

    2012-07-01

    To guide the modeling, simulations and design of Sodium Fast Reactors (SFRs), we explore and compare the predictive capabilities of two numerical solvers COMSOL and OpenFOAM in the thermal jet mixing of two buoyant jets typical of the outlet flow from a SFR tube bundle. This process will help optimize on-going experimental efforts at obtaining high resolution data for V and V of CFD codes as anticipated in next generation nuclear systems. Using the k-ε turbulence models of both codes as reference, their ability to simulate the turbulence behavior in similar environments was first validated for single jet experimental data reported in literature. This study investigates the thermal mixing of two parallel jets having a temperature difference (hot-to-cold) ΔThc = 5 °C, 10 °C and velocity ratios Uc/Uh = 0.5, 1. Results of the computed turbulent quantities due to convective mixing and the variations in flow field along the axial position are presented. In addition, this study also evaluates the effect of spacing ratio between jets in predicting the flow field and jet behavior in near and far fields. (authors)

  4. Benchmark studies of thermal jet mixing in SFRs using a two-jet model

    International Nuclear Information System (INIS)

    To guide the modeling, simulations and design of Sodium Fast Reactors (SFRs), we explore and compare the predictive capabilities of two numerical solvers COMSOL and OpenFOAM in the thermal jet mixing of two buoyant jets typical of the outlet flow from a SFR tube bundle. This process will help optimize on-going experimental efforts at obtaining high resolution data for V and V of CFD codes as anticipated in next generation nuclear systems. Using the k-ε turbulence models of both codes as reference, their ability to simulate the turbulence behavior in similar environments was first validated for single jet experimental data reported in literature. This study investigates the thermal mixing of two parallel jets having a temperature difference (hot-to-cold) ΔThc = 5 °C, 10 °C and velocity ratios Uc/Uh = 0.5, 1. Results of the computed turbulent quantities due to convective mixing and the variations in flow field along the axial position are presented. In addition, this study also evaluates the effect of spacing ratio between jets in predicting the flow field and jet behavior in near and far fields. (authors)

  5. Assessment of Molecular Modeling & Simulation

    Energy Technology Data Exchange (ETDEWEB)

    None

    2002-01-03

    This report reviews the development and applications of molecular and materials modeling in Europe and Japan in comparison to those in the United States. Topics covered include computational quantum chemistry, molecular simulations by molecular dynamics and Monte Carlo methods, mesoscale modeling of material domains, molecular-structure/macroscale property correlations like QSARs and QSPRs, and related information technologies like informatics and special-purpose molecular-modeling computers. The panel's findings include the following: The United States leads this field in many scientific areas. However, Canada has particular strengths in DFT methods and homogeneous catalysis; Europe in heterogeneous catalysis, mesoscale, and materials modeling; and Japan in materials modeling and special-purpose computing. Major government-industry initiatives are underway in Europe and Japan, notably in multi-scale materials modeling and in development of chemistry-capable ab-initio molecular dynamics codes.

  6. Benchmarking Exercises To Validate The Updated ELLWF GoldSim Slit Trench Model

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, G. A.; Hiergesell, R. A.

    2013-11-12

    The Savannah River National Laboratory (SRNL) results of the 2008 Performance Assessment (PA) (WSRC, 2008) sensitivity/uncertainty analyses conducted for the trenches located in the E-Area Low-Level Waste Facility (ELLWF) were subject to review by the United States Department of Energy (U.S. DOE) Low-Level Waste Disposal Facility Federal Review Group (LFRG) (LFRG, 2008). LFRG comments were generally approving of the use of probabilistic modeling in GoldSim to support the quantitative sensitivity analysis. A recommendation was made, however, that the probabilistic models be revised and updated to bolster their defensibility. SRS committed to addressing those comments and, in response, contracted with Neptune and Company to rewrite the three GoldSim models. The initial portion of this work, development of the Slit Trench (ST), Engineered Trench (ET) and Components-in-Grout (CIG) trench GoldSim models, has been completed. The work described in this report utilizes these revised models to test and evaluate the results against the 2008 PORFLOW model results. This was accomplished by first performing a rigorous code-to-code comparison of the PORFLOW and GoldSim codes and then performing a deterministic comparison of the two-dimensional (2D) unsaturated zone and three-dimensional (3D) saturated zone PORFLOW Slit Trench models against results from the one-dimensional (1D) GoldSim Slit Trench model. The results of the code-to-code comparison indicate that when the mechanisms of radioactive decay, partitioning of contaminants between solid and fluid, implementation of specific boundary conditions and the imposition of solubility controls were all tested using identical flow fields, GoldSim and PORFLOW produce nearly identical results. It is also noted that GoldSim has an advantage over PORFLOW in that it simulates all radionuclides simultaneously - thus avoiding a potential problem as demonstrated in the Case Study (see Section 2.6). Hence, it was concluded that the follow

  7. Nea Benchmarks

    International Nuclear Information System (INIS)

    The accuracy of the two-energy-group diffusion equations is quite good for common/typical transients. However, better solutions should be obtained with more sophisticated techniques, including Monte Carlo and detailed neutron transport or multi-group diffusion equations and multidimensional cross section tables, to get a more realistic flux distribution. Constitutive models used to determine the evolution of the two-phase mixture, being mostly developed under steady state conditions, should be better adapted to the simulation of transient situations, with main reference to empirical correlations connected with the feedback between thermal-hydraulics and kinetics (e.g. the sub-cooled boiling heat transfer coefficient). 3-D nodalizations for the core or the vessel regions should be qualified based on proper sets of experimental data, as needed for Best Estimate simulation of phenomena like pressure wave propagation and flow redistribution in the core. The importance of and the need for uncertainty evaluations of coupled code predictions should be clear based on a number of reasons discussed in this work; therefore, uncertainty must be connected with any prediction. The availability of proper computational resources should encourage the modeling of individual assemblies: this appears possible within the neutron kinetics area and may require some effort in the thermal-hydraulic area, namely when a large number of channels constitutes the reactor core. Care is needed when specifying the spatial mapping between thermal-hydraulic and kinetic nodes of the core models, especially when asymmetric core behavior is expected or when phenomena affecting a single or a limited number of fuel assemblies are important. Finally, industry and the regulatory bodies should become fully aware of the capabilities and the limitations of coupled code techniques. Nevertheless, further and continuous assessment studies and investigations should be performed to enhance the degree of the Best Estimate

  8. Harmonic Oscillator in Heat Bath: Exact simulation of time-lapse-recorded data, exact analytical benchmark statistics

    CERN Document Server

    Norrelykke, Simon F

    2011-01-01

    The stochastic dynamics of the damped harmonic oscillator in a heat bath is simulated with an algorithm that is exact for time steps of arbitrary size. Exact analytical results are given for correlation functions and power spectra in the form they acquire when computed from experimental time-lapse recordings. Three applications are discussed: (i) Effects of finite sampling rate and sampling time, described exactly here, are similar for other stochastic dynamical systems, e.g. motile micro-organisms and their time-lapse-recorded trajectories. (ii) The same statistics are satisfied by any experimental system to the extent it is interpreted as a damped harmonic oscillator at finite temperature, such as an AFM cantilever. (iii) Three other models of fundamental interest are limiting cases of the damped harmonic oscillator at finite temperature; it consequently bridges their differences and describes effects of finite sampling rate and sampling time for these models as well. Finally, we give a brief discussion of nondimensio...
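
    The idea of an update rule that is exact for any time step can be illustrated on the simpler overdamped limit, the Ornstein-Uhlenbeck process; the paper's damped-oscillator algorithm is a two-variable analogue of this sketch:

        import numpy as np

        def simulate_ou_exact(n_steps, dt, tau, sigma_eq, x0=0.0, seed=0):
            """Exact time-lapse simulation of an Ornstein-Uhlenbeck process.

            The update x -> c*x + sigma_eq*sqrt(1 - c^2)*xi, with
            c = exp(-dt/tau) and xi standard normal, reproduces the exact
            transition density for any dt; it is not an Euler scheme.
            """
            rng = np.random.default_rng(seed)
            c = np.exp(-dt / tau)
            amp = sigma_eq * np.sqrt(1.0 - c * c)
            x = np.empty(n_steps)
            x[0] = x0
            for i in range(1, n_steps):
                x[i] = c * x[i - 1] + amp * rng.normal()
            return x

        x = simulate_ou_exact(n_steps=10_000, dt=0.5, tau=1.0, sigma_eq=1.0)
        print("sample variance:", x.var())  # ~ sigma_eq**2 at equilibrium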

  9. Benchmark exercise on SBLOCA experiment of PWR PACTEL facility

    International Nuclear Information System (INIS)

    Highlights: • PWR PACTEL, the facility with EPR type steam generators, is introduced. • The focus of the benchmark was on the analyses of the SBLOCA test with PWR PACTEL. • System codes with several modeling approaches were utilized to analyze the test. • Proper consideration of heat and pressure losses improves simulation remarkably. - Abstract: The PWR PACTEL benchmark exercise was organized in Lappeenranta, Finland by Lappeenranta University of Technology. The benchmark consisted of two phases, i.e. a blind and an open calculation task. Seven organizations from the Czech Republic, Germany, Italy, Sweden and Finland participated in the benchmark exercise, and four system codes were utilized in the benchmark simulation tasks. Two workshops were organized for launching and concluding the benchmark, the latter of which involved presentations of the calculation results as well as discussions on the related modeling issues. The experiment chosen for the benchmark was a small break loss of coolant accident experiment performed to study the natural circulation behavior over a continuous range of primary side coolant inventories. For the blind calculation task, the detailed facility descriptions, the measured pressure and heat losses as well as the results of a short characterizing transient were provided. For the open calculation task, the experiment results were released. According to the simulation results, the benchmark experiment was quite challenging to model. Several improvements were found and utilized, especially for the open calculation case. The issues concerned model construction, the impact of heat and pressure losses, interpreting measured and calculated data, the effect of non-condensable gas, testing several condensation and CCFL correlations, sensitivity studies, as well as break modeling. There is a clear need for user guidelines or for a collection of best practices in modeling for every code. The benchmark offered a unique opportunity to test

  10. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm...... survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions......, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...

  11. Repository simulation model: Final report

    International Nuclear Information System (INIS)

    This report documents the application of computer simulation to the design analysis of the nuclear waste repository's waste handling and packaging operations. The Salt Repository Simulation Model was used to evaluate design alternatives during the conceptual design phase of the Salt Repository Project. Code development and verification were performed by the Office of Nuclear Waste Isolation (ONWI). The focus of this report is to relate the experience gained during the development and application of the Salt Repository Simulation Model to future repository design phases. Design of the repository's waste handling and packaging systems will require sophisticated analysis tools to evaluate complex operational and logistical design alternatives. Selection of these design alternatives in the Advanced Conceptual Design (ACD) and License Application Design (LAD) phases must be supported by analysis to demonstrate that the repository design will cost-effectively meet DOE's mandated emplacement schedule and that uncertainties in the performance of the repository's systems have been objectively evaluated. Computer simulation of repository operations will provide future repository designers with data and insights that no other analytical form of analysis can provide. 6 refs., 10 figs

  12. Simulating spin models on GPU

    Science.gov (United States)

    Weigel, Martin

    2011-09-01

    Over the last couple of years it has been realized that the vast computational power of graphics processing units (GPUs) could be harvested for purposes other than the video game industry. This power, which at least nominally exceeds that of current CPUs by large factors, results from the relative simplicity of the GPU architectures as compared to CPUs, combined with a large number of parallel processing units on a single chip. To benefit from this setup for general computing purposes, the problems at hand need to be prepared in a way to profit from the inherent parallelism and hierarchical structure of memory accesses. In this contribution I discuss the performance potential for simulating spin models, such as the Ising model, on GPU as compared to conventional simulations on CPU.
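
    The parallelism GPUs exploit for spin models can be illustrated with a checkerboard Metropolis sweep of the 2D Ising model; this NumPy sketch (plain CPU code with illustrative parameters) shows the sublattice decomposition that maps naturally onto many parallel threads:

        import numpy as np

        def metropolis_sweep(spins, beta, rng):
            """One checkerboard Metropolis sweep of the 2D Ising model.

            All sites of one sublattice can be updated simultaneously,
            because their neighbors all lie on the other sublattice -- the
            decomposition that exposes parallelism on a GPU.
            """
            ii, jj = np.indices(spins.shape)
            for parity in (0, 1):
                mask = (ii + jj) % 2 == parity
                nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
                       np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
                dE = 2.0 * spins * nbr  # energy cost of flipping each spin
                accept = rng.random(spins.shape) < np.exp(-beta * np.clip(dE, 0, None))
                spins[mask & accept] *= -1

        rng = np.random.default_rng(4)
        spins = rng.choice([-1, 1], size=(64, 64))
        for _ in range(200):
            metropolis_sweep(spins, beta=0.5, rng=rng)
        print("magnetization per spin:", spins.mean())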

  13. Monte Carlo simulations and benchmark measurements on the response of TE(TE) and Mg(Ar) ionization chambers in photon, electron and neutron beams

    Science.gov (United States)

    Lin, Yi-Chun; Huang, Tseng-Te; Liu, Yuan-Hao; Chen, Wei-Lin; Chen, Yen-Fu; Wu, Shu-Wei; Nievaart, Sander; Jiang, Shiang-Huei

    2015-06-01

    The paired ionization chambers (ICs) technique is commonly employed to determine neutron and photon doses in radiology or radiotherapy neutron beams, where the neutron dose depends very strongly on the accuracy of the accompanying high energy photon dose. During the dose derivation, it is an important issue to evaluate the photon and electron response functions of two commercially available ionization chambers, denoted as TE(TE) and Mg(Ar), used in our reactor-based epithermal neutron beam. Nowadays, most perturbation corrections for accurate dose determination and many treatment planning systems are based on the Monte Carlo technique. We used the general-purpose Monte Carlo codes MCNP5, EGSnrc, FLUKA and GEANT4 for benchmark verifications among them and carefully measured values for a precise estimation of chamber current from the absorbed dose rate of the cavity gas. Also, energy-dependent response functions of the two chambers were calculated in a parallel beam with mono-energies from 20 keV to 20 MeV for photons and electrons, using both optimal simple spherical and detailed IC models. The measurements were performed in the well-defined (a) four primary M-80, M-100, M-120 and M-150 X-ray calibration fields, (b) primary 60Co calibration beam, (c) 6 MV and 10 MV photon and (d) 6 MeV and 18 MeV electron LINACs in hospital, and (e) BNCT clinical trials neutron beam. For the TE(TE) chamber, all codes were almost identical over the whole photon energy range. For the Mg(Ar) chamber, MCNP5 showed lower response than the other codes for the photon energy region below 0.1 MeV and similar response above 0.2 MeV (agreeing within 5% in the simple spherical model). With increasing electron energy, the response difference between MCNP5 and the other codes became larger in both chambers. Compared with the measured currents, MCNP5 agreed with the measurement data within 5% for the 60Co, 6 MV, 10 MV, 6 MeV and 18 MeV LINAC beams. But for the Mg(Ar) chamber, the deviations reached 7
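
    The estimation of chamber current from the absorbed dose rate of the cavity gas follows the standard cavity-ionization relation I = dD/dt · m_gas / (W/e); a minimal sketch with nominal, assumed values:

        def chamber_current(dose_rate_gy_per_s, gas_mass_kg, w_over_e_j_per_c):
            """Chamber current (A) from the cavity-gas dose rate (Gy/s),
            where W/e is the mean energy expended per unit charge released."""
            return dose_rate_gy_per_s * gas_mass_kg / w_over_e_j_per_c

        # Illustrative values only: a 0.1 g gas cavity, a 10 mGy/s dose rate,
        # and W/e of roughly 26.4 J/C for argon.
        i = chamber_current(dose_rate_gy_per_s=10e-3,
                            gas_mass_kg=1e-4,
                            w_over_e_j_per_c=26.4)
        print(f"estimated current: {i:.3e} A")  # ~3.8e-8 A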

  14. Issues in benchmarking human reliability analysis methods : a literature review.

    Energy Technology Data Exchange (ETDEWEB)

    Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)

    2008-04-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  15. Issues in benchmarking human reliability analysis methods: A literature review

    International Nuclear Information System (INIS)

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  16. Benchmarking Combined Biological Phosphorus and Nitrogen Removal Wastewater Treatment Processes

    DEFF Research Database (Denmark)

    Gernaey, Krist; Jørgensen, Sten Bay

    2004-01-01

    This paper describes the implementation of a simulation benchmark for studying the influence of control strategy implementations on combined nitrogen and phosphorus removal processes in a biological wastewater treatment plant. The presented simulation benchmark plant and its performance criteria...... conditions respectively, the definition of performance indexes that include the phosphorus removal processes, and the selection of a suitable operating point for the plant. Two control loops were implemented: one for dissolved oxygen control using the oxygen transfer coefficient K(L)a as manipulated variable...... are to a large extent based on the already existing nitrogen removal simulation benchmark. The paper illustrates and motivates the selection of the treatment plant lay-out, the selection of the biological process model, the development of realistic influent disturbance scenarios for dry, rain and storm weather...
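
    A dissolved oxygen control loop of the kind mentioned, with the oxygen transfer coefficient K(L)a as manipulated variable, is typically realized as a PI controller; a hedged sketch (gains, limits and sample time are illustrative choices, not the benchmark's defaults):

        def do_pi_step(setpoint, measurement, integral, kp=25.0, ti=0.002,
                       dt=1.0 / 96.0, kla_min=0.0, kla_max=240.0):
            """One step of a discrete PI controller for dissolved oxygen.

            Returns the new KLa value (1/d) and the updated integral state.
            """
            error = setpoint - measurement
            new_integral = integral + error * dt
            kla = kp * (error + new_integral / ti)
            # Clamp to actuator limits and freeze the integral when
            # saturated (a simple anti-windup measure).
            if kla < kla_min or kla > kla_max:
                kla = max(kla_min, min(kla, kla_max))
                new_integral = integral
            return kla, new_integral

        # Example: DO measured at 1.5 g/m3 against a 2.0 g/m3 setpoint.
        kla, integ = do_pi_step(setpoint=2.0, measurement=1.5, integral=0.0)
        print(f"new KLa: {kla:.1f} 1/d")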

  17. Time-resolved particle image velocimetry measurements with wall shear stress and uncertainty quantification for the FDA benchmark nozzle model

    CERN Document Server

    Raben, Jaime S; Robinson, Ronald; Malinauskas, Richard; Vlachos, Pavlos P

    2014-01-01

    We present validation of benchmark experimental data for computational fluid dynamics (CFD) analyses of medical devices using advanced Particle Image Velocimetry (PIV) processing and post-processing techniques. This work is an extension of a previous FDA-sponsored multi-laboratory study, which used a medical device mimicking geometry referred to as the FDA benchmark nozzle model. Time-resolved PIV analysis was performed in five overlapping regions of the model for Reynolds numbers in the nozzle throat of 500, 2,000, 5,000, and 8,000. Images included a two-fold increase in spatial resolution in comparison to the previous study. Data was processed using ensemble correlation, dynamic range enhancement, and phase correlations to increase signal-to-noise ratios and measurement accuracy, and to resolve flow regions with large velocity ranges and gradients, which is typical of many blood-contacting medical devices. Parameters relevant to device safety, including shear stress at the wall and in bulk flow, were comput...

  18. Creating Simulated Microgravity Patient Models

    Science.gov (United States)

    Hurst, Victor; Doerr, Harold K.; Bacal, Kira

    2004-01-01

    The Medical Operational Support Team (MOST) has been tasked by the Space and Life Sciences Directorate (SLSD) at the NASA Johnson Space Center (JSC) to integrate medical simulation into 1) medical training for ground and flight crews and into 2) evaluations of medical procedures and equipment for the International Space Station (ISS). To do this, the MOST requires patient models that represent the physiological changes observed during spaceflight. Despite the presence of physiological data collected during spaceflight, there is no defined set of parameters that illustrate or mimic a 'space normal' patient. Methods: The MOST culled space-relevant medical literature and data from clinical studies performed in microgravity environments. The areas of focus for data collection were in the fields of cardiovascular, respiratory and renal physiology. Results: The MOST developed evidence-based patient models that mimic the physiology believed to be induced by human exposure to a microgravity environment. These models have been integrated into space-relevant scenarios using a human patient simulator and ISS medical resources. Discussion: Despite the lack of a set of physiological parameters representing 'space normal,' the MOST developed space-relevant patient models that mimic microgravity-induced changes in terrestrial physiology. These models are used in clinical scenarios that will medically train flight surgeons, biomedical flight controllers (biomedical engineers; BME) and, eventually, astronaut-crew medical officers (CMO).

  19. Benchmarking the Sandbox: Quantitative Comparisons of Numerical and Analogue Models of Brittle Wedge Dynamics (Invited)

    Science.gov (United States)

    Buiter, S.; Schreurs, G.; Geomod2008 Team

    2010-12-01

    ..., we find differences in shear zone dip angle and surface slope between numerical and analogue models and, in 3D experiments, along-strike variations of structures in map view. Our experiments point out that we need careful treatment of material properties, discontinuities in boundary conditions, model building techniques, and boundary friction for sandbox-like setups. We show that to first order we successfully simulate sandbox-style brittle behavior using different numerical modeling techniques and that we can obtain similar styles of deformation behavior in numerical and laboratory experiments at similar levels of variability. * The GeoMod2008 Team: M. Albertz, C. Beaumont, C. Burberry, J.-P. Callot, C. Cavozzi, M. Cerca, J.-H. Chen, E. Cristallini, A. Cruden, L. Cruz, M. Cooke, T. Crook, J.-M. Daniel, D. Egholm, S. Ellis, T. Gerya, L. Hodkinson, F. Hofmann, V. Garcia, C. Gomes, C. Grall, Y. Guillot, C. Guzmán, T. Nur Hidayah, G. Hilley, B. Kaus, M. Klinkmüller, H. Koyi, W. Landry, C.-Y. Lu, J. Macauley, B. Maillot, C. Meriaux, Y. Mishin, F. Nilfouroushan, C.-C. Pan, C. Pascal, D. Pillot, R. Portillo, M. Rosenau, W. Schellart, R. Schlische, P. Souloumiac, A. Take, B. Vendeville, M. Vettori, M. Vergnaud, S.-H. Wang, M. Withjack, D. Yagupsky, Y. Yamada

  20. CCF benchmark test

    International Nuclear Information System (INIS)

    A benchmark test on common cause failures (CCF) was performed, giving interested institutions in Germany the opportunity to demonstrate and justify their interpretations of events and their methods and models for analyzing CCF. The participants in this benchmark test belonged to expert and consultant organisations and to industrial institutions. The task for the benchmark test was to analyze two typical groups of motor-operated valves in German nuclear power plants. The benchmark test was carried out in two steps. In the first step the participants were to assess, in a qualitative way, some 200 event reports on isolation valves. They were then to establish, quantitatively, the reliability parameters for the CCF in the two groups of motor-operated valves using their own methods and their own calculation models. In a second step the reliability parameters were to be recalculated on the basis of a common reference set of well-defined events, chosen from all given events, in order to analyze the influence of the calculation models on the reliability parameters. (orig.)

  1. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    International Nuclear Information System (INIS)

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR, as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments

  2. Spray model validation on single droplet heat and mass transfers for containment applications - SARNET-2 benchmark

    International Nuclear Information System (INIS)

    This work is performed in the frame of the SARNET-2 network, within Sub-Work Package WP7-2, Task 1 (spray activities). Three different elementary test series have been proposed for benchmarking, and the first series, concerning heat and mass transfer on a single droplet, is presented here. Code-experiment and code-to-code comparisons are presented. It is shown that the mass transfer terms are responsible for most of the differences and that, depending on the kind of test, the errors can either compensate each other or be amplified. Since the errors propagate over the droplet's fall height, they may not be negligible for real containment cases. (author)

  3. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, Kim; Condra, Thomas Joseph; Houbak, Niels

    In the present work a framework for optimizing the design of boilers for dynamic operation has been developed. A cost function to be minimized during the optimization has been formulated and for the present design variables related to the Boiler Volume and the Boiler Load Gradient (i.e. firing rate...... on the boiler) have been defined. Furthermore a number of constraints related to: minimum and maximum boiler load gradient, minimum boiler size, Shrinking and Swelling and Steam Space Load have been defined. For defining the constraints related to the required boiler volume a dynamic model for simulating...... the boiler performance has been developed. Outputs from the simulations are shrinking and swelling of water level in the drum during for example a start-up of the boiler; these figures combined with the requirements with respect to allowable water level fluctuations in the drum define the requirements with

  4. Advances in Intelligent Modelling and Simulation Simulation Tools and Applications

    CERN Document Server

    Oplatková, Zuzana; Carvalho, Marco; Kisiel-Dorohinicki, Marek

    2012-01-01

    The human capacity to abstract complex systems and phenomena into simplified models has played a critical role in the rapid evolution of our modern industrial processes and scientific research. As a science and an art, Modelling and Simulation have been one of the core enablers of this remarkable human trait, and have become a topic of great importance for researchers and practitioners. This book was created to compile some of the most recent concepts, advances, challenges and ideas associated with Intelligent Modelling and Simulation frameworks, tools and applications. The first chapter discusses the important aspects of human interaction and the correct interpretation of results during simulations. The second chapter gets to the heart of the analysis of entrepreneurship by means of agent-based modelling and simulations. The following three chapters bring together the central theme of simulation frameworks, first describing an agent-based simulation framework, then a simulator for electrical machines, and...

  5. Benchmark experiments and numerical modelling of the columnar-equiaxed dendritic growth in the transparent alloy Neopentylglycol-(d)Camphor

    Science.gov (United States)

    Sturz, L.; Wu, M.; Zimmermann, G.; Ludwig, A.; Ahmadein, M.

    2015-06-01

    Solidification benchmark experiments on columnar and equiaxed dendritic growth, as well as the columnar-equiaxed transition, have been carried out under diffusion-dominated conditions for heat and mass transfer in a low-gravity environment. The system under investigation is the transparent organic alloy system Neopentylglycol-37.5wt.-%(d)Camphor, processed aboard a TEXUS sounding rocket flight. Solidification was observed by standard optical methods, in addition to measurements of the thermal fields within the sheet-like experimental cells of 1 mm thickness. The dendrite tip kinetics, primary dendrite arm spacing, temporal and spatial temperature evolution, columnar tip velocity and the critical parameters at the CET have been analysed. Here we focus on a detailed comparison of the experiment “TRACE” with a 5-phase volume-averaging model to validate the numerical model and to give insight into the corresponding physical mechanisms and parameters leading to the CET. The results are discussed in terms of sensitivity to numerical parameters.

  6. Uterine Contraction Modeling and Simulation

    Science.gov (United States)

    Liu, Miao; Belfore, Lee A.; Shen, Yuzhong; Scerbo, Mark W.

    2010-01-01

    Building a training system for medical personnel to properly interpret fetal heart rate tracings requires developing accurate models that can relate various signal patterns to certain pathologies. In addition to modeling the fetal heart rate signal itself, the change of uterine pressure, which bears a strong relation to fetal heart rate and provides indications of maternal and fetal status, should also be considered. In this work, we have developed a group of parametric models to simulate uterine contractions during labor and delivery. Through analysis of real patient records, we propose to model uterine contraction signals by three major components: regular contractions, impulsive noise caused by fetal movements, and low amplitude noise invoked by maternal breathing and measuring apparatus. The regular contractions are modeled by an asymmetric generalized Gaussian function, and least squares estimation is used to compute the parameter values of the asymmetric generalized Gaussian function based on uterine contractions of real patients. Regular contractions are detected based on thresholding and derivative analysis of uterine contractions. Impulsive noise caused by fetal movements and low amplitude noise caused by maternal breathing and measuring apparatus are modeled by rational polynomial functions and Perlin noise, respectively. Experimental results show the synthesized uterine contractions can mimic real uterine contractions realistically, demonstrating the effectiveness of the proposed algorithm.
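
    One plausible parameterization of such an asymmetric generalized Gaussian uses a width parameter that differs on the two sides of the peak; the paper's exact form may differ, and the values below are purely illustrative:

        import numpy as np

        def asym_gen_gaussian(t, amplitude, peak_time, alpha_left, alpha_right, beta):
            """Generalized Gaussian whose width differs left and right of the peak."""
            alpha = np.where(t < peak_time, alpha_left, alpha_right)
            return amplitude * np.exp(-(np.abs(t - peak_time) / alpha) ** beta)

        # A synthetic "contraction": 60 mmHg peak at t = 100 s that rises
        # faster than it decays.
        t = np.linspace(0.0, 300.0, 601)
        uc = asym_gen_gaussian(t, amplitude=60.0, peak_time=100.0,
                               alpha_left=25.0, alpha_right=40.0, beta=2.0)
        print("peak:", uc.max(), "mmHg at t =", t[np.argmax(uc)], "s")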

  7. Ship Benchmark Shaft and Engine Gain FDI Using Neural Network

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Izadi-Zamanabadi, Roozbeh

    2002-01-01

    threshold value. In the paper a method for determining this threshold based on the neural network model is proposed, which can be used for a design strategy to handle residual sensitivity to input variations. The proposed method is used for successful FDI of a diesel engine gain fault in a ship propulsion...... benchmark simulation....
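
    Residual evaluation against such a threshold reduces to a simple comparison between the measurement and the neural network model's prediction; a minimal sketch with made-up signals and a made-up threshold:

        import numpy as np

        def detect_fault(measured, predicted, threshold):
            """Flag a fault whenever the residual exceeds the threshold."""
            residual = np.abs(measured - predicted)
            return residual > threshold

        # Stand-ins for a measured signal and the NN model's prediction.
        measured = np.array([100.0, 101.2, 99.8, 112.5, 113.0])
        predicted = np.array([100.1, 100.9, 100.2, 100.4, 100.6])

        # The threshold is set above the residual level caused by normal
        # input variations, as the paper derives from the NN model.
        print(detect_fault(measured, predicted, threshold=5.0))
        # -> [False False False  True  True]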

  8. A BENCHMARKING ANALYSIS FOR FIVE RADIONUCLIDE VADOSE ZONE MODELS (CHAIN, MULTIMED_DP, FECTUZ, HYDRUS, AND CHAIN 2D) IN SOIL SCREENING LEVEL CALCULATIONS

    Science.gov (United States)

    Five radionuclide vadose zone models with different degrees of complexity (CHAIN, MULTIMED_DP, FECTUZ, HYDRUS, and CHAIN 2D) were selected for use in soil screening level (SSL) calculations. A benchmarking analysis between the models was conducted for a radionuclide (99Tc) rele...

  9. Simulation model of a PWR power plant

    International Nuclear Information System (INIS)

    A simulation model of a hypothetical PWR power plant is described. A large number of disturbances and failures in plant function can be simulated. The model is written as seven modules for the modular simulation system for continuous processes, DYSIM, and also serves as a user example of this system. The model runs in Fortran 77 on the IBM PC-AT. (author)

  10. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firm and its market that are expected to drive profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality, or work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management to continuously improve the firm's efficiency and effectiveness, and to know the success factors and competitiveness determinants, determines which performance measures are most critical to the firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent for operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.

  11. Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks

    Science.gov (United States)

    Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.

    2015-12-01

    A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids. For instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real world topography can be compared to recent real world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive value of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a Cellular Automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the distribution algorithm from cell locations to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES to correctly match early stages of the 2012-2013 Tolbachik flow, Kamchatka, Russia, to 80%. We also evaluate model performance given uncertain input parameters using a Monte Carlo setup. This illuminates sensitivity to model uncertainty.
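
    The posterior metrics P(A|B) and P(¬A|¬B) follow directly from the confusion counts of a simulated versus an observed inundation map; a minimal sketch with toy grids:

        import numpy as np

        def predictive_values(simulated, observed):
            """P(A|B): fraction of cells forecast inundated that truly were.
            P(notA|notB): fraction of cells forecast dry that stayed dry."""
            tp = np.sum(simulated & observed)
            fp = np.sum(simulated & ~observed)
            tn = np.sum(~simulated & ~observed)
            fn = np.sum(~simulated & observed)
            ppv = tp / (tp + fp)
            npv = tn / (tn + fn)
            jaccard = tp / (tp + fp + fn)  # older fitness metric, for contrast
            return ppv, npv, jaccard

        # Toy 1D arrays standing in for two map-view inundation grids.
        sim = np.array([1, 1, 1, 0, 0, 0, 1, 0], dtype=bool)
        obs = np.array([1, 1, 0, 0, 0, 1, 1, 0], dtype=bool)
        print(predictive_values(sim, obs))  # (0.75, 0.75, 0.6)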

  12. Experimental benchmarks and simulation of GAMMA-T for overcooling and undercooling transients in HTGRs coupled with MED desalination plants

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ho Sik, E-mail: hskim25@kaist.ac.kr [Korea Advanced Institute of Science and Technology (KAIST), Department of Nuclear and Quantum Engineering, 291 Daehak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of); Kim, In Hun, E-mail: nuclea@kaist.ac.kr [Korea Advanced Institute of Science and Technology (KAIST), Department of Nuclear and Quantum Engineering, 291 Daehak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of); NO, Hee Cheon, E-mail: hcno@kaist.ac.kr [Korea Advanced Institute of Science and Technology (KAIST), Department of Nuclear and Quantum Engineering, 291 Daehak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of); Jin, Hyung Gon, E-mail: gonijin@gmail.com [Korea Advanced Institute of Science and Technology (KAIST), Department of Nuclear and Quantum Engineering, 291 Daehak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)

    2013-06-15

    Highlights: ► The GAMMA-T code was well validated through benchmark experiments. ► Based on the KAIST coupling scheme, the GTHTR300 + MED systems were modeled. ► Safety analysis was performed for overcooling and undercooling accidents. ► In all accidents, maximum peak fuel temperatures were well below 1600 °C. ► In all accidents, the HTGR + MED system could be operated continuously. -- Abstract: Nuclear desalination based on the high temperature gas-cooled reactor (HTGR) with gas turbomachinery and multi-effect distillation (MED) is attracting attention because the coupling system can utilize the waste heat of the nuclear power system for the MED desalination system. In previous work, KAIST proposed a new HTGR + MED coupling scheme, evaluated desalination performance, and performed cost analysis for the system. In this paper, in order to confirm the safety and the performance of the coupling system, we performed transient analysis with the GAMMA-T (GAs Multidimensional Multicomponent mixture Analysis-Turbomachinery) code for the KAIST HTGR + MED systems. The experimental benchmarks of the GAMMA-T code were set up before the transient analysis for several accident scenarios. The GAMMA-T code was well validated against steady state and transient scenarios of the He-Water test loop, such as changes in water mass flow rate and water inlet temperatures. Then, for transient analysis, the GTHTR300 was chosen as a reference plant. The GTHTR300 + MED systems were modeled based on the KAIST HTGR + MED coupling scheme. Transient analysis was performed for three kinds of accident scenarios: (1) loss of heat rejection through the MED plant, (2) loss of heat rejection through the heat sink, and (3) overcooling due to abnormally cold seawater temperature. In all accident scenarios, maximum peak fuel temperatures were well below the fuel failure criterion of 1600 °C, and the GTHTR300 + MED system could be operated continuously and safely. Especially, in the

  13. Collaborative weed modelling with Universal Simulator

    OpenAIRE

    Holst, Niels

    2010-01-01

    Universal Simulator is open-source, modular, extendible and re-usable. It includes the INTERCOM model of plant growth, the Conductance model of plant growth, an annual weed demographic model, an insect demographic model, and options to extend it with any model and combine it with the above.

  14. CFD Modeling of Thermal Manikin Heat Loss in a Comfort Evaluation Benchmark Test

    DEFF Research Database (Denmark)

    Nilsson, Håkan O.; Brohus, Henrik; Nielsen, Peter V.

    2007-01-01

    Computer simulated persons (CSPs) today are different in many ways, reflecting various software possibilities and limitations as well as different research interests. Unfortunately, too few of the theories behind thermal manikin simulations are available in the public domain. Many researchers and practitioners use CSPs for comfort evaluation. The main idea is to focus on people: it is the comfort requirements of occupants that decide what thermal climate will prevail. It is therefore important to use comfort simulation methods that originate from people, not just temperatures on surfaces and air.

  15. Closed-Loop Neuromorphic Benchmarks

    Science.gov (United States)

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
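
    As a concrete illustration of the closed-loop setting, the hedged sketch below couples a 1D point-mass "environment" with a controller whose bias term is adapted by a simple error-driven rule; the dynamics, gains and learning rate are invented and far simpler than the paper's benchmarks.

```python
# Hypothetical minimal closed-loop benchmark: a unit point mass with an
# unknown constant external force, controlled by PD feedback plus a bias
# term adapted online by an error-driven learning rule.
dt, kp, kd, lr = 0.01, 50.0, 10.0, 5.0
pos, vel, target = 0.0, 0.0, 1.0
unknown_force = -3.0    # the environment; hidden from the controller
learned_bias = 0.0      # adapted online to cancel the unknown force

for _ in range(2000):   # 20 s of simulated time
    error = target - pos
    u = kp * error - kd * vel + learned_bias   # control signal
    learned_bias += lr * error * dt            # error-driven adaptation
    vel += (u + unknown_force) * dt            # unit-mass dynamics
    pos += vel * dt

print(f"final position {pos:.3f}, learned bias {learned_bias:.3f} "
      f"(ideal bias {-unknown_force:.1f})")
```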

  16. Techniques and Simulation Models in Risk Management

    OpenAIRE

    Mirela GHEORGHE

    2012-01-01

    In the present paper, the scientific approach of the research starts from the theoretical framework of the simulation concept and then continues into the setting of practical reality, providing simulation models for a broad range of inherent risks specific to any organization, and simulating those models using the informatics instrument @Risk (Palisade). The reason behind this research lies in the need for simulation models that will allow the person in charge with decision taking i...
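
    A rough Python stand-in for the kind of Monte Carlo risk model the paper builds in @Risk is sketched below; the risk register, probabilities and impact distributions are invented for illustration.

```python
# Hedged sketch: aggregate loss distribution for a small risk register,
# estimated by Monte Carlo sampling (values are illustrative only).
import numpy as np

rng = np.random.default_rng(42)
risks = [  # (occurrence probability, mean impact, impact std dev)
    (0.30, 100_000, 30_000),
    (0.10, 500_000, 150_000),
    (0.05, 1_000_000, 400_000),
]

n = 100_000
total = np.zeros(n)
for p, mu, sigma in risks:
    occurs = rng.random(n) < p                       # does the risk fire?
    impact = rng.normal(mu, sigma, n).clip(min=0)    # uncertain impact
    total += occurs * impact

print(f"expected loss {total.mean():,.0f}, "
      f"95th percentile {np.percentile(total, 95):,.0f}")
```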

  17. Simulation Model Driven Engineering for Manufacturing Cell

    OpenAIRE

    Hibino, Hironori; Inukai, Toshihiro; Yoshida, Yukishige

    2010-01-01

    In our research, the simulation model driven engineering for manufacturing cell (SMDE-MC) is proposed. The purposes of SMDE-MC are to support the manufacturing engineering processes based on the simulation model and to extend the range of control applications and simulation applications using the PC based control. SMDE-MC provides the simulation model which controls and monitors the manufacturing cell directly using PC based control in the manufacturing system execution phase. Then when the s...

  18. Kvantitativ benchmark - Produktionsvirksomheder [Quantitative benchmark - Production companies]

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report presenting the results of the quantitative benchmark of the production companies in the VIPS project.

  19. Benchmarking in Student Affairs.

    Science.gov (United States)

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  20. Validation of advanced NSSS simulator model for loss-of-coolant accidents

    Energy Technology Data Exchange (ETDEWEB)

    Kao, S.P.; Chang, S.K.; Huang, H.C. [Nuclear Training Branch, Northeast Utilities, Waterford, CT (United States)

    1995-09-01

    The replacement of the NSSS (Nuclear Steam Supply System) model on the Millstone 2 full-scope simulator has significantly increased its fidelity to simulate adverse conditions in the RCS. The new simulator NSSS model is a real-time derivative of the Nuclear Plant Analyzer by ABB. The thermal-hydraulic model is a five-equation, non-homogeneous model for water, steam, and non-condensible gases. The neutronic model is a three-dimensional nodal diffusion model. In order to certify the new NSSS model for operator training, an extensive validation effort has been performed by benchmarking the model performance against RELAP5/MOD2. This paper presents the validation results for the cases of small- and large-break loss-of-coolant accidents (LOCA). Detailed comparisons in the phenomena of reflux-condensation, phase separation, and two-phase natural circulation are discussed.

  1. Radiography benchmark 2014

    International Nuclear Information System (INIS)

    The purpose of the 2014 WFNDEC RT benchmark study was to compare the predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available, considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, evaluating only the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed

  2. Validation of mechanical models for reinforced concrete structures: Presentation of the French project 'Benchmark des Poutres de la Rance'

    Energy Technology Data Exchange (ETDEWEB)

    L'Hostis, V. [Laboratoire d'Etude du Comportement des Betons et des Argiles, CEA Saclay (France)]; Brunet, C. [EDF/SEPTEN, Villeurbanne (France)]; Poupard, O. [Laboratoire Pierre Sue, CNRS/CEA Saclay (France); Laboratoire d'Etude du Comportement des Betons et des Argiles, CEA Saclay (France)]; Petre-Lazar, I. [EDF/DRD/MMC (France)]

    2006-07-01

    Several ageing models are available for the prediction of the mechanical consequences of rebar corrosion. They are used for service life prediction of reinforced concrete structures. Concerning corrosion diagnosis of reinforced concrete, some Non Destructive Testing (NDT) tools have been developed and have been in use for some years. However, these developments require validation on existing concrete structures. The French project 'Benchmark des Poutres de la Rance' contributes to this aspect. It has two main objectives: (i) validation of mechanical models to estimate the influence of rebar corrosion on the load bearing capacity of a structure, (ii) qualification of the use of the NDT results to collect information on steel corrosion within reinforced-concrete structures. Ten French and European institutions from both academic research laboratories and industrial companies contributed during the years 2004 and 2005. This paper presents the project, which was divided into several work packages: (i) the reinforced concrete beams were characterized with non-destructive testing tools, (ii) the mechanical behaviour of the beams was experimentally tested, (iii) complementary laboratory analyses were performed and (iv) finally, numerical simulation results were compared to the experimental results obtained with the mechanical tests. (authors)

  3. Hadron Production Model Developments and Benchmarking in the 0.7 - 12 GeV Energy Region

    CERN Document Server

    Mokhov, N V; Striganov, S I

    2014-01-01

    Driven by the needs of the intensity frontier projects with their megawatt beams, e.g., ESS, FAIR and Project X, and their experiments, the event generators of the MARS15 code have recently been improved. After thorough analysis and benchmarking against data, including the newest data from the HARP collaboration, both the exclusive and inclusive particle production models were further developed in the projectile energy region of 0.7 to 12 GeV, which is crucial for the above projects but difficult from a theoretical standpoint. At these energies, modelling of prompt particle production in nucleon-nucleon and pion-nucleon inelastic reactions is now based on a combination of phase-space and isobar models. Other reactions are still modeled in the framework of the Quark-Gluon String Model. Pion, kaon and strange particle production and propagation in nuclear media are improved. For the alternative inclusive mode, experimental data on large-angle (> 20 degrees) pion production in hadron-nucleus interactions are parameteriz...

  4. Benchmark Calculations of OECD/NEA Reactivity-Initiated Accidents

    International Nuclear Information System (INIS)

    The benchmark Phase I was done from 2011 to 2013 with a consistent set of four experiments on very similar highly irradiated fuel rods tested under different experimental conditions: low temperature, low pressure, stagnant water coolant, very short power pulse (NSRR VA-1); high temperature, medium pressure, stagnant water coolant, very short power pulse (NSRR VA-3); high temperature, low pressure, flowing sodium coolant, larger power pulse (CABRI CIP0-1); and high temperature, high pressure, flowing water coolant, medium-width power pulse (CABRI CIP3-1). Based on the importance of the thermal-hydraulics aspects revealed during Phase I, the specifications of the benchmark Phase II were elaborated in 2014. The benchmark Phase II focused on a deeper understanding of the differences in modeling between the different codes. The work on the benchmark Phase II program will last until the end of 2015. The benchmark cases for RIA are simulated with the FRAPTRAN 1.5 code, in order to understand the phenomena during RIA and to check the capability of the code itself. The results for enthalpy, cladding strain and outside temperature, among the 21 parameters requested by the benchmark program, are summarized, and they seem to reasonably reflect the actual phenomena, except for those of case 6

  5. WIPP Benchmark calculations with the large strain SPECTROM codes

    International Nuclear Information System (INIS)

    This report provides calculational results from the updated Lagrangian structural finite-element programs SPECTROM-32 and SPECTROM-333 for the purpose of qualifying these codes to perform analyses of structural situations in the Waste Isolation Pilot Plant (WIPP). Results are presented for the Second WIPP Benchmark (Benchmark II) problems and for a simplified heated room problem used in a parallel design calculation study. The Benchmark II problems consist of an isothermal room problem and a heated room problem. The stratigraphy involves 27 distinct geologic layers, including ten clay seams of which four are modeled as frictionless sliding interfaces. The analyses of the Benchmark II problems consider a 10-year simulation period. The evaluation of nine structural codes used in the Benchmark II problems shows that inclusion of finite-strain effects is not as significant as observed for the simplified heated room problem, and a variety of finite-strain and small-strain formulations produced similar results. The simplified heated room problem provides stratigraphic complexity equivalent to the Benchmark II problems but neglects sliding along the clay seams. It does, however, provide a calculational check case where the small-strain formulation produced room closures about 20 percent greater than those obtained using finite-strain formulations. A discussion is given of each of the solved problems, and the computational results are compared with available published results. In general, the results of the two SPECTROM large-strain codes compare favorably with results from other codes used to solve the problems

  6. Modeling coupled blast/structure interaction with Zapotec, benchmark calculations for the Conventional Weapon Effects Backfill (CONWEB) tests.

    Energy Technology Data Exchange (ETDEWEB)

    Bessette, Gregory Carl

    2004-09-01

    Modeling the response of buried reinforced concrete structures subjected to close-in detonations of conventional high explosives poses a challenge for a number of reasons. Foremost, there is the potential for coupled interaction between the blast and structure. Coupling enters the problem whenever the structure deformation affects the stress state in the neighboring soil, which in turn, affects the loading on the structure. Additional challenges for numerical modeling include handling disparate degrees of material deformation encountered in the structure and surrounding soil, modeling the structure details (e.g., modeling the concrete with embedded reinforcement, jointed connections, etc.), providing adequate mesh resolution, and characterizing the soil response under blast loading. There are numerous numerical approaches for modeling this class of problem (e.g., coupled finite element/smooth particle hydrodynamics, arbitrary Lagrange-Eulerian methods, etc.). The focus of this work will be the use of a coupled Euler-Lagrange (CEL) solution approach. In particular, the development and application of a CEL capability within the Zapotec code is described. Zapotec links two production codes, CTH and Pronto3D. CTH, an Eulerian shock physics code, performs the Eulerian portion of the calculation, while Pronto3D, an explicit finite element code, performs the Lagrangian portion. The two codes are run concurrently with the appropriate portions of a problem solved on their respective computational domains. Zapotec handles the coupling between the two domains. The application of the CEL methodology within Zapotec for modeling coupled blast/structure interaction will be investigated by a series of benchmark calculations. These benchmarks rely on data from the Conventional Weapons Effects Backfill (CONWEB) test series. In these tests, a 15.4-lb pipe-encased C-4 charge was detonated in soil at a 5-foot standoff from a buried test structure. The test structure was composed of a

  7. Differential Die-Away Instrument: Report on Benchmark Measurements and Comparison with Simulation for the Effects of Neutron Poisons

    Energy Technology Data Exchange (ETDEWEB)

    Goodsell, Alison Victoria [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Swinhoe, Martyn Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Henzl, Vladimir [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rael, Carlos D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Desimone, David J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-03-30

    In this report, new experimental data and MCNPX simulation results of the differential die-away (DDA) instrument response to the presence of neutron absorbers are evaluated. In our previous fresh nuclear fuel experiments and simulations, no neutron absorbers or poisons were included in the fuel definition. These new results showcase the capability of the DDA instrument to acquire data from a system that better mimics spent nuclear fuel.

  8. An approximate model for pulsar navigation simulation

    Science.gov (United States)

    Jovanovic, Ilija; Enright, John

    2016-02-01

    This paper presents an approximate model for the simulation of pulsar-aided navigation systems. High-fidelity simulations of these systems are computationally intensive and impractical for simulating periods of a day or more. Simulation of yearlong missions is done by abstracting navigation errors as periodic Gaussian noise injections. This paper presents an intermediate approximate model to simulate position errors for periods of several weeks, useful for building more accurate Gaussian error models. This is done by abstracting photon detection and binning, replacing them with a simple deterministic process. The approximate model enables faster computation of error injection models, allowing the error model to be inexpensively updated throughout a simulation. Testing of the approximate model revealed optimistic performance predictions for non-millisecond pulsars, with more accurate predictions for pulsars in the millisecond regime. This performance gap was attributed to noise which is not present in the approximate model but can be predicted and added to improve accuracy.
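
    The following toy sketch illustrates the error-injection idea under stated assumptions: position error grows deterministically between pulsar fixes and is reset by a Gaussian draw at each fix. The fix interval, fix accuracy and drift rate are invented, not taken from the paper.

```python
# Hedged sketch of Gaussian error injection for navigation simulation.
import numpy as np

rng = np.random.default_rng(0)
obs_interval = 600.0   # s between pulsar position fixes (assumed)
sigma_fix = 150.0      # m, 1-sigma error of a single fix (assumed)
drift_rate = 0.05      # m/s of unmodeled drift between fixes (assumed)

err = 0.0
peaks = []
for _ in range(10_000):                                 # ~70 days of fixes
    peaks.append(abs(err + drift_rate * obs_interval))  # error just before a fix
    err = rng.normal(0.0, sigma_fix)                    # each fix resets the error

print(f"mean peak position error between fixes: {np.mean(peaks):.1f} m")
```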

  9. The NAS Parallel Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental

  10. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  11. Fundamental M-dwarf parameters from high-resolution spectra using PHOENIX ACES models. I. Parameter accuracy and benchmark stars

    Science.gov (United States)

    Passegger, V. M.; Wende-von Berg, S.; Reiners, A.

    2016-03-01

    M-dwarf stars are the most numerous stars in the Universe; they span a wide range in mass and are the focus of ongoing and planned exoplanet surveys. To investigate and understand their physical nature, detailed spectral information and accurate stellar models are needed. We use a new synthetic atmosphere model generation and compare model spectra to observations. To test the model accuracy, we compared the models to four benchmark stars with atmospheric parameters for which independent information from interferometric radius measurements is available. We used χ2-based methods to determine parameters from high-resolution spectroscopic observations. Our synthetic spectra are based on the new PHOENIX grid that uses the ACES description for the equation of state. This is a model generation expected to be especially suitable for low-temperature atmospheres. We identified suitable spectral tracers of atmospheric parameters and determined the uncertainties in Teff, log g, and [Fe/H] resulting from degeneracies between parameters and from shortcomings of the model atmospheres. The inherent uncertainties we find are σTeff = 35 K, σlog g = 0.14, and σ[Fe/H] = 0.11. The new model spectra achieve a reliable match to our observed data; our results for Teff and log g are consistent with literature values to within 1σ. However, metallicities reported from earlier photometric and spectroscopic calibrations in some cases disagree with our results by more than 3σ. A possible explanation is systematic errors in earlier metallicity determinations that were based on insufficient descriptions of the cool atmospheres. At this point, however, we cannot definitely identify the reason for this discrepancy, but our analysis indicates that there is a large uncertainty in the accuracy of M-dwarf parameter estimates. Based on observations carried out with UVES at the ESO VLT.

  12. Nanotechnology convergence and modeling paradigm of sustainable energy system using polymer electrolyte membrane fuel cell as a benchmark example

    International Nuclear Information System (INIS)

    Developments in nanotechnology have led to innovative progress and converging technologies in engineering and science. These demand novel methodologies that enable efficient communications from the nanoscale all the way to decision-making criteria for actual production systems. In this paper, we discuss the convergence of nanotechnology and novel multi-scale modeling paradigms by using the fuel cell system as a benchmark example. This approach includes complex multi-phenomena at different time and length scales along with the introduction of an optimization framework for application-driven nanotechnology research trends. The modeling paradigm introduced here covers the novel holistic integration from atomistic/molecular phenomena to meso/continuum scales. System optimization is also discussed with respect to the reduced order parameters for a coarse-graining procedure in multi-scale model integration as well as system design. The development of a hierarchical multi-scale paradigm consolidates the theoretical analysis and enables large-scale decision-making of process level design, based on first-principles, and therefore promotes the convergence of nanotechnology to sustainable energy technologies.

  13. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  14. Harmonic oscillator in heat bath: Exact simulation of time-lapse-recorded data and exact analytical benchmark statistics

    DEFF Research Database (Denmark)

    Nørrelykke, Simon F; Flyvbjerg, Henrik

    2011-01-01

    The stochastic dynamics of the damped harmonic oscillator in a heat bath is simulated with an algorithm that is exact for time steps of arbitrary size. Exact analytical results are given for correlation functions and power spectra in the form they acquire when computed from experimental time-lapse-recorded data.
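
    For intuition, the sketch below shows the analogous exact update for the overdamped limit of such an oscillator, an Ornstein-Uhlenbeck process, where a single exponential factor makes the update exact for any time step; the paper's algorithm generalizes this idea to the full oscillator, and all parameter values here are illustrative.

```python
# Hedged sketch: exact simulation of an Ornstein-Uhlenbeck process, the
# overdamped limit of a harmonic oscillator in a heat bath.
import numpy as np

def simulate_ou(x0, tau, sigma, dt, n_steps, rng):
    """Exactly sample an OU process with relaxation time tau and stationary
    standard deviation sigma at sample spacing dt (any dt is exact)."""
    c = np.exp(-dt / tau)                    # exact decay factor per step
    noise_sd = sigma * np.sqrt(1.0 - c * c)  # exact per-step noise amplitude
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        x[i + 1] = c * x[i] + noise_sd * rng.standard_normal()
    return x

rng = np.random.default_rng(1)
sigma = 5e-9   # m, e.g. an optically trapped bead (assumed value)
trace = simulate_ou(0.0, tau=1e-3, sigma=sigma, dt=1e-2, n_steps=10_000, rng=rng)
print(f"sample variance {trace.var():.2e} vs stationary {sigma**2:.2e}")
```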

  15. Harmonic oscillator in heat bath: Exact simulation of time-lapse-recorded data and exact analytical benchmark statistics

    DEFF Research Database (Denmark)

    Nørrelykke, Simon F; Flyvbjerg, Henrik

    2011-01-01

    The stochastic dynamics of the damped harmonic oscillator in a heat bath is simulated with an algorithm that is exact for time steps of arbitrary size. Exact analytical results are given for correlation functions and power spectra in the form they acquire when computed from experimental time-lapse-recorded data.

  16. Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2014-01-01

    This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).

  17. An introduction to enterprise modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ostic, J.K.; Cannon, C.E. [Los Alamos National Lab., NM (United States). Technology Modeling and Analysis Group

    1996-09-01

    As part of an ongoing effort to continuously improve productivity, quality, and efficiency of both industry and Department of Energy enterprises, Los Alamos National Laboratory is investigating various manufacturing and business enterprise simulation methods. A number of enterprise simulation software models are being developed to enable engineering analysis of enterprise activities. In this document the authors define the scope of enterprise modeling and simulation efforts, and review recent work in enterprise simulation at Los Alamos National Laboratory as well as at other industrial, academic, and research institutions. References of enterprise modeling and simulation methods and a glossary of enterprise-related terms are provided.

  18. Terrestrial Microgravity Model and Threshold Gravity Simulation using Magnetic Levitation

    Science.gov (United States)

    Ramachandran, N.

    2005-01-01

    What is the threshold gravity (minimum gravity level) required for the nominal functioning of the human system? What dosage is required? Do human cell lines behave differently in microgravity in response to an external stimulus? The critical need for such a gravity simulator is emphasized by recent experiments on human epithelial cells and lymphocytes on the Space Shuttle clearly showing that cell growth and function are markedly different from those observed terrestrially. Those differences are also dramatic between cells grown in space and those in Rotating Wall Vessels (RWVs), the NASA bioreactor often used to simulate microgravity, indicating that although morphological growth patterns (three-dimensional growth) can be successfully simulated using RWVs, cell function performance is not reproduced - a critical difference. If cell function is dramatically affected by gravity off-loading, then cell response to stimuli such as radiation, stress, etc. can be very different from terrestrial cell lines. Yet, we have no good gravity simulator for use in study of these phenomena. This represents a profound shortcoming for countermeasures research. We postulate that we can use magnetic levitation of cells and tissue, through the use of strong magnetic fields and field gradients, as a terrestrial microgravity model to study human cells. Specific objectives of the research are: 1. To develop a tried, tested and benchmarked terrestrial microgravity model for cell culture studies; 2. Gravity threshold determination; 3. Dosage (magnitude and duration) of g-level required for nominal functioning of cells; 4. Comparisons of the magnetic levitation model to other models such as RWV, hind limb suspension, etc.; and 5. Cellular response to reduced gravity levels of Moon and Mars. The paper will discuss experiments and modeling work to date in support of this project.

  19. A physiological production model for cacao : results of model simulations

    OpenAIRE

    Zuidema, P.A.; Leffelaar, P. A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.

  20. Benchmark campaign and case study episode in central Europe for development and assessment of advanced GNSS tropospheric models and products

    Science.gov (United States)

    Douša, Jan; Dick, Galina; Kačmařík, Michal; Brožková, Radmila; Zus, Florian; Brenot, Hugues; Stoycheva, Anastasia; Möller, Gregor; Kaplon, Jan

    2016-07-01

    Initial objectives and design of the Benchmark campaign organized within the European COST Action ES1206 (2013-2017) are described in the paper. This campaign has aimed to support the development and validation of advanced Global Navigation Satellite System (GNSS) tropospheric products, in particular high-resolution and ultra-fast zenith total delays (ZTDs) and tropospheric gradients derived from a dense permanent network. A complex data set was collected for the 8-week period when several extreme heavy precipitation episodes occurred in central Europe, causing severe river floods in this area. An initial processing of data sets from GNSS products and numerical weather models (NWMs) provided independently estimated reference parameters - zenith tropospheric delays and tropospheric horizontal gradients. Their provision gave an overview of the product similarities and complementarities, and thus of the potential for an improved synergy in their optimal exploitation in the future. Reference GNSS and NWM results were intercompared and visually analysed using animated maps. ZTDs from two reference GNSS solutions compared to the global ERA-Interim reanalysis resulted in accuracy at the 10 mm level in terms of the root mean square (rms) with a negligible overall bias; comparisons to Global Forecast System (GFS) forecasts showed accuracy at the 12 mm level with an overall bias of -5 mm; and, finally, comparisons to the mesoscale ALADIN-CZ forecast resulted in accuracy at the 8 mm level with a negligible total bias. The comparison of horizontal tropospheric gradients from GNSS and NWM data demonstrated a very good agreement among independent solutions with negligible biases and an accuracy of about 0.5 mm. Visual comparisons of maps of zenith wet delays and tropospheric horizontal gradients showed very promising results for future exploitation of advanced GNSS tropospheric products in meteorological applications, such as severe weather event monitoring and weather nowcasting.
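
    The bias and rms statistics quoted above are straightforward to reproduce; the sketch below computes them for two synthetic ZTD series, so only the formulas, not the data, match the campaign's comparison method.

```python
# Hedged sketch: bias and rms between a GNSS ZTD series and an NWM-derived
# ZTD series (both synthetic here, in millimetres).
import numpy as np

rng = np.random.default_rng(7)
ztd_gnss = 2400.0 + rng.normal(0.0, 30.0, 1000)    # reference GNSS solution
ztd_nwm = ztd_gnss + rng.normal(-5.0, 10.0, 1000)  # biased, noisy NWM values

diff = ztd_nwm - ztd_gnss
print(f"bias {diff.mean():.1f} mm, rms {np.sqrt((diff**2).mean()):.1f} mm")
```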

  1. Photochemistry in Terrestrial Exoplanet Atmospheres I: Photochemistry Model and Benchmark Cases

    OpenAIRE

    Hu, Renyu; Seager, Sara; Bains, William

    2012-01-01

    We present a comprehensive photochemistry model for exploration of the chemical composition of terrestrial exoplanet atmospheres. The photochemistry model is designed from the ground up to have the capacity to treat all types of terrestrial planet atmospheres, ranging from oxidizing through reducing, which makes the code suitable for applications for the wide range of anticipated terrestrial exoplanet compositions. The one-dimensional chemical transport model treats up to 800 chemical reactions...

  2. PHOTOCHEMISTRY IN TERRESTRIAL EXOPLANET ATMOSPHERES. I. PHOTOCHEMISTRY MODEL AND BENCHMARK CASES

    OpenAIRE

    Hu, Renyu; Seager, Sara; Bains, William

    2011-01-01

    We present a comprehensive photochemistry model for exploration of the chemical composition of terrestrial exoplanet atmospheres. The photochemistry model is designed from the ground up to have the capacity to treat all types of terrestrial planet atmospheres, ranging from oxidizing through reducing, which makes the code suitable for applications for the wide range of anticipated terrestrial exoplanet compositions. The one-dimensional chemical transport model treats up to 800 chemical reactions...

  3. An Econometric Model of CGE Simulations

    OpenAIRE

    Hess, Sebastian

    2005-01-01

    CGE models are widely used tools for economic assessments of trade policy changes. However, overall confidence in their results tends to be low. We employ the methodological framework of meta-analysis in order to approach a quantitative comparison of CGE-based simulation results. To this end, we compile a dataset of twelve recent Doha simulations and fit a linear regression model that explains the variance between simulation results on the regional level as a function of various modeling characteristics...

  4. Software-Engineering Process Simulation Model (SEPS)

    OpenAIRE

    Lin, C. Y.; Abdel-Hamid, T.; Sherif, J.S.

    1999-01-01

    This article describes the Software-Engineering Process Simulation (SEPS) model developed at JPL. SEPS is a dynamic simulation model of the software project-development process. It uses the feedback principles of system dynamics to simulate the dynamic interactions among various software life-cycle development activities and management decision-making processes. The model is designed to be a planning tool to examine trade-offs of cost, schedule, and functionality, and to test ...

  5. Simulation modeling and analysis with Arena

    CERN Document Server

    Altiok, Tayfur

    2007-01-01

    Simulation Modeling and Analysis with Arena is a highly readable textbook which treats the essentials of the Monte Carlo discrete-event simulation methodology, and does so in the context of the popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings. The book introduces the concept of discrete-event Monte Carlo simulation, the most commonly used methodology for modeling...
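
    As a minimal illustration of the discrete-event methodology the book teaches, the sketch below simulates an M/M/1 single-server queue (think of a one-machine production line) in plain Python rather than Arena; the rates and horizon are arbitrary.

```python
# Minimal discrete-event simulation of an M/M/1 queue with an event heap.
import heapq, random

random.seed(3)
ARRIVAL_RATE, SERVICE_RATE, HORIZON = 0.8, 1.0, 100_000.0

events = [(random.expovariate(ARRIVAL_RATE), "arrival")]
waiting = []                 # arrival times of jobs queued behind the server
busy, served, total_wait = False, 0, 0.0

while events:
    t, kind = heapq.heappop(events)
    if t > HORIZON:
        break
    if kind == "arrival":
        heapq.heappush(events, (t + random.expovariate(ARRIVAL_RATE), "arrival"))
        if busy:
            waiting.append(t)
        else:
            busy = True
            heapq.heappush(events, (t + random.expovariate(SERVICE_RATE), "departure"))
    else:                    # departure: server frees up, pull next job if any
        served += 1
        if waiting:
            total_wait += t - waiting.pop(0)
            heapq.heappush(events, (t + random.expovariate(SERVICE_RATE), "departure"))
        else:
            busy = False

print(f"jobs served: {served}, mean queueing delay per job: {total_wait / served:.2f}"
      f" (M/M/1 theory: {ARRIVAL_RATE / (SERVICE_RATE * (SERVICE_RATE - ARRIVAL_RATE)):.2f})")
```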

  6. Non-exponential Fidelity Decay in Randomized Benchmarking with Low-Frequency Noise

    OpenAIRE

    Fogarty, M. A.; Veldhorst, M.; R. Harper; Yang, C.H.; Bartlett, S. D.; Flammia, S. T.; A. S. Dzurak

    2015-01-01

    We show that non-exponential fidelity decays in randomized benchmarking experiments on quantum dot qubits are consistent with numerical simulations that incorporate low-frequency noise. By expanding standard randomized benchmarking analysis to this experimental regime, we find that such non-exponential decays are better modeled by multiple exponential decay rates, leading to an instantaneous control fidelity for isotopically-purified-silicon MOS quantum dot qubits which can be as high as 99.9...
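
    A hedged sketch of the underlying analysis follows: synthetic randomized-benchmarking data generated from two decay rates are fitted with a single-exponential and a two-exponential model, and the residuals show which form the data prefer. All values and function names are illustrative, not from the paper.

```python
# Hedged sketch: one- vs two-exponential fits to synthetic RB fidelity data.
import numpy as np
from scipy.optimize import curve_fit

def single_exp(m, a, p, b):
    return a * p**m + b

def double_exp(m, a1, p1, a2, p2, b):
    return a1 * p1**m + a2 * p2**m + b

m = np.arange(1, 200, 5, dtype=float)           # sequence lengths
rng = np.random.default_rng(11)
data = double_exp(m, 0.3, 0.995, 0.2, 0.95, 0.5) + rng.normal(0, 0.005, m.size)

ps, _ = curve_fit(single_exp, m, data, p0=(0.5, 0.98, 0.5), bounds=(0.0, 1.0))
pd_, _ = curve_fit(double_exp, m, data, p0=(0.25, 0.99, 0.25, 0.9, 0.5),
                   bounds=(0.0, 1.0))

rss_s = np.sum((data - single_exp(m, *ps))**2)
rss_d = np.sum((data - double_exp(m, *pd_))**2)
print(f"residual sum of squares: single {rss_s:.2e}, double {rss_d:.2e}")
```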

  7. Submission for the CSNI/GREST benchmark exercise on chemical thermodynamic modeling in core-concrete interaction releases of radionuclides

    International Nuclear Information System (INIS)

    A submission for the CSNI/PWG-4/GREST standard problem on chemical thermodynamic modeling in core-concrete interaction releases of radionuclides is described. Part A of the exercise is a highly defined benchmark calculation in which data and speciation are specified. The problem is, however, ambiguous concerning the definition of an ideal solution. Consequently, two solutions are provided. In one solution, specified species are treated as molecular entities to define the ideal solution. In the second, mixing is assumed to occur ideally on cationic and anionic lattices. The different results obtained in these calculations illustrate the importance of condensed phase modeling in the analyses of high temperature melt interactions with concrete. Part B of the exercise consists of six problems in which the temperatures, pressures and bulk compositions of the melts are specified. Data and speciation are to be supplied as parts of the solutions to the problems. Results of calculations for these six problems are presented. Additional solutions are provided to illustrate the effects of speciation in the condensed oxide phase, non-ideality in the condensed metal phase and uncertainty in the thermodynamic properties of gas phase species. 17 refs., 14 tabs

  8. From Physical Benchmarks To Mental Benchmarks: 
A Four Dimensions Dynamic Model To Assure The Quality Of 
Instructional Activities In Electronic And Virtual Learning Environments

    OpenAIRE

    Hamdy AHMED ABDELAZIZ

    2013-01-01

    The objective of this paper was to develop a four dimensions dynamic model for designing instructional activities appropriate to electronic and virtual learning environments. The suggested model is guided by learning principles of cognitivism, constructivism, and connectivism learning theories in order to help online learners to build and acquire meaningful knowledge and experiences. The proposed model consists of four dynamic dimensions: Ø Cognitive presence activities; Ø Psychologica...

  9. Benchmarking Deep Networks for Predicting Residue-Specific Quality of Individual Protein Models in CASP11

    Science.gov (United States)

    Liu, Tong; Wang, Yiheng; Eickholt, Jesse; Wang, Zheng

    2016-01-01

    Quality assessment of a protein model aims to predict the absolute or relative quality of a protein model using computational methods before the native structure is available. Single-model methods only need one model as input and can predict the absolute residue-specific quality of an individual model. Here, we have developed four novel single-model methods (Wang_deep_1, Wang_deep_2, Wang_deep_3, and Wang_SVM) based on stacked denoising autoencoders (SdAs) and support vector machines (SVMs). We evaluated these four methods along with six other methods participating in CASP11 at the global and local levels using Pearson’s correlation coefficients and ROC analysis. As for residue-specific quality assessment, our four methods achieved better performance than most of the six other CASP11 methods in distinguishing the reliably modeled residues from the unreliable ones, as measured by ROC analysis; and our SdA-based method Wang_deep_1 achieved the highest accuracy, 0.77, compared to the SVM-based methods and our ensemble of an SVM and SdAs. However, we found that Wang_deep_2 and Wang_deep_3, both based on an ensemble of multiple SdAs and an SVM, performed slightly better than Wang_deep_1 in terms of ROC analysis, indicating that integrating an SVM with deep networks works well in terms of certain measurements.

  10. Benchmarking Deep Networks for Predicting Residue-Specific Quality of Individual Protein Models in CASP11

    Science.gov (United States)

    Liu, Tong; Wang, Yiheng; Eickholt, Jesse; Wang, Zheng

    2016-01-01

    Quality assessment of a protein model aims to predict the absolute or relative quality of a protein model using computational methods before the native structure is available. Single-model methods only need one model as input and can predict the absolute residue-specific quality of an individual model. Here, we have developed four novel single-model methods (Wang_deep_1, Wang_deep_2, Wang_deep_3, and Wang_SVM) based on stacked denoising autoencoders (SdAs) and support vector machines (SVMs). We evaluated these four methods along with six other methods participating in CASP11 at the global and local levels using Pearson’s correlation coefficients and ROC analysis. As for residue-specific quality assessment, our four methods achieved better performance than most of the six other CASP11 methods in distinguishing the reliably modeled residues from the unreliable ones, as measured by ROC analysis; and our SdA-based method Wang_deep_1 achieved the highest accuracy, 0.77, compared to the SVM-based methods and our ensemble of an SVM and SdAs. However, we found that Wang_deep_2 and Wang_deep_3, both based on an ensemble of multiple SdAs and an SVM, performed slightly better than Wang_deep_1 in terms of ROC analysis, indicating that integrating an SVM with deep networks works well in terms of certain measurements. PMID:26763289
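
    The ROC evaluation described above can be reproduced in outline as follows; the labels and scores here are synthetic, and scikit-learn is our tooling choice, not necessarily the authors'.

```python
# Sketch of a residue-level ROC analysis: score each residue, compare the
# scores against a binary "reliably modeled" label, and compute the AUC.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(5)
reliable = rng.integers(0, 2, 500)               # 1 = residue reliably modeled
score = reliable * 0.7 + rng.random(500) * 0.6   # synthetic predictor output

auc = roc_auc_score(reliable, score)
fpr, tpr, thresholds = roc_curve(reliable, score)
print(f"AUC = {auc:.2f} over {len(thresholds)} thresholds")
```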

  11. Validation of Neptune CFD two phase flow models using the OECD/NRC BFBT benchmark database

    International Nuclear Information System (INIS)

    In this work the flow within a fuel assembly of a boiling water reactor was modeled using NEPTUNE-CFD. The most important parameters defining the flow, such as the incipient boiling condition, the heat flux partitioning and the heat transfer models, are identified and tested against experimental data from the BFBT bundle test. Different heat transfer models are applied to the water/steam interface. Additionally, the heat conduction is solved for the insulator and cladding of the heater rods by coupling NEPTUNE-CFD with the SYRTHES package. The calculated average void fractions are in good agreement with the experimental data, and areas for future improvement are identified. (author)

  12. Assessment of CTF boiling transition and critical heat flux modeling capabilities using the OECD/NRC BFBT and PSBT benchmark databases

    International Nuclear Information System (INIS)

    The need to refine models for best-estimate calculations, based on good-quality experimental data, has been expressed in many recent meetings in the field of nuclear applications. The modeling needs arising in this respect should not be limited to the currently available macroscopic methods but should be extended to next-generation analysis techniques that focus on more microscopic processes. One of the most valuable databases identified for thermal-hydraulics modeling was developed by the Nuclear Power Engineering Corporation (NUPEC), Japan. From 1987 to 1995, NUPEC performed steady-state and transient critical power and departure from nucleate boiling (DNB) test series based on equivalent full-size mock-ups. Considering the reliability not only of the measured data but also of other relevant parameters such as the system pressure, inlet sub-cooling and rod surface temperature, these test series supplied the first substantial database for the development of truly mechanistic and consistent models for boiling transition and critical heat flux. Over the last few years the Pennsylvania State University (PSU), under the sponsorship of the U.S. Nuclear Regulatory Commission (NRC), has prepared, organized, conducted and summarized the OECD/NRC Full-size Fine-mesh Bundle Tests (BFBT) benchmark. The international benchmark activities have been conducted in cooperation with the Nuclear Energy Agency/Organization for Economic Co-operation and Development (NEA/OECD) and the Japan Nuclear Energy Safety (JNES) organization, Japan. Consequently, the JNES has made the Boiling Water Reactor (BWR) NUPEC database available for the purposes of the benchmark. Based on the success of the OECD/NRC BFBT benchmark, the JNES has decided to also release the data from the NUPEC Pressurized Water Reactor (PWR) subchannel and bundle tests for another follow-up international benchmark, entitled the OECD/NRC PWR Subchannel and Bundle Tests (PSBT) benchmark. This paper presents an application of

  13. Terrestrial Microgravity Model and Threshold Gravity Simulation Using Magnetic Levitation

    Science.gov (United States)

    Ramachandran, N.

    2005-01-01

    What is the threshold gravity (minimum gravity level) required for the nominal functioning of the human system? What dosage is required? Do human cell lines behave differently in microgravity in response to an external stimulus? The critical need for such a gravity simulator is emphasized by recent experiments on human epithelial cells and lymphocytes on the Space Shuttle clearly showing that cell growth and function are markedly different from those observed terrestrially. Those differences are also dramatic between cells grown in space and those in Rotating Wall Vessels (RWVs), the NASA bioreactor often used to simulate microgravity, indicating that although morphological growth patterns (three-dimensional growth) can be successfully simulated using RWVs, cell function performance is not reproduced - a critical difference. If cell function is dramatically affected by gravity off-loading, then cell response to stimuli such as radiation, stress, etc. can be very different from terrestrial cell lines. Yet, we have no good gravity simulator for use in study of these phenomena. This represents a profound shortcoming for countermeasures research. We postulate that we can use magnetic levitation of cells and tissue, through the use of strong magnetic fields and field gradients, as a terrestrial microgravity model to study human cells. Specific objectives of the research are: 1. To develop a tried, tested and benchmarked terrestrial microgravity model for cell culture studies; 2. Gravity threshold determination; 3. Dosage (magnitude and duration) of g-level required for nominal functioning of cells; 4. Comparisons of the magnetic levitation model to other models such as RWV, hind limb suspension, etc.; and 5. Cellular response to reduced gravity levels of Moon and Mars.

  14. Benchmarking of wind farm scale wake models in the EERA - DTOC project

    DEFF Research Database (Denmark)

    Réthoré, Pierre-Elouan; Hansen, Kurt Schaldemose; Barthelmie, R.J.;

    2013-01-01

    Designing offshore wind farms next to existing or planned wind farm clusters has recently become a common practice in the North Sea. These types of projects face unprecedented challenges in terms of wind energy siting. The currently ongoing European project FP7 EERA - DTOC (Design Tool for Offshore wind farm Clusters) is aiming at providing a new type of model work-flow to address this issue. The wake modeling part of the EERA - DTOC project is to improve the fundamental understanding of wind turbine wakes and modeling. One of these challenges is to create a new kind of wake modeling work-flow in which large offshore wind farms are analyzed that provide a reasonable range of conditions likely to be experienced in offshore wind farms. The systematic evaluation is based upon high-quality input data that is selected in the sister project IEA - Task 31 “WakeBench”.

  15. From Physical Benchmarks to Mental Benchmarks: A Four Dimensions Dynamic Model to Assure the Quality of Instructional Activities in Electronic and Virtual Learning Environments

    Science.gov (United States)

    Ahmed Abdelaziz, Hamdy

    2013-01-01

    The objective of this paper was to develop a four dimensions dynamic model for designing instructional activities appropriate to electronic and virtual learning environments. The suggested model is guided by learning principles of cognitivism, constructivism, and connectivism learning theories in order to help online learners to build and acquire…

  16. AGRI-INDUSTRY VALUE CHAIN MODEL: A TOOL FOR INDUSTRY BENCHMARKING AND SCENARIO ANALYSIS

    OpenAIRE

    Nazrul, Islam; Xayavong, Vilaphonh

    2010-01-01

    Agri-industry stakeholders need to respond to the challenge of meeting the demands for higher quality products at competitive prices under increased competition in volatile markets. Development and application of an appropriate computer-based agri-industry value chain model can provide strategic options to deal with these challenges. This paper presents an overview of the theoretical foundation and the structure of an agri-industry value chain model and demonstrates its application as a tool ...

  17. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
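
    The merged machine/program characterization reduces, in its simplest form, to a dot product of per-operation times and operation counts; the sketch below illustrates this with invented operation names and numbers, not the grant's actual abstract-machine parameters.

```python
# Illustrative execution-time estimate from a machine characterization
# (seconds per abstract operation) and a program characterization
# (operation counts); all names and values are hypothetical.
machine = {
    "flop": 2.0e-9,      # s per floating-point operation
    "mem_ref": 5.0e-9,   # s per memory reference
    "branch": 1.0e-9,    # s per branch
}
program = {
    "flop": 4.0e9,
    "mem_ref": 1.5e9,
    "branch": 6.0e8,
}

estimate = sum(machine[op] * program[op] for op in program)
print(f"estimated execution time: {estimate:.2f} s")
```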

  18. MARS code developments, benchmarking and applications

    International Nuclear Information System (INIS)

    Recent developments of the MARS Monte Carlo code system for simulation of hadronic and electromagnetic cascades in shielding, accelerator and detector components in the energy range from a fraction of an electron volt up to 100 TeV are described. The physical model of hadron and lepton interactions with nuclei and atoms has undergone substantial improvements. These include a new nuclear cross section library, a model for soft pion production, a cascade-exciton model, a dual parton model, deuteron-nucleus and neutrino-nucleus interaction models, a detailed description of negative hadron and muon absorption, and a unified treatment of muon and charged hadron electromagnetic interactions with matter. New algorithms have been implemented into the code and benchmarked against experimental data. A new Graphical-User Interface has been developed. The code capabilities to simulate cascades and generate a variety of results in complex systems have been enhanced. The MARS system includes links to the MCNP code for neutron and photon transport below 20 MeV, to the ANSYS code for thermal and stress analyses and to the STRUCT code for multi-turn particle tracking in large synchrotrons and collider rings. Results of recent benchmarking of the MARS code are presented. Examples of non-trivial code applications are given for the Fermilab Booster and Main Injector, for a 1.5 MW target station and a muon storage ring

  19. MARS code developments, benchmarking and applications

    International Nuclear Information System (INIS)

    Recent developments of the MARS Monte Carlo code system for simulation of hadronic and electromagnetic cascades in shielding, accelerator and detector components in the energy range from a fraction of an electronvolt up to 100 TeV are described. The physical model of hadron and lepton interactions with nuclei and atoms has undergone substantial improvements. These include a new nuclear cross section library, a model for soft pion production, a cascade-exciton model, a dual parton model, deuteron-nucleus and neutrino-nucleus interaction models, a detailed description of negative hadron and muon absorption, and a unified treatment of muon and charged hadron electromagnetic interactions with matter. New algorithms have been implemented into the code and benchmarked against experimental data. A new Graphical-User Interface has been developed. The code capabilities to simulate cascades and generate a variety of results in complex systems have been enhanced. The MARS system includes links to the MCNP code for neutron and photon transport below 20 MeV, to the ANSYS code for thermal and stress analyses and to the STRUCT code for multi-turn particle tracking in large synchrotrons and collider rings. Results of recent benchmarking of the MARS code are presented. Examples of non-trivial code applications are given for the Fermilab Booster and Main Injector, for a 1.5 MW target station and a muon storage ring. (author)

  20. Multi-physics and multi-scale benchmarking and uncertainty quantification within OECD/NEA framework

    International Nuclear Information System (INIS)

    Highlights: • Presentation of the latest multi-physics multi-scale NEA/OECD benchmarks. • Utilization of high-quality experimental data for detailed comparative analysis. • Inclusion of uncertainty and sensitivity analysis of modeling predictions. • Uncertainty propagation in LWR multi-physics and multi-scale simulations. - Abstract: The development of multi-physics multi-scale coupled methodologies for Light Water Reactor (LWR) analysis requires comprehensive validation and verification procedures, which include well-established benchmarks developed in international cooperation. The Nuclear Energy Agency (NEA) of the Organization for Economic Co-operation and Development (OECD) has provided such a framework, and over the years a number of LWR benchmarks have been developed and successfully conducted. The first set of NEA/OECD benchmarks, which permits testing of the neutronics/thermal-hydraulics coupling and verifies the capability of the coupled codes to analyze complex transients with coupled core/plant interactions, has been completed and documented. These benchmarks provided a validation basis for the new generation of coupled "best-estimate" codes. The above-mentioned OECD/NEA LWR benchmark activities have also stimulated follow-up developments and benchmarks to test these developments. The models utilized have been improved when moving from one benchmark to the next, and this created a need to validate them using high-quality experimental data. A second set of NEA/OECD benchmarks has been initiated by the Expert Group on Uncertainty Analysis in Modelling (EGUAM) at the Nuclear Science Committee (NSC), NEA/OECD, to address the current trends in the development of LWR multi-physics and multi-scale modeling and simulation. These benchmarks include the following common features, which address some of the issues identified in the first set of OECD/NEA benchmarks: (a) utilization of high-quality experimental data; (b) refined local scale modeling in addition

  1. Analytical solutions for benchmarking cold regions subsurface water flow and energy transport models: one-dimensional soil thaw with conduction and advection

    Science.gov (United States)

    Kurylyk, Barret L.; McKenzie, Jeffrey M; MacQuarrie, Kerry T. B.; Voss, Clifford I.

    2014-01-01

    Numerous cold regions water flow and energy transport models have emerged in recent years. Dissimilarities often exist in their mathematical formulations and/or numerical solution techniques, but few analytical solutions exist for benchmarking flow and energy transport models that include pore water phase change. This paper presents a detailed derivation of the Lunardini solution, an approximate analytical solution for predicting soil thawing subject to conduction, advection, and phase change. Fifteen thawing scenarios are examined by considering differences in porosity, surface temperature, Darcy velocity, and initial temperature. The accuracy of the Lunardini solution is shown to be proportional to the Stefan number. The analytical solution results obtained for soil thawing scenarios with water flow and advection are compared to those obtained from the finite element model SUTRA. Three problems, two involving the Lunardini solution and one involving the classic Neumann solution, are recommended as standard benchmarks for future model development and testing.
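
    For orientation, the classic conduction-only Stefan solution, the simplest member of the family of solutions discussed here, can be evaluated in a few lines; the soil properties below are assumed values, and the Lunardini solution extends this kind of estimate with heat capacity and advection.

```python
# Hedged sketch: Stefan estimate of thaw depth under a constant warm
# surface, neglecting soil heat capacity and water flow.
import numpy as np

k_thawed = 1.8              # W/(m K), conductivity of thawed soil (assumed)
Ts, Tf = 5.0, 0.0           # degC, surface and freezing temperatures (assumed)
porosity = 0.4              # saturated pore space holding the ice (assumed)
rho_w, L = 1000.0, 3.34e5   # kg/m^3 and J/kg latent heat of fusion of water

def thaw_depth(t_seconds):
    """Stefan solution: X(t) = sqrt(2 k (Ts - Tf) t / (rho_w L porosity))."""
    return np.sqrt(2.0 * k_thawed * (Ts - Tf) * t_seconds
                   / (rho_w * L * porosity))

for days in (1, 10, 100):
    print(f"{days:>4} d -> thaw depth {thaw_depth(days * 86400.0):.2f} m")
```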

  2. A Generic Multibody Parachute Simulation Model

    Science.gov (United States)

    Neuhaus, Jason Richard; Kenney, Patrick Sean

    2006-01-01

    Flight simulation of dynamic atmospheric vehicles with parachute systems is a complex task that is not easily modeled in many simulation frameworks. In the past, the performance of vehicles with parachutes was analyzed by simulations dedicated to parachute operations, which were generally not used for any other portion of the vehicle flight trajectory. This approach required multiple simulation resources to completely analyze the performance of the vehicle. Recently, improved software engineering practices and increased computational power have allowed a single simulation to model the entire flight profile of a vehicle employing a parachute.
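
    As a minimal sketch of the underlying physics, not the multibody NASA model described above, the following integrates a single-degree-of-freedom descent under gravity and parachute drag; all constants are illustrative.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative constants, not taken from the NASA simulation:
    g = 9.81       # gravitational acceleration, m/s^2
    m = 250.0      # payload mass, kg
    rho = 1.225    # air density, kg/m^3
    Cd_A = 40.0    # parachute drag coefficient times canopy area, m^2

    def descent(t, state):
        """1-DOF vertical descent: gravity versus parachute drag."""
        h, v = state                                # altitude (m), downward speed (m/s)
        drag = 0.5 * rho * Cd_A * v * abs(v) / m    # deceleration from drag
        return [-v, g - drag]

    sol = solve_ivp(descent, (0.0, 60.0), [1000.0, 0.0], max_step=0.1)
    v_terminal = np.sqrt(2 * m * g / (rho * Cd_A))  # analytical check, ~10 m/s
    print(sol.y[1][-1], v_terminal)
    ```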

  3. Singlet Extensions of the Standard Model at LHC Run 2: Benchmarks and Comparison with the NMSSM

    CERN Document Server

    Costa, Raul; Sampaio, Marco O P; Santos, Rui

    2015-01-01

    The Complex singlet extension of the Standard Model (CxSM) is the simplest extension which provides scenarios for Higgs pair production with different masses. The model has two interesting phases: the dark matter phase, with a Standard Model-like Higgs boson, a new scalar and a dark matter candidate; and the broken phase, with all three neutral scalars mixing. In the latter phase, Higgs decays into a pair of different Higgs bosons are possible. In this study we analyse Higgs-to-Higgs decays in the framework of singlet extensions of the Standard Model (SM), with focus on the CxSM. After demonstrating that scenarios with large rates for such chain decays are possible, we perform a comparison between the NMSSM and the CxSM. We find that, based on Higgs-to-Higgs decays, the only possibility to distinguish the two models at the LHC run 2 is through final states with two different scalars. This conclusion builds a strong case for searches for final states with two different scalars at the LHC run 2. Finally, we p...

  4. Finite Element Method Modeling of Sensible Heat Thermal Energy Storage with Innovative Concretes and Comparative Analysis with Literature Benchmarks

    OpenAIRE

    Claudio Ferone; Francesco Colangelo; Domenico Frattini; Giuseppina Roviello; Raffaele Cioffi; Rosa di Maggio

    2014-01-01

    Efficient systems for high performance buildings are required to improve the integration of renewable energy sources and to reduce primary energy consumption from fossil fuels. This paper is focused on sensible heat thermal energy storage (SHTES) systems using solid media and numerical simulation of their transient behavior using the finite element method (FEM). Unlike other papers in the literature, the numerical model and simulation approach has simultaneously taken into consideration vario...
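
    The paper uses the finite element method; as a hedged stand-in, the sketch below marches a 1-D transient heat conduction problem through a solid storage module with an explicit finite-difference scheme, which reproduces the same qualitative charging behaviour. Material properties are generic concrete values, not the innovative mixes studied in the paper.

    ```python
    import numpy as np

    # Generic concrete properties (illustrative):
    k = 1.5        # thermal conductivity, W/(m K)
    rho = 2200.0   # density, kg/m^3
    cp = 880.0     # specific heat, J/(kg K)
    alpha = k / (rho * cp)        # thermal diffusivity, m^2/s

    L, nx = 0.5, 51               # 0.5 m thick module, 51 nodes
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / alpha      # below the explicit stability limit 0.5*dx^2/alpha

    T = np.full(nx, 20.0)         # initial temperature, degC
    T[0] = 300.0                  # charging face held at 300 degC

    for _ in range(20_000):       # march the transient forward in time
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        T[-1] = T[-2]             # adiabatic back face

    # Approximate stored sensible heat per unit face area, J/m^2:
    print((rho * cp * (T - 20.0)).sum() * dx)
    ```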

  5. Benchmarking of wind farm scale wake models in the EERA - DTOC project

    OpenAIRE

    Réthoré, Pierre-Elouan; Hansen, Kurt Schaldemose; R. J. Barthelmie; Pryor, S. C.; Sieros, G.; Prospathopoulos, J.; J.M.L.M. Palma; Gomes, V.C.; Schepers, G.; Stuart, P.; T. Young; Rodrigo, J.S.; Larsen, Gunner Chr.; Larsen , Torben J.; Ott, Søren

    2013-01-01

    Designing offshore wind farms next to existing or planned wind farm clusters has recently become common practice in the North Sea. These types of projects face unprecedented challenges in terms of wind energy siting. The ongoing European FP7 project EERA - DTOC (Design Tool for Offshore wind farm Clusters) aims to provide a new type of model work-flow to address this issue. The wake modeling part of the EERA - DTOC project aims to improve the fundamental understanding of wind ...
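
    One of the simplest engineering wake models of the kind compared in such exercises is the Jensen (Park) top-hat model. The sketch below evaluates it for an illustrative single-wake case; it is a generic textbook formulation, not the EERA - DTOC work-flow itself.

    ```python
    import numpy as np

    def jensen_deficit(x, u0, ct, rotor_radius, k=0.05):
        """Velocity deficit (m/s) at distance x behind a turbine from the
        Jensen (Park) top-hat wake model.

        u0: free-stream wind speed (m/s); ct: thrust coefficient;
        k: wake decay constant (offshore values around 0.04-0.05)."""
        return u0 * (1.0 - np.sqrt(1.0 - ct)) * \
               (rotor_radius / (rotor_radius + k * x)) ** 2

    # Illustrative case: 10 m/s inflow, Ct = 0.8, 40 m rotor radius,
    # evaluated 7 rotor diameters downstream.
    u0, ct, r0 = 10.0, 0.8, 40.0
    x = 7 * 2 * r0
    print(u0 - jensen_deficit(x, u0, ct, r0))   # waked wind speed, m/s
    ```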

  6. New LHC benchmarks for the CP-conserving two-Higgs-doublet model

    OpenAIRE

    Haber, Howard E.; Stål, Oscar

    2015-01-01

    We introduce a strategy to study the parameter space of the general, CP-conserving, two-Higgs-doublet model (2HDM) with a softly broken Z2 symmetry by means of a new “hybrid” basis. In this basis the input parameters are the measured values of the mass of the observed Standard Model (SM)-like Higgs boson and its coupling strength to vector boson pairs, the mass of the second CP-even Higgs boson, the ratio of neutral Higgs vacuum expectation values, and three additional dimensionless parame...

  7. Groundwater risk assessment for a Polycyclic Aromatic Hydrocarbons (PAH) contaminated site ; benchmarking and validation of numerical transport models

    OpenAIRE

    Rollin, Claire; Baroudi, Hafid; Ben Slimane, Férid

    2001-01-01

    The objective is to test the modelling approaches and the reliability of the codes used in water risk assessment. The final goal is to elaborate, for each group of pollutants, guidelines that could serve as scientific support for pollutant transport modelling in groundwater and soil. The models produced by five teams to simulate contamination of soils and groundwater by PAH at a disused coke plant site were compared. The hydrogeology is characterised by a superficial aquifer and a chalk aquifer, the la...
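
    Analytical solutions of the 1-D advection-dispersion equation are commonly used to verify numerical transport codes of the kind compared here. The sketch below evaluates the classical Ogata-Banks solution for a continuous source at x = 0; the parameter values are illustrative, not site data.

    ```python
    import numpy as np
    from scipy.special import erfc

    def ogata_banks(x, t, v, D, c0=1.0):
        """Ogata-Banks (1961) solution of the 1-D advection-dispersion
        equation for a continuous injection at x = 0 into an initially
        clean aquifer. v: pore velocity (m/d); D: dispersion coeff. (m^2/d)."""
        arg1 = (x - v * t) / (2.0 * np.sqrt(D * t))
        arg2 = (x + v * t) / (2.0 * np.sqrt(D * t))
        return 0.5 * c0 * (erfc(arg1) + np.exp(v * x / D) * erfc(arg2))

    # Illustrative: relative concentration 50 m from the source after 400 days.
    print(ogata_banks(x=50.0, t=400.0, v=0.1, D=0.5))
    ```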

  8. Shale gas technology innovation rate impact on economic Base Case – Scenario model benchmarks

    International Nuclear Information System (INIS)

    Highlights: • Cash flow models control which technology is affordable in emerging shale gas plays. • Impact of technology innovation on IRR can be as important as wellhead price hikes. • Cash flow models are useful for technology decisions that make shale gas plays economic. • The economic gap can be closed by appropriate technology innovation. - Abstract: Low gas wellhead prices in North America have put its shale gas industry under high competitive pressure. Rapid technology innovation can help companies to improve the economic performance of shale gas fields. Cash flow models are paramount for setting effective production and technology innovation targets to achieve positive returns on investment in all global shale gas plays. Future cash flow of a well (or cluster of wells) may either improve further or deteriorate, depending on: (1) the regional volatility in gas prices at the wellhead – which must pay for the gas resource extraction, and (2) the cost and effectiveness of the well technology used. Gas price is an externality and cannot be controlled by individual companies, but well technology cost can be reduced while improving production output. We assume two plausible scenarios for well technology innovation and model the return on investment while checking the sensitivity to gas price volatility. It appears that well technology innovation – if paced fast enough – can fully offset the negative impact of gas price decline on shale well profits; the required innovation rates are quantified in our sensitivity analysis.
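
    A hedged sketch of the kind of cash flow calculation the abstract refers to follows: an Arps hyperbolic decline curve feeding a discounted cash flow. All well parameters, prices and costs are invented for illustration and are not calibrated to any play.

    ```python
    import numpy as np

    def hyperbolic_rate(t_months, qi, di_annual, b):
        """Arps hyperbolic decline: q(t) = qi / (1 + b * Di * t)^(1/b)."""
        t_years = t_months / 12.0
        return qi / (1.0 + b * di_annual * t_years) ** (1.0 / b)

    months = np.arange(1, 121)    # 10-year horizon
    q = hyperbolic_rate(months, qi=300_000.0, di_annual=0.7, b=1.1)  # Mcf/month

    price = 3.0          # wellhead gas price, $/Mcf
    opex_frac = 0.25     # share of revenue absorbed by operating costs
    capex = 7.0e6        # upfront drilling and completion cost, $

    cash = q * price * (1.0 - opex_frac)     # monthly net cash flow, $
    r = 0.10 / 12.0                          # 10%/yr discount rate, monthly
    npv = -capex + np.sum(cash / (1.0 + r) ** months)
    print(f"NPV at 10%/yr: ${npv:,.0f}")
    ```

    Raising qi or lowering capex in this sketch plays the role of "technology innovation", while varying price probes the wellhead price sensitivity discussed above.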

  9. Applicability domains for classification problems: benchmarking of distance to models for AMES mutagenicity set

    Science.gov (United States)

    For QSAR and QSPR modeling of biological and physicochemical properties, estimating the accuracy of predictions is a critical problem. The “distance to model” (DM) can be defined as a metric that quantifies the similarity between the training set molecules and a test set compound ...
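
    A common concrete choice of DM is the mean Euclidean distance to the k nearest training set molecules in descriptor space. The sketch below implements that choice with scikit-learn on random stand-in descriptors; the percentile threshold is one simple convention, not the specific metrics benchmarked in the paper.

    ```python
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 10))   # hypothetical descriptor matrix
    X_test = rng.normal(size=(20, 10))

    # Distance to model: mean Euclidean distance to the k nearest training
    # molecules. Large values flag compounds outside the applicability domain.
    k = 5
    dist, _ = NearestNeighbors(n_neighbors=k).fit(X_train).kneighbors(X_test)
    dm = dist.mean(axis=1)

    # Simple domain rule: inside if DM is below the 95th percentile of the
    # training set's own leave-one-out kNN distances (column 0 is self-distance).
    dist_tr, _ = NearestNeighbors(n_neighbors=k + 1).fit(X_train).kneighbors(X_train)
    threshold = np.percentile(dist_tr[:, 1:].mean(axis=1), 95)
    print(dm <= threshold)
    ```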

  10. Benchmarking of numerical models describing the dispersion of radionuclides in the Arctic Seas

    DEFF Research Database (Denmark)

    Scott, E.M.; Gurbutt, P.; Harms, I.;

    1997-01-01

    As part of the International Arctic Seas Assessment Project (IASAP) of the International Atomic Energy Agency (IAEA), a working group was created to model the dispersal and transfer of radionuclides released from radioactive waste disposed of in the Kara Sea. The objectives of this group are: (1...
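
    Compartment (box) models are a standard tool for this kind of dispersal assessment. As a hedged illustration, the sketch below integrates a two-box model with water exchange and radioactive decay; the volumes, exchange rate and source term are invented, not IASAP values.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy two-box model: a source compartment exchanging water with an
    # outer-shelf compartment, both losing activity to radioactive decay.
    V1, V2 = 1.0e13, 5.0e13        # compartment volumes, m^3
    Q = 1.0e12                      # water exchange flow, m^3/yr
    lam = np.log(2) / 30.0          # decay constant for a ~30 yr nuclide, 1/yr
    release = 1.0e12                # source term into box 1, Bq/yr

    def boxes(t, c):
        c1, c2 = c                  # concentrations, Bq/m^3
        dc1 = release / V1 + Q * (c2 - c1) / V1 - lam * c1
        dc2 = Q * (c1 - c2) / V2 - lam * c2
        return [dc1, dc2]

    sol = solve_ivp(boxes, (0.0, 100.0), [0.0, 0.0], dense_output=True)
    print(sol.sol(50.0))            # concentrations after 50 years
    ```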

  11. FROM PHYSICAL BENCHMARKS TO MENTAL BENCHMARKS: A Four Dimensions Dynamic Model to Assure the Quality of Instructional Activities in Electronic and Virtual Learning Environments

    Directory of Open Access Journals (Sweden)

    Hamdy AHMED ABDELAZIZ

    2013-04-01

    Full Text Available The objective of this paper was to develop a four-dimension dynamic model for designing instructional activities appropriate to electronic and virtual learning environments. The suggested model is guided by learning principles of the cognitivist, constructivist, and connectivist learning theories in order to help online learners build and acquire meaningful knowledge and experiences. The proposed model consists of four dynamic dimensions: (1) cognitive presence activities; (2) psychological presence activities; (3) social presence activities; and (4) mental presence activities. Cognitive presence activities refer to the learner's ability to form a cognitive vision regarding the content of learning; this vision is the starting point for constructing meaningful understanding. Psychological presence activities refer to the learner's ability to construct self-awareness and trustworthiness, which works as a psychological schema to decrease the load of learning at a distance. Social presence activities refer to the learner's ability to share knowledge with others in a way that constructs a community of practice and assures a global understanding of learning. Finally, mental presence activities refer to the learner's ability to construct mental models that represent knowledge creation and help learners make learning outcomes and experiences transferable. Applying the proposed model will improve the process of developing e-based activities through a set of adaptive and dynamic frameworks and guidelines that meet online learners' cognitive, psychological, social and mental presence.

  12. SSA Modeling and Simulation with DIRSIG

    Science.gov (United States)

    Bennett, D.; Allen, D.; Dank, J.; Gartley, M.; Tyler, D.

    2014-09-01

    We describe and demonstrate a robust, physics-based modeling system to simulate ground and space-based observations of both LEO and GEO objects. With the DIRSIG radiometry engine at its core, our system exploits STK, adaptive optics modeling, and detector effects to produce high fidelity simulated images and radiometry. Key to generating quantitative simulations is our ability to attribute engineering-quality, faceted CAD models with reflective and emissive properties derived from laboratory measurements, including the spatial structure of such difficult materials as MLI. In addition to simulated video imagery, we will demonstrate a computational procedure implementing a position-based dynamics approach to shrink wrap MLI around space components.

  13. Assessment of CTF Boiling Transition and Critical Heat Flux Modeling Capabilities Using the OECD/NRC BFBT and PSBT Benchmark Databases

    Directory of Open Access Journals (Sweden)

    Maria Avramova

    2013-01-01

    Full Text Available Over the last few years, the Pennsylvania State University (PSU), under the sponsorship of the US Nuclear Regulatory Commission (NRC), has prepared, organized, conducted, and summarized two international benchmarks based on the NUPEC data—the OECD/NRC Full-Size Fine-Mesh Bundle Test (BFBT) Benchmark and the OECD/NRC PWR Sub-Channel and Bundle Test (PSBT) Benchmark. The benchmarks’ activities have been conducted in cooperation with the Nuclear Energy Agency/Organization for Economic Co-operation and Development (NEA/OECD) and the Japan Nuclear Energy Safety (JNES) Organization. This paper presents an application of the joint Penn State University/Technical University of Madrid (UPM) version of the well-known sub-channel code COBRA-TF (Coolant Boiling in Rod Array-Two Fluid), namely CTF, to the steady state critical power and departure from nucleate boiling (DNB) exercises of the OECD/NRC BFBT and PSBT benchmarks. The goal is two-fold: firstly, to assess these models and to examine their strengths and weaknesses; and secondly, to identify the areas for improvement.
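
    As a minimal illustration of what evaluating a CHF model involves, and emphatically not one of the rod-bundle DNB correlations implemented in CTF, the sketch below computes the classical Zuber pool-boiling critical heat flux for saturated water at atmospheric pressure.

    ```python
    import math

    def zuber_chf(h_fg, rho_f, rho_g, sigma, g=9.81):
        """Classical Zuber pool-boiling CHF correlation (W/m^2):
        q_chf = (pi/24) * h_fg * sqrt(rho_g) * (sigma*g*(rho_f - rho_g))**0.25."""
        return (math.pi / 24.0) * h_fg * math.sqrt(rho_g) * \
               (sigma * g * (rho_f - rho_g)) ** 0.25

    # Saturated water at atmospheric pressure:
    print(zuber_chf(h_fg=2.257e6, rho_f=958.0, rho_g=0.598, sigma=0.0589))
    # ~1.1e6 W/m^2, the textbook value for water at 1 atm
    ```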

  14. Optical component modelling and circuit simulation

    OpenAIRE

    Guilloton, Laurent; Tedjini, Smail; Vuong, Tan-Phu; Lemaitre Auger, Pierre

    2005-01-01

    This communication introduces the modelling of optical and optoelectronic components and the simulation of optical circuits and links using a Computer aided design (CAD) tool. A specific library describing several optical components and devices has been established and integrated as User-Defined-Models into the simulator. Then, the simulation facilities of our tool are used to study an integrated optical circuit: an integrated interferometer sensor.

  15. From Simulation Model to Critique of Structuration

    OpenAIRE

    2005-01-01

    In this paper I use a simulation model of Learning to Labor (Willis 1981) to critique Giddens’ structuration theory (Giddens 1984). The simulation model represents interactions between a group of non-conformist boys from Birmingham, England and an industrial capitalist business. I present the results of three simulation experiments designed to test the limits of structuration theory by exploring when decision-making based in cultural meaning conflicts with structural power and when it reprodu...

  16. AIRBench: A DEA-based model for the benchmarking of airports revenues

    OpenAIRE

    Perfetti, Francesca

    2014-01-01

    The socio-economic development of countries is a key factor in the growth of people's mobility, bringing an increase in inter- and extra-continental air passenger flows. On the other hand, the prevailing business model of low-cost carriers is decreasing the revenues that airports derive from aviation operations. These two effects are increasing the awareness of airport management, historically focused on aviation operations, of the mix of aviation and commercial revenues. AIRBench is a DEA-based b...
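
    A hedged sketch of the input-oriented CCR model at the core of DEA-based benchmarking tools of this kind follows: each airport's efficiency score is obtained from a small linear program. The input/output data are made-up numbers, not the AIRBench dataset.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def ccr_efficiency(X, Y, j0):
        """Input-oriented CCR DEA efficiency of unit j0.

        X: (n_units, n_inputs), Y: (n_units, n_outputs). Solves
        min theta  s.t.  sum_j lam_j * x_j <= theta * x_j0,
                         sum_j lam_j * y_j >= y_j0,  lam >= 0."""
        n, m = X.shape
        s = Y.shape[1]
        c = np.zeros(n + 1)
        c[0] = 1.0                                     # minimise theta
        A_in = np.hstack([-X[j0].reshape(m, 1), X.T])  # inputs:  X.T lam <= theta x_j0
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])    # outputs: Y.T lam >= y_j0
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.concatenate([np.zeros(m), -Y[j0]])
        bounds = [(None, None)] + [(0.0, None)] * n
        return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x[0]

    # Hypothetical data: inputs = [staff, terminal area]; outputs =
    # [aviation revenue, commercial revenue] for four airports.
    X = np.array([[120, 50], [200, 80], [150, 60], [90, 45]], dtype=float)
    Y = np.array([[30, 10], [55, 25], [40, 12], [28, 14]], dtype=float)
    for j in range(len(X)):
        print(f"airport {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
    ```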

  17. Benchmarking the invariant embedding method against analytical solutions in model transport problems

    OpenAIRE

    Wahlberg Malin; Pázsit Imre

    2006-01-01

    The purpose of this paper is to demonstrate the use of the invariant embedding method in a few model transport problems for which it is also possible to obtain an analytical solution. The use of the method is demonstrated in three different areas. The first is the calculation of the energy spectrum of sputtered particles from a scattering medium without absorption, where the multiplication (particle cascade) is generated by recoil production. Both constant and energy-dependent cross-sections ...

  18. Modelling benchmark of a laboratory test on hydro-mechanical behavior of bentonite

    Czech Academy of Sciences Publication Activity Database

    Millard, A.; Barnichon, J. D.; Mokni, N.; Thatcher, K. E.; Bond, A.; Blaheta, Radim

    London: CRC Press, Taylor and Francis Group, 2014 - (Khalli, N.; Russell, A.; Khoshghalb, A.), s. 489-495 ISBN 978-1-138-00150-3. [International Conference on Unsaturated Soils /6./. Sydney (AU), 02.07.2014-04.07.2014] Institutional support: RVO:68145535 Keywords : modelling * SEALEX experiment * mechanical behaviour of bentonite Subject RIV: BA - General Mathematics http://www.crcnetbase.com/doi/abs/10.1201/b17034-68

  19. Whole-building Hygrothermal Simulation Model

    DEFF Research Database (Denmark)

    Rode, Carsten; Grau, Karl

    2003-01-01

    An existing integrated simulation tool for dynamic thermal simulation of buildings was extended with a transient model for moisture release and uptake in building materials. Validation of the new model was begun with a comparison against measurements in an outdoor test cell furnished with single

  20. BN-600 full MOX core benchmark analysis

    International Nuclear Information System (INIS)

    As a follow-up of the BN-600 hybrid core benchmark, a full MOX core benchmark was performed within the framework of the IAEA co-ordinated research project. Discrepancies between the values of main reactivity coefficients obtained by the participants for the BN-600 full MOX core benchmark appear to be larger than those in the previous hybrid core benchmarks on traditional core configurations. This arises due to uncertainties in the proper modelling of the axial sodium plenum above the core. It was recognized that the sodium density coefficient strongly depends on the core model configuration of interest (hybrid core vs. fully MOX fuelled core with sodium plenum above the core) in conjunction with the calculation method (diffusion vs. transport theory). The effects of the discrepancies revealed between the participants results on the ULOF and UTOP transient behaviours of the BN-600 full MOX core were investigated in simplified transient analyses. Generally the diffusion approximation predicts more benign consequences for the ULOF accident but more hazardous ones for the UTOP accident when compared with the transport theory results. The heterogeneity effect does not have any significant effect on the simulation of the transient. The comparison of the transient analyses results concluded that the fuel Doppler coefficient and the sodium density coefficient are the two most important coefficients in understanding the ULOF transient behaviour. In particular, the uncertainty in evaluating the sodium density coefficient distribution has the largest impact on the description of reactor dynamics. This is because the maximum sodium temperature rise takes place at the top of the core and in the sodium plenum.