WorldWideScience

Sample records for benchmark simulation model

  1. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmark simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  2. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.;

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...

  3. Benchmark Simulation Model No 2 in Matlab-Simulink

    DEFF Research Database (Denmark)

    Vrecko, Darko; Gernaey, Krist; Rosen, Christian;

    2006-01-01

    In this paper, implementation of the Benchmark Simulation Model No 2 (BSM2) within Matlab-Simulink is presented. The BSM2 is developed for plant-wide WWTP control strategy evaluation on a long-term basis. It consists of a pre-treatment process, an activated sludge process and sludge treatment...

  4. Benchmarking computational fluid dynamics models for lava flow simulation

    Science.gov (United States)

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi

    2016-04-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, and COMSOL. Using the new benchmark scenarios defined in Cordonnier et al. (Geol Soc SP, 2015) as a guide, we model viscous, cooling, and solidifying flows over horizontal and sloping surfaces, topographic obstacles, and digital elevation models of natural topography. We compare model results to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We can apply these models to reconstruct past lava flows in Hawai'i and Saudi Arabia using parameters assembled from morphology, textural analysis, and eruption observations as natural test cases. Our study highlights the strengths and weaknesses of each code, including accuracy and computational costs, and provides insights regarding code selection.

  5. Benchmark Simulation Model No 2 – finalisation of plant layout and default control strategy

    DEFF Research Database (Denmark)

    Nopens, I.; Benedetti, L.; Jeppsson, U.;

    2010-01-01

    The COST/IWA Benchmark Simulation Model No 1 (BSM1) has been available for almost a decade. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the research work related to the benchmark simulation models has resulted in...... be evaluated in a realistic fashion in the one-week BSM1 evaluation period. In this paper, the finalised plant layout is summarised and, as was done for BSM1, a default control strategy is proposed. A demonstration of how BSM2 can be used to evaluate control strategies is also given....

  6. Towards a benchmark simulation model for plant-wide control strategy performance evaluation of WWTPs

    DEFF Research Database (Denmark)

    Jeppsson, Ulf; Rosen, Christian; Alex, Jens;

    2006-01-01

    The COST/IWA benchmark simulation model has been available for seven years. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the benchmark has resulted in more than 100 publications, not only in Europe but also...... worldwide, demonstrates the interest in such a tool within the research community. In this paper, an extension of the benchmark simulation model no 1 (BSM1) is proposed. This extension aims at facilitating control strategy development and performance evaluation at a plant-wide level and, consequently...... the changes, the evaluation period has been extended to one year. A prolonged evaluation period allows for long-term control strategies to be assessed and enables the use of control handles that cannot be evaluated in a realistic fashion in the one-week BSM1 evaluation period. In the paper, the extended plant...

  7. Benchmark Simulation Model No 2: finalisation of plant layout and default control strategy.

    Science.gov (United States)

    Nopens, I; Benedetti, L; Jeppsson, U; Pons, M-N; Alex, J; Copp, J B; Gernaey, K V; Rosen, C; Steyer, J-P; Vanrolleghem, P A

    2010-01-01

    The COST/IWA Benchmark Simulation Model No 1 (BSM1) has been available for almost a decade. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the research work related to the benchmark simulation models has resulted in more than 300 publications worldwide demonstrates the interest in and need for such tools within the research community. Recent efforts within the IWA Task Group on "Benchmarking of control strategies for WWTPs" have focused on an extension of the benchmark simulation model. This extension aims at facilitating control strategy development and performance evaluation at a plant-wide level and, consequently, includes both pretreatment of wastewater as well as the processes describing sludge treatment. The motivation for the extension is the increasing interest and need to operate and control wastewater treatment systems not only at an individual process level but also on a plant-wide basis. To facilitate the changes, the evaluation period has been extended to one year. A prolonged evaluation period allows for long-term control strategies to be assessed and enables the use of control handles that cannot be evaluated in a realistic fashion in the one-week BSM1 evaluation period. In this paper, the finalised plant layout is summarised and, as was done for BSM1, a default control strategy is proposed. A demonstration of how BSM2 can be used to evaluate control strategies is also given. PMID:21045320
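
    BSM evaluations like the one described aggregate effluent time series into scalar indices. As a minimal sketch, the snippet below computes a BSM1-style effluent quality index (EQI); the weighting factors and variable names follow commonly cited BSM1 values and should be treated as assumptions to be checked against the benchmark documentation.

      # Minimal sketch of a BSM1-style effluent quality index (EQI).
      # Assumes time series sampled over the evaluation period T (days);
      # the weights follow commonly cited BSM1 values and should be
      # verified against the actual benchmark documentation.

      def effluent_quality_index(t_days, q_e, tss, cod, tkn, no3, bod5):
          """EQI in kg pollution units/day, trapezoidal integration.

          t_days: sample times [d]; q_e: effluent flow [m3/d];
          concentrations in g/m3.
          """
          beta = {"tss": 2.0, "cod": 1.0, "tkn": 30.0, "no3": 10.0, "bod5": 2.0}
          load = [
              q * (beta["tss"] * a + beta["cod"] * b + beta["tkn"] * c
                   + beta["no3"] * d + beta["bod5"] * e)
              for q, a, b, c, d, e in zip(q_e, tss, cod, tkn, no3, bod5)
          ]
          # trapezoidal rule over the evaluation horizon
          integral = sum(0.5 * (load[i] + load[i + 1]) * (t_days[i + 1] - t_days[i])
                         for i in range(len(load) - 1))
          T = t_days[-1] - t_days[0]
          return integral / (T * 1000.0)  # g -> kg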

  8. Benchmarking of dynamic simulation predictions in two software platforms using an upper limb musculoskeletal model.

    Science.gov (United States)

    Saul, Katherine R; Hu, Xiao; Goehler, Craig M; Vidt, Meghan E; Daly, Melissa; Velisar, Anca; Murray, Wendy M

    2015-01-01

    Several open-source or commercially available software platforms are widely used to develop dynamic simulations of movement. While computational approaches are conceptually similar across platforms, technical differences in implementation may influence output. We present a new upper limb dynamic model as a tool to evaluate potential differences in predictive behavior between platforms. We evaluated to what extent differences in technical implementations in popular simulation software environments result in differences in kinematic predictions for single and multijoint movements using EMG- and optimization-based approaches for deriving control signals. We illustrate the benchmarking comparison using SIMM-Dynamics Pipeline-SD/Fast and OpenSim platforms. The most substantial divergence results from differences in muscle model and actuator paths. This model is a valuable resource and is available for download by other researchers. The model, data, and simulation results presented here can be used by future researchers to benchmark other software platforms and software upgrades for these two platforms. PMID:24995410

  9. BSM-MBR: A Benchmark Simulation Model to Compare Control and Operational Strategies for Membrane Bioreactors

    OpenAIRE

    Maere, Thomas; Verrecht, Bart; Moerenhout, Stefanie; Judd, Simon J.; Nopens, Ingmar

    2011-01-01

    A benchmark simulation model for membrane bioreactors (BSM-MBR) was developed to evaluate operational and control strategies in terms of effluent quality and operational costs. The configuration of the existing BSM1 for conventional wastewater treatment plants was adapted using reactor volumes, pumped sludge flows and membrane filtration for the water-sludge separation. The BSM1 performance criteria were extended for an MBR taking into account additional pumping requirements for permeate prod...

  10. Catchment & sewer network simulation model to benchmark control strategies within urban wastewater systems

    DEFF Research Database (Denmark)

    Saagi, Ramesh; Flores Alsina, Xavier; Fu, Guangtao;

    2016-01-01

    This paper aims at developing a benchmark simulation model to evaluate control strategies for the urban catchment and sewer network. Various modules describing wastewater generation in the catchment, its subsequent transport and storage in the sewer system are presented. Global/local overflow based...... explaining possible applications of the proposed model for the evaluation of: 1) control strategies; and 2) system modifications are provided. The proposed framework is specifically designed to allow for easy development and comparison of multiple control possibilities and integration with existing/standard wastewater treatment models (Activated Sludge Models) to finally promote integrated assessment of urban wastewater systems....

  11. Simulation of the multiple-fracture model. Phase 1, benchmark test 2 of the DECOVALEX project

    International Nuclear Information System (INIS)

    DECOVALEX is an international co-operative project for the development of coupled models and their validation against experiments in nuclear waste isolation. The emphasis of this project is on the coupled thermo-hydro-mechanical effects in jointed hard rock. In the first phase of DECOVALEX, two benchmark tests and one test case have been selected for modelling. This report describes the results of the second benchmark test, the Multiple-Fracture Model, as obtained by the AECL Research team. This problem relates to groundwater flow and coupled thermo-hydro-mechanical deformation in a simple system comprising several blocks of porous medium and several intersecting fractures. The simulation domain is defined to be a rectangular box that is made up of an assemblage of nine blocks separated by two sets of discontinuities (planar fractures). The rock mass is subjected to in situ stress and thermal loading as well as a hydraulic gradient. With no-flow and adiabatic conditions on the boundaries, a heat flux acting along a section of one of the lateral boundaries induces expansion of the rock and causes shearing in the model. The MOTIF finite-element code, developed at AECL, has been employed to simulate this problem. The simulation results show that thermal expansion of the solid blocks reduced the aperture and, consequently, the permeability of the fractures. As a result, the fluid velocity along the horizontal fractures decreased with time, except in the vicinity close to the heat source, where the velocity initially increased and then decreased as a result of the decrease in permeability. (author). 9 refs., 7 tabs., 23 figs
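
    The reported feedback (thermal expansion closes fracture apertures and thereby lowers their permeability) is the behaviour predicted by the parallel-plate "cubic law". The sketch below illustrates that scaling only; the linear aperture-closure relation and all numbers are illustrative assumptions, not the MOTIF constitutive model.

      # Cubic-law sketch: fracture transmissivity scales with aperture cubed,
      # so thermally driven aperture closure reduces flow. The linear closure
      # model below is illustrative, not the MOTIF constitutive relation.

      RHO, G, MU = 1000.0, 9.81, 1.0e-3  # water: density, gravity, viscosity

      def transmissivity(aperture_m):
          """Parallel-plate (cubic law) transmissivity per unit width [m2/s]."""
          return RHO * G * aperture_m ** 3 / (12.0 * MU)

      def closed_aperture(a0, alpha, block_size, dT):
          """Aperture after thermal expansion of adjacent blocks (illustrative)."""
          return max(a0 - alpha * block_size * dT, 0.0)

      a0 = 100e-6                      # initial aperture: 100 microns
      a1 = closed_aperture(a0, alpha=8e-6, block_size=1.0, dT=2.0)
      print(transmissivity(a1) / transmissivity(a0))  # ~0.59 for these numbers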

  12. Towards a plant-wide Benchmark Simulation Model with simultaneous nitrogen and phosphorus removal wastewater treatment processes.

    OpenAIRE

    Flores-Alsina, Xavier; Ikumi, David; Batstone, Damien; Gernaey, Krist; Brouckaert, Chris; Ekama, George A.; Jeppsson, Ulf

    2012-01-01

    It is more than 10 years since the publication of the Benchmark Simulation Model No 1 (BSM1) manual (Copp, 2002). The main objective of BSM1 was creating a platform for benchmarking carbon and nitrogen removal strategies in activated sludge systems. The initial platform evolved into BSM1_LT and BSM2, which allowed the evaluation of monitoring and plant-wide control strategies, respectively. The fact that the BSM platforms have resulted in 300+ publications demonstrates the interest for the to...

  13. Benchmarking Model Variants in Development of a Hardware-in-the-Loop Simulation System

    Science.gov (United States)

    Aretskin-Hariton, Eliot D.; Zinnecker, Alicia M.; Kratz, Jonathan L.; Culley, Dennis E.; Thomas, George L.

    2016-01-01

    Distributed engine control architecture presents a significant increase in complexity over traditional implementations when viewed from the perspective of system simulation and hardware design and test. Even if the overall function of the control scheme remains the same, the hardware implementation can have a significant effect on the overall system performance due to differences in the creation and flow of data between control elements. A Hardware-in-the-Loop (HIL) simulation system is under development at NASA Glenn Research Center that enables the exploration of these hardware dependent issues. The system is based on, but not limited to, the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k). This paper describes the step-by-step conversion from the self-contained baseline model to the hardware-in-the-loop model, and the validation of each step. As the control model hardware fidelity was improved during HIL system development, benchmarking simulations were performed to verify that engine system performance characteristics remained the same. The results demonstrate the goal of the effort: the new HIL configurations have similar functionality and performance compared to the baseline C-MAPSS40k system.

  14. Benchmark Generation and Simulation at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Lagadapati, Mahesh [North Carolina State University (NCSU), Raleigh; Mueller, Frank [North Carolina State University (NCSU), Raleigh; Engelmann, Christian [ORNL

    2016-01-01

    The path to extreme scale high-performance computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures and the performance impact of different architectural choices is an important component of HPC hardware/software co-design. Simulations using models of future HPC systems and communication traces from applications running on existing HPC systems can offer an insight into the performance of future architectures. This work targets technology developed for scalable application tracing of communication events. It focuses on extreme-scale simulation of HPC applications and their communication behavior via lightweight parallel discrete event simulation for performance estimation and evaluation. Instead of simply replaying a trace within a simulator, this work promotes the generation of a benchmark from traces. This benchmark is subsequently exposed to simulation using models to reflect the performance characteristics of future-generation HPC systems. This technique provides a number of benefits, such as eliminating the data intensive trace replay and enabling simulations at different scales. The presented work features novel software co-design aspects, combining the ScalaTrace tool to generate scalable trace files, the ScalaBenchGen tool to generate the benchmark, and the xSim tool to assess the benchmark characteristics within a simulator.
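
    As a toy illustration of the simulate-rather-than-replay idea, the sketch below feeds a compact list of communication events to a lightweight event loop whose latency model stands in for a future machine. All names, the event format and the linear latency model are invented for illustration; they are not the ScalaTrace/ScalaBenchGen/xSim interfaces.

      import heapq

      # Toy discrete-event flavour of trace-driven simulation: communication
      # events from a (synthetic) benchmark are scheduled and costed with a
      # pluggable latency model standing in for a future system.

      def simulate(events, latency_ns_per_byte, startup_ns):
          """events: iterable of (t_issue_ns, src, dst, nbytes)."""
          heap = [(t, src, dst, n) for t, src, dst, n in events]
          heapq.heapify(heap)
          finish = 0.0
          while heap:
              t, src, dst, nbytes = heapq.heappop(heap)
              done = t + startup_ns + latency_ns_per_byte * nbytes
              finish = max(finish, done)
          return finish

      trace = [(0, 0, 1, 1 << 20), (500, 1, 2, 4096), (900, 2, 0, 64)]
      print(simulate(trace, latency_ns_per_byte=0.1, startup_ns=800))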

  15. Simulation of the Pebble Bed Micro Model Benchmark Problem using MARS-GCR

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Dong Un [Seoul National Univ., Seoul (Korea, Republic of); Bae, Sung Won; Lee, Won Jae [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2007-07-01

    The Pebble Bed Micro Model (PBMM) is a benchmark model of the Power Conversion Unit (PCU) to demonstrate the dynamic behavior of a three-shaft power conversion system with nitrogen as a working fluid. This closed recuperative Brayton power conversion cycle is one of the viable options for the power conversion unit (PCU) of the very high temperature gas cooled reactor (VHTR). It is important for the safety analysis code to be able to accurately predict the behavior of the PCU. In order to obtain data for validation of the safety analysis code, the MARS-GCR code is applied to the steady-state and mass injection test runs of the PBMM benchmark problem.
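
    For orientation, the ideal cold-air-standard Brayton relations below indicate the kind of steady-state quantities a system code must reproduce for such a cycle; this is a textbook idealization, not the PBMM or MARS-GCR model itself (with γ ≈ 1.4 for nitrogen):

      \eta_{\text{ideal}} = 1 - r_p^{-(\gamma - 1)/\gamma}, \qquad \frac{T_2}{T_1} = r_p^{(\gamma - 1)/\gamma}

    where r_p is the compressor pressure ratio and T_1, T_2 are the compressor inlet and outlet temperatures; the recuperator raises cycle efficiency above this simple-cycle value by preheating the gas entering the heat source.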

  16. Benchmark of the Local Drift-kinetic Models for Neoclassical Transport Simulation in Helical Plasmas

    CERN Document Server

    Huang, B; Kanno, R; Sugama, H; Matsuoka, S

    2016-01-01

    The benchmarks of the neoclassical transport codes based on several local drift-kinetic models are reported here. The drift-kinetic models are ZOW, ZMD, DKES-like, and global, as classified in [Matsuoka et al., Physics of Plasmas 22, 072511 (2015)]. The magnetic geometries of HSX, LHD, and W7-X are employed in the benchmarks. It is found that the assumption of E × B incompressibility causes discrepancies in neoclassical radial flux and parallel flow among the models when E × B is sufficiently large compared to the magnetic drift velocities. On the other hand, when E × B and the magnetic drift velocities are comparable, the tangential magnetic drift, which is included in both the global and ZOW models, fills the role of suppressing unphysical peaking of neoclassical radial fluxes found in the other local models at E_r ≃ 0. In low collisionality plasmas, in particular, the tangential drift effect works w...
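
    The quantities under comparison are the standard guiding-centre drifts (textbook expressions, not specific to the cited codes):

      \boldsymbol{v}_E = \frac{\boldsymbol{E} \times \boldsymbol{B}}{B^2}, \qquad \boldsymbol{v}_{\nabla B} = \frac{m v_\perp^2}{2 q B^3}\, \boldsymbol{B} \times \nabla B

    The "E × B incompressibility" assumption discussed above amounts to treating ∇ · v_E = 0; the models differ in whether this compressible part and the tangential magnetic drift are retained.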

  17. Simulation Methods for High-Cycle Fatigue-Driven Delamination using Cohesive Zone Models - Fundamental Behavior and Benchmark Studies

    DEFF Research Database (Denmark)

    Bak, Brian Lau Verndal; Lindgaard, Esben; Turon, A.;

    2015-01-01

    A novel computational method for simulating fatigue-driven delamination cracks in composite laminated structures under cyclic loading based on a cohesive zone model [2] and new benchmark studies with four other comparable methods [3-6] are presented. The benchmark studies describe and compare the...... traction-separation response in the cohesive zone and the transition phase from quasistatic to fatigue loading for each method. Furthermore, the accuracy of the predicted crack growth rate is studied and compared for each method. It is shown that the method described in [2] is significantly more accurate...... than the other methods [3-6]. Finally, studies are presented of the dependency and sensitivity to the change in different quasi-static material parameters and model specific fitting parameters. It is shown that all the methods except [2] rely on different parameters which are not possible to determine...
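
    Methods in this family typically relate the crack growth rate to the cyclic energy release rate through a Paris-type law. The sketch below shows only that generic ingredient, with illustrative constants; it is not the specific formulation of [2] or [3-6].

      # Generic Paris-law sketch for fatigue-driven delamination:
      #   da/dN = C * (dG / Gc)**m, integrated cycle-by-cycle in blocks.
      # Constants and the constant-dG assumption are illustrative only.

      def grow_crack(a0, cycles, dG, Gc, C=1e-6, m=3.0, block=1000):
          """Return crack length [mm] after `cycles`, in blocks of `block`."""
          a = a0
          for _ in range(0, cycles, block):
              dadN = C * (dG / Gc) ** m      # mm per cycle
              a += dadN * block
          return a

      print(grow_crack(a0=10.0, cycles=100_000, dG=0.4, Gc=1.0, C=1e-5, m=4.0))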

  18. Modelling anaerobic co-digestion in Benchmark Simulation Model No. 2: Parameter estimation, substrate characterisation and plant-wide integration.

    Science.gov (United States)

    Arnell, Magnus; Astals, Sergi; Åmand, Linda; Batstone, Damien J; Jensen, Paul D; Jeppsson, Ulf

    2016-07-01

    Anaerobic co-digestion is an emerging practice at wastewater treatment plants (WWTPs) to improve the energy balance and integrate waste management. Modelling of co-digestion in a plant-wide WWTP model is a powerful tool to assess the impact of co-substrate selection and dose strategy on digester performance and plant-wide effects. A feasible procedure to characterise and fractionate co-substrate COD for the Benchmark Simulation Model No. 2 (BSM2) was developed. This procedure is also applicable for the Anaerobic Digestion Model No. 1 (ADM1). Long chain fatty acid inhibition was included in the ADM1 model to allow for realistic modelling of lipid rich co-substrates. Sensitivity analysis revealed that, apart from the biodegradable fraction of COD, protein and lipid fractions are the most important fractions for methane production and digester stability, with at least two major failure modes identified through principal component analysis (PCA). The model and procedure were tested against bio-methane potential (BMP) tests of three substrates, each rich in carbohydrates, proteins or lipids, with good predictive capability in all three cases. This model was then applied to a plant-wide simulation study which confirmed the positive effects of co-digestion on methane production and total operational cost. Simulations also revealed the importance of limiting the protein load to the anaerobic digester to avoid ammonia inhibition in the digester and overloading of the nitrogen removal processes in the water train. In contrast, the digester can treat relatively high loads of lipid rich substrates without prolonged disturbances. PMID:27088248
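
    BMP batch tests of the kind used here are commonly interpreted with a first-order model, B(t) = B0(1 - exp(-kt)). A minimal fitting sketch follows; the data points are placeholders, not values from the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      # First-order BMP model commonly used to interpret batch tests:
      #   B(t) = B0 * (1 - exp(-k t))
      # The data points below are placeholders, not values from the paper.

      def bmp(t, B0, k):
          return B0 * (1.0 - np.exp(-k * t))

      t_d = np.array([0, 2, 5, 10, 20, 30], dtype=float)       # days
      B = np.array([0, 120, 230, 330, 410, 430], dtype=float)  # mL CH4/g VS

      (B0, k), _ = curve_fit(bmp, t_d, B, p0=(400.0, 0.1))
      print(f"B0 = {B0:.0f} mL CH4/g VS, k = {k:.2f} 1/d")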

  19. Benchmarking hydrological models for low-flow simulation and forecasting on French catchments

    OpenAIRE

    Nicolle, P.; Pushpalatha, R.; Perrin, C.; François, D.; Thiéry, D.; Mathevet, T.; Le Lay, M.; Besson, F.; Soubeyroux, J.-M.; Viel, C.; Regimbeau, F.; V. Andréassian; Maugis, P.; B. Augeard; Morice, E.

    2014-01-01

    Low-flow simulation and forecasting remains a difficult issue for hydrological modellers, and intercomparisons can be extremely instructive for assessing existing low-flow prediction models and for developing more efficient operational tools. This research presents the results of a collaborative experiment conducted to compare low-flow simulation and forecasting models on 21 unregulated catchments in France. Five hydrological models (four lumped storage-type models – Gardenia, GR6J, Mor...

  20. Direct Simulation of a Solidification Benchmark Experiment

    OpenAIRE

    Carozzani, Tommy; Gandin, Charles-André; Digonnet, Hugues; Bellet, Michel; Zaidat, Kader; Fautrelle, Yves

    2013-01-01

    A solidification benchmark experiment is simulated using a three-dimensional cellular automaton-finite element solidification model. The experiment consists of a rectangular cavity containing a Sn-3 wt pct Pb alloy. The alloy is first melted and then solidified in the cavity. A dense array of thermocouples permits monitoring of temperatures in the cavity and in the heat exchangers surrounding the cavity. After solidification, the grain structure is revealed by metall...

  1. A benchmark simulation model to describe plant-wide phosphorus transformations in WWTPs

    DEFF Research Database (Denmark)

    Flores-Alsina, Xavier; Ikumi, D.; Kazadi-Mbamba, C.;

    It is more than 10 years since the publication of the BSM1 technical report (Copp, 2002). The main objective of BSM1 was to create a platform for benchmarking C and N removal strategies in activated sludge systems. The initial platform evolved into BSM1_LT and BSM2, which allowed for the evaluation...... scientific community. In this paper, a highly necessary extension of the BSM2 is proposed. This extension aims at facilitating simultaneous C, N and P removal process development and performance evaluation at a plant-wide level. The main motivation of the work is that numerous wastewater treatment plants (WWTPs) pursue biological/chemical phosphorus removal. However, realistic descriptions of combined C, N and P removal add a major, but unavoidable, degree of complexity to wastewater treatment process models. This paper identifies and discusses important issues that need to be addressed to upgrade the...

  2. Quo Vadis Benchmark Simulation Models? 8th IWA Symposium on Systems Analysis and Integrated Assessment

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J.; Batstone, D,;

    2011-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for WWTPs is coming towards an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, hi...

  3. Extending the Benchmark Simulation Model No. 2 with processes for nitrous oxide production and side-stream nitrogen removal

    DEFF Research Database (Denmark)

    Boiocchi, Riccardo; Sin, Gürkan; Gernaey, Krist V.

    2015-01-01

    In this work the Benchmark Simulation Model No. 2 is extended with processes for nitrous oxide production and for side-stream partial nitritation/Anammox (PN/A) treatment. For these extensions the Activated Sludge Model for Greenhouse gases No. 1 was used to describe the main waterline, whereas...... (i) increased the total nitrogen removal by 10%; (ii) reduced the aeration demand by 16% compared to the base case; and (iii) the activity of ammonia-oxidizing bacteria was found to influence nitrous oxide emissions the most. The extended model provides a simulation platform to generate, test and compare novel control...... strategies to improve operation performance and to meet the new plant performance criteria such as minimization of greenhouse gas (in particular of nitrous oxide) emissions....

  4. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties...

  5. DOE Commercial Building Benchmark Models: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Torcelini, P.; Deru, M.; Griffith, B.; Benne, K.; Halverson, M.; Winiarski, D.; Crawley, D. B.

    2008-07-01

    To provide a consistent baseline of comparison and save time conducting such simulations, the U.S. Department of Energy (DOE) has developed a set of standard benchmark building models. This paper will provide an executive summary overview of these benchmark buildings, and how they can save building analysts valuable time. Fully documented and implemented to use with the EnergyPlus energy simulation program, the benchmark models are publicly available and new versions will be created to maintain compatibility with new releases of EnergyPlus. The benchmark buildings will form the basis for research on specific building technologies, energy code development, appliance standards, and measurement of progress toward DOE energy goals. Having a common starting point allows us to better share and compare research results and move forward to make more energy efficient buildings.

  6. System-wide Benchmark Simulation Model for integrated analysis of urban wastewater systems

    DEFF Research Database (Denmark)

    Saagi, R.; Flores-Alsina, X.; Gernaey, K. V.;

    Interactions between different components (sewer, wastewater treatment plant (WWTP) and river) of an urban wastewater system (UWS) are widely recognized (Benedetti et al., 2013). This has resulted in an increasing interest in the modelling of the UWS. System-wide models take into account the inte...

  7. Simulation with Different Turbulence Models in an Annex 20 Benchmark Test using Star-CCM+

    DEFF Research Database (Denmark)

    Le Dreau, Jerome; Heiselberg, Per; Nielsen, Peter V.

    The purpose of this investigation is to compare the different flow patterns obtained for the 2D isothermal test case defined in Annex 20 (1990) using different turbulence models. The different results are compared with the existing experimental data. A similar study has already been performed by Rong et al. (2008) using Ansys CFX 11.0. In this report, the software Star-CCM+ has been used....

  8. Benchmark simulation model no 2: general protocol and exploratory case studies

    DEFF Research Database (Denmark)

    Jeppsson, U.; Pons, M.N.; Nopens, I.;

    2007-01-01

    ...significant new development that is reported on here: rather than only evaluating control strategies at the level of the activated sludge unit (bioreactors and secondary clarifier), the new BSM2 now allows the evaluation of control strategies at the level of the whole plant, including primary clarifier and...... the control strategy evaluation rather than on modelling issues. Finally, for illustration, twelve simple operational strategies have been implemented in BSM2 and their performance evaluated. Results show that it is an interesting control engineering challenge to further improve the performance of the...

  9. Towards Systematic Benchmarking of Climate Model Performance

    Science.gov (United States)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. Making the results from routine...

  10. On a new benchmark for the simulation of saltwater intrusion

    Science.gov (United States)

    Stoeckl, Leonard; Graf, Thomas

    2015-04-01

    To date, many different benchmark problems for density-driven flow are available. Benchmarks are necessary to validate numerical models. The benchmark by Henry (1964) measures a saltwater wedge intruding into a freshwater aquifer in a rectangular model. The Henry (1964) problem of saltwater intrusion is one of the most applied benchmarks in hydrogeology. Modelling saltwater intrusion will be of major importance in the future because the impacts of groundwater overexploitation, climate change and sea-level rise are of key concern. The worthiness of the Henry (1964) problem was questioned by Simpson and Clement (2003), who compared density-coupled and density-uncoupled simulations. Density-uncoupling was achieved by neglecting density effects in the governing equations, and by considering density effects only in the flow boundary conditions. As both of their simulations showed similar results, Simpson and Clement (2003) concluded that flow patterns of the Henry (1964) problem are largely dictated by the applied flow boundary conditions and density-dependent effects are not adequately represented in the Henry (1964) problem. In the present study, we compare numerical simulations of the physical benchmark of a freshwater lens by Stoeckl and Houben (2012) to the Henry (1964) problem. In this new benchmark, the development of a freshwater lens under an island is simulated by applying freshwater recharge to the model top. Results indicate that density-uncoupling significantly alters the flow patterns of fresh- and saltwater. This leads to the conclusion that next to the boundary conditions applied, density-dependent effects are important to correctly simulate the flow dynamics of a freshwater lens.

  11. Synthetic benchmark model for parallel agent-based simulation

    Institute of Scientific and Technical Information of China (English)

    余文广; 王维平; 侯洪涛; 李群

    2012-01-01

    In order to evaluate the performance of parallel simulation algorithms, there is a need for a benchmark model. To address the lack of a common, application-independent benchmark model in the parallel agent-based simulation (PABS) research community, a common benchmark model for PABS is proposed, drawing on the design principles of parallel HOLD (PHOLD), the classic synthetic benchmark model for parallel discrete event simulation (PDES), and on the characteristics of agent-based simulations (ABS). The model can easily synthesize various required workloads based on application characteristics and excludes the impact of application-specific elements on the performance analysis, so as to provide a common benchmark for different PABS researchers. Finally, with this model, the impact of the computation granularity of agents and the number of processors on the speedup is analyzed experimentally.
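
    The PHOLD pattern referenced above is easy to state: each logical process keeps an event in flight, and handling an event schedules a new one on a random peer at a random future time. A minimal sequential sketch follows; real PABS/PDES versions partition the processes across processors, and the synthetic work loop stands in for agent computation granularity.

      import heapq, random

      # Minimal sequential PHOLD-style kernel: each of N logical processes
      # keeps one event "in flight"; handling an event schedules a new one
      # on a random peer at a random future timestamp. `work` mimics the
      # per-event computation granularity an agent model would add.

      def phold(n_lp=8, n_events=100_000, work=50, seed=1):
          rng = random.Random(seed)
          heap = [(rng.random(), lp) for lp in range(n_lp)]
          heapq.heapify(heap)
          t = 0.0
          for _ in range(n_events):
              t, lp = heapq.heappop(heap)
              _ = sum(range(work))          # synthetic computation load
              heapq.heappush(heap, (t + rng.expovariate(1.0), rng.randrange(n_lp)))
          return t                          # final simulated timestamp

      print(phold())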

  12. Benchmark Data Set for Wheat Growth Models: Field Experiments and AgMIP Multi-Model Simulations.

    Science.gov (United States)

    Asseng, S.; Ewert, F.; Martre, P.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P.J.; Rotter, R. P.

    2015-01-01

    The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments including Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation, maximum and minimum temperature, precipitation, surface wind, dew point temperature, relative humidity, and vapor pressure), soil characteristics, frequent growth, nitrogen in crop and soil, crop and soil water and yield components. Simulations include results from 27 wheat models and a sensitivity analysis with 26 models and 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-century scenario.

  13. BENCHMARKING LEARNER EDUCATION USING ONLINE BUSINESS SIMULATION

    Directory of Open Access Journals (Sweden)

    Alfred H. Miller

    2016-06-01

    For programmatic accreditation by the Accreditation Council of Business Schools and Programs (ACBSP), business programs are required to meet STANDARD #4, Measurement and Analysis of Student Learning and Performance. Business units must demonstrate that outcome assessment systems are in place using documented evidence that shows how the results are being used to further develop or improve the academic business program. The Higher Colleges of Technology, a 17-campus federal university in the United Arab Emirates, differentiates its applied degree programs through a 'learning by doing ethos,' which permeates the entire curricula. This paper documents benchmarking of education for managing innovation. Using business simulation for Bachelors of Business, Year 3 learners, in a business strategy class, learners explored through a simulated environment the following functional areas: research and development, production, and marketing of a technology product. Student teams were required to use finite resources and compete against other student teams in the same universe. The study employed an instrument developed in a 60-sample pilot study of business simulation learners against which subsequent learners participating in online business simulation could be benchmarked. The results showed incremental improvement in the program due to changes made in assessment strategies, including the oral defense.

  14. Experiment vs simulation RT WFNDEC 2014 benchmark: CIVA results

    Energy Technology Data Exchange (ETDEWEB)

    Tisseur, D., E-mail: david.tisseur@cea.fr; Costin, M.; Rattoni, B.; Vienne, C.; Vabre, A.; Cattiaux, G. [CEA LIST, CEA Saclay 91191 Gif sur Yvette Cedex (France); Sollier, T. [Institut de Radioprotection et de Sûreté Nucléaire, B.P.17 92262 Fontenay-Aux-Roses (France)

    2015-03-31

    The French Alternative Energies and Atomic Energy Commission (CEA) has developed over the years the CIVA software dedicated to the simulation of NDE techniques such as Radiographic Testing (RT). RT modelling is achieved in CIVA using a combination of a deterministic approach based on ray tracing for transmission beam simulation and a Monte Carlo model for the scattered beam computation. Furthermore, CIVA includes various detector models, in particular common X-ray films and photostimulable phosphor plates. This communication presents the results obtained with the configurations proposed in the World Federation of NDE Centers (WFNDEC) 2014 RT modelling benchmark with the RT models implemented in the CIVA software.
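
    The deterministic transmission part of such RT models reduces, per ray, to Beer-Lambert attenuation through the material thicknesses crossed. A generic sketch is given below; the attenuation coefficients are illustrative placeholders, not CIVA material data.

      import math

      # Beer-Lambert attenuation of a ray through a stack of materials:
      #   I = I0 * exp(-sum(mu_i * x_i))
      # mu values are illustrative placeholders, not CIVA material data.

      def transmitted_intensity(i0, path):
          """path: list of (mu_per_cm, thickness_cm) crossed by the ray."""
          total = sum(mu * x for mu, x in path)
          return i0 * math.exp(-total)

      ray = [(0.45, 2.0), (0.30, 0.5)]   # e.g. base section + weld overfill
      print(transmitted_intensity(1.0, ray))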

  15. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns. W...

  16. Upgrading the Benchmark Simulation Model Framework with emerging challenges - A study of N2O emissions and the fate of pharmaceuticals in urban wastewater systems

    DEFF Research Database (Denmark)

    Snip, Laura

    Nowadays a wastewater treatment plant (WWTP) is not only expected to remove traditional pollutants from the wastewater; other emerging challenges have arisen as well. A WWTP is now, among other things, expected to also minimise its carbon footprint and deal with micropollutants. Optimising the performance of a WWTP can be done with mathematical models that can be used in simulation studies. The Benchmark Simulation Model (BSM) framework was developed to compare objectively different operational/control strategies. As different operational strategies of a WWTP will most likely have an effect...... for an extension of the BSM. Various challenges were encountered regarding the mathematical structure and the parameter values when expanding the BSM. The N2O models produced different results due to the assumptions on which they are based. In addition, pH and inorganic carbon concentrations have been demonstrated...

  17. Benchmark Simulations of Gyro-Kinetic Electron and Fully-Kinetic Ion Model for Lower Hybrid Waves in Linear Region

    International Nuclear Information System (INIS)

    The particle-in-cell (PIC) simulation method has proved to be a good candidate for studying the interactions between plasmas and radio-frequency waves. However, for waves in the lower hybrid range of frequencies, a full PIC simulation is not efficient due to its high computational cost. In this work, a gyro-kinetic electron and fully-kinetic ion (GeFi) particle simulation model is applied to study the propagation and mode conversion processes of lower hybrid waves (LHWs) in plasmas. With this method, the computational efficiency of LHW simulations is greatly increased by using a larger grid size and time step. The simulation results in the linear regime are validated by comparison with the linear theory. (magnetically confined plasma)

  18. A benchmark model for plant wide control of waste water treatment plants; Benchmark-Modell fuer anlagenweite Klaeranlagenregelungen

    Energy Technology Data Exchange (ETDEWEB)

    Alex, Jens; Jumar, Ulrich [Institut fuer Automation und Kommunikation e.V. Magdeburg (Germany)

    2009-07-01

    A large number of proposals have been published for the control of wastewater treatment plants. To allow an objective evaluation and comparison of different concepts, a benchmark simulation model can be utilised. The presented benchmark system is the result of a task group of the IWA (International Water Association) and consists of a library of model components and an evaluation procedure for control concepts. The application of this system is demonstrated with typical control concepts. (orig.)

  19. A chemical EOR benchmark study of different reservoir simulators

    Science.gov (United States)

    Goudarzi, Ali; Delshad, Mojdeh; Sepehrnoori, Kamy

    2016-09-01

    Interest in chemical EOR processes has intensified in recent years due to the advancements in chemical formulations and injection techniques. Injecting Polymer (P), surfactant/polymer (SP), and alkaline/surfactant/polymer (ASP) are techniques for improving sweep and displacement efficiencies with the aim of improving oil production in both secondary and tertiary floods. There has been great interest in chemical flooding recently for different challenging situations. These include high temperature reservoirs, formations with extreme salinity and hardness, naturally fractured carbonates, and sandstone reservoirs with heavy and viscous crude oils. More oil reservoirs are reaching maturity where secondary polymer floods and tertiary surfactant methods have become increasingly important. This significance has added to the industry's interest in using reservoir simulators as tools for reservoir evaluation and management to minimize costs and increase the process efficiency. Reservoir simulators with special features are needed to represent coupled chemical and physical processes present in chemical EOR processes. The simulators need to be first validated against well controlled lab and pilot scale experiments to reliably predict the full field implementations. The available data from laboratory scale include 1) phase behavior and rheological data; and 2) results of secondary and tertiary coreflood experiments for P, SP, and ASP floods under reservoir conditions, i.e. chemical retentions, pressure drop, and oil recovery. Data collected from corefloods are used as benchmark tests comparing numerical reservoir simulators with chemical EOR modeling capabilities such as STARS of CMG, ECLIPSE-100 of Schlumberger, REVEAL of Petroleum Experts. The research UTCHEM simulator from The University of Texas at Austin is also included since it has been the benchmark for chemical flooding simulation for over 25 years. The results of this benchmark comparison will be utilized to improve

  20. FRIB driver linac vacuum model and benchmarks

    CERN Document Server

    Durickovic, Bojan; Kersevan, Roberto; Machicoane, Guillaume

    2014-01-01

    The Facility for Rare Isotope Beams (FRIB) is a superconducting heavy-ion linear accelerator that is to produce rare isotopes far from stability for low energy nuclear science. In order to achieve this, its driver linac needs to achieve a very high beam current (up to 400 kW beam power), and this requirement makes vacuum levels of critical importance. Vacuum calculations have been carried out to verify that the vacuum system design meets the requirements. The modeling procedure was benchmarked by comparing models of an existing facility against measurements. In this paper, we present an overview of the methods used for FRIB vacuum calculations and simulation results for some interesting sections of the accelerator.

  1. A comprehensive benchmarking system for evaluating global vegetation models

    Directory of Open Access Journals (Sweden)

    D. I. Kelley

    2012-11-01

    We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). SDBM reproduces observed CO2 seasonal cycles, but its simulation of independent measurements of net primary production (NPP) is too high. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
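
    Metrics of the kind described, scored against the observation mean and a bootstrap "random" model, can be illustrated with a normalised mean error. The sketch below assumes NME = Σ|model - obs| / Σ|obs - mean(obs)|; the exact definitions should be taken from the paper.

      import random

      # Normalised mean error of the kind used for benchmark scoring:
      #   NME = sum|model - obs| / sum|obs - mean(obs)|
      # NME = 1 matches the skill of always predicting the observed mean;
      # the bootstrap score below estimates what a "random" model achieves.

      def nme(model, obs):
          obar = sum(obs) / len(obs)
          return sum(abs(m - o) for m, o in zip(model, obs)) / \
                 sum(abs(o - obar) for o in obs)

      def random_model_score(obs, n=1000, seed=0):
          rng = random.Random(seed)
          scores = [nme([rng.choice(obs) for _ in obs], obs) for _ in range(n)]
          return sum(scores) / n

      obs = [1.0, 2.0, 4.0, 3.0, 5.0]
      print(nme([1.2, 2.1, 3.5, 3.0, 4.2], obs), random_model_score(obs))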

  2. Reactive transport benchmarks for subsurface environmental simulation

    Energy Technology Data Exchange (ETDEWEB)

    Steefel, Carl I.; Yabusaki, Steven B.; Mayer, K. U.

    2015-06-01

    Over the last 20 years, we have seen firsthand the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of subsurface applications it is being used to address. There is a growing reliance on reactive transport modeling (RTM) to address some of the most compelling issues facing our planet: climate change, nuclear waste management, contaminant remediation, and pollution prevention. While these issues are motivating the development of new and improved capabilities for subsurface environmental modeling using RTM (e.g., biogeochemistry from cell-scale physiology to continental-scale terrestrial ecosystems, nonisothermal multiphase conditions, coupled geomechanics), there remain longstanding challenges in characterizing the natural variability of hydrological, biological, and geochemical properties in subsurface environments and limited success in transferring models between sites and across scales. An equally important trend over the last 20 years is the evolution of modeling from a service sought out after data has been collected to a multifaceted research approach that provides (1) an organizing principle for characterization and monitoring activities; (2) a systematic framework for identifying knowledge gaps, developing and integrating new knowledge; and (3) a mechanistic understanding that represents the collective wisdom of the participating scientists and engineers. There are now large multidisciplinary projects where the research approach is model-driven, and the principal product is a holistic predictive simulation capability that can be used as a test bed for alternative conceptualizations of processes, properties, and conditions. Much of the future growth and expanded role for RTM will depend on its continued ability to exploit technological advancements in the earth and environmental sciences. Advances in measurement technology, particularly in molecular biology (genomics), isotope fractionation, and high

  3. Simulating diffusion processes in discontinuous media: Benchmark tests

    Science.gov (United States)

    Lejay, Antoine; Pichot, Géraldine

    2016-06-01

    We present several benchmark tests for Monte Carlo methods simulating diffusion in one-dimensional discontinuous media. These benchmark tests aim at studying the potential bias of the schemes and their impact on the estimation of micro- or macroscopic quantities (repartition of masses, fluxes, mean residence time, …). These benchmark tests are backed by a statistical analysis to filter out the bias from the unavoidable Monte Carlo error. We apply them on four different algorithms. The results of the numerical tests give a valuable insight into the fine behavior of these schemes, as well as rules to choose between them.
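
    The bias such benchmarks probe arises because naive schemes ignore the interface. The sketch below is exactly such a scheme, a plain Euler discretisation of one-dimensional diffusion with a piecewise-constant coefficient; all parameters are illustrative.

      import math, random

      # Naive Euler scheme for 1D diffusion dX = sqrt(2 D(X)) dW with a
      # piecewise-constant D, discontinuous at x = 0. Schemes like this are
      # what such benchmark tests probe: the step ignores the interface, so
      # the repartition of mass across x = 0 is biased for finite dt.

      def D(x, d_left=0.1, d_right=1.0):
          return d_left if x < 0.0 else d_right

      def walk(x0, dt, n_steps, rng):
          x = x0
          for _ in range(n_steps):
              x += math.sqrt(2.0 * D(x) * dt) * rng.gauss(0.0, 1.0)
          return x

      rng = random.Random(42)
      finals = [walk(-0.5, dt=1e-3, n_steps=1000, rng=rng) for _ in range(2_000)]
      print(sum(1 for x in finals if x > 0.0) / len(finals))  # mass past interface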

  4. Towards a plant-wide Benchmark Simulation Model with simultaneous nitrogen and phosphorus removal wastewater treatment processes

    DEFF Research Database (Denmark)

    Flores-Alsina, Xavier; Ikumi, David; Batstone, Damien;

    ...3) modifications of the original BSM2 physical plant layout; 4) new/upgraded generic mathematical models; 5) model integration; 6) new control handles/sensors; and 7) new extended evaluation criteria. The paper covers and analyzes all these aspects in detail, identifying the main bottlenecks that need...

  5. Implementing ADM1 for plant-wide benchmark simulations in Matlab/Simulink

    DEFF Research Database (Denmark)

    Rosen, Christian; Vrecko, Darko; Gernaey, Krist;

    2006-01-01

    ..., in particular if the ADM1 is to be included in dynamic simulations of plant-wide or even integrated systems. In this paper, the experiences gained from a Matlab/Simulink implementation of ADM1 into the extended COST/IWA Benchmark Simulation Model (BSM2) are presented. Aspects related to system stiffness, model...

  6. Shear Strength Measurement Benchmarking Tests for K Basin Sludge Simulants

    Energy Technology Data Exchange (ETDEWEB)

    Burns, Carolyn A.; Daniel, Richard C.; Enderlin, Carl W.; Luna, Maria; Schmidt, Andrew J.

    2009-06-10

    Equipment development and demonstration testing for sludge retrieval is being conducted by the K Basin Sludge Treatment Project (STP) at the MASF (Maintenance and Storage Facility) using sludge simulants. In testing performed at the Pacific Northwest National Laboratory (under contract with the CH2M Hill Plateau Remediation Company), the performance of the Geovane instrument was successfully benchmarked against the M5 Haake rheometer using a series of simulants with shear strengths (τ) ranging from about 700 to 22,000 Pa (shaft corrected). Operating steps for obtaining consistent shear strength measurements with the Geovane instrument during the benchmark testing were refined and documented.

  7. Simulator for SUPO, a Benchmark Aqueous Homogeneous Reactor (AHR)

    Energy Technology Data Exchange (ETDEWEB)

    Klein, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Determan, John C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-10-14

    A simulator has been developed for SUPO (Super Power) an aqueous homogeneous reactor (AHR) that operated at Los Alamos National Laboratory (LANL) from 1951 to 1974. During that period SUPO accumulated approximately 600,000 kWh of operation. It is considered the benchmark for steady-state operation of an AHR. The SUPO simulator was developed using the process that resulted in a simulator for an accelerator-driven subcritical system, which has been previously reported.

  8. Benchmarking of the simulation of the ATLAS Hall background

    CERN Document Server

    Vincke, Helmut; Müller, Hansjörg

    2000-01-01

    The LHC, mainly to be used as a proton-proton collider, providing collisions at energies of 14 TeV, will be operational in the year 2005. ATLAS, one of the LHC experiments, will provide high accuracy measurements concerning these p-p collisions. In these collisions a high particle background is also produced. This background was already calculated with the Monte Carlo simulation program FLUKA. Unfortunately, the prediction concerning this background rate is only understood within an uncertainty level of five. Most of this uncertainty can be attributed to limited knowledge concerning the ability of FLUKA to simulate these kinds of scenarios. In order to reduce the uncertainty, benchmarking simulations of experiments similar to the ATLAS background situation were performed. The comparison of the simulations with the experiments proves to which extent FLUKA is able to provide reliable results concerning the ATLAS background situation. In order to perform this benchmark, an iron construction was irradiated ...

  9. Simulation of Benchmark Cases with the Terminal Area Simulation System (TASS)

    Science.gov (United States)

    Ahmad, Nash'at; Proctor, Fred

    2011-01-01

    The hydrodynamic core of the Terminal Area Simulation System (TASS) is evaluated against different benchmark cases. In the absence of closed-form solutions for the equations governing atmospheric flows, the models are usually evaluated against idealized test cases. Over the years, various authors have suggested a suite of these idealized cases, which have become standards for testing and evaluating the dynamics and thermodynamics of atmospheric flow models. In this paper, simulations of three such cases are described. In addition, the TASS model is evaluated against a test case that uses an exact solution of the Navier-Stokes equations. The TASS results are compared against previously reported simulations of these benchmark cases in the literature. It is demonstrated that the TASS model is highly accurate, stable and robust.
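
    The record does not name the exact Navier-Stokes test case, so as a plausible stand-in, the decaying 2D Taylor-Green vortex below is one commonly used exact solution against which a flow solver's error can be measured; the noisy "numerical" field in the usage lines is fabricated purely for illustration.

      import numpy as np

      def taylor_green(x, y, t, nu):
          """Analytic 2D Taylor-Green velocity field, periodic on [0, 2*pi)^2."""
          decay = np.exp(-2.0 * nu * t)
          return -np.cos(x) * np.sin(y) * decay, np.sin(x) * np.cos(y) * decay

      def rms_error(u_num, v_num, x, y, t, nu):
          """RMS error of a numerical velocity field against the exact solution."""
          u_ex, v_ex = taylor_green(x, y, t, nu)
          return np.sqrt(np.mean((u_num - u_ex) ** 2 + (v_num - v_ex) ** 2))

      n = 64
      x, y = np.meshgrid(np.linspace(0, 2 * np.pi, n), np.linspace(0, 2 * np.pi, n))
      u0, v0 = taylor_green(x, y, t=1.0, nu=0.01)
      print(rms_error(u0 + 1e-3 * np.random.randn(n, n), v0, x, y, 1.0, 0.01))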

  10. Towards a Benchmark Suite for Modelica Compilers: Large Models

    OpenAIRE

    Frenkel, Jens; Schubert, Christian; Kunze, Günter; Fritzson, Peter; Sjölund, Martin; Pop, Adrian

    2011-01-01

    The paper presents a contribution to a Modelica benchmark suite. Basic ideas for a tool-independent benchmark suite based on Python scripting are given, along with models for testing the performance of Modelica compilers on large systems of equations. The automation of running the benchmark suite is demonstrated, followed by a selection of benchmark results to determine the current limits of Modelica tools and how they scale with an increasing number of equations.
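
    A minimal sketch of the scaling idea, assuming a hypothetical compiler command: generate a flat Modelica model with n states and time an external compilation. The model template and the "my_modelica_compiler" placeholder are illustrative; the suite's actual Python scripts and test models differ.

      import os, subprocess, tempfile, time

      def make_model(n):
          """Emit a flat Modelica model with a circular chain of n ODEs."""
          eqs = "\n".join(f"  der(x[{i}]) = -x[{i}] + x[{i % n + 1}];"
                          for i in range(1, n + 1))
          return (f"model Chain{n}\n  Real x[{n}](each start = 1.0);\n"
                  f"equation\n{eqs}\nend Chain{n};\n")

      def time_compile(n, compile_cmd):
          """Write the model to disk and time one compiler invocation."""
          with tempfile.TemporaryDirectory() as d:
              path = os.path.join(d, f"Chain{n}.mo")
              with open(path, "w") as f:
                  f.write(make_model(n))
              t0 = time.perf_counter()
              subprocess.run(compile_cmd + [path], check=True)
              return time.perf_counter() - t0

      # for n in (100, 1000, 10000):
      #     print(n, time_compile(n, ["my_modelica_compiler"]))  # placeholder tool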

  11. Modeling and Simulation Test of Car Side Impact Based on Benchmarking Technology

    Institute of Scientific and Technical Information of China (English)

    徐中明; 张亮; 范体强; 赵清江

    2012-01-01

    The concept and basic workflow of benchmarking technology are introduced. Based on benchmarking data for a specific car model, a finite element analysis model of the full vehicle, including the body structure, engine and chassis system, is developed. A side-impact simulation test is then conducted according to the ECE R95 regulation, and the model is validated by comparing the simulation results with those of a physical side-impact test. The research method provides a reference for applying benchmarking technology to car crash safety analysis.

  12. Holistic simulation of geotechnical installation processes benchmarks and simulations

    CERN Document Server

    2016-01-01

    This book examines in detail the entire process involved in implementing geotechnical projects, from a well-defined initial stress and deformation state, to the completion of the installation process.   The individual chapters provide the fundamental knowledge needed to effectively improve soil-structure interaction models. Further, they present the results of theoretical fundamental research on suitable constitutive models, contact formulations, and efficient numerical implementations and algorithms. Applications of fundamental research on boundary value problems are also considered in order to improve the implementation of the theoretical models developed. Subsequent chapters highlight parametric studies of the respective geotechnical installation process, as well as elementary and large-scale model tests under well-defined conditions, in order to identify the most essential parameters for optimizing the process. The book provides suitable methods for simulating boundary value problems in connection with g...

  13. Benchmark of Space Charge Simulations and Comparison with Experimental Results for High Intensity, Low Energy Accelerators

    CERN Document Server

    Cousineau, Sarah M

    2005-01-01

    Space charge effects are a major contributor to beam halo and emittance growth leading to beam loss in high intensity, low energy accelerators. As future accelerators strive towards unprecedented levels of beam intensity and beam loss control, a more comprehensive understanding of space charge effects is required. A wealth of simulation tools has been developed for modeling beams in linacs and rings, and with the growing availability of high-speed computing systems, computationally expensive problems that were inconceivable a decade ago are now being handled with relative ease. This has opened the field for realistic simulations of space charge effects, including detailed benchmarks with experimental data. A great deal of effort is being focused in this direction, and several recent benchmark studies have produced remarkably successful results. This paper reviews the achievements in space charge benchmarking in the last few years, and discusses the challenges that remain.

  14. Community Benchmarking of Tsunami-Induced Nearshore Current Models

    Science.gov (United States)

    Lynett, P. J.; Wilson, R. I.; Gately, K.

    2015-12-01

    To help produce accurate and consistent maritime hazard products, the National Tsunami Hazard Mitigation Program (NTHMP) Strategic Plan includes a requirement to develop and run a benchmarking workshop to evaluate the numerical tsunami modeling of currents. For this workshop, five different benchmarking datasets were organized. These datasets were selected based on characteristics such as geometric complexity, currents that are shear/separation driven (and thus are de-coupled from the incident wave forcing), tidal coupling, and interaction with the built environment. While tsunami simulation models have generally been well validated against wave height and runup, comparisons with speed data are much less common. As model results are increasingly being used to estimate or indicate damage to coastal infrastructure, understanding the accuracy and precision of speed predictions becomes important. As a result of this 2-day workshop held in early 2015, modelers now have a better awareness of their ability to accurately capture the physics of tsunami currents, and therefore a better understanding of how to use these simulation tools for hazard assessment and mitigation efforts. In this presentation, the model results - from 14 different modelers - will be presented and summarized, with a focus on statistical and ensemble properties of the current predictions.
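
    A minimal sketch of the kind of ensemble statistics such a workshop reports: per-model bias and RMSE of predicted current speeds against observations, plus the same skill measures for the ensemble mean. The numbers are illustrative stand-ins, not workshop data.

      import numpy as np

      def speed_skill(predicted, observed):
          """Return (bias, rmse) of predicted vs. observed speeds, in m/s."""
          err = predicted - observed
          return err.mean(), np.sqrt((err ** 2).mean())

      obs = np.array([1.2, 2.5, 3.1, 0.8])               # e.g. measured speeds, m/s
      models = {"A": np.array([1.0, 2.9, 2.7, 1.1]),
                "B": np.array([1.4, 2.2, 3.5, 0.6])}
      for name, pred in models.items():
          print(name, speed_skill(pred, obs))
      ensemble = np.mean(list(models.values()), axis=0)  # ensemble mean prediction
      print("ensemble", speed_skill(ensemble, obs))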

  15. A benchmark on computational simulation of a CT fracture experiment

    International Nuclear Information System (INIS)

    For a better understanding of the fracture behavior of cracked welds in piping, FRAMATOME, EDF and CEA have launched an important analytical research program. This program is mainly based on the analysis of the effects of the geometrical parameters (the crack size and the welded joint dimensions) and of the yield strength ratio on the fracture behavior of several cracked configurations. Two approaches have been selected for the fracture analyses: on one hand, the global approach based on the concept of the crack driving force J, and on the other hand, a local approach to ductile fracture, in which crack initiation and growth are modeled by the nucleation, growth and coalescence of cavities in front of the crack tip. The model selected in this study estimates only the growth of the cavities, using the Rice and Tracey relationship. The present study deals with a benchmark on computational simulation of CT fracture experiments using three computer codes: ALIBABA, developed by EDF; the CEA code CASTEM 2000; and the FRAMATOME code SYSTUS. The paper is split into three parts. First, the authors present the experimental procedure for high-temperature toughness testing of two CT specimens taken from a welded pipe characteristic of pressurized water reactor primary piping. Second, the finite element analysis and the application procedure are outlined, with a detailed description of the boundary and loading conditions, the mesh characteristics, the numerical scheme involved and the void growth computation. Finally, the comparisons between numerical and experimental results are presented up to crack initiation, the tearing process not being taken into account in the present study. The variations of J and of the local variables used to estimate the damage around the crack tip (triaxiality and hydrostatic stresses, plastic deformations, void growth ...) are computed as a function of the increasing load

  16. Benchmarking novel approaches for modelling species range dynamics.

    Science.gov (United States)

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E

    2016-08-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results confirm the clear merit of using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches

  18. Simulation of nonlinear benchmarks and sheet metal forming processes using linear and quadratic solid–shell elements combined with advanced anisotropic behavior models

    Directory of Open Access Journals (Sweden)

    WangPeng

    2016-01-01

    A family of prismatic and hexahedral solid–shell (SHB) elements with their linear and quadratic versions is presented in this paper to model thin 3D structures. Based on reduced integration and special treatments to eliminate locking effects and to control spurious zero-energy modes, the SHB solid–shell elements are capable of modeling most thin 3D structural problems with only a single element layer, while accurately describing the various through-thickness phenomena. In this paper, the SHB elements are combined with fully 3D behavior models, including orthotropic elastic behavior for composite materials and anisotropic plastic behavior for metallic materials, which allows describing the strain/stress state in the thickness direction, in contrast to traditional shell elements. All SHB elements are implemented into ABAQUS using both standard/quasi-static and explicit/dynamic solvers. Several benchmark tests have been conducted, in order to first assess the performance of the SHB elements in quasi-static and dynamic analyses. Then, deep drawing of a hemispherical cup is performed to demonstrate the capabilities of the SHB elements in handling various types of nonlinearities (large displacements and rotations, anisotropic plasticity, and contact). Compared to classical ABAQUS solid and shell elements, the results given by the SHB elements show good agreement with the reference solutions.

  19. Coupled Climate Model Appraisal a Benchmark for Future Studies

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, T J; AchutaRao, K; Bader, D; Covey, C; Doutriaux, C M; Fiorino, M; Gleckler, P J; Sperber, K R; Taylor, K E

    2005-08-22

    The Program for Climate Model Diagnosis and Intercomparison (PCMDI) has produced an extensive appraisal of simulations of present-day climate by eleven representative coupled ocean-atmosphere general circulation models (OAGCMs) which were developed during the period 1995-2002. Because projections of potential future global climate change are derived chiefly from OAGCMs, there is a continuing need to test the credibility of these predictions by evaluating model performance in simulating the historically observed climate. For example, such an evaluation is an integral part of the periodic assessments of climate change that are reported by the Intergovernmental Panel on Climate Change. The PCMDI appraisal thus provides a useful benchmark for future studies of this type. The appraisal mainly analyzed multi-decadal simulations of present-day climate by models that employed diverse representations of climate processes for atmosphere, ocean, sea ice, and land, as well as different techniques for coupling these components (see Table). The selected models were a subset of those entered in phase 2 of the Coupled Model Intercomparison Project (CMIP2, Covey et al. 2003). For these "CMIP2+ models", more atmospheric or oceanic variables were provided than the minimum requirements for participation in CMIP2. However, the appraisal only considered those climate variables that were supplied from most of the CMIP2+ models. The appraisal focused on three facets of the simulations of current global climate: (1) secular trends in simulation time series which would be indicative of a problematical "coupled climate drift"; (2) comparisons of temporally averaged fields of simulated atmospheric and oceanic climate variables with available observational climatologies; and (3) correspondences between simulated and observed modes of climatic variability. Highlights of these climatic aspects manifested by different CMIP2+ simulations are briefly

  20. Microworlds, Simulators, and Simulation: Framework for a Benchmark of Human Reliability Data Sources

    Energy Technology Data Exchange (ETDEWEB)

    Ronald Boring; Dana Kelly; Carol Smidts; Ali Mosleh; Brian Dyre

    2012-06-01

    In this paper, we propose a method to improve the data basis of human reliability analysis (HRA) by extending the data sources used to inform HRA methods. Currently, most HRA methods are based on limited empirical data, and efforts to enhance the empirical basis behind HRA methods have not yet yielded significant new data. Part of the reason behind this shortage of quality data is attributable to the data sources used. Data have been derived from unrelated industries, from infrequent risk-significant events, or from costly control room simulator studies. We propose a benchmark of four data sources: a simplified microworld simulator using unskilled student operators, a full-scope control room simulator using skilled student operators, a full-scope control room simulator using licensed commercial operators, and a human performance modeling and simulation system using virtual operators. The goal of this research is to compare findings across the data sources to determine to what extent data may be used and generalized from cost effective sources.

  1. Microbially Mediated Kinetic Sulfur Isotope Fractionation: Reactive Transport Modeling Benchmark

    Science.gov (United States)

    Wanner, C.; Druhan, J. L.; Cheng, Y.; Amos, R. T.; Steefel, C. I.; Ajo Franklin, J. B.

    2014-12-01

    Microbially mediated sulfate reduction is a ubiquitous process in many subsurface systems. Isotopic fractionation is characteristic of this anaerobic process, since sulfate reducing bacteria (SRB) favor the reduction of the lighter sulfate isotopologue (³²SO₄²⁻) over the heavier isotopologue (³⁴SO₄²⁻). Detection of isotopic shifts has been utilized as a proxy for the onset of sulfate reduction in subsurface systems such as oil reservoirs and aquifers undergoing uranium bioremediation. Reactive transport modeling (RTM) of kinetic sulfur isotope fractionation has been applied to field and laboratory studies. These RTM approaches employ different mathematical formulations in the representation of kinetic sulfur isotope fractionation. In order to test the various formulations, we propose a benchmark problem set for the simulation of kinetic sulfur isotope fractionation during microbially mediated sulfate reduction. The benchmark problem set is comprised of four problem levels and is based on a recent laboratory column experimental study of sulfur isotope fractionation. Pertinent processes impacting sulfur isotopic composition such as microbial sulfate reduction and dispersion are included in the problem set. To date, participating RTM codes are: CRUNCHTOPE, TOUGHREACT, MIN3P and THE GEOCHEMIST'S WORKBENCH. Preliminary results from various codes show reasonable agreement for the problem levels simulating sulfur isotope fractionation in 1D.
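
    As a reference point for what the participating codes must reproduce, the closed-form Rayleigh model below gives the δ³⁴S of the remaining sulfate pool for a constant fractionation factor α; this classic analytic curve is a common sanity check, not the formulation of any one of the codes listed.

      import numpy as np

      def rayleigh_delta(delta0, f, alpha):
          """delta-34S (permil) of residual sulfate when a fraction f remains."""
          return (delta0 + 1000.0) * f ** (alpha - 1.0) - 1000.0

      f = np.linspace(1.0, 0.05, 20)               # fraction of sulfate remaining
      print(rayleigh_delta(0.0, f, alpha=0.975))   # ~25 permil fractionation,
                                                   # a typical magnitude for SRB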

  2. Development of interfacial area transport equation - modeling and experimental benchmark

    International Nuclear Information System (INIS)

    A dynamic treatment of interfacial area concentration has been studied over the last decade by employing the interfacial area transport equation. When coupled with the two-fluid model, the interfacial area transport equation replaces the flow regime dependent correlations for interfacial area concentration and eliminates potential artificial bifurcation or numerical oscillations stemming from these static correlations. An extensive database has been established to evaluate the model under various two-phase flow conditions. These include adiabatic and heated conditions, vertical and horizontal flow orientations, round, rectangular, annulus and 8×8 rod bundle channel geometries, and normal-gravity and simulated reduced-gravity conditions. This paper reviews the current state-of-the-art in the development of the interfacial area transport equation, available experimental databases and 1D and 3D benchmarking work of the interfacial area transport equation. (author)
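
    For reference, the one-group interfacial area transport equation is commonly written in the following form, where the φ_j are the bubble coalescence and breakup source/sink terms and φ_ph is the phase-change contribution; this is the widely cited general form, not necessarily the exact notation of the paper:

      \frac{\partial a_i}{\partial t} + \nabla \cdot (a_i \mathbf{v}_i)
        = \frac{2}{3} \frac{a_i}{\alpha}
          \left( \frac{\partial \alpha}{\partial t} + \nabla \cdot (\alpha \mathbf{v}_g) \right)
        + \sum_j \phi_j + \phi_{ph}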

  3. Benchmarking of mobile network simulator, with real network data

    OpenAIRE

    Näslund, Lars

    2007-01-01

    In the radio network simulator used in this thesis the radio network of a specific operator is modeled. The real network model in the simulator uses a 3-D building database, realistic site data (antenna types, feeder loss, ...) and parameter settings from the field. In addition, traffic statistics are collected from the customer's network for the modeled area. The traffic payload is used as input to the simulator and creates an inhomogeneous traffic distribution over the area. One of the outputs ...

  4. A simulation benchmark to evaluate the performance of advanced control techniques in biological wastewater treatment plants

    Directory of Open Access Journals (Sweden)

    Sotomayor O.A.Z.

    2001-01-01

    Wastewater treatment plants (WWTPs) are complex systems that incorporate a large number of biological, physicochemical and biochemical processes. They are large and nonlinear systems subject to great disturbances in incoming loads. The primary goal of a WWTP is to reduce pollutants, and the second goal is disturbance rejection, in order to obtain good effluent quality. Modeling and computer simulations are key tools in the achievement of these two goals. They are essential to describe, predict and control the complicated interactions of the processes. Numerous control techniques (algorithms) and control strategies (structures) have been suggested to regulate WWTPs; however, it is difficult to make a discerning performance evaluation due to the nonuniformity of the simulated plants used. The main objective of this paper is to present a benchmark of an entire biological wastewater treatment plant in order to evaluate, through simulations, different control techniques. The benchmark represents an activated sludge process used for removal of organic matter and nitrogen from domestic effluents. The development of this simulator is based on models widely accepted by the international community and is implemented on the Matlab/Simulink (The MathWorks, Inc.) platform. The benchmark considers plant layout and the effects of influent characteristics. It also includes a test protocol for analyzing the open- and closed-loop responses of the plant. Examples of control applications in the benchmark are implemented employing conventional PI controllers. The following common control strategies are tested: dissolved oxygen (DO) concentration-based control, respirometry-based control and nitrate concentration-based control.
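
    A minimal sketch of the conventional PI loop described, in which dissolved oxygen is regulated by manipulating the oxygen transfer coefficient KLa; the gains, sampling time and actuator limit below are illustrative assumptions, not the benchmark's published tuning.

      def make_pi(kp, ti, dt, u_min=0.0, u_max=240.0):
          """Discrete PI controller with output (KLa, 1/d) saturation."""
          state = {"integral": 0.0}
          def step(setpoint, measured):
              e = setpoint - measured                 # DO error, g O2/m3
              state["integral"] += e * dt
              u = kp * (e + state["integral"] / ti)   # proportional + integral
              return min(max(u, u_min), u_max)        # clamp to actuator range
          return step

      pi = make_pi(kp=25.0, ti=0.002, dt=1.0 / (24 * 60))  # 1-min sampling, in days
      print(pi(2.0, 1.6))   # DO setpoint 2.0, measurement 1.6 -> new KLa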

  5. Enthalpy benchmark experiments for numerical ice sheet models

    Directory of Open Access Journals (Sweden)

    T. Kleiner

    2014-06-01

    We present benchmark experiments to test the implementation of enthalpy and the corresponding boundary conditions in numerical ice sheet models. The first experiment tests in particular the functionality of the boundary condition scheme and the basal melt rate calculation during transient simulations. The second experiment addresses the steady-state enthalpy profile and the resulting position of the cold–temperate transition surface (CTS). For both experiments we assume ice flow in a parallel-sided slab decoupled from the thermal regime. Since we impose several assumptions on the experiment design, analytical solutions can be formulated for the proposed numerical experiments. We compare simulation results achieved by three different ice flow models with these analytical solutions. The models agree well with the analytical solutions if the change in conductivity between cold and temperate ice is properly considered in the model. In particular, the enthalpy gradient at the cold side of the CTS vanishes in the limit of vanishing conductivity in the temperate ice part, as required by the physical jump conditions at the CTS.
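
    One plausible form of the enthalpy definition behind such benchmarks (the benchmarked models may differ in detail) splits at the pressure melting point T_m, with ω the liquid water fraction and L the latent heat of fusion; the CTS is then the surface where H crosses the cold/temperate threshold:

      H(T, \omega) =
      \begin{cases}
        c_i \, (T - T_{\mathrm{ref}}), & T < T_m \ \text{(cold ice)}, \\
        c_i \, (T_m - T_{\mathrm{ref}}) + \omega L, & T = T_m \ \text{(temperate ice)}.
      \end{cases}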

  6. Performance benchmarks for a next generation numerical dynamo model

    Science.gov (United States)

    Matsui, Hiroaki; Heien, Eric; Aubert, Julien; Aurnou, Jonathan M.; Avery, Margaret; Brown, Ben; Buffett, Bruce A.; Busse, Friedrich; Christensen, Ulrich R.; Davies, Christopher J.; Featherstone, Nicholas; Gastine, Thomas; Glatzmaier, Gary A.; Gubbins, David; Guermond, Jean-Luc; Hayashi, Yoshi-Yuki; Hollerbach, Rainer; Hwang, Lorraine J.; Jackson, Andrew; Jones, Chris A.; Jiang, Weiyuan; Kellogg, Louise H.; Kuang, Weijia; Landeau, Maylis; Marti, Philippe; Olson, Peter; Ribeiro, Adolfo; Sasaki, Youhei; Schaeffer, Nathanaël.; Simitev, Radostin D.; Sheyko, Andrey; Silva, Luis; Stanley, Sabine; Takahashi, Futoshi; Takehiro, Shin-ichi; Wicht, Johannes; Willis, Ashley P.

    2016-05-01

    Numerical simulations of the geodynamo have successfully represented many observable characteristics of the geomagnetic field, yielding insight into the fundamental processes that generate magnetic fields in the Earth's core. Because of limited spatial resolution, however, the diffusivities in numerical dynamo models are much larger than those in the Earth's core, and consequently, questions remain about how realistic these models are. The typical strategy used to address this issue has been to continue to increase the resolution of these quasi-laminar models with increasing computational resources, thus pushing them toward more realistic parameter regimes. We assess which methods are most promising for the next generation of supercomputers, which will offer access to O(10⁶) processor cores for large problems. Here we report performance and accuracy benchmarks from 15 dynamo codes that employ a range of numerical and parallelization methods. Computational performance is assessed on the basis of weak and strong scaling behavior up to 16,384 processor cores. Extrapolations of our weak-scaling results indicate that dynamo codes that employ two-dimensional or three-dimensional domain decompositions can perform efficiently on up to ~10⁶ processor cores, paving the way for more realistic simulations in the next model generation.
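
    The two scaling measures quoted can be computed from (core count, wall time) pairs as below; the timings are made-up placeholders, since real benchmark harnesses collect them from the dynamo codes themselves.

      def strong_efficiency(p0, t0, p, t):
          """Fixed problem size: ideal wall time at p cores is t0 * p0 / p."""
          return (t0 * p0) / (t * p)

      def weak_efficiency(t0, t):
          """Problem size grows with p: ideal wall time stays t0."""
          return t0 / t

      runs = [(1024, 100.0), (4096, 27.0), (16384, 8.5)]   # (cores, seconds)
      p0, t0 = runs[0]
      for p, t in runs:
          print(p, round(strong_efficiency(p0, t0, p, t), 2))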

  7. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be used efficiently for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost-effective execution platform may be built by using single board computers (SBCs), which offer relatively high computation capacity compared to their price or power consumption and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.

  8. Results of the benchmark for blade structural models, part A

    DEFF Research Database (Denmark)

    Lekou, D.J.; Chortis, D.; Belen Fariñas, A.;

    2013-01-01

    A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 “Lightweight Rotor” Task 2.2 “Lightweight structural design”. The present document describes the results of the comparison simulation runs that were performed by the partners involved with...

  9. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano;

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we compare numerical predictions of the concrete sample final shape for these two benchmark flows obtained by various research teams around the world using various numerical techniques. Our results show that all numerical techniques compared here give very similar results suggesting that numerical...

  10. Benchmarking spin-state chemistry in starless core models

    CERN Document Server

    Sipilä, O; Harju, J

    2015-01-01

    Aims. We aim to present simulated chemical abundance profiles for a variety of important species, with special attention given to spin-state chemistry, in order to provide reference results against which present and future models can be compared. Methods. We employ gas-phase and gas-grain models to investigate chemical abundances in physical conditions corresponding to starless cores. To this end, we have developed new chemical reaction sets for both gas-phase and grain-surface chemistry, including the deuterated forms of species with up to six atoms and the spin-state chemistry of light ions and of the species involved in the ammonia and water formation networks. The physical model is kept simple in order to facilitate straightforward benchmarking of other models against the results of this paper. Results. We find that the ortho/para ratios of ammonia and water are similar in both gas-phase and gas-grain models, at late times in particular, implying that the ratios are determined by gas-phase processes. We d...

  11. Proxy benchmarks for intercomparison of 8.2 ka simulations

    Directory of Open Access Journals (Sweden)

    C. Morrill

    2013-02-01

    The Paleoclimate Modelling Intercomparison Project (PMIP3) now includes the 8.2 ka event as a test of model sensitivity to North Atlantic freshwater forcing. To provide benchmarks for intercomparison, we compiled and analyzed high-resolution records spanning this event. Two previously described anomaly patterns that emerge are cooling around the North Atlantic and drier conditions in the Northern Hemisphere tropics. Newer to this compilation are more robustly defined wetter conditions in the Southern Hemisphere tropics and regionally limited warming in the Southern Hemisphere. Most anomalies around the globe lasted on the order of 100 to 150 yr. More quantitative reconstructions are now available and indicate cooling of ~1 °C and a ~20% decrease in precipitation in parts of Europe, as well as spatial gradients in δ¹⁸O from the high to low latitudes. Unresolved questions remain about the seasonality of the climate response to freshwater forcing and the extent to which the bipolar seesaw operated in the early Holocene.

  12. Proxy benchmarks for intercomparison of 8.2 ka simulations

    Directory of Open Access Journals (Sweden)

    C. Morrill

    2012-08-01

    The Paleoclimate Modelling Intercomparison Project (PMIP3) now includes the 8.2 ka event as a test of model sensitivity to North Atlantic freshwater forcing. To provide benchmarks for intercomparison, we compiled and analyzed high-resolution records spanning this event. Two previously described anomaly patterns that emerge are cooling around the North Atlantic and drier conditions in the Northern Hemisphere tropics. Newer to this compilation are more robustly defined wetter conditions in the Southern Hemisphere tropics and regionally limited warming in the Southern Hemisphere. Most anomalies around the globe lasted on the order of 100 to 150 yr. More quantitative reconstructions are now available and indicate cooling of 1.0 to 1.2 °C and a ~20% decrease in precipitation in parts of Europe, as well as spatial gradients in δ¹⁸O from the high to low latitudes. Unresolved questions remain about the seasonality of the climate response to freshwater forcing and the extent to which the bipolar seesaw operated in the early Holocene.

  13. Benchmarking a Visual-Basic based multi-component one-dimensional reactive transport modeling tool

    Science.gov (United States)

    Torlapati, Jagadish; Prabhakar Clement, T.

    2013-01-01

    We present the details of a comprehensive numerical modeling tool, RT1D, which can be used for simulating biochemical and geochemical reactive transport problems. The code can be run within the standard Microsoft EXCEL Visual Basic platform, and it does not require any additional software tools. The code can be easily adapted by others for simulating different types of laboratory-scale reactive transport experiments. We illustrate the capabilities of the tool by solving five benchmark problems with varying levels of reaction complexity. These literature-derived benchmarks are used to highlight the versatility of the code for solving a variety of practical reactive transport problems. The benchmarks are described in detail to provide a comprehensive database, which can be used by model developers to test other numerical codes. The VBA code presented in the study is a practical tool that can be used by laboratory researchers for analyzing both batch and column datasets within an EXCEL platform.
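
    A minimal sketch of the class of problem such a tool solves: an explicit upwind finite-difference march of the 1D advection-dispersion-reaction equation with first-order decay. This is not the RT1D VBA code itself, and its kinetics are far simpler than the five benchmark problems.

      import numpy as np

      def adr_1d(c0, v, D, k, dx, dt, steps):
          """March dc/dt = -v dc/dx + D d2c/dx2 - k c (upwind advection)."""
          c = c0.copy()
          for _ in range(steps):
              adv = -v * (c - np.roll(c, 1)) / dx                     # upwind
              disp = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
              c = c + dt * (adv + disp - k * c)
              c[0] = 1.0          # constant-concentration inlet
              c[-1] = c[-2]       # zero-gradient outlet
          return c

      # CFL = 0.1 and diffusion number = 0.2, safely below the explicit limits
      c = adr_1d(np.zeros(101), v=0.5, D=0.01, k=0.05, dx=0.01, dt=0.002, steps=2000)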

  14. Benchmarking Biological Nutrient Removal in Wastewater Treatment Plants:Influence of Mathematical Model Assumptions

    OpenAIRE

    Flores-Alsina, Xavier; Gernaey, Krist; Jeppsson, Ulf

    2011-01-01

    This paper examines the effect of different model assumptions when describing biological nutrient removal (BNR) by the activated sludge models (ASM) 1, 2d & 3. The performance of a nitrogen removal (WWTP1) and a combined nitrogen and phosphorus removal (WWTP2) benchmark wastewater treatment plant was compared for a series of model assumptions. Three different model approaches describing BNR are considered. In the reference case, the original model implementations are used to simulate WWTP...

  15. Adapting benchmarking to project management : an analysis of project management processes, metrics, and benchmarking process models

    OpenAIRE

    Emhjellen, Kjetil

    1997-01-01

    Since the first publication on benchmarking in 1989 by Robert C. Camp, “Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance”, the improvement technique of benchmarking has been established as an important tool in the process-focused manufacturing or production environment. The use of benchmarking has expanded to other types of industry. Benchmarking has passed the doorstep and is now in early trials in the project and construction environment....

  16. TCSC impedance regulator applied to the second benchmark model

    Energy Technology Data Exchange (ETDEWEB)

    Hamel, J.P.; Dessaint, L.A. [Ecole de Technologie Superieure, Montreal, PQ (Canada). Dept. of Electrical Engineering; Champagne, R. [Ecole de Technologie Superieure, Montreal, PQ (Canada). Dept. of Software and IT Engineering; Pare, D. [Institut de Recherche d' Hydro-Quebec, Varennes, PQ (Canada)

    2008-07-01

    Due to the combination of electrical demand growth and the high cost of building new power transmission lines, series compensation is increasingly used in power systems all around the world. Series compensation has been proposed as a new way to transfer more power on existing lines. By adding series compensation to an existing line (a relatively small change), the power transfer can be increased significantly. One of the means used for line compensation is the addition of capacitive elements in series with the line. This paper presented a thyristor-controlled series capacitor (TCSC) model that used impedance as reference, had individual controls for each phase, included a linearization module and considered only the fundamental frequency for impedance computations, without using any filter. The model's dynamic behavior was validated by applying it to the second benchmark model for subsynchronous resonance (SSR). Simulation results from the proposed model, obtained using EMTP-RV and SimPowerSystems, were demonstrated. It was concluded that SSR was mitigated by the proposed approach. 19 refs., 19 figs.

  17. A note on the numerical simulation of Kleijn's benchmark problem

    NARCIS (Netherlands)

    Van Veldhuizen, S.; Vuik, C.; Kleijn, C.R.

    2006-01-01

    In this study various numerical schemes for transient simulations of 2D laminar reacting gas flows, as typically found in Chemical Vapor Deposition (CVD) reactors, are proposed and compared. These systems are generally modeled by means of many stiffly coupled elementary gas phase reactions between a

  18. Fundamental modeling issues on benchmark structure for structural health monitoring

    Institute of Scientific and Technical Information of China (English)

    LI HuaJun; ZHANG Min; WANG JunRong; HU Sau-Lon James

    2009-01-01

    The IASC-ASCE Structural Health Monitoring Task Group developed a series of benchmark problems,and participants of the benchmark study were charged with using a 12-degree-of-freedom (DOF) shear building as their identification model. The present article addresses improperness, including the parameter and modeling errors, of using this particular model for the intended purpose of damage detection, while the measurements of damaged structures are synthesized from a full-order finite-element model. In addressing parameter errors, a model calibration procedure is utilized to tune the mass and stiffness matrices of the baseline identification model, and a 12-DOF shear building model that preserves the first three modes of the full-order model is obtained. Sequentially, this calibrated model is employed as the baseline model while performing the damage detection under various damage scenarios. Numerical results indicate that the 12-DOF shear building model is an over-simplified identification model, through which only idealized damage situations for the benchmark structure can be detected. It is suggested that a more sophisticated 3-dimensional frame structure model should be adopted as the identification model, if one intends to detect local member damages correctly.

  20. Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model

    CERN Document Server

    Baudron, Anne-Marie A -M; Maday, Yvon; Riahi, Mohamed Kamel; Salomon, Julien

    2014-01-01

    We present a parareal in time algorithm for the simulation of a neutron diffusion transient model. The method is made efficient by means of a coarse solver defined with large time steps and a steady control rods model. Using finite elements for the space discretization, our implementation provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch-Maurer-Werner (LMW) benchmark [1].
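
    A bare-bones illustration of the parareal predictor-corrector structure on a scalar ODE y' = a y, with a one-step backward-Euler coarse propagator G and a sub-stepped fine propagator F. The benchmark's solvers (3D finite-element neutron diffusion) are far richer; this only shows the update y_{n+1}^{k+1} = G(y_n^{k+1}) + F(y_n^k) - G(y_n^k).

      import numpy as np

      a, T, N, K = -1.0, 2.0, 10, 4        # decay rate, horizon, slices, iterations
      dT = T / N

      def G(y, dt):                        # coarse: one backward-Euler step
          return y / (1.0 - a * dt)

      def F(y, dt, m=20):                  # fine: m backward-Euler substeps
          for _ in range(m):
              y = y / (1.0 - a * dt / m)
          return y

      y = np.empty(N + 1); y[0] = 1.0
      for n in range(N):                   # initial coarse sweep
          y[n + 1] = G(y[n], dT)
      for _ in range(K):                   # parareal corrections
          fine = [F(y[n], dT) for n in range(N)]   # embarrassingly parallel step
          y_new = np.empty_like(y); y_new[0] = y[0]
          for n in range(N):
              y_new[n + 1] = G(y_new[n], dT) + fine[n] - G(y[n], dT)
          y = y_new
      print(y[-1], np.exp(a * T))          # converges toward the exact e^{aT}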

  1. Computer simulation of Masurca critical and subcritical experiments. Muse-4 benchmark. Final report

    International Nuclear Information System (INIS)

    The efficient and safe management of spent fuel produced during the operation of commercial nuclear power plants is an important issue. In this context, partitioning and transmutation (P and T) of minor actinides and long-lived fission products can play an important role, significantly reducing the burden on geological repositories of nuclear waste and allowing their more effective use. Various systems, including existing reactors, fast reactors and advanced systems, have been considered to optimise the transmutation scheme. Recently, many countries have shown interest in accelerator-driven systems (ADS) due to their potential for transmutation of minor actinides. Much R and D work is still required in order to demonstrate their desired capability as a whole system, and the current analysis methods and nuclear data for minor actinide burners are not as well established as those for conventionally-fuelled systems. Recognizing a need for code and data validation in this area, the Nuclear Science Committee of the OECD/NEA has organised various theoretical benchmarks on ADS burners. Many improvements and clarifications concerning nuclear data and calculation methods have been achieved. However, some significant discrepancies for important parameters are not fully understood and still require clarification. Therefore, this international benchmark based on MASURCA experiments, which were carried out under the auspices of the EC 5th Framework Programme, was launched in December 2001 in co-operation with the CEA (France) and CIEMAT (Spain). The benchmark model was oriented to compare simulation predictions based on available codes and nuclear data libraries with experimental data related to TRU transmutation, criticality constants and time evolution of the neutronic flux following source variation, within liquid metal fast subcritical systems. A total of 16 different institutions participated in this first experiment-based benchmark, providing 34 solutions. The large number

  2. Benchmark Finite Element Simulations of Postbuckling Composite Stiffened Panels

    OpenAIRE

    Orifici, Adrian; Thomson, R.; Gunnion, A.J.; Degenhardt, Richard; Abramovich, H.; Bayandor, J.

    2005-01-01

    This paper outlines the CRC-ACS contribution to a software code benchmarking exercise as part of the European Commission Project COCOMAT investigating composite postbuckling stiffened panels. Analysis was carried out using MSC.Nastran (Nastran) solution sequences SOL 106 and SOL 600, Abaqus/Standard (Abaqus) and LS-Dyna, and compared to experimental data generated previously at the Technion, Israel and DLR, Germany. The finite element (FE) analyses generally gave very good comparison u...

  3. Benchmarking of measurement and simulation of transverse rms-emittance growth

    Science.gov (United States)

    Groening, L.; Barth, W.; Bayer, W.; Clemente, G.; Dahl, L.; Forck, P.; Gerhard, P.; Hofmann, I.; Riehl, G.; Yaramyshev, S.; Jeon, D.; Uriot, D.

    2008-09-01

    Transverse emittance growth along the Alvarez drift tube linac (DTL) section is a major concern with respect to the preservation of beam quality of high current beams at the GSI UNILAC. In order to define measures to reduce this growth, appropriate tools to simulate the beam dynamics are indispensable. This paper is about the benchmarking of three beam dynamics simulation codes, i.e. DYNAMION, PARMILA, and PARTRAN, against systematic measurements of beam emittances for different transverse phase advances along the DTL. Special emphasis is put on the modeling of the initial distribution for the simulations. The concept of rms equivalence is expanded from full intensity to fractions of less than 100% of the beam. The experimental setup, data reduction, preparation of the simulations, and the evaluation of the simulations are described. In the experiments and in the simulations, a minimum of the rms-emittance growth was observed at zero current phase advances of about 60°. In general, good agreement was found between simulations and experiment for the mean values of horizontal and vertical emittances at the DTL exit.
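
    The quantity compared throughout is the transverse rms emittance; the sketch below computes it from particle coordinates and adds a crude fractional variant that keeps only the innermost fraction of the beam, a simplified stand-in for the rms-equivalent fractional analysis described. The toy Gaussian beam is illustrative only.

      import numpy as np

      def rms_emittance(x, xp):
          """eps_rms = sqrt(<x^2><x'^2> - <x x'>^2), centroid-subtracted."""
          x = x - x.mean(); xp = xp - xp.mean()
          return np.sqrt(np.mean(x**2) * np.mean(xp**2) - np.mean(x * xp)**2)

      def fractional_rms_emittance(x, xp, fraction=0.9):
          """rms emittance of the innermost `fraction` of particles, ranked
          by a normalized single-particle amplitude."""
          a2 = ((x - x.mean()) / x.std())**2 + ((xp - xp.mean()) / xp.std())**2
          keep = np.argsort(a2)[: int(fraction * len(x))]
          return rms_emittance(x[keep], xp[keep])

      x, xp = np.random.randn(10000), np.random.randn(10000)   # toy beam (mm, mrad)
      print(rms_emittance(x, xp), fractional_rms_emittance(x, xp, 0.9))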

  4. Benchmark of tyre models for mechatronic application

    OpenAIRE

    Carulla Castellví, Marina

    2010-01-01

    In this paper a comparison matrix is developed in order to examine three tyre models through nine criteria. These criteria are obtained after the requirements study of the main vehicle-dynamics mechatronic applications, such as ABS, ESP, TCS and EPAS. The present study proposes a weight for each criterion related to its importance to the mentioned applications. These weights are obtained by taking into account both practical and theoretical judgement. The former was collected through experts'...

  5. Simulation of Enhanced Geothermal Systems: A Benchmarking and Code Intercomparison Study

    Energy Technology Data Exchange (ETDEWEB)

    Scheibe, Timothy D.; White, Mark D.; White, Signe K.; Sivaramakrishnan, Chandrika; Purohit, Sumit; Black, Gary D.; Podgorney, Robert; Boyd, Lauren W.; Phillips, Benjamin R.

    2013-06-30

    Numerical simulation codes have become critical tools for understanding complex geologic processes, as applied to technology assessment, system design, monitoring, and operational guidance. Recently the need for quantitatively evaluating coupled Thermodynamic, Hydrologic, geoMechanical, and geoChemical (THMC) processes has grown, driven by new applications such as geologic sequestration of greenhouse gases and development of unconventional energy sources. Here we focus on Enhanced Geothermal Systems (EGS), which are man-made geothermal reservoirs created where hot rock exists but there is insufficient natural permeability and/or pore fluids to allow efficient energy extraction. In an EGS, carefully controlled subsurface fluid injection is performed to enhance the permeability of pre-existing fractures, which facilitates fluid circulation and heat transport. EGS technologies are relatively new, and pose significant simulation challenges. To become a trusted analytical tool for EGS, numerical simulation codes must be tested to demonstrate that they adequately represent the coupled THMC processes of concern. This presentation describes the approach and status of a benchmarking and code intercomparison effort currently underway, supported by the U. S. Department of Energy’s Geothermal Technologies Program. This study is being closely coordinated with a parallel international effort sponsored by the International Partnership for Geothermal Technology (IPGT). We have defined an extensive suite of benchmark problems, test cases, and challenge problems, ranging in complexity and difficulty, and a number of modeling teams are applying various simulation tools to these problems. The descriptions of the problems and modeling results are being compiled using the Velo framework, a scientific workflow and data management environment accessible through a simple web-based interface.

  6. Simulation of thermo-solutal convection induced macrosegregation in a Sn-10%Pb alloy benchmark during columnar solidification

    Science.gov (United States)

    Zheng, Y.; Wu, M.; Kharicha, A.; Ludwig, A.

    2016-03-01

    In order to investigate the effect of thermo-solutal convection on the formation of macrosegregation during columnar solidification, simulations with a liquid-columnar two-phase model were carried out on a 2D rectangular benchmark of Sn-10%Pb alloy. The solidification direction in the benchmark is unidirectional: (1) downwards from top to bottom or (2) upwards from bottom to top. The thermal expansion coefficient, solutal expansion coefficient and liquid diffusion coefficient of the melt are found to be key factors influencing the final macrosegregation. The segregation range and distribution are also strongly influenced by the benchmark configuration, e.g. the solidification direction (upwards or downwards) and the boundary conditions. The global macrosegregation range increases with the velocity magnitude of the melt during solidification.
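
    The role of the two expansion coefficients can be seen from the Boussinesq-type buoyancy closure typically used in such two-phase solidification models (a generic form assumed here, not quoted from the paper), in which the thermal coefficient β_T and the solutal coefficient β_c weight the temperature and solute contributions to the liquid density that drives the convection:

      \rho_\ell(T, c_\ell) \approx \rho_0 \left[ 1 - \beta_T (T - T_0) - \beta_c (c_\ell - c_0) \right]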

  7. Model-Based Engineering and Manufacturing CAD/CAM Benchmark.

    Energy Technology Data Exchange (ETDEWEB)

    Domm, T.C.; Underwood, R.S.

    1999-10-13

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more modern, responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were somewhere between 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a single computer-aided manufacturing (CAM) system. All companies were looking to the Internet either to transport information more easily throughout the corporation or as a conduit for

  8. Theoretical benchmarking of laser-accelerated ion fluxes by 2D-PIC simulations

    CERN Document Server

    Mackenroth, Felix; Marklund, Mattias

    2016-01-01

    There currently exist a number of different schemes for laser-based ion acceleration in the literature. Some of these schemes partly overlap, making a clear distinction between them difficult in certain parameter regimes. Here, we provide a systematic numerical comparison between the following schemes and their analytical models: light-sail acceleration, Coulomb explosions, hole boring acceleration, and target normal sheath acceleration (TNSA). We study realistic laser parameters and various different target designs, each optimized for one of the acceleration schemes, respectively. As a means of comparing the schemes, we compute the ion current density generated at different laser powers, using two-dimensional particle-in-cell (PIC) simulations, and benchmark the particular analytical models for the corresponding schemes against the numerical results. Finally, we discuss the consequences for attaining high fluxes through the studied laser ion-acceleration schemes.

  9. A benchmark model to assess community structure in evolving networks

    CERN Document Server

    Granell, Clara; Arenas, Alex; Fortunato, Santo; Gómez, Sergio

    2015-01-01

    Detecting the time evolution of the community structure of networks is crucial to identify major changes in the internal organization of many complex systems, which may undergo important endogenous or exogenous events. This analysis can be done in two ways: considering each snapshot as an independent community detection problem or taking into account the whole evolution of the network. In the first case, one can apply static methods on the temporal snapshots, which correspond to configurations of the system in short time windows, and match afterwards the communities across layers. Alternatively, one can develop dedicated dynamic procedures, so that multiple snapshots are simultaneously taken into account while detecting communities, which allows one to keep memory of the flow. To check how well a method of any kind could capture the evolution of communities, suitable benchmarks are needed. Here we propose a model for generating simple dynamic benchmark graphs, based on stochastic block models. In them, the time e...
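
    A toy generator in the spirit described, assuming illustrative parameters throughout: a sequence of stochastic block model snapshots in which a few randomly chosen nodes switch blocks between snapshots, so that the planted communities drift over time.

      import numpy as np

      rng = np.random.default_rng(42)

      def sbm_snapshot(labels, p_in=0.3, p_out=0.02):
          """Undirected SBM adjacency matrix for the given block labels."""
          same = labels[:, None] == labels[None, :]
          probs = np.where(same, p_in, p_out)
          adj = rng.random(probs.shape) < probs
          adj = np.triu(adj, 1)
          return (adj | adj.T).astype(int)          # symmetric, no self-loops

      labels = np.repeat([0, 1, 2], 30)             # 3 planted communities of 30
      snapshots = []
      for t in range(5):
          snapshots.append(sbm_snapshot(labels))
          movers = rng.choice(len(labels), size=3, replace=False)
          labels[movers] = rng.integers(0, 3, size=3)   # membership drift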

  10. Simulation of the OECD main steam line benchmark using the Westinghouse RAVE (TM) methodology

    International Nuclear Information System (INIS)

    In order to determine the safety of a reactor with respect to reactor system failures, a set of postulated events is analyzed, and the results are presented in Chapter 14 or 15 of the plant Final Safety Analysis Report (FSAR). In the analysis of events that are not initiated by a Loss of Coolant Accident (non-LOCA events), the typical approach has been to make conservative and bounding analysis assumptions, either because of analysis expediency, or because of the simplified modeling assumptions. In some cases, this has resulted in combinations of assumptions that cannot occur in reality. The consistency of the analysis assumptions can be improved by externally linking the reactor coolant system thermal-hydraulic calculation model to a more realistic 3-dimensional core neutronics and heat transfer model that replaces the simple point kinetics model typically used to represent the core neutronics in the system code. Over the past few years, Westinghouse has developed the RAVE (TM) methodology for the application of three-dimensional core neutron kinetics to the analysis of non-LOCA FSAR events. This methodology uses the NRC-approved core neutron kinetics code SPNOVA and the NRC-approved Westinghouse version of the core thermal hydraulics code VIPRE-01, in conjunction with the NRC-approved Westinghouse version of the reactor coolant system thermal hydraulic code RETRAN-02. The Westinghouse methodology was submitted to the NRC for approval in April 2004, and the NRC Safety Evaluation Report (SER) is expected to be issued before this paper is presented. As part of the development and licensing of the RAVE (TM) methodology, Westinghouse has performed an analysis of the OECD Main Steam Line Break (MSLB) benchmark. This benchmark problem had been defined in a cooperative program sponsored by the OECD, the NRC, and the Pennsylvania State University, in order to simulate the core response and the reactor coolant system response to a relatively severe steamline break

  11. RESRAD benchmarking against six radiation exposure pathway models

    Energy Technology Data Exchange (ETDEWEB)

    Faillace, E.R.; Cheng, J.J.; Yu, C.

    1994-10-01

    A series of benchmarking runs were conducted so that results obtained with the RESRAD code could be compared against those obtained with six pathway analysis models used to determine the radiation dose to an individual living on a radiologically contaminated site. The RESRAD computer code was benchmarked against five other computer codes - GENII-S, GENII, DECOM, PRESTO-EPA-CPG, and PATHRAE-EPA - and the uncodified methodology presented in the NUREG/CR-5512 report. Estimated doses for the external gamma pathway; the dust inhalation pathway; and the soil, food, and water ingestion pathways were calculated for each methodology by matching, to the extent possible, input parameters such as occupancy, shielding, and consumption factors.
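
    The structure shared by such pathway codes can be caricatured as a sum of pathway-specific dose contributions. The sketch below is a deliberately simplified illustration of that bookkeeping, with entirely fictitious coefficients; none of the values correspond to RESRAD or the other codes compared here.

```python
# Toy pathway-dose bookkeeping: annual dose = sum over pathways of
# (medium concentration) x (intake or exposure rate) x (dose coefficient).
# All numbers below are fictitious placeholders, not code defaults.
soil_conc = 1.0e3      # Bq/kg of soil

pathways = {
    # pathway: (effective annual intake or exposure factor, dose coefficient)
    "external gamma":  (1.0,   2.0e-7),   # relative occupancy, Sv per (Bq/kg)/yr
    "dust inhalation": (0.05,  1.0e-8),   # kg soil inhaled/yr, Sv/Bq
    "soil ingestion":  (0.037, 5.0e-9),   # kg soil ingested/yr, Sv/Bq
    "food ingestion":  (2.5,   5.0e-9),   # kg soil-equivalent via crops/yr, Sv/Bq
}

total = 0.0
for name, (intake, dcf) in pathways.items():
    dose = soil_conc * intake * dcf
    total += dose
    print(f"{name:16s}: {dose:.2e} Sv/yr")
print(f"{'total':16s}: {total:.2e} Sv/yr")
```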

  12. Development of parallel benchmark code by sheet metal forming simulator 'ITAS'

    International Nuclear Information System (INIS)

    This report describes the development of a parallel benchmark code based on the sheet metal forming simulator ITAS. ITAS is a nonlinear elasto-plastic analysis program based on the finite element method for the simulation of sheet metal forming. ITAS adopts a dynamic analysis method that computes the displacement of the sheet metal at every time step, and utilizes the implicit method with a direct linear equation solver. The simulator is therefore very robust, but it requires a large amount of computational time and memory. In the development of the parallel benchmark code, we designed the code with MPI programming to reduce the computational time. In numerical experiments on five kinds of parallel supercomputers at CCSE JAERI, i.e., SP2, SR2201, SX-4, T94 and VPP300, good performance was observed. The results will be made available to the public through the WWW so that the benchmark results may become a guideline for research and development of parallel programs. (author)

  13. Analysis of the OECD/NEA Oskarshamn-2 feedwater transient and stability benchmark with SIMULATE-3K

    International Nuclear Information System (INIS)

    The OECD/NEA recently launched an international benchmark on a combined feedwater transient and stability event that occurred at the Swedish Oskarshamn-2 (O2) nuclear power plant (NPP). The primary benchmark objective is to assess advances in coupled neutronic/thermal-hydraulic codes for simulations of challenging transients including the appearance of unstable power oscillations. The Paul Scherrer Institut (PSI) is participating in this benchmark in order to enlarge the validation basis of its advanced stability analysis methodology, currently under development for Swiss BWRs and based on the state-of-the-art SIMULATE-3K (S3K) code. This paper presents the development, optimization and validation of a S3K model for the first phase of the O2 benchmark, namely the analysis of the entire event. With the optimized model, the S3K solution is compared to the available benchmark data for both steady-state and transient conditions. For the latter, the qualitative as well as quantitative behavior of the S3K results is compared and discussed in relation to the experimental observations. The modeling aspects found in this context to most affect the ability of S3K to reproduce the event are also presented. Finally, the S3K model indicates that if the reactor scram had not been initiated, the observed diverging oscillations would have reached a maximum amplitude before decaying back into a stable state. However, it was also found that the core could instead have evolved into limit-cycle oscillations if a stabilization of the feedwater flow and temperature had occurred just before the scram signal. (author)
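
    A standard way to quantify such instability events is the decay ratio, the ratio of successive peaks of the autocorrelation of the detrended power signal. The sketch below estimates it for a synthetic damped oscillation; this is a generic BWR stability metric, not the PSI methodology itself.

```python
import numpy as np

def decay_ratio(power):
    """Decay ratio of a BWR-type power oscillation, estimated as the ratio of
    the signal's autocorrelation at two oscillation periods to that at one."""
    x = np.asarray(power, dtype=float)
    x -= x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf /= acf[0]
    zero = int(np.argmax(acf < 0.0))                     # first zero crossing (~T/4)
    period = zero + int(np.argmax(acf[zero:6 * zero]))   # dominant lag (~T)
    return acf[2 * period] / acf[period]

# Synthetic check: damped 0.5 Hz oscillation, expected DR = exp(-0.05 * 2 s) ~ 0.90
t = np.arange(0.0, 60.0, 0.04)
rng = np.random.default_rng(1)
p = np.exp(-0.05 * t) * np.sin(np.pi * t) + 0.01 * rng.standard_normal(t.size)
print(f"decay ratio ~ {decay_ratio(p):.2f}")
```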

  14. A Benchmarking Initiative for Reactive Transport Modeling Applied to Subsurface Environmental Applications

    Science.gov (United States)

    Steefel, C. I.

    2015-12-01

    Over the last 20 years, we have seen the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of subsurface environmental applications it is being used to address. Reactive transport modeling is being asked to provide accurate assessments of engineering performance and risk for important issues with far-reaching consequences. As a result, the complexity and detail of subsurface processes, properties, and conditions that can be simulated have significantly expanded. Closed-form solutions are necessary and useful, but limited to situations that are far simpler than typical applications that combine many physical and chemical processes, in many cases in coupled form. In the absence of closed-form and yet realistic solutions for complex applications, numerical benchmark problems with an accepted set of results will be indispensable to qualifying codes for various environmental applications. The intent of this benchmarking exercise, now underway for more than five years, is to develop and publish a set of well-described benchmark problems that can be used to demonstrate simulator conformance with norms established by the subsurface science and engineering community. The objective is not to verify this or that specific code (the reactive transport codes play a supporting role in this regard), but rather to use the codes to verify that a common solution of the problem can be achieved. Thus, the objective of each of the manuscripts is to present an environmentally-relevant benchmark problem that tests the conceptual model capabilities, numerical implementation, process coupling, and accuracy. The benchmark problems developed to date include 1) microbially-mediated reactions, 2) isotopes, 3) multi-component diffusion, 4) uranium fate and transport, 5) metal mobility in mining affected systems, and 6) waste repositories and related aspects.
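
    The role of closed-form solutions mentioned above can be illustrated with the simplest possible reactive transport benchmark: steady 1-D advection with first-order decay, whose exact solution is c(x) = c0·exp(−kx/v). The sketch below checks an explicit upwind scheme against it; it is a toy verification, far below the complexity of the published benchmark problems.

```python
import numpy as np

# Toy reactive transport benchmark: 1-D advection with first-order decay.
# Steady-state closed form: c(x) = c0 * exp(-k * x / v).
v, k, c0 = 1.0e-5, 2.0e-5, 1.0         # velocity [m/s], rate [1/s], inlet conc.
L, nx = 1.0, 200
dx = L / nx
dt = 0.5 * dx / v                       # CFL-limited explicit time step

c = np.zeros(nx)
for _ in range(20 * nx):                # march to steady state
    c[1:] += dt * (-v * (c[1:] - c[:-1]) / dx - k * c[1:])  # upwind + decay
    c[0] = c0                           # fixed inlet boundary

x = np.arange(nx) * dx
exact = c0 * np.exp(-k * x / v)
print(f"max relative error: {np.max(np.abs(c - exact) / exact):.3f}")
```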

  15. Experimental verification of boundary conditions for numerical simulation of airflow in a benchmark ventilation channel

    Directory of Open Access Journals (Sweden)

    Lizal Frantisek

    2016-01-01

    Correct definition of boundary conditions is crucial for the appropriate simulation of a flow. It is common practice to simulate a sufficiently long upstream entrance section instead of experimentally investigating the actual conditions at the boundary of the examined area, in cases where the measurement is either impossible or extremely demanding. We focused on the case of a benchmark channel with a ventilation outlet, which models a regular automotive ventilation system. At first, measurements of air velocity and turbulence intensity were performed at the boundary of the examined area, i.e. in the rectangular channel 272.5 mm upstream of the ventilation outlet. Then, the experimentally acquired results were compared with results obtained by numerical simulation of a further upstream entrance section defined according to generally approved theoretical suggestions. The comparison showed that despite the simple geometry and general agreement of average axial velocity, a certain difference was found in the shape of the velocity profile. The difference was attributed to the simplifications of the numerical model and the isotropic turbulence assumption of the turbulence model used. Appropriate recommendations were stated for future work.

  16. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
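
    One of the verification techniques recommended here, the method of manufactured solutions, is mechanical enough to automate. The sketch below uses sympy to manufacture the source term that makes a chosen analytic field an exact solution of the 1-D heat equation, so a code's observed order of accuracy can be measured against a known answer.

```python
import sympy as sp

# Method of manufactured solutions for u_t - alpha * u_xx = s(x, t):
# pick a smooth u, derive the source s symbolically, then feed s to the
# code under test and measure its error against the known u.
x, t, alpha = sp.symbols("x t alpha", positive=True)
u = sp.sin(sp.pi * x) * sp.exp(-t)                 # manufactured solution
s = sp.simplify(sp.diff(u, t) - alpha * sp.diff(u, x, 2))

print("manufactured source term s(x, t) =", s)
# -> (pi**2*alpha - 1)*exp(-t)*sin(pi*x)

# Numeric callables for use in a verification driver:
u_exact = sp.lambdify((x, t), u, "numpy")
source = sp.lambdify((x, t, alpha), s, "numpy")
```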

  17. The Lexington Benchmarks for Numerical Simulations of Nebulae

    CERN Document Server

    Ferland, G; Contini, M; Harrington, J; Kallman, T; Netzer, H; Péquignot, D; Raymond, J; Rubin, R; Shields, G; Sutherland, R; Viegas, S

    2016-01-01

    We present the results of a meeting on numerical simulations of ionized nebulae held at the University of Kentucky in conjunction with the celebration of the 70th birthdays of Profs. Donald Osterbrock and Michael Seaton.

  18. Benchmarking analysis of three multimedia models: RESRAD, MMSOILS, and MEPAS

    International Nuclear Information System (INIS)

    Multimedia modelers from the United States Environmental Protection Agency (EPA) and the United States Department of Energy (DOE) collaborated to conduct a comprehensive and quantitative benchmarking analysis of three multimedia models. The three models, RESRAD (DOE), MMSOILS (EPA), and MEPAS (DOE), represent analytically based tools that are used by the respective agencies for performing human exposure and health risk assessments. The study is performed by individuals who participate directly in the ongoing design, development, and application of the models. A list of physical/chemical/biological processes related to multimedia-based exposure and risk assessment is first presented as a basis for comparing the overall capabilities of RESRAD, MMSOILS, and MEPAS. Model design, formulation, and function are then examined by applying the models to a series of hypothetical problems. Major components of the models (e.g., atmospheric, surface water, groundwater) are evaluated separately and then studied as part of an integrated system for the assessment of a multimedia release scenario to determine effects due to linking components of the models. Seven modeling scenarios are used in the conduct of this benchmarking study: (1) direct biosphere exposure, (2) direct release to the air, (3) direct release to the vadose zone, (4) direct release to the saturated zone, (5) direct release to surface water, (6) surface water hydrology, and (7) multimedia release. Study results show that the models differ with respect to (1) environmental processes included (i.e., model features) and (2) the mathematical formulation and assumptions related to the implementation of solutions (i.e., parameterization)

  19. Information-Theoretic Benchmarking of Land Surface Models

    Science.gov (United States)

    Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong

    2016-04-01

    Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize information in boundary conditions [1] several years before Gong et al. [2] discovered the particular type of benchmark that makes it possible to *quantify* the amount of information lost by an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. We here extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and boundary conditions, the latter describing the time-dependent details of each prediction scenario. The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is *not* dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC). Specifically, we looked at the ability of these models to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of net uncertainty, and model structures contributed
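
    A minimal version of the information-theoretic benchmark idea is to compare the mutual information between predictions and observations for the model under test against that of a simpler empirical benchmark. The sketch below does this with histogram estimators and synthetic series; it is a schematic of the idea in [2, 3], not the authors' implementation.

```python
import numpy as np

def mutual_information(a, b, bins=20):
    """Histogram estimate of mutual information (in nats) between two series."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
obs = rng.standard_normal(5000)
model = obs + 0.5 * rng.standard_normal(5000)       # physical model's prediction
benchmark = obs + 1.5 * rng.standard_normal(5000)   # crude empirical benchmark

# A model "beats the benchmark" if it extracts more information about obs.
print(f"I(model; obs)     = {mutual_information(model, obs):.2f} nats")
print(f"I(benchmark; obs) = {mutual_information(benchmark, obs):.2f} nats")
```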

  20. MF-Swift simulation study using benchmark data

    NARCIS (Netherlands)

    Jansen, S.T.H.; Verhoeff, L.; Cremers, R.; Schmeitz, A.J.C.; Besselink, I.J.M.

    2005-01-01

    The accuracy of tyre models depends to a large extent on the measurement data used to assess model parameters. The MF-Swift tyre model parameters can be identified or estimated from various combinations of experimental data. The amount and required accuracy of the measurement data can be selected ac

  1. Benchmarking and scaling studies of pseudospectral code Tarang for turbulence simulations

    KAUST Repository

    VERMA, MAHENDRA K

    2013-09-21

    Tarang is a general-purpose pseudospectral parallel code for simulating flows involving fluids, magnetohydrodynamics, and Rayleigh–Bénard convection in turbulence and instability regimes. In this paper we present code validation and benchmarking results of Tarang. We performed our simulations on 1024³, 2048³, and 4096³ grids using the HPC system of IIT Kanpur and Shaheen of KAUST. We observe good ‘weak' and ‘strong' scaling for Tarang on these systems.

  2. Benchmarking and scaling studies of pseudospectral code Tarang for turbulence simulations

    Indian Academy of Sciences (India)

    Mahendra K Verma; Anando Chatterjee; K Sandeep Reddy; Rakesh K Yadav; Supriyo Paul; Mani Chandra; Ravi Samtaney

    2013-10-01

    Tarang is a general-purpose pseudospectral parallel code for simulating flows involving fluids, magnetohydrodynamics, and Rayleigh–Bénard convection in turbulence and instability regimes. In this paper we present code validation and benchmarking results of Tarang. We performed our simulations on 1024³, 2048³, and 4096³ grids using the HPC system of IIT Kanpur and Shaheen of KAUST. We observe good 'weak' and 'strong' scaling for Tarang on these systems.
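
    Weak and strong scaling claims like these reduce to simple ratios of wall-clock timings. The sketch below computes both efficiencies from hypothetical timing tables; the numbers are placeholders, not Tarang measurements.

```python
# Strong scaling: fixed problem size; ideal time halves when cores double.
# Weak scaling: problem size grows with cores; ideal time stays constant.
# Timings below are illustrative placeholders, not Tarang data.
strong = {1024: 512.0, 2048: 266.0, 4096: 141.0}   # cores -> seconds, fixed grid
weak = {1024: 512.0, 2048: 530.0, 4096: 566.0}     # cores -> seconds, work/core fixed

p0 = min(strong)
for p, t in strong.items():
    eff = (strong[p0] * p0) / (t * p)
    print(f"strong scaling, {p:5d} cores: efficiency {eff:5.1%}")
for p, t in weak.items():
    print(f"weak   scaling, {p:5d} cores: efficiency {weak[p0] / t:5.1%}")
```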

  3. A Programming Model Performance Study Using the NAS Parallel Benchmarks

    Directory of Open Access Journals (Sweden)

    Hongzhang Shan

    2010-01-01

    Harnessing the power of multicore platforms is challenging due to the additional levels of parallelism present. In this paper we use the NAS Parallel Benchmarks to study three programming models, MPI, OpenMP and PGAS, to understand their performance and memory usage characteristics on current multicore architectures. To understand these characteristics we use the Integrated Performance Monitoring tool and other means to measure communication versus computation time, as well as the fraction of the run time spent in OpenMP. The benchmarks are run on two different Cray XT5 systems and an InfiniBand cluster. Our results show that in general the three programming models exhibit very similar performance characteristics. In a few cases, OpenMP is significantly faster because it explicitly avoids communication. For these particular cases, we were able to re-write the UPC versions and achieve performance equal to OpenMP. OpenMP was also the most advantageous in terms of memory usage. We also compare performance differences between the two Cray systems, which have quad-core and hex-core processors, and show that at scale the performance is almost always slower on the hex-core system because of increased contention for network resources.
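
    The communication-versus-computation split discussed here can be instrumented directly. The sketch below, written in Python with mpi4py rather than the benchmarks' own languages, times a ring exchange separately from local computation; it illustrates the measurement idea only, not the IPM tooling used in the paper.

```python
# Run with: mpirun -n 4 python timing.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
local = np.random.default_rng(rank).standard_normal(1_000_000)

t_comp = t_comm = 0.0
for _ in range(20):
    t0 = MPI.Wtime()
    local = np.sqrt(np.abs(local)) + 0.1          # stand-in local computation
    t1 = MPI.Wtime()
    # Ring exchange of boundary data: the communication being measured.
    dest, src = (rank + 1) % size, (rank - 1) % size
    halo = comm.sendrecv(local[-1000:], dest=dest, source=src)
    local[:1000] = halo
    t2 = MPI.Wtime()
    t_comp += t1 - t0
    t_comm += t2 - t1

if rank == 0:
    print(f"communication fraction: {t_comm / (t_comm + t_comp):.1%}")
```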

  4. TCC-III Engine Benchmark for Large-Eddy Simulation of IC Engine Flows

    Directory of Open Access Journals (Sweden)

    Schiffmann P.

    2016-01-01

    A collaborative effort is described to benchmark the TCC-III engine and to illustrate the application of these data for the evaluation of sub-grid scale models and valve simulation details on the fidelity of Large-Eddy Simulations (LES). The TCC-III is a spark-ignition 4-stroke 2-valve engine with a flat head and piston and is equipped with a full quartz liner for maximum optical access that allows high-speed flow measurements with Particle Image Velocimetry (PIV); the TCC-III has new valve seats and a modified intake system compared to previous configurations. This work is an extension of a previous study at an engine speed of 800 RPM and an intake manifold pressure (MAP) of 95 kPa, where a one-equation eddy viscosity LES model yielded accurate qualitative and quantitative predictions of ensemble-averaged mean and RMS velocities during the intake and compression stroke. Here, experimental data were acquired with parametric variation of engine speed and intake manifold absolute pressure to assess the capability of LES models over a range of operating conditions of practical relevance. This paper focuses on the repeatability and accuracy of the measured PIV data, acquired at 1300 RPM and two different MAPs (95 kPa and 40 kPa), and imaged at multiple data planes and crank angles. Two examples are provided, illustrating the application of these data to LES model development. In one example, the experimental data are used to distinguish between the efficacies of a one-equation eddy viscosity model versus a dynamic structure one-equation model for the sub-grid stresses. The second example addresses the effects of numerical intake-valve opening strategy and local mesh refinement in the valve curtain.
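
    The ensemble-averaged mean and RMS velocities against which the LES results are judged are simple statistics over cycle-resolved PIV snapshots at a fixed crank angle. The sketch below computes them for a stack of synthetic vector fields; the array shapes are assumptions, not the TCC-III data layout.

```python
import numpy as np

# PIV ensemble statistics: snapshots of the same crank angle over many cycles.
# Shape assumed as (n_cycles, ny, nx) per velocity component; synthetic here.
rng = np.random.default_rng(0)
n_cycles, ny, nx = 240, 64, 80
mean_field = np.sin(np.linspace(0, np.pi, nx))[None, :] * np.ones((ny, 1))
u = mean_field + 0.3 * rng.standard_normal((n_cycles, ny, nx))  # fake PIV stack

u_mean = u.mean(axis=0)                 # ensemble-averaged mean velocity
u_rms = u.std(axis=0, ddof=1)           # cycle-to-cycle RMS fluctuation

print("mean velocity at domain centre:", round(u_mean[ny // 2, nx // 2], 3))
print("RMS fluctuation (should be ~0.3):", round(u_rms.mean(), 3))
```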

  5. Model-Based Engineering and Manufacturing CAD/CAM Benchmark

    International Nuclear Information System (INIS)

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to seek out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) and Work For Others into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The method for obtaining the desired information in these areas centered on the creation of a benchmark questionnaire, which was used throughout each of the visits as the basis for information gathering. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were using both 3-D solid models and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system

  6. Model-Based Engineering and Manufacturing CAD/CAM Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Domm, T.D.; Underwood, R.S.

    1999-04-26

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to seek out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) and Work For Others into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The method for obtaining the desired information in these areas centered on the creation of a benchmark questionnaire, which was used throughout each of the visits as the basis for information gathering. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were using both 3-D solid models and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system

  7. Beam equipment electromagnetic interaction in accelerators: simulation and experimental benchmarking

    CERN Document Server

    Passarelli, Andrea; Vaccaro, Vittorio Giorgio; Massa, Rita; Masullo, Maria Rosaria

    One of the most significant technological problems in achieving nominal performance in the Large Hadron Collider (LHC) concerns the collimation system for particle beams. The use of crystal collimators, exploiting the channeling effect on the extracted beam, has been experimentally demonstrated. The first part of this thesis concerns the optimization of the UA9 goniometer at CERN; this device, used for beam collimation, will replace a part of the vacuum chamber. The optimization process, however, requires calculation of the coupling impedance between the circulating beam and this structure in order to define the threshold of admissible intensity below which instability processes are not triggered. Simulations have been performed with electromagnetic codes to evaluate the coupling impedance and to assess the beam-structure interaction. The results clearly showed that the most relevant resonance frequencies are due solely to the cavity open to the compartment of the motors and position sensors considering the crystal in o...

  8. CFD Simulation of Thermal-Hydraulic Benchmark V1000CT-2 Using ANSYS CFX

    Directory of Open Access Journals (Sweden)

    Thomas Höhne

    2009-01-01

    Plant measured data from VVER-1000 coolant mixing experiments were used within the OECD/NEA and AER coupled code benchmarks for light water reactors to test and validate computational fluid dynamics (CFD) codes. The task is to compare the various calculations with measured data, using specified boundary conditions and core power distributions. The experiments, which are provided for CFD validation, include single-loop cooling down or heating up by disturbing the heat transfer in the steam generator through the steam valves at low reactor power and with all main coolant pumps in operation. CFD calculations have been performed using a numerical grid model of 4.7 million tetrahedral elements. The Best Practice Guidelines for using CFD in nuclear reactor safety applications have been followed. Different advanced turbulence models were utilized in the numerical simulation. The results show a clear sector formation of the affected loop at the downcomer, lower plenum and core inlet, which corresponds to the measured values. The maximum local values of the relative temperature rise in the calculation are in the same range as in the experiment. Based on this result, it is now possible to improve the mixing models that are usually used in system codes.

  9. Benchmarking consensus model quality assessment for protein fold recognition

    Directory of Open Access Journals (Sweden)

    McGuffin Liam J

    2007-09-01

    Background: Selecting the highest quality 3D model of a protein structure from a number of alternatives remains an important challenge in the field of structural bioinformatics. Many Model Quality Assessment Programs (MQAPs) have been developed which adopt various strategies in order to tackle this problem, ranging from the so-called "true" MQAPs capable of producing a single energy score based on a single model, to methods which rely on structural comparisons of multiple models or additional information from meta-servers. However, it is clear that no current method can separate the highest accuracy models from the lowest consistently. In this paper, a number of the top performing MQAP methods are benchmarked in the context of the potential value that they add to protein fold recognition. Two novel methods are also described: ModSSEA, which is based on the alignment of predicted secondary structure elements, and ModFOLD, which combines several true MQAP methods using an artificial neural network. Results: The ModSSEA method is found to be an effective model quality assessment program for ranking multiple models from many servers; however, further accuracy can be gained by using the consensus approach of ModFOLD. The ModFOLD method is shown to significantly outperform the true MQAPs tested and is competitive with methods which make use of clustering or additional information from multiple servers. Several of the true MQAPs are also shown to add value to most individual fold recognition servers by improving model selection, when applied as a post filter in order to re-rank models. Conclusion: MQAPs should be benchmarked appropriately for the practical context in which they are intended to be used. Clustering-based methods are the top performing MQAPs where many models are available from many servers; however, they often do not add value to individual fold recognition servers when limited models are available. Conversely, the true MQAP methods
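
    The consensus idea behind the clustering-based competitors discussed above reduces to a few lines: score each candidate model by its average structural similarity to all the others, so models near the centre of the ensemble rank highest. The sketch below takes a TM-score-like similarity matrix as input; computing real structural similarities is left to external tools.

```python
import numpy as np

def consensus_scores(similarity):
    """Score each model by its mean similarity to all other models
    (3D-Jury-style consensus); higher means closer to the ensemble centre."""
    s = np.asarray(similarity, dtype=float)
    n = s.shape[0]
    return (s.sum(axis=1) - s.diagonal()) / (n - 1)

# Toy 4-model similarity matrix (e.g. pairwise TM-scores from an external tool).
sim = np.array([[1.00, 0.82, 0.78, 0.40],
                [0.82, 1.00, 0.75, 0.38],
                [0.78, 0.75, 1.00, 0.35],
                [0.40, 0.38, 0.35, 1.00]])

ranking = np.argsort(consensus_scores(sim))[::-1]
print("models ranked best to worst:", ranking)   # model 3 is the outlier
```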

  10. Benchmarking nuclear models for Gamow–Teller response

    Energy Technology Data Exchange (ETDEWEB)

    Litvinova, E., E-mail: elena.litvinova@wmich.edu [Department of Physics, Western Michigan University, Kalamazoo, MI 49008-5252 (United States); National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824-1321 (United States); Brown, B.A. [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824-1321 (United States); National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824-1321 (United States); Fang, D.-L. [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824-1321 (United States); Joint Institute for Nuclear Astrophysics, Michigan State University, East Lansing, MI 48824-1321 (United States); Marketin, T. [Physics Department, Faculty of Science, University of Zagreb (Croatia); Zegers, R.G.T. [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824-1321 (United States); National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824-1321 (United States); Joint Institute for Nuclear Astrophysics, Michigan State University, East Lansing, MI 48824-1321 (United States)

    2014-03-07

    A comparative study of the nuclear Gamow–Teller response (GTR) within conceptually different state-of-the-art approaches is presented. Three nuclear microscopic models are considered: (i) the recently developed charge-exchange relativistic time blocking approximation (RTBA) based on the covariant density functional theory, (ii) the shell model (SM) with an extended “jj77” model space and (iii) the non-relativistic quasiparticle random-phase approximation (QRPA) with a Brueckner G-matrix effective interaction. We study the physics cases where two or all three of these models can be applied. The Gamow–Teller response functions are calculated for ²⁰⁸Pb, ¹³²Sn and ⁷⁸Ni within both RTBA and QRPA. The strengths obtained for ²⁰⁸Pb are compared to data that enable a firm model benchmarking. For the nucleus ¹³²Sn, SM calculations are also performed within the model space truncated at the level of a particle–hole (ph) coupled to vibration configurations. This allows a consistent comparison to the RTBA where ph⊗phonon coupling is responsible for the spreading width and considerable quenching of the GTR. Differences between the models and perspectives of their future developments are discussed.

  11. Dark Current and Multipacting Capabilities in OPAL: Model Benchmarks and Applications

    CERN Document Server

    Wang, C; Yin, Z G; Zhang, T J

    2012-01-01

    Dark current and multiple electron impacts (multipacting), as for example observed in radio frequency (RF) structures of accelerators, are usually harmful to the equipment and the beam quality. These effects need to be suppressed to guarantee efficient and stable operation. Large-scale simulations can be used to understand their causes and to develop strategies for suppressing these phenomena. We extend OPAL, a parallel framework for charged particle optics in accelerator structures and beam lines, with the necessary physics models to efficiently and precisely simulate multipacting phenomena. We added a Fowler-Nordheim field emission model, two secondary electron emission models, developed by Furman-Pivi and Vaughan respectively, as well as efficient 3D boundary geometry handling capabilities. The models and their implementation are carefully benchmarked against a non-stationary multipacting theory for the classic parallel plate geometry. A dedicated parallel plate experiment is sketched.
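
    Of the models listed, the Fowler-Nordheim field emission law is compact enough to quote. The sketch below evaluates the standard elementary form J = (A·E²/φ)·exp(−B·φ^{3/2}/E); the constants are the commonly tabulated ones and image-charge (Nordheim) corrections are neglected, so treat it as schematic rather than as OPAL's exact implementation.

```python
import numpy as np

def fowler_nordheim_j(E, phi):
    """Elementary Fowler-Nordheim current density [A/m^2].

    E: surface electric field [V/m]; phi: work function [eV].
    Standard constants; image-charge (Nordheim) corrections neglected.
    """
    A = 1.541434e-6       # A eV V^-2
    B = 6.830890e9        # eV^-3/2 V m^-1
    return (A * E**2 / phi) * np.exp(-B * phi**1.5 / E)

# Copper-like work function, fields typical of RF structure surfaces (assumed).
phi = 4.5
for E in (1e9, 3e9, 5e9):
    print(f"E = {E:.0e} V/m  ->  J = {fowler_nordheim_j(E, phi):.2e} A/m^2")
```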

  12. Benchmarking nuclear models for Gamow-Teller response

    CERN Document Server

    Litvinova, E; Fang, D -L; Marketin, T; Zegers, R G T

    2014-01-01

    A comparative study of the nuclear Gamow-Teller response (GTR) within conceptually different state-of-the-art approaches is presented. Three nuclear microscopic models are considered: (i) the recently developed charge-exchange relativistic time blocking approximation (RTBA) based on the covariant density functional theory, (ii) the shell model (SM) with an extended "jj77" model space and (iii) the non-relativistic quasiparticle random-phase approximation (QRPA) with a Brueckner G-matrix effective interaction. We study the physics cases where two or all three of these models can be applied. The Gamow-Teller response functions are calculated for 208-Pb, 132-Sn and 78-Ni within both RTBA and QRPA. The strengths obtained for 208-Pb are compared to data that enable a firm model benchmarking. For the nucleus 132-Sn, SM calculations are also performed within the model space truncated at the level of a particle-hole (ph) coupled to vibration configurations. This allows a consistent comparison to the RTBA where ph+ph...

  13. Development of common user data model for APOLLO3 and MARBLE and application to benchmark problems

    International Nuclear Information System (INIS)

    A Common User Data Model, CUDM, has been developed for the purpose of benchmark calculations between the APOLLO3 and MARBLE code systems. The current version of CUDM was designed for core calculation benchmark problems with 3-dimensional Cartesian (3-D XYZ) geometry. CUDM is able to manage all input/output data such as 3-D XYZ geometry, effective macroscopic cross sections, the effective multiplication factor and neutron flux. In addition, visualization tools for geometry and neutron flux are included. CUDM was designed with object-oriented techniques and implemented in the Python programming language. Based on CUDM, a prototype system for benchmark calculations, CUDM-benchmark, was also developed. The CUDM-benchmark supports input/output data conversion for the IDT solver in APOLLO3, and the TRITAC and SNT solvers in MARBLE. In order to evaluate the pertinence of CUDM, the CUDM-benchmark was applied to benchmark problems proposed by T. Takeda, G. Chiba and I. Zmijarevic. It was verified that the CUDM-benchmark successfully reproduced the results calculated with reference input data files, and provided consistent results among all the solvers by using one common input data set defined by CUDM. In addition, a detailed benchmark calculation for the Chiba benchmark was performed using the CUDM-benchmark. The Chiba benchmark is a neutron transport benchmark problem for a fast criticality assembly without homogenization. This benchmark problem consists of 4 core configurations which have different sodium void regions, and each core configuration is defined by more than 5,000 fuel/material cells. In this application, it was found that the results by the IDT and SNT solvers agreed well with the reference results by a Monte Carlo code. In addition, model effects such as the quadrature set effect, Sn order effect and mesh size effect were systematically evaluated and summarized in this report. (author)
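
    Since the report specifies an object-oriented Python design, a minimal flavour of what a common user data model for 3-D XYZ core benchmarks might look like is sketched below. The class layout and field names are invented for illustration; they are not the actual CUDM interfaces.

```python
from dataclasses import dataclass, field
from typing import Optional
import numpy as np

@dataclass
class CoreBenchmarkData:
    """Illustrative common data model for a 3-D XYZ core benchmark
    (field names invented; not the actual CUDM classes)."""
    mesh_x: np.ndarray                 # cell boundaries along x [cm]
    mesh_y: np.ndarray
    mesh_z: np.ndarray
    material_map: np.ndarray           # (nx, ny, nz) material region indices
    macro_xs: dict = field(default_factory=dict)   # region -> cross-section set
    keff: Optional[float] = None
    flux: Optional[np.ndarray] = None  # (ngroups, nx, ny, nz)

    def to_solver_input(self, solver):
        """Dispatch point where per-solver (IDT, TRITAC, SNT) writers plug in."""
        raise NotImplementedError(f"no writer registered for {solver}")

core = CoreBenchmarkData(
    mesh_x=np.linspace(0, 100, 21), mesh_y=np.linspace(0, 100, 21),
    mesh_z=np.linspace(0, 150, 31), material_map=np.zeros((20, 20, 30), int))
print(core.material_map.shape)
```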

  14. Combining tumor genome simulation with crowdsourcing to benchmark somatic single-nucleotide-variant detection.

    Science.gov (United States)

    Ewing, Adam D; Houlahan, Kathleen E; Hu, Yin; Ellrott, Kyle; Caloian, Cristian; Yamaguchi, Takafumi N; Bare, J Christopher; P'ng, Christine; Waggott, Daryl; Sabelnykova, Veronica Y; Kellen, Michael R; Norman, Thea C; Haussler, David; Friend, Stephen H; Stolovitzky, Gustavo; Margolin, Adam A; Stuart, Joshua M; Boutros, Paul C

    2015-07-01

    The detection of somatic mutations from cancer genome sequences is key to understanding the genetic basis of disease progression, patient survival and response to therapy. Benchmarking is needed for tool assessment and improvement but is complicated by a lack of gold standards, by extensive resource requirements and by difficulties in sharing personal genomic information. To resolve these issues, we launched the ICGC-TCGA DREAM Somatic Mutation Calling Challenge, a crowdsourced benchmark of somatic mutation detection algorithms. Here we report the BAMSurgeon tool for simulating cancer genomes and the results of 248 analyses of three in silico tumors created with it. Different algorithms exhibit characteristic error profiles, and, intriguingly, false positives show a trinucleotide profile very similar to one found in human tumors. Although the three simulated tumors differ in sequence contamination (deviation from normal cell sequence) and in subclonality, an ensemble of pipelines outperforms the best individual pipeline in all cases. BAMSurgeon is available at https://github.com/adamewing/bamsurgeon/.
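
    The ensemble result reported here can be mimicked with a simple voting rule over per-pipeline call sets: keep a candidate mutation if at least k callers report it. The sketch below implements that voting over toy call sets; real pipelines would of course be compared on normalized VCF records.

```python
from collections import Counter

def ensemble_calls(callsets, min_support):
    """Keep somatic SNV candidates reported by at least `min_support` callers.
    Calls are (chrom, pos, ref, alt) tuples, e.g. parsed from VCFs."""
    votes = Counter(call for calls in callsets for call in set(calls))
    return {call for call, n in votes.items() if n >= min_support}

caller_a = {("17", 7578406, "C", "T"), ("2", 198267359, "G", "A")}
caller_b = {("17", 7578406, "C", "T"), ("9", 5073770, "T", "G")}
caller_c = {("17", 7578406, "C", "T"), ("2", 198267359, "G", "A")}

consensus = ensemble_calls([caller_a, caller_b, caller_c], min_support=2)
print(sorted(consensus))
# [('17', 7578406, 'C', 'T'), ('2', 198267359, 'G', 'A')]
```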

  15. Benchmarking reactive transport models at a hillslope scale

    Science.gov (United States)

    Kalbacher, T.; He, W.; Nixdorf, E.; Jang, E.; Fleckenstein, J. H.; Kolditz, O.

    2015-12-01

    The hillslope scale is an important transition between the field scale and the catchment scale. The water flow in the unsaturated zone of a hillslope can be highly dynamic, which can lead to dynamic changes in groundwater flow or stream outflow. Additionally, interactions among the host rock formation, soil properties and recharge water from precipitation or anthropogenic activities (mining, agriculture, etc.) can influence the water quality of groundwater and streams in the long term. Simulating reactive transport processes at such a scale is a challenging task. On the one hand, simulation of water flow in a coupled soil-aquifer system often involves solving highly non-linear PDEs such as the Richards equation; on the other hand, one has to consider complicated biogeochemical reactions (e.g. water-rock interactions, biological degradation, redox reactions). Both aspects are computationally expensive and place high requirements on the numerical precision and stability of the employed code. The primary goals of this study are as follows: i) identify the bottlenecks and quantitatively analyse their influence on the simulation of biogeochemical reactions at a hillslope scale; ii) find or suggest practical strategies to deal with these bottlenecks, and thus provide detailed hints for future improvements of reactive transport simulators. To achieve these goals, the parallelized reactive transport simulator OGS#IPhreeqc has been applied to simulate two benchmark examples. The first example concerns uranium leaching, based on Šimůnek et al. (2012), which considers the leaching of uranium from a mill tailing and the accompanying mineral dissolution/precipitation. The geochemical system is then extended to include redox reactions in the second example. Based on these examples, the numerical stability and parallel performance of the tool are analysed. Reference: Šimůnek, J., Jacques, D., Šejna, M., van Genuchten, M. T.: The HP2 program for HYDRUS (2D/3D), A coupled code for simulating two

  16. A resource for benchmarking the usefulness of protein structure models.

    KAUST Repository

    Carbajo, Daniel

    2012-08-02

    BACKGROUND: Increasingly, biologists and biochemists use computational tools to design experiments to probe the function of proteins and/or to engineer them for a variety of different purposes. The most effective strategies rely on knowledge of the three-dimensional structure of the protein of interest. However, it is often the case that an experimental structure is not available and that models of different quality are used instead. On the other hand, the relationship between the quality of a model and its appropriate use is not easy to derive in general, and so far it has been analyzed in detail only for specific applications. RESULTS: This paper describes a database and related software tools that allow testing of a given structure-based method on models of a protein representing different levels of accuracy. The comparison of the results of a computational experiment on the experimental structure and on a set of its decoy models will allow developers and users to assess the specific threshold of accuracy required to perform the task effectively. CONCLUSIONS: The ModelDB server automatically builds decoy models of different accuracy for a given protein of known structure and provides a set of useful tools for their analysis. Pre-computed data for a non-redundant set of deposited protein structures are available for analysis and download in the ModelDB database. IMPLEMENTATION, AVAILABILITY AND REQUIREMENTS: Project name: A resource for benchmarking the usefulness of protein structure models. Project home page: http://bl210.caspur.it/MODEL-DB/MODEL-DB_web/MODindex.php. Operating system(s): Platform independent. Programming language: Perl-BioPerl (program); mySQL, Perl DBI and DBD modules (database); php, JavaScript, Jmol scripting (web server). Other requirements: Java Runtime Environment v1.4 or later, Perl, BioPerl, CPAN modules, HHsearch, Modeller, LGA, NCBI Blast package, DSSP, Speedfill (Surfnet) and PSAIA. License: Free. Any restrictions to use by

  17. Indoor Modelling Benchmark for 3D Geometry Extraction

    Science.gov (United States)

    Thomson, C.; Boehm, J.

    2014-06-01

    A combination of faster, cheaper and more accurate hardware, more sophisticated software, and greater industry acceptance has laid the foundations for an increased desire for accurate 3D parametric models of buildings. Pointclouds are currently the data source of choice, with static terrestrial laser scanning the predominant tool for large, dense volume measurement. The current importance of pointclouds as the primary source of real-world representation is endorsed by CAD software vendor acquisitions of pointcloud engines in 2011. Both the capture and modelling of indoor environments require great effort in time by the operator (and therefore cost). Automation is seen as a way to aid this by reducing the workload of the user, and some commercial packages have appeared that provide automation to some degree. In the data capture phase, advances in indoor mobile mapping systems are speeding up the process, albeit currently with a reduction in accuracy. As a result, this paper presents freely accessible pointcloud datasets of two typical areas of a building, each captured with two different capture methods and each with an accurate, wholly manually created model. These datasets are provided as a benchmark for the research community to gauge the performance and improvements of various techniques for indoor geometry extraction. With this in mind, non-proprietary, interoperable formats are provided, such as E57 for the scans and IFC for the reference model. The datasets can be found at: http://indoor-bench.github.io/indoor-bench.

  18. CALIBRATION METHODS OF A CONSTITUTIVE MODEL FOR PARTIALLY SATURATED SOILS: A BENCHMARKING EXERCISE WITHIN THE MUSE NETWORK

    OpenAIRE

    D'Onza, Francesca

    2008-01-01

    The paper presents a benchmarking exercise comparing different procedures, adopted by seven different teams of constitutive modellers, for the determination of parameter values in the Barcelona Basic Model, an elasto-plastic model for unsaturated soils. Each team is asked to determine a set of parameter values based on the same laboratory test data. The different sets of parameters are then employed to simulate soil behaviour along a variety of stress paths. The results are finally co...

  19. Benchmarks for interface-tracking codes in the consortium for advanced simulation of LWRs (CASL)

    International Nuclear Information System (INIS)

    A major innovation pursued by the Consortium for Advanced Simulation of LWRs (CASL) is the use of Interface Tracking Methods (ITM) to generate high-fidelity closure relations for two-phase flow and heat transfer phenomena (e.g. nucleate boiling, bubble break-up and coalescence, vapor condensation, etc.), to be used in coarser CFD, subchannel and system codes. ITMs do not assume an idealized geometry of the interface between the liquid and vapor phases, but rather calculate it from ‘first principles’. Also, used within the context of high-fidelity turbulence simulations, such as Direct Numerical Simulation (DNS) or Large Eddy Simulation (LES), ITMs can resolve the velocity (including the fluctuating field) and temperature/scalar gradients near the liquid-vapor interface, so prediction of the exchange of momentum, mass and heat at the interface in principle requires no empirical correlations. The physical complexity of the two-phase flow and heat transfer phenomena encountered in LWRs naturally lends itself to an ITM analysis approach. Several codes featuring ITM capabilities are available within CASL. These are TransAT, STAR-CCM+, PHASTA, FTC3D and FELBM. They use a variety of ITMs ranging from Volume-Of-Fluid to Level-Set, from Front-Tracking to Lattice-Boltzmann. A series of benchmark simulations is being developed to test the key capabilities of these codes and their ITMs. In this paper, three such benchmark simulations, testing DNS, LES and interface tracking, respectively, are briefly described. (author)

  20. Benchmarking of Simulation Codes Based on the Montague Resonance in the CERN Proton Synchrotron

    CERN Document Server

    Hofmann, Ingo; Cousineau, Sarah M; Franchetti, Giuliano; Giovannozzi, Massimo; Holmes, Jeffrey Alan; Jones, Frederick W; Luccio, Alfredo U; Machida, Shinji; Métral, E; Qiang, Ji; Ryne, Robert D; Spentzouris, Panagiotis

    2005-01-01

    Experimental data on emittance exchange driven by the space-charge "Montague resonance" have been obtained at the CERN Proton Synchrotron in 2002-04 as a function of the working point. These data are used to advance the benchmarking of major simulation codes (ACCSIM, IMPACT, MICROMAP, ORBIT, SIMBAD, SIMPSONS, SYNERGIA) currently employed world-wide in the design or performance improvement of high-intensity circular accelerators. In this paper we summarize the experimental findings and compare them with the first three steps of simulation results from this still-progressing work.

  1. Benchmark hydrogeophysical data from a physical seismic model

    Science.gov (United States)

    Lorenzo, Juan M.; Smolkin, David E.; White, Christopher; Chollett, Shannon R.; Sun, Ting

    2013-01-01

    Theoretical fluid flow models are used regularly to predict and analyze porous media flow but require verification against natural systems. Seismic monitoring in a controlled laboratory setting at a nominal scale of 1:1000 in the acoustic frequency range can help improve fluid flow models as well as elasto-granular models for uncompacted saturated-unsaturated soils. A mid-scale sand tank allows for many highly repeatable, yet flexible, experimental configurations with different material compositions and pump rates while still capturing phenomena such as patchy saturation, flow fingering, or layering. The tank (˜6×9×0.44 m) contains a heterogeneous sand pack (1.52-1.7 phi). In a set of eight benchmark experiments the water table is raised inside the sand body at increments of ˜0.05 m. Seismic events (vertical component) are recorded by a pseudowalkaway 64-channel accelerometer array (20 Hz-20 kHz), at 78 kS/s, in 100-scan stacks so as to optimize signal-to-noise ratio. Three screened well sites monitor water depth (+/-3 mm) inside the sand body. Seismic data sets in SEG Y format are publicly downloadable from the internet (http://github.com/cageo/Lorenzo-2012), in order to allow comparisons of different seismic and fluid flow analyses. The capillary fringe does not appear to completely saturate, as expected, because the interpreted compressional-wave velocity values remain so low. At these water levels there is no large seismic impedance contrast across the top of the water table to generate a clear reflector. Preliminary results indicate an immediate need for several additional experiments whose data sets will be added to the online database. Future benchmark data sets will grow with a control data set to show conditions in the sand body before water levels rise, and a surface 3D data set. In later experiments, buried sensors will help reduce seismic attenuation effects and in-situ saturation sensors will provide calibration values.

  2. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Directory of Open Access Journals (Sweden)

    Shane Ó Conchúir

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  3. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Science.gov (United States)

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  4. Towards a Core Model for Higher Education IT Management Benchmarking

    OpenAIRE

    Markus Juult, Janne

    2013-01-01

    This study evaluates three European higher education IT benchmarking projects by applying a custom comparison framework that is based on benchmarking literature and IT manager experience. The participating projects are Bencheit (Finland), UCISA (The United Kingdom) and UNIVERSITIC (Spain). EDUCAUSE (The United States of America) is also included as a project outside our geographical focus area due to its size and prominence in North America. Each of these projects is examined to map the data ...

  5. Mesoscale Benchmark Demonstration Problem 1: Mesoscale Simulations of Intra-granular Fission Gas Bubbles in UO2 under Post-irradiation Thermal Annealing

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yulan; Hu, Shenyang Y.; Montgomery, Robert; Gao, Fei; Sun, Xin; Tonks, Michael; Biner, Bullent; Millet, Paul; Tikare, Veena; Radhakrishnan, Balasubramaniam; Andersson, David

    2012-04-11

    A study was conducted to evaluate the capabilities of different numerical methods used to represent microstructure behavior at the mesoscale for irradiated material using an idealized benchmark problem. The purpose of the mesoscale benchmark problem was to provide a common basis to assess several mesoscale methods, with the objective of identifying the strengths and areas of improvement in the predictive modeling of microstructure evolution. In this work, mesoscale models (phase-field, Potts, and kinetic Monte Carlo) developed by PNNL, INL, SNL, and ORNL were used to calculate the evolution kinetics of intra-granular fission gas bubbles in UO2 fuel under post-irradiation thermal annealing conditions. The benchmark problem was constructed to include important microstructural evolution mechanisms affecting the kinetics of intra-granular fission gas bubble behavior, such as the atomic diffusion of Xe atoms, U vacancies, and O vacancies, the effect of vacancy capture and emission from defects, and the elastic interaction of non-equilibrium gas bubbles. An idealized set of assumptions was imposed on the benchmark problem to simplify the mechanisms considered. The capability and numerical efficiency of the different models are compared against selected experimental and simulation results. These comparisons find that the phase-field methods, by the nature of the free energy formulation, are able to represent a larger subset of the mechanisms influencing the intra-granular bubble growth and coarsening mechanisms in the idealized benchmark problem as compared to the Potts and kinetic Monte Carlo methods. It is recognized that the mesoscale benchmark problem as formulated does not specifically highlight the strengths of the discrete particle modeling used in the Potts and kinetic Monte Carlo methods. Future efforts are recommended to construct increasingly more complex mesoscale benchmark problems to further verify and validate the predictive capabilities of the mesoscale modeling
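
    Among the three mesoscale methods compared, the phase-field approach is the most compact to sketch. Below is a minimal explicit 1-D Allen-Cahn solver with a double-well free energy, the basic machinery underlying phase-field bubble models; the parameters are arbitrary and no fission-gas physics is included.

```python
import numpy as np

# Minimal 1-D Allen-Cahn phase-field solver: dphi/dt = -L*(f'(phi) - kappa*lap(phi))
# with double-well bulk energy f = W * phi^2 * (1 - phi)^2. Illustrative only.
n, dx, dt = 256, 1.0, 0.05
L_mob, W, kappa = 1.0, 1.0, 2.0

x = np.arange(n) * dx
phi = (np.abs(x - n * dx / 2) < 20).astype(float)   # a "bubble" of phase 1
phi += 0.01 * np.random.default_rng(0).standard_normal(n)

for _ in range(2000):
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
    dfdphi = 2 * W * phi * (1 - phi) * (1 - 2 * phi)  # derivative of double well
    phi -= dt * L_mob * (dfdphi - kappa * lap)

print("diffuse interface width ~", np.sum((phi > 0.1) & (phi < 0.9)) * dx, "units")
```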

  6. A resource for benchmarking the usefulness of protein structure models

    Directory of Open Access Journals (Sweden)

    Carbajo Daniel

    2012-08-01

    Full Text Available Abstract Background Increasingly, biologists and biochemists use computational tools to design experiments to probe the function of proteins and/or to engineer them for a variety of different purposes. The most effective strategies rely on the knowledge of the three-dimensional structure of the protein of interest. However it is often the case that an experimental structure is not available and that models of different quality are used instead. On the other hand, the relationship between the quality of a model and its appropriate use is not easy to derive in general, and so far it has been analyzed in detail only for specific application. Results This paper describes a database and related software tools that allow testing of a given structure based method on models of a protein representing different levels of accuracy. The comparison of the results of a computational experiment on the experimental structure and on a set of its decoy models will allow developers and users to assess which is the specific threshold of accuracy required to perform the task effectively. Conclusions The ModelDB server automatically builds decoy models of different accuracy for a given protein of known structure and provides a set of useful tools for their analysis. Pre-computed data for a non-redundant set of deposited protein structures are available for analysis and download in the ModelDB database. Implementation, availability and requirements Project name: A resource for benchmarking the usefulness of protein structure models. Project home page: http://bl210.caspur.it/MODEL-DB/MODEL-DB_web/MODindex.php. Operating system(s: Platform independent. Programming language: Perl-BioPerl (program; mySQL, Perl DBI and DBD modules (database; php, JavaScript, Jmol scripting (web server. Other requirements: Java Runtime Environment v1.4 or later, Perl, BioPerl, CPAN modules, HHsearch, Modeller, LGA, NCBI Blast package, DSSP, Speedfill (Surfnet and PSAIA. License: Free. Any

  7. Photochemistry in Terrestrial Exoplanet Atmospheres I: Photochemistry Model and Benchmark Cases

    CERN Document Server

    Hu, Renyu; Bains, William

    2012-01-01

    We present a comprehensive photochemistry model for exploration of the chemical composition of terrestrial exoplanet atmospheres. The photochemistry model is designed from the ground up to have the capacity to treat all types of terrestrial planet atmospheres, ranging from oxidizing through reducing, which makes the code suitable for application to the wide range of anticipated terrestrial exoplanet compositions. The one-dimensional chemical transport model treats up to 800 chemical reactions, photochemical processes, dry and wet deposition, surface emission and thermal escape of O-, H-, C-, N- and S-bearing species, as well as the formation and deposition of elemental sulfur and sulfuric acid aerosols. We validate the model by computing the atmospheric composition of current Earth and Mars and find agreement with observations of major trace gases in Earth's and Mars' atmospheres. We simulate several plausible atmospheric scenarios of terrestrial exoplanets, and choose three benchmark cases for atmospheres from red...

  8. A unified framework for benchmark dose estimation applied to mixed models and model averaging

    DEFF Research Database (Denmark)

    Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.

    2013-01-01

    This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both...

  9. A benchmark study of numerical schemes for one-dimensional arterial blood flow modelling.

    Science.gov (United States)

    Boileau, Etienne; Nithiarasu, Perumal; Blanco, Pablo J; Müller, Lucas O; Fossan, Fredrik Eikeland; Hellevik, Leif Rune; Donders, Wouter P; Huberts, Wouter; Willemet, Marie; Alastruey, Jordi

    2015-10-01

    Haemodynamical simulations using one-dimensional (1D) computational models exhibit many of the features of the systemic circulation under normal and diseased conditions. Recent interest in verifying 1D numerical schemes has led to the development of alternative experimental setups and the use of three-dimensional numerical models to acquire data not easily measured in vivo. In most studies to date, only one particular 1D scheme is tested. In this paper, we present a systematic comparison of six commonly used numerical schemes for 1D blood flow modelling: discontinuous Galerkin, locally conservative Galerkin, Galerkin least-squares finite element method, finite volume method, finite difference MacCormack method and a simplified trapezium rule method. Comparisons are made in a series of six benchmark test cases with an increasing degree of complexity. The accuracy of the numerical schemes is assessed by comparison with theoretical results, three-dimensional numerical data in compatible domains with distensible walls or experimental data in a network of silicone tubes. Results show a good agreement among all numerical schemes and their ability to capture the main features of pressure, flow and area waveforms in large arteries. All the information used in this study, including the input data for all benchmark cases, experimental data where available and numerical solutions for each scheme, is made publicly available online, providing a comprehensive reference data set to support the development of 1D models and numerical schemes.
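
    As a minimal illustration of one of the compared schemes, the sketch below applies a MacCormack predictor-corrector step to plain linear advection rather than to the full 1D area-flow blood flow system; the grid, wave speed and pulse shape are arbitrary choices for the example.

        import numpy as np

        def maccormack_step(u, c, dx, dt):
            """One MacCormack predictor-corrector step for u_t + c*u_x = 0 (periodic)."""
            lam = c * dt / dx
            u_pred = u - lam * (np.roll(u, -1) - u)                 # predictor: forward difference
            u_corr = u_pred - lam * (u_pred - np.roll(u_pred, 1))   # corrector: backward difference
            return 0.5 * (u + u_corr)                               # average old and corrected fields

        # usage: advect a Gaussian pulse once around a periodic domain
        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u = np.exp(-200.0 * (x - 0.3) ** 2)
        dx, c = x[1] - x[0], 1.0
        dt = 0.5 * dx / c                                           # CFL number of 0.5
        for _ in range(400):                                        # 400 steps = one full revolution
            u = maccormack_step(u, c, dx, dt)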

  10. Development of a benchmarking model for lithium battery electrodes

    Science.gov (United States)

    Bergholz, Timm; Korte, Carsten; Stolten, Detlef

    2016-07-01

    This paper presents a benchmarking model to enable the systematic selection of anode and cathode materials for lithium batteries in stationary applications, hybrid and battery electric vehicles. The model incorporates parameters for energy density, power density, safety, lifetime, costs and raw materials. Combining carbon, Li4Ti5O12 or TiO2 anodes with LiFePO4 cathodes is attractive for application in hybrid power trains. The higher prioritization of cost and raw materials in stationary applications hinders the breakthrough of Li4Ti5O12 there, and a combination of TiO2 and LiFePO4 is suggested instead. The favored combinations resemble state-of-the-art materials, whereas novel cell chemistries must be optimized for cells in battery electric vehicles. In contrast to current research efforts, sulfur as a cathode material is excluded due to its low volumetric energy density and its known lifetime and safety issues. Lithium metal as an anode material is discarded due to safety issues linked to electrode melting and dendrite formation. A high-capacity composite Li2MnO3·LiNi0.5Co0.5O2 or high-voltage spinel LiNi0.5Mn1.5O4 cathode with silicon as the anode material promises high energy densities with sufficient lifetime and safety properties, provided that electrochemical and thermal stabilization of the electrolyte/electrode interfaces and bulk materials is achieved. The model allows a systematic top-down orientation of research on lithium batteries.
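
    The type of multi-criteria ranking such a benchmarking model performs can be sketched as a weighted score over the six parameter groups; the weights and the 1-5 scores below are invented for illustration and are not the values of the published model.

        # Toy weighted-scoring sketch of a multi-criteria electrode ranking.
        criteria = ["energy", "power", "safety", "lifetime", "cost", "raw_materials"]
        weights = {"energy": 0.25, "power": 0.15, "safety": 0.20,
                   "lifetime": 0.15, "cost": 0.15, "raw_materials": 0.10}

        # Illustrative 1-5 scores for two anode/cathode pairings (hypothetical).
        cells = {
            "graphite/LiFePO4":  {"energy": 3, "power": 4, "safety": 4,
                                  "lifetime": 4, "cost": 4, "raw_materials": 4},
            "Li4Ti5O12/LiFePO4": {"energy": 2, "power": 5, "safety": 5,
                                  "lifetime": 5, "cost": 2, "raw_materials": 3},
        }

        for name, scores in cells.items():
            total = sum(weights[c] * scores[c] for c in criteria)   # weighted sum
            print(f"{name}: {total:.2f}")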

  11. Benchmark models, planes lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  12. Theory, modeling, and simulation annual report, 1992

    Energy Technology Data Exchange (ETDEWEB)

    1993-05-01

    This report briefly discusses research on the following topics: development of electronic structure methods; modeling molecular processes in clusters; modeling molecular processes in solution; modeling molecular processes in separations chemistry; modeling interfacial molecular processes; modeling molecular processes in the atmosphere; methods for periodic calculations on solids; chemistry and physics of minerals; graphical user interfaces for computational chemistry codes; visualization and analysis of molecular simulations; integrated computational chemistry environment; and benchmark computations.

  13. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency.

  14. Simulation of hydrogen deflagration experiment – Benchmark exercise with lumped-parameter codes

    Energy Technology Data Exchange (ETDEWEB)

    Kljenak, Ivo, E-mail: ivo.kljenak@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Kuznetsov, Mikhail, E-mail: mike.kuznetsov@kit.edu [Karlsruhe Institute of Technology, Kaiserstraße 12, 76131 Karlsruhe (Germany); Kostka, Pal, E-mail: kostka@nubiki.hu [NUBIKI Nuclear Safety Research Institute, Konkoly-Thege Miklós út 29-33, 1121 Budapest (Hungary); Kubišova, Lubica, E-mail: lubica.kubisova@ujd.gov.sk [Nuclear Regulatory Authority of the Slovak Republic, Bajkalská 27, 82007 Bratislava (Slovakia); Maltsev, Mikhail, E-mail: maltsev_MB@aep.ru [JSC Atomenergoproekt, 1, st. Podolskykh Kursantov, Moscow (Russian Federation); Manzini, Giovanni, E-mail: giovanni.manzini@rse-web.it [Ricerca sul Sistema Energetico, Via Rubattino 54, 20134 Milano (Italy); Povilaitis, Mantas, E-mail: mantas.p@mail.lei.lt [Lithuania Energy Institute, Breslaujos g.3, 44403 Kaunas (Lithuania)

    2015-03-15

    Highlights: • Blind and open simulations of hydrogen combustion experiment in large-scale containment-like facility with different lumped-parameter codes. • Simulation of axial as well as radial flame propagation. • Confirmation of adequacy of lumped-parameter codes for safety analyses of actual nuclear power plants. - Abstract: An experiment on hydrogen deflagration (Upward Flame Propagation Experiment – UFPE) was proposed by the Jozef Stefan Institute (Slovenia) and performed in the HYKA A2 facility at the Karlsruhe Institute of Technology (Germany). The experimental results were used to organize a benchmark exercise for lumped-parameter codes. Six organizations (JSI, AEP, LEI, NUBIKI, RSE and UJD SR) participated in the benchmark exercise, using altogether four different computer codes: ANGAR, ASTEC, COCOSYS and ECART. Both blind and open simulations were performed. In general, all the codes provided satisfactory results for the pressure increase, whereas the temperature results show a wider spread. Concerning the flame axial and radial velocities, the results may be considered satisfactory, given the inherent simplification of the lumped-parameter description compared to the local instantaneous description.

  15. Inter-code comparison benchmark between DINA and TSC for ITER disruption modelling

    International Nuclear Information System (INIS)

    Results of 2D disruption modelling for validation of benchmark ITER scenarios using two established codes—DINA and TSC—are compared. Although the simulation models employed in those two codes ought to be equivalent in the resistive time scale, quite different defining equations and formulations are adopted in their approaches. Moreover, there are considerable differences in the implemented model of solid conducting structures placed on the periphery of the plasma, such as the vacuum vessel and blanket modules. Thus it has long been unanswered whether one of the two codes is really able to reproduce the other's results correctly, since a large number of code-wise differences render the comparison task exceedingly complicated. In this paper, it is demonstrated that after the simulations are set up accounting for the model differences, a reasonably good agreement is generally obtained, corroborating the correctness of the code results. When the halo current generation and its poloidal path in the first wall are included, however, the situation is more complicated. Because of the surface-averaged treatment of the magnetic field (current density) diffusion equation, DINA can only approximately handle the poloidal electric currents in the first wall that cross the field lines. Validation is carried out for DINA simulations of the halo current generation by comparing with TSC simulations, where the treatment of halo current dynamics is more justifiable. The specific details of each code, affecting the consequences for ITER disruption prediction, are highlighted and discussed. (paper)

  16. The PRISM Benchmark Suite

    OpenAIRE

    Kwiatkowska, Marta; Norman, Gethin; Parker, David

    2012-01-01

    We present the PRISM benchmark suite: a collection of probabilistic models and property specifications, designed to facilitate testing, benchmarking and comparisons of probabilistic verification tools and implementations.

  17. Generation IV benchmarking of TRISO fuel performance models under accident conditions: Modeling input data

    Energy Technology Data Exchange (ETDEWEB)

    Collin, Blaise P. [Idaho National Laboratory (INL), Idaho Falls, ID (United States)

    2014-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation examination (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps require the benchmark participants to carry out a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison.

  18. Tree-ring responses to extreme climate events as benchmarks for terrestrial dynamic vegetation models

    Directory of Open Access Journals (Sweden)

    A. Rammig

    2014-02-01

    Climate extremes can trigger exceptional responses in terrestrial ecosystems, for instance by altering growth or mortality rates. Effects of this kind are often manifested in reductions of the local net primary production (NPP). Investigating a set of European long-term data on annual radial tree growth confirms this pattern: we find that 53% of tree ring width (TRW) indices fall more than one standard deviation below their mean, and up to 16% fall more than two standard deviations below, in years with extremely high temperatures and low precipitation. Based on these findings we investigate whether climate-driven patterns in long-term tree growth data may serve as benchmarks for state-of-the-art dynamic vegetation models such as LPJmL. The model simulates NPP but not explicitly the radial tree ring growth, hence requiring a generic method to ensure an objective comparison. Here we propose an analysis scheme that quantifies the coincidence rate of climate extremes with some biotic response (here TRW or simulated NPP). We find that the reduction in tree-ring width during drought extremes is lower than the corresponding reduction of simulated NPP. We identify ten extreme years during the 20th century in which both model and measurements indicate high coincidence rates across Europe. However, we detect substantial regional differences in simulated and observed responses to extreme events. One explanation for this discrepancy could be that the tree-ring data have preferentially been sampled at more climatically stressed sites. The model-data difference is amplified by the fact that dynamic vegetation models are designed to simulate mean ecosystem responses at landscape or regional scale. However, we find that both model and measurements display carry-over effects from the previous year. We conclude that using radial tree growth is a good basis for generic model benchmarks if the data are analyzed by scale-free measures such as coincidence analysis. Our study shows
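
    A scale-free coincidence analysis of the kind proposed reduces to the fraction of climate-extreme years in which the standardized biotic response (TRW index or simulated NPP anomaly) is also extreme; the synthetic series below are placeholders for real data.

        import numpy as np

        def coincidence_rate(climate_extreme, response, threshold=-1.0):
            """Fraction of climate-extreme years in which the standardized
            response also falls below `threshold` standard deviations."""
            hits = response[climate_extreme] < threshold
            return hits.mean()

        rng = np.random.default_rng(1)
        trw = rng.standard_normal(100)           # standardized tree-ring width indices
        extreme_years = rng.random(100) < 0.1    # flags for hot/dry years (synthetic)
        print(f"coincidence rate: {coincidence_rate(extreme_years, trw):.2f}")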

  19. A framework for implementation of organ effect models in TOPAS with benchmarks extended to proton therapy

    International Nuclear Information System (INIS)

    The aim of this work was to develop a framework for modeling organ effects within TOPAS (TOol for PArticle Simulation), a wrapper of the Geant4 Monte Carlo toolkit that facilitates particle therapy simulation. The DICOM interface for TOPAS was extended to permit contour input, used to assign voxels to organs. The following dose response models were implemented: the Lyman–Kutcher–Burman model, the critical element model, the population-based critical volume model, the parallel-serial model, a sigmoid-based model of Niemierko for normal tissue complication probability and tumor control probability (TCP), and a Poisson-based model for TCP. The framework allows easy manipulation of the parameters of these models and the implementation of other models. As part of the verification, results for the parallel-serial and Poisson models for x-ray irradiation of a water phantom were compared to data from the AAPM Task Group 166. When using the task group dose-volume histograms (DVHs), results were found to be sensitive to the number of points in the DVH, with differences up to 2.4%, some of which are attributable to differences between the implemented models. New results are given with the point spacing specified. When using Monte Carlo calculations with TOPAS, despite the relatively good match to the published DVHs, differences up to 9% were found for the parallel-serial model (for a maximum DVH difference of 2%) and up to 0.5% for the Poisson model (for a maximum DVH difference of 0.5%). However, differences of 74.5% (in Rectangle1), 34.8% (in PTV) and 52.1% (in Triangle) were found for the critical element, critical volume and sigmoid-based models, respectively. We propose a new benchmark for verification of organ effect models in proton therapy. The benchmark consists of customized structures in the spread-out Bragg peak plateau, normal tissue, tumor, penumbra and the distal region. The DVHs, DVH point spacing, and results of the organ effect models are

  20. Summary of FY15 results of benchmark modeling activities

    Energy Technology Data Exchange (ETDEWEB)

    Arguello, J. Guadalupe [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    Sandia is a contributing partner in the third phase of a U.S.-German "Joint Project" entitled "Comparison of current constitutive models and simulation procedures on the basis of model calculations of the thermo-mechanical behavior and healing of rock salt." The first goal of the project is to check the ability of numerical modeling tools to correctly describe the relevant deformation phenomena in rock salt under various influences. Achieving this goal will lead to increased confidence in the results of numerical simulations related to the secure storage of radioactive wastes in rock salt, thereby enhancing the acceptance of the results. These results may ultimately be used to make various assertions regarding both the stability analysis of an underground repository in salt during the operating phase, and the long-term integrity of the geological barrier against the release of harmful substances into the biosphere in the post-operating phase.

  1. SCALE Modeling of Selected Neutronics Test Problems within the OECD UAM LWR’s Benchmark

    Directory of Open Access Journals (Sweden)

    Luigi Mercatali

    2013-01-01

    The OECD UAM Benchmark was launched in 2005 with the objective of determining the uncertainty in the simulation of Light Water Reactor (LWR) system calculations at all stages of the coupled reactor physics–thermal hydraulics modeling. Within the framework of the "Neutronics Phase" of the Benchmark, the solutions of some selected test cases at the cell physics and lattice physics levels are presented. The SCALE 6.1 code package has been used for the neutronics modeling of the selected exercises. Sensitivity and Uncertainty analysis (S/U) based on the generalized perturbation theory has been performed in order to assess the uncertainty of the computation of some selected reactor integral parameters due to the uncertainty in the basic nuclear data. As a general trend, it has been found that the main sources of uncertainty are the 238U (n,γ) cross section and the 239Pu nubar for the UOX- and MOX-fuelled test cases, respectively. Moreover, the reference solutions for the test cases obtained using Monte Carlo methodologies are presented, together with a comparison between deterministic and stochastic solutions.
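
    S/U analyses based on generalized perturbation theory typically propagate nuclear-data covariances to an integral response with the first-order "sandwich rule", var(R) = S M S^T, where S is the relative sensitivity vector and M the relative covariance matrix; the numerical values in this sketch are invented, not those of the SCALE 6.1 computation.

        import numpy as np

        # Sandwich rule: variance of response R from sensitivities and covariances.
        S = np.array([0.8, -0.2, 0.05])           # illustrative relative sensitivities
        M = np.array([[4.0e-4, 1.0e-5, 0.0],
                      [1.0e-5, 9.0e-4, 0.0],
                      [0.0,    0.0,    1.0e-4]])  # illustrative relative covariance matrix
        var_R = S @ M @ S                          # scalar variance of R
        print(f"relative std dev of R: {np.sqrt(var_R):.4%}")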

  2. Relative importance of secondary settling tank models in WWTP simulations

    DEFF Research Database (Denmark)

    Ramin, Elham; Flores-Alsina, Xavier; Sin, Gürkan;

    2012-01-01

    Results obtained in a study using the Benchmark Simulation Model No. 1 (BSM1) show that a one-dimensional secondary settling tank (1-D SST) model structure and its parameters are among the most significant sources of uncertainty in wastewater treatment plant (WWTP) simulations [Ramin et al., 2011......]. The sensitivity results consistently indicate that the prediction of sludge production is most sensitive to the variation of the settling parameters. In the present study, we use the Benchmark Simulation Model No. 2 (BSM2), a plant-wide benchmark that combines the Activated Sludge Model No. 1 (ASM1......) with the Anaerobic Digestion Model No. 1 (ADM1). We use BSM2 as a vehicle to compare two different 1-D SST models, and to assess the relative significance of their performance on WWTP simulation model outputs. The two 1-D SST models assessed include the first-order model by Takács et al. [1991] and the second...

  3. Physical Model Development and Benchmarking for MHD Flows in Blanket Design

    International Nuclear Information System (INIS)

    An advanced simulation environment to model incompressible MHD flows relevant to blanket conditions in fusion reactors has been developed at HyPerComp in research collaboration with TEXCEL. The goals of this phase-II project are two-fold: The first is the incorporation of crucial physical phenomena such as induced magnetic field modeling, and extending the capabilities beyond fluid flow prediction to model heat transfer with natural convection and mass transfer including tritium transport and permeation. The second is the design of a sequence of benchmark tests to establish code competence for several classes of physical phenomena in isolation as well as in select (termed here as 'canonical') combinations. No previous attempts to develop such a comprehensive MHD modeling capability exist in the literature, and this study represents essentially uncharted territory. During the course of this Phase-II project, a significant breakthrough was achieved in modeling liquid metal flows at high Hartmann numbers. We developed a unique mathematical technique to accurately compute the fluid flow in complex geometries at extremely high Hartmann numbers (10,000 and greater), thus extending the state of the art of liquid metal MHD modeling relevant to fusion reactors at the present time. These developments have been published in noted international journals. A sequence of theoretical and experimental results was used to verify and validate the results obtained. The code was applied to a complete DCLL module simulation study with promising results.

  4. Physical Model Development and Benchmarking for MHD Flows in Blanket Design

    Energy Technology Data Exchange (ETDEWEB)

    Ramakanth Munipalli; P.-Y.Huang; C.Chandler; C.Rowell; M.-J.Ni; N.Morley; S.Smolentsev; M.Abdou

    2008-06-05

    An advanced simulation environment to model incompressible MHD flows relevant to blanket conditions in fusion reactors has been developed at HyPerComp in research collaboration with TEXCEL. The goals of this phase-II project are two-fold: The first is the incorporation of crucial physical phenomena such as induced magnetic field modeling, and extending the capabilities beyond fluid flow prediction to model heat transfer with natural convection and mass transfer including tritium transport and permeation. The second is the design of a sequence of benchmark tests to establish code competence for several classes of physical phenomena in isolation as well as in select (termed here as “canonical”) combinations. No previous attempts to develop such a comprehensive MHD modeling capability exist in the literature, and this study represents essentially uncharted territory. During the course of this Phase-II project, a significant breakthrough was achieved in modeling liquid metal flows at high Hartmann numbers. We developed a unique mathematical technique to accurately compute the fluid flow in complex geometries at extremely high Hartmann numbers (10,000 and greater), thus extending the state of the art of liquid metal MHD modeling relevant to fusion reactors at the present time. These developments have been published in noted international journals. A sequence of theoretical and experimental results was used to verify and validate the results obtained. The code was applied to a complete DCLL module simulation study with promising results.

  5. Semi-active model predictive control for 3rd generation benchmark problem using smart dampers

    Institute of Scientific and Technical Information of China (English)

    Yan Guiyun; Sun Bingnan; Lü Yanping

    2007-01-01

    A semi-active strategy for model predictive control (MPC), in which magneto-rheological dampers are used as the actuator, is presented for use in reducing the nonlinear seismic response of high-rise buildings. A multi-step predictive model is developed to estimate the seismic performance of high-rise buildings, taking into account the effects of nonlinearity, time-variability, model mismatch, and disturbances and uncertainty of the controlled system parameters through predicted-error feedback in the multi-step predictive model. Based on the predictive model, a Kalman-Bucy observer suitable for the semi-active strategy is proposed to estimate the state vector from the acceleration and semi-active control force feedback. The main advantage of the proposed strategy is its inherent stability, simplicity, on-line real-time operation, and the ability to handle the nonlinearity, uncertainty, and time-variability properties of structures. Numerical simulation of the nonlinear seismic responses of a controlled 20-story benchmark building is carried out, and the simulation results are compared to those of other control systems. The results show that the developed semi-active strategy can efficiently reduce the nonlinear seismic response of high-rise buildings.
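
    The observer part of such a strategy can be sketched as one predict/update cycle of a discrete Kalman filter; the system matrices below are small placeholders, not the model of the 20-story benchmark building.

        import numpy as np

        def kalman_step(x, P, u, y, A, B, C, Q, R):
            """One predict/update cycle of a discrete Kalman observer."""
            x_pred = A @ x + B @ u                      # state prediction
            P_pred = A @ P @ A.T + Q                    # covariance prediction
            S = C @ P_pred @ C.T + R                    # innovation covariance
            K = P_pred @ C.T @ np.linalg.inv(S)         # Kalman gain
            x_new = x_pred + K @ (y - C @ x_pred)       # measurement update
            P_new = (np.eye(len(x)) - K @ C) @ P_pred
            return x_new, P_new

        # toy 2-state example with placeholder matrices
        A = np.array([[1.0, 0.1], [0.0, 1.0]])
        B = np.array([[0.0], [0.1]])
        C = np.array([[1.0, 0.0]])                      # measure the first state only
        Q, R = 1e-4 * np.eye(2), np.array([[1e-2]])
        x, P = np.zeros(2), np.eye(2)
        x, P = kalman_step(x, P, u=np.array([1.0]), y=np.array([0.05]),
                           A=A, B=B, C=C, Q=Q, R=R)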

  6. OECD/NEA main steam line break PWR benchmark simulation by TRACE/S3K coupled code

    International Nuclear Information System (INIS)

    A coupling between the TRACE system thermal-hydraulics code and the SIMULATE-3K (S3K) three-dimensional reactor kinetics code has been developed in a collaboration between the Paul Scherrer Institut (PSI) and Studsvik. In order to verify the coupling scheme and the coupled code capabilities with regard to plant transients, the OECD/NEA Main Steam Line Break PWR benchmark was simulated with the coupled TRACE/S3K code. The core/plant system data were taken from the benchmark specifications, while the nuclear data were generated with Studsvik's lattice code CASMO-4 and the core analysis code SIMULATE-3. The TRACE/S3K results were compared with the published results obtained by the 17 participants of the benchmark. The comparison shows that the TRACE/S3K code satisfactorily reproduces the main transient parameters, namely the power and reactivity histories, the steam generator inventory, and the pressure response. (author)

  7. Benchmark of the FLUKA model of crystal channeling against the UA9-H8 experiment

    Science.gov (United States)

    Schoofs, P.; Cerutti, F.; Ferrari, A.; Smirnov, G.

    2015-07-01

    Channeling in bent crystals is increasingly considered as an option for the collimation of high-energy particle beams. The installation of crystals in the LHC took place during the past year and aims at demonstrating the feasibility of crystal collimation and a possible cleaning efficiency improvement. The performance of CERN collimation insertions is evaluated with the Monte Carlo code FLUKA, which is capable of simulating energy deposition in collimators as well as beam loss monitor signals. A new model of crystal channeling was developed specifically so that similar simulations can be conducted in the case of crystal-assisted collimation. In this paper, the most recent results of this model are brought forward in the framework of a joint activity inside the UA9 collaboration to benchmark the different simulation tools available. The performance of crystal STF 45, produced at INFN Ferrara, was measured at the H8 beamline at CERN in 2010 and serves as the basis for the comparison. Distributions of deflected particles are shown to be in very good agreement with experimental data. Calculated dechanneling lengths and crystal performance in the transition region between the amorphous regime and volume reflection are also close to the measured ones.

  8. Theoretical analysis of the worthiness of Henry and Elder problems as benchmarks of density-dependent groundwater flow models

    Science.gov (United States)

    Simpson, M. J.; Clement, T. P.

    Computer models must be tested to ensure that the mathematical statements and solution schemes accurately represent the physical processes of interest. Because few benchmark problems are available for testing density-dependent groundwater models, care must be taken to use the existing problems appropriately. Details of a Galerkin finite-element model for the simulation of density-dependent, variably saturated flow processes are presented here. The model is tested using the Henry salt-water intrusion problem and the Elder salt convection problem. The quality of these benchmark problems is then evaluated by solving them in the standard density-coupled mode and in a new density-uncoupled mode. The differences between the solutions indicate that the Henry salt-water intrusion problem has limited usefulness in benchmarking density-dependent flow models because the internal flow dynamics are largely determined by the boundary forcing. Alternatively, the Elder salt-convection problem is better suited to the model testing process because the flow patterns are completely determined by the internal balance of pressure and gravity forces.

  9. Benchmarking electron-cloud simulations and pressure measurements at the LHC

    CERN Document Server

    Dominguez, O

    2013-01-01

    During the beam commissioning of the Large Hadron Collider (LHC) with 150, 75, 50 and 25-ns bunch spacing, important electron-cloud effects, like pressure rise, cryogenic heat load, beam instabilities or emittance growth, were observed. A method has been developed to infer different key beam-pipe surface parameters by benchmarking simulations and pressure rise observed in the machine. This method allows us to monitor the scrubbing process (i.e. the reduction of the secondary emission yield as a function of time) in the regions where the vacuum-pressure gauges are located, in order to decide on the most appropriate strategies for machine operation. In this paper we present the methodology and first results from applying this technique to the LHC.
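
    The surface parameter typically inferred by such benchmarking is the peak of the secondary-emission-yield (SEY) curve; a widely used generic parametrization is sketched below, with placeholder parameter values rather than the fitted LHC ones.

        import numpy as np

        def sey(E, delta_max=1.7, E_max=250.0, s=1.35):
            """Generic SEY curve delta(E); peaks at delta_max when E = E_max."""
            x = E / E_max
            return delta_max * s * x / (s - 1.0 + x**s)

        for E in (50.0, 250.0, 1000.0):                 # impact energies in eV
            print(f"E = {E:6.1f} eV -> delta = {sey(E):.2f}")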

  10. Current modeling practice may lead to falsely high benchmark dose estimates.

    Science.gov (United States)

    Ringblom, Joakim; Johanson, Gunnar; Öberg, Mattias

    2014-07-01

    Benchmark dose (BMD) modeling is increasingly used as the preferred approach to define the point-of-departure for health risk assessment of chemicals. As data are inherently variable, there is always a risk of selecting a model that defines a lower confidence bound of the BMD (BMDL) that, contrary to expectation, exceeds the true BMD. The aim of this study was to investigate how often and under what circumstances such anomalies occur under current modeling practice. Continuous data were generated from a realistic dose-effect curve by Monte Carlo simulations using four dose groups and a set of five different dose placement scenarios, group sizes between 5 and 50 animals, and coefficients of variation of 5-15%. The BMD calculations were conducted using nested exponential models, as most BMD software use nested approaches. "Non-protective" BMDLs (higher than the true BMD) were frequently observed, in some scenarios reaching 80%. The phenomenon was mainly related to the selection of the non-sigmoidal exponential model (Effect = a·e^(b·dose)). In conclusion, non-sigmoid models should be used with caution as they may underestimate the risk, illustrating that awareness of the model selection process and sound identification of the point-of-departure are vital for health risk assessment.
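
    The simulation loop described above can be sketched as follows: continuous data are generated from a sigmoidal "true" dose-effect curve, the non-sigmoidal exponential model is fitted, and the BMD for a 10% change in effect is derived; the curve, group size and benchmark response are illustrative choices, and the BMDL bootstrap step is omitted.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(0)
        doses = np.array([0.0, 0.25, 0.5, 1.0])          # four dose groups
        true_curve = lambda d: 1.0 - 0.3 * d / (d + 0.2) # sigmoidal "truth" (assumed)

        def exp_model(d, a, b):                          # Effect = a * e^(b * dose)
            return a * np.exp(b * d)

        n, cv = 10, 0.10                                 # animals per group, CV of 10%
        y = np.concatenate([true_curve(d) * (1 + cv * rng.standard_normal(n))
                            for d in doses])
        x = np.repeat(doses, n)
        (a, b), _ = curve_fit(exp_model, x, y, p0=(1.0, -0.1))
        bmd = np.log(0.9) / b                            # dose giving a 10% decrease
        print(f"estimated BMD: {bmd:.3f}")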

  11. Simulation of TRIGA Mark II Benchmark Experiment using WIMSD4 and CITATION codes

    International Nuclear Information System (INIS)

    This paper presents a simulation of the TRIGA Mark II Benchmark Experiment, Part I: Steady-State Operation, and is part of the validation of the calculation methodology developed for the neutronic calculation of the CDTN's TRIGA IPR-R1 reactor. A version of WIMSD4, obtained from the Centro de Tecnologia Nuclear, in Cuba, was used in the cell calculations. The diffusion code CITATION was adopted for the core calculations, using a 3D representation of the core, and the calculations were carried out with two energy groups. Many of the experiments were simulated, including Keff, the control rod reactivity worths, the fuel element reactivity worth distribution and the fuel temperature reactivity coefficient. The comparison of the obtained results with the experimental results shows differences within the range of the accuracy of the measurements for the control rod worths and the fuel temperature reactivity coefficient, and within an acceptable range, according to the literature, for Keff and the fuel element reactivity worth distribution. (author)

  12. Viscoelastic silicone oils in analog modeling - a rheological benchmark

    Science.gov (United States)

    Rudolf, Michael; Boutelier, David; Rosenau, Matthias; Schreurs, Guido; Oncken, Onno

    2016-04-01

    Tectonic analog models frequently use silicone oils to simulate viscous flow in the lower crust and mantle. Precise knowledge of the model rheology is required to ensure dynamic similarity with the prototype. We assessed the rheological properties of various silicone oils using rotational and oscillatory tests. The resulting viscosities are in the range of 2–3 × 10⁴ Pa s, with a transition from Newtonian viscous to power-law, shear-thinning behavior around shear rates of 10⁻² to 10⁻¹ s⁻¹. Maxwell relaxation times are in the range of 10⁻¹ s. Comparing the rheological properties of chemically similar silicone oils shows that they differ from laboratory to laboratory. Furthermore, we characterized the temperature dependence of viscosity and aging effects. The samples show a reduction in zero-shear viscosity over time, which stabilizes at a certain value over several months. The dynamic moduli decrease as well, but other viscoelastic constants, such as the Maxwell relaxation time, are not affected by aging. We conclude that aging is mainly controlled by the storage conditions and that a silicone shows no further aging when it has equilibrated with the ambient laboratory conditions. We consider all these differences minor compared to the much larger uncertainties in estimating the lithosphere rheology. Nevertheless, it is important that the rheological properties of the experimental materials are monitored during an experimental series that spans several weeks to months. Additionally, the viscoelastic properties may be scaled using dimensionless parameters (Deborah number) and show a dynamically similar change from Newtonian to power-law flow, like the natural prototype. In consequence, the viscoelasticity of these silicone oils is able to mimic the change in deformation mechanism from diffusion to dislocation creep.
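
    The reported transition from Newtonian to shear-thinning flow is commonly described with a Carreau-type viscosity function; the parameter values in this sketch are merely consistent in magnitude with those quoted above, not a fit to the measured data.

        import numpy as np

        def carreau(gdot, eta0, lam, n):
            """Apparent viscosity vs shear rate for a Carreau fluid."""
            return eta0 * (1.0 + (lam * gdot) ** 2) ** ((n - 1.0) / 2.0)

        gdot = np.logspace(-4, 1, 6)                    # shear rates, 1/s
        eta = carreau(gdot, eta0=2.5e4, lam=10.0, n=0.6)
        for g, e in zip(gdot, eta):
            print(f"gamma_dot = {g:8.1e} 1/s -> eta = {e:9.1f} Pa s")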

  13. Benchmarking Electron-Cloud Build-Up and Heat-Load Simulations against Large-Hadron-Collider Observations

    OpenAIRE

    Dominguez, O; Iriso, U; Maury, H.; Rumolo, G.; Zimmermann, F

    2011-01-01

    After reviewing the basic features of electron clouds in particle accelerators, the pertinent vacuum-chamber surface properties, and the electron-cloud simulation tools in use at CERN, we report recent observations of electron-cloud phenomena at the Large Hadron Collider (LHC) and ongoing attempts to benchmark the measured LHC vacuum pressure increases and heat loads against electron-cloud build-up simulations aimed at determining the actual surface parameters and at monitoring the so-called ...

  14. Benchmarking the Calculation of Stochastic Heating and Emissivity of Dust Grains in the Context of Radiative Transfer Simulations

    CERN Document Server

    Camps, Peter; Bianchi, Simone; Lunttila, Tuomas; Pinte, Christophe; Natale, Giovanni; Juvela, Mika; Fischera, Joerg; Fitzgerald, Michael P; Gordon, Karl; Baes, Maarten; Steinacker, Juergen

    2015-01-01

    We define an appropriate problem for benchmarking dust emissivity calculations in the context of radiative transfer (RT) simulations, specifically including the emission from stochastically heated dust grains. Our aim is to provide a self-contained guide for implementors of such functionality, and to offer insights into the effects of the various approximations and heuristics implemented by the participating codes to accelerate the calculations. The benchmark problem definition includes the optical and calorimetric material properties, and the grain size distributions, for a typical astronomical dust mixture with silicate, graphite and PAH components; a series of analytically defined radiation fields to which the dust population is to be exposed; and instructions for the desired output. We process this problem using six RT codes participating in this benchmark effort, and compare the results to a reference solution computed with the publicly available dust emission code DustEM. The participating codes implement...

  15. Modeling Nonlinear Growth with Three Data Points: Illustration with Benchmarking Data

    Science.gov (United States)

    Kamata, Akihito; Nese, Joseph F. T.; Patarapichayatham, Chalie; Lai, Cheng-Fei

    2013-01-01

    The purpose of this article is to demonstrate ways to model nonlinear growth using three testing occasions. We demonstrate our growth models in the context of curriculum-based measurement using the fall, winter, and spring passage reading fluency benchmark assessments. We present a brief technical overview that includes the limitations of a growth…

  16. Benchmarking nuclear models of FLUKA and GEANT4 for carbon ion therapy

    International Nuclear Information System (INIS)

    As carbon ions, at therapeutic energies, penetrate tissue, they undergo inelastic nuclear reactions and give rise to significant yields of secondary fragment fluences. An accurate prediction of the fluences resulting from the primary carbon interactions in the patient's body is therefore necessary in order to precisely simulate the spatial dose distribution and the resulting biological effect. In this paper, the performance of the nuclear fragmentation models of the Monte Carlo transport codes FLUKA and GEANT4, in tissue-like media and for an energy regime relevant to therapeutic carbon ions, is investigated. The ability of these Monte Carlo codes to reproduce experimental data of charge-changing cross sections and integral and differential yields of secondary charged fragments is evaluated. For the fragment yields, the main focus is on the consideration of experimental approximations and uncertainties such as the energy measurement by time-of-flight. For GEANT4, the hadronic models G4BinaryLightIonReaction and G4QMD are benchmarked together with some recently enhanced de-excitation models. For non-differential quantities, discrepancies of some tens of percent are found for both codes. For differential quantities, even larger deviations are found. Implications of these findings for the therapeutic use of carbon ions are discussed.

  17. Benchmarking of Monte Carlo simulation of bremsstrahlung from thick targets at radiotherapy energies

    Energy Technology Data Exchange (ETDEWEB)

    Faddegon, Bruce A.; Asai, Makoto; Perl, Joseph; Ross, Carl; Sempau, Josep; Tinslay, Jane; Salvat, Francesc [Department of Radiation Oncology, University of California at San Francisco, San Francisco, California 94143 (United States); Stanford Linear Accelerator Center, 2575 Sand Hill Road, Menlo Park, California 94025 (United States); National Research Council Canada, Institute for National Measurement Standards, 1200 Montreal Road, Building M-36, Ottawa, Ontario K1A 0R6 (Canada); Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya and Centro de Investigacion Biomedica en Red en Bioingenieria, Biomateriales y Nanomedicina (CIBER-BBN), Diagonal 647, 08028 Barcelona (Spain); Stanford Linear Accelerator Center, 2575 Sand Hill Road, Menlo Park, California 94025 (United States); Facultat de Fisica (ECM), Universitat de Barcelona, Societat Catalana de Fisica (IEC), Diagonal 647, 08028 Barcelona (Spain)

    2008-10-15

    Several Monte Carlo systems were benchmarked against published measurements of bremsstrahlung yield from thick targets for 10-30 MV beams. The quantity measured was photon fluence at 1 m per unit energy per incident electron (spectra), and total photon fluence, integrated over energy, per incident electron (photon yield). Results were reported at 10-30 MV on the beam axis for Al and Pb targets and at 15 MV at angles out to 90° for Be, Al, and Pb targets. Beam energy was revised with improved accuracy of 0.5% using an improved energy calibration of the accelerator. Recently released versions of the Monte Carlo systems EGSNRC, GEANT4, and PENELOPE were benchmarked against the published measurements using the revised beam energies. Monte Carlo simulation was capable of calculating photon yield in the experimental geometry to 5% out to 30°, 10% at wider angles, and photon spectra to 10% at intermediate photon energies, 15% at lower energies. Accuracy of measured photon yield from 0 to 30° was 5%, 1 s.d., increasing to 7% for the larger angles. EGSNRC and PENELOPE results were within 2 s.d. of the measured photon yield at all beam energies and angles, GEANT4 within 3 s.d. Photon yield at nonzero angles covering conventional field sizes used in radiotherapy (out to 10°), measured with an accuracy of 3%, was calculated within 1 s.d. of measurement for EGSNRC, 2 s.d. for PENELOPE and GEANT4. Calculated spectra closely matched measurement at photon energies over 5 MeV. Photon spectra near 5 MeV were underestimated by as much as 10% by all three codes. The photon spectra below 2-3 MeV for the Be and Al targets and small angles were overestimated by up to 15% when using EGSNRC and PENELOPE, 20% with GEANT4. EGSNRC results with the NIST option for the bremsstrahlung cross section were preferred over the alternative cross section available in EGSNRC and over EGS4. GEANT4 results calculated with the ...

  18. Simulation modeling and arena

    CERN Document Server

    Rossetti, Manuel D

    2015-01-01

    Emphasizes a hands-on approach to learning statistical analysis and model building through the use of comprehensive examples, problem sets, and software applications. With a unique blend of theory and applications, Simulation Modeling and Arena®, Second Edition integrates coverage of statistical analysis and model building to emphasize the importance of both topics in simulation. Featuring introductory coverage of how simulation works and why it matters, the Second Edition expands coverage of static simulation and the applications of spreadsheets to perform simulation. The new edition als

  19. Benchmarked Empirical Bayes Methods in Multiplicative Area-level Models with Risk Evaluation

    OpenAIRE

    Ghosh, Malay; Kubokawa, Tatsuya; Kawakubo, Yuki

    2014-01-01

    The paper develops empirical Bayes and benchmarked empirical Bayes estimators of positive small area means under multiplicative models. A simple example will be estimation of per capita income for small areas. It is now well-understood that small area estimation needs explicit, or at least implicit use of models. One potential difficulty with model-based estimators is that the overall estimator for a larger geographical area based on (weighted) sum of the model-based estimators is not necessa...

  20. An Evaluation of Fault Tolerant Wind Turbine Control Schemes applied to a Benchmark Model

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob

    2014-01-01

    The reliability and availability of modern wind turbines increase in importance as their share of the world's power supply increases. This matters in order to increase the energy generated per unit, to lower the cost of energy, and to ensure the availability of the generated power, which helps keep the power grids stable. Advanced Fault Tolerant Control is one of the potential tools to increase the reliability of modern wind turbines. A benchmark model for wind turbine fault detection and isolation and fault tolerant control has previously been proposed, and based on this benchmark

  1. Analysis of Transitional and Turbulent Flow Through the FDA Benchmark Nozzle Model Using Laser Doppler Velocimetry.

    Science.gov (United States)

    Taylor, Joshua O; Good, Bryan C; Paterno, Anthony V; Hariharan, Prasanna; Deutsch, Steven; Malinauskas, Richard A; Manning, Keefe B

    2016-09-01

    Transitional and turbulent flow through a simplified medical device model is analyzed as part of the FDA's Critical Path Initiative, designed to improve the process of bringing medical products to market. Computational predictions are often used in the development of devices, and reliable in vitro data are needed to validate computational results, particularly estimations of the Reynolds stresses that could play a role in damaging blood elements. The high spatial resolution of laser Doppler velocimetry (LDV) is used to collect two-component velocity data within the FDA benchmark nozzle model. Two flow conditions are used to produce flow encompassing laminar, transitional, and turbulent regimes, and viscous stresses, principal Reynolds stresses, and turbulence intensities are calculated from the measured LDV velocities. Axial velocities and viscous stresses are compared to data from a prior inter-laboratory study conducted with particle image velocimetry. Large velocity gradients are observed near the wall in the nozzle throat and in the jet shear layer located in the expansion downstream of the throat, with axial velocity changing as much as 4.5 m/s over 200 μm. Additionally, maximum Reynolds shear stresses of 1000-2000 Pa are calculated in the high shear regions, which are an order of magnitude higher than the peak viscous shear stresses (<100 Pa). It is important to consider the effects of both viscous and turbulent stresses when simulating flow through medical devices. Reynolds stresses above commonly accepted hemolysis thresholds are measured in the nozzle model, indicating that hemolysis may occur under certain flow conditions. As such, the presented turbulence quantities from LDV, which are also available for download at https://fdacfd.nci.nih.gov/, provide an ideal validation test for computational simulations that seek to characterize the flow field and to predict hemolysis within the FDA nozzle geometry.
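
    The turbulence quantities reported can be sketched from two-component velocity samples at a single point by subtracting the means and forming the velocity-fluctuation correlations; the synthetic samples and the fluid density below are placeholders, and the principal Reynolds stresses would additionally require diagonalizing the stress tensor.

        import numpy as np

        rho = 1056.0                                    # blood-analog density, kg/m^3 (assumed)
        rng = np.random.default_rng(3)
        u = 2.0 + 0.3 * rng.standard_normal(5000)       # axial velocity samples, m/s
        v = 0.0 + 0.2 * rng.standard_normal(5000)       # radial velocity samples, m/s

        up, vp = u - u.mean(), v - v.mean()             # fluctuating components
        tau_uv = -rho * np.mean(up * vp)                # Reynolds shear stress, Pa
        ti = np.sqrt(0.5 * (np.mean(up**2) + np.mean(vp**2))) / u.mean()  # turbulence intensity
        print(f"Reynolds shear stress: {tau_uv:.1f} Pa, turbulence intensity: {ti:.2%}")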

  2. Developing of Indicators of an E-Learning Benchmarking Model for Higher Education Institutions

    Science.gov (United States)

    Sae-Khow, Jirasak

    2014-01-01

    This study was the development of e-learning indicators used as an e-learning benchmarking model for higher education institutes. Specifically, it aimed to: 1) synthesize the e-learning indicators; 2) examine content validity by specialists; and 3) explore appropriateness of the e-learning indicators. Review of related literature included…

  3. Structural modeling and fuzzy-logic based diagnosis of a ship propulsion benchmark

    DEFF Research Database (Denmark)

    Izadi-Zamanabadi, Roozbeh; Blanke, M.; Katebi, S.D.

    2000-01-01

    An analysis of the structural model of a ship propulsion benchmark leads to identifying the subsystems with inherently redundant information. For the nonlinear part of the system, a fuzzy-logic based FD algorithm with an adaptive threshold is employed. The results illustrate the applicability of structural...... analysis as well as fuzzy observers....

  4. Structural modeling and fuzzy-logic based diagnosis of a ship propulsion benchmark

    DEFF Research Database (Denmark)

    Izadi-Zamanabadi, Roozbeh; Blanke, M.; Katebi, S.D.

    2000-01-01

    An analysis of the structural model of a ship propulsion benchmark leads to identifying the subsystems with inherently redundant information. For the nonlinear part of the system, a fuzzy-logic based FD algorithm with an adaptive threshold is employed. The results illustrate the applicability of structural...... analysis as well as fuzzy observers...

  5. Creating a benchmark of vertical axis wind turbines in dynamic stall for validating numerical models

    DEFF Research Database (Denmark)

    Castelein, D.; Ragni, D.; Tescione, G.;

    2015-01-01

    An experimental campaign using a two-component Particle Image Velocimetry (2C-PIV) technique has been conducted on an H-type Vertical Axis Wind Turbine (VAWT) to create a benchmark for validating and comparing numerical models. The turbine is operated at tip speed ratios (TSR) of 4.5 and 2, at an average chord-ba...

  6. TRIPOLI-4® - MCNP5 ITER A-lite neutronic model benchmarking

    Science.gov (United States)

    Jaboulay, J.-C.; Cayla, P.-Y.; Fausser, C.; Lee, Y.-K.; Trama, J.-C.; Li-Puma, A.

    2014-06-01

    The aim of this paper is to present the capability of TRIPOLI-4®, the CEA Monte Carlo code, to model a large-scale fusion reactor with a complex neutron source and geometry. In the past, numerous benchmarks were conducted to assess TRIPOLI-4® for fusion applications. Analyses of experiments (KANT, OKTAVIAN, FNG) and numerical benchmarks (between TRIPOLI-4® and MCNP5) on the HCLL DEMO2007 and ITER models were carried out successively. In the previous ITER benchmark, nevertheless, only the neutron wall loading was analyzed; its main purpose was to present the MCAM (the FDS Team CAD import tool) extension for TRIPOLI-4®. Starting from this work, a more extensive benchmark has been performed on the estimation of the neutron flux, the nuclear heating in the shielding blankets and the tritium production rate in the European TBMs (HCLL and HCPB), and it is presented in this paper. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model (version 4.1). Simplified TBMs (from KIT) have been integrated in the equatorial port. Comparisons of neutron wall loading, flux, nuclear heating and tritium production rate show a good agreement between the two codes. Discrepancies lie mainly within the statistical errors of the Monte Carlo codes.

  7. Pore-scale and Continuum Simulations of Solute Transport Micromodel Benchmark Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Oostrom, Martinus; Mehmani, Yashar; Romero Gomez, Pedro DJ; Tang, Y.; Liu, H.; Yoon, Hongkyu; Kang, Qinjun; Joekar Niasar, Vahid; Balhoff, Matthew; Dewers, T.; Tartakovsky, Guzel D.; Leist, Emily AE; Hess, Nancy J.; Perkins, William A.; Rakowski, Cynthia L.; Richmond, Marshall C.; Serkowski, John A.; Werth, Charles J.; Valocchi, Albert J.; Wietsma, Thomas W.; Zhang, Changyong

    2016-08-01

    Four sets of micromodel nonreactive solute transport experiments were conducted with flow velocity, grain diameter, pore-aspect ratio, and flow-focusing heterogeneity as the variables. The data sets were offered to pore-scale modeling groups to test their simulators. Each set consisted of two learning experiments, for which all results were made available, and a challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two others are based on a lattice-Boltzmann (LB) approach, and one employed a computational fluid dynamics (CFD) technique. The learning experiments were used by the PN models to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used these experiments to appropriately discretize the grid representations. The continuum model used published nonlinear relations between transverse dispersion coefficients and Peclet numbers to compute the required dispersivity input values. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in less dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models needed up to several days on supercomputers to resolve the more complex problems.
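
    The nonlinear dispersion-Peclet relation that the continuum model relies on is commonly written as a power law, D_L/D_m = a·Pe^b; the coefficients in this sketch are illustrative rather than a published fit.

        import numpy as np

        def longitudinal_dispersion(v, d, D_m, a=0.5, b=1.2):
            """Longitudinal dispersion coefficient from a power-law Peclet scaling."""
            Pe = v * d / D_m                            # grain-scale Peclet number
            return D_m * a * Pe**b

        # illustrative values: pore velocity, grain diameter, molecular diffusivity
        D_L = longitudinal_dispersion(v=1e-3, d=3e-4, D_m=1e-9)
        print(f"D_L = {D_L:.3e} m^2/s")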

  8. A benchmark for the validation of solidification modelling algorithms

    Science.gov (United States)

    Kaschnitz, E.; Heugenhauser, S.; Schumacher, P.

    2015-06-01

    This work presents two three-dimensional solidification models, which were solved with several commercial solvers (MAGMASOFT, FLOW-3D, ProCAST, WinCast, ANSYS, and OpenFOAM). Surprisingly, the results show noticeable differences. The results are analyzed in a manner similar to a round-robin test procedure to obtain reference values for temperatures and their uncertainties at selected positions in the model. The first model is similar to an adiabatic calorimeter, with an aluminum alloy solidifying in a copper block. For this model, an analytical solution for the overall temperature at steady state can be calculated. The second model implements additional heat transfer boundary conditions at the outer faces. The geometry of the models, the initial and boundary conditions, as well as the material properties are kept as simple as possible but, nevertheless, close to a realistic solidification situation. The resulting temperature data can be used to validate self-written solidification solvers and to check the accuracy of commercial solidification programs.
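
    For the first model, the steady-state temperature follows from a closed energy balance: the sensible plus latent heat released by the solidifying alloy equals the heat absorbed by the copper block. The sketch below assumes complete solidification and uses rough textbook property values, not the benchmark's actual data.

        # Energy balance: m_al*c_al*(T_al - T) + m_al*L_al = m_cu*c_cu*(T - T_cu)
        m_al, c_al, L_al, T_al = 0.5, 1100.0, 4.0e5, 700.0   # kg, J/(kg K), J/kg, degC (assumed)
        m_cu, c_cu, T_cu = 5.0, 385.0, 20.0                  # kg, J/(kg K), degC (assumed)

        # Solve the linear balance for the common final temperature T.
        T = (m_al * c_al * T_al + m_al * L_al + m_cu * c_cu * T_cu) \
            / (m_al * c_al + m_cu * c_cu)
        print(f"equilibrium temperature: {T:.1f} degC")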

  9. Data for model validation summary report. A summary of data for validation and benchmarking of recovery boiler models

    Energy Technology Data Exchange (ETDEWEB)

    Grace, T.; Lien, S.; Schmidl, W.; Salcudean, M.; Abdullah, Z.

    1997-07-01

    One of the tasks in the project was to obtain data from operating recovery boilers for the purpose of model validation. Another task was to obtain water model data and computer output from the University of British Columbia for purposes of benchmarking the UBC model against other codes. In discussions on recovery boiler modeling over the course of this project, it became evident that there would be value in having some common cases for carrying out benchmarking exercises with different recovery boiler models. In order to facilitate such a benchmarking exercise, the data obtained on this project for validation and benchmarking purposes have been brought together in a single, separate report. The intent is to make these data available to anyone who may want to use them for model validation. The report contains data from three different cases. Case 1 is an ABBCE recovery boiler which was used for model validation; the data are for a single set of operating conditions. Case 2 is a Babcock & Wilcox recovery boiler that was modified by Tampella; in this data set, several different operating conditions were employed. The third case is water flow data supplied by UBC, along with computational output using the UBC code, for benchmarking purposes.

  10. Benchmark Dose Software Development and Maintenance: Ten Berge CxT Models

    Science.gov (United States)

    This report is intended to provide an overview of beta version 1.0 of the implementation of a concentration-time (CxT) model originally programmed and provided by Wil ten Berge (referred to hereafter as the ten Berge model). The recoding and development described here represent ...
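
    The concentration-time (CxT) family of models generalizes Haber's rule to a "toxic load" of the form C^n · t; a commonly used variant maps that load to a response probability through a probit. The sketch below, with entirely hypothetical coefficients `a`, `b`, and `n`, illustrates that structure only; it is not the recoded beta software described in the report.

```python
from math import log, sqrt, erf

def probit_response(conc, minutes, a=-10.0, b=1.5, n=2.0):
    """Generic CxT (toxic load) response: probit = a + b*ln(C**n * t).
    a, b, n are hypothetical coefficients for illustration only."""
    y = a + b * log(conc**n * minutes)          # probit value
    # convert the probit to a probability via the standard normal CDF
    return 0.5 * (1.0 + erf(y / sqrt(2.0)))

# Example: response probability for 100 ppm sustained over 30 minutes.
print(f"P(response) = {probit_response(conc=100.0, minutes=30.0):.3f}")
```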

  11. Simulation of TRIGA Mark II Benchmark Experiment using WIMSD4 and CITATION codes; Simulacao com WIMSD4 e CITATION do Triga Mark II benchmark experiment

    Energy Technology Data Exchange (ETDEWEB)

    Dalle, Hugo Moura [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN), Belo Horizonte, MG (Brazil); Pereira, Claubia [Minas Gerais Univ., Belo Horizonte, MG (Brazil). Dept. de Engenharia Nuclear

    2000-07-01

    This paper presents a simulation of the TRIGA Mark II Benchmark Experiment, Part I: Steady-State Operation, and is part of the validation of the calculation methodology developed for the neutronic calculation of CDTN's TRIGA IPR-R1 reactor. A version of WIMSD4, obtained from the Centro de Tecnologia Nuclear in Cuba, was used for the cell calculations. The diffusion code CITATION was adopted for the core calculations, using a 3D representation of the core, and the calculations were carried out with two energy groups. Many of the experiments were simulated, including k{sub eff}, control rod reactivity worths, the fuel element reactivity worth distribution, and the fuel temperature reactivity coefficient. The comparison of the obtained results with the experimental results shows differences within the range of the accuracy of the measurements for the control rod worths and the fuel temperature reactivity coefficient, and within an acceptable range, according to the literature, for k{sub eff} and the fuel element reactivity worth distribution. (author)
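
    The cell/core split described here (WIMSD4 generating few-group constants, CITATION solving the diffusion problem) can be illustrated at its simplest by the two-group, bare-reactor diffusion formula for k{sub eff}. The group constants below are hypothetical values chosen only to show the structure of the calculation, not TRIGA data.

```python
# Two-group bare-reactor k_eff from diffusion theory:
#   k = nuSf1/(Sr1 + D1*B2) + nuSf2*Ss12 / ((Sr1 + D1*B2)*(Sa2 + D2*B2))
# with Sr1 = Sa1 + Ss12 (fast-group removal). All constants are hypothetical.

D1, D2 = 1.4, 0.4            # diffusion coefficients (cm)
Sa1, Sa2 = 0.010, 0.080      # absorption cross sections (1/cm)
Ss12 = 0.025                 # fast-to-thermal scattering (1/cm)
nuSf1, nuSf2 = 0.005, 0.135  # nu * fission cross sections (1/cm)
B2 = 1.0e-3                  # geometric buckling (1/cm^2)

Sr1 = Sa1 + Ss12
k_eff = (nuSf1 / (Sr1 + D1 * B2)
         + nuSf2 * Ss12 / ((Sr1 + D1 * B2) * (Sa2 + D2 * B2)))
print(f"k_eff = {k_eff:.4f}")
```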

  12. Scale resolved simulations of the OECD/NEA–Vattenfall T-junction benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Höhne, Thomas, E-mail: t.hoehne@hzdr.de

    2014-04-01

    Mixing of fluids in T-junction geometries is of significant interest for nuclear safety research. The most prominent example is the thermal striping phenomenon in piping T-junctions, where hot and cold streams join and turbulently mix, however not completely or not immediately at the T-junction. This results in significant temperature fluctuations near the piping wall, either at the side of the secondary pipe branch or at the opposite side of the main branch pipe. The wall temperature fluctuations can cause cyclical thermal stresses and subsequently fatigue cracking of the wall. Thermal mixing in a T-junction has been studied for the validation of CFD calculations. A T-junction thermal mixing test was carried out at the Älvkarleby Laboratory of Vattenfall Research and Development (VRD) in Sweden. Data from this test have been reserved specifically for an OECD CFD benchmark exercise. The computational results show that RANS fails to predict a realistic mixing between the fluids. The results were significantly better with scale-resolving methods such as LES, showing fairly good predictions of the velocity field and mean temperatures. The calculations also predict fluctuations and frequencies similar to those observed in the model test.

  13. Benchmark 2 - Springback of a draw / re-draw panel: Part C: Benchmark analysis

    Science.gov (United States)

    Carsley, John E.; Xia, Cedric; Yang, Lianxiang; Stoughton, Thomas B.; Xu, Siguang; Hartfield-Wünsch, Susan E.; Li, Jingjing

    2013-12-01

    Benchmark analysis is summarized for DP600 and AA 5182-O. Nine simulation results submitted for this benchmark study are compared to the physical measurement results. The details on the codes, friction parameters, mesh technology, CPU, and material models are also summarized at the end of this report, along with participant information.

  14. Upgrading the Benchmark Simulation Model Framework with emerging challenges - A study of N2O emissions and the fate of pharmaceuticals in urban wastewater systems

    OpenAIRE

    Snip, Laura; Plósz, Benedek G.; Flores Alsina, Xavier; Jeppsson, Ulf A. C.; Gernaey, Krist

    2015-01-01

    Today, a wastewater treatment plant is expected to do more than remove the common pollutants from the wastewater. In recent decades, many new challenges have emerged, which have markedly increased the demands placed on treatment plants. For example, a treatment plant is nowadays also expected to minimize its carbon footprint and to remove micropollutants from the wastewater. The operation of a treatment plant can be investigated and improved using mathematical models, ...

  15. Calculation of benchmarks with a shear beam model

    NARCIS (Netherlands)

    Ferreira, D.

    2015-01-01

    Fiber models for beam and shell elements allow for relatively rapid finite element analysis of concrete structures and structural elements. This project aims at developing the formulation of such elements and a pilot implementation. Standard nonlinear fiber beam formulations do not account

  16. RANS Modeling of Benchmark Shockwave / Boundary Layer Interaction Experiments

    Science.gov (United States)

    Georgiadis, Nick; Vyas, Manan; Yoder, Dennis

    2010-01-01

    This presentation summarizes the computations of a set of shock wave / turbulent boundary layer interaction (SWTBLI) test cases using the Wind-US code, as part of the 2010 American Institute of Aeronautics and Astronautics (AIAA) shock / boundary layer interaction workshop. The experiments involve supersonic flows in wind tunnels with a shock generator that directs an oblique shock wave toward the boundary layer along one of the walls of the wind tunnel. The Wind-US calculations utilized structured grid computations performed in Reynolds-averaged Navier-Stokes mode. Three turbulence models were investigated: the Spalart-Allmaras one-equation model, the Menter Shear Stress Transport k-ω two-equation model, and an explicit algebraic stress k-ω formulation. Effects of grid resolution and upwinding scheme were also considered. The results from the CFD calculations are compared to particle image velocimetry (PIV) data from the experiments. As expected, turbulence model effects dominated the accuracy of the solutions, with upwinding scheme selection indicating minimal effects.

  17. Models of asthma: density-equalizing mapping and output benchmarking

    Directory of Open Access Journals (Sweden)

    Fischer Tanja C

    2008-02-01

    Despite the large number of experimental studies already conducted on bronchial asthma, further insights into the molecular basics of the disease are required to establish new therapeutic approaches. As a basis for this research, different animal models of asthma have been developed in the past years. However, precise bibliometric data on the use of different models do not exist so far. Therefore the present study was conducted to establish a database of the existing experimental approaches. Density-equalizing algorithms were used and data were retrieved from a Thomson Institute for Scientific Information database. During the period from 1900 to 2006, a total of 3489 filed items were connected to animal models of asthma, the first being published in the year 1968. The studies were published by 52 countries, with the US, Japan and the UK being the most productive suppliers, participating in 55.8% of all published items. Analyzing the average citation per item as an indicator of research quality, Switzerland ranked first (30.54/item) and New Zealand ranked second for countries with more than 10 published studies. The 10 most productive journals included 4 with a main focus on allergy and immunology and 4 with a main focus on the respiratory system. Two journals focused on pharmacology or pharmacy. In all assigned subject categories examined for a relation to animal models of asthma, immunology ranked first. Assessing numbers of published items in relation to animal species, it was found that mice were the preferred species, followed by guinea pigs. In summary, it can be concluded from density-equalizing calculations that the use of animal models of asthma is restricted to a relatively small number of countries. There are also differences in the use of species. These differences are based on variations in the research focus as assessed by subject category analysis.

  18. Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) Benchmark Phase II: Identification of Influential Parameters

    International Nuclear Information System (INIS)

    The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) benchmark is to make progress on the issue of the quantification of the uncertainty of the physical models in system thermal-hydraulic codes by considering a concrete case: the physical models involved in the prediction of core reflooding. The PREMIUM benchmark consists of five phases. This report presents the results of Phase II, dedicated to the identification of the uncertain code parameters associated with physical models used in the simulation of reflooding conditions. This identification is made on the basis of Test 216 of the FEBA/SEFLEX programme according to the following steps: identification of influential phenomena; identification of the associated physical models and parameters, depending on the code used; quantification of the variation range of the identified input parameters through a series of sensitivity calculations. A procedure for the identification of potentially influential code input parameters was set up in the Specifications of Phase II of the PREMIUM benchmark, and a set of quantitative criteria was also proposed for the identification of influential input parameters (IP) and their respective variation ranges. Thirteen participating organisations, using 8 different codes (7 system thermal-hydraulic codes and 1 sub-channel module of a system thermal-hydraulic code), submitted Phase II results. The base-case calculations show a spread in predicted cladding temperatures and quench front propagation that has been characterized. All the participants, except one, predict too fast a quench front progression. Besides, the cladding temperature time trends obtained by almost all the participants show oscillatory behaviour which may have numerical origins. The criteria adopted for the identification of influential input parameters differ between the participants: some organisations used the set of criteria proposed in the Specifications 'as is', some modified the quantitative thresholds
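
    The screening step described here (quantifying each candidate parameter's influence through a series of sensitivity calculations) can be sketched generically as a one-at-a-time scan that ranks parameters by the output spread they induce. The "code" below is a placeholder response surface, and the parameter names and ranges are invented; none of this reflects the PREMIUM specification.

```python
# One-at-a-time sensitivity screening: vary each input over its range
# while holding the others at nominal, then rank by induced output spread.

def code_output(params):
    """Placeholder for a reflood calculation returning, e.g., a peak
    cladding temperature (hypothetical response surface)."""
    h, cp, k = params["htc"], params["cp"], params["cond"]
    return 1200.0 - 0.5 * h + 0.05 * cp - 20.0 * k

nominal = {"htc": 300.0, "cp": 500.0, "cond": 2.0}
ranges = {"htc": (150.0, 600.0), "cp": (400.0, 600.0), "cond": (1.0, 4.0)}

influence = {}
for name, (lo, hi) in ranges.items():
    outputs = []
    for value in (lo, hi):
        p = dict(nominal)
        p[name] = value                 # perturb one parameter at a time
        outputs.append(code_output(p))
    influence[name] = max(outputs) - min(outputs)

for name, spread in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{name:5s}: output spread {spread:.1f} K")
```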

  19. Multichannel Multi Market Media Service Business Model Evaluation and Benchmark

    OpenAIRE

    2012-01-01

    This research was conducted as a part of Next Media’s Multichannel Multimarket Media Services research programme. The project’s members are publishing companies and research institutions in Finland, including AAC Global, SanomaPro, Aalto University, Laurea, and the VTT Technical Research Centre of Finland. This study examines business models for the e-learning industry both in Finland and in international markets. Three complementary research pieces are presented in this report, and each ...

  20. Analysis of Transitional and Turbulent Flow Through the FDA Benchmark Nozzle Model Using Laser Doppler Velocimetry.

    Science.gov (United States)

    Taylor, Joshua O; Good, Bryan C; Paterno, Anthony V; Hariharan, Prasanna; Deutsch, Steven; Malinauskas, Richard A; Manning, Keefe B

    2016-09-01

    Transitional and turbulent flow through a simplified medical device model is analyzed as part of the FDA's Critical Path Initiative, designed to improve the process of bringing medical products to market. Computational predictions are often used in the development of devices, and reliable in vitro data are needed to validate computational results, particularly estimations of the Reynolds stresses that could play a role in damaging blood elements. The high spatial resolution of laser Doppler velocimetry (LDV) is used to collect two-component velocity data within the FDA benchmark nozzle model. Two flow conditions are used to produce flow encompassing laminar, transitional, and turbulent regimes, and viscous stresses, principal Reynolds stresses, and turbulence intensities are calculated from the measured LDV velocities. Axial velocities and viscous stresses are compared to data from a prior inter-laboratory study conducted with particle image velocimetry. Large velocity gradients are observed near the wall in the nozzle throat and in the jet shear layer located in the expansion downstream of the throat, with axial velocity changing as much as 4.5 m/s over 200 μm. Additionally, maximum Reynolds shear stresses of 1000-2000 Pa are calculated in the high-shear regions, an order of magnitude higher than the peak viscous shear stresses in the nozzle model, indicating that hemolysis may occur under certain flow conditions. As such, the presented turbulence quantities from LDV, which are also available for download at https://fdacfd.nci.nih.gov/ , provide an ideal validation test for computational simulations that seek to characterize the flow field and to predict hemolysis within the FDA nozzle geometry. PMID:27350137
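
    The turbulence quantities reported here follow directly from velocity statistics: the Reynolds shear stress is the density-weighted covariance of the fluctuating components, and the principal (maximum) Reynolds shear stress follows from a Mohr's-circle rotation of the 2x2 stress tensor. The sketch below uses synthetic samples in place of actual LDV measurements, and the density value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 1056.0   # blood-analog fluid density, kg/m^3 (illustrative)

# Synthetic stand-ins for coincident two-component LDV samples (m/s)
u = 2.0 + 0.3 * rng.standard_normal(20000)
v = 0.1 + 0.2 * rng.standard_normal(20000) + 0.4 * (u - u.mean())

up, vp = u - u.mean(), v - v.mean()       # velocity fluctuations
tau_uv = rho * np.mean(up * vp)           # Reynolds shear stress (Pa)
tau_uu = rho * np.mean(up * up)
tau_vv = rho * np.mean(vp * vp)

# Principal (maximum) Reynolds shear stress via Mohr's circle
tau_max = np.sqrt(((tau_uu - tau_vv) / 2.0) ** 2 + tau_uv ** 2)

# Turbulence intensity relative to the local mean speed
ti = np.sqrt(0.5 * (np.mean(up**2) + np.mean(vp**2))) / np.hypot(u.mean(), v.mean())

print(f"Reynolds shear stress: {tau_uv:8.1f} Pa")
print(f"principal shear:       {tau_max:8.1f} Pa")
print(f"turbulence intensity:  {ti:8.3f}")
```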

  1. Reusability of Simulation Models

    NARCIS (Netherlands)

    Pos, Anita; Borst, Pim; Top, Jan; Akkermans, Hans

    1996-01-01

    Modelling and simulation are widely used methods in design and other engineering tasks. Providing libraries of reusable model fragments is a promising way of supporting engineering modelling. The paper discusses means of structuring such libraries in a generic and reusable way. Model content facts c

  2. A two- and three-dimensional numerical modelling benchmark of slab detachment

    Science.gov (United States)

    Thieulot, Cedric; Glerum, Anne; Hillebrand, Bram; Schmalholz, Stefan; Spakman, Wim; Torsvik, Trond

    2014-05-01

    Subduction is likely the most studied phenomenon in numerical geodynamics. Over the past 20 years, hundreds of publications have focused on its various aspects (influence of the rheology and thermal state of the plates, slab-mantle coupling, roll-back, mantle wedge evolution, buoyancy changes due to phase change, ...) and results were obtained with a variety of codes. Slab detachment has recently received some attention (e.g. Duretz, 2012) but remains a field worth exploring due to its profound influence on dynamic topography, mantle flow and the subsequent stress state of the plates, and is believed to have occurred in the Zagros, the Carpathians and beneath eastern Anatolia, to name only a few regions. Following the work of Schmalholz (2011), we propose a two- and three-dimensional numerical benchmark of slab detachment. The geometry is simple: a power-law T-shaped plate including an already subducted slab overlies the mantle, whose viscosity is either linear or power-law. Boundary conditions are free-slip on the top and the bottom of the domain, and no-slip on the sides. When the system evolves in time, the slab stretches out vertically and shows buoyancy-driven necking, until it finally detaches. The benchmark is subdivided into several sub-experiments with a gradual increase in complexity (free surface, coupling of the rheology with temperature, ...). An array of objective measurements is recorded throughout the simulation, such as the width of the necked slab over time and the exact time of detachment. The experiments will be run in two dimensions and repeated in three dimensions, the latter case being designed so as to allow both poloidal and toroidal flow. We show results obtained with a multitude of finite element and finite difference codes, using either compositional fields, level sets or tracers to track the compositions. A good agreement is found for most of the measurements in the two-dimensional case, and preliminary three-dimensional measurements will

  3. Generic Hockey-Stick Model for Estimating Benchmark Dose and Potency: Performance Relative to BMDS and Application to Anthraquinone.

    Science.gov (United States)

    Bogen, Kenneth T

    2011-01-01

    Benchmark Dose Model software (BMDS), developed by the U.S. Environmental Protection Agency, involves a growing suite of models and decision rules now widely applied to assess noncancer and cancer risk, yet its statistical performance has never been examined systematically. As typically applied, BMDS also ignores the possibility of reduced risk at low doses ("hormesis"). A simpler, proposed Generic Hockey-Stick (GHS) model also estimates benchmark dose and potency, and additionally characterizes and tests objectively for hormetic trend. Using 100 simulated dichotomous-data sets (5 dose groups, 50 animals/group) sampled from each of seven risk functions, GHS estimators performed about as well as or better than BMDS estimators; a surprising observation was that BMDS mis-specified all six non-hormetic sampled risk functions most or all of the time. When applied to data on rodent tumors induced by the genotoxic chemical carcinogen anthraquinone (AQ), the GHS model yielded significantly negative estimates of the net potency exhibited by the combined rodent data, suggesting that, consistent with the anti-leukemogenic properties of AQ and structurally similar quinones, environmental AQ exposures do not likely increase net cancer risk. In addition to its simplicity and flexibility, the GHS approach offers a unified, consistent approach to quantifying environmental chemical risk. PMID:21731536
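
    The hockey-stick structure, constant background risk up to a threshold dose and a linear increase beyond it, is straightforward to fit to dichotomous data by maximum likelihood. The sketch below is a generic piecewise-linear fit with synthetic responder counts, shown to illustrate the idea; it is not Bogen's exact GHS specification (which also tests for hormetic trend).

```python
import numpy as np
from scipy.optimize import minimize

doses = np.array([0.0, 1.0, 2.0, 4.0, 8.0])   # dose groups
n = np.full(5, 50)                             # animals per group
k = np.array([2, 1, 3, 9, 21])                 # responders (synthetic data)

def risk(d, r0, d0, slope):
    """Hockey-stick: background risk r0 below threshold d0, linear above."""
    return np.clip(r0 + slope * np.maximum(d - d0, 0.0), 1e-9, 1 - 1e-9)

def neg_log_lik(theta):
    r0, d0, slope = theta
    p = risk(doses, r0, d0, slope)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1.0 - p))

fit = minimize(neg_log_lik, x0=[0.05, 1.0, 0.05],
               bounds=[(1e-6, 0.5), (0.0, 8.0), (1e-6, 1.0)])
r0, d0, slope = fit.x

# Benchmark dose for 10% extra risk over background
bmd10 = d0 + 0.10 / slope
print(f"threshold {d0:.2f}, slope {slope:.3f}, BMD10 {bmd10:.2f}")
```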

  4. A Review of Flood Loss Models as Basis for Harmonization and Benchmarking

    Science.gov (United States)

    Kreibich, Heidi; Franco, Guillermo; Marechal, David

    2016-01-01

    Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models, as it affects prioritization and investment decisions in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Due to a lack of detailed and reliable flood loss data, first-order validations are difficult to accomplish, so that model comparisons in terms of benchmarking are essential. It is checked whether the models are informed by existing data and knowledge and whether the assumptions made in the models are aligned with the existing knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before these benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss (or flood vulnerability) relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date, containing nearly a thousand vulnerability functions. These functions are highly heterogeneous and only about half of the loss models are found to be accompanied by explicit validation at the time of their proposal. This paper exemplarily

  5. A Review of Flood Loss Models as Basis for Harmonization and Benchmarking.

    Science.gov (United States)

    Gerl, Tina; Kreibich, Heidi; Franco, Guillermo; Marechal, David; Schröter, Kai

    2016-01-01

    Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models, as it affects prioritization and investment decisions in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Due to a lack of detailed and reliable flood loss data, first-order validations are difficult to accomplish, so that model comparisons in terms of benchmarking are essential. It is checked whether the models are informed by existing data and knowledge and whether the assumptions made in the models are aligned with the existing knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before these benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss (or flood vulnerability) relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date, containing nearly a thousand vulnerability functions. These functions are highly heterogeneous and only about half of the loss models are found to be accompanied by explicit validation at the time of their proposal. This paper exemplarily presents

  6. A Review of Flood Loss Models as Basis for Harmonization and Benchmarking.

    Directory of Open Access Journals (Sweden)

    Tina Gerl

    Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models, as it affects prioritization and investment decisions in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Due to a lack of detailed and reliable flood loss data, first-order validations are difficult to accomplish, so that model comparisons in terms of benchmarking are essential. It is checked whether the models are informed by existing data and knowledge and whether the assumptions made in the models are aligned with the existing knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before these benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss (or flood vulnerability) relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date, containing nearly a thousand vulnerability functions. These functions are highly heterogeneous and only about half of the loss models are found to be accompanied by explicit validation at the time of their proposal. This paper
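
    Most of the vulnerability functions surveyed in these reviews are stage-damage curves: relative damage as a function of inundation depth, scaled by the value of the asset at risk. A minimal sketch of applying such a curve follows; the curve points and asset value are invented, not drawn from the surveyed models.

```python
import numpy as np

# Illustrative residential depth-damage curve (depth in m, damage fraction);
# the points are invented, not taken from any of the 47 reviewed models.
depths = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 5.0])
damage_fraction = np.array([0.00, 0.20, 0.35, 0.55, 0.70, 0.85])

def flood_loss(depth_m, asset_value):
    """Interpolate relative damage from the curve and scale by asset value."""
    frac = np.interp(depth_m, depths, damage_fraction)
    return frac * asset_value

for d in (0.3, 1.5, 4.0):
    print(f"depth {d:.1f} m -> loss {flood_loss(d, 250_000.0):,.0f}")
```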

  7. Benchmarks and models for 1-D radiation transport in stochastic participating media

    Energy Technology Data Exchange (ETDEWEB)

    Miller, D S

    2000-08-21

    Benchmark calculations for radiation transport coupled to a material temperature equation in 1-D slab and 1-D spherical geometry binary random media are presented. The mixing statistics are taken to be homogeneous Markov statistics in the 1-D slab but only approximately Markov statistics in the 1-D sphere. The material chunk sizes are described by Poisson distribution functions. The material opacities are first taken to be constant and then allowed to vary as a strong function of material temperature. Benchmark values and variances for the time evolution of the ensemble averages of material temperature, energy density and radiation transmission are computed via a Monte Carlo type method. These benchmarks are used as a basis for comparison with three other approximate methods of solution. One of these approximate methods is simple atomic mix. The second approximate model is an adaptation of what is commonly called the Levermore-Pomraning model, referred to here as the standard model. It is shown that recasting the temperature coupling as a type of effective scattering can be useful in formulating the third approximate model, an adaptation of a model due to Su and Pomraning which attempts to account for the effects of scattering in a stochastic context. This last adaptation shows consistent improvement over both the atomic mix and standard models when used in the 1-D slab geometry but shows limited improvement in the 1-D spherical geometry. Benchmark values are also computed for radiation transmission from the 1-D sphere without material heating present, in order to evaluate the performance of the standard model on this geometry, something which has never been done before. All of the various tests demonstrate the importance of stochastic structure on the solution. Also demonstrated are the range of usefulness and the limitations of a simple atomic mix formulation.
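
    Benchmarks of this type average over many realizations of the binary random medium. The sketch below shows one ingredient of such a procedure: sampling homogeneous Markov mixing in a 1-D slab (exponentially distributed chunk lengths) and estimating the ensemble-averaged transmission through purely absorbing materials. The opacities and chunk lengths are hypothetical, and the temperature coupling of the actual benchmark is deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
slab = 10.0                    # slab thickness (arbitrary length units)
mean_chunk = (0.5, 1.5)        # mean chunk lengths for materials 0 and 1
sigma = (2.0, 0.1)             # absorption opacities (hypothetical, 1/length)

def transmission_one_realization():
    """Sample one Markov realization and return exp(-optical depth)."""
    x, tau = 0.0, 0.0
    mat = rng.integers(2)                       # random starting material
    while x < slab:
        seg = rng.exponential(mean_chunk[mat])  # Markov chunk length
        seg = min(seg, slab - x)
        tau += sigma[mat] * seg                 # accumulate optical depth
        x += seg
        mat = 1 - mat                           # alternate the two materials
    return np.exp(-tau)

samples = [transmission_one_realization() for _ in range(20000)]
print(f"ensemble-averaged transmission: {np.mean(samples):.4f}"
      f" +/- {np.std(samples) / np.sqrt(len(samples)):.4f}")
```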

  8. Algorithm comparison and benchmarking using a parallel spectral transform shallow water model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)

    1995-04-01

    In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer, and how do the most efficient algorithms compare across different computers. In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.

  9. Network Generation Model Based on Evolution Dynamics To Generate Benchmark Graphs

    CERN Document Server

    Pasta, Muhammad Qasim

    2016-01-01

    Network generation models provide an understanding of the dynamics behind the formation and evolution of different networks, including social networks, technological networks and biological networks. Two important applications of these models are to study the evolution dynamics of network formation and to generate benchmark networks with known community structures. Research has been conducted in both these directions relatively independently of the other application area. This creates a disconnect between real-world networks and the networks generated to study community detection algorithms. In this paper, we propose to study both these application areas together, i.e., to introduce a network generation model based on the evolution dynamics of real-world networks and to generate networks with community structures that can be used as benchmark graphs to study community detection algorithms. The generated networks possess tunable modular structures which can be used to generate networks with known community structures. We stud...
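
    Generating benchmark graphs with known community structure typically means planting a partition: a higher edge probability inside communities than between them. The sketch below is a minimal planted-partition (stochastic block model) generator written with numpy; it illustrates the general idea of ground-truth benchmarks, independent of the specific evolution-dynamics model proposed in this paper.

```python
import numpy as np

def planted_partition(sizes, p_in, p_out, seed=0):
    """Return an adjacency matrix with known communities: edges appear
    with probability p_in inside a block and p_out between blocks."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(len(sizes)), sizes)   # ground-truth labels
    n = labels.size
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, p_in, p_out)
    upper = np.triu(rng.random((n, n)) < probs, k=1)   # sample upper triangle
    return (upper | upper.T).astype(int), labels

adj, truth = planted_partition([30, 30, 40], p_in=0.3, p_out=0.02)
print("nodes:", adj.shape[0], "edges:", adj.sum() // 2)
# 'truth' provides the known communities against which a
# community-detection algorithm can be scored.
```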

  10. Benchmarking of computer codes and approaches for modeling exposure scenarios

    International Nuclear Information System (INIS)

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided

  11. Benchmarking of computer codes and approaches for modeling exposure scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, R.R. [EG and G Idaho, Inc., Idaho Falls, ID (United States); Rittmann, P.D.; Wood, M.I. [Westinghouse Hanford Co., Richland, WA (United States); Cook, J.R. [Westinghouse Savannah River Co., Aiken, SC (United States)

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.
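
    The two records above mention comparisons "at the most fundamental level" using spreadsheets; at that level, a pathway dose is simply the product of concentration, intake rate, exposure duration, and a dose conversion factor. A minimal sketch for the water-ingestion pathway follows, with illustrative parameter values rather than those of GENII, PATHRAE-EPA, MICROSHIELD, or ISOSHLD.

```python
# Spreadsheet-level ingestion dose for a unit radionuclide concentration.
# All parameter values are illustrative placeholders.

conc = 1.0            # water concentration (Bq/L), unit concentration
intake_rate = 2.0     # drinking water intake (L/day)
days = 365.0          # exposure duration (days)
dcf = 2.8e-8          # ingestion dose conversion factor (Sv/Bq), illustrative

annual_dose = conc * intake_rate * days * dcf   # Sv
print(f"annual ingestion dose: {annual_dose:.2e} Sv "
      f"({annual_dose * 1e6:.2f} microSv)")
```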

  12. Interactions of model biomolecules. Benchmark CC calculations within MOLCAS

    Energy Technology Data Exchange (ETDEWEB)

    Urban, Miroslav [Slovak University of Technology in Bratislava, Faculty of Materials Science and Technology in Trnava, Institute of Materials Science, Bottova 25, SK-917 24 Trnava, Slovakia and Department of Physical and Theoretical Chemistry, Faculty of Natural Scie (Slovakia); Pitoňák, Michal; Neogrády, Pavel; Dedíková, Pavlína [Department of Physical and Theoretical Chemistry, Faculty of Natural Sciences, Comenius University, Mlynská dolina, SK-842 15 Bratislava (Slovakia); Hobza, Pavel [Institute of Organic Chemistry and Biochemistry and Center for Complex Molecular Systems and biomolecules, Academy of Sciences of the Czech Republic, Prague (Czech Republic)

    2015-01-22

    We present results using the OVOS approach (Optimized Virtual Orbital Space), aimed at enhancing the effectiveness of coupled cluster calculations. This approach allows the total computer time required for large-scale CCSD(T) calculations to be reduced about tenfold, by shrinking the original full virtual space to about 50% of its original size without affecting the accuracy. The method is implemented in the MOLCAS computer program. When combined with the Cholesky decomposition of the two-electron integrals and suitable parallelization, it allows calculations which were formerly prohibitively demanding. We focused on accurate calculations of the hydrogen-bonded and stacking interactions of model biomolecules. Interaction energies of the formaldehyde, formamide, benzene, and uracil dimers and the three-body contributions in the cytosine-guanine tetramer are presented. Other applications, such as the electron affinity of uracil affected by solvation, are also briefly mentioned.

  13. Generative Benchmark Models for Mesoscale Structures in Multilayer Networks

    CERN Document Server

    Bazzi, Marya; Arenas, Alex; Howison, Sam D; Porter, Mason A

    2016-01-01

    Multilayer networks allow one to represent diverse and interdependent connectivity patterns (e.g., time-dependence, multiple subsystems, or both) that arise in many applications and which are difficult or awkward to incorporate into standard network representations. In the study of multilayer networks, it is important to investigate "mesoscale" (i.e., intermediate-scale) structures, such as dense sets of nodes known as "communities" that are connected sparsely to each other, to discover network features that are not apparent at the microscale or the macroscale. A variety of methods and algorithms are available to identify communities in multilayer networks, but they differ in their definitions and/or assumptions of what constitutes a community, and many scalable algorithms provide approximate solutions with little or no theoretical guarantee on the quality of their approximations. Consequently, it is crucial to develop generative models of networks to use as a common test of community-detection tools. I...

  14. Benchmarking of thermal hydraulic loop models for Lead-Alloy Cooled Advanced Nuclear Energy System (LACANES), phase-I: Isothermal steady state forced convection

    International Nuclear Information System (INIS)

    As a highly promising coolant for new-generation nuclear reactors, liquid lead-bismuth eutectic (LBE) has been investigated extensively worldwide. With high expectations for this advanced coolant, a multi-national systematic study on LBE was proposed in 2007, covering benchmarking of thermal-hydraulic prediction models for Lead-Alloy Cooled Advanced Nuclear Energy Systems (LACANES). This international collaboration has been organized by the OECD/NEA, and nine organizations - ENEA, ERSE, GIDROPRESS, IAEA, IPPE, KIT/IKET, KIT/INR, NUTRECK, and RRC KI - have contributed their efforts to the LACANES benchmarking. To produce experimental data for the LACANES benchmarking, thermal-hydraulic tests were conducted using a 12-m tall LBE integral test facility named HELIOS (Heavy Eutectic liquid metal loop for integral test of Operability and Safety of PEACER), which was constructed in 2005 at Seoul National University in the Republic of Korea. The LACANES benchmark campaigns consist of a forced convection phase (phase I) and a natural circulation phase (phase II). In the forced convection case, predictions of pressure losses based on handbook correlations and those obtained by computational fluid dynamics code simulation were compared with the measured data for various components of the HELIOS test facility. Based on comparative analyses of the predictions and the measured data, recommendations for the prediction methods of pressure losses in LACANES were obtained. In this paper, results for the forced convection case (phase I) of the LACANES benchmarking are described.
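
    A handbook-correlation pressure-loss prediction of the kind compared in phase I is essentially a sum of friction and form losses around the loop. The sketch below uses the Blasius smooth-pipe friction correlation with approximate LBE properties; the segment velocities, diameters, lengths, and loss coefficients are invented for illustration and do not describe the HELIOS facility.

```python
rho, mu = 10300.0, 1.7e-3   # LBE density (kg/m^3) and viscosity (Pa s), approx.

def segment_loss(v, d, length, k_form):
    """Darcy-Weisbach friction plus form losses for one loop segment.
    v: velocity (m/s), d: diameter (m), k_form: sum of loss coefficients."""
    re = rho * v * d / mu
    f = 0.316 * re**-0.25 if re > 4000 else 64.0 / re   # Blasius / laminar
    dyn = 0.5 * rho * v**2                              # dynamic pressure (Pa)
    return (f * length / d + k_form) * dyn              # pressure loss (Pa)

# Illustrative segments: (velocity, diameter, length, sum of form-loss K)
segments = [(1.0, 0.05, 4.0, 0.5),    # riser (hypothetical)
            (0.8, 0.08, 6.0, 2.0),    # heat exchanger side (hypothetical)
            (1.2, 0.04, 2.0, 1.5)]    # core simulator (hypothetical)

total = sum(segment_loss(*s) for s in segments)
print(f"total loop pressure loss: {total / 1e3:.2f} kPa")
```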

  15. Model-Based Engineering and Manufacturing CAD/CAM Benchmark (Final)

    International Nuclear Information System (INIS)

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to seek out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more modern, responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were somewhere between 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a single computer-aided manufacturing (CAM) system. The Internet was a technology that all companies were looking to use, either to transport information more easily throughout the corporation or as a conduit for

  16. GEANT4 simulations of the n{sub T}OF spallation source and their benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Lo Meo, S. [Research Centre ' ' Ezio Clementel' ' , ENEA, Bologna (Italy); Section of Bologna, INFN, Bologna (Italy); Cortes-Giraldo, M.A.; Lerendegui-Marco, J.; Guerrero, C.; Quesada, J.M. [Universidad de Sevilla, Facultad de Fisica, Sevilla (Spain); Massimi, C.; Vannini, G. [Section of Bologna, INFN, Bologna (Italy); University of Bologna, Physics and Astronomy Dept. ' ' Alma Mater Studiorum' ' , Bologna (Italy); Barbagallo, M.; Colonna, N. [INFN, Section of Bari, Bari (Italy); Mancusi, D. [CEA-Saclay, DEN, DM2S, SERMA, LTSD, Gif-sur-Yvette CEDEX (France); Mingrone, F. [Section of Bologna, INFN, Bologna (Italy); Sabate-Gilarte, M. [Universidad de Sevilla, Facultad de Fisica, Sevilla (Spain); European Organization for Nuclear Research (CERN), Geneva (Switzerland); Vlachoudis, V. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Collaboration: The n_TOF Collaboration

    2015-12-15

    Neutron production and transport in the spallation target of the n{sub T}OF facility at CERN have been simulated with GEANT4. The results obtained with different models of high-energy nucleon-nucleus interaction have been compared with the measured characteristics of the neutron beam, in particular the flux and its dependence on neutron energy, measured in the first experimental area. The best agreement at present, within 20% for the absolute value of the flux and within a few percent for the energy dependence in the whole energy range from thermal to 1 GeV, is obtained with the INCL++ model coupled with the GEANT4 native de-excitation model. All other available models overestimate the n{sub T}OF neutron flux by a larger factor, of up to 70%. The simulations are also able to accurately reproduce the neutron beam energy resolution function, which is essentially determined by the moderation time inside the target/moderator assembly. The results reported here provide confidence in the use of GEANT4 for simulations of spallation neutron sources. (orig.)

  17. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that, because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  18. Theory Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Shlachter, Jack [Los Alamos National Laboratory

    2012-08-23

    Los Alamos has a long history in theory, modeling and simulation. We focus on multidisciplinary teams that tackle complex problems. Theory, modeling and simulation are tools to solve problems just like an NMR spectrometer, a gas chromatograph or an electron microscope. Problems should be used to define the theoretical tools needed and not the other way around. Best results occur when theory and experiments are working together in a team.

  19. Simulating Online Business Models

    OpenAIRE

    Schuster, Stephan; Gilbert, Nigel

    2004-01-01

    The online content market for news and music is changing rapidly with the spread of technology and innovative business models (e.g. the online delivery of music, specialised subscription news services). It is correspondingly hard for suppliers of online content to anticipate developments and the effects on their businesses. The paper describes a prototype multiagent simulation to model possible scenarios in this market. The simulation is intended for use by business strategists and has been d...

  20. Peculiarity by Modeling of the Control Rod Movement by the Kalinin-3 Benchmark

    International Nuclear Information System (INIS)

    The paper presents an important part of the results of the OECD/NEA benchmark transient 'Switching off one main circulation pump at nominal power', analyzed as a boundary condition problem with the coupled system code ATHLET-BIPR-VVER. Some observations and comparisons with measured data for integral reactor parameters are discussed. Special attention is paid to the modeling and comparisons performed for the control rod movement and the reactor power history. (Authors)

  1. Logistics Cost Modeling in Strategic Benchmarking Project : cases: CEL Consulting & Cement Group A

    OpenAIRE

    Nguyen Cong Minh, Vu

    2010-01-01

    This thesis deals with logistics cost modeling for a benchmarking project carried out as a consulting service by CEL Consulting for Cement Group A. The project aims at providing a comparison of flows and costs of bagged cement of all cement players to relevant markets in Indonesia. The results of the project yielded strategic elements for Cement Group A in planning their penetration strategy with new investments. Due to the specific needs, Cement Group A requested a flexible costing approach taking into

  2. Quality assurance for online adapted treatment plans: Benchmarking and delivery monitoring simulation

    Energy Technology Data Exchange (ETDEWEB)

    Li, Taoran, E-mail: taoran.li.duke@gmail.com; Wu, Qiuwen; Yang, Yun; Rodrigues, Anna; Yin, Fang-Fang; Jackie Wu, Q. [Department of Radiation Oncology, Duke University Medical Center Durham, North Carolina 27710 (United States)

    2015-01-15

    Purpose: An important challenge facing online adaptive radiation therapy is the development of feasible and efficient quality assurance (QA). This project aimed to validate the deliverability of online adapted plans and to develop a proof-of-concept online delivery monitoring system for online adaptive radiation therapy QA. Methods: The first part of this project benchmarked automatically online-adapted prostate treatment plans using traditional portal dosimetry IMRT QA. The portal dosimetry QA results of online-adapted plans were compared to the original (unadapted) plans as well as to randomly selected prostate IMRT plans from our clinic. In the second part, an online delivery monitoring system was designed and validated via a simulated treatment with intentional multileaf collimator (MLC) errors. This system was based on inputs from the dynamic machine information (DMI), which continuously reports actual MLC positions and machine monitor units (MUs) at intervals of 50 ms or less during delivery. Based on the DMI, the system performed two levels of monitoring/verification during the delivery: (1) dynamic monitoring of cumulative fluence errors resulting from leaf position deviations, with visualization using fluence error maps (FEMs); and (2) verification of MLC positions against the treatment plan for potential errors in MLC motion and data transfer at each control point. Validation of the online delivery monitoring system was performed by introducing intentional systematic MLC errors (ranging from 0.5 to 2 mm) into the DMI files for both leaf banks. These DMI files were analyzed by the proposed system to evaluate the system's performance in quantifying errors and revealing the source of errors, as well as to understand patterns in the FEMs. In addition, FEMs from 210 actual prostate IMRT beams were analyzed using the proposed system to further validate its ability to catch and identify errors, as well as to establish error magnitude baselines for prostate IMRT delivery.
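
    The first monitoring level described above accumulates fluence differences between planned and actual MLC apertures over the control points. The sketch below reduces this to a single leaf pair in 1-D, with synthetic leaf positions standing in for DMI data; a real FEM would stack one such row per leaf pair into a 2-D map.

```python
import numpy as np

# Grid across the field (mm) and synthetic planned/actual openings
# (left_edge, right_edge) per control point for one leaf pair.
x = np.arange(-50.0, 50.0, 1.0)

planned = [(-20.0, 20.0), (-15.0, 25.0), (-10.0, 30.0)]
actual = [(-20.5, 20.0), (-15.0, 26.0), (-9.0, 30.0)]   # with MLC errors
mu_per_cp = [30.0, 40.0, 30.0]                           # monitor units

def fluence(openings, mus):
    """Accumulate MU-weighted open-field fluence over control points."""
    f = np.zeros_like(x)
    for (left, right), mu in zip(openings, mus):
        f += mu * ((x >= left) & (x <= right))           # open where leaves part
    return f

error_map = fluence(actual, mu_per_cp) - fluence(planned, mu_per_cp)
print("max cumulative fluence error (MU):", np.abs(error_map).max())
```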

  3. Benchmark of SIMULATE5 thermal hydraulics against the Frigg and NUPEC full bundle test experiments

    International Nuclear Information System (INIS)

    SIMULATE5 is Studsvik Scandpower's next-generation nodal code. The core portions of the thermal-hydraulic models of PWRs and BWRs are treated as essentially identical, with each assembly having an active channel and a number of parallel water channels. In addition, the BWR assembly may be divided into four radial sub-assemblies. For natural circulation reactors, the BWR thermal-hydraulic model is capable of modeling an entire vessel loop: core, chimney, upper plenum, standpipes, steam separators, downcomer, recirculation pumps, and lower plenum. This paper presents results of the validation of the BWR thermal-hydraulic model against: (1) pressure drop data measured in the Frigg and NUPEC test facilities; (2) void fraction distributions measured in the Frigg and NUPEC loops; (3) quarter-assembly void fractions measured in the NUPEC experiments; and (4) natural and forced circulation flow measurements in the Frigg loop. (author)

  4. Generic Hockey-Stick Model for Estimating Benchmark Dose and Potency: Performance Relative to BMDS and Application to Anthraquinone

    OpenAIRE

    Kenneth T. Bogen

    2010-01-01

    Benchmark Dose Model software (BMDS), developed by the U.S. Environmental Protection Agency, involves a growing suite of models and decision rules now widely applied to assess noncancer and cancer risk, yet its statistical performance has never been examined systematically. As typically applied, BMDS also ignores the possibility of reduced risk at low doses (“hormesis”). A simpler, proposed Generic Hockey-Stick (GHS) model also estimates benchmark dose and potency, and additionally characteri...

  5. AEGIS geologic simulation model

    International Nuclear Information System (INIS)

    The Geologic Simulation Model (GSM) is used by the AEGIS (Assessment of Effectiveness of Geologic Isolation Systems) program at the Pacific Northwest Laboratory to simulate the dynamic geology and hydrology of a geologic nuclear waste repository site over a million-year period following repository closure. The GSM helps to organize geologic/hydrologic data; to focus attention on active natural processes by requiring their simulation; and, through interactive simulation and calibration, to reduce subjective evaluations of the geologic system. During each computer run, the GSM produces a million-year geologic history that is possible for the region and the repository site. In addition, the GSM records in permanent history files everything that occurred during that time span. Statistical analyses of data in the history files of several hundred simulations are used to classify typical evolutionary paths, to establish the probabilities associated with deviations from the typical paths, and to determine which types of perturbations of the geologic/hydrologic system, if any, are most likely to occur. These simulations will be evaluated by geologists familiar with the repository region to determine the validity of the results. Perturbed systems that are determined to be the most realistic, within whatever probability limits are established, will be used for the analyses that involve radionuclide transport and dose models. The GSM is designed to be continuously refined and updated. Simulation models are site specific, and, although the submodels may have limited general applicability, the input data requirements necessitate detailed characterization of each site before application.

  6. LHC benchmark scenarios for the real Higgs singlet extension of the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Robens, Tania [TU Dresden, Institut fuer Kern- und Teilchenphysik, Dresden (Germany); Stefaniak, Tim [University of California, Department of Physics and Santa Cruz Institute for Particle Physics, Santa Cruz, CA (United States)

    2016-05-15

    We present benchmark scenarios for searches for an additional Higgs state in the real Higgs singlet extension of the Standard Model in Run 2 of the LHC. The scenarios are selected such that they fulfill all relevant current theoretical and experimental constraints, but can potentially be discovered at the current LHC run. We take into account the results presented in earlier work and update the experimental constraints from relevant LHC Higgs searches and signal rate measurements. The benchmark scenarios are given separately for the low-mass and high-mass region, i.e. the mass range where the additional Higgs state is lighter or heavier than the discovered Higgs state at around 125 GeV. They have also been presented in the framework of the LHC Higgs Cross Section Working Group. (orig.)

  7. CFD Simulation of Thermal-Hydraulic Benchmark V1000CT-2 Using ANSYS CFX

    OpenAIRE

    Thomas Höhne

    2009-01-01

    Plant measured data from VVER-1000 coolant mixing experiments were used within the OECD/NEA and AER coupled code benchmarks for light water reactors to test and validate computational fluid dynamic (CFD) codes. The task is to compare the various calculations with measured data, using specified boundary conditions and core power distributions. The experiments, which are provided for CFD validation, include single loop cooling down or heating-up by disturbing the heat transfer in the steam gene...

  8. Benchmark of a new multi-ion-species collision operator for $\\delta f$ Monte Carlo neoclassical simulation

    CERN Document Server

    Satake, Shinsuke; Pianpanit, Theerasarn; Sugama, Hideo; Nunami, Masanori; Matsuoka, Seikichi; Ishiguro, Seiji; Kanno, Ryutaro

    2016-01-01

    A numerical method to implement a linearized Coulomb collision operator for multi-ion-species neoclassical transport simulation using the two-weight $\\delta f$ Monte Carlo method is developed. The conservation properties and the adjointness of the operator in collisions between two particle species with different temperatures are verified. The linearized operator in a $\\delta f$ Monte Carlo code is benchmarked against two other kinetic simulation codes, i.e., a $\\delta f$ continuum gyrokinetic code with the same linearized collision operator and a full-f PIC code with the Nanbu collision operator. The benchmark simulations of the equilibration process of plasma flow and temperature fluctuation among several particle species show very good agreement between the $\\delta f$ Monte Carlo code and the other two codes. An error in the H-theorem in the two-weight $\\delta f$ Monte Carlo method is found, which is caused by the weight spreading phenomenon inherent in the two-weight $\\delta f$ method. It is demonstrated that the w...

  9. A Cooperative Activity on Quenching Process Simulation--- Japanese IMS-VHT Project on the Benchmark Analysis and Experiment ---

    Institute of Scientific and Technical Information of China (English)

    Tatsuo Inoue; Youichi Watanabe; Kazuo Okamura; Michiharu Narazaki; Hayato Shichino; Dong-Ying Ju; Hideo Kanamori; Katsumi Ichitani

    2004-01-01

    The Japanese IMS-VHT project on a Virtual Heat Treatment tool for monitoring and optimizing the HT process, in relation to international cooperative programs, is briefly introduced. The project aims to develop virtual tools that optimize heat treatment conditions and support decisions on HT operation through a knowledge-based database, in addition to process simulation. As one of the activities, with the cooperation of the Society of Materials Science, Japan, and the Japan Society for Heat Treatment, a benchmark project is under way. This includes simulation of the carburized quenching process of a cylinder, a disc, and a ring as well as a helical gear, using common data on material properties and cooling characteristics with several available simulation programs. A part of the newly obtained results is presented as an interim report.

  10. Laser-plasma interaction in ignition relevant plasmas: benchmarking our 3D modelling capabilities versus recent experiments

    Energy Technology Data Exchange (ETDEWEB)

    Divol, L; Froula, D H; Meezan, N; Berger, R; London, R A; Michel, P; Glenzer, S H

    2007-09-27

    We have developed a new target platform to study laser-plasma interaction in ignition-relevant conditions at the Omega laser facility (LLE/Rochester) [1]. By shooting an interaction beam along the axis of a gas-filled hohlraum heated by up to 17 kJ of heater beam energy, we were able to create a millimeter-scale underdense uniform plasma at electron temperatures above 3 keV. Extensive Thomson scattering measurements allowed us to benchmark our hydrodynamic simulations performed with HYDRA [1]. As a result of this effort, we can use these simulations with much confidence as input parameters for our LPI simulation code pF3d [2]. In this paper, we show that by using accurate hydrodynamic profiles and full three-dimensional simulations, including a realistic modeling of the laser intensity pattern generated by various smoothing options, fluid LPI theory reproduces the SBS thresholds and absolute reflectivity values and the absence of measurable SRS. This good agreement was made possible by the recent increase in computing power routinely available for such simulations.

  11. Modeling the emetic potencies of food-borne trichothecenes by benchmark dose methodology.

    Science.gov (United States)

    Male, Denis; Wu, Wenda; Mitchell, Nicole J; Bursian, Steven; Pestka, James J; Wu, Felicia

    2016-08-01

    Trichothecene mycotoxins commonly co-contaminate cereal products. They cause immunosuppression, anorexia, and emesis in multiple species. Dietary exposure to such toxins often occurs in mixtures. Hence, if it were possible to determine their relative toxicities and assign toxic equivalency factors (TEFs) to each trichothecene, risk management and regulation of these mycotoxins could become more comprehensive and simple. We used a mink emesis model to compare the toxicities of deoxynivalenol (DON), 3-acetyldeoxynivalenol, 15-acetyldeoxynivalenol, nivalenol (NIV), fusarenon-X (FX), HT-2 toxin, and T-2 toxin. These toxins were administered to mink via gavage and intraperitoneal (IP) injection. The United States Environmental Protection Agency (EPA) benchmark dose software was used to determine benchmark doses for each trichothecene. The relative potencies of each of these toxins were calculated as the ratios of their benchmark doses to that of DON. Our results showed that mink were more sensitive to orally administered toxins than to toxins administered by IP injection. T-2 and HT-2 toxins caused the greatest emetic responses, followed by FX, and then by DON, its acetylated derivatives, and NIV. Although these results provide key information on comparative toxicities, there is still a need for more animal-based studies focusing on various endpoints and combined effects of trichothecenes before TEFs can be established. PMID:27292944
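
    Once benchmark doses are in hand, the relative potencies are simple ratios against the reference toxin. The sketch below shows that arithmetic with entirely hypothetical BMD values; the paper's actual estimates are not reproduced here.

```python
# Relative potency = BMD(reference) / BMD(toxin); all values are hypothetical.
bmd_mg_per_kg = {
    "DON": 0.20, "3-ADON": 0.25, "15-ADON": 0.22,
    "NIV": 0.30, "FX": 0.08, "HT-2": 0.02, "T-2": 0.015,
}

reference = bmd_mg_per_kg["DON"]
for toxin, bmd in sorted(bmd_mg_per_kg.items(), key=lambda kv: kv[1]):
    tef = reference / bmd   # potency relative to DON
    print(f"{toxin:7s} BMD {bmd:.3f} mg/kg -> TEF {tef:.1f}")
```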

  12. A parallel high-order accurate finite element nonlinear Stokes ice sheet model and benchmark experiments

    Energy Technology Data Exchange (ETDEWEB)

    Leng, Wei [Chinese Academy of Sciences; Ju, Lili [University of South Carolina; Gunzburger, Max [Florida State University; Price, Stephen [Los Alamos National Laboratory; Ringler, Todd [Los Alamos National Laboratory,

    2012-01-01

    The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.
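    The nonlinearity in such Stokes models typically enters through Glen's flow law, which makes the effective viscosity a power-law function of the strain-rate invariant. A minimal sketch of that constitutive relation (the rate factor A and the example invariants are illustrative assumptions, not values from the paper):

```python
import numpy as np

def glen_effective_viscosity(strain_rate_ii, A=1e-16, n=3.0, eps_reg=1e-30):
    """Effective viscosity for Glen's flow law.

    strain_rate_ii : second invariant of the strain-rate tensor (1/a^2)
    A              : rate factor (Pa^-n a^-1); illustrative value
    n              : Glen exponent (n = 3 is the standard choice)
    eps_reg        : regularization avoiding infinite viscosity at zero strain rate
    """
    return 0.5 * A ** (-1.0 / n) * (strain_rate_ii + eps_reg) ** ((1.0 - n) / (2.0 * n))

# Viscosity (Pa a) over a range of strain-rate invariants:
print(glen_effective_viscosity(np.array([1e-6, 1e-4, 1e-2])))
```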

  13. Benchmark Modeling of the Near-Field and Far-Field Wave Effects of Wave Energy Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Rhinefrank, Kenneth E; Haller, Merrick C; Ozkan-Haller, H Tuba

    2013-01-26

    This project is an industry-led partnership between Columbia Power Technologies and Oregon State University that will perform benchmark laboratory experiments and numerical modeling of the near-field and far-field impacts of wave scattering from an array of wave energy devices. These benchmark experimental observations will help to fill a gaping hole in our present knowledge of the near-field effects of multiple, floating wave energy converters and are a critical requirement for estimating the potential far-field environmental effects of wave energy arrays. The experiments will be performed at the Hinsdale Wave Research Laboratory (Oregon State University) and will utilize an array of newly developed Buoys that are realistic, lab-scale floating power converters. The array of Buoys will be subjected to realistic, directional wave forcing (1:33 scale) that will approximate the expected conditions (waves and water depths) to be found off the Central Oregon Coast. Experimental observations will include comprehensive in-situ wave and current measurements as well as a suite of novel optical measurements. These new optical capabilities will include imaging of the 3D wave scattering using a binocular stereo camera system, as well as 3D device motion tracking using a newly acquired LED system. These observing systems will capture the 3D motion history of individual Buoys as well as resolve the 3D scattered wave field, thus resolving the constructive and destructive wave interference patterns produced by the array at high resolution. These data, combined with the device motion tracking, will provide the information necessary for array designs that balance array performance with the mitigation of far-field impacts. As a benchmark data set, these data will be an important resource for testing models of wave/buoy interactions, buoy performance, and far-field effects on wave and current patterns due to the presence of arrays.

  15. Monte Carlo simulation of MLC-shaped TrueBeam electron fields benchmarked against measurement

    CERN Document Server

    Lloyd, Samantha AM; Zavgorodni, Sergei

    2014-01-01

    Modulated electron radiotherapy (MERT) and combined modulated photon/electron radiotherapy (MPERT) have received increased research attention, having shown capacity for reduced low-dose exposure to healthy tissue and comparable, if not improved, target coverage for a number of treatment sites. Accurate dose calculation tools are necessary for clinical treatment planning, and Monte Carlo (MC) is the gold standard for electron field simulation. With many clinics replacing older accelerators, MC source models of the new machines are needed for continued development; however, Varian has kept the internal schematics of the TrueBeam confidential, and electron phase-space sources have not been made available. TrueBeam electron fields are not substantially different from those generated by the Clinac 21EX, so we have modified the internal schematics of the Clinac 21EX to simulate TrueBeam electrons. BEAMnrc/DOSXYZnrc were used to simulate 5x5 and 20x20 cm$^2$ electron fields with MLC-shaped apertures. Secondary collimati...

  16. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In philosophy of science, the question of validation of models has been somewhat narrow-minded, reducing the notion of validation to the establishment of truth. This article puts forward the diversity in applications of simulation models that demands a corresponding diversity in the notion of validation. The diversity of models with regard to their purpose, character, field of application and time dimension inherently calls for a similar diversity in validation approaches. A classification of models in terms of the mentioned elements is presented and used to shed light on possible types of validation.

  17. Benchmarking Electron-Cloud Build-Up and Heat-Load Simulations against Large-Hadron-Collider Observations

    CERN Document Server

    Dominguez, O; Maury, H; Rumolo, G; Zimmermann, F

    2011-01-01

    After reviewing the basic features of electron clouds in particle accelerators, the pertinent vacuum-chamber surface properties, and the electron-cloud simulation tools in use at CERN, we report recent observations of electron-cloud phenomena at the Large Hadron Collider (LHC) and ongoing attempts to benchmark the measured LHC vacuum pressure increases and heat loads against electron-cloud build-up simulations aimed at determining the actual surface parameters and at monitoring the so-called scrubbing process. Finally, some other electron-cloud studies related to the LHC are mentioned, and future study plans are described. Presented at MulCoPim2011, Valencia, Spain, 21-23 September 2011.

  18. Modeling of shielding benchmark for Na-24 γ-rays using scale code package and QAD-CGGP code

    International Nuclear Information System (INIS)

    Benchmark data were recently published for 1.37 and 2.75 MeV photons emitted by a uniform Na-24 disc source penetrating shields of six two-layer combinations, namely 12''Al+Fe, 12''Al+Pb, 6''Fe+Al, 6''Fe+Pb, 4''Pb+Al, and 4''Pb+Fe. These benchmark data fill a gap in the energy range of practical interest and provide useful reference values for evaluating computational methods. In order to evaluate the computational methods incorporated into the widely used shielding codes SCALE and QAD, we compared the benchmark data with the results of modeling the benchmark with these codes. Using the functional module SAS4 of the SCALE4 modular code package and the point-kernel gamma-ray shielding code system QAD-CGGP, scalar flux density spectra in the benchmark energy group structure were calculated for three of the two-layer combinations. The comparison showed that the QAD-CGGP and SAS4 results are in good agreement with each other, but the benchmark experimental data differ significantly from both of them. (author)
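    For context, the point-kernel method used by QAD-CGGP reduces, for a point isotropic source behind a slab, to exponential attenuation scaled by a buildup factor. A minimal sketch (the attenuation coefficient and buildup factor are illustrative values, not the evaluated data built into the codes):

```python
import math

def point_kernel_flux(source_strength, distance_cm, mu_per_cm, thickness_cm, buildup=1.0):
    """Point-kernel flux for a point isotropic source behind a slab shield.

    source_strength : photons emitted per second
    distance_cm     : source-to-detector distance (cm)
    mu_per_cm       : linear attenuation coefficient of the shield (1/cm)
    thickness_cm    : slab thickness along the ray (cm)
    buildup         : buildup factor accounting for scattered photons (>= 1)
    """
    uncollided = (source_strength * math.exp(-mu_per_cm * thickness_cm)
                  / (4.0 * math.pi * distance_cm ** 2))
    return buildup * uncollided

# Example: 2.75 MeV photons through 10 cm of iron; mu ~ 0.30 1/cm and
# buildup = 2.5 are illustrative assumptions.
print(point_kernel_flux(1e10, 100.0, 0.30, 10.0, buildup=2.5))
```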

  19. Finite Element Method Modeling of Sensible Heat Thermal Energy Storage with Innovative Concretes and Comparative Analysis with Literature Benchmarks

    Directory of Open Access Journals (Sweden)

    Claudio Ferone

    2014-08-01

    Efficient systems for high-performance buildings are required to improve the integration of renewable energy sources and to reduce primary energy consumption from fossil fuels. This paper is focused on sensible heat thermal energy storage (SHTES) systems using solid media and numerical simulation of their transient behavior using the finite element method (FEM). Unlike other papers in the literature, the numerical model and simulation approach simultaneously takes into consideration various aspects: thermal properties at high temperature, the actual geometry of the repeated storage element and the actual storage cycle adopted. High-performance thermal storage materials from the literature have been tested and used here as reference benchmarks. Other materials tested are lightweight concretes with recycled aggregates and a geopolymer concrete. Their thermal properties have been measured and used as inputs in the numerical model to preliminarily evaluate their application in thermal storage. The analysis carried out can also be used to optimize the storage system in terms of the thermal properties required of the storage material. The results showed a significant influence of the thermal properties on the performance of the storage elements. Simulation results have provided information for further scale-up from a single differential storage element to the entire module as a function of material thermal properties.
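    As a rough illustration of the kind of transient calculation involved (a 1D explicit finite-difference scheme rather than the paper's FEM model), the sketch below shows how the measured thermal properties enter a storage-element simulation; the property values are illustrative, concrete-like assumptions:

```python
import numpy as np

def step_heat_1d(T, dt, dx, k, rho, cp):
    """One explicit finite-difference step of 1D transient heat conduction.

    T  : temperature array (K); the boundary values are held fixed here
    k  : thermal conductivity (W/m/K), rho : density (kg/m^3),
    cp : specific heat (J/kg/K) -- the measured storage-material properties.
    """
    alpha = k / (rho * cp)                     # thermal diffusivity (m^2/s)
    assert alpha * dt / dx**2 <= 0.5, "explicit scheme stability limit violated"
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T_new

# Illustrative concrete-like properties: k = 1.5 W/m/K, rho = 2200 kg/m^3,
# cp = 880 J/kg/K; hot face held at 400 degrees C during charging.
T = np.full(51, 293.15)
T[0] = 673.15
for _ in range(1000):
    T = step_heat_1d(T, dt=0.5, dx=0.01, k=1.5, rho=2200.0, cp=880.0)
print(T[:5])
```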

  20. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Peiyuan [Univ. of Colorado, Boulder, CO (United States); Brown, Timothy [Univ. of Colorado, Boulder, CO (United States); Fullmer, William D. [Univ. of Colorado, Boulder, CO (United States); Hauser, Thomas [Univ. of Colorado, Boulder, CO (United States); Hrenya, Christine [Univ. of Colorado, Boulder, CO (United States); Grout, Ray [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sitaraman, Hariswaran [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-29

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations, and cover a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is spent on particle-particle force calculations, drag force calculations and interpolation between discrete particle and continuum fields. A hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
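    A weak-scaling analysis of this kind holds the problem size per core constant while increasing the core count, so the ideal wall time is flat and the efficiency at p cores is the single-core time divided by the p-core time. A small sketch with hypothetical timings (not the measured MFiX results):

```python
# Hypothetical (cores, wall time in s) pairs for a weak-scaling study:
# the problem size per core is held constant, so the ideal wall time is flat.
timings = [(1, 100.0), (8, 104.0), (64, 115.0), (512, 140.0), (4096, 260.0)]

t1 = timings[0][1]
for cores, t in timings:
    efficiency = t1 / t   # 1.0 is ideal; degradation signals communication overhead
    print(f"{cores:5d} cores: weak-scaling efficiency = {efficiency:.2f}")
```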

  1. Benchmarking the Higher Education Institutions in Egypt using Composite Index Model

    Directory of Open Access Journals (Sweden)

    Mohamed Rashad M El-Hefnawy

    2014-11-01

    Egypt has the largest and most significant higher education system in the Middle East and North Africa, but it has continuously faced serious, accumulated challenges. The Higher Education Institutions in Egypt are undergoing important changes involving the development of performance, and they are implementing strategies to enhance the overall performance of their universities using ICT; still, the gap between what exists and what is required for self-regulation and improvement processes is not entirely clear. The use of strategic comparative analysis models and tools to evaluate current and future states will affect the overall performance of universities and shape new paradigms in the development of the Higher Education System (HES); several studies have investigated the evaluation of universities through the development and use of ranking and benchmark systems. In this paper, we provide a model to construct a unified Composite Index (CI) based on a set of SMART indicators that emulate the nature of higher education systems in Egypt. The outcomes of the proposed model aim to measure the overall performance of universities and provide a unified benchmarking method in this context. The model is discussed from theoretical and technical perspectives. The research study was conducted with 40 professors from 19 renowned universities in Egypt as education domain experts.
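    As a minimal sketch of how such a composite index is typically assembled (the indicator names, weights, and bounds below are invented for illustration and are not those of the proposed model), each indicator is min-max normalized and then aggregated with weights:

```python
def composite_index(indicators, weights, bounds):
    """Weighted composite index from raw indicator values.

    indicators : {name: raw value} for one university
    weights    : {name: weight}, assumed to sum to 1
    bounds     : {name: (worst, best)} used for min-max normalization
    """
    score = 0.0
    for name, value in indicators.items():
        worst, best = bounds[name]
        normalized = (value - worst) / (best - worst)   # min-max to [0, 1]
        score += weights[name] * normalized
    return score

# Invented example indicators for one institution:
indicators = {"graduation_rate": 0.72, "research_output": 340, "ict_adoption": 0.55}
weights = {"graduation_rate": 0.4, "research_output": 0.35, "ict_adoption": 0.25}
bounds = {"graduation_rate": (0.0, 1.0), "research_output": (0, 1000), "ict_adoption": (0.0, 1.0)}
print(f"Composite index: {composite_index(indicators, weights, bounds):.3f}")
```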

  2. Groundwater flow with energy transport and water-ice phase change: Numerical simulations, benchmarks, and application to freezing in peat bogs

    Science.gov (United States)

    McKenzie, J.M.; Voss, C.I.; Siegel, D.I.

    2007-01-01

    In northern peatlands, subsurface ice formation is an important process that can control heat transport, groundwater flow, and biological activity. Temperature was measured over one and a half years in a vertical profile in the Red Lake Bog, Minnesota. To successfully simulate the transport of heat within the peat profile, the U.S. Geological Survey's SUTRA computer code was modified. The modified code simulates fully saturated, coupled porewater-energy transport, with freezing and melting porewater, and includes proportional heat capacity and thermal conductivity of water and ice, decreasing matrix permeability due to ice formation, and latent heat. The model is verified by correctly simulating the Lunardini analytical solution for ice formation in a porous medium with a mixed ice-water zone. The modified SUTRA model correctly simulates the temperature and ice distributions in the peat bog. Two possible benchmark problems for groundwater and energy transport with ice formation and melting are proposed that may be used by other researchers for code comparison. © 2006 Elsevier Ltd. All rights reserved.
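    The modifications described here (bulk properties proportional to the water/ice fractions, permeability decreasing with ice content, latent heat) can be sketched as simple mixing rules; the impedance-style permeability reduction below is an assumed illustrative form, not necessarily the exact relation used in the modified SUTRA code:

```python
def effective_properties(ice_fraction, k0):
    """Illustrative freeze-dependent properties of a saturated porous medium.

    ice_fraction : fraction of the pore water that is frozen, in [0, 1]
    k0           : intrinsic permeability of the unfrozen medium (m^2)
    """
    # Volumetric heat capacity (J/m^3/K) and thermal conductivity (W/m/K)
    # of the pore fluid, weighted by the water and ice fractions.
    c_water, c_ice = 4.18e6, 1.88e6
    lam_water, lam_ice = 0.6, 2.14
    heat_capacity = (1 - ice_fraction) * c_water + ice_fraction * c_ice
    conductivity = (1 - ice_fraction) * lam_water + ice_fraction * lam_ice
    # Impedance-style permeability reduction as ice blocks the pore space;
    # the exponent is an assumed illustrative value.
    permeability = k0 * 10.0 ** (-6.0 * ice_fraction)
    return heat_capacity, conductivity, permeability

print(effective_properties(0.5, 1e-12))
```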

  3. Gravity for Detecting Caves: Airborne and Terrestrial Simulations Based on a Comprehensive Karstic Cave Benchmark

    Science.gov (United States)

    Braitenberg, Carla; Sampietro, Daniele; Pivetta, Tommaso; Zuliani, David; Barbagallo, Alfio; Fabris, Paolo; Rossi, Lorenzo; Fabbri, Julius; Mansi, Ahmed Hamdi

    2016-04-01

    Underground caves present a natural hazard because they may evolve into sinkholes. Mapping all existing caves would also be useful for civil purposes such as natural storage, tourism and sports. Natural caves exist globally and are typical of karst areas. We investigate the resolution power of modern gravity campaigns to systematically detect all void caves of a minimum size in a given area; both aerogravity and terrestrial acquisitions are considered. Positioning of the gravity stations is fastest with GNSS methods, the performance of which is investigated. The estimates are based on a benchmark cave whose geometry is known precisely through a laser-scan survey: the Grotta Gigante cave in NE Italy, in the classic karst. The gravity acquisition is discussed, where heights were acquired with dual-frequency geodetic GNSS receivers and a Total Station. Height acquisitions with non-geodetic low-cost receivers are shown to be useful, although the resulting error on the gravity field is larger. The cave produces a signal of -1.5 × 10^-5 m/s^2, with a clear elliptic geometry. We analyze the feasibility of airborne gravity acquisitions for the purpose of systematically mapping void caves. It is found that observations from fixed-wing aircraft cannot resolve the caves, but observations from slower and low-flying helicopters or drones can. In order to detect caves the size of the benchmark cave, systematic terrestrial acquisitions require a density of three stations per 500 × 500 m^2 tile. The question has a large impact on civil and environmental planning, since it will allow urban development to be planned at a safe distance from subsurface caves. The survey shows that systematic coverage of the karst would have the benefit of recovering the positions of all of the greater existing void caves.
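    The detectability estimate rests on the forward gravity effect of a void: for a sphere of radius R and density contrast Δρ with its center at depth z, the vertical attraction along a surface profile has a standard closed form. A minimal sketch (the cave dimensions are illustrative, not those of the Grotta Gigante laser scan):

```python
import numpy as np

G = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)

def sphere_gravity_anomaly(x, depth, radius, delta_rho):
    """Vertical gravity anomaly (m/s^2) of a buried sphere along a surface profile.

    x         : horizontal offsets from the point above the sphere center (m)
    depth     : depth to the sphere center (m)
    delta_rho : density contrast (kg/m^3); negative for an air-filled void
    """
    mass = 4.0 / 3.0 * np.pi * radius**3 * delta_rho
    return G * mass * depth / (x**2 + depth**2) ** 1.5

# Illustrative void: 40 m radius, centered 80 m deep, in 2600 kg/m^3 rock;
# the peak anomaly is of the order of 10^-5 m/s^2, as quoted above.
x = np.linspace(-500.0, 500.0, 11)
print(sphere_gravity_anomaly(x, depth=80.0, radius=40.0, delta_rho=-2600.0))
```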

  4. Benchmark models and experimental data for a U(20) polyethylene-moderated critical system

    Energy Technology Data Exchange (ETDEWEB)

    Wetzel, Larry [Babcock & Wilcox Nuclear Operations Group Inc.; Busch, Robert D. [University of New Mexico, Albuquerque; Bowen, Douglas G [ORNL

    2015-01-01

    This work involves the analysis of recent experiments performed on the Aerojet General Nucleonics (AGN)-201M (AGN) polyethylene-moderated research reactor at the University of New Mexico (UNM). The experiments include 36 delayed critical (DC) configurations and 11 positive-period and rod-drop measurements (transient sequences). The Even Parity Neutron Transport (EVENT) radiation transport code was chosen to analyze these steady-state and time-dependent experimental configurations. The UNM AGN specifications provided in a benchmark calculation report (2007) were used to initiate AGN EVENT model development and to test the EVENT AGN calculation methodology. The results of the EVENT DC experimental analyses compared well with the experimental data; the average AGN EVENT calculation bias in keff is –0.0048% for the Legendre flux expansion order 11 (P11) cases and +0.0119% for the P13 cases. The EVENT transient analysis also compared well with the AGN experimental data with respect to predicting the reactor period and control rod worth values. This paper discusses the benchmark models used, the recent experimental configurations, and the EVENT experimental analysis.

  5. Benchmarking biological nutrient removal in wastewater treatment plants: influence of mathematical model assumptions

    DEFF Research Database (Denmark)

    Flores-Alsina, Xavier; Gernaey, Krist V.; Jeppsson, Ulf

    2012-01-01

    This paper examines the effect of different model assumptions when describing biological nutrient removal (BNR) by the activated sludge models (ASM) 1, 2d & 3. The performance of a nitrogen removal (WWTP1) and a combined nitrogen and phosphorus removal (WWTP2) benchmark wastewater treatment plant is evaluated. The reference case combines non-reactive TSS sedimentation and transport with the full set of ASM processes, while a third set of models includes electron acceptor dependency of biomass decay rates for ASM1 (WWTP1) and ASM2d (WWTP2). The results show that incorporation of a reactive settler: (1) increases the hydrolysis of particulates; (2) increases the overall plant's denitrification efficiency by reducing the SNOx concentration at the bottom of the clarifier; (3) increases the oxidation of COD compounds; (4) increases XOHO and XANO decay; and, finally, (5) increases the growth

  6. A control benchmark on the energy management of a plug-in hybrid electric vehicle

    OpenAIRE

    Sciarretta, Antonio; Serrao, Lorenzo; Dewagan, Prakash Chandra; Tona, Paolino; Bergshoeff, E.N. D.; Bordons, C.; Charmpa, E.; Elbert, P.; Eriksson, L.; Hofman, T.; Hubacher, H.; Isenegger, P.; Lacandia, F.; Laveau, A.; Li, H.

    2014-01-01

    A benchmark control problem was developed for a special session of the IFAC Workshop on Engine and Powertrain Control, Simulation and Modeling (E-COSM 12), held in Rueil-Malmaison, France, in October 2012. The online energy management of a plug-in hybrid-electric vehicle was to be developed by the benchmark participants. The simulator, provided by the benchmark organizers, implements a model of the GM Voltec powertrain. Each solution was evaluated according to severa...

  7. Modelling Hydraulic and Thermal Responses in a Benchmark for Deep Geothermal Heat Production

    Science.gov (United States)

    Holzbecher, E.; Oberdorfer, P.

    2012-04-01

    Geothermal heat production from deep reservoirs (5000-7000 m) is currently examined within the collaborative research program "Geothermal Energy and High-Performance Drilling" (gebo), funded by the Ministry of Science and Culture of Lower Saxony (Germany) and Baker Hughes. The projects concern the exploration and characterization of geothermal reservoirs as well as production, and are gathered in four major topic fields: geosystem, drilling, materials, and technical system. We present modelling of a benchmark set-up concerning geothermal production itself. The benchmark model "Horstberg" was originally created by J. Löhken and is based on geological data concerning the Horstberg site in Lower Saxony. The model region consists of a cube with a side length of 5 km, in which 13 geological layers are included. A fault zone splits the region into two parts with shifted layering. A well is implemented, reaching from the top to an optional depth and crossing all layers including the fault zone. The original geological model was rebuilt and improved in COMSOL Multiphysics Version 4.2a. The heterogeneous and detailed configuration makes the model interesting for benchmarking hydrogeological and geothermal applications. It is possible to inject and pump at any level in the well and to study the hydraulic and thermal responses of the system. The hydraulic and thermal parameters can be varied, and groundwater flow can be introduced. Moreover, it is also possible to examine structural mechanical responses to changes in the stress field (which is not further examined here). The main purpose of the presented study is to examine the dynamic flow characteristics of a highly conductive hydraulic zone (Detfurth) connected to a highly conductive fault. One example is fluid injection in the Detfurth zone and production in the fault. The highly conductive domains can provide a hydraulic connection between the well screens, and the resulting flow circuit could be used for geothermal heat production.

  8. Benchmarking of a 1D Scrape-off layer code SOLF1D with SOLPS and its use in modelling long-legged divertors

    CERN Document Server

    Havlickova, E; Subba, F; Coster, D; Wischmeier, M; Fishpool, G

    2013-01-01

    A 1D code modelling SOL transport parallel to the magnetic field (SOLF1D) is benchmarked with 2D simulations of MAST-U SOL performed via the SOLPS code for two different collisionalities. Based on this comparison, SOLF1D is then used to model the effects of divertor leg stretching in 1D, in support of the planned Super-X divertor on MAST. The aim is to separate magnetic flux expansion from volumetric power losses due to recycling neutrals by stretching the divertor leg either vertically or radially.

  9. Simulation - modeling - experiment

    International Nuclear Information System (INIS)

    After two workshops held in 2001 on the same topics, and in order to make a status of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); methods of disturbances and sensitivity analysis of nuclear data in reactor physics, application to VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F

  10. How to Use Benchmark and Cross-section Studies to Improve Data Libraries and Models

    Science.gov (United States)

    Wagner, V.; Suchopár, M.; Vrzalová, J.; Chudoba, P.; Svoboda, O.; Tichý, P.; Krása, A.; Majerle, M.; Kugler, A.; Adam, J.; Baldin, A.; Furman, W.; Kadykov, M.; Solnyshkin, A.; Tsoupko-Sitnikov, S.; Tyutyunikov, S.; Vladimirovna, N.; Závorka, L.

    2016-06-01

    Improvements of Monte Carlo transport codes and cross-section libraries are very important steps towards the use of accelerator-driven transmutation systems. We have conducted numerous benchmark experiments with different set-ups consisting of lead, natural uranium and moderator irradiated by relativistic protons and deuterons within the framework of the collaboration “Energy and Transmutation of Radioactive Waste”. Unfortunately, the knowledge of the total or partial cross-sections of important reactions is insufficient. For this reason we have started extensive studies of different reaction cross-sections. We measure cross-sections of important neutron reactions by means of quasi-monoenergetic neutron sources based on the cyclotrons at the Nuclear Physics Institute in Řež and at The Svedberg Laboratory in Uppsala. Measurement of partial cross-sections of relativistic deuteron reactions is the second direction of our studies. The new results obtained during recent years will be shown, and the possible use of these data for the improvement of libraries, models and benchmark studies will be discussed.

  11. Studies of accurate multi-component lattice Boltzmann models on benchmark cases required for engineering applications

    CERN Document Server

    Otomo, Hiroshi; Li, Yong; Dressler, Marco; Staroselsky, Ilya; Zhang, Raoyang; Chen, Hudong

    2016-01-01

    We present recent developments in lattice Boltzmann modeling for multi-component flows, implemented on the platform of the general-purpose, arbitrary-geometry solver PowerFLOW. The presented benchmark cases demonstrate the method's accuracy and robustness, which are necessary for handling real-world engineering applications at practical resolution and computational cost. The key requirement for such an approach is that the relevant physical properties and flow characteristics do not strongly depend on numerics. In particular, the strength of surface tension obtained using our new approach is independent of viscosity and resolution, while spurious currents are significantly suppressed. Using a much improved surface wetting model, undesirable numerical artifacts, including thin films and artificial droplet movement on inclined walls, are significantly reduced.

  12. Comparison of different numerical models using a two-dimensional density-driven benchmark of a freshwater lens

    Science.gov (United States)

    Stoeckl, L.; Walther, M.; Schneider, A.; Yang, J.; Gaj, M.; Graf, T.

    2013-12-01

    The physical experiment of Stoeckl and Houben (2012)* was taken as a benchmark to compare the results of calculations by several finite volume and finite element programs. In the experiment, an acrylic glass box was used to simulate a cross section of an infinite strip island. Degassed salt water (density 1021 kg m-3) was injected, saturating the sand from bottom to top. Fluorescent tracer dyes (uranine, eosine and indigotine) were used to mark fresh water (density 997 kg m-3) infiltrating from the top. While freshwater constantly infiltrated, saltwater was displaced and a freshwater lens developed until reaching equilibrium. The experiment was recorded and analyzed in fast-motion mode. The numerical groundwater flow models used for the comparison are Feflow, Spring, OpenGeoSys, d3f and HydroGeoSphere. All programs are capable of solving the partial differential equations of coupled flow and transport. To ensure the highest level of comparability, the setups are defined as similarly as possible: identical temporal and spatial resolutions are applied to all models (a triangular grid with 14,432 elements and constant time steps of 8.64 s), the same boundary conditions and parameters are used, and the output of each model is converted into the same format and post-processed in the open-source program ParaView. Transient as well as steady-state flow fields and concentration distributions are compared, and the capabilities of the different models are described, showing differences, limitations and advantages. The results show that all models are capable of representing the benchmark to a high degree. Still, differences are observed, even though the models were kept as similar as possible. Some deviations may be explained by omitted processes that cannot be represented in certain models, whereas other deviations may be explained by program-specific differences in solving the partial differential equations. * Stoeckl, L., Houben, G. (2012): Flow dynamics and age stratification
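    A common thread among these codes is a linear equation of state coupling fluid density to salt concentration; a minimal sketch using the two end-member densities reported for the experiment:

```python
def fluid_density(c, rho_fresh=997.0, rho_salt=1021.0):
    """Linear equation of state: density (kg/m^3) as a function of the
    normalized salt mass fraction c in [0, 1], using the fresh- and
    saltwater densities reported for the experiment."""
    return rho_fresh + (rho_salt - rho_fresh) * c

# Fresh water, a 50/50 mixture, and salt water:
print(fluid_density(0.0), fluid_density(0.5), fluid_density(1.0))
```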

  13. Benchmark 1 - Failure Prediction after Cup Drawing, Reverse Redrawing and Expansion Part A: Benchmark Description

    Science.gov (United States)

    Watson, Martin; Dick, Robert; Huang, Y. Helen; Lockley, Andrew; Cardoso, Rui; Santos, Abel

    2016-08-01

    This Benchmark is designed to predict the fracture of a food can after drawing, reverse redrawing and expansion. The aim is to assess different sheet metal forming difficulties such as plastic anisotropic earing and failure models (strain and stress based Forming Limit Diagrams) under complex nonlinear strain paths. To study these effects, two distinct materials, TH330 steel (unstoved) and AA5352 aluminum alloy are considered in this Benchmark. Problem description, material properties, and simulation reports with experimental data are summarized.

  14. Building America Top Innovations 2012: House Simulation Protocols (the Building America Benchmark)

    Energy Technology Data Exchange (ETDEWEB)

    none,

    2013-01-01

    This Building America Top Innovations profile describes the DOE-sponsored House Simulation Protocols, which have helped ensure consistent and accurate energy-efficiency assessments for tens of thousands of new and retrofit homes supported by the Building America program.

  15. Monte Carlo Simulation of the TRIGA Mark II Benchmark Experiment with Burned Fuel

    International Nuclear Information System (INIS)

    Monte Carlo calculations of a criticality experiment with burned fuel on the TRIGA Mark II research reactor are presented. The main objective was to incorporate the burned fuel composition calculated with the WIMSD4 deterministic code into the MCNP4B Monte Carlo code and compare the calculated keff with the measurements. The criticality experiment was performed in 1998 at the ''Jozef Stefan'' Institute TRIGA Mark II reactor in Ljubljana, Slovenia, with the same fuel elements and loading pattern as in the TRIGA criticality benchmark experiment with fresh fuel performed in 1991. The only difference was that in 1998 the fuel elements had an average burnup of ∼3%, corresponding to 1.3 MWd of energy produced in the core between 1991 and 1998. The fuel element burnup accumulated during 1991-1998 was calculated with TRIGLAV, an in-house two-dimensional multigroup diffusion fuel management code. The burned fuel isotopic composition was calculated with the WIMSD4 code and compared to ORIGEN2 calculations. An extensive comparison of burned fuel material composition was performed for both codes for burnups of up to 20% of 235U burned, and the differences were evaluated in terms of reactivity. The WIMSD4 and ORIGEN2 results agreed well for all isotopes important in reactivity calculations, giving increased confidence in the WIMSD4 calculation of the burned fuel material composition. The keff obtained with the combined WIMSD4 and MCNP4B calculations showed good agreement with the experimental values. This shows that linking WIMSD4 with MCNP4B for criticality calculations with burned fuel is feasible and gives reliable results

  16. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet-based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and thereby to explore

  17. Chaos and bifurcation control of SSR in the IEEE second benchmark model

    Energy Technology Data Exchange (ETDEWEB)

    Harb, A.M. E-mail: aharb@just.edu.jo; Widyan, M.S

    2004-07-01

    Linear and nonlinear state feedback controllers are proposed to control the bifurcations of subsynchronous resonance (SSR) in a power system, a phenomenon of electromechanical interaction between series resonant circuits and the torsional mechanical frequencies of the turbine-generator shaft sections. The first system of the IEEE second benchmark model is considered, and the dynamics of the damper windings on both axes, the automatic voltage regulator and the power system stabilizer are included. The linear controller gives a better initial-disturbance response than the nonlinear one, but only in a narrow region of compensation factors. The nonlinear controller not only can be implemented easily, but also stabilizes the operating point for all values of the bifurcation parameter.

  18. A Gross-Margin Model for Defining Technoeconomic Benchmarks in the Electroreduction of CO2.

    Science.gov (United States)

    Verma, Sumit; Kim, Byoungsu; Jhong, Huei-Ru Molly; Ma, Sichao; Kenis, Paul J A

    2016-08-01

    We introduce a gross-margin model to evaluate the technoeconomic feasibility of producing different C1-C2 chemicals such as carbon monoxide, formic acid, methanol, methane, ethanol, and ethylene through the electroreduction of CO2. Key performance benchmarks including the maximum operating cell potential (Vmax), minimum operating current density (jmin), Faradaic efficiency (FE), and catalyst durability (tcatdur) are derived. The Vmax values obtained for the different chemicals indicate that CO and HCOOH are the most economically viable products. Selectivity requirements suggest that the coproduction of an economically less feasible chemical (CH3OH, CH4, C2H5OH, C2H4) with a more feasible chemical (CO, HCOOH) can be a strategy to offset the Vmax requirements for individual products. Other performance requirements such as jmin and tcatdur are also derived, and the feasibility of alternative process designs and operating conditions is evaluated. PMID:27345560
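    The electricity-cost side of such a gross-margin model follows from Faraday's law: the energy required per kilogram of product scales with the operating cell potential and inversely with the Faradaic efficiency. A minimal sketch (the operating point and electricity price are illustrative assumptions, not the paper's benchmark values):

```python
F = 96485.0  # Faraday constant (C/mol)

def electricity_cost_per_kg(cell_potential_v, n_electrons, molar_mass_g,
                            faradaic_eff, price_per_kwh):
    """Electricity cost (currency/kg product) for CO2 electroreduction.

    cell_potential_v : operating cell potential (V)
    n_electrons      : electrons transferred per product molecule (2 for CO, HCOOH)
    faradaic_eff     : fraction of the current going to the desired product
    """
    # Electrical energy per mole of product (J), inflated by selectivity losses.
    energy_per_mol = n_electrons * F * cell_potential_v / faradaic_eff
    kwh_per_kg = energy_per_mol / 3.6e6 / (molar_mass_g / 1000.0)
    return kwh_per_kg * price_per_kwh

# Illustrative: CO at a 3.0 V cell potential, 95% FE, electricity at $0.03/kWh.
print(f"${electricity_cost_per_kg(3.0, 2, 28.0, 0.95, 0.03):.3f} per kg CO")
```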

  19. A dynamic flow simulation code benchmark study addressing the highly heterogeneous properties of the Stuttgart formation at the Ketzin pilot site

    Science.gov (United States)

    Kempka, Thomas; Class, Holger; Görke, Uwe-Jens; Norden, Ben; Kolditz, Olaf; Kühn, Michael; Walter, Lena; Wang, Wenqing; Zehner, Björn

    2013-04-01

    CO2 injection at the Ketzin pilot site, located in Eastern Germany (Brandenburg) about 25 km west of Berlin, has been underway since June 2008, with a scheduled total of about 70,000 t of CO2 to be injected into the saline aquifer represented by the Stuttgart Formation at a depth of 630 m to 650 m by the end of August 2013. The Stuttgart Formation is of fluvial origin, characterized by high-permeability sandstone channels embedded in a low-permeability floodplain facies, indicating a highly heterogeneous distribution of the reservoir properties relevant for dynamic flow simulations, such as facies distribution, porosity and permeability. Following the dynamic modelling activities discussed by Kempka et al. (2010), a revised geological model allowed us to history-match CO2 arrival times in the observation wells and reservoir pressure with good agreement (Martens et al., 2012). Consequently, the validated reservoir model of the Stuttgart Formation at the Ketzin pilot site enabled us to predict the development of reservoir pressure and CO2 plume migration in the storage formation by dynamic flow simulations. A benchmark study of industrial (ECLIPSE 100 as well as ECLIPSE 300 CO2STORE and GASWAT) and scientific dynamic flow simulation codes (TOUGH2-MP/ECO2N, OpenGeoSys and DuMuX) was initiated to address and compare the simulator capabilities for a highly complex reservoir model. Hence, our dynamic flow simulations take into account different properties of the geological model, such as the significant variation of porosity and permeability in the Stuttgart Formation, as well as structural features implemented in the geological model, such as seven major faults located at the top of the Ketzin anticline. Integration of the geological model into reservoir models suitable for the different dynamic flow simulators demonstrated that a direct conversion of reservoir model discretization between Finite Volume and Finite Element flow simulators is not feasible.

  20. Flexing computational muscle: modeling and simulation of musculotendon dynamics.

    Science.gov (United States)

    Millard, Matthew; Uchida, Thomas; Seth, Ajay; Delp, Scott L

    2013-02-01

    Muscle-driven simulations of human and animal motion are widely used to complement physical experiments for studying movement dynamics. Musculotendon models are an essential component of muscle-driven simulations, yet neither the computational speed nor the biological accuracy of the simulated forces has been adequately evaluated. Here we compare the speed and accuracy of three musculotendon models: two with an elastic tendon (an equilibrium model and a damped equilibrium model) and one with a rigid tendon. Our simulation benchmarks demonstrate that the equilibrium and damped equilibrium models produce similar force profiles but have different computational speeds. At low activation, the damped equilibrium model is 29 times faster than the equilibrium model when using an explicit integrator and 3 times faster when using an implicit integrator; at high activation, the two models have similar simulation speeds. In the special case of simulating a muscle with a short tendon, the rigid-tendon model produces forces that match those generated by the elastic-tendon models, but simulates 2-54 times faster when an explicit integrator is used and 6-31 times faster when an implicit integrator is used. The equilibrium, damped equilibrium, and rigid-tendon models reproduce forces generated by maximally-activated biological muscle with mean absolute errors less than 8.9%, 8.9%, and 20.9% of the maximum isometric muscle force, respectively. When compared to forces generated by submaximally-activated biological muscle, the forces produced by the equilibrium, damped equilibrium, and rigid-tendon models have mean absolute errors less than 16.2%, 16.4%, and 18.5%, respectively. To encourage further development of musculotendon models, we provide implementations of each of these models in OpenSim version 3.1 and benchmark data online, enabling others to reproduce our results and test their models of musculotendon dynamics. PMID:23445050
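    As a rough sketch of the rigid-tendon formulation being compared (with generic Hill-type curves; the actual OpenSim curve definitions are more elaborate), active muscle force is the product of activation, force-length and force-velocity multipliers scaled by the maximum isometric force, plus a passive elastic term:

```python
import math

def rigid_tendon_force(activation, lm_norm, vm_norm, f_max):
    """Hill-type muscle force with a rigid tendon (illustrative curves).

    activation : neural activation in [0, 1]
    lm_norm    : fiber length normalized by the optimal fiber length
    vm_norm    : fiber velocity normalized by the maximum shortening velocity
                 (negative for shortening)
    f_max      : maximum isometric force (N)
    """
    # Gaussian active force-length curve centered at the optimal length.
    f_len = math.exp(-((lm_norm - 1.0) ** 2) / 0.45)
    # Simple hyperbolic force-velocity curve, capped for fast lengthening.
    f_vel = (1.0 + vm_norm) / (1.0 - vm_norm / 4.0) if vm_norm < 1.0 else 1.8
    f_vel = max(0.0, f_vel)
    # Passive elastic force once fibers stretch beyond the optimal length.
    f_pas = (math.exp(8.0 * (lm_norm - 1.0)) - 1.0) / (math.e ** 4 - 1.0) if lm_norm > 1.0 else 0.0
    return f_max * (activation * f_len * f_vel + f_pas)

# Isometric contraction at optimal length with 50% activation -> 500 N:
print(rigid_tendon_force(0.5, 1.0, 0.0, 1000.0))
```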

  2. Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model

    Energy Technology Data Exchange (ETDEWEB)

    Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr [Laboratoire de Recherche Conventionné MANON, CEA/DEN/DANS/DM2S and UPMC-CNRS/LJLL (France); CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex (France); Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr [Laboratoire de Recherche Conventionné MANON, CEA/DEN/DANS/DM2S and UPMC-CNRS/LJLL (France); CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex (France); Maday, Yvon, E-mail: maday@ann.jussieu.fr [Sorbonne Universités, UPMC Univ Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions and Institut Universitaire de France, F-75005, Paris (France); Laboratoire de Recherche Conventionné MANON, CEA/DEN/DANS/DM2S and UPMC-CNRS/LJLL (France); Brown Univ, Division of Applied Maths, Providence, RI (United States); Riahi, Mohamed Kamel, E-mail: riahi@cmap.polytechnique.fr [Laboratoire de Recherche Conventionné MANON, CEA/DEN/DANS/DM2S and UPMC-CNRS/LJLL (France); CMAP, Inria-Saclay and X-Ecole Polytechnique, Route de Saclay, 91128 Palaiseau Cedex (France); Salomon, Julien, E-mail: salomon@ceremade.dauphine.fr [CEREMADE, Univ Paris-Dauphine, Pl. du Mal. de Lattre de Tassigny, F-75016, Paris (France)

    2014-12-15

    In this paper we present a time-parallel algorithm for the 3D neutron calculation in a transient model of a nuclear reactor core. The neutron calculation consists in numerically solving the time-dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with the finite element method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model features moving control rods during the time of the reaction; therefore, cross-sections (piecewise constant) are taken into account by interpolation with respect to the velocity of the control rods. The parallelism across time is achieved by an adequate application of the parareal-in-time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-position control rod model, while the fine propagator is assumed to be a high-order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch–Maurer–Werner benchmark.
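    The predictor-corrector structure described above can be sketched compactly: a cheap coarse propagator G sweeps serially through the time slices, while the expensive fine propagator F is applied to each slice independently (in parallel in practice) and its discrepancy with G corrects the next iterate. A minimal sketch for a generic scalar ODE, with a serial loop standing in for the parallel fine sweeps (the propagators and toy problem are illustrative, not the paper's neutron diffusion solvers):

```python
import numpy as np

def parareal(f_coarse, f_fine, u0, t_grid, n_iter):
    """Parareal iteration: U^{k+1}_{n+1} = G(U^{k+1}_n) + F(U^k_n) - G(U^k_n).

    f_coarse, f_fine : propagators mapping (u, t0, t1) -> u(t1)
    u0               : initial state; t_grid : time-slice boundaries
    """
    N = len(t_grid) - 1
    U = [u0] * (N + 1)
    # Initial serial coarse sweep (the predictor).
    for n in range(N):
        U[n + 1] = f_coarse(U[n], t_grid[n], t_grid[n + 1])
    for _ in range(n_iter):
        # Fine propagation of every slice; independent, hence parallelizable.
        F_vals = [f_fine(U[n], t_grid[n], t_grid[n + 1]) for n in range(N)]
        G_old = [f_coarse(U[n], t_grid[n], t_grid[n + 1]) for n in range(N)]
        # Serial corrector sweep using the freshly updated left states.
        for n in range(N):
            U[n + 1] = f_coarse(U[n], t_grid[n], t_grid[n + 1]) + F_vals[n] - G_old[n]
    return U

# Toy problem du/dt = -u: coarse = one Euler step, fine = 100 Euler steps.
euler = lambda u, t0, t1, m: u * (1.0 - (t1 - t0) / m) ** m
coarse = lambda u, t0, t1: euler(u, t0, t1, 1)
fine = lambda u, t0, t1: euler(u, t0, t1, 100)
U = parareal(coarse, fine, 1.0, np.linspace(0.0, 2.0, 11), n_iter=3)
print(U[-1], np.exp(-2.0))  # converges toward the (near-exact) fine solution
```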

  3. 3D simulation of Industrial Hall in case of fire. Benchmark between ABAQUS, ANSYS and SAFIR

    OpenAIRE

    Vassart, Olivier; Cajot, Louis-Guy; O'Connor, Marc; Shenkai, Y.; Fraud, C.; Zhao, Bin; De la Quintana, Jesus; Martinez de Aragon, J.; Franssen, Jean-Marc; Gens, Frederic

    2004-01-01

    For single-storey buildings, the structural behaviour in case of fire is relevant only for the safety of the firemen. The protection of occupants and goods is a matter of fire spread, smoke propagation, active fire-fighting measures and evacuation facilities. Brittle failure, progressive collapse and partial outward failure of façade elements may endanger the fire fighters and have to be avoided. In order to deal with such an objective, the simulation software has to cover the 3D structura...

  4. Validation of mechanical models for reinforced concrete structures: Presentation of the French project ``Benchmark des Poutres de la Rance''

    Science.gov (United States)

    L'Hostis, V.; Brunet, C.; Poupard, O.; Petre-Lazar, I.

    2006-11-01

    Several ageing models are available for predicting the mechanical consequences of rebar corrosion; they are used for service life prediction of reinforced concrete structures. Concerning corrosion diagnosis of reinforced concrete, some Non Destructive Testing (NDT) tools have been developed and have been in use for some years. However, these developments require validation on existing concrete structures. The French project “Benchmark des Poutres de la Rance” contributes to this aspect. It has two main objectives: (i) validation of mechanical models to estimate the influence of rebar corrosion on the load-bearing capacity of a structure; (ii) qualification of the use of NDT results to collect information on steel corrosion within reinforced concrete structures. Ten French and European institutions, from both academic research laboratories and industrial companies, contributed during the years 2004 and 2005. This paper presents the project, which was divided into several work packages: (i) the reinforced concrete beams were characterized with non-destructive testing tools; (ii) the mechanical behaviour of the beams was tested experimentally; (iii) complementary laboratory analyses were performed; and (iv) finally, numerical simulation results were compared to the experimental results obtained in the mechanical tests.

  5. Effects of Existing Evaluated Nuclear Data Files on Nuclear Parameters of the BFS-62-3A Assembly Benchmark Model

    OpenAIRE

    Mikhail

    2002-01-01

    This report continues the study of the experiments performed on the BFS-62-3A critical assembly in Russia. The objective of the work is to determine the effect of cross-section uncertainties on reactor neutronics parameters as applied to the hybrid core of the BN-600 reactor at the Beloyarskaya NPP. A two-dimensional benchmark model of BFS-62-3A was created specifically for these purposes, and the experimental values were reduced to it. Benchmark characteristics for this assembly are (1) criticality; (2) central fiss...

  6. Systematic effects in CALOR simulation code to model experimental configurations

    International Nuclear Information System (INIS)

    The CALOR89 code system is being used to simulate test beam results and the design parameters of several calorimeter configurations. It has been benchmarked against the ZEUS, D0 and HELIOS data. This study identifies the systematic effects in CALOR simulations of experimental configurations. Five major systematic effects are identified: the choice of high-energy nuclear collision model, material composition, scintillator saturation, shower integration time, and shower containment. Quantitative estimates of these systematic effects are presented. 23 refs., 6 figs., 7 tabs

  7. Advancing Material Models for Automotive Forming Simulations

    Science.gov (United States)

    Vegter, H.; An, Y.; ten Horn, C. H. L. J.; Atzema, E. H.; Roelofsen, M. E.

    2005-08-01

    Simulations in the automotive industry need more advanced material models to achieve highly reliable forming and springback predictions. Conventional material models implemented in FEM simulation codes are not capable of describing the plastic material behaviour during monotonic strain paths with sufficient accuracy. Recently, ESI and Corus have cooperated on the implementation of an advanced material model in the FEM code PAMSTAMP 2G. This applies to the strain hardening model, the influence of strain rate, and the description of the yield locus in these models. A subsequent challenge is the description of the material after a change of strain path. The use of advanced high-strength steels in the automotive industry requires a description of the plastic material behaviour of multiphase steels. The simplest variant is dual-phase steel, consisting of a ferritic and a martensitic phase; multiphase materials also contain a bainitic phase in addition to the ferritic and martensitic phases. More physical descriptions of strain hardening than simple fitted Ludwik/Nadai curves are necessary. Methods to predict the plastic behaviour of single-phase materials use a simple dislocation interaction model based only on the formed cell structures. A new method proposed at Corus to predict the plastic behaviour of multiphase materials takes hard phases into account, which deform less easily; the resulting deformation gradients create geometrically necessary dislocations. Additional microstructural information, such as the morphology and size of hard-phase particles or grains, is necessary to derive the strain hardening models for this type of material. Measurements available from the Numisheet benchmarks allow these models to be validated. At Corus, additional measured values are available from cross-die tests. This laboratory test can attain critical deformations through large variations in blank size and processing conditions. The tests are a powerful tool in optimising forming simulations.

  8. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production-theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...

  9. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601, Advances in homogenization methods of climate series: an integrated approach (HOME), has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study, as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed using the individual station series as well as the network-average regional series. The performance of the contributions depends significantly on the error metric considered; contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
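    The first of these metrics, the centered root mean square error, removes each series' mean before measuring the remaining deviation from the true homogeneous series, so a constant offset is not penalized. A minimal sketch (the toy series is invented for illustration):

```python
import numpy as np

def centered_rmse(homogenized, truth):
    """Centered RMSE: RMS deviation after removing each series' own mean,
    so that a constant offset between the two series is not penalized."""
    h = homogenized - np.mean(homogenized)
    t = truth - np.mean(truth)
    return float(np.sqrt(np.mean((h - t) ** 2)))

# Toy example: a homogenized series that still carries a small spurious break.
truth = np.sin(np.linspace(0.0, 12.0, 120))
homogenized = truth + 0.1 * (np.arange(120) > 60)   # residual 0.1-unit break
print(f"centered RMSE: {centered_rmse(homogenized, truth):.3f}")
```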

  10. Benchmark Simulation for the Development of the Regulatory Audit Subchannel Analysis Code

    International Nuclear Information System (INIS)

    For the safe and reliable operation of a reactor, it is important to accurately predict the flow and temperature distributions in the thermal-hydraulic design of the reactor core. A subchannel approach can give reasonable flow and temperature distributions within a short computing time. The Korea Institute of Nuclear Safety (KINS) is presently reviewing a new subchannel code, THALES, which will substitute for both the THINC-IV and TORC codes. To assess the prediction performance of THALES, KINS is developing a subchannel analysis code for independent audit calculations, based on the workstation version of COBRA-IV-I. The main objective of the present study is to assess the performance of the COBRA-IV-I code by comparing its simulation results with experimental ones for the sample problems.

  11. Antibiotic reimbursement in a model delinked from sales: a benchmark-based worldwide approach.

    Science.gov (United States)

    Rex, John H; Outterson, Kevin

    2016-04-01

    Despite the life-saving ability of antibiotics and their importance as a key enabler of all of modern health care, their effectiveness is now threatened by a rising tide of resistance. Unfortunately, the antibiotic pipeline does not match health needs because of challenges in discovery and development, as well as the poor economics of antibiotics. Discovery and development are being addressed by a range of public-private partnerships; however, correcting the poor economics of antibiotics will need an overhaul of the present business model on a worldwide scale. Discussions are now converging on delinking reward from antibiotic sales through prizes, milestone payments, or insurance-like models in which innovation is rewarded with a fixed series of payments of a predictable size. Rewarding all drugs with the same payments could create perverse incentives to produce drugs that provide the least possible innovation. Thus, we propose a payment model using a graded array of benchmarked rewards designed to encourage the development of antibiotics with the greatest societal value, together with appropriate worldwide access to antibiotics to maximise human health.

  12. Climate simulations for 1880-2003 with GISS modelE

    CERN Document Server

    Hansen, J; Bauer, S; Baum, E; Cairns, B; Canuto, V; Chandler, M; Cheng, Y; Cohen, A; Faluvegi, G; Fleming, E; Friend, A; Genio, A D; Hall, T; Jackman, C; Jonas, J; Kelley, M; Kharecha, P; Kiang, N Y; Koch, D; Labow, G; Lacis, A; Lerner, J; Lo, K; Menon, S; Miller, R; Nazarenko, L; Novakov, T; Oinas, V; Perlwitz, J; Rind, D; Romanou, A; Ruedy, R; Russell, G; Sato, M; Schmidt, G A; Schmunk, R; Shindell, D; Stone, P; Streets, D; Sun, S; Tausnev, N; Thresher, D; Unger, N; Yao, M; Zhang, S; Perlwitz, Ja.; Perlwitz, Ju.

    2006-01-01

    We carry out climate simulations for 1880-2003 with GISS modelE driven by ten measured or estimated climate forcings. An ensemble of climate model runs is carried out for each forcing acting individually and for all forcing mechanisms acting together. We compare side-by-side simulated climate change for each forcing, all forcings, observations, unforced variability among model ensemble members, and, if available, observed variability. Discrepancies between observations and simulations with all forcings are due to model deficiencies, inaccurate or incomplete forcings, and imperfect observations. Although there are notable discrepancies between model and observations, the fidelity is sufficient to encourage use of the model for simulations of future climate change. By using a fixed well-documented model and accurately defining the 1880-2003 forcings, we aim to provide a benchmark against which the effect of improvements in the model, climate forcings, and observations can be tested. Principal model deficiencies...

  13. 3-D core modelling of RIA transient: the TMI-1 benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Ferraresi, P. [CEA Cadarache, Institut de Protection et de Surete Nucleaire, Dept. de Recherches en Securite, 13 - Saint Paul Lez Durance (France); Studer, E. [CEA Saclay, Dept. Modelisation de Systemes et Structures, 91 - Gif sur Yvette (France); Avvakumov, A.; Malofeev, V. [Nuclear Safety Institute of Russian Research Center, Kurchatov Institute, Moscow (Russian Federation); Diamond, D.; Bromley, B. [Nuclear Energy and Infrastructure Systems Div., Brookhaven National Lab., BNL, Upton, NY (United States)

    2001-07-01

    The increase of fuel burnup in core management now raises the problem of evaluating the energy deposited during Reactivity Insertion Accidents (RIA). In order to evaluate this energy precisely, 3-D approaches are used more and more frequently in core calculations. This 'best-estimate' approach requires the evaluation of code uncertainties. To contribute to this evaluation, a code benchmark has been launched. A 3-D modelling of the TMI-1 central Ejected Rod Accident with zero and intermediate initial powers was carried out with three different methods of calculation for an inserted reactivity fixed at 1.2 $ and 1.26 $, respectively. The studies implemented with the neutronics codes PARCS (BNL) and CRONOS (IPSN/CEA) describe a homogeneous assembly, whereas the BARS (KI) code allows a pin-by-pin representation (CRONOS has both possibilities). All the calculations are consistent, the variation in figures resulting mainly from the method used to build cross sections and reflector constants. The maximum rise in enthalpy for the intermediate initial power (33% P{sub N}) calculation is, for this academic calculation, about 30 cal/g. This work will be completed in a subsequent step by an evaluation of the uncertainty induced by the uncertainty on model parameters, and a sensitivity study of the key parameters for a peripheral Rod Ejection Accident. (authors)
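
    For intuition about the transient being benchmarked, a zero-dimensional point-kinetics model with a crude Doppler feedback reproduces the qualitative shape of a rod-ejection power excursion. This is only an illustrative sketch with assumed parameters; the benchmark itself relies on 3-D neutronics (PARCS, CRONOS, BARS).

```python
# One-delayed-group point kinetics with adiabatic Doppler feedback
# (a Nordheim-Fuchs-like setup). All parameters are assumed for illustration.
beta, lam, LAM = 0.0065, 0.08, 2.0e-5  # delayed fraction, decay const. (1/s), generation time (s)
alpha_d = -3.0e-5                      # Doppler coefficient (dk/k per K), assumed
kappa = 300.0                          # adiabatic fuel heat-up (K per unit power per second)
rho_ext = 1.2 * beta                   # ejected-rod worth: 1.2 $

n, c, dT = 1.0, beta / (LAM * lam), 0.0  # power, precursors, fuel temperature rise
dt, history = 1.0e-6, []
for _ in range(int(0.5 / dt)):           # 0.5 s of transient
    rho = rho_ext + alpha_d * dT         # net reactivity with feedback
    dn = (rho - beta) / LAM * n + lam * c
    dc = beta / LAM * n - lam * c
    n, c = n + dt * dn, c + dt * dc
    dT += dt * kappa * n
    history.append(n)

print(f"peak relative power: {max(history):.1f}")
```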

  14. Benchmarking atomic physics models for magnetically confined fusion plasma physics experiments

    International Nuclear Information System (INIS)

    In present magnetically confined fusion devices, high and intermediate Z impurities are either puffed into the plasma for divertor radiative cooling experiments or are sputtered from the high Z plasma facing armor. The beneficial cooling of the edge as well as the detrimental radiative losses from the core of these impurities can be properly understood only if the atomic physics used in the modeling of the cooling curves is very accurate. To this end, a comprehensive experimental and theoretical analysis of some relevant impurities is undertaken. Gases (Ne, Ar, Kr, and Xe) are puffed and nongases are introduced through laser ablation into the FTU tokamak plasma. The charge state distributions and total density of these impurities are determined from spatial scans of several photometrically calibrated vacuum ultraviolet and x-ray spectrographs (3–1600 Å), the multiple ionization state transport code (MIST) and a collisional radiative model. The radiative power losses were measured with bolometry, and the emissivity profiles were measured by a visible bremsstrahlung array. The ionization balance, excitation physics, and the radiative cooling curves are computed from the Hebrew University Lawrence Livermore atomic code (HULLAC) and are benchmarked by these experiments. (Supported by U.S. DOE Grant No. DE-FG02-86ER53214 at JHU and Contract No. W-7405-ENG-48 at LLNL.) copyright 1999 American Institute of Physics

  15. Benchmarking atomic physics models for magnetically confined fusion plasma physics experiments

    Science.gov (United States)

    May, M. J.; Finkenthal, M.; Soukhanovskii, V.; Stutman, D.; Moos, H. W.; Pacella, D.; Mazzitelli, G.; Fournier, K.; Goldstein, W.; Gregory, B.

    1999-01-01

    In present magnetically confined fusion devices, high and intermediate Z impurities are either puffed into the plasma for divertor radiative cooling experiments or are sputtered from the high Z plasma facing armor. The beneficial cooling of the edge as well as the detrimental radiative losses from the core of these impurities can be properly understood only if the atomic physics used in the modeling of the cooling curves is very accurate. To this end, a comprehensive experimental and theoretical analysis of some relevant impurities is undertaken. Gases (Ne, Ar, Kr, and Xe) are puffed and nongases are introduced through laser ablation into the FTU tokamak plasma. The charge state distributions and total density of these impurities are determined from spatial scans of several photometrically calibrated vacuum ultraviolet and x-ray spectrographs (3-1600 Å), the multiple ionization state transport code (MIST) and a collisional radiative model. The radiative power losses were measured with bolometry, and the emissivity profiles were measured by a visible bremsstrahlung array. The ionization balance, excitation physics, and the radiative cooling curves are computed from the Hebrew University Lawrence Livermore atomic code (HULLAC) and are benchmarked by these experiments. (Supported by U.S. DOE Grant No. DE-FG02-86ER53214 at JHU and Contract No. W-7405-ENG-48 at LLNL.)

  16. Preliminary assessment of Geant4 HP models and cross section libraries by reactor criticality benchmark calculations

    DEFF Research Database (Denmark)

    Cai, Xiao-Xiao; Llamas-Jansa, Isabel; Mullet, Steven;

    2013-01-01

    Geant4 is an open source general purpose simulation toolkit for particle transportation in matter. Since the extension of the thermal scattering model in Geant4.9.5 and the availability of the IAEA HP model cross section libraries, it is now possible to extend the application area of Geant4 to re...

  17. Developing a reference-model for benchmarking: performance improvement in operation and maintenance

    OpenAIRE

    El-Wardani, Riad

    2012-01-01

    Statoil has a major responsibility for “driving simplification and improvement initiatives” by relying on tools such as benchmarking. The aim is to drive business performance based on best practice rather than on compliance. To date, the full potential of benchmarking has not been realized, since the concept is not easy to define, let alone follow up. A great deal of knowledge and practice remains hidden in the Statoil system that can be effectively used to drive performance based on effective ...

  18. Thermochemistry of organic reactions in microporous oxides by atomistic simulations: benchmarking against periodic B3LYP.

    Science.gov (United States)

    Bleken, Francesca; Svelle, Stian; Lillerud, Karl Petter; Olsbye, Unni; Arstad, Bjørnar; Swang, Ole

    2010-07-15

    The methylation of ethene by methyl chloride and methanol in the microporous materials SAPO-34 and SSZ-13 has been studied using different periodic atomistic modeling approaches based on density functional theory. The RPBE functional, which earlier has been used successfully in studies of surface reactions on metals, fails to yield a qualitatively correct description of the transition states under study. Employing B3LYP as functional gives results in line with experimental data: (1) Methanol is adsorbed more strongly than methyl chloride to the acid site. (2) The activation energies for the methylation of ethene are slightly lower for SSZ-13. Furthermore, the B3LYP activation energies are lower for methyl chloride than for methanol. PMID:20557090

  19. Local Job Accessibility Measurement: When the Model Makes the Results. Methodological Contribution and Empirical Benchmarking on the Paris Region

    OpenAIRE

    Matthieu Bunel; Elisabeth Tovar

    2012-01-01

    This paper focuses on local job accessibility measurement. We propose an original model that uses national exhaustive micro data and allows for i) a full estimation of job availability according to an extensive set of individual characteristics, ii) a full appraisal of job competition on the labour market and iii) a full control of frontier effects. By matching several exhaustive micro data sources on the Paris region municipalities, we compare the results produced by this benchmark model to ...

  20. Proton Exchange Membrane Fuel Cell Engineering Model Powerplant. Test Report: Benchmark Tests in Three Spatial Orientations

    Science.gov (United States)

    Loyselle, Patricia; Prokopius, Kevin

    2011-01-01

    Proton exchange membrane (PEM) fuel cell technology is the leading candidate to replace the aging alkaline fuel cell technology, currently used on the Shuttle, for future space missions. This test effort marks the final phase of a 5-yr development program that began under the Second Generation Reusable Launch Vehicle (RLV) Program, transitioned into the Next Generation Launch Technologies (NGLT) Program, and continued under Constellation Systems in the Exploration Technology Development Program. Initially, the engineering model (EM) powerplant was evaluated with respect to its performance as compared to acceptance tests carried out at the manufacturer. This was to determine the sensitivity of the powerplant performance to changes in test environment. In addition, a series of tests were performed with the powerplant in the original standard orientation. This report details the continuing EM benchmark test results in three spatial orientations as well as extended duration testing in the mission profile test. The results from these tests verify the applicability of PEM fuel cells for future NASA missions. The specifics of these different tests are described in the following sections.

  1. ASPECTS ABOUT SIMULATED MODEL TRUSTINESS

    OpenAIRE

    CRISAN DANIELA ALEXANDRA; STANICA JUSTINA LAVINIA; DESPA RADU; COCULESCU CRISTINA

    2009-01-01

    Nowadays, thanks to the computing capabilities that electronic computers offer, namely large memory volume and high computing speed, modeling methods are improving, with complex-system modeling using simulation techniques playing an important role. These o

  2. Bridging experiments, models and simulations

    DEFF Research Database (Denmark)

    Carusi, Annamaria; Burrage, Kevin; Rodríguez, Blanca

    2012-01-01

    This study aims at informing strategies for validation by elucidating the complex interrelations among experiments, models, and simulations in cardiac electrophysiology. We describe the processes, data, and knowledge involved in the construction of whole ventricular multiscale models of cardiac...

  3. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
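
    The essence of such analytic verification can be shown on the simplest possible case: a one-group, infinite-medium problem whose multiplication factor has the closed form k_inf = nu * Sigma_f / Sigma_a. The sketch below only illustrates the verification idea; it is not an MCNP benchmark, and the cross sections are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
nu, sigma_f, sigma_c = 2.5, 0.06, 0.04   # made-up one-group cross sections
sigma_a = sigma_f + sigma_c
k_analytic = nu * sigma_f / sigma_a      # exact answer: 1.5

# Monte Carlo: each absorbed neutron causes fission with prob. sigma_f/sigma_a,
# yielding nu secondaries; k is the mean number of secondaries per neutron.
n_hist = 10**6
fission = rng.random(n_hist) < sigma_f / sigma_a
k_mc = nu * fission.mean()
k_err = nu * fission.std(ddof=1) / np.sqrt(n_hist)

print(f"analytic k_inf = {k_analytic:.5f}")
print(f"MC k_inf       = {k_mc:.5f} +/- {k_err:.5f}")
```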

  4. Simulation - modeling - experiment; Simulation - modelisation - experience

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    After two workshops held in 2001 on the same topics, and in order to take stock of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, and the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); perturbation methods and sensitivity analysis of nuclear data in reactor physics, application to the VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F

  5. Comparative Benchmark Dose Modeling as a Tool to Make the First Estimate of Safe Human Exposure Levels to Lunar Dust

    Science.gov (United States)

    James, John T.; Lam, Chiu-wing; Scully, Robert R.

    2013-01-01

    Brief exposures of Apollo astronauts to lunar dust occasionally elicited upper respiratory irritation; however, no limits were ever set for prolonged exposure to lunar dust. Habitats for exploration, whether mobile or fixed, must be designed to limit human exposure to lunar dust to safe levels. We have used a new technique we call Comparative Benchmark Dose Modeling to estimate safe exposure limits for lunar dust collected during the Apollo 14 mission.
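
    The generic benchmark-dose idea, fit a dose-response model and invert it at a chosen benchmark response, can be sketched as follows. The data, the Hill-type model form, and the 10% benchmark response are placeholders for illustration, not the Apollo 14 analysis.

```python
import numpy as np
from scipy.optimize import brentq, curve_fit

# Synthetic placeholder data: dose (mg/m^3) vs. fraction of subjects affected.
doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
response = np.array([0.02, 0.05, 0.10, 0.22, 0.45, 0.80])

def hill(d, background, dmax, d50, n):
    """Hill-type dose-response curve (an assumed model form)."""
    return background + (dmax - background) * d**n / (d50**n + d**n)

params, _ = curve_fit(hill, doses, response, p0=[0.02, 0.9, 2.0, 1.5],
                      bounds=([0.0, 0.0, 1e-3, 0.5], [0.5, 1.0, 20.0, 5.0]))

bmr = 0.10                                    # 10% extra risk over background
background = params[0]
target = background + bmr * (1.0 - background)
bmd = brentq(lambda d: hill(d, *params) - target, 1e-6, doses.max())
print(f"benchmark dose at BMR={bmr:.0%}: {bmd:.2f} mg/m^3")
```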

  6. Benchmark Comparison of Dual- and Quad-Core Processor Linux Clusters with Two Global Climate Modeling Workloads

    Science.gov (United States)

    McGalliard, James

    2008-01-01

    This viewgraph presentation details the science and systems environments that the NASA High-End Computing Program serves, including a discussion of the workload involved in global climate modeling. The Goddard Earth Observing System Model, Version 5 (GEOS-5) is a system of models integrated using the Earth System Modeling Framework (ESMF). The GEOS-5 system was used for the benchmark tests, and the results of the tests are shown and discussed. Tests were also run for the cubed-sphere system; the results of these tests are also shown.

  7. Benchmarking Asteroid-Deflection Experiment

    Science.gov (United States)

    Remington, Tane; Bruck Syal, Megan; Owen, John Michael; Miller, Paul L.

    2016-10-01

    An asteroid impacting Earth could have devastating consequences. In preparation to deflect or disrupt one before it reaches Earth, it is imperative to have modeling capabilities that adequately simulate the deflection actions. Code validation is key to ensuring full confidence in simulation results used in an asteroid-mitigation plan. We are benchmarking well-known impact experiments using Spheral, an adaptive smoothed-particle hydrodynamics code, to validate our modeling of asteroid deflection. We describe our simulation results, compare them with experimental data, and discuss what we have learned from our work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-695540

  8. Automatic programming of simulation models

    Science.gov (United States)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1988-01-01

    The objective of automatic programming is to improve the overall environment for describing the program. This improved environment is realized by a reduction in the amount of detail that the programmer needs to know and is exposed to. Furthermore, this improved environment is achieved by a specification language that is more natural to the user's problem domain and to the user's way of thinking and looking at the problem. The goal of this research is to apply the concepts of automatic programming (AP) to modeling discrete event simulation system. Specific emphasis is on the design and development of simulation tools to assist the modeler define or construct a model of the system and to then automatically write the corresponding simulation code in the target simulation language, GPSS/PC. A related goal is to evaluate the feasibility of various languages for constructing automatic programming simulation tools.
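
    The core idea, generating target-language simulation code from a declarative model specification, can be shown in miniature. The paper's target language is GPSS/PC; the toy generator below emits Python instead, and the specification format is invented for illustration.

```python
# A declarative spec of a single-server queue (the format is hypothetical).
spec = {"arrival_mean": 5.0, "service_mean": 4.0, "n_customers": 1000}

TEMPLATE = '''
import random
random.seed(0)
clock, server_free_at, waits = 0.0, 0.0, []
for _ in range({n_customers}):
    clock += random.expovariate(1.0 / {arrival_mean})   # next arrival
    start = max(clock, server_free_at)                  # wait if server busy
    waits.append(start - clock)
    server_free_at = start + random.expovariate(1.0 / {service_mean})
print("mean wait:", sum(waits) / len(waits))
'''

generated = TEMPLATE.format(**spec)   # the "automatic programming" step
exec(generated)                       # run the generated queue model
```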

  9. A benchmark test suite for proton transfer energies and its use to test electronic structure model chemistries

    International Nuclear Information System (INIS)

    Highlights: ► We present benchmark calculations of energies of complexation and barriers for proton transfer to water. ► Benchmark calculations are used to test methods suitable for application to large and complex systems. ► Methods tested include hybrid meta-GGAs, M06-L, PW6B95, SOGGA11, MP2, SCC-DFTB, PMO, and NDDO. - Abstract: We present benchmark calculations of nine selected points on potential energy surfaces describing proton transfer processes in three model systems, H5O2+, CH3OH…H+…OH2, and CH3COOH…OH2. The calculated relative energies of these geometries are compared to those calculated by various wave function and density functional methods, including the polarized molecular orbital (PMO) model recently developed in our research group and other semiempirical molecular orbital methods. We found that the SCC-DFTB and PMO methods (the latter available so far only for molecules consisting of only O and H and therefore only for the first of the three model systems) give results that are, on average, within 2 kcal/mol of the benchmark results. Other semiempirical molecular orbital methods have mean unsigned errors (MUEs) of 3–8 kcal/mol, local density functionals have MUEs in the range 0.7–3.7 kcal/mol, and hybrid density functionals have MUEs of only 0.3–1.0 kcal/mol, with the best density functional performance obtained by hybrid meta-GGAs, especially M06 and PW6B95.

  10. Modeling and Simulation with INS.

    Science.gov (United States)

    Roberts, Stephen D.; And Others

    INS, the Integrated Network Simulation language, puts simulation modeling into a network framework and automatically performs such programming activities as placing the problem into a next event structure, coding events, collecting statistics, monitoring status, and formatting reports. To do this, INS provides a set of symbols (nodes and branches)…

  11. Comparative modeling and benchmarking data sets for human histone deacetylases and sirtuin families.

    Science.gov (United States)

    Xia, Jie; Tilahun, Ermias Lemma; Kebede, Eyob Hailu; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2015-02-23

    Histone deacetylases (HDACs) are an important class of drug targets for the treatment of cancers, neurodegenerative diseases, and other types of diseases. Virtual screening (VS) has become a fairly effective approach for the discovery of novel and highly selective histone deacetylase inhibitors (HDACIs). To facilitate the process, we constructed maximal unbiased benchmarking data sets for HDACs (MUBD-HDACs) using our recently published methods that were originally developed for building unbiased benchmarking sets for ligand-based virtual screening (LBVS). The MUBD-HDACs cover all four classes including Class III (the Sirtuins family) and 14 HDAC isoforms, and are composed of 631 inhibitors and 24609 unbiased decoys. The ligand sets have been validated extensively as chemically diverse, while the decoy sets were shown to be property-matched with the ligands and maximally unbiased in terms of "artificial enrichment" and "analogue bias". We also conducted comparative studies with DUD-E and DEKOIS 2.0 sets against the HDAC2 and HDAC8 targets and demonstrate that our MUBD-HDACs are unique in that they can be applied unbiasedly to both LBVS and SBVS approaches. In addition, we defined a novel metric, i.e. NLBScore, to detect the "2D bias" and "LBVS favorable" effect within the benchmarking sets. In summary, MUBD-HDACs are the only comprehensive and maximally unbiased benchmark data sets for HDACs (including Sirtuins) that are available so far. MUBD-HDACs are freely available at http://www.xswlab.org/ . PMID:25633490

  12. Wake modeling and simulation

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Madsen Aagaard, Helge; Larsen, Torben J.;

    We present a consistent, physically based theory for the wake meandering phenomenon, which we consider of crucial importance for the overall description of wind turbine loadings in wind farms. In its present version the model is confined to single wake situations. The model philosophy does, however, have the potential to include also mutual wake interaction phenomena. The basic conjecture behind the dynamic wake meandering (DWM) model is that wake transportation in the atmospheric boundary layer is driven by the large-scale lateral and vertical turbulence components. Based on this conjecture… This capability is a direct and attractive consequence of the model being based on the underlying physical process, and it potentially opens for optimization of wind farm topology, of wind farm operation as well as of control strategies for the individual turbine. To establish an integrated modeling tool, the DWM…

  13. TREAT Modeling and Simulation Strategy

    International Nuclear Information System (INIS)

    This report summarizes a four-phase process used to describe the strategy in developing modeling and simulation software for the Transient Reactor Test Facility. The four phases of this research and development task are identified as (1) full core transient calculations with feedback, (2) experiment modeling, (3) full core plus experiment simulation and (4) quality assurance. The document describes the four phases, the relationship between these research phases, and anticipated needs within each phase.

  14. TREAT Modeling and Simulation Strategy

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, Mark David [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a four-phase process used to describe the strategy in developing modeling and simulation software for the Transient Reactor Test Facility. The four phases of this research and development task are identified as (1) full core transient calculations with feedback, (2) experiment modeling, (3) full core plus experiment simulation and (4) quality assurance. The document describes the four phases, the relationship between these research phases, and anticipated needs within each phase.

  15. Dark Matter Benchmark Models for Early LHC Run-2 Searches: Report of the ATLAS/CMS Dark Matter Forum

    CERN Document Server

    Abercrombie, Daniel; Akilli, Ece; Maestre, Juan Alcaraz; Allen, Brandon; Gonzalez, Barbara Alvarez; Andrea, Jeremy; Arbey, Alexandre; Azuelos, Georges; Azzi, Patrizia; Backović, Mihailo; Bai, Yang; Banerjee, Swagato; Beacham, James; Belyaev, Alexander; Boveia, Antonio; Brennan, Amelia Jean; Buchmueller, Oliver; Buckley, Matthew R; Busoni, Giorgio; Buttignol, Michael; Cacciapaglia, Giacomo; Caputo, Regina; Carpenter, Linda; Castro, Nuno Filipe; Ceballos, Guillelmo Gomez; Cheng, Yangyang; Chou, John Paul; Gonzalez, Arely Cortes; Cowden, Chris; D'Eramo, Francesco; De Cosa, Annapaola; De Gruttola, Michele; De Roeck, Albert; De Simone, Andrea; Deandrea, Aldo; Demiragli, Zeynep; DiFranzo, Anthony; Doglioni, Caterina; Pree, Tristan du; Erbacher, Robin; Erdmann, Johannes; Fischer, Cora; Flaecher, Henning; Fox, Patrick J; Fuks, Benjamin; Genest, Marie-Helene; Gomber, Bhawna; Goudelis, Andreas; Gramling, Johanna; Gunion, John; Hahn, Kristian; Haisch, Ulrich; Harnik, Roni; Harris, Philip C; Hoepfner, Kerstin; Hoh, Siew Yan; Hsu, Dylan George; Hsu, Shih-Chieh; Iiyama, Yutaro; Ippolito, Valerio; Jacques, Thomas; Ju, Xiangyang; Kahlhoefer, Felix; Kalogeropoulos, Alexis; Kaplan, Laser Seymour; Kashif, Lashkar; Khoze, Valentin V; Khurana, Raman; Kotov, Khristian; Kovalskyi, Dmytro; Kulkarni, Suchita; Kunori, Shuichi; Kutzner, Viktor; Lee, Hyun Min; Lee, Sung-Won; Liew, Seng Pei; Lin, Tongyan; Lowette, Steven; Madar, Romain; Malik, Sarah; Maltoni, Fabio; Perez, Mario Martinez; Mattelaer, Olivier; Mawatari, Kentarou; McCabe, Christopher; Megy, Théo; Morgante, Enrico; Mrenna, Stephen; Narayanan, Siddharth M; Nelson, Andy; Novaes, Sérgio F; Padeken, Klaas Ole; Pani, Priscilla; Papucci, Michele; Paulini, Manfred; Paus, Christoph; Pazzini, Jacopo; Penning, Björn; Peskin, Michael E; Pinna, Deborah; Procura, Massimiliano; Qazi, Shamona F; Racco, Davide; Re, Emanuele; Riotto, Antonio; Rizzo, Thomas G; Roehrig, Rainer; Salek, David; Pineda, Arturo Sanchez; Sarkar, Subir; Schmidt, Alexander; Schramm, Steven Randolph; Shepherd, William; Singh, Gurpreet; Soffi, Livia; Srimanobhas, Norraphat; Sung, Kevin; Tait, Tim M P; Theveneaux-Pelzer, Timothee; Thomas, Marc; Tosi, Mia; Trocino, Daniele; Undleeb, Sonaina; Vichi, Alessandro; Wang, Fuquan; Wang, Lian-Tao; Wang, Ren-Jie; Whallon, Nikola; Worm, Steven; Wu, Mengqing; Wu, Sau Lan; Yang, Hongtao; Yang, Yong; Yu, Shin-Shan; Zaldivar, Bryan; Zanetti, Marco; Zhang, Zhiqing; Zucchetta, Alberto

    2015-01-01

    This document is the final report of the ATLAS-CMS Dark Matter Forum, a forum organized by the ATLAS and CMS collaborations with the participation of experts on theories of Dark Matter, to select a minimal basis set of dark matter simplified models that should support the design of the early LHC Run-2 searches. A prioritized, compact set of benchmark models is proposed, accompanied by studies of the parameter space of these models and a repository of generator implementations. This report also addresses how to apply the Effective Field Theory formalism for collider searches and presents the results of such interpretations.

  16. Assessment of Static Delamination Propagation Capabilities in Commercial Finite Element Codes Using Benchmark Analysis

    Science.gov (United States)

    Orifici, Adrian C.; Krueger, Ronald

    2010-01-01

    With capabilities for simulating delamination growth in composite materials becoming available, the need for benchmarking and assessing these capabilities is critical. In this study, benchmark analyses were performed to assess the delamination propagation simulation capabilities of the VCCT implementations in Marc™ and MD Nastran™. Benchmark delamination growth results for Double Cantilever Beam, Single Leg Bending and End Notched Flexure specimens were generated using a numerical approach. This numerical approach was developed previously, and involves comparing results from a series of analyses at different delamination lengths to a single analysis with automatic crack propagation. Specimens were analyzed with three-dimensional and two-dimensional models, and compared with previous analyses using Abaqus®. The results demonstrated that the VCCT implementation in Marc™ and MD Nastran™ was capable of accurately replicating the benchmark delamination growth results and that the use of the numerical benchmarks offers advantages over benchmarking using experimental and analytical results.

  17. Stochastic modeling analysis and simulation

    CERN Document Server

    Nelson, Barry L

    1995-01-01

    A coherent introduction to the techniques for modeling dynamic stochastic systems, this volume also offers a guide to the mathematical, numerical, and simulation tools of systems analysis. Suitable for advanced undergraduates and graduate-level industrial engineers and management science majors, it proposes modeling systems in terms of their simulation, regardless of whether simulation is employed for analysis. Beginning with a view of the conditions that permit a mathematical-numerical analysis, the text explores Poisson and renewal processes, Markov chains in discrete and continuous time, se

  18. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    International Nuclear Information System (INIS)

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the 235U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments

  19. Model reduction for circuit simulation

    CERN Document Server

    Hinze, Michael; Maten, E Jan W Ter

    2011-01-01

    Simulation based on mathematical models plays a major role in computer aided design of integrated circuits (ICs). Decreasing structure sizes, increasing packing densities and driving frequencies require the use of refined mathematical models, and to take into account secondary, parasitic effects. This leads to very high dimensional problems which nowadays require simulation times too large for the short time-to-market demands in industry. Modern Model Order Reduction (MOR) techniques present a way out of this dilemma in providing surrogate models which keep the main characteristics of the devi

  20. Monte Carlo simulations and benchmark measurements on the response of TE(TE) and Mg(Ar) ionization chambers in photon, electron and neutron beams

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Yi-Chun [Health Physics Division, Institute of Nuclear Energy Research, Taoyuan County, Taiwan (China); Huang, Tseng-Te, E-mail: huangtt@iner.gov.tw [Health Physics Division, Institute of Nuclear Energy Research, Taoyuan County, Taiwan (China); Liu, Yuan-Hao [Nuclear Science and Technology Development Center, National Tsing Hua University, Hsinchu City, Taiwan (China); Chen, Wei-Lin [Institute of Nuclear Engineering and Science, National Tsing Hua University, Hsinchu City, Taiwan (China); Chen, Yen-Fu [Atomic Energy Council, New Taipei City, Taiwan (China); Wu, Shu-Wei [Dept. of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan (China); Nievaart, Sander [Institute for Energy, Joint Research Centre, European Commission, Petten (Netherlands); Jiang, Shiang-Huei [Dept. of Engineering and System Science, National Tsing Hua University, Hsinchu, Taiwan (China)

    2015-06-01

    reached 7.8–16.5% below 120 kVp X-ray beams. In this study, we were especially interested in BNCT doses, where the low-energy photon contribution is minor; the MCNP model is recognized as the most suitable to simulate the broadly distributed photon–electron and neutron energy responses of the paired ICs. Also, MCNP provides the best prediction of BNCT source adjustment via the detector's neutron and photon responses. - Highlights: • We established an optimal T2 & M2 paired-IC model in benchmark x, γ, e and n fields. • We used MCNP, EGSnrc, FLUKA or GEANT4 for IC current simulations. • In keV photon fields, MCNP underestimated the M2 response but accurately estimated T2. • On detector response, we commented on source component adjustment. • For BNCT, MCNP still provides the best prediction of neutron and photon responses.

  1. Global and local scale flood discharge simulations in the Rhine River basin for flood risk reduction benchmarking in the Flagship Project

    Science.gov (United States)

    Gädeke, Anne; Gusyev, Maksym; Magome, Jun; Sugiura, Ai; Cullmann, Johannes; Takeuchi, Kuniyoshi

    2015-04-01

    The global flood risk assessment is a prerequisite to set global measurable targets of the post-Hyogo Framework for Action (HFA) that mobilize international cooperation and national coordination towards disaster risk reduction (DRR), and it requires the establishment of a uniform flood risk assessment methodology on various scales. To address these issues, the International Flood Initiative (IFI) has initiated a Flagship Project, launched in 2013, to support flood risk reduction benchmarking at global, national and local levels. In the Flagship Project road map, it is planned to identify the original risk (1), to identify the reduced risk (2), and to facilitate the risk reduction actions (3). In order to achieve this goal at global, regional and local scales, international research collaboration is absolutely necessary, involving domestic and international institutes, academia and research networks such as UNESCO International Centres. The joint collaboration by ICHARM and BfG was the first attempt of this kind; it produced the first-step (1a) results on flood discharge estimates, with inundation maps under way. As a result of this collaboration, we demonstrate the outcomes of the first step of the IFI Flagship Project to identify flood hazard in the Rhine river basin on the global and local scale. In our assessment, we utilized a distributed hydrological Block-wise TOP (BTOP) model on 20-km and 0.5-km scales with local precipitation and temperature input data between 1980 and 2004. We utilized the existing 20-km BTOP model, which is applied globally, and constructed a local-scale 0.5-km BTOP model for the Rhine River basin. Both the calibrated 20-km and 0.5-km BTOP models had similar statistical performance and reproduced observed flood river discharges, especially for the 1993 and 1995 floods. From the 20-km and 0.5-km BTOP simulations, the flood discharges of the selected return period were estimated using flood frequency analysis and were comparable to
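
    The flood-frequency step mentioned above typically fits an extreme-value distribution to annual maximum discharges and inverts it at the desired return period. A minimal sketch using a Gumbel fit on synthetic data (not Rhine observations, and not the BTOP toolchain):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic 25-year annual-maximum series (m^3/s), standing in for 1980-2004.
annual_max = stats.gumbel_r.rvs(loc=4000.0, scale=900.0, size=25,
                                random_state=rng)

loc, scale = stats.gumbel_r.fit(annual_max)   # fit the Gumbel distribution
for T in (10, 50, 100):
    # T-year flood: the quantile with annual exceedance probability 1/T.
    q_T = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
    print(f"{T:>3}-year flood: {q_T:,.0f} m^3/s")
```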

  2. Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2009-01-01

    This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial metam

  3. Modelling and Simulation: An Overview

    NARCIS (Netherlands)

    M.J. McAleer (Michael); F. Chan (Felix); L. Oxley (Les)

    2013-01-01

    The papers in this special issue of Mathematics and Computers in Simulation cover the following topics: improving judgmental adjustment of model-based forecasts, whether forecast updates are progressive, on a constrained mixture vector autoregressive model, whether all estimators are bor

  4. Parallel Benchmark for Evaluating Parallel Simulation Engine

    Institute of Scientific and Technical Information of China (English)

    Wu Zhimin; Lü Huiwei; Chen Mingyu

    2013-01-01

    SimK is a parallel discrete-event simulation engine developed by the State Key Laboratory of Computer Architecture at the Institute of Computing Technology, Chinese Academy of Sciences. Based on the released SimK 1.0, we extended the task-partitioning function and the blocking control in the synchronization process, and released version 1.1 of SimK. Because a benchmark dedicated to measuring SimK's simulation performance was lacking and no comprehensive evaluation data existed, we first proposed design rules for benchmarks for parallel simulation engines and then introduced an example, "PassBall". We used it to evaluate SimK's weak and strong scalability, as well as its strong scalability under unbalanced workload conditions, and compared the speed-up ratios between balanced and unbalanced workloads in the strong-scalability test. The influence of the simulated computation workload on the speed-up ratio was also explored, and the applicability of the benchmark was discussed. Our experiments lead to the following conclusions: (a) our example "PassBall" can serve as a benchmark for SimK as well as for other parallel simulation engines; (b) SimK has favorable strong and weak scalability; (c) both load balance and the simulated computation workload affect the speed-up ratio.
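
    PassBall itself exercises SimK's parallel engine; as a rough single-process analogue of the workload it generates (a token passed among components through an event queue), one might write:

```python
import heapq
import random

# Toy "pass the ball" workload: one token circulates among components via a
# discrete-event queue. This only mirrors the event pattern, not SimK itself.
random.seed(0)
N_COMPONENTS, N_EVENTS = 8, 100_000

events = [(0.0, 0)]                  # (timestamp, component holding the ball)
handled, t = 0, 0.0
while events and handled < N_EVENTS:
    t, comp = heapq.heappop(events)
    handled += 1
    receiver = random.randrange(N_COMPONENTS)    # pass to a random component
    heapq.heappush(events, (t + random.expovariate(1.0), receiver))

print(f"simulated time after {handled:,} events: {t:.1f}")
```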

  5. A Bootstrap Approach of Benchmarking Organizational Maturity Model of Software Product With Educational Maturity Model

    Directory of Open Access Journals (Sweden)

    R.Manjula

    2012-06-01

    Full Text Available Software product line engineering is an inter-disciplinary concept. It spans the dimensions of business, architecture, process, and organization. Similarly, education system engineering is also an inter-disciplinary concept, which spans the dimensions of academics, infrastructure, facilities, administration, etc. Some of the potential benefits of this approach include continuous improvements in system quality and adherence to global standards. The increasing competency in the IT and educational sectors necessitates a process maturity evaluation methodology. Accordingly, this paper presents an organizational maturity model for education systems for evaluating the maturity of multi-dimensional factors and attributes of an education system. Assessment questionnaires and a rating methodology comprise the framework of this educational maturity model. The objective and design of the questionnaires are to collect information about the education system engineering process from the multiple perspectives of academics, infrastructure, administration, facilities, etc. Furthermore, we conducted a case study and reported the assessment results using the organizational maturity model presented in this paper.

  6. Vehicle dynamics modeling and simulation

    CERN Document Server

    Schramm, Dieter; Bardini, Roberto

    2014-01-01

    The authors examine in detail the fundamentals and mathematical descriptions of the dynamics of automobiles. In this context different levels of complexity will be presented, starting with basic single-track models up to complex three-dimensional multi-body models. A particular focus is on the process of establishing mathematical models on the basis of real cars and the validation of simulation results. The methods presented are explained in detail by means of selected application scenarios.

  7. The Benchmarking of the Government to Employee (G2E) Technology Development: Theoretical Aspects of the Model Construction

    Directory of Open Access Journals (Sweden)

    Alvydas Baležentis

    2012-07-01

    Full Text Available Purpose—To fill the gap in the currently very rare discussion on an important topic of e-government research—the design, development and usage of information and communication technologies for human resource management in the public sector—and to formulate theoretical benchmarks for the development of the government to employee (G2E) model.Design/methodology/approach—A literature analysis of mostly empirical research from the fields of government to government (G2G), government to citizen (G2C) and business to employee (B2E) was made. With the help of the analogy method, possible elements of the model were described.Findings—The analysis of the literature gave a clearer understanding of the G2E model and its possible stages and elements. The analogy method helps to recognize which parts of other models can be adopted for the G2E model. The results of the review of literature on this theme and the given conclusions provide a strong background for the G2E research roadmap on the international as well as national level.Research limitations/implications—The article is based on a theoretical analysis of theoretical and empirical research.Practical implications, originality/value—This article fills the gap in the literature and suggests future research directions.Keywords—government to employee (G2E), government to government (G2G), government to citizen (G2C), business to employee (B2E), human resource management, public sector, e-government, benchmarking.Research type—viewpoint.

  8. Active vibration control of nonlinear benchmark buildings

    Institute of Scientific and Technical Information of China (English)

    ZHOU Xing-de; CHEN Dao-zheng

    2007-01-01

    Existing nonlinear model reduction methods are not suited to nonlinear benchmark buildings, as their vibration equations belong to a non-affine system. Meanwhile, controllers designed directly by nonlinear control strategies have a high order and are difficult to apply in practice. Therefore, a new active vibration control approach that fits nonlinear buildings is proposed. The idea of the proposed approach is based on model identification and structural model linearization, exerting the control force on the built model according to the force action principle. The proposed approach is more practical, as the built model can be reduced by the balanced reduction method based on the empirical Gramian matrix. A three-story benchmark structure is presented, and the simulation results illustrate that the proposed method is viable for civil engineering structures.
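
    The balanced reduction step can be illustrated with the classical Lyapunov-Gramian variant of balanced truncation; the empirical-Gramian version replaces the Lyapunov solves with Gramians estimated from simulation or response data. The 4-state system below is an arbitrary stable example, not the benchmark building model.

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

# Arbitrary stable LTI example x' = Ax + Bu, y = Cx (not the building model).
A = np.array([[-1.0,  0.5,  0.0,  0.0],
              [-0.5, -1.0,  0.0,  0.0],
              [ 0.0,  0.0, -5.0,  1.0],
              [ 0.0,  0.0, -1.0, -5.0]])
B = np.array([[1.0], [0.0], [1.0], [0.0]])
C = np.array([[1.0, 1.0, 0.1, 0.1]])

Wc = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian

Lc = cholesky(Wc, lower=True)
Lo = cholesky(Wo, lower=True)
U, s, Vt = svd(Lo.T @ Lc)                       # s = Hankel singular values

r = 2                                           # keep the 2 dominant states
T = Lc @ Vt.T[:, :r] / np.sqrt(s[:r])           # balancing projection
Ti = (U[:, :r] / np.sqrt(s[:r])).T @ Lo.T
Ar, Br, Cr = Ti @ A @ T, Ti @ B, C @ T          # reduced-order model
print("Hankel singular values:", np.round(s, 4))
```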

  9. Benchmark experiments with global climate models applicable to extra-solar gas giant planets in the shallow atmosphere approximation

    CERN Document Server

    Bending, V L; Kolb, U

    2012-01-01

    The growing field of exoplanetary atmospheric modelling has seen little work on standardised benchmark tests for its models, limiting understanding of the dependence of results on specific models and conditions. With spatially resolved observations as yet difficult to obtain, such a test is invaluable. Although an intercomparison test for models of tidally locked gas giant planets has previously been suggested and carried out, the data provided were limited in terms of comparability. Here, the shallow PUMA model is subjected to such a test, and detailed statistics produced to facilitate comparison, with both time means and the associated standard deviations displayed, removing the time dependence and providing a measure of the variability. Model runs have been analysed to determine the variability between resolutions, and the effect of resolution on the energy spectra studied. Superrotation is a robust and reproducible feature at all resolutions.

  10. Stochastic models: theory and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
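
    One standard recipe of the kind the report describes is the spectral-representation method for sampling a stationary Gaussian process: superpose cosines whose amplitudes follow the target power spectral density and whose phases are independent and uniform. A minimal sketch with an assumed example spectrum:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_stationary_gaussian(t, psd, w_max=20.0, n_modes=512):
    """One sample path of a zero-mean stationary Gaussian process with
    one-sided power spectral density `psd`, via random-phase cosines."""
    dw = w_max / n_modes
    w = (np.arange(n_modes) + 0.5) * dw            # frequency grid
    phi = rng.uniform(0.0, 2.0 * np.pi, n_modes)   # independent phases
    amp = np.sqrt(2.0 * psd(w) * dw)
    return (amp[:, None] * np.cos(w[:, None] * t + phi[:, None])).sum(axis=0)

psd = lambda w: 1.0 / (1.0 + w**2)   # example target spectrum (assumed)
t = np.linspace(0.0, 10.0, 1001)
x = sample_stationary_gaussian(t, psd)
```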

  11. Benchmarking nuclear models of FLUKA and GEANT4 for carbon ion therapy

    CERN Document Server

    Bohlen, TT; Quesada, J M; Bohlen, T T; Cerutti, F; Gudowska, I; Ferrari, A; Mairani, A

    2010-01-01

    As carbon ions, at therapeutic energies, penetrate tissue, they undergo inelastic nuclear reactions and give rise to significant yields of secondary fragment fluences. Therefore, an accurate prediction of these fluences resulting from the primary carbon interactions is necessary in the patient's body in order to precisely simulate the spatial dose distribution and the resulting biological effect. In this paper, the performance of nuclear fragmentation models of the Monte Carlo transport codes, FLUKA and GEANT4, in tissue-like media and for an energy regime relevant for therapeutic carbon ions is investigated. The ability of these Monte Carlo codes to reproduce experimental data of charge-changing cross sections and integral and differential yields of secondary charged fragments is evaluated. For the fragment yields, the main focus is on the consideration of experimental approximations and uncertainties such as the energy measurement by time-of-flight. For GEANT4, the hadronic models G4BinaryLightIonReaction a...

  12. Benchmarking GEANT4 nuclear models for carbon-therapy at 95 MeV/A

    CERN Document Server

    Dudouet, J; Durand, D; Labalme, M

    2013-01-01

    In carbon-therapy, the interaction of the incoming beam with human tissues may lead to the production of a large amount of nuclear fragments and secondary light particles. An accurate estimation of the biological dose deposited into the tumor and the surrounding healthy tissues thus requires sophisticated simulation tools based on nuclear reaction models. The validity of such models requires intensive comparisons with as many sets of experimental data as possible. Up to now, a rather limited set of double differential carbon fragmentation cross sections have been measured in the energy range used in hadrontherapy (up to 400 MeV/A). However, new data have been recently obtained at intermediate energy (95 MeV/A). The aim of this work is to compare the reaction models embedded in the GEANT4 Monte Carlo toolkit with these new data. The strengths and weaknesses of each tested model, i.e. G4BinaryLightIonReaction, G4QMDReaction and INCL++, coupled to two different de-excitation models, i.e. the generalized evaporat...

  13. Effects of Secondary Circuit Modeling on Results of Pressurized Water Reactor Main Steam Line Break Benchmark Calculations with New Coupled Code TRAB-3D/SMABRE

    International Nuclear Information System (INIS)

    All of the three exercises of the Organization for Economic Cooperation and Development/Nuclear Regulatory Commission pressurized water reactor main steam line break (PWR MSLB) benchmark were calculated at VTT, the Technical Research Centre of Finland. For the first exercise, the plant simulation with point-kinetic neutronics, the thermal-hydraulics code SMABRE was used. The second exercise was calculated with the three-dimensional reactor dynamics code TRAB-3D, and the third exercise with the combination TRAB-3D/SMABRE. VTT has over ten years' experience of coupling neutronic and thermal-hydraulic codes, but this benchmark was the first time these two codes, both developed at VTT, were coupled together. The coupled code system is fast and efficient; the total computation time of the 100-s transient in the third exercise was 16 min on a modern UNIX workstation. The results of all the exercises are similar to those of the other participants. In order to demonstrate the effect of secondary circuit modeling on the results, three different cases were calculated. In case 1 there is no phase separation in the steam lines and no flow reversal in the aspirator. In case 2 the flow reversal in the aspirator is allowed, but there is no phase separation in the steam lines. Finally, in case 3 the drift-flux model is used for the phase separation in the steam lines, but the aspirator flow reversal is not allowed. With these two modeling variations, it is possible to cover a remarkably broad range of results. The maximum power level reached after the reactor trip varies from 534 to 904 MW, the range of the time of the power maximum being close to 30 s. Compared to the total calculated transient time of 100 s, the effect of the secondary side modeling is extremely important

  14. Benchmark studies of thermal jet mixing in SFRs using a two-jet model

    International Nuclear Information System (INIS)

    To guide the modeling, simulations and design of Sodium Fast Reactors (SFRs), we explore and compare the predictive capabilities of two numerical solvers, COMSOL and OpenFOAM, for the thermal jet mixing of two buoyant jets typical of the outlet flow from an SFR tube bundle. This process will help optimize ongoing experimental efforts at obtaining high-resolution data for verification and validation (V&V) of CFD codes as anticipated in next-generation nuclear systems. Using the k-ε turbulence models of both codes as reference, their ability to simulate the turbulence behavior in similar environments was first validated against single-jet experimental data reported in the literature. This study investigates the thermal mixing of two parallel jets having a temperature difference (hot-to-cold) ΔThc = 5 °C and 10 °C and velocity ratios Uc/Uh = 0.5 and 1. Results of the computed turbulent quantities due to convective mixing and the variations in the flow field along the axial position are presented. In addition, this study also evaluates the effect of the spacing ratio between jets in predicting the flow field and jet behavior in the near and far fields. (authors)

  15. Harmonic oscillator in heat bath: Exact simulation of time-lapse-recorded data and exact analytical benchmark statistics

    DEFF Research Database (Denmark)

    Nørrelykke, Simon F; Flyvbjerg, Henrik

    2011-01-01

    The stochastic dynamics of the damped harmonic oscillator in a heat bath is simulated with an algorithm that is exact for time steps of arbitrary size. Exact analytical results are given for correlation functions and power spectra in the form they acquire when computed from experimental time… to the extent that it is interpreted as a damped harmonic oscillator at finite temperature, such as an AFM cantilever. (iii) Three other models of fundamental interest are limiting cases of the damped harmonic oscillator at finite temperature; it consequently bridges their differences and describes the effects...
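
    The notion of an update rule that is exact for time steps of arbitrary size is easiest to see in the overdamped one-dimensional analogue, the Ornstein-Uhlenbeck process, whose Gaussian transition density can be sampled directly. A sketch with arbitrary parameters (the paper itself treats the full second-order oscillator):

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate_ou_exact(x0, gamma, D, dt, n_steps):
    """Exact sampling of dx = -gamma*x dt + sqrt(2D) dW at spacing dt:
    x(t+dt) given x(t) is Gaussian with known mean and variance, so there
    is no discretization error no matter how large dt is."""
    decay = np.exp(-gamma * dt)
    stddev = np.sqrt(D / gamma * (1.0 - decay**2))
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        x[i + 1] = x[i] * decay + stddev * rng.standard_normal()
    return x

path = simulate_ou_exact(x0=1.0, gamma=2.0, D=0.5, dt=0.1, n_steps=1000)
```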

  16. Using in-situ observations of atmospheric water vapor isotopes to benchmark isotope-enabled General Circulation Models and improve ice core paleo-climate reconstruction

    Science.gov (United States)

    Steen-Larsen, Hans Christian; Sveinbjörnsdottir, Arny; Masson-Delmotte, Valerie; Werner, Martin; Risi, Camille; Yoshimura, Kei

    2016-04-01

    We have since 2010 carried out in-situ continuous water vapor isotope observations on top of the Greenland Ice Sheet (3 seasons at NEEM), in Svalbard (1 year), in Iceland (4 years), and in Bermuda (4 years). The expansive dataset, containing high-accuracy and high-precision measurements of δ18O, δD, and the d-excess, allows us to validate and benchmark the treatment of the atmospheric hydrological cycle's processes in General Circulation Models using simulations nudged to reanalysis products. Recent findings from both Antarctica and Greenland have documented strong interaction between the snow surface isotopes and the near-surface atmospheric water vapor isotopes on diurnal to synoptic time scales. In fact, it has been shown that the snow surface isotopes take up the synoptically driven atmospheric water vapor isotopic signal in between precipitation events, erasing the precipitation isotope signal in the surface snow. This highlights the importance of using General or Regional Climate Models that are able to simulate the atmospheric water vapor isotopic composition accurately to understand and interpret the ice core isotope signal. With this in mind we have used three isotope-enabled General Circulation Models (isoGSM, ECHAM5-wiso, and LMDZiso) nudged to reanalysis products. We have compared the simulations of daily mean isotope values directly with our in-situ observations. This has allowed us to characterize the variability of the isotopic composition in the models and compare it to our observations. We have specifically focused on the d-excess in order to characterize why both the mean and the variability are significantly lower than in our observations. We argue that using water vapor isotopes to benchmark General Circulation Models offers an excellent tool for improving the treatment and parameterization of the atmospheric hydrological cycle. Recent studies have documented a very large inter-model dispersion in the treatment of the Arctic water cycle under a future global

  17. SIMULATION OF COLLECTIVE RISK MODEL

    Directory of Open Access Journals (Sweden)

    Viera Pacáková

    2007-12-01

    Full Text Available The article provides brief theoretical definitions of the basic terms and methods for modeling and simulating insurance risks in non-life insurance by means of mathematical and statistical methods using statistical software. While the risk assessment of an insurance company in connection with its solvency is a rather complex and comprehensive problem, its solution starts with statistical modeling of the number and amount of individual claims. Successful solution of these fundamental problems enables solving crucial problems of insurance such as modeling and simulation of collective risk, premium and reinsurance premium calculation, estimation of the probability of ruin, etc. The article also presents some essential ideas underlying Monte Carlo methods and their applications to the modeling of insurance risk. The problem to solve is finding the probability distribution of the collective risk in a non-life insurance portfolio. Simulation of the compound distribution function of the aggregate claim amount can be carried out if the distribution functions of the claim number process and of the claim size are assumed given. Monte Carlo simulation is a suitable method for confirming the results of other methods and for the treatment of catastrophic claims, when small collectives are studied. Analysis of insurance risks using risk theory is an important part of the Solvency II project. Risk theory is the analysis of the stochastic features of the non-life insurance process, and its field of application has grown rapidly. There is a need to develop the theory into a form suitable for practical purposes and to demonstrate its application. Modern computer simulation techniques open up a wide field of practical applications for risk theory concepts, without requiring restrictive assumptions and sophisticated mathematics. This article presents some comparisons of traditional actuarial methods and simulation methods for the collective risk model.
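
    As an illustration of the Monte Carlo approach described above, a minimal sketch (distribution choices and parameter values are purely illustrative) simulates the compound distribution of the aggregate claim amount S = X1 + ... + XN for a Poisson claim number N and lognormal claim sizes:

        import numpy as np

        def simulate_aggregate_claims(n_sims, lam, mu, sigma, rng=None):
            """Monte Carlo simulation of the collective risk model:
            one simulated aggregate claim amount S per portfolio-year,
            with N ~ Poisson(lam) and lognormal(mu, sigma) claim sizes."""
            rng = np.random.default_rng() if rng is None else rng
            counts = rng.poisson(lam, size=n_sims)
            return np.array([rng.lognormal(mu, sigma, size=n).sum()
                             for n in counts])

        # Empirical distribution of S, e.g. the 99.5% quantile of the
        # kind used in Solvency II capital calculations
        s = simulate_aggregate_claims(100_000, lam=120, mu=7.0, sigma=1.2)
        print(np.quantile(s, 0.995))

    The empirical quantiles of the simulated S then approximate the compound distribution function without any of the restrictive analytical assumptions mentioned in the article.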

  18. Benchmarking Exercises To Validate The Updated ELLWF GoldSim Slit Trench Model

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, G. A.; Hiergesell, R. A.

    2013-11-12

    The Savannah River National Laboratory (SRNL) results of the 2008 Performance Assessment (PA) (WSRC, 2008) sensitivity/uncertainty analyses conducted for the trenches located in the E-Area Low-Level Waste Facility (ELLWF) were subject to review by the United States Department of Energy (U.S. DOE) Low-Level Waste Disposal Facility Federal Review Group (LFRG) (LFRG, 2008). LFRG comments were generally approving of the use of probabilistic modeling in GoldSim to support the quantitative sensitivity analysis. A recommendation was made, however, that the probabilistic models be revised and updated to bolster their defensibility. SRS committed to addressing those comments and, in response, contracted with Neptune and Company to rewrite the three GoldSim models. The initial portion of this work, development of the Slit Trench (ST), Engineered Trench (ET) and Components-in-Grout (CIG) trench GoldSim models, has been completed. The work described in this report utilizes these revised models to test and evaluate the results against the 2008 PORFLOW model results. This was accomplished by first performing a rigorous code-to-code comparison of the PORFLOW and GoldSim codes and then performing a deterministic comparison of the two-dimensional (2D) unsaturated zone and three-dimensional (3D) saturated zone PORFLOW Slit Trench models against results from the one-dimensional (1D) GoldSim Slit Trench model. The results of the code-to-code comparison indicate that, when the mechanisms of radioactive decay, partitioning of contaminants between solid and fluid, implementation of specific boundary conditions and the imposition of solubility controls were all tested using identical flow fields, GoldSim and PORFLOW produce nearly identical results. It is also noted that GoldSim has an advantage over PORFLOW in that it simulates all radionuclides simultaneously - thus avoiding a potential problem as demonstrated in the Case Study (see Section 2.6). Hence, it was concluded that the follow

  19. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm......, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  20. Model for Simulation of Atmospheric Turbulence

    DEFF Research Database (Denmark)

    Lundtang Petersen, Erik

    1976-01-01

    A method that produces realistic simulations of atmospheric turbulence is developed and analyzed. The procedure makes use of a generalized spectral analysis, often called a proper orthogonal decomposition or the Karhunen-Loève expansion. A set of criteria, emphasizing a realistic appearance...... eigenfunctions and estimates of the distributions of the corresponding expansion coefficients. The simulation method utilizes the eigenfunction expansion procedure to produce preliminary time histories of the three velocity components simultaneously. As a final step, a spectral shaping procedure is then applied....... The method is unique in modeling the three velocity components simultaneously, and it is found that important cross-statistical features are reasonably well-behaved. It is concluded that the model provides a practical, operational simulator of atmospheric turbulence....
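
    A minimal sketch of the simulation principle, assuming the covariance of the process is given (the paper instead used empirically estimated eigenfunctions, simulated the three velocity components jointly, and added a final spectral-shaping step), generates realizations from a truncated Karhunen-Loève expansion:

        import numpy as np

        def kl_simulate(cov, n_modes, rng=None):
            """One realization of a zero-mean Gaussian process with
            covariance matrix `cov`, via a truncated Karhunen-Loeve
            (proper orthogonal decomposition) expansion."""
            rng = np.random.default_rng() if rng is None else rng
            eigval, eigvec = np.linalg.eigh(cov)            # ascending order
            eigval, eigvec = eigval[::-1], eigvec[:, ::-1]  # leading modes first
            lam = np.clip(eigval[:n_modes], 0.0, None)      # guard round-off
            xi = rng.standard_normal(n_modes)               # expansion coefficients
            return eigvec[:, :n_modes] @ (np.sqrt(lam) * xi)

        # Example: exponential covariance as a stand-in "turbulence" record
        t = np.linspace(0.0, 10.0, 256)
        cov = np.exp(-np.abs(t[:, None] - t[None, :]))
        u = kl_simulate(cov, n_modes=40)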

  1. A benchmark test suite for proton transfer energies and its use to test electronic structure model chemistries

    Science.gov (United States)

    Nachimuthu, Santhanamoorthi; Gao, Jiali; Truhlar, Donald G.

    2012-05-01

    We present benchmark calculations of nine selected points on potential energy surfaces describing proton transfer processes in three model systems, H5O2+, CH3OH…H+…OH2, and CH3COOH…OH2. The calculated relative energies of these geometries are compared to those calculated by various wave function and density functional methods, including the polarized molecular orbital (PMO) model recently developed in our research group and other semiempirical molecular orbital methods. We found that the SCC-DFTB and PMO methods (the latter available so far only for molecules consisting of only O and H and therefore only for the first of the three model systems) give results that are, on average, within 2 kcal/mol of the benchmark results. Other semiempirical molecular orbital methods have mean unsigned errors (MUEs) of 3-8 kcal/mol, local density functionals have MUEs in the range 0.7-3.7 kcal/mol, and hybrid density functionals have MUEs of only 0.3-1.0 kcal/mol, with the best density functional performance obtained by hybrid meta-GGAs, especially M06 and PW6B95.
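
    The MUE statistic quoted above is simply the average absolute deviation of a method's relative energies from the benchmark values over the test points; a short sketch (array contents would be the nine relative energies in kcal/mol):

        import numpy as np

        def mean_unsigned_error(e_model, e_benchmark):
            """MUE in kcal/mol: mean absolute deviation of a model's
            relative energies from the benchmark relative energies."""
            return np.mean(np.abs(np.asarray(e_model) - np.asarray(e_benchmark)))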

  2. Modelling, simulating and optimizing Boilers

    DEFF Research Database (Denmark)

    Sørensen, Kim; Condra, Thomas Joseph; Houbak, Niels

    2003-01-01

    This paper describes the modelling, simulation and optimization, including experimental verification, carried out as part of a Ph.D. project written by, and supervised by, the authors respectively. The work covers the dynamic performance of both water-tube boilers and fire-tube boilers. A detailed dynamic mod...

  3. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...

  4. Surrogate model approach for improving the performance of reactive transport simulations

    Science.gov (United States)

    Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris

    2016-04-01

    Reactive transport models can serve a large number of important geoscientific applications involving underground resources in industry and scientific research. It is common for a reactive transport simulation to consist of at least two coupled simulation models. The first is a hydrodynamics simulator responsible for simulating the flow of groundwater and the transport of solutes. Hydrodynamics simulators are well-established technology and can be very efficient; when hydrodynamics simulations are performed without coupled geochemistry, their spatial geometries can span millions of elements even when running on desktop workstations. The second is a geochemical simulation model coupled to the hydrodynamics simulator. Geochemical simulation models are much more computationally costly, which makes reactive transport simulations spanning millions of spatial elements very difficult to achieve. To address this problem we propose to replace the coupled geochemical simulation model with a surrogate model. A surrogate is a statistical model created to include only the necessary subset of simulator complexity for a particular scenario. To demonstrate the viability of such an approach we tested it on a popular reactive transport benchmark problem involving 1D calcite transport. This is a published benchmark problem (Kolditz, 2012) for simulation models, and for this reason we use it to test the surrogate model approach. To do this we tried a number of statistical models, available through the caret and DiceEval packages for R, as surrogate models. These were trained on a randomly sampled subset of the input-output data from the geochemical simulation model used in the original reactive transport simulation. For validation we use the surrogate model to predict the simulator output using the part of the sampled input data that was not used for training the statistical model. For this scenario we find that the multivariate adaptive regression splines
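
    The abstract works in R with the caret and DiceEval packages; the same train-on-sampled-runs, validate-on-held-out-runs workflow can be sketched in Python. In the sketch below, the synthetic data and the gradient-boosting regressor are illustrative stand-ins for the geochemical simulator's sampled input-output pairs and the MARS model named in the abstract:

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import r2_score

        # X: sampled simulator inputs, y: simulator outputs (placeholders;
        # in practice these come from runs of the geochemical code)
        rng = np.random.default_rng(0)
        X = rng.uniform(size=(2000, 4))
        y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(2000)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
        surrogate = GradientBoostingRegressor().fit(X_tr, y_tr)   # train
        print("held-out R^2:", r2_score(y_te, surrogate.predict(X_te)))

    The held-out score is the validation step described in the abstract: the surrogate is only trusted to replace the geochemical solver where it reproduces unseen simulator output well.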

  5. Benchmark exercise on SBLOCA experiment of PWR PACTEL facility

    International Nuclear Information System (INIS)

    Highlights: • PWR PACTEL, the facility with EPR-type steam generators, is introduced. • The focus of the benchmark was on the analyses of the SBLOCA test with PWR PACTEL. • System codes with several modeling approaches were utilized to analyze the test. • Proper consideration of heat and pressure losses improves the simulation remarkably. - Abstract: The PWR PACTEL benchmark exercise was organized in Lappeenranta, Finland by Lappeenranta University of Technology. The benchmark consisted of two phases, i.e. a blind and an open calculation task. Seven organizations from the Czech Republic, Germany, Italy, Sweden and Finland participated in the benchmark exercise, and four system codes were utilized in the benchmark simulation tasks. Two workshops were organized for launching and concluding the benchmark, the latter of which involved presentations of the calculation results as well as discussions on the related modeling issues. The experiment chosen for the benchmark was a small break loss of coolant accident experiment performed to study the natural circulation behavior over a continuous range of primary side coolant inventories. For the blind calculation task, detailed facility descriptions, the measured pressure and heat losses, and the results of a short characterizing transient were provided. For the open calculation task, the experiment results were released. According to the simulation results, the benchmark experiment was quite challenging to model. Several improvements were found and utilized, especially for the open calculation case. The issues concerned model construction, the impact of heat and pressure losses, the interpretation of measured and calculated data, non-condensable gas effects, the testing of several condensation and CCFL correlations, sensitivity studies, and break modeling. There is a clear need for user guidelines or for a collection of best modeling practices for every code. The benchmark offered a unique opportunity to test

  6. Monte Carlo simulations and benchmark measurements on the response of TE(TE) and Mg(Ar) ionization chambers in photon, electron and neutron beams

    Science.gov (United States)

    Lin, Yi-Chun; Huang, Tseng-Te; Liu, Yuan-Hao; Chen, Wei-Lin; Chen, Yen-Fu; Wu, Shu-Wei; Nievaart, Sander; Jiang, Shiang-Huei

    2015-06-01

    The paired ionization chambers (ICs) technique is commonly employed to determine neutron and photon doses in radiology or radiotherapy neutron beams, where the neutron dose depends very strongly on the accuracy of the accompanying high-energy photon dose. During the dose derivation, it is an important issue to evaluate the photon and electron response functions of two commercially available ionization chambers, denoted as TE(TE) and Mg(Ar), used in our reactor-based epithermal neutron beam. Nowadays, most perturbation corrections for accurate dose determination and many treatment planning systems are based on the Monte Carlo technique. We used the general-purpose Monte Carlo codes MCNP5, EGSnrc, FLUKA and GEANT4 for benchmark verifications among them and carefully measured values for a precise estimation of the chamber current from the absorbed dose rate of the cavity gas. Also, energy-dependent response functions of the two chambers were calculated in a parallel beam with mono-energies from 20 keV to 20 MeV for photons and electrons, using both optimal simple spherical and detailed IC models. The measurements were performed in the well-defined (a) four primary M-80, M-100, M-120 and M-150 X-ray calibration fields, (b) primary 60Co calibration beam, (c) 6 MV and 10 MV photon and (d) 6 MeV and 18 MeV electron LINACs in hospital and (e) BNCT clinical trial neutron beam. For the TE(TE) chamber, all codes were almost identical over the whole photon energy range. For the Mg(Ar) chamber, MCNP5 showed lower response than the other codes for photon energies below 0.1 MeV and presented similar response above 0.2 MeV (agreement within 5% in the simple spherical model). With increasing electron energy, the response difference between MCNP5 and the other codes became larger in both chambers. Compared with the measured currents, MCNP5 agreed with the measurement data within 5% for the 60Co, 6 MV, 10 MV, 6 MeV and 18 MeV LINAC beams. But for the Mg(Ar) chamber, the deviations reached 7
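
    The paired-chamber dose separation that motivates these response functions reduces to a 2x2 linear system: each chamber reading is a sensitivity-weighted sum of the photon and neutron doses. A hedged sketch (symbol names are illustrative; the sensitivities h and k are exactly what Monte Carlo response calculations such as those above provide):

        import numpy as np

        def separate_doses(m_te, m_mg, h_te, k_te, h_mg, k_mg):
            """Solve the paired-ionization-chamber equations
               M_TE = h_TE * D_gamma + k_TE * D_n
               M_Mg = h_Mg * D_gamma + k_Mg * D_n
            for the photon dose D_gamma and neutron dose D_n."""
            A = np.array([[h_te, k_te], [h_mg, k_mg]])
            b = np.array([m_te, m_mg])
            d_gamma, d_n = np.linalg.solve(A, b)
            return d_gamma, d_n

    Because the TE(TE) chamber is nearly tissue-equivalent for neutrons while the Mg(Ar) chamber is comparatively neutron-insensitive, the system is well conditioned, but the recovered neutron dose inherits any error in the photon sensitivities, which is why the photon/electron response benchmarking above matters.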

  7. Validation process of simulation model

    International Nuclear Information System (INIS)

    A methodology for the empirical validation of any detailed simulation model is presented. This kind of validation is always related to an experimental case. Empirical validation has a residual sense, because the conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides the detection of failures of the simulation model, and it can furthermore be used as a guide in the design of subsequent experiments. Three steps can be clearly differentiated: Sensitivity analysis, which can be done with a DSA (differential sensitivity analysis) and with an MCSA (Monte Carlo sensitivity analysis). Search for the optimal domains of the input parameters, for which a procedure based on Monte Carlo methods and cluster techniques has been developed. Residual analysis, performed in the time domain and in the frequency domain using correlation analysis and spectral analysis. As an application of this methodology, the validation of a thermal simulation model of buildings is presented, studying the behavior of building components in a Test Cell of LECE at CIEMAT (Spain). (Author) 17 refs
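
    The MCSA step can be sketched as follows, assuming a generic scalar model and uniform input ranges (a simplified stand-in for the building thermal model): sample the inputs, evaluate the model, and rank the parameters by the magnitude of their correlation with the output:

        import numpy as np

        def mc_sensitivity(model, bounds, n=5000, rng=None):
            """Crude Monte Carlo sensitivity analysis: sample inputs
            uniformly within `bounds` and return each parameter's
            correlation with the model output."""
            rng = np.random.default_rng() if rng is None else rng
            lo, hi = np.array(bounds).T
            X = rng.uniform(lo, hi, size=(n, len(bounds)))
            y = np.apply_along_axis(model, 1, X)
            return np.array([np.corrcoef(X[:, j], y)[0, 1]
                             for j in range(len(bounds))])

        # Toy two-parameter "thermal model": the first input dominates
        corr = mc_sensitivity(lambda p: p[0] ** 2 + 0.1 * p[1],
                              [(0.0, 1.0), (0.0, 1.0)])
        print(corr)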

  8. Multiscale Stochastic Simulation and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    James Glimm; Xiaolin Li

    2006-01-10

    Acceleration-driven instabilities of fluid mixing layers include the classical cases of Rayleigh-Taylor instability, driven by a steady acceleration, and Richtmyer-Meshkov instability, driven by an impulsive acceleration. Our program starts with high-resolution methods for the numerical simulation of two (or more) distinct fluids, and continues with analytical analysis of these solutions and the derivation of averaged equations. A striking achievement has been the systematic agreement we obtained between simulation and experiment by using a high-resolution numerical method and improved physical modeling, with surface tension. Our study is accompanied by analysis using stochastic modeling and averaged equations for the multiphase problem. We have quantified the error and uncertainty using statistical modeling methods.

  9. Time-resolved particle image velocimetry measurements with wall shear stress and uncertainty quantification for the FDA benchmark nozzle model

    CERN Document Server

    Raben, Jaime S; Robinson, Ronald; Malinauskas, Richard; Vlachos, Pavlos P

    2014-01-01

    We present validation of benchmark experimental data for computational fluid dynamics (CFD) analyses of medical devices using advanced Particle Image Velocimetry (PIV) processing and post-processing techniques. This work is an extension of a previous FDA-sponsored multi-laboratory study, which used a medical device mimicking geometry referred to as the FDA benchmark nozzle model. Time-resolved PIV analysis was performed in five overlapping regions of the model for Reynolds numbers in the nozzle throat of 500, 2,000, 5,000, and 8,000. Images included a two-fold increase in spatial resolution in comparison to the previous study. Data was processed using ensemble correlation, dynamic range enhancement, and phase correlations to increase signal-to-noise ratios and measurement accuracy, and to resolve flow regions with large velocity ranges and gradients, which is typical of many blood-contacting medical devices. Parameters relevant to device safety, including shear stress at the wall and in bulk flow, were comput...

  10. Issues in benchmarking human reliability analysis methods : a literature review.

    Energy Technology Data Exchange (ETDEWEB)

    Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)

    2008-04-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  11. Benchmarking the Sandbox: Quantitative Comparisons of Numerical and Analogue Models of Brittle Wedge Dynamics (Invited)

    Science.gov (United States)

    Buiter, S.; Schreurs, G.; Geomod2008 Team

    2010-12-01

    , we find differences in shear zone dip angle and surface slope between numerical and analogue models and, in 3D experiments, along-strike variations of structures in map view. Our experiments point out that we need careful treatment of material properties, discontinuities in boundary conditions, model building techniques, and boundary friction for sandbox-like setups. We show that to first order we successfully simulate sandbox-style brittle behavior using different numerical modeling techniques and that we can obtain similar styles of deformation behavior in numerical and laboratory experiments at similar levels of variability. * The GeoMod2008 Team: M. Albertz, C. Beaumont, C. Burberry, J.-P. Callot, C. Cavozzi, M. Cerca, J.-H. Chen, E. Cristallini, A. Cruden, L. Cruz, M. Cooke, T. Crook, J.-M. Daniel, D. Egholm, S. Ellis, T. Gerya, L. Hodkinson, F. Hofmann, V. Garcia, C. Gomes, C. Grall, Y. Guillot, C. Guzmán, T. Nur Hidayah, G. Hilley, B. Kaus, M. Klinkmüller, H. Koyi, W. Landry, C.-Y. Lu, J. Macauley, B. Maillot, C. Meriaux, Y. Mishin, F. Nilfouroushan, C.-C. Pan, C. Pascal, D. Pillot, R. Portillo, M. Rosenau, W. Schellart, R. Schlische, P. Souloumiac, A. Take, B. Vendeville, M. Vettori, M. Vergnaud, S.-H. Wang, M. Withjack, D. Yagupsky, Y. Yamada

  12. Assessment of Molecular Modeling & Simulation

    Energy Technology Data Exchange (ETDEWEB)

    None

    2002-01-03

    This report reviews the development and applications of molecular and materials modeling in Europe and Japan in comparison to those in the United States. Topics covered include computational quantum chemistry, molecular simulations by molecular dynamics and Monte Carlo methods, mesoscale modeling of material domains, molecular-structure/macroscale property correlations like QSARs and QSPRs, and related information technologies like informatics and special-purpose molecular-modeling computers. The panel's findings include the following: The United States leads this field in many scientific areas. However, Canada has particular strengths in DFT methods and homogeneous catalysis; Europe in heterogeneous catalysis, mesoscale, and materials modeling; and Japan in materials modeling and special-purpose computing. Major government-industry initiatives are underway in Europe and Japan, notably in multi-scale materials modeling and in development of chemistry-capable ab-initio molecular dynamics codes.

  13. CFD Modeling of Thermal Manikin Heat Loss in a Comfort Evaluation Benchmark Test

    DEFF Research Database (Denmark)

    Nilsson, Håkan O.; Brohus, Henrik; Nielsen, Peter V.

    2007-01-01

    Computer simulated persons (CSPs) today differ in many ways, reflecting various software possibilities and limitations as well as different research interests. Unfortunately, too few of the theories behind thermal manikin simulations are available in the public domain. Many researchers...

  14. Spray model validation on single droplet heat and mass transfers for containment applications - SARNET-2 benchmark

    International Nuclear Information System (INIS)

    This work is performed in the frame of the SARNET-2 network, within Sub-Work Package WP7-2, Task 1 (spray activities). Three different elementary test series have been proposed for benchmarking, and the first series, concerning heat and mass transfer on a single droplet, is presented here. Code-to-experiment and code-to-code comparisons are presented. It is shown that the mass transfer terms are responsible for most of the differences and that, depending on the kind of test, the errors can either compensate each other or be amplified. Since the errors propagate over the droplet fall height, they may not be negligible for real containment cases. (author)
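
    For orientation, a heat-transfer-only sketch of a single falling droplet (lumped capacitance, conduction-limit Nusselt number Nu = 2; all property values illustrative) shows the kind of elementary model being benchmarked. The spray codes compared in the benchmark additionally include the mass-transfer (evaporation/condensation) terms that this exercise found to drive most of the code-to-code differences:

        import numpy as np

        def droplet_temperature(t_end, dt, d, T0, T_gas, k_gas=0.026,
                                rho_w=1000.0, cp_w=4180.0):
            """Lumped-capacitance temperature of a water droplet of
            diameter d exchanging heat with gas at T_gas (heat only,
            no evaporation/condensation)."""
            h = 2.0 * k_gas / d                  # Nu = 2  ->  h = Nu*k/d
            area = np.pi * d ** 2
            mass = rho_w * np.pi * d ** 3 / 6.0
            T = T0
            for _ in range(int(t_end / dt)):
                T += dt * h * area * (T_gas - T) / (mass * cp_w)  # explicit Euler
            return T

        # e.g. a 1 mm droplet after a 2 s fall through hot containment gas
        print(droplet_temperature(t_end=2.0, dt=1e-3, d=1e-3,
                                  T0=20.0, T_gas=120.0))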

  15. Standard for Models and Simulations

    Science.gov (United States)

    Steele, Martin J.

    2016-01-01

    This NASA Technical Standard establishes uniform practices in modeling and simulation to ensure essential requirements are applied to the design, development, and use of models and simulations (M&S), while ensuring acceptance criteria are defined by the program/project and approved by the responsible Technical Authority. It also provides an approved set of requirements, recommendations, and criteria with which M&S may be developed, accepted, and used in support of NASA activities. As the M&S disciplines employed and application areas involved are broad, the common aspects of M&S across all NASA activities are addressed. The discipline-specific details of a given M&S should be obtained from relevant recommended practices. The primary purpose is to reduce the risks associated with M&S-influenced decisions by ensuring the complete communication of the credibility of M&S results.

  16. Simulating spin models on GPU

    Science.gov (United States)

    Weigel, Martin

    2011-09-01

    Over the last couple of years it has been realized that the vast computational power of graphics processing units (GPUs) could be harvested for purposes other than the video game industry. This power, which at least nominally exceeds that of current CPUs by large factors, results from the relative simplicity of the GPU architectures as compared to CPUs, combined with a large number of parallel processing units on a single chip. To benefit from this setup for general computing purposes, the problems at hand need to be prepared in a way to profit from the inherent parallelism and hierarchical structure of memory accesses. In this contribution I discuss the performance potential for simulating spin models, such as the Ising model, on GPU as compared to conventional simulations on CPU.
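
    The key to exploiting that parallelism for lattice spin models is a domain decomposition in which independent sites are updated simultaneously. A CPU/NumPy sketch of the standard checkerboard Metropolis update for the 2D Ising model follows (couplings J = 1 and the temperature are illustrative); each parity sublattice maps naturally onto one batch of GPU threads:

        import numpy as np

        def checkerboard_sweep(spins, beta, rng):
            """One Metropolis sweep of the 2D Ising model using the
            checkerboard decomposition: all sites of one sublattice can
            be updated in parallel because their four neighbors all
            belong to the other sublattice."""
            L = spins.shape[0]
            ii, jj = np.indices((L, L))
            for parity in (0, 1):
                mask = (ii + jj) % 2 == parity
                nb = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
                      np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
                dE = 2.0 * spins * nb                 # energy cost of a flip
                accept = rng.random((L, L)) < np.exp(-beta * dE)
                spins[mask & accept] *= -1            # flip one sublattice
            return spins

        rng = np.random.default_rng(1)
        spins = rng.choice([-1, 1], size=(64, 64))
        for _ in range(100):
            checkerboard_sweep(spins, beta=0.44, rng=rng)

    On a GPU the two masked updates become two kernel launches, one per sublattice, which is the scheme whose performance potential the contribution discusses.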

  17. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    International Nuclear Information System (INIS)

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR, as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments

  18. Robust fuzzy output feedback controller for affine nonlinear systems via T-S fuzzy bilinear model: CSTR benchmark.

    Science.gov (United States)

    Hamdy, M; Hamdan, I

    2015-07-01

    In this paper, a robust H∞ fuzzy output feedback controller is designed for a class of affine nonlinear systems with disturbance via a Takagi-Sugeno (T-S) fuzzy bilinear model. The parallel distributed compensation (PDC) technique is utilized to design the fuzzy controller. The stability conditions of the overall closed-loop T-S fuzzy bilinear model are formulated in terms of a Lyapunov function via linear matrix inequalities (LMIs). The control law is robustified in the H∞ sense to attenuate external disturbance. Moreover, the desired controller gains can be obtained by solving a set of LMIs. A continuous stirred tank reactor (CSTR), which is a benchmark problem in nonlinear process control, is discussed in detail to verify the effectiveness of the proposed approach with a comparative study.
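
    A hedged sketch of the kind of LMI feasibility problem underlying such designs, using the generic Lyapunov inequality A'P + PA < 0 with P > 0 for a single, illustrative closed-loop vertex matrix A (the full T-S bilinear synthesis stacks one such constraint per fuzzy rule and also solves for the controller gains and the H∞ attenuation level):

        import numpy as np
        import cvxpy as cp

        A = np.array([[0.0, 1.0],
                      [-2.0, -3.0]])              # illustrative stable vertex
        n = A.shape[0]
        P = cp.Variable((n, n), symmetric=True)   # Lyapunov matrix
        eps = 1e-6
        constraints = [P >> eps * np.eye(n),                     # P > 0
                       A.T @ P + P @ A << -eps * np.eye(n)]      # A'P + PA < 0
        problem = cp.Problem(cp.Minimize(0), constraints)
        problem.solve()                            # needs an SDP solver, e.g. SCS
        print("LMI feasible:", problem.status == "optimal")

    Feasibility of the stacked LMIs certifies asymptotic stability of the closed loop; the paper's conditions additionally encode the bilinear terms and the disturbance-attenuation bound.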

  19. Benchmark experiments and numerical modelling of the columnar-equiaxed dendritic growth in the transparent alloy Neopentylglycol-(d)Camphor

    Science.gov (United States)

    Sturz, L.; Wu, M.; Zimmermann, G.; Ludwig, A.; Ahmadein, M.

    2015-06-01

    Solidification benchmark experiments on columnar and equiaxed dendritic growth, as well as on the columnar-to-equiaxed transition (CET), have been carried out under diffusion-dominated conditions for heat and mass transfer in a low-gravity environment. The system under investigation is the transparent organic alloy system Neopentylglycol-37.5wt.-%(d)Camphor, processed aboard a TEXUS sounding rocket flight. Solidification was observed by standard optical methods, in addition to measurements of the thermal fields within the sheet-like experimental cells of 1 mm thickness. The dendrite tip kinetics, primary dendrite arm spacing, temporal and spatial temperature evolution, columnar tip velocity and the critical parameters at the CET have been analysed. Here we focus on a detailed comparison of the experiment "TRACE" with a 5-phase volume-averaging model to validate the numerical model and to give insight into the corresponding physical mechanisms and parameters leading to the CET. The results are discussed in terms of sensitivity to numerical parameters.

  20. Development of a chronic noncancer oral reference dose and drinking water screening level for sulfolane using benchmark dose modeling.

    Science.gov (United States)

    Thompson, Chad M; Gaylor, David W; Tachovsky, J Andrew; Perry, Camarie; Carakostas, Michael C; Haws, Laurie C

    2013-12-01

    Sulfolane is a widely used industrial solvent that is often used for gas treatment (sour gas sweetening; hydrogen sulfide removal from shale and coal processes, etc.), and in the manufacture of polymers and electronics, and may be found in pharmaceuticals as a residual solvent used in the manufacturing processes. Sulfolane is considered a high production volume chemical with worldwide production around 18 000-36 000 tons per year. Given that sulfolane has been detected as a contaminant in groundwater, an important potential route of exposure is tap water ingestion. Because there are currently no federal drinking water standards for sulfolane in the USA, we developed a noncancer oral reference dose (RfD) based on benchmark dose modeling, as well as a tap water screening value that is protective of ingestion. Review of the available literature suggests that sulfolane is not likely to be mutagenic, clastogenic or carcinogenic, or pose reproductive or developmental health risks except perhaps at very high exposure concentrations. RfD values derived using benchmark dose modeling were 0.01-0.04 mg kg(-1) per day, although modeling of developmental endpoints resulted in higher values, approximately 0.4 mg kg(-1) per day. The lowest, most conservative, RfD of 0.01 mg kg(-1) per day was based on reduced white blood cell counts in female rats. This RfD was used to develop a tap water screening level that is protective of ingestion, viz. 365 µg l(-1). It is anticipated that these values, along with the hazard identification and dose-response modeling described herein, should be informative for risk assessors and regulators interested in setting health-protective drinking water guideline values for sulfolane.
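
    The benchmark-dose step can be illustrated with a minimal sketch. The dose-response values and the exponential model below are made-up placeholders, not the study's data, and the actual assessment used formal BMD software with lower confidence bounds (BMDL) rather than this point estimate:

        import numpy as np
        from scipy.optimize import curve_fit

        dose = np.array([0.0, 10.0, 30.0, 100.0])   # mg/kg-day (illustrative)
        wbc = np.array([8.0, 7.7, 7.1, 5.6])        # group mean WBC counts

        model = lambda d, a, b: a * np.exp(-b * d)  # declining continuous endpoint
        (a, b), _ = curve_fit(model, dose, wbc, p0=(8.0, 0.01))

        bmr = 0.05                                  # benchmark response: 5% decrease
        bmd = -np.log(1.0 - bmr) / b                # solve a*exp(-b*d) = a*(1-bmr)
        print("BMD at 5% relative deviation:", bmd)

    Dividing such a point of departure by uncertainty factors yields the RfD, which is then converted to a drinking water screening level using standard intake assumptions.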

  1. Simple benchmark for complex dose finding studies.

    Science.gov (United States)

    Cheung, Ying Kuen

    2014-06-01

    While a general goal of early phase clinical studies is to identify an acceptable dose for further investigation, modern dose finding studies and designs are highly specific to individual clinical settings. In addition, as outcome-adaptive dose finding methods often involve complex algorithms, it is crucial to have diagnostic tools to evaluate the plausibility of a method's simulated performance and the adequacy of the algorithm. In this article, we propose a simple technique that provides an upper limit, or a benchmark, of accuracy for dose finding methods for a given design objective. The proposed benchmark is nonparametric optimal in the sense of O'Quigley et al. (2002, Biostatistics 3, 51-56), and is demonstrated by examples to be a practical accuracy upper bound for model-based dose finding methods. We illustrate the implementation of the technique in the context of phase I trials that consider multiple toxicities and phase I/II trials where dosing decisions are based on both toxicity and efficacy, and apply the benchmark to several clinical examples considered in the literature. By comparing the operating characteristics of a dose finding method to that of the benchmark, we can form quick initial assessments of whether the method is adequately calibrated and evaluate its sensitivity to the dose-outcome relationships.

  2. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means the revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for managers to continuously improve their company's efficiency and effectiveness, and to know the success factors and competitiveness determinants, determines which performance measures are most critical in assessing their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent due to operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other types of profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.

  3. A BENCHMARKING ANALYSIS FOR FIVE RADIONUCLIDE VADOSE ZONE MODELS (CHAIN, MULTIMED_DP, FECTUZ, HYDRUS, AND CHAIN 2D) IN SOIL SCREENING LEVEL CALCULATIONS

    Science.gov (United States)

    Five radionuclide vadose zone models with different degrees of complexity (CHAIN, MULTIMED_DP, FECTUZ, HYDRUS, and CHAIN 2D) were selected for use in soil screening level (SSL) calculations. A benchmarking analysis between the models was conducted for a radionuclide (99Tc) rele...

  4. Advances in Intelligent Modelling and Simulation: Simulation Tools and Applications

    CERN Document Server

    Oplatková, Zuzana; Carvalho, Marco; Kisiel-Dorohinicki, Marek

    2012-01-01

    The human capacity to abstract complex systems and phenomena into simplified models has played a critical role in the rapid evolution of our modern industrial processes and scientific research. As a science and an art, Modelling and Simulation have been one of the core enablers of this remarkable human trait, and have become a topic of great importance for researchers and practitioners. This book was created to compile some of the most recent concepts, advances, challenges and ideas associated with Intelligent Modelling and Simulation frameworks, tools and applications. The first chapter discusses the important aspects of human interaction and the correct interpretation of results during simulations. The second chapter gets to the heart of the analysis of entrepreneurship by means of agent-based modelling and simulations. The following three chapters bring together the central theme of simulation frameworks, first describing an agent-based simulation framework, then a simulator for electrical machines, and...

  5. Experimental benchmarks and simulation of GAMMA-T for overcooling and undercooling transients in HTGRs coupled with MED desalination plants

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ho Sik, E-mail: hskim25@kaist.ac.kr [Korea Advanced Institute of Science and Technology (KAIST), Department of Nuclear and Quantum Engineering, 291 Daehak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of); Kim, In Hun, E-mail: nuclea@kaist.ac.kr [Korea Advanced Institute of Science and Technology (KAIST), Department of Nuclear and Quantum Engineering, 291 Daehak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of); NO, Hee Cheon, E-mail: hcno@kaist.ac.kr [Korea Advanced Institute of Science and Technology (KAIST), Department of Nuclear and Quantum Engineering, 291 Daehak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of); Jin, Hyung Gon, E-mail: gonijin@gmail.com [Korea Advanced Institute of Science and Technology (KAIST), Department of Nuclear and Quantum Engineering, 291 Daehak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)

    2013-06-15

    Highlights: ► The GAMMA-T code was well validated through benchmark experiments. ► Based on the KAIST coupling scheme, the GTHTR300 + MED systems were modeled. ► Safety analysis was performed for overcooling and undercooling accidents. ► In all accidents, maximum peak fuel temperatures were well below 1600 °C. ► In all accidents, the HTGR + MED system could be operated continuously. -- Abstract: Nuclear desalination based on the high temperature gas-cooled reactor (HTGR) with gas turbomachinery and multi-effect distillation (MED) is attracting attention because the coupling system can utilize the waste heat of the nuclear power system for the MED desalination system. In previous work, KAIST proposed a new HTGR + MED coupling scheme, evaluated its desalination performance, and performed a cost analysis for the system. In this paper, in order to confirm the safety and the performance of the coupling system, we performed transient analyses with the GAMMA-T (GAs Multidimensional Multicomponent mixture Analysis-Turbomachinery) code for the KAIST HTGR + MED systems. Experimental benchmarks of the GAMMA-T code were set up before the transient analysis of several accident scenarios. The GAMMA-T code was well validated against steady-state and transient scenarios of the He-Water test loop, such as changes in water mass flow rate and water inlet temperature. For the transient analysis, the GTHTR300 was chosen as the reference plant. The GTHTR300 + MED systems were modeled based on the KAIST HTGR + MED coupling scheme. Transient analyses were performed for three kinds of accident scenarios: (1) loss of heat rejection through the MED plant, (2) loss of heat rejection through the heat sink, and (3) overcooling due to abnormally cold seawater. In all accident scenarios, maximum peak fuel temperatures were well below the fuel failure criterion of 1600 °C, and the GTHTR300 + MED system could be operated continuously and safely. Especially, in the

  6. Results of wake simulations at the Horns Rev I and Lillgrund wind farms using the modified Park model

    DEFF Research Database (Denmark)

    Peña, Alfredo; Réthoré, Pierre-Elouan; Hasager, Charlotte Bay;

    This document reports on the results of the wake simulations performed at both the Horns Rev I and Lillgrund offshore wind farms using the modified Park model for the benchmark cases established under the project EERA-DTOC and the IEC Wind Task Wakebench. It also illustrates the comparison between...... are post-processed to partly take into account the wind direction uncertainty and when the wake decay coefficient is estimated either as a function of the roughness, height, and atmospheric stability or of the turbulence intensity. For Lillgrund, the trends of the simulations and the observations are generally...... model simulations and the data. The latter were first independently analyzed by Kurt Hansen and kindly delivered to us after the results of the models' benchmarks were publicly released. For Horns Rev I, the simulations agree very well with the observations, particularly when the simulation results...
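
    At the core of the (modified) Park model is Jensen's linearly expanding top-hat wake. A minimal sketch of the single-wake velocity deficit follows; the modified Park model used in the benchmark adds details such as wake superposition and ground reflection, which are omitted here, and all numbers are illustrative:

        import numpy as np

        def jensen_deficit(x, d_rotor, ct, k):
            """Fractional velocity deficit a distance x downstream of a
            turbine in the Jensen/Park wake model: the wake expands
            linearly with the wake decay coefficient k, and the initial
            deficit is set by the thrust coefficient ct."""
            return (1.0 - np.sqrt(1.0 - ct)) / (1.0 + 2.0 * k * x / d_rotor) ** 2

        # e.g. deficit 7 rotor diameters downstream, offshore-like k = 0.04
        print(jensen_deficit(x=7 * 80.0, d_rotor=80.0, ct=0.8, k=0.04))

    The sensitivity of farm-scale results to k is exactly why the report distinguishes between estimating it from roughness, height and stability or from turbulence intensity.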

  7. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    model of the boiler has been developed and simulations carried out by means of the Matlab integration routines. The model is prepared as a dynamic model consisting of both ordinary differential equations and algebraic equations, together formulated as a Differential-Algebraic-Equation system. Being able...... to the internal pressure the consequence of the increased volume (i.e. water-/steam space) is an increased wall thickness in the pressure part of the boiler. The stresses introduced in the boiler pressure part as a result of the temperature gradients are proportional to the square of the wall thickness......, and the total stress level (i.e. stresses introduced due to internal pressure plus stresses introduced due to temperature gradients) must always be kept below the allowable stress level. In this way, the increased water-/steam space that should allow for better dynamic performance, in the end causes limited...

  8. Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks

    Science.gov (United States)

    Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.

    2015-12-01

    A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids. For instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real-world topography can be compared to recent real-world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive value of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a Cellular Automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the distribution algorithm from cell locations to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES to correctly match early stages of the 2012-13 Tolbachik flow, Kamchatka, Russia, to 80%. We can also evaluate model performance given uncertain input parameters using a Monte Carlo setup. This illuminates sensitivity to model uncertainty.
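
    The two posterior metrics and the Jaccard coefficient are straightforward to compute from Boolean inundation grids. A sketch assuming A = observed inundation and B = simulated inundation (that reading of the abstract's notation is our assumption):

        import numpy as np

        def flow_fit_metrics(simulated, observed):
            """Compare Boolean inundation grids: positive predictive
            value P(A|B), negative predictive value P(notA|notB), and
            the Jaccard fitness coefficient."""
            sim = np.asarray(simulated, bool)
            obs = np.asarray(observed, bool)
            ppv = (sim & obs).sum() / sim.sum()        # P(observed | simulated)
            npv = (~sim & ~obs).sum() / (~sim).sum()   # P(not obs | not simulated)
            jaccard = (sim & obs).sum() / (sim | obs).sum()
            return ppv, npv, jaccard

    Unlike the Jaccard coefficient alone, the two conditional probabilities separate over-prediction from under-prediction, which is what makes them a direct test of forecast skill.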

  9. Kvantitativ benchmark - Produktionsvirksomheder

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report presenting the results of the quantitative benchmark of the production companies in the VIPS project.

  10. Climate simulations for 1880-2003 with GISS modelE

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, J. [NASA Goddard Inst. for Space Studies, New York, NY (United States)]|[Columbia Univ. Earth Inst., New York, NY (United States); Sato, M.; Kharecha, P.; Nazarenko, L.; Aleinov, I.; Bauer, S.; Chandler, M.; Faluvegi, G.; Jonas, J.; Lerner, J.; Perlwitz, J.; Unger, N.; Zhang, S. [Columbia Univ. Earth Inst., New York, NY (United States); Ruedy, R.; Lo, K.; Cheng, Y.; Oinas, V.; Schmunk, R.; Tausnev, N.; Yao, M. [Sigma Space Partners LLC, New York, NY (United States); Lacis, A.; Schmidt, G.A.; Del Genio, A.; Rind, D.; Romanou, A.; Shindell, D. [NASA Goddard Inst. for Space Studies, New York, NY (United States)]|[Columbia Univ., Dept. of Earth and Environmental Sciences, New York, NY (United States); Miller, R.; Hall, T. [NASA Goddard Inst. for Space Studies, New York, NY (United States)]|[Columbia Univ., Dept. of Applied Physics and Applied Mathematics, New York, NY (United States); Russell, G.; Canuto, V.; Kiang, N.Y. [NASA Goddard Inst. for Space Studies, New York, NY (United States); Baum, E.; Cohen, A. [Clean Air Task Force, Boston, MA (United States); Cairns, B.; Perlwitz, J. [Columbia Univ., Dept. of Applied Physics and Applied Mathematics, New York, NY (United States); Fleming, E.; Jackman, C.; Labow, G. [NASA Goddard Space Flight Center, Greenbelt, MD (United States); Friend, A.; Kelley, M. [Lab. des Sciences du Climat et de l' Environnement, Gif-sur-Yvette (France); Koch, D. [Columbia Univ. Earth Inst., New York, NY (United States)]|[Yale Univ., Dept. of Geology, New Haven, CT (United States); Menon, S.; Novakov, T. [Lawrence Berkeley National Lab., CA (United States); Stone, P. [Massachusetts Inst. of Tech., Cambridge, MA (United States); Sun, S. [NASA Goddard Inst. for Space Studies, New York, NY (United States)]|[Massachusetts Inst. of Tech., Cambridge, MA (United States); Streets, D. [Argonne National Lab., IL (United States); Thresher, D. [Columbia Univ., Dept. of Earth and Environmental Sciences, New York, NY (United States)

    2007-12-15

    We carry out climate simulations for 1880-2003 with GISS modelE driven by ten measured or estimated climate forcings. An ensemble of climate model runs is carried out for each forcing acting individually and for all forcing mechanisms acting together. We compare side-by-side simulated climate change for each forcing, all forcings, observations, unforced variability among model ensemble members, and, if available, observed variability. Discrepancies between observations and simulations with all forcings are due to model deficiencies, inaccurate or incomplete forcings, and imperfect observations. Although there are notable discrepancies between model and observations, the fidelity is sufficient to encourage use of the model for simulations of future climate change. By using a fixed well-documented model and accurately defining the 1880-2003 forcings, we aim to provide a benchmark against which the effect of improvements in the model, climate forcings, and observations can be tested. Principal model deficiencies include unrealistically weak tropical El Nino-like variability and a poor distribution of sea ice, with too much sea ice in the Northern Hemisphere and too little in the Southern Hemisphere. Greatest uncertainties in the forcings are the temporal and spatial variations of anthropogenic aerosols and their indirect effects on clouds. (orig.)

  11. Radiography benchmark 2014

    Science.gov (United States)

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-01

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, evaluating only the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  12. The Benchmarking of the Government to Employee (G2e Technology Development: Theoretical Aspects of the Model Construction

    Directory of Open Access Journals (Sweden)

    Alvydas Baležentis

    2013-01-01

    Full Text Available Purpose: To fill the gap in the currently very rare discussion on an important topic of e-government research, the design, development and usage of information and communication technologies for human resource management in the public sector, and to formulate theoretical benchmarks for development of the government-to-employee (G2E) model. Design/methodology/approach: A literature analysis of mostly empirical research from the fields of government-to-government (G2G), government-to-citizen (G2C) and business-to-employee (B2E) was made. With the help of the analogy method, possible elements of the model were described. Findings: The analysis of the literature gave a clearer understanding of the G2E model and of its possible stages and elements. The analogy method helps to recognize which parts of other models can be adopted for the G2E model. The results of the literature review on this theme and the given conclusions provide a strong background for a G2E research roadmap on the international as well as the national level. Research limitations/implications: The article is based on analysis of theoretical and empirical research. Practical implications, originality/value: This article fills a gap in the literature and outlines future research directions.

  13. Simulation model of a PWR power plant

    International Nuclear Information System (INIS)

    A simulation model of a hypothetical PWR power plant is described. A large number of disturbances and failures in plant function can be simulated. The model is written as seven modules to the modular simulation system for continuous processes DYSIM and serves also as a user example of this system. The model runs in Fortran 77 on the IBM-PC-AT. (author)

  14. Validation of mechanical models for reinforced concrete structures: Presentation of the French project 'Benchmark des Poutres de la Rance'

    Energy Technology Data Exchange (ETDEWEB)

    L' Hostis, V. [Laboratoire d' Etude du Comportement des Betons et des Argiles, CEA Saclay (France); Brunet, C. [EDF/SEPTEN, Villeurbanne (France); Poupard, O. [Laboratoire Pierre Sue, CNRS/CEA Saclay (France)]|[Laboratoire d' Etude du Comportement des Betons et des Argiles, CEA Saclay (France); Petre-Lazar, I. [EDF/DRD/MMC (France)

    2006-07-01

    Several ageing models are available for the prediction of the mechanical consequences of rebar corrosion. They are used for the service life prediction of reinforced concrete structures. Concerning the corrosion diagnosis of reinforced concrete, some Non Destructive Testing (NDT) tools have been developed and have been in use for some years. However, these developments require validation on existing concrete structures. The French project 'Benchmark des Poutres de la Rance' contributes to this aspect. It has two main objectives: (i) validation of mechanical models to estimate the influence of rebar corrosion on the load bearing capacity of a structure, (ii) qualification of the use of NDT results to collect information on steel corrosion within reinforced-concrete structures. Ten French and European institutions, from both academic research laboratories and industrial companies, contributed during the years 2004 and 2005. This paper presents the project, which was divided into several work packages: (i) the reinforced concrete beams were characterized with non-destructive testing tools, (ii) the mechanical behaviour of the beams was experimentally tested, (iii) complementary laboratory analyses were performed, and (iv) finally, numerical simulation results were compared to the experimental results obtained from the mechanical tests. (authors)

  15. Differential Die-Away Instrument: Report on Benchmark Measurements and Comparison with Simulation for the Effects of Neutron Poisons

    Energy Technology Data Exchange (ETDEWEB)

    Goodsell, Alison Victoria [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Swinhoe, Martyn Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Henzl, Vladimir [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rael, Carlos D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Desimone, David J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-03-30

    In this report, new experimental data and MCNPX simulation results of the differential die-away (DDA) instrument response to the presence of neutron absorbers are evaluated. In our previous fresh nuclear fuel experiments and simulations, no neutron absorbers or poisons were included in the fuel definition. These new results showcase the capability of the DDA instrument to acquire data from a system that better mimics spent nuclear fuel.

  16. Modeling coupled blast/structure interaction with Zapotec, benchmark calculations for the Conventional Weapon Effects Backfill (CONWEB) tests.

    Energy Technology Data Exchange (ETDEWEB)

    Bessette, Gregory Carl

    2004-09-01

    Modeling the response of buried reinforced concrete structures subjected to close-in detonations of conventional high explosives poses a challenge for a number of reasons. Foremost, there is the potential for coupled interaction between the blast and structure. Coupling enters the problem whenever the structure deformation affects the stress state in the neighboring soil, which in turn, affects the loading on the structure. Additional challenges for numerical modeling include handling disparate degrees of material deformation encountered in the structure and surrounding soil, modeling the structure details (e.g., modeling the concrete with embedded reinforcement, jointed connections, etc.), providing adequate mesh resolution, and characterizing the soil response under blast loading. There are numerous numerical approaches for modeling this class of problem (e.g., coupled finite element/smooth particle hydrodynamics, arbitrary Lagrange-Eulerian methods, etc.). The focus of this work will be the use of a coupled Euler-Lagrange (CEL) solution approach. In particular, the development and application of a CEL capability within the Zapotec code is described. Zapotec links two production codes, CTH and Pronto3D. CTH, an Eulerian shock physics code, performs the Eulerian portion of the calculation, while Pronto3D, an explicit finite element code, performs the Lagrangian portion. The two codes are run concurrently with the appropriate portions of a problem solved on their respective computational domains. Zapotec handles the coupling between the two domains. The application of the CEL methodology within Zapotec for modeling coupled blast/structure interaction will be investigated by a series of benchmark calculations. These benchmarks rely on data from the Conventional Weapons Effects Backfill (CONWEB) test series. In these tests, a 15.4-lb pipe-encased C-4 charge was detonated in soil at a 5-foot standoff from a buried test structure. The test structure was composed of a

  17. The NAS Parallel Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental

  18. SWEEPOP a simulation model for Target Simulation Mode minesweeping

    NARCIS (Netherlands)

    Keus, H.E.; Beckers, A.L.D.; Cleophas, P.L.H.

    2005-01-01

    SWEEPOP is a flexible model that simulates the physical interaction between objects in a maritime underwater environment. The model was built to analyse the deployment and the performance of a Target Simulation Mode (TSM) minesweeping system for the Royal Netherlands Navy (RNLN) and to support its p

  19. Collaborative weed modelling with Universal Simulator

    OpenAIRE

    Holst, Niels

    2010-01-01

    Universal Simulator is • open-source • modular • extendible • re-usable. Universal Simulator includes • the INTERCOM model of plant growth • the Conductance model of plant growth • an annual weed demographic model • an insect demographic model • options to extend with any model and combine with the above

  20. Techniques and Simulation Models in Risk Management

    OpenAIRE

    Mirela GHEORGHE

    2012-01-01

    In the present paper, the scientific approach of the research starts from the theoretical framework of the simulation concept and then moves to practical reality, providing simulation models for a broad range of risks inherent to any organization and simulating those models using the @Risk (Palisade) software tool. The motivation behind this research lies in the need for simulation models that will allow the person in charge of decision taking i...
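
    As an illustration of the kind of model such tools execute, the sketch below runs a minimal Monte Carlo risk simulation in plain Python/NumPy rather than @Risk; the frequency/severity distributions and all parameter values are illustrative assumptions.

    ```python
    # Minimal Monte Carlo risk-simulation sketch: sample uncertain inputs,
    # propagate them through a simple loss model, read off risk measures.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    frequency = rng.poisson(lam=3.0, size=n)                      # loss events per year, assumed
    severity = rng.lognormal(mean=10.0, sigma=1.0, size=n)        # typical event size, assumed
    annual_loss = frequency * severity

    print(f"expected annual loss : {annual_loss.mean():,.0f}")
    print(f"95% VaR              : {np.percentile(annual_loss, 95):,.0f}")
    print(f"99% VaR              : {np.percentile(annual_loss, 99):,.0f}")
    ```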

  1. Validation of advanced NSSS simulator model for loss-of-coolant accidents

    Energy Technology Data Exchange (ETDEWEB)

    Kao, S.P.; Chang, S.K.; Huang, H.C. [Nuclear Training Branch, Northeast Utilities, Waterford, CT (United States)

    1995-09-01

    The replacement of the NSSS (Nuclear Steam Supply System) model on the Millstone 2 full-scope simulator has significantly increased its fidelity in simulating adverse conditions in the RCS. The new simulator NSSS model is a real-time derivative of the Nuclear Plant Analyzer by ABB. The thermal-hydraulic model is a five-equation, non-homogeneous model for water, steam, and non-condensible gases. The neutronic model is a three-dimensional nodal diffusion model. In order to certify the new NSSS model for operator training, an extensive validation effort has been performed by benchmarking the model performance against RELAP5/MOD2. This paper presents the validation results for the cases of small- and large-break loss-of-coolant accidents (LOCA). Detailed comparisons in the phenomena of reflux-condensation, phase separation, and two-phase natural circulation are discussed.

  2. Nanotechnology convergence and modeling paradigm of sustainable energy system using polymer electrolyte membrane fuel cell as a benchmark example

    International Nuclear Information System (INIS)

    Developments in nanotechnology have led to innovative progress and converging technologies in engineering and science. These demand novel methodologies that enable efficient communications from the nanoscale all the way to decision-making criteria for actual production systems. In this paper, we discuss the convergence of nanotechnology and novel multi-scale modeling paradigms by using the fuel cell system as a benchmark example. This approach includes complex multi-phenomena at different time and length scales along with the introduction of an optimization framework for application-driven nanotechnology research trends. The modeling paradigm introduced here covers the novel holistic integration from atomistic/molecular phenomena to meso/continuum scales. System optimization is also discussed with respect to the reduced order parameters for a coarse-graining procedure in multi-scale model integration as well as system design. The development of a hierarchical multi-scale paradigm consolidates the theoretical analysis and enables large-scale decision-making of process level design, based on first-principles, and therefore promotes the convergence of nanotechnology to sustainable energy technologies.

  3. Nanotechnology convergence and modeling paradigm of sustainable energy system using polymer electrolyte membrane fuel cell as a benchmark example

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Pil Seung; So, Dae Sup; Biegler, Lorenz T.; Jhon, Myung S., E-mail: mj3a@andrew.cmu.edu [Carnegie Mellon University, Department of Chemical Engineering (United States)

    2012-08-15

    Developments in nanotechnology have led to innovative progress and converging technologies in engineering and science. These demand novel methodologies that enable efficient communications from the nanoscale all the way to decision-making criteria for actual production systems. In this paper, we discuss the convergence of nanotechnology and novel multi-scale modeling paradigms by using the fuel cell system as a benchmark example. This approach includes complex multi-phenomena at different time and length scales along with the introduction of an optimization framework for application-driven nanotechnology research trends. The modeling paradigm introduced here covers the novel holistic integration from atomistic/molecular phenomena to meso/continuum scales. System optimization is also discussed with respect to the reduced order parameters for a coarse-graining procedure in multi-scale model integration as well as system design. The development of a hierarchical multi-scale paradigm consolidates the theoretical analysis and enables large-scale decision-making of process level design, based on first-principles, and therefore promotes the convergence of nanotechnology to sustainable energy technologies.

  4. Simulation of sound waves using the Lattice Boltzmann Method for fluid flow: Benchmark cases for outdoor sound propagation

    NARCIS (Netherlands)

    Salomons, E.M.; Lohman, W.J.A.; Zhou, H.

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-fi

  5. Benchmark of CFD Simulations Using Temperatures Measured Within an Enclosed Array of Heater Rods Oriented Vertically and Horizontally

    Science.gov (United States)

    Chalasani, Narayana Rao

    Experiments and computational fluid dynamics/radiation heat transfer simulations of an 8x8 array of heated rods within an aluminum enclosure are performed. This configuration represents a region inside the channel of a spent boiling water reactor (BWR) fuel assembly between two consecutive spacer plates. The heater rods can be oriented horizontally or vertically to represent transport or storage conditions, respectively. The measured and simulated rod-to-wall temperature differences are compared for various heater rod power levels (100, 200, 300, 400 and 500 W), gases (helium and nitrogen), enclosure wall temperatures, pressures (1, 2 and 3 atm) and orientations (horizontal and vertical) to assess the accuracy of the computational fluid dynamics (CFD) code. For analysis of spent nuclear fuel casks, it is crucial to predict the temperature of the hottest rods in an assembly to ensure that none of the fuel cladding exceeds its temperature limit. Simulations show that temperature gradients are much steeper near the enclosure walls than near the center of the heater rod array. The measured maximum heater rod temperatures are above the center of the heater rod array in the nitrogen experiments in both horizontal and vertical orientations, whereas for helium the maximum temperatures are at the center of the heater rod array irrespective of the orientation, owing to the high thermal conductivity of helium. The measured temperatures of rods at symmetric locations are not identical, and the difference is larger for rods close to the enclosure wall than for those far from it. Small but uncontrolled deviations of the rod positions away from the design locations may cause these differences. For the 2-inch-insulated nitrogen experiment in the vertical orientation at 1 atm and a total heater rod power of 500 W, the maximum measured heater rod and enclosure

  6. Benchmark campaign and case study episode in central Europe for development and assessment of advanced GNSS tropospheric models and products

    Science.gov (United States)

    Douša, Jan; Dick, Galina; Kačmařík, Michal; Brožková, Radmila; Zus, Florian; Brenot, Hugues; Stoycheva, Anastasia; Möller, Gregor; Kaplon, Jan

    2016-07-01

    Initial objectives and design of the Benchmark campaign organized within the European COST Action ES1206 (2013-2017) are described in the paper. This campaign has aimed to support the development and validation of advanced Global Navigation Satellite System (GNSS) tropospheric products, in particular high-resolution and ultra-fast zenith total delays (ZTDs) and tropospheric gradients derived from a dense permanent network. A complex data set was collected for the 8-week period when several extreme heavy precipitation episodes occurred in central Europe and caused severe river floods in this area. An initial processing of data sets from GNSS products and numerical weather models (NWMs) provided independently estimated reference parameters - zenith tropospheric delays and tropospheric horizontal gradients. Their provision gave an overview of the products' similarities and complementarities, and thus of the potential for exploiting their synergy optimally in the future. Reference GNSS and NWM results were intercompared and visually analysed using animated maps. ZTDs from two reference GNSS solutions compared to the global ERA-Interim reanalysis showed an accuracy at the 10 mm level in terms of the root mean square (rms) with a negligible overall bias, comparisons to Global Forecast System (GFS) forecasts showed an accuracy at the 12 mm level with an overall bias of -5 mm and, finally, comparisons to the mesoscale ALADIN-CZ forecast showed an accuracy at the 8 mm level with a negligible total bias. The comparison of horizontal tropospheric gradients from GNSS and NWM data demonstrated a very good agreement among independent solutions, with negligible biases and an accuracy of about 0.5 mm. Visual comparisons of maps of zenith wet delays and tropospheric horizontal gradients showed very promising results for future exploitation of advanced GNSS tropospheric products in meteorological applications, such as severe weather event monitoring and weather nowcasting
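
    The bias and rms statistics quoted above reduce to a few lines of arithmetic; the sketch below computes them for two hypothetical ZTD series (the mm values are invented for illustration).

    ```python
    # Bias and rms in the sense used above, for two zenith-total-delay series.
    import numpy as np

    ztd_gnss = np.array([2401.0, 2398.5, 2405.2, 2410.8])  # mm, illustrative
    ztd_nwm  = np.array([2396.0, 2400.1, 2401.0, 2407.5])  # mm, illustrative
    diff = ztd_gnss - ztd_nwm
    print(f"bias = {diff.mean():.1f} mm, rms = {np.sqrt((diff**2).mean()):.1f} mm")
    ```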

  7. A fast and flexible reactor physics model for simulating neutron spectra and depletion in fast reactors

    Science.gov (United States)

    Recktenwald, Geoff; Deinert, Mark

    2010-03-01

    Determining the time dependent concentration of isotopes within a nuclear reactor core is central to the analysis of nuclear fuel cycles. We present a fast, flexible tool for determining the time dependent neutron spectrum within fast reactors. The code (VBUDS: visualization, burnup, depletion and spectra) uses a two-region, multigroup collision probability model to simulate the energy dependent neutron flux and tracks the buildup and burnout of 24 actinides, as well as fission products. While originally developed for LWR simulations, the model is shown to produce fast reactor spectra that show a high degree of fidelity to available fast reactor benchmarks.
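
    The buildup/burnout bookkeeping such a code performs is governed by the Bateman equations, dN/dt = A N, integrated over each burnup step. A minimal sketch, assuming an invented three-nuclide capture/decay chain rather than VBUDS's actual 24-actinide data:

    ```python
    # Minimal depletion (Bateman) step: dN/dt = A @ N, advanced with a
    # matrix exponential. Chain, cross sections and flux are assumed.
    import numpy as np
    from scipy.linalg import expm

    sigma_c = np.array([1e-24, 5e-24, 0.0])   # capture cross sections (cm^2), assumed
    lam     = np.array([0.0, 1e-9, 2e-9])     # decay constants (1/s), assumed
    phi     = 1e14                            # scalar flux (n/cm^2/s), assumed

    # A[i, j]: production of nuclide i from nuclide j, minus losses of i
    A = np.diag(-(sigma_c * phi + lam))
    A[1, 0] = sigma_c[0] * phi                # capture on nuclide 0 feeds nuclide 1
    A[2, 1] = sigma_c[1] * phi + lam[1]       # capture/decay of nuclide 1 feeds nuclide 2

    N0 = np.array([1e22, 0.0, 0.0])           # initial number densities (1/cm^3)
    dt = 30 * 86400                           # one 30-day burnup step (s)
    N = expm(A * dt) @ N0
    print(N)
    ```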

  8. Benchmarking Deep Networks for Predicting Residue-Specific Quality of Individual Protein Models in CASP11

    Science.gov (United States)

    Liu, Tong; Wang, Yiheng; Eickholt, Jesse; Wang, Zheng

    2016-01-01

    Quality assessment of a protein model is the task of predicting the absolute or relative quality of a protein model using computational methods before the native structure is available. Single-model methods only need one model as input and can predict the absolute residue-specific quality of an individual model. Here, we have developed four novel single-model methods (Wang_deep_1, Wang_deep_2, Wang_deep_3, and Wang_SVM) based on stacked denoising autoencoders (SdAs) and support vector machines (SVMs). We evaluated these four methods along with six other methods participating in CASP11 at the global and local levels using Pearson’s correlation coefficients and ROC analysis. As for residue-specific quality assessment, our four methods achieved better performance than most of the six other CASP11 methods in distinguishing the reliably modeled residues from the unreliable ones, as measured by ROC analysis; and our SdA-based method Wang_deep_1 achieved the highest accuracy, 0.77, compared to the SVM-based methods and our ensemble of an SVM and SdAs. However, we found that Wang_deep_2 and Wang_deep_3, both based on an ensemble of multiple SdAs and an SVM, performed slightly better than Wang_deep_1 in terms of ROC analysis, indicating that integrating an SVM with deep networks works well for certain measures. PMID:26763289

  9. Benchmarking Deep Networks for Predicting Residue-Specific Quality of Individual Protein Models in CASP11

    Science.gov (United States)

    Liu, Tong; Wang, Yiheng; Eickholt, Jesse; Wang, Zheng

    2016-01-01

    Quality assessment of a protein model is the task of predicting the absolute or relative quality of a protein model using computational methods before the native structure is available. Single-model methods only need one model as input and can predict the absolute residue-specific quality of an individual model. Here, we have developed four novel single-model methods (Wang_deep_1, Wang_deep_2, Wang_deep_3, and Wang_SVM) based on stacked denoising autoencoders (SdAs) and support vector machines (SVMs). We evaluated these four methods along with six other methods participating in CASP11 at the global and local levels using Pearson’s correlation coefficients and ROC analysis. As for residue-specific quality assessment, our four methods achieved better performance than most of the six other CASP11 methods in distinguishing the reliably modeled residues from the unreliable ones, as measured by ROC analysis; and our SdA-based method Wang_deep_1 achieved the highest accuracy, 0.77, compared to the SVM-based methods and our ensemble of an SVM and SdAs. However, we found that Wang_deep_2 and Wang_deep_3, both based on an ensemble of multiple SdAs and an SVM, performed slightly better than Wang_deep_1 in terms of ROC analysis, indicating that integrating an SVM with deep networks works well for certain measures.

  10. From Physical Benchmarks to Mental Benchmarks: A Four Dimensions Dynamic Model to Assure the Quality of Instructional Activities in Electronic and Virtual Learning Environments

    Science.gov (United States)

    Ahmed Abdelaziz, Hamdy

    2013-01-01

    The objective of this paper was to develop a four dimensions dynamic model for designing instructional activities appropriate to electronic and virtual learning environments. The suggested model is guided by learning principles of cognitivism, constructivism, and connectivism learning theories in order to help online learners to build and acquire…

  11. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  12. Plasma Waves as a Benchmark Problem

    CERN Document Server

    Kilian, Patrick; Schreiner, Cedric; Spanier, Felix

    2016-01-01

    A large number of wave modes exist in a magnetized plasma. Their properties are determined by the interaction of particles and waves. In a simulation code, the correct treatment of field quantities and particle behavior is essential to correctly reproduce the wave properties. Consequently, plasma waves provide test problems that cover a large fraction of the simulation code. The large number of possible wave modes and the freedom to choose parameters make the selection of test problems time consuming and comparison between different codes difficult. This paper therefore aims to provide a selection of test problems, based on different wave modes and with well-defined parameter values, that is accessible to a large number of simulation codes to allow for easy benchmarking and cross validation. Example results are provided for a number of plasma models. For all plasma models and wave modes that are used in the test problems, a mathematical description is provided to clarify notation and avoid possible misunderst...
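
    As an example of the analytic dispersion relations such test problems are checked against, the sketch below evaluates the Bohm-Gross relation for Langmuir waves, omega^2 = omega_pe^2 + 3 k^2 v_th^2, for assumed plasma parameters; a simulation code would be benchmarked by comparing its measured (k, omega) pairs against this curve.

    ```python
    # Bohm-Gross dispersion relation for Langmuir waves; parameters assumed.
    import numpy as np

    e, me, eps0, kB = 1.602e-19, 9.109e-31, 8.854e-12, 1.381e-23
    n_e, T_e = 1e18, 1e5                      # electron density (1/m^3), temperature (K), assumed

    w_pe = np.sqrt(n_e * e**2 / (eps0 * me))  # electron plasma frequency
    v_th = np.sqrt(kB * T_e / me)             # electron thermal speed

    k = np.linspace(0.0, 0.3 / (v_th / w_pe), 50)   # wavenumbers up to 0.3 / Debye length
    omega = np.sqrt(w_pe**2 + 3.0 * k**2 * v_th**2)
    print(f"w_pe = {w_pe:.3e} rad/s; w(k_max)/w_pe = {omega[-1] / w_pe:.3f}")
    ```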

  13. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
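
    The merged machine/program characterization described above amounts to combining per-operation machine timings with per-operation program counts; a minimal sketch, with invented operation classes and numbers:

    ```python
    # Estimated runtime as the dot product of a machine characterization
    # (seconds per abstract operation) and a program characterization
    # (operation counts). All names and values are illustrative.
    machine_times = {"flop": 2e-9, "mem": 8e-9, "branch": 1e-9}   # s/op, assumed
    program_counts = {"flop": 5e9, "mem": 2e9, "branch": 1e9}     # ops, assumed

    t_est = sum(program_counts[op] * machine_times[op] for op in machine_times)
    print(f"estimated execution time: {t_est:.2f} s")
    ```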

  14. Singlet Extensions of the Standard Model at LHC Run 2: Benchmarks and Comparison with the NMSSM

    CERN Document Server

    Costa, Raul; Sampaio, Marco O P; Santos, Rui

    2015-01-01

    The Complex singlet extension of the Standard Model (CxSM) is the simplest extension which provides scenarios for Higgs pair production with different masses. The model has two interesting phases: the dark matter phase, with a Standard Model-like Higgs boson, a new scalar and a dark matter candidate; and the broken phase, with all three neutral scalars mixing. In the latter phase Higgs decays into a pair of two different Higgs bosons are possible. In this study we analyse Higgs-to-Higgs decays in the framework of singlet extensions of the Standard Model (SM), with focus on the CxSM. After demonstrating that scenarios with large rates for such chain decays are possible we perform a comparison between the NMSSM and the CxSM. We find that, based on Higgs-to-Higgs decays, the only possibility to distinguish the two models at the LHC run 2 is through final states with two different scalars. This conclusion builds a strong case for searches for final states with two different scalars at the LHC run 2. Finally, we p...

  15. New LHC benchmarks for the CP -conserving two-Higgs-doublet model

    OpenAIRE

    Haber, Howard E.; Stål, Oscar

    2015-01-01

    We introduce a strategy to study the parameter space of the general, CP -conserving, two-Higgs-doublet Model (2HDM) with a softly broken Z2 -symmetry by means of a new “hybrid” basis. In this basis the input parameters are the measured values of the mass of the observed Standard Model (SM)-like Higgs boson and its coupling strength to vector boson pairs, the mass of the second CP -even Higgs boson, the ratio of neutral Higgs vacuum expectation values, and three additional dimensionless parame...

  16. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation.

    Directory of Open Access Journals (Sweden)

    Erik M Salomons

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing.

  17. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation.

    Science.gov (United States)

    Salomons, Erik M; Lohman, Walter J A; Zhou, Han

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing.
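
    The two dissipation-reduction approaches can be made concrete through the standard BGK-LBM viscosity relation nu = c_s^2 (tau - 1/2) dt, with c_s = dx / (sqrt(3) dt). A small sketch, with assumed lattice parameters:

    ```python
    # Kinematic viscosity of a BGK lattice Boltzmann scheme and the two
    # knobs discussed above. Lattice spacings, time steps and tau assumed.
    def lbm_viscosity(dx, dt, tau):
        cs2 = (dx / dt) ** 2 / 3.0            # lattice sound speed squared
        return cs2 * (tau - 0.5) * dt

    base        = lbm_viscosity(dx=1e-2, dt=1e-5, tau=0.6)
    smaller_tau = lbm_viscosity(dx=1e-2, dt=1e-5, tau=0.51)  # approach i: lower nu via tau
    finer_grid  = lbm_viscosity(dx=5e-3, dt=5e-6, tau=0.6)   # approach ii: smaller lattice spacing
    print(base, smaller_tau, finer_grid)
    ```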

  18. Analytical solutions for benchmarking cold regions subsurface water flow and energy transport models: one-dimensional soil thaw with conduction and advection

    Science.gov (United States)

    Kurylyk, Barret L.; McKenzie, Jeffrey M.; MacQuarrie, Kerry T. B.; Voss, Clifford I.

    2014-01-01

    Numerous cold regions water flow and energy transport models have emerged in recent years. Dissimilarities often exist in their mathematical formulations and/or numerical solution techniques, but few analytical solutions exist for benchmarking flow and energy transport models that include pore water phase change. This paper presents a detailed derivation of the Lunardini solution, an approximate analytical solution for predicting soil thawing subject to conduction, advection, and phase change. Fifteen thawing scenarios are examined by considering differences in porosity, surface temperature, Darcy velocity, and initial temperature. The accuracy of the Lunardini solution is shown to be proportional to the Stefan number. The analytical solution results obtained for soil thawing scenarios with water flow and advection are compared to those obtained from the finite element model SUTRA. Three problems, two involving the Lunardini solution and one involving the classic Neumann solution, are recommended as standard benchmarks for future model development and testing.
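
    For reference, the classic one-phase Neumann problem recommended above can be solved in a few lines: the thaw depth is X(t) = 2 lambda sqrt(alpha t), where lambda satisfies sqrt(pi) lambda exp(lambda^2) erf(lambda) = Ste and Ste = c dT / L_f is the Stefan number. A sketch with assumed soil properties (this is the conduction-only limit, not the full Lunardini solution with advection):

    ```python
    # One-phase Neumann (Stefan) thaw benchmark; material values assumed.
    from math import erf, exp, pi, sqrt
    from scipy.optimize import brentq

    c, Lf, dT = 2.0e3, 3.34e5, 5.0   # heat capacity (J/kg/K), latent heat (J/kg), surface excess T (K)
    alpha = 1.0e-6                   # thermal diffusivity of thawed soil (m^2/s)
    Ste = c * dT / Lf                # Stefan number (solution accuracy scales with it, per the abstract)

    lam = brentq(lambda x: sqrt(pi) * x * exp(x**2) * erf(x) - Ste, 1e-9, 5.0)
    t = 30 * 86400                   # 30 days
    print(f"Ste = {Ste:.3f}, lambda = {lam:.4f}, thaw depth = {2 * lam * sqrt(alpha * t):.3f} m")
    ```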

  19. Terrestrial Microgravity Model and Threshold Gravity Simulation using Magnetic Levitation

    Science.gov (United States)

    Ramachandran, N.

    2005-01-01

    What is the threshold gravity (minimum gravity level) required for the nominal functioning of the human system? What dosage is required? Do human cell lines behave differently in microgravity in response to an external stimulus? The critical need for such a gravity simulator is emphasized by recent experiments on human epithelial cells and lymphocytes on the Space Shuttle clearly showing that cell growth and function are markedly different from those observed terrestrially. Those differences are also dramatic between cells grown in space and those in Rotating Wall Vessels (RWV), or NASA bioreactors, often used to simulate microgravity, indicating that although morphological growth patterns (three dimensional growth) can be successfully simulated using RWVs, cell function performance is not reproduced - a critical difference. If cell function is dramatically affected by gravity off-loading, then cell response to stimuli such as radiation, stress, etc. can be very different from terrestrial cell lines. Yet, we have no good gravity simulator for use in study of these phenomena. This represents a profound shortcoming for countermeasures research. We postulate that we can use magnetic levitation of cells and tissue, through the use of strong magnetic fields and field gradients, as a terrestrial microgravity model to study human cells. Specific objectives of the research are: 1. To develop a tried, tested and benchmarked terrestrial microgravity model for cell culture studies; 2. Gravity threshold determination; 3. Dosage (magnitude and duration) of g-level required for nominal functioning of cells; 4. Comparisons of the magnetic levitation model to other models such as RWV, hind limb suspension, etc.; and 5. Cellular response to reduced gravity levels of Moon and Mars. The paper will discuss experiments and modeling work to date in support of this project.

  20. An introduction to enterprise modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ostic, J.K.; Cannon, C.E. [Los Alamos National Lab., NM (United States). Technology Modeling and Analysis Group

    1996-09-01

    As part of an ongoing effort to continuously improve productivity, quality, and efficiency of both industry and Department of Energy enterprises, Los Alamos National Laboratory is investigating various manufacturing and business enterprise simulation methods. A number of enterprise simulation software models are being developed to enable engineering analysis of enterprise activities. In this document the authors define the scope of enterprise modeling and simulation efforts, and review recent work in enterprise simulation at Los Alamos National Laboratory as well as at other industrial, academic, and research institutions. References of enterprise modeling and simulation methods and a glossary of enterprise-related terms are provided.

  1. Structured building model reduction toward parallel simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University

    2013-08-26

    Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any inter-processor communication.
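
    In the spirit of the structured reduction surveyed above, the sketch below reduces a toy linear thermal model dT/dt = A T + B u by keeping only its slowest eigenmodes; the three-node network is an assumed example, not one of the paper's benchmark buildings.

    ```python
    # Modal truncation of a small linear thermal network; values assumed.
    import numpy as np

    A = np.array([[-2.0,  1.0,  0.0],
                  [ 1.0, -3.0,  1.0],
                  [ 0.0,  1.0, -1.5]])   # node-to-node conductances (toy values)
    B = np.array([[1.0], [0.0], [0.5]])  # input map (heat gains)

    w, V = np.linalg.eig(A)
    keep = np.argsort(-w.real)[:2]       # two slowest (least negative) modes
    Ar = np.diag(w[keep])                # reduced dynamics
    Br = np.linalg.solve(V, B)[keep]     # reduced input map (V^-1 B, truncated)
    print(Ar.real, Br.real, sep="\n")
    ```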

  2. Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2014-01-01

    This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).

  3. Benchmarking of numerical models describing the dispersion of radionuclides in the Arctic Seas

    DEFF Research Database (Denmark)

    Scott, E.M.; Gurbutt, P.; Harms, I.;

    1997-01-01

    As part of the International Arctic Seas Assessment Project (IASAP) of the International Atomic Energy Agency (IAEA), a working group was created to model the dispersal and transfer of radionuclides released from radioactive waste disposed of in the Kara Sea. The objectives of this group are: (1...

  4. Finite Element Method Modeling of Sensible Heat Thermal Energy Storage with Innovative Concretes and Comparative Analysis with Literature Benchmarks

    OpenAIRE

    Claudio Ferone; Francesco Colangelo; Domenico Frattini; Giuseppina Roviello; Raffaele Cioffi; Rosa di Maggio

    2014-01-01

    Efficient systems for high performance buildings are required to improve the integration of renewable energy sources and to reduce primary energy consumption from fossil fuels. This paper is focused on sensible heat thermal energy storage (SHTES) systems using solid media and numerical simulation of their transient behavior using the finite element method (FEM). Unlike other papers in the literature, the numerical model and simulation approach has simultaneously taken into consideration vario...

  5. A physiological production model for cacao : results of model simulations

    NARCIS (Netherlands)

    Zuidema, P.A.; Leffelaar, P.A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.

  6. Dose-response modeling : Evaluation, application, and development of procedures for benchmark dose analysis in health risk assessment of chemical substances

    OpenAIRE

    Sand, Salomon

    2005-01-01

    In this thesis, dose-response modeling and procedures for benchmark dose (BMD) analysis in health risk assessment of chemical substances have been investigated. The BMD method has been proposed as an alternative to the NOAEL (no-observed-adverse-effect-level) approach in health risk assessment of non-genotoxic agents. According to the BMD concept, a dose-response model is fitted to data and the BMD is defined as the dose causing a predetermined change in response. A lowe...
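
    A minimal sketch of the BMD concept as stated above: fit a dose-response model and invert it at a predetermined change in response. The Hill model, the toy data and the 10% benchmark response are illustrative assumptions (a regulatory BMD analysis would additionally report a lower confidence bound, the BMDL).

    ```python
    # Fit a dose-response curve, then solve for the dose giving a
    # predetermined (10%) change in response over background.
    import numpy as np
    from scipy.optimize import curve_fit, brentq

    def hill(d, e0, emax, ed50, n):
        return e0 + emax * d**n / (ed50**n + d**n)

    dose = np.array([0, 1, 3, 10, 30, 100], dtype=float)     # toy data
    resp = np.array([1.0, 1.05, 1.2, 1.8, 2.6, 3.0])
    (e0, emax, ed50, n), _ = curve_fit(hill, dose, resp, p0=[1, 2, 10, 1])

    bmr_level = e0 * 1.10    # response at a 10% change from background
    bmd = brentq(lambda d: hill(d, e0, emax, ed50, n) - bmr_level, 1e-6, 100.0)
    print(f"BMD(10%) = {bmd:.2f}")
    ```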

  7. Benchmark of the radiation field simulations of the LHC injection lines with the RadMon detectors

    CERN Document Server

    Boccone, V; Kramer, D; Roeed, K

    2011-01-01

    In this paper we present the high energy hadron (HEH) fluence simulations in the LHC injection regions (TI2 and TI8) performed with the FLUKA Monte-Carlo code, and we compare the expected nominal single event upset (SEU) counts in the RadMon detectors with the measured values. During the LHC injection setup the 450 GeV/c proton beam from the Super Proton Synchrotron (SPS) is progressively adjusted and aligned through the two injection lines up to the LHC septum magnets. During the first phase of alignment the beam is dumped onto the injection line dumps (TEDs), and the debris of these interactions streams through the junction tunnel, which contains sensitive electronic equipment. In the case of the TI8 injection line, the losses on the TCDIH.87904 collimator might also represent a problem for the electronics in the UJ87 tunnel.

  8. Nonsmooth Modeling and Simulation for Switched Circuits

    CERN Document Server

    Acary, Vincent; Brogliato, Bernard

    2011-01-01

    "Nonsmooth Modeling and Simulation for Switched Circuits" concerns the modeling and the numerical simulation of switched circuits with the nonsmooth dynamical systems (NSDS) approach, using piecewise-linear and multivalued models of electronic devices like diodes, transistors, switches. Numerous examples (ranging from introductory academic circuits to various types of power converters) are analyzed and many simulation results obtained with the INRIA open-source SICONOS software package are presented. Comparisons with SPICE and hybrid methods demonstrate the power of the NSDS approach

  9. Software-Engineering Process Simulation Model (SEPS)

    OpenAIRE

    Lin, C. Y.; Abdel-Hamid, T.; Sherif, J.S.

    1999-01-01

    This article describes the Software-Engineering Process Simulation (SEPS) model developed at JPL. SEPS is a dynamic simulation model of the software project-development process. It uses the feedback principles of system dynamics to simulate the dynamic interactions among various software life-cycle development activities and management decision-making processes. The model is designed to be a planning tool to examine trade-offs of cost, schedule, and functionality, and to test ...

  10. Simulation modeling and analysis with Arena

    CERN Document Server

    Altiok, Tayfur

    2007-01-01

    Simulation Modeling and Analysis with Arena is a highly readable textbook which treats the essentials of the Monte Carlo discrete-event simulation methodology, and does so in the context of the popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings. · Introduces the concept of discrete event Monte Carlo simulation, the most commonly used methodology for modeli...

  11. Assessment of CTF Boiling Transition and Critical Heat Flux Modeling Capabilities Using the OECD/NRC BFBT and PSBT Benchmark Databases

    Directory of Open Access Journals (Sweden)

    Maria Avramova

    2013-01-01

    Over the last few years, the Pennsylvania State University (PSU) under the sponsorship of the US Nuclear Regulatory Commission (NRC) has prepared, organized, conducted, and summarized two international benchmarks based on the NUPEC data—the OECD/NRC Full-Size Fine-Mesh Bundle Test (BFBT) Benchmark and the OECD/NRC PWR Sub-Channel and Bundle Test (PSBT) Benchmark. The benchmarks’ activities have been conducted in cooperation with the Nuclear Energy Agency/Organization for Economic Co-operation and Development (NEA/OECD) and the Japan Nuclear Energy Safety (JNES) Organization. This paper presents an application of the joint Penn State University/Technical University of Madrid (UPM) version of the well-known sub-channel code COBRA-TF (Coolant Boiling in Rod Array-Two Fluid), namely CTF, to the steady state critical power and departure from nucleate boiling (DNB) exercises of the OECD/NRC BFBT and PSBT benchmarks. The goal is two-fold: firstly, to assess these models and to examine their strengths and weaknesses; and secondly, to identify the areas for improvement.
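
    The departure-from-nucleate-boiling margin that sub-channel codes like CTF report is, at its core, the ratio of local critical heat flux to local actual heat flux. A sketch with an assumed chopped-sine axial power profile and a flat placeholder CHF value (real analyses use empirical CHF correlations evaluated at local flow conditions):

    ```python
    # Axial DNBR profile: local critical heat flux over local heat flux.
    import numpy as np

    H = 3.66                                   # heated length (m), assumed
    z = np.linspace(0.0, H, 200)
    q_local = 1.2e6 * np.sin(np.pi * z / H)    # chopped-sine heat flux (W/m^2), assumed
    q_chf = 2.1e6                              # CHF from a correlation, assumed constant here

    dnbr = q_chf / np.maximum(q_local, 1e3)    # guard against division by ~0 at the ends
    print(f"minimum DNBR = {dnbr.min():.2f} at z = {z[np.argmin(dnbr)]:.2f} m")
    ```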

  12. Network Modeling and Simulation A Practical Perspective

    CERN Document Server

    Guizani, Mohsen; Khan, Bilal

    2010-01-01

    Network Modeling and Simulation is a practical guide to using modeling and simulation to solve real-life problems. The authors give a comprehensive exposition of the core concepts in modeling and simulation, and then systematically address the many practical considerations faced by developers in modeling complex large-scale systems. The authors provide examples from computer and telecommunication networks and use these to illustrate the process of mapping generic simulation concepts to domain-specific problems in different industries and disciplines. Key features: Provides the tools and strate

  13. BN-600 full MOX core benchmark analysis

    International Nuclear Information System (INIS)

    As a follow-up of the BN-600 hybrid core benchmark, a full MOX core benchmark was performed within the framework of the IAEA co-ordinated research project. Discrepancies between the values of main reactivity coefficients obtained by the participants for the BN-600 full MOX core benchmark appear to be larger than those in the previous hybrid core benchmarks on traditional core configurations. This arises from uncertainties in the proper modelling of the axial sodium plenum above the core. It was recognized that the sodium density coefficient strongly depends on the core model configuration of interest (hybrid core vs. fully MOX fuelled core with sodium plenum above the core) in conjunction with the calculation method (diffusion vs. transport theory). The effects of the discrepancies revealed between the participants' results on the ULOF and UTOP transient behaviours of the BN-600 full MOX core were investigated in simplified transient analyses. Generally the diffusion approximation predicts more benign consequences for the ULOF accident but more hazardous ones for the UTOP accident when compared with the transport theory results. The heterogeneity effect does not have any significant influence on the simulation of the transient. The comparison of the transient analyses results concluded that the fuel Doppler coefficient and the sodium density coefficient are the two most important coefficients for understanding the ULOF transient behaviour. In particular, the uncertainty in evaluating the sodium density coefficient distribution has the largest impact on the description of reactor dynamics. This is because the maximum sodium temperature rise takes place at the top of the core and in the sodium plenum.

  14. Magnetic Design and Code Benchmarking of the SMC (Short Model Coil) Dipole Magnet

    CERN Document Server

    Manil, P; Rochford, J; Fessia, P; Canfer, S; Baynham, E; Nunio, F; de Rijk, G; Védrine, P

    2010-01-01

    The Short Model Coil (SMC) working group was set up in February 2007 to complement the Next European Dipole (NED) program, in order to develop a short-scale model of a Nb3Sn dipole magnet. In 2009, the EuCARD/HFM (High Field Magnets) program took over these programs. The SMC group comprises four laboratories: the CERN/TE-MSC group (CH), CEA/IRFU (FR), RAL (UK) and LBNL (US). The SMC magnet is designed to reach a peak field of about 13 Tesla (T) on the conductor, using a 2500 A/mm2 Powder-In-Tube (PIT) strand. The aim of this magnet device is to study the degradation of the magnetic properties of the Nb3Sn cable, by applying different levels of pre-stress. To fully satisfy this purpose, a versatile and easy-to-assemble structure has been realized. The design of the SMC magnet has been developed from an existing dipole magnet, the SD01, designed, built and tested at LBNL with support from CEA. The goal of the magnetic design presented in this paper is to match the high field region with the high stress region, located alo...

  15. Terrestrial Microgravity Model and Threshold Gravity Simulation using Magnetic Levitation

    Science.gov (United States)

    Ramachandran, N.

    2005-01-01

    What is the threshold gravity (minimum gravity level) required for the nominal functioning of the human system? What dosage is required? Do human cell lines behave differently in microgravity in response to an external stimulus? The critical need for such a gravity simulator is emphasized by recent experiments on human epithelial cells and lymphocytes on the Space Shuttle clearly showing that cell growth and function are markedly different from those observed terrestrially. Those differences are also dramatic between cells grown in space and those in Rotating Wall Vessels (RWV), or NASA bioreactors, often used to simulate microgravity, indicating that although morphological growth patterns (three dimensional growth) can be successfully simulated using RWVs, cell function performance is not reproduced - a critical difference. If cell function is dramatically affected by gravity off-loading, then cell response to stimuli such as radiation, stress, etc. can be very different from terrestrial cell lines. Yet, we have no good gravity simulator for use in study of these phenomena. This represents a profound shortcoming for countermeasures research. We postulate that we can use magnetic levitation of cells and tissue, through the use of strong magnetic fields and field gradients, as a terrestrial microgravity model to study human cells. Specific objectives of the research are: 1. To develop a tried, tested and benchmarked terrestrial microgravity model for cell culture studies; 2. Gravity threshold determination; 3. Dosage (magnitude and duration) of g-level required for nominal functioning of cells; 4. Comparisons of the magnetic levitation model to other models such as RWV, hind limb suspension, etc.; and 5. Cellular response to reduced gravity levels of Moon and Mars.

  16. A Uranium Bioremediation Reactive Transport Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, acetate loading periods and rates, microbially-mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10 day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50 day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. Major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark is the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
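
    As an illustration of the Monod-type rate laws named above, the sketch below writes a dual-Monod rate for acetate-dependent U(VI) reduction; the rate constant and half-saturation values are invented placeholders, not the benchmark's calibrated parameters.

    ```python
    # Dual-Monod rate law: both the electron donor (acetate) and the
    # terminal electron acceptor (U(VI)) limit the microbial rate.
    def monod_rate(k_max, biomass, acetate, K_ac, u6, K_u6):
        """Rate in mol/L/s; all concentrations in mol/L (illustrative units)."""
        return k_max * biomass * (acetate / (K_ac + acetate)) * (u6 / (K_u6 + u6))

    r = monod_rate(k_max=1e-5, biomass=1e-4, acetate=3e-3,
                   K_ac=1e-4, u6=1e-6, K_u6=5e-7)
    print(f"U(VI) reduction rate: {r:.3e} mol/L/s")
    ```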

  17. Enabling benchmarking and improving operational efficiency at nuclear power plants through adoption of a common process model: SNPM (standard nuclear performance model)

    International Nuclear Information System (INIS)

    To support the projected increase in base-load electricity demand, nuclear operating companies must maintain or improve upon current generation rates, all while their assets continue to age. Certainly, new plants are and will be built; however, the bulk of the world's nuclear generation comes from plants constructed in the 1970s and 1980s. The nuclear energy industry in the United States has dramatically increased its electricity production over the past decade, from a 75.1% net capacity factor in 1994 to 91.9% by 2002 (source: NEI US Nuclear Industry Net Capacity Factors, 1980 to 2003). This increase, coupled with lowered production costs, from $2.43 in 1994 to $1.71 in 2002 (adjusted for inflation; source: NEI US Nuclear Industry Net Production Costs, 1980 to 2002), is due in large part to a focus on operational excellence that is driven by an industry effort to develop and share best practices for the purposes of benchmarking and improving overall performance. These best-practice processes, known as the standard nuclear performance model (SNPM), present an opportunity for European nuclear power generators who are looking to improve current production rates. In essence the SNPM is a model for safe, reliable, and economically competitive nuclear power generation. The SNPM has been a joint effort of several industry bodies: the Nuclear Energy Institute, the Electric Utility Cost Group, and the Institute of Nuclear Power Operations (INPO). The standard nuclear performance model (see figure 1) comprises eight primary processes, supported by forty-four sub-processes and a number of company-specific activities and tasks. The processes were originally envisioned by INPO in 1994 and evolved into the SNPM, which was launched in 1998. Since that time communities of practice (CoPs) have emerged via workshops to further improve the processes and their inter-operability. CoP representatives include people from nuclear power operating companies, policy bodies, industry suppliers and consultants, and

  18. Benchmarking the invariant embedding method against analytical solutions in model transport problems

    Directory of Open Access Journals (Sweden)

    Wahlberg Malin

    2006-01-01

    The purpose of this paper is to demonstrate the use of the invariant embedding method in a few model transport problems for which it is also possible to obtain an analytical solution. The use of the method is demonstrated in three different areas. The first is the calculation of the energy spectrum of sputtered particles from a scattering medium without absorption, where the multiplication (particle cascade) is generated by recoil production. Both constant and energy dependent cross-sections with a power law dependence were treated. The second application concerns the calculation of the path length distribution of reflected particles from a medium without multiplication. This is a relatively novel application, since the embedding equations do not resolve the depth variable. The third application concerns the demonstration that solutions in an infinite medium and in a half-space are interrelated through embedding-like integral equations, by the solution of which the flux reflected from a half-space can be reconstructed from solutions in an infinite medium or vice versa. In all cases, the invariant embedding method proved to be robust, fast, and monotonically converging to the exact solutions.

  19. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    Science.gov (United States)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assert their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
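
    The Roofline correlation used above can itself be written in two lines: attainable throughput is the lesser of peak compute and memory bandwidth times arithmetic intensity. A sketch with assumed hardware numbers:

    ```python
    # Roofline model: attainable GFLOP/s as a function of arithmetic
    # intensity (AI). Peak compute and bandwidth values are assumed.
    def roofline(ai_flops_per_byte, peak_gflops=500.0, bw_gbytes=60.0):
        return min(peak_gflops, bw_gbytes * ai_flops_per_byte)

    for ai in (0.5, 2.0, 8.0, 16.0):
        print(f"AI = {ai:5.1f} flop/byte -> {roofline(ai):6.1f} GFLOP/s attainable")
    ```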

  20. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in...

  1. Benchmarking and Modeling of a Conventional Mid-Size Car Using ALPHA (SAE Paper 2015-01-1140)

    Science.gov (United States)

    The Advanced Light-Duty Powertrain and Hybrid Analysis (ALPHA) modeling tool was created by EPA to estimate greenhouse gas (GHG) emissions of light-duty vehicles. ALPHA is a physics-based, forward-looking, full vehicle computer simulation capable of analyzing various vehicle type...

  2. Track 3: growth of nuclear technology and research numerical and computational aspects of the coupled three-dimensional core/plant simulations: organization for economic cooperation and development/U.S. nuclear regulatory commission pressurized water reactor main-steam-line-break benchmark-I. 5. Analyses of the OECD MSLB Benchmark with the Codes DYN3D and DYN3D/ATHLET

    International Nuclear Information System (INIS)

    The code DYN3D coupled with ATHLET was used for the analysis of the OECD Main-Steam-Line-Break (MSLB) Benchmark, which is based on real plant design and operational data of the TMI-1 pressurized water reactor (PWR). Like the codes RELAP or TRAC, ATHLET is a thermal-hydraulic system code with point or one-dimensional neutron kinetic models. ATHLET, developed by the Gesellschaft für Anlagen- und Reaktorsicherheit, is widely used in Germany for safety analyses of nuclear power plants. DYN3D consists of three-dimensional nodal kinetic models and a thermal-hydraulic part with parallel coolant channels of the reactor core. DYN3D was coupled with ATHLET for analyzing more complex transients with interactions between coolant flow conditions and core behavior. It can be applied to the whole spectrum of operational transients and accidents, from small and intermediate leaks to large breaks of coolant loops or steam lines at PWRs and boiling water reactors. The so-called external coupling is used for the benchmark, where the thermal hydraulics is split into two parts: DYN3D describes the thermal hydraulics of the core, while ATHLET models the coolant system. Three exercises of the benchmark were simulated: Exercise 1, a point kinetics plant simulation (ATHLET); Exercise 2, a coupled three-dimensional neutronics/core thermal-hydraulics evaluation of the core response for given core thermal-hydraulic boundary conditions (DYN3D); and Exercise 3, a best-estimate coupled core-plant transient analysis (DYN3D/ATHLET). Considering the best-estimate cases (scenarios 1 of exercises 2 and 3), the reactor does not reach criticality after the reactor trip. To define more serious tests for the codes, the efficiency of the control rods was decreased (scenarios 2 of exercises 2 and 3) to obtain recriticality during the transient. Besides the standard simulation given by the specification, modifications are introduced for sensitivity studies. The results presented here show (a) the influence of a reduced

  3. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    International Nuclear Information System (INIS)

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in

  4. Evaluation and comparison of benchmark QSAR models to predict a relevant REACH endpoint: The bioconcentration factor (BCF)

    Energy Technology Data Exchange (ETDEWEB)

    Gissi, Andrea [Laboratory of Environmental Chemistry and Toxicology, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milano (Italy); Dipartimento di Farmacia – Scienze del Farmaco, Università degli Studi di Bari “Aldo Moro”, Via E. Orabona 4, 70125 Bari (Italy); Lombardo, Anna; Roncaglioni, Alessandra [Laboratory of Environmental Chemistry and Toxicology, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milano (Italy); Gadaleta, Domenico [Laboratory of Environmental Chemistry and Toxicology, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milano (Italy); Dipartimento di Farmacia – Scienze del Farmaco, Università degli Studi di Bari “Aldo Moro”, Via E. Orabona 4, 70125 Bari (Italy); Mangiatordi, Giuseppe Felice; Nicolotti, Orazio [Dipartimento di Farmacia – Scienze del Farmaco, Università degli Studi di Bari “Aldo Moro”, Via E. Orabona 4, 70125 Bari (Italy); Benfenati, Emilio, E-mail: emilio.benfenati@marionegri.it [Laboratory of Environmental Chemistry and Toxicology, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milano (Italy)

    2015-02-15

    regression (R^2 = 0.85) and sensitivity (average > 0.70) for new compounds in the AD but not present in the training set. However, no single optimal model exists and, thus, a case-by-case assessment would be wise. Yet, integrating the wealth of information from multiple models remains the winning approach. - Highlights: • REACH encourages the use of in silico methods in the assessment of chemicals safety. • The performances of nine BCF models were evaluated on a benchmark database of 851 chemicals. • We compared the models on the basis of both regression and classification performance. • Statistics on chemicals out of the training set and/or within the applicability domain were compiled. • The results show that QSAR models are useful as weight-of-evidence in support of other methods.

  5. Uncertainty and sensitivity analysis of control strategies using the benchmark simulation model No1 (BSM1)

    DEFF Research Database (Denmark)

    Flores-Alsina, Xavier; Rodriguez-Roda, Ignasi; Sin, Gürkan;

    2009-01-01

    -NH) controller reduce uncertainty in both overall pollution removal and effluent total Kjeldahl nitrogen. Also, control strategies with an external carbon source reduce the effluent nitrate (S-NO) uncertainty, increasing both their economic cost and variability as a trade-off. Finally, the maximum specific...... autotrophic growth rate (mu(A)) causes most of the variance in the effluent for all the evaluated control strategies. The influence of denitrification related parameters, e.g. eta(g) (anoxic growth rate correction factor) and eta(h) (anoxic hydrolysis rate correction factor), becomes less important when a S......

  6. Evaluation of cloud resolving model simulations of midlatitude cirrus with ARM and A-Train observations

    Science.gov (United States)

    Muehlbauer, A. D.; Ackerman, T. P.; Lawson, P.; Xie, S.; Zhang, Y.

    2015-12-01

    This paper evaluates cloud resolving model (CRM) and cloud system-resolving model (CSRM) simulations of a midlatitude cirrus case with comprehensive observations collected under the auspices of the Atmospheric Radiation Measurement (ARM) program and with spaceborne observations from the National Aeronautics and Space Administration (NASA) A-Train satellites. Vertical profiles of temperature, relative humidity and wind speeds are reasonably well simulated by the CSRM and CRM, but there are remaining biases in the temperature, wind speeds and relative humidity, which can be mitigated through nudging the model simulations toward the observed radiosonde profiles. Simulated vertical velocities are underestimated in all simulations except in the CRM simulations with grid spacings of 500 m or finer, which suggests that turbulent vertical air motions in cirrus clouds need to be parameterized in GCMs and in CSRM simulations with horizontal grid spacings on the order of 1 km. The simulated ice water content and ice number concentrations agree with the observations in the CSRM but are underestimated in the CRM simulations. The underestimation of ice number concentrations is consistent with the overestimation of radar reflectivity in the CRM simulations and suggests that the model produces too many large ice particles, especially toward cloud base. Simulated cloud profiles are rather insensitive to perturbations in the initial conditions or the dimensionality of the model domain, but the treatment of the forcing data has a considerable effect on the outcome of the model simulations. Despite considerable progress in observations and microphysical parameterizations, simulating the microphysical, macrophysical and radiative properties of cirrus remains challenging. Comparing model simulations with observations from multiple instruments and observational platforms is important for revealing model deficiencies and for providing rigorous benchmarks. However, there still is considerable

  7. Benchmarking concentrating photovoltaic systems

    Science.gov (United States)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need to provide improved economic viability. To achieve this goal, photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts is therefore being pursued. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, a way to estimate the cost-performance of a complete solar energy system is to use computer-aided modeling. In this work a benchmark tool is introduced based on a modular programming concept. The overall implementation is done in MATLAB, whereas the Advanced Systems Analysis Program (ASAP) is used for ray-tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely advanced source modeling including time and location dependence, and advanced optical system analysis of various optical designs, to obtain an evaluation of the figure of merit. An important figure of merit, the energy yield of a given photovoltaic system at a geographical position over a specific period, can thus be calculated.
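
    As a minimal illustration of this energy-yield figure of merit, the sketch below integrates an assumed optics and cell efficiency chain over a short, hypothetical irradiance series. All values are placeholders; the actual tool couples MATLAB with ASAP ray tracing rather than this simplified chain.

        # Sketch: energy yield of a CPV system over a (very short,
        # hypothetical) period, assuming an hourly DNI series and a lumped
        # optics-times-cell efficiency. All numbers are illustrative.
        import numpy as np

        dni = np.array([0, 150, 420, 680, 830, 760, 510, 220, 0], float)  # W/m^2, hourly
        aperture_area = 1.0    # m^2 of concentrator aperture
        optical_eff = 0.80     # concentrator optics (e.g. from ray tracing)
        cell_eff = 0.35        # multi-junction cell efficiency

        power = dni * aperture_area * optical_eff * cell_eff  # W at each hour
        energy_yield = np.trapz(power, dx=1.0) / 1000.0       # kWh over the period
        print(f"energy yield over the period: {energy_yield:.2f} kWh")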

  8. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  9. Benchmark Dose Modeling

    Science.gov (United States)

    Finite doses are employed in experimental toxicology studies. Under the traditional methodology, the point of departure (POD) value for low dose extrapolation is identified as one of these doses. Dose spacing necessarily precludes a more accurate description of the POD value. ...
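
    For readers unfamiliar with the alternative the record alludes to, the hedged sketch below fits a continuous dose-response model and reads off the dose at a benchmark response, rather than picking one of the tested doses as the POD. The Hill model, its bounds, and the data are invented for illustration; they are not EPA's.

        # Sketch: fit a continuous dose-response model and invert it at a
        # benchmark response (BMR) to obtain a benchmark dose (BMD).
        import numpy as np
        from scipy.optimize import brentq, curve_fit

        doses = np.array([0.0, 1.0, 3.0, 10.0, 30.0])
        response = np.array([0.02, 0.05, 0.12, 0.30, 0.42])  # fraction affected

        def hill(d, top, ed50, n):
            return top * d**n / (ed50**n + d**n)

        params, _ = curve_fit(hill, doses, response, p0=[0.5, 5.0, 1.0],
                              bounds=([0.0, 0.1, 0.1], [1.0, 100.0, 5.0]))

        bmr = 0.10  # 10% benchmark response
        bmd = brentq(lambda d: hill(d, *params) - bmr, 1e-6, doses.max())
        print(f"BMD at BMR = {bmr:.0%}: {bmd:.2f}")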

  10. Whole-building Hygrothermal Simulation Model

    DEFF Research Database (Denmark)

    Rode, Carsten; Grau, Karl

    2003-01-01

    An existing integrated simulation tool for dynamic thermal simulation of buildings was extended with a transient model for moisture release and uptake in building materials. Validation of the new model was begun with comparison against measurements in an outdoor test cell furnished with single...... materials. Almost quasi-steady, cyclic experiments were used to compare the indoor humidity variation and the numerical results of the integrated simulation tool with the new moisture model. Except for the case with chipboard as furnishing, the predictions of indoor humidity with the detailed model were......

  11. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and to learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in a higher education setting. The study was performed on the basis of published reports from benchmarking projects, scientific literature and the author's experience from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  12. Benchmarking water productivity in agriculture and the scope for improvement - remote sensing modelling from field to global scale

    NARCIS (Netherlands)

    Zwart, S.J.

    2010-01-01

    Agriculture is the largest consumer of water. In the context of an increasing population and less water available for the agricultural sector, water productivity needs to be sustained or increased to ensure food security. This study provides benchmark values for water productivity for the major

  13. Warehouse Simulation Through Model Configuration

    NARCIS (Netherlands)

    Verriet, J.H.; Hamberg, R.; Caarls, J.; Wijngaarden, B. van

    2013-01-01

    The pre-build development of warehouse systems leads from a specific customer request to a specific customer quotation. This involves a process of configuring a warehouse system using a sequence of steps that contain increasingly more details. Simulation is a helpful tool in analyzing warehouse design

  14. Modeling and Simulation of Matrix Converter

    DEFF Research Database (Denmark)

    Liu, Fu-rong; Klumpner, Christian; Blaabjerg, Frede

    2005-01-01

    This paper discusses the modeling and simulation of a matrix converter. Two models of the matrix converter are presented: one is based on indirect space vector modulation and the other is based on the power balance equation. The basis of these two models is given and the modeling process is introduced in...

  15. General introduction to simulation models

    DEFF Research Database (Denmark)

    Hisham Beshara Halasa, Tariq; Boklund, Anette

    2012-01-01

    ) and the North American Animal Disease Spread Model (NAADSM). The models are rather data intensive, though to varying degrees. They generally demand data at the farm level, including farm location, type, number of animals, and movement and contact frequency to other farms. To be able to generate a useful model...... attached to the occurrence of the disease. Model inputs are usually given as distributions to represent biological variability as well as uncertainty. Subsequently, model outputs are usually given as distributions, sometimes with wide ranges. Use of modeling will help us to gain insight into a system as well......

  16. Benchmarking in healthcare using aggregated indicators

    DEFF Research Database (Denmark)

    Traberg, Andreas; Jacobsen, Peter

    2010-01-01

    Benchmarking has become a fundamental part of modern health care systems, but unfortunately, no benchmarking framework is unanimously accepted for assessing both quality and performance. The aim of this paper is to present a benchmarking model that is able to take different stakeholder perspectives...... into account. By presenting performance as a function of a patient perspective, an operations management perspective, and an employee perspective a more holistic approach to benchmarking is proposed. By collecting statistical information from several national and regional agencies and internal databases......, the model is constructed as a comprehensive hierarchy of indicators. By aggregating the outcome of each indicator, the model is able to benchmark healthcare providing units. By assessing performance deeper in the hierarchy, a more detailed view of performance is obtained. The validity test of the model...
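
    A minimal sketch of the kind of hierarchical aggregation this record describes might look as follows. The perspectives, weights and indicator values are hypothetical, not those of the paper's model.

        # Sketch: aggregating a hierarchy of indicators into one benchmark
        # score per healthcare unit, one sub-score per stakeholder
        # perspective (all names, weights and values are invented).
        def aggregate(node, values):
            """Recursively compute a weighted score for a (sub)hierarchy."""
            if isinstance(node, str):          # leaf: a named indicator
                return values[node]
            return sum(w * aggregate(child, values) for w, child in node)

        hierarchy = [
            (0.4, [(0.7, "patient_satisfaction"), (0.3, "readmission_score")]),
            (0.4, [(0.5, "bed_utilisation"), (0.5, "waiting_time_score")]),
            (0.2, [(1.0, "employee_satisfaction")]),
        ]

        unit_a = {"patient_satisfaction": 0.82, "readmission_score": 0.70,
                  "bed_utilisation": 0.65, "waiting_time_score": 0.55,
                  "employee_satisfaction": 0.74}

        print(f"Unit A benchmark score: {aggregate(hierarchy, unit_a):.3f}")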

  17. A qualitative dynamical model for cardiotocography simulation

    OpenAIRE

    Illanes, Alfredo; Haritopoulos, Michel; Robles, Felipe; Guerra, Francisco

    2015-01-01

    The purpose of this work is to present a new mathematical model for fetal monitoring simulation. It involves the simultaneous generation of fetal heart rate and maternal uterine contraction signals through a parametrical model. This model allows the generation of the main fetal monitoring dynamics, including fetal movements, acceleration and deceleration of the heart rate, and the dynamical adjustment of the fetal heart rate following a uterine contraction. Simulated tracings were analyzed by spe...

  18. Modelling Reactive and Proactive Behaviour in Simulation

    OpenAIRE

    Majid, Mazlina Abdul; Siebers, Peer-Olaf; Aickelin, Uwe

    2010-01-01

    This research investigated the behaviour of traditional and combined discrete-event and agent-based simulation models when modelling human reactive and proactive behaviour in human-centric complex systems. A departmental store was chosen as the human-centric complex case study, where the operation system of a fitting room in the WomensWear department was investigated. We have looked at ways to determine the efficiency of new management policies for the fitting room operati...

  19. Comparison of a global-climate model simulation to a cloud-system resolving model simulation for long-term thin stratocumulus clouds

    Directory of Open Access Journals (Sweden)

    S. S. Lee

    2009-05-01

    Full Text Available A case of thin, warm marine-boundary-layer (MBL) clouds is simulated by a cloud-system resolving model (CSRM) and is compared to the same case of clouds simulated by a general circulation model (GCM). In this study, the simulation by the CSRM adopts higher resolutions and more advanced microphysics as compared to those by the GCM, enabling the CSRM-simulation to act as a benchmark to assess the simulation by the GCM. Explicitly simulated interactions among the surface latent heat (LH) fluxes, buoyancy fluxes, and cloud-top entrainment lead to the deepening-warming decoupling and thereby the transition from stratiform clouds to cumulus clouds in the CSRM. However, in the simulation by the GCM, these interactions are not resolved and thus the transition to cumulus clouds is not simulated. This leads to substantial differences in cloud mass and radiation between simulations by the CSRM and the GCM. When stratocumulus clouds are dominant prior to the transition to cumulus clouds, interactions between supersaturation and cloud droplet number concentration (CDNC) (controlling condensation) and those between rain evaporation and cloud-base instability (controlling cloud dynamics and thereby condensation) determine cloud mass and thus the radiation budget in the simulation by the CSRM. These interactions result in smaller condensation and thus smaller cloud mass and reflected solar radiation by clouds in the simulation by the CSRM than in the simulation by the GCM where these interactions are not resolved. The resolved interactions (associated with condensation and the transition to cumulus clouds) lead to better agreement between the CSRM-simulation and observation than that between the GCM-simulation and observation.

  20. Simulation-based Manufacturing System Modeling

    Institute of Scientific and Technical Information of China (English)

    卫东; 金烨; 范秀敏; 严隽琪

    2003-01-01

    In recent years, computer simulation has appeared to be a very advantageous technique for researching resource-constrained manufacturing systems. This paper presents an object-oriented simulation modeling method, which combines the merits of traditional methods such as IDEF0 and Petri Net. In this paper, a four-layer-one-angle hierarchical modeling framework based on OOP is defined, and the modeling description of these layers is expounded, for example hybrid production control modeling and human resource dispatch modeling. To validate the modeling method, a case study of an auto-product line in a motor manufacturing company has been carried out.

  1. Quantum simulation of the t-J model

    Science.gov (United States)

    Yamaguchi, Fumiko; Yamamoto, Yoshihisa

    2002-12-01

    Computer simulation of a many-particle quantum system is bound to reach the inevitable limits of its ability as the system size increases. The primary reason for this is that the memory size used in a classical simulator grows polynomially whereas the Hilbert space of the quantum system does so exponentially. Replacing the classical simulator by a quantum simulator would be an effective method of surmounting this obstacle. The prevailing techniques for simulating quantum systems on a quantum computer have been developed for purposes of computing numerical algorithms designed to obtain approximate physical quantities of interest. The method suggested here requires no numerical algorithms; it is a direct isomorphic translation between a quantum simulator and the quantum system to be simulated. In the quantum simulator, physical parameters of the system, which are the fixed parameters of the simulated quantum system, are under the control of the experimenter. A method of simulating a model for high-temperature superconducting oxides, the t-J model, by optical control, as an example of such a quantum simulation, is presented.

  2. Benchmarking Combined Biological Phosphorus and Nitrogen Removal Wastewater Treatment Processes

    DEFF Research Database (Denmark)

    Gernaey, Krist; Jørgensen, Sten Bay

    2004-01-01

    are to a large extent based on the already existing nitrogen removal simulation benchmark. The paper illustrates and motivates the selection of the treatment plant lay-out, the selection of the biological process model, the development of realistic influent disturbance scenarios for dry, rain and storm weather...... resulting from open loop simulations with a dynamic dry weather influent scenario. The influence of the dissolved oxygen set point selection on the nitrate control loop performance observed in the simulations further illustrates the need for a plant-wide optimization approach to reach optimal plant...

  3. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  4. A Dynamic Model of Portfolio Choice with Benchmark Orientation

    Institute of Scientific and Technical Information of China (English)

    郭文英

    2013-01-01

    Professional fund managers often aim for their portfolios to outperform a given benchmark asset or portfolio with stable performance. This paper therefore presents a dynamic mean-variance portfolio selection model that takes a benchmark asset into account. Assuming that transitions between states follow a Markov process, and given the state transition matrix, an analytic expression for the optimal investment in the risky asset can be obtained. This expression shows that the investment in the risky asset consists of three terms: the first two are the investment in the risky asset when the benchmark asset is not considered, and the last term is related to the benchmark asset. The weight on the benchmark asset is determined by the size of the benchmark asset's return and is independent of the active portfolio manager's degree of risk aversion; as the degree of risk aversion increases, the manager reduces the investment in the risky asset. Numerical analysis shows that the portfolio that takes the benchmark asset into account is an active portfolio.
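
    The paper's closed-form dynamic solution is not reproduced in the abstract, but a simplified one-step sketch under a two-state Markov regime can illustrate the reported structure: a mean-variance term that shrinks with risk aversion plus a benchmark term that does not depend on it. All parameters, including the stand-in benchmark exposure beta_bench, are illustrative assumptions.

        # Sketch: myopic mean-variance allocation with benchmark orientation
        # under a two-state Markov regime (illustrative, not the paper's
        # derivation).
        import numpy as np

        P = np.array([[0.9, 0.1],       # state transition matrix
                      [0.2, 0.8]])
        mu = np.array([0.08, 0.02])     # expected risky return in each state
        sigma = np.array([0.15, 0.25])  # risky-asset volatility in each state
        rf = 0.01                       # risk-free rate
        beta_bench = 0.5                # stand-in benchmark exposure to the risky asset
        gamma = 4.0                     # manager's risk aversion

        mu_next = P @ mu                # one-step-ahead moments given current state
        var_next = P @ sigma**2

        # Speculative mean-variance term (shrinks as gamma grows) plus a
        # benchmark-tracking term that does not depend on gamma.
        w_risky = (mu_next - rf) / (gamma * var_next) + beta_bench
        for state, w in enumerate(w_risky):
            print(f"state {state}: risky-asset weight = {w:.3f}")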

  5. Model fan passage flow simulation

    OpenAIRE

    Myre, David D.

    1992-01-01

    Approved for public release; distribution is unlimited. Two-dimensional experimental and numerical simulations of a transonic fan blade passage were conducted at a Mach number of 1.4 to provide baseline data for the study of the effects of vortex generating devices on the suction surface shock-boundary layer interaction. In the experimental program, a probe and traverse system were designed and constructed. A new data acquisition system was adapted to record data from probe surveys and ...

  6. Multiscale Model Approach for Magnetization Dynamics Simulations

    CERN Document Server

    De Lucia, Andrea; Tretiakov, Oleg A; Kläui, Mathias

    2016-01-01

    Simulations of magnetization dynamics in a multiscale environment enable rapid evaluation of the Landau-Lifshitz-Gilbert equation in a mesoscopic sample with nanoscopic accuracy in areas where such accuracy is required. We have developed a multiscale magnetization dynamics simulation approach that can be applied to large systems with spin structures that vary locally on small length scales. To implement this, the conventional micromagnetic simulation framework has been expanded to include a multiscale solving routine. The software selectively simulates different regions of a ferromagnetic sample according to the spin structures located within, in order to employ a suitable discretization and use either a micromagnetic or an atomistic model. To demonstrate the validity of the multiscale approach, we simulate the spin wave transmission across the regions simulated with the two different models and different discretizations. We find that the interface between the regions is fully transparent for spin waves with f...

  7. Systematic modelling and simulation of refrigeration systems

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1998-01-01

    The task of developing a simulation model of a refrigeration system can be very difficult and time consuming. In order for this process to be effective, a systematic method for developing the system model is required. This method should aim at guiding the developer to clarify the purpose of the simulation, to select appropriate component models and to set up the equations in a well-arranged way. In this paper the outline of such a method is proposed and examples showing the use of this method for simulation of refrigeration systems are given.

  8. Software-Engineering Process Simulation (SEPS) model

    Science.gov (United States)

    Lin, C. Y.; Abdel-Hamid, T.; Sherif, J. S.

    1992-01-01

    The Software Engineering Process Simulation (SEPS) model, developed at JPL, is described. SEPS is a dynamic simulation model of the software project development process. It uses the feedback principles of system dynamics to simulate the dynamic interactions among various software life cycle development activities and management decision making processes. The model is designed to be a planning tool to examine tradeoffs of cost, schedule, and functionality, and to test the implications of different managerial policies on a project's outcome. Furthermore, SEPS will enable software managers to gain a better understanding of the dynamics of software project development and perform postmortem assessments.

  9. Simulation of subgrid orographic precipitation with an embedded 2-D cloud-resolving model

    Science.gov (United States)

    Jung, Joon-Hee; Arakawa, Akio

    2016-03-01

    By explicitly resolving cloud-scale processes with embedded two-dimensional (2-D) cloud-resolving models (CRMs), superparameterized global atmospheric models have successfully simulated various atmospheric events over a wide range of time scales. Up to now, however, such models have not included the effects of topography on the CRM grid scale. We have used both 3-D and 2-D CRMs to simulate the effects of topography with prescribed "large-scale" winds. The 3-D CRM is used as a benchmark. The results show that the mean precipitation can be simulated reasonably well by using a 2-D representation of topography as long as the statistics of the topography such as the mean and standard deviation are closely represented. It is also shown that the use of a set of two perpendicular 2-D grids can significantly reduce the error due to a 2-D representation of topography.
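
    A minimal sketch of the idea of preserving topography statistics when collapsing to a 2-D representation might look like this, using a hypothetical DEM array; the paper's actual CRM treatment is more involved.

        # Sketch: reduce a 2-D (x, y) DEM patch to a 1-D terrain profile
        # while preserving the mean and standard deviation of the subgrid
        # topography (invented elevations).
        import numpy as np

        rng = np.random.default_rng(2)
        dem = 1500 + 300 * rng.standard_normal((64, 64))   # elevations [m]

        profile = dem.mean(axis=0)                         # average out y -> 1-D terrain
        # Rescale so the 1-D profile keeps the full 2-D standard deviation
        profile = dem.mean() + (profile - profile.mean()) * dem.std() / profile.std()

        print(f"2-D mean/std: {dem.mean():.1f}/{dem.std():.1f} m; "
              f"1-D mean/std: {profile.mean():.1f}/{profile.std():.1f} m")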

  10. NATO Modelling and Simulation Standards Profile

    OpenAIRE

    Huiskamp, W.; Igarza, J.L.; Voiculet, A.

    2012-01-01

    Open and common standards are essential enablers for simulation interoperability and re-use. This includes: technical architecture standards, e.g. HLA, the High Level Architecture; data interchange standards, e.g. SEDRIS, the Synthetic Environment Data Representation and Interchange Specification; and best practices, e.g. DSEEP, the Distributed Simulation Engineering and Execution Process. The NATO Modelling and Simulation Group (NMSG), the NATO Delegated Tasking Authority for standardisa...

  11. A Consumer's Guide to Benchmark Dose Models: Results of U.S. EPA Testing of 14 Dichotomous, 8 Continuous, and 6 Developmental Models (Presentation)

    Science.gov (United States)

    Benchmark dose risk assessment software (BMDS) was designed by EPA to generate dose-response curves and facilitate the analysis, interpretation and synthesis of toxicological data. Partial results of QA/QC testing of the EPA benchmark dose software (BMDS) are presented. BMDS pr...

  12. Advanced fluid modelling and PIC/MCC simulations of low-pressure ccrf discharges

    CERN Document Server

    Becker, Markus M; Sun, Anbang; Bonitz, Michael; Loffhagen, Detlef

    2016-01-01

    Comparative studies of capacitively coupled radio-frequency discharges in helium and argon at pressures between 10 and 80 Pa are presented applying two different fluid modelling approaches as well as two independently developed particle-in-cell/Monte Carlo collision (PIC/MCC) codes. The focus is on the analysis of the range of applicability of a recently proposed fluid model including an improved drift-diffusion approximation for the electron component as well as its comparison with fluid modelling results using the classical drift-diffusion approximation and benchmark results obtained by PIC/MCC simulations. Main features of this time- and space-dependent fluid model are given. It is found that the novel approach shows generally quite good agreement with the macroscopic properties derived by the kinetic simulations and is largely able to characterize qualitatively and quantitatively the discharge behaviour even at conditions when the classical fluid modelling approach fails. Furthermore, the excellent agreem...

  13. Modeling and simulation of complex systems a framework for efficient agent-based modeling and simulation

    CERN Document Server

    Siegfried, Robert

    2014-01-01

    Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing the structure and dynamics of agent-based models in detail. Based on this evaluation the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore he presents parallel and distributed simulation approaches for the execution of agent-based models, from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize underlying hardware

  14. THE APPLICATION OF DATA ENVELOPMENT ANALYSIS METHODOLOGY TO IMPROVE THE BENCHMARKING PROCESS IN THE EFQM BUSINESS MODEL (CASE STUDY: AUTOMOTIVE INDUSTRY OF IRAN)

    Directory of Open Access Journals (Sweden)

    K. Shahroudi

    2009-10-01

    Full Text Available This paper reports survey and case study research outcomes on the application of Data Envelopment Analysis (DEA) to the ranking method of the European Foundation for Quality Management (EFQM) Business Excellence Model in Iran's automotive industry, and on improving the benchmarking process after assessment. Following the global trend, the Iranian industry leaders have introduced the EFQM practice to their supply chain in order to improve the supply base competitiveness during the last four years. A question which arises is whether the EFQM model can be combined with a mathematical model such as DEA in order to generate a new ranking method and develop or facilitate the benchmarking process. The model developed in this paper is simple; however, it provides some new and interesting insights. The paper assesses the usefulness and capability of the DEA technique to derive a new scoring system and compares it with the classical ranking method of the EFQM business model. We used this method to identify meaningful exemplar companies for each criterion of the EFQM model; we then designed a road map based on realistic targets for each criterion which have already been achieved by exemplar companies. The research indicates that the DEA approach is a reliable tool to analyze the latent knowledge of scores generated by conducting self-assessments. The Wilcoxon rank-sum test is used to compare the two sets of scores, and the hypothesis test reveals a meaningful relation between the EFQM and DEA ranking methods. Finally, we drew a road map based on the benchmarking concept using the research results.
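
    As an illustration of the statistical comparison mentioned above, a Wilcoxon rank-sum test between two sets of scores can be run as follows; the scores are invented for the example and are not the paper's data.

        # Sketch: comparing EFQM assessor scores with DEA-based scores for
        # the same companies using the Wilcoxon rank-sum test.
        from scipy.stats import ranksums

        efqm_scores = [420, 455, 510, 390, 475, 505, 440]
        dea_scores = [430, 450, 520, 400, 470, 515, 445]

        stat, p_value = ranksums(efqm_scores, dea_scores)
        print(f"rank-sum statistic = {stat:.3f}, p-value = {p_value:.3f}")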

  15. Computer simulations of the random barrier model

    DEFF Research Database (Denmark)

    Schrøder, Thomas; Dyre, Jeppe

    2002-01-01

    A brief review of experimental facts regarding ac electronic and ionic conduction in disordered solids is given followed by a discussion of what is perhaps the simplest realistic model, the random barrier model (symmetric hopping model). Results from large scale computer simulations are presented...

  16. Maintenance Personnel Performance Simulation (MAPPS) model

    International Nuclear Information System (INIS)

    A stochastic computer model for simulating the actions and behavior of nuclear power plant maintenance personnel is described. The model considers personnel, environmental, and motivational variables to yield predictions of maintenance performance quality and time to perform. The model has been fully developed and sensitivity tested. Additional evaluation of the model is now taking place

  17. Challenges in SysML Model Simulation

    Directory of Open Access Journals (Sweden)

    Mara Nikolaidou

    2016-07-01

    Full Text Available Systems Modeling Language (SysML) is a standard proposed by the OMG for systems-of-systems (SoS) modeling and engineering. To this end, it provides the means to depict SoS components and their behavior in a hierarchical, multi-layer fashion, facilitating alternative engineering activities, such as system design. To explore the performance of SysML models, simulation is one of the preferred methods. There are many efforts targeting simulation code generation from SysML models. Numerous simulation methodologies and tools are employed, while different SysML diagrams are utilized. Nevertheless, this process is not standardized, although most current approaches tend to follow the same steps, even if they employ different tools. The scope of this paper is to provide a comprehensive understanding of the similarities and differences of existing approaches and to identify current challenges in fully automating the SysML model simulation process.

  18. Modelling Reactive and Proactive Behaviour in Simulation

    CERN Document Server

    Majid, Mazlina Abdul; Aickelin, Uwe

    2010-01-01

    This research investigated the behaviour of traditional and combined discrete-event and agent-based simulation models when modelling human reactive and proactive behaviour in human-centric complex systems. A departmental store was chosen as the human-centric complex case study, where the operation system of a fitting room in the WomensWear department was investigated. We have looked at ways to determine the efficiency of new management policies for the fitting room operation through simulating the reactive and proactive behaviour of staff towards customers. Once development of the simulation models and their verification had been done, we carried out a validation experiment in the form of a sensitivity analysis. Subsequently, we executed a statistical analysis where the mixed reactive and proactive behaviour experimental results were compared with some reactive experimental results from previously published works. Generally, this case study discovered that simple proactive individual behaviou...

  19. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  20. Application of Chebyshev Polynomial to simulated modeling

    Institute of Scientific and Technical Information of China (English)

    CHI Hai-hong; LI Dian-pu

    2006-01-01

    The Chebyshev polynomial is widely used in many fields, usually for function approximation in numerical calculation. In this paper, a Chebyshev polynomial expression of the propeller properties across four quadrants is given first; then the Chebyshev polynomial expression is transformed to an ordinary polynomial, as needed for the simulation of propeller dynamics. On this basis, the dynamical models of the propeller across four quadrants are given. The simulation results show the efficiency of the mathematical model.
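
    The Chebyshev-to-ordinary-polynomial conversion the record relies on can be illustrated with NumPy's polynomial utilities; the coefficients below are placeholders, not the paper's propeller data.

        # Sketch: convert a Chebyshev expansion into an ordinary power
        # series and check that both forms evaluate identically.
        import numpy as np
        from numpy.polynomial import chebyshev as C

        cheb_coeffs = [0.10, 0.45, -0.08, 0.02]   # c0*T0 + c1*T1 + c2*T2 + c3*T3
        poly_coeffs = C.cheb2poly(cheb_coeffs)     # a0 + a1*x + a2*x^2 + a3*x^3

        x = 0.3  # e.g. a normalised advance angle within one quadrant
        assert np.isclose(C.chebval(x, cheb_coeffs),
                          np.polynomial.polynomial.polyval(x, poly_coeffs))
        print("ordinary polynomial coefficients:", poly_coeffs)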

  1. Comparison of the containment codes used in the benchmark exercise from the modelling and numerical treatment point of view

    International Nuclear Information System (INIS)

    This report is the subject of a study contract sponsored by the containment loading and response group (CONT), a sub-group of the safety working group of the fast reactor co-ordinating committee - CEC. The analyses provided here will form part of a final report on containment codes, sensitivity analysis, and benchmark comparison, performed by the group in recent years. The contribution of this study contract is to assess the six different containment codes, used in the benchmark comparison, with regard to their procedures and methods, and also to provide an assessment of their benchmark calculation results, so that an overall assessment of their effectiveness for use in containment problems can be made. Each code description, which has been provided by the relevant user, contains a large amount of detailed information and a large number of equations, which would be unwieldy to reproduce and probably unnecessary. For this reason the report has concentrated on a fuller description of the SEURBNUK code, this being the code most familiar to the author, and other code descriptions have concentrated on noting variations and differences. Also, the code SEURBNUK/EURDYN has been used for the sensitivity analysis, this code being an extension of the original code SEURBNUK with the addition of axi-symmetric finite element capabilities. The six containment codes described and assessed in this report are those which were being actively used within the European community at the time

  2. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension: .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hotstart capability through sequences of changes.

  3. Modeling of magnetic particle suspensions for simulations

    CERN Document Server

    Satoh, Akira

    2016-01-01

    The main objective of the book is to highlight the modeling of magnetic particles with different shapes and magnetic properties, and to provide graduate students and young researchers with information on the theoretical aspects and actual techniques for the treatment of magnetic particles in particle-based simulations. In simulation, we focus on the Monte Carlo, molecular dynamics, Brownian dynamics, lattice Boltzmann and stochastic rotation dynamics (multi-particle collision dynamics) methods. The latter two simulation methods can simulate both the particle motion and the ambient flow field simultaneously. In general, specialized knowledge can only be obtained in an effective manner under the supervision of an expert. The present book is written to play such a role for readers who wish to develop the skill of modeling magnetic particles and develop a computer simulation program using their own ability. This book is therefore a self-learning book for graduate students and young researchers. Armed with this knowledge,...

  4. Modeling and simulation of multiport RF switch

    Energy Technology Data Exchange (ETDEWEB)

    Vijay, J [Student, Department of Instrumentation and Control Engineering, National Institute of Technology, Tiruchirappalli-620015 (India); Saha, Ivan [Scientist, Indian Space Research Organisation (ISRO) (India); Uma, G [Lecturer, Department of Instrumentation and Control Engineering, National Institute of Technology, Tiruchirappalli-620015 (India); Umapathy, M [Assistant Professor, Department of Instrumentation and Control Engineering, National Institute of Technology, Tiruchirappalli-620015 (India)

    2006-04-01

    This paper describes the modeling and simulation of a 'Multi Port RF Switch', where the latching mechanism is realized with two hot-arm electrothermal actuators and the switching action is realized with electrostatic actuators. It can act as a single-pole single-throw as well as a single-pole multi-throw switch. The proposed structure is modeled analytically and the required parameters are simulated using MATLAB. The analytical simulation results are validated using finite element analysis of the same structure in the COVENTORWARE software.

  5. Modeling and simulation of discrete event systems

    CERN Document Server

    Choi, Byoung Kyu

    2013-01-01

    Computer modeling and simulation (M&S) allows engineers to study and analyze complex systems. Discrete-event system (DES)-M&S is used in modern management, industrial engineering, computer science, and the military. As computer speeds and memory capacity increase, so DES-M&S tools become more powerful and more widely used in solving real-life problems. Based on over 20 years of evolution within a classroom environment, as well as on decades-long experience in developing simulation-based solutions for high-tech industries, Modeling and Simulation of Discrete-Event Systems is the only book on

  6. Modelling and Simulation of Wave Loads

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    A simple model of the wave load on slender members of offshore structures is described. The wave elevation of the sea state is modelled by a stationary Gaussian process. A new procedure to simulate realizations of the wave loads is developed. The simulation method assumes that the wave particle velocity can be approximated by a Gaussian Markov process. Known approximate results for the first-passage density or, equivalently, the distribution of the extremes of wave loads are presented and compared with rather precise simulation results. It is demonstrated that the approximate results are unconservative, at least for the spectra used in this investigation.
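
    A standard spectral-superposition sketch for simulating realizations of a stationary Gaussian wave elevation is shown below. The Pierson-Moskowitz-type spectrum and all parameters are illustrative; the paper's specific procedure for the load process itself is not reproduced.

        # Sketch: realizations of a stationary Gaussian sea-surface
        # elevation by superposing harmonics with random phases.
        import numpy as np

        rng = np.random.default_rng(0)
        omega = np.linspace(0.2, 2.0, 200)       # angular frequencies [rad/s]
        d_omega = omega[1] - omega[0]

        # Illustrative Pierson-Moskowitz-type spectrum S(omega)
        Hs, Tp = 3.0, 9.0                        # significant height [m], peak period [s]
        wp = 2.0 * np.pi / Tp
        S = 5.0 / 16.0 * Hs**2 * wp**4 / omega**5 * np.exp(-1.25 * (wp / omega)**4)

        amp = np.sqrt(2.0 * S * d_omega)         # harmonic amplitudes
        phase = rng.uniform(0.0, 2.0 * np.pi, omega.size)

        t = np.linspace(0.0, 600.0, 6000)        # 10 minutes of elevation
        eta = (amp[:, None] * np.cos(omega[:, None] * t + phase[:, None])).sum(axis=0)
        print(f"simulated std: {eta.std():.2f} m (target Hs/4 = {Hs / 4:.2f} m)")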

  7. Modelling and Simulation of Wave Loads

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    1985-01-01

    A simple model of the wave load on slender members of offshore structures is described. The wave elevation of the sea state is modelled by a stationary Gaussian process. A new procedure to simulate realizations of the wave loads is developed. The simulation method assumes that the wave particle velocity can be approximated by a Gaussian Markov process. Known approximate results for the first-passage density or, equivalently, the distribution of the extremes of wave loads are presented and compared with rather precise simulation results. It is demonstrated that the approximate results are unconservative, at least for the spectra used in this investigation.

  8. Aeroelastic Benchmark Experiments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to conduct canonical aeroelastic benchmark experiments. These experiments will augment existing sources for aeroelastic data in the...

  9. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...
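
    For concreteness, an input-oriented CCR DEA efficiency score of the kind used in such regulation can be computed with a small linear program. The dataset below is a tiny invented example; real regulatory models add the data validation and outlier-detection steps discussed above.

        # Sketch: input-oriented CCR DEA efficiency via linear programming
        # (4 units, 2 inputs, 1 output; all numbers invented).
        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[20.0, 30.0], [40.0, 15.0], [40.0, 55.0], [60.0, 20.0]])  # inputs
        Y = np.array([[100.0], [100.0], [150.0], [120.0]])                      # outputs

        def ccr_efficiency(k):
            """Input-oriented CCR efficiency of unit k (theta in (0, 1])."""
            n, m = X.shape          # units, inputs
            s = Y.shape[1]          # outputs
            c = np.r_[1.0, np.zeros(n)]                    # minimise theta
            A_in = np.c_[-X[k].reshape(m, 1), X.T]         # sum_j l_j x_ij <= theta x_ik
            A_out = np.c_[np.zeros((s, 1)), -Y.T]          # sum_j l_j y_rj >= y_rk
            res = linprog(c, A_ub=np.r_[A_in, A_out],
                          b_ub=np.r_[np.zeros(m), -Y[k]],
                          bounds=[(0, None)] * (n + 1))
            return res.fun

        for k in range(len(X)):
            print(f"unit {k}: efficiency = {ccr_efficiency(k):.3f}")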

  10. Modelling and simulating fire tube boiler performance

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels;

    2003-01-01

    A model for a flue gas boiler covering the flue gas and the water/steam side has been formulated. The model has been formulated as a number of sub-models that are merged into an overall model for the complete boiler. Sub-models have been defined for the furnace, the convection zone (split in two: a zone submerged in water and a zone covered by steam), the material in the boiler (the steel), and two models for, respectively, the water/steam zone (the boiling) and the steam. The dynamic model has been developed as a number of Differential-Algebraic-Equation systems (DAE). Subsequently MatLab/Simulink has been applied for carrying out the simulations. To be able to verify the simulated results, experiments have been carried out on a full scale boiler plant.

  11. Modeling and Simulation of Hydraulic Engine Mounts

    Institute of Scientific and Technical Information of China (English)

    DUAN Shanzhong; Marshall McNea

    2012-01-01

    Hydraulic engine mounts are widely used in automotive powertrains for vibration isolation. A lumped mechanical parameter model is a traditional approach to model and simulate such mounts. This paper presents a dynamical model of a passive hydraulic engine mount with a double chamber, an inertia track, a decoupler, and a plunger. The model is developed based on the analogy between electrical systems and mechanical-hydraulic systems. The model is established to capture both low- and high-frequency dynamic behaviors of the hydraulic mount. The model will be further used to find the approximate pulse responses of the mounts in terms of the force transmission and top chamber pressure. The closed-form solution from the simplified linear model may provide some insight into the highly nonlinear behavior of the mounts. Based on the model, computer simulation has been carried out to study the dynamic performance of the hydraulic mount.
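
    A heavily simplified lumped-parameter sketch in the spirit of this record is given below. It couples an engine mass on rubber stiffness to a single fluid column and omits the decoupler and plunger; all parameter values are invented.

        # Sketch: a minimal two-degree-of-freedom lumped-parameter mount:
        # engine mass on rubber stiffness/damping, coupled through a
        # volumetric stiffness to a fluid column in the inertia track.
        import numpy as np
        from scipy.integrate import solve_ivp

        m, k_r, c_r = 100.0, 2.0e5, 300.0   # engine mass [kg], rubber stiffness [N/m], damping [Ns/m]
        m_f, c_f = 0.5, 150.0               # fluid column inertia [kg] and damping [Ns/m]
        k_v = 1.0e5                         # volumetric (bulge) stiffness coupling [N/m]

        def rhs(t, z):
            x, xdot, q, qdot = z            # engine displacement, fluid column coordinate
            f_ext = 500.0 * np.sin(2 * np.pi * 10 * t)   # 10 Hz engine excitation [N]
            xddot = (f_ext - c_r * xdot - k_r * x - k_v * (x - q)) / m
            qddot = (k_v * (x - q) - c_f * qdot) / m_f
            return [xdot, xddot, qdot, qddot]

        sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-3)
        print(f"peak engine displacement: {np.abs(sol.y[0]).max() * 1000:.2f} mm")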

  12. SOFT MODELLING AND SIMULATION IN STRATEGY

    Directory of Open Access Journals (Sweden)

    Luciano Rossoni

    2006-06-01

    Full Text Available A certain resistance exists on the part of those responsible for strategy to using modeling and simulation techniques and tools. Many find them excessively complicated, while others see them as too rigid and mathematical for use in strategy under uncertain and turbulent environments. However, some interpretative approaches exist that meet, in part, the needs of these decision makers. The objective of this work is to demonstrate, in a clear and simple form, some of the most powerful interpretative (soft) approaches, methodologies and tools for modeling and simulation in the area of business strategy. We first define what models and simulation are, and discuss some aspects of modeling and simulation in the strategy area. We then review some soft modeling approaches, which see the modeling process as much more than a simply mechanical process since, as observed by Simon, human beings are rationally bounded and their decisions are influenced by a series of subjective issues related to the environment in which they are inserted. Keywords: strategy, modeling and simulation, soft systems methodology, cognitive map, systems dynamics.

  13. Benchmark Energetic Data in a Model System for Grubbs II Metathesis Catalysis and Their Use for the Development, Assessment, and Validation of Electronic Structure Methods

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Yan; Truhlar, Donald G.

    2009-01-31

    We present benchmark relative energetics in the catalytic cycle of a model system for Grubbs second-generation olefin metathesis catalysts. The benchmark data were determined by a composite approach based on CCSD(T) calculations, and they were used as a training set to develop a new spin-component-scaled MP2 method optimized for catalysis, which is called SCSC-MP2. The SCSC-MP2 method has improved performance for modeling Grubbs II olefin metathesis catalysts as compared to canonical MP2 or SCS-MP2. We also employed the benchmark data to test 17 WFT methods and 39 density functionals. Among the tested density functionals, M06 is the best performing functional. M06/TZQS gives an MUE of only 1.06 kcal/mol, and it is a much more affordable method than the SCSC-MP2 method or any other correlated WFT methods. The best performing meta-GGA is M06-L, and M06-L/DZQ gives an MUE of 1.77 kcal/mol. PBEh is the best performing hybrid GGA, with an MUE of 3.01 kcal/mol; however, it does not perform well for the larger, real Grubbs II catalyst. B3LYP and many other functionals containing the LYP correlation functional perform poorly, and B3LYP underestimates the stability of stationary points for the cis-pathway of the model system by a large margin. From the assessments, we recommend the M06, M06-L, and MPW1B95 functionals for modeling Grubbs II olefin metathesis catalysts. The local M06-L method is especially efficient for calculations on large systems.

  14. Model Driven Development of Simulation Models: Defining and Transforming Conceptual Models into Simulation Models by Using Metamodels and Model Transformations

    NARCIS (Netherlands)

    Küçükkeçeci Çetinkaya, D.

    2013-01-01

    Modeling and simulation (M&S) is an effective method for analyzing and designing systems and it is of interest to scientists and engineers from all disciplines. This thesis proposes the application of a model driven software development approach throughout the whole set of M&S activities and it prop

  15. Modeling and simulating of unloading welding transformer

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The simulation model of an unloading welding transformer was established on the basis of MATLAB software, and the modeling principle is described in detail in the paper. The model is made up of three sub-models, i.e. the linear inductor sub-model, the non-linear inductor sub-model and the current-controlled series connection sub-model, and these sub-models are joined together by means of segmented linearization. The simulation results showed that, under the conditions of a high converter frequency and a large cross-section of the magnet core of a welding transformer, the non-linear inductor sub-model can be substituted by a linear inductor sub-model in the model, and that the leakage reactance in the welding transformer is one of the main causes of over-current and over-voltage in the inverter. The simulation results demonstrate that the over-voltage produced by the leakage reactance is nearly twice the input voltage supplied to the transformer, and that the duration of the over-voltage depends on the time constant τ1. As τ1 decreases, the amplitude of the over-current increases and its duration becomes shorter; conversely, as τ1 increases, the amplitude of the over-current decreases and its duration becomes longer. The model has played an important role in the development of the inverter resistance welding machine.
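
    The reported trade-off, higher over-current peaks with faster decay as τ1 is reduced, can be illustrated with a first-order decay sketch. The 1/τ1 peak scaling and all values are assumptions chosen to mimic the stated trend, not results derived from the paper's circuit.

        # Sketch: effect of the time constant tau1 on over-current peak and
        # decay duration (illustrative first-order approximation).
        import numpy as np

        t = np.linspace(0.0, 10e-3, 1000)        # 10 ms window [s]
        q0 = 0.02                                # illustrative scale constant [A*s]
        for tau1 in (0.5e-3, 1.0e-3, 2.0e-3):    # [s]
            i = (q0 / tau1) * np.exp(-t / tau1)  # smaller tau1: higher peak, faster decay
            t_below = t[np.argmax(i < 1.0)]
            print(f"tau1 = {tau1 * 1e3:.1f} ms -> peak {q0 / tau1:.0f} A, "
                  f"below 1 A after {t_below * 1e3:.2f} ms")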

  16. Modeling and Simulation of Fish-Like Swimming in a Straight-Line Swimming State Using Immersed Boundary Method

    OpenAIRE

    Wenquan Wang; Rui Yin; Dongwei Hao; Yan Yan

    2014-01-01

    A self-propelled swimming fish model is established, which can reflect the interaction between fish movement, the internal force generated by muscle contraction, and the external force provided by the fluid. Using the finite element immersed boundary method combined with the traditional feedback force method, the self-propelled swimming fish is numerically simulated. First, the self-induced vibration of a cantilever beam immersed in a fluid, one of the benchmarks of fluid-structure interaction, is us...

  17. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, Kim; Condra, Thomas Joseph; Houbak, Niels

    2004-01-01

    of the boiler is (with an acceptable accuracy) proportional to the volume of the boiler. For the dynamic operation capability a cost function penalizing limited dynamic operation capability, and vice versa, has been defined. The main idea is that, by means of the parameters in this function, it is possible to fit its...... shape to the actual application. In the paper an optimization example is shown and the results discussed. By means of the developed model it is shown how the optimum changes from a boiler favoring a good dynamic capability (i.e. a boiler with a relatively large volume) to a boiler not penalizing...

  18. Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation

    International Nuclear Information System (INIS)

    Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore core models of Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of the obliquely inserted control rod on the neutron flux in order to validate the RELAP5-3D©/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 FSAR of Atucha-2. (authors)

  19. Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation

    Energy Technology Data Exchange (ETDEWEB)

    Pecchia, M.; D'Auria, F. [San Piero A Grado Nuclear Research Group GRNSPG, Univ. of Pisa, via Diotisalvi, 2, 56122 - Pisa (Italy); Mazzantini, O. [Nucleoelectrica Argentina Sociedad Anonima NA-SA, Buenos Aires (Argentina)

    2012-07-01

    Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore core models of Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of the obliquely inserted control rod on the neutron flux in order to validate the RELAP5-3D©/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 FSAR of Atucha-2. (authors)

  20. Application of experimental design techniques to structural simulation meta-model building using neural network

    Institute of Scientific and Technical Information of China (English)

    费庆国; 张令弥

    2004-01-01

    Neural networks are being used to construct meta-models in the numerical simulation of structures. In addition to network structures and training algorithms, training samples also greatly affect the accuracy of neural network models. In this paper, some existing main sampling techniques are evaluated, including techniques based on experimental design theory, random selection, and rotating sampling. First, the advantages and disadvantages of each technique are reviewed. Then, seven techniques are used to generate samples for training radial basis neural network models for two benchmarks: an antenna model and an aircraft model. Results show that uniform design, in which both the number of samples and the mean square error of the network models are considered, is the best sampling technique for neural-network-based meta-model building.
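
    A toy comparison in the spirit of this record, contrasting random sampling with a uniform grid design for a radial-basis-function meta-model, might look like this; the test function and sample sizes are invented, not the paper's antenna or aircraft benchmarks.

        # Sketch: how the sampling design affects the accuracy of an RBF
        # meta-model of a cheap stand-in "simulation" response.
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        def simulation(x):                  # stand-in for an expensive analysis run
            return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

        rng = np.random.default_rng(1)

        def grid_design(n_per_axis):        # a simple uniform (grid) design
            g = np.linspace(0.0, 1.0, n_per_axis)
            return np.array(np.meshgrid(g, g)).reshape(2, -1).T

        designs = {"random": rng.random((49, 2)), "uniform grid": grid_design(7)}
        x_test = rng.random((500, 2))

        for name, x_train in designs.items():
            meta_model = RBFInterpolator(x_train, simulation(x_train))
            mse = np.mean((meta_model(x_test) - simulation(x_test)) ** 2)
            print(f"{name:12s}: test MSE = {mse:.2e}")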

  1. Microfluidic very large scale integration (VLSI) modeling, simulation, testing, compilation and physical synthesis

    CERN Document Server

    Pop, Paul; Madsen, Jan

    2016-01-01

    This book presents the state-of-the-art techniques for the modeling, simulation, testing, compilation and physical synthesis of mVLSI biochips. The authors describe a top-down modeling and synthesis methodology for the mVLSI biochips, inspired by microelectronics VLSI methodologies. They introduce a modeling framework for the components and the biochip architecture, and a high-level microfluidic protocol language. Coverage includes a topology graph-based model for the biochip architecture, and a sequencing graph model for the biochemical application, showing how the application model can be obtained from the protocol language. The techniques described facilitate programmability and automation, enabling developers in the emerging, large biochip market. · Presents the current models used for the research on compilation and synthesis techniques of mVLSI biochips in a tutorial fashion; · Includes a set of "benchmarks" that are presented in great detail and include the source code of several of the techniques p...

  2. Simulering af dagslys i digitale modeller

    DEFF Research Database (Denmark)

    Villaume, René Domine; Ørstrup, Finn Rude

    2004-01-01

    The project investigates, through various daylight simulations, the quality of visualizations of complex lighting conditions in digital models used for communicating architecture via the web. In a digital 3D model of Utzon Associates' Paustian House, natural daylight is simulated with different... rendering methods such as "shaded render" / "ray tracing" / "Final Gather" / "Global Illumination"...

  3. Simulation and modeling of turbulent flows

    CERN Document Server

    Gatski, Thomas B; Lumley, John L

    1996-01-01

    This book provides students and researchers in fluid engineering with an up-to-date overview of turbulent flow research in the areas of simulation and modeling. A key element of the book is the systematic, rational development of turbulence closure models and related aspects of modern turbulent flow theory and prediction. Starting with a review of the spectral dynamics of homogeneous and inhomogeneous turbulent flows, succeeding chapters deal with numerical simulation techniques, renormalization group methods and turbulent closure modeling. Each chapter is authored by recognized leaders in their respective fields, and each provides a thorough and cohesive treatment of the subject.

  4. Hybrid simulation models of production networks

    CERN Document Server

    Kouikoglou, Vassilis S

    2001-01-01

    This book is concerned with a most important area of industrial production, that of analysis and optimization of production lines and networks using discrete-event models and simulation. The book introduces a novel approach that combines analytic models and discrete-event simulation. Unlike conventional piece-by-piece simulation, this method observes a reduced number of events between which the evolution of the system is tracked analytically. Using this hybrid approach, several models are developed for the analysis of production lines and networks. The hybrid approach combines speed and accuracy for exceptional analysis of most practical situations. A number of optimization problems, involving buffer design, workforce planning, and production control, are solved through the use of hybrid models.

  5. Investigating the performance and energy saving potential of Chinese commercial building benchmark models for the hot humid and severe cold climate regions

    Science.gov (United States)

    Herrmann, Lesley Anne

    2011-12-01

    The demand for energy in China is growing at an alarming rate. Buildings have become a significant component of the energy-demand mix, accounting for nearly one-quarter of the country's total primary energy consumption. This study compares the building code standards for office and hotel buildings in the hot humid and severe cold climate regions of China and the United States. Benchmark office and hotel building models have been developed for Guangzhou and Harbin, China, that meet China's minimum national and regional building energy codes and integrate common design and construction practices for each region. These models are compared to the ASHRAE-standard-based US reference building models for Houston, Texas, and Duluth, Minnesota, which have similar climate conditions. The research further uses a building energy optimization tool to optimize the Chinese benchmarks using existing US products to identify the primary areas for potential energy savings. In the case of the Harbin models, an economic analysis has also been performed to determine the economic feasibility of alternative building designs. The most significant energy-saving options are then presented as recommendations for potential improvements to current China building energy codes.

  6. Comparative evaluation of 1D and quasi-2D hydraulic models based on benchmark and real-world applications for uncertainty assessment in flood mapping

    Science.gov (United States)

    Dimitriadis, Panayiotis; Tegos, Aristoteles; Oikonomou, Athanasios; Pagana, Vassiliki; Koukouvinos, Antonios; Mamassis, Nikos; Koutsoyiannis, Demetris; Efstratiadis, Andreas

    2016-03-01

    One-dimensional and quasi-two-dimensional hydraulic freeware models (HEC-RAS, LISFLOOD-FP and FLO-2d) are widely used for flood inundation mapping. These models are tested on a benchmark test with a mixed rectangular-triangular channel cross section. Using a Monte-Carlo approach, we employ extended sensitivity analysis by simultaneously varying the input discharge, longitudinal and lateral gradients and roughness coefficients, as well as the grid cell size. Based on statistical analysis of three output variables of interest, i.e. water depths at the inflow and outflow locations and total flood volume, we investigate the uncertainty inherent in different model configurations and flow conditions, without the influence of errors and other assumptions on topography, channel geometry and boundary conditions. Moreover, we estimate the uncertainty associated with each input variable and compare it to the overall one. The outcomes of the benchmark analysis are further highlighted by applying the three models to real-world flood propagation problems, in the context of two challenging case studies in Greece.

  7. Investigating Output Accuracy for a Discrete Event Simulation Model and an Agent Based Simulation Model

    CERN Document Server

    Majid, Mazlina Abdul; Siebers, Peer-Olaf

    2010-01-01

    In this paper, we investigate output accuracy for a Discrete Event Simulation (DES) model and an Agent Based Simulation (ABS) model. The purpose of this investigation is to find out which of these simulation techniques is the best one for modelling human reactive behaviour in the retail sector. In order to study the output accuracy of both models, we have carried out a validation experiment in which we compared the results from our simulation models to the performance of a real system. Our experiment was carried out using a large UK department store as a case study. The task was to determine an efficient implementation of a management policy for the store's fitting room using DES and ABS. Overall, we have found that both simulation models were a good representation of the real system when modelling human reactive behaviour.

  8. Modeling, simulation and emergency response

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, T.J.

    1985-01-01

    The Department of Energy's Atmospheric Release Advisory Capability (ARAC) has been developed at the Lawrence Livermore National Laboratory to provide a national capability in emergency response to radiological accidents. For the past two years the system has been undergoing a complete redesign and upgrade in software and hardware. Communications, geophysical databases, atmospheric transport and diffusion models and experienced staff form the core of this rapid response capability. The ARAC system has been used to support DOE commitments to radiological accidents including the Three Mile Island accident, the COSMOS satellite reentries, the TITAN II missile accident and several others. This paper describes the major components of the ARAC system, presents example calculations and discusses the interactive process of the man-machine environment in an emergency response system.

  9. Handleiding benchmark VO

    NARCIS (Netherlands)

    Blank, j.l.t.

    2008-01-01

    Handleiding benchmark VO (manual for the secondary-education benchmark), 25 November 2008, by IPSE Studies, authored by J.L.T. Blank. A manual for reading the i...

  10. Benchmark af erhvervsuddannelserne

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of calculation models. Benchmarking the vocational schools is conceptually complicated: the schools offer a wide range of different programmes, which makes it difficult...

  11. A queuing model for road traffic simulation

    International Nuclear Information System (INIS)

    We present in this article a stochastic queuing model for road traffic. The model is based on the M/G/c/c state-dependent queuing model and is inspired by the deterministic Godunov scheme for road traffic simulation. We first propose a variant of the M/G/c/c state-dependent model that works with density-flow fundamental diagrams rather than density-speed relationships. We then extend this model to account for upstream traffic demand as well as downstream traffic supply. Finally, we show how to model a whole road by concatenating road sections, as in the deterministic Godunov scheme.
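
    A minimal Python sketch of the deterministic skeleton that such a model builds on, assuming a Greenshields density-flow fundamental diagram and Godunov-style demand/supply fluxes between concatenated sections (all parameter values are illustrative, not the paper's):

      # Concatenated road sections updated with Godunov-style demand/supply
      # fluxes and a Greenshields density-flow fundamental diagram.
      import numpy as np

      v_free, rho_max = 25.0, 0.2          # free speed (m/s), jam density (veh/m)
      rho_c = rho_max / 2                  # critical density of the diagram
      Q = lambda rho: rho * v_free * (1 - rho / rho_max)        # flow (veh/s)
      demand = lambda rho: Q(np.minimum(rho, rho_c))            # sending flow
      supply = lambda rho: Q(np.maximum(rho, rho_c))            # receiving flow

      dx, dt = 100.0, 2.0                  # section length (m), step (s); dt <= dx/v_free
      rho = np.full(10, 0.02)              # 10 sections, light initial traffic
      inflow = 0.4                         # constant upstream demand (veh/s)

      for _ in range(300):
          flux = np.minimum(demand(rho[:-1]), supply(rho[1:]))  # internal fluxes
          q_in = np.concatenate([[min(inflow, supply(rho[0]))], flux])
          q_out = np.concatenate([flux, [demand(rho[-1])]])     # free outflow at exit
          rho = rho + dt / dx * (q_in - q_out)

      print("final densities (veh/m):", np.round(rho, 4))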

  12. Simulation of Gravity Currents Using VOF Model

    Institute of Scientific and Technical Information of China (English)

    邹建锋; 黄钰期; 应新亚; 任安禄

    2002-01-01

    In this article, two-dimensional gravity currents with three phases, including air, are numerically simulated with the Volume of Fluid (VOF) multiphase flow model. The necessity of accounting for turbulence effects at high Reynolds numbers is demonstrated quantitatively with a Large Eddy Simulation (LES) turbulence model. The gravity currents are simulated for h ≠ H as well as h = H, where h is the depth of the gravity current before the release and H is the depth of the intruded fluid. A rising swell occurs when a current flows horizontally into another, lighter one for h ≠ H. The conditions under which the rising swell occurs, and how long it takes, are considered in this article. All the simulated results are in reasonable agreement with the available experimental results.

  13. Power electronics system modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Lai, Jih-Sheng

    1994-12-31

    This paper introduces the control-system-design software packages SIMNON and MATLAB/SIMULINK for power electronics system simulation. A complete power electronics system typically consists of a rectifier bridge along with its smoothing capacitor, an inverter, and a motor. The system components, whether discrete or continuous, linear or nonlinear, are modeled with mathematical equations. Inverter control methods, such as pulse-width modulation and hysteresis current control, are expressed in either computer algorithms or digital circuits. After describing the component models and control methods, computer programs are then developed for complete system simulation. Simulation results are mainly used for studying system performance, such as input and output current harmonics, torque ripples, and speed responses. Key computer programs and simulation results are demonstrated for educational purposes.

  14. Benchmarking and comparing first and second generation post combustion CO2 capture technologies

    DEFF Research Database (Denmark)

    Fosbøl, Philip Loldrup; Gaspar, Jozsef; Ehlers, Sören;

    2014-01-01

    The Octavius FP7 project focuses on demonstration of CO2 capture for zero emission power generation. As part of this work many partners are involved using different rate based simulation tools to develop tomorrow’s new power plants. A benchmarking is performed, in order to synchronize accuracy an...... and quality control the used modeling tools....

  15. A benchmark for fault tolerant flight control evaluation

    Science.gov (United States)

    Smaili, H.; Breeman, J.; Lombaerts, T.; Stroosma, O.

    2013-12-01

    A large transport aircraft simulation benchmark (REconfigurable COntrol for Vehicle Emergency Return - RECOVER) has been developed within the GARTEUR (Group for Aeronautical Research and Technology in Europe) Flight Mechanics Action Group 16 (FM-AG(16)) on Fault Tolerant Control (2004-2008) for the integrated evaluation of fault detection and identification (FDI) and reconfigurable flight control strategies. The benchmark includes a suitable set of assessment criteria and failure cases, based on reconstructed accident scenarios, to assess the potential of new adaptive control strategies to improve aircraft survivability. The application of reconstruction and modeling techniques, based on accident flight data, has resulted in high-fidelity nonlinear aircraft and fault models to evaluate new Fault Tolerant Flight Control (FTFC) concepts and their real-time performance to accommodate in-flight failures.

  16. Simulation Modeling of Software Development Processes

    Science.gov (United States)

    Calavaro, G. F.; Basili, V. R.; Iazeolla, G.

    1996-01-01

    A simulation modeling approach is proposed for the prediction of software process productivity indices, such as cost and time-to-market, and the sensitivity analysis of such indices to changes in the organization parameters and user requirements. The approach uses a timed Petri Net and an Object-Oriented top-down model specification. Results demonstrate the model's representativeness, and its usefulness in verifying process conformance to expectations, and in performing continuous process improvement and optimization.

  17. MODELING AND SIMULATION OF A HYDROCRACKING UNIT

    OpenAIRE

    HASSAN A. FARAG; N. S. YOUSEF; RANIA FAROUQ

    2016-01-01

    Hydrocracking is used in the petroleum industry to convert low-quality feedstocks into high-value transportation fuels such as gasoline, diesel, and jet fuel. The aim of the present work is to develop a rigorous steady-state two-dimensional mathematical model, which includes conservation equations of mass and energy, for simulating the operation of a hydrocracking unit. Both the catalyst bed and quench zone have been included in this integrated model. The model equations were numerically s...

  18. Analyzing Strategic Business Rules through Simulation Modeling

    Science.gov (United States)

    Orta, Elena; Ruiz, Mercedes; Toro, Miguel

    Service Oriented Architecture (SOA) holds promise for business agility since it allows business processes to change to meet new customer demands or market needs without causing a cascade effect of changes in the underlying IT systems. Business rules are the instrument chosen to help business and IT to collaborate. In this paper, we propose the utilization of simulation models to model and simulate strategic business rules that are then disaggregated at different levels of an SOA architecture. Our proposal aims to help find a good configuration for strategic business objectives and IT parameters. The paper includes a case study where a simulation model is built to help business decision-making in a context where finding a good configuration for different business parameters and performance is too complex to analyze by trial and error.

  19. Performance of subgrid-scale models in coarse large eddy simulations of a laminar separation bubble

    Science.gov (United States)

    Cadieux, Francois; Domaradzki, Julian A.

    2015-04-01

    The flow over many blades and airfoils at moderate angles of attack and Reynolds numbers ranging from 104 to 105 undergoes separation due to the adverse pressure gradient generated by surface curvature. In many cases, the separated shear layer then transitions to turbulence and reattaches, closing off a recirculation region—the laminar separation bubble. An equivalent problem is formulated by imposing suitable boundary conditions for flow over a flat plate to avoid numerical and mesh generation issues. Recent work demonstrated that accurate large eddy simulation (LES) of such a flow is possible using only O(1%) of the direct numerical simulation (DNS) resolution but the performance of different subgrid-scale models could not be properly assessed because of the effects of unquantified numerical dissipation. LES of a laminar separation bubble flow over a flat plate is performed using a pseudo-spectral Navier-Stokes solver at resolutions corresponding to 3% and 1% of the chosen DNS benchmark by Spalart and Strelets (2000). The negligible numerical dissipation of the pseudo-spectral code allows an unambiguous assessment of the performance of subgrid-scale models. Three explicit subgrid-scale models—dynamic Smagorinsky, σ, and truncated Navier-Stokes (TNS)—are compared to a no-model simulation (under-resolved DNS) and evaluated against benchmark DNS data focusing on two quantities of critical importance to airfoil and blade designers: time-averaged pressure (Cp) and skin friction (Cf) predictions used in lift and drag calculations. Results obtained with explicit subgrid-scale models confirm that accurate LES of laminar separation bubble flows is attainable with as low as 1% of DNS resolution, and the poor performance of the no-model simulation underscores the necessity of subgrid-scale modeling in coarse LES with low numerical dissipation.
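
    For reference, the constant-coefficient Smagorinsky model (the non-dynamic ancestor of the dynamic variant compared above) can be sketched in a few lines of Python; the velocity field, grid and C_s value below are illustrative assumptions, not the paper's setup:

      # Constant-coefficient Smagorinsky eddy viscosity on a periodic 2-D grid:
      #   nu_t = (C_s * Delta)^2 * |S|,  |S| = sqrt(2 S_ij S_ij)
      import numpy as np

      n, L, Cs = 64, 2 * np.pi, 0.17
      dx = L / n
      x = np.linspace(0, L, n, endpoint=False)
      X, Y = np.meshgrid(x, x, indexing="ij")
      u, v = np.cos(X) * np.sin(Y), -np.sin(X) * np.cos(Y)   # Taylor-Green-like field

      def ddx(f): return (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * dx)
      def ddy(f): return (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * dx)

      S11, S22 = ddx(u), ddy(v)
      S12 = 0.5 * (ddy(u) + ddx(v))
      Smag = np.sqrt(2 * (S11**2 + S22**2 + 2 * S12**2))     # strain-rate magnitude
      nu_t = (Cs * dx) ** 2 * Smag
      print("max subgrid viscosity:", nu_t.max())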

  20. 超高层建筑结构 benchmark 模型及其地震反应分析%A Benchmark Model of Mega-tall Buildings and Analysis of its Seismic Responses

    Institute of Scientific and Technical Information of China (English)

    吕西林; 姜淳; 蒋欢军

    2015-01-01

    This paper proposes a benchmark model of mega-tall buildings for investigating seismic performance. The structure is designed based on the prototype of the Shanghai Tower with a specific seismic performance objective. The total height of the structure is 606.1 m, with a seismic fortification intensity of 7. The soil type is IV, and the seismic design class is the 1st class. A mega-frame/core-tube steel-concrete composite structural system with outrigger trusses is adopted. The structure is divided into 9 zones by 8 belt trusses, which form the mega-frame system together with the SRC mega-columns. The mega-frame is connected to the core tube by 6 outrigger trusses, and the two resist the lateral load together. An elasto-plastic seismic response analysis of the model is conducted with the PERFORM-3D software to validate the seismic performance. The results show that the structure, which meets the requirements of the current design code, has a considerable safety margin under severe earthquakes.

  1. Numerical Simulation on Hydromechanical Coupling in Porous Media Adopting Three-Dimensional Pore-Scale Model

    Science.gov (United States)

    Liu, Jianjun; Song, Rui; Cui, Mengmeng

    2014-01-01

    A novel approach for simulating hydromechanical coupling in pore-scale models of porous media is presented in this paper. Parameters of the sandstone samples, such as the stress-strain curve, Poisson's ratio, and permeability under different pore pressures and confining pressures, are tested at laboratory scale. A micro-CT scanner is employed to scan the samples for three-dimensional images, used as input to construct the model. Accordingly, four physical models possessing the same pore and rock matrix characteristics as the natural sandstones are developed. Based on the micro-CT images, three-dimensional finite element models of both the rock matrix and the pore space are established on the MIMICS and ICEM software platforms. The Navier-Stokes equation and the elastic constitutive equation are used as the mathematical model for simulation. A hydromechanical coupling analysis in a pore-scale finite element model of porous media is simulated with the ANSYS and CFX software. Hereby, the permeability of the sandstone samples under different pore pressures and confining pressures is predicted. The simulation results agree well with the benchmark data. By reproducing the stress state underground, the prediction accuracy of the porous rock permeability in pore-scale simulation is improved. Consequently, the effects of pore pressure and confining pressure on permeability are revealed from the microscopic view. PMID:24955384
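
    The permeability prediction step reduces to Darcy's law once the pore-scale flow field is solved; a minimal sketch with illustrative numbers (not the paper's data):

      # Back out intrinsic permeability from a simulated flow solution via
      # Darcy's law, k = Q * mu * L / (A * dP). All values illustrative.
      Q = 2.0e-10      # simulated volumetric flow rate (m^3/s)
      mu = 1.0e-3      # water viscosity (Pa*s)
      L = 2.0e-3       # sample length (m)
      A = 4.0e-6       # cross-section area (m^2)
      dP = 1.0e4       # applied pressure drop (Pa)

      k = Q * mu * L / (A * dP)                 # intrinsic permeability (m^2)
      print(f"k = {k:.2e} m^2 = {k / 9.869e-13 * 1000:.1f} mD")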

  2. Wind Shear Target Echo Modeling and Simulation

    Directory of Open Access Journals (Sweden)

    Xiaoyang Liu

    2015-01-01

    Wind shear is a dangerous atmospheric phenomenon in aviation, defined as a sudden change of the speed or direction of the wind. In order to analyze the influence of wind shear on the efficiency of the airplane, this paper proposes a mathematical model of point-target rain echo and weather-target signal echo based on the Doppler effect. The wind field model is developed in this paper, and the antenna model is also studied using Bessel functions. The spectrum distribution of symmetric and asymmetric wind fields is investigated using the proposed mathematical model. The simulation results are consistent with the radial velocity component and confirm the correctness of the established antenna model.
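
    The Doppler relation at the core of such echo models is compact enough to sketch in Python; the carrier frequency and toy wind-shear profile below are illustrative assumptions, not the paper's values:

      # Two-way Doppler shift of a scatterer with radial velocity v_r:
      #   f_d = 2 * v_r / lambda. Carrier and wind profile are illustrative.
      import numpy as np

      f0 = 9.4e9                                   # X-band carrier (Hz)
      lam = 3.0e8 / f0                             # wavelength (m)
      gates = np.linspace(0, 5000, 11)             # range gates (m)
      v_r = 15.0 * np.tanh((gates - 2500) / 800)   # toy shear: head- to tailwind
      f_d = 2 * v_r / lam
      for r, v, f in zip(gates, v_r, f_d):
          print(f"range {r:6.0f} m: v_r = {v:6.2f} m/s, Doppler = {f:8.1f} Hz")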

  3. Nuclear reactor core modelling in multifunctional simulators

    Energy Technology Data Exchange (ETDEWEB)

    Puska, E.K. [VTT Energy, Nuclear Energy, Espoo (Finland)

    1999-06-01

    The thesis concentrates on the development of nuclear reactor core models for the APROS multifunctional simulation environment and the use of the core models in various kinds of applications. The work was started in 1986 as a part of the development of the entire APROS simulation system. The aim was to create core models that would serve in a reliable manner in an interactive, modular and multifunctional simulator/plant analyser environment. One-dimensional and three-dimensional core neutronics models have been developed. Both models have two energy groups and six delayed neutron groups. The three-dimensional finite difference type core model is able to describe both BWR- and PWR-type cores with quadratic fuel assemblies and VVER-type cores with hexagonal fuel assemblies. The one- and three-dimensional core neutronics models can be connected with the homogeneous, the five-equation or the six-equation thermal hydraulic models of APROS. The key feature of APROS is that the same physical models can be used in various applications. The nuclear reactor core models of APROS have been built in such a manner that the same models can be used in simulator and plant analyser applications, as well as in safety analysis. In the APROS environment the user can select the number of flow channels in the three-dimensional reactor core and either the homogeneous, the five- or the six-equation thermal hydraulic model for these channels. The thermal hydraulic model and the number of flow channels have a decisive effect on the calculation time of the three-dimensional core model and thus, at present, these particular selections make the major difference between a safety analysis core model and a training simulator core model. The emphasis of this thesis is on the three-dimensional core model and its capability to analyse symmetric and asymmetric events in the core. The factors affecting the calculation times of various three-dimensional BWR, PWR and VVER-type APROS core models have been

  4. An artificial neural network based fast radiative transfer model for simulating infrared sounder radiances

    Indian Academy of Sciences (India)

    Praveen Krishnan; K Srinivasa Ramanujam; C Balaji

    2012-08-01

    The first step in developing any algorithm to retrieve the atmospheric temperature and humidity parameters at various pressure levels is the simulation of the top-of-the-atmosphere radiances that can be measured by the satellite. This study reports the results of radiative transfer simulations for the multichannel infrared sounder of the proposed Indian satellite INSAT-3D, due to be launched shortly. Here, the widely used community software kCompressed Atmospheric Radiative Transfer Algorithm (kCARTA) is employed for performing the radiative transfer simulations. Though well established and benchmarked, kCARTA is a line-by-line solver and hence takes enormous computational time and effort for simulating the multispectral radiances for a given atmospheric scene. This necessitates the development of a much faster and, at the same time, equally accurate RT model that can drive a real-time retrieval algorithm. In the present study, a fast radiative transfer model using neural networks is proposed to simulate radiances corresponding to the wavenumbers of INSAT-3D. Realistic atmospheric temperature and humidity profiles have been used for training the network. Spectral response functions of GOES-13, a satellite similar in construction, purpose and design and already in use, are used. The fast RT model is able to simulate the radiances for 1200 profiles in 18 ms for a 15-channel GOES configuration, with a correlation coefficient of over 99%. Finally, the robustness of the model is tested using additional synthetic profiles generated using empirical orthogonal functions (EOF).
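
    The emulation idea can be sketched with a single-hidden-layer network whose output weights are fit by least squares; the synthetic "line-by-line" target below merely stands in for kCARTA output and is not real radiative transfer (all shapes and parameters are illustrative assumptions):

      # Fast-RT emulation sketch: random fixed hidden layer, least-squares
      # output layer, mapping atmospheric profiles to channel radiances.
      import numpy as np

      rng = np.random.default_rng(1)
      n_levels, n_channels, n_hidden = 40, 15, 200
      X = rng.normal(size=(1200, n_levels))             # profiles (toy data)
      W_true = rng.normal(size=(n_levels, n_channels))
      Y = np.tanh(X @ W_true * 0.1) + 0.01 * rng.normal(size=(1200, n_channels))

      W_h = rng.normal(size=(n_levels, n_hidden))       # fixed random hidden layer
      b_h = rng.normal(size=n_hidden)
      H = np.tanh(X @ W_h + b_h)
      W_out, *_ = np.linalg.lstsq(H, Y, rcond=None)     # train output weights only

      Y_hat = np.tanh(X @ W_h + b_h) @ W_out
      r = np.corrcoef(Y.ravel(), Y_hat.ravel())[0, 1]
      print(f"emulator correlation with target: {r:.3f}")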

  5. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark, allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. A transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and float freely to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years, depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark, a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently, models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  6. Fuzzy delay model based fault simulator for crosstalk delay fault test generation in asynchronous sequential circuits

    Indian Academy of Sciences (India)

    S Jayanthy; M C Bhuvaneswari

    2015-02-01

    In this paper, a fuzzy delay model based crosstalk delay fault simulator is proposed. As design trends move towards nanometer technologies, an increasing number of new parameters affect the delay of a component. Fuzzy delay models are ideal for modelling the uncertainty found in the design and manufacturing steps. The fault simulator based on fuzzy delays detects unstable states, oscillations and non-confluence of settling states in asynchronous sequential circuits. The fuzzy delay model based fault simulator is used to validate the test patterns produced by the Elitist Non-dominated sorting Genetic Algorithm (ENGA) based test generator for detecting crosstalk delay faults in asynchronous sequential circuits. The multi-objective genetic algorithm ENGA targets two objectives: maximizing fault coverage and minimizing the number of transitions. Experimental results are tabulated for SIS benchmark circuits for three gate delay models, namely the unit delay, rise/fall delay and fuzzy delay models. Experimental results indicate that test validation using the fuzzy delay model is more accurate than using the unit delay or rise/fall delay models.
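
    A minimal sketch of the fuzzy-delay idea, representing gate delays as triangular fuzzy numbers and propagating them along paths; the component-wise fuzzy max is a common simplification and the circuit is an illustrative assumption, not one of the SIS benchmarks:

      # Gate delays as triangular fuzzy numbers (min, mode, max). Delays add
      # along a path; converging paths take a fuzzy max, approximated
      # component-wise here (a common simplification).
      def f_add(a, b):
          return tuple(x + y for x, y in zip(a, b))

      def f_max(a, b):
          return tuple(max(x, y) for x, y in zip(a, b))

      # illustrative two-path circuit: IN -> g1 -> g3 and IN -> g2 -> g3
      g1, g2, g3 = (1.0, 1.2, 1.6), (0.8, 1.0, 1.5), (0.5, 0.6, 0.8)
      settle = f_max(f_add(g1, g3), f_add(g2, g3))   # fuzzy settling delay at g3
      print("fuzzy settling delay (min, mode, max):", settle)
      # A wide support (max - min) flags timing uncertainty that unit- or
      # rise/fall-delay models would miss, e.g. crosstalk-induced delay.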

  7. Vermont Yankee simulator BOP model upgrade

    International Nuclear Information System (INIS)

    The Vermont Yankee simulator has undergone significant changes in the 20 years since the original order was placed. After the move from the original Unix environment to MS Windows, and an upgrade to the latest version of SimPort (now called MASTER), the platform was set for an overhaul and replacement of major plant system models. Over a period of a few months, the VY simulator team, in partnership with WSC engineers, replaced outdated legacy models of the main steam, condenser, condensate, circulating water, feedwater and feedwater heaters, and main turbine and auxiliaries. The timing was ideal, as the plant was undergoing a power up-rate, so the opportunity was taken to replace the legacy models with industry-leading, true on-line object-oriented graphical models. Due to the efficiency of design and ease of use of the MASTER tools, VY staff performed the majority of the modeling work themselves with great success, with only occasional assistance from WSC, in a relatively short time period, despite having to maintain all of their 'regular' simulator maintenance responsibilities. This paper will provide a more detailed view of the VY simulator, including how it is used and how it has benefited from the enhancements and upgrades implemented during the project. (author)

  8. Simple Benchmark Specifications for Space Radiation Protection

    Science.gov (United States)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. The report specifies the models and sources needed, and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  9. Damage modeling for Taylor impact simulations

    Science.gov (United States)

    Anderson, C. E., Jr.; Chocron, I. S.; Nicholls, A. E.

    2006-08-01

    G. I. Taylor showed that dynamic material properties could be deduced from the impact of a projectile against a rigid boundary. The Taylor anvil test became very useful with the advent of numerical simulations and has been used to infer and/or to validate material constitutive constants. A new experimental facility has been developed to conduct Taylor anvil impacts to support validation of constitutive constants used in simulations. Typically, numerical simulations are conducted assuming 2-D cylindrical symmetry, but such computations cannot hope to capture the damage observed in higher velocity experiments. A computational study was initiated to examine the ability to simulate damage and subsequent deformation of the Taylor specimens. Three-dimensional simulations, using the Johnson-Cook damage model, were conducted with the nonlinear Eulerian wavecode CTH. The results of the simulations are compared to experimental deformations of 6061-T6 aluminum specimens as a function of impact velocity, and conclusions regarding the ability to simulate fracture and reproduce the observed deformations are summarized.

  10. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red

  11. Assessment of evaluated (n,d) energy-angle elastic scattering distributions using MCNP simulations of critical measurements and simplified calculation benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Kozier, K.S. [Atomic Energy of Canada Limited, Chalk River Laboratories, Chalk River, Ontario (Canada)

    2008-07-01

    Different evaluated (n,d) energy-angle elastic scattering distributions produce k-effective differences in MCNP5 simulations of critical experiments involving heavy water (D2O) of sufficient magnitude to suggest a need for new (n,d) scattering measurements and/or distributions derived from modern theoretical nuclear models, especially at neutron energies below a few MeV. The present work focuses on the small reactivity change of < 1 mk that is observed in the MCNP5 D2O coolant-void-reactivity (CVR) calculation bias for simulations of two pairs of critical experiments performed in the ZED-2 reactor at the Chalk River Laboratories when different nuclear data libraries are used for deuterium. The deuterium data libraries tested include ENDF/B-VII.0, ENDF/B-VI.4, JENDL-3.3 and a new evaluation, labelled Bonn-B, which is based on recent theoretical nuclear-model calculations. Comparison calculations were also performed for a simplified, two-region, spherical model having an inner, 250-cm-radius, homogeneous sphere of UO2, without and with deuterium, and an outer 20-cm-thick deuterium reflector. A notable observation from this work is the reduction of about 0.4 mk in the MCNP5 ZED-2 CVR calculation bias that is obtained when the O-in-UO2 thermal scattering data come from ENDF/B-VII.0. (author)

  12. Track 3: Growth of nuclear technology and research; Numerical and computational aspects of the coupled three-dimensional core/plant simulations: Organization for Economic Cooperation and Development / U.S. Nuclear Regulatory Commission pressurized water reactor main-steam-line-break benchmark - I. 6. CEA-IPSN participation in the MSLB benchmark

    International Nuclear Information System (INIS)

    The OECD/NEA Main-Steam-Line-Break (MSLB) Benchmark lets us compare state-of-the-art and best-estimate models used to compute reactivity accidents. A comprehensive study has been carried out by CEA and IPSN with the CATHARE, CRONOS2, and FLICA4 codes to assess the three-dimensional (3-D) effects in the MSLB accident and to explain the return-to-power (RTP) occurrence. The three exercises of the MSLB benchmark are defined with the aim of analyzing the space and time effects in the core and their modeling with computational tools. Point kinetics (exercise 1) simulation results in an RTP after scram, whereas 3-D kinetics (exercises 2 and 3) does not display any RTP. Our objective is to understand the reasons for the conservative solution of point kinetics and to assess the benefits of best-estimate models. First, the core vessel mixing model is analyzed; second, sensitivity studies on point kinetics are compared to 3-D kinetics; third, the core thermal-hydraulics model and coupling with neutronics is presented; finally, RTP and a suitable model for MSLB are discussed. Modeling of the vessel mixing is identified as a major concern for an accurate computation of MSLB. On one hand, the RTP in exercise 1 is driven by the mixing between primary loops, and on the other hand, the hot assembly power in exercise 3 depends on the inlet temperature map at assembly level. Vessel mixing between primary loops is defined by the ratio of the hot-leg temperature difference over the cold-leg temperature difference. Specifications indicate a ratio of 50%. Sensitivity studies on this ratio were conducted with CATHARE and point kinetics. Full mixing of the primary loops leads to an earlier and higher RTP, while no mixing results in a later and weaker RTP. Indeed, the intact steam generator (SG) is used to cool down the broken SG when both loops are mixed in the vessel, and the primary temperature decreases faster. In the extreme case of no mixing, only one-half of the primary circuit is

  13. EXACT SIMULATION OF A BOOLEAN MODEL

    Directory of Open Access Journals (Sweden)

    Christian Lantuéjoul

    2013-06-01

    A Boolean model is a union of independent objects (compact random subsets) located at Poisson points. Two algorithms are proposed for simulating a Boolean model in a bounded domain. The first one applies only to stationary models; it generates the objects prior to their Poisson locations. Two examples illustrate its applicability. The second algorithm applies to stationary and non-stationary models; it generates the Poisson points prior to the objects. The practical difficulties of its implementation are discussed. Both algorithms are based on importance sampling techniques, and the generated objects are weighted.
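
    In the spirit of the second algorithm (Poisson points first, then objects), here is a minimal Python sketch for a stationary Boolean model of random discs, using window dilation by the maximal radius so that germs outside the window whose discs intersect it are not missed (all parameters illustrative):

      # Stationary Boolean model of discs in a bounded window.
      import numpy as np

      rng = np.random.default_rng(2)
      lam, r_max, win = 0.05, 1.0, (10.0, 10.0)    # intensity, radius bound, window

      lo, hi = -r_max, np.array(win) + r_max       # window dilated by r_max
      area = (hi[0] - lo) * (hi[1] - lo)
      n = rng.poisson(lam * area)                  # Poisson number of germs
      germs = rng.uniform([lo, lo], hi, size=(n, 2))
      radii = rng.uniform(0.2, r_max, size=n)      # bounded random disc radii

      # rasterize the union of discs restricted to the window
      xs = np.linspace(0, win[0], 200)
      ys = np.linspace(0, win[1], 200)
      XX, YY = np.meshgrid(xs, ys, indexing="ij")
      covered = np.zeros_like(XX, dtype=bool)
      for (cx, cy), r in zip(germs, radii):
          covered |= (XX - cx) ** 2 + (YY - cy) ** 2 <= r ** 2

      print(f"{n} germs, estimated area fraction = {covered.mean():.3f}")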

  14. Synthetic neuronal datasets for benchmarking directed functional connectivity metrics

    Directory of Open Access Journals (Sweden)

    João Rodrigues

    2015-05-01

    Background. Datasets consisting of synthetic neural data generated with quantifiable and controlled parameters are a valuable asset in the process of testing and validating directed functional connectivity metrics. Considering the recent debate in the neuroimaging community concerning the use of these metrics for fMRI data, synthetic datasets that emulate the BOLD signal dynamics have played a central role by supporting claims that argue in favor of or against certain choices. Generative models often used in studies that simulate neuronal activity, with the aim of gaining insight into specific brain regions and functions, have different requirements from the generative models for benchmarking datasets. Even though the latter must be realistic, there is a tradeoff between realism and computational demand that needs to be considered, and simulations that efficiently mimic the real behavior of single neurons or neuronal populations are preferred to more cumbersome and only marginally more precise ones. Methods. This work explores how simple generative models are able to produce neuronal datasets, for benchmarking purposes, that reflect the simulated effective connectivity, and how these can be used to obtain synthetic recordings of EEG and fMRI BOLD signals. The generative models covered here are AR processes, neural mass models consisting of linear and nonlinear stochastic differential equations, and populations with thousands of spiking units. Forward models for EEG consist of the simple three-shell head model, while the fMRI BOLD signal is modeled with the Balloon-Windkessel model or by convolution with a hemodynamic response function. Results. The simulated datasets are tested for causality with the original spectral formulation of Granger causality. Modeled effective connectivity can be detected in the generated data for varying connection strengths and interaction delays. Discussion. All generative models produce synthetic neuronal data with
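
    A minimal Python sketch of this kind of benchmarking generator, assuming a bivariate AR process with a known directed coupling x -> y and a BOLD-like observation obtained by convolution with a canonical double-gamma HRF (coefficients, delay and HRF parameters are illustrative assumptions):

      # Bivariate AR process with ground-truth coupling x -> y, observed
      # through a double-gamma hemodynamic response function (HRF).
      import numpy as np
      from math import gamma

      rng = np.random.default_rng(3)
      T, lag, c = 2000, 2, 0.4                  # samples, delay, coupling strength
      x = np.zeros(T); y = np.zeros(T)
      for t in range(2, T):
          x[t] = 0.55 * x[t-1] - 0.2 * x[t-2] + rng.normal()
          y[t] = 0.55 * y[t-1] - 0.2 * y[t-2] + c * x[t-lag] + rng.normal()

      dt = 0.5                                  # sampling interval (s)
      tt = np.arange(0, 30, dt)                 # HRF support
      hrf = tt**5 * np.exp(-tt) / gamma(6) - tt**15 * np.exp(-tt) / gamma(16) / 6
      bold_x = np.convolve(x, hrf)[:T]
      bold_y = np.convolve(y, hrf)[:T]
      # bold_x/bold_y can now be fed to a directed-connectivity metric
      # (e.g. Granger causality) to test whether x -> y is recovered.
      print("corr(bold_x, bold_y) =", np.corrcoef(bold_x, bold_y)[0, 1])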

  15. 2015 WFNDEC eddy current benchmark modeling of impedance variation in coil due to a crack located at the plate edge

    Science.gov (United States)

    Rocha, João Vicente; Camerini, Cesar; Pereira, Gabriela

    2016-02-01

    The 2015 World Federation of NDE Centers (WFNDEC) eddy current benchmark problem involves the inspection of two EDM notches placed at the edge of a conducting plate with a pancake coil that runs parallel to the plate's edge line. Experimental data consist of the impedance variation measured with a precision LCR bridge as an XY scanner moves the coil. The authors present the numerical results obtained with a commercial FEM package (OPERA 3-D). Values of the electrical resistance and inductive reactance variation between the base material and the region around the notch are plotted as a function of the coil displacement over the plate. The calculations were made for frequencies of 1 kHz and 10 kHz, and the agreement between experimental and numerical results is excellent for all inspection conditions. Explanations are given of how the impedance is calculated, as well as the pros and cons of the presented methods.

  16. Thermohydraulic modeling and simulation of breeder reactors

    International Nuclear Information System (INIS)

    This paper deals with the modeling and simulation of system-wide transients in LMFBRs. Unprotected events (i.e., the presumption of failure of the plant protection system) leading to core-melt are not considered in this paper. The existing computational capabilities in the area of protected transients in the US are noted. Various physical and numerical approximations that are made in these codes are discussed. Finally, the future direction in the area of model verification and improvements is discussed

  17. Stabilising the global greenhouse: A simulation model

    OpenAIRE

    Michaelis, Peter

    1993-01-01

    This paper investigates the economic implications of a comprehensive approach to greenhouse policies that strives to stabilise the atmospheric concentration of greenhouse gases at an ecologically determined threshold level. In a theoretical optimisation model, conditions for an efficient allocation of abatement effort among pollutants and over time are derived. The model is empirically specified and adapted to a dynamic GAMS algorithm. By various simulation runs for the period of 1990 to 2110,...

  18. Object Oriented Modelling and Dynamical Simulation

    DEFF Research Database (Denmark)

    Wagner, Falko Jens; Poulsen, Mikael Zebbelin

    1998-01-01

    This report, with appendix, describes the work done in a master's project at DTU. The goal of the project was to develop a concept for simulation of dynamical systems based on object-oriented methods. The result was a library of C++ classes for use both when building component-based models and when...

  19. Agent Based Modelling for Social Simulation

    NARCIS (Netherlands)

    Smit, S.K.; Ubink, E.M.; Vecht, B. van der; Langley, D.J.

    2013-01-01

    This document is the result of an exploratory project looking into the status of, and opportunities for, Agent Based Modelling (ABM) at TNO. The project focussed on ABM applications containing social interactions and human factors, which we termed ABM for social simulation (ABM4SS). During the course

  20. Love Kills: Simulations in Penna Ageing Model

    Science.gov (United States)

    Stauffer, Dietrich; Cebrat, Stanisław; Penna, T. J. P.; Sousa, A. O.

    The standard Penna ageing model with sexual reproduction is enlarged by adding additional bit-strings for love: Marriage happens only if the male love strings are sufficiently different from the female ones. We simulate at what level of required difference the population dies out.
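
    For orientation, the asexual Penna bit-string core that the love-extended sexual variant builds on can be sketched in Python as follows (the threshold, birth age, mutation rate and Verhulst factor are illustrative assumptions, not the paper's settings):

      # Asexual Penna model sketch: a 32-bit genome per individual; a set bit
      # at position a is a deleterious mutation switched on at age a, and
      # T_KILL accumulated mutations kill. Crowding via a Verhulst factor.
      import random

      T_KILL, BIRTH_AGE, MUT, N_MAX = 3, 8, 1, 10_000
      random.seed(4)
      pop = [{"age": 0, "genome": 0} for _ in range(1000)]

      for step in range(200):
          verhulst = 1 - len(pop) / N_MAX          # survival probability factor
          survivors, newborn = [], []
          for ind in pop:
              ind["age"] += 1
              active = bin(ind["genome"] & ((1 << ind["age"]) - 1)).count("1")
              if active >= T_KILL or ind["age"] >= 32 or random.random() > verhulst:
                  continue                         # death: genome, old age, crowding
              survivors.append(ind)
              if ind["age"] >= BIRTH_AGE:          # reproduce with new mutations
                  child = ind["genome"]
                  for _ in range(MUT):
                      child |= 1 << random.randrange(32)
                  newborn.append({"age": 0, "genome": child})
          pop = survivors + newborn

      print("population after 200 steps:", len(pop))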

  1. Modeling salmonella Dublin into the dairy herd simulation model Simherd

    DEFF Research Database (Denmark)

    Kudahl, Anne Braad

    2010-01-01

    Infection with Salmonella Dublin in the dairy herd and effects of the infection and relevant control measures are currently being modeled into the dairy herd simulation model called Simherd. The aim is to compare the effects of different control strategies against Salmonella Dublin on both within... ...of the simulations will therefore be used for decision support in the national surveillance and eradication program against Salmonella Dublin. Basic structures of the model are programmed and will be presented at the workshop. The model is in a phase of face-validation by a group of Salmonella...

  2. Twitter's tweet method modelling and simulation

    Science.gov (United States)

    Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.

    2015-02-01

    This paper sets out to propose the concept of Twitter marketing methods. The tools that Twitter provides are modelled and simulated using iThink in the context of a Twitter media-marketing agency. The paper leverages the system dynamics paradigm to model Twitter marketing tools and methods, using the iThink™ system to implement them. It uses the design science research methodology for the proof of concept of the models and modelling processes. The models have been developed for a Twitter marketing agent/company and tested in real circumstances and with real numbers. These models were finalized through a number of revisions and iterations of design, development, simulation, testing and evaluation. The paper also addresses the methods best suited to organized, targeted promotion on the Twitter social media service. The validity and usefulness of these Twitter marketing method models for day-to-day decision making are confirmed by the management of the company. The paper implements system dynamics concepts of Twitter marketing method modelling and produces models of various Twitter marketing situations. The Tweet method that Twitter provides can be adjusted, depending on the situation, in order to maximize the profit of the company/agent.
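
    A minimal Python sketch of a stock-and-flow model of the kind built in iThink, integrated with Euler steps; the follower-growth equations and every parameter are illustrative assumptions, not the paper's calibrated model:

      # System-dynamics sketch: a "followers" stock fed by an acquisition flow
      # driven by tweet activity and drained by an unfollow flow.
      dt, horizon = 0.25, 90.0                # time step and horizon (days)
      followers = 1000.0                      # initial stock
      tweets_per_day = 6.0
      reach_per_tweet = 0.002                 # new followers per tweet per follower
      churn = 0.01                            # fraction unfollowing per day

      t = 0.0
      while t < horizon:
          acquisition = tweets_per_day * reach_per_tweet * followers
          unfollow = churn * followers
          followers += dt * (acquisition - unfollow)   # Euler integration
          t += dt

      print(f"followers after {horizon:.0f} days: {followers:,.0f}")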

  3. GeodeticBenchmark_GEOMON

    Data.gov (United States)

    Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...

  4. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  5. CFD validation in OECD/NEA t-junction benchmark.

    Energy Technology Data Exchange (ETDEWEB)

    Obabko, A. V.; Fischer, P. F.; Tautges, T. J.; Karabasov, S.; Goloviznin, V. M.; Zaytsev, M. A.; Chudanov, V. V.; Pervichko, V. A.; Aksenova, A. E. (Mathematics and Computer Science); (Cambridge Univ.); (Moscow Institute of Nuclar Energy Safety)

    2011-08-23

    When streams of rapidly moving flow merge in a T-junction, the potential arises for large oscillations at the scale of the diameter, D, with a period scaling as O(D/U), where U is the characteristic flow velocity. If the streams are of different temperatures, the oscillations result in temperature fluctuations (thermal striping) at the pipe wall in the outlet branch that can accelerate thermal-mechanical fatigue and ultimately cause pipe failure. The importance of this phenomenon has prompted the nuclear energy modeling and simulation community to establish a benchmark to test the ability of computational fluid dynamics (CFD) codes to predict thermal striping. The benchmark is based on thermal and velocity data measured in an experiment designed specifically for this purpose. Thermal striping is intrinsically unsteady and hence not accessible to steady-state simulation approaches such as steady-state Reynolds-averaged Navier-Stokes (RANS) models. Consequently, one must consider either unsteady RANS or large eddy simulation (LES). This report compares the results for three LES codes: Nek5000, developed at Argonne National Laboratory (USA), and Cabaret and Conv3D, developed at the Moscow Institute of Nuclear Energy Safety (IBRAE) in Russia. Nek5000 is based on the spectral element method (SEM), which is a high-order weighted residual technique that combines the geometric flexibility of the finite element method (FEM) with the tensor-product efficiencies of spectral methods. Cabaret is a 'compact accurately boundary-adjusting high-resolution technique' for fluid dynamics simulation. The method is second-order accurate on nonuniform grids in space and time, and has a small dispersion error and computational stencil defined within one space-time cell. The scheme is equipped with a conservative nonlinear correction procedure based on the maximum principle. CONV3D is based on the immersed boundary method and is validated on a wide set of the experimental

  6. Advances in NLTE Modeling for Integrated Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Scott, H A; Hansen, S B

    2009-07-08

    The last few years have seen significant progress in constructing the atomic models required for non-local thermodynamic equilibrium (NLTE) simulations. Along with this has come an increased understanding of the requirements for accurately modeling the ionization balance, energy content and radiative properties of different elements for a wide range of densities and temperatures. Much of this progress is the result of a series of workshops dedicated to comparing the results from different codes and computational approaches applied to a series of test problems. The results of these workshops emphasized the importance of atomic model completeness, especially in doubly excited states and autoionization transitions, to calculating ionization balance, and the importance of accurate, detailed atomic data to producing reliable spectra. We describe a simple screened-hydrogenic model that calculates NLTE ionization balance with surprising accuracy, at a low enough computational cost for routine use in radiation-hydrodynamics codes. The model incorporates term splitting, Δn = 0 transitions, and approximate UTA widths for spectral calculations, with results comparable to those of much more detailed codes. Simulations done with this model have been increasingly successful at matching experimental data for laser-driven systems and hohlraums. Accurate and efficient atomic models are just one requirement for integrated NLTE simulations. Coupling the atomic kinetics to hydrodynamics and radiation transport constrains both discretizations and algorithms to retain energy conservation, accuracy and stability. In particular, the strong coupling between radiation and populations can require either very short timesteps or significantly modified radiation transport algorithms to account for NLTE material response. Considerations such as these continue to provide challenges for NLTE simulations.
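
    The screened-hydrogenic idea can be sketched in a few lines of Python: an electron in shell n sees an effective charge Z_eff = Z - sigma_n, giving level energies of roughly E_n = -13.6 eV * Z_eff^2 / n^2. The screening constants below are crude illustrative values, not a calibrated NLTE model:

      # Screened-hydrogenic level-energy estimate (crude, illustrative).
      RY = 13.605693                          # Rydberg energy (eV)

      def level_energy(Z, n, sigma):
          z_eff = Z - sigma                   # screened effective charge
          return -RY * z_eff**2 / n**2

      # hydrogen-like estimates for aluminium (Z = 13) K- and L-shell levels
      for n, sigma in [(1, 0.3), (2, 4.15)]:
          print(f"n={n}: E ~ {level_energy(13, n, sigma):8.1f} eV")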

  7. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as impo...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  8. A Discrete Event Modeling and Simulation of Wave Division Multiplexing Unidirectional Slotted Ring Metropolitan Area Network

    Directory of Open Access Journals (Sweden)

    H. A. Fua’ad

    2009-01-01

    Problem statement: The lack of uniformity in the choice of simulation platforms for optical WDM networks underlies the difficulty of developing a common simulation environment. A mapping of the WDM unidirectional slotted ring network onto Discrete Event Simulation (DES), encompassing event definitions, a time-advance mechanism and a scheduler, had yet to be developed. Approach: The study focused on the proposal and development of an event-based discrete simulator for the WDM unidirectional slotted ring network, to facilitate the reuse of protocol modules under a common simulation environment. The network architecture implemented in the developed simulator employs a separate wavelength as the control information channel; this control information enables the nodes to monitor their access to the transmission medium. Each node is equipped with a tunable transmitter and a fixed receiver for data communication; access nodes are equipped with a fixed transmitter and a fixed receiver for the control information exchange. The simulator divides each wavelength into slots, which the nodes use to transmit fixed-size packets. Slots can be reused after packet reception through a spatial reuse scheme, thus enhancing bandwidth utilization. The simulator derives its set of parameters, events, performance metrics and other WDM-specific simulator elements from a detailed analysis of the base model. Results: The network delay and packet loss were investigated and compared to a benchmark of the modeled domain. The generated results demonstrate successful deployment of the developed simulator. Conclusion: Extensive performance analysis of WDM unidirectional slotted ring networks can be carried out with the developed simulator at low computational overhead. Further enhancements would extend the simulator to a bidirectional slotted ring supporting
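
    A minimal slot-synchronous Python sketch of the simulated mechanism, with Bernoulli packet arrivals and destination release for spatial reuse (ring size, slot count and load are illustrative assumptions, not the paper's configuration):

      # Unidirectional slotted ring with spatial reuse: slots circulate past
      # the nodes; a node seizes an empty slot for the head packet of its
      # queue, and the destination empties the slot on arrival for reuse.
      import random
      from collections import deque

      random.seed(5)
      N_NODES, N_SLOTS, P_ARRIVAL, STEPS = 8, 8, 0.2, 10_000
      queues = [deque() for _ in range(N_NODES)]
      slots = [None] * N_SLOTS                 # None = empty, else (src, dst)
      delivered = 0

      for t in range(STEPS):
          for n in range(N_NODES):             # Bernoulli packet arrivals
              if random.random() < P_ARRIVAL:
                  dst = random.choice([d for d in range(N_NODES) if d != n])
                  queues[n].append(dst)
          for n in range(N_NODES):
              s = (n + t) % N_SLOTS            # slot currently passing node n
              if slots[s] is not None and slots[s][1] == n:
                  slots[s] = None              # destination release -> reuse
                  delivered += 1
              if slots[s] is None and queues[n]:
                  slots[s] = (n, queues[n].popleft())

      print(f"delivered {delivered} packets, backlog {sum(map(len, queues))}")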

  9. Electronic continuum model for molecular dynamics simulations.

    Science.gov (United States)

    Leontyev, I V; Stuchebrukhov, A A

    2009-02-28

    A simple model for accounting for electronic polarization in molecular dynamics (MD) simulations is discussed. In this model, called molecular dynamics electronic continuum (MDEC), the electronic polarization is treated explicitly in terms of the electronic continuum (EC) approximation, while the nuclear dynamics is described with a fixed-charge force field. In such a force field all atomic charges are scaled to reflect the screening effect by the electronic continuum. The MDEC model is rather similar but not equivalent to the standard nonpolarizable force fields; the differences are discussed. Of particular interest is the calculation of the electrostatic part of the solvation energy using standard nonpolarizable MD simulations. In a low-dielectric environment, such as a protein, the standard MD approach produces qualitatively wrong results. The difficulty lies in the mistreatment of the electronic polarizability. We show how the results can be much improved using the MDEC approach. We also show how the dielectric constant of the medium obtained in an MD simulation with a nonpolarizable force field is related to the static (total) dielectric constant, which includes both the nuclear and electronic relaxation effects. Using the MDEC model, we discuss recent calculations of dielectric constants of alcohols and alkanes, and show that the MDEC results are comparable with those obtained with the polarizable Drude oscillator model. The applicability of the method to calculations of dielectric properties of proteins is discussed. PMID:19256627
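
    The charge-scaling prescription at the heart of this approach is compact: in the electronic continuum approximation every fixed charge is multiplied by 1/sqrt(eps_el), so Coulomb energies come out screened by eps_el (about 2 for most organic matter). A minimal Python sketch with illustrative geometry and charges:

      # MDEC-style charge scaling: q_scaled = q / sqrt(eps_el), which screens
      # Coulomb energies by eps_el. Charges and separation are illustrative.
      import numpy as np

      eps_el = 2.0
      scale = 1.0 / np.sqrt(eps_el)        # ~0.71 charge scaling factor

      ke = 1389.35                         # Coulomb constant, kJ/mol * A / e^2
      q = np.array([0.5, -0.5]) * scale    # scaled partial charges (e)
      r = 3.0                              # separation (Angstrom)

      E = ke * q[0] * q[1] / r
      print(f"screened Coulomb energy: {E:.1f} kJ/mol "
            f"(unscreened would be {E * eps_el:.1f})")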

  10. Computational model for protein unfolding simulation

    Science.gov (United States)

    Tian, Xu-Hong; Zheng, Ye-Han; Jiao, Xiong; Liu, Cai-Xing; Chang, Shan

    2011-06-01

    The protein folding problem is one of the fundamental and important questions in molecular biology. However, all-atom molecular dynamics studies of protein folding and unfolding are still computationally expensive and severely limited by the accessible simulation time scale. In this paper, a simple and fast protein unfolding method is proposed based on conformational stability analyses and structure modeling. In this method, two structure-based conditions are considered to identify the unstable regions of proteins during the unfolding process. The protein unfolding trajectories are mimicked through iterative structure modeling according to the conformational stability analyses. Two proteins, chymotrypsin inhibitor 2 (CI2) and the α-spectrin SH3 domain (SH3), were simulated by this method. Their unfolding pathways are consistent with previous molecular dynamics simulations. Furthermore, the transition states of the two proteins were identified in the unfolding processes, and the theoretical Φ values of these transition states showed significant correlations with the experimental data (correlation coefficients >0.8). The results indicate that this method is effective in studying protein unfolding. Moreover, we analyzed and discussed the influence of parameters on the unfolding simulation. This simple coarse-grained model may provide a general and fast approach for mechanistic studies of protein folding.
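
    The iterative scheme the abstract outlines (score the conformational stability of each region, perturb the least stable one, remodel, repeat) can be caricatured as follows. The neighbor-count stability score and the random displacement are placeholder stand-ins, not the paper's two structure-based conditions.

    import random

    def stability_scores(coords):
        """Placeholder stability proxy: number of neighbors within a cutoff."""
        cutoff2 = 1.5 ** 2
        return [sum(1 for j, (xj, yj) in enumerate(coords)
                    if i != j and (xi - xj) ** 2 + (yi - yj) ** 2 < cutoff2)
                for i, (xi, yi) in enumerate(coords)]

    def unfold(coords, steps=10):
        trajectory = [list(coords)]
        for _ in range(steps):
            scores = stability_scores(coords)
            weakest = scores.index(min(scores))   # least stable residue
            x, y = coords[weakest]
            coords[weakest] = (x + random.uniform(-1, 1),
                               y + random.uniform(-1, 1))  # displace it
            trajectory.append(list(coords))       # record an unfolding snapshot
        return trajectory

    traj = unfold([(float(i), 0.0) for i in range(10)])
    print(len(traj), "snapshots")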

  11. Modeling, simulation and optimization of bipedal walking

    CERN Document Server

    Berns, Karsten

    2013-01-01

    The model-based investigation of motions of anthropomorphic systems is an important interdisciplinary research topic involving specialists from many fields such as Robotics, Biomechanics, Physiology, Orthopedics, Psychology, Neurosciences, Sports, Computer Graphics and Applied Mathematics. This book presents a study of basic locomotion forms such as walking and running, which are of particular interest due to the high demands they place on dynamic coordination, actuator efficiency and balance control. Mathematical models and numerical simulation and optimization techniques are explained, in combination with experimental data, which can help to better understand the basic mechanisms underlying these motions and to improve them. Example topics treated in this book are: modeling techniques for anthropomorphic bipedal walking systems; optimized walking motions for different objective functions; identification of objective functions from measurements; simulation and optimization approaches for humanoid robots; biologically inspired con…

  12. Multiphase reacting flows modelling and simulation

    CERN Document Server

    Marchisio, Daniele L

    2007-01-01

    The papers in this book describe the most widely applicable modeling approaches and are organized in six groups ranging from fundamentals to relevant applications. In the first part, some fundamentals of multiphase turbulent reacting flows are covered. In particular, the introduction focuses on basic notions of turbulence theory in single-phase and multiphase systems, as well as on the interaction between turbulence and chemistry. In the second part, models for the physical and chemical processes involved are discussed. Among other things, particular emphasis is given to turbulence modeling strategies for multiphase flows based on the kinetic theory of granular flows. Next, the different numerical methods based on Lagrangian and/or Eulerian schemes are presented. In particular, the most popular numerical approaches of computational fluid dynamics codes are described (i.e., Direct Numerical Simulation, Large Eddy Simulation, and the Reynolds-Averaged Navier-Stokes approach). The book will cover particle-based meth…

  13. Modelling and simulation of thermal power plants

    Energy Technology Data Exchange (ETDEWEB)

    Eborn, J.

    1998-02-01

    Mathematical modelling and simulation are important tools when dealing with engineering systems that are becoming increasingly complex. Integrated production and recycling of materials are trends that give rise to heterogeneous systems, which are difficult to handle within one area of expertise. Model libraries are an excellent way to package engineering knowledge of systems and units so that it can be reused by those who are not experts in modelling. Many commercial packages provide good model libraries, but they are usually domain-specific and closed. Heterogeneous, multi-domain systems require open model libraries written in general-purpose modelling languages. This thesis describes a model database for thermal power plants written in the object-oriented modelling language OMOLA. The models are based on first principles. Subunits describe volumes with pressure and enthalpy dynamics and flows of heat or different media. The subunits are used to build basic units such as pumps, valves and heat exchangers, which in turn can be used to build system models. Several applications are described: a heat recovery steam generator, equipment for juice blending, steam generation in a sulphuric acid plant and a condensing steam plate heat exchanger. Model libraries for industrial use must be validated against measured data, and the thesis describes how parameter estimation methods can be used for model validation. Results from a case study on parameter optimization of a non-linear drum boiler model show how the technique can be used. 32 refs, 21 figs
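
    The model-validation step described above, estimating parameters so that a first-principles model matches plant measurements, can be illustrated with a minimal least-squares fit. The first-order step-response model and the synthetic data below are placeholders, not the thesis's drum boiler model.

    import numpy as np
    from scipy.optimize import least_squares

    def step_response(params, t):
        gain, tau = params
        return gain * (1.0 - np.exp(-t / tau))  # first-order step response

    t = np.linspace(0.0, 50.0, 100)
    rng = np.random.default_rng(0)
    measured = step_response([2.0, 8.0], t) + 0.05 * rng.normal(size=t.size)

    def residuals(params):
        return step_response(params, t) - measured  # model minus data

    fit = least_squares(residuals, x0=[1.0, 1.0])
    print("estimated gain and time constant:", fit.x)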

  14. Remarks on a benchmark nonlinear constrained optimization problem

    Institute of Scientific and Technical Information of China (English)

    Luo Yazhong; Lei Yongjun; Tang Guojin

    2006-01-01

    Remarks on a benchmark nonlinear constrained optimization problem are made. Due to a citation error, two entirely different results for the benchmark problem have been obtained by independent researchers. Parallel simulated annealing combined with the simplex method is employed in our study to solve the benchmark nonlinear constrained problem with the mistaken formula, and the best-known solution is obtained, whose optimality is verified by the Kuhn-Tucker conditions.
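
    Verifying a candidate solution against the Kuhn-Tucker (KKT) conditions, as done for the best-known solution above, amounts to checking stationarity, primal and dual feasibility, and complementary slackness. The toy problem below (minimize x1^2 + x2^2 subject to x1 + x2 >= 1) is a stand-in for the benchmark problem, not the problem from the paper.

    import numpy as np

    x = np.array([0.5, 0.5])      # candidate optimum
    lam = np.array([1.0])         # candidate Lagrange multiplier

    grad_f = 2.0 * x                          # gradient of the objective
    g = np.array([1.0 - x[0] - x[1]])         # inequality constraint, g(x) <= 0
    grad_g = np.array([[-1.0, -1.0]])         # its gradient

    stationarity = grad_f + lam @ grad_g      # should vanish at an optimum
    print("stationarity residual:", np.abs(stationarity).max())
    print("primal feasible:", bool(np.all(g <= 1e-9)))
    print("dual feasible:", bool(np.all(lam >= 0)))
    print("complementary slackness:", np.abs(lam * g).max())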

  15. Laser wakefield simulation using a speed-of-light frame envelope model

    International Nuclear Information System (INIS)

    Simulation of laser wakefield accelerator (LWFA) experiments is computationally intensive due to the disparate length scales involved. Current experiments extend hundreds of laser wavelengths transversely and many thousands in the propagation direction, making explicit particle-in-cell (PIC) simulations enormously expensive and requiring massively parallel execution in 3D. We can substantially improve the performance of laser wakefield simulations by modeling the envelope modulation of the laser field rather than the field itself. This allows for much coarser grids, since we need only resolve the plasma wavelength and not the laser wavelength, and therefore larger timesteps; an envelope model can thus yield savings of several orders of magnitude in computational resources. By propagating the laser envelope in a frame moving at the speed of light, dispersive errors can be avoided and simulations over long distances become possible. Here we describe the model and its implementation, and show simulations and benchmarking of laser wakefield phenomena such as channel propagation, self-focusing, wakefield generation, and downramp injection using the model.
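
    A quick back-of-the-envelope sketch of why resolving the plasma wavelength instead of the laser wavelength pays off. The plasma density and laser wavelength below are illustrative LWFA-scale numbers, not values from the paper.

    import math

    E = 1.602176634e-19      # elementary charge (C)
    ME = 9.1093837015e-31    # electron mass (kg)
    EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)
    C = 2.99792458e8         # speed of light (m/s)

    n_e = 1e24               # plasma density, m^-3 (illustrative)
    lam_laser = 0.8e-6       # Ti:sapphire wavelength, m (illustrative)

    omega_p = math.sqrt(n_e * E**2 / (EPS0 * ME))  # plasma frequency (rad/s)
    lam_p = 2.0 * math.pi * C / omega_p            # plasma wavelength (m)

    ratio = lam_p / lam_laser
    # Longitudinal cell size and timestep can each be coarsened by roughly this
    # ratio, so the savings grow as a power of it once all dimensions are counted.
    print(f"plasma wavelength: {lam_p * 1e6:.1f} um, resolution ratio: {ratio:.0f}")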

  16. Simulating the folding of HP-sequences with a minimalist model in an inhomogeneous medium.

    Science.gov (United States)

    Alas, S J; González-Pérez, P P

    2016-01-01

    The phenomenon of protein folding is a fundamental issue in the field of computational molecular biology. Protein folding inside cells takes place in a highly inhomogeneous, tortuous, and correlated environment; it is therefore important for theoretical studies to include the medium in which folding develops. In this work we present a combination of three models to mimic protein folding inside an inhomogeneous medium. The models used here are the Hydrophobic-Polar (HP) model on a 2D square lattice, Evolutionary Algorithms (EA), and the Dual Site Bond Model (DSBM). The DSBM is used to simulate the environment where the HP beads are folded; in this case the medium is correlated and fractal-like. The analysis of five benchmark HP sequences shows that an inhomogeneous space with a given correlation length and fractal dimension plays an important role in the correct folding of these sequences, which does not occur in a homogeneous space.
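
    The energy function of the 2D square-lattice HP model is simple enough to sketch directly: every pair of non-bonded H monomers occupying adjacent lattice sites (a topological contact) contributes -1. The toy sequence and fold below are illustrative, not one of the paper's benchmark sequences.

    def hp_energy(sequence, coords):
        """sequence: string of 'H'/'P'; coords: list of (x, y) lattice sites."""
        occupied = {pos: i for i, pos in enumerate(coords)}
        energy = 0
        for i, (x, y) in enumerate(coords):
            if sequence[i] != 'H':
                continue
            for neighbor in ((x + 1, y), (x, y + 1)):   # count each pair once
                j = occupied.get(neighbor)
                if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                    energy -= 1
        return energy

    # A 4-residue toy chain folded into a square: one H-H contact (residues 0, 3).
    print(hp_energy("HPPH", [(0, 0), (1, 0), (1, 1), (0, 1)]))  # -> -1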

  17. Mathematical models and numerical simulation in electromagnetism

    CERN Document Server

    Bermúdez, Alfredo; Salgado, Pilar

    2014-01-01

    The book represents a basic support for a master course in electromagnetism oriented to numerical simulation. The main goal of the book is for the reader to know the boundary-value problems of partial differential equations that should be solved in order to perform computer simulation of electromagnetic processes. Moreover, it includes a part devoted to electric circuit theory based on ordinary differential equations. The book is mainly oriented to electrical engineering applications, going from the general to the specific, namely, from the full Maxwell’s equations to the particular cases of electrostatics, direct current, magnetostatics and eddy current models. Apart from standard exercises related to analytical calculus, the book includes others oriented to real-life applications solved with the free simulation software MaxFEM.

  18. Modeling and simulation of economic processes

    Directory of Open Access Journals (Sweden)

    Bogdan Brumar

    2010-12-01

    Full Text Available In general, any activity requires time, and longer actions are often characterized by a degree of uncertainty and insecurity regarding the size of the objective pursued. Because of the complexity of real economic systems and the stochastic dependencies between the different variables and parameters considered, not all systems can be adequately represented by a model that can be solved by analytical methods while covering all the issues relevant to managerial decision analysis over a realistic economic horizon. In such cases, the simulation technique is often considered the only available alternative. Using simulation techniques to study real-world systems frequently requires laborious work, and carrying out a simulation experiment is a process that takes place in several stages.
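
    As a toy illustration of the point, when stochastic dependencies make an analytical solution impractical, a Monte Carlo simulation still recovers the behavior of the objective. The single-period inventory model below is a generic textbook stand-in, not an example drawn from the article.

    import random

    def simulate_profit(n_runs=10_000, order_qty=120):
        """Mean profit of a single-period stochastic inventory decision."""
        total = 0.0
        for _ in range(n_runs):
            demand = max(random.gauss(100, 25), 0.0)  # uncertain demand
            sold = min(order_qty, demand)
            total += 5.0 * sold - 2.0 * order_qty     # unit margin vs unit cost
        return total / n_runs

    print(f"estimated mean profit: {simulate_profit():.1f}")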

  19. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  20. Deep Drawing Simulations With Different Polycrystalline Models

    Science.gov (United States)

    Duchêne, Laurent; de Montleau, Pierre; Bouvier, Salima; Habraken, Anne Marie

    2004-06-01

    The goal of this research is to study anisotropic material behavior during forming processes, represented by both complex yield loci and combined kinematic-isotropic hardening models. The first part of this paper describes the main concepts of the `Stress-strain interpolation' model that has been implemented in the non-linear finite element code Lagamine. This model consists of a local description of the yield locus based on the texture of the material through the full-constraints Taylor model. The texture evolution due to plastic deformation is computed throughout the FEM simulations. This `local yield locus' approach was initially linked to the classical isotropic Swift hardening law. Recently, a more complex hardening model was implemented: the physically-based microstructural model of Teodosiu. It takes into account intergranular heterogeneity due to the evolution of dislocation structures, which affects both isotropic and kinematic hardening. The influence of the hardening model is compared to the influence of texture evolution by means of deep drawing simulations.
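
    The classical isotropic Swift hardening law referred to above takes the form sigma_y = K * (eps0 + eps_p)^n: the flow stress grows as a power law of accumulated plastic strain. A minimal sketch follows, with parameter values chosen as illustrative mild-steel-like numbers rather than values from the paper.

    K = 550.0      # strength coefficient, MPa (assumed)
    EPS0 = 0.005   # pre-strain offset (assumed)
    N_EXP = 0.25   # hardening exponent (assumed)

    def swift_flow_stress(eps_p):
        """Yield stress (MPa) as a function of equivalent plastic strain."""
        return K * (EPS0 + eps_p) ** N_EXP

    for eps in (0.0, 0.05, 0.10, 0.20):
        print(f"eps_p = {eps:.2f} -> sigma_y = {swift_flow_stress(eps):6.1f} MPa")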