WorldWideScience

Sample records for model simulations capture

  1. Circuit simulation model for multi-quantum well laser diodes including transport and capture/escape

    International Nuclear Information System (INIS)

    Zhuber-Okrog, K.

    1996-04-01

    This work describes the development of the world's first circuit simulation model for multi-quantum well (MQW) semiconductor lasers comprising carrier transport and capture/escape effects. This model can be seen as the application of a new semiconductor device simulator for quasineutral structures including MQW layers, with an extension for simple single-mode modeling of the optical behavior. It is implemented in a circuit simulation program. The model is applied to Fabry-Perot laser diodes and compared to measured data. (author)
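
    For orientation, this class of model is usually written as coupled rate equations for the barrier/separate-confinement (SCH) carriers, the quantum-well carriers, and the photon density, with capture and escape appearing as exchange terms. The sketch below is a generic textbook form with illustrative symbols (τ_cap, τ_esc, volumes V_b, V_w); it is not the author's exact formulation.

      \frac{dN_b}{dt} = \frac{I}{qV_b} - \frac{N_b}{\tau_{cap}} + \frac{V_w}{V_b}\frac{N_w}{\tau_{esc}} - \frac{N_b}{\tau_b}
      \frac{dN_w}{dt} = \frac{V_b}{V_w}\frac{N_b}{\tau_{cap}} - \frac{N_w}{\tau_{esc}} - \frac{N_w}{\tau_w} - v_g\,g(N_w)\,S
      \frac{dS}{dt}  = \Gamma v_g\,g(N_w)\,S - \frac{S}{\tau_p} + \beta\frac{N_w}{\tau_w}

    Here N_b and N_w are the barrier and well carrier densities, S the single-mode photon density, g the gain, and τ_cap and τ_esc the capture and escape time constants that couple the transport part of the model to the optical part.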

  2. Modelling the oil producers: Capturing oil industry knowledge in a behavioural simulation model

    International Nuclear Information System (INIS)

    Morecroft, J.D.W.; Van der Heijden, K.A.J.M.

    1992-01-01

    A group of senior managers and planners from a major oil company met to discuss the changing structure of the oil industry with the purpose of improving group understanding of oil market behaviour for use in global scenarios. This broad-ranging discussion led to a system dynamics simulation model of the oil producers. The model produced new insights into the power and stability of OPEC (the major oil producers' organization), the dynamics of oil prices, and the investment opportunities of non-OPEC producers. The paper traces the model development process, starting from group discussions and leading to working simulation models. Particular attention is paid to the methods used to capture team knowledge and to ensure that the computer models reflected opinions and ideas from the meetings. The paper describes how flip-chart diagrams were used to collect ideas about the logic of the principal producers' production decisions. A sub-group of the project team developed and tested an algebraic model. The paper shows partial model simulations used to build confidence and a sense of ownership in the algebraic formulations. Further simulations show how the full model can stimulate thinking about producers' behaviour and oil prices. The paper concludes with comments on the model building process. 11 figs., 37 refs

  3. The effect of modeled recharge distribution on simulated groundwater availability and capture.

    Science.gov (United States)

    Tillman, F D; Pool, D R; Leake, S A

    2015-01-01

    Simulating groundwater flow in basin-fill aquifers of the semiarid southwestern United States commonly requires decisions about how to distribute aquifer recharge. Precipitation can recharge basin-fill aquifers by direct infiltration and transport through faults and fractures in the high-elevation areas, by flowing overland through high-elevation areas to infiltrate at basin-fill margins along mountain fronts, by flowing overland to infiltrate along ephemeral channels that often traverse basins in the area, or by some combination of these processes. The importance of accurately simulating recharge distributions is a current topic of discussion among hydrologists and water managers in the region, but no comparative study has been performed to analyze the effects of different recharge distributions on groundwater simulations. This study investigates the importance of the distribution of aquifer recharge in simulating regional groundwater flow in basin-fill aquifers by calibrating a groundwater-flow model to four different recharge distributions, all with the same total amount of recharge. Similarities are seen in results from steady-state models for optimized hydraulic conductivity values, fit of simulated to observed hydraulic heads, and composite scaled sensitivities of conductivity parameter zones. Transient simulations with hypothetical storage properties and pumping rates produce similar capture rates and storage change results, but differences are noted in the rate of drawdown at some well locations owing to the differences in optimized hydraulic conductivity. Depending on whether the purpose of the groundwater model is to simulate changes in groundwater levels or changes in storage and capture, the distribution of aquifer recharge may or may not be of primary importance. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.

  4. Musculoskeletal Simulation Model Generation from MRI Data Sets and Motion Capture Data

    Science.gov (United States)

    Schmid, Jérôme; Sandholm, Anders; Chung, François; Thalmann, Daniel; Delingette, Hervé; Magnenat-Thalmann, Nadia

    Today computer models and computer simulations of the musculoskeletal system are widely used to study the mechanisms behind human gait and its disorders. The common way of creating musculoskeletal models is to use a generic musculoskeletal model based on data derived from anatomical and biomechanical studies of cadaverous specimens. To adapt this generic model to a specific subject, the usual approach is to scale it. This scaling has been reported to introduce several errors because it does not always account for subject-specific anatomical differences. As a result, a novel semi-automatic workflow is proposed that creates subject-specific musculoskeletal models from magnetic resonance imaging (MRI) data sets and motion capture data. Based on subject-specific medical data and a model-based automatic segmentation approach, an accurate modeling of the anatomy can be produced while avoiding the scaling operation. This anatomical model coupled with motion capture data, joint kinematics information, and muscle-tendon actuators is finally used to create a subject-specific musculoskeletal model.

  5. CO2 capture using aqueous ammonia: kinetic study and process simulation

    DEFF Research Database (Denmark)

    Darde, Victor Camille Alfred; van Well, Willy J.M.; Stenby, Erling Halfdan

    2011-01-01

    Carbon dioxide capture using aqueous ammonia is a post-combustion technology that has shown a good potential. Therefore this process is studied by measuring the rate of absorption of carbon dioxide by aqueous ammonia and by performing process simulation. The rate of absorption of carbon dioxide … to 0.6. The results were compared with those found for 30 wt% mono-ethanolamine (MEA) solutions. The capture process was simulated successfully using the simulator Aspen Plus coupled with the extended UNIQUAC thermodynamic model available for the NH3–CO2–H2O system. For this purpose, a user model …

  6. Process simulation of CO2 capture with aqueous ammonia using the Extended UNIQUAC model

    DEFF Research Database (Denmark)

    Darde, Victor Camille Alfred; Maribo-Mogensen, Bjørn; van Well, Willy J.M.

    2012-01-01

    The use of aqueous ammonia is a promising option to capture carbon dioxide from power plants thanks to the potential low heat requirement during the carbon dioxide desorption compared to monoethanolamine (MEA) based process. The patented Chilled Ammonia Process developed by Alstom absorbs carbon dioxide at low temperature (2–10°C). The low temperature limits the vaporization of ammonia in the absorber and entails precipitation of ammonium carbonate compounds, thereby allowing high loadings of CO2. The process has thereby good perspectives. However, a scientific understanding and evaluation of the process is necessary. In this work, the performance of the carbon dioxide capture process using aqueous ammonia has been analyzed by process simulation. The Extended UNIQUAC thermodynamic model available for the CO2–NH3–H2O system has been implemented in the commercial simulator Aspen Plus® by using …

  7. Dynamic Operation and Simulation of Post-Combustion CO2 Capture

    DEFF Research Database (Denmark)

    Gaspar, Jozsef; Gladis, Arne; Jørgensen, John Bagterp

    2016-01-01

    Thermal power plants need to operate, on a daily basis, with frequent and fast load changes to balance the large variations of intermittent energy sources, such as wind and solar energy. To make the integration of carbon capture to power plants economically and technically feasible, the carbon capture process has to be able to follow these fast and large load changes without decreasing the overall performance of the carbon capture plant. Therefore, dynamic models for simulation, optimization and control system design are essential. In this work, we compare the transient behavior of the model against...

  8. Annual Report: Carbon Capture Simulation Initiative (CCSI) (30 September 2012)

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David C. [National Energy Technology Lab. (NETL), Morgantown, WV (United States); Syamlal, Madhava [National Energy Technology Lab. (NETL), Morgantown, WV (United States); Cottrell, Roger [URS Corporation. (URS), San Francisco, CA (United States); National Energy Technology Lab. (NETL), Morgantown, WV (United States); Kress, Joel D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sun, Xin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sundaresan, S. [Princeton Univ., NJ (United States); Sahinidis, Nikolaos V. [Carnegie Mellon Univ., Pittsburgh, PA (United States); National Energy Technology Lab. (NETL), Morgantown, WV (United States); Zitney, Stephen E. [NETL; Bhattacharyya, D. [West Virginia Univ., Morgantown, WV (United States); National Energy Technology Lab. (NETL), Morgantown, WV (United States); Agarwal, Deb [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tong, Charles [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lin, Guang [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Dale, Crystal [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Engel, Dave [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Calafiura, Paolo [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Beattie, Keith [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shinn, John [SynPatEco. Pleasant Hill, CA (United States)

    2012-09-30

    The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that is developing and deploying state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models, with uncertainty quantification (UQ), optimization, risk analysis and decision making capabilities. The CCSI Toolset incorporates commercial and open-source software currently in use by industry and is also developing new software tools as necessary to fill technology gaps identified during execution of the project. Ultimately, the CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. CCSI is organized into 8 technical elements that fall under two focus areas. The first focus area (Physicochemical Models and Data) addresses the steps necessary to model and simulate the various technologies and processes needed to bring a new Carbon Capture and Storage (CCS) technology into production. The second focus area (Analysis & Software) is developing the software infrastructure to integrate the various components and implement the tools that are needed to make quantifiable decisions regarding the viability of new CCS technologies. CCSI also has an Industry Advisory Board (IAB). By working closely with industry from the inception of the project to identify

  9. Advanced modeling and simulation of integrated gasification combined cycle power plants with CO2-capture

    International Nuclear Information System (INIS)

    Rieger, Mathias

    2014-01-01

    The objective of this thesis is to provide an extensive description of the correlations in some of the most crucial sub-processes for hard coal fired IGCC with carbon capture (CC-IGCC). For this purpose, process simulation models are developed for four industrial gasification processes, the CO-shift cycle, the acid gas removal unit, the sulfur recovery process, the gas turbine, the water-/steam cycle and the air separation unit (ASU). Process simulations clarify the influence of certain boundary conditions on plant operation, performance and economics. Based on that, a comparative benchmark of CC-IGCC concepts is conducted. Furthermore, the influence of integration between the gas turbine and the ASU is analyzed in detail. The generated findings are used to develop an advanced plant configuration with improved economics. Nevertheless, IGCC power plants with carbon capture are not found to be an economically efficient power generation technology at present day boundary conditions.

  10. Advanced modeling and simulation of integrated gasification combined cycle power plants with CO{sub 2}-capture

    Energy Technology Data Exchange (ETDEWEB)

    Rieger, Mathias

    2014-04-17

    The objective of this thesis is to provide an extensive description of the correlations in some of the most crucial sub-processes for hard coal fired IGCC with carbon capture (CC-IGCC). For this purpose, process simulation models are developed for four industrial gasification processes, the CO-shift cycle, the acid gas removal unit, the sulfur recovery process, the gas turbine, the water-/steam cycle and the air separation unit (ASU). Process simulations clarify the influence of certain boundary conditions on plant operation, performance and economics. Based on that, a comparative benchmark of CC-IGCC concepts is conducted. Furthermore, the influence of integration between the gas turbine and the ASU is analyzed in detail. The generated findings are used to develop an advanced plant configuration with improved economics. Nevertheless, IGCC power plants with carbon capture are not found to be an economically efficient power generation technology at present day boundary conditions.

  11. Annual Report: Carbon Capture Simulation Initiative (CCSI) (30 September 2013)

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David C. [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States); Syamlal, Madhava [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States); Cottrell, Roger [URS Corporation. (URS), San Francisco, CA (United States); National Energy Technology Lab. (NETL), Morgantown, WV (United States); Kress, Joel D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sundaresan, S. [Princeton Univ., NJ (United States); Sun, Xin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Storlie, C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bhattacharyya, D. [West Virginia Univ., Morgantown, WV (United States); National Energy Technology Lab. (NETL), Morgantown, WV (United States); Tong, Charles [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Zitney, Stephen E [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States); Dale, Crystal [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Engel, Dave [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Agarwal, Deb [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Calafiura, Paolo [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shinn, John [SynPatEco, Pleasant Hill, CA (United States)

    2013-09-30

    The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that is developing and deploying state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models, with uncertainty quantification (UQ), optimization, risk analysis and decision making capabilities. The CCSI Toolset incorporates commercial and open-source software currently in use by industry and is also developing new software tools as necessary to fill technology gaps identified during execution of the project. Ultimately, the CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. CCSI is led by the National Energy Technology Laboratory (NETL) and leverages the Department of Energy (DOE) national laboratories’ core strengths in modeling and simulation, bringing together the best capabilities at NETL, Los Alamos National Laboratory (LANL), Lawrence Berkeley National Laboratory (LBNL), Lawrence Livermore National Laboratory (LLNL), and Pacific Northwest National Laboratory (PNNL). The CCSI’s industrial partners provide representation from the power generation industry, equipment manufacturers, technology providers and engineering and construction firms. The CCSI’s academic participants (Carnegie Mellon University, Princeton University, West

  12. Simulation of the capture process in the Fermilab Booster

    International Nuclear Information System (INIS)

    Stahl, S.; Ankenbrandt, C.

    1987-01-01

    A progress report on efforts to understand and improve adiabatic capture in the Fermilab Booster by experiment and simulation is presented. In particular, a new Rf voltage program for capture which ameliorates transverse space-charge effects is described and simulated

  13. Simulation of mercury capture by sorbent injection using a simplified model.

    Science.gov (United States)

    Zhao, Bingtao; Zhang, Zhongxiao; Jin, Jing; Pan, Wei-Ping

    2009-10-30

    Mercury pollution by fossil fuel combustion or solid waste incineration is becoming a worldwide environmental concern. As an effective control technology, powdered sorbent injection (PSI) has been successfully used for mercury capture from flue gas, with the advantages of low cost and easy operation. In order to predict the mercury capture efficiency for PSI more conveniently, a simplified model, based on the theory of mass transfer, isothermal adsorption and mass balance, is developed in this paper. Comparisons between theoretical results of this model and experimental results by Meserole et al. [F.B. Meserole, R. Chang, T.R. Carrey, J. Machac, C.F.J. Richardson, Modeling mercury removal by sorbent injection, J. Air Waste Manage. Assoc. 49 (1999) 694-704] demonstrate that the simplified model is able to provide good predictive accuracy. Moreover, the effects of key parameters including the mass transfer coefficient, sorbent concentration, sorbent physical properties and sorbent adsorption capacity on mercury adsorption efficiency are compared and evaluated. Finally, a sensitivity analysis of the impact factors indicates that the injected sorbent concentration plays the most important role in mercury capture efficiency.
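
    A minimal sketch of the kind of balance such a model rests on, assuming a well-mixed duct, film mass transfer to in-flight particles, and an isotherm-limited sorbent capacity; the symbols are generic placeholders, not the notation used by Zhao et al.

      \frac{dC_{Hg}}{dt} = -k_g\,a_p\,C_s\left(C_{Hg} - C_{Hg}^{eq}(q)\right), \qquad \eta = 1 - \frac{C_{Hg}(t_{res})}{C_{Hg}(0)}

    where C_Hg is the gas-phase mercury concentration, k_g the gas-film mass-transfer coefficient, a_p the external surface area per unit mass of sorbent, C_s the injected sorbent concentration, C_Hg^eq the equilibrium concentration set by the adsorption isotherm at the current sorbent loading q, t_res the in-duct residence time, and η the capture efficiency. The parameters compared in the paper (mass-transfer coefficient, sorbent concentration, physical properties, adsorption capacity) all appear in this balance.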

  14. Simulation of the capture process in the Fermilab Booster

    International Nuclear Information System (INIS)

    Stahl, S.; Ankenbrandt, C.

    1987-09-01

    A progress report on efforts to understand and improve adiabatic capture in the Fermilab Booster by experiment and simulation is presented. In particular, a new RF voltage program for capture which ameliorates transverse space-charge effects is described and simulated. 7 refs., 4 figs

  15. Micromotors to capture and destroy anthrax simulant spores.

    Science.gov (United States)

    Orozco, Jahir; Pan, Guoqing; Sattayasamitsathit, Sirilak; Galarnyk, Michael; Wang, Joseph

    2015-03-07

    Towards addressing the need for detecting and eliminating biothreats, we describe a micromotor-based approach for screening, capturing, isolating and destroying anthrax simulant spores in a simple and rapid manner with minimal sample processing. The B. globigii antibody-functionalized micromotors can recognize, capture and transport B. globigii spores in environmental matrices, while showing no interaction with an excess of non-target bacteria. Efficient destruction of the anthrax simulant spores is demonstrated via the micromotor-induced mixing of a mild oxidizing solution. The new micromotor-based approach paves the way to dynamic multifunctional systems that rapidly recognize, isolate, capture and destroy biological threats.

  16. Numerical simulation of two-phase flow with front-capturing

    International Nuclear Information System (INIS)

    Tzanos, C.P.; Weber, D.P.

    2000-01-01

    Because of the complexity of two-phase flow phenomena, two-phase flow codes rely heavily on empirical correlations. This approach has a number of serious shortcomings. Advances in parallel computing and continuing improvements in computer speed and memory have stimulated the development of numerical simulation tools that rely less on empirical correlations and more on fundamental physics. The objective of this work is to take advantage of developments in massively parallel computing, single-phase computational fluid dynamics of complex systems, and numerical methods for front capturing in two-phase flows to develop a computer code for direct numerical simulation of two-phase flow. This includes bubble/droplet transport, interface deformation and topology change, bubble-droplet interactions, interface mass, momentum, and energy transfer. In this work, the Navier-Stokes and energy equations are solved by treating both phases as a single fluid with interfaces between the two phases, and a discontinuity in material properties across the moving interfaces. The evolution of the interfaces is simulated by using the front capturing technique of the level-set methods. In these methods, the boundary of a two-fluid interface is modeled as the zero level set of a smooth function φ. The level-set function φ is defined as the signed distance from the interface (φ is negative inside a droplet/bubble and positive outside). Compared to other front-capturing or front-tracking methods, the level-set approach is relatively easy to implement even in three-dimensional flows, and it has been shown to simulate well the coalescence and breakup of droplets/bubbles
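
    The level-set construction referred to above can be stated compactly; this is the standard formulation rather than a transcription of the authors' implementation:

      \phi(\mathbf{x},t) = \pm\,\mathrm{dist}(\mathbf{x},\Gamma(t)), \qquad \frac{\partial\phi}{\partial t} + \mathbf{u}\cdot\nabla\phi = 0, \qquad \rho(\phi) = \rho_1 + (\rho_2 - \rho_1)\,H_\epsilon(\phi)

    where Γ(t) = {x : φ(x,t) = 0} is the moving interface, u the single-fluid velocity from the Navier-Stokes solution, and H_ε a smoothed Heaviside function that assigns the discontinuous material properties (an analogous expression gives the viscosity). Advecting φ with the flow moves the interface, and periodic reinitialization keeps φ a signed distance function.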

  17. Optimising the application of multiple-capture traps for invasive species management using spatial simulation.

    Science.gov (United States)

    Warburton, Bruce; Gormley, Andrew M

    2015-01-01

    Internationally, invasive vertebrate species pose a significant threat to biodiversity, agricultural production and human health. To manage these species a wide range of tools, including traps, are used. In New Zealand, brushtail possums (Trichosurus vulpecula), stoats (Mustela erminea), and ship rats (Rattus rattus) are invasive and there is an ongoing demand for cost-effective non-toxic methods for controlling these pests. Recently, traps with multiple-capture capability have been developed which, because they do not require regular operator-checking, are purported to be more cost-effective than traditional single-capture traps. However, when pest populations are being maintained at low densities (as is typical of orchestrated pest management programmes) it remains uncertain if it is more cost-effective to use fewer multiple-capture traps or more single-capture traps. To address this uncertainty, we used an individual-based spatially explicit modelling approach to determine the likely maximum animal-captures per trap, given stated pest densities and defined times traps are left between checks. In the simulation, single- or multiple-capture traps were spaced according to best practice pest-control guidelines. For possums with maintenance densities set at the lowest level (i.e. 0.5/ha), 98% of all simulated possums were captured with only a single capacity trap set at each site. When possum density was increased to moderate levels of 3/ha, having a capacity of three captures per trap caught 97% of all simulated possums. Results were similar for stoats, although only two potential captures per site were sufficient to capture 99% of simulated stoats. For rats, which were simulated at their typically higher densities, even a six-capture capacity per trap site only resulted in 80% kill. Depending on target species, prevailing density and extent of immigration, the most cost-effective strategy for pest control in New Zealand might be to deploy several single-capture
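
    To make the trade-off concrete, the hedged Python sketch below mimics one question such a simulation answers: how much of the local population a single trap site removes between checks, as a function of trap capacity. All numbers (encounter probability, check interval, densities) are invented placeholders, and the spatial detail of the published individual-based model is deliberately omitted.

      import numpy as np

      rng = np.random.default_rng(1)

      def fraction_killed(density_per_ha, trap_capacity, encounter_prob=0.03,
                          nights_between_checks=28, area_ha=1.0, n_reps=2000):
          """Monte Carlo sketch of one trap site serviced every `nights_between_checks`.

          Each animal in the surrounding area independently reaches the trap at
          least once during the interval with a fixed per-night probability; the
          trap stops catching once `trap_capacity` animals have been taken.
          """
          n_animals = rng.poisson(density_per_ha * area_ha, size=n_reps)
          killed = np.zeros(n_reps)
          for i, n in enumerate(n_animals):
              p_any = 1.0 - (1.0 - encounter_prob) ** nights_between_checks
              encountered = rng.random(n) < p_any
              killed[i] = min(int(encountered.sum()), trap_capacity)
          total = n_animals.sum()
          return killed.sum() / total if total else float("nan")

      for capacity in (1, 3, 6):
          print(capacity, round(fraction_killed(3.0, capacity), 3))

    Raising the capacity matters most when many animals can reach the same site before the next check, which is the qualitative pattern reported above for the higher-density rat populations.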

  18. Evaluation of bias associated with capture maps derived from nonlinear groundwater flow models

    Science.gov (United States)

    Nadler, Cara; Allander, Kip K.; Pohll, Greg; Morway, Eric D.; Naranjo, Ramon C.; Huntington, Justin

    2018-01-01

    The impact of groundwater withdrawal on surface water is a concern of water users and water managers, particularly in the arid western United States. Capture maps are useful tools to spatially assess the impact of groundwater pumping on water sources (e.g., streamflow depletion) and are being used more frequently for conjunctive management of surface water and groundwater. Capture maps have been derived using linear groundwater flow models and rely on the principle of superposition to demonstrate the effects of pumping in various locations on resources of interest. However, nonlinear models are often necessary to simulate head-dependent boundary conditions and unconfined aquifers. Capture maps developed using nonlinear models with the principle of superposition may over- or underestimate capture magnitude and spatial extent. This paper presents new methods for generating capture difference maps, which assess spatial effects of model nonlinearity on capture fraction sensitivity to pumping rate, and for calculating the bias associated with capture maps. The sensitivity of capture map bias to selected parameters related to model design and conceptualization for the arid western United States is explored. This study finds that the simulation of stream continuity, pumping rates, stream incision, well proximity to capture sources, aquifer hydraulic conductivity, and groundwater evapotranspiration extinction depth substantially affect capture map bias. Capture difference maps demonstrate that regions with large capture fraction differences are indicative of greater potential capture map bias. Understanding both spatial and temporal bias in capture maps derived from nonlinear groundwater flow models improves their utility and defensibility as conjunctive-use management tools.
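
    For readers new to these tools, the quantity being mapped is typically a capture fraction: the pumping-induced change in flow to or from a source, per unit of pumping. In generic notation (not the paper's exact symbols):

      C(x,y,t;Q) = \frac{\Delta Q_{\mathrm{source}}(x,y,t;Q)}{Q}

    where Q is the rate of a hypothetical well at location (x, y) and ΔQ_source is the resulting change in stream leakage, evapotranspiration, or other boundary flows. In a strictly linear model C does not depend on Q (the principle of superposition), so the capture difference maps described above can be read as the departure C(x,y,t;Q_1) − C(x,y,t;Q_2) between two pumping rates, which is zero wherever superposition holds.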

  19. Multi-scale modeling of carbon capture systems

    Energy Technology Data Exchange (ETDEWEB)

    Kress, Joel David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-03

    The development and scale up of cost effective carbon capture processes is of paramount importance to enable the widespread deployment of these technologies to significantly reduce greenhouse gas emissions. The U.S. Department of Energy initiated the Carbon Capture Simulation Initiative (CCSI) in 2011 with the goal of developing a computational toolset that would enable industry to more effectively identify, design, scale up, operate, and optimize promising concepts. The first half of the presentation will introduce the CCSI Toolset consisting of basic data submodels, steady-state and dynamic process models, process optimization and uncertainty quantification tools, an advanced dynamic process control framework, and high-resolution filtered computational fluid dynamics (CFD) submodels. The second half of the presentation will describe a high-fidelity model of a mesoporous silica supported, polyethylenimine (PEI)-impregnated solid sorbent for CO2 capture. The sorbent model includes a detailed treatment of transport and amine-CO2-H2O interactions based on quantum chemistry calculations. Using a Bayesian approach for uncertainty quantification, we calibrate the sorbent model to Thermogravimetric (TGA) data.

  20. Advanced modeling to accelerate the scale up of carbon capture technologies

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David C.; Sun, XIN; Storlie, Curtis B.; Bhattacharyya, Debangsu

    2015-06-01

    In order to help meet the goals of the DOE carbon capture program, the Carbon Capture Simulation Initiative (CCSI) was launched in early 2011 to develop, demonstrate, and deploy advanced computational tools and validated multi-scale models to reduce the time required to develop and scale-up new carbon capture technologies. This article focuses on essential elements related to the development and validation of multi-scale models in order to help minimize risk and maximize learning as new technologies progress from pilot to demonstration scale.

  1. Modelling of limestone injection for SO2 capture in a coal fired utility boiler

    International Nuclear Information System (INIS)

    Kovacik, G.J.; Reid, K.; McDonald, M.M.; Knill, K.

    1997-01-01

    A computer model was developed for simulating furnace sorbent injection for SO2 capture in a full scale utility boiler using TASCFlow™ computational fluid dynamics (CFD) software. The model makes use of a computational grid of the superheater section of a tangentially fired utility boiler. The computer simulations are three dimensional so that the temperature and residence time distribution in the boiler could be realistically represented. Results of calculations of simulated sulphur capture performance of limestone injection in a typical utility boiler operation were presented.

  2. Capturing spike variability in noisy Izhikevich neurons using point process generalized linear models

    DEFF Research Database (Denmark)

    Østergaard, Jacob; Kramer, Mark A.; Eden, Uri T.

    2018-01-01

    … are separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input current. We then fit these spike train data with a statistical model (a generalized linear model, GLM, with multiplicative influences of past spiking). For different levels of noise, we show how the GLM captures both the deterministic features of the Izhikevich neuron and the variability driven by the noise. We conclude that the GLM captures essential features of the simulated spike trains, but for near-deterministic spike trains, goodness-of-fit analyses reveal that the model does not fit very well in a statistical sense; the essential random part of the GLM is not captured.
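
    The statistical half of such a comparison is straightforward to reproduce; the hedged Python sketch below fits a Poisson GLM with spike-history covariates to a synthetic binned spike train. The history length, rates and weights are arbitrary choices, and no Izhikevich simulation is included.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)

      # synthetic binned spike train with a refractory-like history effect
      T, lags = 20000, 5
      spikes = np.zeros(T, dtype=int)
      base_rate = 0.05                               # expected spikes per bin
      history_weights = np.array([-4.0, -2.0, -1.0, -0.5, -0.2])
      for t in range(T):
          h = spikes[max(0, t - lags):t][::-1]       # most recent bin first
          eta = np.log(base_rate) + history_weights[:len(h)] @ h
          spikes[t] = rng.poisson(np.exp(eta))

      # design matrix: intercept plus the previous `lags` bins of spiking
      X = np.column_stack([np.ones(T - lags)] +
                          [spikes[lags - k - 1:T - k - 1] for k in range(lags)])
      y = spikes[lags:]

      glm = sm.GLM(y, X, family=sm.families.Poisson()).fit()
      print(np.exp(glm.params[1:]))   # multiplicative influences of past spiking

    Coefficients below one reproduce the suppressive (refractory) influence of recent spikes, i.e. the "multiplicative influences of past spiking" structure mentioned in the abstract.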

  3. The establishment of Digital Image Capture System(DICS) using conventional simulator

    International Nuclear Information System (INIS)

    Oh, Tae Sung; Park, Jong Il; Byun, Young Sik; Shin, Hyun Kyoh

    2004-01-01

    The simulator is used to determine the patient field and to ensure that the treatment field encompasses the required anatomy during normal patient movement, such as breathing. The latest simulators provide real-time display of still, fluoroscopic and digitized images, but conventional simulators do not. The purpose of this study is to introduce a digital image capture system (DICS) built around a conventional simulator and to present clinical cases using digitally captured still and fluoroscopic images. We connected the video signal cable to the video terminal on the back of the simulator monitor and connected the video jack to an A/D converter. After connecting the converter to a computer, still images could be acquired and fluoroscopic sequences recorded with an image capture program. The data created with this system can be used in patient treatment and modified for verification using image processing software (e.g. Photoshop, Paintshop). DICS could be established as an easy and economical procedure. DICS images were helpful for simulation and proved a powerful tool in the evaluation of department-specific patient positioning. Because commercial simulators based on digital capture are very expensive, it is not easy to establish them in most hospitals. DICS built on a conventional simulator enables practical use of images comparable to those of a high-cost digital simulator and supports the study of many clinical cases when combined with other software programs.

  4. CO2 capture in amine solutions: modelling and simulations with non-empirical methods

    Science.gov (United States)

    Andreoni, Wanda; Pietrucci, Fabio

    2016-12-01

    Absorption in aqueous amine solutions is the most advanced technology for the capture of CO2, although suffering from drawbacks that do not allow exploitation on large scale. The search for optimum solvents has been pursued with empirical methods and has also motivated a number of computational approaches over the last decade. However, a deeper level of understanding of the relevant chemical reactions in solution is required so as to contribute to this effort. We present here a brief critical overview of the most recent applications of computer simulations using ab initio methods. Comparison of their outcome shows a strong dependence on the structural models employed to represent the molecular systems in solution and on the strategy used to simulate the reactions. In particular, the results of very recent ab initio molecular dynamics augmented with metadynamics are summarized, showing the crucial role of water, which has been so far strongly underestimated both in the calculations and in the interpretation of experimental data. Indications are given for advances in computational approaches that are necessary if meant to contribute to the rational design of new solvents.

  5. CO2 capture in amine solutions: modelling and simulations with non-empirical methods

    International Nuclear Information System (INIS)

    Andreoni, Wanda; Pietrucci, Fabio

    2016-01-01

    Absorption in aqueous amine solutions is the most advanced technology for the capture of CO2, although suffering from drawbacks that do not allow exploitation on large scale. The search for optimum solvents has been pursued with empirical methods and has also motivated a number of computational approaches over the last decade. However, a deeper level of understanding of the relevant chemical reactions in solution is required so as to contribute to this effort. We present here a brief critical overview of the most recent applications of computer simulations using ab initio methods. Comparison of their outcome shows a strong dependence on the structural models employed to represent the molecular systems in solution and on the strategy used to simulate the reactions. In particular, the results of very recent ab initio molecular dynamics augmented with metadynamics are summarized, showing the crucial role of water, which has been so far strongly underestimated both in the calculations and in the interpretation of experimental data. Indications are given for advances in computational approaches that are necessary if meant to contribute to the rational design of new solvents. (topical review)

  6. Simulation of proton RF capture in the AGS Booster

    International Nuclear Information System (INIS)

    Khiari, F.Z.; Luccio, A.U.; Weng, W.T.

    1988-01-01

    RF capture of the proton beam in the AGS Booster has been simulated with the longitudinal phase-space tracking code ESME. Results show that a capture in excess of 95% can be achieved with multiturn injection of a chopped beam

  7. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In philosophy of science, interest in computational models and simulations has increased heavily during the past decades. Different positions regarding the validity of models have emerged, but the views have not succeeded in capturing the diversity of validation methods. The wide variety...

  8. Modelling of tetrahydrofuran promoted gas hydrate systems for carbon dioxide capture processes

    DEFF Research Database (Denmark)

    Herslund, Peter Jørgensen; Thomsen, Kaj; Abildskov, Jens

    2014-01-01

    A thermodynamic study of a novel gas hydrate based CO2 capture process is presented. The model predicts this process unsuitable for CO2 capture from power station flue gases. A thermodynamic modelling study of both fluid phase behaviour and hydrate phase behaviour is presented for the quaternary system of water, tetrahydrofuran, carbon dioxide and nitrogen. The applied model incorporates the Cubic-Plus-Association (CPA) equation of state for the fluid phase description and the van der Waals-Platteeuw hydrate model for the solid (hydrate) phase. Six binary pairs are studied for their fluid phase behaviour … accurate descriptions of both fluid- and hydrate phase equilibria in the studied system and its subsystems. The developed model is applied to simulate two simplified, gas hydrate-based processes for post-combustion carbon dioxide capture from power station flue gases. The first process, an unpromoted...

  9. Constant-parameter capture-recapture models

    Science.gov (United States)

    Brownie, C.; Hines, J.E.; Nichols, J.D.

    1986-01-01

    Jolly (1982, Biometrics 38, 301-321) presented modifications of the Jolly-Seber model for capture-recapture data, which assume constant survival and/or capture rates. Where appropriate, because of the reduced number of parameters, these models lead to more efficient estimators than the Jolly-Seber model. The tests to compare models given by Jolly do not make complete use of the data, and we present here the appropriate modifications, and also indicate how to carry out goodness-of-fit tests which utilize individual capture history information. We also describe analogous models for the case where young and adult animals are tagged. The availability of computer programs to perform the analysis is noted, and examples are given using output from these programs.

  10. A Generalized Estimating Equations Approach to Model Heterogeneity and Time Dependence in Capture-Recapture Studies

    Directory of Open Access Journals (Sweden)

    Akanda Md. Abdus Salam

    2017-03-01

    Individual heterogeneity in capture probabilities and time dependence are fundamentally important for estimating closed animal population parameters in capture-recapture studies. A generalized estimating equations (GEE) approach accounts for linear correlation among capture-recapture occasions and for individual heterogeneity in capture probabilities in a closed-population capture-recapture model with individual heterogeneity and time variation. The estimated capture probabilities are used to estimate animal population parameters. Two real data sets are used for illustrative purposes. A simulation study is carried out to assess the performance of the GEE estimator. A Quasi-Likelihood Information Criterion (QIC) is applied for the selection of the best fitting model. This approach performs well when the estimated population parameters depend on the individual heterogeneity and the nature of linear correlation among capture-recapture occasions.
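
    For reference, the estimating equations behind a GEE fit take the standard Liang-Zeger form, given here only as background; the authors' parameterization may differ in detail:

      \sum_{i=1}^{n} \mathbf{D}_i^{\top}\mathbf{V}_i^{-1}\bigl(\mathbf{y}_i - \boldsymbol{\mu}_i(\boldsymbol{\beta})\bigr) = \mathbf{0}, \qquad \mathbf{V}_i = \mathbf{A}_i^{1/2}\,\mathbf{R}(\alpha)\,\mathbf{A}_i^{1/2}

    where y_i is the vector of capture indicators for animal i over the t occasions, μ_i(β) the modelled capture probabilities, D_i = ∂μ_i/∂β, A_i the diagonal matrix of marginal variances, and R(α) the working correlation matrix that absorbs the linear correlation among capture occasions.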

  11. Modelling of catalytic oxidation of NH3 and reduction of NO on limestone during sulphur capture

    DEFF Research Database (Denmark)

    Kiil, Søren; Bhatia, Suresh K.; Dam-Johansen, Kim

    1996-01-01

    A theoretical study of the complex transient system of simultaneous sulphur capture and catalytic reactions of N-containing compounds taking place on a single limestone particle is conducted. The numerical technique developed previously by the authors (Kiil et al. 1994) based on collocation … activity with respect to each species involved. An existing particle model, the Grain-Micrograin Model, which simulates sulphur capture on limestone under oxidizing conditions, is considered in the modelling. Simulation results in good qualitative agreement with experimental data are obtained here for the catalytic chemistry of NH3 during simultaneous sulphur capture on a Stevns Chalk particle. The reduction of NO by NH3 over CaSO4 (which is the product of the reaction between SO2, O2 and limestone) was found to be important because this reaction could explain the change in selectivity with increased solid...

  12. Carbon dioxide capture processes: Simulation, design and sensitivity analysis

    DEFF Research Database (Denmark)

    Zaman, Muhammad; Lee, Jay Hyung; Gani, Rafiqul

    2012-01-01

    Carbon dioxide is the main greenhouse gas and its major source is combustion of fossil fuels for power generation. The objective of this study is to carry out the steady-state sensitivity analysis for chemical absorption of carbon dioxide capture from flue gas using monoethanolamine solvent. First … equilibrium and associated property models are used. Simulations are performed to investigate the sensitivity of the process variables to changes in the design variables, including process inputs and disturbances in the property model parameters. Results of the sensitivity analysis on the steady state performance of the process to the L/G ratio to the absorber, CO2 lean solvent loadings, and stripper pressure are presented in this paper. Based on the sensitivity analysis, process optimization problems have been defined and solved, and a preliminary control structure selection has been made.

  13. A Comparison of Grizzly Bear Demographic Parameters Estimated from Non-Spatial and Spatial Open Population Capture-Recapture Models.

    Science.gov (United States)

    Whittington, Jesse; Sawaya, Michael A

    2015-01-01

    Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal's home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786-1.071) for females, 0.844 (0.703-0.975) for males, and 0.882 (0.779-0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758-1.024) for females, 0.825 (0.700-0.948) for males, and 0.863 (0.771-0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative population growth
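
    The "effect of distances" that separates the two model families is conventionally expressed with a half-normal detection function; a generic form (not necessarily the exact function fitted here) is:

      p_{ij} = p_0 \exp\!\left(-\frac{d(\mathbf{s}_i, \mathbf{x}_j)^2}{2\sigma^2}\right)

    where s_i is the latent home-range centre of animal i, x_j the location of trap (or hair-snag) j, p_0 the baseline detection probability, and σ the spatial scale of detection. Non-spatial models instead assume a common detection probability across traps, which is one route by which the biases described above can arise.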

  14. A Comparison of Grizzly Bear Demographic Parameters Estimated from Non-Spatial and Spatial Open Population Capture-Recapture Models.

    Directory of Open Access Journals (Sweden)

    Jesse Whittington

    Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal's home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786-1.071) for females, 0.844 (0.703-0.975) for males, and 0.882 (0.779-0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758-1.024) for females, 0.825 (0.700-0.948) for males, and 0.863 (0.771-0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative

  15. CFD Simulations of a Regenerative Process for Carbon Dioxide Capture in Advanced Gasification Based Power Systems

    Energy Technology Data Exchange (ETDEWEB)

    Arastoopour, Hamid [Illinois Inst. of Technology, Chicago, IL (United States); Abbasian, Javad [Illinois Inst. of Technology, Chicago, IL (United States)

    2014-07-31

    This project describes the work carried out to prepare highly reactive and mechanically strong MgO-based sorbents and to develop a Population Balance Equations (PBE) approach to describe the evolution of the particle porosity distribution that is linked with Computational Fluid Dynamics (CFD) to perform simulations of the CO2 capture and sorbent regeneration. A large number of MgO-based regenerable sorbents were prepared using low-cost and abundant dolomite as the base material. Among the various preparation parameters investigated, the potassium/magnesium (K/Mg) ratio was identified as the key variable affecting the reactivity and CO2 capacity of the sorbent. The optimum K/Mg ratio is about 0.15. The sorbent formulation HD52-P2 was identified as the “best” sorbent formulation and a large batch (one kg) of the sorbent was prepared for the detailed study. The results of the parametric study indicate that the optimum carbonation and regeneration temperatures are 360°C and 500°C, respectively. The results also indicate that steam has a beneficial effect on the rate of carbonation and regeneration of the sorbent and that the reactivity and capacity of the sorbent decrease in the cycling process (sorbent deactivation). The results indicate that to achieve a high CO2 removal efficiency, the bed of sorbent should be operated at a temperature range of 370-410°C, which also favors production of hydrogen through the WGS reaction. To describe the carbonation reaction kinetics of the MgO, the Variable Diffusivity shrinking core Model (VDM) was developed in this project, which was shown to accurately fit the experimental data. An important advantage of this model is that the changes in the sorbent conversion with time can be expressed in an explicit manner, which will significantly reduce the CFD computation time. A Computational Fluid Dynamic/Population Balance Equations (CFD/PBE) model was developed that accounts for the particle (sorbent) porosity distribution and a new version of

  16. Salmonella capture using orbiting magnetic microbeads

    Science.gov (United States)

    Owen, Drew; Ballard, Matthew; Mills, Zachary; Hanasoge, Srinivas; Hesketh, Peter; Alexeev, Alexander

    2014-11-01

    Using three-dimensional simulations and experiments, we examine capture of salmonella from a complex fluid sample flowing through a microfluidic channel. Capture is performed using orbiting magnetic microbeads, which can easily be extracted from the system for analysis after salmonella capture. Numerical simulations are used to model the dynamics of the system, which consists of a microchannel filled with a viscous fluid, model salmonella, magnetic microbeads and a series of angled parallel ridges lining the top of the microchannel. Simulations provide a statistical measure of the ability of the system to capture target salmonella. Our modeling findings guide the design of a lab-on-a-chip experimental device to be used for the detection of salmonella from complex food samples, allowing for the detection of the bacteria at the food source and preventing the consumption of contaminated food. Such a device can be used as a generic platform for the detection of a variety of biomaterials from complex fluids. This work is supported by a grant from the United States Department of Agriculture.

  17. Capturing flood-to-drought transitions in regional climate model simulations

    Science.gov (United States)

    Anders, Ivonne; Haslinger, Klaus; Hofstätter, Michael; Salzmann, Manuela; Resch, Gernot

    2017-04-01

    In previous studies, atmospheric cyclones have been investigated in terms of related precipitation extremes in Central Europe. Mediterranean (Vb-like) cyclones are of special relevance as they are frequently related to high atmospheric moisture fluxes leading to floods and landslides in the Alpine region. Another focus in this area is on droughts, affecting soil moisture as well as surface and sub-surface runoff. Such events develop differently depending on the pre-existing saturation of water in the soil. In a first step we investigated two time periods which each encompass a flood event and a subsequent drought on very different time scales: one long-lasting transition (2002/2003) and a rather short one between May and August 2013. In a second step we extended the investigation to the long time period 1950-2016. We focused on high spatial and temporal scales and assessed the currently achievable accuracy in simulating the Vb-events on the one hand and the subsequent drought events on the other. The state-of-the-art regional climate model CCLM is applied in hindcast mode to simulate the single events described above, as well as the period from 1948 to 2016, in order to check that the results from the short runs also hold for the long period. Besides the conventional forcing of the regional climate model at its lateral boundaries, a spectral nudging technique is applied. In the simulations covering the European domain, different model parameters have been varied systematically. The resulting precipitation amounts have been compared to the E-OBS gridded European precipitation data set and a recent, spatially highly resolved precipitation data set for Austria (GPARD-6). For the drought events, the Standardized Precipitation Evapotranspiration Index (SPEI), soil moisture and runoff have been investigated. Varying the spectral nudging setup helps us to understand the 3D processes during these events, but also to identify model deficiencies. To improve the simulation of such events in the past

  18. Injection and capture simulations for a high intensity proton synchrotron

    International Nuclear Information System (INIS)

    Cho, Y.; Lessner, E.; Symon, K.; Univ. of Wisconsin, Madison, WI

    1994-01-01

    The injection and capture processes in a high intensity, rapid cycling, proton synchrotron are simulated by numerical integration. The equations of motion suitable for rapid numerical simulation are derived so as to maintain symplecticity and second-order accuracy. By careful bookkeeping, the authors can, for each particle that is lost, determine its initial phase space coordinates. They use this information as a guide for different injection schemes and rf voltage programming, so that a minimum of particle losses and dilution are attained. A fairly accurate estimate of the space charge fields is required, as they influence considerably the particle distribution and reduce the capture efficiency. Since the beam is represented by a relatively coarse ensemble of macro particles, the authors study several methods of reducing the statistical fluctuations while retaining the fine structure (high intensity modulations) of the beam distribution. A pre-smoothing of the data is accomplished by the cloud-in-cell method. The program is checked by making sure that it gives correct answers in the absence of space charge, and that it reproduces the negative mass instability properly. Results of simulations for stationary distributions are compared to their analytical predictions. The capture efficiency for the rapid-cycling synchrotron is analyzed with respect to variations in the injected beam energy spread, bunch length, and rf programming
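
    Longitudinal tracking of this kind typically advances each macroparticle with a turn-by-turn kick-drift map; the generic form below (illustrative symbols, not necessarily the authors' exact difference equations) shows where the programmed RF voltage and the space-charge term enter:

      \Delta E_{n+1} = \Delta E_n + eV_n\left(\sin\phi_n - \sin\phi_{s,n}\right) + e\,V_{\mathrm{sc},n}(\phi_n), \qquad \phi_{n+1} = \phi_n + \frac{2\pi h\,\eta_n}{\beta^2 E_n}\,\Delta E_{n+1}

    where φ and ΔE are a macroparticle's RF phase and energy offset, V_n the RF voltage on turn n, φ_s the synchronous phase, h the harmonic number, η the phase-slip factor, and V_sc the effective space-charge voltage computed from the binned (and, as described above, smoothed) longitudinal charge distribution. Updating ΔE first and then φ with the new ΔE keeps the map symplectic, which is the property the abstract refers to.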

  19. Protein Simulation Data in the Relational Model.

    Science.gov (United States)

    Simms, Andrew M; Daggett, Valerie

    2012-10-01

    High performance computing is leading to unprecedented volumes of data. Relational databases offer a robust and scalable model for storing and analyzing scientific data. However, these features do not come without a cost: significant design effort is required to build a functional and efficient repository. Modeling protein simulation data in a relational database presents several challenges: the data captured from individual simulations are large, multi-dimensional, and must integrate with both simulation software and external data sites. Here we present the dimensional design and relational implementation of a comprehensive data warehouse for storing and analyzing molecular dynamics simulations using SQL Server.
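
    As a toy illustration of the dimensional ("star schema") idea mentioned above, the hedged sketch below builds a miniature schema in SQLite: dimension tables describe each simulation and each residue, and a narrow fact table holds one measurement per frame. All table, column and value names are invented for illustration and are not the schema of the authors' warehouse.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE dim_simulation (        -- one row per MD run
          sim_id        INTEGER PRIMARY KEY,
          protein       TEXT, temperature_k REAL, length_ns REAL);
      CREATE TABLE dim_residue (           -- one row per residue of each run
          residue_id    INTEGER PRIMARY KEY,
          sim_id        INTEGER REFERENCES dim_simulation(sim_id),
          chain         TEXT, seq_index INTEGER, amino_acid TEXT);
      CREATE TABLE fact_frame_measure (    -- narrow fact table: one value per row
          sim_id        INTEGER REFERENCES dim_simulation(sim_id),
          residue_id    INTEGER REFERENCES dim_residue(residue_id),
          frame         INTEGER, property TEXT, value REAL);
      """)

      conn.execute("INSERT INTO dim_simulation VALUES (1, 'example_protein', 298.0, 51.0)")
      conn.execute("INSERT INTO dim_residue VALUES (10, 1, 'A', 10, 'PHE')")
      conn.execute("INSERT INTO fact_frame_measure VALUES (1, 10, 0, 'SASA', 87.3)")

      # typical analytical query: average a per-residue property over all frames
      avg_sasa = conn.execute("""
          SELECT AVG(value) FROM fact_frame_measure
          WHERE sim_id = 1 AND residue_id = 10 AND property = 'SASA'
      """).fetchone()[0]
      print(avg_sasa)

    This illustrates one common reading of a dimensional design, keeping per-frame facts narrow and pushing descriptive attributes into dimensions so that very large simulation outputs stay queryable; the article's actual schema may differ.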

  20. Dynamic skin deformation simulation using musculoskeletal model and soft tissue dynamics

    Institute of Scientific and Technical Information of China (English)

    Akihiko Murai; Q. Youn Hong; Katsu Yamane; Jessica K. Hodgins

    2017-01-01

    Deformation of skin and muscle is essential for bringing an animated character to life. This deformation is difficult to animate in a realistic fashion using traditional techniques because of the subtlety of the skin deformations that must move appropriately for the character design. In this paper, we present an algorithm that generates natural, dynamic, and detailed skin deformation (movement and jiggle) from joint angle data sequences. The algorithm has two steps: identification of parameters for a quasi-static muscle deformation model, and simulation of skin deformation. In the identification step, we identify the model parameters using a musculoskeletal model and a short sequence of skin deformation data captured via a dense marker set. The simulation step first uses the quasi-static muscle deformation model to obtain the quasi-static muscle shape at each frame of the given motion sequence (slow jump). Dynamic skin deformation is then computed by simulating the passive muscle and soft tissue dynamics modeled as a mass–spring–damper system. Having obtained the model parameters, we can simulate dynamic skin deformations for subjects with similar body types from new motion data. We demonstrate our method by creating skin deformations for muscle co-contraction and external impacts from four different behaviors captured as skeletal motion capture data. Experimental results show that the simulated skin deformations are quantitatively and qualitatively similar to measured actual skin deformations.
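
    The soft-tissue layer described above ("passive muscle and soft tissue dynamics modeled as a mass–spring–damper system") can be pictured per simulated vertex as follows; the notation is illustrative rather than the paper's exact discretization:

      m\,\ddot{\mathbf{x}} + c\,\dot{\mathbf{x}} + k\left(\mathbf{x} - \mathbf{x}_{qs}(t)\right) = \mathbf{f}_{\mathrm{ext}}(t)

    where x is the simulated skin/soft-tissue position, x_qs(t) the quasi-static muscle-shape target produced by the identified deformation model at each frame of the motion, (m, c, k) identified mass, damping and stiffness parameters, and f_ext external impact forces. Integrating this equation through the motion sequence adds the dynamic movement and jiggle on top of the quasi-static shape.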

  2. Rcapture: Loglinear Models for Capture-Recapture in R

    Directory of Open Access Journals (Sweden)

    Sophie Baillargeon

    2007-04-01

    Full Text Available This article introduces Rcapture, an R package for capture-recapture experiments. The data for analysis consist of the frequencies of the observable capture histories over the t capture occasions of the experiment. A capture history is a vector of zeros and ones where one stands for a capture and zero for a miss. Rcapture can fit three types of models. With a closed population model, the goal of the analysis is to estimate the size N of the population, which is assumed to be constant throughout the experiment. The estimator depends on the way in which the capture probabilities of the animals vary. Rcapture features several models for these capture probabilities that lead to different estimators for N. In an open population model, immigration and death occur between sampling periods. The estimation of survival rates is of primary interest. Rcapture can fit the basic Cormack-Jolly-Seber and Jolly-Seber models to such data. The third type of model fitted by Rcapture is the robust design model. It features two levels of sampling: closed population models apply within primary periods and an open population model applies between periods. Most models in Rcapture have a loglinear form; they are fitted by carrying out a Poisson regression with the R function glm. Estimates of the demographic parameters of interest are derived from the loglinear parameter estimates; their variances are obtained by linearization. The novel feature of this package is the provision of several new options for modeling heterogeneity in capture probabilities between animals, in both closed population models and the primary periods of a robust design. It also implements many of the techniques developed by R. M. Cormack for open population models.
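
    Rcapture itself fits loglinear models in R via glm; purely for orientation, the sketch below shows a much simpler closed-population estimator, Chapman's bias-corrected Lincoln-Petersen estimator for two occasions, with invented counts. It is not Rcapture's method, only the simplest member of the same family of population-size estimators.

```python
# Chapman's bias-corrected Lincoln-Petersen estimator for a two-occasion
# closed-population study (illustrative counts, not data from the article).
def chapman_estimate(n1, n2, m2):
    """n1, n2: animals caught on occasions 1 and 2; m2: marked recaptures."""
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
           / ((m2 + 1) ** 2 * (m2 + 2)))
    return n_hat, var ** 0.5

n_hat, se = chapman_estimate(n1=120, n2=100, m2=30)
print(f"N_hat = {n_hat:.0f} +/- {se:.0f}")
```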

  3. Hollow Fiber Membrane Contactors for CO2 Capture: Modeling and Up-Scaling to CO2 Capture for an 800 MWe Coal Power Station

    Directory of Open Access Journals (Sweden)

    Kimball Erin

    2014-11-01

    Full Text Available A techno-economic analysis was completed to compare the use of Hollow Fiber Membrane Modules (HFMM) with the more conventional structured packing columns as the absorber in amine-based CO2 capture systems for power plants. In order to simulate the operation of industrial scale HFMM systems, a two-dimensional model was developed and validated based on results of a laboratory scale HFMM. After successful experiments and validation of the model, a pilot scale HFMM was constructed and simulated with the same model. The results of the simulations, from both sizes of HFMM, were used to assess the feasibility of further up-scaling to a HFMM system to capture the CO2 from an 800 MWe power plant. The system requirements – membrane fiber length, total contact surface area, and module volume – were determined from simulations and used for an economic comparison with structured packing columns. Results showed that a significant cost reduction of at least 50% is required to make HFMM competitive with structured packing columns. Several factors for the design of industrial scale HFMM require further investigation, such as the optimal aspect ratio (module length/diameter), membrane lifetime, and casing material and shape, in addition to the need to reduce the overall cost. However, HFMM were also shown to have the advantages of a higher contact surface area per unit volume and modular scale-up, key factors for applications requiring limited footprints or flexibility in configuration.

  4. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection.

    Directory of Open Access Journals (Sweden)

    Mark N Read

    2016-09-01

    Full Text Available The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but captures T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto
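
    As a minimal sketch of the correlated-random-walk ingredient evaluated above, the snippet below generates 2D tracks whose headings persist between steps, draws per-step speeds from a broad distribution to mimic heterogeneous cells, and computes a meandering index. All parameter values are illustrative and not fitted to the datasets in the paper.

```python
import numpy as np

# 2D correlated random walk: small random turns give directional persistence;
# a Gamma step-length distribution mimics heterogeneous translational speeds.
rng = np.random.default_rng(1)

def crw_track(n_steps=200, turn_sd=0.4, mean_speed=8.0):
    heading = rng.uniform(0, 2 * np.pi)
    pos = np.zeros((n_steps + 1, 2))
    for t in range(n_steps):
        heading += rng.normal(0.0, turn_sd)                      # correlated turning
        step = rng.gamma(shape=2.0, scale=mean_speed / 2.0)      # speed variability
        pos[t + 1] = pos[t] + step * np.array([np.cos(heading), np.sin(heading)])
    path_len = np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1))
    meander = np.linalg.norm(pos[-1] - pos[0]) / path_len        # meandering index
    return pos, meander

tracks = [crw_track() for _ in range(50)]
print("mean meandering index:", np.mean([m for _, m in tracks]))
```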

  5. Experiments and simulation of a net closing mechanism for tether-net capture of space debris

    Science.gov (United States)

    Sharf, Inna; Thomsen, Benjamin; Botta, Eleonora M.; Misra, Arun K.

    2017-10-01

    This research addresses the design and testing of a debris containment system for use in a tether-net approach to space debris removal. The tether-net active debris removal involves the ejection of a net from a spacecraft by applying impulses to masses on the net, subsequent expansion of the net, the envelopment and capture of the debris target, and the de-orbiting of the debris via a tether to the chaser spacecraft. To ensure a debris removal mission's success, it is important that the debris be successfully captured and then, secured within the net. To this end, we present a concept for a net closing mechanism, which we believe will permit consistently successful debris capture via a simple and unobtrusive design. This net closing system functions by extending the main tether connecting the chaser spacecraft and the net vertex to the perimeter and around the perimeter of the net, allowing the tether to actuate closure of the net in a manner similar to a cinch cord. A particular embodiment of the design in a laboratory test-bed is described: the test-bed itself is comprised of a scaled-down tether-net, a supporting frame and a mock-up debris. Experiments conducted with the facility demonstrate the practicality of the net closing system. A model of the net closure concept has been integrated into the previously developed dynamics simulator of the chaser/tether-net/debris system. Simulations under tether tensioning conditions demonstrate the effectiveness of the closure concept for debris containment, in the gravity-free environment of space, for a realistic debris target. The on-ground experimental test-bed is also used to showcase its utility for validating the dynamics simulation of the net deployment, and a full-scale automated setup would make possible a range of validation studies of other aspects of a tether-net debris capture mission.

  6. Comparison of Thunderstorm Simulations from WRF-NMM and WRF-ARW Models over East Indian Region

    Directory of Open Access Journals (Sweden)

    A. J. Litta

    2012-01-01

    Full Text Available Thunderstorms are typical mesoscale systems dominated by intense convection. Mesoscale models are essential for the accurate prediction of such high-impact weather events. In the present study, simulations of three thunderstorm events using the NMM and ARW cores of the WRF system are compared and validated against observations. Both models performed well in capturing stability indices, which are indicators of severe convective activity. Comparison of model-simulated radar reflectivity imagery with observations revealed that the NMM model simulated the propagation of the squall line well, while the squall line movement was slow in ARW. The model-simulated spatial plots of cloud top temperature show that the NMM model better captured the genesis, intensification, and propagation of the thunder squall than the ARW model. The statistical analysis of rainfall indicates better performance of NMM than ARW. Comparison of model-simulated thunderstorm-affected parameters with observations showed that NMM performed better than ARW in capturing the sharp rise in humidity and drop in temperature. This suggests that the NMM model has the potential to provide unique and valuable information for severe thunderstorm forecasters over the east Indian region.

  7. CAPTURE OF TROJANS BY JUMPING JUPITER

    International Nuclear Information System (INIS)

    Nesvorný, David; Vokrouhlický, David; Morbidelli, Alessandro

    2013-01-01

    Jupiter Trojans are thought to be survivors of a much larger population of planetesimals that existed in the planetary region when planets formed. They can provide important constraints on the mass and properties of the planetesimal disk, and its dispersal during planet migration. Here, we tested the possibility that the Trojans were captured during the early dynamical instability among the outer planets (aka the Nice model), when the semimajor axis of Jupiter was changing as a result of scattering encounters with an ice giant. The capture occurs in this model when Jupiter's orbit and its Lagrange points become radially displaced in a scattering event and fall into a region populated by planetesimals (that previously evolved from their natal transplanetary disk to ∼5 AU during the instability). Our numerical simulations of the new capture model, hereafter jump capture, satisfactorily reproduce the orbital distribution of the Trojans and their total mass. The jump capture is potentially capable of explaining the observed asymmetry in the number of leading and trailing Trojans. We find that the capture probability is (6-8) × 10⁻⁷ for each particle in the original transplanetary disk, implying that the disk contained (3-4) × 10⁷ planetesimals brighter than the absolute magnitude limit considered. The inferred disk mass, ∼14-28 M_Earth, is consistent with the mass deduced from recent dynamical simulations of the planetary instability.

  8. New exploration on TMSR: modelling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Si, S.; Chen, Q.; Bei, H.; Zhao, J., E-mail: ssy@snerdi.com.cn [Shanghai Nuclear Engineering Research & Design Inst., Shanghai (China)

    2015-07-01

    A tightly coupled multi-physics model for an MSR (Molten Salt Reactor) system, involving the reactor core and the rest of the primary loop, has been developed and employed in the in-house computer code TANG-MSR. In this paper, the code is used to simulate steady-state operation and transient behavior of our redesigned TMSR. The presented simulation results demonstrate that the models employed in TANG-MSR can capture the major physics phenomena in an MSR and that the redesigned TMSR has excellent safety and sustainability performance. (author)

  9. Simulations of magnetic capturing of drug carriers in the brain vascular system

    Energy Technology Data Exchange (ETDEWEB)

    Kenjeres, S., E-mail: S.Kenjeres@tudelft.nl [Department of Multi-Scale Physics, Faculty of Applied Sciences, J.M. Burgerscentre for Fluid Dynamics, Delft University of Technology, Leeghwaterstraat 39, 2628 CB Delft (Netherlands); Righolt, B.W. [Department of Multi-Scale Physics, Faculty of Applied Sciences, J.M. Burgerscentre for Fluid Dynamics, Delft University of Technology, Leeghwaterstraat 39, 2628 CB Delft (Netherlands)

    2012-06-15

    Highlights: • Blood flow and magnetic particles distributions in the brain vascular system simulated. • Numerical mesh generated from raw MRI images. • Significant increase in local capturing of magnetic particles obtained. • Promising technique for localised non-invasive treatment of brain tumours. - Abstract: The present paper reports on numerical simulations of blood flow and magnetic drug carrier distributions in a complex brain vascular system. The blood is represented as a non-Newtonian fluid by the generalised power law. The Lagrangian tracking of the double-layer spherical particles is performed to estimate particle deposition under influence of imposed magnetic field gradients across arterial walls. Two situations are considered: neutral (magnetic field off) and active control (magnetic field on) case. The double-layer spherical particles that mimic a real medical drug are characterised by two characteristic diameters - the outer one and the inner one of the magnetic core. A numerical mesh of the brain vascular system consisting of multi-branching arteries is generated from raw MRI scan images of a patient. The blood is supplied through four main inlet arteries and the entire vascular system includes more than 30 outlets, which are modelled by Murray's law. The no-slip boundary condition is applied for velocity components along the smooth and rigid arterial walls. Numerical simulations revealed detailed insights into blood flow patterns, wall-shear-stress and local particle deposition efficiency along arterial walls. It is demonstrated that magnetically targeted drug delivery significantly increased the particle capturing efficiency in the pre-defined regions. This feature can be potentially useful for localised, non-invasive treatment of brain tumours.

  10. Modeling misidentification errors that result from use of genetic tags in capture-recapture studies

    Science.gov (United States)

    Yoshizaki, J.; Brownie, C.; Pollock, K.H.; Link, W.A.

    2011-01-01

    Misidentification of animals is potentially important when naturally existing features (natural tags) such as DNA fingerprints (genetic tags) are used to identify individual animals. For example, when misidentification leads to multiple identities being assigned to an animal, traditional estimators tend to overestimate population size. Accounting for misidentification in capture-recapture models requires detailed understanding of the mechanism. Using genetic tags as an example, we outline a framework for modeling the effect of misidentification in closed population studies when individual identification is based on natural tags that are consistent over time (non-evolving natural tags). We first assume a single sample is obtained per animal for each capture event, and then generalize to the case where multiple samples (such as hair or scat samples) are collected per animal per capture occasion. We introduce methods for estimating population size and, using a simulation study, we show that our new estimators perform well for cases with moderately high capture probabilities or high misidentification rates. In contrast, conventional estimators can seriously overestimate population size when errors due to misidentification are ignored. © 2009 Springer Science+Business Media, LLC.
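
    The overestimation effect described above is easy to see in a toy simulation (this is not the estimator proposed in the paper): if each capture is misread as a brand-new "ghost" identity with some probability, genuine recaptures go unrecognized and a naive closed-population estimate is inflated. All numbers below are invented for illustration.

```python
import numpy as np

# Toy illustration: misidentification creates ghost identities, recaptures are
# only recognized if an animal is correctly identified on both occasions,
# i.e. with probability (1 - alpha)**2, so the naive Chapman estimate grows.
rng = np.random.default_rng(2)
N_true, p, alpha, reps = 200, 0.4, 0.1, 1000

naive = []
for _ in range(reps):
    c1 = rng.random(N_true) < p                      # captured on occasion 1
    c2 = rng.random(N_true) < p                      # captured on occasion 2
    n1, n2 = c1.sum(), c2.sum()
    recognized = rng.random((c1 & c2).sum()) < (1 - alpha) ** 2
    m2 = recognized.sum()                            # recognized recaptures
    naive.append((n1 + 1) * (n2 + 1) / (m2 + 1) - 1)

print("true N:", N_true, " mean naive estimate:", round(float(np.mean(naive))))
```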

  11. Model of electron capture in low-temperature glasses

    International Nuclear Information System (INIS)

    Bartczak, W.M.; Swiatla, D.; Kroh, J.

    1983-01-01

    A new model of electron capture by a statistical variety of traps in glassy matrices is proposed. Electron capture is interpreted as a radiationless transition (assisted by multiphonon emission) of the mobile electron to a localized state in the trap. The concept of 'unfair' and 'fair' traps is introduced: an 'unfair' trap captures the mobile electron into a shallow excited state, whereas a 'fair' trap captures the electron into the ground state. Model calculations of the statistical distributions of the occupied electron traps are presented and discussed with respect to experimental results. (author)

  12. The energetic, physiological, and behavioral response of lemon sharks (Negaprion brevirostris) to simulated longline capture.

    Science.gov (United States)

    Bouyoucos, Ian A; Suski, Cory D; Mandelman, John W; Brooks, Edward J

    2017-05-01

    Commercial fisheries bycatch is a considerable threat to elasmobranch population recovery, and techniques to mitigate sub-lethal consequences can be improved with data on the energetic, physiological, and behavioral response of individuals to capture. This study sought to estimate the effects of simulated longline capture on the behavior, energy use, and physiological stress of juvenile lemon sharks (Negaprion brevirostris). Captive sharks equipped with acceleration biologgers were subjected to 1h of simulated longline capture. Swimming behaviors were identified from acceleration data using a machine-learning algorithm, energetic costs were estimated using accelerometer-calibrated relationships and respirometry, and physiological stress was quantified with point-of-care blood analyzers. During capture, sharks exhibited nine-fold increases in the frequency of burst swimming, 98% reductions in resting, and swam as often as unrestrained sharks. Aerobic metabolic rates during capture were 8% higher than for unrestrained sharks, and accounted for a 57.7% increase in activity costs when excess post-exercise oxygen consumption was included. Lastly, sharks exhibited significant increases in blood lactate and glucose, but no change in blood pH after 1h of capture. Therefore, these results provide preliminary insight into the behavioral and energetic responses of sharks to capture, and have implications for mitigating sub-lethal consequences of capture for sharks as commercial longline bycatch. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. A new pilot absorber for CO2 capture from flue gases: Measuring and modelling capture with MEA solution

    DEFF Research Database (Denmark)

    Sønderby, Tim L.; Carlsen, Kim B.; Fosbøl, Philip Loldrup

    2013-01-01

    A pilot absorber column for CO2 recovery from flue gases was constructed and tested with aqueous 30 wt% monoethanolamine (MEA), a primary amine, as capture solvent. The pilot plant data were compared with a mathematical rate based packed-column model. The simulation results compared well with the pilot plant data. The packed height of the column can be varied from 1.6 to 8.2 m by means of five different liquid inlets. The column has an inner diameter of 100 mm and is packed with structured Mellapak 250Y packing. Counter-current flow is used. The pilot plant performance was investigated

  14. Solids Modelling and Capture Simulation of Piperazine in Potassium Solvents

    DEFF Research Database (Denmark)

    Fosbøl, Philip Loldrup; Maribo-Mogensen, Bjørn; Thomsen, Kaj

    2012-01-01

    be a benefit to the capture process, but it could also result in unforeseen situations of potential hazardous operation, clogging, equipment failure etc.Security of the PZ process needs to be in focus. Flow assurance requires additional attention, especially due to the precipitation phenomenon. This entails...

  15. Monte Carlo simulation of the scattered component of neutron capture prompt gamma-ray analyzer responses

    International Nuclear Information System (INIS)

    Jin, Y.; Verghese, K.; Gardner, R.P.

    1986-01-01

    This paper describes a major part of our efforts to simulate the entire spectral response of the neutron capture prompt gamma-ray analyzer for bulk media (or conveyor belt) samples by the Monte Carlo method. This would allow one to use such a model to augment or, in most cases, essentially replace experiments in the calibration and optimum design of these analyzers. In previous work, we simulated the unscattered gamma-ray intensities, but would like to simulate the entire spectral response as we did with the energy-dispersive x-ray fluorescence analyzers. To accomplish this, one must account for the scattered gamma rays as well as the unscattered and one must have available the detector response function to translate the incident gamma-ray spectrum calculated by the Monte Carlo simulation into the detected pulse-height spectrum. We recently completed our work on the germanium detector response function, and the present paper describes our efforts to simulate the entire spectral response by using it with Monte Carlo predicted unscattered and scattered gamma rays

  16. Cellular Scanning Strategy for Selective Laser Melting: Capturing Thermal Trends with a Low-Fidelity, Pseudo-Analytical Model

    Directory of Open Access Journals (Sweden)

    Sankhya Mohanty

    2014-01-01

    Full Text Available Simulations of additive manufacturing processes are known to be computationally expensive. The resulting large runtimes prohibit their application in secondary analyses requiring several complete simulations, such as optimization studies and sensitivity analysis. In this paper, a low-fidelity pseudo-analytical model is introduced to enable such secondary analysis. The model mimics a finite element model and captures the thermal trends associated with the process. It has been validated and subsequently applied in a small optimization case study. The pseudo-analytical modelling technique is established as a fast tool for primary modelling investigations.

  17. A dynamic mathematical model for packed columns in carbon capture plants

    DEFF Research Database (Denmark)

    Gaspar, Jozsef; Jørgensen, John Bagterp; Fosbøl, Philip Loldrup

    2015-01-01

    simulation using monoethanolamine (MEA) and piperazine (PZ) as solvent. MEA is considered as the base-case solvent in the carbon capture business. The effect of changes in the flue gas flow rate and changes in the available steam are investigated to determine their influence on the performance of the capture...

  18. Rose bush leaf and internode expansion dynamics: analysis and development of a model capturing interplant variability

    Directory of Open Access Journals (Sweden)

    Sabine Demotes-Mainard

    2013-10-01

    Full Text Available Bush rose architecture, among other factors, such as plant health, determines plant visual quality. The commercial product is the individual plant, and interplant variability may be high within a crop. Thus, both mean plant architecture and interplant variability should be studied. Expansion is an important feature of architecture, but it has been little studied at the level of individual organs in bush roses. We investigated the expansion kinetics of primary shoot organs, to develop a model reproducing the organ expansion of real crops from non-destructive input variables. We took interplant variability in expansion kinetics and the model's ability to simulate this variability into account. Changes in leaflet and internode dimensions over thermal time were recorded for primary shoot expansion, on 83 plants from three crops grown in different climatic conditions and densities. An empirical model was developed, to reproduce organ expansion kinetics for individual plants of a real crop of bush rose primary shoots. Leaflet or internode length was simulated as a logistic function of thermal time. The model was evaluated by cross-validation. We found that differences in leaflet or internode expansion kinetics between phytomer positions and between plants at a given phytomer position were due mostly to large differences in time of organ expansion and expansion rate, rather than differences in expansion duration. Thus, in the model, the parameters linked to expansion duration were predicted by values common to all plants, whereas variability in final size and organ expansion time was captured by input data. The model accurately simulated leaflet and internode expansion for individual plants (RMSEP = 7.3% and 10.2% of final length, respectively). Thus, this study defines the measurements required to simulate expansion and provides the first model simulating organ expansion in rosebush to capture interplant variability.
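
    The core of such a model is a logistic length curve over thermal time; as a minimal sketch of that step, the snippet below fits a three-parameter logistic to synthetic leaflet-length data with scipy. The data and parameter values are invented, and the paper's full model additionally predicts expansion duration from values common to all plants.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(tt, L_final, t_mid, k):
    """Organ length vs thermal time tt: final length, mid-expansion time, rate."""
    return L_final / (1.0 + np.exp(-k * (tt - t_mid)))

thermal_time = np.linspace(0, 300, 25)                      # degree-days
true = logistic(thermal_time, L_final=45.0, t_mid=120.0, k=0.05)
measured = true + np.random.default_rng(3).normal(0, 1.0, true.size)

popt, _ = curve_fit(logistic, thermal_time, measured, p0=[40.0, 100.0, 0.1])
L_final, t_mid, k = popt
print(f"fitted final length {L_final:.1f} mm, mid-expansion at {t_mid:.0f} degree-days")
```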

  19. Disability weight of Clonorchis sinensis infection: captured from community study and model simulation.

    Directory of Open Access Journals (Sweden)

    Men-Bao Qian

    2011-12-01

    Full Text Available BACKGROUND: Clonorchiasis is among the most neglected tropical diseases. It is caused by ingesting raw or undercooked fish or shrimp containing the larvae of Clonorchis sinensis and is mainly endemic in Southeast Asia, including China, Korea and Vietnam. The global estimates for the population at risk and infected are 601 million and 35 million, respectively. However, it is still not listed among the Global Burden of Disease (GBD) conditions and no disability weight is available for it. The disability weight reflects the average degree of loss of life value due to a chronic disease condition and ranges between 0 (complete health) and 1 (death). It is a crucial parameter for calculating the morbidity part of any disease burden in terms of disability-adjusted life years (DALYs). METHODOLOGY/PRINCIPAL FINDINGS: From the probability and disability weight of each sequela caused by C. sinensis infection, the overall disability weight could be captured through Monte Carlo simulation. The probability of each sequela was obtained from one community investigation, while the corresponding disability weight was retrieved from the literature in an evidence-based approach. The overall disability weights for males and females were 0.101 and 0.050, respectively. The overall disability weights for the age groups 5-14, 15-29, 30-44, 45-59 and 60+ were 0.022, 0.052, 0.072, 0.094 and 0.118, respectively. There was some evidence showing that the disability weight and the geometric mean of eggs per gram of feces (GMEPG) fitted a logarithmic equation. CONCLUSION/SIGNIFICANCE: The overall disability weights of C. sinensis infection differ between sex and age groups. The disability weight captured here may be used as a reference for estimating the disease burden of C. sinensis infection.
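
    A rough sketch of the Monte Carlo idea is given below: for each simulated case, draw which sequelae are present and combine their disability weights. The sequelae, probabilities and weights are invented placeholders, and the multiplicative combination rule is an assumption for illustration, not necessarily the rule used in the paper.

```python
import numpy as np

# Toy Monte Carlo combination of sequela-specific disability weights.
rng = np.random.default_rng(4)
sequelae = {                 # name: (probability given infection, disability weight)
    "abdominal pain": (0.20, 0.02),
    "cholangitis":    (0.05, 0.10),
    "cirrhosis":      (0.01, 0.33),
}

def simulate_overall_weight(n=100_000):
    healthy_fraction = np.ones(n)
    for prob, weight in sequelae.values():
        present = rng.random(n) < prob            # does this case have the sequela?
        healthy_fraction[present] *= (1.0 - weight)
    return 1.0 - healthy_fraction                 # combined weight per simulated case

print("overall disability weight ≈", round(float(simulate_overall_weight().mean()), 3))
```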

  20. 'Mathematical model of K Capture and its implications'

    International Nuclear Information System (INIS)

    Angus, Andrew C.

    2000-01-01

    The mechanism of K Capture, the nuclear absorption of an electron from the K shell, as induced by electricity, is explained in this article. Furthermore, a mathematical model of K Capture is formulated. Then, K Capture is applied to explain the negative results obtained by Steven Jones and the positive results obtained by Pons-Fleischmann in Deuterium Oxide Electrolysis Experiments. The most important implication of K Capture is the possibility of obtaining nuclear energy by fusion at low temperature from heavy water.

  1. Modelling clavicular and scapular kinematics: from measurement to simulation

    NARCIS (Netherlands)

    bolsterlee, B.; Veeger, H.E.J.; van der Helm, F.C.T.

    2014-01-01

    Musculoskeletal models are intended to be used to assist in prevention and treatments of musculoskeletal disorders. To capture important aspects of shoulder dysfunction, realistic simulation of clavicular and scapular movements is crucial. The range of motion of these bones is dependent on thoracic,

  2. Modeling misidentification errors in capture-recapture studies using photographic identification of evolving marks

    Science.gov (United States)

    Yoshizaki, J.; Pollock, K.H.; Brownie, C.; Webster, R.A.

    2009-01-01

    Misidentification of animals is potentially important when naturally existing features (natural tags) are used to identify individual animals in a capture-recapture study. Photographic identification (photoID) typically uses photographic images of animals' naturally existing features as tags (photographic tags) and is subject to two main causes of identification errors: those related to quality of photographs (non-evolving natural tags) and those related to changes in natural marks (evolving natural tags). The conventional methods for analysis of capture-recapture data do not account for identification errors, and to do so requires a detailed understanding of the misidentification mechanism. Focusing on the situation where errors are due to evolving natural tags, we propose a misidentification mechanism and outline a framework for modeling the effect of misidentification in closed population studies. We introduce methods for estimating population size based on this model. Using a simulation study, we show that conventional estimators can seriously overestimate population size when errors due to misidentification are ignored, and that, in comparison, our new estimators have better properties except in cases with low capture probabilities (<0.2) or low misidentification rates (<2.5%). © 2009 by the Ecological Society of America.

  3. COSMOS: A System-Level Modelling and Simulation Framework for Coprocessor-Coupled Reconfigurable Systems

    DEFF Research Database (Denmark)

    Wu, Kehuai; Madsen, Jan

    2007-01-01

    and resource management, and iii) present a SystemC based framework to model and simulate coprocessor-coupled reconfigurable systems. We illustrate how COSMOS may be used to capture the dynamic behavior of such systems and emphasize the need for capturing the system aspects of such systems in order to deal...

  4. Modeling and simulation of thermally actuated bilayer plates

    Science.gov (United States)

    Bartels, Sören; Bonito, Andrea; Muliana, Anastasia H.; Nochetto, Ricardo H.

    2018-02-01

    We present a mathematical model of polymer bilayers that undergo large bending deformations when actuated by non-mechanical stimuli such as thermal effects. The simple model captures a large class of nonlinear bending effects and can be discretized with standard plate elements. We devise a fully practical iterative scheme and apply it to the simulation of folding of several practically useful compliant structures comprising thin elastic layers.

  5. How does spatial study design influence density estimates from spatial capture-recapture models?

    Directory of Open Access Journals (Sweden)

    Rahel Sollmann

    Full Text Available When estimating population density from data collected on non-invasive detector arrays, recently developed spatial capture-recapture (SCR) models present an advance over non-spatial models by accounting for individual movement. While these models should be more robust to changes in trapping designs, they have not been well tested. Here we investigate how the spatial arrangement and size of the trapping array influence parameter estimates for SCR models. We analysed black bear data collected with 123 hair snares with an SCR model accounting for differences in detection and movement between sexes and across the trapping occasions. To see how the size of the trap array and trap dispersion influence parameter estimates, we repeated analysis for data from subsets of traps: 50% chosen at random, 50% in the centre of the array and 20% in the South of the array. Additionally, we simulated and analysed data under a suite of trap designs and home range sizes. In the black bear study, we found that results were similar across trap arrays, except when only 20% of the array was used. Black bear density was approximately 10 individuals per 100 km². Our simulation study showed that SCR models performed well as long as the extent of the trap array was similar to or larger than the extent of individual movement during the study period, and movement was at least half the distance between traps. SCR models performed well across a range of spatial trap setups and animal movements. Contrary to non-spatial capture-recapture models, they do not require the trapping grid to cover an area several times the average home range of the studied species. This renders SCR models more appropriate for the study of wide-ranging mammals and more flexible to design studies targeting multiple species.
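
    A minimal sketch of how SCR data of this kind can be simulated is shown below: activity centres are scattered over a buffered state space and per-occasion detection probability at each trap decays with distance through a half-normal function. The trap grid, sample sizes and parameter values are illustrative only, not those of the black bear study.

```python
import numpy as np

# Simulate spatial capture-recapture detections on a square trap grid.
rng = np.random.default_rng(5)
g0, sigma, occasions = 0.1, 1.5, 5            # baseline detection, movement scale (km)
traps = np.array([(x, y) for x in range(10) for y in range(10)], dtype=float)  # 1-km grid
N = 40
centres = rng.uniform(-3, 12, size=(N, 2))    # activity centres in a buffered state space

dist = np.linalg.norm(centres[:, None, :] - traps[None, :, :], axis=2)
p = g0 * np.exp(-dist**2 / (2 * sigma**2))    # half-normal detection function
detections = rng.random((N, traps.shape[0], occasions)) < p[..., None]
n_detected = int(detections.any(axis=(1, 2)).sum())
print(f"{n_detected} of {N} simulated animals detected at least once")
```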

  6. Modeling and Testing of Phase Transition-Based Deployable Systems for Small Body Sample Capture

    Science.gov (United States)

    Quadrelli, Marco; Backes, Paul; Wilkie, Keats; Giersch, Lou; Quijano, Ubaldo; Keim, Jason; Mukherjee, Rudranarayan

    2009-01-01

    This paper summarizes the modeling, simulation, and testing work related to the development of technology to investigate the potential that shape memory actuation has to provide mechanically simple and affordable solutions for delivering assets to a surface and for sample capture and return. We investigate the structural dynamics and controllability aspects of an adaptive beam carrying an end-effector which, by changing equilibrium phases, is able to actively decouple the end-effector dynamics from the spacecraft dynamics during the surface contact phase. Asset delivery and sample capture and return are at the heart of several emerging potential missions to small bodies, such as asteroids and comets, and to the surface of large bodies, such as Titan.

  7. Evaluation of Stochastic Rainfall Models in Capturing Climate Variability for Future Drought and Flood Risk Assessment

    Science.gov (United States)

    Chowdhury, A. F. M. K.; Lockart, N.; Willgoose, G. R.; Kuczera, G. A.; Kiem, A.; Nadeeka, P. M.

    2016-12-01

    One of the key objectives of stochastic rainfall modelling is to capture the full variability of climate system for future drought and flood risk assessment. However, it is not clear how well these models can capture the future climate variability when they are calibrated to Global/Regional Climate Model data (GCM/RCM) as these datasets are usually available for very short future period/s (e.g. 20 years). This study has assessed the ability of two stochastic daily rainfall models to capture climate variability by calibrating them to a dynamically downscaled RCM dataset in an east Australian catchment for 1990-2010, 2020-2040, and 2060-2080 epochs. The two stochastic models are: (1) a hierarchical Markov Chain (MC) model, which we developed in a previous study and (2) a semi-parametric MC model developed by Mehrotra and Sharma (2007). Our hierarchical model uses stochastic parameters of MC and Gamma distribution, while the semi-parametric model uses a modified MC process with memory of past periods and kernel density estimation. This study has generated multiple realizations of rainfall series by using parameters of each model calibrated to the RCM dataset for each epoch. The generated rainfall series are used to generate synthetic streamflow by using a SimHyd hydrology model. Assessing the synthetic rainfall and streamflow series, this study has found that both stochastic models can incorporate a range of variability in rainfall as well as streamflow generation for both current and future periods. However, the hierarchical model tends to overestimate the multiyear variability of wet spell lengths (therefore, is less likely to simulate long periods of drought and flood), while the semi-parametric model tends to overestimate the mean annual rainfall depths and streamflow volumes (hence, simulated droughts are likely to be less severe). Sensitivity of these limitations of both stochastic models in terms of future drought and flood risk assessment will be discussed.
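
    The basic building block behind both generators is a two-state (wet/dry) Markov chain for rainfall occurrence with Gamma-distributed wet-day amounts; the sketch below shows that block with invented transition probabilities and Gamma parameters. The paper's hierarchical model additionally treats these parameters as stochastic, and the semi-parametric model adds longer memory and kernel density estimation, neither of which is reproduced here.

```python
import numpy as np

# Two-state Markov-chain daily rainfall occurrence + Gamma wet-day depths.
rng = np.random.default_rng(6)
p_wet_given_dry, p_wet_given_wet = 0.25, 0.60   # illustrative transition probabilities
gamma_shape, gamma_scale = 0.8, 9.0             # illustrative wet-day depth (mm)

def generate_daily_rainfall(n_days=365 * 20):
    rain = np.zeros(n_days)
    wet = False
    for t in range(n_days):
        p_wet = p_wet_given_wet if wet else p_wet_given_dry
        wet = rng.random() < p_wet
        if wet:
            rain[t] = rng.gamma(gamma_shape, gamma_scale)
    return rain

series = generate_daily_rainfall()
print("mean annual rainfall ≈", round(float(series.mean() * 365.25), 1), "mm")
```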

  8. Extended behavioural device modelling and circuit simulation with Qucs-S

    Science.gov (United States)

    Brinson, M. E.; Kuznetsov, V.

    2018-03-01

    Current trends in circuit simulation suggest a growing interest in open source software that allows access to more than one simulation engine while simultaneously supporting schematic drawing tools, behavioural Verilog-A and XSPICE component modelling, and output data post-processing. This article introduces a number of new features recently implemented in the 'Quite universal circuit simulator - SPICE variant' (Qucs-S), including structure and fundamental schematic capture algorithms, at the same time highlighting their use in behavioural semiconductor device modelling. Particular importance is placed on the interaction between Qucs-S schematics, equation-defined devices, SPICE B behavioural sources and hardware description language (HDL) scripts. The multi-simulator version of Qucs is a freely available tool that offers extended modelling and simulation features compared to those provided by legacy circuit simulators. The performance of a number of Qucs-S modelling extensions is demonstrated with a GaN HEMT compact device model and data obtained from tests using the Qucs-S/Ngspice/Xyce/SPICE OPUS multi-engine circuit simulator.

  9. DeMO: An Ontology for Discrete-event Modeling and Simulation

    Science.gov (United States)

    Silver, Gregory A; Miller, John A; Hybinette, Maria; Baramidze, Gregory; York, William S

    2011-01-01

    Several fields have created ontologies for their subdomains. For example, the biological sciences have developed extensive ontologies such as the Gene Ontology, which is considered a great success. Ontologies could provide similar advantages to the Modeling and Simulation community. They provide a way to establish common vocabularies and capture knowledge about a particular domain with community-wide agreement. Ontologies can support significantly improved (semantic) search and browsing, integration of heterogeneous information sources, and improved knowledge discovery capabilities. This paper discusses the design and development of an ontology for Modeling and Simulation called the Discrete-event Modeling Ontology (DeMO), and it presents prototype applications that demonstrate various uses and benefits that such an ontology may provide to the Modeling and Simulation community. PMID:22919114

  10. Dynamic modeling and simulation of a real world billiard

    International Nuclear Information System (INIS)

    Hartl, Alexandre E.; Miller, Bruce N.; Mazzoleni, Andre P.

    2011-01-01

    Gravitational billiards provide an experimentally accessible arena for testing formulations of nonlinear dynamics. We present a mathematical model that captures the essential dynamics required for describing the motion of a realistic billiard for arbitrary boundaries. Simulations of the model are applied to parabolic, wedge and hyperbolic billiards that are driven sinusoidally. Direct comparisons are made between the model's predictions and previously published experimental data. It is shown that the data can be successfully modeled with a simple set of parameters without an assumption of exotic energy dependence. -- Highlights: → We create a model of a gravitational billiard that includes rotation and dissipation. → Predictions of the model are compared with the experiments of Felt and Olafsen. → The simulations correctly predict the essential features of the experiments.

  11. MULTISCALE SPARSE APPEARANCE MODELING AND SIMULATION OF PATHOLOGICAL DEFORMATIONS

    Directory of Open Access Journals (Sweden)

    Rami Zewail

    2017-08-01

    Full Text Available Machine learning and statistical modeling techniques have drawn much interest within the medical imaging research community. However, clinically-relevant modeling of anatomical structures continues to be a challenging task. This paper presents a novel method for multiscale sparse appearance modeling in medical images with application to simulation of pathological deformations in X-ray images of the human spine. The proposed appearance model benefits from the non-linear approximation power of Contourlets and its ability to capture higher order singularities to achieve a sparse representation while preserving the accuracy of the statistical model. Independent Component Analysis is used to extract statistically independent modes of variation from the sparse Contourlet-based domain. The new model is then used to simulate clinically-relevant pathological deformations in radiographic images.

  12. A comparison of the COG and MCNP codes in computational neutron capture therapy modeling, Part I: boron neutron capture therapy models.

    Science.gov (United States)

    Culbertson, C N; Wangerin, K; Ghandourah, E; Jevremovic, T

    2005-08-01

    The goal of this study was to evaluate the COG Monte Carlo radiation transport code, developed and tested by Lawrence Livermore National Laboratory, for neutron capture therapy related modeling. A boron neutron capture therapy model was analyzed comparing COG calculational results to results from the widely used MCNP4B (Monte Carlo N-Particle) transport code. The approach for computing neutron fluence rate and each dose component relevant in boron neutron capture therapy is described, and calculated values are shown in detail. The differences between the COG and MCNP predictions are qualified and quantified. The differences are generally small and suggest that the COG code can be applied for BNCT research related problems.

  13. A bottleneck model of set-specific capture.

    Directory of Open Access Journals (Sweden)

    Katherine Sledge Moore

    Full Text Available Set-specific contingent attentional capture is a particularly strong form of capture that occurs when multiple attentional sets guide visual search (e.g., "search for green letters" and "search for orange letters"). In this type of capture, a potential target that matches one attentional set (e.g. a green stimulus) impairs the ability to identify a temporally proximal target that matches another attentional set (e.g. an orange stimulus). In the present study, we investigated whether set-specific capture stems from a bottleneck in working memory or from a depletion of limited resources that are distributed across multiple attentional sets. In each trial, participants searched a rapid serial visual presentation (RSVP) stream for up to three target letters (T1-T3) that could appear in any of three target colors (orange, green, or lavender). The most revealing findings came from trials in which T1 and T2 matched different attentional sets and were both identified. In these trials, T3 accuracy was lower when it did not match T1's set than when it did match, but only when participants failed to identify T2. These findings support a bottleneck model of set-specific capture in which a limited-capacity mechanism in working memory enhances only one attentional set at a time, rather than a resource model in which processing capacity is simultaneously distributed across multiple attentional sets.

  14. Simulation of High Velocity Impact on Composite Structures - Model Implementation and Validation

    Science.gov (United States)

    Schueler, Dominik; Toso-Pentecôte, Nathalie; Voggenreiter, Heinz

    2016-08-01

    High velocity impact on composite aircraft structures leads to the formation of flexural waves that can cause severe damage to the structure. Damage and failure can occur within the plies and/or in the resin rich interface layers between adjacent plies. In the present paper a modelling methodology is documented that captures intra- and inter-laminar damage and their interrelations by use of shell element layers representing sub-laminates that are connected with cohesive interface layers to simulate delamination. This approach allows the simulation of large structures while still capturing the governing damage mechanisms and their interactions. The paper describes numerical algorithms for the implementation of a Ladevèze continuum damage model for the ply and methods to derive input parameters for the cohesive zone model. By comparison with experimental results from gas gun impact tests the potential and limitations of the modelling approach are discussed.

  15. Simulating carbon exchange using a regional atmospheric model coupled to an advanced land-surface model

    Directory of Open Access Journals (Sweden)

    H. W. Ter Maat

    2010-08-01

    Full Text Available This paper is a case study to investigate what the main controlling factors are that determine atmospheric carbon dioxide content for a region in the centre of The Netherlands. We use the Regional Atmospheric Modelling System (RAMS), coupled with a land surface scheme simulating carbon, heat and momentum fluxes (SWAPS-C), and also including submodels for urban and marine fluxes, which in principle should include the dominant mechanisms and should be able to capture the relevant dynamics of the system. To validate the model, observations are used that were taken during an intensive observational campaign in central Netherlands in summer 2002. These include flux-tower observations and aircraft observations of vertical profiles and spatial fluxes of various variables.

    The simulations performed with the coupled regional model (RAMS-SWAPS-C) are in good qualitative agreement with the observations. The station validation of the model demonstrates that the incoming shortwave radiation and surface fluxes of water and CO2 are well simulated. The comparison against aircraft data shows that the regional meteorology (i.e. wind, temperature) is captured well by the model. Comparing spatially explicit simulated fluxes with aircraft-observed fluxes, we conclude that in general latent heat fluxes are underestimated by the model compared to the observations, but that the latter exhibit large variability within all flights. Sensitivity experiments demonstrate the relevance of the urban emissions of carbon dioxide for the carbon balance in this particular region. The same tests also show the relation between uncertainties in surface fluxes and those in atmospheric concentrations.

  16. Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.

    Science.gov (United States)

    Caglar, Mehmet Umut; Pal, Ranadip

    2013-01-01

    Probabilistic Models are regularly applied in Genetic Regulatory Network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches including Stochastic Master Equations and Probabilistic Boolean Networks have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that the Stochastic Master Equation is a fundamental model that can describe the system being investigated in fine detail, but the application of this model is computationally enormously expensive. On the other hand, the Probabilistic Boolean Network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on the Zassenhaus formula to represent the exponential of a sum of matrices as a product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results of the application of the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to the commonly used Stochastic Simulation Algorithm for equivalent accuracy.
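
    The Zassenhaus-type splitting referred to above, truncated after the first commutator correction, reads exp(t(A+B)) ≈ exp(tA) exp(tB) exp(-t²/2 [A,B]). The snippet below checks this approximation on small random matrices; it is only an illustration of the formula, not the tensor-based algorithm of the paper.

```python
import numpy as np
from scipy.linalg import expm

# Truncated Zassenhaus splitting of a matrix exponential of a sum.
rng = np.random.default_rng(7)
A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
t = 0.05

exact = expm(t * (A + B))
comm = A @ B - B @ A                                   # commutator [A, B]
approx = expm(t * A) @ expm(t * B) @ expm(-(t**2) / 2.0 * comm)

print("relative error:", np.linalg.norm(exact - approx) / np.linalg.norm(exact))
```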

  17. Capture reactions at astrophysically relevant energies: extended gas target experiments and GEANT simulations

    CERN Document Server

    Kölle, V; Braitmayer, S E; Mohr, P J; Wilmes, S; Staudt, G; Hammer, J W; Jäger, M; Knee, H; Kunz, R; Mayer, A

    1999-01-01

    Several resonances of the capture reaction ²⁰Ne(α,γ)²⁴Mg were measured using an extended windowless gas target system. Detailed GEANT simulations were performed to derive the strength and the total width of the resonances from the measured yield curve. The crucial experimental parameters, which are mainly the density profile in the gas target and the efficiency of the gamma-ray detector, were analyzed by a comparison between the measured data and the corresponding simulation calculations. The excellent agreement between the experimental data and the simulations gives detailed insight into these parameters. (author)

  18. Comparison of two electrolyte models for the carbon capture with aqueous ammonia

    DEFF Research Database (Denmark)

    Darde, Victor; Thomsen, Kaj; van Well, Willy J.M.

    2012-01-01

    Post-combustion carbon capture is attracting much attention due to the fact that it can be retrofitted on existing coal power plants. Among the most interesting technologies is the one that employs aqueous ammonia solutions to absorb the generated carbon dioxide. The evaluation of such a process... Subsequently, a simple absorption/regeneration layout is simulated employing both models and the process performances are compared. In general, the Extended UNIQUAC appears to describe the experimental data for larger ranges of temperature, pressure and concentration of ammonia more satisfactorily. The energy...

  19. A new integration model of the calcium looping technology into coal fired power plants for CO_2 capture

    International Nuclear Information System (INIS)

    Ortiz, C.; Chacartegui, R.; Valverde, J.M.; Becerra, J.A.

    2016-01-01

    Highlights: • A CaL-CFPP (coal fired power plant) integration model is proposed and efficiency penalty is estimated. • Carbonation in the diffusion stage is considered to predict the capture efficiency. • Low efficiency penalty may be achieved by operating with longer particles’ residence time. • Simulation results show that the energy penalty ranges between 4% and 7% points. - Abstract: The Ca-Looping (CaL) process is at the root of a promising 2nd generation technology for post-combustion CO_2 capture at coal fired power plants. The process is based on the reversible and quick carbonation/calcination reaction of CaO/CaCO_3 at high temperatures and allows using low cost, widely available and non toxic CaO precursors such as natural limestone. In this work, the efficiency penalty caused by the integration of the Ca-looping technology into a coal fired power plant is analyzed. The results of the simulations based on the proposed integration model show that efficiency penalty varies between 4% and 7% points, which yields lower energy costs than other more mature post-combustion CO_2 capture technologies such as the currently commercial amine scrubbing technology. A principal feature of the CaL process at CO_2 capture conditions is that it produces a large amount of energy and therefore an optimized integration of the system's energy flows is essential for the feasibility of the integration at the commercial level. As a main novel contribution, CO_2 capture efficiency is calculated in our work by considering the important role of the solid-state diffusion controlled carbonation phase, which becomes relevant when CaO regeneration is carried out under high CO_2 partial pressure as is the case with the CaL process for CO_2 capture. The results obtained based on the new model suggest that integration energy efficiency would be significantly improved as the solids residence time in the carbonator reactor is increased.

  20. Development of a Simulation Model for Swimming with Diving Fins

    Directory of Open Access Journals (Sweden)

    Motomu Nakashima

    2018-02-01

    Full Text Available The simulation model to assess the performance of a diving fin was developed by extending the swimming human simulation model SWUM. A diving fin was modeled as a series of five rigid plates and connected to the human model by springs and dampers. These plates were connected to each other by virtual springs and dampers, and the fin's bending property was represented by springs and dampers as well. An actual diver's swimming motion with fins was acquired by a motion capture experiment. In order to determine the bending property of the fin, two bending tests on land were conducted. In addition, an experiment was conducted in order to determine the fluid force coefficients in the fluid force model for the fin. Finally, using all measured and identified information, a simulation, in which the experimental situation was reproduced, was carried out. It was confirmed that the diver in the simulation propelled forward in the water successfully.

  1. Large Eddy Simulations of Electromagnetic Braking Effects on Argon Bubble Transport and Capture in a Steel Continuous Casting Mold

    Science.gov (United States)

    Jin, Kai; Vanka, Surya P.; Thomas, Brian G.

    2018-06-01

    In continuous casting of steel, argon gas is often injected to prevent clogging of the nozzle, but the bubbles affect the flow pattern, and may become entrapped to form defects in the final product. Further, an electromagnetic field is frequently applied to induce a braking effect on the flow field and modify the inclusion transport. In this study, a previously validated GPU-based in-house code CUFLOW is used to investigate the effect of electromagnetic braking on turbulent flow, bubble transport, and capture. Well-resolved large eddy simulations are combined with two-way coupled Lagrangian computations of the bubbles. The drag coefficient on the bubbles is modified to account for the effects of the magnetic field. The distribution of the argon bubbles, capture, and escape rates, are presented and compared with and without the magnetic field. The bubble capture patterns are also compared with results of a previous RANS model as well as with plant measurements.

  2. Effects of heat exchanger tubes on hydrodynamics and CO2 capture of a sorbent-based fluidized bed reactor

    Energy Technology Data Exchange (ETDEWEB)

    Lai, Canhai; Xu, Zhijie; Li, Tingwen; Lee, Andrew; Dietiker, Jean-François; Lane, William; Sun, Xin

    2017-12-01

    In virtual design and scale-up of pilot-scale carbon capture systems, the coupled reactive multiphase flow problem must be solved to predict the adsorber's performance and capture efficiency under various operating conditions. This paper focuses on the detailed computational fluid dynamics (CFD) modeling of a pilot-scale fluidized bed adsorber equipped with vertical cooling tubes. Multiphase Flow with Interphase eXchanges (MFiX), an open-source multiphase flow CFD solver, is used for the simulations, with custom code to simulate the chemical reactions and filtered models to capture the effect of the unresolved details on the coarser mesh while keeping the computational effort manageable. Two previously developed filtered models for horizontal cylinder drag, heat transfer, and reaction kinetics have been modified to derive the 2D filtered models representing vertical cylinders in the coarse-grid CFD simulations. The effects of the heat exchanger configurations (i.e., horizontal or vertical) on the adsorber's hydrodynamics and CO2 capture performance are then examined. The simulation result is subsequently compared and contrasted with that predicted by a one-dimensional three-region process model.

  3. Evaluation of XHVRB for Capturing Explosive Shock Desensitization

    Science.gov (United States)

    Tuttle, Leah; Schmitt, Robert; Kittell, Dave; Harstad, Eric

    2017-06-01

    Explosive shock desensitization phenomena have been recognized for some time. It has been demonstrated that pressure-based reactive flow models do not adequately capture the basic nature of the explosive behavior. Historically, replacing the local pressure with a shock-captured pressure has dramatically improved the numerical modeling approaches. Models based upon shock pressure or functions of entropy have recently been developed. A pseudo-entropy based formulation using the History Variable Reactive Burn model, as proposed by Starkenberg, was implemented into the Eulerian shock physics code CTH. Improvements in the shock capturing algorithm were made. The model is demonstrated to reproduce single-shock behavior consistent with published pop plot data. It is also demonstrated to capture a desensitization effect based on available literature data, and to qualitatively capture dead zones from desensitization in 2D corner-turning experiments. This model shows promise for use in modeling and simulation problems that are relevant to the desensitization phenomena. Issues are identified with the current implementation and future work is proposed for improving and expanding model capabilities. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  4. Anatomy and Physiology of Multiscale Modeling and Simulation in Systems Medicine.

    Science.gov (United States)

    Mizeranschi, Alexandru; Groen, Derek; Borgdorff, Joris; Hoekstra, Alfons G; Chopard, Bastien; Dubitzky, Werner

    2016-01-01

    Systems medicine is the application of systems biology concepts, methods, and tools to medical research and practice. It aims to integrate data and knowledge from different disciplines into biomedical models and simulations for the understanding, prevention, cure, and management of complex diseases. Complex diseases arise from the interactions among disease-influencing factors across multiple levels of biological organization from the environment to molecules. To tackle the enormous challenges posed by complex diseases, we need a modeling and simulation framework capable of capturing and integrating information originating from multiple spatiotemporal and organizational scales. Multiscale modeling and simulation in systems medicine is an emerging methodology and discipline that has already demonstrated its potential in becoming this framework. The aim of this chapter is to present some of the main concepts, requirements, and challenges of multiscale modeling and simulation in systems medicine.

  5. Simulation of a bubbling fluidized bed process for capturing CO2 from flue gas

    International Nuclear Information System (INIS)

    Choi, Jeong-Hoo; Yi, Chang-Keun; Jo, Sung-Ho; Ryu, Ho-Jung; Park, Young-Cheol

    2014-01-01

    We simulated a bubbling fluidized bed process capturing CO2 from flue gas. It was applied to a laboratory-scale process to investigate the effects of operating parameters on capture efficiency. The adsorber temperature had a stronger effect than the regenerator temperature. The effect of regenerator temperature was minor for high adsorber temperatures. The effect of regenerator temperature decreased and leveled off for temperatures above 250 °C. The capture efficiency was dominated by the adsorption reaction rather than by the regeneration reaction. The effect of gas velocity was as appreciable as that of adsorber temperature. The capture efficiency increased with the solids circulation rate, since it was governed by the molar ratio of K to CO2, for circulation rates smaller than the minimum required rate (Gs,min). However, it leveled off for solids circulation rates above Gs,min. As the ratio of adsorber solids inventory to total solids inventory (xw1) increased, the capture efficiency increased until xw1 = 0.705, but decreased for xw1 > 0.705 because the regeneration time became too short. This revealed that the regeneration reaction was faster than the adsorption reaction. Increasing the total solids inventory is a good way to achieve a further increase in capture efficiency

  6. Multiple-predators-based capture process on complex networks

    International Nuclear Information System (INIS)

    Sharafat, Rajput Ramiz; Pu Cunlai; Li Jie; Chen Rongbin; Xu Zhongqi

    2017-01-01

    The predator/prey (capture) problem is a prototype of many network-related applications. We study the capture process on complex networks by considering multiple predators from multiple sources. In our model, some lions start from multiple sources simultaneously to capture the lamb by biased random walks, which are controlled with a free parameter α. We derive the distribution of the lamb's lifetime and the expected lifetime 〈T〉. Through simulation, we find that the expected lifetime drops substantially with an increasing number of lions. Moreover, we study how the underlying topological structure affects the capture process, and find that the lamb survives longer when located on small-degree nodes than on large-degree nodes. Dense or homogeneous network structures work against the survival of the lamb. We also discuss how to improve the capture efficiency in our model. (paper)
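    As a rough, hypothetical illustration of this kind of capture process (not the authors' exact model), the sketch below runs degree-biased random walks for several lions on a small Barabási-Albert graph and estimates the lamb's mean lifetime; the graph model, the bias rule weighting neighbours by degree**alpha, and the convention that the lamb stays put are all assumptions.

        import random
        import networkx as nx

        def step(G, node, alpha):
            nbrs = list(G.neighbors(node))
            weights = [G.degree(n) ** alpha for n in nbrs]   # degree-biased transition rule
            return random.choices(nbrs, weights=weights)[0]

        def lifetime(G, alpha, n_lions, max_steps=10000):
            nodes = list(G.nodes())
            lamb = random.choice(nodes)
            lions = random.sample([n for n in nodes if n != lamb], n_lions)
            t = 0
            while lamb not in lions and t < max_steps:
                lions = [step(G, lion, alpha) for lion in lions]   # lions move; the lamb stays put
                t += 1
            return t

        random.seed(0)
        G = nx.barabasi_albert_graph(200, 3, seed=1)
        for n_lions in (1, 2, 4, 8):
            mean_T = sum(lifetime(G, alpha=1.0, n_lions=n_lions) for _ in range(100)) / 100
            print(f"{n_lions} lions: mean lifetime ~ {mean_T:.1f} steps")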

  7. Simulation of dynamic magnetic particle capture and accumulation around a ferromagnetic wire

    Energy Technology Data Exchange (ETDEWEB)

    Choomphon-anomakhun, Natthaphon [Department of Physics, Faculty of Science, Chulalongkorn University, 254 Phayathai Road, Bangkok 10330 (Thailand); Ebner, Armin D. [Department of Chemical Engineering, University of South Carolina, Columbia, SC 29208 (United States); Natenapit, Mayuree [Department of Physics, Faculty of Science, Chulalongkorn University, 254 Phayathai Road, Bangkok 10330 (Thailand); Ritter, James A. [Department of Chemical Engineering, University of South Carolina, Columbia, SC 29208 (United States)

    2017-04-15

    A new approach for modeling high gradient magnetic separation (HGMS)-type systems during the time-dependent capture and accumulation of magnetic particles by a ferromagnetic wire was developed. This new approach assumes that the viscosity of the fluid (a slurry of water and magnetic particles) is a function of the magnetic particle concentration in the fluid, with imposed maxima on both the particle concentration and the fluid viscosity to avoid unrealistic limits. In 2-D, the unsteady-state Navier-Stokes equations for compressible fluid flow and the unsteady-state continuity equations applied separately to the water and magnetic particle phases in the slurry were solved simultaneously, along with the Laplace equations for the magnetic potential applied separately to the slurry and wire, to evaluate the velocities and concentrations around the wire in a narrow channel using COMSOL Multiphysics. The results from this model revealed very realistic magnetically attractive and repulsive zones forming in time around the wire. These collection zones formed their own impermeable viscous phase during accumulation that was also magnetic, with its area and magnetism locally impacting both the fluid flow and magnetic fields around the wire. These collection zones increased with an increase in the applied magnetic field. For a given set of conditions, the capture ability peaked and then decreased to zero at infinite time during magnetic particle accumulation in the collection zones. Predictions of the collection efficiency from a steady-state, clean-collector trajectory model could not show this behavior; it also agreed only qualitatively with the dynamic model and then only at the early stages of collection and more so at a higher applied magnetic field. Also, the collection zones decreased in size when the accumulation regions included magnetic particle magnetization (realistic) compared to when they excluded it (unrealistic). Overall, this might be the first time a mathematical

  8. Off-gas adsorption model and simulation - OSPREY

    Energy Technology Data Exchange (ETDEWEB)

    Rutledge, V.J. [Idaho National Laboratory, P. O. Box 1625, Idaho Falls, ID (United States)

    2013-07-01

    A capability of accurately simulating the dynamic behavior of advanced fuel cycle separation processes is expected to provide substantial cost savings and many technical benefits. To support this capability, a modeling effort focused on the off-gas treatment system of a used nuclear fuel recycling facility is in progress. The off-gas separation consists of a series of scrubbers and adsorption beds to capture constituents of interest. Dynamic models are being developed to simulate each unit operation involved, so that each unit operation can be used as a stand-alone model and in series with multiple others. Currently, an adsorption model has been developed within the Multi-physics Object Oriented Simulation Environment (MOOSE) developed at the Idaho National Laboratory (INL). Off-gas Separation and Recovery (OSPREY) models the adsorption of off-gas constituents for dispersed plug flow in a packed bed under non-isothermal and non-isobaric conditions. Inputs to the model include gas composition, sorbent and column properties, equilibrium and kinetic data, and inlet conditions. The simulation outputs component concentrations along the column length as a function of time, from which breakthrough data can be obtained. The breakthrough data can be used to determine bed capacity, which in turn can be used to size columns. In addition to concentration data, the model predicts temperature along the column length as a function of time and pressure drop along the column length. A description of the OSPREY model, results from krypton adsorption modeling, and plans for modeling the behavior of iodine, xenon, and tritium will be discussed. (author)
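    As a purely illustrative companion to this description (not OSPREY itself), the sketch below integrates 1-D plug flow through a packed bed with a linear-driving-force uptake term and a linear isotherm and records the outlet breakthrough; every parameter value is an assumed placeholder.

        import numpy as np

        L, N = 0.5, 100              # bed length [m], number of grid cells (assumed)
        u, eps = 0.05, 0.4           # interstitial velocity [m/s], bed voidage (assumed)
        k_ldf, K_eq = 0.05, 20.0     # LDF rate constant [1/s], linear isotherm constant (assumed)
        c_in = 1.0                   # normalized inlet gas-phase concentration

        dz = L / N
        dt = 0.4 * dz / u            # CFL-limited time step
        c = np.zeros(N)              # gas-phase concentration profile
        q = np.zeros(N)              # adsorbed-phase loading profile

        t, outlet = 0.0, []
        while t < 600.0:
            dqdt = k_ldf * (K_eq * c - q)                           # linear-driving-force uptake
            adv = -u * np.diff(np.concatenate(([c_in], c))) / dz    # first-order upwind advection
            c += dt * (adv - (1 - eps) / eps * dqdt)
            q += dt * dqdt
            t += dt
            outlet.append((t, c[-1]))

        # outlet traced over time is the breakthrough curve at the column exit
        print("outlet concentration at t = 600 s:", round(outlet[-1][1], 3))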

  9. Towards a structured approach to building qualitative reasoning models and simulations

    NARCIS (Netherlands)

    Bredeweg, B.; Salles, P.; Bouwer, A.; Liem, J.; Nuttle, T.; Cioca, E.; Nakova, E.; Noble, R.; Caldas, A.L.R.; Uzunov, Y.; Varadinova, E.; Zitek, A.

    2008-01-01

    Successful transfer and uptake of qualitative reasoning technology for modelling and simulation in a variety of domains has been hampered by the lack of a structured methodology to support formalisation of ideas. We present a framework that structures and supports the capture of conceptual knowledge

  10. Modeling and optimization of CO2 capture processes by chemical absorption

    International Nuclear Information System (INIS)

    Neveux, Thibaut

    2013-01-01

    CO2 capture processes by chemical absorption lead to a large energy penalty on the efficiency of coal-fired power plants, constituting one of the main bottlenecks to their industrial deployment. The objective of this thesis is the development and validation of a global methodology allowing precise evaluation of the potential of a given amine capture process. The characteristic phenomena of chemical absorption have been thoroughly studied and represented with state-of-the-art models. The e-UNIQUAC model has been used to describe vapor-liquid and chemical equilibria of electrolyte solutions, and the model parameters have been identified for four solvents. A rate-based formulation has been adopted for the representation of chemically enhanced heat and mass transfer in columns. The absorption and stripping models have been successfully validated against experimental data from an industrial pilot plant and a laboratory pilot plant. The influence of the numerous phenomena has been investigated in order to highlight the most limiting ones. A methodology has been proposed to evaluate the total energy penalty resulting from the implementation of a capture process on an advanced supercritical coal-fired power plant, including thermal and electric consumptions. Then, the simulation and process evaluation environments have been coupled with a non-linear optimization algorithm in order to find optimal operating and design parameters with respect to energetic and economic performance. This methodology has been applied to optimize five process flow schemes operating with an aqueous monoethanolamine solution at 30% by weight: the conventional flow scheme and four process modifications. The performance comparison showed that process modifications using a heat pump effect give the best gains. The use of technical-economic analysis as an evaluation criterion of process performance, coupled with an optimization algorithm, has proved its capability to find values for the numerous operating and design

  11. Spatial capture-recapture models for search-encounter data

    Science.gov (United States)

    Royle, J. Andrew; Kery, Marc; Guelat, Jerome

    2011-01-01

    1. Spatial capture–recapture models make use of auxiliary data on capture location to provide density estimates for animal populations. Previously, models have been developed primarily for fixed trap arrays which define the observable locations of individuals by a set of discrete points. 2. Here, we develop a class of models for 'search-encounter' data, i.e. for detections of recognizable individuals in continuous space, not restricted to trap locations. In our hierarchical model, detection probability is related to the average distance between individual location and the survey path. The locations are allowed to change over time owing to movements of individuals, and individual locations are related formally by a model describing individual activity or home range centre which is itself regarded as a latent variable in the model. We provide a Bayesian analysis of the model in WinBUGS, and develop a custom MCMC algorithm in the R language. 3. The model is applied to simulated data and to territory mapping data for the Willow Tit from the Swiss Breeding Bird Survey MHB. While the observed density was 15 territories per nominal 1 km2 plot of unknown effective sample area, the model produced a density estimate of 21.12 territories per square km (95% posterior interval: 17–26). 4. Spatial capture–recapture models are relevant to virtually all animal population studies that seek to estimate population size or density, yet existing models have been proposed mainly for conventional sampling using arrays of traps. Our model for search-encounter data, where the spatial pattern of searching can be arbitrary and may change over occasions, greatly expands the scope and utility of spatial capture–recapture models.
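    The following sketch only illustrates the general flavour of search-encounter data (it is not the authors' WinBUGS/MCMC analysis): activity centres are scattered over a plot, detection probability declines with distance from a survey path via a half-normal function, and a naive non-spatial abundance estimate is computed for contrast. All numbers are invented.

        import numpy as np

        rng = np.random.default_rng(0)

        N_true = 20                                          # true number of territories (assumed)
        centres = rng.uniform(0.0, 1.0, size=(N_true, 2))    # activity centres on a 1 km x 1 km plot

        K, sigma, p0 = 3, 0.15, 0.8   # survey occasions, half-normal scale [km], max detection prob.

        def detect(centres):
            d = np.abs(centres[:, 0] - 0.5)              # distance to a survey path along x = 0.5
            p = p0 * np.exp(-d**2 / (2 * sigma**2))      # half-normal detection function
            return rng.random(N_true) < p

        histories = np.column_stack([detect(centres) for _ in range(K)])
        seen = np.any(histories, axis=1)
        n_observed = int(seen.sum())

        # naive closed-population estimate that ignores space, shown only for contrast with SCR
        p_bar = histories[seen].mean()
        N_hat = n_observed / (1 - (1 - p_bar) ** K)
        print(f"detected {n_observed} of {N_true}; naive abundance estimate {N_hat:.1f}")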

  12. From capture to simulation: connecting forward and inverse problems in fluids

    KAUST Repository

    Gregson, James; Ihrke, Ivo; Thuerey, Nils; Heidrich, Wolfgang

    2014-01-01

    We explore the connection between fluid capture, simulation and proximal methods, a class of algorithms commonly used for inverse problems in image processing and computer vision. Our key finding is that the proximal operator constraining fluid velocities to be divergence-free is directly equivalent to the pressure-projection methods commonly used in incompressible flow solvers. This observation lets us treat the inverse problem of fluid tracking as a constrained flow problem all while working in an efficient, modular framework. In addition it lets us tightly couple fluid simulation into flow tracking, providing a global prior that significantly increases tracking accuracy and temporal coherence as compared to previous techniques. We demonstrate how we can use these improved results for a variety of applications, such as re-simulation, detail enhancement, and domain modification. We furthermore give an outlook of the applications beyond fluid tracking that our proximal operator framework could enable by exploring the connection of deblurring and fluid guiding.
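    The equivalence highlighted here can be illustrated with a toy calculation (a sketch, not the authors' solver): for a periodic 2-D velocity field, the proximal operator of the divergence-free constraint is a pressure projection, i.e. solve a Poisson equation for a pressure-like field and subtract its gradient. The grid size and field values below are arbitrary.

        import numpy as np

        def spectral_divergence(u, v, KX, KY):
            return np.real(np.fft.ifft2(1j * KX * np.fft.fft2(u) + 1j * KY * np.fft.fft2(v)))

        n = 64
        k = np.fft.fftfreq(n) * 2 * np.pi
        KX, KY = np.meshgrid(k, k)                 # wavenumbers along x (axis 1) and y (axis 0)

        rng = np.random.default_rng(1)
        u, v = rng.standard_normal((2, n, n))      # arbitrary, non-solenoidal velocity field

        # Pressure projection as the proximal operator of the incompressibility constraint:
        # solve lap(p) = div(u, v) in Fourier space, then subtract grad(p).
        U, V = np.fft.fft2(u), np.fft.fft2(v)
        k2 = KX**2 + KY**2
        k2[0, 0] = 1.0                             # leave the mean (k = 0) mode untouched
        p_hat = (1j * KX * U + 1j * KY * V) / (-k2)
        u_df = np.real(np.fft.ifft2(U - 1j * KX * p_hat))
        v_df = np.real(np.fft.ifft2(V - 1j * KY * p_hat))

        print("max |div| before:", np.abs(spectral_divergence(u, v, KX, KY)).max())
        print("max |div| after :", np.abs(spectral_divergence(u_df, v_df, KX, KY)).max())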

  13. From capture to simulation: connecting forward and inverse problems in fluids

    KAUST Repository

    Gregson, James

    2014-07-27

    We explore the connection between fluid capture, simulation and proximal methods, a class of algorithms commonly used for inverse problems in image processing and computer vision. Our key finding is that the proximal operator constraining fluid velocities to be divergence-free is directly equivalent to the pressure-projection methods commonly used in incompressible flow solvers. This observation lets us treat the inverse problem of fluid tracking as a constrained flow problem all while working in an efficient, modular framework. In addition it lets us tightly couple fluid simulation into flow tracking, providing a global prior that significantly increases tracking accuracy and temporal coherence as compared to previous techniques. We demonstrate how we can use these improved results for a variety of applications, such as re-simulation, detail enhancement, and domain modification. We furthermore give an outlook of the applications beyond fluid tracking that our proximal operator framework could enable by exploring the connection of deblurring and fluid guiding.

  14. Thermodynamic simulation of CO2 capture for an IGCC power plant using the calcium looping cycle

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y. [National Engineering Laboratory for Coal-Burning Pollutant Emission Reduction, Shandong University, Jinan (China); Zhao, C.; Ren, Q. [School of Energy and Environment, Southeast University, Nanjing (China)

    2011-06-15

    A CO2 capture process for an integrated gasification combined cycle (IGCC) power plant using the calcium looping cycle was proposed. The CO2 capture process using natural and modified limestone was simulated and investigated with the software package Aspen Plus. It incorporated a fresh feed of sorbent to compensate for the decay in CO2 capture activity during long-term cycles. The sorbent flow ratios have a significant effect on the CO2 capture efficiency and the net efficiency of the CO2 capture system. The IGCC power plant using the modified limestone exhibits higher CO2 capture efficiency than that using the natural limestone at the same sorbent flow ratios. The system net efficiency using the natural and modified limestones reaches 41.7% and 43.1%, respectively, at a CO2 capture efficiency of 90% without the effect of sulfation. (Copyright 2011 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

  15. Capture zone simulation for boreholes located in fractured dykes ...

    African Journals Online (AJOL)

    drinie

    2002-04-02

    Apr 2, 2002 ... models do not account for the capture zone of a draining fracture. In South Africa ... uniform, the pathline distribution under certain hydrogeological settings is ... defined as a mathematical sink line with a finite length. If a pumping ... the impermeable dyke is located at x = - d and the centre of the fracture with ...

  16. Improving Prediction Accuracy of a Rate-Based Model of an MEA-Based Carbon Capture Process for Large-Scale Commercial Deployment

    Directory of Open Access Journals (Sweden)

    Xiaobo Luo

    2017-04-01

    Full Text Available Carbon capture and storage (CCS) technology will play a critical role in reducing anthropogenic carbon dioxide (CO2) emission from fossil-fired power plants and other energy-intensive processes. However, the increase in energy cost caused by equipping a carbon capture process is the main barrier to its commercial deployment. To reduce the capital and operating costs of carbon capture, great efforts have been made to achieve optimal design and operation through process modeling, simulation, and optimization. Accurate models form an essential foundation for this purpose. This paper presents a study on developing a more accurate rate-based model in Aspen Plus® for the monoethanolamine (MEA)-based carbon capture process by multistage model validations. The modeling framework for this process was established first. The steady-state process model was then developed and validated at three stages, which included a thermodynamic model, physical property calculations, and a process model at the pilot plant scale, covering a wide range of pressures, temperatures, and CO2 loadings. The calculation correlations of liquid density and interfacial area were updated by coding Fortran subroutines in Aspen Plus®. The validation results show that the correlation combination for the thermodynamic model used in this study has higher accuracy than those of three other key publications and that the process model predictions agree well with the pilot plant experimental data. A case study was carried out for carbon capture from a 250 MWe combined cycle gas turbine (CCGT) power plant. Shorter packing height and lower specific duty were achieved using this accurate model.

  17. Transient modeling of electrochemically assisted CO2 capture and release

    DEFF Research Database (Denmark)

    Singh, Shobhana; Stechel, Ellen B.; Buttry, Daniel A.

    2017-01-01

    The present work aims to develop a model of a new electrochemical CO2 separation and release technology. We present a one-dimensional transient model of an electrochemical cell for point-source CO2 capture and release, which mainly focuses on the simultaneous mass transport and complex chemical reactions associated with the separation process. For concreteness, we use an ionic liquid (IL) with 2 M thiolate anion (RS−) in 1 M disulfide (RSSR) as an electrolyte in the electrochemical cell to capture, transport and release CO2 under standard operating conditions. We computationally solved the model to analyze the time-dependent behavior of CO2 capture and electro-migration transport across the cell length. Given the high nonlinearity of the system, we used a finite element method (FEM) to numerically solve the coupled mass transport equations. The model describes the concentration profiles by taking ...

  18. Multiscale modeling and simulation of microtubule–motor-protein assemblies

    Science.gov (United States)

    Gao, Tong; Blackwell, Robert; Glaser, Matthew A.; Betterton, M. D.; Shelley, Michael J.

    2016-01-01

    Microtubules and motor proteins self-organize into biologically important assemblies including the mitotic spindle and the centrosomal microtubule array. Outside of cells, microtubule-motor mixtures can form novel active liquid-crystalline materials driven out of equilibrium by adenosine triphosphate–consuming motor proteins. Microscopic motor activity causes polarity-dependent interactions between motor proteins and microtubules, but how these interactions yield larger-scale dynamical behavior such as complex flows and defect dynamics is not well understood. We develop a multiscale theory for microtubule-motor systems in which Brownian dynamics simulations of polar microtubules driven by motors are used to study microscopic organization and stresses created by motor-mediated microtubule interactions. We identify polarity-sorting and crosslink tether relaxation as two polar-specific sources of active destabilizing stress. We then develop a continuum Doi-Onsager model that captures polarity sorting and the hydrodynamic flows generated by these polar-specific active stresses. In simulations of active nematic flows on immersed surfaces, the active stresses drive turbulent flow dynamics and continuous generation and annihilation of disclination defects. The dynamics follow from two instabilities, and accounting for the immersed nature of the experiment yields unambiguous characteristic length and time scales. When turning off the hydrodynamics in the Doi-Onsager model, we capture formation of polar lanes as observed in the Brownian dynamics simulation. PMID:26764729

  19. Multiscale modeling and simulation of microtubule-motor-protein assemblies.

    Science.gov (United States)

    Gao, Tong; Blackwell, Robert; Glaser, Matthew A; Betterton, M D; Shelley, Michael J

    2015-01-01

    Microtubules and motor proteins self-organize into biologically important assemblies including the mitotic spindle and the centrosomal microtubule array. Outside of cells, microtubule-motor mixtures can form novel active liquid-crystalline materials driven out of equilibrium by adenosine triphosphate-consuming motor proteins. Microscopic motor activity causes polarity-dependent interactions between motor proteins and microtubules, but how these interactions yield larger-scale dynamical behavior such as complex flows and defect dynamics is not well understood. We develop a multiscale theory for microtubule-motor systems in which Brownian dynamics simulations of polar microtubules driven by motors are used to study microscopic organization and stresses created by motor-mediated microtubule interactions. We identify polarity-sorting and crosslink tether relaxation as two polar-specific sources of active destabilizing stress. We then develop a continuum Doi-Onsager model that captures polarity sorting and the hydrodynamic flows generated by these polar-specific active stresses. In simulations of active nematic flows on immersed surfaces, the active stresses drive turbulent flow dynamics and continuous generation and annihilation of disclination defects. The dynamics follow from two instabilities, and accounting for the immersed nature of the experiment yields unambiguous characteristic length and time scales. When turning off the hydrodynamics in the Doi-Onsager model, we capture formation of polar lanes as observed in the Brownian dynamics simulation.

  20. Multiscale modeling and simulation of microtubule-motor-protein assemblies

    Science.gov (United States)

    Gao, Tong; Blackwell, Robert; Glaser, Matthew A.; Betterton, M. D.; Shelley, Michael J.

    2015-12-01

    Microtubules and motor proteins self-organize into biologically important assemblies including the mitotic spindle and the centrosomal microtubule array. Outside of cells, microtubule-motor mixtures can form novel active liquid-crystalline materials driven out of equilibrium by adenosine triphosphate-consuming motor proteins. Microscopic motor activity causes polarity-dependent interactions between motor proteins and microtubules, but how these interactions yield larger-scale dynamical behavior such as complex flows and defect dynamics is not well understood. We develop a multiscale theory for microtubule-motor systems in which Brownian dynamics simulations of polar microtubules driven by motors are used to study microscopic organization and stresses created by motor-mediated microtubule interactions. We identify polarity-sorting and crosslink tether relaxation as two polar-specific sources of active destabilizing stress. We then develop a continuum Doi-Onsager model that captures polarity sorting and the hydrodynamic flows generated by these polar-specific active stresses. In simulations of active nematic flows on immersed surfaces, the active stresses drive turbulent flow dynamics and continuous generation and annihilation of disclination defects. The dynamics follow from two instabilities, and accounting for the immersed nature of the experiment yields unambiguous characteristic length and time scales. When turning off the hydrodynamics in the Doi-Onsager model, we capture formation of polar lanes as observed in the Brownian dynamics simulation.

  1. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations.

    Science.gov (United States)

    Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Chemical synapses are comprised of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compared its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.
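    To make the idea of a Volterra-series input-output description concrete, the sketch below evaluates a truncated second-order discrete Volterra expansion on a spike-train-like input; the kernels are arbitrary illustrative shapes, not the kernels fitted to the mechanistic glutamatergic synapse model.

        import numpy as np

        rng = np.random.default_rng(0)

        M = 50                                         # kernel memory in time steps (assumed)
        tau = np.arange(M)
        k1 = 0.8 * np.exp(-tau / 10.0)                 # first-order kernel: exponential decay
        k2 = 0.05 * np.outer(np.exp(-tau / 20.0), np.exp(-tau / 20.0))   # second-order kernel

        x = (rng.random(1000) < 0.05).astype(float)    # sparse spike-train-like input

        def volterra_output(x, k1, k2):
            y = np.zeros_like(x)
            for n in range(len(x)):
                w = x[max(0, n - M + 1):n + 1][::-1]    # recent history, most recent sample first
                m = len(w)
                y[n] = k1[:m] @ w + w @ k2[:m, :m] @ w  # first- plus second-order contributions
            return y

        y = volterra_output(x, k1, k2)
        print("peak response:", round(float(y.max()), 3))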

  2. Semivarying coefficient models for capture-recapture data: colony size estimation for the little penguin Eudyptula minor.

    Science.gov (United States)

    Stoklosa, Jakub; Dann, Peter; Huggins, Richard

    2014-09-01

    To accommodate seasonal effects that change from year to year into models for the size of an open population we consider a time-varying coefficient model. We fit this model to a capture-recapture data set collected on the little penguin Eudyptula minor in south-eastern Australia over a 25 year period using Jolly-Seber type estimators and nonparametric P-spline techniques. The time-varying coefficient model identified strong changes in the seasonal pattern across the years which we further examined using functional data analysis techniques. To evaluate the methodology we also conducted several simulation studies that incorporate seasonal variation. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Differences in rain rate intensities between TRMM observations and community atmosphere model simulations

    Science.gov (United States)

    Deng, Yi; Bowman, Kenneth P.; Jackson, Charles

    2007-01-01

    Precipitation-related latent heating is important in driving the atmospheric general circulation and in generating intraseasonal to decadal atmospheric variability. Our ability to project future climate change, especially trends in costly precipitation extremes, hinges upon whether coupled GCMs capture the processes that affect precipitation characteristics. Our study compares the tropical-subtropical precipitation characteristics of simulations by the NCAR CAM3.1 atmospheric GCM and observations derived from the NASA Tropical Rainfall Measuring Mission (TRMM) satellite. Despite a fairly good simulation of the annual mean rain rate, CAM rains about 10-50% more often than the real world and fails to capture heavy rainfall associated with deep convective systems over subtropical South America and the U.S. Southern Plains. When it rains, there is a likelihood of 0.96-1.0 that it rains lightly in the model, compared to values of 0.84-1.0 in the TRMM data. On the other hand, the likelihood of the occurrence of moderate to heavy rainfall is an order of magnitude higher in observations (0.12-0.2) than in the model. The model compensates for the lack of heavy precipitation by raining more frequently within the light rain category, which leads to an annual rainfall amount close to what is observed. CAM captures the qualitative change of the rain rate PDF from a "dry" oceanic to a "wet" oceanic region, but it fails to simulate the change of precipitation characteristics from an oceanic region to a land region where thunderstorm rainfall dominates.

  4. Numerical and theoretical aspects of the modelling of compressible two-phase flow by interface capture methods

    International Nuclear Information System (INIS)

    Kokh, S.

    2001-01-01

    This research thesis reports the development of a numerical direct simulation of compressible two-phase flows using interface capturing methods. These techniques are based on the use of a fixed Eulerian grid to describe the flow variables as well as the interface between fluids. The author first recalls conventional interface capturing methods and distinguishes between those based on discontinuous colour functions and those based on level set functions. The approach is then extended to a five-equation model to allow the largest possible choice of state equations for the fluids. Three variants are developed. A solver inspired by the Roe scheme is developed for one of them. These interface capturing methods are then refined, particularly with respect to numerical diffusion at the interface. The last part addresses the study of dynamic phase change. Non-conventional thermodynamics tools are used to study the structures of an interface undergoing phase transition. [fr]

  5. Models to capture the potential for disease transmission in domestic sheep flocks.

    Science.gov (United States)

    Schley, David; Whittle, Sophie; Taylor, Michael; Kiss, Istvan Zoltan

    2012-09-15

    Successful control of livestock diseases requires an understanding of how they spread amongst animals and between premises. Mathematical models can offer important insight into the dynamics of disease, especially when built upon experimental and/or field data. Here the dynamics of a range of epidemiological models are explored in order to determine which models perform best in capturing real-world heterogeneities at sufficient resolution. Individual-based network models are considered together with one- and two-class compartmental models, for which the final epidemic size is calculated as a function of the probability of disease transmission occurring during a given physical contact between two individuals (see the sketch below). For numerical results, the special cases of a viral disease with a fast recovery rate (foot-and-mouth disease) and a bacterial disease with a slow recovery rate (brucellosis) amongst sheep are considered. Quantitative results from observational studies of physical contact amongst domestic sheep are applied, and results from differently structured flocks (ewes with newborn lambs, ewes with nearly weaned lambs, and ewes only) are compared. These indicate that the breeding cycle leads to significant changes in the expected basic reproduction ratio of diseases. The observed heterogeneity of contacts amongst animals is best captured by full network simulations, although simple compartmental models describe the key features of an outbreak but, as expected, often overestimate its speed. Here the weights of contacts are heterogeneous, with many low-weight links. However, due to the well-connected nature of the networks, this has little effect and differences between models remain small. These results indicate that simple compartmental models can be a useful tool for modelling real-world flocks; their applicability will be greater still for more homogeneously mixed livestock, which could be promoted by higher intensity farming practices. Copyright © 2012
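    As a schematic illustration of computing final epidemic size as a function of the per-contact transmission probability (not the authors' sheep-contact data or parameters), the sketch below runs a stochastic SIR process on a stand-in contact network; the network model, recovery probability, and transmission values are assumptions.

        import random
        import networkx as nx

        def final_size(G, p_trans, recovery=0.2, seed_node=0):
            # Stochastic SIR on a contact network: each infectious node transmits along
            # each edge with probability p_trans per step and recovers with prob. `recovery`.
            status = {n: "S" for n in G}
            status[seed_node] = "I"
            infectious = {seed_node}
            while infectious:
                new_inf, recovered = set(), set()
                for i in infectious:
                    for nb in G.neighbors(i):
                        if status[nb] == "S" and random.random() < p_trans:
                            new_inf.add(nb)
                    if random.random() < recovery:
                        recovered.add(i)
                for n in new_inf:
                    status[n] = "I"
                for n in recovered:
                    status[n] = "R"
                infectious = (infectious | new_inf) - recovered
            return sum(1 for s in status.values() if s == "R") / len(G)

        random.seed(2)
        flock = nx.watts_strogatz_graph(200, 8, 0.1)       # stand-in contact structure
        for p in (0.01, 0.05, 0.1, 0.2):
            sizes = [final_size(flock, p) for _ in range(20)]
            print(f"p_trans = {p}: mean final size {sum(sizes) / len(sizes):.2f}")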

  6. Simulating streamflow and water table depth with a coupled hydrological model

    Directory of Open Access Journals (Sweden)

    Alphonce Chenjerayi Guzha

    2010-09-01

    Full Text Available A coupled model integrating MODFLOW and TOPNET, with the models interacting through the exchange of recharge and baseflow and through river-aquifer interactions, was developed and applied to the Big Darby Watershed in Ohio, USA. Calibration and validation results show that there is generally good agreement between measured streamflow and simulated results from the coupled model. At two gauging stations, average goodness of fit (R2), percent bias (PB), and Nash-Sutcliffe efficiency (ENS) values of 0.83, 11.15%, and 0.83, respectively, were obtained for simulation of streamflow during calibration, and values of 0.84, 8.75%, and 0.85, respectively, were obtained for validation. The simulated water table depths yielded average R2 values of 0.77 and 0.76 for calibration and validation, respectively. The good match between measured and simulated streamflows and water table depths demonstrates that the model is capable of adequately simulating streamflows and water table depths in the watershed and also of capturing the influence of spatial and temporal variation in recharge.
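    For readers unfamiliar with the three statistics quoted above, the sketch below computes them for made-up daily flows; note that the sign convention for percent bias varies between studies, so this is only one common definition.

        import numpy as np

        def fit_metrics(obs, sim):
            # R^2, percent bias (PB), and Nash-Sutcliffe efficiency (ENS) for streamflow evaluation
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            r2 = np.corrcoef(obs, sim)[0, 1] ** 2
            pb = 100.0 * (sim - obs).sum() / obs.sum()
            ens = 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()
            return r2, pb, ens

        obs = np.array([12.0, 30.0, 55.0, 41.0, 22.0, 15.0])   # made-up daily flows [m3/s]
        sim = np.array([10.0, 33.0, 50.0, 45.0, 20.0, 14.0])
        print("R2 = %.2f, PB = %.1f%%, ENS = %.2f" % fit_metrics(obs, sim))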

  7. Comparison of extended mean-reversion and time series models for electricity spot price simulation considering negative prices

    International Nuclear Information System (INIS)

    Keles, Dogan; Genoese, Massimo; Möst, Dominik; Fichtner, Wolf

    2012-01-01

    This paper evaluates different financial price and time series models, such as mean reversion, autoregressive moving average (ARMA), integrated ARMA (ARIMA) and general autoregressive conditional heteroscedasticity (GARCH) processes, usually applied for electricity price simulations. However, as these models are developed to describe the stochastic behaviour of electricity prices, they are extended by a separate data treatment for the deterministic components (trend, daily, weekly and annual cycles) of electricity spot prices. Furthermore, price jumps are considered and implemented within a regime-switching model. Since 2008 the market design has allowed negative prices at the European Energy Exchange, which also occurred for several hours in the last years. Up to now, only a few financial and time series approaches exist which are able to capture negative prices. This paper presents a new approach incorporating negative prices. The evaluation of the different approaches presented points out that the mean reversion and the ARMA models deliver the lowest root-mean-square error between simulated and historical electricity spot prices gained from the European Energy Exchange. These models also possess lower mean average errors than GARCH models. Hence, they are more suitable for simulating well-fitting price paths. Furthermore, it is shown that the daily structure of historical price curves is better captured by applying ARMA or ARIMA processes instead of mean-reversion or GARCH models. Another important outcome of the paper is that the regime-switching approach and the consideration of negative prices via the newly proposed approach lead to a significant improvement of the electricity price simulation. - Highlights: ► Considering negative prices improves the results of time-series and financial models for electricity prices. ► Regime-switching approach captures the jumps and base prices quite well. ► Removing and separate modelling of deterministic annual, weekly and daily
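    A minimal sketch of the mean-reversion building block discussed here (a deterministic daily cycle plus an additive mean-reverting stochastic part with occasional jumps) is shown below; because the process is additive rather than log-based, simulated prices can become negative. All parameter values are illustrative assumptions, not values fitted to EEX data.

        import numpy as np

        rng = np.random.default_rng(42)

        hours = 24 * 365
        mu, kappa, sigma = 45.0, 0.03, 3.0     # long-run level [EUR/MWh], reversion speed, volatility
        jump_prob, jump_scale = 0.002, 60.0    # jump frequency per hour and jump size scale

        t = np.arange(hours)
        seasonal = 10.0 * np.sin(2 * np.pi * t / 24)        # simplified daily cycle
        x = np.zeros(hours)                                 # stochastic deviation from mu
        for i in range(1, hours):
            jump = jump_scale * rng.standard_normal() * (rng.random() < jump_prob)
            x[i] = x[i - 1] - kappa * x[i - 1] + sigma * rng.standard_normal() + jump

        price = mu + seasonal + x
        print("share of negative-price hours: %.2f%%" % (100 * (price < 0).mean()))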

  8. Capturing multi-stage fuzzy uncertainties in hybrid system dynamics and agent-based models for enhancing policy implementation in health systems research.

    Science.gov (United States)

    Liu, Shiyong; Triantis, Konstantinos P; Zhao, Li; Wang, Youfa

    2018-01-01

    In practical research, it was found that most people make health-related decisions based not on numerical data but on perceptions. Examples include the perceptions, and their corresponding linguistic values, of health risks such as smoking, syringe sharing, eating energy-dense food, and drinking sugar-sweetened beverages. For the sake of understanding the mechanisms that affect the implementation of health-related interventions, we employ fuzzy variables to quantify linguistic variables in healthcare modeling, where we employ an integrated system dynamics and agent-based model. In a nonlinear, causally driven simulation environment with feedback loops, we mathematically demonstrate how interventions at an aggregate level affect the dynamics of linguistic variables that are captured by fuzzy agents and how interactions among fuzzy agents, at the same time, affect the formation of different clusters (groups) that are targeted by specific interventions. In this paper, we provide an innovative framework to capture multi-stage fuzzy uncertainties manifested among interacting heterogeneous agents (individuals) and intervention decisions that affect homogeneous agents (groups of individuals) in a hybrid model that combines an agent-based simulation model (ABM) and a system dynamics model (SDM). Having built the platform to incorporate high-dimension data in a hybrid ABM/SDM model, this paper demonstrates how one can obtain the state variable behaviors in the SDM and the corresponding values of linguistic variables in the ABM. This research provides a way to incorporate high-dimension data in a hybrid ABM/SDM model. This research not only enriches the application of fuzzy set theory by capturing the dynamics of variables associated with interacting fuzzy agents that lead to aggregate behaviors but also informs implementation research by enabling the incorporation of linguistic variables at both individual and institutional levels, which makes unstructured linguistic data
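    As a small, hypothetical illustration of turning a linguistic health-risk perception into fuzzy memberships (not the authors' model), the sketch below evaluates triangular membership functions for three linguistic values on a 0-10 perception scale; the set boundaries are invented.

        def triangular(x, a, b, c):
            # membership of x in a triangular fuzzy set with support [a, c] and peak at b
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        # hypothetical linguistic values for a perceived health risk on a 0-10 scale
        risk_sets = {
            "low":    (0.0, 2.0, 4.0),
            "medium": (3.0, 5.0, 7.0),
            "high":   (6.0, 8.0, 10.0),
        }

        perceived = 6.5
        membership = {label: round(triangular(perceived, *abc), 2) for label, abc in risk_sets.items()}
        print(membership)    # a perception of 6.5 is partly 'medium' and partly 'high'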

  9. Capturing complexity in work disability research: application of system dynamics modeling methodology.

    Science.gov (United States)

    Jetha, Arif; Pransky, Glenn; Hettinger, Lawrence J

    2016-01-01

    Work disability (WD) is characterized by variable and occasionally undesirable outcomes. The underlying determinants of WD outcomes include patterns of dynamic relationships among health, personal, organizational and regulatory factors that have been challenging to characterize, and inadequately represented by contemporary WD models. System dynamics modeling (SDM) methodology applies a sociotechnical systems thinking lens to view WD systems as comprising a range of influential factors linked by feedback relationships. SDM can potentially overcome limitations in contemporary WD models by uncovering causal feedback relationships, and conceptualizing dynamic system behaviors. It employs a collaborative and stakeholder-based model building methodology to create a visual depiction of the system as a whole. SDM can also enable researchers to run dynamic simulations to provide evidence of anticipated or unanticipated outcomes that could result from policy and programmatic intervention. SDM may advance rehabilitation research by providing greater insights into the structure and dynamics of WD systems while helping to understand inherent complexity. Challenges related to data availability, determining validity, and the extensive time and technical skill requirements for model building may limit SDM's use in the field and should be considered. Contemporary work disability (WD) models provide limited insight into complexity associated with WD processes. System dynamics modeling (SDM) has the potential to capture complexity through a stakeholder-based approach that generates a simulation model consisting of multiple feedback loops. SDM may enable WD researchers and practitioners to understand the structure and behavior of the WD system as a whole, and inform development of improved strategies to manage straightforward and complex WD cases.

  10. The knowledge-based economy modeled, measured, simulated

    CERN Document Server

    Leydesdorff, Loet

    2006-01-01

    "Challenging, theoretically rich yet anchored in detailed empirical analysis, Loet Leydesdorff's exploration of the dynamics of the knowledge-economy is a major contribution to the field. Drawing on his expertise in science and technology studies, systems theory, and his internationally respected work on the 'triple helix', the book provides a radically new modelling and simulation of knowledge systems, capturing the articulation of structure, communication, and agency therein. This work will be of immense interest to both theorists of the knowledge-economy and practitioners in science policy." Andrew Webster Science & Technology Studies, University of York, UK

  11. Capture Gamma-Ray Libraries for Nuclear Applications

    International Nuclear Information System (INIS)

    Sleaford, B.W.; Firestone, Richard B.; Summers, N.; Escher, J.; Hurst, A.; Krticka, M.; Basunia, S.; Molnar, G.; Belgya, T.; Revay, Z.; Choi, H.D.

    2010-01-01

    The neutron capture reaction is useful in identifying and analyzing the gamma-ray spectrum from an unknown assembly as it gives unambiguous information on its composition. This can be done passively or actively, where an external neutron source is used to probe an unknown assembly. There are known capture gamma-ray data gaps in the ENDF libraries used by transport codes for various nuclear applications. The Evaluated Gamma-ray Activation file (EGAF) is a new thermal neutron capture database of discrete line spectra and cross sections for over 260 isotopes that was developed as part of an IAEA Coordinated Research Project. EGAF has been used to improve the capture gamma production in ENDF libraries. For medium to heavy nuclei the quasi continuum contribution to the gamma cascades is not experimentally resolved. The continuum contains up to 90% of all the decay energy and is modeled here with the statistical nuclear structure code DICEBOX. This code also provides a consistency check of the level scheme nuclear structure evaluation. The calculated continuum is of sufficient accuracy to include in the ENDF libraries. This analysis also determines new total thermal capture cross sections and provides an improved RIPL database. For higher energy neutron capture there is less experimental data available, making benchmarking of the modeling codes more difficult. We use CASINO, a version of DICEBOX that is modified for this purpose. This can be used to simulate the neutron capture at incident neutron energies up to 20 MeV to improve the gamma-ray spectrum in neutron data libraries used for transport modelling of unknown assemblies.

  12. Modeling, Identification, Estimation, and Simulation of Urban Traffic Flow in Jakarta and Bandung

    Directory of Open Access Journals (Sweden)

    Herman Y. Sutarto

    2015-06-01

    Full Text Available This paper presents an overview of urban traffic flow from the perspective of system theory and stochastic control. The topics of modeling, identification, estimation and simulation techniques are evaluated and validated using actual traffic flow data from the cities of Jakarta and Bandung, Indonesia, and synthetic data generated from the traffic micro-simulator VISSIM. The results on particle filter (PF) based state estimation and Expectation-Maximization (EM) based parameter estimation (identification) confirm that the proposed model gives satisfactory results that capture the variation of urban traffic flow. The combination of these techniques and the simulator platform opens up the possibility of developing a real-time traffic light controller.
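    The sketch below shows a bootstrap particle filter of the kind referred to, applied to a toy random-walk model of hourly traffic flow with noisy detector measurements; the state-space model and noise levels are assumptions, not those identified for Jakarta or Bandung.

        import numpy as np

        rng = np.random.default_rng(7)

        T, n_particles = 100, 500
        q, r = 20.0, 50.0                      # process and measurement noise std [veh/h] (assumed)

        true = 800.0 + np.cumsum(rng.normal(0, q, T))    # hidden hourly flow
        obs = true + rng.normal(0, r, T)                 # detector measurements

        particles = rng.normal(800.0, 100.0, n_particles)
        estimates = []
        for y in obs:
            particles = particles + rng.normal(0, q, n_particles)              # propagate
            w = np.exp(-0.5 * ((y - particles) / r) ** 2)                      # likelihood weights
            w /= w.sum()
            particles = particles[rng.choice(n_particles, n_particles, p=w)]   # resample
            estimates.append(particles.mean())

        rmse = np.sqrt(np.mean((np.array(estimates) - true) ** 2))
        print("particle filter RMSE: %.1f veh/h (measurement noise std was %.0f)" % (rmse, r))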

  13. Streamflow in the upper Mississippi river basin as simulated by SWAT driven by 20th century contemporary results of global climate models and NARCCAP regional climate models

    Energy Technology Data Exchange (ETDEWEB)

    Takle, Eugene S.; Jha, Manoj; Lu, Er; Arritt, Raymond W.; Gutowski, William J. [Iowa State Univ. Ames, IA (United States)

    2010-06-15

    We use the Soil and Water Assessment Tool (SWAT), driven by observations and by results of climate models, to evaluate hydrological quantities, including streamflow, in the Upper Mississippi River Basin (UMRB) for 1981-2003 in comparison to observed streamflow. Daily meteorological conditions used as input to SWAT are taken from (1) observations at weather stations in the basin, (2) daily meteorological conditions simulated by a collection of regional climate models (RCMs) driven by reanalysis boundary conditions, and (3) daily meteorological conditions simulated by a collection of global climate models (GCMs). The regional models used are those whose data are archived by the North American Regional Climate Change Assessment Program (NARCCAP). Results show that the regional models correctly simulate the seasonal cycle of precipitation, temperature, and streamflow within the basin. The regional models also capture interannual extremes represented by the flood of 1993 and the dry conditions of 2000. The ensemble means of both the GCM-driven and RCM-driven simulations by SWAT capture both the timing and amplitude of the seasonal cycle of streamflow, with neither demonstrating significant superiority at the basin level. (orig.)

  14. Neutron Capture Gamma-Ray Libraries for Nuclear Applications

    International Nuclear Information System (INIS)

    Sleaford, B. W.; Summers, N.; Escher, J.; Firestone, R. B.; Basunia, S.; Hurst, A.; Krticka, M.; Molnar, G.; Belgya, T.; Revay, Z.; Choi, H. D.

    2011-01-01

    The neutron capture reaction is useful in identifying and analyzing the gamma-ray spectrum from an unknown assembly as it gives unambiguous information on its composition. This can be done passively or actively where an external neutron source is used to probe an unknown assembly. There are known capture gamma-ray data gaps in the ENDF libraries used by transport codes for various nuclear applications. The Evaluated Gamma-ray Activation file (EGAF) is a new thermal neutron capture database of discrete line spectra and cross sections for over 260 isotopes that was developed as part of an IAEA Coordinated Research Project. EGAF is being used to improve the capture gamma production in ENDF libraries. For medium to heavy nuclei the quasi continuum contribution to the gamma cascades is not experimentally resolved. The continuum contains up to 90% of all the decay energy and is modeled here with the statistical nuclear structure code DICEBOX. This code also provides a consistency check of the level scheme nuclear structure evaluation. The calculated continuum is of sufficient accuracy to include in the ENDF libraries. This analysis also determines new total thermal capture cross sections and provides an improved RIPL database. For higher energy neutron capture there is less experimental data available making benchmarking of the modeling codes more difficult. We are investigating the capture spectra from higher energy neutrons experimentally using surrogate reactions and modeling this with Hauser-Feshbach codes. This can then be used to benchmark CASINO, a version of DICEBOX modified for neutron capture at higher energy. This can be used to simulate spectra from neutron capture at incident neutron energies up to 20 MeV to improve the gamma-ray spectrum in neutron data libraries used for transport modeling of unknown assemblies.

  15. Neutron Capture Gamma-Ray Libraries for Nuclear Applications

    International Nuclear Information System (INIS)

    Sleaford, B.W.; Firestone, R.B.; Summers, N.; Escher, J.; Hurst, A.; Krticka, M.; Basunia, S.; Molnar, G.; Belgya, T.; Revay, Z.; Choi, H.D.

    2010-01-01

    The neutron capture reaction is useful in identifying and analyzing the gamma-ray spectrum from an unknown assembly as it gives unambiguous information on its composition. This can be done passively or actively, where an external neutron source is used to probe an unknown assembly. There are known capture gamma-ray data gaps in the ENDF libraries used by transport codes for various nuclear applications. The Evaluated Gamma-ray Activation file (EGAF) is a new thermal neutron capture database of discrete line spectra and cross sections for over 260 isotopes that was developed as part of an IAEA Coordinated Research Project. EGAF is being used to improve the capture gamma production in ENDF libraries. For medium to heavy nuclei the quasi continuum contribution to the gamma cascades is not experimentally resolved. The continuum contains up to 90% of all the decay energy and is modeled here with the statistical nuclear structure code DICEBOX. This code also provides a consistency check of the level scheme nuclear structure evaluation. The calculated continuum is of sufficient accuracy to include in the ENDF libraries. This analysis also determines new total thermal capture cross sections and provides an improved RIPL database. For higher energy neutron capture there is less experimental data available, making benchmarking of the modeling codes more difficult. They are investigating the capture spectra from higher energy neutrons experimentally using surrogate reactions and modeling this with Hauser-Feshbach codes. This can then be used to benchmark CASINO, a version of DICEBOX modified for neutron capture at higher energy. This can be used to simulate spectra from neutron capture at incident neutron energies up to 20 MeV to improve the gamma-ray spectrum in neutron data libraries used for transport modeling of unknown assemblies.

  16. Modeling of phonon- and Coulomb-mediated capture processes in quantum dots

    DEFF Research Database (Denmark)

    Magnúsdóttir, Ingibjörg

    2003-01-01

    This thesis describes modeling of carrier relaxation processes in self-assembled quantum-dot structures, with particular emphasis on carrier capture processes in quantum dots. Relaxation by emission of longitudinal optical (LO) phonons is very efficient in bulk semiconductors and nanostructures of higher dimensionality. Here, we investigate carrier capture processes into quantum dots mediated by the emission of one and two LO phonons. In these investigations it is assumed that the dot is initially empty. In the case of single-phonon capture we also investigate the influence of the presence of a charge in the quantum-dot state into which the capture takes place. In general, such capture rates are of the same order as capture rates into an empty dot state, but in some cases the dot-size interval for which the capture process is energetically allowed is considerably reduced. The above calculations

  17. Warp simulations for capture and control of laser-accelerated proton beams

    International Nuclear Information System (INIS)

    Nuernberg, Frank; Harres, K; Roth, M; Friedman, A; Grote, D P; Logan, B G; Schollmeier, M

    2010-01-01

    The capture of laser-accelerated proton beams accompanied by co-moving electrons via a solenoid field has been studied with particle-in-cell simulations. The main advantages of the Warp simulation suite that we have used, relative to envelope or tracking codes, are the possibility of including all source parameters energy-resolved, adding electrons as a second species, and considering the non-negligible space-charge forces and electrostatic self-fields. It was observed that the influence of the electrons is of vital importance. The magnetic effect on the electrons outbalances the space-charge force. Hence, the electrons are forced onto the beam axis and attract protons. Besides the energy-dependent proton density increase on axis, the change in the particle spectrum is also important for future applications. Protons are accelerated/decelerated slightly, electrons highly. Two-thirds of all electrons are lost directly at the source and 27% of all protons hit the inner wall of the solenoid.

  18. Warp simulations for capture and control of laser-accelerated proton beams

    International Nuclear Information System (INIS)

    Nurnberg, F.; Friedman, A.; Grote, D.P.; Harres, K.; Logan, B.G.; Schollmeier, M.; Roth, M.

    2009-01-01

    The capture of laser-accelerated proton beams accompanied by co-moving electrons via a solenoid field has been studied with particle-in-cell simulations. The main advantages of the Warp simulation suite that was used, relative to envelope or tracking codes, are the possibility of including all source parameters energy-resolved, adding electrons as a second species, and considering the non-negligible space-charge forces and electrostatic self-fields. It was observed that the influence of the electrons is of vital importance. The magnetic effect on the electrons outbalances the space-charge force. Hence, the electrons are forced onto the beam axis and attract protons. Besides the energy-dependent proton density increase on axis, the change in the particle spectrum is also important for future applications. Protons are accelerated/decelerated slightly, electrons highly. Two-thirds of all electrons are lost directly at the source and 27% of all protons hit the inner wall of the solenoid.

  19. Multi-model ensemble hydrological simulation using a BP Neural Network for the upper Yalongjiang River Basin, China

    Science.gov (United States)

    Li, Zhanjie; Yu, Jingshan; Xu, Xinyi; Sun, Wenchao; Pang, Bo; Yue, Jiajia

    2018-06-01

    Hydrological models are important and effective tools for detecting complex hydrological processes. Different models have different strengths when capturing the various aspects of hydrological processes. Relying on a single model usually leads to simulation uncertainties. Ensemble approaches, based on multi-model hydrological simulations, can improve application performance over single models. In this study, the upper Yalongjiang River Basin was selected for a case study. Three commonly used hydrological models (SWAT, VIC, and BTOPMC) were selected and used for independent simulations with the same input and initial values. Then, the BP neural network method was employed to combine the results from the three models. The results show that the accuracy of BP ensemble simulation is better than that of the single models.
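    A toy version of such an ensemble step is sketched below, with synthetic data standing in for the SWAT, VIC, and BTOPMC simulations and scikit-learn's MLPRegressor standing in for the BP network; all series and settings are invented for illustration.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)

        # synthetic 'observed' daily flow and three imperfect single-model simulations
        n_days = 1000
        observed = 50 + 40 * np.sin(np.linspace(0, 20, n_days)) + rng.normal(0, 5, n_days)
        model_runs = np.column_stack([
            observed * 0.8 + rng.normal(0, 10, n_days),   # biased low, noisy
            observed * 1.2 + rng.normal(0, 8, n_days),    # biased high
            observed + rng.normal(0, 15, n_days),         # unbiased but very noisy
        ])

        train, test = slice(0, 700), slice(700, n_days)
        net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
        net.fit(model_runs[train], observed[train])
        ensemble = net.predict(model_runs[test])

        def nse(obs, sim):
            return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        best_single = max(nse(observed[test], model_runs[test, i]) for i in range(3))
        print("NSE, best single model:", round(best_single, 3))
        print("NSE, BP-style ensemble:", round(nse(observed[test], ensemble), 3))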

  20. Additive Manufacturing Modeling and Simulation A Literature Review for Electron Beam Free Form Fabrication

    Science.gov (United States)

    Seufzer, William J.

    2014-01-01

    Additive manufacturing is coming into industrial use and has several desirable attributes. Control of the deposition remains a complex challenge, and so this literature review was initiated to capture current modeling efforts in the field of additive manufacturing. This paper summarizes about 10 years of modeling and simulation related to both welding and additive manufacturing. The goals were to learn who is doing what in modeling and simulation, to summarize various approaches taken to create models, and to identify research gaps. Later sections in the report summarize implications for closed-loop-control of the process, implications for local research efforts, and implications for local modeling efforts.

  1. Modeling and Simulation Resource Repository (MSRR) (System Engineering/Integrated M&S Management Approach)

    Science.gov (United States)

    Milroy, Audrey; Hale, Joe

    2006-01-01

    NASA's Exploration Systems Mission Directorate (ESMD) is implementing a management approach for modeling and simulation (M&S) that will provide decision-makers information on the model's fidelity, credibility, and quality, including the verification, validation and accreditation information. The NASA MSRR will be implemented leveraging M&S industry best practices. This presentation will discuss the requirements that will enable NASA to capture and make available the "meta data" or "simulation biography" data associated with a model. The presentation will also describe the requirements that drive how NASA will collect and document relevant information for models or suites of models in order to facilitate use and reuse of relevant models and provide visibility across NASA organizations and the larger M&S community.

  2. Modeling Supermassive Black Holes in Cosmological Simulations

    Science.gov (United States)

    Tremmel, Michael

    My thesis work has focused on improving the implementation of supermassive black hole (SMBH) physics in cosmological hydrodynamic simulations. SMBHs are ubiquitous in massive galaxies, as well as bulge-less galaxies and dwarfs, and are thought to be a critical component to massive galaxy evolution. Still, much is unknown about how SMBHs form, grow, and affect their host galaxies. Cosmological simulations are an invaluable tool for understanding the formation of galaxies, self-consistently tracking their evolution with realistic merger and gas accretion histories. SMBHs are often modeled in these simulations (generally as a necessity to produce realistic massive galaxies), but their implementations are commonly simplified in ways that can limit what can be learned. Current and future observations are opening new windows into the lifecycle of SMBHs and their host galaxies, but require more detailed, physically motivated simulations. Within the novel framework I have developed, SMBHs 1) are seeded at early times without a priori assumptions of galaxy occupation, 2) grow in a way that accounts for the angular momentum of gas, and 3) experience realistic orbital evolution. I show how this model, properly tuned with a novel parameter optimization technique, results in realistic galaxies and SMBHs. Utilizing the unique ability of these simulations to capture the dynamical evolution of SMBHs, I present the first self-consistent prediction for the formation timescales of close SMBH pairs, precursors to SMBH binaries and merger events potentially detected by future gravitational wave experiments.

  3. Package Flow Model and its fuzzy implementation for simulating nuclear reactor system dynamics

    International Nuclear Information System (INIS)

    Matsuoka, Hiroshi; Ishiguro, Misako.

    1996-01-01

    A simple, intuitive simulation model, which we call the 'Package Flow Model' (PFM), has been developed to evaluate physical processes in a nuclear reactor system from a macroscopic point of view. In a previous paper, we showed that the physical process of each energy generation and transfer stage in a PWR could be modeled by the PFM, and that its dynamics could be approximately simulated by a fuzzy implementation. In this paper, a network of PFMs for simulating a total PWR system is proposed and some transients of the nuclear ship 'MUTSU' reactor system are evaluated. The simulated results are consistent with those from the Nuclear Ship Engineering Simulation System developed by JAERI. Furthermore, a visual representation method is proposed to intuitively capture the profile of a fuel safety transient. Using the PFM network, we can readily calculate the transient phenomena of the system even on a notebook-type personal computer. In addition, we can easily interpret the calculated results by surveying a small number of parameters. (author)

  4. Geostatistical simulation of geological architecture and uncertainty propagation in groundwater modeling

    DEFF Research Database (Denmark)

    He, Xiulan

    parameters and model structures, which are the primary focuses of this PhD research. Parameter uncertainty was analyzed using an optimization tool (PEST: Parameter ESTimation) in combination with a random sampling method (LHS: Latin Hypercube Sampling). Model structure, namely geological architecture...... be compensated by model parameters, e.g. when hydraulic heads are considered. However, geological structure is the primary source of uncertainty with respect to simulations of groundwater age and capture zone. Operational MPS based software has been on stage for just around ten years; yet, issues regarding...... geological structures of these three sites provided appropriate conditions for testing the methods. Our study documented that MPS is an efficient approach for simulating geological heterogeneity, especially for non-stationary system. The high resolution of geophysical data such as SkyTEM is valuable both...

  5. Accounting for female reproductive cycles in a superpopulation capture-recapture framework

    DEFF Research Database (Denmark)

    Carroll, E. L.; Childerhouse, S. J.; Fewster, R. M.

    2013-01-01

    Superpopulation capture-recapture models are useful for estimating the abundance of long-lived, migratory species because they are able to account for the fluid nature of annual residency at migratory destinations. Here we extend the superpopulation POPAN model to explicitly account...... 700 whales, sampled during two sets of winter expeditions in 1995-1998 and 2006-2009. Due to differences in recapture rates between sexes, only sex-specific models were considered. The POPAN-tau models, which explicitly account for a decrease in capture probability in non-calving years, fit the female...... estimate of 1007 males (95% CL 794, 1276) and an estimated annual increase of 7% (95% CL 5%, 9%) for 1995-2009. Combined, the total superpopulation estimate for 1995-2009 was 2169 whales (95% CL 1836, 2563). Simulations suggest that failure to account for the effect of reproductive status on the capture...

  6. A fast Monte Carlo program for pulsed-neutron capture-gamma tools

    International Nuclear Information System (INIS)

    Hovgaard, J.

    1992-02-01

    A fast model for the pulsed-neutron capture-gamma tool has been developed. It is believed that the program produces valid results even though some approximations have been introduced; for instance, a correct γ-photon transport simulation, which is under preparation, has not yet been included. Simulations performed so far have shown that the model fully lives up to expectations with respect to computing time and accuracy. (au)

  7. Comprehensive device Simulation modeling of heavily irradiated silicon detectors at cryogenic temperatures

    CERN Document Server

    Moscatelli, F; MacEvoy, B; Hall, G; Passeri, D; Petasecca, M; Pignatel, Giogrio Umberto

    2004-01-01

    Radiation hardness is a critical design concern for present and future silicon detectors in high energy physics. Tracking systems at the CERN Large Hadron Collider (LHC) are expected to operate for ten years and to receive fast hadron fluences equivalent to 10^15 cm^-2 1-MeV neutrons. Recently, low temperature operating conditions have been suggested as a means of suppressing the negative effects of radiation damage on detector charge collection properties. To investigate this effect, simulations have been carried out using the ISE-TCAD DESSIS device simulator. The so-called "three-level model" has been used. A comprehensive analysis of the influence of the V2, CiOi and V2O capture cross sections on the effective doping concentration (N_eff) as a function of temperature and fluence has been carried out. The capture cross sections have been varied in the range 10^-18 to 10^-12 cm^2. The simulated results are compared with charge collection spectra obtained wit...

  8. Instrument Response Modeling and Simulation for the GLAST Burst Monitor

    International Nuclear Information System (INIS)

    Kippen, R. M.; Hoover, A. S.; Wallace, M. S.; Pendleton, G. N.; Meegan, C. A.; Fishman, G. J.; Wilson-Hodge, C. A.; Kouveliotou, C.; Lichti, G. G.; Kienlin, A. von; Steinle, H.; Diehl, R.; Greiner, J.; Preece, R. D.; Connaughton, V.; Briggs, M. S.; Paciesas, W. S.; Bhat, P. N.

    2007-01-01

    The GLAST Burst Monitor (GBM) is designed to provide wide field of view observations of gamma-ray bursts and other fast transient sources in the energy range 10 keV to 30 MeV. The GBM is composed of several unshielded and uncollimated scintillation detectors (twelve NaI and two BGO) that are widely dispersed about the GLAST spacecraft. As a result, reconstructing source locations, energy spectra, and temporal properties from GBM data requires detailed knowledge of the detectors' response to both direct radiation as well as that scattered from the spacecraft and Earth's atmosphere. This full GBM instrument response will be captured in the form of a response function database that is derived from computer modeling and simulation. The simulation system is based on the GEANT4 Monte Carlo radiation transport simulation toolset, and is being extensively validated against calibrated experimental GBM data. We discuss the architecture of the GBM simulation and modeling system and describe how its products will be used for analysis of observed GBM data. Companion papers describe the status of validating the system

  9. Measurement and Modelling of the Piperazine Potassium Carbonate Solutions for CO2 Capture

    DEFF Research Database (Denmark)

    Fosbøl, Philip Loldrup; Thomsen, Kaj; Waseem Arshad, Muhammad

    The climate is in a critical state due to the impact of pollution by CO2 and similar greenhouse gases. Action needs to be taken in order to reduce the emission of harmful components. CO2 capture is one process to help the world population back on track in order to return to normal condition...... with the purpose of simulating the CO2 capture process. This involves equilibrium studies on physical properties in the activated carbonate solvent. An overview of the energy consumption when applying the piperazine-promoted carbonate solutions is given....

  10. A multi-species exchange model for fully fluctuating polymer field theory simulations.

    Science.gov (United States)

    Düchs, Dominik; Delaney, Kris T; Fredrickson, Glenn H

    2014-11-07

    Field-theoretic models have been used extensively to study the phase behavior of inhomogeneous polymer melts and solutions, both in self-consistent mean-field calculations and in numerical simulations of the full theory capturing composition fluctuations. The models commonly used can be grouped into two categories, namely, species models and exchange models. Species models involve integrations of functionals that explicitly depend on fields originating both from species density operators and their conjugate chemical potential fields. In contrast, exchange models retain only linear combinations of the chemical potential fields. In the two-component case, development of exchange models has been instrumental in enabling stable complex Langevin (CL) simulations of the full complex-valued theory. No comparable stable CL approach has yet been established for field theories of the species type. Here, we introduce an extension of the exchange model to an arbitrary number of components, namely, the multi-species exchange (MSE) model, which greatly expands the classes of soft material systems that can be accessed by the complex Langevin simulation technique. We demonstrate the stability and accuracy of the MSE-CL sampling approach using numerical simulations of triblock and tetrablock terpolymer melts, and tetrablock quaterpolymer melts. This method should enable studies of a wide range of fluctuation phenomena in multiblock/multi-species polymer blends and composites.

  11. Hollow fiber membrane contactors for CO2 capture: modeling and up-scaling to CO2 capture for an 800 MWe coal power station

    NARCIS (Netherlands)

    Kimball, E.; Al-Azki, A.; Gomez, A.; Goetheer, E.L.V.; Booth, N.; Adams, D.; Ferre, D.

    2014-01-01

    A techno-economic analysis was completed to compare the use of Hollow Fiber Membrane Modules (HFMM) with the more conventional structured packing columns as the absorber in amine-based CO2 capture systems for power plants. In order to simulate the operation of industrial scale HFMM systems, a

  12. A Nonlinear Dynamic Subscale Model for Partially Resolved Numerical Simulation (PRNS)/Very Large Eddy Simulation (VLES) of Internal Non-Reacting Flows

    Science.gov (United States)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2010-01-01

    A brief introduction of the temporal filter based partially resolved numerical simulation/very large eddy simulation approach (PRNS/VLES) and its distinct features are presented. A nonlinear dynamic subscale model and its advantages over the linear subscale eddy viscosity model are described. In addition, a guideline for conducting a PRNS/VLES simulation is provided. Results are presented for three turbulent internal flows. The first one is the turbulent pipe flow at low and high Reynolds numbers to illustrate the basic features of PRNS/VLES; the second one is the swirling turbulent flow in an LM6000 single injector to further demonstrate the differences in the calculated flow fields resulting from the nonlinear model versus the pure eddy viscosity model; the third one is a more complex turbulent flow generated in a single-element lean direct injection (LDI) combustor, for which the calculated result demonstrates that the current PRNS/VLES approach is capable of capturing the dynamically important, unsteady turbulent structures while using a relatively coarse grid.

  13. USING COPULAS TO MODEL DEPENDENCE IN SIMULATION RISK ASSESSMENT

    Energy Technology Data Exchange (ETDEWEB)

    Dana L. Kelly

    2007-11-01

    Typical engineering systems in applications with high failure consequences such as nuclear reactor plants often employ redundancy and diversity of equipment in an effort to lower the probability of failure and therefore risk. However, it has long been recognized that dependencies exist in these redundant and diverse systems. Some dependencies, such as common sources of electrical power, are typically captured in the logic structure of the risk model. Others, usually referred to as intercomponent dependencies, are treated implicitly by introducing one or more statistical parameters into the model. Such common-cause failure models have limitations in a simulation environment. In addition, substantial subjectivity is associated with parameter estimation for these models. This paper describes an approach in which system performance is simulated by drawing samples from the joint distributions of dependent variables. The approach relies on the notion of a copula distribution, a notion which has been employed by the actuarial community for ten years or more, but which has seen only limited application in technological risk assessment. The paper also illustrates how equipment failure data can be used in a Bayesian framework to estimate the parameter values in the copula model. This approach avoids much of the subjectivity required to estimate parameters in traditional common-cause failure models. Simulation examples are presented for failures in time. The open-source software package R is used to perform the simulations. The open-source software package WinBUGS is used to perform the Bayesian inference via Markov chain Monte Carlo sampling.
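
    The core idea can be sketched in a few lines of Python (assuming numpy and scipy); the marginal distributions, the correlation value and the mission time below are hypothetical, and the paper's actual analysis used R and WinBUGS with Bayesian parameter estimation from equipment failure data.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      rho = 0.7                                    # assumed dependence between components
      cov = np.array([[1.0, rho], [rho, 1.0]])
      marginal = stats.weibull_min(c=1.5, scale=1000.0)   # assumed failure-time marginal [h]

      # 1. Correlated standard normals mapped to uniforms define the Gaussian copula.
      z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=100_000)
      u = stats.norm.cdf(z)

      # 2. Inverse marginal CDFs turn the uniforms into dependent failure times.
      t1 = marginal.ppf(u[:, 0])
      t2 = marginal.ppf(u[:, 1])

      # Probability that both redundant components fail before a 500 h mission.
      p_joint = np.mean((t1 < 500.0) & (t2 < 500.0))
      p_indep = marginal.cdf(500.0) ** 2
      print(f"joint failure probability: copula {p_joint:.4f} vs independence {p_indep:.4f}")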

  14. Predicting the performance uncertainty of a 1-MW pilot-scale carbon capture system after hierarchical laboratory-scale calibration and validation

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Zhijie; Lai, Canhai; Marcy, Peter William; Dietiker, Jean-François; Li, Tingwen; Sarkar, Avik; Sun, Xin

    2017-05-01

    A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design’s predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
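
    A schematic Monte Carlo version of this kind of threshold question, written in Python with numpy, is sketched below. The capture-efficiency surrogate, the uncertain-parameter distributions and the operating variable being scanned are all made-up placeholders; the actual study propagated laboratory-calibrated parameter posteriors through the upscaled multiphase reactive flow model of the 1-MW system.

      import numpy as np

      rng = np.random.default_rng(1)

      def capture_efficiency(sorbent_rate, kinetic_factor, capacity_factor):
          """Toy surrogate: capture rises with sorbent circulation and saturates near 1."""
          return np.clip(kinetic_factor * capacity_factor
                         * (1.0 - np.exp(-0.08 * sorbent_rate)), 0.0, 1.0)

      n_samples = 20_000
      kinetics = rng.normal(1.00, 0.02, n_samples)    # stand-ins for calibrated posteriors
      capacity = rng.normal(0.98, 0.02, n_samples)

      for sorbent_rate in np.arange(10.0, 80.0, 1.0):             # arbitrary units
          eff = capture_efficiency(sorbent_rate, kinetics, capacity)
          confidence = np.mean(eff >= 0.90)                       # P(capture >= 90 %)
          if confidence >= 0.95:
              print(f"smallest circulation rate meeting the target: {sorbent_rate:.0f}")
              break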

  15. Modeling and Simulation of Longitudinal Dynamics for Low Energy Ring-High Energy Ring at the Positron-Electron Project

    International Nuclear Information System (INIS)

    Rivetta, Claudio; Mastorides, T.; Fox, J.D.; Teytelman, D.; Van Winkle, D.

    2007-01-01

    A time domain dynamic modeling and simulation tool for beam-cavity interactions in the Low Energy Ring (LER) and High Energy Ring (HER) at the Positron-Electron Project (PEP-II) is presented. Dynamic simulation results for PEP-II are compared to measurements of the actual machine. The motivation for this tool is to explore the stability margins and performance limits of PEP-II radio-frequency (RF) systems at future higher currents and upgraded RF configurations. It also serves as a test bed for new control algorithms and can define the ultimate limits of the low-level RF (LLRF) architecture. The time domain program captures the dynamic behavior of the beam-cavity-LLRF interaction based on a reduced model. The ring current is represented by macrobunches. Multiple RF stations in the ring are represented via one or two macrocavities. Each macrocavity captures the overall behavior of all the 2 or 4 cavity RF stations. Station models include nonlinear elements in the klystron and signal processing. This enables modeling the principal longitudinal impedance control loops interacting via the longitudinal beam model. The dynamics of the simulation model are validated by comparing the measured growth rates for the LER with simulation results. The simulated behavior of the LER at increased operation currents is presented via low-mode instability growth rates. Different control strategies are compared and the effects of both the imperfections in the LLRF signal processing and the nonlinear drivers and klystrons are explored

  16. A Model for Capturing Team Adaptation in Simulated Emergencies

    DEFF Research Database (Denmark)

    Paltved, Charlotte; Musaeus, Peter

    2013-01-01

    and conceptualizes team processes through recursive cycles of updates. In the 29 simulation scenarios, 94 updates were recorded. There were between 0 and 8 updates per scenario (mean 3,2). Level five was achieved in 13 scenarios, level four in 8 scenarios and finally, level two and three were achieved in four...... is required to meaningfully account for communication exchanges in context. As such, this theoretical framework might provide a vocabulary for operationalizing the differences between "effective and ineffective" communication. Moving beyond counting communication events or the frequency of certain...

  17. Capturing coherent structures and turbulent interfaces in wake flows by means of the Organised Eddy Simulation, OES and by Tomo-PIV

    International Nuclear Information System (INIS)

    Deri, E; Braza, M; Cazin, S; Cid, E; Harran, G; Ouvrard, H; Hoarau, Y; Hunt, J

    2011-01-01

    The present study aims at a physical analysis of the coherent and chaotic vortex dynamics in the near wake around a flat plate at incidence, to provide new elements in respect of the flow physics turbulence modelling for high-Reynolds number flows around bodies. This constitutes nowadays a challenge in the aeronautics design. A special attention is paid to capture the thin shear layer interfaces downstream of the separation, responsible for aeroacoustics phenomena related to noise reduction and directly linked to an accurate prediction of the aerodynamic forces. The experimental investigation is carried out by means of tomographic PIV. The interaction of the most energetic coherent structures with the random turbulence is discussed. Furthermore, the POD analysis allowed evaluation of 3D phase averaged dynamics as well as the influence of higher modes associated with the finer-scale turbulence. The numerical study by means of the Organised Eddy Simulation, OES approach ensured a reduced turbulence diffusion that allowed development of the von Karman instability and of capturing of the thin shear-layer interfaces, by using appropriate criteria based on vorticity and dissipation rate of kinetic energy. A comparison between the experiments and the simulations concerning the coherent vortex pattern is carried out.

  18. Diagnosing observed characteristics of the wet season across Africa to identify deficiencies in climate model simulations

    Science.gov (United States)

    Dunning, C.; Black, E.; Allan, R. P.

    2017-12-01

    The seasonality of rainfall over Africa plays a key role in determining socio-economic impacts for agricultural stakeholders, influences energy supply from hydropower, affects the length of the malaria transmission season and impacts surface water supplies. Hence, failure or delays of these rains can lead to significant socio-economic impacts. Diagnosing and interpreting interannual variability and long-term trends in seasonality, and analysing the physical driving mechanisms, requires a robust definition of African precipitation seasonality, applicable to both observational datasets and model simulations. Here we present a methodology for objectively determining the onset and cessation of multiple wet seasons across the whole of Africa. Compatibility with known physical drivers of African rainfall, consistency with indigenous methods, and generally strong agreement between satellite-based rainfall data sets confirm that the method is capturing the correct seasonal progression of African rainfall. Application of this method to observational datasets reveals that over East Africa cessation of the short rains is 5 days earlier in La Nina years, and the failure of the rains and subsequent humanitarian disaster is associated with shorter as well as weaker rainy seasons over this region. The method is used to examine the representation of the seasonality of African precipitation in CMIP5 model simulations. Overall, atmosphere-only and fully coupled CMIP5 historical simulations represent essential aspects of the seasonal cycle; patterns of seasonal progression of the rainy season are captured, and for the most part mean model onset/cessation dates agree with mean observational dates to within 18 days. However, unlike the atmosphere-only simulations, the coupled simulations do not capture the biannual regime over the southern West African coastline, linked to errors in Gulf of Guinea Sea Surface Temperature. Application to both observational and climate model datasets, and

  19. How important is diversity for capturing environmental-change responses in ecosystem models?

    DEFF Research Database (Denmark)

    Prowe, Friederike; Pahlow, M.; Dutkiewicz, S.

    2014-01-01

    Marine ecosystem models used to investigate how global change affects ocean ecosystems and their functioning typically omit pelagic plankton diversity. Diversity, however, may affect functions such as primary production and their sensitivity to environmental changes. Here we use a global ocean...... ecosystem model that explicitly resolves phytoplankton diversity by defining subtypes within four phytoplankton functional types (PFTs). We investigate the model's ability to capture diversity effects on primary production under environmental change. An idealized scenario with a sudden reduction in vertical...... in the model, for example via trade-offs or different PFTs, thus determines the diversity effects on ecosystem functioning captured in ocean ecosystem models....

  20. Dynamic Modeling and Control Studies of a Two-Stage Bubbling Fluidized Bed Adsorber-Reactor for Solid-Sorbent CO{sub 2} Capture

    Energy Technology Data Exchange (ETDEWEB)

    Modekurti, Srinivasarao; Bhattacharyya, Debangsu; Zitney, Stephen E.

    2013-07-31

    A one-dimensional, non-isothermal, pressure-driven dynamic model has been developed for a two-stage bubbling fluidized bed (BFB) adsorber-reactor for solid-sorbent carbon dioxide (CO{sub 2}) capture using Aspen Custom Modeler® (ACM). The BFB model for the flow of gas through a continuous phase of downward moving solids considers three regions: emulsion, bubble, and cloud-wake. Both the upper and lower reactor stages are of overflow-type configuration, i.e., the solids leave from the top of each stage. In addition, dynamic models have been developed for the downcomer that transfers solids between the stages and the exit hopper that removes solids from the bottom of the bed. The models of all auxiliary equipment such as valves and gas distributor have been integrated with the main model of the two-stage adsorber reactor. Using the developed dynamic model, the transient responses of various process variables such as CO{sub 2} capture rate and flue gas outlet temperatures have been studied by simulating typical disturbances such as change in the temperature, flowrate, and composition of the incoming flue gas from pulverized coal-fired power plants. In control studies, the performance of a proportional-integral-derivative (PID) controller, feedback-augmented feedforward controller, and linear model predictive controller (LMPC) are evaluated for maintaining the overall CO{sub 2} capture rate at a desired level in the face of typical disturbances.
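
    As an illustration of the simplest of the three controllers discussed above, a discrete PID loop regulating the overall capture fraction by adjusting sorbent circulation can be sketched in Python. The first-order "plant", the disturbance and all gains are arbitrary placeholders and bear no relation to the Aspen Custom Modeler BFB model.

      import numpy as np

      def simulate_pid(kp=4.0, ki=0.8, kd=0.5, dt=1.0, t_end=400.0):
          setpoint = 0.90                    # target CO2 capture fraction
          capture = 0.80                     # initial capture fraction
          integral, prev_error = 0.0, 0.0
          history = []
          for step in range(int(t_end / dt)):
              disturbance = 0.03 if step * dt >= 200.0 else 0.0   # flue-gas upset at t = 200 s
              error = setpoint - capture
              integral += error * dt
              derivative = (error - prev_error) / dt
              prev_error = error
              # Positional PID law acting on the sorbent circulation rate.
              sorbent_flow = 1.0 + kp * error + ki * integral + kd * derivative
              # Toy first-order process: capture relaxes toward a flow-dependent level.
              steady_capture = 0.70 + 0.25 * np.tanh(sorbent_flow - 1.0) - disturbance
              capture += dt / 20.0 * (steady_capture - capture)
              history.append(capture)
          return np.array(history)

      trace = simulate_pid()
      print("capture fraction at the end of the 400 s run:", round(float(trace[-1]), 3))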

  1. Reconstructing 3D Tree Models Using Motion Capture and Particle Flow

    Directory of Open Access Journals (Sweden)

    Jie Long

    2013-01-01

    Recovering tree shape from motion capture data is a first step toward efficient and accurate animation of trees in wind using motion capture data. Existing algorithms for generating models of tree branching structures for image synthesis in computer graphics are not adapted to the unique data set provided by motion capture. We present a method for tree shape reconstruction using particle flow on input data obtained from a passive optical motion capture system. Initial branch tip positions are estimated from averaged and smoothed motion capture data. Branch tips, as particles, are also generated within a bounding space defined by a stack of bounding boxes or a convex hull. The particle flow, starting at branch tips within the bounding volume under forces, creates tree branches. The forces are composed of gravity, internal force, and external force. The resulting shapes are realistic and similar to the original tree crown shape. Several tunable parameters provide control over branch shape and arrangement.

  2. Modeling and Simulation of a Tethered Harpoon for Comet Sampling

    Science.gov (United States)

    Quadrelli, Marco B.

    2014-01-01

    This paper describes the development of a dynamic model and simulation results of a tethered harpoon for comet sampling. This model and simulation was done in order to carry out an initial sensitivity analysis for key design parameters of the tethered system. The harpoon would contain a canister which would collect a sample of soil from a cometary surface. Both a spring ejected canister and a tethered canister are considered. To arrive in close proximity of the spacecraft at the end of its trajectory so it could be captured, the free-flying canister would need to be ejected at the right time and with the proper impulse, while the tethered canister must be recovered by properly retrieving the tether at a rate that would avoid an excessive amplitude of oscillatory behavior during the retrieval. The paper describes the model of the tether dynamics and harpoon penetration physics. The simulations indicate that, without the tether, the canister would still reach the spacecraft for collection, that the tether retrieval of the canister would be achievable with reasonable fuel consumption, and that the canister amplitude upon retrieval would be insensitive to variations in vertical velocity dispersion.

  3. Radiative proton-deuteron capture in a gauge invariant relativistic model

    NARCIS (Netherlands)

    Korchin, AY; Van Neck, D; Scholten, O; Waroquier, M

    A relativistic model is developed for the description of the process p + d → He-3 + gamma*. It is based on the impulse approximation, but is explicitly gauge invariant and Lorentz covariant. The model is applied to radiative proton-deuteron capture and electrodisintegration of He-3 at intermediate

  4. SUPRA - Enhanced upset recovery simulation

    NARCIS (Netherlands)

    Groen, E.; Ledegang, W.; Field, J.; Smaili, H.; Roza, M.; Fucke, L.; Nooij, S.; Goman, M.; Mayrhofer, M.; Zaichik, L.E.; Grigoryev, M.; Biryukov, V.

    2012-01-01

    The SUPRA research project - Simulation of Upset Recovery in Aviation - has been funded by the European Union 7th Framework Program to enhance the flight simulation envelope for upset recovery simulation. Within the project an extended aerodynamic model, capturing the key aerodynamics during and

  5. Selecting a dynamic simulation modeling method for health care delivery research-part 2: report of the ISPOR Dynamic Simulation Modeling Emerging Good Practices Task Force.

    Science.gov (United States)

    Marshall, Deborah A; Burgos-Liz, Lina; IJzerman, Maarten J; Crown, William; Padula, William V; Wong, Peter K; Pasupathy, Kalyan S; Higashi, Mitchell K; Osgood, Nathaniel D

    2015-03-01

    In a previous report, the ISPOR Task Force on Dynamic Simulation Modeling Applications in Health Care Delivery Research Emerging Good Practices introduced the fundamentals of dynamic simulation modeling and identified the types of health care delivery problems for which dynamic simulation modeling can be used more effectively than other modeling methods. The hierarchical relationship between the health care delivery system, providers, patients, and other stakeholders exhibits a level of complexity that ought to be captured using dynamic simulation modeling methods. As a tool to help researchers decide whether dynamic simulation modeling is an appropriate method for modeling the effects of an intervention on a health care system, we presented the System, Interactions, Multilevel, Understanding, Loops, Agents, Time, Emergence (SIMULATE) checklist consisting of eight elements. This report builds on the previous work, systematically comparing each of the three most commonly used dynamic simulation modeling methods-system dynamics, discrete-event simulation, and agent-based modeling. We review criteria for selecting the most suitable method depending on 1) the purpose-type of problem and research questions being investigated, 2) the object-scope of the model, and 3) the method to model the object to achieve the purpose. Finally, we provide guidance for emerging good practices for dynamic simulation modeling in the health sector, covering all aspects, from the engagement of decision makers in the model design through model maintenance and upkeep. We conclude by providing some recommendations about the application of these methods to add value to informed decision making, with an emphasis on stakeholder engagement, starting with the problem definition. Finally, we identify areas in which further methodological development will likely occur given the growing "volume, velocity and variety" and availability of "big data" to provide empirical evidence and techniques

  6. Gas permeation process for post combustion CO2 capture

    International Nuclear Information System (INIS)

    Pfister, Marc

    2017-01-01

    CO2 Capture and Storage (CCS) is a promising solution to separate CO2 from flue gas, reduce CO2 emissions to the atmosphere, and hence mitigate global warming. In CCS, one important constraint is the high additional energy requirement of the different capture processes, which is partly explained by the low CO2 fraction in the inlet flue gas and the high output targets in terms of CO2 capture and purity (≥90%). Gas permeation across a dense membrane can be used for post-combustion CO2 capture. Gas permeation in a dense membrane is governed by a mass transfer mechanism, and separation performance is characterized by each component's effective permeability and the selectivity. One of the newest and most promising types of membrane in terms of separation performance is the facilitated transport membrane. Each particular type of membrane is defined by a specific mass transfer law; the most important difference in mass transfer behavior is between the facilitated transport mechanism and the solution-diffusion mechanism, with its restrictions and limitations. Modelling of the permeation flux across a dense membrane is required to simulate a post-combustion CO2 capture process. A CO2 gas permeation separation process is composed of a two-step membrane process, one drying step and a compression unit. Simulations of the energy requirement and surface area of the different membrane modules in the global system are useful to determine the benefits of using dense membranes in a post-combustion CO2 capture technology. (author)
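
    As a back-of-the-envelope illustration of how permeance, selectivity and partial-pressure driving force translate into membrane area, consider the Python sketch below. All numbers (permeance, pressures, flue-gas flow) are hypothetical, the single-point calculation ignores flow pattern and concentration-polarisation effects, and it is not taken from the thesis summarised above.

      GPU = 3.35e-10                         # 1 gas permeation unit, mol/(m2 s Pa)

      co2_permeance = 1000.0 * GPU           # assumed CO2 permeance of the membrane
      selectivity_co2_n2 = 50.0              # assumed CO2/N2 selectivity
      n2_permeance = co2_permeance / selectivity_co2_n2

      p_feed, p_perm = 1.1e5, 0.1e5          # Pa, feed and permeate total pressures
      x_co2, x_n2 = 0.13, 0.87               # simplified coal flue-gas composition

      # Conservative single-point driving forces: permeate assumed almost pure CO2.
      flux_co2 = co2_permeance * (x_co2 * p_feed - p_perm)        # mol/(m2 s)
      flux_n2 = n2_permeance * (x_n2 * p_feed)

      flue_gas = 10_000.0                          # mol/s of flue gas (hypothetical plant)
      co2_to_capture = 0.90 * x_co2 * flue_gas     # 90 % capture target
      area = co2_to_capture / flux_co2
      local_purity = flux_co2 / (flux_co2 + flux_n2)
      print(f"CO2 flux {flux_co2:.2e} mol/(m2 s); area for 90 % capture {area:,.0f} m2")
      # The modest single-stage permeate purity is one reason a second membrane
      # step is needed, as in the two-step process described above.
      print(f"crude single-stage permeate CO2 purity: {local_purity:.2f}")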

  7. Novel Virtual User Models of Mild Cognitive Impairment for Simulating Dementia

    Directory of Open Access Journals (Sweden)

    Sofia Segkouli

    2015-01-01

    Virtual user modeling research has attempted to address critical issues of human-computer interaction (HCI) such as usability and utility through a large number of analytic, usability-oriented approaches as cognitive models in order to provide users with experiences fitting to their specific needs. However, there is demand for more specific modules embodied in cognitive architecture that will detect abnormal cognitive decline across new synthetic task environments. Also, accessibility evaluation of graphical user interfaces (GUIs) requires considerable effort for enhancing ICT products accessibility for older adults. The main aim of this study is to develop and test virtual user models (VUM) simulating mild cognitive impairment (MCI) through novel specific modules, embodied at cognitive models and defined by estimations of cognitive parameters. Well-established MCI detection tests assessed users’ cognition, elaborated their ability to perform multitasks, and monitored the performance of infotainment related tasks to provide more accurate simulation results on existing conceptual frameworks and enhanced predictive validity in interfaces’ design supported by increased tasks’ complexity to capture a more detailed profile of users’ capabilities and limitations. The final outcome is a more robust cognitive prediction model, accurately fitted to human data to be used for more reliable interfaces’ evaluation through simulation on the basis of virtual models of MCI users.

  8. Novel Virtual User Models of Mild Cognitive Impairment for Simulating Dementia

    Science.gov (United States)

    Segkouli, Sofia; Tzovaras, Dimitrios; Tsakiris, Thanos; Tsolaki, Magda; Karagiannidis, Charalampos

    2015-01-01

    Virtual user modeling research has attempted to address critical issues of human-computer interaction (HCI) such as usability and utility through a large number of analytic, usability-oriented approaches as cognitive models in order to provide users with experiences fitting to their specific needs. However, there is demand for more specific modules embodied in cognitive architecture that will detect abnormal cognitive decline across new synthetic task environments. Also, accessibility evaluation of graphical user interfaces (GUIs) requires considerable effort for enhancing ICT products accessibility for older adults. The main aim of this study is to develop and test virtual user models (VUM) simulating mild cognitive impairment (MCI) through novel specific modules, embodied at cognitive models and defined by estimations of cognitive parameters. Well-established MCI detection tests assessed users' cognition, elaborated their ability to perform multitasks, and monitored the performance of infotainment related tasks to provide more accurate simulation results on existing conceptual frameworks and enhanced predictive validity in interfaces' design supported by increased tasks' complexity to capture a more detailed profile of users' capabilities and limitations. The final outcome is a more robust cognitive prediction model, accurately fitted to human data to be used for more reliable interfaces' evaluation through simulation on the basis of virtual models of MCI users. PMID:26339282

  9. Characteristics of sub-daily precipitation extremes in observed data and regional climate model simulations

    Science.gov (United States)

    Beranová, Romana; Kyselý, Jan; Hanel, Martin

    2018-04-01

    The study compares characteristics of observed sub-daily precipitation extremes in the Czech Republic with those simulated by Hadley Centre Regional Model version 3 (HadRM3) and Rossby Centre Regional Atmospheric Model version 4 (RCA4) regional climate models (RCMs) driven by reanalyses and examines diurnal cycles of hourly precipitation and their dependence on intensity and surface temperature. The observed warm-season (May-September) maxima of short-duration (1, 2 and 3 h) amounts show one diurnal peak in the afternoon, which is simulated reasonably well by RCA4, although the peak occurs too early in the model. HadRM3 provides an unrealistic diurnal cycle with a nighttime peak and an afternoon minimum coinciding with the observed maximum for all three ensemble members, which suggests that convection is not captured realistically. Distorted relationships of the diurnal cycles of hourly precipitation to daily maximum temperature in HadRM3 further evidence that underlying physical mechanisms are misrepresented in this RCM. Goodness-of-fit tests indicate that generalised extreme value distribution is an applicable model for both observed and RCM-simulated precipitation maxima. However, the RCMs are not able to capture the range of the shape parameter estimates of distributions of short-duration precipitation maxima realistically, leading to either too many (nearly all for HadRM3) or too few (RCA4) grid boxes in which the shape parameter corresponds to a heavy tail. This means that the distributions of maxima of sub-daily amounts are distorted in the RCM-simulated data and do not match reality well. Therefore, projected changes of sub-daily precipitation extremes in climate change scenarios based on RCMs not resolving convection need to be interpreted with caution.
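
    The GEV fitting and tail-behaviour check described above can be sketched in Python with scipy; the seasonal maxima below are synthetic placeholders rather than the Czech station data. Note that scipy's shape parameter c equals minus the usual GEV shape parameter, so c < 0 corresponds to a heavy upper tail.

      from scipy import stats

      # Placeholder: 40 warm-season maxima of 1-h precipitation [mm], heavy-tailed.
      maxima = stats.genextreme.rvs(c=-0.15, loc=20.0, scale=6.0, size=40, random_state=7)

      c_hat, loc_hat, scale_hat = stats.genextreme.fit(maxima)
      print(f"shape c = {c_hat:.2f}, location = {loc_hat:.1f} mm, scale = {scale_hat:.1f} mm")

      # 20-year return level (quantile with annual exceedance probability 1/20).
      rl20 = stats.genextreme.ppf(1.0 - 1.0 / 20.0, c_hat, loc_hat, scale_hat)
      print(f"estimated 20-year 1-h rainfall: {rl20:.1f} mm")

      # A simple goodness-of-fit check in the spirit of the study (Kolmogorov-Smirnov).
      ks = stats.kstest(maxima, "genextreme", args=(c_hat, loc_hat, scale_hat))
      print(f"KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")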

  10. The Importance of Electron Captures in Core-Collapse Supernovae

    International Nuclear Information System (INIS)

    Langanke, K.; Sampaio, J.M.; Martinez-Pinedo, G.

    2004-01-01

    Nuclear physics plays an essential role in the dynamics of a type II supernova (a collapsing star). Recent advances in nuclear many-body theory now make it possible to reliably calculate the stellar weak-interaction processes involving nuclei. The most important process is electron capture on finite nuclei with mass numbers A > 55. It is found that the respective capture rates, derived from modern many-body models, differ noticeably from previous, more phenomenological estimates. This leads to significant changes in the stellar trajectory during the supernova explosion, as has been found in state-of-the-art supernova simulations. (author)

  11. Impact of electron-captures on nuclei near N = 50 on core-collapse supernovae

    Science.gov (United States)

    Titus, R.; Sullivan, C.; Zegers, R. G. T.; Brown, B. A.; Gao, B.

    2018-01-01

    The sensitivity of the late stages of stellar core collapse to electron-capture rates on nuclei is investigated, with a focus on electron-capture rates on 74 nuclei with neutron number close to 50, just above doubly magic 78Ni. It is demonstrated that variations in key characteristics of the evolution, such as the lepton fraction, electron fraction, entropy, stellar density, and in-fall velocity are about 50% due to uncertainties in the electron-capture rates on nuclei in this region, although thousands of nuclei are included in the simulations. The present electron-capture rate estimates used for the nuclei in this high-sensitivity region of the chart of isotopes are primarily based on a simple approximation, and it is shown that the estimated rates are likely too high, by an order of magnitude or more. Electron-capture rates based on Gamow-Teller strength distributions calculated in microscopic theoretical models will be required to obtain better estimates. Gamow-Teller distributions extracted from charge-exchange experiments performed at intermediate energies serve to guide the development and benchmark the models. A previously compiled weak-rate library that is used in the astrophysical simulations was updated as part of the work presented here, by adding additional rate tables for nuclei near stability for mass numbers between 60 and 110.

  12. Coupling Visualization, Simulation, and Deep Learning for Ensemble Steering of Complex Energy Models: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Potter, Kristin C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bugbee, Bruce [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnan, Venkat K [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-10-09

    We have developed a framework for the exploration, design, and planning of energy systems that combines interactive visualization with machine-learning based approximations of simulations through a general purpose dataflow API. Our system provides a visual interface allowing users to explore an ensemble of energy simulations representing a subset of the complex input parameter space, and spawn new simulations to 'fill in' input regions corresponding to new energy system scenarios. Unfortunately, many energy simulations are far too slow to provide interactive responses. To support interactive feedback, we are developing reduced-form models via machine learning techniques, which provide statistically sound estimates of the full simulations at a fraction of the computational cost and which are used as proxies for the full-form models. Fast computation and an agile dataflow enhance the engagement with energy simulations, and allow researchers to better allocate computational resources to capture informative relationships within the system and provide a low-cost method for validating and quality-checking large-scale modeling efforts.
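
    The reduced-form idea can be illustrated with a small Python sketch (assuming numpy and scikit-learn): fit a cheap regressor to an ensemble of completed runs and use it as a fast proxy when exploring new scenarios. The "simulator" below is a stand-in analytic function, not one of the energy models referred to above.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      def expensive_simulation(x):
          """Stand-in for a simulation that would take hours per run."""
          return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 1]

      rng = np.random.default_rng(3)
      X_train = rng.uniform(-1.0, 1.0, size=(200, 2))     # ensemble of completed runs
      y_train = expensive_simulation(X_train)

      surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
      surrogate.fit(X_train, y_train)

      # Interactive query: evaluate thousands of new scenarios almost instantly.
      X_query = rng.uniform(-1.0, 1.0, size=(5000, 2))
      y_proxy = surrogate.predict(X_query)
      err = np.mean(np.abs(y_proxy - expensive_simulation(X_query)))
      print("mean absolute surrogate error:", round(float(err), 3))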

  13. Interannual Tropical Rainfall Variability in General Circulation Model Simulations Associated with the Atmospheric Model Intercomparison Project.

    Science.gov (United States)

    Sperber, K. R.; Palmer, T. N.

    1996-11-01

    The interannual variability of rainfall over the Indian subcontinent, the African Sahel, and the Nordeste region of Brazil have been evaluated in 32 models for the period 1979-88 as part of the Atmospheric Model Intercomparison Project (AMIP). The interannual variations of Nordeste rainfall are the most readily captured, owing to the intimate link with Pacific and Atlantic sea surface temperatures. The precipitation variations over India and the Sahel are less well simulated. Additionally, an Indian monsoon wind shear index was calculated for each model. Evaluation of the interannual variability of a wind shear index over the summer monsoon region indicates that the models exhibit greater fidelity in capturing the large-scale dynamic fluctuations than the regional-scale rainfall variations. A rainfall/SST teleconnection quality control was used to objectively stratify model performance. Skill scores improved for those models that qualitatively simulated the observed rainfall/El Niño- Southern Oscillation SST correlation pattern. This subset of models also had a rainfall climatology that was in better agreement with observations, indicating a link between systematic model error and the ability to simulate interannual variations.A suite of six European Centre for Medium-Range Weather Forecasts (ECMWF) AMIP runs (differing only in their initial conditions) have also been examined. As observed, all-India rainfall was enhanced in 1988 relative to 1987 in each of these realizations. All-India rainfall variability during other years showed little or no predictability, possibly due to internal chaotic dynamics associated with intraseasonal monsoon fluctuations and/or unpredictable land surface process interactions. The interannual variations of Nordeste rainfall were best represented. The State University of New York at Albany/National Center for Atmospheric Research Genesis model was run in five initial condition realizations. In this model, the Nordeste rainfall

  14. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    Science.gov (United States)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
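
    In generic terms, mixed models of the kind described above combine a dissipative eddy-viscosity part with a nondissipative term built from the resolved rate-of-strain and rate-of-rotation tensors. One schematic way to write this (not necessarily the exact closure of the cited work; the eddy viscosity, the coefficient c_N and the grid scale delta are model-dependent quantities) is

      % Schematic mixed subgrid-scale model: eddy-viscosity term plus a
      % nondissipative nonlinear term; coefficients are model dependent.
      \tau_{ij}^{\mathrm{mod}}
        = -2\,\nu_e\,\bar{S}_{ij}
          + c_N\,\delta^2\bigl(\bar{S}_{ik}\bar{\Omega}_{kj} - \bar{\Omega}_{ik}\bar{S}_{kj}\bigr),
      \qquad
      \bar{S}_{ij} = \tfrac{1}{2}\bigl(\partial_j\bar{u}_i + \partial_i\bar{u}_j\bigr),
      \quad
      \bar{\Omega}_{ij} = \tfrac{1}{2}\bigl(\partial_j\bar{u}_i - \partial_i\bar{u}_j\bigr).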

  15. Predicting kinetics using musculoskeletal modeling and inertial motion capture

    NARCIS (Netherlands)

    Karatsidis, Angelos; Jung, Moonki; Schepers, H. Martin; Bellusci, Giovanni; de Zee, Mark; Veltink, Peter H.; Andersen, Michael Skipper

    2018-01-01

    Inverse dynamic analysis using musculoskeletal modeling is a powerful tool, which is utilized in a range of applications to estimate forces in ligaments, muscles, and joints, non-invasively. To date, the conventional input used in this analysis is derived from optical motion capture (OMC) and force

  16. Capturing optically important constituents and properties in a marine biogeochemical and ecosystem model

    Science.gov (United States)

    Dutkiewicz, S.; Hickman, A. E.; Jahn, O.; Gregg, W. W.; Mouw, C. B.; Follows, M. J.

    2015-07-01

    We present a numerical model of the ocean that couples a three-stream radiative transfer component with a marine biogeochemical-ecosystem component in a dynamic three-dimensional physical framework. The radiative transfer component resolves the penetration of spectral irradiance as it is absorbed and scattered within the water column. We explicitly include the effect of several optically important water constituents (different phytoplankton functional types; detrital particles; and coloured dissolved organic matter, CDOM). The model is evaluated against in situ-observed and satellite-derived products. In particular we compare to concurrently measured biogeochemical, ecosystem, and optical data along a meridional transect of the Atlantic Ocean. The simulation captures the patterns and magnitudes of these data, and estimates surface upwelling irradiance analogous to that observed by ocean colour satellite instruments. We find that incorporating the different optically important constituents explicitly and including spectral irradiance was crucial to capture the variability in the depth of the subsurface chlorophyll a (Chl a) maximum. We conduct a series of sensitivity experiments to demonstrate, globally, the relative importance of each of the water constituents, as well as the crucial feedbacks between the light field, the relative fitness of phytoplankton types, and the biogeochemistry of the ocean. CDOM has proportionally more importance at attenuating light at short wavelengths and in more productive waters, phytoplankton absorption is relatively more important at the subsurface Chl a maximum, and water molecules have the greatest contribution when concentrations of other constituents are low, such as in the oligotrophic gyres. Scattering had less effect on attenuation, but since it is important for the amount and type of upwelling irradiance, it is crucial for setting sea surface reflectance. Strikingly, sensitivity experiments in which absorption by any of the
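
    A drastically simplified, single-stream Python sketch of the constituent-based attenuation idea is given below; the absorption coefficients and concentrations are illustrative placeholders, and scattering and the model's full three-stream spectral treatment are omitted.

      import numpy as np

      wavelengths = np.array([450.0, 550.0, 650.0])     # nm (blue, green, red)
      a_water = np.array([0.015, 0.06, 0.35])           # 1/m, pure-water absorption
      a_chl_star = np.array([0.035, 0.01, 0.02])         # m2/mg, chl-specific absorption
      a_cdom_450 = 0.02                                   # 1/m, CDOM absorption at 450 nm
      s_cdom = 0.015                                      # 1/nm, CDOM spectral slope

      chl = 0.3                                           # mg/m3 chlorophyll a
      a_cdom = a_cdom_450 * np.exp(-s_cdom * (wavelengths - 450.0))
      k_total = a_water + a_chl_star * chl + a_cdom       # total absorption, 1/m

      surface_irradiance = np.array([1.0, 1.0, 1.0])      # normalised E_d just below surface
      for depth in (10.0, 50.0, 100.0):
          e_d = surface_irradiance * np.exp(-k_total * depth)
          print(f"z = {depth:5.0f} m  E_d(blue, green, red) = {np.round(e_d, 4)}")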

  17. Valve-specific, analytic-phenomenological modelling of spray dispersion in zero-dimensional simulation; Ventilspezifische, analytisch-phaenomenologische Modellierung der Sprayausbreitung fuer die nulldimensionale Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Schuerg, F.; Arndt, S. [Robert Bosch GmbH, Stuttgart (Germany); Weigand, B. [Stuttgart Univ. (Germany). Inst. fuer Thermodynamik der Luft- und Raumfahrt

    2007-07-01

    Spray-guided combustion processes for gasoline direct injection offer a great fuel saving potential. The quality of mixture formation has direct impact on combustion and emissions and ultimately on the technical feasibility of the consumption advantage. Therefore, it is very important to select the optimal mixture formation strategy. A systematic optimization of the mixture formation process based on experiments or three-dimensional computational fluid dynamics requires tremendous effort. An efficient alternative is the application-oriented, zero-dimensional numerical simulation of mixture formation. With a systemic model formulation in terms of global thermodynamic and fluid mechanical balance equations, the presented simulation model considers all relevant aspects of the mixture formation process. A comparison with measurements in a pressure/temperature chamber using laser-induced exciplex fluorescence tomography revealed a very satisfactory agreement between simulation and experiment. The newly developed, analytic-phenomenological spray propagation model precisely captures the injector-specific mixture formation characteristics of an annular-orifice injector in terms of penetration and volume. Vaporization rate and mean air/fuel ratio as the key quantities of mixture formation are correctly reproduced. Thus, the simulation model is suited to numerically assess the quality and to optimize the strategy of mixture formation. (orig.)

  18. Hydrodynamical model and experimental results of a calcium looping cycle for CO2 capture

    International Nuclear Information System (INIS)

    Lisbona, Pilar; Martínez, Ana; Romeo, Luis M.

    2013-01-01

    Highlights: ► A scaled experimental cold flow model of a dual fluidized bed facility is presented. ► Two MATLAB models are developed for the single CFB and the dual CFB facility. ► Set of experiments are carried out and used to validate the mathematical model. ► Good agreement between model and experimental tests for sCFB. ► Further work required for validating dual CFB operation. -- Abstract: High temperature looping cycles involving solid circulation, such as carbonation–calcination, play an essential role among the CO2 capture technologies under development. The low cost and high availability of Ca-based sorbents together with the feasibility of integration between these capture systems and existing power plants lead to very competitive potential costs of avoided CO2, below 20 €/tonne. Optimal configurations make use of several interconnected fluidized beds. One promising configuration for Ca-based sorbents looping systems relies on the use of two circulating beds (carbonator and calciner) and two bubbling beds acting as non-mechanical valves. Fluidized beds are well characterized when operating independently since they are extensively used in industrial applications, power and chemical plants. However, the operation when two or more fluidized beds exchange solid material through non-mechanical valves is still uncertain because of the more complex pressure balance of the system. Theoretical studies based on thermo-chemical simulations and experimental studies show that minimum CO2 capture cost is attained with large solid circulation flow between reactors. The challenge is to reach the required particle circulation in a system with a complex configuration and be able to control it. Solid internal recirculation in any of these fluidized beds would provide flexibility in its control but it will also make harder the characterization of the whole system. The aim of this work is to analyse the hydrodynamics of the system and to generate a

  19. Sunspot Modeling: From Simplified Models to Radiative MHD Simulations

    Directory of Open Access Journals (Sweden)

    Rolf Schlichenmaier

    2011-09-01

    We review our current understanding of sunspots from the scales of their fine structure to their large-scale (global) structure, including the processes of their formation and decay. Recently, sunspot models have undergone a dramatic change. In the past, several aspects of sunspot structure have been addressed by static MHD models with parametrized energy transport. Models of sunspot fine structure have been relying heavily on strong assumptions about flow and field geometry (e.g., flux-tubes, "gaps", convective rolls), which were motivated in part by the observed filamentary structure of penumbrae or the necessity of explaining the substantial energy transport required to maintain the penumbral brightness. However, none of these models could self-consistently explain all aspects of penumbral structure (energy transport, filamentation, Evershed flow). In recent years, 3D radiative MHD simulations have been advanced dramatically to the point at which models of complete sunspots with sufficient resolution to capture sunspot fine structure are feasible. Here overturning convection is the central element responsible for energy transport, filamentation leading to fine structure, and the driving of strong outflows. On the larger scale these models are also in the process of addressing the subsurface structure of sunspots as well as sunspot formation. With this shift in modeling capabilities and the recent advances in high resolution observations, future research will be guided by comparing observation and theory.

  20. Application of blocking diagnosis methods to general circulation models. Part II: model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Barriopedro, D.; Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Lisbon (Portugal); Garcia-Herrera, R.; Gonzalez-Rouco, J.F. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain)

    2010-12-15

    A previously defined automatic method is applied to reanalysis and present-day (1950-1989) forced simulations of the ECHO-G model in order to assess its performance in reproducing atmospheric blocking in the Northern Hemisphere. Unlike previous methodologies, critical parameters and thresholds to estimate blocking occurrence in the model are not calibrated with an observed reference, but objectively derived from the simulated climatology. The choice of model-dependent parameters allows for an objective definition of blocking and corrects for some intrinsic model bias, the difference between model and observed thresholds providing a measure of systematic errors in the model. The model reasonably captures the main blocking features (location, amplitude, annual cycle and persistence) found in observations, but reveals a relative southward shift of Eurasian blocks and an overall underestimation of blocking activity, especially over the Euro-Atlantic sector. Blocking underestimation mostly arises from the model's inability to generate long persistent blocks with the observed frequency. This error is mainly attributed to a bias in the basic state. The bias pattern consists of excessive zonal winds over the Euro-Atlantic sector and a southward shift at the exit zone of the jet stream extending into the Eurasian continent, which are more prominent in the cold and warm seasons and account for much of the Euro-Atlantic and Eurasian blocking errors, respectively. It is shown that other widely used blocking indices or empirical observational thresholds may not give a proper account of the lack of realism in the model as compared with the proposed method. This suggests that in addition to blocking changes that could be ascribed to natural variability processes or climate change signals in the simulated climate, attention should be paid to significant departures in the diagnosis of phenomena that can also arise from an inappropriate adaptation of detection methods to the climate of the

  1. Modeling of fatigue crack induced nonlinear ultrasonics using a highly parallelized explicit local interaction simulation approach

    Science.gov (United States)

    Shen, Yanfeng; Cesnik, Carlos E. S.

    2016-04-01

    This paper presents a parallelized modeling technique for the efficient simulation of nonlinear ultrasonics introduced by the wave interaction with fatigue cracks. The elastodynamic wave equations with contact effects are formulated using an explicit Local Interaction Simulation Approach (LISA). The LISA formulation is extended to capture the contact-impact phenomena during the wave-damage interaction based on the penalty method. A Coulomb friction model is integrated into the computation procedure to capture the stick-slip contact shear motion. The LISA procedure is coded using the Compute Unified Device Architecture (CUDA), which enables highly parallelized supercomputing on powerful graphics cards. Both the explicit contact formulation and the parallel feature give LISA superior computational efficiency over the conventional finite element method (FEM). The theoretical formulation based on the penalty method is introduced and a guideline for the proper choice of the contact stiffness is given. The convergence behavior of the solution under various contact stiffness values is examined. A numerical benchmark problem is used to investigate the new LISA formulation and the results are compared with a conventional contact finite element solution. Various nonlinear ultrasonic phenomena are successfully captured using this contact LISA formulation, including the generation of nonlinear higher harmonic responses. Nonlinear mode conversion of guided waves at fatigue cracks is also studied.
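
    The penalty treatment of crack-face contact described above follows a simple rule: when the two crack faces interpenetrate, a restoring normal force proportional to the penetration is applied, and the tangential (shear) force is limited by a Coulomb friction cone. The sketch below illustrates this stick-slip logic for a single node pair; the stiffnesses kn, kt and the friction coefficient mu are illustrative placeholders, not the paper's calibrated values.

```python
import numpy as np

def penalty_contact_force(gap, v_tangential, kn, kt, mu):
    """Normal penalty force plus Coulomb stick-slip shear force for one node pair.

    gap          : signed normal gap between crack faces (negative = penetration)
    v_tangential : relative tangential velocity across the crack faces
    kn, kt       : normal and tangential penalty stiffnesses (illustrative values)
    mu           : Coulomb friction coefficient
    """
    if gap >= 0.0:
        return 0.0, 0.0                         # faces separated: no contact force
    fn = -kn * gap                              # normal force pushes the faces apart
    ft_trial = -kt * v_tangential               # trial (stick) shear force
    ft_limit = mu * fn                          # Coulomb limit on the shear force
    ft = float(np.clip(ft_trial, -ft_limit, ft_limit))  # slip once the limit is reached
    return fn, ft
```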

  2. Stormwater Tank Performance: Design and Management Criteria for Capture Tanks Using a Continuous Simulation and a Semi-Probabilistic Analytical Approach

    Directory of Open Access Journals (Sweden)

    Flavio De Martino

    2013-10-01

    Full Text Available Stormwater tank performance significantly depends on management practices. This paper proposes a procedure to assess tank efficiency in terms of volume and pollutant concentration using four different capture tank management protocols. The comparison of the efficiency results reveals that, as expected, a combined bypass-stormwater tank system achieves better results than a tank alone. The management practices tested for the tank-only systems provide notably different efficiency results. The practice of emptying the tank immediately after the end of each event exhibits significant levels of efficiency and operational advantages. All other configurations exhibit either significant operational problems or very low performance. The continuous simulation and the semi-probabilistic approach are compared for the best tank management practice. The semi-probabilistic approach is based on a Weibull probabilistic model of the main characteristics of the rainfall process. Following this approach, efficiency indexes were established. The comparison with continuous simulations shows the reliability of the probabilistic approach, even though the latter is certainly very site-sensitive.
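
    The semi-probabilistic idea above can be illustrated with a short Monte Carlo check: describe event runoff volumes with a Weibull model and evaluate how much of each event a tank of a given volume retains under the immediate-emptying practice. The sketch below is illustrative only; the Weibull shape/scale parameters and the tank volume are placeholders, not values calibrated to any site.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder Weibull parameters for event runoff volume [m^3] and a tank volume [m^3].
shape, scale = 0.8, 120.0
tank_volume = 200.0

# Synthetic event volumes; the tank is assumed empty at the start of each event
# (immediate emptying after the previous event).
events = scale * rng.weibull(shape, size=100_000)
captured = np.minimum(events, tank_volume)

volumetric_efficiency = captured.sum() / events.sum()
print(f"fraction of runoff volume captured: {volumetric_efficiency:.2f}")
```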

  3. A Model-Based Approach for Bridging Virtual and Physical Sensor Nodes in a Hybrid Simulation Framework

    Directory of Open Access Journals (Sweden)

    Mohammad Mozumdar

    2014-06-01

    Full Text Available The Model Based Design (MBD) approach is a popular trend to speed up application development of embedded systems, which uses high-level abstractions to capture functional requirements in an executable manner, and which automates implementation code generation. Wireless Sensor Networks (WSNs) are an emerging and very promising application area for embedded systems. However, there is a lack of tools in this area which would allow an application developer to model a WSN application by using high-level abstractions, simulate it mapped to a multi-node scenario for functional analysis, and finally use the refined model to automatically generate code for different WSN platforms. Motivated by this idea, in this paper we present a hybrid simulation framework that not only follows the MBD approach for WSN application development, but also interconnects a simulated sub-network with a physical sub-network and then allows one to co-simulate them, which is also known as Hardware-In-the-Loop (HIL) simulation.

  4. A New Signal Model for Axion Cavity Searches from N-body Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Lentz, Erik W.; Rosenberg, Leslie J. [Physics Department, University of Washington, Seattle, WA 98195-1580 (United States); Quinn, Thomas R.; Tremmel, Michael J., E-mail: lentze@phys.washington.edu, E-mail: ljrosenberg@phys.washington.edu, E-mail: trq@astro.washington.edu, E-mail: mjt29@astro.washington.edu [Astronomy Department, University of Washington, Seattle, WA 98195-1580 (United States)

    2017-08-20

    Signal estimates for direct axion dark matter (DM) searches have used the isothermal sphere halo model for the last several decades. While insightful, the isothermal model does not capture effects from a halo's infall history or the influence of baryonic matter, which has been shown to significantly influence a halo's inner structure. The high resolution of cavity axion detectors can make use of modern cosmological structure-formation simulations, which begin from realistic initial conditions, incorporate a wide range of baryonic physics, and are capable of resolving detailed structure. This work uses a state-of-the-art cosmological N-body + smoothed-particle hydrodynamics simulation to develop an improved signal model for axion cavity searches. Signal shapes from a class of galaxies encompassing the Milky Way are found to depart significantly from the isothermal sphere. A new signal model for axion detectors is proposed and projected sensitivity bounds on the Axion DM eXperiment (ADMX) data are presented.

  5. Performance Evaluation of UML2-Modeled Embedded Streaming Applications with System-Level Simulation

    Directory of Open Access Journals (Sweden)

    Arpinen Tero

    2009-01-01

    Full Text Available This article presents an efficient method to capture an abstract performance model of streaming-data real-time embedded systems (RTESs). Unified Modeling Language version 2 (UML2) is used for the performance modeling and as a front-end for a tool framework that enables simulation-based performance evaluation and design-space exploration. The adopted application meta-model in UML resembles the Kahn Process Network (KPN) model and is targeted at simulation-based performance evaluation. The application workload modeling is done using UML2 activity diagrams, and the platform is described with structural UML2 diagrams and model elements. These concepts are defined using a subset of the profile for Modeling and Analysis of Real-time and Embedded (MARTE) systems from OMG and custom stereotype extensions. The goal of the performance modeling and simulation is to achieve early estimates of task response times and of processing element, memory, and on-chip network utilizations, among other information that is used for design-space exploration. As a case study, a video codec application on multiple processors is modeled, evaluated, and explored. In comparison to related work, this is the first proposal that defines a transformation between UML activity diagrams and streaming-data application workload meta-models and successfully adopts it for RTES performance evaluation.

  6. A new synoptic scale resolving global climate simulation using the Community Earth System Model

    Science.gov (United States)

    Small, R. Justin; Bacmeister, Julio; Bailey, David; Baker, Allison; Bishop, Stuart; Bryan, Frank; Caron, Julie; Dennis, John; Gent, Peter; Hsu, Hsiao-ming; Jochum, Markus; Lawrence, David; Muñoz, Ernesto; diNezio, Pedro; Scheitlin, Tim; Tomas, Robert; Tribbia, Joseph; Tseng, Yu-heng; Vertenstein, Mariana

    2014-12-01

    High-resolution global climate modeling holds the promise of capturing planetary-scale climate modes and small-scale (regional and sometimes extreme) features simultaneously, including their mutual interaction. This paper discusses a new state-of-the-art high-resolution Community Earth System Model (CESM) simulation that was performed with these goals in mind. The atmospheric component was at 0.25° grid spacing, and the ocean component at 0.1°. One hundred years of "present-day" simulation were completed. Major results were that annual mean sea surface temperature (SST) in the equatorial Pacific and El Niño-Southern Oscillation variability were well simulated compared to standard-resolution models. Tropical and southern Atlantic SST also had much reduced bias compared to previous versions of the model. In addition, the high resolution of the model enabled small-scale features of the climate system to be represented, such as air-sea interaction over ocean frontal zones, mesoscale systems generated by the Rockies, and tropical cyclones. Associated single-component runs and standard-resolution coupled runs are used to help attribute the strengths and weaknesses of the fully coupled run. The high-resolution run employed 23,404 cores, cost 250 thousand processor-hours per simulated year, and achieved about two simulated years per day on the NCAR-Wyoming supercomputer "Yellowstone."

  7. From blackbirds to black holes: Investigating capture-recapture methods for time domain astronomy

    Science.gov (United States)

    Laycock, Silas G. T.

    2017-07-01

    In time domain astronomy, recurrent transients present a special problem: how to infer total populations from limited observations. Monitoring observations may give a biased view of the underlying population due to limitations on observing time, visibility and instrumental sensitivity. A similar problem exists in the life sciences, where animal populations (such as migratory birds) or disease prevalence must be estimated from sparse and incomplete data. The class of methods termed Capture-Recapture is used to reconstruct population estimates from time-series records of encounters with the study population. This paper investigates the performance of Capture-Recapture methods in astronomy via a series of numerical simulations. The Blackbirds code simulates monitoring of populations of transients, in this case accreting binary stars (neutron star or black hole accreting from a stellar companion), under a range of observing strategies. We first generate realistic light-curves for populations of binaries with contrasting orbital period distributions. These models are then randomly sampled at observing cadences typical of existing and planned monitoring surveys. The classical capture-recapture methods, the Lincoln-Petersen and Schnabel estimators, related techniques, and newer methods implemented in the Rcapture package are compared. A general exponential model based on the radioactive decay law is introduced, which is demonstrated to recover (at 95% confidence) the underlying population abundance and duty cycle in a fraction of the observing visits (10-50%) required to discover all the sources in the simulation. Capture-Recapture is a promising addition to the toolbox of time domain astronomy, and methods implemented in R by the biostats community can be readily called from within python.
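
    The two-sample Lincoln-Petersen estimator mentioned above is simple enough to state directly: if n1 sources are detected in a first observing epoch, n2 in a second, and m2 in both, the population is estimated as n1*n2/m2. A minimal Python sketch, with hypothetical detection counts (not figures from the paper):

```python
def lincoln_petersen(n1, n2, m2):
    """Two-sample abundance estimate from detection/re-detection counts."""
    if m2 == 0:
        raise ValueError("no re-detections: the estimate is undefined")
    return n1 * n2 / m2


def chapman(n1, n2, m2):
    """Chapman's bias-corrected variant, defined even when m2 == 0."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1


# Hypothetical monitoring example: 40 transients detected in the first epoch,
# 35 in the second, 14 recovered in both.
print(lincoln_petersen(40, 35, 14))  # 100.0 sources inferred
print(chapman(40, 35, 14))           # 97.4, slightly lower after bias correction
```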

  8. An Improved Scale-Adaptive Simulation Model for Massively Separated Flows

    Directory of Open Access Journals (Sweden)

    Yue Liu

    2018-01-01

    Full Text Available A new hybrid modelling method termed improved scale-adaptive simulation (ISAS) is proposed by introducing the von Karman operator into the dissipation term of the turbulence scale equation; its derivation and the calibration of its constants are presented, and the typical circular cylinder flow at Re = 3900 is selected for validation. As expected, the proposed ISAS approach with the scale-adaptive concept is more efficient than the original SAS method in obtaining a convergent resolution, while remaining comparable with DES in visually capturing the fine-scale unsteadiness. Furthermore, the grid-sensitivity issue of DES is encouragingly remedied thanks to the locally adjusted limiter. The ISAS simulation represents well the development of the shear layers and the flow profiles of the recirculation region, and thus the statistical quantities of interest, such as the recirculation length and drag coefficient, are closer to the available measurements than the DES and SAS outputs. In general, the new modelling method, combining the features of the DES and SAS concepts, is capable of simulating turbulent structures down to the grid limit in a simple and effective way, which is practically valuable for engineering flows.

  9. Semiclassical model of atomic collisions: stopping and capture of the heavy charged particles and exotic atom formation

    International Nuclear Information System (INIS)

    Beck, W.A.

    2000-01-01

    A semiclassical model of atomic collisions is presented, with emphasis on the region of maximum stopping, where the proton collides at a velocity of the order of the Bohr velocity. The model provides a many-body treatment of the interaction with the electrons of the target, which allows it to be applied with a high degree of confidence to a well-defined experimental problem such as antiproton capture on helium. The semiclassical collision model and the stopping power are considered. The stopping and capture of negatively charged particles are investigated. The capture and angular momenta of antiprotons, captured at the end of the collision cascade, are presented [ru]

  10. A new capture fraction method to map how pumpage affects surface water flow

    Science.gov (United States)

    Leake, S.A.; Reeves, H.W.; Dickinson, J.E.

    2010-01-01

    All groundwater pumped is balanced by removal of water somewhere, initially from storage in the aquifer and later from capture in the form of increases in recharge and decreases in discharge. Capture that results in a loss of water in streams, rivers, and wetlands is now a concern in many parts of the United States. Hydrologists commonly use analytical and numerical approaches to study temporal variations in sources of water to wells for select points of interest. Much can be learned about coupled surface-water/groundwater systems, however, by looking at the spatial distribution of theoretical capture for select times of interest. Development of maps of capture requires (1) a reasonably well-constructed transient or steady-state model of an aquifer with head-dependent flow boundaries representing surface-water features or evapotranspiration and (2) an automated procedure to run the model repeatedly and extract results, each time with a well in a different location. This paper presents new methods for simulating and mapping capture using three-dimensional groundwater flow models and presents examples from Arizona, Oregon, and Michigan. Journal compilation © 2010 National Ground Water Association. No claim to original US government works.
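
    The automated procedure in item (2) amounts to an outer loop over candidate well cells: run the flow model once with an added well in that cell, difference the head-dependent boundary flows against a no-pumping baseline, and express the change as a fraction of the pumping rate. The sketch below shows only that loop; run_model and extract_boundary_flow are hypothetical stand-ins for the interface of an actual groundwater simulator, not functions from any real package.

```python
def capture_fraction_map(grid_cells, pumping_rate, run_model, extract_boundary_flow):
    """Map the fraction of pumpage captured from head-dependent boundaries.

    grid_cells            : iterable of (row, col) candidate well locations
    pumping_rate          : constant withdrawal applied at each candidate cell
    run_model             : hypothetical callable(well_cell, rate) -> model results
    extract_boundary_flow : hypothetical callable(results) -> net discharge to streams,
                            rivers, wetlands and evapotranspiration
    """
    baseline = extract_boundary_flow(run_model(None, 0.0))   # no added well
    capture = {}
    for cell in grid_cells:
        results = run_model(cell, pumping_rate)
        # Capture = reduction in net discharge at head-dependent boundaries,
        # expressed as a fraction of the pumping rate at this cell.
        capture[cell] = (baseline - extract_boundary_flow(results)) / pumping_rate
    return capture
```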

  11. Forecasting Lightning Threat using Cloud-resolving Model Simulations

    Science.gov (United States)

    McCaul, E. W., Jr.; Goodman, S. J.; LaCasse, K. M.; Cecil, D. J.

    2009-01-01

    As numerical forecasts capable of resolving individual convective clouds become more common, it is of interest to see if quantitative forecasts of lightning flash rate density are possible, based on fields computed by the numerical model. Previous observational research has shown robust relationships between observed lightning flash rates and inferred updraft and large precipitation ice fields in the mixed phase regions of storms, and that these relationships might allow simulated fields to serve as proxies for lightning flash rate density. It is shown in this paper that two simple proxy fields do indeed provide reasonable and cost-effective bases for creating time-evolving maps of predicted lightning flash rate density, judging from a series of diverse simulation case study events in North Alabama for which Lightning Mapping Array data provide ground truth. One method is based on the product of upward velocity and the mixing ratio of precipitating ice hydrometeors, modeled as graupel only, in the mixed phase region of storms at the -15 °C level, while the second method is based on the vertically integrated amounts of ice hydrometeors in each model grid column. Each method can be calibrated by comparing domainwide statistics of the peak values of simulated flash rate proxy fields against domainwide peak total lightning flash rate density data from observations. Tests show that the first method is able to capture much of the temporal variability of the lightning threat, while the second method does a better job of depicting the areal coverage of the threat. A blended solution is designed to retain most of the temporal sensitivity of the first method, while adding the improved spatial coverage of the second. Weather Research and Forecast Model simulations of selected North Alabama cases show that this model can distinguish the general character and intensity of most convective events, and that the proposed methods show promise as a means of generating
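
    Both proxy fields are straightforward to evaluate from gridded model output; a minimal NumPy sketch follows. The array names, the index of the -15 °C level, and the calibration routine are placeholder assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def lightning_proxies(w, q_graupel, q_ice, rho, dz, k15):
    """Two flash-rate proxy fields from model output.

    w         : vertical velocity, shape (nz, ny, nx)      [m/s]
    q_graupel : graupel mixing ratio, shape (nz, ny, nx)   [kg/kg]
    q_ice     : total precipitating-ice mixing ratio, shape (nz, ny, nx)
    rho       : air density, shape (nz, ny, nx)            [kg/m^3]
    dz        : layer thicknesses, shape (nz,)             [m]
    k15       : vertical index of the -15 degC level (assumed known)
    """
    proxy1 = w[k15] * q_graupel[k15]                           # updraft x graupel at -15 degC
    proxy2 = np.sum(rho * q_ice * dz[:, None, None], axis=0)   # column-integrated ice
    return proxy1, proxy2


def calibrate(proxy, observed_peak_flash_rate):
    """Scale a proxy so its domain-wide peak matches the observed peak flash rate."""
    return proxy * (observed_peak_flash_rate / proxy.max())
```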

  12. Analysis of MCNP simulated gamma spectra of CdTe detectors for boron neutron capture therapy.

    Science.gov (United States)

    Winkler, Alexander; Koivunoro, Hanna; Savolainen, Sauli

    2017-06-01

    The next step in boron neutron capture therapy (BNCT) is the real-time imaging of the boron concentration in healthy and tumor tissue. Monte Carlo simulations are employed to predict the detector response required to realize single-photon emission computed tomography in BNCT, but have so far failed to reproduce the data measured with cadmium telluride detectors. In this study we have tested the gamma production cross-section data tables of commonly used libraries in the Monte Carlo code MCNP against measurements. The cross-section data table TENDL-2008-ACE reproduces the measured data best, whilst the commonly used ENDL92 and the other studied libraries do not include correct tables for the gamma production from the cadmium neutron capture reaction that occurs inside the detector. Furthermore, we have discussed the size of the annihilation peaks in spectra obtained with cadmium telluride and germanium detectors. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Simulating carbon exchange using a regional atmospheric model coupled to an advanced land-surface model

    International Nuclear Information System (INIS)

    Ter Maat, H.W.; Hutjes, R.W.A.; Miglietta, F.; Gioli, B.; Bosveld, F.C.; Vermeulen, A.T.; Fritsch, H.

    2010-08-01

    This paper is a case study investigating the main controlling factors that determine the atmospheric carbon dioxide content for a region in the centre of The Netherlands. We use the Regional Atmospheric Modelling System (RAMS), coupled with a land surface scheme simulating carbon, heat and momentum fluxes (SWAPS-C), and including also submodels for urban and marine fluxes, which in principle should include the dominant mechanisms and should be able to capture the relevant dynamics of the system. To validate the model, observations taken during an intensive observational campaign in the central Netherlands in summer 2002 are used. These include flux-tower observations and aircraft observations of vertical profiles and spatial fluxes of various variables.

  14. A heterogeneous lattice gas model for simulating pedestrian evacuation

    Science.gov (United States)

    Guo, Xiwei; Chen, Jianqiao; Zheng, Yaochen; Wei, Junhong

    2012-02-01

    Based on the cellular automata method (CA model) and the mobile lattice gas model (MLG model), we have developed a heterogeneous lattice gas model for simulating pedestrian evacuation processes in an emergency. A local population density concept is introduced first. The update rule in the new model depends on the local population density and the exit crowded degree factor. The drift D, which is one of the key parameters influencing the evacuation process, is allowed to change according to the local population density of the pedestrians. Interactions including attraction, repulsion, and friction between every two pedestrians and those between a pedestrian and the building wall are described by a nonlinear function of the corresponding distance, and the repulsion forces increase sharply as the distances get small. A critical force of injury is introduced into the model, and its effects on the evacuation process are investigated. The model proposed has heterogeneous features as compared to the MLG model or the basic CA model. Numerical examples show that the model proposed can capture the basic features of pedestrian evacuation, such as clogging and arching phenomena.
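
    The density-dependent update rule sketched in this abstract can be illustrated schematically: a pedestrian chooses among neighbouring cells with weights that favour cells closer to the exit, with the drift strength D reduced where the local population density is high and occupied cells excluded. The weighting form and the constants below are illustrative assumptions, not the paper's exact expressions.

```python
import numpy as np

def move_weights(dist_to_exit, local_density, occupied, D0=3.0):
    """Transition weights over the neighbour cells of one pedestrian.

    dist_to_exit  : distance from each neighbour cell to the exit, shape (4,)
    local_density : local population density around the pedestrian, in [0, 1]
    occupied      : boolean array, True where a neighbour cell is already taken
    D0            : baseline drift strength (illustrative)
    """
    D = D0 * (1.0 - local_density)            # drift weakens in crowded regions
    w = np.exp(-D * np.asarray(dist_to_exit, dtype=float))  # prefer cells nearer the exit
    w[np.asarray(occupied)] = 0.0             # cannot move into occupied cells
    s = w.sum()
    return w / s if s > 0 else w              # normalised move probabilities (or stay put)


# Example: four neighbour cells at distances 3, 4, 5, 4 from the exit, one occupied.
print(move_weights([3.0, 4.0, 5.0, 4.0], local_density=0.6,
                   occupied=[False, True, False, False]))
```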

  15. Simulation-Based Dynamic Passenger Flow Assignment Modelling for a Schedule-Based Transit Network

    Directory of Open Access Journals (Sweden)

    Xiangming Yao

    2017-01-01

    Full Text Available The online operation management and the offline policy evaluation of complex transit networks require an effective dynamic traffic assignment (DTA) method that can capture the temporal-spatial nature of traffic flows. The objective of this work is to propose a simulation-based dynamic passenger assignment framework and models for such applications in the context of schedule-based rail transit systems. In the simulation framework, travellers are regarded as individual agents who are able to obtain complete information on the current traffic conditions. A combined route selection model, integrating pre-trip route selection and en-trip route switching, is established for achieving the dynamic network flow equilibrium status. Train agents operate strictly according to the timetable, and their capacity limitations are considered. A continuous time-driven simulator based on the proposed framework and models is developed, whose performance is illustrated through a large-scale network of the Beijing subway. The results indicate that more than 0.8 million individual passengers and thousands of trains can be simulated simultaneously at a speed ten times faster than real time. This study provides an efficient approach to analyze the dynamic demand-supply relationship for large schedule-based transit networks.

  16. Data supporting the validation of a simulation model for multi-component gas separation in polymeric membranes

    Directory of Open Access Journals (Sweden)

    Lorena Giordano

    2016-12-01

    The data were obtained using a model for simulating gas separation, described in the research article entitled “Interplay of inlet temperature and humidity on energy penalty for CO2 post-combustion capture: rigorous analysis and simulation of a single stage gas permeation process” (L. Giordano, D. Roizard, R. Bounaceur, E. Favre, 2016) [1]. The data were used to validate the model by comparison with literature results. Considering a membrane system based on feed compression only, data from the model proposed and that from literature were compared with respect to the molar composition of permeate stream, the membrane area and specific energy requirement, varying the feed pressure and the CO2 separation degree.

  17. Science-Based Simulation Model of Human Performance for Human Reliability Analysis

    International Nuclear Information System (INIS)

    Kelly, Dana L.; Boring, Ronald L.; Mosleh, Ali; Smidts, Carol

    2011-01-01

    Human reliability analysis (HRA), a component of an integrated probabilistic risk assessment (PRA), is the means by which the human contribution to risk is assessed, both qualitatively and quantitatively. However, among the literally dozens of HRA methods that have been developed, most cannot fully model and quantify the types of errors that occurred at Three Mile Island. Furthermore, all of the methods lack a solid empirical basis, relying heavily on expert judgment or empirical results derived in non-reactor domains. Finally, all of the methods are essentially static, and are thus unable to capture the dynamics of an accident in progress. The objective of this work is to begin exploring a dynamic simulation approach to HRA, one whose models have a basis in psychological theories of human performance, and whose quantitative estimates have an empirical basis. This paper highlights a plan to formalize collaboration among the Idaho National Laboratory (INL), the University of Maryland, and The Ohio State University (OSU) to continue development of a simulation model initially formulated at the University of Maryland. Initial work will focus on enhancing the underlying human performance models with the most recent psychological research, and on planning follow-on studies to establish an empirical basis for the model, based on simulator experiments to be carried out at the INL and at the OSU.

  18. Toward transformational carbon capture systems

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David C. [National Energy Technology Laboratory, U.S. Dept. of Energy, Pittsburgh PA (United States); Litynski, John T. [Office of Fossil Energy, U.S. Dept. of Energy, Washington DC (United States); Brickett, Lynn A. [National Energy Technology Laboratory, U.S. Dept. of Energy, Pittsburgh PA (United States); Morreale, Bryan D. [National Energy Technology Laboratory, U.S. Dept. of Energy, Pittsburgh PA (United States)

    2015-10-28

    This paper will briefly review the history and current state of Carbon Capture and Storage (CCS) research and development and describe the technical barriers to carbon capture. It will argue forcefully for a new approach to R&D, which leverages both simulation and physical systems at the laboratory and pilot scales to more rapidly move the best technologies forward, prune less advantageous approaches, and simultaneously develop materials and processes.

  19. A minimal unified model of disease trajectories captures hallmarks of multiple sclerosis

    KAUST Repository

    Kannan, Venkateshan

    2017-03-29

    Multiple Sclerosis (MS) is an autoimmune disease targeting the central nervous system (CNS), causing demyelination and neurodegeneration that lead to the accumulation of neurological disability. Here we present a minimal computational model involving the immune system and the CNS that generates the principal subtypes of the disease observed in patients. The model captures several key features of MS, especially those that distinguish the chronic progressive phase from the relapsing-remitting one. In addition, a rare subtype of the disease, progressive-relapsing MS, naturally emerges from the model. The model posits the existence of two key thresholds, one in the immune system and the other in the CNS, that separate dynamically distinct behaviors of the model. Exploring the two-dimensional space of these thresholds, we obtain multiple phases of disease evolution, and these show greater variation than the clinical classification of MS, thus capturing the heterogeneity that is manifested in patients.

  20. The capture rate of free-floating planets in our galaxy

    Science.gov (United States)

    Goulinski, N.; Ribak, E. N.

    2017-09-01

    We propose that planetary nebulae and supernova remnants may constitute a significant source of free-floating planets. With a large population of free-floating planets, the rate at which these planets get captured by planetary systems may be non-negligible. We predict that about one out of every 100 sub-solar stars is expected to experience the capture of a free-floating planet during its lifetime. The capture cross-section was calculated through three-body scattering simulations in vacuum conditions. Since planetary systems usually contain multiple planets, and dissipation processes were not included in the simulation, the capture rate may be higher.

  1. ARM Cloud Radar Simulator Package for Global Climate Models Value-Added Product

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yuying [North Carolina State Univ., Raleigh, NC (United States); Xie, Shaocheng [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-01

    It has been challenging to directly compare U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility ground-based cloud radar measurements with climate model output because of limitations or features of the observing processes and the spatial gap between the model and the single-point measurements. To facilitate the use of ARM radar data in numerical models, an ARM cloud radar simulator was developed to convert model data into pseudo-ARM cloud radar observations that mimic the instrument view of a narrow atmospheric column (as compared to a large global climate model [GCM] grid-cell), thus allowing meaningful comparison between model output and ARM cloud observations. The ARM cloud radar simulator value-added product (VAP) was developed based on the CloudSat simulator contained in the community satellite simulator package, the Cloud Feedback Model Intercomparison Project (CFMIP) Observation Simulator Package (COSP) (Bodas-Salcedo et al., 2011), which has been widely used in climate model evaluation with satellite data (Klein et al., 2013, Zhang et al., 2010). The essential part of the CloudSat simulator is the QuickBeam radar simulator that is used to produce CloudSat-like radar reflectivity, but is capable of simulating reflectivity for other radars (Marchand et al., 2009; Haynes et al., 2007). Adapting QuickBeam to the ARM cloud radar simulator within COSP required two primary changes: one was to set the frequency to 35 GHz for the ARM Ka-band cloud radar, as opposed to 94 GHz used for the CloudSat W-band radar, and the second was to invert the view from the ground to space so as to attenuate the beam correctly. In addition, the ARM cloud radar simulator uses a finer vertical resolution (100 m compared to 500 m for CloudSat) to resolve the more detailed structure of clouds captured by the ARM radars. The ARM simulator has been developed following the COSP workflow (Figure 1) and using the capabilities available in COSP

  2. Attention capture by abrupt onsets: re-visiting the priority tag model

    Directory of Open Access Journals (Sweden)

    Meera Mary Sunny

    2013-12-01

    Full Text Available Abrupt onsets have been shown to strongly attract attention in a stimulus-driven, bottom-up manner. However, the precise mechanism that drives capture by onsets is still debated. According to the new object account, abrupt onsets capture attention because they signal the appearance of a new object. Yantis and Johnson (1990) used a visual search task and showed that up to four onsets can be automatically prioritized. However, in their study the number of onsets co-varied with the total number of items in the display, allowing for a possible confound between these two variables. In the present study, display size was fixed at eight items while the number of onsets was systematically varied between zero and eight. Experiment 1 showed a systematic increase in reaction times with increasing number of onsets. This increase was stronger when the target was an onset than when it was a no-onset item, a result that is best explained by a model according to which only one onset is automatically prioritized. Even when the onsets were marked in red (Experiment 2), nearly half of the participants continued to prioritize only one onset item. Only when onset and no-onset targets were blocked (Experiment 3) did participants start to search selectively through the set of only the relevant target type. These results further support the finding that only one onset captures attention. Many bottom-up models of attention capture, like masking or saliency accounts, can efficiently explain this finding.

  3. Attention capture by abrupt onsets: re-visiting the priority tag model.

    Science.gov (United States)

    Sunny, Meera M; von Mühlenen, Adrian

    2013-01-01

    Abrupt onsets have been shown to strongly attract attention in a stimulus-driven, bottom-up manner. However, the precise mechanism that drives capture by onsets is still debated. According to the new object account, abrupt onsets capture attention because they signal the appearance of a new object. Yantis and Johnson (1990) used a visual search task and showed that up to four onsets can be automatically prioritized. However, in their study the number of onsets co-varied with the total number of items in the display, allowing for a possible confound between these two variables. In the present study, display size was fixed at eight items while the number of onsets was systematically varied between zero and eight. Experiment 1 showed a systematic increase in reaction times with increasing number of onsets. This increase was stronger when the target was an onset than when it was a no-onset item, a result that is best explained by a model according to which only one onset is automatically prioritized. Even when the onsets were marked in red (Experiment 2), nearly half of the participants continued to prioritize only one onset item. Only when onset and no-onset targets were blocked (Experiment 3), participants started to search selectively through the set of only the relevant target type. These results further support the finding that only one onset captures attention. Many bottom-up models of attention capture, like masking or saliency accounts, can efficiently explain this finding.

  4. Capture of free-floating planets by planetary systems

    Science.gov (United States)

    Goulinski, Nadav; Ribak, Erez N.

    2018-01-01

    Evidence of exoplanets with orbits that are misaligned with the spin of the host star may suggest that not all bound planets were born in the protoplanetary disc of their current planetary system. Observations have shown that free-floating Jupiter-mass objects can exceed the number of stars in our Galaxy, implying that capture scenarios may not be so rare. To address this issue, we construct a three-dimensional simulation of a three-body scattering between a free-floating planet and a star accompanied by a Jupiter-mass bound planet. We distinguish between three different possible scattering outcomes, where the free-floating planet may get weakly captured after the brief interaction with the binary, remain unbound or 'kick out' the bound planet and replace it. The simulation was performed for different masses of the free-floating planets and stars, as well as different impact parameters, inclination angles and approach velocities. The outcome statistics are used to construct an analytical approximation of the cross-section for capturing a free-floating planet by fitting their dependence on the tested variables. The analytically approximated cross-section is used to predict the capture rate for these kinds of objects, and to estimate that about 1 per cent of all stars are expected to experience a temporary capture of a free-floating planet during their lifetime. Finally, we propose additional physical processes that may increase the capture statistics and whose contribution should be considered in future simulations in order to determine the fate of the temporarily captured planets.

  5. A geographic information system-based 3D city estate modeling and simulation system

    Science.gov (United States)

    Chong, Xiaoli; Li, Sha

    2015-12-01

    This paper introduces a 3D city simulation system which is based on a geographic information system (GIS) and covers all commercial housing in the city. A regional-scale, GIS-based approach is used to capture, describe, and track the geographical attributes of each house in the city. A sorting algorithm of "Benchmark + Parity Rate" is developed to cluster houses with similar spatial and construction attributes. This system is applicable to digital city modeling, city planning, housing evaluation, housing monitoring, and visualizing housing transactions. Finally, taking the Jingtian area of Shenzhen as an example, each unit of the 35,997 houses in the area can be displayed, tagged, and easily tracked by the GIS-based city modeling and simulation system. The results match real market conditions well and can be provided to house buyers as a reference.

  6. CAPTURING UNCERTAINTY IN UNSATURATED-ZONE FLOW USING DIFFERENT CONCEPTUAL MODELS OF FRACTURE-MATRIX INTERACTION

    International Nuclear Information System (INIS)

    SUSAN J. ALTMAN, MICHAEL L. WILSON, GUDMUNDUR S. BODVARSSON

    1998-01-01

    Preliminary calculations show that the two different conceptual models of fracture-matrix interaction presented here yield different results pertinent to the performance of the potential repository at Yucca Mountain. Namely, each model produces different ranges of flow in the fractures, where radionuclide transport is thought to be most important. This method of using different flow models to capture both conceptual model and parameter uncertainty ensures that flow fields used in TSPA calculations will be reasonably calibrated to the available data while still capturing this uncertainty. This method also allows for the use of three-dimensional flow fields for the TSPA-VA calculations

  7. Performance estimation of a Venturi scrubber using a computational model for capturing dust particles with liquid spray

    Energy Technology Data Exchange (ETDEWEB)

    Pak, S.I. [National Fusion Research Center, 52 Eoeun-dong, Yuseong-gu, Daejeon 305-333 (Korea, Republic of)]. E-mail: paksunil@dreamwiz.com; Chang, K.S. [Department of Aerospace Engineering, KAIST, Daejeon (Korea, Republic of)]. E-mail: kschang@kaist.ac.kr

    2006-12-01

    A Venturi scrubber has dispersed three-phase flow of gas, dust, and liquid. Atomization of a liquid jet and interaction between the phases have a large effect on the performance of Venturi scrubbers. In this study, a computational model for the interactive three-phase flow in a Venturi scrubber has been developed to estimate pressure drop and collection efficiency. The Eulerian-Lagrangian method is used to solve the model numerically. Gas flow is solved using the Eulerian approach by using the Navier-Stokes equations, and the motion of dust and liquid droplets, described by the Basset-Boussinesq-Oseen (B-B-O) equation, is solved using the Lagrangian approach. This model includes interaction between gas and droplets, atomization of a liquid jet, droplet deformation, breakup and collision of droplets, and capture of dust by droplets. A circular Pease-Anthony Venturi scrubber was simulated numerically with this new model. The numerical results were compared with earlier experimental data for pressure drop and collection efficiency, and gave good agreement.

  8. Performance estimation of a Venturi scrubber using a computational model for capturing dust particles with liquid spray

    International Nuclear Information System (INIS)

    Pak, S.I.; Chang, K.S.

    2006-01-01

    A Venturi scrubber has dispersed three-phase flow of gas, dust, and liquid. Atomization of a liquid jet and interaction between the phases have a large effect on the performance of Venturi scrubbers. In this study, a computational model for the interactive three-phase flow in a Venturi scrubber has been developed to estimate pressure drop and collection efficiency. The Eulerian-Lagrangian method is used to solve the model numerically. Gas flow is solved using the Eulerian approach by using the Navier-Stokes equations, and the motion of dust and liquid droplets, described by the Basset-Boussinesq-Oseen (B-B-O) equation, is solved using the Lagrangian approach. This model includes interaction between gas and droplets, atomization of a liquid jet, droplet deformation, breakup and collision of droplets, and capture of dust by droplets. A circular Pease-Anthony Venturi scrubber was simulated numerically with this new model. The numerical results were compared with earlier experimental data for pressure drop and collection efficiency, and gave good agreement

  9. Performance estimation of a Venturi scrubber using a computational model for capturing dust particles with liquid spray.

    Science.gov (United States)

    Pak, S I; Chang, K S

    2006-12-01

    A Venturi scrubber has dispersed three-phase flow of gas, dust, and liquid. Atomization of a liquid jet and interaction between the phases have a large effect on the performance of Venturi scrubbers. In this study, a computational model for the interactive three-phase flow in a Venturi scrubber has been developed to estimate pressure drop and collection efficiency. The Eulerian-Lagrangian method is used to solve the model numerically. Gas flow is solved using the Eulerian approach by using the Navier-Stokes equations, and the motion of dust and liquid droplets, described by the Basset-Boussinesq-Oseen (B-B-O) equation, is solved using the Lagrangian approach. This model includes interaction between gas and droplets, atomization of a liquid jet, droplet deformation, breakup and collision of droplets, and capture of dust by droplets. A circular Pease-Anthony Venturi scrubber was simulated numerically with this new model. The numerical results were compared with earlier experimental data for pressure drop and collection efficiency, and gave good agreement.

  10. Integrating surrogate models into subsurface simulation framework allows computation of complex reactive transport scenarios

    Science.gov (United States)

    De Lucia, Marco; Kempka, Thomas; Jatnieks, Janis; Kühn, Michael

    2017-04-01

    Reactive transport simulations - where geochemical reactions are coupled with hydrodynamic transport of reactants - are extremely time consuming and suffer from significant numerical issues. Given the high uncertainties inherently associated with the geochemical models, which also constitute the major computational bottleneck, such computational demands may seem inappropriate and probably constitute the main limitation to their wide application. A promising way to ease and speed up such coupled simulations is to employ statistical surrogates instead of "full-physics" geochemical models [1]. Data-driven surrogates are reduced models obtained from a set of pre-calculated "full physics" simulations, capturing their principal features while being extremely fast to compute. Model reduction of course comes at the price of a precision loss; however, this appears justified in the presence of large uncertainties regarding the parametrization of geochemical processes. This contribution illustrates the integration of surrogates into the flexible simulation framework currently being developed by the authors' research group [2]. The high-level language of choice for obtaining and dealing with surrogate models is R, which profits from state-of-the-art methods for the statistical analysis of large simulation ensembles. A stand-alone advective mass transport module was furthermore developed in order to add such capability to any multiphase finite volume hydrodynamic simulator within the simulation framework. We present 2D and 3D case studies benchmarking the performance of surrogates and "full physics" chemistry in scenarios pertaining to the assessment of geological subsurface utilization. [1] Jatnieks, J., De Lucia, M., Dransch, D., Sips, M.: "Data-driven surrogate model approach for improving the performance of reactive transport simulations.", Energy Procedia 97, 2016, p. 447-453. [2] Kempka, T., Nakaten, B., De Lucia, M., Nakaten, N., Otto, C., Pohl, M., Chabab [Tillner], E., Kühn, M
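
    The data-driven surrogate idea can be illustrated with a generic regression model trained on an ensemble of pre-computed "full physics" runs and then evaluated inside the transport loop. The scikit-learn sketch below only illustrates that workflow under synthetic placeholder data; it is not the authors' R implementation, and the input/output variables are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Placeholder ensemble standing in for pre-calculated full-physics geochemistry:
# inputs could be cell concentrations and temperature, outputs the reacted concentrations.
X_train = rng.uniform(size=(5000, 4))
y_train = np.c_[X_train[:, 0] * np.exp(-X_train[:, 3]),      # synthetic stand-in
                X_train[:, 1] + 0.1 * X_train[:, 2]]          # for solver outputs

surrogate = RandomForestRegressor(n_estimators=200, n_jobs=-1)
surrogate.fit(X_train, y_train)

# Inside a coupled transport loop, the surrogate replaces the expensive geochemistry
# call for every grid cell and time step.
cell_states = rng.uniform(size=(100_000, 4))
reacted = surrogate.predict(cell_states)
print(reacted.shape)   # (100000, 2)
```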

  11. Lower extremity finite element model for crash simulation

    Energy Technology Data Exchange (ETDEWEB)

    Schauer, D.A.; Perfect, S.A.

    1996-03-01

    A lower extremity model has been developed to study occupant injury mechanisms of the major bones and ligamentous soft tissues resulting from vehicle collisions. The model is based on anatomically correct digitized bone surfaces of the pelvis, femur, patella and the tibia. Many muscles, tendons and ligaments were incrementally added to the basic bone model. We have simulated two types of occupant loading that occur in a crash environment using a non-linear large deformation finite element code. The modeling approach assumed that the leg was passive during its response to the excitation, that is, no active muscular contraction and therefore no active change in limb stiffness. The approach recognized that the most important contributions of the muscles to the lower extremity response are their ability to define and modify the impedance of the limb. When nonlinear material behavior in a component of the leg model was deemed important to response, a nonlinear constitutive model was incorporated. The accuracy of these assumptions can be verified only through a review of analysis results and careful comparison with test data. As currently defined, the model meets the objective for which it was created. Much work remains to be done, both from modeling and analysis perspectives, before the model can be considered complete. The model implements a modeling philosophy that can accurately capture both kinematic and kinetic response of the lower limb. We have demonstrated that the lower extremity model is a valuable tool for understanding the injury processes and mechanisms. We are now in a position to extend the computer simulation to investigate the clinical fracture patterns observed in actual crashes. Additional experience with this model will enable us to make a statement on what measures are needed to significantly reduce lower extremity injuries in vehicle crashes. 6 refs.

  12. Nonlocal Peridynamic Modeling and Simulation on Crack Propagation in Concrete Structures

    Directory of Open Access Journals (Sweden)

    Dan Huang

    2015-01-01

    Full Text Available An extended peridynamic approach for crack propagation analysis in concrete structures was proposed. In the peridynamic constitutive model, the concrete material was described as a series of interacting particles, and the short-range repulsive force and anisotropic behavior of concrete were taken into account in the expression of the interactive bonding force, which was given in terms of the classical elastic constants and the peridynamic horizon. The damage of the material was defined locally at the level of pairwise bonds, and the critical stretch of a material bond was described as a function of the fracture strength in classical concrete failure theory. The efficiency and accuracy of the proposed model and algorithms were validated by simulating the propagation of mode I and mixed-mode I-II cracks in concrete slabs. Furthermore, crack propagation in a double-edge notched concrete beam subjected to four-point loading was simulated, in which the experimental observations are captured naturally as a consequence of the solution.
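
    The bond-level damage rule described here (a pairwise bond carries force until its stretch exceeds a critical value, after which it is removed permanently) can be sketched compactly for a bond-based model. The micromodulus c and critical stretch s0 below are illustrative placeholders, and the paper's anisotropic, short-range-repulsion terms are omitted for brevity.

```python
import numpy as np

def bond_forces_and_damage(x, u, bonds, c, s0, intact):
    """One evaluation of pairwise bond forces with critical-stretch damage.

    x      : reference particle positions, shape (n, 2)
    u      : current displacements, shape (n, 2)
    bonds  : integer pairs of interacting particles, shape (m, 2)
    c      : micromodulus (bond stiffness), illustrative constant
    s0     : critical stretch beyond which a bond breaks
    intact : boolean array, shape (m,), carrying the damage state between calls
    """
    i, j = bonds[:, 0], bonds[:, 1]
    xi = x[j] - x[i]                        # reference bond vectors
    eta = (x[j] + u[j]) - (x[i] + u[i])     # deformed bond vectors
    L0 = np.linalg.norm(xi, axis=1)
    L = np.linalg.norm(eta, axis=1)
    stretch = (L - L0) / L0

    intact &= stretch < s0                  # bonds beyond the critical stretch break
    fmag = np.where(intact, c * stretch, 0.0)
    fdir = eta / L[:, None]

    f = np.zeros_like(u)
    np.add.at(f, i,  fmag[:, None] * fdir)  # equal and opposite pairwise forces
    np.add.at(f, j, -fmag[:, None] * fdir)
    # Local damage at a particle can be reported as the fraction of its broken bonds.
    return f, intact
```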

  13. Model-Independent Calculation of Radiative Neutron Capture on Lithium-7

    NARCIS (Netherlands)

    Rupak, Gautam; Higa, Renato

    2011-01-01

    The radiative neutron capture on lithium-7 is calculated model independently using a low-energy halo effective field theory. The cross section is expressed in terms of scattering parameters directly related to the S-matrix elements. It depends on the poorly known p-wave effective range parameter

  14. Model structure of the stream salmonid simulator (S3)—A dynamic model for simulating growth, movement, and survival of juvenile salmonids

    Science.gov (United States)

    Perry, Russell W.; Plumb, John M.; Jones, Edward C.; Som, Nicholas A.; Hetrick, Nicholas J.; Hardy, Thomas B.

    2018-04-06

    Fisheries and water managers often use population models to aid in understanding the effect of alternative water management or restoration actions on anadromous fish populations. We developed the Stream Salmonid Simulator (S3) to help resource managers evaluate the effect of management alternatives on juvenile salmonid populations. S3 is a deterministic stage-structured population model that tracks daily growth, movement, and survival of juvenile salmon. A key theme of the model is that river flow affects habitat availability and capacity, which in turn drives density-dependent population dynamics. To explicitly link population dynamics to habitat quality and quantity, the river environment is constructed as a one-dimensional series of linked habitat units, each of which has an associated daily time series of discharge, water temperature, and usable habitat area or carrying capacity. The physical characteristics of each habitat unit and the number of fish occupying each unit, in turn, drive survival and growth within each habitat unit and movement of fish among habitat units. The purpose of this report is to outline the underlying general structure of the S3 model that is common among different applications of the model. We have developed applications of the S3 model for juvenile fall Chinook salmon (Oncorhynchus tshawytscha) in the lower Klamath River. Thus, this report is a companion to the current application of the S3 model to the Trinity River (in review). The general S3 model structure provides a biological and physical framework for the salmonid freshwater life cycle. This framework captures important demographics of juvenile salmonids aimed at translating management alternatives into simulated population responses. Although the S3 model is built on this common framework, the model has been constructed to allow much flexibility in application of the model to specific river systems. The ability for practitioners to include system-specific information for the
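
    The central density-dependence described here (flow sets a unit's capacity, which limits survival as abundance approaches it) can be illustrated with a toy daily update for a chain of habitat units. The Beverton-Holt-style survival term, the movement rule and the parameters are illustrative assumptions only, not the S3 model's actual functional forms.

```python
import numpy as np

def daily_update(abundance, capacity, base_survival, move_fraction):
    """One day of density-dependent survival and downstream movement.

    abundance     : fish per habitat unit, shape (n_units,)
    capacity      : flow-dependent carrying capacity per unit, shape (n_units,)
    base_survival : density-independent daily survival probability
    move_fraction : fraction of survivors moving one unit downstream per day
    """
    # Survival approaches base_survival when abundance << capacity and declines
    # as a unit fills up (a Beverton-Holt-like form, assumed for illustration).
    survival = base_survival / (1.0 + abundance / capacity)
    survivors = abundance * survival

    moved = survivors * move_fraction
    stayed = survivors - moved
    downstream = np.zeros_like(survivors)
    downstream[1:] = moved[:-1]          # fish leaving unit i enter unit i + 1
    return stayed + downstream           # fish moving past the last unit exit the reach


# Example: five linked units with differing flow-dependent capacities.
print(daily_update(np.array([500., 200., 50., 10., 0.]),
                   np.array([400., 400., 200., 100., 100.]),
                   base_survival=0.98, move_fraction=0.2))
```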

  15. Electron-capture and Low-mass Iron-core-collapse Supernovae: New Neutrino-radiation-hydrodynamics Simulations

    Science.gov (United States)

    Radice, David; Burrows, Adam; Vartanyan, David; Skinner, M. Aaron; Dolence, Joshua C.

    2017-11-01

    We present new 1D (spherical) and 2D (axisymmetric) simulations of electron-capture (EC) and low-mass iron-core-collapse supernovae (SN). We consider six progenitor models: the ECSN progenitor from Nomoto; two ECSN-like low-mass low-metallicity iron-core progenitors from A. Heger (2016, private communication); and the 9, 10, and 11 M⊙ (zero-age main-sequence) progenitors from Sukhbold et al. We confirm that the ECSN and ECSN-like progenitors explode easily even in 1D with explosion energies of up to 0.15 Bethe (1 B ≡ 10^51 erg), and are a viable mechanism for the production of very-low-mass neutron stars. However, the 9, 10, and 11 M⊙ progenitors do not explode in 1D and are not even necessarily easier to explode than higher-mass progenitor stars in 2D. We study the effect of perturbations and of changes to the microphysics, and we find that relatively small changes can result in qualitatively different outcomes, even in 1D, for models sufficiently close to the explosion threshold. Finally, we revisit the impact of convection below the protoneutron star (PNS) surface. We analyze 1D and 2D evolutions of PNSs subject to the same boundary conditions. We find that the impact of PNS convection has been underestimated in previous studies and could result in an increase of the neutrino luminosity by up to factors of two.

  16. Evaluation of uncertainty in capturing the spatial variability and magnitudes of extreme hydrological events for the uMngeni catchment, South Africa

    Science.gov (United States)

    Kusangaya, Samuel; Warburton Toucher, Michele L.; van Garderen, Emma Archer

    2018-02-01

    Downscaled General Circulation Model (GCM) outputs are used to forecast climate change and provide information used as input for hydrological modelling. Given that our understanding of climate change points towards an increasing frequency and intensity, and changing timing, of extreme hydrological events, there is a need to assess the ability of downscaled GCMs to capture these extreme hydrological events. Extreme hydrological events play a significant role in regulating the structure and function of rivers and associated ecosystems. In this study, the Indicators of Hydrologic Alteration (IHA) method was adapted to assess the ability of streamflow simulated using downscaled GCMs (dGCMs) to capture extreme river dynamics (high and low flows), as compared to streamflow simulated using historical climate data from 1960 to 2000. The ACRU hydrological model was used for simulating streamflow for the 13 water management units of the uMngeni catchment, South Africa. Statistically downscaled climate models obtained from the Climate System Analysis Group at the University of Cape Town were used as input for the ACRU model. Results indicated that high flows and extreme high flows (one in ten year high flows/large flood events) were poorly represented in terms of timing, frequency and magnitude. Streamflow simulated using dGCM data also captures more low flows and extreme low flows (one in ten year lowest flows) than streamflow simulated using historical climate data. The overall conclusion was that although dGCM output can reasonably be used to simulate overall streamflow, it performs poorly when simulating extreme high and low flows. Streamflow simulations from dGCMs must thus be used with caution in hydrological applications, particularly for design hydrology, as extreme high and low flows are still poorly represented. This arguably calls for further improvement of downscaling techniques in order to generate climate data more relevant and

  17. Hybrid neural network bushing model for vehicle dynamics simulation

    International Nuclear Information System (INIS)

    Sohn, Jeong Hyun; Lee, Seung Kyu; Yoo, Wan Suk

    2008-01-01

    Although the linear model has been widely used as the bushing model in vehicle suspension systems, it cannot express the nonlinear characteristics of the bushing in terms of amplitude and frequency. An artificial neural network model was suggested to account for the hysteretic responses of bushings. This model, however, often diverges due to the uncertainties of the neural network under unexpected excitation inputs. In this paper, a hybrid neural network bushing model combining a linear model and a neural network is suggested. The linear model is employed to represent the linear stiffness and damping effects, and the artificial neural network algorithm is adopted to take into account the hysteretic responses. A rubber test was performed to capture the bushing characteristics, in which sine excitations with different frequencies and amplitudes were applied. Random test results were used to update the weighting factors of the neural network model. It is proven that the proposed model has more robust characteristics than a simple neural network model under step excitation input. A full car simulation was carried out to verify the proposed bushing models. It was shown that the hybrid model results are almost identical to those of the linear model under several maneuvers
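
    The structure of such a hybrid model (a linear spring-damper in parallel with a learned correction for the hysteretic residual) can be sketched as follows. The network architecture, the synthetic "measured" force and all coefficients are illustrative assumptions, not the parameters identified from the rubber tests in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

class HybridBushing:
    """Linear stiffness/damping force plus a neural-network hysteresis correction."""

    def __init__(self, k, c):
        self.k, self.c = k, c                               # linear stiffness and damping
        self.net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000)

    def linear_force(self, x, v):
        return self.k * x + self.c * v

    def fit(self, x, v, f_measured):
        # The network learns only the residual that the linear model cannot capture.
        residual = f_measured - self.linear_force(x, v)
        self.net.fit(np.c_[x, v], residual)

    def force(self, x, v):
        return self.linear_force(x, v) + self.net.predict(np.c_[x, v])


# Illustrative sine-sweep data standing in for the rubber test measurements.
t = np.linspace(0.0, 10.0, 2000)
x = 0.002 * np.sin(2 * np.pi * 5 * t)                       # displacement [m]
v = np.gradient(x, t)                                       # velocity [m/s]
f_meas = 2.0e5 * x + 300.0 * v + 50.0 * np.tanh(500.0 * v)  # synthetic hysteretic force

model = HybridBushing(k=2.0e5, c=300.0)
model.fit(x, v, f_meas)
print(model.force(x[:5], v[:5]))
```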

  18. Simulation modeling and arena

    CERN Document Server

    Rossetti, Manuel D

    2015-01-01

    Emphasizes a hands-on approach to learning statistical analysis and model building through the use of comprehensive examples, problems sets, and software applications With a unique blend of theory and applications, Simulation Modeling and Arena®, Second Edition integrates coverage of statistical analysis and model building to emphasize the importance of both topics in simulation. Featuring introductory coverage on how simulation works and why it matters, the Second Edition expands coverage on static simulation and the applications of spreadsheets to perform simulation. The new edition als

  19. Data supporting the validation of a simulation model for multi-component gas separation in polymeric membranes.

    Science.gov (United States)

    Giordano, Lorena; Roizard, Denis; Bounaceur, Roda; Favre, Eric

    2016-12-01

    The article describes data concerning the separation performances of polymeric hollow-fiber membranes. The data were obtained using a model for simulating gas separation, described in the research article entitled "Interplay of inlet temperature and humidity on energy penalty for CO2 post-combustion capture: rigorous analysis and simulation of a single stage gas permeation process" (L. Giordano, D. Roizard, R. Bounaceur, E. Favre, 2016) [1]. The data were used to validate the model by comparison with literature results. Considering a membrane system based on feed compression only, data from the proposed model and from the literature were compared with respect to the molar composition of the permeate stream, the membrane area and the specific energy requirement, varying the feed pressure and the CO2 separation degree.

  20. Vortex-Breakdown-Induced Particle Capture in Branching Junctions.

    Science.gov (United States)

    Ault, Jesse T; Fani, Andrea; Chen, Kevin K; Shin, Sangwoo; Gallaire, François; Stone, Howard A

    2016-08-19

    We show experimentally that a flow-induced, Reynolds number-dependent particle-capture mechanism in branching junctions can be enhanced or eliminated by varying the junction angle. In addition, numerical simulations are used to show that the features responsible for this capture have the signatures of classical vortex breakdown, including an approach flow aligned with the vortex axis and a pocket of subcriticality. We show how these recirculation regions originate and evolve and suggest a physical mechanism for their formation. Furthermore, comparing experiments and numerical simulations, the presence of vortex breakdown is found to be an excellent predictor of particle capture. These results inform the design of systems in which suspended particle accumulation can be eliminated or maximized.

  1. Biomechanical model-based displacement estimation in micro-sensor motion capture

    International Nuclear Information System (INIS)

    Meng, X L; Sun, S Y; Wu, J K; Zhang, Z Q; Wong, W C (Department of Electrical and Computer Engineering, National University of Singapore (NUS), 02-02-10 I3 Building, 21 Heng Mui Keng Terrace (Singapore))

    2012-01-01

    In micro-sensor motion capture systems, the estimation of the body displacement in the global coordinate system remains a challenge due to lack of external references. This paper proposes a self-contained displacement estimation method based on a human biomechanical model to track the position of walking subjects in the global coordinate system without any additional supporting infrastructures. The proposed approach makes use of the biomechanics of the lower body segments and the assumption that during walking there is always at least one foot in contact with the ground. The ground contact joint is detected based on walking gait characteristics and used as the external references of the human body. The relative positions of the other joints are obtained from hierarchical transformations based on the biomechanical model. Anatomical constraints are proposed to apply to some specific joints of the lower body to further improve the accuracy of the algorithm. Performance of the proposed algorithm is compared with an optical motion capture system. The method is also demonstrated in outdoor and indoor long distance walking scenarios. The experimental results demonstrate clearly that the biomechanical model improves the displacement accuracy within the proposed framework. (paper)
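
    A toy 2-D illustration of the anchoring idea, assuming the detected ground-contact (stance) ankle is pinned at a known point and the remaining joint positions follow from chained segment transforms; segment lengths and angles are made-up numbers, not the paper's biomechanical model.

    import numpy as np

    def joint_positions(ankle_xy, shank_angle, thigh_angle, l_shank=0.43, l_thigh=0.45):
        """Chain transforms up from the pinned stance ankle; angles are from vertical, in radians."""
        ankle = np.asarray(ankle_xy, dtype=float)
        knee = ankle + l_shank * np.array([np.sin(shank_angle), np.cos(shank_angle)])
        hip = knee + l_thigh * np.array([np.sin(thigh_angle), np.cos(thigh_angle)])
        return knee, hip

    # Stance foot detected at the origin; the orientation angles would come from the inertial sensors.
    knee, hip = joint_positions(ankle_xy=(0.0, 0.0), shank_angle=0.10, thigh_angle=-0.05)
    print("knee position:", knee.round(3), "hip position:", hip.round(3))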

  2. Do maize models capture the impacts of heat and drought stresses on yield? Using algorithm ensembles to identify successful approaches.

    Science.gov (United States)

    Jin, Zhenong; Zhuang, Qianlai; Tan, Zeli; Dukes, Jeffrey S; Zheng, Bangyou; Melillo, Jerry M

    2016-09-01

    Stresses from heat and drought are expected to increasingly suppress crop yields, but the degree to which current models can represent these effects is uncertain. Here we evaluate the algorithms that determine the impacts of heat and drought stress on maize in 16 major maize models by incorporating these algorithms into a standard model, the Agricultural Production Systems sIMulator (APSIM), and running an ensemble of simulations. Although both daily mean temperature and daylight temperature are common choices for forcing heat stress algorithms, current parameterizations in most models favor the use of daylight temperature even though the algorithm was designed for daily mean temperature. Different drought algorithms (i.e., a function of soil water content, of the soil water supply-to-demand ratio, or of the actual-to-potential transpiration ratio) simulated considerably different patterns of water shortage over the growing season, but nonetheless predicted similar decreases in annual yield. Using the selected combination of algorithms, our simulations show that maize yield reduction was more sensitive to drought stress than to heat stress in the US Midwest since the 1980s, and this pattern will continue under future scenarios; the influence of excessive heat will become increasingly prominent by the late 21st century. Our review of algorithms in 16 crop models suggests that the impacts of heat and drought stress on plant yield are best described by crop models that: (i) incorporate event-based descriptions of heat and drought stress, (ii) consider the effects of nighttime warming, and (iii) coordinate the interactions among multiple stresses. Our study identifies the proficiency with which different model formulations capture the impacts of heat and drought stress on maize biomass and yield production. The framework presented here can be applied to other modeled processes and used to improve yield predictions of other crops with a wide variety of crop models.
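
    An illustrative sketch of the two algorithm families being compared, assuming a simple linear heat-stress factor (driven either by daily mean or by an approximate daylight temperature) and a supply-to-demand drought factor; all thresholds and weights are placeholders, not the calibrated parameters of APSIM or any of the 16 models.

    import numpy as np

    def heat_stress_factor(t_driver, t_opt=30.0, t_crit=40.0):
        """1 = no stress below t_opt, linear decline to 0 at t_crit (placeholder thresholds)."""
        return float(np.clip((t_crit - t_driver) / (t_crit - t_opt), 0.0, 1.0))

    def daylight_temperature(t_mean, t_max, w=0.75):
        """Illustrative blend of daily mean and daily maximum temperature."""
        return w * t_max + (1.0 - w) * t_mean

    def drought_stress_factor(soil_water_supply, transpiration_demand):
        """Supply-to-demand ratio, capped at 1 (no stress when supply meets demand)."""
        return float(np.clip(soil_water_supply / max(transpiration_demand, 1e-9), 0.0, 1.0))

    t_mean, t_max = 28.0, 36.0
    print("heat stress, daily-mean driver:     ", heat_stress_factor(t_mean))
    print("heat stress, daylight-temp driver:  ", heat_stress_factor(daylight_temperature(t_mean, t_max)))
    print("drought stress (supply 3, demand 5):", drought_stress_factor(3.0, 5.0))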

  3. A holistic 3D finite element simulation model for thermoelectric power generator element

    International Nuclear Information System (INIS)

    Wu, Guangxi; Yu, Xiong

    2014-01-01

    Highlights: • Development of a holistic simulation model for the thermoelectric energy harvester. • Accounts for delta Seebeck coefficient and carrier charge density variations. • Solution of the thermo-electric coupling problem with the finite element method. • Model capable of predicting phenomena not captured by traditional models. • A simulation tool for the design of innovative TEM materials and structures. - Abstract: Harvesting the thermal energy stored in the ambient environment provides a potential sustainable energy source. Thermoelectric power generators have the advantages of having no moving parts, being durable, and being lightweight. These unique features are advantageous for many applications (e.g., carry-on medical devices, embedded infrastructure sensors, aerospace, transportation, etc.). To ensure efficient application of thermoelectric energy harvesting systems, the behavior of such systems needs to be fully understood. Finite element simulations provide important tools for this purpose. Although the performance of thermoelectric modules has been modeled by many researchers, due to the complexity of solving the coupled problem, the influences of the effective Seebeck coefficient and carrier density variations on the performance of the thermoelectric system are generally neglected. This results in an overestimation of power generator performance in the strong-ionization temperature region. This paper presents an advanced simulation model for thermoelectric elements that considers the effects of both factors. The mathematical basis of this model is first presented. Finite element simulations are then implemented on a thermoelectric power generator unit. The characteristics of the thermoelectric power generator and their relationship to its performance are discussed under different working temperature regions. The internal physics processes of the TEM harvester are analyzed from the results of computational simulations. The new model

  4. Modeling and simulation of CANDU reactor and its regulating system

    Science.gov (United States)

    Javidnia, Hooman

    Analytical computer codes are indispensable tools in the design, optimization, and control of nuclear power plants. Numerous codes have been developed to perform different types of analyses related to nuclear power plants. A large number of these codes are designed to perform safety analyses. In the context of safety analyses, the control system is often neglected. Although there are good reasons for such a decision, that does not mean that the study of control systems in nuclear power plants should be neglected altogether. In this thesis, a proof-of-concept code is developed as a tool that can be used in the design, optimization, and operation stages of the control system. The main objective in the design of this computer code is to provide a tool that is easy to use by its target audience and is capable of producing high-fidelity results that can be trusted to design the control system and optimize its performance. Since the overall plant control system covers a very wide range of processes, this thesis focuses on one particular module of the overall plant control system, namely the reactor regulating system. The center of the reactor regulating system is the CANDU reactor. A nodal model for the reactor is used to represent the spatial neutronic kinetics of the core. The nodal model produces better results than the point kinetics model that is often used in the design and analysis of control systems for nuclear reactors, and it can capture spatial effects to some extent, although it is not as detailed as finite difference methods. The criteria for choosing a nodal model of the core are: (1) the model should provide more detail than point kinetics and capture spatial effects, and (2) it should not be so complex or overly detailed that it slows down the simulation and provides details that are extraneous or unnecessary for a control engineer. Other than the reactor itself, there are auxiliary models that describe the dynamics of different
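
    For context, a minimal sketch of the point-kinetics baseline that a nodal core model improves upon: one effective delayed-neutron group and a small step reactivity insertion, integrated with explicit Euler. Parameter values are generic textbook numbers, not CANDU design data.

    beta, lam, Lam = 0.0065, 0.08, 1.0e-3   # delayed fraction, precursor decay [1/s], generation time [s]
    rho = 0.0005                            # step reactivity insertion (+0.5 mk)

    def point_kinetics(n0=1.0, t_end=10.0, dt=1.0e-4):
        """Explicit-Euler integration of one-group point kinetics; returns relative power."""
        n = n0
        c = beta * n0 / (lam * Lam)         # equilibrium precursor concentration
        for _ in range(int(t_end / dt)):
            dn = ((rho - beta) / Lam) * n + lam * c
            dc = (beta / Lam) * n - lam * c
            n += dt * dn
            c += dt * dc
        return n

    print(f"relative reactor power after 10 s: {point_kinetics():.3f}")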

  5. Integration of Local Hydrology into Regional Hydrologic Simulation Model

    Science.gov (United States)

    Van Zee, R. J.; Lal, W. A.

    2002-05-01

    South Florida hydrology is dominated by the Central and South Florida (C&SF) Project, which is managed to provide flood protection, water supply and environmental protection. A complex network of levees, canals and structures provides these services to the individual drainage basins. The landscape varies widely across the C&SF system, with corresponding differences in the way water is managed within each basin. Agricultural areas are managed for optimal crop production. Urban areas maximize flood protection while maintaining minimum water levels to protect adjacent wetlands and local water supplies. "Natural" areas flood and dry out in response to the temporal distribution of rainfall. The evaluation of planning, regulation and operational issues requires access to a simulation model that captures the effects of both regional and local hydrology. The Regional Simulation Model (RSM) uses a "pseudo-cell" approach to integrate local hydrology within the context of a regional hydrologic system. A 2-dimensional triangulated mesh is used to represent the regional surface and ground water systems, and a 1-dimensional canal network is superimposed onto this mesh. The movement of water is simulated using a finite volume formulation with a diffusive wave approximation. Each cell in the triangulated mesh has a "pseudo-cell" counterpart, which represents the same area as the cell but is conceptualized such that it simulates the localized hydrologic conditions. Protocols have been established to provide an interface between a cell and its pseudo-cell counterpart. A number of pseudo-cell types have already been developed and tested in the simulation of Water Conservation Area 1, and several have been proposed to deal with specific local issues in the Southwest Florida Feasibility Study. This presentation will provide an overview of the overall RSM design, describe the relationship between cells and pseudo-cells, and illustrate how pseudo-cells are used to simulate agriculture

  6. Digital representations of the real world how to capture, model, and render visual reality

    CERN Document Server

    Magnor, Marcus A; Sorkine-Hornung, Olga; Theobalt, Christian

    2015-01-01

    Create Genuine Visual Realism in Computer Graphics Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality explains how to portray visual worlds with a high degree of realism using the latest video acquisition technology, computer graphics methods, and computer vision algorithms. It explores the integration of new capture modalities, reconstruction approaches, and visual perception into the computer graphics pipeline. Understand the Entire Pipeline from Acquisition, Reconstruction, and Modeling to Realistic Rendering and Applications. The book covers sensors fo

  7. Systematic vacuum study of the ITER model cryopump by test particle Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Xueli; Haas, Horst; Day, Christian [Institute for Technical Physics, Karlsruhe Institute of Technology, P.O. Box 3640, 76021 Karlsruhe (Germany)

    2011-07-01

    The primary pumping systems on the ITER torus are based on eight tailor-made cryogenic pumps because no standard commercial vacuum pump can meet the ITER working criteria. This kind of cryopump can provide high pumping speed, especially for light gases, by cryosorption on activated charcoal at 4.5 K. In this paper we present systematic Monte Carlo simulation results for the reduced-scale model pump obtained with ProVac3D, a new Test Particle Monte Carlo simulation program developed by KIT. The simulation model includes the most important mechanical structures, such as the sixteen cryogenic panels working at 4.5 K, the 80 K radiation shield envelope with baffles, the pump housing, the inlet valve and the TIMO (Test facility for the ITER Model Pump) test facility. Three typical gas species, i.e., deuterium, protium and helium, are simulated. The pumping characteristics have been obtained. The results are in good agreement with the experimental data up to a gas throughput of 1000 sccm, which marks the limit for free molecular flow. This means that ProVac3D is a useful tool in the design of the prototype cryopump of ITER. Meanwhile, the capture factors at different critical positions are calculated. They can be used as important input parameters for a follow-up Direct Simulation Monte Carlo (DSMC) simulation at higher gas throughput.

  8. Capture orbits around asteroids by hitting zero-velocity curves

    Science.gov (United States)

    Wang, Wei; Yang, Hongwei; Zhang, Wei; Ma, Guangfu

    2017-12-01

    The problem of capturing a spacecraft from a heliocentric orbit into a high parking orbit around binary asteroids is investigated in the current study. To reduce the braking ΔV, a new capture strategy takes advantage of the three-body gravity of the binary asteroid to lower the inertial energy before applying the ΔV. The framework of the circular restricted three-body problem (CR3BP) is employed for the binary asteroid system. The proposed capture strategy is based on the mechanism by which inertial energy can be decreased sharply near zero-velocity curves (ZVCs). The strategy has two steps, namely, hitting the target ZVC and raising the periapsis by a small ΔV at the apoapsis. By hitting the target ZVC, the positive inertial energy decreases and becomes negative. Using a small ΔV, the spacecraft inserts into a bounded orbit around the asteroid. In addition, a rotating mass dipole model is employed for elongated asteroids, which leads to dynamics similar to that of the CR3BP. With this approach, the proposed capture strategy can also be applied to elongated asteroids. Numerical simulations validate that the proposed capture strategy is applicable to the binary asteroid 90 Antiope and the elongated asteroid 216 Kleopatra.
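
    A minimal sketch of the quantities behind such a strategy, assuming the standard nondimensional CR3BP with the primaries on the x-axis: it evaluates the Jacobi constant of a spacecraft state and the zero-velocity-curve condition 2*Omega(x, y) = C that bounds the accessible region. The mass parameter and state are illustrative, not values for 90 Antiope or 216 Kleopatra.

    import numpy as np

    mu = 0.3   # mass ratio m2/(m1 + m2); illustrative value

    def omega(x, y):
        """Pseudo-potential of the rotating frame (primaries at (-mu, 0) and (1 - mu, 0))."""
        r1 = np.hypot(x + mu, y)
        r2 = np.hypot(x - 1.0 + mu, y)
        return 0.5 * (x**2 + y**2) + (1.0 - mu) / r1 + mu / r2

    def jacobi_constant(x, y, vx, vy):
        """C = 2*Omega - v^2 for a state in the rotating frame (nondimensional units)."""
        return 2.0 * omega(x, y) - (vx**2 + vy**2)

    C = jacobi_constant(x=0.5, y=0.6, vx=0.05, vy=-0.02)
    print(f"Jacobi constant C = {C:.4f}")

    # A point is energetically forbidden when 2*Omega(x, y) < C; the zero-velocity curve
    # is the boundary 2*Omega = C that the capture strategy aims to hit.
    test_point = (0.2, 1.1)
    print("test point forbidden:", bool(2.0 * omega(*test_point) < C))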

  9. CO_2 capture with solid sorbent: CFD model of an innovative reactor concept

    International Nuclear Information System (INIS)

    Barelli, L.; Bidini, G.; Gallorini, F.

    2016-01-01

    Highlights: • A new reactor solution based on rotating fixed beds was presented. • The preliminary design of the reactor was approached. • A CFD model of the reactor, including CO_2 capture kinetics, was developed. • The CFD model is validated with experimental results. • Increased sorbent exploitation is possible thanks to the new reactor. - Abstract: In future decarbonization scenarios, CCS, with particular reference to post-combustion technologies, will be an important option also for energy-intensive industries. Nevertheless, today CCS systems are rarely installed due to the high energy and cost penalties of the current technology based on chemical scrubbing with amine solvents. Therefore, innovative solutions based on new/optimized solvents, sorbents, membranes and new process designs are R&D priorities. Regarding CO_2 capture through solid sorbents, a new reactor solution based on rotating fixed beds is presented in this paper. In order to design the innovative system, a suitable CFD model was developed, considering also the kinetics of the capture process. The model was validated with experimental results obtained by the authors in previous research activities, showing a potential reduction of energy penalties with respect to current technologies. In the future, the model will be used to identify the control logic of the innovative reactor in order to verify improvements in terms of sorbent exploitation and reduction of system energy consumption.

  10. Neutron-capture Cross Sections from Indirect Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Escher, J E; Burke, J T; Dietrich, F S; Ressler, J J; Scielzo, N D; Thompson, I J

    2011-10-18

    Cross sections for compound-nuclear reactions play an important role in models of astrophysical environments and simulations of the nuclear fuel cycle. Providing reliable cross section data remains a formidable task, and direct measurements have to be complemented by theoretical predictions and indirect methods. The surrogate nuclear reactions method provides an indirect approach for determining cross sections for reactions on unstable isotopes, which are difficult or impossible to measure otherwise. Current implementations of the method provide useful cross sections for (n,f) reactions, but need to be improved upon for applications to capture reactions.

  11. Modelling of a tubular membrane contactor for pre-combustion CO2 capture using ionic liquids: Influence of the membrane configuration, absorbent properties and operation parameters

    Directory of Open Access Journals (Sweden)

    Zhongde Dai

    2016-10-01

    Full Text Available A membrane contactor using ionic liquids (ILs) as solvent for pre-combustion CO2 capture at elevated temperature (303–393 K) and pressure (20 bar) has been studied using a mathematical model in the present work. A comprehensive two-dimensional (2D) mass-transfer model was developed based on the finite element method. The effects of liquid properties, membrane configurations, and operation parameters on the CO2 removal efficiency were systematically studied. The simulation results show that CO2 can be effectively removed in this process. In addition, it is found that the liquid-phase mass transfer dominates the overall mass transfer. Membranes with high porosity and small thickness could apparently reduce the membrane resistance and thus increase the separation efficiency. On the other hand, the membrane diameter and membrane length have a relatively small influence on separation performance within the operation range. Keywords: CO2 capture, Pre-combustion, Membrane contactor, Ionic liquids, Modelling

  12. Patient-Specific Simulation Models of the Abdominal Aorta With and Without Aneurysms

    DEFF Research Database (Denmark)

    Enevoldsen, Marie Sand

    to be isotropic, which may allow simpler phenomenological models to capture these effects. There is a pressing need, however, for more detailed histological information coupled with more complete experimental data for the systemic arteries. The second study was aimed at developing computational simulation models...... relations for computational analysis, and evaluation of the material model predictability. The constitutive framework applied is the four fiber family (4FF) model. This model assumes that the wall is a constrained mixture of an amorphous isotropic elastin dominated matrix reinforced by collagen fibers....... The collagen fibers are grouped in four directions of orientation. The purpose of the first study was to investigate whether significant risk factors related to AAA development can be identified from a specific pattern in the material parameters of the 4FF model. Smoking is a leading self-inflicted risk factor...

  13. Next-Generation Climate Modeling Science Challenges for Simulation, Workflow and Analysis Systems

    Science.gov (United States)

    Koch, D. M.; Anantharaj, V. G.; Bader, D. C.; Krishnan, H.; Leung, L. R.; Ringler, T.; Taylor, M.; Wehner, M. F.; Williams, D. N.

    2016-12-01

    We will present two examples of current and future high-resolution climate-modeling research that challenge existing simulation run-time I/O, model-data movement, storage and publishing, and analysis. In each case, we will consider lessons learned as current workflow systems are broken by these large-data science challenges, as well as strategies to repair or rebuild the systems. First we consider the science and workflow challenges posed by the CMIP6 multi-model HighResMIP, involving around a dozen modeling groups performing quarter-degree simulations, in 3-member ensembles for 100 years, with high-frequency (1-6 hourly) diagnostics, which is expected to generate over 4PB of data. An example of science derived from these experiments will be to study how resolution affects the ability of models to capture extreme events such as hurricanes or atmospheric rivers. Expected methods to transfer (using parallel Globus) and analyze (using parallel "TECA" software tools) HighResMIP data for such feature-tracking by the DOE CASCADE project will be presented. A second example will be from the Accelerated Climate Modeling for Energy (ACME) project, which is currently addressing challenges involving multiple century-scale coupled high-resolution (quarter-degree) climate simulations on DOE Leadership Class computers. ACME is anticipating production of over 5PB of data during the next 2 years of simulations, in order to investigate the drivers of water cycle changes, sea-level rise, and carbon cycle evolution. The ACME workflow, from simulation to data transfer, storage, analysis and publication, will be presented. Current and planned methods to accelerate the workflow, including run-time diagnostics and server-side analysis to avoid moving large datasets, will be presented.

  14. Aviation Safety Simulation Model

    Science.gov (United States)

    Houser, Scott; Yackovetsky, Robert (Technical Monitor)

    2001-01-01

    The Aviation Safety Simulation Model is a software tool that enables users to configure a terrain, a flight path, and an aircraft and simulate the aircraft's flight along the path. The simulation monitors the aircraft's proximity to terrain obstructions, and reports when the aircraft violates accepted minimum distances from an obstruction. This model design facilitates future enhancements to address other flight safety issues, particularly air and runway traffic scenarios. This report shows the user how to build a simulation scenario and run it. It also explains the model's output.

  15. Coupled Multi-physical Simulations for the Assessment of Nuclear Waste Repository Concepts: Modeling, Software Development and Simulation

    Science.gov (United States)

    Massmann, J.; Nagel, T.; Bilke, L.; Böttcher, N.; Heusermann, S.; Fischer, T.; Kumar, V.; Schäfers, A.; Shao, H.; Vogel, P.; Wang, W.; Watanabe, N.; Ziefle, G.; Kolditz, O.

    2016-12-01

    As part of the German site selection process for a high-level nuclear waste repository, different repository concepts in the geological candidate formations rock salt, clay stone and crystalline rock are being discussed. An open assessment of these concepts using numerical simulations requires physical models capturing the individual particularities of each rock type and associated geotechnical barrier concept to a comparable level of sophistication. In a joint work group of the Helmholtz Centre for Environmental Research (UFZ) and the German Federal Institute for Geosciences and Natural Resources (BGR), scientists of the UFZ are developing and implementing multiphysical process models while BGR scientists apply them to large scale analyses. The advances in simulation methods for waste repositories are incorporated into the open-source code OpenGeoSys. Here, recent application-driven progress in this context is highlighted. A robust implementation of visco-plasticity with temperature-dependent properties into a framework for the thermo-mechanical analysis of rock salt will be shown. The model enables the simulation of heat transport along with its consequences on the elastic response as well as on primary and secondary creep or the occurrence of dilatancy in the repository near field. Transverse isotropy, non-isothermal hydraulic processes and their coupling to mechanical stresses are taken into account for the analysis of repositories in clay stone. These processes are also considered in the near field analyses of engineered barrier systems, including the swelling/shrinkage of the bentonite material. The temperature-dependent saturation evolution around the heat-emitting waste container is described by different multiphase flow formulations. For all mentioned applications, we illustrate the workflow from model development and implementation, over verification and validation, to repository-scale application simulations using methods of high performance computing.

  16. Cognitive models embedded in system simulation models

    International Nuclear Information System (INIS)

    Siegel, A.I.; Wolf, J.J.

    1982-01-01

    If we are to discuss and consider cognitive models, we must first come to grips with two questions: (1) What is cognition; (2) What is a model. Presumably, the answers to these questions can provide a basis for defining a cognitive model. Accordingly, this paper first places these two questions into perspective. Then, cognitive models are set within the context of computer simulation models and a number of computer simulations of cognitive processes are described. Finally, pervasive issues are discussed vis-a-vis cognitive modeling in the computer simulation context

  17. GEANT4 simulation of the neutron background of the C6D6 set-up for capture studies at n_TOF

    Energy Technology Data Exchange (ETDEWEB)

    Žugec, P., E-mail: pzugec@phy.hr [Department of Physics, Faculty of Science, University of Zagreb (Croatia); Colonna, N. [Istituto Nazionale di Fisica Nucleare, Bari (Italy); Bosnar, D. [Department of Physics, Faculty of Science, University of Zagreb (Croatia); Altstadt, S. [Johann-Wolfgang-Goethe Universität, Frankfurt (Germany); Andrzejewski, J. [Uniwersytet Łódzki, Lodz (Poland); Audouin, L. [Centre National de la Recherche Scientifique/IN2P3 – IPN, Orsay (France); Barbagallo, M. [Istituto Nazionale di Fisica Nucleare, Bari (Italy); Bécares, V. [Centro de Investigaciones Energeticas Medioambientales y Tecnológicas (CIEMAT), Madrid (Spain); Bečvář, F. [Charles University, Prague (Czech Republic); Belloni, F. [Commissariat à l' Énergie Atomique (CEA) Saclay – Irfu, Gif-sur-Yvette (France); Berthoumieux, E. [Commissariat à l' Énergie Atomique (CEA) Saclay – Irfu, Gif-sur-Yvette (France); European Organization for Nuclear Research (CERN), Geneva (Switzerland); Billowes, J. [University of Manchester, Oxford Road, Manchester (United Kingdom); Boccone, V.; Brugger, M.; Calviani, M. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Calviño, F. [Universitat Politecnica de Catalunya, Barcelona (Spain); Cano-Ott, D. [Centro de Investigaciones Energeticas Medioambientales y Tecnológicas (CIEMAT), Madrid (Spain); Carrapiço, C. [Instituto Tecnológico e Nuclear, Instituto Superior Técnico, Universidade Técnica de Lisboa, Lisboa (Portugal); Cerutti, F. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); and others

    2014-10-01

    The neutron sensitivity of the C6D6 detector setup used at the n_TOF facility for capture measurements has been studied by means of detailed GEANT4 simulations. A realistic software replica of the entire n_TOF experimental hall, including the neutron beam line, sample, detector supports and the walls of the experimental area, has been implemented in the simulations. The simulations have been analyzed in the same manner as experimental data, in particular by applying the Pulse Height Weighting Technique. The simulations have been validated against a measurement of the neutron background performed with a natC sample, showing an excellent agreement above 1 keV. At lower energies, an additional component in the measured natC yield has been discovered, which prevents the use of natC data for neutron background estimates at neutron energies below a few hundred eV. The origin and time structure of the neutron background have been derived from the simulations. Examples of the neutron background for two different samples demonstrate the important role of accurate simulations of the neutron background in capture cross-section measurements.

  18. Development and evaluation of GRAL-C dispersion model, a hybrid Eulerian-Lagrangian approach capturing NO-NO2-O3 chemistry

    Science.gov (United States)

    Oettl, Dietmar; Uhrner, Ulrich

    2011-02-01

    Based on two recent publications using Lagrangian dispersion models to simulate NO-NO2-O3 chemistry for industrial plumes, a similar modified approach was implemented in GRAL-C (Graz Lagrangian Model with Chemistry) and tested on two urban applications. In the hybrid dispersion model GRAL-C, the transport and turbulent diffusion of primary species such as NO and NO2 are treated in a Lagrangian framework while those of O3 are treated in an Eulerian framework. GRAL-C was employed in a one-year street canyon simulation in Berlin and in a four-day simulation during a winter season in Graz, the second biggest city in Austria. In contrast to Middleton D.R., Jones A.R., Redington A.L., Thomson D.J., Sokhi R.S., Luhana L., Fisher B.E.A. (2008. Lagrangian modelling of plume chemistry for secondary pollutants in large industrial plumes. Atmospheric Environment 42, 415-427) and Alessandrini S., Ferrero E. (2008. A Lagrangian model with chemical reactions: application in real atmosphere. Proceedings of the 12th Int. Conf. on Harmonization within atmospheric dispersion modelling for regulatory purposes. Croatian Meteorological Journal, 43, ISSN: 1330-0083, 235-239), the treatment of ozone was modified in order to facilitate urban-scale simulations encompassing dense road networks. For the street canyon application, modelled daily mean NOx/NO2 concentrations deviated by +0.4%/-15% from observations, while the correlations for NOx and NO2 were 0.67 and 0.76, respectively. NO2 concentrations were underestimated in summer, but were captured well for the other seasons. In Graz, a fair agreement between observed and modelled values was obtained for NOx and NO2. Simulated diurnal cycles of NO2 and O3 matched observations reasonably well, although O3 was underestimated during the day. A possible explanation might lie in the non-consideration of volatile organic compound (VOC) chemistry.

  19. Establishing gene models from the Pinus pinaster genome using gene capture and BAC sequencing.

    Science.gov (United States)

    Seoane-Zonjic, Pedro; Cañas, Rafael A; Bautista, Rocío; Gómez-Maldonado, Josefa; Arrillaga, Isabel; Fernández-Pozo, Noé; Claros, M Gonzalo; Cánovas, Francisco M; Ávila, Concepción

    2016-02-27

    In the era of high-throughput DNA sequencing, assembling and understanding gymnosperm mega-genomes remains a challenge. Although drafts of three conifer genomes have recently been published, this number is too low to understand the full complexity of conifer genomes. Using techniques focused on specific genes, gene models can be established that can aid in the assembly of gene-rich regions, and this information can be used to compare genomes and understand functional evolution. In this study, gene capture technology combined with BAC isolation and sequencing was used as an experimental approach to establish de novo gene structures without a reference genome. Probes were designed for 866 maritime pine transcripts to sequence genes captured from genomic DNA. The gene models were constructed using GeneAssembler, a new bioinformatic pipeline, which reconstructed over 82% of the gene structures, and a high proportion (85%) of the captured gene models contained sequences from the promoter regulatory region. In a parallel experiment, the P. pinaster BAC library was screened to isolate clones containing genes whose cDNA sequences were already available. BAC clones containing the asparagine synthetase, sucrose synthase and xyloglucan endotransglycosylase gene sequences were isolated and used in this study. The gene models derived from the gene capture approach were compared with the genomic sequences derived from the BAC clones. This combined approach is a particularly efficient way to capture the genomic structures of gene families with a small number of members. The experimental approach used in this study is a valuable combined technique to study genomic gene structures in species for which a reference genome is unavailable. It can be used to establish exon/intron boundaries in unknown gene structures, to reconstruct incomplete genes and to obtain promoter sequences that can be used for transcriptional studies. A bioinformatics algorithm (GeneAssembler) is also provided as a

  20. Coupled Model Intercomparison Project 5 (CMIP5) simulations of climate following volcanic eruptions

    KAUST Repository

    Driscoll, Simon; Bozzo, Alessio; Gray, Lesley J.; Robock, Alan; Stenchikov, Georgiy L.

    2012-01-01

    The ability of the climate models submitted to the Coupled Model Intercomparison Project 5 (CMIP5) database to simulate the Northern Hemisphere winter climate following a large tropical volcanic eruption is assessed. When sulfate aerosols are produced by volcanic injections into the tropical stratosphere and spread by the stratospheric circulation, it not only causes globally averaged tropospheric cooling but also a localized heating in the lower stratosphere, which can cause major dynamical feedbacks. Observations show a lower stratospheric and surface response during the following one or two Northern Hemisphere (NH) winters, that resembles the positive phase of the North Atlantic Oscillation (NAO). Simulations from 13 CMIP5 models that represent tropical eruptions in the 19th and 20th century are examined, focusing on the large-scale regional impacts associated with the large-scale circulation during the NH winter season. The models generally fail to capture the NH dynamical response following eruptions. They do not sufficiently simulate the observed post-volcanic strengthened NH polar vortex, positive NAO, or NH Eurasian warming pattern, and they tend to overestimate the cooling in the tropical troposphere. The findings are confirmed by a superposed epoch analysis of the NAO index for each model. The study confirms previous similar evaluations and raises concern for the ability of current climate models to simulate the response of a major mode of global circulation variability to external forcings. This is also of concern for the accuracy of geoengineering modeling studies that assess the atmospheric response to stratosphere-injected particles.

  2. State-and-transition simulation models: a framework for forecasting landscape change

    Science.gov (United States)

    Daniel, Colin; Frid, Leonardo; Sleeter, Benjamin M.; Fortin, Marie-Josée

    2016-01-01

    Summary A wide range of spatially explicit simulation models have been developed to forecast landscape dynamics, including models for projecting changes in both vegetation and land use. While these models have generally been developed as separate applications, each with a separate purpose and audience, they share many common features. We present a general framework, called a state-and-transition simulation model (STSM), which captures a number of these common features, accompanied by a software product, called ST-Sim, to build and run such models. The STSM method divides a landscape into a set of discrete spatial units and simulates the discrete state of each cell forward as a discrete-time inhomogeneous stochastic process. The method differs from a spatially interacting Markov chain in several important ways, including the ability to add discrete counters such as age and time-since-transition as state variables, to specify one-step transition rates as either probabilities or target areas, and to represent multiple types of transitions between pairs of states. We demonstrate the STSM method using a model of land-use/land-cover (LULC) change for the state of Hawai'i, USA. Processes represented in this example include expansion/contraction of agricultural lands, urbanization, wildfire, shrub encroachment into grassland and harvest of tree plantations; the model also projects shifts in moisture zones due to climate change. Key model output includes projections of the future spatial and temporal distribution of LULC classes and moisture zones across the landscape over the next 50 years. State-and-transition simulation models can be applied to a wide range of landscapes, including questions of both land-use change and vegetation dynamics. Because the method is inherently stochastic, it is well suited for characterizing uncertainty in model projections. When combined with the ST-Sim software, STSMs offer a simple yet powerful means for developing a wide range of models of
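
    A highly simplified, non-spatial sketch of the core idea (stochastic state transitions applied cell by cell); it omits the age/time-since-transition counters and target-area transitions that distinguish an STSM from a plain Markov chain. States and probabilities are invented for illustration, not taken from the Hawai'i model.

    import numpy as np

    states = ["grassland", "shrubland", "agriculture", "urban"]
    # Annual transition probabilities (rows: from-state, columns: to-state); rows sum to 1.
    P = np.array([
        [0.95, 0.03, 0.015, 0.005],   # grassland: shrub encroachment, ag. expansion, urbanization
        [0.02, 0.96, 0.010, 0.010],   # shrubland
        [0.01, 0.00, 0.970, 0.020],   # agriculture
        [0.00, 0.00, 0.000, 1.000],   # urban (absorbing)
    ])

    rng = np.random.default_rng(42)
    cells = rng.integers(0, 3, size=10_000)      # initial landscape with no urban cells

    for year in range(50):                       # 50-year projection
        new_cells = cells.copy()
        for s in range(len(states)):
            idx = np.where(cells == s)[0]
            new_cells[idx] = rng.choice(len(states), size=idx.size, p=P[s])
        cells = new_cells

    for name, count in zip(states, np.bincount(cells, minlength=len(states))):
        print(f"{name:12s}: {count / cells.size:.1%} of cells after 50 years")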

  3. Understanding and simulating the link between African easterly waves and Atlantic tropical cyclones using a regional climate model: the role of domain size and lateral boundary conditions

    Energy Technology Data Exchange (ETDEWEB)

    Caron, Louis-Philippe [MISU, Stockholm University, Stockholm (Sweden); Universite du Quebec a Montreal, CRCMD Network, Montreal, QC (Canada); Jones, Colin G. [Swedish Meterological and Hydrological Institute, Rossby Center, Norrkoeping (Sweden)

    2012-07-15

    Using a suite of lateral boundary conditions, we investigate the impact of domain size and boundary conditions on the Atlantic tropical cyclone and African easterly wave activity simulated by a regional climate model. Irrespective of boundary conditions, the simulations closest to the observed climatology are obtained using a domain covering both the entire tropical Atlantic and the northern African region. There is a clear degradation when the high-resolution model domain is diminished to cover only part of the African continent or only the tropical Atlantic. This is found to be the result of biases in the boundary data, which, for the smaller domains, have a large impact on TC activity. In this series of simulations, the large-scale Atlantic atmospheric environment appears to be the primary control on simulated TC activity. Weaker wave activity is usually accompanied by a shift in cyclogenesis location, from the main development region (MDR) to the subtropics. All ERA40-driven integrations manage to capture the observed interannual variability and to reproduce most of the upward trend in tropical cyclone activity observed during that period. When driven by low-resolution global climate model (GCM) integrations, the regional climate model captures interannual variability (albeit with lower correlation coefficients) only if tropical cyclones form in sufficient numbers in the main development region. However, all GCM-driven integrations fail to capture the upward trend in Atlantic tropical cyclone activity. In most integrations, variations in Atlantic tropical cyclone activity appear uncorrelated with variations in African easterly wave activity. (orig.)

  4. Application of Computer Simulation Modeling to Medication Administration Process Redesign

    Directory of Open Access Journals (Sweden)

    Nathan Huynh

    2012-01-01

    Full Text Available The medication administration process (MAP) is one of the most high-risk processes in health care. MAP workflow redesign can precipitate both unanticipated and unintended consequences that can lead to new medication safety risks and workflow inefficiencies. Thus, it is necessary to have a tool to evaluate the impact of redesign approaches in advance of their clinical implementation. This paper discusses the development of an agent-based MAP computer simulation model that can be used to assess the impact of MAP workflow redesign on MAP performance. The agent-based approach is adopted in order to capture Registered Nurse medication administration performance. The process of designing, developing, validating, and testing such a model is explained. Work is underway to collect MAP data in a hospital setting to provide more complex MAP observations to extend development of the model to better represent the complexity of the MAP.

  5. Facilitating arrhythmia simulation: the method of quantitative cellular automata modeling and parallel running

    Directory of Open Access Journals (Sweden)

    Mondry Adrian

    2004-08-01

    Full Text Available Abstract Background Many arrhythmias are triggered by abnormal electrical activity at the ionic channel and cell level, and then evolve spatio-temporally within the heart. To understand arrhythmias better and to diagnose them more precisely by their ECG waveforms, a whole-heart model is required to explore the association between the massively parallel activities at the channel/cell level and the integrative electrophysiological phenomena at the organ level. Methods We have developed a method to build large-scale electrophysiological models using extended cellular automata, and to run such models on a cluster of shared memory machines. We describe here the method, including the extension of a language-based cellular automaton to implement quantitative computing, the building of a whole-heart model with Visible Human Project data, the parallelization of the model on a cluster of shared memory computers with OpenMP and MPI hybrid programming, and a simulation algorithm that links cellular activity with the ECG. Results We demonstrate that electrical activities at the channel, cell, and organ levels can be traced and captured conveniently in our extended cellular automaton system. Examples of ECG waveforms simulated with a 2-D slice are given to support the ECG simulation algorithm. A performance evaluation of the 3-D model on a four-node cluster is also given. Conclusions Quantitative multicellular modeling with extended cellular automata is a highly efficient and widely applicable method for weaving experimental data at different levels into computational models. This process can be used to investigate complex and collective biological activities that can be described neither by their governing differential equations nor by discrete parallel computation. Transparent cluster computing is a convenient and effective method to make time-consuming simulations feasible. Arrhythmias, as a typical case, can be effectively simulated with the methods
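
    A toy 2-D excitable-medium cellular automaton (Greenberg-Hastings style) as a stand-in for the quantitative whole-heart automaton described above: 0 = resting, 1 = excited, higher values = refractory; a resting cell fires when any of its four neighbours is excited (periodic boundaries via np.roll). Grid size, refractory length and stimulus site are arbitrary choices, not the authors' model.

    import numpy as np

    R = 4                                     # last refractory state; 0 = resting, 1 = excited
    grid = np.zeros((100, 100), dtype=int)
    grid[50, 10] = 1                          # initial stimulus

    def step(g):
        """One CA update with von Neumann neighbourhood and periodic boundaries."""
        excited = (g == 1)
        neighbour_excited = (
            np.roll(excited, 1, axis=0) | np.roll(excited, -1, axis=0) |
            np.roll(excited, 1, axis=1) | np.roll(excited, -1, axis=1)
        )
        new = g.copy()
        new[(g == 0) & neighbour_excited] = 1    # resting cell fires
        new[(g >= 1) & (g <= R)] += 1            # excited/refractory cells age by one step
        new[new > R] = 0                         # recovery back to resting
        return new

    for _ in range(40):
        grid = step(grid)
    print("excited cells after 40 steps:", int((grid == 1).sum()))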

  6. Evaluation of Simulated Marine Aerosol Production Using the WaveWatchIII Prognostic Wave Model Coupled to the Community Atmosphere Model within the Community Earth System Model

    Energy Technology Data Exchange (ETDEWEB)

    Long, M. S. [Harvard Univ., Cambridge, MA (United States). School of Engineering and Applied Sciences; Keene, William C. [Univ. of Virginia, Charlottesville, VA (United States). Dept. of Environmental Sciences; Zhang, J. [Univ. of North Dakota, Grand Forks, ND (United States). Dept. of Atmospheric Sciences; Reichl, B. [Univ. of Rhode Island, Narragansett, RI (United States). Graduate School of Oceanography; Shi, Y. [Univ. of North Dakota, Grand Forks, ND (United States). Dept. of Atmospheric Sciences; Hara, T. [Univ. of Rhode Island, Narragansett, RI (United States). Graduate School of Oceanography; Reid, J. S. [Naval Research Lab. (NRL), Monterey, CA (United States); Fox-Kemper, B. [Brown Univ., Providence, RI (United States). Earth, Environmental and Planetary Sciences; Craig, A. P. [National Center for Atmospheric Research, Boulder, CO (United States); Erickson, D. J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Computer Science and Mathematics Division; Ginis, I. [Univ. of Rhode Island, Narragansett, RI (United States). Graduate School of Oceanography; Webb, A. [Univ. of Tokyo (Japan). Dept. of Ocean Technology, Policy, and Environment

    2016-11-08

    Primary marine aerosol (PMA) is emitted into the atmosphere via breaking wind waves on the ocean surface. Most parameterizations of PMA emissions use 10-meter wind speed as a proxy for wave action. This investigation coupled the 3rd generation prognostic WAVEWATCH-III wind-wave model within a coupled Earth system model (ESM) to drive PMA production using wave energy dissipation rate – analogous to whitecapping – in place of 10-meter wind speed. The wind speed parameterization did not capture basin-scale variability in relations between wind and wave fields. Overall, the wave parameterization did not improve comparison between simulated versus measured AOD or Na+, thus highlighting large remaining uncertainties in model physics. Results confirm the efficacy of prognostic wind-wave models for air-sea exchange studies coupled with laboratory- and field-based characterizations of the primary physical drivers of PMA production. No discernible correlations were evident between simulated PMA fields and observed chlorophyll or sea surface temperature.

  7. A Bayesian network based approach for integration of condition-based maintenance in strategic offshore wind farm O&M simulation models

    DEFF Research Database (Denmark)

    Nielsen, Jannie Sønderkær; Sørensen, John Dalsgaard; Sperstad, Iver Bakken

    2018-01-01

    In the overall decision problem regarding optimization of operation and maintenance (O&M) for offshore wind farms, there are many approaches for solving parts of the overall decision problem. Simulation-based strategy models accurately capture system effects related to logistics, but model...... to generate failures and CBM tasks. An example considering CBM for wind turbine blades demonstrates the feasibility of the approach....

  8. Climatology and internal variability in a 1000-year control simulation with the coupled climate model ECHO-G

    Energy Technology Data Exchange (ETDEWEB)

    Min, S.K.; Hense, A. [Bonn Univ. (Germany). Meteorologisches Inst.; Legutke, S.; Kwon, W.T. [Korea Meteorological Administration, Seoul (Korea). Meteorological Research Inst.

    2004-03-01

    The climatology and internal variability in a 1000-year control simulation of the coupled atmosphere-ocean global climate model ECHO-G are analyzed and compared with observations and other coupled climate model simulations. ECHO-G requires annual mean flux corrections for heat and freshwater in order to simulate no climate drift for 1000 years, but no flux corrections for momentum. The ECHO-G control run captures well most aspects of the observed seasonal and annual climatology and of the interannual to decadal variability. Model biases are very close to those in ECHAM4 stand-alone integrations with prescribed observed sea surface temperature. A trend comparison between observed and modeled near-surface temperatures shows that the observed global warming at near-surface level is beyond the range of internal variability produced by ECHO-G. The simulated global mean near-surface temperatures, however, show a two-year spectral peak which is linked with a strong biennial bias of energy in the ENSO signal. Consequently, the interannual variability (3-9 years) is underestimated. The overall ENSO structure, such as the tropical SST climate and its seasonal cycle, a single ITCZ in the eastern tropical Pacific, and the ENSO phase-locking to the annual cycle, is simulated reasonably well by ECHO-G. However, the amplitude of SST variability is overestimated in the eastern equatorial Pacific and the observed westward propagation of zonal wind stress over the equatorial Pacific is not captured by the model. ENSO-related teleconnection patterns of near-surface temperature, precipitation, and mean sea level pressure are reproduced realistically. The station-based NAO index in the model exhibits a 'white' noise spectrum similar to the observed one, and the NAO-related patterns of near-surface temperature, precipitation, and mean sea level pressure are also simulated successfully. However, the model overestimates the additional warming over the North Pacific in the high index

  9. Environmental assessment of amine-based carbon capture Scenario modelling with life cycle assessment (LCA)

    Energy Technology Data Exchange (ETDEWEB)

    Brekke, Andreas; Askham, Cecilia; Modahl, Ingunn Saur; Vold, Bjoern Ivar; Johnsen, Fredrik Moltu

    2012-07-01

    This report contains a first attempt at introducing the environmental impacts associated with amines and their derivatives into a life cycle assessment (LCA) of gas power production with carbon capture and comparing these with other environmental impacts associated with the production system. The report aims to identify data gaps and methodological challenges connected both to modelling the toxicity of amines and derivatives and to the weighting of environmental impacts. A scenario-based modelling exercise was performed on a theoretical gas power plant with carbon capture, where emission levels of nitrosamines were varied from zero (gas power without CCS) to a worst-case level (outside the probable range of actual carbon capture facilities). Because of extensive research and development in the areas of solvents and emissions from carbon capture facilities in recent years, data used in the exercise may be outdated and the results should therefore not be taken at face value. The results from the exercise showed that, according to UseTox, emissions of nitrosamines are less important than emissions of formaldehyde with regard to toxicity related to the operation of (i.e. both inputs to and outputs from) a carbon capture facility. If characterisation factors for emissions of metals are included, these outweigh all other toxic emissions in the study. None of the most recent weighting methods in LCA include characterisation factors for nitrosamines, and these are therefore not part of the environmental ranking. These results show that the EDecIDe project has an important role to play in developing LCA methodology useful for assessing the environmental performance of amine-based carbon capture in particular and CCS in general. The EDecIDe project will examine the toxicity models used in LCA in more detail, specifically UseTox. The applicability of the LCA compartment models and site specificity issues for a Norwegian/Arctic situation will be explored. This applies to the environmental compartments

  10. Dynamic simulation and optimization of an industrial-scale absorption tower for CO2 capturing from ethane gas

    Directory of Open Access Journals (Sweden)

    Babak Pouladi

    2016-11-01

    Full Text Available This article considers a process technology based on absorption for CO2 capture from ethane gas in phases 9 and 10 of South Pars in Iran, using diethanolamine (DEA) as the absorbent solvent. This CO2 capture plant was designed to achieve 85% CO2 recovery and a CO2 concentration of 19 ppm at the outlet of the absorber. ASPEN–HYSYS software was used for the dynamic simulation of a commercial-scale CO2 capture plant, and the amine fluid property package was chosen for calculating the thermodynamic properties of the process. A static approach to optimization was used to evaluate the optimum conditions. This research revealed that pressure variation does not produce any considerable change in the absorption process, while increasing both the amine inlet temperature and the volumetric flow rate enhances the absorption tower efficiency. The effect of temperature is very significant, as shown in the dynamic study plots. The optimum condition for CO2 absorption from a stream of ethane gas with a molar flow rate of 2118 kg mol h−1 was obtained as 75 m3 h−1 of amine at 53 °C and 24 bar. This optimized condition is acceptable from an economic, safety and feasibility point of view.

  11. Modeling and simulation of complex systems a framework for efficient agent-based modeling and simulation

    CERN Document Server

    Siegfried, Robert

    2014-01-01

    Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing structure and dynamics of agent-based models in detail. Based on this evaluation the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore he presents parallel and distributed simulation approaches for execution of agent-based models -from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize underlying hard

  12. Exploiting the Capture Effect to Enhance RACH Performance in Cellular-Based M2M Communications

    Directory of Open Access Journals (Sweden)

    Jonghun Kim

    2017-09-01

    Full Text Available Cellular-based machine-to-machine (M2M) communication is expected to facilitate services for the Internet of Things (IoT). However, because cellular networks are designed for human users, they have some limitations. Random access channel (RACH) congestion caused by massive access from M2M devices is one of the biggest factors hindering cellular-based M2M services, because RACH congestion causes random access (RA) throughput degradation and connection failures for the devices. In this paper, we show the possibility of exploiting the capture effect, which is known to have a positive impact on wireless network systems, in the RA procedure to improve the RA performance of M2M devices. For this purpose, we analyze an RA procedure using a capture model. Through this analysis, we examine the effects of capture on RA performance and propose an Msg3 power-ramping (Msg3 PR) scheme to increase the capture probability (and thereby the RA success probability) even when severe RACH congestion occurs. The proposed analysis models are validated using simulations. The results show that the proposed scheme, with proper parameters, further improves the RA throughput and reduces the connection failure probability at the cost of a slight increase in energy consumption. Finally, we demonstrate the effects of coexistence with other RA-related schemes through simulation results.
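
    The core quantity behind such a scheme is the probability that one of several colliding Msg3 transmissions is still decoded because it is sufficiently stronger than the combined interference. The sketch below is a toy Monte Carlo illustration of that idea, not the authors' analytical model: the capture rule, the 8 dB shadowing spread and the 6 dB decoding threshold are assumed values chosen only to show how ramping a tagged device's power raises its capture probability.

      import random

      def tagged_capture_probability(n_interferers, ramp_db=0.0,
                                     threshold_db=6.0, sigma_db=8.0, trials=20000):
          # Toy Monte Carlo sketch: probability that a tagged device whose Msg3
          # power has been ramped by 'ramp_db' dB is captured (decoded) although
          # 'n_interferers' other devices transmit on the same resource.
          # Capture rule: tagged power must exceed total interference by threshold_db.
          captured = 0
          for _ in range(trials):
              tagged = 10 ** ((random.gauss(0.0, sigma_db) + ramp_db) / 10.0)
              interference = sum(10 ** (random.gauss(0.0, sigma_db) / 10.0)
                                 for _ in range(n_interferers))
              if tagged >= interference * 10 ** (threshold_db / 10.0):
                  captured += 1
          return captured / trials

      if __name__ == "__main__":
          for ramp in (0.0, 3.0, 6.0):   # each ramping step raises the capture odds
              print(ramp, round(tagged_capture_probability(2, ramp_db=ramp), 3))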

  13. Capturing a Commander's decision making style

    Science.gov (United States)

    Santos, Eugene; Nguyen, Hien; Russell, Jacob; Kim, Keumjoo; Veenhuis, Luke; Boparai, Ramnjit; Stautland, Thomas Kristoffer

    2017-05-01

    A Commander's decision-making style represents how he weighs his choices and evaluates possible solutions with regard to his goals. Specifically, in the naval warfare domain, it relates to the way he processes a large amount of information in dynamic, uncertain environments, allocates resources, and chooses appropriate actions to pursue. In this paper, we describe an approach to capturing a Commander's decision style by creating a cognitive model of his decision-making process, and we evaluate this model on a set of scenarios in an online naval warfare simulation game. In this model, we use the Commander's past behaviors and generalize his actions across multiple problems and multiple decision-making sequences in order to recommend actions to a Commander in a manner consistent with those he may have taken. Our approach builds upon the Double Transition Model, representing the Commander's focus and beliefs to estimate his cognitive state. Each cognitive state reflects a stage in a Commander's decision-making process, each action reflects the tasks he has taken to move closer to a final decision, and the reward reflects how close he is to achieving his goal. We then use inverse reinforcement learning to compute a reward for each of the Commander's actions. These rewards and cognitive states are used to compare different styles of decision making. We construct a set of scenarios in the game in which rational, intuitive and spontaneous decision-making styles are evaluated.

  14. Improving firm performance in out-of-equilibrium, deregulated markets using feedback simulation models

    International Nuclear Information System (INIS)

    Gary, S.; Larsen, E.R.

    2000-01-01

    Deregulation has reshaped the utility sector in many countries around the world. Organisations in these deregulated industries must adopt new policies to guide strategic decisions in an uncertain and unfamiliar environment, decisions that determine the short- and long-term fate of their companies. Traditional economic equilibrium models do not adequately address the issues facing these organisations in the shift towards deregulated market competition. Equilibrium assumptions break down in the out-of-equilibrium transition to competitive markets, and therefore different underpinning assumptions must be adopted in order to guide management in these periods. Simulation models incorporating information feedback through behavioural policies fill the void left by equilibrium models and support strategic policy analysis in out-of-equilibrium markets. As an example, we present a feedback simulation model developed to examine firm and industry level performance consequences of new generation capacity investment policies in the deregulated UK electricity sector. The model explicitly captures behavioural decision policies of boundedly rational managers and avoids equilibrium assumptions. Such models are essential to help managers evaluate the performance impact of various strategic policies in environments in which disequilibrium behaviour dominates. (Author)

  15. The influence of anisotropy on capture zone analysis

    International Nuclear Information System (INIS)

    Hansen, K.

    1995-01-01

    Approximately 50,000 gallons of various grades of gasoline and aviation fuel were leaked into silty clay overburden overlying phyllite of the Wissahickon Formation. Pumping tests and measurements of water table recovery from recovery and production wells suggested that elliptical cones of depression were caused by anisotropic groundwater flow in steeply dipping fractures trending between N60 degree E and N75 degree E which were formed by weathered metamorphic foliation. Fracture trace analysis, outcrop measurements, borehole camera surveys, straddle packer testing, and test excavations supported this conceptual model for hydraulic conductivity. Using both quantitative and semi-quantitative methods, the magnitude of anisotropy was calculated from both pumping tests and water table recovery data. Calculated anisotropies showed variations related to the particular method of analysis. Simulations of capture zones using numerical techniques indicated that anisotropic conditions had produced actual capture zones influenced by groundwater flow not orthogonal to equipotential lines. Capture zone shapes were significantly distorted along the primary axis of anisotropy within the range of variation in anisotropy (n) measured by the different analysis methods. Using the mean value of anisotropy from this site (n = 14), actual recovery well capture areas were subsequently corrected for anisotropic effects. The use of capture areas corrected for the mean value of anisotropy enabled more effective placement of subsequent recovery wells. The relatively consistent foliation of rocks in the Wissahickon Formation suggested that capture zone correction should be considered when developing recovery strategies in aquifer systems where anisotropic conditions are likely

  16. Experimental and Simulated Characterization of a Beam Shaping Assembly for Accelerator- Based Boron Neutron Capture Therapy (AB-BNCT)

    International Nuclear Information System (INIS)

    Burlon, Alejandro A.; Valda, Alejandro A.; Girola, Santiago; Minsky, Daniel M.; Kreiner, Andres J.

    2010-01-01

    In the frame of the construction of a Tandem Electrostatic Quadrupole Accelerator facility devoted to Accelerator-Based Boron Neutron Capture Therapy, a Beam Shaping Assembly has been characterized by means of Monte Carlo simulations and measurements. The neutrons were generated via the 7Li(p,n)7Be reaction by irradiating a thick LiF target with a 2.3 MeV proton beam delivered by the TANDAR accelerator at CNEA. The emerging neutron flux was measured by means of activation foils, while the beam quality and directionality were evaluated by means of Monte Carlo simulations. The parameters show compliance with those suggested by the IAEA. Finally, an improvement consisting of the addition of a beam collimator has been evaluated.

  17. Endothelial cell capture of heparin-binding growth factors under flow.

    Directory of Open Access Journals (Sweden)

    Bing Zhao

    2010-10-01

    Full Text Available Circulation is an important delivery method for both natural and synthetic molecules, but microenvironment interactions, regulated by endothelial cells and critical to the molecule's fate, are difficult to interpret using traditional approaches. In this work, we analyzed and predicted growth factor capture under flow using computer modeling and a three-dimensional experimental approach that includes pertinent circulation characteristics such as pulsatile flow, competing binding interactions, and limited bioavailability. An understanding of the controlling features of this process was desired. The experimental module consisted of a bioreactor with synthetic endothelial-lined hollow fibers under flow. The physical design of the system was incorporated into the model parameters. The heparin-binding growth factor fibroblast growth factor-2 (FGF-2) was used for both the experiments and simulations. Our computational model was composed of three parts: (1) media flow equations, (2) mass transport equations and (3) cell surface reaction equations. The model is based on the flow and reactions within a single hollow fiber and was scaled linearly by the total number of fibers for comparison with experimental results. Our model predicted, and experiments confirmed, that removal of heparan sulfate (HS) from the system would result in a dramatic loss of binding by heparin-binding proteins, but not by proteins that do not bind heparin. The model further predicted a significant loss of bound protein at flow rates only slightly higher than average capillary flow rates, corroborated experimentally, suggesting that the probability of capture in a single pass at high flow rates is extremely low. Several other key parameters were investigated with the coupling between receptors and proteoglycans shown to have a critical impact on successful capture. The combined system offers opportunities to examine circulation capture in a straightforward quantitative manner that

  18. VIC-CropSyst-v2: A regional-scale modeling platform to simulate the nexus of climate, hydrology, cropping systems, and human decisions

    Science.gov (United States)

    Malek, Keyvan; Stöckle, Claudio; Chinnayakanahalli, Kiran; Nelson, Roger; Liu, Mingliang; Rajagopalan, Kirti; Barik, Muhammad; Adam, Jennifer C.

    2017-08-01

    Food supply is affected by a complex nexus of land, atmosphere, and human processes, including short- and long-term stressors (e.g., drought and climate change, respectively). A simulation platform that captures these complex elements can be used to inform policy and best management practices to promote sustainable agriculture. We have developed a tightly coupled framework using the macroscale variable infiltration capacity (VIC) hydrologic model and the CropSyst agricultural model. A mechanistic irrigation module was also developed for inclusion in this framework. Because VIC-CropSyst combines two widely used and mechanistic models (for crop phenology, growth, management, and macroscale hydrology), it can provide realistic and hydrologically consistent simulations of water availability, crop water requirements for irrigation, and agricultural productivity for both irrigated and dryland systems. This allows VIC-CropSyst to provide managers and decision makers with reliable information on regional water stresses and their impacts on food production. Additionally, VIC-CropSyst is being used in conjunction with socioeconomic models, river system models, and atmospheric models to simulate feedback processes between regional water availability, agricultural water management decisions, and land-atmosphere interactions. The performance of VIC-CropSyst was evaluated on both regional (over the US Pacific Northwest) and point scales. Point-scale evaluation involved using two flux tower sites located in agricultural fields in the US (Nebraska and Illinois). The agreement between recorded and simulated evapotranspiration (ET), applied irrigation water, soil moisture, leaf area index (LAI), and yield indicated that, although the model is intended to work on regional scales, it also captures field-scale processes in agricultural areas.

  19. Integrating the simulation of domestic water demand behaviour to an urban water model using agent based modelling

    Science.gov (United States)

    Koutiva, Ifigeneia; Makropoulos, Christos

    2015-04-01

    The urban water system's sustainable evolution requires tools that can analyse and simulate the complete cycle including both physical and cultural environments. One of the main challenges, in this regard, is the design and development of tools that are able to simulate the society's water demand behaviour and the way policy measures affect it. The effects of these policy measures are a function of personal opinions that subsequently lead to the formation of people's attitudes. These attitudes will eventually form behaviours. This work presents the design of an ABM tool for addressing the social dimension of the urban water system. The created tool, called Urban Water Agents' Behaviour (UWAB) model, was implemented, using the NetLogo agent programming language. The main aim of the UWAB model is to capture the effects of policies and environmental pressures to water conservation behaviour of urban households. The model consists of agents representing urban households that are linked to each other creating a social network that influences the water conservation behaviour of its members. Household agents are influenced as well by policies and environmental pressures, such as drought. The UWAB model simulates behaviour resulting in the evolution of water conservation within an urban population. The final outcome of the model is the evolution of the distribution of different conservation levels (no, low, high) to the selected urban population. In addition, UWAB is implemented in combination with an existing urban water management simulation tool, the Urban Water Optioneering Tool (UWOT) in order to create a modelling platform aiming to facilitate an adaptive approach of water resources management. For the purposes of this proposed modelling platform, UWOT is used in a twofold manner: (1) to simulate domestic water demand evolution and (2) to simulate the response of the water system to the domestic water demand evolution. The main advantage of the UWAB - UWOT model

  20. Magnetic Capture of a Molecular Biomarker from Synovial Fluid in a Rat Model of Knee Osteoarthritis.

    Science.gov (United States)

    Yarmola, Elena G; Shah, Yash; Arnold, David P; Dobson, Jon; Allen, Kyle D

    2016-04-01

    Biomarker development for osteoarthritis (OA) often begins in rodent models, but can be limited by an inability to aspirate synovial fluid from a rodent stifle (similar to the human knee). To address this limitation, we have developed a magnetic nanoparticle-based technology to collect biomarkers from a rodent stifle, termed magnetic capture. Using a common OA biomarker--the c-terminus telopeptide of type II collagen (CTXII)--magnetic capture was optimized in vitro using bovine synovial fluid and then tested in a rat model of knee OA. Anti-CTXII antibodies were conjugated to the surface of superparamagnetic iron oxide-containing polymeric particles. Using these anti-CTXII particles, magnetic capture was able to estimate the level of CTXII in 25 μL aliquots of bovine synovial fluid; and under controlled conditions, this estimate was unaffected by synovial fluid viscosity. Following in vitro testing, anti-CTXII particles were tested in a rat monoiodoacetate model of knee OA. CTXII could be magnetically captured from a rodent stifle without the need to aspirate fluid and showed tenfold changes in CTXII levels from OA-affected joints relative to contralateral control joints. Combined, these data demonstrate the ability and sensitivity of magnetic capture for post-mortem analysis of OA biomarkers in the rat.

  1. Sampling the stream landscape: Improving the applicability of an ecoregion-level capture probability model for stream fishes

    Science.gov (United States)

    Mollenhauer, Robert; Mouser, Joshua B.; Brewer, Shannon K.

    2018-01-01

    Temporal and spatial variability in streams result in heterogeneous gear capture probability (i.e., the proportion of available individuals identified) that confounds interpretation of data used to monitor fish abundance. We modeled tow-barge electrofishing capture probability at multiple spatial scales for nine Ozark Highland stream fishes. In addition to fish size, we identified seven reach-scale environmental characteristics associated with variable capture probability: stream discharge, water depth, conductivity, water clarity, emergent vegetation, wetted width–depth ratio, and proportion of riffle habitat. The magnitude of the relationship between capture probability and both discharge and depth varied among stream fishes. We also identified lithological characteristics among stream segments as a coarse-scale source of variable capture probability. The resulting capture probability model can be used to adjust catch data and derive reach-scale absolute abundance estimates across a wide range of sampling conditions with similar effort as used in more traditional fisheries surveys (i.e., catch per unit effort). Adjusting catch data based on variable capture probability improves the comparability of data sets, thus promoting both well-informed conservation and management decisions and advances in stream-fish ecology.
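
    The practical use of such a model is to convert raw catch into an estimate of absolute abundance by dividing by the predicted capture probability. The sketch below illustrates that adjustment with a generic logistic-link model; the covariates, coefficient values and intercept are placeholders for illustration, not the fitted Ozark Highland model.

      import math

      def capture_probability(covariates, coefs, intercept):
          # Logistic-link capture probability from reach-scale covariates.
          # Coefficient values used below are placeholders, not fitted estimates.
          eta = intercept + sum(coefs[name] * value for name, value in covariates.items())
          return 1.0 / (1.0 + math.exp(-eta))

      def adjusted_abundance(catch, p_capture):
          # Convert raw catch to an estimate of absolute abundance.
          return catch / p_capture

      coefs = {"discharge_z": -0.40, "depth_z": -0.25, "turbidity_z": -0.30}
      reach = {"discharge_z": 1.2, "depth_z": 0.5, "turbidity_z": -0.3}
      p = capture_probability(reach, coefs, intercept=-0.2)
      print(round(p, 3), round(adjusted_abundance(38, p), 1))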

  2. Evaluation of a present-day climate simulation with a new coupled atmosphere-ocean model GENMOM

    Directory of Open Access Journals (Sweden)

    J. R. Alder

    2011-02-01

    Full Text Available We present a new, non-flux corrected AOGCM, GENMOM, that combines the GENESIS version 3 atmospheric GCM (Global Environmental and Ecological Simulation of Interactive Systems) and MOM2 (Modular Ocean Model version 2), nominally at T31 resolution. We evaluate GENMOM by comparison with reanalysis products (e.g., NCEP2) and three models used in the IPCC AR4 assessment. GENMOM produces a global temperature bias of 0.6 °C. Atmospheric features such as the jet stream structure and major semi-permanent sea level pressure centers are well simulated as is the mean planetary-scale wind structure that is needed to produce the correct position of stormtracks. Most ocean surface currents are reproduced except where they are not resolvable at T31 resolution. Overall, GENMOM captures reasonably well the observed gradients and spatial distributions of annual surface temperature and precipitation and the simulations are on par with other AOGCMs. Deficiencies in the GENMOM simulations include a warm bias in the surface temperature over the southern oceans, a split in the ITCZ and weaker-than-observed overturning circulation.

  3. Heat waves over Central Europe in regional climate model simulations

    Science.gov (United States)

    Lhotka, Ondřej; Kyselý, Jan

    2014-05-01

    Regional climate models (RCMs) have become a powerful tool for exploring impacts of global climate change on a regional scale. The aim of the study is to evaluate the capability of RCMs to reproduce characteristics of major heat waves over Central Europe in their simulations of the recent climate (1961-2000), with a focus on the most severe and longest Central European heat wave that occurred in 1994. We analyzed 7 RCM simulations with a high resolution (0.22°) from the ENSEMBLES project, driven by the ERA-40 reanalysis. In observed data (the E-OBS 9.0 dataset), heat waves were defined on the basis of deviations of daily maximum temperature (Tmax) from the 95% quantile of summer Tmax distribution in grid points over Central Europe. The same methodology was applied in the RCM simulations; we used corresponding 95% quantiles (calculated for each RCM and grid point) in order to remove the bias of modelled Tmax. While climatological characteristics of heat waves are reproduced reasonably well in the RCM ensemble, we found major deficiencies in simulating heat waves in individual years. For example, METNOHIRHAM simulated very severe heat waves in 1996, when no heat wave was observed. Focusing on the major 1994 heat wave, considerable differences in simulated temperature patterns were found among the RCMs. The differences in the temperature patterns were clearly linked to the simulated amount of precipitation during this event. The 1994 heat wave was almost absent in all RCMs that did not capture the observed precipitation deficit, while it was by far most pronounced in KNMI-RACMO that simulated virtually no precipitation over Central Europe during the 15-day period of the heat wave. By contrast to precipitation, values of evaporative fraction in the RCMs were not linked to severity of the simulated 1994 heat wave. This suggests a possible major contribution of other factors such as cloud cover and associated downward shortwave radiation. Therefore, a more detailed
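
    As a rough illustration of the grid-point definition used here (days whose Tmax exceeds the 95% quantile of the summer Tmax distribution), the sketch below flags heat-wave days in a single daily series; the three-day persistence criterion is an added simplifying assumption, not necessarily the study's exact rule.

      import numpy as np

      def heatwave_days(tmax, q=0.95, min_run=3):
          # Flag heat-wave days in a 1-D daily summer Tmax series: days whose Tmax
          # exceeds the q-quantile and that belong to a run of at least 'min_run'
          # consecutive exceedances (the persistence rule is a simplifying assumption).
          tmax = np.asarray(tmax, dtype=float)
          hot = tmax > np.quantile(tmax, q)
          flagged = np.zeros_like(hot)
          run_start = None
          for i, h in enumerate(np.append(hot, False)):   # sentinel closes the final run
              if h and run_start is None:
                  run_start = i
              elif not h and run_start is not None:
                  if i - run_start >= min_run:
                      flagged[run_start:i] = True
                  run_start = None
          return flagged

      rng = np.random.default_rng(0)
      tmax = 25 + 4 * rng.standard_normal(92)              # synthetic JJA series
      print(int(heatwave_days(tmax).sum()), "heat-wave days")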

  4. Hybrid Large Eddy Simulation / Reynolds Averaged Navier-Stokes Modeling in Directed Energy Applications

    Science.gov (United States)

    Zilberter, Ilya Alexandrovich

    In this work, a hybrid Large Eddy Simulation / Reynolds-Averaged Navier-Stokes (LES/RANS) turbulence model is applied to simulate two flows relevant to directed energy applications. The flow solver blends the Menter Baseline turbulence closure near solid boundaries with a Lenormand-type subgrid model in the free-stream with a blending function that employs the ratio of estimated inner and outer turbulent length scales. A Mach 2.2 mixing nozzle/diffuser system representative of a gas laser is simulated under a range of exit pressures to assess the ability of the model to predict the dynamics of the shock train. The simulation captures the location of the shock train responsible for pressure recovery but under-predicts the rate of pressure increase. Predicted turbulence production at the wall is found to be highly sensitive to the behavior of the RANS turbulence model. A Mach 2.3, high-Reynolds number, three-dimensional cavity flow is also simulated in order to compute the wavefront aberrations of an optical beam passing through the cavity. The cavity geometry is modeled using an immersed boundary method, and an auxiliary flat plate simulation is performed to replicate the effects of the wind-tunnel boundary layer on the computed optical path difference. Pressure spectra extracted on the cavity walls agree with empirical predictions based on Rossiter's formula. Proper orthogonal modes of the wavefront aberrations in a beam originating from the cavity center agree well with experimental data despite uncertainty about inflow turbulence levels and boundary layer thicknesses over the wind tunnel window. Dynamic mode decomposition of a planar wavefront spanning the cavity reveals that wavefront distortions are driven by shear layer oscillations at the Rossiter frequencies; these disturbances create eddy shocklets that propagate into the free-stream, creating additional optical wavefront distortion.

  5. Applying the flow-capturing location-allocation model to an authentic network: Edmonton, Canada

    NARCIS (Netherlands)

    M.J. Hodgson (John); K.E. Rosing (Kenneth); A.L.G. Storrier (Leontien)

    1996-01-01

    textabstractTraditional location-allocation models aim to locate network facilities to optimally serve demand expressed as weights at nodes. For some types of facilities demand is not expressed at nodes, but as passing network traffic. The flow-capturing location-allocation model responds to this

  6. Particle capture efficiency in a multi-wire model for high gradient magnetic separation

    KAUST Repository

    Eisenträger, Almut

    2014-07-21

    High gradient magnetic separation (HGMS) is an efficient way to remove magnetic and paramagnetic particles, such as heavy metals, from waste water. As the suspension flows through a magnetized filter mesh, high magnetic gradients around the wires attract and capture the particles removing them from the fluid. We model such a system by considering the motion of a paramagnetic tracer particle through a periodic array of magnetized cylinders. We show that there is a critical Mason number (ratio of viscous to magnetic forces) below which the particle is captured irrespective of its initial position in the array. Above this threshold, particle capture is only partially successful and depends on the particle's entry position. We determine the relationship between the critical Mason number and the system geometry using numerical and asymptotic calculations. If a capture efficiency below 100% is sufficient, our results demonstrate how operating the HGMS system above the critical Mason number but with multiple separation cycles may increase efficiency. © 2014 AIP Publishing LLC.
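
    A minimal sketch of the underlying idea follows: in the inertialess (Stokes) limit the tracer's velocity is the background flow plus a magnetic drift scaled by 1/Mason, so small Mason numbers pull the particle onto the wire while large ones let it escape. The single wire, the r**-5 far-field force law and all numerical values are simplifying assumptions for illustration, not the paper's periodic-array model.

      import numpy as np

      def trajectory(y0, mason, dt=1e-3, steps=40000, a_wire=1.0):
          # Trace a paramagnetic point particle past a single magnetized wire at the
          # origin (a stand-in for the periodic array). Inertialess limit: velocity is
          # uniform flow in +x plus a magnetic drift ~ 1/Mason pulling toward the wire;
          # the r**-5 decay is a far-field simplification that ignores angular structure.
          pos = np.array([-10.0, float(y0)])
          for _ in range(steps):
              r_vec = -pos                        # points from the particle toward the wire
              r = np.linalg.norm(r_vec)
              if r <= a_wire:                     # reached the wire surface: captured
                  return True
              v = np.array([1.0, 0.0]) + (1.0 / mason) * r_vec / r ** 6
              pos = pos + v * dt
              if pos[0] > 10.0:                   # escaped downstream
                  return False
          return False

      for ma in (0.05, 0.5, 5.0):
          print(ma, "captured" if trajectory(y0=1.5, mason=ma) else "escaped")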

  7. Development of a mechanistic model for prediction of CO2 capture from gas mixtures by amine solutions in porous membranes.

    Science.gov (United States)

    Ghadiri, Mehdi; Marjani, Azam; Shirazian, Saeed

    2017-06-01

    A mechanistic model was developed in order to predict capture and removal of CO2 from air using membrane technology. The considered membrane was a hollow-fiber contactor module in which a gas mixture containing CO2 was assumed as the feed while 2-amino-2-methyl-1-propanol (AMP) was used as an absorbent. The mechanistic model was developed according to transport phenomena taking into account mass transfer and chemical reaction between CO2 and amine in the contactor module. The main aim of modeling was to track the composition and flux of CO2 and AMP in the membrane module for process optimization. For modeling of the process, the governing equations were computed using a finite element approach in which the whole model domain was discretized into small cells. To confirm the simulation findings, model outcomes were compared with experimental data and good consistency was revealed. The results showed that increasing the temperature of the AMP solution increases CO2 removal in the hollow-fiber membrane contactor.

  8. Simulations of mixing in Inertial Confinement Fusion with front tracking and sub-grid scale models

    Science.gov (United States)

    Rana, Verinder; Lim, Hyunkyung; Melvin, Jeremy; Cheng, Baolian; Glimm, James; Sharp, David

    2015-11-01

    We present two related results. The first discusses the Richtmyer-Meshkov (RMI) and Rayleigh-Taylor instabilities (RTI) and their evolution in Inertial Confinement Fusion simulations. We show the evolution of the RMI to the late time RTI under transport effects and tracking. The role of the sub-grid scales helps capture the interaction of turbulence with diffusive processes. The second assesses the effects of concentration on the physics model and examines the mixing properties in the low Reynolds number hot spot. We discuss the effect of concentration on the Schmidt number. The simulation results are produced using the University of Chicago code FLASH and Stony Brook University's front tracking algorithm.

  9. Multi-fuel multi-product operation of IGCC power plants with carbon capture and storage (CCS)

    International Nuclear Information System (INIS)

    Cormos, Ana-Maria; Dinca, Cristian; Cormos, Calin-Cristian

    2015-01-01

    This paper investigates multi-fuel multi-product operation of IGCC plants with carbon capture and storage (CCS). The investigated plant designs co-process coal with different sorts of biomass (e.g. sawdust) and solid wastes, through gasification, leading to different decarbonised energy vectors (power, hydrogen, heat, substitute natural gas etc.) simultaneously with carbon capture. Co-gasification of coal with different renewable energy sources coupled with carbon capture will pave the way towards zero emissions power plants. The energy conversions investigated in the paper were simulated using a commercial process flow modelling package (ChemCAD) in order to produce mass and energy balances necessary for the proposed evaluation. As illustrative cases, hydrogen and power co-generation and Fischer–Tropsch fuel synthesis (both with carbon capture) were presented. The case studies investigated in the paper produce a flexible ratio between power and hydrogen (in the range of 400–600 MW net electricity and 0–200 MWth hydrogen considering the lower heating value) with at least a 90% carbon capture rate. Special emphasis was given to fuel selection criteria for optimisation of gasification performances (fuel blending), to the selection criteria for the gasification reactor in a multi-fuel multi-product operation scenario, to modelling and simulation of the whole process, to thermal and power integration of processes, to flexibility analysis of the energy conversion processes, to in-depth techno-economic and environmental assessment etc. - Highlights: • Assessment of IGCC-based energy vectors poly-generation systems with CCS. • Optimisation of gasification performances and CO2 emissions by fuel blending. • Multi-fuel multi-product operation of gasification plants

  10. Simulation in Complex Modelling

    DEFF Research Database (Denmark)

    Nicholas, Paul; Ramsgaard Thomsen, Mette; Tamke, Martin

    2017-01-01

    This paper will discuss the role of simulation in extended architectural design modelling. As a framing paper, the aim is to present and discuss the role of integrated design simulation and feedback between design and simulation in a series of projects under the Complex Modelling framework. Complex...... performance, engage with high degrees of interdependency and allow the emergence of design agency and feedback between the multiple scales of architectural construction. This paper presents examples for integrated design simulation from a series of projects including Lace Wall, A Bridge Too Far and Inflated...... Restraint developed for the research exhibition Complex Modelling, Meldahls Smedie Gallery, Copenhagen in 2016. Where the direct project aims and outcomes have been reported elsewhere, the aim for this paper is to discuss overarching strategies for working with design integrated simulation....

  11. Desktop Modeling and Simulation: Parsimonious, yet Effective Discrete-Event Simulation Analysis

    Science.gov (United States)

    Bradley, James R.

    2012-01-01

    This paper evaluates how quickly students can be trained to construct useful discrete-event simulation models using Excel. The typical supply chain used by many large national retailers is described, and an Excel-based simulation model of it is constructed. The set of programming and simulation skills required for development of that model is then determined; we conclude that six hours of training are required to teach these skills to MBA students. The simulation presented here contains all the fundamental functionality of a simulation model, and so our result holds for any discrete-event simulation model. We argue therefore that industry workers with the same technical skill set as students who have completed one year in an MBA program can be quickly trained to construct simulation models. This result gives credence to the efficacy of Desktop Modeling and Simulation, whereby simulation analyses can be quickly developed, run, and analyzed with widely available software, namely Excel.
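
    The paper's models are spreadsheet-based, so no Excel formulas are reproduced here; as a language-neutral illustration of the same fundamental functionality (an event list, a simulation clock, state updates and a summary statistic), the sketch below runs a single-server queue with exponential interarrival and service times.

      import heapq, random

      def mm1_simulation(arrival_rate=0.8, service_rate=1.0, horizon=10000.0, seed=1):
          # Minimal discrete-event simulation of a FIFO single-server queue.
          rng = random.Random(seed)
          events = [(rng.expovariate(arrival_rate), "arrival")]
          clock, queue, busy = 0.0, 0, False
          served, total_time, arrivals = 0, 0.0, []
          while events:
              clock, kind = heapq.heappop(events)
              if clock > horizon:
                  break
              if kind == "arrival":
                  heapq.heappush(events, (clock + rng.expovariate(arrival_rate), "arrival"))
                  arrivals.append(clock)
                  if busy:
                      queue += 1
                  else:
                      busy = True
                      heapq.heappush(events, (clock + rng.expovariate(service_rate), "departure"))
              else:  # departure of the customer currently in service
                  served += 1
                  total_time += clock - arrivals.pop(0)
                  if queue > 0:
                      queue -= 1
                      heapq.heappush(events, (clock + rng.expovariate(service_rate), "departure"))
                  else:
                      busy = False
          return total_time / served if served else 0.0

      print("mean time in system:", round(mm1_simulation(), 2))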

  12. Forecasting Lightning Threat using Cloud-Resolving Model Simulations

    Science.gov (United States)

    McCaul, Eugene W., Jr.; Goodman, Steven J.; LaCasse, Katherine M.; Cecil, Daniel J.

    2008-01-01

    Two new approaches are proposed and developed for making time and space dependent, quantitative short-term forecasts of lightning threat, and a blend of these approaches is devised that capitalizes on the strengths of each. The new methods are distinctive in that they are based entirely on the ice-phase hydrometeor fields generated by regional cloud-resolving numerical simulations, such as those produced by the WRF model. These methods are justified by established observational evidence linking aspects of the precipitating ice hydrometeor fields to total flash rates. The methods are straightforward and easy to implement, and offer an effective near-term alternative to the incorporation of complex and costly cloud electrification schemes into numerical models. One method is based on upward fluxes of precipitating ice hydrometeors in the mixed phase region at the -15 °C level, while the second method is based on the vertically integrated amounts of ice hydrometeors in each model grid column. Each method can be calibrated by comparing domain-wide statistics of the peak values of simulated flash rate proxy fields against domain-wide peak total lightning flash rate density data from observations. Tests show that the first method is able to capture much of the temporal variability of the lightning threat, while the second method does a better job of depicting the areal coverage of the threat. Our blended solution is designed to retain most of the temporal sensitivity of the first method, while adding the improved spatial coverage of the second. Exploratory tests for selected North Alabama cases show that, because WRF can distinguish the general character of most convective events, our methods show promise as a means of generating quantitatively realistic fields of lightning threat. However, because the models tend to have more difficulty in predicting the instantaneous placement of storms, forecasts of the detailed location of the lightning threat based on single
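
    A minimal sketch of the two proxies and their blend is given below, assuming gridded model output on a (level, y, x) grid; the calibration constants and blend weight are placeholders standing in for the calibration against observed peak flash-rate densities described above.

      import numpy as np

      def lightning_threat(w, q_ice, rho, dz, k_minus15, c1=1.0, c2=1.0, w1=0.5):
          # w and q_ice are (nz, ny, nx) arrays; rho may broadcast; k_minus15 is the
          # vertical index of the -15 °C level; c1, c2 and w1 are placeholder constants.
          proxy1 = c1 * np.maximum(w[k_minus15], 0.0) * q_ice[k_minus15]  # upward ice flux
          proxy2 = c2 * np.sum(rho * q_ice * dz, axis=0)                  # column-integrated ice
          return w1 * proxy1 + (1.0 - w1) * proxy2

      nz, ny, nx = 40, 50, 50
      rng = np.random.default_rng(2)
      w = rng.normal(0.0, 2.0, (nz, ny, nx))
      q_ice = np.abs(rng.normal(0.0, 1e-3, (nz, ny, nx)))
      rho = np.linspace(1.1, 0.2, nz)[:, None, None]       # simple density profile
      threat = lightning_threat(w, q_ice, rho, dz=400.0, k_minus15=22)
      print(threat.shape, float(threat.max()))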

  13. Monte Carlo based dosimetry and treatment planning for neutron capture therapy of brain tumors

    International Nuclear Information System (INIS)

    Zamenhof, R.G.; Brenner, J.F.; Wazer, D.E.; Madoc-Jones, H.; Clement, S.D.; Harling, O.K.; Yanch, J.C.

    1990-01-01

    Monte Carlo based dosimetry and computer-aided treatment planning for neutron capture therapy have been developed to provide the necessary link between physical dosimetric measurements performed on the MITR-II epithermal-neutron beams and the need of the radiation oncologist to synthesize large amounts of dosimetric data into a clinically meaningful treatment plan for each individual patient. Monte Carlo simulation has been employed to characterize the spatial dose distributions within a skull/brain model irradiated by an epithermal-neutron beam designed for neutron capture therapy applications. The geometry and elemental composition employed for the mathematical skull/brain model and the neutron and photon fluence-to-dose conversion formalism are presented. A treatment planning program, NCTPLAN, developed specifically for neutron capture therapy, is described. Examples are presented illustrating both one and two-dimensional dose distributions obtainable within the brain with an experimental epithermal-neutron beam, together with beam quality and treatment plan efficacy criteria which have been formulated for neutron capture therapy. The incorporation of three-dimensional computed tomographic image data into the treatment planning procedure is illustrated

  14. Improved simulation of tropospheric ozone by a global-multi-regional two-way coupling model system

    Directory of Open Access Journals (Sweden)

    Y. Yan

    2016-02-01

    Full Text Available Small-scale nonlinear chemical and physical processes over pollution source regions affect the tropospheric ozone (O3), but these processes are not captured by current global chemical transport models (CTMs) and chemistry–climate models that are limited by coarse horizontal resolutions (100–500 km, typically 200 km). These models tend to contain large (and mostly positive) tropospheric O3 biases in the Northern Hemisphere. Here we use the recently built two-way coupling system of the GEOS-Chem CTM to simulate the regional and global tropospheric O3 in 2009. The system couples the global model (at 2.5° long. × 2° lat.) and its three nested models (at 0.667° long. × 0.5° lat.) covering Asia, North America and Europe, respectively. Specifically, the nested models take lateral boundary conditions (LBCs) from the global model, better capture small-scale processes and feed back to modify the global model simulation within the nested domains, with a subsequent effect on their LBCs. Compared to the global model alone, the two-way coupled system better simulates the tropospheric O3 both within and outside the nested domains, as found by evaluation against a suite of ground (1420 sites from the World Data Centre for Greenhouse Gases (WDCGG), the United States National Oceanic and Atmospheric Administration (NOAA) Earth System Research Laboratory Global Monitoring Division (GMD), the Chemical Coordination Centre of the European Monitoring and Evaluation Programme (EMEP), and the United States Environmental Protection Agency Air Quality System (AQS)), aircraft (the High-performance Instrumented Airborne Platform for Environmental Research (HIAPER) Pole-to-Pole Observations (HIPPO) and Measurement of Ozone and Water Vapor by Airbus In-Service Aircraft (MOZAIC)) and satellite measurements (two Ozone Monitoring Instrument (OMI) products). The two-way coupled simulation enhances the correlation in day-to-day variation of afternoon mean surface O3

  15. Neutron-capture cross sections from indirect measurements

    Directory of Open Access Journals (Sweden)

    Scielzo N.D.

    2012-02-01

    Full Text Available Cross sections for compound-nuclear reactions play an important role in models of astrophysical environments and simulations of the nuclear fuel cycle. Providing reliable cross section data remains a formidable task, and direct measurements have to be complemented by theoretical predictions and indirect methods. The surrogate nuclear reactions method provides an indirect approach for determining cross sections for reactions on unstable isotopes, which are difficult or impossible to measure otherwise. Current implementations of the method provide useful cross sections for (n,f) reactions, but need to be improved upon for applications to capture reactions.

  16. STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies

    Directory of Open Access Journals (Sweden)

    Hepburn Iain

    2012-05-01

    Full Text Available Abstract Background Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. Results We describe STEPS, a stochastic reaction–diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction–diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. Conclusion STEPS simulates
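
    For intuition about the stochastic kinetics such simulators solve, the sketch below implements the basic Gillespie direct method for a single reversible binding reaction in a well-mixed volume; STEPS itself uses the faster composition-and-rejection variant and adds diffusion between tetrahedral voxels, neither of which is reproduced here.

      import math, random

      def gillespie_direct(k_bind=1e-3, k_unbind=0.1, A=200, B=150, AB=0,
                           t_end=50.0, seed=3):
          # Basic Gillespie direct method for A + B <-> AB in a well-mixed volume.
          rng = random.Random(seed)
          t, history = 0.0, [(0.0, AB)]
          while t < t_end:
              a1 = k_bind * A * B          # propensity of binding
              a2 = k_unbind * AB           # propensity of unbinding
              a0 = a1 + a2
              if a0 == 0.0:
                  break
              t += -math.log(1.0 - rng.random()) / a0     # time to the next reaction
              if rng.random() * a0 < a1:
                  A, B, AB = A - 1, B - 1, AB + 1
              else:
                  A, B, AB = A + 1, B + 1, AB - 1
              history.append((t, AB))
          return history

      print(gillespie_direct()[-1])    # (time, AB count) at the end of the run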

  17. Low-cost structured-light based 3D capture system design

    Science.gov (United States)

    Dong, Jing; Bengtson, Kurt R.; Robinson, Barrett F.; Allebach, Jan P.

    2014-03-01

    Most of the 3D capture products currently in the market are high-end and pricey. They are not targeted for consumers, but rather for research, medical, or industrial usage. Very few aim to provide a solution for home and small business applications. Our goal is to fill in this gap by only using low-cost components to build a 3D capture system that can satisfy the needs of this market segment. In this paper, we present a low-cost 3D capture system based on the structured-light method. The system is built around the HP TopShot LaserJet Pro M275. For our capture device, we use the 8.0 Mpixel camera that is part of the M275. We augment this hardware with two 3M MPro 150 VGA (640 × 480) pocket projectors. We also describe an analytical approach to predicting the achievable resolution of the reconstructed 3D object based on differentials and small signal theory, and an experimental procedure for validating that the system under test meets the specifications for reconstructed object resolution that are predicted by our analytical model. By comparing our experimental measurements from the camera-projector system with the simulation results based on the model for this system, we conclude that our prototype system has been correctly configured and calibrated. We also conclude that with the analytical models, we have an effective means for specifying system parameters to achieve a given target resolution for the reconstructed object.

  18. Enhanced mutual capture of colored solitons by matched modulator

    Science.gov (United States)

    Feigenbaum, Eyal; Orenstein, Meir

    2004-08-01

    The mutual capture of two colored solitons is enhanced by a modulator, to a level which enables its practical exploitation, e.g., for a read-write mechanism in a soliton buffer. The enhanced capture was analyzed using closed form particle-like soliton perturbation, and verified by numerical simulations. Optimal modulator frequency and modulation depth are obtained. This mutual capture can be utilized for all-optical soliton logic and memory.

  19. Multiscale modeling of a rectifying bipolar nanopore: explicit-water versus implicit-water simulations.

    Science.gov (United States)

    Ható, Zoltán; Valiskó, Mónika; Kristóf, Tamás; Gillespie, Dirk; Boda, Dezsö

    2017-07-21

    In a multiscale modeling approach, we present computer simulation results for a rectifying bipolar nanopore at two modeling levels. In an all-atom model, we use explicit water to simulate ion transport directly with the molecular dynamics technique. In a reduced model, we use implicit water and apply the Local Equilibrium Monte Carlo method together with the Nernst-Planck transport equation. This hybrid method makes the fast calculation of ion transport possible at the price of lost details. We show that the implicit-water model is an appropriate representation of the explicit-water model when we look at the system at the device (i.e., input vs. output) level. The two models produce qualitatively similar behavior of the electrical current for different voltages and model parameters. Looking at the details of concentration and potential profiles, we find profound differences between the two models. These differences, however, do not influence the basic behavior of the model as a device because they do not influence the z-dependence of the concentration profiles which are the main determinants of current. These results then address an old paradox: how do reduced models, whose assumptions should break down in a nanoscale device, predict experimental data? Our simulations show that reduced models can still capture the overall device physics correctly, even though they get some important aspects of the molecular-scale physics quite wrong; reduced models work because they include the physics that is necessary from the point of view of device function. Therefore, reduced models can suffice for general device understanding and device design, but more detailed models might be needed for molecular level understanding.

  20. Modifying a dynamic global vegetation model for simulating large spatial scale land surface water balance

    Science.gov (United States)

    Tang, G.; Bartlein, P. J.

    2012-01-01

    Water balance models of simple structure are easier to grasp and more clearly connect cause and effect than models of complex structure. Such models are essential for studying large spatial scale land surface water balance in the context of climate and land cover change, both natural and anthropogenic. This study aims to (i) develop a large spatial scale water balance model by modifying a dynamic global vegetation model (DGVM), and (ii) test the model's performance in simulating actual evapotranspiration (ET), soil moisture and surface runoff for the coterminous United States (US). Toward these ends, we first introduced development of the "LPJ-Hydrology" (LH) model by incorporating satellite-based land covers into the Lund-Potsdam-Jena (LPJ) DGVM instead of dynamically simulating them. We then ran LH using historical (1982-2006) climate data and satellite-based land covers at 2.5 arc-min grid cells. The simulated ET, soil moisture and surface runoff were compared to existing sets of observed or simulated data for the US. The results indicated that LH captures well the variations of monthly actual ET (R2 = 0.61), soil moisture (R2 = 0.46) and surface runoff (R2 = 0.52) relative to observed values over the years 1982-2006. The modeled spatial patterns of annual ET and surface runoff are in accordance with previously published data. Compared to its predecessor, LH simulates better monthly stream flow in winter and early spring by incorporating effects of solar radiation on snowmelt. Overall, this study proves the feasibility of incorporating satellite-based land-covers into a DGVM for simulating large spatial scale land surface water balance. LH developed in this study should be a useful tool for studying effects of climate and land cover change on land surface hydrology at large spatial scales.
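
    The essential bookkeeping of such a model can be shown with a single-cell bucket sketch: precipitation fills a soil store, actual ET is limited by soil wetness, and overflow leaves as runoff. The parameter values below are illustrative assumptions and bear no relation to LPJ-Hydrology's internals.

      def bucket_water_balance(precip, pet, capacity=150.0, sm0=75.0):
          # Single-cell monthly bucket model: actual ET is PET scaled by relative
          # soil wetness, surplus above the bucket capacity leaves as runoff.
          sm, out = sm0, []
          for p, e in zip(precip, pet):
              aet = min(e * sm / capacity, sm + p)    # supply-limited evapotranspiration
              sm = sm + p - aet
              runoff = max(sm - capacity, 0.0)
              sm -= runoff
              out.append((round(aet, 1), round(runoff, 1), round(sm, 1)))
          return out

      precip = [90, 80, 70, 50, 30, 20, 10, 15, 35, 60, 85, 95]     # mm per month
      pet    = [15, 20, 40, 60, 90, 110, 120, 105, 70, 40, 20, 15]
      for month, row in enumerate(bucket_water_balance(precip, pet), start=1):
          print(month, row)                         # (actual ET, runoff, soil moisture)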

  1. Simulation and Modeling Application in Agricultural Mechanization

    Directory of Open Access Journals (Sweden)

    R. M. Hudzari

    2012-01-01

    Full Text Available This experiment was conducted to determine the equations relating the Hue digital values of the oil palm fruit surface to the maturity stage of the fruit in the plantation. The FFB images were zoomed and captured using a Nikon digital camera, and the Hue was calculated from the most frequent values of the R, G, and B color components obtained with histogram analysis software. A new procedure for monitoring the image pixel values of the oil palm fruit surface color during real-time growth to maturity was developed. The prediction of harvesting day was calculated based on the developed model relating Hue values to mesocarp oil content. The regressed simulation model predicts the day of harvesting, or the number of days before harvest, of the FFB. The results of the mesocarp oil content experiments can be used for real-time oil content determination with the MPOB color meter. A graph for determining the day of harvesting the FFB is presented in this research. The oil was found to start developing in the mesocarp 65 days before the fruit reaches the ripe maturity stage of 75% oil to dry mesocarp.
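
    The colour-indexing step described above (taking the most frequent R, G and B values from the histogram and converting them to a Hue angle) can be sketched as follows; the synthetic pixel data and the per-channel-mode procedure are assumptions for illustration, and the regression of Hue against days to harvest would have to be fitted to field data not shown here.

      import colorsys
      import numpy as np

      def modal_hue(region):
          # Hue (degrees) of the most frequent R, G, B values in an image region.
          # 'region' is an (N, 3) uint8 array of fruit-surface pixels.
          modes = []
          for channel in range(3):
              counts = np.bincount(region[:, channel], minlength=256)
              modes.append(int(np.argmax(counts)))
          r, g, b = (m / 255.0 for m in modes)
          hue, _, _ = colorsys.rgb_to_hsv(r, g, b)
          return hue * 360.0

      # synthetic orange-ish "ripe fruit" pixels
      rng = np.random.default_rng(4)
      pixels = np.clip(rng.normal([200, 90, 30], 10, (5000, 3)), 0, 255).astype(np.uint8)
      print(round(modal_hue(pixels), 1), "degrees")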

  2. An efficient shock-capturing scheme for simulating compressible homogeneous mixture flow

    Energy Technology Data Exchange (ETDEWEB)

    Dang, Son Tung; Ha, Cong Tu; Park, Warn Gyu [School of Mechanical Engineering, Pusan National University, Busan (Korea, Republic of)]; Jung, Chul Min [Advanced Naval Technology Center, NSRDI, ADD, Changwon (Korea, Republic of)]

    2016-09-15

    This work is devoted to the development of a procedure for the numerical solution of Navier-Stokes equations for cavitating flows with and without ventilation based on a compressible, multiphase, homogeneous mixture model. The governing equations are discretized on a general structured grid using a high-resolution shock-capturing scheme in conjunction with appropriate limiters to prevent the generation of spurious solutions near shock waves or discontinuities. Two well-known limiters are examined, and a new limiter is proposed to enhance the accuracy and stability of the numerical scheme. A sensitivity analysis is first conducted to determine the relative influences of various model parameters on the solution. These parameters are adopted for the computation of water flows over a hemispherical body, conical body and a divergent/convergent nozzle. Finally, numerical calculations of ventilated supercavitating flows over a hemispherical cylinder body with a hot propulsive jet are conducted to verify the capabilities of the numerical scheme.
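
    For reference, two of the standard limiter functions that such shock-capturing schemes draw on are sketched below (the minmod slope limiter and the van Leer flux limiter); the new limiter proposed in the paper is not reproduced here.

      def minmod(a, b):
          # Minmod limiter: return the smaller-magnitude slope when the signs agree,
          # zero otherwise (suppresses oscillations near discontinuities).
          if a * b <= 0.0:
              return 0.0
          return a if abs(a) < abs(b) else b

      def van_leer(r):
          # Van Leer flux limiter as a function of the consecutive-slope ratio r.
          return (r + abs(r)) / (1.0 + abs(r))

      print(minmod(0.5, 1.2), minmod(-0.3, 0.7), round(van_leer(2.0), 3))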

  3. An efficient shock-capturing scheme for simulating compressible homogeneous mixture flow

    International Nuclear Information System (INIS)

    Dang, Son Tung; Ha, Cong Tu; Park, Warn Gyu; Jung, Chul Min

    2016-01-01

    This work is devoted to the development of a procedure for the numerical solution of Navier-Stokes equations for cavitating flows with and without ventilation based on a compressible, multiphase, homogeneous mixture model. The governing equations are discretized on a general structured grid using a high-resolution shock-capturing scheme in conjunction with appropriate limiters to prevent the generation of spurious solutions near shock waves or discontinuities. Two well-known limiters are examined, and a new limiter is proposed to enhance the accuracy and stability of the numerical scheme. A sensitivity analysis is first conducted to determine the relative influences of various model parameters on the solution. These parameters are adopted for the computation of water flows over a hemispherical body, conical body and a divergent/convergent nozzle. Finally, numerical calculations of ventilated supercavitating flows over a hemispherical cylinder body with a hot propulsive jet are conducted to verify the capabilities of the numerical scheme

  4. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of the scientific practice of high importance and have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model’s relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model’s general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  5. More Energy-Efficient CO2 Capture from IGCC GE Flue Gases

    Directory of Open Access Journals (Sweden)

    Rakpong Peampermpool

    2017-03-01

    Full Text Available Carbon dioxide (CO2) emissions are one of the main reasons for the increase in greenhouse gases in the earth’s atmosphere, and carbon capture and sequestration (CCS) is known as an effective method to reduce CO2 emissions on a larger scale, such as for fossil energy utilization systems. In this paper, the feasibility of capturing CO2 using cryogenic liquefaction and improving the capture rate by expansion will be discussed. The main aim was to design an energy-saving scheme for an IGCC (integrated gasification combined cycle) power plant with CO2 cryogenic liquefaction capture. The experimental results provided by the authors, using the feed gas specification of a 740 MW IGCC General Electric (GE) combustion power plant, demonstrated that using an orifice for further expanding the vent gas after cryogenic capture from 57 bar to 24 bar gave an experimentally observed capture rate of up to 65%. The energy-saving scheme can improve the overall CO2 capture rate, and hence save energy. The capture process has also been simulated using Aspen HYSYS simulation software to evaluate its energy penalty. The results show that a 92% overall capture rate can be achieved by using an orifice.

  6. Computer simulation of fatigue under diametrical compression

    OpenAIRE

    Carmona, H. A.; Kun, F.; Andrade Jr., J. S.; Herrmann, H. J.

    2006-01-01

    We study the fatigue fracture of disordered materials by means of computer simulations of a discrete element model. We extend a two-dimensional fracture model to capture the microscopic mechanisms relevant for fatigue, and we simulate the diametric compression of a disc-shaped specimen under a constant external force. The model allows us to follow the development of the fracture process on the macro- and micro-level while varying the relative influence of the mechanisms of damage accumulation over the ...

  7. Physics of epi-thermal boron neutron capture therapy (epi-thermal BNCT).

    Science.gov (United States)

    Seki, Ryoichi; Wakisaka, Yushi; Morimoto, Nami; Takashina, Masaaki; Koizumi, Masahiko; Toki, Hiroshi; Fukuda, Mitsuhiro

    2017-12-01

    The physics of epi-thermal neutrons in the human body is discussed in the effort to clarify the nature of the unique radiologic properties of boron neutron capture therapy (BNCT). This discussion leads to the computational method of Monte Carlo simulation in BNCT. The method is discussed through two examples based on model phantoms. The physics is kept at an introductory level in the discussion in this tutorial review.

  8. Numerical model simulation of free surface behavior in spallation target of ADS

    International Nuclear Information System (INIS)

    Chai Xiang; Su Guanyu; Cheng Xu

    2012-01-01

    The spallation target in an accelerator-driven sub-critical system (ADS) couples the subcritical reactor core with the accelerator. The design of a windowless target has to ensure the formation of a stable free surface with a desirable shape, to avoid local overheating of the heavy liquid metal (HLM). To investigate the free surface behavior of the spallation target, OpenFOAM, an open-source CFD software platform, was used to simulate the formation and features of the free surface in the windowless target. The VOF method was utilized as the interface-capturing methodology. The numerical results were compared to experimental data and to numerical results obtained with the FLUENT code. The effects of turbulence models were studied and recommendations were made regarding the application of turbulence models. (authors)

  9. VIC–CropSyst-v2: A regional-scale modeling platform to simulate the nexus of climate, hydrology, cropping systems, and human decisions

    Directory of Open Access Journals (Sweden)

    K. Malek

    2017-08-01

    Full Text Available Food supply is affected by a complex nexus of land, atmosphere, and human processes, including short- and long-term stressors (e.g., drought and climate change, respectively. A simulation platform that captures these complex elements can be used to inform policy and best management practices to promote sustainable agriculture. We have developed a tightly coupled framework using the macroscale variable infiltration capacity (VIC hydrologic model and the CropSyst agricultural model. A mechanistic irrigation module was also developed for inclusion in this framework. Because VIC–CropSyst combines two widely used and mechanistic models (for crop phenology, growth, management, and macroscale hydrology, it can provide realistic and hydrologically consistent simulations of water availability, crop water requirements for irrigation, and agricultural productivity for both irrigated and dryland systems. This allows VIC–CropSyst to provide managers and decision makers with reliable information on regional water stresses and their impacts on food production. Additionally, VIC–CropSyst is being used in conjunction with socioeconomic models, river system models, and atmospheric models to simulate feedback processes between regional water availability, agricultural water management decisions, and land–atmosphere interactions. The performance of VIC–CropSyst was evaluated on both regional (over the US Pacific Northwest and point scales. Point-scale evaluation involved using two flux tower sites located in agricultural fields in the US (Nebraska and Illinois. The agreement between recorded and simulated evapotranspiration (ET, applied irrigation water, soil moisture, leaf area index (LAI, and yield indicated that, although the model is intended to work on regional scales, it also captures field-scale processes in agricultural areas.

  10. Estimation of effectiveness of three methods of feral cat population control by use of a simulation model.

    Science.gov (United States)

    McCarthy, Robert J; Levine, Stephen H; Reed, J Michael

    2013-08-15

    Objective: To predict effectiveness of 3 interventional methods of population control for feral cat colonies. Design: Population model. Sample: Estimates of vital data for feral cats. Procedures: Data were gathered from the literature regarding the demography and mating behavior of feral cats. An individual-based stochastic simulation model was developed to evaluate the effectiveness of trap-neuter-release (TNR), lethal control, and trap-vasectomy-hysterectomy-release (TVHR) in decreasing the size of feral cat populations. Results: TVHR outperformed both TNR and lethal control at all annual capture probabilities between 10% and 90%. Unless > 57% of cats were captured and neutered annually by TNR or removed by lethal control, there was minimal effect on population size. In contrast, with an annual capture rate of ≥ 35%, TVHR caused population size to decrease. An annual capture rate of 57% eliminated the modeled population in 4,000 days by use of TVHR, whereas > 82% was required for both TNR and lethal control. When the effect of fraction of adult cats neutered on kitten and young juvenile survival rate was included in the analysis, TNR performed progressively worse and could be counterproductive, such that population size increased, compared with no intervention at all. Conclusions: TVHR should be preferred over TNR for management of feral cats if decrease in population size is the goal. This model allowed for many factors related to the trapping program and cats to be varied and should be useful for determining the financial and person-effort commitments required to have a desired effect on a given feral cat population.
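
    A toy individual-based projection of the kind described above is sketched below. It contrasts only lethal removal and TNR-style sterilization under an annual capture probability; the vital rates, female-only bookkeeping, and absence of seasonality, males, and density dependence are invented simplifications, far cruder than the published model.

```python
import numpy as np

# Toy stochastic projection of a feral cat colony under an annual capture program.
# All rates are illustrative assumptions, not values from the published study.
rng = np.random.default_rng(1)
SURVIVAL, LITTER, YEARS = 0.7, 3.0, 10          # assumed annual survival, recruits per intact female

def simulate(capture_prob, mode, n0=200):
    intact, sterile = n0, 0
    for _ in range(YEARS):
        caught = rng.binomial(intact, capture_prob)
        intact -= caught
        if mode == "tnr":                       # sterilize and release instead of removing
            sterile += caught
        kittens = rng.poisson(LITTER * intact / 2)      # only intact females reproduce
        intact = rng.binomial(intact + kittens, SURVIVAL)
        sterile = rng.binomial(sterile, SURVIVAL)
    return intact + sterile

for p in (0.3, 0.6, 0.9):
    print(f"capture prob {p:.1f}: lethal -> {simulate(p, 'lethal')}, tnr -> {simulate(p, 'tnr')}")
```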

  11. Mathematical Modeling and Numerical Simulation of CO2 Removal by Using Hollow Fiber Membrane Contactors

    Directory of Open Access Journals (Sweden)

    Mohammad Mesbah

    2017-10-01

    Full Text Available Abstract In this study, a mathematical model is proposed for CO2 separation from an N2/CO2 mixture using a hollow fiber membrane contactor with various absorbents. The membrane was assumed to be non-wetted; radial and axial diffusion were also considered in the model development. The governing equations of the model are solved via the finite element method (FEM). To ensure the accuracy of the developed model, the simulation results were validated against reported experimental data for potassium glycinate (PG), monoethanolamine (MEA), and methyldiethanolamine (MDEA). The results of the proposed model indicated that the PG absorbent has the highest CO2 removal efficiency, followed in sequence by potassium threonate (PT), MEA, 2-amino-2-methyl-1-propanol (AMP), diethanolamine (DEA), and MDEA. In addition, the results revealed that the CO2 removal efficiency was favored by absorbent flow rate and liquid temperature, while the gas flow rate had a reverse effect. The simulation results proved that hollow fiber membrane contactors have good potential in the area of CO2 capture.

  12. Regional simulation of Indian summer monsoon intraseasonal oscillations at gray-zone resolution

    Science.gov (United States)

    Chen, Xingchao; Pauluis, Olivier M.; Zhang, Fuqing

    2018-01-01

    Simulations of the Indian summer monsoon by the cloud-permitting Weather Research and Forecasting (WRF) model at gray-zone resolution are described in this study, with a particular emphasis on the model's ability to capture the monsoon intraseasonal oscillations (MISOs). Five boreal summers are simulated from 2007 to 2011 using the ERA-Interim reanalysis as the lateral boundary forcing data. Our experimental setup relies on a horizontal grid spacing of 9 km to explicitly simulate deep convection without the use of cumulus parameterizations. When compared to simulations with coarser grid spacing (27 km) and using a cumulus scheme, the 9 km simulations reduce the biases in mean precipitation and produce more realistic low-frequency variability associated with MISOs. Results show that the model at the 9 km gray-zone resolution captures the salient features of the summer monsoon. The spatial distributions and temporal evolutions of monsoon rainfall in the WRF simulations verify qualitatively well against observations from the Tropical Rainfall Measuring Mission (TRMM), with regional maxima located over the Western Ghats, central India, the Himalayan foothills, and the west coast of Myanmar. The onset, breaks, and withdrawal of the summer monsoon in each year are also realistically captured by the model. The MISO-phase composites of monsoon rainfall, low-level wind, and precipitable water anomalies in the simulations also agree qualitatively with the observations. Both the simulations and observations show a northeastward propagation of the MISOs, with the intensification and weakening of the Somali Jet over the Arabian Sea during the active and break phases of the Indian summer monsoon.

  13. Multilevel discretized random field models with 'spin' correlations for the simulation of environmental spatial data

    Science.gov (United States)

    Žukovič, Milan; Hristopulos, Dionissios T.

    2009-02-01

    A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ± 1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). We discuss the impact of relevant simulation parameters, such as the domain size, the number of
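
    The sketch below gives a schematic flavor of the "spin" idea summarized above: sampled cells are pinned to their observed binary class, and unsampled cells are flipped so that the simulated nearest-neighbour correlation energy approaches the value estimated from the sample. It illustrates the general principle only; the synthetic field, the greedy single-flip updates, and the single global statistic are assumptions, not the published Ising/Potts algorithm.

```python
import numpy as np

# Schematic binary "spin" reconstruction on a regular grid with fixed sample cells.
rng = np.random.default_rng(0)
N = 32
truth = np.sign(rng.standard_normal((N, N)) + 0.4)      # synthetic binary field (+/-1)
known = rng.random((N, N)) < 0.2                          # 20% of cells are sampled
spins = np.where(known, truth, rng.choice([-1.0, 1.0], (N, N)))

def corr_energy(s):
    """Mean nearest-neighbour product <s_i s_j> over the whole grid."""
    return 0.5 * (np.mean(s[:, :-1] * s[:, 1:]) + np.mean(s[:-1, :] * s[1:, :]))

def sample_corr(s, mask):
    """Same statistic restricted to pairs where both cells were sampled."""
    h, v = mask[:, :-1] & mask[:, 1:], mask[:-1, :] & mask[1:, :]
    pairs = (s[:, :-1] * s[:, 1:])[h].sum() + (s[:-1, :] * s[1:, :])[v].sum()
    return pairs / (h.sum() + v.sum())

target = sample_corr(truth, known)
for _ in range(50_000):                                   # greedy single-spin updates
    i, j = rng.integers(N, size=2)
    if known[i, j]:
        continue                                          # sample values are respected locally
    before = abs(corr_energy(spins) - target)
    spins[i, j] *= -1
    if abs(corr_energy(spins) - target) > before:         # undo flips that raise the cost
        spins[i, j] *= -1

print(f"target <ss>: {target:.3f}   simulated <ss>: {corr_energy(spins):.3f}")
```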

  14. Computational materials chemistry for carbon capture using porous materials

    International Nuclear Information System (INIS)

    Sharma, Abhishek; Malani, Ateeque; Huang, Runhong; Babarao, Ravichandar

    2017-01-01

    Control over carbon dioxide (CO2) release is extremely important to decrease its hazardous effects on the environment, such as global warming, ocean acidification, etc. For CO2 capture and storage at industrial point sources, nanoporous materials offer an energetically viable and economically feasible approach compared to chemisorption in amines. There is a growing need to design and synthesize new nanoporous materials with enhanced capability for carbon capture. Computational materials chemistry offers tools to screen and design cost-effective materials for CO2 separation and storage, and it is less time consuming compared to trial-and-error experimental synthesis. It also provides a guide to synthesize new materials with better properties for real world applications. In this review, we briefly highlight the various carbon capture technologies and the need for computational materials design for carbon capture. This review discusses the commonly used computational chemistry-based simulation methods for structural characterization and prediction of thermodynamic properties of adsorbed gases in porous materials. Finally, simulation studies reported on various potential porous materials, such as zeolites, porous carbon, metal organic frameworks (MOFs) and covalent organic frameworks (COFs), for CO2 capture are discussed. (topical review)

  15. First-Principles Integrated Adsorption Modeling for Selective Capture of Uranium from Seawater by Polyamidoxime Sorbent Materials.

    Science.gov (United States)

    Ladshaw, Austin P; Ivanov, Alexander S; Das, Sadananda; Bryantsev, Vyacheslav S; Tsouris, Costas; Yiacoumi, Sotira

    2018-04-18

    Nuclear power is a relatively carbon-free energy source that has the capacity to be utilized today in an effort to stem the tides of global warming. The growing demand for nuclear energy, however, could put significant strain on our uranium ore resources, and the mining activities utilized to extract that ore can leave behind long-term environmental damage. A potential solution to enhance the supply of uranium fuel is to recover uranium from seawater using amidoximated adsorbent fibers. This technology has been studied for decades but is currently plagued by the material's relatively poor selectivity of uranium over its main competitor vanadium. In this work, we investigate the binding schemes between uranium, vanadium, and the amidoxime functional groups on the adsorbent surface. Using quantum chemical methods, binding strengths are approximated for a set of complexation reactions between uranium and vanadium with amidoxime functionalities. Those approximations are then coupled with a comprehensive aqueous adsorption model developed in this work to simulate the adsorption of uranium and vanadium under laboratory conditions. Experimental adsorption studies with uranium and vanadium over a wide pH range are performed, and the data collected are compared against simulation results to validate the model. It was found that coupling ab initio calculations with process level adsorption modeling provides accurate predictions of the adsorption capacity and selectivity of the sorbent materials. Furthermore, this work demonstrates that this multiscale modeling paradigm could be utilized to aid in the selection of superior ligands or ligand compositions for the selective capture of metal ions. Therefore, this first-principles integrated modeling approach opens the door to the in silico design of next-generation adsorbents with potentially superior efficiency and selectivity for uranium over vanadium in seawater.
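
    The sketch below illustrates one way computed binding free energies can be linked to a competitive adsorption estimate, in the spirit of the coupling described above: free energies are converted to equilibrium constants and inserted into a two-species competitive Langmuir isotherm. The free energies, site density, and seawater concentrations are placeholders, not values from the paper.

```python
import numpy as np

# Illustrative link between binding free energies and competitive Langmuir loading.
R, T = 8.314, 298.15                          # gas constant [J/(mol K)], temperature [K]
dG = {"U": -38e3, "V": -45e3}                 # assumed binding free energies [J/mol]
K = {m: np.exp(-g / (R * T)) for m, g in dG.items()}     # equilibrium constants (unit activity)

conc = {"U": 1.4e-8, "V": 5.0e-8}             # assumed molar concentrations in seawater
site_density = 1.0                            # assumed mmol of amidoxime sites per g sorbent

denom = 1.0 + sum(K[m] * conc[m] for m in K)
loading = {m: site_density * K[m] * conc[m] / denom for m in K}
for m in K:
    print(f"{m}: K = {K[m]:.2e}, loading = {loading[m]:.3f} mmol/g")
print(f"selectivity U/V: {loading['U'] / loading['V']:.3f}")
```

    With these made-up numbers the stronger vanadium binding dominates the sites, which is consistent in spirit with the selectivity problem the abstract describes.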

  16. A Coupled Atmospheric and Wave Modeling System for Storm Simulations

    DEFF Research Database (Denmark)

    Du, Jianting; Larsén, Xiaoli Guo; Bolanos, R.

    2015-01-01

    This study aims at improving the simulation of wind and waves during storms in connection with wind turbine design and operations in coastal areas. For this particular purpose, we investigated the Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) Modeling System, which couples the Weather Research and Forecasting (WRF) model with ocean and wave models, at horizontal resolutions ranging from 25 km to 2 km. Atmospheric forcing data of different spatial resolution, one of about 100 km (FNL) and the other of about 38 km (CFSR), are both used. In addition, bathymetry data of different resolutions (1 arc-minute and 30 arc-seconds) are used. We used three approaches to parametrize z0. The results are validated against QuikSCAT data and point measurements from an open ocean site, Ekofisk, and a coastal, relatively shallow water site, Horns Rev. It is found that the modeling system in general captures strong wind and strong wave characteristics better for the open ocean condition than for the coastal, shallow water site.

  17. Simulation of emotional contagion using modified SIR model: A cellular automaton approach

    Science.gov (United States)

    Fu, Libi; Song, Weiguo; Lv, Wei; Lo, Siuming

    2014-07-01

    Emotion plays an important role in the decision-making of individuals in some emergency situations. The contagion of emotion may induce either normal or abnormal consolidated crowd behavior. This paper aims to simulate the dynamics of emotional contagion among crowds by modifying the epidemiological SIR model to a cellular automaton approach. This new cellular automaton model, entitled the “CA-SIRS model”, captures the dynamic process ‘susceptible-infected-recovered-susceptible', which is based on SIRS contagion in epidemiological theory. Moreover, in this new model, the process is integrated with individual movement. The simulation results of this model show that multiple waves and dynamical stability around a mean value will appear during emotion spreading. It was found that the proportion of initial infected individuals had little influence on the final stable proportion of infected population in a given system, and that infection frequency increased with an increase in the average crowd density. Our results further suggest that individual movement accelerates the spread speed of emotion and increases the stable proportion of infected population. Furthermore, decreasing the duration of an infection and the probability of reinfection can markedly reduce the number of infected individuals. It is hoped that this study will be helpful in crowd management and evacuation organization.
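
    A minimal cellular-automaton SIRS sketch in the spirit of the model described above is given below: susceptible cells become "infected" with a probability that grows with the number of infected neighbours, infected cells recover after a fixed duration, and recovered cells eventually become susceptible again. The static (non-moving) lattice, the rates, and the neighbourhood rule are simplifying assumptions for illustration, not the CA-SIRS model itself.

```python
import numpy as np

# Minimal SIRS dynamics on a periodic square lattice (0 = S, 1 = I, 2 = R).
rng = np.random.default_rng(3)
N, BETA, INFECT_TIME, IMMUNE_TIME, STEPS = 100, 0.12, 5, 10, 200
state = np.zeros((N, N), dtype=int)
clock = np.zeros((N, N), dtype=int)               # time spent in the current state
state[rng.random((N, N)) < 0.01] = 1              # 1% initially carry the emotion

for _ in range(STEPS):
    infected = (state == 1).astype(float)
    neighbours = (np.roll(infected, 1, 0) + np.roll(infected, -1, 0) +
                  np.roll(infected, 1, 1) + np.roll(infected, -1, 1))
    p_infect = 1.0 - (1.0 - BETA) ** neighbours   # independent exposure per infected neighbour
    new_inf = (state == 0) & (rng.random((N, N)) < p_infect)
    recover = (state == 1) & (clock >= INFECT_TIME)
    relapse = (state == 2) & (clock >= IMMUNE_TIME)
    clock += 1
    for mask, new_state in ((new_inf, 1), (recover, 2), (relapse, 0)):
        state[mask], clock[mask] = new_state, 0

print(f"final infected fraction: {(state == 1).mean():.3f}")
```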

  18. Advances in Intelligent Modelling and Simulation Simulation Tools and Applications

    CERN Document Server

    Oplatková, Zuzana; Carvalho, Marco; Kisiel-Dorohinicki, Marek

    2012-01-01

    The human capacity to abstract complex systems and phenomena into simplified models has played a critical role in the rapid evolution of our modern industrial processes and scientific research. As a science and an art, Modelling and Simulation have been one of the core enablers of this remarkable human trace, and have become a topic of great importance for researchers and practitioners. This book was created to compile some of the most recent concepts, advances, challenges and ideas associated with Intelligent Modelling and Simulation frameworks, tools and applications. The first chapter discusses the important aspects of a human interaction and the correct interpretation of results during simulations. The second chapter gets to the heart of the analysis of entrepreneurship by means of agent-based modelling and simulations. The following three chapters bring together the central theme of simulation frameworks, first describing an agent-based simulation framework, then a simulator for electrical machines, and...

  19. Control of a post-combustion CO2 capture plant during process start-up and load variations

    DEFF Research Database (Denmark)

    Gaspar, Jozsef; Jørgensen, John Bagterp; Fosbøl, Philip Loldrup

    2015-01-01

    Dynamic and flexible operation of a carbon capture plant is important, as thermal power plants must be operated very flexibly to accommodate large shares of intermittent energy sources such as wind and solar energy. To facilitate such operation, dynamic models for simulation, optimization and control system design are crucial. In this paper, we present a dynamic mathematical model for the absorption and desorption columns in a carbon capture plant. Moreover, we implement a decentralized proportional-integral (PI) based control scheme and we evaluate the performance of the control structure for various operational procedures, e.g. start-up, load changes, and noise on the flue gas flow rate and composition. Note that the carbon capture plant is based on the solvent storage configuration. To the authors' knowledge, this is the first paper addressing the issue of start-up operation and control of carbon capture plants.

  20. Electron capture and stellar collapse

    International Nuclear Information System (INIS)

    Chung, K.C.

    1979-01-01

    In order to investigate the role of electron capture in pre-supernova gravitational collapse, a hydrodynamic calculation was carried out, coupling the capture, decay and nuclear reaction equation system. A simplified stellar model (homogeneous model) was adopted, using the Fermi ideal gas approximation for the sea of free electrons and neutrons. The non-simplified treatment from quasi-static evolution to collapse is presented. The capture and beta decay rates, as well as delayed neutron emission, were calculated with the crude theory of beta decay, while the other reaction rates were determined by the usual theories. Preliminary results are presented. (M.C.K.) [pt

  1. Modeling and control of a flexible space robot to capture a tumbling debris

    Science.gov (United States)

    Dubanchet, Vincent

    After 60 years of intensive satellite launches, the number of drifting objects in Earth orbits is reaching a shifting point, where human intervention is becoming necessary to reduce the threat of collision. Indeed, a 200 year forecast, known as the "Kessler syndrome", states that space access will be greatly compromised if nothing is done to address the proliferation of these debris. Scientist J.-C. Liou from the National Aeronautics and Space Administration (NASA) has shown that the current trend could be reversed if at least five massive objects, such as dead satellites or rocket upper stages, were de-orbited each year. Among the various technical concepts considered for debris removal, robotics has emerged, over the last 30 years, as one of the most promising solutions. The International Space Station (ISS) already possesses fully operational robotic arms, and other missions have explored the potential of a manipulator embedded onto a satellite. During two of the latter, key capabilities have been demonstrated for on-orbit servicing, and prove to be equally useful for the purpose of debris removal. This thesis focuses on the close range capture of a tumbling debris by a robotic arm with light-weight flexible segments. This phase includes the motion planning and the control of a space robot, in order to smoothly catch a target point on the debris. The validation of such technologies is almost impossible on Earth and leads to prohibitive costs when performed on orbit. Therefore, the modeling and simulation of flexible multi-body systems has been investigated thoroughly, and is likewise a strong contribution of the thesis. Based on these models, an experimental validation is proposed by reproducing the on-orbit kinematics on a test bench made up of two industrial manipulators and driven by a real-time dynamic simulation. In a nutshell, the thesis is built around three main parts: the modeling of a space robot, the design of control laws, and their validation on a

  2. Notes on modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Redondo, Antonio [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-10

    These notes present a high-level overview of how modeling and simulation are carried out by practitioners. The discussion is of a general nature; no specific techniques are examined but the activities associated with all modeling and simulation approaches are briefly addressed. There is also a discussion of validation and verification and, at the end, a section on why modeling and simulation are useful.

  3. Do terrestrial hermit crabs sniff? Air flow and odorant capture by flicking antennules.

    Science.gov (United States)

    Waldrop, Lindsay D; Koehl, M A R

    2016-01-01

    Capture of odorant molecules by olfactory organs from the surrounding fluid is the first step of smelling. Sniffing intermittently moves fluid across sensory surfaces, increasing delivery rates of molecules to chemosensory receptors and providing discrete odour samples. Aquatic malacostracan crustaceans sniff by flicking olfactory antennules bearing arrays of chemosensory hairs (aesthetascs), capturing water in the arrays during downstroke and holding the sample during return stroke. Terrestrial malacostracans also flick antennules, but how their flicking affects odour capture from air is not understood. The terrestrial hermit crab, Coenobita rugosus, uses antennules bearing shingle-shaped aesthetascs to capture odours. We used particle image velocimetry to measure fine-scale fluid flow relative to a dynamically scaled physical model of a flicking antennule, and computational simulations to calculate diffusion to aesthetascs by odorant molecules carried in that flow. Air does not flow into the aesthetasc array during flick downstrokes or recovery strokes. Odorants are captured from air flowing around the outside of the array during flick downstrokes, when aesthetascs face upstream and molecule capture rates are 21% higher than for stationary antennules. Bursts of flicking followed by pauses deliver discrete odour samples to olfactory sensors, causing intermittency in odour capture by a different mechanism than aquatic crustaceans use. © 2016 The Author(s).

  4. Thermal unit availability modeling in a regional simulation model

    International Nuclear Information System (INIS)

    Yamayee, Z.A.; Port, J.; Robinett, W.

    1983-01-01

    The System Analysis Model (SAM) developed under the umbrella of PNUCC's System Analysis Committee is capable of simulating the operation of a given load/resource scenario. This model employs Monte-Carlo simulation to incorporate uncertainties. Among the uncertainties modeled is thermal unit availability, both for energy simulations (seasonal) and capacity simulations (hourly). This paper presents the availability modeling in the capacity and energy models. The use of regional and national data in deriving the two availability models, the interaction between the two, and the modifications made to the capacity model to reflect regional practices are presented. A sample problem is presented to show the modification process. Results for modeling a nuclear unit using NERC-GADS are presented.
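
    A toy Monte Carlo sketch of the kind of hourly availability modeling described above is shown below: a two-state (up/down) unit is sampled hour by hour from assumed failure and repair rates. The mean time to failure and repair are invented placeholders, not NERC-GADS statistics.

```python
import numpy as np

# Two-state Markov availability model sampled hourly, as in a capacity simulation.
rng = np.random.default_rng(7)
HOURS = 8760
MTTF, MTTR = 1100.0, 60.0                 # assumed mean time to failure / repair [h]
p_fail, p_repair = 1.0 / MTTF, 1.0 / MTTR

def simulate_unit():
    up = np.empty(HOURS, dtype=bool)
    state = True                          # start the year in service
    for h in range(HOURS):
        state = (rng.random() >= p_fail) if state else (rng.random() < p_repair)
        up[h] = state
    return up

availability = np.mean([simulate_unit().mean() for _ in range(50)])
print(f"simulated availability: {availability:.3f}")
print(f"analytical MTTF/(MTTF+MTTR): {MTTF / (MTTF + MTTR):.3f}")
```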

  5. New Adsorption Cycles for Carbon Dioxide Capture and Concentration

    Energy Technology Data Exchange (ETDEWEB)

    James Ritter; Armin Ebner; Steven Reynolds Hai Du; Amal Mehrotra

    2008-07-31

    tested successfully against several cycle schedules taken from the literature, including a 2-bed 4-step Skarstrom cycle, a 4-bed 9-step process with 2 equalization steps, a 9-bed 11-step process with 3 equalization steps, and a 6-bed 13-step process with 4 equalization steps and 4 idle steps. With respect to CO{sub 2} capture and concentration by PSA, this new approach is now providing a very straightforward way to determine all the viable 3-bed, 4-bed, 5-bed, n-bed, etc. HR PSA cycle schedules to explore using both simulation and experimentation. This program also touted the use of K-promoted HTlc as a high temperature, reversible adsorbent for CO{sub 2} capture by PSA. This program not only showed how to use this material in HR PSA cycles, but it also proposed a new CO{sub 2} interaction mechanism in conjunction with a non-equilibrium kinetic model that adequately describes the uptake and release of CO{sub 2} in this material, and some preliminary fixed bed adsorption breakthrough and desorption elution experiments were carried out to demonstrate complete reversibility on a larger scale. This information was essentially missing from the literature and deemed invaluable toward promoting the use of K-promoted HTlc as a high temperature, reversible adsorbent for CO{sub 2} capture by PSA. Overall, the objectives of this project were met. It showed the feasibility of using K-promoted hydrotalcite (HTlc) as a high temperature, reversible adsorbent for CO{sub 2} capture by PSA. It discovered some novel HR PSA cycles that might be useful for this purpose. Finally, it revealed a mechanistic understanding of the interaction of CO{sub 2} with K-promoted HTlc.

  6. Simulation of wind-induced snow transport in alpine terrain using a fully coupled snowpack/atmosphere model

    Science.gov (United States)

    Vionnet, V.; Martin, E.; Masson, V.; Guyomarc'h, G.; Naaim-Bouvet, F.; Prokop, A.; Durand, Y.; Lac, C.

    2013-06-01

    In alpine regions, wind-induced snow transport strongly influences the spatio-temporal evolution of the snow cover throughout the winter season. To gain understanding of the complex processes that drive the redistribution of snow, a new numerical model is developed. It directly couples the detailed snowpack model Crocus with the atmospheric model Meso-NH. Meso-NH/Crocus simulates snow transport in saltation and in turbulent suspension and includes the sublimation of suspended snow particles. A detailed representation of the first meters of the atmosphere allows a fine reproduction of the erosion and deposition processes. The coupled model is evaluated against data collected around the experimental site of Col du Lac Blanc (2720 m a.s.l., French Alps). For this purpose, a blowing snow event without concurrent snowfall was selected and simulated. Results show that the model captures the main structures of the atmospheric flow in alpine terrain, the vertical profile of wind speed and the snow particle fluxes near the surface. However, the horizontal resolution of 50 m is found to be insufficient to simulate the location of areas of snow erosion and deposition observed by terrestrial laser scanning. When activated, the sublimation of suspended snow particles causes a reduction in deposition of 5.3%. Total sublimation (surface + blowing snow) is three times higher than surface sublimation in a simulation neglecting blowing-snow sublimation.

  7. A Reduced-Order Model of Transport Phenomena for Power Plant Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Paul Cizmas; Brian Richardson; Thomas Brenner; Raymond Fontenot

    2009-09-30

    A reduced-order model based on proper orthogonal decomposition (POD) has been developed to simulate transient two- and three-dimensional isothermal and non-isothermal flows in a fluidized bed. Reduced-order models of void fraction, gas and solids temperatures, granular energy, and z-direction gas and solids velocity have been added to the previous version of the code. These algorithms are presented and their implementation is discussed. Verification studies are presented for each algorithm. A number of methods to accelerate the computations performed by the reduced-order model are presented. The errors associated with each acceleration method are computed and discussed. Using a combination of acceleration methods, a two-dimensional isothermal simulation using the reduced-order model is shown to be 114 times faster than using the full-order model. In pursuing the objectives of the project and completing the tasks planned for this program, several unplanned and unforeseen results, methods and studies were generated. These additional accomplishments are also presented and they include: (1) a study of the effect of snapshot sampling time on the computation of the POD basis functions, (2) an investigation of different strategies for generating the autocorrelation matrix used to find the POD basis functions, (3) the development and implementation of a bubble detection and tracking algorithm based on mathematical morphology, (4) a method for augmenting the proper orthogonal decomposition to better capture flows with discontinuities, such as bubbles, and (5) a mixed reduced-order/full-order model, called point-mode proper orthogonal decomposition, designed to avoid unphysical results due to approximation errors. The limitations of the proper orthogonal decomposition method in simulating transient flows with moving discontinuities, such as bubbling flows, are discussed and several methods are proposed to adapt the method for future use.
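
    The sketch below shows the core step common to POD reduced-order modeling: collect solution snapshots as columns, take an SVD, keep the leading spatial modes, and project the data onto them. The "snapshots" here are synthetic 1-D fields invented for illustration, not output of the fluidized-bed solver described above.

```python
import numpy as np

# Build a POD basis from a snapshot matrix and reconstruct the data with r modes.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
times = np.linspace(0.0, 2.0 * np.pi, 80)
snapshots = np.column_stack(
    [np.sin(2 * np.pi * x - t) + 0.1 * np.cos(5 * np.pi * x + 2 * t) for t in times]
)                                                        # shape (n_points, n_snapshots)

mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

r = 4                                                    # number of retained POD modes
basis = U[:, :r]                                         # spatial modes
coeffs = basis.T @ (snapshots - mean)                    # temporal coefficients
reconstruction = mean + basis @ coeffs

energy = s[:r] ** 2 / np.sum(s ** 2)
print(f"energy captured by {r} modes: {energy.sum():.4f}")
print(f"max reconstruction error: {np.abs(reconstruction - snapshots).max():.2e}")
```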

  8. Multi-objective Extremum Seeking Control for Enhancement of Wind Turbine Power Capture with Load Reduction

    Science.gov (United States)

    Xiao, Yan; Li, Yaoyu; Rotea, Mario A.

    2016-09-01

    The primary objective in below-rated wind speed (Region 2) is to maximize the turbine's energy capture. Due to uncertainty and variability of turbine characteristics and the lack of inexpensive but precise wind measurements, model-free control strategies that do not use wind measurements, such as Extremum Seeking Control (ESC), have received significant attention. Based on a dither-demodulation scheme, ESC can maximize wind power capture in real time despite uncertainty, variability and the lack of accurate wind measurements. The existing work on ESC-based wind turbine control focuses on power capture only. In this paper, a multi-objective extremum seeking control strategy is proposed to achieve nearly optimum wind energy capture while decreasing structural fatigue loads. The performance index of the ESC combines the rotor power and penalty terms based on the standard deviations of selected fatigue load variables. Simulation studies of the proposed multi-objective ESC demonstrate that the damage-equivalent loads of the tower and/or blades can be reduced with a slight compromise in energy capture.
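
    The sketch below shows a basic dither-demodulation extremum seeking loop on a static performance map J(theta): a small sinusoid perturbs the parameter, the measured objective is demodulated with the same sinusoid, and the low-pass-filtered product estimates the gradient and drives the parameter toward the optimum. J, the dither amplitude and frequency, and the gains are made-up surrogates, not a turbine model; the multi-objective variant described above would simply subtract fatigue-load penalty terms from J.

```python
import math

def J(theta):
    return 1.0 - 0.02 * (theta - 7.0) ** 2       # toy objective with optimum at theta = 7

dt = 0.01
a, omega = 0.2, 20.0                             # dither amplitude and frequency (assumed)
k, wl = 4.0, 2.0                                 # adaptation gain and low-pass cutoff (assumed)
theta_hat, grad_est = 3.0, 0.0

for n in range(60_000):
    t = n * dt
    y = J(theta_hat + a * math.sin(omega * t))   # measurement with dither applied
    demod = y * math.sin(omega * t)              # demodulation by the dither signal
    grad_est += dt * wl * (demod - grad_est)     # low-pass filter ~ (a/2) * dJ/dtheta
    theta_hat += dt * k * grad_est               # gradient ascent on J

print(f"converged parameter: {theta_hat:.2f} (true optimum: 7.0)")
```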

  9. Genesis of Hurricane Sandy (2012) Simulated with a Global Mesoscale Model

    Science.gov (United States)

    Shen, Bo-Wen; DeMaria, Mark; Li, J.-L. F.; Cheung, S.

    2013-01-01

    In this study, we investigate the formation predictability of Hurricane Sandy (2012) with a global mesoscale model. We first present five track and intensity forecasts of Sandy initialized at 00Z 22-26 October 2012, realistically producing its movement with a northwestward turn prior to its landfall. We then show that three experiments initialized at 00Z 16-18 October captured the genesis of Sandy with a lead time of up to 6 days and simulated reasonable evolution of Sandy's track and intensity in the next 2 day period of 18Z 21-23 October. Results suggest that the extended lead time of formation prediction is achieved by realistic simulations of multiscale processes, including (1) the interaction between an easterly wave and a low-level westerly wind belt (WWB) and (2) the appearance of the upper-level trough at 200 hPa to Sandy's northwest. The low-level WWB and upper-level trough are likely associated with a Madden-Julian Oscillation.

  10. Simulating the Pineapple Express in the half degree Community Climate System Model, CCSM4

    Science.gov (United States)

    Shields, Christine A.; Kiehl, Jeffrey T.

    2016-07-01

    Atmospheric rivers are recognized as major contributors to the poleward transport of water vapor. Upon reaching land, these phenomena also play a critical role in extreme precipitation and flooding events. The Pineapple Express (PE) is defined as an atmospheric river extending out of the deep tropics and reaching the west coast of North America. Community Climate System Model (CCSM4) high-resolution ensemble simulations for the twentieth and 21st centuries are diagnosed to identify the PE. Analysis of the twentieth century simulations indicated that the CCSM4 accurately captures the spatial and temporal climatology of the PE. Analysis of the end-of-21st-century simulations indicates a significant increase in storm duration and intensity of precipitation associated with landfall of the PE. Only a modest increase in the number of atmospheric rivers, of a few percent, is projected for the end of the 21st century.

  11. Hardware-in-the-Loop Modeling and Simulation Methods for Daylight Systems in Buildings

    Science.gov (United States)

    Mead, Alex Robert

    Each class of techniques, broadly speaking, has advantages and disadvantages with respect to the cost of execution (e.g. money, time, expertise) and the fidelity of the provided insight into the performance of the daylighting system. This varying tradeoff of cost and insight between the techniques determines which techniques are employed for which projects. Daylighting systems with CFS components, however, when considered for simulation with respect to these traditional technique classes, defy high-fidelity analysis. Simplified techniques are clearly not applicable. Mathematical models must have great complexity in order to capture the non-specular transmission accurately, which greatly limits their applicability. This leaves physical modeling, the most costly, as the preferred method for CFS. While mathematical modeling and simulation methods do exist, they are in general costly and still approximations of the underlying CFS behavior. In practice, measurements are currently the only practical method to capture the behavior of CFSs. Traditional measurements of CFS transmission and reflection properties are conducted using an instrument called a goniophotometer and produce a measurement in the form of a Bidirectional Scatter Distribution Function (BSDF) based on the Klems basis. This measurement must be executed for each possible state of the CFS, hence only a subset of the possible behaviors can be captured for CFSs with continuously varying configurations. In the current era of rapid prototyping (e.g. 3D printing) and automated control of buildings, including daylighting systems, a new analysis technique is needed which can faithfully represent the CFSs that are being designed and constructed at an increasing rate. Hardware-in-the-loop modeling and simulation is a perfect fit for the current need of analyzing daylighting systems with CFSs. In the proposed hardware-in-the-loop modeling and simulation approach of this dissertation, physical models

  12. General introduction to simulation models

    DEFF Research Database (Denmark)

    Hisham Beshara Halasa, Tariq; Boklund, Anette

    2012-01-01

    Monte Carlo simulation can be defined as a representation of real life systems to gain insight into their functions and to investigate the effects of alternative conditions or actions on the modeled system. Models are a simplification of a system. Most often, it is best to use experiments and field trials to investigate the effect of alternative conditions or actions on a specific system. Nonetheless, field trials are expensive and sometimes not possible to conduct, as in the case of foot-and-mouth disease (FMD). Instead, simulation models can be a good and cheap substitute for experiments and field trials. However, if simulation models are to be used, good quality input data must be available. To model FMD, several disease spread models are available. For this project, we chose three simulation models: Davis Animal Disease Spread (DADS), which has been upgraded to DTU-DADS, InterSpread Plus (ISP...

  13. Evaluation of daily maximum and minimum 2-m temperatures as simulated with the Regional Climate Model COSMO-CLM over Africa

    Directory of Open Access Journals (Sweden)

    Stefan Krähenmann

    2013-07-01

    Full Text Available The representation of the diurnal 2-m temperature cycle is challenging because of the many processes involved, particularly land-atmosphere interactions. This study examines the ability of the regional climate model COSMO-CLM (version 4.8) to capture the statistics of daily maximum and minimum 2-m temperatures (Tmin/Tmax) over Africa. The simulations are carried out at two different horizontal grid-spacings (0.22° and 0.44°), and are driven by ECMWF ERA-Interim reanalyses as near-perfect lateral boundary conditions. As evaluation reference, a high-resolution gridded dataset of daily maximum and minimum temperatures (Tmin/Tmax) for Africa (covering the period 2008–2010) is created using the regression-kriging-regression-kriging (RKRK) algorithm. RKRK applies, among other predictors, the remotely sensed predictors land surface temperature and cloud cover to compensate for the missing information about the temperature pattern due to the low station density over Africa. This dataset allows the evaluation of temperature characteristics like the frequencies of Tmin/Tmax, the diurnal temperature range, and the 90th percentile of Tmax. Although the large-scale patterns of temperature are reproduced well, COSMO-CLM shows significant under- and overestimation of temperature at regional scales. The hemispheric summers are generally too warm and the day-to-day temperature variability is overestimated over northern and southern extra-tropical Africa. The average diurnal temperature range is underestimated by about 2°C across arid areas, yet overestimated by around 2°C over the African tropics. An evaluation based on frequency distributions shows good model performance for simulated Tmin (the simulated frequency distributions capture more than 80% of the observed ones), but poorer performance for Tmax (capture below 70%). Further, over wide parts of Africa a too large fraction of daily Tmax values exceeds the observed 90th percentile of Tmax, particularly

  14. Evaluation of daily maximum and minimum 2-m temperatures as simulated with the regional climate model COSMO-CLM over Africa

    Energy Technology Data Exchange (ETDEWEB)

    Kraehenmann, Stefan; Kothe, Steffen; Ahrens, Bodo [Frankfurt Univ. (Germany). Inst. for Atmospheric and Environmental Sciences; Panitz, Hans-Juergen [Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen (Germany)

    2013-10-15

    The representation of the diurnal 2-m temperature cycle is challenging because of the many processes involved, particularly land-atmosphere interactions. This study examines the ability of the regional climate model COSMO-CLM (version 4.8) to capture the statistics of daily maximum and minimum 2-m temperatures (Tmin/Tmax) over Africa. The simulations are carried out at two different horizontal grid-spacings (0.22° and 0.44°), and are driven by ECMWF ERA-Interim reanalyses as near-perfect lateral boundary conditions. As evaluation reference, a high-resolution gridded dataset of daily maximum and minimum temperatures (Tmin/Tmax) for Africa (covering the period 2008-2010) is created using the regression-kriging-regression-kriging (RKRK) algorithm. RKRK applies, among other predictors, the remotely sensed predictors land surface temperature and cloud cover to compensate for the missing information about the temperature pattern due to the low station density over Africa. This dataset allows the evaluation of temperature characteristics like the frequencies of Tmin/Tmax, the diurnal temperature range, and the 90th percentile of Tmax. Although the large-scale patterns of temperature are reproduced well, COSMO-CLM shows significant under- and overestimation of temperature at regional scales. The hemispheric summers are generally too warm and the day-to-day temperature variability is overestimated over northern and southern extra-tropical Africa. The average diurnal temperature range is underestimated by about 2°C across arid areas, yet overestimated by around 2°C over the African tropics. An evaluation based on frequency distributions shows good model performance for simulated Tmin (the simulated frequency distributions capture more than 80% of the observed ones), but poorer performance for Tmax (capture below 70%). Further, over wide parts of Africa a too large fraction of daily Tmax values exceeds the observed 90th percentile of Tmax, particularly across

  15. Rate-based modelling of combined SO2 removal and NH3 recycling integrated with an aqueous NH3-based CO2 capture process

    International Nuclear Information System (INIS)

    Li, Kangkang; Yu, Hai; Qi, Guojie; Feron, Paul; Tade, Moses; Yu, Jingwen; Wang, Shujuan

    2015-01-01

    Highlights: • A rigorous, rate-based model for an NH3–CO2–SO2–H2O system was developed. • Model predictions are in good agreement with pilot plant results. • >99.9% of SO2 was captured and >99.9% of slipped ammonia was reused. • The process is highly adaptable to variations of SO2/NH3 level and temperature. - Abstract: To reduce the costs of controlling emissions from coal-fired power stations, we propose an advanced and effective process of combined SO2 removal and NH3 recycling, which can be integrated with the aqueous NH3-based CO2 capture process to simultaneously achieve SO2 and CO2 removal, NH3 recycling and flue gas cooling in one process. A rigorous, rate-based model for an NH3–CO2–SO2–H2O system was developed and used to simulate the proposed process. The model was thermodynamically and kinetically validated by experimental results from the open literature and pilot-plant trials, respectively. Under typical flue gas conditions, the proposed process has SO2 removal and NH3 reuse efficiencies of >99.9%. The process is strongly adaptable to different scenarios such as high SO2 levels in flue gas, high NH3 levels from the CO2 absorber and high flue gas temperatures, and has a low energy requirement. Because the process simplifies flue gas desulphurisation and resolves the problems of NH3 loss and SO2 removal, it could significantly reduce the cost of CO2 and SO2 capture by aqueous NH3.

  16. Evaluation of rainfall simulations over West Africa in dynamically downscaled CMIP5 global circulation models

    Science.gov (United States)

    Akinsanola, A. A.; Ajayi, V. O.; Adejare, A. T.; Adeyeri, O. E.; Gbode, I. E.; Ogunjobi, K. O.; Nikulin, G.; Abolude, A. T.

    2018-04-01

    This study presents an evaluation of the ability of the Rossby Centre Regional Climate Model (RCA4), driven by nine global circulation models (GCMs), to skilfully reproduce the key features of rainfall climatology over West Africa for the period 1980-2005. The seasonal climatology and annual cycle of the RCA4 simulations were assessed over three homogeneous subregions of West Africa (Guinea coast, Savannah, and Sahel) and evaluated using observed precipitation data from the Global Precipitation Climatology Project (GPCP). Furthermore, the model output was evaluated using a wide range of statistical measures. The interseasonal and interannual variability of the RCA4 were further assessed over the subregions and the whole of the West Africa domain. Results indicate that the RCA4 captures the spatial and interseasonal rainfall pattern adequately but exhibits a weak performance over the Guinea coast. Findings from the interannual rainfall variability indicate that the model performance is better over the larger West Africa domain than over the subregions. The largest difference across the RCA4 simulated annual rainfall was found in the Sahel. Results from the Mann-Kendall test showed no significant trend in annual rainfall for the 1980-2005 period, either in the GPCP observation data or in the model simulations over West Africa. In many aspects, the RCA4 simulation driven by HadGEM2-ES performs best over the region. The use of the multimodel ensemble mean resulted in an improved representation of rainfall characteristics over the study domain.

  17. Current climate and climate change over India as simulated by the Canadian Regional Climate Model

    Science.gov (United States)

    Alexandru, Adelina; Sushama, Laxmi

    2015-08-01

    The performance of the fifth generation of the Canadian Regional Climate Model (CRCM5) in reproducing the main climatic characteristics over India during the southwest (SW)-, post- and pre-monsoon seasons is presented in this article. To assess the performance of CRCM5, a European Centre for Medium-Range Weather Forecasts (ECMWF) Re-Analysis (ERA-40) and Interim re-analysis (ERA-Interim) driven CRCM5 simulation is compared against independent observations and reanalysis data for the 1971-2000 period. Projected changes for two future periods, 2041-2070 and 2071-2100, with respect to the 1971-2000 current period are assessed based on two transient climate change simulations of CRCM5 spanning the 1950-2100 period. These two simulations are driven by the Canadian Earth System Model version 2 (CanESM2) and the Max Planck Institute for Meteorology's Earth System Low Resolution Model (MPI-ESM-LR), respectively. The boundary forcing errors associated with errors in the driving global climate models are also studied by comparing the 1971-2000 period of the CanESM2 and MPI-ESM-LR driven simulations with that of the CRCM5 simulation driven by ERA-40/ERA-Interim. Results show that CRCM5 driven by ERA-40/ERA-Interim is in general able to capture well the temporal and spatial patterns of 2-m temperature, precipitation, wind, sea level pressure, total runoff and soil moisture over India in comparison with available reanalyses and observations. However, some noticeable differences between the model and observational data were found during the SW-monsoon season within the domain of integration. CRCM5 driven by ERA-40/ERA-Interim is 1-2 °C colder than CRU observations and generates more precipitation over the Western Ghats and central regions of India, and not enough in the northern and north-eastern parts of India and along the Konkan west coast, in comparison with the observed precipitation. The monsoon onset seems to be relatively well captured over the southwestern coast of

  18. Investigation of Phase Transition-Based Tethered Systems for Small Body Sample Capture

    Science.gov (United States)

    Quadrelli, Marco; Backes, Paul; Wilkie, Keats; Giersch, Lou; Quijano, Ubaldo; Scharf, Daniel; Mukherjee, Rudranarayan

    2009-01-01

    This paper summarizes the modeling, simulation, and testing work related to the development of technology to investigate the potential that shape memory actuation has to provide mechanically simple and affordable solutions for delivering assets to a surface and for sample capture and possible return to Earth. We investigate the structural dynamics and controllability aspects of an adaptive beam carrying an end-effector which, by changing equilibrium phases, is able to actively decouple the end-effector dynamics from the spacecraft dynamics during the surface contact phase. Asset delivery and sample capture and return are at the heart of several emerging potential missions to small bodies, such as asteroids and comets, and to the surface of large bodies, such as Titan.

  19. A Comparison of HWRF, ARW and NMM Models in Hurricane Katrina (2005) Simulation

    Directory of Open Access Journals (Sweden)

    Anjaneyulu Yerramilli

    2011-06-01

    Full Text Available The life cycle of Hurricane Katrina (2005) was simulated using three different modeling systems of the Weather Research and Forecasting (WRF) mesoscale model. These are HWRF (Hurricane WRF), designed specifically for hurricane studies, and the WRF model with two different dynamic cores: the Advanced Research WRF (ARW) model and the Non-hydrostatic Mesoscale Model (NMM). The WRF model was developed and sourced from the National Center for Atmospheric Research (NCAR), incorporating the advances in atmospheric simulation systems suitable for a broad range of applications. The HWRF modeling system was developed at the National Centers for Environmental Prediction (NCEP) based on the NMM dynamic core and physical parameterization schemes specially designed for the tropics. A case study of Hurricane Katrina was chosen as it is one of the intense hurricanes that caused severe destruction along the Gulf Coast from central Florida to Texas. The ARW, NMM and HWRF models were designed to have two-way interactive nested domains with 27 and 9 km resolutions. The three different models used in this study were integrated for three days starting from 0000 UTC of 27 August 2005 to capture the landfall of Hurricane Katrina on 29 August. The initial and time-varying lateral boundary conditions were taken from NCEP global FNL (final analysis) data available at 1 degree resolution for the ARW and NMM models, and from NCEP GFS data at 0.5 degree resolution for the HWRF model. The results show that the models simulated the intensification of Hurricane Katrina and the landfall on 29 August 2005, agreeing with the observations. Results from these experiments highlight the superior performance of the HWRF model over the ARW and NMM models in predicting the track and intensification of Hurricane Katrina.

  20. Engineering the future of military tactical vehicles and systems with modeling and simulation

    Science.gov (United States)

    Loew, Matthew; Watters, Brock

    2005-05-01

    Stewart & Stevenson has developed a Modeling and Simulation approach based on Systems Engineering principles for the development of future military vehicles and systems. This approach starts with a requirements analysis phase that captures and distills the design requirements into a list of parameterized values. A series of executable engineering models are constructed to allow the requirements to be transformed into systems with definable architectures with increasing levels of fidelity. Required performance parameters are available for importation into a variety of modeling and simulation tools including PTC Pro/ENGINEER (for initial engineering models, mechanisms, packaging, and detailed 3-Dimensional solid models), LMS International Virtual.Lab Motion (for vehicle dynamics and ride analysis) and AVL Cruise (Powertrain simulations). Structural analysis and optimization (performed in ANSYS, Pro/MECHANICA, and Altair OptiStruct) is based on the initial geometry from Pro/ENGINEER. Spreadsheets are used for requirements analysis, design documentation and first-order studies. Collectively, these models serve as templates for all design activities. Design variables initially studied within a simplified system model can be cascaded down as the new requirements for a sub-system model. By utilizing this approach premature decisions on systems architectures can be avoided. Ultimately, the systems that are developed are optimally able to meet the requirements by utilizing this top-down approach. Additionally, this M&S approach is seen as a life-cycle tool useful in initially assisting with project management activities through the initial and detail design phases and serves as a template for testing and validation/verification activities. Furthermore, because of the multi-tiered approach, there is natural re-use possible with the models as well.

  1. GEANT4 simulation of the neutron background of the C$_6$D$_6$ set-up for capture studies at n_TOF

    CERN Document Server

    Žugec, P.; Bosnar, D.; Altstadt, S.; Andrzejewski, J.; Audouin, L.; Barbagallo, M.; Bécares, V.; Bečvář, F.; Belloni, F.; Berthoumieux, E.; Billowes, J.; Boccone, V.; Brugger, M.; Calviani, M.; Calviño, F.; Cano-Ott, D.; Carrapiço, C.; Cerutti, F.; Chiaveri, E.; Chin, M.; Cortés, G.; Cortés-Giraldo, M.A.; Diakaki, M.; Domingo-Pardo, C.; Dressler, R.; Duran, I.; Dzysiuk, N.; Eleftheriadis, C.; Ferrari, A.; Fraval, K.; Ganesan, S.; García, A.R.; Giubrone, G.; Gómez-Hornillos, M.B.; Gonçalves, I.F.; González-Romero, E.; Griesmayer, E.; Guerrero, C.; Gunsing, F.; Gurusamy, P.; Heinitz, S.; Jenkins, D.G.; Jericha, E.; Kadi, Y.; Käppeler, F.; Karadimos, D.; Kivel, N.; Koehler, P.; Kokkoris, M.; Krtička, M.; Kroll, J.; Langer, C.; Lederer, C.; Leeb, H.; Leong, L.S.; Lo Meo, S.; Losito, R.; Manousos, A.; Marganiec, J.; Martìnez, T.; Massimi, C.; Mastinu, P.F.; Mastromarco, M.; Meaze, M.; Mendoza, E.; Mengoni, A.; Milazzo, P.M.; Mingrone, F.; Mirea, M.; Mondalaers, W.; Paradela, C.; Pavlik, A.; Perkowski, J.; Plompen, A.; Praena, J.; Quesada, J.M.; Rauscher, T.; Reifarth, R.; Riego, A.; Roman, F.; Rubbia, C.; Sarmento, R.; Saxena, A.; Schillebeeckx, P.; Schmidt, S.; Schumann, D.; Tagliente, G.; Tain, J.L.; Tarrío, D.; Tassan-Got, L.; Tsinganis, A.; Valenta, S.; Vannini, G.; Variale, V.; Vaz, P.; Ventura, A.; Versaci, R.; Vermeulen, M.J.; Vlachoudis, V.; Vlastou, R.; Wallner, A.; Ware, T.; Weigand, M.; Weiß, C.; Wright, T.

    2014-05-09

    The neutron sensitivity of the C$_6$D$_6$ detector setup used at n_TOF for capture measurements has been studied by means of detailed GEANT4 simulations. A realistic software replica of the entire n_TOF experimental hall, including the neutron beam line, sample, detector supports and the walls of the experimental area has been implemented in the simulations. The simulations have been analyzed in the same manner as experimental data, in particular by applying the Pulse Height Weighting Technique. The simulations have been validated against a measurement of the neutron background performed with a $^\\mathrm{nat}$C sample, showing an excellent agreement above 1 keV. At lower energies, an additional component in the measured $^\\mathrm{nat}$C yield has been discovered, which prevents the use of $^\\mathrm{nat}$C data for neutron background estimates at neutron energies below a few hundred eV. The origin and time structure of the neutron background have been derived from the simulations. Examples of the neutron backg...

  2. ECONOMIC MODELING STOCKS CONTROL SYSTEM: SIMULATION MODEL

    OpenAIRE

    Климак, М.С.; Войтко, С.В.

    2016-01-01

    Theoretical and applied aspects of the development of simulation models to predict the optimal development of production systems that create tangible products and services are considered. It is proved that the process of inventory control needs economic and mathematical modeling in view of the complexity of theoretical studies. A simulation model of stock control is presented that allows management decisions to be made in line with production logistics.
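
    The sketch below is a generic illustration of the kind of stock-control simulation the abstract refers to: random daily demand is served from inventory governed by an (s, S) reorder policy, and holding and stockout costs are accumulated by Monte Carlo. The demand distribution, costs, lead time, and policy levels are all invented placeholders, not parameters from the paper.

```python
import numpy as np

# Monte Carlo evaluation of an (s, S) inventory policy with a fixed lead time.
rng = np.random.default_rng(11)
DAYS, LEAD_TIME = 365, 5
REORDER_POINT, ORDER_UP_TO = 20, 80        # the (s, S) policy under test (assumed)
HOLDING_COST, STOCKOUT_COST = 0.1, 2.0     # cost per unit held / per unit short, per day

def run_once():
    stock, pipeline, cost = ORDER_UP_TO, [], 0.0
    for _ in range(DAYS):
        pipeline = [(d - 1, q) for d, q in pipeline]          # age outstanding orders
        stock += sum(q for d, q in pipeline if d <= 0)        # receive arrivals
        pipeline = [(d, q) for d, q in pipeline if d > 0]
        demand = rng.poisson(6)
        sold = min(stock, demand)
        cost += HOLDING_COST * stock + STOCKOUT_COST * (demand - sold)
        stock -= sold
        on_order = sum(q for _, q in pipeline)
        if stock + on_order <= REORDER_POINT:                 # place a replenishment order
            pipeline.append((LEAD_TIME, ORDER_UP_TO - stock - on_order))
    return cost

print(f"mean annual cost: {np.mean([run_once() for _ in range(200)]):.1f}")
```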

  3. Modeling boundary-layer transition in direct and large-eddy simulations using parabolized stability equations

    Science.gov (United States)

    Lozano-Durán, A.; Hack, M. J. P.; Moin, P.

    2018-02-01

    We examine the potential of the nonlinear parabolized stability equations (PSE) to provide an accurate yet computationally efficient treatment of the growth of disturbances in H-type transition to turbulence. The PSE capture the nonlinear interactions that eventually induce breakdown to turbulence and can as such identify the onset of transition without relying on empirical correlations. Since the local PSE solution at the onset of transition is a close approximation of the Navier-Stokes equations, it provides a natural inflow condition for direct numerical simulations (DNS) and large-eddy simulations (LES) by avoiding nonphysical transients. We show that a combined PSE-DNS approach, where the pretransitional region is modeled by the PSE, can reproduce the skin-friction distribution and downstream turbulent statistics from a DNS of the full domain. When the PSE are used in conjunction with wall-resolved and wall-modeled LES, the computational cost in both the laminar and turbulent regions is reduced by several orders of magnitude compared to DNS.

  4. Application of a distorted wave model to electron capture in atomic collisions

    International Nuclear Information System (INIS)

    Deco, G.R.; Martinez, A.E.; Rivarola, R.D.

    1988-01-01

    In this work, the CDW-EIS approximation is presented and applied to the description of electron capture processes in ion-atom collisions. Differential and total cross sections are compared with results obtained from other theoretical models, as well as with experimental data. (A.C.A.S.) [pt

  5. Probabilities and energies to obtain the counting efficiency of electron-capture nuclides, KLMN model

    International Nuclear Information System (INIS)

    Casas Galiano, G.; Grau Malonda, A.

    1994-01-01

    An intelligent computer program has been developed to obtain the mathematical formulae needed to compute the probabilities and reduced energies of the different atomic rearrangement pathways following electron-capture decay. Creation and annihilation operators for Auger and X-ray processes have been introduced. Taking into account the symmetries associated with each process, 262 different pathways were obtained. This model allows us to obtain the influence of M-electron capture on the counting efficiency when the atomic number of the nuclide is high

  6. Probabilities and energies to obtain the counting efficiency of electron-capture nuclides. KLMN model

    International Nuclear Information System (INIS)

    Galiano, G.; Grau, A.

    1994-01-01

    An intelligent computer program has been developed to obtain the mathematical formulae needed to compute the probabilities and reduced energies of the different atomic rearrangement pathways following electron-capture decay. Creation and annihilation operators for Auger and X-ray processes have been introduced. Taking into account the symmetries associated with each process, 262 different pathways were obtained. This model allows us to obtain the influence of M-electron capture on the counting efficiency when the atomic number of the nuclide is high. (Author)

  7. Package Equivalent Reactor Networks as Reduced Order Models for Use with CAPE-OPEN Compliant Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Meeks, E.; Chou, C. -P.; Garratt, T.

    2013-03-31

    Engineering simulations of coal gasifiers are typically performed using computational fluid dynamics (CFD) software, where a 3-D representation of the gasifier equipment is used to model the fluid flow in the gasifier and source terms from the coal gasification process are captured using discrete-phase model source terms. Simulations using this approach can be very time consuming, making it difficult to embed such models into overall system simulations for plant design and optimization. For such system-level designs, process flowsheet software is typically used, such as Aspen Plus® [1], where each component is modeled using a reduced-order model. For advanced power-generation systems, such as integrated gasifier/gas-turbine combined-cycle systems (IGCC), the critical components determining overall process efficiency and emissions are usually the gasifier and combustor. Providing more accurate and more computationally efficient reduced-order models for these components, then, enables much more effective plant-level design optimization and design for control. Based on the CHEMKIN-PRO and ENERGICO software, we have developed an automated methodology for generating an advanced form of reduced-order model for gasifiers and combustors. The reduced-order model offers representation of key unit operations in flowsheet simulations, while allowing simulation that is fast enough to be used in iterative flowsheet calculations. Using high-fidelity fluid-dynamics models as input, Reaction Design’s ENERGICO® [2] software can automatically extract equivalent reactor networks (ERNs) from a CFD solution. For the advanced reduced-order concept, we introduce into the ERN a much more detailed kinetics model than can be included practically in the CFD simulation. The state-of-the-art chemistry solver technology within CHEMKIN-PRO allows that to be accomplished while still maintaining a very fast model turn-around time. In this way, the ERN becomes the basis for
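
    As an aside to the record above, the reduced-order idea can be illustrated with a toy equivalent reactor network: a short chain of ideal continuously stirred reactors (CSTRs) with first-order kinetics. This is a minimal sketch, not the ENERGICO/CHEMKIN-PRO workflow; the volumes, flow rate, and rate constant are illustrative placeholders.

```python
# Toy equivalent-reactor-network (ERN) sketch: a chain of ideal CSTRs with
# first-order consumption of one species. Not the ENERGICO/CHEMKIN-PRO
# workflow; volumes, flow rate and rate constant are illustrative placeholders.

def cstr_outlet(c_in, k, tau):
    """Steady-state CSTR with first-order kinetics: q*c_in - q*c_out - k*c_out*V = 0."""
    return c_in / (1.0 + k * tau)

def ern_conversion(c_feed, volumes, q, k):
    """Propagate the feed through reactors in series and return overall conversion."""
    c = c_feed
    for V in volumes:
        c = cstr_outlet(c, k, tau=V / q)   # tau: residence time of this network node [s]
    return 1.0 - c / c_feed

x = ern_conversion(c_feed=1.0,               # normalized inlet concentration
                   volumes=[0.5, 1.0, 2.0],  # placeholder zone volumes [m^3]
                   q=0.25,                   # placeholder volumetric flow [m^3/s]
                   k=0.8)                    # placeholder first-order rate constant [1/s]
print(f"overall conversion through the three-reactor network: {x:.3f}")
```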

  8. Whole-building Hygrothermal Simulation Model

    DEFF Research Database (Denmark)

    Rode, Carsten; Grau, Karl

    2003-01-01

    An existing integrated simulation tool for dynamic thermal simulation of buildings was extended with a transient model for moisture release and uptake in building materials. Validation of the new model was begun with comparison against measurements in an outdoor test cell furnished with single...... materials. Almost quasi-steady, cyclic experiments were used to compare the indoor humidity variation and the numerical results of the integrated simulation tool with the new moisture model. Except for the case with chipboard as furnishing, the predictions of indoor humidity with the detailed model were...

  9. A welfare study into capture fisheries in cirata reservoir: a bio-economic model

    Science.gov (United States)

    Anna, Z.; Hindayani, P.

    2018-04-01

    Capture fisheries in inland waters such as reservoirs can be a source of food security and even of the economy and public welfare of the surrounding community. This research was conducted on the Cirata reservoir fishery in West Java to see how far a reservoir capture fishery can contribute economically in the form of resource rents. The method used is the Copes bioeconomic model, which analyzes the demand and supply functions to calculate the optimization of stakeholders' welfare under various management regimes. The results showed that management of the capture fishery under the Maximum Economic Yield (MEY) regime gave the most efficient result, where fewer inputs produce maximum profit. Under MEY management, the producer surplus obtained is IDR 2,610,203,099 per quarter and the consumer surplus is IDR 273,885,400 per quarter. Furthermore, the research showed that the sustainable MEY management regime results in a government rent/surplus of IDR 217,891,345 per quarter, with the average price of fish per kg being IDR 13,929. In open-access fishery, the producer surplus becomes IDR 0. Thus the implementation of the MEY-based instrument policy becomes a necessity for the Cirata reservoir capture fishery.
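
    For readers unfamiliar with the MEY versus open-access comparison, the sketch below uses the standard Gordon-Schaefer surplus-production model rather than the Copes demand-supply model applied in the record; all parameter values are hypothetical and chosen only to show that rent is maximized at MEY and dissipated under open access.

```python
# Illustrative Gordon-Schaefer bio-economic sketch (a simpler standard model,
# not the Copes demand-supply model used in the record). All parameter values
# are hypothetical.

def sustainable_yield(E, r, K, q):
    """Equilibrium catch at effort E for logistic growth: Y = qKE(1 - qE/r)."""
    return q * K * E * (1.0 - q * E / r)

def profit(E, r, K, q, p, c):
    """Resource rent: revenue minus cost at the sustainable yield."""
    return p * sustainable_yield(E, r, K, q) - c * E

r, K, q = 0.5, 100_000.0, 1e-4       # intrinsic growth, carrying capacity, catchability
p, c = 14_000.0, 90_000.0            # price per kg and cost per unit effort (hypothetical)

E_msy = r / (2.0 * q)                               # effort at maximum sustainable yield
E_mey = r * (p * q * K - c) / (2.0 * p * q**2 * K)  # effort maximizing rent (d(profit)/dE = 0)
E_oa = 2.0 * E_mey                                  # open access: rent dissipated to zero

for name, E in [("MSY", E_msy), ("MEY", E_mey), ("open access", E_oa)]:
    print(f"{name:12s} effort={E:8.1f}  yield={sustainable_yield(E, r, K, q):10.1f} kg  "
          f"rent={profit(E, r, K, q, p, c):14,.1f}")
```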

  10. Simulating Freak Waves in the Ocean with CFD Modeling

    Science.gov (United States)

    Manolidis, M.; Orzech, M.; Simeonov, J.

    2017-12-01

    Rogue, or freak, waves constitute an active topic of research within the world scientific community, as various maritime authorities around the globe seek to better understand and more accurately assess the risks that the occurrence of such phenomena entail. Several experimental studies have shed some light on the mechanics of rogue wave formation. In our work we numerically simulate the formation of such waves in oceanic conditions by means of Computational Fluid Dynamics (CFD) software. For this purpose we implement the NHWAVE and OpenFOAM software packages. Both are non-hydrostatic, turbulent flow solvers, but NHWAVE implements a shock-capturing scheme at the free surface-interface, while OpenFOAM utilizes the Volume Of Fluid (VOF) method. NHWAVE has been shown to accurately reproduce highly nonlinear surface wave phenomena, such as soliton propagation and wave shoaling. We conducted a range of tests simulating rogue wave formation and horizontally varying currents to evaluate and compare the capabilities of the two software packages. Then we used each model to investigate the effect of ocean currents and current gradients on the formation of rogue waves. We present preliminary results.

  11. Virtual Geographic Simulation of Light Distribution within Three-Dimensional Plant Canopy Models

    Directory of Open Access Journals (Sweden)

    Liyu Tang

    2017-12-01

    Full Text Available Virtual geographic environments (VGEs) have been regarded as an important new means of simulating, analyzing, and understanding complex geological processes. Plants and light are major components of the geographic environment. Light is a critical factor that affects ecological systems. In this study, we focused on simulating light transmission and distribution within a three-dimensional plant canopy model. A progressive refinement radiosity algorithm was applied to simulate the transmission and distribution of solar light within a detailed, three-dimensional (3D) loquat (Eriobotrya japonica Lindl.) canopy model. The canopy was described in three dimensions, and each organ surface was represented by a set of triangular facets. The form factors in radiosity were calculated using a hemi-cube algorithm. We developed a module for simulating the instantaneous light distribution within a virtual canopy, which was integrated into ParaTree. We simulated the distribution of photosynthetically active radiation (PAR) within a loquat canopy, and calculated the total PAR intercepted at the whole canopy scale, as well as the mean PAR interception per unit leaf area. The ParaTree-integrated radiosity model simulates the uncollided propagation of direct solar and diffuse sky light and the light-scattering effect of foliage. The PAR captured by the whole canopy based on the radiosity is approximately 9.4% greater than that obtained using ray tracing and TURTLE methods. The latter methods do not account for the scattering among leaves in the canopy in this study, and therefore, the difference might be due to the contribution of light scattering in the foliage. The simulation result is close to Myneni’s findings, in which the light scattering within a canopy is less than 10% of the incident PAR. Our method can be employed for visualizing and analyzing the spatial distribution of light within a canopy, and for estimating the PAR interception at the organ and canopy scales.
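
    The progressive-refinement radiosity step described above can be sketched on a handful of abstract patches. The real model computes hemi-cube form factors over triangular leaf facets; here the form-factor matrix, reflectances, and source emission are placeholder values chosen only so that reciprocity holds (equal patch areas, symmetric matrix).

```python
import numpy as np

# Progressive-refinement radiosity on a toy set of patches. Placeholder
# form factors, reflectances and emission stand in for the hemi-cube
# computation over triangular leaf facets described in the record.

F = np.array([[0.0, 0.3, 0.2],     # F[i][j]: fraction of energy leaving i that reaches j
              [0.3, 0.0, 0.4],
              [0.2, 0.4, 0.0]])
rho = np.array([0.2, 0.5, 0.5])    # patch reflectances (leaves scatter part of the PAR)
E = np.array([100.0, 0.0, 0.0])    # patch 0 acts as the light source [W/m^2]

B = E.copy()                       # radiosity estimate
unshot = E.copy()                  # radiosity not yet distributed ("shot") to the scene

for _ in range(200):                       # iterate until unshot energy is negligible
    i = int(np.argmax(unshot))             # shoot from the patch holding the most energy
    if unshot[i] < 1e-6:
        break
    dB = rho * F[i, :] * unshot[i]         # energy received and re-emitted by every patch j
    B += dB
    unshot += dB
    unshot[i] = 0.0                        # patch i has now shot all of its energy

print("converged radiosities:", np.round(B, 3))
```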

  12. A stochastic model simulating the capture of pathogenic micro-organisms by superparamagnetic particles in an isodynamic magnetic field

    International Nuclear Information System (INIS)

    Rotariu, O; Strachan, N J C; Badescu, V

    2004-01-01

    The method of immunomagnetic separation (IMS) has become an established technique to concentrate and separate animal cells, biologically active compounds and pathogenic micro-organisms from clinical, food and environmental matrices. One drawback of this technique is that the analysis is only possible for small sample volumes. We have developed a stochastic model that involves numerical simulations to optimize the process of concentration of pathogenic micro-organisms onto superparamagnetic carrier particles (SCPs) in a gradient magnetic field. Within the range of the system parameters varied in the simulations, optimal conditions favour larger particles with higher magnetite concentrations. The dependence on magnetic field intensity and gradient together with concentration of particles and micro-organisms was found to be less important for larger SCPs but these parameters can influence the values of the collision time for small particles. These results will be useful in aiding the design of apparatus for immunomagnetic separation from large volume samples

  13. Capturing and modelling high-complex alluvial topography with UAS-borne laser scanning

    Science.gov (United States)

    Mandlburger, Gottfried; Wieser, Martin; Pfennigbauer, Martin

    2015-04-01

    Due to fluvial activity, alluvial forests are zones of highest complexity and relief energy. Alluvial forests are dominated by new and pristine channels in consequence of current and historic flood events. Apart from topographic features, the vegetation structure is typically very complex, featuring both dense understory and high trees. Furthermore, deadwood and debris carried from upstream during periods of high discharge within the river channel are deposited in these areas. Therefore, precise modelling of the micro relief of alluvial forests using standard tools like Airborne Laser Scanning (ALS) is hardly feasible. Terrestrial Laser Scanning (TLS), in turn, is very time consuming for capturing larger areas, as many scan positions are necessary for obtaining complete coverage due to view occlusions in the forest. In the recent past, the technological development of Unmanned Aerial Systems (UAS) has reached a level at which light-weight survey-grade laser scanners can be operated from these platforms. For capturing alluvial topography this could bridge the gap between ALS and TLS in terms of providing a very detailed description of the topography and the vegetation structure due to the achievable very high point density of >100 points per m2. In our contribution we demonstrate the feasibility of applying UAS-borne laser scanning for capturing and modelling the complex topography of the study area Neubacher Au, an alluvial forest at the pre-alpine River Pielach (Lower Austria). The area was captured with Riegl's VUX-1 compact time-of-flight laser scanner mounted on a RiCopter (X-8 array octocopter). The scanner features an effective scan rate of 500 kHz and was flown at 50-100 m above ground. At this flying height the laser footprint is 25-50 mm, allowing mapping of very small surface details. Furthermore, online waveform processing of the backscattered laser energy enables the retrieval of multiple targets for single laser shots, resulting in a dense point cloud of

  14. Integrated water system simulation by considering hydrological and biogeochemical processes: model development, with parameter sensitivity and autocalibration

    Science.gov (United States)

    Zhang, Y. Y.; Shao, Q. X.; Ye, A. Z.; Xing, H. T.; Xia, J.

    2016-02-01

    Integrated water system modeling is a feasible approach to understanding severe water crises in the world and promoting the implementation of integrated river basin management. In this study, a classic hydrological model (the time variant gain model: TVGM) was extended to an integrated water system model by coupling multiple water-related processes in hydrology, biogeochemistry, water quality, and ecology, and considering the interference of human activities. A parameter analysis tool, which included sensitivity analysis, autocalibration and model performance evaluation, was developed to improve modeling efficiency. To demonstrate the model performance, the Shaying River catchment, which is the largest highly regulated and heavily polluted tributary of the Huai River basin in China, was selected as the case study area. The model performance was evaluated on the key water-related components including runoff, water quality, diffuse pollution load (or nonpoint sources) and crop yield. Results showed that our proposed model simulated most components reasonably well. The simulated daily runoff at most regulated and less-regulated stations matched well with the observations. The average correlation coefficient and Nash-Sutcliffe efficiency were 0.85 and 0.70, respectively. Both the simulated low and high flows at most stations were improved when the dam regulation was considered. The daily ammonium-nitrogen (NH4-N) concentration was also well captured with an average correlation coefficient of 0.67. Furthermore, the diffuse source load of NH4-N and the corn yield were reasonably simulated at the administrative region scale. This integrated water system model is expected to improve the simulation performance with extension to more model functionalities, and to provide a scientific basis for implementation in integrated river basin management.
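
    The two skill scores reported above are standard and easy to reproduce; a minimal sketch of how the correlation coefficient and the Nash-Sutcliffe efficiency are computed is given below, with placeholder series rather than Shaying River data.

```python
import numpy as np

# Computation of the two reported skill scores: Pearson correlation and
# Nash-Sutcliffe efficiency (NSE) between observed and simulated daily series.
# The arrays below are placeholder numbers, not data from the Shaying River study.

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([12.0, 15.0, 30.0, 55.0, 40.0, 22.0, 18.0])   # observed runoff [m3/s]
sim = np.array([10.0, 14.0, 33.0, 50.0, 44.0, 25.0, 16.0])   # simulated runoff [m3/s]

r = np.corrcoef(obs, sim)[0, 1]
print(f"correlation coefficient = {r:.2f}, NSE = {nash_sutcliffe(obs, sim):.2f}")
```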

  15. Progress in modeling and simulation.

    Science.gov (United States)

    Kindler, E

    1998-01-01

    For the modeling of systems, computers are used more and more, while the other "media" (including the human intellect) carrying the models are abandoned. For the modeling of knowledge, i.e. of more or less general concepts (possibly used to model systems composed of instances of such concepts), object-oriented programming is nowadays widely used. For the modeling of processes existing and developing in time, computer simulation is used, the results of which are often presented by means of animation (graphical pictures moving and changing in time). Unfortunately, object-oriented programming tools are commonly not designed to be of great use for simulation, while programming tools for simulation do not enable their users to apply the advantages of object-oriented programming. Nevertheless, there are exceptions that enable general concepts represented in a computer to be used for constructing simulation models and for their easy modification. They are described in the present paper, together with precise definitions of modeling, simulation and object-oriented programming (including cases that do not satisfy the definitions but risk introducing misunderstanding), an outline of their applications and of their further development. Given that computing systems are being introduced as control components into a large spectrum of (technological, social and biological) systems, attention is directed to models of systems containing modeling components.

  16. Simulation modelling of fynbos ecosystems: Systems analysis and conceptual models

    CSIR Research Space (South Africa)

    Kruger, FJ

    1985-03-01

    Full Text Available ...-animal interactions. Two additional models, which expand aspects of the FYNBOS model, are described: a model for simulating canopy processes, and a Fire Recovery Simulator. The canopy process model will simulate ecophysiological processes in more detail than FYNBOS...

  17. Modelling of cyclopentane promoted gas hydrate systems for carbon dioxide capture processes

    DEFF Research Database (Denmark)

    Herslund, Peter Jørgensen; Thomsen, Kaj; Abildskov, Jens

    2014-01-01

    A thermodynamic model based on the Cubic-Plus-Association equation of state and the van der Waals-Platteeuw hydrate model is applied to perform a thermodynamic evaluation of gas hydrate forming systems relevant for post-combustion carbon dioxide capture. A modelling study of both fluid phase...... behaviour and hydrate phase behaviour is presented. Cycloalkanes ranging from cyclopropane to cyclohexane represent a challenge for CPA, both in the description of the pure component densities and for liquid-liquid equilibrium (LLE) in the binary systems with water. It is concluded that an insufficient...

  18. Simulation modeling and analysis with Arena

    CERN Document Server

    Altiok, Tayfur

    2007-01-01

    Simulation Modeling and Analysis with Arena is a highly readable textbook which treats the essentials of the Monte Carlo discrete-event simulation methodology, and does so in the context of a popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings. It introduces the concept of discrete event Monte Carlo simulation, the most commonly used methodology for modeli...

  19. Truth-as-simulation : towards a coalgebraic perspective on logic and games

    NARCIS (Netherlands)

    A. Baltag

    1999-01-01

    textabstractBuilding on the work of L. Moss on coalgebraic logic, I study in a general setting a class of infinitary modal logics for F-coalgebras, designed to capture simulation and bisimulation. For a notion of coalgebraic simulation, I use the work of A. Thijs on modelling simulation in terms of

  20. An intelligent system for monitoring and diagnosis of the CO{sub 2} capture process

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Q.; Chan, C.W.; Tontiwachwuthikul, P. [University of Regina, Regina, SK (Canada). Faculty of Engineering

    2011-07-15

    Amine-based carbon dioxide capture has been widely considered as a feasible ideal technology for reducing large-scale CO{sub 2} emissions and mitigating global warming. The operation of amine-based CO{sub 2} capture is a complicated task, which involves monitoring over 100 process parameters and careful manipulation of numerous valves and pumps. The current research in the field of CO{sub 2} capture has emphasized the need for improving CO{sub 2} capture efficiency and enhancing plant performance. In the present study, artificial intelligence techniques were applied for developing a knowledge-based expert system that aims at effectively monitoring and controlling the CO{sub 2} capture process and thereby enhancing CO{sub 2} capture efficiency. In developing the system, the inferential modeling technique (IMT) was applied to analyze the domain knowledge and problem-solving techniques, and a knowledge base was developed on DeltaV Simulate. The expert system helps to enhance CO{sub 2} capture system performance and efficiency by reducing the time required for diagnosis and problem solving if abnormal conditions occur. The expert system can be used as a decision-support tool that helps inexperienced operators control the plant: it can be used also for training novice operators.

  1. Secondary scattering on the intensity dependence of the capture velocity in a magneto-optical trap

    International Nuclear Information System (INIS)

    Loos, M.R.; Massardo, S.B.; Zanon, R.A. de S; Oliveira, A.L. de

    2005-01-01

    In this work, we consider a three-dimensional model to simulate the capture velocity behavior in a sample of cold-trapped sodium atoms as a function of the trapping laser intensity. We expand on previous work [V. S. Bagnato, L. G. Marcassa, S. G. Miranda, S. R. Muniz, and A. L. de Oliveira, Phys. Rev. A 62, 013404 (2000)] by calculating the capture velocity over a broad range of light intensities considering the secondary scattering in a magneto-optical trap. Our calculations are in good agreement with recently measured values [S. R. Muniz et al., Phys. Rev. A 65, 015402 (2001)].

  2. Secondary scattering on the intensity dependence of the capture velocity in a magneto-optical trap

    Science.gov (United States)

    Loos, M. R.; Massardo, S. B.; de S. Zanon, R. A.; de Oliveira, A. L.

    2005-08-01

    In this work, we consider a three-dimensional model to simulate the capture velocity behavior in a sample of cold-trapped sodium atoms as a function of the trapping laser intensity. We expand on previous work [V. S. Bagnato, L. G. Marcassa, S. G. Miranda, S. R. Muniz, and A. L. de Oliveira, Phys. Rev. A 62, 013404 (2000)] by calculating the capture velocity over a broad range of light intensities considering the secondary scattering in a magneto-optical trap. Our calculations are in good agreement with recently measured values [S. R. Muniz et al., Phys. Rev. A 65, 015402 (2001)].

  3. Power Aware Simulation Framework for Wireless Sensor Networks and Nodes

    Directory of Open Access Journals (Sweden)

    Daniel Weber

    2008-07-01

    Full Text Available The constrained resources of sensor nodes limit analytical techniques and cost-time factors limit test beds to study wireless sensor networks (WSNs). Consequently, simulation becomes an essential tool to evaluate such systems. We present the power aware wireless sensors (PAWiS) simulation framework that supports design and simulation of wireless sensor networks and nodes. The framework emphasizes power consumption capturing and hence the identification of inefficiencies in various hardware and software modules of the systems. These modules include all layers of the communication system, the targeted class of application itself, the power supply and energy management, the central processing unit (CPU), and the sensor-actuator interface. The modular design makes it possible to simulate heterogeneous systems. PAWiS is an OMNeT++ based discrete event simulator written in C++. It captures the node internals (modules) as well as the node surroundings (network, environment) and provides specific features critical to WSNs like capturing power consumption at various levels of granularity, support for mobility, and environmental dynamics as well as the simulation of timing effects. A module library with standardized interfaces and a power analysis tool have been developed to support the design and analysis of simulation models. The performance of the PAWiS simulator is comparable with other simulation environments.

  4. A comparison of the COG and MCNP codes in computational neutron capture therapy modeling, Part II: gadolinium neutron capture therapy models and therapeutic effects.

    Science.gov (United States)

    Wangerin, K; Culbertson, C N; Jevremovic, T

    2005-08-01

    The goal of this study was to evaluate the COG Monte Carlo radiation transport code, developed and tested by Lawrence Livermore National Laboratory, for gadolinium neutron capture therapy (GdNCT) related modeling. The validity of the COG NCT model was established previously; here the calculation was extended to analyze the effect of various gadolinium concentrations on dose distribution and the cell-kill effect of the GdNCT modality, and to determine the optimum therapeutic conditions for treating brain cancers. The computational results were compared with the widely used MCNP code. The differences between the COG and MCNP predictions were generally small and suggest that the COG code can be applied to similar research problems in NCT. Results for this study also showed that a concentration of 100 ppm gadolinium in the tumor was most beneficial when using an epithermal neutron beam.

  5. Development of the Transport Class Model (TCM) Aircraft Simulation From a Sub-Scale Generic Transport Model (GTM) Simulation

    Science.gov (United States)

    Hueschen, Richard M.

    2011-01-01

    A six degree-of-freedom, flat-earth dynamics, non-linear, and non-proprietary aircraft simulation was developed that is representative of a generic mid-sized twin-jet transport aircraft. The simulation was developed from a non-proprietary, publicly available, subscale twin-jet transport aircraft simulation using scaling relationships and a modified aerodynamic database. The simulation has an extended aerodynamics database with aero data outside the normal transport-operating envelope (large angle-of-attack and sideslip values). The simulation has representative transport aircraft surface actuator models with variable rate-limits and generally fixed position limits. The simulation contains a generic 40,000 lb sea level thrust engine model. The engine model is a first order dynamic model with a variable time constant that changes according to simulation conditions. The simulation provides a means for interfacing a flight control system to use the simulation sensor variables and to command the surface actuators and throttle position of the engine model.
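
    The engine description above (a first-order lag with a variable time constant) can be sketched in a few lines. The Mach-dependent time-constant law and all numbers below are illustrative assumptions, not values from the TCM database.

```python
# Sketch of a first-order engine lag of the kind described: thrust relaxes
# toward the commanded value with a time constant that varies with the flight
# condition. The simple Mach-dependent time-constant law and all numbers are
# illustrative placeholders.

def tau_engine(mach):
    """Hypothetical variable time constant [s]: slower spool-up at low Mach."""
    return 1.5 - 0.5 * min(max(mach, 0.0), 1.0)

def step_thrust(thrust, thrust_cmd, mach, dt):
    """One explicit-Euler step of dT/dt = (T_cmd - T) / tau."""
    return thrust + dt * (thrust_cmd - thrust) / tau_engine(mach)

thrust, thrust_cmd, mach, dt = 5_000.0, 40_000.0, 0.3, 0.02   # lb, lb, -, s
for _ in range(500):                        # 10 s of simulated time
    thrust = step_thrust(thrust, thrust_cmd, mach, dt)
print(f"thrust after 10 s: {thrust:.0f} lb (command {thrust_cmd:.0f} lb)")
```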

  6. Electron-capture Isotopes Could Constrain Cosmic-Ray Propagation Models

    Science.gov (United States)

    Benyamin, David; Shaviv, Nir J.; Piran, Tsvi

    2017-12-01

    Electron capture (EC) isotopes are known to provide constraints on the low-energy behavior of cosmic rays (CRs), such as reacceleration. Here, we study the EC isotopes within the framework of the dynamic spiral-arms CR propagation model in which most of the CR sources reside in the galactic spiral arms. The model was previously used to explain the B/C and sub-Fe/Fe ratios. We show that the known inconsistency between the 49Ti/49V and 51V/51Cr ratios remains also in the spiral-arms model. On the other hand, unlike the general wisdom that says the isotope ratios depend primarily on reacceleration, we find here that the ratio also depends on the halo size (Z_h) and, in spiral-arms models, also on the time since the last spiral-arm passage (τ_arm). Namely, EC isotopes can, in principle, provide interesting constraints on the diffusion geometry. However, with the present uncertainties in the lab measurements of both the electron attachment rate and the fragmentation cross sections, no meaningful constraint can be placed.

  7. Carbon Capture Multidisciplinary Simulation Center Trilab Support Team (TST) Fall Meeting 2016 Report

    Energy Technology Data Exchange (ETDEWEB)

    Draeger, Erik W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-01-03

    The theme of this year’s meeting was “Predictivity: Now and in the Future”. After welcoming remarks, Erik Draeger gave a talk on the NNSA Labs’ history of predictive simulation and the new challenges faced by upcoming architecture changes. He described an example where the volume of analysis data produced by a set of inertial confinement fusion (ICF) simulations on the Trinity machine was too large to store or transfer, and the steps needed to reduce it to a manageable size. He also described the software re-engineering plan for LLNL’s suite of multiphysics codes and physics packages with a new push toward common components, making collaboration with teams like the CCMSC who already have experience trying to architect complex multiphysics code infrastructure on next-generation architectures all the more important. Phil Smith then gave an overview outlining the goals of the project, namely to accelerate development of new technology in the form of high efficiency carbon capture pulverized coal power generation as well as further optimize existing state of the art designs. He then presented a summary of the Center’s top-down uncertainty quantification approach, in which ultimate target predictivity informs uncertainty targets for lower-level components, and gave data on how close all the different components currently are to their targets. Most components still need an approximately two-fold reduction in uncertainty to hit the ultimate predictivity target, but the current accuracy is already rather impressive.

  8. Multiple-point statistical simulation for hydrogeological models: 3-D training image development and conditioning strategies

    Science.gov (United States)

    Høyer, Anne-Sophie; Vignoli, Giulio; Mejer Hansen, Thomas; Thanh Vu, Le; Keefer, Donald A.; Jørgensen, Flemming

    2017-12-01

    Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km2 in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments) which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels with size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km2. We use an iterative training image development strategy and find that even slight modifications in the training image create significant changes in simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. We present a practical workflow to build the training image and

  9. Unified Simulation and Analysis Framework for Deep Space Navigation Design

    Science.gov (United States)

    Anzalone, Evan; Chuang, Jason; Olsen, Carrie

    2013-01-01

    As the technology that enables advanced deep space autonomous navigation continues to develop and the requirements for such capability continues to grow, there is a clear need for a modular expandable simulation framework. This tool's purpose is to address multiple measurement and information sources in order to capture system capability. This is needed to analyze the capability of competing navigation systems as well as to develop system requirements, in order to determine its effect on the sizing of the integrated vehicle. The development for such a framework is built upon Model-Based Systems Engineering techniques to capture the architecture of the navigation system and possible state measurements and observations to feed into the simulation implementation structure. These models also allow a common environment for the capture of an increasingly complex operational architecture, involving multiple spacecraft, ground stations, and communication networks. In order to address these architectural developments, a framework of agent-based modules is implemented to capture the independent operations of individual spacecraft as well as the network interactions amongst spacecraft. This paper describes the development of this framework, and the modeling processes used to capture a deep space navigation system. Additionally, a sample implementation describing a concept of network-based navigation utilizing digitally transmitted data packets is described in detail. This developed package shows the capability of the modeling framework, including its modularity, analysis capabilities, and its unification back to the overall system requirements and definition.

  10. CDW-EIS model for single-electron capture in ion-atom collisions involving multielectronic targets

    International Nuclear Information System (INIS)

    Abufager, P N; MartInez, A E; Rivarola, R D; Fainstein, P D

    2004-01-01

    A generalization of the continuum distorted wave eikonal initial state (CDW-EIS) approximation, for the description of single-electron capture in ion-atom collisions involving multielectronic targets is presented. This approximation is developed within the framework of the independent electron model taking particular care of the representation of the bound and continuum target states. Total cross sections for single-electron capture from the K-shell of He, Ne and Ar noble gases by impact of bare ions are calculated. Present results are compared to previous CDW-EIS ones and to experimental data

  11. Proceedings of the 12. meeting of the International Post-Combustion CO{sub 2} Capture Network

    Energy Technology Data Exchange (ETDEWEB)

    Topper, J. [IEA Greenhouse Gas R and D Programme, Cheltenham, Gloucestershire (United Kingdom)] (comp.)

    2009-07-01

    This conference provided a forum to discuss new developments in post combustion capture of carbon dioxide (CO{sub 2}) emissions from fossil-fueled power plants. Since the creation of the Post-Combustion Capture Network in 2000, these conferences have provided exposure to latest research findings, acted as a conduit for trial of latest ideas and served as a means of encouraging trans-national co-operation. As host of the conference, the University of Regina is among the leading institutions in the world with expertise in working on solvent based capture and promoting international activity through the International Test Centre. The topics of discussion ranged from amine based solvent investigations; ammonia as an alternative means of capture; pilot plant progress reports; simulation and modelling studies; latest developments by technology providers; national programs with a special interest in demonstration plant proposals; and more novel techniques such as membranes. The sessions of the conference were entitled: fundamental studies; pilot plant work and scale-up; modelling and plant studies; and commercial and other aspects. This meeting featured 49 presentations, of which 46 have been catalogued separately for inclusion in this database. refs., figs.

  12. Unsteady numerical simulation of the flow in the U9 Kaplan turbine model

    International Nuclear Information System (INIS)

    Javadi, Ardalan; Nilsson, Håkan

    2014-01-01

    The Reynolds-averaged Navier-Stokes equations with the RNG k-ε turbulence model closure are utilized to simulate the unsteady turbulent flow throughout the whole flow passage of the U9 Kaplan turbine model. The U9 Kaplan turbine model comprises 20 stationary guide vanes and 6 rotating blades (696.3 RPM), working at best efficiency load (0.71 m3/s). The computations are conducted using a general finite volume method, using the OpenFOAM CFD code. A dynamic mesh is used together with a sliding GGI interface to include the effect of the rotating runner. The clearance is included in the guide vane. The hub and tip clearances are also included in the runner. An analysis is conducted of the unsteady behavior of the flow field, the pressure fluctuation in the draft tube, and the coherent structures of the flow. The tangential and axial velocity distributions at three sections in the draft tube are compared against LDV measurements. The numerical result is in reasonable agreement with the experimental data, and the important flow physics close to the hub in the draft tube is captured. The hub and tip vortices and an on-axis forced vortex are captured. The numerical results show that the frequency of the forced vortex is 1/5 of the runner rotation frequency.

  13. Unsteady numerical simulation of the flow in the U9 Kaplan turbine model

    Science.gov (United States)

    Javadi, Ardalan; Nilsson, Håkan

    2014-03-01

    The Reynolds-averaged Navier-Stokes equations with the RNG k-ε turbulence model closure are utilized to simulate the unsteady turbulent flow throughout the whole flow passage of the U9 Kaplan turbine model. The U9 Kaplan turbine model comprises 20 stationary guide vanes and 6 rotating blades (696.3 RPM), working at best efficiency load (0.71 m3/s). The computations are conducted using a general finite volume method, using the OpenFOAM CFD code. A dynamic mesh is used together with a sliding GGI interface to include the effect of the rotating runner. The clearance is included in the guide vane. The hub and tip clearances are also included in the runner. An analysis is conducted of the unsteady behavior of the flow field, the pressure fluctuation in the draft tube, and the coherent structures of the flow. The tangential and axial velocity distributions at three sections in the draft tube are compared against LDV measurements. The numerical result is in reasonable agreement with the experimental data, and the important flow physics close to the hub in the draft tube is captured. The hub and tip vortices and an on-axis forced vortex are captured. The numerical results show that the frequency of the forced vortex is 1/5 of the runner rotation frequency.

  14. Modeling and prototyping of a flux concentrator for positron capture

    International Nuclear Information System (INIS)

    Liu, W.; Gai, W.; Wang, H.; Wong, T.

    2008-01-01

    An adiabatic matching device (AMD) generates a tapered high-strength magnetic field to capture positrons emitted from a positron target to a downstream accelerating structure. The AMD is a key component of a positron source and represents a technical challenge. The International Linear Collider collaboration is proposing to employ a pulsed, normal-conducting, flux-concentrator to generate a 5 Tesla initial magnetic field. The flux-concentrator structure itself and the interactions between the flux-concentrator and the external power supply circuits give rise to a nontrivial system. In this paper, we present a recently developed equivalent circuit model for a flux concentrator, along with the characteristics of a prototype fabricated for validating the model. Using the model, we can obtain the transient response of the pulsed magnetic field and the field profile. Calculations based on the model and the results of measurements made on the prototype are in good agreement.

  15. Numerical study of particle capture efficiency in fibrous filter

    Directory of Open Access Journals (Sweden)

    Fan Jianhua

    2017-01-01

    Full Text Available Numerical simulations are performed for transport and deposition of particles over a fixed obstacle in a fluid flow. The effect of particle size and Stokes number on the particle capture efficiency is investigated using two methods. The first one is one-way coupling, combining the Lattice Boltzmann (LB) method with a Lagrangian point-like approach. The second one is two-way coupling based on the coupling between the Lattice Boltzmann method and the discrete element (DE) method, which considers the particle influence on the fluid. The single-fiber collection efficiency characterized by the Stokes number (St) is then simulated by the LB-DE method. Results show that the two-way coupling method is more appropriate in our case for particles larger than 8 μm. A good agreement has also been observed between our simulation results and existing correlations for single-fiber collection efficiency. The numerical simulations presented in this work are useful to understand particle transport and deposition and to predict the capture efficiency.
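
    A minimal one-way-coupled sketch of the Lagrangian point-particle idea is given below: particles with Stokes drag are tracked through potential flow around a cylinder (standing in for the LB-resolved flow around a fiber) and counted as captured when they touch it. The geometry, Stokes numbers, and time step are illustrative.

```python
import numpy as np

# One-way-coupled Lagrangian point-particle sketch of single-fiber capture by
# inertial impaction. Potential flow replaces the LB-resolved flow of the
# record; the geometry, Stokes numbers, and time step are placeholders.

a, U = 1.0, 1.0                        # fiber radius and upstream velocity (normalized)

def fluid_velocity(x, y):
    """Potential flow past a cylinder of radius a: dw/dz = U (1 - a^2/z^2) = u - i v."""
    z = complex(x, y)
    w = U * (1.0 - a**2 / z**2)
    return w.real, -w.imag

def capture_efficiency(stokes, n_particles=100, dt=5e-3):
    tau_p = stokes * a / U             # particle response time, from St = tau_p * U / a
    captured = 0
    for y0 in np.linspace(-a, a, n_particles):
        x, y, vx, vy = -5.0 * a, y0, U, 0.0
        while x < 3.0 * a:
            ux, uy = fluid_velocity(x, y)
            vx += dt * (ux - vx) / tau_p           # Stokes drag only
            vy += dt * (uy - vy) / tau_p
            x, y = x + dt * vx, y + dt * vy
            if x * x + y * y <= a * a:             # particle touched the fiber
                captured += 1
                break
    return captured / n_particles

for St in (0.25, 1.0, 4.0):
    print(f"St = {St:4.2f}  ->  fraction of incoming particles captured ≈ {capture_efficiency(St):.2f}")
```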

  16. Numerical simulation for the influence of laser-induced plasmas addition on air mass capture of hypersonic inlet

    Science.gov (United States)

    Zhao, Wei; Dou, Zhiguo; Li, Qian

    2012-03-01

    The theory of adding laser-induced plasmas to the hypersonic airflow off a vehicle to increase air mass capture and improve the performance of hypersonic inlets at Mach numbers below the design value is explored. For hypersonic vehicles flying at Mach numbers lower than the design one, the mass capture ratio of the inlet can be increased by injecting laser-induced plasmas into the hypersonic flow upstream of the cowl lip to form a virtual cowl. Based on this theory, a model of the interaction between laser-induced plasmas and hypersonic flow was established. The influence on the mass capture ratio was studied for different positions of the laser-induced plasma region for an external-compression hypersonic inlet at Mach 5, while the design value is 6; the plasma energy was in the range of 1-8 mJ. The main results are as follows: 1. The best location of the plasma addition region is near the intersection of the nose shock of the vehicle with the continuation of the cowl line, and slightly below that line. In that case, the shock generated by the heating is close to the shock that is a reflection of the vehicle nose shock off the imaginary solid surface-extension of the cowl. 2. Plasma addition does increase mass capture, and the effect becomes stronger as more energy is added; the peak value appeared at a plasma energy of about 4 mJ, beyond which the mass capture declines slowly.

  17. Interatomic Coulombic electron capture

    International Nuclear Information System (INIS)

    Gokhberg, K.; Cederbaum, L. S.

    2010-01-01

    In a previous publication [K. Gokhberg and L. S. Cederbaum, J. Phys. B 42, 231001 (2009)] we presented the interatomic Coulombic electron capture process--an efficient electron capture mechanism by atoms and ions in the presence of an environment. In the present work we derive and discuss the mechanism in detail. We demonstrate thereby that this mechanism belongs to a family of interatomic electron capture processes driven by electron correlation. In these processes the excess energy released in the capture event is transferred to the environment and used to ionize (or to excite) it. This family includes the processes where the capture is into the lowest or into an excited unoccupied orbital of an atom or ion and proceeds in step with the ionization (or excitation) of the environment, as well as the process where an intermediate autoionizing excited resonance state is formed in the capturing center which subsequently deexcites to a stable state transferring its excess energy to the environment. Detailed derivation of the asymptotic cross sections of these processes is presented. The derived expressions make clear that the environment assisted capture processes can be important for many systems. Illustrative examples are presented for a number of model systems for which the data needed to construct the various capture cross sections are available in the literature.

  18. Computational modeling of pitching cylinder-type ocean wave energy converters using 3D MPI-parallel simulations

    Science.gov (United States)

    Freniere, Cole; Pathak, Ashish; Raessi, Mehdi

    2016-11-01

    Ocean Wave Energy Converters (WECs) are devices that convert energy from ocean waves into electricity. To aid in the design of WECs, an advanced computational framework has been developed which has advantages over conventional methods. The computational framework simulates the performance of WECs in a virtual wave tank by solving the full Navier-Stokes equations in 3D, capturing the fluid-structure interaction, nonlinear and viscous effects. In this work, we present simulations of the performance of pitching cylinder-type WECs and compare against experimental data. WECs are simulated at both model and full scales. The results are used to determine the role of the Keulegan-Carpenter (KC) number. The KC number is representative of viscous drag behavior on a bluff body in an oscillating flow, and is considered an important indicator of the dynamics of a WEC. Studying the effects of the KC number is important for determining the validity of the Froude scaling and the inviscid potential flow theory, which are heavily relied on in the conventional approaches to modeling WECs. Support from the National Science Foundation is gratefully acknowledged.
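
    The role of the KC number and of Froude scaling mentioned above can be made concrete with a short calculation; the wave parameters and the 1:25 geometric scale below are placeholders.

```python
import math

# The two dimensionless groups discussed for a pitching-cylinder WEC: the
# Keulegan-Carpenter number KC = U_m * T / D and the Reynolds number, and how
# each behaves under Froude scaling from model to full scale. The wave
# parameters and the geometric scale factor are placeholder values.

def keulegan_carpenter(u_amp, period, diameter):
    return u_amp * period / diameter

def reynolds(u_amp, diameter, nu=1.0e-6):   # nu: kinematic viscosity of water [m^2/s]
    return u_amp * diameter / nu

# Model-scale quantities (placeholders) and a 1:25 geometric scale
u_m, T_m, D_m, scale = 0.4, 1.6, 0.3, 25.0

u_f, T_f, D_f = u_m * math.sqrt(scale), T_m * math.sqrt(scale), D_m * scale   # Froude scaling

print(f"KC  model {keulegan_carpenter(u_m, T_m, D_m):6.2f}   full {keulegan_carpenter(u_f, T_f, D_f):6.2f}")
print(f"Re  model {reynolds(u_m, D_m):10.2e}   full {reynolds(u_f, D_f):10.2e}")
# KC is preserved by Froude scaling while Re grows by scale**1.5, which is why
# viscous (KC-dependent) drag can differ between model and full scale.
```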

  19. Validation of Effective Models for Simulation of Thermal Stratification and Mixing Induced by Steam Injection into a Large Pool of Water

    Directory of Open Access Journals (Sweden)

    Hua Li

    2014-01-01

    Full Text Available The Effective Heat Source (EHS) and Effective Momentum Source (EMS) models have been proposed to predict the development of thermal stratification and mixing during a steam injection into a large pool of water. These effective models are implemented in GOTHIC software and validated against the POOLEX STB-20 and STB-21 tests and the PPOOLEX MIX-01 test. First, the EHS model is validated against the STB-20 test, which shows the development of thermal stratification. Different numerical schemes and grid resolutions have been tested. A 48×114 grid with a second-order scheme is sufficient to capture the vertical temperature distribution in the pool. Next, the EHS and EMS models are validated against the STB-21 test. Effective momentum is estimated based on the water level oscillations in the blowdown pipe. An effective momentum selected within the experimental measurement uncertainty can reproduce the mixing details. Finally, the EHS-EMS models are validated against the MIX-01 test, which has improved spatial and temporal resolution of temperature measurements inside the blowdown pipe. Excellent agreement in averaged pool temperature and water level in the pool between the experiment and simulation has been achieved. The development of thermal stratification in the pool is also well captured in the simulation, as is the thermal behavior of the pool during the mixing phase.

  20. Analysis of capture-recapture data

    CERN Document Server

    McCrea, Rachel S

    2014-01-01

    An important first step in studying the demography of wild animals is to identify the animals uniquely through applying markings, such as rings, tags, and bands. Once the animals are encountered again, researchers can study different forms of capture-recapture data to estimate features, such as the mortality and size of the populations. Capture-recapture methods are also used in other areas, including epidemiology and sociology.With an emphasis on ecology, Analysis of Capture-Recapture Data covers many modern developments of capture-recapture and related models and methods and places them in the historical context of research from the past 100 years. The book presents both classical and Bayesian methods.A range of real data sets motivates and illustrates the material and many examples illustrate biometry and applied statistics at work. In particular, the authors demonstrate several of the modeling approaches using one substantial data set from a population of great cormorants. The book also discusses which co...
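
    As a pointer to the simplest case the book builds on, the two-sample (Lincoln-Petersen/Chapman) estimate of population size is sketched below with made-up counts.

```python
# The simplest two-sample capture-recapture estimate of population size, of
# the kind the book builds on: n1 animals marked in a first sample, n2 caught
# in a second sample, m2 of them already marked. Counts are made up for
# illustration.

def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected form of the Lincoln-Petersen estimator."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

n1, n2, m2 = 120, 150, 30
print(f"estimated population size ≈ {chapman_estimate(n1, n2, m2):.0f}")
# Lincoln-Petersen itself would give n1*n2/m2 = 600; Chapman's form avoids
# division by zero and reduces small-sample bias.
```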

  1. Implementation of an Online Chemistry Model to a Large Eddy Simulation Model (PALM-4U)

    Science.gov (United States)

    Mauder, M.; Khan, B.; Forkel, R.; Banzhaf, S.; Russo, E. E.; Sühring, M.; Kanani-Sühring, F.; Raasch, S.; Ketelsen, K.

    2017-12-01

    Large Eddy Simulation (LES) models permit resolving the relevant scales of turbulent motion, so that these models can capture the inherent unsteadiness of atmospheric turbulence. However, LES models have so far hardly been applied to urban air quality studies, in particular to the chemical transformation of pollutants. In this context, the BMBF (Bundesministerium für Bildung und Forschung) funded a joint project, MOSAIK (Modellbasierte Stadtplanung und Anwendung im Klimawandel / Model-based city planning and application in climate change), with the main goal to develop a new highly efficient urban climate model (UCM) that also includes atmospheric chemical processes. The state-of-the-art LES model PALM (Maronga et al., 2015, Geosci. Model Dev., 8, doi:10.5194/gmd-8-2515-2015) has been used as the core model for the new UCM, named PALM-4U. For the gas-phase chemistry, a fully coupled 'online' chemistry model has been implemented into PALM. The latest version of the Kinetic PreProcessor (KPP), Version 2.3, has been utilized for the numerical integration of the chemical species. Due to the high computational demands of the LES model, compromises in the description of chemical processes are required. Therefore, a reduced chemistry mechanism, which includes only major pollutants, namely O3, NO, NO2, CO, a highly simplified VOC chemistry and a small number of products, has been implemented. This work shows preliminary results of the advection and chemical transformation of atmospheric pollutants. Non-cyclic boundaries have been used for inflow and outflow in the east-west direction, while periodic boundary conditions have been applied at the south-north lateral boundaries. For practical applications, our approach is to go beyond the simulation of single street canyons to the chemical transformation, advection and deposition of air pollutants in the larger urban canopy. Tests of chemistry schemes and initial studies of chemistry-turbulence interaction, transport and transformation are presented.
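
    As a much smaller illustration than the reduced mechanism described above, the sketch below integrates only the NO-NO2-O3 photostationary triad in a zero-dimensional box with forward Euler (not KPP). The photolysis frequency and rate constant are typical order-of-magnitude values, and the initial mixing ratios are placeholders.

```python
# Zero-dimensional sketch of the smallest part of such a reduced mechanism:
# the NO-NO2-O3 photostationary triad, integrated with forward Euler rather
# than KPP. Rate parameters are typical order-of-magnitude values and the
# initial mixing ratios are placeholders.

M = 2.5e19                        # air number density [molecules cm^-3]
ppb = 1.0e-9 * M                  # conversion: 1 ppb -> molecules cm^-3

j_no2 = 8.0e-3                    # NO2 photolysis frequency [s^-1], midday-ish value
k_no_o3 = 1.9e-14                 # NO + O3 -> NO2 rate constant [cm^3 s^-1] near 298 K

no, no2, o3 = 10.0 * ppb, 20.0 * ppb, 40.0 * ppb   # initial placeholder concentrations
dt = 0.5                                           # time step [s]
for _ in range(int(3600 / dt)):                    # one hour of simulated time
    prod = j_no2 * no2                             # NO2 + hv -> NO + O3
    loss = k_no_o3 * no * o3                       # NO + O3 -> NO2 + O2
    no, o3, no2 = no + dt * (prod - loss), o3 + dt * (prod - loss), no2 + dt * (loss - prod)

print(f"O3 after 1 h: {o3 / ppb:.1f} ppb")
print(f"photostationary check j[NO2]/(k[NO]) = {j_no2 * no2 / (k_no_o3 * no) / ppb:.1f} ppb")
```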

  2. Performance and Uncertainty Evaluation of Snow Models on Snowmelt Flow Simulations over a Nordic Catchment (Mistassibi, Canada

    Directory of Open Access Journals (Sweden)

    Magali Troin

    2015-11-01

    Full Text Available An analysis of hydrological response using a multi-model approach based on an ensemble of seven snow models (SMs; degree-day and mixed degree-day/energy balance models) coupled with three hydrological models (HMs) is presented for a snowmelt-dominated basin in Canada. The present study aims to compare the performance and the reliability of different types of SM-HM combinations at simulating snowmelt flows over the 1961–2000 historical period. The multi-model approach also allows evaluating the uncertainties associated with the structure of the SM-HM ensemble to better predict river flows in Nordic environments. The 20-year calibration shows a satisfactory performance of the ensemble of 21 SM-HM combinations at simulating daily discharges and snow water equivalents (SWEs), with low streamflow volume biases. The validation of the ensemble of 21 SM-HM combinations is conducted over a 20-year period. Performances are similar to the calibration in simulating the daily discharges and SWEs, again with low model biases for streamflow. The spring-snowmelt-generated peak flow is captured only in timing by the ensemble of 21 SM-HM combinations. The results for specific hydrologic indicators show that the uncertainty related to the choice of HM in the SM-HM combinations cannot be neglected and should be accounted for in a more quantitative manner when simulating snowmelt flows. The selection of the SM plays a larger role than the choice of the SM approach (degree-day versus mixed degree-day/energy balance) in simulating spring flows. Overall, the snow models contribute a low degree of uncertainty to the total uncertainty in hydrological modeling for snow hydrology studies.
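
    A minimal degree-day snow model of the kind included in the SM ensemble is sketched below; the melt factor, temperature thresholds, and the forcing series are placeholders.

```python
# Minimal degree-day snow model: precipitation accumulates as snow below a
# threshold temperature, and melt is proportional to the excess of air
# temperature over a base temperature. All parameters and forcing are placeholders.

def degree_day_swe(temps, precips, melt_factor=3.0, t_snow=0.0, t_melt=0.0):
    """Daily snow water equivalent [mm] for air temperature [deg C] and precip [mm/day]."""
    swe, melt_series, swe_series = 0.0, [], []
    for t, p in zip(temps, precips):
        if t <= t_snow:                     # precipitation falls as snow
            swe += p
        melt = min(swe, melt_factor * max(t - t_melt, 0.0))   # mm/day, limited by storage
        swe -= melt
        melt_series.append(melt)
        swe_series.append(swe)
    return swe_series, melt_series

temps   = [-6, -4, -1, 0, 2, 5, 7, 3, -2, 6]      # deg C, placeholder spring sequence
precips = [ 5, 10,  2, 0, 0, 1, 0, 4,  6, 0]      # mm/day
swe, melt = degree_day_swe(temps, precips)
print("SWE  [mm]:", [round(s, 1) for s in swe])
print("melt [mm]:", [round(m, 1) for m in melt])
```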

  3. AEGIS geologic simulation model

    International Nuclear Information System (INIS)

    Foley, M.G.

    1982-01-01

    The Geologic Simulation Model (GSM) is used by the AEGIS (Assessment of Effectiveness of Geologic Isolation Systems) program at the Pacific Northwest Laboratory to simulate the dynamic geology and hydrology of a geologic nuclear waste repository site over a million-year period following repository closure. The GSM helps to organize geologic/hydrologic data; to focus attention on active natural processes by requiring their simulation; and, through interactive simulation and calibration, to reduce subjective evaluations of the geologic system. During each computer run, the GSM produces a million-year geologic history that is possible for the region and the repository site. In addition, the GSM records in permanent history files everything that occurred during that time span. Statistical analyses of data in the history files of several hundred simulations are used to classify typical evolutionary paths, to establish the probabilities associated with deviations from the typical paths, and to determine which types of perturbations of the geologic/hydrologic system, if any, are most likely to occur. These simulations will be evaluated by geologists familiar with the repository region to determine validity of the results. Perturbed systems that are determined to be the most realistic, within whatever probability limits are established, will be used for the analyses that involve radionuclide transport and dose models. The GSM is designed to be continuously refined and updated. Simulation models are site specific, and, although the submodels may have limited general applicability, the input data requirements necessitate detailed characterization of each site before application.

  4. Implicit prosody mining based on the human eye image capture technology

    Science.gov (United States)

    Gao, Pei-pei; Liu, Feng

    2013-08-01

    The technology of eye tracking has become one of the main methods of analyzing recognition issues in human-computer interaction. Human eye image capture is the key problem of eye tracking. Based on further research, a new human-computer interaction method is introduced to enrich the forms of speech synthesis. We propose a method of Implicit Prosody mining based on human eye image capture technology to extract parameters from images of human eyes during reading, to control and drive prosody generation in speech synthesis, and to establish a prosodic model with high simulation accuracy. The duration model is a key issue for prosody generation. For the duration model, this paper puts forward a new idea for obtaining the gaze duration of the eyes during reading based on eye image capture technology, and for synchronously controlling this duration and the pronunciation duration in speech synthesis. The movement of the human eyes during reading is a comprehensive multi-factor interactive process involving gaze, saccades and regressions. Therefore, how to extract the appropriate information from images of the eyes needs to be considered, and the gaze regularities of the eyes need to be obtained as references for modeling. Based on an analysis of three current eye-movement control models and the characteristics of Implicit Prosody reading, the relative independence between the text speech-processing system and the eye-movement control system is discussed. It is shown that, under the same text-familiarity condition, the gaze duration of the eyes during reading and the internal voice pronunciation duration are synchronous. An eye gaze duration model based on the prosodic structure of the Chinese language is presented, replacing previous methods of machine learning and probability forecasting, obtaining readers' real internal reading rhythm and synthesizing voice with personalized rhythm. This research will enrich human-computer interaction forms, and will have practical significance and application prospects in terms of

  5. Multilevel discretized random field models with 'spin' correlations for the simulation of environmental spatial data

    International Nuclear Information System (INIS)

    Žukovič, Milan; Hristopulos, Dionissios T

    2009-01-01

    A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the N_c-state Potts model, each point is assigned to one of N_c classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ±1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). We discuss the impact of relevant simulation parameters, such as the domain size, the number of
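
    The sequential binary ('Ising-like') idea can be sketched as a Gibbs-sampled ±1 field on a grid in which a few sampled cells are held fixed. The coupling strength and conditioning data below are placeholders, and the record's additional step of matching normalized correlation energies to the sample statistics is not included.

```python
import numpy as np

# Minimal sketch of conditional simulation with "spin" (Ising-like) short-range
# interactions: a +/-1 field on a regular grid is relaxed by Gibbs updates while
# sampled cells stay fixed at their observed values. Coupling strength and
# conditioning data are placeholders; the cost-function matching of correlation
# energies described in the record is not implemented here.

rng = np.random.default_rng(0)
n, J, n_sweeps = 32, 0.7, 200                  # grid size, coupling, Gibbs sweeps

spins = rng.choice([-1, 1], size=(n, n))
obs = {(5, 5): 1, (10, 20): -1, (25, 8): 1, (20, 25): -1}   # conditioning data (placeholders)
for (i, j), v in obs.items():
    spins[i, j] = v

def neighbor_sum(s, i, j):
    return s[(i - 1) % n, j] + s[(i + 1) % n, j] + s[i, (j - 1) % n] + s[i, (j + 1) % n]

for _ in range(n_sweeps):
    for i in range(n):
        for j in range(n):
            if (i, j) in obs:
                continue                                   # respect the sample values locally
            h = J * neighbor_sum(spins, i, j)
            p_up = 1.0 / (1.0 + np.exp(-2.0 * h))          # Gibbs probability of spin = +1
            spins[i, j] = 1 if rng.random() < p_up else -1

print("mean spin:", spins.mean(), " conditioning cells preserved:",
      all(spins[i, j] == v for (i, j), v in obs.items()))
```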

  6. Impacts of carbon capture on power plant emissions

    Energy Technology Data Exchange (ETDEWEB)

    Narula, R.; Wen, H. [Bechtel Power, San Francisco, CA (United States)

    2009-07-01

    Post-combustion carbon dioxide capture processes currently include amine-based solvent scrubbing and ammonia solution scrubbing technologies. Both result in high emissions of volatile organic compounds (VOC) and ammonia, as well as liquid discharge that contains chemical solvent. Additional solid wastes include sludge and spent solvent filtration media. Process simulation software can be used to predict the amount of solvent vapor in the stack gas for both amine and ammonia solvent based capture processes. However, amine could decompose in most amine-based processes and release ammonia gas due to degradation by exposure to oxygen, sulfur impurities, and thermal conditions. As a regulated pollutant subject to emission control at some plants, ammonia is a major concern for ammonia scrubbing processes. The energy requirement for carbon capture can be reduced by cooling the flue gas before it enters the carbon dioxide absorber column. The resulting low flue gas temperature could create difficulties in dispersing the flue gas plume in the atmosphere. This paper presented a computer simulation of stack emission reduction.

  7. Simulation-based model checking approach to cell fate specification during Caenorhabditis elegans vulval development by hybrid functional Petri net with extension

    Directory of Open Access Journals (Sweden)

    Ueno Kazuko

    2009-04-01

    Full Text Available Abstract Background Model checking approaches were applied to biological pathway validations around 2003. Recently, Fisher et al. have proved the importance of the model checking approach by inferring new regulation of signaling crosstalk in C. elegans and confirming the regulation with biological experiments. They took a discrete and state-based approach to explore all possible states of the system underlying vulval precursor cell (VPC) fate specification for desired properties. However, since both discrete and continuous features appear to be an indispensable part of biological processes, it is more appropriate to use quantitative models to capture the dynamics of biological systems. Our key motivation in this paper is to establish a quantitative methodology to model and analyze in silico models incorporating the use of the model checking approach. Results A novel method of modeling and simulating biological systems with the use of the model checking approach is proposed based on hybrid functional Petri net with extension (HFPNe) as the framework dealing with both discrete and continuous events. Firstly, we construct a quantitative VPC fate model with 1761 components by using HFPNe. Secondly, we apply two major biological fate determination rules – Rule I and Rule II – to the VPC fate model. We then conduct 10,000 simulations for each of 48 sets of different genotypes, investigate variations of cell fate patterns under each genotype, and validate the two rules by comparing three simulation targets consisting of fate patterns obtained from in silico and in vivo experiments. In particular, an evaluation was successfully done by using our VPC fate model to investigate one target derived from biological experiments involving hybrid lineage observations. However, hybrid lineages are hard to capture with a discrete model because a hybrid lineage occurs when the system comes close to certain thresholds, as discussed by Sternberg and Horvitz in

  8. Adjusting multistate capture-recapture models for misclassification bias: manatee breeding proportions

    Science.gov (United States)

    Kendall, W.L.; Hines, J.E.; Nichols, J.D.

    2003-01-01

    Matrix population models are important tools for research and management of populations. Estimating the parameters of these models is an important step in applying them to real populations. Multistate capture-recapture methods have provided a useful means for estimating survival and parameters of transition between locations or life history states but have mostly relied on the assumption that the state occupied by each detected animal is known with certainty. Nevertheless, in some cases animals can be misclassified. Using multiple capture sessions within each period of interest, we developed a method that adjusts estimates of transition probabilities for bias due to misclassification. We applied this method to 10 years of sighting data for a population of Florida manatees (Trichechus manatus latirostris) in order to estimate the annual probability of transition from nonbreeding to breeding status. Some sighted females were unequivocally classified as breeders because they were clearly accompanied by a first-year calf. The remainder were classified, sometimes erroneously, as nonbreeders because an attendant first-year calf was not observed or was classified as more than one year old. We estimated a conditional breeding probability of 0.31 ± 0.04 (estimate ± 1 SE) when we ignored misclassification bias, and 0.61 ± 0.09 when we accounted for misclassification.
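
    The direction of the bias can be illustrated with a toy forward simulation (all numbers below are hypothetical, not the manatee estimates): if a true breeder is recorded as a breeder only when her calf is detected, the naive proportion underestimates the breeding probability, and dividing by the classification probability recovers it, assuming nonbreeders are never misclassified as breeders.

```python
import random

random.seed(1)

N = 10_000        # hypothetical number of sighted females
psi = 0.6         # assumed true probability of breeding (illustrative)
delta = 0.5       # assumed probability a breeder is correctly classified,
                  # i.e. her first-year calf is seen and aged correctly

# A breeder is recorded as a breeder only if her calf is detected; nonbreeders
# are assumed never to be misclassified as breeders.
classified_as_breeder = 0
for _ in range(N):
    breeder = random.random() < psi
    if breeder and random.random() < delta:
        classified_as_breeder += 1

naive = classified_as_breeder / N   # ignores misclassification -> biased low
corrected = naive / delta           # simple moment correction using delta

print(f"naive estimate:     {naive:.3f}")
print(f"corrected estimate: {corrected:.3f}  (true value {psi})")
```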

  9. Markov modeling and discrete event simulation in health care: a systematic comparison.

    Science.gov (United States)

    Standfield, Lachlan; Comans, Tracy; Scuffham, Paul

    2014-04-01

    The aim of this study was to assess if the use of Markov modeling (MM) or discrete event simulation (DES) for cost-effectiveness analysis (CEA) may alter healthcare resource allocation decisions. A systematic literature search and review of empirical and non-empirical studies comparing MM and DES techniques used in the CEA of healthcare technologies was conducted. Twenty-two pertinent publications were identified. Two publications compared MM and DES models empirically, one presented a conceptual DES and MM, two described a DES consensus guideline, and seventeen drew comparisons between MM and DES through the authors' experience. The primary advantages described for DES over MM were the ability to model queuing for limited resources, capture individual patient histories, accommodate complexity and uncertainty, represent time flexibly, model competing risks, and accommodate multiple events simultaneously. The disadvantages of DES over MM were the potential for model overspecification, increased data requirements, specialized expensive software, and increased model development, validation, and computational time. Where individual patient history is an important driver of future events an individual patient simulation technique like DES may be preferred over MM. Where supply shortages, subsequent queuing, and diversion of patients through other pathways in the healthcare system are likely to be drivers of cost-effectiveness, DES modeling methods may provide decision makers with more accurate information on which to base resource allocation decisions. Where these are not major features of the cost-effectiveness question, MM remains an efficient, easily validated, parsimonious, and accurate method of determining the cost-effectiveness of new healthcare interventions.
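
    For readers unfamiliar with the cohort-style Markov modeling that the review contrasts with DES, a minimal illustrative cohort trace is sketched below; the three states, transition probabilities, costs and utilities are hypothetical and carry no clinical meaning.

```python
import numpy as np

# Hypothetical three-state cohort Markov model (well, sick, dead), annual cycles.
P = np.array([[0.85, 0.10, 0.05],    # transitions from "well"
              [0.00, 0.80, 0.20],    # transitions from "sick"
              [0.00, 0.00, 1.00]])   # "dead" is absorbing
cost = np.array([100.0, 2500.0, 0.0])      # annual cost per state
utility = np.array([0.95, 0.60, 0.0])      # annual QALY weight per state

state = np.array([1.0, 0.0, 0.0])          # the whole cohort starts "well"
total_cost = total_qaly = 0.0
for year in range(30):                     # 30-year horizon, no discounting
    total_cost += float(state @ cost)
    total_qaly += float(state @ utility)
    state = state @ P                      # advance the cohort one cycle

print(f"expected cost per patient:  {total_cost:,.0f}")
print(f"expected QALYs per patient: {total_qaly:.2f}")
```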

  10. THE MARK I BUSINESS SYSTEM SIMULATION MODEL

    Science.gov (United States)

    of a large-scale business simulation model as a vehicle for doing research in management controls. The major results of the program were the development of the Mark I business simulation model and the Simulation Package (SIMPAC). SIMPAC is a method and set of programs facilitating the construction of large simulation models. The object of this document is to describe the Mark I Corporation model, state why parts of the business were modeled as they were, and indicate the research applications of the model. (Author)

  11. A Subpath-based Logit Model to Capture the Correlation of Routes

    Directory of Open Access Journals (Sweden)

    Xinjun Lai

    2016-06-01

    Full Text Available A subpath-based methodology is proposed to capture the travellers’ route choice behaviours and their perceptual correlation of routes, because the original link-based style may not be suitable in application: (1) travellers do not process road network information and construct the chosen route by a link-by-link style; (2) observations from questionnaires and GPS data, however, are not always link-specific. Subpaths are defined as important portions of the route, such as major roads and landmarks. The cross-nested Logit (CNL) structure is used for its tractable closed-form and its capability to explicitly capture the routes correlation. Nests represent subpaths other than links so that the number of nests is significantly reduced. Moreover, the proposed method simplifies the original link-based CNL model; therefore, it alleviates the estimation and computation difficulties. The estimation and forecast validation with real data are presented, and the results suggest that the new method is practical.
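
    As a purely illustrative numerical sketch of how CNL choice probabilities can be evaluated in one common parameterization (allocation weights alpha of each route to each subpath nest, nest dissimilarity parameters mu in (0, 1], and deterministic utilities V; all values below are hypothetical, not the paper's estimates):

```python
import numpy as np

V = np.array([-1.0, -1.2, -0.8])      # utilities of 3 hypothetical routes
alpha = np.array([[1.0, 0.0],         # route 0 uses subpath (nest) A only
                  [0.5, 0.5],         # route 1 shares subpaths A and B
                  [0.0, 1.0]])        # route 2 uses subpath B only
mu = np.array([0.5, 0.5])             # dissimilarity parameter of each nest

y = (alpha * np.exp(V)[:, None]) ** (1.0 / mu)       # (routes x nests)
nest_sums = y.sum(axis=0)                             # per-nest inclusive sums
p_nest = nest_sums ** mu / np.sum(nest_sums ** mu)    # P(nest)
p_route_in_nest = y / nest_sums                        # P(route | nest)
p_route = (p_route_in_nest * p_nest).sum(axis=1)       # P(route)

print("route choice probabilities:", np.round(p_route, 3))
print("probabilities sum to:", p_route.sum())
```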

  12. A comparison of PMIP2 model simulations and the MARGO proxy reconstruction for tropical sea surface temperatures at last glacial maximum

    Energy Technology Data Exchange (ETDEWEB)

    Otto-Bliesner, Bette L.; Brady, E.C. [National Center for Atmospheric Research, Climate and Global Dynamics Division, Boulder, CO (United States); Schneider, Ralph; Weinelt, M. [Christian-Albrechts Universitaet, Institut fuer Geowissenschaften, Kiel (Germany); Kucera, M. [Eberhard-Karls Universitaet Tuebingen, Institut fuer Geowissenschaften, Tuebingen (Germany); Abe-Ouchi, A. [The University of Tokyo, Center for Climate System Research, Kashiwa (Japan); Bard, E. [CEREGE, College de France, CNRS, Universite Aix-Marseille, Aix-en-Provence (France); Braconnot, P.; Kageyama, M.; Marti, O.; Waelbroeck, C. [Unite mixte CEA-CNRS-UVSQ, Laboratoire des Sciences du Climat et de l' Environnement, Gif-sur-Yvette Cedex (France); Crucifix, M. [Universite Catholique de Louvain, Institut d' Astronomie et de Geophysique Georges Lemaitre, Louvain-la-Neuve (Belgium); Hewitt, C.D. [Met Office Hadley Centre, Exeter (United Kingdom); Paul, A. [Bremen University, Department of Geosciences, Bremen (Germany); Rosell-Mele, A. [Universitat Autonoma de Barcelona, ICREA and Institut de Ciencia i Tecnologia Ambientals, Barcelona (Spain); Weber, S.L. [Royal Netherlands Meteorological Institute (KNMI), De Bilt (Netherlands); Yu, Y. [Chinese Academy of Sciences, LASG, Institute of Atmospheric Physics, Beijing (China)

    2009-05-15

    Results from multiple model simulations are used to understand the tropical sea surface temperature (SST) response to the reduced greenhouse gas concentrations and large continental ice sheets of the last glacial maximum (LGM). We present LGM simulations from the Paleoclimate Modelling Intercomparison Project, Phase 2 (PMIP2) and compare these simulations to proxy data collated and harmonized within the Multiproxy Approach for the Reconstruction of the Glacial Ocean Surface Project (MARGO). Five atmosphere-ocean coupled climate models (AOGCMs) and one coupled model of intermediate complexity have PMIP2 ocean results available for LGM. The models give a range of tropical (defined for this paper as 15°S-15°N) SST cooling of 1.0-2.4 °C, comparable to the MARGO estimate of annual cooling of 1.7 ± 1 °C. The models simulate greater SST cooling in the tropical Atlantic than tropical Pacific, but interbasin and intrabasin variations of cooling are much smaller than those found in the MARGO reconstruction. The simulated tropical coolings are relatively insensitive to season, a feature also present in the MARGO transferred-based estimates calculated from planktonic foraminiferal assemblages for the Indian and Pacific Oceans. These assemblages indicate seasonality in cooling in the Atlantic basin, with greater cooling in northern summer than northern winter, not captured by the model simulations. Biases in the simulations of the tropical upwelling and thermocline found in the preindustrial control simulations remain for the LGM simulations and are partly responsible for the more homogeneous spatial and temporal LGM tropical cooling simulated by the models. The PMIP2 LGM simulations give estimates for the climate sensitivity parameter of 0.67-0.83 °C per W m-2, which translates to equilibrium climate sensitivity for doubling of atmospheric CO2 of 2.6-3.1 °C. (orig.)

  13. Stochastic models: theory and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
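
    As a minimal sketch of the kind of sample generation the report targets, the snippet below draws realizations of a zero-mean stationary Gaussian process from a prescribed covariance function via Cholesky factorization; the squared-exponential covariance and its parameters are illustrative assumptions, not the report's algorithms.

```python
import numpy as np

rng = np.random.default_rng(42)

t = np.linspace(0.0, 10.0, 200)          # 1-D grid in time or space
sigma2, ell = 1.0, 1.5                   # variance and correlation length
# Squared-exponential covariance matrix over the grid.
C = sigma2 * np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / ell ** 2)
L = np.linalg.cholesky(C + 1e-10 * np.eye(t.size))   # jitter for stability

samples = L @ rng.standard_normal((t.size, 5))       # five independent samples
print("sample matrix shape:", samples.shape)          # (200 grid points, 5)
print("std of first realization:", samples[:, 0].std())
```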

  14. Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals

    Science.gov (United States)

    Kery, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew

    2011-01-01

    Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km2 (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.
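
    The observation core of a spatial capture-recapture model is a detection function that decays with the distance between a detector and an animal's activity centre. A minimal forward-simulation sketch with a half-normal detection function is shown below; the grid, sample sizes and parameter values are hypothetical, not the wildcat estimates.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical SCR forward simulation: activity centres on a 10 x 10 km state
# space, a regular grid of lure sticks, and a half-normal detection function
# p(d) = p0 * exp(-d^2 / (2 * sigma^2)).
n_animals, p0, sigma, n_occasions = 20, 0.15, 0.75, 4       # sigma in km
centres = rng.uniform(0.0, 10.0, size=(n_animals, 2))
traps = np.array([[x, y] for x in np.arange(1.0, 10.0, 2.0)
                         for y in np.arange(1.0, 10.0, 2.0)])

d = np.linalg.norm(centres[:, None, :] - traps[None, :, :], axis=2)
p = p0 * np.exp(-d ** 2 / (2.0 * sigma ** 2))     # per-occasion detection prob

captures = rng.binomial(n_occasions, p)            # capture counts (animal x trap)
detected = (captures.sum(axis=1) > 0).sum()
print(f"{detected} of {n_animals} simulated animals detected at least once")
```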

  15. Dynamic simulation of knee-joint loading during gait using force-feedback control and surrogate contact modelling.

    Science.gov (United States)

    Walter, Jonathan P; Pandy, Marcus G

    2017-10-01

    The aim of this study was to perform multi-body, muscle-driven, forward-dynamics simulations of human gait using a 6-degree-of-freedom (6-DOF) model of the knee in tandem with a surrogate model of articular contact and force control. A forward-dynamics simulation incorporating position, velocity and contact force-feedback control (FFC) was used to track full-body motion capture data recorded for multiple trials of level walking and stair descent performed by two individuals with instrumented knee implants. Tibiofemoral contact force errors for FFC were compared against those obtained from a standard computed muscle control algorithm (CMC) with a 6-DOF knee contact model (CMC6); CMC with a 1-DOF translating hinge-knee model (CMC1); and static optimization with a 1-DOF translating hinge-knee model (SO). Tibiofemoral joint loads predicted by FFC and CMC6 were comparable for level walking, however FFC produced more accurate results for stair descent. SO yielded reasonable predictions of joint contact loading for level walking but significant differences between model and experiment were observed for stair descent. CMC1 produced the least accurate predictions of tibiofemoral contact loads for both tasks. Our findings suggest that reliable estimates of knee-joint loading may be obtained by incorporating position, velocity and force-feedback control with a multi-DOF model of joint contact in a forward-dynamics simulation of gait. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  16. Simulation Model of a Transient

    DEFF Research Database (Denmark)

    Jauch, Clemens; Sørensen, Poul; Bak-Jensen, Birgitte

    2005-01-01

    This paper describes the simulation model of a controller that enables an active-stall wind turbine to ride through transient faults. The simulated wind turbine is connected to a simple model of a power system. Certain fault scenarios are specified and the turbine shall be able to sustain operati...

  17. Laser accelerated protons captured and transported by a pulse power solenoid

    Directory of Open Access Journals (Sweden)

    T. Burris-Mog

    2011-12-01

    Full Text Available Using a pulse power solenoid, we demonstrate efficient capture of laser accelerated proton beams and the ability to control their large divergence angles and broad energy range. Simulations using measured data for the input parameters give inference into the phase-space and transport efficiencies of the captured proton beams. We conclude with results from a feasibility study of a pulse power compact achromatic gantry concept. Using a scaled target normal sheath acceleration spectrum, we present simulation results of the available spectrum after transport through the gantry.

  18. Adaptive Planning: Understanding Organizational Workload to Capability/ Capacity through Modeling and Simulation

    Science.gov (United States)

    Hase, Chris

    2010-01-01

    In August 2003, the Secretary of Defense (SECDEF) established the Adaptive Planning (AP) initiative [1] with an objective of reducing the time necessary to develop and revise Combatant Commander (COCOM) contingency plans and increasing SECDEF plan visibility. In addition to reducing the traditional plan development timeline from twenty-four months to less than twelve months (with a goal of six months)[2], AP increased plan visibility to Department of Defense (DoD) leadership through In-Progress Reviews (IPRs). The IPR process, as well as the increased number of campaign and contingency plans COCOMs had to develop, increased the workload while the number of planners remained fixed. Several efforts, from collaborative planning tools to streamlined processes, were initiated to compensate for the increased workload, enabling COCOMs to better meet shorter planning timelines. This paper examines the Joint Strategic Capabilities Plan (JSCP) directed contingency planning and staffing requirements assigned to a combatant commander staff through the lens of modeling and simulation. The dynamics of developing a COCOM plan are captured with an ExtendSim [3] simulation. The resulting analysis provides a quantifiable means by which to measure a combatant commander staff's workload associated with developing and staffing JSCP [4] directed contingency plans against COCOM capability/capacity. Modeling and simulation bring significant opportunities in measuring the sensitivity of key variables in the assessment of workload to capability/capacity analysis. Gaining an understanding of the relationship of plan complexity, number of plans, planning processes, and number of planners to the time required for plan development provides valuable information to DoD leadership. Through modeling and simulation, AP leadership can gain greater insight in making key decisions on where to best allocate scarce resources in an effort to meet DoD planning objectives.

  19. Prey capture in zebrafish larvae serves as a model to study cognitive functions

    Directory of Open Access Journals (Sweden)

    Akira eMuto

    2013-06-01

    Full Text Available Prey capture in zebrafish larvae is an innate behavior which can be observed as early as 4 days post fertilization, the day when they start to swim. This simple behavior apparently involves several neural processes including visual perception, recognition, decision-making, and motor control, and therefore serves as a good model system to study cognitive functions underlying natural behaviors in vertebrates. Recent progress in imaging techniques has provided us with a unique opportunity to image neuronal activity in the brain of an intact fish in real time while the fish perceives a natural prey, a paramecium. By expanding this approach, it would be possible to image entire brain areas at single-cell resolution in real time during prey capture, and identify neuronal circuits important for cognitive functions. Further, activation or inhibition of those neuronal circuits with recently developed optogenetic tools or neurotoxins should shed light on their roles. Thus, we will be able to explore prey capture in zebrafish larvae more thoroughly at the cellular level, which should establish a basis for understanding cognitive function in vertebrates.

  20. Simulating spatial and temporally related fire weather

    Science.gov (United States)

    Isaac C. Grenfell; Mark Finney; Matt Jolly

    2010-01-01

    Use of fire behavior models has assumed an increasingly important role for managers of wildfire incidents to make strategic decisions. For fire risk assessments and danger rating at very large spatial scales, these models depend on fire weather variables or fire danger indices. Here, we describe a method to simulate fire weather at a national scale that captures the...

  1. Validation and Simulation of Ares I Scale Model Acoustic Test - 2 - Simulations at 5 Foot Elevation for Evaluation of Launch Mount Effects

    Science.gov (United States)

    Strutzenberg, Louise L.; Putman, Gabriel C.

    2011-01-01

    The Ares I Scale Model Acoustics Test (ASMAT) is a series of live-fire tests of scaled rocket motors meant to simulate the conditions of the Ares I launch configuration. These tests have provided a well documented set of high fidelity measurements useful for validation including data taken over a range of test conditions and containing phenomena like Ignition Over-Pressure and water suppression of acoustics. Expanding from initial simulations of the ASMAT setup in a held down configuration, simulations have been performed using the Loci/CHEM computational fluid dynamics software for ASMAT tests of the vehicle at 5 ft. elevation (100 ft. real vehicle elevation) with worst case drift in the direction of the launch tower. These tests have been performed without water suppression and have compared the acoustic emissions for launch structures with and without launch mounts. In addition, simulation results have also been compared to acoustic and imagery data collected from similar live-fire tests to assess the accuracy of the simulations. Simulations have shown a marked change in the pattern of emissions after removal of the launch mount with a reduction in the overall acoustic environment experienced by the vehicle and the formation of highly directed acoustic waves moving across the platform deck. Comparisons of simulation results to live-fire test data showed good amplitude and temporal correlation and imagery comparisons over the visible and infrared wavelengths showed qualitative capture of all plume and pressure wave evolution features.

  2. A VRLA battery simulation model

    International Nuclear Information System (INIS)

    Pascoe, Phillip E.; Anbuky, Adnan H.

    2004-01-01

    A valve regulated lead acid (VRLA) battery simulation model is an invaluable tool for the standby power system engineer. The obvious use for such a model is to allow the assessment of battery performance. This may involve determining the influence of cells suffering from state of health (SOH) degradation on the performance of the entire string, or the running of test scenarios to ascertain the most suitable battery size for the application. In addition, it enables the engineer to assess the performance of the overall power system. This includes, for example, running test scenarios to determine the benefits of various load shedding schemes. It also allows the assessment of other power system components, either for determining their requirements and/or vulnerabilities. Finally, a VRLA battery simulation model is vital as a stand alone tool for educational purposes. Despite the fundamentals of the VRLA battery having been established for over 100 years, its operating behaviour is often poorly understood. An accurate simulation model enables the engineer to gain a better understanding of VRLA battery behaviour. A system level multipurpose VRLA battery simulation model is presented. It allows an arbitrary battery (capacity, SOH, number of cells and number of strings) to be simulated under arbitrary operating conditions (discharge rate, ambient temperature, end voltage, charge rate and initial state of charge). The model accurately reflects the VRLA battery discharge and recharge behaviour. This includes the complex start of discharge region known as the coup de fouet

  3. Post-capture vibration suppression of spacecraft via a bio-inspired isolation system

    Science.gov (United States)

    Dai, Honghua; Jing, Xingjian; Wang, Yu; Yue, Xiaokui; Yuan, Jianping

    2018-05-01

    Inspired by the smooth motions of a running kangaroo, a bio-inspired quadrilateral shape (BIQS) structure is proposed to suppress the vibrations of a free-floating spacecraft subject to periodic or impulsive forces, which may be encountered during on-orbit servicing missions. In particular, the BIQS structure is installed between the satellite platform and the capture mechanism. The dynamical model of the BIQS isolation system, i.e. a BIQS structure connecting the platform and the capture mechanism at each side, is established by Lagrange's equations to simulate the post-capture dynamical responses. The BIQS system suffering an impulsive force is dealt with by means of a modified version of Lagrange's equations. Furthermore, the classical harmonic balance method is used to solve the nonlinear dynamical system subject to periodic forces, while for the case under impulsive forces the numerical integration method is adopted. Due to the weightless environment in space, the present BIQS system is essentially an under-constrained dynamical system with one of its natural frequencies being identical to zero. The effects of system parameters, such as the number of layers in BIQS, stiffness, assembly angle, rod length, damping coefficient, masses of satellite platform and capture mechanism, on the isolation performance of the present system are thoroughly investigated. In addition, comparisons between the isolation performances of the presently proposed BIQS isolator and the conventional spring-mass-damper (SMD) isolator are conducted to demonstrate the advantages of the present isolator. Numerical simulations show that the BIQS system has a much better performance than the SMD system under either periodic or impulsive forces. Overall, the present BIQS isolator offers a highly efficient passive way for vibration suppressions of free-floating spacecraft.

  4. Computational Modeling of Mixed Solids for CO2 Capture Sorbents

    Energy Technology Data Exchange (ETDEWEB)

    Duan, Yuhua

    2015-01-01

    Since current technologies for capturing CO2 to fight global climate change are still too energy intensive, there is a critical need for development of new materials that can capture CO2 reversibly with acceptable energy costs. Accordingly, solid sorbents have been proposed for CO2 capture applications through a reversible chemical transformation. By combining thermodynamic database mining with first-principles density functional theory and phonon lattice dynamics calculations, a theoretical screening methodology to identify the most promising CO2 sorbent candidates from the vast array of possible solid materials has been proposed and validated. The calculated thermodynamic properties of different classes of solid materials versus temperature and pressure changes were further used to evaluate the equilibrium properties for the CO2 adsorption/desorption cycles. According to the requirements imposed by the pre- and post-combustion technologies, and based on our calculated thermodynamic properties for the CO2 capture reactions by the solids of interest, we were able to screen only those solid materials for which lower capture energy costs are expected at the desired pressure and temperature conditions. Only those selected CO2 sorbent candidates were further considered for experimental validation. The ab initio thermodynamic technique has the advantage of identifying thermodynamic properties of CO2 capture reactions without any experimental input beyond crystallographic structural information of the solid phases involved. Such a methodology can not only be used to search for good candidates from existing databases of solid materials, but can also provide guidelines for synthesizing new materials. In this presentation, we apply our screening methodology to mixed solid systems to adjust the turnover temperature, helping to develop CO2 capture technologies.
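
    The screening step can be illustrated with a back-of-the-envelope calculation: for a capture reaction solid + CO2(g) -> product with roughly temperature-independent ΔH° and ΔS°, ΔG(T, P) ≈ ΔH° − TΔS° + RT ln(P°/P_CO2), and the turnover temperature is where ΔG = 0. The values below are hypothetical stand-ins for DFT/phonon results, not the presentation's data.

```python
import math

R = 8.314  # gas constant, J / (mol K)

def delta_g(T, p_co2, dH=-170e3, dS=-160.0, p0=1.0):
    """Approximate reaction free energy (J per mol CO2) for solid + CO2 -> product.

    dH (J/mol) and dS (J/mol/K) are hypothetical, temperature-independent
    stand-ins for DFT + phonon results; p_co2 and p0 are in bar.
    """
    return dH - T * dS + R * T * math.log(p0 / p_co2)

def turnover_temperature(p_co2, dH=-170e3, dS=-160.0, p0=1.0):
    # Solve delta_g(T) = 0 analytically in the constant dH/dS approximation.
    return dH / (dS - R * math.log(p0 / p_co2))

for p in (0.1, 1.0, 10.0):      # post-/pre-combustion-like CO2 partial pressures
    T_to = turnover_temperature(p)
    print(f"P_CO2 = {p:5.1f} bar -> turnover T ~ {T_to:6.1f} K "
          f"(dG there = {delta_g(T_to, p):.1e} J/mol)")
```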

  5. Seasonal Synchronization of a Simple Stochastic Dynamical Model Capturing El Niño Diversity

    Science.gov (United States)

    Thual, S.; Majda, A.; Chen, N.

    2017-12-01

    The El Niño-Southern Oscillation (ENSO) has significant impact on global climate and seasonal prediction. Recently, a simple ENSO model was developed that automatically captures the ENSO diversity and intermittency in nature, where state-dependent stochastic wind bursts and nonlinear advection of sea surface temperature (SST) are coupled to simple ocean-atmosphere processes that are otherwise deterministic, linear and stable. In the present article, it is further shown that the model can reproduce qualitatively the ENSO synchronization (or phase-locking) to the seasonal cycle in nature. This goal is achieved by incorporating a cloud radiative feedback that is derived naturally from the model's atmosphere dynamics with no ad-hoc assumptions and accounts in simple fashion for the marked seasonal variations of convective activity and cloud cover in the eastern Pacific. In particular, the weak convective response to SSTs in boreal fall favors the eastern Pacific warming that triggers El Niño events while the increased convective activity and cloud cover during the following spring contributes to the shutdown of those events by blocking incoming shortwave solar radiations. In addition to simulating the ENSO diversity with realistic non-Gaussian statistics in different Niño regions, both the eastern Pacific moderate and super El Niño, the central Pacific El Niño as well as La Niña show a realistic chronology with a tendency to peak in boreal winter as well as decreased predictability in spring consistent with the persistence barrier in nature. The incorporation of other possible seasonal feedbacks in the model is also documented for completeness.

  6. Multispectral simulation environment for modeling low-light-level sensor systems

    Science.gov (United States)

    Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.

    1998-11-01

    - light-level conditions including the incorporation of natural and man-made sources which emphasizes the importance of accurate BRDF. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab acquired imagery from a commercial system.

  7. Predicting knee replacement damage in a simulator machine using a computational model with a consistent wear factor.

    Science.gov (United States)

    Zhao, Dong; Sakoda, Hideyuki; Sawyer, W Gregory; Banks, Scott A; Fregly, Benjamin J

    2008-02-01

    Wear of ultrahigh molecular weight polyethylene remains a primary factor limiting the longevity of total knee replacements (TKRs). However, wear testing on a simulator machine is time consuming and expensive, making it impractical for iterative design purposes. The objectives of this paper were first, to evaluate whether a computational model using a wear factor consistent with the TKR material pair can predict accurate TKR damage measured in a simulator machine, and second, to investigate how choice of surface evolution method (fixed or variable step) and material model (linear or nonlinear) affect the prediction. An iterative computational damage model was constructed for a commercial knee implant in an AMTI simulator machine. The damage model combined a dynamic contact model with a surface evolution model to predict how wear plus creep progressively alter tibial insert geometry over multiple simulations. The computational framework was validated by predicting wear in a cylinder-on-plate system for which an analytical solution was derived. The implant damage model was evaluated for 5 million cycles of simulated gait using damage measurements made on the same implant in an AMTI machine. Using a pin-on-plate wear factor for the same material pair as the implant, the model predicted tibial insert wear volume to within 2% error and damage depths and areas to within 18% and 10% error, respectively. Choice of material model had little influence, while inclusion of surface evolution affected damage depth and area but not wear volume predictions. Surface evolution method was important only during the initial cycles, where variable step was needed to capture rapid geometry changes due to the creep. Overall, our results indicate that accurate TKR damage predictions can be made with a computational model using a constant wear factor obtained from pin-on-plate tests for the same material pair, and furthermore, that surface evolution method matters only during the initial
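
    The damage accumulation at the heart of such a model is essentially an Archard-type law: linear wear depth grows as the wear factor times contact pressure times sliding distance, with the geometry (and hence the contact solution) updated periodically. A single-node sketch with hypothetical numbers, not the paper's implementation, is:

```python
# Archard-type wear accumulation at one contact node; illustrative values only.
k = 1.0e-10            # wear factor, mm^3/(N mm)  (~1e-7 mm^3/(N m), pin-on-plate scale)
pressure = 8.0         # contact pressure at the node, MPa (N/mm^2)
slide_per_cycle = 20.0 # sliding distance per gait cycle, mm
cycles = 5_000_000     # total simulated gait cycles
update_every = 500_000 # surface-evolution block size (geometry update interval)

depth = 0.0
for start in range(0, cycles, update_every):
    # Linear wear depth increment over one block: h = k * p * s * n_cycles (mm).
    depth += k * pressure * slide_per_cycle * update_every
    # In the full model the contact pressures would be re-solved here, because
    # the evolving insert geometry changes the contact distribution.

print(f"predicted linear wear depth after {cycles:,} cycles: {depth:.3f} mm")
```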

  8. Microfluidic device for cell capture and impedance measurement.

    Science.gov (United States)

    Jang, Ling-Sheng; Wang, Min-How

    2007-10-01

    This work presents a microfluidic device to physically capture single cells within microstructures inside a channel and to measure the impedance of a single HeLa cell (human cervical epithelioid carcinoma) using impedance spectroscopy. The device includes a glass substrate with electrodes and a PDMS channel with micro pillars. The commercial software CFD-ACE+ is used to study the flow around the microstructures in the channel. According to simulation results, the probability of cell capture by three micro pillars is about 10%. An equivalent circuit model of the device is established and fits closely to the experimental results. The circuit can be modeled electrically as cell impedance in parallel with dielectric capacitance and in series with a pair of electrode resistors. The system is operated at low frequency between 1 and 100 kHz. In this study, experiments show that the HeLa cell is successfully captured by the micro pillars and its impedance is measured by impedance spectroscopy. The magnitude of the HeLa cell impedance declines with frequency at all operation voltages because the HeLa cell is capacitive. Additionally, increasing the operation voltage reduces the magnitude of the HeLa cell impedance because a strong electric field may promote the exchange of ions between the cytoplasm and the isotonic solution. Below an operating voltage of 0.9 V, the system impedance response is characteristic of a parallel circuit under 30 kHz and of a series circuit between 30 and 100 kHz. The phase of the HeLa cell impedance is characteristic of a series circuit when the operation voltage exceeds 0.8 V because the cell impedance becomes significant.
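
    The equivalent circuit described (a pair of electrode resistors in series with the parallel combination of the cell impedance and a dielectric capacitance) is easy to evaluate over the 1-100 kHz band; the component values below are hypothetical placeholders, not the fitted parameters, and the cell is crudely represented by a series R-C element.

```python
import numpy as np

f = np.logspace(3, 5, 9)                   # 1 kHz .. 100 kHz
w = 2.0 * np.pi * f

R_electrode = 5e3                          # each electrode resistor, ohm
C_dielectric = 50e-12                      # parallel dielectric capacitance, F
R_cell, C_cell = 2e6, 100e-12              # crude series R-C stand-in for the cell

Z_cell = R_cell + 1.0 / (1j * w * C_cell)  # placeholder cell impedance
Z_cap = 1.0 / (1j * w * C_dielectric)      # dielectric branch
Z_total = 2.0 * R_electrode + (Z_cell * Z_cap) / (Z_cell + Z_cap)

for fi, zi in zip(f, Z_total):
    print(f"{fi/1e3:6.1f} kHz  |Z| = {abs(zi)/1e6:7.3f} MOhm  "
          f"phase = {np.degrees(np.angle(zi)):6.1f} deg")
```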

  9. Experimentally-based optimization of contact parameters in dynamics simulation of humanoid robots

    NARCIS (Netherlands)

    Vivian, Michele; Reggiani, Monica; Sartori, Massimo

    2013-01-01

    With this work we introduce a novel methodology for the simulation of walking of a humanoid robot. Motion capture technology is used to calibrate the dynamics engine internal parameters and validate the simulated motor task. Results showed the calibrated contact model allows predicting dynamically

  10. Practical enhancement factor model based on GM for multiple parallel reactions: Piperazine (PZ) CO2 capture

    DEFF Research Database (Denmark)

    Gaspar, Jozsef; Fosbøl, Philip Loldrup

    2017-01-01

    Reactive absorption is a key process for gas separation and purification and it is the main technology for CO2 capture. Thus, reliable and simple mathematical models for mass transfer rate calculation are essential. Models which apply to parallel interacting and non-interacting reactions, for all ..., desorption and pinch conditions. In this work, we apply the GM model to multiple parallel reactions. We deduce the model for piperazine (PZ) CO2 capture and we validate it against wetted-wall column measurements using 2, 5 and 8 molal PZ for temperatures between 40 °C and 100 °C and CO2 loadings between 0.23 and 0.41 mol CO2/2 mol PZ. We show that overall second order kinetics describes well the reaction between CO2 and PZ accounting for the carbamate and bicarbamate reactions. Here we prove the GM model for piperazine and MEA but we expect that this practical approach is applicable for various amines...

  11. Doping of alkali, alkaline-earth, and transition metals in covalent-organic frameworks for enhancing CO2 capture by first-principles calculations and molecular simulations.

    Science.gov (United States)

    Lan, Jianhui; Cao, Dapeng; Wang, Wenchuan; Smit, Berend

    2010-07-27

    We use the multiscale simulation approach, which combines the first-principles calculations and grand canonical Monte Carlo simulations, to comprehensively study the doping of a series of alkali (Li, Na, and K), alkaline-earth (Be, Mg, and Ca), and transition (Sc and Ti) metals in nanoporous covalent organic frameworks (COFs), and the effects of the doped metals on CO2 capture. The results indicate that, among all the metals studied, Li, Sc, and Ti can bind with COFs stably, while Be, Mg, and Ca cannot, because the binding of Be, Mg, and Ca with COFs is very weak. Furthermore, Li, Sc, and Ti can improve the uptakes of CO2 in COFs significantly. However, the binding energy of a CO2 molecule with Sc and Ti exceeds the lower limit of chemisorptions and, thus, suffers from the difficulty of desorption. By the comparative studies above, it is found that Li is the best surface modifier of COFs for CO2 capture among all the metals studied. Therefore, we further investigate the uptakes of CO2 in the Li-doped COFs. Our simulation results show that at 298 K and 1 bar, the excess CO2 uptakes of the Li-doped COF-102 and COF-105 reach 409 and 344 mg/g, which are about eight and four times those in the nondoped ones, respectively. As the pressure increases to 40 bar, the CO2 uptakes of the Li-doped COF-102 and COF-105 reach 1349 and 2266 mg/g at 298 K, respectively, which are among the reported highest scores to date. In summary, doping of metals in porous COFs provides an efficient approach for enhancing CO2 capture.

  12. Validated simulator for space debris removal with nets and other flexible tethers applications

    Science.gov (United States)

    Gołębiowski, Wojciech; Michalczyk, Rafał; Dyrek, Michał; Battista, Umberto; Wormnes, Kjetil

    2016-12-01

    In the context of active debris removal technologies and preparation activities for the e.Deorbit mission, a simulator for net-shaped elastic bodies dynamics and their interactions with rigid bodies, has been developed. Its main application is to aid net design and test scenarios for space debris deorbitation. The simulator can model all the phases of the debris capturing process: net launch, flight and wrapping around the target. It handles coupled simulation of rigid and flexible bodies dynamics. Flexible bodies were implemented using Cosserat rods model. It allows to simulate flexible threads or wires with elasticity and damping for stretching, bending and torsion. Threads may be combined into structures of any topology, so the software is able to simulate nets, pure tethers, tether bundles, cages, trusses, etc. Full contact dynamics was implemented. Programmatic interaction with simulation is possible - i.e. for control implementation. The underlying model has been experimentally validated and due to significant gravity influence, experiment had to be performed in microgravity conditions. Validation experiment for parabolic flight was a downscaled process of Envisat capturing. The prepacked net was launched towards the satellite model, it expanded, hit the model and wrapped around it. The whole process was recorded with 2 fast stereographic camera sets for full 3D trajectory reconstruction. The trajectories were used to compare net dynamics to respective simulations and then to validate the simulation tool. The experiments were performed on board of a Falcon-20 aircraft, operated by National Research Council in Ottawa, Canada. Validation results show that model reflects phenomenon physics accurately enough, so it may be used for scenario evaluation and mission design purposes. The functionalities of the simulator are described in detail in the paper, as well as its underlying model, sample cases and methodology behind validation. Results are presented and

  13. Simulation of wind-induced snow transport and sublimation in alpine terrain using a fully coupled snowpack/atmosphere model

    Science.gov (United States)

    Vionnet, V.; Martin, E.; Masson, V.; Guyomarc'h, G.; Naaim-Bouvet, F.; Prokop, A.; Durand, Y.; Lac, C.

    2014-03-01

    In alpine regions, wind-induced snow transport strongly influences the spatio-temporal evolution of the snow cover throughout the winter season. To gain understanding on the complex processes that drive the redistribution of snow, a new numerical model is developed. It directly couples the detailed snowpack model Crocus with the atmospheric model Meso-NH. Meso-NH/Crocus simulates snow transport in saltation and in turbulent suspension and includes the sublimation of suspended snow particles. The coupled model is evaluated against data collected around the experimental site of Col du Lac Blanc (2720 m a.s.l., French Alps). First, 1-D simulations show that a detailed representation of the first metres of the atmosphere is required to reproduce strong gradients of blowing snow concentration and compute mass exchange between the snowpack and the atmosphere. Secondly, 3-D simulations of a blowing snow event without concurrent snowfall have been carried out. Results show that the model captures the main structures of atmospheric flow in alpine terrain. However, at 50 m grid spacing, the model reproduces only the patterns of snow erosion and deposition at the ridge scale and misses smaller scale patterns observed by terrestrial laser scanning. When activated, the sublimation of suspended snow particles causes a reduction of deposited snow mass of 5.3% over the calculation domain. Total sublimation (surface + blowing snow) is three times higher than surface sublimation in a simulation neglecting blowing snow sublimation.

  14. Magnetosphere Modeling: From Cartoons to Simulations

    Science.gov (United States)

    Gombosi, T. I.

    2017-12-01

    Over the last half a century physics-based global computer simulations became a bridge between experiment and basic theory and now they represent the "third pillar" of geospace research. Today, many of our scientific publications utilize large-scale simulations to interpret observations, test new ideas, plan campaigns, or design new instruments. Realistic simulations of the complex Sun-Earth system have been made possible by the dramatically increased power of both computing hardware and numerical algorithms. Early magnetosphere models were based on simple E&M concepts (like the Chapman-Ferraro cavity) and hydrodynamic analogies (bow shock). At the beginning of the space age current system models were developed culminating in the sophisticated Tsyganenko-type description of the magnetic configuration. The first 3D MHD simulations of the magnetosphere were published in the early 1980s. A decade later there were several competing global models that were able to reproduce many fundamental properties of the magnetosphere. The leading models included the impact of the ionosphere by using a height-integrated electric potential description. Dynamic coupling of global and regional models started in the early 2000s by integrating a ring current and a global magnetosphere model. It has been recognized for quite some time that plasma kinetic effects play an important role. Presently, global hybrid simulations of the dynamic magnetosphere are expected to be possible on exascale supercomputers, while fully kinetic simulations with realistic mass ratios are still decades away. In the 2010s several groups started to experiment with PIC simulations embedded in large-scale 3D MHD models. Presently this integrated MHD-PIC approach is at the forefront of magnetosphere simulations and this technique is expected to lead to some important advances in our understanding of magnetospheric physics. This talk will review the evolution of magnetosphere modeling from cartoons to current systems

  15. Stochastic modeling analysis and simulation

    CERN Document Server

    Nelson, Barry L

    1995-01-01

    A coherent introduction to the techniques for modeling dynamic stochastic systems, this volume also offers a guide to the mathematical, numerical, and simulation tools of systems analysis. Suitable for advanced undergraduates and graduate-level industrial engineers and management science majors, it proposes modeling systems in terms of their simulation, regardless of whether simulation is employed for analysis. Beginning with a view of the conditions that permit a mathematical-numerical analysis, the text explores Poisson and renewal processes, Markov chains in discrete and continuous time, se

  16. CO2 Capture with Ionic Liquids : Experiments and Molecular Simulations

    NARCIS (Netherlands)

    Ramdin, M.

    2015-01-01

    In this thesis, we investigated the potential of physical ILs for CO2 capture at pre-combustion and natural gas sweetening conditions. The performance of ILs with respect to conventional solvents is assessed in terms of gas solubilities and selectivities. The work discussed in this thesis consists

  17. SEIR model simulation for Hepatitis B

    Science.gov (United States)

    Side, Syafruddin; Irwan, Mulbar, Usman; Sanusi, Wahidah

    2017-09-01

    Mathematical modelling and simulation for Hepatitis B are discussed in this paper. The population is divided into four compartments, namely: Susceptible, Exposed, Infected and Recovered (SEIR). Several factors affecting the population are included in the model: vaccination, immigration and emigration occurring in the population. The SEIR model yields a 4-D non-linear system of Ordinary Differential Equations (ODEs), which is then reduced to 3-D. The SEIR model simulation is undertaken to predict the number of Hepatitis B cases. The results of the simulation indicate that the number of Hepatitis B cases will increase and then decrease over several months. The simulation using case numbers from Makassar also found a basic reproduction number of less than one, which means that Makassar city is not an endemic area for Hepatitis B.
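
    A minimal sketch of an SEIR system of this kind (a 4-D non-linear ODE system with vaccination and background population turnover), integrated with a simple fixed-step scheme, is shown below; all rates are hypothetical placeholders, not the paper's fitted Makassar parameters.

```python
import numpy as np

# Hypothetical SEIR rates (per day); illustrative only.
beta, sigma, gamma = 0.25, 1/30.0, 1/60.0   # transmission, incubation, recovery
mu, nu = 0.0003, 0.0005                      # background turnover, vaccination

def deriv(y):
    S, E, I, R = y
    N = S + E + I + R
    dS = mu * N - beta * S * I / N - nu * S - mu * S
    dE = beta * S * I / N - sigma * E - mu * E
    dI = sigma * E - gamma * I - mu * I
    dR = gamma * I + nu * S - mu * R
    return np.array([dS, dE, dI, dR])

y = np.array([0.99, 0.005, 0.005, 0.0])      # initial fractions S, E, I, R
dt, T = 0.1, 365.0                            # one simulated year
for _ in range(int(T / dt)):                  # explicit Euler integration
    y = y + dt * deriv(y)

print("S, E, I, R after one year:", np.round(y, 4))
# Basic reproduction number for this structure (ignoring vaccination):
R0 = beta * sigma / ((sigma + mu) * (gamma + mu))
print("approximate R0 =", round(R0, 2))
```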

  18. Climate simulations for 1880-2003 with GISS modelE

    International Nuclear Information System (INIS)

    Hansen, J.; Lacis, A.; Miller, R.; Schmidt, G.A.; Russell, G.; Canuto, V.; Del Genio, A.; Hall, T.; Hansen, J.; Sato, M.; Kharecha, P.; Nazarenko, L.; Aleinov, I.; Bauer, S.; Chandler, M.; Faluvegi, G.; Jonas, J.; Ruedy, R.; Lo, K.; Cheng, Y.; Lacis, A.; Schmidt, G.A.; Del Genio, A.; Miller, R.; Cairns, B.; Hall, T.; Baum, E.; Cohen, A.; Fleming, E.; Jackman, C.; Friend, A.; Kelley, M.

    2007-01-01

    We carry out climate simulations for 1880-2003 with GISS modelE driven by ten measured or estimated climate forcings. An ensemble of climate model runs is carried out for each forcing acting individually and for all forcing mechanisms acting together. We compare side-by-side simulated climate change for each forcing, all forcings, observations, unforced variability among model ensemble members, and, if available, observed variability. Discrepancies between observations and simulations with all forcings are due to model deficiencies, inaccurate or incomplete forcings, and imperfect observations. Although there are notable discrepancies between model and observations, the fidelity is sufficient to encourage use of the model for simulations of future climate change. By using a fixed well-documented model and accurately defining the 1880-2003 forcings, we aim to provide a benchmark against which the effect of improvements in the model, climate forcings, and observations can be tested. Principal model deficiencies include unrealistically weak tropical El Nino-like variability and a poor distribution of sea ice, with too much sea ice in the Northern Hemisphere and too little in the Southern Hemisphere. Greatest uncertainties in the forcings are the temporal and spatial variations of anthropogenic aerosols and their indirect effects on clouds. (authors)

  19. Adaptive capture of expert knowledge

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, C.L.; Jones, R.D. [Los Alamos National Lab., NM (United States); Hand, Un Kyong [Los Alamos National Lab., NM (United States)]|[US Navy (United States)

    1995-05-01

    A method is introduced that can directly acquire knowledge-engineered, rule-based logic in an adaptive network. This adaptive representation of the rule system can then replace the rule system in simulated intelligent agents and thereby permit further performance-based adaptation of the rule system. The approach described provides both weight-fitting network adaptation and potentially powerful rule mutation and selection mechanisms. Nonlinear terms are generated implicitly in the mutation process through the emergent interaction of multiple linear terms. By this method it is possible to acquire nonlinear relations that exist in the training data without addition of hidden layers or imposition of explicit nonlinear terms in the network. We smoothed and captured a set of expert rules with an adaptive network. The motivation for this was to (1) realize a speed advantage over traditional rule-based simulations; (2) have variability in the intelligent objects not possible with rule-based systems but provided by adaptive systems; and (3) maintain the understandability of rule-based simulations. A set of binary rules was smoothed and converted into a simple set of arithmetic statements, where continuous, non-binary rules are permitted. A neural network, called the expert network, was developed to capture this rule set, which it was able to do with zero error. The expert network is also capable of learning a nonmonotonic term without a hidden layer. The trained network in feedforward operation is fast running, compact, and traceable to the rule base.

  20. Agent-based modeling and simulation of clean heating system adoption in Norway

    Energy Technology Data Exchange (ETDEWEB)

    Sopha, Bertha Maya

    2011-03-15

    at revealing potential interventions toward wood pellet heating in Norway. A methodological approach of coupling ABM with empirical research is introduced to develop a conceptual model capturing households' adoption-decision processes, which is parameterized with empirical data. Simulation results demonstrate that the simulated data reasonably reproduce independent historical data at both macro- and micro-levels, which indicates that the proposed methodology is promising. As a whole, this thesis addresses the case study from an interdisciplinary perspective. The major contributions of the thesis lie in the inclusion of psychological factors, in addition to socio-demographic and technological factors, in the adoption decision, and in the methodological proposal of coupling agent-based modeling (ABM) with empirical research and its application to the studied case. (Author)

  1. FASTBUS simulation models in VHDL

    International Nuclear Information System (INIS)

    Appelquist, G.

    1992-11-01

    Four hardware simulation models implementing the FASTBUS protocol are described. The models are written in the VHDL hardware description language to obtain portability, i.e. without relations to any specific simulator. They include two complete FASTBUS devices, a full-duplex segment interconnect and ancillary logic for the segment. In addition, master and slave models using a high level interface to describe FASTBUS operations, are presented. With these models different configurations of FASTBUS systems can be evaluated and the FASTBUS transactions of new devices can be verified. (au)

  2. Estimating ICU bed capacity using discrete event simulation.

    Science.gov (United States)

    Zhu, Zhecheng; Hen, Bee Hoon; Teow, Kiok Liang

    2012-01-01

    The intensive care unit (ICU) in a hospital caters for critically ill patients. The number of the ICU beds has a direct impact on many aspects of hospital performance. Lack of the ICU beds may cause ambulance diversion and surgery cancellation, while an excess of ICU beds may cause a waste of resources. This paper aims to develop a discrete event simulation (DES) model to help the healthcare service providers determine the proper ICU bed capacity which strikes the balance between service level and cost effectiveness. The DES model is developed to reflect the complex patient flow of the ICU system. Actual operational data, including emergency arrivals, elective arrivals and length of stay, are directly fed into the DES model to capture the variations in the system. The DES model is validated by open box test and black box test. The validated model is used to test two what-if scenarios which the healthcare service providers are interested in: the proper number of the ICU beds in service to meet the target rejection rate and the extra ICU beds in service needed to meet the demand growth. A 12-month period of actual operational data was collected from an ICU department with 13 ICU beds in service. Comparison between the simulation results and the actual situation shows that the DES model accurately captures the variations in the system, and the DES model is flexible to simulate various what-if scenarios. DES helps the healthcare service providers describe the current situation, and simulate the what-if scenarios for future planning.
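
    A stripped-down version of such a bed-capacity DES can be written as a loss system with stochastic arrivals and lengths of stay; the rates, bed count and length-of-stay distribution below are hypothetical, not the hospital's operational data.

```python
import heapq
import random

random.seed(0)

# Minimal ICU discrete event simulation: Poisson arrivals, lognormal length of
# stay, a fixed number of beds, and rejection when every bed is occupied.
BEDS, ARRIVALS_PER_DAY, SIM_DAYS = 13, 3.0, 365
LOS_MEAN_LOG, LOS_SD_LOG = 1.0, 0.8          # lognormal LOS parameters (days)

t, busy, admitted, rejected = 0.0, 0, 0, 0
discharges = []                               # min-heap of scheduled discharge times

while t < SIM_DAYS:
    t += random.expovariate(ARRIVALS_PER_DAY)            # next arrival time
    while discharges and discharges[0] <= t:              # free beds up to time t
        heapq.heappop(discharges)
        busy -= 1
    if busy < BEDS:
        busy += 1
        admitted += 1
        heapq.heappush(discharges,
                       t + random.lognormvariate(LOS_MEAN_LOG, LOS_SD_LOG))
    else:
        rejected += 1                                      # diversion / cancellation

total = admitted + rejected
print(f"admitted {admitted}, rejected {rejected} "
      f"({100.0 * rejected / total:.1f}% rejection rate)")
```

    Re-running the loop over a range of BEDS values gives the kind of service-level versus capacity trade-off the authors explore in their what-if scenarios.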

  3. DeepSimulator: a deep simulator for Nanopore sequencing

    KAUST Repository

    Li, Yu

    2017-12-23

    Motivation: Oxford Nanopore sequencing is a rapidly developed sequencing technology in recent years. To keep pace with the explosion of the downstream data analytical tools, a versatile Nanopore sequencing simulator is needed to complement the experimental data as well as to benchmark those newly developed tools. However, all the currently available simulators are based on simple statistics of the produced reads, which have difficulty in capturing the complex nature of the Nanopore sequencing procedure, the main task of which is the generation of raw electrical current signals. Results: Here we propose a deep learning based simulator, DeepSimulator, to mimic the entire pipeline of Nanopore sequencing. Starting from a given reference genome or assembled contigs, we simulate the electrical current signals by a context-dependent deep learning model, followed by a base-calling procedure to yield simulated reads. This workflow mimics the sequencing procedure more naturally. The thorough experiments performed across four species show that the signals generated by our context-dependent model are more similar to the experimentally obtained signals than the ones generated by the official context-independent pore model. In terms of the simulated reads, we provide a parameter interface to users so that they can obtain the reads with different accuracies ranging from 83% to 97%. The reads generated by the default parameter have almost the same properties as the real data. Two case studies demonstrate the application of DeepSimulator to benefit the development of tools in de novo assembly and in low coverage SNP detection. Availability: The software can be accessed freely at: https://github.com/lykaust15/DeepSimulator.

  4. The millennium water vapour drop in chemistry–climate model simulations

    Directory of Open Access Journals (Sweden)

    S. Brinkop

    2016-07-01

    Full Text Available This study investigates the abrupt and severe water vapour decline in the stratosphere beginning in the year 2000 (the "millennium water vapour drop") and other similarly strong stratospheric water vapour reductions by means of various simulations with the state-of-the-art Chemistry-Climate Model (CCM) EMAC (ECHAM/MESSy Atmospheric Chemistry Model). The model simulations differ with respect to the prescribed sea surface temperatures (SSTs) and whether nudging is applied or not. The CCM EMAC is able to most closely reproduce the signature and pattern of the water vapour drop, in agreement with those derived from satellite observations, if the model is nudged. Model results confirm that this extraordinary water vapour decline is particularly obvious in the tropical lower stratosphere and is related to a large decrease in cold point temperature. The drop signal propagates under dilution to the higher stratosphere and to the poles via the Brewer–Dobson circulation (BDC). We found that the driving forces for this significant decline in water vapour mixing ratios are tropical sea surface temperature (SST) changes due to a coincidence of a preceding strong El Niño–Southern Oscillation event (1997/1998) followed by a strong La Niña event (1999/2000), supported by the change from the westerly to the easterly phase of the equatorial stratospheric quasi-biennial oscillation (QBO) in 2000. Correct (observed) SSTs are important for triggering the strong decline in water vapour. There are indications that, at least partly, SSTs contribute to the long period of low water vapour values from 2001 to 2006. For this period, the specific dynamical state of the atmosphere (the overall atmospheric large-scale wind and temperature distribution) is important as well, as it causes the observed persistent low cold point temperatures. These are induced by a period of increased upwelling, which, however, has no corresponding pronounced signature in SST anomalies in the tropics.

  5. A vertically resolved, global, gap-free ozone database for assessing or constraining global climate model simulations

    Directory of Open Access Journals (Sweden)

    G. E. Bodeker

    2013-02-01

    Full Text Available High vertical resolution ozone measurements from eight different satellite-based instruments have been merged with data from the global ozonesonde network to calculate monthly mean ozone values in 5° latitude zones. These "Tier 0" ozone number densities and ozone mixing ratios are provided on 70 altitude levels (1 to 70 km) and on 70 pressure levels spaced ~1 km apart (878.4 hPa to 0.046 hPa). The Tier 0 data are sparse and do not cover the entire globe or altitude range. To provide a gap-free database, a least squares regression model is fitted to the Tier 0 data and then evaluated globally. The regression model fit coefficients are expanded in Legendre polynomials to account for latitudinal structure, and in Fourier series to account for seasonality. Regression model fit coefficient patterns, which are two-dimensional fields indexed by latitude and month of the year, from the N-th vertical level serve as an initial guess for the fit at the (N+1)-th vertical level. The initial guess field for the first fit level (20 km/58.2 hPa) was derived by applying the regression model to total column ozone fields. Perturbations away from the initial guess are captured through the Legendre and Fourier expansions. By applying a single fit at each level, and using the approach of allowing the regression fits to change only slightly from one level to the next, the regression is less sensitive to measurement anomalies at individual stations or to individual satellite-based instruments. Particular attention is paid to ensuring that the low ozone abundances in the polar regions are captured. By summing different combinations of contributions from different regression model basis functions, four different "Tier 1" databases have been compiled for different intended uses. This database is suitable for assessing ozone fields from chemistry-climate model simulations or for providing the ozone boundary conditions for global climate model simulations that do not
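
    A minimal version of such a latitude–month regression can be written as an ordinary least-squares fit with Legendre polynomials in sin(latitude) and Fourier terms in month. The basis orders, synthetic data, and single-level fit below are illustrative only and omit the level-to-level initial-guess propagation described in the abstract.

    ```python
    import numpy as np

    def design_matrix(lat_deg, month, n_leg=4, n_fourier=2):
        """Basis: Legendre polynomials in sin(latitude) for meridional structure,
        Fourier terms in month for seasonality (orders are illustrative)."""
        x = np.sin(np.deg2rad(lat_deg))
        cols = [np.polynomial.legendre.Legendre.basis(k)(x) for k in range(n_leg + 1)]
        for k in range(1, n_fourier + 1):
            cols.append(np.cos(2 * np.pi * k * month / 12.0))
            cols.append(np.sin(2 * np.pi * k * month / 12.0))
        return np.column_stack(cols)

    # Least-squares fit to sparse, synthetic "Tier 0"-like zonal means at one level
    rng = np.random.default_rng(0)
    lat = rng.uniform(-87.5, 87.5, 500)
    mon = rng.integers(1, 13, 500).astype(float)
    obs = (5.0 + 2.0 * np.sin(np.deg2rad(lat)) ** 2
           + 0.5 * np.cos(2 * np.pi * mon / 12.0) + rng.normal(0.0, 0.1, 500))
    coef, *_ = np.linalg.lstsq(design_matrix(lat, mon), obs, rcond=None)

    # Evaluate the fitted field on a regular latitude-month grid (gap-free)
    grid_lat, grid_mon = np.meshgrid(np.arange(-85, 86, 5), np.arange(1, 13), indexing="ij")
    field = design_matrix(grid_lat.ravel(), grid_mon.ravel().astype(float)) @ coef
    ```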

  6. Multiple-point statistical simulation for hydrogeological models: 3-D training image development and conditioning strategies

    Directory of Open Access Journals (Sweden)

    A.-S. Høyer

    2017-12-01

    Full Text Available Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km2 in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments), which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels of size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km2. We use an iterative training image development strategy and find that even slight modifications in the training image create significant changes in simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. We present a practical

  7. Chance-constrained overland flow modeling for improving conceptual distributed hydrologic simulations based on scaling representation of sub-daily rainfall variability

    International Nuclear Information System (INIS)

    Han, Jing-Cheng; Huang, Guohe; Huang, Yuefei; Zhang, Hua; Li, Zhong; Chen, Qiuwen

    2015-01-01

    Lack of hydrologic process representation at short time scales can lead to inadequate simulations in distributed hydrological modeling. Especially for complex mountainous watersheds, surface runoff simulations are significantly affected by overland flow generation, which is closely related to the rainfall characteristics at sub-daily time steps. In this paper, the sub-daily variability of rainfall intensity was considered using a probability distribution, and a chance-constrained overland flow modeling approach was proposed to capture the generation of overland flow within conceptual distributed hydrologic simulations. The integrated modeling procedures were further demonstrated through a watershed of the China Three Gorges Reservoir area, leading to an improved SLURP-TGR hydrologic model based on SLURP. Combined with rainfall thresholds determined to distinguish various magnitudes of daily rainfall totals, three levels of significance were simultaneously employed to examine the hydrologic-response simulation. Results showed that SLURP-TGR could enhance the model performance, and the deviation of runoff simulations was effectively controlled. However, the rainfall threshold was crucial for reflecting the scaling effect of rainfall intensity; the optimal level of significance and rainfall threshold were 0.05 and 10 mm, respectively. As for the Xiangxi River watershed, the main runoff contribution came from interflow of the fast store. Although only slight differences in overland flow simulations between SLURP and SLURP-TGR were derived, SLURP-TGR was found to help improve the simulation of peak flows, and would improve the overall modeling efficiency through adjusting runoff component simulations. Consequently, the developed modeling approach favors efficient representation of hydrological processes and is expected to have potential for wide application. - Highlights: • We develop an improved hydrologic model considering the scaling effect of rainfall. • A

  8. Chance-constrained overland flow modeling for improving conceptual distributed hydrologic simulations based on scaling representation of sub-daily rainfall variability

    Energy Technology Data Exchange (ETDEWEB)

    Han, Jing-Cheng [State Key Laboratory of Hydroscience & Engineering, Department of Hydraulic Engineering, Tsinghua University, Beijing 100084 (China); Huang, Guohe, E-mail: huang@iseis.org [Institute for Energy, Environment and Sustainable Communities, University of Regina, Regina, Saskatchewan S4S 0A2 (Canada); Huang, Yuefei [State Key Laboratory of Hydroscience & Engineering, Department of Hydraulic Engineering, Tsinghua University, Beijing 100084 (China); Zhang, Hua [College of Science and Engineering, Texas A&M University — Corpus Christi, Corpus Christi, TX 78412-5797 (United States); Li, Zhong [Institute for Energy, Environment and Sustainable Communities, University of Regina, Regina, Saskatchewan S4S 0A2 (Canada); Chen, Qiuwen [Center for Eco-Environmental Research, Nanjing Hydraulics Research Institute, Nanjing 210029 (China)

    2015-08-15

    Lack of hydrologic process representation at short time scales can lead to inadequate simulations in distributed hydrological modeling. Especially for complex mountainous watersheds, surface runoff simulations are significantly affected by overland flow generation, which is closely related to the rainfall characteristics at sub-daily time steps. In this paper, the sub-daily variability of rainfall intensity was considered using a probability distribution, and a chance-constrained overland flow modeling approach was proposed to capture the generation of overland flow within conceptual distributed hydrologic simulations. The integrated modeling procedures were further demonstrated through a watershed of the China Three Gorges Reservoir area, leading to an improved SLURP-TGR hydrologic model based on SLURP. Combined with rainfall thresholds determined to distinguish various magnitudes of daily rainfall totals, three levels of significance were simultaneously employed to examine the hydrologic-response simulation. Results showed that SLURP-TGR could enhance the model performance, and the deviation of runoff simulations was effectively controlled. However, the rainfall threshold was crucial for reflecting the scaling effect of rainfall intensity; the optimal level of significance and rainfall threshold were 0.05 and 10 mm, respectively. As for the Xiangxi River watershed, the main runoff contribution came from interflow of the fast store. Although only slight differences in overland flow simulations between SLURP and SLURP-TGR were derived, SLURP-TGR was found to help improve the simulation of peak flows, and would improve the overall modeling efficiency through adjusting runoff component simulations. Consequently, the developed modeling approach favors efficient representation of hydrological processes and is expected to have potential for wide application. - Highlights: • We develop an improved hydrologic model considering the scaling effect of rainfall. • A

  9. Scientific Modeling and simulations

    CERN Document Server

    Diaz de la Rubia, Tomás

    2009-01-01

    Showcases the conceptual advantages of modeling which, coupled with the unprecedented computing power available through simulations, allow scientists to tackle the formidable problems of our society, such as the search for hydrocarbons, understanding the structure of a virus, or the intersection between simulations and real data in extreme environments

  10. Responses to atmospheric CO2 concentrations in crop simulation models: a review of current simple and semicomplex representations and options for model development.

    Science.gov (United States)

    Vanuytrecht, Eline; Thorburn, Peter J

    2017-05-01

    Elevated atmospheric CO2 concentrations ([CO2]) cause direct changes in crop physiological processes (e.g. photosynthesis and stomatal conductance). To represent these CO2 responses, commonly used crop simulation models have been amended, using simple and semicomplex representations of the processes involved. Yet, there is no standard approach to, and often poor documentation of, these developments. This study used a bottom-up approach (starting with the APSIM framework as a case study) to evaluate modelled responses in a consortium of commonly used crop models and to illuminate whether variation in responses reflects true uncertainty in our understanding or arbitrary choices of model developers. Diversity in simulated CO2 responses and limited validation were common among models, both within the APSIM framework and more generally. Whereas production responses show some consistency up to moderately high [CO2] (around 700 ppm), transpiration and stomatal responses vary more widely in nature and magnitude (e.g. a decrease in stomatal conductance varying between 35% and 90% among models was found for [CO2] doubling to 700 ppm). Most notably, nitrogen responses were found to be included in few crop models despite being commonly observed and critical for the simulation of photosynthetic acclimation, crop nutritional quality and carbon allocation. We suggest harmonization and consideration of more mechanistic concepts in particular subroutines, for example for the simulation of N dynamics, as a way to improve our predictive understanding of CO2 responses and capture secondary processes. Intercomparison studies could assist in this aim, provided that they go beyond simple output comparison and explicitly identify the representations and assumptions that cause intermodel differences. Additionally, validation and proper documentation of the representation of CO2 responses within models should be prioritized. © 2017 John Wiley & Sons Ltd.
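
    A worked example of the simple, multiplicative style of response representation discussed in the review: stomatal conductance is scaled down linearly as [CO2] rises above a reference level. The sensitivity value is an arbitrary illustration chosen to land inside the 35–90% range quoted above, not a parameter from any specific model.

    ```python
    def stomatal_conductance_factor(co2_ppm, co2_ref=350.0, sensitivity=0.6):
        """Illustrative multiplicative scaling of stomatal conductance with [CO2]."""
        return max(1.0 - sensitivity * (co2_ppm - co2_ref) / co2_ref, 0.1)

    # Doubling [CO2] to 700 ppm gives a 60% reduction with this hypothetical sensitivity
    print(stomatal_conductance_factor(700.0))   # -> 0.4
    ```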

  11. Modeling Group Perceptions Using Stochastic Simulation: Scaling Issues in the Multiplicative AHP

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; van den Honert, Robin; Salling, Kim Bang

    2016-01-01

    This paper proposes a new decision support approach for applying stochastic simulation to the multiplicative analytic hierarchy process (AHP) in order to deal with issues concerning the scale parameter. The paper suggests a new approach that captures the influence from the scale parameter by maki...

  12. Network Modeling and Simulation A Practical Perspective

    CERN Document Server

    Guizani, Mohsen; Khan, Bilal

    2010-01-01

    Network Modeling and Simulation is a practical guide to using modeling and simulation to solve real-life problems. The authors give a comprehensive exposition of the core concepts in modeling and simulation, and then systematically address the many practical considerations faced by developers in modeling complex large-scale systems. The authors provide examples from computer and telecommunication networks and use these to illustrate the process of mapping generic simulation concepts to domain-specific problems in different industries and disciplines. Key features: Provides the tools and strate

  13. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher-fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used because the CAST software is unavailable. The main source of spacecraft dynamics error in the higher-fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. The signal generation model has similar characteristics (mean, variance and power spectral density) to the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher-fidelity spacecraft dynamics modeling from the CAST software.
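
    One simple way to build a signal generator with a prescribed mean, variance, and power spectral density is to shape white noise in the frequency domain and rescale it, as sketched below. The sampling rate, PSD shape, and moments are hypothetical stand-ins rather than the CAST estimation-error statistics.

    ```python
    import numpy as np

    def generate_error_signal(target_psd, mean, std, n_samples, seed=0):
        """Colored-noise sketch: filter white noise so its spectrum follows a target
        PSD shape, then rescale to the desired mean and variance (illustrative)."""
        rng = np.random.default_rng(seed)
        white = rng.standard_normal(n_samples)
        # Shape the spectrum of the white noise by the square root of the target PSD
        spectrum = np.fft.rfft(white) * np.sqrt(target_psd)
        colored = np.fft.irfft(spectrum, n=n_samples)
        colored = (colored - colored.mean()) / colored.std()   # normalize
        return mean + std * colored

    n = 4096
    freqs = np.fft.rfftfreq(n, d=0.1)                 # hypothetical 10 Hz sampling
    psd_shape = 1.0 / (1.0 + (freqs / 0.5) ** 2)      # low-pass ("red") error spectrum
    err = generate_error_signal(psd_shape, mean=0.001, std=0.01, n_samples=n)
    ```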

  14. Simulating the role of visual selective attention during the development of perceptual completion

    OpenAIRE

    Schlesinger, Matthew; Amso, Dima; Johnson, Scott P.

    2012-01-01

    We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of visual selective attention. In the current simulation study, we used the same model to simulate 3-month-olds’ performance on a second measure, the percep...

  15. Model reduction for circuit simulation

    CERN Document Server

    Hinze, Michael; Maten, E Jan W Ter

    2011-01-01

    Simulation based on mathematical models plays a major role in computer-aided design of integrated circuits (ICs). Decreasing structure sizes, increasing packing densities and driving frequencies require the use of refined mathematical models that take into account secondary, parasitic effects. This leads to very high-dimensional problems which nowadays require simulation times too large for the short time-to-market demands in industry. Modern Model Order Reduction (MOR) techniques present a way out of this dilemma by providing surrogate models which keep the main characteristics of the devi

  16. A Novel Multi-scale Simulation Strategy for Turbulent Reacting Flows

    Energy Technology Data Exchange (ETDEWEB)

    James, Sutherland [University of Utah]

    2018-04-12

    In this project, a new methodology was proposed to bridge the gap between Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES). This novel methodology, titled Lattice-Based Multiscale Simulation (LBMS), creates a lattice structure of One-Dimensional Turbulence (ODT) models. ODT has been shown to capture turbulent combustion with high fidelity by fully resolving interactions between turbulence and diffusion. By creating a lattice of coupled ODT models, LBMS overcomes the main shortcoming of ODT, namely its inability to capture large-scale, three-dimensional flow structures. However, by spacing these lattices significantly apart, LBMS avoids the curse of dimensionality that creates the untenable computational costs associated with DNS. This project has shown that LBMS is capable of reproducing statistics of isotropic turbulent flows while coarsening the spacing between lines significantly. It also investigates and resolves issues that arise when coupling ODT lines, such as flux reconstruction perpendicular to a given ODT line, preservation of conserved quantities when eddies cross a coarse cell volume, and boundary condition application. Robust parallelization is also investigated.

  17. Laser capture microdissection: Arcturus(XT) infrared capture and UV cutting methods.

    Science.gov (United States)

    Gallagher, Rosa I; Blakely, Steven R; Liotta, Lance A; Espina, Virginia

    2012-01-01

    Laser capture microdissection (LCM) is a technique that allows the precise procurement of enriched cell populations from a heterogeneous tissue under direct microscopic visualization. LCM can be used to harvest the cells of interest directly or can be used to isolate specific cells by ablating the unwanted cells, resulting in histologically enriched cell populations. The fundamental components of laser microdissection technology are (a) visualization of the cells of interest via microscopy, (b) transfer of laser energy to a thermolabile polymer with either the formation of a polymer-cell composite (capture method) or transfer of laser energy via an ultraviolet laser to photovolatize a region of tissue (cutting method), and (c) removal of cells of interest from the heterogeneous tissue section. Laser energy supplied by LCM instruments can be infrared (810 nm) or ultraviolet (355 nm). Infrared lasers melt thermolabile polymers for cell capture, whereas ultraviolet lasers ablate cells for either removal of unwanted cells or excision of a defined area of cells. LCM technology is applicable to an array of applications including mass spectrometry, DNA genotyping and loss-of-heterozygosity analysis, RNA transcript profiling, cDNA library generation, proteomics discovery, and signal kinase pathway profiling. This chapter describes the unique features of the Arcturus(XT) laser capture microdissection instrument, which incorporates both infrared capture and ultraviolet cutting technology in one instrument, using a proteomic downstream assay as a model.

  18. A spatial simulation model for the dispersal of the bluetongue vector Culicoides brevitarsis in Australia.

    Directory of Open Access Journals (Sweden)

    Joel K Kelso

    Full Text Available The spread of Bluetongue virus (BTV) among ruminants is caused by movement of infected host animals or by movement of infected Culicoides midges, the vector of BTV. Biologically plausible models of Culicoides dispersal are necessary for predicting the spread of BTV and are important for planning control and eradication strategies. A spatially explicit simulation model which captures the two underlying population mechanisms, population dynamics and movement, was developed using extensive data from a trapping program for C. brevitarsis on the east coast of Australia. A realistic midge flight sub-model was developed and the annual incursion and population establishment of C. brevitarsis was simulated. Data from the literature were used to parameterise the model. The model was shown to reproduce the spread of C. brevitarsis southwards along the east Australian coastline in spring, from an endemic population to the north. Such incursions were shown to be reliant on wind dispersal; Culicoides midge active flight on its own was not capable of achieving known rates of southern spread, nor was re-emergence of southern populations due to overwintering larvae. Data from midge trapping programmes were used to qualitatively validate the resulting simulation model. The model described in this paper is intended to form the vector component of an extended model that will also include BTV transmission. A model of midge movement and population dynamics has been developed in sufficient detail such that the extended model may be used to evaluate the timing and extent of BTV outbreaks. This extended model could then be used as a platform for addressing the effectiveness of spatially targeted vaccination strategies or animal movement bans as BTV spread mitigation measures, or the impact of climate change on the risk and extent of outbreaks. These questions involving incursive Culicoides spread cannot be simply addressed with non-spatial models.

  19. Mesoporous amine-bridged polysilsesquioxane for CO2 capture

    KAUST Repository

    Qi, Genggeng

    2011-01-01

    A novel class of amine-supported sorbents based on amine-bridged mesoporous polysilsesquioxane was developed via a simple one-pot sol-gel process. The new sorbent allows the incorporation of a large amount of active groups without sacrificing surface area or pore volume available for CO2 capture, leading to a CO2 capture capacity of 3.2 mmol g−1 under simulated flue gas conditions. The sorbent is readily regenerated at 100°C and exhibits good stability over repetitive adsorption-desorption cycling.

  20. Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2014-01-01

    This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).

  1. Regional climate simulations over South America: sensitivity to model physics and to the treatment of lateral boundary conditions using the MM5 model

    Energy Technology Data Exchange (ETDEWEB)

    Solman, Silvina A. [CONICET-UBA, Centro de Investigaciones del Mar y la Atmosfera (CIMA), Buenos Aires (Argentina); Universidad de Buenos Aires, Departamento de Ciencias de la Atmosfera y los Oceanos. Facultad de Ciencias Exactas y Naturales, Buenos Aires (Argentina); Pessacg, Natalia L. [CONICET-UBA, Centro de Investigaciones del Mar y la Atmosfera (CIMA), Buenos Aires (Argentina)

    2012-01-15

    In this study the capability of the MM5 model in simulating the main mode of intraseasonal variability during the warm season over South America is evaluated through a series of sensitivity experiments. Several 3-month simulations nested into ERA40 reanalysis were carried out using different cumulus schemes and planetary boundary layer schemes in an attempt to define the optimal combination of physical parameterizations for simulating alternating wet and dry conditions over the La Plata Basin (LPB) and the South Atlantic Convergence Zone regions, respectively. The results were compared with different observational datasets, and model evaluation was performed taking into account the spatial distribution of monthly precipitation and daily statistics of precipitation over the target regions. Though every experiment was able to capture the contrasting behavior of the precipitation during the simulated period, precipitation was largely underestimated, particularly over the LPB region, mainly due to a misrepresentation of the moisture flux convergence. Experiments using grid nudging of the winds above the planetary boundary layer showed a better performance than those in which no constraints were imposed on the regional circulation within the model domain. Overall, no single experiment was found to perform best over the entire domain and during the two contrasting months. The experiment that outperforms depends on the area of interest, with the simulation using the Grell (Kain-Fritsch) cumulus scheme in combination with the MRF planetary boundary layer scheme being more adequate for subtropical (tropical) latitudes. The ensemble of the sensitivity experiments showed a better performance than any individual experiment. (orig.)

  2. Finite Element Methods for real-time Haptic Feedback of Soft-Tissue Models in Virtual Reality Simulators

    Science.gov (United States)

    Frank, Andreas O.; Twombly, I. Alexander; Barth, Timothy J.; Smith, Jeffrey D.; Dalton, Bonnie P. (Technical Monitor)

    2001-01-01

    We have applied the linear elastic finite element method to compute haptic force feedback and domain deformations of soft tissue models for use in virtual reality simulators. Our results show that, for virtual object models of high-resolution 3D data (>10,000 nodes), haptic real time computations (>500 Hz) are not currently possible using traditional methods. Current research efforts are focused in the following areas: 1) efficient implementation of fully adaptive multi-resolution methods and 2) multi-resolution methods with specialized basis functions to capture the singularity at the haptic interface (point loading). To achieve real time computations, we propose parallel processing of a Jacobi preconditioned conjugate gradient method applied to a reduced system of equations resulting from surface domain decomposition. This can effectively be achieved using reconfigurable computing systems such as field programmable gate arrays (FPGA), thereby providing a flexible solution that allows for new FPGA implementations as improved algorithms become available. The resulting soft tissue simulation system would meet NASA Virtual Glovebox requirements and, at the same time, provide a generalized simulation engine for any immersive environment application, such as biomedical/surgical procedures or interactive scientific applications.
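
    For reference, the serial core of a Jacobi-preconditioned conjugate gradient solver is sketched below; the parallel, FPGA-hosted, domain-decomposed solver proposed in the abstract is considerably more involved, and the dense matrix here is used only for clarity.

    ```python
    import numpy as np

    def jacobi_pcg(A, b, tol=1e-8, max_iter=500):
        """Jacobi-preconditioned conjugate gradient for a symmetric positive-definite
        stiffness matrix A (dense for clarity; a real FE solver would use sparse A)."""
        M_inv = 1.0 / np.diag(A)              # Jacobi preconditioner: inverse diagonal
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = M_inv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x
    ```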

  3. Understanding Emergency Care Delivery Through Computer Simulation Modeling.

    Science.gov (United States)

    Laker, Lauren F; Torabi, Elham; France, Daniel J; Froehle, Craig M; Goldlust, Eric J; Hoot, Nathan R; Kasaie, Parastu; Lyons, Michael S; Barg-Walkow, Laura H; Ward, Michael J; Wears, Robert L

    2018-02-01

    In 2017, Academic Emergency Medicine convened a consensus conference entitled, "Catalyzing System Change through Health Care Simulation: Systems, Competency, and Outcomes." This article, a product of the breakout session on "understanding complex interactions through systems modeling," explores the role that computer simulation modeling can and should play in research and development of emergency care delivery systems. This article discusses areas central to the use of computer simulation modeling in emergency care research. The four central approaches to computer simulation modeling are described (Monte Carlo simulation, system dynamics modeling, discrete-event simulation, and agent-based simulation), along with problems amenable to their use and relevant examples to emergency care. Also discussed is an introduction to available software modeling platforms and how to explore their use for research, along with a research agenda for computer simulation modeling. Through this article, our goal is to enhance adoption of computer simulation, a set of methods that hold great promise in addressing emergency care organization and design challenges. © 2017 by the Society for Academic Emergency Medicine.

  4. Numerical simulation and experimental verification of oil recovery by macro-emulsion floods

    Energy Technology Data Exchange (ETDEWEB)

    Khamharatana, F. [Chulalongkorn Univ., Bangkok (Thailand); Thomas, S.; Farouq Ali, S. M. [Alberta Univ., Edmonton, AB (Canada)

    1997-08-01

    The process of emulsion flooding as an enhanced oil recovery method is described. The process involves several mechanisms that occur at the same time during displacement; therefore, simulation of emulsion flooding requires a good understanding of the flow mechanics of emulsions in porous media. This paper provides a description of the process and its mathematical representation. Emulsion rheology, droplet capture and surfactant adsorption are represented mathematically and incorporated into a one-dimensional, three-phase mathematical model to account for interactions of surfactant, oil, water and the rock matrix. The simulator was validated by comparing simulation results with the results from linear core floods performed in the laboratory. The best match was achieved by a multi-phase non-Newtonian rheological model of an emulsion with interfacial-tension-dependent relative permeabilities and time-dependent capture. 13 refs., 1 tab., 42 figs.

  5. The Water, Energy, and Carbon Dioxide Sequestration Simulation Model (WECSsim). A user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Kobos, Peter Holmes; Roach, Jesse Dillon; Klise, Geoffrey Taylor; Heath, Jason E.; Dewers, Thomas A.; Gutierrez, Karen A.; Malczynski, Leonard A.; Borns, David James; McNemar, Andrea

    2014-01-01

    The Water, Energy, and Carbon Sequestration Simulation Model (WECSsim) is a national dynamic simulation model that calculates and assesses capturing, transporting, and storing CO2 in deep saline formations from all coal- and natural-gas-fired power plants in the U.S. An overarching capability of WECSsim is to also account for simultaneous CO2 injection and water extraction within the same geological saline formation. Extracting, treating, and using these saline waters to cool the power plant is one way to develop more value from using saline formations as CO2 storage locations. WECSsim allows for both one-to-one comparisons of a single power plant to a single saline formation and the development of a national CO2 storage supply curve and related national assessments for these formations. This report summarizes the scope, structure, and methodology of WECSsim along with a few key results. Developing WECSsim from a small scoping study to the full national-scale modeling effort took approximately 5 years; this report represents the culmination of that effort. The key findings from the WECSsim model indicate the U.S. has several decades' worth of storage for CO2 in saline formations when managed appropriately. Competition for subsurface storage capacity, intrastate flows of CO2 and water, and a supportive regulatory environment all play a key role in the performance and cost profile across the range from a single power plant to all coal- and natural-gas-based plants' ability to store CO2. The overall system cost to capture, transport, and store CO2 for the national assessment ranges from $74 to $208/tonne stored ($96 to $272/tonne avoided) for the first 25 to 50% of the 1126 power plants, to between $1,585 and well beyond $2,000/tonne stored ($2,040 to well beyond $2,000/tonne avoided) for the remaining 75 to 100% of the plants. The latter range
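
    The relationship between the stored and avoided cost figures quoted above can be illustrated with a one-line conversion: the avoided cost exceeds the stored cost because capture consumes extra energy and therefore emits additional CO2. The emission ratio below is back-calculated from the abstract's own $74/$96 pair and is only an illustration, not a WECSsim input.

    ```python
    def cost_per_tonne_avoided(cost_per_tonne_stored, emission_ratio):
        """emission_ratio: extra CO2 emitted per tonne captured, due to the energy
        penalty of capture (assumed value, back-calculated from the quoted figures)."""
        return cost_per_tonne_stored / (1.0 - emission_ratio)

    # $74/tonne stored with a ~23% emission ratio gives roughly $96/tonne avoided
    print(round(cost_per_tonne_avoided(74.0, 0.23), 1))
    ```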

  6. Simulation of Flash-Flood-Producing Storm Events in Saudi Arabia Using the Weather Research and Forecasting Model

    KAUST Repository

    Deng, Liping

    2015-05-01

    The challenges of monitoring and forecasting flash-flood-producing storm events in data-sparse and arid regions are explored using the Weather Research and Forecasting (WRF) Model (version 3.5) in conjunction with a range of available satellite, in situ, and reanalysis data. Here, we focus on characterizing the initial synoptic features and examining the impact of model parameterization and resolution on the reproduction of a number of flood-producing rainfall events that occurred over the western Saudi Arabian city of Jeddah. Analysis from the European Centre for Medium-Range Weather Forecasts (ECMWF) interim reanalysis (ERA-Interim) data suggests that mesoscale convective systems associated with strong moisture convergence ahead of a trough were the major initial features for the occurrence of these intense rain events. The WRF Model was able to simulate the heavy rainfall, with driving convective processes well characterized by a high-resolution cloud-resolving model. The use of higher (1 km vs 5 km) resolution along the Jeddah coastline favors the simulation of local convective systems and adds value to the simulation of heavy rainfall, especially for deep-convection-related extreme values. At the 5-km resolution, corresponding to an intermediate study domain, simulation without a cumulus scheme led to the formation of deeper convective systems and enhanced rainfall around Jeddah, illustrating the need for careful model scheme selection in this transition resolution. In analysis of multiple nested WRF simulations (25, 5, and 1 km), localized volume and intensity of heavy rainfall together with the duration of rainstorms within the Jeddah catchment area were captured reasonably well, although there was evidence of some displacements of rainstorm events.

  7. Failure of CMIP5 climate models in simulating post-1950 decreasing trend of Indian monsoon

    Science.gov (United States)

    Saha, Anamitra; Ghosh, Subimal; Sahana, A. S.; Rao, E. P.

    2014-10-01

    Impacts of climate change on Indian Summer Monsoon Rainfall (ISMR) and the growing population pose a major threat to water and food security in India. Adapting to such changes requires reliable projections of ISMR by general circulation models. Here we find that the majority of new-generation climate models from the Coupled Model Intercomparison Project phase 5 (CMIP5) fail to simulate the post-1950 decreasing trend of ISMR. The weakening of the monsoon is associated with the warming of the southern Indian Ocean and strengthening of cyclonic formation in the tropical western Pacific Ocean. We also find that these large-scale changes are not captured by CMIP5 models, with few exceptions, which is the reason for this failure. Proper representation of these highlighted geophysical processes in next-generation models may improve the reliability of ISMR projections. Our results also alert water resource planners to evaluate the CMIP5 models before using them for adaptation strategies.

  8. Flocking and self-defense: experiments and simulations of avian mobbing

    Science.gov (United States)

    Kane, Suzanne Amador

    2011-03-01

    We have performed motion capture studies in the field of avian mobbing, in which flocks of prey birds harass predatory birds. Our empirical studies cover both field observations of mobbing occurring in mid-air, where both predator and prey are in flight, and an experimental system using actual prey birds and simulated predator "perch and wait" strategies. To model our results and establish the effectiveness of mobbing flight paths at minimizing risk of capture while optimizing predator harassment, we have performed computer simulations using the actual measured trajectories of mobbing prey birds combined with model predator trajectories. To accurately simulate predator motion, we also measured raptor acceleration and flight dynamics, as well as prey-pursuit strategies. These experiments and theoretical studies were all performed with undergraduate research assistants in a liberal arts college setting. This work illustrates how biological physics provides undergraduate research projects well suited to the abilities of physics majors with interdisciplinary science interests and diverse backgrounds.

  9. Simulation and Sensitivity in a Nested Modeling System for South America. Part II: GCM Boundary Forcing.

    Science.gov (United States)

    Rojas, Maisa; Seth, Anji

    2003-08-01

    In Part I of this study, the RegCM's ability to simulate the circulation and rainfall observed in the two extreme seasons was demonstrated when driven at the lateral boundaries by reanalyzed forcing. Seasonal integrations with the RegCM driven by GCM ensemble-derived lateral boundary forcing demonstrate that the nested model responds well to the SST forcing, capturing the major features of the circulation and rainfall differences between the two years. The GCM-driven model also improves upon the monthly evolution of rainfall compared with that from the GCM. However, the nested model rainfall simulations for the two seasons are degraded compared with those from the reanalyses-driven RegCM integrations. The poor location of the Atlantic intertropical convergence zone (ITCZ) in the GCM leads to excess rainfall in Nordeste in the nested model. An expanded domain was tested, wherein the RegCM was permitted more internal freedom to respond to SST and regional orographic forcing. Results show that the RegCM is able to improve the location of the ITCZ and the seasonal evolution of rainfall in Nordeste, the Amazon region, and the southeastern region of Brazil. However, it remains that the limiting factor in the skill of the nested modeling system is the quality of the lateral boundary forcing provided by the global model.

  10. Review, modeling, Heat Integration, and improved schemes of Rectisol®-based processes for CO2 capture

    International Nuclear Information System (INIS)

    Gatti, Manuele; Martelli, Emanuele; Marechal, François; Consonni, Stefano

    2014-01-01

    The paper evaluates the thermodynamic performances and the energy integration of alternative schemes of a methanol-absorption-based acid gas removal process designed for CO2 Capture and Storage. More precisely, this work focuses on the Rectisol® process specifically designed for the selective removal of H2S and CO2 from syngas produced by coal gasification. The study addresses the following issues: (i) perform a review of the Rectisol® schemes proposed by engineers and researchers with the purpose of determining the best one for CO2 capture and storage; (ii) calibrate the PC-SAFT equation of state for CH3OH–CO2–H2S–H2–CO mixtures at conditions relevant to the Rectisol® process; (iii) evaluate the thermodynamic performances and optimize the energy integration of a "Reference" scheme derived from those available in the literature; (iv) identify and assess alternative Rectisol® schemes with optimized performance for CO2 Capture and Storage and Heat Integration with utilities. On the basis of the analysis of the Composite Curves of the integrated process, we propose some possible improvements at the level of the process configuration, like the introduction of mechanical vapor recompression and the development of a two-stage regeneration arrangement. - Highlights: • Comprehensive review of the Rectisol® process configurations and applications. • Calibration of the PC-SAFT equation of state for Rectisol®-relevant mixtures. • Detailed process simulation, optimized Heat Integration, and utility design. • Development of alternative Rectisol® schemes optimized for CO2 Capture

  11. Automated Simulation Model Generation

    NARCIS (Netherlands)

    Huang, Y.

    2013-01-01

    One of today's challenges in the field of modeling and simulation is to model increasingly larger and more complex systems. Complex models take long to develop and incur high costs. With the advances in data collection technologies and more popular use of computer-aided systems, more data has become

  12. Thermodynamic and Process Modelling of Gas Hydrate Systems in CO2 Capture Processes

    DEFF Research Database (Denmark)

    Herslund, Peter Jørgensen

    A novel gas separation technique based on gas hydrate formation (solid precipitation) is investigated by means of thermodynamic modeling and experimental investigations. This process has previously been proposed for application in post-combustion carbon dioxide capture from power station flue gases...... formation may be performed at pressures of approximately 20 MPa and temperatures below 280 K. Thermodynamic promoters are needed, to reduce the pressure requirement of the process, thereby making it competitive to existing capture technologies. A literature study is presented focusing mainly...... on thermodynamic gas hydrate promotion by hydrate formers stabilising the classical gas clathrate hydrate structures (sI, sII and sH) at low to moderate pressures. Much literature is available on this subject. Both experimental and theoretical studies presented in the literature have pointed out cyclopentane...

  13. How Airbnb Captures and Disseminates Value

    OpenAIRE

    Reinhold, Stephan; Dolnicar, Sara

    2017-01-01

    This chapter analyses two of the six vital business model elements, explaining the functioning of peer-to-peer accommodation networks: value capture and dissemination. The other elements are discussed in detail in Chapter 4. We focus on Airbnb because it is the international market leader. Separate business analyses are necessary for other peer-to-peer accommodation networks given that each functions in a slightly different way. In this chapter the business model value capture and value disse...

  14. An introduction to enterprise modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ostic, J.K.; Cannon, C.E. [Los Alamos National Lab., NM (United States). Technology Modeling and Analysis Group

    1996-09-01

    As part of an ongoing effort to continuously improve productivity, quality, and efficiency of both industry and Department of Energy enterprises, Los Alamos National Laboratory is investigating various manufacturing and business enterprise simulation methods. A number of enterprise simulation software models are being developed to enable engineering analysis of enterprise activities. In this document the authors define the scope of enterprise modeling and simulation efforts, and review recent work in enterprise simulation at Los Alamos National Laboratory as well as at other industrial, academic, and research institutions. References of enterprise modeling and simulation methods and a glossary of enterprise-related terms are provided.

  15. Diffuse-Interface Capturing Methods for Compressible Two-Phase Flows

    Science.gov (United States)

    Saurel, Richard; Pantano, Carlos

    2018-01-01

    Simulation of compressible flows became a routine activity with the appearance of shock-/contact-capturing methods. These methods can determine all waves, particularly discontinuous ones. However, additional difficulties may appear in two-phase and multimaterial flows due to the abrupt variation of thermodynamic properties across the interfacial region, with discontinuous thermodynamical representations at the interfaces. To overcome this difficulty, researchers have developed augmented systems of governing equations to extend the capturing strategy. These extended systems, reviewed here, are termed diffuse-interface models, because they are designed to compute flow variables correctly in numerically diffused zones surrounding interfaces. In particular, they facilitate coupling the dynamics on both sides of the (diffuse) interfaces and tend to the proper pure fluid-governing equations far from the interfaces. This strategy has become efficient for contact interfaces separating fluids that are governed by different equations of state, in the presence or absence of capillary effects, and with phase change. More sophisticated materials than fluids (e.g., elastic-plastic materials) have been considered as well.

  16. Geometric capture and escape of a microswimmer colliding with an obstacle.

    Science.gov (United States)

    Spagnolie, Saverio E; Moreno-Flores, Gregorio R; Bartolo, Denis; Lauga, Eric

    2015-05-07

    Motivated by recent experiments, we consider the hydrodynamic capture of a microswimmer near a stationary spherical obstacle. Simulations of model equations show that a swimmer approaching a small spherical colloid is simply scattered. In contrast, when the colloid is larger than a critical size it acts as a passive trap: the swimmer is hydrodynamically captured along closed trajectories and endlessly orbits around the colloidal sphere. In order to gain physical insight into this hydrodynamic scattering problem, we address it analytically. We provide expressions for the critical trapping radius, the depth of the "basin of attraction," and the scattering angle, which show excellent agreement with our numerical findings. We also demonstrate and rationalize the strong impact of swimming-flow symmetries on the trapping efficiency. Finally, we give the swimmer an opportunity to escape the colloidal traps by considering the effects of Brownian, or active, diffusion. We show that in some cases the trapping time is governed by an Ornstein-Uhlenbeck process, which results in a trapping time distribution that is well-approximated as inverse-Gaussian. The predictions again compare very favorably with the numerical simulations. We envision applications of the theory to bioremediation, microorganism sorting techniques, and the study of bacterial populations in heterogeneous or porous environments.
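
    The escape-time statistics mentioned above can be explored with a direct Euler–Maruyama simulation of an Ornstein–Uhlenbeck process that starts at the trap centre and is recorded when it first crosses an escape barrier. The restoring rate, noise strength, and barrier below are illustrative choices, not values derived from the hydrodynamic model in the paper.

    ```python
    import numpy as np

    def ou_escape_times(theta, sigma, barrier, n_paths=5000, dt=1e-3, t_max=50.0, seed=0):
        """First-passage times of an Ornstein-Uhlenbeck process out of |x| < barrier
        (illustrative parameters, not fitted to the swimmer-colloid system)."""
        rng = np.random.default_rng(seed)
        x = np.zeros(n_paths)
        alive = np.ones(n_paths, dtype=bool)
        times = np.full(n_paths, np.nan)
        t = 0.0
        while t < t_max and alive.any():
            n_alive = int(alive.sum())
            x[alive] += -theta * x[alive] * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_alive)
            escaped = alive & (np.abs(x) > barrier)
            times[escaped] = t
            alive &= ~escaped
            t += dt
        return times[~np.isnan(times)]   # paths that escaped within t_max

    samples = ou_escape_times(theta=1.0, sigma=0.5, barrier=1.2)
    ```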

  17. Modeling and simulation of blood collection systems.

    Science.gov (United States)

    Alfonso, Edgar; Xie, Xiaolan; Augusto, Vincent; Garraud, Olivier

    2012-03-01

    This paper addresses the modeling and simulation of blood collection systems in France for both fixed site and mobile blood collection with walk in whole blood donors and scheduled plasma and platelet donors. Petri net models are first proposed to precisely describe different blood collection processes, donor behaviors, their material/human resource requirements and relevant regulations. Petri net models are then enriched with quantitative modeling of donor arrivals, donor behaviors, activity times and resource capacity. Relevant performance indicators are defined. The resulting simulation models can be straightforwardly implemented with any simulation language. Numerical experiments are performed to show how the simulation models can be used to select, for different walk in donor arrival patterns, appropriate human resource planning and donor appointment strategies.

  18. Simulations and measurements of adiabatic annular flows in triangular, tight lattice nuclear fuel bundle model

    Energy Technology Data Exchange (ETDEWEB)

    Saxena, Abhishek, E-mail: asaxena@lke.mavt.ethz.ch [ETH Zurich, Laboratory for Nuclear Energy Systems, Department of Mechanical and Process Engineering, Sonneggstrasse 3, 8092 Zürich (Switzerland); Zboray, Robert [Laboratory for Thermal-hydraulics, Nuclear Energy and Safety Department, Paul Scherrer Institute, 5232 Villigen PSI (Switzerland); Prasser, Horst-Michael [ETH Zurich, Laboratory for Nuclear Energy Systems, Department of Mechanical and Process Engineering, Sonneggstrasse 3, 8092 Zürich (Switzerland); Laboratory for Thermal-hydraulics, Nuclear Energy and Safety Department, Paul Scherrer Institute, 5232 Villigen PSI (Switzerland)

    2016-04-01

    High conversion light water reactors (HCLWRs) having triangular, tight-lattice fuel bundles could enable improved fuel utilization compared to present-day LWRs. However, the efficient cooling of a tight-lattice bundle has yet to be proven. A major concern is the avoidance of high-quality boiling crisis (film dry-out) by the use of efficient functional spacers. For this reason, we have carried out experiments on adiabatic, air-water annular two-phase flows in a tight-lattice, triangular fuel bundle model using generic spacers. A high-spatial-resolution, non-intrusive measurement technology, cold neutron tomography, has been utilized to resolve the distribution of the liquid film thickness on the virtual fuel pin surfaces. Unsteady CFD simulations have also been performed to replicate and compare with the experiments using the commercial code STAR-CCM+. Large eddies have been resolved at the grid level to capture the dominant unsteady flow features expected to drive the liquid film thickness distribution downstream of a spacer, while the subgrid scales have been modeled using the Wall-Adapting Local Eddy-viscosity (WALE) subgrid model. A Volume of Fluid (VOF) method, which directly tracks the interface and does away with closure relationship models for interfacial exchange terms, has also been employed. The present paper shows a first comparison of the measurements with the simulation results.

  19. A model for long-distance dispersal of boll weevils (Coleoptera: Curculionidae)

    Science.gov (United States)

    Westbrook, John K.; Eyster, Ritchie S.; Allen, Charles T.

    2011-07-01

    The boll weevil, Anthonomus grandis (Boheman), has been a major insect pest of cotton production in the US, accounting for yield losses and control costs on the order of several billion US dollars since the introduction of the pest in 1892. Boll weevil eradication programs have eliminated reproducing populations in nearly 94%, and progressed toward eradication within the remaining 6%, of cotton production areas. However, the ability of weevils to disperse and reinfest eradicated zones threatens to undermine the previous investment toward eradication of this pest. In this study, the HYSPLIT atmospheric dispersion model was used to simulate daily wind-aided dispersal of weevils from the Lower Rio Grande Valley (LRGV) of southern Texas and northeastern Mexico. Simulated weevil dispersal was compared with weekly capture of weevils in pheromone traps along highway trap lines between the LRGV and the South Texas / Winter Garden zone of the Texas Boll Weevil Eradication Program. A logistic regression model was fit to the probability of capturing at least one weevil in individual pheromone traps relative to specific values of simulated weevil dispersal, which resulted in 60.4% concordance, 21.3% discordance, and 18.3% ties in estimating captures and non-captures. During the first full year of active eradication with widespread insecticide applications in 2006, the dispersal model accurately estimated 71.8%, erroneously estimated 12.5%, and tied 15.7% of capture and non-capture events. Model simulations provide a temporal risk assessment over large areas of weevil reinfestation resulting from dispersal by prevailing winds. Eradication program managers can use the model risk assessment information to effectively schedule and target enhanced trapping, crop scouting, and insecticide applications.
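
    The fit summarized above (probability of at least one capture as a function of simulated dispersal, reported as percent concordant, discordant, and tied pairs) can be reproduced in outline as below. The data are synthetic and the coefficients are not those of the paper; only the structure of the analysis is illustrated.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical data: simulated weevil dispersal density at each trap (x) and
    # whether at least one weevil was captured there that week (y)
    rng = np.random.default_rng(42)
    x = rng.gamma(shape=1.5, scale=2.0, size=400).reshape(-1, 1)
    y = (rng.random(400) < 1.0 / (1.0 + np.exp(-(0.6 * x[:, 0] - 2.0)))).astype(int)

    model = LogisticRegression().fit(x, y)
    p = model.predict_proba(x)[:, 1]

    # Percent concordant / discordant / tied pairs, as in the fit summary above
    pos, neg = p[y == 1], p[y == 0]
    pairs = pos[:, None] - neg[None, :]
    concordant = np.mean(pairs > 0) * 100
    discordant = np.mean(pairs < 0) * 100
    tied = 100 - concordant - discordant
    print(round(concordant, 1), round(discordant, 1), round(tied, 1))
    ```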

  20. A combined nonlinear and hysteresis model of shock absorber for quarter car simulation on the basis of experimental data

    Directory of Open Access Journals (Sweden)

    Vijay Barethiye

    2017-12-01

    Full Text Available Modeling the dynamic characteristics of an automotive shock absorber is a challenging task due to its complex behavior. In the present paper, a nonparametric and hybrid approach is proposed to represent the nonlinear and hysteretic characteristics of the shock absorber. An experiment is carried out on a car damper using an INSTRON machine to obtain the force-velocity characteristics of the shock absorber. The experimental data are used to devise two different models, namely a piecewise linear model and a hysteresis model, to capture the damping properties of the absorber for subsequent use in simulations. The complexity introduced by certain physical phenomena such as oil compressibility and gas entrapment gives rise to hysteresis behavior, and the present paper models such behavior with the help of neural networks. Finally, a combined (hybrid) shock absorber model, including the characteristics of both the piecewise linear and the hysteresis behavior, is developed in Simulink and integrated into a quarter car simulation to verify its feasibility. The results generated by the combined (hybrid) model are compared with the linear as well as the piecewise linear model, and the comparison shows that the proposed model is a substantially better option for studying the vehicle characteristics more accurately and precisely.
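
    A stripped-down version of the hybrid idea follows: a piecewise linear force-velocity backbone plus a small neural network trained on the residual to represent the hysteresis loop. The excitation, damping coefficients, and synthetic "measured" data are placeholders for the INSTRON measurements, and the network here is a generic regressor rather than the paper's architecture.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def piecewise_linear_force(v, c_comp=1200.0, c_reb=3500.0):
        # Different damping slopes in compression (v < 0) and rebound (v > 0)
        return np.where(v < 0.0, c_comp * v, c_reb * v)

    # Synthetic stand-in for measured force-velocity loops (illustrative only)
    t = np.linspace(0.0, 4.0, 2000)
    v = 0.3 * np.cos(2.0 * np.pi * 1.5 * t)                      # sinusoidal excitation
    measured = piecewise_linear_force(v) + 150.0 * np.sin(2.0 * np.pi * 1.5 * t)

    # The network learns only the residual (hysteretic) part from velocity and its history
    v_prev = np.roll(v, 1); v_prev[0] = v[0]
    X = np.column_stack([v, v_prev])
    residual = measured - piecewise_linear_force(v)
    net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0).fit(X, residual)

    def hybrid_force(v_now, v_before):
        """Hybrid model: piecewise linear backbone plus learned hysteresis correction."""
        return piecewise_linear_force(v_now) + net.predict(np.column_stack([v_now, v_before]))
    ```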

  1. Capture cross sections on unstable nuclei

    Science.gov (United States)

    Tonchev, A. P.; Escher, J. E.; Scielzo, N.; Bedrossian, P.; Ilieva, R. S.; Humby, P.; Cooper, N.; Goddard, P. M.; Werner, V.; Tornow, W.; Rusev, G.; Kelley, J. H.; Pietralla, N.; Scheck, M.; Savran, D.; Löher, B.; Yates, S. W.; Crider, B. P.; Peters, E. E.; Tsoneva, N.; Goriely, S.

    2017-09-01

    Accurate neutron-capture cross sections on unstable nuclei near the line of beta stability are crucial for understanding the s-process nucleosynthesis. However, neutron-capture cross sections for short-lived radionuclides are difficult to measure due to the fact that the measurements require both highly radioactive samples and intense neutron sources. Essential ingredients for describing the γ decays following neutron capture are the γ-ray strength function and level densities. We will compare different indirect approaches for obtaining the most relevant observables that can constrain Hauser-Feshbach statistical-model calculations of capture cross sections. Specifically, we will consider photon scattering using monoenergetic and 100% linearly polarized photon beams. Challenges that exist on the path to obtaining neutron-capture cross sections for reactions on isotopes near and far from stability will be discussed.

  2. Capture cross sections on unstable nuclei

    Directory of Open Access Journals (Sweden)

    Tonchev A.P.

    2017-01-01

    Full Text Available Accurate neutron-capture cross sections on unstable nuclei near the line of beta stability are crucial for understanding the s-process nucleosynthesis. However, neutron-capture cross sections for short-lived radionuclides are difficult to measure due to the fact that the measurements require both highly radioactive samples and intense neutron sources. Essential ingredients for describing the γ decays following neutron capture are the γ-ray strength function and level densities. We will compare different indirect approaches for obtaining the most relevant observables that can constrain Hauser-Feshbach statistical-model calculations of capture cross sections. Specifically, we will consider photon scattering using monoenergetic and 100% linearly polarized photon beams. Challenges that exist on the path to obtaining neutron-capture cross sections for reactions on isotopes near and far from stability will be discussed.

  3. Models and simulations

    International Nuclear Information System (INIS)

    Lee, M.J.; Sheppard, J.C.; Sullenberger, M.; Woodley, M.D.

    1983-09-01

    On-line mathematical models have been used successfully for computer-controlled operation of SPEAR and PEP. The same model control concept is being implemented for the operation of the LINAC and for the Damping Ring, which will be part of the Stanford Linear Collider (SLC). The purpose of this paper is to describe the general relationships between models, simulations and the control system for any machine at SLAC. The work we have done on the development of the empirical model for the Damping Ring will be presented as an example.

  4. Incremental learning for automated knowledge capture

    Energy Technology Data Exchange (ETDEWEB)

    Benz, Zachary O. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Basilico, Justin Derrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Davis, Warren Leon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dixon, Kevin R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jones, Brian S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Nathaniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wendt, Jeremy Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-12-01

    People responding to high-consequence national-security situations need tools to help them make the right decision quickly. The dynamic, time-critical, and ever-changing nature of these situations, especially those involving an adversary, require models of decision support that can dynamically react as a situation unfolds and changes. Automated knowledge capture is a key part of creating individualized models of decision making in many situations because it has been demonstrated as a very robust way to populate computational models of cognition. However, existing automated knowledge capture techniques only populate a knowledge model with data prior to its use, after which the knowledge model is static and unchanging. In contrast, humans, including our national-security adversaries, continually learn, adapt, and create new knowledge as they make decisions and witness their effect. This artificial dichotomy between creation and use exists because the majority of automated knowledge capture techniques are based on traditional batch machine-learning and statistical algorithms. These algorithms are primarily designed to optimize the accuracy of their predictions and only secondarily, if at all, concerned with issues such as speed, memory use, or ability to be incrementally updated. Thus, when new data arrives, batch algorithms used for automated knowledge capture currently require significant recomputation, frequently from scratch, which makes them ill suited for use in dynamic, timecritical, high-consequence decision making environments. In this work we seek to explore and expand upon the capabilities of dynamic, incremental models that can adapt to an ever-changing feature space.
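
    As an illustration of the batch-versus-incremental distinction drawn above, the sketch below contrasts a batch least-squares refit with a recursive least-squares update that absorbs one observation at a time. It is a generic example with made-up data, not the Sandia knowledge-capture pipeline described in the report.

```python
import numpy as np

rng = np.random.default_rng(0)

def batch_fit(X, y):
    """Batch least squares: recomputed from scratch whenever new data arrive."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

class IncrementalLinearModel:
    """Recursive least squares: the knowledge model keeps adapting
    one observation at a time, with no retraining from scratch."""
    def __init__(self, n_features, lam=1e3):
        self.w = np.zeros(n_features)
        self.P = lam * np.eye(n_features)   # inverse covariance estimate

    def update(self, x, y):
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)             # gain
        self.w += k * (y - x @ self.w)
        self.P -= np.outer(k, Px)

# Usage on a hypothetical stream of (features, decision score) pairs.
true_w = np.array([1.5, -2.0, 0.5])
model = IncrementalLinearModel(3)
X_seen, y_seen = [], []
for t in range(200):
    x = rng.normal(size=3)
    y = x @ true_w + 0.1 * rng.normal()
    model.update(x, y)                      # O(d^2) work per observation
    X_seen.append(x)
    y_seen.append(y)

w_batch = batch_fit(np.array(X_seen), np.array(y_seen))
print("incremental:", np.round(model.w, 2), " batch:", np.round(w_batch, 2))
```

    Both estimators converge to the same coefficients, but the incremental update never revisits old data, which is the property the report argues is needed in time-critical settings.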

  5. Does imminent threat capture and hold attention?

    Science.gov (United States)

    Koster, Ernst H W; Crombez, Geert; Van Damme, Stefaan; Verschuere, Bruno; De Houwer, Jan

    2004-09-01

    According to models of attention and emotion, threat captures and holds attention. In behavioral tasks, robust evidence has been found for attentional holding but not for attentional capture by threat. An important explanation for the absence of attentional capture effects is that the visual stimuli used posed no genuine threat. The present study investigated whether visual cues that signal an aversive white noise can elicit attentional capture and holding effects. Cues presented in an attentional task were simultaneously provided with a threat value through an aversive conditioning procedure. Response latencies showed that threatening cues captured and held attention. These results support recent views on attention to threat, proposing that imminent threat captures attention in everyone. (c) 2004 APA, all rights reserved

  6. Two-step rapid sulfur capture. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The primary goal of this program was to test the technical and economic feasibility of a novel dry sorbent injection process called the Two-Step Rapid Sulfur Capture process for several advanced coal utilization systems. The Two-Step Rapid Sulfur Capture process consists of limestone activation in a high temperature auxiliary burner for short times followed by sorbent quenching in a lower temperature sulfur containing coal combustion gas. The Two-Step Rapid Sulfur Capture process is based on the Non-Equilibrium Sulfur Capture process developed by the Energy Technology Office of Textron Defense Systems (ETO/TDS). Based on the Non-Equilibrium Sulfur Capture studies, the range of conditions for optimum sorbent activation was thought to be: activation temperature > 2,200 K for activation times in the range of 10-30 ms. Therefore, the aim of the Two-Step process is to create a very active sorbent (under conditions similar to the bomb reactor) and complete the sulfur reaction under thermodynamically favorable conditions. A flow facility was designed and assembled to simulate the temperature, time, stoichiometry, and sulfur gas concentration prevalent in the advanced coal utilization systems such as gasifiers, fluidized bed combustors, mixed-metal oxide desulfurization systems, diesel engines, and gas turbines.

  7. Numerical simulation of swirling flow in complex hydroturbine draft tube using unsteady statistical turbulence models

    Energy Technology Data Exchange (ETDEWEB)

    Paik, Joongcheol [University of Minnesota; Sotiropoulos, Fotis [University of Minnesota; Sale, Michael J [ORNL

    2005-06-01

    A numerical method is developed for carrying out unsteady Reynolds-averaged Navier-Stokes (URANS) simulations and detached-eddy simulations (DESs) in complex 3D geometries. The method is applied to simulate incompressible swirling flow in a typical hydroturbine draft tube, which consists of a strongly curved 90 degree elbow and two piers. The governing equations are solved with a second-order-accurate, finite-volume, dual-time-stepping artificial compressibility approach for a Reynolds number of 1.1 million on a mesh with 1.8 million nodes. The geometrical complexities of the draft tube are handled using domain decomposition with overset (chimera) grids. Numerical simulations show that unsteady statistical turbulence models can capture very complex 3D flow phenomena dominated by geometry-induced, large-scale instabilities and unsteady coherent structures such as the onset of vortex breakdown and the formation of the unsteady rope vortex downstream of the turbine runner. Both URANS and DES appear to yield the general shape and magnitude of mean velocity profiles in reasonable agreement with measurements. Significant discrepancies among the DES and URANS predictions of the turbulence statistics are also observed in the straight downstream diffuser.

  8. Simulation model of a PWR power plant

    International Nuclear Information System (INIS)

    Larsen, N.

    1987-03-01

    A simulation model of a hypothetical PWR power plant is described. A large number of disturbances and failures in plant function can be simulated. The model is written as seven modules to the modular simulation system for continuous processes DYSIM and serves also as a user example of this system. The model runs in Fortran 77 on the IBM-PC-AT. (author)

  9. Simulating Pacific Northwest Forest Response to Climate Change: How We Made Model Results Useful for Vulnerability Assessments

    Science.gov (United States)

    Kim, J. B.; Kerns, B. K.; Halofsky, J.

    2014-12-01

    GCM-based climate projections and downscaled climate data proliferate, and there are many climate-aware vegetation models in use by researchers. Yet application of fine-scale DGVM-based simulation output in national forest vulnerability assessments is not common, because there are technical, administrative and social barriers to their use by managers and policy makers. As part of a science-management climate change adaptation partnership, we performed simulations of vegetation response to climate change for four national forests in the Blue Mountains of Oregon using the MC2 dynamic global vegetation model (DGVM) for use in vulnerability assessments. Our simulation results under business-as-usual scenarios suggest starkly different future forest conditions for three out of the four national forests in the study area, making their adoption by forest managers a potential challenge. However, using DGVM output to structure discussion of potential vegetation changes provides a suitable framework to discuss the dynamic nature of vegetation change compared to using more commonly available model output (e.g. species distribution models). From the outset, we planned and coordinated our work with national forest managers to maximize the utility and the consideration of the simulation results in planning. Key lessons from this collaboration were: (1) structured and strategic selection of a small number of climate change scenarios that capture the range of variability in future conditions simplified results; (2) collecting and integrating data from managers for use in simulations increased support and interest in applying output; (3) a structured, regionally focused, and hierarchical calibration of the DGVM produced well-validated results; (4) simple approaches to quantifying uncertainty in simulation results facilitated communication; and (5) interpretation of model results in a holistic context in relation to multiple lines of evidence produced balanced guidance. This latest

  10. Probing Cellular Dynamics with Mesoscopic Simulations

    DEFF Research Database (Denmark)

    Shillcock, Julian C.

    2010-01-01

    Cellular processes span a huge range of length and time scales from the molecular to the near-macroscopic. Understanding how effects on one scale influence, and are themselves influenced by, those on lower and higher scales is a critical issue for the construction of models in Systems Biology....... Advances in computing hardware and software now allow explicit simulation of some aspects of cellular dynamics close to the molecular scale. Vesicle fusion is one example of such a process. Experiments, however, typically probe cellular behavior from the molecular scale up to microns. Standard particle...... soon be coupled to Mass Action models allowing the parameters in such models to be continuously tuned according to the finer resolution simulation. This will help realize the goal of a computational cellular simulation that is able to capture the dynamics of membrane-associated processes...

  11. Structured building model reduction toward parallel simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University

    2013-08-26

    Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.

  12. Modeling and Simulation for Safeguards

    International Nuclear Information System (INIS)

    Swinhoe, Martyn T.

    2012-01-01

    The purpose of this talk is to give an overview of the role of modeling and simulation in Safeguards R and D and introduce you to (some of) the tools used. Some definitions are: (1) Modeling - the representation, often mathematical, of a process, concept, or operation of a system, often implemented by a computer program; (2) Simulation - the representation of the behavior or characteristics of one system through the use of another system, especially a computer program designed for the purpose; and (3) Safeguards - the timely detection of diversion of significant quantities of nuclear material. The roles of modeling and simulation are: (1) calculation of amounts of material (plant modeling); (2) calculation of signatures of nuclear material, etc. (source terms); and (3) detector performance (radiation transport and detection). Plant modeling software (e.g. FACSIM) gives the flows and amounts of material stored at all parts of the process. In safeguards this allows us to calculate the expected uncertainty of the mass and evaluate the expected MUF. We can determine the measurement accuracy required to achieve a certain performance.
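
    The MUF discussion above can be made concrete with a small propagation-of-uncertainty sketch. The inventory values, relative measurement uncertainties, and the 3.3-sigma / 8 kg detection criterion below are illustrative placeholders, not output of FACSIM or any plant model.

```python
import math

# Hypothetical material-balance terms (kg Pu) and relative measurement
# uncertainties; real values would come from plant modeling.
terms = {
    "beginning_inventory": (120.0, 0.005),
    "receipts":            (300.0, 0.003),
    "shipments":           (295.0, 0.003),
    "ending_inventory":    (121.0, 0.005),
}

# MUF = beginning inventory + receipts - shipments - ending inventory.
muf = (terms["beginning_inventory"][0] + terms["receipts"][0]
       - terms["shipments"][0] - terms["ending_inventory"][0])

# Independent, uncorrelated measurements: variances add regardless of sign.
sigma_muf = math.sqrt(sum((mass * rel) ** 2 for mass, rel in terms.values()))

print(f"MUF = {muf:.2f} kg, sigma_MUF = {sigma_muf:.2f} kg")
# A commonly cited detection criterion is that 3.3 * sigma_MUF should stay
# below a significant quantity (8 kg Pu), which sets the required accuracy.
print("3.3*sigma =", round(3.3 * sigma_muf, 2), "kg")
```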

  13. Asymmetric capture of Dirac dark matter by the Sun

    International Nuclear Information System (INIS)

    Blennow, Mattias; Clementz, Stefan

    2015-01-01

    Current problems with the solar model may be alleviated if a significant amount of dark matter from the galactic halo is captured in the Sun. We discuss the capture process in the case where the dark matter is a Dirac fermion and the background halo consists of equal amounts of dark matter and anti-dark matter. By considering the case where dark matter and anti-dark matter have different cross sections on solar nuclei as well as the case where the capture process is considered to be a Poisson process, we find that a significant asymmetry between the captured dark particles and anti-particles is possible even for an annihilation cross section in the range expected for thermal relic dark matter. Since the captured number of particles is competitive with asymmetric dark matter models in a large range of parameter space, one may expect solar physics to be altered by the capture of Dirac dark matter. It is thus possible that solutions to the solar composition problem may be searched for in these types of models.
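
    A toy version of the Poisson-capture argument in this abstract can be simulated directly: even with identical capture rates, pair annihilation preserves the particle/anti-particle difference, so Poisson fluctuations let an asymmetry random-walk upward. The rates below are arbitrary illustration values, not the solar capture rates computed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical rates per step: equal capture rates for particles and
# anti-particles, plus pair annihilation proportional to N * Nbar.
C = 50.0          # expected captures per step for each species
A = 1e-4          # annihilation rate constant (pairs per step per N*Nbar)
steps = 20000

N, Nbar = 0, 0
for _ in range(steps):
    N += rng.poisson(C)
    Nbar += rng.poisson(C)
    pairs = min(rng.poisson(A * N * Nbar), N, Nbar)   # annihilation removes pairs only
    N -= pairs
    Nbar -= pairs

print(f"N = {N}, Nbar = {Nbar}, asymmetry = {abs(N - Nbar)}")
```

    Because annihilation never changes N - Nbar, the residual asymmetry grows roughly with the square root of the total number of captures, which is the qualitative effect the abstract points to.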

  14. Asymmetric capture of Dirac dark matter by the Sun

    Energy Technology Data Exchange (ETDEWEB)

    Blennow, Mattias; Clementz, Stefan [Department of Theoretical Physics, School of Engineering Sciences, KTH Royal Institute of Technology, Albanova University Center,106 91, Stockholm (Sweden)

    2015-08-18

    Current problems with the solar model may be alleviated if a significant amount of dark matter from the galactic halo is captured in the Sun. We discuss the capture process in the case where the dark matter is a Dirac fermion and the background halo consists of equal amounts of dark matter and anti-dark matter. By considering the case where dark matter and anti-dark matter have different cross sections on solar nuclei as well as the case where the capture process is considered to be a Poisson process, we find that a significant asymmetry between the captured dark particles and anti-particles is possible even for an annihilation cross section in the range expected for thermal relic dark matter. Since the captured number of particles is competitive with asymmetric dark matter models in a large range of parameter space, one may expect solar physics to be altered by the capture of Dirac dark matter. It is thus possible that solutions to the solar composition problem may be searched for in these types of models.

  15. Asymmetric capture of Dirac dark matter by the Sun

    Energy Technology Data Exchange (ETDEWEB)

    Blennow, Mattias; Clementz, Stefan, E-mail: emb@kth.se, E-mail: scl@kth.se [Department of Theoretical Physics, School of Engineering Sciences, KTH Royal Institute of Technology, Albanova University Center, 106 91, Stockholm (Sweden)

    2015-08-01

    Current problems with the solar model may be alleviated if a significant amount of dark matter from the galactic halo is captured in the Sun. We discuss the capture process in the case where the dark matter is a Dirac fermion and the background halo consists of equal amounts of dark matter and anti-dark matter. By considering the case where dark matter and anti-dark matter have different cross sections on solar nuclei as well as the case where the capture process is considered to be a Poisson process, we find that a significant asymmetry between the captured dark particles and anti-particles is possible even for an annihilation cross section in the range expected for thermal relic dark matter. Since the captured number of particles is competitive with asymmetric dark matter models in a large range of parameter space, one may expect solar physics to be altered by the capture of Dirac dark matter. It is thus possible that solutions to the solar composition problem may be searched for in these types of models.

  16. Hybrid simulation models of production networks

    CERN Document Server

    Kouikoglou, Vassilis S

    2001-01-01

    This book is concerned with a most important area of industrial production, that of analysis and optimization of production lines and networks using discrete-event models and simulation. The book introduces a novel approach that combines analytic models and discrete-event simulation. Unlike conventional piece-by-piece simulation, this method observes a reduced number of events between which the evolution of the system is tracked analytically. Using this hybrid approach, several models are developed for the analysis of production lines and networks. The hybrid approach combines speed and accuracy for exceptional analysis of most practical situations. A number of optimization problems, involving buffer design, workforce planning, and production control, are solved through the use of hybrid models.

  17. Software-Engineering Process Simulation (SEPS) model

    Science.gov (United States)

    Lin, C. Y.; Abdel-Hamid, T.; Sherif, J. S.

    1992-01-01

    The Software Engineering Process Simulation (SEPS) model is described, which was developed at JPL. SEPS is a dynamic simulation model of the software project development process. It uses the feedback principles of system dynamics to simulate the dynamic interactions among various software life cycle development activities and management decision-making processes. The model is designed to be a planning tool to examine tradeoffs of cost, schedule, and functionality, and to test the implications of different managerial policies on a project's outcome. Furthermore, SEPS will enable software managers to gain a better understanding of the dynamics of software project development and perform postmortem assessments.

  18. Hydrologic and atrazine simulation of the Cedar Creek Watershed using the SWAT model.

    Science.gov (United States)

    Larose, M; Heathman, G C; Norton, L D; Engel, B

    2007-01-01

    One of the major factors contributing to surface water contamination in agricultural areas is the use of pesticides. The Soil and Water Assessment Tool (SWAT) is a hydrologic model capable of simulating the fate and transport of pesticides in an agricultural watershed. The SWAT model was used in this study to estimate stream flow and atrazine (2-chloro-4-(ethylamino)-6-(isopropylamino)-s-triazine) losses to surface water in the Cedar Creek Watershed (CCW) within the St. Joseph River Basin in northeastern Indiana. Model calibration and validation periods consisted of five and two year periods, respectively. The National Agricultural Statistics Survey (NASS) 2001 land cover classification and the Soil Survey Geographic (SSURGO) database were used as model input data layers. Data from the St. Joseph River Watershed Initiative and the Soil and Water Conservation Districts of Allen, Dekalb, and Noble counties were used to represent agricultural practices in the watershed which included the type of crops grown, tillage practices, fertilizer, and pesticide application rates. Model results were evaluated based on efficiency coefficient values, standard statistical measures, and visual inspection of the measured and simulated hydrographs. The Nash and Sutcliffe model efficiency coefficients (E(NS)) for monthly and daily stream flow calibration and validation ranged from 0.51 to 0.66. The E(NS) values for atrazine calibration and validation ranged from 0.43 to 0.59. All E(NS) values were within the range of acceptable model performance standards. The results of this study indicate that the model is an effective tool in capturing the dynamics of stream flow and atrazine concentrations on a large-scale agricultural watershed in the midwestern USA.
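
    For readers unfamiliar with the E(NS) statistic cited above, a minimal implementation is sketched below; the monthly flow values are made up for illustration and are not the Cedar Creek data.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency E_NS: 1 is a perfect fit, 0 means the model
    is no better than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - np.mean(observed)) ** 2)

# Hypothetical monthly stream flow (m^3/s); in a real calibration the
# simulated series would come from the SWAT reach output.
obs = [3.2, 5.1, 8.4, 12.0, 9.3, 6.1, 4.0, 3.5, 2.9, 3.8, 4.6, 5.5]
sim = [2.9, 5.6, 7.8, 11.1, 10.0, 6.7, 4.4, 3.1, 3.2, 3.5, 5.0, 5.9]
print(f"E_NS = {nash_sutcliffe(obs, sim):.2f}")
```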

  19. EVOLUTION OF PROGENITORS FOR ELECTRON CAPTURE SUPERNOVAE

    International Nuclear Information System (INIS)

    Takahashi, Koh; Umeda, Hideyuki; Yoshida, Takashi

    2013-01-01

    We provide progenitor models for electron capture supernovae (ECSNe) with detailed evolutionary calculation. We include minor electron capture nuclei using a large nuclear reaction network with updated reaction rates. For electron capture, the Coulomb correction of rates is treated and the contribution from neutron-rich isotopes is taken into account in each nuclear statistical equilibrium (NSE) composition. We calculate the evolution of the most massive super asymptotic giant branch stars and show that these stars undergo off-center carbon burning and form ONe cores at the center. These cores become heavier up to the critical mass of 1.367 M ☉ and keep contracting even after the initiation of O+Ne deflagration. Inclusion of minor electron capture nuclei causes convective URCA cooling during the contraction phase, but the effect on the progenitor evolution is small. On the other hand, electron capture by neutron-rich isotopes in the NSE region has a more significant effect. We discuss the uniqueness of the critical core mass for ECSNe and the effect of wind mass loss on the plausibility of our models for ECSN progenitors.

  20. Electrofishing capture probability of smallmouth bass in streams

    Science.gov (United States)

    Dauwalter, D.C.; Fisher, W.L.

    2007-01-01

    Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. Copyright by the American Fisheries Society 2007.
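
    The abundance adjustment described above (dividing the count by the predicted cumulative capture probability) can be sketched in a few lines. The logistic coefficients below are hypothetical, and the constant per-pass probability ignores the pass-dependence for large fish that the study reports.

```python
import math

# Hypothetical logistic-regression coefficients for per-pass capture
# probability; the study fitted such coefficients to mark-recapture data.
B0, B_LENGTH, B_DEPTH = -1.2, 0.006, -0.8   # intercept, fish length (mm), thalweg depth (m)

def per_pass_capture_prob(length_mm, depth_m):
    eta = B0 + B_LENGTH * length_mm + B_DEPTH * depth_m
    return 1.0 / (1.0 + math.exp(-eta))

def cumulative_capture_prob(p, n_passes):
    """Probability a fish is caught in at least one of n electrofishing passes."""
    return 1.0 - (1.0 - p) ** n_passes

# Usage: adjust the raw count by the predicted cumulative capture probability.
n_caught, length_mm, depth_m, passes = 42, 220, 0.6, 3
p = per_pass_capture_prob(length_mm, depth_m)
p_cum = cumulative_capture_prob(p, passes)
print(f"per-pass p = {p:.2f}, cumulative p = {p_cum:.2f}, "
      f"estimated abundance = {n_caught / p_cum:.0f}")
```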

  1. Modeling and simulation of satellite subsystems for end-to-end spacecraft modeling

    Science.gov (United States)

    Schum, William K.; Doolittle, Christina M.; Boyarko, George A.

    2006-05-01

    During the past ten years, the Air Force Research Laboratory (AFRL) has been simultaneously developing high-fidelity spacecraft payload models as well as a robust distributed simulation environment for modeling spacecraft subsystems. Much of this research has occurred in the Distributed Architecture Simulation Laboratory (DASL). AFRL developers working in the DASL have effectively combined satellite power, attitude pointing, and communication link analysis subsystem models with robust satellite sensor models to create a first-order end-to-end satellite simulation capability. The merging of these two simulation areas has advanced the field of spacecraft simulation, design, and analysis, and enabled more in-depth mission and satellite utility analyses. A core capability of the DASL is the support of a variety of modeling and analysis efforts, ranging from physics and engineering-level modeling to mission and campaign-level analysis. The flexibility and agility of this simulation architecture will be used to support space mission analysis, military utility analysis, and various integrated exercises with other military and space organizations via direct integration, or through DOD standards such as Distributed Interactive Simulation. This paper discusses the results and lessons learned in modeling satellite communication link analysis, power, and attitude control subsystems for an end-to-end satellite simulation. It also discusses how these spacecraft subsystem simulations feed into and support military utility and space mission analyses.

  2. Numerical simulation of Higgs models

    International Nuclear Information System (INIS)

    Jaster, A.

    1995-10-01

    The SU(2) Higgs and the Schwinger model on the lattice were analysed. Numerical simulations of the SU(2) Higgs model were performed to study the finite temperature electroweak phase transition. With the help of the multicanonical method the distribution of an order parameter at the phase transition point was measured. This was used to obtain the order of the phase transition and the value of the interface tension with the histogram method. Numerical simulations were also performed at zero temperature to perform renormalization. The measured values for the Wilson loops were used to determine the static potential and from this the renormalized gauge coupling. The Schwinger model was simulated at different gauge couplings to analyse the properties of the Kaplan-Shamir fermions. The prediction that the mass parameter gets only multiplicative renormalization was tested and verified. (orig.)

  3. Model improvements to simulate charging in SEM

    Science.gov (United States)

    Arat, K. T.; Klimpel, T.; Hagen, C. W.

    2018-03-01

    Charging of insulators is a complex phenomenon to simulate since the accuracy of the simulations is very sensitive to the interaction of electrons with matter and electric fields. In this study, we report model improvements for a previously developed Monte-Carlo simulator to more accurately simulate samples that charge. The improvements include both modelling of low energy electron scattering and charging of insulators. The new first-principle scattering models provide a more realistic charge distribution cloud in the material, and a better match between non-charging simulations and experimental results. Improvements on charging models mainly focus on redistribution of the charge carriers in the material with an induced conductivity (EBIC) and a breakdown model, leading to a smoother distribution of the charges. Combined with a more accurate tracing of low energy electrons in the electric field, we managed to reproduce the dynamically changing charging contrast due to an induced positive surface potential.

  4. Combining state-and-transition simulations and species distribution models to anticipate the effects of climate change

    Science.gov (United States)

    Miller, Brian W.; Frid, Leonardo; Chang, Tony; Piekielek, N. B.; Hansen, Andrew J.; Morisette, Jeffrey T.

    2015-01-01

    State-and-transition simulation models (STSMs) are known for their ability to explore the combined effects of multiple disturbances, ecological dynamics, and management actions on vegetation. However, integrating the additional impacts of climate change into STSMs remains a challenge. We address this challenge by combining an STSM with species distribution modeling (SDM). SDMs estimate the probability of occurrence of a given species based on observed presence and absence locations as well as environmental and climatic covariates. Thus, in order to account for changes in habitat suitability due to climate change, we used SDM to generate continuous surfaces of species occurrence probabilities. These data were imported into ST-Sim, an STSM platform, where they dictated the probability of each cell transitioning between alternate potential vegetation types at each time step. The STSM was parameterized to capture additional processes of vegetation growth and disturbance that are relevant to a keystone species in the Greater Yellowstone Ecosystem—whitebark pine (Pinus albicaulis). We compared historical model runs against historical observations of whitebark pine and a key disturbance agent (mountain pine beetle, Dendroctonus ponderosae), and then projected the simulation into the future. Using this combination of correlative and stochastic simulation models, we were able to reproduce historical observations and identify key data gaps. Results indicated that SDMs and STSMs are complementary tools, and combining them is an effective way to account for the anticipated impacts of climate change, biotic interactions, and disturbances, while also allowing for the exploration of management options.
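
    The coupling described above, in which an SDM-derived occurrence probability scales the transition probabilities of a state-and-transition simulation, can be illustrated with a toy gridded model. The landscape, suitability trend, and 0.05 per-step rate below are invented for illustration and do not reproduce the MC2/ST-Sim parameterization of the study.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 20x20 landscape: state 0 = other conifer, state 1 = whitebark pine.
state = rng.integers(0, 2, size=(20, 20))

# Hypothetical SDM output: probability of whitebark pine occurrence per cell,
# drifting downward over time as a climate projection unfolds.
def sdm_suitability(year):
    base = np.linspace(0.8, 0.3, 20)[:, None] * np.ones((20, 20))
    return np.clip(base - 0.01 * (year - 2020), 0.0, 1.0)

for year in range(2020, 2051):
    suit = sdm_suitability(year)
    u = rng.random(state.shape)
    # Cells convert toward (or away from) whitebark pine with probabilities
    # scaled by the SDM suitability; 0.05 is a hypothetical per-step rate.
    colonize = (state == 0) & (u < 0.05 * suit)
    dieback = (state == 1) & (u < 0.05 * (1.0 - suit))
    state[colonize] = 1
    state[dieback] = 0

print("whitebark pine cells in 2050:", int(state.sum()), "of", state.size)
```

    In ST-Sim the same idea is expressed through transition multipliers supplied as rasters, so the stochastic landscape model inherits the climate signal carried by the SDM.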

  5. On valuing patches: estimating contributions to metapopulation growth with reverse-time capture-recapture modeling

    Science.gov (United States)

    Jamie S. Sanderlin; Peter M. Waser; James E. Hines; James D. Nichols

    2012-01-01

    Metapopulation ecology has historically been rich in theory, yet analytical approaches for inferring demographic relationships among local populations have been few. We show how reverse-time multi-state capture-recapture models can be used to estimate the importance of local recruitment and interpopulation dispersal to metapopulation growth. We use 'contribution...

  6. Phantom experiment of depth-dose distributions for gadolinium neutron capture therapy

    International Nuclear Information System (INIS)

    Matsumoto, T.; Kato, K.; Sakuma, Y.; Tsuruno, A.; Matsubayashi, M.

    1993-01-01

    Depth-dose distributions in a tumor-simulating phantom were measured for thermal neutron flux, capture gamma-ray and internal conversion electron dose rates for gadolinium neutron capture therapy. The results show that (i) a significant dose enhancement can be achieved in the tumor by capture gamma-rays and internal conversion electrons, but the dose is mainly due to capture gamma-rays from the Gd(n, γ) reactions and is therefore not selective at the cellular level, (ii) the dose distribution was a function of strongly interrelated parameters such as gadolinium concentrations, tumor site and neutron beam size (collimator aperture size), and (iii) Gd-NCT with thermal neutrons appears to have potential for the treatment of superficial tumors. (author)

  7. A physiological production model for cacao : results of model simulations

    NARCIS (Netherlands)

    Zuidema, P.A.; Leffelaar, P.A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.

  8. Modeling salmonella Dublin into the dairy herd simulation model Simherd

    DEFF Research Database (Denmark)

    Kudahl, Anne Braad

    2010-01-01

    Infection with Salmonella Dublin in the dairy herd and the effects of the infection and relevant control measures are currently being modeled into the dairy herd simulation model called Simherd. The aim is to compare, by simulations, the effects of different control strategies against Salmonella Dublin on both within-herd prevalence and economy. The project is a part of a larger national project, "Salmonella 2007-2011", with the main objective of reducing the prevalence of Salmonella Dublin in Danish dairy herds. Results of the simulations will therefore be used for decision support in the national surveillance and eradication program against Salmonella Dublin. Basic structures of the model are programmed and will be presented at the workshop. The model is in a phase of face-validation by a group of Salmonella ...

  9. A large animal model for boron neutron capture therapy

    International Nuclear Information System (INIS)

    Gavin, P.R.; Kraft, S.L.; DeHaan, C.E.; Moore, M.P.; Griebenow, M.L.

    1992-01-01

    An epithermal neutron beam is needed to treat relatively deep-seated tumors. The scattering characteristics of neutrons in this energy range dictate that in vivo experiments be conducted in a large animal to prevent unacceptable total body irradiation. The canine species has proven an excellent model to evaluate the various problems of boron neutron capture utilizing an epithermal neutron beam. This paper discusses three major components of the authors' study: (1) the pharmacokinetics of borocaptate sodium (Na2B12H11SH or BSH) in dogs with spontaneously occurring brain tumors, (2) the radiation tolerance of normal tissues in the dog using an epithermal beam alone and in combination with borocaptate sodium, and (3) initial treatment of dogs with spontaneously occurring brain tumors utilizing borocaptate sodium and an epithermal neutron beam.

  10. Geological Sequestration Training and Research Program in Capture and Transport: Development of the Most Economical Separation Method for CO2 Capture

    Energy Technology Data Exchange (ETDEWEB)

    Vahdat, Nader

    2013-09-30

    The project provided hands-on training and networking opportunities to undergraduate students in the area of carbon dioxide (CO2) capture and transport, through a fundamental research study focused on advanced separation methods that can be applied to the capture of CO2 resulting from the combustion of fossil fuels for power generation. The project team's approach to achieving its objectives was to leverage existing Carbon Capture and Storage (CCS) course materials and teaching methods to create and implement an annual CCS short course for the Tuskegee University community; conduct a survey of CO2 separation and capture methods; utilize data to verify and develop computer models for CO2 capture; and build CCS networks and hands-on training experiences. The objectives accomplished as a result of this project were: (1) a comprehensive survey of CO2 capture methods was conducted and mathematical models were developed to compare the potential economics of the different methods based on the total cost per year per unit of CO2 avoidance; and (2) training was provided to introduce the latest CO2 capture technologies and deployment issues to the university community.
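
    The screening economics mentioned above (total cost per year per unit of CO2 avoided) usually reduce to a cost-of-CO2-avoided calculation. A sketch with entirely hypothetical plant numbers follows; the project's own cost models are not reproduced here.

```python
# Minimal sketch of a cost-of-CO2-avoided screening comparison between a
# reference plant and a capture-equipped plant. All numbers are placeholders.

def annualized_capital(capex, rate, years):
    """Capital recovery factor times overnight capital cost."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return capex * crf

def cost_of_co2_avoided(ref, cap):
    """($/MWh difference) divided by (tCO2/MWh difference)."""
    lcoe_ref = (annualized_capital(ref["capex"], 0.08, 30) + ref["opex"]) / ref["mwh"]
    lcoe_cap = (annualized_capital(cap["capex"], 0.08, 30) + cap["opex"]) / cap["mwh"]
    return (lcoe_cap - lcoe_ref) / (ref["tco2_per_mwh"] - cap["tco2_per_mwh"])

reference = {"capex": 1.5e9, "opex": 9.0e7, "mwh": 4.2e6, "tco2_per_mwh": 0.80}
with_capture = {"capex": 2.4e9, "opex": 1.5e8, "mwh": 3.6e6, "tco2_per_mwh": 0.10}

print(f"cost of CO2 avoided: {cost_of_co2_avoided(reference, with_capture):.0f} $/tCO2")
```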

  11. The critical role of the routing scheme in simulating peak river discharge in global hydrological models

    Science.gov (United States)

    Zhao, F.; Veldkamp, T.; Frieler, K.; Schewe, J.; Ostberg, S.; Willner, S. N.; Schauberger, B.; Gosling, S.; Mueller Schmied, H.; Portmann, F. T.; Leng, G.; Huang, M.; Liu, X.; Tang, Q.; Hanasaki, N.; Biemans, H.; Gerten, D.; Satoh, Y.; Pokhrel, Y. N.; Stacke, T.; Ciais, P.; Chang, J.; Ducharne, A.; Guimberteau, M.; Wada, Y.; Kim, H.; Yamazaki, D.

    2017-12-01

    Global hydrological models (GHMs) have been applied to assess global flood hazards, but their capacity to capture the timing and amplitude of peak river discharge—which is crucial in flood simulations—has traditionally not been the focus of examination. Here we evaluate to what degree the choice of river routing scheme affects simulations of peak discharge and may help to provide better agreement with observations. To this end we use runoff and discharge simulations of nine GHMs forced by observational climate data (1971-2010) within the ISIMIP2a project. The runoff simulations were used as input for the global river routing model CaMa-Flood. The simulated daily discharge was compared to the discharge generated by each GHM using its native river routing scheme. For each GHM both versions of simulated discharge were compared to monthly and daily discharge observations from 1701 GRDC stations as a benchmark. CaMa-Flood routing shows a general reduction of peak river discharge and a delay of about two to three weeks in its occurrence, likely induced by the buffering capacity of floodplain reservoirs. For a majority of river basins, discharge produced by CaMa-Flood resulted in a better agreement with observations. In particular, maximum daily discharge was adjusted, with a multi-model averaged reduction in bias over about 2/3 of the analysed basin area. The increase in agreement was obtained in both managed and near-natural basins. Overall, this study demonstrates the importance of routing scheme choice in peak discharge simulation, where CaMa-Flood routing accounts for floodplain storage and backwater effects that are not represented in most GHMs. Our study provides important hints that an explicit parameterisation of these processes may be essential in future impact studies.
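
    The two evaluation quantities emphasized above, the bias in maximum daily discharge and the shift in peak timing, can be computed with a few lines of code. The hydrographs below are synthetic Gaussian-shaped peaks chosen only to mimic the reported behavior (native routing peaking too high and too early, routed output lower and later); they are not ISIMIP2a or GRDC data.

```python
import numpy as np

def peak_bias_and_lag(observed, simulated):
    """Relative bias in the maximum daily discharge and the lag (days)
    between the observed and simulated peaks."""
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    bias = (simulated.max() - observed.max()) / observed.max()
    lag = int(simulated.argmax() - observed.argmax())
    return bias, lag

# Hypothetical one-year daily hydrographs (m^3/s).
days = np.arange(365)
obs = 800 * np.exp(-0.5 * ((days - 150) / 12.0) ** 2) + 100
native = 1000 * np.exp(-0.5 * ((days - 146) / 10.0) ** 2) + 100
routed = 820 * np.exp(-0.5 * ((days - 165) / 15.0) ** 2) + 100

for name, sim in [("native routing", native), ("CaMa-Flood routing", routed)]:
    bias, lag = peak_bias_and_lag(obs, sim)
    print(f"{name}: peak bias = {bias:+.1%}, peak lag = {lag:+d} days")
```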

  12. Simulation of atmospheric temperature inversions over Greater Cairo using the MM5 meso-scale atmospheric model

    International Nuclear Information System (INIS)

    Kandil, H.A.; Elhadidi, B.M.; Kader, A. A.; Moaty, A.A.; Sherif, A.O.

    2006-01-01

    Air pollution episodes have been recorded in Cairo during the fall season since 1999, as a result of specific meteorological conditions combined with large quantities of pollutants created by several ground-based sources. The main reason for the smog-like episodes (black clouds) is adverse weather conditions with low and variable winds, high humidity and strong temperature inversions in the few hundred meters above the ground. The two important types of temperature inversion affecting air pollution are surface or ground (radiation) inversion and subsidence (elevated) inversion. The surface temperature inversion is associated with a rapid decrease in the ground surface temperature with the simultaneous existence of warm air in the lower troposphere. The inversion develops at dusk and continues until the surface warms again the following day. Pollutants emitted during the night are caught under this inversion lid. Subsidence inversion forms when warm air masses move over colder air masses. The inversion develops with a stagnating high-pressure system (generally associated with fair weather). Under these conditions, the pressure gradient becomes progressively weaker so that winds become light. These light winds greatly reduce the horizontal transport and dispersion of pollutants. At the same time, the subsidence inversion acts as a barrier to the vertical dispersion of the pollutants. In this study, the Penn State/NCAR meso-scale model (MM5) is used to simulate the temperature inversion phenomenon over the Greater Cairo region during the fall season of 2004. Accurate computations of the heat transfer at the surface are needed to capture this phenomenon. This can only be achieved by high-resolution simulations in both horizontal and vertical directions. Hence, for accurate simulation of the temperature inversion over Greater Cairo, four nested domains of resolutions of 27 km, 9 km, 3 km and 1 km, respectively, were used in the horizontal planes. Furthermore, 42

  13. Optical and statistical model calculation of the americium 242m capture cross section

    International Nuclear Information System (INIS)

    Tellier, Henry.

    1981-04-01

    The capture cross sections of Am 242m can be deduced from resonance analysis at low energy and computed with theoretical models at high energy. In this work, a coherent set of cross sections which reproduces the experimental values of the fission cross sections is computed. These calculations were performed for an energy of the incoming neutron between 1 keV and 1 MeV.

  14. Instrumented anvil-on-rod impact experiments for validating constitutive strength model for simulating transient dynamic deformation response of metals

    International Nuclear Information System (INIS)

    Martin, M.; Shen, T.; Thadhani, N.N.

    2008-01-01

    Instrumented anvil-on-rod impact experiments were performed to assess the applicability of this approach for validating a constitutive strength model for dynamic, transient-state deformation and elastic-plastic wave interactions in vanadium, 21-6-9 stainless steel, titanium, and Ti-6Al-4V. In addition to soft-catching the impacted rod-shaped samples, their transient deformation states were captured by high-speed imaging, and velocity interferometry was used to record the sample back (free) surface velocity and monitor elastic-plastic wave interactions. Simulations utilizing the AUTODYN-2D hydrocode with the Steinberg-Guinan constitutive equation were used to generate simulated free surface velocity traces and final/transient deformation profiles for comparison with experiments. The simulations were observed to under-predict the radial strain for bcc vanadium and fcc steel, but over-predict the radial strain for hcp titanium and Ti-6Al-4V. The correlations illustrate the applicability of the instrumented anvil-on-rod impact test as a method for providing robust model validation based on the entire deformation event, and not just the final deformed state.

  15. Modelling energy demand of developing countries: Are the specific features adequately captured?

    International Nuclear Information System (INIS)

    Bhattacharyya, Subhes C.; Timilsina, Govinda R.

    2010-01-01

    This paper critically reviews existing energy demand forecasting methodologies highlighting the methodological diversities and developments over the past four decades in order to investigate whether the existing energy demand models are appropriate for capturing the specific features of developing countries. The study finds that two types of approaches, econometric and end-use accounting, are commonly used in the existing energy demand models. Although energy demand models have greatly evolved since the early seventies, key issues such as the poor-rich and urban-rural divides, traditional energy resources and differentiation between commercial and non-commercial energy commodities are often poorly reflected in these models. While the end-use energy accounting models with detailed sectoral representations produce more realistic projections as compared to the econometric models, they still suffer from huge data deficiencies especially in developing countries. Development and maintenance of more detailed energy databases, further development of models to better reflect developing country context and institutionalizing the modelling capacity in developing countries are the key requirements for energy demand modelling to deliver richer and more reliable input to policy formulation in developing countries.

  16. Modelling energy demand of developing countries: Are the specific features adequately captured?

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharyya, Subhes C. [CEPMLP, University of Dundee, Dundee DD1 4HN (United Kingdom); Timilsina, Govinda R. [Development Research Group, The World Bank, Washington DC (United States)

    2010-04-15

    This paper critically reviews existing energy demand forecasting methodologies highlighting the methodological diversities and developments over the past four decades in order to investigate whether the existing energy demand models are appropriate for capturing the specific features of developing countries. The study finds that two types of approaches, econometric and end-use accounting, are commonly used in the existing energy demand models. Although energy demand models have greatly evolved since the early seventies, key issues such as the poor-rich and urban-rural divides, traditional energy resources and differentiation between commercial and non-commercial energy commodities are often poorly reflected in these models. While the end-use energy accounting models with detailed sectoral representations produce more realistic projections as compared to the econometric models, they still suffer from huge data deficiencies especially in developing countries. Development and maintenance of more detailed energy databases, further development of models to better reflect developing country context and institutionalizing the modelling capacity in developing countries are the key requirements for energy demand modelling to deliver richer and more reliable input to policy formulation in developing countries. (author)

  17. P-d capture reactions in muonic molecules

    International Nuclear Information System (INIS)

    Friar, J.L.

    1991-01-01

    Capture reactions for very low-energy n-d and p-d systems are calculated and compared with experiment, as are low-energy n-d and p-d scattering. We find excellent agreement for the n-d scattering lengths, but poor agreement for the p-d case, which we believe is a problem with the experimental extrapolation. The n-d radiative capture is sensitive to details of the meson-exchange currents, but reasonable models agree with the data. The latter models are in good agreement with experiment when extended to the p-d case. Our large quartet capture rate resolves a long-standing anomaly. The E0 capture matrix element recently obtained from a reanalysis of internal conversion in muonic molecules is in excellent agreement with our predictions. This matrix element is very clean theoretically and provides the best test of the calculations. 33 refs., 3 figs., 1 tab

  18. Simulation of Hybrid Solar Dryer

    International Nuclear Information System (INIS)

    Yunus, Y M; Al-Kayiem, H H

    2013-01-01

    The efficient performance of a solar dryer depends mainly on the good distribution of the thermal and flow fields inside the dryer body. This paper presents simulation results for a solar dryer with a biomass burner as a backup heater. The flow and thermal fields were simulated with CFD tools under different operational modes. GAMBIT software was used for the model and grid generation, while FLUENT software was used to simulate the velocity and temperature distribution inside the dryer body. The CFD simulation procedure was validated by comparing the simulation results with experimental measurements, and the simulation results show acceptable agreement with the experimental measurements. The simulations revealed a high-temperature spot with very low velocity underneath the solar absorber, which is an indication of poor design. Many other features of the temperature and flow distribution, which cannot be captured by experimental measurements, have also been visualized.

  19. Debunching and Capture in the LEB for the SSC

    Energy Technology Data Exchange (ETDEWEB)

    Mahale, N.; Furman, M.

    1991-05-01

    The authors present the details of the capture process in the Low Energy Booster (LEB) for the SSC. They consider only the longitudinal dynamics. Space charge forces are computed quasistatically. The beam pipe is considered to be perfectly conducting. With respect to maximizing the capture efficiency and minimizing the space charge tune spread, the initial few milliseconds are very important. They present only the first few milliseconds of the cycle, during which space charge effects are significant. For the numerical simulation they use the code ESME.

  20. Optimization of seasonal ARIMA models using differential evolution - simulated annealing (DESA) algorithm in forecasting dengue cases in Baguio City

    Science.gov (United States)

    Addawe, Rizavel C.; Addawe, Joel M.; Magadia, Joselito C.

    2016-10-01

    Accurate forecasting of dengue cases would significantly improve epidemic prevention and control capabilities. This paper attempts to provide useful models for forecasting the dengue epidemic specific to the young and adult population of Baguio City. To capture the seasonal variations in dengue incidence, this paper develops a robust modeling approach to identify and estimate seasonal autoregressive integrated moving average (SARIMA) models in the presence of additive outliers. Since the least squares estimators are not robust in the presence of outliers, we suggest a robust estimation based on winsorized and reweighted least squares estimators. A hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is used to identify and estimate the parameters of the optimal SARIMA model. The method is applied to the monthly reported dengue cases in Baguio City, Philippines.
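
    As a rough illustration of metaheuristic order selection for a seasonal ARIMA model, the sketch below runs a simulated-annealing-style random search over (p, q, P, Q) using statsmodels' SARIMAX and AIC on a synthetic monthly series. It is not the paper's DESA hybrid, and the robust winsorized/reweighted estimation step is omitted.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)

# Hypothetical monthly case counts with an annual cycle (stand-in for the
# Baguio City surveillance series, which is not reproduced here).
months = np.arange(120)
y = 50 + 30 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 8, 120)

def aic_of(order):
    p, q, P, Q = order
    try:
        fit = SARIMAX(y, order=(p, 1, q),
                      seasonal_order=(P, 1, Q, 12)).fit(disp=False)
        return fit.aic
    except Exception:
        return np.inf

# Simulated-annealing-style search over (p, q, P, Q); a simplified stand-in
# for the paper's differential evolution - simulated annealing (DESA) hybrid.
current = (1, 1, 1, 1)
current_aic = aic_of(current)
best, best_aic, temp = current, current_aic, 5.0
for step in range(15):
    cand = tuple(int(np.clip(v + rng.integers(-1, 2), 0, 2)) for v in current)
    cand_aic = aic_of(cand)
    if cand_aic < current_aic or rng.random() < np.exp((current_aic - cand_aic) / temp):
        current, current_aic = cand, cand_aic
        if cand_aic < best_aic:
            best, best_aic = cand, cand_aic
    temp *= 0.8            # cooling schedule

print("best (p,q,P,Q):", best, " AIC:", round(best_aic, 1))
```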

  1. Modeling and simulation of maintenance treatment in first-line non-small cell lung cancer with external validation

    International Nuclear Information System (INIS)

    Han, Kelong; Claret, Laurent; Sandler, Alan; Das, Asha; Jin, Jin; Bruno, Rene

    2016-01-01

    Maintenance treatment (MTx) in responders following first-line treatment has been investigated and practiced for many cancers. Modeling and simulation may support interpretation of interim data and development decisions. We aimed to develop a modeling framework to simulate overall survival (OS) for MTx in NSCLC using tumor growth inhibition (TGI) data. TGI metrics were estimated using longitudinal tumor size data from two Phase III first-line NSCLC studies evaluating bevacizumab and erlotinib as MTx in 1632 patients. Baseline prognostic factors and TGI metric estimates were assessed in multivariate parametric models to predict OS. The OS model was externally validated by simulating a third independent NSCLC study (n = 253) based on interim TGI data (up to progression-free survival database lock). The third study evaluated pemetrexed + bevacizumab vs. bevacizumab alone as MTx. Time-to-tumor-growth (TTG) was the best TGI metric to predict OS. TTG, baseline tumor size, ECOG score, Asian ethnicity, age, and gender were significant covariates in the final OS model. The OS model was qualified by simulating OS distributions and hazard ratios (HR) in the two studies used for model-building. Simulations of the third independent study based on interim TGI data showed that pemetrexed + bevacizumab MTx was unlikely to significantly prolong OS vs. bevacizumab alone given the current sample size (predicted HR: 0.81; 95 % prediction interval: 0.59–1.09). Predicted median OS was 17.3 months and 14.7 months in both arms, respectively. These simulations are consistent with the results of the final OS analysis published 2 years later (observed HR: 0.87; 95 % confidence interval: 0.63–1.21). Final observed median OS was 17.1 months and 13.2 months in both arms, respectively, consistent with our predictions. A robust TGI-OS model was developed for MTx in NSCLC. TTG captures treatment effect. The model successfully predicted the OS outcomes of an independent study based on interim

  2. Modeling and simulation of maintenance treatment in first-line non-small cell lung cancer with external validation.

    Science.gov (United States)

    Han, Kelong; Claret, Laurent; Sandler, Alan; Das, Asha; Jin, Jin; Bruno, Rene

    2016-07-13

    Maintenance treatment (MTx) in responders following first-line treatment has been investigated and practiced for many cancers. Modeling and simulation may support interpretation of interim data and development decisions. We aimed to develop a modeling framework to simulate overall survival (OS) for MTx in NSCLC using tumor growth inhibition (TGI) data. TGI metrics were estimated using longitudinal tumor size data from two Phase III first-line NSCLC studies evaluating bevacizumab and erlotinib as MTx in 1632 patients. Baseline prognostic factors and TGI metric estimates were assessed in multivariate parametric models to predict OS. The OS model was externally validated by simulating a third independent NSCLC study (n = 253) based on interim TGI data (up to progression-free survival database lock). The third study evaluated pemetrexed + bevacizumab vs. bevacizumab alone as MTx. Time-to-tumor-growth (TTG) was the best TGI metric to predict OS. TTG, baseline tumor size, ECOG score, Asian ethnicity, age, and gender were significant covariates in the final OS model. The OS model was qualified by simulating OS distributions and hazard ratios (HR) in the two studies used for model-building. Simulations of the third independent study based on interim TGI data showed that pemetrexed + bevacizumab MTx was unlikely to significantly prolong OS vs. bevacizumab alone given the current sample size (predicted HR: 0.81; 95 % prediction interval: 0.59-1.09). Predicted median OS was 17.3 months and 14.7 months in both arms, respectively. These simulations are consistent with the results of the final OS analysis published 2 years later (observed HR: 0.87; 95 % confidence interval: 0.63-1.21). Final observed median OS was 17.1 months and 13.2 months in both arms, respectively, consistent with our predictions. A robust TGI-OS model was developed for MTx in NSCLC. TTG captures treatment effect. The model successfully predicted the OS outcomes of an independent study

  3. Enzymes in CO2 Capture

    DEFF Research Database (Denmark)

    Fosbøl, Philip Loldrup; Gladis, Arne; Thomsen, Kaj

    The enzyme Carbonic Anhydrase (CA) can accelerate the absorption rate of CO2 into aqueous solutions by several-fold. It exists in almost all living organisms and catalyses different important processes like CO2 transport, respiration and the acid-base balance. A new technology in the field of carbon capture is the application of enzymes for acceleration of typically slow ternary amines or inorganic carbonates. There is a hidden potential to revive currently infeasible amines which have an interestingly low energy consumption for regeneration but too slow kinetics for viable CO2 capture. The aim of this work is to discuss the measurements of kinetic properties for CA-promoted CO2 capture solvent systems. The development of a rate-based model for enzymes will be discussed, showing the principles of implementation and the results of using a well-known ternary amine for CO2 capture. Conclusions...

  4. Capture of irregular satellites at Jupiter

    International Nuclear Information System (INIS)

    Nesvorný, David; Vokrouhlický, David; Deienno, Rogerio

    2014-01-01

    The irregular satellites of outer planets are thought to have been captured from heliocentric orbits. The exact nature of the capture process, however, remains uncertain. We examine the possibility that irregular satellites were captured from the planetesimal disk during the early solar system instability when encounters between the outer planets occurred. Nesvorný et al. already showed that the irregular satellites of Saturn, Uranus, and Neptune were plausibly captured during planetary encounters. Here we find that the current instability models present favorable conditions for capture of irregular satellites at Jupiter as well, mainly because Jupiter undergoes a phase of close encounters with an ice giant. We show that the orbital distribution of bodies captured during planetary encounters provides a good match to the observed distribution of irregular satellites at Jupiter. The capture efficiency for each particle in the original transplanetary disk is found to be (1.3-3.6) × 10⁻⁸. This is roughly enough to explain the observed population of jovian irregular moons. We also confirm Nesvorný et al.'s results for the irregular satellites of Saturn, Uranus, and Neptune.

  5. Mathematical Capture of Human Crowd Behavioral Data for Computational Model Building, Verification, and Validation

    Science.gov (United States)

    2011-03-21

    throughout the experimental runs. Reliable and validated measures of anxiety (Spielberger, 1983), as well as custom-constructed questionnaires about ...

  6. Simulation - modeling - experiment; Simulation - modelisation - experience

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    After two workshops held in 2001 on the same topics, and in order to take stock of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, and the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop, dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advancement status of the MCNP/TRIO-U neutronics/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); perturbation and sensitivity analysis methods for nuclear data in reactor physics, with application to the VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues and comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high-energy transport codes: status and perspective (Leray S.); other avenues of investigation for spallation (Audoin L.); neutron and light particle production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F

  7. 3D Core Model for simulation of nuclear power plants: Simulation requirements, model features, and validation

    International Nuclear Information System (INIS)

    Zerbino, H.

    1999-01-01

    In 1994-1996, Thomson Training and Simulation (TT and S) carried out the D50 Project, which involved the design and construction of optimized replica simulators for one Dutch and three German Nuclear Power Plants. It was recognized early on that the faithful reproduction of the Siemens reactor control and protection systems would impose extremely stringent demands on the simulation models, particularly the Core physics and the RCS thermohydraulics. The quality of the models, and their thorough validation, were thus essential. The present paper describes the main features of the fully 3D Core model implemented by TT and S, and its extensive validation campaign, which was defined in extremely positive collaboration with the Customer and the Core Data suppliers. (author)

  8. A study for production simulation model generation system based on data model at a shipyard

    Directory of Open Access Journals (Sweden)

    Myung-Gi Back

    2016-09-01

    Full Text Available Simulation technology is a type of shipbuilding product lifecycle management solution used to support production planning or decision-making. Normally, most shipbuilding processes consist of job shop production, and their modeling and simulation require professional skills and experience in shipbuilding. For these reasons, many shipbuilding companies have difficulties adopting simulation systems, regardless of the necessity for the technology. In this paper, the data model for shipyard production simulation model generation was defined by analyzing the iterative simulation modeling procedure. The shipyard production simulation data model defined in this study contains the information necessary for the conventional simulation modeling procedure and can serve as a basis for simulation model generation. The efficacy of the developed system was validated by applying it to the simulation model generation of the panel block production line. By implementing the initial simulation model generation process, which was performed in the past with a simulation modeler, the proposed system substantially reduced the modeling time. In addition, by reducing the difficulties posed by different modeler-dependent generation methods, the proposed system makes the standardization of the simulation model quality possible.

  9. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

    A methodology for the empirical validation of any detailed simulation model is presented. This kind of validation is always related to an experimental case. The empirical validation has a residual sense, because the conclusions are based on comparisons between simulated outputs and experimental measurements. This methodology will guide us to detect the failures of the simulation model. Furthermore, it can be used as a guide in the design of subsequent experiments. Three steps can be well differentiated: Sensitivity analysis, which can be made with a DSA, differential sensitivity analysis, and with a MCSA, Monte-Carlo sensitivity analysis. Finding the optimal domains of the input parameters, for which a procedure based on Monte-Carlo methods and cluster techniques has been developed. Residual analysis, made in the time domain and in the frequency domain using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a thermal simulation model on buildings, Esp., is presented, studying the behavior of building components on a Test Cell of LECE of CIEMAT. (Author) 17 refs
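
    As an illustration of the first step, a minimal Monte-Carlo sensitivity analysis sketch: inputs of a generic simulation model are sampled over assumed ranges and ranked by their correlation with the output. The model function and the parameter ranges are placeholders, not those of the LECE test-cell study.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(u_wall, g_glass, ach):
    """Placeholder simulation output (e.g. a peak indoor temperature in degC)."""
    return 20.0 + 8.0 * g_glass - 3.0 * u_wall + 1.5 * ach

# Sample the inputs uniformly over assumed plausible ranges (MCSA step).
n = 5000
u_wall = rng.uniform(0.2, 1.5, n)    # wall U-value [W/m2K]
g_glass = rng.uniform(0.3, 0.8, n)   # glazing solar factor [-]
ach = rng.uniform(0.2, 2.0, n)       # air changes per hour [1/h]
output = model(u_wall, g_glass, ach)

# Rank the inputs by their correlation with the output (a crude sensitivity measure).
for name, values in [("u_wall", u_wall), ("g_glass", g_glass), ("ach", ach)]:
    r = np.corrcoef(values, output)[0, 1]
    print(f"{name:8s} correlation with output: {r:+.2f}")
```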

  10. Carbon Dioxide Capture and Transportation Options in the Illinois Basin

    Energy Technology Data Exchange (ETDEWEB)

    M. Rostam-Abadi; S. S. Chen; Y. Lu

    2004-09-30

    This report describes carbon dioxide (CO{sub 2}) capture options from large stationary emission sources in the Illinois Basin, primarily focusing on coal-fired utility power plants. The CO{sub 2} emissions data were collected for utility power plants and industrial facilities over most of Illinois, southwestern Indiana, and western Kentucky. Coal-fired power plants are by far the largest CO{sub 2} emission sources in the Illinois Basin. The data revealed that sources within the Illinois Basin emit about 276 million tonnes of CO{sub 2} annually from 122 utility power plants and industrial facilities. Industrial facilities include 48 emission sources and contribute about 10% of total emissions. A process analysis study was conducted to review the suitability of various CO{sub 2} capture technologies for large stationary sources. The advantages and disadvantages of each class of technology were investigated. Based on these analyses, a suitable CO{sub 2} capture technology was assigned to each type of emission source in the Illinois Basin. Techno-economic studies were then conducted to evaluate the energy and economic performances of three coal-based power generation plants with CO{sub 2} capture facilities. The three plants considered were (1) pulverized coal (PC) + post-combustion chemical absorption (monoethanolamine, or MEA), (2) integrated gasification combined cycle (IGCC) + pre-combustion physical absorption (Selexol), and (3) oxygen-enriched coal combustion plants. A conventional PC power plant without CO{sub 2} capture was also investigated as a baseline plant for comparison. Gross capacities of 266, 533, and 1,054 MW were investigated at each power plant. The economic study considered the burning of both Illinois No. 6 coal and Powder River Basin (PRB) coal. The cost estimation included the cost for compressing the CO{sub 2} stream to pipeline pressure. The process simulation software CHEMCAD was employed to perform steady-state simulations of the power generation systems
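
    A quantity often used to compare such capture options is the cost of CO2 avoided; a minimal sketch of that calculation is shown below, with purely illustrative numbers rather than results from the Illinois Basin study.

```python
# Cost of CO2 avoided = (COE_capture - COE_ref) / (emissions_ref - emissions_capture),
# with COE in $/MWh and emissions in tCO2/MWh. All numbers are illustrative
# placeholders, not results from the Illinois Basin study.
def cost_of_co2_avoided(coe_ref, coe_cap, em_ref, em_cap):
    return (coe_cap - coe_ref) / (em_ref - em_cap)

cost = cost_of_co2_avoided(coe_ref=45.0, coe_cap=75.0, em_ref=0.85, em_cap=0.12)
print(f"cost of CO2 avoided: {cost:.1f} $/tCO2")
```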

  11. A satellite simulator for TRMM PR applied to climate model simulations

    Science.gov (United States)

    Spangehl, T.; Schroeder, M.; Bodas-Salcedo, A.; Hollmann, R.; Riley Dellaripa, E. M.; Schumacher, C.

    2017-12-01

    Climate model simulations have to be compared against observation-based datasets in order to assess their skill in representing precipitation characteristics. Here we use a satellite simulator for TRMM PR in order to evaluate simulations performed with MPI-ESM (the Earth system model of the Max Planck Institute for Meteorology in Hamburg, Germany) within the MiKlip project (https://www.fona-miklip.de/, funded by the Federal Ministry of Education and Research in Germany). While classical evaluation methods focus on geophysical parameters such as precipitation amounts, the application of the satellite simulator enables an evaluation in the instrument's parameter space, thereby reducing uncertainties on the reference side. The CFMIP Observation Simulator Package (COSP) provides a framework for the application of satellite simulators to climate model simulations. The approach requires the introduction of sub-grid cloud and precipitation variability. Radar reflectivities are obtained by applying Mie theory, with the microphysical assumptions being chosen to match the atmosphere component of MPI-ESM (ECHAM6). The results are found to be sensitive to the methods used to distribute the convective precipitation over the sub-grid boxes. Simple parameterization methods are used to introduce sub-grid variability of convective clouds and precipitation. In order to constrain uncertainties a comprehensive comparison with sub-grid scale convective precipitation variability which is deduced from TRMM PR observations is carried out.
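
    The core of such a simulator is a forward operator that converts model hydrometeor fields into radar reflectivity. A much-simplified sketch is given below: it integrates an assumed exponential drop size distribution with the Rayleigh approximation, whereas the simulator described above uses full Mie scattering at the TRMM PR frequency; all numbers are illustrative.

```python
import numpy as np

# Simplified forward operator: reflectivity factor Z from a drop size distribution,
# using the Rayleigh approximation Z = sum(N(D) * D**6 * dD). The simulator in the
# record uses full Mie scattering at the TRMM PR frequency; the exponential DSD
# parameters below are illustrative only.
D = np.linspace(0.1, 6.0, 300)        # drop diameter [mm]
dD = D[1] - D[0]
N0, lam = 8.0e3, 2.0                  # assumed exponential DSD: N0 [m^-3 mm^-1], slope [mm^-1]
N = N0 * np.exp(-lam * D)             # N(D) [m^-3 mm^-1]

Z = np.sum(N * D**6 * dD)             # [mm^6 m^-3]
print(f"Z = {Z:.0f} mm^6/m^3  ->  {10.0 * np.log10(Z):.1f} dBZ")
```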

  12. Muon capture in deuterium

    Czech Academy of Sciences Publication Activity Database

    Ricci, P.; Truhlík, Emil; Mosconi, B.; Smejkal, J.

    2010-01-01

    Roč. 837, - (2010), s. 110-144 ISSN 0375-9474 Institutional research plan: CEZ:AV0Z10480505 Keywords : Negative muon capture * Deuteron * Potential models Subject RIV: BE - Theoretical Physics Impact factor: 1.986, year: 2010

  13. Use case driven approach to develop simulation model for PCS of APR1400 simulator

    International Nuclear Information System (INIS)

    Dong Wook, Kim; Hong Soo, Kim; Hyeon Tae, Kang; Byung Hwan, Bae

    2006-01-01

    The full-scope simulator is being developed to evaluate specific design features and to support the iterative design and validation in the Man-Machine Interface System (MMIS) design of Advanced Power Reactor (APR) 1400. The simulator consists of process model, control logic model, and MMI for the APR1400 as well as the Power Control System (PCS). In this paper, a use case driven approach is proposed to develop a simulation model for the PCS. In this approach, a system is considered from the point of view of its users. The user's view of the system is based on interactions with the system and the resultant responses. In the use case driven approach, we initially consider the system as a black box and look at its interactions with the users. From these interactions, use cases of the system are identified. Then the system is modeled using these use cases as functions. Lower levels expand the functionalities of each of these use cases. Hence, starting from the topmost level view of the system, we proceeded down to the lowest level (the internal view of the system). The model of the system thus developed is use case driven. This paper will introduce the functionality of the PCS simulation model, including a requirement analysis based on use cases and the validation results of the PCS model development. The PCS simulation model using use cases will first be used during full-scope simulator development for a nuclear power plant and will be supplied to the Shin-Kori 3 and 4 plants. The use-case-based simulation model development can be useful for the design and implementation of simulation models. (authors)

  14. A predictive model of nuclear power plant crew decision-making and performance in a dynamic simulation environment

    Science.gov (United States)

    Coyne, Kevin Anthony

    branching events and provide a better representation of the IDAC cognitive model. An operator decision-making engine capable of responding to dynamic changes in situational context was implemented. The IDAC human performance model was fully integrated with a detailed nuclear plant model in order to realistically simulate plant accident scenarios. Finally, the improved ADS-IDAC model was calibrated, validated, and updated using actual nuclear plant crew performance data. This research led to the following general conclusions: (1) A relatively small number of branching rules are capable of efficiently capturing a wide spectrum of crew-to-crew variabilities. (2) Compared to traditional static risk assessment methods, ADS-IDAC can provide a more realistic and integrated assessment of human error events by directly determining the effect of operator behaviors on plant thermal hydraulic parameters. (3) The ADS-IDAC approach provides an efficient framework for capturing actual operator performance data such as timing of operator actions, mental models, and decision-making activities.

  15. Dynamics of RF captured cooled proton beams

    International Nuclear Information System (INIS)

    Kells, W.; Mills, F.

    1983-01-01

    In the course of electron cooling experiments at the Electron Cooling Ring (ECR) at Fermilab, several peculiar features of the longitudinal phase space of cold protons (200 MeV) captured in RF buckets were observed. Here we present the experimental facts, outline a simple theory, and summarize computer simulation results which support the theory and the observations

  16. Plasma modelling and numerical simulation

    International Nuclear Information System (INIS)

    Van Dijk, J; Kroesen, G M W; Bogaerts, A

    2009-01-01

    Plasma modelling is an exciting subject in which virtually all physical disciplines are represented. Plasma models combine the electromagnetic, statistical and fluid dynamical theories that have their roots in the 19th century with the modern insights concerning the structure of matter that were developed throughout the 20th century. The present cluster issue consists of 20 invited contributions, which are representative of the state of the art in plasma modelling and numerical simulation. These contributions provide an in-depth discussion of the major theories and modelling and simulation strategies, and their applications to contemporary plasma-based technologies. In this editorial review, we introduce and complement those papers by providing a bird's eye perspective on plasma modelling and discussing the historical context in which it has surfaced. (editorial review)

  17. An Agent-Based Monetary Production Simulation Model

    DEFF Research Database (Denmark)

    Bruun, Charlotte

    2006-01-01

    An Agent-Based Simulation Model Programmed in Objective Borland Pascal. The program and source code are downloadable.

  18. Model Process Control Language

    Data.gov (United States)

    National Aeronautics and Space Administration — The MPC (Model Process Control) language enables the capture, communication and preservation of a simulation instance, with sufficient detail that it can be...

  19. Spatial capture-recapture design and modelling for the study of small mammals.

    Directory of Open Access Journals (Sweden)

    Juan Romairone

    Full Text Available Spatial capture-recapture modelling (SCR) is a powerful analytical tool to estimate density and derive information on space use and behaviour of elusive animals. Yet, SCR has been seldom applied to the study of ecologically keystone small mammals. Here we highlight its potential and requirements with a case study on common voles (Microtus arvalis). First, we address mortality associated with live-trapping, which can be high in small mammals, and must be kept minimal. We designed and tested a nest box coupled with a classic Sherman trap and show that it allows a 5-fold reduction of mortality in traps. Second, we address the need to adjust the trapping grid to the individual home range to maximize spatial recaptures. In May-June 2016, we captured and tagged with transponders 227 voles in a 1.2-ha area during two monthly sessions. Using a Bayesian SCR with a multinomial approach, we estimated: (1) the baseline detection rate and investigated variation according to sex, time or behaviour (aversion/attraction after a previous capture); (2) the parameter sigma that describes how detection probability declines as a function of the distance to an individual's activity centre, and investigated variation according to sex; and (3) density and population sex-ratio. We show that reducing the maximum distance between traps from 12 to 9.6m doubled spatial recaptures and improved model predictions. Baseline detection rate increased over time (after overcoming a likely aversion to entering new odourless traps) and was greater for females than males in June. The sigma parameter of males was twice that of females, indicating larger home ranges. Density estimates were of 142.92±38.50 and 168.25±15.79 voles/ha in May and June, respectively, with 2-3 times more females than males. We highlight the potential and broad applicability that SCR offers and provide specific recommendations for using it to study small mammals like voles.
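
    At the heart of SCR is a detection function that decays with distance from an animal's activity centre; a common choice is the half-normal kernel sketched below. The baseline rate p0 and the spatial scales sigma are illustrative values, not the estimates reported for the voles.

```python
import numpy as np

def scr_detection(d, p0, sigma):
    """Half-normal SCR detection probability at distance d from the activity centre."""
    return p0 * np.exp(-d**2 / (2.0 * sigma**2))

# Illustrative values only: baseline rate p0 and spatial scales (metres),
# with the male sigma set to twice the female sigma as in the record's finding.
p0, sigma_f, sigma_m = 0.3, 6.0, 12.0
for d in (0.0, 5.0, 10.0, 20.0):
    print(f"d = {d:4.1f} m   female p = {scr_detection(d, p0, sigma_f):.3f}   "
          f"male p = {scr_detection(d, p0, sigma_m):.3f}")
```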

  20. An optimization model for carbon capture & storage/utilization vs. carbon trading: A case study of fossil-fired power plants in Turkey.

    Science.gov (United States)

    Ağralı, Semra; Üçtuğ, Fehmi Görkem; Türkmen, Burçin Atılgan

    2018-06-01

    We consider fossil-fired power plants that operate in an environment where a cap and trade system is in operation. These plants need to choose between carbon capture and storage (CCS), carbon capture and utilization (CCU), or carbon trading in order to obey emissions limits enforced by the government. We develop a mixed-integer programming model that decides on the capacities of carbon capture units, if it is optimal to install them, the transportation network that needs to be built for transporting the carbon captured, and the locations of storage sites, if they are decided to be built. Main restrictions on the system are the minimum and maximum capacities of the different parts of the pipeline network, the amount of carbon that can be sold to companies for utilization, and the capacities on the storage sites. Under these restrictions, the model aims to minimize the net present value of the sum of the costs associated with installation and operation of the carbon capture unit and the transportation of carbon, the storage cost in case of CCS, the cost (or revenue) that results from the emissions trading system, and finally the negative revenue of selling the carbon to other entities for utilization. We implement the model on General Algebraic Modeling System (GAMS) by using data associated with two coal-fired power plants located in different regions of Turkey. We choose enhanced oil recovery (EOR) as the process in which carbon would be utilized. The results show that CCU is preferable to CCS as long as there is sufficient demand in the EOR market. The distance between the location of emission and location of utilization/storage, and the capacity limits on the pipes are an important factor in deciding between carbon capture and carbon trading. At carbon prices over $15/ton, carbon capture becomes preferable to carbon trading. These results show that as far as Turkey is concerned, CCU should be prioritized as a means of reducing nation-wide carbon emissions in an
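
    A toy version of the underlying decision problem, reduced to a single plant and a single year, can be written as a small mixed-integer program; the sketch below uses PuLP, and all figures (emissions, cap, prices, costs, capture rate) are illustrative placeholders rather than data from the Turkish case study.

```python
# Toy single-plant, single-year version of the capture-vs-trading decision as a
# small MILP in PuLP. All figures (emissions, cap, prices, costs, capture rate)
# are illustrative placeholders, not data from the Turkish case study.
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, value

E, cap = 4.0e6, 2.5e6        # annual emissions and allowed cap [tCO2/yr]
p_trade = 15.0               # allowance price [$/tCO2]
capex_ann = 25.0e6           # annualized cost of installing a capture unit [$/yr]
opex_cap = 8.0               # variable capture cost [$/tCO2]

prob = LpProblem("capture_vs_trading", LpMinimize)
y = LpVariable("install_capture", cat=LpBinary)
x = LpVariable("tonnes_captured", lowBound=0)
b = LpVariable("allowances_bought", lowBound=0)

prob += capex_ann * y + opex_cap * x + p_trade * b   # minimize total annual cost
prob += x <= 0.9 * E * y                             # can capture only if the unit is installed
prob += b >= E - x - cap                             # excess above the cap must be covered by allowances

prob.solve()
print("install capture:", int(value(y)), "| captured [t]:", value(x), "| allowances [t]:", value(b))
```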

  1. Validation of the simulator neutronics model

    International Nuclear Information System (INIS)

    Gregory, M.V.

    1984-01-01

    The neutronics model in the SRP reactor training simulator computes the variation with time of the neutron population in the reactor core. The power output of a reactor is directly proportional to the neutron population, thus in a very real sense the neutronics model determines the response of the simulator. The geometrical complexity of the reactor control system in SRP reactors requires the neutronics model to provide a detailed, 3D representation of the reactor core. Existing simulator technology does not allow such a detailed representation to run in real-time in a minicomputer environment, thus an entirely different approach to the problem was required. A prompt jump method has been developed in answer to this need
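
    The record does not spell out the algorithm, but the prompt jump approximation it names is a standard simplification of point kinetics: the prompt-neutron time derivative is neglected, so the neutron level follows the delayed-neutron precursors algebraically and the stiff prompt time scale disappears. A one-delayed-group sketch with illustrative parameters:

```python
# One-delayed-group point kinetics under the prompt jump approximation:
# neglecting dn/dt gives n = lam * Lambda * C / (beta - rho), and the precursor
# balance reduces to dC/dt = lam * rho / (beta - rho) * C. Parameters are
# illustrative, not those of the SRP simulator.
beta, lam, Lambda = 0.0065, 0.08, 1.0e-4   # delayed fraction, precursor decay [1/s], generation time [s]
rho = 0.001                                # step reactivity insertion (rho < beta)

dt, t_end = 0.01, 20.0
C = beta * 1.0 / (lam * Lambda)            # precursors in equilibrium with n = 1
levels = []
for _ in range(int(t_end / dt) + 1):
    n = lam * Lambda * C / (beta - rho)    # prompt-jump neutron level
    levels.append(n)
    C += dt * lam * rho / (beta - rho) * C # explicit Euler update of the precursors

print(f"n just after the jump: {levels[0]:.3f}  (analytic beta/(beta - rho) = {beta / (beta - rho):.3f})")
print(f"n after {t_end:.0f} s: {levels[-1]:.3f}")
```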

  2. Modelling and simulation of a heat exchanger

    Science.gov (United States)

    Xia, Lei; Deabreu-Garcia, J. Alex; Hartley, Tom T.

    1991-01-01

    Two models, for two different control systems, are developed for a parallel heat exchanger. First, by spatially lumping the heat exchanger, a good approximate model of high system order is produced. Model reduction techniques are then applied to obtain low-order models suitable for dynamic analysis and control design. The simulation method is discussed to ensure valid simulation results.

  3. Modeling and Simulation of U-tube Steam Generator

    Science.gov (United States)

    Zhang, Mingming; Fu, Zhongguang; Li, Jinyao; Wang, Mingfei

    2018-03-01

    This article focuses on the modeling and simulation of the U-tube natural circulation steam generator. The research is based on the simuworks system simulation software platform. By analyzing the structural characteristics and the operating principle of the U-tube steam generator, a model with 14 control volumes was built, including the primary side, secondary side, downcomer channel and steam plenum, etc. The model depends completely on conservation laws, and it is applied to perform several simulation tests. The results show that the model is capable of properly simulating the dynamic response of the U-tube steam generator.

  4. A comprehensive approach for the simulation of the Urban Heat Island effect with the WRF/SLUCM modeling system: The case of Athens (Greece)

    Science.gov (United States)

    Giannaros, Christos; Nenes, Athanasios; Giannaros, Theodore M.; Kourtidis, Konstantinos; Melas, Dimitrios

    2018-03-01

    This study presents a comprehensive modeling approach for simulating the spatiotemporal distribution of urban air temperatures with a modeling system that includes the Weather Research and Forecasting (WRF) model and the Single-Layer Urban Canopy Model (SLUCM) with a modified treatment of the impervious surface temperature. The model was applied to simulate a 3-day summer heat wave event over the city of Athens, Greece. The simulation, using default SLUCM parameters, is capable of capturing the observed diurnal variation of urban temperatures and the Urban Heat Island (UHI) in the greater Athens Area (GAA), albeit with systematic biases that are prominent during nighttime hours. These biases are particularly evident over low-intensity residential areas, and they are associated with the surface and urban canopy properties representing the urban environment. A series of sensitivity simulations unravels the importance of the sub-grid urban fraction parameter, surface albedo, and street canyon geometry in the overall causation and development of the UHI effect. The sensitivities are then used to determine optimal values of the street canyon geometry, which reproduces the observed temperatures throughout the simulation domain. The optimal parameters, apart from considerably improving model performance (reductions in mean temperature bias from 0.30 °C to 1.58 °C), are also consistent with actual city building characteristics - which gives confidence that the model set-up is robust, and can be used to study the UHI in the GAA in the anticipated warmer conditions in the future.

  5. A real option-based simulation model to evaluate investments in pump storage plants

    International Nuclear Information System (INIS)

    Muche, Thomas

    2009-01-01

    Investments in pump storage plants are expected to grow, especially due to their ability to store an excess of supply from wind power plants. In order to evaluate these investments correctly, the peculiarities of pump storage plants and the characteristics of liberalized power markets have to be considered. The main characteristics of power markets are the strong power price volatility and the occurrence of price spikes. In this article a valuation model is developed capturing these aspects using power price simulation, optimization of unit commitment and capital market theory. This valuation model is able to value a future price-based unit commitment planning that corresponds to future scopes of action, also called real options. The resulting real option value for the pump storage plant is compared with the traditional net present value approach. Because this approach is not able to evaluate such scopes of action correctly, it results in strongly smaller investment values and forces wrong investment decisions.
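
    The gap between the two valuations can be illustrated with a deliberately simple sketch: simulate a volatile peak/off-peak price spread, then compare the value of a fixed pumping/generation schedule (the NPV-style view) with the value obtained when the plant cycles only on profitable days (the real-option view). All prices and plant figures are illustrative.

```python
import numpy as np

# Deliberately simple comparison: value of a fixed pumping/generation schedule
# (NPV-style view) versus cycling only on profitable days (real-option view).
# Prices, volatility, cycled energy and efficiency are illustrative placeholders.
rng = np.random.default_rng(1)
days = 365
peak = 60.0 + 25.0 * rng.standard_normal(days)       # daily peak price [EUR/MWh]
off_peak = 30.0 + 10.0 * rng.standard_normal(days)   # daily off-peak price [EUR/MWh]
energy, eta = 1000.0, 0.75                            # MWh cycled per day, round-trip efficiency

spread = eta * peak - off_peak                        # margin per MWh pumped
fixed_value = energy * spread.mean()                  # always cycle, even on loss-making days
option_value = energy * np.clip(spread, 0.0, None).mean()  # cycle only when the spread is positive

print(f"fixed-schedule value:    {fixed_value:12.0f} EUR/yr")
print(f"flexible (option) value: {option_value:12.0f} EUR/yr")
```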

  6. Model for Simulation Atmospheric Turbulence

    DEFF Research Database (Denmark)

    Lundtang Petersen, Erik

    1976-01-01

    A method that produces realistic simulations of atmospheric turbulence is developed and analyzed. The procedure makes use of a generalized spectral analysis, often called a proper orthogonal decomposition or the Karhunen-Loève expansion. A set of criteria, emphasizing a realistic appearance...... eigenfunctions and estimates of the distributions of the corresponding expansion coefficients. The simulation method utilizes the eigenfunction expansion procedure to produce preliminary time histories of the three velocity components simultaneously. As a final step, a spectral shaping procedure is then applied....... The method is unique in modeling the three velocity components simultaneously, and it is found that important cross-statistical features are reasonably well-behaved. It is concluded that the model provides a practical, operational simulator of atmospheric turbulence....
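
    A minimal sketch of the Karhunen-Loève idea on synthetic data: estimate spatial eigenfunctions from an ensemble, then synthesize a new realization from random expansion coefficients carrying the empirical variances. The spectral shaping step and the joint treatment of the three velocity components described above are omitted.

```python
import numpy as np

# Minimal Karhunen-Loeve / POD sketch on synthetic data: estimate spatial
# eigenfunctions from an ensemble of records, then synthesize a new realization
# from random expansion coefficients carrying the empirical variances. The
# surrogate ensemble below is illustrative.
rng = np.random.default_rng(2)
n_points, n_samples = 64, 500
ensemble = np.array([np.convolve(rng.standard_normal(n_points), np.hanning(15), mode="same")
                     for _ in range(n_samples)])      # surrogate "measured" velocity records

cov = np.cov(ensemble, rowvar=False)                  # spatial covariance matrix
eigval, eigvec = np.linalg.eigh(cov)                  # ascending eigenvalues
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]        # reorder: largest modes first

k = 10                                                # retained modes
coeffs = rng.standard_normal(k) * np.sqrt(eigval[:k]) # random expansion coefficients
realization = eigvec[:, :k] @ coeffs                  # one synthetic field

print(f"variance captured by {k} modes: {eigval[:k].sum() / eigval.sum():.2%}")
```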

  7. Geomechanical Simulation of Bayou Choctaw Strategic Petroleum Reserve - Model Calibration.

    Energy Technology Data Exchange (ETDEWEB)

    Park, Byoung [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    A finite element numerical analysis model has been constructed that consists of a realistic mesh capturing the geometries of Bayou Choctaw (BC) Strategic Petroleum Reserve (SPR) site and multi-mechanism deformation (M-D) salt constitutive model using the daily data of actual wellhead pressure and oil-brine interface. The salt creep rate is not uniform in the salt dome, and the creep test data for BC salt is limited. Therefore, the model calibration is necessary to simulate the geomechanical behavior of the salt dome. The cavern volumetric closures of SPR caverns calculated from CAVEMAN are used for the field baseline measurement. The structure factor, A2, and transient strain limit factor, K0, in the M-D constitutive model are used for the calibration. The A2 value obtained experimentally from the BC salt and K0 value of Waste Isolation Pilot Plant (WIPP) salt are used for the baseline values. To adjust the magnitude of A2 and K0, multiplication factors A2F and K0F are defined, respectively. The A2F and K0F values of the salt dome and salt drawdown skins surrounding each SPR cavern have been determined through a number of back fitting analyses. The cavern volumetric closures calculated from this model correspond to the predictions from CAVEMAN for six SPR caverns. Therefore, this model is able to predict past and future geomechanical behaviors of the salt dome, caverns, caprock, and interbed layers. The geological concerns issued in the BC site will be explained from this model in a follow-up report.

  8. A model to capture and manage tacit knowledge using a multiagent system

    Science.gov (United States)

    Paolino, Lilyam; Paggi, Horacio; Alonso, Fernando; López, Genoveva

    2014-10-01

    This article presents a model to capture and register business tacit knowledge from different sources, using an expert multiagent system which enables the entry of incidences and captures the tacit knowledge which could fix them. This knowledge and its sources are evaluated through the application of trust algorithms that lead to the registration in the database of the best of each of them. Through its intelligent software agents, this system interacts with the administrator, the users, the knowledge sources and all the practice communities which might exist in the business world. The sources as well as the knowledge are constantly evaluated, before being registered and also afterwards, in order to decide whether the original weighting is kept or modified. When better knowledge becomes available, new knowledge is registered in place of the old. This is also part of an ongoing investigation into knowledge management methodologies for managing tacit business knowledge, so as to facilitate business competitiveness and lead to innovation and learning.

  9. System modeling and simulation at EBR-II

    International Nuclear Information System (INIS)

    Dean, E.M.; Lehto, W.K.; Larson, H.A.

    1986-01-01

    The codes being developed and verified using EBR-II data are NATDEMO, DSNP and CSYRED. NATDEMO is a variation of the Westinghouse DEMO code coupled to the NATCON code, previously used to simulate perturbations of reactor flow and inlet temperature and loss-of-flow transients leading to natural convection in EBR-II. CSYRED uses the Continuous System Modeling Program (CSMP) to simulate the EBR-II core, including power, temperature, control-rod movement reactivity effects and flow, and is used primarily to model reactivity-induced power transients. The Dynamic Simulator for Nuclear Power Plants (DSNP) allows a whole-plant, thermal-hydraulic simulation using specific component and system models called from libraries. It has been used to simulate flow coastdown transients, reactivity insertion events and balance-of-plant perturbations

  10. Optimization of the process loop for CO{sub 2} capture by solvents

    Energy Technology Data Exchange (ETDEWEB)

    Burkhardt, Thorsten; Camy-Portenabe, Julien; Fradet, Aude; Tobiesen, Andrew; Svendsen, Hallvard F [Institut Francais du Petrole, Vernaison (France)

    2006-07-01

    A plant for the CO{sub 2} capture of a coal-fired power plant is simulated with three commercial simulation tools (Aspen Plus, Hysys and Protreat). The results are generally in reasonable agreement. However, the CO{sub 2} removal is significantly higher in the Aspen Plus simulation, most probably due to the 'Radfrac' column model used, which does not account for the mass transfer resistance and the chemical kinetics and thus overestimates the CO{sub 2} absorption. An optimization is carried out with respect to lean loading level and circulation rate for a given base case. A lean loading of 0.24 molCO{sub 2}/molMEA represents the best compromise at the chosen conditions between sufficient stripping and a limited amine flow rate. The temperature approach of the rich/lean cross exchanger is investigated, and a decrease in the temperature approach from 10 to 5{sup o}C results in a decrease in the reboiler heat duty of 2%. 8 refs., 4 figs.

  11. Lessons Learnt from Experts in Design Rationale Knowledge Capture

    DEFF Research Database (Denmark)

    Hall, Mark; Bermell-Garcia, Pablo; Ravindranath, Ranjitun

    2017-01-01

    The focus of this paper is on the use of argumentation models and software tools to support knowledge capture in the design of long-life engineering products. The results of semi-structured interviews with a number of experts in the field are presented, exploring their collective experience...... of knowledge capture and eliciting guidelines for successful implementation of such models and tools. The results of this research may be used as the basis for the design of future tools and techniques for knowledge capture....

  12. Modelling and Simulation of Wave Loads

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    A simple model of the wave load on slender members of offshore structures is described. The wave elevation of the sea state is modelled by a stationary Gaussian process. A new procedure to simulate realizations of the wave loads is developed. The simulation method assumes that the wave particle velocity can be approximated by a Gaussian Markov process. Known approximate results for the first-passage density or, equivalently, the distribution of the extremes of wave loads are presented and compared with rather precise simulation results. It is demonstrated that the approximate results...

  13. Modelling and Simulation of Wave Loads

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    1985-01-01

    A simple model of the wave load on slender members of offshore structures is described. The wave elevation of the sea state is modelled by a stationary Gaussian process. A new procedure to simulate realizations of the wave loads is developed. The simulation method assumes that the wave particle velocity can be approximated by a Gaussian Markov process. Known approximate results for the first-passage density or, equivalently, the distribution of the extremes of wave loads are presented and compared with rather precise simulation results. It is demonstrated that the approximate results...
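
    Neither record states the load model explicitly, but a common model for slender members is a Morison-type drag plus inertia force driven by the wave particle kinematics; the sketch below generates a Gaussian Markov (AR(1)) particle velocity, as assumed in the records, and evaluates such a force per unit length with illustrative coefficients.

```python
import numpy as np

# Sketch of a wave-load realization on a slender member: the wave particle
# velocity u(t) is simulated as a Gaussian Markov (AR(1)) process, as assumed
# in the records, and a Morison-type drag + inertia force per unit length is
# evaluated. Coefficients, diameter and process parameters are illustrative.
rng = np.random.default_rng(3)
dt, n = 0.1, 2000
tau, sigma_u = 4.0, 1.0                  # correlation time [s], velocity std [m/s]
phi = np.exp(-dt / tau)
u = np.zeros(n)
for i in range(1, n):                    # AR(1) Gaussian Markov process
    u[i] = phi * u[i - 1] + sigma_u * np.sqrt(1.0 - phi**2) * rng.standard_normal()

rho_w, D = 1025.0, 1.0                   # sea water density [kg/m3], member diameter [m]
Cd, Cm = 1.0, 2.0                        # drag and inertia coefficients
dudt = np.gradient(u, dt)
force = 0.5 * rho_w * Cd * D * u * np.abs(u) + rho_w * Cm * np.pi * D**2 / 4.0 * dudt

print(f"force std: {force.std():.0f} N/m,  maximum over the record: {force.max():.0f} N/m")
```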

  14. Modeling and Measurements of Multiphase Flow and Bubble Entrapment in Steel Continuous Casting

    Science.gov (United States)

    Jin, Kai; Thomas, Brian G.; Ruan, Xiaoming

    2016-02-01

    In steel continuous casting, argon gas is usually injected to prevent clogging, but the bubbles also affect the flow pattern, and may become entrapped to form defects in the final product. To investigate this behavior, plant measurements were conducted, and a computational model was applied to simulate turbulent flow of the molten steel and the transport and capture of argon gas bubbles into the solidifying shell in a continuous slab caster. First, the flow field was solved with an Eulerian k-ɛ model of the steel, which was two-way coupled with a Lagrangian model of the large bubbles using a discrete random walk method to simulate their turbulent dispersion. The flow predicted on the top surface agreed well with nailboard measurements and indicated strong cross flow caused by biased flow of Ar gas due to the slide-gate orientation. Then, the trajectories and capture of over two million bubbles (25 μm to 5 mm diameter range) were simulated using two different capture criteria (simple and advanced). Results with the advanced capture criterion agreed well with measurements of the number, locations, and sizes of captured bubbles, especially for larger bubbles. The relative capture fraction of 0.3 pct was close to the measured 0.4 pct for 1 mm bubbles and occurred mainly near the top surface. About 85 pct of smaller bubbles were captured, mostly deeper down in the caster. Due to the biased flow, more bubbles were captured on the inner radius, especially near the nozzle. On the outer radius, more bubbles were captured near the narrow face. The model presented here is an efficient tool to study the capture of bubbles and inclusion particles in solidification processes.

  15. Simulation modeling for the health care manager.

    Science.gov (United States)

    Kennedy, Michael H

    2009-01-01

    This article addresses the use of simulation software to solve administrative problems faced by health care managers. Spreadsheet add-ins, process simulation software, and discrete event simulation software are available at a range of costs and complexity. All use the Monte Carlo method to realistically integrate probability distributions into models of the health care environment. Problems typically addressed by health care simulation modeling are facility planning, resource allocation, staffing, patient flow and wait time, routing and transportation, supply chain management, and process improvement.
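
    As a small, self-contained example of the Monte Carlo approach applied to a patient-flow question, the sketch below estimates the average waiting time at a single-server clinic with exponential inter-arrival and service times via the Lindley recursion; the rates are illustrative placeholders.

```python
import random

# Monte Carlo estimate of the average patient waiting time at a single-server
# clinic with exponential inter-arrival and service times (Lindley recursion).
# The arrival and service rates are illustrative placeholders.
random.seed(42)
arrival_rate, service_rate = 5.5, 6.0    # patients per hour
n_patients, n_runs = 500, 200

run_means = []
for _ in range(n_runs):
    wait, total = 0.0, 0.0
    for _ in range(n_patients):
        inter_arrival = random.expovariate(arrival_rate)
        service = random.expovariate(service_rate)
        wait = max(0.0, wait + service - inter_arrival)  # waiting time of the next patient
        total += wait
    run_means.append(total / n_patients)

print(f"estimated mean wait: {sum(run_means) / n_runs * 60:.1f} minutes")
```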

  16. Simulating the characteristics of tropical cyclones over the South West Indian Ocean using a Stretched-Grid Global Climate Model

    Science.gov (United States)

    Maoyi, Molulaqhooa L.; Abiodun, Babatunde J.; Prusa, Joseph M.; Veitch, Jennifer J.

    2018-03-01

    Tropical cyclones (TCs) are one of the most devastating natural phenomena. This study examines the capability of a global climate model with grid stretching (CAM-EULAG, hereafter CEU) in simulating the characteristics of TCs over the South West Indian Ocean (SWIO). In the study, CEU is applied with a variable increment global grid that has a fine horizontal grid resolution (0.5° × 0.5°) over the SWIO and coarser resolution (1° × 1°—2° × 2.25°) over the rest of the globe. The simulation is performed for the 11 years (1999-2010) and validated against the Joint Typhoon Warning Center (JTWC) best track data, global precipitation climatology project (GPCP) satellite data, and ERA-Interim (ERAINT) reanalysis. CEU gives a realistic simulation of the SWIO climate and shows some skill in simulating the spatial distribution of TC genesis locations and tracks over the basin. However, there are some discrepancies between the observed and simulated climatic features over the Mozambique channel (MC). Over MC, CEU simulates a substantial cyclonic feature that produces a higher number of TC than observed. The dynamical structure and intensities of the CEU TCs compare well with observation, though the model struggles to produce TCs with a deep pressure centre as low as the observed. The reanalysis has the same problem. The model captures the monthly variation of TC occurrence well but struggles to reproduce the interannual variation. The results of this study have application in improving and adopting CEU for seasonal forecasting over the SWIO.

  17. Guidelines for Reproducibly Building and Simulating Systems Biology Models.

    Science.gov (United States)

    Medley, J Kyle; Goldberg, Arthur P; Karr, Jonathan R

    2016-10-01

    Reproducibility is the cornerstone of the scientific method. However, currently, many systems biology models cannot easily be reproduced. This paper presents methods that address this problem. We analyzed the recent Mycoplasma genitalium whole-cell (WC) model to determine the requirements for reproducible modeling. We determined that reproducible modeling requires both repeatable model building and repeatable simulation. New standards and simulation software tools are needed to enhance and verify the reproducibility of modeling. New standards are needed to explicitly document every data source and assumption, and new deterministic parallel simulation tools are needed to quickly simulate large, complex models. We anticipate that these new standards and software will enable researchers to reproducibly build and simulate more complex models, including WC models.

  18. Effects of inlet/outlet configurations on the electrostatic capture of airborne nanoparticles and viruses

    International Nuclear Information System (INIS)

    Jang, Jaesung; Akin, Demir; Bashir, Rashid

    2008-01-01

    Motivated by capture and detection of airborne biological agents in real time with a cantilever biosensor without introducing the agents into liquids, we present the effects of inlet/outlet configurations of a homemade particle collector on the electrostatic capture of airborne 100 nm diameter nanoparticles under swirling gas flows. This particle collector has three different inlet/outlet configurations: forward inlet/outlet (FO), backward inlet/outlet (BO) and straight inlet/outlet (SO) configurations. We also present the electrostatic capture of Vaccinia viruses using the same particle collector and compare these virus measurements with the nanoparticle cases. The most particles were collected in the FO configuration. The numbers of particles captured in the BO and SO configurations were close within their standard deviations. For all the three configurations tested, the number of particles captured in the center electrode C was much smaller than those captured in the other electrodes at a flow rate of 1.1 l min⁻¹ and an applied potential of 2 kV. Using a commercial CFD code FLUENT, we also simulated the effects of the three inlet/outlet configurations on the particle capture in terms of particle trajectories, velocities and travel times. This simulation was in good agreement with the measurements, confirming that the FO configuration is the most favorable to particle capture among the tested configurations at a flow rate of 1.1 l min⁻¹. The effects of particle diameters on the capture will also be discussed. This collector can be used for real-time monitoring of bioaerosols along with cantilever biosensors
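
    The quantity that ultimately controls capture on the electrodes is the electrostatic drift velocity of the charged particles. A rough estimate for a 100 nm particle carrying a single elementary charge is sketched below; the local field strength and the particle charge are assumptions, since the record reports only the applied potential.

```python
import math

# Rough estimate of the electrostatic drift velocity of a charged 100 nm particle,
# the quantity that ultimately controls capture on the electrodes. The local field
# and the particle charge are assumptions; the record reports only the applied
# potential (2 kV), not the electrode geometry or field strength.
e = 1.602e-19        # elementary charge [C]
mu = 1.81e-5         # dynamic viscosity of air [Pa s]
mfp = 65e-9          # mean free path of air [m]
d = 100e-9           # particle diameter [m]
q = 1 * e            # assumed single elementary charge
E_field = 2.0e5      # assumed local field [V/m] (~2 kV over ~1 cm)

Kn = 2.0 * mfp / d
Cc = 1.0 + Kn * (1.257 + 0.4 * math.exp(-1.1 / Kn))   # Cunningham slip correction
mobility = q * Cc / (3.0 * math.pi * mu * d)          # electrical mobility [m2/(V s)]
print(f"slip correction: {Cc:.2f},  drift velocity: {mobility * E_field * 100:.2f} cm/s")
```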

  19. Molecular constraints on synaptic tagging and maintenance of long-term potentiation: a predictive model.

    Science.gov (United States)

    Smolen, Paul; Baxter, Douglas A; Byrne, John H

    2012-01-01

    Protein synthesis-dependent, late long-term potentiation (LTP) and depression (LTD) at glutamatergic hippocampal synapses are well characterized examples of long-term synaptic plasticity. Persistent increased activity of protein kinase Mζ (PKMζ) is thought essential for maintaining LTP. Additional spatial and temporal features that govern LTP and LTD induction are embodied in the synaptic tagging and capture (STC) and cross capture hypotheses. Only synapses that have been "tagged" by a stimulus sufficient for LTP and learning can "capture" PKMζ. A model was developed to simulate the dynamics of key molecules required for LTP and LTD. The model concisely represents relationships between tagging, capture, LTD, and LTP maintenance. The model successfully simulated LTP maintained by persistent synaptic PKMζ, STC, LTD, and cross capture, and makes testable predictions concerning the dynamics of PKMζ. The maintenance of LTP, and consequently of at least some forms of long-term memory, is predicted to require continual positive feedback in which PKMζ enhances its own synthesis only at potentiated synapses. This feedback underlies bistability in the activity of PKMζ. Second, cross capture requires the induction of LTD to induce dendritic PKMζ synthesis, although this may require tagging of a nearby synapse for LTP. The model also simulates the effects of PKMζ inhibition, and makes additional predictions for the dynamics of CaM kinases. Experiments testing the above predictions would significantly advance the understanding of memory maintenance.
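
    The predicted feedback motif can be caricatured in a few lines: PKMζ enhances its own synthesis through a Hill-type term at a tagged synapse and decays first-order, which yields a bistable switch; a brief stimulus moves the synapse from the low to the persistently high state. Parameters below are illustrative, not those of the published model.

```python
# Caricature of the predicted feedback: PKMzeta enhances its own synthesis at a
# tagged synapse (Hill term) and is degraded first-order, giving a bistable
# switch. A brief stimulus moves the synapse from the low to the persistently
# high state. Parameters are illustrative, not those of the published model.
k_basal, k_fb, K, n_hill, k_deg = 0.01, 1.0, 0.5, 4, 0.5

def dPdt(P, stim=0.0):
    return k_basal + stim + k_fb * P**n_hill / (K**n_hill + P**n_hill) - k_deg * P

dt, t_end = 0.01, 200.0
P, history = 0.02, []
for step in range(int(t_end / dt)):
    t = step * dt
    stimulus = 0.6 if 20.0 <= t < 22.0 else 0.0   # brief LTP-inducing stimulus at a tagged synapse
    P += dt * dPdt(P, stimulus)
    history.append(P)

print(f"PKMzeta level before the stimulus: {history[int(10.0 / dt)]:.3f}")
print(f"PKMzeta level long after the stimulus: {history[-1]:.3f}  (remains high: maintained LTP)")
```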

  20. Direct numerical simulation of bluff-body-stabilized premixed flames

    KAUST Repository

    Arias, Paul G.

    2014-01-10

    To enable high fidelity simulation of combustion phenomena in realistic devices, an embedded boundary method is implemented into direct numerical simulations (DNS) of reacting flows. One of the additional numerical issues associated with reacting flows is the stable treatment of the embedded boundaries in the presence of multicomponent species and reactions. The implemented method is validated in two test configurations: a pre-mixed hydrogen/air flame stabilized in a backward-facing step configuration, and reactive flows around a square prism. The former is of interest in practical gas turbine combustor applications in which the thermo-acoustic instabilities are a strong concern, and the latter serves as a good model problem to capture the vortex shedding behind a bluff body. In addition, a reacting flow behind the square prism serves as a model for the study of flame stabilization in a micro-channel combustor. The present study utilizes fluid-cell reconstruction methods in order to capture important flame-to-solid wall interactions that are important in confined multicomponent reacting flows. Results show that the DNS with embedded boundaries can be extended to more complex geometries without loss of accuracy and the high fidelity simulation data can be used to develop and validate turbulence and combustion models for the design of practical combustion devices.