WorldWideScience

Sample records for simulation modeling study

  1. Operations planning simulation: Model study

    Science.gov (United States)

    1974-01-01

    The use of simulation modeling for the identification of system sensitivities to internal and external forces and variables is discussed. The technique provides a means of exploring alternative system procedures and processes, so that these alternatives may be considered on a mutually comparative basis, permitting the selection of a mode or modes of operation that offer potential advantages to the system user and the operator. These advantages are measured in terms of system efficiency: (1) the ability to meet specific schedules for operations, mission, or mission-readiness requirements or performance standards, and (2) the ability to accomplish the objectives within cost-effective limits.

  2. Plasma simulation studies using multilevel physics models

    International Nuclear Information System (INIS)

    Park, W.; Belova, E.V.; Fu, G.Y.; Tang, X.Z.; Strauss, H.R.; Sugiyama, L.E.

    1999-01-01

    The question of how to proceed toward ever more realistic plasma simulation studies using ever increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase-spaces and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluids; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase-space), gyrokinetic-particle-ion/fluid-electron (5D ion phase-space), and full-kinetic-particle-ion/fluid-electron level (6D ion phase-space). Resolving electron phase-space (5D or 6D) remains a future project. Phase-space-fluid models are not used in favor of δf particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems not likely in the near future. copyright 1999 American Institute of Physics

  3. Plasma simulation studies using multilevel physics models

    International Nuclear Information System (INIS)

    Park, W.; Belova, E.V.; Fu, G.Y.

    2000-01-01

    The question of how to proceed toward ever more realistic plasma simulation studies using ever increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase-spaces and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluids; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase-space), gyrokinetic-particle-ion/fluid-electron (5D ion phase-space), and full-kinetic-particle-ion/fluid-electron level (6D ion phase-space). Resolving electron phase-space (5D or 6D) remains a future project. Phase-space-fluid models are not used in favor of delta f particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems not likely in the near future

  4. Simulation model for studying low frequency microinstabilities

    International Nuclear Information System (INIS)

    Lee, W.W.; Okuda, H.

    1976-03-01

    A 2-1/2-dimensional, electrostatic particle code in a slab geometry has been developed to study low frequency oscillations such as drift wave and trapped particle instabilities in a nonuniform bounded plasma. A drift approximation for the electron transverse motion is made which eliminates the high frequency oscillations at the electron gyrofrequency and its multiples. It is, therefore, possible to study the nonlinear effects such as the anomalous transport of plasmas within a reasonable computing time using a real mass ratio. Several examples are given to check the validity and usefulness of the model.

  5. MODELING, SIMULATION AND PERFORMANCE STUDY OF GRID-CONNECTED PHOTOVOLTAIC ENERGY SYSTEM

    OpenAIRE

    Nagendra K; Karthik J; Keerthi Rao C; Kumar Raja Pemmadi

    2017-01-01

    This paper presents modeling and simulation of a grid-connected photovoltaic energy system and a performance study using MATLAB/Simulink. The photovoltaic energy system is considered in three main parts: the PV model, the power-conditioning system and the grid interface. The photovoltaic model is interconnected with the grid through full-scale power electronic devices. The simulation is conducted on the PV energy system at normal temperature and at constant load using MATLAB.

  6. A study for production simulation model generation system based on data model at a shipyard

    Directory of Open Access Journals (Sweden)

    Myung-Gi Back

    2016-09-01

    Full Text Available Simulation technology is a type of shipbuilding product lifecycle management solution used to support production planning or decision-making. Normally, most shipbuilding processes consist of job shop production, and the modeling and simulation require professional skills and experience in shipbuilding. For these reasons, many shipbuilding companies have difficulty adopting simulation systems, regardless of the necessity for the technology. In this paper, the data model for shipyard production simulation model generation was defined by analyzing the iterative simulation modeling procedure. The shipyard production simulation data model defined in this study contains the information necessary for the conventional simulation modeling procedure and can serve as a basis for simulation model generation. The efficacy of the developed system was validated by applying it to the simulation model generation of the panel block production line. By implementing the initial simulation model generation process, which was previously performed by a simulation modeler, the proposed system substantially reduced the modeling time. In addition, by reducing the difficulties posed by different modeler-dependent generation methods, the proposed system makes the standardization of simulation model quality possible.

  7. Mathematical and computational modeling and simulation fundamentals and case studies

    CERN Document Server

    Moeller, Dietmar P F

    2004-01-01

    Mathematical and Computational Modeling and Simulation - a highly multi-disciplinary field with ubiquitous applications in science and engineering - is one of the key enabling technologies of the 21st century. This book introduces the use of Mathematical and Computational Modeling and Simulation in order to develop an understanding of the solution characteristics of a broad class of real-world problems. The relevant basic and advanced methodologies are explained in detail, with special emphasis on ill-defined problems. Some 15 simulation systems are presented at the language and the logical level. Moreover, the reader can accumulate experience by studying a wide variety of case studies. The latter are briefly described within the book but their full versions as well as some simulation software demos are available on the Web. The book can be used for University courses of different levels as well as for self-study. Advanced sections are marked and can be skipped in a first reading or in undergraduate courses...

  8. Bias-Correction in Vector Autoregressive Models: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Tom Engsted

    2014-03-01

    Full Text Available We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that when the model is stationary this simple bias formula compares very favorably to bootstrap bias-correction, both in terms of bias and mean squared error. In non-stationary models, the analytical bias formula performs noticeably worse than bootstrapping. Both methods yield a notable improvement over ordinary least squares. We pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space when correcting for bias. Finally, we consider a recently proposed reduced-bias weighted least squares estimator, and we find that it compares very favorably in non-stationary models.
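
    As an aside for orientation, the sketch below illustrates the bootstrap bias-correction idea that the abstract compares against the analytical formula, for a VAR(1) fitted by OLS. It is an editor-provided illustration, not the authors' code; all names and parameter values are placeholders and only NumPy is assumed.

```python
import numpy as np

def fit_var1(Y):
    """OLS estimate of A in Y_t = A @ Y_{t-1} + e_t (zero-mean data)."""
    X, Z = Y[1:], Y[:-1]
    A = np.linalg.lstsq(Z, X, rcond=None)[0].T
    return A, X - Z @ A.T                      # coefficients and residuals

def bootstrap_bias_correct(Y, n_boot=500, seed=0):
    """Residual-bootstrap bias correction of the VAR(1) coefficient matrix."""
    rng = np.random.default_rng(seed)
    A_hat, resid = fit_var1(Y)
    T = len(Y)
    A_sum = np.zeros_like(A_hat)
    for _ in range(n_boot):
        e = resid[rng.integers(0, len(resid), size=T - 1)]
        Yb = np.zeros_like(Y)
        Yb[0] = Y[0]
        for t in range(1, T):
            Yb[t] = A_hat @ Yb[t - 1] + e[t - 1]
        A_sum += fit_var1(Yb)[0]
    bias = A_sum / n_boot - A_hat
    return A_hat - bias                        # bias-corrected estimate

# toy bivariate example with known coefficients
rng = np.random.default_rng(1)
A_true = np.array([[0.6, 0.1], [0.0, 0.8]])
Y = np.zeros((200, 2))
for t in range(1, 200):
    Y[t] = A_true @ Y[t - 1] + 0.5 * rng.standard_normal(2)
print(bootstrap_bias_correct(Y))
```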

  9. A model ecosystem experiment and its computational simulation studies

    International Nuclear Information System (INIS)

    Doi, M.

    2002-01-01

    A simplified microbial model ecosystem and its computer simulation model are introduced as eco-toxicity tests for the assessment of environmental responses to environmental impacts. To take the effects on the interactions between species and the environment into account, one option is to select a keystone species on the basis of ecological knowledge and to put it in a single-species toxicity test. Another option proposed here is to frame the eco-toxicity tests as an experimental micro-ecosystem study and a theoretical model ecosystem analysis. With these tests, stressors that are more harmful to ecosystems should be replaced with less harmful ones on the basis of unified measures. Management of radioactive materials, chemicals, hyper-eutrophication, and other artificial disturbances of the ecosystem should be discussed consistently from the unified viewpoint of environmental protection. (N.C.)

  10. System studies for micro grid design: modeling and simulation examples

    Energy Technology Data Exchange (ETDEWEB)

    Rosa, Arlei Lucas S. [Federal University of Juiz de Fora (UFJF), MG (Brazil); Ribeiro, Paulo F. [Calvin College, Grand Rapids, MI (United States). Electrical Engineering Dept.

    2009-07-01

    The search for new energy sources to replace or supplement the existing power plants of the traditional system leads to changes in the concepts of energy generation and consumption. In this context, the concept of the Microgrid has been developed and opens the opportunity for local power generation. The Microgrid can operate connected to the existing distribution network, increasing the reliability and safety of the system. Control measures and electronic interfaces are employed to maintain the network as a single, strong unit under any perturbation. This paper presents the typical system studies required to investigate the performance of a Microgrid operating under different system conditions (e.g. interconnected or isolated from the utility grid and under system disturbance). Load flow and electromagnetic transient studies are used for modeling and simulation of a typical Microgrid configuration. (author)

  11. Bias-correction in vector autoregressive models: A simulation study

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard

    We analyze and compare the properties of various methods for bias-correcting parameter estimates in vector autoregressions. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that this simple...... and easy-to-use analytical bias formula compares very favorably to the more standard but also more computer intensive bootstrap bias-correction method, both in terms of bias and mean squared error. Both methods yield a notable improvement over both OLS and a recently proposed WLS estimator. We also...... of pushing an otherwise stationary model into the non-stationary region of the parameter space during the process of correcting for bias....

  12. Computer simulation study of water using a fluctuating charge model

    Indian Academy of Sciences (India)

    Unknown

    Typically, the simulated diffusion constants are larger, and relaxation times smaller than .... where λi is the Lagrange multiplier for the charge neutrality constraint. As the .... For a geometrically rigid model such as SPC, the integral turns out to ...

  13. A Theoretical Study of Subsurface Drainage Model Simulation of ...

    African Journals Online (AJOL)

    A three-dimensional variable-density groundwater flow model, the SEAWAT model, was used to assess the influence of subsurface drain spacing, evapotranspiration and irrigation water quality on salt concentration at the base of the root zone, leaching and drainage in salt affected irrigated land. The study was carried out ...

  14. Shuttle/TDRSS modelling and link simulation study

    Science.gov (United States)

    Braun, W. R.; Mckenzie, T. M.; Biederman, L.; Lindsey, W. C.

    1979-01-01

    A Shuttle/TDRSS S-band and Ku-band link simulation package called LinCsim was developed for the evaluation of link performance for specific Shuttle signal designs. The link models were described in detail and the transmitter distortion parameters or user constraints were carefully defined. The overall link degradation (excluding hardware degradations) relative to an ideal BPSK channel was given for various sets of user constraint values. The performance sensitivity to each individual user constraint was then illustrated. The effect of excessive Spacelab clock jitter on the return link BER performance was also investigated, as was the problem of subcarrier recovery for the K-band Shuttle return link signal.
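
    As a generic illustration (not the LinCsim implementation), degradation figures quoted relative to an ideal BPSK channel can be translated into a bit-error rate by shifting the operating Eb/N0; the values below are placeholders and only the Python standard library is assumed.

```python
import math

def bpsk_ber(ebn0_db):
    """Ideal BPSK bit-error rate over AWGN: Pb = 0.5 * erfc(sqrt(Eb/N0))."""
    return 0.5 * math.erfc(math.sqrt(10 ** (ebn0_db / 10)))

# a link degradation of d dB moves the operating point d dB to the left
for loss_db in (0.0, 1.5, 3.0):
    print(f"degradation {loss_db:.1f} dB -> BER {bpsk_ber(9.6 - loss_db):.2e}")
```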

  15. Impact of atmospheric model resolution on simulation of ENSO feedback processes: a coupled model study

    Science.gov (United States)

    Hua, Lijuan; Chen, Lin; Rong, Xinyao; Su, Jingzhi; Wang, Lu; Li, Tim; Yu, Yongqiang

    2018-03-01

    This study examines El Niño-Southern Oscillation (ENSO)-related air-sea feedback processes in a coupled general circulation model (CGCM) to gauge model errors and pin down their sources in ENSO simulation. Three horizontal resolutions of the atmospheric component (T42, T63 and T106) of the CGCM are used to investigate how the simulated ENSO behaviors are affected by the resolution. We find that air-sea feedback processes in the three experiments mainly differ in terms of both thermodynamic and dynamic feedbacks. We also find that these processes are simulated more reasonably in the highest resolution version than in the other two lower resolution versions. The difference in the thermodynamic feedback arises from the difference in the shortwave-radiation (SW) feedback. Due to the severely (mildly) excessive cold tongue in the lower (higher) resolution version, the SW feedback is severely (mildly) underestimated. The main difference in the dynamic feedback processes lies in the thermocline feedback and the zonal-advection feedback, both of which are caused by the difference in the anomalous thermocline response to anomalous zonal wind stress. The difference in representing the anomalous thermocline response is attributed to the difference in meridional structure of zonal wind stress anomaly in the three simulations, which is linked to meridional resolution.

  16. A Theoretical Study of Subsurface Drainage Model Simulation of ...

    African Journals Online (AJOL)

    User

    Simulation of Drainage Flow and Leaching in Salt Affected ... mg/l with an impermeable layer at 10 m depth and impermeable field boundaries. .... The hydraulic where D is the free molecular diffusion ...... Dynamics of fluid in porous media.

  17. Modelling and Simulation of TCPAR for Power System Flow Studies

    Directory of Open Access Journals (Sweden)

    Narimen Lahaçani AOUZELLAG

    2012-12-01

    Full Text Available In this paper, the modelling of the Thyristor Controlled Phase Angle Regulator 'TCPAR' for power flow studies and the role of that modelling in the study of Flexible Alternating Current Transmission Systems 'FACTS' for power flow control are discussed. In order to investigate the impact of the TCPAR on power systems effectively, it is essential to formulate a correct and appropriate model for it. The TCPAR makes it possible to considerably increase or decrease the power carried by the line in which it is inserted, which makes it an ideal tool for this kind of use. Since the TCPAR does not inject any active power, it offers a good solution with low consumption. One of the adverse effects of the TCPAR is the voltage drop it causes in the network, although this is not significant. To overcome this disadvantage, it is enough to introduce a Static VAR Compensator 'SVC' into the electrical network, which compensates the voltage drop and brings the voltages back to an acceptable level.
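
    For orientation, the standard lossless-line approximation (a textbook relation, not taken from the paper) captures the mechanism described above: a TCPAR inserting a phase shift sigma changes the active power carried by the line to

    P_{12} \approx \frac{V_1 V_2}{X_{12}} \, \sin(\delta_1 - \delta_2 + \sigma),

    so adjusting sigma raises or lowers the transferred power without the device itself injecting active power.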

  18. Simulation study of a rectifying bipolar ion channel: Detailed model versus reduced model

    Directory of Open Access Journals (Sweden)

    Z. Ható

    2016-02-01

    Full Text Available We study a rectifying mutant of the OmpF porin ion channel using both all-atom and reduced models. The mutant was created by Miedema et al. [Nano Lett., 2007, 7, 2886] on the basis of the NP semiconductor diode, in which an NP junction is formed. The mutant contains a pore region with positive amino acids on the left-hand side and negative amino acids on the right-hand side. Experiments show that this mutant rectifies. Although we do not know the structure of this mutant, we can build an all-atom model for it on the basis of the structure of the wild-type channel. Interestingly, molecular dynamics simulations for this all-atom model do not produce rectification. A reduced model that contains only the important degrees of freedom (the positive and negative amino acids and free ions in an implicit solvent), on the other hand, exhibits rectification. Our calculations for the reduced model (using the Nernst-Planck equation coupled to Local Equilibrium Monte Carlo simulations) reveal a rectification mechanism that is different from that seen for semiconductor diodes. The basic reason is that the ions are different in nature from electrons and holes (they do not recombine). We provide explanations for the failure of the all-atom model, including the effect of all the other atoms in the system as a noise that inhibits the response of ions (which would be necessary for rectification) to the polarizing external field.

  19. Extremophiles survival to simulated space conditions: an astrobiology model study.

    Science.gov (United States)

    Mastascusa, V; Romano, I; Di Donato, P; Poli, A; Della Corte, V; Rotundi, A; Bussoletti, E; Quarto, M; Pugliese, M; Nicolaus, B

    2014-09-01

    In this work we investigated the ability of four extremophilic bacteria from the Archaea and Bacteria domains to resist the space environment by exposing them to extreme conditions of temperature, UV radiation, and desiccation coupled with the low pressure generated in a Mars conditions simulator. All the investigated extremophilic strains (namely Sulfolobus solfataricus, Haloterrigena hispanica, Thermotoga neapolitana and Geobacillus thermantarcticus) showed good resistance to the simulation of the temperature variation in space; on the other hand, irradiation with UV at 254 nm only slightly affected the growth of H. hispanica, G. thermantarcticus and S. solfataricus; finally, exposure to simulated Mars conditions showed that H. hispanica and G. thermantarcticus were resistant to desiccation and low pressure.

  20. Studies of turbulent round jets through experimentation, simulation, and modeling

    Science.gov (United States)

    Keedy, Ryan

    This thesis studies the physics of the turbulent round jet. In particular, it focuses on three different problems that have the turbulent round jet as their base flow. The first part of this thesis examines a compressible turbulent round jet at its sonic condition. We investigate the shearing effect such a jet has when impinging on a solid surface that is perpendicular to the flow direction. We report on experiments to evaluate the jet's ability to remove different types of explosive particles from a glass surface. Theoretical analysis revealed trends and enabled modeling to improve the predictability of particle removal for various jet conditions. The second part of thesis aims at developing a non-intrusive measurement technique for free-shear turbulent flows in nature. Most turbulent jet investigations in the literature, both in the laboratory and in the field, required specialized intrusive instrumentation and/or complex optical setups. There are many situations in naturally-occurring flows where the environment may prove too hostile or remote for existing instrumentation. We have developed a methodology for analyzing video of the exterior of a naturally-occurring flow and calculating the flow velocity. We found that the presence of viscosity gradients affects the velocity analysis. While these effects produce consistent, predictable changes, we became interested in the mechanism by which the viscosity gradients affect the mixing and development of the turbulent round jet. We conducted a stability analysis of the axisymmetric jet when a viscosity gradient is present. Finally, the third problem addressed in this thesis is the growth of liquid droplets by condensation in a turbulent round jet. A vapor-saturated turbulent jet issues into a cold, dry environment. The resulting mixing produces highly inhomogeneous regions of supersaturation, where droplets grow and evaporate. Non-linear interactions between the droplet growth rate and the supersaturation field make

  1. An IT-enabled supply chain model: a simulation study

    Science.gov (United States)

    Cannella, Salvatore; Framinan, Jose M.; Barbosa-Póvoa, Ana

    2014-11-01

    During the last decades, supply chain collaboration practices and the underlying enabling technologies have evolved from the classical electronic data interchange (EDI) approach to web-based and radio frequency identification (RFID)-enabled collaboration. In this field, most of the literature has focused on the study of optimal parameters for reducing the total cost of suppliers, by adopting operational research (OR) techniques. Herein we are interested in showing that the considered information technology (IT)-enabled structure is resilient, that is, it works well across a reasonably broad range of parameter settings. By adopting a methodological approach based on system dynamics, we study a multi-tier collaborative supply chain. Results show that the IT-enabled supply chain improves operational performance and customer service level. Nonetheless, the benefits for geographically dispersed networks are of smaller magnitude.

  2. 3D simulation studies of tokamak plasmas using MHD and extended-MHD models

    International Nuclear Information System (INIS)

    Park, W.; Chang, Z.; Fredrickson, E.; Fu, G.Y.

    1996-01-01

    The M3D (Multi-level 3D) tokamak simulation project aims at the simulation of tokamak plasmas using a multi-level tokamak code package. Several current applications using MHD and Extended-MHD models are presented: high-β disruption studies in reversed shear plasmas using the MHD level MH3D code, ω*i stabilization and nonlinear island saturation of the TAE mode using the hybrid particle/MHD level MH3D-K code, and unstructured mesh MH3D++ code studies. In particular, three internal mode disruption mechanisms are identified from simulation results which agree well with experimental data.

  3. Co-producing simulation models to inform resource management: a case study from southwest South Dakota

    Science.gov (United States)

    Miller, Brian W.; Symstad, Amy J.; Frid, Leonardo; Fisichelli, Nicholas A.; Schuurman, Gregor W.

    2017-01-01

    Simulation models can represent complexities of the real world and serve as virtual laboratories for asking “what if…?” questions about how systems might respond to different scenarios. However, simulation models have limited relevance to real-world applications when designed without input from people who could use the simulated scenarios to inform their decisions. Here, we report on a state-and-transition simulation model of vegetation dynamics that was coupled to a scenario planning process and co-produced by researchers, resource managers, local subject-matter experts, and climate change adaptation specialists to explore potential effects of climate scenarios and management alternatives on key resources in southwest South Dakota. Input from management partners and local experts was critical for representing key vegetation types, bison and cattle grazing, exotic plants, fire, and the effects of climate change and management on rangeland productivity and composition given the paucity of published data on many of these topics. By simulating multiple land management jurisdictions, climate scenarios, and management alternatives, the model highlighted important tradeoffs between grazer density and vegetation composition, as well as between the short- and long-term costs of invasive species management. It also pointed to impactful uncertainties related to the effects of fire and grazing on vegetation. More broadly, a scenario-based approach to model co-production bracketed the uncertainty associated with climate change and ensured that the most important (and impactful) uncertainties related to resource management were addressed. This cooperative study demonstrates six opportunities for scientists to engage users throughout the modeling process to improve model utility and relevance: (1) identifying focal dynamics and variables, (2) developing conceptual model(s), (3) parameterizing the simulation, (4) identifying relevant climate scenarios and management

  4. Study of Monte Carlo Simulation Method for Methane Phase Diagram Prediction using Two Different Potential Models

    KAUST Repository

    Kadoura, Ahmad

    2011-06-06

    Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical one. A molecular simulation approach, particularly Monte Carlo simulation, was employed to create these isotherms, working with both canonical and Gibbs ensembles. Experiments in the canonical ensemble with each model were conducted to estimate pressures at a range of temperatures above the methane critical temperature. Results were collected and compared to experimental data existing in the literature; both models showed an elegant agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L-J model only. Upon comparing results with experimental ones, a good fit was obtained with small deviations. The work was further developed by adding some statistical studies in order to achieve a better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be; hence further applications to more complicated systems are considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps avoid all kinds of problems caused by its dissolution in gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
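
    The sketch below is an editor-provided, minimal illustration of a canonical-ensemble (NVT) Metropolis move with the Lennard-Jones pair potential; the exp-6 model, the Gibbs-ensemble machinery and the actual simulation parameters used in the work are not reproduced, and all values shown are placeholders.

```python
import numpy as np

def total_lj_energy(pos, box, epsilon=1.0, sigma=1.0):
    """Total Lennard-Jones pair energy with minimum-image periodic boundaries."""
    e = 0.0
    for i in range(len(pos) - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)              # minimum image convention
        r2 = (d * d).sum(axis=1)
        sr6 = (sigma ** 2 / r2) ** 3
        e += (4.0 * epsilon * (sr6 ** 2 - sr6)).sum()
    return e

def metropolis_step(pos, box, beta, rng, max_disp=0.1):
    """One NVT trial move: displace a random particle, accept with the Metropolis rule.
    (Recomputing the full energy is O(N^2); fine for a sketch, not for production.)"""
    i = rng.integers(len(pos))
    trial = pos.copy()
    trial[i] = (trial[i] + rng.uniform(-max_disp, max_disp, 3)) % box
    dE = total_lj_energy(trial, box) - total_lj_energy(pos, box)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        return trial, True
    return pos, False

rng = np.random.default_rng(0)
box, beta = 5.0, 1.0 / 1.5                        # reduced units, T* = 1.5
pos = rng.uniform(0, box, (32, 3))
pos, accepted = metropolis_step(pos, box, beta, rng)
```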

  5. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    Science.gov (United States)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response planning. The emergency evacuation of large commercial shopping areas, as typical service systems, is one of the hot research topics. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model has been proposed, and the methodology has been examined within the context of a case study involving evacuation within a commercial shopping mall. Pedestrian walking is based on Cellular Automata and the event-driven model. In this paper, the event-driven model is adopted to simulate pedestrian movement patterns, and the simulation process is divided into a normal situation and emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. For the simulation of pedestrian movement routes, the model takes into account the purchase intention of customers and the density of pedestrians. Based on the evacuation model combining Cellular Automata with a Dynamic Floor Field and the event-driven model, we can reflect the behavior characteristics of customers and clerks in normal and emergency evacuation situations. The distribution of individual evacuation times as a function of initial positions and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model using the combination of Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of shopping malls.
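
    The sketch below is an editor-provided, minimal illustration of the static part of a floor-field cellular-automaton update for a single pedestrian; the dynamic floor field, the purchase-intention logic and the event-driven layers described in the abstract are not reproduced.

```python
import numpy as np

def static_floor_field(shape, exit_cell):
    """Distance-to-exit field: pedestrians move toward decreasing values."""
    rows, cols = np.indices(shape)
    return np.hypot(rows - exit_cell[0], cols - exit_cell[1])

def step(pos, field, occupied):
    """Move a pedestrian to the free neighbouring cell with the lowest field value."""
    best, best_val = pos, field[pos]
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        cand = (pos[0] + dr, pos[1] + dc)
        if (0 <= cand[0] < field.shape[0] and 0 <= cand[1] < field.shape[1]
                and cand not in occupied and field[cand] < best_val):
            best, best_val = cand, field[cand]
    return best

field = static_floor_field((10, 20), exit_cell=(5, 19))
pedestrians = [(2, 3), (7, 8)]
pedestrians = [step(p, field, set(pedestrians) - {p}) for p in pedestrians]
```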

  6. Flood Zoning Simulation by HEC-RAS Model (Case Study: Johor River-Kota Tinggi Region)

    OpenAIRE

    ShahiriParsa, Ahmad; Heydari, Mohammad; Sadeghian, Mohammad Sadegh; Moharrampour, Mahdi

    2015-01-01

    Flooding of rivers has caused many human and financial losses. Hence, studies and research on the nature of rivers are inevitable. However, the behavior of rivers has many complexities and, in this respect, computer models are efficient tools for studying and simulating the behavior of rivers at the least possible cost. In this paper, the one-dimensional model HEC-RAS was used to simulate flood zoning in the Kota Tinggi district in Johor state. Implementation processes of the zoning on ca...

  7. Static, rheological and mechanical properties of polymer nanocomposites studied by computer modeling and simulation.

    Science.gov (United States)

    Liu, Jun; Zhang, Liqun; Cao, Dapeng; Wang, Wenchuan

    2009-12-28

    Polymer nanocomposites (PNCs) often exhibit excellent mechanical, thermal, electrical and optical properties, because they combine the performances of both polymers and inorganic or organic nanoparticles. Recently, computer modeling and simulation are playing an important role in exploring the reinforcement mechanism of the PNCs and even the design of functional PNCs. This report provides an overview of the progress made in past decades in the investigation of the static, rheological and mechanical properties of polymer nanocomposites studied by computer modeling and simulation. Emphases are placed on exploring the mechanisms at the molecular level for the dispersion of nanoparticles in nanocomposites, the effects of nanoparticles on chain conformation and glass transition temperature (T(g)), as well as viscoelastic and mechanical properties. Finally, some future challenges and opportunities in computer modeling and simulation of PNCs are addressed.

  8. A Theory for the Neural Basis of Language Part 2: Simulation Studies of the Model

    Science.gov (United States)

    Baron, R. J.

    1974-01-01

    Computer simulation studies of the proposed model are presented. Processes demonstrated are (1) verbally directed recall of visual experience; (2) understanding of verbal information; (3) aspects of learning and forgetting; (4) the dependence of recognition and understanding on context; and (5) elementary concepts of sentence production. (Author)

  9. Atmospheric models in the numerical simulation system (SPEEDI-MP) for environmental studies

    International Nuclear Information System (INIS)

    Nagai, Haruyasu; Terada, Hiroaki

    2007-01-01

    As a nuclear emergency response system, numerical models to predict the atmospheric dispersion of radionuclides have been developed at the Japan Atomic Energy Agency (JAEA). Evolving these models by incorporating new schemes for physical processes and up-to-date computational technologies, a numerical simulation system, which consists of dynamical models and material transport models for the atmospheric, terrestrial, and oceanic environments, has been constructed and applied to various environmental studies. In this system, the combination of a non-hydrostatic atmospheric dynamic model and a Lagrangian particle dispersion model is used for the emergency response system. The utilization of detailed meteorological fields from the atmospheric model improves the model performance for diffusion and deposition calculations. It also calculates a large-area domain with coarse resolution and a local-area domain with high resolution simultaneously. The performance of the new model system was evaluated using measurements of the surface deposition of 137Cs over Europe during the Chernobyl accident. (author)

  10. Simulation modeling and arena

    CERN Document Server

    Rossetti, Manuel D

    2015-01-01

    Emphasizes a hands-on approach to learning statistical analysis and model building through the use of comprehensive examples, problems sets, and software applications With a unique blend of theory and applications, Simulation Modeling and Arena®, Second Edition integrates coverage of statistical analysis and model building to emphasize the importance of both topics in simulation. Featuring introductory coverage on how simulation works and why it matters, the Second Edition expands coverage on static simulation and the applications of spreadsheets to perform simulation. The new edition als

  11. 3D simulation studies of tokamak plasmas using MHD and extended-MHD models

    International Nuclear Information System (INIS)

    Park, W.; Chang, Z.; Fredrickson, E.; Fu, G.Y.; Pomphrey, N.; Sugiyama, L.E.

    1997-01-01

    The M3D (Multi-level 3D) tokamak simulation project aims at the simulation of tokamak plasmas using a multi-level tokamak code package. Several current applications using MHD and Extended-MHD models are presented: high-β disruption studies in reversed shear plasmas using the MHD level MH3D code, ω*i stabilization and nonlinear island rotation studies using the two-fluid level MH3D-T code, studies of nonlinear saturation of TAE modes using the hybrid particle/MHD level MH3D-K code, and unstructured mesh MH3D++ code studies. In particular, three internal mode disruption mechanisms are identified from simulation results which agree well with experimental data.

  12. Computational study of nonlinear plasma waves. I. Simulation model and monochromatic wave propagation

    International Nuclear Information System (INIS)

    Matsuda, Y.; Crawford, F.W.

    1975-01-01

    An economical low-noise plasma simulation model originated by Denavit is applied to a series of problems associated with electrostatic wave propagation in a one-dimensional, collisionless, Maxwellian plasma, in the absence of a magnetic field. The model is described and tested, first in the absence of an applied signal, and then with a small amplitude perturbation. These tests serve to establish the low-noise features of the model, and to verify the theoretical linear dispersion relation at wave energy levels as low as 10^-6 of the plasma thermal energy. Better quantitative results are obtained, for comparable computing time, than can be obtained by conventional particle simulation models, or by direct solution of the Vlasov equation. The method is then used to study the propagation of an essentially monochromatic plane wave. Results on amplitude oscillation and nonlinear frequency shift are compared with available theories.
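
    For reference, the theoretical linear dispersion relation mentioned above is, for electrostatic waves in a field-free Maxwellian plasma (a standard textbook form, not reproduced from the paper),

    1 + \frac{1}{k^{2} \lambda_{D}^{2}} \left[ 1 + \zeta Z(\zeta) \right] = 0, \qquad \zeta = \frac{\omega}{\sqrt{2}\, k\, v_{t}},

    where Z is the plasma dispersion function; its weakly damped branch reduces to the Bohm-Gross result \omega^{2} \approx \omega_{pe}^{2} (1 + 3 k^{2} \lambda_{D}^{2}).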

  13. Simulation modelling as a tool for knowledge mobilisation in health policy settings: a case study protocol.

    Science.gov (United States)

    Freebairn, L; Atkinson, J; Kelly, P; McDonnell, G; Rychetnik, L

    2016-09-21

    Evidence-informed decision-making is essential to ensure that health programs and services are effective and offer value for money; however, barriers to the use of evidence persist. Emerging systems science approaches and advances in technology are providing new methods and tools to facilitate evidence-based decision-making. Simulation modelling offers a unique tool for synthesising and leveraging existing evidence, data and expert local knowledge to examine, in a robust, low risk and low cost way, the likely impact of alternative policy and service provision scenarios. This case study will evaluate participatory simulation modelling to inform the prevention and management of gestational diabetes mellitus (GDM). The risks associated with GDM are well recognised; however, debate remains regarding diagnostic thresholds and whether screening and treatment to reduce maternal glucose levels reduce the associated risks. A diagnosis of GDM may provide a leverage point for multidisciplinary lifestyle modification interventions. This research will apply and evaluate a simulation modelling approach to understand the complex interrelation of factors that drive GDM rates, test options for screening and interventions, and optimise the use of evidence to inform policy and program decision-making. The study design will use mixed methods to achieve the objectives. Policy, clinical practice and research experts will work collaboratively to develop, test and validate a simulation model of GDM in the Australian Capital Territory (ACT). The model will be applied to support evidence-informed policy dialogues with diverse stakeholders for the management of GDM in the ACT. Qualitative methods will be used to evaluate simulation modelling as an evidence synthesis tool to support evidence-based decision-making. Interviews and analysis of workshop recordings will focus on the participants' engagement in the modelling process; perceived value of the participatory process, perceived

  14. Aviation Safety Simulation Model

    Science.gov (United States)

    Houser, Scott; Yackovetsky, Robert (Technical Monitor)

    2001-01-01

    The Aviation Safety Simulation Model is a software tool that enables users to configure a terrain, a flight path, and an aircraft and simulate the aircraft's flight along the path. The simulation monitors the aircraft's proximity to terrain obstructions, and reports when the aircraft violates accepted minimum distances from an obstruction. This model design facilitates future enhancements to address other flight safety issues, particularly air and runway traffic scenarios. This report shows the user how to build a simulation scenario and run it. It also explains the model's output.
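
    The sketch below is an editor-provided, generic illustration (not the tool's actual implementation) of the core check the abstract describes: flagging points along a flight path that violate a minimum clearance above a terrain grid; all names and values are placeholders.

```python
import numpy as np

def clearance_violations(path, terrain, cell_size, min_clearance):
    """Return indices of path points closer to the terrain than min_clearance.

    path      : (N, 3) array of x, y, altitude
    terrain   : 2-D array of ground elevations on a regular grid
    cell_size : grid spacing in the same units as x and y
    """
    i = np.clip((path[:, 1] / cell_size).astype(int), 0, terrain.shape[0] - 1)
    j = np.clip((path[:, 0] / cell_size).astype(int), 0, terrain.shape[1] - 1)
    clearance = path[:, 2] - terrain[i, j]
    return np.where(clearance < min_clearance)[0]

terrain = np.zeros((100, 100))
terrain[40:60, 40:60] = 500.0                     # a plateau as the obstruction
path = np.column_stack([np.linspace(0, 990, 50),  # straight, level flight path
                        np.linspace(0, 990, 50),
                        np.full(50, 600.0)])
print(clearance_violations(path, terrain, cell_size=10.0, min_clearance=150.0))
```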

  15. A General Simulation Framework for Supply Chain Modeling: State of the Art and Case Study

    OpenAIRE

    Antonio Cimino; Francesco Longo; Giovanni Mirabelli

    2010-01-01

    Nowadays there is a large availability of discrete event simulation software that can be easily used in different domains: from industry to supply chain, from healthcare to business management, from training to complex systems design. Simulation engines of commercial discrete event simulation software use specific rules and logics for simulation time and events management. Difficulties and limitations come up when commercial discrete event simulation software is used for modeling complex rea...

  16. A Simulation Study of the Radiation-Induced Bystander Effect: Modeling with Stochastically Defined Signal Reemission

    Directory of Open Access Journals (Sweden)

    Kohei Sasaki

    2012-01-01

    Full Text Available The radiation-induced bystander effect (RIBE) has been experimentally observed for different types of radiation, cell types, and cell culture conditions. However, the behavior of signal transmission between unirradiated and irradiated cells is not well known. In this study, we have developed a new model for the RIBE based on the diffusion of soluble factors in cell cultures using a Monte Carlo technique. The model involves the signal emission probability from bystander cells following Poisson statistics. Simulations with this model show that the spatial configuration of the bystander cells agrees well with that of corresponding experiments, where the optimal emission probability is estimated through a large number of simulation runs. It was suggested that the most likely probability falls within 0.63–0.92 for mean numbers of emission signals ranging from 1.0 to 2.5.
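
    The sketch below is an editor-provided, minimal illustration of the stochastic signal-emission idea: emitting cells release a Poisson-distributed number of soluble-factor signals that diffuse to neighbouring cells. The emission probability, mean signal number, diffusion width and geometry are placeholders, not the paper's estimated values.

```python
import numpy as np

rng = np.random.default_rng(42)

def signal_counts(emitting_xy, receiving_xy, emit_prob=0.8, mean_signals=1.5,
                  diffusion_sigma=50.0):
    """Count soluble-factor signals reaching each receiving cell.

    Each emitting cell releases, with probability emit_prob, a Poisson(mean_signals)
    number of signals; each signal lands at a Gaussian-diffused position and is
    credited to the nearest receiving cell.
    """
    hits = np.zeros(len(receiving_xy), dtype=int)
    for x, y in emitting_xy:
        if rng.random() > emit_prob:
            continue
        for _ in range(rng.poisson(mean_signals)):
            landing = np.array([x, y]) + rng.normal(0.0, diffusion_sigma, 2)
            hits[np.argmin(np.linalg.norm(receiving_xy - landing, axis=1))] += 1
    return hits

emitters = rng.uniform(0, 1000, (20, 2))          # positions in micrometres
receivers = rng.uniform(0, 1000, (200, 2))
print(signal_counts(emitters, receivers).sum(), "signals received")
```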

  17. Cross-flow turbines: physical and numerical model studies towards improved array simulations

    Science.gov (United States)

    Wosnik, M.; Bachant, P.

    2015-12-01

    Cross-flow, or vertical-axis turbines, show potential in marine hydrokinetic (MHK) and wind energy applications. As turbine designs mature, the research focus is shifting from individual devices towards improving turbine array layouts for maximizing overall power output, i.e., minimizing wake interference for axial-flow turbines, or taking advantage of constructive wake interaction for cross-flow turbines. Numerical simulations are generally better suited to explore the turbine array design parameter space, as physical model studies of large arrays at large model scale would be expensive. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries, the turbines' interaction with the energy resource needs to be parameterized, or modeled. Most models in use today, e.g. actuator disk, are not able to predict the unique wake structure generated by cross-flow turbines. Experiments were carried out using a high-resolution turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier--Stokes models. The ALM predicts turbine loading with the blade element method combined with sub-models for dynamic stall and flow curvature. The open-source software is written as an extension library for the OpenFOAM CFD package, which allows the ALM body force to be applied to their standard RANS and LES solvers. Turbine forcing is also applied to volume of fluid (VOF) models, e.g., for predicting free surface effects on submerged MHK devices. An

  18. Theoretical model simulations for the global Thermospheric Mapping Study (TMS) periods

    Science.gov (United States)

    Rees, D.; Fuller-Rowell, T. J.

    Theoretical and semiempirical models of the solar UV/EUV and of the geomagnetic driving forces affecting the terrestrial mesosphere and thermosphere have been used to generate a series of representative numerical time-dependent and global models of the thermosphere, for the range of solar and geomagnetic activity levels which occurred during the three Thermospheric Mapping Study periods. The simulations obtained from these numerical models are compared with observations, and with the results of semiempirical models of the thermosphere. The theoretical models provide a record of the magnitude of the major driving forces which affected the thermosphere during the study periods, and a baseline against which the actual observed structure and dynamics can be compared.

  19. Clinical prediction in defined populations: a simulation study investigating when and how to aggregate existing models

    Directory of Open Access Journals (Sweden)

    Glen P. Martin

    2017-01-01

    Full Text Available Abstract Background Clinical prediction models (CPMs are increasingly deployed to support healthcare decisions but they are derived inconsistently, in part due to limited data. An emerging alternative is to aggregate existing CPMs developed for similar settings and outcomes. This simulation study aimed to investigate the impact of between-population-heterogeneity and sample size on aggregating existing CPMs in a defined population, compared with developing a model de novo. Methods Simulations were designed to mimic a scenario in which multiple CPMs for a binary outcome had been derived in distinct, heterogeneous populations, with potentially different predictors available in each. We then generated a new ‘local’ population and compared the performance of CPMs developed for this population by aggregation, using stacked regression, principal component analysis or partial least squares, with redevelopment from scratch using backwards selection and penalised regression. Results While redevelopment approaches resulted in models that were miscalibrated for local datasets of less than 500 observations, model aggregation methods were well calibrated across all simulation scenarios. When the size of local data was less than 1000 observations and between-population-heterogeneity was small, aggregating existing CPMs gave better discrimination and had the lowest mean square error in the predicted risks compared with deriving a new model. Conversely, given greater than 1000 observations and significant between-population-heterogeneity, then redevelopment outperformed the aggregation approaches. In all other scenarios, both aggregation and de novo derivation resulted in similar predictive performance. Conclusion This study demonstrates a pragmatic approach to contextualising CPMs to defined populations. When aiming to develop models in defined populations, modellers should consider existing CPMs, with aggregation approaches being a suitable modelling
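
    The sketch below is an editor-provided, minimal illustration of the stacked-regression idea for aggregating existing prediction models on a local dataset: the existing models' predictions become covariates and non-negative weights are fitted to the local outcomes. It works on the probability scale with SciPy's NNLS, a simplification of the approach described in the abstract; the two "published" models are invented for illustration.

```python
import numpy as np
from scipy.optimize import nnls

def aggregate_by_stacking(existing_models, X_local, y_local):
    """Fit non-negative stacking weights for existing CPMs on local data.

    existing_models: callables mapping a covariate matrix to predicted
                     event probabilities (the previously published CPMs).
    """
    P = np.column_stack([m(X_local) for m in existing_models])
    weights, _ = nnls(P, y_local.astype(float))   # non-negative least squares
    return lambda X: np.column_stack([m(X) for m in existing_models]) @ weights

def cpm_a(X):   # hypothetical published model using the first two covariates
    return 1.0 / (1.0 + np.exp(-(0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.2)))

def cpm_b(X):   # hypothetical published model using a different covariate
    return 1.0 / (1.0 + np.exp(-(1.1 * X[:, 2] - 0.4)))

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 3))                 # simulated local dataset
y = (rng.random(300) < 0.6 * cpm_a(X) + 0.4 * cpm_b(X)).astype(int)
local_model = aggregate_by_stacking([cpm_a, cpm_b], X, y)
print(local_model(X[:5]))
```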

  20. Modeling and Simulation Optimization and Feasibility Studies for the Neutron Detection without Helium-3 Project

    Energy Technology Data Exchange (ETDEWEB)

    Ely, James H.; Siciliano, Edward R.; Swinhoe, Martyn T.; Lintereur, Azaree T.

    2013-01-01

    This report details the results of the modeling and simulation work accomplished for the ‘Neutron Detection without Helium-3’ project during the 2011 and 2012 fiscal years. The primary focus of the project is to investigate commercially available technologies that might be used in safeguards applications in the relatively near term. Other technologies that are being developed may be more applicable in the future, but are outside the scope of this study.

  1. Evaluation of soil flushing of complex contaminated soil: An experimental and modeling simulation study

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Sung Mi; Kang, Christina S. [Department of Environmental Engineering, Konkuk University, 120 Neungdong-ro, Gwangjin-gu, Seoul 143-701 (Korea, Republic of); Kim, Jonghwa [Department of Industrial Engineering, Konkuk University, 120 Neungdong-ro, Gwangjin-gu, Seoul 143-701 (Korea, Republic of); Kim, Han S., E-mail: hankim@konkuk.ac.kr [Department of Environmental Engineering, Konkuk University, 120 Neungdong-ro, Gwangjin-gu, Seoul 143-701 (Korea, Republic of)

    2015-04-28

    Highlights: • Remediation of complex contaminated soil achieved by sequential soil flushing. • Removal of Zn, Pb, and heavy petroleum oils using 0.05 M citric acid and 2% SDS. • Unified desorption distribution coefficients modeled and experimentally determined. • Nonequilibrium models for the transport behavior of complex contaminants in soils. - Abstract: The removal of heavy metals (Zn and Pb) and heavy petroleum oils (HPOs) from a soil with complex contamination was examined by soil flushing. Desorption and transport behaviors of the complex contaminants were assessed by batch and continuous flow reactor experiments and through modeling simulations. Flushing a one-dimensional flow column packed with complex contaminated soil sequentially with citric acid then a surfactant resulted in the removal of 85.6% of Zn, 62% of Pb, and 31.6% of HPO. The desorption distribution coefficients, K_Ubatch and K_Lbatch, converged to constant values as C_e increased. An equilibrium model (ADR) and nonequilibrium models (TSNE and TRNE) were used to predict the desorption and transport of complex contaminants. The nonequilibrium models demonstrated better fits with the experimental values obtained from the column test than the equilibrium model. The ranges of K_Ubatch and K_Lbatch were very close to those of K_Ufit and K_Lfit determined from model simulations. The parameters (R, β, ω, α, and f) determined from model simulations were useful for characterizing the transport of contaminants within the soil matrix. The results of this study provide useful information for the operational parameters of the flushing process for soils with complex contamination.
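
    For orientation, the widely used dimensionless two-site nonequilibrium (TSNE) formulation behind parameters such as R, beta and omega reads (a standard textbook form, not necessarily the exact equations used in the study):

    \beta R \frac{\partial C_1}{\partial T} = \frac{1}{P} \frac{\partial^{2} C_1}{\partial X^{2}} - \frac{\partial C_1}{\partial X} - \omega (C_1 - C_2), \qquad (1 - \beta) R \frac{\partial C_2}{\partial T} = \omega (C_1 - C_2),

    where C_1 and C_2 are dimensionless concentrations in the equilibrium and kinetic sorption domains, R is the retardation factor, P the column Peclet number, \beta the partitioning coefficient and \omega the dimensionless mass-transfer coefficient.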

  2. A participative and facilitative conceptual modelling framework for discrete event simulation studies in healthcare

    OpenAIRE

    Kotiadis, Kathy; Tako, Antuela; Vasilakis, Christos

    2014-01-01

    Existing approaches to conceptual modelling (CM) in discrete-event simulation do not formally support the participation of a group of stakeholders. Simulation in healthcare can benefit from stakeholder participation as it makes it possible to share multiple views and tacit knowledge from different parts of the system. We put forward a framework tailored to healthcare that supports the interaction of simulation modellers with a group of stakeholders to arrive at a common conceptual model. The fra...

  3. Simulation models: a currently indispensable tool in studies of the water-soil-plant-atmosphere continuum

    International Nuclear Information System (INIS)

    Lopez Seijas, Teresa; Gonzalez, Felicita; Cid, G.; Osorio, Maria de los A.; Ruiz, Maria Elena

    2008-01-01

    Full text: This work assesses the current use of simulation models as a useful and indispensable tool for advancing research on the processes of the water-soil-plant-atmosphere continuum. In recent years, many works have been reported in the literature in which these modeling tools are used to support the decision-making processes of companies or organizations in the agricultural sphere, in particular for the design of optimal irrigation and fertilization management strategies for crops. Some of the latest reported applications of water and solute transfer simulation models, mainly to nitrate leaching and groundwater contamination problems, are summarized. Important applications of crop growth simulation models for predicting the effects of different water stress conditions on yield are also summarized, and finally some other applications concerning the management of different irrigation technologies such as center pivots, surface irrigation and drip irrigation. The main work carried out in Cuba is also referenced. (author)

  4. A generic analytical foot rollover model for predicting translational ankle kinematics in gait simulation studies.

    Science.gov (United States)

    Ren, Lei; Howard, David; Ren, Luquan; Nester, Chris; Tian, Limei

    2010-01-19

    The objective of this paper is to develop an analytical framework for representing the ankle-foot kinematics by modelling the foot as a rollover rocker, which can not only be used as a generic tool for general gait simulation but also allows for case-specific modelling if required. Previously, the rollover models used in gait simulation have often been based on specific functions that have usually been of a simple form. In contrast, the analytical model described here is in a general form in which the effective foot rollover shape can be represented by any polar function ρ = ρ(φ). Furthermore, a normalized generic foot rollover model has been established based on a normative foot rollover shape dataset of 12 normal healthy subjects. To evaluate model accuracy, the predicted ankle motions and the centre of pressure (CoP) were compared with measurement data for both subject-specific and general cases. The results demonstrated that the ankle joint motions in both vertical and horizontal directions (relative RMSE approximately 10%) and the CoP (relative RMSE approximately 15% for most of the subjects) are accurately predicted over most of the stance phase (from 10% to 90% of stance). However, we found that the foot cannot be very accurately represented by a rollover model just after heel strike (HS) and just before toe off (TO), probably due to shear deformation of foot plantar tissues (ankle motion can occur without any foot rotation). The proposed foot rollover model can be used in both inverse and forward dynamics gait simulation studies and may also find applications in rehabilitation engineering. Copyright 2009 Elsevier Ltd. All rights reserved.
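
    The sketch below is an editor-provided numerical illustration of the rollover idea for the special case of a circular rocker, i.e. ρ(φ) held constant; the paper's general polar formulation and its normalised shape data are not reproduced, and the radius value is a placeholder.

```python
import numpy as np

def circular_rollover(theta, radius=0.25):
    """Ankle position and centre of pressure for a circular rocker rolling
    without slip on flat ground.

    theta  : foot/shank rotation angle in radians (0 at mid-stance)
    radius : effective rocker radius in metres (placeholder value)
    The ankle is idealised as the arc centre; for a circle it stays at height
    `radius` and translates with the contact point, whereas a general rho(phi)
    shape (as in the paper) makes the ankle and CoP trajectories differ.
    """
    theta = np.asarray(theta, dtype=float)
    cop_x = radius * theta                  # contact point advances by arc length
    ankle_x = radius * theta                # arc centre follows the contact point
    ankle_z = np.full_like(theta, radius)
    return ankle_x, ankle_z, cop_x

theta = np.linspace(-0.3, 0.4, 8)           # roughly heel-strike to late stance
print(np.column_stack(circular_rollover(theta)))
```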

  5. Pilot-model analysis and simulation study of effect of control task desired control response

    Science.gov (United States)

    Adams, J. J.; Gera, J.; Jaudon, J. B.

    1978-01-01

    A pilot model analysis was performed that relates pilot control compensation, pilot aircraft system response, and aircraft response characteristics for longitudinal control. The results show that a higher aircraft short period frequency is required to achieve superior pilot aircraft system response in an altitude control task than is required in an attitude control task. These results were confirmed by a simulation study of target tracking. It was concluded that the pilot model analysis provides a theoretical basis for determining the effect of control task on pilot opinions.

  6. A Study on Bipedal and Mobile Robot Behavior Through Modeling and Simulation

    Directory of Open Access Journals (Sweden)

    Nirmala Nirmala

    2015-05-01

    Full Text Available The purpose of this work is to study and analyze mobile robot behavior. To do this, a framework is adopted and developed for mobile and bipedal robots. The robots are designed, built, and run, proceeding from the development of the mechanical structure, through electronics and control integration, to the control software application. The behavior of these robots is difficult to observe and analyze qualitatively. To evaluate the design and behavior quality, modeling and simulation of the robot structure and its task capability are performed. The stepwise procedure for studying robot behavior is explained. Behavior case studies are conducted on the bipedal robots, transporter robot and Autonomous Guided Vehicle (AGV) developed at our institution. The experiments are conducted on these robots by adjusting their dynamic properties and/or surrounding environment. Validation is performed by comparing the simulation results and the real robot execution. The simulation gives a more idealistic behavior execution rather than a realistic one. Adjustments are made to fine-tune the simulation's parameters to provide a more realistic performance.

  7. Simulation in Complex Modelling

    DEFF Research Database (Denmark)

    Nicholas, Paul; Ramsgaard Thomsen, Mette; Tamke, Martin

    2017-01-01

    This paper will discuss the role of simulation in extended architectural design modelling. As a framing paper, the aim is to present and discuss the role of integrated design simulation and feedback between design and simulation in a series of projects under the Complex Modelling framework. Complex...... performance, engage with high degrees of interdependency and allow the emergence of design agency and feedback between the multiple scales of architectural construction. This paper presents examples for integrated design simulation from a series of projects including Lace Wall, A Bridge Too Far and Inflated...... Restraint developed for the research exhibition Complex Modelling, Meldahls Smedie Gallery, Copenhagen in 2016. Where the direct project aims and outcomes have been reported elsewhere, the aim for this paper is to discuss overarching strategies for working with design integrated simulation....

  8. An agent-based simulation model to study accountable care organizations.

    Science.gov (United States)

    Liu, Pai; Wu, Shinyi

    2016-03-01

    Creating accountable care organizations (ACOs) has been widely discussed as a strategy to control rapidly rising healthcare costs and improve quality of care; however, building an effective ACO is a complex process involving multiple stakeholders (payers, providers, patients) with their own interests. Also, implementation of an ACO is costly in terms of time and money. Immature design could cause safety hazards. Therefore, there is a need for analytical model-based decision-support tools that can predict the outcomes of different strategies to facilitate ACO design and implementation. In this study, an agent-based simulation model was developed to study ACOs that considers payers, healthcare providers, and patients as agents under the shared saving payment model of care for congestive heart failure (CHF), one of the most expensive causes of sometimes preventable hospitalizations. The agent-based simulation model has identified the critical determinants for the payment model design that can motivate provider behavior changes to achieve maximum financial and quality outcomes of an ACO. The results show nonlinear provider behavior change patterns corresponding to changes in payment model designs. The outcomes vary by providers with different quality or financial priorities, and are most sensitive to the cost-effectiveness of CHF interventions that an ACO implements. This study demonstrates an increasingly important method to construct a healthcare system analytics model that can help inform health policy and healthcare management decisions. The study also points out that the likely success of an ACO is interdependent with payment model design, provider characteristics, and cost and effectiveness of healthcare interventions.
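
    The sketch below is an editor-provided, purely illustrative agent-based loop for a shared-savings arrangement: provider agents choose an intervention intensity for CHF patients, yearly costs are simulated against a benchmark, and a share of any savings is returned. Every parameter and behaviour rule is an invented placeholder, not the study's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(7)

class Provider:
    """A provider agent under a shared-savings contract (illustrative only)."""

    def __init__(self, quality_weight):
        self.quality_weight = quality_weight   # how strongly the agent values quality
        self.intensity = 0.2                   # share of CHF patients in the intervention

    def run_year(self, n_patients=500, base_cost=12000.0,
                 intervention_cost=1500.0, hospitalization_reduction=0.3):
        treated = rng.binomial(n_patients, self.intensity)
        benchmark = n_patients * base_cost                 # payer's spending benchmark
        actual = (benchmark
                  - treated * base_cost * hospitalization_reduction
                  + treated * intervention_cost)
        return benchmark, actual, treated * intervention_cost

    def adapt(self, shared_saving, intervention_spending):
        # crude rule: expand the programme if shared savings plus the perceived
        # quality benefit cover what was spent on the intervention
        benefit = shared_saving + self.quality_weight * intervention_spending
        delta = 0.05 if benefit >= intervention_spending else -0.05
        self.intensity = float(np.clip(self.intensity + delta, 0.0, 1.0))

providers = [Provider(q) for q in (0.0, 0.5, 1.0)]        # different priorities
for year in range(5):
    for p in providers:
        benchmark, actual, spent = p.run_year()
        saving = max(0.0, benchmark - actual)
        p.adapt(shared_saving=0.5 * saving, intervention_spending=spent)
print([round(p.intensity, 2) for p in providers])
```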

  9. JPL Thermal Design Modeling Philosophy and NASA-STD-7009 Standard for Models and Simulations - A Case Study

    Science.gov (United States)

    Avila, Arturo

    2011-01-01

    The Standard JPL thermal engineering practice prescribes worst-case methodologies for design. In this process, environmental and key uncertain thermal parameters (e.g., thermal blanket performance, interface conductance, optical properties) are stacked in a worst case fashion to yield the most hot- or cold-biased temperature. Thus, these simulations would represent the upper and lower bounds. This, effectively, represents JPL thermal design margin philosophy. Uncertainty in the margins and the absolute temperatures is usually estimated by sensitivity analyses and/or by comparing the worst-case results with "expected" results. Applicability of the analytical model for specific design purposes along with any temperature requirement violations are documented in peer and project design review material. In 2008, NASA released NASA-STD-7009, Standard for Models and Simulations. The scope of this standard covers the development and maintenance of models, the operation of simulations, the analysis of the results, training, recommended practices, the assessment of the Modeling and Simulation (M&S) credibility, and the reporting of the M&S results. The Mars Exploration Rover (MER) project thermal control system M&S activity was chosen as a case study determining whether JPL practice is in line with the standard and to identify areas of non-compliance. This paper summarizes the results and makes recommendations regarding the application of this standard to JPL thermal M&S practices.
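
    The worst-case stacking described above can be contrasted with a root-sum-square (RSS) combination in a few lines; the sensitivities and uncertainties in this sketch are invented for illustration and are not MER values.

      import numpy as np

      # Invented temperature sensitivities (degC per unit parameter change) and
      # parameter uncertainties, e.g. blanket performance, interface conductance,
      # optical properties.
      sensitivity = np.array([3.0, -2.0, 1.5, 4.0])
      uncertainty = np.array([0.2, 0.3, 0.5, 0.1])

      contributions = sensitivity * uncertainty
      worst_case = np.sum(np.abs(contributions))   # stack every term unfavourably
      rss = np.sqrt(np.sum(contributions ** 2))    # statistical (RSS) combination

      print(f"worst-case stack {worst_case:.2f} degC vs RSS {rss:.2f} degC")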

  10. Scientific Modeling and simulations

    CERN Document Server

    Diaz de la Rubia, Tomás

    2009-01-01

    Showcases the conceptual advantages of modeling which, coupled with the unprecedented computing power of simulations, allows scientists to tackle the formidable problems of our society, such as the search for hydrocarbons, understanding the structure of a virus, or the intersection between simulations and real data in extreme environments

  11. Using ProModel as a simulation tools to assist plant layout design and planning: Case study plastic packaging factory

    OpenAIRE

    Pochamarn Tearwattanarattikal; Suwadee Namphacharoen; Chonthicha Chamrasporn

    2008-01-01

    This study concerns the application of a simulation model to assist decision making on capacity expansion and on plant layout design and planning. The plant layout design concept is developed first to create the physical layouts; the simulation model is then used to test the capability of the plant to meet various demand forecast scenarios. The study employed the ProModel package as a tool, using the model to compare performances in terms of % utilization, characteristics of WIP and ability to meet due dates....

  12. Study on driver model for hybrid truck based on driving simulator experimental results

    Directory of Open Access Journals (Sweden)

    Dam Hoang Phuc

    2018-04-01

    Full Text Available In this paper, a proposed car-following driver model, taking into account features of both the compensatory and the anticipatory model of human pedal operation, has been verified by driving simulator experiments with several real drivers. The comparison of computer simulations performed with the identified model parameters against the experimental results confirms the correctness of this mathematical driver model and of the identified parameters. The driver model is then coupled to a hybrid vehicle dynamics model, and moderate car-following maneuver simulations with various driver parameters are conducted to investigate the influence of driver parameters on vehicle dynamics response and fuel economy. Finally, the major driver parameters involved in the longitudinal control of drivers are clarified. Keywords: Driver model, Driver-vehicle closed-loop system, Car Following, Driving simulator/hybrid electric vehicle (B1
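
    A minimal car-following sketch may help picture the driver-vehicle closed-loop system being described; the proportional-derivative gains and pedal lag below are hypothetical and far simpler than the paper's compensatory/anticipatory pedal model.

      import numpy as np

      kp, kv, tau, dt = 0.1, 0.6, 0.3, 0.01   # hypothetical gains, pedal lag, time step
      desired_gap = 20.0                      # m

      t = np.arange(0.0, 60.0, dt)
      v_lead = 20.0 + 2.0 * np.sin(0.1 * t)   # moderate speed fluctuation of the leader
      x_lead = np.cumsum(v_lead) * dt

      x_f, v_f, a_f = -desired_gap, 20.0, 0.0
      gaps = []
      for i in range(t.size):
          gap = x_lead[i] - x_f
          a_cmd = kp * (gap - desired_gap) + kv * (v_lead[i] - v_f)
          a_f += dt / tau * (a_cmd - a_f)     # first-order lag mimicking pedal response
          v_f += a_f * dt
          x_f += v_f * dt
          gaps.append(gap)

      print(f"mean gap {np.mean(gaps):.1f} m, std {np.std(gaps):.2f} m")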

  13. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of the scientific practice of high importance and have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model’s relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model’s general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling and validation, which shares the common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  14. Automated Simulation Model Generation

    NARCIS (Netherlands)

    Huang, Y.

    2013-01-01

    One of today's challenges in the field of modeling and simulation is to model increasingly larger and more complex systems. Complex models take long to develop and incur high costs. With the advances in data collection technologies and more popular use of computer-aided systems, more data has become

  15. Neuronal model with distributed delay: analysis and simulation study for gamma distribution memory kernel.

    Science.gov (United States)

    Karmeshu; Gupta, Varun; Kadambari, K V

    2011-06-01

    A single neuronal model incorporating distributed delay (memory) is proposed. The stochastic model has been formulated as a Stochastic Integro-Differential Equation (SIDE) which results in the underlying process being non-Markovian. A detailed analysis of the model when the distributed delay kernel has exponential form (weak delay) has been carried out. The selection of the exponential kernel has enabled the transformation of the non-Markovian model to a Markovian model in an extended state space. For the study of First Passage Time (FPT) with exponential delay kernel, the model has been transformed to a system of coupled Stochastic Differential Equations (SDEs) in two-dimensional state space. Simulation studies of the SDEs provide insight into the effect of the weak delay kernel on the Inter-Spike Interval (ISI) distribution. A measure based on Jensen-Shannon divergence is proposed which can be used to make a choice between two competing models, viz. the distributed delay model vis-à-vis the LIF model. An interesting feature of the model is that the behavior of the coefficient of variation CV(t) of the ISI distribution with respect to the memory kernel time constant parameter η reveals that the neuron can switch from a bursting state to a non-bursting state as the noise intensity parameter changes. The membrane potential exhibits a decaying auto-correlation structure with or without damped oscillatory behavior depending on the choice of parameters. This behavior is in agreement with the empirically observed pattern of spike counts in a fixed time window. The power spectral density derived from the auto-correlation function is found to exhibit single and double peaks. The model is also examined for the case of strong delay with a memory kernel having the form of a Gamma distribution. In contrast to the fast decay of damped oscillations of the ISI distribution for the model with weak delay kernel, the decay of damped oscillations is found to be slower for the model with strong delay kernel.
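
    The Jensen-Shannon divergence proposed as a model-selection measure is straightforward to compute from two ISI histograms, as in the short sketch below; the histograms here are placeholders, not the paper's data.

      import numpy as np

      def jensen_shannon(p, q, eps=1e-12):
          """Jensen-Shannon divergence (in nats) between two discrete distributions."""
          p = np.asarray(p, float) / np.sum(p)
          q = np.asarray(q, float) / np.sum(q)
          m = 0.5 * (p + q)
          kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
          return 0.5 * kl(p, m) + 0.5 * kl(q, m)

      # Placeholder ISI histograms from two competing models (same bin edges assumed)
      isi_model_a = [5, 20, 40, 25, 10]
      isi_model_b = [2, 15, 45, 28, 10]
      print(jensen_shannon(isi_model_a, isi_model_b))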

  16. Factoring vs linear modeling in rate estimation: a simulation study of relative accuracy.

    Science.gov (United States)

    Maldonado, G; Greenland, S

    1998-07-01

    A common strategy for modeling dose-response in epidemiology is to transform ordered exposures and covariates into sets of dichotomous indicator variables (that is, to factor the variables). Factoring tends to increase estimation variance, but it also tends to decrease bias and thus may increase or decrease total accuracy. We conducted a simulation study to examine the impact of factoring on the accuracy of rate estimation. Factored and unfactored Poisson regression models were fit to follow-up study datasets that were randomly generated from 37,500 population model forms that ranged from subadditive to supramultiplicative. In the situations we examined, factoring sometimes substantially improved accuracy relative to fitting the corresponding unfactored model, sometimes substantially decreased accuracy, and sometimes made little difference. The difference in accuracy between factored and unfactored models depended in a complicated fashion on the difference between the true and fitted model forms, the strength of exposure and covariate effects in the population, and the study size. It may be difficult in practice to predict when factoring is increasing or decreasing accuracy. We recommend, therefore, that the strategy of factoring variables be supplemented with other strategies for modeling dose-response.
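
    A minimal sketch of the comparison being simulated: fit a Poisson rate model to grouped follow-up data once with exposure entered as a single linear term and once factored into indicator variables. The counts and person-years below are made up for illustration only.

      import numpy as np
      import statsmodels.api as sm

      # Made-up grouped follow-up data: exposure level, case counts, person-years
      level = np.array([0, 1, 2, 3])
      cases = np.array([10, 18, 35, 80])
      pyears = np.array([1000.0, 900.0, 800.0, 700.0])

      # Unfactored model: exposure as a single linear trend
      X_lin = sm.add_constant(level.astype(float))
      fit_lin = sm.GLM(cases, X_lin, family=sm.families.Poisson(),
                       exposure=pyears).fit()

      # Factored model: indicator variables for levels 1-3 (level 0 as reference)
      X_fac = sm.add_constant((level[:, None] == np.array([1, 2, 3])).astype(float))
      fit_fac = sm.GLM(cases, X_fac, family=sm.families.Poisson(),
                       exposure=pyears).fit()

      print("rate ratio per level (linear):", np.exp(fit_lin.params[1]))
      print("rate ratios vs level 0 (factored):", np.exp(np.asarray(fit_fac.params)[1:]))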

  17. Theoretical modeling, simulation and experimental study of hybrid piezoelectric and electromagnetic energy harvester

    Directory of Open Access Journals (Sweden)

    Ping Li

    2018-03-01

    Full Text Available In this paper, the performance of a vibration energy harvester combining piezoelectric (PE) and electromagnetic (EM) mechanisms is studied by theoretical analysis, simulation and experimental testing. For the designed harvester, an electromechanical coupling model is established, and expressions for the vibration response, output voltage, current and power are derived. The performance of the harvester is then simulated and tested; moreover, charging of a rechargeable battery is realized through the designed energy storage circuit. The results show that, compared with piezoelectric-only and electromagnetic-only energy harvesters, the hybrid energy harvester can enhance the output power and harvesting efficiency. Furthermore, under harmonic excitation the output power of the harvester increases linearly with increasing acceleration amplitude, while under random excitation it increases with increasing acceleration spectral density. In addition, the stronger the coupling, the larger the output power, and there is an optimal load resistance at which the harvester delivers maximum power.

  18. AEGIS geologic simulation model

    International Nuclear Information System (INIS)

    Foley, M.G.

    1982-01-01

    The Geologic Simulation Model (GSM) is used by the AEGIS (Assessment of Effectiveness of Geologic Isolation Systems) program at the Pacific Northwest Laboratory to simulate the dynamic geology and hydrology of a geologic nuclear waste repository site over a million-year period following repository closure. The GSM helps to organize geologic/hydrologic data; to focus attention on active natural processes by requiring their simulation; and, through interactive simulation and calibration, to reduce subjective evaluations of the geologic system. During each computer run, the GSM produces a million-year geologic history that is possible for the region and the repository site. In addition, the GSM records in permanent history files everything that occurred during that time span. Statistical analyses of data in the history files of several hundred simulations are used to classify typical evolutionary paths, to establish the probabilities associated with deviations from the typical paths, and to determine which types of perturbations of the geologic/hydrologic system, if any, are most likely to occur. These simulations will be evaluated by geologists familiar with the repository region to determine validity of the results. Perturbed systems that are determined to be the most realistic, within whatever probability limits are established, will be used for the analyses that involve radionuclide transport and dose models. The GSM is designed to be continuously refined and updated. Simulation models are site specific, and, although the submodels may have limited general applicability, the input data requirements necessitate detailed characterization of each site before application

  19. Simulation study using 3-D wavefield modeling for oil and gas exploration; Sanjigen hadoba modeling wo mochiita sekiyu tanko no simulation study

    Energy Technology Data Exchange (ETDEWEB)

    Sato, T; Matsuoka, T [Japan Petroleum Exploration Corp., Tokyo (Japan); Saeki, T [Japan National Oil Corp., Tokyo (Japan). Technology Research Center

    1997-05-27

    As the settings surrounding oil exploration targets grow more complicated, seismic survey methods have turned 3-dimensional and, in this report, several models are examined using 3-dimensional simulation technology. The result obtained by the conventional wave tracking method differs from actual wavefields and is unrealistic. Among the full-wave modelling methods, the difference method demands an exorbitantly long computation time and high cost. A pseudospectral method has been developed which is superior to the difference method and has been put to practical use thanks to the advent of parallel computers. It is found that a 3-dimensional survey is mandatory for describing faults. After examining the SEG/EAGE Salt model, it is learned that the salt is well-developed and that 3-dimensional depth migration is required for sub-salt exploration. It is also found through simulation of the EAGE/S Overthrust model, which is an elastic model, that no quality records are available on thrust zones in complicated terrains. The records are poor in quality because the actually measured wavefield, which is an elastic wavefield, is treated as an acoustic one. 1 refs., 18 figs., 2 tabs.

  20. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In philosophy of science, the interest for computational models and simulations has increased heavily during the past decades. Different positions regarding the validity of models have emerged but the views have not succeeded in capturing the diversity of validation methods. The wide variety...

  1. ECONOMIC MODELING STOCKS CONTROL SYSTEM: SIMULATION MODEL

    OpenAIRE

    Климак, М.С.; Войтко, С.В.

    2016-01-01

    Theoretical and applied aspects of the development of simulation models to predict the optimal development of production systems that create tangible products and services are considered. It is shown that the process of inventory control requires economic and mathematical modeling in view of the complexity of theoretical studies. A simulation model of stock control is presented that allows management decisions to be made in production logistics.

  2. Effects of statistical models and items difficulties on making trait-level inferences: A simulation study

    Directory of Open Access Journals (Sweden)

    Nelson Hauck Filho

    2014-12-01

    Full Text Available Researchers dealing with the task of estimating locations of individuals on continuous latent variables may rely on several statistical models described in the literature. However, weighting costs and benefits of using one specific model over alternative models depends on empirical information that is not always clearly available. Therefore, the aim of this simulation study was to compare the performance of seven popular statistical models in providing adequate latent trait estimates in conditions of items difficulties targeted at the sample mean or at the tails of the latent trait distribution. Results suggested an overall tendency of models to provide more accurate estimates of true latent scores when using items targeted at the sample mean of the latent trait distribution. Rating Scale Model, Graded Response Model, and Weighted Least Squares Mean- and Variance-adjusted Confirmatory Factor Analysis yielded the most reliable latent trait estimates, even when applied to inadequate items for the sample distribution of the latent variable. These findings have important implications concerning some popular methodological practices in Psychology and related areas.

  3. Thermodynamics of Macromolecular Association in Heterogeneous Crowding Environments: Theoretical and Simulation Studies with a Simplified Model.

    Science.gov (United States)

    Ando, Tadashi; Yu, Isseki; Feig, Michael; Sugita, Yuji

    2016-11-23

    The cytoplasm of a cell is crowded with many different kinds of macromolecules. The macromolecular crowding affects the thermodynamics and kinetics of biological reactions in a living cell, such as protein folding, association, and diffusion. Theoretical and simulation studies using simplified models focus on the essential features of the crowding effects and provide a basis for analyzing experimental data. In most of the previous studies on the crowding effects, a uniform crowder size is assumed, which is in contrast to the inhomogeneous size distribution of macromolecules in a living cell. Here, we evaluate the free energy changes upon macromolecular association in a cell-like inhomogeneous crowding system via a theory of hard-sphere fluids and free energy calculations using Brownian dynamics trajectories. The inhomogeneous crowding model based on 41 different types of macromolecules represented by spheres with different radii mimics the physiological concentrations of macromolecules in the cytoplasm of Mycoplasma genitalium. The free energy changes of macromolecular association evaluated by the theory and simulations were in good agreement with each other. The crowder size distribution affects both specific and nonspecific molecular associations, suggesting that not only the volume fraction but also the size distribution of macromolecules are important factors for evaluating in vivo crowding effects. This study relates in vitro experiments on macromolecular crowding to in vivo crowding effects by using the theory of hard-sphere fluids with crowder-size heterogeneity.

  4. Three-dimensional (3D) printed endovascular simulation models: a feasibility study.

    Science.gov (United States)

    Mafeld, Sebastian; Nesbitt, Craig; McCaslin, James; Bagnall, Alan; Davey, Philip; Bose, Pentop; Williams, Rob

    2017-02-01

    Three-dimensional (3D) printing is a manufacturing process in which an object is created by specialist printers designed to print in additive layers to create a 3D object. Whilst there are initial promising medical applications of 3D printing, a lack of evidence to support its use remains a barrier for larger scale adoption into clinical practice. Endovascular virtual reality (VR) simulation plays an important role in the safe training of future endovascular practitioners, but existing VR models have disadvantages including cost and accessibility which could be addressed with 3D printing. This study sought to evaluate the feasibility of 3D printing an anatomically accurate human aorta for the purposes of endovascular training. A 3D printed model was successfully designed and printed and used for endovascular simulation. The stages of development and practical applications are described. Feedback from 96 physicians who answered a series of questions using a 5 point Likert scale is presented. Initial data supports the value of 3D printed endovascular models although further educational validation is required.

  5. Computer simulation modeling of recreation use: Current status, case studies, and future directions

    Science.gov (United States)

    David N. Cole

    2005-01-01

    This report compiles information about recent progress in the application of computer simulation modeling to planning and management of recreation use, particularly in parks and wilderness. Early modeling efforts are described in a chapter that provides an historical perspective. Another chapter provides an overview of modeling options, common data input requirements,...

  6. NRC model simulations in support of the hydrologic code intercomparison study (HYDROCOIN): Level 1-code verification

    International Nuclear Information System (INIS)

    1988-03-01

    HYDROCOIN is an international study for examining ground-water flow modeling strategies and their influence on safety assessments of geologic repositories for nuclear waste. This report summarizes only the combined NRC project teams' simulation efforts on the computer code benchmarking problems. The codes used to simulate these seven problems were SWIFT II, FEMWATER, UNSAT2M, USGS-3D, and TOUGH. In general, linear problems involving scalars such as hydraulic head were accurately simulated by both finite-difference and finite-element solution algorithms. Both types of codes produced accurate results even for complex geometries such as intersecting fractures. Difficulties were encountered in solving problems that involved nonlinear effects such as density-driven flow and unsaturated flow. In order to fully evaluate the accuracy of these codes, post-processing of results using particle tracking algorithms and flux calculations was examined. This proved very valuable by uncovering disagreements among code results even though the hydraulic-head solutions had been in agreement. 9 refs., 111 figs., 6 tabs

  7. Computational study of nonlinear plasma waves. I. Simulation model and monochromatic wave propagation

    International Nuclear Information System (INIS)

    Matda, Y.; Crawford, F.W.

    1974-12-01

    An economical low noise plasma simulation model is applied to a series of problems associated with electrostatic wave propagation in a one-dimensional, collisionless, Maxwellian plasma, in the absence of magnetic field. The model is described and tested, first in the absence of an applied signal, and then with a small amplitude perturbation, to establish the low noise features and to verify the theoretical linear dispersion relation at wave energy levels as low as 0.000,001 of the plasma thermal energy. The method is then used to study propagation of an essentially monochromatic plane wave. Results on amplitude oscillation and nonlinear frequency shift are compared with available theories. The additional phenomena of sideband instability and satellite growth, stimulated by large amplitude wave propagation and the resulting particle trapping, are described. (auth)

  8. Modeling Of A Reactive Distillation Column: Methyl Tertiary Butyl Ether (MTBE) Simulation Studies

    Directory of Open Access Journals (Sweden)

    Ismail Mohd Saaid Abdul Rahman Mohamed and Subhash Bhatia

    2012-10-01

    Full Text Available A process simulation model of a stage-wise reactive distillation column, formulated from equilibrium stage theory, was developed. The algorithm for solving the mathematical model, represented by sets of differential-algebraic equations, was based on the relaxation method. A numerical integration scheme based on the backward differentiation formula was selected to handle the stiffness of the differential-algebraic equations. Simulations were performed on a personal computer (PC) with a Pentium processor through a computer program developed in the FORTRAN90 programming language. The proposed model was validated by comparing the simulated results with published simulation results and with pilot plant data from the literature. The model was capable of predicting the high isobutene conversion of the heterogeneous system, as desired in the industrial MTBE production process. The comparisons of temperature profiles, liquid composition profiles and operating conditions of the reactive distillation column also showed promising results. Therefore the proposed model can be used as a tool for the development and simulation of reactive distillation columns. Keywords: Modeling, simulation, reactive distillation, relaxation method, equilibrium stage, heterogeneous, MTBE
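
    The relaxation idea, integrating the stage equations in pseudo-time with a stiff (BDF) integrator until they settle to steady state, can be sketched for a single toy stage. The binary mixture, constant relative volatility and flows below are hypothetical and are not the MTBE column model.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Toy single equilibrium stage: feed F splits into vapor V and liquid L
      alpha, F, V, L, z_feed, holdup = 2.5, 1.0, 0.4, 0.6, 0.5, 5.0

      def stage(t, x):
          x1 = x[0]
          y1 = alpha * x1 / (1.0 + (alpha - 1.0) * x1)  # equilibrium vapor composition
          # unsteady component balance, relaxed in pseudo-time toward steady state
          return [(F * z_feed - V * y1 - L * x1) / holdup]

      sol = solve_ivp(stage, (0.0, 200.0), [z_feed], method="BDF", rtol=1e-8)
      print("steady-state liquid mole fraction:", sol.y[0, -1])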

  9. Simulation Of Seawater Intrusion With 2D And 3D Models: Nauru Island Case Study

    Science.gov (United States)

    Ghassemi, F.; Jakeman, A. J.; Jacobson, G.; Howard, K. W. F.

    1996-03-01

    With the advent of large computing capacities during the past few decades, sophisticated models have been developed for the simulation of seawater intrusion in coastal and island aquifers. Currently, several models are commercially available for the simulation of this problem. This paper describes the mathematical basis and application of the SUTRA and HST3D models to simulate seawater intrusion in Nauru Island, in the central Pacific Ocean. A comparison of the performance and limitations of these two models in simulating a real problem indicates that three-dimensional simulation of seawater intrusion with the HST3D model has the major advantage of being able to specify natural boundary conditions as well as pumping stresses. However, HST3D requires a small grid size and short time steps in order to maintain numerical stability and accuracy. These requirements lead to solution of a large set of linear equations that requires the availability of powerful computing facilities in terms of memory and computing speed. Combined results of the two simulation models indicate a safe pumping rate of 400 m3/d for the aquifer on Nauru Island, where additional fresh water is presently needed for the rehabilitation of mined-out land.

  10. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies.

    Science.gov (United States)

    Häggström, Ida; Beattie, Bradley J; Schmidtlein, C Ross

    2016-06-01

    To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection), for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and evaluation of the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user specified method, settings, and corrections. Reconstructed images were compared to MC data, and simple Gaussian noised time activity curves (GAUSS). dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region of interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results, however since it uses simple scatter and random models it may not be suitable for studies investigating these phenomena. dPETSTEP can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.
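
    A stripped-down illustration of the projection-free front end of such a workflow, generating a voxel time-activity curve from a kinetic model, averaging it over frames, and adding frame-dependent counting noise, is sketched below. The one-tissue-compartment parameters, frame scheme and sensitivity factor are hypothetical; none of this is dPETSTEP code.

      import numpy as np

      rng = np.random.default_rng(0)
      dt = 1.0                                     # s, fine time grid
      t = np.arange(0.0, 3600.0, dt)

      # Hypothetical plasma input function and one-tissue-compartment voxel kinetics
      Cp = 300.0 * (t / 60.0) * np.exp(-t / 60.0)  # kBq/mL
      K1, k2 = 0.1 / 60.0, 0.05 / 60.0             # 1/s
      Ct = K1 * np.convolve(Cp, np.exp(-k2 * t))[:t.size] * dt

      # Average into 36 frames of 100 s and add Poisson-like counting noise
      edges = np.arange(0, 3601, 100)
      durations = np.diff(edges).astype(float)
      frame_mean = np.add.reduceat(Ct, edges[:-1]) / (durations / dt)
      sens = 5.0                                   # hypothetical counts per (kBq/mL) per s
      counts = rng.poisson(frame_mean * durations * sens)
      noisy_tac = counts / (durations * sens)      # back to activity units

      print(noisy_tac[:5])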

  11. Ball bearing defect models: A study of simulated and experimental fault signatures

    Science.gov (United States)

    Mishra, C.; Samantaray, A. K.; Chakraborty, G.

    2017-07-01

    Numerical model based virtual prototype of a system can serve as a tool to generate huge amount of data which replace the dependence on expensive and often difficult to conduct experiments. However, the model must be accurate enough to substitute the experiments. The abstraction level and details considered during model development depend on the purpose for which simulated data should be generated. This article concerns development of simulation models for deep groove ball bearings which are used in a variety of rotating machinery. The purpose of the model is to generate vibration signatures which usually contain features of bearing defects. Three different models with increasing level-of-complexity are considered: a bearing kinematics based planar motion block diagram model developed in MATLAB Simulink which does not explicitly consider cage and traction dynamics, a planar motion model with cage, traction and contact dynamics developed using multi-energy domain bond graph formalism in SYMBOLS software, and a detailed spatial multi-body dynamics model with complex contact and traction mechanics developed using ADAMS software. Experiments are conducted using Spectra Quest machine fault simulator with different prefabricated faulted bearings. The frequency domain characteristics of simulated and experimental vibration signals for different bearing faults are compared and conclusions are drawn regarding usefulness of the developed models.
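
    The characteristic fault frequencies that such vibration signatures are checked against follow directly from the bearing kinematics; the shaft speed and geometry in this sketch are placeholders, not those of the test bearings.

      import numpy as np

      def bearing_fault_frequencies(shaft_hz, n_balls, ball_d, pitch_d, contact_deg=0.0):
          """Classical kinematic defect frequencies of a rolling-element bearing."""
          r = ball_d / pitch_d * np.cos(np.radians(contact_deg))
          ftf = shaft_hz / 2.0 * (1.0 - r)                          # cage (fundamental train)
          bpfo = n_balls / 2.0 * shaft_hz * (1.0 - r)               # outer-race defect
          bpfi = n_balls / 2.0 * shaft_hz * (1.0 + r)               # inner-race defect
          bsf = pitch_d / (2.0 * ball_d) * shaft_hz * (1.0 - r**2)  # ball spin
          return ftf, bpfo, bpfi, bsf

      # Placeholder geometry: 9 balls, 7.94 mm ball diameter, 39 mm pitch diameter, 30 Hz shaft
      print(bearing_fault_frequencies(30.0, 9, 7.94, 39.0))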

  12. Models and simulations

    International Nuclear Information System (INIS)

    Lee, M.J.; Sheppard, J.C.; Sullenberger, M.; Woodley, M.D.

    1983-09-01

    On-line mathematical models have been used successfully for computer controlled operation of SPEAR and PEP. The same model control concept is being implemented for the operation of the LINAC and for the Damping Ring, which will be part of the Stanford Linear Collider (SLC). The purpose of this paper is to describe the general relationships between models, simulations and the control system for any machine at SLAC. The work we have done on the development of the empirical model for the Damping Ring will be presented as an example

  13. PSH Transient Simulation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Muljadi, Eduard [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-12-21

    PSH Transient Simulation Modeling presentation from the WPTO FY14 - FY16 Peer Review. Transient effects are an important consideration when designing a PSH system, yet numerical techniques for hydraulic transient analysis still need improvements for adjustable-speed (AS) reversible pump-turbine applications.

  14. Wake modeling and simulation

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Madsen Aagaard, Helge; Larsen, Torben J.

    We present a consistent, physically based theory for the wake meandering phenomenon, which we consider of crucial importance for the overall description of wind turbine loadings in wind farms. In its present version the model is confined to single wake situations. The model philosophy does, howev...... methodology has been implemented in the aeroelastic code HAWC2, and example simulations of wake situations, from the small Tjæreborg wind farm, have been performed showing satisfactory agreement between predictions and measurements...

  15. In silico modelling and molecular dynamics simulation studies of thiazolidine based PTP1B inhibitors.

    Science.gov (United States)

    Mahapatra, Manoj Kumar; Bera, Krishnendu; Singh, Durg Vijay; Kumar, Rajnish; Kumar, Manoj

    2018-04-01

    Protein tyrosine phosphatase 1B (PTP1B) has been identified as a negative regulator of the insulin and leptin signalling pathways; hence, it can be considered a new therapeutic target of intervention for the treatment of type 2 diabetes. Inhibition of this molecular target addresses both diabetes and obesity, i.e. diabesity. In order to get more information on the identification and optimization of leads, pharmacophore modelling, atom-based 3D QSAR, docking and molecular dynamics studies were carried out on a set of ligands containing the thiazolidine scaffold. A six-point pharmacophore model consisting of three hydrogen bond acceptors (A), one negative ionic (N) and two aromatic rings (R) with discrete geometries as pharmacophoric features was developed for a predictive 3D QSAR model. The probable binding conformation of the ligands within the active site was studied through molecular docking. The molecular interactions and the structural features responsible for PTP1B inhibition and selectivity were further supplemented by a molecular dynamics simulation study on a time scale of 30 ns. The present investigation has identified some of the indispensable structural features of thiazolidine analogues which can be explored further to optimize PTP1B inhibitors.

  16. Technological progress and effects of (supra) regional innovation and production collaboration. An agent-based model simulation study.

    NARCIS (Netherlands)

    Vermeulen, B.; Pyka, A.; Serguieva, A.; Maringer, D.; Palade, V.; Almeida, R.J.

    2014-01-01

    We provide a novel technology development model in which economic agents search for transformations to build artifacts. Using this technology development model, we conduct an agent-based model simulation study on the effect of (supra-)regional collaboration in production and innovation on

  17. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies

    Energy Technology Data Exchange (ETDEWEB)

    Häggström, Ida, E-mail: haeggsti@mskcc.org [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 and Department of Radiation Sciences, Umeå University, Umeå 90187 (Sweden); Beattie, Bradley J.; Schmidtlein, C. Ross [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States)

    2016-06-15

    Purpose: To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection), for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and evaluation of the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. Methods: The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user specified method, settings, and corrections. Reconstructed images were compared to MC data, and simple Gaussian noised time activity curves (GAUSS). Results: dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region of interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. Conclusions: The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results, however since it uses simple scatter and random models it may not be suitable for

  18. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies

    International Nuclear Information System (INIS)

    Häggström, Ida; Beattie, Bradley J.; Schmidtlein, C. Ross

    2016-01-01

    Purpose: To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection), for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and evaluation of the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. Methods: The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user specified method, settings, and corrections. Reconstructed images were compared to MC data, and simple Gaussian noised time activity curves (GAUSS). Results: dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region of interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. Conclusions: The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results, however since it uses simple scatter and random models it may not be suitable for

  19. Systematic vacuum study of the ITER model cryopump by test particle Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Xueli; Haas, Horst; Day, Christian [Institute for Technical Physics, Karlsruhe Institute of Technology, P.O. Box 3640, 76021 Karlsruhe (Germany)

    2011-07-01

    The primary pumping systems on the ITER torus are based on eight tailor-made cryogenic pumps, because no standard commercial vacuum pump can meet the ITER working criteria. This kind of cryopump can provide high pumping speed, especially for light gases, by cryosorption on activated charcoal at 4.5 K. In this paper we present systematic Monte Carlo simulation results for the reduced-scale model pump obtained with ProVac3D, a new Test Particle Monte Carlo simulation program developed by KIT. The simulation model includes the most important mechanical structures, such as the sixteen cryogenic panels working at 4.5 K, the 80 K radiation shield envelope with baffles, the pump housing, the inlet valve and the TIMO (Test facility for the ITER Model Pump) test facility. Three typical gas species, i.e., deuterium, protium and helium, are simulated. The pumping characteristics have been obtained. The result is in good agreement with the experimental data up to a gas throughput of 1000 sccm, which marks the limit for free molecular flow. This means that ProVac3D is a useful tool in the design of the prototype cryopump of ITER. Meanwhile, the capture factors at different critical positions are calculated. They can be used as important input parameters for a follow-up Direct Simulation Monte Carlo (DSMC) simulation for higher gas throughput.
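
    The test-particle idea, launching molecules with a diffuse (cosine-law) distribution and following them through free-molecular wall collisions, can be illustrated on a much simpler geometry than the model pump: the transmission probability of a plain cylindrical duct. The sketch below is not ProVac3D, and the duct dimensions are arbitrary.

      import numpy as np

      rng = np.random.default_rng(1)

      def cosine_dir(normal):
          """Sample a diffuse (cosine-law) emission direction about a unit normal."""
          u1, u2 = rng.random(2)
          ct, st, phi = np.sqrt(u1), np.sqrt(1.0 - u1), 2.0 * np.pi * u2
          a = np.cross(normal, [0.0, 0.0, 1.0])
          if np.linalg.norm(a) < 1e-9:
              a = np.cross(normal, [0.0, 1.0, 0.0])
          a /= np.linalg.norm(a)
          b = np.cross(normal, a)
          return st * np.cos(phi) * a + st * np.sin(phi) * b + ct * normal

      def transmission(radius=1.0, length=2.0, n=20000):
          """Free-molecular transmission probability (Clausing factor) of a cylinder."""
          passed = 0
          for _ in range(n):
              r, phi = radius * np.sqrt(rng.random()), 2.0 * np.pi * rng.random()
              pos = np.array([r * np.cos(phi), r * np.sin(phi), 0.0])
              d = cosine_dir(np.array([0.0, 0.0, 1.0]))
              while True:
                  a = d[0] ** 2 + d[1] ** 2
                  t_wall = np.inf
                  if a > 1e-12:
                      b = 2.0 * (pos[0] * d[0] + pos[1] * d[1])
                      c = pos[0] ** 2 + pos[1] ** 2 - radius ** 2
                      t_wall = (-b + np.sqrt(max(b * b - 4.0 * a * c, 0.0))) / (2.0 * a)
                  t_exit = (length - pos[2]) / d[2] if d[2] > 0 else np.inf
                  t_back = -pos[2] / d[2] if d[2] < 0 else np.inf
                  t = min(t_wall, t_exit, t_back)
                  pos = pos + t * d
                  if t == t_exit:
                      passed += 1
                      break
                  if t == t_back:
                      break
                  normal = np.array([-pos[0], -pos[1], 0.0]) / radius  # diffuse wall re-emission
                  d = cosine_dir(normal)
          return passed / n

      print(transmission())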

  20. Comparative approaches from empirical to mechanistic simulation modelling in Land Evaluation studies

    Science.gov (United States)

    Manna, P.; Basile, A.; Bonfante, A.; Terribile, F.

    2009-04-01

    Land Evaluation (LE) comprises the evaluation procedures used to assess the suitability of land for a generic or specific use (e.g. biomass production). From the local to the regional and national scale, the approach to land use planning requires a deep knowledge of the processes that drive the functioning of the soil-plant-atmosphere system. According to the classical approaches, the assessment of suitability is the result of a qualitative comparison between the land/soil physical properties and the land use requirements. These approaches are quick and inexpensive to apply; however, they are based on empirical and qualitative models with a basic knowledge structure built for a specific landscape and for the specific object of the evaluation (e.g. crop). The outcome of this situation is great difficulty in extrapolating the LE results spatially, and the rigidity of the system. Modern techniques instead rely on the application of mechanistic and quantitative simulation modelling that allows a dynamic characterisation of the interrelated physical and chemical processes taking place in the soil landscape. Moreover, the insertion of physically based rules in the LE procedure may make it easier both to extend the results spatially and to change the object (e.g. crop species, nitrate dynamics, etc.) of the evaluation. On the other hand, these modern approaches require input data of high quality and quantity, which causes a significant increase in costs. In this scenario the LE expert is nowadays asked to choose the best LE methodology by considering costs, complexity of the procedure and benefits in handling a specific land evaluation. In this work we performed a forage maize land suitability study by comparing 9 different methods of increasing complexity and cost. The study area, of about 2000 ha, is located in northern Italy in the Lodi plain (Po valley). The 9 employed methods ranged from standard LE approaches to

  1. The Impacts of East Asia FTA: A CGE Model Simulation Study

    Directory of Open Access Journals (Sweden)

    Mitsuyo Ando

    2007-12-01

    Full Text Available In light of the on-going discussions of the possibility of an East Asia FTA, this paper attempts to estimate the impacts of an East Asia FTA using a Computable General Equilibrium (CGE) model. Although most previous simulation studies on the impacts of FTAs focus only on the liberalization of trade in goods, our paper attempts to take into account other aspects of FTAs such as capital accumulation and trade and investment facilitation measures. Our simulation analysis finds that an ASEAN+3 FTA is the most desirable of eight hypothetical FTAs in East Asia for all member countries at the macro level. At the same time, our results demonstrate the significant impacts of capital accumulation and of various trade and investment facilitation and coordination programs. At the sectoral level, many sectors gain in terms of output and trade. Although some sectors in certain countries do lose in terms of output as a result of an ASEAN+3 FTA, most of them experience increases in both exports and imports, even if output declines. These results indicate that the broader the coverage, in terms of both membership and content (trade and FDI liberalization and facilitation, and economic cooperation), the greater the benefits that accrue to the members.

  2. A Software Toolkit to Study Systematic Uncertainties of the Physics Models of the Geant4 Simulation Package

    CERN Document Server

    Genser, Krzysztof; Perdue, Gabriel; Wenzel, Hans; Yarba, Julia; Kelsey, Michael; Wright, Dennis H

    2016-01-01

    The Geant4 toolkit is used to model interactions between particles and matter. Geant4 employs a set of validated physics models that span a wide range of interaction energies. These models are tuned to cover a large variety of possible applications. This raises the critical question of what uncertainties are associated with the Geant4 physics model, or group of models, involved in a simulation project. To address the challenge, we have designed and implemented a comprehensive, modular, user-friendly software toolkit that allows the variation of one or more parameters of one or more Geant4 physics models involved in simulation studies. It also enables analysis of multiple variants of the resulting physics observables of interest in order to estimate the uncertainties associated with the simulation model choices. Key functionalities of the toolkit are presented in this paper and are illustrated with selected results.

  3. Fog Simulations Based on Multi-Model System: A Feasibility Study

    Science.gov (United States)

    Shi, Chune; Wang, Lei; Zhang, Hao; Zhang, Su; Deng, Xueliang; Li, Yaosun; Qiu, Mingyan

    2012-05-01

    Accurate forecasts of fog and visibility are very important to air and highway traffic, and are still a big challenge. A 1D fog model (PAFOG) is coupled to MM5 by obtaining the initial and boundary conditions (IC/BC) and some other necessary input parameters from MM5. Thus, PAFOG can be run for any area of interest. On the other hand, MM5 itself can be used to simulate fog events over a large domain. This paper presents evaluations of the fog predictability of these two systems for December of 2006 and December of 2007, with nine regional fog events observed in a field experiment, as well as over a large domain in eastern China. Among the simulations of the nine fog events by the two systems, two cases were investigated in detail. Daily results of ground-level meteorology were validated against the routine observations of the CMA observational network. Daily fog occurrence for the two study periods was validated in Nanjing. The general performance of the two models for the nine fog cases is presented by comparison with routine and field observational data. The results of MM5 and PAFOG for two typical fog cases are verified in detail against field observations. The verifications demonstrated that all methods tended to overestimate fog occurrence, especially for near-fog cases. In terms of TS/ETS, the LWC-only threshold with MM5 showed the best performance, while PAFOG showed the worst. MM5 performed better for advection-radiation fog than for radiation fog, and PAFOG could be an alternative tool for forecasting radiation fogs. PAFOG did show advantages over MM5 on the fog dissipation time. The performance of PAFOG depended highly on the quality of the MM5 output. The sensitivity runs of PAFOG with different IC/BC showed the capability of using MM5 output to run the 1D model and the high sensitivity of PAFOG to cloud cover. Future work should intensify the study of how to improve the quality of input data (e.g. cloud cover, advection, large scale subsidence) for the 1D
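
    The TS and ETS skill scores mentioned in the verification are simple functions of the yes/no forecast contingency table; a short sketch with made-up counts is given below.

      def threat_scores(hits, misses, false_alarms, correct_negatives):
          """Threat score (TS) and equitable threat score (ETS) for yes/no forecasts."""
          total = hits + misses + false_alarms + correct_negatives
          hits_random = (hits + misses) * (hits + false_alarms) / total
          ts = hits / (hits + misses + false_alarms)
          ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
          return ts, ets

      # Made-up contingency table for one month of daily fog forecasts
      print(threat_scores(hits=7, misses=3, false_alarms=6, correct_negatives=15))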

  4. A study on the equivalent electric circuit simulation model of DBD streamer and glow alternate discharge

    International Nuclear Information System (INIS)

    Yao, J; Zhang, Z T; Xu, S J; Yu, Q X; Yu, Z; Zhao, J S

    2013-01-01

    This paper presents a dynamic simulation model of the dielectric barrier discharge (DBD), structured as an equivalent electric circuit of the streamer and glow discharges generated alternately in a DBD. The main parameters of the DBD have been established by analysing the structural characteristics of a single discharge cell. A comprehensive electrical Simulink/MATLAB model was developed in order to reveal the interaction of two adjacent discharge cells. A series of simulations was carried out in order to estimate the key structural parameters that affect the alternate streamer and glow discharge mode. Comparison of the experimental and simulated results indicates a close similarity between the current waveforms. This provides a deeper understanding of the mechanism of the dielectric barrier discharge and helps optimize the plasma reactor.

  5. Study and modeling of the evolution of gas-liquid partitioning of hydrogen sulfide in model solutions simulating winemaking fermentations.

    Science.gov (United States)

    Mouret, Jean-Roch; Sablayrolles, Jean-Marie; Farines, Vincent

    2015-04-01

    The knowledge of gas-liquid partitioning of aroma compounds during winemaking fermentation could allow optimization of fermentation management, maximizing concentrations of positive markers of aroma and minimizing formation of molecules, such as hydrogen sulfide (H2S), responsible for defects. In this study, the effect of the main fermentation parameters on the gas-liquid partition coefficients (Ki) of H2S was assessed. The Ki for this highly volatile sulfur compound was measured in water by an original semistatic method developed in this work for the determination of gas-liquid partitioning. This novel method was validated and then used to determine the Ki of H2S in synthetic media simulating must, fermenting musts at various steps of the fermentation process, and wine. Ki values were found to be mainly dependent on the temperature but also varied with the composition of the medium, especially with the glucose concentration. Finally, a model was developed to quantify the gas-liquid partitioning of H2S in synthetic media simulating must to wine. This model allowed a very accurate prediction of the partition coefficient of H2S: the difference between observed and predicted values never exceeded 4%.
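
    The dominant temperature dependence reported for Ki is often captured with a van't Hoff-type fit of ln Ki against 1/T; the sketch below fits that form to made-up data. The study's model also includes composition effects such as glucose concentration, which are omitted here, so this is only an illustration of the fitting step.

      import numpy as np

      # Made-up gas-liquid partition coefficients of H2S at several temperatures
      T = np.array([288.15, 293.15, 298.15, 303.15])   # K
      Ki = np.array([0.28, 0.33, 0.39, 0.45])          # dimensionless (gas/liquid)

      # Linear fit of ln(Ki) against 1/T (van't Hoff form)
      slope, intercept = np.polyfit(1.0 / T, np.log(Ki), 1)
      predict = lambda temp: np.exp(intercept + slope / temp)

      print("Ki predicted at 295.15 K:", predict(295.15))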

  6. Hydrological Process Simulation of Inland River Watershed: A Case Study of the Heihe River Basin with Multiple Hydrological Models

    Directory of Open Access Journals (Sweden)

    Lili Wang

    2018-04-01

    Full Text Available Simulating the hydrological processes of an inland river basin can help provide scientific guidance for the policies of water allocation among different subbasins and water resource management groups within the subbasins. However, it is difficult to simulate the hydrological processes of an inland river basin with hydrological models due to the non-consistent hydrological characteristics of the entire basin. This study presents a solution to this problem with a case study of hydrological process simulation in an inland river basin in China, the Heihe River basin. The basin is divided into the upper, middle, and lower reaches based on their distinctive hydrological characteristics, and three hydrological models are selected, applied, and tested to simulate the hydrological cycle of each reach. The upper reach is the contributing area with complex runoff generation processes; therefore, the hydrological informatic modeling system (HIMS) is utilized due to its combined runoff generation mechanisms. The middle reach is strongly affected by intensive human activities acting on the interactions of surface and subsurface flows, so a conceptual water balance model is applied to simulate the water balance process. For the lower reach, as the dissipative area where groundwater dominates the hydrological process, a groundwater modeling system embedding the MODFLOW model is applied to simulate the groundwater dynamics. Statistical parameters and water balance analysis prove that the three models perform excellently in simulating the hydrological processes of the three reaches. Therefore, simulating the hydrological processes of an inland river basin with multiple hydrological models, chosen according to the characteristics of each subbasin, is an effective approach.
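
    The abstract does not name the statistical parameters used, but a typical choice for such performance checks is the Nash-Sutcliffe efficiency, sketched below on placeholder discharge series purely for illustration.

      import numpy as np

      def nash_sutcliffe(observed, simulated):
          """Nash-Sutcliffe efficiency: 1 is a perfect fit, values <= 0 mean no better than the mean."""
          obs, sim = np.asarray(observed, float), np.asarray(simulated, float)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      # Placeholder monthly discharge series (m3/s) for one reach
      obs = [12.0, 18.0, 35.0, 60.0, 48.0, 25.0, 15.0]
      sim = [10.0, 20.0, 33.0, 55.0, 50.0, 27.0, 14.0]
      print(nash_sutcliffe(obs, sim))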

  7. A simplified heat pump model for use in solar plus heat pump system simulation studies

    DEFF Research Database (Denmark)

    Perers, Bengt; Andersen, Elsa; Nordman, Roger

    2012-01-01

    Solar plus heat pump systems are often very complex in design, with sometimes special heat pump arrangements and control. Therefore detailed heat pump models can give very slow system simulations and still not so accurate results compared to real heat pump performance in a system. The idea here...

  8. Benchmark simulation model no 2: general protocol and exploratory case studies

    DEFF Research Database (Denmark)

    Jeppsson, U.; Pons, M.N.; Nopens, I.

    2007-01-01

    and digester models, the included temperature dependencies and the reject water storage. BSM2-implementations are now available in a wide range of simulation platforms and a ring test has verified their proper implementation, consistent with the BSM2 definition. This guarantees that users can focus...

  9. Exploring Students' Computational Thinking Skills in Modeling and Simulation Projects: : A Pilot Study

    NARCIS (Netherlands)

    Grgurina, Natasa; van Veen, Klaas; Barendsen, Erik; Zwaneveld, Bert; Suhre, Cor; Gal-Ezer, Judith; Sentance, Sue; Vahrenhold, Jan

    2015-01-01

    Computational Thinking (CT) is gaining a lot of attention in education. We explored how to discern the occurrences of CT in the projects of 12th grade high school students in the computer science (CS) course. Within the projects, they constructed models and ran simulations of phenomena from other

  10. Hydrological Process Simulation of Inland River Watershed: A Case Study of the Heihe River Basin with Multiple Hydrological Models

    OpenAIRE

    Lili Wang; Zhonggen Wang; Jingjie Yu; Yichi Zhang; Suzhen Dang

    2018-01-01

    Simulating the hydrological processes of an inland river basin can help provide the scientific guidance to the policies of water allocation among different subbasins and water resource management groups within the subbasins. However, it is difficult to simulate the hydrological processes of an inland river basin with hydrological models due to the non-consistent hydrological characteristics of the entire basin. This study presents a solution to this problem with a case study about the hydrolo...

  11. Simulation - modeling - experiment

    International Nuclear Information System (INIS)

    2004-01-01

    After two workshops held in 2001 on the same topics, and in order to make a status of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); methods of disturbances and sensitivity analysis of nuclear data in reactor physics, application to VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F

  12. Wake modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, G.C.; Aagaard Madsen, H.; Larsen, T.J.; Troldborg, N.

    2008-07-15

    We present a consistent, physically based theory for the wake meandering phenomenon, which we consider of crucial importance for the overall description of wind turbine loadings in wind farms. In its present version the model is confined to single wake situations. The model philosophy does, however, have the potential to include also mutual wake interaction phenomena. The basic conjecture behind the dynamic wake meandering (DWM) model is that wake transportation in the atmospheric boundary layer is driven by the large-scale lateral and vertical turbulence components. Based on this conjecture a stochastic model of the downstream wake meandering is formulated. In addition to the kinematic formulation of the dynamics of the 'meandering frame of reference', models characterizing the mean wake deficit as well as the added wake turbulence, described in the meandering frame of reference, are an integrated part of the DWM model complex. For design applications, the computational efficiency of wake deficit prediction is a key issue. A computationally low cost model is developed for this purpose. Likewise, the character of the added wake turbulence, generated by the upstream turbine in the form of shed and trailed vorticity, has been approached by a simple semi-empirical model essentially based on an eddy viscosity philosophy. Contrary to previous attempts to model wake loading, the DWM approach opens for a unifying description in the sense that turbine power- and load aspects can be treated simultaneously. This capability is a direct and attractive consequence of the model being based on the underlying physical process, and it potentially opens for optimization of wind farm topology, of wind farm operation as well as of control strategies for the individual turbine. To establish an integrated modeling tool, the DWM methodology has been implemented in the aeroelastic code HAWC2, and example simulations of wake situations, from the small Tjaereborg wind farm, have

  13. EOS modeling and reservoir simulation study of Bakken gas injection improved oil recovery in the Elm Coulee Field, Montana

    Science.gov (United States)

    Pu, Wanli

    The Bakken Formation in the Williston Basin is one of the most productive liquid-rich unconventional plays. The Bakken Formation is divided into three members, and the Middle Bakken Member is the primary target for horizontal wellbore landing and hydraulic fracturing because of its better rock properties. Even with this new technology, the primary recovery factor is believed to be only around 10%. This study evaluates various gas injection EOR methods to try to improve on that low recovery factor of 10%. In this study, the Elm Coulee Oil Field in the Williston Basin was selected as the area of interest. Static reservoir models featuring the rock property heterogeneity of the Middle Bakken Member were built, and fluid property models were built based on Bakken reservoir fluid sample PVT data. By employing both compositional model simulation and Todd-Longstaff solvent model simulation methods, miscible gas injections were simulated, and the simulations indicated that oil recovery could increase by 10% to 20% of OOIP over 30 years. The compositional simulations yielded lower oil recovery compared to the solvent model simulations. Compared to the homogeneous model, the reservoir model featuring rock property heterogeneity in the vertical direction resulted in slightly better oil recovery, but with earlier CO2 break-through and larger CO2 production, suggesting that rock property heterogeneity is an important property for modeling because it has a large effect on the simulation results. Long hydraulic fractures shortened CO2 break-through time greatly and increased CO2 production. Water-alternating-gas injection schemes and injection-alternating-shut-in schemes can provide more options for gas injection EOR projects, especially for gas production management. Compared to CO2 injection, separator gas injection yielded slightly better oil recovery, meaning separator gas could be a good candidate for gas injection EOR; lean gas generated the worst results. Reservoir

  14. Simulation programs for Ph.D. study of analysis, modeling and optimum design of solar domestic hot water systems

    Energy Technology Data Exchange (ETDEWEB)

    Lin Qin

    1998-12-31

    The design of solar domestic hot water (DHW) systems is a complex process, due to characteristics inherent in the solar heating technology. Recently, computer simulation has become a widely used technique to improve the understanding of the thermal processes in such systems. One of the main objectives of the Ph.D. study 'Analysis, Modelling and Optimum Design of Solar Domestic Hot Water Systems' is to develop and verify programs for carrying out the simulation and evaluation of the dynamic performance of solar DHW systems. During this study, simulation programs for hot water distribution networks and for certain types of solar DHW systems were developed. (au)

  15. Numerical simulation of Higgs models

    International Nuclear Information System (INIS)

    Jaster, A.

    1995-10-01

    The SU(2) Higgs and the Schwinger model on the lattice were analysed. Numerical simulations of the SU(2) Higgs model were performed to study the finite temperature electroweak phase transition. With the help of the multicanonical method the distribution of an order parameter at the phase transition point was measured. This was used to obtain the order of the phase transition and the value of the interface tension with the histogram method. Numerical simulations were also performed at zero temperature to carry out the renormalization. The measured values for the Wilson loops were used to determine the static potential and from this the renormalized gauge coupling. The Schwinger model was simulated at different gauge couplings to analyse the properties of the Kaplan-Shamir fermions. The prediction that the mass parameter gets only multiplicative renormalization was tested and verified. (orig.)

  16. Direct dimethyl-ether (DME) synthesis by spatial patterned catalyst arrangement. A modeling and simulation study

    Energy Technology Data Exchange (ETDEWEB)

    McBride, K.; Turek, T.; Guettel, R. [Clausthal Univ. of Technology (Germany). Inst. of Chemical Process Engineering

    2011-07-01

    The effect of spatially patterned catalyst beds was investigated for direct DME synthesis from synthesis gas as an example. A layered arrangement of methanol synthesis and dehydration catalyst was chosen and studied by numerical simulation under typical operating conditions for single-step DME synthesis. It was revealed that catalyst layers significantly influence the DME productivity. With an increasing number of layers from 2 to 40, an increase in DME productivity was observed approaching the performance of a physical catalyst mixture for an infinite number of layers. The results prove that a physical mixture of methanol synthesis and dehydration catalyst achieves the highest DME productivity under operating conditions chosen in this study. This can be explained by the higher average methanol concentration for the layered catalyst arrangement and thus stronger equilibrium constraints for the methanol synthesis reaction. Essentially, the layered catalyst arrangement is comparable to a cascade model of the two-step process, which is less efficient in terms of DME yield than the single-step process. However, since a significant effect was found, the layered catalyst arrangement could be beneficial for other reaction systems. (orig.)

  17. Biomolecular modelling and simulations

    CERN Document Server

    Karabencheva-Christova, Tatyana

    2014-01-01

    Published continuously since 1944, the Advances in Protein Chemistry and Structural Biology series is the essential resource for protein chemists. Each volume brings forth new information about protocols and analysis of proteins. Each thematically organized volume is guest edited by leading experts in a broad range of protein-related topics. This volume: describes advances in biomolecular modelling and simulations; chapters are written by authorities in their field; targeted to a wide audience of researchers, specialists, and students. The information provided in the volume is well supported by a number of high quality illustrations, figures, and tables.

  18. A model for self-diffusion of guanidinium-based ionic liquids: a molecular simulation study.

    Science.gov (United States)

    Klähn, Marco; Seduraman, Abirami; Wu, Ping

    2008-11-06

    We propose a novel self-diffusion model for ionic liquids on an atomic level of detail. The model is derived from molecular dynamics simulations of guanidinium-based ionic liquids (GILs) as a model case. The simulations are based on an empirical molecular mechanical force field, which has been developed in our preceding work, and it relies on the charge distribution in the actual liquid. The simulated GILs consist of acyclic and cyclic cations that were paired with nitrate and perchlorate anions. Self-diffusion coefficients are calculated at different temperatures, from which diffusive activation energies between 32 and 40 kJ/mol are derived. Vaporization enthalpies between 174 and 212 kJ/mol are calculated, and their strong connection with diffusive activation energies is demonstrated. An observed formation of cavities in GILs of up to 6.5% of the total volume does not facilitate self-diffusion. Instead, the diffusion of ions is found to be determined primarily by interactions with their immediate environment via electrostatic attraction between cation hydrogen and anion oxygen atoms. The calculated average time between single diffusive transitions varies between 58 and 107 ps and determines the speed of diffusion, in contrast to diffusive displacement distances, which were found to be similar in all simulated GILs. All simulations indicate that ions diffuse by using a brachiation type of movement: a diffusive transition is initiated by cleaving close contacts to a coordinated counterion, after which the ion diffuses only about 2 Å until new close contacts are formed with another counterion in its vicinity. The proposed diffusion model links all calculated energetic and dynamic properties of GILs consistently and explains their molecular origin. The validity of the model is confirmed by providing an explanation for the variation of measured ratios of self-diffusion coefficients of cations and paired anions over a wide range of values, encompassing various ionic liquid classes
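    The diffusion coefficients and activation energies quoted above follow from standard relations: the Einstein relation applied to the mean-squared displacement of unwrapped trajectories, and an Arrhenius fit across temperatures. The sketch below illustrates only these two generic steps with NumPy; the trajectory array, time step and temperature list are illustrative placeholders, not data from the study.

        # Sketch: self-diffusion coefficient via the Einstein relation, and a
        # diffusive activation energy via an Arrhenius fit. Inputs are placeholders.
        import numpy as np

        def self_diffusion(positions, dt_ps):
            """positions: (n_frames, n_ions, 3) unwrapped coordinates in Angstrom."""
            lags = np.arange(1, positions.shape[0] // 2)
            msd = np.array([np.mean(np.sum((positions[lag:] - positions[:-lag]) ** 2, axis=-1))
                            for lag in lags])
            slope = np.polyfit(lags * dt_ps, msd, 1)[0]   # MSD(t) ~ 6 D t in 3D
            return slope / 6.0                            # Angstrom^2 / ps

        def activation_energy(temps_K, diffusion_coeffs):
            """Arrhenius fit D = D0 exp(-Ea / (R T)); returns Ea in kJ/mol."""
            R = 8.314e-3                                  # kJ / (mol K)
            slope = np.polyfit(1.0 / np.asarray(temps_K), np.log(diffusion_coeffs), 1)[0]
            return -slope * R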

  19. Study of visualized simulation and analysis of nuclear fuel cycle system based on multilevel flow model

    International Nuclear Information System (INIS)

    Liu Jingquan; Yoshikawa, H.; Zhou Yangping

    2005-01-01

    Complex energy and environment systems, especially the nuclear fuel cycle system, have recently raised social concerns about the issues of economic competitiveness, environmental effects and nuclear proliferation. Only if a consensus on those conflicting issues is reached between stakeholders with different knowledge backgrounds can the nuclear power industry continue to develop. In this paper, a new analysis platform has been developed to help stakeholders recognize and analyze various socio-technical issues in the nuclear fuel cycle system, based on the functional modeling method named Multilevel Flow Models (MFM) and following human cognition theory. Its characteristic is that MFM models define a set of mass, energy and information flow structures on multiple levels of abstraction to describe the functional structure of a process system, while the graphical symbol representation and the means-end and part-whole hierarchical flow structures make the represented process easy to understand. Based upon this methodology, a micro-process and a macro-process of the nuclear fuel cycle system were selected for simulation, and analysis processes such as economic analysis, environmental analysis and energy balance analysis related to those flows were also integrated, with the introduction of some new functions in the improved Multilevel Flow Models Studio, to help stakeholders understand the decision-making process. Finally, simple simulations such as the spent fuel management process and the money flow of the nuclear fuel cycle with its levelised cost analysis are presented as feasible examples. (authors)

  20. Study of visualized simulation and analysis of nuclear fuel cycle system based on multilevel flow model

    Institute of Scientific and Technical Information of China (English)

    LIU Jing-Quan; YOSHIKAWA Hidekazu; ZHOU Yang-Ping

    2005-01-01

    Complex energy and environment systems, especially the nuclear fuel cycle system, have recently raised social concerns about the issues of economic competitiveness, environmental effects and nuclear proliferation. Only if a consensus on those conflicting issues is reached between stakeholders with different knowledge backgrounds can the nuclear power industry continue to develop. In this paper, a new analysis platform has been developed to help stakeholders recognize and analyze various socio-technical issues in the nuclear fuel cycle system, based on the functional modeling method named Multilevel Flow Models (MFM) and following human cognition theory. Its characteristic is that MFM models define a set of mass, energy and information flow structures on multiple levels of abstraction to describe the functional structure of a process system, while the graphical symbol representation and the means-end and part-whole hierarchical flow structures make the represented process easy to understand. Based upon this methodology, a micro-process and a macro-process of the nuclear fuel cycle system were selected for simulation, and analysis processes such as economic analysis, environmental analysis and energy balance analysis related to those flows were also integrated, with the introduction of some new functions in the improved Multilevel Flow Models Studio, to help stakeholders understand the decision-making process. Finally, simple simulations such as the spent fuel management process and the money flow of the nuclear fuel cycle with its levelised cost analysis are presented as feasible examples.

  1. Mechanical Study of Standard Six Beat Front Crawl Swimming by Using Swimming Human Simulation Model

    Science.gov (United States)

    Nakashima, Motomu

    There are many dynamical problems in front crawl swimming which have not been fully investigated by analytical approaches. Therefore, in this paper, standard six beat front crawl swimming is analyzed by the swimming human simulation model SWUM, which has been developed by the authors. First, the outline of the simulation model, the joint motion for one stroke cycle, and the specifications of calculation are described respectively. Next, contribution of each fluid force component and of each body part to the thrust, effect of the flutter kick, estimation of the active drag, roll motion, and the propulsive efficiency are discussed respectively. The following results were theoretically obtained: The thrust is produced at the upper limb by the normal drag force component. The flutter kick plays a role in raising the lower half of the body. The active drag coefficient in the simulation becomes 0.082. Buoyancy determines the primal wave of the roll motion fluctuation. The propulsive efficiency in the simulation becomes 0.2.

  2. A Study of Synchronous Machine Model Implementations in Matlab/Simulink Simulations for New and Renewable Energy Systems

    DEFF Research Database (Denmark)

    Chen, Zhe; Blaabjerg, Frede; Iov, Florin

    2005-01-01

    A direct phase model of synchronous machines implemented in MATLAB/SIMULINK is presented. The effects of the machine saturation have been included. Simulation studies are performed under various conditions. It has been demonstrated that MATLAB/SIMULINK is an effective tool to study the compl...... synchronous machine and the implemented model could be used for studies of various applications of synchronous machines including in renewable and DG generation systems....

  3. A COMPARATIVE STUDY OF SIMULATION AND TIME SERIES MODEL IN QUANTIFYING BULLWHIP EFFECT IN SUPPLY CHAIN

    Directory of Open Access Journals (Sweden)

    T. V. O. Fabson

    2011-11-01

    Bullwhip (or whiplash) effect is an observed phenomenon in forecast-driven distribution channels, and careful management of these effects is of great importance to managers of supply chains. The bullwhip effect refers to situations where orders to the supplier tend to have larger variance than sales to the buyer (demand distortion), and the distortion increases as we move up the supply chain. Because customer demand for a product is unstable, business managers must forecast in order to properly position inventory and other resources. Forecasts are statistically based and, in most cases, are not very accurate. The existence of forecast errors makes it necessary for organizations to often carry an inventory buffer called “safety stock”. Moving up the supply chain from the end-user customers to the raw materials supplier, there is a lot of variation in demand that can be observed, which calls for a greater need for safety stock. This study compares the efficacy of simulation and a time series model in quantifying the bullwhip effect in supply chain management.
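    A common way to quantify the bullwhip effect is the variance amplification ratio Var(orders)/Var(demand). The sketch below simulates a single echelon with an order-up-to policy and a moving-average forecast; the policy and parameters are assumptions chosen for illustration, not the simulation or time-series models compared in the paper.

        # Minimal sketch: quantify the bullwhip effect as Var(orders) / Var(demand)
        # for one echelon using an order-up-to policy with a moving-average forecast.
        import numpy as np

        rng = np.random.default_rng(0)
        demand = rng.normal(100, 10, size=2000)       # customer demand seen by the retailer
        lead_time, window = 2, 4                      # review parameters (assumed)

        forecast = np.convolve(demand, np.ones(window) / window, mode="valid")
        inventory_position = (lead_time + 1) * demand.mean()
        orders = []
        for d, f in zip(demand[window - 1:], forecast):
            target = (lead_time + 1) * f              # order-up-to level from the forecast
            order = max(0.0, target - (inventory_position - d))
            inventory_position += order - d
            orders.append(order)

        bullwhip = np.var(orders) / np.var(demand[window - 1:])
        print(f"bullwhip ratio (order variance / demand variance): {bullwhip:.2f}")

    Ratios above 1 indicate demand distortion; longer lead times or shorter forecast windows typically push the ratio higher in this toy setting.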

  4. Developing an Agent-Based Model to Simulate Urban Land-Use Expansion (Case Study: Qazvin)

    OpenAIRE

    F. Nourian; A. A. Alesheikh; F. Hosseinali

    2012-01-01

    Extended abstract. 1 - Introduction. Urban land-use expansion is a challenging issue in developing countries. Increases in population as well as migration from villages to cities are the two major factors behind that phenomenon. Those factors have reduced the influence of efforts that try to limit the cities’ boundaries. Thus, spatial planners always look for models that simulate the expansion of urban land-uses and enable them to prevent unbalanced expansions of cities and guide the...

  5. Computer simulation study of the nematic-vapour interface in the Gay-Berne model

    Science.gov (United States)

    Rull, Luis F.; Romero-Enrique, José Manuel

    2017-06-01

    We present computer simulations of the vapour-nematic interface of the Gay-Berne model. We considered situations which correspond to either prolate or oblate molecules. We determine the anchoring of the nematic phase and correlate it with the intermolecular potential parameters. On the other hand, we evaluate the surface tension associated with this interface. We find a corresponding-states law for the dependence of the surface tension on temperature, valid for both prolate and oblate molecules.

  6. Modeling the autonomic and metabolic effects of obstructive sleep apnea: A simulation study.

    Directory of Open Access Journals (Sweden)

    Limei Cheng

    2012-01-01

    Long-term exposure to intermittent hypoxia and sleep fragmentation introduced by recurring obstructive sleep apnea has been linked to subsequent cardiovascular disease and Type 2 diabetes. The underlying mechanisms remain unclear, but impairment of the normal interactions among the systems that regulate autonomic and metabolic function is likely involved. We have extended an existing integrative model of respiratory, cardiovascular and sleep-wake state control to incorporate a sub-model of glucose-insulin-fatty acid regulation. This computational model is capable of simulating the complex dynamics of cardiorespiratory control, chemoreflex and state-related control of breath-to-breath ventilation, state-related and chemoreflex control of upper airway patency, respiratory and circulatory mechanics, as well as the metabolic control of glucose-insulin dynamics and its interactions with autonomic control. The interactions between autonomic and metabolic control include the circadian regulation of epinephrine secretion, epinephrine regulation of dynamic fluctuations in plasma glucose and free fatty acids, metabolic coupling among tissues and organs provided by insulin and epinephrine, as well as the effect of insulin on peripheral vascular sympathetic activity. These model simulations provide insight into the relative importance of the various mechanisms that determine the acute and chronic physiological effects of sleep-disordered breathing. The model can also be used to investigate the effects of a variety of interventions, such as different glucose clamps, the intravenous glucose tolerance test and the application of continuous positive airway pressure in obstructive sleep apnea subjects. As such, this model provides the foundation on which future efforts to simulate disease progression and the long-term effects of pharmacological intervention can be based.

  7. Study and application of microscopic depletion model in core simulator of COSINE project

    International Nuclear Information System (INIS)

    Hu Xiaoyu; Wang Su; Yan Yuhang; Liu Zhanquan; Chen Yixue; Huang Kai

    2013-01-01

    Microscopic depletion correction is a commonly used technique to alleviate the inaccuracy caused by history effects and attain higher precision in diffusion calculations. The core simulator of the COSINE project (core and system integrated engine for design and analysis) has developed a hybrid macroscopic-microscopic depletion model to track important isotopes throughout the depletion history and correct the macroscopic cross sections. The basic theory is discussed in this paper, and the effects and results of the microscopic depletion correction are analyzed. The preliminary test results demonstrate that the microscopic depletion model is effective and practicable for improving the precision of core calculations. (authors)

  8. The effect of medical trainees on pediatric emergency department flow: a discrete event simulation modeling study.

    Science.gov (United States)

    Genuis, Emerson D; Doan, Quynh

    2013-11-01

    Providing patient care and medical education are both important missions of teaching hospital emergency departments (EDs). With medical school enrollment rising, and ED crowding becoming an increasingly prevalent issue, it is important for both pediatric EDs (PEDs) and general EDs to find a balance between these two potentially competing goals. The objective was to determine how the number of trainees in a PED affects patient wait time, total ED length of stay (LOS), and rates of patients leaving without being seen (LWBS) for PED patients overall and stratified by acuity level as defined by the Pediatric Canadian Triage and Acuity Scale (CTAS), using discrete event simulation (DES) modeling. A DES model of an urban tertiary care PED, which receives approximately 40,000 visits annually, was created and validated. Thirteen different trainee schedules, which ranged from averaging zero to six trainees per shift, were input into the DES model and the outcome measures were determined using the combined output of five model iterations. An increase in LOS of approximately 7 minutes was noted to be associated with each additional trainee per attending emergency physician working in the PED. The relationship between the number of trainees and wait time varied with patients' level of acuity and with the degree of PED utilization. Patient wait time decreased as the number of trainees increased for low-acuity visits and when the PED was not operating at full capacity. With rising numbers of trainees, the PED LWBS rate decreased in the whole department and in the CTAS 4 and 5 patient groups, but it rose in patients triaged CTAS 3 or higher. A rising number of trainees was not associated with any change to flow outcomes for CTAS 1 patients. The results of this study demonstrate that trainees in PEDs have an impact mainly on patient LOS and that the effect on wait time differs between patients presenting with varying degrees of acuity. These findings will assist PEDs in finding a
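    A discrete event simulation of this kind can be prototyped with SimPy. The toy single-server sketch below assumes each trainee adds roughly 7 minutes of service time per patient (the figure reported above); the arrival rate, base service time and shift length are assumptions for illustration, and this is not the validated PED model from the study.

        # Toy SimPy sketch: one attending physician whose per-patient service time
        # grows with the number of supervised trainees. All rates are assumed.
        import random
        import simpy

        ARRIVAL_MEAN = 45.0      # minutes between arrivals (assumed)
        BASE_SERVICE = 20.0      # minutes per patient with no trainees (assumed)
        EXTRA_PER_TRAINEE = 7.0  # extra minutes per trainee, as suggested above

        def run(n_trainees, horizon=8 * 60, seed=1):
            random.seed(seed)
            env = simpy.Environment()
            attending = simpy.Resource(env, capacity=1)
            lengths_of_stay = []

            def patient(env):
                arrival = env.now
                with attending.request() as req:
                    yield req
                    mean_service = BASE_SERVICE + EXTRA_PER_TRAINEE * n_trainees
                    yield env.timeout(random.expovariate(1.0 / mean_service))
                lengths_of_stay.append(env.now - arrival)

            def arrivals(env):
                while True:
                    yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))
                    env.process(patient(env))

            env.process(arrivals(env))
            env.run(until=horizon)
            return sum(lengths_of_stay) / len(lengths_of_stay)

        for k in range(4):
            print(f"{k} trainees: mean LOS {run(k):.1f} min")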

  9. A simulation study on Bayesian Ridge regression models for several collinearity levels

    Science.gov (United States)

    Efendi, Achmad; Effrihan

    2017-12-01

    When analyzing data with a multiple regression model, if there are collinearities then one or several predictor variables are usually omitted from the model. However, there are sometimes reasons, for instance medical or economic ones, why the predictors are all important and should be included in the model. Ridge regression is not uncommon in research as a way to cope with collinearity. In this modeling approach, weights for the predictor variables are used in estimating the parameters. The estimation process can follow the concept of likelihood. Furthermore, a Bayesian version of the estimation is nowadays an alternative. This estimation method does not match the likelihood one in terms of popularity, due to some difficulties such as computation. Nevertheless, with the recent improvement of computational methodology, this caveat should no longer be a problem. This paper discusses a simulation process for evaluating the characteristics of Bayesian Ridge regression parameter estimates. There are several simulation settings based on a variety of collinearity levels and sample sizes. The results show that the Bayesian method gives better performance for relatively small sample sizes, while for the other settings the method performs similarly to the likelihood method.
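    The kind of comparison described (likelihood-based versus Bayesian ridge estimation under collinearity and small samples) can be sketched with scikit-learn's BayesianRidge. The coefficients, sample size, collinearity level and repetition count below are illustrative, not the settings used in the paper.

        # Illustrative sketch: compare ordinary least squares with Bayesian ridge
        # regression on strongly collinear predictors and a small sample size.
        import numpy as np
        from sklearn.linear_model import BayesianRidge, LinearRegression

        rng = np.random.default_rng(42)
        beta = np.array([1.0, 2.0, -1.5])              # true coefficients (assumed)
        n, n_rep, rho = 20, 200, 0.95                  # small sample, strong collinearity
        cov = rho * np.ones((3, 3)) + (1 - rho) * np.eye(3)

        err_ols, err_bayes = [], []
        for _ in range(n_rep):
            X = rng.multivariate_normal(np.zeros(3), cov, size=n)
            y = X @ beta + rng.normal(0, 1, size=n)
            err_ols.append(np.sum((LinearRegression().fit(X, y).coef_ - beta) ** 2))
            err_bayes.append(np.sum((BayesianRidge().fit(X, y).coef_ - beta) ** 2))

        print(f"mean squared coefficient error, OLS:           {np.mean(err_ols):.3f}")
        print(f"mean squared coefficient error, Bayesian ridge: {np.mean(err_bayes):.3f}")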

  10. Distributed simulation: a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  11. Do we need full mesoscale models to simulate the urban heat island? A study over the city of Barcelona.

    Science.gov (United States)

    García-Díez, Markel; Ballester, Joan; De Ridder, Koen; Hooyberghs, Hans; Lauwaet, Dirk; Rodó, Xavier

    2016-04-01

    As most of the population lives in urban environments, the simulation of the urban climate has become an important part of global climate change impact assessment. However, due to the high resolution required, these simulations demand a large amount of computational resources. Here we present a comparison between a simplified fast urban climate model (UrbClim) and a widely used full mesoscale model, the Weather Research and Forecasting (WRF) model, over the city of Barcelona. In order to check the advantages and disadvantages of each approach, both simulations were compared with station data and with land surface temperature observations retrieved by satellites, focusing on the urban heat island. The effect of changing the UrbClim boundary conditions was studied too, by using low-resolution global reanalysis data (70 km) and a higher-resolution forecast model (15 km). Finally, a strict comparison of the computational resources consumed by both models was carried out. Results show that, generally, the performance of the simple model is comparable to or better than that of the mesoscale model. The exceptions are the winds and the day-to-day correlation in the reanalysis-driven run, but these problems disappear when taking the boundary conditions from a higher-resolution global model. UrbClim was found to run 133 times faster than WRF, at 4 times higher resolution, and thus it is an efficient solution for running long climate change simulations over large city ensembles.

  12. Comparative study of wall-force models for the simulation of bubbly flows

    Energy Technology Data Exchange (ETDEWEB)

    Rzehak, Roland, E-mail: r.rzehak@hzdr.de [Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Institute of Fluid Dynamics, POB 510119, D-01314 Dresden (Germany); Krepper, Eckhard, E-mail: E.Krepper@hzdr.de [Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Institute of Fluid Dynamics, POB 510119, D-01314 Dresden (Germany); Lifante, Conxita, E-mail: Conxita.Lifante@ansys.com [ANSYS Germany GmbH, Staudenfeldweg 12, 83624 Otterfing (Germany)

    2012-12-15

    Highlights: • Comparison of common models for the wall force with an experimental database. • Identification of suitable closure for bubbly flow. • Enables prediction of location and height of wall peak in void fraction profiles. - Abstract: Accurate numerical prediction of void-fraction profiles in bubbly multiphase-flow relies on suitable closure models for the momentum exchange between liquid and gas phases. We here consider forces acting on the bubbles in the vicinity of a wall. A number of different models for this so-called wall-force have been proposed in the literature and are implemented in widely used CFD-codes. Simulations using a selection of these models are compared with a set of experimental data on bubbly air-water flow in round pipes of different diameter. Based on the results, recommendations on suitable closures are given.

  13. When are solar refrigerators less costly than on-grid refrigerators: A simulation modeling study.

    Science.gov (United States)

    Haidari, Leila A; Brown, Shawn T; Wedlock, Patrick; Connor, Diana L; Spiker, Marie; Lee, Bruce Y

    2017-04-19

    Gavi recommends solar refrigerators for vaccine storage in areas with less than eight hours of electricity per day, and WHO guidelines are more conservative. The question remains: Can solar refrigerators provide value where electrical outages are less frequent? Using a HERMES-generated computational model of the Mozambique routine immunization supply chain, we simulated the use of solar versus electric mains-powered refrigerators (hereafter referred to as "electric refrigerators") at different locations in the supply chain under various circumstances. At their current price premium, the annual cost of each solar refrigerator is 132% more than each electric refrigerator at the district level and 241% more at health facilities. Solar refrigerators provided savings over electric refrigerators when one-day electrical outages occurred more than five times per year at either the district level or the health facilities, even when the electric refrigerator holdover time exceeded the duration of the outage. Two-day outages occurring more than three times per year at the district level or more than twice per year at the health facilities also caused solar refrigerators to be cost saving. Lowering the annual cost of a solar refrigerator to 75% more than an electric refrigerator allowed solar refrigerators to be cost saving at either level when one-day outages occurred more than once per year, or when two-day outages occurred more than once per year at the district level or even once per year at the health facilities. Our study supports WHO and Gavi guidelines. In fact, solar refrigerators may provide savings in total cost per dose administered over electric refrigerators even when electrical outages are relatively infrequent. Our study identified the frequency and duration at which electrical outages need to occur for solar refrigerators to provide savings in total cost per dose administered over electric refrigerators at different solar refrigerator prices.
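    The break-even logic described above can be illustrated with a back-of-the-envelope cost comparison. Only the 132% price premium is taken from the abstract; the base cost, doses lost per outage and cost per dose are placeholder figures, not outputs of the HERMES model.

        # Back-of-the-envelope sketch of the solar vs. electric break-even comparison.
        def annual_cost(refrigerator_cost, outages_per_year, doses_lost_per_outage, cost_per_dose):
            return refrigerator_cost + outages_per_year * doses_lost_per_outage * cost_per_dose

        electric_annual = 300.0                        # assumed annualized cost of an electric unit
        solar_annual = electric_annual * (1 + 1.32)    # 132% price premium (district level, from the study)

        for outages in range(0, 11):
            electric = annual_cost(electric_annual, outages, doses_lost_per_outage=80, cost_per_dose=1.0)
            solar = annual_cost(solar_annual, 0, 0, 0)  # solar assumed unaffected by grid outages
            cheaper = "solar" if solar < electric else "electric"
            print(f"{outages:2d} one-day outages/year -> cheaper option: {cheaper}")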

  14. Using ProModel as a simulation tool to assist plant layout design and planning: Case study plastic packaging factory

    Directory of Open Access Journals (Sweden)

    Pochamarn Tearwattanarattikal

    2008-01-01

    This study is about the application of a simulation model to assist decision making on expanding capacity and plant layout design and planning. The plant layout design concept is developed first to create the physical layouts; the simulation model is then used to test the capability of the plant to meet various demand forecast scenarios. The study employed the ProModel package as a tool, using the model to compare performance in terms of % utilization, WIP characteristics and the ability to meet due dates. The verification and validation stages were performed before running the scenarios. The model runs daily production, and the capacity-constraint resources are then identified by % utilization. The capacity expansion policy can be extra shift working hours or an increased number of machines. After capacity expansion solutions are found, the physical layout is selected based on the criteria of space available for WIP and easy flow of material.

  15. A Case Study Using Modeling and Simulation to Predict Logistics Supply Chain Issues

    Science.gov (United States)

    Tucker, David A.

    2007-01-01

    Optimization of critical supply chains to deliver thousands of parts, materials, sub-assemblies, and vehicle structures as needed is vital to the success of the Constellation Program. Thorough analysis needs to be performed on the integrated supply chain processes to plan, source, make, deliver, and return critical items efficiently. Process modeling provides simulation technology-based, predictive solutions for supply chain problems which enable decision makers to reduce costs, accelerate cycle time and improve business performance. For example, United Space Alliance, LLC utilized this approach in late 2006 to build simulation models that recreated shuttle orbiter thruster failures and predicted the potential impact of thruster removals on logistics spare assets. The main objective was the early identification of possible problems in providing thruster spares for the remainder of the Shuttle Flight Manifest. After extensive analysis the model results were used to quantify potential problems and led to improvement actions in the supply chain. Similarly the proper modeling and analysis of Constellation parts, materials, operations, and information flows will help ensure the efficiency of the critical logistics supply chains and the overall success of the program.

  16. Simulation studies of optimum energies for DXA: dependence on tissue type, patient size and dose model

    International Nuclear Information System (INIS)

    Michael, G. J.; Henderson, C. J.

    1999-01-01

    Dual-energy x-ray absorptiometry (DXA) is a well established technique for measuring bone mineral density (BMD). However, in recent years DXA is increasingly being used to measure body composition in terms of fat and fat-free mass. DXA scanners must also determine the soft tissue baseline value from soft-tissue-only regions adjacent to bone. The aim of this work is to determine, using computer simulations, the optimum x-ray energies for a number of dose models; different tissues, i.e. bone mineral, average soft tissue, lean soft tissue and fat; and a range of anatomical sites and patient sizes. Three models for patient dose were evaluated: total beam energy, entrance exposure and absorbed dose calculated by Monte Carlo modelling. A range of tissue compositions and thicknesses was chosen to cover typical patient variations for the three sites: femoral neck, PA spine and lateral spine. In this work, the optimisation of the energies is based on (1) the uncertainty that arises from the quantum statistical nature of the number of x-rays recorded by the detector, and (2) the radiation dose received by the patient. This study has deliberately not considered other parameters such as detector response, electronic noise, x-ray tube heat load etc., because these are technology-dependent parameters, not ones that are inherent to the measuring technique. Optimisation of the energies is achieved by minimisation of the product of the variance of the density measurement and the dose, which is independent of the absolute intensities of the x-ray beams. The results obtained indicate that if solving for bone density, then E-low in the range 34 to 42 keV, E-high in the range 100 to 200 keV and an incident intensity ratio (low energy/high energy) in the range 3 to 10 is a reasonable compromise for the normal range of patient sizes. The choice of energies is complicated by the fact that the DXA unit must also solve for fat and lean soft tissue in soft-tissue-only regions adjacent to the bone. In this
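    The optimisation itself is not sketched here, but the dual-energy decomposition that underlies it can be written as a 2x2 linear system: the log-attenuation at each energy is a weighted sum of the two areal densities being solved for. The attenuation coefficients and intensities below are placeholder values, not the simulated spectra from the study.

        # Sketch of the two-material dual-energy decomposition behind DXA:
        # ln(I0/I) = mu_bone * sigma_bone + mu_soft * sigma_soft at each energy.
        import numpy as np

        # mass attenuation coefficients (cm^2/g) at the low and high energy (assumed values)
        mu_bone = {"low": 0.60, "high": 0.20}
        mu_soft = {"low": 0.25, "high": 0.18}

        def areal_densities(I_low, I0_low, I_high, I0_high):
            A = np.array([[mu_bone["low"],  mu_soft["low"]],
                          [mu_bone["high"], mu_soft["high"]]])
            b = np.array([np.log(I0_low / I_low), np.log(I0_high / I_high)])
            sigma_bone, sigma_soft = np.linalg.solve(A, b)
            return sigma_bone, sigma_soft              # areal densities in g/cm^2

        print(areal_densities(I_low=2.0e4, I0_low=1.0e5, I_high=5.0e4, I0_high=1.0e5))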

  17. Cycle Time and Throughput Rate Modelling Study through the Simulation Platform

    Directory of Open Access Journals (Sweden)

    Fei Xiong

    2014-02-01

    Shorter cycle time (CT) and higher throughput rate (TH) are primary goals of the industry, including sensor and transducer factories. The common way of reducing cycle time is to reduce WIP, but such action may also reduce throughput. This paper presents one practical, healthy heuristic algorithm based on tool time modelling to balance both CT and TH. This algorithm considers the factors that exist in the work in process (WIP) and its constraints in the modules of the factory. A computer simulation platform based on a semiconductor factory was built to verify this algorithm. The results of the computational simulation experiments suggest that the WIP level calculated by this algorithm can achieve a good balance of CT and TH.
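    The CT/TH trade-off discussed here is governed by Little's Law, CT = WIP / TH. The tiny sketch below shows that relation with a crude saturation model of throughput; the WIP levels, capacity and saturation point are arbitrary placeholders, not the heuristic or factory data from the paper.

        # Minimal illustration of the cycle-time / throughput trade-off via Little's Law.
        capacity = 100.0     # parts per day at full load (assumed)
        critical_wip = 50    # WIP level at which the line saturates (assumed)

        for wip in (10, 25, 50, 100, 200):
            throughput = capacity * min(1.0, wip / critical_wip)   # crude saturation model
            cycle_time = wip / throughput                           # Little's Law: CT = WIP / TH
            print(f"WIP={wip:4d}  TH={throughput:6.1f}/day  CT={cycle_time:5.2f} days")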

  18. [Studies of ozone formation potentials for benzene and ethylbenzene using a smog chamber and model simulation].

    Science.gov (United States)

    Jia, Long; Xu, Yong-Fu

    2014-02-01

    Ozone formation potentials from irradiations of benzene-NOx and ethylbenzene-NOx systems under the conditions of different VOC/NOx ratios and RH were investigated using a characterized chamber and model simulation. The repeatability of the smog chamber experiment shows that for two sets of ethylbenzene-NOx irradiations with similar initial concentrations and reaction conditions, such as temperature, relative humidity and relative light intensity, the largest difference in O3 between the two experiments is only 4% during the whole experimental run. On the basis of smog chamber experiments, ozone formation from the photo-oxidation of benzene and ethylbenzene was simulated in terms of the master chemical mechanism (MCM). The peak ozone values for benzene and ethylbenzene simulated by MCM are higher than the chamber data, and the difference between the MCM-simulated results and the chamber data increases with increasing RH. Under the conditions of sunlight irradiation, with benzene and ethylbenzene concentrations in the range of (10-50) x 10^-9 and NOx concentrations in the range of (10-100) x 10^-9, the 6 h ozone contributions of benzene and ethylbenzene were obtained to be (3.1-33) x 10^-9 and (2.6-122) x 10^-9, whereas the peak O3 contributions of benzene and ethylbenzene were (3.5-54) x 10^-9 and (3.8-164) x 10^-9, respectively. The MCM-simulated maximum incremental reactivity (MIR) values for benzene and ethylbenzene were 0.25/C and 0.97/C (per carbon), respectively. The maximum ozone reactivity (MOR) values for these two species were obtained to be 0.73/C and 1.03/C, respectively. The MOR value of benzene from MCM is much higher than that obtained by Carter from SAPRC, indicating that SAPRC may underestimate the ozone formation potential of benzene.

  19. Study of tropical cloud feedback to climate warming as simulated by climate models

    International Nuclear Information System (INIS)

    Brient, Florent

    2012-01-01

    The last IPCC report affirms the predominant role of low cloud-radiative feedbacks in the inter-model spread of climate sensitivity. Understanding the mechanisms that control the behavior of low-level clouds is thus crucial. However, the complexity of coupled ocean-atmosphere models and the large number of processes potentially involved make the analysis of this response difficult. To simplify the analysis and to identify the most critical controls of cloud feedbacks, we analyze the cloud response to climate change simulated by the IPSL-CM5A model in a hierarchy of configurations. A comparison between three model configurations (coupled, atmospheric and aqua-planet) using the same physical parametrizations shows that the cloud response to global warming is dominated by a decrease of low clouds in regimes of moderate subsidence. Using a single-column model, forced by weak-subsidence large-scale forcing, allows us to reproduce the vertical cloud profile predicted in the 3D model, as well as its response to climate change (if a stochastic forcing is added to the vertical velocity). We analyze the sensitivity of this low-cloud response to external forcing and also to uncertain parameters of the physical parameterizations involved in the atmospheric model. Through a moist static energy (MSE) budget, we highlight several mechanisms: (1) Robust: Over weak subsidence regimes, the Clausius-Clapeyron relationship predicts that a warmer atmosphere leads to an increase of the vertical MSE gradient, resulting in a strengthening of the import of low-MSE air from the free atmosphere into the cloudy boundary layer. The MSE budget links changes of vertical advection and cloud radiative effects. (2) Physics model dependent: The coupling between shallow convection, turbulence and cloud schemes allows the intensification of low-MSE transport so that cloud radiative cooling becomes 'less necessary' to balance the energy budget (robust positive low cloud-radiative feedback for the model). The

  20. Simulation model study of limitation on the locating distance of a ground penetrating radar; Chichu tansa radar no tansa kyori genkai ni kansuru simulation model no kochiku

    Energy Technology Data Exchange (ETDEWEB)

    Nakauchi, T; Tsunasaki, M; Kishi, M; Hayakawa, H [Osaka Gas Co. Ltd., Osaka (Japan)

    1996-10-01

    Various simulations were carried out under various laying conditions to obtain the locating distance limit of ground penetrating radar. Recently, ground penetrating radar has attracted attention as a technology for locating obstacles such as existing buried objects. To enhance the theoretical model (radar equation) of the maximum locating distance, the following factors were examined experimentally using pulse ground penetrating radar: ground surface conditions such as asphalt pavement, diameter of buried pipes, material of buried pipes, effect of soil, and antenna gain. The simulation results agreed well with actual field experiments. By incorporating the antenna gain and the effect of the ground surface, a more practical simulation using underground models became possible. The maximum locating distance was improved more by a large antenna than by a small one in the actual field. It is assumed that the large antenna components contributed to the improvement of gain and the reduction of attenuation while passing through the soil. 5 refs., 12 figs.

  1. Dynamic Value at Risk: A Comparative Study Between Heteroscedastic Models and Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    José Lamartine Távora Junior

    2006-12-01

    The objective of this paper was to analyze the risk management of a portfolio composed of Petrobras PN, Telemar PN and Vale do Rio Doce PNA stocks. It was verified whether the modeling of Value-at-Risk (VaR) through Monte Carlo simulation with GARCH-family volatility is supported by the efficient market hypothesis. The results show that the static evaluation is inferior to the dynamic one, indicating that the dynamic analysis provides support to the efficient market hypothesis for the Brazilian stock market, in opposition to some empirical evidence. It was also verified that GARCH volatility models are sufficient to accommodate the variations of the Brazilian stock market, since the model is capable of capturing its strong dynamics.
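    The type of calculation described (dynamic VaR from Monte Carlo paths with GARCH volatility) can be sketched as follows. The GARCH(1,1) parameters, horizon and portfolio value are illustrative placeholders, not estimates for the Petrobras/Telemar/Vale portfolio analysed in the paper.

        # Sketch of dynamic VaR via Monte Carlo with GARCH(1,1) volatility.
        import numpy as np

        rng = np.random.default_rng(7)
        omega, alpha, beta = 1e-6, 0.08, 0.90      # assumed GARCH(1,1) parameters
        sigma2_0, horizon, n_paths = 4e-4, 10, 50_000
        portfolio_value = 1_000_000.0

        returns = np.zeros(n_paths)
        sigma2 = np.full(n_paths, sigma2_0)
        eps_prev = np.zeros(n_paths)
        for _ in range(horizon):
            sigma2 = omega + alpha * eps_prev ** 2 + beta * sigma2   # GARCH(1,1) recursion
            eps_prev = np.sqrt(sigma2) * rng.standard_normal(n_paths)
            returns += eps_prev                                       # accumulate log-returns

        var_95 = -np.quantile(portfolio_value * returns, 0.05)        # 95% Value-at-Risk
        print(f"10-day 95% VaR: {var_95:,.0f}")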

  2. Comparative study of micromixing models in transported scalar PDF simulations of turbulent nonpremixed bluff body flames

    Energy Technology Data Exchange (ETDEWEB)

    Merci, Bart [Department of Flow, Heat and Combustion Mechanics, Ghent University-UGent, Ghent (Belgium); Roekaerts, Dirk [Department of Multi-Scale Physics, Delft University of Technology, Delft (Netherlands); Naud, Bertrand [CIEMAT, Madrid (Spain); Pope, Stephen B. [Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY (United States)

    2006-07-15

    Numerical simulation results are presented for turbulent jet diffusion flames with various levels of turbulence-chemistry interaction, stabilized behind a bluff body (Sydney Flames HM1-3). Interaction between turbulence and combustion is modeled with the transported joint-scalar PDF approach. The mass density function transport equation is solved in a Lagrangian manner. A second-moment-closure turbulence model is applied to obtain accurate mean flow and turbulent mixing fields. The behavior of two micromixing models is discussed: the Euclidean minimum spanning tree model and the modified Curl coalescence dispersion model. The impact of the micromixing model choice on the results in physical space is small, although some influence becomes visible as the amount of local extinction increases. Scatter plots and profiles of conditional means and variances of thermochemical quantities, conditioned on the mixture fraction, are discussed both within and downstream of the recirculation region. A distinction is made between local extinction and incomplete combustion, based on the CO species mass fraction. The differences in qualitative behavior between the micromixing models are explained and quantitative comparison to experimental data is made. (author)

  3. Accounting for treatment use when validating a prognostic model: a simulation study.

    Science.gov (United States)

    Pajouheshnia, Romin; Peelen, Linda M; Moons, Karel G M; Reitsma, Johannes B; Groenwold, Rolf H H

    2017-07-14

    Prognostic models often show poor performance when applied to independent validation data sets. We illustrate how treatment use in a validation set can affect measures of model performance and present the uses and limitations of available analytical methods to account for this using simulated data. We outline how the use of risk-lowering treatments in a validation set can lead to an apparent overestimation of risk by a prognostic model that was developed in a treatment-naïve cohort to make predictions of risk without treatment. Potential methods to correct for the effects of treatment use when testing or validating a prognostic model are discussed from a theoretical perspective. Subsequently, we assess, in simulated data sets, the impact of excluding treated individuals and the use of inverse probability weighting (IPW) on the estimated model discrimination (c-index) and calibration (observed:expected ratio and calibration plots) in scenarios with different patterns and effects of treatment use. Ignoring the use of effective treatments in a validation data set leads to poorer model discrimination and calibration than would be observed in the untreated target population for the model. Excluding treated individuals provided correct estimates of model performance only when treatment was randomly allocated, although this reduced the precision of the estimates. IPW followed by exclusion of the treated individuals provided correct estimates of model performance in data sets where treatment use was either random or moderately associated with an individual's risk when the assumptions of IPW were met, but yielded incorrect estimates in the presence of non-positivity or an unobserved confounder. When validating a prognostic model developed to make predictions of risk without treatment, treatment use in the validation set can bias estimates of the performance of the model in future targeted individuals, and should not be ignored. When treatment use is random, treated
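    The IPW idea described above can be sketched as: estimate each individual's probability of being treated, keep only the untreated, and weight them by 1/(1 - propensity) when computing calibration. The data generation below is purely illustrative (one prognostic factor, treatment halving risk), and the use of scikit-learn's LogisticRegression for the propensity model is an implementation assumption, not the paper's method.

        # Sketch: naive vs. IPW-corrected observed:expected (O:E) calibration ratio
        # when a risk-lowering treatment is used in the validation set.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(3)
        n = 20_000
        x = rng.normal(size=n)                                    # a single prognostic factor
        risk = 1 / (1 + np.exp(-(-1.0 + x)))                      # "true" untreated risk
        treated = rng.random(n) < 1 / (1 + np.exp(-(-1.0 + x)))   # higher-risk people treated more often
        y = rng.random(n) < np.where(treated, 0.5 * risk, risk)   # treatment halves risk

        predicted = risk                                          # model predicts untreated risk

        # Naive validation on everyone: treatment makes the model look mis-calibrated.
        print("O:E, all individuals:      ", y.mean() / predicted.mean())

        # IPW: weight untreated individuals by the inverse probability of being untreated.
        propensity = LogisticRegression().fit(x[:, None], treated).predict_proba(x[:, None])[:, 1]
        w = 1.0 / (1.0 - propensity[~treated])
        print("O:E, IPW on the untreated: ",
              np.average(y[~treated], weights=w) / np.average(predicted[~treated], weights=w))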

  4. Accounting for treatment use when validating a prognostic model: a simulation study

    Directory of Open Access Journals (Sweden)

    Romin Pajouheshnia

    2017-07-01

    Abstract Background Prognostic models often show poor performance when applied to independent validation data sets. We illustrate how treatment use in a validation set can affect measures of model performance and present the uses and limitations of available analytical methods to account for this using simulated data. Methods We outline how the use of risk-lowering treatments in a validation set can lead to an apparent overestimation of risk by a prognostic model that was developed in a treatment-naïve cohort to make predictions of risk without treatment. Potential methods to correct for the effects of treatment use when testing or validating a prognostic model are discussed from a theoretical perspective. Subsequently, we assess, in simulated data sets, the impact of excluding treated individuals and the use of inverse probability weighting (IPW) on the estimated model discrimination (c-index) and calibration (observed:expected ratio and calibration plots) in scenarios with different patterns and effects of treatment use. Results Ignoring the use of effective treatments in a validation data set leads to poorer model discrimination and calibration than would be observed in the untreated target population for the model. Excluding treated individuals provided correct estimates of model performance only when treatment was randomly allocated, although this reduced the precision of the estimates. IPW followed by exclusion of the treated individuals provided correct estimates of model performance in data sets where treatment use was either random or moderately associated with an individual's risk when the assumptions of IPW were met, but yielded incorrect estimates in the presence of non-positivity or an unobserved confounder. Conclusions When validating a prognostic model developed to make predictions of risk without treatment, treatment use in the validation set can bias estimates of the performance of the model in future targeted individuals, and

  5. Computational modeling to predict mechanical function of joints: application to the lower leg with simulation of two cadaver studies.

    Science.gov (United States)

    Liacouras, Peter C; Wayne, Jennifer S

    2007-12-01

    Computational models of musculoskeletal joints and limbs can provide useful information about joint mechanics. Validated models can be used as predictive devices for understanding joint function and serve as clinical tools for predicting the outcome of surgical procedures. A new computational modeling approach was developed for simulating joint kinematics that are dictated by bone/joint anatomy, ligamentous constraints, and applied loading. Three-dimensional computational models of the lower leg were created to illustrate the application of this new approach. Model development began with generating three-dimensional surfaces of each bone from CT images and then importing them into the three-dimensional solid modeling software SOLIDWORKS and the motion simulation package COSMOSMOTION. Through SOLIDWORKS and COSMOSMOTION, each bone surface file was filled to create a solid object and positioned, necessary components were added, and simulations were executed. Three-dimensional contacts were added to inhibit intersection of the bones during motion. Ligaments were represented as linear springs. Model predictions were then validated by comparison to two different cadaver studies, syndesmotic injury and repair and ankle inversion following ligament transection. The syndesmotic injury model was able to predict tibial rotation, fibular rotation, and anterior/posterior displacement. In the inversion simulation, calcaneofibular ligament extension and angles of inversion compared well. Some experimental data proved harder to simulate accurately, due to certain software limitations and lack of complete experimental data. Other parameters that could not be easily obtained experimentally can be predicted and analyzed by the computational simulations. In the syndesmotic injury study, the force generated in the tibionavicular and calcaneofibular ligaments reduced with the insertion of the staple, indicating how this repair technique changes joint function. After transection of the calcaneofibular

  6. Study on Fluid-solid Coupling Mathematical Models and Numerical Simulation of Coal Containing Gas

    Science.gov (United States)

    Xu, Gang; Hao, Meng; Jin, Hongwei

    2018-02-01

    Based on coal seam gas migration theory under multi-physics field coupling effects, a fluid-solid coupling model of coal seam gas was built using elastic mechanics, fluid mechanics in porous media and the effective stress principle. Gas seepage behavior under different original gas pressures was simulated. Results indicated that residual gas pressure, gas pressure gradient and gas flow were larger when the original gas pressure was higher. The coal permeability distribution decreased exponentially when the original gas pressure was lower than the critical pressure. Coal permeability first decreased rapidly and then increased slowly when the original pressure was higher than the critical pressure.

  7. Modeling and Simulation of Nanoindentation

    Science.gov (United States)

    Huang, Sixie; Zhou, Caizhi

    2017-11-01

    Nanoindentation is a hardness test method applied to small volumes of material which can provide some unique effects and spark many related research activities. To fully understand the phenomena observed during nanoindentation tests, modeling and simulation methods have been developed to predict the mechanical response of materials during nanoindentation. However, challenges remain with those computational approaches, because of their length scale, predictive capability, and accuracy. This article reviews recent progress and challenges for modeling and simulation of nanoindentation, including an overview of molecular dynamics, the quasicontinuum method, discrete dislocation dynamics, and the crystal plasticity finite element method, and discusses how to integrate multiscale modeling approaches seamlessly with experimental studies to understand the length-scale effects and microstructure evolution during nanoindentation tests, creating a unique opportunity to establish new calibration procedures for the nanoindentation technique.

  8. Assessing type I error and power of multistate Markov models for panel data - A simulation study

    OpenAIRE

    Cassarly, Christy; Martin, Renee’ H.; Chimowitz, Marc; Peña, Edsel A.; Ramakrishnan, Viswanathan; Palesch, Yuko Y.

    2016-01-01

    Ordinal outcomes collected at multiple follow-up visits are common in clinical trials. Sometimes, one visit is chosen for the primary analysis and the scale is dichotomized amounting to loss of information. Multistate Markov models describe how a process moves between states over time. Here, simulation studies are performed to investigate the type I error and power characteristics of multistate Markov models for panel data with limited non-adjacent state transitions. The results suggest that ...
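    The simulation logic described can be illustrated with a toy analogue: generate panel observations for two groups from the same 3-state Markov transition matrix and test for a group difference, so the rejection rate estimates the type I error. The transition matrix, visit schedule and chi-square test below are illustrative stand-ins for the multistate models actually fitted in the study.

        # Toy sketch: empirical type I error of a two-group comparison when both
        # groups follow the SAME 3-state Markov chain (state 3 absorbing, assumed).
        import numpy as np
        from scipy.stats import chi2_contingency

        rng = np.random.default_rng(11)
        P = np.array([[0.7, 0.2, 0.1],
                      [0.1, 0.7, 0.2],
                      [0.0, 0.0, 1.0]])
        n_subj, n_visits, n_sims, alpha = 100, 4, 500, 0.05

        def simulate_group():
            counts = np.zeros((3, 3))
            for _ in range(n_subj):
                s = 0
                for _ in range(n_visits):
                    nxt = rng.choice(3, p=P[s])
                    counts[s, nxt] += 1
                    s = nxt
            return counts

        rejections = 0
        for _ in range(n_sims):
            table = np.vstack([simulate_group().ravel(), simulate_group().ravel()])
            table = table[:, table.sum(axis=0) > 0]        # drop transitions that never occur
            _, p_value, _, _ = chi2_contingency(table)
            rejections += p_value < alpha

        print(f"empirical type I error: {rejections / n_sims:.3f} (nominal {alpha})")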

  9. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

    A methodology for the empirical validation of any detailed simulation model is presented. This kind of validation is always related to an experimental case. The empirical validation has a residual sense, because the conclusions are based on comparisons between simulated outputs and experimental measurements. This methodology will guide us in detecting the failures of the simulation model. Furthermore, it can be used as a guide in the design of subsequent experiments. Three steps can be clearly differentiated. Sensitivity analysis: it can be made with a DSA (differential sensitivity analysis) and with an MCSA (Monte-Carlo sensitivity analysis). Search for the optimal domains of the input parameters: a procedure based on Monte-Carlo methods and cluster techniques has been developed to find the optimal domains of these parameters. Residual analysis: this analysis has been made in the time domain and in the frequency domain, using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a thermal simulation model of buildings in Spain is presented, studying the behavior of building components in a Test Cell of LECE of CIEMAT. (Author) 17 refs
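    The residual-analysis step can be illustrated as follows: form the residual series between measured and simulated outputs, then inspect it in the time domain (autocorrelation) and in the frequency domain (periodogram). The series below are synthetic placeholders, not the LECE test-cell measurements.

        # Sketch of residual analysis for empirical validation of a simulation model.
        import numpy as np

        rng = np.random.default_rng(0)
        t = np.arange(0, 24 * 7, 0.25)                        # one week, 15-min steps (assumed)
        measured = 20 + 5.0 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.3, t.size)
        simulated = 20 + 4.6 * np.sin(2 * np.pi * t / 24)     # a slightly mis-tuned model

        residual = measured - simulated
        residual -= residual.mean()

        # time domain: normalized autocorrelation of the residuals
        acf = np.correlate(residual, residual, mode="full")[residual.size - 1:]
        acf /= acf[0]

        # frequency domain: periodogram of the residuals
        spectrum = np.abs(np.fft.rfft(residual)) ** 2
        freqs = np.fft.rfftfreq(residual.size, d=0.25)        # cycles per hour

        print("lag-1 autocorrelation:", round(acf[1], 3))
        print("dominant residual frequency (cycles/h):", freqs[spectrum.argmax()])

    A residual spectrum peaking at the daily frequency, as here, points to a systematic model error rather than measurement noise.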

  10. A Software Toolkit to Study Systematic Uncertainties of the Physics Models of the Geant4 Simulation Package

    Science.gov (United States)

    Genser, Krzysztof; Hatcher, Robert; Kelsey, Michael; Perdue, Gabriel; Wenzel, Hans; Wright, Dennis H.; Yarba, Julia

    2017-10-01

    The Geant4 simulation toolkit is used to model interactions between particles and matter. Geant4 employs a set of validated physics models that span a wide range of interaction energies. These models rely on measured cross-sections and phenomenological models with the physically motivated parameters that are tuned to cover many application domains. To study what uncertainties are associated with the Geant4 physics models we have designed and implemented a comprehensive, modular, user-friendly software toolkit that allows the variation of one or more parameters of one or more Geant4 physics models involved in simulation studies. It also enables analysis of multiple variants of the resulting physics observables of interest in order to estimate the uncertainties associated with the simulation model choices. Based on modern event-processing infrastructure software, the toolkit offers a variety of attractive features, e.g. flexible run-time configurable workflow, comprehensive bookkeeping, easy to expand collection of analytical components. Design, implementation technology, and key functionalities of the toolkit are presented in this paper and illustrated with selected results.

  11. Modeling, simulation, and concept studies of a fuel cell hybrid electric vehicle powertrain

    Energy Technology Data Exchange (ETDEWEB)

    Oezbek, Markus

    2010-03-29

    This thesis focuses on the development of a fuel cell-based hybrid electric powertrain for smaller (2 kW) hybrid electric vehicles (HEVs). A Hardware-in-the-Loop test rig is designed and built with the possibility to simulate any load profile for HEVs in a realistic environment, whereby the environment is modeled. Detailed simulation models of the test rig are developed and validated against real physical components, and control algorithms are designed for the DC/DC converters and the fuel cell system. A state-feedback controller is developed for the DC/DC converters, where the state-space averaging method is used for the design. For the fuel cells, a gain-scheduling controller based on state feedback is developed and compared to two conventional methods. The design process of an HEV with regard to a given load profile is introduced, with a comparison between SuperCaps and batteries. The HEV is also evaluated with an introduction to different power management concepts with regard to fuel consumption, dynamics, and fuel cell deterioration rate. The power management methods are implemented in the test rig and compared. (orig.)

  12. Numerical Modelling and Simulation of Dynamic Parameters for Vibration Driven Mobile Robot: Preliminary Study

    Science.gov (United States)

    Baharudin, M. E.; Nor, A. M.; Saad, A. R. M.; Yusof, A. M.

    2018-03-01

    The motion of vibration-driven robots is based on an internal oscillating mass, which allows them to move without legs or wheels. The oscillation of the unbalanced mass by a motor is translated into vibration, which in turn produces vertical and horizontal forces. Both vertical and horizontal oscillations have the same frequency but shifted phases. The vertical forces deflect the bristles, which causes the robot to move forward. In this paper, the horizontal motion caused by the vertically vibrated bristles is numerically simulated by tuning the frequency of their oscillatory actuation. As a preliminary work, basic equations for a simple off-centered vibration location on the robot platform and a simulation model for the vibration excitation are introduced. The work involves both static and dynamic vibration analysis of the robots and analysis of different types of parameters. In addition, the orientation of the bristles and oscillators is also analysed. Results from the numerical integration are in good agreement with those reported in the literature. The presented numerical integration model can be used for designing the bristles and controlling the speed and direction of the robot.
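
    The force balance described above can be written down directly: a rotating unbalanced mass of eccentricity r at angular speed ω produces a force of amplitude m·r·ω², whose vertical and horizontal projections share the frequency but differ in phase. The parameter values in the sketch below are illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative parameters: unbalanced mass, eccentricity, drive frequency and the
# phase shift between the vertical and horizontal force components.
m_u   = 0.005          # unbalanced mass (kg)
r     = 0.01           # eccentricity of the rotating mass (m)
f     = 50.0           # drive frequency (Hz)
phi   = np.pi / 2      # phase shift between the two force components
omega = 2 * np.pi * f

t = np.linspace(0.0, 0.1, 1000)
F0 = m_u * r * omega ** 2                     # force amplitude of the rotating unbalance
F_vertical   = F0 * np.sin(omega * t)         # deflects the bristles
F_horizontal = F0 * np.sin(omega * t + phi)   # same frequency, shifted phase

# The asymmetric bristle deflection rectifies part of the horizontal force into
# net forward motion; here we only report the raw force amplitude and mean.
print("force amplitude m*r*omega^2 (N):", round(F0, 3))
print("mean horizontal force over the window (N):", round(F_horizontal.mean(), 6))
```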

  13. Basic study on a lower-energy defibrillation method using computer simulation and cultured myocardial cell models.

    Science.gov (United States)

    Yaguchi, A; Nagase, K; Ishikawa, M; Iwasaka, T; Odagaki, M; Hosaka, H

    2006-01-01

    Computer simulation and myocardial cell models were used to evaluate a low-energy defibrillation technique. A generated spiral wave, considered to be a mechanism of fibrillation, and fibrillation itself were investigated using two myocardial sheet models: a two-dimensional computer simulation model and a two-dimensional experimental model. A new defibrillation technique is desired that has few side effects on cardiac muscle, such as those induced by the current passing through the patient's body. The purpose of the present study is to conduct a basic investigation into an efficient defibrillation method. In order to evaluate the defibrillation method, the propagation of excitation in the myocardial sheet is measured during the normal state and during fibrillation, respectively. The advantages of the low-energy defibrillation technique are then discussed based on the stimulation timing.

  14. Notes on modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Redondo, Antonio [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-10

    These notes present a high-level overview of how modeling and simulation are carried out by practitioners. The discussion is of a general nature; no specific techniques are examined but the activities associated with all modeling and simulation approaches are briefly addressed. There is also a discussion of validation and verification and, at the end, a section on why modeling and simulation are useful.

  15. Cortical imaging on a head template: a simulation study using a resistor mesh model (RMM).

    Science.gov (United States)

    Chauveau, Nicolas; Franceries, Xavier; Aubry, Florent; Celsis, Pierre; Rigaud, Bernard

    2008-09-01

    The T1 head template model used in Statistical Parametric Mapping Version 2000 (SPM2), was segmented into five layers (scalp, skull, CSF, grey and white matter) and implemented in 2 mm voxels. We designed a resistor mesh model (RMM), based on the finite volume method (FVM) to simulate the electrical properties of this head model along the three axes for each voxel. Then, we introduced four dipoles of high eccentricity (about 0.8) in this RMM, separately and simultaneously, to compute the potentials for two sets of conductivities. We used the direct cortical imaging technique (CIT) to recover the simulated dipoles, using 60 or 107 electrodes and with or without addition of Gaussian white noise (GWN). The use of realistic conductivities gave better CIT results than standard conductivities, lowering the blurring effect on scalp potentials and displaying more accurate position areas when CIT was applied to single dipoles. Simultaneous dipoles were less accurately localized, but good qualitative and stable quantitative results were obtained up to 5% noise level for 107 electrodes and up to 10% noise level for 60 electrodes, showing that a compromise must be found to optimize both the number of electrodes and the noise level. With the RMM defined in 2 mm voxels, the standard 128-electrode cap and 5% noise appears to be the upper limit providing reliable source positions when direct CIT is used. The admittance matrix defining the RMM is easy to modify so as to adapt to different conductivities. The next step will be the adaptation of individual real head T2 images to the RMM template and the introduction of anisotropy using diffusion imaging (DI).
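
    The resistor-mesh (FVM) idea can be illustrated on a small 2-D grid: every voxel becomes a node, neighbouring nodes are linked by conductances, a current dipole is injected, and Kirchhoff's current law yields a linear system for the node potentials. The grid size, uniform conductance, and dipole placement below are illustrative and unrelated to the SPM2 head template.

```python
import numpy as np

# Small 2D resistor mesh: n-by-n nodes, uniform conductance g between 4-neighbours.
n, g = 20, 1.0
N = n * n
idx = lambda i, j: i * n + j

# Assemble the nodal conductance (Laplacian) matrix from Kirchhoff's current law.
G = np.zeros((N, N))
for i in range(n):
    for j in range(n):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < n and 0 <= jj < n:
                G[idx(i, j), idx(i, j)] += g
                G[idx(i, j), idx(ii, jj)] -= g

# Current dipole: +I at one node, -I at a nearby node (illustrative placement).
I = np.zeros(N)
I[idx(10, 9)] += 1.0
I[idx(10, 11)] -= 1.0

# G is singular (potentials defined up to a constant); ground one reference node.
ref = idx(0, 0)
G[ref, :] = 0.0
G[ref, ref] = 1.0
I[ref] = 0.0

V = np.linalg.solve(G, I).reshape(n, n)
print("potential just above the dipole:", round(V[9, 10], 4))
```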

  16. Laboratory studies and model simulations of sorbent material behavior for an in-situ passive treatment barrier

    International Nuclear Information System (INIS)

    Aloysius, D.; Fuhrmann, M.

    1995-01-01

    This paper presents a study combining laboratory experiments and model simulations in support of the design and construction of a passive treatment barrier (or filter wall) for retarding the migration of Sr-90 within a water-bearing surficial sand and gravel layer. Preliminary evaluation was used to select materials for column testing. A one-dimensional finite-difference model was used to simulate the laboratory column results, and extrapolation of the calibrated model was then used to assess barrier performance over extended time frames with respect to Sr-90 breakthrough and loading on the filter media. The final results of the study showed that 20 by 50 mesh clinoptilolite will attenuate Sr-90 with a maximum life expectancy of approximately 10 years. This time period is based on allowable limits of Sr-90 activity on the filter media and is also a function of site-specific conditions.
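
    A one-dimensional finite-difference transport model of the kind described can be sketched as advection-dispersion with a retardation factor and first-order decay. All parameter values below are placeholders rather than the calibrated values of the study.

```python
import numpy as np

# Illustrative 1-D advection-dispersion model of Sr-90 migration through a sorbing
# barrier; parameters are placeholders, not the calibrated values from the study.
L_len, nx = 1.0, 101             # barrier thickness (m), grid points
dx = L_len / (nx - 1)
v  = 0.05                        # pore-water velocity (m/day)
D  = 1e-3                        # dispersion coefficient (m^2/day)
R  = 200.0                       # retardation factor of the sorbent
lam = np.log(2) / (28.8 * 365.0) # Sr-90 decay constant (1/day), half-life ~28.8 y

dt = 0.2 * dx * dx * R / D       # conservative explicit time step
c = np.zeros(nx)
c_in = 1.0                       # normalized inlet concentration

t, t_end = 0.0, 3650.0           # simulate ten years
while t < t_end:
    c_new = c.copy()
    # interior nodes: retarded dispersion + upwind advection + first-order decay
    c_new[1:-1] = c[1:-1] + dt / R * (
        D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        - v * (c[1:-1] - c[:-2]) / dx
    ) - dt * lam * c[1:-1]
    c_new[0] = c_in              # constant-concentration inlet
    c_new[-1] = c_new[-2]        # zero-gradient outlet
    c, t = c_new, t + dt

print("relative Sr-90 concentration at the barrier outlet after 10 years:",
      round(c[-1], 6))
```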

  17. Automatic construction of 3D-ASM intensity models by simulating image acquisition: application to myocardial gated SPECT studies.

    Science.gov (United States)

    Tobon-Gomez, Catalina; Butakoff, Constantine; Aguade, Santiago; Sukno, Federico; Moragas, Gloria; Frangi, Alejandro F

    2008-11-01

    Active shape models bear a great promise for model-based medical image analysis. Their practical use, though, is undermined due to the need to train such models on large image databases. Automatic building of point distribution models (PDMs) has been successfully addressed and a number of autolandmarking techniques are currently available. However, the need for strategies to automatically build intensity models around each landmark has been largely overlooked in the literature. This work demonstrates the potential of creating intensity models automatically by simulating image generation. We show that it is possible to reuse a 3D PDM built from computed tomography (CT) to segment gated single photon emission computed tomography (gSPECT) studies. Training is performed on a realistic virtual population where image acquisition and formation have been modeled using the SIMIND Monte Carlo simulator and ASPIRE image reconstruction software, respectively. The dataset comprised 208 digital phantoms (4D-NCAT) and 20 clinical studies. The evaluation is accomplished by comparing point-to-surface and volume errors against a proper gold standard. Results show that gSPECT studies can be successfully segmented by models trained under this scheme with subvoxel accuracy. The accuracy in estimated LV function parameters, such as end diastolic volume, end systolic volume, and ejection fraction, ranged from 90.0% to 94.5% for the virtual population and from 87.0% to 89.5% for the clinical population.

  18. Energy Modelling and Automated Calibrations of Ancient Building Simulations: A Case Study of a School in the Northwest of Spain

    Directory of Open Access Journals (Sweden)

    Ana Ogando

    2017-06-01

    Full Text Available In the present paper, the energy performance of buildings forming a school centre in the northwest of Spain was analyzed using a transient simulation of the energy model of the school, which was developed with TRNSYS, a software package of proven reliability in the field of thermal simulations. A deterministic calibration approach was applied to the initial building model to adjust its predictions to the actual performance of the school, using data acquired during a temperature measurement campaign. The buildings under study were in a deteriorated condition due to poor maintenance over the years, which posed a significant challenge for reliable modelling and simulation. The results showed that the proposed methodology is successful for obtaining calibrated thermal models of these types of damaged buildings, as the metrics employed to verify the final error showed a reduced normalized mean bias error (NMBE) of 2.73%. It was verified that a decrease of approximately 60% in NMBE and 17% in the coefficient of variation of the root mean square error (CV(RMSE)) was achieved due to the calibration process. Subsequent steps were performed with the aid of new software, which was developed under a European project that enabled the automated calibration of the simulations.

  19. A Simulation Model for Machine Efficiency Improvement Using Reliability Centered Maintenance: Case Study of Semiconductor Factory

    Directory of Open Access Journals (Sweden)

    Srisawat Supsomboon

    2014-01-01

    Full Text Available The purpose of this study was to increase the quality of the product by focusing on machine efficiency improvement. The principle of reliability centered maintenance (RCM) was applied to increase the machine reliability. The objective was to create a preventive maintenance plan under the reliability centered maintenance method and to reduce defects. The study target was set to reduce the Lead PPM for a test machine by simulating the proposed preventive maintenance plan. A simulation optimization approach based on evolutionary algorithms was employed for the preventive maintenance technique selection process to select the PM interval that gave the best total cost and Lead PPM values. The research methodology includes procedures such as prioritizing the critical components of the test machine, analyzing the damage and risk level by using Failure Mode and Effects Analysis (FMEA), calculating the suitable replacement period through reliability estimation, and optimizing the preventive maintenance plan. The results of the study show that the Lead PPM of the test machine can be reduced. The cost of preventive maintenance, cost of good product, and cost of lost product were decreased.
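
    The replacement-period calculation through reliability estimation can be illustrated with a classical age-replacement cost model: given a Weibull reliability curve and the costs of planned versus unplanned interventions, the PM interval that minimizes the expected cost rate is found by a simple search. The Weibull parameters and costs below are illustrative; the actual study selected the interval with a simulation-optimization approach.

```python
import numpy as np

# Illustrative Weibull reliability for a critical component (beta > 1: wear-out).
beta, eta = 2.5, 1000.0            # shape, characteristic life (hours)
Cp, Cf = 500.0, 5000.0             # cost of planned PM vs. unplanned failure

def reliability(t):
    return np.exp(-(t / eta) ** beta)

def cost_rate(T, n=2000):
    """Expected cost per unit time for an age-replacement policy with interval T."""
    t = np.linspace(0.0, T, n)
    R = reliability(t)
    mean_cycle = np.sum((R[1:] + R[:-1]) * 0.5 * (t[1] - t[0]))   # trapezoidal rule
    expected_cost = Cp * R[-1] + Cf * (1.0 - R[-1])
    return expected_cost / mean_cycle

candidates = np.linspace(100.0, 2000.0, 191)
rates = np.array([cost_rate(T) for T in candidates])
best = candidates[np.argmin(rates)]
print("PM interval minimizing expected cost rate (h):", best)
print("minimum cost rate ($/h):", round(rates.min(), 3))
```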

  20. Study of the dosimetric response of Gallium Nitride (GaN): modeling, simulation and characterization on radiotherapy

    International Nuclear Information System (INIS)

    Wang, Ruoxi

    2015-01-01

    The objective of this thesis is to increase the measurement precision of dosimetry based on the Gallium Nitride (GaN) transducer and to develop its applications in radiotherapy. The study covers the modeling, simulation and characterization of the transducer response in external radiotherapy and brachytherapy. In modeling, we have proposed two approaches to model the GaN transducer's response in external radiotherapy. In the first approach, a model was built from experimental data, separating the primary and scattering components of the beam. In the second approach, we adopted a response model initially developed for silicon diodes for the GaN radioluminescent transducer. We have also proposed an original concept of bi-media dosimetry, which evaluates the dose in tissue from the different responses of two media without prior information on the irradiation conditions. This concept has been demonstrated by Monte Carlo simulation. Moreover, for High Dose Rate brachytherapy, the response of the GaN transducer irradiated by iridium-192 and cobalt-60 sources has been evaluated by Monte Carlo simulation and confirmed by measurements. Characterization studies of the GaN radioluminescent transducer's properties have also been carried out with these sources. An instrumented phantom prototype with a GaN probe has been developed for HDR brachytherapy quality control. It allows real-time verification of the physics parameters of a treatment (source dwell position, source dwell time, source activity). (author) [fr]

  1. Assessing type I error and power of multistate Markov models for panel data-A simulation study.

    Science.gov (United States)

    Cassarly, Christy; Martin, Renee' H; Chimowitz, Marc; Peña, Edsel A; Ramakrishnan, Viswanathan; Palesch, Yuko Y

    2017-01-01

    Ordinal outcomes collected at multiple follow-up visits are common in clinical trials. Sometimes, one visit is chosen for the primary analysis and the scale is dichotomized amounting to loss of information. Multistate Markov models describe how a process moves between states over time. Here, simulation studies are performed to investigate the type I error and power characteristics of multistate Markov models for panel data with limited non-adjacent state transitions. The results suggest that the multistate Markov models preserve the type I error and adequate power is achieved with modest sample sizes for panel data with limited non-adjacent state transitions.
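
    A minimal sketch of the simulation set-up is given below: ordinal panel data are generated from a discrete-time Markov chain in which non-adjacent transitions are rare, and states are observed only at scheduled visits. The transition matrix, visit schedule, and sample size are illustrative assumptions, not the configurations evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative monthly transition matrix for a 4-state ordinal outcome in which
# non-adjacent jumps are rare (values are placeholders, not from the paper).
P = np.array([
    [0.80, 0.15, 0.05, 0.00],
    [0.10, 0.70, 0.15, 0.05],
    [0.00, 0.10, 0.75, 0.15],
    [0.00, 0.00, 0.00, 1.00],   # absorbing worst state
])

def simulate_panel(n_subjects=200, visits=(0, 1, 3, 6, 12)):
    """Simulate ordinal panel data observed only at scheduled follow-up visits."""
    data = np.zeros((n_subjects, len(visits)), dtype=int)
    for s in range(n_subjects):
        state = 0
        observed = [state]
        for month in range(1, visits[-1] + 1):
            state = rng.choice(4, p=P[state])
            if month in visits:
                observed.append(state)
        data[s] = observed
    return data

panel = simulate_panel()
# Tabulate observed transitions between consecutive visits; these counts are the
# input to the multistate Markov likelihood whose type I error and power are studied.
counts = np.zeros((4, 4), dtype=int)
for row in panel:
    for a, b in zip(row[:-1], row[1:]):
        counts[a, b] += 1
print(counts)
```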

  2. Sensitivity study of surface wind flow of a limited area model simulating the extratropical storm Delta affecting the Canary Islands

    Directory of Open Access Journals (Sweden)

    C. Marrero

    2009-04-01

    Full Text Available In November 2005 an extratropical storm named Delta affected the Canary Islands (Spain). The high sustained wind and intense gusts experienced caused significant damage. A numerical sensitivity study of Delta was conducted using the Weather Research & Forecasting Model (WRF-ARW). A total of 27 simulations were performed. Non-hydrostatic and hydrostatic experiments were designed taking into account physical parameterizations and geometrical factors (size and position of the outer domain, whether or not nested grids were defined, horizontal resolution and number of vertical levels). The Factor Separation Method was applied in order to identify the major model sensitivity parameters under this unusual meteorological situation. Results, expressed as percentage changes relative to a control run simulation, demonstrated that the boundary layer and surface layer schemes, horizontal resolution, hydrostaticity option and nesting grid activation were the model configuration parameters with the greatest impact on the 48 h maximum 10 m horizontal wind speed solution.
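
    For two factors, the Factor Separation Method (Stein and Alpert, 1993) reduces to simple differences between runs with the factors switched on and off, as sketched below; the wind-speed values are placeholders, not results from the Delta simulations.

```python
# Factor separation for two factors: f0 has both factors off, f1/f2 each have one
# factor on, f12 has both on. Values below are placeholder 48 h maximum 10 m wind
# speeds (m/s), not results from the Delta study.
f0, f1, f2, f12 = 28.0, 31.0, 30.0, 35.0

contrib_factor1 = f1 - f0                     # pure effect of factor 1
contrib_factor2 = f2 - f0                     # pure effect of factor 2
contrib_interaction = f12 - (f1 + f2) + f0    # synergistic (interaction) term

control = f12
for name, value in [("factor 1", contrib_factor1),
                    ("factor 2", contrib_factor2),
                    ("interaction", contrib_interaction)]:
    print(f"{name}: {value:+.1f} m/s ({100 * value / control:+.1f}% of control run)")
```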

  3. Simulation Model of a Transient

    DEFF Research Database (Denmark)

    Jauch, Clemens; Sørensen, Poul; Bak-Jensen, Birgitte

    2005-01-01

    This paper describes the simulation model of a controller that enables an active-stall wind turbine to ride through transient faults. The simulated wind turbine is connected to a simple model of a power system. Certain fault scenarios are specified and the turbine shall be able to sustain operati...

  4. Scalar Dissipation Modeling for Passive and Active Scalars: a priori Study Using Direct Numerical Simulation

    Science.gov (United States)

    Selle, L. C.; Bellan, Josette

    2006-01-01

    Transitional databases from Direct Numerical Simulation (DNS) of three-dimensional mixing layers for single-phase flows and two-phase flows with evaporation are analyzed and used to examine the typical hypothesis that the scalar dissipation Probability Distribution Function (PDF) may be modeled as a Gaussian. The databases encompass a single-component fuel and four multicomponent fuels, two initial Reynolds numbers (Re), two mass loadings for two-phase flows and two free-stream gas temperatures. Using the DNS calculated moments of the scalar-dissipation PDF, it is shown, consistent with existing experimental information on single-phase flows, that the Gaussian is a modest approximation of the DNS-extracted PDF, particularly poor in the range of the high scalar-dissipation values, which are significant for turbulent reaction rate modeling in non-premixed flows using flamelet models. With the same DNS calculated moments of the scalar-dissipation PDF and making a change of variables, a model of this PDF is proposed in the form of the β-PDF, which is shown to approximate much better the DNS-extracted PDF, particularly in the regime of the high scalar-dissipation values. Several types of statistical measures are calculated over the ensemble of the fourteen databases. For each statistical measure, the proposed β-PDF model is shown to be much superior to the Gaussian in approximating the DNS-extracted PDF. Additionally, the agreement between the DNS-extracted PDF and the β-PDF even improves when the comparison is performed for higher initial Re layers, whereas the comparison with the Gaussian is independent of the initial Re values. For two-phase flows, the comparison between the DNS-extracted PDF and the β-PDF also improves with increasing free-stream gas temperature and mass loading. The higher fidelity approximation of the DNS-extracted PDF by the β-PDF with increasing Re, gas temperature and mass loading bodes well for turbulent reaction rate
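
    The change of variables behind a β-PDF model amounts to normalizing the scalar dissipation onto [0, 1] and matching the first two moments, as in the sketch below; the moment values are placeholders, not the DNS statistics.

```python
import math

def beta_parameters(mean, var):
    """Match a beta distribution on [0, 1] to a given mean and variance."""
    k = mean * (1.0 - mean) / var - 1.0
    return mean * k, (1.0 - mean) * k          # alpha, beta

def beta_pdf(x, a, b):
    """Beta density evaluated via log-gamma for numerical stability."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1.0) * math.log(x) + (b - 1.0) * math.log(1.0 - x))

# Placeholder moments of the normalized scalar dissipation (not the DNS values):
mean, var = 0.2, 0.02
a, b = beta_parameters(mean, var)
print("matched beta parameters:", round(a, 3), round(b, 3))
# Evaluate the high-dissipation tail that matters for flamelet-type modeling.
for x in (0.5, 0.7, 0.9):
    print(f"beta-PDF at x = {x}: {beta_pdf(x, a, b):.4f}")
```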

  5. Cognitive models embedded in system simulation models

    International Nuclear Information System (INIS)

    Siegel, A.I.; Wolf, J.J.

    1982-01-01

    If we are to discuss and consider cognitive models, we must first come to grips with two questions: (1) What is cognition; (2) What is a model. Presumably, the answers to these questions can provide a basis for defining a cognitive model. Accordingly, this paper first places these two questions into perspective. Then, cognitive models are set within the context of computer simulation models and a number of computer simulations of cognitive processes are described. Finally, pervasive issues are discussed vis-a-vis cognitive modeling in the computer simulation context

  6. An ab initio chemical reaction model for the direct simulation Monte Carlo study of non-equilibrium nitrogen flows.

    Science.gov (United States)

    Mankodi, T K; Bhandarkar, U V; Puranik, B P

    2017-08-28

    A new ab initio based chemical model for a Direct Simulation Monte Carlo (DSMC) study suitable for simulating rarefied flows with a high degree of non-equilibrium is presented. To this end, Collision Induced Dissociation (CID) cross sections for N2 + N2 → N2 + 2N are calculated and published using a global complete active space self-consistent field-complete active space second order perturbation theory N4 potential energy surface and quasi-classical trajectory algorithm for high energy collisions (up to 30 eV). CID cross sections are calculated for only a selected set of ro-vibrational combinations of the two nitrogen molecules, and a fitting scheme based on spectroscopic weights is presented to interpolate the CID cross section for all possible ro-vibrational combinations. The new chemical model is validated by calculating equilibrium reaction rate coefficients that can be compared well with existing shock tube and computational results. High-enthalpy hypersonic nitrogen flows around a cylinder in the transition flow regime are simulated using DSMC to compare the predictions of the current ab initio based chemical model with the prevailing phenomenological model (the total collision energy model). The differences in the predictions are discussed.

  7. Assessment of input function distortions on kinetic model parameters in simulated dynamic 82Rb PET perfusion studies

    International Nuclear Information System (INIS)

    Meyer, Carsten; Peligrad, Dragos-Nicolae; Weibrecht, Martin

    2007-01-01

    Cardiac 82Rb dynamic PET studies allow quantifying absolute myocardial perfusion by using tracer kinetic modeling. Here, the accurate measurement of the input function, i.e. the tracer concentration in blood plasma, is a major challenge. This measurement is deteriorated by inappropriate temporal sampling, spillover, etc. Such effects may influence the measured input peak value and the measured blood pool clearance. The aim of our study is to evaluate the effect of input function distortions on the myocardial perfusion as estimated by the model. To this end, we simulate noise-free myocardium time activity curves (TACs) with a two-compartment kinetic model. The input function to the model is a generic analytical function. Distortions of this function have been introduced by varying its parameters. Using the distorted input function, the compartment model has been fitted to the simulated myocardium TAC. This analysis has been performed for various sets of model parameters covering a physiologically relevant range. The evaluation shows that ±10% error in the input peak value can easily lead to ±10-25% error in the model parameter K1, which relates to myocardial perfusion. Variations in the input function tail are generally less relevant. We conclude that an accurate estimation especially of the plasma input peak is crucial for a reliable kinetic analysis and blood flow estimation
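
    The effect of an input-peak error can be reproduced with a minimal sketch in which a simplified one-tissue compartment model stands in for the paper's two-compartment model: the tissue curve is the convolution of the plasma input with K1·exp(-k2·t), and refitting against an input whose peak is 10% too low biases the recovered K1 by roughly the same amount. The analytical input shape, parameter values, and grid-search fit are illustrative assumptions, not the generic function or fitting procedure of the study.

```python
import numpy as np

dt = 1.0
t = np.arange(0.0, 300.0, dt)                 # seconds

def input_function(t, peak=1.0):
    """Illustrative analytical plasma input curve (gamma-variate-like shape)."""
    return peak * (t / 30.0) ** 2 * np.exp(-(t - 30.0) / 15.0) * (t > 0)

def tissue_tac(K1, k2, cp):
    """One-tissue compartment model: C_T = K1 * exp(-k2 t) convolved with C_p."""
    kernel = np.exp(-k2 * t)
    return K1 * np.convolve(cp, kernel)[: t.size] * dt

# "True" myocardium TAC generated with the undistorted input function.
K1_true, k2_true = 0.6, 0.2
tac = tissue_tac(K1_true, k2_true, input_function(t))

# Fit the model using a distorted input whose peak is 10% too low.
cp_distorted = input_function(t, peak=0.9)
best, best_err = None, np.inf
for K1 in np.linspace(0.3, 1.2, 91):
    for k2 in np.linspace(0.05, 0.5, 46):
        err = np.sum((tissue_tac(K1, k2, cp_distorted) - tac) ** 2)
        if err < best_err:
            best, best_err = (K1, k2), err

print("true K1:", K1_true, " fitted K1 with distorted input:", round(best[0], 3))
```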

  8. General introduction to simulation models

    DEFF Research Database (Denmark)

    Hisham Beshara Halasa, Tariq; Boklund, Anette

    2012-01-01

    Monte Carlo simulation can be defined as a representation of real-life systems to gain insight into their functions and to investigate the effects of alternative conditions or actions on the modeled system. Models are a simplification of a system. Most often, it is best to use experiments and field trials to investigate the effect of alternative conditions or actions on a specific system. Nonetheless, field trials are expensive and sometimes not possible to conduct, as in the case of foot-and-mouth disease (FMD). Instead, simulation models can be a good and cheap substitute for experiments and field trials. However, if simulation models are to be used, good quality input data must be available. To model FMD, several disease spread models are available. For this project, we chose three simulation models: Davis Animal Disease Spread (DADS), which has been upgraded to DTU-DADS, InterSpread Plus (ISP)...

  9. Simulation Programs for Ph.D. Study of Analysis, Modeling and Optimum Design of Solar Domestic Hot Water Systems

    DEFF Research Database (Denmark)

    Qin, Lin

    1999-01-01

    The design of solar domestic hot water system is a complex process, due to characteristics inherent in solar heating technology. Recently, computer simulation has become a widely used technique to improve the understanding of the thermal processes in such systems. This report presents the detaile...... programs or units that were developed in the Ph.D study of " Analysis, Modeling and Optimum Design of Solar Domestic Hot Water Systems"....

  10. Sensitivity study of surface wind flow of a limited area model simulating the extratropical storm Delta affecting the Canary Islands

    OpenAIRE

    Marrero, C.; Jorba, O.; Cuevas, E.; Baldasano, J. M.

    2009-01-01

    In November 2005 an extratropical storm named Delta affected the Canary Islands (Spain). The high sustained wind and intense gusts experienced caused significant damage. A numerical sensitivity study of Delta was conducted using the Weather Research & Forecasting Model (WRF-ARW). A total of 27 simulations were performed. Non-hydrostatic and hydrostatic experiments were designed taking into account physical parameterizations and geometrical factors (size and position of the outer domain, d...

  11. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments.

    Science.gov (United States)

    Santos, José; Monteagudo, Angel

    2011-02-21

    As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to guess the difficulty in finding such alternative codes, allowing to clearly situate the canonical code in the fitness landscape. This novel proposal of the use of evolutionary computing provides a new perspective in the open debate between the use of the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment and the model most used in the two approaches which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of the codon reassignments was used, the evolutionary algorithm had more difficulty to overcome the efficiency of the canonical genetic code. Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the fact that the best possible codes show the patterns of the
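
    The fitness measure can be sketched as the mean squared change of an amino-acid property over all single-nucleotide mutations that stay within the code, which a genetic algorithm would then minimize over hypothetical codes. The partial codon table and property values below are illustrative placeholders; the study uses the full translation table and polar-requirement-style properties.

```python
from itertools import product

# Toy fragment of a genetic code and an illustrative amino-acid property table.
# Placeholder values only -- the actual study uses all 64 codons and the amino
# acids' polar requirement.
code = {
    "UUU": "Phe", "UUC": "Phe", "UUA": "Leu", "UUG": "Leu",
    "UCU": "Ser", "UCC": "Ser", "UCA": "Ser", "UCG": "Ser",
    "UAU": "Tyr", "UAC": "Tyr", "UGU": "Cys", "UGC": "Cys",
}
prop = {"Phe": 5.0, "Leu": 4.9, "Ser": 7.5, "Tyr": 5.4, "Cys": 4.8}
bases = "UCAG"

def fitness(code):
    """Mean squared property change over all single-point mutations that map
    between codons present in the (partial) code -- lower is better adapted."""
    total, count = 0.0, 0
    for codon, aa in code.items():
        for pos, new_base in product(range(3), bases):
            mutant = codon[:pos] + new_base + codon[pos + 1:]
            if mutant == codon or mutant not in code:
                continue
            total += (prop[aa] - prop[code[mutant]]) ** 2
            count += 1
    return total / count

print("mean squared property change of the toy code:", round(fitness(code), 3))
```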

  12. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    Directory of Open Access Journals (Sweden)

    Monteagudo Ángel

    2011-02-01

    Full Text Available Abstract Background As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Results Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to guess the difficulty in finding such alternative codes, allowing to clearly situate the canonical code in the fitness landscape. This novel proposal of the use of evolutionary computing provides a new perspective in the open debate between the use of the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment and the model most used in the two approaches which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of the codon reassignments was used, the evolutionary algorithm had more difficulty to overcome the efficiency of the canonical genetic code. Conclusions Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the

  13. Modeling and simulation of large HVDC systems

    Energy Technology Data Exchange (ETDEWEB)

    Jin, H.; Sood, V.K.

    1993-01-01

    This paper addresses the complexity and the amount of work in preparing simulation data and in implementing various converter control schemes and the excessive simulation time involved in modelling and simulation of large HVDC systems. The Power Electronic Circuit Analysis program (PECAN) is used to address these problems and a large HVDC system with two dc links is simulated using PECAN. A benchmark HVDC system is studied to compare the simulation results with those from other packages. The simulation time and results are provided in the paper.

  14. Simulation - modeling - experiment; Simulation - modelisation - experience

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    After two workshops held in 2001 on the same topics, and in order to make a status of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advance status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); methods of disturbances and sensitivity analysis of nuclear data in reactor physics, application to VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F

  15. Sensitivity and requirement of improvements of four soybean crop simulation models for climate change studies in Southern Brazil.

    Science.gov (United States)

    Battisti, R; Sentelhas, P C; Boote, K J

    2018-05-01

    Crop growth models have many uncertainties that affect the yield response to climate change. Based on that, the aim of this study was to evaluate the sensitivity of crop models to systematic changes in climate for simulating soybean attainable yield in Southern Brazil. Four crop models were used to simulate yields: AQUACROP, MONICA, DSSAT, and APSIM, as well as their ensemble. The simulations were performed considering changes of air temperature (0, + 1.5, + 3.0, + 4.5, and + 6.0 °C), [CO2] (380, 480, 580, 680, and 780 ppm), rainfall (- 30, - 15, 0, + 15, and + 30%), and solar radiation (- 15, 0, + 15), applied to daily values. The baseline climate was from 1961 to 2014, totalizing 53 crop seasons. The crop models simulated a reduction of attainable yield with temperature increase, reaching 2000 kg ha-1 for the ensemble at + 6 °C, mainly due to shorter crop cycle. For rainfall, the yield had a higher rate of reduction when it was diminished than when rainfall was increased. The crop models increased yield variability when solar radiation was changed from - 15 to + 15%, whereas [CO2] rise resulted in yield gains, following an asymptotic response, with a mean increase of 31% from 380 to 680 ppm. The models used require further attention to improvements in optimal and maximum cardinal temperature for development rate; runoff, water infiltration, deep drainage, and dynamic of root growth; photosynthesis parameters related to soil water availability; and energy balance of soil-plant system to define leaf temperature under elevated CO2.

  16. Sensitivity and requirement of improvements of four soybean crop simulation models for climate change studies in Southern Brazil

    Science.gov (United States)

    Battisti, R.; Sentelhas, P. C.; Boote, K. J.

    2017-12-01

    Crop growth models have many uncertainties that affect the yield response to climate change. Based on that, the aim of this study was to evaluate the sensitivity of crop models to systematic changes in climate for simulating soybean attainable yield in Southern Brazil. Four crop models were used to simulate yields: AQUACROP, MONICA, DSSAT, and APSIM, as well as their ensemble. The simulations were performed considering changes of air temperature (0, + 1.5, + 3.0, + 4.5, and + 6.0 °C), [CO2] (380, 480, 580, 680, and 780 ppm), rainfall (- 30, - 15, 0, + 15, and + 30%), and solar radiation (- 15, 0, + 15), applied to daily values. The baseline climate was from 1961 to 2014, totalizing 53 crop seasons. The crop models simulated a reduction of attainable yield with temperature increase, reaching 2000 kg ha-1 for the ensemble at + 6 °C, mainly due to shorter crop cycle. For rainfall, the yield had a higher rate of reduction when it was diminished than when rainfall was increased. The crop models increased yield variability when solar radiation was changed from - 15 to + 15%, whereas [CO2] rise resulted in yield gains, following an asymptotic response, with a mean increase of 31% from 380 to 680 ppm. The models used require further attention to improvements in optimal and maximum cardinal temperature for development rate; runoff, water infiltration, deep drainage, and dynamic of root growth; photosynthesis parameters related to soil water availability; and energy balance of soil-plant system to define leaf temperature under elevated CO2.

  17. Sensitivity and requirement of improvements of four soybean crop simulation models for climate change studies in Southern Brazil

    Science.gov (United States)

    Battisti, R.; Sentelhas, P. C.; Boote, K. J.

    2018-05-01

    Crop growth models have many uncertainties that affect the yield response to climate change. Based on that, the aim of this study was to evaluate the sensitivity of crop models to systematic changes in climate for simulating soybean attainable yield in Southern Brazil. Four crop models were used to simulate yields: AQUACROP, MONICA, DSSAT, and APSIM, as well as their ensemble. The simulations were performed considering changes of air temperature (0, + 1.5, + 3.0, + 4.5, and + 6.0 °C), [CO2] (380, 480, 580, 680, and 780 ppm), rainfall (- 30, - 15, 0, + 15, and + 30%), and solar radiation (- 15, 0, + 15), applied to daily values. The baseline climate was from 1961 to 2014, totalizing 53 crop seasons. The crop models simulated a reduction of attainable yield with temperature increase, reaching 2000 kg ha-1 for the ensemble at + 6 °C, mainly due to shorter crop cycle. For rainfall, the yield had a higher rate of reduction when it was diminished than when rainfall was increased. The crop models increased yield variability when solar radiation was changed from - 15 to + 15%, whereas [CO2] rise resulted in yield gains, following an asymptotic response, with a mean increase of 31% from 380 to 680 ppm. The models used require further attention to improvements in optimal and maximum cardinal temperature for development rate; runoff, water infiltration, deep drainage, and dynamic of root growth; photosynthesis parameters related to soil water availability; and energy balance of soil-plant system to define leaf temperature under elevated CO2.

  18. A study on the 0D phenomenological model for diesel engine simulation: Application to combustion of Neem methyl esther biodiesel

    International Nuclear Information System (INIS)

    Ngayihi Abbe, Claude Valery; Nzengwa, Robert; Danwe, Raidandi; Ayissi, Zacharie Merlin; Obonou, Marcel

    2015-01-01

    Highlights: • We elaborate a 0D model for prediction of diesel engine operating parameters. • We implement the model for Neem methyl ester biodiesel combustion. • We show methyl butanoate and butyrate can be used as surrogates for biodiesel. • The model predicts fuel spray, in-cylinder gaseous state and NOx emissions. • We show the model can be effective both in accuracy and computational speed. - Abstract: The design and monitoring of modern diesel engines running on alternative fuels require reliable models that can validly substitute experimental tests and predict their operating characteristics under different load conditions. Although there exists a multitude of models for diesel engines, 0D phenomenological models present the advantages of giving fast and accurate computed results. These models are useful for predicting fuel spray characteristics and instantaneous gas state. However, there are few reported studies on the application of 0D phenomenological models to biodiesel fuel combustion in diesel engines. This work reports the elaboration, validation and application to Neem methyl ester biodiesel (NMEB) combustion of a 0D phenomenological model for diesel engine simulation. The model addresses some specific aspects of diesel engine modeling found in previous studies, such as the compromise between computational cost, accuracy and model simplicity; the reduction of the number of empirical fitting constants; the prediction of combustion kinetics with a reduced need for experimental curve fitting; the ability to simultaneously predict engine thermodynamic and spray parameters as well as emission characteristics under various loads; and the ability to simulate diesel engine parameters when fueled by alternative fuels. The proposed model predicts fuel spray behavior, in-cylinder combustion and nitric oxide (NOx) emissions. The model is implemented through a Matlab code. The model is mainly based on Razlejtsev's spray evaporation model

  19. Photovoltaic array performance simulation models

    Energy Technology Data Exchange (ETDEWEB)

    Menicucci, D. F.

    1986-09-15

    The experience of the solar industry confirms that, despite recent cost reductions, the profitability of photovoltaic (PV) systems is often marginal and the configuration and sizing of a system is a critical problem for the design engineer. Construction and evaluation of experimental systems are expensive and seldom justifiable. A mathematical model or computer-simulation program is a desirable alternative, provided reliable results can be obtained. Sandia National Laboratories, Albuquerque (SNLA), has been studying PV-system modeling techniques in an effort to develop an effective tool to be used by engineers and architects in the design of cost-effective PV systems. This paper reviews two of the sources of error found in previous PV modeling programs, presents the remedies developed to correct these errors, and describes a new program that incorporates these improvements.

  20. Factors influencing the renal arterial Doppler waveform: a simulation study using an electrical circuit model (secondary publication)

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Chang Kyu [Dept. of Radiology, SMG-SNU Boramae Medical Center, Seoul National University College of Medicine, Seoul (Korea, Republic of); Han, Bong Soo [Dept. of Radiological Science, College of Health Science, Yonsei University, Wonju (Korea, Republic of); Kim, Seung Hyup [Dept. of Radiology, Institute of Radiation Medicine, Seoul National University Hospital, Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2016-01-15

    The goal of this study was to evaluate the effect of vascular compliance, resistance, and pulse rate on the resistive index (RI) by using an electrical circuit model to simulate renal blood flow. In order to analyze the renal arterial Doppler waveform, we modeled the renal blood-flow circuit with an equivalent simple electrical circuit containing resistance, inductance, and capacitance. The relationships among the impedance, resistance, and compliance of the circuit were derived from well-known equations, including Kirchhoff’s current law for alternating current circuits. Simulated velocity-time profiles for pulsatile flow were generated using Mathematica (Wolfram Research) and the influence of resistance, compliance, and pulse rate on waveforms and the RI was evaluated. Resistance and compliance were found to alter the waveforms independently. The impedance of the circuit increased with increasing proximal compliance, proximal resistance, and distal resistance. The impedance decreased with increasing distal compliance. The RI of the circuit decreased with increasing proximal compliance and resistance. The RI increased with increasing distal compliance and resistance. No positive correlation between impedance and the RI was found. Pulse rate was found to be an extrinsic factor that also influenced the RI. This simulation study using an electrical circuit model led to a better understanding of the renal arterial Doppler waveform and the RI, which may be useful for interpreting Doppler findings in various clinical settings.
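
    The electrical analogy can be reproduced with a minimal RLC sketch: a pulsatile pressure source drives a proximal resistance and inertance feeding a compliant distal bed, and the resistive index is read off the simulated distal flow waveform. All component values, the source shape, and the pulse rate below are illustrative assumptions.

```python
import numpy as np

# Electrical analogy of renal blood flow (pressure ~ voltage, flow ~ current).
# Rp/Lp: proximal resistance and inertance, C: vascular compliance, Rd: distal
# resistance. All values are illustrative, not fitted to physiological data.
Rp, Lp, C, Rd = 1.0, 0.05, 0.8, 2.0
HR = 60.0                                  # pulse rate (beats per minute)
f = HR / 60.0

dt, T = 1e-4, 10.0
t = np.arange(0.0, T, dt)
v_src = 80.0 + 20.0 * np.maximum(np.sin(2 * np.pi * f * t), 0.0)   # pulsatile source

iL = np.zeros_like(t)                      # proximal (arterial) flow
vC = np.zeros_like(t)                      # pressure over the compliant bed
for k in range(t.size - 1):
    diL = (v_src[k] - Rp * iL[k] - vC[k]) / Lp
    dvC = (iL[k] - vC[k] / Rd) / C
    iL[k + 1] = iL[k] + dt * diL           # forward-Euler integration
    vC[k + 1] = vC[k] + dt * dvC

# Resistive index computed on the distal flow waveform over the last cardiac cycle.
flow = vC / Rd
last_cycle = t >= (T - 1.0 / f)
peak, trough = flow[last_cycle].max(), flow[last_cycle].min()
RI = (peak - trough) / peak
print("resistive index of the simulated waveform:", round(RI, 3))
```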

  1. Factors influencing the renal arterial Doppler waveform: a simulation study using an electrical circuit model (secondary publication)

    International Nuclear Information System (INIS)

    Sung, Chang Kyu; Han, Bong Soo; Kim, Seung Hyup

    2016-01-01

    The goal of this study was to evaluate the effect of vascular compliance, resistance, and pulse rate on the resistive index (RI) by using an electrical circuit model to simulate renal blood flow. In order to analyze the renal arterial Doppler waveform, we modeled the renal blood-flow circuit with an equivalent simple electrical circuit containing resistance, inductance, and capacitance. The relationships among the impedance, resistance, and compliance of the circuit were derived from well-known equations, including Kirchhoff’s current law for alternating current circuits. Simulated velocity-time profiles for pulsatile flow were generated using Mathematica (Wolfram Research) and the influence of resistance, compliance, and pulse rate on waveforms and the RI was evaluated. Resistance and compliance were found to alter the waveforms independently. The impedance of the circuit increased with increasing proximal compliance, proximal resistance, and distal resistance. The impedance decreased with increasing distal compliance. The RI of the circuit decreased with increasing proximal compliance and resistance. The RI increased with increasing distal compliance and resistance. No positive correlation between impedance and the RI was found. Pulse rate was found to be an extrinsic factor that also influenced the RI. This simulation study using an electrical circuit model led to a better understanding of the renal arterial Doppler waveform and the RI, which may be useful for interpreting Doppler findings in various clinical settings

  2. Similarities and differences of serotonin and its precursors in their interactions with model membranes studied by molecular dynamics simulation

    Science.gov (United States)

    Wood, Irene; Martini, M. Florencia; Pickholz, Mónica

    2013-08-01

    In this work, we report a molecular dynamics (MD) simulations study of relevant biological molecules as serotonin (neutral and protonated) and its precursors, tryptophan and 5-hydroxy-tryptophan, in a fully hydrated bilayer of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidyl-choline (POPC). The simulations were carried out at the fluid lamellar phase of POPC at constant pressure and temperature conditions. Two guest molecules of each type were initially placed at the water phase. We have analyzed, the main localization, preferential orientation and specific interactions of the guest molecules within the bilayer. During the simulation run, the four molecules were preferentially found at the water-lipid interphase. We found that the interactions that stabilized the systems are essentially hydrogen bonds, salt bridges and cation-π. None of the guest molecules have access to the hydrophobic region of the bilayer. Besides, zwitterionic molecules have access to the water phase, while protonated serotonin is anchored in the interphase. Even taking into account that these simulations were done using a model membrane, our results suggest that the studied molecules could not cross the blood brain barrier by diffusion. These results are in good agreement with works that show that serotonin and Trp do not cross the BBB by simple diffusion.

  3. Laboratory modeling, field study, and numerical simulation of bioremediation of petroleum contaminants

    International Nuclear Information System (INIS)

    Livingston, R.J.; Islam, M.R.

    1999-01-01

    The use of bioremediation as an alternative remediation technology is fast becoming the technique of choice among many environmental professionals. This method offers substantial benefits not found in other remediation processes. Bioremediation is very cost effective, nondestructive, relatively uncomplicated in implementing, requires nonspecialized equipment, and can be extremely effective in removing recalcitrant petroleum hydrocarbons. This study researched the availability of viable microbial populations in the arid climate in South Dakota. Exponential growth of the bacteria and the ability of bacteria to degrade long-chain hydrocarbons indicated that healthy populations do exist and could be used to mineralize organic hydrocarbons. Experimental results indicated that bioremediation can be effectively enhanced in landfills as well as in the subsurface using a supply of harmless nutrients. The biodegradation rate can be further enhanced with the use of edible surfactant that helped disperse the petroleum products. Also, the use of hydrogen peroxide enhanced the oxygen availability and increased the degradation rate. Interestingly, the bacterial growth rate was found to be high in difficult-to-biodegrade contaminants, such as waste oil. A numerical simulation program was also developed that describes the bacterial growth in the subsurface along with the reduction in substrate (contamination). Results from this program were found to be consistent with laboratory results

  4. Comparing effects of fire modeling methods on simulated fire patterns and succession: a case study in the Missouri Ozarks

    Science.gov (United States)

    Jian Yang; Hong S. He; Brian R. Sturtevant; Brian R. Miranda; Eric J. Gustafson

    2008-01-01

    We compared four fire spread simulation methods (completely random, dynamic percolation, size-based minimum travel time algorithm, and duration-based minimum travel time algorithm) and two fire occurrence simulation methods (Poisson fire frequency model and hierarchical fire frequency model) using a two-way factorial design. We examined these treatment effects on...

  5. A numerical study of scalar dispersion downstream of a wall-mounted cube using direct simulations and algebraic flux models

    Energy Technology Data Exchange (ETDEWEB)

    Rossi, R., E-mail: riccardo.rossi12@unibo.i [Laboratorio di Termofluidodinamica Computazionale Seconda Facolta di Ingegneria di Forli, Universita di Bologna Via Fontanelle 40, 47100 Forli (Italy); Center for Turbulence Research Department of Mechanical Engineering Stanford University, CA 94305 (United States); Philips, D.A.; Iaccarino, G. [Center for Turbulence Research Department of Mechanical Engineering Stanford University, CA 94305 (United States)

    2010-10-15

    Research highlights: → The computed DNS statistics indicate that a gradient-transport scheme can be applied to the vertical and spanwise scalar flux components. → The streamwise scalar flux is characterized by a counter-gradient transport mechanism in the wake region close to the obstacle. → The wake profiles of scalar fluctuations and the shape of probability density functions do not suggest a significant flapping movement of the scalar plume. → The evaluation of scalar dispersion models must include a careful assessment of the computed mean velocity field and Reynolds stress tensor. → Algebraic models provide an improved prediction of the mean concentration field as compared to the standard eddy-diffusivity model. -- Abstract: The dispersion of a passive scalar downstream of a wall-mounted cube is examined using direct numerical simulations and turbulence models applied to the Reynolds equations. The scalar is released from a circular source located on top of the obstacle, which is immersed in a developing boundary-layer flow. Direct simulations are performed to give insight into the mixing process and to provide a reference database for turbulence closures. Algebraic flux models are evaluated against the standard eddy-diffusivity representation. Coherent structures periodically released from the cube top are responsible for a counter-diffusion mechanism appearing in the streamwise scalar flux. Alternating vortex pairs form from the lateral edges of the cube, but the intensity profiles and probability density functions of scalar fluctuations suggest that they do not cause a significant flapping movement of the scalar plume. The gradient-transport scheme is consistent with the vertical and spanwise scalar flux components. From the comparative study with our direct simulations, we further stress that Reynolds stress predictions must be carefully evaluated along with scalar flux closures in order to establish the reliability of

  6. A numerical study of scalar dispersion downstream of a wall-mounted cube using direct simulations and algebraic flux models

    International Nuclear Information System (INIS)

    Rossi, R.; Philips, D.A.; Iaccarino, G.

    2010-01-01

    Research highlights: → The computed DNS statistics indicate that a gradient-transport scheme can be applied to the vertical and spanwise scalar flux components. → The streamwise scalar flux is characterized by a counter-gradient transport mechanism in the wake region close to the obstacle. → The wake profiles of scalar fluctuations and the shape of probability density functions do not suggest a significant flapping movement of the scalar plume. → The evaluation of scalar dispersion models must include a careful assessment of the computed mean velocity field and Reynolds stress tensor. → Algebraic models provide an improved prediction of the mean concentration field as compared to the standard eddy-diffusivity model. -- Abstract: The dispersion of a passive scalar downstream of a wall-mounted cube is examined using direct numerical simulations and turbulence models applied to the Reynolds equations. The scalar is released from a circular source located on top of the obstacle, which is immersed in a developing boundary-layer flow. Direct simulations are performed to give insight into the mixing process and to provide a reference database for turbulence closures. Algebraic flux models are evaluated against the standard eddy-diffusivity representation. Coherent structures periodically released from the cube top are responsible for a counter-diffusion mechanism appearing in the streamwise scalar flux. Alternating vortex pairs form from the lateral edges of the cube, but the intensity profiles and probability density functions of scalar fluctuations suggest that they do not cause a significant flapping movement of the scalar plume. The gradient-transport scheme is consistent with the vertical and spanwise scalar flux components. From the comparative study with our direct simulations, we further stress that Reynolds stress predictions must be carefully evaluated along with scalar flux closures in order to establish the reliability of Reynolds

  7. 3D-printed soft-tissue physical models of renal malignancies for individualized surgical simulation: a feasibility study.

    Science.gov (United States)

    Maddox, Michael M; Feibus, Allison; Liu, James; Wang, Julie; Thomas, Raju; Silberstein, Jonathan L

    2018-03-01

    To construct patient-specific physical three-dimensional (3D) models of renal units with materials that approximate the properties of renal tissue, allowing pre-operative and robotic training surgical simulation, 3D physical kidney models were created (3DSystems, Rock Hill, SC) using computerized tomography to segment structures of interest (parenchyma, vasculature, collection system, and tumor). Images were converted to a 3D surface mesh file for fabrication using a multi-jet 3D printer. A novel construction technique was employed to approximate normal renal tissue texture: printers selectively deposited photopolymer material forming the outer shell of the kidney, and subsequently an agarose gel solution was injected into the inner cavity, recreating the spongier renal parenchyma. We constructed seven models of renal units with suspected malignancies. Partial nephrectomy and renorrhaphy were performed on each of the replicas. Subsequently, all patients successfully underwent robotic partial nephrectomy. Average tumor diameter was 4.4 cm, warm ischemia time was 25 min, RENAL nephrometry score was 7.4, and surgical margins were negative. A comparison was made between the seven cases and the Tulane Urology prospectively maintained robotic partial nephrectomy database. Patients with surgical models had larger tumors, higher nephrometry scores, longer warm ischemic times, fewer positive surgical margins, shorter hospitalizations, and fewer post-operative complications; however, the only significant finding was lower estimated blood loss (186 cc vs 236; p = 0.01). In this feasibility study, pre-operative resectable physical 3D models can be constructed and used as patient-specific surgical simulation tools; further study will need to demonstrate whether this results in improvement of surgical outcomes and robotic simulation education.

  8. Progress in modeling and simulation.

    Science.gov (United States)

    Kindler, E

    1998-01-01

    For the modeling of systems, computers are used more and more, while the other "media" (including the human intellect) that carry the models are abandoned. For the modeling of knowledge, i.e. of more or less general concepts (possibly used to model systems composed of instances of such concepts), object-oriented programming is nowadays widely used. For the modeling of processes existing and developing in time, computer simulation is used, the results of which are often presented by means of animation (graphical pictures moving and changing in time). Unfortunately, object-oriented programming tools are commonly not designed to be of great use for simulation, while programming tools for simulation do not enable their users to exploit the advantages of object-oriented programming. Nevertheless, there are exceptions that enable general concepts represented on a computer to be used for constructing simulation models and for modifying them easily. They are described in the present paper, together with precise definitions of modeling, simulation and object-oriented programming (including cases that do not satisfy the definitions but are liable to introduce misunderstanding), an outline of their applications and of their further development. Because computing systems are being introduced as control components into a large spectrum of (technological, social and biological) systems, attention is directed to models of systems that themselves contain modeling components.

  9. Comparison between splines and fractional polynomials for multivariable model building with continuous covariates: a simulation study with continuous response.

    Science.gov (United States)

    Binder, Harald; Sauerbrei, Willi; Royston, Patrick

    2013-06-15

    In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R² = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.
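
    To make the two model families being compared concrete, the sketch below fits a second-degree fractional polynomial in one covariate (best pair of powers from the conventional FP set) and a smoothing spline to simulated data. It illustrates the model classes only; it is not the MFP or spline selection procedure evaluated in the paper, and all data and tuning values are arbitrary.

    ```python
    import numpy as np
    from itertools import combinations
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(1)
    x = rng.uniform(0.1, 5.0, 200)
    y = np.log(x) + 0.5 * np.sqrt(x) + rng.normal(0.0, 0.2, x.size)   # true curve plus noise

    def fp_term(x, p):
        """Fractional-polynomial basis term; by convention power 0 means log(x)."""
        return np.log(x) if p == 0 else x ** p

    powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]        # conventional FP power set
    best = None
    for p1, p2 in combinations(powers, 2):          # repeated-power FP2 terms omitted for brevity
        X = np.column_stack([np.ones_like(x), fp_term(x, p1), fp_term(x, p2)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        if best is None or rss < best[0]:
            best = (rss, (p1, p2))

    spline = UnivariateSpline(np.sort(x), y[np.argsort(x)], s=0.04 * x.size)  # smoothing spline
    print("best FP2 powers:", best[1], "with RSS", round(best[0], 2))
    print("spline residual sum of squares:", round(float(spline.get_residual()), 2))
    ```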

  10. Model improvements to simulate charging in SEM

    Science.gov (United States)

    Arat, K. T.; Klimpel, T.; Hagen, C. W.

    2018-03-01

    Charging of insulators is a complex phenomenon to simulate, since the accuracy of the simulations is very sensitive to the interaction of electrons with matter and electric fields. In this study, we report model improvements for a previously developed Monte Carlo simulator to more accurately simulate samples that charge. The improvements include both the modelling of low-energy electron scattering and the charging of insulators. The new first-principles scattering models provide a more realistic charge distribution cloud in the material and a better match between non-charging simulations and experimental results. Improvements to the charging models mainly focus on redistribution of the charge carriers in the material with an induced conductivity (EBIC) and a breakdown model, leading to a smoother distribution of the charges. Combined with more accurate tracing of low-energy electrons in the electric field, we managed to reproduce the dynamically changing charging contrast due to an induced positive surface potential.

  11. Stochastic modeling analysis and simulation

    CERN Document Server

    Nelson, Barry L

    1995-01-01

    A coherent introduction to the techniques for modeling dynamic stochastic systems, this volume also offers a guide to the mathematical, numerical, and simulation tools of systems analysis. Suitable for advanced undergraduates and graduate-level industrial engineers and management science majors, it proposes modeling systems in terms of their simulation, regardless of whether simulation is employed for analysis. Beginning with a view of the conditions that permit a mathematical-numerical analysis, the text explores Poisson and renewal processes, Markov chains in discrete and continuous time, se
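
    As a small, concrete companion to the topics listed above, the sketch below simulates a discrete-time Markov chain from an arbitrary three-state transition matrix and estimates its long-run state occupancy. It is a generic illustration, not an example taken from the book.

    ```python
    import numpy as np

    P = np.array([[0.7, 0.2, 0.1],      # arbitrary 3-state transition matrix
                  [0.3, 0.4, 0.3],
                  [0.2, 0.3, 0.5]])

    def simulate_dtmc(P, n_steps, start=0, seed=0):
        """Simulate a discrete-time Markov chain and return the visited states."""
        rng = np.random.default_rng(seed)
        states = np.empty(n_steps, dtype=int)
        states[0] = start
        for t in range(1, n_steps):
            states[t] = rng.choice(len(P), p=P[states[t - 1]])
        return states

    visits = simulate_dtmc(P, 50_000)
    occupancy = np.bincount(visits, minlength=len(P)) / len(visits)
    print("empirical occupancy:", occupancy.round(3))   # estimates the stationary distribution
    ```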

  12. Modeling and real time simulation of an HVDC inverter feeding a weak AC system based on commutation failure study.

    Science.gov (United States)

    Mankour, Mohamed; Khiat, Mounir; Ghomri, Leila; Chaker, Abdelkader; Bessalah, Mourad

    2018-06-01

    This paper presents the modeling and study of a 12-pulse HVDC (High Voltage Direct Current) link based on real-time simulation, where the HVDC inverter is connected to a weak AC system. To study the dynamic performance of the HVDC link, two serious kinds of disturbance are applied at the HVDC converters: the first is a single-phase-to-ground AC fault and the second is a DC-link-to-ground fault. The study is based on two different modes of analysis: the first tests the performance of the DC control, and the second focuses on the effect of the protection function on system behavior. The real-time simulation considers the strength of the AC system to which the inverter is connected relative to the capacity of the DC link. The results obtained are validated by means of the RT-lab platform using the digital real-time simulator Hypersim (OP-5600). The results show the effect of the DC control and the influence of the protection function in reducing the probability of commutation failures and in helping the inverter recover from commutation failure even when the DC control fails to eliminate it. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Simulation of Snowmelt Runoff Using SRM Model and Comparison With Neural Networks ANN and ANFIS (Case Study: Kardeh dam basin

    Directory of Open Access Journals (Sweden)

    morteza akbari

    2017-03-01

    Introduction: Snowmelt runoff plays an important role in providing water and agricultural resources, especially in mountainous areas. There are different methods to simulate the snowmelt process; among them, the degree-day model, based on a temperature index, is the most widely cited. The Snowmelt Runoff Model (SRM) is a conceptual hydrological model used to simulate and predict daily river flow in mountainous basins. A study comparing the accuracy of AVHRR and TM satellite images for determining snow cover in the Karun Basin found that overestimation of the snow-covered area decreased with increasing spatial resolution of the satellite data. Studies conducted in the Zayandehrood dam watershed showed that, for dates on which no MODIS snow-cover image exists, the digital elevation model and regression analysis can be used to estimate the appropriate values from the available satellite data. In a study of snow cover in the mountainous regions of the Euphrates River in eastern Turkey, data from five meteorological stations and MODIS images with a resolution of 500 m were used; the results showed that satellite images estimate snow cover with good accuracy. In a watershed in northern Pakistan, the SRM model was used with MODIS images to estimate snow cover for the period from 2000 to 2006. The purpose of the present study was to evaluate snowmelt runoff using remote sensing data and the SRM model for flow simulation, based on statistical parameters, in the Kardeh dam basin. Materials and Methods: The Kardeh dam basin has an area of about 560 square kilometers and is located to the north of Mashhad. The area lies in the eastern Hezarmasjed-Kopehdagh zone, one of the main sub-basins of the Kashafrood. The basin is mountainous: about 261 km² of it lies above 2000 m. The lowest point of the basin, at the watershed outlet, is at 1300 m, and the highest point of the basin, in the north-west part
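
    The temperature-index (degree-day) idea underlying SRM can be written as M = DDF * max(T - T_crit, 0); the sketch below applies it to a few example days. The degree-day factor and critical temperature are illustrative assumptions, not values from the Kardeh basin study.

    ```python
    def degree_day_melt(mean_temp_c, ddf_cm_per_degc_day=0.45, t_crit_c=0.0):
        """Daily melt depth (cm) from a temperature-index (degree-day) relation:
        M = DDF * max(T - T_crit, 0)."""
        return ddf_cm_per_degc_day * max(mean_temp_c - t_crit_c, 0.0)

    daily_temps = [-2.0, 1.5, 4.0, 6.5, 3.0]          # example daily mean temperatures (deg C)
    melt = [degree_day_melt(t) for t in daily_temps]
    print([round(m, 2) for m in melt])                # cm of melt water per day
    ```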

  14. Laboratory modeling, field study, and numerical simulation of bioremediation of petroleum contaminants

    International Nuclear Information System (INIS)

    Livingston, R.J.; Islam, M.R.

    1999-01-01

    Historical methods of cleaning up petroleum hydrocarbons from the vadose zone, the capillary zone, and the aquifers are not technically true cleanup technologies but rather transfer techniques. In addition, environmental engineers are realizing that the standard remediation techniques are not entirely effective in removing the hazardous material within a reasonable time frame. Long-chain hydrocarbons such as kerosene, diesel, and waste oil are particularly difficult to remediate using conventional techniques. The use of bioremediation as an alternative remediation technology is fast becoming the technique of choice among many environmental professionals. This method offers substantial benefits not found in other remediation processes. Bioremediation is very cost-effective, nondestructive, relatively uncomplicated to implement, requires nonspecialized equipment, and can be extremely effective in removing recalcitrant petroleum hydrocarbons. This study researched the availability of viable microbial populations in the arid climate of South Dakota. Exponential growth of the bacteria and the ability of the bacteria to degrade long-chain hydrocarbons indicated that healthy populations do exist and could be used to mineralize organic hydrocarbons. Experimental results indicated that bioremediation can be effectively enhanced in landfills as well as in the subsurface using a supply of harmless nutrients. The biodegradation rate can be further enhanced with the use of an edible surfactant that helps disperse the petroleum products. Also, the use of hydrogen peroxide enhanced the oxygen availability and increased the degradation rate. Interestingly, the bacterial growth rate was found to be high in difficult-to-biodegrade contaminants, such as waste oil. A numerical simulation program was also developed that describes the bacterial growth in the subsurface along with the reduction in substrate (contamination). Results from this program were found to be consistent with laboratory

  15. Sensitivity Studies on the Influence of Aerosols on Cloud and Precipitation Development Using WRF Mesoscale Model Simulations

    Science.gov (United States)

    Thompson, G.; Eidhammer, T.; Rasmussen, R.

    2011-12-01

    Using the WRF model in simulations of shallow and deep precipitating cloud systems, we investigated the sensitivity to aerosols acting as cloud condensation and ice nuclei. A global climatological dataset of sulfates, sea salts, and dust was used as input for a control experiment. Sensitivity experiments with significantly more polluted conditions were conducted to analyze the resulting impacts on cloud and precipitation formation. Simulations were performed using the WRF model with explicit treatment of aerosols added to the Thompson et al. (2008) bulk microphysics scheme. The modified scheme achieves droplet formation using pre-tabulated CCN activation tables provided by a parcel model. Ice nucleation is parameterized as a function of dust aerosols as well as homogeneous freezing of deliquesced aerosols. The basic processes of aerosol activation and removal by wet scavenging are considered, but aerosol characteristic size or hygroscopicity does not change due to evaporating droplets; in other words, aerosol processing is ignored. Unique aspects of this study include the use of one- to four-kilometer grid spacings and the direct parameterization of ice nucleation from aerosols rather than the typical temperature and/or supersaturation relationships alone. Initial results from simulations of a deep winter cloud system and its interaction with significant orography show contrasting sensitivities in regions of warm rain versus mixed liquid and ice conditions. The classical view of higher precipitation amounts in relatively clean maritime clouds with fewer but larger droplets is confirmed for regions dominated by the warm-rain process. However, due to complex interactions with the ice phase and snow riming, the simulations revealed the reverse situation in high-terrain areas dominated by snow reaching the surface. Results for other cloud systems will be summarized at the conference.

  16. FASTBUS simulation models in VHDL

    International Nuclear Information System (INIS)

    Appelquist, G.

    1992-11-01

    Four hardware simulation models implementing the FASTBUS protocol are described. The models are written in the VHDL hardware description language to obtain portability, i.e. without relations to any specific simulator. They include two complete FASTBUS devices, a full-duplex segment interconnect and ancillary logic for the segment. In addition, master and slave models using a high level interface to describe FASTBUS operations, are presented. With these models different configurations of FASTBUS systems can be evaluated and the FASTBUS transactions of new devices can be verified. (au)

  17. Model reduction for circuit simulation

    CERN Document Server

    Hinze, Michael; Maten, E Jan W Ter

    2011-01-01

    Simulation based on mathematical models plays a major role in computer-aided design of integrated circuits (ICs). Decreasing structure sizes, increasing packing densities and rising driving frequencies require the use of refined mathematical models that take into account secondary, parasitic effects. This leads to very high-dimensional problems which nowadays require simulation times too large for the short time-to-market demands in industry. Modern Model Order Reduction (MOR) techniques present a way out of this dilemma by providing surrogate models which keep the main characteristics of the devi

  18. Simulated Leaching (Migration) Study for a Model Container-Closure System Applicable to Parenteral and Ophthalmic Drug Products.

    Science.gov (United States)

    Jenke, Dennis; Egert, Thomas; Hendricker, Alan; Castner, James; Feinberg, Tom; Houston, Christopher; Hunt, Desmond G; Lynch, Michael; Nicholas, Kumudini; Norwood, Daniel L; Paskiet, Diane; Ruberto, Michael; Smith, Edward J; Holcomb, Frank; Markovic, Ingrid

    2017-01-01

    A simulated leaching (migration) study was performed on a model container-closure system relevant to parenteral and ophthalmic drug products. This container-closure system consisted of a linear low-density polyethylene bottle (primary container), a polypropylene cap and an elastomeric cap liner (closure), an adhesive label (labeling), and a foil overpouch (secondary container). The bottles were filled with simulating solvents (aqueous salt/acid mixture at pH 2.5, aqueous buffer at pH 9.5, and 1/1 v/v isopropanol/water), a label was affixed to the filled and capped bottles, the filled bottles were placed into the foil overpouch, and the filled and pouched units were stored either upright or inverted for up to 6 months at 40 °C. After storage, the leaching solutions were tested for leached substances using multiple complementary analytical techniques to address volatile, semi-volatile, and non-volatile organic and inorganic extractables as potential leachables. The leaching data generated supported several conclusions, including that (1) the extractables (leachables) profile revealed by a simulating leaching study can qualitatively be correlated with compositional information for materials of construction, (2) the chemical nature of both the extracting medium and the individual extractables (leachables) can markedly affect the resulting profile, and (3) while direct contact between a drug product and a system's material of construction may exacerbate the leaching of substances from that material by the drug product, direct contact is not a prerequisite for migration and leaching to occur. LAY ABSTRACT: The migration of container-related extractables from a model pharmaceutical container-closure system and into simulated drug product solutions was studied, focusing on circumstances relevant to parenteral and ophthalmic drug products. The model system was constructed specifically to address the migration of extractables from labels applied to the outside of the

  19. Tribology studies of the natural knee using an animal model in a new whole joint natural knee simulator.

    Science.gov (United States)

    Liu, Aiqin; Jennings, Louise M; Ingham, Eileen; Fisher, John

    2015-09-18

    The successful development of early-stage cartilage and meniscus repair interventions in the knee requires biomechanical and biotribological understanding of the design of the therapeutic interventions and their tribological function in the natural joint. The aim of this study was to develop and validate a porcine knee model using a whole joint knee simulator for investigation of the tribological function and biomechanical properties of the natural knee, which could then be used to pre-clinically assess the tribological performance of cartilage and meniscal repair interventions prior to in vivo studies. The tribological performance of standard artificial bearings in terms of anterior-posterior (A/P) shear force was determined in a newly developed six degrees of freedom tribological joint simulator. The porcine knee model was then developed and the tribological properties in terms of shear force measurements were determined for the first time for three levels of biomechanical constraint: A/P constrained, spring force semi-constrained and A/P unconstrained conditions. The shear force measurements showed higher values under the A/P constrained condition (predominantly sliding motion) compared to the A/P unconstrained condition (predominantly rolling motion). This indicated that the shear force simulation model was able to differentiate between tribological behaviours when the femoral and tibial bearing was constrained to slide and/or roll. Therefore, this porcine knee model showed the potential capability to investigate the effect of knee structural, biomechanical and kinematic changes, as well as different cartilage substitution therapies, on the tribological function of natural knee joints. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Greenhouse simulation models.

    NARCIS (Netherlands)

    Bot, G.P.A.

    1989-01-01

    A model is a representation of a real system used to describe some properties, i.e. internal factors of that system (outputs), as a function of some external factors (inputs). It is impossible to describe the relation between all internal factors (if even all internal factors could be defined) and all

  1. Study protocol: combining experimental methods, econometrics and simulation modelling to determine price elasticities for studying food taxes and subsidies (The Price ExaM Study).

    Science.gov (United States)

    Waterlander, Wilma E; Blakely, Tony; Nghiem, Nhung; Cleghorn, Christine L; Eyles, Helen; Genc, Murat; Wilson, Nick; Jiang, Yannan; Swinburn, Boyd; Jacobi, Liana; Michie, Jo; Ni Mhurchu, Cliona

    2016-07-19

    There is a need for accurate and precise food price elasticities (PE, change in consumer demand in response to change in price) to better inform policy on health-related food taxes and subsidies. The Price Experiment and Modelling (Price ExaM) study aims to: I) derive accurate and precise food PE values; II) quantify the impact of price changes on quantity and quality of discrete food group purchases and; III) model the potential health and disease impacts of a range of food taxes and subsidies. To achieve this, we will use a novel method that includes a randomised Virtual Supermarket experiment and econometric methods. Findings will be applied in simulation models to estimate population health impact (quality-adjusted life-years [QALYs]) using a multi-state life-table model. The study will consist of four sequential steps: 1. We generate 5000 price sets with random price variation for all 1412 Virtual Supermarket food and beverage products. Then we add systematic price variation for foods to simulate five taxes and subsidies: a fruit and vegetable subsidy and taxes on sugar, saturated fat, salt, and sugar-sweetened beverages. 2. Using an experimental design, 1000 adult New Zealand shoppers complete five household grocery shops in the Virtual Supermarket where they are randomly assigned to one of the 5000 price sets each time. 3. Output data (i.e., multiple observations of price configurations and purchased amounts) are used as inputs to econometric models (using Bayesian methods) to estimate accurate PE values. 4. A disease simulation model will be run with the new PE values as inputs to estimate QALYs gained and health costs saved for the five policy interventions. The Price ExaM study has the potential to enhance public health and economic disciplines by introducing internationally novel scientific methods to estimate accurate and precise food PE values. These values will be used to model the potential health and disease impacts of various food pricing policy
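
    Step 1 of the design (random price variation plus systematic tax and subsidy overlays) can be illustrated in a few lines of code. The product list, variation range, and tax/subsidy rates below are placeholders chosen for illustration, not the study's actual settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    base_prices = {"apples": 3.50, "sugary_drink": 2.20, "butter": 5.00}   # placeholder products
    N_PRICE_SETS = 5                                                       # the study used 5000

    def make_price_set(base, random_sd=0.10, ssb_tax=0.20, fv_subsidy=0.20):
        """One price set: multiplicative random variation, then a systematic
        sugar-sweetened-beverage tax and a fruit/vegetable subsidy."""
        prices = {item: p * rng.lognormal(mean=0.0, sigma=random_sd) for item, p in base.items()}
        prices["sugary_drink"] *= (1 + ssb_tax)
        prices["apples"] *= (1 - fv_subsidy)
        return {item: round(p, 2) for item, p in prices.items()}

    price_sets = [make_price_set(base_prices) for _ in range(N_PRICE_SETS)]
    for ps in price_sets:
        print(ps)
    ```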

  2. Study protocol: combining experimental methods, econometrics and simulation modelling to determine price elasticities for studying food taxes and subsidies (The Price ExaM Study

    Directory of Open Access Journals (Sweden)

    Wilma E. Waterlander

    2016-07-01

    Background: There is a need for accurate and precise food price elasticities (PE, change in consumer demand in response to change in price) to better inform policy on health-related food taxes and subsidies. Methods/Design: The Price Experiment and Modelling (Price ExaM) study aims to: I) derive accurate and precise food PE values; II) quantify the impact of price changes on quantity and quality of discrete food group purchases and; III) model the potential health and disease impacts of a range of food taxes and subsidies. To achieve this, we will use a novel method that includes a randomised Virtual Supermarket experiment and econometric methods. Findings will be applied in simulation models to estimate population health impact (quality-adjusted life-years [QALYs]) using a multi-state life-table model. The study will consist of four sequential steps: 1. We generate 5000 price sets with random price variation for all 1412 Virtual Supermarket food and beverage products. Then we add systematic price variation for foods to simulate five taxes and subsidies: a fruit and vegetable subsidy and taxes on sugar, saturated fat, salt, and sugar-sweetened beverages. 2. Using an experimental design, 1000 adult New Zealand shoppers complete five household grocery shops in the Virtual Supermarket where they are randomly assigned to one of the 5000 price sets each time. 3. Output data (i.e., multiple observations of price configurations and purchased amounts) are used as inputs to econometric models (using Bayesian methods) to estimate accurate PE values. 4. A disease simulation model will be run with the new PE values as inputs to estimate QALYs gained and health costs saved for the five policy interventions. Discussion: The Price ExaM study has the potential to enhance public health and economic disciplines by introducing internationally novel scientific methods to estimate accurate and precise food PE values. These values will be used to model the potential

  3. Combining integrated river modelling and agent based social simulation for river management; The case study of the Grensmaas project

    NARCIS (Netherlands)

    Valkering, P.; Krywkow, Jorg; Rotmans, J.; van der Veen, A.; Douben, N.; van Os, A.G.

    2003-01-01

    In this paper we present a coupled Integrated River Model – Agent Based Social Simulation model (IRM-ABSS) for river management. The models represent the case of the ongoing river engineering project “Grensmaas”. In the ABSS model stakeholders are represented as computer agents negotiating a river

  4. Pressure-induced transformations in glassy water: A computer simulation study using the TIP4P/2005 model

    Science.gov (United States)

    Wong, Jessina; Jahn, David A.; Giovambattista, Nicolas

    2015-08-01

    We study the pressure-induced transformations between low-density amorphous (LDA) and high-density amorphous (HDA) ice by performing out-of-equilibrium molecular dynamics (MD) simulations. We employ the TIP4P/2005 water model and show that this model reproduces qualitatively the LDA-HDA transformations observed experimentally. Specifically, the TIP4P/2005 model reproduces remarkably well the (i) structure (OO, OH, and HH radial distribution functions) and (ii) densities of LDA and HDA at P = 0.1 MPa and T = 80 K, as well as (iii) the qualitative behavior of ρ(P) during compression-induced LDA-to-HDA and decompression-induced HDA-to-LDA transformations. At the rates explored, the HDA-to-LDA transformation is less pronounced than in experiments. By studying the LDA-HDA transformations for a broad range of compression/decompression temperatures, we construct a "P-T phase diagram" for glassy water that is consistent with experiments and remarkably similar to that reported previously for ST2 water. This phase diagram is not inconsistent with the possibility of TIP4P/2005 water exhibiting a liquid-liquid phase transition at low temperatures. A comparison with previous MD simulation studies of SPC/E and ST2 water as well as experiments indicates that, overall, the TIP4P/2005 model performs better than the SPC/E and ST2 models. The effects of cooling and compression rates as well as aging on our MD simulation results are also discussed. The MD results are qualitatively robust under variations of cooling/compression rates (accessible in simulations) and are not affected by aging the hyperquenched glass for at least 1 μs. A byproduct of this work is the calculation of TIP4P/2005 water's diffusion coefficient D(T) at P = 0.1 MPa. It is found that, for T ≥ 210 K, D(T) ≈ (T - TMCT)^(-γ) as predicted by mode coupling theory and in agreement with experiments. For TIP4P/2005 water, TMCT = 209 K and γ = 2.14, very close to the corresponding experimental values TMCT = 221 K
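
    The mode-coupling power law quoted above can be fitted to diffusion data with standard nonlinear least squares. The sketch below does this on synthetic data generated from assumed parameters, purely to illustrate the fitting step; it does not use the TIP4P/2005 results.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def mct_power_law(T, A, T_mct, gamma):
        """Mode-coupling-theory form D(T) = A * (T - T_mct)**(-gamma)."""
        return A * (T - T_mct) ** (-gamma)

    # Synthetic "measurements" generated from assumed parameters (illustration only)
    T = np.linspace(215.0, 300.0, 20)
    D_true = mct_power_law(T, A=5e-7, T_mct=209.0, gamma=2.14)
    rng = np.random.default_rng(0)
    D_obs = D_true * (1 + 0.03 * rng.standard_normal(T.size))

    popt, _ = curve_fit(mct_power_law, T, D_obs, p0=(1e-6, 200.0, 2.0),
                        bounds=([0.0, 150.0, 0.5], [1e-3, 214.0, 5.0]))
    print("fitted A, T_MCT, gamma:", popt)
    ```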

  5. How operator admittance affects the response of a teleoperation system to assistive forces – A model analytic study and simulation

    International Nuclear Information System (INIS)

    Wildenbeest, J.G.W.; Abbink, D.A.; Boessenkool, H.; Heemskerk, C.J.M.; Koning, J.F.

    2013-01-01

    Highlights: ► We developed a computational model of a human operator controlling a teleoperation system based on feedforward control, while performing a free-space motion. ► We studied how assistive forces affect the response of the combined system of telemanipulator and operator, when operator admittance changes due to task instruction or arm configuration. ► Inappropriate assistive forces can lead to assistive forces that are either not perceived, or deflect the combined system; assistive forces should be tailored to operator admittance. ► It is required to study, measure and quantitatively model operator behavior for teleoperated tasks in more detail. -- Abstract: Haptic shared control is a promising approach to increase the effectiveness of remote handling operations. While in haptic shared control the operator is continuously guided with assistive forces, the operator's response to forces is not fully understood. This study describes the development of a computational model of a human operator controlling a teleoperation system based on feedforward control. In a simulation, the operator's response to repulsive forces in free-space motions was modeled for two degrees of freedom, for two operator endpoint admittances (estimated by means of closed-loop identification techniques). The simulation results show that similar repulsive forces lead to substantial discrepancies in response when admittance settings mismatch; wrongly estimated operator admittances can lead to assistive forces that are either not perceived, or deflect the combined system of human operator and telemanipulator. It is concluded that assistive forces should be tailored to the arm configuration and the type of task performed. In order to utilize haptic shared control to its full potential, it is required to study, measure and quantitatively model operator behavior for teleoperated tasks in more detail
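
    To make the notion of operator endpoint admittance concrete, the sketch below integrates a simple mass-spring-damper endpoint model responding to a constant assistive force, for a compliant and a stiff parameter set. The model form and all parameter values are illustrative assumptions; they are not the admittances identified in the paper.

    ```python
    import numpy as np

    def admittance_response(force, m, b, k, dt=0.001, t_end=2.0):
        """Displacement of a mass-spring-damper endpoint model, m*x'' + b*x' + k*x = F,
        integrated with semi-implicit Euler."""
        n = int(t_end / dt)
        x, v = 0.0, 0.0
        xs = np.empty(n)
        for i in range(n):
            a = (force - b * v - k * x) / m
            v += a * dt
            x += v * dt
            xs[i] = x
        return xs

    step_force = 5.0                                        # N, a constant "assistive" force
    compliant = admittance_response(step_force, m=2.0, b=15.0, k=100.0)
    stiff = admittance_response(step_force, m=2.0, b=40.0, k=800.0)
    print(f"steady-state deflection: compliant {compliant[-1]*1000:.1f} mm, stiff {stiff[-1]*1000:.1f} mm")
    ```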

  6. How operator admittance affects the response of a teleoperation system to assistive forces – A model analytic study and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wildenbeest, J.G.W., E-mail: j.g.w.wildenbeest@tudelft.nl [Department of Biomechanical Engineering, Delft University of Technology, Mekelweg 2, 2626 CD Delft (Netherlands); Heemskerk Innovative Technology B.V., Jonckerweg 12, 2201 DZ Noordwijk (Netherlands); Abbink, D.A. [Department of Biomechanical Engineering, Delft University of Technology, Mekelweg 2, 2626 CD Delft (Netherlands); Boessenkool, H. [FOM Institute DIFFER (Dutch Institute of Fundamental Energy Research), Association EUROTOM-FOM, Partner in the Trilateral Eurogio Cluster, P.O. Box 1207, 3430 BE Nieuwegein (Netherlands); Heemskerk, C.J.M.; Koning, J.F. [Heemskerk Innovative Technology B.V., Jonckerweg 12, 2201 DZ Noordwijk (Netherlands); FOM Institute DIFFER (Dutch Institute of Fundamental Energy Research), Association EUROTOM-FOM, Partner in the Trilateral Eurogio Cluster, P.O. Box 1207, 3430 BE Nieuwegein (Netherlands)

    2013-10-15

    Highlights: ► We developed a computational model of a human operator controlling a teleoperation system based on feedforward control, while performing a free-space motion. ► We studied how assistive forces affect the response of the combined system of telemanipulator and operator, when operator admittance changes due to task instruction or arm configuration. ► Inappropriate assistive forces can lead to assistive forces that are either not perceived, or deflect the combined system; assistive forces should be tailored to operator admittance. ► It is required to study, measure and quantitatively model operator behavior for teleoperated tasks in more detail. -- Abstract: Haptic shared control is a promising approach to increase the effectiveness of remote handling operations. While in haptic shared control the operator is continuously guided with assistive forces, the operator's response to forces is not fully understood. This study describes the development of a computational model of a human operator controlling a teleoperation system based on feedforward control. In a simulation, the operator's response to repulsive forces in free-space motions was modeled for two degrees of freedom, for two operator endpoint admittances (estimated by means of closed-loop identification techniques). The simulation results show that similar repulsive forces lead to substantial discrepancies in response when admittance settings mismatch; wrongly estimated operator admittances can lead to assistive forces that are either not perceived, or deflect the combined system of human operator and telemanipulator. It is concluded that assistive forces should be tailored to the arm configuration and the type of task performed. In order to utilize haptic shared control to its full potential, it is required to study, measure and quantitatively model operator behavior for teleoperated tasks in more detail.

  7. NUMERICAL SIMULATION AND MODELING OF UNSTEADY FLOW ...

    African Journals Online (AJOL)

    2014-06-30

    ... The objective of this study is to control the simulation of unsteady flows around structures. ... Aerospace, our results were in good agreement with experimental ... Two-Equation Eddy-Viscosity Turbulence Models for Engineering.

  8. Comparative study of in situ methods for potential and actual evapotranspiration determination and their calculation by simulation model

    International Nuclear Information System (INIS)

    Kolev, B.

    2006-01-01

    Four in situ methods for determining potential and actual evapotranspiration were compared: neutron gauge, tensiometers, gypsum blocks and lysimeters. Actual and potential evapotranspiration were calculated by the water balance equation and by using a simulation model. The aim of this study was mainly focused on calculating water use efficiency and the transpiration coefficient in a potential production situation. This makes it possible to choose the best way to optimize water consumption for a given crop. The final results obtained with the best of the methods could be used for applying the principles of sustainable agricultural production to an arbitrary site in the Bulgarian agricultural area.
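
    For reference, actual evapotranspiration is commonly back-calculated from a soil water balance of the generic form below; the symbols are the conventional ones and the expression is an illustration rather than the exact formulation used in the study.

    ```latex
    % Generic soil water balance over a time interval
    ET_a = P + I - R - D - \Delta S
    % P: precipitation, I: irrigation, R: surface runoff,
    % D: deep drainage below the root zone, \Delta S: change in root-zone water storage
    ```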

  9. Incorporating Psychological Predictors of Treatment Response into Health Economic Simulation Models: A Case Study in Type 1 Diabetes.

    Science.gov (United States)

    Kruger, Jen; Pollard, Daniel; Basarir, Hasan; Thokala, Praveen; Cooke, Debbie; Clark, Marie; Bond, Rod; Heller, Simon; Brennan, Alan

    2015-10-01

    Health economic modeling has paid limited attention to the effects that patients' psychological characteristics have on the effectiveness of treatments. This case study tests 1) the feasibility of incorporating psychological prediction models of treatment response within an economic model of type 1 diabetes, 2) the potential value of providing treatment to a subgroup of patients, and 3) the cost-effectiveness of providing treatment to a subgroup of responders defined using 5 different algorithms. Multiple linear regressions were used to investigate relationships between patients' psychological characteristics and treatment effectiveness. Two psychological prediction models were integrated with a patient-level simulation model of type 1 diabetes. Expected value of individualized care analysis was undertaken. Five different algorithms were used to provide treatment to a subgroup of predicted responders. A cost-effectiveness analysis compared using the algorithms to providing treatment to all patients. The psychological prediction models had low predictive power for treatment effectiveness. Expected value of individualized care results suggested that targeting education at responders could be of value. The cost-effectiveness analysis suggested, for all 5 algorithms, that providing structured education to a subgroup of predicted responders would not be cost-effective. The psychological prediction models tested did not have sufficient predictive power to make targeting treatment cost-effective. The psychological prediction models are simple linear models of psychological behavior. Collection of data on additional covariates could potentially increase statistical power. By collecting data on psychological variables before an intervention, we can construct predictive models of treatment response to interventions. These predictive models can be incorporated into health economic models to investigate more complex service delivery and reimbursement strategies.

  10. A VRLA battery simulation model

    International Nuclear Information System (INIS)

    Pascoe, Phillip E.; Anbuky, Adnan H.

    2004-01-01

    A valve regulated lead acid (VRLA) battery simulation model is an invaluable tool for the standby power system engineer. The obvious use for such a model is to allow the assessment of battery performance. This may involve determining the influence of cells suffering from state of health (SOH) degradation on the performance of the entire string, or the running of test scenarios to ascertain the most suitable battery size for the application. In addition, it enables the engineer to assess the performance of the overall power system. This includes, for example, running test scenarios to determine the benefits of various load shedding schemes. It also allows the assessment of other power system components, either for determining their requirements and/or vulnerabilities. Finally, a VRLA battery simulation model is vital as a stand alone tool for educational purposes. Despite the fundamentals of the VRLA battery having been established for over 100 years, its operating behaviour is often poorly understood. An accurate simulation model enables the engineer to gain a better understanding of VRLA battery behaviour. A system level multipurpose VRLA battery simulation model is presented. It allows an arbitrary battery (capacity, SOH, number of cells and number of strings) to be simulated under arbitrary operating conditions (discharge rate, ambient temperature, end voltage, charge rate and initial state of charge). The model accurately reflects the VRLA battery discharge and recharge behaviour. This includes the complex start of discharge region known as the coup de fouet

  11. Physically realistic modeling of maritime training simulation

    OpenAIRE

    Cieutat , Jean-Marc

    2003-01-01

    Maritime training simulation is an important part of maritime teaching and requires many scientific and technical skills. In this framework, where the real-time constraint has to be maintained, not all physical phenomena can be studied; only the most visually important physical phenomena, relating to the natural elements and the ship's behaviour, are reproduced. Our swell model, based on a surface wave simulation approach, makes it possible to simulate the shape and the propagation of a regular train of waves f...
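
    A regular train of waves of the kind mentioned above can be sketched as a single sinusoidal component obeying the deep-water dispersion relation. The snippet below is a minimal illustration of that idea; it is not the swell model implemented in the simulator, and the amplitude and wavelength are arbitrary.

    ```python
    import numpy as np

    G = 9.81  # gravitational acceleration, m/s^2

    def regular_wave_elevation(x, t, amplitude=1.0, wavelength=60.0):
        """Free-surface elevation of a single regular deep-water wave train:
        eta(x, t) = a * cos(k*x - omega*t), with the dispersion relation omega**2 = g*k."""
        k = 2.0 * np.pi / wavelength
        omega = np.sqrt(G * k)
        return amplitude * np.cos(k * x - omega * t)

    x = np.linspace(0.0, 240.0, 9)                     # positions along the propagation direction (m)
    print(regular_wave_elevation(x, t=0.0).round(2))
    print(regular_wave_elevation(x, t=2.0).round(2))   # the profile has advanced after 2 s
    ```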

  12. A Collective Study on Modeling and Simulation of Resistive Random Access Memory

    Science.gov (United States)

    Panda, Debashis; Sahu, Paritosh Piyush; Tseng, Tseung Yuen

    2018-01-01

    In this work, we provide a comprehensive discussion of the various models proposed for the design and description of resistive random access memory (RRAM), which, being a nascent technology, is heavily reliant on accurate models to develop efficient working designs and to standardize its implementation across devices. This review provides detailed information regarding the various physical methodologies considered for developing models for RRAM devices. It covers all the important models reported to date and elucidates their features and limitations. Various additional effects and anomalies arising from the memristive system have been addressed, and the solutions provided by the models to these problems have been shown as well. All the fundamental concepts of RRAM model development, such as device operation, switching dynamics, and current-voltage relationships, are covered in detail in this work. Popular models proposed by Chua, HP Labs, Yakopcic, TEAM, Stanford/ASU, Ielmini, Berco-Tseng, and many others have been compared and analyzed extensively on various parameters. The working and implementation of window functions such as those of Joglekar, Biolek, and Prodromakis have been presented and compared as well. New well-defined modeling concepts have been discussed which increase the applicability and accuracy of the models. The use of these concepts brings forth several improvements in the existing models, which have been enumerated in this work. Following the template presented, highly accurate models can be developed, which will greatly help future model developers and the modeling community.
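
    As an illustration of the window-function concept reviewed above, the sketch below implements the widely cited Joglekar window f(x) = 1 - (2x - 1)^(2p) inside a simple linear-ion-drift state update. The drive waveform and parameter values are arbitrary and are not taken from the review.

    ```python
    import numpy as np

    def joglekar_window(x: float, p: int = 2) -> float:
        """Joglekar window: suppresses state drift near the boundaries x = 0 and x = 1."""
        return 1.0 - (2.0 * x - 1.0) ** (2 * p)

    def simulate_memristor(current, dt=1e-6, k=1e4, x0=0.1, p=2):
        """Integrate dx/dt = k * i(t) * f(x) for a given current waveform (linear ion drift)."""
        x = x0
        states = []
        for i in current:
            x += dt * k * i * joglekar_window(x, p)
            x = min(max(x, 0.0), 1.0)      # keep the state variable in [0, 1]
            states.append(x)
        return np.array(states)

    # Example: sinusoidal drive current (arbitrary amplitude and frequency)
    t = np.arange(0.0, 2e-2, 1e-6)
    drive = 1e-3 * np.sin(2 * np.pi * 100 * t)
    state = simulate_memristor(drive)
    print("final state variable:", round(float(state[-1]), 4))
    ```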

  13. Application of artificial neural networks in hydrological modeling: A case study of runoff simulation of a Himalayan glacier basin

    Science.gov (United States)

    Buch, A. M.; Narain, A.; Pandey, P. C.

    1994-01-01

    The simulation of runoff from a Himalayan glacier basin using an Artificial Neural Network (ANN) is presented. The performance of the ANN model is found to be superior to that of the Energy Balance Model and the Multiple Regression model. The RMS error is used as the figure of merit for judging the performance of the three models, and the RMS error for the ANN model is the lowest of the three. The ANN is faster in learning and exhibits excellent system generalization characteristics.

  14. A Study on Modeling Approaches in Discrete Event Simulation Using Design Patterns

    National Research Council Canada - National Science Library

    Kim, Leng Koh

    2007-01-01

    .... This modeling paradigm encompasses several modeling approaches (the active role of events, entities as independent components, and chaining components to enable interactivity) that are excellent ways of building a DES system...
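
    The event-scheduling style referred to above reduces to a priority queue of timestamped events in which handling one event schedules the next. The toy arrival process below is a generic illustration of that pattern, not code from the thesis.

    ```python
    import heapq
    import random

    def run_des(t_end=10.0, mean_interarrival=1.5, seed=1):
        """Minimal discrete event simulation: events are (time, name) pairs kept in a heap;
        handling an 'arrival' event schedules the next one (event chaining)."""
        random.seed(seed)
        count = 0
        events = [(random.expovariate(1.0 / mean_interarrival), "arrival")]
        while events:
            clock, name = heapq.heappop(events)
            if clock > t_end:
                break
            if name == "arrival":
                count += 1
                next_time = clock + random.expovariate(1.0 / mean_interarrival)
                heapq.heappush(events, (next_time, "arrival"))
        return count

    print("arrivals processed in 10 time units:", run_des())
    ```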

  15. Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2009-01-01

    This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial

  16. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Computer Based Modelling and Simulation - Modelling Deterministic Systems. N K Srinivasan. General Article. Resonance – Journal of Science Education, Volume 6, Issue 3, March 2001, pp 46-54.

  17. A Study of Simulation Effectiveness in Modeling Heavy Combined Arms Combat in Urban Environments

    Science.gov (United States)

    2007-05-01

    Information Center, 2002), 63. ... CHAPTER TWO: HISTORICAL EXAMPLES OF URBAN WARFARE. Given that the simulations in use today by the U.S. Army do not... Although these calculations can be extracted using AAR tools, failure to do so diligently may mask important information. Furthermore, soldiers... Steel Beasts 2™, the use of Armed Assault™ for training, Virtual Battlefield System 2™, and Coalescent Technologies'™ new DIS*MOUNT™, their replacement

  18. Modelling and simulation of compressible fluid flow in oil reservoir: a case study of the Jubilee Field, Tano Basin (Ghana)

    International Nuclear Information System (INIS)

    Gawusu, S.

    2015-07-01

    Oil extraction represents an important investment, and controlling the rational exploitation of a field means mastering various scientific techniques, including an understanding of the dynamics of the fluids in place. This thesis presents a theoretical investigation of the dynamic behaviour of an oil reservoir during its exploitation. The study investigated the dynamics of fluid flow patterns in a homogeneous oil reservoir using the Radial Diffusivity Equation (RDE) as well as two-phase oil-water flow equations. The RDE model was solved analytically and numerically for pressure using the Constant Terminal Rate Solution (CTRS) and the fully implicit Finite Difference Method (FDM), respectively. The mathematical derivations of the models and their solution procedures are presented to allow easy utilization of the techniques for reservoir and engineering applications. The study predicted that, depending on the rate of production, the initial reservoir pressure will be able to drive extraction for a very long time before any other recovery method is needed to aid the extraction process. The study also revealed that reservoir simulation describing one-dimensional radial flow of a compressible fluid in porous media may be adequately performed using ordinary laptop computers. For the MATLAB simulation, the case of the Jubilee Field, Tano Basin was studied, and an algorithm was developed for the simulation of pressure in the reservoir. Analysis of the plots of pressure versus time and space shows that the Pressure Transient Analysis (PTA) behaviour was duly followed. The approximate analytical and numerical solutions to the Radial Diffusivity Equation (RDE) were in excellent agreement; thus, the reservoir simulation model developed can be used to describe typical pressure-time relationships that are used in conventional Pressure Transient Analysis (PTA). The study was extended to two-phase oil-water flow in reservoirs. The flow of fluids in multi
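
    For context on the constant terminal rate solution of the radial diffusivity equation mentioned above, the sketch below evaluates the classical line-source (exponential-integral) approximation for a single well producing at constant rate. All property values are illustrative assumptions, not Jubilee Field data.

    ```python
    import numpy as np
    from scipy.special import exp1  # exponential integral E1

    def line_source_pressure(r, t, p_i, q, mu, k, h, phi, c_t):
        """Line-source (Ei-function) solution of the radial diffusivity equation,
        p(r, t) = p_i - (q*mu / (4*pi*k*h)) * E1(phi*mu*c_t*r**2 / (4*k*t)).
        All quantities in consistent SI units: p [Pa], q [m^3/s], mu [Pa.s],
        k [m^2], h [m], phi [-], c_t [1/Pa], r [m], t [s]."""
        arg = phi * mu * c_t * r**2 / (4.0 * k * t)
        return p_i - (q * mu / (4.0 * np.pi * k * h)) * exp1(arg)

    # Illustrative (assumed) reservoir properties, not field data
    p = line_source_pressure(r=0.1, t=3600.0, p_i=25e6, q=0.01,
                             mu=1e-3, k=1e-13, h=20.0, phi=0.2, c_t=1e-9)
    print(f"Pressure after 1 h at the wellbore radius: {p/1e6:.2f} MPa")
    ```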

  19. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Jamshid Jamali

    2017-01-01

    Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of MIMIC model for detecting uniform-DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, the number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to a decrease of 0.33% and 0.47% power of MIMIC model for detecting uniform-DIF, respectively. The findings indicated that, by increasing the scale length, the number of response categories and magnitude DIF improved the power of MIMIC model, by 3.47%, 4.83%, and 20.35%, respectively; it also decreased Type I error of MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that power of MIMIC model was at an acceptable level when latent trait distributions were skewed. However, empirical Type I error rate was slightly greater than nominal significance level. Consequently, the MIMIC was recommended for detection of uniform-DIF when latent construct distribution is nonnormal and the focal group sample size is small.

  20. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study.

    Science.gov (United States)

    Jamali, Jamshid; Ayatollahi, Seyyed Mohammad Taghi; Jafari, Peyman

    2017-01-01

    Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of MIMIC model for detecting uniform-DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, the number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to a decrease of 0.33% and 0.47% power of MIMIC model for detecting uniform-DIF, respectively. The findings indicated that, by increasing the scale length, the number of response categories and magnitude DIF improved the power of MIMIC model, by 3.47%, 4.83%, and 20.35%, respectively; it also decreased Type I error of MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that power of MIMIC model was at an acceptable level when latent trait distributions were skewed. However, empirical Type I error rate was slightly greater than nominal significance level. Consequently, the MIMIC was recommended for detection of uniform-DIF when latent construct distribution is nonnormal and the focal group sample size is small.
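
    A compact, generic statement of the MIMIC formulation used for uniform-DIF detection is given below; the notation is standard structural-equation notation and is not copied from the paper.

    ```latex
    % Generic MIMIC model for uniform DIF on item i, with grouping covariate z
    \begin{aligned}
    \eta &= \gamma z + \zeta, \qquad \zeta \sim N(0, \psi),\\
    y_i^{*} &= \lambda_i \eta + \beta_i z + \varepsilon_i, \qquad \varepsilon_i \sim N(0, \theta_i),
    \end{aligned}
    \qquad H_0 :\ \beta_i = 0 \quad \text{(no uniform DIF on item } i\text{)}
    ```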

  1. Power flow modeling of Back-to-Back STATCOM: Comprehensive simulation studies including PV curves and PQ circles

    Directory of Open Access Journals (Sweden)

    Ahmet Mete Vural

    2017-09-01

    Power flow studies of a power network embedded with a FACTS device require effort in program coding. Moreover, the Newton-Raphson method must be modified by embedding injected power components into the algorithm. In this study, we propose a method for modeling one of the newest FACTS concepts in power flow studies without program coding or modification of the existing Newton-Raphson algorithm. The real and reactive power injections of each voltage source converter of the Back-to-Back Static Synchronous Compensator (BtB-STATCOM) are PI-regulated to their desired steady-state values. In this respect, the reactive power injection of each voltage source converter, as well as the real power transfer between them, can be assigned as a control constraint. Operating losses are also taken into account in the proposed modeling approach. Furthermore, the proposed model can easily be modified for the modeling of a conventional STATCOM having only one voltage source converter, or of two STATCOMs operating independently. The proposed modeling approach is verified in PSCAD through a number of simulation scenarios in BtB-STATCOM and STATCOM embedded power systems, namely a 1-machine 4-bus system and a 3-machine 7-bus system. PV curves of local buses compensated by BtB-STATCOM and STATCOM are presented and compared. The steady-state performance of BtB-STATCOM and STATCOM in power flow handling is also compared.
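
    The PI-regulated power injections described above can be illustrated with a generic discrete PI loop that drives an injected reactive power toward its reference. The first-order lag standing in for the converter/network response and all gains are assumptions made purely for illustration; this is not the authors' PSCAD implementation.

    ```python
    def simulate_pi_tracking(q_ref=50.0, kp=0.8, ki=5.0, dt=1e-3, t_end=1.0, tau=0.05):
        """Drive an injected reactive power Q (MVAr) to its reference with a discrete PI
        regulator; the converter/network response is modeled as a first-order lag."""
        q, integral = 0.0, 0.0
        history = []
        for _ in range(int(t_end / dt)):
            error = q_ref - q
            integral += error * dt
            u = kp * error + ki * integral          # PI control action (MVAr command)
            q += dt * (u - q) / tau                 # first-order lag toward the command
            history.append(q)
        return history

    q_trace = simulate_pi_tracking()
    print(f"Injected Q after 1 s: {q_trace[-1]:.2f} MVAr (reference 50 MVAr)")
    ```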

  2. Assessing the ability of mechanistic volatilization models to simulate soil surface conditions: a study with the Volt'Air model.

    Science.gov (United States)

    Garcia, L; Bedos, C; Génermont, S; Braud, I; Cellier, P

    2011-09-01

    Ammonia and pesticide volatilization in the field is a surface phenomenon involving physical and chemical processes that depend on the soil surface temperature and water content. The water transfer, heat transfer and energy budget sub models of volatilization models are adapted from the most commonly accepted formalisms and parameterizations. They are less detailed than the dedicated models describing water and heat transfers and surface status. The aim of this work was to assess the ability of one of the available mechanistic volatilization models, Volt'Air, to accurately describe the pedo-climatic conditions of a soil surface at the required time and space resolution. The assessment involves: (i) a sensitivity analysis, (ii) an evaluation of Volt'Air outputs in the light of outputs from a reference Soil-Vegetation-Atmosphere Transfer model (SiSPAT) and three experimental datasets, and (iii) the study of three tests based on modifications of SiSPAT to establish the potential impact of the simplifying assumptions used in Volt'Air. The analysis confirmed that a 5 mm surface layer was well suited, and that Volt'Air surface temperature correlated well with the experimental measurements as well as with SiSPAT outputs. In terms of liquid water transfers, Volt'Air was overall consistent with SiSPAT, with discrepancies only during major rainfall events and dry weather conditions. The tests enabled us to identify the main source of the discrepancies between Volt'Air and SiSPAT: the lack of gaseous water transfer description in Volt'Air. They also helped to explain why neither Volt'Air nor SiSPAT was able to represent lower values of surface water content: current classical water retention and hydraulic conductivity models are not yet adapted to cases of very dry conditions. Given the outcomes of this study, we discuss to what extent the volatilization models can be improved and the questions they pose for current research in water transfer modeling and parameterization

  3. Vehicle dynamics modeling and simulation

    CERN Document Server

    Schramm, Dieter; Bardini, Roberto

    2014-01-01

    The authors examine in detail the fundamentals and mathematical descriptions of the dynamics of automobiles. In this context different levels of complexity will be presented, starting with basic single-track models up to complex three-dimensional multi-body models. A particular focus is on the process of establishing mathematical models on the basis of real cars and the validation of simulation results. The methods presented are explained in detail by means of selected application scenarios.

  4. A Case Study Regarding Influence of Solvers in Matlab/Simulink for Induction Machine Model in Wind Turbine Simulations

    DEFF Research Database (Denmark)

    Iov, F.; Blaabjerg, Frede; Hansen, A.D.

    2002-01-01

    In recent years Matlab/Simulink® has become the most widely used software for modelling and simulation of dynamic systems. Wind energy conversion systems are an example of such systems, because they contain parts with very different time constants: wind, turbine, generator, power electronics...... The different implementations of the induction machine model, the influence of the solvers in Simulink, and how the simulation speed can be increased for a wind turbine are discussed.

  5. Stochastic models: theory and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
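
    As a small example of the sample-generation algorithms the report is concerned with, the sketch below draws independent sample paths of an Ornstein-Uhlenbeck process using its exact one-step Gaussian transition. The process and parameter values are chosen for illustration and are not taken from the report.

    ```python
    import numpy as np

    def sample_ou_paths(n_paths, n_steps, dt=0.01, theta=1.5, mu=0.0, sigma=0.3, x0=1.0, seed=0):
        """Independent sample paths of dX = theta*(mu - X) dt + sigma dW, generated with
        the exact Gaussian transition density of the Ornstein-Uhlenbeck process."""
        rng = np.random.default_rng(seed)
        decay = np.exp(-theta * dt)
        step_sd = sigma * np.sqrt((1.0 - decay**2) / (2.0 * theta))
        paths = np.empty((n_paths, n_steps + 1))
        paths[:, 0] = x0
        for k in range(n_steps):
            noise = rng.standard_normal(n_paths)
            paths[:, k + 1] = mu + (paths[:, k] - mu) * decay + step_sd * noise
        return paths

    samples = sample_ou_paths(n_paths=1000, n_steps=500)
    print("mean and std of X at the final time:",
          samples[:, -1].mean().round(3), samples[:, -1].std().round(3))
    ```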

  6. Plasma modelling and numerical simulation

    International Nuclear Information System (INIS)

    Van Dijk, J; Kroesen, G M W; Bogaerts, A

    2009-01-01

    Plasma modelling is an exciting subject in which virtually all physical disciplines are represented. Plasma models combine the electromagnetic, statistical and fluid dynamical theories that have their roots in the 19th century with the modern insights concerning the structure of matter that were developed throughout the 20th century. The present cluster issue consists of 20 invited contributions, which are representative of the state of the art in plasma modelling and numerical simulation. These contributions provide an in-depth discussion of the major theories and modelling and simulation strategies, and their applications to contemporary plasma-based technologies. In this editorial review, we introduce and complement those papers by providing a bird's eye perspective on plasma modelling and discussing the historical context in which it has surfaced. (editorial review)

  7. Interaction of lysozyme with a tear film lipid layer model: A molecular dynamics simulation study.

    Science.gov (United States)

    Wizert, Alicja; Iskander, D Robert; Cwiklik, Lukasz

    2017-12-01

    The tear film is a thin multilayered structure covering the cornea. Its outermost layer is a lipid film, underneath which resides an aqueous layer. This tear film lipid layer (TFLL) is itself a complex structure, formed by both polar and nonpolar lipids. It was recently suggested that, due to tear film dynamics, the TFLL contains inhomogeneities in the form of polar lipid aggregates. The aqueous phase of the tear film contains proteins of lachrymal origin, of which lysozyme is the most abundant. These proteins can alter TFLL properties, mainly by reducing its surface tension. However, the detailed nature of protein-lipid interactions in the tear film is not known. We investigate the interactions of lysozyme with the TFLL in molecular detail by employing coarse-grained molecular dynamics simulations. We demonstrate that lysozyme, due to lateral restructuring of the TFLL, is able to penetrate the tear lipid film embedded in inverse micellar aggregates. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. The Basic Immune Simulator: An agent-based model to study the interactions between innate and adaptive immunity

    Directory of Open Access Journals (Sweden)

    Orosz Charles G

    2007-09-01

    Background: We introduce the Basic Immune Simulator (BIS), an agent-based model created to study the interactions between the cells of the innate and adaptive immune system. Innate immunity, the initial host response to a pathogen, generally precedes adaptive immunity, which generates immune memory for an antigen. The BIS simulates basic cell types, mediators and antibodies, and consists of three virtual spaces representing parenchymal tissue, secondary lymphoid tissue and the lymphatic/humoral circulation. The BIS includes a Graphical User Interface (GUI) to facilitate its use as an educational and research tool. Results: The BIS was used to qualitatively examine the innate and adaptive interactions of the immune response to a viral infection. Calibration was accomplished via a parameter sweep of initial agent population size and comparison of simulation patterns to those reported in the basic science literature. The BIS demonstrated that the degree of the initial innate response was a crucial determinant for an appropriate adaptive response. Deficiency or excess in innate immunity resulted in excessive proliferation of adaptive immune cells. Deficiency in any of the immune system components increased the probability of failure to clear the simulated viral infection. Conclusion: The behavior of the BIS matches both normal and pathological behavior patterns in a generic viral infection scenario. Thus, the BIS effectively translates mechanistic cellular and molecular knowledge regarding the innate and adaptive immune response and reproduces the immune system's complex behavioral patterns. The BIS can be used both as an educational tool to demonstrate the emergence of these patterns and as a research tool to systematically identify potential targets for more effective treatment strategies for disease processes including hypersensitivity reactions (allergies, asthma), autoimmunity and cancer. We believe that the BIS can be a useful addition to

  9. Study and simulation of a multi-lithology stratigraphic model under maximum erosion rate constraint; Etude et simulation d'un modele statigraphique multi-lithologique sous contrainte de taux d'erosion maximal

    Energy Technology Data Exchange (ETDEWEB)

    Gervais, V.

    2004-11-01

    The subject of this report is the study and simulation of a model describing the infill of sedimentary basins on large scales in time and space. It simulates the evolution through time of the sediment layer in terms of geometry and rock properties. A parabolic equation is coupled to a hyperbolic equation by an input boundary condition at the top of the basin. The model also considers a unilaterality constraint on the erosion rate. In the first part of the report, the mathematical model is described and particular solutions are defined. The second part deals with the definition of numerical schemes and the simulation of the model. In the first chapter, finite volume numerical schemes are defined and studied. The Newton algorithm, adapted to the unilateral constraint, used to solve the schemes is given, followed by numerical results in terms of performance and accuracy. In the second chapter, a preconditioning strategy to solve the linear system by an iterative solver at each Newton iteration is defined, and numerical results are given. In the last part, a simplified model is considered in which a variable is decoupled from the other unknowns and satisfies a parabolic equation. A weak formulation is defined for the remaining coupled equations, for which the existence of a unique solution is obtained. The proof uses the convergence of a numerical scheme. (author)

  10. The carbon balance of European croplands: a Trans-European, cross-site, multi model simulation study

    Science.gov (United States)

    Wattenbach, Martin; Sus, Oliver; Vuichard, Nicolas; Lehuger, Simon; Leip, Adrian; Gottschalk, Pia; Smith, Pete

    2010-05-01

    Croplands cover approximately 45% of Europe and play a significant role in the overall carbon budget of the continent. However, the estimation of the regional carbon balance is still uncertain. Here, we present a multi-site model comparison for four cropland ecosystem models, namely the DNDC, ORCHIDEE-STICS, CERES-EGC and SPA models. We compare the accuracy of the models in predicting net ecosystem exchange (NEE), gross primary production (GPP), ecosystem respiration (Reco) as well as actual evapotranspiration (ETa) for winter wheat (Triticum aestivum L.), winter barley (Hordeum vulgare L.) and maize (Zea mays L.), derived from eddy covariance measurements at five sites of the CarboEurope IP network. The models are all able to simulate mean daily GPP. The simulation results for mean daily ETa and Reco are, however, less accurate. The resulting simulation of daily NEE is adequate, apart from some cases where models fail due to a lack of phase and amplitude alignment. ORCHIDEE-STICS and SPA demonstrate the best performance; nevertheless, they are not able to simulate full crop rotations that account for multiple management practices. CERES-EGC and especially DNDC, although exhibiting a lower level of model accuracy, are able to simulate such conditions, resulting in more accurate annual cumulative NEE.

  11. Field measurements, simulation modeling and development of analysis for moisture stressed corn and soybeans, 1982 studies

    Science.gov (United States)

    Blad, B. L.; Norman, J. M.; Gardner, B. R.

    1983-01-01

    The experimental design, data acquisition and analysis procedures for agronomic and reflectance data acquired over corn and soybeans at the Sandhills Agricultural Laboratory of the University of Nebraska are described. The following conclusions were reached: (1) predictive leaf area estimation models can be defined which appear valid over a wide range of soils; (2) relative grain yield estimates over moisture stressed corn were improved by combining reflectance and thermal data; (3) corn phenology estimates using the model of Badhwar and Henderson (1981) exhibited systematic bias but were reasonably accurate; (4) canopy reflectance can be modelled to within approximately 10% of measured values; and (5) soybean pubescence significantly affects canopy reflectance, energy balance and water use relationships.

  12. Reduced dimer production in solar-simulator-pumped continuous wave iodine lasers based on model simulations and scaling and pumping studies

    Science.gov (United States)

    Costen, Robert C.; Heinbockel, John H.; Miner, Gilda A.; Meador, Willard E., Jr.; Tabibi, Bagher M.; Lee, Ja H.; Williams, Michael D.

    1995-01-01

    A numerical rate equation model for a continuous wave iodine laser with longitudinally flowing gaseous lasant is validated by approximating two experiments that compare the perfluoroalkyl iodine lasants n-C3F7I and t-C4F9I. The salient feature of the simulations is that the production rate of the dimer (C4F9)2 is reduced by one order of magnitude relative to the dimer (C3F7)2. The model is then used to investigate the kinetic effects of this reduced dimer production, especially how it improves output power. Related parametric and scaling studies are also presented. When dimer production is reduced, more monomer radicals (t-C4F9) are available to combine with iodine ions, thus enhancing depletion of the laser lower level and reducing buildup of the principal quencher, molecular iodine. Fewer iodine molecules result in fewer downward transitions from quenching and more transitions from stimulated emission of lasing photons. Enhanced depletion of the lower level reduces the absorption of lasing photons. The combined result is more lasing photons and proportionally increased output power.

  13. Model for Simulation Atmospheric Turbulence

    DEFF Research Database (Denmark)

    Lundtang Petersen, Erik

    1976-01-01

    A method that produces realistic simulations of atmospheric turbulence is developed and analyzed. The procedure makes use of a generalized spectral analysis, often called a proper orthogonal decomposition or the Karhunen-Loève expansion. A set of criteria, emphasizing a realistic appearance...... eigenfunctions and estimates of the distributions of the corresponding expansion coefficients. The simulation method utilizes the eigenfunction expansion procedure to produce preliminary time histories of the three velocity components simultaneously. As a final step, a spectral shaping procedure is then applied....... The method is unique in modeling the three velocity components simultaneously, and it is found that important cross-statistical features are reasonably well-behaved. It is concluded that the model provides a practical, operational simulator of atmospheric turbulence....
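
    A minimal sketch of the kind of Karhunen-Loève (proper orthogonal decomposition) procedure described above, assuming a matrix of measured velocity records is available: the eigenfunctions of the sample covariance are estimated, expansion coefficients are drawn at random, and a synthetic record is reconstructed. The variable names and the Gaussian-coefficient assumption are illustrative and not taken from the paper, which additionally applies a spectral shaping step.

    ```python
    import numpy as np

    def kl_synthetic_record(u_data, n_modes, rng=None):
        """Generate one synthetic velocity record via a Karhunen-Loeve expansion.

        u_data : array (n_samples, n_points) of measured fluctuation records
        n_modes: number of eigenfunctions retained in the expansion
        """
        rng = np.random.default_rng() if rng is None else rng
        u_mean = u_data.mean(axis=0)
        fluct = u_data - u_mean
        cov = np.cov(fluct, rowvar=False)            # sample covariance matrix
        lam, phi = np.linalg.eigh(cov)               # columns of phi are discrete eigenfunctions
        order = np.argsort(lam)[::-1]                # sort modes by decreasing energy
        lam, phi = lam[order][:n_modes], phi[:, order][:, :n_modes]
        # Draw expansion coefficients; assumed independent Gaussian with variance
        # equal to the eigenvalues (a simplifying assumption).
        a = rng.normal(scale=np.sqrt(np.clip(lam, 0.0, None)))
        return u_mean + phi @ a

    # Example with placeholder data: 500 records sampled at 64 points
    data = np.random.default_rng(0).normal(size=(500, 64))
    sample = kl_synthetic_record(data, n_modes=10)
    ```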

  14. Modeling code-interactions in bilingual word recognition: Recent empirical studies and simulations with BIA+

    NARCIS (Netherlands)

    Lam, K.J.Y.; Dijkstra, A.F.J.

    2010-01-01

    Daily conversations contain many repetitions of identical and similar word forms. For bilinguals, the words can even come from the same or different languages. How do such repetitions affect the human word recognition system? The Bilingual Interactive Activation Plus (BIA+) model provides a

  15. Neuronal encoding of object and distance information: A model simulation study on naturalistic optic flow processing

    Directory of Open Access Journals (Sweden)

    Patrick eHennig

    2012-03-01

    Full Text Available We developed a model of the input circuitry of the FD1 cell, an identified motion-sensitive interneuron in the blowfly’s visual system. The model circuit successfully reproduces the FD1 cell’s most conspicuous property: its larger responses to objects than to spatially extended patterns. The model circuit also mimics the time-dependent responses of FD1 to dynamically complex naturalistic stimuli, shaped by the blowfly’s saccadic flight and gaze strategy: the FD1 responses are enhanced when, as a consequence of self-motion, a nearby object crosses the receptive field during intersaccadic intervals. Moreover, the model predicts that these object-induced responses are superimposed by pronounced pattern-dependent fluctuations during movements on virtual test flights in a three-dimensional environment with systematic modifications of the environmental patterns. Hence, the FD1 cell is predicted not to detect unambiguously objects defined by the spatial layout of the environment, but also to be sensitive to objects distinguished by textural features. These ambiguous detection abilities suggest an encoding of information about objects - irrespective of the features by which the objects are defined - by a population of cells, with the FD1 cell presumably playing a prominent role in such an ensemble.

  16. Some Sensitivity Studies of Chemical Transport Simulated in Models of the Soil-Plant-Litter System

    Energy Technology Data Exchange (ETDEWEB)

    Begovich, C.L.

    2002-10-28

    Fifteen parameters in a set of five coupled models describing carbon, water, and chemical dynamics in the soil-plant-litter system were varied in a sensitivity analysis of model response. Results are presented for chemical distribution in the components of soil, plants, and litter along with selected responses of biomass, internal chemical transport (xylem and phloem pathways), and chemical uptake. Response and sensitivity coefficients are presented for up to 102 model outputs in an appendix. Two soil properties (chemical distribution coefficient and chemical solubility) and three plant properties (leaf chemical permeability, cuticle thickness, and root chemical conductivity) had the greatest influence on chemical transport in the soil-plant-litter system under the conditions examined. Pollutant gas uptake (SO{sub 2}) increased with change in plant properties that increased plant growth. Heavy metal dynamics in litter responded to plant properties (phloem resistance, respiration characteristics) which induced changes in the chemical cycling to the litter system. Some of the SO{sub 2} and heavy metal responses were not expected but became apparent through the modeling analysis.

  17. Application of digital human modeling and simulation for vision analysis of pilots in a jet aircraft: a case study.

    Science.gov (United States)

    Karmakar, Sougata; Pal, Madhu Sudan; Majumdar, Deepti; Majumdar, Dhurjati

    2012-01-01

    Ergonomic evaluation of visual demands becomes crucial for operators/users when rapid decision making is needed under extreme time constraints, as in the navigation task of a jet aircraft. The research reported here comprises an ergonomic evaluation of the pilot's vision in a jet aircraft in a virtual environment, to demonstrate how the vision analysis tools of digital human modeling software can be used effectively for such a study. Three (03) dynamic digital pilot models, representative of the smallest, average and largest Indian pilot population, were generated from an anthropometric database and interfaced with the digital prototype of the cockpit in Jack software for analysis of vision within and outside the cockpit. Vision analysis tools like view cones, eye view windows, blind spot area, obscuration zone, reflection zone etc. were employed during the evaluation of visual fields. The vision analysis tool was also used for studying kinematic changes of the pilot's body joints during a simulated gazing activity. From the present study, it can be concluded that the vision analysis tool of digital human modeling software is very effective in evaluating the position and alignment of different displays and controls in the workstation based upon their priorities within the visual fields and the anthropometry of the targeted users, long before the development of a physical prototype.

  18. Modeling and simulation of discrete event systems

    CERN Document Server

    Choi, Byoung Kyu

    2013-01-01

    Computer modeling and simulation (M&S) allows engineers to study and analyze complex systems. Discrete-event system (DES)-M&S is used in modern management, industrial engineering, computer science, and the military. As computer speeds and memory capacity increase, so DES-M&S tools become more powerful and more widely used in solving real-life problems. Based on over 20 years of evolution within a classroom environment, as well as on decades-long experience in developing simulation-based solutions for high-tech industries, Modeling and Simulation of Discrete-Event Systems is the only book on

  19. R&D studies on the hadronic calorimeter and physics simulations on the Standard Model and minimal supersymmetric Standard Model Higgs bosons in the CMS experiment

    CERN Document Server

    Duru, Firdevs

    2007-01-01

    This thesis consists of two main parts: R&D studies done on the Compact Muon Solenoid (CMS) Hadronic Calorimeter (HCAL) and physics simulations on the Higgs boson for a Minimal Supersymmetric Standard Model (MSSM) and a Standard Model (SM) channel. In the first part, the air core light guides used in the read-out system of the Hadronic Forward (HF) calorimeter and the reflective materials used in them are studied. Then, tests and simulations were performed to find the most efficient way to collect Cerenkov light from the quartz plates, which are proposed as a substitute for the scintillator tiles in the Hadronic Endcap (HE) calorimeter due to radiation damage problems. In the second part physics simulations and their results are presented. The MSSM channel H/A → ττ → ℓℓνννν is studied to investigate the jet and missing transverse energy (MET) reconstruction of the CMS detector. The effects of the jet and MET corrections on the Higgs boson mass reconstruction are investigated. ...

  20. Nowcasting of deep convective clouds and heavy precipitation: Comparison study between NWP model simulation and extrapolation

    Czech Academy of Sciences Publication Activity Database

    Bližňák, Vojtěch; Sokol, Zbyněk; Zacharov, Petr, jr.

    2017-01-01

    Vol. 184, February (2017), pp. 24-34. ISSN 0169-8095. R&D Projects: GA ČR(CZ) GPP209/12/P701; GA ČR GA13-34856S. Institutional support: RVO:68378289. Keywords: meteorological satellite * convective storm * NWP model * verification * Czech Republic. Subject RIV: DG - Atmosphere Sciences, Meteorology. OECD field: Meteorology and atmospheric sciences. Impact factor: 3.778, year: 2016. http://www.sciencedirect.com/science/article/pii/S0169809516304288

  1. Disability weight of Clonorchis sinensis infection: captured from community study and model simulation.

    Directory of Open Access Journals (Sweden)

    Men-Bao Qian

    2011-12-01

    Full Text Available BACKGROUND: Clonorchiasis is among the most neglected tropical diseases. It is caused by ingesting raw or undercooked fish or shrimp containing the larvae of Clonorchis sinensis and is mainly endemic in Southeast Asia, including China, Korea and Vietnam. The global estimates for the population at risk and infected are 601 million and 35 million, respectively. However, it is still not listed in the Global Burden of Disease (GBD) study, and no disability weight is available for it. A disability weight reflects the average degree of loss of life value due to a certain chronic disease condition and ranges between 0 (complete health) and 1 (death). It is a crucial parameter for calculating the morbidity part of any disease burden in terms of disability-adjusted life years (DALYs). METHODOLOGY/PRINCIPAL FINDINGS: According to the probability and disability weight of each single sequela caused by C. sinensis infection, the overall disability weight could be captured through Monte Carlo simulation. The probability of each single sequela was obtained from one community investigation, while the corresponding disability weight was searched from the literature in an evidence-based approach. The overall disability weights of males and females were 0.101 and 0.050, respectively. The overall disability weights of the age groups 5-14, 15-29, 30-44, 45-59 and 60+ were 0.022, 0.052, 0.072, 0.094 and 0.118, respectively. There was some evidence showing that the disability weight and the geometric mean of eggs per gram of feces (GMEPG) fitted a logarithmic equation. CONCLUSION/SIGNIFICANCE: The overall disability weights of C. sinensis infection differ between sexes and age groups. The disability weight captured here may be referred to for estimating the disease burden of C. sinensis infection.
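
    A minimal sketch of the kind of Monte Carlo combination described above, under the assumption that each sequela occurs independently with its community-derived probability and contributes a literature-derived disability weight, and that co-occurring weights are combined multiplicatively. The sequela names, numbers, and the combination rule are illustrative placeholders, not values from the study.

    ```python
    import numpy as np

    # Illustrative placeholders: per-sequela (probability, disability weight)
    sequelae = {
        "mild abdominal symptoms": (0.30, 0.02),
        "cholangitis":             (0.05, 0.20),
        "cholelithiasis":          (0.08, 0.10),
    }

    def overall_disability_weight(sequelae, n_draws=100_000, seed=1):
        rng = np.random.default_rng(seed)
        p  = np.array([v[0] for v in sequelae.values()])
        dw = np.array([v[1] for v in sequelae.values()])
        # Each draw decides which sequelae are present in a simulated case
        present = rng.random((n_draws, p.size)) < p
        # Combine co-occurring weights multiplicatively (an assumption):
        # overall = 1 - prod(1 - dw_i) over the sequelae present in that draw
        combined = 1.0 - np.prod(np.where(present, 1.0 - dw, 1.0), axis=1)
        return combined.mean()

    print(f"simulated overall disability weight: {overall_disability_weight(sequelae):.3f}")
    ```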

  2. A Study on the Role of Reaction Modeling in Multi-phase CFD-based Simulations of Chemical Looping Combustion; Impact du modele de reaction sur les simulations CFD de la combustion en boucle chimique

    Energy Technology Data Exchange (ETDEWEB)

    Kruggel-Emden, H.; Stepanek, F. [Department of Chemical Engineering, South Kensington Campus, Imperial College London, SW7 2AZ, London (United Kingdom); Kruggel-Emden, H.; Munjiza, A. [Department of Engineering, Queen Mary, University of London, Mile End Road, E1 4NS, London (United Kingdom)

    2011-03-15

    Chemical Looping Combustion is an energy-efficient combustion technology for the inherent separation of carbon dioxide for both gaseous and solid fuels. For scale-up and further development of this process, multi-phase CFD-based simulations, which rely on kinetic models for the solid/gaseous reactions, have strong potential. Reaction models are usually simple in structure in order to keep the computational cost low. They are commonly derived from thermogravimetric experiments. With only a few CFD-based simulations performed on chemical looping combustion, there is a lack of understanding of the role and sensitivity of the applied chemical reaction model on the outcome of a simulation. The aim of this investigation is therefore the study of three different carrier materials, CaSO{sub 4}, Mn{sub 3}O{sub 4} and NiO, with the gaseous fuels H{sub 2} and CH{sub 4} in a batch-type reaction vessel. Four reaction models, namely the linear shrinking core, the spherical shrinking core, the Avrami-Erofeev and a recently proposed multi-parameter model, are applied and compared on a case-by-case basis. (authors)
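
    For reference, the textbook integral forms commonly associated with the first three reaction models named above are given below (conversion X, rate constant k, Avrami exponent n); the exact parameterization used by the authors may differ from these standard expressions.

    ```latex
    \begin{align*}
    \text{linear shrinking core (slab, reaction control):} \quad & X = k\,t \\
    \text{spherical shrinking core (reaction control):}    \quad & 1 - (1 - X)^{1/3} = k\,t \\
    \text{Avrami--Erofeev (nucleation and growth):}        \quad & X = 1 - \exp\!\big[-(k\,t)^{\,n}\big]
    \end{align*}
    ```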

  3. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    Science.gov (United States)

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
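
    A minimal sketch of the two estimators being compared, fitted with statsmodels on simulated data: the robust ("modified") Poisson fit uses a sandwich (HC) covariance, and the log-binomial fit is a binomial GLM with a log link. The data-generating step and variable names are illustrative only, and the log-binomial fit can fail to converge on some datasets, which is one of the practical issues in this literature.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 2000
    x = rng.binomial(1, 0.5, size=n)                    # single binary covariate
    p = np.clip(0.15 * np.exp(np.log(2.0) * x), 0, 1)   # true risk ratio = 2
    y = rng.binomial(1, p)
    X = sm.add_constant(x)

    # Robust (modified) Poisson: Poisson GLM with a sandwich covariance
    poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")

    # Log-binomial: binomial GLM with a log link
    logbin_fit = sm.GLM(y, X,
                        family=sm.families.Binomial(link=sm.families.links.Log())).fit()

    print("robust Poisson RR:", np.exp(poisson_fit.params[1]))
    print("log-binomial  RR:", np.exp(logbin_fit.params[1]))
    ```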

  4. Modeling, simulation, parametric study and economic assessment of reciprocating internal combustion engine integrated with multi-effect desalination unit

    International Nuclear Information System (INIS)

    Salimi, Mohsen; Amidpour, Majid

    2017-01-01

    Highlights: • Integration of a small MED unit with a gas engine power cycle is studied in this paper. • Modeling, simulation, parametric study and sensitivity analysis were performed. • A thermodynamic model for heat recovery and power generation of the gas engine is presented. • The Annualized Cost of System (ACS) method has been employed for economic assessment. • The dependence of the economic feasibility of the integrated system on natural gas and water prices has been investigated. - Abstract: Due to the thermal nature of multi-effect desalination (MED), its integration with a suitable power cycle is highly desirable for waste heat recovery. One suitable power cycle for the proposed integration is the internal combustion engine (ICE). The exhaust gas heat of the ICE is used to produce the motive steam that supplies the required heat to the first effect of the MED system. The water jacket heat is also utilized in a heat exchanger to pre-heat the seawater. This paper studies a thermodynamic model for a tri-generation system composed of an ICE integrated with MED. The ICE thermodynamic model has been used in place of different empirical efficiency relations to estimate performance-load curves reasonably. The performance of the entire system has been coded in MATLAB, and the results of the proposed thermodynamic model for the engine have been verified against the manufacturer's catalogue. By increasing the engine load from 40% to 100%, the water production of the MED unit increases from 4.38 to 26.78 cubic meters per day and the tri-generation efficiency from 31% to 56%. An economic analysis of the MED unit integrated with the ICE was performed based on the Annualized Cost of System method. This integration makes the system more economical. It was determined that at higher market prices for fresh water (more than 7 US$ per cubic meter), increasing the number of effects has a more significant effect on reducing the payback period.
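
    A minimal sketch of the Annualized Cost of System idea referred to above: capital cost is converted to an equivalent annual cost through the capital recovery factor and added to annual operation and maintenance. The cost figures, interest rate, lifetime and operating days are illustrative assumptions, not values from the paper; only the 26.78 m3/day full-load production is taken from the abstract.

    ```python
    def crf(i, n):
        """Capital recovery factor: converts a present capital cost to a uniform annual cost."""
        return i * (1 + i) ** n / ((1 + i) ** n - 1)

    # Illustrative placeholder economics (not values from the paper)
    capital_cost   = 250_000.0    # US$, installed MED and heat-recovery equipment
    annual_o_and_m = 12_000.0     # US$/year
    interest, lifetime = 0.08, 20 # interest rate and project life in years

    acs = capital_cost * crf(interest, lifetime) + annual_o_and_m
    water_per_year = 26.78 * 330  # m3/year at full load, assuming 330 operating days

    print(f"annualized cost of system: {acs:,.0f} US$/year")
    print(f"simple cost per cubic meter (ignoring power revenue): {acs / water_per_year:.2f} US$/m3")
    ```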

  5. Systems Operations Studies for Automated Guideway Transit Systems : Discrete Event Simulation Model User's Manual.

    Science.gov (United States)

    1982-06-01

    In order to examine specific Automated Guideway Transit (AGT) developments and concepts, and to build a better knowledge base for future decision-making, the Urban Mass Transportation Administration (UMTA) undertook a new program of studies and techn...

  6. System Operations Studies for Automated Guideway Transit Systems : Discrete Event Simulation Model Programmer's Manual

    Science.gov (United States)

    1982-07-01

    In order to examine specific automated guideway transit (AGT) developments and concepts, UMTA undertook a program of studies and technology investigations called Automated Guideway Transit Technology (AGTT) Program. The objectives of one segment of t...

  7. Simulation Modelling Approach to Human Resources Management: Burnout Effect Case Study

    Directory of Open Access Journals (Sweden)

    Marjana Merkac Skok

    2013-07-01

    Full Text Available Human resources management has become one of the most important levers in organizations for gaining competitive advantage. However, human resources management is on many occasions prone to nonlinear feedback with delayed effects. The burnout effect is one of the problems especially often faced by experts in the learning society. Burnout occurs because modern society is fast-moving, achievement-oriented and very competitive, leading to many stressful situations that individuals cannot always handle. We propose the use of system dynamics methodology to explore the burnout effect and to learn about its consequences. Several experiments have been conducted and are presented which indicate increase-and-collapse behaviour when burnout is experienced by the individual; a minimal sketch of this pattern is shown below. Experiments with the model explore the presence of the burnout effect in several different situations, with different paces of manifestation.
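
    A minimal stock-and-flow sketch in the spirit of the system dynamics approach described above: a single "energy reserve" stock is depleted faster as accumulated workload grows, producing an increase-and-collapse pattern in performance. The structure, parameter names and values are illustrative assumptions, not the article's model.

    ```python
    import numpy as np

    def burnout_sd(T=200.0, dt=0.25):
        """Euler-integrated toy stock-and-flow model of the burnout effect."""
        steps = int(T / dt)
        performance = np.zeros(steps)
        energy, workload = 1.0, 0.2            # stocks: personal energy reserve, workload
        for t in range(steps):
            performance[t] = energy * workload
            # ambition keeps raising the workload while energy lasts ...
            workload += dt * 0.05 * energy * workload
            # ... but recovery weakens and depletion grows as workload accumulates
            recovery  = 0.10 * (1.0 - energy) / (1.0 + 2.0 * workload)
            depletion = 0.08 * workload * energy
            energy    = max(energy + dt * (recovery - depletion), 0.0)
        return performance

    perf = burnout_sd()
    peak = int(perf.argmax())
    print(f"performance peaks at step {peak} ({perf[peak]:.2f}) and ends at {perf[-1]:.2f}")
    ```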

  8. Modeling and Simulation for Safeguards

    International Nuclear Information System (INIS)

    Swinhoe, Martyn T.

    2012-01-01

    The purpose of this talk is to give an overview of the role of modeling and simulation in Safeguards R&D and to introduce (some of) the tools used. Some definitions are: (1) Modeling - the representation, often mathematical, of a process, concept, or operation of a system, often implemented by a computer program; (2) Simulation - the representation of the behavior or characteristics of one system through the use of another system, especially a computer program designed for the purpose; and (3) Safeguards - the timely detection of diversion of significant quantities of nuclear material. The roles of modeling and simulation are: (1) to calculate amounts of material (plant modeling); (2) to calculate signatures of nuclear material etc. (source terms); and (3) to model detector performance (radiation transport and detection). Plant modeling software (e.g. FACSIM) gives the flows and amounts of material stored at all parts of the process. In safeguards this allows us to calculate the expected uncertainty of the mass and to evaluate the expected MUF. We can determine the measurement accuracy required to achieve a certain performance.
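
    A minimal sketch of the kind of calculation this enables, assuming independent measurement errors: the standard deviation of MUF follows from propagating the per-measurement uncertainties, and the probability of detecting a diversion of a given size at a fixed false-alarm rate follows from a normal approximation. All numbers are illustrative placeholders.

    ```python
    from math import sqrt
    from statistics import NormalDist

    # Illustrative amounts (kg) and relative 1-sigma measurement uncertainties
    measurements = [
        ("input accountancy tank", 100.0, 0.003),
        ("product",                 95.0, 0.002),
        ("waste",                    4.0, 0.050),
        ("inventory",               20.0, 0.010),
    ]

    # sigma_MUF from propagation of independent measurement errors
    sigma_muf = sqrt(sum((amount * rel) ** 2 for _, amount, rel in measurements))

    # Detection probability for diverting `goal` kg, alarm threshold set for a
    # 5% false-alarm probability (one-sided test, normal approximation)
    goal = 1.0                                   # assumed diverted quantity (kg)
    z_alpha = NormalDist().inv_cdf(0.95)         # alarm threshold in sigma units
    detection_prob = 1.0 - NormalDist().cdf(z_alpha - goal / sigma_muf)

    print(f"sigma_MUF = {sigma_muf:.3f} kg, detection probability = {detection_prob:.2f}")
    ```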

  9. COMPUTATIONAL MODELING AND SIMULATION IN BIOLOGY TEACHING: A MINIMALLY EXPLORED FIELD OF STUDY WITH A LOT OF POTENTIAL

    Directory of Open Access Journals (Sweden)

    Sonia López

    2016-09-01

    Full Text Available This study is part of a research project that aims to characterize the epistemological, psychological and didactic presuppositions of science teachers (Biology, Physics, Chemistry) who implement Computational Modeling and Simulation (CMS) activities as a part of their teaching practice. We present here a synthesis of a literature review on the subject, showing how in the last two decades this form of computer use for science teaching has boomed in disciplines such as Physics and Chemistry, but to a lesser degree in Biology. Additionally, in the works that dwell on the use of CMS in Biology, we identified a lack of theoretical bases supporting their epistemological, psychological and/or didactic postures. Accordingly, this raises significant considerations for the fields of research and teacher instruction in Science Education.

  10. The fragrance hand immersion study - an experimental model simulating real-life exposure for allergic contact dermatitis on the hands

    DEFF Research Database (Denmark)

    Heydorn, S; Menné, T; Andersen, K E

    2003-01-01

    previously diagnosed with hand eczema to explore whether immersion of fingers in a solution with or without the patch-test-positive fragrance allergen would cause or exacerbate hand eczema on the exposed finger. The study was double blinded and randomized. All participants had a positive patch test to either...... hydroxycitronellal or Lyral (hydroxyisohexyl 3-cyclohexene carboxaldehyde). Each participant immersed a finger from each hand, once a day, in a solution containing the fragrance allergen or placebo. During the first 2 weeks, the concentration of fragrance allergen in the solution was low (approximately 10 p...... meter. 3 of 15 hand eczema patients developed eczema on the finger immersed in the fragrance-containing solution, 3 of 15 on the placebo finger and 3 of 15 on both fingers. Using this experimental exposure model simulating real-life exposure, we found no association between immersion of a finger...

  11. A detached eddy simulation model for the study of lateral separation zones along a large canyon-bound river

    Science.gov (United States)

    Alvarez, Laura V.; Schmeeckle, Mark W.; Grams, Paul E.

    2017-01-01

    Lateral flow separation occurs in rivers where banks exhibit strong curvature. In canyon-bound rivers, lateral recirculation zones are the principal storage of fine-sediment deposits. A parallelized, three-dimensional, turbulence-resolving model was developed to study the flow structures along lateral separation zones located in two pools along the Colorado River in Marble Canyon. The model employs the detached eddy simulation (DES) technique, which resolves turbulence structures larger than the grid spacing in the interior of the flow. The DES-3D model is validated using Acoustic Doppler Current Profiler flow measurements taken during the 2008 controlled flood release from Glen Canyon Dam. A point-to-point validation using a number of skill metrics, often employed in hydrological research, is proposed here for fluvial modeling. The validation results show predictive capabilities of the DES model. The model reproduces the pattern and magnitude of the velocity in the lateral recirculation zone, including the size and position of the primary and secondary eddy cells, and return current. The lateral recirculation zone is open, having continuous import of fluid upstream of the point of reattachment and export by the recirculation return current downstream of the point of separation. Differences in magnitude and direction of near-bed and near-surface velocity vectors are found, resulting in an inward vertical spiral. Interaction between the recirculation return current and the main flow is dynamic, with large temporal changes in flow direction and magnitude. Turbulence structures with a predominately vertical axis of vorticity are observed in the shear layer, becoming three-dimensional without preferred orientation downstream.
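
    A minimal sketch of the kind of point-to-point skill metrics mentioned above (bias, RMSE and Nash-Sutcliffe efficiency) for comparing modeled and measured velocities; the specific metric set used in the paper may differ, and the sample values below are illustrative.

    ```python
    import numpy as np

    def skill_metrics(observed, modeled):
        """Point-to-point skill metrics for model validation."""
        obs, mod = np.asarray(observed, float), np.asarray(modeled, float)
        bias = np.mean(mod - obs)
        rmse = np.sqrt(np.mean((mod - obs) ** 2))
        # Nash-Sutcliffe efficiency: 1 = perfect, 0 = no better than the observed mean
        nse = 1.0 - np.sum((mod - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
        return {"bias": bias, "rmse": rmse, "nse": nse}

    # Example with ADCP-like velocity samples (m/s); values are illustrative
    obs = np.array([0.42, 0.35, 0.10, -0.05, -0.12, 0.08])
    mod = np.array([0.40, 0.30, 0.14, -0.02, -0.15, 0.05])
    print(skill_metrics(obs, mod))
    ```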

  12. Assessment of Molecular Modeling & Simulation

    Energy Technology Data Exchange (ETDEWEB)

    None

    2002-01-03

    This report reviews the development and applications of molecular and materials modeling in Europe and Japan in comparison to those in the United States. Topics covered include computational quantum chemistry, molecular simulations by molecular dynamics and Monte Carlo methods, mesoscale modeling of material domains, molecular-structure/macroscale property correlations like QSARs and QSPRs, and related information technologies like informatics and special-purpose molecular-modeling computers. The panel's findings include the following: The United States leads this field in many scientific areas. However, Canada has particular strengths in DFT methods and homogeneous catalysis; Europe in heterogeneous catalysis, mesoscale, and materials modeling; and Japan in materials modeling and special-purpose computing. Major government-industry initiatives are underway in Europe and Japan, notably in multi-scale materials modeling and in development of chemistry-capable ab-initio molecular dynamics codes.

  13. NRTA simulation by modeling PFPF

    International Nuclear Information System (INIS)

    Asano, Takashi; Fujiwara, Shigeo; Takahashi, Saburo; Shibata, Junichi; Totsu, Noriko

    2003-01-01

    In PFPF, the NRTA system has been applied since 1991. It has been confirmed, by evaluating the facility material accountancy data provided by the operator at each IIV, that no significant MUF was generated. For throughput at the PFPF scale, MUF can be evaluated with a sufficient detection probability by the present NRTA evaluation method. However, as throughput increases, the uncertainty of material accountancy will increase and the detection probability will decline. The relationship between increasing throughput and declining detection probability, as well as the maximum throughput for which a sufficient detection probability can be maintained when the following measures are applied, were evaluated by simulation of the NRTA system. This simulation was performed by modeling PFPF. The measures for increasing the detection probability are: shortening of the evaluation interval, and segmentation of the evaluation area. This report shows the results of these simulations. (author)

  14. Microcomputer simulation model for facility performance assessment: a case study of nuclear spent fuel handling facility operations

    International Nuclear Information System (INIS)

    Chockie, A.D.; Hostick, C.J.; Otis, P.T.

    1985-10-01

    A microcomputer-based simulation model was recently developed at the Pacific Northwest Laboratory (PNL) to assist in the evaluation of design alternatives for a proposed facility to receive, consolidate and store nuclear spent fuel from US commercial power plants. Previous performance assessments were limited to deterministic calculations and Gantt chart representations of the facility operations. To ensure that the design of the facility will be adequate to meet the specified throughput requirements, the simulation model was used to analyze such factors as material flow, equipment capability and the interface between the MRS facility and the nuclear waste transportation system. The simulation analysis model was based on commercially available software and application programs designed to represent the MRS waste handling facility operations. The results of the evaluation were used by the design review team at PNL to identify areas where design modifications should be considered. 4 figs
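
    A minimal discrete-event sketch of the kind of throughput question such a model answers, using a simple event-calendar loop: casks arrive, queue for a single consolidation station, and are then stored; the outputs are annual throughput and average queue time. The process structure, service times and names are illustrative assumptions, not the PNL model.

    ```python
    import heapq, random

    def simulate_facility(sim_hours=8760, mean_arrival_h=18.0, service_h=12.0, seed=7):
        """Single-server discrete-event model: cask arrivals -> consolidation -> storage."""
        random.seed(seed)
        events = [(random.expovariate(1.0 / mean_arrival_h), "arrival")]
        queue, busy_until, waits, completed = [], 0.0, [], 0
        while events:
            t, kind = heapq.heappop(events)
            if t > sim_hours:
                break
            if kind == "arrival":
                queue.append(t)
                heapq.heappush(events, (t + random.expovariate(1.0 / mean_arrival_h), "arrival"))
            # start consolidation whenever the station is free and a cask is waiting
            if queue and t >= busy_until:
                arrived = queue.pop(0)
                waits.append(t - arrived)
                busy_until = t + service_h
                heapq.heappush(events, (busy_until, "done"))
            if kind == "done":
                completed += 1
        return completed, sum(waits) / len(waits) if waits else 0.0

    done, avg_wait = simulate_facility()
    print(f"casks consolidated per year: {done}, average queue time: {avg_wait:.1f} h")
    ```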

  15. North Atlantic Coast Comprehensive Study (NACCS) Coastal Storm Model Simulations: Waves and Water Levels

    Science.gov (United States)

    2015-08-01

    This report (ERDC/CHL TR-15-14) documents coastal storm model simulations of waves and water levels. Model parameters examined in the study include (1) the Manning's n bottom friction coefficient, (2) the lateral eddy viscosity, and (3) land cover effects on winds. Bottom friction values were assigned by bottom type (sand, gravel, clay, etc.), following the classifications published in the NGA's DNCs.

  16. Study of Z' {yields} e{sup +}e{sup -} in full simulation with regard to discrimination between models beyond the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Schafer, M

    2004-09-01

    Although experimental results so far agree with predictions of the standard model, it is widely felt to be incomplete. Many prospective theories beyond the standard model predict extra neutral gauge bosons, denoted by Z', which might be light enough to be accessible at the LHC. Observables sensitive to the properties of these extra gauge bosons might be used to discriminate between the different theories beyond the standard model. In the present work several of these observables (total decay width, leptonic cross-section and forward-backward asymmetries) are studied at generation level and with a full simulation in the ATLAS detector. The Z' {yields} e{sup +}e{sup -} decay channel was chosen and 2 values for the mass of Z': 1.5 TeV and 4 TeV. Background is studied as well and it is confirmed that a Z' boson could easily be discovered at the chosen masses. It is shown that even in full simulation the studied observables can be determined with a good precision. In a next step a discrimination strategy has to be developed given the presented methods to extract the variables and their precision. (author)

  18. Modelling and Simulation of the SVC for Power System Flow Studies: Electrical Network in voltage drop

    Directory of Open Access Journals (Sweden)

    Narimen Aouzellag LAHAÇANI

    2008-12-01

    Full Text Available The goal of any Flexible AC Transmission Systems (FACTS) devices study is to measure their impact on the state of the electrical networks into which they are introduced. Their principal function is to improve the static and dynamic properties of electrical networks, by increasing the margins of static and dynamic stability and by allowing power to be transmitted up to the thermal limits of the lines. To study this impact, it is necessary to establish the state of the network (bus voltages and angles, powers injected and flowing in the lines) before and after the introduction of FACTS devices. This requires calculating the power flows using an iterative method such as Newton-Raphson. Undertaking a calculation without the introduction of FACTS devices, followed by a calculation with the modifications induced by the integration of FACTS devices into the network, makes it possible to compare the results obtained in both cases and thus assess the benefit of using FACTS devices.
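
    A minimal sketch of the Newton-Raphson power-flow calculation referred to above, for a two-bus test system (slack bus plus one PQ load bus) with a numerically evaluated Jacobian. The network data and flat-start values are illustrative, and a study of FACTS/SVC devices would add their injection models on top of this basic solver.

    ```python
    import numpy as np

    # Two-bus test system: bus 1 = slack (1.0 pu, 0 rad), bus 2 = PQ load bus.
    # Line series admittance y = 1/(R + jX); shunt elements neglected.
    y = 1.0 / complex(0.02, 0.10)
    Y = np.array([[ y, -y],
                  [-y,  y]])                      # bus admittance matrix
    P2_spec, Q2_spec = -0.8, -0.4                 # specified injections at bus 2 (load)

    def mismatch(x):
        th2, v2 = x
        V = np.array([1.0 + 0j, v2 * np.exp(1j * th2)])
        S = V * np.conj(Y @ V)                    # complex power injections
        return np.array([S[1].real - P2_spec, S[1].imag - Q2_spec])

    x = np.array([0.0, 1.0])                      # flat start: angle 0, voltage 1.0 pu
    for it in range(20):
        f = mismatch(x)
        if np.max(np.abs(f)) < 1e-8:
            break
        J = np.empty((2, 2))                      # numerical Jacobian (forward differences)
        for j in range(2):
            dx = np.zeros(2); dx[j] = 1e-6
            J[:, j] = (mismatch(x + dx) - f) / 1e-6
        x = x - np.linalg.solve(J, f)             # Newton-Raphson update

    print(f"converged in {it} iterations: theta2 = {x[0]:.4f} rad, V2 = {x[1]:.4f} pu")
    ```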

  19. Repository simulation model: Final report

    International Nuclear Information System (INIS)

    1988-03-01

    This report documents the application of computer simulation for the design analysis of the nuclear waste repository's waste handling and packaging operations. The Salt Repository Simulation Model was used to evaluate design alternatives during the conceptual design phase of the Salt Repository Project. Code development and verification was performed by the Office of Nuclear Waste Isolation (ONWL). The focus of this report is to relate the experience gained during the development and application of the Salt Repository Simulation Model to future repository design phases. Design of the repository's waste handling and packaging systems will require sophisticated analysis tools to evaluate complex operational and logistical design alternatives. Selection of these design alternatives in the Advanced Conceptual Design (ACD) and License Application Design (LAD) phases must be supported by analysis to demonstrate that the repository design will cost effectively meet DOE's mandated emplacement schedule and that uncertainties in the performance of the repository's systems have been objectively evaluated. Computer simulation of repository operations will provide future repository designers with data and insights that no other analytical form of analysis can provide. 6 refs., 10 figs

  20. Simulating spin models on GPU

    Science.gov (United States)

    Weigel, Martin

    2011-09-01

    Over the last couple of years it has been realized that the vast computational power of graphics processing units (GPUs) could be harvested for purposes other than the video game industry. This power, which at least nominally exceeds that of current CPUs by large factors, results from the relative simplicity of the GPU architectures as compared to CPUs, combined with a large number of parallel processing units on a single chip. To benefit from this setup for general computing purposes, the problems at hand need to be prepared in a way to profit from the inherent parallelism and hierarchical structure of memory accesses. In this contribution I discuss the performance potential for simulating spin models, such as the Ising model, on GPU as compared to conventional simulations on CPU.
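
    A minimal single-spin-flip Metropolis sketch for the 2D Ising model, written with a checkerboard (two-sublattice) update because that is the decomposition typically used to expose the parallelism a GPU needs; here it runs on the CPU with NumPy, and the lattice size, temperature and sweep count are illustrative.

    ```python
    import numpy as np

    def metropolis_checkerboard(L=64, T=2.27, sweeps=200, seed=0):
        """2D Ising model with checkerboard Metropolis updates (CPU/NumPy sketch)."""
        rng = np.random.default_rng(seed)
        s = rng.choice([-1, 1], size=(L, L))
        beta = 1.0 / T
        # Checkerboard masks: same-color sites share no bonds, so they can be
        # updated simultaneously (the source of GPU parallelism).
        color = (np.add.outer(np.arange(L), np.arange(L)) % 2).astype(bool)
        for _ in range(sweeps):
            for mask in (color, ~color):
                nn = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
                      np.roll(s, 1, 1) + np.roll(s, -1, 1))
                dE = 2.0 * s * nn                          # energy cost of flipping each spin
                accept = rng.random((L, L)) < np.exp(-beta * dE)
                s = np.where(mask & accept, -s, s)
        return s

    lattice = metropolis_checkerboard()
    print("magnetization per spin:", abs(lattice.mean()))
    ```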

  1. Standard for Models and Simulations

    Science.gov (United States)

    Steele, Martin J.

    2016-01-01

    This NASA Technical Standard establishes uniform practices in modeling and simulation to ensure essential requirements are applied to the design, development, and use of models and simulations (M&S), while ensuring acceptance criteria are defined by the program/project and approved by the responsible Technical Authority. It also provides an approved set of requirements, recommendations, and criteria with which M&S may be developed, accepted, and used in support of NASA activities. As the M&S disciplines employed and application areas involved are broad, the common aspects of M&S across all NASA activities are addressed. The discipline-specific details of a given M&S should be obtained from relevant recommended practices. The primary purpose is to reduce the risks associated with M&S-influenced decisions by ensuring the complete communication of the credibility of M&S results.

  2. Simulating Effects of Long Term Use of Wastewater on Farmers Health Using System Dynamics Modeling (Case Study: Varamin Plain

    Directory of Open Access Journals (Sweden)

    Hamzehali Alizadeh

    2017-06-01

    Full Text Available Introduction: Agricultural activity in the Varamin plain has faced many challenges in recent years due to its vicinity to Tehran, the capital of Iran (competition for the Latian dam reservoir, and competition with the Tehran south network in the allocation of the Mamlou dam reservoir and the treated wastewater of the south wastewater treatment plant). The Mamlou and Latian dam reservoirs, due to the growth of the population and industry sectors, have been allocated to urban use in Tehran. Based on national policy, treated wastewater should replace the Latian dam reservoir water to supply the water demand of the agricultural sector. High-volume transmission of wastewater to the Varamin plain will have economic, environmental, and social effects. Several factors affect wastewater management and the success of utilization plans, and any change in these factors may feed back on the other elements of the wastewater use system. Hence, the development of a model capable of simulating all the factors, aspects and interactions that affect wastewater utilization is necessary. The main objective of the present study was the development of an integrated water model to study the long-term effects of irrigation with Tehran treated wastewater, using the system dynamics (SD) modeling approach. Materials and Methods: The Varamin Plain is one of the most important agricultural production centers of the country due to its nearness to the large consumer market of Tehran, its fertile soil and its agricultural knowledge. The total irrigated agricultural land in the Varamin Plain is 53486 hectares, comprising 17274 hectares of barley, 16926 hectares of wheat, 3866 hectares of tomato, 3521 hectares of vegetables, 3556 hectares of alfalfa, 2518 hectares of silage maize, 1771 hectares of melon, 1642 hectares of cotton, 1121 hectares of cucumber and 1291 hectares of other crops. In 2006 the irrigation requirement of the crop pattern was about 690 MCM and the actual agricultural water consumption was about 620 MCM

  3. Simulation modeling and analysis in safety. II

    International Nuclear Information System (INIS)

    Ayoub, M.A.

    1981-01-01

    The paper introduces and illustrates simulation modeling as a viable approach for dealing with complex issues and decisions in safety and health. The author details two studies: evaluation of employee exposure to airborne radioactive materials and effectiveness of the safety organization. The first study seeks to define a policy to manage a facility used in testing employees for radiation contamination. An acceptable policy is one that would permit the testing of all employees as defined under regulatory requirements, while not exceeding available resources. The second study evaluates the relationship between safety performance and the characteristics of the organization, its management, its policy, and communication patterns among various functions and levels. Both studies use models where decisions are reached based on the prevailing conditions and occurrence of key events within the simulation environment. Finally, several problem areas suitable for simulation studies are highlighted. (Auth.)

  4. A practical laboratory study simulating the percutaneous lumbar transforaminal epidural injection: training model in fresh cadaveric sheep spine.

    Science.gov (United States)

    Suslu, Husnu

    2012-01-01

    Laboratory training models are essential for developing and refining treatment skills before the clinical application of surgical and invasive procedures. A simple simulation model is needed for young trainees to learn how to handle instruments and to perform safe lumbar transforaminal epidural injections. Our aim is to present a model of a fresh cadaveric sheep lumbar spine that simulates the lumbar transforaminal epidural injection. The material consists of a 2-year-old fresh cadaveric sheep spine. A 4-step approach was designed for lumbar transforaminal epidural injection under C-arm fluoroscopy. For the lumbar transforaminal epidural injection, the fluoroscope was adjusted to obtain a proper oblique view while the specimen was stabilized in a prone position. The procedure then began under C-arm guidance. The model simulates the steps of standard lumbar transforaminal epidural injections in the human spine well. The cadaveric sheep spine represents a good method for training, and it simulates the fluoroscopic lumbar transforaminal epidural steroid injection procedures performed in the human spine.

  5. Incorporating a vascular term into a reference region model for the analysis of DCE-MRI data: a simulation study

    International Nuclear Information System (INIS)

    Faranesh, A Z; Yankeelov, T E

    2008-01-01

    A vascular term was incorporated into a reference region (RR) model analysis of DCE-MRI data, and its effect on the accuracy of the model in estimating tissue kinetic parameters in a tissue of interest (TOI) was systematically investigated through computer simulations. Errors in the TOI volume transfer constant (Ktrans,TOI) and the TOI extravascular extracellular volume (ve,TOI) that result when the fractional plasma volume (vp) was included in (1) neither region, (2) the TOI only, or (3) both regions were investigated. For nominal values of the tumor kinetic parameters (ve,TOI = 0.40 and Ktrans,TOI = 0.25 min^-1), if the vascular term was included in neither region or in the TOI only, the Ktrans,TOI error was within 20% only for vp,TOI below 0.03, whereas the ve,TOI error was within 20% over the range of vp,TOI studied (0.01-0.10). The effects of temporal resolution were shown to be complex, and in some cases errors increased with increasing temporal resolution.

  6. Molecular dynamics simulation study of PTP1B with allosteric inhibitor and its application in receptor based pharmacophore modeling

    Science.gov (United States)

    Bharatham, Kavitha; Bharatham, Nagakumar; Kwon, Yong Jung; Lee, Keun Woo

    2008-12-01

    Allosteric inhibition of protein tyrosine phosphatase 1B (PTP1B) has paved a new path to design specific inhibitors for PTP1B, which is an important drug target for the treatment of type II diabetes and obesity. The crystal structure of the PTP1B1-282-allosteric inhibitor complex lacks α7 (287-298), and moreover there is no available 3D structure of PTP1B1-298 in the open form. As the interaction between the α7 and α6-α3 helices plays a crucial role in allosteric inhibition, α7 was modeled onto PTP1B1-282 in the open form complexed with an allosteric inhibitor (compound-2), and a 5 ns MD simulation was performed to investigate the relative orientation of the α7-α6-α3 helices. The simulation conformational space was statistically sampled by clustering analyses. This approach helped reveal certain clues about PTP1B allosteric inhibition. The simulation was also utilized in the generation of receptor-based pharmacophore models to include the conformational flexibility of the protein-inhibitor complex. Three cluster-representative structures of the most highly populated clusters were selected for pharmacophore model generation. The three pharmacophore models were subsequently utilized for screening databases to retrieve molecules containing features that complement the allosteric site. The retrieved hits were filtered based on certain drug-like properties, and molecular docking simulations were performed in two different conformations of the protein. Thus, performing an MD simulation with α7 to investigate the changes at the allosteric site, then developing receptor-based pharmacophore models and finally docking the retrieved hits into two distinct conformations constitutes a reliable methodology for identifying PTP1B allosteric inhibitors.

  7. Verifying and Validating Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state-of-knowledge in the discipline considered. In particular comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack-of-knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.
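
    A minimal sketch of the statistical-sampling idea mentioned above for propagating variability and randomness: inputs are drawn from assumed distributions, pushed through the model, and the spread of the prediction is summarized. The model function and input distributions are illustrative placeholders only.

    ```python
    import numpy as np

    def model(E, load, length=1.0, I=1e-6):
        """Toy deterministic model: tip deflection of a cantilever beam, d = P L^3 / (3 E I)."""
        return load * length**3 / (3.0 * E * I)

    rng = np.random.default_rng(0)
    n = 10_000
    # Input variability (assumed distributions, not measured data)
    E_samples    = rng.normal(200e9, 10e9, n)      # Young's modulus [Pa]
    load_samples = rng.normal(1_000.0, 100.0, n)   # applied load [N]

    predictions = model(E_samples, load_samples)
    mean, std = predictions.mean(), predictions.std()
    lo, hi = np.percentile(predictions, [2.5, 97.5])
    print(f"mean = {mean:.3e} m, std = {std:.3e} m, 95% interval = [{lo:.3e}, {hi:.3e}] m")
    ```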

  8. Advances in Intelligent Modelling and Simulation Simulation Tools and Applications

    CERN Document Server

    Oplatková, Zuzana; Carvalho, Marco; Kisiel-Dorohinicki, Marek

    2012-01-01

    The human capacity to abstract complex systems and phenomena into simplified models has played a critical role in the rapid evolution of our modern industrial processes and scientific research. As a science and an art, Modelling and Simulation have been one of the core enablers of this remarkable human trace, and have become a topic of great importance for researchers and practitioners. This book was created to compile some of the most recent concepts, advances, challenges and ideas associated with Intelligent Modelling and Simulation frameworks, tools and applications. The first chapter discusses the important aspects of a human interaction and the correct interpretation of results during simulations. The second chapter gets to the heart of the analysis of entrepreneurship by means of agent-based modelling and simulations. The following three chapters bring together the central theme of simulation frameworks, first describing an agent-based simulation framework, then a simulator for electrical machines, and...

  9. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, Kim; Condra, Thomas Joseph; Houbak, Niels

    2004-01-01

    In the present work a framework for optimizing the design of boilers for dynamic operation has been developed. A cost function to be minimized during the optimization has been formulated and for the present design variables related to the Boiler Volume and the Boiler load Gradient (i.e. firing rate...... on the boiler) have been defined. Furthermore a number of constraints related to: minimum and maximum boiler load gradient, minimum boiler size, Shrinking and Swelling and Steam Space Load have been defined. For defining the constraints related to the required boiler volume a dynamic model for simulating the boiler...... performance has been developed. Outputs from the simulations are shrinking and swelling of water level in the drum during for example a start-up of the boiler, these figures combined with the requirements with respect to allowable water level fluctuations in the drum defines the requirements with respect to drum...

  10. Molecular Dynamics Studies of Liposomes as Carriers for Photosensitizing Drugs: Development, Validation, and Simulations with a Coarse-Grained Model.

    Science.gov (United States)

    Jämbeck, Joakim P M; Eriksson, Emma S E; Laaksonen, Aatto; Lyubartsev, Alexander P; Eriksson, Leif A

    2014-01-14

    Liposomes are proposed as drug delivery systems and can in principle be designed so as to cohere with specific tissue types or local environments. However, little detail is known about the exact mechanisms for drug delivery and the distributions of drug molecules inside the lipid carrier. In the current work, a coarse-grained (CG) liposome model is developed, consisting of over 2500 lipids, with varying degrees of drug loading. For the drug molecule, we chose hypericin, a natural compound proposed for use in photodynamic therapy, for which a CG model was derived and benchmarked against corresponding atomistic membrane bilayer model simulations. Liposomes with 21-84 hypericin molecules were generated and subjected to 10 microsecond simulations. Distribution of the hypericins, their orientations within the lipid bilayer, and the potential of mean force for transferring a hypericin molecule from the interior aqueous "droplet" through the liposome bilayer are reported herein.

  11. SEMI Modeling and Simulation Roadmap

    Energy Technology Data Exchange (ETDEWEB)

    Hermina, W.L.

    2000-10-02

    With the exponential growth in the power of computing hardware and software, modeling and simulation is becoming a key enabler for the rapid design of reliable Microsystems. One vision of the future microsystem design process would include the following primary software capabilities: (1) The development of 3D part design, through standard CAD packages, with automatic design rule checks that guarantee the manufacturability and performance of the microsystem. (2) Automatic mesh generation, for 3D parts as manufactured, that permits computational simulation of the process steps, and the performance and reliability analysis for the final microsystem. (3) Computer generated 2D layouts for process steps that utilize detailed process models to generate the layout and process parameter recipe required to achieve the desired 3D part. (4) Science-based computational tools that can simulate the process physics, and the coupled thermal, fluid, structural, solid mechanics, electromagnetic and material response governing the performance and reliability of the microsystem. (5) Visualization software that permits the rapid visualization of 3D parts including cross-sectional maps, performance and reliability analysis results, and process simulation results. In addition to these desired software capabilities, a desired computing infrastructure would include massively parallel computers that enable rapid high-fidelity analysis, coupled with networked compute servers that permit computing at a distance. We now discuss the individual computational components that are required to achieve this vision. There are three primary areas of focus: design capabilities, science-based capabilities and computing infrastructure. Within each of these areas, there are several key capability requirements.

  12. Effects of oxygen and ethanol on recombinant yeast fermentation for hepatitis B virus surface antigen production: modeling and simulation studies.

    Science.gov (United States)

    Shi, Y; Ryu, D D; Yuan, W K

    1993-01-05

    A model was formulated to examine the competitive growth of two phenotypes (Leu(+) and Leu(-)) and the product formation with recombinant Saccharomyces cerevisiae strain DBY-745, which contains the shuttle vector pYGH3-16-s with the foreign gene HBsAg (hepatitis B virus surface antigen), together with experimental fed-batch fermentation data. The important state variables and process parameters evaluated include (1) the ratio of the plasmid-free cell concentration to the plasmid-containing cell concentration (rho = X(-)/X(+)), (2) the expression of human hepatitis B surface antigen g (CH), (3) the glucose consumption (S), (4) the ethanol production (/), (5) the change of working volume (V) in the fermentor, (6) the different specific growth rates of the two phenotype cells, and (7) the plasmid loss frequency coefficient (alpha). These variables and other parameters were carefully defined, their correlations were studied, and a mathematical model using a set of nonlinear ordinary differential equations (ODEs) for fed-batch fermentation was then obtained based on theoretical considerations and the experimental results. The extended Kalman filter (EKF) method was applied for the best estimate of these variables based on the experimentally observable variables: rho, V, and g (CH). Each of these variables was affected by random measuring errors under the different operating conditions. Simulation results presented for verification of the model agreed with our observations and provided useful information relevant to the operation and control of the fed-batch recombinant yeast fermentation. The method of predicting an optimal profile of the cell growth was also demonstrated under different dissolved oxygen concentrations.
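
    A minimal extended Kalman filter sketch of the estimation step described above, for a generic nonlinear state-space model with numerically linearized Jacobians. The fermentation-specific ODEs of the paper are not reproduced here, so the placeholder dynamics, noise covariances and state names below are illustrative assumptions.

    ```python
    import numpy as np

    def numerical_jacobian(f, x, eps=1e-6):
        J = np.zeros((f(x).size, x.size))
        for j in range(x.size):
            dx = np.zeros(x.size); dx[j] = eps
            J[:, j] = (f(x + dx) - f(x)) / eps
        return J

    def ekf_step(x, P, z, f, h, Q, R):
        """One predict/update cycle of the extended Kalman filter."""
        F = numerical_jacobian(f, x)                   # predict
        x_pred, P_pred = f(x), F @ P @ F.T + Q
        H = numerical_jacobian(h, x_pred)              # update with measurement z
        y = z - h(x_pred)                              # innovation
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
        return x_pred + K @ y, (np.eye(x.size) - K @ H) @ P_pred

    # Placeholder dynamics: logistic biomass growth and substrate consumption;
    # the measurement is biomass concentration plus noise.
    dt = 0.1
    f = lambda x: np.array([x[0] + dt * 0.4 * x[0] * (1 - x[0] / 10.0),
                            x[1] - dt * 0.2 * x[0]])
    h = lambda x: np.array([x[0]])
    Q, R = np.diag([1e-4, 1e-4]), np.array([[1e-2]])

    x, P = np.array([0.5, 8.0]), np.eye(2) * 0.1
    z = np.array([0.62])                               # one synthetic measurement
    x, P = ekf_step(x, P, z, f, h, Q, R)
    print("updated state estimate:", x)
    ```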

  13. Analyzing Strategic Business Rules through Simulation Modeling

    Science.gov (United States)

    Orta, Elena; Ruiz, Mercedes; Toro, Miguel

    Service Oriented Architecture (SOA) holds promise for business agility since it allows business processes to change to meet new customer demands or market needs without causing a cascade of changes in the underlying IT systems. Business rules are the instrument chosen to help business and IT collaborate. In this paper, we propose the use of simulation models to model and simulate strategic business rules that are then disaggregated at the different levels of an SOA architecture. Our proposal aims to help find a good configuration for strategic business objectives and IT parameters. The paper includes a case study in which a simulation model is built to support business decision-making in a context where finding a good configuration of business parameters and assessing the resulting performance is too complex to analyze by trial and error.

  14. Simulated annealing model of acupuncture

    Science.gov (United States)

    Shang, Charles; Szu, Harold

    2015-05-01

    The growth control singularity model suggests that acupuncture points (acupoints) originate from organizers in embryogenesis. Organizers are singular points in growth control. Acupuncture can cause perturbation of a system with effects similar to simulated annealing. In a clinical trial, the goal of a treatment is to relieve a certain disorder, which corresponds to reaching a certain local optimum in simulated annealing. The self-organizing effect of the system is limited and related to the person's general health and age. Perturbation at acupoints can lead to stronger local excitation (analogous to a higher annealing temperature) than perturbation at non-singular points (placebo control points). This difference diminishes as the number of perturbed points increases, because the limited self-organizing activity is distributed more widely. This model explains the following facts from systematic reviews of acupuncture trials: (1) properly chosen single-acupoint treatment for certain disorders can lead to highly repeatable efficacy above placebo; (2) when multiple acupoints are used, the result can be highly repeatable if the patients are relatively healthy and young, but is usually mixed if the patients are old, frail and have multiple concurrent disorders, as the number of local optima or comorbidities increases; (3) as the number of acupoints used increases, the efficacy difference between sham and real acupuncture often diminishes. The model predicts that the efficacy of acupuncture is negatively correlated with disease chronicity, severity and the patient's age. This is the first biological-physical model of acupuncture that can predict and guide clinical acupuncture research.
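
    For readers unfamiliar with the physical analogy invoked above, the following minimal, generic simulated annealing sketch shows how a higher "temperature" produces stronger perturbations and how the accepted state settles into a local optimum as the temperature decreases. The objective function and cooling schedule are arbitrary illustrative choices, not part of the acupuncture model.

```python
# Generic simulated annealing sketch illustrating the analogy used above:
# a stronger perturbation (higher "temperature") lets the system escape poor
# local optima, and the accepted state settles into an optimum as T decreases.
import math
import random

def objective(x):
    # A rugged 1-D landscape with many local minima (illustrative only).
    return x**2 + 10.0 * math.sin(3.0 * x)

def simulated_annealing(x0, T0=5.0, cooling=0.95, steps=2000):
    x, fx = x0, objective(x0)
    T = T0
    for _ in range(steps):
        # Perturbation size scales with temperature (stronger excitation when hot).
        candidate = x + random.gauss(0.0, T)
        fc = objective(candidate)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if fc < fx or random.random() < math.exp(-(fc - fx) / T):
            x, fx = candidate, fc
        T *= cooling
    return x, fx

best_x, best_f = simulated_annealing(x0=8.0)
print(f"settled near x = {best_x:.3f}, objective = {best_f:.3f}")
```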

  15. A comparison study between observations and simulation results of Barghouthi model for O+ and H+ outflows in the polar wind

    Directory of Open Access Journals (Sweden)

    I. A. Barghouthi

    2011-11-01

    Full Text Available To advance our understanding of the effect of wave-particle interactions on ion outflows in the polar wind region and the resulting ion heating and escape from low altitudes to higher altitudes, we carried out a comparison between polar wind simulations obtained using the Barghouthi model and corresponding observations obtained from different satellites. The Barghouthi model describes O+ and H+ outflows in the polar wind region in the range 1.7 RE to 13.7 RE, including the effects of gravity, the polarization electrostatic field, diverging geomagnetic field lines, and wave-particle interactions. Wave-particle interactions were included in the model by using a particle diffusion equation, which depends on diffusion coefficients determined from estimates of the typical electric field spectral density at relevant altitudes and frequencies. We provide a formula for the velocity diffusion coefficient that depends on altitude and velocity, in which the velocity part depends on the perpendicular wavelength of the electromagnetic turbulence λ⊥. Because of the shortage of information about λ⊥, it was included in the model as a parameter. We produce different simulations (i.e. ion velocity distributions, ion density, ion drift velocity, and ion parallel and perpendicular temperatures) for O+ and H+ ions and for different λ⊥. We discuss the simulations in terms of wave-particle interactions, perpendicular adiabatic cooling, parallel adiabatic cooling, the mirror force, and ion potential energy. The main findings of the simulations are as follows: (1) O+ ions are highly energized at all altitudes in the simulation tube due to wave-particle interactions that heat the ions in the perpendicular direction, and part of this gained energy is transferred to the parallel direction by the mirror force, accelerating O+ ions along geomagnetic field lines from lower altitudes to higher altitudes. (2) The effect of wave-particle interactions is negligible for H

  16. Simulation of regionally ecological land based on a cellular automation model: a case study of Beijing, China.

    Science.gov (United States)

    Xie, Hualin; Kung, Chih-Chun; Zhang, Yanting; Li, Xiubin

    2012-08-01

    Ecological land is like the "liver" of a city and is very useful to public health. Ecological land change is a spatially dynamic non-linear process under the interaction between natural and anthropogenic factors at different scales. In this study, by setting up a natural development scenario, an object orientation scenario and an ecosystem priority scenario, a cellular automaton (CA) model has been established to simulate the evolution pattern of ecological land in Beijing in the year 2020. Under the natural development scenario, most of the ecological land will be replaced by construction land and cropland. But under the object orientation and ecosystem priority scenarios, the ecological land area will increase, especially under the ecosystem priority scenario. When considering factors such as the total area of ecological land, the loss of key ecological land and the spatial pattern of land use, the scenarios rank, from best to worst, as ecosystem priority, object orientation and natural development, so future land management policies in Beijing should focus on the conversion of cropland to forest, wetland protection and the prohibition of exploitation of natural protection zones, water source areas and forest parks to maintain the safety of the regional ecosystem.
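
    A minimal sketch of the kind of neighbourhood-driven transition rule a cellular automaton land-use model applies at each step is given below. The states, transition probability and absence of scenario constraints are illustrative assumptions and do not reproduce the study's calibrated rules for Beijing.

```python
# Minimal cellular-automaton sketch of land-use change (illustrative only).
# States: 0 = ecological land, 1 = cropland, 2 = construction land.
import numpy as np

rng = np.random.default_rng(0)
grid = rng.choice([0, 1, 2], size=(100, 100), p=[0.5, 0.3, 0.2])

def step(grid, build_pressure=0.02):
    new = grid.copy()
    for i in range(1, grid.shape[0] - 1):
        for j in range(1, grid.shape[1] - 1):
            nbhd = grid[i - 1:i + 2, j - 1:j + 2]
            n_built = np.count_nonzero(nbhd == 2)
            # Ecological land adjacent to much construction tends to convert.
            if grid[i, j] == 0 and rng.random() < build_pressure * n_built:
                new[i, j] = 2
    return new

for _ in range(10):
    grid = step(grid)
print("remaining ecological cells:", np.count_nonzero(grid == 0))
```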

  17. Simulation of Regionally Ecological Land Based on a Cellular Automation Model: A Case Study of Beijing, China

    Directory of Open Access Journals (Sweden)

    Xiubin Li

    2012-08-01

    Full Text Available Ecological land is like the “liver” of a city and is very useful to public health. Ecological land change is a spatially dynamic non-linear process under the interaction between natural and anthropogenic factors at different scales. In this study, by setting up a natural development scenario, an object orientation scenario and an ecosystem priority scenario, a cellular automaton (CA) model has been established to simulate the evolution pattern of ecological land in Beijing in the year 2020. Under the natural development scenario, most of the ecological land will be replaced by construction land and cropland. But under the object orientation and ecosystem priority scenarios, the ecological land area will increase, especially under the ecosystem priority scenario. When considering factors such as the total area of ecological land, the loss of key ecological land and the spatial pattern of land use, the scenarios rank, from best to worst, as ecosystem priority, object orientation and natural development, so future land management policies in Beijing should focus on the conversion of cropland to forest, wetland protection and the prohibition of exploitation of natural protection zones, water source areas and forest parks to maintain the safety of the regional ecosystem.

  18. A model for simulating the active dispersal of juvenile sea turtles with a case study on western Pacific leatherback turtles

    Science.gov (United States)

    Lalire, Maxime

    2017-01-01

    Oceanic currents are known to broadly shape the dispersal of juvenile sea turtles during their pelagic stage. Accordingly, simple passive drift models are widely used to investigate the distribution at sea of various juvenile sea turtle populations. However, evidence is growing that juveniles do not drift purely passively but also display some swimming activity likely directed towards favorable habitats. We therefore present here a novel Sea Turtle Active Movement Model (STAMM) in which juvenile sea turtles actively disperse under the combined effects of oceanic currents and habitat-driven movements. This model applies to all sea turtle species but is calibrated here for leatherback turtles (Dermochelys coriacea). It is first tested in a simulation of the active dispersal of juveniles originating from Jamursba-Medi, a main nesting beach of the western Pacific leatherback population. Dispersal into the North Pacific Ocean is specifically investigated. Simulation results demonstrate that, while oceanic currents broadly shape the dispersal area, modeled habitat-driven movements strongly structure the spatial and temporal distribution of juveniles within this area. In particular, these movements lead juveniles to gather in the North Pacific Transition Zone (NPTZ) and to undertake seasonal north-south migrations. More surprisingly, juveniles in the NPTZ are simulated to swim mostly towards west which considerably slows down their progression towards the American west coast. This increases their residence time, and hence the risk of interactions with fisheries, in the central and eastern part of the North Pacific basin. Simulated habitat-driven movements also strongly reduce the risk of cold-induced mortality. This risk appears to be larger among the juveniles that rapidly circulate into the Kuroshio than among those that first drift into the North Equatorial Counter Current (NECC). This mechanism might induce marked interannual variability in juvenile survival as the

  19. A model for simulating the active dispersal of juvenile sea turtles with a case study on western Pacific leatherback turtles.

    Science.gov (United States)

    Gaspar, Philippe; Lalire, Maxime

    2017-01-01

    Oceanic currents are known to broadly shape the dispersal of juvenile sea turtles during their pelagic stage. Accordingly, simple passive drift models are widely used to investigate the distribution at sea of various juvenile sea turtle populations. However, evidence is growing that juveniles do not drift purely passively but also display some swimming activity likely directed towards favorable habitats. We therefore present here a novel Sea Turtle Active Movement Model (STAMM) in which juvenile sea turtles actively disperse under the combined effects of oceanic currents and habitat-driven movements. This model applies to all sea turtle species but is calibrated here for leatherback turtles (Dermochelys coriacea). It is first tested in a simulation of the active dispersal of juveniles originating from Jamursba-Medi, a main nesting beach of the western Pacific leatherback population. Dispersal into the North Pacific Ocean is specifically investigated. Simulation results demonstrate that, while oceanic currents broadly shape the dispersal area, modeled habitat-driven movements strongly structure the spatial and temporal distribution of juveniles within this area. In particular, these movements lead juveniles to gather in the North Pacific Transition Zone (NPTZ) and to undertake seasonal north-south migrations. More surprisingly, juveniles in the NPTZ are simulated to swim mostly towards west which considerably slows down their progression towards the American west coast. This increases their residence time, and hence the risk of interactions with fisheries, in the central and eastern part of the North Pacific basin. Simulated habitat-driven movements also strongly reduce the risk of cold-induced mortality. This risk appears to be larger among the juveniles that rapidly circulate into the Kuroshio than among those that first drift into the North Equatorial Counter Current (NECC). This mechanism might induce marked interannual variability in juvenile survival as the

  20. Comparative study of non-premixed and partially-premixed combustion simulations in a realistic Tay model combustor

    OpenAIRE

    Zhang, K.; Ghobadian, A.; Nouri, J. M.

    2017-01-01

    A comparative study of two combustion models, one based on a non-premixed assumption and the other on a partially premixed assumption, using the overall models of the Zimont Turbulent Flame Speed Closure method (ZTFSC) and the Extended Coherent Flamelet Method (ECFM), is conducted through Reynolds stress turbulence modelling of the Tay model gas turbine combustor for the first time. The Tay model combustor retains all essential features of a realistic gas turbine combustor. It is seen that the non-premixed combustion model fa...

  1. Impulse pumping modelling and simulation

    International Nuclear Information System (INIS)

    Pierre, B; Gudmundsson, J S

    2010-01-01

    Impulse pumping is a new pumping method based on the propagation of pressure waves. Of particular interest is the application of impulse pumping to artificial lift situations, where fluid is transported from the wellbore to the wellhead using pressure waves generated at the wellhead. The motor-driven element of an impulse pumping apparatus is therefore located at the wellhead and can be separated from the flowline. Thus the operation and maintenance of an impulse pump are facilitated. The paper describes the different elements of an impulse pumping apparatus, reviews the physical principles and details the modelling of the novel pumping method. Results from numerical simulations of the propagation of pressure waves in water-filled pipelines are then presented to illustrate the physical principles of impulse pumping and to validate the described modelling against experimental data.
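
    To illustrate the kind of pressure-wave computation referred to above, the sketch below advances the 1-D linear acoustic equations for a water-filled pipeline on a staggered finite-difference grid, with a short pressure pulse imposed at the inlet and a closed far end. The geometry, wave speed and pulse are assumptions for illustration; the paper's own modelling and experimental validation are not reproduced.

```python
# Sketch of 1-D pressure-wave propagation in a water-filled pipeline
# (linear acoustics on a staggered grid). Illustrative assumptions only.
import numpy as np

L, N = 1000.0, 500            # pipe length (m), number of cells
a, rho = 1200.0, 1000.0       # wave speed (m/s), water density (kg/m^3)
dx = L / N
dt = 0.5 * dx / a             # CFL-limited time step
p = np.zeros(N)               # pressure at cell centres (Pa, gauge)
u = np.zeros(N + 1)           # velocity at cell faces (m/s)

t = 0.0
for step in range(2000):
    # Inlet: imposed pressure pulse at the wellhead during the first 10 ms.
    p_inlet = 5e5 if t < 0.01 else 0.0
    # Update face velocities from the pressure gradient.
    u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
    u[0]    -= dt / (rho * dx) * (p[0] - p_inlet) * 2.0   # half-cell at inlet
    u[-1]    = 0.0                                        # closed far end
    # Update cell pressures from the velocity divergence.
    p -= dt * rho * a**2 / dx * (u[1:] - u[:-1])
    t += dt

print("peak pressure along the pipe (bar):", p.max() / 1e5)
```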

  2. Toward a better integration of roughness in rockfall simulations - a sensitivity study with the RockyFor3D model

    Science.gov (United States)

    Monnet, Jean-Matthieu; Bourrier, Franck; Milenkovic, Milutin

    2017-04-01

    Advances in numerical simulation and the analysis of real-size field experiments have supported the development of process-based rockfall simulation models. The availability of high-resolution remote sensing data and high-performance computing now makes it possible to implement them for operational applications, e.g. risk zoning and protection structure design. One key parameter regarding rock propagation is the surface roughness, sometimes defined as the variation in height perpendicular to the slope (Pfeiffer and Bowen, 1989). Roughness-related input parameters for rockfall models are usually determined by experts in the field. In the RockyFor3D model (Dorren, 2015), three values related to the distribution of obstacles (deposited rocks, stumps, fallen trees,... as seen from the incoming rock) relative to the average slope are estimated. The use of high-resolution digital terrain models (DTMs) calls into question both the scale usually adopted by experts for roughness assessment and the relevance of modeling hypotheses regarding the rock/ground interaction. Indeed, experts interpret the surrounding terrain as obstacles or ground depending on the overall visibility and on the nature of the objects, whereas digital models represent the terrain with a certain amount of smoothing, depending on the sensor capacities. Besides, the rock rebound on the ground is modeled by changes in the velocities of the gravity center of the block due to impact. Thus, the use of a DTM with a resolution smaller than the block size might have little relevance while increasing the computational burden. The objective of this work is to investigate the issue of scale relevance with simulations based on RockyFor3D in order to derive guidelines for roughness estimation by field experts. First, a sensitivity analysis is performed to identify the combinations of parameters (slope, soil roughness parameter, rock size) for which the roughness values have a critical effect on rock propagation on a regular hillside. Second, a more

  3. Simulation model of a PWR power plant

    International Nuclear Information System (INIS)

    Larsen, N.

    1987-03-01

    A simulation model of a hypothetical PWR power plant is described. A large number of disturbances and failures in plant function can be simulated. The model is written as seven modules for DYSIM, the modular simulation system for continuous processes, and also serves as a user example of that system. The model runs in Fortran 77 on the IBM PC-AT. (author)

  4. Simulation study of multi-step model algorithmic control of the nuclear reactor thermal power tracking system

    International Nuclear Information System (INIS)

    Shi Xiaoping; Xu Tianshu

    2001-01-01

    Classical control methods can hardly ensure thermal power tracking accuracy, because the nuclear reactor system is a complex nonlinear system with uncertain parameters and disturbances. A non-parametric model is constructed from the open-loop impulse response of the system. Furthermore, a thermal power tracking digital control law is presented using the multi-step model algorithmic control principle. The control method presented has good tracking performance and robustness, and it works despite the existence of unmeasurable disturbances. The simulation experiment verifies the correctness and effectiveness of the method. High-accuracy matching between the thermal power and the reference load is achieved.
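
    A highly simplified, one-step version of model algorithmic control built on a non-parametric impulse-response model is sketched below: the next output is predicted by convolving an identified impulse response with past inputs, a feedback offset corrects for model-plant mismatch, and the control input is solved to hit the reference. The impulse response, plant mismatch and reference profile are illustrative assumptions, not the reactor model of the study.

```python
# One-step model algorithmic control (MAC) sketch on a non-parametric
# impulse-response model. The "plant" and reference are illustrative.
import numpy as np

N = 30                                          # impulse-response horizon
h = 0.08 * np.exp(-0.1 * np.arange(1, N + 1))   # identified impulse response h_1..h_N
h_plant = h * 1.1                               # true plant differs from the model

def predict(resp, past_u):
    # Convolution of the impulse response with the most recent N inputs.
    return float(np.dot(resp, past_u[::-1][:len(resp)]))

u_hist = np.zeros(N)                            # most recent inputs, u_hist[-1] = u(k-1)
y_meas = 0.0
for k in range(200):
    r_next = 1.0 if k < 100 else 0.6            # reference thermal power (normalized)
    # Feedback correction: offset between measured and model output.
    e = y_meas - predict(h, u_hist)
    # Contribution of already-applied inputs to the next model output.
    free_response = float(np.dot(h[1:], u_hist[::-1][:N - 1]))
    u_k = (r_next - e - free_response) / h[0]
    u_hist = np.append(u_hist[1:], u_k)
    y_meas = predict(h_plant, u_hist)           # plant responds to the new input
print("tracking error at the end:", abs(y_meas - 0.6))
```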

  5. Simulation study of a magnetocardiogram based on a virtual heart model: effect of a cardiac equivalent source and a volume conductor

    International Nuclear Information System (INIS)

    Shou Guo-Fa; Xia Ling; Dai Ling; Ma Ping; Tang Fa-Kuan

    2011-01-01

    In this paper, we present a magnetocardiogram (MCG) simulation study using the boundary element method (BEM), based on a virtual heart model and a realistic human volume conductor model. The respective contributions of cardiac equivalent source models and volume conductor models to the MCG are investigated comprehensively and in depth. The single dipole source model, the multiple dipole source model and the equivalent double layer (EDL) source model are analysed and compared as cardiac equivalent source models. Meanwhile, the effect of the volume conductor model on the MCG combined with these cardiac equivalent sources is investigated. The simulation results demonstrate that part of the cardiac electrophysiological information is missed when only the single dipole source is used, while the EDL source is a good option for MCG simulation, and the effect of the volume conductor is smallest for the EDL source. Therefore, the EDL source is suitable for the study of MCG forward and inverse problems, and more attention should be paid to it in future MCG studies. (general)

  6. Numerical simulation for regional ozone concentrations: A case study by weather research and forecasting/chemistry (WRF/Chem) model

    Energy Technology Data Exchange (ETDEWEB)

    Habib Al Razi, Khandakar Md; Hiroshi, Moritomi [Environmental and Renewable Energy System, Graduate School of Engineering, Gifu University, 1-1 Yanagido, Gifu City, 501-1193 (Japan)

    2013-07-01

    The objective of this research is to better understand and predict the atmospheric concentration distributions of ozone and its precursors, in particular within the planetary boundary layer, over Kawasaki City and the Greater Tokyo Area using the fully coupled online WRF/Chem (Weather Research and Forecasting/Chemistry) model. In this research, a serious and continuous high-ozone episode in the Greater Tokyo Area (GTA) during 14–18 August 2010 was investigated using the observation data. We analyzed the ozone and other trace gas concentrations, as well as the corresponding weather conditions during this high-ozone episode, with the WRF/Chem model. The simulation results revealed that the analyzed episode was mainly caused by the accumulation of ozone-rich pollution over the Greater Tokyo Area. WRF/Chem showed relatively good performance in modeling this continuous high-ozone episode: the simulated and observed concentrations of ozone, NOx and NO2 are basically in agreement at Kawasaki City, with best correlation coefficients of 0.87, 0.70 and 0.72, respectively. Moreover, the WRF/Chem simulations with the WRF preprocessing software (WPS) show better agreement with meteorological observations such as surface winds and temperature profiles at ground level in this area. As a result, the surface ozone simulation performance was enhanced in terms of peak ozone and spatial patterns, and WRF/Chem succeeded in generating the meteorological fields as well as ozone, NOx, NO2 and NO.

  7. Randomized Crossover Study of Training Benefits of High Fidelity ECMO Simulation versus Porcine Animal Model An Interim Report

    Science.gov (United States)

    2017-02-25

    59 MDW/SGVU SUBJECT: Professional Presentation Approval, 24 FEB 2017. 1. Your paper, entitled Randomized Crossover Study of Training Benefits of...have been the gold standard for ECMO training due to their ability to replicate complex physiology and anatomic variation. Recently ECMO simulation

  8. Nuclear reactor core modelling in multifunctional simulators

    International Nuclear Information System (INIS)

    Puska, E.K.

    1999-01-01

    studied to assess the possibilities for using three-dimensional cores in training simulators. The core model results have been compared with the Loviisa WWER-type plant measurement data in steady state and in some transients. Hypothetical control rod withdrawal, ejection and boron dilution transients have been calculated with various three-dimensional core models for the Loviisa WWER-440 core. Several ATWS analyses for the WWER-1000/91 plant have been performed using the three-dimensional core model. In this context, the results of APROS have been compared in detail with the results of the HEXTRAN code. The three-dimensional Olkiluoto BWR-type core model has been used for transient calculation and for severe accident re-criticality studies. The one-dimensional core model is at present used in several plant analyser and training simulator applications and it has been used extensively for safety analyses in the Loviisa WWER-440 plant modernisation project. (orig.)

  9. Nuclear reactor core modelling in multifunctional simulators

    Energy Technology Data Exchange (ETDEWEB)

    Puska, E.K. [VTT Energy, Nuclear Energy, Espoo (Finland)

    1999-06-01

    studied to assess the possibilities for using three-dimensional cores in training simulators. The core model results have been compared with the Loviisa WWER-type plant measurement data in steady state and in some transients. Hypothetical control rod withdrawal, ejection and boron dilution transients have been calculated with various three-dimensional core models for the Loviisa WWER-440 core. Several ATWS analyses for the WWER-1000/91 plant have been performed using the three-dimensional core model. In this context, the results of APROS have been compared in detail with the results of the HEXTRAN code. The three-dimensional Olkiluoto BWR-type core model has been used for transient calculation and for severe accident re-criticality studies. The one-dimensional core model is at present used in several plant analyser and training simulator applications and it has been used extensively for safety analyses in the Loviisa WWER-440 plant modernisation project. (orig.) 75 refs. The thesis also includes eight previous publications by the author.

  10. The fragrance hand immersion study - an experimental model simulating real-life exposure for allergic contact dermatitis on the hands.

    Science.gov (United States)

    Heydorn, S; Menné, T; Andersen, K E; Bruze, M; Svedman, C; Basketter, D; Johansen, J D

    2003-06-01

    Recently, we showed that 10.2% of consecutively patch-tested hand eczema patients had a positive patch test to a selection of fragrances containing fragrances relevant to hand exposure. In this study, we used repeated skin exposure to a patch-test-positive fragrance allergen in patients previously diagnosed with hand eczema to explore whether immersion of fingers in a solution with or without the patch-test-positive fragrance allergen would cause or exacerbate hand eczema on the exposed finger. The study was double blinded and randomized. All participants had a positive patch test to either hydroxycitronellal or Lyral (hydroxyisohexyl 3-cyclohexene carboxaldehyde). Each participant immersed a finger from each hand, once a day, in a solution containing the fragrance allergen or placebo. During the first 2 weeks, the concentration of fragrance allergen in the solution was low (approximately 10 p.p.m.), whilst during the following 2 weeks, the concentration was relatively high (approximately 250 p.p.m.), imitating real-life exposure to a household product like dishwashing liquid diluted in water and the undiluted product, respectively. Evaluation was made using a clinical scale and a laser Doppler flow meter. 3 of 15 hand eczema patients developed eczema on the finger immersed in the fragrance-containing solution, 3 of 15 on the placebo finger and 3 of 15 on both fingers. Using this experimental exposure model simulating real-life exposure, we found no association between immersion of a finger in a solution containing fragrance and development of clinically visible eczema on the finger in 15 participants previously diagnosed with hand eczema and with a positive patch test to the fragrance in question.

  11. Comparison of two simulation systems to support robotic-assisted surgical training: a pilot study (Swine model).

    Science.gov (United States)

    Whitehurst, Sabrina V; Lockrow, Ernest G; Lendvay, Thomas S; Propst, Anthony M; Dunlow, Susan G; Rosemeyer, Christopher J; Gobern, Joseph M; White, Lee W; Skinner, Anna; Buller, Jerome L

    2015-01-01

    To compare the efficacy of simulation-based training between the Mimic dV-Trainer and traditional dry lab da Vinci robot training. A prospective randomized study analyzing the performance of 20 robotics-naive participants. Participants were enrolled in an online da Vinci Intuitive Surgical didactic training module, followed by training in the use of the da Vinci standard surgical robot. Spatial ability tests were performed as well. Participants were randomly assigned to 1 of 2 training conditions: performance of 3 Fundamentals of Laparoscopic Surgery dry lab tasks using the da Vinci or performance of 4 dV-Trainer tasks. Participants in both groups performed all tasks to empirically establish a proficiency criterion. Participants then performed the transfer task, a cystotomy closure using the da Vinci robot on a live animal (swine) model. The performance of robotic tasks was blindly assessed by a panel of experienced surgeons using objective tracking data and the validated Global Evaluative Assessment of Robotic Surgery (GEARS), a structured assessment tool. No statistically significant difference in surgeon performance was found between the 2 training conditions, dV-Trainer and da Vinci robot. Analysis of a 95% confidence interval for the difference in means (-0.803 to 0.543) indicated that the 2 methods are unlikely to differ to an extent that would be clinically meaningful. Based on the results of this study, a curriculum on the dV-Trainer was shown to be comparable to traditional da Vinci robot training. Therefore, we have identified that training on a virtual reality system may be an alternative to live animal training for future robotic surgeons. Published by Elsevier Inc.

  12. Flood Simulation Using WMS Model in Small Watershed after Strong Earthquake -A Case Study of Longxihe Watershed, Sichuan province, China

    Science.gov (United States)

    Guo, B.

    2017-12-01

    Mountain watersheds in Western China are prone to flash floods. The Wenchuan earthquake of May 12, 2008 destroyed much of the land surface and triggered frequent landslides and debris flows, which further exacerbated the flash flood hazards. Two giant torrent and debris flow events occurred after heavy post-earthquake rainfall, one on August 13, 2010 and the other on August 18, 2010. Flash flood reduction and risk assessment are key issues in post-disaster reconstruction. Hydrological prediction models are important and cost-efficient mitigation tools and are widely applied. In this paper, hydrological observations and simulations using remote sensing data and the WMS model are carried out in a typical flood-hit area, the Longxihe watershed, Dujiangyan City, Sichuan Province, China. The hydrological response of rainfall runoff is discussed. The results show that the WMS HEC-1 model can simulate the runoff process of a small watershed in a mountainous area well. This methodology can be used in other earthquake-affected areas for risk assessment and to predict the magnitude of flash floods. Key words: rainfall-runoff modeling; remote sensing; earthquake; WMS.
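
    As a pointer to the sort of event-based rainfall-runoff calculation underlying such simulations, the sketch below applies the SCS curve-number method (one of the loss methods available in HEC-1 style models) to a single storm. The curve numbers and rainfall depth are assumed for illustration only.

```python
# SCS curve-number rainfall-runoff sketch; watershed CN and rainfall are assumptions.
def scs_runoff_mm(rain_mm, curve_number):
    """Direct runoff depth (mm) from event rainfall using the SCS-CN method."""
    s = 25400.0 / curve_number - 254.0     # potential retention S in mm
    ia = 0.2 * s                           # initial abstraction
    if rain_mm <= ia:
        return 0.0
    return (rain_mm - ia) ** 2 / (rain_mm + 0.8 * s)

# Example: 120 mm storm over a disturbed post-earthquake hillslope (high CN)
# versus an undisturbed forested slope (lower CN).
print("disturbed  CN=88:", round(scs_runoff_mm(120.0, 88), 1), "mm")
print("forested   CN=65:", round(scs_runoff_mm(120.0, 65), 1), "mm")
```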

  13. Experiments with Interaction between the National Water Model and the Reservoir System Simulation Model: A Case Study of Russian River Basin

    Science.gov (United States)

    Kim, J.; Johnson, L.; Cifelli, R.; Chandra, C. V.; Gochis, D.; McCreight, J. L.; Yates, D. N.; Read, L.; Flowers, T.; Cosgrove, B.

    2017-12-01

    The NOAA National Water Center (NWC), in partnership with the National Centers for Environmental Prediction (NCEP), the National Center for Atmospheric Research (NCAR) and other academic partners, has produced operational hydrologic predictions for the nation since the summer of 2016 using a new National Water Model (NWM) that is based on the community WRF-Hydro modeling system (Gochis et al., 2015). The NWM produces a variety of hydrologic analysis and prediction products, including gridded fields of soil moisture, snowpack, shallow groundwater levels, inundated area depths and evapotranspiration, as well as estimates of river flow and velocity for approximately 2.7 million river reaches. Also included in the NWM are representations of more than 1,200 reservoirs which are linked into the national channel network defined by the USGS NHDPlusv2.0 hydrography dataset. Despite the unprecedented spatial and temporal coverage of the NWM, many known deficiencies exist, including the representation of lakes and reservoirs. This study addresses the implementation of a reservoir assimilation scheme through the coupling of a reservoir simulation model to represent the influence of managed flows. We examine the use of reservoir operations to dynamically update lake/reservoir storage volume states, characterize the flow characteristics of river reaches flowing into and out of lakes and reservoirs, and incorporate enhanced reservoir operating rules for the reservoir model options within the NWM. Model experiments focus on a pilot reservoir domain, Lake Mendocino, CA, and its contributing watershed, the East Fork Russian River. This reservoir is modeled using the United States Army Corps of Engineers (USACE) HEC-ResSim model, developed to examine forecast-informed reservoir operations (FIRO) in the Russian River basin.
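
    The following sketch shows the kind of per-time-step reservoir mass balance with a simple release rule that a coupled reservoir component performs; the storage thresholds, release rates and inflow hydrograph are illustrative assumptions and are not Lake Mendocino's actual HEC-ResSim rule set.

```python
# Minimal reservoir mass-balance sketch (illustrative rule thresholds and inflows).
def route_reservoir(inflows_cms, storage_m3, flood_pool_m3, target_release_cms,
                    max_release_cms, dt_s=3600.0):
    """Step through hourly inflows, returning (storages, releases)."""
    storages, releases = [], []
    for q_in in inflows_cms:
        # Release at the target rate, but draw down faster above the flood pool.
        release = target_release_cms
        if storage_m3 > flood_pool_m3:
            release = max_release_cms
        release = min(release, storage_m3 / dt_s + q_in)   # cannot release more than available
        storage_m3 += (q_in - release) * dt_s
        storages.append(storage_m3)
        releases.append(release)
    return storages, releases

hydrograph = [50.0] * 24 + [400.0] * 36 + [80.0] * 48    # m^3/s, synthetic storm
s, r = route_reservoir(hydrograph, storage_m3=8.0e7, flood_pool_m3=9.0e7,
                       target_release_cms=25.0, max_release_cms=180.0)
print("peak storage (million m^3):", round(max(s) / 1e6, 1))
```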

  14. Galaxy Alignments: Theory, Modelling & Simulations

    Science.gov (United States)

    Kiessling, Alina; Cacciato, Marcello; Joachimi, Benjamin; Kirk, Donnacha; Kitching, Thomas D.; Leonard, Adrienne; Mandelbaum, Rachel; Schäfer, Björn Malte; Sifón, Cristóbal; Brown, Michael L.; Rassat, Anais

    2015-11-01

    The shapes of galaxies are not randomly oriented on the sky. During the galaxy formation and evolution process, environment has a strong influence, as tidal gravitational fields in the large-scale structure tend to align nearby galaxies. Additionally, events such as galaxy mergers affect the relative alignments of both the shapes and angular momenta of galaxies throughout their history. These "intrinsic galaxy alignments" are known to exist, but are still poorly understood. This review will offer a pedagogical introduction to the current theories that describe intrinsic galaxy alignments, including the apparent difference in intrinsic alignment between early- and late-type galaxies and the latest efforts to model them analytically. It will then describe the ongoing efforts to simulate intrinsic alignments using both N-body and hydrodynamic simulations. Due to the relative youth of this field, there is still much to be done to understand intrinsic galaxy alignments and this review summarises the current state of the field, providing a solid basis for future work.

  15. Conducting Simulation Studies in Psychometrics

    Science.gov (United States)

    Feinberg, Richard A.; Rubright, Jonathan D.

    2016-01-01

    Simulation studies are fundamental to psychometric discourse and play a crucial role in operational and academic research. Yet, resources for psychometricians interested in conducting simulations are scarce. This Instructional Topics in Educational Measurement Series (ITEMS) module is meant to address this deficiency by providing a comprehensive…

  16. Model studies of migration from paper and board into fruit and vegetables and into Tenax as a food simulant.

    Science.gov (United States)

    Bradley, E L; Castle, L; Speck, D R

    2014-01-01

    Four samples of paper and board (P/B) of a type used for packaging dry foods were subjected to migration tests using mushrooms, apples, potatoes and bananas, and using the polymeric powder Tenax as a food simulant. The P/B samples contained only low levels of diisopropylnaphthalene (DiPN) and diisobutyl phthalate (DiBP) and so the experiments were conducted after impregnating the P/B with added model substances. These were o-xylene, acetophenone, dodecane, benzophenone, DiPN and DiBP. Migration levels depended strongly on the nature of the substance and on the nature of the food and much less on the characteristics of the P/B, except insofar as they affected the contact area - flexible papers giving more extensive contact with the food than thick rigid board. Migration into Tenax was at least a factor of 10 higher than migration into the fresh fruit and vegetables. The food samples were placed in contact with the P/B and then overwrapped loosely with aluminium foil and so this correction factor will tend to be conservative compared with a more open storage of the packed foods. Washing, peeling or cooking the fruits and vegetables after contact with the P/B had a surprisingly small effect on contaminant levels in general, and no one processing step was effective in giving a significant reduction of all the types of chemicals studied. This was because either they had penetrated into the food (so resisting peeling), or were not freely water-soluble (so resisting washing) or were not particularly volatile (so resisting loss by evaporation during cooking).

  17. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    Science.gov (United States)

    BackgroundExposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of a...

  18. Probabilistic physics-of-failure models for component reliabilities using Monte Carlo simulation and Weibull analysis: a parametric study

    International Nuclear Information System (INIS)

    Hall, P.L.; Strutt, J.E.

    2003-01-01

    In reliability engineering, component failures are generally classified in one of three ways: (1) early life failures; (2) failures having random onset times; and (3) late life or 'wear out' failures. When the time-distribution of failures of a population of components is analysed in terms of a Weibull distribution, these failure types may be associated with shape parameters β having values β < 1, β ≈ 1 and β > 1, respectively. Early life failures are frequently attributed to poor design (e.g. poor materials selection) or problems associated with manufacturing or assembly processes. We describe a methodology for the implementation of physics-of-failure models of component lifetimes in the presence of parameter and model uncertainties. This treats uncertain parameters as random variables described by some appropriate statistical distribution, which may be sampled using Monte Carlo methods. The number of simulations required depends upon the desired accuracy of the predicted lifetime. Provided that the number of sampled variables is relatively small, an accuracy of 1-2% can be obtained using typically 1000 simulations. The resulting collection of times-to-failure is then sorted into ascending order and fitted to a Weibull distribution to obtain a shape factor β and a characteristic life-time η. Examples are given of the results obtained using three different models: (1) the Eyring-Peck (EP) model for corrosion of printed circuit boards; (2) a power-law corrosion growth (PCG) model which represents the progressive deterioration of oil and gas pipelines; and (3) a random shock-loading model of mechanical failure. It is shown that for any specific model the values of the Weibull shape parameters obtained may be strongly dependent on the degree of uncertainty of the underlying input parameters. Both the EP and PCG models can yield a wide range of values of β, from β > 1, characteristic of wear-out behaviour, to β < 1, characteristic of early-life failure, depending on the degree of
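
    The workflow described above can be sketched as follows: draw uncertain physics-of-failure inputs, compute a time-to-failure for each Monte Carlo sample, and fit a Weibull distribution to recover the shape factor β and characteristic life η. The corrosion-growth relation and parameter distributions below are illustrative stand-ins, not the EP, PCG or shock-loading models of the paper.

```python
# Monte Carlo physics-of-failure sketch with a Weibull fit (illustrative model).
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(42)
n_sim = 1000

# Uncertain inputs: initial defect depth and corrosion growth rate (power law).
d0 = rng.lognormal(mean=np.log(0.5), sigma=0.2, size=n_sim)   # mm
k = rng.lognormal(mean=np.log(0.12), sigma=0.3, size=n_sim)   # mm / year^n
n = rng.normal(loc=0.8, scale=0.05, size=n_sim)               # growth exponent
d_crit = 6.0                                                  # failure criterion, mm

# Time to failure: solve d0 + k * t**n = d_crit for t.
ttf = ((d_crit - d0) / k) ** (1.0 / n)

beta, _, eta = weibull_min.fit(ttf, floc=0)
print(f"Weibull shape beta = {beta:.2f}, characteristic life eta = {eta:.1f} years")
```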

  19. Product Costing in FMT: Comparing Deterministic and Stochastic Models Using Computer-Based Simulation for an Actual Case Study

    DEFF Research Database (Denmark)

    Nielsen, Steen

    2000-01-01

    This paper expands the traditional product costing technique by including a stochastic form in a complex production process for product costing. The stochastic phenomenon in flexible manufacturing technologies is seen as an important phenomenon that companies try to decrease or eliminate. DFM has... been used for evaluating the appropriateness of the firm's production capability. In this paper a simulation model is developed to analyze the relevant cost behaviour with respect to DFM and to develop a more streamlined process in the layout of the manufacturing process....

  20. Simulation in International Studies

    Science.gov (United States)

    Boyer, Mark A.

    2011-01-01

    Social scientists have long worked to replicate real-world phenomena in their research and teaching environments. Unlike our biophysical science colleagues, we are faced with an area of study that is not governed by the laws of physics and other more predictable relationships. As a result, social scientists, and international studies scholars more…

  1. Using Akaike's information theoretic criterion in mixed-effects modeling of pharmacokinetic data: a simulation study [version 3; referees: 2 approved, 1 approved with reservations

    Directory of Open Access Journals (Sweden)

    Erik Olofsen

    2015-07-01

    Full Text Available Akaike's information theoretic criterion for model discrimination (AIC) is often stated to "overfit", i.e., it selects models with a higher dimension than the dimension of the model that generated the data. However, with experimental pharmacokinetic data it may not be possible to identify the correct model, because of the complexity of the processes governing drug disposition. Instead of trying to find the correct model, a more useful objective might be to minimize the prediction error of drug concentrations in subjects with unknown disposition characteristics. In that case, the AIC might be the selection criterion of choice. We performed Monte Carlo simulations using a model of pharmacokinetic data (a power function of time) with the property that fits with common multi-exponential models can never be perfect, thus resembling the situation with real data. Prespecified models were fitted to simulated data sets, and AIC and AICc (the criterion with a correction for small sample sizes) values were calculated and averaged. The average predictive performances of the models, quantified using simulated validation sets, were compared to the means of the AICs. The data for fits and validation consisted of 11 concentration measurements each obtained in 5 individuals, with three degrees of interindividual variability in the pharmacokinetic volume of distribution. Mean AICc corresponded very well, and better than mean AIC, with mean predictive performance. With increasing interindividual variability, there was a trend towards larger optimal models, both with respect to the lowest AICc and the best predictive performance. Furthermore, it was observed that the mean square prediction error itself became less suitable as a validation criterion, and that a predictive performance measure should incorporate interindividual variability. This simulation study showed that, at least in a relatively simple mixed-effects modelling context with a set of prespecified models
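
    For reference, the AIC and the small-sample-corrected AICc used above reduce to simple formulas in the log-likelihood, the number of parameters k and the number of observations n; the sketch below compares hypothetical candidate models (the log-likelihood values are made up for illustration).

```python
# AIC / AICc bookkeeping sketch; the candidate models and log-likelihoods are invented.
def aic(log_likelihood, k):
    return 2 * k - 2 * log_likelihood

def aicc(log_likelihood, k, n):
    # Small-sample correction; requires n > k + 1.
    return aic(log_likelihood, k) + 2 * k * (k + 1) / (n - k - 1)

n_obs = 55   # e.g. 11 concentrations in each of 5 individuals
candidates = {            # model name: (log-likelihood, number of parameters)
    "one-compartment": (-112.4, 4),
    "two-compartment": (-104.9, 6),
    "three-compartment": (-103.8, 8),
}
for name, (ll, k) in candidates.items():
    print(f"{name:18s} AIC = {aic(ll, k):7.1f}  AICc = {aicc(ll, k, n_obs):7.1f}")
```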

  2. THE MARK I BUSINESS SYSTEM SIMULATION MODEL

    Science.gov (United States)

    of a large-scale business simulation model as a vehicle for doing research in management controls. The major results of the program were the development of the Mark I business simulation model and the Simulation Package (SIMPAC). SIMPAC is a method and set of programs facilitating the construction of large simulation models. The object of this document is to describe the Mark I Corporation model, state why parts of the business were modeled as they were, and indicate the research applications of the model. (Author)

  3. Theoretical modelling, experimental studies and clinical simulations of urethral cooling catheters for use during prostate thermal therapy

    International Nuclear Information System (INIS)

    Davidson, Sean R H; Sherar, Michael D

    2003-01-01

    Urethral cooling catheters are used to prevent thermal damage to the urethra during thermal therapy of the prostate. Quantification of a catheter's heat transfer characteristics is necessary for prediction of the catheter's influence on the temperature and thermal dose distribution in periurethral tissue. Two cooling catheters with different designs were examined: the Dornier Urowave catheter and a prototype device from BSD Medical Corp. A convection coefficient, h, was used to characterize the cooling ability of each catheter. The value of the convection coefficient (h = 330 W m⁻² °C⁻¹ for the Dornier catheter, h = 160 W m⁻² °C⁻¹ for the BSD device) was obtained by comparing temperatures measured in a tissue-equivalent phantom material to temperatures predicted by a finite element method simulation of the phantom experiments. The coefficient was found to be insensitive to the rate of coolant flow inside the catheter between 40 and 120 ml min⁻¹. The convection coefficient method for modelling urethral catheters was incorporated into simulations of microwave heating of the prostate. Results from these simulations indicate that the Dornier device is significantly more effective than the BSD catheter at cooling the tissue surrounding the urethra

  4. Simulation Models for Socioeconomic Inequalities in Health: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Niko Speybroeck

    2013-11-01

    Full Text Available Background: The emergence and evolution of socioeconomic inequalities in health involves multiple factors interacting with each other at different levels. Simulation models are suitable for studying such complex and dynamic systems and have the ability to test the impact of policy interventions in silico. Objective: To explore how simulation models were used in the field of socioeconomic inequalities in health. Methods: An electronic search of studies assessing socioeconomic inequalities in health using a simulation model was conducted. Characteristics of the simulation models were extracted and distinct simulation approaches were identified. As an illustration, a simple agent-based model of the emergence of socioeconomic differences in alcohol abuse was developed. Results: We found 61 studies published between 1989 and 2013. Ten different simulation approaches were identified. The agent-based model illustration showed that multilevel, reciprocal and indirect effects of social determinants on health can be modeled flexibly. Discussion and Conclusions: Based on the review, we discuss the utility of using simulation models for studying health inequalities, and refer to good modeling practices for developing such models. The review and the simulation model example suggest that the use of simulation models may enhance the understanding and debate about existing and new socioeconomic inequalities of health frameworks.
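
    A toy version of the agent-based illustration mentioned above might look like the sketch below, in which each agent's risk of alcohol abuse depends on its socioeconomic position and on the behaviour of its social contacts. All mechanisms, network structure and parameter values are illustrative assumptions, not the model developed in the review.

```python
# Toy agent-based sketch: socioeconomic position and peer behaviour jointly
# drive the risk of alcohol abuse. All parameters are illustrative assumptions.
import random

random.seed(1)
N, STEPS = 500, 50

agents = [{"ses": random.random(),        # socioeconomic position in [0, 1]
           "abuse": False} for _ in range(N)]
# Simple social network: each agent has 5 random contacts.
contacts = [random.sample(range(N), 5) for _ in range(N)]

for _ in range(STEPS):
    for i, a in enumerate(agents):
        peer_abuse = sum(agents[j]["abuse"] for j in contacts[i]) / len(contacts[i])
        # Lower socioeconomic position and more abusing contacts raise the risk.
        risk = 0.01 + 0.03 * (1.0 - a["ses"]) + 0.05 * peer_abuse
        if not a["abuse"] and random.random() < risk:
            a["abuse"] = True
        elif a["abuse"] and random.random() < 0.02:    # chance of recovery
            a["abuse"] = False

low = [a for a in agents if a["ses"] < 0.5]
high = [a for a in agents if a["ses"] >= 0.5]
print("abuse prevalence, low SES :", sum(a["abuse"] for a in low) / len(low))
print("abuse prevalence, high SES:", sum(a["abuse"] for a in high) / len(high))
```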

  5. Validity of microgravity simulation models on earth

    DEFF Research Database (Denmark)

    Regnard, J; Heer, M; Drummer, C

    2001-01-01

    Many studies have used water immersion and head-down bed rest as experimental models to simulate responses to microgravity. However, some data collected during space missions are at variance or in contrast with observations collected from experimental models. These discrepancies could reflect...... incomplete knowledge of the characteristics inherent to each model. During water immersion, the hydrostatic pressure lowers the peripheral vascular capacity and causes increased thoracic blood volume and high vascular perfusion. In turn, these changes lead to high urinary flow, low vasomotor tone, and a high...

  6. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  7. Modeling and simulation of the SDC data collection chip

    International Nuclear Information System (INIS)

    Hughes, E.; Haney, M.; Golin, E.; Jones, L.; Knapp, D.; Tharakan, G.; Downing, R.

    1992-01-01

    This paper describes modeling and simulation of the Data Collection Chip (DCC) design for the Solenoidal Detector Collaboration (SDC). Models of the DCC written in Verilog and VHDL are described, and results are presented. The models have been simulated to study queue depth requirements and to compare control feedback alternatives. Insight into the management of models and simulation tools is given. Finally, techniques useful in the design process for data acquisition systems are discussed

  8. Developing Cognitive Models for Social Simulation from Survey Data

    Science.gov (United States)

    Alt, Jonathan K.; Lieberman, Stephen

    The representation of human behavior and cognition continues to challenge the modeling and simulation community. The use of survey and polling instruments to inform belief states, issue stances and action choice models provides a compelling means of developing models and simulations with empirical data. Using these types of data to populate social simulations can greatly enhance the feasibility of validation efforts, the reusability of social and behavioral modeling frameworks, and the testable reliability of simulations. We provide a case study demonstrating these effects, document the use of survey data to develop cognitive models, and suggest future paths forward for social and behavioral modeling.

  9. Dual-energy mammography: simulation studies

    International Nuclear Information System (INIS)

    Bliznakova, K; Kolitsi, Z; Pallikarakis, N

    2006-01-01

    This paper presents a mammography simulator and demonstrates its applicability in feasibility studies in dual-energy (DE) subtraction mammography. This mammography simulator is an evolution of a previously presented x-ray imaging simulation system, which has been extended with new functionalities that are specific for DE simulations. The new features include incident exposure and dose calculations, the implementation of a DE subtraction algorithm as well as amendments to the detector and source modelling. The system was then verified by simulating experiments and comparing their results against published data. The simulator was used to carry out a feasibility study of the applicability of DE techniques in mammography, and more precisely to examine whether this modality could result in better visualization and detection of microcalcifications. Investigations were carried out using a 3D breast software phantom of average thickness, monoenergetic and polyenergetic beam spectra and various detector configurations. Dual-shot techniques were simulated. Results showed the advantage of using monoenergetic in comparison with polyenergetic beams. Optimization studies with monochromatic sources were carried out to obtain the optimal low and high incident energies, based on the assessment of the figure of merit of the simulated microcalcifications in the subtracted images. The results of the simulation study with the optimal energies demonstrated that the use of the DE technique can improve visualization and increase detectability, allowing identification of microcalcifications of sizes as small as 200 μm. The quantitative results are also verified by means of a visual inspection of the synthetic images

  10. Simulation modelling of fynbos ecosystems: Systems analysis and conceptual models

    CSIR Research Space (South Africa)

    Kruger, FJ

    1985-03-01

    Full Text Available -animal interactions. An additional two models, which expand aspects of the FYNBOS model, are described: a model for simulating canopy processes; and a Fire Recovery Simulator. The canopy process model will simulate ecophysiological processes in more detail than FYNBOS...

  11. Impact of Diagnosticity on the Adequacy of Models for Cognitive Diagnosis under a Linear Attribute Structure: A Simulation Study

    Science.gov (United States)

    de La Torre, Jimmy; Karelitz, Tzur M.

    2009-01-01

    Compared to unidimensional item response models (IRMs), cognitive diagnostic models (CDMs) based on latent classes represent examinees' knowledge and item requirements using discrete structures. This study systematically examines the viability of retrofitting CDMs to IRM-based data with a linear attribute structure. The study utilizes a procedure…

  12. Modeling, simulation and optimization of bipedal walking

    CERN Document Server

    Berns, Karsten

    2013-01-01

    The model-based investigation of motions of anthropomorphic systems is an important interdisciplinary research topic involving specialists from many fields such as Robotics, Biomechanics, Physiology, Orthopedics, Psychology, Neurosciences, Sports, Computer Graphics and Applied Mathematics. This book presents a study of basic locomotion forms such as walking and running, which are of particular interest due to the high demands on dynamic coordination, actuator efficiency and balance control. Mathematical models and numerical simulation and optimization techniques are explained, in combination with experimental data, which can help to better understand the basic underlying mechanisms of these motions and to improve them. Example topics treated in this book are: modeling techniques for anthropomorphic bipedal walking systems; optimized walking motions for different objective functions; identification of objective functions from measurements; simulation and optimization approaches for humanoid robots; biologically inspired con...

  13. Study of cellular retention of HMPAO and ECD in a model simulating the blood-brain barrier

    International Nuclear Information System (INIS)

    Ponce, C.; Pittet, N.; Slosman, D.O.

    1997-01-01

    HMPAO and ECD are two technetium-labelled lipophilic agents used clinically in cerebral perfusion imaging. These molecules cross cell membranes and are retained inside the cell after being converted to a hydrophilic form. The aim of this study is to establish the distribution of this retention at the level of the blood-brain barrier (BBB) and nerve cells. The incorporation of HMPAO or ECD was studied in a co-culture model simulating the BBB, consisting of a single layer of tight-junction-forming T84 cells separated from a layer of U373 astrocyte cells. Cell quality and tight junction permeability were evaluated by the cellular retention of indium-111 chloride and by the paracellular diffusion of ¹⁴C-labelled D-mannitol. The values reported below were obtained at 180 minutes when the radiotracers were added near the 'T84 layer'. The cell quality is validated by the low cellular retention of the indium chloride (2.3 ± 0.3 μg⁻¹ for the T84 cells and 8.2 ± 5.8 μg⁻¹ for the U373 cells). The activity of ¹⁴C-mannitol diminishes by 23 ± 5% in the compartment to which it was added. The retention of ECD by the U373 cells is significantly higher (20.7 ± 4.5 μg⁻¹) than that of the T84 cells (2.9 ± 0.2 μg⁻¹). For HMPAO a non-significant tendency in the same direction could be observed (49 ± 34 μg⁻¹ for the U373 cells and 38 ± 25 μg⁻¹ for the T84 cells). The results for cellular retention of indium, HMPAO or ECD when added near the 'U373 layer' are not significantly different. In conclusion, independently of the side exposed to the radiotracers, an enhanced incorporation by the U373 cells is observed. Together, these results represent additional arguments in favour of a specific cellular incorporation of the radiotracers, independent of the permeability of the BBB.

  14. Modeling and simulation of economic processes

    Directory of Open Access Journals (Sweden)

    Bogdan Brumar

    2010-12-01

    Full Text Available In general, any activity involves a longer course of action that is often characterized by a degree of uncertainty or insecurity with respect to the size of the objective pursued. Because of the complexity of real economic systems and the stochastic dependencies between the different variables and parameters considered, not all systems can be adequately represented by a model that can be solved by analytical methods while covering all the issues relevant to management decision analysis over a realistic economic horizon. Often in such cases, the simulation technique is considered the only alternative available. Using simulation techniques to study real-world systems often requires laborious work. Carrying out a simulation experiment is a process that takes place in several stages.

  15. An introduction to enterprise modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ostic, J.K.; Cannon, C.E. [Los Alamos National Lab., NM (United States). Technology Modeling and Analysis Group

    1996-09-01

    As part of an ongoing effort to continuously improve productivity, quality, and efficiency of both industry and Department of Energy enterprises, Los Alamos National Laboratory is investigating various manufacturing and business enterprise simulation methods. A number of enterprise simulation software models are being developed to enable engineering analysis of enterprise activities. In this document the authors define the scope of enterprise modeling and simulation efforts, and review recent work in enterprise simulation at Los Alamos National Laboratory as well as at other industrial, academic, and research institutions. References of enterprise modeling and simulation methods and a glossary of enterprise-related terms are provided.

  16. Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2014-01-01

    This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).

  17. Structured building model reduction toward parallel simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University

    2013-08-26

    Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.
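
    The core step, replacing a large linear thermal network with a handful of dominant modes, can be illustrated without any building-simulation software. The sketch below is not the structured clustering method discussed in the paper; it is a plain modal truncation of an invented 5-node thermal network, with all values chosen purely for illustration.

```python
import numpy as np

# Plain modal truncation of a linear thermal network dT/dt = A*T + B*u:
# keep only the slowest modes. The 5-node system is invented for
# illustration; the paper's structured reduction is more sophisticated.
rng = np.random.default_rng(0)
n, n_keep = 5, 2

M = rng.uniform(0.1, 1.0, size=(n, n))
A = -(M @ M.T + n * np.eye(n))        # symmetric, negative definite (passive RC-like)
B = rng.uniform(size=(n, 1))

w, V = np.linalg.eigh(A)              # eigenvalues/eigenvectors of A
slow = np.argsort(np.abs(w))[:n_keep] # modes with the smallest |eigenvalue|
Vr = V[:, slow]                       # reduction basis
Ar, Br = Vr.T @ A @ Vr, Vr.T @ B      # reduced system

# Compare unit-step responses of the full and reduced models (explicit Euler).
dt, steps = 0.01, 5000
x_full, x_red = np.zeros(n), np.zeros(n_keep)
for _ in range(steps):
    x_full += dt * (A @ x_full + B[:, 0])
    x_red += dt * (Ar @ x_red + Br[:, 0])

print("full model steady state      :", np.round(x_full, 3))
print("reduced model, reconstructed :", np.round(Vr @ x_red, 3))
```

    In the paper's structured approach the retained states also preserve the building's physical grouping, which is what makes it possible to map thermal clusters onto separate processors.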

  18. Assessing the relationship between computational speed and precision: a case study comparing an interpreted versus compiled programming language using a stochastic simulation model in diabetes care.

    Science.gov (United States)

    McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P

    2010-01-01

    Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
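
    The variance-reduction device mentioned above, antithetic variates, is easy to demonstrate outside the diabetes model. In the sketch below the outcome function is a toy stand-in (not the UKPDS 68 equations): each uniform draw u is paired with its mirror 1 - u, and because the outcome is monotone in u the paired evaluations are negatively correlated, which typically lowers the standard error for the same number of model evaluations.

```python
import numpy as np

rng = np.random.default_rng(42)

def outcome(u):
    # Toy stand-in for a patient-level outcome driven by a uniform draw
    # (illustrative only; this is not the UKPDS 68 outcomes equations).
    return np.exp(u) - 1.0

n = 100_000

# Standard Monte Carlo: n independent draws.
u = rng.uniform(size=n)
standard = outcome(u)

# Antithetic variates: n/2 draws, each paired with its mirror 1 - u,
# so the total number of model evaluations stays the same.
u_half = rng.uniform(size=n // 2)
antithetic = 0.5 * (outcome(u_half) + outcome(1.0 - u_half))

print("standard MC : mean %.5f, std error %.5f"
      % (standard.mean(), standard.std(ddof=1) / np.sqrt(n)))
print("antithetic  : mean %.5f, std error %.5f"
      % (antithetic.mean(), antithetic.std(ddof=1) / np.sqrt(n // 2)))
```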

  19. Real-Time Agent-Based Modeling Simulation with in-situ Visualization of Complex Biological Systems: A Case Study on Vocal Fold Inflammation and Healing.

    Science.gov (United States)

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K

    2016-05-01

    We present an efficient and scalable scheme for implementing agent-based modeling (ABM) simulation with In Situ visualization of large complex systems on heterogeneous computing platforms. The scheme is designed to make optimal use of the resources available on a heterogeneous platform consisting of a multicore CPU and a GPU, resulting in minimal to no resource idle time. Furthermore, the scheme was implemented under a client-server paradigm that enables remote users to visualize and analyze simulation data as it is being generated at each time step of the model. Performance of a simulation case study of vocal fold inflammation and wound healing with 3.8 million agents shows 35× and 7× speedup in execution time over single-core and multi-core CPU respectively. Each iteration of the model took less than 200 ms to simulate, visualize and send the results to the client. This enables users to monitor the simulation in real-time and modify its course as needed.

  20. A 2-D FEM thermal model to simulate water flow in a porous media: Campi Flegrei caldera case study

    Directory of Open Access Journals (Sweden)

    V. Romano

    2012-05-01

    Full Text Available Volcanic and geothermal features coexist in many geologically young areas. In these areas the heat transfer process is of fundamental importance, so the thermal and fluid-dynamic processes characterizing a viscous fluid in a porous medium are essential to understanding the complex dynamics of such regions. The Campi Flegrei caldera, located west of the city of Naples within the central-southern sector of the large graben of the Campanian plain, is a region where both volcanic and geothermal phenomena are present. The upper part of the geothermal system can be considered roughly as a succession of volcanic porous material (tuff) saturated by a mixture formed mainly of water and carbon dioxide. We have implemented a finite element approach in transient conditions to simulate water flow in a 2-D porous medium, in order to model the changes of temperature in the geothermal system due to magmatic fluid inflow, accounting for a transient phase and fluid compressibility, which are not considered in the analytical solutions. The thermal model is described by means of conductive/convective equations, in which we propose a thermal source represented by a parabolic shape function to better simulate an increase of temperature in the central part (magma chamber) of a box representing the Campi Flegrei caldera, using recent literature values for the medium's parameters (specific heat capacity, density, thermal conductivity, permeability). A best-fit velocity for the permeant is evaluated by comparing the simulated temperatures with those measured in wells drilled by Agip (the Italian Oil Agency) in the 1980s in the framework of geothermal exploration. A few tens of days are enough to reach the thermal steady state, showing the quick response of the system to heat injection. The increase in pressure due to the heat transport is then used to compute ground deformation, in particular the vertical displacements characteristic of the Campi Flegrei caldera.
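
    As a rough illustration of the kind of transient thermal problem described above (not the paper's finite element formulation, and with the convective transport and the Campi Flegrei parameter values omitted), the sketch below advances 2-D heat conduction on a box whose bottom boundary carries a parabolic-shaped temperature source, the same qualitative 'magma chamber' forcing.

```python
import numpy as np

# Simplified stand-in for the transient thermal problem: explicit
# finite-difference heat conduction on a 2-D box with a parabolic-shaped
# source along the bottom boundary ("magma chamber"). The paper uses FEM
# and includes fluid convection; grid, diffusivity and temperatures here
# are illustrative, not the Campi Flegrei parameters.
nx, ny = 60, 40
dx = 100.0                     # grid spacing (m)
alpha = 1.0e-6                 # thermal diffusivity (m^2/s)
dt = 0.2 * dx**2 / alpha       # explicit stability requires alpha*dt/dx^2 <= 0.25

T = np.full((ny, nx), 100.0)   # initial temperature (deg C)
x = np.linspace(-1.0, 1.0, nx)
T_source = 100.0 + 400.0 * (1.0 - x**2)   # parabolic source profile

for step in range(5000):
    T[-1, :] = T_source        # bottom boundary held at the source profile
    T[0, :] = 100.0            # top boundary held at the background value
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
    T[1:-1, 1:-1] += alpha * dt * lap[1:-1, 1:-1]

print("temperature at the box centre: %.1f degC" % T[ny // 2, nx // 2])
```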

  1. Modelling and simulation of thermal power plants

    Energy Technology Data Exchange (ETDEWEB)

    Eborn, J.

    1998-02-01

    Mathematical modelling and simulation are important tools when dealing with engineering systems that today are becoming increasingly complex. Integrated production and recycling of materials are trends that give rise to heterogeneous systems, which are difficult to handle within one area of expertise. Model libraries are an excellent way to package engineering knowledge of systems and units to be reused by those who are not experts in modelling. Many commercial packages provide good model libraries, but they are usually domain-specific and closed. Heterogeneous, multi-domain systems require open model libraries written in general purpose modelling languages. This thesis describes a model database for thermal power plants written in the object-oriented modelling language OMOLA. The models are based on first principles. Subunits describe volumes with pressure and enthalpy dynamics and flows of heat or different media. The subunits are used to build basic units such as pumps, valves and heat exchangers which can be used to build system models. Several applications are described; a heat recovery steam generator, equipment for juice blending, steam generation in a sulphuric acid plant and a condensing steam plate heat exchanger. Model libraries for industrial use must be validated against measured data. The thesis describes how parameter estimation methods can be used for model validation. Results from a case-study on parameter optimization of a non-linear drum boiler model show how the technique can be used. 32 refs, 21 figs

  2. An electrical circuit model for simulation of indoor radon concentration.

    Science.gov (United States)

    Musavi Nasab, S M; Negarestani, A

    2013-01-01

    In this study, a new model based on electric circuit theory was introduced to simulate the behaviour of indoor radon concentration. In this model, a voltage source simulates radon generation in walls, conductivity simulates migration through walls and voltage across a capacitor simulates radon concentration in a room. This simulation considers migration of radon through walls by diffusion mechanism in one-dimensional geometry. Data reported in a typical Greek house were employed to examine the application of this technique of simulation to the behaviour of radon.
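
    The circuit analogy can be reproduced in a few lines of code. In the sketch below, with illustrative numbers that are not taken from the paper or from the Greek-house data, a source level E plays the role of the voltage source (radon generation in the wall), a conductance G the migration through the wall, and the indoor concentration behaves like the voltage on a capacitor that is discharged by radioactive decay and air exchange.

```python
import math

# Illustrative RC-analog of indoor radon build-up (values are invented,
# not the paper's parameters or the Greek-house data):
#   E    - "source voltage": radon generation potential of the wall
#   G    - "conductance": diffusive migration rate through the wall (1/h)
#   LAM  - Rn-222 decay constant (1/h), one discharge path
#   VENT - air-exchange rate (1/h), another discharge path
E, G = 200.0, 0.5
LAM = math.log(2.0) / 91.8          # half-life of Rn-222 ~ 91.8 h
VENT = 0.3

dt, hours = 0.1, 48.0
conc = 0.0                          # "capacitor voltage": indoor concentration
for _ in range(int(hours / dt)):
    # charging current G*(E - conc) minus discharge by decay and ventilation
    conc += dt * (G * (E - conc) - (LAM + VENT) * conc)

print("simulated indoor concentration after %.0f h: %.1f" % (hours, conc))
print("analytic steady state G*E/(G+LAM+VENT)     : %.1f"
      % (G * E / (G + LAM + VENT)))
```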

  3. A physiological production model for cacao : results of model simulations

    NARCIS (Netherlands)

    Zuidema, P.A.; Leffelaar, P.A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.

  4. Sensitivity of terrestrial ecosystems to elevated atmospheric CO{sub 2}: Comparisons of model simulation studies to CO{sub 2} effect

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Y. [Marine Biological Lab., Woods Hole, MA (United States)

    1995-06-01

    In the context of a project to compare terrestrial ecosystem models, the Vegetation/Ecosystem Modeling and Analysis Project (VEMAP), we have analyzed how three biogeochemistry models link plant growth to doubled atmospheric CO{sub 2}. A common set of input data was used to drive three biogeochemistry models, BIOME-BGC, CENTURY and TEM. For the continental United States the simulation results show that with doubled CO{sub 2}, NPP increased by 8.7%, 5.0% and 10.8% for TEM, CENTURY and BIOME-BGC, respectively. At the biome level the range of NPP estimates varied considerably among models. TEM-simulated enhancement of NPP ranged from 2% to 28%; CENTURY, from 2% to 9%; and BIOME-BGC, from 4% to 27%. A transect analysis across several biomes along latitude 41.5 N shows that the TEM-simulated CO{sub 2} enhancement of NPP ranged from 0% to 22%; CENTURY, from 1% to 10%; and BIOME-BGC, from 1% to 63%. In this study, we have investigated the underlying mechanisms of the three models to reveal how increased CO{sub 2} affects photosynthesis rates, water-use efficiency and nutrient cycles. The relative importance of these mechanisms in each of the three biogeochemistry models will be discussed.

  5. Theory, modeling and simulation: Annual report 1993

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, T.H. Jr.; Garrett, B.C.

    1994-07-01

    Developing the knowledge base needed to address the environmental restoration issues of the US Department of Energy requires a fundamental understanding of molecules and their interactions in isolation and in liquids, on surfaces, and at interfaces. To meet these needs, the PNL has established the Environmental and Molecular Sciences Laboratory (EMSL) and will soon begin construction of a new, collaborative research facility devoted to advancing the understanding of environmental molecular science. Research in the Theory, Modeling, and Simulation program (TMS), which is one of seven research directorates in the EMSL, will play a critical role in understanding molecular processes important in restoring DOE's research, development and production sites, including understanding the migration and reactions of contaminants in soils and groundwater, the development of separation processes for the isolation of pollutants, the development of improved materials for waste storage, understanding the enzymatic reactions involved in the biodegradation of contaminants, and understanding the interaction of hazardous chemicals with living organisms. The research objectives of the TMS program are to apply available techniques to study fundamental molecular processes involved in natural and contaminated systems; to extend current techniques to treat molecular systems of future importance and to develop techniques for addressing problems that are computationally intractable at present; to apply molecular modeling techniques to simulate molecular processes occurring in the multispecies, multiphase systems characteristic of natural and polluted environments; and to extend current molecular modeling techniques to treat complex molecular systems and to improve the reliability and accuracy of such simulations. The program contains three research activities: Molecular Theory/Modeling, Solid State Theory, and Biomolecular Modeling/Simulation. Extended abstracts are presented for 89 studies.

  6. Theory, modeling and simulation: Annual report 1993

    International Nuclear Information System (INIS)

    Dunning, T.H. Jr.; Garrett, B.C.

    1994-07-01

    Developing the knowledge base needed to address the environmental restoration issues of the US Department of Energy requires a fundamental understanding of molecules and their interactions in isolation and in liquids, on surfaces, and at interfaces. To meet these needs, the PNL has established the Environmental and Molecular Sciences Laboratory (EMSL) and will soon begin construction of a new, collaborative research facility devoted to advancing the understanding of environmental molecular science. Research in the Theory, Modeling, and Simulation program (TMS), which is one of seven research directorates in the EMSL, will play a critical role in understanding molecular processes important in restoring DOE's research, development and production sites, including understanding the migration and reactions of contaminants in soils and groundwater, the development of separation processes for the isolation of pollutants, the development of improved materials for waste storage, understanding the enzymatic reactions involved in the biodegradation of contaminants, and understanding the interaction of hazardous chemicals with living organisms. The research objectives of the TMS program are to apply available techniques to study fundamental molecular processes involved in natural and contaminated systems; to extend current techniques to treat molecular systems of future importance and to develop techniques for addressing problems that are computationally intractable at present; to apply molecular modeling techniques to simulate molecular processes occurring in the multispecies, multiphase systems characteristic of natural and polluted environments; and to extend current molecular modeling techniques to treat complex molecular systems and to improve the reliability and accuracy of such simulations. The program contains three research activities: Molecular Theory/Modeling, Solid State Theory, and Biomolecular Modeling/Simulation. Extended abstracts are presented for 89 studies.

  7. Exploratory modeling and simulation to support development of motesanib in Asian patients with non-small cell lung cancer based on MONET1 study results.

    Science.gov (United States)

    Claret, L; Bruno, R; Lu, J-F; Sun, Y-N; Hsu, C-P

    2014-04-01

    The motesanib phase III MONET1 study failed to show improvement in overall survival (OS) in non-small cell lung cancer, but a subpopulation of Asian patients had a favorable outcome. We performed exploratory modeling and simulations based on MONET1 data to support further development of motesanib in Asian patients. A model-based estimate of time to tumor growth was the best of tested tumor size response metrics in a multivariate OS model (P Simulations indicated that a phase III study in 500 Asian patients would exceed 80% power to confirm superior efficacy of motesanib combination therapy (expected HR: 0.74), suggesting that motesanib combination therapy may benefit Asian patients.

  8. Numerical study of Asian dust transport during the springtime of 2001 simulated with the Chemical Weather Forecasting System (CFORS) model

    Science.gov (United States)

    Uno, Itsushi; Satake, Shinsuke; Carmichael, Gregory R.; Tang, Youhua; Wang, Zifa; Takemura, Toshihiko; Sugimoto, Nobuo; Shimizu, Atsushi; Murayama, Toshiyuki; Cahill, Thomas A.; Cliff, Steven; Uematsu, Mitsuo; Ohta, Sachio; Quinn, Patricia K.; Bates, Timothy S.

    2004-10-01

    The regional-scale aerosol transport model Chemical Weather Forecasting System (CFORS) is used for analysis of large-scale dust phenomena during the Asian Pacific Regional Characterization Experiment (ACE-Asia) intensive observation. Dust modeling results are examined with the surface weather reports, satellite-derived dust index (Total Ozone Mapping Spectrometer (TOMS) Aerosol Index (AI)), Mie-scattering lidar observation, and surface aerosol observations. The CFORS dust results are shown to accurately reproduce many of the important observed features. Model analysis shows that the simulated dust vertical loading correlates well with TOMS AI and that the dust loading is transported with the meandering of the synoptic-scale temperature field at the 500-hPa level. Quantitative examination of aerosol optical depth shows that model predictions are within 20% difference of the lidar observations for the major dust episodes. The structure of the ACE-Asia Perfect Dust Storm, which occurred in early April, is clarified with the help of the CFORS model analysis. This storm consisted of two boundary layer components and one elevated dust (>6-km height) feature (resulting from the movement of two large low-pressure systems). Time variation of the CFORS dust fields shows the correct onset timing of the elevated dust for each observation site, but the model results tend to overpredict dust concentrations at lower latitude sites. The horizontal transport flux at 130°E longitude is examined, and the overall dust transport flux at 130°E during March-April is evaluated to be 55 Tg.

  9. Simulation modeling and analysis with Arena

    CERN Document Server

    Altiok, Tayfur

    2007-01-01

    Simulation Modeling and Analysis with Arena is a highly readable textbook which treats the essentials of the Monte Carlo discrete-event simulation methodology, and does so in the context of the popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings. It introduces the concept of discrete event Monte Carlo simulation, the most commonly used methodology for modeli...
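
    The event-scheduling logic that a package such as Arena wraps in a graphical environment can also be written out directly. The sketch below is a generic single-server (M/M/1) queue in plain Python with arbitrary rates; it is not an example from the book, but it shows the future-event list, the state updates and the time-average statistics that discrete-event simulation rests on.

```python
import heapq
import random

# Generic single-server queue (M/M/1) written as an explicit discrete-event
# simulation: a future-event list, state updates, and time-average statistics.
# Rates and horizon are arbitrary illustrative values.
random.seed(1)
ARRIVAL_RATE, SERVICE_RATE, HORIZON = 0.9, 1.0, 100_000.0

events = [(random.expovariate(ARRIVAL_RATE), "arrival")]   # future-event list
queue_len, busy = 0, False
area_q, last_t, served = 0.0, 0.0, 0

while events:
    now, kind = heapq.heappop(events)
    if now > HORIZON:
        break
    area_q += queue_len * (now - last_t)     # accumulate time-weighted queue
    last_t = now
    if kind == "arrival":
        heapq.heappush(events, (now + random.expovariate(ARRIVAL_RATE), "arrival"))
        if busy:
            queue_len += 1
        else:
            busy = True
            heapq.heappush(events, (now + random.expovariate(SERVICE_RATE), "departure"))
    else:                                    # departure
        served += 1
        if queue_len > 0:
            queue_len -= 1
            heapq.heappush(events, (now + random.expovariate(SERVICE_RATE), "departure"))
        else:
            busy = False

print("served %d customers, time-average queue length %.2f"
      % (served, area_q / last_t))
```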

  10. Numerical simulation and parametric study of laminar mixed convection nanofluid flow in flat tubes using two phase mixture model

    Directory of Open Access Journals (Sweden)

    Safikhani Hamed

    2016-01-01

    Full Text Available In this article, the laminar mixed convection of Al2O3-water nanofluid flow in a horizontal flat tube has been numerically simulated. The two-phase mixture model has been employed to solve the nanofluid flow, and constant heat flux has been considered as the wall boundary condition. The effects of important parameters such as the Reynolds number (Re), the Grashof number (Gr), the nanoparticle volume fraction (Φ) and the nanoparticle diameter (dp) on the thermal and hydrodynamic performance of the nanofluid flow have been analyzed. The results of the numerical simulation were compared with similar existing data and good agreement was observed between them. It is demonstrated that the Nusselt number (Nu) and the friction factor (Cf) are different for each of the upper, lower, left and right walls of the flat tube. Increasing Re, Gr and Φ and reducing dp lead to an increase of Nu. Similarly, increasing Re and Φ results in an increase of Cf. Therefore, the best way to increase the amount of heat transfer in flat tubes using nanofluids is to increase Gr and reduce dp.

  11. Network Modeling and Simulation A Practical Perspective

    CERN Document Server

    Guizani, Mohsen; Khan, Bilal

    2010-01-01

    Network Modeling and Simulation is a practical guide to using modeling and simulation to solve real-life problems. The authors give a comprehensive exposition of the core concepts in modeling and simulation, and then systematically address the many practical considerations faced by developers in modeling complex large-scale systems. The authors provide examples from computer and telecommunication networks and use these to illustrate the process of mapping generic simulation concepts to domain-specific problems in different industries and disciplines. Key features: Provides the tools and strate

  12. Deep Drawing Simulations With Different Polycrystalline Models

    Science.gov (United States)

    Duchêne, Laurent; de Montleau, Pierre; Bouvier, Salima; Habraken, Anne Marie

    2004-06-01

    The goal of this research is to study the anisotropic material behavior during forming processes, represented by both complex yield loci and kinematic-isotropic hardening models. The first part of this paper describes the main concepts of the 'Stress-strain interpolation' model that has been implemented in the non-linear finite element code Lagamine. This model consists of a local description of the yield locus based on the texture of the material through the full-constraints Taylor model. The texture evolution due to plastic deformations is computed throughout the FEM simulations. This 'local yield locus' approach was initially linked to the classical isotropic Swift hardening law. Recently, a more complex hardening model was implemented: the physically-based microstructural model of Teodosiu. It takes into account intergranular heterogeneity due to the evolution of dislocation structures, which affects isotropic and kinematic hardening. The influence of the hardening model is compared to the influence of the texture evolution by means of deep drawing simulations.

  13. A semi-grand canonical Monte Carlo simulation model for ion binding to ionizable surfaces: proton binding of carboxylated latex particles as a case study.

    Science.gov (United States)

    Madurga, Sergio; Rey-Castro, Carlos; Pastor, Isabel; Vilaseca, Eudald; David, Calin; Garcés, Josep Lluís; Puy, Jaume; Mas, Francesc

    2011-11-14

    In this paper, we present a computer simulation study of the ion binding process at an ionizable surface using a semi-grand canonical Monte Carlo method that models the surface as a discrete distribution of charged and neutral functional groups in equilibrium with explicit ions modelled in the context of the primitive model. The parameters of the simulation model were tuned and checked by comparison with experimental titrations of carboxylated latex particles in the presence of different ionic strengths of monovalent ions. The titration of these particles was analysed by calculating the degree of dissociation of the latex functional groups vs. pH curves at different background salt concentrations. As the charge of the titrated surface changes during the simulation, a procedure to keep the electroneutrality of the system is required. Here, two approaches are used with the choice depending on the ion selected to maintain electroneutrality: counterion or coion procedures. We compare and discuss the difference between the procedures. The simulations also provided a microscopic description of the electrostatic double layer (EDL) structure as a function of pH and ionic strength. The results allow us to quantify the effect of the size of the background salt ions and of the surface functional groups on the degree of dissociation. The non-homogeneous structure of the EDL was revealed by plotting the counterion density profiles around charged and neutral surface functional groups. © 2011 American Institute of Physics
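
    To make the titration idea concrete, the sketch below runs a stripped-down constant-pH Monte Carlo on independent carboxyl groups. It deliberately omits the explicit ions, the primitive-model electrostatics and the counterion/coion electroneutrality procedures that are central to the paper, so it only recovers the ideal Henderson-Hasselbalch limit; the pKa and pH values are illustrative.

```python
import math
import random

# Stripped-down constant-pH Monte Carlo for N independent carboxyl groups.
# No explicit ions, electrostatics or electroneutrality procedures (all of
# which the paper includes); pKa and pH are illustrative values.
random.seed(7)
N, PKA, PH = 1000, 4.8, 5.5
LN10 = math.log(10.0)
protonated = [True] * N              # start fully associated (COOH)

for sweep in range(200):
    for i in range(N):
        # Free-energy change (in kT) of flipping the protonation state of site i.
        if protonated[i]:
            d_beta_f = LN10 * (PKA - PH)     # deprotonation: COOH -> COO- + H+
        else:
            d_beta_f = LN10 * (PH - PKA)     # protonation
        if random.random() < math.exp(-d_beta_f):
            protonated[i] = not protonated[i]

alpha = 1.0 - sum(protonated) / N
print("simulated degree of dissociation: %.3f" % alpha)
print("ideal Henderson-Hasselbalch     : %.3f" % (1.0 / (1.0 + 10.0 ** (PKA - PH))))
```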

  14. Estimating the impact of enterprise resource planning project management decisions on post-implementation maintenance costs: a case study using simulation modelling

    Science.gov (United States)

    Fryling, Meg

    2010-11-01

    Organisations often make implementation decisions with little consideration for the maintenance phase of an enterprise resource planning (ERP) system, resulting in significant recurring maintenance costs. Poor cost estimations are likely related to the lack of an appropriate framework for enterprise-wide pre-packaged software maintenance, which requires an ongoing relationship with the software vendor (Markus, M.L., Tanis, C., and Fenema, P.C., 2000. Multisite ERP implementation. CACM, 43 (4), 42-46). The end result is that critical project decisions are made with little empirical data, resulting in substantial long-term cost impacts. The product of this research is a formal dynamic simulation model that enables theory testing, scenario exploration and policy analysis. The simulation model ERPMAINT1 was developed by combining and extending existing frameworks in several research domains, and by incorporating quantitative and qualitative case study data. The ERPMAINT1 model evaluates tradeoffs between different ERP project management decisions and their impact on post-implementation total cost of ownership (TCO). Through model simulations a variety of dynamic insights were revealed that could assist ERP project managers. Major findings from the simulation show that upfront investments in mentoring and system exposure translate to long-term cost savings. The findings also indicate that in addition to customisations, add-ons have a significant impact on TCO.

  15. The Use of Model Matching Video Analysis and Computational Simulation to Study the Ankle Sprain Injury Mechanism

    Directory of Open Access Journals (Sweden)

    Daniel Tik-Pui Fong

    2012-10-01

    Full Text Available Lateral ankle sprains continue to be the most common injury sustained by athletes and create an annual healthcare burden of over $4 billion in the U.S. alone. Foot inversion is suspected in these cases, but the mechanism of injury remains unclear. While kinematics and kinetics data are crucial in understanding the injury mechanisms, ligament behaviour measures – such as ligament strains – are viewed as the potential causal factors of ankle sprains. This review article demonstrates a novel methodology that integrates model matching video analyses with computational simulations in order to investigate injury-producing events for a better understanding of such injury mechanisms. In particular, ankle joint kinematics from actual injury incidents were deduced by model matching video analyses and then input into a generic computational model based on rigid bone surfaces and deformable ligaments of the ankle so as to investigate the ligament strains that accompany these sprain injuries. These techniques may have the potential for guiding ankle sprain prevention strategies and targeted rehabilitation therapies.

  16. SYSTEM DYNAMIC MODELLING AND SIMULATION FOR CULTIVATION OF FOREST LAND: CASE STUDY PERUM PERHUTANI, CENTRAL JAVA, INDONESIA

    Directory of Open Access Journals (Sweden)

    Candra Musi

    2017-07-01

    Full Text Available Deforestation and forest degradation rates tend to rise every year. The problems associated with this issue are not limited to ecological concerns but extend to socio-economic impacts. The complexity of forest management is a serious barrier to determining a better management policy. A modelling system is a simple way to describe the real situation in nature. A qualitative approach was used to identify the relationships among the important behavioural dynamics, and the causal relationships among the factors were investigated using a causal loop diagram. The model conceptualization was constructed using a stock-flow diagram. The results of the simulation model were used to determine alternative policies for better forest management. The results indicated that tenant welfare would be enhanced through a production-sharing provision of 25% and a Corporate Social Responsibility contribution of 2%, which yields a reduction in cultivated area of 916.61 ha within a period of 67 years, or an average decline of 13.68 ha per year.

  17. Modelling and simulation of a heat exchanger

    Science.gov (United States)

    Xia, Lei; Deabreu-Garcia, J. Alex; Hartley, Tom T.

    1991-01-01

    Two models for two different control systems are developed for a parallel heat exchanger. First, by spatially lumping a heat exchanger model, a good approximate model with a high system order is produced. Model reduction techniques are then applied to obtain low-order models that are suitable for dynamic analysis and control design. The simulation method is discussed to ensure valid simulation results.

  18. Computer Models Simulate Fine Particle Dispersion

    Science.gov (United States)

    2010-01-01

    Through a NASA Seed Fund partnership with DEM Solutions Inc., of Lebanon, New Hampshire, scientists at Kennedy Space Center refined existing software to study the electrostatic phenomena of granular and bulk materials as they apply to planetary surfaces. The software, EDEM, allows users to import particles and obtain accurate representations of their shapes for modeling purposes, such as simulating bulk solids behavior, and was enhanced to be able to more accurately model fine, abrasive, cohesive particles. These new EDEM capabilities can be applied in many industries unrelated to space exploration and have been adopted by several prominent U.S. companies, including John Deere, Pfizer, and Procter & Gamble.

  19. UDP-N-Acetyl glucosamine pyrophosphorylase as novel target for controlling Aedes aegypti – molecular modeling, docking and simulation studies

    Directory of Open Access Journals (Sweden)

    Bhagath Kumar Palaka

    2014-12-01

    Full Text Available Aedes aegypti is a vector that transmits diseases like dengue fever, chikungunya, and yellow fever. It is distributed in all tropical and subtropical regions of the world. According to WHO reports, 40% of the world's population is currently at risk for dengue fever. As vaccines are not available for such diseases, controlling the mosquito population becomes necessary. Hence, this study considers UDP-N-acetyl glucosamine pyrophosphorylase of Aedes aegypti (AaUAP), an essential enzyme for chitin metabolism in insects, as a drug target. The structure of AaUAP was predicted and validated using an in silico approach. Further, docking studies were performed using a set of 10 inhibitors, out of which NAG9 was found to have a good docking score, which was further supported by simulation studies. Hence, we propose that NAG9 can be considered a potential hit in designing new inhibitors to control Aedes aegypti.

  20. Digital Simulation Games for Social Studies Classrooms

    Science.gov (United States)

    Devlin-Scherer, Roberta; Sardone, Nancy B.

    2010-01-01

    Data from ten teacher candidates studying teaching methods were analyzed to determine perceptions toward digital simulation games in the area of social studies. This research can be used as a conceptual model of how current teacher candidates react to new methods of instruction and determine how education programs might change existing curricula…

  1. Systematic simulations of modified gravity: chameleon models

    International Nuclear Information System (INIS)

    Brax, Philippe; Davis, Anne-Christine; Li, Baojiu; Winther, Hans A.; Zhao, Gong-Bo

    2013-01-01

    In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 hMpc⁻¹, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as a guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.

  2. Systematic simulations of modified gravity: chameleon models

    Energy Technology Data Exchange (ETDEWEB)

    Brax, Philippe [Institut de Physique Theorique, CEA, IPhT, CNRS, URA 2306, F-91191Gif/Yvette Cedex (France); Davis, Anne-Christine [DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Li, Baojiu [Institute for Computational Cosmology, Department of Physics, Durham University, Durham DH1 3LE (United Kingdom); Winther, Hans A. [Institute of Theoretical Astrophysics, University of Oslo, 0315 Oslo (Norway); Zhao, Gong-Bo, E-mail: philippe.brax@cea.fr, E-mail: a.c.davis@damtp.cam.ac.uk, E-mail: baojiu.li@durham.ac.uk, E-mail: h.a.winther@astro.uio.no, E-mail: gong-bo.zhao@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom)

    2013-04-01

    In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 hMpc{sup −1}, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as a guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.

  3. Modeling and simulation of gamma camera

    International Nuclear Information System (INIS)

    Singh, B.; Kataria, S.K.; Samuel, A.M.

    2002-08-01

    Simulation techniques play a vital role in the design of sophisticated instruments and in the training of operating and maintenance staff. Gamma camera systems have been used for functional imaging in nuclear medicine. Functional images are derived from the external counting of a gamma-emitting radioactive tracer that, after introduction into the body, mimics the behavior of a native biochemical compound. The position-sensitive detector yields the coordinates of the gamma ray interaction with the detector, which are used to estimate the point of gamma ray emission within the tracer distribution space. This advanced imaging device is thus dependent on the performance of algorithms for coordinate computation, estimation of the point of emission, generation of the image and display of the image data. Contemporary systems also have protocols for quality control and clinical evaluation of imaging studies. Simulation of this processing leads to an understanding of the basic camera design problems. This report describes a PC-based package for the design and simulation of a gamma camera, along with options for simulating data acquisition and the quality control of imaging studies. Image display and data processing, the other options implemented in SIMCAM, will be described in separate reports (under preparation). Gamma camera modeling and simulation in SIMCAM has preset configurations of the design parameters for various crystal detector sizes, with the option to pack the PMTs on a hexagonal or square lattice. Different algorithms for coordinate computation and spatial distortion removal are available, in addition to simulation of the energy correction circuit. The user can simulate different static, dynamic, MUGA and SPECT studies. The acquired/simulated data are processed for quality control and clinical evaluation of the imaging studies. Results show that the program can be used to assess these performances. Also, the variations in performance parameters can be assessed due to the induced

  4. Effect of process operating conditions in the biomass torrefaction: A simulation study using one-dimensional reactor and process model

    International Nuclear Information System (INIS)

    Park, Chansaem; Zahid, Umer; Lee, Sangho; Han, Chonghun

    2015-01-01

    A torrefaction reactor model is required for the development of reactor and process designs for biomass torrefaction. In this study, a one-dimensional reactor model is developed based on a kinetic model describing the evolution of volatile components and solids, and on an existing thermochemical model accounting for the heat and mass balances. The developed reactor model uses the temperature and flow rate of the recycled gas as the practical manipulated variables instead of the torrefaction temperature. Temperature profiles of the gas and solid phases were generated with the developed model for different practical thermal conditions. Moreover, the effect of each selected operating variable on the parameters of the torrefaction process, and the effect of all operating variables at a specified energy yield, were analyzed. The results of the sensitivity analysis show that the residence time has an insignificant influence on the energy yield when the flow rate of the recycled gas is low. Moreover, a higher recycled-gas temperature combined with a low flow rate and residence time produces attractive properties of the torrefied biomass, including HHV and grindability, when the energy yield is specified. - Highlights: • A one-dimensional reactor model for biomass torrefaction is developed considering the heat and mass balance. • The developed reactor model uses the temperature and flow rate of the recycled gas as the practical manipulated variables. • The effect of operating variables on the parameters of the torrefaction process is analyzed. • The results of the sensitivity analysis provide notable discussion points not addressed by previous research.

  5. Study, simulation and modelling of a gamma photon detector placed on an integral-type eccentric orbit

    International Nuclear Information System (INIS)

    Diallo, N.

    1999-01-01

    Gamma-ray lines are the signature of nuclear reactions and other high-energy processes that take place in the Universe. Their measurement and study provide invaluable information on many important problems in high energy astrophysics, including particle acceleration, the physics of compact objects and nucleosynthesis. However, the observation of astronomical gamma-ray sources has to be performed above the atmosphere because the Earth's atmosphere is opaque to gamma-rays. Unfortunately, at these altitudes, space-based high-energy electromagnetic radiation (X- and gamma-ray) detectors are exposed to intense parasitic fluxes of radiation and particles induced by primary galactic cosmic rays. These fluxes, as well as the radiation and secondary particles they generate, constitute a considerable source of background which limits their performance. Our study has been done in the framework of the INTEGRAL mission, a gamma-ray astronomy mission of the European Space Agency. INTEGRAL is devoted to the observation of celestial gamma-ray sources. It consists of two main instruments: an imager IBIS and a high resolution germanium spectrometer SPI (ΔE/E = 1.6 × 10⁻³ at 1.3 MeV). We studied the hadronic component of the SPI background. This component is due to the radioactive decay of unstable nuclides produced by the interactions of cosmic-ray protons with the materials of SPI. It consists of a continuum with gamma ray lines superimposed. To study the nuclear processes, Monte Carlo simulations have been performed with the nuclear code TIERCE developed at CEA/DAM. We used the GEANT Monte Carlo code developed at CERN to simulate the germanium detectors' response. Background reduction techniques such as PSD (Pulse Shape Discrimination) and energetic signatures have been applied in well-chosen energy ranges to reduce the background and improve the SPI sensitivity. With the estimated SPI narrow-line sensitivity level, SPI would be able to detect many gamma ray lines emitted in active galactic sites.

  6. On Improving 4-km Mesoscale Model Simulations

    Science.gov (United States)

    Deng, Aijun; Stauffer, David R.

    2006-03-01

    A previous study showed that use of analysis-nudging four-dimensional data assimilation (FDDA) and improved physics in the fifth-generation Pennsylvania State University National Center for Atmospheric Research Mesoscale Model (MM5) produced the best overall performance on a 12-km-domain simulation, based on the 18-19 September 1983 Cross-Appalachian Tracer Experiment (CAPTEX) case. However, reducing the simulated grid length to 4 km had detrimental effects. The primary cause was likely the explicit representation of convection accompanying a cold-frontal system. Because no convective parameterization scheme (CPS) was used, the convective updrafts were forced on coarser-than-realistic scales, and the rainfall and the atmospheric response to the convection were too strong. The evaporative cooling and downdrafts were too vigorous, causing widespread disruption of the low-level winds and spurious advection of the simulated tracer. In this study, a series of experiments was designed to address this general problem involving 4-km model precipitation and gridpoint storms and associated model sensitivities to the use of FDDA, planetary boundary layer (PBL) turbulence physics, grid-explicit microphysics, a CPS, and enhanced horizontal diffusion. Some of the conclusions include the following: 1) Enhanced parameterized vertical mixing in the turbulent kinetic energy (TKE) turbulence scheme has shown marked improvements in the simulated fields. 2) Use of a CPS on the 4-km grid improved the precipitation and low-level wind results. 3) Use of the Hong and Pan Medium-Range Forecast PBL scheme showed larger model errors within the PBL and a clear tendency to predict much deeper PBL heights than the TKE scheme. 4) Combining observation-nudging FDDA with a CPS produced the best overall simulations. 5) Finer horizontal resolution does not always produce better simulations, especially in convectively unstable environments, and a new CPS suitable for 4-km resolution is needed. 6
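
    For readers unfamiliar with FDDA, nudging (Newtonian relaxation) adds a forcing term that relaxes each prognostic variable toward gridded analyses or toward observations; the generic textbook form is sketched below, without the specific weighting functions used in MM5.

```latex
% Generic Newtonian-relaxation (nudging) term used in FDDA; the specific
% weighting functions and coefficients in MM5 are more detailed than this.
\[
  \frac{\partial \phi}{\partial t}
    = F(\phi,\mathbf{x},t)
    + G_{\phi}\, W(\mathbf{x},t)\,\bigl(\phi_{\mathrm{ref}} - \phi\bigr)
\]
% F       : model dynamics and physics acting on the prognostic variable phi
% G_phi   : nudging (relaxation) coefficient, units of 1/s
% W       : spatial/temporal weighting (analysis nudging weights grid points,
%           observation nudging weights the influence of individual obs)
% phi_ref : the gridded analysis or observed value toward which phi is relaxed
```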

  7. Modelling interplanetary CMEs using magnetohydrodynamic simulations

    Directory of Open Access Journals (Sweden)

    P. J. Cargill

    Full Text Available The dynamics of Interplanetary Coronal Mass Ejections (ICMEs) are discussed from the viewpoint of numerical modelling. Hydrodynamic models are shown to give a good zero-order picture of the plasma properties of ICMEs, but they cannot model the important magnetic field effects. Results from MHD simulations are shown for a number of cases of interest. It is demonstrated that the strong interaction of the ICME with the solar wind leads to the ICME and solar wind velocities being close to each other at 1 AU, despite their having very different speeds near the Sun. It is also pointed out that this interaction leads to a distortion of the ICME geometry, making cylindrical symmetry a dubious assumption for the CME field at 1 AU. In the presence of a significant solar wind magnetic field, the magnetic fields of the ICME and solar wind can reconnect with each other, leading to an ICME that has solar wind-like field lines. This effect is especially important when an ICME with the right sense of rotation propagates down the heliospheric current sheet. It is also noted that a lack of knowledge of the coronal magnetic field makes such simulations of little use in space weather forecasts that require knowledge of the ICME magnetic field strength.

    Key words. Interplanetary physics (interplanetary magnetic fields); Solar physics, astrophysics, and astronomy (flares and mass ejections); Space plasma physics (numerical simulation studies)

  8. Modeling, Simulation and Position Control of 3DOF Articulated Manipulator

    Directory of Open Access Journals (Sweden)

    Hossein Sadegh Lafmejani

    2014-08-01

    Full Text Available In this paper, the modeling, simulation and control of a 3-degree-of-freedom articulated robotic manipulator are studied. First, we extracted the kinematics and dynamics equations of the manipulator using the Lagrange method. In order to validate the analytical model of the manipulator, we compared the model simulated in the Matlab simulation environment with the model simulated with the SimMechanics toolbox. A sample path was designed for analyzing the tracking problem. The system was linearized with feedback linearization and a PID controller was then applied to track a reference trajectory. Finally, the control results were compared with those of a nonlinear PID controller.
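
    After exact feedback linearisation, each joint of such a manipulator reduces to a double integrator driven by a virtual acceleration input, so trajectory tracking can be prototyped with a simple PID loop. The sketch below does this for a single joint; the gains, the sinusoidal reference and the time step are invented for illustration and do not correspond to the paper's Matlab/SimMechanics model.

```python
import math

# After exact feedback linearisation each joint behaves like a double
# integrator driven by a virtual acceleration v, so a PID loop on v can track
# a reference trajectory. Gains, reference and time step are illustrative and
# are not taken from the paper's Matlab/SimMechanics model.
KP, KI, KD = 100.0, 20.0, 25.0
dt, t_end = 0.001, 5.0

q, qd, integ, max_err, t = 0.0, 0.0, 0.0, 0.0, 0.0
while t < t_end:
    q_ref = 0.5 * math.sin(1.5 * t)          # desired joint angle (rad)
    qd_ref = 0.75 * math.cos(1.5 * t)        # desired joint velocity (rad/s)
    err = q_ref - q
    integ += err * dt
    v = KP * err + KI * integ + KD * (qd_ref - qd)   # virtual acceleration
    qd += v * dt                             # double-integrator joint
    q += qd * dt
    if t > 1.0:                              # skip the initial transient
        max_err = max(max_err, abs(err))
    t += dt

print("max tracking error after transient: %.4f rad" % max_err)
```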

  9. Modeling and Simulation of Low Voltage Arcs

    NARCIS (Netherlands)

    Ghezzi, L.; Balestrero, A.

    2010-01-01

    Modeling and Simulation of Low Voltage Arcs is an attempt to improve the physical understanding, mathematical modeling and numerical simulation of the electric arcs that are found during current interruptions in low voltage circuit breakers. An empirical description is gained by refined electrical

  10. Biomedical Simulation Models of Human Auditory Processes

    Science.gov (United States)

    Bicak, Mehmet M. A.

    2012-01-01

    Detailed acoustic engineering models are developed to explore noise propagation mechanisms associated with noise attenuation and transmission paths created when using hearing protectors such as earplugs and headsets in high-noise environments. Biomedical finite element (FE) models are developed based on volume Computed Tomography scan data, which provide explicit external ear, ear canal, middle ear ossicular bone and cochlea geometry. Results from these studies have enabled a greater understanding of hearing protector to flesh dynamics as well as the prioritization of noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for the exploration of new design principles and methods in both earplug and earcup applications. These models are currently being used in the development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast-related impulse noise on human auditory mechanisms and brain tissue.

  11. Numerical Study of Natural Gas/Diesel Reactivity Controlled Compression Ignition Combustion with Large Eddy Simulation and Reynolds-Averaged Navier–Stokes Model

    Directory of Open Access Journals (Sweden)

    Amir-Hasan Kakaee

    2018-03-01

    Full Text Available In the current study, a comparison is performed between Large Eddy Simulation (LES) and Reynolds-averaged Navier–Stokes (RANS) turbulence models on a natural gas/diesel Reactivity Controlled Compression Ignition (RCCI) engine. The numerical results are validated against the available research work in the literature. The RNG (Re-Normalization Group) k−ε and dynamic structure models are employed to model turbulent flow for the RANS and LES simulations, respectively. Parameters like the premixed natural gas mass fraction, the second start of injection timing (SOI2) of diesel and the engine speed are studied to compare the performance of the RANS and LES models in predicting combustion and pollutant emissions. The results obtained showed that the LES and RANS models give almost similar predictions of cylinder pressure and heat release rate at lower natural gas mass fractions and late SOI2 timings. However, the LES showed improved capability to predict natural gas auto-ignition and pollutant emissions compared to the RANS model, especially at higher natural gas mass fractions.

  12. Deformation of the Durom acetabular component and its impact on tribology in a cadaveric model--a simulator study.

    Science.gov (United States)

    Liu, Feng; Chen, Zhefeng; Gu, Yanqing; Wang, Qing; Cui, Weiding; Fan, Weimin

    2012-01-01

    Recent studies have shown that the acetabular component frequently becomes deformed during press-fit insertion. The aim of this study was to explore the deformation of the Durom cup after implantation and to clarify the impact of deformation on wear and ion release of the Durom large head metal-on-metal (MOM) total hips in simulators. Six Durom cups impacted into reamed acetabula of fresh cadavers were used as the experimental group and another 6 size-paired intact Durom cups constituted the control group. All 12 Durom MOM total hips were put through a 3 million cycle (MC) wear test in simulators. The 6 cups in the experimental group were all deformed, with a mean deformation of 41.78 ± 8.86 µm. The average volumetric wear rate in the experimental group and in the control group in the first million cycles was 6.65 ± 0.29 mm³/MC and 0.89 ± 0.04 mm³/MC, respectively (t = 48.43, p = 0.000). The ion levels of Cr and Co in the experimental group were also higher than those in the control group before 2.0 MC. However, there was no difference in the ion levels between 2.0 and 3.0 MC. This finding implies that the non-modular acetabular component of the Durom total hip prosthesis is likely to become deformed during press-fit insertion, and that the deformation will result in increased volumetric wear and increased ion release. This study was designed to explore the deformation of the Durom cup after implantation and to clarify the impact of deformation on wear and ion release of the prosthesis. Deformation of the cup after implantation increases the wear of MOM bearings and the resulting ion levels. The clinical use of the Durom large head prosthesis should therefore be undertaken with great care.

  13. Deformation of the Durom acetabular component and its impact on tribology in a cadaveric model--a simulator study.

    Directory of Open Access Journals (Sweden)

    Feng Liu

    Full Text Available BACKGROUND: Recent studies have shown that the acetabular component frequently becomes deformed during press-fit insertion. The aim of this study was to explore the deformation of the Durom cup after implantation and to clarify the impact of deformation on wear and ion release of the Durom large head metal-on-metal (MOM) total hips in simulators. METHODS: Six Durom cups impacted into reamed acetabula of fresh cadavers were used as the experimental group and another 6 size-paired intact Durom cups constituted the control group. All 12 Durom MOM total hips were put through a 3 million cycle (MC) wear test in simulators. RESULTS: The 6 cups in the experimental group were all deformed, with a mean deformation of 41.78 ± 8.86 µm. The average volumetric wear rate in the experimental group and in the control group in the first million cycles was 6.65 ± 0.29 mm³/MC and 0.89 ± 0.04 mm³/MC, respectively (t = 48.43, p = 0.000). The ion levels of Cr and Co in the experimental group were also higher than those in the control group before 2.0 MC. However, there was no difference in the ion levels between 2.0 and 3.0 MC. CONCLUSIONS: This finding implies that the non-modular acetabular component of the Durom total hip prosthesis is likely to become deformed during press-fit insertion, and that the deformation will result in increased volumetric wear and increased ion release. CLINICAL RELEVANCE: This study was designed to explore the deformation of the Durom cup after implantation and to clarify the impact of deformation on wear and ion release of the prosthesis. Deformation of the cup after implantation increases the wear of MOM bearings and the resulting ion levels. The clinical use of the Durom large head prosthesis should therefore be undertaken with great care.

  14. MODELING AND SIMULATION OF A HYDROCRACKING UNIT

    Directory of Open Access Journals (Sweden)

    HASSAN A. FARAG

    2016-06-01

    Full Text Available Hydrocracking is used in the petroleum industry to convert low-quality feedstocks into high-valued transportation fuels such as gasoline, diesel, and jet fuel. The aim of the present work is to develop a rigorous steady-state two-dimensional mathematical model, including conservation equations of mass and energy, for simulating the operation of a hydrocracking unit. Both the catalyst bed and the quench zone have been included in this integrated model. The model equations were numerically solved in both the axial and radial directions using Matlab software. The presented model was tested against real plant data from Egypt. The results indicated very good agreement between the model predictions and industrial values for temperature profiles, concentration profiles, and conversion in both the radial and axial directions of the hydrocracking unit. Simulation of the conversion and temperature profiles in the quench zone was also included and showed low deviation from the actual values. For the concentration profiles, the percentage deviation was found to be 9.28% in the first reactor and 9.6% in the second reactor. The effects of several parameters such as the pellet heat transfer coefficient, effective radial thermal conductivity, wall heat transfer coefficient, effective radial diffusivity, and cooling medium (quench zone) have been included in this study. Varying the wall heat transfer coefficient and the effective radial diffusivity for the near-wall region gave no remarkable changes in the temperature profiles. On the other hand, even small variations of the effective radial thermal conductivity affected the simulated temperature profiles significantly, and this effect could not be compensated by variations of the other parameters of the model.

  15. TMS modeling toolbox for realistic simulation.

    Science.gov (United States)

    Cho, Young Sun; Suh, Hyun Sang; Lee, Won Hee; Kim, Tae-Seong

    2010-01-01

    Transcranial magnetic stimulation (TMS) is a technique for brain stimulation using rapidly changing magnetic fields generated by coils. It has been established as an effective stimulation technique to treat patients suffering from damaged brain functions. Although TMS is known to be painless and noninvasive, it can also be harmful to the brain through incorrect focusing and excessive stimulation, which might result in seizure. Therefore there is an ongoing research effort to elucidate and better understand the effect and mechanism of TMS. Lately, the boundary element method (BEM) and the finite element method (FEM) have been used to simulate the electromagnetic phenomena of TMS. However, there is a lack of general tools to generate TMS models because of difficulties in the realistic modeling of the human head and TMS coils. In this study, we have developed a toolbox through which one can generate high-resolution FE TMS models. The toolbox allows creating FE models of the head with isotropic and anisotropic electrical conductivities in five different tissues of the head, together with the coils in 3D. The generated TMS model is importable to FE software packages such as ANSYS for further and efficient electromagnetic analysis. We present a set of demonstrative results of realistic simulation of TMS with our toolbox.

  16. Simulation of volumetrically heated pebble beds in solid breeding blankets for fusion reactors. Modelling, experimental validation and sensitivity studies

    International Nuclear Information System (INIS)

    Hernandez Gonzalez, Francisco Alberto

    2016-01-01

    The Breeder Units contain pebble beds of lithium orthosilicate (Li_4SiO_4) as tritium breeder material and beryllium as neutron multiplier. In this dissertation a closed strategy for the thermo-mechanical validation of the Breeder Units has been developed. This strategy is based on the development of dedicated testing and modeling tools, which are needed for the qualification of the thermo-mechanical functionality of these components in an out-of-pile experimental campaign. The neutron flux in the Breeder Units induces a nonhomogeneous volumetric heating in the pebble beds that must be mimicked in an out-of-pile experiment with an external heating system that minimizes the intrusion into the pebble beds. Therefore, a heater system that simulates this volumetric heating has been developed. This heater system is based on ohmic heating and linear heater elements, which approximate the point heat sources of the granular material by linear sources. These linear sources represent "linear pebbles" in discrete locations close enough together to reproduce the thermal gradients occurring in the functional materials. The heater concept has been developed for the Li_4SiO_4 and is based on a hexagonal matrix arrangement of linear, parallel heater elements of 1 mm diameter separated by 7 mm. A set of uniformly distributed thermocouples in the transversal and longitudinal directions in the pebble bed midplane allows a 2D temperature reconstruction of that measurement plane by means of biharmonic spline interpolation. This heating system has been implemented in a relevant Breeder Unit region and its proof-of-concept has been tested in a PRE-test Mock-Up eXperiment (PREMUX) that has been designed and constructed in the frame of this dissertation. The packing factor of the pebble bed with and without the heating system does not show significant differences, giving indirect evidence of the low intrusion of the system. Such low intrusion has been confirmed by in

  17. Simulation of volumetrically heated pebble beds in solid breeding blankets for fusion reactors. Modelling, experimental validation and sensitivity studies

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez Gonzalez, Francisco Alberto

    2016-10-14

    The Breeder Units contain pebble beds of lithium orthosilicate (Li{sub 4}SiO{sub 4}) as tritium breeder material and beryllium as neutron multiplier. In this dissertation a closed strategy for the thermo-mechanical validation of the Breeder Units has been developed. This strategy is based on the development of dedicated testing and modeling tools, which are needed for the qualification of the thermo-mechanical functionality of these components in an out-of-pile experimental campaign. The neutron flux in the Breeder Units induces a nonhomogeneous volumetric heating in the pebble beds that must be mimicked in an out-of-pile experiment with an external heating system that minimizes the intrusion into the pebble beds. Therefore, a heater system that simulates this volumetric heating has been developed. This heater system is based on ohmic heating and linear heater elements, which approximate the point heat sources of the granular material by linear sources. These linear sources represent "linear pebbles" in discrete locations close enough together to reproduce the thermal gradients occurring in the functional materials. The heater concept has been developed for the Li{sub 4}SiO{sub 4} and is based on a hexagonal matrix arrangement of linear, parallel heater elements of 1 mm diameter separated by 7 mm. A set of uniformly distributed thermocouples in the transversal and longitudinal directions in the pebble bed midplane allows a 2D temperature reconstruction of that measurement plane by means of biharmonic spline interpolation. This heating system has been implemented in a relevant Breeder Unit region and its proof-of-concept has been tested in a PRE-test Mock-Up eXperiment (PREMUX) that has been designed and constructed in the frame of this dissertation. The packing factor of the pebble bed with and without the heating system does not show significant differences, giving indirect evidence of the low intrusion of the system. Such
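    The 2D temperature reconstruction from the thermocouple readings is described as a biharmonic spline interpolation. As a rough illustration of that kind of scattered-data reconstruction (not the PREMUX implementation), SciPy's closely related thin-plate-spline radial basis interpolator can be used; the thermocouple positions and readings below are synthetic.

```python
# Scattered-data 2-D temperature reconstruction with a thin-plate-spline RBF,
# in the spirit of the biharmonic spline interpolation described (synthetic data).
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 70.0, size=(25, 2))                    # thermocouple positions, mm
temps = 300.0 + 4.0 * xy[:, 0] - 0.03 * xy[:, 0] * xy[:, 1]  # made-up readings, degC

interp = RBFInterpolator(xy, temps, kernel="thin_plate_spline")

gx, gy = np.meshgrid(np.linspace(0.0, 70.0, 71), np.linspace(0.0, 70.0, 71))
grid = np.column_stack([gx.ravel(), gy.ravel()])
temperature_map = interp(grid).reshape(gx.shape)             # reconstructed mid-plane field
print(temperature_map.shape, round(float(temperature_map.max()), 1))
```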

  18. Assessing the accuracy of a simplified building energy simulation model using BESTEST : the case study of Brazilian regulation

    NARCIS (Netherlands)

    Melo, A.P.; Cóstola, D.; Lamberts, R.; Hensen, J.L.M.

    2012-01-01

    This paper reports the use of an internationally recognized validation and diagnostics procedure to test the fidelity of a simplified calculation method. The case study is the simplified model for calculation of energy performance of building envelopes, introduced by the Brazilian regulation for

  19. PEM Fuel Cells with Bio-Ethanol Processor Systems A Multidisciplinary Study of Modelling, Simulation, Fault Diagnosis and Advanced Control

    CERN Document Server

    Feroldi, Diego; Outbib, Rachid

    2012-01-01

    An apparently appropriate control scheme for PEM fuel cells may actually lead to an inoperable plant when it is connected to other unit operations in a process with recycle streams and energy integration. PEM Fuel Cells with Bio-Ethanol Processor Systems presents a control system design that provides basic regulation of the hydrogen production process with PEM fuel cells. It then goes on to construct a fault diagnosis system to improve plant safety above this control structure. PEM Fuel Cells with Bio-Ethanol Processor Systems is divided into two parts: the first covers fuel cells and the second discusses plants for hydrogen production from bio-ethanol to feed PEM fuel cells. Both parts give detailed analyses of modeling, simulation, advanced control, and fault diagnosis. They give an extensive, in-depth discussion of the problems that can occur in fuel cell systems and propose a way to control these systems through advanced control algorithms. A significant part of the book is also given over to computer-aid...

  20. Whole-building Hygrothermal Simulation Model

    DEFF Research Database (Denmark)

    Rode, Carsten; Grau, Karl

    2003-01-01

    An existing integrated simulation tool for dynamic thermal simulation of buildings was extended with a transient model for moisture release and uptake in building materials. Validation of the new model was begun with comparison against measurements in an outdoor test cell furnished with single...... materials. Almost quasi-steady, cyclic experiments were used to compare the indoor humidity variation and the numerical results of the integrated simulation tool with the new moisture model. Except for the case with chipboard as furnishing, the predictions of indoor humidity with the detailed model were...

  1. Mathematical modeling and numerical study of a spray in a rarefied gas. Application to the simulation of dust particle transport in ITER in case of vacuum loss accident

    International Nuclear Information System (INIS)

    Charles, F.

    2009-11-01

    -In-Cell method. Starting from these models, we perform some numerical simulations of a loss-of-vacuum event in the framework of safety studies in ITER. (author)

  2. Influence of B{sub 1}-inhomogeneity on pharmacokinetic modeling of dynamic contrast-enhanced MRI: A simulation study

    Energy Technology Data Exchange (ETDEWEB)

    Park, Bun Woo [Dept. of Radiology, Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of); Choi, Byung Se [Dept. of Radiology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of); and others

    2017-08-01

    To simulate the B1-inhomogeneity-induced variation of pharmacokinetic parameters on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). B1-inhomogeneity-induced flip angle (FA) variation was estimated in a phantom study. Monte Carlo simulation was performed to assess the FA-deviation-induced measurement error of the pre-contrast R1, contrast-enhancement ratio, Gd-concentration, and two-compartment pharmacokinetic parameters (Ktrans, ve, and vp). B1-inhomogeneity resulted in −23–5% fluctuations (95% confidence interval [CI] of % error) of FA. The 95% CIs of FA-dependent % errors in the gray matter and blood were as follows: −16.7–61.8% and −16.7–61.8% for the pre-contrast R1, −1.0–0.3% and −5.2–1.3% for the contrast-enhancement ratio, and −14.2–58.1% and −14.1–57.8% for the Gd-concentration, respectively. These resulted in −43.1–48.4% error for Ktrans, −32.3–48.6% error for the ve, and −43.2–48.6% error for vp. The pre-contrast R1 was more vulnerable to FA error than the contrast-enhancement ratio, and was therefore a significant cause of the Gd-concentration error. For example, a −10% FA error led to a 23.6% deviation in the pre-contrast R1, −0.4% in the contrast-enhancement ratio, and 23.6% in the Gd-concentration. In a simulated condition with a 3% FA error in a target lesion and a −10% FA error in a feeding vessel, the % errors of the pharmacokinetic parameters were −23.7% for Ktrans, −23.7% for ve, and −23.7% for vp. Even a small degree of B1-inhomogeneity can cause a significant error in the measurement of pharmacokinetic parameters on DCE-MRI, while the vulnerability of the pre-contrast R1 calculations to FA deviations is a significant cause of the miscalculation.

  3. Assessing the impact of releases of radionuclides into sewage systems in urban environment - simulation, modelling and experimental studies - LUCIA

    International Nuclear Information System (INIS)

    Sundelll-Bergman, S.; Avila, R.; Cruz, I. de la; Xu, S.; Puhakainen, M.; Heikkinene, T.; Rahola, T.; Hosseini, A.; Nielsen, Sven; Sigurgeirsson, M.

    2009-06-01

    This report summarises the findings of a project on assessing the impact of releases of radionuclides into sewage systems, which was established to provide more knowledge and suitable tools for emergency preparedness purposes in urban areas. It was known that the design of sewage plants, and their wastewater treatments, is rather similar between the Nordic countries. One sewage plant in each of the five Nordic countries was selected for assessing the impact of radionuclide releases from hospitals into their sewerage systems. Measurements and model predictions of dose assessments for different potentially exposed members of the public were carried out. The results from the dose assessments indicate that in the case of routine releases the annual doses to the three hypothetical groups of individuals are most likely insignificant. Estimated doses for workers are below 10 μSv/y for the two studied radionuclides, 99mTc and 131I. If uncertainties in the predictions of activity concentrations in sludge are considered, then the probability of obtaining doses above 10 μSv/y may not be insignificant. The models and approaches developed can also be applied in the case of accidental releases. A laboratory inter-comparison exercise was also organised to compare analytical results across the laboratories participating in the project, using 131I, the dominating man-made radionuclide in sewage systems owing to its medical use. A process-oriented model of the biological treatment, which does not require as much input data as the LUCIA model, is also proposed in the report. This model is a combination of a simplified version of the well-known Activated Sludge Model No. 1 (Henze, 1987) and the Kd concept used in the LUCIA model. The simplified model is able to estimate the concentrations and the retention time of the sludge in different parts of the treatment plant, which, in turn, can be used as a tool for dose assessment purposes. (au)

  4. Assessing the impact of releases of radionuclides into sewage systems in urban environment - simulation, modelling and experimental studies - LUCIA

    Energy Technology Data Exchange (ETDEWEB)

    Sundelll-Bergman, S. (Vattenfall Power Consultant, Stockholm (Sweden)); Avila, R.; Cruz, I. de la (Facilia AB, (Sweden)); Xu, S. (Swedish Radiation Safety Authority, (Sweden)); Puhakainen, M.; Heikkinene, T.; Rahola, T. (STUK (Finland)); Hosseini, A. (Norwegian Radiation Protection Authority (Norway)); Nielsen, Sven (Risoe National Laboratory for Sustainable Energy, DTU (Denmark)); Sigurgeirsson, M. (Geislavarnir rikisins (Iceland))

    2009-06-15

    This report summarises the findings of a project on assessing the impact of releases of radionuclides into sewage systems, which was established to provide more knowledge and suitable tools for emergency preparedness purposes in urban areas. It was known that the design of sewage plants, and their wastewater treatments, is rather similar between the Nordic countries. One sewage plant in each of the five Nordic countries was selected for assessing the impact of radionuclide releases from hospitals into their sewerage systems. Measurements and model predictions of dose assessments for different potentially exposed members of the public were carried out. The results from the dose assessments indicate that in the case of routine releases the annual doses to the three hypothetical groups of individuals are most likely insignificant. Estimated doses for workers are below 10 muSv/y for the two studied radionuclides, 99mTc and 131I. If uncertainties in the predictions of activity concentrations in sludge are considered, then the probability of obtaining doses above 10 muSv/y may not be insignificant. The models and approaches developed can also be applied in the case of accidental releases. A laboratory inter-comparison exercise was also organised to compare analytical results across the laboratories participating in the project, using 131I, the dominating man-made radionuclide in sewage systems owing to its medical use. A process-oriented model of the biological treatment, which does not require as much input data as the LUCIA model, is also proposed in the report. This model is a combination of a simplified version of the well-known Activated Sludge Model No. 1 (Henze, 1987) and the Kd concept used in the LUCIA model. The simplified model is able to estimate the concentrations and the retention time of the sludge in different parts of the treatment plant, which, in turn, can be used as a tool for dose assessment purposes. (au)

  5. SU-E-J-82: Intra-Fraction Proton Beam-Range Verification with PET Imaging: Feasibility Studies with Monte Carlo Simulations and Statistical Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Lou, K [U.T M.D. Anderson Cancer Center, Houston, TX (United States); Rice University, Houston, TX (United States); Mirkovic, D; Sun, X; Zhu, X; Poenisch, F; Grosshans, D; Shao, Y [U.T M.D. Anderson Cancer Center, Houston, TX (United States); Clark, J [Rice University, Houston, TX (United States)

    2014-06-01

    Purpose: To study the feasibility of intra-fraction proton beam-range verification with PET imaging. Methods: Two homogeneous cylindrical PMMA phantoms (290 mm axial length; 38 mm and 200 mm diameter, respectively) were studied using PET imaging: the small phantom with a mouse-sized PET scanner (61 mm diameter field of view (FOV)) and the larger phantom with a human-brain-sized PET scanner (300 mm FOV). Monte Carlo (MC) simulations (MCNPX and GATE) were used to simulate 179.2 MeV proton pencil beams irradiating the two phantoms and being imaged by the two PET systems. A total of 50 simulations were conducted to generate 50 positron activity distributions and, correspondingly, 50 measured activity-ranges. The accuracy and precision of these activity-ranges were calculated under different conditions (including count statistics and other factors, such as crystal cross-section). Separately from the MC simulations, an activity distribution measured from a simulated PET image was modeled as a noiseless positron activity distribution corrupted by Poisson counting noise. The results from these two approaches were compared to assess the impact of count statistics on the accuracy and precision of activity-range calculations. Results: The MC simulations show that the accuracy and precision of an activity-range are dominated by the number (N) of coincidence events in the reconstructed image. They improve in proportion to 1/sqrt(N), which can be understood from the statistical modeling. The MC simulations also indicate that the coincidence events acquired within the first 60 seconds with 10^9 protons (small phantom) and 10^10 protons (large phantom) are sufficient to achieve both sub-millimeter accuracy and precision. Conclusion: Under the current MC simulation conditions, the initial study indicates that the accuracy and precision of beam-range verification are dominated by count statistics, and intra-fraction PET image-based beam-range verification is
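    The statistical-modeling argument above (a noiseless activity profile corrupted by Poisson counting noise, with precision improving as 1/sqrt(N)) can be illustrated with a toy calculation; the depth profile, the 95%-of-counts range proxy and the count levels below are assumptions, not the paper's geometry or estimator.

```python
# Toy illustration of the ~1/sqrt(N) precision scaling of an activity-based
# range estimate under Poisson counting noise (arbitrary profile and estimator).
import numpy as np

rng = np.random.default_rng(1)
depth = np.linspace(0.0, 200.0, 401)                      # mm
profile = 1.0 / (1.0 + np.exp((depth - 150.0) / 2.0))     # assumed distal falloff

def estimated_range(counts):
    # depth below which 95% of the detected coincidences lie (simple range proxy)
    cdf = np.cumsum(counts) / counts.sum()
    return float(np.interp(0.95, cdf, depth))

for total_counts in (1e4, 1e5, 1e6):
    scale = total_counts / profile.sum()
    ranges = [estimated_range(rng.poisson(profile * scale)) for _ in range(300)]
    print(f"N = {total_counts:.0e}: std of estimated range = {np.std(ranges):.3f} mm")
```

    The printed spread should shrink by roughly a factor of sqrt(10) per decade of counts, mirroring the scaling reported above.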

  6. Simulation-based modeling of building complexes construction management

    Science.gov (United States)

    Shepelev, Aleksandr; Severova, Galina; Potashova, Irina

    2018-03-01

    The study reported here examines the experience in the development and implementation of business simulation games based on network planning and management of high-rise construction. Appropriate network models of different types and levels of detail have been developed; a simulation model including 51 blocks (11 stages combined in 4 units) is proposed.

  7. Simulation modeling for the health care manager.

    Science.gov (United States)

    Kennedy, Michael H

    2009-01-01

    This article addresses the use of simulation software to solve administrative problems faced by health care managers. Spreadsheet add-ins, process simulation software, and discrete event simulation software are available at a range of costs and complexity. All use the Monte Carlo method to realistically integrate probability distributions into models of the health care environment. Problems typically addressed by health care simulation modeling are facility planning, resource allocation, staffing, patient flow and wait time, routing and transportation, supply chain management, and process improvement.
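    To make the Monte Carlo point concrete, the sketch below is a minimal patient-flow example of the kind such tools automate: a single service point with assumed exponential inter-arrival and service time distributions, from which waiting-time statistics are estimated by simulation. All rates are invented for illustration.

```python
# Tiny Monte Carlo model of patient flow at a single service point
# (assumed exponential arrivals and service; illustrative rates only).
import numpy as np

rng = np.random.default_rng(42)
n_patients = 10_000
arrival_rate, service_rate = 5.0, 6.0             # patients per hour (placeholders)

inter_arrivals = rng.exponential(1.0 / arrival_rate, n_patients)
service_times = rng.exponential(1.0 / service_rate, n_patients)
arrivals = np.cumsum(inter_arrivals)

start = np.empty(n_patients)
finish = np.empty(n_patients)
for i in range(n_patients):
    previous_finish = finish[i - 1] if i > 0 else 0.0
    start[i] = max(arrivals[i], previous_finish)  # wait if the server is busy
    finish[i] = start[i] + service_times[i]

waits = start - arrivals                          # hours spent waiting
print(f"mean wait = {waits.mean() * 60:.1f} min, "
      f"P(wait > 30 min) = {(waits > 0.5).mean():.2f}")
```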

  8. Protein Simulation Data in the Relational Model.

    Science.gov (United States)

    Simms, Andrew M; Daggett, Valerie

    2012-10-01

    High performance computing is leading to unprecedented volumes of data. Relational databases offer a robust and scalable model for storing and analyzing scientific data. However, these features do not come without a cost: significant design effort is required to build a functional and efficient repository. Modeling protein simulation data in a relational database presents several challenges: the data captured from individual simulations are large, multi-dimensional, and must integrate with both simulation software and external data sites. Here we present the dimensional design and relational implementation of a comprehensive data warehouse for storing and analyzing molecular dynamics simulations using SQL Server.
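    As a schematic of the dimensional (star-schema) idea, rather than the authors' actual SQL Server design, the sketch below builds one dimension table and one fact table for per-frame simulation observations using Python's built-in sqlite3; every table and column name is hypothetical.

```python
# Minimal star-schema sketch for per-frame simulation data; every table and
# column name here is invented for illustration, not the warehouse above.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_simulation (
        sim_id        INTEGER PRIMARY KEY,
        protein       TEXT,
        temperature_k REAL
    );
    CREATE TABLE fact_frame (
        sim_id        INTEGER REFERENCES dim_simulation(sim_id),
        time_ns       REAL,
        rmsd_angstrom REAL
    );
""")
con.execute("INSERT INTO dim_simulation VALUES (1, 'example_protein', 298.0)")
con.executemany("INSERT INTO fact_frame VALUES (1, ?, ?)",
                [(0.1, 1.2), (0.2, 1.4), (0.3, 1.3)])

# a typical analysis query: average RMSD per simulation
query = """
    SELECT s.protein, AVG(f.rmsd_angstrom)
    FROM fact_frame AS f JOIN dim_simulation AS s USING (sim_id)
    GROUP BY s.sim_id
"""
for row in con.execute(query):
    print(row)
```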

  9. Modeling and simulation of blood collection systems.

    Science.gov (United States)

    Alfonso, Edgar; Xie, Xiaolan; Augusto, Vincent; Garraud, Olivier

    2012-03-01

    This paper addresses the modeling and simulation of blood collection systems in France, for both fixed-site and mobile blood collection with walk-in whole blood donors and scheduled plasma and platelet donors. Petri net models are first proposed to precisely describe the different blood collection processes, donor behaviors, their material/human resource requirements and the relevant regulations. The Petri net models are then enriched with quantitative modeling of donor arrivals, donor behaviors, activity times and resource capacities. Relevant performance indicators are defined. The resulting simulation models can be straightforwardly implemented with any simulation language. Numerical experiments are performed to show how the simulation models can be used to select, for different walk-in donor arrival patterns, appropriate human resource planning and donor appointment strategies.

  10. Simulating land-use changes by incorporating spatial autocorrelation and self-organization in CLUE-S modeling: a case study in Zengcheng District, Guangzhou, China

    Science.gov (United States)

    Mei, Zhixiong; Wu, Hao; Li, Shiyun

    2018-06-01

    The Conversion of Land Use and its Effects at Small regional extent (CLUE-S) model, which is widely used for land-use simulation, utilizes logistic regression to estimate the relationships between land use and its drivers and thus predict land-use change probabilities. However, logistic regression disregards possible spatial autocorrelation and self-organization in land-use data. Autologistic regression can depict spatial autocorrelation but cannot address self-organization, while logistic regression considering only self-organization (NE-logistic regression) fails to capture spatial autocorrelation. Therefore, this study developed a regression (NE-autologistic regression) method, which incorporates both spatial autocorrelation and self-organization, to improve CLUE-S. The Zengcheng District of Guangzhou, China was selected as the study area. Land-use data for 2001, 2005, and 2009, as well as 10 typical driving factors, were used to validate the proposed regression method and the improved CLUE-S model. Three future land-use scenarios for 2020, namely a natural growth scenario, an ecological protection scenario, and an economic development scenario, were then simulated using the improved model. Validation results showed that NE-autologistic regression performed better than logistic regression, autologistic regression, and NE-logistic regression in predicting land-use change probabilities. The spatial allocation accuracy and kappa values of NE-autologistic-CLUE-S were higher than those of logistic-CLUE-S, autologistic-CLUE-S, and NE-logistic-CLUE-S for the simulations of two periods, 2001-2009 and 2005-2009, which proved that the improved CLUE-S model achieved the best simulation and was thereby effective to a certain extent. The scenario simulation results indicated that under all three scenarios, traffic land and residential/industrial land would increase, whereas arable land and unused land would decrease during 2009-2020. Apparent differences also existed in the
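    A minimal sketch of the autologistic idea referred to above: an ordinary logistic regression whose predictors are augmented with a spatial autocovariate, here taken as the mean land-use state of the 8-cell neighbourhood. The synthetic grid, the single driver and the neighbourhood definition are all assumptions, and the study's NE-autologistic variant adds a further self-organization term that is not reproduced here.

```python
# Autologistic-style regression: logistic drivers plus a spatial autocovariate
# (mean state of the 8-cell neighbourhood). All data here are synthetic.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 80
driver = uniform_filter(rng.normal(size=(n, n)), size=9)            # smooth fake driver
landuse = (driver + 0.3 * rng.normal(size=(n, n)) > 0).astype(int)  # binary land-use map

# autocovariate: mean land-use state of the 8 surrounding cells
neigh = (uniform_filter(landuse.astype(float), size=3) * 9 - landuse) / 8

X = np.column_stack([driver.ravel(), neigh.ravel()])
y = landuse.ravel()
model = LogisticRegression().fit(X, y)
print("coefficients (driver, autocovariate):", model.coef_.round(2))
print("predicted change probabilities (first 5 cells):",
      model.predict_proba(X)[:5, 1].round(2))
```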

  11. Optimal model-based deficit irrigation scheduling using AquaCrop: a simulation study with cotton, potato and tomato

    DEFF Research Database (Denmark)

    Linker, Raphael; Ioslovich, Ilya; Sylaios, Georgios

    2016-01-01

Water shortage is the main limiting factor for agricultural productivity in many countries and improving water use efficiency in agriculture has been the focus of numerous studies. The usual approach to limit water consumption in agriculture is to apply water quotas and in such a situation farmers...... variables are the irrigation amounts for each day of the season. The objective function is the expected yield calculated with the use of a model. In the present work we solved this optimization problem for three crops modeled by the model AquaCrop. This optimization problem is non-trivial due to the non-smooth behavior of the objective function and the fact that it involves multiple integer variables. We developed an optimization scheme for generating sub-optimal irrigation schedules that take implicitly into account the response of the crop to water stress, and used these as initial guesses for a full......

  12. Modeling and Simulation of Matrix Converter

    DEFF Research Database (Denmark)

    Liu, Fu-rong; Klumpner, Christian; Blaabjerg, Frede

    2005-01-01

    This paper discusses the modeling and simulation of the matrix converter. Two models of the matrix converter are presented: one is based on indirect space vector modulation and the other is based on the power balance equation. The basis of these two models is given and the modeling process is introduced...

  13. Process model simulations of the divergence effect

    Science.gov (United States)

    Anchukaitis, K. J.; Evans, M. N.; D'Arrigo, R. D.; Smerdon, J. E.; Hughes, M. K.; Kaplan, A.; Vaganov, E. A.

    2007-12-01

    We explore the extent to which the Vaganov-Shashkin (VS) model of conifer tree-ring formation can explain evidence for changing relationships between climate and tree growth over recent decades. The VS model is driven by daily environmental forcing (temperature, soil moisture, and solar radiation) and simulates tree-ring growth cell-by-cell as a function of the most limiting environmental control. This simplified representation of tree physiology allows us to examine, using a selection of case studies, whether instances of divergence may be explained in terms of changes in limiting environmental dependencies or transient climate change. Identification of model-data differences permits further exploration of the effects of tree-ring standardization, atmospheric composition, and additional non-climatic factors.

  14. Radiation Modeling with Direct Simulation Monte Carlo

    Science.gov (United States)

    Carlson, Ann B.; Hassan, H. A.

    1991-01-01

    Improvements in the modeling of radiation in low density shock waves with direct simulation Monte Carlo (DSMC) are the subject of this study. A new scheme to determine the relaxation collision numbers for excitation of electronic states is proposed. This scheme attempts to move the DSMC programs toward a more detailed modeling of the physics and more reliance on available rate data. The new method is compared with the current modeling technique and both techniques are compared with available experimental data. The differences in the results are evaluated. The test case is based on experimental measurements from the AVCO-Everett Research Laboratory electric arc-driven shock tube of a normal shock wave in air at 10 km/s and .1 Torr. The new method agrees with the available data as well as the results from the earlier scheme and is more easily extrapolated to different flow conditions.

  15. Study on a Dynamic Vegetation Model for Simulating Land Surface Flux Exchanges at Lien-Hua-Chih Flux Observation Site in Taiwan

    Science.gov (United States)

    Yeh, T. Y.; Li, M. H.; Chen, Y. Y.; Ryder, J.; McGrath, M.; Otto, J.; Naudts, K.; Luyssaert, S.; MacBean, N.; Bastrikov, V.

    2016-12-01

    The dynamic vegetation model ORCHIDEE (Organizing Carbon and Hydrology In Dynamic EcosystEms) is a state-of-the-art land surface component of the IPSL (Institute Pierre Simon Laplace) Earth System Model. It has been used worldwide to investigate variations of water, carbon, and energy exchanges between the land surface and the atmosphere. In this study we assessed the applicability of ORCHIDEE-CAN, a new version with a 3-D CANopy structure (Naudts et al., 2015; Ryder et al., 2016), for simulating surface fluxes measured by tower-based eddy covariance at the Lien-Hua-Chih experimental watershed in Taiwan. The atmospheric forcing (radiation, air temperature and wind speed) and the dynamics of the vertical canopy structure used to drive the model were obtained from the observation site. Suitable combinations of default plant functional types were examined to match in-situ observations of soil moisture and leaf area index from 2009 to 2013. The simulated top-layer soil moisture ranged from 0.1 to 0.4 and the total leaf area from 2.2 to 4.4. A sensitivity analysis was performed to investigate the sensitivity of the model parameters and the skill of ORCHIDEE-CAN in capturing seasonal variations of surface fluxes. The most sensitive parameters were identified and calibrated with the automatic data assimilation tool ORCHIDAS (ORCHIDEE Data Assimilation System; http://orchidas.lsce.ipsl.fr/). Latent heat, sensible heat, and carbon fluxes simulated by the model were compared with long-term observations at the site. ORCHIDEE-CAN with calibrated surface parameters was then used to study variations of land-atmosphere interactions on a variety of temporal scales in association with changes in both land and atmospheric conditions. Ref: Naudts, K., et al.: A vertically discretised canopy description for ORCHIDEE (SVN r2290) and the modifications to the energy, water and carbon fluxes, Geoscientific Model Development, 8, 2035-2065, doi:10.5194/gmd-8

  16. Simulation models for tokamak plasmas

    International Nuclear Information System (INIS)

    Dimits, A.M.; Cohen, B.I.

    1992-01-01

    Two developments in the nonlinear simulation of tokamak plasmas are described: (A) Simulation algorithms that use quasiballooning coordinates have been implemented in a 3D fluid code and a 3D partially linearized (Δf) particle code. In quasiballooning coordinates, one of the coordinate directions is closely aligned with that of the magnetic field, allowing both optimal use of the grid resolution for structures highly elongated along the magnetic field and implementation of the correct periodicity conditions with no discontinuities in the toroidal direction. (B) Progress on the implementation of a like-particle collision operator suitable for use in partially linearized particle codes is reported. The binary collision approach is shown to be unusable for this purpose. The algorithm under development is a complete version of the test-particle plus source-field approach that was suggested and partially implemented by Xu and Rosenbluth

  17. Long-term manure carbon sequestration in soil simulated with the Daisy model on the basis of short-term incubation study

    DEFF Research Database (Denmark)

    Karki, Yubaraj Kumar; Børgesen, Christen Duus; Thomsen, Ingrid Kaag

    2013-01-01

    This study focused on simulating the long-term soil carbon sequestration after application of anaerobically digested and non-digested cattle manure using the Daisy model. The model was parameterized and calibrated for soil carbon (C) release during a 247-day incubation study including a coarse...... application of the two manures (70 kg organic manure N ha-1 plus 90 kg mineral N ha-1) and compared with a mineral N reference (120 kg N ha-1 yr-1). Carbon retention in soil was related to the initial C in the non-digested manure, and after 52 years of repeated manure application the extra C retention was equivalent...... to 41% for non-digested and 35% for digested manure in the loamy sand. In the sandy soil the corresponding C retention was 37 and 29%. The higher C retention from non-digested compared to digested manure differed from the incubation study and was mainly due to the model response to the optimized parameters...

  18. Trace gas composition in the Asian summer monsoon anticyclone: a case study based on aircraft observations and model simulations

    Science.gov (United States)

    Gottschaldt, Klaus-D.; Schlager, Hans; Baumann, Robert; Bozem, Heiko; Eyring, Veronika; Hoor, Peter; Jöckel, Patrick; Jurkat, Tina; Voigt, Christiane; Zahn, Andreas; Ziereis, Helmut

    2017-05-01

    We present in situ measurements of the trace gas composition of the upper tropospheric (UT) Asian summer monsoon anticyclone (ASMA) performed with the High Altitude and Long Range Research Aircraft (HALO) in the frame of the Earth System Model Validation (ESMVal) campaign. Air masses with enhanced O3 mixing ratios were encountered after entering the ASMA at its southern edge at about 150 hPa on 18 September 2012. This is in contrast to the presumption that the anticyclone's interior is dominated by recently uplifted air with low O3 in the monsoon season. We also observed enhanced CO and HCl in the ASMA, which are tracers for boundary layer pollution and tropopause layer (TL) air or stratospheric in-mixing respectively. In addition, reactive nitrogen was enhanced in the ASMA. Along the HALO flight track across the ASMA boundary, strong gradients of these tracers separate anticyclonic from outside air. Lagrangian trajectory calculations using HYSPLIT show that HALO sampled a filament of UT air three times, which included air masses uplifted from the lower or mid-troposphere north of the Bay of Bengal. The trace gas gradients between UT and uplifted air masses were preserved during transport within a belt of streamlines fringing the central part of the anticyclone (fringe), but are smaller than the gradients across the ASMA boundary. Our data represent the first in situ observations across the southern part and downstream of the eastern ASMA flank. Back-trajectories starting at the flight track furthermore indicate that HALO transected the ASMA where it was just splitting into a Tibetan and an Iranian part. The O3-rich filament is diverted from the fringe towards the interior of the original anticyclone, and is at least partially bound to become part of the new Iranian eddy. A simulation with the ECHAM/MESSy Atmospheric Chemistry (EMAC) model is found to reproduce the observations reasonably well. It shows that O3-rich air is entrained by the outer streamlines of the

  19. Fluence dependent changes of erosion yields and surface morphology of the iron-tungsten model system: SDTrimSP-2D simulation studies

    Directory of Open Access Journals (Sweden)

    U. von Toussaint

    2017-08-01

    Full Text Available The effect of different sample structures of an iron-tungsten model system (as a surrogate for reduced-activation ferritic martensitic steels like EUROFER) on the development of surface morphologies, tungsten surface enrichment and sputter yields under low-energy, monoenergetic, perpendicular 200 eV deuterium bombardment has been studied with SDTrimSP-2D simulations. Previous modeling studies considering diffusive effects could also reasonably reproduce and explain the experimental results for a large set of experimental parameters such as temperature, flux and sample concentration. However, for settings with negligible Fe-W interdiffusion the fluence needed to reach steady-state conditions differed between the experiments and the simulations. Thus, the main focus of the present study is directed towards the elucidation of this fluence mismatch. Comparison of one- and two-dimensional simulation results reveals a strong dependency of the tungsten enrichment on the sample homogeneity and a significantly delayed reduction of the erosion yield due to a pronounced formation of surface structures from initially flat sample surfaces.

  20. Modeling and Simulation Techniques for Large-Scale Communications Modeling

    National Research Council Canada - National Science Library

    Webb, Steve

    1997-01-01

    .... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.
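    The synchronization point can be illustrated with a toy comparative study: estimating the difference in mean waiting time between two configurations of a simple queue, once with common (synchronized) random number streams and once with independent streams. The queueing model and all rates are invented for the example; the variance reduction, not the model, is the point.

```python
# Common random numbers (synchronized streams) vs. independent streams when
# comparing two configurations of the same stochastic model (toy queue).
import numpy as np

def mean_wait(service_rate, seed, n=2000, arrival_rate=5.0):
    rng = np.random.default_rng(seed)
    inter = rng.exponential(1.0 / arrival_rate, n)   # inter-arrival times
    serv = rng.exponential(1.0 / service_rate, n)    # service times
    clock, prev_finish, waits = 0.0, 0.0, []
    for a, s in zip(inter, serv):
        clock += a
        start = max(clock, prev_finish)
        waits.append(start - clock)
        prev_finish = start + s
    return float(np.mean(waits))

# same seeds for both configurations (synchronized) vs. different seeds
diff_crn = [mean_wait(6.0, s) - mean_wait(6.5, s) for s in range(30)]
diff_ind = [mean_wait(6.0, s) - mean_wait(6.5, s + 1000) for s in range(30)]
print("std of estimated difference, synchronized:", round(float(np.std(diff_crn)), 4))
print("std of estimated difference, independent: ", round(float(np.std(diff_ind)), 4))
```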

  1. A model management system for combat simulation

    OpenAIRE

    Dolk, Daniel R.

    1986-01-01

    The design and implementation of a model management system to support combat modeling is discussed. Structured modeling is introduced as a formalism for representing mathematical models. A relational information resource dictionary system is developed which can accommodate structured models. An implementation is described. Structured modeling is then compared to Jackson System Development (JSD) as a methodology for facilitating discrete event simulation. JSD is currently better at representin...

  2. HVDC System Characteristics and Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Moon, S.I.; Han, B.M.; Jang, G.S. [Electric Enginnering and Science Research Institute, Seoul (Korea)

    2001-07-01

    This report deals with the AC-DC power system simulation method by PSS/E and EUROSTAG for the development of a strategy for the reliable operation of the Cheju-Haenam interconnected system. The simulation using both programs is performed to analyze HVDC simulation models. In addition, the control characteristics of the Cheju-Haenam HVDC system as well as Cheju AC system characteristics are described in this work. (author). 104 figs., 8 tabs.

  3. Software-Engineering Process Simulation (SEPS) model

    Science.gov (United States)

    Lin, C. Y.; Abdel-Hamid, T.; Sherif, J. S.

    1992-01-01

    The Software Engineering Process Simulation (SEPS) model, developed at JPL, is described. SEPS is a dynamic simulation model of the software project development process. It uses the feedback principles of system dynamics to simulate the dynamic interactions among the various software life cycle development activities and management decision-making processes. The model is designed to be a planning tool for examining tradeoffs of cost, schedule, and functionality, and for testing the implications of different managerial policies on a project's outcome. Furthermore, SEPS will enable software managers to gain a better understanding of the dynamics of software project development and to perform postmortem assessments.

  4. Systematic modelling and simulation of refrigeration systems

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1998-01-01

    The task of developing a simulation model of a refrigeration system can be very difficult and time consuming. In order for this process to be effective, a systematic method for developing the system model is required. This method should aim at guiding the developer to clarify the purpose...... of the simulation, to select appropriate component models and to set up the equations in a well-arranged way. In this paper the outline of such a method is proposed and examples showing the use of this method for simulation of refrigeration systems are given....

  5. Relative importance of secondary settling tank models in WWTP simulations

    DEFF Research Database (Denmark)

    Ramin, Elham; Flores-Alsina, Xavier; Sin, Gürkan

    2012-01-01

    Results obtained in a study using the Benchmark Simulation Model No. 1 (BSM1) show that a one-dimensional secondary settling tank (1-D SST) model structure and its parameters are among the most significant sources of uncertainty in wastewater treatment plant (WWTP) simulations [Ramin et al., 2011......]. The sensitivity results consistently indicate that the prediction of sludge production is most sensitive to the variation of the settling parameters. In the present study, we use the Benchmark Simulation Model No. 2 (BSM2), a plant-wide benchmark, that combines the Activated Sludge Model No. 1 (ASM1...

  6. Efficient Turbulence Modeling for CFD Wake Simulations

    DEFF Research Database (Denmark)

    van der Laan, Paul

    Wind turbine wakes can cause 10-20% annual energy losses in wind farms, and wake turbulence can decrease the lifetime of wind turbine blades. One way of estimating these effects is the use of computational fluid dynamics (CFD) to simulate wind turbine wakes in the atmospheric boundary layer. Since...... this flow is in the high Reynolds number regime, it is mainly dictated by turbulence. As a result, the turbulence modeling in CFD dominates the wake characteristics, especially in Reynolds-averaged Navier-Stokes (RANS). The present work is dedicated to studying and developing RANS-based turbulence models...... verified with a grid dependency study. With respect to the standard k-ε EVM, the k-ε-fp EVM compares better with measurements of the velocity deficit, especially in the near wake, which translates to improved power deficits for the first wind turbines in a row. When the CFD methodology is applied to a large...

  7. Molecular models of zinc phthalocyanines: semi-empirical molecular orbital computations and physicochemical properties studied by molecular mechanics simulations

    International Nuclear Information System (INIS)

    Gantchev, Tsvetan G.; van Lier, Johan E.; Hunting, Darel J.

    2005-01-01

    To build 3D-molecular models of Zinc-phthalocyanines (ZnPc) and to study their diverse chemical and photosensitization properties, we performed quantum mechanical molecular orbital (MO) semi-empirical (AM1) computations of the ground, excited singlet and triplet states as well as free radical (ionic) species. RHF and UHF (open shell) geometry optimizations led to near-perfectly symmetrical ZnPc. Predicted ionization potentials (IP), electron affinities (EA) and lowest electronic transitions of ZnPc are in good agreement with the published experimental and theoretical data. The computation-derived D_4h/D_2h-symmetry 3D-structures of the ground and excited states and free radicals of ZnPc, together with the frontier orbital energies and Mulliken electron population analysis, enabled us to build robust molecular models. These models were used to predict important chemical-reactivity entities such as global electronegativity (χ), hardness (η) and local softness based on Fukui-function analysis. Examples of molecular mechanics (MM) applications of the 3D-molecular models are presented as approaches to evaluate the solvation free energy (ΔG0_solv) and to estimate ground- and excited-state oxidation/reduction potentials as well as intermolecular interactions and the stability of ground- and excited-state dimers (exciplexes) and radical ion-pairs
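    For context, the global descriptors named here are commonly obtained from the computed IP and EA through the standard finite-difference working equations of conceptual DFT, and the local softness follows from the Fukui functions. The block below shows one common convention (some authors omit the factor of 1/2 in the hardness); it is background notation, not a formula taken from the abstract.

```latex
% Finite-difference working equations (one common convention):
\chi \simeq \tfrac{1}{2}(IP + EA), \qquad
\eta \simeq \tfrac{1}{2}(IP - EA), \qquad
S = \frac{1}{2\eta}
% Fukui functions from electron-density differences, and local softness:
f^{+}(\mathbf{r}) = \rho_{N+1}(\mathbf{r}) - \rho_{N}(\mathbf{r}), \quad
f^{-}(\mathbf{r}) = \rho_{N}(\mathbf{r}) - \rho_{N-1}(\mathbf{r}), \quad
s^{\pm}(\mathbf{r}) = S\, f^{\pm}(\mathbf{r})
```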

  8. Sensitivity study of heavy precipitation in Limited Area Model climate simulations: influence of the size of the domain and the use of the spectral nudging technique

    Science.gov (United States)

    Colin, Jeanne; Déqué, Michel; Radu, Raluca; Somot, Samuel

    2010-10-01

    We assess the impact of two sources of uncertainties in a limited area model (LAM) on the representation of intense precipitation: the size of the domain of integration and the use of the spectral nudging technique (driving of the large-scale within the domain of integration). We work in a perfect-model approach where the LAM is driven by a general circulation model (GCM) run at the same resolution and sharing the same physics and dynamics as the LAM. A set of three 50 km resolution simulations run over Western Europe with the LAM ALADIN-Climate and the GCM ARPEGE-Climate are performed to address this issue. Results are consistent with previous studies regarding the seasonal-mean fields. Furthermore, they show that neither the use of the spectral nudging nor the choice of a small domain are detrimental to the modelling of heavy precipitation in the present experiment.

  9. Multi-agent modeling and simulation of farmland use change in the farming-pastoral zone: A case study of Qianjingou Town in Inner Mongolia, China

    Science.gov (United States)

    Yan, H.

    2015-12-01

    Farmland is the most basic material condition for guaranteeing rural livelihoods and national food security, and exploring management strategies that take both sustainable rural livelihoods and sustainable farmland use into account has vital significance in theory and practice. Farmland is a complex and self-adaptive system that couples human and natural systems, and the natural and social factors related to its changing process need to be considered when modeling the farmland changing process. This paper takes Qianjingou Town in the Inner Mongolia farming-pastoral zone as the study area. From the perspective of the relationship between households' livelihoods and farmland use, this study builds the process mechanism of farmland use change based on questionnaire data, and constructs a multi-agent simulation model of farmland use change with the help of Eclipse and the Repast toolbox. Through simulating the relationship between natural factors (including geographical location) and households' behaviors, this paper systematically simulates households' farmland renting and abandonment behaviors, and describes the dynamic interactions between households' livelihoods and the factors related to farmland use change. These factors include natural factors (net primary productivity, road accessibility, slope and relief amplitude) and social factors (households' family structures, economic development and government policies). In the end, this study predicts the farmland use change trend over the next 30 years. The simulation results show that the number of abandoned and sublet farmland plots has a gradually increasing trend, the number of non-farm households and pure-outwork households has a remarkable increasing trend, and the number of part-farm households and pure-farm households shows a decreasing trend. Households' livelihood sustainability in the study area is confronted with increasing pressure, and households' nonfarm employment has an increasing

  10. Multi-Agent Modeling and Simulation of Farmland Use Change in a Farming–Pastoral Zone: A Case Study of Qianjingou Town in Inner Mongolia, China

    Directory of Open Access Journals (Sweden)

    Xuehong Bai

    2015-11-01

    Full Text Available Farmland is the most basic material condition for guaranteeing rural livelihoods and national food security, and exploring management strategies that take both stable rural livelihoods and sustainable farmland use into account has vital significance in theory and practice. Farmland is a complex and self-adaptive system that couples human and natural systems, and natural and social factors that are related to its changing process need to be considered when modeling farmland changing processes. This paper uses Qianjingou Town in the Inner Mongolian farming–pastoral zone as a study area. From the perspective of the relationship between household livelihood and farmland use, this study establishes the process mechanism of farmland use change based on questionnaire data, and constructs a multi-agent simulation model of farmland use change using the Eclipse and Repast toolbox. Through simulating the relationship between natural factors (including geographical location and household behavior, this paper systematically simulates household farmland abandonment and rent behaviors, and accurately describes the dynamic interactions between household livelihoods and the factors related to farmland use change. These factors include natural factors (net primary productivity, road accessibility, slope and relief amplitude and social factors (household family structures, economic development and government policies. Ultimately, this study scientifically predicts the future farmland use change trend in the next 30 years. The simulation results show that the number of abandoned and sublet farmland plots has a gradually increasing trend, and the number of non-farming households and pure-outworking households has a remarkable increasing trend, whereas the number of part-farming households and pure-farming households has a decreasing trend. Household livelihood sustainability in the study area is confronted with increasing pressure, and household non

  11. Deriving simulators for hybrid Chi models

    NARCIS (Netherlands)

    Beek, van D.A.; Man, K.L.; Reniers, M.A.; Rooda, J.E.; Schiffelers, R.R.H.

    2006-01-01

    The hybrid Chi language is a formalism for the modeling, simulation and verification of hybrid systems. The formal semantics of hybrid Chi allows the definition of provably correct implementations for simulation, verification and real-time control. This paper discusses the principles of deriving an

  12. Modeling and simulation for RF system design

    CERN Document Server

    Frevert, Ronny; Jancke, Roland; Knöchel, Uwe; Schwarz, Peter; Kakerow, Ralf; Darianian, Mohsen

    2005-01-01

    Focusing on RF-specific modeling and simulation methods, and system- and circuit-level descriptions, this work contains application-oriented training material. Accompanied by a CD-ROM, it combines the presentation of a mixed-signal design flow, an introduction to VHDL-AMS and Verilog-A, and the application of commercially available simulators.

  13. Impact of reactive settler models on simulated WWTP performance

    DEFF Research Database (Denmark)

    Gernaey, Krist; Jeppsson, Ulf; Batstone, Damien J.

    2006-01-01

    for an ASM1 case study. Simulations with a whole plant model including the non-reactive Takacs settler model are used as a reference, and are compared to simulation results considering two reactive settler models. The first is a return sludge model block removing oxygen and a user-defined fraction of nitrate......, combined with a non-reactive Takacs settler. The second is a fully reactive ASM1 Takacs settler model. Simulations with the ASM1 reactive settler model predicted a 15.3% and 7.4% improvement of the simulated N removal performance, for constant (steady-state) and dynamic influent conditions respectively....... The oxygen/nitrate return sludge model block predicts a 10% improvement of N removal performance under dynamic conditions, and might be the better modelling option for ASM1 plants: it is computationally more efficient and it will not overrate the importance of decay processes in the settler....

  14. Four Models of In Situ Simulation

    DEFF Research Database (Denmark)

    Musaeus, Peter; Krogh, Kristian; Paltved, Charlotte

    2014-01-01

    Introduction: In situ simulation is characterized by being situated in the clinical environment as opposed to the simulation laboratory, but it bears a family resemblance to other types of on-the-job training. We explore a typology of in situ simulation and suggest that there are four fruitful approaches to in situ simulation: (1) In situ simulation informed by reported critical incidents and adverse events from the emergency departments (ED) in which team training is about to be conducted, used to write scenarios. (2) In situ simulation through ethnographic studies at the ED. (3) Using... the following processes: Transition processes, Action processes and Interpersonal processes. Design and purpose: This abstract suggests four approaches to in situ simulation. A pilot study will evaluate the different approaches in two emergency departments in the Central Region of Denmark. Methods: The typology

  15. Columnar modelling of nucleation burst evolution in the convective boundary layer – first results from a feasibility study Part IV: A compilation of previous observations for valuation of simulation results from a columnar modelling study

    Directory of Open Access Journals (Sweden)

    O. Hellmuth

    2006-01-01

    according to the parameterisation of the collision-controlled binary nucleation rate proposed by Weber et al. (1996), H2O vapour does not explicitly affect the particle formation. Since the H2SO4 concentration is overpredicted in the simulations presented in Paper III, the nucleation rates are too high compared to previous estimations. Therefore, the results are not directly comparable to measurements. In particular, NPF events where organics are suspected to play a key role, such as those observed at the boreal forest station in Hyytiälä (Southern Finland) or at Hohenpeissenberg (a mountain site in Southern Germany), cannot be explained by employing simple sulphur/ammonia chemistry. However, some valuable hints regarding the role of CBL turbulence in NPF can be obtained. In the literature a number of observations on the link between turbulence and NPF can be found, whose burst patterns support a strong contribution of CBL turbulence to the NPF burst evolution simulated here. Observations that do not correspond to the scenarios are discussed with respect to possible reasons for the differences between model and observation. The model simulations support some state-of-the-art hypotheses on the contribution of CBL turbulence to NPF. Considering the application of box models, the present study shows that CBL turbulence, not explicitly considered in such models, can strongly affect the spatio-temporal NPF burst evolution. The columnar high-order model presented here is a helpful tool to elucidate gas-aerosol-turbulence interactions, especially the genesis of NPF bursts in the CBL. An advanced description of the cluster formation and condensation growth is required as well as a comprehensive verification/validation study using observed high-order moments. Further scenario simulations remain to be performed.

  16. Magnetosphere Modeling: From Cartoons to Simulations

    Science.gov (United States)

    Gombosi, T. I.

    2017-12-01

    Over the last half a century, physics-based global computer simulations became a bridge between experiment and basic theory, and they now represent the "third pillar" of geospace research. Today, many of our scientific publications utilize large-scale simulations to interpret observations, test new ideas, plan campaigns, or design new instruments. Realistic simulations of the complex Sun-Earth system have been made possible by the dramatically increased power of both computing hardware and numerical algorithms. Early magnetosphere models were based on simple E&M concepts (like the Chapman-Ferraro cavity) and hydrodynamic analogies (bow shock). At the beginning of the space age, current-system models were developed, culminating in the sophisticated Tsyganenko-type description of the magnetic configuration. The first 3D MHD simulations of the magnetosphere were published in the early 1980s. A decade later there were several competing global models that were able to reproduce many fundamental properties of the magnetosphere. The leading models included the impact of the ionosphere by using a height-integrated electric potential description. Dynamic coupling of global and regional models started in the early 2000s by integrating a ring current model and a global magnetosphere model. It has been recognized for quite some time that plasma kinetic effects play an important role. Presently, global hybrid simulations of the dynamic magnetosphere are expected to become possible on exascale supercomputers, while fully kinetic simulations with realistic mass ratios are still decades away. In the 2010s several groups started to experiment with PIC simulations embedded in large-scale 3D MHD models. Presently this integrated MHD-PIC approach is at the forefront of magnetosphere simulations and this technique is expected to lead to some important advances in our understanding of magnetospheric physics. This talk will review the evolution of magnetosphere modeling from cartoons to current systems

  17. Modeling and simulation of complex systems a framework for efficient agent-based modeling and simulation

    CERN Document Server

    Siegfried, Robert

    2014-01-01

    Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing the structure and dynamics of agent-based models in detail. Based on this evaluation, the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore, he presents parallel and distributed simulation approaches for the execution of agent-based models - from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize underlying hard

  18. Dispersion modeling by kinematic simulation: Cloud dispersion model

    International Nuclear Information System (INIS)

    Fung, J C H; Perkins, R J

    2008-01-01

    A new technique has been developed to compute mean and fluctuating concentrations in complex turbulent flows (tidal currents near a coast and in the deep ocean). An initial distribution of material is discretized into many small clouds, which are advected by a combination of the mean flow and the large-scale turbulence. The turbulence can be simulated either by kinematic simulation (KS) or by direct numerical simulation. The clouds also diffuse relative to their centroids; the statistics for this are obtained from a separate calculation of the growth of individual clouds in small-scale turbulence, generated by KS. The ensemble of discrete clouds is periodically re-discretized, to limit the size of the small clouds and prevent overlapping. The model is illustrated with simulations of dispersion in uniform flow, and the results are compared with analytic, steady-state solutions. The aim of this study is to understand how pollutants disperse in a turbulent flow through a numerical simulation of fluid particle motion in a random flow field generated by Fourier modes. Although this homogeneous turbulence is a rather 'simple' flow, it represents a building block toward understanding pollutant dispersion in more complex flows. The results presented here are preliminary in nature, but we expect that similar qualitative results should be observed in a genuine turbulent flow.
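    A minimal sketch of the kinematic-simulation idea mentioned above: a synthetic, divergence-free 2-D velocity field is built from a sum of random Fourier modes and used to advect a fluid particle. The mode count, spectrum and time scales are illustrative assumptions, not the parameters of the study.

```python
# Kinematic simulation sketch: a 2-D incompressible velocity field built from
# random Fourier modes, used to advect a fluid particle (illustrative spectrum).
import numpy as np

rng = np.random.default_rng(7)
n_modes = 64
k_mag = np.logspace(0, 1.5, n_modes)                     # wavenumber magnitudes
theta = rng.uniform(0, 2 * np.pi, n_modes)
k = np.column_stack([k_mag * np.cos(theta), k_mag * np.sin(theta)])
perp = np.column_stack([-np.sin(theta), np.cos(theta)])  # unit vectors normal to k
amp = k_mag ** (-5.0 / 6.0)                              # Kolmogorov-like amplitudes (not calibrated)
a = rng.normal(size=n_modes) * amp
b = rng.normal(size=n_modes) * amp
omega = rng.normal(size=n_modes)                         # mode frequencies

def velocity(x, t):
    phase = k @ x + omega * t
    coeff = a * np.cos(phase) + b * np.sin(phase)
    return perp.T @ coeff                                # divergence-free by construction

# advect one particle with a simple midpoint (RK2) scheme
x, dt = np.array([0.0, 0.0]), 0.01
for step in range(1000):
    t = step * dt
    x_mid = x + 0.5 * dt * velocity(x, t)
    x = x + dt * velocity(x_mid, t + 0.5 * dt)
print("particle position after 10 time units:", x.round(3))
```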

  19. Simulation models for food separation by adsorption process | Aoyi ...

    African Journals Online (AJOL)

    Separation of simulated industrial food products, by method of adsorption, has been studied. A thermodynamic approach has been applied to study the liquid adsorption where benzene and cyclohexane have been used to simulate edible oils in a system that employs silica gel as the adsorbent. Different models suggested ...

  20. A Simulation Study: The Impact of Random and Realistic Mobility Models on the Performance of Bypass-AODV in Ad Hoc Wireless Networks

    Directory of Open Access Journals (Sweden)

    Baroudi Uthman

    2010-01-01

    To bring VANET into reality, it is crucial to devise routing protocols that can exploit the inherited characteristics of the VANET environment to enhance the performance of the running applications. Previous studies have shown that a given routing protocol behaves differently under different assumed mobility patterns. Bypass-AODV is a new optimization of the AODV routing protocol for mobile ad-hoc networks. It is proposed as a local recovery mechanism to enhance the performance of the AODV routing protocol. It shows outstanding performance under the Random Waypoint mobility model compared with AODV. However, Random Waypoint is a simple model that may be applicable to some scenarios, but it is not sufficient to capture some important mobility characteristics of scenarios where VANETs are deployed. In this paper, we investigate the performance of Bypass-AODV under a wide range of mobility models including other random mobility models, group mobility models, and vehicular mobility models. Simulation results show an interesting feature: Bypass-AODV is insensitive to the choice of random mobility model and shows a clear performance improvement over AODV. Under group mobility models, both protocols show comparable performance, but under vehicular mobility models, Bypass-AODV suffers from performance degradation in high-speed conditions.
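    As background for the mobility models discussed above, the following is a minimal, illustrative generator for the Random Waypoint model; the speeds, pause times, area size and node count are assumed values, not those used in the paper's simulations.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_waypoint(n_nodes=20, area=1000.0, v_min=1.0, v_max=20.0,
                    pause_max=5.0, duration=300.0, dt=1.0):
    """Generate node trajectories under the Random Waypoint mobility model.

    Each node repeatedly picks a uniform random destination in a square area,
    moves toward it at a uniform random speed, then pauses. All parameter
    values here are illustrative.
    """
    n_steps = int(duration / dt)
    pos = rng.uniform(0, area, (n_nodes, 2))
    dest = rng.uniform(0, area, (n_nodes, 2))
    speed = rng.uniform(v_min, v_max, n_nodes)
    pause = np.zeros(n_nodes)
    traj = np.empty((n_steps, n_nodes, 2))
    for t in range(n_steps):
        traj[t] = pos
        vec = dest - pos
        dist = np.linalg.norm(vec, axis=1)
        for i in range(n_nodes):
            if pause[i] > 0:                      # node is pausing at its waypoint
                pause[i] -= dt
            elif dist[i] <= speed[i] * dt:        # waypoint reached: pick a new one
                pos[i] = dest[i]
                dest[i] = rng.uniform(0, area, 2)
                speed[i] = rng.uniform(v_min, v_max)
                pause[i] = rng.uniform(0, pause_max)
            else:                                 # keep moving toward the waypoint
                pos[i] = pos[i] + vec[i] / dist[i] * speed[i] * dt
    return traj

tracks = random_waypoint()
print(tracks.shape)   # (steps, nodes, xy)
```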

  1. SEIR model simulation for Hepatitis B

    Science.gov (United States)

    Side, Syafruddin; Irwan, Mulbar, Usman; Sanusi, Wahidah

    2017-09-01

    Mathematical modelling and simulation of Hepatitis B are discussed in this paper. The population is divided into four compartments, namely Susceptible, Exposed, Infected and Recovered (SEIR). Several factors affect the population in this model: vaccination, immigration and emigration occurring in the population. The SEIR model yields a four-dimensional system of non-linear ordinary differential equations (ODEs), which is then reduced to three dimensions. A simulation of the SEIR model was undertaken to predict the number of Hepatitis B cases. The results of the simulation indicate that the number of Hepatitis B cases will increase and then decrease over several months. The simulation using case numbers from Makassar also found a basic reproduction number of less than one, which means that Makassar city is not an endemic area for Hepatitis B.
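    A minimal sketch of the kind of SEIR ODE system described above is given below; the rate constants are illustrative assumptions, and the paper's vaccination and migration terms are omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal SEIR sketch (illustrative rates; the paper's vaccination and
# migration terms are omitted, and all parameter values are assumptions).
beta, sigma, gamma = 0.3, 1 / 8.0, 0.1   # transmission, incubation, recovery rates (per day)

def seir(t, y):
    s, e, i, r = y
    n = s + e + i + r
    return [-beta * s * i / n,
            beta * s * i / n - sigma * e,
            sigma * e - gamma * i,
            gamma * i]

y0 = [0.99, 0.005, 0.005, 0.0]            # initial fractions of the population
sol = solve_ivp(seir, (0, 365), y0, t_eval=np.linspace(0, 365, 366))

r0 = beta / gamma                         # basic reproduction number for this simplified model
print("R0 =", round(r0, 2))
print("peak infected fraction:", float(sol.y[2].max().round(3)))
```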

  2. Maintenance Personnel Performance Simulation (MAPPS) model

    International Nuclear Information System (INIS)

    Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Knee, H.E.; Haas, P.M.

    1984-01-01

    A stochastic computer model for simulating the actions and behavior of nuclear power plant maintenance personnel is described. The model considers personnel, environmental, and motivational variables to yield predictions of maintenance performance quality and time to perform. The model has been fully developed and sensitivity tested. Additional evaluation of the model is now taking place.

  3. Computer simulations of the random barrier model

    DEFF Research Database (Denmark)

    Schrøder, Thomas; Dyre, Jeppe

    2002-01-01

    A brief review of experimental facts regarding ac electronic and ionic conduction in disordered solids is given, followed by a discussion of what is perhaps the simplest realistic model, the random barrier model (symmetric hopping model). Results from large scale computer simulations are presented...
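    The symmetric hopping picture can be illustrated with a small kinetic Monte Carlo sketch on a 1D ring with random barriers; the barrier distribution, temperature and hop counts are illustrative assumptions, not the parameters of the large-scale simulations referred to above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random barrier (symmetric hopping) model on a 1D ring: a particle hops
# between neighbouring sites over energy barriers drawn from a box
# distribution, with Arrhenius hop rates. All parameters are illustrative.
n_sites = 1000
barriers = rng.uniform(0.0, 1.0, n_sites)      # barrier between site i and i+1
temperature = 0.3
rates = np.exp(-barriers / temperature)        # hop rate over each barrier

def simulate(n_hops=20000):
    """Kinetic Monte Carlo walk; returns elapsed time and net displacement."""
    site, displacement, time = 0, 0, 0.0
    for _ in range(n_hops):
        right = rates[site % n_sites]
        left = rates[(site - 1) % n_sites]
        total = right + left
        time += rng.exponential(1.0 / total)    # waiting time before the next hop
        if rng.random() < right / total:
            site += 1
            displacement += 1
        else:
            site -= 1
            displacement -= 1
    return time, displacement

samples = [simulate() for _ in range(200)]
t_mean = np.mean([t for t, _ in samples])
msd = np.mean([d ** 2 for _, d in samples])
print("effective diffusion coefficient ~", msd / (2 * t_mean))
```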

  4. Turbine modelling for real time simulators

    International Nuclear Information System (INIS)

    Oliveira Barroso, A.C. de; Araujo Filho, F. de

    1992-01-01

    A model for steam turbines and their peripherals has been developed. All the important variables have been included, and emphasis has been placed on computational efficiency in order to obtain a model able to simulate all the modeled equipment. (A.C.A.S.)

  5. A simulation study on garment manufacturing process

    Science.gov (United States)

    Liong, Choong-Yeun; Rahim, Nur Azreen Abdul

    2015-02-01

    The garment industry is an important industry and continues to evolve in order to meet consumers' high demands. Therefore, elements of innovation and improvement are important. In this work, research studies were conducted at a local company in order to model the sewing process of clothes manufacturing by using simulation modeling. Clothes manufacturing at the company involves 14 main processes, which are connecting the pattern, center sewing and side neatening, pocket sewing, backside sewing, attaching the front and back, sleeve preparation, attaching the sleeves and overlocking, collar preparation, collar sewing, bottom-edge sewing, buttonhole sewing, removing excess thread, marking buttons, and button cross sewing. Those fourteen processes are operated by six tailors only; the last four sets of processes are done by a single tailor. Data collection was conducted by on-site observation, and the probability distribution of processing time for each of the processes was determined by using @Risk's BestFit. A simulation model was then developed using Arena software based on the data collected. An animated simulation model is developed in order to facilitate understanding and to verify that the model represents the actual system. With such a model, what-if analysis and different scenarios of operations can be experimented with virtually. The animation and improvement models will be presented in further work.
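    As an illustration of the kind of discrete-event model described above, the following minimal sketch uses SimPy in place of Arena; the station list, staffing and triangular processing times are assumptions for demonstration only.

```python
import random
import simpy

random.seed(3)

# Minimal discrete-event sketch of a serial sewing line. Station names,
# staffing and triangular (min, mode, max) processing times are illustrative.
STATIONS = [
    ("pattern joining", (4, 6, 9)),
    ("body sewing",     (6, 8, 12)),
    ("sleeves",         (3, 5, 8)),
    ("collar",          (4, 5, 7)),
    ("finishing",       (5, 7, 11)),
]

def garment(env, machines, done_times):
    """One garment flows through every station in sequence."""
    for (_, (lo, mode, hi)), machine in zip(STATIONS, machines):
        with machine.request() as req:          # wait for the tailor at this station
            yield req
            yield env.timeout(random.triangular(lo, hi, mode))
    done_times.append(env.now)

def arrivals(env, machines, done_times, interarrival=7.0):
    while True:
        yield env.timeout(random.expovariate(1.0 / interarrival))
        env.process(garment(env, machines, done_times))

env = simpy.Environment()
machines = [simpy.Resource(env, capacity=1) for _ in STATIONS]
done = []
env.process(arrivals(env, machines, done))
env.run(until=8 * 60)                           # one 8-hour shift, in minutes

print(f"garments completed in the shift: {len(done)}")
```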

  6. Traffic simulation based ship collision probability modeling

    Energy Technology Data Exchange (ETDEWEB)

    Goerlandt, Floris, E-mail: floris.goerlandt@tkk.f [Aalto University, School of Science and Technology, Department of Applied Mechanics, Marine Technology, P.O. Box 15300, FI-00076 AALTO, Espoo (Finland); Kujala, Pentti [Aalto University, School of Science and Technology, Department of Applied Mechanics, Marine Technology, P.O. Box 15300, FI-00076 AALTO, Espoo (Finland)

    2011-01-15

    Maritime traffic poses various risks in terms of human, environmental and economic loss. In a risk analysis of ship collisions, it is important to get a reasonable estimate for the probability of such accidents and the consequences they lead to. In this paper, a method is proposed to assess the probability of vessels colliding with each other. The method is capable of determining the expected number of accidents, the locations where and the times when they are most likely to occur, while providing input for models concerned with the expected consequences. At the basis of the collision detection algorithm lies an extensive time domain micro-simulation of vessel traffic in the given area. The Monte Carlo simulation technique is applied to obtain a meaningful prediction of the relevant factors of the collision events. Data obtained through the Automatic Identification System is analyzed in detail to obtain realistic input data for the traffic simulation: traffic routes, the number of vessels on each route, the ship departure times, main dimensions and sailing speed. The results obtained by the proposed method for the studied case of the Gulf of Finland are presented, showing reasonable agreement with registered accident and near-miss data.
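    A heavily simplified sketch of the Monte Carlo encounter-counting idea is shown below: vessels depart on two straight crossing routes at random times, and an encounter is flagged when the closest point of approach falls below a threshold. The route geometry, traffic volumes, speed distribution and threshold are illustrative assumptions, not the AIS-derived inputs used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_day(n_a=40, n_b=30, leg=20_000.0, speed_mean=8.0, speed_sd=1.5,
                 collision_diameter=100.0, dt=5.0):
    """Count encounters whose closest point of approach falls below a threshold."""
    day = 86_400.0
    t_a = rng.uniform(0, day, n_a); v_a = rng.normal(speed_mean, speed_sd, n_a)
    t_b = rng.uniform(0, day, n_b); v_b = rng.normal(speed_mean, speed_sd, n_b)
    hits = 0
    for ta, va in zip(t_a, v_a):
        for tb, vb in zip(t_b, v_b):
            # Both ships must be on their legs at the same time for an encounter.
            t0 = max(ta, tb)
            t1 = min(ta + leg / va, tb + leg / vb)
            if t1 <= t0:
                continue
            t = np.arange(t0, t1, dt)
            xa = -leg / 2 + va * (t - ta)        # ship A sails along the x axis
            yb = -leg / 2 + vb * (t - tb)        # ship B sails along the y axis
            if np.hypot(xa, yb).min() < collision_diameter:
                hits += 1
    return hits

counts = [simulate_day() for _ in range(20)]
print("mean collision candidates per day:", np.mean(counts))
```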

  7. Theory, modeling, and simulation annual report, 1992

    Energy Technology Data Exchange (ETDEWEB)

    1993-05-01

    This report briefly discusses research on the following topics: development of electronic structure methods; modeling molecular processes in clusters; modeling molecular processes in solution; modeling molecular processes in separations chemistry; modeling interfacial molecular processes; modeling molecular processes in the atmosphere; methods for periodic calculations on solids; chemistry and physics of minerals; graphical user interfaces for computational chemistry codes; visualization and analysis of molecular simulations; integrated computational chemistry environment; and benchmark computations.

  8. Modeling and simulation with operator scaling

    OpenAIRE

    Cohen, Serge; Meerschaert, Mark M.; Rosiński, Jan

    2010-01-01

    Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical application...

  9. Modeling of magnetic particle suspensions for simulations

    CERN Document Server

    Satoh, Akira

    2017-01-01

    The main objective of the book is to highlight the modeling of magnetic particles with different shapes and magnetic properties, to provide graduate students and young researchers information on the theoretical aspects and actual techniques for the treatment of magnetic particles in particle-based simulations. In simulation, we focus on the Monte Carlo, molecular dynamics, Brownian dynamics, lattice Boltzmann and stochastic rotation dynamics (multi-particle collision dynamics) methods. The latter two simulation methods can simulate both the particle motion and the ambient flow field simultaneously. In general, specialized knowledge can only be obtained in an effective manner under the supervision of an expert. The present book is written to play such a role for readers who wish to develop the skill of modeling magnetic particles and develop a computer simulation program using their own ability. This book is therefore a self-learning book for graduate students and young researchers. Armed with this knowledge,...

  10. Using Modeling and Simulation to Predict Operator Performance and Automation-Induced Complacency With Robotic Automation: A Case Study and Empirical Validation.

    Science.gov (United States)

    Wickens, Christopher D; Sebok, Angelia; Li, Huiyang; Sarter, Nadine; Gacy, Andrew M

    2015-09-01

    The aim of this study was to develop and validate a computational model of the automation complacency effect as operators work on a robotic arm task supported by three different degrees of automation. Some computational models of complacency in human-automation interaction exist, but those were formed and validated within the context of fairly simplified monitoring failures. This research extends model validation to a much more complex task, so that system designers can establish, without the need for human-in-the-loop (HITL) experimentation, the merits and shortcomings of different automation degrees. We developed a realistic simulation of a space-based robotic arm task that could be carried out with three different levels of trajectory visualization and execution automation support. Using this simulation, we performed HITL testing. Complacency was induced via several trials of correctly performing automation and then was assessed on trials when automation failed. Following a cognitive task analysis of the robotic arm operation, we developed a multicomponent model of the robotic operator and his or her reliance on automation, based in part on visual scanning. The comparison of model predictions with empirical results revealed that the model accurately predicted routine performance and the responses to failures after complacency developed. However, the scanning models do not account for all of the attention-allocation effects of complacency. Complacency modeling can provide a useful tool for predicting the effects of different types of imperfect automation. The results from this research suggest that focus should be given to supporting situation awareness in automation development. © 2015, Human Factors and Ergonomics Society.

  11. Modelling and Simulation of Wave Loads

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    A simple model of the wave load on slender members of offshore structures is described. The wave elevation of the sea state is modelled by a stationary Gaussian process. A new procedure to simulate realizations of the wave loads is developed. The simulation method assumes that the wave particle velocity can be approximated by a Gaussian Markov process. Known approximate results for the first-passage density or, equivalently, the distribution of the extremes of wave loads are presented and compared with rather precise simulation results. It is demonstrated that the approximate results ...
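    The spectral idea behind such simulations can be sketched as follows: a stationary Gaussian record is generated as a sum of cosines with random phases drawn from an assumed spectrum, and a Morison-type drag load is formed from it. The spectrum, drag coefficient and member diameter are illustrative assumptions, and for simplicity the simulated record is used directly as the water particle velocity.

```python
import numpy as np

rng = np.random.default_rng(5)

# Spectral simulation of a stationary Gaussian process and a Morison-type drag
# load on a slender member. All values are illustrative assumptions.
dt, n = 0.25, 2048
freqs = np.fft.rfftfreq(n, dt)[1:]                 # positive frequencies (Hz)
omega = 2 * np.pi * freqs

hs, tp = 3.0, 9.0                                  # significant wave height (m), peak period (s)
wp = 2 * np.pi / tp
spectrum = 0.3125 * hs**2 * wp**4 / omega**5 * np.exp(-1.25 * (wp / omega) ** 4)

amp = np.sqrt(2 * spectrum * (omega[1] - omega[0]))  # component amplitudes from the spectrum
phase = rng.uniform(0, 2 * np.pi, omega.size)

t = np.arange(n) * dt
u = (amp[:, None] * np.cos(omega[:, None] * t[None, :] + phase[:, None])).sum(axis=0)

# Morison drag term per unit length: F = 0.5 * rho * Cd * D * u * |u|
rho, cd, diameter = 1025.0, 1.0, 0.5
force = 0.5 * rho * cd * diameter * u * np.abs(u)

print("std of simulated velocity:", round(float(u.std()), 3), "m/s")
print("largest drag load in record:", round(float(np.abs(force).max()), 1), "N/m")
```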

  12. Modelling and Simulation of Wave Loads

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    1985-01-01

    A simple model of the wave load on slender members of offshore structures is described. The wave elevation of the sea state is modelled by a stationary Gaussian process. A new procedure to simulate realizations of the wave loads is developed. The simulation method assumes that the wave particle velocity can be approximated by a Gaussian Markov process. Known approximate results for the first passage density or, equivalently, the distribution of the extremes of wave loads are presented and compared with rather precise simulation results. It is demonstrated that the approximate results ...

  13. Minimum-complexity helicopter simulation math model

    Science.gov (United States)

    Heffley, Robert K.; Mnich, Marc A.

    1988-01-01

    An example of a minimal complexity simulation helicopter math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with a specification of the features which are to be modeled, followed by a build-up of individual vehicle components and a definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.

  14. SU-E-I-80: Quantification of Respiratory and Cardiac Motion Effect in SPECT Acquisitions Using Anthropomorphic Models: A Monte Carlo Simulation Study

    Energy Technology Data Exchange (ETDEWEB)

    Papadimitroulas, P; Kostou, T; Kagadis, G [University of Patras, Rion, Ahaia (Greece); Loudos, G [Technological Educational Institute of Athens, Egaleo, Attika (Greece)

    2015-06-15

    Purpose: The purpose of the present study was to quantify and evaluate the impact of cardiac and respiratory motion on clinical nuclear imaging protocols. Common SPECT and scintigraphic scans are studied using Monte Carlo (MC) simulations, comparing the resulting images with and without motion. Methods: Realistic simulations were executed using the GATE toolkit and the XCAT anthropomorphic phantom as a reference model for human anatomy. Three different radiopharmaceuticals based on 99mTc were studied, namely 99mTc-MDP, 99mTc-N-DBODC and 99mTc-DTPA-aerosol for bone, myocardium and lung scanning respectively. The resolution of the phantom was set to 3.5 mm³. The impact of motion on spatial resolution was quantified using a sphere of 3.5 mm diameter and 10 separate time frames in a model of the ECAM SPECT scanner. Finally, the impact of respiratory motion on resolution and on the imaging of lung lesions was investigated. The MLEM algorithm was used for data reconstruction, while literature-derived biodistributions of the pharmaceuticals were used as activity maps in the simulations. Results: The FWHM was extracted for a static and a moving sphere which was ∼23 cm away from the entrance of the SPECT head. The difference in the FWHM was 20% between the two simulations. Profiles in the thorax were compared in the case of bone scintigraphy, showing displacement and blurring of the bones when respiratory motion was included in the simulation. Large discrepancies were noticed in the case of myocardium imaging when cardiac motion was incorporated during the SPECT acquisition. Finally, the borders of the lungs are blurred when respiratory motion is included, resulting in a displacement of ∼2.5 cm. Conclusion: As we move to individualized imaging and therapy procedures, quantitative and qualitative imaging is of high importance in nuclear diagnosis. MC simulations combined with anthropomorphic digital phantoms can provide an accurate tool for applications like motion correction.
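    The FWHM comparison reported above can be illustrated with a small sketch that extracts the FWHM of a line profile by interpolating its half-maximum crossings; the Gaussian profiles used here are synthetic stand-ins for the reconstructed SPECT profiles, and their widths are assumed values.

```python
import numpy as np

def fwhm(positions_mm, profile):
    """FWHM by linear interpolation of the half-maximum crossings."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i0, i1 = above[0], above[-1]
    left = np.interp(half, [profile[i0 - 1], profile[i0]],
                     [positions_mm[i0 - 1], positions_mm[i0]])
    right = np.interp(half, [profile[i1 + 1], profile[i1]],
                      [positions_mm[i1 + 1], positions_mm[i1]])
    return right - left

x = np.arange(-30, 30.5, 0.5)                       # mm
static = np.exp(-x**2 / (2 * 4.0**2))               # assumed static response, sigma = 4 mm
moving = np.exp(-x**2 / (2 * 4.8**2))               # assumed motion-blurred response

print("static FWHM :", round(fwhm(x, static), 2), "mm")
print("moving FWHM :", round(fwhm(x, moving), 2), "mm")
print("degradation :", round(100 * (fwhm(x, moving) / fwhm(x, static) - 1), 1), "%")
```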

  15. Performance of the WRF model to simulate the seasonal and interannual variability of hydrometeorological variables in East Africa: a case study for the Tana River basin in Kenya

    Science.gov (United States)

    Kerandi, Noah Misati; Laux, Patrick; Arnault, Joel; Kunstmann, Harald

    2017-10-01

    This study investigates the ability of the regional climate model Weather Research and Forecasting (WRF) to simulate the seasonal and interannual variability of hydrometeorological variables in the Tana River basin (TRB) in Kenya, East Africa. The impact of two different land use classifications, i.e., the Moderate Resolution Imaging Spectroradiometer (MODIS) and the US Geological Survey (USGS) classifications, at two horizontal resolutions (50 and 25 km) is investigated. Simulated precipitation and temperature for the period 2011-2014 are compared with Tropical Rainfall Measuring Mission (TRMM), Climate Research Unit (CRU), and station data. The ability of the TRMM and CRU data to reproduce in situ observations in the TRB is analyzed. All considered WRF simulations capture well the annual as well as the interannual and spatial distribution of precipitation in the TRB according to station data and the TRMM estimates. Our results demonstrate that the increase of horizontal resolution from 50 to 25 km, together with the use of the MODIS land use classification, significantly improves the precipitation results. In the case of temperature, spatial patterns and the seasonal cycle are well reproduced, although there is a systematic cold bias with respect to both station and CRU data. Our results contribute to the identification of suitable and regionally adapted regional climate models (RCMs) for East Africa.

  16. A thermodynamic study of peptides binding to carbon nanotubes based on a hydrophobic-polar lattice model using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Cheng, Y; Lu, C; Liu, G R; Li, Z R; Mi, D

    2008-01-01

    Carbon nanotubes (CNTs) are outstanding novel materials that have great potential for a variety of chemical and biomedical applications. However, the mechanism of their interactions with biomaterials is still not fully understood, and more insightful research work is needed. In this work, we use the 2D hydrophobic-polar lattice model and the Monte Carlo simulation method to study the interactions between model peptides and CNTs. The energy parameters of the coarse-grained lattice model are qualitatively determined based on experimental data and molecular dynamics simulation results. Our model is capable of reproducing the essential phenomena of peptides folding in bulk water and binding to CNTs, as well as providing new insights into the thermodynamics and conformational properties of peptides interacting with nanotubes. The results suggest that both the internal energy and the peptide conformational entropy contribute to the binding process. Upon binding to the CNTs, peptides generally unfold into their denatured structures before they reach the lowest-accessible energy states of the system. Temperature has a significant influence on the adsorption process

  17. Obesity trend in the United States and economic intervention options to change it: A simulation study linking ecological epidemiology and system dynamics modeling.

    Science.gov (United States)

    Chen, H-J; Xue, H; Liu, S; Huang, T T K; Wang, Y C; Wang, Y

    2018-05-29

    To study the country-level dynamics and influences between population weight status and socio-economic distribution (employment status and family income) in the US and to project the potential impacts of socio-economic-based intervention options on obesity prevalence. Ecological study and simulation. Using longitudinal data from the 2001-2011 Medical Expenditure Panel Survey (N = 88,453 adults), we built and calibrated a system dynamics model (SDM) capturing the feedback loops between body weight status and socio-economic status distribution, and simulated the effects of employment- and income-based intervention options. The SDM-based simulation projected rising overweight/obesity prevalence in the US in the future. Improving people's income from the lower to the middle-income group would help control the rising prevalence, while merely creating jobs for the unemployed did not show such an effect. Moving people from low- to middle-income levels, rather than solely improving the reemployment rate, may be effective in curbing the rising obesity trend in the US adult population. This study indicates the value of the SDM as a virtual laboratory to evaluate complex distributive phenomena in the interplay between population health and the economy. Copyright © 2018 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
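    A toy stock-and-flow sketch of the system-dynamics idea is given below; the stocks, rate constants and the assumed effect of the low-income share are hypothetical and do not represent the calibrated MEPS-based model.

```python
import numpy as np

# Minimal system-dynamics sketch: two stocks (non-obese and obese adults) with
# flows whose rates depend on a "low-income share" lever. All stocks and rates
# are hypothetical, chosen only to show the stock-and-flow mechanics.
def run(low_income_share, years=30, dt=0.25):
    non_obese, obese = 70.0, 30.0                    # millions (assumed initial stocks)
    base_onset, recovery = 0.030, 0.010              # yearly per-capita rates (assumed)
    income_effect = 0.04                             # extra onset risk per unit low-income share (assumed)
    prevalence = 0.0
    for _ in np.arange(0, years, dt):
        onset_rate = base_onset + income_effect * low_income_share
        becoming_obese = onset_rate * non_obese * dt  # flow: non-obese -> obese
        recovering = recovery * obese * dt            # flow: obese -> non-obese
        non_obese += recovering - becoming_obese
        obese += becoming_obese - recovering
        prevalence = obese / (obese + non_obese)
    return prevalence

print("30-year prevalence, high low-income share :", round(run(0.40), 3))
print("30-year prevalence, income intervention   :", round(run(0.25), 3))
```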

  18. Rotating and translating anthropomorphic head voxel models to establish an horizontal Frankfort plane for dental CBCT Monte Carlo simulations: a dose comparison study

    Science.gov (United States)

    Stratis, A.; Zhang, G.; Jacobs, R.; Bogaerts, R.; Bosmans, H.

    2016-12-01

    In order to carry out Monte Carlo (MC) dosimetry studies, voxel phantoms modeling human anatomy, built by organ-based segmentation of CT image data sets, are applied to simulation frameworks. The resulting voxel phantoms preserve the patient CT acquisition geometry; in the case of head voxel models built upon head CT images, the head support with which CT scanners are equipped introduces an inclination to the head, and hence to the head voxel model. In dental cone beam CT (CBCT) imaging, patients are always positioned in such a way that the Frankfort line is horizontal, implying that there is no head inclination. The orientation of the head is important, as it influences the distance of critical radiosensitive organs like the thyroid and the esophagus from the x-ray tube. This work aims to propose a procedure to adjust head voxel phantom orientation, and to investigate the impact of head inclination on organ doses in dental CBCT MC dosimetry studies. The female adult ICRP phantom and three in-house-built paediatric voxel phantoms were used in this study. An EGSnrc MC framework was employed to simulate two commonly used protocols: a standard-resolution protocol on a Morita Accuitomo 170 dental CBCT scanner (FOVs: 60 × 60 mm² and 80 × 80 mm²), and a 3D Teeth protocol (FOV: 100 × 90 mm²) on a Planmeca Promax 3D MAX scanner. The analysis revealed large absorbed organ dose differences in radiosensitive organs between the original and the geometrically corrected voxel models of this study, ranging from -45.6% to 39.3%. Therefore, accurate dental CBCT MC dose calculations require geometrical adjustments to be applied to head voxel models.
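    The orientation adjustment described above can be sketched, under assumptions, as a rotation of the voxel volume about the left-right axis; the inclination angle, array layout and nearest-neighbour interpolation choice below are illustrative and are not the procedure's actual parameters.

```python
import numpy as np
from scipy import ndimage

# Minimal sketch of removing the head inclination from a voxel phantom by
# rotating the volume so the Frankfort plane becomes horizontal. The angle and
# the (z, y, x) array layout are assumptions; the phantom here is a dummy array.
phantom = np.zeros((160, 256, 256), dtype=np.int16)      # placeholder voxel model
phantom[40:120, 80:180, 60:200] = 1                      # dummy "head" region

inclination_deg = 12.0                                    # assumed CT head-support tilt

# Rotate in the sagittal (z, y) plane; order=0 keeps integer organ labels intact.
corrected = ndimage.rotate(phantom, angle=-inclination_deg,
                           axes=(0, 1), reshape=False, order=0)

print("labelled voxels before:", int((phantom == 1).sum()))
print("labelled voxels after :", int((corrected == 1).sum()))
```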

  19. Molecular Dynamics Simulation Study of Parallel Telomeric DNA Quadruplexes at Different Ionic Strengths: Evaluation of Water and Ion Models

    Czech Academy of Sciences Publication Activity Database

    Rebic, M.; Laaksonen, A.; Šponer, Jiří; Uličný, J.; Mocci, F.

    2016-01-01

    Roč. 120, č. 30 (2016), s. 7380-7391 ISSN 1520-6106 R&D Projects: GA ČR(CZ) GA16-13721S Institutional support: RVO:68081707 Keywords : amber force-field * nucleic-acids * biomolecular simulations Subject RIV: BO - Biophysics OBOR OECD: Physical chemistry Impact factor: 3.177, year: 2016

  20. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Computer Based ... universities, and later did system analysis, ... personal computers (PCs) and low cost software packages and tools. They can serve as a useful learning experience through student projects. Models are ... Let us consider a numerical example: to calculate the velocity of a trainer aircraft ...

  1. At the biological modeling and simulation frontier.

    Science.gov (United States)

    Hunt, C Anthony; Ropella, Glen E P; Lam, Tai Ning; Tang, Jonathan; Kim, Sean H J; Engelberg, Jesse A; Sheikh-Bahaei, Shahab

    2009-11-01

    We provide a rationale for and describe examples of synthetic modeling and simulation (M&S) of biological systems. We explain how synthetic methods are distinct from familiar inductive methods. Synthetic M&S is a means to better understand the mechanisms that generate normal and disease-related phenomena observed in research, and how compounds of interest interact with them to alter phenomena. An objective is to build better, working hypotheses of plausible mechanisms. A synthetic model is an extant hypothesis: execution produces an observable mechanism and phenomena. Mobile objects representing compounds carry information enabling components to distinguish between them and react accordingly when different compounds are studied simultaneously. We argue that the familiar inductive approaches contribute to the general inefficiencies being experienced by pharmaceutical R&D, and that use of synthetic approaches accelerates and improves R&D decision-making and thus the drug development process. A reason is that synthetic models encourage and facilitate abductive scientific reasoning, a primary means of knowledge creation and creative cognition. When synthetic models are executed, we observe different aspects of knowledge in action from different perspectives. These models can be tuned to reflect differences in experimental conditions and individuals, making translational research more concrete while moving us closer to personalized medicine.

  2. Thermal unit availability modeling in a regional simulation model

    International Nuclear Information System (INIS)

    Yamayee, Z.A.; Port, J.; Robinett, W.

    1983-01-01

    The System Analysis Model (SAM), developed under the umbrella of PNUCC's System Analysis Committee, is capable of simulating the operation of a given load/resource scenario. This model employs a Monte-Carlo simulation to incorporate uncertainties. Among the uncertainties modeled is thermal unit availability, both for energy simulations (seasonal) and capacity simulations (hourly). This paper presents the availability modeling in the capacity and energy models. The use of regional and national data in deriving the two availability models, the interaction between the two, and the modifications made to the capacity model in order to reflect regional practices are presented. A sample problem is presented to show the modification process. Results for modeling a nuclear unit using NERC-GADS are presented.
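    The hourly availability modeling can be illustrated with a small Monte Carlo sketch in which each unit is independently available or on forced outage; the unit sizes, forced outage rates and load level are illustrative assumptions rather than SAM or NERC-GADS data.

```python
import numpy as np

rng = np.random.default_rng(6)

# Monte Carlo sketch of hourly thermal-unit availability: each unit is either
# available or on forced outage, and the distribution of available capacity is
# compared with the hourly load. All inputs below are illustrative.
unit_mw  = np.array([1100, 600, 600, 350, 350, 200])        # installed units (MW)
for_rate = np.array([0.08, 0.06, 0.06, 0.05, 0.05, 0.04])   # forced outage rates

n_trials, load_mw = 50_000, 2600.0
available = rng.random((n_trials, unit_mw.size)) >= for_rate
capacity = (available * unit_mw).sum(axis=1)

lolp = np.mean(capacity < load_mw)                           # loss-of-load probability per hour
print("mean available capacity:", round(float(capacity.mean()), 1), "MW")
print("P(capacity < load)     :", round(float(lolp), 4))
```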

  3. Modeling lift operations with SAS Simulation Studio

    Science.gov (United States)

    Kar, Leow Soo

    2016-10-01

    Lifts or elevators are an essential part of multistorey buildings, providing vertical transportation for their occupants. In large high-rise apartment buildings the occupants are permanent, while in buildings like hospitals or office blocks the occupants are temporary users of the buildings. They come in to work or to visit, and thus the population of such buildings is much higher than that of residential apartments. It is common these days for large office blocks or hospitals to have at least 8 to 10 lifts serving their population. In order to optimize the level of service performance, different transportation schemes are devised to control the lift operations. For example, one lift may be assigned to serve solely the even floors and another solely the odd floors, etc. In this paper, a basic lift system is modelled using SAS Simulation Studio to study the effect of factors such as the number of floors, the capacity of the lift car, the arrival rate and exit rate of passengers at each floor, and peak and off-peak periods on the system performance. The simulation is applied to a real lift operation in Sunway College's North Building to validate the model.
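    A minimal discrete-event sketch of a single-lift system is shown below, with SimPy standing in for SAS Simulation Studio; the number of floors, car capacity, travel and stop times and the arrival rate are illustrative assumptions.

```python
import random
import simpy

random.seed(7)

# Minimal lift sketch: passengers arrive at the ground floor, a single lift
# carries up to CAPACITY of them per trip, and waiting times are recorded.
FLOORS, CAPACITY = 10, 8
TRAVEL_PER_FLOOR, STOP_TIME = 3.0, 8.0            # seconds (assumed)
waiting, lobby = [], []

def arrivals(env, mean_gap=10.0):
    while True:
        yield env.timeout(random.expovariate(1.0 / mean_gap))
        lobby.append((env.now, random.randint(1, FLOORS)))   # (arrival time, destination)

def lift(env):
    while True:
        if not lobby:
            yield env.timeout(1.0)                # idle at the ground floor
            continue
        batch = [lobby.pop(0) for _ in range(min(CAPACITY, len(lobby)))]
        waiting.extend(env.now - t for t, _ in batch)
        top = max(dest for _, dest in batch)
        stops = len({dest for _, dest in batch})
        # Round trip: travel to the highest requested floor, stop at each
        # distinct destination, then return empty to the ground floor.
        yield env.timeout(2 * top * TRAVEL_PER_FLOOR + stops * STOP_TIME)

env = simpy.Environment()
env.process(arrivals(env))
env.process(lift(env))
env.run(until=2 * 3600)                           # two simulated hours

print("passengers served:", len(waiting))
print("mean wait (s):", round(sum(waiting) / len(waiting), 1))
```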

  4. Plasma disruption modeling and simulation

    International Nuclear Information System (INIS)

    Hassanein, A.

    1994-01-01

    Disruptions in tokamak reactors are considered a limiting factor for successful operation and reliable design. The behavior of plasma-facing components during a disruption is critical to the overall integrity of the reactor. Erosion of plasma-facing material (PFM) surfaces due to the thermal energy dump during the disruption can severely limit the lifetime of these components and thus diminish the economic feasibility of the reactor. A comprehensive understanding of the interplay of various physical processes during a disruption is essential for determining component lifetime and potentially improving the performance of such components. There are three principal stages in modeling the behavior of a PFM during a disruption. Initially, the incident plasma particles deposit their energy directly on the PFM surface, heating it to a very high temperature where ablation occurs. Models for plasma-material interactions have been developed and used to predict the material thermal evolution during the disruption. Within a few microseconds after the start of the disruption, enough material is vaporized to intercept most of the incoming plasma particles. Models for plasma-vapor interactions are necessary to predict vapor cloud expansion and hydrodynamics. Continuous heating of the vapor cloud above the material surface by the incident plasma particles will excite, ionize, and cause vapor atoms to emit thermal radiation. Accurate models for radiation transport in the vapor are essential for calculating the net radiated flux to the material surface, which determines the final erosion thickness and consequently the component lifetime. A comprehensive model that takes into account the various stages of plasma-material interaction has been developed and used to predict erosion rates during reactor disruptions, as well as during induced disruptions in laboratory experiments.

  5. Modelling and simulating fire tube boiler performance

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    A model for a flue gas boiler covering the flue gas and the water-/steam side has been formulated. The model has been formulated as a number of sub-models that are merged into an overall model for the complete boiler. Sub-models have been defined for the furnace, the convection zone (split into two zones: a zone submerged in water and a zone covered by steam), the material in the boiler (the steel), and two models for, respectively, the water/steam zone (the boiling) and the steam. The dynamic model has been developed as a number of Differential-Algebraic-Equation (DAE) systems. Subsequently MatLab/Simulink has been applied for carrying out the simulations. To be able to verify the simulated results, experiments have been carried out on a full scale boiler plant.

  6. A beer game simulation model for studying the impact of information sharing to diminish the bullwhip effect in supply chains: an educational support tool in supply chain management

    Directory of Open Access Journals (Sweden)

    Éder Vasco Pinheiro

    2016-06-01

    This paper simulates the Beer Distribution Game using object-oriented simulation software. A five-echelon supply chain with bidirectional relationships is reproduced, employing simulation to demonstrate the impact of information on the generation of the bullwhip effect. In doing so, this study intends to provide a simple didactic tool to support the teaching of supply chain management. As a result of the simulations, it was possible to demonstrate the occurrence of the bullwhip effect and how information sharing can diminish it.
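    The bullwhip mechanism can be sketched with a simple multi-echelon order-up-to simulation in which each echelon forecasts demand with a moving average; the lead times, forecasting window and demand process are illustrative assumptions, not the Beer Game settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

# Minimal bullwhip sketch: each echelon forecasts the orders it receives with a
# moving average and places an order-up-to order upstream. The forecast
# revision scaled by the lead time is what amplifies variability upstream.
def simulate(n_weeks=200, echelons=4, lead_time=2, window=4):
    demand = np.clip(rng.normal(8, 2, n_weeks), 0, None)      # customer demand (assumed)
    orders = np.zeros((echelons + 1, n_weeks))
    orders[0] = demand
    for e in range(1, echelons + 1):
        down = orders[e - 1]                                   # orders received from downstream
        orders[e, :window + 1] = down[:window + 1]
        for t in range(window + 1, n_weeks):
            f_now = down[t - window:t].mean()                  # this week's moving-average forecast
            f_prev = down[t - window - 1:t - 1].mean()         # last week's forecast
            orders[e, t] = max(0.0, down[t] + (lead_time + 1) * (f_now - f_prev))
    return orders

orders = simulate()
for e, label in enumerate(["customer", "retailer", "wholesaler", "distributor", "factory"]):
    print(f"{label:12s} order variance: {orders[e].var():8.2f}")
```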

  7. A virtual laboratory notebook for simulation models.

    Science.gov (United States)

    Winfield, A J

    1998-01-01

    In this paper we describe how we have adopted the laboratory notebook as a metaphor for interacting with computer simulation models. This 'virtual' notebook stores the simulation output and meta-data (which are used to record the scientist's interactions with the simulation). The meta-data stored consist of annotations (equivalent to marginal notes in a laboratory notebook), a history tree and a log of user interactions. The history tree structure records when, in 'simulation' time, and from what starting point in the tree, changes are made to the parameters by the user. Typically these changes define a new run of the simulation model (which is represented as a new branch of the history tree). The tree shows the structure of the changes made to the simulation, and the log is required to keep the order in which the changes occurred. Together they form a record which you would normally find in a laboratory notebook. The history tree is plotted in simulation parameter space. This shows the scientist's interactions with the simulation visually and allows direct manipulation of the parameter information presented, which in turn is used to control directly the state of the simulation. The interactions with the system are graphical and usually involve directly selecting or dragging data markers and other graphical control devices around in parameter space. If the graphical manipulators do not provide precise enough control, textual manipulation is still available, which allows numerical values to be entered by hand. The Virtual Laboratory Notebook, by providing interesting interactions with the visual view of the history tree, gives the user complex and novel ways of interacting with biological computer simulation models.
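    The history-tree idea can be sketched with a small data structure in which each node records a parameter change at a given simulation time and new runs branch from any node; the class and field names below are illustrative, not those of the described system.

```python
from dataclasses import dataclass, field
from typing import Optional

# Minimal sketch of a "history tree": each node records the parameter set in
# force from a given simulation time, plus an annotation (a marginal note).
@dataclass
class HistoryNode:
    sim_time: float                      # simulation time at which the change was made
    params: dict                         # full parameter set for the run from this point
    annotation: str = ""                 # marginal note, as in a lab notebook
    parent: Optional["HistoryNode"] = None
    children: list = field(default_factory=list)

    def branch(self, sim_time, changes, annotation=""):
        """Create a new run (branch) by applying parameter changes at sim_time."""
        child = HistoryNode(sim_time, {**self.params, **changes}, annotation, self)
        self.children.append(child)
        return child

# Root run, then branches exploring different parameter choices.
root = HistoryNode(0.0, {"growth_rate": 0.10, "carrying_capacity": 500})
a = root.branch(12.0, {"growth_rate": 0.15}, "try faster growth")
b = root.branch(12.0, {"growth_rate": 0.05}, "try slower growth")
c = a.branch(30.0, {"carrying_capacity": 800}, "raise capacity on fast branch")

# A simple interaction log preserving the order of the changes.
log = [(n.sim_time, n.annotation) for n in (a, b, c)]
print(log)
```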

  8. Bridging experiments, models and simulations

    DEFF Research Database (Denmark)

    Carusi, Annamaria; Burrage, Kevin; Rodríguez, Blanca

    2012-01-01

    Computational models in physiology often integrate functional and structural information from a large range of spatiotemporal scales from the ionic to the whole organ level. Their sophistication raises both expectations and skepticism concerning how computational methods can improve our understanding of living organisms and also how they can reduce, replace, and refine animal experiments. A fundamental requirement to fulfill these expectations and achieve the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present ... that contributes to defining the specific aspects of cardiac electrophysiology the MSE system targets, rather than being only an external test, and that this is driven by advances in experimental and computational methods and the combination of both.

  9. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    Science.gov (United States)

    Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K

    2016-11-25

    Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
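    The attenuation mechanism can be illustrated with a single-pollutant sketch: counts are simulated from a Poisson model with a known RR, classical error is added to the exposure, and a Poisson GLM is refitted. The exposure process, error variance and baseline counts are assumptions, not the Atlanta data or the paper's copollutant design.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)

# Minimal sketch of classical exposure measurement error attenuating a Poisson
# time-series risk estimate. All inputs are illustrative assumptions.
n_days = 1460
true_x = rng.normal(0.0, 1.0, n_days)                 # true exposure (standardised)
true_log_rr = np.log(1.05)                            # true RR of 1.05 per unit exposure

counts = rng.poisson(np.exp(np.log(40.0) + true_log_rr * true_x))

def fitted_rr(exposure):
    design = sm.add_constant(exposure)
    fit = sm.GLM(counts, design, family=sm.families.Poisson()).fit()
    return float(np.exp(fit.params[1]))

observed_x = true_x + rng.normal(0.0, 1.0, n_days)    # classical measurement error (variance assumed)

print("RR with true exposure       :", round(fitted_rr(true_x), 3))
print("RR with error-prone exposure:", round(fitted_rr(observed_x), 3))   # attenuated toward 1
```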

  10. Chernobyl reactor transient simulation study

    International Nuclear Information System (INIS)

    Gaber, F.A.; El Messiry, A.M.

    1988-01-01

    This paper deals with the Chernobyl nuclear power station transient simulation study. The Chernobyl (RBMK) reactor is a graphite moderated pressure tube type reactor. It is cooled by circulating light water that boils in the upper parts of vertical pressure tubes to produce steam. At equilibrium fuel irradiation, the RBMK reactor has a positive void reactivity coefficient. However, the fuel temperature coefficient is negative and the net effect of a power change depends upon the power level. Under normal operating conditions the net effect (power coefficient) is negative at full power and becomes positive under certain transient conditions. A series of dynamic performance transient analyses for the RBMK reactor, a pressurized water reactor (PWR) and a fast breeder reactor (FBR) has been performed using digital simulator codes. The purpose of this transient study is to show that an accident of Chernobyl's severity does not occur in PWR or FBR nuclear power reactors. This appears from the study of the inherent stability of the RBMK, PWR and FBR under certain transient conditions. This inherent stability is related to the effect of the feedback reactivity. The power distribution stability in the graphite RBMK reactor is difficult to maintain throughout its entire life, so the reactor has an inherent instability. The PWR has a large negative temperature coefficient of reactivity; therefore the PWR by itself has a large amount of natural stability, so the PWR is inherently safe. The FBR has a positive sodium expansion coefficient and therefore insufficient stability. It has been concluded that the PWR offers safer operation than the FBR and RBMK reactors.
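    The role of the feedback reactivity discussed above can be illustrated with a one-group point-kinetics sketch in which only the sign of the temperature coefficient is changed; all parameter values are textbook-style assumptions and do not represent the RBMK, PWR or FBR.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative one-group point-kinetics sketch with a lumped temperature
# feedback. A small reactivity step is inserted; a negative temperature
# coefficient stabilises the power, a positive one leads to a runaway.
beta, lam, Lambda = 0.0065, 0.08, 1.0e-4   # delayed fraction, precursor decay (1/s), generation time (s)
heat_capacity, cooling = 1.0, 1.0          # arbitrary lumped thermal parameters (assumed)
rho_insert = 0.001                         # small step reactivity insertion (assumed)

def kinetics(t, y, alpha_t):
    n, c, temp = y                         # relative power, precursors, temperature deviation
    rho = rho_insert + alpha_t * temp      # net reactivity including temperature feedback
    dn = (rho - beta) / Lambda * n + lam * c
    dc = beta / Lambda * n - lam * c
    dtemp = (n - 1.0) / heat_capacity - cooling * temp
    return [dn, dc, dtemp]

def runaway(t, y, alpha_t):                # stop the run if power reaches 10x nominal
    return y[0] - 10.0
runaway.terminal = True

y0 = [1.0, beta / (lam * Lambda), 0.0]     # steady state at nominal power

for label, alpha_t in [("negative feedback (self-stabilising)", -0.01),
                       ("positive feedback (runaway)         ", +0.01)]:
    sol = solve_ivp(kinetics, (0, 120), y0, args=(alpha_t,),
                    method="LSODA", events=runaway, max_step=0.1)
    if sol.t_events[0].size:
        print(f"{label}: power reached 10x nominal at t = {sol.t_events[0][0]:.1f} s")
    else:
        print(f"{label}: power settles near {sol.y[0, -1]:.2f} x nominal")
```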

  11. Study of cellular retention of HMPAO and ECD in a model simulating the blood-brain barrier

    Energy Technology Data Exchange (ETDEWEB)

    Ponce, C.; Pittet, N.; Slosman, D.O. [HUG, 1211 Geneve 14, (Switzerland)

    1997-12-31

    HMPAO and ECD are two technetium-labelled lipophilic agents clinically used in the imaging of cerebral perfusion. These molecules cross the membranes and are retained inside the cell after being converted to a hydrophilic form. The aim of this study is to establish the distribution of this retention at the level of the blood-brain barrier (BBB) and nerve cells. The incorporation of HMPAO or ECD was studied in a co-culture model simulating the BBB, consisting of a single layer of T84 cells with tight junctions separated from a layer of U373 astrocyte cells. Cell quality and tight junction permeability were evaluated by the cellular retention of indium-111 chloride and by the para-cellular diffusion of 14C-mannitol,d-1. The values reported below were obtained at 180 minutes when the radiotracers were added near the 'T84 layer'. The cell quality is validated by the low cellular retention of the indium chloride (2.3 ± 0.3 µg⁻¹ for the T84 cells and 8.2 ± 5.8 µg⁻¹ for the U373 cells). The activity of 14C-mannitol,d-1 diminishes by 23 ± 5% in the compartment to which it was added. The retention of ECD by the U373 cells is significantly higher (20.7 ± 4.5 µg⁻¹) than that of the T84 cells (2.9 ± 0.2 µg⁻¹). For HMPAO a non-significant tendency could be observed (49 ± 34 µg⁻¹ for the U373 cells and 38 ± 25 µg⁻¹ for the T84 cells). The results for cellular retention of HMPAO or ECD when added near the 'U373 layer' are not significantly different. In conclusion, independently of the side exposed to the radiotracers, one observes an enhanced incorporation in the U373 cells. Together these results represent additional arguments in favour of a specific cellular incorporation of the radiotracers, independent of the BBB permeability.

  12. Validation techniques of agent based modelling for geospatial simulations

    OpenAIRE

    Darvishi, M.; Ahmadi, G.

    2014-01-01

    One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that occur at large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases it is impossible. Therefore, miniaturization of world phenomena within the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent...

  13. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    ... and the total stress level (i.e. stresses introduced due to internal pressure plus stresses introduced due to temperature gradients) must always be kept below the allowable stress level. In this way, the increased water-/steam space that should allow for better dynamic performance in the end causes limited freedom with respect to dynamic operation of the plant. By means of an objective function that includes both the price of the plant and a quantification of the value of dynamic operation of the plant, an optimization is carried out. The dynamic model of the boiler plant is applied to define parts ...

  14. An Investigation of Two Finite Element Modeling Solutions for Biomechanical Simulation Using a Case Study of a Mandibular Bone.

    Science.gov (United States)

    Liu, Yun-Feng; Fan, Ying-Ying; Dong, Hui-Yue; Zhang, Jian-Xing

    2017-12-01

    The method used in biomechanical modeling for finite element method (FEM) analysis needs to deliver accurate results. There are currently two solutions used in FEM modeling for biomedical models of human bone from computerized tomography (CT) images: one is based on a triangular mesh and the other is based on a parametric surface model; the latter is more popular in practice. The outline and modeling procedures for the two solutions are compared and analyzed. Using a mandibular bone as an example, several key modeling steps are then discussed in detail, and the FEM calculations were conducted. Numerical calculation results based on the models derived from the two methods, including stress, strain, and displacement, are compared and evaluated in relation to accuracy and validity. Moreover, a comprehensive comparison of the two solutions is presented. The parametric-surface-based method is more helpful when using powerful design tools in computer-aided design (CAD) software, but the triangular-mesh-based method is more robust and efficient.

  15. Dosimetry study of [I-131]- and [I-125]-meta-iodobenzylguanidine in a simulating model for neuroblastoma metastasis.

    Science.gov (United States)

    Roa, W H; Yaremko, B; McEwan, A; Amanie, J; Yee, D; Cho, J; McQuarrie, S; Riauka, T; Sloboda, R; Wiebe, L; Loebenberg, R; Janicki, C

    2013-02-01

    The physical properties of I-131 may be suboptimal for the delivery of therapeutic radiation to bone marrow metastases, which are common in the natural history of neuroblastoma. In vitro and preliminary clinical studies have implied improved efficacy of I-125 relative to I-131 in certain clinical situations, although areas of uncertainty remain regarding intratumoral dosimetry. This prompted our study using human neuroblastoma multicellular spheroids as a model of metastasis. 3D dose calculations were made using voxel-based Medical Internal Radiation Dosimetry (MIRD) and dose-point-kernel (DPK) techniques. Dose distributions for I-131 and I-125 labeled mIBG were calculated for spheroids (metastases) of various sizes from 0.01 cm to 3 cm diameter, and the relative dose delivered to the tumors was compared for the same limiting dose to the bone marrow. Based on the same data, arguments were advanced based upon the principles of tumor control probability (TCP) to emphasize the potential theoretical utility of I-125 over I-131 in specific clinical situations. I-125-mIBG can deliver a higher and more uniform dose to tumors compared to I-131 mIBG without increasing the dose to the bone marrow. Depending on the tumor size and biological half-life, the relative dose to tumors of less than 1 mm diameter can increase several-fold. TCP calculations indicate that tumor control increases with increasing administered activity, and that I-125 is more effective than I-131 for tumor diameters of 0.01 cm or less. This study suggests that I-125-mIBG is dosimetrically superior to I-131-mIBG therapy for small bone marrow metastases from neuroblastoma. It is logical to consider adding I-125-mIBG to I-131-mIBG in multi-modality therapy as these two isotopes could be complementary in terms of their cumulative dosimetry.

  16. The sensitivity of the Arctic sea ice to orbitally induced insolation changes: a study of the mid-Holocene Paleoclimate Modelling Intercomparison Project 2 and 3 simulations

    Directory of Open Access Journals (Sweden)

    M. Berger

    2013-04-01

    In the present work the Arctic sea ice in the mid-Holocene and the pre-industrial climates is analysed and compared on the basis of climate-model results from the Paleoclimate Modelling Intercomparison Project phase 2 (PMIP2) and phase 3 (PMIP3). The PMIP3 models generally simulate a smaller and thinner sea-ice cover than the PMIP2 models, both for the pre-industrial and the mid-Holocene climate. Further, the PMIP2 and PMIP3 models all simulate a smaller and thinner Arctic summer sea-ice cover in the mid-Holocene than in the pre-industrial control climate. The PMIP3 models also simulate thinner winter sea ice than the PMIP2 models. The winter sea-ice extent response, i.e. the difference between the mid-Holocene and the pre-industrial climate, varies among both PMIP2 and PMIP3 models: approximately one half of the models simulate a decrease in winter sea-ice extent and one half simulate an increase. The model-mean summer sea-ice extent is 11% (21%) smaller in the mid-Holocene than in the pre-industrial climate simulations in PMIP2 (PMIP3). In accordance with the simple model of Thorndike (1992), the sea-ice thickness response to the insolation change from the pre-industrial to the mid-Holocene is stronger in models with thicker ice in the pre-industrial climate simulation. Further, the analyses show that climate models for which the Arctic sea-ice responses to increasing atmospheric CO2 concentrations are similar may simulate rather different sea-ice responses to the change in solar forcing between the mid-Holocene and the pre-industrial. For two specific models, which are analysed in detail, this difference is found to be associated with differences in the simulated cloud fractions in the summer Arctic; in the model with a larger cloud fraction the effect of the insolation change is muted. A sub-set of the mid-Holocene simulations in the PMIP ensemble exhibit open water off the north-eastern coast of Greenland in summer, which can provide a fetch ...

  17. Advanced training simulator models. Implementation and validation

    International Nuclear Information System (INIS)

    Borkowsky, Jeffrey; Judd, Jerry; Belblidia, Lotfi; O'farrell, David; Andersen, Peter

    2008-01-01

    Modern training simulators are required to replicate plant data for both the thermal-hydraulic and the neutronic response. Replication is required such that reactivity manipulation on the simulator properly trains the operator for reactivity manipulation at the plant. This paper discusses advanced models which perform this function in real time using the coupled code system THOR/S3R. This code system models all fluid systems in detail using an advanced two-phase thermal-hydraulic model. The nuclear core is modeled using an advanced, three-dimensional nodal method and by using cycle-specific nuclear data. These models are configured to run interactively from a graphical instructor station or hardware operation panels. The simulator models are theoretically rigorous and are expected to replicate the physics of the plant. However, to verify replication, the models must be independently assessed. Plant data is the preferred validation method, but plant data is often not available for many important training scenarios. In the absence of data, validation may be obtained by slower-than-real-time transient analysis. This analysis can be performed by coupling a safety analysis code and a core design code. Such a coupling exists between the codes RELAP5 and SIMULATE-3K (S3K). RELAP5/S3K is used to validate the real-time model for several postulated plant events. (author)

  18. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  19. Analytical system dynamics modeling and simulation

    CERN Document Server

    Fabien, Brian C

    2008-01-01

    This book, offering a modeling technique based on Lagrange's energy method, includes 125 worked examples. Using this technique enables one to model and simulate systems as diverse as a six-link, closed-loop mechanism or a transistor power amplifier.

  20. Hybrid simulation models of production networks

    CERN Document Server

    Kouikoglou, Vassilis S

    2001-01-01

    This book is concerned with a most important area of industrial production, that of analysis and optimization of production lines and networks using discrete-event models and simulation. The book introduces a novel approach that combines analytic models and discrete-event simulation. Unlike conventional piece-by-piece simulation, this method observes a reduced number of events between which the evolution of the system is tracked analytically. Using this hybrid approach, several models are developed for the analysis of production lines and networks. The hybrid approach combines speed and accuracy for exceptional analysis of most practical situations. A number of optimization problems, involving buffer design, workforce planning, and production control, are solved through the use of hybrid models.

  1. Study of x-ray fluorescence: development in Geant4 of new cross-section models for PIXE simulation. Biological and archaeological applications

    International Nuclear Information System (INIS)

    Ben Abdelouahed, Haifa

    2010-01-01

    ... a great number of times, which makes it possible to reproduce macroscopic effects and to calculate quantities such as, in our case, x-ray fluorescence. Among the several tools available for Monte Carlo simulation of particle-matter interactions, the Geant4 Monte Carlo toolkit is characterized by a particularly flexible architecture based on object-oriented technology. This is why we chose to build our simulation code on this tool. In the first chapter we treat the guiding principles of x-ray fluorescence, which leads us to present the various phenomena that limit its analytical sensitivity. The second chapter is devoted to the presentation of the Geant4 toolkit and to our validation tests of its electromagnetic processes. We describe the performance and the limits of this tool for the simulation of cross sections. In the same chapter we describe our development, within the Geant4 toolkit, of new models for calculating the cross sections for ionization of atoms by protons and alpha particles, and we discuss our method for validating these models. This work has been published and was adopted by the Geant4 project: our development currently replaces the EEDL database of Geant4 in the determination of ionization cross sections by charged particles, and thus makes the Geant4 toolkit functional for simulating x-ray fluorescence induced by charged particles. In the third chapter we describe our use of the developed code for simulating the absorption efficiency of the standard Si(Li) detector, which constitutes the essential piece of our x-ray fluorescence analysis set-up, and we present the approach adopted to optimize the geometrical parameters of the studied detector. This work was presented at the second international conference on spectroscopy. In the fourth chapter we present some examples of concrete applications to the study of ...

  2. Dynamic modeling and simulation of wind turbines

    International Nuclear Information System (INIS)

    Ghafari Seadat, M.H.; Kheradmand Keysami, M.; Lari, H.R.

    2002-01-01

    Using wind energy to generate electricity in wind turbines is a good way of using renewable energy and can also help to protect the environment. The main objective of this paper is the computer-aided dynamic modeling, by the energy method, and simulation of a wind turbine. In this paper, the equations of motion are derived for simulating the wind turbine system, and the behavior of the system is then obtained by solving the equations. For the simulation, the turbine is considered to have a three-bladed rotor facing the wind, an induction generator connected to the grid, and constant rotational speed. Every part of the wind turbine must be modeled; the main parts are the blades, gearbox, shafts and generator.

  3. Joint EEG/fMRI state space model for the detection of directed interactions in human brains—a simulation study

    International Nuclear Information System (INIS)

    Lenz, Michael; Linke, Yannick; Timmer, Jens; Schelter, Björn; Musso, Mariachristina; Weiller, Cornelius; Tüscher, Oliver

    2011-01-01

    An often addressed challenge in neuroscience research is the assignment of different tasks to specific brain regions. In many cases several brain regions are activated during a single task. Therefore, one is also interested in the temporal evolution of brain activity in order to infer causal relations between activated brain regions. These causal relations may be described by a directed, task-specific network which consists of activated brain regions as vertices and directed edges; the edges describe the causal relations. Inference of the task-specific brain network from measurements like electroencephalography (EEG) or functional magnetic resonance imaging (fMRI) is challenging, due to the low spatial resolution of the former and the low temporal resolution of the latter. Here, we present a simulation study investigating a possible combined analysis of simultaneously measured EEG and fMRI data to address the challenge specified above. A nonlinear state space model is used to distinguish between the underlying brain states and the (simulated) EEG/fMRI measurements. We make use of a modified unscented Kalman filter and a corresponding unscented smoother for the estimation of the underlying neural activity. Model parameters are estimated using an expectation-maximization algorithm, which exploits the partial linearity of our model. Inference of the brain network structure is then achieved using directed partial correlation, a measure for Granger causality. The results indicate that the convolution effect of the fMRI forward model poses a major challenge for parameter estimation and reduces the influence of the fMRI data in combined EEG-fMRI models. It remains to be investigated whether other models or similar combinations of other modalities, such as EEG and magnetoencephalography, can increase the benefit of the promising idea of combining various modalities.
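    The Granger-causality step can be illustrated with a minimal sketch: two signals are simulated so that x drives y with a one-sample delay, and autoregressive fits of y with and without the history of x are compared. Coupling strength, noise levels and the model order are illustrative assumptions, and this linear sketch stands in for the paper's state-space and directed-partial-correlation machinery.

```python
import numpy as np

rng = np.random.default_rng(10)

# Simulate a bivariate process in which x Granger-causes y with a lag of one sample.
n = 2000
x = np.zeros(n); y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal(scale=1.0)
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal(scale=1.0)

def residual_var(target, regressors):
    """Least-squares fit of the target on lag-1 regressors; return residual variance."""
    design = np.column_stack([r[:-1] for r in regressors])
    coef, *_ = np.linalg.lstsq(design, target[1:], rcond=None)
    resid = target[1:] - design @ coef
    return resid.var()

var_restricted = residual_var(y, [y])        # y predicted from its own past only
var_full = residual_var(y, [y, x])           # y predicted from its own and x's past
granger_xy = np.log(var_restricted / var_full)
granger_yx = np.log(residual_var(x, [x]) / residual_var(x, [x, y]))

print("Granger index x -> y:", round(float(granger_xy), 3))   # clearly > 0
print("Granger index y -> x:", round(float(granger_yx), 3))   # close to 0
```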

  4. Regional model simulations of New Zealand climate

    Science.gov (United States)

    Renwick, James A.; Katzfey, Jack J.; Nguyen, Kim C.; McGregor, John L.

    1998-03-01

    Simulation of New Zealand climate is examined through the use of a regional climate model nested within the output of the Commonwealth Scientific and Industrial Research Organisation nine-level general circulation model (GCM). R21 resolution GCM output is used to drive a regional model run at 125 km grid spacing over the Australasian region. The 125 km run is used in turn to drive a simulation at 50 km resolution over New Zealand. Simulations with a full seasonal cycle are performed for 10 model years. The focus is on the quality of the simulation of present-day climate, but results of a doubled-CO2 run are discussed briefly. Spatial patterns of mean simulated precipitation and surface temperatures improve markedly as horizontal resolution is increased, through the better resolution of the country's orography. However, increased horizontal resolution leads to a positive bias in precipitation. At 50 km resolution, simulated frequency distributions of daily maximum/minimum temperatures are statistically similar to those of observations at many stations, while frequency distributions of daily precipitation appear to be statistically different to those of observations at most stations. Modeled daily precipitation variability at 125 km resolution is considerably less than observed, but is comparable to, or exceeds, observed variability at 50 km resolution. The sensitivity of the simulated climate to changes in the specification of the land surface is discussed briefly. Spatial patterns of the frequency of extreme temperatures and precipitation are generally well modeled. Under a doubling of CO2, the frequency of precipitation extremes changes only slightly at most locations, while air frosts become virtually unknown except at high-elevation sites.

  5. Landscape Modelling and Simulation Using Spatial Data

    Directory of Open Access Journals (Sweden)

    Amjed Naser Mohsin AL-Hameedawi

    2017-08-01

    Full Text Available In this paper a procedure was developed for generating a spatial model of a landscape suited to realistic simulation. The procedure is based on combining spatial data and field measurements with computer graphics produced using Blender software, after which a 3D simulation can be formed based on the VIS ALL packages. The objective was to build a model utilising GIS, including inputs to the feature attribute data. The effort concentrated on assembling an adequate spatial prototype, defining a facilitation scheme and outlining the intended framework; the eventual result was used in simulation form. The performed procedure contains not only data gathering, fieldwork and model preparation, but extends to a new method for producing the corresponding 3D simulation mapping, which provides decision makers as well as investors with an independent navigation system for Geoscience applications.

  6. Study on the thermal degradation of 3-MCPD esters in model systems simulating deodorization of vegetable oils.

    Science.gov (United States)

    Ermacora, Alessia; Hrncirik, Karel

    2014-05-01

    The establishment of effective strategies for the mitigation of 3-MCPD esters in refined vegetable oils is restricted by limited knowledge of their mechanisms of formation and decomposition. In order to gain a better understanding of the thermal stability of these compounds, a model system for mimicking oil refining conditions was developed. Pure 3-MCPD esters (3-MCPD dipalmitate and 3-MCPD dilaurate) were subjected to thermal treatment (180-260°C) and the degradation products were monitored over time (0-24h). After 24h of treatment, both 3-MCPD esters showed a significant degradation (ranging from 30% to 70%), correlating with the temperature applied. The degradation pathway, similar for both compounds, was found to involve isomerisation (very rapid, equilibrium was reached within 2h at 260°C), dechlorination and deacylation reactions. The higher relative abundance of non-chlorinated compounds, namely acylglycerols, in the first stages of the treatment suggested that dechlorination is preferred over deacylation under the conditions applied in this study. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Pilot Study on the Applicability of Variance Reduction Techniques to the Simulation of a Stochastic Combat Model

    Science.gov (United States)

    1987-09-01

    inverse transform method to obtain unit-mean exponential random variables, where V_i is the j-th random number in the sequence of a stream of uniform random ... numbers. The inverse transform method is discussed in the simulation textbooks listed in the reference section of this thesis. X(b,c,d) = -P(b,c,d) ... Defender, C * P(b,c,d) ... We again use the inverse transform method to obtain the conditions for an interim event to occur and to induce the change in
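
    The fragment above refers to the inverse transform method for generating unit-mean exponential random variables from a stream of uniform random numbers. As a minimal illustration (the combat-model quantities P(b,c,d) and X(b,c,d) are not reproduced here), the transform is X = -ln(U):

      # Inverse transform sampling of unit-mean exponential variates
      import numpy as np

      rng = np.random.default_rng(42)
      u = rng.uniform(size=100_000)    # stream of uniform random numbers
      x = -np.log(u)                   # unit-mean exponential variates via X = -ln(U)

      print("sample mean:", x.mean())  # should be close to 1.0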

  8. A fuzzy-stochastic simulation-optimization model for planning electric power systems with considering peak-electricity demand: A case study of Qingdao, China

    International Nuclear Information System (INIS)

    Yu, L.; Li, Y.P.; Huang, G.H.

    2016-01-01

    In this study, a FSSOM (fuzzy-stochastic simulation-optimization model) is developed for planning EPS (electric power systems) considering peak demand under uncertainty. FSSOM integrates techniques of SVR (support vector regression), Monte Carlo simulation, and FICMP (fractile interval chance-constrained mixed-integer programming). In FSSOM, uncertainties expressed as fuzzy boundary intervals and random variables can be effectively tackled. In addition, an SVR-coupled Monte Carlo technique is used for predicting the peak-electricity demand. The FSSOM is applied to planning EPS for the City of Qingdao, China. Solutions for the electricity generation pattern to satisfy the city's peak demand under different probability levels and p-necessity levels have been generated. Results reveal that the city's electricity supply from renewable energies would be low (only 8.3% of the total electricity generation). Compared with an energy model that does not consider peak demand, the FSSOM can better guarantee the city's power supply and thus reduce the system failure risk. The findings can help decision makers not only adjust the existing electricity generation/supply pattern but also coordinate the conflicting interactions among system cost, energy supply security, pollutant mitigation, and constraint-violation risk. - Highlights: • FSSOM (fuzzy-stochastic simulation-optimization model) is developed for planning EPS. • It can address uncertainties as fuzzy-boundary intervals and random variables. • FSSOM can satisfy peak-electricity demand and optimize power allocation. • Solutions under different probability levels and p-necessity levels are analyzed. • Results create a tradeoff between system cost and peak-electricity demand violation risk.
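
    As a hedged sketch of the SVR-plus-Monte-Carlo idea used for peak-electricity demand, the example below fits an SVR to synthetic, hypothetical driver data and then samples uncertain future drivers to obtain a distribution of predicted peak demand; the fractile interval chance-constrained optimization stage of the FSSOM is not shown, and all numbers are placeholders.

      # SVR fitted to hypothetical demand drivers, followed by Monte Carlo prediction
      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(1)

      # Hypothetical historical data: drivers = [GDP index, peak temperature]
      X_hist = rng.uniform([0.8, 25.0], [1.2, 40.0], size=(200, 2))
      y_hist = 3.0 * X_hist[:, 0] + 0.08 * X_hist[:, 1] + rng.normal(0, 0.1, 200)

      model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X_hist, y_hist)

      # Monte Carlo sampling of uncertain future drivers
      X_future = rng.normal([1.1, 36.0], [0.05, 2.0], size=(5000, 2))
      peak = model.predict(X_future)
      print("predicted peak demand: mean %.2f, 95th pct %.2f" %
            (peak.mean(), np.percentile(peak, 95)))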

  9. Time-integrated activity coefficient estimation for radionuclide therapy using PET and a pharmacokinetic model: A simulation study on the effect of sampling schedule and noise

    Energy Technology Data Exchange (ETDEWEB)

    Hardiansyah, Deni [Medical Radiation Physics/Radiation Protection, Medical Faculty Mannheim, Universitätsmedizin Mannheim, Heidelberg University, Mannheim 68167, Germany and Department of Radiation Oncology, Medical Faculty Mannheim, Universitätsmedizin Mannheim, Heidelberg University, Mannheim 68167 (Germany); Guo, Wei; Glatting, Gerhard, E-mail: gerhard.glatting@medma.uni-heidelberg.de [Medical Radiation Physics/Radiation Protection, Medical Faculty Mannheim, Universitätsmedizin Mannheim, Heidelberg University, Mannheim 68167 (Germany); Kletting, Peter [Department of Nuclear Medicine, Ulm University, Ulm 89081 (Germany); Mottaghy, Felix M. [Department of Nuclear Medicine, University Hospital, RWTH Aachen University, Aachen 52074, Germany and Department of Nuclear Medicine, Maastricht University Medical Center MUMC+, Maastricht 6229 (Netherlands)

    2016-09-15

    Purpose: The aim of this study was to investigate the accuracy of PET-based treatment planning for predicting the time-integrated activity coefficients (TIACs). Methods: The parameters of a physiologically based pharmacokinetic (PBPK) model were fitted to the biokinetic data of 15 patients to derive assumed true parameters and were used to construct true mathematical patient phantoms (MPPs). Biokinetics of 150 MBq 68Ga-DOTATATE-PET was simulated with different noise levels [fractional standard deviation (FSD) 10%, 1%, 0.1%, and 0.01%], and seven combinations of measurements at 30 min, 1 h, and 4 h p.i. PBPK model parameters were fitted to the simulated noisy PET data using population-based Bayesian parameters to construct predicted MPPs. Therapy simulations were performed as 30 min infusion of 90Y-DOTATATE of 3.3 GBq in both true and predicted MPPs. Prediction accuracy was then calculated as relative variability v_organ between TIACs from both MPPs. Results: Large variability values of one time-point protocols [e.g., FSD = 1%, 240 min p.i., v_kidneys = (9 ± 6)%, and v_tumor = (27 ± 26)%] show inaccurate prediction. Accurate TIAC prediction of the kidneys was obtained for the case of two measurements (1 and 4 h p.i.), e.g., FSD = 1%, v_kidneys = (7 ± 3)%, and v_tumor = (22 ± 10)%, or three measurements, e.g., FSD = 1%, v_kidneys = (7 ± 3)%, and v_tumor = (22 ± 9)%. Conclusions: 68Ga-DOTATATE-PET measurements could possibly be used to predict the TIACs of 90Y-DOTATATE when using a PBPK model and population-based Bayesian parameters. The two time-point measurement at 1 and 4 h p.i. with a noise up to FSD = 1% allows an accurate prediction of the TIACs in kidneys.

  10. Quantitative interface models for simulating microstructure evolution

    International Nuclear Information System (INIS)

    Zhu, J.Z.; Wang, T.; Zhou, S.H.; Liu, Z.K.; Chen, L.Q.

    2004-01-01

    To quantitatively simulate microstructural evolution in real systems, we investigated three different interface models: a sharp-interface model implemented by the software DICTRA and two diffuse-interface models which use either physical order parameters or artificial order parameters. A particular example is considered, the diffusion-controlled growth of a γ′ precipitate in a supersaturated γ matrix in Ni-Al binary alloys. All three models use the thermodynamic and kinetic parameters from the same databases. The temporal evolution profiles of composition from different models are shown to agree with each other. The focus is on examining the advantages and disadvantages of each model as applied to microstructure evolution in alloys

  11. A queuing model for road traffic simulation

    International Nuclear Information System (INIS)

    Guerrouahane, N.; Aissani, D.; Bouallouche-Medjkoune, L.; Farhi, N.

    2015-01-01

    We present in this article a stochastic queuing model for road traffic. The model is based on the M/G/c/c state-dependent queuing model and is inspired by the deterministic Godunov scheme for road traffic simulation. We first propose a variant of the M/G/c/c state-dependent model that works with density-flow fundamental diagrams rather than density-speed relationships. We then extend this model in order to consider upstream traffic demand as well as downstream traffic supply. Finally, we show how to model a whole road by concatenating road sections as in the deterministic Godunov scheme
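
    A minimal sketch of a state-dependent road-section queue in the spirit of the M/G/c/c model described above is given below, with exponential travel times (so effectively M/M/c/c) and a linear speed-density relation; all parameter values are illustrative, and the concatenation of sections into a whole road is not shown.

      # Next-event simulation of one road section with occupancy-dependent service rate
      import numpy as np

      rng = np.random.default_rng(7)
      lam = 0.5          # arrival rate (vehicles / s)
      c = 50             # section capacity (max vehicles)
      L = 500.0          # section length (m)
      v_free = 15.0      # free-flow speed (m/s)

      def depart_rate(n):
          """Aggregate departure rate when n vehicles occupy the section."""
          if n == 0:
              return 0.0
          v = v_free * (1.0 - n / c)          # linear speed-density relation
          return n * max(v, 0.1) / L

      t, t_end, n = 0.0, 3600.0, 0
      blocked, occupancy_time = 0, 0.0
      while t < t_end:
          rate_out = depart_rate(n)
          rate_total = lam + rate_out
          dt = rng.exponential(1.0 / rate_total)
          occupancy_time += n * dt
          t += dt
          if rng.uniform() < lam / rate_total:    # arrival event
              if n < c:
                  n += 1
              else:
                  blocked += 1                    # arrival lost / held upstream
          else:                                   # departure event
              n -= 1

      print("mean occupancy:", occupancy_time / t_end, "blocked arrivals:", blocked)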

  12. Clock error models for simulation and estimation

    International Nuclear Information System (INIS)

    Meditch, J.S.

    1981-10-01

    Mathematical models for the simulation and estimation of errors in precision oscillators used as time references in satellite navigation systems are developed. The results, based on all currently known oscillator error sources, are directly implementable on a digital computer. The simulation formulation is sufficiently flexible to allow for the inclusion or exclusion of individual error sources as desired. The estimation algorithms, following from Kalman filter theory, provide directly for the error analysis of clock errors in both filtering and prediction
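
    As a hedged illustration of the kind of clock error simulation described above, the sketch below propagates a simple two-state model (phase and frequency offset driven by white and random-walk frequency noise); the diffusion coefficients are illustrative, not taken from the report, and the Kalman-filter estimation stage is omitted.

      # Two-state oscillator error model: phase and frequency offset
      import numpy as np

      rng = np.random.default_rng(3)
      dt, n_steps = 1.0, 86_400            # 1 s steps over one day
      q1, q2 = 1e-22, 1e-26                # white-FM and random-walk-FM intensities

      phase = np.zeros(n_steps)
      freq = np.zeros(n_steps)
      for k in range(1, n_steps):
          freq[k] = freq[k - 1] + rng.normal(0.0, np.sqrt(q2 * dt))
          phase[k] = phase[k - 1] + freq[k - 1] * dt + rng.normal(0.0, np.sqrt(q1 * dt))

      print("final phase error (s):", phase[-1])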

  13. Modeling and simulation goals and accomplishments

    International Nuclear Information System (INIS)

    Turinsky, P.

    2013-01-01

    The CASL (Consortium for Advanced Simulation of Light Water Reactors) mission is to develop and apply the Virtual Reactor simulator (VERA) to optimise nuclear power in terms of capital and operating costs, of nuclear waste production and of nuclear safety. An efficient and reliable virtual reactor simulator relies on 3-dimensional calculations, accurate physics models and code coupling. Advances in computer hardware, along with comparable advances in numerical solvers make the VERA project achievable. This series of slides details the VERA project and presents the specificities and performance of the codes involved in the project and ends by listing the computing needs

  14. Study on the cloud detection of GOCI by using the simulated surface reflectance from BRDF-model for the land application and meteorological utilization

    Science.gov (United States)

    Kim, Hye-Won; Yeom, Jong-Min; Woo, Sun-Hee; Chae, Tae-Byeong

    2016-04-01

    COMS (Communication, Ocean, and Meteorological Satellite) was launched at the Kourou space center in French Guiana on 27 June 2010. The Geostationary Ocean Color Imager (GOCI), the first geostationary ocean color satellite in the world for observing ocean phenomena, acquires scientific data once per hour from 00 UTC to 07 UTC. Moreover, the spectral channels of GOCI enable not only monitoring of the ocean but also extraction of land surface information over the Korean Peninsula, Japan, and Eastern China. Since it is extremely important to utilize GOCI data accurately for land applications, cloud pixels over the surface have to be removed. Unfortunately, infra-red (IR) channels, which can easily detect water vapor and cloud top temperature, are not included in the GOCI sensor. In this paper, an advanced cloud masking algorithm is proposed using the visible and near-IR (NIR) bands available in GOCI. The main obstacle to cloud masking with GOCI is how to handle the highly variable surface reflectance, which depends mainly on the solar zenith angle. In this study, we use a semi-empirical BRDF model to simulate the surface reflectance from a 16-day composite cloud-free image. When estimating the simulated surface reflectance, the same geometry as the GOCI observation is applied. The simulated surface reflectance is used to discriminate cloud areas, especially thin cloud, and shows a more reasonable result than the original threshold methods.
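
    A minimal sketch of the masking step described above is given below: a pixel is flagged as cloudy when its observed visible reflectance exceeds the BRDF-simulated clear-sky surface reflectance by a threshold. The arrays and the threshold value are hypothetical placeholders; the semi-empirical BRDF inversion from the 16-day cloud-free composite is not shown.

      # Threshold cloud mask against a simulated clear-sky surface reflectance
      import numpy as np

      def cloud_mask(obs_reflectance, brdf_simulated, threshold=0.08):
          """Flag a pixel as cloudy when the observed reflectance exceeds the
          simulated clear-sky surface reflectance by more than `threshold`."""
          return (obs_reflectance - brdf_simulated) > threshold

      obs = np.array([[0.05, 0.30], [0.12, 0.07]])   # observed reflectance (toy values)
      sim = np.array([[0.04, 0.05], [0.05, 0.06]])   # BRDF-simulated surface reflectance
      print(cloud_mask(obs, sim))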

  15. Simulation, calibration and validation protocols for the model 3D-CMCC-CNR-FEM: a case study in the Bonis’ watershed (Calabria, Italy

    Directory of Open Access Journals (Sweden)

    Collalti A

    2017-08-01

    Full Text Available Simulation, calibration and validation protocols for the model 3D-CMCC-CNR-FEM: a case study in the Bonis’ watershed (Calabria, Italy). At present, climate change is perhaps the greatest threat affecting people and the environment. Forest ecosystems have a key role in the mitigation of climate change. In this context, predicting the evolution and growth dynamics of forests, including carbon and water fluxes and their relation to forest management, has become a primary objective. The present study aims at defining a protocol for data collection and the workflow for using the 3D-CMCC-CNR-FEM model in a small mountain watershed in the Calabria region. Within this work we synergistically integrate data coming from different methods (e.g., LiDAR, eddy covariance and sample areas) to predict forest dynamics (growth, carbon and water fluxes). Carbon and water fluxes will be simulated considering also the effects of forest management.

  16. PRELIMINARY MULTIDOMAIN MODELLING AND SIMULATION ...

    African Journals Online (AJOL)

    Renewable energy sources have gained much attention due to the recent energy crisis and the urge to get clean energy. Among the main options being studied, wind energy is a strong contender because of its reliability due to the maturity of the technology, good infrastructure and relative cost competitiveness. It is also ...

  17. The Misspecification of the Covariance Structures in Multilevel Models for Single-Case Data: A Monte Carlo Simulation Study

    Science.gov (United States)

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Beretvas, S. Natasha; Van den Noortgate, Wim

    2016-01-01

    The impact of misspecifying covariance matrices at the second and third levels of the three-level model is evaluated. Results indicate that ignoring existing covariance has no effect on the treatment effect estimate. In addition, the between-case variance estimates are unbiased when covariance is either modeled or ignored. If the research interest…

  18. Simulation studies of protein-induced bilayer deformations, and lipid-induced protein tilting, on a mesoscopic model for lipid bilayers with embedded proteins

    DEFF Research Database (Denmark)

    Venturoli, M.; Smit, B.; Sperotto, Maria Maddalena

    2005-01-01

    membranes. Here we present a mesoscopic model for lipid bilayers with embedded proteins, which we have studied with the help of the dissipative particle dynamics simulation technique. Because hydrophobic matching is believed to be one of the main physical mechanisms regulating lipid-protein interactions ... -induced protein tilt, with the hydrophobic mismatch (positive and negative) between the protein hydrophobic length and the pure lipid bilayer hydrophobic thickness. The protein-induced bilayer perturbation was quantified in terms of a coherence length, xi(P), of the lipid bilayer hydrophobic thickness profile ... for positive values of mismatch; a dependence on the protein size appears as well. In the case of large model proteins experiencing extreme mismatch conditions, in the region next to the so-called lipid annulus, there appears an undershooting (or overshooting) region where the bilayer hydrophobic thickness

  19. Simulation Modeling of Software Development Processes

    Science.gov (United States)

    Calavaro, G. F.; Basili, V. R.; Iazeolla, G.

    1996-01-01

    A simulation modeling approach is proposed for the prediction of software process productivity indices, such as cost and time-to-market, and the sensitivity analysis of such indices to changes in the organization parameters and user requirements. The approach uses a timed Petri Net and Object Oriented top-down model specification. Results demonstrate the model representativeness, and its usefulness in verifying process conformance to expectations, and in performing continuous process improvement and optimization.

  20. Beyond Modeling: All-Atom Olfactory Receptor Model Simulations

    Directory of Open Access Journals (Sweden)

    Peter C Lai

    2012-05-01

    Full Text Available Olfactory receptors (ORs) are a type of GTP-binding protein-coupled receptor (GPCR). These receptors are responsible for mediating the sense of smell through their interaction with odor ligands. OR-odorant interactions mark the first step in the process that leads to olfaction. Computational studies on model OR structures can validate experimental functional studies as well as generate focused and novel hypotheses for further bench investigation by providing a view of these interactions at the molecular level. Here we have shown the specific advantages of simulating the dynamic environment that is associated with OR-odorant interactions. We present a rigorous methodology that ranges from the creation of a computationally-derived model of an olfactory receptor to simulating the interactions between an OR and an odorant molecule. Given the ubiquitous occurrence of GPCRs in the membranes of cells, we anticipate that our OR-developed methodology will serve as a model for the computational structural biology of all GPCRs.

  1. Systems modeling and simulation applications for critical care medicine

    Science.gov (United States)

    2012-01-01

    Critical care delivery is a complex, expensive, error prone, medical specialty and remains the focal point of major improvement efforts in healthcare delivery. Various modeling and simulation techniques offer unique opportunities to better understand the interactions between clinical physiology and care delivery. The novel insights gained from the systems perspective can then be used to develop and test new treatment strategies and make critical care delivery more efficient and effective. However, modeling and simulation applications in critical care remain underutilized. This article provides an overview of major computer-based simulation techniques as applied to critical care medicine. We provide three application examples of different simulation techniques, including a) pathophysiological model of acute lung injury, b) process modeling of critical care delivery, and c) an agent-based model to study interaction between pathophysiology and healthcare delivery. Finally, we identify certain challenges to, and opportunities for, future research in the area. PMID:22703718

  2. Systems modeling and simulation applications for critical care medicine.

    Science.gov (United States)

    Dong, Yue; Chbat, Nicolas W; Gupta, Ashish; Hadzikadic, Mirsad; Gajic, Ognjen

    2012-06-15

    Critical care delivery is a complex, expensive, error prone, medical specialty and remains the focal point of major improvement efforts in healthcare delivery. Various modeling and simulation techniques offer unique opportunities to better understand the interactions between clinical physiology and care delivery. The novel insights gained from the systems perspective can then be used to develop and test new treatment strategies and make critical care delivery more efficient and effective. However, modeling and simulation applications in critical care remain underutilized. This article provides an overview of major computer-based simulation techniques as applied to critical care medicine. We provide three application examples of different simulation techniques, including a) pathophysiological model of acute lung injury, b) process modeling of critical care delivery, and c) an agent-based model to study interaction between pathophysiology and healthcare delivery. Finally, we identify certain challenges to, and opportunities for, future research in the area.

  3. Simulation of finite size effects of the fiber bundle model

    Science.gov (United States)

    Hao, Da-Peng; Tang, Gang; Xun, Zhi-Peng; Xia, Hui; Han, Kui

    2018-01-01

    In theory, the macroscopic fracture of materials should correspond with the thermodynamic limit of the fiber bundle model. However, the simulation of a fiber bundle model with an infinite size is unrealistic. To study the finite size effects of the fiber bundle model, fiber bundle models of various sizes are simulated in detail. The effects of system size on the constitutive behavior, critical stress, maximum avalanche size, avalanche size distribution, and increased step number of external load are explored. The simulation results imply that there is no feature size or cut size for macroscopic mechanical and statistical properties of the model. The constitutive curves near the macroscopic failure for various system sizes can collapse well with a simple scaling relationship. Simultaneously, the introduction of a simple extrapolation method facilitates the acquisition of more accurate simulation results in a large-limit system, which is better for comparison with theoretical results.
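
    As a hedged illustration of the kind of finite-size simulation discussed above, the sketch below samples equal-load-sharing fiber bundles with uniformly distributed failure thresholds and reports the finite-size critical stress (the analytic thermodynamic-limit value for this threshold distribution is 0.25); the avalanche statistics and the extrapolation method of the record are not reproduced.

      # Equal-load-sharing fiber bundle: strain-controlled constitutive curve
      import numpy as np

      rng = np.random.default_rng(11)

      def constitutive_curve(n_fibers, n_points=200):
          """Stress vs. extension for one sample of an ELS fiber bundle."""
          thresholds = np.sort(rng.uniform(0.0, 1.0, n_fibers))
          x = np.linspace(0.0, 1.0, n_points)             # imposed extension
          broken = np.searchsorted(thresholds, x)          # fibers already failed
          sigma = x * (n_fibers - broken) / n_fibers       # load carried per fiber
          return x, sigma

      for n in (100, 10_000, 1_000_000):
          x, sigma = constitutive_curve(n)
          print(f"N = {n:>8}: critical stress ~ {sigma.max():.3f}")   # theory: 0.25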

  4. Validation of the simulator neutronics model

    International Nuclear Information System (INIS)

    Gregory, M.V.

    1984-01-01

    The neutronics model in the SRP reactor training simulator computes the variation with time of the neutron population in the reactor core. The power output of a reactor is directly proportional to the neutron population, thus in a very real sense the neutronics model determines the response of the simulator. The geometrical complexity of the reactor control system in SRP reactors requires the neutronics model to provide a detailed, 3D representation of the reactor core. Existing simulator technology does not allow such a detailed representation to run in real-time in a minicomputer environment, thus an entirely different approach to the problem was required. A prompt jump method has been developed in answer to this need

  5. Sediment distribution study in the Gulf of Kachchh, India, from 3D hydrodynamic model simulation and satellite data

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.; Zhao, C.; Osawa, T.; Sugimori, Y.

    motion and found that the semidiurnal constituents M2 and S2 get amplified approximately threefold due to a combination of quarter wavelength resonance, geometric effect, and sea bottom friction. Unnikrishnan et al. (1999) used a 2D barotropic model... forcing, surface wind and local density gradients, together with the actual coastline and bathymetry. Under the hydrostatic and Boussinesq approximations on a rotating Cartesian coordinate system, the COSMOS model employs the equation of fluid motion...

  6. Simulation-Based Internal Models for Safer Robots

    Directory of Open Access Journals (Sweden)

    Christian Blum

    2018-01-01

    Full Text Available In this paper, we explore the potential of mobile robots with simulation-based internal models for safety in highly dynamic environments. We propose a robot with a simulation of itself, other dynamic actors and its environment, inside itself. Operating in real time, this simulation-based internal model is able to look ahead and predict the consequences of both the robot’s own actions and those of the other dynamic actors in its vicinity. Hence, the robot continuously modifies its own actions in order to actively maintain its own safety while also achieving its goal. Inspired by the problem of how mobile robots could move quickly and safely through crowds of moving humans, we present experimental results which compare the performance of our internal simulation-based controller with a purely reactive approach as a proof-of-concept study for the practical use of simulation-based internal models.

  7. GIS and agent based spatial-temporal simulation modeling for assessing tourism social carrying capacity: a study on Mount Emei scenic area, China

    Science.gov (United States)

    Zhang, Renjun

    2007-06-01

    Each scenic area can sustain a specific level of tourist development and use, beyond which further development can result in socio-cultural deterioration or a decline in the quality of the experience gained by visitors. This specific level is called the carrying capacity. Social carrying capacity can be defined as the maximum level of use (in terms of numbers and activities) that can be absorbed by an area without an unacceptable decline in the quality of experience of visitors and without an unacceptable adverse impact on the society of the area. It is difficult to assess the carrying capacity because it is determined not only by the number of visitors, but also by the time, the type of recreation, the characteristics of each individual and the physical environment. The objective of this study is to build a spatial-temporal simulation model, a tourist spatial behaviors simulator (TSBS), to simulate the spatial-temporal distribution of tourists. Based on the TSBS, the changes in each visitor's travel patterns, such as location, cost and other state data, are recorded in a state table. By analyzing this table, the intensity of tourist use in any area can be calculated, and the changes in the quality of the tourism experience can be quantified and analyzed. Based on this micro-simulation method, the social carrying capacity can be assessed more accurately, monitored proactively and managed adaptively. In this paper, the carrying capacity of the Mount Emei scenic area is analyzed as follows: the intensity of crowding is selected as the monitoring indicator, on the view that a longer waiting time means a more crowded site; the TSBS is used to simulate the spatial-temporal distribution of tourists; the average waiting time of all visitors is calculated; and the social carrying capacity of the Mount Emei scenic area is then assessed, identifying the key factors that impact it. The results show that the TSBS

  8. New exploration on TMSR: modelling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Si, S.; Chen, Q.; Bei, H.; Zhao, J., E-mail: ssy@snerdi.com.cn [Shanghai Nuclear Engineering Research & Design Inst., Shanghai (China)

    2015-07-01

    A tightly coupled multi-physics model for MSR (Molten Salt Reactor) system involving the reactor core and the rest of the primary loop has been developed and employed in an in-house developed computer code TANG-MSR. In this paper, the computer code is used to simulate the behavior of steady state operation and transient for our redesigned TMSR. The presented simulation results demonstrate that the models employed in TANG-MSR can capture major physics phenomena in MSR and the redesigned TMSR has excellent performance of safety and sustainability. (author)

  9. Modeling premartensitic effects in Ni2MnGa: A mean-field and Monte Carlo simulation study

    DEFF Research Database (Denmark)

    Castan, T.; Vives, E.; Lindgård, Per-Anker

    1999-01-01

    The degenerate Blume-Emery-Griffiths model for martensitic transformations is extended by including both structural and magnetic degrees of freedom in order to elucidate premartensitic effects. Special attention is paid to the effect of the magnetoelastic coupling in Ni2MnGa. The microscopic model is constructed and justified based on the analysis of the experimentally observed strain variables and precursor phenomena. The description includes the (local) tetragonal distortion, the amplitude of the plane-modulating strain, and the magnetization. The model is solved by means of mean-field theory and Monte Carlo ... heat, not always associated with a true phase transition. The main conclusion is that premartensitic effects result from the interplay between the softness of the anomalous phonon driving the modulation and the magnetoelastic coupling. In particular, the premartensitic transition occurs when
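
    As a bare-bones illustration of the Monte Carlo side of such a study, the sketch below runs Metropolis updates for a generic spin-1 (Blume-Capel-type) lattice model; the magnetoelastic coupling, modulation strain and degeneracy factors of the actual degenerate BEG model are not included, and all parameter values are illustrative.

      # Metropolis Monte Carlo for a spin-1 lattice model (generic sketch)
      import numpy as np

      rng = np.random.default_rng(5)
      L, J, D, T = 16, 1.0, 0.5, 1.0          # lattice size, couplings, temperature
      spins = rng.choice([-1, 0, 1], size=(L, L))

      def local_energy(s, i, j):
          """Energy terms involving site (i, j) with periodic boundaries."""
          nn = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
          return -J * s[i, j] * nn + D * s[i, j] ** 2

      for sweep in range(500):
          for _ in range(L * L):
              i, j = rng.integers(L, size=2)
              old = spins[i, j]
              e_old = local_energy(spins, i, j)
              spins[i, j] = rng.choice([-1, 0, 1])       # propose a new spin value
              e_new = local_energy(spins, i, j)
              if e_new > e_old and rng.uniform() >= np.exp(-(e_new - e_old) / T):
                  spins[i, j] = old                      # reject the move

      print("magnetization per site:", spins.mean())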

  10. Effect of Alternate Nostril Breathing Exercise on Experimentally Induced Anxiety in Healthy Volunteers Using the Simulated Public Speaking Model: A Randomized Controlled Pilot Study.

    Science.gov (United States)

    Kamath, Ashwin; Urval, Rathnakar P; Shenoy, Ashok K

    2017-01-01

    A randomized controlled pilot study was carried out to determine the effect of a 15-minute practice of ANB exercise on experimentally induced anxiety using the simulated public speaking model in yoga-naïve healthy young adults. Thirty consenting medical students were equally divided into test and control groups. The test group performed alternate nostril breathing exercise for 15 minutes, while the control group sat in a quiet room before participating in the simulated public speaking test (SPST). Visual Analog Mood Scale and Self-Statements during Public Speaking scale were used to measure the mood state at different phases of the SPST. The psychometric scores of both groups were comparable at baseline. Repeated-measures ANOVA showed a significant effect of phase ( p < 0.05), but group and gender did not have statistically significant influence on the mean anxiety scores. However, the test group showed a trend towards lower mean scores for the anxiety factor when compared with the control group. Considering the limitations of this pilot study and the trend seen towards lower anxiety in the test group, alternate nostril breathing may have potential anxiolytic effect in acute stressful situations. A study with larger sample size is therefore warranted. This trial is registered with CTRI/2014/03/004460.

  11. Kanban simulation model for production process optimization

    Directory of Open Access Journals (Sweden)

    Golchev Riste

    2015-01-01

    Full Text Available A long time has passed since the KANBAN system was established as an efficient method for coping with excessive inventory. Still, the possibilities for its improvement through integration with other approaches should be investigated further. The basic research challenge of this paper is to present the benefits of KANBAN implementation supported by Discrete Event Simulation (DES). In that direction, the basics of the KANBAN system are first presented, with emphasis on information and material flow, together with a methodology for implementation of the KANBAN system. An analysis of combining simulation with this methodology is presented. The paper concludes with a practical example which shows that, through understanding the philosophy of the KANBAN implementation methodology and the simulation methodology, a simulation model can be created that serves as a basis for a variety of experiments conducted within a short period of time, resulting in production process optimization.
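
    A minimal discrete-event sketch of a kanban-controlled two-stage line is given below, in the spirit of the KANBAN-plus-DES combination described above; the processing-time distributions and the card count are illustrative, and no commercial DES package is assumed.

      # Two-stage line where stage 1 may only produce when a kanban card (WIP slot) is free
      import heapq, random

      random.seed(4)
      N_KANBAN = 3            # cards = max WIP between stage 1 and stage 2
      SIM_TIME = 1000.0

      wip = 0                 # parts waiting between the stages
      stage1_busy = stage2_busy = False
      finished = 0
      events = [(0.0, "try_start_1")]          # (time, event type)

      while events:
          t, ev = heapq.heappop(events)
          if t > SIM_TIME:
              break
          if ev == "try_start_1":
              if not stage1_busy and wip < N_KANBAN:      # a card is free
                  stage1_busy = True
                  heapq.heappush(events, (t + random.expovariate(1 / 4.0), "end_1"))
          elif ev == "end_1":
              stage1_busy = False
              wip += 1
              heapq.heappush(events, (t, "try_start_1"))
              heapq.heappush(events, (t, "try_start_2"))
          elif ev == "try_start_2":
              if not stage2_busy and wip > 0:
                  stage2_busy = True
                  wip -= 1                                # card released back to stage 1
                  heapq.heappush(events, (t, "try_start_1"))
                  heapq.heappush(events, (t + random.expovariate(1 / 5.0), "end_2"))
          elif ev == "end_2":
              stage2_busy = False
              finished += 1
              heapq.heappush(events, (t, "try_start_2"))

      print("finished parts:", finished, "throughput:", finished / SIM_TIME)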

  12. Vermont Yankee simulator BOP model upgrade

    International Nuclear Information System (INIS)

    Alejandro, R.; Udbinac, M.J.

    2006-01-01

    The Vermont Yankee simulator has undergone significant changes in the 20 years since the original order was placed. After the move from the original Unix to MS Windows environment, and upgrade to the latest version of SimPort, now called MASTER, the platform was set for an overhaul and replacement of major plant system models. Over a period of a few months, the VY simulator team, in partnership with WSC engineers, replaced outdated legacy models of the main steam, condenser, condensate, circulating water, feedwater and feedwater heaters, and main turbine and auxiliaries. The timing was ideal, as the plant was undergoing a power up-rate, so the opportunity was taken to replace the legacy models with industry-leading, true on-line object oriented graphical models. Due to the efficiency of design and ease of use of the MASTER tools, VY staff performed the majority of the modeling work themselves with great success, with only occasional assistance from WSC, in a relatively short time-period, despite having to maintain all of their 'regular' simulator maintenance responsibilities. This paper will provide a more detailed view of the VY simulator, including how it is used and how it has benefited from the enhancements and upgrades implemented during the project. (author)

  13. Evaluation of articulation simulation system using artificial maxillectomy models.

    Science.gov (United States)

    Elbashti, M E; Hattori, M; Sumita, Y I; Taniguchi, H

    2015-09-01

    Acoustic evaluation is valuable for guiding the treatment of maxillofacial defects and determining the effectiveness of rehabilitation with an obturator prosthesis. Model simulations are important in terms of pre-surgical planning and pre- and post-operative speech function. This study aimed to evaluate the acoustic characteristics of voice generated by an articulation simulation system using a vocal tract model with or without artificial maxillectomy defects. More specifically, we aimed to establish a speech simulation system for maxillectomy defect models that both surgeons and maxillofacial prosthodontists can use in guiding treatment planning. Artificially simulated maxillectomy defects were prepared according to Aramany's classification (Classes I-VI) in a three-dimensional vocal tract plaster model of a subject uttering the vowel /a/. Formant and nasalance acoustic data were analysed using Computerized Speech Lab and the Nasometer, respectively. Formants and nasalance of simulated /a/ sounds were successfully detected and analysed. Values of Formants 1 and 2 for the non-defect model were 675.43 and 976.64 Hz, respectively. Median values of Formants 1 and 2 for the defect models were 634.36 and 1026.84 Hz, respectively. Nasalance was 11% in the non-defect model, whereas median nasalance was 28% in the defect models. The results suggest that an articulation simulation system can be used to help surgeons and maxillofacial prosthodontists to plan post-surgical defects that will facilitate maxillofacial rehabilitation. © 2015 John Wiley & Sons Ltd.

  14. Study of Sediment Transportation in the Gulf of Kachchh, using 3D Hydro-dynamic Model Simulation and Satellite Data

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    at the CEReS, for the months of November and December 1999. While modeling, the water column is divided into five layers, and at each layer the distribution of current velocity and direction, pressure, water temperature, salinity and turbulent energy were computed...

  15. Studying furosemide solubilization using an in vitro model simulating gastrointestinal digestion and drug solubilization in neonates and young infants

    DEFF Research Database (Denmark)

    Klitgaard, Mette; Sassene, Philip Jonas; Selen, Arzu

    2017-01-01

    -2 months). METHODS: The utilized in vitro model was designed to mimic the digestion and drug solubilization processes occurring in the stomach and the small intestine of the neonate and young infant population, using physiologically relevant media, volumes and digestive enzymes. Overall the experimental

  16. A Mixture Rasch Model with a Covariate: A Simulation Study via Bayesian Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Dai, Yunyun

    2013-01-01

    Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…

  17. The Model of Gas Supply Capacity Simulation In Regional Energy Security Framework: Policy Studies PT. X Cirebon Area

    Science.gov (United States)

    Nuryadin; Ronny Rahman Nitibaskara, Tb; Herdiansyah, Herdis; Sari, Ravita

    2017-10-01

    Energy needs are increasing every year. Unavailability of energy causes economic losses and weakens energy security. To secure the availability of gas supply in the future, planning is crucially needed; a systems approach is therefore necessary so that the gas distribution process runs properly. In this research, the system dynamics method is used to measure how much supply capacity planning is needed up to 2050, with demand in the industrial, household and commercial sectors as parameters. The model shows that the PT.X Cirebon area will not be able to meet the needs of its gas customers in 2031; under the business-as-usual scenario, the gas fulfilment ratio holds only until 2027. With the implementation of the national energy policy, namely the use of NRE, included in the model as a government intervention, the PT.X Cirebon area is still able to supply the gas needs of its customers up to 2035.
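
    As a hedged, much-simplified stand-in for the system dynamics model described above, the sketch below projects sectoral gas demand against a fixed supply capacity and reports the first year in which demand exceeds capacity; all growth rates, shares and base-year values are hypothetical placeholders, not the calibrated PT.X model.

      # Toy demand-vs-capacity projection in a system-dynamics style
      demand = 800.0        # base-year total demand (hypothetical units)
      capacity = 1000.0     # contracted supply capacity (hypothetical units)
      growth = {"industry": 0.045, "household": 0.03, "commercial": 0.04}
      share = {"industry": 0.7, "household": 0.2, "commercial": 0.1}
      nre_substitution = 0.005          # yearly fraction of demand shifted to renewables

      year = 2017
      sector_demand = {k: demand * s for k, s in share.items()}
      shortfall_year = None
      while year <= 2050:
          total = sum(sector_demand.values())
          if total > capacity and shortfall_year is None:
              shortfall_year = year               # first year demand exceeds capacity
          for k in sector_demand:
              sector_demand[k] *= (1 + growth[k]) * (1 - nre_substitution)
          year += 1

      print("first year demand exceeds capacity:", shortfall_year)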

  18. Modeling salmonella Dublin into the dairy herd simulation model Simherd

    DEFF Research Database (Denmark)

    Kudahl, Anne Braad

    2010-01-01

    Infection with Salmonella Dublin in the dairy herd and effects of the infection and relevant control measures are currently being modeled into the dairy herd simulation model called Simherd. The aim is to compare the effects of different control strategies against Salmonella Dublin on both within-herd prevalence and economy by simulations. The project is a part of a larger national project "Salmonella 2007 - 2011" with the main objective to reduce the prevalence of Salmonella Dublin in Danish dairy herds. Results of the simulations will therefore be used for decision support in the national surveillance and eradication program against Salmonella Dublin. Basic structures of the model are programmed and will be presented at the workshop. The model is in a phase of face-validation by a group of Salmonella ...

  19. Structure, dynamics, and interaction of Mycobacterium tuberculosis (Mtb DprE1 and DprE2 examined by molecular modeling, simulation, and electrostatic studies.

    Directory of Open Access Journals (Sweden)

    Isha Bhutani

    Full Text Available The enzymes decaprenylphosphoryl-β-D-ribose oxidase (DprE1) and decaprenylphosphoryl-β-D-ribose-2-epimerase (DprE2) catalyze the epimerization of decaprenylphosphoryl ribose (DPR) to decaprenylphosphoryl arabinose (DPA) and are critical for the survival of Mtb. Crystal structures of DprE1 reported so far display significant disordered regions, and no structural information is known for DprE2. We used homology modeling, protein threading, molecular docking and dynamics studies to investigate the structural and dynamic features of Mtb DprE1 and DprE2 and the DprE1-DprE2 complex. A three-dimensional model for DprE2 was generated using the threading approach coupled with ab initio modeling. A 50 ns simulation of DprE1 and DprE2 revealed the overall stability of the structures. Principal Component Analysis (PCA) demonstrated the convergence of sampling in both DprE1 and DprE2. In DprE1, residues in the 269-330 region showed considerable fluctuation, in agreement with the regions of disorder observed in the reported crystal structures. In DprE2, large fluctuations were detected in residues 95-113, 146-157, and 197-226. The study combined docking and MD simulation studies to map and characterize the key residues involved in the DprE1-DprE2 interaction. A 60 ns MD simulation for the DprE1-DprE2 complex was also performed. Analysis of the data revealed that the docked complex is stabilized by H-bonding, hydrophobic and ionic interactions. The key residues of DprE1 involved in DprE1-DprE2 interactions belong to the disordered region. We also examined the docked complex of DprE1-BTZ043 to investigate the binding pocket of DprE1 and its interactions with the inhibitor BTZ043. In summary, we hypothesize that the DprE1-DprE2 interaction is crucial for the synthesis of DPA and that the DprE1-DprE2 complex may be a new therapeutic target amenable to pharmacological validation. The findings have important implications in tuberculosis (TB) drug discovery and will facilitate drug development efforts against

  20. Simulation model cooling and freezing of bread- and pastry products. Orientation and literature study; Simulatiemodel koelen en vriezen van brood- en banketprodukten. Orientatie en literatuurstudie

    Energy Technology Data Exchange (ETDEWEB)

    Van der Sluis, S.M.

    1991-07-01

    To improve quality, improve the process, and save costs during the production of bakery and pastry products, a simulation model for freezing and thawing these products could be an excellent means to optimize the system and to realize the potential energy savings. A literature survey was carried out to find examples of appropriate simulation models; the results of the survey must form the basis of the title project. The BERTIX model, developed by TNO (Netherlands Organization for Applied Scientific Research) for the meat industry, can be applied to simulate cooling and freezing processes in the bakery sector when it is extended with the modelling of internal moisture transport and of freezing processes. In the literature, little is known about the thermophysical properties of bakery products. By means of the computer program Costherm most of these properties can be calculated (specific heat, heat conduction coefficient, enthalpy); other properties have to be determined experimentally (porosity, density). 4 figs., 104 refs.

  1. A universal simulator for ecological models

    DEFF Research Database (Denmark)

    Holst, Niels

    2013-01-01

    Software design is an often neglected issue in ecological models, even though bad software design often becomes a hindrance for re-using, sharing and even grasping an ecological model. In this paper, the methodology of agile software design was applied to the domain of ecological models. Thus ... the principles for a universal design of ecological models were arrived at. To exemplify this design, the open-source software Universal Simulator was constructed using C++ and XML and is provided as a resource for inspiration.

  2. Biological transportation networks: Modeling and simulation

    KAUST Repository

    Albi, Giacomo

    2015-09-15

    We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation and angiogenesis) and ion transportation networks (e.g., neural networks) is explained in detail and basic analytical features like the gradient flow structure of the fluid transportation network model and the impact of the model parameters on the geometry and topology of network formation are analyzed. We also present a numerical finite-element based discretization scheme and discuss sample cases of network formation simulations.

  3. Reproducibility in Computational Neuroscience Models and Simulations

    Science.gov (United States)

    McDougal, Robert A.; Bulanova, Anna S.; Lytton, William W.

    2016-01-01

    Objective Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles as the method to provide the sharing and transparency required for reproducibility. Methods Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, strong commenting and documentation, and code modularity. Results Building on these standard practices, model sharing sites and tools have been developed that fit into several categories: 1. standardized neural simulators, 2. shared computational resources, 3. declarative model descriptors, ontologies and standardized annotations; 4. model sharing repositories and sharing standards. Conclusion A number of complementary innovations have been proposed to enhance sharing, transparency and reproducibility. The individual user can be encouraged to make use of version control, commenting, documentation and modularity in development of models. The community can help by requiring model sharing as a condition of publication and funding. Significance Model management will become increasingly important as multiscale models become larger, more detailed and correspondingly more difficult to manage by any single investigator or single laboratory. Additional big data management complexity will come as the models become more useful in interpreting experiments, thus increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiment. PMID:27046845

  4. A SIMULATION MODEL OF THE GAS COMPLEX

    Directory of Open Access Journals (Sweden)

    Sokolova G. E.

    2016-06-01

    Full Text Available The article considers the dynamics of gas production in Russia, the structure of sales in the different market segments, and the comparative dynamics of selling prices in these segments. It addresses the problem of planning a gas complex with the help of a simulation model that makes it possible to estimate the efficiency of the project and to determine the stability region of the obtained solutions. The presented model takes into account lump-sum repayment of the loan, which allows the possibility of repaying the loan to be assessed from the first year of the simulation. The model object is a group of gas fields, characterized by the minimum flow rate above which the project is cost-effective. In determining the minimum source flow rate, the discount rate is taken as a generalized weighted average of the interest on debt and equity, taking risk premiums into account; it also serves as the lower barrier for the internal rate of return, below which the project is rejected as ineffective. Analysis of the dynamics and expert evaluation methods allow the intervals of variation of the simulated parameters to be determined, such as the price of gas and the time at which the gas complex reaches its projected capacity. For each random realization of the model, parameter values calculated using the Monte Carlo method yield a set of optimal minimum well flow rates for each realization and also allow the stability region of the solution to be determined.
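
    As a hedged illustration of the Monte Carlo evaluation described above, the sketch below draws random gas prices and well flow rates, discounts the cash flows at a rate standing in for the weighted average cost of debt and equity, and reports the probability that the project NPV is positive; all economic figures are hypothetical placeholders.

      # Monte Carlo NPV evaluation of a simplified gas-field project
      import numpy as np

      rng = np.random.default_rng(9)
      n_trials, years = 10_000, 15
      discount = 0.12                      # stand-in for the weighted cost of capital incl. risk premium
      capex = 120e6                        # up-front investment, USD (hypothetical)
      opex = 6e6                           # annual operating cost, USD (hypothetical)

      price = rng.uniform(3.0, 6.0, n_trials)          # USD per MMBtu
      flow = rng.uniform(5e6, 15e6, n_trials)          # MMBtu produced per year

      tfac = np.sum(1.0 / (1.0 + discount) ** np.arange(1, years + 1))
      npv = -capex + (price * flow - opex) * tfac
      print("P(NPV > 0) =", (npv > 0).mean())
      print("median NPV (MUSD):", np.median(npv) / 1e6)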

  5. Object Oriented Modelling and Dynamical Simulation

    DEFF Research Database (Denmark)

    Wagner, Falko Jens; Poulsen, Mikael Zebbelin

    1998-01-01

    This report with appendix describes the work done in a master project at DTU. The goal of the project was to develop a concept for simulation of dynamical systems based on object oriented methods. The result was a library of C++ classes, for use both when building component-based models and when...

  6. Advanced feeder control using fast simulation models

    NARCIS (Netherlands)

    Verheijen, O.S.; Op den Camp, O.M.G.C.; Beerkens, R.G.C.; Backx, A.C.P.M.; Huisman, L.; Drummond, C.H.

    2005-01-01

    For the automatic control of glass quality in glass production, the relations between process variables, product or glass quality, and process conditions/process input parameters must be known in detail. So far, detailed 3-D glass melting simulation models were used to predict the effect of process

  7. Modeling and Simulating Virtual Anatomical Humans

    NARCIS (Netherlands)

    Madehkhaksar, Forough; Luo, Zhiping; Pronost, Nicolas; Egges, Arjan

    2014-01-01

    This chapter presents human musculoskeletal modeling and simulation as a challenging field that lies between biomechanics and computer animation. One of the main goals of computer animation research is to develop algorithms and systems that produce plausible motion. On the other hand, the main

  8. Agent Based Modelling for Social Simulation

    NARCIS (Netherlands)

    Smit, S.K.; Ubink, E.M.; Vecht, B. van der; Langley, D.J.

    2013-01-01

    This document is the result of an exploratory project looking into the status of, and opportunities for Agent Based Modelling (ABM) at TNO. The project focussed on ABM applications containing social interactions and human factors, which we termed ABM for social simulation (ABM4SS). During the course

  9. Thermal and non-thermal lattice gas models for a dimer-trimer surface catalytic reaction: a Monte-Carlo simulation study

    International Nuclear Information System (INIS)

    Iqbal, K.; Khand, P.A.

    2012-01-01

    The kinetics of an irreversible dimer-trimer reaction of the type 2A3 + 3B2 → 6AB has been studied by Monte Carlo simulation, considering the precursor motion of the dimer (B2) on a square as well as on a hexagonal surface. When the movement of precursors is limited to the first nearest neighborhood, the model gives reactive window widths of the order of 0.22 and 0.29 for the square and the hexagonal lattices, respectively, which are quite large compared to those predicted by the LH model. In our model, the reactive window width for a square lattice increases significantly compared to that for the LH models of the same system on square and hexagonal lattices. The width of the reactive region increases when the precursor motion is extended to the second and the third nearest neighborhood, and the continuous transition disappears when the precursor motion is extended to the third nearest neighborhood. The diffusion of B atoms does not change the situation qualitatively for either the precursor or the LH model. However, desorption of the dimer changes the situation significantly; i.e., the width of the reactive window shows an exponential growth with respect to the desorption probability of the dimer for both the precursor and the LH models. In our opinion, the inclusion of precursors in the LH model of dimer-trimer reactions leads to a better and more realistic description of heterogeneous catalytic reactions. Consequently, further numerical and theoretical activity in this field will be very useful for understanding complex heterogeneous reactions. (orig./A.B.)

  10. Modelling, simulation and visualisation for electromagnetic non-destructive testing

    International Nuclear Information System (INIS)

    Ilham Mukriz Zainal Abidin; Abdul Razak Hamzah

    2010-01-01

    This paper reviews the state of the art and the recent development of modelling, simulation and visualization for the eddy current Non-Destructive Testing (NDT) technique. Simulation and visualization have aided in the design and development of electromagnetic sensors, imaging techniques and systems for Electromagnetic Non-Destructive Testing (ENDT), as well as feature extraction and inverse problems for Quantitative Non-Destructive Testing (QNDT). After reviewing the state of the art of electromagnetic modelling and simulation, case studies of research and development in the eddy current NDT technique via magnetic field mapping and thermography for eddy current distribution are discussed. (author)

  11. Thermohydraulic modeling and simulation of breeder reactors

    International Nuclear Information System (INIS)

    Agrawal, A.K.; Khatib-Rahbar, M.; Curtis, R.T.; Hetrick, D.L.; Girijashankar, P.V.

    1982-01-01

    This paper deals with the modeling and simulation of system-wide transients in LMFBRs. Unprotected events (i.e., the presumption of failure of the plant protection system) leading to core-melt are not considered in this paper. The existing computational capabilities in the area of protected transients in the US are noted. Various physical and numerical approximations that are made in these codes are discussed. Finally, the future direction in the area of model verification and improvements is discussed

  12. Studies of acid-base homeostasis during simulated weightlessness: Application of the water immersion model to man

    Science.gov (United States)

    Epstein, M.

    1975-01-01

    The effects of water immersion on acid-base homeostasis were investigated under carefully controlled conditions. Studies of renal acidification were carried out on seven healthy male subjects, each consuming a diet containing 150 meq sodium and 100 meq potassium. Control and immersion studies were carried out on each subject on the fourth and sixth days, respectively, of dietary equilibration, by which time all subjects had achieved sodium balance. The experimental protocols on study days were similar (except for the amount of water administered).

  13. Modeling Supermassive Black Holes in Cosmological Simulations

    Science.gov (United States)

    Tremmel, Michael

    My thesis work has focused on improving the implementation of supermassive black hole (SMBH) physics in cosmological hydrodynamic simulations. SMBHs are ubiquitous in massive galaxies, as well as bulge-less galaxies and dwarfs, and are thought to be a critical component of massive galaxy evolution. Still, much is unknown about how SMBHs form, grow, and affect their host galaxies. Cosmological simulations are an invaluable tool for understanding the formation of galaxies, self-consistently tracking their evolution with realistic merger and gas accretion histories. SMBHs are often modeled in these simulations (generally as a necessity to produce realistic massive galaxies), but their implementations are commonly simplified in ways that can limit what can be learned. Current and future observations are opening new windows into the lifecycle of SMBHs and their host galaxies, but require more detailed, physically motivated simulations. Within the novel framework I have developed, SMBHs 1) are seeded at early times without a priori assumptions of galaxy occupation, 2) grow in a way that accounts for the angular momentum of gas, and 3) experience realistic orbital evolution. I show how this model, properly tuned with a novel parameter optimization technique, results in realistic galaxies and SMBHs. Utilizing the unique ability of these simulations to capture the dynamical evolution of SMBHs, I present the first self-consistent prediction for the formation timescales of close SMBH pairs, precursors to SMBH binaries and merger events potentially detected by future gravitational wave experiments.

  14. 2D and 3D simulation of cavitating flows: development of an original algorithm in code Saturne and study of the influence of turbulence modeling

    International Nuclear Information System (INIS)

    Chebli, Rezki

    2014-01-01

    Cavitation is one of the most demanding physical phenomena influencing the performance of hydraulic machines. It is therefore important to predict correctly its inception and development, in order to quantify the performance drop it induces, and also to characterize the resulting flow instabilities. The aim of this work is to develop an unsteady 3D algorithm for the numerical simulation of cavitation in the industrial CFD solver 'Code Saturne'. It is based on a fractional step method and preserves the minimum/maximum principle of the void fraction. An implicit solver, based on a transport equation of the void fraction coupled with the Navier-Stokes equations, is proposed. A specific numerical treatment of the cavitation source terms provides physical values of the void fraction (between 0 and 1) without including any artificial numerical limitation. The influence of RANS turbulence models on the simulation of cavitation on 2D geometries (Venturi and Hydrofoil) is then studied. It confirms the capability of the two-equation eddy viscosity models, k-epsilon and k-omega-SST, with the modification proposed by Reboud et al. (1998), to reproduce the main features of the unsteady sheet cavity behavior. The second-order model RSM-SSG, based on the Reynolds stress transport, appears able to reproduce the highly unsteady flow behavior without any arbitrary modification. The three-dimensional effects involved in the instability mechanisms are also analyzed. This work provides a numerical tool, validated on complex cavitating flow configurations, that improves the understanding of the physical mechanisms controlling the three-dimensional unsteady effects involved in the instabilities. (author)
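
    As a rough illustration of how a cavitation source treatment can keep the void fraction between 0 and 1 without artificial clipping, the following minimal 1D Python sketch combines first-order upwind advection with a Patankar-style semi-implicit source update. It is a generic construction under stated assumptions, not the algorithm implemented in Code Saturne, and the rate fields S_vap and S_cond are hypothetical.

        import numpy as np

        def advance_void_fraction(alpha, u, dt, dx, S_vap, S_cond):
            """One upwind advection step plus a semi-implicit (Patankar-style)
            cavitation source update.  With CFL <= 1 and non-negative rates the
            void fraction stays in [0, 1] without any artificial clipping.
            Illustrative 1D sketch only, not the Code Saturne scheme."""
            # first-order upwind advection (assumes u >= 0 and a periodic domain)
            cfl = u * dt / dx
            assert cfl <= 1.0, "CFL condition violated"
            alpha_adv = alpha - cfl * (alpha - np.roll(alpha, 1))
            # sources: vaporisation fills liquid (1 - alpha), condensation removes vapour
            #   d(alpha)/dt = S_vap * (1 - alpha) - S_cond * alpha
            # the semi-implicit update below is a convex combination, hence bounded
            return (alpha_adv + dt * S_vap) / (1.0 + dt * (S_vap + S_cond))

        # tiny usage example on a periodic 1D grid with hypothetical rate fields
        n = 100
        alpha = np.zeros(n)
        alpha[40:60] = 0.5
        S_vap = np.where(np.arange(n) < 50, 5.0, 0.0)
        S_cond = np.where(np.arange(n) >= 50, 5.0, 0.0)
        for _ in range(200):
            alpha = advance_void_fraction(alpha, u=1.0, dt=0.005, dx=0.01,
                                          S_vap=S_vap, S_cond=S_cond)
        assert 0.0 <= alpha.min() and alpha.max() <= 1.0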

  15. Advances in NLTE Modeling for Integrated Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Scott, H A; Hansen, S B

    2009-07-08

    The last few years have seen significant progress in constructing the atomic models required for non-local thermodynamic equilibrium (NLTE) simulations. Along with this has come an increased understanding of the requirements for accurately modeling the ionization balance, energy content and radiative properties of different elements for a wide range of densities and temperatures. Much of this progress is the result of a series of workshops dedicated to comparing the results from different codes and computational approaches applied to a series of test problems. The results of these workshops emphasized the importance of atomic model completeness, especially in doubly excited states and autoionization transitions, to calculating ionization balance, and the importance of accurate, detailed atomic data to producing reliable spectra. We describe a simple screened-hydrogenic model that calculates NLTE ionization balance with surprising accuracy, at a low enough computational cost for routine use in radiation-hydrodynamics codes. The model incorporates term splitting, Δn = 0 transitions, and approximate UTA widths for spectral calculations, with results comparable to those of much more detailed codes. Simulations done with this model have been increasingly successful at matching experimental data for laser-driven systems and hohlraums. Accurate and efficient atomic models are just one requirement for integrated NLTE simulations. Coupling the atomic kinetics to hydrodynamics and radiation transport constrains both discretizations and algorithms to retain energy conservation, accuracy and stability. In particular, the strong coupling between radiation and populations can require either very short timesteps or significantly modified radiation transport algorithms to account for NLTE material response. Considerations such as these continue to provide challenges for NLTE simulations.
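
    As a generic illustration of the kind of ionization-balance calculation discussed above, the sketch below builds a small collisional-radiative rate matrix for a chain of ion stages and solves for steady-state populations. The rates are placeholders and the structure is textbook-generic; this is not the screened-hydrogenic model described in the abstract.

        import numpy as np

        def steady_state_populations(ionization, recombination):
            """Solve dn/dt = A n = 0 with sum(n) = 1 for a chain of ion stages.
            `ionization[i]` is the rate from stage i to i+1, `recombination[i]`
            the rate from stage i+1 back to i (both in 1/s).  Generic sketch,
            not the screened-hydrogenic model of the paper."""
            n_stages = len(ionization) + 1
            A = np.zeros((n_stages, n_stages))
            for i, (ion, rec) in enumerate(zip(ionization, recombination)):
                A[i, i]         -= ion   # losses from stage i by ionization
                A[i + 1, i]     += ion   # gains to stage i+1
                A[i + 1, i + 1] -= rec   # losses from stage i+1 by recombination
                A[i, i + 1]     += rec   # gains back to stage i
            # replace one (linearly dependent) equation by the normalisation sum(n) = 1
            A[-1, :] = 1.0
            b = np.zeros(n_stages)
            b[-1] = 1.0
            return np.linalg.solve(A, b)

        # hypothetical rates for a 4-stage chain
        pops = steady_state_populations(ionization=[1e3, 5e2, 1e2],
                                        recombination=[2e2, 4e2, 8e2])
        print(pops, pops.sum())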

  16. Mesoscopic modelling and simulation of soft matter.

    Science.gov (United States)

    Schiller, Ulf D; Krüger, Timm; Henrich, Oliver

    2017-12-20

    The deformability of soft condensed matter often requires modelling of hydrodynamical aspects to gain quantitative understanding. This, however, requires specialised methods that can resolve the multiscale nature of soft matter systems. We review a number of the most popular simulation methods that have emerged, such as Langevin dynamics, dissipative particle dynamics, multi-particle collision dynamics, sometimes also referred to as stochastic rotation dynamics, and the lattice-Boltzmann method. We conclude this review with a short glance at current compute architectures for high-performance computing and community codes for soft matter simulation.
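
    Of the particle-based methods listed, Langevin dynamics is the simplest to sketch. Below is a minimal overdamped (Euler-Maruyama) Langevin integrator in Python for a particle in a harmonic trap; it is a generic textbook scheme, not code from any of the community packages referred to in the review.

        import numpy as np

        def overdamped_langevin(x0, force, gamma, kT, dt, n_steps, rng=None):
            """Euler-Maruyama integration of overdamped Langevin dynamics:
                dx = (F(x) / gamma) dt + sqrt(2 kT dt / gamma) * N(0, 1)
            Generic textbook scheme, for illustration only."""
            rng = rng or np.random.default_rng(0)
            x = np.array(x0, dtype=float)
            traj = np.empty((n_steps + 1,) + x.shape)
            traj[0] = x
            noise_amp = np.sqrt(2.0 * kT * dt / gamma)
            for i in range(n_steps):
                x = x + force(x) / gamma * dt + noise_amp * rng.standard_normal(x.shape)
                traj[i + 1] = x
            return traj

        # harmonic trap F = -k x; the positional variance should approach kT / k
        k = 1.0
        traj = overdamped_langevin(x0=[0.0], force=lambda x: -k * x,
                                   gamma=1.0, kT=1.0, dt=1e-3, n_steps=200_000)
        print(traj[50_000:].var())   # ~ kT / k = 1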

  17. Biomechanical study: resistance comparison of posterior antiglide plate and lateral plate on synthetic bone models simulating Danis-Weber B malleolar fractures

    Directory of Open Access Journals (Sweden)

    Bruna Buscharino

    2013-06-01

    OBJECTIVE: The purpose of this study was to compare different plate positions in lateral malleolar Danis-Weber B fractures on synthetic bone: a lateral plate and a posterior antiglide plate. METHODS: Short oblique fractures of the distal fibula at the level of the syndesmosis were simulated with a fibular osteotomy in sixteen synthetic fibula bones (Synbone®). Eight fractures were fixed with lateral plating associated with an independent lag screw, and the other eight were fixed with posterior antiglide plating with a lag screw through the plate. A strain gage was installed at the center of each plate at the osteotomy site. Supination and external rotation forces were applied to each of the two groups at the bend. RESULTS: The lateral plate group suffered more deformity in response to supination forces than the posterior antiglide plate group, but this result was not statistically significant. In the tests with external rotation forces, the posterior antiglide plating group had significantly higher resistance (p < 0.05). CONCLUSION: When subjected to external rotation forces, osteosynthesis with a posterior antiglide plate in models simulating type B fractures of the lateral malleolus of the ankle is more resistant than that with a neutralization plate.

  18. Stereoscopic (3D) versus monoscopic (2D) laparoscopy: comparative study of performance using advanced HD optical systems in a surgical simulator model.

    Science.gov (United States)

    Schoenthaler, Martin; Schnell, Daniel; Wilhelm, Konrad; Schlager, Daniel; Adams, Fabian; Hein, Simon; Wetterauer, Ulrich; Miernik, Arkadiusz

    2016-04-01

    To compare task performances of novices and experts using advanced high-definition 3D versus 2D optical systems in a surgical simulator model. Fifty medical students (novices in laparoscopy) were randomly assigned to perform five standardized tasks adopted from the Fundamentals of Laparoscopic Surgery (FLS) curriculum in either a 2D or 3D laparoscopy simulator system. In addition, eight experts performed the same tasks. Task performances were evaluated using a validated scoring system of the SAGES/FLS program. Participants were asked to rate 16 items in a questionnaire. Overall task performance of novices was significantly better using stereoscopic visualization. Superiority of performance in 3D reached statistical significance for the peg transfer and precision cutting tasks. No significant differences were noted in the performance of experts when using either 2D or 3D. Overall performances of experts were better than those of novices in both 2D and 3D. Questionnaire scores showed a tendency toward lower ratings in the group of novices using 3D. Stereoscopic imaging significantly improves novices' performance of laparoscopic phantom tasks. The current study confirms earlier data based on a large number of participants and a standardized task and scoring system. Participants felt more confident and comfortable when using a 3D laparoscopic system. However, it remains an open question whether these findings translate into faster and safer operations in a clinical setting.

  19. Numerical model simulation of atmospheric coolant plumes

    International Nuclear Information System (INIS)

    Gaillard, P.

    1980-01-01

    The effect of humid atmospheric coolants on the atmosphere is simulated by means of a three-dimensional numerical model. The atmosphere is defined by its natural vertical profiles of horizontal velocity, temperature, pressure and relative humidity. Effluent discharge is characterised by its vertical velocity and the temperature of air saturated with water vapour. The subject of investigation is the area in the vicinity of the point of discharge, with due allowance for the wake effect of the tower and buildings and, where applicable, wind veer with altitude. The model equations express the conservation relationships for momentum, energy, total mass and water mass, for an incompressible fluid behaving in accordance with the Boussinesq assumptions. Condensation is represented by a simple thermodynamic model, and turbulent fluxes are simulated by introduction of turbulent viscosity and diffusivity data based on in-situ and experimental water model measurements. The three-dimensional problem expressed in terms of the primitive variables (u, v, w, p) is governed by an elliptic equation system which is solved numerically by application of an explicit time-marching algorithm in order to predict the steady-flow velocity distribution, temperature, water vapour concentration and the liquid-water concentration defining the visible plume. Windstill conditions are simulated by a program processing the elliptic equations in an axisymmetrical revolution coordinate system. The calculated visible plumes are compared with plumes observed on site with a view to validating the models [fr]

  20. Multiphase reacting flows modelling and simulation

    CERN Document Server

    Marchisio, Daniele L

    2007-01-01

    The papers in this book describe the most widely applicable modeling approaches and are organized in six groups covering from fundamentals to relevant applications. In the first part, some fundamentals of multiphase turbulent reacting flows are covered. In particular the introduction focuses on basic notions of turbulence theory in single-phase and multi-phase systems as well as on the interaction between turbulence and chemistry. In the second part, models for the physical and chemical processes involved are discussed. Among other things, particular emphasis is given to turbulence modeling strategies for multiphase flows based on the kinetic theory for granular flows. Next, the different numerical methods based on Lagrangian and/or Eulerian schemes are presented. In particular the most popular numerical approaches of computational fluid dynamics codes are described (i.e., Direct Numerical Simulation, Large Eddy Simulation, and Reynolds-Averaged Navier-Stokes approach). The book will cover particle-based meth...

  1. Persistence of DNA studied in different ex vivo and in vivo rat models simulating the human gut situation

    DEFF Research Database (Denmark)

    Wilcks, Andrea; van Hoek, A.H.A.M.; Joosten, R.G.

    2004-01-01

    This study aimed to evaluate the possibility that DNA sequences from genetically modified plants persist in the gastrointestinal (GI) tract. PCR analysis and transformation assays were used to study DNA persistence and integrity in various ex vivo and in vivo systems using gnotobiotic rats. DNA ..., plasmid DNA could be recovered throughout the GI tract when intestinal samples were taken up to 5 h after feeding rats with plasmid. Furthermore, DNA isolated from these intestinal samples was able to transform electro-competent Escherichia coli, showing that the plasmid was still biologically active. ... The results indicate that ingested DNA may persist in the GI tract and consequently may be present for uptake by intestinal bacteria. ...

  2. Advancing Material Models for Automotive Forming Simulations

    International Nuclear Information System (INIS)

    Vegter, H.; An, Y.; Horn, C.H.L.J. ten; Atzema, E.H.; Roelofsen, M.E.

    2005-01-01

    Simulations in the automotive industry need more advanced material models to achieve highly reliable forming and springback predictions. Conventional material models implemented in FEM simulation models are not capable of describing the plastic material behaviour during monotonic strain paths with sufficient accuracy. Recently, ESI and Corus have co-operated on the implementation of an advanced material model in the FEM code PAMSTAMP 2G. This applies to the strain hardening model, the influence of strain rate, and the description of the yield locus in these models. A subsequent challenge is the description of the material after a change of strain path. The use of advanced high strength steels in the automotive industry requires a description of the plastic material behaviour of multiphase steels. The simplest variant is dual phase steel, consisting of a ferritic and a martensitic phase. Multiphase materials also contain a bainitic phase in addition to the ferritic and martensitic phases. More physical descriptions of strain hardening than simple fitted Ludwik/Nadai curves are necessary. Methods to predict the plastic behaviour of single-phase materials use a simple dislocation interaction model based on the formed cell structures only. At Corus, a new method is proposed to predict the plastic behaviour of multiphase materials; it has to take into account the hard phases, which deform less easily. The resulting deformation gradients create geometrically necessary dislocations. Additional micro-structural information, such as the morphology and size of hard phase particles or grains, is necessary to derive the strain hardening models for this type of material. Measurements available from the Numisheet benchmarks allow these models to be validated. At Corus, additional measured values are available from cross-die tests. This laboratory test can attain critical deformations by large variations in blank size and processing conditions. The tests are a powerful tool in optimising forming simulations prior
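
    For reference, the "simple fitted Ludwik/Nadai curves" mentioned above are phenomenological hardening laws of the form sketched below in Python (a common textbook parameterization; the coefficient values are illustrative placeholders, not Corus data).

        import numpy as np

        def ludwik(strain, sigma0, K, n):
            """Ludwik hardening: flow stress = sigma0 + K * strain**n."""
            return sigma0 + K * np.power(strain, n)

        def nadai(strain, K, eps0, n):
            """Nadai/Swift hardening: flow stress = K * (eps0 + strain)**n."""
            return K * np.power(eps0 + strain, n)

        # illustrative coefficients only (MPa), not fitted to any real steel
        eps = np.linspace(0.0, 0.3, 7)
        print(ludwik(eps, sigma0=200.0, K=550.0, n=0.45))
        print(nadai(eps, K=750.0, eps0=0.01, n=0.22))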

  3. Peer Influence, Peer Selection and Adolescent Alcohol Use: a Simulation Study Using a Dynamic Network Model of Friendship Ties and Alcohol Use.

    Science.gov (United States)

    Wang, Cheng; Hipp, John R; Butts, Carter T; Jose, Rupa; Lakon, Cynthia M

    2017-05-01

    While studies suggest that peer influence can in some cases encourage adolescent substance use, recent work demonstrates that peer influence may be on average protective for cigarette smoking, raising questions about whether this effect occurs for other substance use behaviors. Herein, we focus on adolescent drinking, which may follow different social dynamics than smoking. We use a data-calibrated Stochastic Actor-Based (SAB) Model of adolescent friendship tie choice and drinking behavior to explore the impact of manipulating the size of peer influence and selection effects on drinking in two school-based networks. We first fit a SAB Model to data on friendship tie choice and adolescent drinking behavior within two large schools (n = 2178 and n = 976) over three time points using data from the National Longitudinal Study of Adolescent to Adult Health. We then alter the size of the peer influence and selection parameters with all other effects fixed at their estimated values and simulate the social systems forward 1000 times under varying conditions. Whereas peer selection appears to contribute to drinking behavior similarity among adolescents, there is no evidence that it leads to higher levels of drinking at the school level. A stronger peer influence effect lowers the overall level of drinking in both schools. There are many similarities in the patterning of findings between this study of drinking and previous work on smoking, suggesting that peer influence and selection may function similarly with respect to these substances.

  4. Simulation Tools for Electrical Machines Modelling: Teaching and ...

    African Journals Online (AJOL)

    Simulation tools are used both for research and teaching to allow a good comprehension of the systems under study before practical implementations. This paper illustrates the way MATLAB is used to model non-linearities in a synchronous machine. The machine is modeled in the rotor reference frame with currents as state ...

  5. Diversity modelling for electrical power system simulation

    International Nuclear Information System (INIS)

    Sharip, R M; Abu Zarim, M A U A

    2013-01-01

    This paper considers the diversity of generation and demand profiles against different future energy scenarios and evaluates these on a technical basis. Compared to previous studies, this research applied a forecasting concept based on possible growth rates from publicly available electrical distribution scenarios concerning the UK. These scenarios were created by different bodies considering aspects such as environment, policy, regulation, economics and technology. In line with these scenarios, forecasting is on a long-term timescale (every ten years from 2020 until 2050) in order to create a possible output of generation mix and demand profiles to be used as an appropriate boundary condition for the network simulation. The network considered is a segment of rural LV network populated with a mixture of different housing types. The profiles for the 'future' energy and demand have been successfully modelled by applying a forecasting method. The network results under these profiles show, for the cases studied, that even though the power produced by each micro-generation unit is often in line with the demand requirements of an individual dwelling, there will be no problems arising from high penetration of micro-generation and demand-side management for the dwellings considered. The results obtained highlight the technical issues/changes for energy delivery and management to rural customers under the future energy scenarios

  6. Diversity modelling for electrical power system simulation

    Science.gov (United States)

    Sharip, R. M.; Abu Zarim, M. A. U. A.

    2013-12-01

    This paper considers the diversity of generation and demand profiles against different future energy scenarios and evaluates these on a technical basis. Compared to previous studies, this research applied a forecasting concept based on possible growth rates from publicly available electrical distribution scenarios concerning the UK. These scenarios were created by different bodies considering aspects such as environment, policy, regulation, economics and technology. In line with these scenarios, forecasting is on a long-term timescale (every ten years from 2020 until 2050) in order to create a possible output of generation mix and demand profiles to be used as an appropriate boundary condition for the network simulation. The network considered is a segment of rural LV network populated with a mixture of different housing types. The profiles for the 'future' energy and demand have been successfully modelled by applying a forecasting method. The network results under these profiles show, for the cases studied, that even though the power produced by each micro-generation unit is often in line with the demand requirements of an individual dwelling, there will be no problems arising from high penetration of micro-generation and demand-side management for the dwellings considered. The results obtained highlight the technical issues/changes for energy delivery and management to rural customers under the future energy scenarios.

  7. Mathematical models and numerical simulation in electromagnetism

    CERN Document Server

    Bermúdez, Alfredo; Salgado, Pilar

    2014-01-01

    The book represents a basic support for a master course in electromagnetism oriented to numerical simulation. The main goal of the book is for the reader to know the boundary-value problems of partial differential equations that should be solved in order to perform computer simulation of electromagnetic processes. Moreover, it includes a part devoted to electric circuit theory based on ordinary differential equations. The book is mainly oriented to electrical engineering applications, going from the general to the specific, namely, from the full Maxwell's equations to the particular cases of electrostatics, direct current, magnetostatics and eddy current models. Apart from standard exercises related to analytical calculus, the book includes some others oriented to real-life applications solved with the free simulation software MaxFEM.

  8. Simulation as a surgical teaching model.

    Science.gov (United States)

    Ruiz-Gómez, José Luis; Martín-Parra, José Ignacio; González-Noriega, Mónica; Redondo-Figuero, Carlos Godofredo; Manuel-Palazuelos, José Carlos

    2018-01-01

    Teaching of surgery has been affected by many factors in recent years, such as the reduction of working hours, the optimization of operating room use and patient safety. Traditional teaching methodology fails to reduce the impact of these factors on surgeons' training. Simulation as a teaching model minimizes such impact, and is more effective than traditional teaching methods for integrating knowledge and clinical-surgical skills. Simulation complements clinical assistance with training, creating a safe learning environment where patient safety is not affected, and ethical or legal conflicts are avoided. Simulation uses learning methodologies that allow teaching individualization, adapting it to the learning needs of each student. It also allows training of all kinds of technical, cognitive or behavioural skills. Copyright © 2017 AEC. Published by Elsevier España, S.L.U. All rights reserved.

  9. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    Science.gov (United States)

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
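
    The article's examples use MATLAB and R; as a comparable illustration of the same embarrassingly parallel idea, here is a minimal Python sketch using the standard library's multiprocessing pool, in which independent, separately seeded replications are farmed out to worker processes. It is not code from the article.

        import multiprocessing as mp
        import random

        def one_replication(seed):
            """A single independent simulation replication (here: a toy Monte Carlo
            of the maximum of 100 uniform draws).  Each replication gets its own
            seed, so runs are independent and their order does not matter."""
            rng = random.Random(seed)
            return max(rng.random() for _ in range(100))

        if __name__ == "__main__":
            seeds = range(10_000)
            # serial baseline
            serial = [one_replication(s) for s in seeds]
            # embarrassingly parallel: the same replications, farmed out to workers
            with mp.Pool() as pool:
                parallel = pool.map(one_replication, seeds)
            assert serial == parallel   # identical results, independent of scheduling
            print(sum(parallel) / len(parallel))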

  10. Virtual milk for modelling and simulation of dairy processes.

    Science.gov (United States)

    Munir, M T; Zhang, Y; Yu, W; Wilson, D I; Young, B R

    2016-05-01

    The modeling of dairy processing using a generic process simulator suffers from shortcomings, given that many simulators do not contain milk components in their component libraries. Recently, pseudo-milk components for a commercial process simulator were proposed for simulation and the current work extends this pseudo-milk concept by studying the effect of both total milk solids and temperature on key physical properties such as thermal conductivity, density, viscosity, and heat capacity. This paper also uses expanded fluid and power law models to predict milk viscosity over the temperature range from 4 to 75°C and develops a succinct regressed model for heat capacity as a function of temperature and fat composition. The pseudo-milk was validated by comparing the simulated and actual values of the physical properties of milk. The milk thermal conductivity, density, viscosity, and heat capacity showed differences of less than 2, 4, 3, and 1.5%, respectively, between the simulated results and actual values. This work extends the capabilities of the previously proposed pseudo-milk and of a process simulator to model dairy processes, processing different types of milk (e.g., whole milk, skim milk, and concentrated milk) with different intrinsic compositions, and to predict correct material and energy balances for dairy processes. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
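
    As a generic illustration of the power-law viscosity model mentioned above, the short sketch below evaluates an apparent viscosity mu = K * (shear rate)**(n - 1); the consistency index K and flow-behaviour index n are placeholders, not the regressed milk parameters from the study.

        def power_law_viscosity(shear_rate, K, n):
            """Apparent viscosity of a power-law fluid: mu = K * shear_rate**(n - 1).
            n < 1 gives shear-thinning behaviour; n = 1 recovers a Newtonian fluid."""
            return K * shear_rate ** (n - 1.0)

        # placeholder coefficients, not the regressed milk parameters from the paper
        for gamma_dot in (10.0, 100.0, 1000.0):   # shear rate in 1/s
            print(gamma_dot, power_law_viscosity(gamma_dot, K=0.005, n=0.9))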

  11. Managing health care decisions and improvement through simulation modeling.

    Science.gov (United States)

    Forsberg, Helena Hvitfeldt; Aronsson, Håkan; Keller, Christina; Lindblad, Staffan

    2011-01-01

    Simulation modeling is a way to test changes in a computerized environment to give ideas for improvements before implementation. This article reviews research literature on simulation modeling as support for health care decision making. The aim is to investigate the experience and potential value of such decision support and quality of articles retrieved. A literature search was conducted, and the selection criteria yielded 59 articles derived from diverse applications and methods. Most met the stated research-quality criteria. This review identified how simulation can facilitate decision making and that it may induce learning. Furthermore, simulation offers immediate feedback about proposed changes, allows analysis of scenarios, and promotes communication on building a shared system view and understanding of how a complex system works. However, only 14 of the 59 articles reported on implementation experiences, including how decision making was supported. On the basis of these articles, we proposed steps essential for the success of simulation projects, not just in the computer, but also in clinical reality. We also presented a novel concept combining simulation modeling with the established plan-do-study-act cycle for improvement. Future scientific inquiries concerning implementation, impact, and the value for health care management are needed to realize the full potential of simulation modeling.

  12. Modeling and simulation of flow field in giant magnetostrictive pump

    Science.gov (United States)

    Zhao, Yapeng; Ren, Shiyong; Lu, Quanguo

    2017-09-01

    In recent years, there has been significant research into the design and analysis of giant magnetostrictive pumps. In this paper, the flow field model of a giant magnetostrictive pump was established and the relationship between pressure loss and the working frequency of the piston was studied by a numerical simulation method. Then, the influence of different pump chamber heights on pressure loss in the giant magnetostrictive pump was studied by means of flow field simulation. Finally, the fluid pressure and velocity vector distribution in the giant magnetostrictive pump chamber were simulated.

  13. A model for plasma discharges simulation in Tokamak devices

    International Nuclear Information System (INIS)

    Fonseca, Antonio M.M.; Silva, Ruy P. da; Galvao, Ricardo M.O.; Kusnetzov, Yuri; Nascimento, I.C.; Cuevas, Nelson

    2001-01-01

    In this work, a 'zero-dimensional' model for the simulation of discharges in a Tokamak machine is presented. The model allows the calculation of the time profiles of important discharge parameters. The model was applied to the TCABR Tokamak to study the influence of parameters and physical processes during the discharges. It basically consists of five differential equations: two related to the primary and secondary circuits of the ohmic heating transformer and three conservation equations for energy, charge and neutral particles. From the physical model, a computer program has been built with the objective of obtaining the time profiles of the plasma current, the current in the primary of the ohmic heating transformer, the electron temperature, the electron density and the neutral particle density. It was also possible, with the model, to simulate the effects of gas puffing during the shot. The results of the simulation were compared with the experimental results obtained in the TCABR Tokamak using hydrogen gas.
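
    Without reproducing the actual TCABR equations, the sketch below shows the generic structure of integrating such a coupled zero-dimensional system (two circuit equations plus three conservation equations) with SciPy; the right-hand side and parameter values are placeholders, not the physics of the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, y, p):
            """Placeholder right-hand side for a coupled zero-D discharge model.
            y = [I_primary, I_plasma, T_e, n_e, n_0]; the couplings below are
            illustrative only and do not reproduce the TCABR model equations."""
            I_p, I_pl, T_e, n_e, n_0 = y
            dI_p  = (p["V_bank"] - p["R_p"] * I_p) / p["L_p"] - p["M"] * I_pl
            dI_pl = (p["M"] * I_p - p["R_pl"] * I_pl) / p["L_pl"]
            dT_e  = p["heat"] * I_pl**2 / max(n_e, 1e16) - T_e / p["tau_E"]
            dn_e  = p["S_ion"] * n_0 * n_e - n_e / p["tau_p"]
            dn_0  = p["gas_puff"] - p["S_ion"] * n_0 * n_e
            return [dI_p, dI_pl, dT_e, dn_e, dn_0]

        # placeholder parameters (SI-like units), chosen only so the system integrates
        params = dict(V_bank=1e3, R_p=0.1, L_p=1e-2, M=1e-3, R_pl=1e-4, L_pl=1e-6,
                      heat=1e-12, tau_E=5e-3, S_ion=1e-14, tau_p=2e-3, gas_puff=1e20)
        sol = solve_ivp(rhs, (0.0, 0.1), y0=[0.0, 0.0, 1.0, 1e17, 1e19],
                        args=(params,), method="LSODA", max_step=1e-4)
        print(sol.y[1, -1])   # "plasma current" at the end of the simulated window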

  14. Modeling and simulation of photovoltaic solar panel

    International Nuclear Information System (INIS)

    Belarbi, M.; Haddouche, K.; Midoun, A.

    2006-01-01

    In this article, we present a new approach for estimating the model parameters of a photovoltaic solar panel as functions of irradiance and temperature. The parameters of the one-diode model are determined from knowledge of three operating points: short-circuit,