WorldWideScience

Sample records for simulation modeling study

  1. Preliminary multidomain modelling and simulation study

    African Journals Online (AJOL)

    user

    Preliminary multidomain modelling and simulation study of a horizontal axis wind turbine (HAWT) tower vibration. I. Iliyasu, I. Iliyasu, I. K. Tanimu and D. O. Obada. Department of Mechanical Engineering, Ahmadu Bello University, Zaria, Kaduna State, Nigeria.

  2. Plasma simulation studies using multilevel physics models

    International Nuclear Information System (INIS)

    Park, W.; Belova, E.V.; Fu, G.Y.

    2000-01-01

    The question of how to proceed toward ever more realistic plasma simulation studies using ever increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase-spaces and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluids; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase-space), gyrokinetic-particle-ion/fluid-electron (5D ion phase-space), and full-kinetic-particle-ion/fluid-electron level (6D ion phase-space). Resolving electron phase-space (5D or 6D) remains a future project. Phase-space-fluid models are not used in favor of delta f particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems not likely in the near future.

  3. Plasma simulation studies using multilevel physics models

    Energy Technology Data Exchange (ETDEWEB)

    Park, W.; Belova, E.V.; Fu, G.Y. [and others]

    2000-01-19

    The question of how to proceed toward ever more realistic plasma simulation studies using ever increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase-spaces and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluids; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase-space), gyrokinetic-particle-ion/fluid-electron (5D ion phase-space), and full-kinetic-particle-ion/fluid-electron level (6D ion phase-space). Resolving electron phase-space (5D or 6D) remains a future project. Phase-space-fluid models are not used in favor of delta f particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems not likely in the near future.

  4. Computer simulation study of water using a fluctuating charge model

    Indian Academy of Sciences (India)

    Unknown

    Abstract. Hydrogen bonding in small water clusters is studied through computer simulation methods using a sophisticated, empirical model of interaction developed by Rick et al. (S. W. Rick, S. J. Stuart and B. J. Berne 1994 J. Chem. Phys. 101 6141) and others. The model allows for the charges on the interacting sites to ...

  5. Computer simulation study of water using a fluctuating charge model

    Indian Academy of Sciences (India)

    Unknown

    study of water through computer simulation methods has attracted considerable attention. ... water. In particular, the single particle and collective relaxation times obtained using this model are in rough agreement with experiment. Yet, in all these quantities, the ..... The fictitious mass of the charge has to be chosen with care.

  6. A study for production simulation model generation system based on data model at a shipyard

    Directory of Open Access Journals (Sweden)

    Myung-Gi Back

    2016-09-01

    Full Text Available Simulation technology is a type of shipbuilding product lifecycle management solution used to support production planning or decision-making. Normally, most shipbuilding processes consist of job shop production, and their modeling and simulation require professional skills and experience in shipbuilding. For these reasons, many shipbuilding companies have difficulties adopting simulation systems, regardless of the necessity for the technology. In this paper, the data model for shipyard production simulation model generation was defined by analyzing the iterative simulation modeling procedure. The shipyard production simulation data model defined in this study contains the information necessary for the conventional simulation modeling procedure and can serve as a basis for simulation model generation. The efficacy of the developed system was validated by applying it to the simulation model generation of a panel block production line. By implementing the initial simulation model generation process, which was previously performed by a simulation modeler, the proposed system substantially reduced the modeling time. In addition, by reducing the difficulties posed by different modeler-dependent generation methods, the proposed system makes standardization of simulation model quality possible.

  7. Mathematical and computational modeling and simulation fundamentals and case studies

    CERN Document Server

    Moeller, Dietmar P F

    2004-01-01

    Mathematical and Computational Modeling and Simulation - a highly multi-disciplinary field with ubiquitous applications in science and engineering - is one of the key enabling technologies of the 21st century. This book introduces the use of Mathematical and Computational Modeling and Simulation to develop an understanding of the solution characteristics of a broad class of real-world problems. The relevant basic and advanced methodologies are explained in detail, with special emphasis on ill-defined problems. Some 15 simulation systems are presented at the language and the logical level. Moreover, the reader can accumulate experience by studying a wide variety of case studies. The latter are briefly described within the book, but their full versions as well as some simulation software demos are available on the Web. The book can be used for university courses at different levels as well as for self-study. Advanced sections are marked and can be skipped in a first reading or in undergraduate courses...

  8. Simulating lightning into the RAMS model: two case studies

    Science.gov (United States)

    Federico, Stefano; Avolio, Elenio; Petracca, Marco; Panegrossi, Giulia; Dietrich, Stefano

    2013-04-01

    In this paper we show the results of the implementation of a tailored version of a methodology already presented in the literature to simulate flashes in the Regional Atmospheric Modeling System (RAMS). The method gives the flash rate for each thundercloud, which is detected by a labelling algorithm applied to the output of RAMS. The flash rate is computed by assuming a plane capacitor model, which is charged by the non-inductive graupel-ice charge separation mechanism and is discharged by lightning. The method explicitly considers the charging zone and uses the geometry of the graupel field to redistribute the flashes. An important feature of the method is that it gives the position and time of occurrence of each flash, allowing for a detailed and comprehensive display of the lightning activity during the simulation period. The method is applied to two case studies that occurred over the Lazio Region, in central Italy. Simulations are compared with the lightning detected by the LINET network. The cases refer to a thunderstorm characterized by intense lightning activity (up to 2800 flashes per hour over the Lazio Region), and a moderate thunderstorm (up to 1600 flashes per hour over the same domain). The results show that the model is able to catch the main features of both storms and their relative differences. This is promising because the method is computationally fast and gives the forecaster a tool to predict the lightning threat. Nevertheless, there are errors in the timing (O(3 h)) and positioning (O(100 km)) of the convection, which are mirrored in timing and position errors of the simulated lightning distribution. These model shortcomings presently limit the use of the lightning forecast; nevertheless, the method can take advantage of future developments in the model physics, initialization techniques, and ensemble forecasting. A useful application of the method in an ensemble forecast is already suggested.
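
    As a rough illustration of the plane-capacitor idea described in this record, the sketch below accumulates separated charge in a charging zone and counts a flash each time the capacitor field reaches an assumed breakdown threshold. All names and parameter values (E_BREAKDOWN, SIGMA_PER_FLASH, the charging rate) are illustrative assumptions, not values from RAMS or the paper.

```python
# Hedged sketch of a plane-capacitor flash-rate model (illustrative values).
EPS0 = 8.854e-12        # vacuum permittivity, F/m
E_BREAKDOWN = 150e3     # assumed breakdown field between the "plates", V/m
SIGMA_PER_FLASH = 1e-6  # assumed charge density neutralized per flash, C/m^2

def flash_count(charging_rate, duration_s, dt=1.0):
    """Count flashes for a cell whose non-inductive graupel-ice mechanism
    separates charge at `charging_rate` (C/m^2 per s) for `duration_s` seconds."""
    sigma, flashes, t = 0.0, 0, 0.0
    while t < duration_s:
        sigma += charging_rate * dt       # capacitor charging
        if sigma / EPS0 >= E_BREAKDOWN:   # plane-capacitor field E = sigma/eps0
            sigma -= SIGMA_PER_FLASH      # lightning discharges the capacitor
            flashes += 1
        t += dt
    return flashes

# e.g. one hour of charging at an assumed 2e-9 C/m^2/s:
print(flash_count(2e-9, 3600.0))
```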

  9. A model ecosystem experiment and its computational simulation studies

    International Nuclear Information System (INIS)

    Doi, M.

    2002-01-01

    A simplified microbial model ecosystem and its computer simulation model are introduced as an eco-toxicity test for the assessment of environmental responses to environmental impacts. To take the interactions between species and environment into account, one option is to select a keystone species on the basis of ecological knowledge and put it in a single-species toxicity test. Another option, proposed here, is to frame the eco-toxicity tests as an experimental micro-ecosystem study combined with a theoretical model-ecosystem analysis. With these tests, stressors that are more harmful to ecosystems should be replaced with less harmful ones on the basis of unified measures. Management of radioactive materials, chemicals, hyper-eutrophication, and other artificial disturbances of ecosystems should be discussed consistently from the unified viewpoint of environmental protection. (N.C.)

  10. Studies of climate dynamics with innovative global-model simulations

    Science.gov (United States)

    Shi, Xiaoming

    Climate simulations with different degrees of idealization are essential for the development of our understanding of the climate system. Studies in this dissertation employ carefully designed global-model simulations with the goal of gaining theoretical and conceptual insights into some problems of climate dynamics. Firstly, global warming-induced changes in extreme precipitation are investigated using a global climate model with idealized geography. The precipitation changes over an idealized north-south mid-latitude mountain barrier at the western margin of an otherwise flat continent are studied. The intensity of the 40 most intense events on the western slopes increases at about 4% per °C of surface warming. In contrast, the intensity of the top 40 events on the eastern mountain slopes increases at about 6% per °C. This higher sensitivity is due to enhanced ascent during the eastern-slope events, which can be explained in terms of linear mountain-wave theory, relating to global warming-induced changes in the upper-tropospheric static stability and the tropopause level. Dominated by different dynamical factors, changes in the intensity of extreme precipitation events over plains and oceans might differ from changes over mountains. So the responses of extreme precipitation over mountains and flat areas are further compared using larger data sets of simulated extreme events over the two types of surfaces. It is found that the sensitivity of extreme precipitation to increases in global mean surface temperature is 3% per °C lower over mountains than over the oceans or the plains. The difference in sensitivity among these regions is not due to thermodynamic effects, but rather to differences between the gravity-wave dynamics governing vertical velocities over the mountains and the cyclone dynamics governing vertical motions over the oceans and plains. The strengthening of latent heating in the storms over oceans and plains leads to stronger ascent in the warming climate.
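
    A quick way to see what a percent-per-degree sensitivity implies for event intensity is to compound it over a warming amount. The sketch below is a minimal illustration with assumed numbers, not output from the dissertation's climate model.

```python
# Minimal illustration of percent-per-degree scaling of extreme
# precipitation intensity; all values below are assumptions for demonstration.
def scaled_intensity(i0_mm_per_h, sensitivity_pct_per_degC, warming_degC):
    """Compound the fractional change per degree over the warming amount."""
    return i0_mm_per_h * (1.0 + sensitivity_pct_per_degC / 100.0) ** warming_degC

# e.g. a 20 mm/h event under 3 degC of warming at 4 %/degC vs 6 %/degC:
print(scaled_intensity(20.0, 4.0, 3.0))  # ~22.5 mm/h
print(scaled_intensity(20.0, 6.0, 3.0))  # ~23.8 mm/h
```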

  11. A Theoretical Study of Subsurface Drainage Model Simulation of ...

    African Journals Online (AJOL)

    A three-dimensional variable-density groundwater flow model, the SEAWAT model, was used to assess the influence of subsurface drain spacing, evapotranspiration and irrigation water quality on salt concentration at the base of the root zone, leaching and drainage in salt affected irrigated land. The study was carried out ...

  12. Impact of atmospheric model resolution on simulation of ENSO feedback processes: a coupled model study

    Science.gov (United States)

    Hua, Lijuan; Chen, Lin; Rong, Xinyao; Su, Jingzhi; Wang, Lu; Li, Tim; Yu, Yongqiang

    2018-03-01

    This study examines El Niño-Southern Oscillation (ENSO)-related air-sea feedback processes in a coupled general circulation model (CGCM) to gauge model errors and pin down their sources in ENSO simulation. Three horizontal resolutions of the atmospheric component (T42, T63 and T106) of the CGCM are used to investigate how the simulated ENSO behaviors are affected by the resolution. We find that air-sea feedback processes in the three experiments mainly differ in terms of both thermodynamic and dynamic feedbacks. We also find that these processes are simulated more reasonably in the highest resolution version than in the other two lower resolution versions. The difference in the thermodynamic feedback arises from the difference in the shortwave-radiation (SW) feedback. Due to the severely (mildly) excessive cold tongue in the lower (higher) resolution version, the SW feedback is severely (mildly) underestimated. The main difference in the dynamic feedback processes lies in the thermocline feedback and the zonal-advection feedback, both of which are caused by the difference in the anomalous thermocline response to anomalous zonal wind stress. The difference in representing the anomalous thermocline response is attributed to the difference in meridional structure of zonal wind stress anomaly in the three simulations, which is linked to meridional resolution.

  13. Extremophiles Survival to Simulated Space Conditions: An Astrobiology Model Study

    OpenAIRE

    Mastascusa, V.; Romano, I.; Di Donato, P.; Poli, A.; Della Corte, V.; Rotundi, A.; Bussoletti, E.; Quarto, M.; Pugliese, M.; Nicolaus, B.

    2015-01-01

    In this work we investigated the ability of four extremophilic bacteria from the Archaea and Bacteria domains to resist the space environment by exposing them to extreme conditions of temperature, UV radiation, and desiccation coupled to the low pressure generated in a Mars' conditions simulator. All the investigated extremophilic strains (namely Sulfolobus solfataricus, Haloterrigena hispanica, Thermotoga neapolitana and Geobacillus thermantarcticus) showed a good resistance to the simulation of the temp...

  14. Modelling and Simulation of TCPAR for Power System Flow Studies

    Directory of Open Access Journals (Sweden)

    Narimen Lahaçani AOUZELLAG

    2012-12-01

    Full Text Available In this paper, the modelling of the Thyristor Controlled Phase Angle Regulator ‘TCPAR’ for power flow studies and the role of that modelling in the study of Flexible Alternating Current Transmission Systems ‘FACTS’ for power flow control are discussed. In order to investigate the impact of the TCPAR on power systems effectively, it is essential to formulate a correct and appropriate model for it. The TCPAR makes it possible to considerably increase or decrease the power flowing in the line where it is inserted, which makes it an ideal tool for this kind of use. Since the TCPAR does not inject any active power, it offers a good solution with low consumption. One of the adverse effects of the TCPAR is the voltage drop which it causes in the network, although it is not significant. To overcome this disadvantage, it is enough to introduce a Static VAR Compensator ‘SVC’ into the electrical network, which will compensate for the voltage drop and bring the voltages back to an acceptable level.

  15. Simulation study of a rectifying bipolar ion channel: Detailed model versus reduced model

    Directory of Open Access Journals (Sweden)

    Z. Ható

    2016-02-01

    Full Text Available We study a rectifying mutant of the OmpF porin ion channel using both all-atom and reduced models. The mutant was created by Miedema et al. [Nano Lett., 2007, 7, 2886] on the basis of the NP semiconductor diode, in which an NP junction is formed. The mutant contains a pore region with positive amino acids on the left-hand side and negative amino acids on the right-hand side. Experiments show that this mutant rectifies. Although we do not know the structure of this mutant, we can build an all-atom model for it on the basis of the structure of the wild-type channel. Interestingly, molecular dynamics simulations for this all-atom model do not produce rectification. A reduced model that contains only the important degrees of freedom (the positive and negative amino acids and free ions in an implicit solvent), on the other hand, exhibits rectification. Our calculations for the reduced model (using the Nernst-Planck equation coupled to Local Equilibrium Monte Carlo simulations) reveal a rectification mechanism that is different from that seen in semiconductor diodes. The basic reason is that the ions are different in nature from electrons and holes (they do not recombine). We provide explanations for the failure of the all-atom model, including the effect of all the other atoms in the system as a noise that inhibits the response of the ions to the polarizing external field (a response that would be necessary for rectification).

  16. Extremophiles Survival to Simulated Space Conditions: An Astrobiology Model Study

    Science.gov (United States)

    Mastascusa, V.; Romano, I.; Di Donato, P.; Poli, A.; Della Corte, V.; Rotundi, A.; Bussoletti, E.; Quarto, M.; Pugliese, M.; Nicolaus, B.

    2014-09-01

    In this work we investigated the ability of four extremophilic bacteria from the Archaea and Bacteria domains to resist the space environment by exposing them to extreme conditions of temperature, UV radiation, and desiccation coupled to the low pressure generated in a Mars' conditions simulator. All the investigated extremophilic strains (namely Sulfolobus solfataricus, Haloterrigena hispanica, Thermotoga neapolitana and Geobacillus thermantarcticus) showed a good resistance to the simulation of temperature variation in space; on the other hand, irradiation with UV at 254 nm only slightly affected the growth of H. hispanica, G. thermantarcticus and S. solfataricus; finally, exposure to simulated Mars conditions showed that H. hispanica and G. thermantarcticus were resistant to desiccation and low pressure.

  17. Large wind power plants modeling techniques for power system simulation studies

    Energy Technology Data Exchange (ETDEWEB)

    Larose, Christian; Gagnon, Richard; Turmel, Gilbert; Giroux, Pierre; Brochu, Jacques [IREQ Hydro-Quebec Research Institute, Varennes, QC (Canada); McNabb, Danielle; Lefebvre, Daniel [Hydro-Quebec TransEnergie, Montreal, QC (Canada)

    2009-07-01

    This paper presents efficient modeling techniques for the simulation of large wind power plants in the EMT domain using a parallel supercomputer. Using these techniques, large wind power plants can be simulated in detail, with each wind turbine individually represented, as well as the collector and receiving network. The simulation speed of the resulting models is fast enough to perform both EMT and transient stability studies. The techniques are applied to develop a detailed EMT model of a generic wind power plant consisting of 73 x 1.5-MW doubly-fed induction generator (DFIG) wind turbines. Validation of the modeling techniques is presented using a comparison with a Matlab/SimPowerSystems simulation. To demonstrate the simulation capabilities of these modeling techniques, simulations involving a 120-bus receiving network with two generic wind power plants (146 wind turbines) are performed. The complete system is modeled using the Hypersim simulator and Matlab/SimPowerSystems. The simulations are performed on a 32-processor supercomputer using an EMTP-like solution with a time step of 18.4 μs. The simulation performance is 10 times slower than real time, which is a huge gain in performance compared to traditional tools. The simulation is designed to run in real time so it never stops, resulting in the capability to perform thousands of tests via automatic testing tools. (orig.)

  18. A Tower Model for Lightning Overvoltage Studies Based on the Result of an FDTD Simulation

    Science.gov (United States)

    Noda, Taku

    This paper describes a method for deriving a transmission tower model for EMTP lightning overvoltage studies from a numerical electromagnetic simulation result obtained by the FDTD (Finite Difference Time Domain) method. The FDTD simulation carried out in this paper takes into account the following items, which have been ignored or over-simplified in previously presented simulations: (i) resistivity of the ground soil; (ii) arms, major slant elements, and foundations of the tower; (iii) development speed of the lightning return stroke. For validation purposes, a pulse test of a 500-kV transmission tower is simulated, and a comparison with the measured result shows that the present FDTD simulation gives a sufficiently accurate result. Using this validated FDTD-based simulation method, the insulator-string voltages of a tower for a lightning stroke are calculated, and based on the simulation result the parameter values of the proposed tower model for EMTP studies are determined in a systematic way. Since previously presented models include a trial-and-error process in the parameter determination, the proposed model is more general in this regard. As an illustrative example, the 500-kV transmission tower mentioned above is modeled, and it is shown that the derived model closely reproduces the FDTD simulation result.

  19. A simulation study in dynamic 'Utah'-models

    DEFF Research Database (Denmark)

    Toldam-Andersen, Torben Bo

    1992-01-01

    major variants: I) Models involving a procedure which makes it possible to include the dynamic relationship between chilling and heat requirement. II) Models involving a chain structure with each "link" representing a developmental phase through the dormancy period. Based on the theories proposed...... function is thought to represent the average response. However, in deep and late dormancy periods the response might be quite different. A possible way to improve phenological temperature models might be to involve cultivar-specific routines which adjust the temperature functions according to the dormancy

  20. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    leaving students. It is a probabilistic model. In the next part of this article, two more models - 'input/output model' used for production systems or economic studies and a. 'discrete event simulation model' are introduced. Aircraft Performance Model.

  1. Numerical Modeling and Combustion Studies of Scram Jet Simulation

    Science.gov (United States)

    2014-12-01

    National Bureau of Standards, June 1974. [26] Mirko Gamba, M. Godfrey Mungal and Ronald K. Hanson. OH PLIF imaging of the reaction zone in combusting... A flamelet-based model for supersonic combustion. Annual Research Briefs, 2009. [45] Victor Miller, Mirko Gamba, M. Godfrey Mungal and Ronald K.

  2. Modeling and simulation of queuing system for customer service improvement: A case study

    Science.gov (United States)

    Xian, Tan Chai; Hong, Chai Weng; Hawari, Nurul Nazihah

    2016-10-01

    This study aims to develop a queuing model of UniMall by using a discrete event simulation approach to analyze the service performance that affects customer satisfaction. The performance measures considered in this model are the average time in system, the total number of students served, the number of students in the waiting queue, the waiting time in queue, and the maximum length of the buffer. ARENA simulation software is used to develop a simulation model and the output is analyzed. Based on the analysis of the output, it is recommended that the management of UniMall consider introducing shifts and adding another payment counter in the morning.
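
    For readers without ARENA, the queue statistics named above can be reproduced in miniature with a single-server Markovian queue and the Lindley recursion. The sketch below is a simplified stand-in for the paper's model; the arrival and service rates are illustrative assumptions, not measured UniMall data.

```python
import random

def simulate_counter(arrival_rate, service_rate, n_customers, seed=42):
    """Average waiting time in queue and average time in system for a
    single-server queue with exponential interarrival and service times."""
    rng = random.Random(seed)
    wait, prev_service = 0.0, 0.0
    waits, times_in_system = [], []
    for _ in range(n_customers):
        interarrival = rng.expovariate(arrival_rate)
        # Lindley recursion: this customer's wait depends on the previous
        # customer's wait and service time, minus the gap between arrivals.
        wait = max(0.0, wait + prev_service - interarrival)
        service = rng.expovariate(service_rate)
        waits.append(wait)
        times_in_system.append(wait + service)
        prev_service = service
    return sum(waits) / len(waits), sum(times_in_system) / len(times_in_system)

# e.g. ~50 students/hour arriving at a counter that serves 60/hour:
print(simulate_counter(50 / 60, 1.0, 10_000))  # (avg wait, avg time) in minutes
```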

  3. An IT-enabled supply chain model: a simulation study

    Science.gov (United States)

    Cannella, Salvatore; Framinan, Jose M.; Barbosa-Póvoa, Ana

    2014-11-01

    During the last decades, supply chain collaboration practices and the underlying enabling technologies have evolved from the classical electronic data interchange (EDI) approach to web-based and radio frequency identification (RFID)-enabled collaboration. In this field, most of the literature has focused on the study of optimal parameters for reducing the total cost of suppliers, by adopting operational research (OR) techniques. Herein we are interested in showing that the considered information technology (IT)-enabled structure is resilient, that is, it works well across a reasonably broad range of parameter settings. By adopting a methodological approach based on system dynamics, we study a multi-tier collaborative supply chain. Results show that the IT-enabled supply chain improves operational performance and customer service level. Nonetheless, the benefits for geographically dispersed networks are smaller.

  4. EMC Simulation and Modeling

    Science.gov (United States)

    Takahashi, Takehiro; Schibuya, Noboru

    EMC simulation is now widely used in the design stage of electronic equipment to reduce electromagnetic noise. As the electromagnetic behavior calculated by an EMC simulator depends on the EMC model of the equipment given as input, the modeling technique is important for obtaining effective results. In this paper, a simple outline of the EMC simulator and the EMC model is described. Some modeling techniques for EMC simulation are also described, with an example of an EMC model of a shielded box with an aperture.

  5. Modelling, Simulation and Optimisation of Utility – Service Provision for Households: Case Studies

    OpenAIRE

    Strzelecka, A.; Skworcow, P.; Ulanicki, B.

    2014-01-01

    In the research presented in this paper, household case studies were considered. The main objective of this research is to investigate models and algorithms for alternative approaches to current utility–service provision. This paper is focused on case studies that consider standard solutions to utility–service provision problems and propose improvements to these solutions. Results are obtained using a simulation system developed in C#. The simulation system evaluates the feasibility of proposed candid...

  6. Co-producing simulation models to inform resource management: a case study from southwest South Dakota

    Science.gov (United States)

    Miller, Brian W.; Symstad, Amy J.; Frid, Leonardo; Fisichelli, Nicholas A.; Schuurman, Gregor W.

    2017-01-01

    Simulation models can represent complexities of the real world and serve as virtual laboratories for asking “what if…?” questions about how systems might respond to different scenarios. However, simulation models have limited relevance to real-world applications when designed without input from people who could use the simulated scenarios to inform their decisions. Here, we report on a state-and-transition simulation model of vegetation dynamics that was coupled to a scenario planning process and co-produced by researchers, resource managers, local subject-matter experts, and climate change adaptation specialists to explore potential effects of climate scenarios and management alternatives on key resources in southwest South Dakota. Input from management partners and local experts was critical for representing key vegetation types, bison and cattle grazing, exotic plants, fire, and the effects of climate change and management on rangeland productivity and composition given the paucity of published data on many of these topics. By simulating multiple land management jurisdictions, climate scenarios, and management alternatives, the model highlighted important tradeoffs between grazer density and vegetation composition, as well as between the short- and long-term costs of invasive species management. It also pointed to impactful uncertainties related to the effects of fire and grazing on vegetation. More broadly, a scenario-based approach to model co-production bracketed the uncertainty associated with climate change and ensured that the most important (and impactful) uncertainties related to resource management were addressed. This cooperative study demonstrates six opportunities for scientists to engage users throughout the modeling process to improve model utility and relevance: (1) identifying focal dynamics and variables, (2) developing conceptual model(s), (3) parameterizing the simulation, (4) identifying relevant climate scenarios and management

  7. Study of Monte Carlo Simulation Method for Methane Phase Diagram Prediction using Two Different Potential Models

    KAUST Repository

    Kadoura, Ahmad

    2011-06-06

    Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical one. A molecular simulation approach, particularly Monte Carlo simulation, was employed to create these isotherms, working with both canonical and Gibbs ensembles. Experiments in the canonical ensemble with each model were conducted to estimate pressures at a range of temperatures above the critical temperature of methane. Results were collected and compared to experimental data existing in the literature; both models showed elegant agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L-J model only. Upon comparing the results with experimental ones, a good fit was obtained with small deviations. The work was further developed by adding some statistical studies in order to achieve a better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be, hence further applications to more complicated systems are considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps avoid the problems caused by its dissolution in gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
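
    The two pair potentials named in this record have standard closed forms, sketched below together with the canonical-ensemble Metropolis acceptance rule used in such Monte Carlo runs. The methane parameter values are illustrative assumptions (energies expressed as epsilon/kB in kelvin, distances in angstroms), not the thesis's fitted values.

```python
import math, random

def lennard_jones(r, eps=148.0, sigma=3.73):
    """12-6 Lennard-Jones pair energy; eps in K (as eps/kB), r in angstroms."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def buckingham_exp6(r, eps=160.0, r_min=4.19, alpha=15.0):
    """Buckingham exponential-6 pair energy: exponential repulsion plus
    r^-6 dispersion, with well depth eps at separation r_min."""
    pref = eps / (1.0 - 6.0 / alpha)
    return pref * ((6.0 / alpha) * math.exp(alpha * (1.0 - r / r_min))
                   - (r_min / r) ** 6)

def metropolis_accept(delta_u, temperature_K, rng=random):
    """Canonical (NVT) Metropolis criterion; delta_u in K, T in K."""
    return delta_u <= 0.0 or rng.random() < math.exp(-delta_u / temperature_K)
```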

  8. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    Science.gov (United States)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response planning. The emergency evacuation of large commercial shopping areas, as typical service systems, is one of the hot research topics. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model has been proposed, and the methodology has been examined within the context of a case study involving evacuation within a commercial shopping mall. Pedestrian walking is based on Cellular Automata and the event-driven model. In this paper, the event-driven model is adopted to simulate pedestrian movement patterns, and the simulation process is divided into a normal situation and emergency evacuation. The model is composed of four layers: an environment layer, customer layer, clerk layer and trajectory layer. For the simulation of the movement routes of pedestrians, the model takes into account the purchase intentions of customers and the density of pedestrians. Based on the evacuation model of Cellular Automata with Dynamic Floor Field and the event-driven model, we can reflect the behavioral characteristics of customers and clerks in normal situations and during emergency evacuation. The distribution of individual evacuation time as a function of initial position and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model combining Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of shopping malls.
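
    A minimal static-floor-field step, in the spirit of the Cellular Automata component described above, can be sketched as follows. The grid, neighborhood, and conflict-resolution rule are simplified assumptions; the paper additionally uses a dynamic floor field and an event-driven layer, which are omitted here.

```python
def ca_step(occupied, field):
    """One synchronous update of a floor-field CA.
    occupied: set of (row, col) pedestrian cells;
    field[r][c]: static floor field, i.e. distance to the nearest exit."""
    rows, cols = len(field), len(field[0])
    new_positions = set()
    for (r, c) in sorted(occupied):            # fixed order resolves conflicts
        candidates = [(r + dr, c + dc)
                      for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                      if 0 <= r + dr < rows and 0 <= c + dc < cols]
        candidates = [p for p in candidates
                      if p not in occupied and p not in new_positions]
        best = min(candidates, key=lambda p: field[p[0]][p[1]], default=None)
        if best is not None and field[best[0]][best[1]] < field[r][c]:
            new_positions.add(best)            # step downhill toward the exit
        else:
            new_positions.add((r, c))          # blocked: stay in place
    return new_positions
```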

  9. Simulation modeling and arena

    CERN Document Server

    Rossetti, Manuel D

    2015-01-01

    Emphasizes a hands-on approach to learning statistical analysis and model building through the use of comprehensive examples, problem sets, and software applications. With a unique blend of theory and applications, Simulation Modeling and Arena®, Second Edition integrates coverage of statistical analysis and model building to emphasize the importance of both topics in simulation. Featuring introductory coverage on how simulation works and why it matters, the Second Edition expands coverage on static simulation and the applications of spreadsheets to perform simulation. The new edition also...

  10. Use of rainfall-simulator data in precipitation-runoff modeling studies

    Science.gov (United States)

    Lusby, G.C.; Lichty, R.W.

    1983-01-01

    Results of a study using a rainfall simulator to define infiltration parameters for use in watershed modeling are presented. A total of 23 rainfall-simulation runs were made on five small plots representing four representative soil-vegetation types of the study watershed in eastern Colorado. Data for three observed rainfall-runoff events were recorded by gages on four of the plots. Data from all events were used to develop best-fit parameters of the Green and Ampt infiltration equation. The hydraulic conductivity of the transmission zone, KSAT, grossly controlled the goodness of fit of all modeling attempts. Results of fitting KSAT to reproduce runoff from rainfall simulator runs and results of fitting KSAT to reproduce runoff from observed rainfall-runoff events are inconsistent. Variations in results from site to site and at different times of the year were observed. (USGS)
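
    The Green and Ampt equation referenced in this record has a simple closed form for infiltration capacity, with KSAT as the controlling parameter the study highlights. The sketch below is a minimal illustration; the soil parameter values are assumptions for demonstration, not the study's fitted values.

```python
def green_ampt_rate(F, ksat, psi, dtheta):
    """Green-Ampt infiltration capacity f (cm/h) at cumulative depth F (cm).
    ksat: transmission-zone hydraulic conductivity (cm/h);
    psi: wetting-front suction head (cm); dtheta: moisture deficit (-)."""
    return ksat * (1.0 + psi * dtheta / F)

def cumulative_infiltration(ksat, psi, dtheta, hours, dt=0.001, f0=0.1):
    """Explicit time stepping under continuous ponding; f0 seeds F with a
    small depth to avoid the singularity at F = 0."""
    F, t = f0, 0.0
    while t < hours:
        F += green_ampt_rate(F, ksat, psi, dtheta) * dt
        t += dt
    return F

# e.g. ksat = 1 cm/h, psi = 11 cm, dtheta = 0.3, 2 h of ponded infiltration:
print(cumulative_infiltration(1.0, 11.0, 0.3, 2.0))  # roughly 5 cm
```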

  11. Integration of environmental simulation models with satellite remote sensing and geographic information systems technologies: case studies

    Science.gov (United States)

    Steyaert, Louis T.; Loveland, Thomas R.; Brown, Jesslyn F.; Reed, Bradley C.

    1993-01-01

    Environmental modelers are testing and evaluating a prototype land cover characteristics database for the conterminous United States developed by the EROS Data Center of the U.S. Geological Survey and the University of Nebraska Center for Advanced Land Management Information Technologies. This database was developed from multitemporal, 1-kilometer advanced very high resolution radiometer (AVHRR) data for 1990 and various ancillary data sets such as elevation, ecological regions, and selected climatic normals. Several case studies using this database were analyzed to illustrate the integration of satellite remote sensing and geographic information systems technologies with land-atmosphere interaction models at a variety of spatial and temporal scales. The case studies are representative of contemporary environmental simulation modeling at local to regional levels in global change research, land and water resource management, and environmental risk assessment. The case studies feature land surface parameterizations for atmospheric mesoscale and global climate models; biogenic-hydrocarbon emissions models; distributed-parameter watershed and other hydrological models; and various ecological models such as ecosystem dynamics, biogeochemical cycles, ecotone variability, and equilibrium vegetation models. The case studies demonstrate the importance of multitemporal AVHRR data to develop and maintain a flexible, near-realtime land cover characteristics database. Moreover, such a flexible database is needed to derive various vegetation classification schemes, to aggregate data for nested models, to develop remote sensing algorithms, and to provide data on dynamic landscape characteristics. The case studies illustrate how such a database supports research on spatial heterogeneity, land use, sensitivity analysis, and scaling issues.

  12. Atmospheric models in the numerical simulation system (SPEEDI-MP) for environmental studies

    International Nuclear Information System (INIS)

    Nagai, Haruyasu; Terada, Hiroaki

    2007-01-01

    As a nuclear emergency response system, numerical models to predict the atmospheric dispersion of radionuclides have been developed at the Japan Atomic Energy Agency (JAEA). Evolving these models by incorporating new schemes for physical processes and up-to-date computational technologies, a numerical simulation system, which consists of dynamical models and material transport models for the atmospheric, terrestrial, and oceanic environments, has been constructed for application to various environmental studies. In this system, the combination of a non-hydrostatic atmospheric dynamic model and a Lagrangian particle dispersion model is used for the emergency response system. The use of the detailed meteorological fields from the atmospheric model improves the model performance for diffusion and deposition calculations. The system also calculates a large-area domain with coarse resolution and a local-area domain with high resolution simultaneously. The performance of the new model system was evaluated using measurements of the surface deposition of 137Cs over Europe during the Chernobyl accident. (author)

  13. Evaluation of Al-Najaf Hospital Intersection Performance Using Simulation model: Case Study

    Directory of Open Access Journals (Sweden)

    Hamid Athab Eedan Al-Jameel

    2016-03-01

    Full Text Available Traffic congestion is a widely spreading problem throughout the world. It is mainly observed around intersections in urban areas. In this study, the Al-Najaf Hospital (Ibn Blal) intersection has been evaluated because it is considered the most congested T-intersection on the Kufa-Najaf road. This T-intersection suffers from high congestion, especially in the morning peak. This could be due to the many centers of activity (trip generation and attraction) on that road, such as the University of Kufa, four hospitals and other facilities. Although the Highway Capacity Manual (HCM 2000) suffers from several shortcomings and limitations, it is used widely in the evaluation of intersections in Iraq. On the other hand, simulation models have been proven to be accurate tools in the evaluation of intersections. Therefore, a simulation model (the S-Paramics model) has been used to assess the performance of the current intersection. The simulation model was then calibrated with field data. Data was collected from the intersection using a video camera installed over the Al-Najaf Hospital building. The results of this study show that the developed model clearly mimics reality. Different alternatives have then been implemented using the developed model. Consequently, the construction of an overpass coming from the Najaf-Kufa road towards the Al-Sahlaa road, with a protected U-turn, is the best alternative.

  14. Simulation in Complex Modelling

    DEFF Research Database (Denmark)

    Nicholas, Paul; Ramsgaard Thomsen, Mette; Tamke, Martin

    2017-01-01

    This paper will discuss the role of simulation in extended architectural design modelling. As a framing paper, the aim is to present and discuss the role of integrated design simulation and feedback between design and simulation in a series of projects under the Complex Modelling framework. Complex...... Restraint developed for the research exhibition Complex Modelling, Meldahls Smedie Gallery, Copenhagen in 2016. Where the direct project aims and outcomes have been reported elsewhere, the aim for this paper is to discuss overarching strategies for working with design integrated simulation....

  15. Cross-flow turbines: physical and numerical model studies towards improved array simulations

    Science.gov (United States)

    Wosnik, M.; Bachant, P.

    2015-12-01

    Cross-flow, or vertical-axis, turbines show potential in marine hydrokinetic (MHK) and wind energy applications. As turbine designs mature, the research focus is shifting from individual devices towards improving turbine array layouts for maximizing overall power output, i.e., minimizing wake interference for axial-flow turbines, or taking advantage of constructive wake interaction for cross-flow turbines. Numerical simulations are generally better suited to explore the turbine array design parameter space, as physical model studies of large arrays at large model scale would be expensive. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries, the turbines' interaction with the energy resource needs to be parameterized, or modeled. Most models in use today, e.g. the actuator disk, are not able to predict the unique wake structure generated by cross-flow turbines. Experiments were carried out using a high-resolution turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. The ALM predicts turbine loading with the blade element method combined with sub-models for dynamic stall and flow curvature. The open-source software is written as an extension library for the OpenFOAM CFD package, which allows the ALM body force to be applied to its standard RANS and LES solvers. Turbine forcing is also applied to volume of fluid (VOF) models, e.g., for predicting free surface effects on submerged MHK devices.

  16. Simulation study of axial ultrasound transmission in heterogeneous cortical bone model

    Science.gov (United States)

    Takano, Koki; Nagatani, Yoshiki; Matsukawa, Mami

    2017-07-01

    Ultrasound propagation in heterogeneous cortical bone was studied. Using a bovine radius, the longitudinal wave velocity distribution in the axial direction was experimentally measured in the MHz range. Bilinear interpolation and piecewise cubic Hermite interpolation methods were applied to create a three-dimensional (3D) precise velocity model of the bone from the experimental data. By assuming uniaxial anisotropy of the bone, the distributions of all elastic moduli of a 3D heterogeneous model were estimated. The elastic finite-difference time-domain method was used to simulate axial ultrasonic wave propagation. Wave propagation in the initial model was compared with that in a thinner model, where the inner part of the cortical bone model was removed. The wave front of the first arriving signal (FAS) depended slightly on the heterogeneity of each model. Owing to the decrease in bone thickness, the propagation behavior also changed and the FAS velocity clearly decreased.
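
    The first-arriving-signal (FAS) velocity referred to above is typically estimated from the time-of-flight between two axially spaced receivers. The sketch below uses a simple amplitude threshold to pick arrivals, which is an assumed simplification of the detection schemes used in such studies.

```python
def first_arrival(times, waveform, threshold):
    """Return the earliest time at which |amplitude| reaches the threshold."""
    for t, a in zip(times, waveform):
        if abs(a) >= threshold:
            return t
    raise ValueError("no arrival above threshold")

def fas_velocity(times, wave_near, wave_far, spacing_m, threshold=0.05):
    """FAS velocity (m/s) from waveforms recorded at two receivers
    `spacing_m` apart along the bone axis; times in seconds."""
    dt = (first_arrival(times, wave_far, threshold)
          - first_arrival(times, wave_near, threshold))
    return spacing_m / dt
```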

  17. Theoretical model simulations for the global Thermospheric Mapping Study (TMS) periods

    Science.gov (United States)

    Rees, D.; Fuller-Rowell, T. J.

    Theoretical and semiempirical models of the solar UV/EUV and of the geomagnetic driving forces affecting the terrestrial mesosphere and thermosphere have been used to generate a series of representative numerical time-dependent and global models of the thermosphere, for the range of solar and geomagnetic activity levels which occurred during the three Thermospheric Mapping Study periods. The simulations obtained from these numerical models are compared with observations, and with the results of semiempirical models of the thermosphere. The theoretical models provide a record of the magnitude of the major driving forces which affected the thermosphere during the study periods, and a baseline against which the actual observed structure and dynamics can be compared.

  18. Clinical prediction in defined populations: a simulation study investigating when and how to aggregate existing models

    Directory of Open Access Journals (Sweden)

    Glen P. Martin

    2017-01-01

    Full Text Available Abstract Background Clinical prediction models (CPMs are increasingly deployed to support healthcare decisions but they are derived inconsistently, in part due to limited data. An emerging alternative is to aggregate existing CPMs developed for similar settings and outcomes. This simulation study aimed to investigate the impact of between-population-heterogeneity and sample size on aggregating existing CPMs in a defined population, compared with developing a model de novo. Methods Simulations were designed to mimic a scenario in which multiple CPMs for a binary outcome had been derived in distinct, heterogeneous populations, with potentially different predictors available in each. We then generated a new ‘local’ population and compared the performance of CPMs developed for this population by aggregation, using stacked regression, principal component analysis or partial least squares, with redevelopment from scratch using backwards selection and penalised regression. Results While redevelopment approaches resulted in models that were miscalibrated for local datasets of less than 500 observations, model aggregation methods were well calibrated across all simulation scenarios. When the size of local data was less than 1000 observations and between-population-heterogeneity was small, aggregating existing CPMs gave better discrimination and had the lowest mean square error in the predicted risks compared with deriving a new model. Conversely, given greater than 1000 observations and significant between-population-heterogeneity, then redevelopment outperformed the aggregation approaches. In all other scenarios, both aggregation and de novo derivation resulted in similar predictive performance. Conclusion This study demonstrates a pragmatic approach to contextualising CPMs to defined populations. When aiming to develop models in defined populations, modellers should consider existing CPMs, with aggregation approaches being a suitable modelling
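
    A minimal version of the stacked-regression aggregation examined in this study can be sketched as follows: regress the local outcome on the logit-transformed risks produced by the existing CPMs. The helper below is an illustration using scikit-learn's ordinary logistic regression; the study's implementation details (e.g., any constraints on the stacking weights) are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def aggregate_cpms(existing_probs, y_local):
    """existing_probs: (n, k) array of risks predicted by k existing CPMs
    on the local data; y_local: (n,) binary outcomes. Returns a fitted
    stacking model whose coefficients weight the existing CPMs."""
    p = np.clip(existing_probs, 1e-6, 1 - 1e-6)   # guard against 0/1 risks
    logits = np.log(p / (1.0 - p))                # stack on the logit scale
    return LogisticRegression().fit(logits, y_local)

# Predicted risks for new patients: stacker.predict_proba(new_logits)[:, 1]
```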

  19. Modeling and Simulation Optimization and Feasibility Studies for the Neutron Detection without Helium-3 Project

    Energy Technology Data Exchange (ETDEWEB)

    Ely, James H.; Siciliano, Edward R.; Swinhoe, Martyn T.; Lintereur, Azaree T.

    2013-01-01

    This report details the results of the modeling and simulation work accomplished for the ‘Neutron Detection without Helium-3’ project during the 2011 and 2012 fiscal years. The primary focus of the project is to investigate commercially available technologies that might be used in safeguards applications in the relatively near term. Other technologies that are being developed may be more applicable in the future, but are outside the scope of this study.

  20. A participative and facilitative conceptual modelling framework for discrete event simulation studies in healthcare

    OpenAIRE

    Kotiadis, Kathy; Tako, Antuela; Vasilakis, Christos

    2014-01-01

    Existing approaches to conceptual modelling (CM) in discrete-event simulation do not formally support the participation of a group of stakeholders. Simulation in healthcare can benefit from stakeholder participation as it makes possible to share multiple views and tacit knowledge from different parts of the system. We put forward a framework tailored to healthcare that supports the interaction of simulation modellers with a group of stakeholders to arrive at a common conceptual model. The fra...

  1. Simulation in Complex Modelling

    DEFF Research Database (Denmark)

    Nicholas, Paul; Ramsgaard Thomsen, Mette; Tamke, Martin

    2017-01-01

    This paper will discuss the role of simulation in extended architectural design modelling. As a framing paper, the aim is to present and discuss the role of integrated design simulation and feedback between design and simulation in a series of projects under the Complex Modelling framework. Complex...... performance, engage with high degrees of interdependency and allow the emergence of design agency and feedback between the multiple scales of architectural construction. This paper presents examples for integrated design simulation from a series of projects including Lace Wall, A Bridge Too Far and Inflated...... Restraint developed for the research exhibition Complex Modelling, Meldahls Smedie Gallery, Copenhagen in 2016. Where the direct project aims and outcomes have been reported elsewhere, the aim for this paper is to discuss overarching strategies for working with design integrated simulation....

  2. Simulation models: a currently indispensable tool in studies of the water-soil-plant-atmosphere continuum

    International Nuclear Information System (INIS)

    Lopez Seijas, Teresa; Gonzalez, Felicita; Cid, G.; Osorio, Maria de los A.; Ruiz, Maria Elena

    2008-01-01

    Full text: This work assesses the current use of simulation models as a useful and indispensable tool for advancing research on the processes related to the water-soil-plant-atmosphere continuum. In recent years, many studies have been reported in the literature in which these modeling tools are used to support the decision-making process of companies or organizations in the agricultural sphere, in particular for the design of optimal irrigation and fertilization management strategies for crops. It summarizes some of the latest reported applications of water and solute transfer simulation models, mainly to nitrate leaching and groundwater contamination problems. It also summarizes important applications of crop growth simulation models for predicting the effects of different water stress conditions on yield, and finally some other applications on the management of different irrigation technologies such as center pivots, surface irrigation and drip irrigation. The main work carried out in Cuba is also reviewed. (author)

  3. Scientific Modeling and simulations

    CERN Document Server

    Diaz de la Rubia, Tomás

    2009-01-01

    Showcases the conceptual advantages of modeling which, coupled with unprecedented computing power through simulations, allows scientists to tackle the formidable problems of our society, such as the search for hydrocarbons, understanding the structure of a virus, or the intersection between simulations and real data in extreme environments

  4. Predictor characteristics necessary for building a clinically useful risk prediction model: a simulation study

    Directory of Open Access Journals (Sweden)

    Laura Schummers

    2016-09-01

    Full Text Available Abstract Background Compelled by the intuitive appeal of predicting each individual patient’s risk of an outcome, there is a growing interest in risk prediction models. While the statistical methods used to build prediction models are increasingly well understood, the literature offers little insight to researchers seeking to gauge a priori whether a prediction model is likely to perform well for their particular research question. The objective of this study was to inform the development of new risk prediction models by evaluating model performance under a wide range of predictor characteristics. Methods Data from all births to overweight or obese women in British Columbia, Canada from 2004 to 2012 (n = 75,225) were used to build a risk prediction model for preeclampsia. The data were then augmented with simulated predictors of the outcome with pre-set prevalence values and univariable odds ratios. We built 120 risk prediction models that included known demographic and clinical predictors, and one, three, or five of the simulated variables. Finally, we evaluated standard model performance criteria (discrimination, risk stratification capacity, calibration, and Nagelkerke’s r2) for each model. Results Findings from our models built with simulated predictors demonstrated the predictor characteristics required for a risk prediction model to adequately discriminate cases from non-cases and to adequately classify patients into clinically distinct risk groups. Several predictor characteristics can yield well-performing risk prediction models; however, these characteristics are not typical of predictor-outcome relationships in many population-based or clinical data sets. Novel predictors must be both strongly associated with the outcome and prevalent in the population to be useful for clinical prediction modeling (e.g., one predictor with prevalence ≥20 % and odds ratio ≥8, or 3 predictors with prevalence ≥10 % and odds ratios ≥4). Area...
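
    The augmentation step described in the Methods, adding simulated binary predictors with pre-set prevalence and univariable odds ratios, can be sketched as below. The intercept calculation and all parameter values are illustrative assumptions, not the study's simulation design.

```python
import numpy as np

def add_simulated_predictor(n, prevalence, odds_ratio, base_risk, rng):
    """Generate a binary predictor x with the given prevalence and a binary
    outcome y whose odds are multiplied by `odds_ratio` when x = 1."""
    x = rng.random(n) < prevalence
    beta = np.log(odds_ratio)
    # intercept chosen so the unexposed group has roughly base_risk
    alpha = np.log(base_risk / (1.0 - base_risk))
    p = 1.0 / (1.0 + np.exp(-(alpha + beta * x)))
    y = rng.random(n) < p
    return x.astype(int), y.astype(int)

# e.g. a predictor with 20 % prevalence and OR = 8 in a cohort of 75,225:
rng = np.random.default_rng(0)
x, y = add_simulated_predictor(75_225, prevalence=0.2, odds_ratio=8.0,
                               base_risk=0.05, rng=rng)
```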

  5. A Study on Bipedal and Mobile Robot Behavior Through Modeling and Simulation

    Directory of Open Access Journals (Sweden)

    Nirmala Nirmala

    2015-05-01

    Full Text Available The purpose of this work is to study and analyze mobile robot behavior. In performing this, a framework is adopted and developed for mobile and bipedal robot. The robots are design, build, and run as proceed from the development of mechanical structure, electronics and control integration, and control software application. The behavior of those robots are difficult to be observed and analyzed qualitatively. To evaluate the design and behavior quality, modeling and simulation of robot structure and its task capability is performed. The stepwise procedure to robot behavior study is explained. Behavior cases study are experimented to bipedal robots, transporter robot and Autonomous Guided Vehicle (AGV developed at our institution. The experimentation are conducted on those robots by adjusting their dynamic properties and/or surrounding environment. Validation is performed by comparing the simulation result and the real robot execution. The simulation gives a more idealistic behavior execution rather than realistic one. Adjustments are performed to fine tuning simulation's parameters to provide a more realistic performance.

  6. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model’s relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model’s general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  7. JPL Thermal Design Modeling Philosophy and NASA-STD-7009 Standard for Models and Simulations - A Case Study

    Science.gov (United States)

    Avila, Arturo

    2011-01-01

    Standard JPL thermal engineering practice prescribes worst-case methodologies for design. In this process, environmental and key uncertain thermal parameters (e.g., thermal blanket performance, interface conductance, optical properties) are stacked in a worst-case fashion to yield the most hot- or cold-biased temperatures. These simulations thus represent the upper and lower bounds, which effectively constitutes the JPL thermal design margin philosophy. Uncertainty in the margins and the absolute temperatures is usually estimated by sensitivity analyses and/or by comparing the worst-case results with "expected" results. Applicability of the analytical model for specific design purposes, along with any temperature requirement violations, is documented in peer and project design review material. In 2008, NASA released NASA-STD-7009, Standard for Models and Simulations. The scope of this standard covers the development and maintenance of models, the operation of simulations, the analysis of the results, training, recommended practices, the assessment of Modeling and Simulation (M&S) credibility, and the reporting of M&S results. The Mars Exploration Rover (MER) project thermal control system M&S activity was chosen as a case study for determining whether JPL practice is in line with the standard and for identifying areas of non-compliance. This paper summarizes the results and makes recommendations regarding the application of this standard to JPL thermal M&S practices.
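
    A minimal toy example of such worst-case stacking (not JPL's tools): evaluate a one-node radiative balance at every corner of the uncertain parameter ranges and take the extremes; the parameter names and bounds below are invented for illustration:

```python
from itertools import product

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

# Hypothetical uncertainty ranges for a one-node radiative balance:
# T = (Q / (eps * SIGMA * A)) ** 0.25
bounds = {
    "Q":   (80.0, 120.0),   # absorbed power, W
    "eps": (0.75, 0.90),    # effective emissivity (blanket performance)
    "A":   (0.95, 1.05),    # radiating area, m^2
}

# Stack every uncertain parameter at its extremes and keep the bounds
temps = [(q / (eps * SIGMA * a)) ** 0.25
         for q, eps, a in product(*bounds.values())]
print(f"worst-case cold: {min(temps):.1f} K, worst-case hot: {max(temps):.1f} K")
```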

  8. Automated Simulation Model Generation

    NARCIS (Netherlands)

    Huang, Y.

    2013-01-01

    One of today's challenges in the field of modeling and simulation is to model increasingly larger and more complex systems. Complex models take a long time to develop and incur high costs. With the advances in data collection technologies and the more popular use of computer-aided systems, more data has become available.

  9. Preliminary subsonic aerodynamic model for simulation studies of the HL-20 lifting body

    Science.gov (United States)

    Jackson, E. Bruce; Cruz, Christopher I.

    1992-01-01

    A nonlinear, six-degree-of-freedom aerodynamic model for an early version of the HL-20 lifting body is described and compared with wind tunnel data upon which it is based. Polynomial functions describing most of the aerodynamic parameters are given and tables of these functions are presented. Techniques used to arrive at these functions are described. Basic aerodynamic coefficients were modeled as functions of angles of attack and sideslip. Vehicle lateral symmetry was assumed. Compressibility (Mach) effects were ignored. Control-surface effectiveness was assumed to vary linearly with angle of deflection and was assumed to be invariant with the angle of sideslip. Dynamic derivatives were obtained from predictive aerodynamic codes. Landing-gear and ground effects were scaled from Space Shuttle data. The model described is provided to support pilot-in-the-loop simulation studies of the HL-20. By providing the data in tabular format, the model is suitable for the data interpolation architecture of many existing engineering simulation facilities. Because of the preliminary nature of the data, however, this model is not recommended for study of the absolute performance of the HL-20.

  10. Study on driver model for hybrid truck based on driving simulator experimental results

    Directory of Open Access Journals (Sweden)

    Dam Hoang Phuc

    2018-04-01

    Full Text Available In this paper, a proposed car-following driver model, incorporating features of both the compensatory and anticipatory models of human pedal operation, is verified by driving simulator experiments with several real drivers. Comparison of computer simulations, performed with the identified model parameters, against the experimental results confirms the correctness of this mathematical driver model and the identified parameters. The driver model is then coupled to a hybrid vehicle dynamics model, and moderate car-following maneuver simulations with various driver parameters are conducted to investigate the influence of driver parameters on vehicle dynamic response and fuel economy. Finally, the major driver parameters involved in the longitudinal control of drivers are clarified. Keywords: Driver model, Driver-vehicle closed-loop system, Car following, Driving simulator, Hybrid electric vehicle

  11. Numerical Studies of Thermal Conditions in Cities - Systematic Model Simulations of Idealized Urban Domains

    Science.gov (United States)

    Heene, V.; Buchholz, S.; Kossmann, M.

    2016-12-01

    Numerical studies of thermal conditions in cities based on model simulations of idealized urban domains are carried out to investigate how changes in the characteristics of urban areas influence street-level air temperatures. The simulated modifications of the urban characteristics represent possible adaptation measures for heat reduction in cities, which are commonly used in urban planning. Model simulations are performed with the thermodynamic version of the 3-dimensional micro-scale urban climate model MUKLIMO_3. The simulated idealized urban areas are designed in a simplistic way, i.e., defining homogeneous squared cities of one settlement type, without orography and centered in the model domain. To assess the impact of different adaptation measures, the characteristics of the urban areas have been systematically modified regarding building height, albedo of building roofs and impervious surfaces, fraction of impervious surfaces between buildings, and percentage of green roofs. To assess the impact of green and blue infrastructure in cities, different configurations of parks and lakes have been investigated, e.g., varying their size and distribution within the city. The experiments are performed for different combinations of typical German settlement types and surrounding rural types under the conditions of a typical summer day in July. The adaptation measures implemented in the experiments show different impacts for different settlement types, mainly due to differences in building density, building height, or impervious surface fraction. Parks and lakes implemented as adaptation measures show strong potential to reduce daytime air temperature, with cooling effects on their built-up surroundings. At night, lakes generate both negative and positive effects on air temperature, depending on the water temperature. In general, all adaptation measures implemented in the experiments reveal different impacts on day and night air temperatures.

  12. Neuronal model with distributed delay: analysis and simulation study for gamma distribution memory kernel.

    Science.gov (United States)

    Karmeshu; Gupta, Varun; Kadambari, K V

    2011-06-01

    A single neuronal model incorporating distributed delay (memory) is proposed. The stochastic model has been formulated as a Stochastic Integro-Differential Equation (SIDE), which results in the underlying process being non-Markovian. A detailed analysis of the model when the distributed delay kernel has exponential form (weak delay) has been carried out. The selection of the exponential kernel has enabled the transformation of the non-Markovian model to a Markovian model in an extended state space. For the study of the First Passage Time (FPT) with an exponential delay kernel, the model has been transformed to a system of coupled Stochastic Differential Equations (SDEs) in two-dimensional state space. Simulation studies of the SDEs provide insight into the effect of the weak delay kernel on the Inter-Spike Interval (ISI) distribution. A measure based on the Jensen-Shannon divergence is proposed which can be used to make a choice between two competing models, viz. the distributed delay model vis-à-vis the LIF model. An interesting feature of the model is that the behavior of the coefficient of variation CV_ISI(t) of the ISI distribution with respect to the memory kernel time constant parameter η reveals that the neuron can switch from a bursting state to a non-bursting state as the noise intensity parameter changes. The membrane potential exhibits a decaying auto-correlation structure with or without damped oscillatory behavior, depending on the choice of parameters. This behavior is in agreement with the empirically observed pattern of spike counts in a fixed time window. The power spectral density derived from the auto-correlation function is found to exhibit single and double peaks. The model is also examined for the case of strong delay, with the memory kernel having the form of a Gamma distribution. In contrast to the fast decay of damped oscillations of the ISI distribution for the model with the weak delay kernel, the decay of damped oscillations is found to be slower for the model with the strong delay kernel.
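
    As a hedged sketch of the model-selection measure mentioned above, the snippet below computes the Jensen-Shannon divergence between two ISI histograms; the Gamma-distributed ISI samples stand in for outputs of the two competing models and are purely illustrative:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(1)

# Hypothetical ISI samples from two competing models (e.g., LIF vs
# distributed delay); shapes and scales are invented for the example.
isi_a = rng.gamma(shape=2.0, scale=10.0, size=5000)   # ms
isi_b = rng.gamma(shape=2.5, scale=8.0, size=5000)

# Common bin edges, then normalized histograms as discrete distributions
edges = np.histogram_bin_edges(np.concatenate([isi_a, isi_b]), bins=50)
p, _ = np.histogram(isi_a, bins=edges, density=True)
q, _ = np.histogram(isi_b, bins=edges, density=True)

# scipy returns the JS *distance* (the square root of the divergence)
jsd = jensenshannon(p, q, base=2) ** 2
print(f"Jensen-Shannon divergence: {jsd:.4f} bits")
```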

  13. AEGIS geologic simulation model

    International Nuclear Information System (INIS)

    Foley, M.G.

    1982-01-01

    The Geologic Simulation Model (GSM) is used by the AEGIS (Assessment of Effectiveness of Geologic Isolation Systems) program at the Pacific Northwest Laboratory to simulate the dynamic geology and hydrology of a geologic nuclear waste repository site over a million-year period following repository closure. The GSM helps to organize geologic/hydrologic data; to focus attention on active natural processes by requiring their simulation; and, through interactive simulation and calibration, to reduce subjective evaluations of the geologic system. During each computer run, the GSM produces a million-year geologic history that is possible for the region and the repository site. In addition, the GSM records in permanent history files everything that occurred during that time span. Statistical analyses of data in the history files of several hundred simulations are used to classify typical evolutionary paths, to establish the probabilities associated with deviations from the typical paths, and to determine which types of perturbations of the geologic/hydrologic system, if any, are most likely to occur. These simulations will be evaluated by geologists familiar with the repository region to determine the validity of the results. Perturbed systems that are determined to be the most realistic, within whatever probability limits are established, will be used for the analyses that involve radionuclide transport and dose models. The GSM is designed to be continuously refined and updated. Simulation models are site specific and, although the submodels may have limited general applicability, the input data requirements necessitate detailed characterization of each site before application.

  14. A Realistically Perturbed Atmosphere and Ocean De-Aliasing Model for Future Gravity Mission Simulation Studies

    Science.gov (United States)

    Dobslaw, Henryk; Forootan, Ehsan; Bergmann-Wolf, Inga; Neumayer, Karl-Hans; Mayer-Gürr, Torsten; Kusche, Jürgen; Flechtner, Frank

    2015-04-01

    Recently completed performance studies of future gravity mission concepts arrived at sometimes contradicting conclusions about the importance of non-tidal aliasing errors that remain in the finally retrieved gravity field time-series. In those studies, typically a fraction of the differences between two different models of atmosphere and ocean mass variability determined the magnitude of the aliasing errors. Since differences among arbitrary pairs of the numerical models available might lead to widely different aliasing errors, and thus to different conclusions regarding the limiting error contributors of a candidate mission, we present here for the first time a version of a realistically perturbed de-aliasing model that is consistent with the updated ESA Earth System Model for gravity mission simulation studies (Dobslaw et al., 2015). The error model is available over the whole 12-year period of the ESA ESM and consists of two parts: (i) a component containing signals from physical processes that are intentionally omitted from de-aliasing models, as, for example, variations in global eustatic sea-level; and (ii) a series of true errors that consists of in total five different components with realistically re-scaled variability at both small and large spatial scales for different frequency bands ranging from sub-daily to sub-monthly periods. Based on a multi-model ensemble of atmosphere and ocean mass variability available to us for the year 2006, we will demonstrate that our re-scaled true errors have plausible magnitudes and correlation characteristics in all frequency bands considered. The realism of the selected scaling coefficients for periods between 1 and 30 days is tested further by means of a variance component estimation based on the constrained daily GRACE solution series ITSG-GRACE2014. Initial full-scale simulation experiments are used to re-assess the relative importance of non-tidal de-aliasing errors for the GRACE-FO mission, which might be subsequently expanded to

  15. Theoretical modeling, simulation and experimental study of hybrid piezoelectric and electromagnetic energy harvester

    Directory of Open Access Journals (Sweden)

    Ping Li

    2018-03-01

    Full Text Available In this paper, the performance of a vibration energy harvester combining piezoelectric (PE) and electromagnetic (EM) mechanisms is studied by theoretical analysis, simulation, and experimental test. For the designed harvester, an electromechanical coupling model is established, and expressions for the vibration response, output voltage, current, and power are derived. Then, the performance of the harvester is simulated and tested; moreover, charging of a rechargeable battery is realized through the designed energy storage circuit. The results show that, compared with piezoelectric-only and electromagnetic-only energy harvesters, the hybrid energy harvester can enhance the output power and harvesting efficiency; furthermore, under harmonic excitation, the output power of the harvester increases linearly with increasing acceleration amplitude, while under random excitation it increases with increasing acceleration spectral density. In addition, the greater the coupling strength, the greater the output power, and there is an optimal load resistance at which the harvester outputs maximal power.

  16. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In philosophy of science, the interest in computational models and simulations has increased heavily during the past decades. Different positions regarding the validity of models have emerged, but these views have not succeeded in capturing the diversity of validation methods.

  17. Simulation study using 3-D wavefield modeling for oil and gas exploration; Sanjigen hadoba modeling wo mochiita sekiyu tanko no simulation study

    Energy Technology Data Exchange (ETDEWEB)

    Sato, T.; Matsuoka, T. [Japan Petroleum Exploration Corp., Tokyo (Japan); Saeki, T. [Japan National Oil Corp., Tokyo (Japan). Technology Research Center

    1997-05-27

    As the geological settings targeted by oil exploration grow more complicated, seismic survey methods have become 3-dimensional and, in this report, several models are examined using 3-dimensional simulation technology. The result obtained by the conventional wave tracking method differs from actual wavefields and is unrealistic. The difference method, among the full-wave modeling methods, demands an exorbitantly long computation time and high cost. A pseudospectral method has been developed which is superior to the difference method and has been put to practical use thanks to the advent of parallel computers. It is found that a 3-dimensional survey is mandatory for describing faults. After examining the SEG/EAGE Salt model, it is learned that the salt is well developed and that 3-dimensional depth migration is required for sub-salt exploration. It is also found, through simulation of the SEG/EAGE Overthrust model, which is an elastic model, that no quality records are available on thrust zones in complicated terrains. The records are poor in quality because the actually measured wavefield is treated as an acoustic wavefield when it is in fact an elastic wavefield. 1 refs., 18 figs., 2 tabs.

  18. Model continuity in discrete event simulation: A framework for model-driven development of simulation models

    NARCIS (Netherlands)

    Cetinkaya, D; Verbraeck, A.; Seck, MD

    2015-01-01

    Most of the well-known modeling and simulation (M&S) methodologies state the importance of conceptual modeling in simulation studies, and they suggest the use of conceptual models during the simulation model development process. However, only a limited number of methodologies refer to how to

  19. Estimation of muscle response using three-dimensional musculoskeletal models before impact situation: a simulation study.

    Science.gov (United States)

    Bae, Tae Soo; Loan, Peter; Choi, Kuiwon; Hong, Daehie; Mun, Mu Seong

    2010-12-01

    When car crash experiments are performed using cadavers or dummies, the active muscles' reaction to crash situations cannot be observed. The aim of this study is to estimate the response of the major muscle groups using three-dimensional musculoskeletal models through dynamic simulations of low-speed sled impacts. Three-dimensional musculoskeletal models of eight subjects were developed, including 241 degrees of freedom and 86 muscles. The muscle parameters, accounting for limb lengths and the force-generating properties of the muscles, were redefined by optimization to fit each subject. Kinematic data and external forces measured by a motion tracking system and a dynamometer were then input as boundary conditions. Through a least-squares optimization algorithm, active muscle responses were calculated during inverse dynamic analysis tracking the motion of each subject. Electromyography for the major muscles at the elbow, knee, and ankle joints was measured to validate each model. For the low-speed sled-impact crash, experiments and simulations with optimized and unoptimized muscle parameters were performed at 9.4 m/h and 10 m/h, and muscle activities were compared among them. The muscle activities with optimized parameters were closer to the experimental measurements than the results without optimization. In addition, considerable extensor muscle activity at the knee, ankle, and elbow joints was found at impact time, unlike in previous studies using cadavers or dummies. This study demonstrated the need to optimize muscle parameters to predict impact situations correctly in computational studies using musculoskeletal models. To improve the accuracy of car crash injury analysis using humanlike dummies, muscle reflex function and the response of the major extensor muscles at the elbow, knee, and ankle joints should be considered.
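
    The least-squares distribution of muscle forces used in such inverse dynamic analyses is often posed as a static optimization problem. Below is a hedged single-joint sketch (not the authors' implementation): activations in [0, 1] minimize a squared-activation cost subject to reproducing a required joint torque; the maximal forces, moment arms, and torque are invented values:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical extensor group crossing one joint: maximal isometric
# forces (N) and moment arms (m); the required torque would come from
# the inverse dynamics solution at the sampled instant.
f_max = np.array([1200.0, 800.0, 500.0])
r = np.array([0.05, 0.04, 0.03])
torque_required = 60.0  # N*m

def cost(a):
    return np.sum(a ** 2)          # least-squares activation criterion

constraints = {"type": "eq",
               "fun": lambda a: a @ (f_max * r) - torque_required}
res = minimize(cost, x0=np.full(3, 0.5),
               bounds=[(0.0, 1.0)] * 3, constraints=constraints)
print("muscle activations:", np.round(res.x, 3))
```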

  20. Modern modelling techniques are data hungry: a simulation study for predicting dichotomous endpoints.

    Science.gov (United States)

    van der Ploeg, Tjeerd; Austin, Peter C; Steyerberg, Ewout W

    2014-12-22

    Modern modelling techniques may potentially provide more accurate predictions of binary outcomes than classical techniques. We aimed to study the predictive performance of different modelling techniques in relation to the effective sample size ("data hungriness"). We performed simulation studies based on three clinical cohorts: 1282 patients with head and neck cancer (with 46.9% 5-year survival), 1731 patients with traumatic brain injury (22.3% 6-month mortality) and 3181 patients with minor head injury (7.6% with CT scan abnormalities). We compared three relatively modern modelling techniques: support vector machines (SVM), neural nets (NN), and random forests (RF), and two classical techniques: logistic regression (LR) and classification and regression trees (CART). We created three large artificial databases with 20-fold, 10-fold and 6-fold replication of subjects, where we generated dichotomous outcomes according to different underlying models. We applied each modelling technique to increasingly larger development parts (100 repetitions). The area under the ROC curve (AUC) indicated the performance of each model in the development part and in an independent validation part. Data hungriness was defined by plateauing of the AUC and small optimism (the difference between the mean apparent AUC and the mean validated AUC). The RF, SVM and NN models showed instability and high optimism even with >200 events per variable. Modern modelling techniques such as SVM, NN and RF may need over 10 times as many events per variable as classical modelling techniques such as LR to achieve a stable AUC and small optimism. This implies that such modern techniques should only be used in medical prediction problems if very large data sets are available.
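
    A hedged sketch of the apparent-versus-validated AUC comparison underlying the "optimism" criterion, using synthetic data rather than the study's cohorts; the sample size, predictor effects, and split fraction are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, p = 2000, 10                       # assumed cohort size and predictor count
X = rng.normal(size=(n, p))
logit = X[:, :4] @ np.array([0.8, 0.6, 0.4, 0.2])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.5,
                                              random_state=0)

for name, model in [("LR", LogisticRegression(max_iter=1000)),
                    ("RF", RandomForestClassifier(n_estimators=200,
                                                  random_state=0))]:
    model.fit(X_dev, y_dev)
    apparent = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])
    validated = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    print(f"{name}: apparent {apparent:.3f}, validated {validated:.3f}, "
          f"optimism {apparent - validated:.3f}")
```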

  1. Computer simulation modeling of recreation use: Current status, case studies, and future directions

    Science.gov (United States)

    David N. Cole

    2005-01-01

    This report compiles information about recent progress in the application of computer simulation modeling to planning and management of recreation use, particularly in parks and wilderness. Early modeling efforts are described in a chapter that provides an historical perspective. Another chapter provides an overview of modeling options, common data input requirements,...

  2. Simulation and Experimental Studies of Jamming for Model Two-Dimensional Particles Under Flow

    Science.gov (United States)

    Guariguata, A.; Wu, D. T.; Koh, C. A.; Sum, A. K.; Sloan, E. D.

    2009-06-01

    Jamming and plugging of flowlines with gas hydrates is the most critical issue in the flow assurance of oil and gas production lines. Because solid hydrate particles are often suspended in a fluid, the pipeline jamming and flow constriction formed by hydrates depend not only on particle/wall properties, such as friction, binding forces and mechanical characteristics, but also on the concentration of particles upstream of the restriction, flow velocity, fluid viscosity, and forces between the particles. Therefore, to gain insight into the jamming phenomena, both experiments and computer simulations on two-dimensional model systems have been carried out to characterize the flow of particles in a channel, with the eventual goal of applying that knowledge to gas hydrates jamming. Using the simulation software PFC2d®, we studied the effect of restriction geometry and flow velocity on the jamming process of particles. Results from the simulations were compared to experimental measurements on polyethylene discs floating on water flowing in an open channel.

  3. Physical and Numerical Model Studies of Cross-flow Turbines Towards Accurate Parameterization in Array Simulations

    Science.gov (United States)

    Wosnik, M.; Bachant, P.

    2014-12-01

    Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today (a common model is the actuator disk concept) are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal-axis, devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier-Stokes simulations to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation.

  4. PERFORMANCE STUDIES OF INTEGRATED FUZZY LOGIC CONTROLLER FOR BRUSHLESS DC MOTOR DRIVES USING ADVANCED SIMULATION MODEL

    Directory of Open Access Journals (Sweden)

    C. Subba Rami Reddy

    2011-07-01

    Full Text Available This paper introduces an Integrated Fuzzy Logic Controller (IFLC) for brushless dc (BLDC) motor drives using an advanced simulation model and presents a comparative study of the performance of a PID controller and the IFLC. The dynamic characteristics of speed and torque are effectively monitored and analyzed using the proposed model. The aim of the IFLC is to obtain improved performance, in terms of disturbance rejection or parameter variation, over that obtained using the PID controller. The IFLC is constructed using a fuzzy logic controller (FLC) and a PID controller. A performance comparison of the controllers is also given based on the integral of the absolute value of the error (IAE), the integral of the squared error (ISE), the integral of the time-weighted absolute error (ITAE) and the integral of the time-weighted squared error (ITSE). The results show the effectiveness of the proposed controller.

  5. Comprehensive study on parameter sensitivity for flow and nutrient modeling in the Hydrological Simulation Program Fortran model.

    Science.gov (United States)

    Luo, Chuan; Li, Zhaofu; Wu, Min; Jiang, Kaixia; Chen, Xiaomin; Li, Hengpeng

    2017-09-01

    Numerous parameters are used to construct the HSPF (Hydrological Simulation Program Fortran) model, which results in significant difficulty in calibrating the model. Parameter sensitivity analysis is an efficient method to identify important model parameters. Through this method, a model's calibration process can be simplified on the basis of understanding the model's structure. This study investigated the sensitivity of the flow and nutrient parameters of HSPF using the DSA (differential sensitivity analysis) method in the Xitiaoxi watershed, China. The results showed that flow was mostly affected by parameters related to groundwater and evapotranspiration, including DEEPFR (fraction of groundwater inflow to deep recharge), LZETP (lower-zone evapotranspiration parameter), and AGWRC (base groundwater recession), and most of the sensitive parameters had negative and nonlinear effects on flow. Additionally, nutrient components were commonly affected by parameters from land processes, including MON-SQOLIM (monthly values limiting storage of water quality in overland flow), MON-ACCUM (monthly values of accumulation), MON-IFLW-CONC (monthly concentration of water quality in interflow), and MON-GRND-CONC (monthly concentration of water quality in active groundwater). Besides, among parameters from river systems, KATM20 (unit oxidation rate of total ammonia at 20 °C) had a negative and almost linear effect on ammonia concentration, and MALGR (maximal unit algal growth rate for phytoplankton) had a negative and nonlinear effect on ammonia and orthophosphate concentrations. After calibrating these sensitive parameters, our model performed well for simulating flow and nutrient outputs, with R² and E_NS (Nash-Sutcliffe efficiency) both greater than 0.75 for flow and greater than 0.5 for nutrient components. This study is expected to serve as a valuable complement to the documentation of the HSPF model to help users identify key parameters and provide a reference for performing
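
    The DSA method referred to above is essentially a one-at-a-time perturbation of each parameter around its base value. A minimal sketch, with run_model standing in for an HSPF run and invented base values for three of the named parameters:

```python
def run_model(params):
    """Stand-in for an HSPF run; returns one scalar output (mean flow)."""
    return (100.0 * (1.0 - params["DEEPFR"])
            * (1.0 - 0.5 * params["LZETP"]) * params["AGWRC"])

base = {"DEEPFR": 0.1, "LZETP": 0.4, "AGWRC": 0.96}  # invented base values
y0 = run_model(base)

# Differential (one-at-a-time) sensitivity: relative output change for a
# +/-10 % relative change in each parameter, the others held at base.
for name in base:
    for rel in (-0.10, 0.10):
        perturbed = dict(base, **{name: base[name] * (1.0 + rel)})
        dy = (run_model(perturbed) - y0) / y0
        print(f"{name} {rel:+.0%}: relative flow change {dy:+.2%}")
```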

  6. PSH Transient Simulation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Muljadi, Eduard [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-12-21

    PSH Transient Simulation Modeling presentation from the WPTO FY14 - FY16 Peer Review. Transient effects are an important consideration when designing a PSH system, yet numerical techniques for hydraulic transient analysis still need improvements for adjustable-speed (AS) reversible pump-turbine applications.

  7. Wake modeling and simulation

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Madsen Aagaard, Helge; Larsen, Torben J.

    We present a consistent, physically based theory for the wake meandering phenomenon, which we consider of crucial importance for the overall description of wind turbine loadings in wind farms. In its present version the model is confined to single wake situations. The model philosophy does, however... The methodology has been implemented in the aeroelastic code HAWC2, and example simulations of wake situations from the small Tjæreborg wind farm have been performed, showing satisfactory agreement between predictions and measurements.

  8. Ball bearing defect models: A study of simulated and experimental fault signatures

    Science.gov (United States)

    Mishra, C.; Samantaray, A. K.; Chakraborty, G.

    2017-07-01

    A numerical-model-based virtual prototype of a system can serve as a tool to generate huge amounts of data, replacing the dependence on expensive and often difficult-to-conduct experiments. However, the model must be accurate enough to substitute for the experiments. The abstraction level and the details considered during model development depend on the purpose for which the simulated data are to be generated. This article concerns the development of simulation models for deep groove ball bearings, which are used in a variety of rotating machinery. The purpose of the model is to generate vibration signatures which usually contain features of bearing defects. Three different models with increasing levels of complexity are considered: a bearing-kinematics-based planar motion block diagram model developed in MATLAB Simulink, which does not explicitly consider cage and traction dynamics; a planar motion model with cage, traction and contact dynamics developed using the multi-energy-domain bond graph formalism in the SYMBOLS software; and a detailed spatial multi-body dynamics model with complex contact and traction mechanics developed using the ADAMS software. Experiments are conducted using a Spectra Quest machine fault simulator with different prefabricated faulted bearings. The frequency domain characteristics of the simulated and experimental vibration signals for different bearing faults are compared, and conclusions are drawn regarding the usefulness of the developed models.
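
    Simulated bearing fault signatures are normally checked against the classical kinematic defect frequencies. As a hedged illustration (the geometry values are invented, not those of the article's test bearings):

```python
import math

def bearing_fault_frequencies(shaft_hz, n_balls, ball_d, pitch_d,
                              contact_deg=0.0):
    """Classical kinematic defect frequencies for a rolling-element bearing."""
    r = ball_d / pitch_d * math.cos(math.radians(contact_deg))
    bpfo = n_balls / 2 * shaft_hz * (1 - r)        # outer-race defect
    bpfi = n_balls / 2 * shaft_hz * (1 + r)        # inner-race defect
    bsf = pitch_d / (2 * ball_d) * shaft_hz * (1 - r ** 2)  # ball spin
    ftf = shaft_hz / 2 * (1 - r)                   # cage (fundamental train)
    return {"BPFO": bpfo, "BPFI": bpfi, "BSF": bsf, "FTF": ftf}

# Hypothetical deep groove ball bearing geometry at 29 Hz shaft speed
print(bearing_fault_frequencies(shaft_hz=29.0, n_balls=9,
                                ball_d=7.94, pitch_d=39.04))
```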

  9. Simulation and modeling of the powder diffraction pattern from nanoparticles: Studying the influence of surface strain

    Science.gov (United States)

    Beyerlein, Kenneth Roy

    Nanostructured materials are currently at the forefront of nearly every emerging industry, as they offer promising solutions to problems ranging from those facing energy technologies to those concerning the structural integrity of materials. With all of these future applications, it is crucial that methods are developed which can offer accurate and statistically reliable characterization of these materials in a reasonable amount of time. X-ray diffraction is one such method which is already widely available and can offer further insight into the atomic structure, as well as the microstructure, of nanomaterials. This thesis work is then focused on investigating how different structural features of nanoparticles influence the line profiles of the x-ray powder diffraction pattern. Due to their extremely small size, the contribution from crystallite size broadening becomes the dominating feature in an observed diffraction peak. Therefore, the theory of size broadening was critically reviewed concerning the considerations necessary when the crystallite size approaches a few nanometers. Furthermore, the analysis of synthesized shape-controlled platinum nanoparticles was carried out using a developed line profile analysis routine, based on the Debye function analysis (DFA) approach, to determine the distribution of particle size and shape in the sample. The DFA method is based on the use of atomistic models to simulate the features in the powder diffraction pattern. The atomistic descriptions of molecular dynamics simulations were coupled with this approach, allowing for further understanding of the pattern from nanoparticles. The techniques were developed to study how lattice dynamics, and the resulting thermal diffuse scattering, are affected by the small crystallite domains. Furthermore, the relaxation of structural models for nanoparticles by MD simulations allowed for the assessment of features which are present in the powder pattern as a result of a strain

  10. Learning nursing through simulation: A case study approach towards an expansive model of learning.

    Science.gov (United States)

    Berragan, Liz

    2014-08-01

    This study explores the impact of simulation upon learning for undergraduate nursing students. The study objectives were (a) to explore the experiences of participating in simulation education for a small group of student nurses; and (b) to explore learning through simulation from the perspectives of the nursing students, the nurse educators and the nurse mentors. Conducted as a small-scale narrative case study, it tells the unique stories of a small number of undergraduate nursing students, nurse mentors and nurse educators and explores their experiences of learning through simulation. Data analysis through progressive focusing revealed that the nurse educators viewed simulation as a means of helping students to learn to be nurses, whilst, the nurse mentors suggested that simulation helped them to determine nursing potential. The students' narratives showed that they approached simulation learning in different ways resulting in a range of outcomes: those who were successfully becoming nurses, those who were struggling or working hard to become nurses and those who were not becoming nurses. Theories of professional practice learning and activity theory present an opportunity to articulate and theorise the learning inherent in simulation activities. They recognise the links between learning and the environment of work and highlight the possibilities for learning to inspire change and innovation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Randomized Crossover Study of Training Benefits of High Fidelity ECMO Simulation versus Porcine Animal Model An Interim Report

    Science.gov (United States)

    2017-02-25

    Randomized Crossover Study of Training Benefits of High-Fidelity ECMO Simulation versus Porcine Animal Model - An Interim Report, presented at an ECMO and advanced therapies conference.

  12. A Software Toolkit to Study Systematic Uncertainties of the Physics Models of the Geant4 Simulation Package

    CERN Document Server

    Genser, Krzysztof; Perdue, Gabriel; Wenzel, Hans; Yarba, Julia; Kelsey, Michael; Wright, Dennis H

    2016-01-01

    The Geant4 toolkit is used to model interactions between particles and matter. Geant4 employs a set of validated physics models that span a wide range of interaction energies. These models are tuned to cover a large variety of possible applications. This raises the critical question of what uncertainties are associated with the Geant4 physics model, or group of models, involved in a simulation project. To address the challenge, we have designed and implemented a comprehensive, modular, user-friendly software toolkit that allows the variation of one or more parameters of one or more Geant4 physics models involved in simulation studies. It also enables analysis of multiple variants of the resulting physics observables of interest in order to estimate the uncertainties associated with the simulation model choices. Key functionalities of the toolkit are presented in this paper and are illustrated with selected results.

  13. Low resolution brain electromagnetic tomography in a realistic geometry head model: a simulation study

    International Nuclear Information System (INIS)

    Ding Lei; Lai Yuan; He Bin

    2005-01-01

    It is of importance to localize neural sources from scalp-recorded EEG. Low resolution brain electromagnetic tomography (LORETA) has received considerable attention for localizing brain electrical sources. However, most such efforts have used spherical head models to represent the head volume conductor. Investigation of the performance of LORETA in a realistic geometry head model, as compared with the spherical model, will provide useful information to guide the interpretation of data obtained using the spherical head model. The performance of LORETA was evaluated by means of computer simulations. The boundary element method was used to solve the forward problem. A three-shell realistic geometry (RG) head model was constructed from MRI scans of a human subject. Dipole source configurations of a single dipole located at different regions of the brain with varying depth were used to assess the performance of LORETA in different regions of the brain. A three-sphere head model was also used to approximate the RG head model, similar simulations were performed, and the results were compared with those of the RG-LORETA with reference to the locations of the simulated sources. Multi-source localizations were discussed and examples given in the RG head model. Localization errors employing the spherical LORETA, with reference to the source locations within the realistic geometry head, were about 20-30 mm for the four brain regions evaluated: frontal, parietal, temporal and occipital. Localization errors employing the RG head model were about 10 mm over the same four brain regions. The present simulation results suggest that the use of the RG head model reduces the localization error of LORETA, and that the RG head model based LORETA is desirable if high localization accuracy is needed.

  14. A Case Study of Low-Level Jets in Yerevan Simulated by the WRF Model

    Science.gov (United States)

    Gevorgyan, Artur

    2018-01-01

    Capabilities of high-resolution (3 km) Weather Research and Forecasting (WRF) simulations to reproduce topographically induced mountain-valley winds and low-level jets (LLJs) in Yerevan have been evaluated using high-frequency observational and modeled data. High sensitivity of the simulated near-surface winds and LLJ characteristics observed on 4 July 2015 to both the planetary boundary layer scheme and the initial and lateral boundary condition setup has been demonstrated. Among the nine tested planetary boundary layer (PBL) parameterization schemes, the MYJ, QNSE, and TEMF schemes showed greater skill in simulating near-surface valley winds over Yerevan, while the other PBL schemes tend to significantly underestimate the strength of valley winds, with the BouLac PBL scheme being the worst performer. Most of the PBL schemes simulate well-defined LLJs in Yerevan associated with evening valley winds. The simulated jet cores are mostly located between 150 and 250 m above ground, with magnitudes varying from 12 to 21 m/s. However, the intensity of the observed nocturnal LLJ in Yerevan (located at 110 m above ground) is strongly underestimated by most of the WRF runs, while the Shin and Hong and YSU PBL schemes simulate nocturnal LLJs higher than the observed LLJ. The WRF runs initiated with the newly released European Centre for Medium-Range Weather Forecasts ERA-5 data set showed improved simulation of near-surface winds and nighttime potential temperatures in Yerevan relative to those forced by Global Forecast System fields.

  15. Simulation - modeling - experiment

    International Nuclear Information System (INIS)

    2004-01-01

    After two workshops held in 2001 on the same topics, and in order to take stock of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, and the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop, dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advanced status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); perturbation methods and sensitivity analysis of nuclear data in reactor physics, application to the VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F

  16. A simplified heat pump model for use in solar plus heat pump system simulation studies

    DEFF Research Database (Denmark)

    Perers, Bengt; Andersen, Elsa; Nordman, Roger

    2012-01-01

    Solar plus heat pump systems are often very complex in design, sometimes with special heat pump arrangements and control. Detailed heat pump models can therefore make system simulations very slow, while still not reproducing real heat pump performance in a system very accurately.

  17. Benchmark simulation model no 2: general protocol and exploratory case studies

    DEFF Research Database (Denmark)

    Jeppsson, U.; Pons, M.N.; Nopens, I.

    2007-01-01

    and digester models, the included temperature dependencies and the reject water storage. BSM2-implementations are now available in a wide range of simulation platforms and a ring test has verified their proper implementation, consistent with the BSM2 definition. This guarantees that users can focus...

  18. Exploring Students' Computational Thinking Skills in Modeling and Simulation Projects: : A Pilot Study

    NARCIS (Netherlands)

    Grgurina, Natasa; van Veen, Klaas; Barendsen, Erik; Zwaneveld, Bert; Suhre, Cor; Gal-Ezer, Judith; Sentance, Sue; Vahrenhold, Jan

    2015-01-01

    Computational Thinking (CT) is gaining a lot of attention in education. We explored how to discern the occurrences of CT in the projects of 12th grade high school students in the computer science (CS) course. Within the projects, they constructed models and ran simulations of phenomena from other

  19. Study of food chain functioning using the methods of simulation modeling

    Directory of Open Access Journals (Sweden)

    V. V. Brygadyrenko

    2015-05-01

    Full Text Available An algorithm for simulating a trophic net in a system consisting of 1 producer and 8 consumer species linked by 14 trophic relations, implemented in Microsoft Excel, is presented. The dynamics of productivity and biomass consumption in the model system are analyzed. Regularities in the functioning of consortium nets are stated.

  20. Pretreatment chemistry for dual media filtration: model simulations and experimental studies.

    Science.gov (United States)

    Shin, J Y; O'Melia, C R

    2006-01-01

    Laboratory dual media filtration experiments were conducted (a) in direct filtration mode using model raw water moderate in turbidity and low in DOC, and (b) in conventional filtration mode treating water moderate in turbidity and high in DOC. Model simulations of filter performance for the removal of particles provided hypotheses for the experimental studies of dual media filtration. An increase in alum dose in direct filtration mode, while improving filter performance, also showed some disadvantages, including rapid development of head loss. A suboptimal dose in direct filtration significantly impaired the filter performance. In conventional mode, the effect of alum dose on the filter performance, while obvious, was not as dramatic as in direct filtration. Ripening indicated by particle counts occurred earlier than by turbidity, and breakthrough of particle counts started earlier than breakthrough of turbidity, suggesting that turbidity can be used as a more conservative monitor of filter performance during the ripening period to minimise the risk of passage of small particles, while particle counts can be considered a more sensitive indicator of deteriorating filter performance during the breakthrough period. The lower sand layer served as a multiple barrier for particles when the performance of the anthracite layer was not effective.

  1. Comparative study of viscoelastic arterial wall models in nonlinear one-dimensional finite element simulations of blood flow.

    Science.gov (United States)

    Raghu, Rashmi; Vignon-Clementel, Irene E; Figueroa, C Alberto; Taylor, Charles A

    2011-08-01

    It is well known that blood vessels exhibit viscoelastic properties, which are modeled in the literature with different mathematical forms and experimental bases. The wide range of existing viscoelastic wall models may produce significantly different blood flow, pressure, and vessel deformation solutions in cardiovascular simulations. In this paper, we present a novel comparative study of two different viscoelastic wall models in nonlinear one-dimensional (1D) simulations of blood flow. The viscoelastic models are from papers by Holenstein et al. in 1980 (model V1) and Valdez-Jasso et al. in 2009 (model V2). The static elastic or zero-frequency responses of both models are chosen to be identical. The nonlinear 1D blood flow equations incorporating wall viscoelasticity are solved using a space-time finite element method, and the implementation is verified with the Method of Manufactured Solutions. Simulation results using models V1, V2 and the common static elastic model are compared in three application examples: (i) a wave propagation study in an idealized vessel with a reflection-free outflow boundary condition; (ii) a carotid artery model with nonperiodic boundary conditions; and (iii) a subject-specific abdominal aorta model under rest and simulated lower limb exercise conditions. In the wave propagation study the damping and wave speed were largest for model V2 and lowest for the elastic model. In the carotid and abdominal aorta studies the most significant differences between wall models were observed in the hysteresis (pressure-area) loops, which were larger for V2 than V1, indicating that V2 is a more dissipative model. The cross-sectional area oscillations over the cardiac cycle were smaller for the viscoelastic models compared to the elastic model. In the abdominal aorta study, differences between constitutive models were more pronounced under exercise conditions than at rest. The inlet pressure pulse for model V1 was larger than the pulses for V2 and the elastic model.
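
    A minimal sketch of how a Kelvin-Voigt-type viscoelastic term opens up the pressure-area hysteresis loop relative to a purely elastic tube law; this is a generic illustration, not either of the cited wall models, and all coefficients are invented:

```python
import numpy as np

# Prescribed sinusoidal area waveform over one cardiac cycle
t = np.linspace(0.0, 1.0, 1000)                       # s
A = 1.0e-4 * (1.0 + 0.05 * np.sin(2.0 * np.pi * t))   # m^2

A0 = 1.0e-4      # reference area, m^2
k_el = 1.0e5     # elastic stiffness coefficient, Pa (invented)
gamma = 2.0e3    # viscoelastic coefficient, Pa*s/m^2 (invented)

dAdt = np.gradient(A, t)
p_el = k_el * (np.sqrt(A / A0) - 1.0)   # common 1D elastic tube law
p_ve = p_el + gamma * dAdt              # Kelvin-Voigt-type viscous term

def loop_area(p, a):
    """Enclosed p-A loop area (trapezoidal rule) ~ dissipated energy."""
    return float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(a)))

# Elastic loop closes to ~0; the viscous term dissipates energy per cycle
print(f"elastic loop: {loop_area(p_el, A):+.3e}, "
      f"viscoelastic loop: {loop_area(p_ve, A):+.3e}")
```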

  2. Fitting logistic multilevel models with crossed random effects via Bayesian Integrated Nested Laplace Approximations : a simulation study

    NARCIS (Netherlands)

    Grilli, Leonardo; Innocenti, Francesco

    2017-01-01

    Fitting cross-classified multilevel models with binary response is challenging. In this setting a promising method is Bayesian inference through Integrated Nested Laplace Approximations (INLA), which performs well in several latent variable models. We devise a systematic simulation study to assess

  3. A study of finite element modeling for simulation of vehicle rollover

    Science.gov (United States)

    Lin, Zhigui; Liu, Changye; Lv, Juncheng; Jia, Ligang; Sun, Haichao; Chen, Tao

    2017-04-01

    At present, automobile ownership is very large and growing rapidly with social progress and development. The automobile has become one of the most important means of transportation in daily life. Accordingly, there are a large number of fatalities and serious injuries in traffic accidents every year, and vehicle safety has received more and more attention in recent years. There are several kinds of traffic accidents, including frontal crash, side crash, etc., while the rollover crash is a special kind. Rollover has the lowest incidence among all kinds of traffic accidents but the highest rate of serious injuries, many of which are fatal. For these reasons, it is very necessary to study vehicle rollover crashes. However, rollover is difficult to study, and the literature is sparse, owing to its variety, large number of degrees of freedom, and the difficulty of repeating and controlling rollover tests. Methods for investigating rollover crashes include experiments, the finite element method, and rigid-body-based models. The finite element method offers many advantages, such as low cost, repeatability, and detailed data, but it also has clear limitations. In this paper, a test and a simulation were carried out to study finite element modeling of vehicle rollover in particular.

  4. Simulation programs for Ph.D. study of analysis, modeling and optimum design of solar domestic hot water systems

    Energy Technology Data Exchange (ETDEWEB)

    Lin Qin

    1998-12-31

    The design of solar domestic hot water (DHW) systems is a complex process, due to characteristics inherent in solar heating technology. Recently, computer simulation has become a widely used technique to improve the understanding of the thermal processes in such systems. One of the main objectives of the Ph.D. study `Analysis, Modelling and Optimum Design of Solar Domestic Hot Water Systems` is to develop and verify programs for carrying out the simulation and evaluation of the dynamic performance of solar DHW systems. During this study, simulation programs for hot water distribution networks and for certain types of solar DHW systems were developed.

  5. A simulation study of sample size for multilevel logistic regression models

    Directory of Open Access Journals (Sweden)

    Moineddin Rahim

    2007-07-01

    Full Text Available Abstract Background Many studies conducted in the health and social sciences collect individual-level data as outcome measures. Usually, such data have a hierarchical structure, with patients clustered within physicians, and physicians clustered within practices. Large survey data, including national surveys, have a hierarchical or clustered structure; respondents are naturally clustered in geographical units (e.g., health regions) and may be grouped into smaller units. Outcomes of interest in many fields not only reflect continuous measures, but also binary outcomes such as depression, presence or absence of a disease, and self-reported general health. In the framework of multilevel studies, an important problem is calculating an adequate sample size that generates unbiased and accurate estimates. Methods In this paper, simulation studies are used to assess the effect of varying the sample size at both the individual and group level on the accuracy of the estimates of the parameters and variance components of multilevel logistic regression models. In addition, the influence of the prevalence of the outcome and the intra-class correlation coefficient (ICC) is examined. Results The results show that the estimates of the fixed effect parameters are unbiased for 100 groups with a group size of 50 or higher. The estimates of the variance-covariance components are slightly biased even with 100 groups and a group size of 50. The biases for both fixed and random effects are severe for a group size of 5. The standard errors for the fixed effect parameters are unbiased, while those for the variance-covariance components are underestimated. The results suggest that low-prevalence events require larger sample sizes, with a minimum of at least 100 groups and 50 individuals per group. Conclusion We recommend using a minimum group size of 50 with at least 50 groups to produce valid estimates for multilevel logistic regression models. Group size should be adjusted under conditions where the prevalence
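
    A hedged sketch of the data-generating step for such a simulation study: two-level binary data with a group-level random intercept, from which the latent-scale ICC follows; the group counts, sizes, and coefficients are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_dataset(n_groups=100, group_size=50,
                     beta0=-2.0, beta1=0.5, sigma_u=1.0):
    """Two-level logistic data: individuals nested in groups, with a
    group random intercept u_j ~ N(0, sigma_u^2)."""
    g = np.repeat(np.arange(n_groups), group_size)
    u = rng.normal(0.0, sigma_u, n_groups)
    x = rng.normal(size=g.size)
    eta = beta0 + beta1 * x + u[g]
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
    return g, x, y

g, x, y = simulate_dataset()
sigma_u = 1.0
icc = sigma_u**2 / (sigma_u**2 + np.pi**2 / 3.0)  # latent-scale ICC
print(f"outcome prevalence: {y.mean():.3f}, latent ICC: {icc:.3f}")
```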

  6. Development of a pore network simulation model to study nonaqueous phase liquid dissolution

    Science.gov (United States)

    Dillard, Leslie A.; Blunt, Martin J.

    2000-01-01

    A pore network simulation model was developed to investigate the fundamental physics of nonequilibrium nonaqueous phase liquid (NAPL) dissolution. The network model is a lattice of cubic chambers and rectangular tubes that represent pore bodies and pore throats, respectively. Experimental data obtained by Powers [1992] were used to develop and validate the model. To ensure the network model was representative of a real porous medium, the pore size distribution of the network was calibrated by matching simulated and experimental drainage and imbibition capillary pressure-saturation curves. The predicted network residual styrene blob-size distribution was nearly identical to the observed distribution. The network model reproduced the observed hydraulic conductivity and produced relative permeability curves that were representative of a poorly consolidated sand. Aqueous-phase transport was represented by applying the equation for solute flux to the network tubes and solving for solute concentrations in the network chambers. Complete mixing was found to be an appropriate approximation for calculation of chamber concentrations. Mass transfer from NAPL blobs was represented using a corner diffusion model. Predicted results of solute concentration versus Peclet number and of modified Sherwood number versus Peclet number for the network model compare favorably with experimental data for the case in which NAPL blob dissolution was negligible. Predicted results of normalized effluent concentration versus pore volume for the network were similar to the experimental data for the case in which NAPL blob dissolution occurred with time.

  7. Study of visualized simulation and analysis of nuclear fuel cycle system based on multilevel flow model

    International Nuclear Information System (INIS)

    Liu Jingquan; Yoshikawa, H.; Zhou Yangping

    2005-01-01

    Complex energy and environment systems, and especially the nuclear fuel cycle system, have recently raised social concerns about the issues of economic competitiveness, environmental effects and nuclear proliferation. Only if a consensus on these conflicting issues is reached among stakeholders with different knowledge backgrounds can the nuclear power industry continue to develop. In this paper, a new analysis platform is developed to help stakeholders recognize and analyze various socio-technical issues in the nuclear fuel cycle system, based on the functional modeling method named Multilevel Flow Models (MFM), which follows the cognition theory of human beings. Its key feature is that MFM models define a set of mass, energy and information flow structures on multiple levels of abstraction to describe the functional structure of a process system, with a graphical symbol representation and means-end and part-whole hierarchical flow structures that make the represented process easy to understand. Based upon this methodology, a micro-process and a macro-process of the nuclear fuel cycle system were selected to be simulated, and analysis processes such as economic analysis, environmental analysis and energy balance analysis related to those flows were also integrated to help stakeholders understand the decision-making process, with the introduction of some new functions into the improved Multilevel Flow Models Studio. Finally, simple simulations, such as a spent fuel management process simulation and the money flow of the nuclear fuel cycle with its levelised cost analysis, are presented as feasible examples.

  8. Mechanical Study of Standard Six Beat Front Crawl Swimming by Using Swimming Human Simulation Model

    Science.gov (United States)

    Nakashima, Motomu

    There are many dynamical problems in front crawl swimming which have not been fully investigated by analytical approaches. Therefore, in this paper, standard six beat front crawl swimming is analyzed with the swimming human simulation model SWUM, which has been developed by the authors. First, the outline of the simulation model, the joint motion for one stroke cycle, and the specifications of the calculation are described. Next, the contribution of each fluid force component and of each body part to the thrust, the effect of the flutter kick, the estimation of the active drag, the roll motion, and the propulsive efficiency are discussed. The following results were theoretically obtained: the thrust is produced at the upper limb by the normal drag force component; the flutter kick plays a role in raising the lower half of the body; the active drag coefficient in the simulation becomes 0.082; buoyancy determines the primal wave of the roll motion fluctuation; and the propulsive efficiency in the simulation becomes 0.2.

  9. A Study of Synchronous Machine Model Implementations in Matlab/Simulink Simulations for New and Renewable Energy Systems

    DEFF Research Database (Denmark)

    Chen, Zhe; Blaabjerg, Frede; Iov, Florin

    2005-01-01

    A direct phase model of synchronous machines implemented in MATLAB/SIMULINK is presented. The effects of machine saturation have been included. Simulation studies are performed under various conditions. It has been demonstrated that MATLAB/SIMULINK is an effective tool to study the complex synchronous machine, and the implemented model could be used for studies of various applications of synchronous machines, including in renewable and DG generation systems.

  10. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  11. A COMPARATIVE STUDY OF SIMULATION AND TIME SERIES MODEL IN QUANTIFYING BULLWHIP EFFECT IN SUPPLY CHAIN

    Directory of Open Access Journals (Sweden)

    T. V. O. Fabson

    2011-11-01

    Full Text Available Bullwhip (or whiplash) effect is an observed phenomenon in forecast-driven distribution channels, and careful management of its effects is of great importance to managers of supply chains. The bullwhip effect refers to situations where orders to the supplier tend to have larger variance than sales to the buyer (demand distortion), and the distortion increases as we move up the supply chain. Because customer demand for a product is unstable, business managers must forecast in order to properly position inventory and other resources. Forecasts are statistically based and, in most cases, not very accurate. The existence of forecast errors makes it necessary for organizations to carry an inventory buffer called "safety stock". Moving up the supply chain from the end-user customers to the raw materials supplier, a lot of variation in demand can be observed, which calls for a greater need for safety stock. This study compares the efficacy of simulation and Time Series models in quantifying the bullwhip effect in supply chain management.
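
    As an illustration of how the bullwhip effect is typically quantified, a minimal simulation sketch: synthetic demand, a moving-average forecast feeding an order-up-to policy, and the variance ratio of orders to demand (the policy and all parameters are assumptions for illustration, not the paper's models):

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic i.i.d. customer demand and a simple order-up-to policy
        # driven by a moving-average forecast.
        demand = rng.normal(100, 10, 2000)
        window, lead_time, z = 5, 2, 2.0
        levels = []
        for t in range(window, len(demand)):
            hist = demand[t - window:t]
            # Order-up-to level: lead-time demand forecast plus safety stock.
            levels.append(lead_time * hist.mean() + z * hist.std(ddof=1) * np.sqrt(lead_time))
        levels = np.array(levels)
        orders = demand[window:] + np.diff(np.concatenate([[levels[0]], levels]))

        # Bullwhip ratio: Var(orders) / Var(demand); > 1 means amplification.
        print(np.var(orders) / np.var(demand[window:]))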

  12. Modeling the autonomic and metabolic effects of obstructive sleep apnea: A simulation study.

    Directory of Open Access Journals (Sweden)

    Limei Cheng

    2012-01-01

    Full Text Available Long term exposure to intermittent hypoxia and sleep fragmentation introduced by recurring obstructive sleep apnea has been linked to subsequent cardiovascular disease and Type 2 diabetes. The underlying mechanisms remain unclear, but impairment of the normal interactions among the systems that regulate autonomic and metabolic function is likely involved. We have extended an existing integrative model of respiratory, cardiovascular and sleep-wake state control to incorporate a sub-model of glucose-insulin-fatty acid regulation. This computational model is capable of simulating the complex dynamics of cardiorespiratory control, chemoreflex and state-related control of breath-to-breath ventilation, state-related and chemoreflex control of upper airway patency, respiratory and circulatory mechanics, as well as the metabolic control of glucose-insulin dynamics and its interactions with autonomic control. The interactions between autonomic and metabolic control include the circadian regulation of epinephrine secretion, the regulation by epinephrine of dynamic fluctuations in glucose and free fatty acid in plasma, metabolic coupling among tissues and organs provided by insulin and epinephrine, as well as the effect of insulin on peripheral vascular sympathetic activity. These model simulations provide insight into the relative importance of the various mechanisms that determine the acute and chronic physiological effects of sleep-disordered breathing. The model can also be used to investigate the effects of a variety of interventions, such as different glucose clamps, the intravenous glucose tolerance test and the application of continuous positive airway pressure on obstructive sleep apnea subjects. As such, this model provides the foundation on which future efforts to simulate disease progression and the long-term effects of pharmacological intervention can be based.

  14. Simulation Methods for High-Cycle Fatigue-Driven Delamination using Cohesive Zone Models - Fundamental Behavior and Benchmark Studies

    DEFF Research Database (Denmark)

    Bak, Brian Lau Verndal; Lindgaard, Esben; Turon, A.

    2015-01-01

    A novel computational method for simulating fatigue-driven delamination cracks in composite laminated structures under cyclic loading based on a cohesive zone model [2] and new benchmark studies with four other comparable methods [3-6] are presented. The benchmark studies describe and compare the...

  15. A simulation study on Bayesian Ridge regression models for several collinearity levels

    Science.gov (United States)

    Efendi, Achmad; Effrihan

    2017-12-01

    When analyzing data with a multiple regression model, if there are collinearities then one or several predictor variables are usually omitted from the model. Sometimes, however, there are reasons, for instance medical or economic ones, why all the predictors are important and should be included in the model. Ridge regression is commonly used in research to cope with collinearity: through this modelling, weights for the predictor variables are used in estimating the parameters. The estimation process can follow the concept of likelihood, and nowadays the Bayesian version is an alternative. This estimation method has not matched the likelihood approach in popularity because of some difficulties, computation among them; nevertheless, with the recent improvement of computational methodology, this caveat should no longer be a problem. This paper discusses a simulation process for evaluating the characteristics of Bayesian Ridge regression parameter estimates. There are several simulation settings based on a variety of collinearity levels and sample sizes. The results show that the Bayesian method gives better performance for relatively small sample sizes, while for the other settings it performs similarly to the likelihood method.
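
    For a Gaussian prior, the Bayesian ridge posterior mean coincides with the classical ridge estimator, so one cell of such a simulation study can be sketched as follows (the collinearity level, sample size and prior precision are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(4)

        # One simulation setting: two highly collinear predictors, small sample.
        n, rho, beta = 30, 0.95, np.array([1.0, 2.0])
        cov = np.array([[1.0, rho], [rho, 1.0]])
        X = rng.multivariate_normal([0, 0], cov, size=n)
        y = X @ beta + rng.normal(0, 1, n)

        # Closed-form ridge estimate (the posterior mean under a Gaussian prior
        # with precision lam) compared with ordinary least squares.
        lam = 1.0
        ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
        ols = np.linalg.lstsq(X, y, rcond=None)[0]
        print("ridge:", ridge, " OLS:", ols, " true:", beta)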

  16. Do we need full mesoscale models to simulate the urban heat island? A study over the city of Barcelona.

    Science.gov (United States)

    García-Díez, Markel; Ballester, Joan; De Ridder, Koen; Hooyberghs, Hans; Lauwaet, Dirk; Rodó, Xavier

    2016-04-01

    As most of the population lives in urban environments, the simulation of the urban climate has become an important part of global climate change impact assessment. However, due to the high resolution required, these simulations demand a large amount of computational resources. Here we present a comparison between a simplified fast urban climate model (UrbClim) and a widely used full mesoscale model, the Weather Research and Forecasting (WRF) model, over the city of Barcelona. In order to check the advantages and disadvantages of each approach, both simulations were compared with station data and with land surface temperature observations retrieved by satellites, focusing on the urban heat island. The effect of changing the UrbClim boundary conditions was studied too, by using low resolution global reanalysis data (70 km) and a higher resolution forecast model (15 km). Finally, a strict comparison of the computational resources consumed by both models was carried out. Results show that, generally, the performance of the simple model is comparable to or better than that of the mesoscale model. The exceptions are the winds and the day-to-day correlation in the reanalysis-driven run, but these problems disappear when taking the boundary conditions from a higher resolution global model. UrbClim was found to run 133 times faster than WRF, at four times higher resolution, and is thus an efficient solution for running long climate change simulations over large city ensembles.

  17. Using ProModel as a simulation tools to assist plant layout design and planning: Case study plastic packaging factory

    Directory of Open Access Journals (Sweden)

    Pochamarn Tearwattanarattikal

    2008-01-01

    Full Text Available This study concerns the application of a simulation model to assist decision making on capacity expansion and plant layout design and planning. The plant layout design concept is applied first to create the physical layouts; the simulation model is then used to test the capability of the plant to meet various demand forecast scenarios. The study employed the ProModel package as a tool, using the model to compare performances in terms of % utilization, characteristics of WIP, and the ability to meet due dates. Verification and validation stages were performed before running the scenarios. The model runs daily production, and the capacity-constraint resources are then identified by % utilization. The capacity expansion policy can be extra shift-working hours or an increased number of machines. After expansion solutions are found, the physical layout is selected based on the criteria of space available for WIP and easy flow of material.

  18. When are solar refrigerators less costly than on-grid refrigerators: A simulation modeling study.

    Science.gov (United States)

    Haidari, Leila A; Brown, Shawn T; Wedlock, Patrick; Connor, Diana L; Spiker, Marie; Lee, Bruce Y

    2017-04-19

    Gavi recommends solar refrigerators for vaccine storage in areas with less than eight hours of electricity per day, and WHO guidelines are more conservative. The question remains: Can solar refrigerators provide value where electrical outages are less frequent? Using a HERMES-generated computational model of the Mozambique routine immunization supply chain, we simulated the use of solar versus electric mains-powered refrigerators (hereafter referred to as "electric refrigerators") at different locations in the supply chain under various circumstances. At their current price premium, the annual cost of each solar refrigerator is 132% more than each electric refrigerator at the district level and 241% more at health facilities. Solar refrigerators provided savings over electric refrigerators when one-day electrical outages occurred more than five times per year at either the district level or the health facilities, even when the electric refrigerator holdover time exceeded the duration of the outage. Two-day outages occurring more than three times per year at the district level or more than twice per year at the health facilities also caused solar refrigerators to be cost saving. Lowering the annual cost of a solar refrigerator to 75% more than an electric refrigerator allowed solar refrigerators to be cost saving at either level when one-day outages occurred more than once per year, or when two-day outages occurred more than once per year at the district level or even once per year at the health facilities. Our study supports WHO and Gavi guidelines. In fact, solar refrigerators may provide savings in total cost per dose administered over electric refrigerators even when electrical outages are less frequent. Our study identified the frequency and duration at which electrical outages need to occur for solar refrigerators to provide savings in total cost per dose administered over electric refrigerators at different solar refrigerator prices.
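
    A minimal break-even sketch of this kind of cost comparison; only the 132% district-level price premium comes from the abstract, while the electric refrigerator cost and the loss per outage are invented placeholders:

        # Break-even sketch for solar vs. electric vaccine refrigerators.
        electric_annual_cost = 400.0                     # assumed annual cost (USD)
        solar_annual_cost = electric_annual_cost * 2.32  # 132% premium at district level
        cost_per_outage = 150.0                          # assumed loss per one-day outage

        # Solar pays off when the avoided outage losses exceed the price premium.
        breakeven_outages = (solar_annual_cost - electric_annual_cost) / cost_per_outage
        print(f"solar is cost saving above ~{breakeven_outages:.1f} one-day outages/year")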

  19. Accuracy study of numerical simulation of tsunami applied to the submarine landslide model

    International Nuclear Information System (INIS)

    Tonomo, Koji; Shikata, Takemi; Murakami, Yoshikane

    2015-01-01

    This study carried out a reproductive calculation of the submarine landslide model experiment conducted by Hashimoto and Dan (2008), using the kinematic landslide model (KLS model) and the Watts model, which calculates tsunami wave propagation from an initial wave profile. Moreover, the KLS model was modified in this study into a 'modified-KLS model' that synchronizes the amounts of collapse and deposition, so that collapse and deposition proceed virtually simultaneously. As a result, the KLS model offers no advantage for tsunami height evaluation of the submarine landslide model, since it yields tsunami wave heights approximately 1.5-3.0 times those of the experimental result. On the other hand, the modified-KLS model and the Watts model mostly reproduced the spatial distribution of tsunami wave height. (author)

  20. Modeling and Simulation of Air Pollutant Dispersion: a Case Study of an Industrial Area in Nigeria

    Directory of Open Access Journals (Sweden)

    AbdulFatai JIMOH

    2006-07-01

    Full Text Available This work was carried out to develop a model equation for predicting air pollutant dispersion. Major air pollutants were identified, and their sources, how they cause air pollution, their effects and control measures were analysed. A chemiluminescent analyser, a non-dispersive infrared (NDIR) analyser, a flame ionization detector, a charcoal column absorber, and titration techniques were used for the analysis. Great emphasis was laid on the pollutants resulting from United African Textile in Lagos State. A predictive model for air pollutant dispersion was developed and simulated using data collected from the industry for the years 2001, 2002 and 2003. Both the model and the simulated results show that pollutants such as NO, CO and CO2 are dispersed in accordance with the law of dispersion (which states that pollutant concentration tends to decrease with increasing distance). The quantities of air pollutants emitted from the industries were compared with the FEPA regulated emission limit for each pollutant, and it was discovered that UNTL Lagos at certain points in time exceeded the regulated limits. Hence the model could be used for predicting air pollutant dispersion in air pollution control and for determining the safe distance for human habitation from the industrial area.

  1. A Case Study Using Modeling and Simulation to Predict Logistics Supply Chain Issues

    Science.gov (United States)

    Tucker, David A.

    2007-01-01

    Optimization of critical supply chains to deliver thousands of parts, materials, sub-assemblies, and vehicle structures as needed is vital to the success of the Constellation Program. Thorough analysis needs to be performed on the integrated supply chain processes to plan, source, make, deliver, and return critical items efficiently. Process modeling provides simulation technology-based, predictive solutions for supply chain problems which enable decision makers to reduce costs, accelerate cycle time and improve business performance. For example, United Space Alliance, LLC utilized this approach in late 2006 to build simulation models that recreated shuttle orbiter thruster failures and predicted the potential impact of thruster removals on logistics spare assets. The main objective was the early identification of possible problems in providing thruster spares for the remainder of the Shuttle Flight Manifest. After extensive analysis the model results were used to quantify potential problems and led to improvement actions in the supply chain. Similarly the proper modeling and analysis of Constellation parts, materials, operations, and information flows will help ensure the efficiency of the critical logistics supply chains and the overall success of the program.

  2. [Studies of ozone formation potentials for benzene and ethylbenzene using a smog chamber and model simulation].

    Science.gov (United States)

    Jia, Long; Xu, Yong-Fu

    2014-02-01

    Ozone formation potentials from irradiations of benzene-NOx and ethylbenzene-NOx systems under different VOC/NOx ratios and RH were investigated using a characterized chamber and model simulation. The repeatability of the smog chamber experiment shows that for two sets of ethylbenzene-NOx irradiations with similar initial concentrations and reaction conditions, such as temperature, relative humidity and relative light intensity, the largest difference in O3 between the two experiments is only 4% during the whole experimental run. On the basis of the smog chamber experiments, ozone formation from the photo-oxidation of benzene and ethylbenzene was simulated with the master chemical mechanism (MCM). The peak ozone values for benzene and ethylbenzene simulated by MCM are higher than the chamber data, and the difference between the MCM-simulated results and the chamber data increases with increasing RH. Under sunlight irradiation, with benzene and ethylbenzene concentrations in the range of (10-50) x 10^-9 and NOx concentrations in the range of (10-100) x 10^-9, the 6 h ozone contributions of benzene and ethylbenzene were (3.1-33) x 10^-9 and (2.6-122) x 10^-9, whereas the peak O3 contributions of benzene and ethylbenzene were (3.5-54) x 10^-9 and (3.8-164) x 10^-9, respectively. The MCM-simulated maximum incremental reactivity (MIR) values for benzene and ethylbenzene were 0.25/C and 0.97/C (per carbon), respectively. The maximum ozone reactivity (MOR) values for these two species were 0.73/C and 1.03/C, respectively. The MOR value of benzene from MCM is much higher than that obtained by Carter from SAPRC, indicating that SAPRC may underestimate the ozone formation potential of benzene.

  3. Cycle Time and Throughput Rate Modelling Study through the Simulation Platform

    Directory of Open Access Journals (Sweden)

    Fei Xiong

    2014-02-01

    Full Text Available Shorter cycle time (CT) and higher throughput rate (TH) are primary goals of industry, including sensor and transducer factories. The common way to reduce cycle time is to reduce WIP, but such action may also reduce throughput. This paper presents a practical heuristic algorithm, based on tool time modelling, to balance the CT and the TH. The algorithm considers the factors present in the work in process (WIP) and its constraints in the modules of the factory. A computer simulation platform based on a semiconductor factory was built to verify this algorithm. The results of the computer simulation experiments suggest that the WIP level calculated by this algorithm can achieve a good balance of CT and TH.
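
    The balance the algorithm seeks is constrained by Little's law, WIP = TH x CT; a minimal sketch with hypothetical numbers (not from the paper):

        # Little's law: WIP = TH * CT. A hypothetical sizing check for one factory module.
        throughput_per_hour = 120.0   # desired TH (units/hour), assumed value
        cycle_time_hours = 6.5        # desired CT per unit (hours), assumed value

        wip_target = throughput_per_hour * cycle_time_hours
        print(f"WIP level consistent with the CT/TH targets: {wip_target:.0f} units")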

  4. Bias correction of regional climate model simulations for hydrological climate-change impact studies: Review and evaluation of different methods

    Science.gov (United States)

    Teutschbein, Claudia; Seibert, Jan

    2012-08-01

    Despite the increasing use of regional climate model (RCM) simulations in hydrological climate-change impact studies, their application is challenging due to the risk of considerable biases. To deal with these biases, several bias correction methods have been developed recently, ranging from simple scaling to rather sophisticated approaches. This paper provides a review of available bias correction methods and demonstrates how they can be used to correct for deviations in an ensemble of 11 different RCM-simulated temperature and precipitation series. The performance of all methods was assessed in several ways: At first, differently corrected RCM data was compared to observed climate data. The second evaluation was based on the combined influence of corrected RCM-simulated temperature and precipitation on hydrological simulations of monthly mean streamflow as well as spring and autumn flood peaks for five catchments in Sweden under current (1961-1990) climate conditions. Finally, the impact on hydrological simulations based on projected future (2021-2050) climate conditions was compared for the different bias correction methods. Improvement of uncorrected RCM climate variables was achieved with all bias correction approaches. While all methods were able to correct the mean values, there were clear differences in their ability to correct other statistical properties such as standard deviation or percentiles. Simulated streamflow characteristics were sensitive to the quality of driving input data: Simulations driven with bias-corrected RCM variables fitted observed values better than simulations forced with uncorrected RCM climate variables and had more narrow variability bounds.
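
    As an illustration of the simplest family of methods reviewed above, a minimal linear-scaling sketch (synthetic placeholder series; a real application would use observed and RCM-simulated data over a common calibration period):

        import numpy as np

        # Synthetic placeholder series: observations and RCM output for a control
        # period, plus an RCM projection to be corrected (values are illustrative).
        obs_control = np.array([2.1, 0.0, 5.3, 1.2, 0.4])   # precipitation (mm/day)
        rcm_control = np.array([3.0, 0.5, 6.1, 2.0, 1.1])
        rcm_future  = np.array([3.4, 0.2, 7.0, 2.5, 0.9])

        # Linear scaling: multiplicative for precipitation, additive for temperature.
        precip_factor = obs_control.mean() / rcm_control.mean()
        precip_corrected = rcm_future * precip_factor

        obs_t = np.array([8.2, 9.1, 7.5])    # observed temperature (deg C)
        rcm_t = np.array([9.8, 10.4, 9.0])   # RCM control temperature (deg C)
        temp_offset = obs_t.mean() - rcm_t.mean()   # applied to future RCM temperatures
        print(precip_corrected, temp_offset)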

  5. Simulation model study of limitation on the locating distance of a ground penetrating radar; Chichu tansa radar no tansa kyori genkai ni kansuru simulation model no kochiku

    Energy Technology Data Exchange (ETDEWEB)

    Nakauchi, T.; Tsunasaki, M.; Kishi, M.; Hayakawa, H. [Osaka Gas Co. Ltd., Osaka (Japan)]

    1996-10-01

    Various simulations were carried out under various pipe-laying conditions to obtain the limit of the locating distance for ground penetrating radar. Recently, ground penetrating radar has attracted attention as a technology for locating obstacles such as existing buried objects. To enhance the theoretical model (radar equation) of the maximum locating distance, the following factors were examined experimentally using a pulse ground penetrating radar: ground surface conditions such as asphalt pavement, diameter of buried pipes, material of buried pipes, effect of soil, and antenna gain. The experimental results agreed well with those of actual field experiments. By incorporating the antenna gain and the effect of the ground surface, a more practical simulation using underground models became possible. In the actual field, the maximum locating distance was improved more by a large antenna than by a small one. It is assumed that the large antenna contributed to improvement of gain and reduction of attenuation during passage through soil. 5 refs., 12 figs.

  6. Simulation studies of optimum energies for DXA: dependence on tissue type, patient size and dose model

    International Nuclear Information System (INIS)

    Michael, G. J.; Henderson, C. J.

    1999-01-01

    Dual-energy x-ray absorptiometry (DXA) is a well established technique for measuring bone mineral density (BMD). However, in recent years DXA is increasingly being used to measure body composition in terms of fat and fat-free mass. DXA scanners must also determine the soft tissue baseline value from soft-tissue-only regions adjacent to bone. The aim of this work is to determine, using computer simulations, the optimum x-ray energies for a number of dose models, different tissues (i.e. bone mineral, average soft tissue, lean soft tissue and fat) and a range of anatomical sites and patient sizes. Three models for patient dose were evaluated: total beam energy, entrance exposure and absorbed dose calculated by Monte Carlo modelling. A range of tissue compositions and thicknesses was chosen to cover typical patient variations for the three sites: femoral neck, PA spine and lateral spine. In this work, the optimisation of the energies is based on (1) the uncertainty that arises from the quantum statistical nature of the number of x-rays recorded by the detector, and (2) the radiation dose received by the patient. This study has deliberately not considered other parameters such as detector response, electronic noise, x-ray tube heat load etc., because these are technology dependent parameters, not ones that are inherent to the measuring technique. Optimisation of the energies is achieved by minimisation of the product of the variance of the density measurement and the dose, which is independent of the absolute intensities of the x-ray beams. The results obtained indicate that if solving for bone density, then E-low in the range 34 to 42 keV, E-high in the range 100 to 200 keV and an incident intensity ratio (low energy/high energy) in the range 3 to 10 is a reasonable compromise for the normal range of patient sizes. The choice of energies is complicated by the fact that the DXA unit must also solve for fat and lean soft tissue in soft-tissue-only regions adjacent to the bone. This
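
    The dual-energy measurement underlying this optimisation reduces to a two-by-two linear solve for areal densities from log attenuations; a minimal sketch with illustrative mass attenuation coefficients (placeholder values, not the study's):

        import numpy as np

        # Log-attenuation measurements A = -ln(I/I0) at the two energies (illustrative).
        A_low, A_high = 2.10, 0.85

        # Mass attenuation coefficients (cm^2/g) for bone mineral and soft tissue
        # at E_low and E_high (placeholder values, not from the paper).
        mu = np.array([[0.60, 0.25],    # [mu_bone(E_low),  mu_soft(E_low)]
                       [0.20, 0.18]])   # [mu_bone(E_high), mu_soft(E_high)]

        # Solve mu @ [areal_density_bone, areal_density_soft] = [A_low, A_high].
        bone_ad, soft_ad = np.linalg.solve(mu, np.array([A_low, A_high]))
        print(f"BMD ~ {bone_ad:.2f} g/cm^2, soft tissue ~ {soft_ad:.2f} g/cm^2")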

  7. Study of tropical clouds feedback to a climate warming as simulated by climate models

    International Nuclear Information System (INIS)

    Brient, Florent

    2012-01-01

    The last IPCC report affirms the predominant role of low cloud-radiative feedbacks in the inter-model spread of climate sensitivity. Understanding the mechanisms that control the behavior of low-level clouds is thus crucial. However, the complexity of coupled ocean-atmosphere models and the large number of processes potentially involved make the analysis of this response difficult. To simplify the analysis and to identify the most critical controls of cloud feedbacks, we analyze the cloud response to climate change simulated by the IPSL-CM5A model in a hierarchy of configurations. A comparison between three model configurations (coupled, atmospheric and aqua-planet) using the same physical parametrizations shows that the cloud response to global warming is dominated by a decrease of low clouds in regimes of moderate subsidence. Using a single-column model forced by weak-subsidence large-scale forcing allows us to reproduce the vertical cloud profile predicted by the 3D model, as well as its response to climate change (if a stochastic forcing is added on the vertical velocity). We analyze the sensitivity of this low-cloud response to the external forcing and also to uncertain parameters of the physical parameterizations involved in the atmospheric model. Through a moist static energy (MSE) budget, we highlight several mechanisms. (1) Robust: over weak subsidence regimes, the Clausius-Clapeyron relationship predicts that a warmer atmosphere leads to an increase of the vertical MSE gradient, resulting in a strengthening of the import of low-MSE air from the free atmosphere into the cloudy boundary layer; the MSE budget links changes of vertical advection and cloud radiative effects. (2) Physics-model dependent: the coupling between the shallow convection, turbulence and cloud schemes allows the intensification of low-MSE transport, so that cloud radiative cooling becomes 'less necessary' to balance the energy budget (a robust positive low cloud-radiative feedback for the model). The

  8. A simplified heat pump model for use in solar plus heat pump system simulation studies

    DEFF Research Database (Denmark)

    Perers, Bengt; Andersen, Elsa; Nordman, Roger

    2012-01-01

    Solar plus heat pump systems are often very complex in design, with sometimes special heat pump arrangements and control. Therefore detailed heat pump models can give very slow system simulations and still not so accurate results compared to real heat pump performance in a system. The idea here is to start from a standard measured performance map of test points for a heat pump according to EN 14825, and then determine characteristic parameters for a simplified correlation-based model of the heat pump. By plotting heat pump test data in different ways, including power input and output form and not only as COP, a simplified relation could be seen. By using the same methodology as in the EN 12975 QDT part of the collector test standard, it could be shown that a very simple model could describe the heat pump test data very accurately, by identifying 4 parameters in the correlation equation found.
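
    A minimal sketch of this kind of parameter identification: fitting a four-parameter correlation to a measured performance map by least squares (the test points and the correlation form are assumed for illustration, not taken from the paper):

        import numpy as np

        # Hypothetical EN 14825-style test points: source temperature, sink
        # temperature (deg C) and measured heat output (kW). Illustrative only.
        t_source = np.array([-7.0, 2.0, 7.0, 12.0, 2.0, 7.0, 12.0])
        t_sink   = np.array([35.0, 35.0, 35.0, 35.0, 45.0, 45.0, 55.0])
        q_heat   = np.array([5.1, 6.3, 7.2, 8.0, 5.8, 6.8, 6.1])

        # Assumed correlation form: Q = p0 + p1*Ts + p2*Tk + p3*Ts*Tk.
        X = np.column_stack([np.ones_like(t_source), t_source, t_sink, t_source * t_sink])
        params, *_ = np.linalg.lstsq(X, q_heat, rcond=None)
        print("identified parameters:", params)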

  10. Dynamic Value at Risk: A Comparative Study Between Heteroscedastic Models and Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    José Lamartine Távora Junior

    2006-12-01

    Full Text Available The objective of this paper was to analyze the risk management of a portfolio composed of Petrobras PN, Telemar PN and Vale do Rio Doce PNA stocks. It was verified whether the modelling of Value-at-Risk (VaR) through Monte Carlo simulation with volatility models of the GARCH family is supported by the hypothesis of an efficient market. The results have shown that the static evaluation is inferior to the dynamic one, evidencing that the dynamic analysis supports the hypothesis of an efficient Brazilian stock market, in opposition to some empirical evidence. It was also verified that GARCH volatility models are sufficient to accommodate the variations of the Brazilian stock market, since they are capable of accommodating its great dynamics.
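
    A minimal sketch of the approach: a GARCH(1,1) one-step variance forecast feeding a Monte Carlo VaR estimate (the parameters and portfolio value are invented, not estimated from the paper's data):

        import numpy as np

        rng = np.random.default_rng(0)

        # Assumed GARCH(1,1) parameters (illustrative, not fitted to real returns).
        omega, alpha, beta = 1e-6, 0.08, 0.90
        sigma2_today, ret_today = 2.5e-4, -0.012   # current variance and return

        # One-day-ahead conditional variance, then Monte Carlo draws of tomorrow's return.
        sigma2_next = omega + alpha * ret_today**2 + beta * sigma2_today
        sim_returns = rng.normal(0.0, np.sqrt(sigma2_next), size=100_000)

        # 99% one-day VaR of a 1,000,000 portfolio: loss at the 1st percentile.
        var_99 = -np.percentile(sim_returns, 1) * 1_000_000
        print(f"99% 1-day VaR ~ {var_99:,.0f}")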

  11. Accounting for treatment use when validating a prognostic model: a simulation study.

    Science.gov (United States)

    Pajouheshnia, Romin; Peelen, Linda M; Moons, Karel G M; Reitsma, Johannes B; Groenwold, Rolf H H

    2017-07-14

    Prognostic models often show poor performance when applied to independent validation data sets. We illustrate how treatment use in a validation set can affect measures of model performance and present the uses and limitations of available analytical methods to account for this using simulated data. We outline how the use of risk-lowering treatments in a validation set can lead to an apparent overestimation of risk by a prognostic model that was developed in a treatment-naïve cohort to make predictions of risk without treatment. Potential methods to correct for the effects of treatment use when testing or validating a prognostic model are discussed from a theoretical perspective. Subsequently, we assess, in simulated data sets, the impact of excluding treated individuals and the use of inverse probability weighting (IPW) on the estimated model discrimination (c-index) and calibration (observed:expected ratio and calibration plots) in scenarios with different patterns and effects of treatment use. Ignoring the use of effective treatments in a validation data set leads to poorer model discrimination and calibration than would be observed in the untreated target population for the model. Excluding treated individuals provided correct estimates of model performance only when treatment was randomly allocated, although this reduced the precision of the estimates. IPW followed by exclusion of the treated individuals provided correct estimates of model performance in data sets where treatment use was either random or moderately associated with an individual's risk when the assumptions of IPW were met, but yielded incorrect estimates in the presence of non-positivity or an unobserved confounder. When validating a prognostic model developed to make predictions of risk without treatment, treatment use in the validation set can bias estimates of the performance of the model in future targeted individuals, and should not be ignored. When treatment use is random, treated
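
    A minimal sketch of the IPW-then-exclude idea for the calibration (observed:expected) ratio, using synthetic data and the true propensities (in practice these would be estimated); purely illustrative, not the authors' code:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 20_000
        risk = rng.uniform(0.05, 0.40, n)      # model-predicted risk without treatment
        p_treat = 0.2 + 0.5 * risk             # treatment more likely for high-risk patients
        treated = rng.random(n) < p_treat
        # Outcomes follow predicted risk; treatment halves the risk (assumed effect).
        outcome = rng.random(n) < np.where(treated, 0.5 * risk, risk)

        # Weight untreated individuals by 1 / P(untreated | risk), then exclude the treated.
        w = 1.0 / (1.0 - p_treat)
        untreated = ~treated
        oe_naive = outcome.mean() / risk.mean()     # ignores treatment use, appears < 1
        oe_ipw = (np.average(outcome[untreated], weights=w[untreated])
                  / np.average(risk[untreated], weights=w[untreated]))
        print(f"O:E ignoring treatment {oe_naive:.2f}, IPW-corrected {oe_ipw:.2f}")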

  13. Study on Fluid-solid Coupling Mathematical Models and Numerical Simulation of Coal Containing Gas

    Science.gov (United States)

    Xu, Gang; Hao, Meng; Jin, Hongwei

    2018-02-01

    Based on coal seam gas migration theory under multi-physics field coupling effects, a fluid-solid coupling model of coal seam gas was built using elastic mechanics, fluid mechanics in porous media and the effective stress principle. Gas seepage behavior under different original gas pressures was simulated. Results indicated that the residual gas pressure, gas pressure gradient and gas flow were larger when the original gas pressure was higher. The coal permeability distribution decreased exponentially when the original gas pressure was lower than the critical pressure; when the original pressure was higher than the critical pressure, coal permeability decreased rapidly at first and then increased slowly.

  14. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

    A methodology for the empirical validation of any detailed simulation model is presented. This kind of validation is always related to an experimental case. Empirical validation has a residual sense, because the conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides the detection of failures of the simulation model, and it can furthermore be used as a guide in the design of subsequent experiments. Three steps can be well differentiated: Sensitivity analysis, which can be made with DSA, differential sensitivity analysis, and with MCSA, Monte-Carlo sensitivity analysis. Finding the optimal domains of the input parameters; a procedure based on Monte-Carlo methods and cluster techniques has been developed to find the optimal domains of these parameters. Residual analysis, made in the time domain and in the frequency domain using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a thermal simulation model of buildings in Spain is presented, studying the behavior of building components in a Test Cell of the LECE at CIEMAT. (Author) 17 refs

  15. A Regional Climate Simulation Study Using WRF-ARW Model over Europe and Evaluation for Extreme Temperature Weather Events

    Directory of Open Access Journals (Sweden)

    Hari Prasad Dasari

    2014-01-01

    Full Text Available In this study, regional climate simulations of Europe over the 60-year period 1950-2010, made using a 25 km resolution WRF model with the NCEP 2.5 degree analysis for initial/boundary conditions, are presented for air temperature and extreme events of heat and cold waves. The E-OBS 25 km analysis data sets are used for model validation. Results suggest that WRF could simulate the temperature trends (mean, maximum, minimum, seasonal maximum and minimum) over most parts of Europe except the Iberian Peninsula, the Mediterranean and coastal regions. The model could simulate the slight fall of temperatures from 1950 to 1970 as well as the steady rise in temperatures from 1970 to 2010 over Europe. Simulations show the occurrence of about 80% of the total heat waves in the period 1970-2010, with the maximum number of heat/cold wave episodes over Eastern and Central Europe, in good agreement with observations. Relatively poor correlations and high bias are found for heat/cold wave episodes over the complex topographic areas of Iberia and the Mediterranean, where land surface processes play an important role in the local climate. The poor simulation of temperatures over these regions could be due to deficiencies in the representation of topography and surface physics, which need further sensitivity studies.

  16. A Study of K- Factor Power Transformer Characteristics by Modeling Simulation

    Directory of Open Access Journals (Sweden)

    W. A. A. Salem

    2011-10-01

    Full Text Available Harmonic currents generated by nonlinear loads can cause overheating and premature failure in power transformers. K-factor transformers are specially designed to accommodate harmonic currents and offer protection against overheating caused by harmonics. They minimize harmonic current loss and have an additional thermal capacity of known limits. According to IEEE C57-110, the winding eddy current losses are considered proportional to the harmonic current squared times its harmonic number squared. K-factor is only an indicative value, and the authors' main objective in this paper is to study the effect of harmonics on oil-filled transformers and to simulate harmonic behavior using Matlab Simulink. A case study is simulated in order to investigate K-factor values with pumping loads, with and without the use of harmonic filters. Results are compared with measured values.
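
    A minimal sketch of the K-factor computation implied by this definition, for an assumed harmonic current spectrum (values are illustrative, not measurements from the paper):

        # K-factor for an assumed harmonic spectrum: K = sum(Ih^2 h^2) / sum(Ih^2),
        # with Ih in per unit of the fundamental (illustrative values only).
        harmonics = {1: 1.00, 3: 0.33, 5: 0.20, 7: 0.14, 9: 0.11}

        total_sq = sum(i ** 2 for i in harmonics.values())
        k_factor = sum((i ** 2) * (h ** 2) for h, i in harmonics.items()) / total_sq
        print(f"K-factor = {k_factor:.1f}")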

  17. Assessing type I error and power of multistate Markov models for panel data-A simulation study

    OpenAIRE

    Cassarly, Christy; Martin, Renee’ H.; Chimowitz, Marc; Peña, Edsel A.; Ramakrishnan, Viswanathan; Palesch, Yuko Y.

    2016-01-01

    Ordinal outcomes collected at multiple follow-up visits are common in clinical trials. Sometimes, one visit is chosen for the primary analysis and the scale is dichotomized amounting to loss of information. Multistate Markov models describe how a process moves between states over time. Here, simulation studies are performed to investigate the type I error and power characteristics of multistate Markov models for panel data with limited non-adjacent state transitions. The results suggest that ...
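
    A minimal sketch of how such panel data can be generated: a three-state Markov chain with no direct transitions between non-adjacent states, observed at scheduled visits (the transition probabilities are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(2)

        # Assumed 3-state transition matrix per visit interval, allowing only
        # adjacent-state moves (illustrative, not the paper's design).
        P = np.array([[0.80, 0.20, 0.00],
                      [0.10, 0.70, 0.20],
                      [0.00, 0.15, 0.85]])

        def simulate_panel(n_subjects=500, n_visits=4, start_state=0):
            """Simulate ordinal states observed at scheduled follow-up visits."""
            panel = np.empty((n_subjects, n_visits), dtype=int)
            panel[:, 0] = start_state
            for v in range(1, n_visits):
                for s in range(n_subjects):
                    panel[s, v] = rng.choice(3, p=P[panel[s, v - 1]])
            return panel

        panel = simulate_panel()
        print("state distribution at last visit:", np.bincount(panel[:, -1]) / len(panel))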

  18. A Software Toolkit to Study Systematic Uncertainties of the Physics Models of the Geant4 Simulation Package

    Science.gov (United States)

    Genser, Krzysztof; Hatcher, Robert; Kelsey, Michael; Perdue, Gabriel; Wenzel, Hans; Wright, Dennis H.; Yarba, Julia

    2017-10-01

    The Geant4 simulation toolkit is used to model interactions between particles and matter. Geant4 employs a set of validated physics models that span a wide range of interaction energies. These models rely on measured cross-sections and phenomenological models with the physically motivated parameters that are tuned to cover many application domains. To study what uncertainties are associated with the Geant4 physics models we have designed and implemented a comprehensive, modular, user-friendly software toolkit that allows the variation of one or more parameters of one or more Geant4 physics models involved in simulation studies. It also enables analysis of multiple variants of the resulting physics observables of interest in order to estimate the uncertainties associated with the simulation model choices. Based on modern event-processing infrastructure software, the toolkit offers a variety of attractive features, e.g. flexible run-time configurable workflow, comprehensive bookkeeping, easy to expand collection of analytical components. Design, implementation technology, and key functionalities of the toolkit are presented in this paper and illustrated with selected results.

  20. Modeling, simulation, and concept studies of a fuel cell hybrid electric vehicle powertrain

    Energy Technology Data Exchange (ETDEWEB)

    Oezbek, Markus

    2010-03-29

    This thesis focuses on the development of a fuel cell-based hybrid electric powertrain for smaller (2 kW) hybrid electric vehicles (HEVs). A Hardware-in-the-Loop test rig is designed and built with the ability to simulate any load profile for HEVs in a realistic environment, where the environment itself is modeled. Detailed simulation models of the test rig are developed and validated against the real physical components, and control algorithms are designed for the DC/DC converters and the fuel cell system. A state-feedback controller is developed for the DC/DC converters, using the state-space averaging method. For the fuel cells, a gain-scheduling controller based on state feedback is developed and compared to two conventional methods. The design process of an HEV with regard to a given load profile is introduced, with a comparison between SuperCaps and batteries. The HEV is also evaluated with an introduction to different power management concepts with regard to fuel consumption, dynamics, and fuel cell deterioration rate. The power management methods are implemented in the test rig and compared. (orig.)

  1. Numerical Modelling and Simulation of Dynamic Parameters for Vibration Driven Mobile Robot: Preliminary Study

    Science.gov (United States)

    Baharudin, M. E.; Nor, A. M.; Saad, A. R. M.; Yusof, A. M.

    2018-03-01

    The motion of vibration-driven robots is based on an internal oscillating mass, which allows them to move without legs or wheels. The oscillation of the unbalanced mass by a motor is translated into vibration, which in turn produces vertical and horizontal forces. The vertical and horizontal oscillations have the same frequency, but their phases are shifted. The vertical forces deflect the bristles, which causes the robot to move forward. In this paper, the direction of the horizontal motion caused by the vertically vibrated bristles is numerically simulated by tuning the frequency of their oscillatory actuation. As preliminary work, basic equations for a simple off-centered vibration location on the robot platform and a simulation model for the vibration excitation are introduced. The work involves both static and dynamic vibration analysis of the robot and analysis of different types of parameters. In addition, the orientation of the bristles and oscillators is also analysed. Results from the numerical integration seem to be in good agreement with those reported in the literature. The presented numerical integration model can be used for designing the bristles and controlling the speed and direction of the robot.

  2. Modelling and Simulation: An Overview

    OpenAIRE

    McAleer, Michael; Chan, Felix; Oxley, Les

    2013-01-01

    This discussion paper resulted in a publication in 'Selected Papers of the MSSANZ 19th Biennial Conference on Modelling and Simulation Mathematics and Computers in Simulation', 2013, pp. viii. The papers in this special issue of Mathematics and Computers in Simulation cover the following topics: improving judgmental adjustment of model-based forecasts, whether forecast updates are progressive, on a constrained mixture vector autoregressive model, whether all estimators are born equal: the emp...

  3. Notes on modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Redondo, Antonio [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-10

    These notes present a high-level overview of how modeling and simulation are carried out by practitioners. The discussion is of a general nature; no specific techniques are examined but the activities associated with all modeling and simulation approaches are briefly addressed. There is also a discussion of validation and verification and, at the end, a section on why modeling and simulation are useful.

  4. Mathematical Model of Innate and Adaptive Immunity of Sepsis: A Modeling and Simulation Study of Infectious Disease

    OpenAIRE

    Shi, Zhenzhen; Wu, Chih-Hang J.; Ben-Arieh, David; Simpson, Steven Q.

    2015-01-01

    Sepsis is a systemic inflammatory response (SIR) to infection. In this work, a system dynamics mathematical model (SDMM) is examined to describe the basic components of SIR and sepsis progression. Both innate and adaptive immunities are included, and simulated results in silico have shown that adaptive immunity has significant impacts on the outcomes of sepsis progression. Further investigation has found that the intervention timing, intensity of anti-inflammatory cytokines, and initial patho...

  5. A study on the modelling and simulation of the hydrogen behavior

    International Nuclear Information System (INIS)

    Park, Jae Hong

    1993-02-01

    The severe accident hydrogen control regulation of 10CFR50.34(f) requires that the plant design shall include a hydrogen control system which can safely accommodate the hydrogen resulting from a 100% metal-water reaction and limit its concentration in the containment to no greater than 10%. This regulation is applied to the UCN 3 and 4 design for the first time in Korea. However, a severe accident hydrogen control system such as hydrogen igniters is not designed to be installed in the UCN 3 and 4 containment. Further, UCN 3 and 4 do not have the safety-grade containment fan cooler system that would influence hydrogen transport and combustion. This study focuses on the modelling and simulation of hydrogen behavior with the CONTAIN computer code, to identify quantitatively how the containment ESFs such as the containment fan system, hydrogen igniters and containment spray system influence hydrogen mixing and burning under severe accident conditions, in order to determine whether a hydrogen control system such as igniters is needed for a large dry containment, and to suggest a severe accident mitigation scheme for hydrogen combustion. Since the direct containment heating (DCH) effects and corium-coolant-concrete interactions, including steam explosion in the reactor cavity, are not analyzed in this study, the following conclusions apply only to severe accident sequences without these phenomena. For about 1 hour after RPV breach, highly turbulent flow is predicted to prevail in the lower containment compartments. During this period, the irreversible flow loss coefficients govern hydrogen mixing in the lower compartments. However, after this period, it is estimated that the containment fan system does not have a significant influence on hydrogen transport in the containment. A more detailed investigation for the estimation of the relevant flow loss coefficients for a lumped-parameter code such as CONTAIN should be carried out. After completion of

  6. Molecular dynamics simulation study of superhydrated perdeuterated natrolite using a new interaction potential model.

    Science.gov (United States)

    Demontis, Pierfranco; Gulín-Gonzalez, Jorge; Suffritti, Giuseppe B

    2006-04-13

    To test a new interaction potential, molecular dynamics simulations of zeolite natrolite were performed for the structures under ambient conditions hydrated by perdeuterated water and at high pressure (1.87 GPa) in the superhydrated phase, which were recently studied by neutron diffraction. The experimental structures were reproduced with reasonable accuracy, and the hydrogen bond features are discussed. As in ordinary natrolite, a flip motion of water molecules around the HOH bisector is found, which, together with translational oscillations, gives rise to transient hydrogen bonds between water molecules, which do not appear from experimental equilibrium coordinates. The dynamics of water molecules can explain some problems encountered in refining the experimental structure. Vibrational spectra of natrolite containing perdeuterated water, which are not yet measured, were simulated, and their qualitative trend is discussed.

  7. Study of Multi-phase Flow in Porous Media : Comparison of SPH Simulations with Micro-model Experiments

    OpenAIRE

    Kunz, P.; Zarikos, I. M.; Karadimitriou, N. K.; Huber, M.; Nieken, U.; Hassanizadeh, S. M.

    2016-01-01

    We present simulations and experiments of drainage processes in a micro-model. A direct numerical simulation is introduced which is capable of describing wetting phenomena on the pore scale. A numerical smoothed particle hydrodynamics model was developed and used to simulate the two-phase flow of immiscible fluids. The experiments were performed in a micro-model which allows the visualization of interface propagation in detail. We compare the experiments and simulations of a quasistatic drain...

  8. Cortical imaging on a head template: a simulation study using a resistor mesh model (RMM).

    Science.gov (United States)

    Chauveau, Nicolas; Franceries, Xavier; Aubry, Florent; Celsis, Pierre; Rigaud, Bernard

    2008-09-01

    The T1 head template model used in Statistical Parametric Mapping Version 2000 (SPM2), was segmented into five layers (scalp, skull, CSF, grey and white matter) and implemented in 2 mm voxels. We designed a resistor mesh model (RMM), based on the finite volume method (FVM) to simulate the electrical properties of this head model along the three axes for each voxel. Then, we introduced four dipoles of high eccentricity (about 0.8) in this RMM, separately and simultaneously, to compute the potentials for two sets of conductivities. We used the direct cortical imaging technique (CIT) to recover the simulated dipoles, using 60 or 107 electrodes and with or without addition of Gaussian white noise (GWN). The use of realistic conductivities gave better CIT results than standard conductivities, lowering the blurring effect on scalp potentials and displaying more accurate position areas when CIT was applied to single dipoles. Simultaneous dipoles were less accurately localized, but good qualitative and stable quantitative results were obtained up to 5% noise level for 107 electrodes and up to 10% noise level for 60 electrodes, showing that a compromise must be found to optimize both the number of electrodes and the noise level. With the RMM defined in 2 mm voxels, the standard 128-electrode cap and 5% noise appears to be the upper limit providing reliable source positions when direct CIT is used. The admittance matrix defining the RMM is easy to modify so as to adapt to different conductivities. The next step will be the adaptation of individual real head T2 images to the RMM template and the introduction of anisotropy using diffusion imaging (DI).

  9. Laboratory studies and model simulations of sorbent material behavior for an in-situ passive treatment barrier

    International Nuclear Information System (INIS)

    Aloysius, D.; Fuhrmann, M.

    1995-01-01

    This paper presents a study combining laboratory experiments and model simulations in support of the design and construction of a passive treatment barrier (or filter wall) for retarding the migration of Sr-90 within a water-bearing surficial sand and gravel layer. Preliminary evaluation was used to select materials for column testing. A one-dimensional finite-difference model was used to simulate the laboratory column results, and extrapolation of the calibrated model was then used to assess barrier performance over extended time frames with respect to Sr-90 breakthrough and loading on the filter media. The final results of the study showed that 20 by 50 mesh clinoptilolite will attenuate Sr-90, with a maximum life expectancy of approximately 10 years. This time period is based on allowable limits of Sr-90 activity on the filter media and is also a function of site-specific conditions

  10. Modelling and simulation of electrical energy systems through a complex systems approach using agent-based models. Case study: Under-frequency load shedding for refrigerators

    Energy Technology Data Exchange (ETDEWEB)

    Kremers, Enrique [Karlsruhe Institute of Technology (KIT), Karlsruhe (Germany). European Inst. for Energy Research (EIFER); Gonzalez de Durana, Jose Maria; Barambones, Oscar [Universidad del Pais Vasco, Vitoria (Spain). Escuela Universitaria de Ingenieria de Vitoria-Gasteiz]

    2013-09-01

    One of the ways of studying complex systems is through modelling and simulation, which are used as tools to represent these systems in a virtual environment. Current advances in computing performance (which has been a major constraint in this field for some time) allow for the simulation of these kinds of systems within reasonable time horizons. One of the tools for simulating complex systems is agent-based modelling. This individual-centric approach is based on autonomous entities that can interact with each other, thus modelling the system in a disaggregated way. Agent-based models can be coupled with other modelling methods, such as continuous models and discrete events, which can be embedded in or run in parallel to the multi-agent system. When the electrical energy system is represented in a systemic and multi-layered way, it is treated as a true socio-technical system, in which not only technical models are taken into account but also socio-behavioural ones. In this work, a number of different models for the parts of an electrical system are presented, related to production, demand and storage. The models are intended to be as simple as possible in order to be simulated in an integrated framework representing the system as a whole. Furthermore, the models allow the inclusion of social behaviour and other, not purely engineering-related aspects of the system, which have to be considered from a complex-systems point of view. (orig.)

  11. Energy Modelling and Automated Calibrations of Ancient Building Simulations: A Case Study of a School in the Northwest of Spain

    Directory of Open Access Journals (Sweden)

    Ana Ogando

    2017-06-01

    In the present paper, the energy performance of the buildings forming a school centre in the northwest of Spain was analyzed using a transient simulation of the energy model of the school, developed with TRNSYS, a software tool of proven reliability in the field of thermal simulations. A deterministic calibration approach was applied to the initial building model to adjust its predictions to the actual performance of the school, using data acquired during a temperature measurement campaign. The buildings under study were in a deteriorated condition due to poor maintenance over the years, presenting a big challenge for reliable modelling and simulation. The results showed that the proposed methodology succeeds in obtaining calibrated thermal models of these types of damaged buildings, as the metrics employed to verify the final error showed a reduced normalized mean bias error (NMBE) of 2.73%. It was verified that the calibration process achieved a decrease of approximately 60% in NMBE and 17% in the coefficient of variation of the root mean square error (CV(RMSE)). Subsequent steps were performed with the aid of new software, developed under a European project, which enabled the automated calibration of the simulations.

  12. Comparative simulation study of gas-phase propylene polymerization in fluidized bed reactors using aspen polymers and two phase models

    Directory of Open Access Journals (Sweden)

    Shamiria Ahmad

    2013-01-01

    A comparative study of gas-phase propylene polymerization in fluidized-bed reactors using a Ziegler-Natta catalyst is presented. The reactor behavior was described using a two-phase model (based on principles of fluidization) as well as simulation using the Aspen Polymers process simulator. The two-phase reactor model accounts for the emulsion and bubble phases, which contain different portions of catalyst, with polymerization occurring in both phases. Both models predict production rate, molecular weight, polydispersity index (PDI) and melt flow index (MFI) of the polymer. We used both models to investigate the effect of important polymerization parameters, namely catalyst feed rate and hydrogen concentration, on product polypropylene properties such as production rate, molecular weight, PDI and MFI. The two-phase model and the Aspen Polymers simulator showed good agreement in terms of production rate. However, the models differed in their predictions for weight-average molecular weight, PDI and MFI. Based on these results, we propose incorporating the missing hydrodynamic effects into Aspen Polymers to provide a more realistic understanding of the phenomena encountered in fluidized-bed reactors for polyolefin production.

  13. Study of the dosimetric response of Gallium Nitride (GaN): modeling, simulation and characterization on radiotherapy

    International Nuclear Information System (INIS)

    Wang, Ruoxi

    2015-01-01

    The work in this thesis aims to increase the measurement precision of dosimetry based on the Gallium Nitride (GaN) transducer and to develop its applications in radiotherapy. The study covers the modeling, simulation and characterization of the transducer's response in external radiotherapy and brachytherapy. For the modeling, we proposed two approaches to describe the GaN transducer's response in external radiotherapy. In the first approach, a model was built from experimental data, separating the primary and scattered components of the beam. In the second approach, we adapted a response model initially developed for silicon diodes to the GaN radioluminescent transducer. We also proposed an original concept of bi-media dosimetry, which evaluates the dose in tissue from the different responses of two media without prior information on the irradiation conditions. This concept was demonstrated by Monte Carlo simulation. Moreover, for high-dose-rate (HDR) brachytherapy, the response of the GaN transducer irradiated by iridium-192 and cobalt-60 sources was evaluated by Monte Carlo simulation and confirmed by measurements. Characterization studies of the GaN radioluminescent transducer were carried out with these sources as well. An instrumented phantom prototype with a GaN probe was developed for HDR brachytherapy quality control; it allows real-time verification of the physical parameters of a treatment (source dwell position, source dwell time, source activity). (author)

  14. Simulation Model of a Transient

    DEFF Research Database (Denmark)

    Jauch, Clemens; Sørensen, Poul; Bak-Jensen, Birgitte

    2005-01-01

    This paper describes the simulation model of a controller that enables an active-stall wind turbine to ride through transient faults. The simulated wind turbine is connected to a simple model of a power system. Certain fault scenarios are specified and the turbine shall be able to sustain operati...

  15. Assessing type I error and power of multistate Markov models for panel data-A simulation study.

    Science.gov (United States)

    Cassarly, Christy; Martin, Renee' H; Chimowitz, Marc; Peña, Edsel A; Ramakrishnan, Viswanathan; Palesch, Yuko Y

    2017-01-01

    Ordinal outcomes collected at multiple follow-up visits are common in clinical trials. Sometimes, one visit is chosen for the primary analysis and the scale is dichotomized amounting to loss of information. Multistate Markov models describe how a process moves between states over time. Here, simulation studies are performed to investigate the type I error and power characteristics of multistate Markov models for panel data with limited non-adjacent state transitions. The results suggest that the multistate Markov models preserve the type I error and adequate power is achieved with modest sample sizes for panel data with limited non-adjacent state transitions.
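
    A minimal sketch of the data-generating half of such a simulation study, assuming a 3-state model with adjacent-state transitions only; the intensity matrix, visit schedule, and sample size below are invented, and the model-fitting and hypothesis-testing step (typically done with a dedicated multistate package such as R's msm) is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Transition intensity matrix for a 3-state model, adjacent moves only
Q = np.array([[-0.30,  0.30,  0.00],
              [ 0.10, -0.40,  0.30],
              [ 0.00,  0.20, -0.20]])

def simulate_panel(Q, visit_times, start=0):
    """Simulate one subject's continuous-time path, observed only at visits."""
    t, state, obs = 0.0, start, []
    for vt in visit_times:
        while t < vt:
            rate = -Q[state, state]
            dwell = rng.exponential(1.0 / rate) if rate > 0 else np.inf
            if t + dwell > vt:            # no jump before this visit
                break
            t += dwell
            p = Q[state].clip(min=0.0)    # jump probabilities ~ off-diagonal rates
            state = rng.choice(len(Q), p=p / p.sum())
        t = vt
        obs.append(state)
    return obs

visits = [3.0, 6.0, 9.0, 12.0]            # panel visits (months, illustrative)
panel = np.array([simulate_panel(Q, visits) for _ in range(200)])
print("state counts at final visit:", np.bincount(panel[:, -1], minlength=3))
```

    Repeating this generation under a null treatment effect and counting rejections of the fitted effect gives the empirical type I error; generating under an alternative gives power.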

  16. Sulfur deposition simulations over China, Japan, and Korea: a model intercomparison study for abating sulfur emission.

    Science.gov (United States)

    Kim, Cheol-Hee; Chang, Lim-Seok; Meng, Fan; Kajino, Mizuo; Ueda, Hiromasa; Zhang, Yuanhang; Son, Hye-Young; Lee, Jong-Jae; He, Youjiang; Xu, Jun; Sato, Keiichi; Sakurai, Tatsuya; Han, Zhiwei; Duan, Lei; Kim, Jeong-Soo; Lee, Suk-Jo; Song, Chang-Keun; Ban, Soo-Jin; Shim, Shang-Gyoo; Sunwoo, Young; Lee, Tae-Young

    2012-11-01

    In response to increasing trends in sulfur deposition in Northeast Asia, three countries in the region (China, Japan, and Korea) agreed to devise abatement strategies. The concepts of critical loads and source-receptor (S-R) relationships provide guidance for formulating such strategies. Based on the Long-range Transboundary Air Pollutants in Northeast Asia (LTP) project, this study analyzes sulfur deposition data in order to optimize acidic loads over the three countries. The three groups involved in this study carried out a full year (2002) of sulfur deposition modeling over the geographic region spanning the three countries, using three air quality models: MM5-CMAQ, MM5-RAQM, and RAMS-CADM, employed by the Chinese, Japanese, and Korean modeling groups, respectively. Each model employed its own meteorological numerical model and model parameters; only the emission rates for SO2 and NOx, obtained from the LTP project, were common to the three models. The three models showed some bias in both dry and wet deposition, particularly the latter, because of bias in annual precipitation. This finding points to the need for further sensitivity tests of the wet removal rates in association with the underlying cloud-precipitation physics and parameterizations. Despite this bias, the annual total (dry plus wet) sulfur deposition predicted by the models was surprisingly similar. The ensemble-average annual total deposition was 7,203.6 ± 370 kt S, with a minimal mean fractional error (MFE) of 8.95 ± 5.24% and a pattern correlation (PC) of 0.89-0.93 between the models. Despite rather poor error scores in comparison with observations, these consistent total deposition values across the three models, based on the LTP group's input data assumptions, suggest a plausible S-R relationship that can be applied to the next task of designing cost-effective emission abatement strategies.
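
    The record quotes a mean fractional error (MFE) and a pattern correlation (PC) between the model fields; the sketch below shows how such inter-model metrics can be computed on gridded deposition fields. The synthetic fields stand in for the three models' output, and the commonly used MFE/PC forms are assumed, since the LTP groups' exact definitions are not given in the record.

```python
import numpy as np

def mean_fractional_error(a, b):
    """MFE (%) in a commonly used paired form: |a-b| over the pairwise mean."""
    return 100.0 * np.mean(np.abs(a - b) / ((a + b) / 2.0))

def pattern_correlation(a, b):
    """Pearson correlation between two flattened spatial fields."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

rng = np.random.default_rng(0)
base = rng.gamma(2.0, 50.0, size=(40, 60))        # synthetic deposition field
m1 = base * rng.normal(1.00, 0.05, base.shape)    # three "models": shared signal
m2 = base * rng.normal(1.05, 0.05, base.shape)    # plus model-specific noise
m3 = base * rng.normal(0.95, 0.05, base.shape)

ensemble = (m1 + m2 + m3) / 3.0
print("MFE m1 vs m2: %.2f %%" % mean_fractional_error(m1, m2))
print("pattern correlation m1 vs m3: %.3f" % pattern_correlation(m1, m3))
print("ensemble total deposition (arbitrary units): %.1f" % ensemble.sum())
```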

  17. Study of Swarm Behavior in Modeling and Simulation of Cluster Formation in Nanofluids

    Directory of Open Access Journals (Sweden)

    Mohammad Pirani

    2013-01-01

    Modelling of cooperative multi-agent systems inspired by biological self-organization, in the context of the swarm model, has received great attention, especially in the field of cooperating multi-robot systems. These models try to optimize the behavior of artificial multi-agent systems by introducing a consensus, a mathematical model of agreement between the agents, as an intelligence property of each member of the swarm set. This study investigates the application of this novel approach to the modeling of non-intelligent multi-agent systems, namely the cohesion and cluster formation of nanoparticles in nanofluids. This is achieved by applying the basic swarm model to agents that are more mechanistic, taking into account their physical properties, such as mass and diameter, as well as the physical properties of the flow. Clustering in nanofluids is one of the major issues in the study of their effects on heat transfer. Studying cluster formation dynamics in nanofluids with the swarm model can be useful for controlling the size and formation time of the clusters, as well as for designing appropriate microchannels into which the nanoparticles are plunged.

  18. Cognitive models embedded in system simulation models

    International Nuclear Information System (INIS)

    Siegel, A.I.; Wolf, J.J.

    1982-01-01

    If we are to discuss and consider cognitive models, we must first come to grips with two questions: (1) What is cognition? (2) What is a model? Presumably, the answers to these questions can provide a basis for defining a cognitive model. Accordingly, this paper first places these two questions into perspective. Then, cognitive models are set within the context of computer simulation models, and a number of computer simulations of cognitive processes are described. Finally, pervasive issues are discussed vis-a-vis cognitive modeling in the computer simulation context.

  19. Creating Simulated Microgravity Patient Models

    Science.gov (United States)

    Hurst, Victor; Doerr, Harold K.; Bacal, Kira

    2004-01-01

    The Medical Operational Support Team (MOST) has been tasked by the Space and Life Sciences Directorate (SLSD) at the NASA Johnson Space Center (JSC) to integrate medical simulation into 1) medical training for ground and flight crews and into 2) evaluations of medical procedures and equipment for the International Space Station (ISS). To do this, the MOST requires patient models that represent the physiological changes observed during spaceflight. Despite the presence of physiological data collected during spaceflight, there is no defined set of parameters that illustrate or mimic a 'space normal' patient. Methods: The MOST culled space-relevant medical literature and data from clinical studies performed in microgravity environments. The areas of focus for data collection were in the fields of cardiovascular, respiratory and renal physiology. Results: The MOST developed evidence-based patient models that mimic the physiology believed to be induced by human exposure to a microgravity environment. These models have been integrated into space-relevant scenarios using a human patient simulator and ISS medical resources. Discussion: Despite the lack of a set of physiological parameters representing 'space normal,' the MOST developed space-relevant patient models that mimic microgravity-induced changes in terrestrial physiology. These models are used in clinical scenarios that will medically train flight surgeons, biomedical flight controllers (biomedical engineers; BME) and, eventually, astronaut-crew medical officers (CMO).

  20. Preliminary study of virtual reality and model simulation for learning laparoscopic suturing skills.

    Science.gov (United States)

    McDougall, Elspeth M; Kolla, Surendra B; Santos, Rosanne T; Gan, Jennifer M; Box, Geoffrey N; Louie, Michael K; Gamboa, Aldrin J R; Kaplan, Adam G; Moskowitz, Ross M; Andrade, Lorena A; Skarecky, Douglas W; Osann, Kathryn E; Clayman, Ralph V

    2009-09-01

    Repetitive practice of laparoscopic suturing and knot tying can facilitate surgeon proficiency in performing this reconstructive technique. We compared a silicone model and pelvic trainer to a virtual reality simulator in the learning of laparoscopic suturing and knot tying by laparoscopically naïve medical students, and evaluated the subsequent performance of porcine laparoscopic cystorrhaphy. A total of 20 medical students underwent a 1-hour didactic session with video demonstration of laparoscopic suturing and knot tying by an expert laparoscopic surgeon. The students were randomized to a pelvic trainer (10) or virtual reality simulator (10) for a minimum of 2 hours of laparoscopic suturing and knot tying training. Within 1 week of the training session the medical students performed laparoscopic closure of a 2 cm cystotomy in a porcine model. Objective structured assessment of technical skills for laparoscopic cystorrhaphy was performed at the procedure by laparoscopic surgeons blinded to the medical student training format. A video of the procedure was evaluated with an objective structured assessment of technical skills by an expert laparoscopic surgeon blinded to medical student identity and training format. The medical students completed an evaluation questionnaire regarding the training format after the laparoscopic cystorrhaphy. All students were able to complete the laparoscopic cystorrhaphy. There was no difference between the pelvic trainer and virtual reality groups in mean +/- SD time to perform the porcine cystorrhaphy at 40 +/- 15 vs 41 +/- 10 minutes (p = 0.87) or the objective structured assessment of technical skills score of 8.8 +/- 2.3 vs 8.2 +/- 2.2 (p = 0.24), respectively. Bladder leak occurred in 3 (30%) of the pelvic trainer trained and 6 (60%) of the virtual reality trained medical student laparoscopic cystorrhaphy procedures (Fisher exact test p = 0.37). The only significant difference between the 2 groups was that 4 virtual reality

  1. Simulation - modeling - experiment; Simulation - modelisation - experience

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    After two workshops held in 2001 on the same topics, and in order to take stock of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state of the art of tools, methods and experiments in the domains of interest of the Gedepeon research group, and the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 of the 19 presentations (slides) given at this workshop, dealing with: deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); progress status of the MCNP/TRIO-U neutronics/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); perturbation methods and sensitivity analysis of nuclear data in reactor physics, with application to the VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high-energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutron and light particle production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F

  2. An ab initio chemical reaction model for the direct simulation Monte Carlo study of non-equilibrium nitrogen flows.

    Science.gov (United States)

    Mankodi, T K; Bhandarkar, U V; Puranik, B P

    2017-08-28

    A new ab initio based chemical model for Direct Simulation Monte Carlo (DSMC) studies, suitable for simulating rarefied flows with a high degree of non-equilibrium, is presented. To this end, Collision Induced Dissociation (CID) cross sections for N2 + N2 → N2 + 2N are calculated and published using a global complete active space self-consistent field / complete active space second-order perturbation theory N4 potential energy surface and a quasi-classical trajectory algorithm for high-energy collisions (up to 30 eV). CID cross sections are calculated for only a selected set of ro-vibrational combinations of the two nitrogen molecules, and a fitting scheme based on spectroscopic weights is presented to interpolate the CID cross section for all possible ro-vibrational combinations. The new chemical model is validated by calculating equilibrium reaction rate coefficients that compare well with existing shock tube and computational results. High-enthalpy hypersonic nitrogen flows around a cylinder in the transition flow regime are simulated using DSMC to compare the predictions of the current ab initio based chemical model with the prevailing phenomenological model (the total collision energy model). The differences in the predictions are discussed.

  3. Assessment of input function distortions on kinetic model parameters in simulated dynamic 82Rb PET perfusion studies

    International Nuclear Information System (INIS)

    Meyer, Carsten; Peligrad, Dragos-Nicolae; Weibrecht, Martin

    2007-01-01

    Cardiac 82Rb dynamic PET studies allow quantifying absolute myocardial perfusion by using tracer kinetic modeling. Here, the accurate measurement of the input function, i.e. the tracer concentration in blood plasma, is a major challenge. This measurement is deteriorated by inappropriate temporal sampling, spillover, etc. Such effects may influence the measured input peak value and the measured blood pool clearance. The aim of our study is to evaluate the effect of input function distortions on the myocardial perfusion as estimated by the model. To this end, we simulate noise-free myocardium time activity curves (TACs) with a two-compartment kinetic model. The input function to the model is a generic analytical function. Distortions of this function were introduced by varying its parameters. Using the distorted input function, the compartment model was fitted to the simulated myocardium TAC. This analysis was performed for various sets of model parameters covering a physiologically relevant range. The evaluation shows that a ±10% error in the input peak value can easily lead to a ±10-25% error in the model parameter K1, which relates to myocardial perfusion. Variations in the input function tail are generally less relevant. We conclude that an accurate estimation, especially of the plasma input peak, is crucial for a reliable kinetic analysis and blood flow estimation.
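
    A hedged sketch of the experiment's logic: simulate a noise-free tissue time-activity curve from a known input function, perturb only the input peak, and refit. The one-tissue (two-compartment) convolution model, the gamma-variate-like input, and all parameter values below are illustrative stand-ins for the paper's generic analytical input.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 10, 601)                    # minutes
cp = 100 * t * np.exp(-t / 0.5)                # stand-in analytical input (a.u.)

def tissue_tac(t, K1, k2, cin):
    """One-tissue compartment model: C_T = K1 * [exp(-k2 t) convolved with C_p]."""
    dt = t[1] - t[0]
    return K1 * dt * np.convolve(cin, np.exp(-k2 * t))[: len(t)]

true_K1, true_k2 = 0.6, 0.2
tac = tissue_tac(t, true_K1, true_k2, cp)      # noise-free "measured" myocardium TAC

cp_dist = cp.copy()
cp_dist[t < 1.5] *= 1.10                       # +10% error on the input peak only
popt, _ = curve_fit(lambda tt, K1, k2: tissue_tac(tt, K1, k2, cp_dist),
                    t, tac, p0=(0.5, 0.1))
print("fitted K1 with +10%% input-peak error: %.3f (true %.3f)" % (popt[0], true_K1))
```

    Because the measured TAC is held fixed while the assumed input peak is inflated, the refit compensates with a biased K1, which is the mechanism behind the 10-25% parameter errors the study reports.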

  4. Simulation Studies as Designed Experiments: The Comparison of Penalized Regression Models in the “Large p, Small n” Setting

    Science.gov (United States)

    Chaibub Neto, Elias; Bare, J. Christopher; Margolin, Adam A.

    2014-01-01

    New algorithms are continuously proposed in computational biology. Performance evaluation of novel methods is important in practice. Nonetheless, the field experiences a lack of rigorous methodology aimed to systematically and objectively evaluate competing approaches. Simulation studies are frequently used to show that a particular method outperforms another. Often times, however, simulation studies are not well designed, and it is hard to characterize the particular conditions under which different methods perform better. In this paper we propose the adoption of well established techniques in the design of computer and physical experiments for developing effective simulation studies. By following best practices in planning of experiments we are better able to understand the strengths and weaknesses of competing algorithms leading to more informed decisions about which method to use for a particular task. We illustrate the application of our proposed simulation framework with a detailed comparison of the ridge-regression, lasso and elastic-net algorithms in a large scale study investigating the effects on predictive performance of sample size, number of features, true model sparsity, signal-to-noise ratio, and feature correlation, in situations where the number of covariates is usually much larger than sample size. Analysis of data sets containing tens of thousands of features but only a few hundred samples is nowadays routine in computational biology, where “omics” features such as gene expression, copy number variation and sequence data are frequently used in the predictive modeling of complex phenotypes such as anticancer drug response. The penalized regression approaches investigated in this study are popular choices in this setting and our simulations corroborate well established results concerning the conditions under which each one of these methods is expected to perform best while providing several novel insights. PMID:25289666
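
    A compact sketch of such a designed simulation in the "large p, small n" setting, using scikit-learn; the factor levels and fixed regularization strengths below are illustrative assumptions (the study would tune hyperparameters and replicate each design point many times).

```python
import numpy as np
from itertools import product
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)

def make_data(n, p, beta, snr):
    """Draw one data set with fixed true coefficients and target SNR."""
    X = rng.normal(size=(n, p))
    signal = X @ beta
    return X, signal + rng.normal(scale=signal.std() / snr, size=n)

models = {"ridge": Ridge(alpha=1.0),
          "lasso": Lasso(alpha=0.1, max_iter=5000),
          "enet": ElasticNet(alpha=0.1, l1_ratio=0.5, max_iter=5000)}

# Small factorial design over (p, sparsity, SNR), with n fixed at 100 (n << p)
for p, sparsity, snr in product([1000, 2000], [0.01, 0.2], [1.0, 3.0]):
    beta = np.zeros(p)
    k = max(1, int(sparsity * p))
    beta[:k] = rng.normal(size=k)              # k nonzero true effects
    X_tr, y_tr = make_data(100, p, beta, snr)  # train and test share one truth
    X_te, y_te = make_data(100, p, beta, snr)
    scores = {name: round(r2_score(y_te, m.fit(X_tr, y_tr).predict(X_te)), 3)
              for name, m in models.items()}
    print(f"p={p:5d} sparsity={sparsity:.2f} snr={snr}: {scores}")
```

    Crossing the factors rather than varying one at a time is the point of the designed-experiment view: it exposes, for example, that lasso's advantage is confined to the sparse, high-SNR corners of the design.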

  5. Simulation studies as designed experiments: the comparison of penalized regression models in the "large p, small n" setting.

    Science.gov (United States)

    Chaibub Neto, Elias; Bare, J Christopher; Margolin, Adam A

    2014-01-01

    New algorithms are continuously proposed in computational biology. Performance evaluation of novel methods is important in practice. Nonetheless, the field experiences a lack of rigorous methodology aimed to systematically and objectively evaluate competing approaches. Simulation studies are frequently used to show that a particular method outperforms another. Often times, however, simulation studies are not well designed, and it is hard to characterize the particular conditions under which different methods perform better. In this paper we propose the adoption of well established techniques in the design of computer and physical experiments for developing effective simulation studies. By following best practices in planning of experiments we are better able to understand the strengths and weaknesses of competing algorithms leading to more informed decisions about which method to use for a particular task. We illustrate the application of our proposed simulation framework with a detailed comparison of the ridge-regression, lasso and elastic-net algorithms in a large scale study investigating the effects on predictive performance of sample size, number of features, true model sparsity, signal-to-noise ratio, and feature correlation, in situations where the number of covariates is usually much larger than sample size. Analysis of data sets containing tens of thousands of features but only a few hundred samples is nowadays routine in computational biology, where "omics" features such as gene expression, copy number variation and sequence data are frequently used in the predictive modeling of complex phenotypes such as anticancer drug response. The penalized regression approaches investigated in this study are popular choices in this setting and our simulations corroborate well established results concerning the conditions under which each one of these methods is expected to perform best while providing several novel insights.

  6. Simulation studies as designed experiments: the comparison of penalized regression models in the "large p, small n" setting.

    Directory of Open Access Journals (Sweden)

    Elias Chaibub Neto

    New algorithms are continuously proposed in computational biology. Performance evaluation of novel methods is important in practice. Nonetheless, the field experiences a lack of rigorous methodology aimed to systematically and objectively evaluate competing approaches. Simulation studies are frequently used to show that a particular method outperforms another. Often times, however, simulation studies are not well designed, and it is hard to characterize the particular conditions under which different methods perform better. In this paper we propose the adoption of well established techniques in the design of computer and physical experiments for developing effective simulation studies. By following best practices in planning of experiments we are better able to understand the strengths and weaknesses of competing algorithms leading to more informed decisions about which method to use for a particular task. We illustrate the application of our proposed simulation framework with a detailed comparison of the ridge-regression, lasso and elastic-net algorithms in a large scale study investigating the effects on predictive performance of sample size, number of features, true model sparsity, signal-to-noise ratio, and feature correlation, in situations where the number of covariates is usually much larger than sample size. Analysis of data sets containing tens of thousands of features but only a few hundred samples is nowadays routine in computational biology, where "omics" features such as gene expression, copy number variation and sequence data are frequently used in the predictive modeling of complex phenotypes such as anticancer drug response. The penalized regression approaches investigated in this study are popular choices in this setting and our simulations corroborate well established results concerning the conditions under which each one of these methods is expected to perform best while providing several novel insights.

  7. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments.

    Science.gov (United States)

    Santos, José; Monteagudo, Angel

    2011-02-21

    As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Here we used a genetic algorithm to search for better-adapted hypothetical codes and as a way to gauge the difficulty of finding such alternative codes, allowing the canonical code to be clearly situated in the fitness landscape. This novel use of evolutionary computing provides a new perspective in the open debate between the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum under both models, when the more realistic model of codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Simulated evolution clearly reveals that the canonical genetic code is far from optimal. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the fact that the best possible codes show the patterns of the
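
    A toy sketch of the engineering-approach machinery: a mutation-only genetic algorithm searching over codon-to-amino-acid assignments to minimize the mean squared change of an amino-acid property under single-point mutations. The property values, GA settings, and the absence of stop codons and codon-reassignment constraints are all simplifying assumptions (studies of this kind typically use a measured property such as Woese's polar requirement and the restricted reassignment model described above).

```python
import numpy as np

rng = np.random.default_rng(7)
BASES = "UCAG"
CODONS = [a + b + c for a in BASES for b in BASES for c in BASES]
# Indices of the nine single-point-mutation neighbours of each codon
NEIGHBORS = [[CODONS.index(c[:p] + b + c[p + 1:])
              for p in range(3) for b in BASES if b != c[p]] for c in CODONS]

N_AA = 20
prop = rng.uniform(4.0, 13.0, N_AA)  # placeholder amino-acid property values

def cost(code):
    """Mean squared property change over all single-point mutations."""
    diffs = [(prop[code[i]] - prop[code[j]]) ** 2
             for i in range(64) for j in NEIGHBORS[i]]
    return float(np.mean(diffs))

def random_code():
    """Map 64 codons to 20 amino acids, each amino acid used at least once."""
    code = rng.integers(0, N_AA, 64)
    code[:N_AA] = rng.permutation(N_AA)
    return code

# Minimal mutation-only GA with elitist truncation selection
pop = [random_code() for _ in range(20)]
for gen in range(100):
    pop.sort(key=cost)
    pop = pop[:10]                       # keep the fittest half
    for parent in list(pop):
        child = parent.copy()
        child[rng.integers(64)] = rng.integers(N_AA)  # one random reassignment
        pop.append(child)

best = min(pop, key=cost)
print("random code cost: %.3f   best evolved cost: %.3f"
      % (cost(random_code()), cost(best)))
```

    The gap between a random code, the evolved optimum, and the canonical code is exactly what lets such a study situate the canonical code in the fitness landscape.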

  8. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    Directory of Open Access Journals (Sweden)

    Monteagudo Ángel

    2011-02-01

    Background: As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Results: Here we used a genetic algorithm to search for better-adapted hypothetical codes and as a way to gauge the difficulty of finding such alternative codes, allowing the canonical code to be clearly situated in the fitness landscape. This novel use of evolutionary computing provides a new perspective in the open debate between the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum under both models, when the more realistic model of codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Conclusions: Simulated evolution clearly reveals that the canonical genetic code is far from optimal. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the

  9. Sensitivity and requirement of improvements of four soybean crop simulation models for climate change studies in Southern Brazil

    Science.gov (United States)

    Battisti, R.; Sentelhas, P. C.; Boote, K. J.

    2017-12-01

    Crop growth models have many uncertainties that affect the yield response to climate change. Based on that, the aim of this study was to evaluate the sensitivity of crop models to systematic changes in climate for simulating soybean attainable yield in Southern Brazil. Four crop models were used to simulate yields: AQUACROP, MONICA, DSSAT, and APSIM, as well as their ensemble. The simulations were performed considering changes of air temperature (0, +1.5, +3.0, +4.5, and +6.0 °C), [CO2] (380, 480, 580, 680, and 780 ppm), rainfall (-30, -15, 0, +15, and +30%), and solar radiation (-15, 0, +15%), applied to daily values. The baseline climate was from 1961 to 2014, totaling 53 crop seasons. The crop models simulated a reduction of attainable yield with temperature increase, reaching 2000 kg ha-1 for the ensemble at +6 °C, mainly due to a shorter crop cycle. For rainfall, yield decreased at a higher rate when rainfall was diminished than it increased when rainfall was raised. The crop models increased yield variability when solar radiation was changed from -15 to +15%, whereas [CO2] rise resulted in yield gains following an asymptotic response, with a mean increase of 31% from 380 to 680 ppm. The models used require further attention to improvements in optimal and maximum cardinal temperatures for development rate; runoff, water infiltration, deep drainage, and the dynamics of root growth; photosynthesis parameters related to soil water availability; and the energy balance of the soil-plant system to define leaf temperature under elevated CO2.
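
    A sketch of the delta-perturbation harness such a sensitivity study implies: systematic offsets applied to daily weather values, one factor at a time, around a crop-model call. The run_crop_model function below is an invented stand-in (the study ran AQUACROP, MONICA, DSSAT, and APSIM), so only the harness, not the numbers, is meaningful.

```python
import numpy as np

def run_crop_model(weather, co2):
    """Stand-in for a point-based crop model call (AQUACROP, DSSAT, ...).
    Returns an illustrative 'yield' so the harness runs end to end."""
    t_stress = np.clip(1.0 - 0.04 * (weather["tmax"].mean() - 30.0), 0.0, 1.0)
    w_stress = np.clip(weather["rain"].sum() / 600.0, 0.0, 1.0)
    co2_gain = 1.0 + 0.31 * (min(co2, 680) - 380) / 300.0  # asymptotic-ish gain
    return 3500.0 * t_stress * w_stress * co2_gain         # kg/ha

rng = np.random.default_rng(3)
base = {"tmax": rng.normal(29, 3, 120),       # one season of daily values
        "rain": rng.gamma(0.4, 12, 120),
        "srad": rng.normal(20, 4, 120)}

# One-at-a-time sensitivity runs, mirroring the study's delta design
for dt in (0.0, 1.5, 3.0, 4.5, 6.0):
    w = dict(base, tmax=base["tmax"] + dt)
    print(f"+{dt} degC  -> {run_crop_model(w, 380):7.0f} kg/ha")
for frac in (-0.30, -0.15, 0.0, 0.15, 0.30):
    w = dict(base, rain=base["rain"] * (1 + frac))
    print(f"rain {frac:+.0%} -> {run_crop_model(w, 380):7.0f} kg/ha")
```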

  10. A comparative study of shallow groundwater level simulation with three time series models in a coastal aquifer of South China

    Science.gov (United States)

    Yang, Q.; Wang, Y.; Zhang, J.; Delgado, J.

    2017-05-01

    Accurate and reliable groundwater level forecasting models can help ensure the sustainable use of a watershed's aquifers for urban and rural water supply. In this paper, three time series analysis methods, Holt-Winters (HW), integrated time series (ITS), and seasonal autoregressive integrated moving average (SARIMA), are explored to simulate the groundwater level in a coastal aquifer of China. Monthly groundwater table depth data collected over a long time series, from 2000 to 2011, are simulated and compared with these three time series models. The error criteria are estimated using the coefficient of determination (R2), the Nash-Sutcliffe model efficiency coefficient (E), and the root-mean-square error (RMSE). The results indicate that all three models are accurate in reproducing the historical time series of groundwater levels. The comparison of the three models shows that the HW model is more accurate in predicting groundwater levels than the SARIMA and ITS models. It is recommended that additional studies explore this proposed method, which can in turn be used to facilitate the development and implementation of more effective and sustainable groundwater management strategies.
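
    Two of the three methods can be sketched directly with statsmodels (the ITS model has no standard off-the-shelf implementation and is omitted here); the synthetic monthly depth series, model orders, and holdout length below are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
t = np.arange(144)                        # 12 years of monthly depths (synthetic)
depth = 5 + 0.01 * t + 0.8 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.15, 144)
y = pd.Series(depth, index=pd.date_range("2000-01", periods=144, freq="MS"))
train, test = y[:-24], y[-24:]            # hold out the last two years

def nse(obs, sim):
    """Nash-Sutcliffe model efficiency coefficient (E)."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                          seasonal_periods=12).fit()
sar = SARIMAX(train, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)

for name, pred in (("HW", hw.forecast(24)), ("SARIMA", sar.forecast(24))):
    rmse = np.sqrt(np.mean((test.values - pred.values) ** 2))
    print(f"{name}: E={nse(test.values, pred.values):.3f}  RMSE={rmse:.3f} m")
```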

  11. Preliminary Study of Soil Available Nutrient Simulation Using a Modified WOFOST Model and Time-Series Remote Sensing Observations

    Directory of Open Access Journals (Sweden)

    Zhiqiang Cheng

    2018-01-01

    The approach of using multispectral remote sensing (RS) to estimate soil available nutrients (SANs) has been developed recently and shows promising results. This method overcomes the limitations of commonly used methods by building a statistical model that connects RS-based crop growth and nutrient content. However, the stability and accuracy of this model require improvement. In this article, we replaced the statistical model by integrating the World Food Studies (WOFOST) model and time series of remote sensing (T-RS) observations to ensure stability and accuracy. A time series of HJ-1 A/B data was assimilated into the WOFOST model to extrapolate crop growth simulations from a single point to a large area using a specific assimilation method. Because nutrient-limited growth within the growing season is required and the SAN parameters can only be used at the end of the growing season in the original model, the WOFOST model was modified: the calculation order was changed, and new soil nutrient uptake algorithms were implemented in the model for nutrient-limited growth estimation. Finally, experiments were conducted in the spring maize plots of Hongxing Farm to analyze the effects of nutrient stress on crop growth and the SAN simulation accuracy. The results confirm the differences in crop growth status caused by a lack of soil nutrients. The new approach can take advantage of these differences to provide better SAN estimates. In general, the new approach can overcome the limitations of existing methods and simulate the SAN status with reliable accuracy.

  12. Factors influencing the renal arterial Doppler waveform: a simulation study using an electrical circuit model (secondary publication)

    International Nuclear Information System (INIS)

    Sung, Chang Kyu; Han, Bong Soo; Kim, Seung Hyup

    2016-01-01

    The goal of this study was to evaluate the effect of vascular compliance, resistance, and pulse rate on the resistive index (RI) by using an electrical circuit model to simulate renal blood flow. In order to analyze the renal arterial Doppler waveform, we modeled the renal blood-flow circuit with an equivalent simple electrical circuit containing resistance, inductance, and capacitance. The relationships among the impedance, resistance, and compliance of the circuit were derived from well-known equations, including Kirchhoff's current law for alternating current circuits. Simulated velocity-time profiles for pulsatile flow were generated using Mathematica (Wolfram Research), and the influence of resistance, compliance, and pulse rate on waveforms and the RI was evaluated. Resistance and compliance were found to alter the waveforms independently. The impedance of the circuit increased with increasing proximal compliance, proximal resistance, and distal resistance, and decreased with increasing distal compliance. The RI of the circuit decreased with increasing proximal compliance and resistance, and increased with increasing distal compliance and resistance. No positive correlation between impedance and the RI was found. Pulse rate was found to be an extrinsic factor that also influenced the RI. This simulation study using an electrical circuit model led to a better understanding of the renal arterial Doppler waveform and the RI, which may be useful for interpreting Doppler findings in various clinical settings.
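
    A rough sketch of the idea, assuming a series proximal resistance feeding a parallel distal resistance-compliance block (the paper's circuit also includes inductance and was solved analytically in Mathematica): the flow waveform is obtained per harmonic from the impedance, and the RI is read off the reconstructed waveform. Parameter values are in arbitrary units, chosen only to reproduce the reported directions of change.

```python
import numpy as np

fs, T = 1000, 1.0                              # sample rate (Hz), cardiac period (s)
t = np.arange(0, T, 1 / fs)
pressure = 90 + 15 * np.sin(2 * np.pi * t / T) # driving pressure (single harmonic)

def resistive_index(Rp, Rd, C):
    """Flow from the frequency-domain solution of Z(w) = Rp + Rd/(1 + jw*Rd*C),
    then RI = (peak - minimum) / peak of the flow waveform."""
    P = np.fft.rfft(pressure)
    w = 2 * np.pi * np.fft.rfftfreq(len(t), 1 / fs)
    Z = Rp + Rd / (1 + 1j * w * Rd * C)
    q = np.fft.irfft(P / Z, n=len(t))
    return (q.max() - q.min()) / q.max()

print("baseline (Rd=4, C=0.3):        RI = %.3f" % resistive_index(1.0, 4.0, 0.30))
print("lower distal compliance (0.1): RI = %.3f" % resistive_index(1.0, 4.0, 0.10))
print("higher distal resistance (5):  RI = %.3f" % resistive_index(1.0, 5.0, 0.30))
```

    With these values the RI falls when distal compliance falls and rises when distal resistance rises, matching the directions reported in the abstract.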

  13. Mathematical Model of Innate and Adaptive Immunity of Sepsis: A Modeling and Simulation Study of Infectious Disease.

    Science.gov (United States)

    Shi, Zhenzhen; Wu, Chih-Hang J; Ben-Arieh, David; Simpson, Steven Q

    2015-01-01

    Sepsis is a systemic inflammatory response (SIR) to infection. In this work, a system dynamics mathematical model (SDMM) is examined to describe the basic components of SIR and sepsis progression. Both innate and adaptive immunity are included, and simulated results in silico have shown that adaptive immunity has significant impacts on the outcomes of sepsis progression. Further investigation found that the intervention timing, the intensity of anti-inflammatory cytokines, and the initial pathogen load are highly predictive of the outcome of a sepsis episode. Sensitivity and stability analyses were carried out using bifurcation analysis to explore system stability under various initial and boundary conditions. The stability analysis suggested that the system could diverge at an unstable equilibrium after perturbations if rt2max (the maximum release rate of tumor necrosis factor-α (TNF-α) by neutrophils) falls below a certain level. This finding conforms to clinical findings and the existing literature regarding the lack of efficacy of anti-TNF antibody therapy.
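
    A generic reduced sketch, not the paper's SDMM, showing how a sweep over an rt2max-like TNF-α release parameter separates pathogen clearance from persistence; the equations, rates, and thresholds below are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sepsis_rhs(t, y, r_t2max):
    """Toy innate-response dynamics: pathogen P, neutrophils N, cytokine T."""
    P, N, T = y
    dP = 0.8 * P * (1 - P / 1e6) - 2e-4 * N * P       # logistic growth minus killing
    dN = 0.05 + 0.05 * T - 0.05 * N                   # cytokine-driven recruitment
    dT = r_t2max * N * P / (P + 1e4) - 1.0 * T        # TNF-alpha release and decay
    return [dP, dN, dT]

for r in (0.5, 2.0, 8.0):                             # sweep the release parameter
    sol = solve_ivp(sepsis_rhs, (0, 300), [1e3, 1.0, 0.0],
                    args=(r,), max_step=0.5)
    outcome = "cleared" if sol.y[0, -1] < 1.0 else "persistent"
    print(f"r_t2max={r}: final pathogen load {sol.y[0, -1]:.2e} ({outcome})")
```

    Below a critical release rate the killing term never overtakes pathogen growth, which is the qualitative threshold behaviour the bifurcation analysis formalizes.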

  14. Studying furosemide solubilization using an in vitro model simulating gastrointestinal digestion and drug solubilization in neonates and young infants

    DEFF Research Database (Denmark)

    Klitgaard, Mette; Sassene, Philip Jonas; Selen, Arzu

    2017-01-01

    OBJECTIVE: The aim of the present study was to study the oral performance of furosemide in neonates and young infants using a newly developed in vitro model simulating digestion and drug solubilization in the gastrointestinal (GI) tract of the human neonate and young infant population (age 0 …). The model setup was based on the dynamic in vitro lipolysis model previously described by Fernandez et al. (2009). The amount of furosemide solubilized in the aqueous phase during a digestion study was used as an estimate of the amount of drug available for absorption in vivo. By varying different factors …, the results suggest that the oral performance of furosemide in neonates and young infants will be increased by the presence of food (frequent feedings) due to increased drug solubilization, but is not influenced by the GI digestion of this food. The properties of the dosage form (immediate release tablets) did not affect …

  15. Similarities and differences of serotonin and its precursors in their interactions with model membranes studied by molecular dynamics simulation

    Science.gov (United States)

    Wood, Irene; Martini, M. Florencia; Pickholz, Mónica

    2013-08-01

    In this work, we report a molecular dynamics (MD) simulation study of biologically relevant molecules, serotonin (neutral and protonated) and its precursors, tryptophan and 5-hydroxy-tryptophan, in a fully hydrated bilayer of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidyl-choline (POPC). The simulations were carried out in the fluid lamellar phase of POPC under constant pressure and temperature conditions. Two guest molecules of each type were initially placed in the water phase. We analyzed the main localization, preferential orientation, and specific interactions of the guest molecules within the bilayer. During the simulation run, the four molecules were preferentially found at the water-lipid interface. We found that the interactions that stabilize the systems are essentially hydrogen bonds, salt bridges, and cation-π interactions. None of the guest molecules has access to the hydrophobic region of the bilayer. Moreover, the zwitterionic molecules have access to the water phase, while protonated serotonin is anchored at the interface. Even taking into account that these simulations were done using a model membrane, our results suggest that the studied molecules could not cross the blood-brain barrier by diffusion. These results are in good agreement with works showing that serotonin and Trp do not cross the BBB by simple diffusion.

  16. Study of Sediment Transportation in the Gulf of Kachchh, using 3D Hydro-dynamic Model Simulation and Satellite Data

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    friction boundary, discharge from the river boundary and the Gulf-open ocean (open) boundary are defined and used. The programs constituting the COSMOS model were executed along with initial input cards to simulate the model using an Alpha mini-computer system...

  17. TREAT Modeling and Simulation Strategy

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, Mark David [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report describes a four-phase process that defines the strategy for developing modeling and simulation software for the Transient Reactor Test Facility (TREAT). The four phases of this research and development task are (1) full core transient calculations with feedback, (2) experiment modeling, (3) full core plus experiment simulation, and (4) quality assurance. The document describes the four phases, the relationship between these research phases, and the anticipated needs within each phase.

  18. Simulation of transport and chemical transformation of aircraft exhaust at the tropopause region: Box model studies

    Energy Technology Data Exchange (ETDEWEB)

    Petry, H.; Lippert, E.; Hendricks, J.; Ebel, A. [Koeln Univ. (Germany). Inst. fuer Geophysik und Meteorologie

    1997-12-01

    Within the framework of the STRATFLUT project (Simulation of the transport and the chemical transformation of aircraft exhaust at the tropopause region), a chemistry mechanism for applications in the tropopause region was developed and continuously improved (CHEST/CHEST2). This mechanism has been applied in various sensitivity studies and to the evaluation of effective aircraft emission indices. In particular, an increase of ozone production due to air-traffic-induced NOx emissions is found. This increase depends in a non-linear manner on the atmospheric background conditions into which the exhaust is released, on the altitude of release (absolute and relative to the tropopause), on the emission amount, on the time of day of the release, on season, and on aerosol loading. The effect on ozone of NOx released during one day by a fleet of 10 aircraft into a box was found to vary between 0.05 ppbv and 2.3 ppbv (relative changes between approximately 0.02% and 6.57%), depending on the specific assumptions of the respective experiment. (orig.) 144 figs., 42 tabs., 497 refs.

  19. 3D-printed soft-tissue physical models of renal malignancies for individualized surgical simulation: a feasibility study.

    Science.gov (United States)

    Maddox, Michael M; Feibus, Allison; Liu, James; Wang, Julie; Thomas, Raju; Silberstein, Jonathan L

    2018-03-01

    To allow pre-operative and robotic-training surgical simulation, patient-specific physical three-dimensional (3D) models of renal units were constructed with materials that approximate the properties of renal tissue. 3D physical kidney models were created (3D Systems, Rock Hill, SC) using computerized tomography to segment the structures of interest (parenchyma, vasculature, collecting system, and tumor). Images were converted to a 3D surface mesh file for fabrication using a multi-jet 3D printer. A novel construction technique was employed to approximate normal renal tissue texture: the printer selectively deposited photopolymer material forming the outer shell of the kidney, and an agarose gel solution was subsequently injected into the inner cavity, recreating the spongier renal parenchyma. We constructed seven models of renal units with suspected malignancies. Partial nephrectomy and renorrhaphy were performed on each of the replicas, and all patients subsequently underwent successful robotic partial nephrectomy. Average tumor diameter was 4.4 cm, warm ischemia time was 25 min, RENAL nephrometry score was 7.4, and surgical margins were negative. A comparison was made between the seven cases and the Tulane Urology prospectively maintained robotic partial nephrectomy database. Patients with surgical models had larger tumors, higher nephrometry scores, longer warm ischemic times, fewer positive surgical margins, shorter hospitalizations, and fewer post-operative complications; however, the only significant finding was lower estimated blood loss (186 cc vs 236; p = 0.01). In this feasibility study, pre-operative resectable physical 3D models were constructed and used as patient-specific surgical simulation tools; further study will need to demonstrate whether this results in improved surgical outcomes and robotic simulation education.

  20. Independent power producer parallel operation modeling in transient network simulations for interconnected distributed generation studies

    Energy Technology Data Exchange (ETDEWEB)

    Moura, Fabricio A.M.; Camacho, Jose R. [Universidade Federal de Uberlandia, School of Electrical Engineering, Rural Electricity and Alternative Sources Lab, PO Box 593, 38400.902 Uberlandia, MG (Brazil); Chaves, Marcelo L.R.; Guimaraes, Geraldo C. [Universidade Federal de Uberlandia, School of Electrical Engineering, Power Systems Dynamics Group, PO Box: 593, 38400.902 Uberlandia, MG (Brazil)

    2010-02-15

    The main task of this paper is to present a performance analysis of a distribution network in the presence of an independent power producer (IP) synchronous generator, with its speed governor and voltage regulator modeled using TACS (Transient Analysis of Control Systems), for distributed generation studies. The regulators were implemented through their transfer functions in the S domain. However, since ATP-EMTP (Electromagnetic Transients Program) works in the time domain, a discretization is necessary to return the TACS output to the time domain. It must be highlighted that this generator is driven by a steam turbine, and the whole system, with regulators and the equivalent of the power authority system at the common coupling point (CCP), is modeled in ATP-EMTP (Alternative Transients Program). (author)

  1. FASTBUS simulation models in VHDL

    International Nuclear Information System (INIS)

    Appelquist, G.

    1992-11-01

    Four hardware simulation models implementing the FASTBUS protocol are described. The models are written in the VHDL hardware description language to obtain portability, i.e. independence from any specific simulator. They include two complete FASTBUS devices, a full-duplex segment interconnect, and ancillary logic for the segment. In addition, master and slave models using a high-level interface to describe FASTBUS operations are presented. With these models, different configurations of FASTBUS systems can be evaluated and the FASTBUS transactions of new devices can be verified. (au)

  2. A robust simulation-optimization modeling system for effluent trading--a case study of nonpoint source pollution control.

    Science.gov (United States)

    Zhang, J L; Li, Y P; Huang, G H

    2014-04-01

    In this study, a robust simulation-optimization modeling system (RSOMS) is developed for supporting agricultural nonpoint source (NPS) effluent trading planning. The RSOMS can enhance effluent trading through the incorporation of a distributed simulation model and an optimization model within its framework. The modeling system can not only handle uncertainties expressed as probability density functions and interval values but also deal with the variability of second-stage costs above the expected level, capturing the notion of risk under high-variability situations. A case study is conducted for mitigating agricultural NPS pollution with an effluent trading program in the Xiangxi watershed. Compared with a non-trading policy, the trading scheme can successfully mitigate agricultural NPS pollution with an increased system benefit. Through the trading scheme, [213.7, 288.8] × 10³ kg of TN and [11.8, 30.2] × 10³ kg of TP emissions from the cropped area can be cut over the planning horizon. The results can help identify desired effluent trading schemes for water quality management, with the tradeoff between system benefit and reliability balanced and risk aversion considered.
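
    The allocation core of an effluent-trading scheme can be sketched as a small linear program; the deterministic toy below (invented costs, caps, and trading ratio) ignores the RSOMS's distributed watershed simulation and its probabilistic/interval uncertainty handling, and only illustrates how a trading ratio shifts abatement toward the cheaper source.

```python
from scipy.optimize import linprog

# Least-cost allocation of TN abatement between a point source and cropped
# (nonpoint) area under a trading ratio, for one planning period.
cap_reduction = 250.0e3     # kg TN that must be removed basin-wide (invented)
cost = [8.0, 3.0]           # $/kg abated: point source vs agricultural BMPs
trading_ratio = 2.0         # kg of nonpoint abatement per kg of credit

# x = [point abatement, nonpoint abatement]; nonpoint credits count at 1/ratio
A_ub = [[-1.0, -1.0 / trading_ratio]]
b_ub = [-cap_reduction]
bounds = [(0, 300e3), (0, 400e3)]         # feasible abatement ranges

res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("point-source abatement: %.0f kg" % res.x[0])
print("nonpoint abatement:     %.0f kg" % res.x[1])
print("total cost:             $%.0f" % res.fun)
```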

  3. Interaction of the cardiovascular system with an implanted rotary assist device: simulation study with a refined computer model.

    Science.gov (United States)

    Vollkron, Michael; Schima, Heinrich; Huber, Leopold; Wieselthaler, Georg

    2002-04-01

    In recent years, implanted rotary pumps have achieved the level of extended clinical application, including complete mobilization and physical exercise of the recipients. A computer model was developed to study the interaction between a continuous-flow pump and the recovering cardiovascular system, the effects of changing pre- and afterloads, and the possibilities for indirect estimation of hemodynamic parameters and pump control. A numerical model of the cardiovascular system was established using Matlab Simulink simulation software. Data for the circulatory system modules were derived from patients, from our own in vitro and in vivo experiments, and from the literature. Special care was taken to properly simulate the dynamic pressure-volume characteristics of both the left and right ventricle, the Frank-Starling behavior, and the impedance of the proximal vessels. Excellent correlation with measured data was achieved, including pressure and flow patterns in the time domain, response to varying loads, and effects of the previously observed pressure-flow hysteresis in rotary pumps. Potential energy, external work, pressure-volume area, and other derived heart work parameters could be calculated. The model offers the possibility to perform parameter variations to study the effects of changing patient condition and therapy and to display them with three-dimensional graphics (demonstrated with the effects on right ventricular work and efficiency). The presented model gives an improved understanding of the interaction between the pump and both ventricles. It can be used for the investigation of various clinical and control questions in normal and pathological conditions of the left ventricular assist device recipient.

  4. Sensitivity Studies on the Influence of Aerosols on Cloud and Precipitation Development Using WRF Mesoscale Model Simulations

    Science.gov (United States)

    Thompson, G.; Eidhammer, T.; Rasmussen, R.

    2011-12-01

    Using the WRF model in simulations of shallow and deep precipitating cloud systems, we investigated the sensitivity to aerosols acting as cloud condensation nuclei and ice nuclei. A global climatological dataset of sulfates, sea salts, and dust was used as input for a control experiment. Sensitivity experiments with significantly more polluted conditions were conducted to analyze the resulting impacts on cloud and precipitation formation. Simulations were performed using the WRF model with explicit treatment of aerosols added to the Thompson et al. (2008) bulk microphysics scheme. The modified scheme achieves droplet formation using pre-tabulated CCN activation tables provided by a parcel model. Ice nucleation is parameterized as a function of dust aerosols as well as homogeneous freezing of deliquesced aerosols. The basic processes of aerosol activation and removal by wet scavenging are considered, but aerosol characteristic size and hygroscopicity do not change due to evaporating droplets; in other words, aerosol processing is ignored. Unique aspects of this study include the use of one- to four-kilometer grid spacings and the direct parameterization of ice nucleation from aerosols rather than typical temperature and/or supersaturation relationships alone. Initial results from simulations of a deep winter cloud system and its interaction with significant orography show contrasting sensitivities in regions of warm rain versus mixed liquid and ice conditions. The classical view of higher precipitation amounts in relatively clean maritime clouds with fewer but larger droplets is confirmed for regions dominated by the warm-rain process. However, due to complex interactions with the ice phase and snow riming, the simulations revealed the reverse situation in high-terrain areas dominated by snow reaching the surface. Results for other cloud systems will be summarized at the conference.

  5. Simulation of Snowmelt Runoff Using SRM Model and Comparison With Neural Networks ANN and ANFIS (Case Study: Kardeh dam basin

    Directory of Open Access Journals (Sweden)

    morteza akbari

    2017-03-01

    Introduction: Snowmelt runoff plays an important role in providing water and agricultural resources, especially in mountainous areas. Different methods exist to simulate the snowmelt process; among them, the degree-day model, based on a temperature index, is the most widely cited. The Snowmelt Runoff Model (SRM) is a conceptual hydrological model for simulating and predicting the daily flow of rivers in mountainous basins. A study comparing the accuracy of AVHRR and TM satellite images for determining snow cover in the Karun Basin found that overestimation of the snow-covered area decreased with increasing spatial resolution of the satellite data. Studies conducted in the Zayandehrood dam watershed showed that, for dates when no MODIS image exists, appropriate snow-cover maps can be estimated using a digital elevation model and regression analysis. In a study of snow cover in the mountainous Euphrates River region of eastern Turkey, data from five meteorological stations and MODIS images with a resolution of 500 m showed that satellite images have good accuracy in estimating snow cover. In a watershed in northern Pakistan, the SRM model was used with MODIS images to estimate snow cover over the period from 2000 to 2006. The purpose of this study was to evaluate snowmelt runoff using remote sensing data and the SRM model for flow simulation, based on statistical parameters, in the Kardeh dam basin. Materials and Methods: The Kardeh dam basin has an area of about 560 square kilometers and is located north of Mashhad. The area lies in the east of the Hezarmasjed-Kopehdagh zone, one of the main basins of Kashafrood, and is mountainous: about 261 km² of the basin is located above 2000 m. The lowest point of the basin, at the watershed outlet, is at 1300 meters, and the highest point in the basin, in the north-west part
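
    The degree-day SRM update the abstract refers to has a standard closed form, sketched below; the coefficients (degree-day factor a, runoff coefficients cS and cR, recession coefficient k) are illustrative, whereas in the study they would be calibrated and the snow-cover fraction S would come from satellite imagery. The basin area of 560 km² is taken from the abstract.

```python
import numpy as np

def srm_daily(T, P, S, A_km2, a=0.45, cs=0.75, cr=0.65, k=0.9, q0=5.0):
    """Classic SRM update:
    Q[n+1] = (cs*a*T[n]*S[n] + cr*P[n]) * A*10000/86400 * (1-k) + Q[n]*k
    with T in degree-days (degC*d), P in cm, S a snow-cover fraction,
    A in km2, a in cm/(degC*d); Q in m3/s."""
    conv = A_km2 * 10000.0 / 86400.0      # cm * km2 / day  ->  m3/s
    Q = [q0]
    for Tn, Pn, Sn in zip(T, P, S):
        inflow = (cs * a * max(Tn, 0.0) * Sn + cr * Pn) * conv
        Q.append(inflow * (1.0 - k) + Q[-1] * k)
    return np.array(Q[1:])

rng = np.random.default_rng(5)
days = 90                                  # one melt season (synthetic forcing)
T = np.clip(rng.normal(4, 3, days).cumsum() * 0.05 + 2, -5, 15)  # degree-days
P = rng.gamma(0.3, 0.8, days)              # precipitation, cm/day
S = np.linspace(0.9, 0.2, days)            # depleting snow-cover fraction
print("peak simulated flow: %.1f m3/s" % srm_daily(T, P, S, A_km2=560).max())
```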

  6. Laboratory modeling, field study, and numerical simulation of bioremediation of petroleum contaminants

    International Nuclear Information System (INIS)

    Livingston, R.J.; Islam, M.R.

    1999-01-01

    Historical methods of cleaning up petroleum hydrocarbons from the vadose zone, the capillary zone, and aquifers are not technically true cleanup technologies but rather transfer techniques. In addition, environmental engineers are realizing that standard remediation techniques are not entirely effective in removing the hazardous material in a reasonable time frame. Long-chain hydrocarbons such as kerosene, diesel, and waste oil are particularly difficult to remediate using conventional techniques. The use of bioremediation as an alternative remediation technology is fast becoming the technique of choice among many environmental professionals. This method offers substantial benefits not found in other remediation processes. Bioremediation is very cost effective, nondestructive, relatively uncomplicated to implement, requires no specialized equipment, and can be extremely effective in removing recalcitrant petroleum hydrocarbons. This study investigated the availability of viable microbial populations in the arid climate of South Dakota. Exponential growth of the bacteria and their ability to degrade long-chain hydrocarbons indicated that healthy populations do exist and could be used to mineralize organic hydrocarbons. Experimental results indicated that bioremediation can be effectively enhanced in landfills as well as in the subsurface using a supply of harmless nutrients. The biodegradation rate can be further enhanced with the use of an edible surfactant that helps disperse the petroleum products. Also, the use of hydrogen peroxide enhanced oxygen availability and increased the degradation rate. Interestingly, the bacterial growth rate was found to be high in difficult-to-biodegrade contaminants, such as waste oil. A numerical simulation program was also developed that describes bacterial growth in the subsurface along with the reduction in substrate (contamination). Results from this program were found to be consistent with laboratory
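
    The coupled bacterial-growth/substrate-reduction behavior that the authors' simulation program describes is commonly modeled with Monod kinetics. The sketch below is a generic Monod system with assumed parameter values, not the authors' program.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Monod growth with substrate (contaminant) depletion; parameter values
# are placeholders, not fitted to the South Dakota data.
mu_max, Ks, Y, kd = 0.25, 40.0, 0.5, 0.01   # 1/d, mg/L, gX/gS, 1/d

def rhs(t, y):
    X, S = y                                # biomass, substrate [mg/L]
    mu = mu_max * S / (Ks + S)              # Monod specific growth rate
    return [mu * X - kd * X,                # biomass growth minus decay
            -mu * X / Y]                    # substrate consumed per biomass

sol = solve_ivp(rhs, (0.0, 60.0), [5.0, 500.0], dense_output=True)
print(sol.y[:, -1])                         # final biomass and substrate
```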

  7. Modeling and CFD simulation of nutrient distribution in picoliter bioreactors for bacterial growth studies on single-cell level.

    Science.gov (United States)

    Westerwalbesloh, Christoph; Grünberger, Alexander; Stute, Birgit; Weber, Sophie; Wiechert, Wolfgang; Kohlheyer, Dietrich; von Lieres, Eric

    2015-11-07

    A microfluidic device for microbial single-cell cultivation of bacteria was modeled and simulated using COMSOL Multiphysics. The liquid velocity field and the mass transfer within the supply channels and cultivation chambers were calculated to gain insight into the distribution of supplied nutrients and metabolic products secreted by the cultivated bacteria. The goal was to identify potential substrate limitations or product accumulations within the cultivation device. The metabolic uptake and production rates, colony size, and growth medium composition were varied, covering a wide range of operating conditions. Simulations with glucose as the substrate did not show limitations within the typically used concentration range, but for alternative substrates limitations could not be ruled out. This lays the foundation for further studies and the optimization of existing picoliter bioreactor systems.

  8. Simulated Leaching (Migration) Study for a Model Container-Closure System Applicable to Parenteral and Ophthalmic Drug Products.

    Science.gov (United States)

    Jenke, Dennis; Egert, Thomas; Hendricker, Alan; Castner, James; Feinberg, Tom; Houston, Christopher; Hunt, Desmond G; Lynch, Michael; Nicholas, Kumudini; Norwood, Daniel L; Paskiet, Diane; Ruberto, Michael; Smith, Edward J; Holcomb, Frank; Markovic, Ingrid

    2017-01-01

    A simulated leaching (migration) study was performed on a model container-closure system relevant to parenteral and ophthalmic drug products. This container-closure system consisted of a linear low-density polyethylene bottle (primary container), a polypropylene cap and an elastomeric cap liner (closure), an adhesive label (labeling), and a foil overpouch (secondary container). The bottles were filled with simulating solvents (aqueous salt/acid mixture at pH 2.5, aqueous buffer at pH 9.5, and 1/1 v/v isopropanol/water), a label was affixed to the filled and capped bottles, the filled bottles were placed into the foil overpouch, and the filled and pouched units were stored either upright or inverted for up to 6 months at 40 °C. After storage, the leaching solutions were tested for leached substances using multiple complementary analytical techniques to address volatile, semi-volatile, and non-volatile organic and inorganic extractables as potential leachables. The leaching data generated supported several conclusions, including that (1) the extractables (leachables) profile revealed by a simulated leaching study can qualitatively be correlated with compositional information for the materials of construction, (2) the chemical nature of both the extracting medium and the individual extractables (leachables) can markedly affect the resulting profile, and (3) while direct contact between a drug product and a system's material of construction may exacerbate the leaching of substances from that material by the drug product, direct contact is not a prerequisite for migration and leaching to occur. LAY ABSTRACT: The migration of container-related extractables from a model pharmaceutical container-closure system into simulated drug product solutions was studied, focusing on circumstances relevant to parenteral and ophthalmic drug products. The model system was constructed specifically to address the migration of extractables from labels applied to the outside of the

  9. Tribology studies of the natural knee using an animal model in a new whole joint natural knee simulator.

    Science.gov (United States)

    Liu, Aiqin; Jennings, Louise M; Ingham, Eileen; Fisher, John

    2015-09-18

    The successful development of early-stage cartilage and meniscus repair interventions in the knee requires biomechanical and biotribological understanding of the design of the therapeutic interventions and their tribological function in the natural joint. The aim of this study was to develop and validate a porcine knee model using a whole-joint knee simulator for investigation of the tribological function and biomechanical properties of the natural knee, which could then be used to pre-clinically assess the tribological performance of cartilage and meniscal repair interventions prior to in vivo studies. The tribological performance of standard artificial bearings in terms of anterior-posterior (A/P) shear force was determined in a newly developed six-degrees-of-freedom tribological joint simulator. The porcine knee model was then developed, and the tribological properties in terms of shear force measurements were determined for the first time for three levels of biomechanical constraint: A/P constrained, spring-force semi-constrained, and A/P unconstrained conditions. The shear force measurements showed higher values under the A/P constrained condition (predominantly sliding motion) compared to the A/P unconstrained condition (predominantly rolling motion). This indicated that the shear force simulation model was able to differentiate between tribological behaviours when the femoral and tibial bearing was constrained to slide and/or roll. Therefore, this porcine knee model shows the potential capability to investigate the effect of knee structural, biomechanical and kinematic changes, as well as different cartilage substitution therapies, on the tribological function of natural knee joints. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  10. Simulation Model Development for Mail Screening Process

    National Research Council Canada - National Science Library

    Vargo, Trish; Marvin, Freeman; Kooistra, Scott

    2005-01-01

    STUDY OBJECTIVE: Provide decision analysis support to the Homeland Defense Business Unit, Special Projects Team, in developing a simulation model to help determine the most effective way to eliminate backlog...

  11. Comparative Study for Modeling Reactor Internal Geometry in CFD Simulation of PHWR Internal Flow

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Gong Hee; Woo, Sweng Woong; Cheong, Ae Ju [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2013-10-15

    The main objective of the present study is to compare results predicted using either the real geometry of the tubes or a porous medium assumption, and to assess the prediction performance of both methods. Estimating the local subcooling of the moderator in a CANDU calandria under transient conditions is one of the major concerns in CANDU safety analysis. Extensive CFD analyses have therefore been performed to predict the moderator temperature in a CANDU calandria or similar geometries. However, most previous studies used a porous medium assumption instead of considering the real geometry of the calandria tubes. A porous medium assumption has some possible weaknesses: the increased production of turbulence due to vortex shedding in the wake of the individual tubes is not considered in the turbulence model; it is difficult to identify the true effects of the outer ring of calandria tubes on the generation of the highly non-uniform flows in the reflector region; and it is not clear how well the pressure loss models quantitatively represent the three-dimensional effects of the turbulent flows through the calandria tubes.

  12. Physically realistic modeling of maritime training simulation

    OpenAIRE

    Cieutat , Jean-Marc

    2003-01-01

    Maritime training simulation is an important part of maritime teaching and requires a wide range of scientific and technical skills. In this framework, where the real-time constraint has to be maintained, not all physical phenomena can be modeled; only the most visible physical phenomena relating to the natural elements and to ship behaviour are reproduced. Our swell model, based on a surface wave simulation approach, simulates the shape and the propagation of a regular train of waves f...
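
    The swell model itself is not detailed in this record, so as a minimal stand-in, the following sketch evaluates the free-surface elevation of a regular wave train under linear (Airy) deep-water theory; all parameter values are illustrative.

```python
import numpy as np

# Free-surface elevation of a regular (monochromatic) deep-water wave
# train under linear Airy theory; parameters are illustrative.
g = 9.81
H, wavelength = 2.0, 60.0                  # wave height [m], wavelength [m]
k = 2.0 * np.pi / wavelength               # wavenumber
omega = np.sqrt(g * k)                     # deep-water dispersion relation

def eta(x, t):
    """Surface elevation [m] at position x [m] and time t [s]."""
    return 0.5 * H * np.cos(k * x - omega * t)

x = np.linspace(0.0, 200.0, 5)
print(eta(x, t=1.0))
```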

  13. How operator admittance affects the response of a teleoperation system to assistive forces – A model analytic study and simulation

    International Nuclear Information System (INIS)

    Wildenbeest, J.G.W.; Abbink, D.A.; Boessenkool, H.; Heemskerk, C.J.M.; Koning, J.F.

    2013-01-01

    Highlights: ► We developed a computational model of a human operator controlling a teleoperation system based on feedforward control while performing a free-space motion. ► We studied how assistive forces affect the response of the combined system of telemanipulator and operator when operator admittance changes due to task instruction or arm configuration. ► Wrongly estimated operator admittances can lead to assistive forces that are either not perceived or that deflect the combined system; assistive forces should be tailored to operator admittance. ► It is required to study, measure and quantitatively model operator behavior for teleoperated tasks in more detail. -- Abstract: Haptic shared control is a promising approach to increase the effectiveness of remote handling operations. While in haptic shared control the operator is continuously guided with assistive forces, the operator's response to forces is not fully understood. This study describes the development of a computational model of a human operator controlling a teleoperation system based on feedforward control. In a simulation, the operator's response to repulsive forces in free-space motions was modeled for two degrees of freedom and two operator endpoint admittances (estimated by means of closed-loop identification techniques). The simulation results show that similar repulsive forces lead to substantial discrepancies in response when admittance settings mismatch; wrongly estimated operator admittances can lead to assistive forces that are either not perceived, or deflect the combined system of human operator and telemanipulator. It is concluded that assistive forces should be tailored to the arm configuration and the type of task performed. In order to utilize haptic shared control to its full potential, it is required to study, measure and quantitatively model operator behavior for teleoperated tasks in more detail.

  14. Study protocol: combining experimental methods, econometrics and simulation modelling to determine price elasticities for studying food taxes and subsidies (The Price ExaM Study)

    Directory of Open Access Journals (Sweden)

    Wilma E. Waterlander

    2016-07-01

    Full Text Available Abstract Background There is a need for accurate and precise food price elasticities (PE, change in consumer demand in response to change in price) to better inform policy on health-related food taxes and subsidies. Methods/Design The Price Experiment and Modelling (Price ExaM) study aims to: I) derive accurate and precise food PE values; II) quantify the impact of price changes on quantity and quality of discrete food group purchases; and III) model the potential health and disease impacts of a range of food taxes and subsidies. To achieve this, we will use a novel method that includes a randomised Virtual Supermarket experiment and econometric methods. Findings will be applied in simulation models to estimate population health impact (quality-adjusted life-years [QALYs]) using a multi-state life-table model. The study will consist of four sequential steps: 1. We generate 5000 price sets with random price variation for all 1412 Virtual Supermarket food and beverage products. Then we add systematic price variation for foods to simulate five taxes and subsidies: a fruit and vegetable subsidy and taxes on sugar, saturated fat, salt, and sugar-sweetened beverages. 2. Using an experimental design, 1000 adult New Zealand shoppers complete five household grocery shops in the Virtual Supermarket where they are randomly assigned to one of the 5000 price sets each time. 3. Output data (i.e., multiple observations of price configurations and purchased amounts) are used as inputs to econometric models (using Bayesian methods) to estimate accurate PE values. 4. A disease simulation model will be run with the new PE values as inputs to estimate QALYs gained and health costs saved for the five policy interventions. Discussion The Price ExaM study has the potential to enhance public health and economic disciplines by introducing internationally novel scientific methods to estimate accurate and precise food PE values. These values will be used to model the potential

  15. Study protocol: combining experimental methods, econometrics and simulation modelling to determine price elasticities for studying food taxes and subsidies (The Price ExaM Study).

    Science.gov (United States)

    Waterlander, Wilma E; Blakely, Tony; Nghiem, Nhung; Cleghorn, Christine L; Eyles, Helen; Genc, Murat; Wilson, Nick; Jiang, Yannan; Swinburn, Boyd; Jacobi, Liana; Michie, Jo; Ni Mhurchu, Cliona

    2016-07-19

    There is a need for accurate and precise food price elasticities (PE, change in consumer demand in response to change in price) to better inform policy on health-related food taxes and subsidies. The Price Experiment and Modelling (Price ExaM) study aims to: I) derive accurate and precise food PE values; II) quantify the impact of price changes on quantity and quality of discrete food group purchases and; III) model the potential health and disease impacts of a range of food taxes and subsidies. To achieve this, we will use a novel method that includes a randomised Virtual Supermarket experiment and econometric methods. Findings will be applied in simulation models to estimate population health impact (quality-adjusted life-years [QALYs]) using a multi-state life-table model. The study will consist of four sequential steps: 1. We generate 5000 price sets with random price variation for all 1412 Virtual Supermarket food and beverage products. Then we add systematic price variation for foods to simulate five taxes and subsidies: a fruit and vegetable subsidy and taxes on sugar, saturated fat, salt, and sugar-sweetened beverages. 2. Using an experimental design, 1000 adult New Zealand shoppers complete five household grocery shops in the Virtual Supermarket where they are randomly assigned to one of the 5000 price sets each time. 3. Output data (i.e., multiple observations of price configurations and purchased amounts) are used as inputs to econometric models (using Bayesian methods) to estimate accurate PE values. 4. A disease simulation model will be run with the new PE values as inputs to estimate QALYs gained and health costs saved for the five policy interventions. The Price ExaM study has the potential to enhance public health and economic disciplines by introducing internationally novel scientific methods to estimate accurate and precise food PE values. These values will be used to model the potential health and disease impacts of various food pricing policy
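
    The two records above describe estimating PE values econometrically from randomized price sets. As a highly simplified stand-in for the Bayesian models the study proposes, the sketch below recovers an own-price elasticity as the slope of a log-log demand regression on simulated data.

```python
import numpy as np

# Own-price elasticity from a log-log demand regression on simulated
# price/purchase observations; the Price ExaM study itself uses Bayesian
# econometric models, so this OLS version is only a simplified stand-in.
rng = np.random.default_rng(1)
true_pe = -0.8
price = rng.uniform(1.0, 5.0, 1000)
quantity = np.exp(2.0 + true_pe * np.log(price) + rng.normal(0, 0.3, 1000))

X = np.column_stack([np.ones_like(price), np.log(price)])
beta, *_ = np.linalg.lstsq(X, np.log(quantity), rcond=None)
print(f"estimated price elasticity: {beta[1]:.3f}")   # close to -0.8
```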

  16. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Most systems involve parameters and variables, which are random variables due to uncertainties. Probabilistic methods are powerful in modelling such systems. In this second part, we describe probabilistic models and Monte Carlo simulation along with 'classical' matrix methods and differential equations as most real ...
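
    Since the record introduces Monte Carlo simulation, a classic minimal example may help: estimating pi by random sampling. The sample count and seed below are arbitrary choices for the illustration.

```python
import random

# Classic Monte Carlo illustration: estimate pi by sampling random points
# in the unit square and counting those inside the quarter circle.
def estimate_pi(n_samples=1_000_000, seed=42):
    rng = random.Random(seed)
    inside = sum(rng.random()**2 + rng.random()**2 <= 1.0
                 for _ in range(n_samples))
    return 4.0 * inside / n_samples

print(estimate_pi())   # converges to pi as n_samples grows
```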

  17. Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2009-01-01

    This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial

  18. Modelling and Simulation: An Overview

    NARCIS (Netherlands)

    M.J. McAleer (Michael); F. Chan (Felix); L. Oxley (Les)

    2013-01-01

    The papers in this special issue of Mathematics and Computers in Simulation cover the following topics: improving judgmental adjustment of model-based forecasts, whether forecast updates are progressive, on a constrained mixture vector autoregressive model, whether all estimators are

  19. Application of artificial neural networks in hydrological modeling: A case study of runoff simulation of a Himalayan glacier basin

    Science.gov (United States)

    Buch, A. M.; Narain, A.; Pandey, P. C.

    1994-01-01

    The simulation of runoff from a Himalayan glacier basin using an Artificial Neural Network (ANN) is presented. The performance of the ANN model is found to be superior to that of the Energy Balance Model and the Multiple Regression model. The RMS error is used as the figure of merit for judging the performance of the three models, and the RMS error for the ANN model is the lowest of the three. The ANN is faster in learning and exhibits excellent system generalization characteristics.
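
    As an illustration of the kind of ANN runoff model described above, the sketch below fits a small feedforward network to synthetic temperature/precipitation/runoff data; the synthetic relationship and network size are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy stand-in for the glacier-runoff problem: learn runoff from
# temperature and precipitation inputs with a small feedforward ANN.
rng = np.random.default_rng(0)
temp = rng.uniform(-5, 15, 500)
precip = rng.uniform(0, 30, 500)
runoff = 2.0 * np.clip(temp, 0, None) + 0.5 * precip + rng.normal(0, 1, 500)

X = np.column_stack([temp, precip])
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, runoff)
print(model.predict([[10.0, 5.0]]))   # predicted runoff for one day
```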

  20. A Collective Study on Modeling and Simulation of Resistive Random Access Memory

    Science.gov (United States)

    Panda, Debashis; Sahu, Paritosh Piyush; Tseng, Tseung Yuen

    2018-01-01

    In this work, we provide a comprehensive discussion of the various models proposed for the design and description of resistive random access memory (RRAM), which, being a nascent technology, is heavily reliant on accurate models to develop efficient working designs and to standardize its implementation across devices. This review provides detailed information regarding the various physical methodologies considered for developing models of RRAM devices. It covers all the important models reported to date and elucidates their features and limitations. Various additional effects and anomalies arising from the memristive system have been addressed, and the solutions provided by the models to these problems are shown as well. All the fundamental concepts of RRAM model development, such as device operation, switching dynamics, and current-voltage relationships, are covered in detail in this work. Popular models proposed by Chua, HP Labs, Yakopcic, TEAM, Stanford/ASU, Ielmini, Berco-Tseng, and many others have been compared and analyzed extensively with respect to various parameters. The working and implementation of window functions like Joglekar, Biolek, and Prodromakis have been presented and compared as well. New well-defined modeling concepts have been discussed which increase the applicability and accuracy of the models. The use of these concepts brings forth several improvements to the existing models, which are enumerated in this work. Following the template presented, highly accurate models can be developed, which will greatly help future model developers and the modeling community.
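
    Of the models surveyed above, the linear ion-drift memristor with a Joglekar window is among the simplest to state. The sketch below integrates it under a sinusoidal drive; the device constants are illustrative, not taken from any specific RRAM cell.

```python
import numpy as np

# Linear ion-drift memristor (HP-style) with the Joglekar window
# f(x) = 1 - (2x - 1)^(2p), integrated with forward Euler under a
# sinusoidal drive; constants are illustrative.
R_on, R_off, mu_v, D, p = 100.0, 16e3, 1e-14, 1e-8, 2
k = mu_v * R_on / D**2

dt = 1e-4
t = np.arange(0.0, 0.1, dt)
v = 1.2 * np.sin(2 * np.pi * 50 * t)       # applied voltage
x = 0.1                                    # normalized state in [0, 1]
current = np.empty_like(t)

for n, vn in enumerate(v):
    R = R_on * x + R_off * (1.0 - x)       # memristance
    i = vn / R
    current[n] = i
    window = 1.0 - (2.0 * x - 1.0) ** (2 * p)   # Joglekar window
    x = np.clip(x + dt * k * i * window, 0.0, 1.0)

print(current[:5])
```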

  1. A Study on Modeling Approaches in Discrete Event Simulation Using Design Patterns

    National Research Council Canada - National Science Library

    Kim, Leng Koh

    2007-01-01

    .... This modeling paradigm encompasses several modeling approaches (an active role for events, entities as independent components, and the chaining of components to enable interactivity) that are excellent ways of building a DES system...

  2. Power flow modeling of Back-to-Back STATCOM: Comprehensive simulation studies including PV curves and PQ circles

    Directory of Open Access Journals (Sweden)

    Ahmet Mete Vural

    2017-09-01

    Full Text Available Power flow studies of a power network embedded with FACTS devices require considerable effort in program coding, and the Newton-Raphson method must be modified by embedding injected power components into the algorithm. In this study, we propose a method for modeling one of the newest FACTS concepts in power flow studies without program coding or modification of the existing Newton-Raphson algorithm. The real and reactive power injections for each voltage source converter of the Back-to-Back Static Synchronous Compensator (BtB-STATCOM) are PI regulated to their desired steady-state values. In this respect, the reactive power injection of each voltage source converter, as well as the real power transfer among them, can be assigned as a control constraint. Operating losses are also taken into account in the proposed modeling approach. Furthermore, the proposed model can be easily modified for the modeling of a conventional STATCOM having only one voltage source converter, or of two STATCOMs operating independently. The proposed modeling approach is verified in PSCAD through a number of simulation scenarios in BtB-STATCOM and STATCOM embedded power systems, namely a 1-Machine 4-Bus system and a 3-Machine 7-Bus system. PV curves of local buses compensated by BtB-STATCOM and STATCOM are presented and compared. The steady-state performance of BtB-STATCOM and STATCOM in power flow handling is also compared.

  3. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Jamshid Jamali

    2017-01-01

    Full Text Available Evaluating measurement equivalence (also known as differential item functioning, DIF) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and the power of the MIMIC model for detecting uniform DIF were investigated under different combinations of reference-to-focal-group sample size ratio, magnitude of the uniform-DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to decreases of 0.33% and 0.47%, respectively, in the power of the MIMIC model for detecting uniform DIF. The findings indicated that increasing the scale length, the number of response categories, and the magnitude of DIF improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively, and decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model is recommended for detection of uniform DIF when the latent construct distribution is nonnormal and the focal group sample size is small.

  4. Vehicle dynamics modeling and simulation

    CERN Document Server

    Schramm, Dieter; Bardini, Roberto

    2014-01-01

    The authors examine in detail the fundamentals and mathematical descriptions of the dynamics of automobiles. In this context different levels of complexity will be presented, starting with basic single-track models up to complex three-dimensional multi-body models. A particular focus is on the process of establishing mathematical models on the basis of real cars and the validation of simulation results. The methods presented are explained in detail by means of selected application scenarios.

  5. Modeling and Simulation: An Overview

    OpenAIRE

    Michael McAleer; Felix Chan; Les Oxley

    2013-01-01

    The papers in this special issue of Mathematics and Computers in Simulation cover the following topics: improving judgmental adjustment of model-based forecasts; whether forecast updates are progressive; a constrained mixture vector autoregressive model; whether all estimators are born equal (the empirical properties of some estimators of long memory); characterising trader manipulation in a limit-order driven market; measuring bias in a term-structure model of commodity prices through the c...

  6. Assessing the ability of mechanistic volatilization models to simulate soil surface conditions: a study with the Volt'Air model.

    Science.gov (United States)

    Garcia, L; Bedos, C; Génermont, S; Braud, I; Cellier, P

    2011-09-01

    Ammonia and pesticide volatilization in the field is a surface phenomenon involving physical and chemical processes that depend on the soil surface temperature and water content. The water transfer, heat transfer and energy budget submodels of volatilization models are adapted from the most commonly accepted formalisms and parameterizations. They are less detailed than dedicated models describing water and heat transfers and surface status. The aim of this work was to assess the ability of one of the available mechanistic volatilization models, Volt'Air, to accurately describe the pedo-climatic conditions of a soil surface at the required time and space resolution. The assessment involves: (i) a sensitivity analysis, (ii) an evaluation of Volt'Air outputs in the light of outputs from a reference Soil-Vegetation-Atmosphere Transfer model (SiSPAT) and three experimental datasets, and (iii) the study of three tests based on modifications of SiSPAT to establish the potential impact of the simplifying assumptions used in Volt'Air. The analysis confirmed that a 5 mm surface layer was well suited, and that the Volt'Air surface temperature correlated well with the experimental measurements as well as with SiSPAT outputs. In terms of liquid water transfers, Volt'Air was overall consistent with SiSPAT, with discrepancies only during major rainfall events and dry weather conditions. The tests enabled us to identify the main source of the discrepancies between Volt'Air and SiSPAT: the lack of a description of gaseous water transfer in Volt'Air. They also helped to explain why neither Volt'Air nor SiSPAT was able to represent lower values of surface water content: current classical water retention and hydraulic conductivity models are not yet adapted to cases of very dry conditions. Given the outcomes of this study, we discuss to what extent the volatilization models can be improved and the questions they pose for current research in water transfer modeling and parameterization.

  7. Stochastic models: theory and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors are also reviewed for completeness.

  8. Assessing Risk-Taking in a Driving Simulator Study: Modeling Longitudinal Semi-Continuous Driving Data Using a Two-Part Regression Model with Correlated Random Effects.

    Science.gov (United States)

    Tran, Van; Liu, Danping; Pradhan, Anuj K; Li, Kaigang; Bingham, C Raymond; Simons-Morton, Bruce G; Albert, Paul S

    2015-01-01

    Signalized intersection management is a common measure of risky driving in simulator studies. In a recent randomized trial, investigators were interested in whether teenage males exposed to a risk-accepting passenger took more intersection risks in a driving simulator compared with those exposed to a risk-averse peer passenger. Analyses in this trial are complicated by longitudinal or repeated measures that are semi-continuous with clumping at zero. Specifically, the dependent variable in a randomized trial looking at the effect of risk-accepting versus risk-averse peer passengers on teenage simulator driving comprises two components. The discrete component measures whether the teen driver stops for a yellow light, and the continuous component measures the time the teen driver, who does not stop, spends in the intersection during a red light. To convey both components of this measure, we apply a two-part regression with correlated random effects model (CREM), consisting of a logistic regression to model whether the driver stops for a yellow light and a linear regression to model the time spent in the intersection during a red light. These two components are related through the correlation of their random effects. Using this novel analysis, we found that those exposed to a risk-averse passenger had a higher proportion of stopping at yellow lights and a longer mean time in the intersection during a red light when they did not stop, compared to those exposed to a risk-accepting passenger, consistent with the study hypotheses and previous analyses. Examining the statistical properties of the CREM approach through simulations, we found that in most situations the CREM achieves greater power than competing approaches. We also examined whether the treatment effect changes across the length of the drive and provided a sample size recommendation for detecting such phenomena in subsequent trials. Our findings suggest that CREM provides an efficient
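
    The data-generating side of such a two-part CREM can be sketched compactly: correlated subject-level random effects feed a logistic stop/no-stop part and a lognormal time-in-intersection part. All parameter values below are illustrative, not estimates from the trial.

```python
import numpy as np

# Simulate data from a two-part model with correlated random effects:
# a logistic part for stopping at the yellow light and a lognormal part
# for time in the intersection when not stopping.
rng = np.random.default_rng(7)
n_subj, n_trials = 100, 10
group = rng.integers(0, 2, n_subj)              # 0 = risk-averse passenger

# Correlated subject-level random effects (b_stop, b_time)
cov = np.array([[1.0, 0.5], [0.5, 0.25]])
b = rng.multivariate_normal([0.0, 0.0], cov, n_subj)

rows = []
for i in range(n_subj):
    logit = 0.5 - 0.8 * group[i] + b[i, 0]      # stopping propensity
    p_stop = 1.0 / (1.0 + np.exp(-logit))
    for _ in range(n_trials):
        stop = rng.random() < p_stop
        time_red = 0.0 if stop else np.exp(0.8 + 0.3 * group[i]
                                           + b[i, 1] + rng.normal(0, 0.2))
        rows.append((i, group[i], stop, time_red))

print(rows[:3])
```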

  9. A Case Study Regarding Influence of Solvers in Matlab/Simulink for Induction Machine Model in Wind Turbine Simulations

    DEFF Research Database (Denmark)

    Iov, F.; Blaabjerg, Frede; Hansen, A.D.

    2002-01-01

    In recent years Matlab/Simulink® has become the most widely used software for modelling and simulation of dynamic systems. Wind energy conversion systems are one example, because they contain parts with very different time constants: wind, turbine, generator, power electronics...... the different implementations of the induction machine model, the influence of the solvers in Simulink, and how the simulation speed can be increased for a wind turbine....

  10. Study and simulation of a multi-lithology stratigraphic model under maximum erosion rate constraint; Etude et simulation d'un modele statigraphique multi-lithologique sous contrainte de taux d'erosion maximal

    Energy Technology Data Exchange (ETDEWEB)

    Gervais, V.

    2004-11-01

    The subject of this report is the study and simulation of a model describing the infill of sedimentary basins on large scales in time and space. It simulates the evolution through time of the sediment layer in terms of geometry and rock properties. A parabolic equation is coupled to a hyperbolic equation by an input boundary condition at the top of the basin. The model also considers a unilateral constraint on the erosion rate. In the first part of the report, the mathematical model is described and particular solutions are defined. The second part deals with the definition of numerical schemes and the simulation of the model. In the first chapter, finite volume numerical schemes are defined and studied. The Newton algorithm, adapted to the unilateral constraint and used to solve the schemes, is given, followed by numerical results in terms of performance and accuracy. In the second chapter, a preconditioning strategy for solving the linear system with an iterative solver at each Newton iteration is defined, and numerical results are given. In the last part, a simplified model is considered in which a variable is decoupled from the other unknowns and satisfies a parabolic equation. A weak formulation is defined for the remaining coupled equations, for which the existence of a unique solution is obtained. The proof uses the convergence of a numerical scheme. (author)

  11. North Atlantic Coast Comprehensive Study (NACCS) Coastal Storm Model Simulations: Waves and Water Levels

    Science.gov (United States)

    2015-08-01

    Output files that are to be stored long term are compressed using gzip and then combined (tarred) together into a few tar ball files grouped in a... Details are given in Appendix E: Tar Ball Details and Appendix F: Model and CSTORM File Descriptions, respectively. Files are stored by class/type, configuration, and run

  12. Model for Simulation Atmospheric Turbulence

    DEFF Research Database (Denmark)

    Lundtang Petersen, Erik

    1976-01-01

    A method that produces realistic simulations of atmospheric turbulence is developed and analyzed. The procedure makes use of a generalized spectral analysis, often called a proper orthogonal decomposition or the Karhunen-Loève expansion. A set of criteria, emphasizing a realistic appearance...... eigenfunctions and estimates of the distributions of the corresponding expansion coefficients. The simulation method utilizes the eigenfunction expansion procedure to produce preliminary time histories of the three velocity components simultaneously. As a final step, a spectral shaping procedure is then applied....... The method is unique in modeling the three velocity components simultaneously, and it is found that important cross-statistical features are reasonably well-behaved. It is concluded that the model provides a practical, operational simulator of atmospheric turbulence....
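
    The Karhunen-Loève machinery described above can be sketched in a few lines: eigendecompose a target covariance, sample independent coefficients with variances given by the eigenvalues, and superpose the modes. The exponential covariance and truncation order below are placeholders, and the paper's final spectral shaping step is omitted.

```python
import numpy as np

# Karhunen-Loeve style synthesis: eigendecompose a target covariance,
# draw independent expansion coefficients, and rebuild a sample record.
n = 256
t = np.linspace(0.0, 10.0, n)
C = np.exp(-np.abs(t[:, None] - t[None, :]) / 2.0)   # target covariance

eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]                    # largest modes first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

m = 20                                               # truncation order
rng = np.random.default_rng(3)
coeffs = rng.normal(size=m) * np.sqrt(np.maximum(eigvals[:m], 0.0))
sample = eigvecs[:, :m] @ coeffs                     # one synthetic record

print(sample[:5])
```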

  13. The carbon balance of European croplands: a Trans-European, cross-site, multi model simulation study

    Science.gov (United States)

    Wattenbach, Martin; Sus, Oliver; Vuichard, Nicolas; Lehuger, Simon; Leip, Adrian; Gottschalk, Pia; Smith, Pete

    2010-05-01

    Croplands cover approximately 45% of Europe and play a significant role in the overall carbon budget of the continent. However, the estimation of the regional carbon balance is still uncertain. Here, we present a multi-site model comparison for four cropland ecosystem models, namely the DNDC, ORCHIDEE-STICS, CERES-EGC and SPA models. We compare the accuracy of the models in predicting net ecosystem exchange (NEE), gross primary production (GPP), ecosystem respiration (Reco) as well as actual evapotranspiration (ETa) for winter wheat (Triticum aestivum L.), winter barley (Hordeum vulgare L.) and maize (Zea mays L.), derived from eddy covariance measurements at five sites of the CarboEurope IP network. The models are all able to simulate mean daily GPP. The simulation results for mean daily ETa and Reco are, however, less accurate. The resulting simulation of daily NEE is adequate apart from some cases where the models fail due to a lack of phase and amplitude alignment. ORCHIDEE-STICS and SPA demonstrate the best performance; nevertheless, they are not able to simulate full crop rotations with multiple management operations taken into account. CERES-EGC and especially DNDC, although exhibiting a lower level of model accuracy, are able to simulate such conditions, resulting in more accurate annual cumulative NEE.

  14. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    This paper describes the modelling, simulation and optimization work, including experimental verification, carried out as part of a Ph.D. project written and supervised, respectively, by the authors. The work covers the dynamic performance of both water-tube boilers and fire-tube boilers. A detailed dynamic...... model of the boiler has been developed and simulations carried out by means of the Matlab integration routines. The model is prepared as a dynamic model consisting of both ordinary differential equations and algebraic equations, together formulated as a Differential-Algebraic-Equation system. Being able...... to operate a boiler plant dynamically means that the boiler design must be able to absorb any fluctuations in water level and temperature gradients resulting from the pressure change in the boiler. On the one hand, a large water-/steam space may be required, i.e. to build the boiler as big as possible. Due

  15. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    This paper describes the modelling, simulation and optimization work, including experimental verification, carried out as part of a Ph.D. project written and supervised, respectively, by the authors. The work covers the dynamic performance of both water-tube boilers and fire-tube boilers. A detailed dynamic...... to the internal pressure, the consequence of the increased volume (i.e. water-/steam space) is an increased wall thickness in the pressure part of the boiler. The stresses introduced in the boiler pressure part as a result of the temperature gradients are proportional to the square of the wall thickness...... model of the boiler has been developed and simulations carried out by means of the Matlab integration routines. The model is prepared as a dynamic model consisting of both ordinary differential equations and algebraic equations, together formulated as a Differential-Algebraic-Equation system. Being able...

  16. Reduced dimer production in solar-simulator-pumped continuous wave iodine lasers based on model simulations and scaling and pumping studies

    Science.gov (United States)

    Costen, Robert C.; Heinbockel, John H.; Miner, Gilda A.; Meador, Willard E., Jr.; Tabibi, Bagher M.; Lee, Ja H.; Williams, Michael D.

    1995-01-01

    A numerical rate-equation model for a continuous wave iodine laser with longitudinally flowing gaseous lasant is validated by approximating two experiments that compare the perfluoroalkyl iodide lasants n-C3F7I and t-C4F9I. The salient feature of the simulations is that the production rate of the dimer (C4F9)2 is reduced by one order of magnitude relative to the dimer (C3F7)2. The model is then used to investigate the kinetic effects of this reduced dimer production, especially how it improves output power. Related parametric and scaling studies are also presented. When dimer production is reduced, more monomer radicals (t-C4F9) are available to combine with iodine atoms, thus enhancing depletion of the laser lower level and reducing the buildup of the principal quencher, molecular iodine. Fewer iodine molecules result in fewer downward transitions from quenching and more transitions from stimulated emission of lasing photons. Enhanced depletion of the lower level reduces the absorption of lasing photons. The combined result is more lasing photons and proportionally increased output power.

  17. Study of Near-Surface Models in Large-Eddy Simulations of a Neutrally Stratified Atmospheric Boundary Layer

    Science.gov (United States)

    Senocak, I.; Ackerman, A. S.; Kirkpatrick, M. P.; Stevens, D. E.; Mansour, N. N.

    2004-01-01

    Large-eddy simulation (LES) is a widely used technique in atmospheric modeling research. In LES, large, unsteady, three-dimensional structures are resolved, and small structures that are not resolved on the computational grid are modeled. A filtering operation is applied to distinguish between resolved and unresolved scales. We present two near-surface models that have found use in atmospheric modeling. We also suggest a simpler eddy viscosity model that adopts Prandtl's mixing length model (Prandtl 1925) in the vicinity of the surface and blends with the dynamic Smagorinsky model (Germano et al., 1991) away from the surface. We evaluate the performance of these surface models by simulating a neutrally stratified atmospheric boundary layer.
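
    A rough sketch of the suggested blending idea follows: a Prandtl mixing-length eddy viscosity near the surface transitioning to a Smagorinsky-type value aloft. The blending function, transition height, and constants are assumptions made for the example; the paper blends with the dynamic Smagorinsky model, whose coefficient is computed rather than fixed.

```python
import numpy as np

# Illustrative blend of a Prandtl mixing-length eddy viscosity near the
# wall with a fixed-coefficient Smagorinsky value away from it.
kappa, C_s, delta = 0.4, 0.17, 50.0      # von Karman constant, Smagorinsky
                                         # coefficient, filter width [m]

def eddy_viscosity(z, strain_rate, z_blend=100.0):
    """nu_t [m^2 s^-1] at height z [m] for strain-rate magnitude |S|."""
    w = np.clip(z / z_blend, 0.0, 1.0)   # 0 at the surface -> 1 aloft
    l_mix = kappa * z                    # Prandtl mixing length
    l_sgs = C_s * delta                  # Smagorinsky length scale
    l2 = (1.0 - w) * l_mix**2 + w * l_sgs**2
    return l2 * strain_rate

for z in (2.0, 20.0, 200.0):
    print(z, eddy_viscosity(z, strain_rate=0.05))
```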

  18. Modeling control in manufacturing simulation

    NARCIS (Netherlands)

    Zee, Durk-Jouke van der; Chick, S.; Sánchez, P.J.; Ferrin, D.; Morrice, D.J.

    2003-01-01

    A significant shortcoming of traditional simulation languages is the lack of attention paid to the modeling of control structures, i.e., the humans or systems responsible for manufacturing planning and control, their activities and the mutual tuning of their activities. Mostly they are hard coded

  19. Modeling and simulation of discrete event systems

    CERN Document Server

    Choi, Byoung Kyu

    2013-01-01

    Computer modeling and simulation (M&S) allows engineers to study and analyze complex systems. Discrete-event system (DES) M&S is used in modern management, industrial engineering, computer science, and the military. As computer speeds and memory capacities increase, DES-M&S tools become more powerful and more widely used in solving real-life problems. Based on over 20 years of evolution within a classroom environment, as well as on decades-long experience in developing simulation-based solutions for high-tech industries, Modeling and Simulation of Discrete-Event Systems is the only book on

  20. Field measurements, simulation modeling and development of analysis for moisture stressed corn and soybeans, 1982 studies

    Science.gov (United States)

    Blad, B. L.; Norman, J. M.; Gardner, B. R.

    1983-01-01

    The experimental design, data acquisition and analysis procedures for agronomic and reflectance data acquired over corn and soybeans at the Sandhills Agricultural Laboratory of the University of Nebraska are described. The following conclusions were reached: (1) predictive leaf area estimation models can be defined which appear valid over a wide range of soils; (2) relative grain yield estimates over moisture stressed corn were improved by combining reflectance and thermal data; (3) corn phenology estimates using the model of Badhwar and Henderson (1981) exhibited systematic bias but were reasonably accurate; (4) canopy reflectance can be modelled to within approximately 10% of measured values; and (5) soybean pubescence significantly affects canopy reflectance, energy balance and water use relationships.

  1. Some Sensitivity Studies of Chemical Transport Simulated in Models of the Soil-Plant-Litter System

    Energy Technology Data Exchange (ETDEWEB)

    Begovich, C.L.

    2002-10-28

    Fifteen parameters in a set of five coupled models describing carbon, water, and chemical dynamics in the soil-plant-litter system were varied in a sensitivity analysis of model response. Results are presented for chemical distribution in the components of soil, plants, and litter along with selected responses of biomass, internal chemical transport (xylem and phloem pathways), and chemical uptake. Response and sensitivity coefficients are presented for up to 102 model outputs in an appendix. Two soil properties (chemical distribution coefficient and chemical solubility) and three plant properties (leaf chemical permeability, cuticle thickness, and root chemical conductivity) had the greatest influence on chemical transport in the soil-plant-litter system under the conditions examined. Pollutant gas uptake (SO₂) increased with change in plant properties that increased plant growth. Heavy metal dynamics in litter responded to plant properties (phloem resistance, respiration characteristics) which induced changes in the chemical cycling to the litter system. Some of the SO₂ and heavy metal responses were not expected but became apparent through the modeling analysis.

  2. Neuronal encoding of object and distance information: A model simulation study on naturalistic optic flow processing

    Directory of Open Access Journals (Sweden)

    Patrick eHennig

    2012-03-01

    Full Text Available We developed a model of the input circuitry of the FD1 cell, an identified motion-sensitive interneuron in the blowfly's visual system. The model circuit successfully reproduces the FD1 cell's most conspicuous property: its larger responses to objects than to spatially extended patterns. The model circuit also mimics the time-dependent responses of FD1 to dynamically complex naturalistic stimuli, shaped by the blowfly's saccadic flight and gaze strategy: the FD1 responses are enhanced when, as a consequence of self-motion, a nearby object crosses the receptive field during intersaccadic intervals. Moreover, the model predicts that these object-induced responses are superimposed by pronounced pattern-dependent fluctuations during movements on virtual test flights in a three-dimensional environment with systematic modifications of the environmental patterns. Hence, the FD1 cell is predicted not to detect unambiguously objects defined by the spatial layout of the environment, but to be sensitive also to objects distinguished by textural features. These ambiguous detection abilities suggest that information about objects - irrespective of the features by which the objects are defined - is encoded by a population of cells, with the FD1 cell presumably playing a prominent role in such an ensemble.

  3. Study of drought processes in Spain by means of offline Land-Surface Model simulations. Evaluation of model sensitivity to the meteorological forcing dataset.

    Science.gov (United States)

    Quintana-Seguí, Pere; Míguez-Macho, Gonzalo; Barella-Ortiz, Anaïs

    2017-04-01

    Drought affects different aspects of the continental water cycle, from precipitation (meteorological drought) to soil moisture (agricultural drought), streamflow, lake volume and piezometric levels (hydrological drought). The spatial and temporal scales of drought, together with its propagation through the system, must be well understood. Drought is a hazard impacting all climates and regions of the world, but in some areas, such as Spain, its societal impacts may be especially severe, creating water-resources-related tensions between regions and sectors. Indices are often used to characterize different aspects of drought. Similar indices can be built for precipitation (SPI), soil moisture (SSMI), and streamflow (SSI), allowing analysis of the temporal scales of drought and its spatial patterns. Precipitation and streamflow data are abundant in Spain; however, soil moisture data are scarce. Land-Surface Models (LSMs) physically simulate the continental water cycle and are thus appropriate tools to quantify soil moisture and other relevant variables and processes. These models can be run offline, forced by a gridded dataset of meteorological variables, usually a re-analysis. The quality of the forcing dataset affects the quality of the subsequent modeling results and is thus crucial. The objective of this study is to investigate how sensitive LSM simulations are to the forcing dataset, with a focus on drought. A global and a local dataset are used at different resolutions. The global dataset is the eartH2Observe dataset, which is based on ERA-Interim. The local dataset is the SAFRAN meteorological analysis system. The LSMs used are SURFEX and LEAFHYDRO. Standardized indices of the relevant variables are produced for all the simulations performed. Then, we analyze how differently drought propagates through the system in the different simulations and how similar the spatial and temporal scales of drought are. The results of this study will be useful to understand the
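
    Standardized indices of the SPI/SSMI/SSI family mentioned above share one recipe: fit a distribution to the accumulated variable and map its CDF to standard-normal quantiles. The sketch below uses a gamma fit, as is conventional for SPI; the handling of zero-precipitation months and the choice of accumulation period are omitted simplifications.

```python
import numpy as np
from scipy import stats

# SPI-style standardized index: fit a gamma distribution to accumulated
# values, then map the fitted CDF to standard-normal quantiles.
def standardized_index(series):
    shape, loc, scale = stats.gamma.fit(series, floc=0.0)
    cdf = stats.gamma.cdf(series, shape, loc=loc, scale=scale)
    cdf = np.clip(cdf, 1e-6, 1 - 1e-6)        # keep quantiles finite
    return stats.norm.ppf(cdf)                # standardized anomalies

rng = np.random.default_rng(5)
precip = rng.gamma(2.0, 30.0, 360)            # monthly accumulations [mm]
print(standardized_index(precip)[:5])
```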

  4. A Modeling & Simulation Implementation Framework for Large-Scale Simulation

    Directory of Open Access Journals (Sweden)

    Song Xiao

    2012-10-01

    Full Text Available Classical High Level Architecture (HLA) systems face development problems owing to their lack of support for fine-grained component integration and interoperation in large-scale complex simulation applications. To address this issue efficiently, an extensible, reusable and composable simulation framework is proposed. To promote reusability from coarse-grained federates to fine-grained components, this paper proposes a modelling & simulation framework which consists of a component-based architecture, modelling methods, and simulation services to support and simplify the construction of complex simulation applications. Moreover, a standard process and simulation tools are developed to ensure the rapid and effective development of simulation applications.

  5. R&D studies on the hadronic calorimeter and physics simulations on the Standard Model and minimal supersymmetric Standard Model Higgs bosons in the CMS experiment

    CERN Document Server

    Duru, Firdevs

    2007-01-01

    This thesis consists of two main parts: R&D studies done on the Compact Muon Solenoid (CMS) Hadronic Calorimeter (HCAL) and physics simulations on the Higgs boson for a Minimal Supersymmetric Standard Model (MSSM) and a Standard Model (SM) channel. In the first part, the air core light guides used in the read-out system of the Hadronic Forward (HF) calorimeter and the reflective materials used in them are studied. Then, tests and simulations were performed to find the most efficient way to collect Cerenkov light from the quartz plates, which are proposed as a substitute for the scintillator tiles in the Hadronic Endcap (HE) calorimeter due to radiation damage problems. In the second part, physics simulations and their results are presented. The MSSM channel H/A → ττ → ℓℓνννν is studied to investigate the jet and missing transverse energy (MET) reconstruction of the CMS detector. The effects of the jet and MET corrections on the Higgs boson mass reconstruction are investigated. ...

  6. Nowcasting of deep convective clouds and heavy precipitation: Comparison study between NWP model simulation and extrapolation

    Czech Academy of Sciences Publication Activity Database

    Bližňák, Vojtěch; Sokol, Zbyněk; Zacharov, Petr, jr.

    2017-01-01

    Roč. 184, February (2017), s. 24-34 ISSN 0169-8095 R&D Projects: GA ČR(CZ) GPP209/12/P701; GA ČR GA13-34856S Institutional support: RVO:68378289 Keywords : meteorological satellite * convective storm * NWP model * verification * Czech Republic Subject RIV: DG - Athmosphere Sciences, Meteorology OBOR OECD: Meteorology and atmospheric sciences Impact factor: 3.778, year: 2016 http://www.sciencedirect.com/science/article/pii/S0169809516304288

  7. Modeling and Simulation for Safeguards

    Energy Technology Data Exchange (ETDEWEB)

    Swinhoe, Martyn T. [Los Alamos National Laboratory

    2012-07-26

    The purpose of this talk is to give an overview of the role of modeling and simulation in Safeguards R&D and to introduce you to (some of) the tools used. Some definitions are: (1) Modeling - the representation, often mathematical, of a process, concept, or operation of a system, often implemented by a computer program; (2) Simulation - the representation of the behavior or characteristics of one system through the use of another system, especially a computer program designed for the purpose; and (3) Safeguards - the timely detection of diversion of significant quantities of nuclear material. The roles of modeling and simulation are: (1) to calculate amounts of material (plant modeling); (2) to calculate signatures of nuclear material etc. (source terms); and (3) to model detector performance (radiation transport and detection). Plant modeling software (e.g. FACSIM) gives the flows and the amount of material stored at all parts of the process. In safeguards this allows us to calculate the expected uncertainty of the mass and to evaluate the expected MUF. We can determine the measurement accuracy required to achieve a certain performance.

  8. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    Science.gov (United States)

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
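
    Both estimators compared in this record are easy to reproduce on simulated data. The sketch below fits a robust Poisson model (Poisson family with sandwich standard errors) and a log-binomial model to one simulated dataset using statsmodels; the true risk ratio and sample size are arbitrary, and log-binomial fits can fail to converge when fitted risks approach one.

```python
import numpy as np
import statsmodels.api as sm

# Compare the two relative-risk estimators on one simulated dataset.
rng = np.random.default_rng(11)
n = 2000
x = rng.integers(0, 2, n)                     # binary exposure
p = np.exp(-1.5 + np.log(1.8) * x)            # true risk ratio = 1.8
y = rng.binomial(1, p)
X = sm.add_constant(x)

# Robust ("modified") Poisson: Poisson family + sandwich (HC0) errors.
poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")

# Log-binomial: binomial family with a log link.
logbin = sm.GLM(y, X,
                family=sm.families.Binomial(link=sm.families.links.Log())
                ).fit()

print("robust Poisson RR:", np.exp(poisson.params[1]))
print("log-binomial RR: ", np.exp(logbin.params[1]))
```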

  9. Validity of microgravity simulation models on earth

    DEFF Research Database (Denmark)

    Regnard, J; Heer, M; Drummer, C

    2001-01-01

    Many studies have used water immersion and head-down bed rest as experimental models to simulate responses to microgravity. However, some data collected during space missions are at variance with, or in contrast to, observations collected from experimental models. These discrepancies could reflect inc...

  10. Modeling, simulation, parametric study and economic assessment of reciprocating internal combustion engine integrated with multi-effect desalination unit

    International Nuclear Information System (INIS)

    Salimi, Mohsen; Amidpour, Majid

    2017-01-01

    Highlights: • Integration of a small MED unit with a gas engine power cycle is studied in this paper. • Modeling, simulation, parametric study and sensitivity analysis were performed. • A thermodynamic model for heat recovery and power generation of the gas engine is presented. • The Annualized Cost of System (ACS) method has been employed for the economic assessment. • The dependence of the integrated system's economic feasibility on natural gas and water prices has been investigated. - Abstract: Due to the thermal nature of multi-effect desalination (MED), its integration with a suitable power cycle is highly desirable for waste heat recovery. One suitable power cycle for the proposed integration is the internal combustion engine (ICE). The exhaust gas heat of the ICE is used to produce the motive steam supplying the required heat for the first effect of the MED system. Also, the water jacket heat is utilized in a heat exchanger to pre-heat the seawater. This paper presents a thermodynamic model for a tri-generation system composed of an ICE integrated with MED. The ICE thermodynamic model has been used in place of different empirical efficiency relations to estimate performance-load curves reasonably. The entire system performance has been coded in MATLAB, and the results of the proposed thermodynamic model for the engine have been verified against the manufacturer catalogue. By increasing the engine load from 40% to 100%, the water production of the MED unit increases from 4.38 to 26.78 cubic meters per day and the tri-generation efficiency from 31% to 56%. Economic analyses of the MED unit integrated with the ICE were performed based on the Annualized Cost of System method. This integration makes the system more economical. It was determined that at higher market prices for fresh water (more than 7 US$ per cubic meter), increasing the number of effects becomes more significant in reducing the payback period.

  11. Modeling, Simulation, and Kinetic Studies of Solvent-Free Biosynthesis of Benzyl Acetate

    Directory of Open Access Journals (Sweden)

    Vijay Kumar Garlapati

    2013-01-01

    Solvent-free biosynthesis of benzyl acetate through immobilized lipase-mediated transesterification has been modeled and optimized through a statistically integrated artificial intelligence approach. A nonlinear response surface model was successfully developed based on a central composite design, with the transesterification variables, namely molarity of alcohol, reaction time, temperature and immobilized lipase amount, as input variables and molar conversion (%) as the output variable. The statistically integrated genetic algorithm optimization approach yields an optimized molar conversion of 96.32% with predicted transesterification variables of 0.47 M alcohol molarity, a reaction time of 13.1 h, at 37.5°C, using 13.31 U of immobilized lipase. The immobilized lipase retains more than 98% relative activity for up to 6 recycles and maintains 50% relative activity until 12 recycles. The kinetic constants, namely Km and Vmax, were found to be 310 mM and 0.10 mmol h−1 g−1, respectively.
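
    Constants such as the reported Km and Vmax are typically obtained by fitting the Michaelis-Menten equation to initial-rate data; a hedged sketch of such a fit, with synthetic rate data rather than the study's measurements, is:

        # Fitting the Michaelis-Menten equation to synthetic initial-rate data
        # with scipy; the reported constants serve as the "true" values here.
        import numpy as np
        from scipy.optimize import curve_fit

        def michaelis_menten(s, vmax, km):
            return vmax * s / (km + s)

        rng = np.random.default_rng(1)
        s = np.array([50.0, 100, 200, 400, 800, 1600])   # substrate, mM (assumed)
        v = michaelis_menten(s, 0.10, 310) * (1 + 0.03 * rng.standard_normal(s.size))

        (vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=(0.1, 300))
        print(f"Vmax = {vmax:.3f} mmol/(h*g), Km = {km:.0f} mM")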

  12. Assessment of Molecular Modeling & Simulation

    Energy Technology Data Exchange (ETDEWEB)

    None

    2002-01-03

    This report reviews the development and applications of molecular and materials modeling in Europe and Japan in comparison to those in the United States. Topics covered include computational quantum chemistry, molecular simulations by molecular dynamics and Monte Carlo methods, mesoscale modeling of material domains, molecular-structure/macroscale property correlations like QSARs and QSPRs, and related information technologies like informatics and special-purpose molecular-modeling computers. The panel's findings include the following: The United States leads this field in many scientific areas. However, Canada has particular strengths in DFT methods and homogeneous catalysis; Europe in heterogeneous catalysis, mesoscale, and materials modeling; and Japan in materials modeling and special-purpose computing. Major government-industry initiatives are underway in Europe and Japan, notably in multi-scale materials modeling and in development of chemistry-capable ab-initio molecular dynamics codes.

  13. Comparison of tenofovir plasma and tissue exposure using a population pharmacokinetic model and bootstrap: a simulation study from observed data.

    Science.gov (United States)

    Collins, Jon W; Heyward Hull, J; Dumond, Julie B

    2017-12-01

    Sparse tissue sampling with intensive plasma sampling creates a unique data analysis problem in determining drug exposure in clinically relevant tissues. Tissue exposure may govern drug efficacy, as many drugs exert their actions in tissues. We compared tissue area-under-the-curve (AUC) generated from bootstrapped noncompartmental analysis (NCA) methods and compartmental nonlinear mixed effect (NLME) modeling. A model of observed data after single-dose tenofovir disoproxil fumarate was used to simulate plasma and tissue concentrations for two destructive tissue sampling schemes. Two groups of 100 data sets with densely sampled plasma and one tissue sample per individual were created. The bootstrapped NCA (SAS 9.3) used a trapezoidal method to calculate the geometric mean tissue AUC per dataset. For NLME, individual post hoc estimates of tissue AUC were determined, and the geometric mean from each dataset was calculated. Median normalized prediction error (NPE) and absolute normalized prediction error (ANPE) were calculated for each method from the true values of the modeled concentrations. Both methods produced similar tissue AUC estimates close to the true values. Although the NLME-generated AUC estimates had larger NPEs, they had smaller ANPEs. Overall, NLME NPEs showed AUC under-prediction but improved precision and fewer outliers. The bootstrapped NCA method produced more accurate estimates but with some NPEs > 100%. In general, NLME is preferred, as it accommodates less intensive tissue sampling with reasonable results, and provides simulation capabilities for optimizing tissue distribution. However, if the main goal is an accurate AUC for the studied scenario, and relatively intense tissue sampling is feasible, the NCA bootstrap method is a reasonable, and potentially less time-intensive, solution.
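
    A minimal sketch of the bootstrapped NCA arm of this comparison, assuming one destructive tissue sample per subject and a trapezoidal AUC over the composite mean profile; sampling times and concentrations are synthetic.

        # Bootstrapped NCA: resample subjects, build a composite mean tissue
        # profile, take the trapezoidal AUC, then summarize geometrically.
        import numpy as np

        rng = np.random.default_rng(42)
        times = np.array([1, 2, 4, 8, 24.0])               # sampling times, h (assumed)
        # one destructive tissue sample per subject: (time index, concentration)
        subjects = [(rng.integers(times.size), np.exp(rng.normal(1.0, 0.3)))
                    for _ in range(50)]

        aucs = []
        for _ in range(1000):                              # bootstrap replicates
            sample = [subjects[i] for i in rng.integers(len(subjects), size=len(subjects))]
            total = np.zeros(times.size)
            counts = np.zeros(times.size)
            for idx, conc in sample:
                total[idx] += conc
                counts[idx] += 1
            mean_profile = total / np.maximum(counts, 1)   # mean concentration per time
            aucs.append(np.trapz(mean_profile, times))     # trapezoidal AUC

        print("geometric mean AUC:", np.exp(np.mean(np.log(aucs))))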

  14. Simulation Modelling Approach to Human Resources Management: Burnout Effect Case Study

    Directory of Open Access Journals (Sweden)

    Marjana Merkac Skok

    2013-07-01

    Human resources management has become one of the most important levers by which organizations gain competitive advantage. However, human resources management is in many situations subject to nonlinear feedback with delayed effects. The burnout effect is one of the problems especially often faced by experts in the learning society. Burnout occurs because modern society is fast-moving, achievement-oriented and very competitive, which leads to many stressful situations that individuals cannot always handle. We propose the use of system dynamics methodology to explore the burnout effect and to teach its consequences. Several experiments have been conducted and presented which show increase-and-collapse behaviour in the case of burnout experienced by an individual. Experiments with the model explore the presence of the burnout effect in several different situations, with different paces of manifestation.
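
    To make the increase-and-collapse pattern concrete, here is a toy stock-and-flow sketch in the spirit of system dynamics; the feedback structure and all constants are invented for illustration and are not the authors' model.

        # Toy stock-and-flow sketch of increase-and-collapse: stress builds with
        # effort, erodes recovery and energy, and productivity finally collapses.
        dt, steps = 0.25, 400
        energy, stress = 1.0, 0.0                 # stocks
        output = []
        for _ in range(steps):
            effort = 1.0 + 0.5 * stress           # stress pushes harder work (feedback)
            d_stress = dt * (0.15 * effort - 0.1 * (1.0 - stress))
            d_energy = dt * (0.2 * (1.0 - stress) - 0.1 * effort * stress)
            stress = min(max(stress + d_stress, 0.0), 1.0)
            energy = max(energy + d_energy, 0.0)
            output.append(effort * energy)        # productivity rises, then collapses

        peak = max(output)
        print(f"peak {peak:.2f} at step {output.index(peak)}, final {output[-1]:.2f}")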

  15. System Operations Studies for Automated Guideway Transit Systems : Discrete Event Simulation Model Programmer's Manual

    Science.gov (United States)

    1982-07-01

    In order to examine specific automated guideway transit (AGT) developments and concepts, UMTA undertook a program of studies and technology investigations called Automated Guideway Transit Technology (AGTT) Program. The objectives of one segment of t...

  16. COMPUTATIONAL MODELING AND SIMULATION IN BIOLOGY TEACHING: A MINIMALLY EXPLORED FIELD OF STUDY WITH A LOT OF POTENTIAL

    Directory of Open Access Journals (Sweden)

    Sonia López

    2016-09-01

    This study is part of a research project that aims to characterize the epistemological, psychological and didactic presuppositions of science teachers (Biology, Physics, Chemistry) who implement Computational Modeling and Simulation (CMS) activities as part of their teaching practice. We present here a synthesis of a literature review on the subject, showing how over the last two decades this form of computer use in science teaching has boomed in disciplines such as Physics and Chemistry, but to a lesser degree in Biology. Additionally, in the works that dwell on the use of CMS in Biology, we identified a lack of theoretical bases supporting their epistemological, psychological and/or didactic stances. This generates significant considerations for the fields of research and teacher training in Science Education.

  17. The fragrance hand immersion study - an experimental model simulating real-life exposure for allergic contact dermatitis on the hands

    DEFF Research Database (Denmark)

    Heydorn, S; Menné, T; Andersen, Klaus Ejner

    2003-01-01

    previously diagnosed with hand eczema to explore whether immersion of fingers in a solution with or without the patch-test-positive fragrance allergen would cause or exacerbate hand eczema on the exposed finger. The study was double blinded and randomized. All participants had a positive patch test to either...... hydroxycitronellal or Lyral (hydroxyisohexyl 3-cyclohexene carboxaldehyde). Each participant immersed a finger from each hand, once a day, in a solution containing the fragrance allergen or placebo. During the first 2 weeks, the concentration of fragrance allergen in the solution was low (approximately 10 p...... meter. 3 of 15 hand eczema patients developed eczema on the finger immersed in the fragrance-containing solution, 3 of 15 on the placebo finger and 3 of 15 on both fingers. Using this experimental exposure model simulating real-life exposure, we found no association between immersion of a finger...

  18. Microcomputer simulation model for facility performance assessment: a case study of nuclear spent fuel handling facility operations

    Energy Technology Data Exchange (ETDEWEB)

    Chockie, A.D.; Hostick, C.J.; Otis, P.T.

    1985-10-01

    A microcomputer-based simulation model was recently developed at the Pacific Northwest Laboratory (PNL) to assist in the evaluation of design alternatives for a proposed facility to receive, consolidate and store nuclear spent fuel from US commercial power plants. Previous performance assessments were limited to deterministic calculations and Gantt chart representations of the facility operations. To ensure that the design of the facility will be adequate to meet the specified throughput requirements, the simulation model was used to analyze such factors as material flow, equipment capability and the interface between the MRS facility and the nuclear waste transportation system. The simulation analysis model was based on commercially available software and application programs designed to represent the MRS waste handling facility operations. The results of the evaluation were used by the design review team at PNL to identify areas where design modifications should be considered. 4 figs.

  19. A detached eddy simulation model for the study of lateral separation zones along a large canyon-bound river

    Science.gov (United States)

    Alvarez, Laura V.; Schmeeckle, Mark W.; Grams, Paul E.

    2017-01-01

    Lateral flow separation occurs in rivers where banks exhibit strong curvature. In canyon-bound rivers, lateral recirculation zones are the principal storage of fine-sediment deposits. A parallelized, three-dimensional, turbulence-resolving model was developed to study the flow structures along lateral separation zones located in two pools along the Colorado River in Marble Canyon. The model employs the detached eddy simulation (DES) technique, which resolves turbulence structures larger than the grid spacing in the interior of the flow. The DES-3D model is validated using Acoustic Doppler Current Profiler flow measurements taken during the 2008 controlled flood release from Glen Canyon Dam. A point-to-point validation using a number of skill metrics, often employed in hydrological research, is proposed here for fluvial modeling. The validation results show the predictive capabilities of the DES model. The model reproduces the pattern and magnitude of the velocity in the lateral recirculation zone, including the size and position of the primary and secondary eddy cells, and the return current. The lateral recirculation zone is open, having continuous import of fluid upstream of the point of reattachment and export by the recirculation return current downstream of the point of separation. Differences in magnitude and direction of near-bed and near-surface velocity vectors are found, resulting in an inward vertical spiral. Interaction between the recirculation return current and the main flow is dynamic, with large temporal changes in flow direction and magnitude. Turbulence structures with a predominately vertical axis of vorticity are observed in the shear layer, becoming three-dimensional without preferred orientation downstream.

  20. Modeling soft-tissue deformation prior to cutting for surgical simulation: finite element analysis and study of cutting parameters.

    Science.gov (United States)

    Chanthasopeephan, Teeranoot; Desai, Jaydev P; Lau, Alan C W

    2007-03-01

    This paper presents an experimental study to understand the localized soft-tissue deformation phase immediately preceding crack growth, as observed during the cutting of soft tissue. Such understanding serves as a building block for realistic haptic display in simulations of soft-tissue cutting for surgical training. Experiments were conducted on soft-tissue cutting with a scalpel blade while monitoring the cutting forces and blade displacement for various cutting speeds and cutting angles. The measured force-displacement curves in all the experiments of scalpel cutting of pig liver samples having a natural bulge in thickness exhibited a characteristic pattern: repeating units formed by a segment of linear loading (deformation) followed by a segment of sudden unloading (localized crack extension in the tissue). During the deformation phase immediately preceding crack extension, the deformation resistance of the soft tissue was characterized with the local effective modulus (LEM). This measure of effective deformation resistance was determined by iteratively solving an inverse problem formulated with the experimental data and finite element models. Computational experiments in model order reduction were then conducted to seek the most computationally efficient model that still retained fidelity. Starting with a 3-D finite element model of the liver specimen, three levels of model order reduction were carried out, with computational effort in the ratio 1.000:0.103:0.038. We also conducted parametric studies to understand the effect of cutting speed and cutting angle on the LEM. Results showed that for a given cutting speed, the deformation resistance decreased as the cutting angle was varied from 90 degrees to 45 degrees. For a given cutting angle, the deformation resistance decreased with increasing cutting speed.

  1. Simulation studies of long-term saline water use: model validation and evaluation of schedules

    NARCIS (Netherlands)

    Tedeschi, A.; Menenti, M.

    2002-01-01

    In the Mediterranean environment characterized by hot, dry summers, a hydrologically oriented field experiment on vegetable crops was carried out between 1988 and 1993 at a site near Naples, Italy. The objective of the experiment was to study the impact of saline water on crop yield and soil

  2. Laboratory, field, and modeling studies of organophosphate pesticide fate and transport during simulated rainfall-runoff events

    Science.gov (United States)

    Coelho, C. J.; Brown, D. L.; Johns, M.; Lopez, R.

    2005-12-01

    Agricultural runoff is a major source of water pollution in California. Best Management Practices (BMPs) can be used by farmers to reduce pesticide loading to surface waters. In previous studies, the organophosphate pesticide diazinon was found to have an affinity for sorption on carbon sources. This study used rice straw compost as a BMP to increase the carbon content of orchard soils and to investigate the sorption capability of the compost. Laboratory isotherm experiments were conducted to determine the sorption capacity and equilibrium coefficients of the compost and soil. Using the Kd values of the compost and soil, as well as infiltration data from previous studies, a model was created to determine the sorption capability of the compost in the field. Field experiments are in progress using a rainfall simulation system, with five control and five compost sites, all sprayed with diazinon. Results from the field trials will be compared with the model. Preliminary results have shown that the sorption capacity of the compost could significantly reduce organophosphate pesticide runoff into waterways.
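
    The partition coefficient Kd of a linear sorption isotherm (sorbed concentration = Kd x dissolved concentration) can be estimated with a zero-intercept least-squares fit; a sketch with synthetic isotherm data:

        # Estimating a linear partition coefficient Kd by least squares;
        # isotherm data points are synthetic, not the study's measurements.
        import numpy as np

        c_aq = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # dissolved, mg/L (assumed)
        c_sorbed = np.array([1.1, 2.0, 4.3, 8.1, 16.4])  # sorbed, mg/kg (assumed)

        kd = np.sum(c_aq * c_sorbed) / np.sum(c_aq ** 2) # zero-intercept fit
        print(f"Kd = {kd:.2f} L/kg")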

  3. Study of Z' → e+e- in full simulation with regard to discrimination between models beyond the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Schafer, M

    2004-09-01

    Although experimental results so far agree with the predictions of the standard model, it is widely felt to be incomplete. Many prospective theories beyond the standard model predict extra neutral gauge bosons, denoted by Z', which might be light enough to be accessible at the LHC. Observables sensitive to the properties of these extra gauge bosons might be used to discriminate between the different theories beyond the standard model. In the present work, several of these observables (total decay width, leptonic cross-section and forward-backward asymmetries) are studied at generation level and with a full simulation of the ATLAS detector. The Z' → e+e- decay channel was chosen, with two values for the Z' mass: 1.5 TeV and 4 TeV. Background is studied as well, and it is confirmed that a Z' boson could easily be discovered at the chosen masses. It is shown that even in full simulation the studied observables can be determined with good precision. As a next step, a discrimination strategy has to be developed, given the presented methods to extract the variables and their precision. (author)

  4. Simulating spin models on GPU

    Science.gov (United States)

    Weigel, Martin

    2011-09-01

    Over the last couple of years it has been realized that the vast computational power of graphics processing units (GPUs) could be harvested for purposes other than the video game industry. This power, which at least nominally exceeds that of current CPUs by large factors, results from the relative simplicity of the GPU architectures as compared to CPUs, combined with a large number of parallel processing units on a single chip. To benefit from this setup for general computing purposes, the problems at hand need to be prepared in a way to profit from the inherent parallelism and hierarchical structure of memory accesses. In this contribution I discuss the performance potential for simulating spin models, such as the Ising model, on GPU as compared to conventional simulations on CPU.
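
    For reference, the serial Metropolis single-spin update for the 2D Ising model, the algorithm whose data-parallel (e.g. checkerboard-decomposed) GPU variants are benchmarked in work like this, looks as follows; lattice size and temperature are arbitrary choices.

        # Serial Metropolis single-spin updates for the 2D Ising model with
        # periodic boundaries; a reference point for parallel GPU versions.
        import numpy as np

        rng = np.random.default_rng(0)
        L, beta, sweeps = 16, 0.5, 100            # lattice size, inverse temperature
        spins = rng.choice([-1, 1], size=(L, L))

        for _ in range(sweeps * L * L):
            i, j = rng.integers(L, size=2)
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2 * spins[i, j] * nn             # energy change if this spin flips
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1                 # Metropolis acceptance

        print("magnetization per spin:", spins.mean())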

  5. A Comparison of ML, WLSMV, and Bayesian Methods for Multilevel Structural Equation Models in Small Samples: A Simulation Study.

    Science.gov (United States)

    Holtmann, Jana; Koch, Tobias; Lochner, Katharina; Eid, Michael

    2016-01-01

    Multilevel structural equation models are increasingly applied in psychological research. With increasing model complexity, estimation becomes computationally demanding, and small sample sizes pose further challenges for estimation methods relying on asymptotic theory. Recent developments in Bayesian estimation techniques may help to overcome the shortcomings of classical estimation techniques. The use of potentially inaccurate prior information may, however, have detrimental effects, especially in small samples. The present Monte Carlo simulation study compares the statistical performance of classical estimation techniques with Bayesian estimation using different prior specifications for a two-level SEM with either continuous or ordinal indicators. Using two software programs (Mplus and Stan), differential effects of between- and within-level sample sizes on estimation accuracy were investigated. Moreover, it was tested to what extent inaccurate priors may have detrimental effects on parameter estimates in categorical indicator models. For continuous indicators, Bayesian estimation did not show performance advantages over ML. For categorical indicators, Bayesian estimation outperformed WLSMV only in the case of strongly informative accurate priors. Weakly informative inaccurate priors did not deteriorate the performance of the Bayesian approach, while strongly informative inaccurate priors led to severely biased estimates even with large sample sizes. With diffuse priors, Stan yielded better results than Mplus in terms of parameter estimates.

  6. Modelling and Simulation of the SVC for Power System Flow Studies: Electrical Network in voltage drop

    Directory of Open Access Journals (Sweden)

    Narimen Aouzellag LAHAÇANI

    2008-12-01

    The goal of any Flexible AC Transmission Systems (FACTS) device study is to measure the impact of these devices on the state of the electrical networks into which they are introduced. Their principal function is to improve the static and dynamic properties of electrical networks, by increasing the margins of static and dynamic stability and by allowing power to be transmitted up to the thermal limits of the lines. To study this impact, it is necessary to establish the state of the network (bus voltages and angles, powers injected and carried in the lines) before and after the introduction of FACTS devices. This requires computing the power flows using an iterative method such as Newton-Raphson. Performing a calculation without FACTS devices, followed by a calculation with the modifications induced by their integration into the network, makes it possible to compare the results obtained in both cases and thus assess the benefit of using FACTS devices.
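
    The Newton-Raphson iteration at the core of such load-flow studies repeatedly linearizes the mismatch equations; the sketch below uses a toy two-equation system as a stand-in for a real network's power mismatch equations.

        # Generic Newton-Raphson iteration: drive f(x) to zero by repeated
        # linearization. f and its Jacobian are toy placeholders, not a network.
        import numpy as np

        def f(x):
            return np.array([x[0]**2 + x[1] - 1.1, x[0] - x[1]**2 + 0.2])

        def jacobian(x):
            return np.array([[2 * x[0], 1.0], [1.0, -2 * x[1]]])

        x = np.array([1.0, 1.0])            # "flat start" initial guess
        for it in range(20):
            x += np.linalg.solve(jacobian(x), -f(x))   # Newton step
            if np.max(np.abs(f(x))) < 1e-10:
                break

        print(f"converged in {it + 1} iterations: x = {x}")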

  7. Simulating Effects of Long Term Use of Wastewater on Farmers' Health Using System Dynamics Modeling (Case Study: Varamin Plain)

    Directory of Open Access Journals (Sweden)

    Hamzehali Alizadeh

    2017-06-01

    Introduction: Agricultural activity in the Varamin plain has faced many challenges in recent years due to its vicinity to Tehran, the capital of Iran (competition for the Latian dam reservoir, and competition with the Tehran south network in the allocation of the Mamlou dam reservoir and the treated wastewater of the south wastewater treatment plant). Owing to the growth of the population and industry sectors, the Mamlou and Latian dam reservoirs have been allocated to urban use in Tehran. Based on national policy, treated wastewater is to replace Latian dam reservoir water in supplying the water demand of the agricultural sector. Transferring a high volume of wastewater to the Varamin plain will have economic, environmental and social effects. Several factors affect wastewater management and the success of utilization plans, and any change in these factors may have various feedbacks on the other elements of the wastewater use system. Hence, the development of a model capable of simulating all factors, aspects and interactions that affect wastewater utilization is very necessary. The main objective of the present study was the development of an integrated water model to study the long-term effects of irrigation with Tehran treated wastewater, using a system dynamics (SD) modeling approach. Materials and Methods: The Varamin plain is one of the most important agricultural production centers of the country due to its proximity to the large consumer market of Tehran, its fertile soil and its agricultural knowledge. The total irrigated agricultural land in the Varamin plain is 53,486 hectares, comprising 17,274 hectares of barley, 16,926 hectares of wheat, 3,866 hectares of tomato, 3,521 hectares of vegetables, 3,556 hectares of alfalfa, 2,518 hectares of silage maize, 1,771 hectares of melon, 1,642 hectares of cotton, 1,121 hectares of cucumber and 1,291 hectares of other crops. In 2006, the irrigation requirement of the crop pattern was about 690 MCM and the actual agricultural water consumption was about 620 MCM

  8. A comparison of model-based imputation methods for handling missing predictor values in a linear regression model: A simulation study

    Science.gov (United States)

    Hasan, Haliza; Ahmad, Sanizah; Osman, Balkish Mohd; Sapri, Shamsiah; Othman, Nadirah

    2017-08-01

    In regression analysis, missing covariate data is a common problem. Many researchers use ad hoc methods to overcome this problem because of their ease of implementation; however, these methods require assumptions about the data that rarely hold in practice. Model-based methods, such as Maximum Likelihood (ML) using the expectation maximization (EM) algorithm and Multiple Imputation (MI), are more promising when dealing with the difficulties caused by missing data. On the other hand, inappropriate imputation methods can lead to serious bias that severely affects the parameter estimates. The main objective of this study is to provide a better understanding of missing data concepts to assist researchers in selecting appropriate imputation methods. A simulation study was performed to assess the effects of different missing data techniques on the performance of a regression model. The covariate data were generated from an underlying multivariate normal distribution, and the dependent variable was generated as a combination of the explanatory variables. Missing values in the covariates were simulated under a missing at random (MAR) mechanism at four levels of missingness (10%, 20%, 30% and 40%). The ML and MI techniques available within SAS software were investigated. A linear regression was fitted and the model performance measures, MSE and R-squared, were obtained. Results of the analysis showed that MI is superior in handling missing data, with the highest R-squared and lowest MSE, when the percentage of missingness is less than 30%. Both methods are unable to handle levels of missingness greater than 30%.
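
    A minimal sketch of this simulation design, using scikit-learn's IterativeImputer as a stand-in for the MI procedure investigated in the paper (which used SAS); the data-generating model, the MAR mechanism and the missingness rate are illustrative assumptions.

        # Generate correlated covariates, impose MAR missingness on one of them,
        # impute, and refit the regression. IterativeImputer stands in for MI.
        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(7)
        n = 500
        x1 = rng.normal(size=n)
        x2 = 0.6 * x1 + 0.8 * rng.normal(size=n)           # correlated covariate
        y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

        # MAR: probability that x2 is missing depends only on the observed x1
        miss = rng.random(n) < 1 / (1 + np.exp(-(x1 - 0.5)))  # roughly 40% (assumed)
        x2_obs = np.where(miss, np.nan, x2)

        X = np.column_stack([x1, x2_obs])
        X_imp = IterativeImputer(random_state=0).fit_transform(X)
        print("coefficients after imputation:",
              LinearRegression().fit(X_imp, y).coef_)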

  9. Properties of a soft-core model of methanol: An integral equation theory and computer simulation study

    Science.gov (United States)

    Huš, Matej; Munaò, Gianmarco; Urbic, Tomaz

    2014-01-01

    Thermodynamic and structural properties of a coarse-grained model of methanol are examined by Monte Carlo simulations and reference interaction site model (RISM) integral equation theory. Methanol particles are described as dimers formed from an apolar Lennard-Jones sphere, mimicking the methyl group, and a sphere with a core-softened potential as the hydroxyl group. Different closure approximations of the RISM theory are compared and discussed. The liquid structure of methanol is investigated by calculating site-site radial distribution functions and static structure factors for a wide range of temperatures and densities. Results obtained show a good agreement between RISM and Monte Carlo simulations. The phase behavior of methanol is investigated by employing different thermodynamic routes for the calculation of the RISM free energy, drawing gas-liquid coexistence curves that match the simulation data. Preliminary indications for a putative second critical point between two different liquid phases of methanol are also discussed. PMID:25362323

  10. General introduction to simulation models

    DEFF Research Database (Denmark)

    Hisham Beshara Halasa, Tariq; Boklund, Anette

    2012-01-01

    Monte Carlo simulation can be defined as a representation of real-life systems to gain insight into their functioning and to investigate the effects of alternative conditions or actions on the modeled system. Models are a simplification of a system. Most often, it is best to use experiments and fie... as support decision making. However, several other factors affect decision making, such as ethics, politics and economics. Furthermore, the insight gained when models are built points out areas where knowledge is lacking.... of FMD spread that can provide useful and trustworthy advice, there are four important issues which the model should represent: 1) the herd structure of the country in question, 2) the dynamics of animal movements and contacts between herds, 3) the biology of the disease, and 4) the regulations

  11. Microcomputer versus mainframe simulations: A case study

    Science.gov (United States)

    Bengtson, Neal M.

    1988-01-01

    The research was conducted in two parts. Part one consisted of a study of the feasibility of running the Space Transportation Model simulation on an office IBM-AT. The second part was to design simulation runs so as to study the effects of certain performance factors on the execution of the simulation model. The results of this research are given in the two reports which follow: Microcomputer vs. Mainframe Simulation: A Case Study and Fractional Factorial Designs of Simulation Runs for the Space Transportation System Operations Model. In the first part, a DOS batch job was written in order to simplify the execution of the simulation model on an office microcomputer. A comparison study was then performed of running the model on NASA-Langley's mainframe computer vs. running it on the IBM-AT microcomputer. This was done in order to find the advantages and disadvantages of running the model on each machine, with the objective of determining whether running on the office PC was practical. The study concluded that it was. The large number of performance parameters in the Space Transportation model precluded running the full factorial design needed to determine the most significant design factors. The second report gives several suggested fractional factorial designs which require far fewer simulation runs in order to determine which factors have a significant influence on results.

  12. Advances in Intelligent Modelling and Simulation: Simulation Tools and Applications

    CERN Document Server

    Oplatková, Zuzana; Carvalho, Marco; Kisiel-Dorohinicki, Marek

    2012-01-01

    The human capacity to abstract complex systems and phenomena into simplified models has played a critical role in the rapid evolution of our modern industrial processes and scientific research. As a science and an art, Modelling and Simulation have been one of the core enablers of this remarkable human trait, and have become a topic of great importance for researchers and practitioners. This book was created to compile some of the most recent concepts, advances, challenges and ideas associated with Intelligent Modelling and Simulation frameworks, tools and applications. The first chapter discusses the important aspects of human interaction and the correct interpretation of results during simulations. The second chapter gets to the heart of the analysis of entrepreneurship by means of agent-based modelling and simulations. The following three chapters bring together the central theme of simulation frameworks, first describing an agent-based simulation framework, then a simulator for electrical machines, and...

  13. Verifying and Validating Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state of knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack of knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance for assessing the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.
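
    As a concrete example of the statistical sampling mentioned for variability and randomness, the following sketch propagates two uncertain inputs through a toy beam-deflection model by Monte Carlo; the model and the input distributions are placeholders, not anything from the presentation.

        # Monte Carlo propagation of aleatoric input variability through a model.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000

        # uncertain inputs with assumed distributions
        E = rng.normal(200e9, 10e9, n)            # Young's modulus, Pa
        F = rng.lognormal(np.log(1e4), 0.1, n)    # applied force, N

        def deflection(E, F, L=1.0, I=1e-6):      # toy cantilever tip deflection
            return F * L**3 / (3 * E * I)

        d = deflection(E, F)
        print(f"mean = {d.mean() * 1e3:.2f} mm, "
              f"95th percentile = {np.percentile(d, 95) * 1e3:.2f} mm")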

  14. Numerical simulation model of flood-induced flows in urban residential area and the study of damage reduction; Misshu shigaichi no hanran simulation model no kaihatsu to kozui higai keigen taisaku no kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Fukuoka, S.; Mizuguchi, M. [Hiroshima Univ. (Japan); Yokoyama, H

    1998-08-21

    Most large cities in Japan are situated on river floodplains, where population, property and central management functions are concentrated; if a levee-breach flood occurs, it could inflict severe damage on life, property and the socio-economy, and the risk of such a flood is always present. Even when a levee-breach flood does occur, however, destructive damage is no longer acceptable, and risk management countermeasures that keep the damage to a minimum are much needed. The objectives of this study are the construction of a widely applicable flood simulation model capable of predicting the behavior of flood-induced flows in urban residential areas, and the study of damage reduction countermeasures using this model. First, the fluid force acting on house groups with various arrangements was measured, and an equation that consistently reproduces the measured fluid force was derived. Second, to model the flood-induced flow in the urban residential area precisely, a general curvilinear coordinate system was adopted to represent the road networks and house groups, and the flood simulation model was constructed. 16 refs., 21 figs.

  15. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, Kim; Condra, Thomas Joseph; Houbak, Niels

    2004-01-01

    In the present work a framework for optimizing the design of boilers for dynamic operation has been developed. A cost function to be minimized during the optimization has been formulated, and design variables related to the boiler volume and the boiler load gradient (i.e. the firing rate of the boiler) have been defined. Furthermore, a number of constraints related to minimum and maximum boiler load gradient, minimum boiler size, shrinking and swelling, and steam space load have been defined. For defining the constraints related to the required boiler volume, a dynamic model for simulating the boiler performance has been developed. Outputs from the simulations are the shrinking and swelling of the water level in the drum during, for example, a start-up of the boiler; these figures, combined with the requirements with respect to allowable water level fluctuations in the drum, define the requirements with respect to drum
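
    The optimization framing described here, minimizing a cost function of boiler volume and load gradient under inequality constraints, can be sketched with scipy as below; the cost model and the constraint forms are invented stand-ins for the paper's formulation.

        # Constrained design optimization sketch: minimize cost(volume, gradient)
        # subject to inequality constraints. All functions and numbers assumed.
        import numpy as np
        from scipy.optimize import minimize

        def cost(x):
            volume, gradient = x
            return 50.0 * volume + 200.0 / gradient   # big drums and slow ramps cost

        constraints = [
            {"type": "ineq", "fun": lambda x: x[0] - 5.0},         # min volume, m^3
            {"type": "ineq", "fun": lambda x: 0.2 - x[1]},         # max load gradient
            {"type": "ineq", "fun": lambda x: x[0] * x[1] - 0.8},  # water-level proxy
        ]

        res = minimize(cost, x0=np.array([8.0, 0.1]), constraints=constraints,
                       bounds=[(1.0, 50.0), (0.01, 1.0)])
        print(res.x, res.fun)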

  16. Cost reduction of the head stack assembly process in the hard disk drive industry with simulation modeling and optimization: A case study

    Directory of Open Access Journals (Sweden)

    Supawan Srithip

    2017-10-01

    This research, with the main goal of cost reduction in the hard disk drive industry and a focus on the head stack assembly process, is action research between university research teams and industry. It aims to study the head stack assembly process and to investigate the problems that need to be solved in order to reduce costs by using computer simulation and optimization. The steps of simulation methodology were applied, starting from data collection, model building, model verification, model validation, experimentation and optimization. Several factors and their effects were investigated, leading to a production improvement of a 7.94% increase in the number of assembled head stacks or a 7.32% decrease in production cycle time. This paper demonstrates simulation optimization methodology applied to problem solving, and it illustrates a successful case of using simulation optimization for cost reduction.

  17. Computer Simulation Study of Human Locomotion with a Three-Dimensional Entire-Body Neuro-Musculo-Skeletal Model

    Science.gov (United States)

    Hase, Kazunori; Obuchi, Shuichi

    The three-dimensional entire-body neuro-musculo-skeletal model generating normal walking motion was modified to synthesize pathological walking including asymmetrical compensatory motions. In addition to the neuronal parameters, musculo-skeletal parameters were employed as search parameters to represent affected musculo-skeletal systems. This model successfully generated pathological walking patterns, such as walking by a person with one lower extremity shorter than the other and walking by a person with an affected gluteus medius muscle. The simulated walking patterns were of the entire body, three-dimensional, continuous and asymmetrical, and demonstrated the characteristics of actual pathological walking. The walking model with an artificial foot also predicted not only the walking pattern adapted to the artificial foot but also the design parameters of the artificial foot adapted to the effective walking pattern simultaneously. Such simulation methods will establish a novel methodology that we call computational rehabilitation engineering.

  18. Primary implant stability in a bone model simulating clinical situations for the posterior maxilla: an in vitro study

    Science.gov (United States)

    2016-01-01

    Purpose The aim of this study was to determine the influence of anatomical conditions on primary stability in models simulating the posterior maxilla. Methods Polyurethane blocks were designed to simulate monocortical (M) and bicortical (B) conditions. Each condition had four subgroups measuring 3 mm (M3, B3), 5 mm (M5, B5), 8 mm (M8, B8), and 12 mm (M12, B12) in residual bone height (RBH). After implant placement, the implant stability quotient (ISQ), Periotest value (PTV), insertion torque (IT), and reverse torque (RT) were measured. Two-factor ANOVA (two cortical conditions × four RBHs) and additional analyses for simple main effects were performed. Results A significant interaction between cortical condition and RBH was demonstrated for all stability measures in the two-factor ANOVA. In the analyses for simple main effects, ISQ and PTV were statistically higher in the bicortical groups than in the corresponding monocortical groups. In the monocortical group, ISQ and PTV showed a statistically significant rise with increasing RBH. Measurements of IT and RT showed a similar tendency, measuring highest in the M3 group, followed by the M8, M5, and M12 groups. In the bicortical group, all variables showed a similar tendency, with different degrees of rise and decline. The B8 group showed the highest values, followed by the B12, B5, and B3 groups. The highest correlation coefficient was demonstrated between ISQ and PTV. Conclusions Primary stability was enhanced by the presence of bicortex and increased RBH, which may be better demonstrated by ISQ and PTV than by IT and RT. PMID:27588215

  19. Analyzing Strategic Business Rules through Simulation Modeling

    Science.gov (United States)

    Orta, Elena; Ruiz, Mercedes; Toro, Miguel

    Service Oriented Architecture (SOA) holds promise for business agility, since it allows business processes to change to meet new customer demands or market needs without causing a cascade of changes in the underlying IT systems. Business rules are the instrument chosen to help business and IT collaborate. In this paper, we propose the use of simulation models to model and simulate strategic business rules that are then disaggregated at different levels of an SOA architecture. Our proposal aims to help find a good configuration for strategic business objectives and IT parameters. The paper includes a case study where a simulation model is built to support business decision-making in a context where finding a good configuration of the different business parameters and performance is too complex to analyze by trial and error.

  20. Simulated annealing model of acupuncture

    Science.gov (United States)

    Shang, Charles; Szu, Harold

    2015-05-01

    The growth control singularity model suggests that acupuncture points (acupoints) originate from organizers in embryogenesis. Organizers are singular points in growth control. Acupuncture can cause perturbation of a system, with effects similar to simulated annealing. In a clinical trial, the goal of a treatment is to relieve a certain disorder, which corresponds to reaching a certain local optimum in simulated annealing. The self-organizing capacity of the system is limited and related to the person's general health and age. Perturbation at acupoints can lead to a stronger local excitation (analogous to a higher annealing temperature) than perturbation at non-singular points (placebo control points). Such differences diminish as the number of perturbed points increases, owing to the wider distribution of the limited self-organizing activity. This model explains the following facts from systematic reviews of acupuncture trials: 1. Properly chosen single-acupoint treatment for certain disorders can lead to highly repeatable efficacy above placebo. 2. When multiple acupoints are used, the results can be highly repeatable if the patients are relatively healthy and young, but are usually mixed if the patients are old, frail and have multiple simultaneous disorders, as the number of local optima or comorbidities increases. 3. As the number of acupoints used increases, the efficacy difference between sham and real acupuncture often diminishes. The model predicts that the efficacy of acupuncture is negatively correlated with disease chronicity, severity and patient age. This is the first biological-physical model of acupuncture that can predict and guide clinical acupuncture research.
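
    For readers unfamiliar with the analogy, a generic simulated-annealing loop is sketched below: higher temperature permits larger uphill moves and hence escape from poor local optima. The objective function and the cooling schedule are arbitrary, with no biological meaning.

        # Generic simulated annealing on a multimodal toy objective.
        import numpy as np

        rng = np.random.default_rng(3)

        def energy(x):                        # toy "disorder level" to minimize
            return x**2 + 3.0 * np.sin(3.0 * x)

        x = 4.0                               # initial state
        T = 2.0                               # initial annealing temperature
        for step in range(5000):
            candidate = x + rng.normal(0, 0.5)
            dE = energy(candidate) - energy(x)
            if dE < 0 or rng.random() < np.exp(-dE / T):
                x = candidate                 # accept downhill and, at high T, uphill
            T = max(0.999 * T, 1e-3)          # geometric cooling schedule

        print(f"final state x = {x:.3f}, energy = {energy(x):.3f}")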

  1. Tentative study of flow patterns in the North Aegean Sea using NOAA-AVHRR images and 2D model simulation

    Directory of Open Access Journals (Sweden)

    G. Zodiatis

    1996-11-01

    A statistical technique for image processing, the maximum cross correlation (MCC) method, was applied to sequences of NOAA-AVHRR thermal data in order to explore the surface advective current dynamics at the discharge region of the Hellespont in the North Aegean Sea. A 2D numerical flow model was also used to simulate the barotropic flow pattern of the surface water layer. The model was forced with diurnal wind fields obtained for the same period as the satellite infrared images. The currents (magnitude and direction) derived from the two methods compare satisfactorily, despite the fact that some model simplifications were made.
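
    The MCC method tracks features between successive thermal images by finding, for each template window, the displacement that maximizes the normalized cross correlation; displacement divided by the time between images gives the advective velocity. A sketch with a synthetic scene pair:

        # Maximum cross correlation between a template window and a search area;
        # the image pair is synthetic, standing in for two AVHRR scenes.
        import numpy as np

        def ncc(a, b):
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return np.mean(a * b)

        rng = np.random.default_rng(5)
        img1 = rng.normal(size=(64, 64))
        img2 = np.roll(img1, shift=(3, -2), axis=(0, 1))  # "advected" by (3, -2)

        win, r0, c0, search = 16, 24, 24, 6
        template = img1[r0:r0 + win, c0:c0 + win]
        best = max(((ncc(template, img2[r0 + dr:r0 + dr + win, c0 + dc:c0 + dc + win]),
                     dr, dc)
                    for dr in range(-search, search + 1)
                    for dc in range(-search, search + 1)), key=lambda t: t[0])
        print("best correlation %.2f at displacement %s" % (best[0], best[1:]))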

  2. A comparison study between observations and simulation results of Barghouthi model for O+ and H+ outflows in the polar wind

    Directory of Open Access Journals (Sweden)

    I. A. Barghouthi

    2011-11-01

    To advance our understanding of the effect of wave-particle interactions on ion outflows in the polar wind region, and of the resulting ion heating and escape from low altitudes to higher altitudes, we carried out a comparison between polar wind simulations obtained using the Barghouthi model and corresponding observations obtained from different satellites. The Barghouthi model describes O+ and H+ outflows in the polar wind region in the range 1.7 RE to 13.7 RE, including the effects of gravity, the polarization electrostatic field, diverging geomagnetic field lines, and wave-particle interactions. Wave-particle interactions were included in the model by using a particle diffusion equation, which depends on diffusion coefficients determined from estimates of the typical electric field spectral density at relevant altitudes and frequencies. We provide a formula for the velocity diffusion coefficient that depends on altitude and velocity, in which the velocity part depends on the perpendicular wavelength of the electromagnetic turbulence λ⊥. Because of the shortage of information about λ⊥, it was included in the model as a parameter. We produce different simulations (i.e. ion velocity distributions, ion density, ion drift velocity, ion parallel and perpendicular temperatures) for O+ and H+ ions and for different λ⊥. We discuss the simulations in terms of wave-particle interactions, perpendicular adiabatic cooling, parallel adiabatic cooling, mirror force, and ion potential energy. The main findings of the simulations are as follows: (1) O+ ions are highly energized at all altitudes in the simulation tube due to wave-particle interactions, which heat the ions in the perpendicular direction; part of this gained energy is transferred to the parallel direction by the mirror force, resulting in the acceleration of O+ ions along geomagnetic field lines from lower altitudes to higher altitudes. (2) The effect of wave-particle interactions is negligible for H

  3. Type-segregated aerosol effects on regional monsoon activity: A study using ground-based experiments and model simulations

    Science.gov (United States)

    Vijayakumar, K.; Devara, P. C. S.; Sonbawne, S. M.

    2014-12-01

    Classification of observed aerosols into key types [e.g., clean-maritime (CM), desert-dust (DD), urban-industrial/biomass-burning (UI/BB), black carbon (BC), organic carbon (OC) and mixed-type aerosols (MA)] facilitates inference of aerosol sources, effects and feedback mechanisms, not only improving the accuracy of satellite retrievals but also helping to quantify aerosol radiative impacts on climate. In this paper, we report the results of a study conducted in this direction, employing a Cimel Sun-sky radiometer at the Indian Institute of Tropical Meteorology (IITM), Pune, India during 2008 and 2009, two successive but contrasting monsoon years. The study provided observational evidence that local sources are subject to heavy loading of absorbing aerosols (dust and black carbon), with strong seasonality closely linked to the monsoon annual rainfall cycle over Pune, a tropical urban station in India. The results revealed the absence of CM aerosols in the pre-monsoon and monsoon seasons of 2009, as opposed to 2008. Higher loading of dust aerosols was observed in the pre-monsoon and monsoon seasons of 2009, the majority of which may be coated with fine BC aerosols from local emissions, leading to a reduction in regional rainfall. Further, a significant decrease in coarse-mode AOD and the presence of carbonaceous aerosols, affecting aerosol-cloud interaction and monsoon rain processes via microphysics and dynamics, are considered responsible for the reduction in rainfall during 2009. Additionally, we discuss how the optical depth contributed by different types of aerosols influences the distribution of monsoon rainfall over an urban region, using the Monitoring Atmospheric Composition and Climate (MACC) aerosol reanalysis. Furthermore, predictions of Dust REgional Atmospheric Model (DREAM) simulations combined with the HYSPLIT (HYbrid Single Particle Lagrangian Integrated Trajectory) cluster model are also discussed in support of the

  4. Impulse pumping modelling and simulation

    International Nuclear Information System (INIS)

    Pierre, B; Gudmundsson, J S

    2010-01-01

    Impulse pumping is a new pumping method based on the propagation of pressure waves. Of particular interest is the application of impulse pumping to artificial lift situations, where fluid is transported from the wellbore to the wellhead using pressure waves generated at the wellhead. The motor-driven element of an impulse pumping apparatus is therefore located at the wellhead and can be separated from the flowline, which facilitates the operation and maintenance of an impulse pump. The paper describes the different elements of an impulse pumping apparatus, reviews the physical principles and details the modelling of this novel pumping method. Results from numerical simulations of the propagation of pressure waves in water-filled pipelines are then presented, illustrating the physical principles of impulse pumping and validating the described modelling against experimental data.

  5. Bridging experiments, models and simulations

    DEFF Research Database (Denmark)

    Carusi, Annamaria; Burrage, Kevin; Rodríguez, Blanca

    2012-01-01

    understanding of living organisms and also how they can reduce, replace, and refine animal experiments. A fundamental requirement to fulfill these expectations and achieve the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present...... of biovariability; 2) testing and developing robust techniques and tools as a prerequisite to conducting physiological investigations; 3) defining and adopting standards to facilitate the interoperability of experiments, models, and simulations; 4) and understanding physiological validation as an iterative process...... that contributes to defining the specific aspects of cardiac electrophysiology the MSE system targets, rather than being only an external test, and that this is driven by advances in experimental and computational methods and the combination of both....

  6. Modeling and CFD Simulation of nutrient Distribution in picoliter bioreactors for bacterial growth studies on single-cell level

    OpenAIRE

    Westerwalbesloh, Christoph; Grünberger, Alexander; Stute, Birgit; Weber, Sophie; Wiechert, Wolfgang; Kohlheyer, Dietrich; von Lieres, Eric

    2015-01-01

    A microfluidic device for microbial single-cell cultivation of bacteria was modeled and simulated using COMSOL Multiphysics. The liquid velocity field and the mass transfer within the supply channels and cultivation chambers were calculated to gain insight in the distribution of supplied nutrients and metabolic products secreted by the cultivated bacteria. The goal was to identify potential substrate limitations or product accumulations within the cultivation device. The metabolic uptake and ...

  7. A model for simulating the active dispersal of juvenile sea turtles with a case study on western Pacific leatherback turtles.

    Science.gov (United States)

    Gaspar, Philippe; Lalire, Maxime

    2017-01-01

    Oceanic currents are known to broadly shape the dispersal of juvenile sea turtles during their pelagic stage. Accordingly, simple passive drift models are widely used to investigate the distribution at sea of various juvenile sea turtle populations. However, evidence is growing that juveniles do not drift purely passively but also display some swimming activity, likely directed towards favorable habitats. We therefore present here a novel Sea Turtle Active Movement Model (STAMM) in which juvenile sea turtles actively disperse under the combined effects of oceanic currents and habitat-driven movements. This model applies to all sea turtle species but is calibrated here for leatherback turtles (Dermochelys coriacea). It is first tested in a simulation of the active dispersal of juveniles originating from Jamursba-Medi, a main nesting beach of the western Pacific leatherback population. Dispersal into the North Pacific Ocean is specifically investigated. Simulation results demonstrate that, while oceanic currents broadly shape the dispersal area, modeled habitat-driven movements strongly structure the spatial and temporal distribution of juveniles within this area. In particular, these movements lead juveniles to gather in the North Pacific Transition Zone (NPTZ) and to undertake seasonal north-south migrations. More surprisingly, juveniles in the NPTZ are simulated to swim mostly towards the west, which considerably slows down their progression towards the American west coast. This increases their residence time, and hence the risk of interactions with fisheries, in the central and eastern parts of the North Pacific basin. Simulated habitat-driven movements also strongly reduce the risk of cold-induced mortality. This risk appears to be larger among the juveniles that rapidly circulate into the Kuroshio than among those that first drift into the North Equatorial Counter Current (NECC). This mechanism might induce marked interannual variability in juvenile survival as the

  8. A model for simulating the active dispersal of juvenile sea turtles with a case study on western Pacific leatherback turtles.

    Directory of Open Access Journals (Sweden)

    Philippe Gaspar

    Oceanic currents are known to broadly shape the dispersal of juvenile sea turtles during their pelagic stage. Accordingly, simple passive drift models are widely used to investigate the distribution at sea of various juvenile sea turtle populations. However, evidence is growing that juveniles do not drift purely passively but also display some swimming activity, likely directed towards favorable habitats. We therefore present here a novel Sea Turtle Active Movement Model (STAMM) in which juvenile sea turtles actively disperse under the combined effects of oceanic currents and habitat-driven movements. This model applies to all sea turtle species but is calibrated here for leatherback turtles (Dermochelys coriacea). It is first tested in a simulation of the active dispersal of juveniles originating from Jamursba-Medi, a main nesting beach of the western Pacific leatherback population. Dispersal into the North Pacific Ocean is specifically investigated. Simulation results demonstrate that, while oceanic currents broadly shape the dispersal area, modeled habitat-driven movements strongly structure the spatial and temporal distribution of juveniles within this area. In particular, these movements lead juveniles to gather in the North Pacific Transition Zone (NPTZ) and to undertake seasonal north-south migrations. More surprisingly, juveniles in the NPTZ are simulated to swim mostly towards the west, which considerably slows down their progression towards the American west coast. This increases their residence time, and hence the risk of interactions with fisheries, in the central and eastern parts of the North Pacific basin. Simulated habitat-driven movements also strongly reduce the risk of cold-induced mortality. This risk appears to be larger among the juveniles that rapidly circulate into the Kuroshio than among those that first drift into the North Equatorial Counter Current (NECC). This mechanism might induce marked interannual variability in juvenile
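
    A minimal sketch of the active-movement idea behind a model like STAMM, stepping a particle with the sum of an ambient current and a habitat-directed swimming velocity; the current field, the habitat target and the swimming speed are invented placeholders, not the model's calibration.

        # Lagrangian particle step: drift with the current plus directed swimming.
        import numpy as np

        def current(p):                        # idealized zonal jet, m/s (assumed)
            return np.array([0.3 * np.exp(-((p[1] - 35.0) / 5.0) ** 2), 0.0])

        def habitat_direction(p):              # unit vector toward 30 N (assumed target)
            g = np.array([0.0, 30.0 - p[1]])
            n = np.linalg.norm(g)
            return g / n if n > 0 else g

        deg_per_m = 1.0 / 111_000.0            # rough conversion at these latitudes
        p = np.array([140.0, 38.0])            # lon, lat of a juvenile (degrees)
        dt = 6 * 3600.0                        # 6-hour time step, s
        v_swim = 0.1                           # sustained swimming speed, m/s (assumed)

        for _ in range(4 * 30):                # one month of drift plus swimming
            v = current(p) + v_swim * habitat_direction(p)
            p = p + v * dt * deg_per_m
        print("position after 30 days:", p.round(2))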

  9. Simulation of Regionally Ecological Land Based on a Cellular Automation Model: A Case Study of Beijing, China

    Directory of Open Access Journals (Sweden)

    Xiubin Li

    2012-08-01

    Ecological land is like the “liver” of a city and is very useful to public health. Ecological land change is a spatially dynamic, non-linear process arising from the interaction between natural and anthropogenic factors at different scales. In this study, by setting up a natural development scenario, an object orientation scenario and an ecosystem priority scenario, a cellular automaton (CA) model was established to simulate the evolution pattern of ecological land in Beijing in the year 2020. Under the natural development scenario, most ecological land will be replaced by construction land and cropland. Under the object orientation and ecosystem priority scenarios, however, the ecological land area will increase, especially under the ecosystem priority scenario. When considering factors such as the total area of ecological land, the loss of key ecological land and the spatial patterns of land use, the scenarios ranked from best to worst are ecosystem priority, object orientation and natural development; future land management policies in Beijing should therefore focus on the conversion of cropland to forest, wetland protection and the prohibition of exploitation of natural protection zones, water source areas and forest parks, so as to maintain the safety of the regional ecosystem.
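
    A toy cellular-automaton transition rule in the spirit of such land-use models is sketched below: an ecological cell's probability of converting to construction land grows with the number of already-built neighbours. The rule and all rates are invented for illustration, not the paper's calibrated model.

        # Toy CA land-use step: neighbourhood pressure drives conversion.
        import numpy as np

        rng = np.random.default_rng(9)
        ECO, BUILT = 0, 1
        grid = (rng.random((100, 100)) < 0.15).astype(int)   # 15% built (assumed)

        def step(grid, base_p=0.005, gain=0.03):
            built = grid == BUILT
            # count built cells among the 8 neighbours (toroidal boundaries)
            nbrs = sum(np.roll(np.roll(built, dr, 0), dc, 1)
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0))
            p_convert = base_p + gain * nbrs                 # neighbourhood pressure
            converted = (grid == ECO) & (rng.random(grid.shape) < p_convert)
            return np.where(converted, BUILT, grid)

        for year in range(10):                               # simulate to a target year
            grid = step(grid)
        print("ecological land remaining: %.1f%%" % (100 * (grid == ECO).mean()))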

  10. Toward a better integration of roughness in rockfall simulations - a sensitivity study with the RockyFor3D model

    Science.gov (United States)

    Monnet, Jean-Matthieu; Bourrier, Franck; Milenkovic, Milutin

    2017-04-01

    Advances in numerical simulation and the analysis of real-size field experiments have supported the development of process-based rockfall simulation models. The availability of high resolution remote sensing data and high-performance computing now makes it possible to implement them in operational applications, e.g. risk zoning and protection structure design. One key parameter for rock propagation is the surface roughness, sometimes defined as the variation in height perpendicular to the slope (Pfeiffer and Bowen, 1989). Roughness-related input parameters for rockfall models are usually determined by experts in the field. In the RockyFor3D model (Dorren, 2015), three values related to the distribution of obstacles (deposited rocks, stumps, fallen trees, etc., as seen from the incoming rock) relative to the average slope are estimated. The use of high resolution digital terrain models (DTMs) questions both the scale usually adopted by experts for roughness assessment and the relevance of the modeling hypotheses regarding the rock/ground interaction. Indeed, experts interpret the surrounding terrain as obstacles or ground depending on the overall visibility and on the nature of the objects, and digital models represent the terrain with a certain amount of smoothing, depending on the sensor capacities. Besides, the rock rebound on the ground is modeled by changes in the velocities of the gravity center of the block due to impact. Thus, the use of a DTM with a resolution smaller than the block size might have little relevance while increasing the computational burden. The objective of this work is to investigate this issue of scale relevance with simulations based on RockyFor3D, in order to derive guidelines for roughness estimation by field experts. First, a sensitivity analysis is performed to identify the combinations of parameters (slope, soil roughness parameter, rock size) for which the roughness values have a critical effect on rock propagation on a regular hillside. Second, a more

  11. Nuclear reactor core modelling in multifunctional simulators

    Energy Technology Data Exchange (ETDEWEB)

    Puska, E.K. [VTT Energy, Nuclear Energy, Espoo (Finland)

    1999-06-01

    studied to assess the possibilities for using three-dimensional cores in training simulators. The core model results have been compared with measurement data from the Loviisa WWER-type plant in steady state and in some transients. Hypothetical control rod withdrawal, ejection and boron dilution transients have been calculated with various three-dimensional core models for the Loviisa WWER-440 core. Several ATWS analyses for the WWER-1000/91 plant have been performed using the three-dimensional core model. In this context, the results of APROS have been compared in detail with the results of the HEXTRAN code. The three-dimensional Olkiluoto BWR-type core model has been used for transient calculations and for severe accident re-criticality studies. The one-dimensional core model is at present used in several plant analyser and training simulator applications, and it has been used extensively for safety analyses in the Loviisa WWER-440 plant modernisation project. (orig.) 75 refs. The thesis also includes eight previous publications by the author

  12. Towards the formulation of a realistic 3D model for simulation of magnetron injection guns for gyrotrons. A preliminary study

    Energy Technology Data Exchange (ETDEWEB)

    Sabchevski, S. [Bulgarian Academy of Sciences (Bulgaria). Institute of Electronics; Zhelyazkov, I. [Sofia Univ. (Bulgaria). Faculty of Physics; Illy, S.; Piosczyk, B.; Borie, E.

    2008-07-15

    Numerical experiments based on adequate, self-consistent physical models implemented in simulation codes are widely used for computer-aided design (CAD), analysis and optimization of the electron optical systems (EOS) of gyrotrons. An essential part of the physical model is the emission model, i.e., the relations that govern the value of the beam current extracted from the emitter as well as its energy spectrum and its spatial and angular distribution. In this paper, we present a compendium of the basic theory and the most essential formulas, and discuss the most important factors responsible for the nonuniformity of the emission and for velocity spread. We also review the emission models realized in various ray-tracing and particle-in-cell (PIC) codes and present a general formulation of a 3D emission model based on the principle of decomposing the region near the cathode into a set of equivalent diodes. It is believed that the information summarized in this compendium will be helpful for developing novel modules for calculating the initial distribution, both in the available 2D computer programs that are currently being upgraded and in the novel 3D simulation tools now under development. (orig.)
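
    The abstract does not reproduce the emission relations themselves. For orientation, a relation commonly used in thermionic emission models of this kind is the Richardson-Dushman law with a Schottky field correction; the form below is our assumption of the generic textbook expression, not a formula quoted from the paper.

```latex
% Richardson-Dushman emission with Schottky correction (generic form,
% assumed here for illustration; not quoted from the paper):
\[
  J \;=\; A\,T_c^{2}\,
  \exp\!\left(-\,\frac{W - \sqrt{e^{3}E/(4\pi\varepsilon_0)}}{k_B T_c}\right)
\]
% J   : emitted current density (temperature-limited regime)
% A   : Richardson constant,  T_c : cathode temperature
% W   : work function,  E : electric field at the emitter surface
```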

  13. Simulation study of a magnetocardiogram based on a virtual heart model: effect of a cardiac equivalent source and a volume conductor

    International Nuclear Information System (INIS)

    Shou Guo-Fa; Xia Ling; Dai Ling; Ma Ping; Tang Fa-Kuan

    2011-01-01

    In this paper, we present a magnetocardiogram (MCG) simulation study using the boundary element method (BEM), based on a virtual heart model and a realistic human volume conductor model. The contributions of different cardiac equivalent source models and volume conductor models to the MCG are comprehensively investigated. The single dipole source model, the multiple dipole source model and the equivalent double layer (EDL) source model are analysed and compared as cardiac equivalent source models. The effect of the volume conductor model on the MCG combined with each of these cardiac equivalent sources is also investigated. The simulation results demonstrate that cardiac electrophysiological information is partly lost when only a single dipole source is used, whereas the EDL source is a good option for MCG simulation, and the effect of the volume conductor is smallest for the EDL source. The EDL source is therefore suitable for the study of MCG forward and inverse problems, and more attention should be paid to it in future MCG studies. (general)

  14. Numerical simulation for regional ozone concentrations: A case study by weather research and forecasting/chemistry (WRF/Chem) model

    Energy Technology Data Exchange (ETDEWEB)

    Habib Al Razi, Khandakar Md; Hiroshi, Moritomi [Environmental and Renewable Energy System, Graduate School of Engineering, Gifu University, 1-1 Yanagido, Gifu City, 501-1193 (Japan)

    2013-07-01

    The objective of this research is to better understand and predict the atmospheric concentration distribution of ozone and its precursors, in particular within the planetary boundary layer, over Kawasaki City and the Greater Tokyo Area using the fully coupled online WRF/Chem (Weather Research and Forecasting/Chemistry) model. A serious and continuous high-ozone episode in the Greater Tokyo Area (GTA) during 14–18 August 2010 was investigated using observation data. We analyzed the ozone and other trace gas concentrations, as well as the corresponding weather conditions during this episode, with the WRF/Chem model. The simulation results revealed that the episode was mainly caused by the accumulation of ozone-rich pollution over the Greater Tokyo Area. WRF/Chem showed relatively good performance in modeling this continuous high-ozone episode: the simulated and observed concentrations of ozone, NOx and NO2 are basically in agreement at Kawasaki City, with best correlation coefficients of 0.87, 0.70 and 0.72, respectively. Moreover, WRF/Chem simulations with the WRF preprocessing software (WPS) show better agreement with meteorological observations such as surface winds and temperature profiles at ground level in this area. As a result, the surface ozone simulation performance was enhanced in terms of peak ozone and spatial patterns, and WRF/Chem succeeded in generating the meteorological fields as well as ozone, NOx, NO2 and NO.

  15. Bootstrap model selection had similar performance for selecting authentic and noise variables compared to backward variable elimination: a simulation study.

    Science.gov (United States)

    Austin, Peter C

    2008-10-01

    Researchers have proposed using bootstrap resampling in conjunction with automated variable selection methods to identify predictors of an outcome and to develop parsimonious regression models. Using this method, multiple bootstrap samples are drawn from the original data set. Traditional backward variable elimination is used in each bootstrap sample, and the proportion of bootstrap samples in which each candidate variable is identified as an independent predictor of the outcome is determined. The performance of this method for identifying predictor variables has not been examined. Monte Carlo simulation methods were used to determine the ability of bootstrap model selection methods to correctly identify predictors of an outcome when those variables that are selected for inclusion in at least 50% of the bootstrap samples are included in the final regression model. We compared the performance of the bootstrap model selection method to that of conventional backward variable elimination. Bootstrap model selection tended to yield approximately the same proportion of selected models matching the true regression model as conventional backward variable elimination, and it performed comparably to backward variable elimination for identifying the true predictors of a binary outcome.
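
    The procedure as described is straightforward to restate in code. The sketch below, in Python with statsmodels, is a minimal reading of the method under stated assumptions: a logistic outcome model, p-value-driven backward elimination, and the 50% inclusion threshold; the function names and cutoffs are ours, not the article's.

```python
import numpy as np
import statsmodels.api as sm

def backward_elimination(X, y, cols, alpha=0.05):
    """Drop the least significant predictor until all p-values < alpha."""
    cols = list(cols)
    while cols:
        fit = sm.Logit(y, sm.add_constant(X[:, cols])).fit(disp=0)
        pvals = fit.pvalues[1:]              # skip the intercept
        worst = int(np.argmax(pvals))
        if pvals[worst] < alpha:
            break
        cols.pop(worst)
    return set(cols)

def bootstrap_selection(X, y, n_boot=500, threshold=0.5, seed=1):
    """Keep variables selected in >= `threshold` of bootstrap samples."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample rows, with replacement
        for j in backward_elimination(X[idx], y[idx], range(p)):
            counts[j] += 1
    return np.where(counts / n_boot >= threshold)[0]

# toy usage: 2 true predictors out of 6
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (rng.random(200) < 1 / (1 + np.exp(-(X[:, 0] - X[:, 1])))).astype(int)
print(bootstrap_selection(X, y, n_boot=100))
```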

  16. Challenges for Modeling and Simulation

    National Research Council Canada - National Science Library

    Johnson, James

    2002-01-01

    This document deals with modeling and simulation. The strengths are the ability to study processes that rarely or never occur, to evaluate a wide range of alternatives, and to generate new ideas, new concepts and innovative solutions...

  17. Experiments with Interaction between the National Water Model and the Reservoir System Simulation Model: A Case Study of Russian River Basin

    Science.gov (United States)

    Kim, J.; Johnson, L.; Cifelli, R.; Chandra, C. V.; Gochis, D.; McCreight, J. L.; Yates, D. N.; Read, L.; Flowers, T.; Cosgrove, B.

    2017-12-01

    NOAA National Water Center (NWC), in partnership with the National Centers for Environmental Prediction (NCEP), the National Center for Atmospheric Research (NCAR) and other academic partners, has produced operational hydrologic predictions for the nation since the summer of 2016 using a new National Water Model (NWM) that is based on the community WRF-Hydro modeling system (Gochis et al., 2015). The NWM produces a variety of hydrologic analysis and prediction products, including gridded fields of soil moisture, snowpack, shallow groundwater levels, inundated area depths and evapotranspiration, as well as estimates of river flow and velocity for approximately 2.7 million river reaches. Also included in the NWM are representations of more than 1,200 reservoirs, which are linked into the national channel network defined by the USGS NHDPlusv2.0 hydrography dataset. Despite the unprecedented spatial and temporal coverage of the NWM, many known deficiencies exist, including the representation of lakes and reservoirs. This study addresses the implementation of a reservoir assimilation scheme through coupling of a reservoir simulation model to represent the influence of managed flows. We examine the use of reservoir operations to dynamically update lake/reservoir storage volume states, characterize the flow of river reaches into and out of lakes and reservoirs, and incorporate enhanced reservoir operating rules for the reservoir model options within the NWM. Model experiments focus on a pilot reservoir domain, Lake Mendocino, CA, and its contributing watershed, the East Fork Russian River. This reservoir is modeled using the United States Army Corps of Engineers (USACE) HEC-ResSim, developed to examine forecast-informed reservoir operations (FIRO) in the Russian River basin.

  18. Conducting Simulation Studies in Psychometrics

    Science.gov (United States)

    Feinberg, Richard A.; Rubright, Jonathan D.

    2016-01-01

    Simulation studies are fundamental to psychometric discourse and play a crucial role in operational and academic research. Yet, resources for psychometricians interested in conducting simulations are scarce. This Instructional Topics in Educational Measurement Series (ITEMS) module is meant to address this deficiency by providing a comprehensive…

  19. ADS Model in the TIRELIRE-STRATEGIE Fuel Cycle Simulation Code Application to Minor Actinides Transmutation Studies

    International Nuclear Information System (INIS)

    Garzenne, Claude; Massara, Simone; Tetart, Philippe

    2006-01-01

    Accelerator Driven Systems (ADSs) offer the advantage, thanks to core sub-criticality, of burning highly radioactive elements such as americium and curium in a dedicated stratum, thus avoiding polluting the main part of the nuclear fleet, which is optimized for electricity production, with these elements. This paper first presents the ADS model implemented in the fuel cycle simulation code TIRELIRE-STRATEGIE, which we developed at the EDF R and D Division for nuclear power scenario studies. We then show and comment on the results of a TIRELIRE-STRATEGIE calculation of a transition scenario between the current French nuclear fleet and a fast reactor fleet fully deployed towards the end of the 21st century, consistent with the EDF prospective view, with 3 options for minor actinides management: 1) vitrified with fission products to be sent to final disposal; 2) extracted together with plutonium from the spent fuel to be transmuted in Generation IV fast reactors; 3) extracted separately from plutonium to be incinerated in an ADS double stratum. The comparison of nuclear fuel cycle material fluxes and inventories between these options shows that ADSs are not more efficient than critical fast reactors for reducing high level waste radio-toxicity; that minor actinide inventories and fluxes in the fuel cycle are more than twice as high with a double ADS stratum as with minor actinide transmutation in Generation IV FBRs; and that about fourteen 400 MWth ADSs are necessary to incinerate the minor actinides issued from a 60 GWe Generation IV fast reactor fleet, corresponding to the current French nuclear fleet installed power. (authors)

  20. Simulation Models for Socioeconomic Inequalities in Health: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Niko Speybroeck

    2013-11-01

    Background: The emergence and evolution of socioeconomic inequalities in health involve multiple factors interacting with each other at different levels. Simulation models are suitable for studying such complex and dynamic systems and have the ability to test the impact of policy interventions in silico. Objective: To explore how simulation models have been used in the field of socioeconomic inequalities in health. Methods: An electronic search of studies assessing socioeconomic inequalities in health using a simulation model was conducted. Characteristics of the simulation models were extracted and distinct simulation approaches were identified. As an illustration, a simple agent-based model of the emergence of socioeconomic differences in alcohol abuse was developed. Results: We found 61 studies published between 1989 and 2013. Ten different simulation approaches were identified. The agent-based model illustration showed that multilevel, reciprocal and indirect effects of social determinants on health can be modeled flexibly. Discussion and Conclusions: Based on the review, we discuss the utility of simulation models for studying health inequalities and refer to good modeling practices for developing such models. The review and the simulation model example suggest that the use of simulation models may enhance the understanding and debate about existing and new frameworks of socioeconomic inequalities in health.
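
    To give a flavour of the kind of illustration described, here is a toy agent-based sketch in Python of socioeconomic differences in alcohol abuse emerging from a direct socioeconomic-position effect plus peer influence through a social network. All rates, the network structure and the parameter values are invented for illustration; this is not the published model.

```python
import numpy as np

rng = np.random.default_rng(42)
N, T = 1000, 100
sep = rng.random(N)                       # socioeconomic position, 0..1
drinks = rng.random(N) < 0.10             # 10% initial prevalence
friends = rng.integers(0, N, size=(N, 5)) # 5 random social ties per agent

for _ in range(T):
    stress = 0.02 * (1.0 - sep)                    # direct SEP effect
    peer = 0.05 * drinks[friends].mean(axis=1)     # indirect, via network
    start = rng.random(N) < (stress + peer)        # onset probability
    quit_ = rng.random(N) < 0.03                   # constant quit rate
    drinks = (drinks | start) & ~quit_

low, high = sep < 0.5, sep >= 0.5
print(f"prevalence, low SEP:  {drinks[low].mean():.2f}")
print(f"prevalence, high SEP: {drinks[high].mean():.2f}")
```

    Even this minimal version reproduces the qualitative point made in the review: a gradient by socioeconomic position emerges from the combination of a direct effect and a reciprocal network effect.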

  1. Estimating the predictive ability of genetic risk models in simulated data based on published results from genome-wide association studies.

    Science.gov (United States)

    Kundu, Suman; Mihaescu, Raluca; Meijer, Catherina M C; Bakker, Rachel; Janssens, A Cecile J W

    2014-01-01

    There is increasing interest in investigating genetic risk models in empirical studies, but such studies are premature when the expected predictive ability of the risk model is low. We assessed how accurately the predictive ability of genetic risk models can be estimated in simulated data that are created based on the odds ratios (ORs) and frequencies of single-nucleotide polymorphisms (SNPs) obtained from genome-wide association studies (GWASs). We aimed to replicate published prediction studies that reported the area under the receiver operating characteristic curve (AUC) as a measure of predictive ability. We searched GWAS articles for all SNPs included in these models and extracted ORs and risk allele frequencies to construct genotypes and disease status for a hypothetical population. Using these hypothetical data, we reconstructed the published genetic risk models and compared their AUC values to those reported in the original articles. The accuracy of the AUC values varied with the method used for the construction of the risk models. When logistic regression analysis was used to construct the genetic risk model, AUC values estimated by the simulation method were similar to the published values, with a median absolute difference of 0.02 [range: 0.00, 0.04]. This difference was 0.03 [range: 0.01, 0.06] and 0.05 [range: 0.01, 0.08] for unweighted and weighted risk scores, respectively. The predictive ability of genetic risk models can be estimated using simulated data based on results from GWASs. Simulation methods can be useful to estimate the predictive ability in the absence of empirical data and to decide whether empirical investigation of genetic risk models is warranted.
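
    Stripped to its essentials, the simulation recipe looks like the Python sketch below: genotypes are drawn under an assumed Hardy-Weinberg equilibrium from published risk-allele frequencies, disease status is assigned from the per-SNP odds ratios, and the AUC of the weighted risk score is computed. The base odds, effect sizes and function names are illustrative assumptions, not values from the article.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def simulate_auc(ors, freqs, base_odds=0.1, n=100_000, seed=7):
    """Simulate genotypes from risk-allele frequencies (HWE assumed),
    assign disease status from per-SNP odds ratios, and return the AUC
    of the weighted risk score."""
    rng = np.random.default_rng(seed)
    ors, freqs = np.asarray(ors), np.asarray(freqs)
    # genotype = number of risk alleles (0/1/2), binomial under HWE
    geno = rng.binomial(2, freqs, size=(n, len(freqs)))
    # weighted risk score: sum of per-allele log odds ratios
    score = geno @ np.log(ors)
    # disease status drawn from each subject's modeled probability
    odds = base_odds * np.exp(score - score.mean())
    disease = rng.random(n) < odds / (1.0 + odds)
    return roc_auc_score(disease, score)

# e.g. ten SNPs with modest effects, as typically reported by GWASs
print(simulate_auc(ors=[1.2] * 10, freqs=[0.3] * 10))
```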

  2. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    Science.gov (United States)

    Background: Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of a

  3. Crowd Human Behavior for Modeling and Simulation

    Science.gov (United States)

    2009-08-06

    Crowd Human Behavior for Modeling and Simulation. Elizabeth Mezzacappa, Ph.D., and Gordon Cooke, MEME, Target Behavioral Response Laboratory, ARDEC. Conference presentation, 2008-2009. The presentation addresses "understanding human behavior" and "model validation and verification", and focuses on modeling and simulation of crowds from a social scientist's perspective.

  4. Using Akaike's information theoretic criterion in mixed-effects modeling of pharmacokinetic data: a simulation study [version 3; referees: 2 approved, 1 approved with reservations

    Directory of Open Access Journals (Sweden)

    Erik Olofsen

    2015-07-01

    Akaike's information theoretic criterion for model discrimination (AIC) is often stated to "overfit", i.e., to select models with a higher dimension than the dimension of the model that generated the data. However, with experimental pharmacokinetic data it may not be possible to identify the correct model, because of the complexity of the processes governing drug disposition. Instead of trying to find the correct model, a more useful objective might be to minimize the prediction error of drug concentrations in subjects with unknown disposition characteristics. In that case, the AIC might be the selection criterion of choice. We performed Monte Carlo simulations using a model of pharmacokinetic data (a power function of time) with the property that fits with common multi-exponential models can never be perfect, thus resembling the situation with real data. Prespecified models were fitted to simulated data sets, and AIC and AICc (the criterion with a correction for small sample sizes) values were calculated and averaged. The average predictive performances of the models, quantified using simulated validation sets, were compared to the means of the AICs. The data for fits and validation consisted of 11 concentration measurements each, obtained in 5 individuals, with three degrees of interindividual variability in the pharmacokinetic volume of distribution. Mean AICc corresponded very well, and better than mean AIC, with mean predictive performance. With increasing interindividual variability, there was a trend towards larger optimal models, both with respect to lowest AICc and with respect to best predictive performance. Furthermore, the mean square prediction error itself became less suitable as a validation criterion, and a predictive performance measure should incorporate interindividual variability. This simulation study showed that, at least in a relatively simple mixed-effects modelling context with a set of prespecified models
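
    For reference, the small-sample correction at issue is AICc = AIC + 2k(k+1)/(n-k-1). The Python sketch below mirrors the setup in miniature, fitting mono- and bi-exponential models to data generated from a power function of time. The constants and the Gaussian-likelihood AIC convention (which drops additive terms and counts only the structural parameters in k) are our assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def aic_aicc(y, yhat, k):
    """Gaussian-likelihood AIC and small-sample AICc, up to constants."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    aic = n * np.log(rss / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    return aic, aicc

# data from a power function of time, so no exponential model is "true"
rng = np.random.default_rng(3)
t = np.linspace(0.25, 6.0, 11)
y = t ** -0.5 * np.exp(rng.normal(0, 0.1, t.size))

mono = lambda t, a, l: a * np.exp(-l * t)
bi = lambda t, a1, l1, a2, l2: a1 * np.exp(-l1 * t) + a2 * np.exp(-l2 * t)

for f, k, p0 in [(mono, 2, (1, 0.5)), (bi, 4, (1, 1, 0.5, 0.1))]:
    popt, _ = curve_fit(f, t, y, p0=p0, maxfev=20000)
    print(k, aic_aicc(y, f(t, *popt), k))
```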

  5. Product Costing in FMT: Comparing Deterministic and Stochastic Models Using Computer-Based Simulation for an Actual Case Study

    DEFF Research Database (Denmark)

    Nielsen, Steen

    2000-01-01

    This paper extends the traditional product costing technique by including a stochastic form in a complex production process for product costing. Stochastic variation in flexible manufacturing technologies is seen as an important phenomenon that companies try to decrease or eliminate. DFM has been used for evaluating the appropriateness of the firm's production capability. In this paper a simulation model is developed to analyze the relevant cost behaviour with respect to DFM and to develop a more streamlined process in the layout of the manufacturing process.

  6. Modeling and spectral simulation of matrix-isolated molecules by density functional calculations: a case study on formic acid dimer.

    Science.gov (United States)

    Ito, Fumiyuki

    2010-12-07

    The supermolecule approach has been used to model molecules embedded in a solid argon matrix, wherein the interaction between the guest and the host atoms in the first solvation shell is evaluated with the use of density functional calculations. Structural stability and simulated spectra have been obtained for formic acid dimer (FAD)-Ar(n) (n = 21-26) clusters. The calculations at the B971/6-31++G(3df,3pd) level have shown that the tetrasubstitutional site on the Ar(111) plane is likely to incorporate FAD most stably, in view of consistency with the matrix shifts available experimentally.

  7. Probabilistic physics-of-failure models for component reliabilities using Monte Carlo simulation and Weibull analysis: a parametric study

    International Nuclear Information System (INIS)

    Hall, P.L.; Strutt, J.E.

    2003-01-01

    In reliability engineering, component failures are generally classified in one of three ways: (1) early life failures; (2) failures having random onset times; and (3) late life or 'wear out' failures. When the time-distribution of failures of a population of components is analysed in terms of a Weibull distribution, these failure types may be associated with shape parameters β having values β < 1, β = 1 and β > 1, respectively. Early life failures are frequently attributed to poor design (e.g. poor materials selection) or problems associated with manufacturing or assembly processes. We describe a methodology for the implementation of physics-of-failure models of component lifetimes in the presence of parameter and model uncertainties. This treats uncertain parameters as random variables described by some appropriate statistical distribution, which may be sampled using Monte Carlo methods. The number of simulations required depends upon the desired accuracy of the predicted lifetime. Provided that the number of sampled variables is relatively small, an accuracy of 1-2% can be obtained using typically 1000 simulations. The resulting collection of times-to-failure is then sorted into ascending order and fitted to a Weibull distribution to obtain a shape factor β and a characteristic lifetime η. Examples are given of the results obtained using three different models: (1) the Eyring-Peck (EP) model for corrosion of printed circuit boards; (2) a power-law corrosion growth (PCG) model which represents the progressive deterioration of oil and gas pipelines; and (3) a random shock-loading model of mechanical failure. It is shown that for any specific model the values of the Weibull shape parameters obtained may be strongly dependent on the degree of uncertainty of the underlying input parameters. Both the EP and PCG models can yield a wide range of values of β, from β > 1, characteristic of wear-out behaviour, to β < 1, characteristic of early-life failure, depending on the degree of
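
    The described pipeline (sample uncertain inputs, propagate them through a lifetime model, sort the times-to-failure, fit a Weibull) is compact to express. The Python sketch below follows only that overall structure; the toy corrosion-growth lifetime model and its parameter distributions are invented stand-ins, not the paper's EP or PCG equations.

```python
import numpy as np
from scipy.stats import weibull_min

def weibull_from_mc(lifetime_model, sampler, n_sim=1000, seed=0):
    """Sample uncertain inputs, propagate them through a physics-of-failure
    lifetime model, and fit a 2-parameter Weibull to the times-to-failure,
    returning the shape (beta) and characteristic life (eta)."""
    rng = np.random.default_rng(seed)
    ttf = np.sort([lifetime_model(*sampler(rng)) for _ in range(n_sim)])
    beta, _, eta = weibull_min.fit(ttf, floc=0)   # fix location at zero
    return beta, eta

# Illustrative toy model: time for a defect growing as d = k * t**m to
# reach a critical depth d_crit = 2, i.e. t = (d_crit / k) ** (1/m).
life = lambda k, m: (2.0 / k) ** (1.0 / m)
draw = lambda rng: (rng.lognormal(np.log(0.1), 0.3),   # growth rate k
                    rng.normal(0.8, 0.05))             # growth exponent m
print(weibull_from_mc(life, draw))
```

    Widening the input distributions in such a sketch shifts the fitted β, which is the sensitivity the parametric study reports.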

  8. Simulation in International Studies

    Science.gov (United States)

    Boyer, Mark A.

    2011-01-01

    Social scientists have long worked to replicate real-world phenomena in their research and teaching environments. Unlike our biophysical science colleagues, we are faced with an area of study that is not governed by the laws of physics and other more predictable relationships. As a result, social scientists, and international studies scholars more…

  9. Theoretical modelling, experimental studies and clinical simulations of urethral cooling catheters for use during prostate thermal therapy

    International Nuclear Information System (INIS)

    Davidson, Sean R H; Sherar, Michael D

    2003-01-01

    Urethral cooling catheters are used to prevent thermal damage to the urethra during thermal therapy of the prostate. Quantification of a catheter's heat transfer characteristics is necessary for prediction of the catheter's influence on the temperature and thermal dose distribution in periurethral tissue. Two cooling catheters with different designs were examined: the Dornier Urowave catheter and a prototype device from BSD Medical Corp. A convection coefficient, h, was used to characterize the cooling ability of each catheter. The value of the convection coefficient (h = 330 W m^-2 °C^-1 for the Dornier catheter, h = 160 W m^-2 °C^-1 for the BSD device) was obtained by comparing temperatures measured in a tissue-equivalent phantom material to temperatures predicted by a finite element method simulation of the phantom experiments. The coefficient was found to be insensitive to the rate of coolant flow inside the catheter between 40 and 120 ml min^-1. The convection coefficient method for modelling urethral catheters was incorporated into simulations of microwave heating of the prostate. Results from these simulations indicate that the Dornier device is significantly more effective than the BSD catheter at cooling the tissue surrounding the urethra
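
    In a finite element thermal model, a convection coefficient of this kind typically enters as a Robin (convective flux) boundary condition at the catheter wall. The general form below is a standard textbook relation, presented for orientation rather than as an equation quoted from the paper.

```latex
% Robin (convective) boundary condition at the catheter/tissue interface:
\[
  -k\,\left.\frac{\partial T}{\partial n}\right|_{\mathrm{wall}}
  \;=\; h\,\bigl(T_{\mathrm{wall}} - T_{\mathrm{coolant}}\bigr)
\]
% k : tissue thermal conductivity, n : outward normal at the wall;
% h = 330 (Dornier) or 160 (BSD) W m^{-2} C^{-1}, as characterized above.
```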

  10. Modeling, simulation and optimization of bipedal walking

    CERN Document Server

    Berns, Karsten

    2013-01-01

    The model-based investigation of motions of anthropomorphic systems is an important interdisciplinary research topic involving specialists from many fields such as Robotics, Biomechanics, Physiology, Orthopedics, Psychology, Neurosciences, Sports, Computer Graphics and Applied Mathematics. This book presents a study of basic locomotion forms such as walking and running, which are of particular interest due to the high demands on dynamic coordination, actuator efficiency and balance control. Mathematical models and numerical simulation and optimization techniques are explained, in combination with experimental data, which can help to better understand the basic underlying mechanisms of these motions and to improve them. Example topics treated in this book are: modeling techniques for anthropomorphic bipedal walking systems; optimized walking motions for different objective functions; identification of objective functions from measurements; simulation and optimization approaches for humanoid robots; biologically inspired con...

  11. Preliminary Study of Soil Available Nutrient Simulation Using a Modified WOFOST Model and Time-Series Remote Sensing Observations

    OpenAIRE

    Zhiqiang Cheng; Jihua Meng; Yanyou Qiao; Yiming Wang; Wenquan Dong; Yanxin Han

    2018-01-01

    The approach of using multispectral remote sensing (RS) to estimate soil available nutrients (SANs) has been recently developed and shows promising results. This method overcomes the limitations of commonly used methods by building a statistical model that connects RS-based crop growth and nutrient content. However, the stability and accuracy of this model require improvement. In this article, we replaced the statistical model by integrating the World Food Studies (WOFOST) model and time seri...

  12. Simulation Model for DMEK Donor Preparation.

    Science.gov (United States)

    Mittal, Vikas; Mittal, Ruchi; Singh, Swati; Narang, Purvasha; Sridhar, Priti

    2018-04-09

    To demonstrate a simulation model for donor preparation in Descemet membrane endothelial keratoplasty (DMEK). The inner transparent membrane of the onion (Allium cepa) was used as a simulation model for human Descemet membrane (DM). Surgical video (see Video, Supplemental Digital Content 1, http://links.lww.com/ICO/A663) demonstrating all the steps was recorded. This model closely simulates human DM and helps DMEK surgeons learn the nuances of DM donor preparation steps with ease. The technique is repeatable, and the model is cost-effective. The described simulation model can assist surgeons and eye bank technicians to learn steps in donor preparation in DMEK.

  13. Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2014-01-01

    This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).

  14. An introduction to enterprise modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ostic, J.K.; Cannon, C.E. [Los Alamos National Lab., NM (United States). Technology Modeling and Analysis Group

    1996-09-01

    As part of an ongoing effort to continuously improve productivity, quality, and efficiency of both industry and Department of Energy enterprises, Los Alamos National Laboratory is investigating various manufacturing and business enterprise simulation methods. A number of enterprise simulation software models are being developed to enable engineering analysis of enterprise activities. In this document the authors define the scope of enterprise modeling and simulation efforts, and review recent work in enterprise simulation at Los Alamos National Laboratory as well as at other industrial, academic, and research institutions. References of enterprise modeling and simulation methods and a glossary of enterprise-related terms are provided.

  15. Structured building model reduction toward parallel simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University

    2013-08-26

    Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.

  16. Modeling and simulation of economic processes

    Directory of Open Access Journals (Sweden)

    Bogdan Brumar

    2010-12-01

    In general, any activity extends over some period of time and is often characterized by a degree of uncertainty and insecurity with respect to the size of the objective pursued. Because of the complexity of real economic systems and the stochastic dependencies between the variables and parameters considered, not all systems can be adequately represented by a model that can be solved by analytical methods while covering all issues relevant to economic management decision analysis. Often in such cases, the simulation technique is considered the only alternative available. Using simulation techniques to study real-world systems often requires laborious work. Carrying out a simulation experiment is a process that takes place in several stages.

  17. SIMULATION STUDIES FOR UPQC

    OpenAIRE

    Dr. PRASAD M. JOSHI; Mukul Kumar

    2016-01-01

    Nowadays, the widespread use of power-electronics-based loads in households and industry has increased the importance and application of power quality studies. Power electronic devices, being non-linear, draw harmonic and reactive power from the source. The Unified Power Quality Conditioner (UPQC) is a device that mitigates harmonics and improves the quality of power. This paper presents the working principle of the UPQC based on instantaneous p-q theory, which compensates the load cur...

  18. Study of cellular retention of HMPAO and ECD in a model simulating the blood-brain barrier

    International Nuclear Information System (INIS)

    Ponce, C.; Pittet, N.; Slosman, D.O.

    1997-01-01

    HMPAO and ECD are two technetium-labelled lipophilic agents used clinically in cerebral perfusion imaging. These molecules cross cell membranes and are retained inside the cell after being converted to a hydrophilic form. The aim of this study was to establish the distribution of this retention at the level of the blood-brain barrier (BBB) and nerve cells. The incorporation of HMPAO or ECD was studied in a co-culture model simulating the BBB, consisting of a tight-junction T84 single-cell layer separated from a layer of U373 astrocyte cells. Cell quality and tight junction permeability were evaluated by the cellular retention of 111In chloride and by the paracellular diffusion of 14C-D-mannitol. The values reported below were obtained at 180 minutes, with the radiotracers added near the 'T84 layer'. Cell quality is validated by the low cellular retention of the indium chloride (2.3 ± 0.3 μg^-1 for the T84 cells and 8.2 ± 5.8 μg^-1 for the U373 cells). The activity of 14C-D-mannitol diminishes by 23 ± 5% in the added compartment. The retention of ECD by the U373 cells is significantly higher (20.7 ± 4.5 μg^-1) than that of the T84 cells (2.9 ± 0.2 μg^-1). For HMPAO a non-significant tendency could be observed (49 ± 34 μg^-1 for the U373 cells and 38 ± 25 μg^-1 for the T84 cells). The cellular retention results for HMPAO or ECD added near the 'U373 layer' are not significantly different. In conclusion, independently of the side exposed to the radiotracers, one observes an enhanced incorporation in the U373 cells. Taken together, these results are additional arguments in favour of a specific cellular incorporation of the radiotracers, independent of BBB permeability

  19. A physiological production model for cacao : results of model simulations

    NARCIS (Netherlands)

    Zuidema, P.A.; Leffelaar, P.A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.

  20. Modelling and simulation of thermal power plants

    Energy Technology Data Exchange (ETDEWEB)

    Eborn, J.

    1998-02-01

    Mathematical modelling and simulation are important tools when dealing with engineering systems that today are becoming increasingly more complex. Integrated production and recycling of materials are trends that give rise to heterogeneous systems, which are difficult to handle within one area of expertise. Model libraries are an excellent way to package engineering knowledge of systems and units so that it can be reused by those who are not experts in modelling. Many commercial packages provide good model libraries, but they are usually domain-specific and closed. Heterogeneous, multi-domain systems require open model libraries written in general purpose modelling languages. This thesis describes a model database for thermal power plants written in the object-oriented modelling language OMOLA. The models are based on first principles. Subunits describe volumes with pressure and enthalpy dynamics and flows of heat or different media. The subunits are used to build basic units such as pumps, valves and heat exchangers, which can in turn be used to build system models. Several applications are described: a heat recovery steam generator, equipment for juice blending, steam generation in a sulphuric acid plant and a condensing steam plate heat exchanger. Model libraries for industrial use must be validated against measured data, and the thesis describes how parameter estimation methods can be used for model validation. Results from a case study on parameter optimization of a non-linear drum boiler model show how the technique can be used. 32 refs, 21 figs

  1. Assessing the relationship between computational speed and precision: a case study comparing an interpreted versus compiled programming language using a stochastic simulation model in diabetes care.

    Science.gov (United States)

    McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P

    2010-01-01

    Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and the efficiency gains achieved using variance reduction and a compiled programming language. A published simulation model designed to model a population with type 2 diabetes mellitus, based on the UKPDS 68 outcomes equations, was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA; the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) versus approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving the precision of model output, can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency when using compiled languages are best addressed via thorough documentation and model validation.
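
    Antithetic variates, the variance-reduction device credited above with the roughly 53% cut in replications, can be sketched in a few lines. The Python below estimates E[g(U)] with and without antithetic pairing; the outcome function g is a stand-in for a monotone simulation output such as discounted QALYs, and all names and values are ours.

```python
import numpy as np

def mc_plain(g, n, rng):
    """Crude Monte Carlo estimate of E[g(U)], U ~ Uniform(0,1)."""
    x = g(rng.random(n))
    return x.mean(), x.std(ddof=1) / np.sqrt(n)

def mc_antithetic(g, n, rng):
    """Antithetic variates: pair each draw u with 1-u and average the
    pair; for monotone g the pair members are negatively correlated,
    which shrinks the variance of the estimator at the same cost."""
    u = rng.random(n // 2)
    pairs = 0.5 * (g(u) + g(1.0 - u))
    return pairs.mean(), pairs.std(ddof=1) / np.sqrt(n // 2)

g = lambda u: np.exp(u)          # illustrative monotone outcome model
rng = np.random.default_rng(0)
print("plain:     ", mc_plain(g, 100_000, rng))
print("antithetic:", mc_antithetic(g, 100_000, rng))
```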

  2. Simulation modeling and analysis with Arena

    CERN Document Server

    Altiok, Tayfur

    2007-01-01

    Simulation Modeling and Analysis with Arena is a highly readable textbook which treats the essentials of the Monte Carlo discrete-event simulation methodology, and does so in the context of the popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings. It introduces the concept of discrete event Monte Carlo simulation, the most commonly used methodology for modeli...

  3. Theory, modeling and simulation: Annual report 1993

    International Nuclear Information System (INIS)

    Dunning, T.H. Jr.; Garrett, B.C.

    1994-07-01

    Developing the knowledge base needed to address the environmental restoration issues of the US Department of Energy requires a fundamental understanding of molecules and their interactions in isolation and in liquids, on surfaces, and at interfaces. To meet these needs, the PNL has established the Environmental and Molecular Sciences Laboratory (EMSL) and will soon begin construction of a new, collaborative research facility devoted to advancing the understanding of environmental molecular science. Research in the Theory, Modeling, and Simulation program (TMS), which is one of seven research directorates in the EMSL, will play a critical role in understanding molecular processes important in restoring DOE's research, development and production sites, including understanding the migration and reactions of contaminants in soils and groundwater, the development of separation processes for isolation of pollutants, the development of improved materials for waste storage, understanding the enzymatic reactions involved in the biodegradation of contaminants, and understanding the interaction of hazardous chemicals with living organisms. The research objectives of the TMS program are to apply available techniques to study fundamental molecular processes involved in natural and contaminated systems; to extend current techniques to treat molecular systems of future importance and to develop techniques for addressing problems that are computationally intractable at present; to apply molecular modeling techniques to simulate molecular processes occurring in the multispecies, multiphase systems characteristic of natural and polluted environments; and to extend current molecular modeling techniques to treat complex molecular systems and to improve the reliability and accuracy of such simulations. The program contains three research activities: Molecular Theory/Modeling, Solid State Theory, and Biomolecular Modeling/Simulation. Extended abstracts are presented for 89 studies

  5. A 2-D FEM thermal model to simulate water flow in a porous media: Campi Flegrei caldera case study

    Directory of Open Access Journals (Sweden)

    V. Romano

    2012-05-01

    Volcanic and geothermal aspects both exist in many geologically young areas. In these areas the heat transfer process is of fundamental importance, so the thermal and fluid-dynamic processes characterizing a viscous fluid in a porous medium are very important for understanding the complex dynamics of these areas. The Campi Flegrei caldera, located west of the city of Naples within the central-southern sector of the large graben of the Campanian plain, is a region where both volcanic and geothermal phenomena are present. The upper part of the geothermal system can be considered roughly as a succession of volcanic porous material (tuff) saturated by a mixture formed mainly of water and carbon dioxide. We have implemented a finite element approach in transient conditions to simulate water flow in a 2-D porous medium, to model the changes of temperature in the geothermal system due to magmatic fluid inflow, accounting for a transient phase not considered in the analytical solutions, and for fluid compressibility. The thermal model is described by means of conductive/convective equations, in which we propose a thermal source represented by a parabolic shape function to better simulate an increase of temperature in the central part (magma chamber) of a box simulating the Campi Flegrei caldera, using recent evaluations from the literature for the medium's parameters (specific heat capacity, density, thermal conductivity, permeability). A best-fit velocity for the permeant is evaluated by comparing the simulated temperatures with those measured in wells drilled by Agip (the Italian oil agency) in the 1980s in the framework of geothermal exploration. A few tens of days are enough to reach the thermal steady state, showing the quick response of the system to heat injection. The increase in pressure due to the heat transport is then used to compute ground deformation, in particular the vertical displacements characteristic of the Campi Flegrei caldera
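
    For orientation, the conductive/convective transport solved in such porous-medium thermal models generally takes the form below. This is our notation for the generic equation, not the article's exact formulation; the parabolic source Q(x) is the shape function peaking over the magma chamber mentioned in the abstract.

```latex
% Transient conductive/convective heat transport in a fluid-saturated
% porous medium (generic form, assumed for illustration):
\[
  (\rho c)_{\mathrm{eff}}\,\frac{\partial T}{\partial t}
  \;=\; \nabla\!\cdot\!\bigl(k_{\mathrm{eff}}\,\nabla T\bigr)
  \;-\; \rho_f c_f\,\mathbf{v}\cdot\nabla T \;+\; Q(x)
\]
% v : Darcy velocity of the permeant (the best-fit quantity in the study);
% the effective properties mix rock and fluid contributions.
```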

  6. Reusable Component Model Development Approach for Parallel and Distributed Simulation

    Science.gov (United States)

    Zhu, Feng; Yao, Yiping; Chen, Huilong; Yao, Feng

    2014-01-01

    Model reuse is a key issue to be resolved in parallel and distributed simulation at present. However, component models built by different domain experts usually have diverse interfaces, couple tightly, and bind closely with simulation platforms. As a result, they are difficult to reuse across different simulation platforms and applications. To address the problem, this paper first proposes a reusable component model framework. Based on this framework, our reusable model development approach is then elaborated, which contains two phases: (1) domain experts create simulation computational modules observing three principles to achieve their independence; (2) the model developer encapsulates these simulation computational modules with six standard service interfaces to improve their reusability. The case study of a radar model indicates that a model developed using our approach has good reusability and is easy to use in different simulation platforms and applications. PMID:24729751

  7. IDEF method-based simulation model design and development framework

    Directory of Open Access Journals (Sweden)

    Ki-Young Jeong

    2009-09-01

    The purpose of this study is to provide an IDEF method-based integrated framework for business process simulation models that reduces model development time by increasing communication and knowledge reusability during a simulation project. In this framework, simulation requirements are collected by a function modeling method (IDEF0) and a process modeling method (IDEF3). Based on these requirements, a common data model is constructed using the IDEF1X method. From this reusable data model, multiple simulation models are automatically generated using a database-driven simulation model development approach. The framework is claimed to help both the requirement collection and experimentation phases of a simulation project by improving system knowledge, model reusability and maintainability through the systematic use of three descriptive IDEF methods and the features of relational database technologies. A complex semiconductor fabrication case study was used as a testbed to evaluate and illustrate the concepts and the framework. Two different simulation software products were used to develop and control the semiconductor model from the same knowledge base. The case study empirically showed that this framework can help improve simulation project processes by using IDEF-based descriptive models and relational database technology. The authors also concluded that this framework could easily be applied to other analytical model generation by separating the logic from the data.

  8. Modelling and simulation of a heat exchanger

    Science.gov (United States)

    Xia, Lei; Deabreu-Garcia, J. Alex; Hartley, Tom T.

    1991-01-01

    Two models for two different control systems are developed for a parallel heat exchanger. First, by spatially lumping a heat exchanger model, a good approximate model of high system order is produced. Model reduction techniques are then applied to obtain low-order models that are suitable for dynamic analysis and control design. The simulation method is discussed to ensure valid simulation results.
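
    Spatial lumping of a heat exchanger divides each stream into a chain of well-mixed cells, which is where the high system order comes from. The Python sketch below builds such a lumped parallel-flow model with scipy; the cell count, flow parameters and boundary temperatures are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 20                      # cells per stream; higher N = higher model order
C_h, C_c = 1.0, 1.2         # lumped thermal capacitance of one cell (J/K)
ua = 0.15                   # heat transfer conductance per cell pair (W/K)
tau = 2.0                   # cell residence time (s)

def rhs(t, x, T_h_in=90.0, T_c_in=20.0):
    """Energy balances for the hot (Th) and cold (Tc) cell chains."""
    Th, Tc = x[:N], x[N:]
    Th_up = np.concatenate(([T_h_in], Th[:-1]))   # upstream temperatures
    Tc_up = np.concatenate(([T_c_in], Tc[:-1]))
    q = ua * (Th - Tc)                            # inter-stream heat flow (W)
    dTh = (Th_up - Th) / tau - q / C_h
    dTc = (Tc_up - Tc) / tau + q / C_c
    return np.concatenate((dTh, dTc))

x0 = np.full(2 * N, 20.0)                         # start at cold inlet temp
sol = solve_ivp(rhs, (0.0, 120.0), x0, rtol=1e-6)
print("hot outlet:", sol.y[N - 1, -1], "cold outlet:", sol.y[-1, -1])
```

    The resulting 2N-state linear model is exactly the kind of high-order system that balanced-truncation-style reduction can then compress for control design.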

  9. Ion thruster modeling: Particle simulations and experimental validations

    International Nuclear Information System (INIS)

    Wang, Joseph; Polk, James; Brinza, David

    2003-01-01

    This paper presents results from ion thruster modeling studies performed in support of NASA's Deep Space 1 mission and NSTAR project. Fully 3-dimensional computer particle simulation models are presented for ion optics plasma flow and ion thruster plume. Ion optics simulation results are compared with measurements obtained from ground tests of the NSTAR ion thruster. Plume simulation results are compared with in-flight measurements from the Deep Space 1 spacecraft. Both models show excellent agreement with experimental data

  10. Numerical simulation and parametric study of laminar mixed convection nanofluid flow in flat tubes using two phase mixture model

    Directory of Open Access Journals (Sweden)

    Safikhani Hamed

    2016-01-01

    In this article, the laminar mixed convection of Al2O3-water nanofluid flow in a horizontal flat tube has been numerically simulated. The two-phase mixture model has been employed to solve the nanofluid flow, and constant heat flux has been considered as the wall boundary condition. The effects of important parameters such as the Reynolds number (Re), Grashof number (Gr), nanoparticle volume fraction (Φ) and nanoparticle diameter (dp) on the thermal and hydrodynamic performance of the nanofluid flow have been analyzed. The results of the numerical simulation were compared with similar existing data, and good agreement is observed between them. It is demonstrated that the Nusselt number (Nu) and the friction factor (Cf) are different for each of the upper, lower, left and right walls of the flat tube. Increasing Re, Gr and Φ and reducing dp lead to an increase in Nu. Similarly, increasing Re and Φ results in an increase in Cf. Therefore, the best way to increase the amount of heat transfer in flat tubes using nanofluids is to increase Gr and reduce dp.

  11. Estimating the impact of enterprise resource planning project management decisions on post-implementation maintenance costs: a case study using simulation modelling

    Science.gov (United States)

    Fryling, Meg

    2010-11-01

    Organisations often make implementation decisions with little consideration for the maintenance phase of an enterprise resource planning (ERP) system, resulting in significant recurring maintenance costs. Poor cost estimations are likely related to the lack of an appropriate framework for enterprise-wide pre-packaged software maintenance, which requires an ongoing relationship with the software vendor (Markus, M.L., Tanis, C., and Fenema, P.C., 2000. Multisite ERP implementation. CACM, 43 (4), 42-46). The end result is that critical project decisions are made with little empirical data, resulting in substantial long-term cost impacts. The product of this research is a formal dynamic simulation model that enables theory testing, scenario exploration and policy analysis. The simulation model ERPMAINT1 was developed by combining and extending existing frameworks in several research domains, and by incorporating quantitative and qualitative case study data. The ERPMAINT1 model evaluates tradeoffs between different ERP project management decisions and their impact on post-implementation total cost of ownership (TCO). Through model simulations a variety of dynamic insights were revealed that could assist ERP project managers. Major findings from the simulation show that upfront investments in mentoring and system exposure translate to long-term cost savings. The findings also indicate that in addition to customisations, add-ons have a significant impact on TCO.

  12. Computer Models Simulate Fine Particle Dispersion

    Science.gov (United States)

    2010-01-01

    Through a NASA Seed Fund partnership with DEM Solutions Inc., of Lebanon, New Hampshire, scientists at Kennedy Space Center refined existing software to study the electrostatic phenomena of granular and bulk materials as they apply to planetary surfaces. The software, EDEM, allows users to import particles and obtain accurate representations of their shapes for modeling purposes, such as simulating bulk solids behavior, and was enhanced to be able to more accurately model fine, abrasive, cohesive particles. These new EDEM capabilities can be applied in many industries unrelated to space exploration and have been adopted by several prominent U.S. companies, including John Deere, Pfizer, and Procter & Gamble.

  13. SYSTEM DYNAMIC MODELLING AND SIMULATION FOR CULTIVATION OF FOREST LAND: CASE STUDY PERUM PERHUTANI, CENTRAL JAVA, INDONESIA

    Directory of Open Access Journals (Sweden)

    Candra Musi

    2017-07-01

    Deforestation and forest degradation rates have a propensity to rise every year. The problems pertaining to this issue are not confined to ecological concerns but extend to socio-economic impacts. The complexity of forest management is a serious barrier to determining a better management policy. A modeling system is a simple method to describe the real situation in nature. A qualitative approach is used to identify the relationships between the dynamics of important behaviors. The causal relationships among the factors were investigated using a causal loop diagram. The model conceptualization was constructed using a stock-flow diagram. The results of the simulation model were used to determine alternative policies for better forest management. The results indicated that tenant welfare would be enhanced through the provision of production sharing of 25% and Corporate Social Responsibility of 2%, which yields a reduction in cultivated area of 916.61 ha within a period of 67 years, a decline in land area averaging 13.68 ha per year.

  14. The Use of Model Matching Video Analysis and Computational Simulation to Study the Ankle Sprain Injury Mechanism

    Directory of Open Access Journals (Sweden)

    Daniel Tik-Pui Fong

    2012-10-01

    Lateral ankle sprains continue to be the most common injury sustained by athletes and create an annual healthcare burden of over $4 billion in the U.S. alone. Foot inversion is suspected in these cases, but the mechanism of injury remains unclear. While kinematics and kinetics data are crucial in understanding the injury mechanisms, ligament behaviour measures – such as ligament strains – are viewed as the potential causal factors of ankle sprains. This review article demonstrates a novel methodology that integrates model matching video analyses with computational simulations in order to investigate injury-producing events for a better understanding of such injury mechanisms. In particular, ankle joint kinematics from actual injury incidents were deduced by model matching video analyses and then input into a generic computational model based on rigid bone surfaces and deformable ligaments of the ankle so as to investigate the ligament strains that accompany these sprain injuries. These techniques may have the potential for guiding ankle sprain prevention strategies and targeted rehabilitation therapies.

  15. Software Requirements Specification Verifiable Fuel Cycle Simulation (VISION) Model

    International Nuclear Information System (INIS)

    D. E. Shropshire; W. H. West

    2005-01-01

    The purpose of this Software Requirements Specification (SRS) is to define the top-level requirements for a Verifiable Fuel Cycle Simulation Model (VISION) of the Advanced Fuel Cycle (AFC). This simulation model is intended to serve as a broad systems analysis and study tool applicable to work conducted as part of the AFCI (including cost estimates) and Generation IV reactor development studies.

  16. Systematic simulations of modified gravity: chameleon models

    Energy Technology Data Exchange (ETDEWEB)

    Brax, Philippe [Institut de Physique Theorique, CEA, IPhT, CNRS, URA 2306, F-91191Gif/Yvette Cedex (France); Davis, Anne-Christine [DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Li, Baojiu [Institute for Computational Cosmology, Department of Physics, Durham University, Durham DH1 3LE (United Kingdom); Winther, Hans A. [Institute of Theoretical Astrophysics, University of Oslo, 0315 Oslo (Norway); Zhao, Gong-Bo, E-mail: philippe.brax@cea.fr, E-mail: a.c.davis@damtp.cam.ac.uk, E-mail: baojiu.li@durham.ac.uk, E-mail: h.a.winther@astro.uio.no, E-mail: gong-bo.zhao@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom)

    2013-04-01

    In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 hMpc⁻¹, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as guidance for determining the parts of the chameleon parameter space which are cosmologically interesting and thus merit further study in the future.

  17. Systematic simulations of modified gravity: chameleon models

    International Nuclear Information System (INIS)

    Brax, Philippe; Davis, Anne-Christine; Li, Baojiu; Winther, Hans A.; Zhao, Gong-Bo

    2013-01-01

    In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 hMpc⁻¹, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as guidance for determining the parts of the chameleon parameter space which are cosmologically interesting and thus merit further study in the future.

  18. VHDL simulation with access to transistor models

    Science.gov (United States)

    Gibson, J.

    1991-01-01

    Hardware description languages such as VHDL have evolved to aid in the design of systems with large numbers of elements and a wide range of electronic and logical abstractions. For high performance circuits, behavioral models may not be able to efficiently include enough detail to give designers confidence in a simulation's accuracy. One option is to provide a link between the VHDL environment and a transistor level simulation environment. The coupling of the Vantage Analysis Systems VHDL simulator and the NOVA simulator provides the combination of VHDL modeling and transistor modeling.

  19. Modeling and simulation of gamma camera

    International Nuclear Information System (INIS)

    Singh, B.; Kataria, S.K.; Samuel, A.M.

    2002-08-01

    Simulation techniques play a vital role in the design of sophisticated instruments and also in the training of operating and maintenance staff. Gamma camera systems have been used for functional imaging in nuclear medicine. Functional images are derived from the external counting of a gamma-emitting radioactive tracer that, after introduction into the body, mimics the behavior of a native biochemical compound. The position-sensitive detector yields the coordinates of the gamma-ray interaction with the detector, which are used to estimate the point of gamma-ray emission within the tracer distribution space. This advanced imaging device is thus dependent on the performance of algorithms for coordinate computation, estimation of the point of emission, image generation and display of the image data. Contemporary systems also have protocols for quality control and clinical evaluation of imaging studies. Simulation of this processing leads to an understanding of the basic camera design problems. This report describes a PC-based package for the design and simulation of a gamma camera, along with options for simulating data acquisition and quality control of imaging studies. Image display and data processing, the other options implemented in SIMCAM, will be described in separate reports (under preparation). Gamma camera modeling and simulation in SIMCAM has preset configurations of the design parameters for various sizes of crystal detector, with the option to pack the PMTs on a hexagonal or square lattice. Different algorithms for the computation of coordinates and for spatial distortion removal are allowed, in addition to simulation of the energy correction circuit. The user can simulate different static, dynamic, MUGA and SPECT studies. The acquired/simulated data are processed for quality control and clinical evaluation of the imaging studies. Results show that the program can be used to assess these performances. Also the variations in performance parameters can be assessed due to the induced

  20. UDP-N-Acetyl glucosamine pyrophosphorylase as novel target for controlling Aedes aegypti – molecular modeling, docking and simulation studies

    Directory of Open Access Journals (Sweden)

    Bhagath Kumar Palaka

    2014-12-01

    Full Text Available Aedes aegypti is a vector that transmits diseases like dengue fever, chikungunya, and yellow fever. It is distributed in all tropical and subtropical regions of the world. According to WHO reports, 40% of the world's population is currently at risk for dengue fever. As vaccines are not available for such diseases, controlling the mosquito population becomes necessary. Hence, this study targets UDP-N-acetyl glucosamine pyrophosphorylase of Aedes aegypti (AaUAP), an essential enzyme for chitin metabolism in insects, as a drug target. The structure of AaUAP was predicted and validated using an in silico approach. Further, docking studies were performed using a set of 10 inhibitors, out of which NAG9 was found to have a good docking score, further supported by simulation studies. Hence, we propose that NAG9 can be considered a potential hit in designing new inhibitors to control Aedes aegypti.

  1. Distortional effect of beam-hardening artefacts on microCT: a simulation study based on an in vitro caries model.

    Science.gov (United States)

    Kovács, Miklós; Danyi, Róbert; Erdélyi, Miklós; Fejérdy, Pál; Dobó-Nagy, Csaba

    2009-10-01

    The aim of this study was to quantitatively assess the degrading effect of artefacts caused by beam hardening on microscopic computerized tomography (microCT) measurements of an in vitro caries model. A simulation-based method is described with which the degrading effect of microCT artefacts on certain parameters of the observed structure can be determined. Simulations were carried out with polychromatic and monochromatic X-ray sources, and a linearization method with a second-order polynomial fit algorithm was used in specific cases to correct the beam-hardening artefact. The virtual test object was a half-crown of a tooth with an artificial caries lesion. For the simulation with a monochromatic X-ray source, the relative error of lesion depth and thickness measurements of the remineralized layer was found to be 1%-2%. For a polychromatic X-ray source without beam-hardening correction, the relative error exceeded 6%. After appropriate beam-hardening correction, the relative error of the measurement could be reduced to 1%-2%. With the adjustment simulated in this study, microCT with a polychromatic X-ray source yielded the same level of error as with a monochromatic source when the linearization method was used to correct the beam hardening. The presented simulation-based method is a useful way to determine artefact-caused distortions for other studies testing objects of different material and geometry.
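    The linearization step lends itself to a compact illustration. The sketch below, with an invented three-bin spectrum and made-up attenuation coefficients, simulates a polychromatic log-attenuation curve and straightens it with a second-order polynomial fit, in the spirit of the correction described above.

```python
# Sketch of second-order polynomial linearization for beam hardening: map the
# polychromatic log-attenuation back onto the linear (monochromatic) relation.
# Spectrum, weights and attenuation coefficients are all invented.
import numpy as np

t = np.linspace(0, 2.0, 50)                 # path length through material, cm
weights = np.array([0.5, 0.3, 0.2])         # relative fluence per energy bin
mu = np.array([0.8, 0.5, 0.35])             # attenuation coeff per bin, 1/cm
mu_eff = float(mu @ weights)                # effective monochromatic coefficient

# polychromatic measurement: -ln of the weighted transmitted intensity
poly = -np.log((weights * np.exp(-np.outer(t, mu))).sum(axis=1))
mono = mu_eff * t                           # ideal linear response

coeffs = np.polyfit(poly, mono, deg=2)      # second-order linearization fit
corrected = np.polyval(coeffs, poly)

print("max relative error before:", np.max(np.abs(poly - mono)[1:] / mono[1:]))
print("max relative error after :", np.max(np.abs(corrected - mono)[1:] / mono[1:]))
```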

  2. Digital Simulation Games for Social Studies Classrooms

    Science.gov (United States)

    Devlin-Scherer, Roberta; Sardone, Nancy B.

    2010-01-01

    Data from ten teacher candidates studying teaching methods were analyzed to determine perceptions toward digital simulation games in the area of social studies. This research can be used as a conceptual model of how current teacher candidates react to new methods of instruction and determine how education programs might change existing curricula…

  3. Modelling interplanetary CMEs using magnetohydrodynamic simulations

    Directory of Open Access Journals (Sweden)

    P. J. Cargill

    Full Text Available The dynamics of Interplanetary Coronal Mass Ejections (ICMEs are discussed from the viewpoint of numerical modelling. Hydrodynamic models are shown to give a good zero-order picture of the plasma properties of ICMEs, but they cannot model the important magnetic field effects. Results from MHD simulations are shown for a number of cases of interest. It is demonstrated that the strong interaction of the ICME with the solar wind leads to the ICME and solar wind velocities being close to each other at 1 AU, despite their having very different speeds near the Sun. It is also pointed out that this interaction leads to a distortion of the ICME geometry, making cylindrical symmetry a dubious assumption for the CME field at 1 AU. In the presence of a significant solar wind magnetic field, the magnetic fields of the ICME and solar wind can reconnect with each other, leading to an ICME that has solar wind-like field lines. This effect is especially important when an ICME with the right sense of rotation propagates down the heliospheric current sheet. It is also noted that a lack of knowledge of the coronal magnetic field makes such simulations of little use in space weather forecasts that require knowledge of the ICME magnetic field strength.

    Key words. Interplanetary physics (interplanetary magnetic fields) – Solar physics, astrophysics, and astronomy (flares and mass ejections) – Space plasma physics (numerical simulation studies)

  4. Modeling and Simulation Fundamentals Theoretical Underpinnings and Practical Domains

    CERN Document Server

    Sokolowski, John A

    2010-01-01

    An insightful presentation of the key concepts, paradigms, and applications of modeling and simulation. Modeling and simulation has become an integral part of research and development across many fields of study, having evolved from a tool to a discipline in less than two decades. Modeling and Simulation Fundamentals offers a comprehensive and authoritative treatment of the topic and includes definitions, paradigms, and applications to equip readers with the skills needed to work successfully as developers and users of modeling and simulation. Featuring contributions written by leading experts

  5. Modeling, Simulation and Position Control of 3DOF Articulated Manipulator

    Directory of Open Access Journals (Sweden)

    Hossein Sadegh Lafmejani

    2014-08-01

    Full Text Available In this paper, the modeling, simulation and control of a 3-degrees-of-freedom articulated robotic manipulator have been studied. First, we extracted the kinematics and dynamics equations of the manipulator using the Lagrange method. To validate the analytical model, we compared the model simulated in the Matlab environment with the model simulated with the SimMechanics toolbox. A sample path was designed for analyzing trajectory tracking. The system was linearized with feedback linearization, and then a PID controller was applied to track a reference trajectory. Finally, the control results were compared with those of a nonlinear PID controller.
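    Feedback linearization followed by PID tracking is easy to demonstrate on a reduced system. The sketch below applies computed-torque control to a single rigid link as a stand-in for one joint of the arm; the gains, link parameters and reference trajectory are illustrative, not taken from the paper.

```python
# Computed-torque (feedback linearization) + PID tracking on one rigid link.
# Link parameters and gains are invented; the paper's arm has 3 DOF.
import numpy as np

m, l, g = 1.0, 0.5, 9.81                  # link mass (kg), length (m), gravity
I = m * l**2 / 3.0                        # inertia about the joint
kp, ki, kd = 100.0, 20.0, 20.0            # PID gains (illustrative)
dt, q, dq, e_int = 1e-3, 0.0, 0.0, 0.0

for k in range(5000):                     # simulate 5 s
    t = k * dt
    q_ref, dq_ref = 0.5 * np.sin(t), 0.5 * np.cos(t)   # reference trajectory
    e = q_ref - q
    e_int += e * dt
    # outer PID loop defines a desired acceleration; inner loop cancels dynamics
    ddq_des = kp * e + ki * e_int + kd * (dq_ref - dq)
    tau = I * ddq_des + m * g * (l / 2) * np.cos(q)    # feedback linearization
    ddq = (tau - m * g * (l / 2) * np.cos(q)) / I      # plant dynamics
    q, dq = q + dq * dt, dq + ddq * dt

print(f"tracking error at t=5 s: {0.5*np.sin(5.0) - q:+.2e} rad")
```

    Because the gravity term is cancelled exactly here, the closed loop reduces to a linear PID error dynamic; on the real arm the cancellation also covers Coriolis and inertia coupling terms.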

  6. Policy advice derived from simulation models

    NARCIS (Netherlands)

    Brenner, T.; Werker, C.

    2009-01-01

    When advising policy we face the fundamental problem that economic processes are connected with uncertainty and thus policy can err. In this paper we show how the use of simulation models can reduce policy errors. We suggest that policy is best based on so-called abductive simulation models, which

  7. Model Validation for Simulations of Vehicle Systems

    Science.gov (United States)

    2012-08-01

    Only fragments of this report's abstract survive in the record: references to B. Efron, "Bootstrap methods: another look at the jackknife", Annals of Statistics, 7:1-26, 1979, and [45] B. Efron and G. Gong, "A leisurely look at the bootstrap, the jackknife, and cross-validation"; and mention of a model from Sandia National Laboratories and a battery model developed in the Automotive Research Center, a US Army Center of Excellence for modeling and simulation of ground vehicle systems.

  8. Transient Modeling and Simulation of Compact Photobioreactors

    OpenAIRE

    Ribeiro, Robert Luis Lara; Mariano, André Bellin; Souza, Jeferson Avila; Vargas, Jose Viriato Coelho

    2008-01-01

    In this paper, a mathematical model is developed to make possible the simulation of microalgae growth and its dependency on medium temperature and light intensity. The model is utilized to simulate a compact photobioreactor response in time with physicochemical parameters of the microalgae Phaeodactylum tricornutum. The model allows for the prediction of the transient and local evolution of the biomass concentration in the photobioreactor with low computational time. As a result, the model is...
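    A growth model of the kind summarized above can be sketched as a single ODE whose rate is modulated by light and temperature. The functional forms below (a Steele light response and a Gaussian thermal response) and all constants are assumptions for illustration, not the authors' calibrated model for Phaeodactylum tricornutum.

```python
# Sketch: biomass growth driven by light intensity and medium temperature.
# Functional forms and parameter values are assumed, not from the paper.
import numpy as np

mu_max, I_opt, T_opt, sigma_T = 1.2, 150.0, 20.0, 6.0   # 1/day, uE/m2/s, degC

def growth_rate(I, T):
    f_light = (I / I_opt) * np.exp(1.0 - I / I_opt)     # Steele photoinhibition
    f_temp = np.exp(-((T - T_opt) / sigma_T) ** 2)      # thermal response
    return mu_max * f_light * f_temp

X, dt = 0.05, 0.01                                      # biomass g/L, days
for step in range(int(10 / dt)):                        # 10-day transient
    t = step * dt
    I = max(0.0, 300.0 * np.sin(2 * np.pi * t))         # day/night light cycle
    T = 20.0 + 3.0 * np.sin(2 * np.pi * t)              # diurnal temperature
    X += growth_rate(I, T) * X * dt                     # Euler step

print(f"biomass after 10 days: {X:.3f} g/L")
```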

  9. Efficient Turbulence Modeling for CFD Wake Simulations

    DEFF Research Database (Denmark)

    van der Laan, Paul

    , that can accurately and efficiently simulate wind turbine wakes. The linear k-ε eddy viscosity model (EVM) is a popular turbulence model in RANS; however, it underpredicts the velocity wake deficit and cannot predict the anisotropic Reynolds stresses in the wake. In the current work, nonlinear eddy viscosity models (NLEVM) are applied to wind turbine wakes. NLEVMs can model anisotropic turbulence through a nonlinear stress-strain relation, and they can improve the velocity deficit by the use of a variable eddy viscosity coefficient that delays the wake recovery. Unfortunately, all tested NLEVMs show numerically unstable behavior for fine grids, which inhibits a grid dependency study for numerical verification. Therefore, a simpler EVM is proposed, labeled the k-ε-fp EVM, that has a linear stress-strain relation but still has a variable eddy viscosity coefficient. The k-ε-fp EVM is numerically

  10. MODELING AND SIMULATION OF A HYDROCRACKING UNIT

    Directory of Open Access Journals (Sweden)

    HASSAN A. FARAG

    2016-06-01

    Full Text Available Hydrocracking is used in the petroleum industry to convert low-quality feedstocks into high-valued transportation fuels such as gasoline, diesel, and jet fuel. The aim of the present work is to develop a rigorous steady-state two-dimensional mathematical model, including conservation equations of mass and energy, for simulating the operation of a hydrocracking unit. Both the catalyst bed and the quench zone have been included in this integrated model. The model equations were solved numerically in both the axial and radial directions using Matlab software. The model was tested against real plant data from Egypt. The results indicated very good agreement between model predictions and industrial values for the temperature profiles, concentration profiles, and conversion in both the radial and axial directions of the hydrocracking unit. Simulation of the conversion and temperature profiles in the quench zone was also included and gave a low deviation from the actual values. For the concentration profiles, the percentage deviation was found to be 9.28% in the first reactor and 9.6% in the second reactor. The effects of several parameters, such as the pellet heat transfer coefficient, effective radial thermal conductivity, wall heat transfer coefficient, effective radial diffusivity, and cooling medium (quench zone), were included in this study. Variation of the wall heat transfer coefficient and of the effective radial diffusivity in the near-wall region gave no remarkable changes in the temperature profiles. On the other hand, even small variations of the effective radial thermal conductivity affected the simulated temperature profiles significantly, and this effect could not be compensated by variations of the other model parameters.

  11. TMS modeling toolbox for realistic simulation.

    Science.gov (United States)

    Cho, Young Sun; Suh, Hyun Sang; Lee, Won Hee; Kim, Tae-Seong

    2010-01-01

    Transcranial magnetic stimulation (TMS) is a technique for brain stimulation using rapidly changing magnetic fields generated by coils. It has been established as an effective stimulation technique to treat patients suffering from impaired brain function. Although TMS is known to be painless and noninvasive, it can also be harmful to the brain through incorrect focusing or excessive stimulation, which might result in seizures. Therefore, there is an ongoing research effort to elucidate and better understand the effects and mechanism of TMS. Lately, the boundary element method (BEM) and the finite element method (FEM) have been used to simulate the electromagnetic phenomena of TMS. However, general tools for generating TMS models have been lacking, owing to the difficulties of realistically modeling the human head and TMS coils. In this study, we have developed a toolbox through which one can generate high-resolution FE TMS models. The toolbox allows creating FE models of the head with isotropic and anisotropic electrical conductivities in five different tissues of the head, together with the coils in 3D. The generated TMS model can be imported into FE software packages such as ANSYS for further efficient electromagnetic analysis. We present a set of demonstrative results of realistic simulation of TMS with our toolbox.

  12. Long-term manure carbon sequestration in soil simulated with the Daisy model on the basis of short-term incubation study

    DEFF Research Database (Denmark)

    Karki, Yubaraj Kumar; Børgesen, Christen Duus; Thomsen, Ingrid Kaag

    2013-01-01

    This study focused on simulating the long-term soil carbon sequestration after application of anaerobically digested and non-digested cattle manure using the Daisy model. The model was parameterized and calibrated for soil carbon (C) release during a 247-day incubation study including a coarse... application of the two manures (70 kg organic manure N ha-1 plus 90 kg mineral N ha-1) and compared with a mineral N reference (120 kg N ha-1 yr-1). Carbon retention in soil was related to the initial C in non-digested manure, and after 52 years of repeated manure application extra C retention was equivalent

  13. Modeling studies on simultaneous adsorption of phenol and resorcinol onto granular activated carbon from simulated aqueous solution.

    Science.gov (United States)

    Kumar, Shashi; Zafar, Mohd; Prajapati, Jitendra K; Kumar, Surendra; Kannepalli, Sivaram

    2011-01-15

    The modelling of simultaneous adsorption of phenol and resorcinol onto granular activated carbon (GAC) from a multicomponent solution was carried out at 303 K through batch experiments over an initial concentration range of 100-1000 mg/l. Three equilibrium isotherm models for multicomponent adsorption were considered. In order to determine the parameters of the multicomponent adsorption isotherms, individual adsorption studies of phenol and resorcinol on GAC were also carried out. The experimental data for single and multicomponent adsorption were fitted to these models, and the parameters of the multicomponent models were estimated using an error minimization technique in MATLAB R2007a. It was observed that for low initial adsorbate concentrations (100-200 mg/l), the modified Langmuir model represents the data very well, with adsorption constants (Q0) of 216.1 and 0.032 and average relative errors (ARE) of 8.34 and 8.31 for phenol and resorcinol, respectively. For high initial adsorbate concentrations (400-1000 mg/l), the extended Freundlich model represents the data very well, with adsorption constants (KF) of 25.41 and 24.25 and ARE of 7.0 and 6.46 for phenol and resorcinol, respectively. The effects of solution pH, adsorbent dose and initial concentrations of phenol and resorcinol on the adsorption behaviour were also investigated. Copyright © 2010 Elsevier B.V. All rights reserved.
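    The error-minimization fitting step is straightforward to sketch. The example below fits a single-component Freundlich isotherm to synthetic data by minimizing the average relative error (ARE); the study itself fits multicomponent extended Freundlich and modified Langmuir forms in MATLAB, and the data and starting values here are invented.

```python
# ARE-minimization fit of a Freundlich isotherm, qe = KF * Ce**(1/n),
# on synthetic single-component data (illustrative stand-in for the
# multicomponent fits described above).
import numpy as np
from scipy.optimize import minimize

np.random.seed(0)
Ce = np.array([100., 200., 400., 600., 800., 1000.])    # equilibrium conc, mg/l
qe_obs = 25.0 * Ce**0.35 * (1 + 0.02 * np.random.randn(Ce.size))  # uptake, mg/g

def freundlich(params, Ce):
    KF, n = params
    return KF * Ce ** (1.0 / n)

def are(params):        # average relative error, the criterion used in the study
    qe = freundlich(params, Ce)
    return 100.0 / Ce.size * np.sum(np.abs((qe_obs - qe) / qe_obs))

fit = minimize(are, x0=[10.0, 2.0], method="Nelder-Mead")
print("KF = %.2f, n = %.2f, ARE = %.2f%%" % (fit.x[0], fit.x[1], fit.fun))
```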

  14. Deformation of the Durom acetabular component and its impact on tribology in a cadaveric model--a simulator study.

    Directory of Open Access Journals (Sweden)

    Feng Liu

    Full Text Available BACKGROUND: Recent studies have shown that the acetabular component frequently becomes deformed during press-fit insertion. The aim of this study was to explore the deformation of the Durom cup after implantation and to clarify the impact of deformation on the wear and ion release of Durom large-head metal-on-metal (MOM) total hips in simulators. METHODS: Six Durom cups impacted into reamed acetabula of fresh cadavers were used as the experimental group and another 6 size-paired intact Durom cups constituted the control group. All 12 Durom MOM total hips were put through a 3 million cycle (MC) wear test in simulators. RESULTS: The 6 cups in the experimental group were all deformed, with a mean deformation of 41.78 ± 8.86 µm. The average volumetric wear rates in the experimental and control groups in the first million cycles were 6.65 ± 0.29 mm³/MC and 0.89 ± 0.04 mm³/MC (t = 48.43, p = 0.000). The ion levels of Cr and Co in the experimental group were also higher than those in the control group before 2.0 MC. However, there was no difference in the ion levels between 2.0 and 3.0 MC. CONCLUSIONS: These findings imply that the non-modular acetabular component of the Durom total hip prosthesis is likely to become deformed during press-fit insertion, and that the deformation results in increased volumetric wear and increased ion release. CLINICAL RELEVANCE: This study was designed to explore the deformation of the Durom cup after implantation and to clarify the impact of deformation on the wear and ion release of the prosthesis. Deformation of the cup after implantation increases the wear of MOM bearings and the resulting ion levels. Clinical use of the Durom large-head prosthesis should proceed with great care.

  15. Whole-building Hygrothermal Simulation Model

    DEFF Research Database (Denmark)

    Rode, Carsten; Grau, Karl

    2003-01-01

    An existing integrated simulation tool for dynamic thermal simulation of building was extended with a transient model for moisture release and uptake in building materials. Validation of the new model was begun with comparison against measurements in an outdoor test cell furnished with single...... materials. Almost quasi-steady, cyclic experiments were used to compare the indoor humidity variation and the numerical results of the integrated simulation tool with the new moisture model. Except for the case with chipboard as furnishing, the predictions of indoor humidity with the detailed model were...

  16. Simulation of volumetrically heated pebble beds in solid breeding blankets for fusion reactors. Modelling, experimental validation and sensitivity studies

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez Gonzalez, Francisco Alberto

    2016-10-14

    The Breeder Units contain pebble beds of lithium orthosilicate (Li₄SiO₄) as tritium breeder material and beryllium as neutron multiplier. In this dissertation a closed strategy for the thermo-mechanical validation of the Breeder Units has been developed. This strategy is based on the development of dedicated testing and modeling tools, which are needed for the qualification of the thermo-mechanical functionality of these components in an out-of-pile experimental campaign. The neutron flux in the Breeder Units induces a nonhomogeneous volumetric heating in the pebble beds that must be mimicked in an out-of-pile experiment with an external heating system minimizing the intrusion in the pebble beds. Therefore, a heater system that simulates this volumetric heating has been developed. This heater system is based on ohmic heating with linear heater elements, which approximate the point heat sources of the granular material by linear sources. These linear sources represent "linear pebbles" in discrete locations close enough to reproduce reasonably well the thermal gradients occurring in the functional materials. The heater concept has been developed for the Li₄SiO₄ and is based on a hexagonal matrix arrangement of linear and parallel heater elements of diameter 1 mm separated by 7 mm. A set of uniformly distributed thermocouples in the transversal and longitudinal directions in the pebble-bed midplane allows a 2D temperature reconstruction of that measurement plane by means of biharmonic spline interpolation. This heating system has been implemented in a relevant Breeder Unit region and its proof of concept has been tested in a PRE-test Mock-Up eXperiment (PREMUX) that was designed and constructed in the frame of this dissertation. The packing factor of the pebble bed with and without the heating system does not show significant differences, giving indirect evidence of the low intrusion of the system. Such

  17. Simulation-based modeling of building complexes construction management

    Science.gov (United States)

    Shepelev, Aleksandr; Severova, Galina; Potashova, Irina

    2018-03-01

    The study reported here examines the experience in the development and implementation of business simulation games based on network planning and management of high-rise construction. Appropriate network models of different types and levels of detail have been developed; a simulation model including 51 blocks (11 stages combined in 4 units) is proposed.

  18. Active site modeling in copper azurin molecular dynamics simulations

    NARCIS (Netherlands)

    Rizzuti, B; Swart, M; Sportelli, L; Guzzi, R

    Active site modeling in molecular dynamics simulations is investigated for the reduced state of copper azurin. Five simulation runs (5 ns each) were performed at room temperature to study the consequences of a mixed electrostatic/constrained modeling for the coordination between the metal and the

  19. Simulation modeling for the health care manager.

    Science.gov (United States)

    Kennedy, Michael H

    2009-01-01

    This article addresses the use of simulation software to solve administrative problems faced by health care managers. Spreadsheet add-ins, process simulation software, and discrete event simulation software are available at a range of costs and complexity. All use the Monte Carlo method to realistically integrate probability distributions into models of the health care environment. Problems typically addressed by health care simulation modeling are facility planning, resource allocation, staffing, patient flow and wait time, routing and transportation, supply chain management, and process improvement.
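    A minimal Monte Carlo sketch of the patient-flow and wait-time questions listed above is a single-server clinic with random arrivals and service times, replicated many times. The distributions and rates below are illustrative, not from the article.

```python
# Monte Carlo estimate of mean patient wait at a single-server clinic.
# Exponential interarrival/service times and all rates are invented.
import random

def clinic_day(n_patients=100, mean_interarrival=6.0, mean_service=5.0):
    clock = server_free_at = 0.0
    waits = []
    for _ in range(n_patients):
        clock += random.expovariate(1.0 / mean_interarrival)  # next arrival
        start = max(clock, server_free_at)                    # wait if busy
        waits.append(start - clock)
        server_free_at = start + random.expovariate(1.0 / mean_service)
    return sum(waits) / len(waits)

random.seed(1)
runs = [clinic_day() for _ in range(1000)]      # Monte Carlo replications
print(f"mean wait: {sum(runs)/len(runs):.1f} min")
```

    Commercial spreadsheet add-ins and discrete-event packages automate exactly this pattern: sample the uncertain inputs, replicate the process model, and summarize the output distribution.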

  20. Study of Cardiac Defibrillation Through Numerical Simulations

    Science.gov (United States)

    Bragard, J.; Marin, S.; Cherry, E. M.; Fenton, F. H.

    Three-dimensional numerical simulations of the defibrillation problem are presented. In particular, in this study we use the rabbit ventricular geometry as a realistic model system for evaluating the efficacy of defibrillatory shocks. Statistical data obtained from the simulations were analyzed in terms of a dose-response curve. Good quantitative agreement between our numerical results and clinically relevant values is obtained. An electric field strength of about 6.6 V/cm indicates a fifty percent probability of successful defibrillation for a 12-ms monophasic shock. Our validated model will be useful for optimizing defibrillation protocols.
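    The dose-response analysis amounts to fitting a sigmoid to shock outcomes and reading off the 50%-success dose. The sketch below does this with synthetic success fractions tuned so the fitted ED50 lands near the reported 6.6 V/cm; the data are not from the study.

```python
# Logistic dose-response fit: find the field strength giving 50% success.
# Success probabilities below are synthetic, chosen for illustration.
import numpy as np
from scipy.optimize import curve_fit

def logistic(E, E50, k):
    return 1.0 / (1.0 + np.exp(-k * (E - E50)))

E = np.linspace(2.0, 12.0, 11)                  # shock field strength, V/cm
p_success = np.array([.02, .05, .12, .25, .42, .55, .68, .80, .90, .95, .98])

(E50, k), _ = curve_fit(logistic, E, p_success, p0=[6.0, 1.0])
print(f"fitted ED50 = {E50:.2f} V/cm, slope k = {k:.2f}")
```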

  1. Modeling and simulation of blood collection systems.

    Science.gov (United States)

    Alfonso, Edgar; Xie, Xiaolan; Augusto, Vincent; Garraud, Olivier

    2012-03-01

    This paper addresses the modeling and simulation of blood collection systems in France, for both fixed-site and mobile blood collection with walk-in whole blood donors and scheduled plasma and platelet donors. Petri net models are first proposed to precisely describe the different blood collection processes, donor behaviors, their material/human resource requirements and the relevant regulations. The Petri net models are then enriched with quantitative modeling of donor arrivals, donor behaviors, activity times and resource capacity. Relevant performance indicators are defined. The resulting simulation models can be straightforwardly implemented with any simulation language. Numerical experiments are performed to show how the simulation models can be used to select, for different walk-in donor arrival patterns, appropriate human resource planning and donor appointment strategies.
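    The Petri-net view can be shown with a minimal token game: places hold tokens (donors, nurses), and a transition fires when its input places are sufficiently marked. The toy registration-to-collection net below is an illustration of the formalism only, not the authors' model of the French system.

```python
# Minimal Petri net: places with token counts, transitions as (pre, post) arcs.
# Toy net: a nurse processes waiting donors one at a time.
marking = {"waiting": 3, "nurse_free": 1, "in_collection": 0, "done": 0}
transitions = {
    "start_collection": ({"waiting": 1, "nurse_free": 1}, {"in_collection": 1}),
    "end_collection":   ({"in_collection": 1}, {"done": 1, "nurse_free": 1}),
}

def enabled(name):
    pre, _ = transitions[name]
    return all(marking[p] >= n for p, n in pre.items())

def fire(name):
    pre, post = transitions[name]
    for p, n in pre.items():
        marking[p] -= n           # consume input tokens
    for p, n in post.items():
        marking[p] += n           # produce output tokens

while any(enabled(tr) for tr in transitions):
    fire(next(tr for tr in transitions if enabled(tr)))
print(marking)   # all donors processed: waiting 0, done 3, nurse free again
```

    The quantitative enrichment described above then attaches timing distributions to transitions, which turns the token game into a discrete-event simulation.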

  2. Mathematical modeling and numerical study of a spray in a rarefied gas. Application to the simulation of dust particle transport in ITER in case of vacuum loss accident

    International Nuclear Information System (INIS)

    Charles, F.

    2009-11-01

    -In-Cell method. Starting from these models, we perform some numerical simulations of a loss-of-vacuum event in the framework of safety studies in ITER. (author)

  3. PEM Fuel Cells with Bio-Ethanol Processor Systems A Multidisciplinary Study of Modelling, Simulation, Fault Diagnosis and Advanced Control

    CERN Document Server

    Feroldi, Diego; Outbib, Rachid

    2012-01-01

    An apparently appropriate control scheme for PEM fuel cells may actually lead to an inoperable plant when it is connected to other unit operations in a process with recycle streams and energy integration. PEM Fuel Cells with Bio-Ethanol Processor Systems presents a control system design that provides basic regulation of the hydrogen production process with PEM fuel cells. It then goes on to construct a fault diagnosis system to improve plant safety above this control structure. PEM Fuel Cells with Bio-Ethanol Processor Systems is divided into two parts: the first covers fuel cells and the second discusses plants for hydrogen production from bio-ethanol to feed PEM fuel cells. Both parts give detailed analyses of modeling, simulation, advanced control, and fault diagnosis. They give an extensive, in-depth discussion of the problems that can occur in fuel cell systems and propose a way to control these systems through advanced control algorithms. A significant part of the book is also given over to computer-aid...

  4. Modeling and Simulation of Matrix Converter

    DEFF Research Database (Denmark)

    Liu, Fu-rong; Klumpner, Christian; Blaabjerg, Frede

    2005-01-01

    This paper discusses the modeling and simulation of the matrix converter. Two models of the matrix converter are presented: one is based on indirect space vector modulation and the other is based on the power balance equation. The basis of these two models is given and the modeling process is introduced...

  5. Influence of B{sub 1}-inhomogeneity on pharmacokinetic modeling of dynamic contrast-enhanced MRI: A simulation study

    Energy Technology Data Exchange (ETDEWEB)

    Park, Bun Woo [Dept. of Radiology, Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of); Choi, Byung Se [Dept. of Radiology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of); and others

    2017-08-01

    To simulate the B1-inhomogeneity-induced variation of pharmacokinetic parameters on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). B1-inhomogeneity-induced flip angle (FA) variation was estimated in a phantom study. Monte Carlo simulation was performed to assess the FA-deviation-induced measurement error of the pre-contrast R1, contrast-enhancement ratio, Gd-concentration, and two-compartment pharmacokinetic parameters (Ktrans, ve, and vp). B1-inhomogeneity resulted in −23–5% fluctuations (95% confidence interval [CI] of % error) of FA. The 95% CIs of FA-dependent % errors in the gray matter and blood were as follows: −16.7–61.8% and −16.7–61.8% for the pre-contrast R1, −1.0–0.3% and −5.2–1.3% for the contrast-enhancement ratio, and −14.2–58.1% and −14.1–57.8% for the Gd-concentration, respectively. These resulted in −43.1–48.4% error for Ktrans, −32.3–48.6% error for the ve, and −43.2–48.6% error for vp. The pre-contrast R1 was more vulnerable to FA error than the contrast-enhancement ratio, and was therefore a significant cause of the Gd-concentration error. For example, a −10% FA error led to a 23.6% deviation in the pre-contrast R1, −0.4% in the contrast-enhancement ratio, and 23.6% in the Gd-concentration. In a simulated condition with a 3% FA error in a target lesion and a −10% FA error in a feeding vessel, the % errors of the pharmacokinetic parameters were −23.7% for Ktrans, −23.7% for ve, and −23.7% for vp. Even a small degree of B1-inhomogeneity can cause a significant error in the measurement of pharmacokinetic parameters on DCE-MRI, while the vulnerability of the pre-contrast R1 calculations to FA deviations is a significant cause of the miscalculation.
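    The propagation of a flip-angle (FA) error into the pre-contrast R1 can be sketched with the standard variable-flip-angle (SPGR/DESPOT1) estimation: generate signals with the B1-scaled true angles, then fit assuming the nominal angles. The sequence parameters below are typical values, not those of the study.

```python
# FA-error propagation into the pre-contrast R1 via the two-angle SPGR
# (DESPOT1) method. TR, flip angles and tissue values are assumed.
import numpy as np

TR, R1_true, M0 = 5e-3, 1.0, 1.0                    # s, 1/s, a.u.
nominal_fa = np.deg2rad([2.0, 15.0])                # prescribed flip angles

def spgr(fa, R1):
    E1 = np.exp(-TR * R1)
    return M0 * np.sin(fa) * (1 - E1) / (1 - E1 * np.cos(fa))

for fa_error in (-0.10, 0.0, 0.05):                 # B1-induced FA deviation
    S = spgr(nominal_fa * (1 + fa_error), R1_true)  # what the scanner measures
    # DESPOT1 linearization with the *nominal* angles: y = E1*x + M0*(1 - E1)
    x, y = S / np.tan(nominal_fa), S / np.sin(nominal_fa)
    E1 = (y[1] - y[0]) / (x[1] - x[0])
    R1_est = -np.log(E1) / TR
    print(f"FA error {fa_error:+.0%} -> R1 error {100*(R1_est/R1_true-1):+.1f}%")
```

    With these settings the −10% case lands at roughly +23.5% R1 error, close to the 23.6% deviation quoted above, since the apparent R1 scales approximately with the inverse square of the B1 scale factor.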

  6. Study protocol: combining experimental methods, econometrics and simulation modelling to determine price elasticities for studying food taxes and subsidies (The Price ExaM Study)

    OpenAIRE

    Waterlander, Wilma E.; Blakely, Tony; Nghiem, Nhung; Cleghorn, Christine L.; Eyles, Helen; Genc, Murat; Wilson, Nick; Jiang, Yannan; Swinburn, Boyd; Jacobi, Liana; Michie, Jo; Ni Mhurchu, Cliona

    2016-01-01

    Abstract Background There is a need for accurate and precise food price elasticities (PE, change in consumer demand in response to change in price) to better inform policy on health-related food taxes and subsidies. Methods/Design The Price Experiment and Modelling (Price ExaM) study aims to: I) derive accurate and precise food PE values; II) quantify the impact of price changes on quantity and quality of discrete food group purchases and; III) model the potential health and disease impacts o...

  7. Effect of ski boot rear stiffness (SBRS) on maximal ACL force during injury prone landing movements in alpine ski racing: A study with a musculoskeletal simulation model.

    Science.gov (United States)

    Eberle, Robert; Heinrich, Dieter; Kaps, Peter; Oberguggenberger, Michael; Nachbauer, Werner

    2017-06-01

    A common anterior cruciate ligament (ACL) injury situation in alpine ski racing is landing back-weighted after a jump. Simulated back-weighted landing situations showed higher ACL-injury risk for increasing ski boot rear stiffness (SBRS) without considering muscles. It is well known that muscle forces affect ACL tensile forces during landing. The purpose of this study is to investigate the effect of different SBRS on the maximal ACL tensile forces during injury prone landings considering muscle forces by a two-dimensional musculoskeletal simulation model. Injury prone situations for ACL-injuries were generated by the musculoskeletal simulation model using measured kinematics of a non-injury situation and the method of Monte Carlo simulation. Subsequently, the SBRS was varied for injury prone landings. The maximal ACL tensile forces and contributing factors to the ACL forces were compared for the different SBRS. In the injury prone landings the maximal ACL tensile forces increased with increasing SBRS. It was found that the higher maximal ACL force was caused by higher forces acting on the tibia by the boot and by higher quadriceps muscle forces both due to the higher SBRS. Practical experience suggested that the reduction of SBRS is not accepted by ski racers due to performance reasons. Thus, preventive measures may concentrate on the reduction of the quadriceps muscle force during impact.

  8. Assessing the impact of releases of radionuclides into sewage systems in urban environment - simulation, modelling and experimental studies - LUCIA

    International Nuclear Information System (INIS)

    Sundelll-Bergman, S.; Avila, R.; Cruz, I. de la; Xu, S.; Puhakainen, M.; Heikkinene, T.; Rahola, T.; Hosseini, A.; Nielsen, Sven; Sigurgeirsson, M.

    2009-06-01

    This report summarises the findings of a project on assessing the impact of releases of radionuclides into sewage systems, established to provide more knowledge and suitable tools for emergency preparedness purposes in urban areas. The design of sewage plants, and their wastewater treatments, is rather similar across the Nordic countries. One sewage plant in each of the five Nordic countries was selected for assessing the impact of radionuclide releases from hospitals into its sewerage system. Measurements and model predictions of doses to different potentially exposed members of the public were carried out. The results from the dose assessments indicate that in case of routine releases the annual doses to the three hypothetical groups of individuals are most likely insignificant. Estimated doses for workers are below 10 μSv/y for the two studied radionuclides, 99mTc and 131I. If uncertainties in the predictions of activity concentrations in sludge are considered, then the probability of obtaining doses above 10 μSv/y may not be insignificant. The models and approaches developed can also be applied in case of accidental releases. A laboratory inter-comparison exercise was also organised to compare analytical results across the laboratories participating in the project, using 131I, the dominating man-made radionuclide in sewage systems due to its medical use. A process-oriented model of the biological treatment is also proposed in the report that does not require as much input data as the LUCIA model. This model is a combination of a simplified, well-known Activated Sludge Model No. 1 (Henze, 1987) and the Kd concept used in the LUCIA model. The simplified model is able to estimate the concentrations and the retention time of the sludge in different parts of the treatment plant, which in turn can be used as a tool for dose assessment purposes. (au)

  9. Assessing the impact of releases of radionuclides into sewage systems in urban environment - simulation, modelling and experimental studies - LUCIA

    Energy Technology Data Exchange (ETDEWEB)

    Sundelll-Bergman, S. (Vattenfall Power Consultant, Stockholm (Sweden)); Avila, R.; Cruz, I. de la (Facilia AB, (Sweden)); Xu, S. (Swedish Radiation Safety Authority, (Sweden)); Puhakainen, M.; Heikkinene, T.; Rahola, T. (STUK (Finland)); Hosseini, A. (Norwegian Radiation Protection Authority (Norway)); Nielsen, Sven (Risoe National Laboratory for Sustainable Energy, DTU (Denmark)); Sigurgeirsson, M. (Geislavarnir rikisins (Iceland))

    2009-06-15

    This report summarises the findings of a project on assessing the impact of releases of radionuclides into sewage systems, established to provide more knowledge and suitable tools for emergency preparedness purposes in urban areas. The design of sewage plants, and their wastewater treatments, is rather similar across the Nordic countries. One sewage plant in each of the five Nordic countries was selected for assessing the impact of radionuclide releases from hospitals into its sewerage system. Measurements and model predictions of doses to different potentially exposed members of the public were carried out. The results from the dose assessments indicate that in case of routine releases the annual doses to the three hypothetical groups of individuals are most likely insignificant. Estimated doses for workers are below 10 μSv/y for the two studied radionuclides, 99mTc and 131I. If uncertainties in the predictions of activity concentrations in sludge are considered, then the probability of obtaining doses above 10 μSv/y may not be insignificant. The models and approaches developed can also be applied in case of accidental releases. A laboratory inter-comparison exercise was also organised to compare analytical results across the laboratories participating in the project, using 131I, the dominating man-made radionuclide in sewage systems due to its medical use. A process-oriented model of the biological treatment is also proposed in the report that does not require as much input data as the LUCIA model. This model is a combination of a simplified, well-known Activated Sludge Model No. 1 (Henze, 1987) and the Kd concept used in the LUCIA model. The simplified model is able to estimate the concentrations and the retention time of the sludge in different parts of the treatment plant, which in turn can be used as a tool for dose assessment purposes. (au)

  10. A Case Study on Observed and Simulated CO2 Concentration Profiles in Hefei based on Raman Lidar and GEOS-Chem Model

    Directory of Open Access Journals (Sweden)

    Wang Yinan

    2016-01-01

    Full Text Available Observations of atmospheric CO2 concentration profiles provide significant constraints on global/regional inversions of carbon sources and sinks. The Anhui Institute of Optics and Fine Mechanics of the Chinese Academy of Sciences developed a Raman lidar system to detect the vertical distribution of atmospheric CO2. This paper compared the observations with modeled results from a three-dimensional global chemistry transport model, GEOS-Chem, which showed good agreement with the lidar measurements in the trend of change. The case study indicates a potential for better simulating the vertical distribution of atmospheric CO2 by combining the model with lidar measurements.

  11. Comparative study of regionalization methods for simulating low-flows from a small number of model parameters

    Science.gov (United States)

    Garcia, Florine; Folton, Nathalie; Oudin, Ludovic; Arnaud, Patrick

    2015-04-01

    Issues with water resource management result from both increasing demand and climate change. Situations of low flows, droughts and, more generally, lack of water are critically scrutinized. In this context, there is a need for tools to assist water agencies in the prediction and management of reference low-flows at gauged and ungauged catchment locations. IRSTEA developed GR2M-LoiEau, a conceptual distributed rainfall-runoff model, which is combined with a regionalized model of snow storage and melt. GR2M-LoiEau relies on two parameters, which are regionalized and mapped throughout France. This model allows mapping of annual and monthly reference low-flows. The input meteorological data come from the distributed mesoscale atmospheric analysis system SAFRAN, which provides daily solid and liquid precipitation and temperature data for the whole French territory. In order to fully exploit these daily meteorological data to estimate daily low-flow statistics, a new version of GR2M-LoiEau is being developed at a daily time step, yet keeping only a few regionalized parameters. The aim of this study is to design a comprehensive set of tests for comparing low-flows obtained with the different regionalization methods used to estimate the low-flow model parameters. The new version of GR2M-LoiEau not yet being operational, the tests are made with GR4J (Perrin, 2002), a conceptual rainfall-runoff model which already provides daily estimates but involves four parameters that cannot easily be regionalized. Many studies have shown the good predictive performance of this model. This work includes two parts. On the one hand, good criteria must be identified to evaluate and compare model results, since good predictions are expected not only for low flows and reference low-flows but also for annual means and high flows. On the other hand, two regionalization methods for estimating the model parameters will have to be compared. The first one is rough, all the

  12. Systematic modelling and simulation of refrigeration systems

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1998-01-01

    The task of developing a simulation model of a refrigeration system can be very difficult and time consuming. In order for this process to be effective, a systematic method for developing the system model is required. This method should aim at guiding the developer to clarify the purpose of the simulation, to select appropriate component models and to set up the equations in a well-arranged way. In this paper the outline of such a method is proposed and examples showing the use of this method for simulation of refrigeration systems are given.

  13. Study on a Dynamic Vegetation Model for Simulating Land Surface Flux Exchanges at Lien-Hua-Chih Flux Observation Site in Taiwan

    Science.gov (United States)

    Yeh, T. Y.; Li, M. H.; Chen, Y. Y.; Ryder, J.; McGrath, M.; Otto, J.; Naudts, K.; Luyssaert, S.; MacBean, N.; Bastrikov, V.

    2016-12-01

    The dynamic vegetation model ORCHIDEE (Organizing Carbon and Hydrology In Dynamic EcosystEms) is a state-of-the-art land surface component of the IPSL (Institut Pierre Simon Laplace) Earth System Model. It has been used worldwide to investigate variations of water, carbon, and energy exchanges between the land surface and the atmosphere. In this study we assessed the applicability of ORCHIDEE-CAN, a new version with a 3-D CANopy structure (Naudts et al., 2015; Ryder et al., 2016), for simulating the surface fluxes measured by tower-based eddy covariance at the Lien-Hua-Chih experimental watershed in Taiwan. The atmospheric forcing for driving the model, including radiation, air temperature and wind speed, together with the dynamics of the vertical canopy structure, was obtained from the observation site. Suitable combinations of default plant functional types were examined against in-situ observations of soil moisture and leaf area index from 2009 to 2013; the simulated top-layer soil moisture ranged from 0.1 to 0.4 and the total leaf area index from 2.2 to 4.4. A sensitivity analysis was performed to investigate the sensitivity of the model parameters and the skill of ORCHIDEE-CAN in capturing seasonal variations of surface fluxes. The most sensitive parameters were identified and calibrated with the automatic data assimilation tool ORCHIDAS (ORCHIDEE Data Assimilation Systems; http://orchidas.lsce.ipsl.fr/). Latent heat, sensible heat, and carbon fluxes simulated by the model were compared with long-term observations at the site. ORCHIDEE-CAN with the calibrated surface parameters was then used to study variations of land-atmosphere interactions on a variety of temporal scales in association with changes in both land and atmospheric conditions. Ref: Naudts, K., et al.: A vertically discretised canopy description for ORCHIDEE (SVN r2290) and the modifications to the energy, water and carbon fluxes, Geoscientific Model Development, 8, 2035-2065, doi:10.5194/gmd-8

  14. Modeling and Simulation Techniques for Large-Scale Communications Modeling

    National Research Council Canada - National Science Library

    Webb, Steve

    1997-01-01

    .... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.

  15. Evaluation of bootstrap methods for estimating uncertainty of parameters in nonlinear mixed-effects models: a simulation study in population pharmacokinetics.

    Science.gov (United States)

    Thai, Hoai-Thu; Mentré, France; Holford, Nicholas H G; Veyrat-Follet, Christine; Comets, Emmanuelle

    2014-02-01

    Bootstrap methods are used in many disciplines to estimate the uncertainty of parameters, including in multi-level or linear mixed-effects models. Residual-based bootstrap methods, which resample both random effects and residuals, are an alternative to the case bootstrap, which resamples the individuals. Most PKPD applications use the case bootstrap, for which software is available. In this study, we evaluated the performance of three bootstrap methods (case bootstrap, nonparametric residual bootstrap and parametric bootstrap) in a simulation study and compared them to an asymptotic method (Asym) for estimating the uncertainty of parameters in nonlinear mixed-effects models (NLMEM) with heteroscedastic error. The simulation used as an example the PK model for aflibercept, an anti-angiogenic drug. As expected, we found that the bootstrap methods provided better estimates of uncertainty for parameters in NLMEM with high nonlinearity and balanced designs compared to the Asym, as implemented in MONOLIX. Overall, the parametric bootstrap performed better than the case bootstrap, as the true model and variance distribution were used. However, the case bootstrap is faster and simpler, as it makes no assumptions about the model and preserves both between-subject and residual variability in one resampling step. The performance of the nonparametric residual bootstrap was found to be limited when applied to NLMEM, due to its failure to reflate the variance before resampling in unbalanced designs, where the Asym and the parametric bootstrap performed well and better than the case bootstrap even with stratification.
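    The case (subject-level) bootstrap compared above is simple to sketch: resample whole individuals with replacement and refit the model on each replicate. The toy below uses a mono-exponential PK curve with a pooled fit, not the aflibercept NLMEM; a real case bootstrap would refit the full mixed-effects model on each replicate.

```python
# Case bootstrap sketch: resample subjects with replacement, refit, take
# percentile CI. Model, data and variability parameters are invented.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.array([0.5, 1., 2., 4., 8., 24.])               # sampling times, h
subjects = []                                          # simulate 20 individuals
for _ in range(20):
    k_i = np.exp(np.log(0.08) + 0.3 * rng.standard_normal())  # between-subject var.
    conc = 10.0 * np.exp(-k_i * t) * (1 + 0.1 * rng.standard_normal(t.size))
    subjects.append(conc)

def fit_pooled(data):
    y = np.concatenate(data)
    x = np.tile(t, len(data))
    popt, _ = curve_fit(lambda tt, k: 10.0 * np.exp(-k * tt), x, y, p0=[0.1])
    return popt[0]

boot = [fit_pooled([subjects[i] for i in rng.integers(0, 20, 20)])
        for _ in range(500)]                           # case-bootstrap replicates
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"k = {fit_pooled(subjects):.4f}, 95% CI [{lo:.4f}, {hi:.4f}]")
```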

  16. Trace gas composition in the Asian summer monsoon anticyclone: a case study based on aircraft observations and model simulations

    Science.gov (United States)

    Gottschaldt, Klaus-D.; Schlager, Hans; Baumann, Robert; Bozem, Heiko; Eyring, Veronika; Hoor, Peter; Jöckel, Patrick; Jurkat, Tina; Voigt, Christiane; Zahn, Andreas; Ziereis, Helmut

    2017-05-01

    We present in situ measurements of the trace gas composition of the upper tropospheric (UT) Asian summer monsoon anticyclone (ASMA) performed with the High Altitude and Long Range Research Aircraft (HALO) in the frame of the Earth System Model Validation (ESMVal) campaign. Air masses with enhanced O3 mixing ratios were encountered after entering the ASMA at its southern edge at about 150 hPa on 18 September 2012. This is in contrast to the presumption that the anticyclone's interior is dominated by recently uplifted air with low O3 in the monsoon season. We also observed enhanced CO and HCl in the ASMA, which are tracers for boundary layer pollution and tropopause layer (TL) air or stratospheric in-mixing respectively. In addition, reactive nitrogen was enhanced in the ASMA. Along the HALO flight track across the ASMA boundary, strong gradients of these tracers separate anticyclonic from outside air. Lagrangian trajectory calculations using HYSPLIT show that HALO sampled a filament of UT air three times, which included air masses uplifted from the lower or mid-troposphere north of the Bay of Bengal. The trace gas gradients between UT and uplifted air masses were preserved during transport within a belt of streamlines fringing the central part of the anticyclone (fringe), but are smaller than the gradients across the ASMA boundary. Our data represent the first in situ observations across the southern part and downstream of the eastern ASMA flank. Back-trajectories starting at the flight track furthermore indicate that HALO transected the ASMA where it was just splitting into a Tibetan and an Iranian part. The O3-rich filament is diverted from the fringe towards the interior of the original anticyclone, and is at least partially bound to become part of the new Iranian eddy. A simulation with the ECHAM/MESSy Atmospheric Chemistry (EMAC) model is found to reproduce the observations reasonably well. It shows that O3-rich air is entrained by the outer streamlines of the

  17. Cooperative Learning Models Simulation: From Abstract to Concrete

    Directory of Open Access Journals (Sweden)

    Agustini Ketut

    2018-01-01

    Full Text Available This study aimed to develop a simulation of cooperative learning models for students who are prospective teachers, to improve the quality of learning and in particular their preparedness for classroom microteaching. The wider outcomes can be used more broadly by teachers and lecturers to improve their professionalism as educators. The method used is research and development (R&D), following the Dick & Carey development model. To produce the expected results, several steps were carried out: (a) an in-depth theoretical study related to the simulation software to be generated, based on the cooperative learning models to be developed; (b) formulation of the simulation software system design based on the results of the theoretical study; and (c) a formative evaluation, carried out by a content expert, a design expert and a media expert to establish the validity of the simulation media, followed by one-to-one student evaluation, small-group evaluation and a field-trial evaluation. The results showed that the software can simulate three cooperative learning models well. Student responses to the simulation models were 60% very positive and 40% positive. The implication of this result is that prospective teachers can apply cooperative learning models well when actually teaching in a training school, provided they are given real simulation examples of how cooperative learning is implemented in class.

  18. Four Models of In Situ Simulation

    DEFF Research Database (Denmark)

    Musaeus, Peter; Krogh, Kristian; Paltved, Charlotte

    2014-01-01

    Introduction: In situ simulation is characterized by being situated in the clinical environment, as opposed to the simulation laboratory, but it bears a family resemblance to other types of on-the-job training. We explore a typology of in situ simulation and suggest that there are four fruitful approaches: (1) In situ simulation informed by reported critical incidents and adverse events from the emergency departments (ED) in which team training is about to be conducted, used to write scenarios. (2) In situ simulation through ethnographic studies at the ED. (3) Using... The approaches differ in their relation to team intervention and in the philosophies informing what good situated learning research is. This study generates system knowledge that might inform scenario development for in situ simulation.

  19. Modeling and simulation of complex systems a framework for efficient agent-based modeling and simulation

    CERN Document Server

    Siegfried, Robert

    2014-01-01

    Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing the structure and dynamics of agent-based models in detail. Based on this evaluation the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore he presents parallel and distributed simulation approaches for the execution of agent-based models - from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize the underlying hardware.

  20. Heinrich events modeled in transient glacial simulations

    Science.gov (United States)

    Ziemen, Florian; Kapsch, Marie; Mikolajewicz, Uwe

    2017-04-01

    Heinrich events are among the most prominent events of climate variability recorded in proxies across the northern hemisphere. They are the archetype of ice-sheet-climate interactions on millennial time scales. Nevertheless, the exact mechanisms that cause Heinrich events are still under debate, and their climatic consequences are far from being fully understood. We address open questions by studying Heinrich events in a coupled ice sheet model (ISM) and atmosphere-ocean-vegetation general circulation model (AOVGCM) framework, where this variability occurs as part of the model-generated internal variability. The framework consists of a northern hemisphere setup of the modified Parallel Ice Sheet Model (mPISM) coupled to the global AOVGCM ECHAM5/MPIOM/LPJ. The simulations were performed fully coupled and with transient orbital and greenhouse gas forcing. They span from several millennia before the last glacial maximum into the deglaciation. To make these long simulations feasible, the atmosphere is accelerated by a factor of 10 relative to the other model components using a periodically synchronous coupling technique. To disentangle effects of the Heinrich events and the deglaciation, we focus on the events occurring before the deglaciation. The modeled Heinrich events show a peak ice discharge of about 0.05 Sv and raise the sea level by 2.3 m on average. The resulting surface water freshening reduces the Atlantic meridional overturning circulation and ocean heat release. The reduction in ocean heat release causes a sub-surface warming and decreases the air temperature and precipitation regionally and downstream into Eurasia. The surface elevation decrease of the ice sheet enhances moisture transport onto the ice sheet and thus increases precipitation over the Hudson Bay area, thereby accelerating the recovery after an event.

  1. Magnetosphere Modeling: From Cartoons to Simulations

    Science.gov (United States)

    Gombosi, T. I.

    2017-12-01

    Over the last half a century, physics-based global computer simulations became a bridge between experiment and basic theory, and they now represent the "third pillar" of geospace research. Today, many of our scientific publications utilize large-scale simulations to interpret observations, test new ideas, plan campaigns, or design new instruments. Realistic simulations of the complex Sun-Earth system have been made possible by the dramatically increased power of both computing hardware and numerical algorithms. Early magnetosphere models were based on simple E&M concepts (like the Chapman-Ferraro cavity) and hydrodynamic analogies (bow shock). At the beginning of the space age, current system models were developed, culminating in the sophisticated Tsyganenko-type description of the magnetic configuration. The first 3D MHD simulations of the magnetosphere were published in the early 1980s. A decade later there were several competing global models that were able to reproduce many fundamental properties of the magnetosphere. The leading models included the impact of the ionosphere by using a height-integrated electric potential description. Dynamic coupling of global and regional models started in the early 2000s by integrating a ring current and a global magnetosphere model. It has been recognized for quite some time that plasma kinetic effects play an important role. Presently, global hybrid simulations of the dynamic magnetosphere are expected to become possible on exascale supercomputers, while fully kinetic simulations with realistic mass ratios are still decades away. In the 2010s several groups started to experiment with PIC simulations embedded in large-scale 3D MHD models. Presently this integrated MHD-PIC approach is at the forefront of magnetosphere simulations, and this technique is expected to lead to some important advances in our understanding of magnetospheric physics. This talk will review the evolution of magnetosphere modeling from cartoons to current systems

  2. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    A familiar example of a feedback loop is the business model in which part of the output or profit is fed back as input or additional capital - for instance, a company may choose to reinvest 10% of the profit for expansion of the business. Such simple models, like ..... would help scientists, engineers and managers towards better.

  3. Complex Simulation Model of Mobile Fading Channel

    Directory of Open Access Journals (Sweden)

    Tomas Marek

    2005-01-01

    Full Text Available In the mobile communication environment the radio channel is the main obstacle limiting the performance of a wireless system. Modeling of the radio channel consists of two basic fading mechanisms: long-term fading and short-term fading. The contribution deals with simulation of the complex mobile radio channel, i.e. the channel with all fading components. The simulation model is based on the Clarke-Gans theoretical model for the fading channel and is developed in the MATLAB environment. Simulation results have shown very good agreement with theory. This model was developed for the hybrid adaptation 3G uplink simulator (described in this issue) during the research project VEGA 1/0140/03.
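
    For readers who want to experiment, a minimal sum-of-sinusoids Rayleigh fader in Python illustrates the short-term (Clarke-type) component the record describes; the Doppler frequency, sample rate and path count below are illustrative assumptions, not parameters from the paper (whose model was built in MATLAB).

```python
import numpy as np

def rayleigh_fading(fd, fs, duration, n_paths=32, seed=0):
    """Sum-of-sinusoids Rayleigh fader in the spirit of Clarke's model.

    fd: maximum Doppler shift [Hz]; fs: sample rate [Hz].
    Returns the complex channel gain over time, unit average power.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, 1.0 / fs)
    # Each scattered path arrives from a random angle theta, giving a
    # Doppler shift fd*cos(theta) and an independent random phase.
    theta = rng.uniform(0.0, 2.0 * np.pi, n_paths)
    phase = rng.uniform(0.0, 2.0 * np.pi, n_paths)
    doppler = fd * np.cos(theta)
    paths = np.exp(1j * (2.0 * np.pi * doppler[:, None] * t + phase[:, None]))
    return paths.sum(axis=0) / np.sqrt(n_paths)

h = rayleigh_fading(fd=100.0, fs=10_000.0, duration=1.0)
envelope_db = 20.0 * np.log10(np.abs(h))   # deep fades show as sharp dips
```

    The long-term component would multiply this gain by a slowly varying log-normal shadowing factor.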

  4. SEIR model simulation for Hepatitis B

    Science.gov (United States)

    Side, Syafruddin; Irwan, Mulbar, Usman; Sanusi, Wahidah

    2017-09-01

    Mathematical modelling and simulation of Hepatitis B are discussed in this paper. The population is divided into four compartments, namely: Susceptible, Exposed, Infected and Recovered (SEIR). Several factors affecting the population are included in the model: vaccination, immigration and emigration occurring in the population. The SEIR model yields a non-linear 4-D system of ordinary differential equations (ODEs), which is then reduced to 3-D. Simulation of the SEIR model was undertaken to predict the number of Hepatitis B cases. The results of the simulation indicate that the number of Hepatitis B cases will increase and then decrease over several months. The simulation using case numbers for Makassar also found a basic reproduction number of less than one, which means that the city of Makassar is not an endemic area for Hepatitis B.
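
    To make the ODE system concrete, here is a minimal sketch of a demographically open SEIR model with vaccination, integrated with SciPy; all parameter values and the initial state are illustrative assumptions, not the Makassar estimates.

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma, mu, nu):
    # S, E, I, R as population fractions; mu: birth/death rate,
    # nu: vaccination rate moving susceptibles directly into R.
    S, E, I, R = y
    dS = mu - beta * S * I - (mu + nu) * S
    dE = beta * S * I - (sigma + mu) * E
    dI = sigma * E - (gamma + mu) * I
    dR = gamma * I + nu * S - mu * R
    return [dS, dE, dI, dR]

t = np.linspace(0.0, 720.0, 721)                 # days
y0 = [0.90, 0.06, 0.04, 0.0]                     # illustrative initial state
beta, sigma, gamma = 0.1, 1 / 30.0, 1 / 7.0
mu, nu = 1 / (70 * 365.0), 0.0005
sol = odeint(seir, y0, t, args=(beta, sigma, gamma, mu, nu))

# Basic reproduction number for this formulation (endemic iff R0 > 1);
# with these values R0 is about 0.7, so infections rise briefly (fed by
# the exposed pool) and then die out, echoing the record's result.
R0 = beta * sigma / ((sigma + mu) * (gamma + mu))
print(f"R0 = {R0:.2f}, peak infected fraction = {sol[:, 2].max():.3f}")
```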

  5. Simulation models for food separation by adsorption process

    African Journals Online (AJOL)

    Separation of simulated industrial food products by the method of adsorption has been studied. A thermodynamic approach has been applied to study liquid adsorption, where benzene and cyclohexane have been used to simulate edible oils in a system that employs silica gel as the adsorbent. Different models suggested ...

  7. Multi-agent modeling and simulation of farmland use change in the farming-pastoral zone: A case study of Qianjingou Town in Inner Mongolia, China

    Science.gov (United States)

    Yan, H.

    2015-12-01

    Farmland is the most basic material condition for guaranteeing rural livelihoods and national food security, and exploring management strategies that take both sustainable rural livelihoods and sustainable farmland use into account has vital theoretical and practical significance. Farmland is a complex, self-adaptive system that couples human and natural systems, and the natural and social factors related to its changing process need to be considered when modeling farmland use change. This paper takes Qianjingou Town, in the Inner Mongolia farming-pastoral zone, as the study area. From the perspective of the relationship between households' livelihoods and farmland use, this study builds the process mechanism of farmland use change based on questionnaire data, and constructs a multi-agent simulation model of farmland use change with the help of Eclipse and the Repast toolbox. By simulating the relationship between natural factors (including geographical location) and households' behaviors, the model systematically simulates households' renting and abandonment of farmland, and describes the dynamic interactions between households' livelihoods and the factors related to farmland use change. These factors include natural factors (net primary productivity, road accessibility, slope and relief amplitude) and social factors (households' family structures, economic development and government policies). Finally, this study predicts the trend of farmland use change over the next 30 years. The simulation results show that the number of abandoned and sublet farmland plots has a gradually increasing trend, the number of non-farm households and pure-outwork households increases remarkably, and the number of part-farm households and pure-farm households decreases. Households' livelihood sustainability in the study area is confronted with increasing pressure, and households' nonfarm employment has an increasing

  8. Simulation data mapping in virtual cardiac model.

    Science.gov (United States)

    Jiquan, Liu; Jingyi, Feng; Duan, Huilong; Siping, Chen

    2004-01-01

    Although the 3D heart and torso models with realistic geometry are the basis of simulation computation in the LFX virtual cardiac model, the simulation results are mostly output in 2D format. To solve this problem and enhance the virtual reality of the LFX virtual cardiac model, methods of voxel mapping and vertex projection mapping are presented. With these methods, the excitation isochrone map (EIM) was mapped from the heart model with realistic geometry to the real Visible Man heart model, and the body surface potential map (BSPM) was mapped from the torso model with realistic geometry to the real Visible Man body surface. By visualizing in 4Dview, a real-time 3D medical image visualization platform, the visualization results of the EIM and BSPM simulation data before and after mapping are also provided. According to the visualization results, the output format of the EIM and BSPM simulation data of the LFX virtual cardiac model was extended from 2D to 4D (spatio-temporal) and from a cardiac model with realistic geometry to a real cardiac model, achieving more realistic and effective simulation.

  9. Fully Adaptive Radar Modeling and Simulation Development

    Science.gov (United States)

    2017-04-01

    AFRL-RY-WP-TR-2017-0074, Fully Adaptive Radar Modeling and Simulation Development, Kristine L. Bell and Anthony Kellems, Metron, Inc. Small Business Innovation Research (SBIR) Phase I report, contract number FA8650-16-M-1774. Approved for public release; distribution unlimited.

  10. Theory, modeling, and simulation annual report, 1992

    Energy Technology Data Exchange (ETDEWEB)

    1993-05-01

    This report briefly discusses research on the following topics: development of electronic structure methods; modeling molecular processes in clusters; modeling molecular processes in solution; modeling molecular processes in separations chemistry; modeling interfacial molecular processes; modeling molecular processes in the atmosphere; methods for periodic calculations on solids; chemistry and physics of minerals; graphical user interfaces for computational chemistry codes; visualization and analysis of molecular simulations; integrated computational chemistry environment; and benchmark computations.

  11. Modeling of magnetic particle suspensions for simulations

    CERN Document Server

    Satoh, Akira

    2017-01-01

    The main objective of the book is to highlight the modeling of magnetic particles with different shapes and magnetic properties, to provide graduate students and young researchers information on the theoretical aspects and actual techniques for the treatment of magnetic particles in particle-based simulations. In simulation, we focus on the Monte Carlo, molecular dynamics, Brownian dynamics, lattice Boltzmann and stochastic rotation dynamics (multi-particle collision dynamics) methods. The latter two simulation methods can simulate both the particle motion and the ambient flow field simultaneously. In general, specialized knowledge can only be obtained in an effective manner under the supervision of an expert. The present book is written to play such a role for readers who wish to develop the skill of modeling magnetic particles and develop a computer simulation program using their own ability. This book is therefore a self-learning book for graduate students and young researchers. Armed with this knowledge,...

  12. Modelling and Simulation of Wave Loads

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    1985-01-01

    A simple model of the wave load on slender members of offshore structures is described. The wave elevation of the sea state is modelled by a stationary Gaussian process. A new procedure to simulate realizations of the wave loads is developed. The simulation method assumes that the wave particle velocity can be approximated by a Gaussian Markov process. Known approximate results for the first-passage density or, equivalently, the distribution of the extremes of wave loads are presented and compared with rather precise simulation results. It is demonstrated that the approximate results...

  15. Minimum-complexity helicopter simulation math model

    Science.gov (United States)

    Heffley, Robert K.; Mnich, Marc A.

    1988-01-01

    An example of a minimum-complexity simulation helicopter math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with specification of the features which are to be modeled, followed by a build-up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.

  16. A simulation model for football championships

    OpenAIRE

    Koning, Ruud H.; Koolhaas, Michael; Renes, Gusta

    2001-01-01

    In this paper we discuss a simulation/probability model that identifies the team that is most likely to win a tournament. The model can also be used to answer other questions like ‘which team had a lucky draw?’ or ‘what is the probability that two teams meet at some moment in the tournament?’. Input to the simulation/probability model are scoring intensities, that are estimated as a weighted average of goals scored. The model has been used in practice to write articles for the popular press, ...

  17. Using Modeling and Simulation to Predict Operator Performance and Automation-Induced Complacency With Robotic Automation: A Case Study and Empirical Validation.

    Science.gov (United States)

    Wickens, Christopher D; Sebok, Angelia; Li, Huiyang; Sarter, Nadine; Gacy, Andrew M

    2015-09-01

    The aim of this study was to develop and validate a computational model of the automation complacency effect as operators work on a robotic arm task supported by three different degrees of automation. Some computational models of complacency in human-automation interaction exist, but those were formed and validated within the context of fairly simple monitoring failures. This research extends model validation to a much more complex task, so that system designers can establish, without the need for human-in-the-loop (HITL) experimentation, the merits and shortcomings of different degrees of automation. We developed a realistic simulation of a space-based robotic arm task that could be carried out with three different levels of trajectory visualization and execution automation support. Using this simulation, we performed HITL testing. Complacency was induced via several trials of correctly performing automation and then was assessed on trials when automation failed. Following a cognitive task analysis of the robotic arm operation, we developed a multicomponent model of the robotic operator and his or her reliance on automation, based in part on visual scanning. The comparison of model predictions with empirical results revealed that the model accurately predicted routine performance and the responses to failures after complacency had developed. However, the scanning models do not account for all of the attention-allocation effects of complacency. Complacency modeling can provide a useful tool for predicting the effects of different types of imperfect automation. The results from this research suggest that focus should be given to supporting situation awareness in automation development. © 2015, Human Factors and Ergonomics Society.

  18. SU-E-I-80: Quantification of Respiratory and Cardiac Motion Effect in SPECT Acquisitions Using Anthropomorphic Models: A Monte Carlo Simulation Study

    Energy Technology Data Exchange (ETDEWEB)

    Papadimitroulas, P; Kostou, T; Kagadis, G [University of Patras, Rion, Ahaia (Greece); Loudos, G [Technological Educational Institute of Athens, Egaleo, Attika (Greece)

    2015-06-15

    Purpose: The purpose of the present study was to quantify and evaluate the impact of cardiac and respiratory motion on clinical nuclear imaging protocols. Common SPECT and scintigraphic scans are studied using Monte Carlo (MC) simulations, comparing the resulting images with and without motion. Methods: Realistic simulations were executed using the GATE toolkit and the XCAT anthropomorphic phantom as a reference model for human anatomy. Three different radiopharmaceuticals based on 99mTc were studied, namely 99mTc-MDP, 99mTc-N-DBODC and 99mTc-DTPA-aerosol for bone, myocardium and lung scanning respectively. The resolution of the phantom was set to 3.5 mm³. The impact of motion on spatial resolution was quantified using a sphere of 3.5 mm diameter and 10 separate time frames, in the modeled ECAM SPECT scanner. Finally, the impact of respiratory motion on resolution and on the imaging of lung lesions was investigated. The MLEM algorithm was used for data reconstruction, while literature-derived biodistributions of the pharmaceuticals were used as activity maps in the simulations. Results: FWHM was extracted for a static and a moving sphere located ∼23 cm from the entrance of the SPECT head. The difference in FWHM was 20% between the two simulations. Profiles in the thorax were compared in the case of bone scintigraphy, showing displacement and blurring of the bones when respiratory motion was inserted in the simulation. Large discrepancies were noticed in the case of myocardium imaging when cardiac motion was incorporated during the SPECT acquisition. Finally, the borders of the lungs are blurred when respiratory motion is included, resulting in a displacement of ∼2.5 cm. Conclusion: As we move to individualized imaging and therapy procedures, quantitative and qualitative imaging is of high importance in nuclear diagnosis. MC simulations combined with anthropomorphic digital phantoms can provide an accurate tool for applications like motion correction

  19. Rasch modelling to deal with changes in the questionnaires used during long-term follow-up of cohort studies: a simulation study.

    Science.gov (United States)

    Rouquette, Alexandra; Côté, Sylvana M; Hardouin, Jean-Benoit; Falissard, Bruno

    2016-08-24

    A specific measurement issue often occurs in cohort studies with long-term follow-up: the substitution of the classic instruments used to assess one or several factors or outcomes studied by new, more reliable, more accurate or more convenient instruments. This study aimed to compare three techniques to deal with this issue when the substituted instrument is a questionnaire measuring a subjective phenomenon: one using only the items shared by the different questionnaires over time, i.e. computation of the raw score; the two others using every item, i.e. computation of the standardised score or estimation of the latent variable score using the Rasch model. Two hundred databases were simulated, corresponding to longitudinal 10-item questionnaire data from three trajectory groups of subjects for the subjective phenomenon of interest ("increasing", "stable-low" or "stable-high" mean trajectory over time). Three copies of these databases were generated and the subjects' responses to some items were removed at some collection times leading to a number of shared items over time varying from 4 to 10 in the 800 datasets. The performances of Latent Class Growth Analysis (LCGA) applied to the raw score, the standardised score or the latent variable score were studied on these databases according to the number of shared items over time. Surprisingly, LCGA applied to the latent variable score estimate did not perform as well as LCGA applied to the standardised score, where it was the most efficient whatever the number of shared items. However, the proportions of correctly classified subjects by LCGA applied to the latent variable score were more balanced across trajectory groups. The use of the standardised score to deal with questionnaire changes over time was more efficient than the raw score and also, surprisingly, than the latent variable score. LCGA applied to the raw score was the least efficient and exhibited the most unbalanced misclassifications across trajectory

  1. Modeling lift operations with SAS Simulation Studio

    Science.gov (United States)

    Kar, Leow Soo

    2016-10-01

    Lifts or elevators are an essential part of multistorey buildings, providing vertical transportation for their occupants. In large high-rise apartment buildings the occupants are permanent, while in buildings like hospitals or office blocks the occupants are temporary users of the building. They come in to work or to visit, and thus the population of such buildings is much higher than that of residential apartments. It is common these days for large office blocks or hospitals to have at least 8 to 10 lifts serving their population. In order to optimize the level of service performance, different transportation schemes are devised to control the lift operations. For example, one lift may be assigned to serve only the even floors and another only the odd floors, etc. In this paper, a basic lift system is modelled using SAS Simulation Studio to study the effect of factors such as the number of floors, the capacity of the lift car, the arrival and exit rates of passengers at each floor, and peak and off-peak periods on system performance. The simulation is applied to a real lift operation in Sunway College's North Building to validate the model.
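
    For intuition, a toy single-lift, single-queue model can be sketched in a few lines of Python; the record's actual model was built in SAS Simulation Studio, so everything below (arrival rate, travel time per floor, round-trip service rule) is a simplified assumption rather than the paper's design.

```python
import random

def simulate_lift(floors=10, capacity=12, rate=0.5, secs_per_floor=5.0,
                  horizon=8 * 3600):
    """Toy single-lift model: passengers arrive at the lobby as a Poisson
    process (rate per second) and ride to a uniformly chosen floor; the
    lift boards up to `capacity`, makes the round trip, and returns."""
    random.seed(7)
    arrivals, t = [], 0.0
    while t < horizon:                        # pre-generate Poisson arrivals
        t += random.expovariate(rate)
        arrivals.append(t)
    t, i, waits = 0.0, 0, []
    while i < len(arrivals):
        boarding = []
        while i < len(arrivals) and arrivals[i] <= t and len(boarding) < capacity:
            boarding.append(arrivals[i])
            i += 1
        if not boarding:                      # lift idles until someone arrives
            t = arrivals[i]
            continue
        waits += [t - a for a in boarding]
        dest = random.randint(2, floors)      # farthest floor served this trip
        t += 2 * (dest - 1) * secs_per_floor  # round trip back to the lobby
    return sum(waits) / len(waits)

print("mean wait (s):", round(simulate_lift(), 1))
```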

  2. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Modelling Deterministic Systems. N K Srinivasan graduated from the Indian Institute of Science and obtained his Doctorate from Columbia University, New York. He has taught in several universities, and later did system analysis, wargaming and simulation for defence. His other areas of interest are reliability engineering.

  3. Coupled Monte Carlo simulation and Copula theory for uncertainty analysis of multiphase flow simulation models.

    Science.gov (United States)

    Jiang, Xue; Na, Jin; Lu, Wenxi; Zhang, Yu

    2017-11-01

    Simulation-optimization techniques are effective in identifying an optimal remediation strategy. Simulation models with uncertainty, primarily in the form of parameter uncertainty with different degrees of correlation, influence the reliability of the optimal remediation strategy. In this study, coupled Monte Carlo simulation and Copula theory is proposed for uncertainty analysis of a simulation model whose parameters are correlated. Using the self-adaptive weight particle swarm optimization Kriging method, a surrogate model was constructed to replace the simulation model and reduce the computational burden and time consumption resulting from repeated and multiple Monte Carlo simulations. The Akaike information criterion (AIC) and the Bayesian information criterion (BIC) were employed to identify whether the t Copula function or the Gaussian Copula function is the optimal Copula function to match the dependence structure of the parameters. The results show that both the AIC and BIC values of the t Copula function are less than those of the Gaussian Copula function, indicating that the t Copula function is the optimal function for matching the dependence structure of the parameters. The outputs of the simulation model when parameter correlation was considered and when it was ignored were compared. The results show that the amplitude of the fluctuation interval when parameter correlation was considered is less than the corresponding amplitude when parameter correlation was ignored. Moreover, it was demonstrated that considering the correlation among parameters is essential for uncertainty analysis of a simulation model, and that the results of uncertainty analysis should be incorporated into the remediation strategy optimization process.
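
    As an illustration of the core idea, correlated parameter sets can be drawn through a Gaussian copula in a few lines; the marginals and correlation below are hypothetical stand-ins, and the study itself selected a t Copula via AIC/BIC (sampling a t copula works the same way, with multivariate-t draws pushed through stats.t.cdf).

```python
import numpy as np
from scipy import stats

def gaussian_copula_samples(n, corr, marginals, seed=0):
    """Draw correlated parameter sets: correlated standard normals are
    mapped to uniforms, then through each parameter's inverse CDF."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, len(marginals))) @ L.T
    u = stats.norm.cdf(z)            # uniform marginals, Gaussian dependence
    return np.column_stack([m.ppf(u[:, i]) for i, m in enumerate(marginals)])

# Two hypothetical correlated model parameters
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])
marginals = [stats.lognorm(s=0.5, scale=10.0),   # e.g. hydraulic conductivity
             stats.norm(loc=0.3, scale=0.05)]    # e.g. porosity
params = gaussian_copula_samples(5000, corr, marginals)
# Each row of `params` is one Monte Carlo input to the (surrogate) model.
```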

  4. Modelling and simulating fire tube boiler performance

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    A model for a flue gas boiler covering the flue gas and the water/steam side has been formulated. The model has been formulated as a number of sub-models that are merged into an overall model for the complete boiler. Sub-models have been defined for the furnace, the convection zone (split in two: a zone submerged in water and a zone covered by steam), a model for the material in the boiler (the steel) and two models for, respectively, the water/steam zone (the boiling) and the steam. The dynamic model has been developed as a number of Differential-Algebraic-Equation systems (DAE). Subsequently MATLAB/Simulink has been applied for carrying out the simulations. To be able to verify the simulated results, experiments have been carried out on a full-scale boiler plant.

  5. Field, model, and computer simulation study of some aspects of the origin and distribution of Colorado Plateau-type uranium deposits

    Science.gov (United States)

    Ethridge, F.G.; Sunada, D.K.; Tyler, Noel; Andrews, Sarah

    1982-01-01

    Numerous hypotheses have been proposed to account for the nature and distribution of tabular uranium and vanadium-uranium deposits of the Colorado Plateau. In one of these hypotheses it is suggested that the deposits resulted from geochemical reactions at the interface between a relatively stagnant groundwater solution and a dynamic, ore-carrying groundwater solution which permeated the host sandstones (Shawe, 1956; Granger et al., 1961; Granger, 1968, 1976; Granger and Warren, 1979). The study described here was designed to investigate some aspects of this hypothesis, particularly the nature of fluid flow in sands and sandstones, the nature and distribution of the deposits, and the relations between the deposits and the host sandstones. The investigation was divided into three phases, involving physical model, field, and computer simulation studies. During the initial phase, physical model studies were conducted in porous-media flumes. These studies verified that humic acid precipitates could form at the interface between a humic acid solution and a potassium aluminum sulfate solution, and that the nature and distribution of these precipitates were related to flow phenomena and to the nature and distribution of the host porous media. During the second phase, permeability and porosity patterns in Holocene stream deposits were investigated in the field, and the data obtained were used to design more realistic porous-media models. These model studies, which simulated actual stream deposits, demonstrated that precipitates possess many characteristics, in terms of their nature and relation to host sandstones, that are similar to ore deposits of the Colorado Plateau. The final phase of the investigation involved field studies of actual deposits, additional model studies in a large indoor flume, and computer simulation studies. The field investigations provided an up-to-date interpretation of the depositional

  6. Molecular Dynamics Simulation Study of Parallel Telomeric DNA Quadruplexes at Different Ionic Strengths: Evaluation of Water and Ion Models

    Czech Academy of Sciences Publication Activity Database

    Rebic, M.; Laaksonen, A.; Šponer, Jiří; Uličný, J.; Mocci, F.

    2016-01-01

    Roč. 120, č. 30 (2016), s. 7380-7391 ISSN 1520-6106 R&D Projects: GA ČR(CZ) GA16-13721S Institutional support: RVO:68081707 Keywords : amber force-field * nucleic-acids * biomolecular simulations Subject RIV: BO - Biophysics OBOR OECD: Physical chemistry Impact factor: 3.177, year: 2016

  7. A beer game simulation model for studying the impact of information sharing to diminish the bullwhip effect in supply chains: an educational support tool in supply chain management

    Directory of Open Access Journals (Sweden)

    Éder Vasco Pinheiro

    2016-06-01

    Full Text Available This paper simulates the Beer Distribution Game using object-oriented simulation software. A five-echelon supply chain with bidirectional relationships is reproduced, employing simulation to demonstrate the impact of information on the generation of the bullwhip effect. In doing so, this study intends to provide a simple didactic tool to support teaching in supply chain management. As a result of the simulations, it was possible to demonstrate the occurrence of the bullwhip effect and how information sharing can diminish it.
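
    The bullwhip mechanism can be reproduced with a toy serial supply chain in Python; the order policy, lead time and demand process below are illustrative assumptions rather than the paper's object-oriented model.

```python
import numpy as np

def beer_game(weeks=52, echelons=4, lead=2, target=20.0, seed=1):
    """Toy serial supply chain (retailer -> wholesaler -> distributor ->
    factory; the end customer is the demand process). Each echelon
    naively orders the demand it just saw plus the gap to a target
    inventory; negative inventory means backorders."""
    rng = np.random.default_rng(seed)
    demand = np.clip(rng.normal(10.0, 2.0, weeks), 0.0, None)
    inventory = np.full(echelons, target)
    pipeline = [[0.0] * lead for _ in range(echelons)]  # shipments in transit
    orders = np.zeros((weeks, echelons))
    for t in range(weeks):
        seen = demand[t]
        for e in range(echelons):
            inventory[e] += pipeline[e].pop(0)   # receive delayed shipment
            inventory[e] -= seen                 # ship what downstream asked for
            order = max(0.0, seen + (target - inventory[e]))
            orders[t, e] = order
            pipeline[e].append(order)            # arrives after `lead` weeks
            seen = order                         # the next echelon sees this
    return demand, orders

demand, orders = beer_game()
print("demand variance:", demand.var().round(1))
print("factory order variance:", orders[:, -1].var().round(1))
```

    The order variance at the factory end comes out many times the end-customer demand variance, which is exactly the amplification the game is designed to demonstrate; sharing the end-customer demand with all echelons removes most of it.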

  8. Model Driven Development of Simulation Models : Defining and Transforming Conceptual Models into Simulation Models by Using Metamodels and Model Transformations

    NARCIS (Netherlands)

    Küçükkeçeci Çetinkaya, D.

    2013-01-01

    Modeling and simulation (M&S) is an effective method for analyzing and designing systems and it is of interest to scientists and engineers from all disciplines. This thesis proposes the application of a model driven software development approach throughout the whole set of M&S activities and it

  9. Communicating Insights from Complex Simulation Models: A Gaming Approach.

    Science.gov (United States)

    Vennix, Jac A. M.; Geurts, Jac L. A.

    1987-01-01

    Describes design principles followed in developing an interactive microcomputer-based simulation to study financial and economic aspects of the Dutch social security system. The main goals are to improve participants' insights into the formal simulation model, and to improve policy development skills. Plans for future research are also discussed.

  10. Improving hydrological simulations by incorporating GRACE data for model calibration

    Science.gov (United States)

    Bai, Peng; Liu, Xiaomang; Liu, Changming

    2018-02-01

    Hydrological model parameters are typically calibrated by observed streamflow data. This calibration strategy is questioned when the simulated hydrological variables of interest are not limited to streamflow. Well-performed streamflow simulations do not guarantee the reliable reproduction of other hydrological variables. One of the reasons is that hydrological model parameters are not reasonably identified. The Gravity Recovery and Climate Experiment (GRACE)-derived total water storage change (TWSC) data provide an opportunity to constrain hydrological model parameterizations in combination with streamflow observations. In this study, a multi-objective calibration scheme based on GRACE-derived TWSC and streamflow observations was compared with the traditional single-objective calibration scheme based on only streamflow simulations. Two hydrological models were employed on 22 catchments in China with different climatic conditions. The model evaluations were performed using observed streamflows, GRACE-derived TWSC, and actual evapotranspiration (ET) estimates from flux towers and from the water balance approach. Results showed that the multi-objective calibration scheme provided more reliable TWSC and ET simulations without significant deterioration in the accuracy of streamflow simulations than the single-objective calibration. The improvement in TWSC and ET simulations was more significant in relatively dry catchments than in relatively wet catchments. In addition, hydrological models calibrated using GRACE-derived TWSC data alone cannot obtain accurate runoff simulations in ungauged catchments. This study highlights the importance of including additional constraints in addition to streamflow observations to improve performances of hydrological models.
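
    A minimal sketch of the multi-objective idea, assuming a hypothetical model(params) that returns simulated streamflow and TWSC series on the observation time steps (the paper's actual weighting and efficiency metric may differ):

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency; 1 is a perfect fit."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def calibration_objective(params, model, q_obs, twsc_obs, w=0.5):
    """Weighted-sum target combining streamflow (q) and GRACE-derived
    total water storage change (TWSC); returns a value to minimise."""
    q_sim, twsc_sim = model(params)
    return -(w * nse(q_sim, q_obs) + (1.0 - w) * nse(twsc_sim, twsc_obs))
```

    Any global optimiser, for example scipy.optimize.differential_evolution, can then search the parameter space; setting w = 1 recovers the traditional streamflow-only calibration the study uses as its baseline.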

  11. Simulation and modeling of turbulent flows

    CERN Document Server

    Gatski, Thomas B; Lumley, John L

    1996-01-01

    This book provides students and researchers in fluid engineering with an up-to-date overview of turbulent flow research in the areas of simulation and modeling. A key element of the book is the systematic, rational development of turbulence closure models and related aspects of modern turbulent flow theory and prediction. Starting with a review of the spectral dynamics of homogenous and inhomogeneous turbulent flows, succeeding chapters deal with numerical simulation techniques, renormalization group methods and turbulent closure modeling. Each chapter is authored by recognized leaders in their respective fields, and each provides a thorough and cohesive treatment of the subject.

  12. Dynamic modeling and simulation of wind turbines

    International Nuclear Information System (INIS)

    Ghafari Seadat, M.H.; Kheradmand Keysami, M.; Lari, H.R.

    2002-01-01

    Using wind energy to generate electricity in wind turbines is a good way of using renewable energies and can also help to protect the environment. The main objective of this paper is the dynamic modeling, by the energy method, and computer-aided simulation of a wind turbine. The equations of motion are derived for simulating the wind turbine system, and the behavior of the system is then obtained by solving the equations. For the simulation, the turbine is considered with a three-blade rotor facing the wind, an induction generator connected to the network, and constant rotational speed. Every part of the wind turbine must be modelled for the simulation; the main parts are the blades, the gearbox, the shafts and the generator

  13. Hybrid simulation models of production networks

    CERN Document Server

    Kouikoglou, Vassilis S

    2001-01-01

    This book is concerned with a most important area of industrial production, that of analysis and optimization of production lines and networks using discrete-event models and simulation. The book introduces a novel approach that combines analytic models and discrete-event simulation. Unlike conventional piece-by-piece simulation, this method observes a reduced number of events between which the evolution of the system is tracked analytically. Using this hybrid approach, several models are developed for the analysis of production lines and networks. The hybrid approach combines speed and accuracy for exceptional analysis of most practical situations. A number of optimization problems, involving buffer design, workforce planning, and production control, are solved through the use of hybrid models.

  14. The behaviour of adaptive boneremodeling simulation models

    NARCIS (Netherlands)

    Weinans, H.; Huiskes, R.; Grootenboer, H.J.

    1992-01-01

    The process of adaptive bone remodeling can be described mathematically and simulated in a computer model, integrated with the finite element method. In the model discussed here, cortical and trabecular bone are described as continuous materials with variable density. The remodeling rule applied to
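
    The record breaks off at the remodeling rule; for concreteness, here is a sketch of a strain-adaptive density rule of the Huiskes type commonly used in this literature (an assumption for illustration, not necessarily the exact rule of the paper, and with illustrative constants). In the full model the strain energy density would be recomputed by the finite element solver after every step; here it is held fixed for brevity.

```python
import numpy as np

def remodel_step(density, sed, k=0.004, B=1.0, dt=1.0,
                 rho_min=0.01, rho_max=1.74):
    """One explicit time step of d(rho)/dt = B * (U / rho - k), where
    U is the strain energy density from a finite element solution and
    k is the homeostatic stimulus. Density is clamped to bounds."""
    stimulus = sed / density - k
    return np.clip(density + dt * B * stimulus, rho_min, rho_max)

# Elements with above-reference stimulus densify; the others resorb.
rho = np.array([0.5, 0.5, 0.5])
U = np.array([0.004, 0.002, 0.001])   # illustrative SED values
for _ in range(100):
    rho = remodel_step(rho, U)
print(rho)                            # converges towards U / k
```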

  15. Analytical system dynamics modeling and simulation

    CERN Document Server

    Fabien, Brian C

    2008-01-01

    This book offering a modeling technique based on Lagrange's energy method includes 125 worked examples. Using this technique enables one to model and simulate systems as diverse as a six-link, closed-loop mechanism or a transistor power amplifier.

  16. Equivalent drawbead model in finite element simulations

    NARCIS (Netherlands)

    Carleer, Bart D.; Carleer, B.D.; Meinders, Vincent T.; Huetink, Han; Lee, J.K.; Kinzel, G.L.; Wagoner, R.

    1996-01-01

    In 3D simulations of the deep drawing process the drawbead geometries are seldom included. Therefore equivalent drawbeads are used. In order to investigate the drawbead behaviour a 2D plane strain finite element model was used. For verification of this model experiments were performed. The analyses

  17. A simulation model for football championships

    NARCIS (Netherlands)

    Koning, RH; Koolhaas, M; Renes, G; Ridder, G

    2003-01-01

    In this paper we discuss a simulation/probability model that identifies the team that is most likely to win a tournament. The model can also be used to answer other questions like 'which team had a lucky draw?' or 'what is the probability that two teams meet at some moment in the tournament?' Input

  19. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  20. Simulation studies of protein-induced bilayer deformations, and lipid-induced protein tilting, on a mesoscopic model for lipid bilayers with embedded proteins

    DEFF Research Database (Denmark)

    Venturoli, M.; Smit, B.; Sperotto, Maria Maddalena

    2005-01-01

    Biological membranes are complex and highly cooperative structures. To relate biomembrane structure to their biological function it is often necessary to consider simpler systems. Lipid bilayers composed of one or two lipid species, and with embedded proteins, provide a model system for biological membranes. Here we present a mesoscopic model for lipid bilayers with embedded proteins, which we have studied with the help of the dissipative particle dynamics simulation technique. Because hydrophobic matching is believed to be one of the main physical mechanisms regulating lipid-protein interactions in membranes, we considered proteins of different hydrophobic lengths (as well as different sizes). We studied the cooperative behavior of the lipid-protein system at mesoscopic time- and lengthscales. In particular, we correlated in a systematic way the protein-induced bilayer perturbation, and the lipid...

  1. Study of cellular retention of HMPAO and ECD in a model simulating the blood-brain barrier

    Energy Technology Data Exchange (ETDEWEB)

    Ponce, C.; Pittet, N.; Slosman, D.O. [HUG, 1211 Geneve 14, (Switzerland)

    1997-12-31

    HMPAO and ECD are two technetium-labelled lipophilic agents used clinically in cerebral perfusion imaging. These molecules cross cell membranes and are retained inside the cell after being converted to a hydrophilic form. The aim of this study is to establish the distribution of this retention at the level of the blood-brain barrier (BBB) and nerve cells. The incorporation of HMPAO or ECD was studied on a co-culture model simulating the BBB, by means of a single T84 cell layer with tight junctions, separated from a layer of U373 astrocyte cells. Cell quality and tight-junction permeability were evaluated by the cellular retention of 111-indium chloride and by the para-cellular diffusion of ¹⁴C-mannitol,d-1. The values reported below were obtained at 180 minutes, when the radiotracers were added near the T84 layer. Cell quality is validated by the low cellular retention of indium chloride (2.3 ± 0.3 μg⁻¹ for the T84 cells and 8.2 ± 5.8 μg⁻¹ for the U373 cells). The activity of ¹⁴C-mannitol,d-1 diminishes by 23 ± 5 % in the added compartment. The retention of ECD by the U373 cells is significantly higher (20.7 ± 4.5 μg⁻¹) than that of the T84 cells (2.9 ± 0.2 μg⁻¹). For HMPAO a non-significant tendency could be observed (49 ± 34 μg⁻¹ for the U373 cells and 38 ± 25 μg⁻¹ for the T84 cells). The results of cellular retention of indium, HMPAO or ECD when added near the U373 layer are not significantly different. In conclusion, independently of the side exposed to the radiotracers, one observes an enhanced incorporation in the U373 cells. Taken together, these results represent additional arguments in favour of a specific cellular incorporation of the radiotracers, independent of BBB permeability.

  2. Landscape Modelling and Simulation Using Spatial Data

    Directory of Open Access Journals (Sweden)

    Amjed Naser Mohsin AL-Hameedawi

    2017-08-01

    Full Text Available In this paper a procedure is presented for generating a spatial model of a landscape adapted to realistic simulation. The procedure is based on combining spatial data and field measurements with computer graphics produced using the Blender software. It is then possible to build a 3D simulation based on VIS ALL packages. The objective was to make a model utilising GIS, including inputs to the feature attribute data. These efforts concentrated on assembling an adequate spatial prototype, circumscribing the facilitation scheme and outlining the intended framework; the eventual result was used in simulation form. The procedure performed covers not only data gathering, fieldwork and prototype preparation, but extends to a new method for producing the corresponding 3D simulation mapping, which allows decision makers as well as investors to obtain a durable, independent navigation system for geoscience applications.

  3. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together...... to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal...... and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work...

  4. A queuing model for road traffic simulation

    International Nuclear Information System (INIS)

    Guerrouahane, N.; Aissani, D.; Bouallouche-Medjkoune, L.; Farhi, N.

    2015-01-01

    We present in this article a stochastic queuing model for road traffic. The model is based on the M/G/c/c state-dependent queuing model, and is inspired by the deterministic Godunov scheme for road traffic simulation. We first propose a variant of the M/G/c/c state-dependent model that works with density-flow fundamental diagrams rather than density-speed relationships. We then extend this model in order to consider upstream traffic demand as well as downstream traffic supply. Finally, we show how to model a whole road by concatenating road sections as in the deterministic Godunov scheme
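
    For reference, the deterministic Godunov scheme that inspires the queuing model can be sketched as a demand/supply flux update on road cells; the Greenshields fundamental diagram and all numerical values below are illustrative assumptions, not taken from the article.

```python
import numpy as np

def flow(rho, vmax=30.0, rho_max=0.2):
    """Greenshields density-flow fundamental diagram q(rho) [veh/s]."""
    return vmax * rho * (1.0 - rho / rho_max)

def godunov_step(rho, dt, dx, demand_in, supply_out,
                 vmax=30.0, rho_max=0.2):
    """One Godunov update of the LWR model on a row of road cells,
    with upstream-demand and downstream-supply boundary conditions."""
    rho_c = rho_max / 2.0                     # critical density (Greenshields)
    q, qmax = flow(rho, vmax, rho_max), flow(rho_c, vmax, rho_max)
    demand = np.where(rho < rho_c, q, qmax)   # what each cell can send
    supply = np.where(rho > rho_c, q, qmax)   # what each cell can receive
    flux = np.minimum(np.append(demand_in, demand),   # flux at interfaces
                      np.append(supply, supply_out))
    return rho + dt / dx * (flux[:-1] - flux[1:])

rho = np.full(50, 0.05)                       # density [veh/m], 25 m cells
rho[20:25] = 0.18                             # an initial jam
for _ in range(200):                          # CFL: vmax*dt/dx = 0.6 < 1
    rho = godunov_step(rho, dt=0.5, dx=25.0, demand_in=0.3, supply_out=1e9)
```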

  5. Quantitative interface models for simulating microstructure evolution

    International Nuclear Information System (INIS)

    Zhu, J.Z.; Wang, T.; Zhou, S.H.; Liu, Z.K.; Chen, L.Q.

    2004-01-01

    To quantitatively simulate microstructural evolution in real systems, we investigated three different interface models: a sharp-interface model implemented by the software DICTRA and two diffuse-interface models which use either physical order parameters or artificial order parameters. A particular example is considered, the diffusion-controlled growth of a γ ' precipitate in a supersaturated γ matrix in Ni-Al binary alloys. All three models use the thermodynamic and kinetic parameters from the same databases. The temporal evolution profiles of composition from different models are shown to agree with each other. The focus is on examining the advantages and disadvantages of each model as applied to microstructure evolution in alloys

  6. Dosimetry study of [I-131]- and [I-125]-meta-iodobenzylguanidine in a simulating model for neuroblastoma metastasis.

    Science.gov (United States)

    Roa, W H; Yaremko, B; McEwan, A; Amanie, J; Yee, D; Cho, J; McQuarrie, S; Riauka, T; Sloboda, R; Wiebe, L; Loebenberg, R; Janicki, C

    2013-02-01

    The physical properties of I-131 may be suboptimal for the delivery of therapeutic radiation to bone marrow metastases, which are common in the natural history of neuroblastoma. In vitro and preliminary clinical studies have implied improved efficacy of I-125 relative to I-131 in certain clinical situations, although areas of uncertainty remain regarding intratumoral dosimetry. This prompted our study using human neuroblastoma multicellular spheroids as a model of metastasis. 3D dose calculations were made using voxel-based Medical Internal Radiation Dosimetry (MIRD) and dose-point-kernel (DPK) techniques. Dose distributions for I-131 and I-125 labeled mIBG were calculated for spheroids (metastases) of various sizes from 0.01 cm to 3 cm diameter, and the relative dose delivered to the tumors was compared for the same limiting dose to the bone marrow. Based on the same data, arguments were advanced based upon the principles of tumor control probability (TCP) to emphasize the potential theoretical utility of I-125 over I-131 in specific clinical situations. I-125-mIBG can deliver a higher and more uniform dose to tumors compared to I-131 mIBG without increasing the dose to the bone marrow. Depending on the tumor size and biological half-life, the relative dose to tumors of less than 1 mm diameter can increase several-fold. TCP calculations indicate that tumor control increases with increasing administered activity, and that I-125 is more effective than I-131 for tumor diameters of 0.01 cm or less. This study suggests that I-125-mIBG is dosimetrically superior to I-131-mIBG therapy for small bone marrow metastases from neuroblastoma. It is logical to consider adding I-125-mIBG to I-131-mIBG in multi-modality therapy as these two isotopes could be complementary in terms of their cumulative dosimetry.
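
    The TCP argument can be illustrated with the standard Poisson TCP model (not necessarily the exact formulation used in the study); the clonogen numbers, doses and radiosensitivity below are hypothetical.

```python
import numpy as np

def poisson_tcp(dose, clonogens, alpha=0.35):
    """Poisson tumour control probability over dose voxels:
    TCP = prod_i exp(-N_i * exp(-alpha * D_i))."""
    surviving = clonogens * np.exp(-alpha * dose)
    return float(np.exp(-surviving.sum()))

# Same mean dose, different uniformity: the cold half dominates TCP.
n = np.full(100, 1e4)                            # clonogens per voxel
uniform = np.full(100, 40.0)                     # Gy
hetero = np.concatenate([np.full(50, 30.0), np.full(50, 50.0)])
print(poisson_tcp(uniform, n), poisson_tcp(hetero, n))
```

    With these hypothetical numbers the uniform distribution controls the tumour far more often than the heterogeneous one, which is the qualitative reason a higher and more uniform I-125 dose can outperform I-131 at the same limiting marrow dose.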

  7. Impact of reactive settler models on simulated WWTP performance.

    Science.gov (United States)

    Gernaey, K V; Jeppsson, U; Batstone, D J; Ingildsen, P

    2006-01-01

    Including a reactive settler model in a wastewater treatment plant model allows representation of the biological reactions taking place in the sludge blanket in the settler, something that is neglected in many simulation studies. The idea of including a reactive settler model is investigated for an ASM1 case study. Simulations with a whole plant model including the non-reactive Takács settler model are used as a reference, and are compared to simulation results considering two reactive settler models. The first is a return sludge model block removing oxygen and a user-defined fraction of nitrate, combined with a non-reactive Takács settler. The second is a fully reactive ASM1 Takács settler model. Simulations with the ASM1 reactive settler model predicted a 15.3% and 7.4% improvement of the simulated N removal performance, for constant (steady-state) and dynamic influent conditions respectively. The oxygen/nitrate return sludge model block predicts a 10% improvement of N removal performance under dynamic conditions, and might be the better modelling option for ASM1 plants: it is computationally more efficient and it will not overrate the importance of decay processes in the settler.

  8. Impact of reactive settler models on simulated WWTP performance

    DEFF Research Database (Denmark)

    Gernaey, Krist; Jeppsson, Ulf; Batstone, Damien J.

    2006-01-01

    Including a reactive settler model in a wastewater treatment plant model allows representation of the biological reactions taking place in the sludge blanket in the settler, something that is neglected in many simulation studies. The idea of including a reactive settler model is investigated for ...

  9. Simulating tidal turbines with mesh optimisation and RANS turbulence models

    NARCIS (Netherlands)

    Abolghasemi, A.; Piggott, M.D.; Spinneken, J.; Vire, A.; Cotter, C.J.

    2015-01-01

    A versatile numerical model for the simulation of flow past horizontal axis tidal turbines has been developed. Currently most large-scale marine models employed to study marine energy use the shallow water equations and therefore can fail to account for important turbulent physics. The model

  10. PRELIMINARY MULTIDOMAIN MODELLING AND SIMULATION ...

    African Journals Online (AJOL)

    Renewable energy sources have gained much attention due to the recent energy crisis and the urge to get clean energy. Among the main options being studied, wind energy is a strong contender because of its reliability due to the maturity of the technology, good infrastructure and relative cost competitiveness. It is also ...

  11. A fuzzy-stochastic simulation-optimization model for planning electric power systems with considering peak-electricity demand: A case study of Qingdao, China

    International Nuclear Information System (INIS)

    Yu, L.; Li, Y.P.; Huang, G.H.

    2016-01-01

    In this study, an FSSOM (fuzzy-stochastic simulation-optimization model) is developed for planning EPS (electric power systems) while considering peak demand under uncertainty. FSSOM integrates the techniques of SVR (support vector regression), Monte Carlo simulation, and FICMP (fractile interval chance-constrained mixed-integer programming). In FSSOM, uncertainties expressed as fuzzy boundary intervals and random variables can be effectively tackled. In addition, an SVR-coupled Monte Carlo technique is used for predicting the peak electricity demand. The FSSOM is applied to planning the EPS of the City of Qingdao, China. Solutions for the electricity generation pattern to satisfy the city's peak demand under different probability levels and p-necessity levels have been generated. Results reveal that the city's electricity supply from renewable energies would be low (occupying only 8.3% of the total electricity generation). Compared with an energy model that does not consider peak demand, the FSSOM can better guarantee the city's power supply and thus reduce the risk of system failure. The findings can help decision makers not only adjust the existing electricity generation/supply pattern but also coordinate the conflicting interactions among system cost, energy supply security, pollutant mitigation, and constraint-violation risk. - Highlights: • FSSOM (fuzzy-stochastic simulation-optimization model) is developed for planning EPS. • It can address uncertainties as fuzzy boundary intervals and random variables. • FSSOM can satisfy peak electricity demand and optimize power allocation. • Solutions under different probability levels and p-necessity levels are analyzed. • Results create a tradeoff between system cost and peak-demand constraint-violation risk.
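
    A stripped-down sketch of the SVR-plus-Monte-Carlo ingredient of FSSOM, with entirely hypothetical demand drivers and values; the fuzzy and chance-constrained optimization layers are omitted.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Hypothetical history: peak demand (GW) vs. GDP and population indices
X = np.column_stack([np.linspace(1.0, 2.0, 20),       # GDP index
                     np.linspace(1.0, 1.3, 20)])      # population index
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0.0, 0.1, 20)

svr = SVR(kernel="rbf", C=100.0).fit(X, y)            # peak-demand predictor

# Monte Carlo over uncertain future drivers -> distribution of peak demand
future = np.column_stack([rng.normal(2.1, 0.05, 1000),
                          rng.normal(1.32, 0.02, 1000)])
peaks = svr.predict(future)
print("95th percentile of peak demand:",
      round(float(np.percentile(peaks, 95)), 2), "GW")
```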

  12. Systems modeling and simulation applications for critical care medicine.

    Science.gov (United States)

    Dong, Yue; Chbat, Nicolas W; Gupta, Ashish; Hadzikadic, Mirsad; Gajic, Ognjen

    2012-06-15

    Critical care delivery is a complex, expensive, error prone, medical specialty and remains the focal point of major improvement efforts in healthcare delivery. Various modeling and simulation techniques offer unique opportunities to better understand the interactions between clinical physiology and care delivery. The novel insights gained from the systems perspective can then be used to develop and test new treatment strategies and make critical care delivery more efficient and effective. However, modeling and simulation applications in critical care remain underutilized. This article provides an overview of major computer-based simulation techniques as applied to critical care medicine. We provide three application examples of different simulation techniques, including a) pathophysiological model of acute lung injury, b) process modeling of critical care delivery, and c) an agent-based model to study interaction between pathophysiology and healthcare delivery. Finally, we identify certain challenges to, and opportunities for, future research in the area.

  13. Simulation of finite size effects of the fiber bundle model

    Science.gov (United States)

    Hao, Da-Peng; Tang, Gang; Xun, Zhi-Peng; Xia, Hui; Han, Kui

    2018-01-01

    In theory, the macroscopic fracture of materials should correspond with the thermodynamic limit of the fiber bundle model. However, the simulation of a fiber bundle model of infinite size is unrealistic. To study the finite size effects of the fiber bundle model, fiber bundle models of various sizes are simulated in detail. The effects of system size on the constitutive behavior, critical stress, maximum avalanche size, avalanche size distribution, and number of load-increase steps are explored. The simulation results imply that there is no characteristic size or cutoff size for the macroscopic mechanical and statistical properties of the model. The constitutive curves near macroscopic failure for various system sizes collapse well under a simple scaling relationship. Simultaneously, the introduction of a simple extrapolation method facilitates obtaining more accurate simulation results in the large-size limit, which is better for comparison with theoretical results.
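
    For readers unfamiliar with the model, a minimal equal-load-sharing fiber bundle simulation makes the finite-size question concrete: with i.i.d. uniform strength thresholds on [0, 1], the critical stress tends to 1/4 in the thermodynamic limit, and repeating the experiment at several system sizes shows the finite-size drift. This is a generic textbook sketch, not the authors' code.

      # Equal-load-sharing fiber bundle model: the bundle strength under
      # quasi-static loading is max_i t_(i) * (N - i) / N over the sorted
      # thresholds t_(i), since N - i fibers are intact when fiber i fails.
      import numpy as np

      rng = np.random.default_rng(42)

      def critical_stress(n_fibers):
          thresholds = np.sort(rng.uniform(0.0, 1.0, n_fibers))
          intact = np.arange(n_fibers, 0, -1)   # fibers intact as each one fails
          return np.max(thresholds * intact) / n_fibers

      for n in (100, 1_000, 10_000, 100_000):
          sigma_c = np.mean([critical_stress(n) for _ in range(50)])
          print(f"N = {n:>6}: mean critical stress = {sigma_c:.4f}")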

  14. Predictive accuracy of risk factors and markers: a simulation study of the effect of novel markers on different performance measures for logistic regression models.

    Science.gov (United States)

    Austin, Peter C; Steyerberg, Ewout W

    2013-02-20

    The change in c-statistic is frequently used to summarize the change in predictive accuracy when a novel risk factor is added to an existing logistic regression model. We explored the relationship between the absolute change in the c-statistic, Brier score, generalized R², and the discrimination slope when a risk factor was added to an existing model in an extensive set of Monte Carlo simulations. The increase in model accuracy due to the inclusion of a novel marker was proportional to both the prevalence of the marker and to the odds ratio relating the marker to the outcome but inversely proportional to the accuracy of the logistic regression model with the marker omitted. We observed greater improvements in model accuracy when the novel risk factor or marker was uncorrelated with the existing predictor variable compared with when the risk factor has a positive correlation with the existing predictor variable. We illustrated these findings by using a study on mortality prediction in patients hospitalized with heart failure. In conclusion, the increase in predictive accuracy by adding a marker should be considered in the context of the accuracy of the initial model. Copyright © 2012 John Wiley & Sons, Ltd.
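
    A toy version of such a Monte Carlo experiment is easy to set up: simulate an outcome from a known logistic model, fit models with and without the novel marker, and compare c-statistics. The sample size and effect sizes below are arbitrary illustrative choices, not the settings used in the paper.

      # Simulate data, then measure the change in c-statistic (AUC) when
      # an uncorrelated novel marker is added to a one-predictor model.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(1)
      n = 5000
      x = rng.normal(size=n)                   # existing risk factor
      marker = rng.normal(size=n)              # novel marker, uncorrelated with x
      logit = -1.0 + 0.8 * x + 0.5 * marker    # true model on the log-odds scale
      y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

      base = LogisticRegression().fit(x.reshape(-1, 1), y)
      full = LogisticRegression().fit(np.column_stack([x, marker]), y)

      auc_base = roc_auc_score(y, base.predict_proba(x.reshape(-1, 1))[:, 1])
      auc_full = roc_auc_score(y, full.predict_proba(np.column_stack([x, marker]))[:, 1])
      print(f"c-statistic {auc_base:.3f} -> {auc_full:.3f} "
            f"(change {auc_full - auc_base:+.3f})")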

  15. Simulation, calibration and validation protocols for the model 3D-CMCC-CNR-FEM: a case study in the Bonis’ watershed (Calabria, Italy)

    Directory of Open Access Journals (Sweden)

    Collalti A

    2017-08-01

    Simulation, calibration and validation protocols for the model 3D-CMCC-CNR-FEM: a case study in the Bonis’ watershed (Calabria, Italy). At present, the climate change issue is perhaps the greatest threat affecting people and the environment. Forest ecosystems have a key role in the mitigation of climate change. In this context, predicting the evolution and growth dynamics of forests, including carbon and water fluxes and their relation to forest management, has become a primary objective. The present study aims at defining a protocol for data collection and the workflow for using the 3D-CMCC-CNR-FEM model in a small mountain watershed in the Calabria region. Within this work we synergistically integrate data coming from different methods (e.g., LiDAR, eddy covariance and sample areas) to predict forest dynamics (growth, carbon and water fluxes). Carbon and water fluxes will be simulated considering also the effects of forest management.

  16. The study of simulated microgravity effects on cardiac myocytes using a 3D heart tissue-equivalent model encapsulated in alginate microbeads

    Science.gov (United States)

    Li, Yu; Tian, Weiming; Zheng, Hongxia; Yu, Lei; Zhang, Yao; Han, Fengtong

    Long duration spaceflight may increase the risk and occurrence of potentially life-threatening heart rhythm disturbances associated with alterations of cardiac myocytes, myocyte connectivity, and extracellular matrix resulting from prolonged exposure to zero- or low-gravity. For understanding the effects of microgravity, either traditional 2-dimensional (2D) cell cultures of adherent cell populations or animal models have typically been used. The 2D in vitro systems do not allow assessment of the dynamic effects of intercellular interactions within tissues, whereas potentially confounding factors tend to be overlooked in animal models. Therefore a novel cell culture model representative of the cellular interactions and extracellular matrix present in tissues needs to be used. In this study, a 3D multi-cellular heart tissue-equivalent model was constructed by culturing neonatal rat myocardial cells in alginate microbeads for one week. With this model we studied the simulated microgravity effects on myocardiocytes by incubating the microbeads in a NASA rotary cell culture system at a rate of 15 rpm. Cytoskeletal changes, mitochondrial membrane potential and reactive oxygen production were studied after incubating for 24 h, 48 h and 72 h respectively. Compared with the 3D ground-culture group, significant cytoskeleton depolymerization characterized by pseudo-feet disappearance, a significant increase of mitochondrial membrane potential, and greater reactive oxygen production were observed after incubating for 24 h, 48 h, and 72 h in the NASA system. The beating rate of the 3D heart tissue-equivalent decreased significantly at 24 h, and all the samples stopped beating after 48 h of incubation, while the beating rate of the control group did not change. This study indicated that microgravity affects both the structure and function of myocardial cells. Our results suggest that a 3D heart tissue-equivalent model may be better for attempting to elucidate the microgravity effects on myocardiocytes in

  17. Simulation-Based Internal Models for Safer Robots

    Directory of Open Access Journals (Sweden)

    Christian Blum

    2018-01-01

    In this paper, we explore the potential of mobile robots with simulation-based internal models for safety in highly dynamic environments. We propose a robot with a simulation of itself, other dynamic actors and its environment, inside itself. Operating in real time, this simulation-based internal model is able to look ahead and predict the consequences of both the robot’s own actions and those of the other dynamic actors in its vicinity. Hence, the robot continuously modifies its own actions in order to actively maintain its own safety while also achieving its goal. Inspired by the problem of how mobile robots could move quickly and safely through crowds of moving humans, we present experimental results which compare the performance of our internal simulation-based controller with a purely reactive approach as a proof-of-concept study for the practical use of simulation-based internal models.
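
    The core loop of a simulation-based internal model can be reduced to a few lines: before acting, internally roll each candidate action forward against predicted pedestrian motion and keep only collision-free, goal-directed choices. The geometry, horizon and action set below are invented for illustration; this is not the controller from the paper.

      # Lookahead controller sketch: simulate candidate headings against a
      # predicted pedestrian trajectory and pick the safe one nearest the goal.
      import numpy as np

      GOAL = np.array([10.0, 0.0])
      SAFE_DIST = 1.0

      def rollout_is_safe(robot, velocity, pedestrian, ped_velocity,
                          horizon=10, dt=0.2):
          """Internal simulation: predict both trajectories, check clearance."""
          r, p = robot.copy(), pedestrian.copy()
          for _ in range(horizon):
              r += velocity * dt
              p += ped_velocity * dt
              if np.linalg.norm(r - p) < SAFE_DIST:
                  return False
          return True

      def choose_action(robot, pedestrian, ped_velocity, speed=1.0):
          headings = np.linspace(-np.pi / 2, np.pi / 2, 9)   # candidate actions
          best, best_cost = np.zeros(2), np.inf              # zero vector = stop
          for h in headings:
              v = speed * np.array([np.cos(h), np.sin(h)])
              if rollout_is_safe(robot, v, pedestrian, ped_velocity):
                  cost = np.linalg.norm(robot + v - GOAL)    # progress to goal
                  if cost < best_cost:
                      best, best_cost = v, cost
          return best

      print(choose_action(np.zeros(2), np.array([3.0, 0.0]), np.array([-0.5, 0.0])))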

  18. Beyond Modeling: All-Atom Olfactory Receptor Model Simulations

    Directory of Open Access Journals (Sweden)

    Peter C Lai

    2012-05-01

    Full Text Available Olfactory receptors (ORs are a type of GTP-binding protein-coupled receptor (GPCR. These receptors are responsible for mediating the sense of smell through their interaction with odor ligands. OR-odorant interactions marks the first step in the process that leads to olfaction. Computational studies on model OR structures can validate experimental functional studies as well as generate focused and novel hypotheses for further bench investigation by providing a view of these interactions at the molecular level. Here we have shown the specific advantages of simulating the dynamic environment that is associated with OR-odorant interactions. We present a rigorous methodology that ranges from the creation of a computationally-derived model of an olfactory receptor to simulating the interactions between an OR and an odorant molecule. Given the ubiquitous occurrence of GPCRs in the membranes of cells, we anticipate that our OR-developed methodology will serve as a model for the computational structural biology of all GPCRs.

  19. Poly(ethylene glycol) (PEG) in a Polyethylene (PE) Framework: A Simple Model for Simulation Studies of a Soluble Polymer in an Open Framework.

    Science.gov (United States)

    Xie, Liangxu; Chan, Kwong-Yu; Quirke, Nick

    2017-10-24

    Canonical molecular dynamics simulations are performed to investigate the behavior of single-chain and multiple-chain poly(ethylene glycol) (PEG) contained within a cubic framework spanned by polyethylene (PE) chains. This simple model is the first of its kind to study the chemical physics of polymer-threaded organic frameworks, which are materials with potential applications in catalysis and separation processes. For a single-chain 9-mer, 14-mer, and 18-mer in a small framework, the PEG will interact strongly with the framework and assume a more linear chain geometry with an increased radius of gyration Rg compared to that of a large framework. The interaction between PEG and the framework decreases with increasing mesh size in both vacuum and water. In the limit of a framework with an infinitely large cavity (infinitely long linkers), PEG behavior approaches simulation results without a framework. The solvation of PEG is simulated by adding explicit TIP3P water molecules to a 6-chain PEG 14-mer aggregate confined in a framework. The 14-mer chains are readily solvated and leach out of a large 2.6 nm mesh framework. There are fewer water-PEG interactions in a small 1.0 nm mesh framework, as indicated by a smaller number of hydrogen bonds. The PEG aggregate, however, still partially dissolves but is retained within the 1.0 nm framework. The preliminary results illustrate the effectiveness of the simple model in studying polymer-threaded framework materials and in optimizing polymer or framework parameters for high performance.
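
    Since the analysis hinges on the radius of gyration, a small helper shows the quantity being tracked: for equal-mass beads, Rg is the root-mean-square distance of the beads from their centre of mass. The coordinates in the example are made up for illustration; the equal-mass assumption is a simplification.

      # Radius of gyration for a set of bead coordinates (equal masses assumed).
      import numpy as np

      def radius_of_gyration(coords):
          """coords: (n_beads, 3) array of bead positions (nm)."""
          centered = coords - coords.mean(axis=0)
          return np.sqrt((centered ** 2).sum(axis=1).mean())

      # Example: a fully extended 14-bead chain vs. the same beads pulled
      # toward a coil; the extended chain has the larger Rg.
      chain = np.column_stack([np.arange(14) * 0.35, np.zeros(14), np.zeros(14)])
      coil = chain * 0.4 + np.random.default_rng(3).normal(0, 0.1, chain.shape)
      print(radius_of_gyration(chain), radius_of_gyration(coil))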

  20. Optimal model-based deficit irrigation scheduling using AquaCrop: a simulation study with cotton, potato and tomato

    DEFF Research Database (Denmark)

    Linker, Raphael; Ioslovich, Ilya; Sylaios, Georgios

    2016-01-01

    variables are the irrigation amounts for each day of the season. The objective function is the expected yield calculated with the use of a model. In the present work we solved this optimization problem for three crops modeled with AquaCrop. This optimization problem is non-trivial due to the non-smooth behavior of the objective function and the fact that it involves multiple integer variables. We developed an optimization scheme for generating sub-optimal irrigation schedules that take implicitly into account the response of the crop to water stress, and used these as initial guesses for a full...

  1. GIS and agent based spatial-temporal simulation modeling for assessing tourism social carrying capacity: a study on Mount Emei scenic area, China

    Science.gov (United States)

    Zhang, Renjun

    2007-06-01

    Each scenic area can sustain a specific level of acceptance of tourist development and use, beyond which further development can result in socio-cultural deterioration or a decline in the quality of the experience gained by visitors. This specific level is called carrying capacity. Social carrying capacity can be defined as the maximum level of use (in terms of numbers and activities) that can be absorbed by an area without an unacceptable decline in the quality of experience of visitors and without an unacceptable adverse impact on the society of the area. It is difficult to assess the carrying capacity, because it is determined not only by the number of visitors, but also by the time, the type of recreation, the characteristics of each individual and the physical environment. The objective of this study is to build a spatial-temporal simulation model to simulate the spatial-temporal distribution of tourists. This model is a tourist spatial behaviors simulator (TSBS). Based on TSBS, the changes in each visitor's travel patterns, such as location, cost, and other state data, are recorded in a state table. By analyzing this table, the intensity of tourist use in any area can be calculated, and changes in the quality of the tourism experience can be quantified and analyzed. Based on this micro-simulation method, the social carrying capacity can therefore be assessed more accurately, monitored proactively and managed adaptively. In this paper, the carrying capacity of Mount Emei scenic area is analyzed as follows: the author selected the intensity of crowding as the monitoring indicator, regarding longer waiting times as indicating more crowding. TSBS was used to simulate the spatial-temporal distribution of tourists, and the average waiting time over all visitors was calculated. The author then assessed the social carrying capacity of Mount Emei scenic area and identified the key factors impacting social carrying capacity. The results show that the TSBS
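
    A drastically simplified sketch of the waiting-time indicator: agents arrive at a capacity-limited attraction, and the mean time spent queueing serves as the crowding measure that a simulator like TSBS aggregates per visitor. The arrival pattern, capacity and visit time below are invented numbers, not Mount Emei data.

      # Toy queueing indicator: mean wait as a crowding measure.
      import random

      random.seed(2)

      CAPACITY, VISIT_TIME = 30, 5            # people on site, minutes per visit
      waits, queue, on_site = [], [], []      # on_site holds departure times

      for minute in range(8 * 60):            # one opening day, minute by minute
          on_site = [t for t in on_site if t > minute]
          for _ in range(random.randint(0, 10)):       # new arrivals this minute
              queue.append(minute)
          while queue and len(on_site) < CAPACITY:     # admit while space remains
              waits.append(minute - queue.pop(0))
              on_site.append(minute + VISIT_TIME)

      print(f"mean wait {sum(waits) / len(waits):.1f} min over {len(waits)} visitors")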

  2. Kanban simulation model for production process optimization

    Directory of Open Access Journals (Sweden)

    Golchev Riste

    2015-01-01

    A long time has passed since the KANBAN system was established as an efficient method for coping with excessive inventory. Still, the possibilities for improving it through integration with other approaches should be investigated further. The basic research challenge of this paper is to present the benefits of KANBAN implementation supported by Discrete Event Simulation (DES). In that direction, the basics of the KANBAN system are presented first, with emphasis on information and material flow, together with a methodology for implementation of the KANBAN system. An analysis of combining simulation with this methodology is presented. The paper concludes with a practical example which shows that, through understanding the philosophy of the KANBAN implementation methodology and the simulation methodology, a simulation model can be created which can serve as a basis for a variety of experiments that can be conducted within a short period of time, resulting in production process optimization.
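
    As a hedged illustration of pairing kanban control with DES, the sketch below runs a bare-bones event-driven simulation of a single make-to-stock kanban loop and reports the fill rate as the card count grows. It assumes every freed card immediately triggers a replenishment with a fixed process time (i.e., unlimited parallel capacity), which is a deliberate simplification, not the paper's model.

      # Bare-bones DES of a kanban loop: WIP plus stock is capped by the cards.
      import heapq, random

      random.seed(7)

      def simulate(n_kanbans, horizon=10_000.0, process_time=1.0, demand_rate=0.8):
          """Return the fill rate of a make-to-stock stage capped by kanban cards."""
          stock = n_kanbans                  # one card attached to each stocked item
          events = [(random.expovariate(demand_rate), "demand")]
          served = lost = 0
          while events:
              t, kind = heapq.heappop(events)
              if t > horizon:
                  break
              if kind == "demand":
                  heapq.heappush(events, (t + random.expovariate(demand_rate), "demand"))
                  if stock > 0:
                      stock -= 1             # card freed -> replenishment order
                      heapq.heappush(events, (t + process_time, "produced"))
                      served += 1
                  else:
                      lost += 1              # stockout
              else:                          # "produced": item and card return to stock
                  stock += 1
          return served / (served + lost)

      for cards in (1, 2, 4, 8):
          print(f"{cards} kanban card(s) -> fill rate {simulate(cards):.3f}")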

  3. Vermont Yankee simulator BOP model upgrade

    International Nuclear Information System (INIS)

    Alejandro, R.; Udbinac, M.J.

    2006-01-01

    The Vermont Yankee simulator has undergone significant changes in the 20 years since the original order was placed. After the move from the original Unix to MS Windows environment, and upgrade to the latest version of SimPort, now called MASTER, the platform was set for an overhaul and replacement of major plant system models. Over a period of a few months, the VY simulator team, in partnership with WSC engineers, replaced outdated legacy models of the main steam, condenser, condensate, circulating water, feedwater and feedwater heaters, and main turbine and auxiliaries. The timing was ideal, as the plant was undergoing a power up-rate, so the opportunity was taken to replace the legacy models with industry-leading, true on-line object oriented graphical models. Due to the efficiency of design and ease of use of the MASTER tools, VY staff performed the majority of the modeling work themselves with great success, with only occasional assistance from WSC, in a relatively short time-period, despite having to maintain all of their 'regular' simulator maintenance responsibilities. This paper will provide a more detailed view of the VY simulator, including how it is used and how it has benefited from the enhancements and upgrades implemented during the project. (author)

  4. Broadcasting simulation case studies to the didactic classroom.

    Science.gov (United States)

    Kalmakis, Karen A; Cunningham, Helene; Lamoureux, Erin T; Ahmed, Elshaymaa M

    2010-01-01

    To explore the potential of using simulation in new ways, it is time to think "outside the lab." To do this, the authors expanded the use of case-study simulations by broadcasting them to classrooms where didactic content could be reinforced with simulation content. Advantages included students' active classroom engagement, simultaneously sharing simulations with many students, modeling students' thinking in clinical situations, and connecting theory to practice.

  5. Effect of Alternate Nostril Breathing Exercise on Experimentally Induced Anxiety in Healthy Volunteers Using the Simulated Public Speaking Model: A Randomized Controlled Pilot Study.

    Science.gov (United States)

    Kamath, Ashwin; Urval, Rathnakar P; Shenoy, Ashok K

    2017-01-01

    A randomized controlled pilot study was carried out to determine the effect of a 15-minute practice of alternate nostril breathing (ANB) exercise on experimentally induced anxiety using the simulated public speaking model in yoga-naïve healthy young adults. Thirty consenting medical students were equally divided into test and control groups. The test group performed the alternate nostril breathing exercise for 15 minutes, while the control group sat in a quiet room before participating in the simulated public speaking test (SPST). The Visual Analog Mood Scale and Self-Statements during Public Speaking scale were used to measure the mood state at different phases of the SPST. The psychometric scores of both groups were comparable at baseline. Repeated-measures ANOVA showed a significant effect of phase (p < 0.05), but group and gender did not have a statistically significant influence on the mean anxiety scores. However, the test group showed a trend towards lower mean scores for the anxiety factor when compared with the control group. Considering the limitations of this pilot study and the trend seen towards lower anxiety in the test group, alternate nostril breathing may have a potential anxiolytic effect in acute stressful situations. A study with a larger sample size is therefore warranted. This trial is registered with CTRI/2014/03/004460.

  6. TRANSFORM - TRANsient Simulation Framework of Reconfigurable Models

    Energy Technology Data Exchange (ETDEWEB)

    2017-09-01

    Existing development tools for early stage design and scoping of energy systems are often time consuming to use, proprietary, and do not contain the necessary functionality to model complete systems (i.e., controls, primary, and secondary systems) in a common platform. The TRANSFORM tool, based on the Modelica programming language, (1) provides a standardized, common simulation environment for early design of energy systems (i.e., power plants), (2) provides a library of baseline component modules to be assembled into full plant models using available geometry, design, and thermal-hydraulic data, (3) defines modeling conventions for interconnecting component models, and (4) establishes user interfaces and support tools to facilitate simulation development (i.e., configuration and parameterization), execution, and results display and capture.

  7. Sediment distribution study in the Gulf of Kachchh, India, from 3D hydrodynamic model simulation and satellite data

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.; Zhao, C.; Osawa, T.; Sugimori, Y.

    The model is based on the hydrostatic and Boussinesq approximations and uses a vertical double sigma coordinate with a step-like grid. In addition to the momentum and continuity equations, the model solves two transport equations for salinity and temperature and an equation of state...

  8. Biological transportation networks: Modeling and simulation

    KAUST Repository

    Albi, Giacomo

    2015-09-15

    We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation and angiogenesis) and ion transportation networks (e.g., neural networks) is explained in detail and basic analytical features like the gradient flow structure of the fluid transportation network model and the impact of the model parameters on the geometry and topology of network formation are analyzed. We also present a numerical finite-element based discretization scheme and discuss sample cases of network formation simulations.

  9. A universal simulator for ecological models

    DEFF Research Database (Denmark)

    Holst, Niels

    2013-01-01

    Software design is an often neglected issue in ecological models, even though bad software design often becomes a hindrance for re-using, sharing and even grasping an ecological model. In this paper, the methodology of agile software design was applied to the domain of ecological models. Thus the principles for a universal design of ecological models were arrived at. To exemplify this design, the open-source software Universal Simulator was constructed using C++ and XML and is provided as a resource for inspiration.

  10. Object Oriented Modelling and Dynamical Simulation

    DEFF Research Database (Denmark)

    Wagner, Falko Jens; Poulsen, Mikael Zebbelin

    1998-01-01

    This report with appendix describes the work done in a master project at DTU. The goal of the project was to develop a concept for simulation of dynamical systems based on object-oriented methods. The result was a library of C++ classes, for use both when building component-based models and when...

  11. Modelling, simulation and visualisation for electromagnetic non-destructive testing

    International Nuclear Information System (INIS)

    Ilham Mukriz Zainal Abidin; Abdul Razak Hamzah

    2010-01-01

    This paper reviews the state of the art and recent developments in modelling, simulation and visualisation for the eddy current Non-Destructive Testing (NDT) technique. Simulation and visualisation have aided in the design and development of electromagnetic sensors, imaging techniques and systems for Electromagnetic Non-Destructive Testing (ENDT), as well as in feature extraction and inverse problems for Quantitative Non-Destructive Testing (QNDT). After reviewing the state of the art of electromagnetic modelling and simulation, case studies of research and development in the eddy current NDT technique via magnetic field mapping and thermography for eddy current distribution are discussed. (author)

  12. Reproducibility in Computational Neuroscience Models and Simulations

    Science.gov (United States)

    McDougal, Robert A.; Bulanova, Anna S.; Lytton, William W.

    2016-01-01

    Objective: Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles as the method to provide the sharing and transparency required for reproducibility. Methods: Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, strong commenting and documentation, and code modularity. Results: Building on these standard practices, model sharing sites and tools have been developed that fit into several categories: (1) standardized neural simulators; (2) shared computational resources; (3) declarative model descriptors, ontologies and standardized annotations; (4) model sharing repositories and sharing standards. Conclusion: A number of complementary innovations have been proposed to enhance sharing, transparency and reproducibility. The individual user can be encouraged to make use of version control, commenting, documentation and modularity in development of models. The community can help by requiring model sharing as a condition of publication and funding. Significance: Model management will become increasingly important as multiscale models become larger, more detailed and correspondingly more difficult to manage by any single investigator or single laboratory. Additional big data management complexity will come as the models become more useful in interpreting experiments, thus increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiment. PMID:27046845

  13. Thermohydraulic modeling and simulation of breeder reactors

    International Nuclear Information System (INIS)

    Agrawal, A.K.; Khatib-Rahbar, M.; Curtis, R.T.; Hetrick, D.L.; Girijashankar, P.V.

    1982-01-01

    This paper deals with the modeling and simulation of system-wide transients in LMFBRs. Unprotected events (i.e., the presumption of failure of the plant protection system) leading to core-melt are not considered in this paper. The existing computational capabilities in the area of protected transients in the US are noted. Various physical and numerical approximations that are made in these codes are discussed. Finally, the future direction in the area of model verification and improvements is discussed

  14. The Model of Gas Supply Capacity Simulation In Regional Energy Security Framework: Policy Studies PT. X Cirebon Area

    Science.gov (United States)

    Nuryadin; Ronny Rahman Nitibaskara, Tb; Herdiansyah, Herdis; Sari, Ravita

    2017-10-01

    Energy needs are increasing every year. The unavailability of energy causes economic losses and weakens energy security. To secure the availability of gas supply in the future, planning is crucially needed, and a systems approach is necessary so that the process of gas distribution runs properly. In this research, the system dynamics method is used to determine how much supply capacity is needed up to 2050, with demand in the industrial, household and commercial sectors as parameters. The model shows that PT. X Cirebon area will not be able to meet the gas needs of its customers in the Cirebon region by 2031; under the business-as-usual scenario, gas demand can only be fully met until 2027. With the implementation of the national energy policy, i.e. the use of new and renewable energy (NRE), included as a government intervention in the model, PT. X Cirebon area remains able to supply the gas needs of its customers up to 2035.
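
    The core of such a system dynamics projection can be caricatured in a few lines: grow sectoral demand year by year and report the first year in which total demand exceeds a fixed supply capacity. All figures below are invented placeholders, not the PT. X Cirebon data.

      # Toy demand-vs-capacity projection in the spirit of the model.
      CAPACITY = 120.0                   # supply capacity, assumed constant
      demand = {"industry": 60.0, "household": 15.0, "commercial": 10.0}
      growth = {"industry": 0.04, "household": 0.03, "commercial": 0.05}

      year = 2017
      while sum(demand.values()) <= CAPACITY:
          year += 1
          for sector in demand:          # compound sectoral growth
              demand[sector] *= 1.0 + growth[sector]

      print(f"Demand first exceeds capacity in {year} "
            f"(total {sum(demand.values()):.1f} demand units)")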

  15. Numerical simulation for regional ozone concentrations: A case study by weather research and forecasting/chemistry (WRF/Chem) model

    OpenAIRE

    Khandakar Md Habib Al Razi, Moritomi Hiroshi

    2013-01-01

    The objective of this research is to better understand and predict the atmospheric concentration distribution of ozone and its precursors, in particular within the Planetary Boundary Layer (within 110 m to 12 km), over Kawasaki City and the Greater Tokyo Area, using the fully coupled online WRF/Chem (Weather Research and Forecasting/Chemistry) model. In this research, a serious and continuous high-ozone episode in the Greater Tokyo Area (GTA) during the summer of 14–18 August 2010 was investigated u...

  16. Agricultural trade liberalisation on greenhouse gas emissions. A simulation study using the GTAP-IMAGE modelling framework

    International Nuclear Information System (INIS)

    Verburg, R.; Woltjer, G.; Tabeau, A.; Eickhout, B.; Stehfest, E.

    2008-02-01

    This report explores the effects of agricultural trade liberalisation on greenhouse gas emissions and on changing commodity production areas by coupling the modelling tools GTAP (an economic model) and IMAGE (an environmental model). Four scenarios are explored with developments up to 2050. The scenarios include a baseline, full liberalisation and two partial liberalisation scenarios, the latter including only the removal of trade barriers or the removal of milk quotas by 2015. The results indicate that liberalisation leads to a further increase in greenhouse gas emissions, adding to the increase already observed in the baseline scenario. The CO2 emission increase is caused by vegetation clearance due to a rapid expansion of agricultural areas in South America and South East Asia. Increased methane emissions are also calculated in these areas, caused by less intensive cattle farming. Global production of the commodities milk, dairy and beef does not change between full liberalisation and the baseline, but clear shifts from North America and Europe to South America and South East Asia are expected.

  17. Twitter's tweet method modelling and simulation

    Science.gov (United States)

    Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.

    2015-02-01

    This paper proposes the concept of Twitter marketing methods. The tools that Twitter provides are modelled and simulated using iThink in the context of a Twitter media-marketing agency. The paper leverages the system dynamics paradigm to model Twitter marketing tools and methods, using the iThink™ system to implement them. It uses the design science research methodology for the proof of concept of the models and modelling processes. The following models have been developed for a Twitter marketing agent/company and tested in real circumstances and with real numbers. These models were finalized through a number of revisions and iterations of design, development, simulation, testing and evaluation. The paper also addresses the methods that best suit organized promotion through targeting on the Twitter social media service. The validity and usefulness of these Twitter marketing method models for day-to-day decision making are authenticated by the management of the company organization. The paper implements system dynamics concepts of Twitter marketing method modelling and produces models of various Twitter marketing situations. The Tweet method that Twitter provides can be adjusted, depending on the situation, in order to maximize the profit of the company/agent.

  18. Advances in NLTE modeling for integrated simulations

    Science.gov (United States)

    Scott, H. A.; Hansen, S. B.

    2010-01-01

    The last few years have seen significant progress in constructing the atomic models required for non-local thermodynamic equilibrium (NLTE) simulations. Along with this has come an increased understanding of the requirements for accurately modeling the ionization balance, energy content and radiative properties of different atomic species for a wide range of densities and temperatures. Much of this progress is the result of a series of workshops dedicated to comparing the results from different codes and computational approaches applied to a series of test problems. The results of these workshops emphasized the importance of atomic model completeness, especially in doubly-excited states and autoionization transitions, to calculating ionization balance, and the importance of accurate, detailed atomic data to producing reliable spectra. We describe a simple screened-hydrogenic model that calculates NLTE ionization balance with sufficient accuracy, at a low enough computational cost for routine use in radiation-hydrodynamics codes. The model incorporates term splitting, Δn = 0 transitions, and approximate UTA widths for spectral calculations, with results comparable to those of much more detailed codes. Simulations done with this model have been increasingly successful at matching experimental data for laser-driven systems and hohlraums. Accurate and efficient atomic models are just one requirement for integrated NLTE simulations. Coupling the atomic kinetics to hydrodynamics and radiation transport constrains both discretizations and algorithms to retain energy conservation, accuracy and stability. In particular, the strong coupling between radiation and populations can require either very short time steps or significantly modified radiation transport algorithms to account for NLTE material response. Considerations such as these continue to provide challenges for NLTE simulations.

  19. Advances in NLTE Modeling for Integrated Simulations

    International Nuclear Information System (INIS)

    Scott, H.A.; Hansen, S.B.

    2009-01-01

    The last few years have seen significant progress in constructing the atomic models required for non-local thermodynamic equilibrium (NLTE) simulations. Along with this has come an increased understanding of the requirements for accurately modeling the ionization balance, energy content and radiative properties of different elements for a wide range of densities and temperatures. Much of this progress is the result of a series of workshops dedicated to comparing the results from different codes and computational approaches applied to a series of test problems. The results of these workshops emphasized the importance of atomic model completeness, especially in doubly excited states and autoionization transitions, to calculating ionization balance, and the importance of accurate, detailed atomic data to producing reliable spectra. We describe a simple screened-hydrogenic model that calculates NLTE ionization balance with surprising accuracy, at a low enough computational cost for routine use in radiation-hydrodynamics codes. The model incorporates term splitting, Δn = 0 transitions, and approximate UTA widths for spectral calculations, with results comparable to those of much more detailed codes. Simulations done with this model have been increasingly successful at matching experimental data for laser-driven systems and hohlraums. Accurate and efficient atomic models are just one requirement for integrated NLTE simulations. Coupling the atomic kinetics to hydrodynamics and radiation transport constrains both discretizations and algorithms to retain energy conservation, accuracy and stability. In particular, the strong coupling between radiation and populations can require either very short timesteps or significantly modified radiation transport algorithms to account for NLTE material response. Considerations such as these continue to provide challenges for NLTE simulations.

  20. Sensitivity analysis for models of greenhouse gas emissions at farm level. Case study of N2O emissions simulated by the CERES-EGC model

    International Nuclear Information System (INIS)

    Drouet, J.-L.; Capian, N.; Fiorelli, J.-L.; Blanfort, V.; Capitaine, M.; Duretz, S.; Gabrielle, B.; Martin, R.; Lardy, R.; Cellier, P.; Soussana, J.-F.

    2011-01-01

    Modelling complex systems such as farms often requires quantification of a large number of input factors. Sensitivity analyses are useful to reduce the number of input factors that must be measured or estimated accurately. Three methods of sensitivity analysis (the Morris method, the rank regression and correlation method and the Extended Fourier Amplitude Sensitivity Test method) were compared in the case of the CERES-EGC model applied to the crops of a dairy farm. The qualitative Morris method provided a screening of the input factors. The two other quantitative methods were used to investigate more thoroughly the effects of input factors on output variables. Despite differences in terms of concepts and assumptions, the three methods provided similar results. Among the 44 factors under study, N2O emissions were mainly sensitive to the fraction of N2O emitted during denitrification, the maximum rate of nitrification, the soil bulk density and the cropland area. - Highlights: → Three methods of sensitivity analysis were compared in the case of a soil-crop model. → The qualitative Morris method provided a screening of the input factors. → The quantitative EFAST method provided a thorough analysis of the input factors. → The three methods provided similar results regarding the sensitivity of N2O emissions. → N2O emissions were mainly sensitive to a few (four) input factors. - Three methods of sensitivity analysis were compared to analyse their efficiency in assessing the sensitivity of a complex soil-crop model to its input factors.
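
    To make the screening idea concrete, the sketch below hand-rolls Morris elementary effects on a toy stand-in model: the mean of the absolute effects (mu*) separates influential inputs from negligible ones, and their standard deviation flags nonlinearity or interactions. The factor names echo the abstract, but the response surface is an invented assumption, not CERES-EGC.

      # Hand-rolled Morris screening (elementary effects) on a toy model.
      import numpy as np

      rng = np.random.default_rng(5)

      def toy_model(x):
          # Stand-in response surface: four factors matter, the fifth is inert.
          return 3.0 * x[0] + 2.0 * x[1] ** 2 + 1.5 * x[2] + x[0] * x[3] + 0.001 * x[4]

      names = ["f_N2O_denit", "k_nitrif_max", "bulk_density",
               "cropland_area", "inert_dummy"]
      k, n_traj, delta = 5, 50, 0.25
      effects = [[] for _ in range(k)]

      for _ in range(n_traj):
          x = rng.uniform(0.0, 1.0 - delta, k)      # random base point
          for i in rng.permutation(k):              # one-at-a-time moves
              x_step = x.copy()
              x_step[i] += delta
              effects[i].append((toy_model(x_step) - toy_model(x)) / delta)
              x = x_step                            # walk on, trajectory-style

      for name, ee in zip(names, effects):
          ee = np.asarray(ee)
          print(f"{name:>13}: mu* = {np.abs(ee).mean():5.2f}, sigma = {ee.std():.2f}")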

  1. SIMULATION MODELING OF IT PROJECTS BASED ON PETRI NETS

    Directory of Open Access Journals (Sweden)

    Александр Михайлович ВОЗНЫЙ

    2015-05-01

    An integrated simulation model of an IT project, based on a modified Petri net that combines the product model and the model of project tasks, has been proposed. A substantive interpretation of the components of the simulation model is presented, and the simulation process is described. Conclusions are drawn about the integration of the product model and the project task model.

  2. Sensitivity of chemical transport model simulations to the duration of chemical and transport operators: a case study with GEOS-Chem v10-01

    Science.gov (United States)

    Philip, S.; Martin, R. V.; Keller, C. A.

    2015-11-01

    Chemical transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemical transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to temporal resolution. Subsequently, we compare the tracers simulated with operator durations from 10 to 60 min as typically used by global chemical transport models, and identify the timesteps that optimize both computational expense and simulation accuracy. We found that longer transport timesteps increase concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production at longer transport timesteps. Longer chemical timesteps decrease sulfate and ammonium but increase nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by an order of magnitude from fine (5 min) to coarse (60 min) temporal resolution. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, ozone, carbon monoxide and secondary inorganic aerosols with a finer temporal or spatial resolution taken as truth. Simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) temporal resolution. Chemical timesteps twice that of the transport timestep offer more simulation accuracy per unit computation. However, simulation error from coarser spatial resolution generally exceeds that from longer timesteps; e.g. degrading from 2° × 2.5° to 4° × 5

  3. Sensitivity of chemistry-transport model simulations to the duration of chemical and transport operators: a case study with GEOS-Chem v10-01

    Science.gov (United States)

    Philip, Sajeev; Martin, Randall V.; Keller, Christoph A.

    2016-05-01

    Chemistry-transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemistry-transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to operator duration. Subsequently, we compare the species simulated with operator durations from 10 to 60 min as typically used by global chemistry-transport models, and identify the operator durations that optimize both computational expense and simulation accuracy. We find that longer continuous transport operator duration increases concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production with longer transport operator duration. Longer chemical operator duration decreases sulfate and ammonium but increases nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by up to a factor of 5 from fine (5 min) to coarse (60 min) operator duration. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, secondary inorganic aerosols, ozone and carbon monoxide with a finer temporal or spatial resolution taken as "truth". Relative simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) operator duration. Chemical operator duration twice that of the transport operator duration offers more simulation accuracy per unit computation. However, the relative simulation error from coarser spatial resolution generally
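
    The operator-duration effect can be illustrated with a deliberately tiny stand-in: one species subject to a "transport" operator (explicit-Euler relaxation toward an inflow value) and a "chemistry" operator (exact first-order decay), integrated by sequential splitting at several operator durations and compared against a fine-step reference. The rates and durations below are arbitrary, and nothing here is GEOS-Chem; the point is only that splitting error grows with operator duration.

      # Toy operator-splitting experiment: error vs. operator duration.
      import numpy as np

      def integrate(dt, t_end=24.0, c0=100.0, k_mix=0.1, k_chem=0.2, c_in=50.0):
          c = c0
          for _ in range(int(round(t_end / dt))):
              c += k_mix * (c_in - c) * dt    # transport operator (mixing)
              c *= np.exp(-k_chem * dt)       # chemistry operator (decay)
          return c

      reference = integrate(dt=0.01)          # fine-step "truth"
      for dt in (1 / 12, 1 / 6, 0.5, 1.0):    # 5, 10, 30, 60 "minutes" (hours unit)
          err = abs(integrate(dt) - reference) / reference * 100.0
          print(f"operator duration {dt * 60:5.1f} min -> relative error {err:.2f}%")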

  4. Persistence of DNA studied in different ex vivo and in vivo rat models simulating the human gut situation

    DEFF Research Database (Denmark)

    Wilcks, Andrea; van Hoek, A.H.A.M.; Joosten, R.G.

    2004-01-01

    This study aimed to evaluate the possibility that DNA sequences from genetically modified plants persist in the gastrointestinal (GI) tract. PCR analysis and transformation assays were used to study DNA persistence and integrity in various ex vivo and in vivo systems using gnotobiotic rats. DNA

  5. 2D and 3D simulation of cavitating flows: development of an original algorithm in code Saturne and study of the influence of turbulence modeling

    International Nuclear Information System (INIS)

    Chebli, Rezki

    2014-01-01

    Cavitation is one of the most demanding physical phenomena influencing the performance of hydraulic machines. It is therefore important to predict correctly its inception and development, in order to quantify the performance drop it induces, and also to characterize the resulting flow instabilities. The aim of this work is to develop an unsteady 3D algorithm for the numerical simulation of cavitation in an industrial CFD solver 'Code Saturne'. It is based on a fractional step method and preserves the minimum/maximum principle of the void fraction. An implicit solver, based on a transport equation of the void fraction coupled with the Navier-Stokes equations is proposed. A specific numerical treatment of the cavitation source terms provides physical values of the void fraction (between 0 and 1) without including any artificial numerical limitation. The influence of RANS turbulence models on the simulation of cavitation on 2D geometries (Venturi and Hydrofoil) is then studied. It confirms the capability of the two-equation eddy viscosity models, k-epsilon and k-omega-SST, with the modification proposed by Reboud et al. (1998) to reproduce the main features of the unsteady sheet cavity behavior. The second order model RSM-SSG, based on the Reynolds stress transport, appears able to reproduce the highly unsteady flow behavior without including any arbitrary modification. The three-dimensional effects involved in the instability mechanisms are also analyzed. This work allows us to achieve a numerical tool, validated on complex configurations of cavitating flows, to improve the understanding of the physical mechanisms that control the three-dimensional unsteady effects involved in the mechanisms of instability. (author)

  6. A parallel computational model for GATE simulations.

    Science.gov (United States)

    Rannou, F R; Vega-Acevedo, N; El Bitar, Z

    2013-12-01

    GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently for Positron Emission Tomography (PET) experiments, because it requires a centralized coincidence processing and large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing but maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  7. Multiphase reacting flows modelling and simulation

    CERN Document Server

    Marchisio, Daniele L

    2007-01-01

    The papers in this book describe the most widely applicable modeling approaches and are organized in six groups covering from fundamentals to relevant applications. In the first part, some fundamentals of multiphase turbulent reacting flows are covered. In particular the introduction focuses on basic notions of turbulence theory in single-phase and multi-phase systems as well as on the interaction between turbulence and chemistry. In the second part, models for the physical and chemical processes involved are discussed. Among other things, particular emphasis is given to turbulence modeling strategies for multiphase flows based on the kinetic theory for granular flows. Next, the different numerical methods based on Lagrangian and/or Eulerian schemes are presented. In particular the most popular numerical approaches of computational fluid dynamics codes are described (i.e., Direct Numerical Simulation, Large Eddy Simulation, and Reynolds-Averaged Navier-Stokes approach). The book will cover particle-based meth...

  8. Fault diagnosis based on continuous simulation models

    Science.gov (United States)

    Feyock, Stefan

    1987-01-01

    The results are described of an investigation of techniques for using continuous simulation models as a basis for reasoning about physical systems, with emphasis on the diagnosis of system faults. It is assumed that a continuous simulation model of the properly operating system is available. Malfunctions are diagnosed by posing the question: how can we make the model behave like that? The adjustments that must be made to the model to produce the observed behavior usually provide definitive clues to the nature of the malfunction. A novel application of Dijkstra's weakest precondition predicate transformer is used to derive the preconditions for producing the required model behavior. To minimize the size of the search space, an envisionment generator based on interval mathematics was developed. In addition to its intended application, the ability to generate qualitative state spaces automatically from quantitative simulations proved to be a fruitful avenue of investigation in its own right. Implementations of the Dijkstra transform and the envisionment generator are reproduced in the Appendix.
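
    A minimal flavour of the interval mathematics such an envisionment generator can rest on: propagating [lo, hi] bounds through model equations yields guaranteed bounds on behaviour, from which qualitative states can be enumerated. Only addition and multiplication are sketched below, with an invented example; this is not the system described in the report.

      # Minimal interval arithmetic for bound propagation.
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Interval:
          lo: float
          hi: float

          def __add__(self, other):
              return Interval(self.lo + other.lo, self.hi + other.hi)

          def __mul__(self, other):
              # All corner products are needed since signs may flip bounds.
              products = (self.lo * other.lo, self.lo * other.hi,
                          self.hi * other.lo, self.hi * other.hi)
              return Interval(min(products), max(products))

      # E.g. a sensor reading known to +/-0.5 fed through y = a*x + b:
      x = Interval(9.5, 10.5)
      print(Interval(2.0, 2.1) * x + Interval(-1.0, 1.0))   # bounds on y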

  9. Numerical model simulation of atmospheric coolant plumes

    International Nuclear Information System (INIS)

    Gaillard, P.

    1980-01-01

    The effect of humid atmospheric coolants on the atmosphere is simulated by means of a three-dimensional numerical model. The atmosphere is defined by its natural vertical profiles of horizontal velocity, temperature, pressure and relative humidity. Effluent discharge is characterised by its vertical velocity and the temperature of air saturated with water vapour. The subject of investigation is the area in the vicinity of the point of discharge, with due allowance for the wake effect of the tower and buildings and, where applicable, wind veer with altitude. The model equations express the conservation relationships for momentum, energy, total mass and water mass, for an incompressible fluid behaving in accordance with the Boussinesq assumptions. Condensation is represented by a simple thermodynamic model, and turbulent fluxes are simulated by the introduction of turbulent viscosity and diffusivity data based on in-situ and experimental water model measurements. The three-dimensional problem expressed in terms of the primitive variables (u, v, w, p) is governed by an elliptic equation system which is solved numerically by application of an explicit time-marching algorithm in order to predict the steady-flow velocity distribution, temperature, water vapour concentration and the liquid-water concentration defining the visible plume. Windstill conditions are simulated by a program processing the elliptic equations in an axisymmetric coordinate system of revolution. The calculated visible plumes are compared with plumes observed on site with a view to validating the models [fr]

  10. Studies of acid-base homeostasis during simulated weightlessness: Application of the water immersion model to man

    Science.gov (United States)

    Epstein, M.

    1975-01-01

    The effects of water immersion on acid-base homeostasis were investigated under carefully controlled conditions. Studies of renal acidification were carried out on seven healthy male subjects, each consuming a diet containing 150 meq sodium and 100 meq potassium. Control and immersion studies were carried out on each subject on the fourth and sixth days, respectively, of dietary equilibration, by which time all subjects had achieved sodium balance. The experimental protocols on study days were similar (except for the amount of water administered).

  11. Simulation Tools for Electrical Machines Modelling: Teaching and ...

    African Journals Online (AJOL)

    Simulation tools are used both for research and teaching to allow a good comprehension of the systems under study before practical implementation. This paper illustrates the way MATLAB is used to model non-linearities in a synchronous machine. The machine is modelled in the rotor reference frame with currents as state ...

  12. Simulation cobweb model of price formation with delayed supply

    Directory of Open Access Journals (Sweden)

    Yatsenko Roman Nikolaevich

    2013-03-01

    The article presents a simulation cobweb model of price formation with delayed supply. It considers cases both with and without random factors. Randomness is introduced into the model through the concept of games with nature, using Markov chains. The article studies the activity of the retail link in the described environment.
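
    The underlying recursion is compact enough to sketch: linear demand, supply lagged one period, and (for the random-factor case) a small Markov chain over "states of nature" that shifts the demand intercept. All coefficients and transition probabilities below are illustrative assumptions, not the article's calibration.

      # Cobweb model with one-period supply lag and Markov "nature" states.
      import random

      random.seed(11)

      A, B = 100.0, 1.5        # demand:  D(p) = A - B * p  (plus random shift)
      C, D = 10.0, 1.0         # supply:  S(p_prev) = C + D * p_prev
      shift = {"calm": 0.0, "shock": 15.0}
      transition = {"calm": [("calm", 0.9), ("shock", 0.1)],
                    "shock": [("calm", 0.5), ("shock", 0.5)]}

      def step_state(state):
          r, acc = random.random(), 0.0
          for nxt, prob in transition[state]:
              acc += prob
              if r < acc:
                  return nxt
          return state

      p, state = 20.0, "calm"
      for t in range(10):
          supply = C + D * p                    # producers react to last price
          p = (A + shift[state] - supply) / B   # market-clearing price today
          state = step_state(state)
          print(f"t={t:2d} state={state:5s} price={p:6.2f}")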

  13. Arctic Ocean freshwater: How robust are model simulations?

    NARCIS (Netherlands)

    Jahn, A.; Aksenov, Y.; de Cuevas, B.A.; de Steur, L.; Häkkinen, S.; Hansen, E.; Herbaut, C.; Houssais, M.N.; Karcher, M.; Kauker, F.; Lique, C.; Nguyen, A.; Pemberton, P.; Worthen, D.; Zhang, J.

    2012-01-01

    The Arctic freshwater (FW) has been the focus of many modeling studies, due to the potential impact of Arctic FW on the deep water formation in the North Atlantic. A comparison of the hindcasts from ten ocean-sea ice models shows that the simulation of the Arctic FW budget is quite different in the

  14. A Simulation Model for Extensor Tendon Repair

    Directory of Open Access Journals (Sweden)

    Elizabeth Aronstam

    2017-07-01

    Audience: This simulation model is designed for use by emergency medicine residents. Although we have instituted this at the PGY-2 level of our residency curriculum, it is appropriate for any level of emergency medicine residency training. It might also be adapted for use by a variety of other learners, such as practicing emergency physicians, orthopedic surgery residents, or hand surgery trainees. Introduction: Tendon injuries commonly present to the emergency department, so it is essential that emergency physicians be competent in evaluating such injuries. Indeed, extensor tendon repair is included as an ACGME Emergency Medicine Milestone (Milestone 13, Wound Management, Level 5 – "Performs advanced wound repairs, such as tendon repairs…").1 However, emergency medicine residents may have limited opportunity to develop these skills due to a lack of patients, competition from other trainees, or preexisting referral patterns. Simulation may provide an alternative means to effectively teach these skills in such settings. Previously described tendon repair simulation models that were designed for surgical trainees have used rubber worms,4 licorice,5 feeding tubes, catheters,6,7 drinking straws,8 microfoam tape,9 sheep forelimbs10 and cadavers.11 These models all suffer a variety of limitations, including high cost, lack of ready availability, or lack of realism. Objectives: We sought to develop an extensor tendon repair simulation model for emergency medicine residents, designed to meet ACGME Emergency Medicine Milestone 13, Level 5. We wished this model to be simple, inexpensive, and realistic. Methods: The learner responsible content/educational handout component of our innovation teaches residents about emergency department extensor tendon repair, and includes: (1) relevant anatomy; (2) indications and contraindications for emergency department extensor tendon repair; (3) physical exam findings; (4) tendon suture techniques; and (5) aftercare. During

  15. Managing health care decisions and improvement through simulation modeling.

    Science.gov (United States)

    Forsberg, Helena Hvitfeldt; Aronsson, Håkan; Keller, Christina; Lindblad, Staffan

    2011-01-01

    Simulation modeling is a way to test changes in a computerized environment to give ideas for improvements before implementation. This article reviews research literature on simulation modeling as support for health care decision making. The aim is to investigate the experience and potential value of such decision support and quality of articles retrieved. A literature search was conducted, and the selection criteria yielded 59 articles derived from diverse applications and methods. Most met the stated research-quality criteria. This review identified how simulation can facilitate decision making and that it may induce learning. Furthermore, simulation offers immediate feedback about proposed changes, allows analysis of scenarios, and promotes communication on building a shared system view and understanding of how a complex system works. However, only 14 of the 59 articles reported on implementation experiences, including how decision making was supported. On the basis of these articles, we proposed steps essential for the success of simulation projects, not just in the computer, but also in clinical reality. We also presented a novel concept combining simulation modeling with the established plan-do-study-act cycle for improvement. Future scientific inquiries concerning implementation, impact, and the value for health care management are needed to realize the full potential of simulation modeling.

  16. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    Science.gov (United States)

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
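
    The article's examples are in MATLAB and R; an equivalent embarrassingly parallel pattern in Python, using only the standard library, farms independent Monte Carlo replications out to worker processes and aggregates the results. The loss model below is a placeholder, not one of the article's examples.

      # Embarrassingly parallel Monte Carlo with multiprocessing.Pool.
      import random
      from multiprocessing import Pool

      def one_replication(seed):
          """One simulated year of losses; independent of all other runs."""
          rng = random.Random(seed)
          n_events = sum(rng.random() < 0.01 for _ in range(365))  # daily failures
          return sum(rng.lognormvariate(10, 1) for _ in range(n_events))

      if __name__ == "__main__":
          with Pool() as pool:                       # one worker per CPU core
              losses = pool.map(one_replication, range(10_000))
          losses.sort()
          print(f"mean loss {sum(losses) / len(losses):,.0f}, "
                f"95% VaR {losses[int(0.95 * len(losses))]:,.0f}")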

  17. Simulation as a surgical teaching model.

    Science.gov (United States)

    Ruiz-Gómez, José Luis; Martín-Parra, José Ignacio; González-Noriega, Mónica; Redondo-Figuero, Carlos Godofredo; Manuel-Palazuelos, José Carlos

    2018-01-01

    Teaching of surgery has been affected by many factors in recent years, such as the reduction of working hours, the optimization of operating room use, and patient safety. Traditional teaching methodology fails to reduce the impact of these factors on the surgeon's training. Simulation as a teaching model minimizes such impact and is more effective than traditional teaching methods for integrating knowledge and clinical-surgical skills. Simulation complements clinical assistance with training, creating a safe learning environment where patient safety is not affected and ethical or legal conflicts are avoided. Simulation uses learning methodologies that allow teaching to be individualized, adapting it to the learning needs of each student. It also allows training of all kinds of technical, cognitive, or behavioural skills. Copyright © 2017 AEC. Published by Elsevier España, S.L.U. All rights reserved.

  18. Mathematical models and numerical simulation in electromagnetism

    CERN Document Server

    Bermúdez, Alfredo; Salgado, Pilar

    2014-01-01

    The book represents a basic support for a master's course in electromagnetism oriented to numerical simulation. The main goal of the book is for the reader to know the boundary-value problems of partial differential equations that should be solved in order to perform computer simulation of electromagnetic processes. Moreover, it includes a part devoted to electric circuit theory based on ordinary differential equations. The book is mainly oriented to electrical engineering applications, going from the general to the specific, namely, from the full Maxwell's equations to the particular cases of electrostatics, direct current, magnetostatics and eddy currents models. Apart from standard exercises related to analytical calculus, the book includes some others oriented to real-life applications solved with the MaxFEM free simulation software.
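
    As a worked illustration of the book's general-to-specific progression (standard textbook material, not quoted from the book), the electrostatic model is the time-independent special case of Maxwell's equations:

    ```latex
    % With all time derivatives set to zero, Faraday's law makes E curl-free,
    % so E derives from a scalar potential V; Gauss's law then gives the
    % boundary-value problem solved in electrostatic simulation.
    \nabla \times \mathbf{E} = -\,\partial_t \mathbf{B} = \mathbf{0}
      \;\Longrightarrow\; \mathbf{E} = -\nabla V ,
    \qquad
    \nabla \cdot (\varepsilon \mathbf{E}) = \rho
      \;\Longrightarrow\; -\nabla \cdot (\varepsilon \nabla V) = \rho .
    ```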

  19. Stereoscopic (3D) versus monoscopic (2D) laparoscopy: comparative study of performance using advanced HD optical systems in a surgical simulator model.

    Science.gov (United States)

    Schoenthaler, Martin; Schnell, Daniel; Wilhelm, Konrad; Schlager, Daniel; Adams, Fabian; Hein, Simon; Wetterauer, Ulrich; Miernik, Arkadiusz

    2016-04-01

    To compare task performances of novices and experts using advanced high-definition 3D versus 2D optical systems in a surgical simulator model. Fifty medical students (novices in laparoscopy) were randomly assigned to perform five standardized tasks adopted from the Fundamentals of Laparoscopic Surgery (FLS) curriculum in either a 2D or a 3D laparoscopy simulator system. In addition, eight experts performed the same tasks. Task performances were evaluated using a validated scoring system of the SAGES/FLS program. Participants were asked to rate 16 items in a questionnaire. Overall task performance of novices was significantly better using stereoscopic visualization. The superiority of 3D performance reached statistical significance for the peg transfer and precision cutting tasks. No significant differences were noted in the performances of experts when using either 2D or 3D. Overall performances of experts compared to novices were better in both 2D and 3D. Questionnaire scores showed a tendency toward lower scores in the group of novices using 3D. Stereoscopic imaging significantly improves the performance of laparoscopic phantom tasks by novices. The current study confirms earlier data based on a large number of participants and a standardized task and scoring system. Participants felt more confident and comfortable when using a 3D laparoscopic system. However, the question remains open whether these findings translate into faster and safer operations in a clinical setting.

  20. Diversity modelling for electrical power system simulation

    International Nuclear Information System (INIS)

    Sharip, R M; Abu Zarim, M A U A

    2013-01-01

    This paper considers the diversity of generation and demand profiles against different future energy scenarios and evaluates these on a technical basis. Compared to previous studies, this research applied a forecasting concept based on possible growth rates from publicly available electrical distribution scenarios concerning the UK. These scenarios were created by different bodies considering aspects such as environment, policy, regulation, economics and technology. In line with these scenarios, forecasting is on a long-term timescale (at ten-year intervals from 2020 until 2050) in order to create a possible output of generation mix and demand profiles to be used as an appropriate boundary condition for the network simulation. The network considered is a segment of a rural LV network populated with a mixture of different housing types. The profiles for 'future' energy and demand have been successfully modelled by applying a forecasting method. The network results under these profiles show, for the cases studied, that the power produced by each micro-generation unit is often in line with the demand requirements of an individual dwelling, and that no problems arise from high penetration of micro-generation and demand-side management for the dwellings considered. The results obtained highlight the technical issues and changes for energy delivery and management to rural customers under the future energy scenarios.

  2. How operator admittance affects the response of a teleoperation system to assistive forces – A model analytic study and simulation

    NARCIS (Netherlands)

    Wildenbeest, J. G. W.; Abbink, D. A.; Boessenkool, H.; Heemskerk, C. J. M.; Koning, J. F.

    2013-01-01

    Haptic shared control is a promising approach to increase the effectiveness of remote handling operations. While in haptic shared control the operator is continuously guided with assistive forces, the operator's response to forces is not fully understood. This study describes the development of

  3. Peer Influence, Peer Selection and Adolescent Alcohol Use: a Simulation Study Using a Dynamic Network Model of Friendship Ties and Alcohol Use.

    Science.gov (United States)

    Wang, Cheng; Hipp, John R; Butts, Carter T; Jose, Rupa; Lakon, Cynthia M

    2017-05-01

    While studies suggest that peer influence can in some cases encourage adolescent substance use, recent work demonstrates that peer influence may be on average protective for cigarette smoking, raising questions about whether this effect occurs for other substance use behaviors. Herein, we focus on adolescent drinking, which may follow different social dynamics than smoking. We use a data-calibrated Stochastic Actor-Based (SAB) Model of adolescent friendship tie choice and drinking behavior to explore the impact of manipulating the size of peer influence and selection effects on drinking in two school-based networks. We first fit a SAB Model to data on friendship tie choice and adolescent drinking behavior within two large schools (n = 2178 and n = 976) over three time points using data from the National Longitudinal Study of Adolescent to Adult Health. We then alter the size of the peer influence and selection parameters with all other effects fixed at their estimated values and simulate the social systems forward 1000 times under varying conditions. Whereas peer selection appears to contribute to drinking behavior similarity among adolescents, there is no evidence that it leads to higher levels of drinking at the school level. A stronger peer influence effect lowers the overall level of drinking in both schools. There are many similarities in the patterning of findings between this study of drinking and previous work on smoking, suggesting that peer influence and selection may function similarly with respect to these substances.

  4. Facebook's personal page modelling and simulation

    Science.gov (United States)

    Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.

    2015-02-01

    In this paper we try to define the utility of Facebook's Personal Page marketing method. This tool that Facebook provides is modelled and simulated using iThink in the context of a Facebook marketing agency. The paper leverages the system dynamics paradigm to model Facebook marketing tools and methods, using the iThink™ system to implement them. It uses the design science research methodology for the proof of concept of the models and modelling processes. The model has been developed for a social media marketing agent/company, oriented to the Facebook platform and tested in real circumstances. The model was finalized through a number of revisions and iterations of the design, development, simulation, testing and evaluation processes. The validity and usefulness of this Facebook marketing model for day-to-day decision making are confirmed by the management of the company. Facebook's Personal Page method can be adjusted, depending on the situation, in order to maximize the total profit of the company: to bring in new customers, keep the interest of existing customers, and deliver traffic to its website.
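
    A minimal sketch of the stock-and-flow style of model built in iThink may help fix ideas; the stock, flows, and rate values below are hypothetical illustrations, not the paper's actual model:

    ```python
    # Hypothetical stock-and-flow sketch in the system dynamics style of
    # iThink: a "fans" stock grows by an acquisition inflow and decays by a
    # churn outflow. All names and rates are invented for illustration.
    def simulate_page(fans0=1000.0, reach_rate=0.05, conversion=0.02,
                      churn=0.01, dt=1.0, steps=365):
        """Euler-integrate a one-stock system dynamics model, day by day."""
        fans, history = fans0, []
        for _ in range(steps):
            acquisition = fans * reach_rate * conversion  # word-of-mouth inflow
            loss = fans * churn                           # unfollow outflow
            fans += dt * (acquisition - loss)             # stock update
            history.append(fans)
        return history

    fans_over_time = simulate_page()
    print(f"Fans after one year: {fans_over_time[-1]:.0f}")
    ```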

  5. Modeling and simulation of photovoltaic solar panel

    International Nuclear Information System (INIS)

    Belarbi, M.; Haddouche, K.; Midoun, A.

    2006-01-01

    In this article, we present a new approach for estimating the model parameters of a photovoltaic solar panel as functions of irradiance and temperature. The parameters of the one-diode model are obtained from knowledge of three operating points: short circuit, open circuit, and maximum power. In the first step, the adopted approach solves the system of equations formed by the three operating points to express all the model parameters in terms of the series resistance. Secondly, we perform an iterative solution at the optimal operating point using the Newton-Raphson method to calculate the series resistance value as well as the model parameters. Once the panel model is identified, we consider further equations to take the irradiance and temperature effects into account. The simulation results show the convergence speed of the model parameters and the possibility of visualizing the electrical behaviour of the panel according to irradiance and temperature. Note that the algorithm is sensitive at the optimal operating point: a small variation in the optimal voltage value leads to a very large variation in the identified parameter values. With the identified model, we can develop maximum power point tracking algorithms and simulate a solar water pumping system. (Author)
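
    The iterative step described above can be illustrated with a small Newton-Raphson sketch on the implicit one-diode equation; the symbols and parameter values below are illustrative assumptions, not the article's:

    ```python
    import math

    # Newton-Raphson on the implicit one-diode equation
    #   f(I) = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh - I = 0
    # All parameter values are invented for illustration.
    def diode_current(V, Iph=5.0, I0=1e-9, Rs=0.02, Rsh=200.0, n=1.3,
                      Vt=0.02585, tol=1e-10, max_iter=50):
        I = Iph  # the photocurrent is a reasonable starting guess
        for _ in range(max_iter):
            e = math.exp((V + I * Rs) / (n * Vt))
            f = Iph - I0 * (e - 1.0) - (V + I * Rs) / Rsh - I
            df = -I0 * e * Rs / (n * Vt) - Rs / Rsh - 1.0  # df/dI
            step = f / df
            I -= step
            if abs(step) < tol:
                break
        return I

    print(f"I(V = 0.5 V) = {diode_current(0.5):.4f} A")
    ```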

  6. Software Platform Evaluation - Verifiable Fuel Cycle Simulation (VISION) Model

    International Nuclear Information System (INIS)

    J. J. Jacobson; D. E. Shropshire; W. B. West

    2005-01-01

    The purpose of this Software Platform Evaluation (SPE) is to document the top-level evaluation of potential software platforms on which to construct a simulation model that satisfies the requirements for a Verifiable Fuel Cycle Simulation Model (VISION) of the Advanced Fuel Cycle (AFC). See the Software Requirements Specification for Verifiable Fuel Cycle Simulation (VISION) Model (INEEL/EXT-05-02643, Rev. 0) for a discussion of the objective and scope of the VISION model. VISION is intended to serve as a broad systems analysis and study tool applicable to work conducted as part of the AFCI (including cost estimates) and Generation IV reactor development studies. This document will serve as a guide for selecting the most appropriate software platform for VISION. This is a "living document" that will be modified over the course of the execution of this work.

  7. A simulation model for material accounting systems

    International Nuclear Information System (INIS)

    Coulter, C.A.; Thomas, K.E.

    1987-01-01

    A general-purpose model that was developed to simulate the operation of a chemical processing facility for nuclear materials has been extended to describe material measurement and accounting procedures as well. The model now provides descriptors for material balance areas, a large class of measurement instrument types and their associated measurement errors for various classes of materials, the measurement instruments themselves with their individual calibration schedules, and material balance closures. Delayed receipt of measurement results (as for off-line analytical chemistry assay), with interim use of a provisional measurement value, can be accurately represented. The simulation model can be used to estimate inventory difference variances for processing areas that do not operate at steady state, to evaluate the timeliness of measurement information, to determine process impacts of measurement requirements, and to evaluate the effectiveness of diversion-detection algorithms. Such information is usually difficult to obtain by other means. Use of the measurement simulation model is illustrated by applying it to estimate inventory difference variances for two material balance area structures of a fictitious nuclear material processing line
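
    As a toy illustration of the kind of estimate the model supports (this is not the record's simulation model; the balance terms and error levels are invented), random measurement errors can be propagated through a simple material balance to estimate an inventory difference variance:

    ```python
    import random

    # Toy Monte Carlo estimate of the variance of an inventory difference
    #   ID = BI + receipts - shipments - EI
    # when every term carries random measurement error. Invented values only.
    def simulate_id(n_trials=100_000, seed=1):
        rng = random.Random(seed)
        true = {"BI": 100.0, "receipts": 50.0, "shipments": 48.0, "EI": 102.0}
        rel_sigma = {"BI": 0.005, "receipts": 0.01, "shipments": 0.01, "EI": 0.005}
        ids = []
        for _ in range(n_trials):
            m = {k: rng.gauss(v, rel_sigma[k] * v) for k, v in true.items()}
            ids.append(m["BI"] + m["receipts"] - m["shipments"] - m["EI"])
        mean = sum(ids) / n_trials
        var = sum((x - mean) ** 2 for x in ids) / (n_trials - 1)
        return mean, var

    mean_id, var_id = simulate_id()
    print(f"mean ID = {mean_id:.3f}, variance = {var_id:.4f}")
    ```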

  8. The fragrance hand immersion study - an experimental model simulating real-life exposure for allergic contact dermatitis on the hands

    DEFF Research Database (Denmark)

    Heydorn, S; Menné, T; Andersen, K E

    2003-01-01

    Recently, we showed that 10.2% of consecutively patch-tested hand eczema patients had a positive patch test to a selection of fragrances containing fragrances relevant to hand exposure. In this study, we used repeated skin exposure to a patch-test-positive fragrance allergen in patients previously diagnosed with hand eczema to explore whether immersion of fingers in a solution with or without the patch-test-positive fragrance allergen would cause or exacerbate hand eczema on the exposed finger. The study was double blinded and randomized. All participants had a positive patch test to either hydroxycitronellal or Lyral (hydroxyisohexyl 3-cyclohexene carboxaldehyde). Each participant immersed a finger from each hand, once a day, in a solution containing the fragrance allergen or placebo. During the first 2 weeks, the concentration of fragrance allergen in the solution was low (approximately 10 p...

  9. Models Robustness for Simulating Drainage and NO3-N Fluxes

    Science.gov (United States)

    Jabro, Jay; Jabro, Ann

    2013-04-01

    Computer models simulate and forecast appropriate agricultural practices to reduce environmental impact. The objectives of this study were to assess and compare the robustness and performance of three models -- LEACHM, NCSWAP, and SOIL-SOILN -- for simulating drainage and NO3-N leaching fluxes in an intense pasture system without recalibration. A 3-yr study was conducted on a Hagerstown silt loam to measure drainage and NO3-N fluxes below 1 m depth from N-fertilized orchardgrass using intact core lysimeters. Five N-fertilizer treatments were replicated five times in a randomized complete block experimental design. The models were validated under orchardgrass using soil, water and N transformation rate parameters and C pool fractionation derived from a previous study conducted on similar soils under corn. The model efficiencies (MEF) of drainage and NO3-N fluxes were 0.53 and 0.69 for LEACHM; 0.75 and 0.39 for NCSWAP; and 0.94 and 0.91 for SOIL-SOILN. The models failed to produce reasonable simulations of drainage and NO3-N fluxes in January, February and March due to limited water movement associated with frozen soil and snow accumulation and melt. The differences between simulated and measured NO3-N leaching and among models' performances may also be related to soil N and C transformation processes embedded in the models. These results are a monumental progression in the validation of computer models which will lead to continued diffusion across diverse stakeholders.
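
    The model efficiency (MEF) quoted above is conventionally the Nash-Sutcliffe statistic; assuming that standard definition, a small sketch:

    ```python
    # Nash-Sutcliffe model efficiency, the usual reading of the MEF statistic:
    #   MEF = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
    # MEF = 1 is a perfect fit; MEF <= 0 means the model predicts no better
    # than the observed mean.
    def model_efficiency(observed, simulated):
        mean_obs = sum(observed) / len(observed)
        ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
        ss_tot = sum((o - mean_obs) ** 2 for o in observed)
        return 1.0 - ss_res / ss_tot

    # Illustrative values only:
    obs = [12.0, 30.5, 22.1, 8.4, 15.0]
    sim = [10.8, 28.9, 24.0, 9.1, 14.2]
    print(f"MEF = {model_efficiency(obs, sim):.2f}")
    ```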

  10. Interaction and Impact Studies for Distributed Energy Resource, Transactive Energy, and Electric Grid, using High Performance Computing-based Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kelley, B M

    2017-02-10

    The electric utility industry is undergoing significant transformations in its operating model, including a greater emphasis on automation, monitoring technologies, and distributed energy resource management systems (DERMS). While these changes and new technologies drive greater efficiency and reliability, they may also introduce new vectors of cyber attack. The appropriate cybersecurity controls to address and mitigate these newly introduced attack vectors and potential vulnerabilities are still widely unknown, and the performance of such controls is difficult to vet. This proposal argues that modeling and simulation (M&S) is a necessary tool to address and better understand these problems introduced by emerging technologies for the grid. M&S will provide electric utilities a platform to model their transmission and distribution systems and run various simulations against the model to better understand the operational impact and performance of cybersecurity controls.

  11. Potts-model grain growth simulations: Parallel algorithms and applications

    Energy Technology Data Exchange (ETDEWEB)

    Wright, S.A.; Plimpton, S.J.; Swiler, T.P. [and others]

    1997-08-01

    Microstructural morphology and grain boundary properties often control the service properties of engineered materials. This report uses the Potts-model to simulate the development of microstructures in realistic materials. Three areas of microstructural morphology simulations were studied. They include the development of massively parallel algorithms for Potts-model grain growth simulations, modeling of mass transport via diffusion in these simulated microstructures, and the development of a gradient-dependent Hamiltonian to simulate columnar grain growth. Potts grain growth models for massively parallel supercomputers were developed for the conventional Potts-model in both two and three dimensions. Simulations using these parallel codes showed self-similar grain growth and no finite-size effects for previously unapproachable large-scale problems. In addition, new enhancements to the conventional Metropolis algorithm used in the Potts-model were developed to accelerate the calculations. These techniques enable both the sequential and parallel algorithms to run faster and use essentially an infinite number of grain orientation values to avoid non-physical grain coalescence events. Mass transport phenomena in polycrystalline materials were studied in two dimensions using numerical diffusion techniques on microstructures generated using the Potts-model. The results of the mass transport modeling showed excellent quantitative agreement with one-dimensional diffusion problems; however, the results also suggest that transient multi-dimensional diffusion effects cannot be parameterized as the product of the grain boundary diffusion coefficient and the grain boundary width. Instead, both properties are required. Gradient-dependent grain growth mechanisms were included in the Potts-model by adding an extra term to the Hamiltonian. Under normal grain growth, the primary driving term is the curvature of the grain boundary, which is included in the standard Potts-model Hamiltonian.
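
    A minimal serial sketch of the Metropolis dynamics underlying a Potts-model grain growth simulation is given below; it is illustrative only and far simpler than the report's massively parallel codes:

    ```python
    import random

    # Zero-temperature Metropolis Potts-model grain growth on a small 2-D
    # lattice. Energy is the count of unlike nearest-neighbor pairs; a spin
    # change is accepted only if it does not increase the boundary energy.
    L, Q, STEPS = 64, 48, 200_000
    rng = random.Random(0)
    lattice = [[rng.randrange(Q) for _ in range(L)] for _ in range(L)]

    def mismatch(i, j, spin):
        """Count unlike neighbors (periodic boundaries) for a candidate spin."""
        nbrs = [((i + 1) % L, j), ((i - 1) % L, j),
                (i, (j + 1) % L), (i, (j - 1) % L)]
        return sum(1 for x, y in nbrs if lattice[x][y] != spin)

    for _ in range(STEPS):
        i, j = rng.randrange(L), rng.randrange(L)
        new_spin = rng.randrange(Q)
        dE = mismatch(i, j, new_spin) - mismatch(i, j, lattice[i][j])
        if dE <= 0:  # accept only non-increasing boundary energy
            lattice[i][j] = new_spin

    print("grain orientations remaining:",
          len({s for row in lattice for s in row}))
    ```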

  12. A Model Management Approach for Co-Simulation Model Evaluation

    NARCIS (Netherlands)

    Zhang, X.C.; Broenink, Johannes F.; Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2011-01-01

    Simulating formal models is a common means of validating the correctness of a system design and reducing the time-to-market. In most embedded control system design, multiple engineering disciplines and various domain-specific models are often involved, such as mechanical, control, software

  13. eShopper modeling and simulation

    Science.gov (United States)

    Petrushin, Valery A.

    2001-03-01

    The advent of e-commerce gives an opportunity to shift the paradigm of customer communication into a highly interactive mode. The new generation of commercial Web servers, such as Blue Martini's server, combines the collection of data on customer behavior with real-time processing and dynamic tailoring of a feedback page. New opportunities for direct product marketing and cross-selling are arriving. The key problem is what kind of information we need to achieve these goals, or in other words, how do we model the customer? This paper is devoted to customer modeling and simulation. The focus is on modeling an individual customer. The model is based on the customer's transaction data, click stream data, and demographics. The model includes the hierarchical profile of a customer's preferences for different types of products and brands; consumption models for the different types of products; the current focus, trends, and stochastic models for time intervals between purchases; product affinity models; and some generalized features, such as purchasing power, sensitivity to advertising, price sensitivity, etc. This type of model is used for predicting the date of the next visit, overall spending, and spending for different types of products and brands. For some types of stores (for example, a supermarket) and stable customers, it is possible to forecast shopping lists rather accurately. The forecasting techniques are discussed. The forecasting results can be used for on-line direct marketing, customer retention, and inventory management. The customer model can also be used as a generative model for simulating the customer's purchasing behavior in different situations and for estimating the customer's features.

  14. Simulation modelling in agriculture: General considerations. | R.I. ...

    African Journals Online (AJOL)

    The computer does all the necessary arithmetic when the hypothesis is invoked to predict the future behaviour of the simulated system under given conditions. A general ... in the advisory service. Keywords: agriculture; botany; computer simulation; modelling; simulation model; simulation modelling; south africa; techniques ...

  15. Aqueous Electrolytes: Model Parameters and Process Simulation

    DEFF Research Database (Denmark)

    Thomsen, Kaj

    This thesis deals with aqueous electrolyte mixtures. The Extended UNIQUAC model is used to describe the excess Gibbs energy of such solutions. Extended UNIQUAC parameters for the twelve ions Na+, K+, NH4+, H+, Cl-, NO3-, SO42-, HSO4-, OH-, CO32-, HCO3-, and S2O82- are estimated. A computer program including a steady-state process simulator for the design, simulation, and optimization of fractional crystallization processes is presented.

  17. Molecular Modeling Studies of 11β-Hydroxysteroid Dehydrogenase Type 1 Inhibitors through Receptor-Based 3D-QSAR and Molecular Dynamics Simulations.

    Science.gov (United States)

    Qian, Haiyan; Chen, Jiongjiong; Pan, Youlu; Chen, Jianzhong

    2016-09-19

    11β-Hydroxysteroid dehydrogenase type 1 (11β-HSD1) is a potential target for the treatment of numerous human disorders, such as diabetes, obesity, and metabolic syndrome. In this work, molecular modeling studies combining molecular docking, 3D-QSAR, MESP, MD simulations and free energy calculations were performed on pyridine amides and 1,2,4-triazolopyridines as 11β-HSD1 inhibitors to explore structure-activity relationships and structural requirements for the inhibitory activity. 3D-QSAR models, including CoMFA and CoMSIA, were developed from the conformations obtained by the docking strategy. The derived pharmacophoric features were further supported by MESP and Mulliken charge analyses using density functional theory. In addition, MD simulations and free energy calculations were employed to determine the detailed binding process and to compare the binding modes of inhibitors with different bioactivities. The binding free energies calculated by MM/PBSA showed a good correlation with the experimental biological activities. Free energy analyses and per-residue energy decomposition indicated the van der Waals interaction would be the major driving force for the interactions between an inhibitor and 11β-HSD1. These unified results suggest that hydrogen bond interactions with Ser170 and Tyr183 are favorable for enhancing activity. Thr124, Ser170, Tyr177, Tyr183, Val227, and Val231 are the key amino acid residues in the binding pocket. The obtained results are expected to be valuable for the rational design of novel potent 11β-HSD1 inhibitors.

  18. Insight into the interaction mechanism of human SGLT2 with its inhibitors: 3D-QSAR studies, homology modeling, and molecular docking and molecular dynamics simulations.

    Science.gov (United States)

    Dong, Lili; Feng, Ruirui; Bi, Jiawei; Shen, Shengqiang; Lu, Huizhe; Zhang, Jianjun

    2018-03-06

    Human sodium-dependent glucose co-transporter 2 (hSGLT2) is a crucial therapeutic target in the treatment of type 2 diabetes. In this study, both comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) were applied to generate three-dimensional quantitative structure-activity relationship (3D-QSAR) models. In the most accurate CoMFA-based and CoMSIA-based QSAR models, the cross-validated coefficients (r²cv) were 0.646 and 0.577, respectively, while the non-cross-validated coefficients (r²) were 0.997 and 0.991, respectively, indicating that both models were reliable. In addition, we constructed a homology model of hSGLT2 in the absence of a crystal structure. Molecular docking was performed to explore the binding mode of inhibitors to the active site of hSGLT2. Molecular dynamics (MD) simulations and binding free energy calculations using MM-PBSA and MM-GBSA were carried out to further elucidate the interaction mechanism. With regard to binding affinity, we found that hydrogen-bond interactions of Asn51 and Glu75, located in the active site of hSGLT2, with compound 40 were critical. Hydrophobic and electrostatic interactions were shown to enhance activity, in agreement with the results obtained from docking and 3D-QSAR analysis. Our study results shed light on the interaction mode between inhibitors and hSGLT2 and may aid in the development of C-aryl glucoside SGLT2 inhibitors.

  19. Mathematical and Simulation Model Development of Switched Reluctance Motor

    Directory of Open Access Journals (Sweden)

    S. V. Aleksandrovsky

    2011-01-01

    Full Text Available The switched reluctance motor (SRM) is of great interest for application in various fields as an alternative to asynchronous motors with a short-circuited rotor. A disadvantage of the SRM is the nonlinearity of its characteristics. For this reason it is desirable to carry out investigations using a developed simulation model. The simulation results (electromagnetic torque and current) are in good agreement with the values reported in the literature.

  20. A Placement Model for Flight Simulators.

    Science.gov (United States)

    1982-09-01

    …simulator basing strategies. Captains David R. VanDenburg and Jon D. Veith developed a mathematical model to assist in the placement analysis of A-7… [The remainder of this record is garbled bibliography fragments: Institute for Defense Analysis, Arlington VA, August 1977, AD A049979; Sugarman, Robert C., Steven L. Johnson, and William F. H. Ring, "B-1 Systems…"; USAF Cost and Planning Factors, AFR 173-13, Washington: Government Printing Office, 1 February 1982; Van Denburg, Captain David R., USAF.]

  1. The COD Model: Simulating Workgroup Performance

    Science.gov (United States)

    Biggiero, Lucio; Sevi, Enrico

    Though the question of the determinants of workgroup performance is one of the most central in organization science, precise theoretical frameworks and formal demonstrations are still missing. In order to fill this gap, the COD agent-based simulation model is presented here and used to study the effects of task interdependence and bounded rationality on workgroup performance. The first relevant finding is an algorithmic demonstration of the ordering of interdependencies in terms of complexity, showing that the parallel mode is the simplest, followed by the sequential and then by the reciprocal. This result is far from being new in organization science, but what is remarkable is that it now has the strength of an algorithmic demonstration instead of being based on the authoritativeness of some scholar or on some episodic empirical finding. The second important result is that the progressive introduction of realistic limits to agents' rationality dramatically reduces workgroup performance and points to a rather interesting result: when agents' rationality is severely bounded, simple norms work better than complex norms. The third main finding is that when the complexity of interdependence is high, the appropriate coordination mechanism is agents' direct and active collaboration, which means teamwork.

  2. Molecular modeling and simulation studies of recombinant laccase from Yersinia enterocolitica suggests significant role in the biotransformation of non-steroidal anti-inflammatory drugs

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Deepti; Rawat, Surender [Laboratory of Enzymology and Recombinant DNA Technology, Department of Microbiology, Maharshi Dayanand University, Rohtak 124001, Haryana (India); Waseem, Mohd; Gupta, Sunita; Lynn, Andrew [School of Computational & Integrative Sciences, Jawaharlal Nehru University, New Delhi 110067 (India); Nitin, Mukesh; Ramchiary, Nirala [School of Life Sciences, Jawaharlal Nehru University, New Delhi 110067 (India); Sharma, Krishna Kant, E-mail: kekulsharma@gmail.com [Laboratory of Enzymology and Recombinant DNA Technology, Department of Microbiology, Maharshi Dayanand University, Rohtak 124001, Haryana (India)

    2016-01-08

    The YacK gene from Yersinia enterocolitica strain 7, cloned in the pET28a vector and expressed in Escherichia coli BL21 (DE3), showed laccase activity when oxidized with 2,2′-azino-bis(3-ethylbenzthiazoline-6-sulfonic acid) (ABTS) and guaiacol. The recombinant laccase protein was purified and characterized biochemically, with a molecular mass of ≈58 kDa on SDS-PAGE, and showed a positive zymogram with ABTS. The protein was highly robust, with an optimum pH of 9.0, and was stable at 70 °C for up to 12 h with a residual activity of 70%. Kinetic constants (Km values) for ABTS and guaiacol were 675 μM and 2070 μM, respectively, with corresponding Vmax values of 0.125 μmol/ml/min and 6500 μmol/ml/min. It also possesses antioxidative properties in BSA and Cu²⁺/H₂O₂ model systems. Constant-pH MD simulation studies at different protonation states of the system showed ABTS to be most stable at acidic pH, and diclofenac at neutral pH. Interestingly, aspirin drifted out of the binding pocket at acidic and neutral pH, but showed stable binding at alkaline pH. The biotransformation of diclofenac and aspirin by laccase also corroborated the in silico results. This is the first report on biotransformation of non-steroidal anti-inflammatory drugs (NSAIDs) using recombinant laccase from gut bacteria, supported by in silico simulation studies. - Highlights: • Laccase from Yersinia enterocolitica strain 7 was expressed in Escherichia coli BL21 (DE3). • Recombinant laccase was found to be thermostable and alkali-tolerant. • In silico and experimental studies prove the biotransformation of NSAIDs. • Laccase binds ligands differentially under different protonation states. • Laccase also possesses free radical scavenging properties.

  4. Proceedings of the 17. IASTED international conference on modelling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wamkeue, R. (comp.) [Quebec Univ., Abitibi-Temiscaminque, PQ (Canada)]

    2006-07-01

    The International Association of Science and Technology for Development (IASTED) hosted this conference to provide a forum for international researchers and practitioners interested in all areas of modelling and simulation. The conference featured 12 sessions entitled: (1) automation, control and robotics, (2) hydraulic and hydrologic modelling, (3) applications in processes and design optimization, (4) environmental systems, (5) biomedicine and biomechanics, (6) communications, computers and informatics 1, (7) economics, management and operations research 1, (8) modelling and simulation methodologies 1, (9) economics, management and operations research 2, (10) modelling, optimization, identification and simulation, (11) communications, computers and informatics 2, and, (12) modelling and simulation methodologies 2. Participants took the opportunity to present the latest research, results, and ideas in mathematical modelling; physically-based modelling; agent-based modelling; dynamic modelling; 3-dimensional modelling; computational geometry; time series analysis; finite element methods; discrete event simulation; web-based simulation; Monte Carlo simulation; simulation optimization; simulation uncertainty; fuzzy systems; data modelling; computer aided design; and, visualization. Case studies in engineering design were also presented along with simulation tools and languages. The conference also highlighted topical issues in environmental systems modelling such as air modelling and simulation, atmospheric modelling, hazardous materials, mobile source emissions, ecosystem modelling, hydrological modelling, aquatic ecosystems, terrestrial ecosystems, biological systems, agricultural modelling, terrain analysis, meteorological modelling, earth system modelling, climatic modelling, and natural resource management. The conference featured 110 presentations, of which 3 have been catalogued separately for inclusion in this database. refs., tabs., figs.

  5. A Multiagent Modeling Environment for Simulating Work Practice in Organizations

    Science.gov (United States)

    Sierhuis, Maarten; Clancey, William J.; vanHoof, Ron

    2004-01-01

    In this paper we position Brahms as a tool for simulating organizational processes. Brahms is a modeling and simulation environment for analyzing human work practice, and for using such models to develop intelligent software agents to support the work practice in organizations. Brahms is the result of more than ten years of research at the Institute for Research on Learning (IRL), NYNEX Science & Technology (the former R&D institute of the Baby Bell telephone company in New York, now Verizon), and for the last six years at NASA Ames Research Center, in the Work Systems Design and Evaluation group, part of the Computational Sciences Division (Code IC). Brahms has been used on more than ten modeling and simulation research projects, and recently has been used as a distributed multiagent development environment for developing work practice support tools for human in-situ science exploration on planetary surfaces, in particular a human mission to Mars. Brahms was originally conceived of as a business process modeling and simulation tool that incorporates the social systems of work, by illuminating how formal process flow descriptions relate to people's actual located activities in the workplace. Our research started in the early nineties as a reaction to experiences with work process modeling and simulation. Although an effective tool for convincing management of the potential cost-savings of newly designed work processes, the modeling and simulation environment was only able to describe work as a normative workflow. However, the social systems uncovered in the work practices studied by the design team played a significant role in how work actually got done (actual, lived work). Multi-tasking, informal assistance, and circumstantial work interactions could not easily be represented in a tool with a strict workflow modeling paradigm. In response, we began to develop a tool that would have the benefits of work process modeling and simulation, but be distinctively able to

  6. Quench Simulation Studies: Program documentation of SPQR

    CERN Document Server

    Sonnemann, F

    2001-01-01

    Quench experiments are being performed on prototypes of the superconducting magnets and busbars to determine the adequate design and protection. Many tests can only be understood correctly with the help of quench simulations that model the thermo-hydraulic and electrodynamic processes during a quench. In some cases simulations are the only method to scale the experimental results of prototype measurements to match the situation of quenching superconducting elements in the LHC. This note introduces the theoretical quench model and the use of the simulation program SPQR (Simulation Program for Quench Research), which has been developed to compute the quench process in superconducting magnets and busbars. The model approximates the heat balance equation with the finite difference method including the temperature dependence of the material parameters. SPQR allows the simulation of longitudinal quench propagation along a superconducting cable, the transverse propagation between adjacent conductors, heat transfer i...
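
    A minimal sketch of a finite-difference treatment of the one-dimensional heat balance, in the spirit of the model described; SPQR itself uses temperature-dependent material parameters, whereas the constant properties and values below are invented:

    ```python
    # Explicit finite-difference sketch of a 1-D heat balance along a
    # conductor with a local hot spot. Constant material properties and
    # invented values; SPQR uses temperature-dependent parameters.
    N, LENGTH = 200, 2.0              # nodes, conductor length [m]
    dx = LENGTH / (N - 1)
    k, rho_c = 300.0, 2.0e6           # W/(m K), volumetric heat cap. J/(m^3 K)
    dt = 0.4 * rho_c * dx ** 2 / (2 * k)  # safely below explicit stability limit
    T = [4.3] * N                     # helium bath temperature [K]
    T[N // 2] = 20.0                  # hot spot initiating the quench

    def heating(Ti):
        """Joule heating above a (made-up) current-sharing temperature."""
        return 5.0e6 if Ti > 9.0 else 0.0  # [W/m^3]

    for _ in range(500):
        Tn = T[:]
        for i in range(1, N - 1):
            lap = (T[i - 1] - 2 * T[i] + T[i + 1]) / dx ** 2
            Tn[i] = T[i] + dt * (k * lap + heating(T[i])) / rho_c
        T = Tn

    normal_zone = sum(1 for Ti in T if Ti > 9.0) * dx
    print(f"normal zone length ~ {normal_zone:.3f} m")
    ```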

  7. Survey of chemically amplified resist models and simulator algorithms

    Science.gov (United States)

    Croffie, Ebo H.; Yuan, Lei; Cheng, Mosong; Neureuther, Andrew R.

    2001-08-01

    Modeling has become an indispensable tool for chemically amplified resist (CAR) evaluations. It has been used extensively to study acid diffusion and its effects on resist image formation. Several commercial and academic simulators have been developed for CAR process simulation. For commercial simulators such as PROLITH (Finle Technologies) and Solid-C (Sigma-C), the user is allowed to choose between an empirical model or a concentration-dependent diffusion model. The empirical model is faster but not very accurate for 2-dimensional resist simulations. In this case there is a trade-off between the speed of the simulator and the accuracy of the results. An academic simulator such as STORM (U.C. Berkeley) gives the user a choice of different algorithms, including a Fast Imaging 2nd-order finite difference algorithm and a Moving Boundary finite element algorithm. A user interested in simulating the volume shrinkage and polymer stress effects during post-exposure bake will need the Moving Boundary algorithm, whereas a user interested in the latent image formation without polymer deformations will find the Fast Imaging algorithm more appropriate. The Fast Imaging algorithm is generally faster and requires less computer memory. This choice of algorithm presents a trade-off between speed and level of detail in resist profile prediction. This paper surveys the different models and simulator algorithms available in the literature, including contributions to the characterization of CAR exposure and post-exposure bake (PEB) processes for different resist systems. Several numerical algorithms and their performances are also discussed.

  8. Reactive transport models and simulation with ALLIANCES

    International Nuclear Information System (INIS)

    Leterrier, N.; Deville, E.; Bary, B.; Trotignon, L.; Hedde, T.; Cochepin, B.; Stora, E.

    2009-01-01

    Many chemical processes influence the evolution of nuclear waste storage. As a result, simulations based only upon transport and hydraulic processes fail to describe adequately some industrial scenarios. We need to take into account complex chemical models (mass action laws, kinetics...) which are highly non-linear. In order to simulate the coupling of these chemical reactions with transport, we use a classical Sequential Iterative Approach (SIA), with a fixed-point algorithm, within the framework of the ALLIANCES platform. This approach allows us to use the various transport and chemical modules available in ALLIANCES, via an operator-splitting method based upon the structure of the chemical system. We present five different applications of reactive transport simulations in the context of nuclear waste storage: 1. A 2D simulation of the leaching by rain water of an underground polluted zone high in uranium oxide; 2. The degradation of the steel envelope of a package in contact with clay. Corrosion of the steel creates corrosion products and the altered package becomes a porous medium. We follow the degradation front through kinetic reactions and the coupling with transport; 3. The degradation of a cement-based material by the injection of an aqueous solution of zinc and sulphate ions. In addition to the reactive transport coupling, we take into account in this case the hydraulic feedback of the porosity variation on the Darcy velocity; 4. The decalcification of a concrete beam in an underground storage structure. In this case, in addition to the reactive transport simulation, we take into account the interaction between chemical degradation and the mechanical forces (cracks...), and the feedback of structural changes on transport; 5. The degradation of the steel envelope of a package in contact with a clay material under a temperature gradient. In this case the reactive transport simulation is entirely directed by the temperature changes and
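
    The SIA coupling described above can be sketched schematically; the transport and chemistry operators below are trivial placeholders, not the ALLIANCES modules:

    ```python
    # Schematic Sequential Iterative Approach (SIA): within each time step,
    # transport and chemistry are applied alternately until the fixed point
    # converges. The two operators are stand-ins, not the ALLIANCES API.
    def transport_step(c, dt):
        """Placeholder conservative transport over one time step."""
        return [0.9 * ci for ci in c]      # stand-in for advection/diffusion

    def chemistry_step(c, dt):
        """Placeholder chemical source term (reaction rate times dt)."""
        return [-0.1 * ci for ci in c]

    def sia_time_step(c_old, dt, tol=1e-8, max_sweeps=50):
        """Iterate transport and chemistry to the coupled fixed point."""
        reaction = [0.0] * len(c_old)
        for _ in range(max_sweeps):
            # transport sees the latest chemical source term...
            c_tr = transport_step([a + r for a, r in zip(c_old, reaction)], dt)
            # ...then chemistry is re-evaluated on the transported field
            new_reaction = chemistry_step(c_tr, dt)
            if max(abs(a - b) for a, b in zip(new_reaction, reaction)) < tol:
                return [a + r for a, r in zip(c_tr, new_reaction)]
            reaction = new_reaction
        raise RuntimeError("SIA fixed point did not converge")

    print(sia_time_step([1.0, 0.5, 0.2], dt=1.0))
    ```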

  9. A Simulation and Modeling Framework for Space Situational Awareness

    Science.gov (United States)

    Olivier, S.

    This paper describes the development and initial demonstration of a new, integrated modeling and simulation framework, encompassing the space situational awareness enterprise, for quantitatively assessing the benefit of specific sensor systems, technologies and data analysis techniques. This framework includes detailed models for threat scenarios, signatures, sensors, observables and knowledge extraction algorithms. The framework is based on a flexible, scalable architecture to enable efficient simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel computer systems available, for example, at Lawrence Livermore National Laboratory. We will describe the details of the modeling and simulation framework, including hydrodynamic models of satellite intercept and debris generation, orbital propagation algorithms, radar cross section calculations, optical and infra-red brightness calculations, generic radar system models, generic optical and infra-red system models, specific Space Surveillance Network models, object detection algorithms, orbit determination algorithms, and visualization tools. The specific modeling of the Space Surveillance Network is performed in collaboration with the Air Force Space Command Space Control Group. We will demonstrate the use of this integrated simulation and modeling framework on specific threat scenarios, including space debris and satellite maneuvers, and we will examine the results of case studies involving the addition of new sensor systems, used in conjunction with the Space Surveillance Network, for improving space situational awareness.

  10. Simulation models generator. Applications in scheduling

    Directory of Open Access Journals (Sweden)

    Omar Danilo Castrillón

    2013-08-01

    Rev. Mate. Teor. Aplic. (ISSN 1409-2433) Vol. 20(2): 231–241, July 2013. … in order to approximate reality when evaluating decisions, so that more assertive decisions can be taken. To test the prototype, a production system with 9 machines and 5 jobs in a job-shop configuration was used as the modeling example, with stochastic processing times and machine stoppages, to measure machine utilization rates and the average time of jobs in the system as measures of system performance. This test shows the usefulness of the prototype, which saves the user the work of building the simulation model.

  11. Modeling and simulation of reactive flows

    CERN Document Server

    Bortoli, De AL; Pereira, Felipe

    2015-01-01

    Modelling and Simulation of Reactive Flows presents information on modeling and how to numerically solve reactive flows. The book offers a distinctive approach that combines diffusion flames and geochemical flow problems, providing users with a comprehensive resource that bridges the gap for scientists, engineers, and the industry. Specifically, the book looks at the basic concepts related to reaction rates, chemical kinetics, and the development of reduced kinetic mechanisms. It considers the most common methods used in practical situations, along with equations for reactive flows, and va

  12. Simulation and optimization of ammonia removal at low temperature for a double channel oxidation ditch based on fully coupled activated sludge model (FCASM): a full-scale study.

    Science.gov (United States)

    Yang, Min; Sun, Peide; Wang, Ruyi; Han, Jingyi; Wang, Jianqiao; Song, Yingqi; Cai, Jing; Tang, Xiudi

    2013-09-01

    An optimal operating condition for ammonia removal at low temperature, based on the fully coupled activated sludge model (FCASM), was determined for a full-scale oxidation ditch process wastewater treatment plant (WWTP). The FCASM-based mechanistic model was calibrated and validated with data measured on site. Several important kinetic parameters of the modified model were tested through respirometry experiments. The validated model was used to evaluate the relationship between ammonia removal and operating parameters, such as temperature (T), dissolved oxygen (DO), solid retention time (SRT) and hydraulic retention time of the oxidation ditch (HRT). The simulation results showed that low temperature has a negative effect on ammonia removal. Through orthogonal simulation tests of the last three factors, combined with analysis of variance, the optimal operating values of DO, SRT and HRT for the WWTP at low temperature were 3.5 mg L(-1), 15 d and 14 h, respectively. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Suppressing correlations in massively parallel simulations of lattice models

    Science.gov (United States)

    Kelling, Jeffrey; Ódor, Géza; Gemming, Sibylle

    2017-11-01

    For lattice Monte Carlo simulations parallelization is crucial to make studies of large systems and long simulation time feasible, while sequential simulations remain the gold-standard for correlation-free dynamics. Here, various domain decomposition schemes are compared, concluding with one which delivers virtually correlation-free simulations on GPUs. Extensive simulations of the octahedron model for 2 + 1 dimensional Kardar-Parisi-Zhang surface growth, which is very sensitive to correlation in the site-selection dynamics, were performed to show self-consistency of the parallel runs and agreement with the sequential algorithm. We present a GPU implementation providing a speedup of about 30 × over a parallel CPU implementation on a single socket and at least 180 × with respect to the sequential reference.
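
    The domain-decomposition idea being compared can be illustrated with a toy two-phase block scheme on a one-dimensional ring; the paper's GPU schemes for the 2+1-dimensional octahedron model are far more elaborate:

    ```python
    import random

    # Toy domain decomposition for parallel lattice updates: in each sweep
    # only every other block of sites is active, so active blocks share no
    # neighboring sites and could be updated concurrently without conflicts.
    # Alternating the block offset between sweeps moves the seams around,
    # which is the kind of trick used to suppress update correlations.
    L, BLOCK = 64, 8
    rng = random.Random(0)
    spins = [rng.choice([-1, 1]) for _ in range(L)]

    def update_site(i):
        """Align site i with its two neighbors when they agree."""
        left, right = spins[(i - 1) % L], spins[(i + 1) % L]
        if left == right:
            spins[i] = left

    for sweep in range(100):
        offset = (sweep % 2) * BLOCK  # shift block boundaries every sweep
        for start in range(offset, L + offset, 2 * BLOCK):
            for i in range(start, start + BLOCK):  # one independent block
                update_site(i % L)

    print("magnetization:", sum(spins))
    ```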

  14. Mixing characteristics of sludge simulant in a model anaerobic digester.

    Science.gov (United States)

    Low, Siew Cheng; Eshtiaghi, Nicky; Slatter, Paul; Baudez, Jean-Christophe; Parthasarathy, Rajarathinam

    2016-03-01

    This study aims to investigate the mixing characteristics of a transparent sludge simulant in a mechanically agitated model digester using a flow visualisation technique. Video images of the flow patterns were obtained by recording the progress of an acid-base reaction and analysed to determine the active and inactive volumes as a function of time. The doughnut-shaped inactive region formed above and below the impeller in the low-concentration simulant decreases in size with time and finally disappears. The 'cavern'-shaped active mixing region formed around the impeller in simulant solutions with higher concentrations grows with increasing agitation time and reaches a steady-state equilibrium size, which is a function of the specific power input. These results indicate that the active volume is jointly determined by the simulant rheology and the specific power input. A mathematical correlation is proposed to estimate the active volume as a function of simulant concentration in terms of the yield Reynolds number.

  15. A study of the effect of overshooting deep convection on the water content of the TTL and lower stratosphere from Cloud Resolving Model simulations

    Directory of Open Access Journals (Sweden)

    D. P. Grosvenor

    2007-09-01

    Full Text Available Simulations of overshooting, tropical deep convection using a Cloud Resolving Model with bulk microphysics are presented in order to examine the effect on the water content of the TTL (Tropical Tropopause Layer) and lower stratosphere. This case study is a subproject of the HIBISCUS (Impact of tropical convection on the upper troposphere and lower stratosphere at global scale) campaign, which took place in Bauru, Brazil (22° S, 49° W), from the end of January to early March 2004.

    Comparisons between 2-D and 3-D simulations suggest that the use of 3-D dynamics is vital in order to capture the mixing between the overshoot and the stratospheric air, which caused evaporation of ice and resulted in an overall moistening of the lower stratosphere. In contrast, a dehydrating effect was predicted by the 2-D simulation due to the extra time, allowed by the lack of mixing, for the ice transported to the region to precipitate out of the overshoot air.

    Three different strengths of convection are simulated in 3-D by applying successively lower heating rates (used to initiate the convection) in the boundary layer. Moistening is produced in all cases, indicating that convective vigour is not a factor in whether moistening or dehydration is produced by clouds that penetrate the tropopause, since the weakest case only just did so. An estimate of the moistening effect of these clouds on an air parcel traversing a convective region is made based on the domain mean simulated moistening and the frequency of convective events observed by the IPMet (Instituto de Pesquisas Meteorológicas, Universidade Estadual Paulista) radar (S-band type at 2.8 GHz) to have the same 10 dBZ echo top height as those simulated. These suggest a fairly significant mean moistening of 0.26, 0.13 and 0.05 ppmv in the strongest, medium and weakest cases, respectively, for heights between 16 and 17 km. Since the cold point and WMO (World Meteorological Organization) tropopause in

  16. Molecular modeling and molecular dynamics simulation study of archaeal leucyl-tRNA synthetase in complex with different mischarged tRNA in editing conformation.

    Science.gov (United States)

    Rayevsky, A V; Sharifi, M; Tukalo, M A

    2017-09-01

    Aminoacyl-tRNA synthetases (aaRSs) play important roles in maintaining the accuracy of protein synthesis. Some aaRSs accomplish this via editing mechanisms, among which leucyl-tRNA synthetase (LeuRS) edits the non-cognate amino acid norvaline mainly by post-transfer editing. However, the molecular basis for this pathway in eukaryotic and archaeal LeuRS remains unclear. In this study, a complex of the archaeal P. horikoshii LeuRS (PhLeuRS) with misacylated tRNA(Leu) was modeled with the tRNA acceptor stem oriented directly into the editing site. To understand the distinctive features of this organization, we reconstructed a complex of PhLeuRS with tRNA and visualized the post-transfer editing interaction mode by performing molecular dynamics (MD) simulations. To study the molecular basis for substrate selectivity by the PhLeuRS editing site, we performed MD simulations of the entire LeuRS complexes using differently charged forms of tRNA, namely norvalyl-tRNA(Leu) and isoleucyl-tRNA(Leu). In general, the editing site organization of LeuRS from P. horikoshii has much in common with that of bacterial LeuRS. The MD simulation results revealed that the post-transfer editing substrate norvalyl-A76 binds more strongly than isoleucyl-A76. Moreover, the branched side chain of isoleucine prevents water molecules from approaching more closely, and hence the hydrolysis reaction slows significantly. Investigating a possible mechanism of the post-transfer editing reaction by PhLeuRS, we determined that two water molecules (the attacking and assisting water molecules) are localized near the carbonyl group of the amino acid to be cleaved off. These water molecules approach the substrate from the opposite side to that observed for Thermus thermophilus LeuRS (TtLeuRS). Based on these results, it is suggested that the post-transfer editing mechanism of PhLeuRS differs from that of prokaryotic TtLeuRS.
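
    A water-localization result like the one above is typically obtained by counting waters within a cutoff of the scissile carbonyl over the trajectory. The sketch below shows one way to do that with MDAnalysis; the file names and atom selection strings are assumptions for illustration, not details from the study.

    ```python
    # Sketch under assumptions: topology/trajectory names and selection strings
    # (residue/atom naming) are hypothetical, not from the published setup.
    import MDAnalysis as mda

    u = mda.Universe("phleurs_complex.prmtop", "phleurs_complex.dcd")  # assumed files
    carbonyl = u.select_atoms("resname NVA and name C")  # carbonyl C of norvalyl moiety (assumed naming)

    counts = []
    for ts in u.trajectory:
        # water oxygens within 3.5 A of the carbonyl carbon in this frame
        # ("OW" is a common water-oxygen name; adjust to the force field used)
        nearby = u.select_atoms("name OW and around 3.5 group carbonyl", carbonyl=carbonyl)
        counts.append(len(nearby))

    print("mean nearby waters per frame:", sum(counts) / len(counts))
    ```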

  17. Wedge Experiment Modeling and Simulation for Reactive Flow Model Calibration

    Science.gov (United States)

    Maestas, Joseph T.; Dorgan, Robert J.; Sutherland, Gerrit T.

    2017-06-01

    Wedge experiments are a typical method for generating pop-plot data (run-to-detonation distance versus input shock pressure), which are used to assess an explosive material's initiation behavior. Such data can be used to calibrate reactive flow models by running hydrocode simulations and successively tweaking model parameters until a match with experiment is achieved. These simulations are typically performed in 1D and use a flyer impact to achieve the prescribed shock loading pressure. In this effort, a wedge experiment performed at the Army Research Lab (ARL) was modeled using CTH (an SNL hydrocode) in 1D, 2D, and 3D space in order to determine whether there was any justification for using simplified models. A simulation was also performed using the BCAT code (a CTH companion tool), which assumes plate-impact shock loading. Results from the simulations were compared to experimental data and show that the shock imparted into an explosive specimen is accurately captured with 2D and 3D simulations, but changes significantly in 1D space and with the BCAT tool. The difference in shock profile is shown to affect numerical predictions only for large run distances. This is attributed to incorrectly capturing the energy fluence for detonation waves versus flat shock loading. Portions of this work were funded through the Joint Insensitive Munitions Technology Program.
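
    Pop-plot data are conventionally fit as a straight line in log-log space, log10(run distance) = a + b·log10(pressure). The sketch below shows such a calibration-style fit with numpy; the sample data points are invented for illustration and are not ARL wedge-test data.

    ```python
    # Illustrative only: pressures (GPa) and run-to-detonation distances (mm)
    # below are made-up sample points.
    import numpy as np

    pressure_gpa = np.array([3.0, 5.0, 8.0, 12.0])
    run_dist_mm = np.array([14.0, 6.5, 3.0, 1.6])

    # Least-squares line in log-log space: log10(x*) = a + b * log10(P)
    b, a = np.polyfit(np.log10(pressure_gpa), np.log10(run_dist_mm), 1)
    print(f"log10(x*) = {a:.2f} + {b:.2f} * log10(P)")

    # Predicted run distance at an input shock of 10 GPa
    print("x*(10 GPa) =", 10 ** (a + b * np.log10(10.0)), "mm")
    ```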

  18. SIMULATION MODEL BASED ON REGIONAL DEVELOPMENT AND VIRTUAL CHANGES

    Directory of Open Access Journals (Sweden)

    Petr Dlask

    2015-10-01

    This paper reports on change as an indicator that can provide more focused goals in studies of development. The paper offers an answer to the question: how might management gain information from a simulation model and thus influence reality through pragmatic changes? We focus on where and when to influence, manage, and control basic technical-economic proposals. These proposals are mostly formed as simulation models; unfortunately, however, such models do not always provide an explanation of how changes take shape. A wide variety of simulation tools have become available, e.g. Simulink, Wolfram SystemModeler, VisSim, SystemBuild, STELLA, Adams, SIMSCRIPT, COMSOL Multiphysics, etc. However, there is only limited support for the construction of simulation models of a technical-economic nature. Mathematics has developed the concept of differentiation. Economics has developed the concept of marginality. Technical-economic design has yet to develop an equivalent methodology. This paper discusses an alternative approach that uses the phenomenon of change and provides a way from professional knowledge, which can be seen as a purer kind of information, to a more dynamic computing model (a simulation model) that interprets change as a method. The validation of changes, as both a result and a condition of managerial decision making, can thus be improved.

  19. Comparison of population-averaged and cluster-specific models for the analysis of cluster randomized trials with missing binary outcomes: a simulation study

    Directory of Open Access Journals (Sweden)

    Ma Jinhui

    2013-01-01

    Background: The objective of this simulation study is to compare the accuracy and efficiency of population-averaged (i.e. generalized estimating equations, GEE) and cluster-specific (i.e. random-effects logistic regression, RELR) models for analyzing data from cluster randomized trials (CRTs) with missing binary responses. Methods: In this simulation study, clustered responses were generated from a beta-binomial distribution. The number of clusters per trial arm, the number of subjects per cluster, the intra-cluster correlation coefficient, and the percentage of missing data were allowed to vary. Under the assumption of covariate-dependent missingness, missing outcomes were handled by complete case analysis, standard multiple imputation (MI), and within-cluster MI strategies. Data were analyzed using GEE and RELR. Performance of the methods was assessed using standardized bias, empirical standard error, root mean squared error (RMSE), and coverage probability. Results: GEE performs well on all four measures, provided that the downward bias of the standard error (which arises when the number of clusters per arm is small) is adjusted appropriately, under the following scenarios: complete case analysis for CRTs with a small amount of missing data; standard MI for CRTs with a variance inflation factor (VIF) < 3; and within-cluster MI for CRTs with VIF ≥ 3 and cluster size > 50. RELR performs well only when a small amount of data was missing and complete case analysis was applied. Conclusion: GEE performs well as long as appropriate missing data strategies are adopted, based on the design of the CRT and the percentage of missing data. In contrast, RELR does not perform well when either the standard or the within-cluster MI strategy is applied prior to the analysis.
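
    To reproduce the flavor of this comparison, the sketch below simulates clustered binary outcomes with cluster-level random effects (one simple route to over-dispersed, beta-binomial-like data) and fits a population-averaged model with statsmodels' GEE. The sample sizes and effect values are arbitrary assumptions, not the authors' simulation protocol.

    ```python
    # Minimal sketch: cluster counts, effect sizes, and the random-effect route
    # to within-cluster correlation are assumed for illustration.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n_clusters, cluster_size = 20, 30
    rows = []
    for c in range(n_clusters):
        arm = c % 2                                   # alternate treatment/control clusters
        u = rng.normal(0, 0.5)                        # cluster random effect -> ICC
        p = 1 / (1 + np.exp(-(-0.5 + 0.8 * arm + u)))
        for _ in range(cluster_size):
            rows.append((c, arm, rng.binomial(1, p)))
    df = pd.DataFrame(rows, columns=["cluster", "arm", "y"])

    X = sm.add_constant(df["arm"])
    gee = sm.GEE(df["y"], X, groups=df["cluster"],
                 family=sm.families.Binomial(),
                 cov_struct=sm.cov_struct.Exchangeable())
    print(gee.fit().summary())
    ```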

  20. Microgrid Modeling and Simulation Study

    Science.gov (United States)

    2016-09-01

    ... sustainability while reducing setup time, maintenance requirements, manpower requirements, logistical transport, and life-cycle cost. ... prior to making acquisition decisions. ... All communication layers (the physical/data link layer, the transport layer, and the application layer) must be standardized. Within the ...

  1. a Discrete Mathematical Model to Simulate Malware Spreading

    Science.gov (United States)

    Del Rey, A. Martin; Sánchez, G. Rodriguez

    2012-10-01

    With the advent and worldwide development of the Internet, the study and control of malware spreading has become very important. Several mathematical models to simulate malware propagation have been proposed in the scientific literature, usually based on differential equations that exploit the similarities with mathematical epidemiology. The great majority of these models study the behavior of a particular type of malware called computer worms; indeed, to the best of our knowledge, no model has been proposed to simulate the spreading of a computer virus (the traditional type of malware, which differs from computer worms in several respects). The purpose of this work is therefore to introduce a new mathematical model, based not on continuous but on discrete mathematical tools, to analyze and study the epidemic behavior of computer viruses. Specifically, cellular automata are used to design the model.
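
    As a toy illustration of the cellular-automaton approach (not the authors' actual transition rules, which the abstract does not specify), the sketch below spreads an infection across a grid of machines: each susceptible cell becomes infected with a probability that grows with the number of infected Moore neighbors, and infected cells eventually recover.

    ```python
    # Toy cellular automaton for epidemic-style malware spread; the grid size,
    # infection/recovery probabilities, and neighborhood rule are all assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    S, I, R = 0, 1, 2                      # susceptible, infected, recovered/immune
    grid = np.zeros((50, 50), dtype=int)
    grid[25, 25] = I                       # single initially infected machine

    def step(g, p_inf=0.2, p_rec=0.05):
        infected = (g == I).astype(int)
        # count infected Moore neighbors via shifted copies (toroidal wrap)
        nbrs = sum(np.roll(np.roll(infected, dy, 0), dx, 1)
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
        new = g.copy()
        catch = (g == S) & (rng.random(g.shape) < 1 - (1 - p_inf) ** nbrs)
        recover = (g == I) & (rng.random(g.shape) < p_rec)
        new[catch] = I
        new[recover] = R
        return new

    for t in range(100):
        grid = step(grid)
    print("infected:", (grid == I).sum(), "recovered:", (grid == R).sum())
    ```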

  2. Integrating Visualizations into Modeling NEST Simulations.

    Science.gov (United States)

    Nowke, Christian; Zielasko, Daniel; Weyers, Benjamin; Peyser, Alexander; Hentschel, Bernd; Kuhlen, Torsten W

    2015-01-01

    Modeling large-scale spiking neural networks that show realistic biological behavior in their dynamics is a complex and tedious task. Since these networks consist of millions of interconnected neurons, their simulation produces an immense amount of data. In recent years it has become possible to simulate even larger networks. However, solutions to assist researchers in understanding the simulation's complex emergent behavior by means of visualization are still lacking. While developing tools to partially fill this gap, we encountered the challenge of integrating these tools easily into the neuroscientists' daily workflow. To understand what makes this so challenging, we looked into the workflows of our collaborators and analyzed how they use visualizations to solve their daily problems. We identified two major issues: first, the analysis process can rapidly change focus, which requires switching the visualization tool that assists in the current problem domain. Second, because of the heterogeneous data that result from simulations, researchers want to relate different data modalities in order to investigate them effectively. Since a monolithic application, processing and visualizing all data modalities and reflecting all combinations of possible workflows in a holistic way, is most likely impossible to develop and maintain, a software architecture that offers specialized visualization tools which run simultaneously and can be linked together to reflect the current workflow is a more feasible approach. To this end, we have developed a software architecture that allows neuroscientists to integrate visualization tools more closely into their modeling tasks. In addition, it forms the basis for semantic linking of different visualizations to reflect the current workflow. In this paper, we present this architecture and substantiate the usefulness of our approach with common use cases we encountered in our collaborative work.

  5. Desktop Modeling and Simulation: Parsimonious, yet Effective Discrete-Event Simulation Analysis

    Science.gov (United States)

    Bradley, James R.

    2012-01-01

    This paper evaluates how quickly students can be trained to construct useful discrete-event simulation models using Excel. The typical supply chain used by many large national retailers is described, and an Excel-based simulation model of it is constructed. The set of programming and simulation skills required to develop that model is then determined; we conclude that six hours of training are sufficient to teach those skills to MBA students. The simulation presented here contains all the fundamental functionality of a simulation model, so our result holds for any discrete-event simulation model. We argue, therefore, that industry workers with the same technical skill set as students who have completed one year of an MBA program can be quickly trained to construct simulation models. This result gives credence to the efficacy of Desktop Modeling and Simulation, whereby simulation analyses can be quickly developed, run, and analyzed with widely available software, namely Excel.
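
    The "fundamental functionality" referred to above is the classic event-list loop of discrete-event simulation. A minimal single-server queue in that style is sketched below (in Python rather than Excel, purely for compactness); the arrival and service rates and the time horizon are arbitrary illustrative choices.

    ```python
    # Minimal discrete-event simulation of an M/M/1-style single-server queue.
    import heapq, random

    random.seed(0)
    ARRIVAL_RATE, SERVICE_RATE, HORIZON = 1.0, 1.2, 1000.0

    events = [(random.expovariate(ARRIVAL_RATE), "arrival")]  # (time, kind) event list
    queue_len, busy, served, t = 0, False, 0, 0.0

    while events:
        t, kind = heapq.heappop(events)
        if t > HORIZON:
            break
        if kind == "arrival":
            # schedule the next arrival, then seize the server or join the queue
            heapq.heappush(events, (t + random.expovariate(ARRIVAL_RATE), "arrival"))
            if busy:
                queue_len += 1
            else:
                busy = True
                heapq.heappush(events, (t + random.expovariate(SERVICE_RATE), "departure"))
        else:  # departure: release the server or start the next waiting customer
            served += 1
            if queue_len > 0:
                queue_len -= 1
                heapq.heappush(events, (t + random.expovariate(SERVICE_RATE), "departure"))
            else:
                busy = False

    print(f"served {served} customers by t = {t:.1f}")
    ```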

  6. A practical guide for operational validation of discrete simulation models

    Directory of Open Access Journals (Sweden)

    Fabiano Leal

    2011-04-01

    As the number of simulation experiments increases, the need to validate and verify these models demands special attention from simulation practitioners. The current scientific literature shows that the descriptions of operational validation presented in many papers do not agree on the importance assigned to this process or on the techniques applied, whether subjective or objective. With the aim of orienting professionals, researchers and students in simulation, this article elaborates a practical guide by compiling statistical techniques for the operational validation of discrete simulation models. Finally, the guide's applicability was evaluated using two objects of study, representing two manufacturing cells: one from the automobile industry and the other from a Brazilian tech company. For each application, the guide identified distinct steps, due to the different aspects that characterize the analyzed distributions.
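
    A typical objective technique in such a guide is a two-sample comparison between real-system output and model output. The sketch below applies a Welch t-test and a Kolmogorov-Smirnov test with scipy; the data arrays are invented placeholders standing in for measured and simulated cycle times.

    ```python
    # Sketch of objective operational validation: compare observed cycle times
    # against simulated ones. The data arrays are invented placeholders.
    import numpy as np
    from scipy import stats

    real = np.array([12.1, 11.8, 12.6, 13.0, 12.2, 11.9, 12.4, 12.8])
    simulated = np.array([12.3, 12.0, 12.9, 12.5, 12.1, 12.7, 12.2, 12.6])

    t_stat, t_p = stats.ttest_ind(real, simulated, equal_var=False)  # Welch t-test on means
    ks_stat, ks_p = stats.ks_2samp(real, simulated)                  # distributional check

    print(f"t-test p = {t_p:.3f}, KS p = {ks_p:.3f}")
    # Large p-values fail to reject equality, supporting (not proving) model validity.
    ```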

  7. The folding mechanism and key metastable state identification of the PrP127-147 monomer studied by molecular dynamics simulations and Markov state model analysis.

    Science.gov (United States)

    Zhou, Shuangyan; Wang, Qianqian; Wang, Yuwei; Yao, Xiaojun; Han, Wei; Liu, Huanxiang

    2017-05-10

    The structural transition of prion proteins from a native α-helix (PrP(C)) to a misfolded β-sheet-rich conformation (PrP(Sc)) is believed to be the main cause of a number of prion diseases in humans and animals. Understanding the molecular basis of misfolding and aggregation of prion proteins will be valuable for unveiling the etiology of prion diseases. However, due to the limitations of conventional experimental techniques and the heterogeneous nature of oligomers, little is known about the molecular architecture of misfolded PrP(Sc) and the mechanism of the structural transition from PrP(C) to PrP(Sc). The prion fragment 127-147 (PrP127-147) has been reported to be a critical region for PrP(Sc) formation in Gerstmann-Straussler-Scheinker (GSS) syndrome and has thus been used as a model for the study of prion aggregation. In the present study, we employ molecular dynamics (MD) simulation techniques to study the conformational change of this fragment that could be relevant to the PrP(C)-PrP(Sc) transition. Employing extensive replica exchange molecular dynamics (REMD) and conventional MD simulations, we sample a large number of conformations of PrP127-147. Using a Markov state model (MSM), we identify the metastable conformational states of this fragment and the kinetic network of transitions between the states. The resulting MSM reveals that disordered random-coil conformations are the dominant structures. A key metastable folded state with typical extended β-sheet structures is identified, with Pro137 located in a turn region, consistent with a previous experimental report. Conformational analysis reveals that intrapeptide hydrophobic interactions and two key residue interactions, Arg136-His140 and Pro137-His140, contribute substantially to the formation of ordered extended β-sheet states. However, network pathway analysis from the most populated disordered state indicates that the formation of extended β-sheet states is quite slow (on the millisecond timescale).
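
    At its core, building a Markov state model means counting transitions between discretized conformational states at a chosen lag time, row-normalizing the count matrix, and reading kinetics off the resulting transition matrix. The sketch below does this for a toy state sequence and extracts the stationary distribution from the leading eigenvector; the discrete trajectory is invented, standing in for clustered MD conformations.

    ```python
    # Toy Markov state model estimation; the discretized trajectory is invented.
    import numpy as np

    rng = np.random.default_rng(2)
    traj = rng.integers(0, 3, size=10_000)   # placeholder discrete state trajectory
    n_states, lag = 3, 10

    # Count transitions at the chosen lag time, then row-normalize.
    C = np.zeros((n_states, n_states))
    for a, b in zip(traj[:-lag], traj[lag:]):
        C[a, b] += 1
    T = C / C.sum(axis=1, keepdims=True)

    # Stationary distribution: left eigenvector of T with eigenvalue 1.
    w, v = np.linalg.eig(T.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi /= pi.sum()
    print("transition matrix:\n", np.round(T, 3))
    print("stationary distribution:", np.round(pi, 3))
    ```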

  8. A systematic Monte Carlo simulation study of the primitive model planar electrical double layer over an extended range of concentrations, electrode charges, cation diameters and valences

    Directory of Open Access Journals (Sweden)

    Mónika Valiskó

    2018-02-01

    The purpose of this study is to provide data for the primitive model of the planar electrical double layer, in which ions are modeled as charged hard spheres, the solvent as an implicit dielectric background (with dielectric constant ϵ = 78.5), and the electrode as a smooth, uniformly charged, hard wall. We use canonical and grand canonical Monte Carlo simulations to compute the concentration profiles, from which the electric field and electrostatic potential profiles are obtained by solving Poisson's equation. We report data for an extended range of parameters, including 1:1, 2:1, and 3:1 electrolytes at concentrations c = 0.0001 to 1 M near electrodes carrying surface charges up to σ = ±0.5 C m−2. The anions are monovalent with a fixed diameter d− = 3 Å, while the charge and diameter of the cations are varied in the ranges z+ = 1, 2, 3 and d+ = 1.5, 3, 6, and 9 Å (the temperature is 298.15 K). We provide all the raw data in the supplementary material.
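
    The step from simulated concentration profiles to the potential profile is a double integration of Poisson's equation: the field is the cumulative integral of the charge density (plus the wall-charge contribution), and the potential is minus the cumulative integral of the field. A minimal numpy sketch follows, with a made-up exponential charge-density profile standing in for the simulated one.

    ```python
    # Sketch of obtaining E(z) and phi(z) from a charge density profile rho(z)
    # by integrating Poisson's equation twice; rho here is a made-up profile
    # chosen so the diffuse charge exactly balances the wall charge.
    import numpy as np

    EPS0 = 8.854e-12          # vacuum permittivity, F/m
    EPS_R = 78.5              # implicit-solvent dielectric constant
    sigma = 0.1               # electrode surface charge, C/m^2 (assumed)

    z = np.linspace(0, 5e-9, 1000)              # distance from electrode, m
    rho = -1.0e8 * np.exp(-z / 1e-9)            # placeholder charge density, C/m^3
    dz = z[1] - z[0]

    # E(z) = (sigma + integral_0^z rho dz') / (eps0 * eps_r)
    E = (sigma + np.cumsum(rho) * dz) / (EPS0 * EPS_R)
    # phi(z) = -integral_0^z E dz', referenced to phi = 0 at the electrode
    phi = -np.cumsum(E) * dz

    print(f"potential drop across the profile: {phi[-1] - phi[0]:.3f} V")
    ```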

  9. Assessing phylogenetic accuracy : a simulation study

    NARCIS (Netherlands)

    Heijerman, T.

    1995-01-01

    A simulation model of phylogeny, called GENESIS, was developed to evaluate and to estimate the qualities of various numerical taxonomic procedures. The model produces sets of imaginary species with known character state distributions and with known phylogenies. The model can be made to

  10. Quantification and reduction of the collimator-detector response effect in SPECT by applying a system model during iterative image reconstruction: a simulation study.

    Science.gov (United States)

    Kalantari, Faraz; Rajabi, Hossein; Saghari, Mohsen

    2012-03-01

    Detector blurring and non-ideal collimation decrease the spatial resolution of single-photon emission computed tomography (SPECT) images. Iterative reconstruction algorithms such as ordered subsets expectation maximization (OSEM) can incorporate these degrading factors during reconstruction. We investigated the quantitative errors associated with poor SPECT resolution and evaluated the importance of two-dimensional (2D) and three-dimensional (3D) resolution recovery by modeling the system response during iterative image reconstruction. Several phantoms were used in this study: the NURBS-based cardiac-torso (NCAT) liver phantom with small tumors, the Zubal brain phantom, and the NCAT heart phantom. Monte Carlo simulation was used to create SPECT projections. Gaussian functions were used to model the collimator-detector response (CDR), and the modeled CDRs were applied during OSEM reconstruction. Both noise-free and noisy projections were created. Even with noise-free projections, the conventional OSEM algorithm provided limited quantitative accuracy compared with both 2D and 3D resolution recovery. The 3D implementation of resolution recovery, however, yielded superior results compared to its 2D implementation. For the liver phantom, the ability to distinguish small tumors in both the transverse and axial planes was improved. For the brain phantom, the gray-to-white-matter activity ratio was increased from 3.14 ± 0.04 with simple OSEM to 3.84 ± 0.06 with 3D OSEM. For the NCAT heart phantom, 3D resolution recovery results in images with thinner walls and higher contrast at different noise levels. There are considerable quantitative errors associated with the CDR, especially when the size of the target is comparable with the spatial resolution of the system. Among the reconstruction algorithms considered, 3D OSEM, which accounts for the 3D nature of the CDR, improves both the visual quality and the quantitative accuracy of SPECT studies.
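
    Resolution recovery of this kind amounts to including the blur in both the forward and back projection steps of the iterative update. The sketch below shows the idea in a stripped-down MLEM loop where a stationary Gaussian filter stands in for the full collimator-detector response; this is a deliberate simplification, since a real CDR is distance-dependent and the real system model includes projection geometry, attenuation, and scatter.

    ```python
    # Stripped-down MLEM with a Gaussian PSF as the system model (conceptual only).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(3)
    true_img = np.zeros((64, 64))
    true_img[28:36, 28:36] = 1.0                       # toy hot square

    SIGMA = 2.0
    forward = lambda x: gaussian_filter(x, SIGMA)      # A: blur as the system model
    backward = forward                                 # symmetric kernel is self-adjoint

    data = rng.poisson(forward(true_img) * 100) / 100  # noisy "projection" data
    x = np.ones_like(data)                             # uniform initial estimate
    sens = backward(np.ones_like(data))                # A^T 1 normalization

    for _ in range(50):
        ratio = data / np.maximum(forward(x), 1e-12)   # y / Ax
        x *= backward(ratio) / sens                    # MLEM multiplicative update

    print("peak recovered value:", x.max())
    ```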

  11. Best Practices for Crash Modeling and Simulation

    Science.gov (United States)

    Fasanella, Edwin L.; Jackson, Karen E.

    2002-01-01

    Aviation safety can be greatly enhanced by the expeditious use of computer simulations of crash impact. Unlike automotive impact testing, which is now routine, experimental crash tests of even small aircraft are expensive and complex due to the high cost of the aircraft and the myriad of crash impact conditions that must be considered. Ultimately, the goal is to utilize full-scale crash simulations of aircraft for design evaluation and certification. The objective of this publication is to describe "best practices" for modeling aircraft impact using explicit nonlinear dynamic finite element codes such as LS-DYNA, DYNA3D, and MSC.Dytran. Although "best practices" is somewhat relative, it is hoped that the authors' experience will help others to avoid some of the common pitfalls in modeling that are not documented in one single publication. In addition, a discussion of experimental data analysis, digital filtering, and test-analysis correlation is provided. Finally, some examples of aircraft crash simulations are described in several appendices following the main report.
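
    Digital filtering of the kind mentioned above is commonly done with a phase-corrected low-pass Butterworth filter, the basis of the SAE J211 channel frequency classes used for impact data. The sketch below applies one with scipy; the acceleration trace, sampling rate, and cutoff frequency are all invented for illustration.

    ```python
    # Zero-phase low-pass Butterworth filtering of a crash acceleration trace,
    # in the spirit of SAE J211 CFC filtering; signal and cutoff are invented.
    import numpy as np
    from scipy.signal import butter, filtfilt

    FS = 10_000.0                      # sampling rate, Hz (assumed)
    t = np.arange(0, 0.2, 1 / FS)
    accel = (50 * np.exp(-((t - 0.05) / 0.01) ** 2)
             + np.random.default_rng(4).normal(0, 5, t.size))   # pulse + noise, g

    CUTOFF_HZ = 300.0                  # roughly CFC 180-class filtering (assumed)
    b, a = butter(2, CUTOFF_HZ / (FS / 2))   # 2nd-order low-pass design
    filtered = filtfilt(b, a, accel)         # forward-backward pass -> zero phase lag

    print("raw peak:", accel.max().round(1), "g;  filtered peak:", filtered.max().round(1), "g")
    ```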

  12. Effects of agriculture crop residue burning on aerosol properties and long-range transport over northern India: A study using satellite data and model simulations

    Science.gov (United States)

    Vijayakumar, K.; Safai, P. D.; Devara, P. C. S.; Rao, S. Vijaya Bhaskara; Jayasankar, C. K.

    2016-09-01

    Agriculture crop residue burning in the tropics is a major source of global atmospheric aerosols, and monitoring their long-range transport is an important element in climate change studies. In this paper, we study the effects of agriculture crop residue burning on aerosol properties and long-range transport over northern India during a smoke event that occurred between 09 and 17 November 2013, with the help of satellite measurements and model simulation data. Satellite observations of aerosol properties indicated transport of particles from crop residue burning in the Indo-Gangetic Plains (IGP) over large regions. Additionally, ECMWF winds at 850 hPa were used to trace the source, path, and spatial extent of the smoke events. During the study period, most of the smoke aerosols travelled along a west-to-east pathway from the source to the sink region. Furthermore, aerosol vertical profiles from CALIPSO show a layer of thick smoke extending from the surface to an altitude of about 3 km. Smoke aerosols emitted from biomass burning activity in Punjab were found to be a major contributor to the deterioration of local air quality over the NE Indian region due to their long-range transport.

  13. Simulation Tools Model Icing for Aircraft Design

    Science.gov (United States)

    2012-01-01

    Here's a simple science experiment to try: place an unopened bottle of distilled water in your freezer. After 2-3 hours, if the water is pure enough, you will notice that it has not frozen. Carefully pour the water into a bowl with a piece of ice in it. When it strikes the ice, the water will instantly freeze. One of the most basic and commonly known scientific facts is that water freezes at around 32 °F. But this is not always the case. Water lacking any impurities for ice crystals to form around can be supercooled to even lower temperatures without freezing. High in the atmosphere, water droplets can achieve this delicate, supercooled state. When a plane flies through clouds containing these droplets, the water can strike the airframe and, like the supercooled water hitting the ice in the experiment above, freeze instantly. The ice buildup alters the aerodynamics of the plane, reducing lift and increasing drag, affecting its performance and presenting a safety issue if the plane can no longer fly effectively. In certain circumstances, ice can form inside aircraft engines, another potential hazard. NASA has long studied ways of detecting and countering atmospheric icing conditions as part of the Agency's efforts to enhance aviation safety. To do this, the Icing Branch at Glenn Research Center utilizes a num