WorldWideScience

Sample records for simulator theory manual

  1. MSTS - Multiphase Subsurface Transport Simulator theory manual

    International Nuclear Information System (INIS)

    White, M.D.; Nichols, W.E.

    1993-05-01

    The US Department of Energy, through the Yucca Mountain Site Characterization Project Office, has designated the Yucca Mountain site in Nevada for detailed study as the candidate US geologic repository for spent nuclear fuel and high-level radioactive waste. Site characterization will determine the suitability of the Yucca Mountain site for the potential waste repository. If the site is determined suitable, subsequent studies and characterization will be conducted to obtain authorization from the Nuclear Regulatory Commission to construct the potential waste repository. A principal component of the characterization and licensing processes involves numerically predicting the thermal and hydrologic response of the subsurface environment of the Yucca Mountain site to the potential repository over a 10,000-year period. The thermal and hydrologic response of the subsurface environment to the repository is anticipated to include complex processes of countercurrent vapor and liquid migration, multiple-phase heat transfer, multiple-phase transport, and geochemical reactions. Numerical simulators based on mathematical descriptions of these subsurface phenomena are required to make numerical predictions of the thermal and hydrologic response of the Yucca Mountain subsurface environment. The engineering simulator called the Multiphase Subsurface Transport Simulator (MSTS) was developed at the request of the Yucca Mountain Site Characterization Project Office to produce numerical predictions of subsurface flow and transport phenomena at the potential Yucca Mountain site. This document delineates the design architecture and describes the specific computational algorithms that compose MSTS. Details for using MSTS and sample problems are given in the "User's Guide and Reference" companion document.

  2. Salinas : theory manual.

    Energy Technology Data Exchange (ETDEWEB)

    Walsh, Timothy Francis; Reese, Garth M.; Bhardwaj, Manoj Kumar

    2004-08-01

    This manual describes the theory behind many of the constructs in Salinas. For a more detailed description of how to use Salinas, we refer the reader to Salinas, User's Notes. Many of the constructs in Salinas are pulled directly from published material. Where possible, these materials are referenced herein. However, certain functions in Salinas are specific to our implementation. We try to be far more complete in those areas. The theory manual was developed from several sources including general notes, a programmer notes manual, the user's notes and of course the material in the open literature.

  3. CTF Theory Manual

    Energy Technology Data Exchange (ETDEWEB)

    Avramova, Maria N. [Pennsylvania State Univ., University Park, PA (United States); Salko, Robert K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-05-25

    Coolant-Boiling in Rod Arrays–Two Fluids (COBRA-TF) is a thermal/hydraulic (T/H) simulation code designed for light water reactor (LWR) vessel analysis. It uses a two-fluid, three-field (i.e., fluid film, fluid drops, and vapor) modeling approach. Both sub-channel and 3D Cartesian forms of the nine conservation equations are available for LWR modeling. The code was originally developed by Pacific Northwest Laboratory in 1980 and has been used and modified by several institutions over the last few decades. COBRA-TF also found use at the Pennsylvania State University (PSU) by the Reactor Dynamics and Fuel Management Group (RDFMG) and has been improved, updated, and subsequently re-branded as CTF. As part of the improvement process, it was necessary to generate sufficient documentation for the open-source code, which had lacked such material upon being adopted by RDFMG. This document serves mainly as a theory manual for CTF, detailing the many two-phase heat transfer, drag, and important accident scenario models contained in the code, as well as the numerical solution process utilized. Coding of the models is also discussed, all with consideration for updates that have been made when transitioning from COBRA-TF to CTF. Further documentation outside of this manual is also available at RDFMG, focusing on code input deck generation and source code global variable and module listings.

  4. RELAP-7 Theory Manual

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Ray Alden [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zou, Ling [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhao, Haihua [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhang, Hongbin [Idaho National Lab. (INL), Idaho Falls, ID (United States); Peterson, John William [Idaho National Lab. (INL), Idaho Falls, ID (United States); Martineau, Richard Charles [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kadioglu, Samet Yucel [Idaho National Lab. (INL), Idaho Falls, ID (United States); Andrs, David [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-03-01

    This document summarizes the physical models and mathematical formulations used in the RELAP-7 code. The MOOSE-based RELAP-7 code is under ongoing development, and the MOOSE framework enables rapid development of the code. The developmental efforts and results demonstrate that the RELAP-7 project is on a path to success. This theory manual documents the main features implemented into the RELAP-7 code. Because the code is an ongoing development effort, this RELAP-7 Theory Manual will evolve with periodic updates to keep it current with the state of the development, implementation, and model additions/revisions.

  5. Salinas : theory manual.

    Energy Technology Data Exchange (ETDEWEB)

    Walsh, Timothy Francis; Reese, Garth M.; Bhardwaj, Manoj Kumar

    2011-11-01

    Salinas provides a massively parallel implementation of structural dynamics finite element analysis, required for high fidelity, validated models used in modal, vibration, static and shock analysis of structural systems. This manual describes the theory behind many of the constructs in Salinas. For a more detailed description of how to use Salinas, we refer the reader to Salinas, User's Notes. Many of the constructs in Salinas are pulled directly from published material. Where possible, these materials are referenced herein. However, certain functions in Salinas are specific to our implementation. We try to be far more complete in those areas. The theory manual was developed from several sources including general notes, a programmer notes manual, the user's notes and of course the material in the open literature.

  6. SAM Theory Manual

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Rui [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-03-01

    The System Analysis Module (SAM) is an advanced and modern system analysis tool being developed at Argonne National Laboratory under the U.S. DOE Office of Nuclear Energy’s Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. SAM development aims for advances in physical modeling, numerical methods, and software engineering to enhance its user experience and usability for reactor transient analyses. To facilitate the code development, SAM utilizes an object-oriented application framework (MOOSE), and its underlying meshing and finite-element library (libMesh) and linear and non-linear solvers (PETSc), to leverage modern advanced software environments and numerical methods. SAM focuses on modeling advanced reactor concepts such as SFRs (sodium fast reactors), LFRs (lead-cooled fast reactors), and FHRs (fluoride-salt-cooled high temperature reactors) or MSRs (molten salt reactors). These advanced concepts are distinguished from light-water reactors in their use of single-phase, low-pressure, high-temperature, and low Prandtl number (sodium and lead) coolants. As a new code development, the initial effort has been focused on modeling and simulation capabilities of heat transfer and single-phase fluid dynamics responses in Sodium-cooled Fast Reactor (SFR) systems. The system-level simulation capabilities of fluid flow and heat transfer in general engineering systems and typical SFRs have been verified and validated. This document provides the theoretical and technical basis of the code to help users understand the underlying physical models (such as governing equations, closure models, and component models), system modeling approaches, numerical discretization and solution methods, and the overall capabilities in SAM. As the code is still under ongoing development, this SAM Theory Manual will be updated periodically to keep it consistent with the state of the development.

  7. SAFSIM theory manual: A computer program for the engineering simulation of flow systems

    Energy Technology Data Exchange (ETDEWEB)

    Dobranich, D.

    1993-12-01

    SAFSIM (System Analysis Flow SIMulator) is a FORTRAN computer program for simulating the integrated performance of complex flow systems. SAFSIM provides sufficient versatility to allow the engineering simulation of almost any system, from a backyard sprinkler system to a clustered nuclear reactor propulsion system. In addition to versatility, speed and robustness are primary SAFSIM development goals. SAFSIM contains three basic physics modules: (1) a fluid mechanics module with flow network capability; (2) a structure heat transfer module with multiple convection and radiation exchange surface capability; and (3) a point reactor dynamics module with reactivity feedback and decay heat capability. Any or all of the physics modules can be implemented, as the problem dictates. SAFSIM can be used for compressible and incompressible, single-phase, multicomponent flow systems. Both the fluid mechanics and structure heat transfer modules employ a one-dimensional finite element modeling approach. This document contains a description of the theory incorporated in SAFSIM, including the governing equations, the numerical methods, and the overall system solution strategies.
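
    The abstract above mentions a point reactor dynamics module with reactivity feedback and decay heat capability. The short Python sketch below illustrates the kind of point-kinetics balance such a module integrates, using a single delayed-neutron group and a constant reactivity step; the parameter values and the explicit Euler integrator are illustrative assumptions, not SAFSIM's actual data or numerics.

      # Point reactor kinetics with one delayed-neutron group (illustrative
      # parameters, not SAFSIM data). Explicit Euler keeps the sketch short;
      # a production code would use a stiff/implicit integrator.
      beta, lam, Lam = 0.0065, 0.08, 1.0e-4   # delayed fraction, precursor decay const [1/s], generation time [s]
      rho = 0.001                             # constant reactivity step [dk/k]

      dt, t_end = 1.0e-5, 0.1                 # time step and end time [s]
      n = 1.0                                 # relative neutron density
      c = beta * n / (lam * Lam)              # precursor concentration at steady state

      t = 0.0
      while t < t_end:
          dn = ((rho - beta) / Lam) * n + lam * c
          dc = (beta / Lam) * n - lam * c
          n, c, t = n + dt * dn, c + dt * dc, t + dt

      print(f"relative power after {t_end:.2f} s: {n:.3f}")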

  8. MASTODON Theory Manual

    Energy Technology Data Exchange (ETDEWEB)

    Coleman, Justin [Idaho National Lab. (INL), Idaho Falls, ID (United States); Slaughter, Andrew [Idaho National Lab. (INL), Idaho Falls, ID (United States); Veeraraghavan, Swetha [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bolisetti, Chandrakanth [Idaho National Lab. (INL), Idaho Falls, ID (United States); Numanoglu, Ozgun Alp [Univ. of Illinois, Urbana-Champaign, IL (United States); Spears, Robert [Idaho National Lab. (INL), Idaho Falls, ID (United States); Hoffman, William [Idaho National Lab. (INL), Idaho Falls, ID (United States); Hurt, Efe [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-05-05

    Multi-hazard Analysis for STOchastic time-DOmaiN phenomena (MASTODON) is a finite element application that aims at analyzing the response of 3-D soil-structure systems to natural and man-made hazards such as earthquakes, floods and fire. MASTODON currently focuses on the simulation of seismic events and has the capability to perform extensive ‘source-to-site’ simulations including earthquake fault rupture, nonlinear wave propagation and nonlinear soil-structure interaction (NLSSI) analysis. MASTODON is being developed to be a dynamic probabilistic risk assessment framework that enables analysts to not only perform deterministic analyses, but also easily perform probabilistic or stochastic simulations for the purpose of risk assessment.

  9. MASTODON Theory Manual

    International Nuclear Information System (INIS)

    Coleman, Justin; Slaughter, Andrew; Veeraraghavan, Swetha; Bolisetti, Chandrakanth; Numanoglu, Ozgun Alp; Spears, Robert; Hoffman, William; Hurt, Efe

    2017-01-01

    Multi-hazard Analysis for STOchastic time-DOmaiN phenomena (MASTODON) is a finite element application that aims at analyzing the response of 3-D soil-structure systems to natural and man-made hazards such as earthquakes, floods and fire. MASTODON currently focuses on the simulation of seismic events and has the capability to perform extensive ‘source-to-site’ simulations including earthquake fault rupture, nonlinear wave propagation and nonlinear soil-structure interaction (NLSSI) analysis. MASTODON is being developed to be a dynamic probabilistic risk assessment framework that enables analysts to not only perform deterministic analyses, but also easily perform probabilistic or stochastic simulations for the purpose of risk assessment.

  10. Sierra Structural Dynamics Theory Manual

    Energy Technology Data Exchange (ETDEWEB)

    Reese, Garth M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-19

    Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high fidelity, validated models used in modal, vibration, static and shock analysis of structural systems. This manual describes the theory behind many of the constructs in Sierra/SD. For a more detailed description of how to use Sierra/SD, we refer the reader to Sierra/SD, User's Notes. Many of the constructs in Sierra/SD are pulled directly from published material. Where possible, these materials are referenced herein. However, certain functions in Sierra/SD are specific to our implementation. We try to be far more complete in those areas. The theory manual was developed from several sources including general notes, a programmer notes manual, the user's notes and of course the material in the open literature.

  11. RAVEN Theory Manual

    Energy Technology Data Exchange (ETDEWEB)

    Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Cogliati, Joshua Joseph [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Congjian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Maljovec, Daniel Patrick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Talbot, Paul William [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-06-01

    RAVEN is a software framework able to perform parametric and stochastic analysis based on the response of complex system codes. The initial development was aimed at providing dynamic risk analysis capabilities to the thermohydraulic code RELAP-7, currently under development at Idaho National Laboratory (INL). Although the initial goal has been fully accomplished, RAVEN is now a multi-purpose stochastic and uncertainty quantification platform, capable of communicating with any system code. In fact, the provided Application Programming Interfaces (APIs) allow RAVEN to interact with any code as long as all the parameters that need to be perturbed are accessible by input files or via python interfaces. RAVEN is capable of investigating the system response and exploring the input space using various sampling schemes such as Monte Carlo, grid, or Latin hypercube. However, RAVEN's strength lies in its system feature discovery capabilities such as: constructing limit surfaces, separating regions of the input space leading to system failure, and using dynamic supervised learning techniques. The development of RAVEN started in 2012 when, within the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, the need to provide a modern risk evaluation framework arose. RAVEN's principal assignment is to provide the necessary software and algorithms in order to employ the concepts developed by the Risk Informed Safety Margin Characterization (RISMC) program. RISMC is one of the pathways defined within the Light Water Reactor Sustainability (LWRS) program. In the RISMC approach, the goal is not just to identify the frequency of an event potentially leading to a system failure, but the proximity (or lack thereof) to key safety-related events. Hence, the approach is interested in identifying and increasing the safety margins related to those events. A safety margin is a numerical value quantifying the probability that a safety metric (e.g. peak pressure in a pipe) is exceeded under
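
    As a rough illustration of the sampling strategies named above, the sketch below draws a small Latin hypercube sample over a two-parameter input space and flags the samples that exceed a limit, the brute-force precursor to the limit-surface searches described in the abstract. The response function, parameter ranges, and safety limit are hypothetical, and this is not RAVEN's API.

      import numpy as np

      rng = np.random.default_rng(42)

      def latin_hypercube(n_samples, n_dims, rng):
          """One stratified point per equal-probability bin in each dimension."""
          u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
          for j in range(n_dims):
              u[:, j] = u[rng.permutation(n_samples), j]   # decouple the strata across dimensions
          return u                                         # samples in [0, 1)

      # Hypothetical system response: peak pressure as a function of two inputs.
      def peak_pressure(power_frac, flow_frac):
          return 15.0 + 8.0 * power_frac - 6.0 * flow_frac   # MPa, purely illustrative

      samples = latin_hypercube(200, 2, rng)
      power = 0.9 + 0.2 * samples[:, 0]    # map unit samples onto physical ranges
      flow = 0.8 + 0.4 * samples[:, 1]

      pressure = peak_pressure(power, flow)
      failed = pressure > 17.0             # hypothetical safety limit
      print(f"estimated exceedance probability: {failed.mean():.3f}")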

  12. RAVEN Theory Manual

    International Nuclear Information System (INIS)

    Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego; Cogliati, Joshua Joseph; Wang, Congjian; Maljovec, Daniel Patrick; Talbot, Paul William; Smith, Curtis Lee

    2016-01-01

    RAVEN is a software framework able to perform parametric and stochastic analysis based on the response of complex system codes. The initial development was aimed at providing dynamic risk analysis capabilities to the thermohydraulic code RELAP-7, currently under development at Idaho National Laboratory (INL). Although the initial goal has been fully accomplished, RAVEN is now a multi-purpose stochastic and uncertainty quantification platform, capable of communicating with any system code. In fact, the provided Application Programming Interfaces (APIs) allow RAVEN to interact with any code as long as all the parameters that need to be perturbed are accessible by input files or via python interfaces. RAVEN is capable of investigating the system response and exploring the input space using various sampling schemes such as Monte Carlo, grid, or Latin hypercube. However, RAVEN's strength lies in its system feature discovery capabilities such as: constructing limit surfaces, separating regions of the input space leading to system failure, and using dynamic supervised learning techniques. The development of RAVEN started in 2012 when, within the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, the need to provide a modern risk evaluation framework arose. RAVEN's principal assignment is to provide the necessary software and algorithms in order to employ the concepts developed by the Risk Informed Safety Margin Characterization (RISMC) program. RISMC is one of the pathways defined within the Light Water Reactor Sustainability (LWRS) program. In the RISMC approach, the goal is not just to identify the frequency of an event potentially leading to a system failure, but the proximity (or lack thereof) to key safety-related events. Hence, the approach is interested in identifying and increasing the safety margins related to those events. A safety margin is a numerical value quantifying the probability that a safety metric (e.g. peak pressure in a pipe) is exceeded under

  13. Tokamak simulation code manual

    International Nuclear Information System (INIS)

    Chung, Moon Kyoo; Oh, Byung Hoon; Hong, Bong Keun; Lee, Kwang Won

    1995-01-01

    The method to use TSC (Tokamak Simulation Code), developed by the Princeton Plasma Physics Laboratory, is illustrated. In the KT-2 tokamak, time-dependent simulation of the axisymmetric toroidal plasma and vertical stability have to be taken into account in the design phase using TSC. In this report the physical modelling of TSC is described and examples of its application at JAERI and SERI are illustrated, which will be useful when TSC is installed on the KAERI computer system. (Author) 15 refs., 6 figs., 3 tabs

  14. Sierra/SM theory manual.

    Energy Technology Data Exchange (ETDEWEB)

    Crane, Nathan Karl

    2013-07-01

    Presented in this document are the theoretical aspects of capabilities contained in the Sierra/SM code. This manuscript serves as an ideal starting point for understanding the theoretical foundations of the code. For a comprehensive study of these capabilities, the reader is encouraged to explore the many references to scientific articles and textbooks contained in this manual. It is important to point out that some capabilities are still in development and may not be presented in this document. Further updates to this manuscript will be made as these capabilities come closer to production level.

  15. Salinas. Theory Manual Version 2.8

    Energy Technology Data Exchange (ETDEWEB)

    Reese, Garth M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Walsh, Timothy [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bhardwaj, Manoj K. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2009-02-01

    Salinas provides a massively parallel implementation of structural dynamics finite element analysis, required for high fidelity, validated models used in modal, vibration, static and shock analysis of structural systems. This manual describes the theory behind many of the constructs in Salinas. For a more detailed description of how to use Salinas, we refer the reader to Salinas, User's Notes. Many of the constructs in Salinas are pulled directly from published material. Where possible, these materials are referenced herein. However, certain functions in Salinas are specific to our implementation. We try to be far more complete in those areas. The theory manual was developed from several sources including general notes, a programmer notes manual, the user's notes and of course the material in the open literature.

  16. Amplification in Technical Manuals: Theory and Practice.

    Science.gov (United States)

    Killingsworth, M. Jimmie; And Others

    1989-01-01

    Examines how amplification (rhetorical techniques by which discourse is extended to enhance its appeal and information value) tends to increase and improve the coverage, rationale, warnings, behavioral alternatives, examples, previews, and general emphasis of technical manuals. Shows how classical and modern rhetorical theories can be applied to…

  17. Grizzly Usage and Theory Manual

    Energy Technology Data Exchange (ETDEWEB)

    Spencer, B. W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Backman, M. [Univ. of Tennessee, Knoxville, TN (United States); Chakraborty, P. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schwen, D. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhang, Y. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Huang, H. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bai, X. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Jiang, W. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-03-01

    Grizzly is a multiphysics simulation code for characterizing the behavior of nuclear power plant (NPP) structures, systems and components (SSCs) subjected to a variety of age-related degradation mechanisms. Grizzly simulates both the progression of aging processes and the capacity of aged components to perform safely. This initial beta release of Grizzly includes capabilities for engineering-scale thermo-mechanical analysis of reactor pressure vessels (RPVs). Grizzly will ultimately include capabilities for a wide range of components and materials. Grizzly is in a state of constant development, and future releases will broaden the capabilities of this code for RPV analysis, as well as expand it to address degradation in other critical NPP components.

  18. User's manual of Tokamak Simulation Code

    International Nuclear Information System (INIS)

    Nakamura, Yukiharu; Nishino, Tooru; Tsunematsu, Toshihide; Sugihara, Masayoshi.

    1992-12-01

    The user's manual for the Tokamak Simulation Code (TSC), which simulates the time-evolving deformable motion of an axisymmetric toroidal plasma, is summarized. For use at the JAERI computer system, TSC is linked with the data management system GAEA. This manual is focused on the procedure for input and output using the GAEA system. The model equations governing the axisymmetric motion, an outline of the code system, and the optimal method to obtain a well-converged solution are also described. (author)

  19. User Manual for SSG Power Simulation 2

    DEFF Research Database (Denmark)

    Jensen, Palle Meinert; Gilling, Lasse; Kofoed, Jens Peter

    This manual gives a detailed description of the use of the computer program SSG Power Simulation 2. Furthermore, the underlying mathematics and algorithms are briefly described. The program is based on experimental data from model testing of Seawave Slot-Cone Generator (SSG) presented in Kofoed...

  20. Transmission pipeline calculations and simulations manual

    CERN Document Server

    Menon, E Shashi

    2014-01-01

    Transmission Pipeline Calculations and Simulations Manual is a valuable time- and money-saving tool to quickly pinpoint the essential formulae, equations, and calculations needed for transmission pipeline routing and construction decisions. The manual's three-part treatment starts with gas and petroleum data tables, followed by self-contained chapters concerning applications. Case studies at the end of each chapter provide practical experience for problem solving. Topics in this book include pressure and temperature profile of natural gas pipelines, how to size pipelines for specified f

  1. Evaluation of manual and automatic manually triggered ventilation performance and ergonomics using a simulation model.

    Science.gov (United States)

    Marjanovic, Nicolas; Le Floch, Soizig; Jaffrelot, Morgan; L'Her, Erwan

    2014-05-01

    In the absence of endotracheal intubation, the manual bag-valve-mask (BVM) is the most frequently used ventilation technique during resuscitation. The efficiency of other devices has been poorly studied. The bench-test study described here was designed to evaluate the effectiveness of an automatic, manually triggered system, and to compare it with manual BVM ventilation. A respiratory system bench model was assembled using a lung simulator connected to a manikin to simulate a patient with unprotected airways. Fifty health-care providers from different professional groups (emergency physicians, residents, advanced paramedics, nurses, and paramedics; n = 10 per group) evaluated manual BVM ventilation, and compared it with an automatic manually triggered device (EasyCPR). Three pathological situations were simulated (restrictive, obstructive, normal). Standard ventilation parameters were recorded; the ergonomics of the system were assessed by the health-care professionals using a standard numerical scale once the recordings were completed. The tidal volume fell within the standard range (400-600 mL) for 25.6% of breaths (0.6-45 breaths) using manual BVM ventilation, and for 28.6% of breaths (0.3-80 breaths) using the automatic manually triggered device (EasyCPR) (P < .0002). Peak inspiratory airway pressure was lower using the automatic manually triggered device (EasyCPR) (10.6 ± 5 vs 15.9 ± 10 cm H2O, P < .001). The ventilation rate fell consistently within the guidelines, in the case of the automatic manually triggered device (EasyCPR) only (10.3 ± 2 vs 17.6 ± 6, P < .001). Significant pulmonary overdistention was observed when using the manual BVM device during the normal and obstructive sequences. The nurses and paramedics considered the ergonomics of the automatic manually triggered device (EasyCPR) to be better than those of the manual device. The use of an automatic manually triggered device may improve ventilation efficiency and decrease the risk of

  2. Plasma theory and simulation research

    International Nuclear Information System (INIS)

    Birdsall, C.K.

    1989-01-01

    Our research group uses both theory and simulation as tools in order to increase the understanding of instabilities, heating, diffusion, transport and other phenomena in plasmas. We also work on the improvement of simulation, both theoretically and practically. Our focus has been more and more on the plasma edge (the "sheath") and interactions with boundaries, leading to simulations of whole devices (someday a numerical tokamak).

  3. Solar Simulation Laboratory Description and Manual.

    Science.gov (United States)

    1985-06-01

    The ISAAC 2000 was sent back to Cyborg Corp. three times over a five-month period for repairs. The solar lab is presently using a loaner from Cyborg Corp. The IBM PC/XT is connected to the ISAAC 2000 by an RS232 connection. All programs were written in advanced BASIC ("BASICA"). BASICA was used because Cyborg ...

  4. Integral Pressurized Water Reactor Simulator Manual

    International Nuclear Information System (INIS)

    2017-01-01

    This publication provides detailed explanations of the theoretical concepts that the simulator users have to know to gain a comprehensive understanding of the physics and technology of integral pressurized water reactors. It provides explanations of each of the simulator screens and various controls that a user can monitor and modify. A complete description of all the simulator features is also provided. A detailed set of exercises is provided in the Exercise Handbook accompanying this publication.

  5. VOT Enterprises, Inc. An Accounting Task Simulation. Employee's Manual [Student's Guide] and Employer's Manual [Teacher's Guide].

    Science.gov (United States)

    Dixon, Rose; And Others

    This accounting task simulation is designed for use in office occupations programs at the secondary level. The primary purpose is to give the student the opportunity to become familiar with the tasks and duties that may be performed by accounting personnel in a real work situation. The employer's manual provides general information for the student…

  6. User manual for storage simulation construction set

    International Nuclear Information System (INIS)

    Sehgal, Anil; Volz, Richard A.

    1999-01-01

    The Storage Simulation Construction Set (SSCS) is a tool for composing storage system models using Telegrip. It is an application written in C++ and Motif. With this system, the models of a storage system can be composed rapidly and accurately. The aspects of the SSCS are described within this report.

  7. Manual on theory and practical aspects of bioassay

    International Nuclear Information System (INIS)

    Nuraini Hambali.

    1985-06-01

    This manual is intended to provide the necessary basic guidance on the theory and practical aspects of bioassay, especially for the newcomer in this field and the man in the laboratory. The first part gives brief information on the entry of radionuclides into the body, their metabolism and the programs of bioassay. All other factors to be considered in assessing internal contamination in man have also been brought up. In the second part, various procedures of radiochemical separations, detection and measurements are abstracted from journals and other revisions. Some methods have been attempted and are to be followed where appropriate. (author)

  8. Material control system simulator user's manual

    International Nuclear Information System (INIS)

    Hollstien, R.B.

    1978-01-01

    This report describes the use of a Material Control System Simulator (MCSS) program for determination of material accounting uncertainty and system response to particular adversary action sequences that constitute plausible material diversion attempts. The program is intended for use in situations where randomness, uncertainty, or interaction of adversary actions and material control system components make it difficult to assess safeguards effectiveness against particular material diversion attempts

  9. Material control system simulator program reference manual

    Energy Technology Data Exchange (ETDEWEB)

    Hollstien, R.B.

    1978-01-24

    A description is presented of a Material Control System Simulator (MCSS) program for determination of material accounting uncertainty and system response to particular adversary action sequences that constitute plausible material diversion attempts. The program is intended for use in situations where randomness, uncertainty, or interaction of adversary actions and material control system components make it difficult to assess safeguards effectiveness against particular material diversion attempts. Although MCSS may be used independently in the design or analysis of material handling and processing systems, it has been tailored toward the determination of material accountability and the response of material control systems to adversary action sequences.

  10. Material control system simulator program reference manual

    International Nuclear Information System (INIS)

    Hollstien, R.B.

    1978-01-01

    A description is presented of a Material Control System Simulator (MCSS) program for determination of material accounting uncertainty and system response to particular adversary action sequences that constitute plausible material diversion attempts. The program is intended for use in situations where randomness, uncertainty, or interaction of adversary actions and material control system components make it difficult to assess safeguards effectiveness against particular material diversion attempts. Although MCSS may be used independently in the design or analysis of material handling and processing systems, it has been tailored toward the determination of material accountability and the response of material control systems to adversary action sequences

  11. Prediction method for the lubricating oil temperature of manual transaxle; Manual transaxle no yuon yosoku simulation

    Energy Technology Data Exchange (ETDEWEB)

    Iritani, M; Kaneda, K; Ibaraki, K [Toyota Central Research and Development Labs., Inc., Aichi (Japan); Suzuki, K; Morita, Y [Toyota Motor Corp., Aichi (Japan)

    1997-10-01

    Heat transfer and flow characteristics in a manual transaxle (MT) are not clear. Measurements of the heat flux and heat generation, together with flow visualization, were conducted for a quantitative analysis of the heat transfer phenomena in the MT. A simulation technique for the lubricant temperature was developed from these experimental data, and the prediction accuracy was within 3°C under various operating conditions. The simulation was verified to be useful for estimating the lubricant temperature reduction achieved by reducing the lubricant volume, improving the air flow around the MT, etc. 7 refs., 8 figs.

  12. MPACT Theory Manual, Version 2.2.0

    Energy Technology Data Exchange (ETDEWEB)

    Downar, Thomas [Univ. of Michigan, Ann Arbor, MI (United States); Collins, Benjamin S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gehin, Jess C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Godfrey, Andrew T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jabaay, Daniel [Univ. of Michigan, Ann Arbor, MI (United States); Kelley, Blake W. [Univ. of Michigan, Ann Arbor, MI (United States); Clarno, Kevin T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kim, Kang [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kochunas, Brendan [Univ. of Michigan, Ann Arbor, MI (United States); Larsen, Edward W. [Univ. of Michigan, Ann Arbor, MI (United States); Liu, Yuxuan [Univ. of Michigan, Ann Arbor, MI (United States); Liu, Zhouyu [Univ. of Michigan, Ann Arbor, MI (United States); Martin, William R. [Univ. of Michigan, Ann Arbor, MI (United States); Palmtag, Scott [Core Physics, Inc., Cary, NC (United States); Rose, Michael [Univ. of Michigan, Ann Arbor, MI (United States); Saller, Thomas [Univ. of Michigan, Ann Arbor, MI (United States); Stimpson, Shane [Univ. of Michigan, Ann Arbor, MI (United States); Trahan, Travis [Univ. of Michigan, Ann Arbor, MI (United States); Wang, J. W. [Univ. of Michigan, Ann Arbor, MI (United States); Wieselquist, William A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Young, Mitchell [Univ. of Michigan, Ann Arbor, MI (United States); Zhu, Ang [Univ. of Michigan, Ann Arbor, MI (United States)

    2016-06-09

    This theory manual describes the three-dimensional (3-D) whole-core, pin-resolved transport calculation methodology employed in the MPACT code. To provide sub-pin level power distributions with sufficient accuracy, MPACT employs the method of characteristics (MOC) solutions in the framework of a 3-D coarse mesh finite difference (CMFD) formulation. MPACT provides a 3D MOC solution, but also a 2D/1D solution in which the 2D planar solution is provided by MOC and the axial coupling is resolved by one-dimensional (1-D) lower order (diffusion or P3) solutions. In Chapter 2 of the manual, the MOC methodology is described for calculating the regional angular and scalar fluxes from the Boltzmann transport equation. In Chapter 3, the 2D/1D methodology is described, together with the description of the CMFD iteration process involving dynamic homogenization and solution of the multigroup CMFD linear system. A description of the MPACT depletion algorithm is given in Chapter 4, followed by a discussion of the subgroup and ESSM resonance processing methods in Chapter 5. The final Chapter 6 describes a simplified thermal hydraulics model in MPACT.
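
    To make the low-order part of the description above concrete, the toy Python example below solves a one-group, one-dimensional finite-difference diffusion eigenvalue problem by power iteration, which is the basic structure of a coarse-mesh finite difference (CMFD)-style solve, without the MOC coupling, multigroup structure, or dynamic homogenization MPACT actually uses. The cross sections and mesh are invented for illustration only.

      import numpy as np

      # One-group, 1-D finite-difference diffusion eigenvalue problem solved by
      # power iteration (illustrative cross sections, not MPACT data).
      N, h = 50, 1.0                            # cells, cell width [cm]
      D, sig_a, nu_sig_f = 1.2, 0.03, 0.035     # diffusion coeff. [cm], absorption, nu*fission [1/cm]

      # Loss operator A: -D d2/dx2 + sig_a with zero-flux boundaries.
      A = np.zeros((N, N))
      for i in range(N):
          A[i, i] = 2.0 * D / h**2 + sig_a
          if i > 0:
              A[i, i - 1] = -D / h**2
          if i < N - 1:
              A[i, i + 1] = -D / h**2

      phi, k = np.ones(N), 1.0
      for _ in range(200):
          src = nu_sig_f * phi
          phi_new = np.linalg.solve(A, src / k)           # flux update from the fission source
          k_new = k * (nu_sig_f * phi_new).sum() / src.sum()
          phi = phi_new / np.linalg.norm(phi_new)          # normalize to keep the iteration bounded
          converged = abs(k_new - k) < 1e-8
          k = k_new
          if converged:
              break

      print(f"k-effective (toy problem): {k:.5f}")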

  13. Numerical simulation of manual operation at MID stand control room

    International Nuclear Information System (INIS)

    Doca, C.; Dobre, A.; Predescu, D.; Mielcioiu, A.

    2003-01-01

    Since 2000, a package of software products devoted to the numerical simulation of manual operations at the fueling machine control room has been developed at INR Pitesti. So far, the PUPITRU code has been specified, designed, developed, and implemented. The following issues were solved: graphical aspects of the specific computer-human operator interface; functional and graphical simulation of the whole associated equipment of the control desk components; implementation of the main notation as used in the automated schemes of the control desk, in view of fast identification of the switches, lamps, instrumentation, etc.; implementation within the PUPITRU code of the entire database used in the frame of the MID tests; and implementation of about 1000 numerical simulation equations describing specific operational MID testing situations.

  14. User Manual for the Allpix$^2$ Simulation Framework

    CERN Document Server

    AUTHOR|(SzGeCERN)818092; Spannagel, Simon; Hynds, Daniel

    2017-01-01

    Several simulation tools exist for the detailed study of position sensitive silicon detectors, covering aspects ranging from the electrical properties of sensors to the behaviour of charged particles traversing a given detector setup. Each of these toolkits performs a very specialised task, and for the complete description of a silicon detector several such software packages must typically be used. Allpix$^2$ builds upon this work by providing a complete and easy-to-use C++ software package for simulating detector performance, from the interaction of particles to the digitisation of propagated carriers by the front-end electronics. A modular framework is used to flexibly add or remove modules from the simulation chain, each performing specific tasks such as interfacing to Geant4 to deposit energy in the detector and provide an accurate description of material effects, or the propagation of deposited charges through the sensor bulk. This document presents the user manual of the software as of release version 1...

  15. Stochastic models: theory and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
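
    As a minimal illustration of the sample-generation problem motivated above, the sketch below draws realizations of a zero-mean stationary Gaussian process by factoring the covariance matrix of the discretized process. The exponential covariance and its parameters are assumptions chosen for the example, not the report's algorithms.

      import numpy as np

      # Draw sample paths of a zero-mean Gaussian process with an assumed
      # exponential covariance on a discrete time grid (illustrative sketch).
      rng = np.random.default_rng(0)

      t = np.linspace(0.0, 10.0, 200)              # time grid
      sigma, corr_len = 1.0, 2.0                   # std. dev. and correlation length (assumed)
      cov = sigma**2 * np.exp(-np.abs(t[:, None] - t[None, :]) / corr_len)

      L = np.linalg.cholesky(cov + 1e-10 * np.eye(t.size))    # small jitter for numerical stability
      samples = L @ rng.standard_normal((t.size, 5))          # five independent realizations

      print(samples.shape)   # (200, 5): each column is one sample path of the process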

  16. Galaxy Alignments: Theory, Modelling & Simulations

    Science.gov (United States)

    Kiessling, Alina; Cacciato, Marcello; Joachimi, Benjamin; Kirk, Donnacha; Kitching, Thomas D.; Leonard, Adrienne; Mandelbaum, Rachel; Schäfer, Björn Malte; Sifón, Cristóbal; Brown, Michael L.; Rassat, Anais

    2015-11-01

    The shapes of galaxies are not randomly oriented on the sky. During the galaxy formation and evolution process, environment has a strong influence, as tidal gravitational fields in the large-scale structure tend to align nearby galaxies. Additionally, events such as galaxy mergers affect the relative alignments of both the shapes and angular momenta of galaxies throughout their history. These "intrinsic galaxy alignments" are known to exist, but are still poorly understood. This review will offer a pedagogical introduction to the current theories that describe intrinsic galaxy alignments, including the apparent difference in intrinsic alignment between early- and late-type galaxies and the latest efforts to model them analytically. It will then describe the ongoing efforts to simulate intrinsic alignments using both N-body and hydrodynamic simulations. Due to the relative youth of this field, there is still much to be done to understand intrinsic galaxy alignments and this review summarises the current state of the field, providing a solid basis for future work.

  17. Physical habitat simulation system reference manual: version II

    Science.gov (United States)

    Milhous, Robert T.; Updike, Marlys A.; Schneider, Diane M.

    1989-01-01

    stream system basis. Such analysis is outside the scope of this manual, which concentrates on simulation of physical habitat based on depth, velocity, and a channel index. The results from PHABSIM can be used alone or by using a series of habitat time series programs that have been developed to generate monthly or daily habitat time series from the Weighted Usable Area versus streamflow table resulting from the habitat simulation programs and streamflow time series data. Monthly and daily streamflow time series may be obtained from USGS gages near the study site or as the output of river system management models.
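
    The post-processing step described above, turning a Weighted Usable Area (WUA) versus streamflow table into a habitat time series, amounts to interpolating daily (or monthly) flows against that table. The sketch below shows that step in Python; the table values and daily flows are invented for illustration, not PHABSIM output.

      import numpy as np

      # Hypothetical WUA-versus-streamflow table from a habitat simulation run.
      flow_table = np.array([1.0, 5.0, 10.0, 20.0, 50.0])      # streamflow [m3/s]
      wua_table = np.array([120., 480., 900., 700., 300.])     # WUA [m2 per 1000 m of stream]

      # Hypothetical daily streamflows, e.g. from a gage record or a management model.
      daily_flow = np.array([3.2, 8.5, 14.0, 22.5, 6.1])
      daily_wua = np.interp(daily_flow, flow_table, wua_table)  # daily habitat time series

      for q, w in zip(daily_flow, daily_wua):
          print(f"flow {q:5.1f} m3/s -> WUA {w:6.1f}")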

  18. Discrete and continuous simulation theory and practice

    CERN Document Server

    Bandyopadhyay, Susmita

    2014-01-01

    When it comes to discovering glitches inherent in complex systems-be it a railway or banking, chemical production, medical, manufacturing, or inventory control system-developing a simulation of a system can identify problems with less time, effort, and disruption than it would take to employ the original. Advantageous to both academic and industrial practitioners, Discrete and Continuous Simulation: Theory and Practice offers a detailed view of simulation that is useful in several fields of study.This text concentrates on the simulation of complex systems, covering the basics in detail and exploring the diverse aspects, including continuous event simulation and optimization with simulation. It explores the connections between discrete and continuous simulation, and applies a specific focus to simulation in the supply chain and manufacturing field. It discusses the Monte Carlo simulation, which is the basic and traditional form of simulation. It addresses future trends and technologies for simulation, with par...

  19. Maintenance Personnel Performance Simulation (MAPPS) model. Users' Manual

    International Nuclear Information System (INIS)

    Kopstein, F.F.; Wolf, J.J.

    1985-09-01

    This report (MAPPS User's Manual) is the last report to be published from this program and provides detailed guidelines for utilization of the MAPPS model. Although the model has been developed to be highly user-friendly and provides interactive means for controlling and running of the model, the user's manual is provided as a guide for the user in the event clarification or direction is required. The user will find that in general the model requires primarily user input that is self-explanatory. Once initial familiarization with the model has been achieved by the user, the amount of interaction between the user's manual and the computer model will be minimal. It is suggested however that even the experienced user keep the user's manual handy for quick reference. 5 refs., 10 figs., 7 tabs

  20. Theory and simulation of laser plasma coupling

    International Nuclear Information System (INIS)

    Kruer, W.L.

    1979-01-01

    The theory and simulation of these coupling processes are considered. Particular emphasis is given to their nonlinear evolution. First a brief introduction to computer simulation of plasmas using particle codes is given. Then the absorption of light via the generation of plasma waves is considered, followed by a discussion of stimulated scattering of intense light. Finally these calculations are compared with experimental results

  1. Theory and verification manual for BARC-R6 software

    International Nuclear Information System (INIS)

    Rastogi, Rohit; Bhasin, Vivek; Vaze, K.K.; Kushwaha, H.S.

    2001-01-01

    This report presents the technical description of the BARC-R6 computer code that has been developed in the Reactor Safety Division, BARC. The program has been designed to run on Windows95-based PCs. The R6 method is a tool which can perform analysis considering fracture and plastic instability together as failure criteria. The R6 method for analyzing flawed components requires construction of a number of graphs. The results of the study are arrived at by analyzing these graphs. BARC-R6 is designed with a highly intuitive graphical user interface for input as well as output. This report presents the implementation details of the program. The current version has provision for any flaw orientation and type in pipes and elbows. This program will be highly useful for fracture assessment and Leak Before Break (LBB) qualification for Nuclear Power Plant (NPP) piping. The complete PHT piping of any typical NPP can be assessed within a very short time. It is capable of handling Option-1 and Option-2 Failure Assessment Line (FAL) for both Category-1 and Category-3 type analysis. It has provision for exhaustive sensitivity analyses for judging the significance of margins with respect to variation in material properties, crack sizes, etc. The report is divided into 6 sections. The first section introduces the document. The details of the R6 method are presented in the second section. This is followed by the technical description of the program. The fourth section lists many benchmark problems solved to highlight its usage and validation. These benchmark problems cover the wide range of cases encountered in practical situations. The user manual is provided in the fifth section. The sixth section contains the references used in this report. This is followed by an appendix containing the correlations for stress intensity factor and limit loads used in the BARC-R6 code. (author)

  2. Flight dynamics analysis and simulation of heavy lift airships. Volume 2: Technical manual

    Science.gov (United States)

    Ringland, R. F.; Tischler, M. B.; Jex, H. R.; Emmen, R. D.; Ashkenas, I. L.

    1982-01-01

    The mathematical models embodied in the simulation are described in considerable detail and with supporting evidence for the model forms chosen. In addition the trimming and linearization algorithms used in the simulation are described. Appendices to the manual identify reference material for estimating the needed coefficients for the input data and provide example simulation results.

  3. SIERRA/Aero Theory Manual Version 4.44

    Energy Technology Data Exchange (ETDEWEB)

    Sierra Thermal/Fluid Team

    2017-04-01

    SIERRA/Aero is a two- and three-dimensional, node-centered, edge-based finite volume code that approximates the compressible Navier-Stokes equations on unstructured meshes. It is applicable to inviscid and high Reynolds number laminar and turbulent flows. Currently, two classes of turbulence models are provided: Reynolds Averaged Navier-Stokes (RANS) and hybrid methods such as Detached Eddy Simulation (DES). Large Eddy Simulation (LES) models are currently under development. The gas may be modeled either as ideal, or as a non-equilibrium, chemically reacting mixture of ideal gases. This document describes the mathematical models contained in the code, as well as certain implementation details. First, the governing equations are presented, followed by a description of the spatial discretization. Next, the time discretization is described, and finally the boundary conditions. Throughout the document, SIERRA/Aero is referred to simply as Aero for brevity.

  4. SIERRA/Aero Theory Manual Version 4.46.

    Energy Technology Data Exchange (ETDEWEB)

    Sierra Thermal/Fluid Team

    2017-09-01

    SIERRA/Aero is a two- and three-dimensional, node-centered, edge-based finite volume code that approximates the compressible Navier-Stokes equations on unstructured meshes. It is applicable to inviscid and high Reynolds number laminar and turbulent flows. Currently, two classes of turbulence models are provided: Reynolds Averaged Navier-Stokes (RANS) and hybrid methods such as Detached Eddy Simulation (DES). Large Eddy Simulation (LES) models are currently under development. The gas may be modeled either as ideal, or as a non-equilibrium, chemically reacting mixture of ideal gases. This document describes the mathematical models contained in the code, as well as certain implementation details. First, the governing equations are presented, followed by a description of the spatial discretization. Next, the time discretization is described, and finally the boundary conditions. Throughout the document, SIERRA/Aero is referred to simply as Aero for brevity.

  5. SEACAS Theory Manuals: Part II. Nonlinear Continuum Mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Attaway, S.W.; Laursen, T.A.; Zadoks, R.I.

    1998-09-01

    This report summarizes the key continuum mechanics concepts required for the systematic prescription and numerical solution of finite deformation solid mechanics problems. Topics surveyed include measures of deformation appropriate for media undergoing large deformations, stress measures appropriate for such problems, balance laws and their role in nonlinear continuum mechanics, the role of frame indifference in description of large deformation response, and the extension of these theories to encompass two dimensional idealizations, structural idealizations, and rigid body behavior. There are three companion reports that describe the problem formulation, constitutive modeling, and finite element technology for nonlinear continuum mechanics systems.

  6. Filtration theory using computer simulations

    Energy Technology Data Exchange (ETDEWEB)

    Bergman, W.; Corey, I. [Lawrence Livermore National Lab., CA (United States)

    1997-08-01

    We have used commercially available fluid dynamics codes based on Navier-Stokes theory and the Langevin particle equation of motion to compute the particle capture efficiency and pressure drop through selected two- and three-dimensional fiber arrays. The approach we used was to first compute the air velocity vector field throughout a defined region containing the fiber matrix. The particle capture in the fiber matrix is then computed by superimposing the Langevin particle equation of motion over the flow velocity field. Using the Langevin equation combines the particle Brownian motion, inertia and interception mechanisms in a single equation. In contrast, most previous investigations treat the different capture mechanisms separately. We have computed the particle capture efficiency and the pressure drop through one 2-D and two 3-D fiber matrix elements. 5 refs., 11 figs.
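
    The Langevin particle tracking described above can be sketched as follows: integrate the particle equation of motion with Stokes drag toward the local air velocity plus a random Brownian force whose variance follows from the fluctuation-dissipation relation. In the sketch below the velocity field is uniform and every parameter is an illustrative assumption; the actual work superimposed this integration on Navier-Stokes velocity fields computed around fiber arrays.

      import numpy as np

      # Langevin tracking of one aerosol particle in a prescribed air velocity field.
      rng = np.random.default_rng(1)

      kB, T = 1.38e-23, 293.0            # Boltzmann constant [J/K], temperature [K]
      mu = 1.8e-5                        # air viscosity [Pa s]
      d_p = 0.1e-6                       # particle diameter [m]
      rho_p = 1000.0                     # particle density [kg/m3]
      m = rho_p * np.pi * d_p**3 / 6.0   # particle mass [kg]
      f = 3.0 * np.pi * mu * d_p         # Stokes drag coefficient [kg/s]

      dt, n_steps = 1e-8, 20000
      u_air = np.array([0.1, 0.0])       # prescribed uniform air velocity [m/s]

      x = np.zeros(2)                    # particle position [m]
      v = u_air.copy()                   # start at the local air velocity
      sigma_F = np.sqrt(2.0 * kB * T * f / dt)   # white-noise force amplitude (fluctuation-dissipation)

      for _ in range(n_steps):
          F_brownian = sigma_F * rng.standard_normal(2)
          a = (f * (u_air - v) + F_brownian) / m   # drag toward the air velocity + Brownian forcing
          v = v + a * dt
          x = x + v * dt

      print(f"particle displacement after {n_steps * dt:.1e} s: {x}")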

  7. BISON Theory Manual The Equations behind Nuclear Fuel Analysis

    International Nuclear Information System (INIS)

    Hales, J. D.; Williamson, R. L.; Novascone, S. R.; Pastore, G.; Spencer, B. W.; Stafford, D. S.; Gamble, K. A.; Perez, D. M.; Liu, W.

    2016-01-01

    BISON is a finite element-based nuclear fuel performance code applicable to a variety of fuel forms including light water reactor fuel rods, TRISO particle fuel, and metallic rod and plate fuel. It solves the fully-coupled equations of thermomechanics and species diffusion, for either 2D axisymmetric or 3D geometries. Fuel models are included to describe temperature and burnup dependent thermal properties, fission product swelling, densification, thermal and irradiation creep, fracture, and fission gas production and release. Plasticity, irradiation growth, and thermal and irradiation creep models are implemented for clad materials. Models are also available to simulate gap heat transfer, mechanical contact, and the evolution of the gap/plenum pressure with plenum volume, gas temperature, and fission gas addition. BISON is based on the MOOSE framework and can therefore efficiently solve problems using standard workstations or very large high-performance computers. This document describes the theoretical and numerical foundations of BISON.

  8. BISON Theory Manual The Equations behind Nuclear Fuel Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hales, J. D. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Williamson, R. L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Novascone, S. R. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pastore, G. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Spencer, B. W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Stafford, D. S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gamble, K. A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Perez, D. M. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Liu, W. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    BISON is a finite element-based nuclear fuel performance code applicable to a variety of fuel forms including light water reactor fuel rods, TRISO particle fuel, and metallic rod and plate fuel. It solves the fully-coupled equations of thermomechanics and species diffusion, for either 2D axisymmetric or 3D geometries. Fuel models are included to describe temperature and burnup dependent thermal properties, fission product swelling, densification, thermal and irradiation creep, fracture, and fission gas production and release. Plasticity, irradiation growth, and thermal and irradiation creep models are implemented for clad materials. Models are also available to simulate gap heat transfer, mechanical contact, and the evolution of the gap/plenum pressure with plenum volume, gas temperature, and fission gas addition. BISON is based on the MOOSE framework and can therefore efficiently solve problems using standard workstations or very large high-performance computers. This document describes the theoretical and numerical foundations of BISON.

  9. Simulation of a Schema Theory-Based Knowledge Delivery System for Scientists.

    Science.gov (United States)

    Vaughan, W. S., Jr.; Mavor, Anne S.

    A future, automated, interactive, knowledge delivery system for use by researchers was tested using a manual cognitive model. Conceptualized from schema/frame/script theories in cognitive psychology and artificial intelligence, this hypothetical system was simulated by two psychologists who interacted with four researchers in microbiology to…

  10. Massachusetts reservoir simulation tool—User’s manual

    Science.gov (United States)

    Levin, Sara B.

    2016-10-06

    Introduction: The U.S. Geological Survey developed the Massachusetts Reservoir Simulation Tool to examine the effects of reservoirs on natural streamflows in Massachusetts by simulating the daily water balance of reservoirs. The simulation tool was developed to assist environmental managers to better manage water withdrawals in reservoirs and to preserve downstream aquatic habitats.
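
    A daily reservoir water balance of the kind simulated by the tool can be sketched in a few lines: update storage with inflow, subtract the withdrawal, and spill any excess above capacity. The capacities, inflows, and demand below are hypothetical, and the rule set is deliberately simplified relative to the USGS tool.

      # Minimal daily water-balance sketch (hypothetical values, simplified rules).
      capacity = 5.0e6          # usable storage [m3]
      storage = 3.0e6           # initial storage [m3]
      demand = 2.0e4            # daily withdrawal [m3/day]

      daily_inflow = [1.5e4, 3.0e4, 0.5e4, 8.0e4, 2.0e4]   # streamflow into the reservoir [m3/day]

      for day, inflow in enumerate(daily_inflow, start=1):
          storage += inflow
          withdrawal = min(demand, storage)      # cannot withdraw more than is stored
          storage -= withdrawal
          spill = max(0.0, storage - capacity)   # excess passes downstream
          storage -= spill
          print(f"day {day}: storage={storage:.2e} m3, withdrawal={withdrawal:.2e}, spill={spill:.2e}")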

  11. Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-01-01

    Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. A meaningful adaption will result in high-fidelity and robust adapted core simulator models. To perform adaption, we propose an inverse theory approach in which the multitudes of input data to core simulators, i.e., reactor physics and thermal-hydraulic data, are to be adjusted to improve agreement with measured observables while keeping core simulator models unadapted. At first glance, devising such adaption for typical core simulators with millions of input and observables data would spawn not only several prohibitive challenges but also numerous disparaging concerns. The challenges include the computational burdens of the sensitivity-type calculations required to construct Jacobian operators for the core simulator models. Also, the computational burdens of the uncertainty-type calculations required to estimate the uncertainty information of core simulator input data present a demanding challenge. The concerns however are mainly related to the reliability of the adjusted input data. The methodologies of adaptive simulation are well established in the literature of data adjustment. We adopt the same general framework for data adjustment; however, we refrain from solving the fundamental adjustment equations in a conventional manner. We demonstrate the use of our so-called Efficient Subspace Methods (ESMs) to overcome the computational and storage burdens associated with the core adaption problem. We illustrate the successful use of ESM-based adaptive techniques for a typical boiling water reactor core simulator adaption problem
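    The record above describes the data-adjustment problem only in general terms, and the Efficient Subspace Methods themselves are not reproduced here. Purely as an illustrative sketch of the conventional generalized least-squares adjustment that the authors state they avoid solving directly, with p_0 the prior core simulator input data (covariance C_p), m the measured observables (covariance C_m), o(p) the predicted observables, and S the Jacobian (sensitivity) operator (this notation is chosen here for illustration):

    \[
    \hat{p} = \arg\min_{p}\; (p - p_0)^{T} C_p^{-1} (p - p_0)
              + \big(m - o(p)\big)^{T} C_m^{-1} \big(m - o(p)\big),
    \qquad
    \hat{p} \approx p_0 + C_p S^{T} \big( S\, C_p S^{T} + C_m \big)^{-1} \big(m - o(p_0)\big).
    \]

    The computational and storage burdens cited in the abstract come from forming S and C_p for millions of inputs and observables, which is precisely what the subspace treatment is designed to avoid.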

  12. An Instrumented Glove to Assess Manual Dexterity in Simulation-Based Neurosurgical Education

    Directory of Open Access Journals (Sweden)

    Juan Diego Lemos

    2017-04-01

    The traditional neurosurgical apprenticeship scheme includes the assessment of trainees' manual skills carried out by experienced surgeons. However, the introduction of surgical simulation technology presents a new paradigm where residents can refine surgical techniques on a simulator before putting them into practice in real patients. Unfortunately, in this new scheme, an experienced surgeon will not always be available to evaluate a trainee's performance. For this reason, it is necessary to develop automatic mechanisms to estimate metrics for assessing manual dexterity in a quantitative way. Authors have proposed some hardware-software approaches to evaluate manual dexterity on surgical simulators. This paper presents IGlove, a wearable device that uses inertial sensors embedded in an elastic glove to capture hand movements. Metrics to assess manual dexterity are estimated from the sensor signals using data processing and information analysis algorithms. It has been designed to be used with a neurosurgical simulator called Daubara NS Trainer, but can be easily adapted to other benchtop- and manikin-based medical simulators. The system was tested with a sample of 14 volunteers who performed a test that was designed to simultaneously evaluate their fine motor skills and the IGlove's functionalities. Metrics obtained by each of the participants are presented as results in this work; it is also shown how these metrics are used to automatically evaluate each volunteer's level of manual dexterity.
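    The abstract does not state which dexterity metrics IGlove actually computes. As a hypothetical illustration only, a common motion-smoothness measure, the log dimensionless jerk, could be estimated from a recorded hand-speed signal roughly as follows; the sampling rate, signal names, and the choice of this particular metric are assumptions made for the sketch, not the paper's method.

```python
import numpy as np

def log_dimensionless_jerk(speed, fs):
    """Motion-smoothness metric from a 1-D hand-speed signal (hypothetical example).

    speed : instantaneous hand speed (m/s), e.g. derived from filtered inertial data
    fs    : sampling rate in Hz
    Returns the negative log of the dimensionless squared jerk; values closer
    to zero indicate smoother movement.
    """
    speed = np.asarray(speed, dtype=float)
    dt = 1.0 / fs
    duration = len(speed) * dt
    v_peak = np.max(np.abs(speed))
    # Jerk is the second time-derivative of the speed profile.
    jerk = np.gradient(np.gradient(speed, dt), dt)
    dimensionless = (duration ** 3 / v_peak ** 2) * np.sum(jerk ** 2) * dt
    return -np.log(dimensionless)

# Toy usage: a smooth bell-shaped speed profile sampled at 100 Hz.
t = np.linspace(0.0, 1.0, 100)
print(log_dimensionless_jerk(np.sin(np.pi * t), fs=100))
```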

  13. Fluctuation Solution Theory Properties from Molecular Simulation

    DEFF Research Database (Denmark)

    Abildskov, Jens; Wedberg, R.; O’Connell, John P.

    2013-01-01

    The thermodynamic properties obtained in Fluctuation Solution Theory are based on spatial integrals of molecular total correlation functions (TCFs) between component pairs in the mixture. Molecular simulation, via either MD or MC calculations, can yield these correlation functions for model inter- and intramolecular...

  14. Simulation Assessment Validation Environment (SAVE). Software User’s Manual

    Science.gov (United States)

    2000-09-01

    Figure 2-14: Factory Simulation Tool Usage. The Factory/Schedule Simulation tool directly emulates the real-world system behaviors associated with each resource and is used together with manufacturing simulation tools and Computer Aided Design (CAD) tools to simulate real-world system behaviors.

  15. Theory and Simulation of Multicomponent Osmotic Systems.

    Science.gov (United States)

    Karunaweera, Sadish; Gee, Moon Bae; Weerasinghe, Samantha; Smith, Paul E

    2012-05-28

    Most cellular processes occur in systems containing a variety of components many of which are open to material exchange. However, computer simulations of biological systems are almost exclusively performed in systems closed to material exchange. In principle, the behavior of biomolecules in open and closed systems will be different. Here, we provide a rigorous framework for the analysis of experimental and simulation data concerning open and closed multicomponent systems using the Kirkwood-Buff (KB) theory of solutions. The results are illustrated using computer simulations for various concentrations of the solutes Gly, Gly(2) and Gly(3) in both open and closed systems, and in the absence or presence of NaCl as a cosolvent. In addition, KB theory is used to help rationalize the aggregation properties of the solutes. Here one observes that the picture of solute association described by the KB integrals, which are directly related to the solution thermodynamics, and that provided by more physical clustering approaches are different. It is argued that the combination of KB theory and simulation data provides a simple and powerful tool for the analysis of complex multicomponent open and closed systems.
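    For reference, the Kirkwood-Buff integrals underlying this analysis are spatial integrals of the pair radial distribution functions g_ij(r); in the usual grand-canonical form,

    \[
    G_{ij} = 4\pi \int_{0}^{\infty} \big[ g_{ij}(r) - 1 \big]\, r^{2}\, dr ,
    \qquad
    N_{ij} = \rho_j\, G_{ij},
    \]

    where rho_j is the number density of species j and the excess coordination numbers N_ij provide the thermodynamically grounded measure of solute association referred to in the abstract.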

  16. User's Manual for the Simulating Waves Nearshore Model (SWAN)

    National Research Council Canada - National Science Library

    Allard, Richard

    2002-01-01

    The Simulating WAves Nearshore (SWAN) model is a numerical wave model used to obtain realistic estimates of wave parameters in coastal areas, lakes, and estuaries from given wind, bottom, and current conditions...

  17. Simulation Methodology in Nursing Education and Adult Learning Theory

    Science.gov (United States)

    Rutherford-Hemming, Tonya

    2012-01-01

    Simulation is often used in nursing education as a teaching methodology. Simulation is rooted in adult learning theory. Three learning theories, cognitive, social, and constructivist, explain how learners gain knowledge with simulation experiences. This article takes an in-depth look at each of these three theories as each relates to simulation.…

  18. Defects and diffusion, theory & simulation II

    CERN Document Server

    Fisher, David J

    2010-01-01

    This second volume in a new series covering entirely general results in the fields of defects and diffusion includes 356 abstracts of papers which appeared between the end of 2009 and the end of 2010. As well as the abstracts, the volume includes original papers on theory/simulation, semiconductors and metals: "Predicting Diffusion Coefficients from First Principles ..." (Mantina, Chen & Liu), "Gouge Assessment for Pipes ..." (Meliani, Pluvinage & Capelle), "Simulation of the Impact Behaviour of ... Hollow Sphere Structures" (Ferrano, Speich, Rimkus, Merkel & Öchsner), "Elastic-Plastic

  19. Simulation of random walks in field theory

    International Nuclear Information System (INIS)

    Rensburg, E.J.J. van

    1988-01-01

    The numerical simulation of random walks is considered using the Monte Carlo method previously proposed. The algorithm is tested and then generalised to generate Edwards random walks. The renormalised masses of the Edwards model are calculated and the results are compared with those obtained from a simple perturbation theory calculation for small values of the bare coupling constant. The efficiency of this algorithm is discussed and compared with an alternative approach. (author)
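    The Edwards-model algorithm itself is not reproduced in the record. As a generic, hedged illustration of Monte Carlo simulation of random walks, the following sketch estimates the mean squared end-to-end distance of simple lattice walks; the step count, ensemble size, and dimensionality are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_squared_end_to_end(n_steps, n_walks, dim=3):
    """Monte Carlo estimate of <R^2> for simple random walks on a Z^dim lattice."""
    r2 = 0.0
    for _ in range(n_walks):
        # Each step moves +/-1 along one randomly chosen axis.
        axes = rng.integers(0, dim, size=n_steps)
        signs = rng.choice([-1, 1], size=n_steps)
        end = np.zeros(dim)
        np.add.at(end, axes, signs)   # accumulate displacements per axis
        r2 += float(np.dot(end, end))
    return r2 / n_walks

# For an unbiased simple random walk, <R^2> should be close to n_steps.
print(mean_squared_end_to_end(n_steps=100, n_walks=5000))
```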

  20. Fluid dynamics theory, computation, and numerical simulation

    CERN Document Server

    Pozrikidis, C

    2001-01-01

    Fluid Dynamics Theory, Computation, and Numerical Simulation is the only available book that extends the classical field of fluid dynamics into the realm of scientific computing in a way that is both comprehensive and accessible to the beginner. The theory of fluid dynamics, and the implementation of solution procedures into numerical algorithms, are discussed hand-in-hand and with reference to computer programming. This book is an accessible introduction to theoretical and computational fluid dynamics (CFD), written from a modern perspective that unifies theory and numerical practice. There are several additions and subject expansions in the Second Edition of Fluid Dynamics, including new Matlab and FORTRAN codes. Two distinguishing features of the discourse are: solution procedures and algorithms are developed immediately after problem formulations are presented, and numerical methods are introduced on a need-to-know basis and in increasing order of difficulty. Matlab codes are presented and discussed for a broad...

  1. Fluid Dynamics Theory, Computation, and Numerical Simulation

    CERN Document Server

    Pozrikidis, Constantine

    2009-01-01

    Fluid Dynamics: Theory, Computation, and Numerical Simulation is the only available book that extends the classical field of fluid dynamics into the realm of scientific computing in a way that is both comprehensive and accessible to the beginner. The theory of fluid dynamics, and the implementation of solution procedures into numerical algorithms, are discussed hand-in-hand and with reference to computer programming. This book is an accessible introduction to theoretical and computational fluid dynamics (CFD), written from a modern perspective that unifies theory and numerical practice. There are several additions and subject expansions in the Second Edition of Fluid Dynamics, including new Matlab and FORTRAN codes. Two distinguishing features of the discourse are: solution procedures and algorithms are developed immediately after problem formulations are presented, and numerical methods are introduced on a need-to-know basis and in increasing order of difficulty. Matlab codes are presented and discussed for ...

  2. Manual for the Jet Event and Background Simulation Library

    Energy Technology Data Exchange (ETDEWEB)

    Heinz, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Soltz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Angerami, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-11

    Jets are the collimated streams of particles resulting from hard scattering in the initial state of high-energy collisions. In heavy-ion collisions, jets interact with the quark-gluon plasma (QGP) before freezeout, providing a probe into the internal structure and properties of the QGP. In order to study jets, background must be subtracted from the measured event, potentially introducing a bias. We aim to understand and quantify this subtraction bias. PYTHIA, a library to simulate pure jet events, is used to simulate a model for a signature with one pure jet (a photon) and one quenched jet, where all quenched particle momenta are reduced by a user-defined constant fraction. Background for the event is simulated using multiplicity values generated by the TRENTO initial state model of heavy-ion collisions fed into a thermal model consisting of a 3-dimensional Boltzmann distribution for particle types and momenta. Data from the simulated events are used to train a statistical model, which computes a posterior distribution of the quench factor for a data set. The model was tested first on pure jet events and then on full events including the background. This model will allow for a quantitative determination of biases induced by various methods of background subtraction.
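    The statistical model is only summarized in the record above. Purely as a toy illustration of inferring a constant quench factor from photon-jet pairs, a grid-based posterior under an assumed Gaussian smearing model might look as follows; the smearing width, event count, momentum range, and flat prior are assumptions made for the sketch, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "truth": each quenched jet carries (1 - f_true) of the photon momentum,
# smeared by an assumed detector/background resolution sigma (GeV).
f_true, sigma, n_events = 0.2, 5.0, 200
photon_pt = rng.uniform(80.0, 120.0, n_events)
jet_pt = (1.0 - f_true) * photon_pt + rng.normal(0.0, sigma, n_events)

# Grid posterior over the quench factor f with a flat prior on [0, 0.5].
f_grid = np.linspace(0.0, 0.5, 501)
log_like = np.array([
    -0.5 * np.sum(((jet_pt - (1.0 - f) * photon_pt) / sigma) ** 2)
    for f in f_grid
])
posterior = np.exp(log_like - log_like.max())
posterior /= posterior.sum()

print("posterior mean quench factor:", float(np.sum(f_grid * posterior)))
```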

  3. Diffusive epidemic process: theory and simulation

    International Nuclear Information System (INIS)

    Maia, Daniel Souza; Dickman, Ronald

    2007-01-01

    We study the continuous absorbing-state phase transition in the one-dimensional diffusive epidemic process via mean-field theory and Monte Carlo simulation. In this model, particles of two species (A and B) hop on a lattice and undergo reactions B → A and A+B → 2B; the total particle number is conserved. We formulate the model as a continuous-time Markov process described by a master equation. A phase transition between the (absorbing) B-free state and an active state is observed as the parameters (reaction and diffusion rates, and total particle density) are varied. Mean-field theory reveals a surprising, nonmonotonic dependence of the critical recovery rate on the diffusion rate of B particles. A computational realization of the process that is faithful to the transition rates defining the model is devised, allowing for direct comparison with theory. Using the quasi-stationary simulation method we determine the order parameter and the survival time in systems of up to 4000 sites. Due to strong finite-size effects, the results converge only for large system sizes. We find no evidence for a discontinuous transition. Our results are consistent with the existence of three distinct universality classes, depending on whether A particles diffuse more rapidly, less rapidly, or at the same rate as B particles. We also perform quasi-stationary simulations of the triplet creation model, which yield results consistent with a discontinuous transition at high diffusion rates.
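    As a schematic illustration only (not the quasi-stationary method, nor the exact continuous-time rates defining the model), a synchronous Monte Carlo update for a one-dimensional diffusive epidemic process with conserved particle number could be organized as follows; all rate parameters below are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(2)

def dep_step(a, b, d_a, d_b, recovery, infection):
    """One schematic update of a 1-D diffusive epidemic process.

    a, b      : integer occupation numbers of healthy (A) and sick (B)
                particles per site (periodic boundaries); a + b is conserved
    d_a, d_b  : hop probabilities per A/B particle per step
    recovery  : probability of B -> A per B particle per step
    infection : probability that a given B particle infects a given A
                particle on the same site (A + B -> 2B)
    """
    # Site-local reactions first (these conserve the total particle number).
    new_infections = rng.binomial(a, 1.0 - (1.0 - infection) ** b)
    recoveries = rng.binomial(b, recovery)
    a += recoveries - new_infections
    b += new_infections - recoveries
    # Unbiased nearest-neighbour hopping of a random subset of each species.
    for occ, d in ((a, d_a), (b, d_b)):
        movers = rng.binomial(occ, d)
        left = rng.binomial(movers, 0.5)
        right = movers - left
        occ += np.roll(left, -1) + np.roll(right, 1) - movers
    return a, b

# Toy run: 200 sites at density 1, with a small initial pocket of B particles.
sites = 200
a = np.ones(sites, dtype=np.int64)
b = np.zeros(sites, dtype=np.int64)
b[sites // 2] = 5
for _ in range(1000):
    a, b = dep_step(a, b, d_a=0.5, d_b=0.5, recovery=0.05, infection=0.1)
print("surviving B particles:", int(b.sum()))
```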

  4. The TESS [Tandem Experiment Simulation Studies] computer code user's manual

    International Nuclear Information System (INIS)

    Procassini, R.J.

    1990-01-01

    TESS (Tandem Experiment Simulation Studies) is a one-dimensional, bounded particle-in-cell (PIC) simulation code designed to investigate the confinement and transport of plasma in a magnetic mirror device, including tandem mirror configurations. Mirror plasmas may be modeled in a system which includes an applied magnetic field and/or a self-consistent or applied electrostatic potential. The PIC code TESS is similar to the PIC code DIPSI (Direct Implicit Plasma Surface Interactions) which is designed to study plasma transport to and interaction with a solid surface. The codes TESS and DIPSI are direct descendants of the PIC code ES1 that was created by A. B. Langdon. This document provides the user with a brief description of the methods used in the code and a tutorial on the use of the code. 10 refs., 2 tabs
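    TESS and DIPSI use direct implicit methods that are not reproduced here. As a minimal reminder of the baseline that ES1-style electrostatic particle-in-cell codes build on, the explicit leapfrog push for each particle of charge q and mass m in the grid-interpolated field E is

    \[
    v^{\,n+1/2} = v^{\,n-1/2} + \frac{q}{m}\, E\!\left(x^{\,n}\right) \Delta t ,
    \qquad
    x^{\,n+1} = x^{\,n} + v^{\,n+1/2}\, \Delta t ,
    \]

    with velocities staggered half a step from positions; direct implicit schemes such as those in TESS and DIPSI modify this update to relax the usual explicit time-step restrictions.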

  5. Material control system simulator user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Hollstien, R.B.

    1978-01-24

    This report describes the use of a Material Control System Simulator (MCSS) program for determination of material accounting uncertainty and system response to particular adversary action sequences that constitute plausible material diversion attempts. The program is intended for use in situations where randomness, uncertainty, or interaction of adversary actions and material control system components make it difficult to assess safeguards effectiveness against particular material diversion attempts.

  6. Theory and Simulations of Solar System Plasmas

    Science.gov (United States)

    Goldstein, Melvyn L.

    2011-01-01

    “Theory and simulations of solar system plasmas” aims to highlight results from microscopic to global scales, achieved by theoretical investigations and numerical simulations of the plasma dynamics in the solar system. The theoretical approach must make evident the universality of the phenomena being considered, whatever the region in which their role is studied: at the Sun, in the solar corona, in interplanetary space, or in planetary magnetospheres. All possible theoretical issues concerning plasma dynamics are welcome, especially those using numerical models and simulations, since these tools are mandatory whenever analytical treatments fail, in particular when complex nonlinear phenomena are at work. Comparative studies for ongoing missions such as Cassini, Cluster, Demeter, Stereo, Wind, SDO, and Hinode, as well as those preparing future missions and proposals, e.g., MMS and Solar Orbiter, are especially encouraged.

  7. Theory, modeling and simulation: Annual report 1993

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, T.H. Jr.; Garrett, B.C.

    1994-07-01

    Developing the knowledge base needed to address the environmental restoration issues of the US Department of Energy requires a fundamental understanding of molecules and their interactions in isolation and in liquids, on surfaces, and at interfaces. To meet these needs, the PNL has established the Environmental and Molecular Sciences Laboratory (EMSL) and will soon begin construction of a new, collaborative research facility devoted to advancing the understanding of environmental molecular science. Research in the Theory, Modeling, and Simulation program (TMS), which is one of seven research directorates in the EMSL, will play a critical role in understanding molecular processes important in restoring DOE's research, development and production sites, including understanding the migration and reactions of contaminants in soils and groundwater, the development of separation processes for the isolation of pollutants, the development of improved materials for waste storage, understanding the enzymatic reactions involved in the biodegradation of contaminants, and understanding the interaction of hazardous chemicals with living organisms. The research objectives of the TMS program are to apply available techniques to study fundamental molecular processes involved in natural and contaminated systems; to extend current techniques to treat molecular systems of future importance and to develop techniques for addressing problems that are computationally intractable at present; to apply molecular modeling techniques to simulate molecular processes occurring in the multispecies, multiphase systems characteristic of natural and polluted environments; and to extend current molecular modeling techniques to treat complex molecular systems and to improve the reliability and accuracy of such simulations. The program contains three research activities: Molecular Theory/Modeling, Solid State Theory, and Biomolecular Modeling/Simulation. Extended abstracts are presented for 89 studies.

  8. Theory, modeling and simulation: Annual report 1993

    International Nuclear Information System (INIS)

    Dunning, T.H. Jr.; Garrett, B.C.

    1994-07-01

    Developing the knowledge base needed to address the environmental restoration issues of the US Department of Energy requires a fundamental understanding of molecules and their interactions in isolation and in liquids, on surfaces, and at interfaces. To meet these needs, the PNL has established the Environmental and Molecular Sciences Laboratory (EMSL) and will soon begin construction of a new, collaborative research facility devoted to advancing the understanding of environmental molecular science. Research in the Theory, Modeling, and Simulation program (TMS), which is one of seven research directorates in the EMSL, will play a critical role in understanding molecular processes important in restoring DOE's research, development and production sites, including understanding the migration and reactions of contaminants in soils and groundwater, the development of separation processes for the isolation of pollutants, the development of improved materials for waste storage, understanding the enzymatic reactions involved in the biodegradation of contaminants, and understanding the interaction of hazardous chemicals with living organisms. The research objectives of the TMS program are to apply available techniques to study fundamental molecular processes involved in natural and contaminated systems; to extend current techniques to treat molecular systems of future importance and to develop techniques for addressing problems that are computationally intractable at present; to apply molecular modeling techniques to simulate molecular processes occurring in the multispecies, multiphase systems characteristic of natural and polluted environments; and to extend current molecular modeling techniques to treat complex molecular systems and to improve the reliability and accuracy of such simulations. The program contains three research activities: Molecular Theory/Modeling, Solid State Theory, and Biomolecular Modeling/Simulation. Extended abstracts are presented for 89 studies.

  9. A Theory-Based Contextual Nutrition Education Manual Enhanced Nutrition Teaching Skill.

    Science.gov (United States)

    Kupolati, Mojisola D; MacIntyre, Una E; Gericke, Gerda J

    2018-01-01

    Background: A theory-based contextual nutrition education manual (NEM) may enhance effective teaching of nutrition in schools. School nutrition education should lead to the realization of such benefits as improved health, scholarly achievement leading to manpower development, and consequently the nation's development. The purpose of the study was to develop a contextual NEM for teachers of Grade 5 and 6 learners in the Bronkhorstspruit district, South Africa, and to assess teachers' perceptions of the use of the manual for teaching nutrition. Methods: This descriptive case study used an interpretivist paradigm. The study involved teachers (N = 6) who taught nutrition in Life Skills (LS) and Natural Science and Technology (NST) in a randomly selected primary school in the Bronkhorstspruit district. Findings from a nutrition education needs assessment were integrated with the constructs of the Social cognitive theory (SCT) and the Meaningful learning model (MLM) and the existing curriculum of the Department of Basic Education (DoBE) to develop a contextual NEM. The manual was used by the teachers to teach nutrition to Grades 5 and 6 learners during the 2015 academic year as a pilot project. A focus group discussion (FGD) was conducted with teachers to gauge their perceptions of the usefulness of the NEM. Data were analyzed using the thematic approach of the framework method for qualitative research. Results: Teachers described the NEM as rich in information and easy to use, and perceived the supporting materials and activities as being effective. The goal setting activities contained in the NEM were deemed to be ineffective. Teachers felt that they did not have enough time to teach all the important things that the learners needed to know. Conclusion: Teachers perceived the NEM as helpful toward improving their nutrition teaching skills. The NEM template may furthermore guide teachers in planning theory-based nutrition lessons.

  10. Lattice gauge theories and Monte Carlo simulations

    International Nuclear Information System (INIS)

    Rebbi, C.

    1981-11-01

    After some preliminary considerations, the discussion of quantum gauge theories on a Euclidean lattice takes up the definition of Euclidean quantum theory and treatment of the continuum limit; analogy is made with statistical mechanics. Perturbative methods can produce useful results for strong or weak coupling. In the attempts to investigate the properties of the systems for intermediate coupling, numerical methods known as Monte Carlo simulations have proved valuable. The bulk of this paper illustrates the basic ideas underlying the Monte Carlo numerical techniques and the major results achieved with them according to the following program: Monte Carlo simulations (general theory, practical considerations), phase structure of Abelian and non-Abelian models, the observables (coefficient of the linear term in the potential between two static sources at large separation, mass of the lowest excited state with the quantum numbers of the vacuum (the so-called glueball), the potential between two static sources at very small distance, the critical temperature at which sources become deconfined), gauge fields coupled to bosonic matter (Higgs) fields, and systems with fermions.
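    For orientation, the Euclidean lattice gauge theories referred to here are typically defined through the Wilson plaquette action, and the static-source potential is extracted from Wilson loops; schematically,

    \[
    S[U] = \beta \sum_{P} \left( 1 - \frac{1}{N}\, \mathrm{Re}\, \mathrm{Tr}\, U_P \right),
    \qquad
    \langle W(R,T) \rangle \sim e^{-V(R)\, T} \quad (T \to \infty),
    \]

    so that a linearly rising potential V(R) ≈ σR at large separation signals confinement, with the string tension σ being the "coefficient of the linear term" mentioned in the abstract.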

  11. Workshop on nuclear structure and decay data: Theory and evaluation manual - Pt. 2

    International Nuclear Information System (INIS)

    Nichols, A.L.; McLaughlin, P.K.

    2004-11-01

    A two-week Workshop on Nuclear Structure and Decay Data: Theory and Evaluation was organized and administrated by the IAEA Nuclear Data Section, and hosted at the Abdus Salam International Centre for Theoretical Physics (ICTP) in Trieste, Italy from 17 to 28 November 2003. The aims and contents of this workshop are summarized, along with the agenda, list of participants, comments and recommendations. Workshop materials are also included that are freely available on CD-ROM (all relevant PowerPoint presentations and manuals along with appropriate computer codes). (author)

  12. Workshop on nuclear structure and decay data: Theory and evaluation manual - Pt. 1

    International Nuclear Information System (INIS)

    Nichols, A.L.; McLaughlin, P.K.

    2004-11-01

    A two-week Workshop on Nuclear Structure and Decay Data: Theory and Evaluation was organized and administrated by the IAEA Nuclear Data Section, and hosted at the Abdus Salam International Centre for Theoretical Physics (ICTP) in Trieste, Italy from 17 to 28 November 2003. The aims and contents of this workshop are summarized, along with the agenda, list of participants, comments and recommendations. Workshop materials are also included that are freely available on CD-ROM (all relevant PowerPoint presentations and manuals along with appropriate computer codes). (author)

  13. Quasilinear theory and simulation of Buneman instability

    International Nuclear Information System (INIS)

    Pavan, J.; Yoon, P. H.; Umeda, T.

    2011-01-01

    In a recently developed nonlinear theory of Buneman instability, a simplifying assumption of self-similarity was imposed for the electron distribution function, based upon which a set of moment kinetic equations was derived and solved together with a nonlinear wave kinetic equation [P. H. Yoon and T. Umeda, Phys. Plasmas 17, 112317 (2010)]. It was found that the theoretical result compared reasonably well with one-dimensional electrostatic Vlasov simulation. In spite of this success, however, the simulated distribution deviated appreciably from the assumed self-similar form during the late stages of nonlinear evolution. In order to rectify this shortcoming, in this paper the distribution function is computed on the basis of a rigorous velocity-space diffusion equation. A novel theoretical scheme is developed so that both the quasilinear particle diffusion equation and the adiabatic dispersion relation can be solved for an arbitrary particle distribution function. Comparison with Vlasov simulation over the relatively early quasilinear phase of the instability shows a reasonable agreement, despite the fact that quasilinear theory lacks coherent nonlinear effects as well as mode-mode coupling effects.
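    The velocity-space diffusion equation referred to above has, in its generic one-dimensional quasilinear form (normalization constants depend on convention and are omitted),

    \[
    \frac{\partial f(v,t)}{\partial t}
    = \frac{\partial}{\partial v} \left[ D(v,t)\, \frac{\partial f(v,t)}{\partial v} \right],
    \qquad
    D(v,t) \;\propto\; \sum_{k} \frac{|E_k|^{2}\, \gamma_k}{(\omega_k - k v)^{2} + \gamma_k^{2}},
    \]

    where E_k, omega_k, and gamma_k are the spectral field amplitude, real frequency, and growth rate of mode k; the resonance-broadened form of D shown here is one common choice for unstable spectra.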

  14. WASTES: Wastes system transportation and economic simulation: Version 2, Programmer's reference manual

    International Nuclear Information System (INIS)

    Buxbaum, M.E.; Shay, M.R.

    1986-11-01

    The WASTES Version II (WASTES II) Programmer's Reference Manual was written to document code development activities performed under the Monitored Retrievable Storage (MRS) Program at Pacific Northwest Laboratory (PNL). The manual will also serve as a valuable tool for programmers involved in maintenance of and updates to the WASTES II code. The intended audience for this manual is experienced FORTRAN programmers who have only a limited knowledge of nuclear reactor operation, the nuclear fuel cycle, or nuclear waste management practices. It is assumed that the readers of this manual have previously reviewed the WASTES II Users Guide published as PNL Report 5714. The WASTES II code is written in FORTRAN 77 as an extension to the SLAM commercial simulation package. The model is predominantly a FORTRAN-based model that makes extensive use of the SLAM file maintenance and time management routines. This manual documents the general manner in which the code is constructed and the interactions between SLAM and the WASTES subroutines. The functionality of each of the major WASTES subroutines is illustrated with "block flow" diagrams. The basic function of each of these subroutines, the algorithms used in them, and a discussion of items of particular note in the subroutine are reviewed in this manual. The items of note may include an assumption, a coding practice that particularly applies to a subroutine, or sections of the code that are particularly intricate or whose mastery may be difficult. The appendices to the manual provide extensive detail on the use of arrays, subroutines, included common blocks, parameters, variables, and files.

  15. Compact toroidal plasmas: Simulations and theory

    International Nuclear Information System (INIS)

    Harned, D.S.; Hewett, D.W.; Lilliequist, C.G.

    1983-01-01

    Realistic FRC equilibria are calculated and their stability to the n=1 tilting mode is studied. Excluding kinetic effects, configurations ranging from elliptical to racetrack are unstable. Particle simulations of FRCs show that particle loss on open field lines can cause sufficient plasma rotation to drive the n=2 rotational instability. The allowed frequencies of the shear Alfven wave are calculated for use in heating of spheromaks. An expanded spheromak is introduced and its stability properties are studied. Transport calculations of CTs are described. A power balance model shows that many features of gun-generated CT plasmas can be explained by the dominance of impurity radiation. It is shown how the Taylor relaxation theory, applied to gun-generated CT plasmas, leads to the possibility of steady-state current drive. Lastly, applications of accelerated CTs are considered. (author)

  16. Theories and simulations of complex social systems

    CERN Document Server

    Mago, Vijay

    2014-01-01

    Research into social systems is challenging due to their complex nature. Traditional methods of analysis are often difficult to apply effectively as theories evolve over time. This can be due to a lack of appropriate data, or too much uncertainty. It can also be the result of problems which are not yet understood well enough in the general sense so that they can be classified, and an appropriate solution quickly identified. Simulation is one tool that deals well with these challenges, fits in well with the deductive process, and is useful for testing theory. This field is still relatively new, and much of the work is necessarily innovative, although it builds upon a rich and varied foundation. There are a number of existing modelling paradigms being applied to complex social systems research. Additionally, new methods and measures are being devised through the process of conducting research. We expect that readers will enjoy the collection of high quality research works from new and accomplished researchers. ...

  17. Improving the quality of manually acquired data: Applying the theory of planned behaviour to data quality

    International Nuclear Information System (INIS)

    Murphy, Glen D.

    2009-01-01

    The continued reliance on manual data capture in engineering asset intensive organisations highlights the critical role played by those responsible for recording raw data. The potential for data quality variance across individual operators also exposes the need to better manage this particular group. This paper evaluates the relative importance of the human factors associated with data quality. Using the theory of planned behaviour, this paper considers the impact of attitudes, perceptions and behavioural intentions on the data collection process in an engineering asset context. Two additional variables are included, those of time pressure and operator feedback. Time pressure is argued to act as a moderator between intention and data collection behaviour, while perceived behavioural control will moderate the relationship between feedback and data collection behaviour. Overall the paper argues that the presence of best practice procedures or threats of disciplinary sanction are insufficient controls to determine data quality. Instead, those concerned with improving the data collection performance of operators should consider the operator's perceptions of group attitude towards data quality, the level of feedback provided to data collectors and the impact of time pressures on procedure compliance. A range of practical recommendations are provided to those wishing to improve the quality of their manually acquired data.

  18. Plasma confinement theory and transport simulation

    International Nuclear Information System (INIS)

    Ross, D.W.

    1989-06-01

    An overview of the program has been given in the contract proposal. The principal objectives are: to provide theoretical interpretation and computer modelling for the TEXT tokamak, and to advance the simulation studies of tokamaks generally, functioning as a National Transport Center. We also carry out equilibrium and stability studies in support of the TEXT upgrade, and work has continued on Alfven waves and MFENET software development. The focus of the program is to lay the groundwork for detailed comparison of the various transport theories with experiment, to improve physics understanding and confidence in predictions of future machine behavior. This involves: collecting, in retrievable form, the data from TEXT and other tokamaks; making the data available through easy-to-use interfaces; developing criteria for success in fitting models to the data; maintaining the Texas transport code CHAPO and making it available to users; collecting theoretical models and implementing them in the transport code; and carrying out simulation studies and evaluating fits to the data. In the following we outline the progress made in fiscal year 1989. Of special note are the proposed participation of our data base project in the ITER program, and a proposed q-profile diagnostic based on our neutral transport studies. We have emphasized collaboration with the TEXT experimentalists, making as much use as possible of the measured fluctuation spectra. 52 refs

  19. STICAP: A linear circuit analysis program with stiff systems capability. Volume 1: Theory manual. [network analysis

    Science.gov (United States)

    Cooke, C. H.

    1975-01-01

    STICAP (Stiff Circuit Analysis Program) is a FORTRAN 4 computer program written for the CDC-6400-6600 computer series and SCOPE 3.0 operating system. It provides the circuit analyst a tool for automatically computing the transient responses and frequency responses of large linear time invariant networks, both stiff and nonstiff (algorithms and numerical integration techniques are described). The circuit description and user's program input language is engineer-oriented, making simple the task of using the program. Engineering theories underlying STICAP are examined. A user's manual is included which explains user interaction with the program and gives results of typical circuit design applications. Also, the program structure from a systems programmer's viewpoint is depicted and flow charts and other software documentation are given.

  20. A metronome for pacing manual ventilation in a neonatal resuscitation simulation.

    Science.gov (United States)

    Cocucci, Cecilia; Madorno, Matías; Aguilar, Adriana; Acha, Leila; Szyld, Edgardo; Musante, Gabriel

    2015-01-01

    During manual positive pressure ventilation (PPV), delivering a recommended respiratory rate (RR) is operator dependent. We tested the efficacy of a metronome as a standardised method to improve the accuracy of delivered RR during manual PPV in a neonatal resuscitation simulation. We conducted a blinded simulation in two consecutive stages. Using a self-inflating bag, 36 CPR trained operators provided PPV to a modified neonatal manikin via an endotracheal tube. Pressure and flow signals were captured by a respiratory function monitor. In the first standard stage, participants delivered RR as they would in delivery room. Prior to the second stage, they were asked about what their target RR had been and a metronome was set to that target. Subsequently, operators repeated PPV attempting to coordinate their delivered RR with the metronome. To evaluate accuracy we generated the variable RR Gap as the absolute difference between delivered and target RR. The primary outcome was the difference in RR Gap between stages. Mean (SD) target RR was 50 (8.7) inflations/min. During the initial stage, median (IQR) RR Gap was 11.6 (4.7-18.3) inflations/min and 20/36 participants (55.5%) had a mean delivered RR beyond the recommended range. When paced by the metronome, RR Gap was reduced to 0.2 (0.1-0.4) inflations/min and 32/36 participants (89%) fell within the recommended range. The use of a metronome improved the accuracy of delivered RR during manual PPV. Novel approaches to deliver an accurate RR during manual PPV need to be tested in more realistic scenarios. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  1. Nonlattice Simulation for Supersymmetric Gauge Theories in One Dimension

    International Nuclear Information System (INIS)

    Hanada, Masanori; Nishimura, Jun; Takeuchi, Shingo

    2007-01-01

    Lattice simulation of supersymmetric gauge theories is not straightforward. In some cases the lack of manifest supersymmetry just necessitates cumbersome fine-tuning, but in worse cases the chiral and/or Majorana nature of fermions makes it difficult to even formulate an appropriate lattice theory. We propose circumventing all these problems inherent in the lattice approach by adopting a nonlattice approach for one-dimensional supersymmetric gauge theories, which are important in the string or M theory context. In particular, our method can be used to investigate the gauge-gravity duality from first principles, and to simulate M theory based on the matrix theory conjecture.

  2. Coherent Synchrotron Radiation: Theory and Simulations

    International Nuclear Information System (INIS)

    Novokhatski, Alexander

    2012-01-01

    The physics of coherent synchrotron radiation (CSR) emitted by ultra-relativistic electron bunches, known since the last century, has become increasingly important with the development of high peak current free electron lasers and shorter bunch lengths in storage rings. Coherent radiation can be described as a low frequency part of the familiar synchrotron radiation in bending magnets. As this part is independent of the electron energy, the fields of different electrons of a short bunch can be in phase and the total power of the radiation will be quadratic with the number of electrons. Naturally the frequency spectrum of the longitudinal electron distribution in a bunch is of the same importance as the overall electron bunch length. The interest in the utilization of high power radiation from the terahertz and far infrared region in the field of chemical, physical and biological processes has led synchrotron radiation facilities to pay more attention to the production of coherent radiation. Several laboratories have proposed the construction of a facility wholly dedicated to terahertz production using the coherent radiation in bending magnets initiated by the longitudinal instabilities in the ring. Existing synchrotron radiation facilities also consider such a possibility among their future plans. There is a beautiful introduction to CSR in the 'ICFA Beam Dynamics Newsletter' N 35 (Editor C. Biscari). In this paper we recall the basic properties of CSR from the theory and what new effects we can get from precise simulations of the coherent radiation using numerical solutions of Maxwell's equations. In particular, transverse variation of the particle energy loss in a bunch, discovered in these simulations, explains the slice emittance growth in bending magnets of the bunch compressors and transverse de-coherence in undulators. CSR may play the same role as the effect of quantum fluctuations of synchrotron radiation in damping rings. It can limit the minimum
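    The coherent low-frequency enhancement described above is conventionally summarized by the bunch form factor: for N electrons with normalized longitudinal density S(z), the radiated spectrum is

    \[
    \frac{dP}{d\omega}
    = \frac{dp}{d\omega} \Big[ N + N(N-1)\, \big| F(\omega) \big|^{2} \Big],
    \qquad
    F(\omega) = \int S(z)\, e^{\, i \omega z / c}\, dz ,
    \]

    where dp/dω is the single-electron synchrotron spectrum. At wavelengths longer than the bunch, |F| approaches 1 and the power scales as N², which is the coherent regime exploited for terahertz production.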

  3. The Simulation and Animation of Virtual Humans to Better Understand Ergonomic Conditions at Manual Workplaces

    Directory of Open Access Journals (Sweden)

    Jürgen Rossmann

    2010-08-01

    This article extends an approach to simulate and control anthropomorphic kinematics as multiagent-systems. These "anthropomorphic multiagent-systems" have originally been developed to control coordinated multirobot systems in industrial applications, as well as to simulate humanoid robots. Here, we apply the approach of the anthropomorphic multiagent-systems to propose a "Virtual Human" - a model of human kinematics - to analyze ergonomic conditions at manual workplaces. Ergonomics provide a wide range of methods to evaluate human postures and movements. By the simulation and animation of the Virtual Human we develop examples of how results from the field of ergonomics can help to consider the human factor during the design and optimization phases of production lines.

  4. Manual Skill Acquisition During Transesophageal Echocardiography Simulator Training of Cardiology Fellows: A Kinematic Assessment.

    Science.gov (United States)

    Matyal, Robina; Montealegre-Gallegos, Mario; Mitchell, John D; Kim, Han; Bergman, Remco; Hawthorne, Katie M; O'Halloran, David; Wong, Vanessa; Hess, Phillip E; Mahmood, Feroze

    2015-12-01

    To investigate whether a transesophageal echocardiography (TEE) simulator with motion analysis can be used to impart proficiency in TEE in an integrated curriculum-based model. A prospective cohort study. A tertiary-care university hospital. TEE-naïve cardiology fellows. Participants underwent an 8-session multimodal TEE training program. Manual skills were assessed at the end of sessions 2 and 8 using motion analysis of the TEE simulator's probe. At the end of the course, participants performed an intraoperative TEE; their examinations were video captured, and a blinded investigator evaluated the total time and image transitions needed for each view. Results are reported as mean±standard deviation, or median (interquartile range) where appropriate. Eleven fellows completed the knowledge and kinematic portions of the study. Five participants were excluded from the evaluation in the clinical setting because of interim exposure to TEE or having participated in a TEE rotation after the training course. An increase of 12.95% in post-test knowledge scores was observed. From the start to the end of the course, there was a significant reduction in the kinematic metrics. Training of cardiology fellows can be complemented with kinematic analyses to objectify acquisition of manual skills during simulator-based training. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Optimizing the balance between task automation and human manual control in simulated submarine track management.

    Science.gov (United States)

    Chen, Stephanie I; Visser, Troy A W; Huf, Samuel; Loft, Shayne

    2017-09-01

    Automation can improve operator performance and reduce workload, but can also degrade operator situation awareness (SA) and the ability to regain manual control. In 3 experiments, we examined the extent to which automation could be designed to benefit performance while ensuring that individuals maintained SA and could regain manual control. Participants completed a simulated submarine track management task under varying task load. The automation was designed to facilitate information acquisition and analysis, but did not make task decisions. Relative to a condition with no automation, the continuous use of automation improved performance and reduced subjective workload, but degraded SA. Automation that was engaged and disengaged by participants as required (adaptable automation) moderately improved performance and reduced workload relative to no automation, but degraded SA. Automation engaged and disengaged based on task load (adaptive automation) provided no benefit to performance or workload, and degraded SA relative to no automation. Automation never led to significant return-to-manual deficits. However, all types of automation led to degraded performance on a nonautomated task that shared information processing requirements with automated tasks. Given these outcomes, further research is urgently required to establish how to design automation to maximize performance while keeping operators cognitively engaged. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. SERA: Simulation Environment for Radiotherapy Applications - Users Manual Version 1CO

    Energy Technology Data Exchange (ETDEWEB)

    Venhuizen, James Robert; Wessol, Daniel Edward; Wemple, Charles Alan; Wheeler, Floyd J; Harkin, G. J.; Frandsen, M. W.; Albright, C. L.; Cohen, M.T.; Rossmeier, M.; Cogliati, J.J.

    2002-06-01

    This document is the user manual for the Simulation Environment for Radiotherapy Applications (SERA) software program developed for boron-neutron capture therapy (BNCT) patient treatment planning by researchers at the Idaho National Engineering and Environmental Laboratory (INEEL) and students and faculty at Montana State University (MSU) Computer Science Department. This manual corresponds to the final release of the program, Version 1C0, developed to run under the RedHat Linux Operating System (version 7.2 or newer) or the Solaris™ Operating System (version 2.6 or newer). SERA is a suite of command line or interactively launched software modules, including graphical, geometric reconstruction, and execution interface modules for developing BNCT treatment plans. The program allows the user to develop geometric models of the patient as derived from Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) images, perform dose computation for these geometric models, and display the computed doses on overlays of the original images as three dimensional representations. This manual provides a guide to the practical use of SERA, but is not an exhaustive treatment of each feature of the code.

  7. SERA: Simulation Environment for Radiotherapy Applications - Users Manual Version 1CO

    International Nuclear Information System (INIS)

    Venhuizen, James Robert; Wessol, Daniel Edward; Wemple, Charles Alan; Wheeler, Floyd J; Harkin, G. J.; Frandsen, M. W.; Albright, C. L.; Cohen, M.T.; Rossmeier, M.; Cogliati, J.J.

    2002-01-01

    This document is the user manual for the Simulation Environment for Radiotherapy Applications (SERA) software program developed for boron-neutron capture therapy (BNCT) patient treatment planning by researchers at the Idaho National Engineering and Environmental Laboratory (INEEL) and students and faculty at Montana State University (MSU) Computer Science Department. This manual corresponds to the final release of the program, Version 1C0, developed to run under the RedHat Linux Operating System (version 7.2 or newer) or the Solaris Operating System (version 2.6 or newer). SERA is a suite of command line or interactively launched software modules, including graphical, geometric reconstruction, and execution interface modules for developing BNCT treatment plans. The program allows the user to develop geometric models of the patient as derived from Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) images, perform dose computation for these geometric models, and display the computed doses on overlays of the original images as three dimensional representations. This manual provides a guide to the practical use of SERA, but is not an exhaustive treatment of each feature of the code

  8. Quantum Link Models and Quantum Simulation of Gauge Theories

    International Nuclear Information System (INIS)

    Wiese, U.J.

    2015-01-01

    This lecture is about Quantum Link Models and Quantum Simulation of Gauge Theories. The lecture consists of four parts. The first part gives a brief history of computing and introduces pioneers of quantum computing and quantum simulations of quantum spin systems. The second part is about High-Temperature Superconductors versus QCD, Wilson's Lattice QCD and Abelian Quantum Link Models. The third part deals with Quantum Simulators for Abelian Lattice Gauge Theories and Non-Abelian Quantum Link Models. The last part of the lecture discusses Quantum Simulators mimicking 'Nuclear' physics and the continuum limit of D-theory models. (nowak)

  9. Assessment of Robotic Patient Simulators for Training in Manual Physical Therapy Examination Techniques

    Science.gov (United States)

    Ishikawa, Shun; Okamoto, Shogo; Isogai, Kaoru; Akiyama, Yasuhiro; Yanagihara, Naomi; Yamada, Yoji

    2015-01-01

    Robots that simulate patients suffering from joint resistance caused by biomechanical and neural impairments are used to aid the training of physical therapists in manual examination techniques. However, there are few methods for assessing such robots. This article proposes two types of assessment measures based on typical judgments of clinicians. One of the measures involves the evaluation of how well the simulator presents different severities of a specified disease. Experienced clinicians were requested to rate the simulated symptoms in terms of severity, and the consistency of their ratings was used as a performance measure. The other measure involves the evaluation of how well the simulator presents different types of symptoms. In this case, the clinicians were requested to classify the simulated resistances in terms of symptom type, and the average ratios of their answers were used as performance measures. For both types of assessment measures, a higher index implied higher agreement among the experienced clinicians that subjectively assessed the symptoms based on typical symptom features. We applied these two assessment methods to a patient knee robot and achieved positive appraisals. The assessment measures have potential for use in comparing several patient simulators for training physical therapists, rather than as absolute indices for developing a standard. PMID:25923719

  10. Assessment of robotic patient simulators for training in manual physical therapy examination techniques.

    Directory of Open Access Journals (Sweden)

    Shun Ishikawa

    Robots that simulate patients suffering from joint resistance caused by biomechanical and neural impairments are used to aid the training of physical therapists in manual examination techniques. However, there are few methods for assessing such robots. This article proposes two types of assessment measures based on typical judgments of clinicians. One of the measures involves the evaluation of how well the simulator presents different severities of a specified disease. Experienced clinicians were requested to rate the simulated symptoms in terms of severity, and the consistency of their ratings was used as a performance measure. The other measure involves the evaluation of how well the simulator presents different types of symptoms. In this case, the clinicians were requested to classify the simulated resistances in terms of symptom type, and the average ratios of their answers were used as performance measures. For both types of assessment measures, a higher index implied higher agreement among the experienced clinicians that subjectively assessed the symptoms based on typical symptom features. We applied these two assessment methods to a patient knee robot and achieved positive appraisals. The assessment measures have potential for use in comparing several patient simulators for training physical therapists, rather than as absolute indices for developing a standard.

  11. Assessment of robotic patient simulators for training in manual physical therapy examination techniques.

    Science.gov (United States)

    Ishikawa, Shun; Okamoto, Shogo; Isogai, Kaoru; Akiyama, Yasuhiro; Yanagihara, Naomi; Yamada, Yoji

    2015-01-01

    Robots that simulate patients suffering from joint resistance caused by biomechanical and neural impairments are used to aid the training of physical therapists in manual examination techniques. However, there are few methods for assessing such robots. This article proposes two types of assessment measures based on typical judgments of clinicians. One of the measures involves the evaluation of how well the simulator presents different severities of a specified disease. Experienced clinicians were requested to rate the simulated symptoms in terms of severity, and the consistency of their ratings was used as a performance measure. The other measure involves the evaluation of how well the simulator presents different types of symptoms. In this case, the clinicians were requested to classify the simulated resistances in terms of symptom type, and the average ratios of their answers were used as performance measures. For both types of assessment measures, a higher index implied higher agreement among the experienced clinicians that subjectively assessed the symptoms based on typical symptom features. We applied these two assessment methods to a patient knee robot and achieved positive appraisals. The assessment measures have potential for use in comparing several patient simulators for training physical therapists, rather than as absolute indices for developing a standard.

  12. Regional demand forecasting and simulation model: user's manual. Task 4, final report

    Energy Technology Data Exchange (ETDEWEB)

    Parhizgari, A M

    1978-09-25

    The Department of Energy's Regional Demand Forecasting Model (RDFOR) is an econometric and simulation system designed to estimate annual fuel-sector-region specific consumption of energy for the US. Its purposes are to (1) provide the demand side of the Project Independence Evaluation System (PIES), (2) enhance our empirical insights into the structure of US energy demand, and (3) assist policymakers in their decisions on and formulations of various energy policies and/or scenarios. This report provides a self-contained user's manual for interpreting, utilizing, and implementing RDFOR simulation software packages. Chapters I and II present the theoretical structure and the simulation of RDFOR, respectively. Chapter III describes several potential scenarios which are (or have been) utilized in the RDFOR simulations. Chapter IV presents an overview of the complete software package utilized in simulation. Chapter V provides the detailed explanation and documentation of this package. The last chapter describes step-by-step implementation of the simulation package using the two scenarios detailed in Chapter III. The RDFOR model contains 14 fuels: gasoline, electricity, natural gas, distillate and residual fuels, liquid gases, jet fuel, coal, oil, petroleum products, asphalt, petroleum coke, metallurgical coal, and total fuels, spread over residential, commercial, industrial, and transportation sectors.

  13. JacketSE: An Offshore Wind Turbine Jacket Sizing Tool; Theory Manual and Sample Usage with Preliminary Validation

    Energy Technology Data Exchange (ETDEWEB)

    Damiani, Rick [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-02-08

    This manual summarizes the theory and preliminary verifications of the JacketSE module, which is an offshore jacket sizing tool that is part of the Wind-Plant Integrated System Design & Engineering Model toolbox. JacketSE is based on a finite-element formulation and on user-prescribed inputs and design standards' criteria (constraints). The physics are highly simplified, with a primary focus on satisfying ultimate limit states and modal performance requirements. Preliminary validation work included comparisons with industry data and verification against ANSYS, a commercial finite-element analysis package. The results are encouraging, and future improvements to the code are recommended in this manual.

  14. Classical diffusion: theory and simulation codes

    International Nuclear Information System (INIS)

    Grad, H.; Hu, P.N.

    1978-03-01

    A survey is given of the development of classical diffusion theory which arose from the observation of Grad and Hogan that the Pfirsch-Schluter and Neoclassical theories are very special and frequently inapplicable because they require that plasma mass flow be treated as transport rather than as a state variable of the plasma. The subsequent theory, efficient numerical algorithms, and results of various operating codes are described

  15. Simulation model for wind energy storage systems. Volume II. Operation manual. [SIMWEST code

    Energy Technology Data Exchange (ETDEWEB)

    Warren, A.W.; Edsinger, R.W.; Burroughs, J.D.

    1977-08-01

    The effort developed a comprehensive computer program for the modeling of wind energy/storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel and pneumatic). An acronym for the program is SIMWEST (Simulation Model for Wind Energy Storage). The level of detail of SIMWEST is consistent with a role of evaluating the economic feasibility as well as the general performance of wind energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. Volume II, the SIMWEST operation manual, describes the usage of the SIMWEST program, the design of the library components, and a number of simple example simulations intended to familiarize the user with the program's operation. Volume II also contains a listing of each SIMWEST library subroutine.

  16. Follow 1.1 - a program for visualization of Thermal-Hydraulic computer simulations. User's manual

    International Nuclear Information System (INIS)

    Hyvarinen, J.

    1990-04-01

    FOLLOW is a computer program designed to function as an analyst's aid when performing large thermal-hydraulic and related safety calculations using the well-known simulation codes RELAP5, MELCOR, SMABRE and TRAB. The code is a by-product of the effort to improve the analysis capabilities of the Finnish Centre for Radiation and Nuclear Safety (STUK). FOLLOW's most important application is as an on-line 'window' into the progress of the simulation calculation. The thermal-hydraulic analyses related to nuclear safety routinely require very long calculation times. FOLLOW makes it possible to follow the course of a simulation and thus to observe results while the calculation is still in progress. FOLLOW's various outputs have been designed to mimic those available at a nuclear power plant operator's console. Thus FOLLOW can also be used much like a nuclear power plant simulator. This manual describes the usage, features, and input requirements of FOLLOW version 1.1, including a sample problem input and various outputs. (orig.)

  17. Determining the energy performance of manually controlled solar shades: A stochastic model based co-simulation analysis

    International Nuclear Information System (INIS)

    Yao, Jian

    2014-01-01

    Highlights: • The driving factor for adjustment of manually controlled solar shades was determined. • A stochastic model for manual solar shades was constructed using the Markov method. • Co-simulation with Energyplus was carried out in BCVTB. • External shading, even manually controlled, should be used prior to LOW-E windows. • Previous studies on manual solar shades may overestimate energy savings. - Abstract: Solar shading devices play a significant role in reducing building energy consumption and maintaining a comfortable indoor condition. In this paper, a typical office building with internal roller shades in the hot summer and cold winter zone was selected to determine the driving factor behind the control behavior of manual solar shades. Solar radiation was determined to be the major factor driving solar shading adjustment, based on field measurements and logit analysis, and a stochastic model for manually adjusted solar shades was then constructed using the Markov method. This model was used in BCVTB for further co-simulation with Energyplus to determine the impact of the control behavior of solar shades on energy performance. The results show that manually adjusted solar shades, whether located inside or outside, offer better energy-saving performance than clear-pane windows, while only external shades perform better than regularly used LOW-E windows. Simulation also indicates that using an ideal assumption of solar shade adjustment, as most studies do in building simulation, may lead to an overestimation of energy saving by about 16–30%. There is a need to improve occupants’ actions on shades to more effectively respond to outdoor conditions in order to lower energy consumption, and this improvement can be easily achieved by using simple strategies as a guide to control manual solar shades.
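
    The abstract above outlines a modeling chain (logit analysis of field data, a Markov-chain shade model, then co-simulation). As a rough illustration of that kind of stochastic shade model only, the sketch below drives a two-state (raised/lowered) Markov chain whose lowering probability is a logistic function of solar radiation; the coefficients, time step, and raising probability are invented for illustration and are not the fitted values from the paper.

        import math
        import random

        def p_lower(radiation_w_m2, b0=-4.0, b1=0.01):
            # Hypothetical logit model: probability of lowering the shade in this
            # time step, increasing with incident solar radiation (W/m2).
            return 1.0 / (1.0 + math.exp(-(b0 + b1 * radiation_w_m2)))

        def simulate_shade(radiation_series, p_raise=0.05, seed=1):
            # Two-state Markov chain: state 0 = shade raised, 1 = shade lowered.
            random.seed(seed)
            state, states = 0, []
            for rad in radiation_series:
                if state == 0 and random.random() < p_lower(rad):
                    state = 1
                elif state == 1 and random.random() < p_raise:
                    state = 0
                states.append(state)
            return states

        # Example: hourly global radiation for one day (W/m2) and the resulting shade states.
        radiation = [0, 0, 50, 200, 450, 700, 800, 750, 600, 350, 100, 0]
        print(simulate_shade(radiation))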

  18. A theory manual for multi-physics code coupling in LIME.

    Energy Technology Data Exchange (ETDEWEB)

    Belcourt, Noel; Bartlett, Roscoe Ainsworth; Pawlowski, Roger Patrick; Schmidt, Rodney Cannon; Hooper, Russell Warren

    2011-03-01

    The Lightweight Integrating Multi-physics Environment (LIME) is a software package for creating multi-physics simulation codes. Its primary application space is when computer codes are currently available to solve different parts of a multi-physics problem and now need to be coupled with other such codes. In this report we define a common domain language for discussing multi-physics coupling and describe the basic theory associated with multiphysics coupling algorithms that are to be supported in LIME. We provide an assessment of coupling techniques for both steady-state and time dependent coupled systems. Example couplings are also demonstrated.

  19. On the inclusion of macroscopic theory in Monte Carlo simulation using game theory

    International Nuclear Information System (INIS)

    Tatarkiewicz, J.

    1980-01-01

    This paper presents the inclusion of macroscopic damage theory into Monte Carlo particle-range simulation using game theory. A new computer code called RADDI was developed on the basis of this inclusion. Results of Monte Carlo damage simulation after 6.3 MeV proton bombardment of silicon are compared with experimental data of Bulgakov et al. (orig.)

  20. Introducing Simulation via the Theory of Records

    Science.gov (United States)

    Johnson, Arvid C.

    2011-01-01

    While spreadsheet simulation can be a useful method by which to help students to understand some of the more advanced concepts in an introductory statistics course, introducing the simulation methodology at the same time as these concepts can result in student cognitive overload. This article describes a spreadsheet model that has been…

  1. Monte Carlo simulations of lattice gauge theories

    International Nuclear Information System (INIS)

    Forcrand, P. de; Minnesota Univ., Minneapolis, MN

    1989-01-01

    Lattice gauge simulations are presented in layman's terms. The need for large computer resources is justified. The main aspects of implementations on vector and parallel machines are explained. An overview of state of the art simulations and dedicated hardware projects is presented. 8 refs.; 1 figure; 1 table

  2. Managing bottlenecks in manual automobile assembly systems using discrete event simulation

    Directory of Open Access Journals (Sweden)

    Dewa, M.

    2013-08-01

    Full Text Available Batch model lines are quite handy when the demand for each product is moderate. However, they are characterised by high work-in-progress inventories, lost production time when changing over models, and reduced flexibility when it comes to altering production rates as product demand changes. On the other hand, mixed model lines can offer reduced work-in-progress inventory and increased flexibility. The objective of this paper is to illustrate that a manual automobile assembly system can be optimised through managing bottlenecks by ensuring high workstation utilisation, reducing queue lengths before stations, and reducing station downtime. A case study from the automobile industry is used for data collection. A model is developed through the use of simulation software. The model is then verified and validated before a detailed bottleneck analysis is conducted. An operational strategy is then proposed for optimal bottleneck management. Although the paper focuses on improving automobile assembly systems in batch mode, the methodology can also be applied to single-model manual and automated production lines.

  3. Users manual for an expert system (HSPEXP) for calibration of the Hydrological Simulation Program--Fortran

    Science.gov (United States)

    Lumb, A.M.; McCammon, R.B.; Kittle, J.L.

    1994-01-01

    Expert system software was developed to assist less experienced modelers with calibration of a watershed model and to facilitate the interaction between the modeler and the modeling process not provided by mathematical optimization. A prototype was developed with artificial intelligence software tools, a knowledge engineer, and two domain experts. The manual procedures used by the domain experts were identified and the prototype was then coded by the knowledge engineer. The expert system consists of a set of hierarchical rules designed to guide the calibration of the model through a systematic evaluation of model parameters. When the prototype was completed and tested, it was rewritten for portability and operational use and was named HSPEXP. The watershed model Hydrological Simulation Program--Fortran (HSPF) is used in the expert system. This report is the users manual for HSPEXP and contains a discussion of the concepts and detailed steps and examples for using the software. The system has been tested on watersheds in the States of Washington and Maryland, and the system correctly identified the model parameters to be adjusted and the adjustments led to improved calibration.
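
    The hierarchical rules themselves are not reproduced here, but their flavor can be illustrated with a single toy calibration rule of the kind such an expert system encodes. The parameter name (LZSN, a lower-zone storage parameter in HSPF), the tolerance, and the 10% adjustment below are illustrative assumptions, not the actual HSPEXP rule base.

        def runoff_volume_rule(sim_runoff_mm, obs_runoff_mm, lzsn, tolerance=0.10):
            # Toy rule: if simulated annual runoff exceeds observed runoff by more than
            # the tolerance, increase the lower-zone storage parameter (which tends to
            # reduce runoff); if runoff is too low, decrease it.
            error = (sim_runoff_mm - obs_runoff_mm) / obs_runoff_mm
            if error > tolerance:
                return lzsn * 1.10, "runoff too high: increase LZSN by 10%"
            if error < -tolerance:
                return lzsn * 0.90, "runoff too low: decrease LZSN by 10%"
            return lzsn, "runoff within tolerance: no change"

        print(runoff_volume_rule(sim_runoff_mm=520.0, obs_runoff_mm=450.0, lzsn=150.0))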

  4. Mars Tumbleweed Simulation Using Singular Perturbation Theory

    Science.gov (United States)

    Raiszadeh, Behzad; Calhoun, Phillip

    2005-01-01

    The Mars Tumbleweed is a new surface rover concept that utilizes Martian winds as the primary source of mobility. Several designs have been proposed for the Mars Tumbleweed, all using aerodynamic drag to generate force for traveling about the surface. The Mars Tumbleweed, in its deployed configuration, must be large and lightweight to provide the ratio of drag force to rolling resistance necessary to initiate motion from the Martian surface. This paper discusses the dynamic simulation details of a candidate Tumbleweed design. The dynamic simulation model must properly evaluate and characterize the motion of the tumbleweed rover to support proper selection of system design parameters. Several factors, such as model flexibility, simulation run times, and model accuracy needed to be considered in modeling assumptions. The simulation was required to address the flexibility of the rover and its interaction with the ground, and properly evaluate its mobility. Proper assumptions needed to be made such that the simulated dynamic motion is accurate and realistic while not overly burdened by long simulation run times. This paper also shows results that provided reasonable correlation between the simulation and a drop/roll test of a tumbleweed prototype.

  5. NCC simulation model. Phase 2: Simulating the operations of the Network Control Center and NCC message manual

    Science.gov (United States)

    Benjamin, Norman M.; Gill, Tepper; Charles, Mary

    1994-01-01

    The network control center (NCC) provides scheduling, monitoring, and control of services to the NASA space network. The space network provides tracking and data acquisition services to many low-earth orbiting spacecraft. This report describes the second phase in the development of simulation models for the NCC. Phase one concentrated on the computer systems and interconnecting network. Phase two focuses on the implementation of the network message dialogs and the resources controlled by the NCC. Performance measures were developed along with selected indicators of the NCC's operational effectiveness. The NCC performance indicators were defined in terms of the following: (1) transfer rate, (2) network delay, (3) channel establishment time, (4) line turnaround time, (5) availability, (6) reliability, (7) accuracy, (8) maintainability, and (9) security. An NCC internal and external message manual is appended to this report.

  6. Media-fill simulation tests in manual and robotic aseptic preparation of injection solutions in syringes.

    Science.gov (United States)

    Krämer, Irene; Federici, Matteo; Kaiser, Vanessa; Thiesen, Judith

    2016-04-01

    The purpose of this study was to evaluate the contamination rate of media-fill products either prepared automatically with a robotic system (APOTECAchemo™) or prepared manually at cytotoxic workbenches in the same cleanroom environment and by experienced operators. Media fills were complemented by microbiological environmental control in the critical zones and used to validate the cleaning and disinfection procedures of the robotic system. The aseptic preparation of patient-individual ready-to-use injection solutions was simulated by using double-concentrated tryptic soy broth as growth medium, water for injection, and plastic syringes as primary packaging materials. Media fills were prepared either automatically (500 units) with the robot or manually (500 units) at cytotoxic workbenches in the same cleanroom over a period of 18 working days. The test solutions were incubated at room temperature (22℃) over 4 weeks. Products were visually inspected for turbidity after a 2-week and a 4-week period. Following incubation, growth promotion tests were performed with Staphylococcus epidermidis. During the media-fill procedures, passive air monitoring was performed with settle plates and surface monitoring with contact plates at predefined locations, as well as fingerprints. The plates were incubated for 5-7 days at room temperature, followed by 2-3 days at 30-35℃, and the colony-forming units (cfu) were counted after both periods. The robot was cleaned and disinfected according to the established standard operating procedure on two working days prior to the media-fill session, while on six other working days only six critical components were sanitized at the end of the media-fill sessions. Every day, UV irradiation was operated for 4 h after finishing work. None of the 1000 media-fill products prepared in the two different settings showed turbidity after the incubation period, thereby indicating no contamination with microorganisms. All products remained uniform, clear, and light

  7. Using Historical Simulations to Teach Political Theory

    Science.gov (United States)

    Gorton, William; Havercroft, Jonathan

    2012-01-01

    As teachers of political theory, our goal is not merely to help students understand the abstract reasoning behind key ideas and texts of our discipline. We also wish to convey the historical contexts that informed these ideas and texts, including the political aims of their authors. But the traditional lecture-and-discussion approach tends to…

  8. Plasma confinement theory and transport simulation

    International Nuclear Information System (INIS)

    Ross, D.W.

    1993-02-01

    The objectives continue to be: (1) to advance the transport studies of tokamaks, including development and maintenance of the Magnetic Fusion Energy Database, and (2) to provide theoretical interpretation, modeling and equilibrium and stability for TEXT-Upgrade. Recent publications and reports, and conference presentations of the Fusion Research Center theory group are listed

  9. Compensatory strategies during manual wheelchair propulsion in response to weakness in individual muscle groups: A simulation study.

    Science.gov (United States)

    Slowik, Jonathan S; McNitt-Gray, Jill L; Requejo, Philip S; Mulroy, Sara J; Neptune, Richard R

    2016-03-01

    The considerable physical demand placed on the upper extremity during manual wheelchair propulsion is distributed among individual muscles. The strategy used to distribute the workload is likely influenced by the relative force-generating capacities of individual muscles, and some strategies may be associated with a higher injury risk than others. The objective of this study was to use forward dynamics simulations of manual wheelchair propulsion to identify compensatory strategies that can be used to overcome weakness in individual muscle groups and identify specific strategies that may increase injury risk. Identifying these strategies can provide rationale for the design of targeted rehabilitation programs aimed at preventing the development of pain and injury in manual wheelchair users. Muscle-actuated forward dynamics simulations of manual wheelchair propulsion were analyzed to identify compensatory strategies in response to individual muscle group weakness using individual muscle mechanical power and stress as measures of upper extremity demand. The simulation analyses found the upper extremity to be robust to weakness in any single muscle group as the remaining groups were able to compensate and restore normal propulsion mechanics. The rotator cuff muscles experienced relatively high muscle stress levels and exhibited compensatory relationships with the deltoid muscles. These results underline the importance of strengthening the rotator cuff muscles and supporting muscles whose contributions do not increase the potential for impingement (i.e., the thoracohumeral depressors) and minimize the risk of upper extremity injury in manual wheelchair users.

  10. Fluid dynamics theory, computation, and numerical simulation

    CERN Document Server

    Pozrikidis, C

    2017-01-01

    This book provides an accessible introduction to the basic theory of fluid mechanics and computational fluid dynamics (CFD) from a modern perspective that unifies theory and numerical computation. Methods of scientific computing are introduced alongside theoretical analysis, and MATLAB® codes are presented and discussed for a broad range of topics: from interfacial shapes in hydrostatics, to vortex dynamics, to viscous flow, to turbulent flow, to panel methods for flow past airfoils. The third edition includes new topics, additional examples, solved and unsolved problems, and revised images. It adds more computational algorithms and MATLAB programs. It also incorporates discussion of the latest version of the fluid dynamics software library FDLIB, which is freely available online. FDLIB offers an extensive range of computer codes that demonstrate the implementation of elementary and advanced algorithms and provide an invaluable resource for research, teaching, classroom instruction, and self-study. This ...

  11. Nuclear Lattice Simulations with Chiral Effective Field Theory

    OpenAIRE

    Lee, Dean

    2008-01-01

    We present recent results on lattice simulations using chiral effective field theory. In particular we discuss lattice simulations for dilute neutron matter at next-to-leading order and three-body forces in light nuclei at next-to-next-to-leading order.

  12. Developing a Theory-Based Simulation Educator Resource.

    Science.gov (United States)

    Thomas, Christine M; Sievers, Lisa D; Kellgren, Molly; Manning, Sara J; Rojas, Deborah E; Gamblian, Vivian C

    2015-01-01

    The NLN Leadership Development Program for Simulation Educators 2014 faculty development group identified a lack of a common language/terminology to outline the progression of expertise of simulation educators. The group analyzed Benner's novice-to-expert model and applied its levels of experience to simulation educator growth. It established common operational categories of faculty development and used them to organize resources that support progression toward expertise. The resulting theory-based Simulator Educator Toolkit outlines levels of ability and provides quality resources to meet the diverse needs of simulation educators and team members.

  13. Reconstruction of Nietzsche’s Theory of Simulation

    Directory of Open Access Journals (Sweden)

    Elizbar Elizbarashvili

    2014-03-01

    Full Text Available The article traces the inner points of contact between the thought of the German philosopher Friedrich Nietzsche and the French philosopher Jean Baudrillard. We map the metaphorical world of Nietzsche and his philosophy and identify a common code between these metaphors and the philosophical language of Jean Baudrillard's simulation theory. The material is decoded and interpreted on this basis. As a result, we conclude that Nietzsche's philosophy contained a simulation plane before postmodernism and that the simulation theory implicit in it can be reconstructed at the rational level. The article considers the specific mechanisms of Nietzsche's simulation theory, including the personality of Zarathustra, the great tempter, and the connected mechanisms of faith and courage.

  14. Quantum decision-maker theory and simulation

    Science.gov (United States)

    Zak, Michail; Meyers, Ronald E.; Deacon, Keith S.

    2000-07-01

    A quantum device simulating the human decision making process is introduced. It consists of quantum recurrent nets generating stochastic processes which represent the motor dynamics, and of classical neural nets describing the evolution of probabilities of these processes which represent the mental dynamics. The autonomy of the decision making process is achieved by a feedback from the mental to motor dynamics which changes the stochastic matrix based upon the probability distribution. This feedback replaces unavailable external information by an internal knowledge base stored in the mental model in the form of probability distributions. As a result, the coupled motor-mental dynamics is described by a nonlinear version of Markov chains which can decrease entropy without an external source of information. Applications to common sense based decisions as well as to evolutionary games are discussed. An example exhibiting self-organization is computed using quantum computer simulation. Force on force and mutual aircraft engagements using the quantum decision maker dynamics are considered.
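
    A minimal sketch of the kind of nonlinear Markov chain described above, in which the transition matrix depends on the current probability distribution so that the distribution can sharpen (entropy can decrease) without external information. The specific feedback rule is invented for illustration and is not the authors' model.

        import numpy as np

        def feedback_matrix(p):
            # Hypothetical feedback rule: transitions into a state are favored in
            # proportion to the square of its current probability, so the more
            # probable state attracts still more probability mass.
            a = p[0] ** 2 / (p[0] ** 2 + p[1] ** 2)
            return np.array([[a, 1.0 - a],
                             [a, 1.0 - a]])   # row-stochastic, distribution-dependent

        p = np.array([0.55, 0.45])             # initial probability distribution
        for _ in range(10):
            p = p @ feedback_matrix(p)         # mental dynamics feeds back into motor dynamics
            print(np.round(p, 3))              # the distribution sharpens toward one state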

  15. Plasma confinement theory and transport simulation

    International Nuclear Information System (INIS)

    Ross, D.W.

    1993-10-01

    The objectives of the Fusion Research Center Theory Program continue to be: (1) to advance the transport studies of tokamaks, including development and maintenance of the Magnetic Fusion Energy Database; and (2) to provide theoretical interpretation, modeling and equilibrium and stability studies for the TEXT-Upgrade tokamak. Publications and reports and conference presentations for the grant period are listed. Work is described in five basic categories: A. Magnetic Fusion Energy Database; B. Computational Support and Numerical Modeling; C. Support for TEXT-Upgrade and Diagnostics; D. Transport Studies; E. Alfven Waves

  16. Theory, modeling, and simulation annual report, 1992

    Energy Technology Data Exchange (ETDEWEB)

    1993-05-01

    This report briefly discusses research on the following topics: development of electronic structure methods; modeling molecular processes in clusters; modeling molecular processes in solution; modeling molecular processes in separations chemistry; modeling interfacial molecular processes; modeling molecular processes in the atmosphere; methods for periodic calculations on solids; chemistry and physics of minerals; graphical user interfaces for computational chemistry codes; visualization and analysis of molecular simulations; integrated computational chemistry environment; and benchmark computations.

  17. Constructing International Relations Simulations: Examining the Pedagogy of IR Simulations through a Constructivist Learning Theory Lens

    Science.gov (United States)

    Asal, Victor; Kratoville, Jayson

    2013-01-01

    Simulations are being used more and more in political science generally and in international relations specifically. While there is a growing body of literature describing different simulations and a small amount of literature that empirically tests the impact of simulations, scholars have written very little linking the pedagogic theory behind…

  18. Learning Theory Foundations of Simulation-Based Mastery Learning.

    Science.gov (United States)

    McGaghie, William C; Harris, Ilene B

    2018-06-01

    Simulation-based mastery learning (SBML), like all education interventions, has learning theory foundations. Recognition and comprehension of SBML learning theory foundations are essential for thoughtful education program development, research, and scholarship. We begin with a description of SBML followed by a section on the importance of learning theory foundations to shape and direct SBML education and research. We then discuss three principal learning theory conceptual frameworks that are associated with SBML (behavioral, constructivist, and social cognitive) and their contributions to SBML thought and practice. We then discuss how the three learning theory frameworks converge in the course of planning, conducting, and evaluating SBML education programs in the health professions. Convergence of these learning theory frameworks is illustrated by a description of an SBML education and research program in advanced cardiac life support. We conclude with a brief coda.

  19. Crossover from equilibration to aging: Nonequilibrium theory versus simulations.

    Science.gov (United States)

    Mendoza-Méndez, P; Lázaro-Lázaro, E; Sánchez-Díaz, L E; Ramírez-González, P E; Pérez-Ángel, G; Medina-Noyola, M

    2017-08-01

    Understanding glasses and the glass transition requires comprehending the nature of the crossover from the ergodic (or equilibrium) regime, in which the stationary properties of the system have no history dependence, to the mysterious glass transition region, where the measured properties are nonstationary and depend on the protocol of preparation. In this work we use nonequilibrium molecular dynamics simulations to test the main features of the crossover predicted by the molecular version of the recently developed multicomponent nonequilibrium self-consistent generalized Langevin equation theory. According to this theory, the glass transition involves the abrupt passage from the ordinary pattern of full equilibration to the aging scenario characteristic of glass-forming liquids. The same theory explains that this abrupt transition will always be observed as a blurred crossover due to the unavoidable finiteness of the time window of any experimental observation. We find that within their finite waiting-time window, the simulations confirm the general trends predicted by the theory.

  20. Working Memory for Linguistic and Non-linguistic Manual Gestures: Evidence, Theory, and Application.

    Science.gov (United States)

    Rudner, Mary

    2018-01-01

    Linguistic manual gestures are the basis of sign languages used by deaf individuals. Working memory and language processing are intimately connected and thus when language is gesture-based, it is important to understand related working memory mechanisms. This article reviews work on working memory for linguistic and non-linguistic manual gestures and discusses theoretical and applied implications. Empirical evidence shows that there are effects of load and stimulus degradation on working memory for manual gestures. These effects are similar to those found for working memory for speech-based language. Further, there are effects of pre-existing linguistic representation that are partially similar across language modalities. But above all, deaf signers score higher than hearing non-signers on an n-back task with sign-based stimuli, irrespective of their semantic and phonological content, but not with non-linguistic manual actions. This pattern may be partially explained by recent findings relating to cross-modal plasticity in deaf individuals. It suggests that in linguistic gesture-based working memory, semantic aspects may outweigh phonological aspects when processing takes place under challenging conditions. The close association between working memory and language development should be taken into account in understanding and alleviating the challenges faced by deaf children growing up with cochlear implants as well as other clinical populations.

  1. Viscous wing theory development. Volume 2: GRUMWING computer program user's manual

    Science.gov (United States)

    Chow, R. R.; Ogilvie, P. L.

    1986-01-01

    This report is a user's manual which describes the operation of the computer program, GRUMWING. The program computes the viscous transonic flow over three-dimensional wings using a boundary layer type viscid-inviscid interaction approach. The inviscid solution is obtained by an approximate factorization (AFZ) method for the full potential equation. The boundary layer solution is based on integral entrainment methods.

  2. Working Memory for Linguistic and Non-linguistic Manual Gestures: Evidence, Theory, and Application

    Directory of Open Access Journals (Sweden)

    Mary Rudner

    2018-05-01

    Full Text Available Linguistic manual gestures are the basis of sign languages used by deaf individuals. Working memory and language processing are intimately connected and thus when language is gesture-based, it is important to understand related working memory mechanisms. This article reviews work on working memory for linguistic and non-linguistic manual gestures and discusses theoretical and applied implications. Empirical evidence shows that there are effects of load and stimulus degradation on working memory for manual gestures. These effects are similar to those found for working memory for speech-based language. Further, there are effects of pre-existing linguistic representation that are partially similar across language modalities. But above all, deaf signers score higher than hearing non-signers on an n-back task with sign-based stimuli, irrespective of their semantic and phonological content, but not with non-linguistic manual actions. This pattern may be partially explained by recent findings relating to cross-modal plasticity in deaf individuals. It suggests that in linguistic gesture-based working memory, semantic aspects may outweigh phonological aspects when processing takes place under challenging conditions. The close association between working memory and language development should be taken into account in understanding and alleviating the challenges faced by deaf children growing up with cochlear implants as well as other clinical populations.

  3. Reionization and Cosmic Dawn: theory and simulations

    Science.gov (United States)

    Mesinger, Andrei

    2018-05-01

    We highlight recent progress in the sophistication and diversification of the simulations of cosmic dawn and reionization. The application of these modeling tools to recent observations has allowed us to narrow down the timing of reionization. The midpoint of reionization is constrained to z = 7.6 (+0.8/-0.7) (1σ), with the strongest constraints coming from the optical depth to the CMB measured with the Planck satellite and the first detection of ongoing reionization from the spectra of the z = 7.1 QSO ULASJ1120+0641. However, we still know virtually nothing about the astrophysical sources during the first billion years. The revolution in our understanding will be led by upcoming interferometric observations of the cosmic 21-cm signal. The properties of the sources and sinks of UV and X-ray photons are encoded in the 3D patterns of the signal. The development of Bayesian parameter recovery techniques, which tap into the wealth of the 21-cm signal, will soon usher in an era of precision astrophysical cosmology.

  4. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eldred, Michael S [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jakeman, John Davis [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stephens, John Adam [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vigil, Dena M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wildey, Timothy Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bohnhoff, William J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hu, Kenneth T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dalbey, Keith R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bauman, Lara E [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hough, Patricia Diane [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-05-01

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  5. Theory of compressive modeling and simulation

    Science.gov (United States)

    Szu, Harold; Cha, Jae; Espinola, Richard L.; Krapels, Keith

    2013-05-01

    Modeling and Simulation (M&S) has been evolving along two general directions: (i) data-rich approach suffering the curse of dimensionality and (ii) equation-rich approach suffering computing power and turnaround time. We suggest a third approach. We call it (iii) compressive M&S (CM&S), because the basic Minimum Free-Helmholtz Energy (MFE) facilitating CM&S can reproduce and generalize the Candes, Romberg, Tao & Donoho (CRT&D) Compressive Sensing (CS) paradigm as a linear Lagrange Constraint Neural network (LCNN) algorithm. CM&S based MFE can generalize LCNN to 2nd order as Nonlinear augmented LCNN. For example, during the sunset, we can avoid a reddish bias of sunlight illumination due to a long-range Rayleigh scattering over the horizon. With CM&S we can use a night vision camera instead of a day camera. We decomposed the long wave infrared (LWIR) band with a filter into 2 vector components (8~10μm and 10~12μm) and used LCNN to find pixel by pixel the map of Emissive-Equivalent Planck Radiation Sources (EPRS). Then, we up-shifted consistently, according to the de-mixed sources map, to the sub-micron RGB color image. Moreover, the night vision imaging can also be down-shifted to Passive Millimeter Wave (PMMW) imaging, suffering less blur owing to dusty smoke scattering and enjoying apparent smoothness of surface reflectivity of man-made objects under the Rayleigh resolution. One loses three orders of magnitude in the spatial Rayleigh resolution; but gains two orders of magnitude in the reflectivity, and gains another two orders in the propagation without obscuring smog. Since CM&S can generate missing data and hard-to-get dynamic transients, CM&S can reduce unnecessary measurements and their associated cost and computing in the sense of super-saving CS: measuring one & getting one's neighborhood free.

  6. A general sensitivity theory for simulations of nonlinear systems

    International Nuclear Information System (INIS)

    Kenton, M.A.

    1981-01-01

    A general sensitivity theory is developed for nonlinear lumped-parameter system simulations. The point-of-departure is general perturbation theory, which has long been used for linear systems in nuclear engineering and reactor physics. The theory allows the sensitivity of particular figures-of-merit of the system behavior to be calculated with respect to any parameter. An explicit procedure is derived for applying the theory to physical systems undergoing sudden events (e.g., reactor scrams, tank ruptures). A related problem, treating figures-of-merit defined as functions of extremal values of system variables occurring at sudden events, is handled by the same procedure. The general calculational scheme for applying the theory to numerical codes is discussed. It is shown that codes which use pre-packaged implicit integration subroutines can be augmented to include sensitivity theory: a companion set of subroutines to solve the sensitivity problem is listed. This combined system analysis code is applied to a simple model for loss of post-accident heat removal in a liquid metal-cooled fast breeder reactor. The uses of the theory for answering more general sensitivity questions are discussed. One application of the theory is to systematically determine whether specific physical processes in a model contribute significantly to the figures-of-merit. Another application of the theory is for selecting parameter values which enable a model to match experimentally observed behavior
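
    The perturbation-theory machinery itself is not reproduced here, but the quantity it computes can be illustrated by brute force: the sensitivity of a figure-of-merit of a simple lumped-parameter simulation with respect to one parameter, estimated by central finite differences. The cooling model, parameter values, and figure-of-merit below are invented for illustration.

        import numpy as np

        def simulate_peak_temperature(h, t_end=100.0, dt=0.01, T0=300.0, q0=50.0):
            # Toy lumped-parameter model: a component heated by a decaying source q(t)
            # and cooled with heat-loss coefficient h; the figure-of-merit is the peak
            # temperature reached over the transient.
            T, t, peak = T0, 0.0, T0
            while t < t_end:
                q = q0 * np.exp(-0.05 * t)          # decaying heat source
                T += dt * (q - h * (T - 300.0))     # explicit Euler step
                peak = max(peak, T)
                t += dt
            return peak

        # Central-difference estimate of d(peak T)/dh, the sensitivity of the
        # figure-of-merit to the heat-loss coefficient.
        h, dh = 0.2, 1e-4
        sens = (simulate_peak_temperature(h + dh) - simulate_peak_temperature(h - dh)) / (2 * dh)
        print("peak T:", simulate_peak_temperature(h), "d(peak T)/dh:", sens)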

  7. Social cognitive theory, metacognition, and simulation learning in nursing education.

    Science.gov (United States)

    Burke, Helen; Mancuso, Lorraine

    2012-10-01

    Simulation learning encompasses simple, introductory scenarios requiring response to patients' needs during basic hygienic care and during situations demanding complex decision making. Simulation integrates principles of social cognitive theory (SCT) into an interactive approach to learning that encompasses the core principles of intentionality, forethought, self-reactiveness, and self-reflectiveness. Effective simulation requires an environment conducive to learning and introduces activities that foster symbolic coding operations and mastery of new skills; debriefing builds self-efficacy and supports self-regulation of behavior. Tailoring the level of difficulty to students' mastery level supports successful outcomes and motivation to set higher standards. Mindful selection of simulation complexity and structure matches course learning objectives and supports progressive development of metacognition. Theory-based facilitation of simulated learning optimizes efficacy of this learning method to foster maturation of cognitive processes of SCT, metacognition, and self-directedness. Examples of metacognition that are supported through mindful, theory-based implementation of simulation learning are provided.

  8. Thermodynamic Models from Fluctuation Solution Theory Analysis of Molecular Simulations

    DEFF Research Database (Denmark)

    Christensen, Steen; Peters, Günther H.j.; Hansen, Flemming Yssing

    2007-01-01

    Fluctuation solution theory (FST) is employed to analyze results of molecular dynamics (MD) simulations of liquid mixtures. The objective is to generate parameters for macroscopic GE-models, here the modified Margules model. We present a strategy for choosing the number of parameters included...
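
    For context, GE here denotes an excess Gibbs energy model. The sketch below shows how fitted parameters of a two-parameter Margules GE model translate into activity coefficients; whether this is exactly the "modified Margules" form used by the authors is an assumption, and the parameter values are invented rather than taken from the FST/MD analysis.

        import math

        def margules_activity_coefficients(x1, A12, A21):
            # Two-parameter Margules model for a binary mixture:
            #   gE/RT = x1 * x2 * (A21 * x1 + A12 * x2)
            x2 = 1.0 - x1
            ln_g1 = x2 ** 2 * (A12 + 2.0 * (A21 - A12) * x1)
            ln_g2 = x1 ** 2 * (A21 + 2.0 * (A12 - A21) * x2)
            return math.exp(ln_g1), math.exp(ln_g2)

        # Illustrative (invented) parameters, e.g. as they might be regressed from
        # fluctuation quantities extracted from an MD trajectory.
        g1, g2 = margules_activity_coefficients(x1=0.3, A12=0.5, A21=0.8)
        print(g1, g2)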

  9. U(1) Wilson lattice gauge theories in digital quantum simulators

    Science.gov (United States)

    Muschik, Christine; Heyl, Markus; Martinez, Esteban; Monz, Thomas; Schindler, Philipp; Vogell, Berit; Dalmonte, Marcello; Hauke, Philipp; Blatt, Rainer; Zoller, Peter

    2017-10-01

    Lattice gauge theories describe fundamental phenomena in nature, but calculating their real-time dynamics on classical computers is notoriously difficult. In a recent publication (Martinez et al 2016 Nature 534 516), we proposed and experimentally demonstrated a digital quantum simulation of the paradigmatic Schwinger model, a U(1)-Wilson lattice gauge theory describing the interplay between fermionic matter and gauge bosons. Here, we provide a detailed theoretical analysis of the performance and the potential of this protocol. Our strategy is based on analytically integrating out the gauge bosons, which preserves exact gauge invariance but results in complicated long-range interactions between the matter fields. Trapped-ion platforms are naturally suited to implementing these interactions, allowing for an efficient quantum simulation of the model, with a number of gate operations that scales polynomially with system size. Employing numerical simulations, we illustrate that relevant phenomena can be observed in larger experimental systems, using as an example the production of particle-antiparticle pairs after a quantum quench. We investigate theoretically the robustness of the scheme towards generic error sources, and show that near-future experiments can reach regimes where finite-size effects are insignificant. We also discuss the challenges in quantum simulating the continuum limit of the theory. Using our scheme, fundamental phenomena of lattice gauge theories can be probed using a broad set of experimentally accessible observables, including the entanglement entropy and the vacuum persistence amplitude.
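
    Schematically, eliminating the gauge field via Gauss's law maps the lattice Schwinger model onto a pure spin chain with long-range interactions. In a commonly quoted Kogut-Susskind/spin formulation (conventions and prefactors vary between references, so this should be read as an illustration rather than the paper's exact notation):

        H = w \sum_n \left( \sigma^+_n \sigma^-_{n+1} + \mathrm{h.c.} \right)
            + \frac{m}{2} \sum_n (-1)^n \sigma^z_n
            + J \sum_n L_n^2 ,
        \qquad
        L_n = \ell_0 + \frac{1}{2} \sum_{k \le n} \left[ \sigma^z_k + (-1)^k \right].

    Expanding L_n^2 produces the long-range \sigma^z_k \sigma^z_l couplings between matter sites referred to in the abstract.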

  10. Manual Labour, Intellectual Labour and Digital (Academic Labour. The Practice/Theory Debate in the Digital Humanities

    Directory of Open Access Journals (Sweden)

    Christophe Magis

    2018-01-01

    Full Text Available Although it hasn't much been considered as such, the Digital Humanities movement (or at least its most theoretically informed parts) offers a critique “from within” of the recent mutation of the higher education and research systems. This paper offers an analysis, from a Critical Theory perspective, of a key element of this critique: the theory vs. practice debate, which, in the Digital Humanities, is translated into the famous “hack” versus “yack” motto, where DHers usually call for the pre-eminence of the former over the latter. I show how this debate aims to criticize the social situation of employment in academia in the digital age and can further be interpreted, through the theoretical concept of the culture industry, as a continuance of the domination of the intellectual labour (i.e., yack in this case) over the manual labour (hack). Nevertheless, I argue that, pushing this debate to its very dialectical limit in the post-industrial academic labour situation, one realizes that the two terms aren't in opposition anymore: the actual theory as well as the actual practice fall short of their critical concepts in academic labour. Therefore, I call for a reconfiguration of this debate, aiming at the rediscovery of an actual theory in academic production, as well as the rediscovery of a praxis, the latter being outside of the scientific realm and rules: it is political.

  11. Quantum chemistry simulation on quantum computers: theories and experiments.

    Science.gov (United States)

    Lu, Dawei; Xu, Boruo; Xu, Nanyang; Li, Zhaokai; Chen, Hongwei; Peng, Xinhua; Xu, Ruixue; Du, Jiangfeng

    2012-07-14

    It has been claimed that quantum computers can mimic quantum systems efficiently in the polynomial scale. Traditionally, those simulations are carried out numerically on classical computers, which are inevitably confronted with the exponential growth of required resources, with the increasing size of quantum systems. Quantum computers avoid this problem, and thus provide a possible solution for large quantum systems. In this paper, we first discuss the ideas of quantum simulation, the background of quantum simulators, their categories, and the development in both theories and experiments. We then present a brief introduction to quantum chemistry evaluated via classical computers followed by typical procedures of quantum simulation towards quantum chemistry. Reviewed are not only theoretical proposals but also proof-of-principle experimental implementations, via a small quantum computer, which include the evaluation of the static molecular eigenenergy and the simulation of chemical reaction dynamics. Although the experimental development is still behind the theory, we give prospects and suggestions for future experiments. We anticipate that in the near future quantum simulation will become a powerful tool for quantum chemistry over classical computations.

  12. Simulating plasma instabilities in SU(3) gauge theory

    International Nuclear Information System (INIS)

    Berges, Juergen; Gelfand, Daniil; Scheffler, Sebastian; Sexty, Denes

    2009-01-01

    We compute nonequilibrium dynamics of plasma instabilities in classical-statistical lattice gauge theory in 3+1 dimensions. The simulations are done for the first time for the SU(3) gauge group relevant for quantum chromodynamics. We find a qualitatively similar behavior as compared to earlier investigations in SU(2) gauge theory. The characteristic growth rates are about 25% lower for given energy density, such that the isotropization process is slower. Measured in units of the characteristic screening mass, the primary growth rate is independent of the number of colors.

  13. A Bayesian Framework for False Belief Reasoning in Children: A Rational Integration of Theory-Theory and Simulation Theory.

    Science.gov (United States)

    Asakura, Nobuhiko; Inui, Toshio

    2016-01-01

    Two apparently contrasting theories have been proposed to account for the development of children's theory of mind (ToM): theory-theory and simulation theory. We present a Bayesian framework that rationally integrates both theories for false belief reasoning. This framework exploits two internal models for predicting the belief states of others: one of self and one of others. These internal models are responsible for simulation-based and theory-based reasoning, respectively. The framework further takes into account empirical studies of a developmental ToM scale (e.g., Wellman and Liu, 2004): developmental progressions of various mental state understandings leading up to false belief understanding. By representing the internal models and their interactions as a causal Bayesian network, we formalize the model of children's false belief reasoning as probabilistic computations on the Bayesian network. This model probabilistically weighs and combines the two internal models and predicts children's false belief ability as a multiplicative effect of their early-developed abilities to understand the mental concepts of diverse beliefs and knowledge access. Specifically, the model predicts that children's proportion of correct responses on a false belief task can be closely approximated as the product of their proportions correct on the diverse belief and knowledge access tasks. To validate this prediction, we illustrate that our model provides good fits to a variety of ToM scale data for preschool children. We discuss the implications and extensions of our model for a deeper understanding of developmental progressions of children's ToM abilities.
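
    As a worked illustration of this multiplicative prediction (with invented numbers): if children in a sample pass the diverse belief task with probability 0.9 and the knowledge access task with probability 0.8, the model predicts a false belief success rate of roughly 0.9 × 0.8 = 0.72.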

  14. Implementation of quantum game theory simulations using Python

    Science.gov (United States)

    Madrid S., A.

    2013-05-01

    This paper provides some examples of quantum games simulated in the Python programming language. The quantum games have been developed with the Sympy Python library, which permits solving quantum problems in symbolic form. The application of these methods of quantum mechanics to game theory makes it possible to achieve results that were not attainable before. To illustrate the results of these methods, in particular, the quantum battle of the sexes, the prisoner's dilemma, and card games have been simulated. These solutions are able to overcome the classical bottleneck and obtain optimal quantum strategies. In this way, Python demonstrates that it is possible to implement more advanced and complicated quantum game algorithms.
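
    As a rough sketch of the machinery behind such simulations (not the code from the paper, and written numerically with NumPy rather than symbolically with Sympy), the snippet below evaluates an EWL-style quantum prisoner's dilemma: an entangling gate J acts on |00>, each player applies a local unitary, the inverse of J is applied, and the expected payoff is the probability-weighted sum over measurement outcomes. The entangling-gate convention and payoff table are standard textbook choices, assumed here for illustration.

        import numpy as np

        I2 = np.eye(2, dtype=complex)
        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        C = I2                                             # classical "cooperate"
        D = np.array([[0, 1], [-1, 0]], dtype=complex)     # classical "defect"

        gamma = np.pi / 2                                  # maximally entangling
        J = np.cos(gamma / 2) * np.kron(I2, I2) + 1j * np.sin(gamma / 2) * np.kron(sx, sx)
        Jdag = J.conj().T

        payoff_A = {'00': 3, '01': 0, '10': 5, '11': 1}    # prisoner's dilemma payoffs for player A

        def expected_payoff_A(UA, UB):
            psi0 = np.zeros(4, dtype=complex)
            psi0[0] = 1.0                                  # start in |00>
            psi = Jdag @ np.kron(UA, UB) @ J @ psi0        # entangle, play, disentangle
            probs = np.abs(psi) ** 2
            return sum(p * payoff_A[o] for p, o in zip(probs, ['00', '01', '10', '11']))

        print(expected_payoff_A(C, C), expected_payoff_A(D, D))   # both cooperate vs. both defect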

  15. A study on special test stand of automatic and manual descent control in presence of simulated g-load effect

    Science.gov (United States)

    Glazkov, Yury; Artjuchin, Yury; Astakhov, Alexander; Vas'kov, Alexander; Malyshev, Veniamin; Mitroshin, Edward; Glinsky, Valery; Moiseenko, Vasily; Makovlev, Vyacheslav

    The development of aircraft-type reusable space vehicles (RSV) involves the problem of complete compatibility of automatic, director, and manual control. Solving this task is complicated, in particular, by considerable quantitative and qualitative changes of vehicle dynamic characteristics, small stability margins (and even instability) of the RSV, and stringent requirements on control accuracy at some flight phases. Besides, during control a pilot is affected by g-loads which hamper motor activity and deteriorate its accuracy, alter the functional status of the visual analyser, and influence higher nervous activity. A study of g-load effects on control efficiency, especially in manual and director modes, is therefore of primary importance. The main tools for studying the rational selection of manual and director vehicle control systems, and for formulating recommendations for optimum crew-automatic control system interactions, are special complex and functional flight simulator test stands. The proposed simulator stand includes a powerful digital computer complex combined with the control system of the centrifuge. The interior of a pilot's vehicle cabin is imitated. A situation image system, psycho-physical monitoring system, physician, centrifuge operator, and instructor stations are linked with the test stand.

  16. Application of the fuzzy theory to simulation of batch fermentation

    Energy Technology Data Exchange (ETDEWEB)

    Filev, D P; Kishimoto, M; Sengupta, S; Yoshida, T; Taguchi, H

    1985-12-01

    A new approach for system identification with a linguistic model of batch fermentation processes is proposed. The fuzzy theory was applied in order to reduce the uncertainty of quantitative description of the processes by use of qualitative characteristics. An example of fuzzy modeling was illustrated in the simulation of batch ethanol production from molasses after interpretation of the new method, and extension of the fuzzy model was also discussed for several cases of different measurable variables.
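
    A minimal sketch of how qualitative (linguistic) characteristics can enter such a model: fuzzy membership functions map a measured quantity onto linguistic terms, and a small rule base is defuzzified into a crisp model parameter. The membership ranges, variable names, and rule below are invented for illustration and are not the authors' model.

        def mu_low(x, lo=0.0, hi=80.0):
            # Degree to which the sugar concentration (g/L) is "low".
            return max(0.0, min(1.0, (hi - x) / (hi - lo)))

        def mu_high(x, lo=40.0, hi=150.0):
            # Degree to which the sugar concentration is "high".
            return max(0.0, min(1.0, (x - lo) / (hi - lo)))

        def fuzzy_growth_rate(sugar_g_per_l, mu_max=0.4, mu_min=0.05):
            # Toy rule base (Sugeno-style weighted average):
            #   IF sugar is high THEN the specific growth rate is near mu_max
            #   IF sugar is low  THEN the specific growth rate is near mu_min
            w_high = mu_high(sugar_g_per_l)
            w_low = mu_low(sugar_g_per_l)
            return (w_high * mu_max + w_low * mu_min) / (w_high + w_low + 1e-12)

        for s in (10.0, 60.0, 120.0):
            print(s, round(fuzzy_growth_rate(s), 3))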

  17. Cosmological simulations using a static scalar-tensor theory

    Energy Technology Data Exchange (ETDEWEB)

    RodrIguez-Meza, M A [Depto. de Fisica, Instituto Nacional de Investigaciones Nucleares, Col. Escandon, Apdo. Postal 18-1027, 11801 Mexico D.F (Mexico); Gonzalez-Morales, A X [Departamento Ingenierias, Universidad Iberoamericana, Prol. Paseo de la Reforma 880 Lomas de Santa Fe, Mexico D.F. Mexico (Mexico); Gabbasov, R F [Depto. de Fisica, Instituto Nacional de Investigaciones Nucleares, Col. Escandon, Apdo. Postal 18-1027, 11801 Mexico D.F (Mexico); Cervantes-Cota, Jorge L [Depto. de Fisica, Instituto Nacional de Investigaciones Nucleares, Col. Escandon, Apdo. Postal 18-1027, 11801 Mexico D.F (Mexico)

    2007-11-15

    We present ΛCDM N-body cosmological simulations in the framework of a static general scalar-tensor theory of gravity. Due to the influence of the non-minimally coupled scalar field, the gravitational potential is modified by a Yukawa-type term, yielding a new structure formation dynamics. We present some preliminary results and, in particular, we compute the density and velocity profiles of the most massive group.
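
    A sketch of the Yukawa-type modification referred to above, in the commonly used parametrization Phi(r) = -(G M / r) * (1 + alpha * exp(-r/lambda)); whether this matches the paper's exact normalization is an assumption, and the parameter values are purely illustrative.

        import numpy as np

        G = 4.30091e-6   # gravitational constant in kpc (km/s)^2 / Msun

        def yukawa_potential(r_kpc, M_sun, alpha=0.5, lam_kpc=1000.0):
            # Newtonian potential plus a Yukawa correction from the non-minimally
            # coupled scalar field: Phi(r) = -(G M / r) * (1 + alpha * exp(-r / lambda)).
            return -(G * M_sun / r_kpc) * (1.0 + alpha * np.exp(-r_kpc / lam_kpc))

        r = np.array([10.0, 100.0, 1000.0, 5000.0])        # radii in kpc
        print(yukawa_potential(r, M_sun=1.0e14))           # potential of a cluster-scale mass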

  18. Theory, Modeling and Simulation Annual Report 2000; FINAL

    International Nuclear Information System (INIS)

    Dixon, David A; Garrett, Bruce C; Straatsma, TP; Jones, Donald R; Studham, Scott; Harrison, Robert J; Nichols, Jeffrey A

    2001-01-01

    This annual report describes the 2000 research accomplishments for the Theory, Modeling, and Simulation (TM and S) directorate, one of the six research organizations in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) at Pacific Northwest National Laboratory (PNNL). EMSL is a U.S. Department of Energy (DOE) national scientific user facility and is the centerpiece of the DOE commitment to providing world-class experimental, theoretical, and computational capabilities for solving the nation's environmental problems

  19. A horizontal vane radiometer: experiment, theory and simulation

    OpenAIRE

    Wolfe, David; Lazarra, Andres; Garcia, Alejandro

    2015-01-01

    The existence of two motive forces on a Crookes radiometer has complicated the investigation of either force independently. The thermal creep shear force in particular has been subject to differing interpretations of the direction in which it acts and its order of magnitude. In this article we provide a horizontal vane radiometer design which isolates the thermal creep shear force. The horizontal vane radiometer is explored through experiment, kinetic theory, and the Direct Simulation Monte C...

  20. Preliminary assessment of faculty and student perception of a haptic virtual reality simulator for training dental manual dexterity.

    Science.gov (United States)

    Gal, Gilad Ben; Weiss, Ervin I; Gafni, Naomi; Ziv, Amitai

    2011-04-01

    Virtual reality force feedback simulators provide a haptic (sense of touch) feedback through the device being held by the user. The simulator's goal is to provide a learning experience resembling reality. A newly developed haptic simulator (IDEA Dental, Las Vegas, NV, USA) was assessed in this study. Our objectives were to assess the simulator's ability to serve as a tool for dental instruction, self-practice, and student evaluation, as well as to evaluate the sensation it provides. A total of thirty-three evaluators were divided into two groups. The first group consisted of twenty-one experienced dental educators; the second consisted of twelve fifth-year dental students. Each participant performed drilling tasks using the simulator and filled out a questionnaire regarding the simulator and potential ways of using it in dental education. The results show that experienced dental faculty members as well as advanced dental students found that the simulator could provide significant potential benefits in the teaching and self-learning of manual dental skills. Development of the simulator's tactile sensation is needed to attune it to genuine sensation. Further studies relating to aspects of the simulator's structure and its predictive validity, its scoring system, and the nature of the performed tasks should be conducted.

  1. Dimensional analysis, similarity, analogy, and the simulation theory

    International Nuclear Information System (INIS)

    Davis, A.A.

    1978-01-01

    Dimensional analysis, similarity, analogy, and cybernetics are shown to be four consecutive steps in application of the simulation theory. This paper introduces the classes of phenomena which follow the same formal mathematical equations as models of the natural laws and the interior sphere of restraints groups of phenomena in which one can introduce simplified nondimensional mathematical equations. The simulation by similarity in a specific field of physics, by analogy in two or more different fields of physics, and by cybernetics in nature in two or more fields of mathematics, physics, biology, economics, politics, sociology, etc., appears as a unique theory which permits one to transport the results of experiments from the models, conveniently selected to meet the conditions of researches, constructions, and measurements in the laboratories, to the originals which are the primary objectives of the researches. Some interesting conclusions which cannot be avoided in the use of simplified nondimensional mathematical equations as models of natural laws are presented. Interesting limitations on the use of simulation theory based on assumed simplifications are recognized. This paper shows that it is necessary, in scientific research, to write mathematical models of general laws which will be applied to nature in its entirety. The paper proposes the extent of the second law of thermodynamics as the generalized law of entropy to model life and its activities. This paper shows that the physical studies and philosophical interpretations of phenomena and natural laws cannot be separated in scientific work; they are interconnected and one cannot be put above the others

  2. Instabilities of collisionless current sheets: Theory and simulations

    International Nuclear Information System (INIS)

    Silin, I.; Buechner, J.; Zelenyi, L.

    2002-01-01

    The problem of Harris current sheet stability is investigated. A linear dispersion relation in the long-wavelength limit is derived for instabilities propagating in the neutral plane at an arbitrary angle to the magnetic field but symmetric across the sheet. The role of electrostatic perturbations is examined in particular. It appears that for the tearing-mode instability, electrostatic effects are negligible. However, for obliquely propagating modes the modulation of the electrostatic potential φ is essential. In order to verify the theoretical results, the limiting cases of tearing and sausage instabilities are compared to two-dimensional (2D) Vlasov code simulations. For tearing modes, the agreement between theory and simulations is good for all mass ratios. For sausage modes, the theory predicts fast stabilization for mass ratios m_i/m_e ≥ 10. This is not observed in the simulations because the wavelength shrinks at higher mass ratios, which takes the system beyond the range of applicability of the theory developed here.

  3. A horizontal vane radiometer: Experiment, theory, and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wolfe, David; Larraza, Andres, E-mail: larraza@nps.edu [Department of Physics, Naval Postgraduate School, Monterey, California 93940 (United States); Garcia, Alejandro [Department of Physics and Astronomy, San Jose State University, San Jose, California 95152 (United States)

    2016-03-15

    The existence of two motive forces on a Crookes radiometer has complicated the investigation of either force independently. The thermal creep shear force in particular has been subject to differing interpretations of the direction in which it acts and its order of magnitude. In this article, we provide a horizontal vane radiometer design which isolates the thermal creep shear force. The horizontal vane radiometer is explored through experiment, kinetic theory, and the Direct Simulation Monte Carlo (DSMC) method. The qualitative agreement between the three methods of investigation is good except for a dependence of the force on the width of the vane even when the temperature gradient is narrower than the vane which is present in the DSMC method results but not in the theory. The experimental results qualitatively resemble the theory in this regard. The quantitative agreement between the three methods of investigation is better than an order of magnitude in the cases examined. The theory is closer to the experimental values for narrow vanes and the simulations are closer to the experimental values for the wide vanes. We find that the thermal creep force acts from the hot side to the cold side of the vane. We also find the peak in the radiometer’s angular speed as a function of pressure is explained as much by the behavior of the drag force as by the behavior of the thermal creep force.

  4. A horizontal vane radiometer: Experiment, theory, and simulation

    International Nuclear Information System (INIS)

    Wolfe, David; Larraza, Andres; Garcia, Alejandro

    2016-01-01

    The existence of two motive forces on a Crookes radiometer has complicated the investigation of either force independently. The thermal creep shear force in particular has been subject to differing interpretations of the direction in which it acts and its order of magnitude. In this article, we provide a horizontal vane radiometer design which isolates the thermal creep shear force. The horizontal vane radiometer is explored through experiment, kinetic theory, and the Direct Simulation Monte Carlo (DSMC) method. The qualitative agreement between the three methods of investigation is good except for a dependence of the force on the width of the vane even when the temperature gradient is narrower than the vane which is present in the DSMC method results but not in the theory. The experimental results qualitatively resemble the theory in this regard. The quantitative agreement between the three methods of investigation is better than an order of magnitude in the cases examined. The theory is closer to the experimental values for narrow vanes and the simulations are closer to the experimental values for the wide vanes. We find that the thermal creep force acts from the hot side to the cold side of the vane. We also find the peak in the radiometer’s angular speed as a function of pressure is explained as much by the behavior of the drag force as by the behavior of the thermal creep force.

  5. Simulation and theory of spontaneous TAE frequency sweeping

    International Nuclear Information System (INIS)

    Wang Ge; Berk, H.L.

    2012-01-01

    A simulation model, based on the linear tip model of Rosenbluth, Berk and Van Dam (RBV), is developed to study frequency sweeping of toroidal Alfvén eigenmodes (TAEs). The time response of the background wave in the RBV model is given by a Volterra integral equation. This model captures the properties of TAE waves both in the gap and in the continuum. The simulation shows that phase space structures form spontaneously at frequencies close to the linearly predicted frequency, due to resonant particle–wave interactions and background dissipation. The frequency sweeping signals are found to chirp towards the upper and lower continua. However, the chirping signals penetrate only the lower continuum, whereupon the frequency chirps and mode amplitude increases in synchronism to produce an explosive solution. An adiabatic theory describing the evolution of a chirping signal is developed which replicates the chirping dynamics of the simulation in the lower continuum. This theory predicts that a decaying chirping signal will terminate at the upper continuum though in the numerical simulation the hole disintegrates before the upper continuum is reached. (paper)
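
    For orientation, a Volterra integral equation of the second kind has the generic textbook form shown below; the kernel, source term, and normalization used in the RBV-based model are specific to that work and are not reproduced here.

        \[
          A(t) = F(t) + \int_{0}^{t} K(t,t')\, A(t')\, \mathrm{d}t'
        \]

    Here A(t) stands for the unknown wave response, F(t) for a driving term, and K(t,t') for a memory kernel; all three symbols are generic placeholders.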

  6. Simulation and theory of spontaneous TAE frequency sweeping

    Science.gov (United States)

    Wang, Ge; Berk, H. L.

    2012-09-01

    A simulation model, based on the linear tip model of Rosenbluth, Berk and Van Dam (RBV), is developed to study frequency sweeping of toroidal Alfvén eigenmodes (TAEs). The time response of the background wave in the RBV model is given by a Volterra integral equation. This model captures the properties of TAE waves both in the gap and in the continuum. The simulation shows that phase space structures form spontaneously at frequencies close to the linearly predicted frequency, due to resonant particle-wave interactions and background dissipation. The frequency sweeping signals are found to chirp towards the upper and lower continua. However, the chirping signals penetrate only the lower continuum, whereupon the frequency chirps and mode amplitude increases in synchronism to produce an explosive solution. An adiabatic theory describing the evolution of a chirping signal is developed which replicates the chirping dynamics of the simulation in the lower continuum. This theory predicts that a decaying chirping signal will terminate at the upper continuum though in the numerical simulation the hole disintegrates before the upper continuum is reached.

  7. SIERRA Low Mach Module: Fuego Theory Manual Version 4.44

    Energy Technology Data Exchange (ETDEWEB)

    Sierra Thermal/Fluid Team

    2017-04-01

    The SIERRA Low Mach Module: Fuego along with the SIERRA Participating Media Radiation Module: Syrinx, henceforth referred to as Fuego and Syrinx, respectively, are the key elements of the ASCI fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Syrinx represents the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.

  8. SIERRA Low Mach Module: Fuego Theory Manual Version 4.46.

    Energy Technology Data Exchange (ETDEWEB)

    Sierra Thermal/Fluid Team

    2017-09-01

    The SIERRA Low Mach Module: Fuego along with the SIERRA Participating Media Radiation Module: Syrinx, henceforth referred to as Fuego and Syrinx, respectively, are the key elements of the ASCI fire environment simulation project. The fire environment simulation project is directed at characterizing both open large-scale pool fires and building enclosure fires. Fuego represents the turbulent, buoyantly-driven incompressible flow, heat transfer, mass transfer, combustion, soot, and absorption coefficient model portion of the simulation software. Syrinx represents the participating-media thermal radiation mechanics. This project is an integral part of the SIERRA multi-mechanics software development project. Fuego depends heavily upon the core architecture developments provided by SIERRA for massively parallel computing, solution adaptivity, and mechanics coupling on unstructured grids.

  9. Recycling Resources. [Student Handbook, Sound Filmstrips, 12-Inch Record, Pollution Simulation Game, Teacher's Manual

    Science.gov (United States)

    Hatch, C. Richard

    A 15- to 20-hour course on materials recycling, teaching junior high school students about environmental problems and solutions, is developed in this set of materials. It attempts to stimulate them to participate in community efforts aimed at improving the environment. Items in the kit include: (1) teacher's manual, with lesson plans enumerating…

  10. Nonlinear turbulence theory and simulation of Buneman instability

    International Nuclear Information System (INIS)

    Yoon, P. H.; Umeda, T.

    2010-01-01

    In the present paper, the weak turbulence theory for reactive instabilities, formulated in a companion paper [P. H. Yoon, Phys. Plasmas 17, 112316 (2010)], is applied to the strong electron-ion two-stream (or Buneman) instability. The self-consistent theory involves a quasilinear velocity-space diffusion equation for the particles and a nonlinear wave kinetic equation that includes a quasilinear (or induced emission) term as well as a nonlinear wave-particle interaction term (or a term that represents induced scattering off ions). We have also performed a one-dimensional electrostatic Vlasov simulation in order to benchmark the theoretical analysis. Under the assumption of a self-similar drifting Gaussian distribution function for the electrons, it is shown that the current reduction and the accompanying electron heating, as well as the generation of electric field turbulence, can be discussed in a self-consistent manner. Upon comparison with the Vlasov simulation result, it is found that the quasilinear wave kinetic equation alone is insufficient to account for the final saturation amplitude. Upon including the nonlinear scattering term in the wave kinetic equation, however, we find that a qualitative agreement with the simulation is recovered. From this, we conclude that the combined quasilinear particle diffusion plus induced emission and scattering (off ions) processes adequately account for the nonlinear development of the Buneman instability.
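
    As a schematic reminder of what a quasilinear velocity-space diffusion equation looks like (written here in a generic one-dimensional electrostatic form with the prefactor left unspecified, since conventions differ and the cited paper's exact normalization is not reproduced):

        \[
          \frac{\partial f}{\partial t}
            = \frac{\partial}{\partial v}\!\left[ D(v)\,\frac{\partial f}{\partial v} \right],
          \qquad
          D(v) \propto \sum_{k} |E_k|^{2}\,\delta(\omega_k - k v),
        \]

    where f(v,t) is the electron distribution function and the delta function selects resonant wave-particle interactions.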

  11. Theory and simulation of charge transfer through DNA - nanotube contacts

    International Nuclear Information System (INIS)

    Rink, Gunda; Kong Yong; Koslowski, Thorsten

    2006-01-01

    We address the problem of charge transfer between a single-stranded adenine oligomer and semiconducting boron nitride nanotubes from a theoretical and numerical perspective. The model structures have been motivated by computer simulations; sample geometries are used as the input of an electronic structure theory that is based upon an extended Su-Schrieffer-Heeger Hamiltonian. By analyzing the emerging potential energy surfaces, we obtain hole transfer rates via Marcus' theory of charge transfer. In the presence of nanotubes, these rates exceed those of isolated DNA single strands by a factor of up to 10^4. This enhancement can be rationalized and quantified as a combination of a template effect and the participation of the tube within a superexchange mechanism.

  12. Turbulent diffusion of chemically reacting flows: Theory and numerical simulations.

    Science.gov (United States)

    Elperin, T; Kleeorin, N; Liberman, M; Lipatnikov, A N; Rogachevskii, I; Yu, R

    2017-11-01

    The theory of turbulent diffusion of chemically reacting gaseous admixtures developed previously [T. Elperin et al., Phys. Rev. E 90, 053001 (2014)PLEEE81539-375510.1103/PhysRevE.90.053001] is generalized for large yet finite Reynolds numbers and the dependence of turbulent diffusion coefficient on two parameters, the Reynolds number and Damköhler number (which characterizes a ratio of turbulent and reaction time scales), is obtained. Three-dimensional direct numerical simulations (DNSs) of a finite-thickness reaction wave for the first-order chemical reactions propagating in forced, homogeneous, isotropic, and incompressible turbulence are performed to validate the theoretically predicted effect of chemical reactions on turbulent diffusion. It is shown that the obtained DNS results are in good agreement with the developed theory.

  13. Simulations of dimensionally reduced effective theories of high temperature QCD

    CERN Document Server

    Hietanen, Ari

    Quantum chromodynamics (QCD) is the theory describing the interaction between quarks and gluons. At low temperatures, quarks are confined, forming hadrons, e.g. protons and neutrons. However, at extremely high temperatures the hadrons break apart and the matter transforms into a plasma of individual quarks and gluons. In this thesis, the quark-gluon plasma (QGP) phase of QCD is studied using lattice techniques in the framework of the dimensionally reduced effective theories EQCD and MQCD. Two quantities are of particular interest: the pressure (or grand potential) and the quark number susceptibility. At high temperatures the pressure admits a generalised coupling constant expansion, where some coefficients are non-perturbative. We determine the first such contribution, of order g^6, by performing lattice simulations in MQCD. This requires high precision lattice calculations, which we perform with different numbers of colors N_c to obtain the N_c dependence of the coefficient. The quark number susceptibility is studied by perf...

  14. UNSAT-H Version 3.0: Unsaturated Soil Water and Heat Flow Model Theory, User Manual, and Examples

    International Nuclear Information System (INIS)

    Fayer, M.J.

    2000-01-01

    The UNSAT-H model was developed at Pacific Northwest National Laboratory (PNNL) to assess the water dynamics of arid sites and, in particular, estimate recharge fluxes for scenarios pertinent to waste disposal facilities. During the last 4 years, the UNSAT-H model received support from the Immobilized Waste Program (IWP) of the Hanford Site's River Protection Project. This program is designing and assessing the performance of on-site disposal facilities to receive radioactive wastes that are currently stored in single- and double-shell tanks at the Hanford Site (LMHC 1999). The IWP is interested in estimates of recharge rates for current conditions and long-term scenarios involving the vadose zone disposal of tank wastes. Simulation modeling with UNSAT-H is one of the methods being used to provide those estimates (e.g., Rockhold et al. 1995; Fayer et al. 1999). To achieve the above goals for assessing water dynamics and estimating recharge rates, the UNSAT-H model addresses soil water infiltration, redistribution, evaporation, plant transpiration, deep drainage, and soil heat flow as one-dimensional processes. The UNSAT-H model simulates liquid water flow using Richards' equation (Richards 1931), water vapor diffusion using Fick's law, and sensible heat flow using the Fourier equation. This report documents UNSAT-H Version 3.0. The report includes the bases for the conceptual model and its numerical implementation, benchmark test cases, example simulations involving layered soils and plants, and the code manual. Version 3.0 is an enhanced-capability update of UNSAT-H Version 2.0 (Fayer and Jones 1990). New features include hysteresis, an iterative solution of head and temperature, an energy balance check, the modified Picard solution technique, additional hydraulic functions, multiple-year simulation capability, and general enhancements.
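
    For reference, a common one-dimensional, head-based form of Richards' equation for the liquid flow mentioned above is sketched below; the notation is generic and not necessarily identical to that used in the UNSAT-H documentation.

        \[
          C(h)\,\frac{\partial h}{\partial t}
            = \frac{\partial}{\partial z}\!\left[ K(h)\left(\frac{\partial h}{\partial z} + 1\right) \right] - S(z,t),
        \]

    where h is the pressure head, C(h) = dθ/dh the moisture capacity, K(h) the unsaturated hydraulic conductivity, z the vertical coordinate (positive upward), and S a sink term representing plant water uptake.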

  15. Coherent Smith-Purcell radiation: Theories and simulations

    International Nuclear Information System (INIS)

    Donohue, J.T.; Gardelle, J.

    2008-01-01

    Smith-Purcell (SP) radiation has been observed many times over the past fifty years, and several theories have been proposed to explain it. However, it is only quite recently that Andrews, Brau and collaborators made a considerable advance in understanding how coherent SP radiation may be produced from an initially continuous beam. Their work received support from 2-D simulations which were performed using the Particle-in-Cell (PIC) code 'MAGIC'. Here we present a review of our 2-D simulations of coherent SP and discuss how they relate to the model of Andrews and Brau. We also describe briefly a SP experiment in the microwave domain using a sheet beam that is planned for 2008

  16. Unravelling functional neurology: a scoping review of theories and clinical applications in a context of chiropractic manual therapy.

    Science.gov (United States)

    Meyer, Anne-Laure; Meyer, Amanda; Etherington, Sarah; Leboeuf-Yde, Charlotte

    2017-01-01

    Functional Neurology (FN), a seemingly attractive treatment approach used by some chiropractors, proposes to have an effect on a multitude of conditions but some of its concepts are controversial. A scoping review was performed to describe, in the context of chiropractic manual therapy, 1) the FN theories, and 2) its clinical applications (i.e. its indications, examination procedures, treatment modalities, treatment plans, and clinical outcomes) using four sources: i) one key textbook, ii) the scientific peer-reviewed literature, iii) websites from chiropractors using FN, and iv) semi-structured interviews of chiropractors using FN. The scientific literature was searched in PubMed, PsycINFO, and SPORTDiscus, completed by a hand search in the journal Functional Neurology, Rehabilitation and Ergonomics (November 2016 and March 2017, respectively). The only textbook on the topic we found was included and articles were chosen if they had an element of manual therapy. There was no restriction for study design but discussion papers were excluded. Websites were found in Google using the search term "Functional Neurology". Chiropractors, known to use FN, were invited based on their geographical location. Theories were mainly uncovered in the textbook as were all aspects of the clinical applications except treatment plans. The other three sources were used for the five aspects of clinical applications. Results were summarized and reported extensively in tables. Eleven articles were included, five websites scrutinized, and four semi-structured interviews performed. FN is based on the belief that reversible lesions in the nervous system are the cause of a multitude of conditions and that specific clusters of neurons can be positively affected by manipulative therapy, but also by many other stimuli. Diagnostic procedures include both conventional and unusual tests, with an interpretation specific to FN. Initial treatment is intense and clinical outcomes reported as positive

  17. Theory and simulations of adhesion receptor dimerization on membrane surfaces.

    Science.gov (United States)

    Wu, Yinghao; Honig, Barry; Ben-Shaul, Avinoam

    2013-03-19

    The equilibrium constants of trans and cis dimerization of membrane bound (2D) and freely moving (3D) adhesion receptors are expressed and compared using elementary statistical-thermodynamics. Both processes are mediated by the binding of extracellular subdomains whose range of motion in the 2D environment is reduced upon dimerization, defining a thin reaction shell where dimer formation and dissociation take place. We show that the ratio between the 2D and 3D equilibrium constants can be expressed as a product of individual factors describing, respectively, the spatial ranges of motions of the adhesive domains, and their rotational freedom within the reaction shell. The results predicted by the theory are compared to those obtained from a novel, to our knowledge, dynamical simulations methodology, whereby pairs of receptors perform realistic translational, internal, and rotational motions in 2D and 3D. We use cadherins as our model system. The theory and simulations explain how the strength of cis and trans interactions of adhesive receptors are affected both by their presence in the constrained intermembrane space and by the 2D environment of membrane surfaces. Our work provides fundamental insights as to the mechanism of lateral clustering of adhesion receptors after cell-cell contact and, more generally, to the formation of lateral microclusters of proteins on cell surfaces. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  18. Simulation and theory of island growth on stepped substrates

    International Nuclear Information System (INIS)

    Pownall, C.D.

    1999-10-01

    The nucleation, growth and coalescence of islands on stepped substrates is investigated by Monte Carlo simulations and analytical theories. Substrate steps provide a preferential site for the nucleation of islands, making many of the important processes one-dimensional in nature, and are of potentially major importance in the development of low-dimensional structures as a means of growing highly ordered chains of 'quantum dots' or continuous 'quantum wires'. A model is developed in which island nucleation is entirely restricted to the step edge, islands grow in compact morphologies by monomer capture, and eventually coalesce with one another until a single continuous cluster of islands covers the entire step. A series of analytical theories is developed to describe the dynamics of the whole evolution. The initial nucleation and aggregation regimes are modeled using the traditional approach of rate equations, rooted in mean field theory, but incorporating corrections to account for correlations in the nucleation and capture processes. This approach is found to break down close to the point at which the island density saturates and a new approach is developed based upon geometric and probabilistic arguments to describe the saturation behaviour, including the characteristic dynamic scaling which is found to persist through the coalescence regime as well. A further new theory, incorporating arguments based on the geometry of Capture Zones, is presented which reproduces the dynamics of the coalescence regime. The latter part of the thesis considers the spatial properties of the system, in particular the spacing of the islands along the step. An expression is derived which describes the distribution of gap sizes, and this is solved using a recently-developed relaxation method. An important result is the discovery that larger critical island sizes tend to yield more evenly spaced arrays of islands. The extent of this effect is analysed by solving for critical island
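
    As background for the rate-equation approach mentioned above, the classical mean-field equations for irreversible nucleation on a uniform two-dimensional substrate (critical island size i = 1) are often written as follows; the one-dimensional step-edge version developed in the thesis differs in detail, so this is orientation only.

        \[
          \frac{\mathrm{d} n_1}{\mathrm{d} t} = F - 2\sigma_1 D n_1^{2} - \sigma_x D n_1 N,
          \qquad
          \frac{\mathrm{d} N}{\mathrm{d} t} = \sigma_1 D n_1^{2},
        \]

    where F is the deposition flux, D the monomer diffusion coefficient, n_1 and N the monomer and island densities, and σ_1, σ_x the capture numbers for nucleation and aggregation.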

  19. NASA One-Dimensional Combustor Simulation--User Manual for S1D_ML

    Science.gov (United States)

    Stueber, Thomas J.; Paxson, Daniel E.

    2014-01-01

    The work presented in this paper is to promote research leading to a closed-loop control system to actively suppress thermo-acoustic instabilities. To serve as a model for such a closed-loop control system, a one-dimensional combustor simulation composed using MATLAB software tools has been written. This MATLAB-based process is similar to a precursor one-dimensional combustor simulation that was formatted as FORTRAN 77 source code. The previous simulation process requires modification to the FORTRAN 77 source code, compiling, and linking when creating a new combustor simulation executable file. The MATLAB-based simulation does not require making changes to the source code, recompiling, or linking. Furthermore, the MATLAB-based simulation can be run from script files within the MATLAB environment or with a compiled copy of the executable file running in the Command Prompt window without requiring a licensed copy of MATLAB. This report presents a general simulation overview. Details regarding how to set up and initiate a simulation are also presented. Finally, the post-processing section describes the two types of files created while running the simulation and it also includes simulation results for a default simulation included with the source code.

  20. SubDyn User's Guide and Theory Manual

    Energy Technology Data Exchange (ETDEWEB)

    Damiani, Rick [National Renewable Energy Lab. (NREL), Golden, CO (United States); Jonkman, Jason [National Renewable Energy Lab. (NREL), Golden, CO (United States); Hayman, Greg [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-09-01

    SubDyn is a time-domain structural-dynamics module for multimember fixed-bottom substructures created by the National Renewable Energy Laboratory (NREL) through U.S. Department of Energy Wind and Water Power Program support. The module has been coupled into the FAST aero-hydro-servo-elastic computer-aided engineering (CAE) tool. Substructure types supported by SubDyn include monopiles, tripods, jackets, and other lattice-type substructures common for offshore wind installations in shallow and transitional water depths. SubDyn can also be used to model lattice support structures for land-based wind turbines. This document is organized as follows. Section 1 details how to obtain the SubDyn and FAST software archives and run both the stand-alone SubDyn or SubDyn coupled to FAST. Section 2 describes the SubDyn input files. Section 3 discusses the output files generated by SubDyn; these include echo files, a summary file, and the results file. Section 4 provides modeling guidance when using SubDyn. The SubDyn theory is covered in Section 5. Section 6 outlines future work, and Section 7 contains a list of references. Example input files are shown in Appendixes A and B. A summary of available output channels are found in Appendix C. Instructions for compiling the stand-alone SubDyn program are detailed in Appendix D. Appendix E tracks the major changes we have made to SubDyn for each public release.

  1. Repository Integration Program: RIP performance assessment and strategy evaluation model theory manual and user's guide

    International Nuclear Information System (INIS)

    1995-11-01

    This report describes the theory and capabilities of RIP (Repository Integration Program). RIP is a powerful and flexible computational tool for carrying out probabilistic integrated total system performance assessments for geologic repositories. The primary purpose of RIP is to provide a management tool for guiding system design and site characterization. In addition, the performance assessment model (and the process of eliciting model input) can act as a mechanism for integrating the large amount of available information into a meaningful whole (in a sense, allowing one to keep the "big picture" and the ultimate aims of the project clearly in focus). Such an integration is useful both for project managers and project scientists. RIP is based on a "top down" approach to performance assessment that concentrates on the integration of the entire system, and utilizes relatively high-level descriptive models and parameters. The key point in the application of such a "top down" approach is that the simplified models and associated high-level parameters must incorporate an accurate representation of their uncertainty. RIP is designed in a very flexible manner such that details can be readily added to various components of the model without modifying the computer code. Uncertainty is also handled in a very flexible manner, and both parameter and model (process) uncertainty can be explicitly considered. Uncertainty is propagated through the integrated PA model using an enhanced Monte Carlo method. RIP must rely heavily on subjective assessment (expert opinion) for much of its input. The process of eliciting the high-level input parameters required for RIP is critical to its successful application. As a result, in order for any project to successfully apply a tool such as RIP, an enormous amount of communication and cooperation must exist between the data collectors, the process modelers, and the performance assessment modelers.
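
    To illustrate the general idea of propagating parameter uncertainty through an integrated model by Monte Carlo sampling, a minimal Python sketch is given below. The release model, parameter names, and distributions are entirely hypothetical illustrations; they are not RIP's models, data, or code.

        import random

        def sample_release(params):
            # Hypothetical, highly simplified "total system" response:
            # release = inventory * fraction of failed containers * geosphere attenuation.
            return params["inventory"] * params["failure_fraction"] * params["attenuation"]

        def monte_carlo(n_trials=10000, seed=1):
            # Sample uncertain inputs, run the simple model, and summarize the output distribution.
            random.seed(seed)
            results = []
            for _ in range(n_trials):
                params = {
                    "inventory": random.lognormvariate(0.0, 0.5),   # relative units
                    "failure_fraction": random.betavariate(2, 8),   # fraction of containers failed
                    "attenuation": random.uniform(1e-4, 1e-2),      # transport attenuation factor
                }
                results.append(sample_release(params))
            results.sort()
            mean = sum(results) / n_trials
            p95 = results[int(0.95 * n_trials)]
            return mean, p95

        if __name__ == "__main__":
            mean, p95 = monte_carlo()
            print(f"mean normalized release: {mean:.3e}, 95th percentile: {p95:.3e}")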

  2. SCDAP/RELAP5/MOD 3.1 code manual: Damage progression model theory. Volume 2

    International Nuclear Information System (INIS)

    Davis, K.L.

    1995-06-01

    The SCDAP/RELAP5 code has been developed for best estimate transient simulation of light water reactor coolant systems during a severe accident. The code models the coupled behavior of the reactor coolant system, the core, fission products released during a severe accident transient as well as large and small break loss of coolant accidents, operational transients such as anticipated transient without SCRAM, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater conditioning systems. This volume contains detailed descriptions of the severe accident models and correlations. It provides the user with the underlying assumptions and simplifications used to generate and implement the basic equations into the code, so an intelligent assessment of the applicability and accuracy of the resulting calculation can be made.

  3. SCDAP/RELAP5/MOD 3.1 code manual: Damage progression model theory. Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    Davis, K.L. (ed.); Allison, C.M.; Berna, G.A. [Lockheed Idaho Technologies Co., Idaho Falls, ID (United States)]; and others

    1995-06-01

    The SCDAP/RELAP5 code has been developed for best estimate transient simulation of light water reactor coolant systems during a severe accident. The code models the coupled behavior of the reactor coolant system, the core, fission products released during a severe accident transient as well as large and small break loss of coolant accidents, operational transients such as anticipated transient without SCRAM, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater conditioning systems. This volume contains detailed descriptions of the severe accident models and correlations. It provides the user with the underlying assumptions and simplifications used to generate and implement the basic equations into the code, so an intelligent assessment of the applicability and accuracy of the resulting calculation can be made.

  4. SCDAP/RELAP5/MOD 3.1 code manual: Interface theory. Volume 1

    International Nuclear Information System (INIS)

    Coryell, E.W.

    1995-06-01

    The SCDAP/RELAP5 code has been developed for best estimate transient simulation of light water reactor coolant systems during a severe accident. The code models the coupled behavior of the reactor coolant system, core, fission products released during a severe accident transient as well as large and small break loss of coolant accidents, operational transients such as anticipated transient without SCRAM, loss of off-site power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater conditioning systems. This volume describes the organization and manner of the interface between severe accident models which are resident in the SCDAP portion of the code and hydrodynamic models which are resident in the RELAP5 portion of the code. A description of the organization and structure of SCDAP/RELAP5 is presented. Additional information is provided regarding the manner in which models in one portion of the code impact other parts of the code, and models which are dependent on and derive information from other subcodes.

  5. Foundational Elements of Applied Simulation Theory: Development and Implementation of a Longitudinal Simulation Educator Curriculum.

    Science.gov (United States)

    Chiu, Michelle; Posner, Glenn; Humphrey-Murto, Susan

    2017-01-27

    Simulation-based education has gained popularity, yet many faculty members feel inadequately prepared to teach using this technique. Fellowship training in medical education exists, but there is little information regarding simulation or formal educational programs therein. In our institution, simulation fellowships were offered by individual clinical departments. We recognized the need for a formal curriculum in educational theory. Kern's approach to curriculum development was used to develop, implement, and evaluate the Foundational Elements of Applied Simulation Theory (FEAST) curriculum. Needs assessments resulted in a 26-topic curriculum; each biweekly session built upon the previous. Components essential to success included setting goals and objectives for each interactive session and having dedicated faculty, collaborative leadership and administrative support for the curriculum. Evaluation data was collated and analyzed annually via anonymous feedback surveys, focus groups, and retrospective pre-post self-assessment questionnaires. Data collected from 32 fellows over five years of implementation showed that the curriculum improved knowledge, challenged thinking, and was excellent preparation for a career in simulation-based medical education. Themes arising from focus groups demonstrated that participants valued faculty expertise and the structure, practicality, and content of the curriculum. We present a longitudinal simulation educator curriculum that adheres to a well-described framework of curriculum development. Program evaluation shows that FEAST has increased participant knowledge in key areas relevant to simulation-based education and that the curriculum has been successful in meeting the needs of novice simulation educators. Insights and practice points are offered for educators wishing to implement a similar curriculum in their institution.

  6. Charging of the Van Allen Probes: Theory and Simulations

    Science.gov (United States)

    Delzanno, G. L.; Meierbachtol, C.; Svyatskiy, D.; Denton, M.

    2017-12-01

    The electrical charging of spacecraft has been a known problem since the beginning of the space age. Its consequences can vary from moderate (single event upsets) to catastrophic (total loss of the spacecraft) depending on a variety of causes, some of which could be related to the surrounding plasma environment, including emission processes from the spacecraft surface. Because of its complexity and cost, this problem is typically studied using numerical simulations. However, inherent unknowns in both plasma parameters and spacecraft material properties can lead to inaccurate predictions of overall spacecraft charging levels. The goal of this work is to identify and study the driving causes and necessary parameters for particular spacecraft charging events on the Van Allen Probes (VAP) spacecraft. This is achieved by making use of plasma theory, numerical simulations, and on-board data. First, we present a simple theoretical spacecraft charging model, which assumes a spherical spacecraft geometry and is based upon the classical orbital-motion-limited approximation. Some input parameters to the model (such as the warm plasma distribution function) are taken directly from on-board VAP data, while other parameters are either varied parametrically to assess their impact on the spacecraft potential, or constrained through spacecraft charging data and statistical techniques. Second, a fully self-consistent numerical simulation is performed by supplying these parameters to CPIC, a particle-in-cell code specifically designed for studying plasma-material interactions. CPIC simulations remove some of the assumptions of the theoretical model and also capture the influence of the full geometry of the spacecraft. The CPIC numerical simulation results will be presented and compared with on-board VAP data. This work will set the foundation for our eventual goal of importing the full plasma environment from the LANL-developed SHIELDS framework into CPIC, in order to more accurately
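
    The "simple theoretical spacecraft charging model" described above is not reproduced here, but a generic textbook-style orbit-motion-limited (OML) current balance for a negatively charged sphere in a Maxwellian plasma can be sketched in a few lines of Python; all parameter values below are illustrative assumptions, not VAP data.

        import math

        E = 1.602e-19    # elementary charge [C]
        KB = 1.381e-23   # Boltzmann constant [J/K]
        ME = 9.109e-31   # electron mass [kg]
        MI = 1.673e-27   # proton mass [kg]

        def thermal_flux(n, T, m):
            # One-sided thermal particle flux n*sqrt(kT / (2*pi*m)) [m^-2 s^-1].
            return n * math.sqrt(KB * T / (2.0 * math.pi * m))

        def net_current_density(phi, n, Te, Ti):
            # Net current density to a surface at potential phi < 0: attracted ions minus repelled electrons.
            je = E * thermal_flux(n, Te, ME) * math.exp(E * phi / (KB * Te))  # repelled electrons
            ji = E * thermal_flux(n, Ti, MI) * (1.0 - E * phi / (KB * Ti))    # attracted ions (OML sphere)
            return ji - je

        def floating_potential(n=1.0e6, Te=1.0e4, Ti=1.0e4):
            # Bisection for the surface potential at which ion and electron currents balance.
            lo, hi = -50.0 * KB * Te / E, 0.0
            for _ in range(100):
                mid = 0.5 * (lo + hi)
                if net_current_density(mid, n, Te, Ti) > 0.0:
                    lo = mid   # still net ion collection: the root lies at a less negative potential
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        print(f"floating potential ~ {floating_potential():.2f} V")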

  7. Electrohydrodynamics of drops in strong electric fields: Simulations and theory

    Science.gov (United States)

    Saintillan, David; Das, Debasish

    2016-11-01

    Weakly conducting dielectric liquid drops suspended in another dielectric liquid exhibit a wide range of dynamical behaviors when subject to an applied uniform electric field, contingent on field strength and material properties. These phenomena are best described by the much-celebrated Taylor-Melcher leaky dielectric model, which hypothesizes charge accumulation on the drop-fluid interface and prescribes a balance between charge relaxation, the jump in Ohmic currents, and charge convection by the interfacial fluid flow. Most previous numerical simulations based on this model have either neglected interfacial charge convection or restricted themselves to axisymmetric drops. In this work, we develop a three-dimensional boundary element method for the complete leaky dielectric model to systematically study the deformation and dynamics of liquid drops in electric fields. The inclusion of charge convection in our simulation permits us to investigate drops in the Quincke regime, in which experiments have demonstrated symmetry-breaking bifurcations leading to steady electrorotation. Our simulation results show excellent agreement with existing experimental data and small deformation theories. ACSPRF Grant 53240-ND9.
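
    For reference, the interfacial charge balance at the core of the leaky dielectric model is commonly written in a form such as the following (sign and normalization conventions vary between authors, and this is not taken from the work summarized above):

        \[
          \frac{\partial q}{\partial t} + \nabla_s \cdot \left( q\,\mathbf{u} \right)
            = \left[\!\left[ \sigma\, \mathbf{E} \cdot \mathbf{n} \right]\!\right],
        \]

    where q is the interfacial charge density, u the interfacial fluid velocity, and the double bracket denotes the jump in the Ohmic current σE·n across the drop surface.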

  8. Theory, Modeling and Simulation: Research progress report 1994--1995

    Energy Technology Data Exchange (ETDEWEB)

    Garrett, B.C.; Dixon, D.A.; Dunning, T.H.

    1997-01-01

    The Pacific Northwest National Laboratory (PNNL) has established the Environmental Molecular Sciences Laboratory (EMSL). In April 1994, construction began on the new EMSL, a collaborative research facility devoted to advancing the understanding of environmental molecular science. Research in the Theory, Modeling, and Simulation (TM and S) program will play a critical role in understanding molecular processes important in restoring DOE's research, development, and production sites, including understanding the migration and reactions of contaminants in soils and ground water, developing processes for isolation and processing of pollutants, developing improved materials for waste storage, understanding the enzymatic reactions involved in the biodegradation of contaminants, and understanding the interaction of hazardous chemicals with living organisms. The research objectives of the TM and S program are fivefold: to apply available electronic structure and dynamics techniques to study fundamental molecular processes involved in the chemistry of natural and contaminated systems; to extend current electronic structure and dynamics techniques to treat molecular systems of future importance and to develop new techniques for addressing problems that are computationally intractable at present; to apply available molecular modeling techniques to simulate molecular processes occurring in the multi-species, multi-phase systems characteristic of natural and polluted environments; to extend current molecular modeling techniques to treat ever more complex molecular systems and to improve the reliability and accuracy of such simulations; and to develop technologies for advanced parallel architectural computer systems. Research highlights of 82 projects are given.

  9. ROMI 4.0: Rough mill simulator 4.0 users manual

    Science.gov (United States)

    R. Edward Thomas; Timo Grueneberg; Urs. Buehlmann

    2015-01-01

    The Rough MIll simulator (ROMI Version 4.0) is a computer software package for personal computers (PCs) that simulates current industrial practices for rip-first, chop-first, and rip and chop-first lumber processing. This guide shows how to set up the software; design, implement, and execute simulations; and examine the results. ROMI 4.0 accepts cutting bills with as...

  10. CoaSim Guile Manual — Using the Guile-based CoaSim Simulator

    DEFF Research Database (Denmark)

    Mailund, T

    2006-01-01

    CoaSim is a tool for simulating the coalescent process with recombination and gene conversion, under either constant population size or exponential population growth. It effectively constructs the ancestral recombination graph for a given number of chromosomes and uses this to simulate samples...

  11. BIOACCUMULATION AND AQUATIC SYSTEM SIMULATOR (BASS) USER'S MANUAL BETA TEST VERSION 2.1

    Science.gov (United States)

    BASS (Bioaccumulation and Aquatic System Simulator) is a Fortran 95 simulation program that predicts the population and bioaccumulation dynamics of age-structured fish assemblages that are exposed to hydrophobic organic pollutants and class B and borderline metals that complex wi...

  12. USER MANUAL FOR EXPRESS, THE EXAMS-PRZM EXPOSURE SIMULATION SHELL

    Science.gov (United States)

    The Environmental Fate and Effects Division (EFED) of EPA's Office of Pesticide Programs (OPP) uses a suite of ORD simulation models for the exposure analysis portion of regulatory risk assessments. These models (PRZM, EXAMS, AgDisp) are complex, process-based simulation codes tha...

  13. Operating system for a real-time multiprocessor propulsion system simulator. User's manual

    Science.gov (United States)

    Cole, G. L.

    1985-01-01

    The NASA Lewis Research Center is developing and evaluating experimental hardware and software systems to help meet future needs for real-time, high-fidelity simulations of air-breathing propulsion systems. Specifically, the real-time multiprocessor simulator project focuses on the use of multiple microprocessors to achieve the required computing speed and accuracy at relatively low cost. Operating systems for such hardware configurations are generally not available. A real time multiprocessor operating system (RTMPOS) that supports a variety of multiprocessor configurations was developed at Lewis. With some modification, RTMPOS can also support various microprocessors. RTMPOS, by means of menus and prompts, provides the user with a versatile, user-friendly environment for interactively loading, running, and obtaining results from a multiprocessor-based simulator. The menu functions are described and an example simulation session is included to demonstrate the steps required to go from the simulation loading phase to the execution phase.

  14. MHD turbulent dynamo in astrophysics: Theory and numerical simulation

    Science.gov (United States)

    Chou, Hongsong

    2001-10-01

    This thesis treats the physics of dynamo effects through theoretical modeling of magnetohydrodynamic (MHD) systems and direct numerical simulations of MHD turbulence. After a brief introduction to astrophysical dynamo research in Chapter 1, the following issues in developing dynamic models of dynamo theory are addressed: In Chapter 2, nonlinearity that arises from the back reaction of the magnetic field on the velocity field is considered in a new model for the dynamo α-effect. The dependence of the α-coefficient on magnetic Reynolds number, kinetic Reynolds number, magnetic Prandtl number, and statistical properties of MHD turbulence is studied. In Chapter 3, the time-dependence of magnetic helicity dynamics and its influence on dynamo effects are studied with a theoretical model and 3D direct numerical simulations. The applicability of and the connection between different dynamo models are also discussed. In Chapter 4, processes of magnetic field amplification by turbulence are numerically simulated with a 3D Fourier spectral method. The initial seed magnetic field can be a large-scale field, a small-scale magnetic impulse, or a combination of these two. Other issues, such as dynamo processes due to helical Alfvénic waves and the implication and validity of the Zeldovich relation, are also addressed in Appendix B and Chapters 4 & 5, respectively. Main conclusions and future work are presented in Chapter 5. Applications of these studies are intended for astrophysical magnetic field generation through turbulent dynamo processes, especially when nonlinearity plays a central role. In studying the physics of MHD turbulent dynamo processes, the following tools are developed: (1) A double Fourier transform in both space and time for the linearized MHD equations (Chapter 2 and Appendices A & B). (2) A Fourier spectral numerical method for direct simulation of the 3D incompressible MHD equations (Appendix C).
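
    For context, mean-field dynamo models of the kind discussed above are usually organized around the mean-field induction equation; a generic form with constant coefficients (not the specific closure developed in the thesis) is

        \[
          \frac{\partial \langle \mathbf{B} \rangle}{\partial t}
            = \nabla \times \left( \langle \mathbf{v} \rangle \times \langle \mathbf{B} \rangle
              + \alpha\, \langle \mathbf{B} \rangle \right)
            + (\eta + \beta)\, \nabla^{2} \langle \mathbf{B} \rangle,
        \]

    where α is the helicity-related dynamo coefficient, η the microscopic magnetic diffusivity, and β the turbulent diffusivity.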

  15. Simulating activation propagation in social networks using the graph theory

    Directory of Open Access Journals (Sweden)

    František Dařena

    2010-01-01

    The formation and analysis of social networks is nowadays one of the objects of intensive research. The objective of the paper is to suggest representing social networks as graphs, to apply graph theory to problems connected with studying network-like structures, and to study the spreading activation algorithm as a means of analyzing these structures. The paper presents the process of modeling multidimensional networks by means of directed graphs with several characteristics. The paper also demonstrates the use of the spreading activation algorithm as a good method for analyzing multidimensional networks, with a main focus on recommender systems. The experiments showed that the choice of the algorithm's parameters is crucial, that some kind of constraint should be included, and that the algorithm is able to provide a stable environment for simulations with networks.
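
    As an illustration of the spreading activation idea described above (a minimal sketch under simple assumptions: a directed weighted graph, a constant decay factor, and a firing threshold; the node names and parameters are hypothetical and not taken from the paper):

        from collections import defaultdict

        def spreading_activation(edges, seeds, decay=0.8, firing_threshold=0.1, max_steps=10):
            """Propagate activation through a directed, weighted graph.

            edges: dict mapping node -> list of (neighbor, weight) pairs
            seeds: dict mapping node -> initial activation
            """
            activation = defaultdict(float, seeds)
            frontier = dict(seeds)
            for _ in range(max_steps):
                next_frontier = defaultdict(float)
                for node, value in frontier.items():
                    if value < firing_threshold:
                        continue  # node does not fire
                    for neighbor, weight in edges.get(node, []):
                        next_frontier[neighbor] += value * weight * decay
                if not next_frontier:
                    break
                for node, value in next_frontier.items():
                    activation[node] += value
                frontier = next_frontier
            return dict(activation)

        # Toy network mixing users and items, e.g. for a recommender-style query.
        edges = {
            "alice": [("bob", 0.5), ("movie_A", 1.0)],
            "bob": [("movie_B", 1.0)],
            "movie_A": [("bob", 0.5)],
        }
        print(spreading_activation(edges, {"alice": 1.0}))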

  16. Probability theory versus simulation of petroleum potential in play analysis

    Science.gov (United States)

    Crovelli, R.A.

    1987-01-01

    An analytic probabilistic methodology for resource appraisal of undiscovered oil and gas resources in play analysis is presented. This play-analysis methodology is a geostochastic system for petroleum resource appraisal in explored as well as frontier areas. An objective was to replace an existing Monte Carlo simulation method in order to increase the efficiency of the appraisal process. Underlying the two methods is a single geologic model which considers both the uncertainty of the presence of the assessed hydrocarbon and its amount if present. The results of the model are resource estimates of crude oil, nonassociated gas, dissolved gas, and gas for a geologic play in terms of probability distributions. The analytic method is based upon conditional probability theory and a closed form solution of all means and standard deviations, along with the probabilities of occurrence. © 1987 J.C. Baltzer A.G., Scientific Publishing Company.
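
    The contrast between the analytic and Monte Carlo approaches can be illustrated with a toy play model in Python; the factors, distributions, and probability below are invented for illustration and have nothing to do with the assessment data behind the work above.

        import math
        import random

        # Hypothetical play: resource = area * thickness * yield if hydrocarbons are present.
        MU = {"area": 2.0, "thickness": 1.0, "yield": 0.5}      # lognormal location parameters
        SIGMA = {"area": 0.6, "thickness": 0.4, "yield": 0.3}   # lognormal scale parameters
        P_PRESENT = 0.3                                         # probability the play is productive

        def analytic_mean():
            # For independent lognormal factors the mean of the product is the product of the
            # means, which is the kind of closed-form result an analytic method can exploit.
            mean_product = 1.0
            for k in MU:
                mean_product *= math.exp(MU[k] + 0.5 * SIGMA[k] ** 2)
            return P_PRESENT * mean_product

        def monte_carlo_mean(n=200000, seed=7):
            random.seed(seed)
            total = 0.0
            for _ in range(n):
                if random.random() < P_PRESENT:
                    value = 1.0
                    for k in MU:
                        value *= random.lognormvariate(MU[k], SIGMA[k])
                    total += value
            return total / n

        print("analytic mean:", analytic_mean(), "monte carlo mean:", monte_carlo_mean())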

  17. Density-functional theory simulation of large quantum dots

    Science.gov (United States)

    Jiang, Hong; Baranger, Harold U.; Yang, Weitao

    2003-10-01

    Kohn-Sham spin-density functional theory provides an efficient and accurate model to study electron-electron interaction effects in quantum dots, but its application to large systems is a challenge. Here an efficient method for the simulation of quantum dots using density-functional theory is developed; it includes the particle-in-the-box representation of the Kohn-Sham orbitals, an efficient conjugate-gradient method to directly minimize the total energy, a Fourier convolution approach for the calculation of the Hartree potential, and a simplified multigrid technique to accelerate the convergence. We test the methodology in a two-dimensional model system and show that numerical studies of large quantum dots with several hundred electrons become computationally affordable. In the noninteracting limit, the classical dynamics of the system we study can be continuously varied from integrable to fully chaotic. The qualitative difference in the noninteracting classical dynamics has an effect on the quantum properties of the interacting system: integrable classical dynamics leads to higher-spin states and a broader distribution of spacing between Coulomb blockade peaks.
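
    For reference, the spin-dependent Kohn-Sham equations referred to above take the generic form below (written with an effective mass m*, as is common for semiconductor quantum dots; this is the textbook form, not the particular discretization developed in the paper):

        \[
          \left[ -\frac{\hbar^{2}}{2 m^{*}}\,\nabla^{2} + V_{\mathrm{ext}}(\mathbf{r})
            + V_{\mathrm{H}}(\mathbf{r}) + V_{\mathrm{xc}}^{\sigma}(\mathbf{r}) \right]
            \psi_{i\sigma}(\mathbf{r}) = \varepsilon_{i\sigma}\,\psi_{i\sigma}(\mathbf{r}),
        \]

    where V_ext is the confining dot potential, V_H the Hartree potential, and V_xc^σ the spin-dependent exchange-correlation potential.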

  18. Simulation and quasilinear theory of proton firehose instability

    Energy Technology Data Exchange (ETDEWEB)

    Seough, Jungjoon [Korean Astronomy and Space Science Institute, Daejeon (Korea, Republic of); Faculty of Human Development, University of Toyama, 3190, Gofuku, Toyama City, Toyama, 930-8555 (Japan); Yoon, Peter H. [University of Maryland, College Park, Maryland 20742 (United States); School of Space Research, Kyung Hee University, Yongin, Gyeonggi 446-701 (Korea, Republic of); Hwang, Junga [Korean Astronomy and Space Science Institute, Daejeon (Korea, Republic of); Korea, University of Science and Technology, Daejeon (Korea, Republic of)

    2015-01-15

    The electromagnetic proton firehose instability is driven by excessive parallel temperature anisotropy, T_∥ > T_⊥ (or more precisely, parallel pressure anisotropy, P_∥ > P_⊥) in high-beta plasmas. Together with kinetic instabilities driven by excessive perpendicular temperature anisotropy, namely, electromagnetic proton cyclotron and mirror instabilities, its role in providing the upper limit for the temperature anisotropy in the solar wind is well-known. A recent Letter [Seough et al., Phys. Rev. Lett. 110, 071103 (2013)] employed quasilinear kinetic theory for these instabilities to explain the observed temperature anisotropy upper bound in the solar wind. However, the validity of quasilinear approach has not been rigorously tested until recently. In a recent paper [Seough et al., Phys. Plasmas 21, 062118 (2014)], a comparative study is carried out for the first time in which quasilinear theory of proton cyclotron instability is tested against results obtained from the particle-in-cell simulation method, and it was demonstrated that the agreement was rather excellent. The present paper addresses the same issue involving the proton firehose instability. Unlike the proton cyclotron instability, however, it is found that the quasilinear approximation enjoys only a limited range of validity, especially for the wave dynamics and for the relatively high-beta regime. Possible causes and mechanisms responsible for the discrepancies are speculated and discussed.
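
    For orientation, the fluid threshold usually quoted for the parallel (firehose) instability in a bi-Maxwellian plasma is

        \[
          \beta_{\parallel} - \beta_{\perp} > 2,
          \qquad
          \beta_{\parallel,\perp} = \frac{8\pi P_{\parallel,\perp}}{B^{2}},
        \]

    with P_∥ and P_⊥ the total parallel and perpendicular pressures; the kinetic growth rates and marginal-stability contours studied in the paper refine this simple condition.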

  19. Nonlinear polarization of ionic liquids: theory, simulations, experiments

    Science.gov (United States)

    Kornyshev, Alexei

    2010-03-01

    Room temperature ionic liquids (RTILs) composed of large, often asymmetric, organic cations and simple or complex inorganic or organic anions do not freeze at ambient temperatures. Their rediscovery some 15 years ago is widely accepted as a "green revolution" in chemistry, offering an unlimited number of "designer" solvents for chemical and photochemical reactions, homogeneous catalysis, lubrication, and solvent-free electrolytes for energy generation and storage. As electrolytes they are non-volatile, some can sustain without decomposition up to 6 times higher voltages than aqueous electrolytes, and many are environmentally friendly. The studies of RTILs and their applications have reached a critical stage. So many of them can be synthesized - about a thousand are known already - and their mixtures can further provide an "unlimited" number of combinations! Thus, establishing some general laws that could direct the best choice of a RTIL for a given application became crucial; guidance is expected from theory and modelling. But for a physical theory, RTILs comprise a peculiar and complex class of media, the description of which lies at the frontier line of condensed matter theoretical physics: dense room temperature ionic plasmas with "super-strong" Coulomb correlations, which behave like glasses at short time-scale, but like viscous liquids at long-time scale. This talk will introduce RTILs to physicists and overview the current understanding of the nonlinear response of RTILs to electric field. It will focus on the theory, simulations, and experimental characterisation of the structure and nonlinear capacitance of the electrical double layer at a charged electrode. It will also discuss pros and cons of supercapacitor applications of RTILs.

  20. Theory and Simulation of an Inverse Free Electron Laser Experiment

    Science.gov (United States)

    Guo, S. K.; Bhattacharjee, A.; Fang, J. M.; Marshall, T. C.

    1996-11-01

    An experimental demonstration of the acceleration of electrons using a high power CO2 laser in an inverse free electron laser (IFEL) is underway at the Brookhaven National Laboratory. This experiment has generated data, which we are attempting to simulate. Included in our studies are such effects as: a low-loss metallic waveguide with a dielectric coating on the walls; multi-mode coupling due to self-consistent interaction between the electrons and the optical wave; space charge (which is significant at lower laser power); energy-spread of the electrons; arbitrary wiggler field profile; and slippage. Two types of wiggler profile have been considered: a linear taper of the period, and a step-taper of the period (the period is ~ 3cm, the field is ~ 1T, and the wiggler length is 47cm). The energy increment of the electrons ( ~ 1-2%) is analyzed in detail as a function of laser power, wiggler parameters, and the initial beam energy (40MeV). For laser power ~ 0.5GW, the predictions of the simulations are in good accord with experimental results. A matter currently under study is the discrepancy between theory and observations for the electron energy distribution observed at the end of the IFEL. This work is supported by the Department of Energy.

  1. Toward a Theory of Entrepreneurial Rents: a Simulation of the Market Process

    NARCIS (Netherlands)

    Keyhani, M; Levesque, M.; Madhok, A.

    2015-01-01

    While strategy theory relies heavily on equilibrium theories of economic rents such as Ricardian and monopoly rents, we do not yet have a comprehensive theory of disequilibrium or entrepreneurial rents. We use cooperative game theory to structure computer simulations of the market process in which

  2. Operationalising elaboration theory for simulation instruction design: a Delphi study.

    Science.gov (United States)

    Haji, Faizal A; Khan, Rabia; Regehr, Glenn; Ng, Gary; de Ribaupierre, Sandrine; Dubrowski, Adam

    2015-06-01

    The aim of this study was to assess the feasibility of incorporating the Delphi process within the simplifying conditions method (SCM) described in elaboration theory (ET) to identify conditions impacting the complexity of procedural skills for novice learners. We generated an initial list of conditions impacting the complexity of lumbar puncture (LP) from key informant interviews (n = 5) and a literature review. Eighteen clinician-educators from six different medical specialties were subsequently recruited as expert panellists. Over three Delphi rounds, these panellists rated: (i) their agreement with the inclusion of the simple version of the conditions in a representative ('epitome') training scenario, and (ii) how much the inverse (complex) version increases LP complexity for a novice. Cronbach's α-values were used to assess inter-rater agreement. All panellists completed Rounds 1 and 2 of the survey and 17 completed Round 3. In Round 1, Cronbach's α-values were 0.89 and 0.94 for conditions that simplify and increase LP complexity, respectively; both values increased to 0.98 in Rounds 2 and 3. With the exception of 'high CSF (cerebral spinal fluid) pressure', panellists agreed with the inclusion of all conditions in the simplest (epitome) training scenario. Panellists rated patient movement, spinal anatomy, patient cooperativeness, body habitus, and the presence or absence of an experienced assistant as having the greatest impact on the complexity of LP. This study demonstrated the feasibility of using expert consensus to establish conditions impacting the complexity of procedural skills, and the benefits of incorporating the Delphi method into the SCM. These data can be used to develop and sequence simulation scenarios in a progressively challenging manner. If the theorised learning gains associated with ET are realised, the methods described in this study may be applied to the design of simulation training for other procedural and non-procedural skills

  3. Netflow Theory Manual

    Energy Technology Data Exchange (ETDEWEB)

    Bozinoski, Radoslav; Winters, William

    2016-01-01

    The purpose of this report is to document the theoretical models utilized by the computer code NETFLOW. This report will focus on the theoretical models used to analyze high Mach number fully compressible transonic flows in piping networks.

  4. NUFACTS-nuclear fuel cycle activity simulator: reference manual. Final report

    International Nuclear Information System (INIS)

    Triplett, M.B.; Waddell, J.D.; Breese, T.A.

    1978-01-01

    The Nuclear Fuel Cycle Activity Simulator (NUFACTS) is a package of FORTRAN subroutines that facilitates the simulation of a diversity of nuclear power growth scenarios. An approach to modeling the nuclear fuel cycle has been developed that is highly adaptive and capable of addressing a variety of problems. Being a simulation model rather than an optimization model, NUFACTS mimics the events and processes that are characteristic of the nuclear fuel cycle. This approach enables the model user to grasp the modeling approach rather quickly. Within this report, descriptions of the model and its components are provided with several emphases. First, a discussion of the modeling approach and basic assumptions is provided. Next, instructions are provided for generating data, inputting the data properly, and running the code. Finally, detailed descriptions of individual program elements are given as an aid to modifying and extending the present capabilities.

  5. User's guide and documentation manual for "PC-Gel" simulator

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Ming-Ming; Gao, Hong W.

    1993-10-01

    PC-GEL is a three-dimensional, three-phase (oil, water, and gas) permeability modification simulator developed by incorporating an in-situ gelation model into a black oil simulator (BOAST) for personal computer application. The features included in the simulator are: transport of each chemical species of the polymer/crosslinker system in porous media, gelation reaction kinetics of the polymer with crosslinking agents, rheology of the polymer and gel, inaccessible pore volume to macromolecules, adsorption of chemical species on rock surfaces, retention of gel on the rock matrix, and permeability reduction caused by the adsorption of polymer and gel. The in-situ gelation model and simulator were validated against data reported in the literature. The simulator PC-GEL is useful for simulating and optimizing any combination of primary production, waterflooding, polymer flooding, and permeability modification treatments. A general background of permeability modification using crosslinked polymer gels is given in Section I and the governing equations, mechanisms, and numerical solutions of PC-GEL are given in Section II. Steps for preparing an input data file with reservoir and gel-chemical transport data, and recurrent data are described in Sections III and IV, respectively. Example data inputs are enclosed after explanations of each input line to help the user prepare data files. Major items of the output files are reviewed in Section V. Finally, three sample problems for running PC-GEL are described in Section VI, and input files and part of the output files of these problems are listed in the appendices. For the user's reference a copy of the source code of the PC-GEL computer program is attached in Appendix A.

  6. CSTEM User Manual

    Science.gov (United States)

    Hartle, M.; McKnight, R. L.

    2000-01-01

    This manual is a combination of a user manual, theory manual, and programmer manual. The reader is assumed to have some previous exposure to the finite element method. This manual is written with the idea that the CSTEM (Coupled Structural Thermal Electromagnetic-Computer Code) user needs to have a basic understanding of what the code is actually doing in order to properly use the code. For that reason, the underlying theory and methods used in the code are described to a basic level of detail. The manual gives an overview of the CSTEM code: how the code came into existence, a basic description of what the code does, and the order in which it happens (a flowchart). Appendices provide a listing and very brief description of every file used by the CSTEM code, including the type of file it is, what routine regularly accesses the file, and what routine opens the file, as well as special features included in CSTEM.

  7. Manual for the Jet Event and Background Simulation Library (JEBSimLib)

    Energy Technology Data Exchange (ETDEWEB)

    Heinz, Matthias [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Soltz, Ron [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Angerami, Aaron [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-08-29

    Jets are the collimated streams of particles resulting from hard scattering in the initial state of high-energy collisions. In heavy-ion collisions, jets interact with the quark-gluon plasma (QGP) before freezeout, providing a probe into the internal structure and properties of the QGP. In order to study jets, background must be subtracted from the measured event, potentially introducing a bias. We aim to understand and quantify this subtraction bias. PYTHIA, a library to simulate pure jet events, is used to simulate a model for a signature with one pure jet (a photon) and one quenched jet, where all quenched particle momenta are reduced by a user-defined constant fraction. Background for the event is simulated using multiplicity values generated by the TRENTO initial state model of heavy-ion collisions fed into a thermal model consisting of a 3-dimensional Boltzmann distribution for particle types and momenta. Data from the simulated events are used to train a statistical model, which computes a posterior distribution of the quench factor for a data set. The model was tested first on pure jet events and then on full events including the background. This model will allow for a quantitative determination of biases induced by various methods of background subtraction.

  8. The GRASP 3: Graphical Reliability Analysis Simulation Program. Version 3: A users' manual and modelling guide

    Science.gov (United States)

    Phillips, D. T.; Manseur, B.; Foster, J. W.

    1982-01-01

    Alternate definitions of system failure create complex analysis for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
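
    The simulation approach summarized above, sampling component failure and repair events from their probability laws and observing the resulting system states, can be illustrated with a deliberately simplified Monte Carlo sketch. The exponential distributions, parameter values and two-component parallel configuration below are assumptions chosen for illustration only; GRASP's own modelling features are far richer.

        import random

        def component_down_intervals(mttf, mttr, mission_time, rng):
            # Alternating up/down renewal process with exponential failure and repair times.
            t, down = 0.0, []
            while t < mission_time:
                t += rng.expovariate(1.0 / mttf)         # time to next failure
                repair = rng.expovariate(1.0 / mttr)     # repair duration
                down.append((t, min(t + repair, mission_time)))
                t += repair
            return down

        def is_down(intervals, t):
            return any(a <= t < b for a, b in intervals)

        def system_unavailability(mttfs, mttrs, mission_time, trials=500, samples=100, seed=7):
            # Parallel (redundant) system: it is down only when every component is down.
            rng = random.Random(seed)
            down, total = 0, 0
            for _ in range(trials):
                histories = [component_down_intervals(f, r, mission_time, rng)
                             for f, r in zip(mttfs, mttrs)]
                for k in range(samples):
                    t = mission_time * (k + 0.5) / samples
                    total += 1
                    down += all(is_down(h, t) for h in histories)
            return down / total

        # Hypothetical numbers: two redundant pumps, MTTF 1000 h, MTTR 50 h, one-year mission.
        print(system_unavailability([1000.0, 1000.0], [50.0, 50.0], 8760.0))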

  9. Atom probe tomography simulations and density functional theory calculations of bonding energies in Cu3Au

    KAUST Repository

    Boll, Torben; Zhu, Zhiyong; Al-Kassab, Talaat; Schwingenschlögl, Udo

    2012-01-01

    In this article the Cu-Au binding energy in Cu3Au is determined by comparing experimental atom probe tomography (APT) results to simulations. The resulting bonding energy is supported by density functional theory calculations. The APT simulations

  10. Third and fourth quarter progress report on plasma theory and simulation, July 1-December 31, 1986

    International Nuclear Information System (INIS)

    Birdsall, C.K.

    1987-01-01

    Our group uses theory and simulation as tools in order to increase the understanding of plasma instabilities, heating, transport, plasma-wall interactions, and large potentials in plasmas. We also work on the improvement of simulation both theoretically and practically

  11. Numerical simulation of the manual operation of the charging/discharging machine (MID) control desk

    International Nuclear Information System (INIS)

    Doca, C; Dobre, A

    2004-01-01

    Since 2000, the 7th Division TAR of the Institute for Nuclear Research - Pitesti has made continuous efforts to implement a software package devoted to the numerical simulation of operations at the test bench of the charging/discharging machine (MID). To date, the PUPITRU code has been specified, designed and implemented, the present version fulfilling the following requirements: graphical output specific to the computer/human operator interface; a 1:4-scale rendering of each of the 25 drawers of the control desk; graphical and functional simulation of all the physical objects mounted in these drawers, namely 12 analog measuring instruments with linear and non-linear dials (ammeters), 21 digital measuring instruments (voltmeters), 24 switches with two up/down settings, 13 switches with three up/down settings, 23 switches with two left/right settings, one switch with three left/right settings, one switch with four left/right settings, 2 switches with five left/right settings, and 68, 16, 23, 53 and 81 signaling lamps of white, yellow, orange, red and green light, respectively; implementation within the PUPITRU code of the main notations used in the automation schemes of the control desk's execution design, allowing quick identification of the physical objects (switches, lamps, instruments, etc.); implementation within the PUPITRU code of the full database (mnemonics and numerical values) used in the MID tests; and implementation of over 1000 numerical simulation equations appropriate to the situations characteristic of test bench and MID operation. The functional simulation of all the control desk components is now complete. This paper presents a description and a demonstration run of the PUPITRU code. (authors)

  12. User's manual for computer code SOLTES-1 (simulator of large thermal energy systems)

    International Nuclear Information System (INIS)

    Fewell, M.E.; Grandjean, N.R.; Dunn, J.C.; Edenburn, M.W.

    1978-09-01

    SOLTES simulates the steady-state response of thermal energy systems to time-varying data such as weather and loads. Thermal energy system models of both simple and complex systems can easily be modularly constructed from a library of routines. These routines mathematically model solar collectors, pumps, switches, thermal energy storage, thermal boilers, auxiliary boilers, heat exchangers, extraction turbines, extraction turbine/generators, condensers, regenerative heaters, air conditioners, heating and cooling of buildings, process vapor, etc.; SOLTES also allows user-supplied routines. The analyst need only specify fluid names to obtain readout of property data for heat-transfer fluids and constants that characterize power-cycle working fluids from a fluid property data bank. A load management capability allows SOLTES to simulate total energy systems that simultaneously follow heat and power loads and demands. Generalized energy accounting is available, and values for system performance parameters may be automatically determined by SOLTES. Because of its modularity and flexibility, SOLTES can be used to simulate a wide variety of thermal energy systems such as solar power/total energy, fossil fuel power plants/total energy, nuclear power plants/total energy, solar energy heating and cooling, geothermal energy, and solar hot water heaters

  13. On the consistency of scale among experiments, theory, and simulation

    Science.gov (United States)

    McClure, James E.; Dye, Amanda L.; Miller, Cass T.; Gray, William G.

    2017-02-01

    As a tool for addressing problems of scale, we consider an evolving approach known as the thermodynamically constrained averaging theory (TCAT), which has broad applicability to hydrology. We consider the case of modeling of two-fluid-phase flow in porous media, and we focus on issues of scale as they relate to various measures of pressure, capillary pressure, and state equations needed to produce solvable models. We apply TCAT to perform physics-based data assimilation to understand how the internal behavior influences the macroscale state of two-fluid porous medium systems. A microfluidic experimental method and a lattice Boltzmann simulation method are used to examine a key deficiency associated with standard approaches. In a hydrologic process such as evaporation, the water content will ultimately be reduced below the irreducible wetting-phase saturation determined from experiments. This is problematic since the derived closure relationships cannot predict the associated capillary pressures for these states. We demonstrate that the irreducible wetting-phase saturation is an artifact of the experimental design, caused by the fact that the boundary pressure difference does not approximate the true capillary pressure. Using averaging methods, we compute the true capillary pressure for fluid configurations at and below the irreducible wetting-phase saturation. Results of our analysis include a state function for the capillary pressure expressed as a function of fluid saturation and interfacial area.

  14. Molecular Theory and Simulation of Water-Oil Contacts

    Science.gov (United States)

    Tan, Liang

    The statistical mechanical theory of hydrophobic interactions was initiated decades ago for purely repulsive hydrophobic species, in fact, originally for hard-sphere solutes in liquid water. Systems which treat only repulsive solute-water interactions obviously differ from the real-world situation. The issue of the changes to be expected from inclusion of realistic attractive solute-water interactions has been of specific interest also for decades. We consider the local molecular field (LMF) theory for the effects of solute attractive forces on hydrophobic interactions. The principal result of LMF theory is outlined, then tested by obtaining radial distribution functions (rdfs) for Ar atoms in water, with and without attractive interactions distinguished by the Weeks-Chandler-Andersen (WCA) separation. Changing from purely repulsive atomic solute interactions to include realistic attractive interactions substantially diminishes the strength of hydrophobic bonds. Since attractions make a big contribution to hydrophobic interactions, Pratt-Chandler theory, which did not include attractions, should not be naively compared to computer simulation results with general physical interactions, including attractions. Lack of general appreciation of this point has led to mistaken comparisons throughout the history of this subject. The rdfs permit evaluation of osmotic second virial coefficients B2. Those B2 are consistent with the conclusion that incorporation of attractive interactions leads to more positive (repulsive) values. In all cases here, B2 becomes more attractive with increasing temperature below T = 360 K, the so-called inverse temperature behavior. In 2010, the Gulf of Mexico Macondo well (Deepwater Horizon) oil spill focused the attention of the world on water-oil phase equilibrium. In response to the disaster, chemical dispersants were applied to break oil slicks into droplets and thus to avoid large-scale fouling of beaches and to speed up biodegradation
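
    The osmotic second virial coefficient mentioned here is conventionally obtained from a solute-solute radial distribution function through B2 = -2π ∫ [g(r) - 1] r² dr. The short sketch below performs that quadrature on a completely made-up g(r); it is only meant to show the relation, not to reproduce any result from the thesis.

        import numpy as np

        def second_virial_from_rdf(r, g):
            # B2 = -2*pi * integral of (g(r) - 1) * r^2 dr  (units: length^3)
            integrand = (g - 1.0) * r**2
            dr = r[1] - r[0]
            return -2.0 * np.pi * np.sum(integrand) * dr

        # Invented rdf: hard-core exclusion below 3.4 A plus a weak contact peak near 3.8 A.
        r = np.linspace(0.01, 20.0, 4000)
        g = np.where(r < 3.4, 0.0, 1.0 + 0.3 * np.exp(-(r - 3.8)**2 / 0.5))
        print(f"B2 ~ {second_virial_from_rdf(r, g):.1f} A^3")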

  15. From Theory to Practice: A Crisis Simulation Exercise

    Science.gov (United States)

    Aertsen, Tamara; Jaspaert, Koen; Van Gorp, Baldwin

    2013-01-01

    In this article, an educational project is described that was formulated with the aim to give master's students in business communication the opportunity to experience how theory could be applied to shape practice. A 4-week project was developed in which students were urged to use communication theory and linguistic theory to manage the…

  16. Learning theories and tools for the assessment of core nursing competencies in simulation: A theoretical review.

    Science.gov (United States)

    Lavoie, Patrick; Michaud, Cécile; Bélisle, Marilou; Boyer, Louise; Gosselin, Émilie; Grondin, Myrian; Larue, Caroline; Lavoie, Stéphan; Pepin, Jacinthe

    2018-02-01

    To identify the theories used to explain learning in simulation and to examine how these theories guided the assessment of learning outcomes related to core competencies in undergraduate nursing students. Nurse educators face the challenge of making explicit the outcomes of competency-based education, especially when competencies are conceptualized as holistic and context-dependent. Theoretical review. Research papers (N = 182) published between 1999 and 2015 describing simulation in nursing education. Two members of the research team extracted data from the papers, including theories used to explain how simulation could engender learning and tools used to assess simulation outcomes. Contingency tables were created to examine the associations between theories, outcomes and tools. Some papers (N = 79) did not provide an explicit theory. The 103 remaining papers identified one or more learning or teaching theories; the most frequent were the National League for Nursing/Jeffries Simulation Framework, Kolb's theory of experiential learning and Bandura's social cognitive theory and concept of self-efficacy. Students' perceptions of simulation, knowledge and self-confidence were the most frequently assessed, mainly via scales designed for the study in which they were used. Core competencies were mostly assessed with an observational approach. This review highlighted the fact that few studies examined the use of simulation in nursing education through learning theories and via assessment of core competencies. It also identified observational tools used to assess competencies in action, as holistic and context-dependent constructs. © 2017 John Wiley & Sons Ltd.

  17. Case report: a work simulation program for a manual worker with a fracture injury.

    Science.gov (United States)

    Chan, Chi-Chung; Chow, Jonathan H.W.

    2000-01-01

    Work rehabilitation programs targeting different client groups are available in nearly all major hospital occupational therapy departments in Hong Kong. Clients receiving work rehabilitation are referred from various out-patient clinics and other occupational therapists. Those clients experience limitations in work after their injuries or diseases and plan to return to work after rehabilitation. Program objectives are: 1) to assist clients to reach maximum work capacity as rapidly as possible; 2) to ensure clients return to work safely; and 3) to improve clients' work readiness. This case report describes an individualized work simulation program at a general hospital in Hong Kong provided for a typical client who is preparing to return to his worker role. Specific job analysis, goals and program rationale for the client are discussed.

  18. Multi-institutional comparison of simulated treatment delivery errors in ssIMRT, manually planned VMAT and autoplan-VMAT plans for nasopharyngeal radiotherapy

    DEFF Research Database (Denmark)

    Pogson, Elise M; Aruguman, Sankar; Hansen, Christian R

    2017-01-01

    PURPOSE: To quantify the impact of simulated errors for nasopharynx radiotherapy across multiple institutions and planning techniques (auto-plan generated Volumetric Modulated Arc Therapy (ap-VMAT), manually planned VMAT (mp-VMAT) and manually planned step and shoot Intensity Modulated Radiation Therapy (mp-ssIMRT)). METHODS: Ten patients were retrospectively planned with VMAT according to three institutions' protocols. Within one institution, two further treatment plans were generated using differing treatment planning techniques. This resulted in mp-ssIMRT, mp-VMAT, and ap-VMAT plans. Introduced...

  19. Is the aptitude of manual skills enough for assessing the training effect of students using a laparoscopy simulator?

    Directory of Open Access Journals (Sweden)

    Zielke, Andreas

    2005-12-01

    Background: The aim of this study was to determine if students are suitable candidates to assess the learning effect through a virtual reality laparoscopy simulator (LapSim®). Materials and methods: 14 medical students in their final year without any previous experience with a virtual reality simulator were recruited as subjects. In order to establish a "base line", all subjects were instructed in the "clip application" task - a basic module of the laparoscopy simulator - at the beginning of the study. They were then randomized into two groups. Group A (n=7) had the parameters adjusted to an easy level of performance, while group B (n=7) was adjusted to a difficult level. In both levels, errors simulated clinically relevant situations such as vessel rupture and subsequent bleeding. Each participant had to repeat the clip application task ten times consecutively. Results: The mean time to complete ten repetitions was 15 min per participant in group A and 20 min in group B. From the first to the fifth repetition, group A consecutively improved the task completion time from 238.9 s to 103.3 s (p<0.007) and also improved the error score from 312 to 177 (p<0.07). At the tenth repetition they increased the task completion time from 103.3 s to 152.2 s (p<0.09) and increased their error score from 177 to 202 (p=0.25). From the first to the fifth repetition, group B also consecutively improved the task completion time from 131.6 s to 104.5 s (p<0.31) and improved the error score from 235 to 208 (p<0.32), but at the tenth repetition they increased the task completion time from 104.5 s to 142.4 s (p<0.45) and clearly increased their error score from 208 to 244 (p<0.38). Conclusion: These results suggest that medical students, who lack clinical background, may not be suitable candidates for assessing the efficiency of a training model using a laparoscopy simulator. If medical students are appointed for such studies, they should receive didactic sessions in

  20. The Water, Energy, and Carbon Dioxide Sequestration Simulation Model (WECSsim). A user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Kobos, Peter Holmes; Roach, Jesse Dillon; Klise, Geoffrey Taylor; Heath, Jason E.; Dewers, Thomas A.; Gutierrez, Karen A.; Malczynski, Leonard A.; Borns, David James; McNemar, Andrea

    2014-01-01

    The Water, Energy, and Carbon Sequestration Simulation Model (WECSsim) is a national dynamic simulation model that calculates and assesses capturing, transporting, and storing CO2 in deep saline formations from all coal and natural gas-fired power plants in the U.S. An overarching capability of WECSsim is to also account for simultaneous CO2 injection and water extraction within the same geological saline formation. Extracting, treating, and using these saline waters to cool the power plant is one way to develop more value from using saline formations as CO2 storage locations. WECSsim allows for both one-to-one comparisons of a single power plant to a single saline formation along with the ability to develop a national CO2 storage supply curve and related national assessments for these formations. This report summarizes the scope, structure, and methodology of WECSsim along with a few key results. Developing WECSsim from a small scoping study to the full national-scale modeling effort took approximately 5 years; this report represents the culmination of that effort. The key findings from the WECSsim model indicate the U.S. has several decades' worth of storage for CO2 in saline formations when managed appropriately. Competition for subsurface storage capacity, intrastate flows of CO2 and water, and a supportive regulatory environment all play a key role in the performance and cost profile across the range from a single power plant to all coal and natural gas-based plants' ability to store CO2. The overall system's cost to capture, transport, and store CO2 for the national assessment ranges from $74 to $208 / tonne stored ($96 to $272 / tonne avoided) for the first 25 to 50% of the 1126 power plants to $1,585 to well beyond $2,000 / tonne stored ($2,040 to well beyond $2,000 / tonne avoided) for the remaining 75 to 100% of the plants. The latter range
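
    The national storage supply curve described above is, at its core, a cost-ranked accumulation of per-plant capture volumes. The toy sketch below shows only that construction step, with entirely fabricated plant data; WECSsim's actual cost, water-handling and capacity models are far more detailed.

        # Fabricated example plants: cost of CO2 stored and annual capture volume.
        plants = [
            {"name": "Plant A", "usd_per_tonne": 74.0,  "mt_per_year": 4.2},
            {"name": "Plant B", "usd_per_tonne": 95.0,  "mt_per_year": 2.8},
            {"name": "Plant C", "usd_per_tonne": 160.0, "mt_per_year": 6.1},
            {"name": "Plant D", "usd_per_tonne": 430.0, "mt_per_year": 1.5},
        ]

        # Sort by unit cost and accumulate capacity to trace out the supply curve.
        cumulative = 0.0
        for p in sorted(plants, key=lambda p: p["usd_per_tonne"]):
            cumulative += p["mt_per_year"]
            print(f'{p["name"]}: {p["usd_per_tonne"]:6.1f} $/tonne -> {cumulative:4.1f} Mt/yr cumulative')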

  1. Effect of flashlight guidance on manual ventilation performance in cardiopulmonary resuscitation: A randomized controlled simulation study.

    Science.gov (United States)

    Kim, Ji Hoon; Beom, Jin Ho; You, Je Sung; Cho, Junho; Min, In Kyung; Chung, Hyun Soo

    2018-01-01

    Several auditory-based feedback devices have been developed to improve the quality of ventilation performance during cardiopulmonary resuscitation (CPR), but their effectiveness has not been proven in actual CPR situations. In the present study, we investigated the effectiveness of visual flashlight guidance in maintaining high-quality ventilation performance. We conducted a simulation-based, randomized, parallel trial including 121 senior medical students. All participants were randomized to perform ventilation during 2 minutes of CPR with or without flashlight guidance. For each participant, we measured mean ventilation rate as a primary outcome and ventilation volume, inspiration velocity, and ventilation interval as secondary outcomes using a computerized device system. Mean ventilation rate did not significantly differ between flashlight guidance and control groups (P = 0.159), but participants in the flashlight guidance group exhibited significantly less variation in ventilation rate than participants in the control group (Pguidance group. Our results demonstrate that flashlight guidance is effective in maintaining a constant ventilation rate and interval. If confirmed by further studies in clinical practice, flashlight guidance could be expected to improve the quality of ventilation performed during CPR.

  2. GERTS GQ User's Manual.

    Science.gov (United States)

    Akiba, Y.; And Others

    This user's manual for the simulation program Graphical Evaluation and Review Technique (GERT) GQ contains sections on nodes, branches, program input description and format, and program output, as well as examples. Also included is a programmer's manual which contains information on scheduling, subroutine descriptions, COMMON Variables, and…

  3. NASTRAN thermal analyzer: Theory and application including a guide to modeling engineering problems, volume 1. [thermal analyzer manual

    Science.gov (United States)

    Lee, H. P.

    1977-01-01

    The NASTRAN Thermal Analyzer Manual describes the fundamental and theoretical treatment of the finite element method, with emphasis on the derivations of the constituent matrices of different elements and solution algorithms. Necessary information and data relating to the practical applications of engineering modeling are included.

  4. Digital Quantum Simulation of Z2 Lattice Gauge Theories with Dynamical Fermionic Matter

    Science.gov (United States)

    Zohar, Erez; Farace, Alessandro; Reznik, Benni; Cirac, J. Ignacio

    2017-02-01

    We propose a scheme for digital quantum simulation of lattice gauge theories with dynamical fermions. Using a layered optical lattice with ancilla atoms that can move and interact with the other atoms (simulating the physical degrees of freedom), we obtain a stroboscopic dynamics which yields the four-body plaquette interactions, arising in models with (2+1) and higher dimensions, without the use of perturbation theory. As an example we show how to simulate a Z2 model in (2+1) dimensions.

  5. Regional deep hyperthermia: impact of observer variability in CT-based manual tissue segmentation on simulated temperature distribution

    Science.gov (United States)

    Aklan, Bassim; Hartmann, Josefin; Zink, Diana; Siavooshhaghighi, Hadi; Merten, Ricarda; Putz, Florian; Ott, Oliver; Fietkau, Rainer; Bert, Christoph

    2017-06-01

    The aim of this study was to systematically investigate the influence of the inter- and intra-observer segmentation variation of tumors and organs at risk on the simulated temperature coverage of the target. CT scans of six patients with tumors in the pelvic region acquired for radiotherapy treatment planning were used for hyperthermia treatment planning. To study the effect of inter-observer variation, three observers manually segmented the following structures in the CT images of each patient: fat, muscle, bone and the bladder. The gross tumor volumes (GTV) were contoured by three radiation oncology residents and used as the hyperthermia target volumes. For intra-observer variation, one of the observers of each group contoured the structures of each patient three times with a time span of one week between the segmentations. Moreover, the impact of segmentation variations in organs at risk (OARs) between the three observers was investigated on simulated temperature distributions using only one GTV. The spatial overlap between individual segmentations was assessed by the Dice similarity coefficient (DSC) and the mean surface distance (MSD). Additionally, the temperatures T90/T10 delivered to 90%/10% of the GTV, respectively, were assessed for each observer combination. The results of the segmentation similarity evaluation showed that the DSC of the inter-observer variation of fat, muscle, the bladder, bone and the target was 0.68 ± 0.12, 0.88 ± 0.05, 0.73 ± 0.14, 0.91 ± 0.04 and 0.64 ± 0.11, respectively. Similar results were found for the intra-observer variation. The MSD results were similar to the DSCs for both observer variations. A statistically significant difference (p < 0.05) was found for T90 and T10 in the predicted target temperature due to the observer variability. The conclusion is that intra- and inter-observer variations have a significant impact on the temperature coverage of the
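
    The Dice similarity coefficient used above to score segmentation overlap is defined as DSC = 2|A∩B| / (|A| + |B|). A minimal sketch for binary masks follows; the arrays are invented and bear no relation to the study's contours.

        import numpy as np

        def dice(mask_a, mask_b):
            # Dice similarity coefficient for two boolean segmentation masks.
            a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        # Two invented 10x10 masks whose squares overlap partially.
        a = np.zeros((10, 10), bool); a[2:7, 2:7] = True
        b = np.zeros((10, 10), bool); b[3:8, 3:8] = True
        print(f"DSC = {dice(a, b):.2f}")   # 2*16 / (25 + 25) = 0.64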

  6. Nucleic acids: theory and computer simulation, Y2K.

    Science.gov (United States)

    Beveridge, D L; McConnell, K J

    2000-04-01

    Molecular dynamics simulations on DNA and RNA that include solvent are now being performed under realistic environmental conditions of water activity and salt. Improvements to force-fields and treatments of long-range interactions have significantly increased the reliability of simulations. New studies of sequence effects, axis bending, solvation and conformational transitions have appeared.

  7. War and Peace in International Relations Theory: A Classroom Simulation

    Science.gov (United States)

    Sears, Nathan Alexander

    2018-01-01

    Simulations are increasingly common pedagogical tools in political science and international relations courses. This article develops a classroom simulation that aims to facilitate students' theoretical understanding of the topic of war and peace in international relations, and accomplishes this by incorporating important theoretical concepts…

  8. Plasma theory and simulation. Quarterly progress report I, II, January 1-June 30, 1984

    International Nuclear Information System (INIS)

    Birdsall, C.K.

    1984-01-01

    Our group uses theory and simulation as tools in order to increase the understanding of instabilities, heating, transport, and other phenomena in plasmas. We also work on the improvement of simulation both theoretically and practically. Research in plasma theory and simulation has centered on the following: (1) electron Bernstein wave investigations; (2) simulation of plasma-sheath region, including ion reflection; (3) single ended plasma device, general behavior dc or ac; (4) single ended plasma device, unstable states; (5) corrections to time-independent Q-machine equilibria; (6) multifluid derivation of the Alfven ion-cyclotron linear dispersion relation; and (7) potential barrier between hot and cool plasmas

  9. Computational physics an introduction to Monte Carlo simulations of matrix field theory

    CERN Document Server

    Ydri, Badis

    2017-01-01

    This book is divided into two parts. In the first part we give an elementary introduction to computational physics consisting of 21 simulations which originated from a formal course of lectures and laboratory simulations delivered since 2010 to physics students at Annaba University. The second part is much more advanced and deals with the problem of how to set up working Monte Carlo simulations of matrix field theories which involve finite dimensional matrix regularizations of noncommutative and fuzzy field theories, fuzzy spaces and matrix geometry. The study of matrix field theory in its own right has also become very important to the proper understanding of all noncommutative, fuzzy and matrix phenomena. The second part, which consists of 9 simulations, was delivered informally to doctoral students who are working on various problems in matrix field theory. Sample codes as well as sample key solutions are also provided for convenience and completeness. An appendix containing an executive Arabic summary of t...

  10. Nucleic acid polymeric properties and electrostatics: Directly comparing theory and simulation with experiment.

    Science.gov (United States)

    Sim, Adelene Y L

    2016-06-01

    Nucleic acids are biopolymers that carry genetic information and are also involved in various gene regulation functions such as gene silencing and protein translation. Because of their negatively charged backbones, nucleic acids are polyelectrolytes. To adequately understand nucleic acid folding and function, we need to properly describe its i) polymer/polyelectrolyte properties and ii) associating ion atmosphere. While various theories and simulation models have been developed to describe nucleic acids and the ions around them, many of these theories/simulations have not been well evaluated due to complexities in comparison with experiment. In this review, I discuss some recent experiments that have been strategically designed for straightforward comparison with theories and simulation models. Such data serve as excellent benchmarks to identify limitations in prevailing theories and simulation parameters. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Modelling of ballistic low energy ion solid interaction - conventional analytic theories versus computer simulations

    International Nuclear Information System (INIS)

    Littmark, U.

    1994-01-01

    The "philosophy" behind, and the "psychology" of, the development from analytic theory to computer simulations in the field of atomic collisions in solids are discussed, and a few examples of achievements and perspectives are given. (orig.)

  12. State of Theory and Computer Simulations of Radiation Effects in Ceramics

    International Nuclear Information System (INIS)

    Corrales, Louis R.; Weber, William J.

    2003-01-01

    This article presents opinions based on the presentations and discussions at a Workshop on Theory and Computer Simulations of Radiation Effects in Ceramics held in August 2002 at Pacific Northwest National Laboratory in Richland, WA, USA. The workshop was focused on the current state-of-the-art of theory, modeling and simulation of radiation effects in oxide ceramics, directions for future breakthroughs, and creating a close integration with experiment

  13. Pdap Manual

    DEFF Research Database (Denmark)

    Pedersen, Mads Mølgaard; Larsen, Torben J.

    Pdap, Python Data Analysis Program, is a program for post processing, analysis, visualization and presentation of data, e.g. simulation results and measurements. It is intended for, but not limited to, the domain of wind turbines. It combines an intuitive graphical user interface with Python scripting that allows automation and implementation of custom functions. This manual gives a short introduction to the graphical user interface, describes the mathematical background for some of the functions, describes the scripting API, and finally presents a few examples of how to automate analysis via scripting. The newest version, along with more documentation and help on how to use, extend and automate Pdap, can be found at the webpage www.hawc2.dk

  14. Microcanonical simulations in classical and quantum field theory

    International Nuclear Information System (INIS)

    Olson, D.P.

    1988-01-01

    In the first part of this thesis, a stochastic adaptation of the microcanonical simulation method is applied to the numerical simulation of the Su-Schrieffer-Heeger Hamiltonian for polyacetylene, a one-dimensional polymer where fermion-boson interactions play a dominant role in the dynamics of the system. The pure microcanonical simulation method fails in the marginally ergodic case and a stochastic adaptation, the hybrid microcanonical method, is employed to resolve problems with ergodicity. The hybrid method is shown to be an efficient method for higher dimensional fermionic quantum systems. In the second part of this thesis, a numerical simulation of the evolution of a network of global cosmic strings in an expanding Robertson-Walker universe is carried out. The system is quenched through an order-disorder phase transition and the nature of the string distribution is examined. While the string distribution observed at the phase transition is in good agreement with earlier estimates, the simulation reveals that the dynamics of the strings are suppressed by interactions with the Goldstone field. The network decays by topological annihilation and no spatial correlations are observed at any point in the simulation.

  15. A comparison between integral equation theory and molecular dynamics simulations of dense, flexible polymer liquids

    International Nuclear Information System (INIS)

    Curro, J.G.; Schweizer, K.S.; Grest, G.S.; Kremer, K.; Corporate Research Science Laboratory, Exxon Research and Engineering Company, Annandale, New Jersey 08801; Institut fur Festkorperforschung der Kernforschungsanlage Julich, D-5170 Julich, Federal Republic of Germany)

    1989-01-01

    Recently we (J.G.C. and K.S.S.) formulated a tractable "reference interaction site model" (RISM) integral equation theory of flexible polymer liquids. The purpose of this paper is to compare the results of the theory with recent molecular dynamics simulations (G.S.G. and K.K.) on dense chain liquids of degree of polymerization N=50 and 200. Specific comparisons were made between theory and simulation for the intramolecular structure factor ω(k) and the intermolecular radial distribution function g(r) in the liquid. In particular it was possible to independently test the assumptions inherent in the RISM theory and the additional ideality approximation that was made in the initial application of the theory. This comparison was accomplished by calculating the intermolecular g(r) using the simulated intramolecular structure factor, as well as ω(k) derived from a freely jointed chain model. The RISM theory results, using the simulated ω(k), were found to be in excellent agreement, over all length scales, with the g(r) from molecular dynamics simulations. The theoretical predictions using the "ideal" intramolecular structure factor tended to underestimate g(r) near contact, indicating local intramolecular expansion of the chains. This local expansion can be incorporated into the theory self-consistently by including the effects of the "medium-induced" potential on the intramolecular structure.

  16. Theory in social simulation: Status-Power theory, national culture and emergence of the glass ceiling

    NARCIS (Netherlands)

    Hofstede, G.J.

    2013-01-01

    This is a conceptual exploration of the work of some eminent social scientists thought to be amenable to agent-based modelling of social reality. Kemper’s status-power theory and Hofstede’s dimensions of national culture are the central theories. The article reviews empirical work on

  17. Novel hybrid optical correlator: theory and optical simulation.

    Science.gov (United States)

    Casasent, D; Herold, R L

    1975-02-01

    The inverse transform of the product of two Fourier transform holograms is analyzed and shown to contain the correlation of the two images from which the holograms were formed. The theory, analysis, and initial experimental demonstration of the feasibility of a novel correlation scheme using this multiplied Fourier transform hologram system are presented.

  18. Insights into Participants' Behaviours in Educational Games, Simulations and Workshops: A Catastrophe Theory Application to Motivation.

    Science.gov (United States)

    Cryer, Patricia

    1988-01-01

    Develops models for participants' behaviors in games, simulations, and workshops based on Catastrophe Theory and Herzberg's two-factor theory of motivation. Examples are given of how these models can be used, both for describing and understanding the behaviors of individuals, and for eliciting insights into why participants behave as they do. (11…

  19. Making Decisions about an Educational Game, Simulation or Workshop: A 'Game Theory' Perspective.

    Science.gov (United States)

    Cryer, Patricia

    1988-01-01

    Uses game theory to help practitioners make decisions about educational games, simulations, or workshops whose outcomes depend to some extent on chance. Highlights include principles for making decisions involving risk; elementary laws of probability; utility theory; and principles for making decisions involving uncertainty. (eight references)…

  20. New techniques and results for worldline simulations of lattice field theories

    Science.gov (United States)

    Giuliani, Mario; Orasch, Oliver; Gattringer, Christof

    2018-03-01

    We use the complex φ^4 field at finite density as a model system for developing further techniques based on worldline formulations of lattice field theories. More specifically we: 1) Discuss new variants of the worm algorithm for updating the φ^4 theory and related systems with site weights. 2) Explore the possibility of canonical simulations in the worldline formulation. 3) Study the connection of 2-particle condensation at low temperature to scattering parameters of the theory.

  1. Theory and simulations of electron vortices generated by magnetic pushing

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, A. S.; Angus, J. R.; Swanekamp, S. B.; Schumer, J. W. [Plasma Physics Division, Naval Research Laboratory, Washington, DC 20375 (United States); Ottinger, P. F. [An Independent Consultant through ENGILITY, Chantilly, Virginia 20151 (United States)

    2013-08-15

    Vortex formation and propagation are observed in kinetic particle-in-cell (PIC) simulations of magnetic pushing in the plasma opening switch. These vortices are studied here within the electron-magnetohydrodynamic (EMHD) approximation using detailed analytical modeling. PIC simulations of these vortices have also been performed. Strong v×B forces in the vortices give rise to significant charge separation, which necessitates the use of the EMHD approximation in which ions are fixed and the electrons are treated as a fluid. A semi-analytic model of the vortex structure is derived, and then used as an initial condition for PIC simulations. Density-gradient-dependent vortex propagation is then examined using a series of PIC simulations. It is found that the vortex propagation speed is proportional to the Hall speed v_Hall ≡ cB_0/(4πn_e eL_n). When ions are allowed to move, PIC simulations show that the electric field in the vortex can accelerate plasma ions, which leads to dissipation of the vortex. This electric field contributes to the separation of ion species that has been observed to occur in pulsed-power experiments with a plasma-opening switch.
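
    For orientation, the Hall speed quoted above can be evaluated directly in Gaussian-cgs units. The plasma parameters in this sketch are illustrative assumptions, not values taken from the paper.

        from math import pi

        # v_Hall = c * B0 / (4 * pi * n_e * e * L_n), Gaussian-cgs units.
        c   = 2.998e10    # speed of light [cm/s]
        e   = 4.803e-10   # elementary charge [statC]
        B0  = 1.0e4       # magnetic field [G] (~1 T), assumed
        n_e = 1.0e13      # electron density [cm^-3], assumed
        L_n = 1.0         # density-gradient scale length [cm], assumed

        v_hall = c * B0 / (4.0 * pi * n_e * e * L_n)
        print(f"v_Hall ~ {v_hall:.2e} cm/s")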

  2. Smart modeling and simulation for complex systems practice and theory

    CERN Document Server

    Ren, Fenghui; Zhang, Minjie; Ito, Takayuki; Tang, Xijin

    2015-01-01

    This book aims to provide a description of these new Artificial Intelligence technologies and approaches to the modeling and simulation of complex systems, as well as an overview of the latest scientific efforts in this field such as the platforms and/or the software tools for smart modeling and simulating complex systems. These tasks are difficult to accomplish using traditional computational approaches due to the complex relationships of components and distributed features of resources, as well as the dynamic work environments. In order to effectively model the complex systems, intelligent technologies such as multi-agent systems and smart grids are employed to model and simulate the complex systems in the areas of ecosystem, social and economic organization, web-based grid service, transportation systems, power systems and evacuation systems.

  3. A test of the embodied simulation theory of object perception: potentiation of responses to artifacts and animals.

    Science.gov (United States)

    Matheson, Heath E; White, Nicole C; McMullen, Patricia A

    2014-07-01

    Theories of embodied object representation predict a tight association between sensorimotor processes and visual processing of manipulable objects. Previous research has shown that object handles can 'potentiate' a manual response (i.e., button press) to a congruent location. This potentiation effect is taken as evidence that objects automatically evoke sensorimotor simulations in response to the visual presentation of manipulable objects. In the present series of experiments, we investigated a critical prediction of the theory of embodied object representations that potentiation effects should be observed with manipulable artifacts but not non-manipulable animals. In four experiments we show that (a) potentiation effects are observed with animals and artifacts; (b) potentiation effects depend on the absolute size of the objects and (c) task context influences the presence/absence of potentiation effects. We conclude that potentiation effects do not provide evidence for embodied object representations, but are suggestive of a more general stimulus-response compatibility effect that may depend on the distribution of attention to different object features.

  4. Plasma theory and simulation: Third and fourth quarterly progress report, July 1, 1986-December 31, 1986

    International Nuclear Information System (INIS)

    Birdsall, C.K.

    1986-01-01

    Our group uses theory and simulation as tools in order to increase the understanding of plasma instabilities, heating, transport, plasma-wall interactions, and large potentials in plasmas. We also work on the improvement of simulation both theoretically and practically. Two separate papers are included in this report

  5. Theory, simulation, and experimental studies of zonal flows

    International Nuclear Information System (INIS)

    Hahm, T. S.; Burrell, K.H.; Lin, Z.; Nazikian, R.; Synakowski, E.J.

    2000-01-01

    The authors report on current theoretical understanding of the characteristics of self-generated zonal flows as observed in nonlinear gyrokinetic simulations of toroidal ITG turbulence [Science 281, 1835 (1998)], and discuss various possibilities for experimental measurements of signature of zonal flows

  6. Digital Quantum Simulation of Z_{2} Lattice Gauge Theories with Dynamical Fermionic Matter.

    Science.gov (United States)

    Zohar, Erez; Farace, Alessandro; Reznik, Benni; Cirac, J Ignacio

    2017-02-17

    We propose a scheme for digital quantum simulation of lattice gauge theories with dynamical fermions. Using a layered optical lattice with ancilla atoms that can move and interact with the other atoms (simulating the physical degrees of freedom), we obtain a stroboscopic dynamics which yields the four-body plaquette interactions, arising in models with (2+1) and higher dimensions, without the use of perturbation theory. As an example we show how to simulate a Z_{2} model in (2+1) dimensions.

  7. Polymorphic phase transitions: Macroscopic theory and molecular simulation.

    Science.gov (United States)

    Anwar, Jamshed; Zahn, Dirk

    2017-08-01

    Transformations in the solid state are of considerable interest, both for fundamental reasons and because they underpin important technological applications. The interest spans a wide spectrum of disciplines and application domains. For pharmaceuticals, a common issue is unexpected polymorphic transformation of the drug or excipient during processing or on storage, which can result in product failure. A more ambitious goal is that of exploiting the advantages of metastable polymorphs (e.g. higher solubility and dissolution rate) while ensuring their stability with respect to solid state transformation. To address these issues and to advance technology, there is an urgent need for significant insights that can only come from a detailed molecular level understanding of the involved processes. Whilst experimental approaches at best yield time- and space-averaged structural information, molecular simulation offers unprecedented, time-resolved molecular-level resolution of the processes taking place. This review aims to provide a comprehensive and critical account of state-of-the-art methods for modelling polymorph stability and transitions between solid phases. This is flanked by revisiting the associated macroscopic theoretical framework for phase transitions, including their classification, proposed molecular mechanisms, and kinetics. The simulation methods are presented in tutorial form, focusing on their application to phase transition phenomena. We describe molecular simulation studies for crystal structure prediction and polymorph screening, phase coexistence and phase diagrams, simulations of crystal-crystal transitions of various types (displacive/martensitic, reconstructive and diffusive), effects of defects, and phase stability and transitions at the nanoscale. Our selection of literature is intended to illustrate significant insights, concepts and understanding, as well as the current scope of using molecular simulations for understanding polymorphic

  8. Thermodynamic and transport properties of nitrogen fluid: Molecular theory and computer simulations

    Science.gov (United States)

    Eskandari Nasrabad, A.; Laghaei, R.

    2018-04-01

    Computer simulations and various theories are applied to compute the thermodynamic and transport properties of nitrogen fluid. To model the nitrogen interaction, an existing potential in the literature is modified to obtain a close agreement between the simulation results and experimental data for the orthobaric densities. We use the Generic van der Waals theory to calculate the mean free volume and apply the results within the modified Cohen-Turnbull relation to obtain the self-diffusion coefficient. Compared to experimental data, excellent results are obtained via computer simulations for the orthobaric densities, the vapor pressure, the equation of state, and the shear viscosity. We analyze the results of the theory and computer simulations for the various thermophysical properties.

  9. Towards socio-material approaches in simulation-based education: lessons from complexity theory.

    Science.gov (United States)

    Fenwick, Tara; Dahlgren, Madeleine Abrandt

    2015-04-01

    Review studies of simulation-based education (SBE) consistently point out that theory-driven research is lacking. The literature to date is dominated by discourses of fidelity and authenticity - creating the 'real' - with a strong focus on the development of clinical procedural skills. Little of this writing incorporates the theory and research proliferating in professional studies more broadly, which show how professional learning is embodied, relational and situated in social-material relations. A key concern for medical educators is how to better prepare students for the unpredictable and dynamic ambiguity of professional practice; this has stimulated the movement towards socio-material theories in education that address precisely this question. Among the various socio-material theories that are informing new developments in professional education, complexity theory has been of particular importance for medical educators interested in updating current practices. This paper outlines key elements of complexity theory, illustrated with examples from empirical study, to argue its particular relevance for improving SBE. Complexity theory can make visible important material dynamics, and their problematic consequences, that are not often noticed in simulated experiences in medical training. It also offers conceptual tools that can be put to practical use. This paper focuses on concepts of emergence, attunement, disturbance and experimentation. These suggest useful new approaches for designing simulated settings and scenarios, and for effective pedagogies before, during and following simulation sessions. Socio-material approaches such as complexity theory are spreading through research and practice in many aspects of professional education across disciplines. Here, we argue for the transformative potential of complexity theory in medical education using simulation as our focus. Complexity tools open questions about the socio-material contradictions inherent in

  10. Repetitive control of an electrostatic microbridge actuator: theory and simulation

    International Nuclear Information System (INIS)

    Zhao, Haiyu; Rahn, Christopher D

    2010-01-01

    Electrostatic microactuators are used extensively in MEMS sensors, RF switches and microfluidic pumps. The high bandwidth operation required by these applications complicates the implementation of feedback controllers. This paper designs, proves stability and simulates a feedforward repetitive controller for an electrostatic microbridge. High residual stress creates tension in the microbridge that dominates bending stiffness so a pinned string model with uniform electrostatic force loading is used for model-based control. The control objective is to force the microbridge displacement to follow prescribed spatial and periodic time trajectories. Viscous damping ensures boundedness of the distributed transverse displacement in response to bounded inputs. The average displacement is measured by capacitive sensing and processed offline using a repetitive control algorithm that updates a high speed waveform generator's parameters. Simulations show that the performance depends on the amount of damping. With less than 1% damping in a representative microbridge structure, repetitive control reduces the midspan displacement overshoot by 83%

  11. Kinetic Theory and Simulation of Single-Channel Water Transport

    Science.gov (United States)

    Tajkhorshid, Emad; Zhu, Fangqiang; Schulten, Klaus

    Water translocation between various compartments of a system is a fundamental process in biology of all living cells and in a wide variety of technological problems. The process is of interest in different fields of physiology, physical chemistry, and physics, and many scientists have tried to describe the process through physical models. Owing to advances in computer simulation of molecular processes at an atomic level, water transport has been studied in a variety of molecular systems ranging from biological water channels to artificial nanotubes. While simulations have successfully described various kinetic aspects of water transport, offering a simple, unified model to describe trans-channel translocation of water turned out to be a nontrivial task.

  12. Defects and diffusion, theory and simulation an annual retrospective I

    CERN Document Server

    Fisher, David J

    2009-01-01

    This first volume, in a new series covering entirely general results in the fields of defects and diffusion, includes abstracts of papers which appeared between the beginning of 2008 and the end of October 2009 (journal availability permitting). This new series replaces the 'general' section which was previously part of each issue of the Metals, Ceramics and Semiconductor retrospective series. As well as 356 abstracts, the volume includes original papers on all of the usual material groups: "Predicting Diffusion Coefficients from First Principles via Eyring's Reaction Rate Theory" (Mantina, C

  13. Formation of Plasma Around a Small Meteoroid: Simulation and Theory

    Science.gov (United States)

    Sugar, G.; Oppenheim, M. M.; Dimant, Y. S.; Close, S.

    2018-05-01

    High-power large-aperture radars detect meteors by reflecting radio waves off dense plasma that surrounds a hypersonic meteoroid as it ablates in the Earth's atmosphere. If the plasma density profile around the meteoroid is known, the plasma's radar cross section can be used to estimate meteoroid properties such as mass, density, and composition. This paper presents head echo plasma density distributions obtained via two numerical simulations of a small ablating meteoroid and compares the results to an analytical solution found in Dimant and Oppenheim (2017a, https://doi.org/10.1002/2017JA023960, 2017b, https://doi.org/10.1002/2017JA023963). The first simulation allows ablated meteoroid particles to experience only a single collision to match an assumption in the analytical solution, while the second is a more realistic simulation by allowing multiple collisions. The simulation and analytical results exhibit similar plasma density distributions. At distances much less than λT, the average distance an ablated particle travels from the meteoroid before a collision with an atmospheric particle, the plasma density falls off as 1/R, where R is the distance from the meteoroid center. At distances substantially greater than λT, the plasma density profile has an angular dependence, falling off as 1/R^2 directly behind the meteoroid, 1/R^3 in a plane perpendicular to the meteoroid's path that contains the meteoroid center, and exp[-1.5(R/λT)^(2/3)]/R in front of the meteoroid. When used for calculating meteoroid masses, this new plasma density model can give masses that are orders of magnitude different than masses calculated from a spherically symmetric Gaussian distribution, which has been used to calculate masses in the past.

  14. Z{sub c}(3900): confronting theory and lattice simulations

    Energy Technology Data Exchange (ETDEWEB)

    Albaladejo, Miguel; Fernandez-Soler, Pedro; Nieves, Juan [Instituto de Fisica Corpuscular (IFIC), Centro Mixto CSIC-Universidad de Valencia, Institutos de Investigacion de Paterna, Valencia (Spain)

    2016-10-15

    We consider a recent T-matrix analysis by Albaladejo et al. (Phys Lett B 755:337, 2016), which accounts for the J/ψπ and D{sup *} anti D coupled-channels dynamics, and which successfully describes the experimental information concerning the recently discovered Z{sub c}(3900){sup ±}. Within such a scheme, the data can be similarly well described in two different scenarios, where Z{sub c}(3900) is either a resonance or a virtual state. To shed light on the nature of this state, we apply this formalism in a finite box with the aim of comparing with recent Lattice QCD (LQCD) simulations. We see that the energy levels obtained for both scenarios agree well with those obtained in the single-volume LQCD simulation reported in Prelovsek et al. (Phys Rev D 91:014504, 2015), thus making it difficult to disentangle the two possibilities. We also study the volume dependence of the energy levels obtained with our formalism and suggest that LQCD simulations performed at several volumes could help in discerning the actual nature of the intriguing Z{sub c}(3900) state. (orig.)

  15. A Simulational approach to teaching statistical mechanics and kinetic theory

    International Nuclear Information System (INIS)

    Karabulut, H.

    2005-01-01

    A computer simulation demonstrating how the Maxwell-Boltzmann distribution is reached in gases from a nonequilibrium distribution is presented. The algorithm can be generalized to the cases of gas particles (atoms or molecules) with internal degrees of freedom such as electronic excitations and vibrational-rotational energy levels. Another generalization of the algorithm is the case of a mixture of two different gases. By choosing the collision cross sections properly one can create quasi-equilibrium distributions. For example, by making the same-atom cross sections large and the different-atom cross sections very small, one can create a mixture of two gases with different temperatures, where the two gases slowly interact and come to equilibrium only after a long time. Similarly, for the case of one kind of atom with internal degrees of freedom, one can create situations in which the internal degrees of freedom come to equilibrium much later than the translational degrees of freedom. In all these cases the equilibrium distribution that the algorithm gives is the same as expected from statistical mechanics. The algorithm can also be extended to cover the case of chemical equilibrium, where species A and B react to form AB molecules. The laws of chemical equilibrium can be observed in this simulation. The chemical equilibrium simulation can also help to teach the elusive concept of chemical potential
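
    The relaxation described above can be illustrated with a toy pairwise energy-exchange model: start all particles with the same energy and let random binary collisions repartition it, which drives the energy distribution toward the Boltzmann form (and hence the speeds toward the Maxwell-Boltzmann form). This is a minimal stand-in written for this summary, not the author's classroom program; the collision rule (uniform random splitting of the pair energy) is an assumption chosen for simplicity.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def relax_toward_boltzmann(n_particles=10_000, n_collisions=200_000, e0=1.0):
        """Toy collision model: random pairs exchange energy while conserving it.
        The single-particle energy distribution relaxes toward an exponential
        (Boltzmann) form, illustrating the approach to equilibrium."""
        energies = np.full(n_particles, e0)
        for _ in range(n_collisions):
            i, j = rng.integers(n_particles, size=2)
            if i == j:
                continue
            total = energies[i] + energies[j]
            share = rng.random() * total            # energy-conserving random split
            energies[i], energies[j] = share, total - share
        return energies

    e = relax_toward_boltzmann()
    print("mean energy:", e.mean())                 # conserved at e0
    print("fraction below e0:", (e < 1.0).mean())   # ~0.63 for an exponential distribution
    ```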

  16. Shifting from manual to automatic gear when growing old : Good advice? Results from a driving simulator study

    NARCIS (Netherlands)

    Piersma, Dafne; de Waard, Dick; de Waard, D; Brookhuis, K; Wiczorek, R; di Nocera, F; Brouwer, R; Barham, P; Weikert, C; Kluge, A; Gerbino, W; Toffetti, A

    2014-01-01

    Older people may be advised to switch from manual to automatic gear shifting, because they may have difficulties with dividing their attention between gear shifting and other driving tasks such as perceiving other traffic participants. The question is whether older drivers show a better driving

  17. User’s Manual for the Simulation of Energy Consumption and Emissions from Rail Traffic Software Package

    DEFF Research Database (Denmark)

    Cordiero, Tiago M.; Lindgreen, Erik Bjørn Grønning; Sorenson, Spencer C

    2005-01-01

    The ARTEMIS rail emissions model was implemented in a Microsoft Excel software package that includes data from the GISCO database on railway traffic. This report is the user's manual for the aforementioned software; it includes information on how to run the program and an overview of how it is built up from Excel Macros (Visual Basic) and database sheets included in one Excel file.

  18. Understanding the persistence of measles: reconciling theory, simulation and observation.

    Science.gov (United States)

    Keeling, Matt J; Grenfell, Bryan T

    2002-01-01

    Ever since the pattern of localized extinction associated with measles was discovered by Bartlett in 1957, many models have been developed in an attempt to reproduce this phenomenon. Recently, the use of constant infectious and incubation periods, rather than the more convenient exponential forms, has been presented as a simple means of obtaining realistic persistence levels. However, this result appears at odds with rigorous mathematical theory; here we reconcile these differences. Using a deterministic approach, we parameterize a variety of models to fit the observed biennial attractor, thus determining the level of seasonality by the choice of model. We can then compare fairly the persistence of the stochastic versions of these models, using the 'best-fit' parameters. Finally, we consider the differences between the observed fade-out pattern and the more theoretically appealing 'first passage time'. PMID:11886620
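
    The modelling choice at issue is the distribution of the infectious (and incubation) period: an exponentially distributed period is mathematically convenient, whereas a constant period with the same mean has far smaller variance, which is what alters the predicted persistence. A minimal sketch of the two choices (illustrative only, not the authors' model):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    mean_period = 5.0   # days (illustrative value)
    n = 100_000

    exponential_periods = rng.exponential(mean_period, size=n)  # convenient choice
    constant_periods = np.full(n, mean_period)                  # realistic-persistence choice

    # Same mean, very different spread in the time spent infectious:
    print(exponential_periods.mean(), exponential_periods.std())
    print(constant_periods.mean(), constant_periods.std())
    ```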

  19. Evaluating clinical simulations for learning procedural skills: a theory-based approach.

    Science.gov (United States)

    Kneebone, Roger

    2005-06-01

    Simulation-based learning is becoming widely established within medical education. It offers obvious benefits to novices learning invasive procedural skills, especially in a climate of decreasing clinical exposure. However, simulations are often accepted uncritically, with undue emphasis being placed on technological sophistication at the expense of theory-based design. The author proposes four key areas that underpin simulation-based learning, and summarizes the theoretical grounding for each. These are (1) gaining technical proficiency (psychomotor skills and learning theory, the importance of repeated practice and regular reinforcement), (2) the place of expert assistance (a Vygotskian interpretation of tutor support, where assistance is tailored to each learner's needs), (3) learning within a professional context (situated learning and contemporary apprenticeship theory), and (4) the affective component of learning (the effect of emotion on learning). The author then offers four criteria for critically evaluating new or existing simulations, based on the theoretical framework outlined above. These are: (1) Simulations should allow for sustained, deliberate practice within a safe environment, ensuring that recently-acquired skills are consolidated within a defined curriculum which assures regular reinforcement; (2) simulations should provide access to expert tutors when appropriate, ensuring that such support fades when no longer needed; (3) simulations should map onto real-life clinical experience, ensuring that learning supports the experience gained within communities of actual practice; and (4) simulation-based learning environments should provide a supportive, motivational, and learner-centered milieu which is conducive to learning.

  20. Decentralized adaptive control of manipulators - Theory, simulation, and experimentation

    Science.gov (United States)

    Seraji, Homayoun

    1989-01-01

    The author presents a simple decentralized adaptive-control scheme for multijoint robot manipulators based on the independent joint control concept. The control objective is to achieve accurate tracking of desired joint trajectories. The proposed control scheme does not use the complex manipulator dynamic model, and each joint is controlled simply by a PID (proportional-integral-derivative) feedback controller and a position-velocity-acceleration feedforward controller, both with adjustable gains. Simulation results are given for a two-link direct-drive manipulator under adaptive independent joint control. The results illustrate trajectory tracking under coupled dynamics and varying payload. The proposed scheme is implemented on a MicroVAX II computer for motion control of the three major joints of a PUMA 560 arm. Experimental results are presented to demonstrate that trajectory tracking is achieved despite coupled nonlinear joint dynamics.
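
    As a rough illustration of the controller structure described above (PID feedback plus position-velocity-acceleration feedforward, all with adjustable gains), a single-joint sketch follows. The gain-adaptation rule used here is a simplified gradient-type update introduced only for illustration; it is not Seraji's exact adaptation law, and the class name and numerical values are assumptions.

    ```python
    import numpy as np

    class AdaptiveJointController:
        """One joint: u = PID feedback on the tracking error plus a
        position-velocity-acceleration feedforward term, with all gains
        adjusted online (simplified illustrative update, not Seraji's law)."""

        def __init__(self, dt, gamma=0.01):
            self.dt = dt
            self.gamma = gamma                  # adaptation rate (assumed)
            self.kp = self.ki = self.kd = 0.0   # PID feedback gains
            self.c = np.zeros(3)                # feedforward gains
            self.int_e = 0.0
            self.prev_e = 0.0

        def update(self, q, q_des, qd_des, qdd_des):
            e = q_des - q                       # position tracking error
            de = (e - self.prev_e) / self.dt
            self.int_e += e * self.dt
            self.prev_e = e

            r = e + 0.5 * de                    # weighted error drives the adaptation
            self.kp += self.gamma * r * e * self.dt
            self.ki += self.gamma * r * self.int_e * self.dt
            self.kd += self.gamma * r * de * self.dt
            self.c += self.gamma * r * np.array([q_des, qd_des, qdd_des]) * self.dt

            feedback = self.kp * e + self.ki * self.int_e + self.kd * de
            feedforward = self.c @ np.array([q_des, qd_des, qdd_des])
            return feedback + feedforward       # joint torque command

    ctrl = AdaptiveJointController(dt=0.001)
    print(ctrl.update(q=0.0, q_des=0.1, qd_des=0.0, qdd_des=0.0))
    ```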

  1. Theory-based transport simulation of tokamaks: density scaling

    International Nuclear Information System (INIS)

    Ghanem, E.S.; Kinsey, J.; Singer, C.; Bateman, G.

    1992-01-01

    There has been a sizeable amount of work in the past few years using theoretically based flux-surface-average transport models to simulate various types of experimental tokamak data. Here we report two such studies, concentrating on the response of the plasma to variation of the line averaged electron density. The first study reported here uses a transport model described by Ghanem et al. to examine the response of global energy confinement time in ohmically heated discharges. The second study reported here uses a closely related and more recent transport model described by Bateman to examine the response of temperature profiles to changes in line-average density in neutral-beam-heated discharges. Work on developing a common theoretical model for these and other scaling experiments is in progress. (author) 5 refs., 2 figs

  2. Nonstationary signals phase-energy approach-theory and simulations

    CERN Document Server

    Klein, R; Braun, S; 10.1006/mssp.2001.1398

    2001-01-01

    Modern time-frequency methods are intended to deal with a variety of nonstationary signals. One specific class, prevalent in the area of rotating machines, is that of harmonic signals of varying frequencies and amplitude. This paper presents a new adaptive phase-energy (APE) approach for time-frequency representation of varying harmonic signals. It is based on the concept of phase (frequency) paths and the instantaneous power spectral density (PSD). It is this path which represents the dynamic behaviour of the system generating the observed signal. The proposed method utilises dynamic filters based on an extended Nyquist theorem, enabling extraction of signal components with optimal signal-to-noise ratio. The APE detects the most energetic harmonic components (frequency paths) in the analysed signal. Tests on simulated signals show the superiority of the APE in resolution and resolving power as compared to STFT and wavelet wave-packet decomposition. The dynamic filters also enable the reconstruction of the ...

  3. Simulation of circularly polarized luminescence spectra using coupled cluster theory

    Energy Technology Data Exchange (ETDEWEB)

    McAlexander, Harley R.; Crawford, T. Daniel, E-mail: crawdad@vt.edu [Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24061 (United States)

    2015-04-21

    We report the first computations of circularly polarized luminescence (CPL) rotatory strengths at the equation-of-motion coupled cluster singles and doubles (EOM-CCSD) level of theory. Using a test set of eight chiral ketones, we compare both dipole and rotatory strengths for absorption (electronic circular dichroism) and emission to the results from time-dependent density-functional theory (TD-DFT) and available experimental data for both valence and Rydberg transitions. For two of the compounds, we obtained optimized geometries of the lowest several excited states using both EOM-CCSD and TD-DFT and determined that structures and EOM-CCSD transition properties obtained with each structure were sufficiently similar that TD-DFT optimizations were acceptable for the remaining test cases. Agreement between EOM-CCSD and the Becke three-parameter exchange functional and Lee-Yang-Parr correlation functional (B3LYP) corrected using the Coulomb attenuating method (CAM-B3LYP) is typically good for most of the transitions, though agreement with the uncorrected B3LYP functional is significantly worse for all reported properties. The choice of length vs. velocity representation of the electric dipole operator has little impact on the EOM-CCSD transition strengths for nearly all of the states we examined. For a pair of closely related β, γ-enones, (1R)-7-methylenebicyclo[2.2.1]heptan-2-one and (1S)-2-methylenebicyclo[2.2.1]heptan-7-one, we find that EOM-CCSD and CAM-B3LYP agree with the energetic ordering of the two possible excited-state conformations, resulting in good agreement with experimental rotatory strengths in both absorption and emission, whereas B3LYP yields a qualitatively incorrect result for the CPL signal of (1S)-2-methylenebicyclo[2.2.1]heptan-7-one. Finally, we predict that one of the compounds considered here, trans-bicyclo[3.3.0]octane-3,7-dione, is unique in that it exhibits an achiral ground state and a chiral first excited state, leading to a strong CPL ...

  4. Theory and simulation of ion noise in microwave tubes

    Science.gov (United States)

    Manheimer, W. M.; Freund, H. P.; Levush, B.; Antonsen, T. M.

    2001-01-01

    Since there is always some ambient gas in electron beam devices, background ionization is ubiquitous. For long pulse times, the electrostatic potentials associated with this ionization can reach significant levels and give rise to such observed phenomena as phase noise in microwave tubes. This noise is usually associated with the motion of ions in the device; therefore, it is called ion noise. It often manifests itself as a slow phase fluctuation on the output signal. Observations of noise in microwave tubes such as coupled-cavity traveling wave tubes (CC-TWTs) and klystrons have been discussed in the literature. In this paper, a hybrid model is discussed in which the electron beam is described by the beam envelope equation, and the ions generated by beam ionization are treated as discrete particles using the one-dimensional equations of motion. The theoretical model provides good qualitative as well as reasonable quantitative insight into the origin of ion noise phenomena. The numerical results indicate that the model reproduces the salient features of the phase oscillations observed experimentally. That is, the scaling of the frequency of the phase oscillations with gas pressure in the device and the sensitive dependence of the phase oscillations on the focusing magnetic field. Two distinct time scales are observed in simulation. The fastest time scale oscillation is related to the bounce motion of ions in the axial potential wells formed by the scalloping of the electron beam. Slower sawtooth oscillations are observed to correlate with the well-to-well interactions induced by the ion coupling to the electron equilibrium. These oscillations are also correlated with ion dumping to the cathode or collector. As a practical matter, simulations indicate that the low frequency oscillations can be reduced significantly by using a well-matched electron beam propagating from the electron gun into the interaction circuit.

  5. Theory and simulation of ion noise in microwave tubes

    International Nuclear Information System (INIS)

    Manheimer, W.M.; Freund, H.P.; Levush, B.; Antonsen, T.M. Jr.

    2001-01-01

    Since there is always some ambient gas in electron beam devices, background ionization is ubiquitous. For long pulse times, the electrostatic potentials associated with this ionization can reach significant levels and give rise to such observed phenomena as phase noise in microwave tubes. This noise is usually associated with the motion of ions in the device; therefore, it is called ion noise. It often manifests itself as a slow phase fluctuation on the output signal. Observations of noise in microwave tubes such as coupled-cavity traveling wave tubes (CC-TWTs) and klystrons have been discussed in the literature. In this paper, a hybrid model is discussed in which the electron beam is described by the beam envelope equation, and the ions generated by beam ionization are treated as discrete particles using the one-dimensional equations of motion. The theoretical model provides good qualitative as well as reasonable quantitative insight into the origin of ion noise phenomena. The numerical results indicate that the model reproduces the salient features of the phase oscillations observed experimentally. That is, the scaling of the frequency of the phase oscillations with gas pressure in the device and the sensitive dependence of the phase oscillations on the focusing magnetic field. Two distinct time scales are observed in simulation. The fastest time scale oscillation is related to the bounce motion of ions in the axial potential wells formed by the scalloping of the electron beam. Slower sawtooth oscillations are observed to correlate with the well-to-well interactions induced by the ion coupling to the electron equilibrium. These oscillations are also correlated with ion dumping to the cathode or collector. As a practical matter, simulations indicate that the low frequency oscillations can be reduced significantly by using a well-matched electron beam propagating from the electron gun into the interaction circuit

  6. Stochastic Processes and Queueing Theory used in Cloud Computer Performance Simulations

    Directory of Open Access Journals (Sweden)

    Florin-Catalin ENACHE

    2015-10-01

    The cloud business has grown exponentially over the last 5 years. Capacity managers need to concentrate on a practical way to simulate the random demands a cloud infrastructure could face, even though there are not many mathematical tools for simulating such demands. This paper presents an introduction to the most important stochastic processes and queueing theory concepts used for modeling computer performance. Moreover, it shows the cases where such concepts are applicable and where they are not, using clear programming examples of how to simulate a queue, and of how to use and validate a simulation when there are no mathematical concepts to back it up.
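
    As an example of the kind of queue simulation referred to above, the sketch below runs a discrete-event M/M/1 queue and checks the simulated mean waiting time against the closed-form result Wq = rho/(mu - lambda), with rho = lambda/mu. It is written for this summary and is not code from the paper.

    ```python
    import random

    def simulate_mm1(arrival_rate, service_rate, n_customers=100_000, seed=0):
        """Single-server queue with Poisson arrivals and exponential service.
        Returns the mean time customers spend waiting in the queue."""
        rng = random.Random(seed)
        t_arrival = 0.0
        server_free_at = 0.0
        total_wait = 0.0
        for _ in range(n_customers):
            t_arrival += rng.expovariate(arrival_rate)       # next arrival time
            start_service = max(t_arrival, server_free_at)   # wait if the server is busy
            total_wait += start_service - t_arrival
            server_free_at = start_service + rng.expovariate(service_rate)
        return total_wait / n_customers

    lam, mu = 0.8, 1.0
    print("simulated Wq:", simulate_mm1(lam, mu))
    print("analytic  Wq:", (lam / mu) / (mu - lam))           # = 4.0 for these rates
    ```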

  7. Statistical mechanics of dense plasmas: numerical simulation and theory

    International Nuclear Information System (INIS)

    DeWitt, H.E.

    1977-10-01

    Recent Monte Carlo calculations from Paris and from Livermore for dense one and two component plasmas have led to systematic and accurate results for the thermodynamic properties of dense Coulombic fluids. This talk will summarize the results of these numerical experiments, and the simple analytic expressions for the equation of state and other thermodynamic functions that have been obtained. The thermal energy for the one component plasma has a simple power law dependence on temperature that is identical to Monte Carlo results on strongly coupled fluids governed by 1/r^n potentials. A universal model for fluids governed by simple repulsive forces is suggested. For two component plasmas the ion-sphere model is shown to accurately reproduce the Monte Carlo data for the static portion of the energy. Electron screening is included using the Lindhard dielectric function and linear response theory. Free energy expressions have been constructed for one and two component plasmas that allow easy computation of all thermodynamic functions

  8. Manual Therapy

    OpenAIRE

    Hakgüder, Aral; Kokino, Siranuş

    2002-01-01

    Manual therapy has been used in the treatment of pain and dysfunction of spinal and peripheral joints for more than a hundred years. Manual medicine includes manipulation, mobilization, and postisometric relaxation techniques. The aim of manual therapy is to enhance movement restricted by joint blockage, keep postural balance, restore function, and maintain optimal body mechanics. Anatomic, biomechanical, and neurophysiological evaluations of the locomotor system are essential for ...

  9. A molecular dynamics algorithm for simulation of field theories in the canonical ensemble

    International Nuclear Information System (INIS)

    Kogut, J.B.; Sinclair, D.K.

    1986-01-01

    We add a single scalar degree of freedom ("demon") to the microcanonical ensemble which converts its molecular dynamics into a simulation method for the canonical ensemble (euclidean path integral) of the underlying field theory. This generalization of the microcanonical molecular dynamics algorithm simulates the field theory at fixed coupling with a completely deterministic procedure. We discuss the finite size effects of the method, the equipartition theorem and ergodicity. The method is applied to the planar model in two dimensions and SU(3) lattice gauge theory with four species of light, dynamical quarks in four dimensions. The method is much less sensitive to its discrete time step than conventional Langevin equation simulations of the canonical ensemble. The method is a straightforward generalization of a procedure introduced by S. Nose for molecular physics. (orig.)
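
    The mechanism of adding one thermostat degree of freedom is most easily seen in a toy setting. The sketch below applies a Nose-Hoover-style thermostat to a one-dimensional harmonic oscillator: a single scalar variable xi plays the role of the "demon", pumping kinetic energy in or out so that the time-averaged p^2 is driven toward the target temperature. It only illustrates the idea; it is not the lattice-field-theory implementation of the paper, and the integrator and parameter values are assumptions chosen for brevity.

    ```python
    import numpy as np

    def nose_hoover_oscillator(T=1.0, Q=1.0, dt=0.001, n_steps=200_000):
        """Deterministic thermostatted dynamics for a 1D harmonic oscillator
        (units with k_B = m = 1 assumed). The extra variable xi acts as a
        friction coefficient whose own dynamics push <p^2> toward T."""
        x, p, xi = 1.0, 0.0, 0.0
        kinetic = 0.0
        for _ in range(n_steps):
            f = -x                          # harmonic force
            p += (f - xi * p) * dt          # friction set by the thermostat variable
            x += p * dt
            xi += (p * p - T) / Q * dt      # the 'demon' responds to excess kinetic energy
            kinetic += 0.5 * p * p
        return kinetic / n_steps

    # time-averaged kinetic energy is driven toward T/2
    print(nose_hoover_oscillator())
    ```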

  10. Liquid-Vapor Phase Transition: Thermomechanical Theory, Entropy Stable Numerical Formulation, and Boiling Simulations

    Science.gov (United States)

    2015-05-01

    ... vapor bubbles may be generated near blades [40]. This is the phenomenon of cavitation, and it is still a limiting factor for ship propeller design. ... van der Waals theory with hydrodynamics [39]. The fluid equations based on the van der Waals theory are called the Navier-Stokes-Korteweg equations. ... cavitating flows, the liquid-vapor phase transition induced by pressure variations. A potential challenge for such a simulation is a proper design of open ...

  11. SU(2) lattice gauge theory simulations on Fermi GPUs

    International Nuclear Information System (INIS)

    Cardoso, Nuno; Bicudo, Pedro

    2011-01-01

    In this work we explore the performance of CUDA in quenched lattice SU(2) simulations. CUDA, NVIDIA Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. We present an analysis and performance comparison between the GPU and CPU in single and double precision. Analyses with multiple GPUs and two different architectures (G200 and Fermi architectures) are also presented. In order to obtain a high performance, the code must be optimized for the GPU architecture, i.e., an implementation that exploits the memory hierarchy of the CUDA programming model. We produce codes for the Monte Carlo generation of SU(2) lattice gauge configurations, for the mean plaquette, for the Polyakov Loop at finite T and for the Wilson loop. We also present results for the potential using many configurations (50,000) without smearing and almost 2000 configurations with APE smearing. With two Fermi GPUs we have achieved excellent performance, around 110 Gflops/s in single precision, roughly 200x the speed of one CPU. We also find that, using the Fermi architecture, double precision computations for the static quark-antiquark potential are not much slower (less than 2x slower) than single precision computations.

  12. Canonical simulations with worldlines: An exploratory study in ϕ⁴₂ lattice field theory

    Science.gov (United States)

    Orasch, Oliver; Gattringer, Christof

    2018-01-01

    In this paper, we explore the perspectives for canonical simulations in the worldline formulation of a lattice field theory. Using the charged ϕ⁴ field in two dimensions as an example, we present the details of the canonical formulation based on worldlines and outline the algorithmic strategies for canonical worldline simulations. We discuss the steps for converting the data from the canonical approach to the grand canonical picture which we use for cross-checking our results. The canonical approach presented here can easily be generalized to other lattice field theories with a worldline representation.

  13. Polymer deformation in Brownian ratchets: theory and molecular dynamics simulations.

    Science.gov (United States)

    Kenward, Martin; Slater, Gary W

    2008-11-01

    We examine polymers in the presence of an applied asymmetric sawtooth (ratchet) potential which is periodically switched on and off, using molecular dynamics (MD) simulations with an explicit Lennard-Jones solvent. We show that the distribution of the center of mass for a polymer in a ratchet is relatively wide for potential well depths U0 on the order of several kBT. The application of the ratchet potential also deforms the polymer chains. With increasing U0 the Flory exponent varies from that for a free three-dimensional (3D) chain, ν = 3/5 (U0 = 0), to that corresponding to a 2D compressed (pancake-shaped) polymer with a value of ν = 3/4 for moderate U0. This has the added effect of decreasing a polymer's diffusion coefficient from its 3D value D3D to that of a pancake-shaped polymer moving parallel to its minor axis, D2D. The result is that a polymer then has a time-dependent diffusion coefficient D(t) during the ratchet off time. We further show that this suggests a different method to operate a ratchet, where the off time of the ratchet, t_off, is defined in terms of the relaxation time of the polymer, τ_R. We also derive a modified version of the Bader ratchet model [Bader, Proc. Natl. Acad. Sci. U.S.A. 96, 13165 (1999)] which accounts for this deformation and we present a simple expression to describe the time-dependent diffusion coefficient D(t). Using this model we then illustrate that polymer deformation can be used to modulate polymer migration in a ratchet potential.

  14. Application of renormalization group theory to the large-eddy simulation of transitional boundary layers

    Science.gov (United States)

    Piomelli, Ugo; Zang, Thomas A.; Speziale, Charles G.; Lund, Thomas S.

    1990-01-01

    An eddy viscosity model based on the renormalization group theory of Yakhot and Orszag (1986) is applied to the large-eddy simulation of transition in a flat-plate boundary layer. The simulation predicts with satisfactory accuracy the mean velocity and Reynolds stress profiles, as well as the development of the important scales of motion. The evolution of the structures characteristic of the nonlinear stages of transition is also predicted reasonably well.

  15. APPLICATION OF QUEUING THEORY TO AUTOMATED TELLER MACHINE (ATM) FACILITIES USING MONTE CARLO SIMULATION

    OpenAIRE

    UDOANYA RAYMOND MANUEL; ANIEKAN OFFIONG

    2014-01-01

    This paper presents the importance of applying queuing theory to Automated Teller Machine (ATM) facilities using Monte Carlo Simulation in order to determine, control and manage the level of queuing congestion found within ATM centres in Nigeria. It also contains an empirical analysis of the queuing data obtained at ATMs located within bank premises over a period of three (3) months. Monte Carlo Simulation is applied to th...

  16. Coulometer operator's manual

    International Nuclear Information System (INIS)

    Criscuolo, A.L.

    1977-07-01

    The coulometer control system automates the titration of uranium and plutonium as performed by the CMB-1 group. The system consists of a printer, microcontroller, and coulometer, all of which are controlled by an algorithm stored in the microcontroller read-only memory. This manual describes the titration procedure using the coulometer control system, its theory and maintenance

  17. Theory and Simulations of ELM Control with a Snowflake Divertor

    Energy Technology Data Exchange (ETDEWEB)

    Ryutov, D.; Cohen, B.; Cohen, R.; Makowski, M. A.; Menard, J.; Rognlien, T.; Soukhanovskii, V.; Umansky, M.; Xu, X., E-mail: ryutov1@llnl.gov [Lawrence Livermore National Laboratory, Livermore (United States); Kolemen, E. [Princeton Plasma Physics Laboratory, Princeton (United States)

    2012-09-15

    Full text: This paper is concerned with the use of a snowflake (SF) divertor for the control and mitigation of edge localized modes (ELMs). Our research is focused on the following three issues: 1. Effect of the SF geometry on neoclassical ion orbits near the separatrix, including prompt ion losses and the related control mechanism for the electric field and plasma flow in the pedestal; 2. Influence of the thereby modified flow and of high poloidal plasma beta in the divertor region on plasma turbulence and transport in the snowflake-plus geometry; 3. Reaction of the SF divertor to type-1 ELM events. Neoclassical ion orbits in the vicinity of the SF separatrix are changed due to a much weaker poloidal field near the null and much longer particle dwell-time in this area. This leads to an increase of the prompt ion loss, which then affects the radial electric field profile near the separatrix. The resulting E x B flow shear in the pedestal region affects the onset of ELMs. The electric field and velocity shear are then used as a background for two-fluid simulations of the edge plasma turbulence in a realistic geometry with the 3D BOUT code. A SF-plus geometry is chosen, so that the separatrix topology remains the same as for the standard X-point divertor, whereas the magnetic shear both inside and outside the separatrix increases dramatically. It is found that mesoscale instabilities are suppressed when the geometry is close to a perfect SF. In situations where complete suppression of ELMs is impossible, the SF divertor offers a path to reducing heat loads during ELM events to an acceptable level. Two effects, both related to the weakness of the poloidal field near the SF null, act synergistically in the same favorable direction. The first is the onset of strong, curvature-driven convection in the divertor, triggered by the increase of the poloidal pressure during the ELM and leading to the splitting of the heat flux between all four (as is the case in a SF geometry

  18. Why do drivers maintain short headways in fog? A driving-simulator study evaluating feeling of risk and lateral control during automated and manual car following.

    Science.gov (United States)

    Saffarian, M; Happee, R; Winter, J C F de

    2012-01-01

    Drivers in fog tend to maintain short headways, but the reasons behind this phenomenon are not well understood. This study evaluated the effect of headway on lateral control and feeling of risk in both foggy and clear conditions. Twenty-seven participants completed four sessions in a driving simulator: clear automated (CA), clear manual (CM), fog automated (FA) and fog manual (FM). In CM and FM, the drivers used the steering wheel, throttle and brake pedals. In CA and FA, a controller regulated the distance to the lead car, and the driver only had to steer. Drivers indicated how much risk they felt on a touchscreen. Consistent with our hypothesis, feeling of risk and steering activity were elevated when the lead car was not visible. These results might explain why drivers adopt short headways in fog. Practitioner Summary: Fog poses a serious road safety hazard. Our driving-simulator study provides the first experimental evidence to explain the role of risk-feeling and lateral control in headway reduction. These results are valuable for devising effective driver assistance and support systems.

  19. Structures manual

    Science.gov (United States)

    2001-01-01

    This manual was written as a guide for use by design personnel in the Vermont Agency of Transportation Structures Section. This manual covers the design responsibilities of the Section. It does not cover other functions that are a part of the Structu...

  20. Quality Manual

    Science.gov (United States)

    Koch, Michael

    The quality manual is the “heart” of every management system related to quality. Quality assurance in analytical laboratories is most frequently linked with ISO/IEC 17025, which lists the standard requirements for a quality manual. In this chapter examples are used to demonstrate how these requirements can be met. But, certainly, there are many other ways to do this.

  1. GENOVA: a generalized perturbation theory program for various applications to CANDU core physics analysis (II) - a user's manual

    International Nuclear Information System (INIS)

    Kim, Do Heon; Choi, Hang Bok

    2001-03-01

    A user's guide for GENOVA, a GENeralized perturbation theory (GPT)-based Optimization and uncertainty analysis program for Canada deuterium uranium (CANDU) physics VAriables, was prepared. The program was developed under the framework of the CANDU physics design and analysis code RFSP. The generalized perturbation method was implemented in GENOVA to estimate the zone controller unit (ZCU) level upon refueling operation and to calculate various sensitivity coefficients for fuel management studies and uncertainty analyses. This documentation contains descriptions of, and directions for, the four major modules of GENOVA (ADJOINT, GADJINT, PERTURB, and PERTXS) so that it can be used as a practical guide for GENOVA users. The documentation includes sample inputs for the ZCU level estimation and sensitivity coefficient calculation, which are the main applications of GENOVA. GENOVA can be used as a supplementary tool of the current CANDU physics design code for advanced CANDU core analysis and fuel development

  2. Testing the applicability of Nernst-Planck theory in ion channels: comparisons with Brownian dynamics simulations.

    Directory of Open Access Journals (Sweden)

    Chen Song

    The macroscopic Nernst-Planck (NP) theory has often been used for predicting ion channel currents in recent years, but the validity of this theory at the microscopic scale has not been tested. In this study we systematically tested the ability of the NP theory to accurately predict channel currents by combining and comparing the results with those of Brownian dynamics (BD) simulations. To thoroughly test the theory in a range of situations, calculations were made in a series of simplified cylindrical channels with radii ranging from 3 to 15 Å, in a more complex 'catenary' channel, and in a realistic model of the mechanosensitive channel MscS. The extensive tests indicate that the NP equation is applicable in narrow ion channels provided that accurate concentrations and potentials can be input, as the currents obtained from the combination of BD and NP match well with those obtained directly from BD simulations, although some discrepancies are seen when the ion concentrations are not radially uniform. This finding opens a door to utilising the results of microscopic simulations in continuum theory, something that is likely to be useful in the investigation of a range of biophysical and nano-scale applications and should stimulate further studies in this direction.

  3. Testing the applicability of Nernst-Planck theory in ion channels: comparisons with Brownian dynamics simulations.

    Science.gov (United States)

    Song, Chen; Corry, Ben

    2011-01-01

    The macroscopic Nernst-Planck (NP) theory has often been used for predicting ion channel currents in recent years, but the validity of this theory at the microscopic scale has not been tested. In this study we systematically tested the ability of the NP theory to accurately predict channel currents by combining and comparing the results with those of Brownian dynamics (BD) simulations. To thoroughly test the theory in a range of situations, calculations were made in a series of simplified cylindrical channels with radii ranging from 3 to 15 Å, in a more complex 'catenary' channel, and in a realistic model of the mechanosensitive channel MscS. The extensive tests indicate that the NP equation is applicable in narrow ion channels provided that accurate concentrations and potentials can be input as the currents obtained from the combination of BD and NP match well with those obtained directly from BD simulations, although some discrepancies are seen when the ion concentrations are not radially uniform. This finding opens a door to utilising the results of microscopic simulations in continuum theory, something that is likely to be useful in the investigation of a range of biophysical and nano-scale applications and should stimulate further studies in this direction.
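
    For a single ion species in one dimension, the steady-state NP flux has a closed-form (integrating-factor) solution once a potential profile along the channel and the bath concentrations at its ends are specified; this is essentially the continuum quantity compared against the BD currents. The sketch below implements that textbook solution; it is illustrative, not the authors' code, and the Gaussian barrier in the example is an arbitrary assumption.

    ```python
    import numpy as np

    def np_channel_flux(x, phi, cL, cR, D, z, kT=1.0):
        """Steady-state flux from the 1D Nernst-Planck equation
            J = -D * (dc/dx + z * c * dphi/dx / kT),
        where phi(x) is the electrostatic energy of a unit charge along the
        channel (an ion of valence z has energy z*phi) and cL, cR are the
        concentrations at the two channel ends."""
        w = np.exp(z * phi / kT)                              # integrating factor
        denom = np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(x))   # trapezoidal integral
        return D * (cL * w[0] - cR * w[-1]) / denom

    # Toy example: a 2 kT Gaussian barrier, equal baths, no applied bias -> flux ~ 0
    x = np.linspace(0.0, 1.0, 201)
    phi = 2.0 * np.exp(-((x - 0.5) / 0.1) ** 2)
    print(np_channel_flux(x, phi, cL=1.0, cR=1.0, D=1.0, z=1))
    ```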

  4. Theory and simulation studies of effective interactions, phase behavior and morphology in polymer nanocomposites.

    Science.gov (United States)

    Ganesan, Venkat; Jayaraman, Arthi

    2014-01-07

    Polymer nanocomposites are a class of materials that consist of a polymer matrix filled with inorganic/organic nanoscale additives that enhance the inherent macroscopic (mechanical, optical and electronic) properties of the polymer matrix. Over the past few decades such materials have received tremendous attention from experimentalists, theoreticians, and computational scientists. These studies have revealed that the macroscopic properties of polymer nanocomposites depend strongly on the (microscopic) morphology of the constituent nanoscale additives in the polymer matrix. As a consequence, intense research efforts have been directed to understand the relationships between interactions, morphology, and the phase behavior of polymer nanocomposites. Theory and simulations have proven to be useful tools in this regard due to their ability to link molecular level features of the polymer and nanoparticle additives to the resulting morphology within the composite. In this article we review recent theory and simulation studies, presenting briefly the methodological developments underlying PRISM theories, density functional theory, self-consistent field theory approaches, and atomistic and coarse-grained molecular simulations. We first discuss the studies on polymer nanocomposites with bare or un-functionalized nanoparticles as additives, followed by a review of recent work on composites containing polymer grafted or functionalized nanoparticles as additives. We conclude each section with a brief outlook on some potential future directions.

  5. Theory and simulation of discrete kinetic beta induced Alfven eigenmode in tokamak plasmas

    International Nuclear Information System (INIS)

    Wang, X; Zonca, F; Chen, L

    2010-01-01

    It is shown, both analytically and by numerical simulations, that, in the presence of thermal ion kinetic effects, the beta induced Alfven eigenmode (BAE)-shear Alfven wave continuous spectrum can be discretized into radially trapped eigenstates known as kinetic BAE (KBAE). While thermal ion compressibility gives rise to finite BAE accumulation point frequency, the discretization occurs via the finite Larmor radius and finite orbit width effects. Simulations and analytical theories agree both qualitatively and quantitatively. Simulations also demonstrate that KBAE can be readily excited by the finite radial gradients of energetic particles.

  6. A portable high-quality random number generator for lattice field theory simulations

    International Nuclear Information System (INIS)

    Luescher, M.

    1993-09-01

    The theory underlying a proposed random number generator for numerical simulations in elementary particle physics and statistical mechanics is discussed. The generator is based on an algorithm introduced by Marsaglia and Zaman, with an important added feature leading to demonstrably good statistical properties. It can be implemented exactly on any computer complying with the IEEE-754 standard for single precision floating point arithmetic. (orig.)
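
    The underlying Marsaglia-Zaman recurrence is a subtract-with-borrow generator with lags 24 and 10 in base 2^24; the added feature referred to above is a decimation ("luxury") step that discards part of the output stream to suppress the known correlations of the plain recurrence. The sketch below shows only the plain subtract-with-borrow core, without that decimation step, and is written for this summary rather than being the proposed generator itself.

    ```python
    def subtract_with_borrow(seed_words, r=24, s=10, n_out=20):
        """Plain Marsaglia-Zaman subtract-with-borrow sequence
            x_n = (x_{n-s} - x_{n-r} - carry) mod 2^24,
        returning uniform deviates in [0, 1). seed_words must hold at least
        r integers in [0, 2^24); a trivial seed is used below for illustration."""
        base = 1 << 24
        state = list(seed_words[:r])
        carry = 0
        out = []
        for _ in range(n_out):
            x = state[-s] - state[-r] - carry
            if x < 0:
                x += base
                carry = 1
            else:
                carry = 0
            state.append(x)   # newest value enters the lag buffer
            state.pop(0)      # oldest value drops out
            out.append(x / base)
        return out

    print(subtract_with_borrow(list(range(1, 25))))
    ```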

  7. Simulating Serious Games: A Discrete-Time Computational Model Based on Cognitive Flow Theory

    Science.gov (United States)

    Westera, Wim

    2018-01-01

    This paper presents a computational model for simulating how people learn from serious games. While avoiding the combinatorial explosion of a game's micro-states, the model offers a meso-level pathfinding approach, which is guided by cognitive flow theory and various concepts from learning sciences. It extends a basic, existing model by exposing…

  8. Fluid of Hard Spheres with a Modified Dipole: Simulation and Theory

    Czech Academy of Sciences Publication Activity Database

    Jirsák, Jan; Nezbeda, Ivo

    2008-01-01

    Roč. 73, č. 4 (2008), s. 541-557 ISSN 0010-0765 R&D Projects: GA AV ČR 1ET400720409; GA AV ČR IAA400720710 Institutional research plan: CEZ:AV0Z40720504 Keywords : molecular simulation * monte carlo method * perturbation theory Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 0.784, year: 2008

  9. Bicontinuous Phases in Diblock Copolymer/Homopolymer Blends: Simulation and Self-Consistent Field Theory

    KAUST Repository

    Martínez-Veracoechea, Francisco J.; Escobedo, Fernando A.

    2009-01-01

    A combination of particle-based simulations and self-consistent field theory (SCFT) is used to study the stabilization of multiple ordered bicontinuous phases in blends of a diblock copolymer (DBC) and a homopolymer. The double-diamond phase (DD

  10. Sound propagation in dry granular materials : discrete element simulations, theory, and experiments

    NARCIS (Netherlands)

    Mouraille, O.J.P.

    2009-01-01

    In this study sound wave propagation through different types of dry confined granular systems is studied. With three-dimensional discrete element simulations, theory and experiments, the influence of several micro-scale properties: friction, dissipation, particle rotation, and contact disorder, on

  11. Acoustofluidics: Theory and simulation of streaming and radiation forces at ultrasound resonances in microfluidic devices

    DEFF Research Database (Denmark)

    Bruus, Henrik

    2009-01-01

    fields, which are directly related to the acoustic radiation force on single particles and to the acoustic streaming of the liquid. For the radiation pressure effects, there is good agreement between theory and simulation, while the numeric results for the acoustic streaming effects are more problematic...

  12. Theory and simulation of epitaxial rotation. Light particles adsorbed on graphite

    DEFF Research Database (Denmark)

    Vives, E.; Lindgård, P.-A.

    1993-01-01

    We present a theory and Monte Carlo simulations of adsorbed particles on a corrugated substrate. We have focused on the case of rare gases and light molecules, H2 and D2, adsorbed on graphite. The competition between the particle-particle and particle-substrate interactions gives rise to frustration ... between the commensurate and incommensurate phases for the adsorbed systems. From our simulations and our theory, we are able to understand the gamma phase of D2 as an ordered phase stabilized by disorder. It can be described as a 2q-modulated structure. In agreement with the experiments, we have also found a modulated 4 x 4 structure. Energy, structure-factor intensities, peak positions, and epitaxial rotation angles as a function of temperature and coverage have been determined from the simulations. Good agreement with theory and experimental data is found.

  13. Adaptive Core Simulation Employing Discrete Inverse Theory - Part II: Numerical Experiments

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-01-01

    Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. The companion paper, ''Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory,'' describes in detail the theoretical background of the proposed adaptive techniques. This paper, Part II, demonstrates several computational experiments conducted to assess the fidelity and robustness of the proposed techniques. The intent is to check the ability of the adapted core simulator model to predict future core observables that are not included in the adaption or core observables that are recorded at core conditions that differ from those at which adaption is completed. Also, this paper demonstrates successful utilization of an efficient sensitivity analysis approach to calculate the sensitivity information required to perform the adaption for millions of input core parameters. Finally, this paper illustrates a useful application for adaptive simulation - reducing the inconsistencies between two different core simulator code systems, where the multitudes of input data to one code are adjusted to enhance the agreement between both codes for important core attributes, i.e., core reactivity and power distribution. Also demonstrated is the robustness of such an application

  14. Atomic Quantum Simulations of Abelian and non-Abelian Gauge Theories

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    Using a Fermi-Bose mixture of ultra-cold atoms in an optical lattice, in a collaboration of atomic and particle physicists, we have constructed a quantum simulator for a U(1) gauge theory coupled to fermionic matter. The construction is based on quantum link models which realize continuous gauge symmetry with discrete quantum variables. At low energies, quantum link models with staggered fermions emerge from a Hubbard-type model which can be quantum simulated. This allows investigations of string breaking as well as the real-time evolution after a quench in gauge theories, which are inaccessible to classical simulation methods. Similarly, using ultracold alkaline-earth atoms in optical lattices, we have constructed a quantum simulator for U(N) and SU(N) lattice gauge theories with fermionic matter based on quantum link models. These systems share qualitative features with QCD, including chiral symmetry breaking and restoration at non-zero temperature or baryon density. Unlike classical simulations, a quantum ...

  15. Repository Integration Program: RIP performance assessment and strategy evaluation model theory manual and user's guide

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-11-01

    This report describes the theory and capabilities of RIP (Repository Integration Program). RIP is a powerful and flexible computational tool for carrying out probabilistic integrated total system performance assessments for geologic repositories. The primary purpose of RIP is to provide a management tool for guiding system design and site characterization. In addition, the performance assessment model (and the process of eliciting model input) can act as a mechanism for integrating the large amount of available information into a meaningful whole (in a sense, allowing one to keep the "big picture" and the ultimate aims of the project clearly in focus). Such an integration is useful both for project managers and project scientists. RIP is based on a "top down" approach to performance assessment that concentrates on the integration of the entire system, and utilizes relatively high-level descriptive models and parameters. The key point in the application of such a "top down" approach is that the simplified models and associated high-level parameters must incorporate an accurate representation of their uncertainty. RIP is designed in a very flexible manner such that details can be readily added to various components of the model without modifying the computer code. Uncertainty is also handled in a very flexible manner, and both parameter and model (process) uncertainty can be explicitly considered. Uncertainty is propagated through the integrated PA model using an enhanced Monte Carlo method. RIP must rely heavily on subjective assessment (expert opinion) for much of its input. The process of eliciting the high-level input parameters required for RIP is critical to its successful application. As a result, in order for any project to successfully apply a tool such as RIP, an enormous amount of communication and cooperation must exist between the data collectors, the process modelers, and the performance assessment modelers.

  16. Boundary based on exchange symmetry theory for multilevel simulations. I. Basic theory.

    Science.gov (United States)

    Shiga, Motoyuki; Masia, Marco

    2013-07-28

    In this paper, we lay the foundations for a new method that allows multilevel simulations of a diffusive system, i.e., a system where a flux of particles through the boundaries might disrupt the primary region. The method is based on the use of flexible restraints that maintain the separation between inner and outer particles. It is shown that, by introducing a bias potential that accounts for the exchange symmetry of the system, the correct statistical distribution is preserved. Using a toy model consisting of non-interacting particles in an asymmetric potential well, we prove that the method is formally exact, and that it could be simplified by considering only up to a couple of particle exchanges without a loss of accuracy. A real-world test is then made by considering a hybrid MM(∗)/MM calculation of cesium ion in water. In this case, the single exchange approximation is sound enough that the results superimpose to the exact solutions. Potential applications of this method to many different hybrid QM/MM systems are discussed, as well as its limitations and strengths in comparison to existing approaches.

  17. Interactions between Nanoparticles and Polymer Brushes: Molecular Dynamics Simulations and Self-consistent Field Theory Calculations

    Science.gov (United States)

    Cheng, Shengfeng; Wen, Chengyuan; Egorov, Sergei

    2015-03-01

    Molecular dynamics simulations and self-consistent field theory calculations are employed to study the interactions between a nanoparticle and a polymer brush at various densities of chains grafted to a plane. Simulations with both implicit and explicit solvent are performed. In either case the nanoparticle is loaded to the brush at a constant velocity. Then a series of simulations are performed to compute the force exerted on the nanoparticle that is fixed at various distances from the grafting plane. The potential of mean force is calculated and compared to the prediction based on a self-consistent field theory. Our simulations show that the explicit solvent leads to effects that are not captured in simulations with implicit solvent, indicating the importance of including explicit solvent in molecular simulations of such systems. Our results also demonstrate an interesting correlation between the force on the nanoparticle and the density profile of the brush. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for this research.

  18. Organizational culture shapes the adoption and incorporation of simulation into nursing curricula: a grounded theory study.

    Science.gov (United States)

    Taplay, Karyn; Jack, Susan M; Baxter, Pamela; Eva, Kevin; Martin, Lynn

    2014-01-01

    Purpose. To create a substantive mid-range theory explaining how the organizational cultures of undergraduate nursing programs shape the adoption and incorporation of mid- to high-level technical fidelity simulators as a teaching strategy within curricula. Method. A constructivist grounded theory was used to guide this study, which was conducted in Ontario, Canada, during 2011-12. Semistructured interviews (n = 43) with participants that included nursing administrators, nursing faculty, and simulation leaders across multiple programs (n = 13) informed this study. Additionally, key documents (n = 67) were reviewed. Purposeful and theoretical sampling was used and data were collected and analyzed simultaneously. Data were compared among and between sites. Findings. The organizational elements that shape simulation in nursing (OESSN) model depicts five key organizational factors at the nursing program level that shaped the adoption and incorporation of simulation: (1) leaders working in tandem, (2) information exchange, (3) physical locale, (4) shared motivators, and (5) scaffolding to manage change. Conclusions. The OESSN model provides an explanation of the organizational factors that contributed to the adoption and incorporation of simulation into nursing curricula. Nursing programs that use the OESSN model may experience a more rapid or broad uptake of simulation when organizational factors that impact adoption and incorporation are considered and planned for.

  19. Design/CPN. A Reference Manual

    DEFF Research Database (Denmark)

    Jensen et. al, Kurt

    Note: The manuals are available as PDF files. There are two sets of manuals - one for the Unix platform and another for the Mac platform. Each set of manuals consists of: a Tutorial (for the Design/CPN editor and simulator), a Reference Manual (for the Design/CPN editor and simulator), a Programmer's Manual (with Design/OA functions and Charts), an Occurrence Graph Manual (integrated tutorial and reference manual), an OE/OS Graph Manual (integrated tutorial and reference manual), and Other Manuals (e.g. a short overview of CPN ML). The Tutorial, Reference Manual and Programmer's Manual are made for Design ... /CPN WWW pages. To speed up access to the Design/CPN manuals we recommend keeping a local copy, which may be shared by all users in your organisation. In this way you do not need to go via our WWW server each time you need to look in a manual. For some of the largest manuals, we also supply files ...

  20. Electronics Research Laboratory, Plasma Theory and Simulation Group annual progress report, January 1, 1989--December 31, 1989

    International Nuclear Information System (INIS)

    Birdsall, C.K.

    1989-01-01

    This is a brief progress report, covering our research in general plasma theory and simulation, plasma-wall physics theory and simulation, and code development. Reports written in this period are included with this mailing. A publications list plus abstracts for two major meetings are included

  1. Asthma management simulation for children: translating theory, methods, and strategies to effect behavior change.

    Science.gov (United States)

    Shegog, Ross; Bartholomew, L Kay; Gold, Robert S; Pierrel, Elaine; Parcel, Guy S; Sockrider, Marianna M; Czyzewski, Danita I; Fernandez, Maria E; Berlin, Nina J; Abramson, Stuart

    2006-01-01

    Translating behavioral theories, models, and strategies to guide the development and structure of computer-based health applications is well recognized as valuable, although it remains a continuing challenge for program developers. A stepped approach to translating behavioral theory in the design of simulations to teach chronic disease management to children is described. This includes the translation steps to: 1) define target behaviors and their determinants, 2) identify theoretical methods to optimize behavioral change, and 3) choose educational strategies to effectively apply these methods and combine them into a cohesive computer-based simulation for health education. Asthma is used to exemplify a chronic health management problem, and a computer-based asthma management simulation (Watch, Discover, Think and Act) that has been evaluated and shown to effect asthma self-management in children is used to exemplify the application of theory to practice. Impact and outcome evaluation studies have indicated the effectiveness of these steps in providing increased rigor and accountability, suggesting their utility for educators and developers seeking to apply simulations to enhance self-management behaviors in patients.

  2. A multi-species exchange model for fully fluctuating polymer field theory simulations.

    Science.gov (United States)

    Düchs, Dominik; Delaney, Kris T; Fredrickson, Glenn H

    2014-11-07

    Field-theoretic models have been used extensively to study the phase behavior of inhomogeneous polymer melts and solutions, both in self-consistent mean-field calculations and in numerical simulations of the full theory capturing composition fluctuations. The models commonly used can be grouped into two categories, namely, species models and exchange models. Species models involve integrations of functionals that explicitly depend on fields originating both from species density operators and their conjugate chemical potential fields. In contrast, exchange models retain only linear combinations of the chemical potential fields. In the two-component case, development of exchange models has been instrumental in enabling stable complex Langevin (CL) simulations of the full complex-valued theory. No comparable stable CL approach has yet been established for field theories of the species type. Here, we introduce an extension of the exchange model to an arbitrary number of components, namely, the multi-species exchange (MSE) model, which greatly expands the classes of soft material systems that can be accessed by the complex Langevin simulation technique. We demonstrate the stability and accuracy of the MSE-CL sampling approach using numerical simulations of triblock and tetrablock terpolymer melts, and tetrablock quaterpolymer melts. This method should enable studies of a wide range of fluctuation phenomena in multiblock/multi-species polymer blends and composites.

  3. Users manual for Aerospace Nuclear Safety Program six-degree-of-freedom reentry simulation (TMAGRA6C)

    International Nuclear Information System (INIS)

    Sharbaugh, R.C.

    1990-02-01

    This report documents the updated six-degree-of-freedom reentry simulation TMAGRA6C used in the Aerospace Nuclear Safety Program, ANSP. The simulation provides for the inclusion of the effects of ablation on the aerodynamic stability and drag of reentry bodies, specifically the General Purpose Heat Source, GPHS. The existing six-degree-of-freedom reentry body simulations (TMAGRA6A and TMAGRA6B) used in the JHU/APL Nuclear Safety Program do not include aerodynamic effects resulting from geometric changes to the configuration due to ablation from reentry flights. A wind tunnel test was conducted in 1989 to obtain the effects of ablation on the hypersonic aerodynamics of the GPHS module. The analyzed data were used to form data sets which are included herein in tabular form. These are used as incremental aerodynamic inputs in the new TMAGRA6C six-degree-of-freedom reentry simulation. 20 refs., 13 figs., 2 tabs

  4. The process of adopting and incorporating simulation into undergraduate nursing curricula: a grounded theory study.

    Science.gov (United States)

    Taplay, Karyn; Jack, Susan M; Baxter, Pamela; Eva, Kevin; Martin, Lynn

    2015-01-01

    The aim of this study is to explain the process of adopting and incorporating simulation as a teaching strategy in undergraduate nursing programs, define uptake, and discuss potential outcomes. In many countries, simulation is increasingly adopted as a common teaching strategy. However, there is a dearth of knowledge related to the process of adoption and incorporation. We used an interpretive, constructivist approach to grounded theory to guide this research study. We conducted the study in Ontario, Canada, during 2011-2012. Multiple data sources informed the development of this theory, including in-depth interviews (n = 43) and a review of key organizational documents, such as mission and vision statements (n = 67), from multiple nursing programs (n = 13). The adoption and uptake of mid- to high-fidelity simulation equipment is a multistep iterative process involving various organizational levels within the institution that entails a seven-phase process: (a) securing resources, (b) nursing leaders working in tandem, (c) getting it out of the box, (d) learning about simulation and its potential for teaching, (e) finding a fit, (f) trialing the equipment, and (g) integrating into the curriculum. These findings could assist nursing programs in Canada and internationally that wish to adopt or further incorporate simulation into their curricula and highlight potential organizational and program level outcomes. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  5. Overview of theory and simulations in the Heavy Ion Fusion Science Virtual National Laboratory

    Science.gov (United States)

    Friedman, Alex

    2007-07-01

    The Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL) is a collaboration of Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, and Princeton Plasma Physics Laboratory. These laboratories, in cooperation with researchers at other institutions, are carrying out a coordinated effort to apply intense ion beams as drivers for studies of the physics of matter at extreme conditions, and ultimately for inertial fusion energy. Progress on this endeavor depends upon coordinated application of experiments, theory, and simulations. This paper describes the state of the art, with an emphasis on the coordination of modeling and experiment; developments in the simulation tools, and in the methods that underlie them, are also treated.

  6. Nonequilibrium Gyrokinetic Fluctuation Theory and Sampling Noise in Gyrokinetic Particle-in-cell Simulations

    International Nuclear Information System (INIS)

    Krommes, John A.

    2007-01-01

    The present state of the theory of fluctuations in gyrokinetic (GK) plasmas and especially its application to sampling noise in GK particle-in-cell (PIC) simulations is reviewed. Topics addressed include the Δf method, the fluctuation-dissipation theorem for both classical and GK many-body plasmas, the Klimontovich formalism, sampling noise in PIC simulations, statistical closure for partial differential equations, the theoretical foundations of spectral balance in the presence of arbitrary noise sources, and the derivation of Kadomtsev-type equations from the general formalism

  7. Theory and Monte-Carlo simulation of adsorbates on corrugated surfaces

    DEFF Research Database (Denmark)

    Vives, E.; Lindgård, P.-A.

    1993-01-01

    Phase transitions in systems of adsorbed molecules on corrugated surfaces are studied by means of Monte Carlo simulation. Particularly, we have studied the phase diagram of D2 on graphite as a function of coverage and temperature. We have demonstrated the existence of an intermediate gamma-phase between the commensurate and incommensurate phase stabilized by defects. Special attention has been given to the study of the epitaxial rotation angles of the different phases. Available experimental data is in agreement with the simulations and with a general theory for the epitaxial rotation which takes...

  8. Bridging scales from molecular simulations to classical thermodynamics: density functional theory of capillary condensation in nanopores

    International Nuclear Information System (INIS)

    Neimark, Alexander V; Ravikovitch, Peter I; Vishnyakov, Aleksey

    2003-01-01

    With the example of the capillary condensation of Lennard-Jones fluid in nanopores ranging from 1 to 10 nm, we show that the non-local density functional theory (NLDFT) with properly chosen parameters of intermolecular interactions bridges the scale gap from molecular simulations to macroscopic thermodynamics. On the one hand, NLDFT correctly approximates the results of Monte Carlo simulations (shift of vapour-liquid equilibrium, spinodals, density profiles, adsorption isotherms) for pores wider than about 2 nm. On the other hand, NLDFT smoothly merges (above 7-10 nm) with the Derjaguin-Broekhoff-de Boer equations which represent augmented Laplace-Kelvin equations of capillary condensation and desorption
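
    For orientation on the macroscopic side of the scale gap discussed above, the classical Kelvin equation relates the relative pressure of capillary condensation to the pore (Kelvin) radius. The snippet below evaluates it with textbook nitrogen-at-77-K parameters; this is a generic reference formula, not the NLDFT or Derjaguin-Broekhoff-de Boer treatment of the paper.

    ```python
    import numpy as np

    # Classical Kelvin equation, ln(p/p0) = -2*gamma*V_m / (r_K * R * T), for a
    # hemispherical meniscus and zero contact angle. Nitrogen-at-77-K values.
    gamma = 8.85e-3     # surface tension of liquid N2, N/m (assumed)
    v_m = 34.7e-6       # molar volume of liquid N2, m^3/mol (assumed)
    R, T = 8.314, 77.4  # gas constant (J/mol/K) and temperature (K)

    def kelvin_relative_pressure(r_kelvin_nm):
        """Relative pressure p/p0 at which condensation occurs for a Kelvin radius in nm."""
        r = r_kelvin_nm * 1e-9
        return np.exp(-2.0 * gamma * v_m / (r * R * T))

    for r_nm in (1, 2, 5, 10):
        print(f"r_K = {r_nm:2d} nm -> p/p0 = {kelvin_relative_pressure(r_nm):.3f}")
    ```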

  9. Nonequilibrium Gyrokinetic Fluctuation Theory and Sampling Noise in Gyrokinetic Particle-in-cell Simulations

    Energy Technology Data Exchange (ETDEWEB)

    John A. Krommes

    2007-10-09

    The present state of the theory of fluctuations in gyrokinetic (GK) plasmas and especially its application to sampling noise in GK particle-in-cell (PIC) simulations is reviewed. Topics addressed include the Δf method, the fluctuation-dissipation theorem for both classical and GK many-body plasmas, the Klimontovich formalism, sampling noise in PIC simulations, statistical closure for partial differential equations, the theoretical foundations of spectral balance in the presence of arbitrary noise sources, and the derivation of Kadomtsev-type equations from the general formalism.

  10. Demonstration for novel self-organization theory by three-dimensional magnetohydrodynamic simulation

    International Nuclear Information System (INIS)

    Kondoh, Yoshiomi; Hosaka, Yasuo; Liang, Jia-Ling.

    1993-03-01

    It is demonstrated by three-dimensional simulations for resistive magnetohydrodynamic (MHD) plasmas with both 'spatially nonuniform resistivity η' and 'uniform η' that the attractor of the dissipative structure in the resistive MHD plasmas is given by ∇ x (ηj) = (α/2)B which is derived from a novel self-organization theory based on the minimum dissipation rate profile. It is shown by the simulations that the attractor is reduced to ∇ x B = λB in the special case with the 'uniform η' and no pressure gradient. (author)
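
    One quick way to see the quoted reduction (a consistency check added here, not taken from the report): with uniform η and Ampère's law j = ∇ x B/μ₀, the attractor relation becomes (η/μ₀) ∇ x (∇ x B) = (α/2)B; a force-free field with ∇ x B = λB gives ∇ x (∇ x B) = λ²B, so it satisfies the attractor relation whenever λ² = μ₀α/(2η).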

  11. Integration of multiple theories for the simulation of laser interference lithography processes.

    Science.gov (United States)

    Lin, Te-Hsun; Yang, Yin-Kuang; Fu, Chien-Chung

    2017-11-24

    The periodic structure of laser interference lithography (LIL) fabrication is superior to other lithography technologies. In contrast to traditional lithography, LIL has the advantages of being a simple optical system with no mask requirements, low cost, high depth of focus, and large patterning area in a single exposure. Generally, a simulation pattern for the periodic structure is obtained through optical interference prior to its fabrication through LIL. However, the LIL process is complex and combines the fields of optics and polymer materials; thus, a single simulation theory cannot reflect the real situation. Therefore, this research integrates multiple theories, including those of optical interference, standing waves, and photoresist characteristics, to create a mathematical model for the LIL process. The mathematical model can accurately estimate the exposure time, reducing the trial and error and hence the duration of the LIL process.
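
    As a minimal illustration of the optical-interference ingredient described above (not the integrated model of the paper, which also accounts for standing waves and photoresist characteristics), the sketch below computes the fringe period and intensity profile of symmetric two-beam interference and a simple threshold-dose estimate of the exposure time; the laser, beam-angle and resist numbers are invented placeholders.

    ```python
    import numpy as np

    # Two-beam laser interference: fringe period, intensity profile, and a
    # threshold-dose estimate of exposure time. All numbers are placeholders.
    wavelength = 325e-9          # laser wavelength in m (assumed)
    half_angle = np.deg2rad(20)  # half-angle between the two beams (assumed)
    I0 = 50.0                    # irradiance of each beam, W/m^2 (assumed)
    dose_threshold = 600.0       # resist clearing dose at the fringe peak, J/m^2 (assumed)

    period = wavelength / (2.0 * np.sin(half_angle))                 # fringe period
    x = np.linspace(0.0, 2.0 * period, 400)
    intensity = 2.0 * I0 * (1.0 + np.cos(2.0 * np.pi * x / period))  # equal-intensity beams

    t_exposure = dose_threshold / intensity.max()   # time to reach the dose at the peaks
    print(f"fringe period          : {period * 1e9:.0f} nm")
    print(f"peak intensity         : {intensity.max():.0f} W/m^2")
    print(f"estimated exposure time: {t_exposure:.1f} s")
    ```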

  12. Integration of multiple theories for the simulation of laser interference lithography processes

    Science.gov (United States)

    Lin, Te-Hsun; Yang, Yin-Kuang; Fu, Chien-Chung

    2017-11-01

    The periodic structure of laser interference lithography (LIL) fabrication is superior to other lithography technologies. In contrast to traditional lithography, LIL has the advantages of being a simple optical system with no mask requirements, low cost, high depth of focus, and large patterning area in a single exposure. Generally, a simulation pattern for the periodic structure is obtained through optical interference prior to its fabrication through LIL. However, the LIL process is complex and combines the fields of optics and polymer materials; thus, a single simulation theory cannot reflect the real situation. Therefore, this research integrates multiple theories, including those of optical interference, standing waves, and photoresist characteristics, to create a mathematical model for the LIL process. The mathematical model can accurately estimate the exposure time, reducing the trial and error and hence the duration of the LIL process.

  13. MODELLING AND SIMULATING RISKS IN THE TRAINING OF THE HUMAN RESOURCES BY APPLYING THE CHAOS THEORY

    OpenAIRE

    Eugen ROTARESCU

    2012-01-01

    The article addresses the modelling and simulation of risks in human resources training, as well as forecasting the degree of training attainment under risk, by applying the mathematical tools offered by chaos theory and mathematical statistics. We highlight that the levels of knowledge, skills and abilities of an organization's human resources are autocorrelated in time and depend on the level reached at a previous moment of the training, as well as on ...

  14. Simulations of N = 2 super Yang-Mills theory in two dimensions

    International Nuclear Information System (INIS)

    Catterall, Simon

    2006-01-01

    We present results from lattice simulations of N = 2 super Yang-Mills theory in two dimensions. The lattice formulation we use retains both gauge invariance and an exact (twisted) supersymmetry at any lattice spacing. Results for both U(2) and SU(2) gauge groups are given. We focus on supersymmetric Ward identities, the phase of the Pfaffian resulting from integration over the Grassmann fields and the nature of the quantum moduli space.

  15. Time-dependent transport of energetic particles in magnetic turbulence: computer simulations versus analytical theory

    Science.gov (United States)

    Arendt, V.; Shalchi, A.

    2018-06-01

    We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.
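
    The running diffusion coefficients referred to above are typically computed from a test-particle ensemble as d(t) = ⟨(Δx)²⟩/(2t). The sketch below applies this diagnostic to plain random walks rather than to orbits integrated in simulated turbulence, so it illustrates only the measurement, not the physics of the paper; the ensemble size and step statistics are arbitrary.

    ```python
    import numpy as np

    # Running diffusion coefficient d(t) = <(x(t) - x(0))^2> / (2 t) computed
    # from an ensemble of test particles. Here the trajectories are simple
    # random walks standing in for orbits in a turbulent magnetic field.
    rng = np.random.default_rng(1)
    n_particles, n_steps, dt = 5000, 2000, 0.01
    step_rms = 0.1                                   # rms displacement per step (assumed)

    steps = rng.normal(0.0, step_rms, size=(n_particles, n_steps))
    x = np.cumsum(steps, axis=1)                     # trajectories with x_i(0) = 0
    t = dt * np.arange(1, n_steps + 1)

    msd = np.mean(x**2, axis=0)                      # mean-square displacement vs time
    d_running = msd / (2.0 * t)                      # running diffusion coefficient

    print("late-time d(t):", d_running[-1])
    print("expected  D   :", step_rms**2 / (2.0 * dt))
    ```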

  16. A game theory simulator for assessing the performances of competitive electricity markets

    International Nuclear Information System (INIS)

    Bompard, Ettore; Carpaneto, Enrico; Ciwei, Gao; Napoli, Roberto; Benini, Michele; Gallanti, Massimo; Migliavacca, Gianluigi

    2008-01-01

    In recent years, electricity markets have been created all over the world following different basic concepts. Market structure, market rules, demand levels, market concentration and the energy sources used to produce electricity have a strong influence on market performance, and modifications to these aspects may significantly affect market outcomes. Sensitivity analyses therefore need proper simulation tools. In this paper a medium-run electricity market simulator (MREMS) based on game theory is presented. The simulator incorporates two different games, one for the unit commitment of thermal units and one for strategic bidding and hourly market clearing. Either a Forchheimer (one leader) or a Bertrand (all players are leaders) model, or an intermediate model with any number of leaders, can be selected, depending on the strategic behaviour of the producers, allowing the simulation of markets with different levels of concentration. The simulator was applied to analyse producers' behaviour during the first operative year of the Italian power exchange, and a comparison between simulation and actual market results was carried out in order to test the simulator and validate its simplifying hypotheses. MREMS, while capable of being used stand-alone, was conceived as the heart of a long-term market simulator (LREMS) allowing simulation of the long-run evolution of the generation park (investments in new plants, refurbishment and decommissioning of older ones). LREMS is a hierarchical simulator: a long-term 'outer' game takes yearly investment decisions based on mid-term price projections provided by MREMS. Although this paper is mainly devoted to describing MREMS, one specific section provides an overview of the 'outer' game implemented by LREMS. (author)
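
    As a generic game-theory illustration of how the number of strategic producers shapes market outcomes (a textbook symmetric Cournot example, not the Forchheimer/Bertrand games implemented in MREMS), the snippet below computes the equilibrium price for linear demand P = a - b*Q and constant marginal cost c; one producer reproduces the monopoly price, and the price approaches marginal cost as the number of competitors grows.

    ```python
    # Symmetric Cournot oligopoly with linear demand P = a - b*Q and identical
    # marginal cost c: per-firm output q* = (a - c) / (b * (n + 1)), market
    # price P* = (a + n*c) / (n + 1). Parameter values are arbitrary.
    a, b, c = 100.0, 1.0, 20.0   # demand intercept, slope, marginal cost (assumed)

    def cournot_price(n_producers: int) -> float:
        """Equilibrium price with n identical Cournot competitors."""
        return (a + n_producers * c) / (n_producers + 1)

    for n in (1, 2, 4, 10, 50):
        print(f"{n:2d} producers -> price {cournot_price(n):6.2f}")
    # One producer gives the monopoly price 60.0; the price tends to c = 20 as n grows.
    ```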

  17. Biosafety Manual

    Energy Technology Data Exchange (ETDEWEB)

    King, Bruce W.

    2010-05-18

    Work with or potential exposure to biological materials in the course of performing research or other work activities at Lawrence Berkeley National Laboratory (LBNL) must be conducted in a safe, ethical, environmentally sound, and compliant manner. Work must be conducted in accordance with established biosafety standards, the principles and functions of Integrated Safety Management (ISM), this Biosafety Manual, Chapter 26 (Biosafety) of the Health and Safety Manual (PUB-3000), and applicable standards and LBNL policies. The purpose of the Biosafety Program is to protect workers, the public, agriculture, and the environment from exposure to biological agents or materials that may cause disease or other detrimental effects in humans, animals, or plants. This manual provides workers; line management; Environment, Health, and Safety (EH&S) Division staff; Institutional Biosafety Committee (IBC) members; and others with a comprehensive overview of biosafety principles, requirements from biosafety standards, and measures needed to control biological risks in work activities and facilities at LBNL.

  18. Full-band quantum simulation of electron devices with the pseudopotential method: Theory, implementation, and applications

    Science.gov (United States)

    Pala, M. G.; Esseni, D.

    2018-03-01

    This paper presents the theory, implementation, and application of a quantum transport modeling approach based on the nonequilibrium Green's function formalism and a full-band empirical pseudopotential Hamiltonian. We here propose to employ a hybrid real-space/plane-wave basis that results in a significant reduction of the computational complexity compared to a full plane-wave basis. To this purpose, we provide a theoretical formulation in the hybrid basis of the quantum confinement, the self-energies of the leads, and the coupling between the device and the leads. After discussing the theory and the implementation of the new simulation methodology, we report results for complete, self-consistent simulations of different electron devices, including a silicon Esaki diode, a thin-body silicon field effect transistor (FET), and a germanium tunnel FET. The simulated transistors have technologically relevant geometrical features with a semiconductor film thickness of about 4 nm and a channel length ranging from 10 to 17 nm. We believe that the newly proposed formalism may find applications also in transport models based on ab initio Hamiltonians, as those employed in density functional theory methods.

  19. SHARP User Manual

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Y. Q. [Argonne National Lab. (ANL), Argonne, IL (United States); Shemon, E. R. [Argonne National Lab. (ANL), Argonne, IL (United States); Thomas, J. W. [Argonne National Lab. (ANL), Argonne, IL (United States); Mahadevan, Vijay S. [Argonne National Lab. (ANL), Argonne, IL (United States); Rahaman, Ronald O. [Argonne National Lab. (ANL), Argonne, IL (United States); Solberg, Jerome [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-03-31

    SHARP is an advanced modeling and simulation toolkit for the analysis of nuclear reactors. It comprises several components including physical modeling tools, tools to integrate the physics codes for multi-physics analyses, and a set of tools to couple the codes within the MOAB framework. Physics modules currently include the neutronics code PROTEUS, the thermal-hydraulics code Nek5000, and the structural mechanics code Diablo. This manual focuses on performing multi-physics calculations with the SHARP ToolKit. Manuals for the three individual physics modules are available with the SHARP distribution to help the user to either carry out the primary multi-physics calculation with basic knowledge or perform further advanced development with in-depth knowledge of these codes. This manual provides step-by-step instructions on employing SHARP, including how to download and install the code, how to build the drivers for a test case, how to perform a calculation and how to visualize the results. Since SHARP has some specific library and environment dependencies, it is highly recommended that the user read this manual prior to installing SHARP. Verification test cases are included to check proper installation of each module. It is suggested that the new user should first follow the step-by-step instructions provided for a test problem in this manual to understand the basic procedure of using SHARP before using SHARP for his/her own analysis. Both reference output and scripts are provided along with the test cases in order to verify correct installation and execution of the SHARP package. At the end of this manual, detailed instructions are provided on how to create a new test case so that the user can perform novel multi-physics calculations with SHARP. Frequently asked questions are listed at the end of this manual to help the user to troubleshoot issues.

  20. SHARP User Manual

    International Nuclear Information System (INIS)

    Yu, Y. Q.; Shemon, E. R.; Thomas, J. W.; Mahadevan, Vijay S.; Rahaman, Ronald O.; Solberg, Jerome

    2016-01-01

    SHARP is an advanced modeling and simulation toolkit for the analysis of nuclear reactors. It comprises several components including physical modeling tools, tools to integrate the physics codes for multi-physics analyses, and a set of tools to couple the codes within the MOAB framework. Physics modules currently include the neutronics code PROTEUS, the thermal-hydraulics code Nek5000, and the structural mechanics code Diablo. This manual focuses on performing multi-physics calculations with the SHARP ToolKit. Manuals for the three individual physics modules are available with the SHARP distribution to help the user to either carry out the primary multi-physics calculation with basic knowledge or perform further advanced development with in-depth knowledge of these codes. This manual provides step-by-step instructions on employing SHARP, including how to download and install the code, how to build the drivers for a test case, how to perform a calculation and how to visualize the results. Since SHARP has some specific library and environment dependencies, it is highly recommended that the user read this manual prior to installing SHARP. Verification test cases are included to check proper installation of each module. It is suggested that the new user should first follow the step-by-step instructions provided for a test problem in this manual to understand the basic procedure of using SHARP before using SHARP for his/her own analysis. Both reference output and scripts are provided along with the test cases in order to verify correct installation and execution of the SHARP package. At the end of this manual, detailed instructions are provided on how to create a new test case so that the user can perform novel multi-physics calculations with SHARP. Frequently asked questions are listed at the end of this manual to help the user to troubleshoot issues.

  1. GRACE manual

    International Nuclear Information System (INIS)

    Ishikawa, T.; Kawabata, S.; Shimizu, Y.; Kaneko, T.; Kato, K.; Tanaka, H.

    1993-02-01

    This manual covers three kinds of material: the theoretical background for calculating the cross sections of elementary processes, the usage of the GRACE system, and its technical details. Throughout this manual we take the tree-level process e⁺e⁻ → W⁺W⁻γ as an example, including the e±-scalar boson interactions. The actual FORTRAN source code for this process is attached in the relevant sections, as well as the results of the calculation, which might be a great help for understanding the practical use of the system. (J.P.N.)

  2. Atom probe tomography simulations and density functional theory calculations of bonding energies in Cu3Au

    KAUST Repository

    Boll, Torben

    2012-10-01

    In this article the Cu-Au binding energy in Cu3Au is determined by comparing experimental atom probe tomography (APT) results to simulations. The resulting bonding energy is supported by density functional theory calculations. The APT simulations are based on the Müller-Schottky equation, which is modified to include different atomic neighborhoods and their characteristic bonds. The local environment is considered up to the fifth next nearest neighbors. To compare the experimental with simulated APT data, the AtomVicinity algorithm, which provides statistical information about the positions of the neighboring atoms, is applied. The quality of this information is influenced by the field evaporation behavior of the different species, which is connected to the bonding energies. © Microscopy Society of America 2012.

  3. THE ANALYSIS OF THE COMPREHENSIVE INSURANCE DEMAND FOR TURKEY USING UTILITY THEORY AND SYSTEM SIMULATION

    Directory of Open Access Journals (Sweden)

    Murat KIRKAĞAÇ

    2017-03-01

    In this study, the demand for comprehensive insurance is analysed using utility theory and system simulation. A simulation study is performed to assess the behaviour of individuals with different income levels with respect to the demand for comprehensive insurance. Simulation assumptions and input-output variables are determined using a real data set from a Turkish insurance company and the report on insurance activities in Turkey for the year 2014. The effects of income level, expected claim severity and premium level on the demand for insurance are investigated. It is concluded that while an increase in income level and expected claim severity causes an increase in the demand, an increase in premium level causes a decrease in the demand.
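
    The decision criterion behind such a utility-theory analysis can be sketched in a few lines: a risk-averse individual buys cover when the expected utility of final wealth with insurance exceeds the expected utility without it. The example below is a generic illustration of that criterion, not the paper's calibrated simulation; the wealth, premium, claim frequency, severity distribution and risk-aversion level are all invented placeholders.

    ```python
    import numpy as np

    # Expected-utility criterion for buying comprehensive cover, with CRRA
    # utility u(w) = w**(1 - r) / (1 - r). All figures are placeholders.
    rng = np.random.default_rng(2)
    wealth = 50_000.0          # initial wealth
    premium = 700.0            # annual comprehensive premium (assumed)
    vehicle_value = 25_000.0   # losses are capped at the value of the insured car
    risk_aversion = 2.0        # CRRA coefficient r (assumed)

    def utility(w):
        return w**(1.0 - risk_aversion) / (1.0 - risk_aversion)

    # One year of losses: a claim occurs with 10% probability; its severity is
    # lognormal, capped at the vehicle value (assumed loss model).
    n_sim = 200_000
    claim = rng.random(n_sim) < 0.10
    severity = np.minimum(rng.lognormal(mean=8.6, sigma=0.7, size=n_sim), vehicle_value)
    loss = np.where(claim, severity, 0.0)

    eu_uninsured = utility(wealth - loss).mean()
    eu_insured = utility(wealth - premium)      # losses are fully reimbursed

    print("expected loss          :", loss.mean())
    print("E[u] without insurance :", eu_uninsured)
    print("E[u] with insurance    :", eu_insured)
    print("buying maximizes E[u]? :", bool(eu_insured > eu_uninsured))
    ```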

  4. Theory and computer simulation of structure, transport, and flow of fluid in micropores

    International Nuclear Information System (INIS)

    Davis, H.T.; Bitsanis, I.; Vanderlick, T.K.; Tirrell, M.V.

    1987-01-01

    An overview is given of recent progress made in our laboratory on this topic. The density profiles of fluid in micropores are found by solving numerically an approximate Yvon-Born-Green equation. A related local average density model (LADM) allows prediction of transport and flow in inhomogeneous fluids from density profiles. A rigorous extension of the Enskog theory of transport is also outlined. Simple results of this general approach for the tracer diffusion and Couette flow between planar micropore walls are presented. Equilibrium and flow (molecular dynamics) simulations are compared with the theoretical predictions. Simulated density profiles of the micropore fluid exhibit substantial fluid layering. The number and sharpness of fluid layers depend sensitively on the pore width. The solvation force and the pore average density and diffusivity are oscillating functions of the pore width. The theoretical predictions for these quantities agree qualitatively with the simulation results. The flow simulations indicate that the flow does not affect the fluid structure and diffusivity even at extremely high shear rates (10¹⁰ s⁻¹). The fluid structure induces large deviations of the shear stress and the effective viscosity from the bulk fluid values. The flow velocity profiles are correlated with the density profiles and differ from those of a bulk fluid. The LADM and extended Enskog theory predictions for the velocity profiles and the pore average diffusivity agree very well with each other and with the simulation results. The LADM predictions for the shear stress and the effective viscosity agree fairly well with the simulation results.

  5. Backward wave oscillators with rippled wall resonators: Analytic theory and numerical simulation

    International Nuclear Information System (INIS)

    Swegle, J.A.; Poukey, J.W.

    1985-01-01

    The 3-D analytic theory is based on the approximation that the device is infinitely long. In the absence of an electron beam, the theory is exact and allows us to compute the dispersion characteristics of the cold structure. With the inclusion of a thin electron beam, we can compute the growth rates resulting from the interaction between a waveguide mode of the structure and the slower space charge wave on the beam. In the limit of low beam currents, the full dispersion relation based on an electromagnetic analysis can be placed in correspondence with the circuit theory of Pierce. Numerical simulations permit us to explore the saturated, large amplitude operating regime for TM axisymmetric modes. The scaling of operating frequency, peak power, and operating efficiency with beam and resonator parameters is examined. The analytic theory indicates that growth rates are largest for the TM₀₁ modes and decrease with both the radial and azimuthal mode numbers. Another interesting trend is that for a fixed cathode voltage and slow wave structure, growth rates peak for a beam current below the space charge limiting value and decrease for both larger and smaller currents. The simulations show waves that grow from noise without any input signal, so that the system functions as an oscillator. The TM₀₁ mode predominates in all simulations. While a minimum device length is required for the start of oscillations, it appears that if the slow wave structure is too long, output power is decreased by a transfer of wave energy back to the electrons. Comparisons have been made between the analytical and numerical results, as well as with experimental data obtained at Sandia National Laboratories

  6. Lattice simulation of a center symmetric three dimensional effective theory for SU(2) Yang-Mills

    International Nuclear Information System (INIS)

    Smith, Dominik

    2010-01-01

    We present lattice simulations of a center symmetric dimensionally reduced effective field theory for SU(2) Yang-Mills which employ thermal Wilson lines and three-dimensional magnetic fields as fundamental degrees of freedom. The action is composed of a gauge invariant kinetic term, spatial gauge fields and a potential for the Wilson line which includes a "fuzzy" bag term to generate non-perturbative fluctuations between Z(2) degenerate ground states. The model is studied in the limit where the gauge fields are set to zero as well as the full model with gauge fields. We confirm that, at moderately weak coupling, the "fuzzy" bag term leads to eigenvalue repulsion in a finite region above the deconfining phase transition which shrinks in the extreme weak-coupling limit. A non-trivial Z(N) symmetric vacuum arises in the confined phase. The effective potential for the Polyakov loop in the theory with gauge fields is extracted from the simulations including all modes of the loop as well as for cooled configurations where the hard modes have been averaged out. The former is found to exhibit a non-analytic contribution while the latter can be described by a mean-field like ansatz with quadratic and quartic terms, plus a Vandermonde potential which depends upon the location within the phase diagram. Other results include the exact location of the phase boundary in the plane spanned by the coupling parameters, correlation lengths of several operators in the magnetic and electric sectors and the spatial string tension. We also present results from simulations of the full 4D Yang-Mills theory and attempt to make a qualitative comparison to the 3D effective theory. (orig.)

  7. Lattice simulation of a center symmetric three dimensional effective theory for SU(2) Yang-Mills

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Dominik

    2010-11-17

    We present lattice simulations of a center symmetric dimensionally reduced effective field theory for SU(2) Yang-Mills which employ thermal Wilson lines and three-dimensional magnetic fields as fundamental degrees of freedom. The action is composed of a gauge invariant kinetic term, spatial gauge fields and a potential for the Wilson line which includes a "fuzzy" bag term to generate non-perturbative fluctuations between Z(2) degenerate ground states. The model is studied in the limit where the gauge fields are set to zero as well as the full model with gauge fields. We confirm that, at moderately weak coupling, the "fuzzy" bag term leads to eigenvalue repulsion in a finite region above the deconfining phase transition which shrinks in the extreme weak-coupling limit. A non-trivial Z(N) symmetric vacuum arises in the confined phase. The effective potential for the Polyakov loop in the theory with gauge fields is extracted from the simulations including all modes of the loop as well as for cooled configurations where the hard modes have been averaged out. The former is found to exhibit a non-analytic contribution while the latter can be described by a mean-field like ansatz with quadratic and quartic terms, plus a Vandermonde potential which depends upon the location within the phase diagram. Other results include the exact location of the phase boundary in the plane spanned by the coupling parameters, correlation lengths of several operators in the magnetic and electric sectors and the spatial string tension. We also present results from simulations of the full 4D Yang-Mills theory and attempt to make a qualitative comparison to the 3D effective theory. (orig.)

  8. An object oriented code for simulating supersymmetric Yang-Mills theories

    Science.gov (United States)

    Catterall, Simon; Joseph, Anosh

    2012-06-01

    We present SUSY_LATTICE - a C++ program that can be used to simulate certain classes of supersymmetric Yang-Mills (SYM) theories, including the well known N=4 SYM in four dimensions, on a flat Euclidean space-time lattice. Discretization of SYM theories is an old problem in lattice field theory. It has resisted solution until recently when new ideas drawn from orbifold constructions and topological field theories have been brought to bear on the question. The result has been the creation of a new class of lattice gauge theories in which the lattice action is invariant under one or more supersymmetries. The resultant theories are local, free of doublers and also possess exact gauge-invariance. In principle they form the basis for a truly non-perturbative definition of the continuum SYM theories. In the continuum limit they reproduce versions of the SYM theories formulated in terms of twisted fields, which on a flat space-time is just a change of the field variables. In this paper, we briefly review these ideas and then go on to provide the details of the C++ code. We sketch the design of the code, with particular emphasis being placed on SYM theories with N=(2,2) in two dimensions and N=4 in three and four dimensions, making one-to-one comparisons between the essential components of the SYM theories and their corresponding counterparts appearing in the simulation code. The code may be used to compute several quantities associated with the SYM theories such as the Polyakov loop, mean energy, and the width of the scalar eigenvalue distributions. Program summary: Program title: SUSY_LATTICE; Catalogue identifier: AELS_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELS_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 9315; No. of bytes in distributed program ...

  9. Integrating Soft Set Theory and Fuzzy Linguistic Model to Evaluate the Performance of Training Simulation Systems.

    Science.gov (United States)

    Chang, Kuei-Hu; Chang, Yung-Chia; Chain, Kai; Chung, Hsiang-Yu

    2016-01-01

    The advancement of high technologies and the arrival of the information age have caused changes to modern warfare. The military forces of many countries have partially replaced real training drills with training simulation systems to achieve combat readiness. However, numerous types of training simulation systems are used in military settings. In addition, differences in system set up time, functions, the environment, and the competency of system operators, as well as incomplete information have made it difficult to evaluate the performance of training simulation systems. To address the aforementioned problems, this study integrated analytic hierarchy process, soft set theory, and the fuzzy linguistic representation model to evaluate the performance of various training simulation systems. Furthermore, importance-performance analysis was adopted to examine the influence of saving costs and training safety of training simulation systems. The findings of this study are expected to facilitate the application of military training simulation systems, avoid the waste of resources (e.g., low utility and idle time), and provide data for subsequent applications and analysis. To verify the method proposed in this study, the numerical examples of the performance evaluation of training simulation systems were adopted and compared with the numerical results of an AHP and a novel AHP-based ranking technique. The results verified that not only could expert-provided questionnaire information be fully considered to lower the repetition rate of performance ranking, but a two-dimensional graph could also be used to help administrators allocate limited resources, thereby enhancing the investment benefits and training effectiveness of a training simulation system.

  10. Development and validation of rear impact computer simulation model of an adult manual transit wheelchair with a seated occupant.

    Science.gov (United States)

    Salipur, Zdravko; Bertocci, Gina

    2010-01-01

    It has been shown that ANSI WC19 transit wheelchairs that are crashworthy in frontal impact exhibit catastrophic failures in rear impact and may not be able to provide stable seating support and thus occupant protection for the wheelchair occupant. Thus far only limited sled test and computer simulation data have been available to study rear impact wheelchair safety. Computer modeling can be used as an economic and comprehensive tool to gain critical knowledge regarding wheelchair integrity and occupant safety. This study describes the development and validation of a computer model simulating an adult wheelchair-seated occupant subjected to a rear impact event. The model was developed in MADYMO and validated rigorously using the results of three similar sled tests conducted to specifications provided in the draft ISO/TC 173 standard. Outcomes from the model can provide critical wheelchair loading information to wheelchair and tiedown manufacturers, resulting in safer wheelchair designs for rear impact conditions. (c) 2009 IPEM. Published by Elsevier Ltd. All rights reserved.

  11. Continuum modeling of twinning, amorphization, and fracture: theory and numerical simulations

    Science.gov (United States)

    Clayton, J. D.; Knap, J.

    2018-03-01

    A continuum mechanical theory is used to model physical mechanisms of twinning, solid-solid phase transformations, and failure by cavitation and shear fracture. Such a sequence of mechanisms has been observed in atomic simulations and/or experiments on the ceramic boron carbide. In the present modeling approach, geometric quantities such as the metric tensor and connection coefficients can depend on one or more director vectors, also called internal state vectors. After development of the general nonlinear theory, a first problem class considers simple shear deformation of a single crystal of this material. For homogeneous fields or stress-free states, algebraic systems or ordinary differential equations are obtained that can be solved by numerical iteration. Results are in general agreement with atomic simulation, without introduction of fitted parameters. The second class of problems addresses the more complex mechanics of heterogeneous deformation and stress states involved in deformation and failure of polycrystals. Finite element calculations, in which individual grains in a three-dimensional polycrystal are fully resolved, invoke a partially linearized version of the theory. Results provide new insight into effects of crystal morphology, activity or inactivity of different inelasticity mechanisms, and imposed deformation histories on strength and failure of the aggregate under compression and shear. The importance of incorporation of inelastic shear deformation in realistic models of amorphization of boron carbide is noted, as is a greater reduction in overall strength of polycrystals containing one or a few dominant flaws rather than many diffusely distributed microcracks.

  12. A Monte Carlo simulation for the field theory with quartic interaction

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Sergio Mittmann dos [Instituto Federal de Educacao, Ciencia e Tecnologia do Rio Grande do Sul (IFRS), Porto Alegre, RS (Brazil)

    2011-07-01

    Full text: In the works [1: S. M. Santos, B. E. J. Bodmann and A. T. Gomez, Um novo metodo computacional para a teoria de campos na rede: resultados preliminares, IV Escola do Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, 2002; and 2: S. M. Santos and B. E. J. Bodmann, Simulacao na rede de teorias de campos quanticos, XXVIII Congresso Nacional de Matematica Aplicada e Computacional (CNMAC), Sao Paulo, 2005], a computational method on the lattice was developed for the problem known as scalar field theory with quartic interaction (see, for instance, J. R. Klauder, Beyond conventional quantization, Cambridge: Cambridge University Press, 2000). That work introduced an algorithm which allows the simulation of a given field theory independently of the lattice spacing, by redefining the fields and the parameters (the mass m and the coupling constant g). This kind of approach permits varying the dimension of the lattice without changing the computational complexity of the algorithm. A simulation was made using the Monte Carlo method, in which the renormalized mass m_R, the renormalized coupling constant g_R and the two-point correlation function were determined successfully. In the present work, the original computational method is used for new simulations. Now, the Monte Carlo method is used not only to run the simulation of the algorithm, as in [1, 2], but also to determine the adjustable parameters (the mass and the coupling constant) that were introduced ad hoc in [1, 2]. This work presents the first outcomes of these simulations, in which better results than those of [1, 2] were obtained for the renormalized mass and the renormalized coupling constant. (author)
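
    For readers unfamiliar with the underlying technique, the sketch below is a bare-bones Metropolis simulation of the two-dimensional lattice scalar field with quartic interaction. It is the generic textbook algorithm with arbitrary demonstration parameters, not the lattice-spacing-independent rescaling developed in the work cited above.

    ```python
    import numpy as np

    # Metropolis Monte Carlo for the 2D lattice phi^4 theory with action
    #   S = sum_x [ 0.5*sum_mu (phi(x+mu) - phi(x))^2
    #               + 0.5*m2*phi(x)^2 + (g/24)*phi(x)^4 ].
    # Lattice size, couplings and sweep counts are arbitrary demonstration values.
    rng = np.random.default_rng(3)
    L = 16                        # L x L lattice with periodic boundaries
    m2, g = 0.5, 1.0              # bare mass squared and quartic coupling (assumed)
    delta = 1.0                   # Metropolis proposal width
    n_therm, n_meas = 500, 2000   # thermalization and measurement sweeps

    phi = rng.normal(0.0, 1.0, size=(L, L))

    def sweep(phi):
        """One Metropolis sweep over all lattice sites."""
        for i in range(L):
            for j in range(L):
                old = phi[i, j]
                new = old + delta * (2.0 * rng.random() - 1.0)
                nbr_sum = (phi[(i + 1) % L, j] + phi[(i - 1) % L, j]
                           + phi[i, (j + 1) % L] + phi[i, (j - 1) % L])
                dS = (2.0 * (new**2 - old**2) - (new - old) * nbr_sum
                      + 0.5 * m2 * (new**2 - old**2)
                      + (g / 24.0) * (new**4 - old**4))
                if dS <= 0.0 or rng.random() < np.exp(-dS):
                    phi[i, j] = new

    for _ in range(n_therm):
        sweep(phi)

    phi2 = []
    for _ in range(n_meas):
        sweep(phi)
        phi2.append(np.mean(phi**2))

    # Naive error bar; it ignores autocorrelation between successive sweeps.
    print("<phi^2> =", np.mean(phi2), "+/-", np.std(phi2) / np.sqrt(len(phi2)))
    ```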

  13. Simulations of a stretching bar using a plasticity model from the shear transformation zone theory

    Energy Technology Data Exchange (ETDEWEB)

    Rycroft, Chris H.; Gibou, Frederic

    2010-06-05

    An Eulerian simulation is developed to study an elastoplastic model of amorphous materials that is based upon the shear transformation zone theory developed by Langer and coworkers. In this theory, plastic deformation is controlled by an effective temperature that measures the amount of configurational disorder in the material. The simulation is used to model ductile fracture in a stretching bar that initially contains a small notch, and the effects of many of the model parameters are examined. The simulation tracks the shape of the bar using the level set method. Within the bar, a finite difference discretization is employed that makes use of the essentially non-oscillatory (ENO) scheme. The system of equations is moderately stiff due to the presence of large elastic constants, and one of the key numerical challenges is to accurately track the level set and construct extrapolated field values for use in boundary conditions. A new approach to field extrapolation is discussed that is second order accurate and requires a constant amount of work per gridpoint.

  14. Reliability theory for repair service organization simulation and increase of innovative attraction of industrial enterprises

    Science.gov (United States)

    Dolzhenkova, E. V.; Iurieva, L. V.

    2018-05-01

    The study presents the author's algorithm for simulating the organization of an industrial enterprise's repair service on the basis of reliability theory, together with the results of its application. Monitoring of the repair service organization is proposed on the basis of the enterprise's state indexes for its main resources (equipment, labour, finances, repair areas), which allows a quantitative evaluation of the reliability level as a summary rating of these parameters and ensures an appropriate level of operational reliability of the serviced technical objects. Under conditions of tough competition, the following approach is advisable: the higher the efficiency of production and of the repair service itself, the higher the innovative attractiveness of the industrial enterprise. The calculations show that, in order to prevent inefficient production losses and to reduce repair costs, it is advisable to apply reliability theory. The overall reliability rating calculated with the author's algorithm has low values. Processing of the statistical data yields reliability characteristics for the different workshops and services of an industrial enterprise, which makes it possible to determine the failure rates of the various units of equipment and to establish the reliability indexes necessary for the subsequent mathematical simulation. The proposed simulation algorithm contributes to increasing the efficiency of the repair service organization and improving the innovative attractiveness of an industrial enterprise.
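
    As a minimal illustration of the reliability-theory ingredients referred to above (generic textbook formulas with invented resource names and rates, not the author's algorithm), the sketch below combines exponential failure models for several repair-service resources into a series-system reliability and a steady-state availability figure.

    ```python
    import numpy as np

    # Series-system reliability and steady-state availability from exponential
    # failure/repair models. Resource names and rates are invented placeholders.
    resources = {
        # name: (failure rate lambda [1/h], repair rate mu [1/h])
        "equipment":   (1.0e-3, 0.05),
        "labour":      (5.0e-4, 0.10),
        "finances":    (2.0e-4, 0.02),
        "repair_area": (1.0e-4, 0.05),
    }
    t = 1000.0  # mission time in hours (assumed)

    # Component reliability R_i(t) = exp(-lambda_i * t); series system: product.
    r_sys = np.prod([np.exp(-lam * t) for lam, _ in resources.values()])

    # Steady-state availability A_i = mu / (lambda + mu); series system: product.
    a_sys = np.prod([mu / (lam + mu) for lam, mu in resources.values()])

    print(f"series-system reliability over {t:.0f} h: {r_sys:.3f}")
    print(f"series-system steady-state availability: {a_sys:.3f}")
    ```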

  15. Characterization of Bitumen Micro-Mechanical Behaviors Using AFM, Phase Dynamics Theory and MD Simulation

    Directory of Open Access Journals (Sweden)

    Yue Hou

    2017-02-01

    A fundamental understanding of micro-mechanical behaviors in bitumen, including phase separation, micro-friction and micro-abrasion, can help pavement engineers better understand the mechanical performance of bitumen at the macroscale. Recent research shows that the microstructure evolution in bitumen directly affects its surface structure and micro-mechanical performance. In this study, the bitumen microstructure and micro-mechanical behaviors are studied using Atomic Force Microscopy (AFM) experiments, Phase Dynamics Theory and Molecular Dynamics (MD) Simulation. The AFM results show that different phase structures occur at the surface of the bitumen samples under certain thermodynamic conditions at the microscale. The phenomenon can be explained using phase dynamics theory, where the effects of the stability parameter and temperature on bitumen microstructure and micro-mechanical behavior are studied in combination with MD simulation. The simulation results show that the saturates phase, in contrast to the naphthene aromatics phase, plays a major role in bitumen micro-mechanical behavior. A high stress zone occurs at the interface between the saturates phase and the naphthene aromatics phase, which may form discontinuities that further affect the frictional performance of bitumen.

  16. Characterization of Bitumen Micro-Mechanical Behaviors Using AFM, Phase Dynamics Theory and MD Simulation.

    Science.gov (United States)

    Hou, Yue; Wang, Linbing; Wang, Dawei; Guo, Meng; Liu, Pengfei; Yu, Jianxin

    2017-02-21

    A fundamental understanding of micro-mechanical behaviors in bitumen, including phase separation, micro-friction and micro-abrasion, can help pavement engineers better understand the mechanical performance of bitumen at the macroscale. Recent research shows that the microstructure evolution in bitumen directly affects its surface structure and micro-mechanical performance. In this study, the bitumen microstructure and micro-mechanical behaviors are studied using Atomic Force Microscopy (AFM) experiments, Phase Dynamics Theory and Molecular Dynamics (MD) Simulation. The AFM results show that different phase structures occur at the surface of the bitumen samples under certain thermodynamic conditions at the microscale. The phenomenon can be explained using phase dynamics theory, where the effects of the stability parameter and temperature on bitumen microstructure and micro-mechanical behavior are studied in combination with MD simulation. The simulation results show that the saturates phase, in contrast to the naphthene aromatics phase, plays a major role in bitumen micro-mechanical behavior. A high stress zone occurs at the interface between the saturates phase and the naphthene aromatics phase, which may form discontinuities that further affect the frictional performance of bitumen.

  17. Recent advances in the theory and simulation of pellet ablation and fast fuel relocation in tokamaks

    International Nuclear Information System (INIS)

    Parks, P.B.; Baylor, L.R.; Ishizaki, R.; Jardin, S.C.; Samtaney, R.

    2005-01-01

    This paper presents new theory and simulation of pellet ablation, and the rapid cross-field redistribution of the ionized pellet mass following pellet injection in tokamaks. The first 2-D time-dependent simulations describing the expansion of pellet ablation flow against the magnetic field are presented here using the Eulerian code CAP. The early-time expansion is characterized by the formation of an ellipsoidal diamagnetic cavity surrounding the pellet, which diverts heat flux around the pellet, thereby reducing the ablation rate. Near-pellet cloud properties from CAP provide initial conditions for the subsequent E×B advection of the ionized clouds caused by polarization in the inhomogeneous toroidal magnetic field. The first complete set of time-dependent equations describing mass redistribution has been developed and solved for numerically using the PRL code. New effects identified, including curvature drive by near sonic field-aligned flows, rotational transform of the magnetic field lines and magnetic shear, are considered from the viewpoint of the parallel vorticity equation. Close agreement between theory and experimental fuel deposition profiles is obtained for both inner and outer wall pellet injection on the DIII-D tokamak, providing improved predictive capability for ITER. A new 3-D MHD simulation code AMR was started, which provides the required fine-scale mesh size needed for accurate modeling of pellet clouds having sharp perpendicular-to-B gradients. (author)

  18. Theories, Methods and Numerical Technology of Sheet Metal Cold and Hot Forming Analysis, Simulation and Engineering Applications

    CERN Document Server

    Hu, Ping; Liu, Li-zhong; Zhu, Yi-guo

    2013-01-01

    Over the last 15 years, the application of innovative steel concepts in the automotive industry has increased steadily. Numerical simulation technology of hot forming of high-strength steel allows engineers to modify the formability of hot forming steel metals and to optimize die design schemes. Theories, Methods and Numerical Technology of Sheet Metal Cold and Hot Forming focuses on hot and cold forming theories, numerical methods, relative simulation and experiment techniques for high-strength steel forming and die design in the automobile industry. Theories, Methods and Numerical Technology of Sheet Metal Cold and Hot Forming introduces the general theories of cold forming, then expands upon advanced hot forming theories and simulation methods, including: • the forming process, • constitutive equations, • hot boundary constraint treatment, and • hot forming equipment and experiments. Various calculation methods of cold and hot forming, based on the authors’ experience in commercial CAE software f...

  19. Classical density functional theory & simulations on a coarse-grained model of aromatic ionic liquids.

    Science.gov (United States)

    Turesson, Martin; Szparaga, Ryan; Ma, Ke; Woodward, Clifford E; Forsman, Jan

    2014-05-14

    A new classical density functional approach is developed to accurately treat a coarse-grained model of room temperature aromatic ionic liquids. Our major innovation is the introduction of charge-charge correlations, which are treated in a simple phenomenological way. We test this theory on a generic coarse-grained model for aromatic RTILs with oligomeric forms for both cations and anions, approximating 1-alkyl-3-methyl imidazoliums and BF₄⁻, respectively. We find that predictions by the new density functional theory for fluid structures at charged surfaces are very accurate, as compared with molecular dynamics simulations, across a range of surface charge densities and lengths of the alkyl chain. Predictions of interactions between charged surfaces are also presented.

  20. Nonlocal strain gradient theory calibration using molecular dynamics simulation based on small scale vibration of nanotubes

    Energy Technology Data Exchange (ETDEWEB)

    Mehralian, Fahimeh [Mechanical Engineering Department, Shahrekord University, Shahrekord (Iran, Islamic Republic of); Tadi Beni, Yaghoub, E-mail: tadi@eng.sku.ac.ir [Faculty of Engineering, Shahrekord University, Shahrekord (Iran, Islamic Republic of); Karimi Zeverdejani, Mehran [Mechanical Engineering Department, Shahrekord University, Shahrekord (Iran, Islamic Republic of)

    2017-06-01

    Featuring two small length scale parameters, nonlocal strain gradient theory is utilized to investigate the free vibration of nanotubes. A new size-dependent shell model formulation is developed using first-order shear deformation theory. The governing equations and boundary conditions are obtained using Hamilton's principle and solved for simply supported boundary conditions. As the main purpose of this study, since the values of the two small length scale parameters are still unknown, they are calibrated by means of molecular dynamics (MD) simulations. Then, the influences of different parameters such as the nonlocal parameter, scale factor, length and thickness on the vibration characteristics of nanotubes are studied. It is also shown that an increase in thickness and a decrease in length intensify the effects of the nonlocal parameter and scale factor.

  1. Metrication manual

    International Nuclear Information System (INIS)

    Harper, A.F.A.; Digby, R.B.; Thong, S.P.; Lacey, F.

    1978-04-01

    In April 1978 a meeting of senior metrication officers convened by the Commonwealth Science Council of the Commonwealth Secretariat, was held in London. The participants were drawn from Australia, Bangladesh, Britain, Canada, Ghana, Guyana, India, Jamaica, Papua New Guinea, Solomon Islands and Trinidad and Tobago. Among other things, the meeting resolved to develop a set of guidelines to assist countries to change to SI and to compile such guidelines in the form of a working manual

  2. Rate Theory Modeling and Simulation of Silicide Fuel at LWR Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Miao, Yinbin [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Ye, Bei [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Hofman, Gerard [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Yacout, Abdellatif [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States). Fuel Modeling and Simulation; Mei, Zhi-Gang [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division

    2016-08-29

    Since uranium silicide (U3Si2) is a promising candidate for the accident tolerant fuel (ATF) used in light water reactors (LWRs), its fuel performance at LWR conditions needs to be well understood. In this report, a rate theory model was developed based on existing experimental data and density functional theory (DFT) calculations so as to predict the fission gas behavior in U3Si2 at LWR conditions. The fission gas behavior of U3Si2 can be divided into three temperature regimes. During steady-state operation, the majority of the fission gas stays in intragranular bubbles, whereas the dominance of intergranular bubbles and fission gas release only occurs beyond 1000 K. The steady-state rate theory model was also used as a reference to establish a gaseous swelling correlation of U3Si2 for the BISON code. Meanwhile, an overpressurized bubble model was also developed so that the fission gas behavior during a LOCA can be simulated. The LOCA simulation showed that intragranular bubbles are still dominant after a 70 second LOCA, resulting in a controllable gaseous swelling. The fission gas behavior of U3Si2 at LWR conditions is benign according to the rate theory prediction at both steady-state and LOCA conditions, which provides important references for the qualification of U3Si2 as a LWR fuel material with excellent fuel performance and enhanced accident tolerance.

  3. Toward a Unified Theory of Work: Organizational Simulations and Policy Analyses

    National Research Council Canada - National Science Library

    Vaughan, David

    2002-01-01

    .... This unified theory of work will connect theories of human traits and states, theories of task and job characteristics, theories of job/task performance, and perhaps theories of organizational behavior...

  4. Confabulating, misremembering, relearning: The simulation theory of memory and unsuccessful remembering

    Directory of Open Access Journals (Sweden)

    Kourken Michaelian

    2016-11-01

    This article develops a taxonomy of memory errors in terms of three conditions: the accuracy of the memory representation, the reliability of the memory process, and the internality (with respect to the remembering subject) of that process. Unlike previous taxonomies, which appeal to retention of information rather than reliability or internality, this taxonomy can accommodate not only misremembering (e.g., the DRM effect), falsidical confabulation, and veridical relearning but also veridical confabulation and falsidical relearning. Moreover, because it does not assume that successful remembering presupposes retention of information, the taxonomy is compatible with recent simulation theories of remembering.

  5. Dispersion and damping of two-dimensional dust acoustic waves: theory and simulation

    International Nuclear Information System (INIS)

    Upadhyaya, Nitin; Miskovic, Z L; Hou, L-J

    2010-01-01

    A two-dimensional generalized hydrodynamics (GH) model is developed to study the full spectrum of both longitudinal and transverse dust acoustic waves (DAW) in strongly coupled complex (dusty) plasmas, with the memory-function formalism implemented to enforce high-frequency sum rules. Results are compared with earlier theories (such as the quasi-localized charge approximation and its extended version) and with a self-consistent Brownian dynamics simulation. It is found that the GH approach provides a good account not only of the dispersion relations but also of the damping rates of the DAW modes over a wide range of coupling strengths, an issue hitherto not fully addressed for dusty plasmas.

  6. The interstellar medium, expanding nebulae and triggered star formation theory and simulations

    CERN Document Server

    Bisbas, Thomas G

    2016-01-01

    This brief brings together the theoretical aspects of star formation and ionized regions with the most up-to-date simulations and observations. Beginning with the basic theory of star formation, the physics of expanding HII regions is reviewed in detail, followed by a discussion of how a massive star can give birth to tens or hundreds of other stars. The theoretical description of star formation is illustrated in both simplified and state-of-the-art numerical simulations, showing more clearly how feedback from massive stars can trigger star and planet formation. This is combined with spectacular images of nebulae taken by talented amateur astronomers, which are likely to stimulate the reader to observe the structure of nebulae from a different point of view and to better understand the associated star formation therein.

  7. Simulations of nanocrystals under pressure: Combining electronic enthalpy and linear-scaling density-functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Corsini, Niccolò R. C., E-mail: niccolo.corsini@imperial.ac.uk; Greco, Andrea; Haynes, Peter D. [Department of Physics and Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Hine, Nicholas D. M. [Department of Physics and Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Cavendish Laboratory, J. J. Thompson Avenue, Cambridge CB3 0HE (United Kingdom); Molteni, Carla [Department of Physics, King's College London, Strand, London WC2R 2LS (United Kingdom)

    2013-08-28

    We present an implementation in a linear-scaling density-functional theory code of an electronic enthalpy method, which has been found to be natural and efficient for the ab initio calculation of finite systems under hydrostatic pressure. Based on a definition of the system volume as that enclosed within an electronic density isosurface [M. Cococcioni, F. Mauri, G. Ceder, and N. Marzari, Phys. Rev. Lett. 94, 145501 (2005)], it supports both geometry optimizations and molecular dynamics simulations. We introduce an approach for calibrating the parameters defining the volume in the context of geometry optimizations and discuss their significance. Results in good agreement with simulations using explicit solvents are obtained, validating our approach. Size-dependent pressure-induced structural transformations and variations in the energy gap of hydrogenated silicon nanocrystals are investigated, including one comparable in size to recent experiments. A detailed analysis of the polyamorphic transformations reveals three types of amorphous structures, and their persistence on depressurization is assessed.

  8. E × B electron drift instability in Hall thrusters: Particle-in-cell simulations vs. theory

    Science.gov (United States)

    Boeuf, J. P.; Garrigues, L.

    2018-06-01

    The E × B Electron Drift Instability (E × B EDI), also called the Electron Cyclotron Drift Instability, has been observed in recent particle simulations of Hall thrusters and is a possible candidate to explain anomalous electron transport across the magnetic field in these devices. This instability is characterized by the development of an azimuthal wave with a wavelength in the mm range and a velocity on the order of the ion acoustic velocity, which enhances electron transport across the magnetic field. In this paper, we study the development and convection of the E × B EDI in the acceleration and near-plume regions of a Hall thruster using a simplified 2D axial-azimuthal Particle-In-Cell simulation. The simulation is collisionless and the ionization profile is not self-consistent but rather is given as an input parameter of the model. The aim is to study the development and properties of the instability for different values of the ionization rate (i.e., of the total ion production rate or current) and to compare the results with theory. An important result is that the wavelength of the simulated azimuthal wave scales as the electron Debye length and that its frequency is on the order of the ion plasma frequency. This is consistent with the theory predicting destruction of electron cyclotron resonance of the E × B EDI in the non-linear regime, resulting in a transition to an ion acoustic instability. The simulations also show that for plasma densities smaller than under nominal conditions of Hall thrusters, the field fluctuations induced by the E × B EDI are no longer sufficient to significantly enhance electron transport across the magnetic field, and transit time instabilities develop in the axial direction. The conditions and results of the simulations are described in detail in this paper and can serve as benchmarks for comparisons between different simulation codes. Such benchmarks would be very useful to study the role of numerical noise (numerical

  9. Applicability of mode-coupling theory to polyisobutylene: a molecular dynamics simulation study.

    Science.gov (United States)

    Khairy, Y; Alvarez, F; Arbe, A; Colmenero, J

    2013-10-01

    The applicability of Mode Coupling Theory (MCT) to the glass-forming polymer polyisobutylene (PIB) has been explored using fully atomistic molecular dynamics simulations. MCT predictions for the so-called asymptotic regime have been successfully tested on the dynamic structure factor and the self-correlation function of PIB main-chain carbons calculated from the simulated cell. The factorization theorem and the time-temperature superposition principle are satisfied. A consistent fitting procedure of the simulation data to the MCT asymptotic power laws predicted for the α-relaxation regime has delivered the dynamic exponents of the theory (in particular, the exponent parameter λ), the critical non-ergodicity parameters, and the critical temperature T(c). The obtained values of λ and T(c) agree, within the uncertainties involved in both studies, with those deduced from depolarized light scattering experiments [A. Kisliuk et al., J. Polym. Sci. Part B: Polym. Phys. 38, 2785 (2000)]. Both the λ and T(c)/T(g) values found for PIB are unusually large with respect to those commonly obtained in low molecular weight systems. Moreover, the high T(c)/T(g) value is compatible with a certain correlation of this parameter with fragility in Angell's classification. Conversely, the value of λ is close to that reported for real polymers, simulated "realistic" polymers, and simple polymer models with intramolecular barriers. In the framework of MCT, such a finding should be the signature of two different mechanisms for the glass transition in real polymers: intermolecular packing and intramolecular barriers combined with chain connectivity.
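
    For readers unfamiliar with the fitting procedure referred to above, the asymptotic α-relaxation laws usually fitted in such analyses take the following schematic form (standard MCT results, quoted here for orientation rather than from the article):

      \phi_q(t) \simeq f_q^c - h_q\, B\, (t/\tau)^{b} \qquad (\text{von Schweidler law})
      \tau(T) \propto (T - T_c)^{-\gamma}, \qquad \gamma = \frac{1}{2a} + \frac{1}{2b}
      \lambda = \frac{\Gamma(1-a)^2}{\Gamma(1-2a)} = \frac{\Gamma(1+b)^2}{\Gamma(1+2b)}

    where the exponent parameter λ fixes both a and b, and hence γ.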

  10. Characterizing representational learning: A combined simulation and tutorial on perturbation theory

    Directory of Open Access Journals (Sweden)

    Antje Kohnle

    2017-11-01

    Analyzing, constructing, and translating between graphical, pictorial, and mathematical representations of physics ideas and reasoning flexibly through them (“representational competence”) is a key characteristic of expertise in physics but is a challenge for learners to develop. Interactive computer simulations and University of Washington style tutorials both have affordances to support representational learning. This article describes work to characterize students’ spontaneous use of representations before and after working with a combined simulation and tutorial on first-order energy corrections in the context of quantum-mechanical time-independent perturbation theory. Data were collected from two institutions using pre-, mid-, and post-tests to assess short- and long-term gains. A representational competence level framework was adapted to devise level descriptors for the assessment items. The results indicate an increase in the number of representations used by students and the consistency between them following the combined simulation tutorial. The distributions of representational competence levels suggest a shift from perceptual to semantic use of representations based on their underlying meaning. In terms of activity design, this study illustrates the need to support students in making sense of the representations shown in a simulation and in learning to choose the most appropriate representation for a given task. In terms of characterizing representational abilities, this study illustrates the usefulness of a framework focusing on perceptual, syntactic, and semantic use of representations.

  11. Protection motivation theory and social distancing behaviour in response to a simulated infectious disease epidemic.

    Science.gov (United States)

    Williams, Lynn; Rasmussen, Susan; Kleczkowski, Adam; Maharaj, Savi; Cairns, Nicole

    2015-01-01

    Epidemics of respiratory infectious disease remain one of the most serious health risks facing the population. Non-pharmaceutical interventions (e.g. hand-washing or wearing face masks) can have a significant impact on the course of an infectious disease epidemic. The current study investigated whether protection motivation theory (PMT) is a useful framework for understanding social distancing behaviour (i.e. the tendency to reduce social contacts) in response to a simulated infectious disease epidemic. There were 230 participants (109 males, 121 females, mean age 32.4 years) from the general population who completed self-report measures assessing the components of PMT. In addition, participants completed a computer game which simulated an infectious disease epidemic in order to provide a measure of social distancing behaviour. The regression analyses revealed that none of the PMT variables were significant predictors of social distancing behaviour during the simulation task. However, fear (β = .218, p < .001), response efficacy (β = .175, p < .01) and self-efficacy (β = .251, p < .001) were all significant predictors of intention to engage in social distancing behaviour. Overall, the PMT variables (and demographic factors) explained 21.2% of the variance in intention. The findings demonstrated that PMT was a useful framework for understanding intention to engage in social distancing behaviour, but not actual behaviour during the simulated epidemic. These findings may reflect an intention-behaviour gap in relation to social distancing behaviour.

  12. Characterizing representational learning: A combined simulation and tutorial on perturbation theory

    Science.gov (United States)

    Kohnle, Antje; Passante, Gina

    2017-12-01

    Analyzing, constructing, and translating between graphical, pictorial, and mathematical representations of physics ideas and reasoning flexibly through them ("representational competence") is a key characteristic of expertise in physics but is a challenge for learners to develop. Interactive computer simulations and University of Washington style tutorials both have affordances to support representational learning. This article describes work to characterize students' spontaneous use of representations before and after working with a combined simulation and tutorial on first-order energy corrections in the context of quantum-mechanical time-independent perturbation theory. Data were collected from two institutions using pre-, mid-, and post-tests to assess short- and long-term gains. A representational competence level framework was adapted to devise level descriptors for the assessment items. The results indicate an increase in the number of representations used by students and the consistency between them following the combined simulation tutorial. The distributions of representational competence levels suggest a shift from perceptual to semantic use of representations based on their underlying meaning. In terms of activity design, this study illustrates the need to support students in making sense of the representations shown in a simulation and in learning to choose the most appropriate representation for a given task. In terms of characterizing representational abilities, this study illustrates the usefulness of a framework focusing on perceptual, syntactic, and semantic use of representations.

  13. Manual for Cyclic Triaxial Test

    DEFF Research Database (Denmark)

    Shajarati, Amir; Sørensen, Kris Wessel; Nielsen, Søren Kjær

    This manual describes the different steps included in the procedure for conducting a cyclic triaxial test at the Geotechnical Laboratory at Aalborg University. Furthermore, it contains a chapter concerning some of the background theory for the static triaxial tests.

  14. Interplay simulation/testing in the optimisation of gearshift quality for manual transmissions; Zusammenspiel Simulation/Erprobung bezueglich optimierte Schaltqualitaet von Handschaltgetriebe

    Energy Technology Data Exchange (ETDEWEB)

    Leist, S.; Donin, R. [Ricardo Deutschland GmbH, Schwaebisch Gmuend (Germany); Kelly, D. [Ricardo DTS, Leamington Spa (United Kingdom)

    2005-07-01

    Passenger comfort is an aspect of the development of modern passenger cars that cannot be overlooked. Achieving a high degree of shift comfort, i.e. a combination of shift feeling and low shift forces, is a great challenge, since the mechanical processes taking place are highly non-linear and arbitrary. A purely test-based programme of shift quality development is cost- and time-intensive, and such a programme runs the risk of not delivering the necessary results on time. It is therefore necessary to develop a methodology that facilitates a complementary interplay between simulation and testing, and in which the vehicle is considered as a complete system. Sub-systems such as the shift lever, inner and outer shift mechanism, transmission, clutch and engine must be considered very carefully with regard to the system requirements, which are often conflicting. Over the last 15 years, Ricardo has developed a tool with which shift quality can be objectively assessed: GSQA (Gear Shift Quality Assessment). Today, with the help of its detailed dynamic simulation models, Ricardo is in a position to solve complex shift quality problems in the concept phase as well as in series production. (orig.)

  15. Increasing the Number of Replications in Item Response Theory Simulations: Automation through SAS and Disk Operating System

    Science.gov (United States)

    Gagne, Phill; Furlow, Carolyn; Ross, Terris

    2009-01-01

    In item response theory (IRT) simulation research, it is often necessary to use one software package for data generation and a second software package to conduct the IRT analysis. Because this can substantially slow down the simulation process, it is sometimes offered as a justification for using very few replications. This article provides…
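
    The article's SAS and DOS scripts are not reproduced here, but the data-generation step such a pipeline automates can be sketched in a few lines; the 2PL model, sample sizes, and parameter ranges below are generic illustrations rather than the authors' settings.

      # Illustrative sketch: generate dichotomous responses under a 2PL IRT model,
      # the kind of data-generation step an automated simulation loop would repeat.
      import numpy as np

      rng = np.random.default_rng(0)
      n_persons, n_items = 1000, 20
      theta = rng.normal(0.0, 1.0, n_persons)    # person abilities
      a = rng.uniform(0.5, 2.0, n_items)         # item discriminations (assumed range)
      b = rng.normal(0.0, 1.0, n_items)          # item difficulties

      p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))   # P(correct), persons x items
      responses = (rng.random((n_persons, n_items)) < p).astype(int)

      # Write one replication to disk for the IRT estimation package to pick up.
      np.savetxt("replication_001.dat", responses, fmt="%d")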

  16. Quantum Simulation with Circuit-QED Lattices: from Elementary Building Blocks to Many-Body Theory

    Science.gov (United States)

    Zhu, Guanyu

    Recent experimental and theoretical progress in superconducting circuits and circuit QED (quantum electrodynamics) has helped to develop high-precision techniques to control, manipulate, and detect individual mesoscopic quantum systems. A promising direction is hence to scale up from individual building blocks to form larger-scale quantum many-body systems. Although realizing a scalable fault-tolerant quantum computer still faces major barriers of decoherence and quantum error correction, it is feasible to realize scalable quantum simulators with state-of-the-art technology. From the technological point of view, this could serve as an intermediate stage towards the final goal of a large-scale quantum computer, and could help accumulate experience with the control of quantum systems with a large number of degrees of freedom. From the physical point of view, this opens up a new regime where condensed matter systems can be simulated and studied, here in the context of strongly correlated photons and two-level systems. In this thesis, we mainly focus on two aspects of circuit-QED based quantum simulation. First, we discuss the elementary building blocks of the quantum simulator, in particular a fluxonium circuit coupled to a superconducting resonator. We show the interesting properties of the fluxonium circuit as a qubit, including the unusual structure of its charge matrix elements. We also employ perturbation theory to derive the effective Hamiltonian of the coupled system in the dispersive regime, where the qubit and photon frequencies are detuned. The observables predicted with our theory, including dispersive shifts and Kerr nonlinearity, are compared with data from experiments, such as homodyne transmission and two-tone spectroscopy. These studies also relate to the problem of detection in a circuit-QED quantum simulator. Second, we study many-body physics of circuit-QED lattices, serving as quantum simulators. In particular, we focus on two different
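
    The dispersive-regime result mentioned above reduces, for an ideal two-level qubit coupled to a single resonator mode, to the familiar textbook form below; this is the standard Jaynes-Cummings dispersive Hamiltonian, not the multilevel fluxonium expression derived in the thesis:

      H_{\mathrm{eff}} \simeq \hbar\,(\omega_r + \chi\,\sigma_z)\,a^{\dagger}a
        + \tfrac{\hbar}{2}\,(\omega_q + \chi)\,\sigma_z,
      \qquad \chi = \frac{g^2}{\omega_q - \omega_r}

    where g is the qubit-resonator coupling and the qubit-state-dependent shift χ is what homodyne transmission and two-tone spectroscopy probe.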

  17. Photonic-Doppler-Velocimetry, Paraxial-Scalar Diffraction Theory and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ambrose, W. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-07-20

    In this report I describe current progress on a paraxial, scalar-field theory suitable for simulating what is measured in Photonic Doppler Velocimetry (PDV) experiments in three dimensions. I have introduced a number of approximations in this work in order to bring the total computation time for one experiment down to around 20 hours. My goals were: to develop an approximate method of calculating the peak frequency in a spectral sideband at an instant of time based on an optical diffraction theory for a moving target, to compare the ‘measured’ velocity to the ‘input’ velocity to gain insights into how and to what precision PDV measures the component of the mass velocity along the optical axis, and to investigate the effects of small amounts of roughness on the measured velocity. This report illustrates the progress I have made in describing how to perform such calculations with a full three dimensional picture including tilted target, tilted mass velocity (not necessarily in the same direction), and small amounts of surface roughness. With the method established for a calculation at one instant of time, measured velocities can be simulated for a sequence of times, similar to the process of sampling velocities in experiments. Improvements in these methods are certainly possible at hugely increased computational cost. I am hopeful that readers appreciate the insights possible at the current level of approximation.

  18. Numerical simulations of stellar collapse in scalar-tensor theories of gravity

    International Nuclear Information System (INIS)

    Gerosa, Davide; Sperhake, Ulrich; Ott, Christian D

    2016-01-01

    We present numerical-relativity simulations of spherically symmetric core collapse and compact-object formation in scalar-tensor theories of gravity. The additional scalar degree of freedom introduces a propagating monopole gravitational-wave mode. Detection of monopole scalar waves with current and future gravitational-wave experiments may constitute smoking gun evidence for strong-field modifications of general relativity. We collapse both polytropic and more realistic pre-supernova profiles using a high-resolution shock-capturing scheme and an approximate prescription for the nuclear equation of state. The most promising sources of scalar radiation are protoneutron stars collapsing to black holes. In case of a galactic core collapse event forming a black hole, Advanced LIGO may be able to place independent constraints on the parameters of the theory at a level comparable to current solar-system and binary-pulsar measurements. In the region of the parameter space admitting spontaneously scalarised stars, transition to configurations with prominent scalar hair before black-hole formation further enhances the emitted signal. Although a more realistic treatment of the microphysics is necessary to fully investigate the occurrence of spontaneous scalarisation of neutron star remnants, we speculate that formation of such objects could constrain the parameters of the theory beyond the current bounds obtained with solar-system and binary-pulsar experiments. (paper)

  19. Atomic-scale simulation of dust grain collisions: Surface chemistry and dissipation beyond existing theory

    Science.gov (United States)

    Quadery, Abrar H.; Doan, Baochi D.; Tucker, William C.; Dove, Adrienne R.; Schelling, Patrick K.

    2017-10-01

    The early stages of planet formation involve steps in which submicron-sized dust particles collide to form aggregates. However, the mechanism through which millimeter-sized particles aggregate into kilometer-sized planetesimals is still not understood. Dust grain collision experiments carried out in the Earth's environment lead to the prediction of a 'bouncing barrier' at millimeter sizes. Theoretical models, e.g., the Johnson-Kendall-Roberts and Derjaguin-Muller-Toporov theories, lack two key features, namely the chemistry of dust grain surfaces and a mechanism for atomic-scale dissipation of energy. Moreover, interaction strengths in these models are parameterized based on experiments done in the Earth's environment. To address these issues, we performed atomic-scale simulations of collisions between nonhydroxylated and hydroxylated amorphous silica nanoparticles. We used the ReaxFF approach, which enables modeling chemical reactions using an empirical potential. We found that nonhydroxylated nanograins tend to adhere with much higher probability than suggested by existing theories. By contrast, hydroxylated nanograins exhibit a strong tendency to bounce. Also, the interaction between dust grains has the characteristics of a strong chemical force instead of weak van der Waals forces. This suggests that the formation of strong chemical bonds and dissipation via internal atomic vibration may result in aggregation beyond what is expected based on our current understanding. Our results also indicate that experiments should more carefully consider surface conditions to mimic the space environment. We also report results of simulations with molten silica nanoparticles. It is found that molten particles are more likely to adhere due to viscous dissipation, which supports theories suggesting that aggregation to kilometer scales might require grains to be in a molten state.
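
    For context, the two contact-mechanics models named above predict pull-off (adhesion) forces for a sphere of radius R and work of adhesion w given by the standard expressions (textbook results, not values from these simulations):

      F_{\mathrm{JKR}} = \tfrac{3}{2}\,\pi w R, \qquad F_{\mathrm{DMT}} = 2\,\pi w R

    Both scale linearly with R and contain no surface-chemistry or internal-dissipation terms, which is the gap the atomistic ReaxFF simulations are intended to address.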

  20. Bicontinuous Phases in Diblock Copolymer/Homopolymer Blends: Simulation and Self-Consistent Field Theory

    KAUST Repository

    Martínez-Veracoechea, Francisco J.

    2009-03-10

    A combination of particle-based simulations and self-consistent field theory (SCFT) is used to study the stabilization of multiple ordered bicontinuous phases in blends of a diblock copolymer (DBC) and a homopolymer. The double-diamond phase (DD) and plumber's nightmare phase (P) were spontaneously formed in the range of homopolymer volume fraction simulated via coarse-grained molecular dynamics. To the best of our knowledge, this is the first time that such phases have been obtained in continuum-space molecular simulations of DBC systems. Though tentative phase boundaries were delineated via free-energy calculations, macrophase separation could not be satisfactorily assessed within the framework of particle-based simulations. Therefore, SCFT was used to explore the DBC/homopolymer phase diagram in more detail, showing that although in many cases two-phase coexistence of a DBC-rich phase and a homopolymer-rich phase does precede the stability of complex bicontinuous phases, the DD phase can be stable in a relatively wide region of the phase diagram. Whereas the P phase was always metastable with respect to macrophase separation under the thermodynamic conditions explored with SCFT, it was sometimes nearly stable, suggesting that full stability could be achieved in other unexplored regions of parameter space. Moreover, even the predicted DD- and P-phase metastability regions were located significantly far from the spinodal line, suggesting that these phases could be observed in experiments as "long-lived" metastable phases under those conditions. This conjecture is also consistent with large-system molecular dynamics simulations that showed that the time scale of mesophase formation is much faster than that of macrophase separation. © 2009 American Chemical Society.

  1. Continuum percolation of polydisperse rods in quadrupole fields: Theory and simulations

    Science.gov (United States)

    Finner, Shari P.; Kotsev, Mihail I.; Miller, Mark A.; van der Schoot, Paul

    2018-01-01

    We investigate percolation in mixtures of nanorods in the presence of external fields that align or disalign the particles with the field axis. Such conditions are found in the formulation and processing of nanocomposites, where the field may be electric, magnetic, or due to elongational flow. Our focus is on the effect of length polydispersity, which—in the absence of a field—is known to produce a percolation threshold that scales with the inverse weight average of the particle length. Using a model of non-interacting spherocylinders in conjunction with connectedness percolation theory, we show that a quadrupolar field always increases the percolation threshold and that the universal scaling with the inverse weight average no longer holds if the field couples to the particle length. Instead, the percolation threshold becomes a function of higher moments of the length distribution, where the order of the relevant moments crucially depends on the strength and type of field applied. The theoretical predictions compare well with the results of our Monte Carlo simulations, which eliminate finite size effects by exploiting the fact that the universal scaling of the wrapping probability function holds even in anisotropic systems. Theory and simulation demonstrate that the percolation threshold of a polydisperse mixture can be lower than that of the individual components, confirming recent work based on a mapping onto a Bethe lattice as well as earlier computer simulations involving dipole fields. Our work shows how the formulation of nanocomposites may be used to compensate for the adverse effects of aligning fields that are inevitable under practical manufacturing conditions.
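
    As a reference point for the zero-field scaling discussed above, the weight-average length that controls the threshold can be written schematically as (standard connectedness-percolation form for slender rods of diameter D; prefactors depend on the connectivity criterion and are omitted):

      \phi_c \propto \frac{D}{\langle L \rangle_w}, \qquad
      \langle L \rangle_w = \frac{\sum_i \phi_i L_i}{\sum_i \phi_i}

    where φ_i is the volume fraction of rods of length L_i. The paper's central result is that a quadrupolar field coupling to the particle length breaks this simple dependence and brings higher moments of the length distribution into play.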

  2. Using queuing theory and simulation model to optimize hospital pharmacy performance.

    Science.gov (United States)

    Bahadori, Mohammadkarim; Mohammadnejhad, Seyed Mohsen; Ravangard, Ramin; Teymourzadeh, Ehsan

    2014-03-01

    Hospital pharmacy is responsible for controlling and monitoring the medication use process and ensures timely access to safe, effective and economical drugs and medicines for patients and hospital staff. This study aimed to optimize the management of the studied outpatient pharmacy by developing a suitable queuing model and simulation technique. A descriptive-analytical study was conducted in a military hospital in Tehran, Iran, in 2013. A sample of 220 patients referred to the outpatient pharmacy of the hospital in two shifts, morning and evening, was selected to collect the data needed to determine the arrival rate, service rate, and other quantities required to calculate the patient flow and queuing network performance variables. After the initial analysis of the collected data using the software SPSS 18, the pharmacy queuing network performance indicators were calculated for both shifts. Then, based on the collected data and to provide appropriate solutions, the queuing system of the current situation for both shifts was modeled and simulated using the software ARENA 12, and 4 scenarios were explored. Results showed that the queue characteristics of the studied pharmacy during the situation analysis were very undesirable in both the morning and evening shifts. The average numbers of patients in the pharmacy were 19.21 and 14.66 in the morning and evening, respectively. The average times spent in the system by clients were 39 minutes in the morning and 35 minutes in the evening. The system utilization in the morning and evening was 25% and 21%, respectively. The simulation results showed that reducing the staff in the morning from 2 to 1 in the prescription-receiving stage did not change the queue performance indicators. Adding one staff member for filling prescription drugs could decrease the average queue length by 10 persons and the average waiting time by 18 minutes and 14 seconds. On the other hand, simulation results showed that in the evening, decreasing the staff
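
    The ARENA model itself is not available in this record; as an illustration of the kind of steady-state quantities quoted above (mean number in the system, mean time in the system, utilization), a textbook M/M/c calculation can be sketched as follows. The arrival rate, service rate, and staffing level are placeholders, not the study's data.

      # Illustrative M/M/c (Erlang-C) steady-state metrics; all rates are placeholders.
      import math

      def mmc_metrics(lam, mu, c):
          """Utilization, mean queue length Lq, mean number in system L,
          and mean time in system W for an M/M/c queue (requires lam < c*mu)."""
          rho = lam / (c * mu)                  # server utilization
          a = lam / mu                          # offered load (Erlangs)
          p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                      + a**c / (math.factorial(c) * (1.0 - rho)))
          lq = p0 * a**c * rho / (math.factorial(c) * (1.0 - rho) ** 2)
          l = lq + a                            # waiting plus in service
          w = l / lam                           # Little's law
          return rho, lq, l, w

      # e.g. 30 arrivals/hour, 18 prescriptions/hour per staff member, 2 staff
      print(mmc_metrics(lam=30.0, mu=18.0, c=2))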

  3. Using Queuing Theory and Simulation Model to Optimize Hospital Pharmacy Performance

    Science.gov (United States)

    Bahadori, Mohammadkarim; Mohammadnejhad, Seyed Mohsen; Ravangard, Ramin; Teymourzadeh, Ehsan

    2014-01-01

    Background: Hospital pharmacy is responsible for controlling and monitoring the medication use process and ensures timely access to safe, effective and economical drugs and medicines for patients and hospital staff. Objectives: This study aimed to optimize the management of the studied outpatient pharmacy by developing a suitable queuing model and simulation technique. Patients and Methods: A descriptive-analytical study was conducted in a military hospital in Tehran, Iran, in 2013. A sample of 220 patients referred to the outpatient pharmacy of the hospital in two shifts, morning and evening, was selected to collect the data needed to determine the arrival rate, service rate, and other quantities required to calculate the patient flow and queuing network performance variables. After the initial analysis of the collected data using the software SPSS 18, the pharmacy queuing network performance indicators were calculated for both shifts. Then, based on the collected data and to provide appropriate solutions, the queuing system of the current situation for both shifts was modeled and simulated using the software ARENA 12, and 4 scenarios were explored. Results: The queue characteristics of the studied pharmacy during the situation analysis were very undesirable in both the morning and evening shifts. The average numbers of patients in the pharmacy were 19.21 and 14.66 in the morning and evening, respectively. The average times spent in the system by clients were 39 minutes in the morning and 35 minutes in the evening. The system utilization in the morning and evening was 25% and 21%, respectively. The simulation results showed that reducing the staff in the morning from 2 to 1 in the prescription-receiving stage did not change the queue performance indicators. Adding one staff member for filling prescription drugs could decrease the average queue length by 10 persons and the average waiting time by 18 minutes and 14 seconds. On the other hand, simulation

  4. MINTEQ user's manual

    International Nuclear Information System (INIS)

    Peterson, S.R.; Hostetler, C.J.; Deutsch, W.J.; Cowan, C.E.

    1987-02-01

    This manual will aid the user in applying the MINTEQ geochemical computer code to model aqueous solutions and the interactions of aqueous solutions with hypothesized assemblages of solid phases. The manual provides a basic understanding of how the MINTEQ computer code operates and of the important principles incorporated into the code, and it instructs the user on how to create input files to simulate a variety of geochemical problems. Chapters 2 through 8 are for the user who has some experience with, or wishes to review, the principles important to geochemical computer codes. These chapters include information on the methodology MINTEQ uses to incorporate these principles into the code. Chapters 9 through 11 are for the user who wants to know how to create input data files to model various types of problems. 35 refs., 2 figs., 5 tabs

  5. Fluids density functional theory and initializing molecular dynamics simulations of block copolymers

    Science.gov (United States)

    Brown, Jonathan R.; Seo, Youngmi; Maula, Tiara Ann D.; Hall, Lisa M.

    2016-03-01

    Classical fluids density functional theory (fDFT), which can predict the equilibrium density profiles of polymeric systems, and coarse-grained molecular dynamics (MD) simulations, which are often used to show both structure and dynamics of soft materials, can be implemented using very similar bead-based polymer models. We aim to use fDFT and MD in tandem to examine the same system from these two points of view and take advantage of the different features of each methodology. Additionally, the density profiles resulting from fDFT calculations can be used to initialize the MD simulations in a close-to-equilibrated structure, speeding up the simulations. Here, we show how this method can be applied to study microphase separated states of both typical diblock and tapered diblock copolymers, in which a region with a gradient in composition is placed between the pure blocks. Both methods, applied at constant pressure, predict a decrease in total density as segregation strength or the length of the tapered region is increased.
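
    The coupling described above, in which fDFT density profiles seed the MD configurations, can be illustrated with a minimal one-dimensional sketch: bead positions along the profile axis are drawn by inverse-CDF sampling. The profile below is a made-up stand-in for an fDFT output, not a result from the paper.

      # Illustrative: place beads along z according to a 1D density profile rho(z),
      # as one might when initializing an MD box from an fDFT calculation.
      import numpy as np

      rng = np.random.default_rng(1)
      box = 10.0                                           # box length (arbitrary units)
      z = np.linspace(0.0, box, 500)
      rho = 1.0 + 0.8 * np.cos(2.0 * np.pi * z / 5.0)      # stand-in lamellar-like profile

      cdf = np.cumsum(rho)
      cdf /= cdf[-1]                                       # normalized cumulative distribution
      n_beads = 2000
      z_beads = np.interp(rng.random(n_beads), cdf, z)     # inverse-CDF sampling
      xy = rng.random((n_beads, 2)) * box                  # uniform in the other directions

      coords = np.column_stack([xy, z_beads])
      np.savetxt("initial_coords.txt", coords)             # hand off to the MD engine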

  6. Introducing simulation-based education to healthcare professionals: exploring the challenge of integrating theory into educational practice.

    Science.gov (United States)

    Katoue, Maram G; Iblagh, Nadia; Somerville, Susan; Ker, Jean

    2015-11-01

    Introducing simulation-based education to the curricular programme of healthcare professionals can be challenging. This study explored the early experiences of healthcare professionals in the use of simulation. This was in the context of the Kuwait-Scotland transformational health innovation network programme. Two cohorts of healthcare professionals undertook a simulation module as part of faculty development programme in Kuwait. Participants' initial perceptions of simulators were gathered using a structured questionnaire in the clinical skills centre. Their subsequent ability to demonstrate the application of simulation was evaluated through analyses of the video-recordings of teaching sessions they undertook and written reflections of their experiences of using simulation. In theory, participants were able to identify simulators' classification and fidelity. They also recognised some of the challenges of using simulators. In their teaching sessions, most participants focused on using part-task trainers to teach procedural skills. In their written reflections, they did not articulate a justification for their choice of simulator or its limitations. This study demonstrated a theory-to-practice gap in the early use of simulation by healthcare educators. The findings highlight the need for deliberate practice and adequate mentorship for educators to develop confidence and competence in the use of simulation as part of their educational practice. © The Author(s) 2015.

  7. Peptide dynamics by molecular dynamics simulation and diffusion theory method with improved basis sets

    Energy Technology Data Exchange (ETDEWEB)

    Hsu, Po Jen; Lai, S. K., E-mail: sklai@coll.phy.ncu.edu.tw [Complex Liquids Laboratory, Department of Physics, National Central University, Chungli 320, Taiwan and Molecular Science and Technology Program, Taiwan International Graduate Program, Academia Sinica, Taipei 115, Taiwan (China); Rapallo, Arnaldo [Istituto per lo Studio delle Macromolecole (ISMAC) Consiglio Nazionale delle Ricerche (CNR), via E. Bassini 15, C.A.P 20133 Milano (Italy)

    2014-03-14

    solvent, we performed in this work the classical molecular dynamics simulation on a realistic model solution with the peptide embedded in an explicit water environment, and calculated its dynamic properties both as an outcome of the simulations, and by the diffusion theory in reduced statistical-mechanical approach within HBA on the premise that the mode-coupling approach to the diffusion theory can give both the long-range and local dynamics starting from equilibrium averages which were obtained from detailed atomistic simulations.

  8. Peptide dynamics by molecular dynamics simulation and diffusion theory method with improved basis sets

    International Nuclear Information System (INIS)

    Hsu, Po Jen; Lai, S. K.; Rapallo, Arnaldo

    2014-01-01

    solvent, we performed in this work the classical molecular dynamics simulation on a realistic model solution with the peptide embedded in an explicit water environment, and calculated its dynamic properties both as an outcome of the simulations, and by the diffusion theory in reduced statistical-mechanical approach within HBA on the premise that the mode-coupling approach to the diffusion theory can give both the long-range and local dynamics starting from equilibrium averages which were obtained from detailed atomistic simulations

  9. Numerical simulations of N=(1,1) 1+1-dimensional super Yang-Mills theory with large supersymmetry breaking

    International Nuclear Information System (INIS)

    Filippov, I.; Pinsky, S.

    2002-01-01

    We consider the N=(1,1) super Yang-Mills (SYM) theory that is obtained by dimensionally reducing SYM theory in 2+1 dimensions to 1+1 dimensions and discuss soft supersymmetry breaking. We discuss the numerical simulation of this theory using supersymmetric discrete light-cone quantization when either the boson or the fermion has a large mass. We compare our result to the pure adjoint fermion theory and pure adjoint boson discrete light-cone quantization calculations of Klebanov, Demeterfi, Bhanot and Kutasov. With a large boson mass we find that it is necessary to add additional operators to the theory to obtain sensible results. When a large fermion mass is added to the theory we find that it is not necessary to add operators to obtain a sensible theory. The theory of the adjoint boson is a theory that has stringy bound states similar to the full SYM theory. We also discuss another theory of adjoint bosons with a spectrum similar to that obtained by Klebanov, Demeterfi, and Bhanot

  10. WAM-E user's manual

    International Nuclear Information System (INIS)

    Rayes, L.G.; Riley, J.E.

    1986-07-01

    The WAM-E series of mainframe computer codes have been developed to efficiently analyze the large binary models (e.g., fault trees) used to represent the logic relationships within and between the systems of a nuclear power plant or other large, multisystem entity. These codes have found wide application in reliability and safety studies of nuclear power plant systems. There are now nine codes in the WAM-E series, with six (WAMBAM/WAMTAP, WAMCUT, WAMCUT-II, WAMFM, WAMMRG, and SPASM) classified as Type A Production codes and the other three (WAMFTP, WAMTOP, and WAMCONV) classified as Research codes. This document serves as a combined User's Guide, Programmer's Manual, and Theory Reference for the codes, with emphasis on the Production codes. To that end, the manual is divided into four parts: Part I, Introduction; Part II, Theory and Numerics; Part III, WAM-E User's Guide; and Part IV, WAMMRG Programmer's Manual

  11. Self-Assembly of DNA-Coated Particles: Experiment, Simulation and Theory

    Science.gov (United States)

    Song, Minseok

    The bottom-up assembly of material architectures with tunable complexity, function, composition, and structure is a long sought goal in rational materials design. One promising approach aims to harness the programmability and specificity of DNA hybridization in order to direct the assembly of oligonucleotide-functionalized nano- and micro-particles by tailoring, in part, interparticle interactions. DNA-programmable assembly into three-dimensionally ordered structures has attracted extensive research interest owing to emergent applications in photonics, plasmonics, catalysis, and potentially many other areas. Progress on the rational design of DNA-mediated interactions to create useful two-dimensional structures (e.g., structured films), on the other hand, has been rather slow. In this thesis, we establish strategies to engineer a diversity of 2D crystalline arrangements by designing and exploiting DNA-programmable interparticle interactions. We employ a combination of simulation, theory and experiments to predict and confirm the accessibility of 2D structural diversity in an effort to establish a rational approach to 2D DNA-mediated particle assembly. We start with the experimental realization of 2D DNA-mediated assembly by decorating micron-sized silica particles with covalently attached single-stranded DNA through a two-step reaction. Subsequently, we elucidate the sensitivity and ultimate controllability of DNA-mediated assembly, specifically the melting transition from dispersed singlet particles to aggregated or assembled structures, through control of the concentration of commonly employed nonionic surfactants. We relate the observed tunability to an apparent coupling with the critical micelle temperature in these systems. Also, both square and hexagonal 2D ordered particle arrangements are shown to evolve from disordered aggregates under appropriate annealing conditions defined based upon pre-established melting profiles. Subsequently, the controlled mixing of

  12. FY2014 FES (Fusion Energy Sciences) Theory & Simulation Performance Target, Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Guoyong [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Budny, Robert [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Gorelenkov, Nikolai [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Poli, Francesca [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Chen, Yang [Univ. of Colorado, Boulder, CO (United States); McClenaghan, Joseph [Univ. of California, Irvine, CA (United States); Lin, Zhihong [Univ. of California, Irvine, CA (United States); Spong, Don [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bass, Eric [Univ. of California, San Diego, CA (United States); Waltz, Ron [General Atomics, San Diego, CA (United States)

    2014-10-14

    We report here the work done for the FY14 OFES Theory Performance Target as given below: "Understanding alpha particle confinement in ITER, the world's first burning plasma experiment, is a key priority for the fusion program. In FY 2014, determine linear instability trends and thresholds of energetic particle-driven shear Alfven eigenmodes in ITER for a range of parameters and profiles using a set of complementary simulation models (gyrokinetic, hybrid, and gyrofluid). Carry out initial nonlinear simulations to assess the effects of the unstable modes on energetic particle transport". In the past year (FY14), a systematic study of the alpha-driven Alfven modes in ITER has been carried out jointly by researchers from six institutions involving seven codes including the transport simulation code TRANSP (R. Budny and F. Poli, PPPL), three gyrokinetic codes: GEM (Y. Chen, Univ. of Colorado), GTC (J. McClenaghan, Z. Lin, UCI), and GYRO (E. Bass, R. Waltz, UCSD/GA), the hybrid code M3D-K (G.Y. Fu, PPPL), the gyro-fluid code TAEFL (D. Spong, ORNL), and the linear kinetic stability code NOVA-K (N. Gorelenkov, PPPL). A range of ITER parameters and profiles are specified by TRANSP simulation of a hybrid scenario case and a steady-state scenario case. Based on the specified ITER equilibria linear stability calculations are done to determine the stability boundary of alpha-driven high-n TAEs using the five initial value codes (GEM, GTC, GYRO, M3D-K, and TAEFL) and the kinetic stability code (NOVA-K). Both the effects of alpha particles and beam ions have been considered. Finally, the effects of the unstable modes on energetic particle transport have been explored using GEM and M3D-K.

  13. Theory and simulation of an inverse free-electron laser experiment

    Science.gov (United States)

    Gou, S. K.; Bhattacharjee, A.; Fang, J.-M.; Marshall, T. C.

    1997-03-01

    An experimental demonstration of the acceleration of electrons using a high-power CO2 laser interacting with a relativistic electron beam moving along a wiggler has been carried out at the Accelerator Test Facility of the Brookhaven National Laboratory [Phys. Rev. Lett. 77, 2690 (1996)]. The data generated by this inverse free-electron-laser (IFEL) experiment are studied by means of theory and simulation. Included in the simulations are such effects as: a low-loss metallic waveguide with a dielectric coating on the walls; multi-mode coupling due to self-consistent interaction between the electrons and the optical wave; space charge; energy spread of the electrons; and an arbitrary wiggler-field profile. Two types of wiggler profile are considered: a linear taper of the period, and a step-taper of the period. (The period of the wiggler is ˜3 cm, its magnetic field is ˜1 T, and the wiggler length is 0.47 m.) The energy increment of the electrons (˜1-2%) is analyzed in detail as a function of laser power, wiggler parameters, and the initial beam energy (˜40 MeV). At a laser power level of ˜0.5 GW, the simulation results on energy gain are in reasonable agreement with the experimental results. Preliminary results on the electron energy distribution at the end of the IFEL are presented. Whereas the experiment produces a near-monotone distribution of electron energies with the peak shifted to higher energy, the simulation shows a more structured and non-monotonic distribution at the end of the wiggler. Effects that may help reconcile these differences are considered.

  14. Modular Manufacturing Simulator Users Manual

    Science.gov (United States)

    1997-01-01

    Since the agency was established in 1958, a key part of the National Aeronautics and Space Administration's mission has been to make its technologies available to American industry so they can be more widely used by the citizens who paid for them. While many people might think that 'rocket science' has no application to earthly problems, rocket science in fact employs earthly materials, processes, and designs adapted for space, which can in turn be adapted for other purposes on Earth. Marshall Space Flight Center's Technology Transfer Office has outreach programs designed to connect American businesses, industries, educational institutions, and individuals who have needs with NASA people and laboratories who may have the solutions. MSFC's national goal is to enhance America's competitiveness in the world marketplace and ensure that the technological breakthroughs made by American laboratories benefit taxpayers and the many industries making up our Nation's industrial base. Activities may range from simple exchanges of technical data to Space Act Agreements in which NASA and industry work closely together to solve a problem. The goal is to ensure that America gains and maintains its proper place of leadership among the world's technologically developed nations. Some of the many technologies transferred from NASA to commercial customers include those associated with welding and fabrication, medical and pharmaceutical uses, fuels and coatings, structural composites, and robotics. These activities are all aimed at achieving the same goal: slowing, halting, and gradually reversing the erosion of American technological leadership.

  15. Threshold defect production in silicon determined by density functional theory molecular dynamics simulations

    International Nuclear Information System (INIS)

    Holmstroem, E.; Kuronen, A.; Nordlund, K.

    2008-01-01

    We studied threshold displacement energies for creating stable Frenkel pairs in silicon using density functional theory molecular dynamics simulations. The average threshold energy over all lattice directions was found to be 36 ± 2 (stat) ± 2 (syst) eV, and the thresholds in two specific lattice directions were found to be 20 ± 2 (syst) eV and 12.5 ± 1.5 (syst) eV, respectively. Moreover, we found that in most studied lattice directions, a bond defect complex is formed with a lower threshold than a Frenkel pair. The average threshold energy for producing either a bond defect or a Frenkel pair was found to be 24 ± 1 (stat) ± 2 (syst) eV.

  16. Simulation and Hardware Implementation of Shunt Active Power Filter Based on Synchronous Reference Frame Theory

    Directory of Open Access Journals (Sweden)

    Karthikrjan Senthilnathan

    2018-02-01

    This paper describes a Hybrid Shunt Active Power Filter (HSAPF) for the elimination of current harmonics on the line side of three-phase, three-wire systems. The active power filter is based on the Voltage Source Converter (VSC) topology, and the control strategy for the converter is based on Synchronous Reference Frame (SRF) theory. Harmonic compensation is performed by the APF, which is connected to the system in a shunt configuration; the shunt configuration provides better compensation of current harmonics. The design and simulation of the shunt active power filter are carried out in MATLAB/Simulink, and a real-time implementation uses an ATMEGA 8 microcontroller. The simulation and hardware results show that the current harmonics in the system are eliminated.
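
    The Simulink model is not reproduced in this record; as an illustration of the SRF idea (transform the measured three-phase currents into the synchronously rotating dq frame, where the fundamental appears as a DC component that can be removed to leave the harmonic content), a minimal numerical sketch follows. The signal content, transform convention, and filter choice are assumptions, not the authors' design.

      # Illustrative SRF harmonic extraction: abc -> dq (Park) transform,
      # remove the DC (fundamental) component, transform back to obtain the
      # phase-a compensation reference.
      import numpy as np

      f1 = 50.0                                   # fundamental frequency, Hz (assumed)
      t = np.arange(0.0, 0.2, 1.0e-4)
      wt = 2.0 * np.pi * f1 * t

      def load_current(phase):                    # fundamental plus a 5th harmonic (assumed)
          return 10.0 * np.sin(wt - phase) + 2.0 * np.sin(5.0 * (wt - phase))

      ia, ib, ic = (load_current(p) for p in (0.0, 2*np.pi/3, 4*np.pi/3))

      # Park transform (amplitude-invariant form)
      id_ = (2/3) * (ia*np.sin(wt) + ib*np.sin(wt - 2*np.pi/3) + ic*np.sin(wt + 2*np.pi/3))
      iq_ = (2/3) * (ia*np.cos(wt) + ib*np.cos(wt - 2*np.pi/3) + ic*np.cos(wt + 2*np.pi/3))

      # The fundamental maps to DC in the dq frame; a simple mean stands in
      # for the low-pass filter a real controller would use.
      id_h, iq_h = id_ - id_.mean(), iq_ - iq_.mean()

      # Inverse transform of the harmonic residue: phase-a reference current.
      ia_ref = id_h * np.sin(wt) + iq_h * np.cos(wt)
      print("peak of phase-a harmonic reference:", np.max(np.abs(ia_ref)))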

  17. Computer simulation and high level virial theory of Saturn-ring or UFO colloids

    Science.gov (United States)

    Bates, Martin A.; Dennison, Matthew; Masters, Andrew

    2008-08-01

    Monte Carlo simulations are used to map out the complete phase diagram of hard body UFO systems, in which the particles are composed of a concentric sphere and thin disk. The equation of state and phase behavior are determined for a range of relative sizes of the sphere and disk. We show that for relatively large disks, nematic and solid phases are observed in addition to the isotropic fluid. For small disks, two different solid phases exist. For intermediate sizes, only a disordered fluid phase is observed. The positional and orientational structure of the various phases are examined. We also compare the equations of state and the nematic-isotropic coexistence densities with those predicted by an extended Onsager theory using virial coefficients up to B8.

  18. Simultaneous positioning and orientation of a single nano-object by flow control: theory and simulations

    International Nuclear Information System (INIS)

    Mathai, Pramod P; Berglund, Andrew J; Alexander Liddle, J; Shapiro, Benjamin A

    2011-01-01

    In this paper, we theoretically describe a method to simultaneously control both the position and orientation of single nano-objects in fluids by precisely controlling the flow around them. We develop and simulate a control law that uses electro-osmotic flow (EOF) actuation to translate and rotate rigid nano-objects in two spatial dimensions. Using EOF to control nano-objects offers advantages as compared to other approaches: a wide class of objects can be manipulated (no magnetic or electric dipole moments are needed), the object can be controlled over a long range (>100 μm) with sub-micrometer accuracy, and control may be achieved with simple polydimethylsiloxane (PDMS) devices. We demonstrate the theory and numerical solutions that will enable deterministic control of the position and orientation of a nano-object in solution, which can be used, for example, to integrate nanostructures in circuits and orient sensors to probe living cells.

  19. Research on ion implantation in MEMS device fabrication by theory, simulation and experiments

    Science.gov (United States)

    Bai, Minyu; Zhao, Yulong; Jiao, Binbin; Zhu, Lingjian; Zhang, Guodong; Wang, Lei

    2018-06-01

    Ion implantation is widely utilized in microelectromechanical systems (MEMS), where it is applied for embedded leads, resistors, conductivity modification and so forth. In order to achieve the expected device behavior, the principles of ion implantation must be carefully examined. The elementary theory of ion implantation, including the implantation mechanism, the projected range and the implantation-caused damage in the target, was studied and can be regarded as guidance for ion implantation in MEMS device design and fabrication. Critical factors including implantation dose, energy and annealing conditions are examined by simulations and experiments. The implantation dose mainly determines the dopant concentration in the target substrate. The implantation energy is the key factor for the depth of the dopant elements. The annealing time mainly affects the degree to which lattice damage is repaired and thus the ratio of activated elements. Together, these factors determine the ions' behavior in the substrate and the characteristics of the devices. The results can be used as a reference in MEMS design, especially for piezoresistive devices.
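
    The projected-range theory referred to above is commonly summarized, to first order, by a Gaussian depth profile; this standard textbook form (not a result of the paper) relates the dose Φ, projected range R_p and range straggle ΔR_p to the implanted concentration at depth x:

      N(x) = \frac{\Phi}{\sqrt{2\pi}\,\Delta R_p}\,
             \exp\!\left[-\frac{(x - R_p)^2}{2\,\Delta R_p^{2}}\right]

    The dose sets the area under the profile, while the implantation energy enters through R_p and ΔR_p, consistent with the roles the abstract assigns to these factors.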

  20. Theory, simulations and the design of functionalized nanoparticles for biomedical applications: A Soft Matter Perspective

    Science.gov (United States)

    Angioletti-Uberti, Stefano

    2017-11-01

    Functionalised nanoparticles for biomedical applications represent an incredibly exciting and rapidly growing field of research. Considering the complexity of the nano-bio interface, an important question is to what extent theory and simulations can be used to study these systems in a realistic, meaningful way. In this review, we will argue for a positive answer to this question. Approaching the issue from a "Soft Matter" perspective, we will consider those properties of functionalised nanoparticles that can be captured within a classical description. We will thus not concentrate on optical and electronic properties, but rather on the way nanoparticles' interactions with the biological environment can be tuned by functionalising their surface and exploited in different contexts relevant to applications. In particular, we wish to provide a critical overview of theoretical and computational coarse-grained models developed to describe these interactions, and to present to the readers some of the latest results in this fascinating area of research.

  1. Complexation of Polyelectrolytes with Hydrophobic Drug Molecules in Salt-Free Solution: Theory and Simulations.

    Science.gov (United States)

    Lei, Qun-Li; Hadinoto, Kunn; Ni, Ran

    2017-04-18

    The delivery and dissolution of poorly soluble drugs is challenging for the pharmaceutical industry. One way to significantly improve delivery efficiency is to incorporate these hydrophobic small molecules, in their ionized states, into a colloidal polyelectrolyte (PE)-drug complex. Despite its great application value, the general mechanism of PE collapse and complex formation in this system has not been well understood. In this work, by combining a mean-field theory with extensive molecular simulations, we unveil the phase behaviors of the system under dilute and salt-free conditions. We find that the complexation is a first-order-like phase transition triggered by the hydrophobic attraction between the drug molecules. Importantly, the valence ratio between the drug molecule and the PE monomer plays a crucial role in determining the stability and morphology of the complex. Moreover, the sign of the zeta potential and the net charge of the complex are found to be inverted as the hydrophobicity of the drug molecules increases. Both theory and simulation indicate that the complexation point, the complex morphology, and the electrostatic properties of the complex have a weak dependence on chain length. Finally, the dynamic aspects of PE-drug complexation are also explored, and it is found that the complex can be trapped in a nonequilibrium glasslike state when the hydrophobicity of the drug molecule is too strong. Our work gives a clear physical picture of the PE-drug complexation phenomenon and provides guidelines for fabricating colloidal PE-drug complexes with the desired physical characteristics.

  2. Using Sandia's Z Machine and Density Functional Theory Simulations to Understand Planetary Materials

    Science.gov (United States)

    Root, Seth

    2017-06-01

    The use of Z, NIF, and Omega has produced many breakthrough results in high pressure physics. One area that has greatly benefited from these facilities is the planetary sciences. The high pressure behavior of planetary materials has implications for numerous geophysical and planetary processes. The continuing discovery of exosolar super-Earths demonstrates the need for accurate equation of state data to better inform our models of their interior structures. Planetary collision processes, such as the moon-forming giant impact, require understanding planetary materials over a wide range of pressures and temperatures. Using Z, we examined the shock compression response of some common planetary materials: MgO, Mg2SiO4, and Fe2O3 (hematite). We compare the experimental shock compression measurements with density functional theory (DFT) based quantum molecular dynamics (QMD) simulations. The combination of experiment and theory provides clearer understanding of planetary materials properties at extreme conditions. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  3. Formation factor in Bentheimer and Fontainebleau sandstones: Theory compared with pore-scale numerical simulations

    Science.gov (United States)

    Ghanbarian, Behzad; Berg, Carl F.

    2017-09-01

    Accurate quantification of formation resistivity factor F (also called formation factor) provides useful insight into connectivity and pore space topology in fully saturated porous media. In particular the formation factor has been extensively used to estimate permeability in reservoir rocks. One of the widely applied models to estimate F is Archie's law (F = ϕ^-m, in which ϕ is total porosity and m is cementation exponent) that is known to be valid in rocks with negligible clay content, such as clean sandstones. In this study we compare formation factors determined by percolation and effective-medium theories as well as Archie's law with numerical simulations of electrical resistivity on digital rock models. These digital models represent Bentheimer and Fontainebleau sandstones and are derived either by reconstruction or directly from micro-tomographic images. Results show that the universal quadratic power law from percolation theory accurately estimates the calculated formation factor values in network models over the entire range of porosity. However, it crosses over to the linear scaling from the effective-medium approximation at the porosity of 0.75 in grid models. We also show that the effect of critical porosity, disregarded in Archie's law, is nontrivial, and the Archie model inaccurately estimates the formation factor in low-porosity homogeneous sandstones.
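    The two scalings contrasted above can be compared directly. The sketch below, a minimal illustration rather than the authors' code, evaluates Archie's law F = ϕ^-m against a percolation-type power law F ∝ (ϕ - ϕc)^-2; the cementation exponent, critical porosity and prefactor are illustrative placeholders that would be fitted to pore-scale or laboratory data in practice.

```python
import numpy as np

def archie(phi, m=2.0):
    """Archie's law: F = phi**(-m), valid for clean, clay-free rocks."""
    return np.asarray(phi, dtype=float) ** (-m)

def percolation(phi, phi_c=0.02, prefactor=1.0, mu=2.0):
    """Percolation-type scaling F ~ (phi - phi_c)**(-mu), with mu ~= 2 in 3D.

    phi_c and the prefactor are illustrative placeholders, not fitted values.
    """
    phi = np.asarray(phi, dtype=float)
    return np.where(phi > phi_c, prefactor * (phi - phi_c) ** (-mu), np.inf)

porosities = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
for p, fa, fp in zip(porosities, archie(porosities), percolation(porosities)):
    print(f"phi = {p:.2f}:  F_Archie = {fa:7.1f}   F_percolation = {fp:7.1f}")
```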

  4. Theory and simulation of DNA-coated colloids: a guide for rational design.

    Science.gov (United States)

    Angioletti-Uberti, Stefano; Mognetti, Bortolo M; Frenkel, Daan

    2016-03-07

    By exploiting the exquisite selectivity of DNA hybridization, DNA-coated colloids (DNACCs) can be made to self-assemble in a wide variety of structures. The beauty of this system stems largely from its exceptional versatility and from the fact that a proper choice of the grafted DNA sequences yields fine control over the colloidal interactions. Theory and simulations have an important role to play in the optimal design of self-assembling DNACCs. At present, the powerful model-based design tools are not widely used, because the theoretical literature is fragmented and the connection between different theories is often not evident. In this Perspective, we aim to discuss the similarities and differences between the different models that have been described in the literature, their underlying assumptions, their strengths and their weaknesses. Using the tools described in the present Review, it should be possible to move towards a more rational design of novel self-assembling structures of DNACCs and, more generally, of systems where ligand-receptor binding is used to control interactions.

  5. Simulation in paediatric urology and surgery. Part 1: An overview of educational theory.

    Science.gov (United States)

    Nataraja, Ramesh M; Webb, Nathalie; Lopez, Pedro-Jose

    2018-03-01

    Surgical training has changed radically in the last few decades. The traditional Halstedian model of time-bound apprenticeship has been replaced with competency-based training. Advanced understanding of mastery learning principles has vastly altered educational methodology in surgical training, in terms of instructional design, delivery of educational content, assessment of learning, and programmatic evaluation. As part of this educational revolution, fundamentals of simulation-based education have been adopted into all levels and aspects of surgical training, requiring an understanding of concepts of fidelity and realism and the impact they have on learning. There are many educational principles and theories that can help clinical teachers understand the way that their trainees learn. In the acquisition of surgical expertise, concepts of mastery learning, deliberate practice, and experiential learning are particularly important. Furthermore, surgical teachers need to understand the principles of effective feedback, which is essential to all forms of skills learning. This article, the first of two papers, presents an overview of relevant learning theory for the busy paediatric surgeon and urologist. It seeks to introduce the concepts underpinning current changes in surgical education and training, and provides practical tips to optimise teaching. Copyright © 2018 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.

  6. Simulation of electron energy loss spectra of nanomaterials with linear-scaling density functional theory

    International Nuclear Information System (INIS)

    Tait, E W; Payne, M C; Ratcliff, L E; Haynes, P D; Hine, N D M

    2016-01-01

    Experimental techniques for electron energy loss spectroscopy (EELS) combine high energy resolution with high spatial resolution. They are therefore powerful tools for investigating the local electronic structure of complex systems such as nanostructures, interfaces and even individual defects. Interpretation of experimental electron energy loss spectra is often challenging and can require theoretical modelling of candidate structures, which themselves may be large and complex, beyond the capabilities of traditional cubic-scaling density functional theory. In this work, we present functionality to compute electron energy loss spectra within the onetep linear-scaling density functional theory code. We first demonstrate that simulated spectra agree with those computed using conventional plane wave pseudopotential methods to a high degree of precision. The ability of onetep to tackle large problems is then exploited to investigate convergence of spectra with respect to supercell size. Finally, we apply the novel functionality to a study of the electron energy loss spectra of defects on the (1 0 1) surface of an anatase slab and determine concentrations of defects which might be experimentally detectable. (paper)

  7. Methodology for Simulation and Analysis of Complex Adaptive Supply Network Structure and Dynamics Using Information Theory

    Directory of Open Access Journals (Sweden)

    Joshua Rodewald

    2016-10-01

    Full Text Available Supply networks existing today in many industries can behave as complex adaptive systems, making them more difficult to analyze and assess. Being able to fully understand both the complex static and dynamic structures of a complex adaptive supply network (CASN) is key to being able to make more informed management decisions and prioritize resources and production throughout the network. Previous efforts to model and analyze CASN have been impeded by the complex, dynamic nature of the systems. However, drawing from other complex adaptive systems sciences, information theory provides a model-free methodology removing many of those barriers, especially concerning complex network structure and dynamics. With minimal information about the network nodes, transfer entropy can be used to reverse engineer the network structure while local transfer entropy can be used to analyze the network structure’s dynamics. Both simulated and real-world networks were analyzed using this methodology. Applying the methodology to CASNs allows the practitioner to capitalize on observations from the highly multidisciplinary field of information theory which provides insights into CASN’s self-organization, emergence, stability/instability, and distributed computation. This not only provides managers with a more thorough understanding of a system’s structure and dynamics for management purposes, but also opens up research opportunities into eventual strategies to monitor and manage emergence and adaption within the environment.
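    For readers unfamiliar with the quantity at the centre of the methodology, the sketch below gives a minimal plug-in (histogram) estimator of transfer entropy between two discretised time series. It is a generic illustration of the definition, not the authors' implementation; the bin count and the toy coupled series are arbitrary choices.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=4):
    """Plug-in estimate of transfer entropy T_{X->Y} (in bits) with 1-step histories.

    T_{X->Y} = sum p(y_{t+1}, y_t, x_t) * log2[ p(y_{t+1}|y_t, x_t) / p(y_{t+1}|y_t) ]
    """
    # Discretise each series into `bins` symbols using quantile bin edges
    xd = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    yd = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))

    triples = Counter(zip(yd[1:], yd[:-1], xd[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(yd[:-1], xd[:-1]))          # (y_t, x_t)
    pairs_yy = Counter(zip(yd[1:], yd[:-1]))           # (y_{t+1}, y_t)
    singles_y = Counter(yd[:-1])                       # y_t
    n = len(yd) - 1

    te = 0.0
    for (y1, y0, x0), count in triples.items():
        p_joint = count / n
        p_cond_full = count / pairs_yx[(y0, x0)]
        p_cond_y = pairs_yy[(y1, y0)] / singles_y[y0]
        te += p_joint * np.log2(p_cond_full / p_cond_y)
    return te

# Toy example: y is partly driven by the previous value of x, so T(X->Y) > T(Y->X)
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = 0.8 * np.roll(x, 1) + 0.2 * rng.normal(size=5000)
print("T(X->Y) =", round(transfer_entropy(x, y), 3), " T(Y->X) =", round(transfer_entropy(y, x), 3))
```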

  8. ECOTONE Manual

    National Research Council Canada - National Science Library

    Hochstrasser, Tamara; Peters, Debra

    2005-01-01

    .... One tool, the ECOTONE model, was set up to simulate vegetation recovery from military disturbances on Fort Bliss, Texas, as a conceptual structure to prioritize the research efforts in land management...

  9. LCS Users Manual

    International Nuclear Information System (INIS)

    Redd, A.J.; Ignat, D.W.

    1998-01-01

    The Lower Hybrid Simulation Code (LSC) is a computational model of lower hybrid current drive in the presence of an electric field. Details of geometry, plasma profiles, and circuit equations are treated. Two-dimensional velocity space effects are approximated in a one-dimensional Fokker-Planck treatment. The LSC was originally written to be a module for lower hybrid current drive called by the Tokamak Simulation Code (TSC), which is a numerical model of an axisymmetric tokamak plasma and the associated control systems. The TSC simulates the time evolution of a free boundary plasma by solving the MHD equations on a rectangular computational grid. The MHD equations are coupled to the external circuits (representing poloidal field coils) through the boundary conditions. The code includes provisions for modeling the control system, external heating, and fusion heating. The LSC module can also be called by the TRANSP code. TRANSP represents the plasma with an axisymmetric, fixed-boundary model and focuses on calculation of plasma transport to determine transport coefficients from data on power inputs and parameters reached. This manual covers the basic material needed to use the LSC. If run in conjunction with TSC, the "TSC Users Manual" should be consulted. If run in conjunction with TRANSP, on-line documentation will be helpful. A theoretical background of the governing equations and numerical methods is given. Information on obtaining, compiling, and running the code is also provided

  10. Simulating Excitons in MoS2 with Time-Dependent Density Functional Theory

    Science.gov (United States)

    Flamant, Cedric; Kolesov, Grigory; Kaxiras, Efthimios

    Monolayer molybdenum disulfide, owing to its graphene-like two-dimensional geometry whilst still having a finite bandgap, is a material of great interest in condensed matter physics and for potential application in electronic devices. In particular, MoS2 exhibits significant excitonic effects, a desirable quality for fundamental many-body research. Time-dependent density functional theory (TD-DFT) allows us to simulate dynamical effects as well as temperature-based effects in a natural way given the direct treatment of the time evolution of the system. We present a TD-DFT study of monolayer MoS2 exciton dynamics, examining various qualitative and quantitative predictions in pure samples and in the presence of defects. In particular, we generate an absorption spectrum through simulated pulse excitation for comparison to experiment and also analyze the response of the exciton in an external electric field. In this work we also discuss the electronic structure of the exciton in MoS2 with and without vacancies.

  11. The theory and simulation of relativistic electron beam transport in the ion-focused regime

    International Nuclear Information System (INIS)

    Swanekamp, S.B.; Holloway, J.P.; Kammash, T.; Gilgenbach, R.M.

    1992-01-01

    Several recent experiments involving relativistic electron beam (REB) transport in plasma channels show two density regimes for efficient transport; a low-density regime known as the ion-focused regime (IFR) and a high-pressure regime. The results obtained in this paper use three separate models to explain the dependency of REB transport efficiency on the plasma density in the IFR. Conditions for efficient beam transport are determined by examining equilibrium solutions of the Vlasov--Maxwell equations under conditions relevant to IFR transport. The dynamic force balance required for efficient IFR transport is studied using the particle-in-cell (PIC) method. These simulations provide new insight into the transient beam front physics as well as the dynamic approach to IFR equilibrium. Nonlinear solutions to the beam envelope are constructed to explain oscillations in the beam envelope observed in the PIC simulations but not contained in the Vlasov equilibrium analysis. A test particle analysis is also developed as a method to visualize equilibrium solutions of the Vlasov equation. This not only provides further insight into the transport mechanism but also illustrates the connections between the three theories used to describe IFR transport. Separately these models provide valuable information about transverse beam confinement; together they provide a clear physical understanding of REB transport in the IFR

  12. The SIMRAND methodology: Theory and application for the simulation of research and development projects

    Science.gov (United States)

    Miles, R. F., Jr.

    1986-01-01

    A research and development (R&D) project often involves a number of decisions that must be made concerning which subset of systems or tasks are to be undertaken to achieve the goal of the R&D project. To help in this decision making, SIMRAND (SIMulation of Research ANd Development Projects) is a methodology for the selection of the optimal subset of systems or tasks to be undertaken on an R&D project. Using alternative networks, the SIMRAND methodology models the alternative subsets of systems or tasks under consideration. Each path through an alternative network represents one way of satisfying the project goals. Equations are developed that relate the system or task variables to the measure of preference. Uncertainty is incorporated by treating the variables of the equations probabilistically as random variables, with cumulative distribution functions assessed by technical experts. Analytical techniques of probability theory are used to reduce the complexity of the alternative networks. Cardinal utility functions over the measure of preference are assessed for the decision makers. A run of the SIMRAND I computer program combines, in a Monte Carlo simulation model, the network structure, the equations, the cumulative distribution functions, and the utility functions.
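    A minimal toy version of the idea, not the SIMRAND program itself, is sketched below: each alternative path through the network is a candidate way of meeting the project goal, task variables are sampled from expert-assessed distributions, and alternatives are ranked by expected utility. The paths, triangular cost distributions and risk-averse utility function are all illustrative placeholders.

```python
import math
import random

# Hypothetical alternative paths through the network: each task carries a
# (best, most likely, worst) cost assessment, sampled with a triangular distribution.
PATHS = {
    "path_A": [(2.0, 3.0, 6.0), (1.0, 2.0, 4.0)],
    "path_B": [(1.5, 4.0, 9.0)],
}

def utility(total_cost, risk_aversion=0.3):
    """Illustrative risk-averse (exponential) utility; lower total cost is better."""
    return -math.exp(risk_aversion * total_cost)

def simulate(paths, n_trials=20_000, seed=1):
    """Monte Carlo comparison of the alternative paths by expected utility."""
    rng = random.Random(seed)
    expected = {}
    for name, tasks in paths.items():
        total_u = 0.0
        for _ in range(n_trials):
            cost = sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks)
            total_u += utility(cost)
        expected[name] = total_u / n_trials
    return expected

results = simulate(PATHS)
print(results, "-> preferred alternative:", max(results, key=results.get))
```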

  13. Plasma confinement theory and transport simulation: Technical progress report, October 1, 1987-October 14, 1988

    International Nuclear Information System (INIS)

    Ross, D.W.

    1988-06-01

    An overview of the program has been given in the recent proposal. The principal objectives are to provide theoretical interpretation and computer modelling for the TEXT tokamak, and to advance the simulation studies of tokamaks generally, functioning as a national transport computation facility. We also carry out equilibrium and stability studies in support of the TEXT upgrade, and work continues, at low levels, on Alfven waves and MFEnet software development. The specific focus of the program is to lay the groundwork for detailed comparison with experiment of the various transport theories, so that physics understanding and confidence in predictions of future machine behavior will be enhanced. This involves collecting, in retrievable form, the data from TEXT and other tokamaks and making the data available through easy-to-use interfaces; developing criteria for success in fitting models to the data; maintaining the Texas transport code, CHAPO, and making it available to users; collecting theoretical models and implementing them in the transport code; and carrying out the simulation studies and evaluating the fits to the data. 37 refs

  14. IMAGE User Manual

    Energy Technology Data Exchange (ETDEWEB)

    Stehfest, E; De Waal, L; Oostenrijk, R.

    2010-09-15

    This user manual contains the basic information for running the simulation model IMAGE ('Integrated Model to Assess the Global Environment') of PBL. The motivation for this report was a substantial restructuring of the source code for IMAGE version 2.5. The document gives concise content information about the submodels, tells the user how to install the program, describes the directory structure of the run environment, shows how scenarios have to be prepared and run, and gives insight in the restart functionality.

  15. NASCAP programmer's reference manual

    Science.gov (United States)

    Mandell, M. J.; Stannard, P. R.; Katz, I.

    1993-05-01

    The NASA Charging Analyzer Program (NASCAP) is a computer program designed to model the electrostatic charging of complicated three-dimensional objects, both in a test tank and at geosynchronous altitudes. This document is a programmer's reference manual and user's guide. It is designed as a reference to experienced users of the code, as well as an introduction to its use for beginners. All of the many capabilities of NASCAP are covered in detail, together with examples of their use. These include the definition of objects, plasma environments, potential calculations, particle emission and detection simulations, and charging analysis.

  16. Structure and dynamics of amorphous polymers: computer simulations compared to experiment and theory

    International Nuclear Information System (INIS)

    Paul, Wolfgang; Smith, Grant D

    2004-01-01

    This contribution considers recent developments in the computer modelling of amorphous polymeric materials. Progress in our capabilities to build models for the computer simulation of polymers from the detailed atomistic scale up to coarse-grained mesoscopic models, together with the ever-improving performance of computers, has led to important insights from computer simulations into the structural and dynamic properties of amorphous polymers. Structurally, chain connectivity introduces a range of length scales from that of the chemical bond to the radius of gyration of the polymer chain covering 2-4 orders of magnitude. Dynamically, this range of length scales translates into an even larger range of time scales observable in relaxation processes in amorphous polymers, ranging from about 10^-13 to 10^-3 s, or even to 10^3 s when glass dynamics is concerned. There is currently no single simulation technique that is able to describe all these length and time scales efficiently. On large length and time scales basic topology and entropy become the governing properties and this fact can be exploited using computer simulations of coarse-grained polymer models to study universal aspects of the structure and dynamics of amorphous polymers. On the largest length and time scales chain connectivity is the dominating factor leading to the strong increase in longest relaxation times described within the reptation theory of polymer melt dynamics. Recently, many of the universal aspects of this behaviour have been further elucidated by computer simulations of coarse-grained polymer models. On short length scales the detailed chemistry and energetics of the polymer are important, and one has to be able to capture them correctly using chemically realistic modelling of specific polymers, even when the aim is to extract generic physical behaviour exhibited by the specific chemistry. Detailed studies of chemically realistic models highlight the central importance of torsional dynamics

  17. Interpretation of the U L3-edge EXAFS in uranium dioxide using molecular dynamics and density functional theory simulations

    International Nuclear Information System (INIS)

    Bocharov, Dmitry; Chollet, Melanie; Krack, Matthias; Bertsch, Johannes; Grolimund, Daniel; Martin, Matthias; Kuzmin, Alexei; Purans, Juris; Kotomin, Eugene

    2016-01-01

    X-ray absorption spectroscopy is employed to study the local structure of pure and Cr-doped UO2 at 300 K. The U L3-edge EXAFS spectrum is interpreted within the multiple-scattering (MS) theory using the results of the classical and ab initio molecular dynamics simulations, allowing us to validate the accuracy of theoretical models. The Cr K-edge XANES is simulated within the full-multiple-scattering formalism considering a substitutional model (Cr at U site). It is shown that both unrelaxed and relaxed structures, produced by ab initio density functional theory (DFT) calculations, fail to describe the experiment. (paper)

  18. Caltrans : construction manual

    Science.gov (United States)

    2009-08-01

    Caltrans intends this manual as a resource for all personnel engaged in contract administration. The manual establishes policies and procedures for the construction phase of Caltrans projects. However, this manual is not a contract document. It impos...

  19. Plane shear flows of frictionless spheres: Kinetic theory and 3D soft-sphere discrete element method simulations

    OpenAIRE

    Vescovi, Dalila; Berzi, Diego; Richard, Patrick; Brodu, Nicolas

    2014-01-01

    International audience; We use existing 3D Discrete Element simulations of simple shear flows of spheres to evaluate the radial distribution function at contact that enables kinetic theory to correctly predict the pressure and the shear stress, for different values of the collisional coefficient of restitution. Then, we perform 3D Discrete Element simulations of plane flows of frictionless, inelastic spheres, sheared between walls made bumpy by gluing particles in a regular array, at fixed av...

  20. CFD simulation of direct contact condensation with ANSYS CFX using surface renewal theory based heat transfer coefficients

    Energy Technology Data Exchange (ETDEWEB)

    Wanninger, Andreas; Ceuca, Sabin Cristian; Macian-Juan, Rafael [Technische Univ. Muenchen, Garching (Germany). Dept. of Nuclear Engineering

    2013-07-01

    Different approaches for the calculation of Direct Contact Condensation (DCC) using Heat Transfer Coefficients (HTC) based on the Surface Renewal Theory (SRT) are tested using the CFD simulation tool ANSYS CFX. The present work constitutes a preliminary study of the flow patterns and conditions observed using different HTC models. A complex 3D flow pattern will be observed in the CFD simulations as well as a strong coupling between the condensation rate and the two-phase flow dynamics. (orig.)

  1. Fused hard-sphere chain molecules: Comparison between Monte Carlo simulation for the bulk pressure and generalized Flory theories

    International Nuclear Information System (INIS)

    Costa, L.A.; Zhou, Y.; Hall, C.K.; Carra, S.

    1995-01-01

    We report Monte Carlo simulation results for the bulk pressure of fused-hard-sphere (FHS) chain fluids with bond-length-to-bead-diameter ratios ∼ 0.4 at chain lengths n=4, 8 and 16. We also report density profiles for FHS chain fluids at a hard wall. The results for the compressibility factor are compared to results from extensions of the Generalized Flory (GF) and Generalized Flory Dimer (GFD) theories proposed by Yethiraj et al. and by us. Our new GF theory, GF-AB, significantly improves the prediction of the bulk pressure of fused-hard-sphere chains over the GFD theories proposed by Yethiraj et al. and by us although the GFD theories give slightly better low-density results. The GFD-A theory, the GFD-B theory and the new theories (GF-AB, GFD-AB, and GFD-AC) satisfy the exact zero-bonding-length limit. All theories considered recover the GF or GFD theories at the tangent hard-sphere chain limit

  2. The effective χ parameter in polarizable polymeric systems: One-loop perturbation theory and field-theoretic simulations.

    Science.gov (United States)

    Grzetic, Douglas J; Delaney, Kris T; Fredrickson, Glenn H

    2018-05-28

    We derive the effective Flory-Huggins parameter in polarizable polymeric systems, within a recently introduced polarizable field theory framework. The incorporation of bead polarizabilities in the model self-consistently embeds dielectric response, as well as van der Waals interactions. The latter generate a χ parameter (denoted χ̃) between any two species with polarizability contrast. Using one-loop perturbation theory, we compute corrections to the structure factor S(k) and the dielectric function ε̂(k) for a polarizable binary homopolymer blend in the one-phase region of the phase diagram. The electrostatic corrections to S(k) can be entirely accounted for by a renormalization of the excluded volume parameter B into three van der Waals-corrected parameters B_AA, B_AB, and B_BB, which then determine χ̃. The one-loop theory not only enables the quantitative prediction of χ̃ but also provides useful insight into the dependence of χ̃ on the electrostatic environment (for example, its sensitivity to electrostatic screening). The unapproximated polarizable field theory is amenable to direct simulation via complex Langevin sampling, which we employ here to test the validity of the one-loop results. From simulations of S(k) and ε̂(k) for a system of polarizable homopolymers, we find that the one-loop theory is best suited to high concentrations, where it performs very well. Finally, we measure χ̃N in simulations of a polarizable diblock copolymer melt and obtain excellent agreement with the one-loop theory. These constitute the first fully fluctuating simulations conducted within the polarizable field theory framework.

  3. Theory and simulation of explicit solvent effects on protein folding in vitro and in vivo

    Science.gov (United States)

    England, Jeremy L.

    The aim of this work is to develop theoretical tools for understanding what happens to water that is confined in amphipathic cavities, and for testing the consequences of this understanding for protein folding in vitro and in vivo. We begin in the first chapter with a brief review of the theoretical and simulation literature on the hydrophobic effect and the aqueous solvation of charged species that also puts forward a simple theoretical framework within which various solvation phenomena reported in past studies may be unified. Subsequently, in the second chapter we also review past computational and theoretical work on the specific question of how chaperonin complexes assist the folding of their substrates. With the context set, we turn in Chapter 3 to the case of an open system with water trapped between hydrophobic plates that experiences a uniform electric field normal to and between the plates. Classic bulk theory of electrostriction in polarizable fluids tells us that the electric field should cause an increase in local water density as it rises, yet some simulations have suggested the opposite. We present a mean-field Potts model we have developed to explain this discrepancy, and show how such a simple, coarse-grained lattice description can capture the fundamental consequences of the fact that external electric fields can frustrate the hydrogen bond network in confined water. Chapter 4 continues to pursue the issue of solvent evacuation between hydrophobic plates, but focuses on the impact of chemical denaturants on hydrophobic effects using molecular dynamics simulations of hydrophobic dewetting. We find that while urea and guanidinium have similar qualitative effects at the bulk level, they seem to differ in the microscopic mechanism by which they denature proteins, although both inhibit the onset of dewetting. Lastly, Chapters 5 and 6 examine the potential importance of solvent-mediated forces to protein folding in vivo. Chapter 5 develops a Landau

  4. The Decision to Emigrate: A Simulation Model Based on the Theory of Planned Behaviour

    NARCIS (Netherlands)

    Willekens, F.J.; Grow, A.; Van Bavel, J.

    2016-01-01

    The theory of planned behaviour (TPB) is one of the most widely used theories of behaviour. It was developed by Ajzen as an extension of Fishbein’s theory of reasoned action (Fishbein and Ajzen, Predicting and changing behaviour. Psychology Press, New York, 2010). The theory states that intentions

  5. Understanding Reactions to Workplace Injustice through Process Theories of Motivation: A Teaching Module and Simulation

    Science.gov (United States)

    Stecher, Mary D.; Rosse, Joseph G.

    2007-01-01

    Management and organizational behavior students are often overwhelmed by the plethora of motivation theories they must master at the undergraduate level. This article offers a teaching module geared toward helping students understand how two major process theories of motivation, equity and expectancy theories and theories of organizational…

  6. Investigating dislocation motion through a field of solutes with atomistic simulations and reaction rate theory

    International Nuclear Information System (INIS)

    Saroukhani, S.; Warner, D.H.

    2017-01-01

    The rate of thermally activated dislocation motion across a field of solutes is studied using traditional and modern atomistically informed rate theories. First, the accuracy of popular variants of the Harmonic Transition State Theory (HTST), as the most common approach, is examined by comparing predictions to direct MD simulations. It is shown that HTST predictions are grossly inaccurate due to the anharmonic effect of thermal softening. Next, the utility of Transition Interface Sampling (TIS) was examined, as the method was recently shown to be effective for predicting the rate of dislocation-precipitate interactions. For the dislocation-solute interactions studied here, TIS is found to be accurate only when the dislocation overcomes multiple obstacles at a time, i.e. jerky motion, and it is inaccurate in the unpinning regime where the energy barrier is of diffusive nature. It is then shown that the Partial Path TIS method - designed for diffusive barriers - provides accurate predictions in the unpinning regime. The two methods are then used to study the temperature and load dependence of the rate. It is shown that the Meyer-Neldel (MN) rule prediction of the entropy barrier is not as accurate as it is in the case of dislocation-precipitate interactions. In response, an alternative model is proposed that provides an accurate prediction of the entropy barrier. This model can be combined with TST to offer an attractively simple rate prediction approach. Lastly, (PP)TIS is used to predict the Strain Rate Sensitivity (SRS) factor at experimental strain rates and the predictions are compared to experimental values.
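    For orientation, the rate expressions being compared can be summarised by the transition-state-theory form k = ν exp(ΔS/k_B) exp(-ΔE/k_B T); under the Meyer-Neldel compensation rule the entropy barrier is tied to the energy barrier through a characteristic temperature, ΔS ≈ ΔE/T_MN. The sketch below evaluates this textbook form with placeholder numbers; it is not the alternative model proposed in the study.

```python
import numpy as np

KB_EV = 8.617e-5  # Boltzmann constant [eV/K]

def tst_rate(delta_e_ev, temperature_k, attempt_freq_hz=1e12, t_mn_k=None):
    """Transition-state-theory rate k = nu * exp(dS/kB) * exp(-dE/(kB*T)).

    If a Meyer-Neldel temperature T_MN is given, the entropy barrier is taken as
    dS = dE / T_MN (the MN compensation rule); otherwise dS = 0 (harmonic-like).
    All numbers used here are illustrative placeholders, not fitted values.
    """
    ds_over_kb = delta_e_ev / (KB_EV * t_mn_k) if t_mn_k else 0.0
    return attempt_freq_hz * np.exp(ds_over_kb) * np.exp(-delta_e_ev / (KB_EV * temperature_k))

for temperature in (150.0, 300.0, 600.0):
    with_mn = tst_rate(0.35, temperature, t_mn_k=1000.0)
    without = tst_rate(0.35, temperature)
    print(f"{temperature:5.0f} K:  with MN entropy {with_mn:.3e} /s   without {without:.3e} /s")
```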

  7. Extraction of space-charge-dominated ion beams from an ECR ion source: Theory and simulation

    Science.gov (United States)

    Alton, G. D.; Bilheux, H.

    2004-05-01

    Extraction of high quality space-charge-dominated ion beams from plasma ion sources constitutes an optimization problem centered about finding an optimal concave plasma emission boundary that minimizes half-angular divergence for a given charge state, independent of the presence or lack thereof of a magnetic field in the extraction region. The curvature of the emission boundary acts to converge/diverge the low velocity beam during extraction. Beams of highest quality are extracted whenever the half-angular divergence, ω, is minimized. Under minimum half-angular divergence conditions, the plasma emission boundary has an optimum curvature and the perveance, P, current density, j+ext, and extraction gap, d, have optimum values for a given charge state, q. Optimum values for each of the independent variables (P, j+ext and d) are found to be in close agreement with those derived from elementary analytical theory for extraction with a simple two-electrode extraction system, independent of the presence of a magnetic field. The magnetic field only increases the emittances of beams through additional aberrational effects caused by increased angular divergences through coupling of the longitudinal to the transverse velocity components of particles as they pass through the mirror region of the electron cyclotron resonance (ECR) ion source. This article reviews the underlying theory of elementary extraction optics and presents results derived from simulation studies of extraction of space-charge dominated heavy-ion beams of varying mass, charge state, and intensity from an ECR ion source with emphasis on magnetic field induced effects.

  8. Extraction of space-charge-dominated ion beams from an ECR ion source: Theory and simulation

    International Nuclear Information System (INIS)

    Alton, G.D.; Bilheux, H.

    2004-01-01

    Extraction of high quality space-charge-dominated ion beams from plasma ion sources constitutes an optimization problem centered about finding an optimal concave plasma emission boundary that minimizes half-angular divergence for a given charge state, independent of the presence or lack thereof of a magnetic field in the extraction region. The curvature of the emission boundary acts to converge/diverge the low velocity beam during extraction. Beams of highest quality are extracted whenever the half-angular divergence, ω, is minimized. Under minimum half-angular divergence conditions, the plasma emission boundary has an optimum curvature and the perveance, P, current density, j+ext, and extraction gap, d, have optimum values for a given charge state, q. Optimum values for each of the independent variables (P, j+ext and d) are found to be in close agreement with those derived from elementary analytical theory for extraction with a simple two-electrode extraction system, independent of the presence of a magnetic field. The magnetic field only increases the emittances of beams through additional aberrational effects caused by increased angular divergences through coupling of the longitudinal to the transverse velocity components of particles as they pass through the mirror region of the electron cyclotron resonance (ECR) ion source. This article reviews the underlying theory of elementary extraction optics and presents results derived from simulation studies of extraction of space-charge dominated heavy-ion beams of varying mass, charge state, and intensity from an ECR ion source with emphasis on magnetic field induced effects

  9. A close examination of the structure and dynamics of HC(NH2)2PbI3 by MD simulations and group theory

    KAUST Repository

    Carignano, M. A.; Saeed, Y.; Aravindh, S. Assa; Roqan, Iman S.; Even, J.; Katan, C.

    2016-01-01

    The formamidinium lead iodide hybrid perovskite is studied using first principles molecular dynamics simulations and further analyzed using group theory. The simulations are performed on large supercells containing 768 atoms under isothermal

  10. SYVAC3 manual

    International Nuclear Information System (INIS)

    Andres, T.H.

    2000-01-01

    SYVAC3 (Systems Variability Analysis Code, generation 3) is a computer program that implements a method called systems variability analysis to analyze the behaviour of a system in the presence of uncertainty. This method is based on simulating the system many times to determine the variation in behaviour it can exhibit. SYVAC3 specializes in systems representing the transport of contaminants, and has several features to simplify the modelling of such systems. It provides a general tool for estimating environmental impacts from the dispersal of contaminants. This report describes the use and structure of SYVAC3. It is intended for modellers, programmers, operators and reviewers who deal with simulation codes based on SYVAC3. From this manual they can learn how to link a model with SYVAC3, how to set up an input file, and how to extract results from output files. The manual lists the subroutines of SYVAC3 that are available for use by models, and describes their argument lists. It also gives an overview of how routines in the File Reading Package, the Parameter Sampling Package and the Time Series Package can be used by programs outside of SYVAC3. (author)

  11. SYVAC3 manual

    Energy Technology Data Exchange (ETDEWEB)

    Andres, T.H

    2000-07-01

    SYVAC3 (Systems Variability Analysis Code, generation 3) is a computer program that implements a method called systems variability analysis to analyze the behaviour of a system in the presence of uncertainty. This method is based on simulating the system many times to determine the variation in behaviour it can exhibit. SYVAC3 specializes in systems representing the transport of contaminants, and has several features to simplify the modelling of such systems. It provides a general tool for estimating environmental impacts from the dispersal of contaminants. This report describes the use and structure of SYVAC3. It is intended for modellers, programmers, operators and reviewers who deal with simulation codes based on SYVAC3. From this manual they can learn how to link a model with SYVAC3, how to set up an input file, and how to extract results from output files. The manual lists the subroutines of SYVAC3 that are available for use by models, and describes their argument lists. It also gives an overview of how routines in the File Reading Package, the Parameter Sampling Package and the Time Series Package can be used by programs outside of SYVAC3. (author)

  12. Light Condensation and Localization in Disordered Photonic Media: Theory and Large Scale ab initio Simulations

    KAUST Repository

    Toth, Laszlo Daniel

    2013-05-07

    Disordered photonics is the study of light in random media. In a disordered photonic medium, multiple scattering of light and coherence, together with the fundamental principle of reciprocity, produce a wide range of interesting phenomena, such as enhanced backscattering and Anderson localization of light. They are also responsible for the existence of modes in these random systems. It is known that analogous processes to Bose-Einstein condensation can occur in classical wave systems, too. Classical condensation has been studied in several contexts in photonics: pulse formation in lasers, mode-locking theory and coherent emission of disordered lasers. All these systems have the common theme of possessing a large ensemble of waves or modes, together with nonlinearity, dispersion or gain. In this work, we study light condensation and its connection with light localization in a disordered, passive dielectric medium. We develop a theory for the modes inside the disordered resonator, which combines the Feshbach projection technique with spin-glass theory and statistical physics. In particular, starting from Maxwell’s equations, we map the system to a spherical p-spin model with p = 2. The spins are replaced by modes and the temperature is related to the fluctuations in the environment. We study the equilibrium thermodynamics of the system in a general framework and show that two distinct phases exist: a paramagnetic phase, where all the modes are randomly oscillating, and a condensed phase, where the energy condenses onto a single mode. The thermodynamic quantities can be explicitly interpreted and can also be computed from the disorder-averaged time domain correlation function. We launch an ab initio simulation campaign using our own code and the Shaheen supercomputer to test the theoretical predictions. We construct photonic samples of varying disorder and find computationally relevant ways to obtain the thermodynamic quantities. We observe the phase transition

  13. Computer Simulations of Quantum Theory of Hydrogen Atom for Natural Science Education Students in a Virtual Lab

    Science.gov (United States)

    Singh, Gurmukh

    2012-01-01

    The present article is primarily targeted at advanced college/university undergraduate students of chemistry/physics education, computational physics/chemistry, and computer science. A recent software system, MS Visual Studio .NET 2010, is employed to perform computer simulations modeling Bohr's quantum theory of…
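    The classroom simulations referred to here build on the textbook Bohr model. As a minimal illustration (not the authors' Visual Studio code), the snippet below computes the hydrogen energy levels E_n = -13.6 eV / n^2 and the Balmer-series transition wavelengths.

```python
# Bohr-model hydrogen: E_n = -13.6057 eV / n^2; photon wavelength from lambda = h*c / dE.
RYDBERG_EV = 13.6057   # hydrogen ground-state binding energy [eV]
HC_EV_NM = 1239.842    # h*c in eV*nm

def energy_level(n):
    """Bohr-model energy of level n in eV (negative = bound)."""
    return -RYDBERG_EV / n**2

def transition_wavelength_nm(n_upper, n_lower):
    """Wavelength of the photon emitted when the electron drops from n_upper to n_lower."""
    delta_e = energy_level(n_upper) - energy_level(n_lower)   # positive photon energy [eV]
    return HC_EV_NM / abs(delta_e)

# Balmer series (transitions down to n = 2): H-alpha ~ 656 nm, H-beta ~ 486 nm, ...
for n in range(3, 7):
    print(f"n={n} -> 2 : {transition_wavelength_nm(n, 2):.1f} nm")
```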

  14. Extension of the Johnson-Mehl-Avrami-Kolmogorov theory incorporating anisotropic growth studied by Monte Carlo simulations

    NARCIS (Netherlands)

    Kooi, BJ

    An analytical theory has been developed, based on Monte Carlo (MC) simulations, describing the kinetics of isothermal phase transformations proceeding by nucleation and subsequent growth for (d-1)-dimensional growth in d-dimensional space (with d = 2 or 3). This type of growth is of interest since it is
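    For context, the classical JMAK expression that such extensions start from is X(t) = 1 - exp[-(kt)^n], with the Avrami exponent n reflecting growth dimensionality and nucleation mode. The sketch below evaluates this baseline form with a placeholder rate constant and exponent; it does not reproduce the anisotropic extension developed in the study.

```python
import numpy as np

def jmak_fraction(t, k=0.05, n=3.0):
    """Classical JMAK transformed fraction X(t) = 1 - exp(-(k*t)**n).

    The Avrami exponent n reflects growth dimensionality and nucleation mode
    (e.g. n = d for site saturation with d-dimensional growth); k and n here
    are illustrative placeholders, not values from the cited study.
    """
    return 1.0 - np.exp(-(k * np.asarray(t, dtype=float)) ** n)

times = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
print(np.round(jmak_fraction(times), 4))   # transformed fraction at each time
```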

  15. On theory and simulation of heaving-buoy wave-energy converters with control

    Energy Technology Data Exchange (ETDEWEB)

    Eidsmoen, H.

    1995-12-01

    Heaving-buoy wave-energy converters with control were studied. The buoy is small compared to the wavelength. The resonance bandwidth is then narrow and the energy conversion in irregular waves can be significantly increased if the oscillatory motion of the device can be actively controlled, and the power output from the converter will vary less with time than the wave power transport. A system of two concentric cylinders of the same radius, oscillating in heave only, is analysed in the frequency-domain. The mathematical model can be used to study a tight-moored buoy, as well as a buoy reacting against a submerged body. The knowledge of the frequency-domain hydrodynamic parameters is used to develop frequency-domain and time-domain mathematical models of heaving-buoy wave energy converters. The main emphasis is on using control to maximize the energy production and to protect the machinery of the wave-energy converter in very large waves. Three different methods are used to study control. (1) In the frequency-domain explicit analytical expressions for the optimum oscillation are found, assuming a continuous sinusoidal control force, and from these expressions the optimum time-domain oscillation can be determined. (2) The second method uses optimal control theory, using a control variable as the instrument for the optimisation. Unlike the first method, this method can include non-linearities. But this method gives numerical time series for the state variables and the control variable rather than analytical expressions for the optimum oscillation. (3) The third method is time-domain simulation. Non-linear forces are included, but the method only gives the response of the system to a given incident wave. How the different methods can be used to develop real-time control is discussed. Simulations are performed for a tight-moored heaving-buoy converter with a high-pressure hydraulic system for energy production and motion control. 147 refs., 38 figs., 22 tabs.

  16. Theory, simulation and experimental results of the acoustic detection of magnetization changes in superparamagnetic iron oxide

    Directory of Open Access Journals (Sweden)

    Borgert Jörn

    2011-06-01

    Full Text Available Abstract. Background: Magnetic Particle Imaging is a novel method for medical imaging. It can be used to measure the local concentration of a tracer material based on iron oxide nanoparticles. While the resulting images show the distribution of the tracer material in phantoms or anatomic structures of subjects under examination, no information about the tissue is being acquired. To expand Magnetic Particle Imaging into the detection of soft tissue properties, a new method is proposed, which detects acoustic emissions caused by magnetization changes in superparamagnetic iron oxide. Methods: Starting from an introduction to the theory of acoustically detected Magnetic Particle Imaging, a comparison to magnetically detected Magnetic Particle Imaging is presented. Furthermore, an experimental setup for the detection of acoustic emissions is described, which consists of the necessary field generating components, i.e. coils and permanent magnets, as well as a calibrated microphone to perform the detection. Results: The estimated detection limit of acoustic Magnetic Particle Imaging is comparable to the detection limit of magnetic resonance imaging for iron oxide nanoparticles, whereas both are inferior to the theoretical detection limit for magnetically detected Magnetic Particle Imaging. Sufficient data was acquired to perform a comparison to the simulated data. The experimental results are in agreement with the simulations. The remaining differences can be well explained. Conclusions: It was possible to demonstrate the detection of acoustic emissions of magnetic tracer materials in Magnetic Particle Imaging. The processing of acoustic emission in addition to the tracer distribution acquired by magnetic detection might allow for the extraction of mechanical tissue parameters. Such parameters, like for example the velocity of sound and the attenuation caused by the tissue, might also be used to support and improve ultrasound imaging. However, the method

  17. Diffusion of Supercritical Fluids through Single-Layer Nanoporous Solids: Theory and Molecular Simulations.

    Science.gov (United States)

    Oulebsir, Fouad; Vermorel, Romain; Galliero, Guillaume

    2018-01-16

    With the advent of graphene material, membranes based on single-layer nanoporous solids appear as promising devices for fluid separation, be it liquid or gaseous mixtures. The design of such architectured porous materials would greatly benefit from accurate models that can predict their transport and separation properties. More specifically, there is no universal understanding of how parameters such as temperature, fluid loading conditions, or the ratio of the pore size to the fluid molecular diameter influence the permeation process. In this study, we address the problem of pure supercritical fluids diffusing through simplified models of single-layer porous materials. Basically, we investigate a toy model that consists of a single-layer lattice of Lennard-Jones interaction sites with a slit gap of controllable width. We performed extensive equilibrium and biased molecular dynamics simulations to document the physical mechanisms involved at the molecular scale. We propose a general constitutive equation for the diffusional transport coefficient derived from classical statistical mechanics and kinetic theory, which can be further simplified in the ideal gas limit. This transport coefficient relates the molecular flux to the fluid density jump across the single-layer membrane. It is found to be proportional to the accessible surface porosity of the single-layer porous solid and to a thermodynamic factor accounting for the inhomogeneity of the fluid close to the pore entrance. Both quantities directly depend on the potential of mean force that results from molecular interactions between solid and fluid atoms. Comparisons with the simulations data show that the kinetic model captures how narrowing the pore size below the fluid molecular diameter lowers dramatically the value of the transport coefficient. Furthermore, we demonstrate that our general constitutive equation allows for a consistent interpretation of the intricate effects of temperature and fluid loading
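    Schematically, the constitutive relation described above links the molecular flux to the density jump across the membrane through a coefficient proportional to the accessible surface porosity and a thermodynamic factor. The sketch below writes this in the textbook ideal-gas (effusion) limit only; it is a hedged illustration, not the paper's full equation, and the porosity and thermodynamic factor are placeholders that would come from the potential of mean force in the actual model.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant [J/K]

def effusion_flux(delta_rho_m3, temperature_k, mass_kg, accessible_porosity, thermo_factor=1.0):
    """Schematic ideal-gas estimate of the diffusive flux across a single-layer membrane.

    J = a_p * Gamma * sqrt(kB*T / (2*pi*m)) * (rho_feed - rho_permeate)   [molecules/(m^2 s)]

    This is the textbook effusion form, not the paper's full constitutive equation;
    the accessible porosity a_p and thermodynamic factor Gamma are placeholders.
    """
    kinetic_velocity = np.sqrt(KB * temperature_k / (2.0 * np.pi * mass_kg))
    return accessible_porosity * thermo_factor * kinetic_velocity * delta_rho_m3

# Example: methane-like molecule (m ~ 2.66e-26 kg), 300 K, 5% accessible porosity
flux = effusion_flux(delta_rho_m3=1e26, temperature_k=300.0, mass_kg=2.66e-26, accessible_porosity=0.05)
print(f"estimated flux ~ {flux:.3e} molecules/(m^2 s)")
```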

  18. Tried and true: self-regulation theory as a guiding framework for teaching parents diabetes education using human patient simulation.

    Science.gov (United States)

    Sullivan-Bolyai, Susan; Johnson, Kimberly; Cullen, Karen; Hamm, Terry; Bisordi, Jean; Blaney, Kathleen; Maguire, Laura; Melkus, Gail

    2014-01-01

    Parents become emotionally upset when learning that their child has type 1 diabetes, yet they are expected to quickly learn functional diabetes management. The purpose of this article is to describe the application of self-regulation theory to guide a family-focused education intervention using human patient simulation to enhance the initial education of parents in diabetes management. A brief description is provided of the intervention framed by self-regulation theory. On the basis of the literature, we describe the educational vignettes used based on self-regulation in the randomized controlled trial entitled "Parent Education Through Simulation-Diabetes." Examples of theory-in-practice will be illustrated by parental learning responses to this alternative educational innovation.

  19. CIRCE2/DEKGEN2: A software package for facilitated optical analysis of 3-D distributed solar energy concentrators. Theory and user manual

    Energy Technology Data Exchange (ETDEWEB)

    Romero, V.J.

    1994-03-01

    CIRCE2 is a computer code for modeling the optical performance of three-dimensional dish-type solar energy concentrators. Statistical methods are used to evaluate the directional distribution of reflected rays from any given point on the concentrator. Given concentrator and receiver geometries, sunshape (angular distribution of incident rays from the sun), and concentrator imperfections such as surface roughness and random deviation in slope, the code predicts the flux distribution and total power incident upon the target. Great freedom exists in the variety of concentrator and receiver configurations that can be modeled. Additionally, provisions for shading and receiver aperturing are included. DEKGEN2 is a preprocessor designed to facilitate input of geometry, error distributions, and sun models. This manual describes the optical model, user inputs, code outputs, and operation of the software package. A user tutorial is included in which several collectors are built and analyzed in step-by-step examples.

  20. Effects of periodic boundary conditions on equilibrium properties of computer simulated fluids. I. Theory

    International Nuclear Information System (INIS)

    Pratt, L.R.; Haan, S.W.

    1981-01-01

    An exact formal theory for the effects of periodic boundary conditions on the equilibrium properties of computer simulated classical many-body systems is developed. This is done by observing that use of the usual periodic conditions is equivalent to the study of a certain supermolecular liquid, in which a supermolecule is a polyatomic molecule of infinite extent composed of one of the physical particles in the system plus all its periodic images. For this supermolecular system in the grand ensemble, all the cluster expansion techniques used in the study of real molecular liquids are directly applicable. As expected, particle correlations are translationally uniform, but explicitly anisotropic. When the intermolecular potential energy functions are of short enough range, or cut off, so that the minimum image method is used, evaluation of the cluster integrals is dramatically simplified. In this circumstance, a large and important class of cluster expansion contributions can be summed exactly, and expressed in terms of the correlation functions which result when the system size is allowed to increase without bound. This result yields a simple and useful approximation to the corrections to the particle correlations due to the use of periodic boundary conditions with finite systems. Numerical applications of these results are reported in the following paper.

  1. Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.

    Science.gov (United States)

    Xin, Cao; Chongshi, Gu

    2016-01-01

    Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. A stability failure risk ratio described jointly by probability and possibility is deficient in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to stability failure risk analysis of gravity dams. The stability of a gravity dam is viewed as a hybrid event, considering both the fuzziness and the randomness of the failure criterion, design parameters and measured data. The credibility distribution function is introduced as a novel way to represent the uncertainty of the factors influencing gravity dam stability. Combined with Monte Carlo simulation, a corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to risk calculation for both the dam foundation and double sliding surfaces is provided. The results show that the present method is feasible for analysing the stability failure risk of gravity dams. The risk assessment obtained reflects the influence of both sorts of uncertainty and is suitable as an index value.
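    As a purely probabilistic baseline for the kind of calculation described above, the sketch below estimates a sliding-stability failure probability for a gravity dam section by Monte Carlo sampling of friction, cohesion, weight, uplift and water thrust; all distributions and magnitudes are invented placeholders. In the credibility-theory version, the fuzzy parameters would carry credibility distributions instead of the purely random ones used here.

```python
import numpy as np

def sliding_failure_probability(n=200_000, seed=0):
    """Toy Monte Carlo estimate of sliding-failure probability for a gravity dam section.

    Failure criterion (per unit dam width): resisting force < driving force, i.e.
        f * (W - U) + c * A  <  H
    All distributions and magnitudes are illustrative placeholders; in the
    credibility-theory approach, f and c would instead be described by fuzzy
    (credibility) distributions rather than purely probabilistic ones.
    """
    rng = np.random.default_rng(seed)
    f = rng.normal(0.75, 0.08, n)              # friction coefficient on the sliding plane
    c = rng.lognormal(np.log(0.5), 0.3, n)     # cohesion [MPa]
    W = rng.normal(120.0, 6.0, n)              # self-weight [MN/m]
    U = rng.normal(30.0, 4.5, n)               # uplift force [MN/m]
    H = rng.normal(45.0, 5.0, n)               # horizontal water thrust [MN/m]
    A = 60.0                                   # contact area per unit width [m^2/m]
    resisting = f * (W - U) + c * A
    return float(np.mean(resisting < H))

print("estimated failure probability:", sliding_failure_probability())
```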

  2. Acetate and phosphate anion adsorption linear sweep voltammograms simulated using density functional theory

    KAUST Repository

    Savizi, Iman Shahidi Pour

    2011-04-01

    Specific adsorption of anions to electrode surfaces may alter the rates of electrocatalytic reactions. Density functional theory (DFT) methods are used to predict the adsorption free energy of acetate and phosphate anions as a function of Pt(1 1 1) electrode potential. Four models of the electrode potential are used including a simple vacuum slab model, an applied electric field model with and without the inclusion of a solvating water bi-layer, and the double reference model. The linear sweep voltammogram (LSV) due to anion adsorption is simulated using the DFT results. The inclusion of solvation at the electrochemical interface is necessary for accurately predicting the adsorption peak position. The Langmuir model is sufficient for predicting the adsorption peak shape, indicating coverage effects are minor in altering the LSV for acetate and phosphate adsorption. Anion adsorption peak positions are determined for solution phase anion concentrations present in microbial fuel cells and microbial electrolysis cells and discussion is provided as to the impact of anion adsorption on oxygen reduction and hydrogen evolution reaction rates in these devices. © 2011 Elsevier Ltd. All rights reserved.

  3. 360°-View of Quantum Theory and Ab Initio Simulation at Extreme Conditions: 2014 Sanibel Symposium

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Hai-Ping [Univ. of Florida, Gainesville, FL (United States)

    2016-09-02

    The Sanibel Symposium 2014 was held February 16-21, 2014, at the King and Prince, St. Simons Island, GA. It was successful in bringing condensed-matter physicists and quantum chemists together productively to drive the emergence of those specialties. The Symposium had a significant role in preparing a whole generation of quantum theorists. The 54th Sanibel meeting looked to the future in two ways. We had 360°-View sessions to honor the exceptional contributions of Rodney Bartlett (70), Bill Butler (70), Yngve Öhrn (80), Fritz Schaefer (70), and Malcolm Stocks (70). The work of these five has greatly impacted several generations of quantum chemists and condensed matter physicists. The “360°” is the sum of their ages. More significantly, it symbolizes a panoramic view of critical developments and accomplishments in theoretical and computational chemistry and physics oriented toward the future. Thus, two of the eight 360°-View sessions focused specifically on younger scientists. The 360°-View program was the major component of the 2014 Sanibel meeting. Another four sessions included a sub-symposium on ab initio Simulations at Extreme Conditions, with focus on getting past the barriers of present-day Born-Oppenheimer molecular dynamics by advances in finite-temperature density functional theory, orbital-free DFT, and new all-numerical approaches.

  4. Numerical Simulations of Marine Hydrokinetic (MHK) Turbines Using the Blade Element Momentum Theory

    Science.gov (United States)

    Javaherchi, Teymour; Thulin, Oskar; Aliseda, Alberto

    2011-11-01

    Energy extraction from the available kinetic energy in tidal currents via Marine Hydrokinetic (MHK) turbines has recently attracted scientists' attention as a highly predictable source of renewable energy. The strongest tidal resources have a concentrated nature that requires close turbine spacing in a farm of MHK turbines. This tight spacing, however, will lead to interaction of the downstream turbines with the turbulent wake generated by upstream turbines. This interaction can significantly reduce the power generated and possibly result in structural failure before the expected service life is completed. Development of a numerical methodology to study the turbine-wake interaction can provide a tool for optimization of turbine spacing to maximize the power generated in turbine arrays. In this work, we will present numerical simulations of the flow field in a farm of horizontal axis MHK turbines using the Blade Element Momentum Theory (BEMT). We compare the value of integral variables (i.e., efficiency, power, torque, etc.) calculated for each turbine in the farm for different arrangements with varying streamwise and lateral offsets between turbines. We find that BEMT provides accurate estimates of turbine efficiency under uniform flow conditions, but overpredicts the efficiency of downstream turbines when they are strongly affected by the wakes. Supported by DOE through the National Northwest Marine Renewable Energy Center.
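    To make the method concrete, the sketch below implements a minimal fixed-point BEMT iteration for a single blade element of a horizontal-axis turbine, using an idealised 2π lift slope, a constant drag coefficient, and no tip-loss or high-induction corrections; the geometry and operating point are placeholders, not the turbines simulated in the study.

```python
import numpy as np

def bemt_element(r, R, chord, twist_deg, n_blades, u_inf, tsr, n_iter=200):
    """Minimal fixed-point BEMT iteration for one blade element of a horizontal-axis turbine.

    Assumes an idealised lift polar Cl = 2*pi*alpha and constant Cd; no tip-loss or
    high-induction (Glauert) corrections. Geometry and polars are placeholders.
    Returns axial induction a, tangential induction a', and local inflow angle phi [deg].
    """
    omega = tsr * u_inf / R                          # rotor speed from tip-speed ratio
    sigma = n_blades * chord / (2.0 * np.pi * r)     # local solidity
    a, ap = 0.3, 0.0
    for _ in range(n_iter):
        phi = np.arctan2((1.0 - a) * u_inf, (1.0 + ap) * omega * r)
        alpha = phi - np.radians(twist_deg)
        cl, cd = 2.0 * np.pi * alpha, 0.01
        cn = cl * np.cos(phi) + cd * np.sin(phi)     # normal (thrust-wise) coefficient
        ct = cl * np.sin(phi) - cd * np.cos(phi)     # tangential (torque-wise) coefficient
        a_new = 1.0 / (4.0 * np.sin(phi) ** 2 / (sigma * cn) + 1.0)
        ap_new = 1.0 / (4.0 * np.sin(phi) * np.cos(phi) / (sigma * ct) - 1.0)
        a, ap = 0.5 * (a + a_new), 0.5 * (ap + ap_new)   # relaxed fixed-point update
    return a, ap, np.degrees(phi)

# Example element at 70% span of a 10 m rotor in a 2 m/s current, tip-speed ratio 6
a, ap, phi = bemt_element(r=7.0, R=10.0, chord=0.5, twist_deg=5.0,
                          n_blades=3, u_inf=2.0, tsr=6.0)
print(f"a = {a:.3f}, a' = {ap:.4f}, phi = {phi:.1f} deg")
```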

  5. Combined Molecular Dynamics Simulation-Molecular-Thermodynamic Theory Framework for Predicting Surface Tensions.

    Science.gov (United States)

    Sresht, Vishnu; Lewandowski, Eric P; Blankschtein, Daniel; Jusufi, Arben

    2017-08-22

    A molecular modeling approach is presented with a focus on quantitative predictions of the surface tension of aqueous surfactant solutions. The approach combines classical Molecular Dynamics (MD) simulations with a molecular-thermodynamic theory (MTT) [ Y. J. Nikas, S. Puvvada, D. Blankschtein, Langmuir 1992 , 8 , 2680 ]. The MD component is used to calculate thermodynamic and molecular parameters that are needed in the MTT model to determine the surface tension isotherm. The MD/MTT approach provides the important link between the surfactant bulk concentration, the experimental control parameter, and the surfactant surface concentration, the MD control parameter. We demonstrate the capability of the MD/MTT modeling approach on nonionic alkyl polyethylene glycol surfactants at the air-water interface and observe reasonable agreement of the predicted surface tensions and the experimental surface tension data over a wide range of surfactant concentrations below the critical micelle concentration. Our modeling approach can be extended to ionic surfactants and their mixtures with both ionic and nonionic surfactants at liquid-liquid interfaces.

  6. Simulation of X-ray absorption spectra with orthogonality constrained density functional theory.

    Science.gov (United States)

    Derricotte, Wallace D; Evangelista, Francesco A

    2015-06-14

    Orthogonality constrained density functional theory (OCDFT) [F. A. Evangelista, P. Shushkov and J. C. Tully, J. Phys. Chem. A, 2013, 117, 7378] is a variational time-independent approach for the computation of electronic excited states. In this work we extend OCDFT to compute core-excited states and generalize the original formalism to determine multiple excited states. Benchmark computations on a set of 13 small molecules and 40 excited states show that unshifted OCDFT/B3LYP excitation energies have a mean absolute error of 1.0 eV. Contrary to time-dependent DFT, OCDFT excitation energies for first- and second-row elements are computed with near-uniform accuracy. OCDFT core excitation energies are insensitive to the choice of the functional and the amount of Hartree-Fock exchange. We show that OCDFT is a powerful tool for the assignment of X-ray absorption spectra of large molecules by simulating the gas-phase near-edge spectrum of adenine and thymine.

  7. Real-time simulation of electric motors. Applying software and control theory; Echtzeit-Simulation von E-Maschinen. Regelungstechnik im Hybridantrieb

    Energy Technology Data Exchange (ETDEWEB)

    Werner, Joerg; Tauber, Hermann [Silver-Atena, Muenchen (Germany)

    2010-02-15

    Developments in the electric drive-train have the highest priority, but even so, proven development methods are not applied consistently. For example, the simulation of the hybrid drive-train excludes the e-motor and its system environment, such as the corresponding controllers. “Too complex and too much effort” is the justification. Silver-Atena provides proof to the contrary and, with a real-time simulation of an electric machine using the hardware-in-the-loop technique, achieves more precise system verifications. Despite the undoubtedly high complexity, software and control theory engineers can thus work more efficiently. (orig.)

  8. Theory and simulation of ion acceleration with circularly polarized laser pulses; Theorie et simulation de l'acceleration des ions par impulsions laser a polarisation circulaire

    Energy Technology Data Exchange (ETDEWEB)

    Macchi, A. [CNR/INFM/polyLAB, Pisa (Italy); Macchi, A.; Tuveri, S.; Veghini, S. [Pisa Univ., Dept. of Physics E. Fermi (Italy); Liseikina, T.V. [Max Planck Institute for Nuclear Physics, Heidelberg (Germany)

    2009-03-15

    Ion acceleration driven by the radiation pressure of circularly polarized pulses is investigated via analytical modeling and particle-in-cell simulations. Both thick and thin targets, i.e. the 'hole boring' and 'light sail' regimes are considered. Parametric studies in one spatial dimension are used to determine the optimal thickness of thin targets and to address the effects of preformed plasma profiles and laser pulse ellipticity in thick targets. Three-dimensional (3D) simulations show that 'flat-top' radial profiles of the intensity are required to prevent early laser pulse breakthrough in thin targets. The 3D simulations are also used to address the issue of the conservation of the angular momentum of the laser pulse and its absorption in the plasma. (authors)

  9. Load theory behind the wheel: an experimental application of a cognitive model to simulated driving

    OpenAIRE

    Murphy, Gillian

    2017-01-01

    Load Theory is a prominent model of selective attention first proposed over twenty years ago. Load Theory is supported by a great many experimental and neuroimaging studies. There is, however, little evidence that Load Theory can be applied to real-world attention, though it has great practical potential. Driving, as an everyday task where failures of attention can have profound consequences, stands to benefit from the understanding of selective attention that Load Theory provides. The aim of ...

  10. Initial conditions for cosmological N-body simulations of the scalar sector of theories of Newtonian, Relativistic and Modified Gravity

    International Nuclear Information System (INIS)

    Valkenburg, Wessel; Hu, Bin

    2015-01-01

    We present a description for setting initial particle displacements and field values for simulations of arbitrary metric theories of gravity, for perfect and imperfect fluids with arbitrary characteristics. We extend the Zel'dovich Approximation to nontrivial theories of gravity, and show how scale dependence implies curved particle paths, even in the entirely linear regime of perturbations. For a viable choice of Effective Field Theory of Modified Gravity, initial conditions set at high redshifts are affected at the level of up to 5% at Mpc scales, which exemplifies the importance of going beyond Λ-Cold Dark Matter initial conditions for modifications of gravity outside of the quasi-static approximation. In addition, we show initial conditions for a simulation where a scalar modification of gravity is modelled in a Lagrangian particle-like description. Our description paves the way for simulations and mock galaxy catalogs under theories of gravity beyond the standard model, crucial for progress towards precision tests of gravity and cosmology.

  11. Plane shear flows of frictionless spheres: Kinetic theory and 3D soft-sphere discrete element method simulations

    Science.gov (United States)

    Vescovi, D.; Berzi, D.; Richard, P.; Brodu, N.

    2014-05-01

    We use existing 3D Discrete Element simulations of simple shear flows of spheres to evaluate the radial distribution function at contact that enables kinetic theory to correctly predict the pressure and the shear stress, for different values of the collisional coefficient of restitution. Then, we perform 3D Discrete Element simulations of plane flows of frictionless, inelastic spheres, sheared between walls made bumpy by gluing particles in a regular array, at fixed average volume fraction and distance between the walls. The results of the numerical simulations are used to derive boundary conditions appropriate in the cases of large and small bumpiness. Those boundary conditions are then employed to numerically integrate the differential equations of Extended Kinetic Theory, where the breaking of the molecular chaos assumption at volume fraction larger than 0.49 is taken into account in the expression of the dissipation rate. We show that the Extended Kinetic Theory is in very good agreement with the numerical simulations, even for coefficients of restitution as low as 0.50. When the bumpiness is increased, we observe that some of the flowing particles are stuck in the gaps between the wall spheres. As a consequence, the walls are more dissipative than expected, and the flows resemble simple shear flows, i.e., flows of rather constant volume fraction and granular temperature.

  12. Plane shear flows of frictionless spheres: Kinetic theory and 3D soft-sphere discrete element method simulations

    International Nuclear Information System (INIS)

    Vescovi, D.; Berzi, D.; Richard, P.; Brodu, N.

    2014-01-01

    We use existing 3D Discrete Element simulations of simple shear flows of spheres to evaluate the radial distribution function at contact that enables kinetic theory to correctly predict the pressure and the shear stress, for different values of the collisional coefficient of restitution. Then, we perform 3D Discrete Element simulations of plane flows of frictionless, inelastic spheres, sheared between walls made bumpy by gluing particles in a regular array, at fixed average volume fraction and distance between the walls. The results of the numerical simulations are used to derive boundary conditions appropriate in the cases of large and small bumpiness. Those boundary conditions are then employed to numerically integrate the differential equations of Extended Kinetic Theory, where the breaking of the molecular chaos assumption at volume fraction larger than 0.49 is taken into account in the expression of the dissipation rate. We show that the Extended Kinetic Theory is in very good agreement with the numerical simulations, even for coefficients of restitution as low as 0.50. When the bumpiness is increased, we observe that some of the flowing particles are stuck in the gaps between the wall spheres. As a consequence, the walls are more dissipative than expected, and the flows resemble simple shear flows, i.e., flows of rather constant volume fraction and granular temperature.

  13. PROTEUS-SN User Manual

    Energy Technology Data Exchange (ETDEWEB)

    Shemon, Emily R. [Argonne National Lab. (ANL), Argonne, IL (United States); Smith, Micheal A. [Argonne National Lab. (ANL), Argonne, IL (United States); Lee, Changho [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-02-16

    PROTEUS-SN is part of the SHARP multi-physics suite for coupled multi-physics analysis of nuclear reactors. This user manual describes how to set up a neutron transport simulation with the PROTEUS-SN code. A companion methodology manual describes the theory and algorithms within PROTEUS-SN.

  14. Program package for multicanonical simulations of U(1) lattice gauge theory-Second version

    Science.gov (United States)

    Bazavov, Alexei; Berg, Bernd A.

    2013-03-01

    A new version STMCMUCA_V1_1 of our program package is available. It eliminates compatibility problems of our Fortran 77 code, originally developed for the g77 compiler, with Fortran 90 and 95 compilers. New version program summaryProgram title: STMC_U1MUCA_v1_1 Catalogue identifier: AEET_v1_1 Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html Programming language: Fortran 77 compatible with Fortran 90 and 95 Computers: Any capable of compiling and executing Fortran code Operating systems: Any capable of compiling and executing Fortran code RAM: 10 MB and up depending on lattice size used No. of lines in distributed program, including test data, etc.: 15059 No. of bytes in distributed program, including test data, etc.: 215733 Keywords: Markov chain Monte Carlo, multicanonical, Wang-Landau recursion, Fortran, lattice gauge theory, U(1) gauge group, phase transitions of continuous systems Classification: 11.5 Catalogue identifier of previous version: AEET_v1_0 Journal Reference of previous version: Computer Physics Communications 180 (2009) 2339-2347 Does the new version supersede the previous version?: Yes Nature of problem: Efficient Markov chain Monte Carlo simulation of U(1) lattice gauge theory (or other continuous systems) close to its phase transition. Measurements and analysis of the action per plaquette, the specific heat, Polyakov loops and their structure factors. Solution method: Multicanonical simulations with an initial Wang-Landau recursion to determine suitable weight factors. Reweighting to physical values using logarithmic coding and calculating jackknife error bars. Reasons for the new version: The previous version was developed for the g77 compiler Fortran 77 version. Compiler errors were encountered with Fortran 90 and Fortran 95 compilers (specified below). Summary of revisions: epsilon=one/10**10 is replaced by epsilon/10.0D10 in the parameter statements of the subroutines u1_bmha.f, u1_mucabmha.f, u1wl

  15. Large-scale atomistic simulations of nanostructured materials based on divide-and-conquer density functional theory

    Directory of Open Access Journals (Sweden)

    Vashishta P.

    2011-05-01

    A linear-scaling algorithm based on a divide-and-conquer (DC) scheme is designed to perform large-scale molecular-dynamics simulations, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). This scheme is applied to the thermite reaction at an Al/Fe2O3 interface. It is found that mass diffusion and reaction rate at the interface are enhanced by a concerted metal-oxygen flip mechanism. Preliminary simulations are carried out for an aluminum particle in water based on the conventional DFT, as a target system for large-scale DC-DFT simulations. A pair of Lewis acid and base sites on the aluminum surface preferentially catalyzes hydrogen production in a low activation-barrier mechanism found in the simulations.

  16. Plasma theory and simulation: Quarterly progress report Nos. 1 and 2, January 1, 1986-June 30, 1986

    International Nuclear Information System (INIS)

    Birdsall, C.K.

    1986-01-01

    This quarterly report deals with General Plasma Theory and Simulation. Computer simulation of bounded plasma systems, with external circuits, is discussed in considerable detail. Artificial cooling of trapped electrons in bounded simulations was observed and is now attributed to noiseless injection; the cooling does not occur if random injection is used. This report also deals with Plasma-Wall Physics and Simulation. The collector and source sheaths at the boundaries of warm plasma are treated in detail, including ion reflection and secondary electron emission at the collector. The Kelvin-Helmholtz instability is observed in a self-consistent magnetized sheath, producing long-lived vortices which increase the particle transport to the wall dramatically.

  17. Towards an understanding of the attributes of simulation that enable learning in undergraduate nurse education: A grounded theory study.

    Science.gov (United States)

    Bland, Andrew J; Tobbell, Jane

    2016-09-01

    Simulation has become an established feature of nurse education, yet little is understood about the mechanisms that lead to learning. To explore the attributes of simulation-based education that enable student learning in undergraduate nurse education. Final year students drawn from one UK University (n=46) participated in a grounded theory study. First, nonparticipant observation and video recording of student activity were undertaken. Following initial analysis, recordings and observations were deconstructed during focus group interviews that enabled both the researcher and participants to unpack meaning. Lastly, emergent findings were verified with final year students drawn from a second UK University (n=6). A staged approach to learning emerged from engagement in simulation. This began with initial hesitation as students moved through nonlinear stages to making connections and thinking like a nurse. Core findings suggest that simulation enables curiosity and intellect (main concern) through doing (core category) and interaction with others identified as social collaboration (category). This study offers a theoretical basis for understanding simulation-based education and integration of strategies that maximise the potential for learning. Additionally, it offers direction for further research, particularly with regard to how the application of theory to practice is accelerated through learning by doing and working collaboratively. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. GRSAC Users Manual

    International Nuclear Information System (INIS)

    Ball, S.J.; Nypaver, D.J.

    1999-01-01

    An interactive workstation-based simulation code (GRSAC) for studying postulated severe accidents in gas-cooled reactors has been developed to accommodate user-generated input with “smart front-end” checking. Code features include on- and off-line plotting, on-line help and documentation, and an automated sensitivity study option. The code and its predecessors have been validated using comparisons with a variety of experimental data and similar codes. GRSAC model features include a three-dimensional representation of the core thermal hydraulics, and optional ATWS (anticipated transients without scram) capabilities. The user manual includes a detailed description of the code features, and includes four case studies which guide the user through four different examples of the major uses of GRSAC: an accident case; an initial conditions setup and run; a sensitivity study; and the setup of a new reactor model.

  19. GRSAC Users Manual

    Energy Technology Data Exchange (ETDEWEB)

    Ball, S.J.; Nypaver, D.J.

    1999-02-01

    An interactive workstation-based simulation code (GRSAC) for studying postulated severe accidents in gas-cooled reactors has been developed to accommodate user-generated input with “smart front-end” checking. Code features include on- and off-line plotting, on-line help and documentation, and an automated sensitivity study option. The code and its predecessors have been validated using comparisons with a variety of experimental data and similar codes. GRSAC model features include a three-dimensional representation of the core thermal hydraulics, and optional ATWS (anticipated transients without scram) capabilities. The user manual includes a detailed description of the code features, and includes four case studies which guide the user through four different examples of the major uses of GRSAC: an accident case; an initial conditions setup and run; a sensitivity study; and the setup of a new reactor model.

  20. Lattice simulations of QCD-like theories at finite baryon density

    International Nuclear Information System (INIS)

    Scior, Philipp Friedrich

    2016-01-01

    The exploration of the phase diagram of quantum chromodynamics (QCD) is of great importance to describe e.g. the properties of neutron stars or heavy-ion collisions. Due to the sign problem of lattice QCD at finite chemical potential we need effective theories to study QCD at finite density. Here, we use a three-dimensional Polyakov-loop theory to study the phase diagrams of QCD-like theories. In particular, we investigate the heavy quark limit of the QCD-like theories where the effective theory can be derived from the full theory by a combined strong coupling and hopping expansion. This expansion can be systematically improved order by order. Since there is no sign problem for the QCD-like theories we consider, we can compare our results to data from lattice calculations of the full theories to make qualitative and quantitative statements of the effective theory's validity. We start by deriving the effective theory up to next-to-next-to leading-order, in particular for two-color and G_2-QCD, where we replace the three colors in QCD with only two colors or, respectively, replace the gauge group SU(3) of QCD with G_2. We will then apply the effective theory at finite temperature mainly to test the theory and the implementation but also to make some predictions for the deconfinement phase transition in G_2 Yang-Mills theory. Finally, we turn our attention to the cold and dense regime of the phase diagram where we observe a sharp increase of the baryon density with the quark chemical potential μ, when μ reaches half the diquark mass. At vanishing temperature this is expected to happen in a quantum phase transition with Bose-Einstein condensation of diquarks. In contrast to the liquid-gas transition in QCD, the phase transition to the Bose-Einstein condensate is continuous. We find evidence that the effective theories for heavy quarks are able to describe the qualitative difference between first and second order phase transitions. For even higher μ we find the rise of the

  1. Lattice simulations of QCD-like theories at finite baryon density

    Energy Technology Data Exchange (ETDEWEB)

    Scior, Philipp Friedrich

    2016-07-13

    The exploration of the phase diagram of quantum chromodynamics (QCD) is of great importance to describe e.g. the properties of neutron stars or heavy-ion collisions. Due to the sign problem of lattice QCD at finite chemical potential we need effective theories to study QCD at finite density. Here, we use a three-dimensional Polyakov-loop theory to study the phase diagrams of QCD-like theories. In particular, we investigate the heavy quark limit of the QCD-like theories where the effective theory can be derived from the full theory by a combined strong coupling and hopping expansion. This expansion can be systematically improved order by order. Since there is no sign problem for the QCD-like theories we consider, we can compare our results to data from lattice calculations of the full theories to make qualitative and quantitative statements of the effective theory's validity. We start by deriving the effective theory up to next-to-next-to leading-order, in particular for two-color and G_2-QCD, where we replace the three colors in QCD with only two colors or, respectively, replace the gauge group SU(3) of QCD with G_2. We will then apply the effective theory at finite temperature mainly to test the theory and the implementation but also to make some predictions for the deconfinement phase transition in G_2 Yang-Mills theory. Finally, we turn our attention to the cold and dense regime of the phase diagram where we observe a sharp increase of the baryon density with the quark chemical potential μ, when μ reaches half the diquark mass. At vanishing temperature this is expected to happen in a quantum phase transition with Bose-Einstein condensation of diquarks. In contrast to the liquid-gas transition in QCD, the phase transition to the Bose-Einstein condensate is continuous. We find evidence that the effective theories for heavy quarks are able to describe the qualitative difference between first and second order phase transitions. For even higher μ we

  2. The Mind as Black Box: A Simulation of Theory Building in Psychology.

    Science.gov (United States)

    Hildebrandt, Carolyn; Oliver, Jennifer

    2000-01-01

    Discusses an activity that uses the metaphor "the mind is a black box," in which students work in groups to discover what is inside a sealed, black, plastic box. States that the activity enables students to understand the need for theories in psychology and to comprehend how psychologists build, test, and refine those theories. (CMK)

  3. Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study

    Directory of Open Access Journals (Sweden)

    Elie Bienenstock

    2008-06-01

    Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: the plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced: two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in
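    The plug-in estimator mentioned in this record is simple enough to sketch directly: estimate the empirical distribution of overlapping k-blocks and divide the block entropy by the word length k. The word lengths and the i.i.d. test source below are illustrative choices, not the data of the study.

        from collections import Counter
        import math
        import random

        def plugin_entropy_rate(bits, k):
            """Plug-in (maximum-likelihood) entropy-rate estimate, in bits per symbol,
            from the empirical distribution of overlapping k-blocks."""
            blocks = [tuple(bits[i:i + k]) for i in range(len(bits) - k + 1)]
            counts = Counter(blocks)
            n = len(blocks)
            h_block = -sum((c / n) * math.log2(c / n) for c in counts.values())
            return h_block / k

        # Illustrative data: a biased i.i.d. binary source with p(1) = 0.3,
        # whose true entropy rate is about 0.881 bits per symbol.
        random.seed(0)
        bits = [1 if random.random() < 0.3 else 0 for _ in range(100_000)]
        for k in (1, 5, 10):
            print(f"k = {k:2d}: {plugin_entropy_rate(bits, k):.3f} bits/symbol")

    The downward bias at large word lengths and the inability of small word lengths to capture longer-range structure are exactly the plug-in limitations that the record's comparison highlights.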

  4. A Manual of Style.

    Science.gov (United States)

    Nebraska State Dept. of Education, Lincoln.

    This "Manual of Style" is offered as a guide to assist Nebraska State employees in producing quality written communications and in presenting a consistently professional image of government documents. The manual is not designed to be all-inclusive. Sections of the manual discuss formatting documents, memorandums, letters, mailing…

  5. Oxygen ordering in YBa2Cu3O6+x using Monte Carlo simulation and analytic theory

    DEFF Research Database (Denmark)

    Mønster, D.; Lindgård, Per-Anker; Andersen, N.H.

    2001-01-01

    We have simulated the phase diagram and structural properties of the oxygen ordering in YBa2Cu3O6+x testing simple extensions of the asymmetric next-nearest-neighbor Ising (ASYNNNI) Model. In a preliminary paper [Phys. Rev. B 60, 110 (1999)] we demonstrated that the inclusion of a single further...... on a nano scale into box-like domains and anti-domains of typical average dimension (10a,30b,2c). Theory and model simulations demonstrate that the distribution of such domains causes deviations from Lorentzian line shapes, and not the Porod effect. Analytic theory is used to estimate the effect of a range...... of values of the interaction parameters used, as well as the effect of an extension to include infinite ranged interactions. In the experiments, a large gap is found between the onset temperatures of the ortho-I and ortho-II orders at x=0.5. This cannot be fully reproduced in the simulations. The simulations...

  6. Electroosmotic flow in a rectangular channel with variable wall zeta-potential: comparison of numerical simulation with asymptotic theory.

    Science.gov (United States)

    Datta, Subhra; Ghosal, Sandip; Patankar, Neelesh A

    2006-02-01

    Electroosmotic flow in a straight micro-channel of rectangular cross-section is computed numerically for several situations where the wall zeta-potential is not constant but has a specified spatial variation. The results of the computation are compared with an earlier published asymptotic theory based on the lubrication approximation: the assumption that any axial variations take place on a long length scale compared to a characteristic channel width. The computational results are found to be in excellent agreement with the theory even when the scale of axial variations is comparable to the channel width. In the opposite limit when the wavelength of fluctuations is much shorter than the channel width, the lubrication theory fails to describe the solution either qualitatively or quantitatively. In this short wave limit the solution is well described by Ajdari's theory for electroosmotic flow between infinite parallel plates (Ajdari, A., Phys. Rev. E 1996, 53, 4996-5005.) The infinitely thin electric double layer limit is assumed in the theory as well as in the simulation.

  7. An adaptive finite element method for simulating surface tension with the gradient theory of fluid interfaces

    KAUST Repository

    Kou, Jisheng; Sun, Shuyu

    2014-01-01

    The gradient theory for the surface tension of simple fluids and mixtures is rigorously analyzed based on mathematical theory. The finite element approximation of surface tension is developed and analyzed, and moreover, an adaptive finite element method based on a physical-based estimator is proposed and it can be coupled efficiently with Newton's method as well. The numerical tests are carried out both to verify the proposed theory and to demonstrate the efficiency of the proposed method. © 2013 Elsevier B.V. All rights reserved.

  8. An adaptive finite element method for simulating surface tension with the gradient theory of fluid interfaces

    KAUST Repository

    Kou, Jisheng

    2014-01-01

    The gradient theory for the surface tension of simple fluids and mixtures is rigorously analyzed based on mathematical theory. The finite element approximation of surface tension is developed and analyzed, and moreover, an adaptive finite element method based on a physical-based estimator is proposed and it can be coupled efficiently with Newton's method as well. The numerical tests are carried out both to verify the proposed theory and to demonstrate the efficiency of the proposed method. © 2013 Elsevier B.V. All rights reserved.

  9. Toward a Unified Theory of Work: Organizational Simulations and Policy Analyses

    National Research Council Canada - National Science Library

    Vaughan, David

    2002-01-01

    .... The Department of Defense needs an integrated MPT planning and management system. We believe that a unified theory of work is needed to provide a framework and to guide and focus related research and development...

  10. GWSCREEN: A Semi-analytical Model for Assessment of the Groundwater Pathway from Surface or Buried Contamination, Theory and User's Manual, Version 2.5

    Energy Technology Data Exchange (ETDEWEB)

    Rood, Arthur South

    1998-08-01

    GWSCREEN was developed for assessment of the groundwater pathway from leaching of radioactive and non-radioactive substances from surface or buried sources. The code was designed for implementation in the Track I and Track II assessment of Comprehensive Environmental Response, Compensation, and Liability Act sites identified as low probability hazard at the Idaho National Engineering Laboratory. The code calculates 1) the limiting soil concentration such that, after leaching and transport to the aquifer, regulatory contaminant levels in groundwater are not exceeded, 2) peak aquifer concentration and associated human health impacts, and 3) aquifer concentrations and associated human health impacts as a function of time and space. The code uses a mass conservation approach to model three processes: contaminant release from a source volume, vertical contaminant transport in the unsaturated zone, and 2D or 3D contaminant transport in the saturated zone. The source model considers the sorptive properties and solubility of the contaminant. In Version 2.5, transport in the unsaturated zone is described by a plug flow or dispersive solution model. Transport in the saturated zone is calculated with a semi-analytical solution to the advection-dispersion equation in groundwater. Three source models are included: leaching from a surface or buried source, infiltration pond, or user-defined arbitrary release. Dispersion in the aquifer may be described by fixed dispersivity values or three spatially variable dispersivity functions. Version 2.5 also includes a Monte Carlo sampling routine for uncertainty/sensitivity analysis and a preprocessor to allow multiple input files and multiple contaminants to be run in a single simulation. GWSCREEN has been validated against other codes using similar algorithms and techniques. The code was originally designed for assessment and screening of the groundwater pathway when field data are limited. It was intended to simulate relatively simple
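    GWSCREEN's own algorithms are documented in the cited manual; as a generic illustration of the kind of semi-analytical advection-dispersion solution such screening codes rely on, the one-dimensional Ogata-Banks solution for a continuous source is sketched below. All parameter values are assumptions for illustration only.

        import math

        def ogata_banks(x, t, v, D, c0):
            """1D advection-dispersion, continuous source at x = 0 (Ogata & Banks, 1961).
            Generic illustration; not the GWSCREEN implementation."""
            a = (x - v * t) / (2.0 * math.sqrt(D * t))
            b = (x + v * t) / (2.0 * math.sqrt(D * t))
            # exp(v x / D) can overflow; it multiplies erfc(b), which is tiny there.
            term2 = math.exp(min(v * x / D, 700.0)) * math.erfc(b)
            return 0.5 * c0 * (math.erfc(a) + term2)

        # Assumed example: 50 m/yr seepage velocity, 5 m longitudinal dispersivity.
        v = 50.0                 # m/yr
        D = 5.0 * v              # dispersion coefficient, m^2/yr
        for t in (1.0, 5.0, 10.0):
            c = ogata_banks(x=100.0, t=t, v=v, D=D, c0=1.0)
            print(f"t = {t:4.1f} yr: C/C0 at 100 m = {c:.3e}")

    The abstract above describes exactly this kind of structure, with a source-release term and an unsaturated-zone leg coupled to a 2D or 3D saturated-zone solution, which is why the sorption, solubility, and dispersivity inputs dominate the predicted peak aquifer concentration.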

  11. Structural relaxation of polydisperse hard spheres: comparison of the mode-coupling theory to a Langevin dynamics simulation.

    Science.gov (United States)

    Weysser, F; Puertas, A M; Fuchs, M; Voigtmann, Th

    2010-07-01

    We analyze the slow glassy structural relaxation as measured through collective and tagged-particle density correlation functions obtained from Brownian dynamics simulations for a polydisperse system of quasi-hard spheres in the framework of the mode-coupling theory (MCT) of the glass transition. Asymptotic analyses show good agreement for the collective dynamics when polydispersity effects are taken into account in a multicomponent calculation, but qualitative disagreement at small q when the system is treated as effectively monodisperse. The origin of the different small-q behavior is attributed to the interplay between interdiffusion processes and structural relaxation. Numerical solutions of the MCT equations are obtained taking properly binned partial static structure factors from the simulations as input. Accounting for a shift in the critical density, the collective density correlation functions are well described by the theory at all densities investigated in the simulations, with quantitative agreement best around the maxima of the static structure factor and worst around its minima. A parameter-free comparison of the tagged-particle dynamics however reveals large quantitative errors for small wave numbers that are connected to the well-known decoupling of self-diffusion from structural relaxation and to dynamical heterogeneities. While deviations from MCT behavior are clearly seen in the tagged-particle quantities for densities close to and on the liquid side of the MCT glass transition, no such deviations are seen in the collective dynamics.

  12. Incremental retinal-defocus theory of myopia development--schematic analysis and computer simulation.

    Science.gov (United States)

    Hung, George K; Ciuffreda, Kenneth J

    2007-07-01

    Previous theories of myopia development involved subtle and complex processes such as the sensing and analyzing of chromatic aberration, spherical aberration, spatial gradient of blur, or spatial frequency content of the retinal image, but they have not been able to explain satisfactorily the diverse experimental results reported in the literature. On the other hand, our newly proposed incremental retinal-defocus theory (IRDT) has been able to explain all of these results. This theory is based on a relatively simple and direct mechanism for the regulation of ocular growth. It states that a time-averaged decrease in retinal-image defocus area decreases the rate of release of retinal neuromodulators, which decreases the rate of retinal proteoglycan synthesis with an associated decrease in scleral structural integrity. This increases the rate of scleral growth, and in turn the eye's axial length, which leads to myopia. Our schematic analysis has provided a clear explanation for the eye's ability to grow in the appropriate direction under a wide range of experimental conditions. In addition, the theory has been able to explain how repeated cycles of nearwork-induced transient myopia leads to repeated periods of decreased retinal-image defocus, whose cumulative effect over an extended period of time results in an increase in axial growth that leads to permanent myopia. Thus, this unifying theory forms the basis for understanding the underlying retinal and scleral mechanisms of myopia development.

  13. Direct simulation of groundwater transit-time distributions using the reservoir theory

    Science.gov (United States)

    Etcheverry, David; Perrochet, Pierre

    Groundwater transit times are of interest for the management of water resources, assessment of pollution from non-point sources, and quantitative dating of groundwaters by the use of environmental isotopes. The age of water is the time water has spent in an aquifer since it has entered the system, whereas the transit time is the age of water as it exits the system. Water at the outlet of an aquifer is a mixture of water elements with different transit times, as a consequence of the different flow-line lengths. In this paper, transit-time distributions are calculated by coupling two existing methods, the reservoir theory and a recent age-simulation method. Based on the derivation of the cumulative age distribution over the whole domain, the approach accounts for the whole hydrogeological framework. The method is tested using an analytical example and its applicability illustrated for a regional layered aquifer. Results show the asymmetry and multimodality of the transit-time distribution even in advection-only conditions, due to the aquifer geometry and to the velocity-field heterogeneity. Résumé: Groundwater transit times are of interest for managing the assessment of water resources in the case of pollution from non-point sources, and also for quantitatively dating groundwater by means of environmental isotopes. The age of water is the time it has spent in an aquifer since it entered the system, whereas the transit time is the age of the water at the moment it leaves the system. Water at the outlet of an aquifer is a mixture of waters with different transit times, owing to the different lengths of the flow lines followed. In this paper, the transit-time distributions are calculated by coupling two methods, the reservoir theory and a recent age-simulation method. Based on the derivation of the cumulative age distribution over

  14. Strategic bidding in electricity markets: An agent-based simulator with game theory for scenario analysis

    DEFF Research Database (Denmark)

    Pinto, Tiago; Praca, Isabel; Morais, Hugo

    2013-01-01

    Electricity markets are complex environments, involving a large number of different entities, with specific characteristics and objectives, making their decisions and interacting in a dynamic scene. Game theory has been widely used to support decisions in competitive environments; therefore its application in electricity markets can prove to be a high-potential tool. This paper proposes a new scenario analysis algorithm, which includes the application of game theory, to evaluate and preview different scenarios and provide players with the ability to strategically react in order to exhibit the behavior that better fits their objectives. This model includes forecasts of competitor players' actions, to build models of their behavior, in order to define the most probable expected scenarios. Once the scenarios are defined, game theory is applied to support the choice of the action to be performed.

  15. User Manual for Graphical User Interface Version 2.4 with Fire and Smoke Simulation Model (FSSIM) Version 1.2

    National Research Council Canada - National Science Library

    Haupt, Tomasz A; Henley, Greg; Sura, Bhargavi; Kirkland, Robert; Floyd, Jason; Scheffey, Joseph; Tatem, Patricia A; Williams, Frederick W

    2006-01-01

    The collaborative work of Hughes Associates, Inc., the Naval Research Laboratory, and a group at Mississippi State University resulted in the development of a simulation system including a Graphical User Interface (GUI...

  16. Theory and simulations for hard-disk models of binary mixtures of molecules with internal degrees of freedom

    DEFF Research Database (Denmark)

    Fraser, Diane P.; Zuckermann, Martin J.; Mouritsen, Ole G.

    1991-01-01

    A two-dimensional Monte Carlo simulation method based on the NpT ensemble and the Voronoi tesselation, which was previously developed for single-species hard-disk systems, is extended, along with a version of scaled-particle theory, to many-component mixtures. These systems are unusual in the sense...... and internal degrees of freedom leads to a rich phase structure that includes solid-liquid transitions (governed by the translational variables) as well as transitions involving changes in average disk size (governed by the internal variables). The relationship between these two types of transitions is studied...... by the method in the case of a binary mixture, and results are presented for varying disk-size ratios and degeneracies. The results are also compared with the predictions of the extended scaled-particle theory. Applications of the model are discussed in relation to lipid monolayers spread on air...

  17. Structure of cylindrical electric double layers: Comparison of density functional and modified Poisson-Boltzmann theories with Monte Carlo simulations

    Directory of Open Access Journals (Sweden)

    V.Dorvilien

    2013-01-01

    The structure of cylindrical double layers is studied using a modified Poisson-Boltzmann theory and the density functional approach. In the model double layer the electrode is a cylindrical polyion that is infinitely long, impenetrable, and uniformly charged. The polyion is immersed in a sea of equi-sized rigid ions embedded in a dielectric continuum. An in-depth comparison of the theoretically predicted zeta potentials, the mean electrostatic potentials, and the electrode-ion singlet density distributions is made with the corresponding Monte Carlo simulation data. The theories are seen to be consistent in their predictions that include variations in ionic diameters, electrolyte concentrations, and electrode surface charge densities, and are also able to reproduce well some new and existing Monte Carlo results.

  18. Assessment and simulation tools for sustainable energy systems theory and applications

    CERN Document Server

    Cavallaro, Fausto

    2013-01-01

    This book covers both simulations using the MARKAL model and linear programming (LP), and methods and applications of multi-criteria analysis, fuzzy sets, genetic algorithms, and neural nets (artificial intelligence) to energy systems.

  19. Simulation of creep effects in framework of a geometrically nonlinear endochronic theory of inelasticity

    Science.gov (United States)

    Zabavnikova, T. A.; Kadashevich, Yu. I.; Pomytkin, S. P.

    2018-05-01

    A geometrically non-linear endochronic theory of inelasticity in tensor parametric form is considered. In the framework of this theory, the creep strains are modelled. The effect of various schemes of applying stresses and changing material properties on the development of creep strains is studied. The constitutive equations of the model are represented by non-linear systems of ordinary differential equations, which are solved in the MATLAB environment by an implicit difference method. The presented results demonstrate good qualitative agreement between theoretical data and experimental observations, including the description of the tertiary creep and pre-fracture of materials.
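    The “implicit difference method” mentioned here can be illustrated with a backward-Euler step plus a scalar Newton iteration on a simple nonlinear relaxation law; both the constitutive model (a Maxwell element with a Norton dashpot at constant total strain) and the material constants are assumptions for illustration, not the endochronic model of the paper.

        # Backward Euler with Newton iteration for d(sigma)/dt = -E * A * sigma**n,
        # i.e. stress relaxation of a Maxwell/Norton element held at constant strain.
        # Illustrative sketch only; the constants below are assumed.
        E, A, n = 200e3, 1e-12, 4.0      # MPa, MPa^-n per hour, dimensionless
        sigma, t = 100.0, 0.0            # initial stress (MPa), time (h)
        dt, t_end = 10.0, 1000.0         # time step and end time, h

        while t < t_end:
            s = sigma                                     # Newton starting guess
            for _ in range(50):
                g = s - sigma + dt * E * A * s ** n       # backward-Euler residual
                dg = 1.0 + dt * E * A * n * s ** (n - 1)  # residual derivative
                s_next = s - g / dg
                if abs(s_next - s) < 1e-10:
                    s = s_next
                    break
                s = s_next
            sigma, t = s, t + dt
            if int(t) % 200 == 0:
                print(f"t = {t:6.1f} h, sigma = {sigma:7.3f} MPa")

    The same accept-a-step, Newton-solve structure carries over to systems of constitutive ODEs, with the scalar derivative replaced by a Jacobian.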

  20. A Multi-context BDI Recommender System: from Theory to Simulation

    OpenAIRE

    Ben Othmane , Amel; Tettamanzi , Andrea G. B.; Villata , Serena; Le Thanh , Nhan

    2016-01-01

    In this paper, a simulation of a multi-agent recommender system is presented and developed in the NetLogo platform. The specification of this recommender system is based on the well-known Belief-Desire-Intention agent architecture applied to multi-context systems, extended with contexts for additional reasoning abilities, especially social ones. The main goal of this simulation study is, besides illustrating the usefulness and feasibility of our agent-based recommender ...

  1. Heavy ion beam fusion theory and simulation: Annual report, October 1985 to 31 January 1987

    International Nuclear Information System (INIS)

    Haber, I.

    1987-01-01

    A large number of simulations have been performed to establish a database of simulations for use in accelerator designs, and to compare the simulated emittance growths with the threshold for emittance growth actually measured in the Single Beam Transport Experiment (SBTE) at LBL. These simulations show substantial agreement with the experiment. They also extend into the parameter regime where emittance growths are slower than could be measured in SBTE, but which may still be important to a driver system several times longer. These simulations also demonstrate that, even for beams which are not in detailed space-charge equilibrium and can therefore be subject to substantial nonlinear space-charge forces, emittance growths are restricted to what is consistent with energy conservation, provided that the instability threshold is not crossed. This occurs even though energy need not be conserved in alternating gradient systems. Major modifications have been made to the two-dimensional SHIFT-XY (Simulation of Heavy Ion Fusion Transport) code to add some of the three-dimensional physics associated with the transverse variation of the longitudinal fields in a long beam. Enhancements to the code have also been implemented which can decrease running times as much as 30% for typical parameters. 13 refs., 7 figs.

  2. Time-dependent density-functional theory simulation of local currents in pristine and single-defect zigzag graphene nanoribbons

    Energy Technology Data Exchange (ETDEWEB)

    He, Shenglai, E-mail: shenglai.he@vanderbilt.edu; Russakoff, Arthur; Li, Yonghui; Varga, Kálmán, E-mail: kalman.varga@vanderbilt.edu [Department of Physics and Astronomy, Vanderbilt University, Nashville, Tennessee 37235 (United States)

    2016-07-21

    The spatial current distribution in H-terminated zigzag graphene nanoribbons (ZGNRs) under electrical bias is investigated using time-dependent density-functional theory solved on a real-space grid. A projected complex absorbing potential is used to minimize the effect of reflection at the simulation cell boundary. The calculations show that the current flows mainly along the edge atoms in the hydrogen-terminated pristine ZGNRs. When a vacancy is introduced to the ZGNRs, loop currents emerge at the ribbon edge due to electrons hopping between carbon atoms of the same sublattice. The loop currents hinder the flow of the edge current, explaining the poor electric conductance observed in recent experiments.

  3. Implementation of ECIRR model based on virtual simulation media to reduce students’ misconception on kinetic theory of gases

    Science.gov (United States)

    Prastiwi, A. C.; Kholiq, A.; Setyarsih, W.

    2018-03-01

    The purpose of this study is to analyse the reduction of students' misconceptions after receiving ECIRR instruction with virtual simulation. The design of the research is a pre-experimental design with a One Group Pretest-Posttest Design. The subjects of this research were 36 students of class XI MIA-5, SMAN 1 Driyorejo Gresik, in the 2015/2016 school year. Students' misconceptions were determined by a Three-tier Diagnostic Test. The results show that the average percentage reductions of misconceptions on the topics of the ideal gas law, the equation of ideal gases, and the kinetic theory of gases are 38%, 34%, and 38%, respectively.

  4. Monte Carlo Simulations of Electron Energy-Loss Spectra with the Addition of Fine Structure from Density Functional Theory Calculations.

    Science.gov (United States)

    Attarian Shandiz, Mohammad; Guinel, Maxime J-F; Ahmadi, Majid; Gauvin, Raynald

    2016-02-01

    A new approach is presented to introduce the fine structure of core-loss excitations into the electron energy-loss spectra of ionization edges by Monte Carlo simulations based on an optical oscillator model. The optical oscillator strength is refined using the calculated electron energy-loss near-edge structure by density functional theory calculations. This approach can predict the effects of multiple scattering and thickness on the fine structure of ionization edges. In addition, effects of the fitting range for background removal and the integration range under the ionization edge on signal-to-noise ratio are investigated.

  5. Optimisation of simulated team training through the application of learning theories: a debate for a conceptual framework

    Science.gov (United States)

    2014-01-01

    Background As a conceptual review, this paper will debate relevant learning theories to inform the development, design and delivery of an effective educational programme for simulated team training relevant to health professionals. Discussion Kolb’s experiential learning theory is used as the main conceptual framework to define the sequence of activities. Dewey’s theory of reflective thought and action, Jarvis modification of Kolb’s learning cycle and Schön’s reflection-on-action serve as a model to design scenarios for optimal concrete experience and debriefing for challenging participants’ beliefs and habits. Bandura’s theory of self-efficacy and newer socio-cultural learning models outline that for efficient team training, it is mandatory to introduce the social-cultural context of a team. Summary The ideal simulated team training programme needs a scenario for concrete experience, followed by a debriefing with a critical reflexive observation and abstract conceptualisation phase, and ending with a second scenario for active experimentation. Let them re-experiment to optimise the effect of a simulated training session. Challenge them to the edge: The scenario needs to challenge participants to generate failures and feelings of inadequacy to drive and motivate team members to critical reflect and learn. Not experience itself but the inadequacy and contradictions of habitual experience serve as basis for reflection. Facilitate critical reflection: Facilitators and group members must guide and motivate individual participants through the debriefing session, inciting and empowering learners to challenge their own beliefs and habits. To do this, learners need to feel psychological safe. Let the group talk and critical explore. Motivate with reality and context: Training with multidisciplinary team members, with different levels of expertise, acting in their usual environment (in-situ simulation) on physiological variables is mandatory to introduce

  6. An activity theory perspective of how scenario-based simulations support learning: a descriptive analysis.

    Science.gov (United States)

    Battista, Alexis

    2017-01-01

    The dominant frameworks for describing how simulations support learning emphasize increasing access to structured practice and the provision of feedback which are commonly associated with skills-based simulations. By contrast, studies examining student participants' experiences during scenario-based simulations suggest that learning may also occur through participation. However, studies directly examining student participation during scenario-based simulations are limited. This study examined the types of activities student participants engaged in during scenario-based simulations and then analyzed their patterns of activity to consider how participation may support learning. Drawing from Engeström's first-, second-, and third-generation activity systems analysis, an in-depth descriptive analysis was conducted. The study drew from multiple qualitative methods, namely narrative, video, and activity systems analysis, to examine student participants' activities and interaction patterns across four video-recorded simulations depicting common motivations for using scenario-based simulations (e.g., communication, critical patient management). The activity systems analysis revealed that student participants' activities encompassed three clinically relevant categories, including (a) use of physical clinical tools and artifacts, (b) social interactions, and (c) performance of structured interventions. Role assignment influenced participants' activities and the complexity of their engagement. Importantly, participants made sense of the clinical situation presented in the scenario by reflexively linking these three activities together. Specifically, student participants performed structured interventions, relying upon the use of physical tools, clinical artifacts, and social interactions together with interactions between students, standardized patients, and other simulated participants to achieve their goals. When multiple student participants were present, such as in a

  7. Digital linear control theory applied to automatic stepsize control in electrical circuit simulation

    NARCIS (Netherlands)

    Verhoeven, A.; Beelen, T.G.J.; Hautus, M.L.J.; Maten, ter E.J.W.; Di Bucchianico, A.; Mattheij, R.M.M.; Peletier, M.A.

    2006-01-01

    Adaptive stepsize control is used to control the local errors of the numerical solution. For optimization purposes smoother stepsize controllers are wanted, such that the errors and stepsizes also behave smoothly. We consider approaches from digital linear control theory applied to multistep
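    The control-theoretic flavour of such stepsize selection can be sketched with a PI-type controller that replaces the classical deadbeat rule h_new = h * (TOL/err)^(1/(p+1)); the gains, the limiter, and the step-doubling Euler test problem below are textbook-style illustrative choices, not the authors' design.

        import math

        def pi_step_controller(err, err_prev, h, tol, order, k_i=0.7, k_p=0.4):
            """Smoothed PI stepsize update; the gains are typical illustrative values."""
            k = order + 1
            fac = (tol / err) ** (k_i / k) * (err_prev / err) ** (k_p / k)
            return h * min(2.0, max(0.2, fac))    # limiter keeps stepsizes smooth

        # Demo: adaptive explicit Euler (order 1) on y' = -5 y with a
        # step-doubling local error estimate.
        f = lambda t, y: -5.0 * y
        t, y, h, tol = 0.0, 1.0, 0.01, 1e-4
        err_prev = tol
        while t < 2.0:
            h = min(h, 2.0 - t)
            y_full = y + h * f(t, y)                        # one full step
            y_half = y + 0.5 * h * f(t, y)                  # two half steps
            y_two = y_half + 0.5 * h * f(t + 0.5 * h, y_half)
            err = max(abs(y_two - y_full), 1e-14)           # local error estimate
            if err <= tol:                                  # accept
                t, y = t + h, y_two
                h, err_prev = pi_step_controller(err, err_prev, h, tol, 1), err
            else:                                           # reject and retry
                h = pi_step_controller(err, err_prev, h, tol, 1)
        print(f"y(2) = {y:.6f}, exact = {math.exp(-10.0):.6f}")

    Compared with the deadbeat rule, the proportional term damps oscillations in the accepted stepsizes, which is the smoothness property the record is after.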

  8. Monte Carlo simulation of SU(2) lattice gauge theory with internal quark loops

    International Nuclear Information System (INIS)

    Azcoiti, V.; Nakamura, A.

    1982-01-01

    Dynamical effects of quark loops in lattice gauge theory with the icosahedral group are studied. The standard Wilson action is employed, and the fermionic part is calculated by a discretized pseudo-fermionic method. The masses of π, ρ, and ω are computed, and the average value of an effective fermionic action is evaluated.

  9. High energy physics program: Task A, Experiment and theory; Task B, Numerical simulation

    International Nuclear Information System (INIS)

    1993-01-01

    This report discusses research in High Energy Physics at Florida State University. Contained in this paper are: highlights of activities during the past few years; five year summary; fixed target experiments; collider experiments; SSC preparation, detector development and detector construction; computing, networking and VAX upgrade to ALPHA; and particle theory programs

  10. Comparison of coupled mode theory and FDTD simulations of coupling between bent and straight optical waveguides

    NARCIS (Netherlands)

    Bertolotti, M.; Symes, W.W.; Stoffer, Remco; Hiremath, K.R.; Driessen, A.; Michelotti, F; Hammer, Manfred

    Analysis of integrated optical cylindrical microresonators involves the coupling between a straight waveguide and a bent waveguide. Our (2D) variant of coupled mode theory is based on analytically represented mode profiles. With the bend modes expressed in Cartesian coordinates, coupled mode

  11. An Explanatory Item Response Theory Approach for a Computer-Based Case Simulation Test

    Science.gov (United States)

    Kahraman, Nilüfer

    2014-01-01

    Problem: Practitioners working with multiple-choice tests have long utilized Item Response Theory (IRT) models to evaluate the performance of test items for quality assurance. The use of similar applications for performance tests, however, is often encumbered due to the challenges encountered in working with complicated data sets in which local…

  12. Digital linear control theory applied to automatic stepsize control in electrical circuit simulation

    NARCIS (Netherlands)

    Verhoeven, A.; Beelen, T.G.J.; Hautus, M.L.J.; Maten, ter E.J.W.

    2005-01-01

    Adaptive stepsize control is used to control the local errors of the numerical solution. For optimization purposes smoother stepsize controllers are wanted, such that the errors and stepsizes also behave smoothly. We consider approaches from digital linear control theory applied to multistep

  13. The effect of implementing cognitive load theory-based design principles in virtual reality simulation training of surgical skills: a randomized controlled trial

    DEFF Research Database (Denmark)

    Andersen, Steven Arild Wuyts; Mikkelsen, Peter Trier; Konge, Lars

    2016-01-01

    Cognitive overload can inhibit learning, and cognitive load theory-based instructional design principles can be used to optimize learning situations. This study aims to investigate the effect of implementing cognitive load theory-based design principles in virtual reality simulation training...

  14. Stirling engine design manual

    Science.gov (United States)

    Martini, W. R.

    1978-01-01

    This manual is intended to serve both as an introduction to Stirling engine analysis methods and as a key to the open literature on Stirling engines. Over 800 references are listed and these are cross referenced by date of publication, author and subject. Engine analysis is treated starting from elementary principles and working through cycle analysis. Analysis methodologies are classified as first, second or third order depending upon degree of complexity and probable application; first order for preliminary engine studies, second order for performance prediction and engine optimization, and third order for detailed hardware evaluation and engine research. A few comparisons between theory and experiment are made. A second order design procedure is documented step by step with calculation sheets and a worked out example to follow. Current high power engines are briefly described and a directory of companies and individuals who are active in Stirling engine development is included. Much remains to be done. Some of the more complicated and potentially very useful design procedures are now only referred to. Future support will enable a more thorough job of comparing all available design procedures against experimental data which should soon be available.
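
    As a concrete example of the first-order level of analysis described above, a Beale-number power estimate can be written in a few lines. The Beale number value and the engine parameters below are illustrative assumptions, not figures from the manual.

```python
def beale_power(p_mean_pa, swept_volume_m3, freq_hz, beale_number=0.15):
    """First-order (Beale) estimate of Stirling engine shaft power,
    P = Bn * p_mean * V_swept * f, with all quantities in SI units.
    Bn ~ 0.11-0.15 is an empirical constant for well-developed engines."""
    return beale_number * p_mean_pa * swept_volume_m3 * freq_hz

# Illustrative (assumed) operating point: 10 MPa mean pressure, 100 cm^3 swept
# volume, 50 Hz -- gives a rough preliminary-design power estimate.
print(f"P ~ {beale_power(10e6, 100e-6, 50):.0f} W")
```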

  15. Globally-Optimized Local Pseudopotentials for (Orbital-Free) Density Functional Theory Simulations of Liquids and Solids.

    Science.gov (United States)

    Del Rio, Beatriz G; Dieterich, Johannes M; Carter, Emily A

    2017-08-08

    The accuracy of local pseudopotentials (LPSs) is one of two major determinants of the fidelity of orbital-free density functional theory (OFDFT) simulations. We present a global optimization strategy for LPSs that enables OFDFT to reproduce solid and liquid properties obtained from Kohn-Sham DFT. Our optimization strategy can fit arbitrary properties from both solid and liquid phases, so the resulting globally optimized local pseudopotentials (goLPSs) can be used in solid and/or liquid-phase simulations depending on the fitting process. We show three test cases proving that we can (1) improve solid properties compared to our previous bulk-derived local pseudopotential generation scheme; (2) refine predicted liquid and solid properties by adding force matching data; and (3) generate a from-scratch, accurate goLPS from the local channel of a non-local pseudopotential. The proposed scheme therefore serves as a full and improved LPS construction protocol.

  16. Simultaneous ion and neutral evaporation in aqueous nanodrops: experiment, theory, and molecular dynamics simulations.

    Science.gov (United States)

    Higashi, Hidenori; Tokumi, Takuya; Hogan, Christopher J; Suda, Hiroshi; Seto, Takafumi; Otani, Yoshio

    2015-06-28

    We use a combination of tandem ion mobility spectrometry (IMS-IMS, with differential mobility analyzers), molecular dynamics (MD) simulations, and analytical models to examine both neutral solvent (H2O) and ion (solvated Na(+)) evaporation from aqueous sodium chloride nanodrops. For experiments, nanodrops were produced via electrospray ionization (ESI) of an aqueous sodium chloride solution. Two nanodrops were examined in MD simulations: (1) a 2500 water molecule nanodrop with 68 Na(+) and 60 Cl(-) ions (an initial net charge of z = +8), and (2) a 1000 water molecule nanodrop with 65 Na(+) and 60 Cl(-) ions (an initial net charge of z = +5). Specifically, we used MD simulations to examine the validity of a model for the neutral evaporation rate incorporating both the Kelvin (surface curvature) and Thomson (electrostatic) influences, while both MD simulations and experimental measurements were compared to predictions of the ion evaporation rate equation of Labowsky et al. [Anal. Chim. Acta, 2000, 406, 105-118]. With a single fit parameter, we find excellent agreement between simulated and modeled neutral evaporation rates for nanodrops with solute volume fractions below 0.30. Similarly, MD simulation inferred ion evaporation rates are in excellent agreement with predictions based on the Labowsky et al. equation. Measurements of the sizes and charge states of ESI generated NaCl clusters suggest that the charge states of these clusters are governed by ion evaporation; however, ion evaporation appears to have occurred with lower activation energies in experiments than was anticipated based on analytical calculations as well as MD simulations. Several possible reasons for this discrepancy are discussed.
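
    The neutral-evaporation model referred to above combines curvature (Kelvin) and charge (Thomson) corrections to the equilibrium vapour pressure. The sketch below evaluates the textbook Kelvin-Thomson factor for a charged aqueous droplet; the property values and radii are assumptions for illustration, and the paper's single fit parameter is not reproduced here.

```python
import numpy as np

KB = 1.380649e-23       # Boltzmann constant, J/K
E0 = 8.8541878128e-12   # vacuum permittivity, F/m
QE = 1.602176634e-19    # elementary charge, C

def kelvin_thomson_factor(r, T, z, gamma=0.072, v_mol=3.0e-29, eps_r=80.0):
    """Ratio p_eq(r, z)/p_flat from the textbook Kelvin-Thomson expression:
    the curvature (Kelvin) term raises, and the charge (Thomson) term lowers,
    the equilibrium vapour pressure over a charged droplet of radius r."""
    kelvin = 2.0 * gamma / r
    thomson = (z * QE) ** 2 / (32.0 * np.pi ** 2 * E0 * r ** 4) * (1.0 - 1.0 / eps_r)
    return np.exp(v_mol / (KB * T) * (kelvin - thomson))

# Illustrative: a z = +8 aqueous droplet at 300 K for a few radii.
for r_nm in (1.0, 2.0, 3.0, 5.0):
    print(f"r = {r_nm} nm : p_eq/p_flat = {kelvin_thomson_factor(r_nm * 1e-9, 300.0, 8):.3f}")
```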

  17. Investigations of the role of nonlinear couplings in structure formation and transport regulation: Experiment, simulation, and theory

    International Nuclear Information System (INIS)

    Holland, C.; Kim, E.J.; Champeaux, S.; Gurcan, O.; Rosenbluth, M.N.; Diamond, P.H.; Tynan, G.R.; Nevins, W.; Candy, J.

    2003-01-01

    Understanding the physics of shear flow and structure formation in plasmas is a central problem for the advancement of magnetic fusion because of the roles such flows are believed to play in regulating turbulence and transport levels. In this paper, we report on integrated experimental, computational, and theoretical studies of sheared zonal flows and radially extended convective cells, with the aim of assessing the results of theory-experiment and theory-simulation comparisons. In particular, simulations are used as test beds for verifying analytical predictions and demonstrating the suitability of techniques such as bispectral analysis for isolating nonlinear couplings in data. Based on intriguing initial results suggesting that increased levels of nonlinear coupling occur during L-H transitions, we have undertaken a comprehensive study of bispectral quantities in fluid and gyrokinetic simulations, and compared these results with theoretical expectations. Topics of study include locality and directionality of energy transfer, amplitude scaling, and parameter dependences. Techniques for inferring nonlinear coupling coefficients from data are discussed, and initial results from experimental data are presented. Future experimental studies are motivated. We also present work investigating the role of structures in transport. Analysis of simulation data indicates that the turbulent heat flux can be represented as an ensemble of 'heat pulses' of varying sizes, with a power-law distribution. The slope of the power law is shown to determine the global transport scaling (i.e. Bohm or gyro-Bohm). Theoretical work studying the dynamics of the largest cells (termed 'streamers') is presented, as well as results from ongoing analysis studying connections between the heat pulse distribution and bispectral quantities. (author)

  18. Simulations

    CERN Document Server

    Ngada, Narcisse

    2015-06-15

    The complexity and cost of building and running high-power electrical systems make the use of simulations unavoidable. The simulations available today provide great understanding about how systems really operate. This paper helps the reader to gain an insight into simulation in the field of power converters for particle accelerators. Starting with the definition and basic principles of simulation, two simulation types, as well as their leading tools, are presented: analog and numerical simulations. Some practical applications of each simulation type are also considered. The final conclusion then summarizes the main important items to keep in mind before opting for a simulation tool or before performing a simulation.

  19. Geothermal reservoir assessment manual; 1984-1992 nendo chinetsu choryusou hyoka shuhou manual

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-02-01

    A geothermal reservoir assessment manual was prepared for the promotion of the development of geothermal power generation, based on the results of the 'geothermal reservoir assessment technique development project' implemented during the fiscal 1984-1992 period and on the results of surveys conducted in Japan and abroad. Of the geothermal systems generally classified into the steam dominant type and the hot water dominant type, encounters with the steam dominant type are but seldom reported. This manual therefore covers the hot water dominant type only. In addition to the explanation of the basic concept and the outline of geothermal reservoirs, the manual carries data necessary for reservoir assessment; geological and geophysical data analyses; geochemistry in reservoir assessment; data of underground logging and of fuming; conceptual models; simulators and models for reservoir simulation; natural-state simulation, history-matching simulation, and reservoir behavior predicting simulation; case history (modeling of a geothermal reservoir prior to exploitation), references, and so forth. (NEDO)

  20. Computer simulation of high resolution transmission electron micrographs: theory and analysis

    International Nuclear Information System (INIS)

    Kilaas, R.

    1985-03-01

    Computer simulation of electron micrographs is an invaluable aid in their proper interpretation and in defining optimum conditions for obtaining images experimentally. Since modern instruments are capable of atomic resolution, simulation techniques employing high precision are required. This thesis makes contributions to four specific areas of this field. First, the validity of a new method for simulating high resolution electron microscope images has been critically examined. Second, three different methods for computing scattering amplitudes in High Resolution Transmission Electron Microscopy (HRTEM) have been investigated as to their ability to include upper Laue layer (ULL) interaction. Third, a new method for computing scattering amplitudes in high resolution transmission electron microscopy has been examined. Fourth, the effect of a surface layer of amorphous silicon dioxide on images of crystalline silicon has been investigated for a range of crystal thicknesses varying from zero to 2 1/2 times that of the surface layer

  1. Radiological Control Manual

    Energy Technology Data Exchange (ETDEWEB)

    1993-04-01

    This manual has been prepared by Lawrence Berkeley Laboratory to provide guidance for site-specific additions, supplements, and clarifications to the DOE Radiological Control Manual. The guidance provided in this manual is based on the requirements given in Title 10 Code of Federal Regulations Part 835, Radiation Protection for Occupational Workers, DOE Order 5480.11, Radiation Protection for Occupational Workers, and the DOE Radiological Control Manual. The topics covered are (1) excellence in radiological control, (2) radiological standards, (3) conduct of radiological work, (4) radioactive materials, (5) radiological health support operations, (6) training and qualification, and (7) radiological records.

  2. EMSL Operations Manual

    Energy Technology Data Exchange (ETDEWEB)

    Foster, Nancy S.

    2009-06-18

    This manual is a general resource tool to assist EMSL users and Laboratory staff within EMSL in locating official policy, practice, and subject matter experts. It is not intended to replace or amend any formal Battelle policy or practice. Users of this manual should rely only on Battelle’s Standard Based Management System (SBMS) for official policy. No contractual commitment or right of any kind is created by this manual. Battelle management reserves the right to alter, change, or delete any information contained within this manual without prior notice.

  3. EMSL Operations Manual

    Energy Technology Data Exchange (ETDEWEB)

    Foster, Nancy S.

    2009-03-25

    This manual is a general resource tool to assist EMSL users and Laboratory staff within EMSL in locating official policy, practice, and subject matter experts. It is not intended to replace or amend any formal Battelle policy or practice. Users of this manual should rely only on Battelle’s Standard Based Management System (SBMS) for official policy. No contractual commitment or right of any kind is created by this manual. Battelle management reserves the right to alter, change, or delete any information contained within this manual without prior notice.

  4. Radiological Control Manual

    International Nuclear Information System (INIS)

    1993-04-01

    This manual has been prepared by Lawrence Berkeley Laboratory to provide guidance for site-specific additions, supplements, and clarifications to the DOE Radiological Control Manual. The guidance provided in this manual is based on the requirements given in Title 10 Code of Federal Regulations Part 835, Radiation Protection for Occupational Workers, DOE Order 5480.11, Radiation Protection for Occupational Workers, and the DOE Radiological Control Manual. The topics covered are (1) excellence in radiological control, (2) radiological standards, (3) conduct of radiological work, (4) radioactive materials, (5) radiological health support operations, (6) training and qualification, and (7) radiological records

  5. HASL procedures manual

    International Nuclear Information System (INIS)

    Harley, J.H.

    1977-08-01

    Additions and corrections to the following sections of the HASL Procedures Manual are provided: General, Sampling, Field Measurements; General Analytical Chemistry, Chemical Procedures, Data Section, and Specifications

  6. PCs The Missing Manual

    CERN Document Server

    Karp, David

    2005-01-01

    Your vacuum comes with one. Even your blender comes with one. But your PC--something that costs a whole lot more and is likely to be used daily and for tasks of far greater importance and complexity--doesn't come with a printed manual. Thankfully, that's not a problem any longer: PCs: The Missing Manual explains everything you need to know about PCs, both inside and out, and how to keep them running smoothly and working the way you want them to work. A complete PC manual for both beginners and power users, PCs: The Missing Manual has something for everyone. PC novices will appreciate the una

  7. Validation and modification of the Blade Element Momentum theory based on comparisons with actuator disc simulations

    DEFF Research Database (Denmark)

    Aagaard Madsen, Helge; Bak, Christian; Døssing, Mads

    2010-01-01

    A comprehensive investigation of the Blade Element Momentum (BEM) model using detailed numerical simulations with an axisymmetric actuator disc (AD) model has been carried out. The present implementation of the BEM model is in a version where exactly the same input in the form of non-dimensional…
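
    For readers unfamiliar with the BEM model being benchmarked here, the sketch below shows the classical fixed-point iteration for the axial and tangential induction factors of a single annular element. The thin-airfoil polar, solidity, and twist are assumed values, and no tip-loss or high-induction corrections are applied, so this is only a baseline illustration of the algorithm, not the implementation compared against the actuator disc results.

```python
import numpy as np

def bem_annulus(tsr_local, solidity, twist, n_iter=200, relax=0.3):
    """Classical BEM iteration for one annular element: update the axial (a)
    and tangential (a') induction factors until the momentum and blade-element
    expressions agree.  Thin-airfoil aerodynamics (Cl = 2*pi*alpha, Cd = 0.01)
    and no tip-loss or high-induction corrections -- purely illustrative."""
    a, ap = 0.0, 0.0
    for _ in range(n_iter):
        phi = np.arctan2(1.0 - a, (1.0 + ap) * tsr_local)   # inflow angle
        alpha = phi - twist
        cl, cd = 2.0 * np.pi * alpha, 0.01
        cn = cl * np.cos(phi) + cd * np.sin(phi)
        ct = cl * np.sin(phi) - cd * np.cos(phi)
        a_new = 1.0 / (4.0 * np.sin(phi) ** 2 / (solidity * cn) + 1.0)
        ap_new = 1.0 / (4.0 * np.sin(phi) * np.cos(phi) / (solidity * ct) - 1.0)
        a += relax * (a_new - a)       # under-relax for robustness
        ap += relax * (ap_new - ap)
    return a, ap

# Illustrative mid-span element: local tip-speed ratio 5, solidity 0.05, 2 deg twist.
a, ap = bem_annulus(tsr_local=5.0, solidity=0.05, twist=np.radians(2.0))
print(f"a = {a:.3f}, a' = {ap:.4f}")
```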

  8. Trapped Electron Instability of Electron Plasma Waves: Vlasov simulations and theory

    Science.gov (United States)

    Berger, Richard; Chapman, Thomas; Brunner, Stephan

    2013-10-01

    The growth of sidebands of a large-amplitude electron plasma wave is studied with Vlasov simulations for a range of amplitudes (0.001 … vph = ±ωbe …), where vph = ω0/k0 and ωbe is the bounce frequency of a deeply trapped electron. In 2D simulations, we find that the instability persists and co-exists with the filamentation instability. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded by the Laboratory Research and Development Program at LLNL under project tracking code 12-ERD.

  9. Theory and simulation of photogeneration and transport in Si-SiOx superlattice absorbers

    Directory of Open Access Journals (Sweden)

    Aeberhard Urs

    2011-01-01

    Si-SiOx superlattices are among the candidates that have been proposed as high band gap absorber material in all-Si tandem solar cell devices. Owing to the large potential barriers for photoexcited charge carriers, transport in these devices is restricted to quantum-confined superlattice states. As a consequence of the finite number of wells and large built-in fields, the electronic spectrum can deviate considerably from the minibands of a regular superlattice. In this article, a quantum-kinetic theory based on the non-equilibrium Green's function formalism for an effective mass Hamiltonian is used for investigating photogeneration and transport in such devices for arbitrary geometry and operating conditions. By including the coupling of electrons to both photons and phonons, the theory is able to provide a microscopic picture of indirect generation, carrier relaxation, and inter-well transport mechanisms beyond the ballistic regime.

  10. Molecular dynamics simulations of the penetration lengths: application within the fluctuation theory for diffusion coefficients

    DEFF Research Database (Denmark)

    Galliero, Guillaume; Medvedev, Oleg; Shapiro, Alexander

    2005-01-01

    (… A 322 (2004) 151). In the current study, a fast molecular dynamics scheme has been developed to determine the values of the penetration lengths in Lennard-Jones binary systems. Results deduced from computations provide a new insight into the concept of penetration lengths. It is shown for four different … fluctuation theory and molecular dynamics scheme exhibit consistent trends and average deviations from experimental data around 10-20%.

  11. Application of Stochastic Unsaturated Flow Theory, Numerical Simulations, and Comparisons to Field Observations

    DEFF Research Database (Denmark)

    Jensen, Karsten Høgh; Mantoglou, Aristotelis

    1992-01-01

    … unsaturated flow equation representing the mean system behavior is solved using a finite difference numerical solution technique. The effective parameters are evaluated from the stochastic theory formulas before entering them into the numerical solution for each iteration. The stochastic model is applied … seems to offer a rational framework for modeling large-scale unsaturated flow and estimating areal averages of soil-hydrological processes in spatially variable soils.

  12. Experiments, Theory, and Simulation on the Evolution of Fabric in Granular Materials

    Science.gov (United States)

    1992-07-27

    Deformation," Intl. J. Plast., 7, 141-160, 1991. [4] Arminjon, M. "Sur le Champ de Rotation des Cristaux dans un Poly- cristal Dform6 Plastiquement ...in Metals," J. Inst. Mftals, 62, 307, 1938. [17] Zbib, H. M. and Aifantis, E. C. --On the Concept of Relative Spin and its Implications to Large... Concept of Relative Spin and its Implications to Large Deformation Theories. Part I: Anisotropic Hard- ening Plasticity," Acta Mechanica, 74, 35, 1988b. 107

  13. One for All and All for One: Using Multiple Identification Theory Simulations to Build Cooperative Attitudes and Behaviors in a Middle Eastern Conflict Scenario

    Science.gov (United States)

    Williams, Robert Howard; Williams, Alexander Jonathan

    2010-01-01

    The authors previously developed multiple identification theory (MIT) as a system of simulation game design intended to promote attitude change. The present study further tests MIT's effectiveness. The authors created a game (CULTURE & CREED) via MIT as a complex simulation of Middle Eastern conflict resolution, designed to change attitudes…

  14. Properties of a planar electric double layer under extreme conditions investigated by classical density functional theory and Monte Carlo simulations.

    Science.gov (United States)

    Zhou, Shiqi; Lamperski, Stanisław; Zydorczak, Maria

    2014-08-14

    Monte Carlo (MC) simulation and classical density functional theory (DFT) results are reported for the structural and electrostatic properties of a planar electric double layer containing ions having highly asymmetric diameters or valencies under extreme concentration conditions. In the applied DFT, the excess free energy contribution due to the hard sphere repulsion is treated with a recently elaborated extended form of the fundamental measure functional, and the coupling of Coulombic and short-range hard-sphere repulsion is described by a traditional second-order functional perturbation expansion approximation. Comparison between the MC and DFT results indicates that the validity interval of the traditional DFT approximation extends to ion valences as high as 3 and to size asymmetries up to a diameter ratio of 4, whether the high-valence ions or the large ions are co- or counter-ions, and to bulk electrolyte concentrations close to the upper limit that the MC simulation can deal with well. The dependence of the DFT accuracy on the ion parameters can be explained self-consistently using arguments of liquid-state theory, and new EDL phenomena are observed, such as an overscreening effect due to monovalent counter-ions, an extreme layering effect of counter-ions, and the appearance of a depletion layer with almost no counter- and co-ions.
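
    Both the MC and DFT treatments above go beyond the point-ion mean-field picture; for orientation, the sketch below evaluates the textbook Gouy-Chapman potential profile for a planar charged surface in a z:z electrolyte, the baseline against which such size and correlation effects are usually judged. The permittivity, concentration, and surface charge are illustrative assumptions.

```python
import numpy as np

KB_T = 4.11e-21            # thermal energy at ~298 K, J
E_CH = 1.602e-19           # elementary charge, C
EPS  = 78.5 * 8.854e-12    # permittivity of water, F/m

def gouy_chapman_potential(x, c_bulk_molar, z, sigma):
    """Textbook Gouy-Chapman (point-ion, mean-field) potential profile for a
    planar charged surface in a z:z electrolyte; the Grahame equation converts
    the surface charge density sigma (C/m^2) into the surface potential."""
    n0 = c_bulk_molar * 1.0e3 * 6.022e23                 # bulk ion density, 1/m^3
    kappa = np.sqrt(2.0 * n0 * (z * E_CH) ** 2 / (EPS * KB_T))
    psi0 = (2.0 * KB_T / (z * E_CH)) * np.arcsinh(sigma / np.sqrt(8.0 * n0 * EPS * KB_T))
    g = np.tanh(z * E_CH * psi0 / (4.0 * KB_T))
    return (2.0 * KB_T / (z * E_CH)) * np.log((1.0 + g * np.exp(-kappa * x))
                                              / (1.0 - g * np.exp(-kappa * x)))

x = np.linspace(0.0, 3e-9, 7)                            # distance from the wall, m
print(gouy_chapman_potential(x, c_bulk_molar=0.1, z=1, sigma=0.1))  # volts
```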

  15. Bourbaki's structure theory in the problem of complex systems simulation models synthesis and model-oriented programming

    Science.gov (United States)

    Brodsky, Yu. I.

    2015-01-01

    The work is devoted to the application of Bourbaki's structure theory to substantiate the synthesis of simulation models of complex multicomponent systems, where every component may be a complex system itself. The application of Bourbaki's structure theory offers a new approach to the design and computer implementation of simulation models of complex multicomponent systems: model synthesis and model-oriented programming. It differs from the traditional object-oriented approach. The central concept of this new approach, and at the same time the basic building block for the construction of more complex structures, is the model-component. A model-component is endowed with a more elaborate structure than, for example, an object in object-oriented analysis. This structure provides the model-component with independent behavior: the ability to respond in a standard way to standard requests from its internal and external environment. At the same time, the computer implementation of a model-component's behavior is invariant under the integration of model-components into complexes. This fact allows one, first, to construct fractal models of any complexity and, second, to implement the computational process for such constructions uniformly, by a single universal program. In addition, the proposed paradigm allows one to exclude imperative programming and to generate computer code with a high degree of parallelism.
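
    The composition idea described above can be caricatured in a few lines of code: a component answers standard requests through a single interface, and a complex is itself a component built from components, so the same dispatch routine drives an arbitrarily nested ("fractal") model. This is only a loose illustration, not Brodsky's formalism; all class and method names are invented for the sketch.

```python
class ModelComponent:
    """Minimal caricature of a model-component: one standard interface through
    which it answers standard requests from its environment."""
    def __init__(self, name):
        self.name = name

    def respond(self, request):
        return f"{self.name}: handled '{request}'"

class Complex(ModelComponent):
    """A complex is itself a model-component assembled from model-components,
    so composition nests to any depth while one uniform dispatch drives it."""
    def __init__(self, name, parts):
        super().__init__(name)
        self.parts = parts

    def respond(self, request):
        inner = "; ".join(part.respond(request) for part in self.parts)
        return f"{self.name}[{inner}]"

plant = Complex("plant", [ModelComponent("pump"),
                          Complex("loop", [ModelComponent("pipe"), ModelComponent("valve")])])
print(plant.respond("advance one time step"))
```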

  16. Examining real-time time-dependent density functional theory nonequilibrium simulations for the calculation of electronic stopping power

    Science.gov (United States)

    Yost, Dillon C.; Yao, Yi; Kanai, Yosuke

    2017-09-01

    In ion irradiation processes, electronic stopping power describes the energy transfer rate from the irradiating ion to the target material's electrons. Due to the scarcity and significant uncertainties in experimental electronic stopping power data for materials beyond simple solids, there has been growing interest in the use of first-principles theory for calculating electronic stopping power. In recent years, advances in high-performance computing have opened the door to fully first-principles nonequilibrium simulations based on real-time time-dependent density functional theory (RT-TDDFT). While it has been demonstrated that the RT-TDDFT approach is capable of predicting electronic stopping power for a wide range of condensed matter systems, there has yet to be an exhaustive examination of the physical and numerical approximations involved and their effects on the calculated stopping power. We discuss the results of such a study for crystalline silicon with protons as irradiating ions. We examine the influences of key approximations in RT-TDDFT nonequilibrium simulations on the calculated electronic stopping power, including approximations related to basis sets, finite size effects, exchange-correlation approximation, pseudopotentials, and more. Finally, we propose a simple and efficient correction scheme to account for the contribution from core-electron excitations to the stopping power, as it was found to be significant for large proton velocities.
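
    In practice, the electronic stopping power is commonly extracted from such RT-TDDFT runs as the slope of the target's electronic-energy gain versus the projectile's displacement. The helper below does exactly that on a mock trajectory; the numbers and the discarded-transient fraction are assumptions for illustration, not values from the study.

```python
import numpy as np

def stopping_power(displacement, electronic_energy, skip=0.2):
    """Electronic stopping power as the average slope dE/dx of the target's
    electronic-energy gain versus projectile displacement; the first `skip`
    fraction of the trajectory is discarded as an initial transient."""
    n0 = int(skip * len(displacement))
    x = np.asarray(displacement[n0:])
    e = np.asarray(electronic_energy[n0:])
    slope, _ = np.polyfit(x, e, 1)          # linear fit; slope = dE/dx
    return slope

# Mock trajectory (assumed numbers): the energy rises roughly linearly with
# distance once the initial transient has decayed.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 20.0, 200)                                    # bohr
e = 0.05 * x + 0.01 * (1.0 - np.exp(-5.0 * x)) + 1e-3 * rng.normal(size=x.size)  # hartree
print(f"S_e ~ {stopping_power(x, e):.3f} hartree/bohr")
```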

  17. A Theory for the Neural Basis of Language Part 2: Simulation Studies of the Model

    Science.gov (United States)

    Baron, R. J.

    1974-01-01

    Computer simulation studies of the proposed model are presented. Processes demonstrated are (1) verbally directed recall of visual experience; (2) understanding of verbal information; (3) aspects of learning and forgetting; (4) the dependence of recognition and understanding on context; and (5) elementary concepts of sentence production. (Author)

  18. From molecular dynamics and particle simulations towards constitutive relations for continuum theory

    NARCIS (Netherlands)

    Luding, Stefan; Koren, B.; Vuik, K.

    2009-01-01

    A challenge of today's research is the realistic simulation of disordered atomistic systems or particulate and granular materials like sand, powders, ceramics or composites, which consist of many millions of atoms/particles. The inhomogeneous fine-structure of such materials makes it very difficult

  19. Characterizing Representational Learning: A Combined Simulation and Tutorial on Perturbation Theory

    Science.gov (United States)

    Kohnle, Antje; Passante, Gina

    2017-01-01

    Analyzing, constructing, and translating between graphical, pictorial, and mathematical representations of physics ideas and reasoning flexibly through them ("representational competence") is a key characteristic of expertise in physics but is a challenge for learners to develop. Interactive computer simulations and University of…

  20. Theory and MHD simulation of fuelling process by Compact Toroid (CT) injection

    International Nuclear Information System (INIS)

    Suzuki, Y.; Hayashi, T.; Kishimoto, Y.

    2001-01-01

    The fuelling process by a spheromak-like compact toroid (SCT) injection is investigated using MHD numerical simulations, where the SCT is injected into a magnetized target plasma region corresponding to a fusion device. In our previous study, a theoretical model to determine the penetration depth of the SCT into the target region was proposed based on the simulation results, in which the SCT is decelerated not only by the magnetic pressure force but also by the magnetic tension force. However, since both ends of the target magnetic field were fixed on the boundary wall in those simulations, the deceleration caused by the magnetic tension force would be overestimated. In this study, the dependence of the SCT penetration process on the boundary condition of the target magnetic field is examined. From these results, the theoretical model we have proposed is improved to include the effect that the wavelength of the target magnetic field bent by the SCT penetration expands with the Alfven velocity. In addition, by carrying out the simulation in a torus domain, it is confirmed that the theoretical model is applicable for estimating the penetration depth of the SCT under such conditions. Furthermore, the dependence of the penetration process on the injection position (side injection versus top/bottom injection) is examined. (author)

  1. Theory and simulation of ion conduction in the pentameric GLIC channel.

    Science.gov (United States)

    Zhu, Fangqiang; Hummer, Gerhard

    2012-10-09

    GLIC is a bacterial member of the large family of pentameric ligand-gated ion channels. To study ion conduction through GLIC and other membrane channels, we combine the one-dimensional potential of mean force for ion passage with a Smoluchowski diffusion model, making it possible to calculate single-channel conductance in the regime of low ion concentrations from all-atom molecular dynamics (MD) simulations. We then perform MD simulations to examine sodium ion conduction through the GLIC transmembrane pore in two systems with different bulk ion concentrations. The ion potentials of mean force, calculated from umbrella sampling simulations with Hamiltonian replica exchange, reveal a major barrier at the hydrophobic constriction of the pore. The relevance of this barrier for ion transport is confirmed by a committor function that rises sharply in the barrier region. From the free evolution of Na(+) ions starting at the barrier top, we estimate the effective diffusion coefficient in the barrier region, and subsequently calculate the conductance of the pore. The resulting diffusivity compares well with the position-dependent ion diffusion coefficient obtained from restrained simulations. The ion conductance obtained from the diffusion model agrees with the value determined via a reactive-flux rate calculation. Our results show that the conformation in the GLIC crystal structure, with an estimated conductance of ~1 picosiemens at 140 mM ion concentration, is consistent with a physiologically open state of the channel.
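
    The combination of a one-dimensional PMF with a Smoluchowski diffusion model leads, at low ion concentration, to a conductance expression in which the channel behaves like resistive slabs weighted by exp(G/kT)/D. The sketch below evaluates one common form of that estimate with an explicit effective cross-section; the PMF barrier, diffusivity, pore area, and the handling of the lateral factor are illustrative assumptions rather than the authors' profiles.

```python
import numpy as np

KB_T = 4.11e-21    # thermal energy at ~298 K, J
Q    = 1.602e-19   # ion charge (monovalent), C

def channel_conductance(z, pmf, diff, area, c_bulk):
    """Low-concentration single-channel conductance from a 1D Smoluchowski
    model: the pore acts as resistive slabs weighted by exp(G/kT)/(D*A).
    z [m], pmf [J], diff [m^2/s], area [m^2], c_bulk [ions/m^3]."""
    integrand = np.exp(pmf / KB_T) / (diff * area)
    # trapezoidal integration of the 'resistance' integral, units s/m^3
    resistance = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z))
    return Q ** 2 * c_bulk / (KB_T * resistance)   # siemens

# Illustrative (assumed) profiles: a 3 kT Gaussian barrier in a 30 A long pore.
z    = np.linspace(-15e-10, 15e-10, 301)
pmf  = 3.0 * KB_T * np.exp(-(z / 5e-10) ** 2)
diff = np.full_like(z, 0.5e-9)                     # m^2/s, ~half the bulk value
area = np.full_like(z, np.pi * (3e-10) ** 2)       # ~3 A effective pore radius
print(f"g ~ {channel_conductance(z, pmf, diff, area, 8.4e25):.2e} S")  # ~140 mM bulk
```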

  2. DIMAC program user's manual

    International Nuclear Information System (INIS)

    Lee, Byoung Oon; Song, Tae Young

    2003-11-01

    DIMAC (A DIspersion Metallic fuel performance Analysis Code) is a computer program for simulating the behavior of dispersion fuel rods under normal operating conditions of HYPER. It computes the one-dimensional temperature distribution and the thermo-mechanical characteristics of the fuel rod under steady-state operating conditions, including swelling and rod deformation. DIMAC was developed based on experience with research reactor fuel. DIMAC consists of a temperature calculation module, a mechanical swelling calculation module, and a fuel deformation calculation module, in order to predict the deformation of a dispersion fuel as a function of power history. Because only limited U-TRU-Zr and TRU-Zr characteristics data are available, material data for U-Pu-Zr and Pu-Zr are used in their place. This report is mainly intended as a user's manual for the DIMAC code. It contains a general description of the code, descriptions of the input parameters and of each subroutine, a sample problem, and the sample input and partial output.

  3. DIMAC program user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Byoung Oon; Song, Tae Young

    2003-11-01

    DIMAC (A DIspersion Metallic fuel performance Analysis Code) is a computer program for simulating the behavior of dispersion fuel rods under normal operating conditions of HYPER. It computes the one-dimensional temperature distribution and the thermo-mechanical characteristics of the fuel rod under steady-state operating conditions, including swelling and rod deformation. DIMAC was developed based on experience with research reactor fuel. DIMAC consists of a temperature calculation module, a mechanical swelling calculation module, and a fuel deformation calculation module, in order to predict the deformation of a dispersion fuel as a function of power history. Because only limited U-TRU-Zr and TRU-Zr characteristics data are available, material data for U-Pu-Zr and Pu-Zr are used in their place. This report is mainly intended as a user's manual for the DIMAC code. It contains a general description of the code, descriptions of the input parameters and of each subroutine, a sample problem, and the sample input and partial output.

  4. Transients from initial conditions based on Lagrangian perturbation theory in N-body simulations II: the effect of the transverse mode

    International Nuclear Information System (INIS)

    Tatekawa, Takayuki

    2014-01-01

    We study the initial conditions for cosmological N-body simulations for precision cosmology. The Zel'dovich approximation has long been applied to set up the initial conditions of N-body simulations. These initial conditions provide incorrect higher-order growth; the errors caused by setting up the initial conditions with perturbation theory are called transients. In a previous paper, we investigated the impact of transients on the non-Gaussianity of the density field by performing cosmological N-body simulations with initial conditions based on first-, second-, and third-order Lagrangian perturbation theory. In this paper, we evaluate the effect of the transverse mode in the third-order Lagrangian perturbation theory on several statistical quantities, such as the power spectrum and non-Gaussianity. We find that the effect of the transverse mode in the third-order Lagrangian perturbation theory is quite small
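
    For context, the first-order (Zel'dovich) displacement that such initial conditions start from can be generated from a density field with a few Fourier transforms, as sketched below. Only the 1LPT term is shown; the 2LPT/3LPT corrections, whose omission produces the transients discussed above, are not included, and the grid size, box size, and input field are assumed for illustration.

```python
import numpy as np

def zeldovich_displacement(delta, box_size):
    """First-order (Zel'dovich) displacement field from a density contrast
    field on a periodic grid: psi_k = i k delta_k / k^2, and particles move as
    x = q + D(t) psi(q).  Higher-order (2LPT/3LPT) terms are deliberately
    not included here."""
    n = delta.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    k2[0, 0, 0] = 1.0                       # avoid division by zero at k = 0
    dk = np.fft.fftn(delta)
    psi = []
    for ki in (kx, ky, kz):
        psi_k = 1j * ki * dk / k2
        psi_k[0, 0, 0] = 0.0                # no net displacement of the box
        psi.append(np.real(np.fft.ifftn(psi_k)))
    return psi                              # [psi_x, psi_y, psi_z]

# Illustrative use on a small white-noise field (assumed, not a physical spectrum).
rng = np.random.default_rng(0)
delta = rng.normal(scale=0.05, size=(32, 32, 32))
psi_x, psi_y, psi_z = zeldovich_displacement(delta, box_size=100.0)
print(psi_x.std(), psi_y.std(), psi_z.std())
```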

  5. Effect of upstream ULF waves on the energetic ion diffusion at the earth's foreshock: Theory, Simulation, and Observations

    Science.gov (United States)

    Otsuka, F.; Matsukiyo, S.; Kis, A.; Hada, T.

    2017-12-01

    Spatial diffusion of energetic particles is an important problem not only from a fundamental physics point of view but also for its application to particle acceleration processes at astrophysical shocks. Quasi-linear theory can provide the spatial diffusion coefficient as a function of the wave turbulence spectrum. By assuming a simple power-law spectrum for the turbulence, the theory has been successfully applied to the diffusion and acceleration of cosmic rays in the interplanetary and interstellar medium. Near the earth's foreshock, however, the wave spectrum often has an intense peak, presumably corresponding to the upstream ULF waves generated by the field-aligned beam (FAB). In this presentation, we discuss numerically and theoretically how the intense ULF peak in the wave spectrum modifies the spatial parallel diffusion of energetic ions. The turbulence is given as a superposition of non-propagating transverse MHD waves in the solar wind rest frame, and its spectrum is a piecewise power law with different power-law indices. The diffusion coefficients are then estimated using quasi-linear theory and test-particle simulations. We find that the presence of the ULF peak produces a concave shape of the diffusion coefficient when it is plotted versus ion energy. The results above are used to discuss the Cluster observations of diffuse ions at the Earth's foreshock. Using the density gradients of the energetic ions detected by the Cluster spacecraft, we determine the e-folding distances (equivalently, the spatial diffusion coefficients) of ions with energies from 10 to 32 keV. The observed e-folding distances are significantly smaller than those estimated in past statistical studies. This suggests that particle acceleration at the foreshock can be more efficient than previously considered. Our test-particle simulation explains the small estimate of the e-folding distances well, using the observed wave turbulence spectrum.
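
    The quasi-linear link between the wave spectrum and parallel diffusion invoked above can be sketched as follows: a resonant wavenumber k_res = Ω/(v|μ|) selects the part of the spectrum that scatters a given pitch angle, and the spatial coefficient follows from a pitch-angle integral. The spectrum shape, the ULF enhancement, and all normalisations below are illustrative assumptions (overall prefactors depend on the spectrum convention), so only the qualitative effect of the ULF peak is meaningful.

```python
import numpy as np

OMEGA, B0 = 1.0, 1.0        # gyrofrequency and background field (normalised)

def spectrum(k, ulf_peak=0.0):
    """Power-law slab spectrum, optionally with a narrow 'ULF' enhancement
    around k = 1 (all normalisations illustrative)."""
    p = k ** -1.67
    if ulf_peak:
        p = p * (1.0 + ulf_peak * np.exp(-(np.log10(k)) ** 2 / 0.05))
    return p

def kappa_parallel(v, ulf_peak=0.0, n_mu=400):
    """Quasi-linear parallel diffusion coefficient,
    kappa = (v^2/8) * int_{-1}^{1} (1 - mu^2)^2 / D_mumu dmu, with the slab
    pitch-angle coefficient D_mumu ~ Omega (1 - mu^2) k_res P(k_res) / B0^2
    evaluated at the resonant wavenumber k_res = Omega / (v |mu|)."""
    mu = np.linspace(1e-3, 1.0 - 1e-3, n_mu)
    k_res = OMEGA / (v * mu)
    d_mumu = OMEGA * (1.0 - mu ** 2) * k_res * spectrum(k_res, ulf_peak) / B0 ** 2
    integrand = (1.0 - mu ** 2) ** 2 / d_mumu
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(mu))
    return v ** 2 / 8.0 * 2.0 * integral    # factor 2: mu range is symmetric

# The ULF enhancement suppresses diffusion for ions resonant with k ~ 1,
# producing a concave energy dependence of kappa.
for v in (0.5, 1.0, 2.0, 4.0):
    print(v, kappa_parallel(v), kappa_parallel(v, ulf_peak=10.0))
```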

  6. Binding constants of membrane-anchored receptors and ligands: A general theory corroborated by Monte Carlo simulations.

    Science.gov (United States)

    Xu, Guang-Kui; Hu, Jinglei; Lipowsky, Reinhard; Weikl, Thomas R

    2015-12-28

    Adhesion processes of biological membranes that enclose cells and cellular organelles are essential for immune responses, tissue formation, and signaling. These processes depend sensitively on the binding constant K2D of the membrane-anchored receptor and ligand proteins that mediate adhesion, which is difficult to measure in the "two-dimensional" (2D) membrane environment of the proteins. An important problem therefore is to relate K2D to the binding constant K3D of soluble variants of the receptors and ligands that lack the membrane anchors and are free to diffuse in three dimensions (3D). In this article, we present a general theory for the binding constants K2D and K3D of rather stiff proteins whose main degrees of freedom are translation and rotation, along membranes and around anchor points "in 2D," or unconstrained "in 3D." The theory generalizes previous results by describing how K2D depends both on the average separation and thermal nanoscale roughness of the apposing membranes, and on the length and anchoring flexibility of the receptors and ligands. Our theoretical results for the ratio K2D/K3D of the binding constants agree with detailed results from Monte Carlo simulations without any data fitting, which indicates that the theory captures the essential features of the "dimensionality reduction" due to membrane anchoring. In our Monte Carlo simulations, we consider a novel coarse-grained model of biomembrane adhesion in which the membranes are represented as discretized elastic surfaces, and the receptors and ligands as anchored molecules that diffuse continuously along the membranes and rotate at their anchor points.

  7. Classical density functional theory and Monte Carlo simulation study of electric double layer in the vicinity of a cylindrical electrode

    Science.gov (United States)

    Zhou, Shiqi; Lamperski, Stanisław; Sokołowska, Marta

    2017-07-01

    We have performed extensive Monte Carlo simulations and classical density functional theory (DFT) calculations of the electrical double layer (EDL) near a cylindrical electrode in a primitive model (PM) modified by incorporating interionic dispersion interactions. It is concluded that (i) in general, an unsophisticated use of the mean field (MF) approximation for the interionic dispersion interactions does not distinctly worsen the classical DFT performance, even if the salt ions considered are highly asymmetric in size (3:1) and charge (5:1), the bulk molar concentration is high (up to a total bulk ion packing fraction of 0.314), and the surface charge density reaches 0.5 C m-2. (ii) More specifically, considering the possible noise in the simulation, the local volume charge density profiles are the most accurately predicted by the classical DFT in all situations, and the co- and counter-ion singlet distributions are also rather accurately predicted, whereas the mean electrostatic potential profile is relatively less accurately predicted due to an integral amplification of minor inaccuracies in the singlet distributions. (iii) It is found that an anomalous layered structure of the co-ion distribution is possible only if the surface charge density is high enough (for example, 0.5 C m-2); moreover, the co-ion valence anomalously influences the peak height of the first counter-ion layer, which decreases with the former. (iv) Even if both the simulation and the DFT indicate an insignificant contribution of the interionic dispersion interaction to the above three 'local' quantities, it is clearly shown by the classical DFT that the interionic dispersion interaction does significantly influence a 'global' quantity like the cylinder surface-aqueous electrolyte interfacial tension, and this may imply a role of the interionic dispersion interaction in explaining specific Hofmeister effects. We elucidate all of the above observations based on the

  8. Oil Spill Response Manual

    NARCIS (Netherlands)

    Marieke Zeinstra; Sandra Heins; Wierd Koops

    2014-01-01

    A two-year programme has been carried out by the NHL University of Applied Sciences together with private companies in the field of oil and chemical spill response to finalize these manuals on oil and chemical spill response. These manuals give a good overview of all aspects of oil and chemical

  9. Technical Manual. The ACT®

    Science.gov (United States)

    ACT, Inc., 2014

    2014-01-01

    This manual contains technical information about the ACT® college readiness assessment. The principal purpose of this manual is to document the technical characteristics of the ACT in light of its intended purposes. ACT regularly conducts research as part of the ongoing formative evaluation of its programs. The research is intended to ensure that…

  10. Eco-Innovation Manual

    DEFF Research Database (Denmark)

    O'Hare, Jamie Alexander; McAloone, Tim C.; Pigosso, Daniela Cristina Antelmi

    The aim of this manual is to introduce a methodology for the implementation of eco-innovation within small and medium-sized companies in developing and emerging economies. The intended audience of this manual is organizations that provide professional services to guide and support manufacturing companies in improving their sustainability performance.

  11. Marketing Research. Instructor's Manual.

    Science.gov (United States)

    Small Business Administration, Washington, DC.

    Prepared for the Administrative Management Course Program, this instructor's manual was developed to serve small-business management needs. The sections of the manual are as follows: (1) Lesson Plan--an outline of material covered, which may be used as a teaching guide, presented in two columns: the presentation, and a step-by-step indication of…

  12. Indoor Air Quality Manual.

    Science.gov (United States)

    Baldwin Union Free School District, NY.

    This manual identifies ways to improve a school's indoor air quality (IAQ) and discusses practical actions that can be carried out by school staff in managing air quality. The manual includes discussions of the many sources contributing to school indoor air pollution and the preventive planning for each including renovation and repair work,…

  13. CRISP instrument manual

    International Nuclear Information System (INIS)

    Bucknall, D.G.; Langridge, Sean

    1997-05-01

    This document is a user manual for CRISP, one of the two neutron reflectometers at ISIS. CRISP is highly automated, allowing precise, reproducible measurements. The manual provides detailed instructions for setting up and running the instrument, and advice on data analysis. (UK)

  14. Finite element simulations with ANSYS workbench 17 theory, applications, case studies

    CERN Document Server

    Lee, Huei-Huang

    2017-01-01

    Finite Element Simulations with ANSYS Workbench 17 is a comprehensive and easy to understand workbook. Printed in full color, it utilizes rich graphics and step-by-step instructions to guide you through learning how to perform finite element simulations using ANSYS Workbench. Twenty-seven real-world case studies are used throughout the book. Many of these case studies are industrial or research projects that you build from scratch. Prebuilt project files are available for download should you run into any problems. Companion videos that demonstrate exactly how to perform each tutorial are also available. Relevant background knowledge is reviewed whenever necessary. To be efficient, the review is conceptual rather than mathematical. Key concepts are inserted whenever appropriate and summarized at the end of each chapter. Additional exercises or extension research problems are provided as homework at the end of each chapter. A learning approach emphasizing hands-on experiences spreads through this entire boo...

  15. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y W [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Zhang, L F [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Huang, J P [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China)

    2007-07-20

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property.

  16. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    International Nuclear Information System (INIS)

    Chen, Y W; Zhang, L F; Huang, J P

    2007-01-01

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property
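
    The two quantities these records compare against real networks, the characteristic path length and the clustering coefficient, are easy to measure on the original Watts-Strogatz construction, as sketched below with networkx. The degree-distribution extension developed in the records above is not reproduced; the graph size, neighbourhood, and rewiring probabilities are assumed for illustration.

```python
import networkx as nx

def small_world_metrics(n=1000, k=10, p=0.1, seed=0):
    """Characteristic path length L and clustering coefficient C of a standard
    Watts-Strogatz graph (n nodes, k ring neighbours, rewiring probability p)."""
    g = nx.connected_watts_strogatz_graph(n, k, p, seed=seed)
    return nx.average_shortest_path_length(g), nx.average_clustering(g)

# Small-world regime: L drops towards the random-graph value while C stays high.
for p in (0.0, 0.01, 0.1, 1.0):
    L, C = small_world_metrics(p=p)
    print(f"p = {p:<5} L = {L:6.2f} C = {C:.3f}")
```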

  17. Theory-based transport simulations of TFTR L-mode temperature profiles

    International Nuclear Information System (INIS)

    Bateman, G.

    1991-01-01

    The temperature profiles from a selection of TFTR L-mode discharges are simulated with the 1-1/2-D BALDUR transport code using a combination of theoretically derived transport models, called the Multi-Mode Model. The present version of the Multi-Mode Model consists of effective thermal diffusivities resulting from trapped electron modes and ion temperature gradient (η i ) modes, which dominate in the core of the plasma, together with resistive ballooning modes, which dominate in the periphery. Within the context of this transport model and the TFTR simulations reported here, the scaling of confinement with heating power comes from the temperature dependence of the η i and trapped electron modes, while the scaling with current comes mostly from resistive ballooning modes. 24 refs., 16 figs., 3 tabs

  18. Finite element simulations with ANSYS Workbench 18 theory, applications, case studies

    CERN Document Server

    Lee,\tHuei-huang

    2018-01-01

    Finite Element Simulations with ANSYS Workbench 18 is a comprehensive and easy to understand workbook. Printed in full color, it utilizes rich graphics and step-by-step instructions to guide you through learning how to perform finite element simulations using ANSYS Workbench. Twenty-seven real-world case studies are used throughout the book. Many of these case studies are industrial or research projects that you build from scratch. Prebuilt project files are available for download should you run into any problems. Companion videos that demonstrate exactly how to perform each tutorial are also available. Relevant background knowledge is reviewed whenever necessary. To be efficient, the review is conceptual rather than mathematical. Key concepts are inserted whenever appropriate and summarized at the end of each chapter. Additional exercises or extension research problems are provided as homework at the end of each chapter. A learning approach emphasizing hands-on experiences is utilized through this entire...

  19. Non-equilibrium Green function method: theory and application in simulation of nanometer electronic devices

    International Nuclear Information System (INIS)

    Do, Van-Nam

    2014-01-01

    We review fundamental aspects of the non-equilibrium Green function method in the simulation of nanometer electronic devices. The method is implemented into our recently developed computer package OPEDEVS to investigate transport properties of electrons in nano-scale devices and low-dimensional materials. Concretely, we present the definition of the four real-time Green functions, the retarded, advanced, lesser and greater functions. Basic relations among these functions and their equations of motion are also presented in detail as the basis for the performance of analytical and numerical calculations. In particular, we review in detail two recursive algorithms, which are implemented in OPEDEVS to solve the Green functions defined in finite-size opened systems and in the surface layer of semi-infinite homogeneous ones. Operation of the package is then illustrated through the simulation of the transport characteristics of a typical semiconductor device structure, the resonant tunneling diodes. (review)
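
    A minimal example of the kind of recursive scheme mentioned above is the decimation iteration for the surface Green function of a semi-infinite homogeneous lead, combined with the Caroli transmission formula for a finite device region. The sketch below does this for a 1D tight-binding chain; the hopping, broadening, and chain length are illustrative assumptions, and the scheme is a generic textbook version, not the OPEDEVS implementation.

```python
import numpy as np

def surface_gf(energy, eps=0.0, t=1.0, eta=1e-6, tol=1e-8, max_iter=200):
    """Surface Green function of a semi-infinite 1D tight-binding lead, via a
    decimation iteration: each pass eliminates every other layer exactly, so
    the number of layers taken into account doubles at every step."""
    e = energy + 1j * eta
    es, eb, a, b = eps, eps, t, t          # surface / bulk energies, couplings
    for _ in range(max_iter):
        g = 1.0 / (e - eb)
        es, eb = es + a * g * b, eb + a * g * b + b * g * a
        a, b = a * g * a, b * g * b
        if abs(a) < tol and abs(b) < tol:
            break
    return 1.0 / (e - es)

def transmission(energy, n_sites=5, eps=0.0, t=1.0, eta=1e-6):
    """Coherent transmission through an n-site chain coupled to two identical
    leads, T = Tr[Gamma_L G Gamma_R G^dagger] (Caroli formula)."""
    sigma = t * surface_gf(energy, eps, t, eta) * t     # scalar lead self-energy
    h = eps * np.eye(n_sites) - t * (np.eye(n_sites, k=1) + np.eye(n_sites, k=-1))
    sig_l = np.zeros((n_sites, n_sites), complex); sig_l[0, 0] = sigma
    sig_r = np.zeros((n_sites, n_sites), complex); sig_r[-1, -1] = sigma
    g = np.linalg.inv((energy + 1j * eta) * np.eye(n_sites) - h - sig_l - sig_r)
    gam_l = 1j * (sig_l - sig_l.conj().T)
    gam_r = 1j * (sig_r - sig_r.conj().T)
    return float(np.real(np.trace(gam_l @ g @ gam_r @ g.conj().T)))

for E in (-1.5, 0.0, 2.5):
    print(E, transmission(E))   # ~1 inside the band |E| < 2t, ~0 outside
```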

  20. Layered interfaces between immiscible liquids studied by density-functional theory and molecular-dynamics simulations.

    Science.gov (United States)

    Geysermans, P; Elyeznasni, N; Russier, V

    2005-11-22

    We present a study of the structure of the interface between two immiscible liquids by density-functional theory and molecular-dynamics calculations. The liquids are modeled by Lennard-Jones potentials, which achieve immiscibility by suppressing the attractive interaction between unlike particles. The density profiles of the liquids display oscillations only in a limited part of the simple-liquid phase diagram (ρ, T). When approaching the liquid-vapor coexistence, a significant depletion appears while the layering behavior of the density profile vanishes. By analogy with the liquid-vapor interface and from the analysis of the adsorption, this behavior is suggested to be strongly related to the drying transition.