WorldWideScience

Sample records for fault simulation acceleration

  1. A Hardware Accelerator for Fault Simulation Utilizing a Reconfigurable Array Architecture

    Directory of Open Access Journals (Sweden)

    Sungho Kang

    1996-01-01

    In order to reduce cost and achieve high speed, a new hardware accelerator for fault simulation has been designed. The architecture of the new accelerator is based on a reconfigurable mesh-type processing element (PE) array. Circuit elements at the same topological level are simulated concurrently, as in a pipelined process. A new parallel simulation algorithm expands all of the gates to two-input gates in order to limit the number of faults to two at each gate, so that the faults can be distributed uniformly throughout the PE array. The PE array reconfiguration operation provides a simulation speed advantage by maximizing the use of each PE cell.
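
    The expansion to two-input gates means each gate contributes exactly two output stuck-at faults, which is what allows the faults to be spread evenly over the PE array. The sketch below shows the underlying stuck-at fault simulation over a tiny levelized two-input netlist in software; the example circuit, gate set, and coverage loop are illustrative assumptions, and the PE-array hardware parallelism itself is not modeled.

```python
# Minimal stuck-at fault simulation over a levelized two-input gate netlist.
# Illustrative sketch only; the paper's PE-array hardware parallelism is not modeled.

GATES = {           # name: (function, input_a, input_b); inputs are gate names or primary inputs
    "g1": ("AND", "a", "b"),
    "g2": ("OR",  "g1", "c"),
    "g3": ("AND", "g2", "a"),
}
PRIMARY_OUTPUTS = ["g3"]

def evaluate(op, x, y):
    return {"AND": x & y, "OR": x | y, "XOR": x ^ y}[op]

def simulate(pattern, fault=None):
    """Simulate one input pattern; 'fault' is (gate_name, stuck_value) or None."""
    values = dict(pattern)
    for name, (op, a, b) in GATES.items():        # dict order == topological order here
        v = evaluate(op, values[a], values[b])
        if fault and fault[0] == name:
            v = fault[1]                          # override with the stuck-at value
        values[name] = v
    return [values[o] for o in PRIMARY_OUTPUTS]

def fault_coverage(patterns):
    faults = [(g, sv) for g in GATES for sv in (0, 1)]   # two faults per gate output
    detected = set()
    for p in patterns:
        good = simulate(p)
        for f in faults:
            if f not in detected and simulate(p, fault=f) != good:
                detected.add(f)
    return len(detected) / len(faults)

patterns = [{"a": 1, "b": 1, "c": 0}, {"a": 0, "b": 1, "c": 1}, {"a": 1, "b": 0, "c": 0}]
print(f"fault coverage: {fault_coverage(patterns):.2f}")
```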

  2. Design of fault simulator

    Energy Technology Data Exchange (ETDEWEB)

    Gabbar, Hossam A. [Faculty of Energy Systems and Nuclear Science, University of Ontario Institute of Technology (UOIT), Ontario, L1H 7K4 (Canada)], E-mail: hossam.gabbar@uoit.ca; Sayed, Hanaa E.; Osunleke, Ajiboye S. [Okayama University, Graduate School of Natural Science and Technology, Division of Industrial Innovation Sciences Department of Intelligent Systems Engineering, Okayama 700-8530 (Japan); Masanobu, Hara [AspenTech Japan Co., Ltd., Kojimachi Crystal City 10F, Kojimachi, Chiyoda-ku, Tokyo 102-0083 (Japan)

    2009-08-15

    A fault simulator is proposed to understand and evaluate all possible fault propagation scenarios, which is an essential part of safety and operation design and support for chemical/production processes. Process models are constructed and integrated with fault models, which are formulated qualitatively using fault semantic networks (FSN). Trend analysis techniques are used to map real-time and simulation quantitative data onto the qualitative fault models for better decision support and tuning of the FSN. The design of the proposed fault simulator is described and applied to an experimental plant (G-Plant) to diagnose several fault scenarios. The proposed fault simulator will enable industrial plants to specify and validate safety requirements as part of safety system design, as well as to support recovery and shutdown operation and disaster management.
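
    As a rough illustration of the trend-analysis step that maps quantitative data onto qualitative fault models, the sketch below classifies a measurement window into a qualitative state from its least-squares slope; the threshold, labels, and tank-level example are illustrative assumptions, not the FSN formulation used by the authors.

```python
import numpy as np

def qualitative_trend(samples, dt=1.0, slope_tol=0.05):
    """Map a window of quantitative samples to a qualitative label via a least-squares slope."""
    t = np.arange(len(samples)) * dt
    slope = np.polyfit(t, samples, 1)[0]
    if slope > slope_tol:
        return "increasing"
    if slope < -slope_tol:
        return "decreasing"
    return "steady"

# Example: a slowly rising tank level that a fault semantic network might link to a leak scenario.
level = 2.0 + 0.1 * np.arange(20) + 0.02 * np.random.randn(20)
print(qualitative_trend(level))     # -> "increasing"
```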

  3. LHC Accelerator Fault Tracker - First Experience

    CERN Document Server

    Apollonio, Andrea; Roderick, Chris; Schmidt, Ruediger; Todd, Benjamin; Wollmann, Daniel

    2016-01-01

    Availability is one of the key performance indicators of LHC operation, being directly correlated with integrated luminosity production. An effective tool for availability tracking is a necessity to ensure a coherent capture of fault information and relevant dependencies on operational modes and beam parameters. At the beginning of LHC Run 2 in 2015, the Accelerator Fault Tracking (AFT) tool was deployed at CERN to track faults or events affecting LHC operation. Information derived from the AFT is crucial for the identification of areas to improve LHC availability, and hence LHC physics production. For the 2015 run, the AFT has been used by members of the CERN Availability Working Group, LHC Machine coordinators and equipment owners to identify the main contributors to downtime and to understand the evolution of LHC availability throughout the year. In this paper the 2015 experience with the AFT for availability tracking is summarised and an overview of the first results as well as an outlook to future develo...

  4. Memory Circuit Fault Simulator

    Science.gov (United States)

    Sheldon, Douglas J.; McClure, Tucker

    2013-01-01

    Spacecraft are known to experience significant memory part-related failures and problems, both pre- and post-launch. These memory parts include both static and dynamic memories (SRAM and DRAM). These failures manifest themselves in a variety of ways, such as pattern-sensitive failures, timing-sensitive failures, etc. Because of the mission-critical role memory devices play in spacecraft architecture and operation, understanding their failure modes is vital to successful mission operation. To support this need, a generic simulation tool that can model different data patterns in conjunction with variable write and read conditions was developed. This tool is a mathematical and graphical way to embed pattern, electrical, and physical information to perform what-if analysis as part of a root cause failure analysis effort.
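
    A minimal sketch of the kind of what-if analysis such a tool supports is given below: a toy memory array with one injected pattern-sensitive coupling fault, exercised by a simplified march-like test. The coupling rule, array size, and test sequence are illustrative assumptions, not the tool's actual fault models.

```python
import numpy as np

class FaultyMemory:
    """Toy SRAM array with an optional pattern-sensitive coupling fault."""

    def __init__(self, rows, cols, victim=None, aggressor=None):
        self.mem = np.zeros((rows, cols), dtype=np.uint8)
        self.victim = victim          # (row, col) cell that gets disturbed
        self.aggressor = aggressor    # neighbouring (row, col) cell that triggers it

    def write(self, r, c, value):
        self.mem[r, c] = value
        # Coupling fault: writing 1 to the aggressor flips the victim cell.
        if (r, c) == self.aggressor and value == 1 and self.victim is not None:
            vr, vc = self.victim
            self.mem[vr, vc] ^= 1

    def read(self, r, c):
        return int(self.mem[r, c])

def march_test(mem, rows, cols):
    """Simplified march-like test: write 0 everywhere; read 0 / write 1 ascending; read 1 ascending."""
    for r in range(rows):
        for c in range(cols):
            mem.write(r, c, 0)
    for r in range(rows):
        for c in range(cols):
            if mem.read(r, c) != 0:
                return f"fault detected at ({r}, {c})"
            mem.write(r, c, 1)
    for r in range(rows):
        for c in range(cols):
            if mem.read(r, c) != 1:
                return f"fault detected at ({r}, {c})"
    return "no fault detected"

mem = FaultyMemory(4, 4, victim=(1, 1), aggressor=(1, 2))
print(march_test(mem, 4, 4))
```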

  5. Hardware Accelerated Simulated Radiography

    International Nuclear Information System (INIS)

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S; Frank, R

    2005-01-01

    We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32 bit floating point texture capabilities to obtain validated solutions to the radiative transport equation for X-rays. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedra that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester. We show that the hardware accelerated solution is faster than the current technique used by scientists
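

    In the absorption-only regime the transport solution reduces to exponential attenuation of each ray (the Beer–Lambert law), which is what the hexahedron projection accumulates. Below is a CPU sketch on a regular voxel grid with axis-aligned rays; the grid, attenuation values, and the axis-aligned simplification are assumptions for illustration, not the paper's curvilinear-mesh GPU algorithm.

```python
import numpy as np

def simulate_radiograph(mu, dx, i0=1.0, axis=0):
    """Absorption-only radiograph: I = I0 * exp(-integral of mu along each ray).

    mu   : 3D array of attenuation coefficients [1/cm]
    dx   : voxel size along the ray direction [cm]
    axis : axis along which rays travel (axis-aligned simplification)
    """
    optical_depth = mu.sum(axis=axis) * dx
    return i0 * np.exp(-optical_depth)

# Example: a denser spherical inclusion inside a uniform block.
n = 64
x, y, z = np.meshgrid(*(np.linspace(-1, 1, n),) * 3, indexing="ij")
mu = np.where(x**2 + y**2 + z**2 < 0.25, 5.0, 0.5)
image = simulate_radiograph(mu, dx=2.0 / n)
print(image.shape, image.min(), image.max())
```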

  6. Accelerator simulation using computers

    International Nuclear Information System (INIS)

    Lee, M.; Zambre, Y.; Corbett, W.

    1992-01-01

    Every accelerator or storage ring system consists of a charged particle beam propagating through a beam line. Although a number of computer programs exist that simulate the propagation of a beam in a given beam line, only a few provide the capabilities for designing, commissioning and operating the beam line. This paper shows how a "multi-track" simulation and analysis code can be used for these applications.

  7. Incipient fault detection and identification in process systems using accelerating neural network learning

    International Nuclear Information System (INIS)

    Parlos, A.G.; Muthusami, J.; Atiya, A.F.

    1994-01-01

    The objective of this paper is to present the development and numerical testing of a robust fault detection and identification (FDI) system using artificial neural networks (ANNs), for incipient (slowly developing) faults occurring in process systems. The challenge in using ANNs in FDI systems arises because of one's desire to detect faults of varying severity, faults from noisy sensors, and multiple simultaneous faults. To address these issues, it becomes essential to have a learning algorithm that ensures quick convergence to a high level of accuracy. A recently developed accelerated learning algorithm, namely a form of an adaptive back propagation (ABP) algorithm, is used for this purpose. The ABP algorithm is used for the development of an FDI system for a process composed of a direct current motor, a centrifugal pump, and the associated piping system. Simulation studies indicate that the FDI system has significantly high sensitivity to incipient fault severity, while exhibiting insensitivity to sensor noise. For multiple simultaneous faults, the FDI system detects the fault with the predominant signature. The major limitation of the developed FDI system is encountered when it is subjected to simultaneous faults with similar signatures. During such faults, the inherent limitation of pattern-recognition-based FDI methods becomes apparent. Thus, alternate, more sophisticated FDI methods become necessary to address such problems. Even though the effectiveness of pattern-recognition-based FDI methods using ANNs has been demonstrated, further testing using real-world data is necessary

  8. Rare event simulation for dynamic fault trees

    NARCIS (Netherlands)

    Ruijters, Enno Jozef Johannes; Reijsbergen, D.P.; de Boer, Pieter-Tjerk; Stoelinga, Mariëlle Ida Antoinette

    2017-01-01

    Fault trees (FT) are a popular industrial method for reliability engineering, for which Monte Carlo simulation is an important technique to estimate common dependability metrics, such as the system reliability and availability. A severe drawback of Monte Carlo simulation is that the number of simulations required to obtain statistically significant estimates grows prohibitively large when the events of interest are rare.

  9. Rare Event Simulation for Dynamic Fault Trees

    NARCIS (Netherlands)

    Ruijters, Enno Jozef Johannes; Reijsbergen, D.P.; de Boer, Pieter-Tjerk; Stoelinga, Mariëlle Ida Antoinette; Tonetta, Stefano; Schoitsch, Erwin; Bitsch, Friedemann

    2017-01-01

    Fault trees (FT) are a popular industrial method for reliability engineering, for which Monte Carlo simulation is an important technique to estimate common dependability metrics, such as the system reliability and availability. A severe drawback of Monte Carlo simulation is that the number of simulations required to obtain statistically significant estimates grows prohibitively large when the events of interest are rare.
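
    To make the drawback concrete: a crude Monte Carlo estimate of a top-event probability p needs on the order of 100/p samples for roughly 10% relative error, which becomes impractical for rare events. The sketch below estimates the unreliability of a small static AND/OR tree with exponential component lifetimes; the gate structure and failure rates are illustrative assumptions, and the dynamic gates treated in the papers are not modeled.

```python
import numpy as np

rng = np.random.default_rng(1)

RATES = {"A": 1e-3, "B": 1e-3, "C": 1e-5}   # failure rates per hour (illustrative)

def sample_top_event(mission_time):
    """One Monte Carlo sample of TOP = (A AND B) OR C failing within the mission time."""
    t = {name: rng.exponential(1.0 / lam) for name, lam in RATES.items()}
    a, b, c = (t[k] <= mission_time for k in ("A", "B", "C"))
    return (a and b) or c

def unreliability(n_samples, mission_time=100.0):
    hits = sum(sample_top_event(mission_time) for _ in range(n_samples))
    return hits / n_samples

# For p ~ 1e-2 a few thousand samples suffice; for p ~ 1e-6 the same accuracy
# would need ~1e8 samples, which is what motivates rare-event techniques.
print(unreliability(20_000))
```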

  10. The TAO Accelerator Simulation Program

    CERN Document Server

    Sagan, David

    2005-01-01

    A new accelerator design and analysis simulation environment based on the BMAD relativistic charged particle dynamics library is in development at Cornell University. Called TAO (Tool for Accelerator Optimization), it is a machine-independent program that implements the essential ingredients needed to solve simulation problems. This includes the ability to: 1. Design lattices subject to constraints, 2. Simulate errors and changes in machine parameters, and 3. Simulate machine commissioning, including simulating data measurement and correction. TAO is designed to be easily customizable so that extending it to solve new and different problems is straightforward. The capability to simultaneously model multiple accelerator lattices, both linacs and storage rings, and injection from one lattice to another allows for the design and commissioning of large multi-stage accelerators. It can also simultaneously model multiple configurations of a single lattice. Single particle, particle beam and macroparticle tracking i...

  11. Dynamic fault simulation of wind turbines using commercial simulation tools

    DEFF Research Database (Denmark)

    Lund, Torsten; Eek, Jarle; Uski, Sanna

    2005-01-01

    This paper compares the commercial simulation tools PSCAD/EMTDC, PowerFactory, SIMPOW and PSS/E for analysing fault sequences defined in the Danish grid code requirements for wind turbines connected to a voltage level below 100 kV. Both symmetrical and unsymmetrical faults are analysed. The deviations between the tools, and the reasons for these deviations, are stated. The simulation models are implemented using the built-in library components of the simulation tools, with the exception of the mechanical drive-train model, which had to be user-modelled in PowerFactory and PSS/E.

  12. Hardware-Accelerated Simulated Radiography

    International Nuclear Information System (INIS)

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S.; Frank, R

    2005-01-01

    We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32-bit floating point texture capabilities to obtain solutions to the radiative transport equation for X-rays. The hardware accelerated solutions are accurate enough to enable scientists to explore the experimental design space with greater efficiency than the methods currently in use. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedral meshes that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester

  13. Simulation of automotive starter faults

    Directory of Open Access Journals (Sweden)

    Dziubiński Mieczysław

    2017-06-01

    The article presents a new diagnostic method for a motor starter based on analysis of the starter's power and the Hall effect. Using Matlab Simulink, the impact of starter sleeve wear on the power characteristics was simulated. The QuickField program was used to analyse the flux propagation and the distribution of magnetic induction for selected states of sleeve wear. In the experimental tests, the distribution of magnetic induction was recorded by a Hall sensor placed in the link slot. The model and the tests made it possible to develop diagnostic patterns for use within OBD diagnostics.

  14. Early detection of incipient faults in power plants using accelerated neural network learning

    International Nuclear Information System (INIS)

    Parlos, A.G.; Jayakumar, M.; Atiya, A.

    1992-01-01

    An important aspect of power plant automation is the development of computer systems able to detect and isolate incipient (slowly developing) faults at the earliest possible stages of their occurrence. In this paper, the development and testing of such a fault detection scheme is presented based on recognition of sensor signatures during various failure modes. An accelerated learning algorithm, namely adaptive backpropagation (ABP), has been developed that allows the training of a multilayer perceptron (MLP) network to a high degree of accuracy, with an order of magnitude improvement in convergence speed. An artificial neural network (ANN) has been successfully trained using the ABP algorithm, and it has been extensively tested with simulated data to detect and classify incipient faults of various types and severity and in the presence of varying sensor noise levels

  15. AESS: Accelerated Exact Stochastic Simulation

    Science.gov (United States)

    Jenkins, David D.; Peterson, Gregory D.

    2011-12-01

    The Stochastic Simulation Algorithm (SSA) developed by Gillespie provides a powerful mechanism for exploring the behavior of chemical systems with small species populations or with important noise contributions. Gene circuit simulations for systems biology commonly employ the SSA method, as do ecological applications. This algorithm tends to be computationally expensive, so researchers seek an efficient implementation of SSA. In this program package, the Accelerated Exact Stochastic Simulation Algorithm (AESS) contains optimized implementations of Gillespie's SSA that improve the performance of individual simulation runs or ensembles of simulations used for sweeping parameters or to provide statistically significant results.
    Program summary:
    Program title: AESS
    Catalogue identifier: AEJW_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJW_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: University of Tennessee copyright agreement
    No. of lines in distributed program, including test data, etc.: 10 861
    No. of bytes in distributed program, including test data, etc.: 394 631
    Distribution format: tar.gz
    Programming language: C for processors, CUDA for NVIDIA GPUs
    Computer: Developed and tested on various x86 computers and NVIDIA C1060 Tesla and GTX 480 Fermi GPUs. The system targets x86 workstations, optionally with multicore processors or NVIDIA GPUs as accelerators.
    Operating system: Tested under Ubuntu Linux OS and CentOS 5.5 Linux OS
    Classification: 3, 16.12
    Nature of problem: Simulation of chemical systems, particularly with low species populations, can be accurately performed using Gillespie's method of stochastic simulation. Numerous variations on the original stochastic simulation algorithm have been developed, including approaches that produce results with statistics that exactly match the chemical master equation (CME) as well as other approaches that approximate the CME.
    Solution ...
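
    For reference, the direct method that AESS accelerates can be written in a few lines; the reversible dimerization network and rate constants below are illustrative, and none of AESS's optimizations (or its CUDA path) are reflected here.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_direct(x0, stoich, propensity, t_end):
    """Gillespie's direct SSA: x0 initial counts, stoich list of state-change vectors,
    propensity(x) -> array of reaction propensities."""
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensity(x)
        a0 = a.sum()
        if a0 <= 0.0:
            break                          # no reaction can fire
        t += rng.exponential(1.0 / a0)     # time to the next reaction
        j = rng.choice(len(a), p=a / a0)   # which reaction fires
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Reversible dimerization: 2A -> B (rate k1), B -> 2A (rate k2)
k1, k2 = 0.002, 0.5
stoich = [np.array([-2, 1]), np.array([2, -1])]
prop = lambda x: np.array([k1 * x[0] * (x[0] - 1) / 2.0, k2 * x[1]])
times, states = gillespie_direct([200, 0], stoich, prop, t_end=20.0)
print(f"{len(times)} events, final counts A={states[-1, 0]:.0f}, B={states[-1, 1]:.0f}")
```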

  16. JACoW Accelerator fault tracking at CERN

    CERN Document Server

    Roderick, Chris; Martin Anido, Daniel; Pade, Steffen; Wilk, Pawel

    2018-01-01

    CERN’s Accelerator Fault Tracking (AFT) system aims to facilitate answering questions like: “Why are we not doing physics when we should be?” and “What can we do to increase machine availability?” People have tracked faults for many years, using numerous, diverse, distributed and un-related systems. As a result, and despite a lot of effort, it has been difficult to get a clear and consistent overview of what is going on, where the problems are, how long they last for, and what is the impact. This is particularly true for the LHC, where faults may induce long recovery times after being fixed. The AFT project was launched in February 2014 as a collaboration between the Controls and Operations groups with stakeholders from the LHC Availability Working Group (AWG). The AFT system has been used successfully in operation for LHC since 2015, yielding a lot of interest and generating a growing user community. In 2017 the scope has been extended to cover the entire Injector Complex. This paper will describe ...

  17. Application of subset simulation methods to dynamic fault tree analysis

    International Nuclear Information System (INIS)

    Liu Mengyun; Liu Jingquan; She Ding

    2015-01-01

    Although fault tree analysis has been implemented in the nuclear safety field over the past few decades, it was recently criticized for its inability to model time-dependent behaviors. Several methods have been proposed to overcome this disadvantage, and the dynamic fault tree (DFT) has become one of the research highlights. By introducing additional dynamic gates, DFT is able to describe dynamic behaviors such as the replacement of spare components or the priority of failure events. Using the Monte Carlo simulation (MCS) approach to solve DFT has attracted rising attention, because it can model the authentic behaviors of systems and avoid the limitations of the analytical method. This paper provides an overview of MCS for DFT analysis, including the sampling of basic events and the propagation rule for logic gates. When calculating rare-event probabilities, a large number of simulations is required in standard MCS. To address this weakness, the subset simulation (SS) approach is applied. Using the concept of conditional probability and the Markov Chain Monte Carlo (MCMC) technique, the SS method accelerates the exploration of the failure region. Two cases are tested to illustrate the performance of the SS approach, and the numerical results suggest that it gives high efficiency when calculating complicated systems with small failure probabilities. (author)
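
    A minimal subset simulation sketch for a generic scalar performance function is shown below: each level keeps the p0 fraction of "most failed" samples as seeds and grows conditional Markov chains from them. The random-walk Metropolis step, the toy limit-state function, and all parameters are illustrative assumptions; the modified Metropolis–Hastings details and the DFT-specific performance function of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def subset_simulation(g, dim, n=1000, p0=0.1, g_fail=0.0, max_levels=10):
    """Estimate P(g(X) <= g_fail) for X ~ N(0, I) with subset simulation."""
    x = rng.standard_normal((n, dim))
    gx = np.array([g(xi) for xi in x])
    prob = 1.0
    for _ in range(max_levels):
        n_seed = int(p0 * n)
        order = np.argsort(gx)                 # smallest g = closest to failure
        threshold = gx[order[n_seed - 1]]
        if threshold <= g_fail:                # enough samples already lie in the failure domain
            return prob * np.mean(gx <= g_fail)
        prob *= p0
        seeds_x, seeds_g = x[order[:n_seed]], gx[order[:n_seed]]
        chains_x, chains_g = [], []
        steps = n // n_seed
        for xi, gi in zip(seeds_x, seeds_g):   # grow each seed into a short conditional chain
            cur_x, cur_g = xi.copy(), gi
            for _ in range(steps):
                cand = cur_x + 0.5 * rng.standard_normal(dim)   # random-walk proposal
                ratio = np.exp(-0.5 * (cand @ cand - cur_x @ cur_x))
                if rng.random() < min(1.0, ratio):
                    g_cand = g(cand)
                    if g_cand <= threshold:                     # stay inside the conditional level
                        cur_x, cur_g = cand, g_cand
                chains_x.append(cur_x.copy())
                chains_g.append(cur_g)
        x, gx = np.array(chains_x), np.array(chains_g)
    return prob * np.mean(gx <= g_fail)

# Toy limit state: failure when the sum of 10 standard normals exceeds 12 (p ~ 7e-5).
g = lambda x: 12.0 - x.sum()
print(f"subset simulation estimate: {subset_simulation(g, dim=10):.2e}")
```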

  18. Simulations of tremor-related creep reveal a weak crustal root of the San Andreas Fault

    Science.gov (United States)

    Shelly, David R.; Bradley, Andrew M.; Johnson, Kaj M.

    2013-01-01

    Deep aseismic roots of faults play a critical role in transferring tectonic loads to shallower, brittle crustal faults that rupture in large earthquakes. Yet, until the recent discovery of deep tremor and creep, direct inference of the physical properties of lower-crustal fault roots has remained elusive. Observations of tremor near Parkfield, CA provide the first evidence for present-day localized slip on the deep extension of the San Andreas Fault and triggered transient creep events. We develop numerical simulations of fault slip to show that the spatiotemporal evolution of triggered tremor near Parkfield is consistent with triggered fault creep governed by laboratory-derived friction laws between depths of 20–35 km on the fault. Simulated creep and observed tremor northwest of Parkfield nearly ceased for 20–30 days in response to small coseismic stress changes of order 10⁴ Pa from the 2003 M6.5 San Simeon Earthquake. Simulated afterslip and observed tremor following the 2004 M6.0 Parkfield earthquake show a coseismically induced pulse of rapid creep and tremor lasting for 1 day followed by a longer 30 day period of sustained accelerated rates due to propagation of shallow afterslip into the lower crust. These creep responses require very low effective normal stress of ~1 MPa on the deep San Andreas Fault and near-neutral-stability frictional properties expected for gabbroic lower-crustal rock.

  19. A Fault Sample Simulation Approach for Virtual Testability Demonstration Test

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yong; QIU Jing; LIU Guanjun; YANG Peng

    2012-01-01

    Virtual testability demonstration test has many advantages, such as low cost, high efficiency, low risk and few restrictions. It brings new requirements to fault sample generation. A fault sample simulation approach for virtual testability demonstration test based on stochastic process theory is proposed. First, the similarities and differences of fault sample generation between physical testability demonstration test and virtual testability demonstration test are discussed. Second, it is pointed out that the fault occurrence process subject to perfect repair is a renewal process. Third, the interarrival time distribution function of the next fault event is given. Steps and flowcharts of fault sample generation are introduced. The number of faults and their occurrence times are obtained by statistical simulation. Finally, experiments are carried out on a stable tracking platform. Because a variety of life distributions and maintenance modes are considered and some assumptions are removed, the size and structure of the simulated fault samples are closer to the actual results and more reasonable. The proposed method can effectively guide fault injection in virtual testability demonstration test.
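
    The renewal-process sampling step described above can be sketched directly: interarrival times are drawn from the life distribution and accumulated until the test duration is exceeded, giving the number of faults and their occurrence times for one virtual test. The Weibull life distribution and its parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def generate_fault_times(test_duration, shape=1.5, scale=500.0):
    """Renewal process with Weibull(shape, scale) times-to-failure and perfect repair."""
    times, t = [], 0.0
    while True:
        t += scale * rng.weibull(shape)     # next interarrival time
        if t > test_duration:
            return np.array(times)
        times.append(t)

def fault_sample(n_runs, test_duration=2000.0):
    """Fault-count statistics over repeated virtual demonstration tests."""
    counts = [len(generate_fault_times(test_duration)) for _ in range(n_runs)]
    return np.mean(counts), np.std(counts)

mean_faults, std_faults = fault_sample(1000)
print(f"mean faults per test: {mean_faults:.2f} +/- {std_faults:.2f}")
```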

  20. Modeling and Fault Simulation of Propellant Filling System

    International Nuclear Information System (INIS)

    Jiang Yunchun; Liu Weidong; Hou Xiaobo

    2012-01-01

    The propellant filling system is one of the key ground facilities at the launch site of rockets that use liquid propellant. There is an urgent demand for ensuring and improving its reliability and safety, and Failure Mode Effect Analysis (FMEA) is a good approach to meet it. Driven by the need for more fault information for FMEA, and because of the high expense of propellant filling, the working process of the propellant filling system under fault conditions was studied in this paper by simulation based on AMESim. First, based on an analysis of its structure and function, the filling system was decomposed into modules and the mathematical models of every module were given, on the basis of which the whole filling system was modeled in AMESim. Second, a general method of injecting faults into a dynamic system was proposed and, as an example, two typical faults - leakage and blockage - were injected into the model of the filling system, yielding two fault models in AMESim. After that, fault simulations were run and the dynamic characteristics of several key parameters were analyzed under fault conditions. The results show that the model can effectively simulate the two faults, and can be used to provide guidance for maintenance and improvement of the filling system.

  1. Hardwired interlock system with fault latchability and annunciation panel for electron accelerators

    International Nuclear Information System (INIS)

    Mukesh Kumar; Roychoudhury, P.; Nimje, V.T.

    2011-01-01

    A hard-wired interlock system has been designed, developed, installed and tested to ensure a healthy status for the interlock signals coming from the various sub-systems of electron accelerators as digital inputs. Each electron accelerator has approximately ninety-six interlock signals. The hardwired interlock system consists of a twelve-channel, 19-inch rack-mountable hard-wired interlock module of 4U height. Digital inputs are fed to the hard-wired interlock module in the form of 24 V dc for logic 'TRUE' and 0 V for logic 'FALSE'. These signals are flow signals to ensure cooling of the various sub-systems, signals from the klystron modulator system in the RF Linac to ensure its healthy state to start, signals from the high voltage system of the DC accelerator, vacuum signals from the vacuum system to ensure proper vacuum in the electron accelerator, door interlock signals, air flow signals, and area search and secure signals. This hard-wired interlock system ensures the safe start-up, fault annunciation and alarm, fault latchability, and fail-safe operation of the electron accelerators. The safe start-up feature ensures that the beam generation system can be switched ON only when cooling of all the electron accelerator sub-systems is confirmed, all the fault signals of the high voltage generation system have been attended to, proper vacuum is achieved inside the beam transport system, all the doors are closed and the various areas have been searched and secured manually. The fault annunciation and alarm feature ensures that, during start-up and operation of the electron accelerators, if any fault occurs, that fault signal window keeps flashing in red colour and an alarm sounds until the operator acknowledges the fault. Once acknowledged, the flashing and alarm stop but the red display of the window remains until the operator clears the fault. The fault latchability feature ensures that if any fault has occurred, the accelerator cannot be started again until the operator resets that interlock signal. Fail-safe feature ensures
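
    A behavioural sketch of the latching, annunciation, and safe start-up rules described above is given below in software, purely to illustrate the logic; the real system is hard-wired, and the channel names and reduced signal list are illustrative assumptions.

```python
class InterlockChannel:
    """One latching interlock channel: a fault latches until acknowledged and reset."""

    def __init__(self, name):
        self.name = name
        self.healthy_input = True
        self.latched_fault = False
        self.acknowledged = False

    def update(self, healthy_input):
        self.healthy_input = healthy_input
        if not healthy_input:
            self.latched_fault = True      # fault latches even if the input recovers
            self.acknowledged = False

    def acknowledge(self):                  # operator silences the alarm
        self.acknowledged = True

    def reset(self):                        # allowed only after the cause is cleared
        if self.healthy_input:
            self.latched_fault = False

class InterlockSystem:
    def __init__(self, channel_names):
        self.channels = {n: InterlockChannel(n) for n in channel_names}

    def start_permitted(self):
        """Safe start-up: all inputs healthy and no latched faults remain."""
        return all(c.healthy_input and not c.latched_fault
                   for c in self.channels.values())

    def annunciate(self):
        return [(c.name, "FLASHING" if not c.acknowledged else "STEADY RED")
                for c in self.channels.values() if c.latched_fault]

sys_ = InterlockSystem(["cooling_flow", "vacuum", "door_closed", "area_search"])
sys_.channels["vacuum"].update(False)          # vacuum fault occurs
sys_.channels["vacuum"].update(True)           # vacuum recovers, but the fault stays latched
print(sys_.start_permitted(), sys_.annunciate())
sys_.channels["vacuum"].acknowledge()
sys_.channels["vacuum"].reset()
print(sys_.start_permitted(), sys_.annunciate())
```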

  2. FPGA-accelerated simulation of computer systems

    CERN Document Server

    Angepat, Hari; Chung, Eric S; Hoe, James C; Chung, Eric S

    2014-01-01

    To date, the most common form of simulator of computer systems is software-based, running on standard computers. One promising approach to improve simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field programmable gate arrays (FPGAs). This manuscript describes various approaches to using FPGAs to accelerate software-implemented simulation of computer systems and selected simulators that incorporate those techniques. More precisely, we describe a simulation architecture taxonomy that incorporates a simulation architecture specifically designed f...

  3. Parallel beam dynamics simulation of linear accelerators

    International Nuclear Information System (INIS)

    Qiang, Ji; Ryne, Robert D.

    2002-01-01

    In this paper we describe parallel particle-in-cell methods for the large scale simulation of beam dynamics in linear accelerators. These techniques have been implemented in the IMPACT (Integrated Map and Particle Accelerator Tracking) code. IMPACT is being used to study the behavior of intense charged particle beams and as a tool for the design of next-generation linear accelerators. As examples, we present applications of the code to the study of emittance exchange in high intensity beams and to the study of beam transport in a proposed accelerator for the development of accelerator-driven waste transmutation technologies

  4. Monte Carlo simulations and benchmark studies at CERN's accelerator chain

    CERN Document Server

    AUTHOR|(CDS)2083190; Brugger, Markus

    2016-01-01

    Mixed particle and energy radiation fields present at the Large Hadron Collider (LHC) and its accelerator chain are responsible for failures on electronic devices located in the vicinity of the accelerator beam lines. These radiation effects on electronics and, more generally, the overall radiation damage issues have a direct impact on component and system lifetimes, as well as on maintenance requirements and radiation exposure to personnel who have to intervene and fix existing faults. The radiation environments and respective radiation damage issues along CERN's accelerator chain were studied in the framework of the CERN Radiation to Electronics (R2E) project and are presented herein. The important interplay between Monte Carlo simulations and radiation monitoring is also highlighted.

  5. Simulating spontaneous aseismic and seismic slip events on evolving faults

    Science.gov (United States)

    Herrendörfer, Robert; van Dinther, Ylona; Pranger, Casper; Gerya, Taras

    2017-04-01

    Plate motion along tectonic boundaries is accommodated by different slip modes: steady creep, seismic slip and slow slip transients. Due mainly to indirect observations and the difficulty of scaling results from laboratory experiments to nature, it remains enigmatic which fault conditions favour certain slip modes. Therefore, we are developing a numerical modelling approach that is capable of simulating different slip modes together with the long-term fault evolution in a large-scale tectonic setting. We extend the 2D, continuum mechanics-based, visco-elasto-plastic thermo-mechanical model that was designed to simulate slip transients in large-scale geodynamic simulations (van Dinther et al., JGR, 2013). We improve the numerical approach to accurately treat the non-linear problem of plasticity (see also EGU 2017 abstract by Pranger et al.). To resolve a wide slip rate spectrum on evolving faults, we develop an invariant reformulation of the conventional rate-and-state dependent friction (RSF) and adapt the time step (Lapusta et al., JGR, 2000). A crucial part of this development is a conceptual ductile fault zone model that relates slip rates along discrete planes to the effective macroscopic plastic strain rates in the continuum. We test our implementation first in a simple 2D setup with a single fault zone that has a predefined initial thickness. Results show that, in the case of steady creep and very slow slip transients, deformation localizes to a bell-shaped strain rate profile across the fault zone, which suggests that a length scale across the fault zone may exist. This continuum length scale would overcome the common mesh-dependency in plasticity simulations and question the conventional treatment of aseismic slip on infinitely thin fault zones. We test the introduction of a diffusion term (similar to the damage description in Lyakhovsky et al., JMPS, 2011) into the state evolution equation and its effect on (de-)localization during faster slip events. We compare

  6. Preliminary simulation studies of accelerator cavity loading

    International Nuclear Information System (INIS)

    Faehl, R.J.

    1980-06-01

    Two-dimensional simulations of loading effects in a 350 MHz accelerator cavity have been performed. Electron currents of 1-10 kA have been accelerated in 5 MV/m fields. Higher order cavity modes induced by the beam may lead to emittance growth. Operation in an autoaccelerator mode has been studied

  7. Kinematic Earthquake Ground‐Motion Simulations on Listric Normal Faults

    KAUST Repository

    Passone, Luca

    2017-11-28

    Complex finite-faulting source processes have important consequences for near-source ground motions, but empirical ground-motion prediction equations still lack near-source data and hence cannot fully capture near-fault shaking effects. Using a simulation-based approach, we study the effects of specific source parameterizations on near-field ground motions where empirical data are limited. Here, we investigate the effects of fault listricity through near-field kinematic ground-motion simulations. Listric faults are defined as curved faults in which dip decreases with depth, resulting in a concave upward profile. The listric profiles used in this article are built by applying a specific shape function and varying the initial dip and the degree of listricity. Furthermore, we consider variable rupture speed and slip distribution to generate ensembles of kinematic source models. These ensembles are then used in a generalized 3D finite-difference method to compute synthetic seismograms; the corresponding shaking levels are then compared in terms of peak ground velocities (PGVs) to quantify the effects of breaking fault planarity. Our results show two general features: (1) as listricity increases, the PGVs decrease on the footwall and increase on the hanging wall, and (2) constructive interference of seismic waves emanated from the listric fault causes PGVs over two times higher than those observed for the planar fault. Our results are relevant for seismic hazard assessment for near-fault areas for which observations are scarce, such as in the listric Campotosto fault (Italy) located in an active seismic area under a dam.

  8. Kinematic Earthquake Ground‐Motion Simulations on Listric Normal Faults

    KAUST Repository

    Passone, Luca; Mai, Paul Martin

    2017-01-01

    Complex finite-faulting source processes have important consequences for near-source ground motions, but empirical ground-motion prediction equations still lack near-source data and hence cannot fully capture near-fault shaking effects. Using a simulation-based approach, we study the effects of specific source parameterizations on near-field ground motions where empirical data are limited. Here, we investigate the effects of fault listricity through near-field kinematic ground-motion simulations. Listric faults are defined as curved faults in which dip decreases with depth, resulting in a concave upward profile. The listric profiles used in this article are built by applying a specific shape function and varying the initial dip and the degree of listricity. Furthermore, we consider variable rupture speed and slip distribution to generate ensembles of kinematic source models. These ensembles are then used in a generalized 3D finite-difference method to compute synthetic seismograms; the corresponding shaking levels are then compared in terms of peak ground velocities (PGVs) to quantify the effects of breaking fault planarity. Our results show two general features: (1) as listricity increases, the PGVs decrease on the footwall and increase on the hanging wall, and (2) constructive interference of seismic waves emanated from the listric fault causes PGVs over two times higher than those observed for the planar fault. Our results are relevant for seismic hazard assessment for near-fault areas for which observations are scarce, such as in the listric Campotosto fault (Italy) located in an active seismic area under a dam.

  9. Automated Bearing Fault Diagnosis Using 2D Analysis of Vibration Acceleration Signals under Variable Speed Conditions

    Directory of Open Access Journals (Sweden)

    Sheraz Ali Khan

    2016-01-01

    Traditional fault diagnosis methods of bearings detect characteristic defect frequencies in the envelope power spectrum of the vibration signal. These defect frequencies depend upon the inherently nonstationary shaft speed. Time-frequency and subband signal analysis of vibration signals has been used to deal with random variations in speed, whereas design variations require retraining a new instance of the classifier for each operating speed. This paper presents an automated approach for fault diagnosis in bearings based upon the 2D analysis of vibration acceleration signals under variable speed conditions. Images created from the vibration signals exhibit unique textures for each fault, which show minimal variation with shaft speed. Microtexture analysis of these images is used to generate distinctive fault signatures for each fault type, which can be used to detect those faults at different speeds. A k-nearest neighbor classifier trained using fault signatures generated for one operating speed is used to detect faults at all the other operating speeds. The proposed approach is tested on the bearing fault dataset of Case Western Reserve University, and the results are compared with those of a spectrum imaging-based approach.
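
    The overall pipeline, reshaping a vibration segment into a grey-scale image, extracting a texture signature, and classifying it with k-NN, can be sketched as below. The local-binary-pattern-style histogram, the synthetic signals, and the parameters are illustrative assumptions and differ from the paper's microtexture analysis and experimental data.

```python
import numpy as np

def signal_to_image(signal, size=64):
    """Reshape a vibration segment into a square grey-scale image in [0, 255]."""
    seg = np.asarray(signal[: size * size], dtype=float)
    seg = 255.0 * (seg - seg.min()) / (seg.max() - seg.min() + 1e-12)
    return seg.reshape(size, size)

def texture_histogram(img):
    """Crude 8-neighbour local-binary-pattern histogram as a texture signature."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
                  img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    codes = sum((n >= c).astype(np.uint8) << k for k, n in enumerate(neighbours))
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

def knn_predict(query, train_feats, train_labels, k=3):
    d = np.linalg.norm(train_feats - query, axis=1)
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(train_labels[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Illustrative use with synthetic signals standing in for healthy / outer-race fault data.
rng = np.random.default_rng(0)
healthy = [np.sin(0.05 * np.arange(4096)) + 0.1 * rng.standard_normal(4096) for _ in range(5)]
faulty = [np.sin(0.05 * np.arange(4096)) * (1 + 0.5 * (np.arange(4096) % 200 < 5))
          + 0.1 * rng.standard_normal(4096) for _ in range(5)]
feats = np.array([texture_histogram(signal_to_image(s)) for s in healthy + faulty])
labels = np.array(["healthy"] * 5 + ["fault"] * 5)
query = texture_histogram(signal_to_image(faulty[0] + 0.05 * rng.standard_normal(4096)))
print(knn_predict(query, feats, labels))
```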

  10. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators

    Science.gov (United States)

    Yim, Keun Soo

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation gains more importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of

  11. Deflation acceleration of lattice QCD simulations

    International Nuclear Information System (INIS)

    Luescher, Martin

    2007-01-01

    Close to the chiral limit, many calculations in numerical lattice QCD can potentially be accelerated using low-mode deflation techniques. In this paper it is shown that the recently introduced domain-decomposed deflation subspaces can be propagated along the field trajectories generated by the Hybrid Monte Carlo (HMC) algorithm with a modest effort. The quark forces that drive the simulation may then be computed using a deflation-accelerated solver for the lattice Dirac equation. As a consequence, the computer time required for the simulations is significantly reduced and an improved scaling behaviour of the simulation algorithm with respect to the quark mass is achieved
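
    The core idea, removing approximate low-mode content so that the iterative solver only has to work in the remaining well-conditioned subspace, can be illustrated on a generic symmetric positive-definite system. The toy matrix, the exact low-mode subspace, and the initial-guess style of deflation below are illustrative assumptions; they are not the domain-decomposed deflation subspaces or the lattice Dirac operator of the paper.

```python
import numpy as np

def cg(A, b, x0, tol=1e-10, max_iter=5000):
    """Plain conjugate gradient for SPD A, returning (solution, iterations)."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, k + 1
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, max_iter

def deflated_cg(A, b, V, tol=1e-10):
    """Deflation by an exact coarse solve over span(V), followed by ordinary CG."""
    coarse = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)   # remove low-mode content
    return cg(A, b, coarse, tol=tol)

# SPD test matrix with a few very small eigenvalues ("near-zero modes").
rng = np.random.default_rng(3)
n = 400
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.concatenate([np.full(8, 1e-4), rng.uniform(1.0, 2.0, n - 8)])
A = (Q * eigs) @ Q.T
b = rng.standard_normal(n)

V = Q[:, :8]                                   # (here: exact) low-mode subspace
_, iters_plain = cg(A, b, np.zeros(n))
_, iters_defl = deflated_cg(A, b, V)
print(f"CG iterations: plain={iters_plain}, deflated={iters_defl}")
```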

  12. Deflation acceleration of lattice QCD simulations

    CERN Document Server

    Lüscher, Martin

    2007-01-01

    Close to the chiral limit, many calculations in numerical lattice QCD can potentially be accelerated using low-mode deflation techniques. In this paper it is shown that the recently introduced domain-decomposed deflation subspaces can be propagated along the field trajectories generated by the Hybrid Monte Carlo (HMC) algorithm with a modest effort. The quark forces that drive the simulation may then be computed using a deflation-accelerated solver for the lattice Dirac equation. As a consequence, the computer time required for the simulations is significantly reduced and an improved scaling behaviour of the simulation algorithm with respect to the quark mass is achieved.

  13. Fuzzy delay model based fault simulator for crosstalk delay fault test ...

    Indian Academy of Sciences (India)

    In this paper, a fuzzy delay model based crosstalk delay fault simulator is proposed. As design ... To find the quality of non-robust tests, a fuzzy delay ...

  14. Fuzzy delay model based fault simulator for crosstalk delay fault test ...

    Indian Academy of Sciences (India)

    In this paper, a fuzzy delay model based crosstalk delay fault simulator is proposed. As design trends move towards nanometer technologies, a larger number of new parameters affects the delay of the component. Fuzzy delay models are ideal for modelling the uncertainty found in the design and manufacturing steps.
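
    One simple way to picture a fuzzy delay model is with triangular fuzzy numbers, where each gate delay carries (min, nominal, max) values and path delays are combined by fuzzy addition and a fuzzy maximum. The sketch below uses this representation with illustrative numbers and a component-wise maximum, which is an assumption rather than the paper's actual model.

```python
from dataclasses import dataclass

@dataclass
class TriFuzzy:
    """Triangular fuzzy number (lo, mid, hi) representing an uncertain gate delay [ps]."""
    lo: float
    mid: float
    hi: float

    def __add__(self, other):                 # delay accumulation along a path
        return TriFuzzy(self.lo + other.lo, self.mid + other.mid, self.hi + other.hi)

def fuzzy_max(a, b):                          # latest-arrival combination at a gate
    return TriFuzzy(max(a.lo, b.lo), max(a.mid, b.mid), max(a.hi, b.hi))

def membership(f, t):
    """Degree to which arrival time t is possible under the fuzzy delay f."""
    if f.lo <= t <= f.mid:
        return (t - f.lo) / (f.mid - f.lo)
    if f.mid < t <= f.hi:
        return (f.hi - t) / (f.hi - f.mid)
    return 0.0

# Two-input gate driven by two paths whose delays include crosstalk-induced uncertainty.
path_a = TriFuzzy(90, 100, 130) + TriFuzzy(45, 50, 70)
path_b = TriFuzzy(120, 140, 180)
arrival = fuzzy_max(path_a, path_b)
print(arrival, "membership of t = 170 ps:", membership(arrival, 170))
```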

  15. Quantitative evaluation of fault coverage for digitalized systems in NPPs using simulated fault injection method

    International Nuclear Information System (INIS)

    Kim, Suk Joon

    2004-02-01

    Even though digital systems have numerous advantages, such as precise processing of data and enhanced calculation capability, over conventional analog systems, there is a strong restriction on the application of digital systems to safety systems in nuclear power plants (NPPs). This is because we do not fully understand the reliability of digital systems and therefore cannot guarantee their safety. However, as the need to introduce digital systems into the safety systems of NPPs increases, the need for quantitative analysis of the safety of digital systems is also increasing. NPPs, which are quite conservative in terms of safety, require proof of the reliability of digital systems when applying them, and digital systems applied to NPPs are required to increase the overall safety of the plant. However, it is very difficult to evaluate the reliability of digital systems because they include complex fault processing mechanisms at various levels of the system. Software is another obstacle in the reliability assessment of systems that require ultra-high reliability. In this work, the fault detection coverage of a digital system is evaluated using a simulated fault injection method. The target system is the Local Coincidence Logic (LCL) processor in the Digital Plant Protection System (DPPS). However, because it is difficult to design the LCL processor identically for evaluating the fault detection coverage, the LCL system had to be simplified. The simulations for evaluating the fault detection coverage of components are divided into two cases, and the failure rates of components are evaluated using MIL-HDBK-217F. Using these results, the fault detection coverage of the simplified LCL system is evaluated. In the experiments, heartbeat signals were simply emitted at regular intervals after executing the logic, without a self-checking algorithm. When faults are injected into the simplified system, fault occurrence can be detected by
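
    The coverage-estimation loop itself is easy to sketch: inject a fault into a model of the system, run it, and record whether the detection mechanism (here a watchdog that expects periodic heartbeats) flags it; coverage is the detected fraction. The toy processor model, fault types, and detection rule below are illustrative assumptions, not the LCL logic or the MIL-HDBK-217F data.

```python
import random

random.seed(4)

def run_cycle(state, fault):
    """One execution cycle of a toy processor model; returns True if a heartbeat is emitted."""
    if fault == "clock_stop":
        return False                      # execution halts, so no heartbeat at all
    if fault == "memory_bitflip":
        state["accumulator"] ^= 1 << random.randrange(8)
        return True                       # logic still runs, heartbeat still emitted
    return True

def detected_by_watchdog(fault, cycles=100):
    """Watchdog detection: a fault is flagged only if heartbeats stop arriving."""
    state = {"accumulator": 0}
    return any(not run_cycle(state, fault) for _ in range(cycles))

def fault_detection_coverage(n_injections=1000):
    fault_types = ["clock_stop", "memory_bitflip"]
    detected = sum(detected_by_watchdog(random.choice(fault_types))
                   for _ in range(n_injections))
    return detected / n_injections

# Roughly 0.5 here: heartbeat monitoring alone misses data faults that do not halt execution,
# which is why the abstract distinguishes operation with and without a self-checking algorithm.
print(f"estimated fault detection coverage: {fault_detection_coverage():.2f}")
```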

  16. An efficient CMOS bridging fault simulator with SPICE accuracy

    NARCIS (Netherlands)

    Di, C.; Jess, J.A.G.

    1996-01-01

    This paper presents an alternative modeling and simulation method for CMOS bridging faults. The significance of the method is the introduction of a set of generic-bridge tables which characterize the bridged outputs for each bridge and a set of generic-cell tables which characterize how each cell

  17. Accelerating Climate Simulations Through Hybrid Computing

    Science.gov (United States)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

    Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPU) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for connection, we identified two challenges: (1) identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors, two IBM QS22 Cell blades, connected with Infiniband), allowing for seamlessly offloading compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approx. 10% network overhead.

  18. New Dynamic Library of Reverse Osmosis Plants with Fault Simulation

    International Nuclear Information System (INIS)

    Luis, Palacin; Fernando, Tadeo; Cesar, de Prada; Elfil, Hamza

    2009-01-01

    This paper presents an update of a dynamic library of reverse osmosis plants (ROSIM). The library has been developed to be used for optimization, simulation, controller testing and fault detection strategies, and a simple fault-tolerant control scheme is tested. ROSIM is based on a set of components representing the different units of a typical reverse osmosis plant (such as sand filters, cartridge filters, energy recovery exchangers, pumps, membranes, storage tanks, control systems, valves, etc.). Different types of fouling (calcium carbonate, iron hydroxide, biofouling) have been added, and the mathematical model of the reverse osmosis membranes proposed in the original library has been improved.

  19. Simulation of different types of faults of Northern Iraq power system

    Energy Technology Data Exchange (ETDEWEB)

    Muhammad, Aree A. [University of Salahaddin-Hawler, College of Engineering, Department of Electrical Engineering (Iraq)], e-mail: areeakram@maktoob.com

    2011-07-01

    This paper presents and analyses the results of a simulation of various defects that have been identified in Northern Iraq's power system and which need to be addressed so as to allow that system to expand. This study was done using an Ipsa simulator and Matlab software and yielded information that will be useful in the expansion of operations and strengthening of the system's capacity to deal with operational difficulties. Fault studies are important since they help identify the areas where guidance is needed for proper relay setting and coordination, for designing circuit breakers with the capacity to handle each type of fault, and for rating the protective switchgears. As this paper states, negative sequence current may cause the temperature of a rotor to rise, accelerating wear on the insulation and causing mechanical stress on the rotating components. For this reason, negative sequence current protection should be given serious consideration.

  20. GPU Accelerated Surgical Simulators for Complex Morhpology

    DEFF Research Database (Denmark)

    Mosegaard, Jesper; Sørensen, Thomas Sangild

    2005-01-01

    a spring-mass system in order to simulate a complex organ such as the heart. Computations are accelerated by taking advantage of modern graphics processing units (GPUs). Two GPU implementations are presented. They vary in their generality of spring connections and in the speedup factor they achieve...

  1. Fault Detection in High Speed Helical Gears Considering Signal Processing Method in Real Simulation

    Directory of Open Access Journals (Sweden)

    Amir Ali Tabatabai Adnani

    In the present study, in order to detect gearmesh faults, two engaged gears based on the research department of a major automotive company were modeled. First, the fault was induced in the output gear using the CATIA software. Then, the faulty and non-faulty gearmeshes were modeled to find the fault pattern and to predict and estimate the failure of the gearmesh. The induced defect corresponds to a fault that frequently occurs on gear teeth in practice. In order to record the acceleration signals for the decomposition algorithm, the accelerometer is mounted at an accessible location on the output shaft to recognize the pattern. Then, for a more realistic simulation, noise is added to the output signal. In the first step the noise is removed from the signals by means of a Butterworth low-pass digital filter; after that, using Empirical Mode Decomposition (EMD), the signals are decomposed into Intrinsic Mode Functions (IMFs) and every IMF is examined using the Instantaneous Frequency (IF) by way of the Hilbert Transform (HT). For this purpose a code was developed in MATLAB. Then, in order to detect the presence of the fault, the frequency spectra of the IMFs are created and the defect is detected at the gearmesh frequency of the spectrum.
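
    The described signal-processing chain (low-pass Butterworth filtering, EMD into IMFs, Hilbert-based instantaneous frequency, and spectrum inspection at the gearmesh frequency) can be sketched in Python as below; the synthetic signal, the PyEMD package, and all parameters are stand-ins for the MATLAB code and measured accelerations, so this is an assumption-laden illustration rather than the authors' implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from PyEMD import EMD          # pip install EMD-signal

fs = 10_000                                          # sampling rate [Hz]
t = np.arange(0, 1.0, 1.0 / fs)
gearmesh = 0.8 * np.sin(2 * np.pi * 600 * t)         # illustrative gearmesh tone
fault_sidebands = 0.3 * np.sin(2 * np.pi * 600 * t) * np.sin(2 * np.pi * 25 * t)
signal = gearmesh + fault_sidebands + 0.2 * np.random.randn(t.size)

# 1) Butterworth low-pass filtering to suppress measurement noise.
b, a = butter(4, 1500, btype="low", fs=fs)
filtered = filtfilt(b, a, signal)

# 2) Empirical Mode Decomposition into intrinsic mode functions (IMFs).
imfs = EMD().emd(filtered)

# 3) Hilbert transform -> instantaneous frequency of each IMF.
def instantaneous_frequency(imf):
    phase = np.unwrap(np.angle(hilbert(imf)))
    return np.diff(phase) * fs / (2 * np.pi)

# 4) Inspect the spectrum of the first IMF around the gearmesh frequency.
spectrum = np.abs(np.fft.rfft(imfs[0]))
freqs = np.fft.rfftfreq(imfs[0].size, 1.0 / fs)
peak = freqs[np.argmax(spectrum)]
print(f"{len(imfs)} IMFs, dominant frequency of IMF 1: {peak:.0f} Hz")
print(f"mean instantaneous frequency of IMF 1: {instantaneous_frequency(imfs[0]).mean():.0f} Hz")
```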

  2. Faster quantum chemistry simulation on fault-tolerant quantum computers

    International Nuclear Information System (INIS)

    Cody Jones, N; McMahon, Peter L; Yamamoto, Yoshihisa; Whitfield, James D; Yung, Man-Hong; Aspuru-Guzik, Alán; Van Meter, Rodney

    2012-01-01

    Quantum computers can in principle simulate quantum physics exponentially faster than their classical counterparts, but some technical hurdles remain. We propose methods which substantially improve the performance of a particular form of simulation, ab initio quantum chemistry, on fault-tolerant quantum computers; these methods generalize readily to other quantum simulation problems. Quantum teleportation plays a key role in these improvements and is used extensively as a computing resource. To improve execution time, we examine techniques for constructing arbitrary gates which perform substantially faster than circuits based on the conventional Solovay–Kitaev algorithm (Dawson and Nielsen 2006 Quantum Inform. Comput. 6 81). For a given approximation error ϵ, arbitrary single-qubit gates can be produced fault-tolerantly and using a restricted set of gates in time which is O(log ϵ) or O(log log ϵ); with sufficient parallel preparation of ancillas, constant average depth is possible using a method we call programmable ancilla rotations. Moreover, we construct and analyze efficient implementations of first- and second-quantized simulation algorithms using the fault-tolerant arbitrary gates and other techniques, such as implementing various subroutines in constant time. A specific example we analyze is the ground-state energy calculation for lithium hydride. (paper)

  3. Neural network based expert system for fault diagnosis of particle accelerators

    International Nuclear Information System (INIS)

    Dewidar, M.M.

    1997-01-01

    Particle accelerators are generators that produce beams of charged particles which acquire different energies depending on the accelerator type. The MGC-20 cyclotron is a cyclic particle accelerator used for accelerating protons, deuterons, alpha particles, and helium-3 to different energies. Its applications include isotope production, nuclear reaction studies, and mass spectroscopy. It is a complicated machine consisting of five main parts: the ion source, the deflector, the beam transport system, the concentric and harmonic coils, and the radio frequency system. The diagnosis of this device is a very complex task; it depends on the conditions of 27 indicators on the control panel of the device. Accurate diagnosis can lead to high system reliability and save maintenance costs, so an expert system for cyclotron fault diagnosis needs to be built. In this thesis, a hybrid expert system was developed for fault diagnosis of the MGC-20 cyclotron. Two intelligent techniques, a multilayer feed-forward back-propagation neural network and a rule-based expert system, are integrated as a loosely coupled pre-processor model to build the proposed hybrid expert system. The architecture of the developed hybrid expert system consists of two levels. The first level is two feed-forward back-propagation neural networks, used for isolating the faulty part of the cyclotron. The second level is the rule-based expert system, used for troubleshooting the faults inside the isolated faulty part. 4-6 tabs., 4-5 figs., 36 refs

  4. Low footwall accelerations and variable surface rupture behavior on the Fort Sage Mountains fault, northeast California

    Science.gov (United States)

    Briggs, Richard W.; Wesnousky, Steven G.; Brune, James N.; Purvance, Matthew D.; Mahan, Shannon

    2013-01-01

    The Fort Sage Mountains fault zone is a normal fault in the Walker Lane of the western Basin and Range that produced a small surface rupture during an ML 5.6 earthquake in 1950. We investigate the paleoseismic history of the Fort Sage fault and find evidence for two paleoearthquakes with surface displacements much larger than those observed in 1950. Rupture of the Fort Sage fault ∼5.6 ka resulted in surface displacements of at least 0.8–1.5 m, implying earthquake moment magnitudes (Mw) of 6.7–7.1. An older rupture at ∼20.5 ka displaced the ground at least 1.5 m, implying an earthquake of Mw 6.8–7.1. A field of precariously balanced rocks (PBRs) is located less than 1 km from the surface-rupture trace of this Holocene-active normal fault. Ground-motion prediction equations (GMPEs) predict peak ground accelerations (PGAs) of 0.2–0.3g for the 1950 rupture and 0.3–0.5g for the ∼5.6 ka paleoearthquake one kilometer from the fault-surface trace, yet field tests indicate that the Fort Sage PBRs will be toppled by PGAs between 0.1 and 0.3g. We discuss the paleoseismic history of the Fort Sage fault in the context of the nearby PBRs, GMPEs, and probabilistic seismic hazard maps for extensional regimes. If the Fort Sage PBRs are older than the mid-Holocene rupture on the Fort Sage fault zone, this implies that current GMPEs may overestimate near-fault footwall ground motions at this site.

  5. Computer simulation of dynamic processes on accelerators

    International Nuclear Information System (INIS)

    Kol'ga, V.V.

    1979-01-01

    The problems of computer-based numerical investigation of the motion of accelerated particles in accelerators and storage rings, the effect of different accelerator systems on this motion, and the determination of optimal characteristics of accelerated charged particle beams are considered. Various simulation representations describing accelerated particle dynamics are discussed, such as the enlarged particle method, the representation where a large number of discrete particles is substituted for a field of continuously distributed space charge, and the method based on determination of averaged beam characteristics. The procedure of numerical studies is described for the basic problems, viz. calculation of closed orbits, establishment of stability regions, investigation of resonance propagation, determination of the phase stability region, evaluation of the space charge effect, and the problem of beam extraction. It is shown that most of such problems are reduced to solution of the Cauchy problem using a computer. The ballistic method, which is applied to solution of the boundary value problem of beam extraction, is considered. It is shown that the introduction into the equation under study of additional terms with a small positive regularization parameter is the general idea of the methods for regularization of ill-posed problems.

  6. A simulation of the San Andreas fault experiment

    Science.gov (United States)

    Agreen, R. W.; Smith, D. E.

    1974-01-01

    The San Andreas fault experiment (Safe), which employs two laser tracking systems for measuring the relative motion of two points on opposite sides of the fault, has been simulated for an 8-yr observation period. The two tracking stations are located near San Diego on the western side of the fault and near Quincy on the eastern side; they are roughly 900 km apart. Both will simultaneously track laser reflector equipped satellites as they pass near the stations. Tracking of the Beacon Explorer C spacecraft has been simulated for these two stations during August and September for 8 consecutive years. An error analysis of the recovery of the relative location of Quincy from the data has been made, allowing for model errors in the mass of the earth, the gravity field, solar radiation pressure, atmospheric drag, errors in the position of the San Diego site, and biases and noise in the laser systems. The results of this simulation indicate that the distance of Quincy from San Diego will be determined each year with a precision of about 10 cm. Projected improvements in these model parameters and in the laser systems over the next few years will bring the precision to about 1-2 cm by 1980.

  7. Simulation-driven machine learning: Bearing fault classification

    Science.gov (United States)

    Sobie, Cameron; Freitas, Carina; Nicolai, Mike

    2018-01-01

    Increasing the accuracy of mechanical fault detection has the potential to improve system safety and economic performance by minimizing scheduled maintenance and the probability of unexpected system failure. Advances in computational performance have enabled the application of machine learning algorithms across numerous applications including condition monitoring and failure detection. Past applications of machine learning to physical failure have relied explicitly on historical data, which limits the feasibility of this approach to in-service components with extended service histories. Furthermore, recorded failure data is often only valid for the specific circumstances and components for which it was collected. This work directly addresses these challenges for roller bearings with race faults by generating training data using information gained from high resolution simulations of roller bearing dynamics, which is used to train machine learning algorithms that are then validated against four experimental datasets. Several different machine learning methodologies are compared starting from well-established statistical feature-based methods to convolutional neural networks, and a novel application of dynamic time warping (DTW) to bearing fault classification is proposed as a robust, parameter free method for race fault detection.
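    The dynamic time warping (DTW) classifier described above can be sketched in a few lines: compute the DTW distance between a measured vibration segment and a set of simulation-derived fault templates, then assign the label of the nearest template. The template waveforms and fault labels below are invented stand-ins for the simulation data used in the paper.

      import numpy as np

      def dtw_distance(a, b):
          # classic O(len(a)*len(b)) dynamic-time-warping distance between two 1-D signals
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = abs(a[i - 1] - b[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      def classify(segment, templates):
          # 1-nearest-neighbour label assignment against simulated fault templates
          return min(templates, key=lambda label: dtw_distance(segment, templates[label]))

      # hypothetical templates standing in for high-resolution bearing simulations
      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 200)
      templates = {
          "healthy":    np.sin(2 * np.pi * 5 * t),
          "outer_race": np.sin(2 * np.pi * 5 * t) + 0.8 * (np.sin(2 * np.pi * 37 * t) > 0.95),
          "inner_race": np.sin(2 * np.pi * 5 * t) + 0.8 * (np.sin(2 * np.pi * 61 * t) > 0.95),
      }
      measurement = templates["outer_race"] + 0.1 * rng.standard_normal(t.size)
      print(classify(measurement, templates))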

  8. Simulations of multistage intense ion beam acceleration

    International Nuclear Information System (INIS)

    Slutz, S.A.; Poukey, J.W.

    1992-01-01

    An analytic theory for magnetically insulated, multistage acceleration of high intensity ion beams, where the diamagnetic effect due to electron flow is important, has been presented by Slutz and Desjarlais. The theory predicts the existence of two limiting voltages called V1(W) and V2(W), which are both functions of the injection energy qW of ions entering the accelerating gap. As the voltage approaches V1(W), unlimited beam-current density can penetrate the gap without the formation of a virtual anode because the dynamic gap goes to zero. Unlimited beam current density can penetrate an accelerating gap above V2(W), although a virtual anode is formed. It was found that the behavior of these limiting voltages is strongly dependent on the electron density profile. The authors have investigated the behavior of these limiting voltages numerically using the 2-D particle-in-cell (PIC) code MAGIC. Results of these simulations are consistent with the superinsulated analytic results. This is not surprising, since the ignored coordinate eliminates instabilities known to be important from studies of single stage magnetically insulated ion diodes. To investigate the effect of these instabilities the authors have simulated the problem with the 3-D PIC code QUICKSILVER, which indicates behavior that is consistent with the saturated model.

  9. Fault Risk Assessment of Underwater Vehicle Steering System Based on Virtual Prototyping and Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    He Deyu

    2016-09-01

    Full Text Available Assessing the risks of steering-system faults in underwater vehicles is a human-machine-environment (HME) systematic safety field that studies faults in the steering system itself, the driver's human reliability (HR) and various environmental conditions. This paper proposes a fault risk assessment method for an underwater vehicle steering system based on virtual prototyping and Monte Carlo simulation. A virtual steering system prototype was established and validated to rectify a lack of historical fault data. Fault injection and simulation were conducted to acquire fault simulation data. A Monte Carlo simulation was adopted that integrated randomness due to the human operator and the environment. Randomness and uncertainty of the human, machine and environment were integrated in the method to obtain a probabilistic risk indicator. To verify the proposed method, a case study of stuck rudder fault (SRF) risk assessment was carried out. This method may provide a novel solution for fault risk assessment of a vehicle or other general HME system.
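    The core of the method is a Monte Carlo loop that folds machine, human and environmental randomness into a single probabilistic risk indicator. The sketch below assumes placeholder distributions (fault-detection delay, operator response probability, an environmental severity factor) purely for illustration; the actual distributions in the study come from the virtual prototype and human-reliability data.

      import numpy as np

      rng = np.random.default_rng(42)
      N = 100_000  # Monte Carlo trials

      # hypothetical distributions standing in for fault-simulation data,
      # the operator's human reliability and the environmental conditions
      fault_detect_delay = rng.lognormal(mean=1.0, sigma=0.5, size=N)  # seconds until the fault is noticed
      operator_reacts = rng.random(N) > 0.02                           # 98% chance of a correct response
      sea_state_factor = rng.uniform(0.5, 2.0, size=N)                 # environmental severity multiplier

      # a trial counts as hazardous if the effective response time exceeds a safety margin
      effective_delay = fault_detect_delay * sea_state_factor
      hazardous = (~operator_reacts) | (effective_delay > 8.0)

      risk = hazardous.mean()
      ci = 1.96 * np.sqrt(risk * (1 - risk) / N)
      print(f"estimated fault-risk indicator: {risk:.4f} +/- {ci:.4f}")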

  10. Acceleration of PIC simulation with GPU

    International Nuclear Information System (INIS)

    Suzuki, Junya; Shimazu, Hironori; Fukazawa, Keiichiro; Den, Mitsue

    2011-01-01

    Particle-in-cell (PIC) is a simulation technique for plasma physics. The large number of particles in high-resolution plasma simulations increases the volume of computation required, making it vital to increase computation speed. In this study, we attempt to accelerate the computation on graphics processing units (GPUs) using KEMPO, a PIC simulation code package. We perform two benchmarking tests, with small and large grid sizes. In these tests, we run the KEMPO1 code using a CPU only, both a CPU and a GPU, and a GPU only. The results showed that performance using only a GPU was twice that of using a CPU alone, while the execution time when using both a CPU and a GPU was comparable to that of the CPU-only tests because of the significant communication bottleneck between the CPU and the GPU. (author)
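    A particle-in-cell step consists of charge deposition, a field solve and a particle push; these data-parallel kernels are what one would offload to a GPU, for instance by replacing numpy arrays with a GPU array library. The 1-D electrostatic sketch below uses normalized units and made-up grid and particle counts, and is not the KEMPO1 algorithm itself.

      import numpy as np

      ng, n_part, L, dt = 64, 10_000, 2 * np.pi, 0.1
      dx = L / ng
      rng = np.random.default_rng(1)
      x = rng.uniform(0, L, n_part)        # electron positions on a periodic domain
      v = rng.standard_normal(n_part)      # electron velocities

      def pic_step(x, v):
          cell = (x / dx).astype(int) % ng
          # 1) charge deposition (nearest grid point), with a neutralizing ion background
          ne = np.bincount(cell, minlength=ng) * (ng / n_part)   # electron density, mean 1
          rho = 1.0 - ne
          # 2) field solve via Gauss's law in Fourier space: ik E_k = rho_k
          k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
          k[0] = 1.0                                             # avoid division by zero
          E_k = -1j * np.fft.fft(rho) / k
          E_k[0] = 0.0
          E = np.real(np.fft.ifft(E_k))
          # 3) particle push (semi-implicit Euler), electrons have q/m = -1
          v_new = v - dt * E[cell]
          x_new = (x + dt * v_new) % L
          return x_new, v_new

      for _ in range(100):
          x, v = pic_step(x, v)
      print("mean kinetic energy:", 0.5 * np.mean(v**2))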

  11. 3D Dynamic Rupture Simulations along Dipping Faults, with a focus on the Wasatch Fault Zone, Utah

    Science.gov (United States)

    Withers, K.; Moschetti, M. P.

    2017-12-01

    We study dynamic rupture and ground motion from dip-slip faults in regions that have high seismic hazard, such as the Wasatch fault zone, Utah. Previous numerical simulations have modeled deterministic ground motion along segments of this fault in the heavily populated regions near Salt Lake City but were restricted to low frequencies (< 1 Hz). We seek to better understand the rupture process and assess broadband ground motions and variability from the Wasatch Fault Zone by extending deterministic ground motion prediction to higher frequencies (up to 5 Hz). We perform simulations along a dipping normal fault (40 x 20 km along strike and width, respectively) with characteristics derived from geologic observations to generate a suite of ruptures > Mw 6.5. This approach utilizes dynamic simulations (fully physics-based models, where the initial stress drop and friction law are imposed) using a summation by parts (SBP) method. The simulations include rough-fault topography following a self-similar fractal distribution (over length scales from 100 m to the size of the fault) in addition to off-fault plasticity. Energy losses from heat and other mechanisms, modeled as anelastic attenuation, are also included, as well as free-surface topography, which can significantly affect ground motion patterns. We compare the effects that material structure and both rate-and-state and slip-weakening friction laws have on rupture propagation. The simulations show reduced slip and moment release in the near surface with the inclusion of plasticity, better agreeing with observations of shallow slip deficit. Long-wavelength fault geometry imparts a non-uniform stress distribution along both dip and strike, influencing the preferred rupture direction and hypocenter location, potentially important for seismic hazard estimation.
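    Rough-fault topography of the kind used here is commonly generated by spectral synthesis of a self-similar (fractal) profile. The sketch below builds a 1-D profile whose power spectral density falls off as k^-3, which corresponds to self-similar roughness; the grid spacing, amplitude-to-wavelength ratio and scaling are illustrative assumptions rather than the values used in these simulations.

      import numpy as np

      def self_similar_profile(n=2048, dx=25.0, alpha=5e-3, seed=0):
          # 1-D self-similar rough-fault profile via spectral synthesis:
          # |h_k| ~ k**-1.5 gives a k**-3 power spectral density (Hurst exponent H = 1);
          # alpha is a crude amplitude-to-length ratio used only to set the overall scale.
          rng = np.random.default_rng(seed)
          L = n * dx
          k = 2 * np.pi * np.fft.rfftfreq(n, d=dx)
          amp = np.zeros_like(k)
          amp[k > 0] = k[k > 0] ** -1.5
          phases = rng.uniform(0, 2 * np.pi, k.size)
          h = np.fft.irfft(amp * np.exp(1j * phases), n=n)
          h *= alpha * L / np.std(h)           # scale rms roughness to alpha * fault length
          return h

      h = self_similar_profile()
      print("rms roughness [m]:", np.std(h))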

  12. Using the GeoFEST Faulted Region Simulation System

    Science.gov (United States)

    Parker, Jay W.; Lyzenga, Gregory A.; Donnellan, Andrea; Judd, Michele A.; Norton, Charles D.; Baker, Teresa; Tisdale, Edwin R.; Li, Peggy

    2004-01-01

    GeoFEST (the Geophysical Finite Element Simulation Tool) simulates stress evolution, fault slip and plastic/elastic processes in realistic materials, and so is suitable for earthquake cycle studies in regions such as Southern California. Many new capabilities and means of access for GeoFEST are now supported. New abilities include MPI-based cluster parallel computing using automatic PYRAMID/Parmetis-based mesh partitioning, automatic mesh generation for layered media with rectangular faults, and results visualization that is integrated with remote sensing data. The parallel GeoFEST application has been successfully run on over a half-dozen computers, including Intel Xeon clusters, Itanium II and Altix machines, and the Apple G5 cluster. It is not separately optimized for different machines, but relies on good domain partitioning for load balance and low communication, and on careful writing of the parallel diagonally preconditioned conjugate gradient solver to keep communication overhead low. Demonstrated thousand-step solutions for over a million finite elements on 64 processors require under three hours, and scaling tests show high efficiency when using more than on the order of 4000 elements per processor. The source code and documentation for GeoFEST are available at no cost from the Open Channel Foundation. In addition, GeoFEST may be used through a browser-based portal environment available to approved users. That environment includes semi-automated geometry creation and mesh generation tools, GeoFEST, and RIVA-based visualization tools that include the ability to generate a flyover animation showing deformations and topography. Work is in progress to support simulation of a region with several faults using 16 million elements, using a strain-energy metric to adapt the mesh to faithfully represent the solution in a region of widely varying strain.
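    The solver mentioned above is a diagonally preconditioned conjugate gradient. A serial sketch of that algorithm is given below; in the distributed-memory version, the matrix-vector product and the dot products are the only operations that require communication. The small random test matrix is a stand-in for a finite-element stiffness matrix.

      import numpy as np

      def diag_precond_cg(A, b, tol=1e-8, max_iter=1000):
          # Jacobi (diagonal) preconditioned conjugate gradient for a symmetric positive-definite A
          M_inv = 1.0 / np.diag(A)
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv * r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol * np.linalg.norm(b):
                  break
              z = M_inv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      rng = np.random.default_rng(0)
      B = rng.standard_normal((50, 50))
      A = B @ B.T + 50 * np.eye(50)        # SPD test system
      b = rng.standard_normal(50)
      x = diag_precond_cg(A, b)
      print("residual norm:", np.linalg.norm(A @ x - b))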

  13. The numerical simulation of accelerator components

    International Nuclear Information System (INIS)

    Herrmannsfeldt, W.B.; Hanerfeld, H.

    1987-05-01

    The techniques of the numerical simulation of plasmas can be readily applied to problems in accelerator physics. Because the problems usually involve a single-component "plasma," and times that are, at most, a few plasma oscillation periods, it is frequently possible to make very good simulations with relatively modest computation resources. We will discuss the methods and illustrate them with several examples. One of the more powerful techniques for understanding the motion of charged particles is to view computer-generated motion pictures. We will show several short movie strips to illustrate the discussions. The examples will be drawn from the application areas of Heavy Ion Fusion, electron-positron linear colliders and injectors for free-electron lasers. 13 refs., 10 figs., 2 tabs

  14. Fault structure analysis by means of large deformation simulator; Daihenkei simulator ni yoru danso kozo kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Murakami, Y.; Shi, B. [Geological Survey of Japan, Tsukuba (Japan); Matsushima, J. [The University of Tokyo, Tokyo (Japan). Faculty of Engineering

    1997-05-27

    Large deformation of the crust is generated by relatively large displacement of the media on both sides of a fault. In the conventional finite element method, faults are dealt with by special elements called joint elements, but joint elements, being microscopic in width, generate numerical instability if a large shear displacement is imposed. Therefore, by introducing the master-slave (MO) method used for contact analysis in the metal-processing field, a large deformation simulator was developed for analyzing diastrophism including large displacement along a fault. Analysis examples are shown for the case in which the upper and lower basements are displaced relative to each other across the fault. The bottom surface and right-end boundary of the lower basement are fixed boundaries. The left-end boundary of the lower basement is fixed, and a horizontal velocity of 3×10⁻⁷ m/s is applied to the left-end boundary of the upper basement. In accordance with the horizontal movement of the upper basement, the boundary surface deforms significantly. Stress is almost at right angles to the boundary surface. So far the MO method has been applied to a single simple fault, but it should be extended to many faults in the future. 13 refs., 2 figs.

  15. Simulation model of a transient fault controller for an active-stall wind turbine

    Energy Technology Data Exchange (ETDEWEB)

    Jauch, C.; Soerensen, P.; Bak Jensen, B.

    2005-01-01

    This paper describes the simulation model of a controller that enables an active-stall wind turbine to ride through transient faults. The simulated wind turbine is connected to a simple model of a power system. Certain fault scenarios are specified and the turbine shall be able to sustain operation in case of such faults. The design of the controller is described and its performance assessed by simulations. The control strategies are explained and the behaviour of the turbine discussed. (author)

  16. Simulation of Co-Seismic Off-Fault Stress Effects: Influence of Fault Roughness and Pore Pressure Coupling

    Science.gov (United States)

    Fälth, B.; Lund, B.; Hökmark, H.

    2017-12-01

    Aiming at improved safety assessment of geological nuclear waste repositories, we use dynamic 3D earthquake simulations to estimate the potential for co-seismic off-fault distributed fracture slip. Our model comprises a 12.5 x 8.5 km strike-slip fault embedded in a full space continuum where we apply a homogeneous initial stress field. In the reference case (Case 1) the fault is planar and oriented optimally for slip, given the assumed stress field. To examine the potential impact of fault roughness, we also study cases where the fault surface has undulations with self-similar fractal properties. In both the planar and the undulated cases the fault has homogeneous frictional properties. In a set of ten rough fault models (Case 2), the fault friction is equal to that of Case 1, meaning that these models generate lower seismic moments than Case 1. In another set of ten rough fault models (Case 3), the fault dynamic friction is adjusted such that seismic moments on par with that of Case 1 are generated. For the propagation of the earthquake rupture we adopt the linear slip-weakening law and obtain Mw 6.4 in Case 1 and Case 3, and Mw 6.3 in Case 2 (35 % lower moment than Case 1). During rupture we monitor the off-fault stress evolution along the fault plane at 250 m distance and calculate the corresponding evolution of the Coulomb Failure Stress (CFS) on optimally oriented hypothetical fracture planes. For the stress-pore pressure coupling, we assume Skempton's coefficient B = 0.5 as a base case value, but also examine the sensitivity to variations of B. We observe the following: (I) The CFS values, and thus the potential for fracture slip, tend to increase with the distance from the hypocenter. This is in accordance with results by other authors. (II) The highest CFS values are generated by quasi-static stress concentrations around fault edges and around large scale fault bends, where we obtain values of the order of 10 MPa. (III) Locally, fault roughness may have a
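    The Coulomb Failure Stress change on a hypothetical fracture plane, including undrained pore-pressure coupling through Skempton's coefficient B, can be evaluated as in the sketch below. The stress tensor, plane orientation and friction coefficient are illustrative values only; the sign conventions assumed here are stated in the comments.

      import numpy as np

      def delta_cfs(d_sigma, normal, slip_dir, mu=0.6, B=0.5):
          # Coulomb failure stress change on a fracture plane (tension-positive convention).
          # d_sigma: 3x3 stress-change tensor [Pa]; normal / slip_dir: unit vectors.
          traction = d_sigma @ normal
          d_tau = traction @ slip_dir              # shear stress change along the slip direction
          d_sn = traction @ normal                 # normal stress change (positive = unclamping)
          d_p = -B * np.trace(d_sigma) / 3.0       # undrained pore-pressure change
          return d_tau + mu * (d_sn + d_p)

      # illustrative co-seismic stress change of a few MPa
      d_sigma = np.array([[ 2.0e6,  1.0e6, 0.0],
                          [ 1.0e6, -3.0e6, 0.0],
                          [ 0.0,    0.0,   0.5e6]])
      n = np.array([1.0, 0.0, 0.0])                # fracture-plane normal
      s = np.array([0.0, 1.0, 0.0])                # slip direction in the plane
      print(f"delta CFS = {delta_cfs(d_sigma, n, s) / 1e6:.2f} MPa")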

  17. Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area

    Energy Technology Data Exchange (ETDEWEB)

    Pratama, Cecep, E-mail: great.pratama@gmail.com [Graduate Program of Earth Science, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Meilano, Irwan [Geodesy Research Division, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Nugraha, Andri Dian [Global Geophysical Group, Faculty of Mining and Petroleum Engineering, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia)

    2015-04-24

    Slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on the hazard level. We examine the contribution of slip rate to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). The PGA hazard curve has been investigated for Sukabumi using a PSHA (Probabilistic Seismic Hazard Analysis). We observe that the crustal fault has the largest influence on the hazard estimate. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. The uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimate is sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For a specific site, we find that the seismic hazard estimate for Sukabumi is between 0.4904 and 0.8465 g, with an uncertainty between 0.0847 and 0.2389 g and a COV between 17.7% and 29.8%.
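    The sensitivity analysis amounts to propagating a slip-rate distribution through the hazard calculation and reporting the mean, uncertainty and coefficient of variation (COV) of the resulting PGA. The sketch below uses an invented slip-rate distribution and a toy slip-rate-to-PGA mapping in place of the full PSHA, purely to show the bookkeeping.

      import numpy as np

      rng = np.random.default_rng(2024)
      N = 50_000

      # hypothetical slip-rate distribution for the fault (mm/yr); both the distribution
      # and the hazard mapping below are placeholders, not the values used in the study
      slip_rate = rng.normal(loc=5.0, scale=1.5, size=N).clip(min=0.5)

      def toy_hazard_pga(slip_rate_mm_yr):
          # stand-in mapping from slip rate to the 500-yr return-period PGA (g)
          return 0.45 + 0.05 * np.log(slip_rate_mm_yr)

      pga = toy_hazard_pga(slip_rate)
      mean, std = pga.mean(), pga.std(ddof=1)
      print(f"mean PGA = {mean:.3f} g, uncertainty = {std:.3f} g, COV = {100 * std / mean:.1f} %")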

  18. Safety assessment of automated vehicle functions by simulation-based fault injection

    OpenAIRE

    Juez, Garazi; Amparan, Estibaliz; Lattarulo, Ray; Rastelli, Joshue Perez; Ruiz, Alejandra; Espinoza, Huascar

    2017-01-01

    As automated driving vehicles become more sophisticated and pervasive, it is increasingly important to assure their safety even in the presence of faults. This paper presents a simulation-based fault injection approach (Sabotage) aimed at assessing the safety of automated vehicle functions. In particular, we focus on a case study to forecast fault effects during the model-based design of a lateral control function. The goal is to determine the acceptable fault detection interval for pe...

  19. Fault-tolerant Control of Unmanned Underwater Vehicles with Continuous Faults: Simulations and Experiments

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2010-02-01

    Full Text Available A novel thruster fault diagnosis and accommodation method for open-frame underwater vehicles is presented in the paper. The proposed system consists of two units: a fault diagnosis unit and a fault accommodation unit. In the fault diagnosis unit, an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural-network information fusion model is used to realize fault identification of the thruster. The fault accommodation unit is based on direct calculation of moments, and the result of the fault identification is used to find the solution of the control allocation problem. The approach handles continuous fault identification for the underwater vehicle (UV). Results from experiments are provided to illustrate the performance of the proposed method in uncertain, continuous-fault situations.

  1. Stacking fault growth of FCC crystal: The Monte-Carlo simulation approach

    International Nuclear Information System (INIS)

    Jian Jianmin; Ming Naiben

    1988-03-01

    The Monte-Carlo method has been used to simulate the growth of the FCC (111) crystal surface, on which the outcrop of a stacking fault is present. A comparison of growth rates has been made between the surface containing the stacking fault and the perfect surface. The successive growth stages have been simulated. It is concluded that the outcrop of a stacking fault on the crystal surface can act as a self-perpetuating step-generating source. (author). 7 refs, 3 figs

  2. Early Safety Assessment of Automotive Systems Using Sabotage Simulation-Based Fault Injection Framework

    OpenAIRE

    Juez, Garazi; Amparan, Estíbaliz; Lattarulo, Ray; Ruíz, Alejandra; Perez, Joshue; Espinoza, Huascar

    2017-01-01

    As road vehicles increase their autonomy and the driver reduces his role in the control loop, novel challenges on dependability assessment arise. Model-based design combined with a simulation-based fault injection technique and a virtual vehicle poses as a promising solution for an early safety assessment of automotive systems. To start with, the design, where no safety was considered, is stimulated with a set of fault injection simulations (fault forecasting). By doing so, safety strategies ...

  3. A simulation training evaluation method for distribution network fault based on radar chart

    Directory of Open Access Journals (Sweden)

    Yuhang Xu

    2018-01-01

    Full Text Available In order to solve the problem of automatically evaluating dispatcher fault-simulation training in distribution networks, a simulation training evaluation method for distribution network faults based on a radar chart is proposed. A fault-handling information matrix is established to record the dispatcher's fault-handling operation sequence and operation information. Four situations of the dispatcher's fault-isolation operations are analyzed. A fault-handling anti-misoperation rule set is established to describe the rules that prohibit certain dispatcher operations. Based on the idea of artificial-intelligence reasoning, the feasibility of the dispatcher's fault handling is described by a feasibility index. The relevant factors and evaluation methods are discussed from three aspects: the feasibility of the fault-handling result, the correctness with respect to the anti-misoperation rules, and the conciseness of the operation process. Detailed calculation formulas are given. Combining the independence of and the correlation between the three evaluation angles, a comprehensive evaluation method for distribution network fault-simulation training based on a radar chart is proposed. The method comprehensively reflects the fault-handling process of dispatchers and evaluates it from multiple angles, which gives it good practical value.
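    The paper's exact formulas are not reproduced here, but one plausible radar-chart composite is the area of the polygon spanned by the per-aspect scores, normalized by the maximum attainable area, as sketched below; the three aspect scores are hypothetical.

      import numpy as np

      def radar_area_score(scores):
          # composite score: area of the radar-chart polygon of per-aspect scores in [0, 1],
          # normalized by the area obtained when every aspect scores 1
          r = np.asarray(scores, dtype=float)
          n = len(r)
          theta = 2 * np.pi / n
          area = 0.5 * np.sum(r * np.roll(r, -1)) * np.sin(theta)
          area_max = 0.5 * n * np.sin(theta)
          return area / area_max

      # hypothetical per-aspect scores for one trainee's fault-handling session:
      # result feasibility, anti-misoperation correctness, operation conciseness
      scores = [0.9, 1.0, 0.7]
      print(f"composite evaluation: {radar_area_score(scores):.2f}")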

  4. Cyclic loading of simulated fault gouge to large strains

    Science.gov (United States)

    Jones, Lucile M.

    1980-04-01

    As part of a study of the mechanics of simulated fault gouge, deformation of Kayenta Sandstone (24% initial porosity) was observed in triaxial stress tests through several stress cycles. Between 50- and 300-MPa effective pressure the specimens deformed stably without stress drops and with deformation occurring throughout the sample. At 400-MPa effective pressure the specimens underwent strain softening with the deformation occurring along one plane. However, the difference in behavior seems to be due to the density variation at different pressures rather than to the difference in pressure. After peak stress was reached in each cycle, the samples dilated such that the volumetric strain and the linear strain maintained a constant ratio (approximately 0.1) at all pressures. The behavior was independent of the number of stress cycles to linear strains up to 90% and was in general agreement with laws of soil behavior derived from experiments conducted at low pressure (below 5 MPa).

  5. Fault attacks, injection techniques and tools for simulation

    NARCIS (Netherlands)

    Piscitelli, R.; Bhasin, S.; Regazzoni, F.

    2015-01-01

    Faults attacks are a serious threat to secure devices, because they are powerful and they can be performed with extremely cheap equipment. Resistance against fault attacks is often evaluated directly on the manufactured devices, as commercial tools supporting fault evaluation do not usually provide

  6. Modeling of HVAC operational faults in building performance simulation

    International Nuclear Information System (INIS)

    Zhang, Rongpeng; Hong, Tianzhen

    2017-01-01

    Highlights: •Discuss significance of capturing operational faults in existing buildings. •Develop a novel feature in EnergyPlus to model operational faults of HVAC systems. •Compare three approaches to faults modeling using EnergyPlus. •A case study demonstrates the use of the fault-modeling feature. •Future developments of new faults are discussed. -- Abstract: Operational faults are common in the heating, ventilating, and air conditioning (HVAC) systems of existing buildings, leading to a decrease in energy efficiency and occupant comfort. Various fault detection and diagnostic methods have been developed to identify and analyze HVAC operational faults at the component or subsystem level. However, current methods lack a holistic approach to predicting the overall impacts of faults at the building level—an approach that adequately addresses the coupling between various operational components, the synchronized effect between simultaneous faults, and the dynamic nature of fault severity. This study introduces the novel development of a fault-modeling feature in EnergyPlus which fills in the knowledge gap left by previous studies. This paper presents the design and implementation of the new feature in EnergyPlus and discusses in detail the fault-modeling challenges faced. The new fault-modeling feature enables EnergyPlus to quantify the impacts of faults on building energy use and occupant comfort, thus supporting the decision making of timely fault corrections. Including actual building operational faults in energy models also improves the accuracy of the baseline model, which is critical in the measurement and verification of retrofit or commissioning projects. As an example, EnergyPlus version 8.6 was used to investigate the impacts of a number of typical operational faults in an office building across several U.S. climate zones. The results demonstrate that the faults have significant impacts on building energy performance as well as on occupant

  7. State-of-the-art assessment of testing and testability of custom LSI/VLSI circuits. Volume 8: Fault simulation

    Science.gov (United States)

    Breuer, M. A.; Carlan, A. J.

    1982-10-01

    Fault simulation is widely used by industry in such applications as scoring the fault coverage of test sequences and constructing fault dictionaries. For use in testing VLSI circuits, a simulator is evaluated by its accuracy, i.e., its modelling capability. To be accurate, simulators must employ multi-valued logic in order to represent unknown signal values, impedance states, and signal transitions; circuit delays such as transport, rise/fall, and inertial delays; and the fault modes the simulator is capable of handling. Of the three basic fault simulators now in use (parallel, deductive and concurrent), concurrent fault simulation appears most promising.
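    A minimal serial (one-fault-at-a-time) stuck-at fault simulator conveys what scoring fault coverage involves, even though the parallel, deductive and concurrent techniques discussed in the report process many faults per pass. The two-gate netlist, fault list and test vectors below are invented for illustration.

      # tiny serial stuck-at fault simulator for a made-up two-gate netlist
      NETLIST = [                      # (gate type, output net, input nets)
          ("AND", "n1", ("a", "b")),
          ("OR",  "y",  ("n1", "c")),
      ]
      GATES = {"AND": lambda u, v: u & v, "OR": lambda u, v: u | v}
      NETS = ["a", "b", "c", "n1", "y"]

      def simulate(inputs, fault=None):
          # evaluate the netlist for one input vector; fault = (net, stuck_value) or None
          values = dict(inputs)
          def read(net):
              return fault[1] if fault and net == fault[0] else values[net]
          for gate, out, ins in NETLIST:
              values[out] = GATES[gate](*(read(n) for n in ins))
          return read("y")

      faults = [(net, v) for net in NETS for v in (0, 1)]
      tests = [{"a": 1, "b": 1, "c": 0}, {"a": 0, "b": 1, "c": 0}, {"a": 0, "b": 0, "c": 1}]

      detected = {f for f in faults for t in tests if simulate(t, fault=f) != simulate(t)}
      print(f"stuck-at fault coverage: {len(detected)}/{len(faults)}")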

  8. Verification of the machinery condition monitoring technology by fault simulation tests

    International Nuclear Information System (INIS)

    Maehara, Takafumi; Watanabe, Yukio; Osaki, Kenji; Higuma, Koji; Nakano, Tomohito

    2009-01-01

    This paper describes the test items and equipment introduced by the Japan Nuclear Energy Safety Organization to establish a technique for monitoring machinery conditions. The vertical-pump simulation tests confirmed that fault analysis was not possible by measuring accelerations on the motor and pump column pipes, but was possible by measuring pump shaft vibrations. Because hydraulic whirl caused by bearing wear was significantly influenced by bearing misalignment and flow rate, trend monitoring must be performed under the same conditions of bearing alignment and flow rate. We have confirmed that malfunctions of vertical pumps can be diagnosed using shaft vibration measured by ultrasonic sensors from the outer surface of the pump casing at floor level. (author)

  9. A flexible simulator for training an early fault diagnostic system

    International Nuclear Information System (INIS)

    Marsiletti, M.; Santinelli, A.; Zuenkov, M.; Poletykin, A.

    1997-01-01

    An early fault diagnostic system has been developed, addressed to timely troubleshooting in process plants during any operational mode. The theory behind this diagnostic system relates to the use of learning methods for the automatic generation of knowledge bases. This approach enables the conversion of "cause→effect" relations into "effect→possible-causes" ones. The diagnostic rules are derived from the operation of a plant simulator according to a specific procedure. Flexibility, accuracy and high speed are the major characteristics of the training simulator used to generate the diagnostic knowledge base. The simulator structure is very flexible, being based on the LEGO code but allowing the use of practically any kind of FORTRAN routine (recently, ACSL macros have also been introduced) as plant modules: this permits, when needed, a very accurate description of the malfunctions the diagnostic system should "know". The high speed is useful to shorten the "learning" phase of the diagnostic system. The feasibility of the overall system has been assessed using as reference plant the conventional Sampierdarena (Italy) power station, a combined-cycle plant dedicated to producing both electrical and heat power. The hardware configuration of this prototype system was made up of a network of a Hewlett-Packard workstation and a Digital VAX-Station. The paper illustrates the basic structure of the simulator used for training this diagnostic system, as well as the theoretical background on which the diagnostic system is based. Some evidence of the effectiveness of the concept, through its application to the Sampierdarena 40 MW cogeneration plant, is reported. Finally, an outline of an ongoing application to a WWER-1000 plant is given; the operating system is, in this case, UNIX. (author)

  10. A circuit-based photovoltaic module simulator with shadow and fault settings

    Science.gov (United States)

    Chao, Kuei-Hsiang; Chao, Yuan-Wei; Chen, Jyun-Ping

    2016-03-01

    The main purpose of this study was to develop a photovoltaic (PV) module simulator. The proposed simulator, using electrical parameters from solar cells, can simulate output characteristics not only under normal operating conditions but also under partial-shadow and fault conditions. Such a simulator has the advantages of low cost, small size and easy realization. Experiments have shown that results from the proposed PV simulator are very close to those from simulation software under partial-shadow conditions, with negligible differences when faults occur. Moreover, the PV module simulator as developed can be used with various types of series-parallel connections to form PV arrays, in order to conduct experiments on partial-shadow and fault events occurring in some of the modules. Such experiments are designed to explore the impact of shadow and fault conditions on the output characteristics of the system as a whole.
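    Circuit-based PV simulators of this kind are usually built around the single-diode model; the sketch below solves that model for the module current at a given voltage by bisection, and emulates partial shading simply by reducing the photocurrent. All parameter values are generic placeholders, not those of the simulator described in the record.

      import numpy as np

      def pv_current(V, Iph=8.0, I0=1e-9, Rs=0.01, Rsh=100.0, n=1.3, T=298.15, Ns=60):
          # single-diode module equation solved for I by bisection:
          # I = Iph - I0*(exp((V + I*Rs)/(Ns*n*Vt)) - 1) - (V + I*Rs)/Rsh
          Vt = 1.380649e-23 * T / 1.602176634e-19          # thermal voltage kT/q
          lo, hi = -2.0, Iph + 2.0
          for _ in range(80):                              # the residual is monotone in I
              I = 0.5 * (lo + hi)
              resid = Iph - I0 * (np.exp((V + I * Rs) / (Ns * n * Vt)) - 1) - (V + I * Rs) / Rsh - I
              lo, hi = (I, hi) if resid > 0 else (lo, I)
          return I

      # I-V sweep of one module; partial shading emulated by lowering the photocurrent
      volts = np.linspace(0, 38, 20)
      amps_full_sun = [pv_current(v) for v in volts]
      amps_shaded = [pv_current(v, Iph=3.0) for v in volts]   # hypothetical heavy shading
      print("Isc (full sun) ~", round(amps_full_sun[0], 2), "A")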

  11. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Spentzouris, P.; /Fermilab; Cary, J.; /Tech-X, Boulder; McInnes, L.C.; /Argonne; Mori, W.; /UCLA; Ng, C.; /SLAC; Ng, E.; Ryne, R.; /LBL, Berkeley

    2011-11-14

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The ComPASS organization

  12. Simulation of a flexible wind turbine response to a grid fault

    DEFF Research Database (Denmark)

    Hansen, Anca Daniela; Cutululis, Nicolaos Antonio; Sørensen, Poul Ejnar

    2007-01-01

    The purpose of this work is to illustrate the impact of a grid fault on the mechanical loads of a wind turbine. Grid faults generate transients in the generator electromagnetic torque, which are propagated in the wind turbine, stressing its mechanical components. Grid faults are normally simulated in power system simulation tools applying simplified mechanical models of the drive train. This paper presents simulations of the wind turbine load response to grid faults with an advanced aeroelastic computer code (HAWC2). The core of this code is an advanced model for the flexible structure of the wind turbines, taking the flexibility of the tower, blades and other components of the wind turbines into account. The effect of a grid fault on the wind turbine flexible structure is assessed for a typical fixed speed wind turbine, equipped with an induction generator.

  13. Simulating autonomous driving styles: Accelerations for three road profiles

    Directory of Open Access Journals (Sweden)

    Karjanto Juffrizal

    2017-01-01

    Full Text Available This paper presents a new experimental approach to simulating projected autonomous driving styles based on the accelerations at three road profiles. The study focused on determining ranges of accelerations in the triaxial directions to simulate the autonomous driving experience. A special device, known as the Automatic Acceleration and Data controller (AUTOAccD), has been developed to guide the designated driver to accomplish the selected accelerations based on the road profiles and the intended driving styles, namely assertive, defensive and light rail transit (LRT). Experimental investigations have been carried out at three different road profiles (junction, speed hump, and corner) with two designated drivers and five trials in each condition. A driving style with the accelerations of an LRT has also been included in this study, as it is significant to the present methodology: the autonomous car is predicted to accelerate like an LRT, in such a way that it enables users to conduct activities such as working on a laptop, using personal devices, or eating and drinking while travelling. The results demonstrate that 92 out of 110 trials of the intended accelerations for autonomous driving styles could be achieved and simulated on a real road by the designated drivers. The differences between the two designated drivers were negligible, and the rates of success in realizing the intended accelerations were high. The present approach to simulating autonomous driving styles, focusing on accelerations, can be used as a tool for experimental setups involving autonomous driving experience and acceptance.

  14. Ground motion modeling of Hayward fault scenario earthquakes II:Simulation of long-period and broadband ground motions

    Energy Technology Data Exchange (ETDEWEB)

    Aagaard, B T; Graves, R W; Rodgers, A; Brocher, T M; Simpson, R W; Dreger, D; Petersson, N A; Larsen, S C; Ma, S; Jachens, R C

    2009-11-04

    We simulate long-period (T > 1.0-2.0 s) and broadband (T > 0.1 s) ground motions for 39 scenario earthquakes (Mw 6.7-7.2) involving the Hayward, Calaveras, and Rodgers Creek faults. For rupture on the Hayward fault we consider the effects of creep on coseismic slip using two different approaches, both of which reduce the ground motions compared with neglecting the influence of creep. Nevertheless, the scenario earthquakes generate strong shaking throughout the San Francisco Bay area with about 50% of the urban area experiencing MMI VII or greater for the magnitude 7.0 scenario events. Long-period simulations of the 2007 Mw 4.18 Oakland and 2007 Mw 4.5 Alum Rock earthquakes show that the USGS Bay Area Velocity Model version 08.3.0 permits simulation of the amplitude and duration of shaking throughout the San Francisco Bay area, with the greatest accuracy in the Santa Clara Valley (San Jose area). The ground motions exhibit a strong sensitivity to the rupture length (or magnitude), hypocenter (or rupture directivity), and slip distribution. The ground motions display a much weaker sensitivity to the rise time and rupture speed. Peak velocities, peak accelerations, and spectral accelerations from the synthetic broadband ground motions are, on average, slightly higher than the Next Generation Attenuation (NGA) ground-motion prediction equations. We attribute at least some of this difference to the relatively narrow width of the Hayward fault ruptures. The simulations suggest that the Spudich and Chiou (2008) directivity corrections to the NGA relations could be improved by including a dependence on the rupture speed and increasing the areal extent of rupture directivity with period. The simulations also indicate that the NGA relations may under-predict amplification in shallow sedimentary basins.

  15. Simulation of a medical linear accelerator for teaching purposes.

    Science.gov (United States)

    Anderson, Rhys; Lamey, Michael; MacPherson, Miller; Carlone, Marco

    2015-05-08

    Simulation software for medical linear accelerators that can be used in a teaching environment was developed. The components of linear accelerators were modeled to first order accuracy using analytical expressions taken from the literature. The expressions used constants that were empirically set such that realistic response could be expected. These expressions were programmed in a MATLAB environment with a graphical user interface in order to produce an environment similar to that of linear accelerator service mode. The program was evaluated in a systematic fashion, where parameters affecting the clinical properties of medical linear accelerator beams were adjusted independently, and the effects on beam energy and dose rate recorded. These results confirmed that beam tuning adjustments could be simulated in a simple environment. Further, adjustment of service parameters over a large range was possible, and this allows the demonstration of linear accelerator physics in an environment accessible to both medical physicists and linear accelerator service engineers. In conclusion, a software tool, named SIMAC, was developed to improve the teaching of linear accelerator physics in a simulated environment. SIMAC performed in a similar manner to medical linear accelerators. The authors hope that this tool will be valuable as a teaching tool for medical physicists and linear accelerator service engineers.

  16. Community petascale project for accelerator science and simulation: Advancing computational science for future accelerators and accelerator technologies

    International Nuclear Information System (INIS)

    Spentzouris, P.; Cary, J.; McInnes, L.C.; Mori, W.; Ng, C.; Ng, E.; Ryne, R.

    2008-01-01

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R and D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  17. A linear accelerator for simulated micrometeors.

    Science.gov (United States)

    Slattery, J. C.; Becker, D. G.; Hamermesh, B.; Roy, N. L.

    1973-01-01

    Review of the theory, design parameters, and construction details of a linear accelerator designed to impart meteoric velocities to charged microparticles in the 1- to 10-micron diameter range. The described linac is of the Sloan Lawrence type and, in a significant departure from conventional accelerator practice, is adapted to single particle operation by employing a square wave driving voltage with the frequency automatically adjusted from 12.5 to 125 kHz according to the variable velocity of each injected particle. Any output velocity up to about 30 km/sec can easily be selected, with a repetition rate of approximately two particles per minute.

  18. A GPU Accelerated Spring Mass System for Surgical Simulation

    DEFF Research Database (Denmark)

    Mosegaard, Jesper; Sørensen, Thomas Sangild

    2005-01-01

    There is a growing demand for surgical simulators to do fast and precise calculations of tissue deformation to simulate increasingly complex morphology in real-time. Unfortunately, even fast spring-mass based systems have slow convergence rates for large models. This paper presents a method to accelerate computation of a spring-mass system in order to simulate a complex organ such as the heart. This acceleration is achieved by taking advantage of modern graphics processing units (GPU).
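    The acceleration comes from the fact that a spring-mass update is data-parallel: every node applies the same neighbour-spring force rule. The numpy sketch below shows that vectorized update for a cloth-like grid pinned along one edge; mapping it to a GPU (e.g. via cupy or a shader) is largely a substitution of the array backend. The grid size, stiffness, damping and time step are illustrative assumptions.

      import numpy as np

      # explicit integration of an n x n spring-mass sheet (springs to the four grid neighbours)
      n, k, m, dt, damping = 64, 50.0, 1.0, 5e-3, 0.02
      rest = 1.0 / (n - 1)
      pos = np.stack(np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij"), axis=-1)
      pos = np.concatenate([pos, np.zeros((n, n, 1))], axis=-1)      # (n, n, 3) node positions
      vel = np.zeros_like(pos)

      def spring_force(p):
          f = np.zeros_like(p)
          for axis in (0, 1):                                        # horizontal and vertical springs
              d = np.diff(p, axis=axis)                              # edge vectors between neighbours
              length = np.linalg.norm(d, axis=-1, keepdims=True)
              fe = k * (length - rest) * d / np.maximum(length, 1e-12)
              lo = [slice(None)] * 3; lo[axis] = slice(0, -1)
              hi = [slice(None)] * 3; hi[axis] = slice(1, None)
              f[tuple(lo)] += fe                                     # pull node i toward node i+1
              f[tuple(hi)] -= fe                                     # and vice versa
          return f

      for _ in range(200):                                           # semi-implicit Euler steps
          acc = spring_force(pos) / m
          acc[..., 2] -= 9.81                                        # gravity
          vel = (1 - damping) * vel + dt * acc
          pos = pos + dt * vel
          pos[0, :] = np.stack([np.zeros(n), np.linspace(0, 1, n), np.zeros(n)], axis=-1)  # pin one edge
          vel[0, :] = 0.0
      print("lowest node z:", pos[..., 2].min())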

  19. Beam dynamics simulation of a double pass proton linear accelerator

    Directory of Open Access Journals (Sweden)

    Kilean Hwang

    2017-04-01

    Full Text Available A recirculating superconducting linear accelerator, combining the advantages of both straight and circular accelerators, has been demonstrated with relativistic electron beams. The concept of accelerating a recirculating proton beam was recently proposed [J. Qiang, Nucl. Instrum. Methods Phys. Res., Sect. A 795, 77 (2015); 10.1016/j.nima.2015.05.056] and is currently under study. In order to further support the concept, a beam dynamics study of a recirculating proton linear accelerator has to be carried out. In this paper, we study the feasibility of a two-pass recirculating proton linear accelerator through direct numerical beam-dynamics design optimization and start-to-end simulation. This study shows that two-pass simultaneous focusing without particle losses is attainable, including fully 3D space-charge effects, through the entire accelerator system.

  20. Finite element simulation of earthquake cycle dynamics for continental listric fault system

    Science.gov (United States)

    Wei, T.; Shen, Z. K.

    2017-12-01

    We simulate stress/strain evolution through earthquake cycles for a continental listric fault system using the finite element method. A 2-D lithosphere model is developed, with the upper crust composed of plasto-elastic materials and the lower crust/upper mantle composed of visco-elastic materials. The medium is cut by a listric fault, which soles into the visco-elastic lower crust at its downdip end. The system is driven laterally by constant tectonic loading. Slip on the fault is controlled by rate-state friction. We start with a simple static/dynamic friction law and drive the system through multiple earthquake cycles. Our preliminary results show that: (a) the periodicity of the earthquake cycles is strongly modulated by the static/dynamic friction, with longer periods correlated with higher static friction and lower dynamic friction; (b) the periodicity of earthquakes is a function of fault depth, with less frequent events of greater magnitude occurring at shallower depth; and (c) rupture on the fault cannot release all of the tectonic stress in the system; residual stress accumulates in the hanging-wall block at shallow depth close to the fault and has to be released either by conjugate faulting or by inelastic folding. We are in the process of exploring different rheologic structures and friction laws and examining their effects on earthquake behavior and deformation patterns. The results will be applied to specific earthquakes and fault zones such as the 2008 great Wenchuan earthquake on the Longmen Shan fault system.
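    The friction dependence of the cycle periodicity reported in (a) can already be seen in a single-degree-of-freedom spring-slider analogue with static/dynamic friction, sketched below; the stiffness, loading rate and friction levels are arbitrary illustrative numbers, not those of the finite element model.

      # single-degree-of-freedom stick-slip analogue of the earthquake cycle:
      # a block loaded through a spring at a constant plate rate, with static/dynamic friction
      k = 1.0            # spring stiffness of the loading system
      v_plate = 1.0      # tectonic loading rate
      tau_static = 10.0  # static friction threshold
      tau_dynamic = 4.0  # dynamic (sliding) friction level

      def earthquake_cycle(n_events=5):
          tau, t, events = 0.0, 0.0, []
          for _ in range(n_events):
              # interseismic phase: stress rises linearly until static friction is reached
              t += (tau_static - tau) / (k * v_plate)
              # coseismic phase: stress drops instantaneously to the dynamic level
              slip = (tau_static - tau_dynamic) / k
              events.append((t, slip))
              tau = tau_dynamic
          return events

      for t, slip in earthquake_cycle():
          print(f"event at t = {t:6.2f}, slip = {slip:.2f}")
      # raising tau_static or lowering tau_dynamic lengthens the recurrence interval,
      # which is the friction dependence the abstract's simulations point to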

  1. Optimal Model-Based Fault Estimation and Correction for Particle Accelerators and Industrial Plants Using Combined Support Vector Machines and First Principles Models

    International Nuclear Information System (INIS)

    2010-01-01

    parameters of the beam lifetime model) are physically meaningful. (3) Numerical Efficiency of the Training - We investigated the numerical efficiency of the SVM training. More specifically, for the primal formulation of the training, we have developed a problem formulation that avoids the linear increase in the number of the constraints as a function of the number of data points. (4) Flexibility of Software Architecture - The software framework for the training of the support vector machines was designed to enable experimentation with different solvers. We experimented with two commonly used nonlinear solvers for our simulations. The primary application of interest for this project has been the sustained optimal operation of particle accelerators at the Stanford Linear Accelerator Center (SLAC). Particle storage rings are used for a variety of applications ranging from 'colliding beam' systems for high-energy physics research to highly collimated x-ray generators for synchrotron radiation science. Linear accelerators are also used for collider research such as International Linear Collider (ILC), as well as for free electron lasers, such as the Linear Coherent Light Source (LCLS) at SLAC. One common theme in the operation of storage rings and linear accelerators is the need to precisely control the particle beams over long periods of time with minimum beam loss and stable, yet challenging, beam parameters. We strongly believe that beyond applications in particle accelerators, the high fidelity and cost benefits of a combined model-based fault estimation/correction system will attract customers from a wide variety of commercial and scientific industries. Even though the acquisition of Pavilion Technologies, Inc. by Rockwell Automation Inc. in 2007 has altered the small business status of the Pavilion and it no longer qualifies for a Phase II funding, our findings in the course of the Phase I research have convinced us that further research will render a workable model

  2. Optimal Model-Based Fault Estimation and Correction for Particle Accelerators and Industrial Plants Using Combined Support Vector Machines and First Principles Models

    Energy Technology Data Exchange (ETDEWEB)

    Sayyar-Rodsari, Bijan; Schweiger, Carl; /SLAC /Pavilion Technologies, Inc., Austin, TX

    2010-08-25

    parameters of the beam lifetime model) are physically meaningful. (3) Numerical Efficiency of the Training - We investigated the numerical efficiency of the SVM training. More specifically, for the primal formulation of the training, we have developed a problem formulation that avoids the linear increase in the number of the constraints as a function of the number of data points. (4) Flexibility of Software Architecture - The software framework for the training of the support vector machines was designed to enable experimentation with different solvers. We experimented with two commonly used nonlinear solvers for our simulations. The primary application of interest for this project has been the sustained optimal operation of particle accelerators at the Stanford Linear Accelerator Center (SLAC). Particle storage rings are used for a variety of applications ranging from 'colliding beam' systems for high-energy physics research to highly collimated x-ray generators for synchrotron radiation science. Linear accelerators are also used for collider research such as International Linear Collider (ILC), as well as for free electron lasers, such as the Linear Coherent Light Source (LCLS) at SLAC. One common theme in the operation of storage rings and linear accelerators is the need to precisely control the particle beams over long periods of time with minimum beam loss and stable, yet challenging, beam parameters. We strongly believe that beyond applications in particle accelerators, the high fidelity and cost benefits of a combined model-based fault estimation/correction system will attract customers from a wide variety of commercial and scientific industries. Even though the acquisition of Pavilion Technologies, Inc. by Rockwell Automation Inc. in 2007 has altered the small business status of the Pavilion and it no longer qualifies for a Phase II funding, our findings in the course of the Phase I research have convinced us that further research will render a workable

  3. Simulation of accelerator transmutation of long-lived nuclear wastes

    International Nuclear Information System (INIS)

    Wolff-Bacha Fabienne

    1997-01-01

    The incineration of minor actinides in a hybrid reactor (i.e. one coupled with an accelerator) could reduce their radioactivity. The scientific tool used for the simulations, the GEANT code implemented on a parallel computer, was first validated on thin and thick targets and by simulation of a pressurized water reactor, a fast reactor like Superphenix, and the molten-salt fast hybrid reactor 'ATP'. Simulation of a thermal hybrid reactor seems to indicate a non-negligible population of neutrons which diffuse back to the accelerator. In spite of simplifications, the simulation of a molten-lead fast hybrid reactor (such as the CERN Fast Energy Amplifier) might indicate difficulties with the radial power distribution in the core, the lifetime of the window, and the risk of activated-air leakage. Finally, we propose a compact thermoelectric hybrid reactor, PRAHE - small atomic board hybrid reactor - whose principle allows a neutron coupling between the accelerator and the reactor. (author)

  4. Rupture Dynamics and Seismic Radiation on Rough Faults for Simulation-Based PSHA

    Science.gov (United States)

    Mai, P. M.; Galis, M.; Thingbaijam, K. K. S.; Vyas, J. C.; Dunham, E. M.

    2017-12-01

    Simulation-based ground-motion predictions may augment PSHA studies in data-poor regions or provide additional shaking estimations, including seismic waveforms, for critical facilities. Validation and calibration of such simulation approaches, based on observations and GMPEs, are important for engineering applications, while seismologists push to include the precise physics of the earthquake rupture process and seismic wave propagation in a 3D heterogeneous Earth. Geological faults comprise both large-scale segmentation and small-scale roughness that determine the dynamics of the earthquake rupture process and its radiated seismic wavefield. We investigate how different parameterizations of fractal fault roughness affect the rupture evolution and resulting near-fault ground motions. Rupture incoherence induced by fault roughness generates realistic ω-2 decay for high-frequency displacement amplitude spectra. Waveform characteristics and GMPE-based comparisons corroborate that these rough-fault rupture simulations generate realistic synthetic seismograms for subsequent engineering applications. Since dynamic rupture simulations are computationally expensive, we develop kinematic approximations that emulate the observed dynamics. Simplifying the rough-fault geometry, we find that perturbations in local moment tensor orientation are important, while perturbations in local source location are not. Thus, a planar fault can be assumed if the local strike, dip, and rake are maintained. The dynamic rake angle variations are anti-correlated with local dip angles. Based on a dynamically consistent Yoffe source-time function, we show that the seismic wavefield of the approximated kinematic rupture well reproduces the seismic radiation of the full dynamic source process. Our findings provide an innovative pseudo-dynamic source characterization that captures fault roughness effects on rupture dynamics. Including the correlations between kinematic source parameters, we present a new

  5. ELECTROMAGNETIC SIMULATIONS OF LINEAR PROTON ACCELERATOR STRUCTURES USING DIELECTRIC WALL ACCELERATORS

    International Nuclear Information System (INIS)

    Nelson, S; Poole, B; Caporaso, G

    2007-01-01

    Proton accelerator structures for medical applications using Dielectric Wall Accelerator (DWA) technology allow for the utilization of high electric field gradients on the order of 100 MV/m to accelerate the proton bunch. Medical applications involving cancer therapy treatment usually desire short bunch lengths on the order of hundreds of picoseconds in order to limit the extent of the energy deposited in the tumor site (in 3D space, time, and deposited proton charge). Electromagnetic simulations of the DWA structure, in combination with injections of proton bunches, have been performed using 3D finite difference codes together with particle pushing codes. These electromagnetic simulations of DWA structures include these effects as well as the details of the switch configuration and how the switch timing affects the electric field pulse that accelerates the particle beam.

  6. Accounting for Fault Roughness in Pseudo-Dynamic Ground-Motion Simulations

    KAUST Repository

    Mai, Paul Martin

    2017-04-03

    Geological faults comprise large-scale segmentation and small-scale roughness. These multi-scale geometrical complexities determine the dynamics of the earthquake rupture process, and therefore affect the radiated seismic wavefield. In this study, we examine how different parameterizations of fault roughness lead to variability in the rupture evolution and the resulting near-fault ground motions. Rupture incoherence naturally induced by fault roughness generates high-frequency radiation that follows an ω−2 decay in displacement amplitude spectra. Because dynamic rupture simulations are computationally expensive, we test several kinematic source approximations designed to emulate the observed dynamic behavior. When simplifying the rough-fault geometry, we find that perturbations in local moment tensor orientation are important, while perturbations in local source location are not. Thus, a planar fault can be assumed if the local strike, dip, and rake are maintained. We observe that dynamic rake angle variations are anti-correlated with the local dip angles. Testing two parameterizations of dynamically consistent Yoffe-type source-time function, we show that the seismic wavefield of the approximated kinematic ruptures well reproduces the radiated seismic waves of the complete dynamic source process. This finding opens a new avenue for an improved pseudo-dynamic source characterization that captures the effects of fault roughness on earthquake rupture evolution. By including also the correlations between kinematic source parameters, we outline a new pseudo-dynamic rupture modeling approach for broadband ground-motion simulation.

  7. Accounting for Fault Roughness in Pseudo-Dynamic Ground-Motion Simulations

    Science.gov (United States)

    Mai, P. Martin; Galis, Martin; Thingbaijam, Kiran K. S.; Vyas, Jagdish C.; Dunham, Eric M.

    2017-09-01

    Geological faults comprise large-scale segmentation and small-scale roughness. These multi-scale geometrical complexities determine the dynamics of the earthquake rupture process, and therefore affect the radiated seismic wavefield. In this study, we examine how different parameterizations of fault roughness lead to variability in the rupture evolution and the resulting near-fault ground motions. Rupture incoherence naturally induced by fault roughness generates high-frequency radiation that follows an ω-2 decay in displacement amplitude spectra. Because dynamic rupture simulations are computationally expensive, we test several kinematic source approximations designed to emulate the observed dynamic behavior. When simplifying the rough-fault geometry, we find that perturbations in local moment tensor orientation are important, while perturbations in local source location are not. Thus, a planar fault can be assumed if the local strike, dip, and rake are maintained. We observe that dynamic rake angle variations are anti-correlated with the local dip angles. Testing two parameterizations of dynamically consistent Yoffe-type source-time function, we show that the seismic wavefield of the approximated kinematic ruptures well reproduces the radiated seismic waves of the complete dynamic source process. This finding opens a new avenue for an improved pseudo-dynamic source characterization that captures the effects of fault roughness on earthquake rupture evolution. By including also the correlations between kinematic source parameters, we outline a new pseudo-dynamic rupture modeling approach for broadband ground-motion simulation.

  8. Accounting for Fault Roughness in Pseudo-Dynamic Ground-Motion Simulations

    KAUST Repository

    Mai, Paul Martin; Galis, Martin; Thingbaijam, Kiran Kumar; Vyas, Jagdish Chandra; Dunham, Eric M.

    2017-01-01

    Geological faults comprise large-scale segmentation and small-scale roughness. These multi-scale geometrical complexities determine the dynamics of the earthquake rupture process, and therefore affect the radiated seismic wavefield. In this study, we examine how different parameterizations of fault roughness lead to variability in the rupture evolution and the resulting near-fault ground motions. Rupture incoherence naturally induced by fault roughness generates high-frequency radiation that follows an ω−2 decay in displacement amplitude spectra. Because dynamic rupture simulations are computationally expensive, we test several kinematic source approximations designed to emulate the observed dynamic behavior. When simplifying the rough-fault geometry, we find that perturbations in local moment tensor orientation are important, while perturbations in local source location are not. Thus, a planar fault can be assumed if the local strike, dip, and rake are maintained. We observe that dynamic rake angle variations are anti-correlated with the local dip angles. Testing two parameterizations of dynamically consistent Yoffe-type source-time function, we show that the seismic wavefield of the approximated kinematic ruptures well reproduces the radiated seismic waves of the complete dynamic source process. This finding opens a new avenue for an improved pseudo-dynamic source characterization that captures the effects of fault roughness on earthquake rupture evolution. By including also the correlations between kinematic source parameters, we outline a new pseudo-dynamic rupture modeling approach for broadband ground-motion simulation.

  9. Numerical simulations of earthquakes and the dynamics of fault systems using the Finite Element method.

    Science.gov (United States)

    Kettle, L. M.; Mora, P.; Weatherley, D.; Gross, L.; Xing, H.

    2006-12-01

    Simulations using the Finite Element method are widely used in many engineering applications and for the solution of partial differential equations (PDEs). Computational models based on the solution of PDEs play a key role in earth systems simulations. We present numerical modelling of crustal fault systems where the dynamic elastic wave equation is solved using the Finite Element method. This is achieved using a high-level computational modelling language, escript, available as open source software from ACcESS (the Australian Computational Earth Systems Simulator) at the University of Queensland. Escript is an advanced geophysical simulation software package developed at ACcESS which includes parallel equation solvers, data visualisation and data analysis software. The escript library was used to develop a flexible Finite Element model which reliably simulates the mechanism of faulting and the physics of earthquakes. Both 2D and 3D elastodynamic models are being developed to study the dynamics of crustal fault systems. Our final goal is to build a flexible model which can be applied to any fault system with user-defined geometry and input parameters. To study the physics of earthquake processes, two different time scales must be modelled: firstly, the quasi-static loading phase which gradually increases stress in the system (~100 years), and secondly, the dynamic rupture process which rapidly redistributes stress in the system (~100 s). We will discuss the solution of the time-dependent elastic wave equation for an arbitrary fault system using escript. This involves prescribing the correct initial stress distribution in the system to simulate the quasi-static loading of faults to failure; determining a suitable frictional constitutive law which accurately reproduces the dynamics of the stick/slip instability at the faults; and using a robust time integration scheme. These dynamic models generate data and information that can be used for earthquake forecasting.
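
    As a generic illustration of the dynamic (wave-propagation) phase described above, and not of the authors' escript implementation, the sketch below advances the 1D elastic wave equation ρ u_tt = E u_xx with an explicit finite-difference scheme; the material values are made up.

```python
import numpy as np

# generic 1D elastic wave: rho * u_tt = E * u_xx (homogeneous bar, clamped ends)
nx, L = 401, 1000.0            # grid points, domain length [m]
rho, E = 2700.0, 50e9          # density [kg/m^3], stiffness [Pa] (made-up values)
c = np.sqrt(E / rho)           # wave speed
dx = L / (nx - 1)
dt = 0.5 * dx / c              # CFL-limited explicit time step

x = np.linspace(0.0, L, nx)
u_prev = np.exp(-((x - L / 2) ** 2) / (2 * 25.0 ** 2))   # initial displacement pulse
u = u_prev.copy()                                        # zero initial velocity

for _ in range(2000):          # leapfrog update of the interior points
    u_next = np.empty_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + (c * dt / dx) ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_next[0] = u_next[-1] = 0.0                         # fixed (clamped) ends
    u_prev, u = u, u_next

print("max |u| after propagation:", np.abs(u).max())
```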

  10. Simulation of accelerator-driven systems

    International Nuclear Information System (INIS)

    Kadi, Y.; Carminati, F.

    2003-01-01

    The neutronic calculations presented in this paper are the result of a state-of-the-art computer code package developed by the EET group at CERN. Both high-energy particle interactions and low-energy neutron transport are treated with a sophisticated method based on a full Monte Carlo simulation, together with modern nuclear data libraries. The code is designed to run both on parallel and scalar computers. A series of experiments carried out at the CERN-PS (i) confirmed that the spallation process is correctly predicted and (ii) validated the reliability of the predictions of the integral neutronic parameters of the Energy Amplifier Demonstration Facility. (author)

  11. Accelerator and feedback control simulation using neural networks

    International Nuclear Information System (INIS)

    Nguyen, D.; Lee, M.; Sass, R.; Shoaee, H.

    1991-05-01

    Unlike present constant-model feedback systems, neural networks can adapt as the dynamics of the process change with time. Using a process model, the "Accelerator" network is first trained to simulate the dynamics of the beam for a given beam line. This "Accelerator" network is then used to train a second "Controller" network which performs the control function. In simulation, the networks are used to adjust corrector magnets to control the launch angle and position of the beam, keeping it on the desired trajectory when the incoming beam is perturbed. 4 refs., 3 figs
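
    A rough sketch of the two-network idea (train a plant model, then train a controller through it) is given below; it uses small feed-forward networks and a purely synthetic linear beamline response, so the network sizes, training data, and setup are illustrative assumptions rather than the original implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Accelerator" network: learns the beamline response (corrector settings -> launch position/angle)
plant = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 2))
# "Controller" network: maps a measured trajectory error to corrector settings
controller = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 2))

# hypothetical linear beamline used only to generate training data
true_response = torch.tensor([[0.8, 0.1], [-0.2, 0.9]])

# 1) train the plant model on input/output pairs
opt = torch.optim.Adam(plant.parameters(), lr=1e-2)
for _ in range(2000):
    u = torch.randn(64, 2)                    # corrector strengths
    y = u @ true_response.T                   # resulting launch position/angle
    loss = ((plant(u) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# 2) train the controller by backpropagating through the frozen plant model
for p in plant.parameters():
    p.requires_grad_(False)
opt_c = torch.optim.Adam(controller.parameters(), lr=1e-2)
for _ in range(2000):
    err = torch.randn(64, 2)                  # perturbation of the incoming beam
    correction = plant(controller(err))       # predicted effect of the correctors
    loss = ((correction + err) ** 2).mean()   # want the correction to cancel the error
    opt_c.zero_grad(); loss.backward(); opt_c.step()

print("residual error with controller:",
      ((plant(controller(torch.eye(2))) + torch.eye(2)) ** 2).mean().item())
```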

  12. Numerical simulation of superconducting accelerator magnets

    CERN Document Server

    Kurz, Stefan

    2002-01-01

    Modeling and simulation are key elements in assuring the fast and successful design of superconducting magnets. After a general introduction, the paper focuses on electromagnetic field computations, which are an indispensable tool in the design process. A technique which is especially well suited for the accurate computation of magnetic fields in superconducting magnets is presented. This method couples Boundary Elements (BEM), which discretize the surface of the iron yoke, with Finite Elements (FEM) for the modeling of the nonlinear interior of the yoke. The formulation is based on a total magnetic scalar potential throughout the whole problem domain. The results for a short dipole model are presented and compared to previous results, which have been obtained from a similar BEM-FEM coupled vector potential formulation. 10 refs.

  13. Computer simulations of compact toroid formation and acceleration

    International Nuclear Information System (INIS)

    Peterkin, R.E. Jr.; Sovinec, C.R.

    1990-01-01

    Experiments to form, accelerate, and focus compact toroid plasmas will be performed on the 9.4 MJ SHIVA STAR fast capacitor bank at the Air Force Weapons Laboratory during 1990. The MARAUDER (magnetically accelerated rings to achieve ultrahigh directed energy and radiation) program is a research effort to accelerate magnetized plasma rings with masses between 0.1 and 1.0 mg to velocities above 10⁸ cm/s and energies above 1 MJ. Research on these high-velocity compact toroids may lead to the development of very fast opening switches, high-power microwave sources, and an alternative path to inertial confinement fusion. Design of a compact toroid accelerator experiment on the SHIVA STAR capacitor bank is underway, and computer simulations with the 2 1/2-dimensional magnetohydrodynamics code, MACH2, have been performed to guide this endeavor. The compact toroids are produced in a magnetized coaxial plasma gun, and the acceleration will occur in a configuration similar to a coaxial railgun. Detailed calculations of the formation and equilibration of a low-beta magnetic force-free configuration (curl B = kB) have been performed with MACH2. In this paper, the authors discuss computer simulations of the focusing and acceleration of the toroid

  14. Fault Gouge Numerical Simulation: Dynamic Rupture Propagation and Local Energy Partitioning

    Science.gov (United States)

    Mollon, G.

    2017-12-01

    In this communication, we present dynamic simulations of the local (centimetric) behaviour of a fault filled with a granular gouge and submitted to dynamic rupture. The numerical tool (Fig. 1) combines classical Discrete Element Modelling (albeit with the ability to deal with arbitrary grain shapes) for the simulation of the gouge, and continuous modelling for the simulation of acoustic wave emission and propagation. In a first part, the model is applied to the simulation of steady-state shearing of the fault under remote displacement boundary conditions, in order to observe the shear accommodation at the interface (R1 cracks, localization, wear, etc.). It also makes it possible to tune the rate-and-state friction properties of the granular gouge to desired values by adapting the contact laws between grains. Such simulations provide quantitative insight into the steady-state energy partitioning between fracture, friction and acoustic emissions as a function of the shear rate. In a second part, the model is submitted to dynamic rupture. For that purpose, the fault is elastically preloaded just below rupture, and a displacement pulse is applied at one end of the sample (and on only one side of the fault). This allows observation of the propagation of the instability along the fault and of the interplay between this propagation and the local granular phenomena. Energy partitioning is then observed both in space and time.

  15. GPU-accelerated micromagnetic simulations using cloud computing

    International Nuclear Information System (INIS)

    Jermain, C.L.; Rowlands, G.E.; Buhrman, R.A.; Ralph, D.C.

    2016-01-01

    Highly parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics. - Highlights: • The benefits of cloud computing for GPU-accelerated micromagnetics are examined. • We present the MuCloud software for running simulations on cloud computing. • Simulation run times are measured to benchmark cloud computing performance. • Comparison benchmarks are analyzed between CPU and GPU based solvers.

  16. GPU-accelerated micromagnetic simulations using cloud computing

    Energy Technology Data Exchange (ETDEWEB)

    Jermain, C.L., E-mail: clj72@cornell.edu [Cornell University, Ithaca, NY 14853 (United States); Rowlands, G.E.; Buhrman, R.A. [Cornell University, Ithaca, NY 14853 (United States); Ralph, D.C. [Cornell University, Ithaca, NY 14853 (United States); Kavli Institute at Cornell, Ithaca, NY 14853 (United States)

    2016-03-01

    Highly parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics. - Highlights: • The benefits of cloud computing for GPU-accelerated micromagnetics are examined. • We present the MuCloud software for running simulations on cloud computing. • Simulation run times are measured to benchmark cloud computing performance. • Comparison benchmarks are analyzed between CPU and GPU based solvers.

  17. Special relativity in beam trajectory simulation in small accelerators

    International Nuclear Information System (INIS)

    Pramudita Anggraita; Budi Santosa; Taufik; Emy Mulyani; Frida Iswinning Diah

    2012-01-01

    Calculations for the trajectory simulation of particle beams in small accelerators should account for the special relativity effect in the beam motion, which differs between the directions parallel and perpendicular to the beam velocity. For a small 300 keV electron beam machine the effect shows up clearly, as the rest mass of the electron is only 511 keV. Neglecting the effect yields an incorrect kinetic energy after 300 kV of dc acceleration. For a 13 MeV PET (positron emission tomography) baby cyclotron accelerating a proton beam, the effect increases the proton mass by about 1.4% at the final energy. To keep the beam isochronous with the accelerating radiofrequency, a radial increase of the average magnetic field must be designed accordingly. (author)
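
    The numbers quoted above follow directly from the relativistic relation γ = 1 + T/(m₀c²); the short sketch below checks them for the 300 keV electron and the 13 MeV proton.

```python
# gamma = 1 + T / (m0 c^2); the relativistic mass increase is (gamma - 1) * 100 %
ELECTRON_REST = 0.511    # MeV
PROTON_REST   = 938.272  # MeV

def gamma(kinetic_mev, rest_mev):
    return 1.0 + kinetic_mev / rest_mev

def beta(g):
    return (1.0 - 1.0 / g ** 2) ** 0.5

g_e = gamma(0.3, ELECTRON_REST)      # 300 keV electron
g_p = gamma(13.0, PROTON_REST)       # 13 MeV proton (PET baby cyclotron)

print(f"electron: gamma = {g_e:.3f}, beta = {beta(g_e):.3f}")
print(f"proton:   gamma = {g_p:.4f} -> mass increase ~{(g_p - 1) * 100:.1f}%")
# the classical estimate T = m v^2 / 2 would give beta > 1 for the electron (unphysical):
print("classical electron beta:", (2 * 0.3 / ELECTRON_REST) ** 0.5)
```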

  18. Linear accelerator simulation framework with PLACET and GUINEA-PIG

    CERN Document Server

    Snuverink, Jochem; CERN. Geneva. ATS Department

    2016-01-01

    Many good tracking tools are available for simulations of linear accelerators. However, several simple tasks need to be performed repeatedly, like lattice definitions, beam setup, output storage, etc. In addition, complex simulations can become unmanageable quite easily. A high-level layer would therefore be beneficial. We propose LinSim, a linear accelerator framework with the codes PLACET and GUINEA-PIG. It provides a documented, well-debugged high-level layer of functionality. Users only need to provide the input settings and essential code and/or use some of the many implemented imperfections and algorithms. It can be especially useful for first-time users. Currently the following accelerators are implemented: ATF2, ILC, CLIC and FACET. This note is the comprehensive manual; it discusses the framework design and shows its strength in some condensed examples.

  19. Simulation of the long-term behaviour of a fault with two asperities

    Directory of Open Access Journals (Sweden)

    M. Dragoni

    2010-12-01

    Full Text Available A system made of two sliding blocks coupled by a spring is employed to simulate the long-term behaviour of a fault with two asperities. An analytical solution is given for the motion of the system in the case of blocks having the same friction. An analysis of the phase space shows that orbits can reach a limit cycle only after entering a particular subset of the space. There is an infinite number of different limit cycles, characterized by the difference between the forces applied to the blocks or, as an alternative, by the recurrence pattern of block motions. These results suggest that the recurrence pattern of seismic events produced by the equivalent fault system is associated with a particular stress distribution which repeats periodically. Admissible stress distributions require a certain degree of inhomogeneity, which depends on the geometry of the fault system. Aperiodicity may derive from stress transfers from neighboring faults.
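
    A minimal numerical sketch of a two-asperity fault idealized as two spring-coupled blocks is shown below. It is a quasi-static stick-slip toy model with invented stiffness and friction values, not the analytical solution of the paper.

```python
import numpy as np

# two blocks pulled by a slowly moving driver plate (stiffness kd) and
# coupled to each other by a spring (stiffness kc); all values are illustrative
kd, kc = 1.0, 0.5
f_static, f_dynamic = 1.0, 0.5     # failure and arrest force levels (same friction on both)
x = np.zeros(2)                    # block positions
events = []

for step in range(20000):
    t = step * 1e-3                # plate position advances slowly with "time" t
    force = kd * (t - x) + kc * (x[::-1] - x)
    for i in (0, 1):
        if force[i] >= f_static:   # block i fails and slips until its force drops
            slip = (force[i] - f_dynamic) / (kd + kc)
            x[i] += slip
            events.append((t, i, slip))
    # note: interactions within a single step are ignored for simplicity

print(f"{len(events)} slip events")
for t, i, slip in events[:6]:
    print(f"t = {t:6.3f}  block {i}  slip = {slip:.3f}")
```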

  20. Accelerated pavement testing efforts using the heavy vehicle simulator

    CSIR Research Space (South Africa)

    Du Plessis, Louw

    2017-10-01

    Full Text Available This paper provides a brief description of the technological developments involved in the development and use of the Heavy Vehicle Simulator (HVS) accelerated pavement testing equipment. This covers the period from concept in the late 1960’s...

  1. The Design and Semi-Physical Simulation Test of Fault-Tolerant Controller for Aero Engine

    Science.gov (United States)

    Liu, Yuan; Zhang, Xin; Zhang, Tianhong

    2017-11-01

    A new fault-tolerant control method for an aero engine is proposed, which can accurately diagnose sensor faults with Kalman filter banks and reconstruct the signal with a real-time on-board adaptive model combining a simplified real-time model and an improved Kalman filter. In order to verify the feasibility of the proposed method, a semi-physical simulation experiment has been carried out. Besides the real I/O interfaces, controller hardware and the virtual plant model, the semi-physical simulation system also contains a real fuel system. Compared with hardware-in-the-loop (HIL) simulation, the semi-physical simulation system has a higher degree of confidence. In order to meet the needs of semi-physical simulation, a rapid prototyping controller with fault-tolerant control capability based on the NI CompactRIO platform is designed and verified on the semi-physical simulation test platform. The results show that the controller can control the aero engine safely and reliably, with little influence on controller performance in the event of a sensor fault.
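
    The sensor fault diagnosis idea (a bank of Kalman filters whose innovations flag the faulty sensor) can be pictured with a deliberately simple scalar example; the plant model, noise levels, fault size and threshold below are invented for illustration and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# scalar plant x_{k+1} = a x_k + w, two redundant sensors y_i = x + v_i
a, q, r = 0.98, 0.01, 0.04
n_steps, fault_step, fault_bias = 400, 200, 2.0

x, meas = 1.0, []
for k in range(n_steps):
    x = a * x + rng.normal(0.0, q ** 0.5)
    y = x + rng.normal(0.0, r ** 0.5, size=2)
    if k >= fault_step:
        y[1] += fault_bias             # inject a bias fault on sensor 1
    meas.append(y)

def normalized_innovations(sensor):
    """Scalar Kalman filter driven by one sensor; returns innovations / their std."""
    xhat, p, out = 0.0, 1.0, []
    for y in meas:
        xhat, p = a * xhat, a * a * p + q                     # predict
        s = p + r                                             # innovation variance
        innov = y[sensor] - xhat
        k_gain = p / s
        xhat, p = xhat + k_gain * innov, (1.0 - k_gain) * p   # update
        out.append(innov / s ** 0.5)
    return np.array(out)

for sensor in range(2):
    peak = np.abs(normalized_innovations(sensor)).max()
    verdict = "FAULT suspected" if peak > 4.0 else "ok"
    print(f"sensor {sensor}: peak normalized innovation = {peak:.1f} -> {verdict}")
```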

  2. Quantification of Fault-Zone Plasticity Effects with Spontaneous Rupture Simulations

    Science.gov (United States)

    Roten, D.; Olsen, K. B.; Day, S. M.; Cui, Y.

    2017-09-01

    Previous studies have shown that plastic yielding in crustal rocks in the fault zone may impose a physical limit to extreme ground motions. We explore the effects of fault-zone non-linearity on peak ground velocities (PGVs) by simulating a suite of surface-rupturing strike-slip earthquakes in a medium governed by Drucker-Prager plasticity using the AWP-ODC finite-difference code. Our simulations cover magnitudes ranging from 6.5 to 8.0, three different rock strength models, and average stress drops of 3.5 and 7.0 MPa, with a maximum frequency of 1 Hz and a minimum shear-wave velocity of 500 m/s. Friction angles and cohesions in our rock models are based on strength criteria which are frequently used for fractured rock masses in civil and mining engineering. For an average stress drop of 3.5 MPa, plastic yielding reduces near-fault PGVs by 15-30% in pre-fractured, low strength rock, but less than 1% in massive, high-quality rock. These reductions are almost insensitive to magnitude. If the stress drop is doubled, plasticity reduces near-fault PGVs by 38-45% and 5-15% in rocks of low and high strength, respectively. Because non-linearity reduces slip rates and static slip near the surface, plasticity acts in addition to, and may partially be emulated by, a shallow velocity-strengthening layer. The effects of plasticity are exacerbated if a fault damage zone with reduced shear-wave velocities and reduced rock strength is present. In the linear case, fault-zone trapped waves result in higher near-surface peak slip rates and ground velocities compared to simulations without a low-velocity zone. These amplifications are balanced out by fault-zone plasticity if rocks in the damage zone exhibit low-to-moderate strength throughout the depth extent of the low-velocity zone (˜5 km). We also perform dynamic non-linear simulations of a high stress drop (8 MPa) M 7.8 earthquake rupturing the southern San Andreas fault along 250 km from Indio to Lake Hughes. Non-linearity in the
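
    For reference, one common form of the Drucker-Prager yield criterion used in such plasticity simulations is f = sqrt(J2) + α·I1 − k, with α and k matched to a Mohr-Coulomb friction angle and cohesion along the compressive meridian; the sketch below evaluates it for an invented stress state and is not tied to the specific strength models or code of the study.

```python
import numpy as np

def drucker_prager_params(cohesion, friction_angle_deg):
    """Match the Drucker-Prager cone to Mohr-Coulomb along the compressive meridian."""
    phi = np.radians(friction_angle_deg)
    alpha = 2.0 * np.sin(phi) / (np.sqrt(3.0) * (3.0 - np.sin(phi)))
    k = 6.0 * cohesion * np.cos(phi) / (np.sqrt(3.0) * (3.0 - np.sin(phi)))
    return alpha, k

def yields(stress, alpha, k):
    """Yield check f = sqrt(J2) + alpha*I1 - k > 0 (tension-positive stresses, Pa)."""
    i1 = np.trace(stress)
    dev = stress - i1 / 3.0 * np.eye(3)
    j2 = 0.5 * np.sum(dev * dev)
    return np.sqrt(j2) + alpha * i1 - k > 0.0

# made-up example: weak, fractured rock (c = 5 MPa, phi = 30 deg) under compression
alpha, k = drucker_prager_params(cohesion=5e6, friction_angle_deg=30.0)
lithostatic = -50e6 * np.eye(3)                    # compressive mean stress
shear = 45e6                                       # strong dynamic shear loading
stress = lithostatic + np.array([[0.0, shear, 0.0], [shear, 0.0, 0.0], [0.0, 0.0, 0.0]])
print("plastic yielding?", yields(stress, alpha, k))
```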

  3. Using an Earthquake Simulator to Model Tremor Along a Strike Slip Fault

    Science.gov (United States)

    Cochran, E. S.; Richards-Dinger, K. B.; Kroll, K.; Harrington, R. M.; Dieterich, J. H.

    2013-12-01

    We employ the earthquake simulator, RSQSim, to investigate the conditions under which tremor occurs in the transition zone of the San Andreas fault. RSQSim is a computationally efficient method that uses rate- and state-dependent friction to simulate a wide range of event sizes for long time histories of slip [Dieterich and Richards-Dinger, 2010; Richards-Dinger and Dieterich, 2012]. RSQSim has been previously used to investigate slow slip events in Cascadia [Colella et al., 2011; 2012]. Earthquake, tremor, slow slip, and creep occurrence are primarily controlled by the rate and state constants a and b and the slip speed. We will report the preliminary results of using RSQSim to vary fault frictional properties in order to better understand rupture dynamics in the transition zone, using observed characteristics of tremor along the San Andreas fault. Recent studies of tremor along the San Andreas fault provide information on tremor characteristics including precise locations, peak amplitudes, duration of tremor episodes, and tremor migration. We use these observations to constrain numerical simulations that examine the slip conditions in the transition zone of the San Andreas Fault. Here, we use the earthquake simulator, RSQSim, to conduct multi-event simulations of tremor for a strike-slip fault modeled on the Cholame section of the San Andreas fault. Tremor was first observed on the San Andreas fault near Cholame, California, near the southern edge of the 2004 Parkfield rupture [Nadeau and Dolenc, 2005]. Since then, tremor has been observed across a 150 km section of the San Andreas with depths between 16-28 km and peak amplitudes that vary by a factor of 7 [Shelly and Hardebeck, 2010]. Tremor episodes, comprised of multiple low frequency earthquakes (LFEs), tend to be relatively short, lasting tens of seconds to as long as 1-2 hours [Horstmann et al., in review, 2013]; tremor occurs regularly with some tremor observed almost daily [Shelly and Hardebeck, 2010; Horstmann
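
    RSQSim is built on rate- and state-dependent friction. As a reminder of the constitutive law whose constants a and b are being varied, the sketch below integrates the standard aging-law formulation across a step in slip speed, using purely illustrative parameter values.

```python
import numpy as np

# rate-and-state friction: mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)
# aging law: dtheta/dt = 1 - V*theta/Dc      (illustrative parameter values only)
mu0, a, b, dc, v0 = 0.6, 0.010, 0.015, 1e-4, 1e-6

def simulate_velocity_step(v_before, v_after, t_step=50.0, dt=1e-3):
    theta = dc / v_before                     # steady state at the initial speed
    mus, v = [], v_before
    for k in range(int(2 * t_step / dt)):
        if k * dt >= t_step:
            v = v_after                       # impose the velocity step
        theta += dt * (1.0 - v * theta / dc)  # aging-law state evolution
        mus.append(mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / dc))
    return np.array(mus)

mu = simulate_velocity_step(1e-6, 1e-5)
print("friction before the step:  ", round(mu[len(mu) // 2 - 1], 4))
print("new steady-state friction: ", round(mu[-1], 4))
# with b > a the steady-state friction drops as slip speed rises (velocity weakening),
# the regime in which stick-slip phenomena such as earthquakes and tremor can nucleate
```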

  4. Design and simulation of an accelerating and focusing system

    Directory of Open Access Journals (Sweden)

    A Sadeghipanah

    2011-06-01

    Full Text Available Electrostatic focusing lenses have a vast field of applications in electrostatic accelerators and particularly in electron guns. In this paper, we first present a parametric mathematical analysis of an electrostatic accelerating and focusing system for an electron beam. Next, we design a system of electron emission slit, accelerating electrodes and focusing lens for an electron beam emitted from a cathode with 4 mm radius and 2 mA current, over a distance of less than 10 cm, up to an energy of 30 keV and with a beam divergence of less than 5°. This is achieved by solving the equations resulting from the mathematical analysis using MATLAB. Finally, we simulate the behavior of the above electron beam in the designed accelerating and focusing system using CST EM Studio. The simulation results are in good agreement with the required specifications of the electron beam, demonstrating the accuracy of the method used in the analysis and design of the accelerating and focusing system.

  5. Simulation of an Accelerator Driven Subcritical Reactor core with thorium fuel

    International Nuclear Information System (INIS)

    Shirmohammadi, L.; Pazirandeh, A.

    2011-01-01

    The main purpose of this work is the simulation of an accelerator-driven subcritical core with thorium as a new-generation nuclear fuel. In this design, a subcritical core coupled to an accelerator with a proton beam (Ep = 1 GeV) is simulated with the MCNPX code. Although the main purpose of ADS systems is transmutation and the use of minor actinides (MA) as nuclear fuel, another use of these systems is thorium fuel. The simulated core has two fuel assembly types: (Th-U) and (U-Pu). The neutronic parameters related to the ADS core are then calculated. It is shown that thorium fuel is usable in this core, with less nuclear waste. Although Iran has no thorium reserves, the study of the thorium fuel cycle can open a new horizon for the use of nuclear energy as a clean energy with less nuclear waste.

  6. Frictional healing in simulated anhydrite fault gouges: effects of water and CO2

    Science.gov (United States)

    Pluymakers, Anne; Bakker, Elisenda; Samuelson, Jon; Spiers, Christopher

    2014-05-01

    Currently, depleted hydrocarbon reservoirs are in many ways considered ideal for storage of CO2 and other gases. Faults are of major importance to CO2 storage because of their potential as leakage pathways, and also due to the possible seismic risk associated with fault reactivation. Both in the Netherlands and worldwide, anhydrite-rich rocks are a common topseal for many potential storage sites, making it likely that crosscutting faults will contain fault gouges rich in anhydrite. In order to assess the likelihood of fault reactivation and/or fault leakage, it is important to have a thorough understanding of the fault strength, velocity dependence and the potential to regain frictional strength after fault movement (healing behavior) of anhydrite fault gouge. Starting with a natural anhydrite (>95 wt% CaSO4) containing minor quantities of dolomite, we performed direct shear experiments on simulated anhydrite fault gouges using both slide-hold-slide and velocity-stepping sequences. The pore fluid phase was varied (air, vacuum, water, dry/wet CO2), and the pressure and temperature conditions used are representative of potential CO2 storage sites, with an effective normal stress of 25 MPa, a temperature of 120°C and, where used, a pore fluid pressure of 15 MPa. First results indicate that frictional healing in anhydrite is strongly influenced by the presence of water. Dry fault gouges exhibit no measurable frictional healing for hold times up to 1 hour, whereas wet gouges show significant healing and stress relaxation, even for short duration hold periods (30 s), suggesting a fluid-assisted process such as pressure solution might be of importance. Interestingly, while many materials exhibit a log-linear dependence of frictional drop on hold time (i.e. "Dieterich-type" healing), our results for wet gouge indicate a non-linear increase of frictional drop with increasing hold time. To determine if pressure solution controls frictional healing we will perform control experiments using a CaSO4
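
    The "Dieterich-type" healing referred to above is usually summarized as a strength regain that grows linearly with the logarithm of hold time; the short sketch below simply evaluates that empirical law with an invented healing rate, for comparison with the non-log-linear behaviour reported for the wet gouges.

```python
import numpy as np

# Dieterich-type healing: delta_mu ~ beta * log10(t_hold / t_c) for t_hold > t_c
beta, t_c = 0.008, 1.0          # healing rate per decade and cutoff time [s] (illustrative)
holds = np.array([3.0, 10.0, 30.0, 100.0, 300.0, 1000.0, 3600.0])
delta_mu = beta * np.log10(holds / t_c)
for t, d in zip(holds, delta_mu):
    print(f"hold {t:7.0f} s -> healing delta_mu = {d:.4f}")
# the wet anhydrite gouges described above deviate from this simple log-linear trend
```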

  7. Beam Dynamics Simulation for the CTF3 Drive Beam Accelerator

    CERN Document Server

    Schulte, Daniel

    2000-01-01

    A new CLIC Test Facility (CTF3) at CERN will serve to study the drive beam generation for the Compact Linear Collider (CLIC). CTF3 has to accelerate a 3.5 A electron beam in almost fully-loaded structures. The pulse contains more than 2000 bunches, one in every second RF bucket, and has a length of more than one microsecond. Different options for the lattice of the drive-beam accelerator are presented, based on FODO-cells and triplets as well as solenoids. The transverse stability is simulated, including the effects of beam jitter, alignment and beam-based correction.

  8. Temporal acceleration of spatially distributed kinetic Monte Carlo simulations

    International Nuclear Information System (INIS)

    Chatterjee, Abhijit; Vlachos, Dionisios G.

    2006-01-01

    The computational intensity of kinetic Monte Carlo (KMC) simulation is a major impediment in simulating large length and time scales. In recent work, an approximate method for KMC simulation of spatially uniform systems, termed the binomial τ-leap method, was introduced [A. Chatterjee, D.G. Vlachos, M.A. Katsoulakis, Binomial distribution based τ-leap accelerated stochastic simulation, J. Chem. Phys. 122 (2005) 024112], where molecular bundles instead of individual processes are executed over coarse-grained time increments. This temporal coarse-graining can lead to significant computational savings, but its generalization to spatial lattice KMC simulation has not been realized yet. Here we extend the binomial τ-leap method to lattice KMC simulations by combining it with spatially adaptive coarse-graining. Absolute stability and computational speed-up analyses for spatial systems, along with simulations, provide insights into the conditions where accuracy and substantial acceleration of the new spatio-temporal coarse-graining method are ensured. Model systems demonstrate that the r-time increment criterion of Chatterjee et al. obeys the absolute stability limit for values of r up to near 1.
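
    To make the leaping idea concrete, the sketch below applies a binomial leap to a single first-order channel A → B: the number of firings in a coarse time increment τ is drawn from a binomial distribution bounded by the available population, which is what prevents negative populations. The rate, population and τ are invented, and the spatial/adaptive coarse-graining of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# single first-order channel A -> B with per-particle rate c; propensity = c * N_A
c, n_a, n_b = 0.05, 10_000, 0
tau, t, t_end = 0.5, 0.0, 60.0

while t < t_end and n_a > 0:
    propensity = c * n_a
    expected = propensity * tau              # mean number of firings in the leap
    p = min(expected / n_a, 1.0)             # per-particle firing probability
    fired = rng.binomial(n_a, p)             # bounded by N_A -> never goes negative
    n_a -= fired
    n_b += fired
    t += tau

print(f"t = {t:.1f}: N_A = {n_a}, N_B = {n_b}")
print("exact mean N_A would be", round(10_000 * np.exp(-c * t), 1))
```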

  9. Average accelerator simulation Truebeam using phase space in IAEA format

    International Nuclear Information System (INIS)

    Santana, Emico Ferreira; Milian, Felix Mas; Paixao, Paulo Oliveira; Costa, Raranna Alves da; Velasco, Fermin Garcia

    2015-01-01

    In this paper, a computational radiation transport code based on the Monte Carlo technique is used to model a linear accelerator for radiotherapy treatment. This work is the initial step of future proposals that aim to study several radiotherapy treatments of patients, employing computational modeling in cooperation with the institutions UESC, IPEN, UFRJ and COI. The chosen simulation code is GATE/Geant4. The modeled accelerator is the Varian TrueBeam. The geometric modeling was based on technical manuals, and the radiation source on the photon phase space provided by the manufacturer in the IAEA (International Atomic Energy Agency) format. The simulations were carried out under the same conditions as the experimental measurements. Photon beams of 6 MV with a 10 × 10 cm field, incident on a water phantom, were studied. For validation, depth-dose curves and lateral profiles at different depths from the simulated results were compared with experimental data. The final model of this accelerator will be used in future works involving treatments and real patients. (author)

  10. A novel Lagrangian approach for the stable numerical simulation of fault and fracture mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Franceschini, Andrea; Ferronato, Massimiliano, E-mail: massimiliano.ferronato@unipd.it; Janna, Carlo; Teatini, Pietro

    2016-06-01

    The simulation of the mechanics of geological faults and fractures is of paramount importance in several applications, such as ensuring the safety of the underground storage of wastes and hydrocarbons or predicting the possible seismicity triggered by the production and injection of subsurface fluids. However, the stable numerical modeling of ground ruptures is still an open issue. The present work introduces a novel formulation based on the use of the Lagrange multipliers to prescribe the constraints on the contact surfaces. The variational formulation is modified in order to take into account the frictional work along the activated fault portion according to the principle of maximum plastic dissipation. The numerical model, developed in the framework of the Finite Element method, provides stable solutions with a fast convergence of the non-linear problem. The stabilizing properties of the proposed model are emphasized with the aid of a realistic numerical example dealing with the generation of ground fractures due to groundwater withdrawal in arid regions. - Highlights: • A numerical model is developed for the simulation of fault and fracture mechanics. • The model is implemented in the framework of the Finite Element method and with the aid of Lagrange multipliers. • The proposed formulation introduces a new contribution due to the frictional work on the portion of activated fault. • The resulting algorithm is highly non-linear as the portion of activated fault is itself unknown. • The numerical solution is validated against analytical results and proves to be stable also in realistic applications.
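
    The algebraic core of the Lagrange-multiplier treatment is a saddle-point (KKT) system in which the multipliers enforce the contact constraints and are interpretable as contact forces. The toy sketch below ties two spring-supported degrees of freedom together with a single equality constraint; it only illustrates the structure of such a system, not the frictional, non-linear formulation of the paper.

```python
import numpy as np

# two nodes on springs (stiffness k1, k2), loaded by f1, f2, constrained to move together:
#   minimize 0.5*u'K u - f'u   subject to  B u = 0   with  B = [1, -1]
k1, k2 = 4.0, 1.0
f = np.array([1.0, 2.0])
K = np.diag([k1, k2])
B = np.array([[1.0, -1.0]])

# saddle-point (KKT) system:  [K  B'; B  0] [u; lam] = [f; 0]
A = np.block([[K, B.T], [B, np.zeros((1, 1))]])
rhs = np.concatenate([f, [0.0]])
u0, u1, lam = np.linalg.solve(A, rhs)

print("displacements:", u0, u1)        # equal, as the constraint demands
print("multiplier (contact force):", lam)
print("equilibrium residual:", K @ np.array([u0, u1]) + B.T.flatten() * lam - f)
```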

  11. A novel Lagrangian approach for the stable numerical simulation of fault and fracture mechanics

    International Nuclear Information System (INIS)

    Franceschini, Andrea; Ferronato, Massimiliano; Janna, Carlo; Teatini, Pietro

    2016-01-01

    The simulation of the mechanics of geological faults and fractures is of paramount importance in several applications, such as ensuring the safety of the underground storage of wastes and hydrocarbons or predicting the possible seismicity triggered by the production and injection of subsurface fluids. However, the stable numerical modeling of ground ruptures is still an open issue. The present work introduces a novel formulation based on the use of the Lagrange multipliers to prescribe the constraints on the contact surfaces. The variational formulation is modified in order to take into account the frictional work along the activated fault portion according to the principle of maximum plastic dissipation. The numerical model, developed in the framework of the Finite Element method, provides stable solutions with a fast convergence of the non-linear problem. The stabilizing properties of the proposed model are emphasized with the aid of a realistic numerical example dealing with the generation of ground fractures due to groundwater withdrawal in arid regions. - Highlights: • A numerical model is developed for the simulation of fault and fracture mechanics. • The model is implemented in the framework of the Finite Element method and with the aid of Lagrange multipliers. • The proposed formulation introduces a new contribution due to the frictional work on the portion of activated fault. • The resulting algorithm is highly non-linear as the portion of activated fault is itself unknown. • The numerical solution is validated against analytical results and proves to be stable also in realistic applications.

  12. Simulation of density measurements in plasma wakefields using photon acceleration

    CERN Document Server

    Kasim, Muhammad Firmansyah; Ceurvorst, Luke; Sadler, James; Burrows, Philip N; Trines, Raoul; Holloway, James; Wing, Matthew; Bingham, Robert; Norreys, Peter

    2015-01-01

    One obstacle in plasma accelerator development is the limitation of techniques to diagnose and measure plasma wakefield parameters. In this paper, we present a novel concept for the density measurement of a plasma wakefield using photon acceleration, supported by extensive particle in cell simulations of a laser pulse that copropagates with a wakefield. The technique can provide the perturbed electron density profile in the laser’s reference frame, averaged over the propagation length, to be accurate within 10%. We discuss the limitations that affect the measurement: small frequency changes, photon trapping, laser displacement, stimulated Raman scattering, and laser beam divergence. By considering these processes, one can determine the optimal parameters of the laser pulse and its propagation length. This new technique allows a characterization of the density perturbation within a plasma wakefield accelerator.

  13. Simulation of density measurements in plasma wakefields using photon acceleration

    Directory of Open Access Journals (Sweden)

    Muhammad Firmansyah Kasim

    2015-03-01

    Full Text Available One obstacle in plasma accelerator development is the limitation of techniques to diagnose and measure plasma wakefield parameters. In this paper, we present a novel concept for the density measurement of a plasma wakefield using photon acceleration, supported by extensive particle in cell simulations of a laser pulse that copropagates with a wakefield. The technique can provide the perturbed electron density profile in the laser’s reference frame, averaged over the propagation length, to be accurate within 10%. We discuss the limitations that affect the measurement: small frequency changes, photon trapping, laser displacement, stimulated Raman scattering, and laser beam divergence. By considering these processes, one can determine the optimal parameters of the laser pulse and its propagation length. This new technique allows a characterization of the density perturbation within a plasma wakefield accelerator.

  14. PIC simulation of electron acceleration in an underdense plasma

    Directory of Open Access Journals (Sweden)

    S Darvish Molla

    2011-06-01

    Full Text Available One of the interesting laser-plasma phenomena, when the laser power is high and ultra intense, is the generation of large-amplitude plasma waves (wakefields) and electron acceleration. An intense electromagnetic laser pulse can create plasma oscillations through the action of the nonlinear ponderomotive force. Electrons trapped in the wake can be accelerated to very high energies. Of the wide variety of methods for generating a regular electric field in plasmas with strong laser radiation, the most attractive one at present is the scheme of the Laser Wake Field Accelerator (LWFA). In this method, a strong Langmuir wave is excited in the plasma; electrons trapped in such a wave can acquire relativistic energies. In this paper, the PIC simulation of wakefield generation and electron acceleration in an underdense plasma with a short ultra-intense laser pulse is discussed. A 2D electromagnetic PIC code written in FORTRAN 90 is developed, and the propagation of different electromagnetic waves in vacuum and plasma is shown. Next, the accuracy of the 2D electromagnetic code is verified; the code is then made relativistic and used to simulate wakefield generation and electron acceleration in an underdense plasma. It is shown that when a symmetric electromagnetic pulse passes through the plasma, the longitudinal field generated behind the pulse is weaker than the one due to an asymmetric electromagnetic pulse, and thus the electrons acquire less energy. For the asymmetric pulse, when the front of the pulse has a shorter rise time than the back, a stronger wakefield is generated behind the pulse, and consequently the electrons acquire more energy. In the inverse case, when the rise time of the front of the pulse is longer than that of the back, a weaker wakefield is generated and this leads to the fact that the electrons

  15. Multi-Level Simulated Fault Injection for Data Dependent Reliability Analysis of RTL Circuit Descriptions

    Directory of Open Access Journals (Sweden)

    NIMARA, S.

    2016-02-01

    Full Text Available This paper proposes a data-dependent reliability evaluation methodology for digital systems described at Register Transfer Level (RTL). It uses a hybrid hierarchical approach, combining the accuracy provided by Gate Level (GL) Simulated Fault Injection (SFI) and the low simulation overhead required by RTL fault injection. The methodology comprises the following steps: the correct simulation of the RTL system according to a set of input vectors, hierarchical decomposition of the system into basic RTL blocks, logic synthesis of the basic RTL blocks, data-dependent SFI for the GL netlists, and RTL SFI. The proposed methodology has been validated in terms of accuracy on a medium-sized circuit - the parallel comparator used in the Check Node Unit (CNU) of Low-Density Parity-Check (LDPC) decoders. The methodology has been applied to the reliability analysis of a 128-bit Advanced Encryption Standard (AES) crypto-core, for which the GL simulation was prohibitive in terms of required computational resources.
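
    Gate-level simulated fault injection can be pictured with a toy combinational netlist in which each internal net may be forced to a stuck-at value and the faulty response is compared against the golden one; the full adder below is purely illustrative and is unrelated to the CNU or AES circuits of the study.

```python
# toy gate-level netlist of a 1-bit full adder, with optional stuck-at faults per net
def full_adder(a, b, cin, faults=None):
    faults = faults or {}
    def net(name, val):                 # apply a stuck-at fault on this net if present
        return faults.get(name, val)
    s1   = net("s1",   a ^ b)
    sum_ = net("sum",  s1 ^ cin)
    c1   = net("c1",   a & b)
    c2   = net("c2",   s1 & cin)
    cout = net("cout", c1 | c2)
    return sum_, cout

# exhaustive simulated fault injection: compare faulty vs. golden responses
for netname in ("s1", "sum", "c1", "c2", "cout"):
    for stuck in (0, 1):
        detected = any(
            full_adder(a, b, c, {netname: stuck}) != full_adder(a, b, c)
            for a in (0, 1) for b in (0, 1) for c in (0, 1)
        )
        print(f"{netname} stuck-at-{stuck}: {'detected' if detected else 'undetected'}")
```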

  16. Inspection of piping wall loss with flow accelerated corrosion accelerated simulation test

    International Nuclear Information System (INIS)

    Ryu, Kyung Ha; Kim, Ji Hak; Hwang, Il Soon; Lee, Na Young; Kim, Ji Hyun

    2009-01-01

    Flow Accelerated Corrosion (FAC) has become a major issue in the aging of passive components. The Ultrasonic Technique (UT) has been adopted to inspect the secondary piping of Nuclear Power Plants (NPPs). UT, however, is a point-detection method, which requires numerous measurement points and is therefore time-consuming. We developed an Equipotential Switching Direct Current Potential Drop (ES-DCPD) method to monitor the thickness of piping, which covers a wide extent of piping at one time. Since the ES-DCPD method covers an area, not a point, it needs less monitoring time. This can be a good approach for extensive carbon steel piping systems such as the secondary piping of NPPs. In this paper, accelerated FAC simulation test results are described. We realized the accelerated FAC phenomenon in two tests: 23.7% thinning in 216.7 hours and 51% thinning in 795 hours. These were monitored by ES-DCPD and traditional UT. Some water chemistry parameters were monitored and controlled to accelerate the FAC process. As factors to which FAC is sensitive, temperature and pH were changed during the test. The monitored wall-loss results successfully reflected these water chemistry changes. The developed electrodes were also applied to the simulation loop to monitor the water chemistry. (author)

  17. The common component architecture for particle accelerator simulations

    International Nuclear Information System (INIS)

    Dechow, D.R.; Norris, B.; Amundson, J.

    2007-01-01

    Synergia2 is a beam dynamics modeling and simulation application for high-energy accelerators such as the Tevatron at Fermilab and the International Linear Collider, which is now under planning and development. Synergia2 is a hybrid, multilanguage software package comprised of two separate accelerator physics packages (Synergia and MaryLie/Impact) and one high-performance computer science package (PETSc). We describe our approach to producing a set of beam dynamics-specific software components based on the Common Component Architecture specification. Among other topics, we describe particular experiences with the following tasks: using Python steering to guide the creation of interfaces and to prototype components; working with legacy Fortran codes; and an example component-based, beam dynamics simulation.

  18. Analysis and simulation of an electrostatic FN Tandem accelerator

    International Nuclear Information System (INIS)

    Ugarte, Ricardo

    2007-01-01

    An analysis, modeling, and simulation of a positive-ion FN Tandem electrostatic accelerator has been carried out. This involved a detailed study of all physical components inside the accelerator tank, the terminal control stabilizer (TPS), the corona point, the capacitor pick-off (CPO) and the generating voltmeter (GVM) signals. The parameters of the model were obtained using Prediction Error estimation Methods (PEM) together with classical circuit-analysis techniques. The results obtained were used to check and increase the stability of the terminal voltage using Matlab software tools. The simulation results were compared with reality and the stability of the terminal voltage was successfully improved. The facility belongs to ARN (Argentina) and, in principle, it was installed to develop an AMS system. (author)

  19. New Developments in the Simulation of Advanced Accelerator Concepts

    International Nuclear Information System (INIS)

    Paul, K.; Cary, J.R.; Cowan, B.; Bruhwiler, D.L.; Geddes, C.G.R.; Mullowney, P.J.; Messmer, P.; Esarey, E.; Cormier-Michel, E.; Leemans, W.P.; Vay, J.-L.

    2008-01-01

    Improved computational methods are essential to the diverse and rapidly developing field of advanced accelerator concepts. We present an overview of some computational algorithms for laser-plasma concepts and high-brightness photocathode electron sources. In particular, we discuss algorithms for reduced laser-plasma models that can be orders of magnitude faster than their higher-fidelity counterparts, as well as important on-going efforts to include relevant additional physics that has been previously neglected. As an example of the former, we present 2D laser wakefield accelerator simulations in an optimal Lorentz frame, demonstrating >10 GeV energy gain of externally injected electrons over a 2 m interaction length, showing good agreement with predictions from scaled simulations and theory, with a speedup factor of ∼2,000 as compared to standard particle-in-cell.

  20. Numerical simulation on beam breakup instability of linear induction accelerator

    International Nuclear Information System (INIS)

    Zhang Kaizhi; Wang Huacen; Lin Yuzheng

    2003-01-01

    A code has been written to simulate BBU in an induction linac based on theoretical analysis. The general evolution of BBU in an induction linac is investigated first; then the effects of related parameters on BBU are analyzed, for example the alignment error, the oscillation frequency of the beam centroid, the beam pulse shape and the acceleration gradient. Finally, measures are put forward to damp the beam breakup instability (BBU).

  1. Design of Accelerator Online Simulator Server Using Structured Data

    International Nuclear Information System (INIS)

    Shen, Guobao

    2012-01-01

    Model based control plays an important role for a modern accelerator during beam commissioning, beam study, and even daily operation. With a realistic model, beam behaviour can be predicted and therefore effectively controlled. The approach used by most current high level application environments is to use a built-in simulation engine and feed a realistic model into that simulation engine. Instead of this traditional monolithic structure, a new approach using a client-server architecture is under development. An on-line simulator server is accessed via network accessible structured data. With this approach, a user can easily access multiple simulation codes. This paper describes the design, implementation, and current status of PVData, which defines the structured data, and PVAccess, which provides network access to the structured data.

  2. EDP supported control room simulation for training of fault cases

    International Nuclear Information System (INIS)

    Weber, P.

    1984-01-01

    The model used for the simulation was a power station control room designed by KWU for the German Museum, of which the cooling water circuit was represented, so that a manageable problem setting avoided long training times. For the VDU representation of the simulation, a KRUPP ATLAS process video system equipped with a light pen was available, of the kind used in industry for the control and supervision of technical systems. This process video system was controlled by a Digital PDP 11/40, which offers several major advantages over stand-alone operation. (orig./DG) [de]

  3. One and two dimensional simulations on beat wave acceleration

    International Nuclear Information System (INIS)

    Mori, W.; Joshi, C.; Dawson, J.M.; Forslund, D.W.; Kindel, J.M.

    1984-01-01

    Recently there has been considerable interest in the use of fast, large-amplitude plasma waves as the basis for a high energy particle accelerator. In these schemes, lasers are used to create the plasma wave. To date the few simulation studies on this subject have been limited to one-dimensional, short rise-time simulations. Here the authors present results from simulations in which more realistic parameters are used. In addition, they present the first two-dimensional simulations on this subject. One-dimensional simulations of collinear optical mixing, performed with a 2 1/2-D relativistic electromagnetic particle code in which only a few cells were used in one direction, are presented. In these simulations the laser rise time, laser intensity, plasma density, plasma temperature and system size were varied. The simulations indicate that the theory of Rosenbluth and Liu is applicable over a wide range of parameters. In addition, simulations with a DC magnetic field are presented in order to study the "Surfatron" concept

  4. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids

    Energy Technology Data Exchange (ETDEWEB)

    Duru, Kenneth, E-mail: kduru@stanford.edu [Department of Geophysics, Stanford University, Stanford, CA (United States); Dunham, Eric M. [Department of Geophysics, Stanford University, Stanford, CA (United States); Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA (United States)

    2016-01-15

    Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge–Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture
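
    The summation-by-parts property referred to above means the difference operator D = H⁻¹Q satisfies Q + Qᵀ = diag(−1, 0, …, 0, 1), so a discrete integration-by-parts identity (and hence an energy estimate) holds exactly. The sketch below builds the classical second-order-accurate SBP first-derivative operator and verifies this numerically; it is a low-order stand-in for the sixth-order interior operators used in the paper.

```python
import numpy as np

def sbp_first_derivative(n, h):
    """Classical 2nd-order SBP operator D = H^{-1} Q on n grid points, spacing h."""
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = 0.5 * h                    # boundary-modified norm
    Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))     # central differences inside
    Q[0, 0], Q[-1, -1] = -0.5, 0.5                   # one-sided closure at the ends
    return np.linalg.inv(H) @ Q, H, Q

n, h = 21, 0.05
D, H, Q = sbp_first_derivative(n, h)
x = np.linspace(0.0, 1.0, n)

# SBP property: Q + Q^T = diag(-1, 0, ..., 0, 1)
E = np.zeros((n, n)); E[0, 0], E[-1, -1] = -1.0, 1.0
print("SBP property holds:", np.allclose(Q + Q.T, E))

# discrete integration by parts: u' H (D v) + (D u)' H v = u_N v_N - u_0 v_0
u, v = np.sin(2 * x), np.cos(3 * x)
lhs = u @ H @ (D @ v) + (D @ u) @ H @ v
print("integration-by-parts error:", abs(lhs - (u[-1] * v[-1] - u[0] * v[0])))
```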

  5. Strength evolution of simulated carbonate-bearing faults: The role of normal stress and slip velocity

    Science.gov (United States)

    Mercuri, Marco; Scuderi, Marco Maria; Tesei, Telemaco; Carminati, Eugenio; Collettini, Cristiano

    2018-04-01

    A great number of earthquakes occur within thick carbonate sequences in the shallow crust. At the same time, carbonate fault rocks exhumed from depth show evidence of pressure-insensitive deformation (pressure solution and granular plasticity). We performed friction experiments on water-saturated simulated carbonate-bearing faults for a wide range of normal stresses (from 5 to 120 MPa) and slip velocities (from 0.3 to 100 μm/s). At high normal stresses (σn > 20 MPa) fault gouges undergo strain weakening, which is more pronounced at slow slip velocities and causes a significant reduction of frictional strength, from μ = 0.7 to μ = 0.47. Microstructural analyses show that fault gouge weakening is driven by deformation accommodated by cataclasis and pressure-insensitive deformation processes (pressure solution and granular plasticity) that become more efficient at slow slip velocity. The reduction in frictional strength caused by strain-weakening behaviour, promoted by the activation of pressure-insensitive deformation, might play a significant role in the mechanics of carbonate-bearing faults.

  6. Centrifugal compressor fault diagnosis based on qualitative simulation and thermal parameters

    Science.gov (United States)

    Lu, Yunsong; Wang, Fuli; Jia, Mingxing; Qi, Yuanchen

    2016-12-01

    This paper concerns fault diagnosis of a centrifugal compressor based on thermal parameters. An improved qualitative simulation (QSIM) based fault diagnosis method is proposed to diagnose the faults of a centrifugal compressor in a gas-steam combined-cycle power plant (CCPP). The qualitative models under normal and two faulty conditions have been built through analysis of the operating principle of the centrifugal compressor. To solve the problem of qualitatively describing the observations of system variables, a qualitative trend extraction algorithm is applied to extract the trends of the observations. For qualitative state matching, a sliding-window-based matching strategy consisting of variable operating-range constraints and qualitative constraints is proposed. The matching results are used to determine which QSIM model is more consistent with the running state of the system. The correct diagnosis of two typical faults, seal leakage and a stuck valve, in the centrifugal compressor has validated the targeted performance of the proposed method, showing the advantage of the fault root information contained in the thermal parameters.

  7. COMPASS, the COMmunity Petascale project for Accelerator Science and Simulation, a broad computational accelerator physics initiative

    International Nuclear Information System (INIS)

    Cary, J.R.; Spentzouris, P.; Amundson, J.; McInnes, L.; Borland, M.; Mustapha, B.; Ostroumov, P.; Wang, Y.; Fischer, W.; Fedotov, A.; Ben-Zvi, I.; Ryne, R.; Esarey, E.; Geddes, C.; Qiang, J.; Ng, E.; Li, S.; Ng, C.; Lee, R.; Merminga, L.; Wang, H.; Bruhwiler, D.L.; Dechow, D.; Mullowney, P.; Messmer, P.; Nieter, C.; Ovtchinnikov, S.; Paul, K.; Stoltz, P.; Wade-Stein, D.; Mori, W.B.; Decyk, V.; Huang, C.K.; Lu, W.; Tzoufras, M.; Tsung, F.; Zhou, M.; Werner, G.R.; Antonsen, T.; Katsouleas, T.; Morris, B.

    2007-01-01

    Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, the calculation of electromagnetic modes and wake fields of cavities, the cooling induced by comoving beams, and the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction

  8. A universal postprocessing toolkit for accelerator simulation and data analysis

    International Nuclear Information System (INIS)

    Borland, M.

    1998-01-01

    The Self-Describing Data Sets (SDDS) toolkit comprises about 70 generally-applicable programs sharing a common data protocol. At the Advanced Photon Source (APS), SDDS performs the vast majority of operational data collection and processing, most data display functions, and many control functions. In addition, a number of accelerator simulation codes use SDDS for all post-processing and data display. This has three principal advantages: first, simulation codes need not provide customized post-processing tools, thus simplifying development and maintenance. Second, users can enhance code capabilities without changing the code itself, by adding SDDS-based pre- and post-processing. Third, multiple codes can be used together more easily, by employing SDDS for data transfer and adaptation. Given its broad applicability, the SDDS file protocol is surprisingly simple, making it quite easy for simulations to generate SDDS-compliant data. This paper discusses the philosophy behind SDDS, contrasting it with some recent trends, and outlines the capabilities of the toolkit. The paper also gives examples of using SDDS for accelerator simulation

  9. A comparative study of accelerated tests to simulate atmospheric corrosion

    International Nuclear Information System (INIS)

    Assis, Sergio Luiz de

    2000-01-01

    In this study, specimens coated with five organic coating systems were exposed to accelerated tests for periods up to 2000 hours, and also to weathering for two years and six months. The accelerated tests consisted of the salt spray test, according to ASTM B-117; Prohesion (ASTM G 85-98 annex 5A); Prohesion combined with cyclic exposure to UV-A radiation and condensation; 'Prohchuva', a test described by ASTM G 85-98 using a salt spray with a composition that simulated the acid rain of Sao Paulo, but one thousand times more concentrated; and 'Prohchuva' combined with cyclic exposure to UV-A radiation and condensation. The coated specimens were exposed with and without an incision to expose the substrate. The onset and progress of corrosion at the incision and on the exposed metallic surface, as well as coating degradation, were followed by visual observation, and photographs were taken. The coating systems were classified according to the extent of corrosion protection given to the substrate, using a method based on ASTM standards D-610, D-714, D-1654 and D-3359. The rankings of the coatings obtained from accelerated tests and weathering were compared and contrasted with the classification of the same systems obtained from the literature, for specimens exposed to an industrial atmosphere. Coating degradation was strongly dependent on the test, and could be attributed to differences in test conditions. The best correlation between accelerated test and weathering was found for the Prohesion test alternated with cycles of exposure to UV-A radiation and condensation. (author)

  10. Micrometeoroid impact simulations using a railgun electromagnetic accelerator

    International Nuclear Information System (INIS)

    Upshaw, J.L.; Kajs, J.P.

    1991-01-01

    The Center for Electromechanics at The University of Texas at Austin (CEM-UT), using a railgun electromagnetic (EM) accelerator, has done a series of hypervelocity micrometeoroid impact simulations. Simulations done to date (78 tests) were carried out under contracts with Lockheed Palo Alto Research Laboratory and Martin Marietta Corporation. The tests were designed to demonstrate that railguns can provide a repeatable means of accelerating particles between 10⁻⁴ and 10⁻⁷ g to hypervelocities within a high-vacuum flight chamber. Soda-lime glass beads were accelerated to up to 11 km/s, impacting into silicon, aluminum, quartz and various proprietary targets. At the muzzle of the gun was a 5.8-m-long, high-vacuum flight chamber. Targets were placed in this chamber at various distances from the gun. Impact craters on all the targets were examined using a light-source microscope and several targets were further examined using a scanning electron microscope. Gun and flight range diagnostics, along with experimental setups and results for several of the experiments, are presented in this paper

  11. Simulating electron clouds in heavy-ion accelerators

    International Nuclear Information System (INIS)

    Cohen, R.H.; Friedman, A.; Covo, M. Kireeff; Lund, S.M.; Molvik, A.W.; Bieniosek, F.M.; Seidl, P.A.; Vay, J.-L.; Stoltz, P.; Veitzer, S.

    2005-01-01

    Contaminating clouds of electrons are a concern for most accelerators of positively charged particles, but there are some unique aspects of heavy-ion accelerators for fusion and high-energy density physics which make modeling such clouds especially challenging. In particular, self-consistent electron and ion simulation is required, including a particle advance scheme which can follow electrons in regions where electrons are strongly magnetized, weakly magnetized, and unmagnetized. The approach to such self-consistency is described, and in particular a scheme for interpolating between full-orbit (Boris) and drift-kinetic particle pushes that enables electron time steps long compared to the typical gyroperiod in the magnets. Tests and applications are presented: simulation of electron clouds produced by three different kinds of sources indicates the sensitivity of the cloud shape to the nature of the source; first-of-a-kind self-consistent simulation of electron-cloud experiments on the high-current experiment [L. R. Prost, P. A. Seidl, F. M. Bieniosek, C. M. Celata, A. Faltens, D. Baca, E. Henestroza, J. W. Kwan, M. Leitner, W. L. Waldron, R. Cohen, A. Friedman, D. Grote, S. M. Lund, A. W. Molvik, and E. Morse, 'High current transport experiment for heavy ion inertial fusion', Physical Review Special Topics, Accelerators and Beams 8, 020101 (2005)], at Lawrence Berkeley National Laboratory, in which the machine can be flooded with electrons released by impact of the ion beam on an end plate, demonstrate the ability to reproduce key features of the ion-beam phase space; and simulation of a two-stream instability of thin beams in a magnetic field demonstrates the ability of the large-time-step mover to accurately calculate the instability
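
    The interpolating particle mover described in this record is specific to that code; for orientation, the minimal Python sketch below shows the standard full-orbit (Boris) push that such a hybrid scheme reduces to when electrons are weakly magnetized. The charge, mass, fields, and time step are illustrative values only.

      import numpy as np

      def boris_push(x, v, E, B, q, m, dt):
          # One step of the Boris push: half electric kick, rotation about B,
          # second half electric kick, then a position update.
          qmdt2 = q * dt / (2.0 * m)
          v_minus = v + qmdt2 * E                  # first half electric kick
          t = qmdt2 * B                            # rotation vector
          s = 2.0 * t / (1.0 + np.dot(t, t))
          v_prime = v_minus + np.cross(v_minus, t)
          v_plus = v_minus + np.cross(v_prime, s)  # rotation about B
          v_new = v_plus + qmdt2 * E               # second half electric kick
          return x + v_new * dt, v_new

      # Illustrative electron gyrating in a uniform magnetic field (SI units).
      q, m = -1.602e-19, 9.109e-31
      x, v = np.zeros(3), np.array([1.0e6, 0.0, 0.0])
      E, B = np.zeros(3), np.array([0.0, 0.0, 0.1])
      for _ in range(1000):
          x, v = boris_push(x, v, E, B, q, m, 1.0e-12)
      print(x, np.linalg.norm(v))   # speed is conserved in a pure magnetic field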

  12. Accelerating transient simulation of linear reduced order models.

    Energy Technology Data Exchange (ETDEWEB)

    Thornquist, Heidi K.; Mei, Ting; Keiter, Eric Richard; Bond, Brad

    2011-10-01

    Model order reduction (MOR) techniques have been used to facilitate the analysis of dynamical systems for many years. Although existing model reduction techniques are capable of providing huge speedups in the frequency domain analysis (i.e. AC response) of linear systems, such speedups are often not obtained when performing transient analysis on the systems, particularly when coupled with other circuit components. Reduced system size, which is the ostensible goal of MOR methods, is often insufficient to improve transient simulation speed on realistic circuit problems. It can be shown that making the correct reduced order model (ROM) implementation choices is crucial to the practical application of MOR methods. In this report we investigate methods for accelerating the simulation of circuits containing ROM blocks using the circuit simulator Xyce.
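
    A minimal Python sketch of one implementation choice of the kind the report refers to: a small reduced state-space model is stepped with backward Euler, and the factorization of (I - dt*A) is computed once and reused across time steps instead of being rebuilt. This is a generic illustration, not Xyce's internal ROM handling.

      import numpy as np
      from scipy.linalg import lu_factor, lu_solve

      # Reduced linear model: dx/dt = A x + B u,  y = C x   (dense, small).
      n = 20
      rng = np.random.default_rng(0)
      A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
      B = rng.standard_normal((n, 1))
      C = rng.standard_normal((1, n))

      dt, steps = 1e-3, 1000
      x = np.zeros((n, 1))

      # Key implementation choice: factor (I - dt*A) once and reuse it every step.
      lu = lu_factor(np.eye(n) - dt * A)
      for k in range(steps):
          u = np.array([[np.sin(2 * np.pi * 50 * k * dt)]])   # illustrative input
          x = lu_solve(lu, x + dt * (B @ u))                   # backward Euler step
          y = C @ x
      print("final output:", float(y))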

  13. LEGO - A Class Library for Accelerator Design and Simulation

    International Nuclear Information System (INIS)

    Cai, Yunhai

    1998-01-01

    An object-oriented class library for accelerator design and simulation is designed and implemented in a simple and modular fashion. All physics of single-particle dynamics is implemented based on the Hamiltonian in the local frame of the component. Symplectic integrators are used to approximate the integration of the Hamiltonian. A differential algebra class is introduced to extract a Taylor map up to arbitrary order. Analysis of optics is done in the same way for both the linear and non-linear cases. Recently, Monte Carlo simulation of synchrotron radiation has been added to the library. The code has been used to design and simulate the lattices of PEP-II and SPEAR3, and also for the commissioning of PEP-II. Some examples of how to use the library will be given
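
    As a minimal illustration of the symplectic-integrator idea mentioned in this record, the Python sketch below applies a second-order drift-kick-drift (leapfrog) step to a quadrupole-like single-particle Hamiltonian. The Hamiltonian, element strength, and step length are illustrative and do not reproduce LEGO's classes.

      import numpy as np

      def drift(x, px, L):
          # Drift: map of H_drift = px^2/2 over a length L (small-angle form).
          return x + L * px, px

      def kick(x, px, k1, L):
          # Thin kick from a quadrupole-like potential, H_kick = k1 * x^2 / 2.
          return x, px - k1 * L * x

      def second_order_step(x, px, k1, L):
          # One symplectic step: half drift, full kick, half drift (leapfrog).
          x, px = drift(x, px, L / 2)
          x, px = kick(x, px, k1, L)
          x, px = drift(x, px, L / 2)
          return x, px

      x, px, k1, L = 1e-3, 0.0, 0.5, 0.1      # illustrative values (m, rad, 1/m^2, m)
      for _ in range(1000):
          x, px = second_order_step(x, px, k1, L)
      print(x, px)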

  14. DEM simulation of granular flows in a centrifugal acceleration field

    Science.gov (United States)

    Cabrera, Miguel Angel; Peng, Chong; Wu, Wei

    2017-04-01

    The main purpose of mass-flow experimental models is to abstract distinctive features of natural granular flows and to allow their systematic study in the laboratory. In this process, particle size, space, time, and stress scales must be considered for the proper representation of specific phenomena [5]. One of the most challenging tasks in small-scale models is matching the range of stresses and strains among the particle and fluid media observed in a field event. Centrifuge modelling offers an alternative to upscale all gravity-driven processes, and it has been recently employed in the simulation of granular flows [1, 2, 3, 6, 7]. Centrifuge scaling principles are presented in Ref. [4], collecting a wide spectrum of static and dynamic models. However, for the case of kinematic processes, the non-uniformity of the centrifugal acceleration field plays a major role (i.e., Coriolis and inertial effects). In this work, we discuss a general formulation for the centrifugal acceleration field, implemented in a discrete element model framework (DEM), and validated with centrifuge experimental results. Conventional DEM simulations relate the volumetric forces as a function of the gravitational force G_p = m_p g. However, in the local coordinate system of a rotating centrifuge model, the cylindrical centrifugal acceleration field needs to be included. In this rotating system, the centrifugal acceleration of a particle depends on the rotating speed of the centrifuge, as well as the position and speed of the particle in the rotating model. Therefore, we obtain the formulation of the centrifugal acceleration field by coordinate transformation. The numerical model is validated with a series of centrifuge experiments of monodispersed glass beads, flowing down an inclined plane at different acceleration levels and slope angles. Further discussion leads to the numerical parameterization necessary for simulating equivalent granular flows under an augmented acceleration field. The premise of
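
    A minimal Python sketch of the body force referred to above: in the rotating (centrifuge) frame the constant gravitational force G_p = m_p g is supplemented by centrifugal and Coriolis terms that depend on the spin rate and on the particle's position and velocity in the rotating model. All numbers are illustrative.

      import numpy as np

      def rotating_frame_force(m, r, v, omega, g=np.array([0.0, 0.0, -9.81])):
          # Force on a particle of mass m at position r with velocity v, expressed
          # in a frame rotating with angular velocity vector omega:
          # gravity + centrifugal (-m w x (w x r)) + Coriolis (-2 m w x v).
          centrifugal = -m * np.cross(omega, np.cross(omega, r))
          coriolis = -2.0 * m * np.cross(omega, v)
          return m * g + centrifugal + coriolis

      m = 1.0e-6                            # kg, a single glass bead (illustrative)
      omega = np.array([0.0, 0.0, 10.0])    # rad/s about the centrifuge axis
      r = np.array([0.5, 0.0, 0.0])         # m, distance from the spin axis
      v = np.array([0.0, 0.05, 0.0])        # m/s, velocity in the rotating model

      print(rotating_frame_force(m, r, v, omega))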

  15. Enhancing protein adsorption simulations by using accelerated molecular dynamics.

    Directory of Open Access Journals (Sweden)

    Christian Mücksch

    Full Text Available The atomistic modeling of protein adsorption on surfaces is hampered by the different time scales of the simulation (of the order of microseconds) and experiment (up to hours), and the accordingly different 'final' adsorption conformations. We provide evidence that the method of accelerated molecular dynamics is an efficient tool to obtain equilibrated adsorption states. As a model system we study the adsorption of the protein BMP-2 on graphite in an explicit salt water environment. We demonstrate that due to the considerably improved sampling of conformational space, accelerated molecular dynamics makes it possible to observe the complete unfolding and spreading of the protein on the hydrophobic graphite surface. This result is in agreement with the general finding of protein denaturation upon contact with hydrophobic surfaces.
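
    For reference, a minimal Python sketch of the boost-potential form commonly used in accelerated molecular dynamics (following Hamelberg and co-workers): below a threshold energy E the potential is raised by dV = (E - V)^2 / (alpha + E - V), which lifts the wells more than the barrier tops and so flattens the landscape. The threshold, alpha, and the toy double-well potential are illustrative; this is not the authors' simulation setup.

      import numpy as np

      def amd_boost(V, E, alpha):
          # Accelerated-MD boost: dV = (E - V)^2 / (alpha + E - V) where V < E,
          # zero elsewhere; the simulation then runs on the boosted potential V + dV.
          dV = np.zeros_like(V)
          mask = V < E
          dV[mask] = (E - V[mask]) ** 2 / (alpha + E - V[mask])
          return dV

      # Toy 1-D double-well potential to show the barrier flattening (illustrative).
      x = np.linspace(-2.0, 2.0, 401)
      V = (x ** 2 - 1.0) ** 2          # wells at x = +/-1, barrier of height 1 at x = 0
      Vb = V + amd_boost(V, E=1.2, alpha=0.2)
      print("barrier height, original:", V[200] - V.min())
      print("barrier height, boosted :", Vb[200] - Vb.min())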

  16. Ball bearing defect models: A study of simulated and experimental fault signatures

    Science.gov (United States)

    Mishra, C.; Samantaray, A. K.; Chakraborty, G.

    2017-07-01

    A numerical-model-based virtual prototype of a system can serve as a tool to generate huge amounts of data, replacing the dependence on expensive and often difficult-to-conduct experiments. However, the model must be accurate enough to substitute for the experiments. The abstraction level and details considered during model development depend on the purpose for which simulated data should be generated. This article concerns the development of simulation models for deep-groove ball bearings, which are used in a variety of rotating machinery. The purpose of the model is to generate vibration signatures which usually contain features of bearing defects. Three different models with increasing level of complexity are considered: a bearing kinematics based planar motion block diagram model developed in MATLAB Simulink which does not explicitly consider cage and traction dynamics, a planar motion model with cage, traction and contact dynamics developed using multi-energy domain bond graph formalism in SYMBOLS software, and a detailed spatial multi-body dynamics model with complex contact and traction mechanics developed using ADAMS software. Experiments are conducted using a Spectra Quest machine fault simulator with different prefabricated faulted bearings. The frequency domain characteristics of simulated and experimental vibration signals for different bearing faults are compared and conclusions are drawn regarding the usefulness of the developed models.
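
    The frequency-domain comparison mentioned here typically revolves around the kinematic defect frequencies of a rolling-element bearing; a minimal Python sketch of those standard expressions follows, using illustrative geometry rather than the bearing studied in the article.

      import numpy as np

      def bearing_fault_frequencies(fr, n_balls, d, D, phi=0.0):
          # Classical kinematic defect frequencies for a rolling-element bearing.
          # fr: shaft rotation frequency [Hz]; d: ball diameter; D: pitch diameter;
          # phi: contact angle [rad]. Returns cage (FTF), outer-race (BPFO),
          # inner-race (BPFI) and ball-spin (BSF) frequencies.
          r = (d / D) * np.cos(phi)
          return {"FTF":  0.5 * fr * (1.0 - r),
                  "BPFO": 0.5 * n_balls * fr * (1.0 - r),
                  "BPFI": 0.5 * n_balls * fr * (1.0 + r),
                  "BSF":  0.5 * (D / d) * fr * (1.0 - r ** 2)}

      # Illustrative geometry: 9 balls, 7.9 mm ball diameter, 39 mm pitch diameter.
      print(bearing_fault_frequencies(fr=25.0, n_balls=9, d=7.9e-3, D=39e-3))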

  17. Design & simulation of an 800 kV dynamitron accelerator by CST Studio

    Directory of Open Access Journals (Sweden)

    A M Aghayan

    2015-09-01

    Full Text Available Nowadays, medium-energy electrostatic accelerators are widely used in industry due to their high efficiency and low cost compared with other types of accelerators. In this paper, the importance and applications of electrostatic accelerators with 800 keV energy are studied. The design and simulation of the capacitive coupling of a dynamitron accelerator are presented. Furthermore, the accelerating tube is designed and simulated by means of CST Studio Suite

  18. Fast acceleration of 2D wave propagation simulations using modern computational accelerators.

    Directory of Open Access Journals (Sweden)

    Wei Wang

    Full Text Available Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case-study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved more than 150x speedup over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs of at least 200x over the sequential implementation and 30x over a parallelized OpenMP implementation. An implementation of OpenMP on the Intel MIC coprocessor provided speedups of 120x with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other

  19. Numerical simulation of wire array load implosion on Yang accelerator

    International Nuclear Information System (INIS)

    Zhao Hailong; Deng Jianjun; Wang Qiang; Zou Wenkang; Wang Ganghua

    2012-01-01

    Based on the ZORK model describing the Saturn facility, a zero-dimensional load model of the wire-array Z-pinch on the Yang accelerator is designed using Pspice to simulate the implosion process. Comparisons between the calculated results and experimental data prove the load model to be correct. The applicability and shortcomings of the load model are presented. One-dimensional magnetohydrodynamic calculations are performed by using the current curve obtained from the calculated results of experiment Yang 1050#, and parameters such as the implosion time and radiated X-ray power are obtained. (authors)

  20. Frictional response of simulated faults to normal stresses perturbations probed with ultrasonic waves

    Science.gov (United States)

    Shreedharan, S.; Riviere, J.; Marone, C.

    2017-12-01

    We report on a suite of laboratory friction experiments conducted on saw-cut Westerly Granite surfaces to probe frictional response to step changes in normal stress and loading rate. The experiments are conducted to illuminate the fundamental processes that yield friction rate and state dependence. We quantify the microphysical frictional response of the simulated fault surfaces to normal stress steps, in the range of 1% - 600% step increases and decreases from a nominal baseline normal stress. We measure directly the fault slip rate and account for changes in slip rate with changes in normal stress and complement mechanical data acquisition by continuously probing the faults with ultrasonic pulses. We conduct the experiments at room temperature and humidity conditions in a servo-controlled biaxial testing apparatus in the double direct shear configuration. The samples are sheared over a range of velocities, from 0.02 - 100 μm/s. We report observations of a transient shear stress and friction evolution with step increases and decreases in normal stress. Specifically, we show that, at low shear velocities and small increases in normal stress (<5% increases), the shear stress evolves immediately with normal stress. We show that the excursions in slip rate resulting from the changes in normal stress must be accounted for in order to predict fault strength evolution. Ultrasonic wave amplitudes first increase immediately in response to normal stress steps and then decrease approximately linearly to a new steady-state value, in part due to changes in fault slip rate. Previous descriptions of frictional state evolution during normal stress perturbations have not adequately accounted for the effect of large slip velocity excursions. Here, we attempt to do so by using the measured ultrasonic amplitudes as a proxy for frictional state during transient shear stress evolution. Our work aims to improve understanding of induced and triggered seismicity with focus on

  1. Local Interaction Simulation Approach for Fault Detection in Medical Ultrasonic Transducers

    Directory of Open Access Journals (Sweden)

    Z. Hashemiyan

    2015-01-01

    Full Text Available A new approach is proposed for modelling medical ultrasonic transducers operating in air. The method is based on finite elements and the local interaction simulation approach. The latter leads to significant reductions of computational costs. Transmission and reception properties of the transducer are analysed using in-air reverberation patterns. The proposed approach can help to provide earlier detection of transducer faults and their identification, reducing the risk of misdiagnosis due to poor image quality.

  2. Compaction creep of simulated anhydrite fault gouge by pressure solution: theory v. experiments and implications for fault sealing

    NARCIS (Netherlands)

    Pluymakers, A. M. H; Spiers, Christopher

    2015-01-01

    The sealing and healing behaviour of faults filled with anhydrite gouge, by processes such as pressure solution, is of interest in relation both to the integrity of faults cutting geological storage systems sealed by anhydrite caprocks and to seismic events that may nucleate in anhydrite-bearing

  3. Accelerating cardiac bidomain simulations using graphics processing units.

    Science.gov (United States)

    Neic, A; Liebmann, M; Hoetzl, E; Mitchell, L; Vigmond, E J; Haase, G; Plank, G

    2012-08-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element methods (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6-20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility.

  4. Quench simulations for superconducting elements in the LHC accelerator

    Science.gov (United States)

    Sonnemann, F.; Schmidt, R.

    2000-08-01

    The design of the protection system for the superconducting elements in an accelerator such as the Large Hadron Collider (LHC), now under construction at CERN, requires a detailed understanding of the thermo-hydraulic and electrodynamic processes during a quench. A numerical program (SPQR - simulation program for quench research) has been developed to evaluate temperature and voltage distributions during a quench as a function of space and time. The quench process is simulated by approximating the heat balance equation with the finite difference method in the presence of variable cooling and powering conditions. The simulation predicts quench propagation along a superconducting cable, forced quenching with heaters, the impact of eddy currents induced by a magnetic field change, and heat transfer through an insulation layer into helium, an adjacent conductor or other material. The simulation studies allowed a better understanding of experimental quench data and were used for determining the adequate dimensioning and protection of the highly stabilised superconducting cables for connecting magnets (busbars), optimising the quench heater strip layout for the main magnets, and studying quench back by induced eddy currents in the superconductor. After the introduction of the theoretical approach, some applications of the simulation model for the LHC dipole and corrector magnets are presented and the outcome of the studies is compared with experimental data.
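
    A minimal Python sketch of the kind of heat-balance finite-difference calculation described above: one-dimensional conduction along a cable plus Joule heating wherever the temperature exceeds a current-sharing threshold. Material properties are taken as constants and all values are illustrative; SPQR itself includes temperature-dependent properties and variable cooling and powering conditions.

      import numpy as np

      # 1-D explicit finite-difference sketch of quench propagation (illustrative).
      n, dx, dt = 400, 1.0e-3, 1.0e-5          # nodes, m, s
      k, rho_c = 100.0, 1.0e4                  # W/(m K), J/(m^3 K): constants here
      q_joule = 1.0e7                          # W/m^3 deposited above T_cs
      T_cs, T_bath = 9.0, 4.2                  # current-sharing and bath temperature, K

      T = np.full(n, T_bath)
      T[n // 2 - 2: n // 2 + 2] = 20.0         # small initial normal (quenched) zone
      alpha = k / rho_c                        # thermal diffusivity, m^2/s

      for _ in range(2000):                    # 20 ms of simulated time
          lap = (np.roll(T, 1) - 2.0 * T + np.roll(T, -1)) / dx ** 2
          heating = np.where(T > T_cs, q_joule / rho_c, 0.0)
          T = T + dt * (alpha * lap + heating)
          T[0] = T[-1] = T_bath                # ends clamped to the bath temperature

      print("normal-zone length after 20 ms: %.3f m" % (np.sum(T > T_cs) * dx))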

  5. Accelerating Climate and Weather Simulations through Hybrid Computing

    Science.gov (United States)

    Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark

    2011-01-01

    Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.

  6. Simulation of PEP-II Accelerator Backgrounds Using TURTLE

    CERN Document Server

    Barlow, Roger J; Kozanecki, Witold; Majewski, Stephanie; Roudeau, Patrick; Stocchi, Achille

    2005-01-01

    We present studies of accelerator-induced backgrounds in the BaBar detector at the SLAC B-Factory, carried out using a modified version of the DECAY TURTLE simulation package. Lost-particle backgrounds in PEP-II are dominated by a combination of beam-gas bremsstrahlung, beam-gas Coulomb scattering, radiative-Bhabha events and beam-beam blow-up. The radiation damage and detector occupancy caused by the associated electromagnetic shower debris can limit the usable luminosity. In order to understand and mitigate such backgrounds, we have performed a full programme of beam-gas and luminosity-background simulations that include the effects of the detector solenoidal field, detailed modelling of limiting apertures in both collider rings, and optimization of the betatron collimation scheme in the presence of large transverse tails.

  7. Simulation of the impact of wind power on the transient fault behavior of the Nordic power system

    DEFF Research Database (Denmark)

    Jauch, Clemens; Sørensen, Poul Ejnar; Norheim, Ian

    2007-01-01

    In this paper the effect of wind power on the transient fault behavior of the Nordic power system is investigated. The Nordic power system is the interconnected power system of the countries Norway, Sweden, Finland and Denmark. For the purpose of these investigations the wind turbines installed and connected in eastern Denmark are taken as the study case. The current and future wind power situation in eastern Denmark is modeled and short-circuit faults in the system are simulated. The simulations yield information on (i) how the faults impact the wind turbines and (ii) how the response of the wind turbines influences the post-fault behavior of the Nordic power system. It is concluded that an increasing level of wind power penetration leads to stronger system oscillations in the case of fixed-speed wind turbines. It is found that fixed-speed wind turbines that merely ride through transient faults have negative...

  8. Fault-Tolerant Robot Programming through Simulation with Realistic Sensor Models

    Directory of Open Access Journals (Sweden)

    Axel Waggershauser

    2008-11-01

    Full Text Available We introduce a simulation system for mobile robots that allows a realistic interaction of multiple robots in a common environment. The simulated robots are closely modeled after robots from the EyeBot family and have an identical application programmer interface. The simulation supports driving commands at two levels of abstraction as well as numerous sensors such as shaft encoders, infrared distance sensors, and compass. Simulation of on-board digital cameras via synthetic images allows the use of image processing routines for robot control within the simulation. Specific error models for actuators, distance sensors, camera sensor, and wireless communication have been implemented. Progressively increasing error levels for an application program allows for testing and improving its robustness and fault-tolerance.
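
    A minimal Python sketch of the kind of sensor error model this record refers to: an ideal infrared distance reading corrupted by Gaussian noise, saturation, quantization, and occasional dropouts, with the error level exposed as a parameter so it can be raised progressively. The function name and numbers are illustrative; the EyeBot API is not reproduced here.

      import random

      def ir_distance_with_errors(true_mm, noise_sigma=10.0, dropout_prob=0.02,
                                  max_range=600, quantum=5):
          # Simulated infrared distance reading [mm]: Gaussian noise, saturation
          # at max_range, quantization, and rare dropout ("no echo") readings.
          if random.random() < dropout_prob:
              return max_range
          reading = random.gauss(true_mm, noise_sigma)
          reading = max(0, min(max_range, reading))
          return int(round(reading / quantum) * quantum)

      # Raise the error level progressively to probe a controller's fault tolerance.
      random.seed(1)
      for sigma in (0.0, 10.0, 40.0):
          samples = [ir_distance_with_errors(250, noise_sigma=sigma) for _ in range(5)]
          print(f"sigma={sigma:5.1f} mm -> {samples}")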

  9. Accelerating large-scale phase-field simulations with GPU

    Directory of Open Access Journals (Sweden)

    Xiaoming Shi

    2017-10-01

    Full Text Available A new package for accelerating large-scale phase-field simulations was developed using GPUs, based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneities, including long-range elastic, magnetostatic, and electrostatic interactions. Using a specific algorithm in the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. The Allen-Cahn equation, the Cahn-Hilliard equation, and a phase-field model with long-range interaction were each solved with the algorithm running on the GPU to test the performance of the package. A comparison of the calculation results between the solver executed on a single CPU and the one on the GPU showed that the GPU version is up to 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.
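
    A minimal CPU-side Python sketch of the semi-implicit Fourier step that such a package offloads to the GPU, here for the Allen-Cahn equation: the stiff gradient term is treated implicitly in Fourier space while the nonlinear bulk term is explicit. The grid size, mobility, gradient coefficient, and free-energy derivative are illustrative.

      import numpy as np

      # Semi-implicit Fourier-spectral step for the Allen-Cahn equation
      #   d(phi)/dt = -L * ( f'(phi) - kappa * laplacian(phi) )
      # with f'(phi) = phi^3 - phi (double-well). Illustrative parameters.
      N, dx = 128, 1.0
      L_mob, kappa, dt = 1.0, 1.0, 0.1

      k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
      kx, ky = np.meshgrid(k, k, indexing="ij")
      k2 = kx ** 2 + ky ** 2

      rng = np.random.default_rng(0)
      phi = 0.02 * rng.standard_normal((N, N))        # small random initial field

      for step in range(500):
          fprime = phi ** 3 - phi
          rhs_hat = np.fft.fft2(phi) - dt * L_mob * np.fft.fft2(fprime)
          phi_hat = rhs_hat / (1.0 + dt * L_mob * kappa * k2)   # implicit gradient term
          phi = np.real(np.fft.ifft2(phi_hat))

      print("phi range after 500 steps:", phi.min(), phi.max())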

  10. Fault classification method for the driving safety of electrified vehicles

    Science.gov (United States)

    Wanner, Daniel; Drugge, Lars; Stensson Trigell, Annika

    2014-05-01

    A fault classification method is proposed which has been applied to an electric vehicle. Potential faults in the different subsystems that can affect the vehicle directional stability were collected in a failure mode and effect analysis. Similar driveline faults were grouped together if they resembled each other with respect to their influence on the vehicle dynamic behaviour. The faults were physically modelled in a simulation environment before they were induced in a detailed vehicle model under normal driving conditions. A special focus was placed on faults in the driveline of electric vehicles employing in-wheel motors of the permanent magnet type. Several failures caused by mechanical and other faults were analysed as well. The fault classification method consists of a controllability ranking developed according to the functional safety standard ISO 26262. The controllability of a fault was determined with three parameters covering the influence of the longitudinal, lateral and yaw motion of the vehicle. The simulation results were analysed and the faults were classified according to their controllability using the proposed method. It was shown that the controllability decreased specifically with increasing lateral acceleration and increasing speed. The results for the electric driveline faults show that this trend cannot be generalised for all the faults, as the controllability deteriorated for some faults during manoeuvres with low lateral acceleration and low speed. The proposed method is generic and can be applied to various other types of road vehicles and faults.

  11. Modeling and Simulation of Transient Fault Response at Lillgrund Wind Farm when Subjected to Faults in the Connecting 130 kV Grid

    Energy Technology Data Exchange (ETDEWEB)

    Eliasson, Anders; Isabegovic, Emir

    2009-07-01

    The purpose of this thesis was to investigate what type of faults in the connecting grid should be dimensioning for future wind farms. An investigation of over and under voltages at the main transformer and the turbines inside Lillgrund wind farm was the main goal. The results will be used in the planning stage of future wind farms when performing insulation coordination and determining the protection settings. A model of the Lillgrund wind farm and a part of the connecting 130 kV grid were built in PSCAD/EMTDC. The farm consists of 48 Siemens SWT-2.3-93 2.3 MW wind turbines with full power converters. The turbines were modeled as controllable current sources providing a constant active power output up to the current limit of 1.4 pu. The transmission lines and cables were modeled as frequency dependent (phase) models. The load flows and bus voltages were verified towards a PSS/E model and the transient response was verified towards measuring data from two faults, a line to line fault in the vicinity of Barsebaeck (BBK) and a single line-to-ground fault close to Bunkeflo (BFO) substation. For the simulation, three phase to ground, single line to ground and line to line faults were applied at different locations in the connecting grid and the phase to ground voltages at different buses in the connecting grid and at turbines were studied. These faults were applied for different configurations of the farm. For single line to ground faults, the highest over voltage on a turbine was 1.22 pu (32.87 kV) due to clearing of a fault at BFO (the PCC). For line to line faults, the highest over voltage on a turbine was 1.59 pu (42.83 kV) at the beginning of a fault at KGE one bus away from BFO. Both these cases were when all radials were connected and the turbines ran at full power. The highest over voltage observed at Lillgrund was 1.65 pu (44.45 kV). This over voltage was caused by a three phase to ground fault applied at KGE and occurred at the beginning of the fault and when

  12. Accelerated Testing of UH-60 Viscous Bearings for Degraded Grease Fault

    Science.gov (United States)

    Dykas, Brian; Hood, Adrian; Krantz, Timothy; Klemmer, Marko

    2015-01-01

    An accelerated aging investigation of critical aviation bearings lubricated with MIL-PRF-81322 grease was conducted to derive an understanding of the mechanisms of grease degradation and loss of lubrication over time. The current study focuses on UH-60 Black Hawk viscous damper bearings supporting the tail rotor driveshaft, which were subjected to more than 5800 hours of testing in a heated environment to accelerate the deterioration of the grease. The mechanism of grease degradation is a reduction in the oil/thickener ratio rather than the expected chemical degradation of grease constituents. Over the course of testing, vibration and temperature monitoring of bearings was conducted and trends for failing bearings are presented.

  13. A novel Lagrangian approach for the stable numerical simulation of fault and fracture mechanics

    Science.gov (United States)

    Franceschini, Andrea; Ferronato, Massimiliano; Janna, Carlo; Teatini, Pietro

    2016-06-01

    The simulation of the mechanics of geological faults and fractures is of paramount importance in several applications, such as ensuring the safety of the underground storage of wastes and hydrocarbons or predicting the possible seismicity triggered by the production and injection of subsurface fluids. However, the stable numerical modeling of ground ruptures is still an open issue. The present work introduces a novel formulation based on the use of the Lagrange multipliers to prescribe the constraints on the contact surfaces. The variational formulation is modified in order to take into account the frictional work along the activated fault portion according to the principle of maximum plastic dissipation. The numerical model, developed in the framework of the Finite Element method, provides stable solutions with a fast convergence of the non-linear problem. The stabilizing properties of the proposed model are emphasized with the aid of a realistic numerical example dealing with the generation of ground fractures due to groundwater withdrawal in arid regions.
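
    A minimal Python sketch of the saddle-point structure that arises when contact constraints on a fault surface are enforced with Lagrange multipliers, with the multipliers playing the role of contact tractions. The two-degree-of-freedom stiffness block and the single no-opening constraint are toy values for illustration, not the paper's finite element formulation.

      import numpy as np

      # Toy saddle-point (KKT) system for a constrained pair of degrees of freedom:
      #   [ K  A^T ] [ u      ]   [ f ]
      #   [ A  0   ] [ lambda ] = [ g ]
      # u: displacements, lambda: Lagrange multiplier (contact traction),
      # A u = g enforces, e.g., zero normal relative displacement across the fault.
      K = np.array([[4.0, -1.0],
                    [-1.0, 3.0]])          # illustrative stiffness block
      A = np.array([[1.0, -1.0]])          # constraint: u0 - u1 = 0 (no opening)
      f = np.array([1.0, 0.0])             # external load
      g = np.array([0.0])

      n, m = K.shape[0], A.shape[0]
      S = np.block([[K, A.T], [A, np.zeros((m, m))]])
      sol = np.linalg.solve(S, np.concatenate([f, g]))
      u, lam = sol[:n], sol[n:]
      print("displacements:", u, " contact traction (multiplier):", lam)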

  14. SAFTAC, Monte-Carlo Fault Tree Simulation for System Design Performance and Optimization

    International Nuclear Information System (INIS)

    Crosetti, P.A.; Garcia de Viedma, L.

    1976-01-01

    1 - Description of problem or function: SAFTAC is a Monte Carlo fault tree simulation program that provides a systematic approach for analyzing system design, performing trade-off studies, and optimizing system changes or additions. 2 - Method of solution: SAFTAC assumes an exponential failure distribution for basic input events and a choice of either Gaussian distributed or constant repair times. The program views the system represented by the fault tree as a statistical assembly of independent basic input events, each characterized by an exponential failure distribution and, if used, a constant or normal repair distribution. 3 - Restrictions on the complexity of the problem: The program is dimensioned to handle 1100 basic input events and 1100 logical gates. It can be re-dimensioned to handle up to 2000 basic input events and 2000 logical gates within the existing core memory
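
    A minimal Python sketch of the Monte Carlo fault-tree idea summarized above: exponential times to failure for each basic event, constant repair times, and AND/OR gate logic evaluated at a chosen time to estimate system unavailability. The three-event tree, failure rates, and repair times are illustrative, not SAFTAC's input model.

      import random

      def component_down(lam, repair, t_mission, t_check):
          # True if the component is failed (under repair) at time t_check,
          # assuming exponential times to failure and a constant repair time.
          t = 0.0
          while t < t_mission:
              t_fail = t + random.expovariate(lam)
              if t_fail > t_check:
                  return False
              if t_fail + repair > t_check:
                  return True
              t = t_fail + repair
          return False

      def system_unavailability(n_trials=20000, t_check=500.0):
          random.seed(42)
          failures = 0
          for _ in range(n_trials):
              a = component_down(1e-3, 24.0, 1000.0, t_check)
              b = component_down(2e-3, 12.0, 1000.0, t_check)
              c = component_down(5e-4, 48.0, 1000.0, t_check)
              failures += (a and b) or c     # fault tree: OR over (A AND B) and C
          return failures / n_trials

      print("estimated unavailability at t = 500 h:", system_unavailability())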

  15. The Combined Application of Fault Trees and Turbine Cycle Simulation in Generation Risk Assessment

    International Nuclear Information System (INIS)

    Heo, Gyun Young; Park, Jin Kyun

    2009-01-01

    The paper describes a few ideas developed for the framework to quantify human errors taking place during test and maintenance (T and M) in the secondary system of nuclear power plants, which was presented in the previous meeting. GRA-HRE (Generation Risk Assessment for Human Related Events) is composed of four essential components: the human error interpreter, the frequency estimator, the risk estimator, and the derate estimator. The proposed GRA placed emphasis on explicitly considering human errors, performing fault tree analysis including the entire balance-of-plant side, and quantifying electric loss under abnormal plant configurations. In terms of the consideration of human errors, it was hard to distinguish the effects of human errors from other failure modes in the conventional GRA because the human errors were implicitly involved in mechanical failure modes. Since the risk estimator in GRA-HRE separately deals with the basic events representing human error modes such as control failure, wrong object, omission, wrong action, etc., we can recognize their relative importance compared with other types of mechanical failures. Other special features of GRA-HRE come from the combined application of fault tree analysis and turbine cycle simulation. The previous study suggested that we would use fault tree analysis with top events designated by system malfunctions such as 'feedwater system failure' to develop the risk estimator. However, this approach could not clearly provide the path of propagation of human errors, and it was difficult to present the failure logic in some cases. In order to overcome these bottlenecks, the paper proposes a modified approach to set up top events and explains how to make use of turbine cycle simulation to complete the fault trees in a cooperative manner

  16. Ground-Motion Simulations of Scenario Earthquakes on the Hayward Fault

    Energy Technology Data Exchange (ETDEWEB)

    Aagaard, B; Graves, R; Larsen, S; Ma, S; Rodgers, A; Ponce, D; Schwartz, D; Simpson, R; Graymer, R

    2009-03-09

    We compute ground motions in the San Francisco Bay area for 35 Mw 6.7-7.2 scenario earthquake ruptures involving the Hayward fault. The modeled scenarios vary in rupture length, hypocenter, slip distribution, rupture speed, and rise time. This collaborative effort involves five modeling groups, using different wave propagation codes and domains of various sizes and resolutions, computing long-period (T > 1-2 s) or broadband (T > 0.1 s) synthetic ground motions for overlapping subsets of the suite of scenarios. The simulations incorporate 3-D geologic structure and illustrate the dramatic increase in intensity of shaking for Mw 7.05 ruptures of the entire Hayward fault compared with Mw 6.76 ruptures of the southern two-thirds of the fault. The area subjected to shaking stronger than MMI VII increases from about 10% of the San Francisco Bay urban area in the Mw 6.76 events to more than 40% of the urban area for the Mw 7.05 events. Similarly, combined rupture of the Hayward and Rodgers Creek faults in a Mw 7.2 event extends shaking stronger than MMI VII to nearly 50% of the urban area. For a given rupture length, the synthetic ground motions exhibit the greatest sensitivity to the slip distribution and location inside or near the edge of sedimentary basins. The hypocenter also exerts a strong influence on the amplitude of the shaking due to rupture directivity. The synthetic waveforms exhibit a weaker sensitivity to the rupture speed and are relatively insensitive to the rise time. The ground motions from the simulations are generally consistent with Next Generation Attenuation ground-motion prediction models but contain long-period effects, such as rupture directivity and amplification in shallow sedimentary basins that are not fully captured by the ground-motion prediction models.

  17. Monte Carlo simulation of medical linear accelerator using primo code

    International Nuclear Information System (INIS)

    Omer, Mohamed Osman Mohamed Elhasan

    2014-12-01

    The use of Monte Carlo simulation has become very important in the medical field, especially for calculations in radiotherapy. Various Monte Carlo codes have been developed to simulate the interactions of particles and photons with matter. One of these codes is PRIMO, which simulates radiation transport from the primary electron source of a linac to estimate the absorbed dose in a water phantom or a computerized tomography (CT) volume. PRIMO is based on the Penelope Monte Carlo code. Measurements of the 6 MV photon beam PDD and profile were made for an Elekta Precise linear accelerator at the Radiation and Isotopes Center Khartoum using a computerized Blue water phantom and a CC13 ionization chamber. Accept software was used to control the phantom and to measure and verify the dose distribution. An Elekta linac from the list of available linacs in PRIMO was tuned to model the Elekta Precise linear accelerator. Beam parameters of 6.0 MeV initial electron energy, 0.20 MeV FWHM, and 0.20 cm focal spot FWHM were used, and an error of 4% between calculated and measured curves was found. The buildup depth Z max was 1.40 cm, and homogeneous profiles in the crossline and inline directions were acquired. A number of studies were done to verify the usability of the model; one of them examined the effect of the number of histories on the accuracy of the simulation and the resulting profile for the same beam parameters. The effect was noticeable, and inaccuracies in the profile were reduced by increasing the number of histories. Another study examined the effect of side-step errors on the calculated dose, which was compared with the measured dose for the same setting. It was in the range of 2% for a 5 cm shift, but it was higher in the calculated dose because of the small difference between the tuned model and the measured dose curves. Future developments include simulating asymmetrical fields, calculating the dose distribution in a computerized tomographic (CT) volume, and studying the effect of beam modifiers on beam profiles for both electron and photon beams. (Author)

  18. Monte Carlo simulation of a clinical linear accelerator

    International Nuclear Information System (INIS)

    Lin, S.-Y.; Chu, T.-C.; Lin, J.-P.

    2001-01-01

    The effects of the physical parameters of an electron beam from a Siemens PRIMUS clinical linear accelerator (linac) on the dose distribution in water were investigated by Monte Carlo simulation. The EGS4 user code, OMEGA/BEAM, was used in this study. Various incident electron beams, for example, with different energies, spot sizes and distances from the point source, were simulated using the detailed linac head structure in the 6 MV photon mode. Approximately 10 million particles were collected in the scored plane, which was set under the reticle to form the so-called phase space file. The phase space file served as a source for simulating the dose distribution in water using DOSXYZ. Dose profiles at D max (1.5 cm) and PDD curves were calculated after simulating about 1 billion histories for dose profiles and 500 million histories for percent depth dose (PDD) curves in a 30×30×30 cm³ water phantom. The simulation results were compared with the data measured by a CEA film and an ion chamber. The results show that the dose profiles are influenced by the energy and the spot size, while PDD curves are primarily influenced by the energy of the incident beam. The effect of the distance from the point source on the dose profile is not significant, and it is recommended to be set at infinity. We also recommend adjusting the beam energy by using PDD curves and, then, adjusting the spot size by using the dose profile to maintain the consistency of the Monte Carlo results and measured data

  19. On acceleration of plasmoids in magnetohydrodynamic simulations of magnetotail reconnection

    International Nuclear Information System (INIS)

    Scholer, M.; Hautz, R.

    1991-01-01

    The formation and acceleration of plasmoids is investigated by two-dimensional magnetohydrodynamic simulations. The initial equilibrium contains a plasma sheet with a northward magnetic field (Bz) component and a tailward pressure gradient. Reconnection is initiated by three different methods: case A, a constant resistivity is applied everywhere and a tearing mode evolves; case B, a spatially localized resistivity is fixed in the near-Earth region; and case C, the resistivity is allowed to depend on the electrical current density. In case A, the authors obtain the same results as have been presented by Otto et al. (1990): the tearing instability releases the tension of the closed field lines so that the inherent pressure gradient of the two-dimensional system is not balanced anymore. The pressure gradient then sets the plasmoid into motion. Any sling-shot effect of open magnetic field lines is of minor importance. A completely different behavior has been found in cases B and C. In these cases the high-speed flow in the wedge-shaped region tailward of the near-Earth neutral line pushes against the detached plasmoid and drives it tailward. The ideal terms contributing to the acceleration are still only the pressure and the magnetic field term. However, in these cases the pressure is due to the dynamic pressure of the fast outflow from the reconnection region. The outflow in the wedge-shaped region on both sides of the neutral line is due to acceleration of plasma by tangential magnetic stresses at the slow mode shocks extending from the X line

  20. ELEGANT: A flexible SDDS-compliant code for accelerator simulation

    International Nuclear Information System (INIS)

    Borland, M.

    2000-01-01

    ELEGANT (ELEctron Generation ANd Tracking) is the principal accelerator simulation code used at the Advanced Photon Source (APS) for circular and one-pass machines. Capabilities include 6-D tracking using matrices up to third order, canonical integration, and numerical integration. Standard beamline elements are supported, as well as coherent synchrotron radiation, wakefields, rf elements, kickers, apertures, scattering, and more. In addition to tracking with and without errors, ELEGANT performs optimization of tracked properties, as well as computation and optimization of Twiss parameters, radiation integrals, matrices, and floor coordinates. Orbit/trajectory, tune, and chromaticity correction are supported. ELEGANT is fully compliant with the Self Describing Data Sets (SDDS) file protocol, and hence uses the SDDS Toolkit for pre- and post-processing. This permits users to prepare scripts to run the code in a flexible and automated fashion. It is particularly well suited to multistage simulation and concurrent simulation on many workstations. Several examples of complex projects performed with ELEGANT are given, including top-up safety analysis of the APS and design of the APS bunch compressor

  1. Beam dynamics simulation of the Spallation Neutron Source linear accelerator

    International Nuclear Information System (INIS)

    Takeda, H.; Billen, J.H.; Bhatia, T.S.

    1998-01-01

    The accelerating structure for the Spallation Neutron Source (SNS) consists of a radio-frequency-quadrupole linac (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL), and a coupled-cavity linac (CCL). The linac is operated at room temperature. The authors discuss the detailed design of the linac, which accelerates a pulsed H⁻ beam coming out of the RFQ at 2.5 MeV up to 1000 MeV. They show a detailed transition from the 402.5 MHz DTL with a 4 βλ structure to a CCDTL operated at 805 MHz with a 12 βλ structure. After a discussion of the overall features of the linac, they present an end-to-end particle simulation using the new version of the PARMILA code for a beam starting from the RFQ entrance through the rest of the linac. At 1000 MeV, the beam is transported to a storage ring. The storage ring requires a large (±500-keV) energy spread. This is accomplished by operating the rf phase in the last section of the linac so that the particles are at the unstable fixed point of the separatrix. They present zero-current phase advance, beam size, and beam emittance along the entire linac

  2. Simulations of Flame Acceleration and DDT in Mixture Composition Gradients

    Science.gov (United States)

    Zheng, Weilin; Kaplan, Carolyn; Houim, Ryan; Oran, Elaine

    2017-11-01

    Unsteady, multidimensional, fully compressible numerical simulations of methane-air in an obstructed channel with spatial gradients in equivalence ratio have been carried out to determine the effects of the gradients on flame acceleration and transition to detonation. Results for gradients perpendicular to the propagation direction are considered here. A calibrated, optimized chemical-diffusive model that reproduces correct flame and detonation properties for methane-air over a range of equivalence ratios was derived from a combination of a genetic algorithm with a Nelder-Mead optimization scheme. Inhomogeneous mixtures of methane-air resulted in slower flame acceleration and a longer distance to DDT. Detonations were more likely to decouple into a flame and a shock under sharper concentration gradients. Detailed analyses of temperature and equivalence ratio illustrated that vertical gradients can greatly affect the formation of hot spots that initiate detonation by changing the strength of the leading shock wave and the local equivalence ratio near the base of obstacles. This work is supported by the Alpha Foundation (Grant No. AFC215-20).

  3. Dynamic Monte Carlo simulations of radiatively accelerated GRB fireballs

    Science.gov (United States)

    Chhotray, Atul; Lazzati, Davide

    2018-05-01

    We present a novel Dynamic Monte Carlo code (DynaMo code) that self-consistently simulates the Compton-scattering-driven dynamic evolution of a plasma. We use the DynaMo code to investigate the time-dependent expansion and acceleration of dissipationless gamma-ray burst fireballs by varying their initial opacities and baryonic content. We study the opacity and energy density evolution of an initially optically thick, radiation-dominated fireball across its entire phase space - in particular during the Rph matter-dominated fireballs due to Thomson scattering. We quantify the new phases by providing analytical expressions of Lorentz factor evolution, which will be useful for deriving jet parameters.

  4. CUDA accelerated simulation of needle insertions in deformable tissue

    International Nuclear Information System (INIS)

    Patriciu, Alexandru

    2012-01-01

    This paper presents a stiff needle-deformable tissue interaction model. The model uses a mesh-less discretization of the continuum, thus avoiding the expensive remeshing required by finite element models. The proposed model can accommodate both linear and nonlinear material characteristics. The needle-deformable tissue interaction is modeled through fundamental boundaries. The forces applied by the needle on the tissue are divided into tangent forces and constraint forces. The constraint forces are adaptively computed such that the material is properly constrained by the needle. The implementation is accelerated using NVidia CUDA. We present a detailed analysis of the execution timing in both the serial and parallel cases. The proposed needle insertion model was integrated into custom software that loads DICOM images, generates the deformable model, and can simulate different insertion strategies.

  5. A comparison among observations and earthquake simulator results for the allcal2 California fault model

    Science.gov (United States)

    Tullis, Terry. E.; Richards-Dinger, Keith B.; Barall, Michael; Dieterich, James H.; Field, Edward H.; Heien, Eric M.; Kellogg, Louise; Pollitz, Fred F.; Rundle, John B.; Sachs, Michael K.; Turcotte, Donald L.; Ward, Steven N.; Yikilmaz, M. Burak

    2012-01-01

    In order to understand earthquake hazards we would ideally have a statistical description of earthquakes for tens of thousands of years. Unfortunately the ∼100‐year instrumental, several 100‐year historical, and few 1000‐year paleoseismological records are woefully inadequate to provide a statistically significant record. Physics‐based earthquake simulators can generate arbitrarily long histories of earthquakes; thus they can provide a statistically meaningful history of simulated earthquakes. The question is, how realistic are these simulated histories? The purpose of this paper is to begin to answer that question. We compare the results between different simulators and with information that is known from the limited instrumental, historic, and paleoseismological data. As expected, the results from all the simulators show that the observational record is too short to properly represent the system behavior; therefore, although tests of the simulators against the limited observations are necessary, they are not a sufficient test of the simulators’ realism. The simulators appear to pass this necessary test. In addition, the physics‐based simulators show similar behavior even though there are large differences in the methodology. This suggests that they represent realistic behavior. Different assumptions concerning the constitutive properties of the faults do result in enhanced capabilities of some simulators. However, it appears that the similar behavior of the different simulators may result from the fault‐system geometry, slip rates, and assumed strength drops, along with the shared physics of stress transfer. This paper describes the results of running four earthquake simulators that are described elsewhere in this issue of Seismological Research Letters. The simulators ALLCAL (Ward, 2012), VIRTCAL (Sachs et al., 2012), RSQSim (Richards‐Dinger and Dieterich, 2012), and ViscoSim (Pollitz, 2012) were run on our most recent all‐California fault

  6. Numerical simulation of faulting in the Sunda Trench shows that seamounts may generate megathrust earthquakes

    Science.gov (United States)

    Jiao, L.; Chan, C. H.; Tapponnier, P.

    2017-12-01

    The role of seamounts in generating earthquakes has been debated, with some studies suggesting that seamounts could be truncated to generate megathrust events, while other studies indicate that the maximum size of megathrust earthquakes could be reduced as subducting seamounts could lead to segmentation. The debate is highly relevant for the seamounts discovered along the Mentawai patch of the Sunda Trench, where previous studies have suggested that a megathrust earthquake will likely occur within decades. In order to model the dynamic behavior of the Mentawai patch, we simulated forearc faulting caused by seamount subduction using the Discrete Element Method. Our models show that rupture behavior in the subduction system is dominated by the stiffness of the overriding plate. When stiffness is low, a seamount can be a barrier to rupture propagation, resulting in several smaller (M≤8.0) events. If, however, stiffness is high, a seamount can cause a megathrust earthquake (M8 class). In addition, we show that a splay fault in the subduction environment could only develop when a seamount is present, and a larger offset along a splay fault is expected when the stiffness of the overriding plate is higher. Our dynamic models are not only consistent with previous findings from seismic profiles and earthquake activities, but the models also better constrain the rupture behavior of the Mentawai patch, thus contributing to subsequent seismic hazard assessment.

  7. A comparison between rate-and-state friction and microphysical models, based on numerical simulations of fault slip

    Science.gov (United States)

    van den Ende, M. P. A.; Chen, J.; Ampuero, J.-P.; Niemeijer, A. R.

    2018-05-01

    Rate-and-state friction (RSF) is commonly used for the characterisation of laboratory friction experiments, such as velocity-step tests. However, the RSF framework provides little physical basis for the extrapolation of these results to the scales and conditions of natural fault systems, and so open questions remain regarding the applicability of the experimentally obtained RSF parameters for predicting seismic cycle transients. As an alternative to classical RSF, microphysics-based models offer means for interpreting laboratory and field observations, but are generally over-simplified with respect to heterogeneous natural systems. In order to bridge the temporal and spatial gap between the laboratory and nature, we have implemented existing microphysical model formulations into an earthquake cycle simulator. Through this numerical framework, we make a direct comparison between simulations exhibiting RSF-controlled fault rheology, and simulations in which the fault rheology is dictated by the microphysical model. Even though the input parameters for the RSF simulation are directly derived from the microphysical model, the microphysics-based simulations produce significantly smaller seismic event sizes than the RSF-based simulation, and suggest a more stable fault slip behaviour. Our results reveal fundamental limitations in using classical rate-and-state friction for the extrapolation of laboratory results. The microphysics-based approach offers a more complete framework in this respect, and may be used for a more detailed study of the seismic cycle in relation to material properties and fault zone pressure-temperature conditions.
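
    For reference, a minimal Python sketch of the classical rate-and-state description that the microphysics-based simulations are compared against: friction depends on slip velocity and a state variable evolving under the aging law, integrated here through an imposed velocity step. The parameter values are typical laboratory-scale numbers chosen for illustration.

      import numpy as np

      # Rate-and-state friction with the aging law, driven by an imposed velocity step:
      #   mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc),   d(theta)/dt = 1 - V*theta/Dc
      mu0, a, b, Dc, V0 = 0.6, 0.010, 0.015, 1e-5, 1e-6   # -, -, -, m, m/s

      dt, n_steps = 1e-2, 40000
      V = np.where(np.arange(n_steps) * dt < 200.0, 1e-6, 1e-5)  # step from 1 to 10 um/s
      theta = Dc / V[0]                                           # initially at steady state

      mu_hist = np.empty(n_steps)
      for i in range(n_steps):
          theta += dt * (1.0 - V[i] * theta / Dc)                 # aging-law state evolution
          mu_hist[i] = mu0 + a * np.log(V[i] / V0) + b * np.log(V0 * theta / Dc)

      # With a < b the steady-state friction drops after the step (velocity weakening).
      print("peak after step:", mu_hist.max(), " new steady state:", mu_hist[-1])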

  8. Accelerated prompt gamma estimation for clinical proton therapy simulations

    Science.gov (United States)

    Huisman, Brent F. B.; Létang, J. M.; Testa, É.; Sarrut, D.

    2016-11-01

    There is interest in the particle therapy community in using prompt gammas (PGs), a natural byproduct of particle treatment, for range verification and eventually dose control. However, PG production is a rare process and therefore estimation of PGs exiting a patient during a proton treatment plan executed by a Monte Carlo (MC) simulation converges slowly. Recently, different approaches to accelerating the estimation of PG yield have been presented. Sterpin et al (2015 Phys. Med. Biol. 60 4915-46) described a fast analytic method, which is still sensitive to heterogeneities. El Kanawati et al (2015 Phys. Med. Biol. 60 8067-86) described a variance reduction method (pgTLE) that accelerates the PG estimation by precomputing PG production probabilities as a function of energy and target materials, but has the drawback that it is limited to analytical phantoms. We present a two-stage variance reduction method, named voxelized pgTLE (vpgTLE), that extends pgTLE to voxelized volumes. As a preliminary step, PG production probabilities are precomputed once and stored in a database. In stage 1, we simulate the interactions between the treatment plan and the patient CT with low-statistics MC to obtain the spatial and spectral distribution of the PGs. As primary particles are propagated throughout the patient CT, the PG yields are computed in each voxel from the initial database, as a function of the current energy of the primary, the material in the voxel and the step length. The result is a voxelized image of PG yield, normalized to a single primary. The second stage uses this intermediate PG image as a source to generate and propagate PGs throughout the rest of the scene geometry, e.g. into a detection device, for the desired number of primaries. We achieved a gain of around 10^3 for both a geometrical heterogeneous phantom and a complete patient CT treatment plan with respect to analog MC, at a convergence level of 2% relative uncertainty
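
    The two-stage structure described above can be illustrated schematically. The Python fragment below is a minimal sketch of the idea only, not the actual vpgTLE implementation; the array shapes, the toy probability database and the sampling routine are all assumptions introduced for the example.

      import numpy as np

      # Precomputed once (stage 0): PG production probability per unit path length,
      # indexed by material and proton energy bin (hypothetical toy database).
      n_materials, n_energy_bins, n_pg_energy_bins = 3, 50, 25
      rng = np.random.default_rng(0)
      pg_db = rng.random((n_materials, n_energy_bins, n_pg_energy_bins)) * 1e-4  # PG / mm

      def stage1_score(tracks, material_of_voxel, pg_yield):
          """Stage 1: during low-statistics proton transport, score the expected
          PG yield of every step into the voxel it crosses."""
          for voxel, energy_bin, step_length_mm in tracks:
              mat = material_of_voxel[voxel]
              pg_yield[voxel] += pg_db[mat, energy_bin].sum() * step_length_mm

      def stage2_sample(pg_yield, n_photons):
          """Stage 2: use the voxelized yield image as a PG source and draw
          emission voxels with probability proportional to the local yield."""
          p = pg_yield.ravel() / pg_yield.sum()
          return rng.choice(pg_yield.size, size=n_photons, p=p)

      # Toy usage: 10 voxels, a handful of invented proton steps.
      pg_yield = np.zeros(10)
      material_of_voxel = np.zeros(10, dtype=int)
      fake_tracks = [(2, 10, 1.0), (3, 9, 1.2), (3, 8, 0.8)]
      stage1_score(fake_tracks, material_of_voxel, pg_yield)
      source_voxels = stage2_sample(pg_yield, n_photons=1000)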

  9. Mathematical modeling and numerical simulation of unilateral dynamic rupture propagation along very-long reverse faults

    Science.gov (United States)

    Hirano, S.

    2017-12-01

    For some great earthquakes, dynamic rupture propagates unilaterally along the horizontal direction of very-long reverse faults (e.g., the Mw9.1 Sumatra earthquake in 2004, the Mw8.0 Wenchuan earthquake in 2008, and the Mw8.8 Maule earthquake in 2010). It seems that barriers or creeping sections may not lie along the region opposite to the co-seismically ruptured direction; in fact, in the case of Sumatra, the Mw8.6 earthquake occurred in the opposite region only three months after the mainshock. The mechanism of unilateral mode-II rupture along a material interface has been investigated theoretically and numerically. For mode-II rupture propagating along a material interface, an analytical solution implies that the co-seismic stress perturbation depends on the rupture direction (Weertman, 1980 JGR; Hirano & Yamashita, 2016 BSSA), and numerical modeling of plastic yielding contributes to simulating the unilateral rupture (DeDontney et al., 2011 JGR). However, mode-III rupture may dominate for very-long reverse faults, and it can be shown that the stress perturbation due to mode-III rupture does not depend on the rupture direction. Hence, the effect of a material interface is insufficient to explain the mechanism of unilateral rupture along very-long reverse faults. In this study, I consider a two-dimensional bimaterial system with interfacial dynamic mode-III rupture under an obliquely pre-stressed configuration (i.e., the maximum shear direction of the background stress is inclined from the interfacial fault). First, I derived an analytical solution for the regularized elastic stress field around a steady-state interfacial slip pulse using the method of Rice et al. (2005 BSSA), and found that the total stress, which is the sum of the background stress and the co-seismic stress perturbation, depends on the rupture direction even in the mode-III case. Second, I executed a finite difference numerical simulation with a plastic yielding model of Andrews (1978 JGR; 2005

  10. CONFIG - Adapting qualitative modeling and discrete event simulation for design of fault management systems

    Science.gov (United States)

    Malin, Jane T.; Basham, Bryan D.

    1989-01-01

    CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.

  11. Application of a Cycle Jump Technique for Acceleration of Fatigue Crack Growth Simulation

    DEFF Research Database (Denmark)

    Moslemian, Ramin; Berggreen, Christian; Karlsson, A.M.

    2010-01-01

    A method for accelerated simulation of fatigue crack growth in a bimaterial interface is proposed. To simulate fatigue crack growth in a bimaterial interface a routine is developed in the commercial finite element code ANSYS and a method to accelerate the simulation is implemented. The proposed m...... of the simulation show that with fair accuracy, using the cycle jump method, more than 70% reduction in computation time can be achieved....
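
    The cycle jump idea can be sketched generically: instead of resolving every load cycle, the crack growth rate is evaluated at a few control cycles and then extrapolated over a block of skipped cycles. The fragment below is a schematic illustration using an assumed Paris-type growth law, not the ANSYS routine described in the record; the growth-law constants, geometry factor and jump-control criterion are all assumptions for the example.

      import math

      C, m = 1e-11, 3.0          # assumed Paris-law constants: da/dN = C * dK^m

      def delta_k(a, delta_sigma=80.0):
          """Toy stress-intensity range for a centre crack: dK = dSigma * sqrt(pi * a)."""
          return delta_sigma * math.sqrt(math.pi * a)

      def grow_with_cycle_jumps(a0, n_total, jump=5000, rel_tol=0.05):
          """Advance crack length a over n_total cycles, extrapolating over 'jump'
          cycles whenever the growth rate changes slowly enough."""
          a, n = a0, 0
          while n < n_total:
              rate = C * delta_k(a) ** m            # da/dN at the current state
              block = min(jump, n_total - n)
              a_trial = a + rate * block            # linear extrapolation over the block
              rate_trial = C * delta_k(a_trial) ** m
              if abs(rate_trial - rate) > rel_tol * rate:
                  block = max(1, block // 2)        # rate changes too fast: shrink the jump
                  a_trial = a + rate * block
              a, n = a_trial, n + block
          return a

      print(grow_with_cycle_jumps(a0=1e-3, n_total=200_000))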

  12. Performance inspection of smart superconducting fault current controller in radial distribution substation through PSCAD/EMTDC simulation

    Energy Technology Data Exchange (ETDEWEB)

    MassoudiFarid, Mehrdad; Shin, Jae Woong; Lee, Ji Ho; Ko, Tae Kuk [Yonsei University, Seoul (Korea, Republic of)

    2013-12-15

    In the power grid, upgrading of the system is occasionally required in order to balance generation with demand, and this leads to higher fault current levels. However, upgrading all the protection instruments of the system is costly. This issue can be addressed by using a Smart Fault Current Controller (SFCC). While the impact of fault current limiters (FCLs) at various locations has been studied in different situations for years, the performance of the SFCC has not been investigated extensively. In this research, an SFCC, which adopts the characteristics of a full-bridge thyristor rectifier with a superconducting coil, is applied at three locations (the load feeder, the bus-tie position and the main feeder) and its behavior is investigated through simulation in the presence and absence of a small distributed generation (DG) unit. The results show a substantial reduction of the fault current when the SFCC is used.

  13. Strong ground motion prediction applying dynamic rupture simulations for Beppu-Haneyama Active Fault Zone, southwestern Japan

    Science.gov (United States)

    Yoshimi, M.; Matsushima, S.; Ando, R.; Miyake, H.; Imanishi, K.; Hayashida, T.; Takenaka, H.; Suzuki, H.; Matsuyama, H.

    2017-12-01

    We conducted strong ground motion prediction for the active Beppu-Haneyama Fault zone (BHFZ), Kyushu island, southwestern Japan. Since the BHFZ runs through Oita and Beppu cities, strong ground motion as well as fault displacement may strongly affect the cities. We constructed a 3-dimensional velocity structure of a sedimentary basin, the Beppu bay basin, where the fault zone runs and where Oita and Beppu cities are located. The minimum shear wave velocity of the 3D model is 500 m/s. An additional 1D structure is modeled for sites with softer sediment in the Holocene plain area. We collected and compiled data obtained from microtremor surveys, ground motion observations, boreholes, etc., including phase velocities and H/V ratios. A finer, 250 m mesh model of the Oita Plain is built from borehole data with an empirical relation among N-value, lithology, depth and Vs, and then validated against the phase velocity data obtained by the dense microtremor array observation (Yoshimi et al., 2016). Synthetic ground motion has been calculated with a hybrid technique composed of a stochastic Green's function method (HF wave), a 3D finite difference method (LF wave) and a 1D amplification calculation. The fault geometry has been determined based on reflection surveys and the active fault map. The rake angles are calculated with a dynamic rupture simulation considering three fault segments under a stress field estimated from source mechanisms of earthquakes around the faults (Ando et al., JpGU-AGU2017). Fault parameters such as the average stress drop and the size of the asperity are determined based on an empirical relation proposed by Irikura and Miyake (2001). As a result, ground motion velocities exceeding 100 cm/s are predicted on the hanging-wall side of the Oita plain. This work is supported by the Comprehensive Research on the Beppu-Haneyama Fault Zone funded by the Ministry of Education, Culture, Sports, Science, and Technology (MEXT), Japan.

  14. Simulating the effect of SFCL on limiting the internal fault of synchronous machine

    International Nuclear Information System (INIS)

    Kheirizad, I; Varahram, M H; Jahed-Motlagh, M R; Rahnema, M; Mohammadi, A

    2008-01-01

    In this paper, we have modelled a synchronous generator with an internal single-phase-to-ground fault and analyzed the performance of the machine under this fault. The results show that faults occurring in the vicinity of the machine's terminals can cause serious damage. To protect the machine from this kind of fault, we suggest integrating an SFCL (superconducting fault current limiter) into the machine's model. The results show that the fault currents in this case are reduced considerably without affecting the normal operation of the machine

  15. Probabilistic Approach to Enable Extreme-Scale Simulations under Uncertainty and System Faults. Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Knio, Omar [Duke Univ., Durham, NC (United States). Dept. of Mechanical Engineering and Materials Science

    2017-05-05

    The current project develops a novel approach that uses a probabilistic description to capture the current state of knowledge about the computational solution. To effectively spread the computational effort over multiple nodes, the global computational domain is split into many subdomains. Computational uncertainty in the solution translates into uncertain boundary conditions for the equation system to be solved on those subdomains, and many independent, concurrent subdomain simulations are used to account for this boundary condition uncertainty. By relying on the fact that solutions on neighboring subdomains must agree with each other, a more accurate estimate for the global solution can be achieved. Statistical approaches in this update process make it possible to account for the effect of system faults in the probabilistic description of the computational solution, and the associated uncertainty is reduced through successive iterations. By combining all of these elements, the probabilistic reformulation allows splitting the computational work over very many independent tasks for good scalability, while being robust to system faults.

  16. High aspect ratio problem in simulation of a fault current limiter based on superconducting tapes

    International Nuclear Information System (INIS)

    Velichko, A V; Coombs, T A

    2006-01-01

    We are offering a solution for the high-aspect-ratio problem relevant to the numerical simulation of AC loss in superconductors and metals with high aspect (width-to-thickness) ratio. This is particularly relevant to simulation of fault current limiters (FCLs) based on second generation YBCO tapes on RABiTS. By assuming a linear scaling of the electric and thermal properties with the size of the structure, we can replace the real sample with an effective sample of a reduced aspect ratio by introducing size multipliers into the equations that govern the physics of the system. The simulation is performed using both a proprietary equivalent circuit software and a commercial FEM software. The correctness of the procedure is verified by simulating temperature and current distributions for samples with all three dimensions varying within 10^-3 to 10^3 of the original size. Qualitatively the distributions for the original and scaled samples are indistinguishable, whereas quantitative differences in the worst case do not exceed 10%

  17. High aspect ratio problem in simulation of a fault current limiter based on superconducting tapes

    Energy Technology Data Exchange (ETDEWEB)

    Velichko, A V; Coombs, T A [Electrical Engineering Division, University of Cambridge (United Kingdom)

    2006-06-15

    We are offering a solution for the high-aspect-ratio problem relevant to the numerical simulation of AC loss in superconductors and metals with high aspect (width-to-thickness) ratio. This is particularly relevant to simulation of fault current limiters (FCLs) based on second generation YBCO tapes on RABiTS. By assuming a linear scaling of the electric and thermal properties with the size of the structure, we can replace the real sample with an effective sample of a reduced aspect ratio by introducing size multipliers into the equations that govern the physics of the system. The simulation is performed using both a proprietary equivalent circuit software and a commercial FEM software. The correctness of the procedure is verified by simulating temperature and current distributions for samples with all three dimensions varying within 10^-3 to 10^3 of the original size. Qualitatively the distributions for the original and scaled samples are indistinguishable, whereas quantitative differences in the worst case do not exceed 10%.

  18. Research on burnout fault of moulded case circuit breaker based on finite element simulation

    Science.gov (United States)

    Xue, Yang; Chang, Shuai; Zhang, Penghe; Xu, Yinghui; Peng, Chuning; Shi, Erwei

    2017-09-01

    Among the failure events of molded case circuit breakers, overheating of the molded case near the wiring terminal accounts for a very significant proportion. Burnout faults have become an important factor restricting the development of molded case circuit breakers. This paper uses finite element simulation software to establish a multi-physics coupled model of a molded case circuit breaker. The model can simulate operation and is used to study the temperature distribution. The simulation results show that the temperature of the molded case circuit breaker near the wiring terminal, especially on the incoming side of the live wire, is much higher than that in other areas. The steady-state and transient simulation results show that the temperature at the wiring terminals increases abnormally when the contact resistance of the wiring terminals is increased. This is consistent with the frequent occurrence of burnout of the molded case in this area. Therefore, this paper holds that the burnout failure of the molded case circuit breaker is mainly caused by an abnormal increase of the contact resistance of the wiring terminal.

  19. Complex of electrostatic accelerators for simulation and diagnostics of radiation damage

    International Nuclear Information System (INIS)

    Antuf'ev, Yu.P.; Belyaev, V.Kh.; Vergunov, A.D.

    1983-01-01

    An installation for the simulation and diagnostics of radiation damage in materials is described. The installation consists of two electrostatic accelerators: a vertical-type machine for 5 MV and a horizontal-type machine for 800 kV. The accelerating complex provides accelerated ion beams both in independent operation and in a regime of simultaneous two-beam irradiation of a target; the energy range of accelerated singly charged ions is 80 keV to 5 MeV, and the beam homogeneity is better than ±0.05%. An oil-free vacuum pumping system is implemented at the accelerating complex

  20. Simulation of the impact of wind power on the transient fault behavior of the Nordic power system

    Energy Technology Data Exchange (ETDEWEB)

    Jauch, Clemens; Soerensen, Poul [Risoe National Laboratory, Wind Energy Department, P.O. Box 49, DK-4000 Roskilde (Denmark); Norheim, Ian [SINTEF Energy Research, The department of Energy Systems, Sem Saelands Vei 11, NO-7463 Trondheim (Norway); Rasmussen, Carsten [Elkraft System, 2750 Ballerup (Denmark)

    2007-02-15

    In this paper the effect of wind power on the transient fault behavior of the Nordic power system is investigated. The Nordic power system is the interconnected power system of Norway, Sweden, Finland and Denmark. For the purpose of these investigations the wind turbines installed and connected in eastern Denmark are taken as a study case. The current and future wind power situation in eastern Denmark is modeled and short circuit faults in the system are simulated. The simulations yield information on (i) how the faults affect the wind turbines and (ii) how the response of the wind turbines influences the post-fault behavior of the Nordic power system. It is concluded that an increasing level of wind power penetration leads to stronger system oscillations in the case of fixed speed wind turbines. It is found that fixed speed wind turbines that merely ride through transient faults have negative impacts on the dynamic response of the system. These negative impacts can be mitigated, though, if sophisticated wind turbine control is applied. (author)

  1. Determining DfT Hardware by VHDL-AMS Fault Simulation for Biological Micro-Electronic Fluidic Arrays

    NARCIS (Netherlands)

    Kerkhoff, Hans G.; Zhang, X.; Liu, H.; Richardson, A.; Nouet, P.; Azais, F.

    2005-01-01

    Interest in microelectronic fluidic arrays for biomedical applications, such as DNA determination, is rapidly increasing. In order to evaluate these systems in terms of required Design-for-Test structures, fault simulations in both the fluidic and electronic domains are necessary. VHDL-AMS can be used

  2. Dynamic rupture simulations of the 2016 Mw7.8 Kaikōura earthquake: a cascading multi-fault event

    Science.gov (United States)

    Ulrich, T.; Gabriel, A. A.; Ampuero, J. P.; Xu, W.; Feng, G.

    2017-12-01

    The Mw7.8 Kaikōura earthquake struck the northern part of New Zealand's South Island roughly one year ago. It ruptured multiple segments of the contractional North Canterbury fault zone and of the Marlborough fault system. Field observations combined with satellite data suggest a rupture path involving partly unmapped faults separated by stepover distances larger than 5 km, the maximum distance usually considered by the latest seismic hazard assessment methods. This might imply distant rupture transfer mechanisms generally not considered in seismic hazard assessment. We present high-resolution 3D dynamic rupture simulations of the Kaikōura earthquake under physically self-consistent initial stress and strength conditions. Our simulations are based on recent finite-fault slip inversions that constrain the fault system geometry and final slip distribution from remote sensing, surface rupture and geodetic data (Xu et al., 2017). We assume a uniform background stress field, without lateral fault stress or strength heterogeneity. We use the open-source software SeisSol (www.seissol.org), which is based on an arbitrary high-order accurate DERivative Discontinuous Galerkin method (ADER-DG). Our method can account for complex fault geometries, high resolution topography and bathymetry, 3D subsurface structure, off-fault plasticity and modern friction laws. It enables the simulation of seismic wave propagation with high-order accuracy in space and time in complex media. We show that a cascading rupture driven by dynamic triggering can break all fault segments that were involved in this earthquake without mechanically requiring an underlying thrust fault. Our preferred fault geometry connects most fault segments and does not feature stepovers larger than 2 km. The best scenario matches the main macroscopic characteristics of the earthquake, including its apparently slow rupture propagation caused by zigzag cascading, the moment magnitude and the overall inferred slip

  3. Advanced visualization technology for terascale particle accelerator simulations

    International Nuclear Information System (INIS)

    Ma, K-L; Schussman, G.; Wilson, B.; Ko, K.; Qiang, J.; Ryne, R.

    2002-01-01

    This paper presents two new hardware-assisted rendering techniques developed for interactive visualization of the terascale data generated from numerical modeling of next generation accelerator designs. The first technique, based on a hybrid rendering approach, makes possible interactive exploration of large-scale particle data from particle beam dynamics modeling. The second technique, based on a compact texture-enhanced representation, exploits the advanced features of commodity graphics cards to achieve perceptually effective visualization of the very dense and complex electromagnetic fields produced from the modeling of reflection and transmission properties of open structures in an accelerator design. Because of the collaborative nature of the overall accelerator modeling project, the visualization technology developed is for both desktop and remote visualization settings. We have tested the techniques using both time varying particle data sets containing up to one billion particles per time step and electromagnetic field data sets with millions of mesh elements

  4. Cosmic ray acceleration by stellar wind. Simulation for heliosphere

    International Nuclear Information System (INIS)

    Petukhov, S.I.; Turpanov, A.A.; Nikolaev, V.S.

    1985-01-01

    The solar wind deceleration by the interstellar medium may result in the existence of a solar wind terminal shock. In this case a certain fraction of thermal particles, after being heated at the shock, would obtain enough energy to be injected into the regular acceleration process. An analytical solution for the spectrum has been derived in the frame of a simplified model that includes particle acceleration at the shock front and adiabatic cooling inside the stellar wind cavity. It is shown that the acceleration of solar wind particles at the solar wind terminal shock is capable of providing a total flux, spectrum and radial gradients of the low-energy protons close to those observed in interplanetary space

  5. Particle acceleration in solar flares: observations versus numerical simulations

    International Nuclear Information System (INIS)

    Benz, A O; Grigis, P C; Battaglia, M

    2006-01-01

    Solar flares are generally agreed to be impulsive releases of magnetic energy. Reconnection in dilute plasma is the suggested trigger for the coronal phenomenon. It releases up to 10^26 J, accelerates up to 10^38 electrons and ions and must involve a volume that greatly exceeds the current sheet dimension. The Ramaty High-Energy Solar Spectroscopic Imager satellite can image a source in the corona that appears to contain the acceleration region and can separate it from other x-ray emissions. The new observations constrain the acceleration process by a quantitative relation between spectral index and flux. We present recent observational results and compare them with theoretical modelling by a stochastic process assuming transit-time damping of fast-mode waves, escape and replenishment. The observations can only be fitted if additional assumptions on trapping by an electric potential and possibly other processes such as isotropization and magnetic trapping are made

  6. Simulation and design of the photonic crystal microwave accelerating structure

    International Nuclear Information System (INIS)

    Song Ruiying; Wu Congfeng; He Xiaodong; Dong Sai

    2007-01-01

    The authors have derived the global band gaps for general two-dimensional (2D) photonic crystal microwave accelerating structures formed by square or triangular arrays of metal posts. A coordinate-space, finite-difference code was used to calculate the complete dispersion curves for the lattices. The fundamental and higher-frequency global photonic band gaps were determined numerically. The structure formed by a triangular array of metal posts with a missing rod at the center has the advantages of higher-order-mode (HOM) suppression and confinement of the main mode under the condition a/b<0.2. The relationship between the RF properties and the geometrical parameters has been studied for the 9.37 GHz photonic crystal accelerating structure. The Rs, Q and Rs/Q of the new structure are comparable to those of the disk-loaded accelerating structure. (authors)

  7. Computing elastic‐rebound‐motivated earthquake probabilities in unsegmented fault models: a new methodology supported by physics‐based simulators

    Science.gov (United States)

    Field, Edward H.

    2015-01-01

    A methodology is presented for computing elastic‐rebound‐based probabilities in an unsegmented fault or fault system, which involves computing along‐fault averages of renewal‐model parameters. The approach is less biased and more self‐consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude‐dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long‐term system behavior, which is generally found to be consistent with that of physics‐based earthquake simulators. Results cast doubt that recurrence‐interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long‐term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first‐order elastic‐rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
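
    For context, the conditional probability computed from any renewal model follows the same generic formula: given an open interval of length T since the last rupture, the probability of an event within the next dT is [F(T+dT) - F(T)] / [1 - F(T)], where F is the recurrence-interval CDF. The Python sketch below illustrates this with a lognormal recurrence distribution as an assumed example; it is not the along-fault-averaging methodology of the paper, and all numbers are invented.

      import math

      def lognormal_cdf(t, mean, aperiodicity):
          """CDF of a lognormal recurrence model parameterised by its mean recurrence
          interval and aperiodicity (coefficient of variation); assumed example."""
          sigma = math.sqrt(math.log(1.0 + aperiodicity ** 2))
          mu = math.log(mean) - 0.5 * sigma ** 2
          return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

      def conditional_probability(time_since_last, horizon, mean, aperiodicity):
          """P(event in the next 'horizon' years | no event in 'time_since_last' years)."""
          F = lambda t: lognormal_cdf(t, mean, aperiodicity)
          return (F(time_since_last + horizon) - F(time_since_last)) / (1.0 - F(time_since_last))

      # Example: 30-year probability for a fault with 200-year mean recurrence,
      # aperiodicity 0.5, last rupture 150 years ago (all values hypothetical).
      print(conditional_probability(150.0, 30.0, 200.0, 0.5))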

  8. Parameter design and performance simulation of a 10 kV voltage compensation type active superconducting fault current limiter

    International Nuclear Information System (INIS)

    Chen, L.; Tang, Y.J.; Song, M.; Shi, J.; Ren, L.

    2013-01-01

    Highlights: •For a practical 10 kV system, the 10 kV active SFCL's basic parameters are designed. •Under different fault conditions, the 10 kV active SFCL's performance is simulated. •The designed 10 kV active SFCL's engineering feasibility is discussed preliminarily. -- Abstract: Since the introduction of the superconducting fault current limiter (SFCL) into electrical distribution systems may be a good choice in terms of economy and practicability, the parameter design and current-limiting characteristics of a 10 kV voltage compensation type active SFCL are studied in this paper. Firstly, the SFCL's circuit structure and operation principle are presented. Then, taking a practical 10 kV distribution system as its application object, the SFCL's basic parameters are designed to meet the system requirements. Further, using MATLAB, the detailed current-limiting performance of the 10 kV active SFCL is simulated under different fault conditions. The simulation results show that the active SFCL can deal well with the faults, and the suitability of the parameter design is thereby verified. Finally, some preliminary discussions of the engineering feasibility of the 10 kV active SFCL are carried out
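
    To give a sense of what "current-limiting performance" means at circuit level, the toy calculation below integrates a single-phase R-L fault loop with and without an extra series limiting impedance. All component values are invented for illustration, and the model is far simpler than the voltage-compensation-type active SFCL studied in the record.

      import math

      def fault_current_peak(v_peak=8165.0, freq=50.0, r_line=0.5, l_line=0.01,
                             r_limiter=0.0, t_end=0.1, dt=1e-5):
          """Peak prospective fault current of a single-phase R-L loop, with an
          optional series limiter resistance inserted at fault inception (toy model)."""
          i, peak = 0.0, 0.0
          r_total = r_line + r_limiter
          for n in range(int(t_end / dt)):
              v = v_peak * math.sin(2.0 * math.pi * freq * n * dt)
              di = (v - r_total * i) / l_line     # explicit Euler step of L di/dt = v - R i
              i += di * dt
              peak = max(peak, abs(i))
          return peak

      # Compare the prospective fault current with and without a 2-ohm limiting resistance.
      print(fault_current_peak(r_limiter=0.0), fault_current_peak(r_limiter=2.0))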

  9. Fault Diagnosis for a Multistage Planetary Gear Set Using Model-Based Simulation and Experimental Investigation

    Directory of Open Access Journals (Sweden)

    Guoyan Li

    2016-01-01

    Gear damage induces modulation effects in vibration signals. A thorough analysis of the spectral structure of the modulation sidebands is necessary for fault diagnosis of planetary gear sets. However, the spectral characteristics are complicated in practice, especially for a multistage planetary gear set which contains close frequency components. In this study, a coupled lateral and torsional dynamic model is established to predict the modulation sidebands of a two-stage compound planetary gear set. An improved potential energy method is used to calculate the time-varying mesh stiffness of each gear pair, and the influence of crack propagation on the mesh stiffness is analyzed. The simulated signals of the gear set are obtained using the Runge-Kutta numerical method. Meanwhile, the sideband characteristics are summarized to exhibit the modulation effects caused by sun gear damage. Finally, experimental signals collected from an industrial SD16 planetary gearbox are analyzed to verify the theoretical derivations. The experimental results agree well with the simulated analysis.

  10. Generation Risk Assessment Using Fault Trees and Turbine Cycle Simulation: Case Studies

    International Nuclear Information System (INIS)

    Heo, Gyun Young; Park, Jin Kyun

    2009-01-01

    Since 2007, the Korea Atomic Energy Research Institute and Kyung Hee University have collaborated on the development of a framework to quantify human errors occurring during test and maintenance (T and M) in the secondary systems of nuclear power plants (NPPs). The project entitled 'Development of Causality Analyzer for Maintenance/Test Tasks in Nuclear Power Plants' for OPR1000, based on the proposed framework, is still on-going and will come to fruition by 2010. The overall concept of GRA-HRE (Generation Risk Assessment for Human Related Events), which is the designation of the framework, and the quantification methods for evaluating risk and electric loss have been introduced in other references. The original contributions made while implementing GRA-HRE can be summarized as (1) recognizing the relative importance of human errors compared with other types of mechanical and/or electrical failures, (2) providing a top-down path for the propagation of human errors by designating top events in the fault tree model as trip signals, and (3) analyzing electric loss using turbine cycle simulation. Recently, we successfully illustrated the applicability of GRA-HRE by simulating several abnormalities. Since the detailed methodologies have been released in sufficient detail elsewhere, this paper only exemplifies the case studies
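
    As a minimal illustration of the fault-tree part of such a framework, the probability of a top event (e.g. a trip signal) can be combined from independent basic events, human errors included, with standard AND/OR gate algebra, and an expected generation loss follows by weighting that probability with an electric loss figure from a turbine-cycle calculation. The event names and numbers below are invented for the example and are not taken from GRA-HRE.

      from functools import reduce

      def or_gate(probabilities):
          """P(at least one of several independent basic events occurs)."""
          return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probabilities, 1.0)

      def and_gate(probabilities):
          """P(all independent basic events occur)."""
          return reduce(lambda acc, p: acc * p, probabilities, 1.0)

      # Hypothetical basic events contributing to a trip signal during maintenance.
      p_human_error_valve_lineup = 1e-3
      p_sensor_failure = 5e-4
      p_logic_card_failure = 2e-4

      p_trip = or_gate([and_gate([p_human_error_valve_lineup, p_sensor_failure]),
                        p_logic_card_failure])

      # Expected generation loss: trip probability times the electric loss (MWh)
      # that a turbine-cycle simulation would provide (value assumed here).
      electric_loss_mwh = 12_000.0
      print(p_trip, p_trip * electric_loss_mwh)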

  11. FDTD method using for electrodynamic simulation of resonator accelerating structures

    International Nuclear Information System (INIS)

    Vorogushin, M.F.; Svistunov, Yu.A.; Chetverikov, I.O.; Malyshev, V.N.; Malyukhov, M.V.

    2000-01-01

    The finite-difference time-domain (FDTD) method makes it possible to model both stationary and nonstationary processes originating from the interaction of the beam and the fields. The capabilities of the method for modeling the fields in resonant accelerating structures are demonstrated. Besides solving the problem of determining the eigenfrequencies and the spatial distributions of the resonator eigenmodes, the ability to consider transient processes is important. The program presented makes it possible to obtain practical results for modeling accelerating structures on personal computers
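
    The core of an FDTD solver is a leap-frog update of electric and magnetic field components on staggered grids. The following one-dimensional vacuum example (normalised units, soft Gaussian source) is a generic textbook sketch, not the resonator code described in the record.

      import numpy as np

      # 1D FDTD (vacuum, normalised units): leap-frog update of Ez and Hy.
      nx, nt = 400, 1000
      ez = np.zeros(nx)
      hy = np.zeros(nx - 1)
      courant = 0.5                      # Courant number S = c*dt/dx <= 1 for stability

      for n in range(nt):
          hy += courant * (ez[1:] - ez[:-1])              # H update (half time step)
          ez[1:-1] += courant * (hy[1:] - hy[:-1])        # E update (half step later)
          ez[nx // 4] += np.exp(-((n - 60) / 15.0) ** 2)  # soft Gaussian source

      print(ez.max())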

  12. Simulation of fault-bend fold by incompressible Newtonian fluid; Hiasshukusei Newton ryutai ni yoru danso oremagari shukyoku kozo no simulation

    Energy Technology Data Exchange (ETDEWEB)

    Tamagawa, T; Matsuoka, T [Japan Petroleum Exploration Corp., Tokyo (Japan); Tsukui, R [Japan National Oil Corp., Tokyo (Japan). Technology Research Center

    1997-10-22

    Incompressible Newtonian fluid simulation is applied on a trial basis to faults typical of the compression and extension fields. A fault-bend folding structure of a flat-ramp-flat fault in the compression field and a folding structure of a normal fault in the extension field are studied, and the results are compared with those obtained by the balanced cross section method. The result of the calculation indicates that the velocity gradient with the ramp angle set at 30 deg corresponds to the stress, and that stress concentration takes place at the ramp section of the fault. This solution is an approximation and does not necessarily preserve area, but when the ramp angle is varied from 10 through 40 deg, area is found to be roughly conserved. It is found that the configuration of the folding structure formed by a flat-ramp-flat fault lies between the anomalous-mode layer-parallel shear typical of a balanced cross section and the folding structure formed by a vertical shear. 7 refs., 7 figs.

  13. Simulation of dynamic traffic loading based on accelerated pavement testing (APT)

    CSIR Research Space (South Africa)

    Steyn, WJvdM

    2004-03-01

    The objective of this paper is to introduce the latest Heavy Vehicle Simulator (HVS) technology as part of the South African Accelerated Pavement Testing (APT) efforts, its capabilities and expected impact on road pavement analysis....

  14. An FFT-accelerated time-domain multiconductor transmission line simulator

    KAUST Repository

    Bagci, Hakan; Yilmaz, Ali E.; Michielssen, Eric

    2010-01-01

    simulator is amenable to hybridization, is fast Fourier transform (FFT)-accelerated, and is highly accurate: 1) It can easily be hybridized with TDIE-based field solvers (in a fully rigorous mathematical framework) for performing electromagnetic interference

  15. Modeling of fluid injection and withdrawal induced fault activation using discrete element based hydro-mechanical and dynamic coupled simulator

    Science.gov (United States)

    Yoon, Jeoung Seok; Zang, Arno; Zimmermann, Günter; Stephansson, Ove

    2016-04-01

    Operation of fluid injection into and withdrawal from the subsurface for various purposes has been known to induce earthquakes. Such operations include hydraulic fracturing for shale gas extraction, hydraulic stimulation for Enhanced Geothermal System development, and waste water disposal. Among these, several damaging earthquakes have been reported in the USA, in particular in areas of high-rate, large-volume wastewater injection [1], mostly involving natural fault systems. Oil and gas production is also known to induce earthquakes where the pore fluid pressure decreases, in some cases by several tens of megapascals. One recent seismic event occurred in November 2013 near Azle, Texas, where a series of earthquakes began along a mapped ancient fault system [2]. It was found that a combination of brine production and waste water injection near the fault generated subsurface pressures sufficient to induce earthquakes on near-critically stressed faults. This numerical study aims at investigating the occurrence mechanisms of such earthquakes induced by fluid injection [3] and withdrawal by using a coupled hydro-geomechanical dynamic simulator (Itasca's Particle Flow Code 2D). Generic models are set up to investigate the sensitivity of the fault system response and the activation magnitude to several parameters, which include the fault orientation, frictional properties, distance from the injection well to the fault, and the amount of fluid withdrawal around the injection well. Fault slip movement over time in relation to the diffusion of pore pressure is analyzed in detail. Moreover, correlations between the spatial distribution of pore pressure change and the locations of induced seismic events and the fault slip rate are investigated. References [1] Keranen KM, Weingarten M, Albers GA, Bekins BA, Ge S, 2014. Sharp increase in central Oklahoma seismicity since 2008 induced by massive wastewater injection, Science 345, 448, DOI: 10.1126/science.1255802. [2] Hornbach MJ, DeShon HR

  16. Hazard-to-Risk: High-Performance Computing Simulations of Large Earthquake Ground Motions and Building Damage in the Near-Fault Region

    Science.gov (United States)

    Miah, M.; Rodgers, A. J.; McCallen, D.; Petersson, N. A.; Pitarka, A.

    2017-12-01

    We are running high-performance computing (HPC) simulations of ground motions for large (magnitude M=6.5-7.0) earthquakes in the near-fault region, and using these motions to compute the response of steel moment frame buildings throughout the near-fault domain. For ground motions, we are using SW4, a fourth order summation-by-parts finite difference time-domain code running on 10,000s to 100,000s of cores. Earthquake ruptures are generated using the Graves and Pitarka (2017) method. We validated ground motion intensity measurements against Ground Motion Prediction Equations. We considered two events (M=6.5 and 7.0) for vertical strike-slip ruptures with three-dimensional (3D) basin structures, including stochastic heterogeneity. We have also considered M7.0 scenarios for a Hayward Fault rupture, which affects the San Francisco Bay Area and northern California, using both 1D and 3D earth structure. The dynamic, inelastic response of canonical buildings is computed with NEVADA, a nonlinear, finite-deformation finite element code. Canonical buildings include 3-, 9-, 20- and 40-story steel moment frame buildings. Damage potential is tracked by the peak inter-story drift (PID) ratio, which measures the maximum displacement between adjacent floors of the building and is strongly correlated with damage. PID ratios greater than 1.0 generally indicate non-linear response and permanent deformation of the structure. We also track roof displacement to identify permanent deformation. PID (damage) for a given earthquake scenario (M, slip distribution, hypocenter) is spatially mapped throughout the SW4 domain with 1-2 km resolution. Results show that in the near-fault region building damage is correlated with peak ground velocity (PGV), while farther away (> 20 km) it is better correlated with peak ground acceleration (PGA). We also show how simulated ground motions have peaks in the response spectra that shift to longer periods for larger magnitude events and for locations of forward directivity, as has been reported by
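
    Peak inter-story drift is a simple post-processing quantity: for each pair of adjacent floors the relative displacement history is divided by the story height, and the PID ratio is the maximum of that quantity over all stories and all time steps. A schematic computation is sketched below; the array shapes and the synthetic displacement histories are assumptions for the example, not output of the codes named in the record.

      import numpy as np

      def peak_interstory_drift_ratio(floor_disp, story_heights):
          """floor_disp: array (n_steps, n_floors) of horizontal floor displacements;
          story_heights: array (n_floors - 1,) of story heights. Returns the max drift ratio."""
          drift = floor_disp[:, 1:] - floor_disp[:, :-1]      # relative displacement histories
          drift_ratio = np.abs(drift) / story_heights          # normalise by story height
          return drift_ratio.max()

      # Toy example: 3 stories of 4 m, synthetic sinusoidal displacement histories.
      t = np.linspace(0.0, 10.0, 2001)
      floor_disp = np.column_stack([0.02 * k * np.sin(2 * np.pi * 0.5 * t) for k in range(4)])
      print(peak_interstory_drift_ratio(floor_disp, np.full(3, 4.0)))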

  17. Acceleration techniques for dependability simulation. M.S. Thesis

    Science.gov (United States)

    Barnette, James David

    1995-01-01

    As computer systems increase in complexity, the need to project system performance from the earliest design and development stages increases. We have to employ simulation for detailed dependability studies of large systems. However, as the complexity of the simulation model increases, the time required to obtain statistically significant results also increases. This paper discusses an approach that is application independent and can be readily applied to any process-based simulation model. Topics include background on classical discrete event simulation and techniques for random variate generation and statistics gathering to support simulation.
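
    Two of the building blocks mentioned, random variate generation and statistics gathering, are illustrated schematically below with inverse-transform sampling of exponential inter-event times and a running (Welford) estimate of the sample mean and variance. This is a generic sketch, not the thesis implementation.

      import math, random

      def exponential_variate(rate, u=None):
          """Inverse-transform sampling: T = -ln(1 - U) / rate, with U ~ Uniform(0, 1)."""
          u = random.random() if u is None else u
          return -math.log(1.0 - u) / rate

      class RunningStats:
          """Welford's online algorithm for the sample mean and variance."""
          def __init__(self):
              self.n, self.mean, self.m2 = 0, 0.0, 0.0
          def push(self, x):
              self.n += 1
              delta = x - self.mean
              self.mean += delta / self.n
              self.m2 += delta * (x - self.mean)
          @property
          def variance(self):
              return self.m2 / (self.n - 1) if self.n > 1 else float("nan")

      stats = RunningStats()
      for _ in range(100_000):
          stats.push(exponential_variate(rate=0.1))   # the mean should approach 1/rate = 10
      print(stats.mean, stats.variance)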

  18. Stochastic Modeling and Simulation of Near-Fault Ground Motions for Performance-Based Earthquake Engineering

    OpenAIRE

    Dabaghi, Mayssa

    2014-01-01

    A comprehensive parameterized stochastic model of near-fault ground motions in two orthogonal horizontal directions is developed. The proposed model uniquely combines several existing and new sub-models to represent major characteristics of recorded near-fault ground motions. These characteristics include near-fault effects of directivity and fling step; temporal and spectral non-stationarity; intensity, duration and frequency content characteristics; directionality of components, as well as ...

  19. Simulation of the fault transitory of the feedwater controller in a Boiling water reactor with the Ramona-3B code

    International Nuclear Information System (INIS)

    Hernandez M, J.L.; Ortiz V, J.

    2005-01-01

    The results obtained from simulating, with the Ramona-3B code, the feedwater controller (FCAA) failure transient that occurred in Unit 2 of the Laguna Verde power plant (CNLV) in September 2000 are presented. The transient originated as a consequence of a failure of the speed controller of a feedwater turbo-pump. The work includes a short description of the event, the assumptions made for the simulation, and the results obtained. A discussion of the impact of the transient on aspects of reactor safety is also presented. Although the simulation is limited by the capabilities of the code and by the lack of available information, it was found that even in a conservative situation the power increased only 12% above the nominal value, while the thermal limit determined by the minimum critical power ratio (MCPR) always remained above the operational and safety limit values. (Author)

  20. Constraint methods that accelerate free-energy simulations of biomolecules.

    Science.gov (United States)

    Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.
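
    A typical "spring-like" restraint of the kind referred to here is a harmonic penalty on an inter-atomic distance inferred from external structural knowledge. The sketch below shows the energy and forces for such a restraint; the force constant and target distance are arbitrary illustration values, and in a principled scheme of the sort the authors review such information would be handled probabilistically rather than imposed rigidly.

      import numpy as np

      def harmonic_distance_restraint(r_i, r_j, r0, k):
          """Energy U = 0.5*k*(|r_i - r_j| - r0)^2 and the forces on the two atoms."""
          d_vec = r_i - r_j
          d = np.linalg.norm(d_vec)
          energy = 0.5 * k * (d - r0) ** 2
          f_i = -k * (d - r0) * d_vec / d        # force on atom i (negative gradient)
          return energy, f_i, -f_i

      # Illustration: two atoms 1.2 nm apart, restrained toward 1.0 nm with k = 500 kJ/mol/nm^2.
      e, fi, fj = harmonic_distance_restraint(np.array([0.0, 0.0, 0.0]),
                                              np.array([1.2, 0.0, 0.0]),
                                              r0=1.0, k=500.0)
      print(e, fi, fj)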

  1. Multi-Physics Modelling of Fault Mechanics Using REDBACK: A Parallel Open-Source Simulator for Tightly Coupled Problems

    Science.gov (United States)

    Poulet, Thomas; Paesold, Martin; Veveakis, Manolis

    2017-03-01

    Faults play a major role in many economically and environmentally important geological systems, ranging from impermeable seals in petroleum reservoirs to fluid pathways in ore-forming hydrothermal systems. Their behavior is therefore widely studied and fault mechanics is particularly focused on the mechanisms explaining their transient evolution. Single faults can change in time from seals to open channels as they become seismically active and various models have recently been presented to explain the driving forces responsible for such transitions. A model of particular interest is the multi-physics oscillator of Alevizos et al. (J Geophys Res Solid Earth 119(6), 4558-4582, 2014) which extends the traditional rate and state friction approach to rate and temperature-dependent ductile rocks, and has been successfully applied to explain spatial features of exposed thrusts as well as temporal evolutions of current subduction zones. In this contribution we implement that model in REDBACK, a parallel open-source multi-physics simulator developed to solve such geological instabilities in three dimensions. The resolution of the underlying system of equations in a tightly coupled manner allows REDBACK to capture appropriately the various theoretical regimes of the system, including the periodic and non-periodic instabilities. REDBACK can then be used to simulate the drastic permeability evolution in time of such systems, where nominally impermeable faults can sporadically become fluid pathways, with permeability increases of several orders of magnitude.

  2. Nucleation and arrest of slow slip earthquakes: mechanisms and nonlinear simulations using realistic fault geometries and heterogeneous medium properties

    Science.gov (United States)

    Alves da Silva Junior, J.; Frank, W.; Campillo, M.; Juanes, R.

    2017-12-01

    Current models for slow slip earthquakes (SSE) assume a simplified fault embedded in a homogeneous half-space. In these models SSE events nucleate on the transition from velocity strengthening (VS) to velocity weakening (VW) down dip from the trench and propagate towards the base of the seismogenic zone, where high normal effective stress is assumed to arrest slip. Here, we investigate SSE nucleation and arrest using quasi-static finite element simulations, with rate and state friction, on a domain with heterogeneous properties and realistic fault geometry. We use the fault geometry of the Guerrero Gap in the Cocos subduction zone, where SSE events occur every 4 years, as a proxy for a subduction zone. Our model is calibrated using surface displacements from GPS observations. We apply boundary conditions according to the plate convergence rate and impose a depth-dependent pore pressure on the fault. Our simulations indicate that the fault geometry and elastic properties of the medium play a key role in the arrest of SSE events at the base of the seismogenic zone. SSE arrest occurs due to aseismic deformations of the domain that result in areas with elevated effective stress. SSE nucleation occurs in the transition from VS to VW and propagates as a crack-like expansion with increased nucleation length prior to dynamic instability. Our simulations encompassing multiple seismic cycles indicate SSE interval times between 1 and 10 years and, importantly, a systematic increase of rupture area prior to dynamic instability, followed by a hiatus in the SSE occurrence. We hypothesize that these SSE characteristics, if confirmed by GPS observations in different subduction zones, can add to the understanding of nucleation of large earthquakes in the seismogenic zone.

  3. Particle acceleration inside PWN: Simulation and observational constraints with INTEGRAL

    International Nuclear Information System (INIS)

    Forot, M.

    2006-12-01

    The context of this thesis is to gain new constraints on the different particle accelerators that occur in the complex environment of neutron stars: in the pulsar magnetosphere, in the striped wind or wave outside the light cylinder, in the jets and equatorial wind, and at the wind terminal shock. An important tool to constrain both the magnetic field and the primary particle energies is to image the synchrotron ageing of the population, but it requires a careful modelling of the magnetic field evolution in the wind flow. The current models and understanding of these different accelerators, the acceleration processes and the open questions are reviewed in the first part of the thesis. The instrumental part of this work involves the IBIS imager, on board the INTEGRAL satellite, which provides images with 12' resolution from 17 keV to the MeV range, where the SPI spectrometer takes over up to 10 MeV, but with a reduced 2 degree resolution. A new method for using the double-layer IBIS imager as a Compton telescope with coded mask aperture has been developed, and its performance has been measured. The Compton scattering information and the achieved sensitivity also open a new window for polarimetry in gamma rays. A method has been developed to extract the linear polarization properties and to check the instrument response for fake polarimetric signals in the various backgrounds and projection effects

  4. On the application of accelerated molecular dynamics to liquid water simulations.

    Science.gov (United States)

    de Oliveira, César Augusto F; Hamelberg, Donald; McCammon, J Andrew

    2006-11-16

    Our group recently proposed a robust bias potential function that can be used in an efficient all-atom accelerated molecular dynamics (MD) approach to simulate the transition of high energy barriers without any advance knowledge of the potential-energy landscape. The main idea is to modify the potential-energy surface by adding a bias, or boost, potential in regions close to the local minima, such that all transitions rates are increased. By applying the accelerated MD simulation method to liquid water, we observed that this new simulation technique accelerates the molecular motion without losing its microscopic structure and equilibrium properties. Our results showed that the application of a small boost energy on the potential-energy surface significantly reduces the statistical inefficiency of the simulation while keeping all the other calculated properties unchanged. On the other hand, although aggressive acceleration of the dynamics simulation increases the self-diffusion coefficient of water molecules greatly and dramatically reduces the correlation time of the simulation, configurations representative of the true structure of liquid water are poorly sampled. Our results also showed the strength and robustness of this simulation technique, which confirm this approach as a very useful and promising tool to extend the time scale of the all-atom simulations of biological system with explicit solvent models. However, we should keep in mind that there is a compromise between the strength of the boost applied in the simulation and the reproduction of the ensemble average properties.
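
    The bias referred to is, in its commonly published form, a boost applied only below a threshold energy E: dV(r) = (E - V(r))^2 / (alpha + E - V(r)) whenever V(r) < E, and zero otherwise, with the forces on the modified surface rescaled accordingly. The fragment below evaluates this boost and the corresponding force-scaling factor; the threshold and alpha values are placeholders, not the parameters used in the water simulations of the paper.

      def amd_boost(v, e, alpha):
          """Accelerated-MD boost dV = (E - V)^2 / (alpha + E - V) for V < E, else 0."""
          if v >= e:
              return 0.0
          return (e - v) ** 2 / (alpha + e - v)

      def amd_force_scale(v, e, alpha):
          """Forces on the boosted surface are the true forces scaled by
          dV*/dV = (alpha / (alpha + E - V))^2 when V < E (and by 1 otherwise)."""
          if v >= e:
              return 1.0
          return (alpha / (alpha + e - v)) ** 2

      # Placeholder numbers: potential energy -9500, threshold -9400, alpha 200 (kJ/mol).
      v, e, alpha = -9500.0, -9400.0, 200.0
      print(amd_boost(v, e, alpha), amd_force_scale(v, e, alpha))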

  5. GPU-accelerated CFD Simulations for Turbomachinery Design Optimization

    NARCIS (Netherlands)

    Aissa, M.H.

    2017-01-01

    Design optimization relies heavily on time-consuming simulations, especially when using gradient-free optimization methods. These methods require a large number of simulations in order to get a remarkable improvement over reference designs, which are nowadays based on the accumulated engineering

  6. Frictional behaviour and transport properties of simulated fault gouges derived from a natural CO2 reservoir

    NARCIS (Netherlands)

    Bakker, E.; Hangx, S.J.T.|info:eu-repo/dai/nl/30483579X; Niemeijer, A.R.|info:eu-repo/dai/nl/370832132; Spiers, C.J.|info:eu-repo/dai/nl/304829323

    2016-01-01

    We investigated the effects of long-term CO2-brine-rock interactions on the frictional and transport properties of reservoir-derived fault gouges, prepared from both unexposed and CO2-exposed sandstone, and from aragonite-cemented fault rock of an active CO2-leaking conduit, obtained from a natural

  7. Electromagnetic computer simulations of collective ion acceleration by a relativistic electron beam

    International Nuclear Information System (INIS)

    Galvez, M.; Gisler, G.R.

    1988-01-01

    A 2.5-dimensional electromagnetic particle-in-cell computer code is used to study collective ion acceleration when a relativistic electron beam is injected into a drift tube partially filled with cold neutral plasma. The simulations of this system reveal that the ions are subject to electrostatic acceleration by an electrostatic potential that forms behind the head of the beam. This electrostatic potential develops soon after the beam is injected into the drift tube, drifts with the beam, and eventually settles at a fixed position. At later times, this electrostatic potential becomes a virtual cathode. When the permanent position of the electrostatic potential is at the edge of the plasma or beyond, ions are accelerated forward and a unidirectional ion flow is obtained; otherwise a bidirectional ion flow occurs. The ions that achieve higher energy are those which drift with the negative potential. When the plasma density is varied, the simulations show that optimum acceleration occurs when the density ratio between the beam (n_b) and the plasma (n_0) is unity. Simulations were also carried out with different ion masses. The results of these simulations corroborate the hypothesis that the ion acceleration mechanism is purely electrostatic, so that the ion acceleration depends inversely on the charged particle mass. The simulations also show that the maximum ion energy increases logarithmically with the electron beam energy and proportionally with the beam current

  8. Accelerators

    CERN Multimedia

    CERN. Geneva

    2001-01-01

    The talk summarizes the principles of particle acceleration and addresses problems related to storage rings like LEP and LHC. Special emphasis will be given to orbit stability, long term stability of the particle motion, collective effects and synchrotron radiation.

  9. Simulating faults and plate boundaries with a transversely isotropic plasticity model

    Science.gov (United States)

    Sharples, W.; Moresi, L. N.; Velic, M.; Jadamec, M. A.; May, D. A.

    2016-03-01

    In mantle convection simulations, dynamically evolving plate boundaries have, for the most part, been represented using a visco-plastic flow law. These systems develop fine-scale, localized, weak shear band structures which are reminiscent of faults, but it is a significant challenge to resolve both the large-scale and the emergent small-scale behavior. We address this issue of resolution by taking into account the observation that a rock element with embedded, planar failure surfaces responds as a non-linear, transversely isotropic material with a weak orientation defined by the plane of the failure surface. This approach partly accounts for the large-scale behavior of fine-scale systems of shear bands which we are not in a position to resolve explicitly. We evaluate the capacity of this continuum approach to model plate boundaries, specifically in the context of subduction models where the plate boundary interface has often been represented as a planar discontinuity. We show that the inclusion of the transversely isotropic plasticity model for the plate boundary promotes asymmetric subduction from initiation. A realistic evolution of the plate boundary interface and associated stresses is crucial to understanding inter-plate coupling, convergent-margin-driven topography, and earthquakes.

  10. Lattice Boltzmann accelerated direct simulation Monte Carlo for dilute gas flow simulations.

    Science.gov (United States)

    Di Staso, G; Clercx, H J H; Succi, S; Toschi, F

    2016-11-13

    Hybrid particle-continuum computational frameworks permit the simulation of gas flows by locally adjusting the resolution to the degree of non-equilibrium displayed by the flow in different regions of space and time. In this work, we present a new scheme that couples the direct simulation Monte Carlo (DSMC) with the lattice Boltzmann (LB) method in the limit of isothermal flows. The former handles strong non-equilibrium effects, as they typically occur in the vicinity of solid boundaries, whereas the latter is in charge of the bulk flow, where non-equilibrium can be dealt with perturbatively, i.e. according to Navier-Stokes hydrodynamics. The proposed concurrent multiscale method is applied to the dilute gas Couette flow, showing major computational gains when compared with the full DSMC scenarios. In addition, it is shown that the coupling with LB in the bulk flow can speed up the DSMC treatment of the Knudsen layer with respect to the full DSMC case. In other words, LB acts as a DSMC accelerator.This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2016 The Author(s).

  11. Prediction of strong acceleration motion depending on focal mechanism; Shingen mechanism wo koryoshita jishindo yosoku ni tsuite

    Energy Technology Data Exchange (ETDEWEB)

    Kaneda, Y; Ejiri, J [Obayashi Corp., Tokyo (Japan)

    1996-10-01

    This paper describes simulation results of strong acceleration motion with varying uncertain fault parameters, mainly for a fault model of the Hyogo-ken Nanbu earthquake. For the analysis, based on the fault parameters, the strong acceleration motion was simulated using the radiation patterns and the rupture time difference of the composite faults as parameters. A statistical waveform composition method was used for the simulation. For the theoretical radiation patterns, the directivity that depends on the strike of the faults was emphasized, and the maximum acceleration was more than 220 gal. For the homogeneous radiation patterns, in contrast, the maximum accelerations were isotropically distributed around the fault as a center. For the variations in the maximum acceleration and the predominant frequency due to the rupture time difference of the three faults, the ratio of the maximum to the minimum response spectral value was about 1.7. From the viewpoint of seismic disaster prevention, underground structures including potential faults and irregular features can be assessed using this simulation. The significance of the prediction of strong acceleration motion was also demonstrated through this simulation using uncertain factors, such as the rupture times of composite faults, as parameters. 4 refs., 4 figs., 1 tab.

  12. Doubly fed induction generator based wind turbine systems subject to recurring grid faults

    DEFF Research Database (Denmark)

    Chen, Wenjie; Blaabjerg, Frede; Zhu, Nan

    2014-01-01

    New grid codes demand that wind turbine systems ride through recurring grid faults. In this paper, the performance of the Doubly Fed Induction Generator wind turbine system under recurring grid faults is analyzed. The stator natural flux produced by the voltage recovery after the first grid fault...... may be superposed on the stator natural flux produced by the second grid fault, and it may result in large current and voltage transients. The damping of the stator natural flux can be accelerated with a rotor natural current in its opposite direction after voltage recovery, but larger torque....... The performance of the DFIG under recurring grid faults is verified by simulation and experiments.

  13. Dynamic rupture simulation of the 2016 Mw 7.8 Kaikoura (New Zealand) earthquake: Is spontaneous multi-fault rupture expected?

    Science.gov (United States)

    Ando, R.; Kaneko, Y.

    2017-12-01

    The coseismic rupture of the 2016 Kaikoura earthquake propagated over a distance of 150 km along the NE-SW striking fault system in the northern South Island of New Zealand. The analysis of InSAR, GPS and field observations (Hamling et al., 2017) revealed that most of the rupture occurred along previously mapped active faults, involving more than seven major fault segments. These fault segments, mostly dipping to the northwest, are distributed in a quite complex manner, manifested by fault branching and step-over structures. Back-projection rupture imaging shows that the rupture appears to jump between three sub-parallel fault segments in sequence from south to north (Kaiser et al., 2017). The rupture seems to have terminated on the Needles fault in Cook Strait. One of the main questions is whether this multi-fault rupture can be naturally explained on a physical basis. In order to understand the conditions responsible for the complex rupture process, we conduct fully dynamic rupture simulations that account for 3-D non-planar fault geometry embedded in an elastic half-space. The fault geometry is constrained by previous InSAR observations and geological inferences. The regional stress field is constrained by the result of a stress tensor inversion based on focal mechanisms (Balfour et al., 2005). The fault is governed by a relatively simple, slip-weakening friction law. For simplicity, the frictional parameters are uniformly distributed, as there is no direct estimate of them except for a shallow portion of the Kekerengu fault (Kaneko et al., 2017). Our simulations show that the rupture can indeed propagate through the complex fault system once it is nucleated at the southernmost segment. The simulated slip distribution is quite heterogeneous, reflecting the nature of the non-planar fault geometry, fault branching and step-over structures. We find that optimally oriented faults exhibit larger slip, which is consistent with the slip model of Hamling et al

  14. Simulation of speed control in acceleration mode of a heavy duty vehicle; Ogatasha no kasokuji ni okeru shasoku seigyo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Endo, S; Ukawa, H [Isuzu Advanced Engineering Center, Ltd., Tokyo (Japan); Sanada, K; Kitagawa, A [Tokyo Institute of Technology, Tokyo (Japan)

    1997-10-01

    A control law for the speed of a heavy duty vehicle in acceleration mode is presented, which is an extended version of a control law in deceleration mode previously proposed by the authors. The control law is based on a constant-acceleration strategy. Using the control law, a target velocity can be reached over a target distance. Both control laws, for acceleration and deceleration mode, can be represented by a unified mathematical formula. Some simulation results are shown to demonstrate the control performance. 7 refs., 9 figs., 2 tabs.
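
    As a rough illustration of the constant-acceleration strategy described above (a minimal sketch only, not the authors' control law; all variable names and numerical values are hypothetical), the required acceleration follows from the target velocity and target distance via v_t^2 = v_0^2 + 2*a*d, and the vehicle state can then be integrated forward:

        # Minimal sketch of a constant-acceleration speed profile: choose the
        # acceleration that satisfies v_t^2 = v_0^2 + 2*a*d for the given target
        # velocity and target distance, then integrate the state forward in time.

        def constant_acceleration(v0, v_target, d_target):
            """Acceleration [m/s^2] that reaches v_target after d_target metres."""
            return (v_target**2 - v0**2) / (2.0 * d_target)

        def simulate(v0=15.0, v_target=25.0, d_target=200.0, dt=0.1):
            a = constant_acceleration(v0, v_target, d_target)
            t, x, v = 0.0, 0.0, v0
            while x < d_target:
                v += a * dt
                x += v * dt
                t += dt
            return t, x, v, a

        if __name__ == "__main__":
            t, x, v, a = simulate()
            print(f"a = {a:.2f} m/s^2, reached {v:.2f} m/s after {x:.1f} m in {t:.1f} s")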

  15. Simulations and Vacuum Tests of a CLIC Accelerating Structure

    CERN Document Server

    Garion, C

    2011-01-01

    The Compact LInear Collider, under study, is based on room-temperature high-gradient structures. The vacuum-related specifics of these cavities are low conductance, large surface area and an unbaked system. The main issue is to reach UHV conditions (typically 10^-7 Pa) in a system where the residual vacuum is driven by water outgassing. A finite element model based on a thermal/vacuum analogy has been built to estimate the vacuum profile in an accelerating structure. Vacuum tests are carried out in a dedicated set-up; the vacuum performances of different configurations are presented and compared with the predictions.

  16. Accelerated simulation of near-Earth-orbit polymer degradation

    Science.gov (United States)

    Laue, Eric

    1992-01-01

    There is a need to simulate the near-Earth-orbit environmental conditions, and it is useful to be able to monitor the changes in physical properties of spacecraft materials. Two different methods for simulating the vacuum-ultraviolet (VUV) and soft X-ray near-Earth-orbit flux are presented. Also, methods for monitoring the changes in optical ultraviolet transmission and mass loss are presented. The results of exposures to VUV photons and charged particles on these materials are discussed.

  17. Acceleration of Radiance for Lighting Simulation by Using Parallel Computing with OpenCL

    Energy Technology Data Exchange (ETDEWEB)

    Zuo, Wangda; McNeil, Andrew; Wetter, Michael; Lee, Eleanor

    2011-09-06

    We report on the acceleration of annual daylighting simulations for fenestration systems in the Radiance ray-tracing program. The algorithm was optimized to reduce both the redundant data input/output operations and the floating-point operations. To further accelerate the simulation speed, the calculation of the matrix multiplications was implemented using parallel computing on a graphics processing unit. We used OpenCL, which is a cross-platform parallel programming language. Numerical experiments show that the combination of the above measures can speed up the annual daylighting simulations by a factor of 101.7 or 28.6 when the sky vector has 146 or 2306 elements, respectively.

  18. ELECTROMAGNETIC AND THERMAL SIMULATIONS FOR THE SWITCH REGION OF A COMPACT PROTON ACCELERATOR

    International Nuclear Information System (INIS)

    Wang, L; Caporaso, G J; Sullivan, J S

    2007-01-01

    A compact proton accelerator for medical applications is being developed at Lawrence Livermore National Laboratory. The accelerator architecture is based on the dielectric wall accelerator (DWA) concept. One critical area to consider is the switch region. Electric field simulations and thermal calculations of the switch area were performed to help determine the operating limits of the SiC switches. Different geometries were considered for the field simulation, including the shape of the thin indium solder meniscus between the electrodes and the SiC. Electric field simulations were also utilized to demonstrate how the field stress could be reduced. Both transient and steady-state thermal simulations were analyzed to find the average power capability of the switches

  19. Further development of the V-code for recirculating linear accelerator simulations

    Energy Technology Data Exchange (ETDEWEB)

    Franke, Sylvain; Ackermann, Wolfgang; Weiland, Thomas [Institut fuer Theorie Elektromagnetischer Felder, Technische Universitaet Darmstadt (Germany); Eichhorn, Ralf; Hug, Florian; Kleinmann, Michaela; Platz, Markus [Institut fuer Kernphysik, Technische Universitaet Darmstadt (Germany)

    2011-07-01

    The Superconducting Darmstaedter LINear Accelerator (S-DALINAC) installed at the Institute of Nuclear Physics (IKP) at TU Darmstadt is designed as a recirculating linear accelerator. The beam is first accelerated up to 10 MeV in the injector beam line. Then it is deflected by 180 degrees into the main linac. The linac section with eight superconducting cavities is passed up to three times, providing a maximum energy gain of 40 MeV on each passage. Due to this recirculating layout it is complicated to find an accurate setup for the various beam line elements. Fast online beam dynamics simulations can advantageously assist the operators because they provide a more detailed insight into the actual machine status. In this contribution, further developments of the moment-based simulation tool V-code, which enables the simulation of recirculating machines, are presented together with simulation results.

  20. Mining-induced fault reactivation associated with the main conveyor belt roadway and safety of the Barapukuria Coal Mine in Bangladesh: Constraints from BEM simulations

    Energy Technology Data Exchange (ETDEWEB)

    Islam, Md. Rafiqul; Shinjo, Ryuichi [Department of Physics and Earth Sciences, University of the Ryukyus, Okinawa, 903-0213 (Japan)

    2009-09-01

    Fault reactivation during underground mining is a critical problem in coal mines worldwide. This paper investigates the mining-induced reactivation of faults associated with the main conveyor belt roadway (CBR) of the Barapukuria Coal Mine in Bangladesh. The stress characteristics and deformation around the faults were investigated by boundary element method (BEM) numerical modeling. The model consists of a simple geometry with two faults (Fb and Fb1) near the CBR and the surrounding rock strata. A Mohr-Coulomb failure criterion with bulk rock properties is applied to analyze the stability and safety around the fault zones, as well as for the entire mining operation. The simulation results illustrate that the mining-induced redistribution of stresses causes significant deformation within and around the two faults. The horizontal and vertical stresses influence the faults, and higher stresses are concentrated near the ends of the two faults. Higher vertical tensional stress is prominent at the upper end of fault Fb. High deviatoric stress values that concentrated at the ends of faults Fb and Fb1 indicate the tendency towards block failure around the fault zones. The deviatoric stress patterns imply that the reinforcement strength to support the roof of the roadway should be greater than 55 MPa along the fault core zone, and should be more than 20 MPa adjacent to the damage zone of the fault. Failure trajectories that extend towards the roof and left side of fault Fb indicate that mining-induced reactivation of faults is not sufficient to generate water inflow into the mine. However, if movement of strata occurs along the fault planes due to regional earthquakes, and if the faults intersect the overlying Lower Dupi Tila aquiclude, then liquefaction could occur along the fault zones and enhance water inflow into the mine. The study also reveals that the hydraulic gradient and the general direction of groundwater flow are almost at right angles with the trends of

  1. Analysis of lightning fault detection, location and protection on short and long transmission lines using Real Time Digital Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Andre Luiz Pereira de [Siemens Ltda., Sao Paulo, SP (Brazil)], E-mail: andreluiz.oliveira@siemens.com

    2007-07-01

    The purpose of this paper is to present an analysis of lightning fault detection, location and protection using numerical distance relays applied to high voltage transmission lines, more specifically the 500 kV transmission lines of CEMIG (Brazilian Energy Utility) between the Vespasiano 2 - Neves 1 (short line - 23.9 km) and Vespasiano 2 - Mesquita (long line - 148.6 km) substations. The analysis was based on the simulation results of numerical distance protective relays on power transmission lines, carried out from September 2 to 6, 2002, at Siemens AG's facilities (Erlangen, Germany), using a Real Time Digital Simulator (RTDS(TM)). Several lightning fault simulations were performed under various conditions of the electrical power system in which the protective relays would be installed. The results present not only the lightning fault clearing times, but also the overall functionality of a protection system, including correct detection, location and other advantages that these modern protection devices bring to the power system. (author)

  2. Fault diagnosis

    Science.gov (United States)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept developed based on those requirements takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to

  3. Simulation Studies of the Dielectric Grating as an Accelerating and Focusing Structure

    International Nuclear Information System (INIS)

    Soong, Ken; Peralta, E.A.; Byer, R.L.; Colby, E.

    2011-01-01

    A grating-based design is a promising candidate for a laser-driven dielectric accelerator. Through simulations, we show the merits of a readily fabricated grating structure as an accelerating component. Additionally, we show that with a small design perturbation, the accelerating component can be converted into a focusing structure. The understanding of these two components is critical in the successful development of any complete accelerator. The concept of accelerating electrons with the tremendous electric fields found in lasers has been proposed for decades. However, until recently the realization of such an accelerator was not technologically feasible. Recent advances in the semiconductor industry, as well as advances in laser technology, have now made laser-driven dielectric accelerators imminent. The grating-based accelerator is one proposed design for a dielectric laser-driven accelerator. This design, which was introduced by Plettner, consists of a pair of opposing transparent binary gratings, illustrated in Fig. 1. The teeth of the gratings serve as a phase mask, ensuring phase synchronicity between the electromagnetic field and the moving particles. The current grating accelerator design has the drive laser incident perpendicular to the substrate, which poses a laser-structure alignment complication. The next iteration of grating structure fabrication seeks to monolithically create an array of grating structures by etching the grating's vacuum channel into a fused silica wafer. With this method it is possible to have the drive laser confined to the plane of the wafer, thus ensuring alignment of the laser and structure, the two grating halves, and subsequent accelerator components. There has been previous work using 2-dimensional finite difference time domain (2D-FDTD) calculations to evaluate the performance of the grating accelerator structure. However, this work approximates the grating as an infinite structure and does not accurately model a

  4. Availability simulation software adaptation to the IFMIF accelerator facility RAMI analyses

    International Nuclear Information System (INIS)

    Bargalló, Enric; Sureda, Pere Joan; Arroyo, Jose Manuel; Abal, Javier; De Blas, Alfredo; Dies, Javier; Tapia, Carlos; Mollá, Joaquín; Ibarra, Ángel

    2014-01-01

    Highlights: • The reason why the IFMIF RAMI analyses need a simulation is explained. • Changes, modifications and software validations done to AvailSim are described. • First IFMIF RAMI results obtained with AvailSim 2.0 are shown. • Implications of AvailSim 2.0 for the IFMIF RAMI analyses are evaluated. - Abstract: Several problems were found when using generic reliability tools to perform RAMI (Reliability Availability Maintainability Inspectability) studies for the IFMIF (International Fusion Materials Irradiation Facility) accelerator. A dedicated simulation tool was necessary to model properly the complexity of the accelerator facility. AvailSim, the availability simulation software used for the International Linear Collider (ILC), became an excellent option to fulfill the RAMI analysis needs. Nevertheless, this software needed to be adapted and modified to simulate the IFMIF accelerator facility in a way useful for the RAMI analyses in the current design phase. Furthermore, some improvements and new features have been added to the software. This software has become a great tool to simulate the peculiarities of the IFMIF accelerator facility, allowing a realistic availability simulation to be obtained. Degraded operation simulation and maintenance strategies are the main relevant features. In this paper, the necessity of this software, the main modifications to improve it and its adaptation to the IFMIF RAMI analysis are described. Moreover, the first results obtained with AvailSim 2.0 and a comparison with previous results are shown

  5. Availability simulation software adaptation to the IFMIF accelerator facility RAMI analyses

    Energy Technology Data Exchange (ETDEWEB)

    Bargalló, Enric, E-mail: enric.bargallo-font@upc.edu [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Sureda, Pere Joan [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Arroyo, Jose Manuel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Abal, Javier; De Blas, Alfredo; Dies, Javier; Tapia, Carlos [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Mollá, Joaquín; Ibarra, Ángel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain)

    2014-10-15

    Highlights: • The reason why the IFMIF RAMI analyses need a simulation is explained. • Changes, modifications and software validations done to AvailSim are described. • First IFMIF RAMI results obtained with AvailSim 2.0 are shown. • Implications of AvailSim 2.0 for the IFMIF RAMI analyses are evaluated. - Abstract: Several problems were found when using generic reliability tools to perform RAMI (Reliability Availability Maintainability Inspectability) studies for the IFMIF (International Fusion Materials Irradiation Facility) accelerator. A dedicated simulation tool was necessary to model properly the complexity of the accelerator facility. AvailSim, the availability simulation software used for the International Linear Collider (ILC), became an excellent option to fulfill the RAMI analysis needs. Nevertheless, this software needed to be adapted and modified to simulate the IFMIF accelerator facility in a way useful for the RAMI analyses in the current design phase. Furthermore, some improvements and new features have been added to the software. This software has become a great tool to simulate the peculiarities of the IFMIF accelerator facility, allowing a realistic availability simulation to be obtained. Degraded operation simulation and maintenance strategies are the main relevant features. In this paper, the necessity of this software, the main modifications to improve it and its adaptation to the IFMIF RAMI analysis are described. Moreover, the first results obtained with AvailSim 2.0 and a comparison with previous results are shown.

  6. FEM Techniques for High Stress Detection in Accelerated Fatigue Simulation

    Science.gov (United States)

    Veltri, M.

    2016-09-01

    This work presents the theory and a numerical validation study in support of a novel method for a priori identification of fatigue-critical regions, with the aim of accelerating durability design in large FEM problems. The investigation is placed in the context of modern full-body structural durability analysis, where a computationally intensive dynamic solution could be required to identify areas with potential for fatigue damage initiation. The early detection of fatigue-critical areas can drive a simplification of the problem size, leading to appreciable improvements in solution time and model handling while allowing the critical areas to be processed in higher detail. The proposed technique is applied to a real-life industrial case in a comparative assessment with established practices. Synthetic damage prediction quantification and visualization techniques allow for a quick and efficient comparison between methods, outlining potential application benefits and boundaries.

  7. Single event simulation for memories using accelerated ions

    International Nuclear Information System (INIS)

    Sakagawa, Y.; Shiono, N.; Mizusawa, T.; Sekiguchi, M.; Sato, K.; Sugai, I.; Hirao, Y.; Nishimura, J.; Hattori, T.

    1987-01-01

    To evaluate the immunity of LSI memories to errors from cosmic rays in space, an irradiation test using accelerated heavy ions was performed. The sensitive regions for a 64 K DRAM (Dynamic Random Access Memory) and a 4 K SRAM (Static Random Access Memory) are determined from the irradiation test results and the design parameters of the devices. The observed errors can be classified into two types: the direct ionization type and the recoil-produced error type. The sensitive region is determined for each device. Error rate estimation methods for both types are proposed and applied to those memories used in space. The error rate of direct ionization exceeds that of the recoil type by two to three orders of magnitude, and direct ionization is also sensitive to shield thickness. (author)

  8. Simulation of the Focal Spot of the Accelerator Bremsstrahlung Radiation

    Science.gov (United States)

    Sorokin, V.; Bespalov, V.

    2016-06-01

    Testing of thick-walled objects by bremsstrahlung radiation (BR) is primarily performed via high-energy quanta. The testing parameters are specified by the focal spot size of the high-energy bremsstrahlung radiation. When determining the focal spot size, the high-energy portion of the BR cannot be experimentally separated from the low-energy portion so as to use high-energy quanta only. The patterns of BR focal spot formation have been investigated via statistical modeling of the radiation transfer in the target material. The distributions of BR quanta emitted by the target for different energies and emission angles under a normal distribution of the accelerated electrons bombarding the target have been obtained, and the ratio of the distribution parameters has been determined.

  9. Waves and particles in the Fermi accelerator model. Numerical simulation

    International Nuclear Information System (INIS)

    Meplan, O.

    1996-01-01

    This thesis is devoted to a numerical study of the quantum dynamics of the Fermi accelerator, which is classically chaotic: a particle in a one-dimensional box with an oscillating wall. First, we study the classical dynamics: we show that the time of impact of the particle with the moving wall and its energy in the wall frame are conjugate variables, and that Poincare surfaces of section in these variables are more understandable than the usual stroboscopic sections. Then, the quantum dynamics of this system is studied by means of two numerical methods. The first one is a generalization of the KKR method in space-time; it suffices to solve an integral equation on the boundary of a space-time billiard. The second method is faster and is based on successive free propagations and kicks of potential. This allows us to obtain Floquet states which we can, on the one hand, compare to the classical dynamics with the help of Husimi distributions and, on the other hand, study as a function of the parameters of the system. This study leads us to nice illustrations of phenomena such as spatial localization of a wave packet in a vibrating well or tunnel effects. In the adiabatic situation, we give a formula for quasi-energies which exhibits a phase term independent of the states. In this regime, there exist particular situations where the quasi-energy spectrum presents a total quasi-degeneracy. Then, the wave packet energy can increase significantly. This phenomenon is quite surprising for smooth motion of the wall. The third part deals with the evolution of a classical wave in the Fermi accelerator. Using the generalized KKR method, we show a surprising phenomenon: in most situations (as long as the wall motion is periodic), a wave is localized exponentially in the well and its energy increases geometrically. (author). 107 refs., 66 figs., 5 tabs. 2 appends

  10. Classical-trajectory simulation of accelerating neutral atoms with polarized intense laser pulses

    Science.gov (United States)

    Xia, Q. Z.; Fu, L. B.; Liu, J.

    2013-03-01

    In the present paper, we perform classical-trajectory Monte Carlo simulations of the complex dynamics of accelerating neutral atoms with linearly or circularly polarized intense laser pulses. Our simulations involve the ion motion as well as the tunneling ionization and the scattering dynamics of the valence electron in the combined Coulomb and electromagnetic fields, for both helium (He) and magnesium (Mg). We show that for He atoms, only linearly polarized lasers can effectively accelerate the atoms, while for Mg atoms, we find that both linearly and circularly polarized lasers can successfully accelerate the atoms. The underlying mechanism is discussed and the subcycle dynamics of accelerating trajectories is investigated. We have compared our theoretical results with a recent experiment [Eichmann et al., Nature (London) 461, 1261 (2009), doi:10.1038/nature08481].

  11. Accelerating Sequential Gaussian Simulation with a constant path

    Science.gov (United States)

    Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus

    2018-03-01

    Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.
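
    A minimal one-dimensional sketch of the constant-path idea (illustrative only; the grid, the exponential covariance model, the use of all previously simulated nodes as neighbours, and the simple-kriging formulation are assumptions of this sketch, not the authors' implementation): because the visiting order is fixed, the kriging weights and variances are computed once and reused for every realization.

        import numpy as np

        # 1-D Sequential Gaussian Simulation with a constant path (illustrative sketch).
        # Simple kriging with zero mean and exponential covariance; all previously
        # simulated nodes are used as neighbours. Since the path is identical for every
        # realization, the kriging weights and variances are computed only once.

        def cov(h, sill=1.0, corr_range=10.0):
            return sill * np.exp(-np.abs(h) / corr_range)

        def precompute_weights(x, path):
            weights, variances = [], []
            for k, idx in enumerate(path):
                cond = path[:k]                                  # nodes already simulated
                if len(cond) == 0:
                    weights.append(None)
                    variances.append(cov(0.0))
                    continue
                C = cov(x[cond][:, None] - x[cond][None, :])     # data-to-data covariance
                c0 = cov(x[cond] - x[idx])                       # data-to-target covariance
                w = np.linalg.solve(C, c0)
                weights.append(w)
                variances.append(cov(0.0) - w @ c0)
            return weights, variances

        def realization(x, path, weights, variances, rng_state):
            z = np.zeros_like(x)
            for k, idx in enumerate(path):
                mean = 0.0 if weights[k] is None else weights[k] @ z[path[:k]]
                z[idx] = mean + np.sqrt(max(variances[k], 0.0)) * rng_state.standard_normal()
            return z

        if __name__ == "__main__":
            rng_state = np.random.default_rng(0)
            x = np.arange(100, dtype=float)
            path = rng_state.permutation(len(x))                 # one path shared by all realizations
            weights, variances = precompute_weights(x, path)
            fields = [realization(x, path, weights, variances, rng_state) for _ in range(20)]
            print(np.std(fields, axis=0).mean())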

  12. Quench simulations for superconducting elements in the LHC accelerator

    CERN Document Server

    Sonnemann, F

    2000-01-01

    The design of the protection system for the superconducting elements in an accelerator such as the Large Hadron Collider (LHC), now under construction at CERN, requires a detailed understanding of the thermo-hydraulic and electrodynamic processes during a quench. A numerical program (SPQR - Simulation Program for Quench Research) has been developed to evaluate temperature and voltage distributions during a quench as a function of space and time. The quench process is simulated by approximating the heat balance equation with the finite difference method in the presence of variable cooling and powering conditions. The simulation predicts quench propagation along a superconducting cable, forced quenching with heaters, the impact of eddy currents induced by a magnetic field change, and heat transfer through an insulation layer into helium, an adjacent conductor or other material. The simulation studies allowed a better understanding of experimental quench data and were used for determining the adequ...
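
    A strongly simplified sketch of the finite-difference heat-balance approach described above (one-dimensional, constant material properties, a crude normal-zone Joule heating term and a linear cooling term; all parameter values are placeholders, not LHC or SPQR data):

        import numpy as np

        # 1-D finite-difference heat balance along a conductor: heat conduction plus
        # Joule heating wherever the local temperature exceeds the critical temperature,
        # minus a simple cooling term. Illustrates how a normal zone (quench) spreads.

        nx, dx, dt, steps = 200, 0.01, 1e-4, 20000   # grid spacing [m], time step [s]
        k, rho_c = 100.0, 2.0e6                      # conductivity [W/m/K], vol. heat capacity [J/m^3/K]
        q_joule, h_cool = 5.0e7, 1.0e4               # heating [W/m^3], cooling coefficient [W/m^3/K]
        T_bath, T_c = 1.9, 9.2                       # bath and critical temperature [K]

        T = np.full(nx, T_bath)
        T[nx // 2 - 2 : nx // 2 + 2] = 20.0          # initial hot spot triggering the quench

        for _ in range(steps):
            lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
            lap[0] = lap[-1] = 0.0                   # crude insulated-end boundary condition
            heating = np.where(T > T_c, q_joule, 0.0)
            cooling = h_cool * (T - T_bath)
            T = T + dt * (k * lap + heating - cooling) / rho_c

        normal_zone = np.sum(T > T_c) * dx
        print(f"normal-zone length after {steps * dt:.2f} s: {normal_zone:.3f} m")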

  13. Research and simulation of intense pulsed beam transfer in electrostatic accelerate tube

    International Nuclear Information System (INIS)

    Li Chaolong; Shi Haiquan; Lu Jianqin

    2012-01-01

    To study intense pulsed beam transfer in an electrostatic accelerating tube, the matrix method was applied to derive the transport matrices of the tube for both non-intense and intense pulsed beams, and a computer code was written for intense pulsed beam transport in an electrostatic accelerating tube. In this code, optimization techniques are used to attain the given optical conditions, and iterative procedures are adopted to compute the intense pulsed beam and obtain self-consistent solutions. Calculations were carried out using ACCT, TRACE-3D and TRANSPORT for different beam currents. The simulation results show that improving the accelerating voltage ratio can enhance the focusing power of the electrostatic accelerating tube, reduce beam loss and increase the transfer efficiency. (authors)
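
    For context, a minimal sketch of the transfer-matrix bookkeeping on which such codes are built (a field-free drift and a thin lens in one transverse plane; this is a generic textbook illustration, not the accelerating-tube matrices of the ACCT code):

        import numpy as np

        # Generic transfer-matrix bookkeeping in one transverse plane (x, x').
        # A drift of length L and a thin lens of focal length f are chained by
        # matrix multiplication; beamline element values are arbitrary examples.

        def drift(L):
            return np.array([[1.0, L],
                             [0.0, 1.0]])

        def thin_lens(f):
            return np.array([[1.0, 0.0],
                             [-1.0 / f, 1.0]])

        # A short beamline: drift - lens - drift (matrices applied right to left).
        M = drift(0.5) @ thin_lens(0.8) @ drift(0.5)

        x0 = np.array([2e-3, 1e-3])      # initial offset [m] and angle [rad]
        x1 = M @ x0
        print("final (x, x'):", x1)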

  14. Computer Simulation of Complex Power System Faults under various Operating Conditions

    International Nuclear Information System (INIS)

    Khandelwal, Tanuj; Bowman, Mark

    2015-01-01

    A power system is normally treated as a balanced symmetrical three-phase network. When a fault occurs, the symmetry is normally upset, resulting in unbalanced currents and voltages appearing in the network. For the correct application of protection equipment, it is essential to know the fault current distribution throughout the system and the voltages in different parts of the system due to the fault. There may be situations where protection engineers have to analyze faults that are more complex than simple shunt faults. One type of complex fault is an open phase condition that can result from a fallen conductor or failure of a breaker pole. In the former case, the condition is often accompanied by a fault detectable with normal relaying. In the latter case, the condition may be undetected by standard line relaying. The effect on a generator is dependent on the location of the open phase and the load level. If an open phase occurs between the generator terminals and the high-voltage side of the GSU in the switchyard, and the generator is at full load, damaging negative sequence current can be generated. However, for the same operating condition, an open conductor at the incoming transmission lines located in the switchyard can result in minimal negative sequence current. In 2012, a nuclear power generating station (NPGS) suffered a series (open-phase) fault due to a mechanical insulator failure in the 345 kV switchyard. This resulted in both reactor units tripping offline in two separate incidents. The series fault on one of the phases resulted in a voltage imbalance that was not detected by the degraded voltage relays. These under-voltage relays did not initiate a start signal to the emergency diesel generators (EDG) because they sensed adequate voltage on the remaining phases, exposing a design vulnerability. This paper is intended to help protection engineers calculate complex circuit faults like the open-phase condition using a computer program. The impact of this type of
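
    To illustrate why an open phase can produce significant negative-sequence current, a small sketch of the standard symmetrical-component decomposition is given below (the current magnitudes are arbitrary example values, not plant data, and the calculation is generic rather than taken from the paper):

        import numpy as np

        # Symmetrical-component decomposition of a three-phase current set with one
        # open phase (phase A carries no current). Shows how the open-phase condition
        # generates negative-sequence current.

        a = np.exp(1j * 2 * np.pi / 3)          # 120-degree rotation operator

        def sequence_components(i_a, i_b, i_c):
            A_inv = np.array([[1, 1,    1],
                              [1, a,    a**2],
                              [1, a**2, a]]) / 3.0
            return A_inv @ np.array([i_a, i_b, i_c])   # (zero, positive, negative)

        if __name__ == "__main__":
            # Open phase A; phases B and C still carry (roughly) load current.
            i_b = 1000.0 * np.exp(-1j * 2 * np.pi / 3)
            i_c = 1000.0 * np.exp(+1j * 2 * np.pi / 3)
            i0, i1, i2 = sequence_components(0.0, i_b, i_c)
            print(f"|I1| = {abs(i1):.0f} A, |I2| = {abs(i2):.0f} A (negative sequence)")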

  15. New "Tau-Leap" Strategy for Accelerated Stochastic Simulation.

    Science.gov (United States)

    Ramkrishna, Doraiswami; Shu, Che-Chi; Tran, Vu

    2014-12-10

    The "Tau-Leap" strategy for stochastic simulations of chemical reaction systems due to Gillespie and co-workers has had considerable impact on various applications. This strategy is reexamined with Chebyshev's inequality for random variables as it provides a rigorous probabilistic basis for a measured τ-leap, thus adding significantly to simulation efficiency. It is also shown that existing strategies for simulation times have no probabilistic assurance that they satisfy the τ-leap criterion, while the use of Chebyshev's inequality leads to a specified degree of certainty with which the τ-leap criterion is satisfied. This reduces the loss of sample paths which do not comply with the τ-leap criterion. The performance of the present algorithm is assessed with respect to one discussed by Cao et al. (J. Chem. Phys. 2006, 124, 044109), a second pertaining to the binomial leap (Tian and Burrage, J. Chem. Phys. 2004, 121, 10356; Chatterjee et al., J. Chem. Phys. 2005, 122, 024112; Peng et al., J. Chem. Phys. 2007, 126, 224109), and a third regarding the midpoint Poisson leap (Peng et al., 2007; Gillespie, J. Chem. Phys. 2001, 115, 1716). The performance assessment is made by estimating the error in the histogram measured against that obtained with the so-called stochastic simulation algorithm. It is shown that the current algorithm displays notably less histogram error than its predecessor for a fixed computation time and, conversely, less computation time for a fixed accuracy. This computational advantage is an asset in repetitive calculations essential for modeling stochastic systems. The importance of stochastic simulations is derived from diverse areas of application in physical and biological sciences, process systems, and economics, etc. Computational improvements such as those reported herein are therefore of considerable significance.
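
    For context, a minimal sketch of a plain Poisson tau-leap step for a toy two-reaction network (the fixed τ and rate constants are arbitrary; this illustrates the baseline tau-leap scheme only and does not implement the Chebyshev-based τ selection proposed in the paper):

        import numpy as np

        # Plain Poisson tau-leaping for a toy network:
        #   R1: A -> B   (propensity c1 * A)
        #   R2: B -> 0   (propensity c2 * B)
        # Each leap fires Poisson-distributed numbers of reactions over a fixed tau.

        stoich = np.array([[-1, +1],     # R1: A -> B
                           [ 0, -1]])    # R2: B -> 0
        c1, c2 = 0.5, 0.3

        def propensities(x):
            A, B = x
            return np.array([c1 * A, c2 * B])

        def tau_leap(x0, t_end, tau=0.05, seed=0):
            rng = np.random.default_rng(seed)
            x, t = np.array(x0, dtype=float), 0.0
            while t < t_end:
                a = propensities(x)
                fires = rng.poisson(a * tau)              # firings of each reaction
                x = np.maximum(x + stoich.T @ fires, 0)   # crude guard against negative counts
                t += tau
            return x

        if __name__ == "__main__":
            print(tau_leap(x0=[1000, 0], t_end=10.0))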

  16. Comparative Laboratory and Numerical Simulations of Shearing Granular Fault Gouge: Micromechanical Processes

    Science.gov (United States)

    Morgan, J. K.; Marone, C. J.; Guo, Y.; Anthony, J. L.; Knuth, M. W.

    2004-12-01

    Laboratory studies of granular shear zones have provided significant insight into fault zone processes and the mechanics of earthquakes. The micromechanisms of granular deformation are more difficult to ascertain, but have been hypothesized based on known variations in boundary conditions, particle properties and geometries, and mechanical behavior. Numerical simulations using particle dynamics methods (PDM) can offer unique views into deforming granular shear zones, revealing the precise details of granular microstructures, particle interactions, and packings, which can be correlated with macroscopic mechanical behavior. Here, we describe a collaborative program of comparative laboratory and numerical experiments of granular shear using idealized materials, i.e., glass beads, glass rods or pasta, and angular sand. Both sets of experiments are carried out under similar initial and boundary conditions in a non-fracturing stress regime. Phenomenologically, the results of the two sets of experiments are very similar. Peak friction values vary as a function of particle dimensionality (1-D vs. 2-D vs. 3-D), particle angularity, particle size and size distributions, boundary roughness, and shear zone thickness. Fluctuations in shear strength during an experiment, i.e., stick-slip events, can be correlated with distinct changes in the nature, geometries, and durability of grain bridges that support the shear zone walls. Inclined grain bridges are observed to form, and to support increasing loads, during gradual increases in assemblage strength. Collapse of an individual grain bridge leads to distinct localization of strain, generating a rapidly propagating shear surface that cuts across multiple grain bridges, accounting for the sudden drop in strength. The distribution of particle sizes within an assemblage, along with boundary roughness and its periodicity, influence the rate of formation and dissipation of grain bridges, thereby controlling friction variations during

  17. Faults simulation on reactor internals of Uljin 1 and 2 nuclear power plant

    International Nuclear Information System (INIS)

    Ryu, J. S.; Park, J. H.; Nam, H. Y.; Woo, J. S.; Kim, T. R.

    1999-01-01

    A dynamic characteristics analysis was performed for a finite element model of the Uljin 1 and 2 NPP reactor internals with artificial faults on the hold-down ring and the thermal shield. To prove the validity of the modelling, the fundamental beam and shell mode frequencies of the core support barrel (CSB) in the normal state are compared with the measurement results, which show good agreement. According to the analysis results, the fundamental natural frequency of the CSB beam decreases by 5%, 18%, 54% and 92% for 10%, 20%, 50% and 80% partial faults of the hold-down ring, respectively. The fundamental shell natural frequency changes by less than 5.3% for 20% partial faults, but decreases by 22% and 72% for 50% and 80% partial faults. For faults of the thermal shield with a normal hold-down ring, the frequency decreases of the higher shell modes are larger than those of the beam modes, and the 5th to 8th natural frequencies decrease by 5%, 9%, 13% and 20% for 25%, 50%, 75% and 100% partial faults, respectively

  18. 3D Simulations for a Micron-Scale, Dielectric-Based Acceleration Experiment

    International Nuclear Information System (INIS)

    Yoder, R. B.; Travish, G.; Xu Jin; Rosenzweig, J. B.

    2009-01-01

    An experimental program to demonstrate a dielectric, slab-symmetric accelerator structure has been underway for the past two years. These resonant devices are driven by a side-coupled 800-nm laser and can be configured to maintain the field profile necessary for synchronous acceleration and focusing of relativistic or nonrelativistic particles. We present 3D simulations of various versions of the structure geometry, including a metal-walled structure relevant to ongoing cold tests on resonant properties, and an all-dielectric structure to be constructed for a proof-of-principle acceleration experiment.

  19. Microparticle accelerator of unique design. [for micrometeoroid impact and cratering simulation

    Science.gov (United States)

    Vedder, J. F.

    1978-01-01

    A microparticle accelerator has been devised for micrometeoroid impact and cratering simulation; the device produces high-velocity (0.5-15 km/sec), micrometer-sized projectiles of any cohesive material. In the source, an electrodynamic levitator, single particles are charged by ion bombardment in high vacuum. The vertical accelerator has four drift tubes, each initially at a high negative voltage. After injection of the projectile, each tube is grounded in turn at a time determined by the voltage and charge/mass ratio to give four acceleration stages with a total voltage equivalent to about 1.7 MV.
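
    As a rough back-of-the-envelope check of the quoted numbers (a sketch with an assumed, hypothetical charge-to-mass ratio; only the 1.7 MV figure comes from the abstract), the final velocity of an electrostatically accelerated projectile follows from energy conservation, q·V = ½·m·v², i.e. v = sqrt(2·(q/m)·V):

        import math

        # Final velocity of a charged projectile after electrostatic acceleration:
        # q * V = 1/2 * m * v^2  =>  v = sqrt(2 * (q/m) * V).
        # The charge-to-mass ratio below is an arbitrary illustrative value, not a
        # measured one from the instrument described above.

        def final_velocity(q_over_m, total_voltage):
            return math.sqrt(2.0 * q_over_m * total_voltage)

        if __name__ == "__main__":
            q_over_m = 10.0          # C/kg, hypothetical for a charged micron-sized grain
            V = 1.7e6                # V, total voltage equivalent quoted in the abstract
            print(f"v = {final_velocity(q_over_m, V) / 1000.0:.1f} km/s")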

  20. BEAM DYNAMICS SIMULATIONS OF SARAF ACCELERATOR INCLUDING ERROR PROPAGATION AND IMPLICATIONS FOR THE EURISOL DRIVER

    CERN Document Server

    J. Rodnizki, D. Berkovits, K. Lavie, I. Mardor, A. Shor and Y. Yanay (Soreq NRC, Yavne), K. Dunkel, C. Piel (ACCEL, Bergisch Gladbach), A. Facco (INFN/LNL, Legnaro, Padova), V. Zviagintsev (TRIUMF, Vancouver)

    Beam dynamics simulations of the SARAF (Soreq Applied Research Accelerator Facility) superconducting RF linear accelerator have been performed in order to establish the accelerator design. The multi-particle simulation includes 3D realistic electromagnetic field distributions, space charge forces and fabrication, misalignment and operation errors. A 4 mA proton or deuteron beam is accelerated up to 40 MeV with moderate rms emittance growth and a high real-estate gradient of 2 MeV/m. An envelope of 40,000 macro-particles is kept under a radius of 1.1 cm, well below the beam pipe bore radius. The accelerator design of SARAF is proposed as an injector for the EURISOL driver accelerator. The Accel 176 MHz β0=0.09 and β0=0.15 HWR lattice was extended to 90 MeV based on the LNL 352 MHz β0=0.31 HWR. The matching between both lattices ensures a smooth transition and the possibility of extending the accelerator to the required EURISOL ion energy.

  1. Community Project for Accelerator Science and Simulation (ComPASS)

    Energy Technology Data Exchange (ETDEWEB)

    Simmons, Christopher [Univ. of Texas, Austin, TX (United States); Carey, Varis [Univ. of Texas, Austin, TX (United States)

    2016-10-12

    After concluding our initial exercise of coupling Vorpal and our parallel statistical library QUESO (solving a simplified statistical inverse problem with laser intensity as the unknown parameter), we shifted the application focus to DLA. Our efforts focused on developing a Gaussian process (GP) emulator within QUESO for efficient optimization of power couplers within woodpiles. The smaller simulation size (compared with LPA) allows for sufficient “training runs” to develop a reasonable GP statistical emulator for a parameter space of moderate dimension.

  2. Biocellion: accelerating computer simulation of multicellular biological system models.

    Science.gov (United States)

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. © The Author 2014. Published by Oxford University Press. All rights reserved.

  3. Application of Java Technology to Simulation of Transient Effects in Accelerator Magnets

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Superconducting magnets are one of the key building blocks of modern high-energy particle accelerators. Operating at extremely low temperatures (1.9 K), superconducting magnets produce the high magnetic fields needed to control the trajectory of beams travelling at nearly the speed of light. With high performance comes considerable complexity, represented by several coupled physical domains characterized by multi-rate and multi-scale behaviour. The full exploitation of the LHC, as well as the design of its upgrades and of future accelerators, calls for more accurate simulations. With such a long-term vision in mind, the STEAM (Simulation of Transient Effects in Accelerator Magnets) project has been established and is based on two pillars: (i) models developed with optimised solvers for particular sub-problems, (ii) coupling interfaces allowing information to be exchanged between the models. In order to tackle these challenges and develop a maintainable and extendable simulation framework, a team of developers implemented a ...

  4. XORBIT---An x-windows accelerator simulation

    International Nuclear Information System (INIS)

    Evans, K. Jr.

    1993-01-01

    Xorbit is an accelerator physics code that tracks particle orbits. Its two distinguishing features are a rich graphical interface and the ability to connect to and be controlled by external programs such as Mathematica or control-system software. The design goal is to have a code that can be controlled in much the same way as a real machine is controlled. This allows the testing of control algorithms before the real machine is commissioned or without disturbing the real machine at any time. The graphical interface provides a means of changing magnet parameters easily with immediate visual feedback on the resulting orbit changes. There are a number of features including interactive plotting of orbits and Twiss parameters; the ability to display error positions, monitor readings, or the full orbit; the ability to display true or difference orbits; as well as the ability to find closed orbits, track from given initial conditions, or apply a variety of correction methods. There is a Design Mode in which element strengths and positions can be changed with the mouse with continuous display of the results. All of these operations are fast and intuitive

  5. Computer codes and methods for simulating accelerator driven systems

    International Nuclear Information System (INIS)

    Sartori, E.; Byung Chan Na

    2003-01-01

    A large set of computer codes and associated data libraries have been developed by nuclear research and industry over the past half century. A large number of them are in the public domain and can be obtained under agreed conditions from different Information Centres. The areas covered comprise: basic nuclear data and models, reactor spectra and cell calculations, static and dynamic reactor analysis, criticality, radiation shielding, dosimetry and material damage, fuel behaviour, safety and hazard analysis, heat conduction and fluid flow in reactor systems, spent fuel and waste management (handling, transportation, and storage), economics of fuel cycles, impact on the environment of nuclear activities, etc. These codes and models have been developed mostly for critical systems used for research or power generation and other technological applications. Many of them have not been designed for accelerator driven systems (ADS), but with competent use, they can be used for studying such systems or can form the basis for adapting existing methods to the specific needs of ADS's. The present paper describes the types of methods, codes and associated data available and their role in the applications. It provides Web addresses to facilitate searches for such tools. Some indications are given on the effects of inappropriate or 'blind' use of existing tools for ADS. Reference is made to available experimental data that can be used for validating the methods used. Finally, some international activities linked to the different computational aspects are described briefly. (author)

  6. 3D ground‐motion simulations of Mw 7 earthquakes on the Salt Lake City segment of the Wasatch fault zone: Variability of long‐period (T≥1  s) ground motions and sensitivity to kinematic rupture parameters

    Science.gov (United States)

    Moschetti, Morgan P.; Hartzell, Stephen; Ramirez-Guzman, Leonardo; Frankel, Arthur; Angster, Stephen J.; Stephenson, William J.

    2017-01-01

    We examine the variability of long-period (T ≥ 1 s) earthquake ground motions from 3D simulations of Mw 7 earthquakes on the Salt Lake City segment of the Wasatch fault zone, Utah, from a set of 96 rupture models with varying slip distributions, rupture speeds, slip velocities, and hypocenter locations. Earthquake ruptures were prescribed on a 3D fault representation that satisfies geologic constraints and maintained distinct strands for the Warm Springs and for the East Bench and Cottonwood faults. Response spectral accelerations (SA; 1.5–10 s; 5% damping) were measured, and average distance scaling was well fit by a simple functional form that depends on the near-source intensity level SA0(T) and a corner distance Rc: SA(R,T) = SA0(T) [1 + (R/Rc)]^(-1). Period-dependent hanging-wall effects manifested and increased the ground motions by factors of about 2–3, though the effects appeared partially attributable to differences in shallow site response for sites on the hanging wall and footwall of the fault. Comparisons with modern ground-motion prediction equations (GMPEs) found that the simulated ground motions were generally consistent, except within deep sedimentary basins, where simulated ground motions were greatly underpredicted. Ground-motion variability exhibited strong lateral variations and, at some sites, exceeded the ground-motion variability indicated by GMPEs. The effects on the ground motions of changing the values of the five kinematic rupture parameters can largely be explained by three predominant factors: distance to high-slip subevents, dynamic stress drop, and changes in the contributions from directivity. These results emphasize the need for further characterization of the underlying distributions and covariances of the kinematic rupture parameters used in 3D ground-motion simulations employed in probabilistic seismic-hazard analyses.
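
    The fitted attenuation form quoted above can be evaluated directly; the small helper below uses arbitrary placeholder values for SA0 and Rc (they are not the values reported in the study):

        # Evaluate the simple distance-scaling form fitted to the simulated spectral
        # accelerations: SA(R, T) = SA0(T) / (1 + R / Rc).
        def sa_scaling(r_km, sa0_g, rc_km):
            return sa0_g / (1.0 + r_km / rc_km)

        # Example with arbitrary placeholder values for SA0 and Rc.
        for r in (1.0, 5.0, 10.0, 30.0):
            print(r, sa_scaling(r, sa0_g=0.5, rc_km=8.0))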

  7. Modern approaches to accelerator simulation and on-line control

    International Nuclear Information System (INIS)

    Lee, M.; Clearwater, S.; Theil, E.; Paxson, V.

    1987-02-01

    COMFORT-PLUS consists of three parts: (1) COMFORT (Control Of Machine Function, ORbits, and Trajectories), which computes the machine lattice functions and transport matrices along a beamline; (2) PLUS (Prediction from Lattice Using Simulation) which finds or compensates for errors in the beam parameters or machine elements; and (3) a highly graphical interface to PLUS. The COMFORT-PLUS package has been developed on a SUN-3 workstation. The structure and use of COMFORT-PLUS are described, and an example of the use of the package is presented

  8. Beam trajectory simulation program at the National Institute of Nuclear Research Tandem Accelerator facility

    International Nuclear Information System (INIS)

    Murillo C, G.

    1996-01-01

    The main objective of this thesis is to show, in a clear and simple way for a general audience, how the Tandem Accelerator located at the ININ facilities works. For this presentation, a computer program was developed. The software, written in C in a structured form, simulates the ion production and trajectory schematically and in a way that is easy to comprehend. In line with the goals of this work, the simulation also shows details of some of the machine components, such as the source, the accelerator cavity, and the bombarding chamber. Electric and magnetic field calculations are included for the 90-degree bending magnet and the quadrupoles. (Author)

  9. Electric field simulation and measurement of a pulse line ion accelerator

    International Nuclear Information System (INIS)

    Shen Xiaokang; Zhang Zimin; Cao Shuchun; Zhao Hongwei; Zhao Quantang; Liu Ming; Jing Yi; Wang Bo; Shen Xiaoli

    2012-01-01

    An oil-dielectric helical pulse line to demonstrate the principles of a Pulse Line Ion Accelerator (PLIA) has been designed and fabricated. The simulation of the axial electric field of the accelerator with the CST code has been completed, and the simulation results show complete agreement with the theoretical calculations. To fully understand the real value of the electric field excited from the helical line in the PLIA, an integrated electro-optic electric field measurement system was adopted. The measurement result shows that the real magnitude of the axial electric field is smaller than that calculated, probably because the actual pitch of the resistor column is much smaller than that of the helix. (authors)

  10. A unified approach to building accelerator simulation software for the SSC

    International Nuclear Information System (INIS)

    Paxson, V.; Aragon, C.; Peggs, S.; Saltmarsh, C.; Schachinger, L.

    1989-03-01

    To adequately simulate the physics and control of a complex accelerator requires a substantial number of programs which must present a uniform interface to both the user and the internal representation of the accelerator. If these programs are to be truly modular, so that their use can be orchestrated as needed, the specification of both their graphical and data interfaces must be carefully designed. We describe the state of such SSC simulation software, with emphasis on addressing these uniform interface needs by using a standardized data set format and object-oriented approaches to graphics and modeling. 12 refs

  11. Off-fault plasticity in three-dimensional dynamic rupture simulations using a modal Discontinuous Galerkin method on unstructured meshes: Implementation, verification, and application

    Science.gov (United States)

    Wollherr, Stephanie; Gabriel, Alice-Agnes; Uphoff, Carsten

    2018-05-01

    The dynamics and potential size of earthquakes depend crucially on rupture transfers between adjacent fault segments. To accurately describe earthquake source dynamics, numerical models can account for realistic fault geometries and rheologies such as nonlinear inelastic processes off the slip interface. We present the implementation, verification, and application of off-fault Drucker-Prager plasticity in the open source software SeisSol (www.seissol.org). SeisSol is based on an arbitrary high-order derivative modal Discontinuous Galerkin (ADER-DG) method using unstructured, tetrahedral meshes specifically suited for complex geometries. Two implementation approaches are detailed, modelling plastic failure either employing sub-elemental quadrature points or switching to nodal basis coefficients. At fine fault discretizations the nodal basis approach is up to 6 times more efficient in terms of computational costs while yielding comparable accuracy. Both methods are verified in community benchmark problems and by three-dimensional numerical h- and p-refinement studies with heterogeneous initial stresses. We observe no spectral convergence for on-fault quantities with respect to a given reference solution, but rather discuss a limitation to low-order convergence for heterogeneous 3D dynamic rupture problems. For simulations including plasticity, a high fault resolution may be less crucial than commonly assumed, due to the regularization of peak slip rate and an increase of the minimum cohesive zone width. In large-scale dynamic rupture simulations based on the 1992 Landers earthquake, we observe high rupture complexity including reverse slip, direct branching, and dynamic triggering. The spatio-temporal distribution of rupture transfers is altered distinctly by plastic energy absorption, correlated with locations of geometrical fault complexity. Computational cost increases by 7% when accounting for off-fault plasticity in the demonstration application. Our results
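
    A minimal sketch of a Drucker-Prager yield check on a single stress tensor (using one common fit of the cone parameters to a Mohr-Coulomb cohesion and friction angle; the stress values are arbitrary, and this is illustrative only, not SeisSol's implementation):

        import numpy as np

        # Drucker-Prager yield check for a single 3x3 stress tensor (compression negative).
        # Yield function: F = sqrt(J2) + alpha * I1 - k, with alpha and k derived here
        # from cohesion c and friction angle phi via one common Mohr-Coulomb fit.

        def drucker_prager_yield(sigma, cohesion, phi_deg):
            phi = np.radians(phi_deg)
            alpha = 2.0 * np.sin(phi) / (np.sqrt(3.0) * (3.0 - np.sin(phi)))
            k = 6.0 * cohesion * np.cos(phi) / (np.sqrt(3.0) * (3.0 - np.sin(phi)))
            i1 = np.trace(sigma)
            s = sigma - i1 / 3.0 * np.eye(3)          # deviatoric stress
            j2 = 0.5 * np.sum(s * s)
            return np.sqrt(j2) + alpha * i1 - k       # F > 0 indicates plastic failure

        if __name__ == "__main__":
            sigma = np.diag([-60e6, -40e6, -30e6])    # Pa, compressive principal stresses
            F = drucker_prager_yield(sigma, cohesion=5e6, phi_deg=30.0)
            print("yielding" if F > 0 else "elastic", F)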

  12. A program PULSYN01 for wide-band simulation of source radiation from a finite earthquake source/fault

    International Nuclear Information System (INIS)

    Gusev, A.A.

    2001-12-01

    The purpose of the program PULSYN01 is to provide a realistic wideband source-side input for the calculation of earthquake ground motion. The source is represented as a grid of point subsources, and their seismic moment rate time functions are generated considering each of them as realizations (sample functions) of a non-stationary random process. The model is intended for use at receiver-to-fault distances from the far field to as small as 10-20% of the fault width. Combined with an adequate Green's function synthesizer, PULSYN01 can be used for assessment of possible ground motion and seismic hazard in many ways, including scenario event simulation, parametric studies, and eventually stochastic hazard calculations

  13. The Study of Non-Linear Acceleration of Particles during Substorms Using Multi-Scale Simulations

    International Nuclear Information System (INIS)

    Ashour-Abdalla, Maha

    2011-01-01

    To understand particle acceleration during magnetospheric substorms we must consider the problem on multiple scales, ranging from large-scale changes in the entire magnetosphere to the microphysics of wave-particle interactions. In this paper we present two examples that demonstrate the complexity of substorm particle acceleration and its multi-scale nature. The first substorm provided us with an excellent example of ion acceleration. On March 1, 2008 four THEMIS spacecraft were in a line extending from 8 R_E to 23 R_E in the magnetotail during a very large substorm during which ions were accelerated to >500 keV. We used a combination of global magnetohydrodynamic and large-scale kinetic simulations to model the ion acceleration and found that the ions gained energy through non-adiabatic trajectories across the substorm electric field in a narrow region extending across the magnetotail between x = -10 R_E and x = -15 R_E. In this strip, called the 'wall region', the ions move rapidly in azimuth and gain hundreds of keV. In the second example we studied the acceleration of electrons associated with a pair of dipolarization fronts during a substorm on February 15, 2008. During this substorm three THEMIS spacecraft were grouped in the near-Earth magnetotail (x ∼ -10 R_E) and observed electron acceleration of >100 keV accompanied by intense plasma waves. We used the MHD simulations and analytic theory to show that adiabatic motion (betatron and Fermi acceleration) was insufficient to account for the electron acceleration and that kinetic processes associated with the plasma waves were important.

  14. Earthquake cycle modeling of multi-segmented faults: dynamic rupture and ground motion simulation of the 1992 Mw 7.3 Landers earthquake.

    Science.gov (United States)

    Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.

    2017-12-01

    We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN, Luo et al., 2016) is used to nucleate events and the fully dynamic solver (SPECFEM3D, Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We follow the 2-D spatially correlated Dc distributions of Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity. The fault maturity is related to the variability of Dc on a microscopic scale: large variations of Dc represent immature faults and lower variations of Dc represent mature faults. Moreover, we impose a taper of (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show that the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation in the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip and combined area of asperities versus moment magnitude. Finally, the

  15. SU-E-T-512: Electromagnetic Simulations of the Dielectric Wall Accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Uselmann, A; Mackie, T [University of Wisconsin and Morgridge Institute for Research, Madison, WI (United States)

    2014-06-01

    Purpose: To characterize and parametrically study the key components of a dielectric wall accelerator through electromagnetic modeling and particle tracking. Methods: Electromagnetic and particle tracking simulations were performed using a commercial code (CST Microwave Studio, CST Inc.) utilizing the finite integration technique. A dielectric wall accelerator consists of a series of stacked transmission lines sequentially fired in synchrony with an ion pulse. Numerous properties of the stacked transmission lines, including geometric, material, and electronic properties, were analyzed and varied in order to assess their impact on the transverse and axial electric fields. Additionally, stacks of transmission lines were simulated in order to quantify the parasitic effect observed in closely packed lines. Particle tracking simulations using the particle-in-cell method were performed on the various stacks to determine the impact of the above properties on the resultant phase space of the ions. Results: Examination of the simulation results shows that novel geometries can shape the accelerating pulse in order to reduce the energy spread and increase the average energy of accelerated ions. Parasitic effects were quantified for various geometries and found to vary with distance from the end of the transmission line and along the beam axis. An optimal arrival time of an ion pulse relative to the triggering of the transmission lines for a given geometry was determined through parametric study. Benchmark simulations of single transmission lines agree well with published experimental results. Conclusion: This work characterized the behavior of the transmission lines used in a dielectric wall accelerator and used this information to improve them in novel ways. Utilizing novel geometries, we were able to improve the accelerating gradient and phase space of the accelerated particle bunch. Through simulation, we were able to discover and optimize design issues with the device at

  16. CRISP. Simulation tool for fault detection and diagnostics in high-DG power networks

    International Nuclear Information System (INIS)

    Fontela, M.; Andrieu, C.; Raison, B.

    2004-08-01

    This document describes a tool proposed for fault detection and diagnostics. The main principles of the fault localization functions are described and detailed for a given MV network that will be used for the ICT experiment in Grenoble (experiment 3B). The aim of the tool is to create a technical, simple and realistic context for testing ICT dedicated to an electrical application. The tool gives the expected input and output contents of the various distributed ICT components when a fault occurs in a given MV network, so the requirements for the ICT components are given in terms of the expected data collected, analysed and transmitted. Several examples are given in order to illustrate the inputs/outputs in case of different faults. The tool includes a topology description, a key aspect to be developed in the future for managing the distribution network. Updating the topology in real time will become necessary for fault diagnosis and protection, but also for the various possible added applications (for instance, local market balance and local electrical power quality). The tool provides a context and a simplified view of the ICT components' behaviour, assuming an ideal response and transmission from them. The real characteristics and possible limitations of the ICT (information latency, congestion, security) will be established during the experiments from the same context described in the HTFD tool

  17. Beam equipment electromagnetic interaction in accelerators: simulation and experimental benchmarking

    CERN Document Server

    Passarelli, Andrea; Vaccaro, Vittorio Giorgio; Massa, Rita; Masullo, Maria Rosaria

    One of the most significant technological problems in achieving the nominal performance of the Large Hadron Collider (LHC) concerns the collimation of the particle beams. The use of crystal collimators, exploiting the channeling effect on the extracted beam, has been experimentally demonstrated. The first part of this thesis concerns the optimization of the UA9 goniometer at CERN; this device, used for beam collimation, will replace part of the vacuum chamber. The optimization process, however, requires the calculation of the coupling impedance between the circulating beam and this structure in order to define the threshold of admissible intensity below which instability processes are not triggered. Simulations have been performed with electromagnetic codes to evaluate the coupling impedance and to assess the beam-structure interaction. The results clearly showed that the most relevant resonance frequencies are due solely to the cavity open to the compartment of the motors and position sensors, considering the crystal in o...

  18. Simple scaling for faster tracking simulation in accelerator multiparticle dynamics

    International Nuclear Information System (INIS)

    MacLachlan, J.A.

    2001-01-01

    Macroparticle tracking is a direct and attractive approach to following the evolution of a phase space distribution. When the particles interact through short range wake fields or when inter-particle force is included, calculations of this kind require a large number of macroparticles. It is possible to reduce both the number of macroparticles required and the number of tracking steps per unit simulated time by employing a simple scaling which can be inferred directly from the single-particle equations of motion. In many cases of practical importance the speed of calculation improves with the fourth power of the scaling constant. Scaling has been implemented in an existing longitudinal tracking code; early experience supports the concept and promises major time savings. Limitations on the scaling are discussed
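
    The record above describes reducing the number of macroparticles and tracking steps through a scaling of the equations of motion. For orientation only, the fragment below is a minimal, generic longitudinal macroparticle tracking loop of the kind such a scaling accelerates; it is a hedged sketch with placeholder machine parameters and does not implement the specific scaling of the paper.

      import numpy as np

      # Placeholder machine parameters (illustrative only, not from the cited work)
      h, V_rf, E_s = 588, 1.0e6, 8.0e9   # harmonic number, RF voltage (V), synchronous energy (eV)
      eta, phi_s = -0.002, 0.0           # slip factor (negative: below transition), synchronous phase

      n_macro = 10_000
      rng = np.random.default_rng(0)
      phi = rng.normal(phi_s, 0.2, n_macro)   # RF phase of each macroparticle (rad)
      dE = rng.normal(0.0, 1.0e6, n_macro)    # energy deviation (eV)

      for turn in range(2000):
          # energy kick from the RF cavity (per turn, singly charged particle)
          dE = dE + V_rf * (np.sin(phi) - np.sin(phi_s))
          # phase slippage driven by the energy deviation (relativistic beam, beta ~ 1 assumed)
          phi = phi + 2.0 * np.pi * h * eta * dE / E_s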

  19. GPU-accelerated simulations of isolated black holes

    Science.gov (United States)

    Lewis, Adam G. M.; Pfeiffer, Harald P.

    2018-05-01

    We present a port of the numerical relativity code SpEC which is capable of running on NVIDIA GPUs. Since this code must be maintained in parallel with SpEC itself, a primary design consideration is to perform as few explicit code changes as possible. We therefore rely on a hierarchy of automated porting strategies. At the highest level we use TLoops, a C++ library of our design, to automatically emit CUDA code equivalent to tensorial expressions written into C++ source using a syntax similar to analytic calculation. Next, we trace out and cache explicit matrix representations of the numerous linear transformations in the SpEC code, which allows these to be performed on the GPU using pre-existing matrix-multiplication libraries. We port the few remaining important modules by hand. In this paper we detail the specifics of our port, and present benchmarks of it simulating isolated black hole spacetimes on several generations of NVIDIA GPU.

  20. The numerical simulation study of the dynamic evolutionary processes in an earthquake cycle on the Longmen Shan Fault

    Science.gov (United States)

    Tao, Wei; Shen, Zheng-Kang; Zhang, Yong

    2016-04-01

    concentration areas in the model: one is located in the mid and upper crust on the hanging wall, where the strain energy could be released by permanent deformation such as folding, and the other lies in the deep part of the fault, where the strain energy could be released by earthquakes. (5) The whole earthquake dynamic process is clearly reflected by the evolution of the strain energy increments over the stages of the earthquake cycle. In the interseismic period, the strain energy accumulates relatively slowly; prior to the earthquake, the fault is locked, the strain energy accumulates fast, and some of it is released in the upper crust on the hanging wall of the fault. In the coseismic stage, the strain energy is released rapidly along the fault. In the postseismic stage, the slow strain-accumulation process recovers, within around one hundred years, to the rate of the interseismic period. The simulation study in this thesis helps in better understanding the earthquake dynamic process.

  1. Estimation of reliability on digital plant protection system in nuclear power plants using fault simulation with self-checking

    International Nuclear Information System (INIS)

    Lee, Jun Seok; Kim, Suk Joon; Seong, Poong Hyun

    2004-01-01

    Safety-critical digital systems in nuclear power plants require high design reliability. Reliable software design and accurate prediction methods for the system reliability are therefore important problems. In reliability analysis, the error detection coverage of the system is one of the crucial factors; however, it is difficult to evaluate the error detection coverage of digital instrumentation and control systems in nuclear power plants due to the complexity of the system. To evaluate the error detection coverage with high efficiency and low cost, simulation-based fault injection with self-checking is needed for digital instrumentation and control systems in nuclear power plants. The target system is the local coincidence logic in the digital plant protection system, and a simplified software model of this target system is used in this work. A C++-based hardware description of a microcomputer simulator system is used to evaluate the error detection coverage of the system. From the simulation results, it is possible to estimate the error detection coverage of the digital plant protection system in nuclear power plants using the simulation-based fault injection method with self-checking. (author)
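
    A minimal sketch of the kind of simulation-based fault-injection estimate described above (hypothetical Python, not the C++ microcomputer model of the paper): faults are injected at random, a placeholder self-checking routine reports whether each fault is detected, and the error detection coverage is taken as the detected fraction.

      import random

      def inject_fault(system_state):
          """Flip one randomly chosen bit of the modeled system state (placeholder model)."""
          bit = random.randrange(len(system_state))
          system_state[bit] ^= 1
          return bit

      def self_check_detects(system_state, bit):
          """Placeholder self-checking routine; a real model would run the coincidence-logic
          software on the corrupted state and compare its outputs against the expected ones."""
          return random.random() < 0.95   # assumed per-fault detection probability

      n_trials, detected = 100_000, 0
      for _ in range(n_trials):
          state = [0] * 64                       # simplified state of the target logic
          bit = inject_fault(state)
          if self_check_detects(state, bit):
              detected += 1

      coverage = detected / n_trials             # estimated error detection coverage
      print(f"estimated coverage: {coverage:.4f}")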

  2. Poisson simulation for high voltage terminal of test stand for 1MV electrostatic accelerator

    International Nuclear Information System (INIS)

    Park, Sae-Hoon; Kim, Jeong-Tae; Kwon, Hyeok-Jung; Cho, Yong-Sub; Kim, Yu-Seok

    2014-01-01

    KOMAC provides ion beams to users; the beam energy range needs to be extended to the MeV range, and a 1 MV electrostatic accelerator is being developed. The specifications of the electrostatic accelerator are a 1 MV acceleration voltage, a 10 mA peak current, and variable gas ion species. A test stand is being developed before the 1 MV electrostatic accelerator is set up. The test stand voltage is 300 kV and the operating time is 8 hours. The test stand consists of a 300 kV high-voltage terminal, a DC-AC-DC inverter, a power supply inside the terminal, a 200 MHz RF power source, a 5 kV extraction power supply, a 300 kV accelerating tube and a vacuum system. The beam measurement system and beam dump will be installed next to the accelerating tube. Poisson code simulation results for the high-voltage terminal are presented in this paper. The Poisson code has been used to calculate the electric field of the high-voltage terminal, and the simulations gave reasonable results. The modeled structure can be applied to the high-voltage terminal of the test stand

  3. Poisson simulation for high voltage terminal of test stand for 1MV electrostatic accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sae-Hoon; Kim, Jeong-Tae; Kwon, Hyeok-Jung; Cho, Yong-Sub [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Yu-Seok [Dongguk Univ.., Gyeongju (Korea, Republic of)

    2014-10-15

    KOMAC provides ion beams to users; the beam energy range needs to be extended to the MeV range, and a 1 MV electrostatic accelerator is being developed. The specifications of the electrostatic accelerator are a 1 MV acceleration voltage, a 10 mA peak current, and variable gas ion species. A test stand is being developed before the 1 MV electrostatic accelerator is set up. The test stand voltage is 300 kV and the operating time is 8 hours. The test stand consists of a 300 kV high-voltage terminal, a DC-AC-DC inverter, a power supply inside the terminal, a 200 MHz RF power source, a 5 kV extraction power supply, a 300 kV accelerating tube and a vacuum system. The beam measurement system and beam dump will be installed next to the accelerating tube. Poisson code simulation results for the high-voltage terminal are presented in this paper. The Poisson code has been used to calculate the electric field of the high-voltage terminal, and the simulations gave reasonable results. The modeled structure can be applied to the high-voltage terminal of the test stand.
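
    The records above do not reproduce the field map itself; as a rough illustration of the type of electrostatic calculation a Poisson/Laplace solver performs for a high-voltage terminal, the sketch below (hypothetical geometry and voltages, not the KOMAC model) relaxes the potential on a 2D grid by Jacobi iteration and takes the field as the negative gradient.

      import numpy as np

      # Simplified 2D grid: terminal electrode held at 300 kV, outer boundary grounded.
      nx, ny, dx = 200, 200, 0.01            # grid size and spacing (m), illustrative only
      phi = np.zeros((nx, ny))
      terminal = np.zeros((nx, ny), dtype=bool)
      terminal[80:120, 80:120] = True        # placeholder terminal cross-section
      phi[terminal] = 3.0e5                  # 300 kV on the terminal

      for _ in range(5000):                  # Jacobi relaxation of Laplace's equation
          new = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                        np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
          new[terminal] = 3.0e5              # re-impose the electrode potential
          new[0, :] = new[-1, :] = new[:, 0] = new[:, -1] = 0.0   # grounded walls
          phi = new

      E_axis0, E_axis1 = np.gradient(-phi, dx)   # field components along the two grid axes (V/m)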

  4. Particle-in-cell simulation of x-ray wakefield acceleration and betatron radiation in nanotubes

    Directory of Open Access Journals (Sweden)

    Xiaomei Zhang

    2016-10-01

    Full Text Available Though wakefield acceleration in crystal channels has been previously proposed, x-ray wakefield acceleration has only recently become a realistic possibility since the invention of the single-cycled optical laser compression technique. We investigate the acceleration due to a wakefield induced by a coherent, ultrashort x-ray pulse guided by a nanoscale channel inside a solid material. By two-dimensional particle-in-cell computer simulations, we show that an acceleration gradient of TeV/cm is attainable. This is about 3 orders of magnitude stronger than that of the conventional plasma-based wakefield accelerations, which implies the possibility of an extremely compact scheme to attain ultrahigh energies. In addition to particle acceleration, this scheme can also induce the emission of high energy photons at ∼O(10–100 MeV). Our simulations confirm such high energy photon emissions, which is in contrast with that induced by the optical laser driven wakefield scheme. In addition to this, the significantly improved emittance of the energetic electrons has been discussed.

  5. Automated fault-management in a simulated spaceflight micro-world

    Science.gov (United States)

    Lorenz, Bernd; Di Nocera, Francesco; Rottger, Stefan; Parasuraman, Raja

    2002-01-01

    BACKGROUND: As human spaceflight missions extend in duration and distance from Earth, a self-sufficient crew will bear far greater onboard responsibility and authority for mission success. This will increase the need for automated fault management (FM). Human factors issues in the use of such systems include maintenance of cognitive skill, situational awareness (SA), trust in automation, and workload. This study examines the human performance consequences of operator use of intelligent FM support in interaction with an autonomous, space-related, atmospheric control system. METHODS: An expert system representing a model-based reasoning agent supported operators at a low level of automation (LOA) by a computerized fault finding guide, at a medium LOA by an automated diagnosis and recovery advisory, and at a high LOA by automated diagnosis and recovery implementation, subject to operator approval or veto. Ten percent of the experimental trials involved complete failure of FM support. RESULTS: Benefits of automation were reflected in more accurate diagnoses, shorter fault identification time, and reduced subjective operator workload. Unexpectedly, fault identification times deteriorated more at the medium than at the high LOA during automation failure. Analyses of information sampling behavior showed that offloading operators from recovery implementation during reliable automation enabled operators at high LOA to engage in fault assessment activities. CONCLUSIONS: The potential threat to SA imposed by high-level automation, in which decision advisories are automatically generated, need not inevitably be counteracted by choosing a lower LOA. Instead, freeing operator cognitive resources by automatic implementation of recovery plans at a higher LOA can promote better fault comprehension, so long as the automation interface is designed to support efficient information sampling.

  6. Particle-in-cell simulations of plasma accelerators and electron-neutral collisions

    Directory of Open Access Journals (Sweden)

    David L. Bruhwiler

    2001-10-01

    Full Text Available We present 2D simulations of both beam-driven and laser-driven plasma wakefield accelerators, using the object-oriented particle-in-cell code XOOPIC, which is time explicit, fully electromagnetic, and capable of running on massively parallel supercomputers. Simulations of laser-driven wakefields with low (∼10^16 W/cm^2) and high (∼10^18 W/cm^2) peak intensity laser pulses are conducted in slab geometry, showing agreement with theory and fluid simulations. Simulations of the E-157 beam wakefield experiment at the Stanford Linear Accelerator Center, in which a 30 GeV electron beam passes through 1 m of preionized lithium plasma, are conducted in cylindrical geometry, obtaining good agreement with previous work. We briefly describe some of the more significant modifications to XOOPIC required by this work, and summarize the issues relevant to modeling relativistic electron-neutral collisions in a particle-in-cell code.

  7. Standardization of accelerator irradiation procedures for simulation of neutron induced damage in reactor structural materials

    Science.gov (United States)

    Shao, Lin; Gigax, Jonathan; Chen, Di; Kim, Hyosim; Garner, Frank A.; Wang, Jing; Toloczko, Mychailo B.

    2017-10-01

    Self-ion irradiation is widely used as a method to simulate neutron damage in reactor structural materials. Accelerator-based simulation of void swelling, however, introduces a number of neutron-atypical features which require careful data extraction and, in some cases, introduction of innovative irradiation techniques to alleviate these issues. We briefly summarize three such atypical features: defect imbalance effects, pulsed beam effects, and carbon contamination. The latter issue has just been recently recognized as being relevant to simulation of void swelling and is discussed here in greater detail. It is shown that carbon ions are entrained in the ion beam by Coulomb force drag and accelerated toward the target surface. Beam-contaminant interactions are modeled using molecular dynamics simulation. By applying a multiple beam deflection technique, carbon and other contaminants can be effectively filtered out, as demonstrated in an irradiation of HT-9 alloy by 3.5 MeV Fe ions.

  8. Theory and simulation of ion acceleration with circularly polarized laser pulses; Theorie et simulation de l'acceleration des ions par impulsions laser a polarisation circulaire

    Energy Technology Data Exchange (ETDEWEB)

    Macchi, A. [CNR/INFM/polyLAB, Pisa (Italy); Macchi, A.; Tuveri, S.; Veghini, S. [Pisa Univ., Dept. of Physics E. Fermi (Italy); Liseikina, T.V. [Max Planck Institute for Nuclear Physics, Heidelberg (Germany)

    2009-03-15

    Ion acceleration driven by the radiation pressure of circularly polarized pulses is investigated via analytical modeling and particle-in-cell simulations. Both thick and thin targets, i.e. the 'hole boring' and 'light sail' regimes are considered. Parametric studies in one spatial dimension are used to determine the optimal thickness of thin targets and to address the effects of preformed plasma profiles and laser pulse ellipticity in thick targets. Three-dimensional (3D) simulations show that 'flat-top' radial profiles of the intensity are required to prevent early laser pulse breakthrough in thin targets. The 3D simulations are also used to address the issue of the conservation of the angular momentum of the laser pulse and its absorption in the plasma. (authors)

  9. Injector and beam transport simulation study of proton dielectric wall accelerator

    International Nuclear Information System (INIS)

    Zhao, Quantang; Yuan, P.; Zhang, Z.M.; Cao, S.C; Shen, X.K.; Jing, Y.; Ma, Y.Y.; Yu, C.S.; Li, Z.P.; Liu, M.; Xiao, R.Q.; Zhao, H.W.

    2012-01-01

    A simulation study of a short-pulsed proton injector for, and beam transport in, a dielectric wall accelerator (DWA) has been carried out using the particle-in-cell (PIC) code Warp. It was shown that applying “tilt pulse” voltage waveforms on three electrodes enables the production of a shorter bunch by the injector. The fields in the DWA beam tube were simulated using Computer Simulation Technology’s Microwave Studio (CST MWS) package, with various choices for the boundary conditions. For acceleration in the DWA, the beam transport was simulated with Warp, using applied fields obtained by running CST MWS. Our simulations showed that the electric field at the entrance to the DWA represents a challenging issue for the beam transport. We thus simulated a configuration with a mesh at the entrance of the DWA, intended to improve the entrance field. In these latter simulations, a proton bunch was successfully accelerated from 130 keV to about 36 MeV in a DWA with a length of 36.75 cm. As the beam bunch progresses, its transverse dimensions diminish from (roughly) 0.5×0.5 cm to 0.2×0.4 cm. The beam pulse lengthens from 1 cm to 2 cm due to lack of longitudinal compression fields. -- Highlights: ► A pulse proton injector with tilt voltages on the three electrodes was simulated. ► The fields in different part of the DWA were simulated with CST and analyzed. ► The proton beam transport in DWA was simulated with Warp successfully. ► The simulation can help for designing a real DWA.

  10. Measurements and simulation of controlled beamfront motion in the Laser Controlled Collective Accelerator

    International Nuclear Information System (INIS)

    Yao, R.L.; Destler, W.W.; Striffler, C.D.; Rodgers, J.; Scgalov, Z.

    1989-01-01

    In the Laser Controlled Collective Accelerator, an intense electron beam is injected at a current above the vacuum space charge limit into an initially evacuated drift tube. A plasma channel, produced by time-sequenced, multiple laser beam ionization of a solid target on the drift tube wall, provides the necessary neutralization to allow for effective beam propagation. By controlling the rate of production of the plasma channel as a function of time down the drift tube, control of the electron beamfront can be achieved. Recent experimental measurements of controlled beamfront motion in this configuration are presented, along with results of ion acceleration experiments conducted using two different accelerating gradients. These results are compared with numerical simulations of the system in which both controlled beamfront motion and ion acceleration are observed, consistent with design expectations and experimental results. 5 refs., 6 figs

  11. Simulator for an Accelerator-Driven Subcritical Fissile Solution System

    International Nuclear Information System (INIS)

    Klein, Steven Karl; Day, Christy M.; Determan, John C.

    2015-01-01

    LANL has developed a process to generate a progressive family of system models for a fissile solution system. This family includes a dynamic system simulation comprised of coupled nonlinear differential equations describing the time evolution of the system. Neutron kinetics, radiolytic gas generation and transport, and core thermal hydraulics are included in the DSS. Extensions to explicit operation of cooling loops and radiolytic gas handling are embedded in these systems as is a stability model. The DSS may then be converted to an implementation in Visual Studio to provide a design team the ability to rapidly estimate system performance impacts from a variety of design decisions. This provides a method to assist in optimization of the system design. Once design has been generated in some detail the C++ version of the system model may then be implemented in a LabVIEW user interface to evaluate operator controls and instrumentation and operator recognition and response to off-normal events. Taken as a set of system models the DSS, Visual Studio, and LabVIEW progression provides a comprehensive set of design support tools.

  12. Simulator for an Accelerator-Driven Subcritical Fissile Solution System

    Energy Technology Data Exchange (ETDEWEB)

    Klein, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Day, Christy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Determan, John C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-09-14

    LANL has developed a process to generate a progressive family of system models for a fissile solution system. This family includes a dynamic system simulation comprised of coupled nonlinear differential equations describing the time evolution of the system. Neutron kinetics, radiolytic gas generation and transport, and core thermal hydraulics are included in the DSS. Extensions to explicit operation of cooling loops and radiolytic gas handling are embedded in these systems as is a stability model. The DSS may then be converted to an implementation in Visual Studio to provide a design team the ability to rapidly estimate system performance impacts from a variety of design decisions. This provides a method to assist in optimization of the system design. Once design has been generated in some detail the C++ version of the system model may then be implemented in a LabVIEW user interface to evaluate operator controls and instrumentation and operator recognition and response to off-normal events. Taken as a set of system models the DSS, Visual Studio, and LabVIEW progression provides a comprehensive set of design support tools.

  13. Accelerated finite element elastodynamic simulations using the GPU

    Energy Technology Data Exchange (ETDEWEB)

    Huthwaite, Peter, E-mail: p.huthwaite@imperial.ac.uk

    2014-01-15

    An approach is developed to perform explicit time domain finite element simulations of elastodynamic problems on the graphical processing unit, using Nvidia's CUDA. Of critical importance for this problem is the arrangement of nodes in memory, allowing data to be loaded efficiently and minimising communication between the independently executed blocks of threads. The initial stage of memory arrangement is partitioning the mesh; both a well established ‘greedy’ partitioner and a new, more efficient ‘aligned’ partitioner are investigated. A method is then developed to efficiently arrange the memory within each partition. The software is applied to three models from the fields of non-destructive testing, vibrations and geophysics, demonstrating a memory bandwidth of very close to the card's maximum, reflecting the bandwidth-limited nature of the algorithm. Comparison with Abaqus, a widely used commercial CPU equivalent, validated the accuracy of the results and demonstrated a speed improvement of around two orders of magnitude. A software package, Pogo, incorporating these developments, is released open source, downloadable from (http://www.pogo-fea.com/) to benefit the community. -- Highlights: •A novel memory arrangement approach is discussed for finite elements on the GPU. •The mesh is partitioned then nodes are arranged efficiently within each partition. •Models from ultrasonics, vibrations and geophysics are run. •The code is significantly faster than an equivalent commercial CPU package. •Pogo, the new software package, is released open source.

  14. Accelerated finite element elastodynamic simulations using the GPU

    International Nuclear Information System (INIS)

    Huthwaite, Peter

    2014-01-01

    An approach is developed to perform explicit time domain finite element simulations of elastodynamic problems on the graphical processing unit, using Nvidia's CUDA. Of critical importance for this problem is the arrangement of nodes in memory, allowing data to be loaded efficiently and minimising communication between the independently executed blocks of threads. The initial stage of memory arrangement is partitioning the mesh; both a well established ‘greedy’ partitioner and a new, more efficient ‘aligned’ partitioner are investigated. A method is then developed to efficiently arrange the memory within each partition. The software is applied to three models from the fields of non-destructive testing, vibrations and geophysics, demonstrating a memory bandwidth of very close to the card's maximum, reflecting the bandwidth-limited nature of the algorithm. Comparison with Abaqus, a widely used commercial CPU equivalent, validated the accuracy of the results and demonstrated a speed improvement of around two orders of magnitude. A software package, Pogo, incorporating these developments, is released open source, downloadable from (http://www.pogo-fea.com/) to benefit the community. -- Highlights: •A novel memory arrangement approach is discussed for finite elements on the GPU. •The mesh is partitioned then nodes are arranged efficiently within each partition. •Models from ultrasonics, vibrations and geophysics are run. •The code is significantly faster than an equivalent commercial CPU package. •Pogo, the new software package, is released open source
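
    As a rough illustration of the 'greedy' partitioning step mentioned above (not the Pogo implementation; element connectivity and partition size are placeholders), the sketch below grows each partition from a seed element by repeatedly absorbing unassigned neighbours until a target size is reached.

      from collections import deque

      def greedy_partition(neighbours, target_size):
          """Partition a mesh given as an element-adjacency dict {elem: [neighbour elems]}."""
          unassigned = set(neighbours)
          partitions = []
          while unassigned:
              seed = next(iter(unassigned))
              part, queue = [], deque([seed])
              while queue and len(part) < target_size:
                  elem = queue.popleft()
                  if elem not in unassigned:
                      continue
                  unassigned.remove(elem)
                  part.append(elem)
                  queue.extend(n for n in neighbours[elem] if n in unassigned)
              partitions.append(part)
          return partitions

      # toy 1D chain of 10 elements, partitions of at most 4 elements
      adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < 10] for i in range(10)}
      print(greedy_partition(adj, target_size=4))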

  15. Hybrid Methods for Muon Accelerator Simulations with Ionization Cooling

    Energy Technology Data Exchange (ETDEWEB)

    Kunz, Josiah [Anderson U.; Snopok, Pavel [Fermilab; Berz, Martin [Michigan State U.; Makino, Kyoko [Michigan State U.

    2018-03-28

    Muon ionization cooling involves passing particles through solid or liquid absorbers. Careful simulations are required to design muon cooling channels. New features have been developed for inclusion in the transfer map code COSY Infinity to follow the distribution of charged particles through matter. To study the passage of muons through material, the transfer map approach alone is not sufficient. The interplay of beam optics and atomic processes must be studied by a hybrid transfer map--Monte-Carlo approach in which transfer map methods describe the deterministic behavior of the particles, and Monte-Carlo methods are used to provide corrections accounting for the stochastic nature of scattering and straggling of particles. The advantage of the new approach is that the vast majority of the dynamics are represented by fast application of the high-order transfer map of an entire element and accumulated stochastic effects. The gains in speed are expected to simplify the optimization of cooling channels which is usually computationally demanding. Progress on the development of the required algorithms and their application to modeling muon ionization cooling channels is reported.
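
    A schematic of the hybrid idea described above, as a hedged Python illustration rather than COSY Infinity code: the deterministic optics is applied as a transfer map, and a stochastic multiple-scattering kick, here drawn from a Gaussian with the Highland width, is added as the Monte Carlo correction for the absorber. All material and beam parameters are assumed example values.

      import numpy as np

      def highland_theta0(p_mev, beta, x_over_X0, z=1.0):
          """Highland estimate of the projected multiple-scattering angle (rad)."""
          return (13.6 / (beta * p_mev)) * z * np.sqrt(x_over_X0) * (1 + 0.038 * np.log(x_over_X0))

      # one transverse plane: state vector (x [m], x' [rad]) for each macroparticle
      n = 5000
      state = np.zeros((2, n))
      state[1] = np.random.normal(0.0, 5e-3, n)          # initial angular spread

      M = np.array([[1.0, 0.5],                          # placeholder drift/focusing transfer map
                    [0.0, 1.0]])

      theta0 = highland_theta0(p_mev=200.0, beta=0.87, x_over_X0=0.05)   # assumed absorber

      for _ in range(100):                               # pass through 100 identical cells
          state = M @ state                              # deterministic transfer-map step
          state[1] += np.random.normal(0.0, theta0, n)   # stochastic scattering correction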

  16. Kinetic Simulations of Plasma Energization and Particle Acceleration in Interacting Magnetic Flux Ropes

    Science.gov (United States)

    Du, S.; Guo, F.; Zank, G. P.; Li, X.; Stanier, A.

    2017-12-01

    The interaction between magnetic flux ropes has been suggested as a process that leads to efficient plasma energization and particle acceleration (e.g., Drake et al. 2013; Zank et al. 2014). However, the underlying plasma dynamics and acceleration mechanisms need to be examined with numerical simulations. As a first step of this effort, we carry out 2D fully kinetic simulations using the VPIC code to study the plasma energization and particle acceleration during the coalescence of two magnetic flux ropes. Our analysis shows that the reconnection electric field and the compression effect are important in plasma energization. The results may help understand the energization process associated with magnetic flux ropes frequently observed in the solar wind near the heliospheric current sheet.

  17. Accelerator simulation and theoretical modelling of radiation effects (SMoRE)

    CERN Document Server

    2018-01-01

    This publication summarizes the findings and conclusions of the IAEA coordinated research project (CRP) on accelerator simulation and theoretical modelling of radiation effects, aimed at supporting Member States in the development of advanced radiation-resistant structural materials for implementation in innovative nuclear systems. This aim can be achieved through enhancement of both the experimental neutron-emulation capabilities of ion accelerators and the predictive efficiency of theoretical models and computer codes. This dual approach is challenging but necessary, because outputs of accelerator simulation experiments need adequate theoretical interpretation, and theoretical models and codes need high-dose experimental data for their verification. Both ion irradiation investigations and computer modelling have been the specific subjects of the CRP, and the results of these studies are presented in this publication, which also includes state-of-the-art reviews of four major aspects of the project...

  18. Accelerating Project and Process Improvement using Advanced Software Simulation Technology: From the Office to the Enterprise

    Science.gov (United States)

    2010-04-29

    Larry Smith, Software Technology Support Center, 517 SMXS/MXDEA, 6022 Fir Avenue, Hill AFB, UT 84056. Report documentation fields (author, affiliation, reporting period 2010) for 'Accelerating Project and Process Improvement using Advanced Software Simulation Technology: From the Office to the Enterprise'.

  19. A micro-macro acceleration method for the Monte Carlo simulation of stochastic differential equations

    DEFF Research Database (Denmark)

    Debrabant, Kristian; Samaey, Giovanni; Zieliński, Przemysław

    2017-01-01

    We present and analyse a micro-macro acceleration method for the Monte Carlo simulation of stochastic differential equations with separation between the (fast) time-scale of individual trajectories and the (slow) time-scale of the macroscopic function of interest. The algorithm combines short...
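
    For orientation, the sketch below is a plain Euler-Maruyama Monte Carlo simulation of an SDE ensemble, i.e. the 'micro' step whose cost the micro-macro method reduces by extrapolating macroscopic observables over the slow time-scale. The method itself is not reproduced here; the drift, diffusion, and parameter values are illustrative assumptions.

      import numpy as np

      def euler_maruyama_ensemble(x0, drift, diffusion, dt, n_steps, n_paths, rng):
          """Simulate n_paths trajectories of dX = drift(X) dt + diffusion(X) dW."""
          x = np.full(n_paths, x0, dtype=float)
          for _ in range(n_steps):
              dW = rng.normal(0.0, np.sqrt(dt), n_paths)
              x = x + drift(x) * dt + diffusion(x) * dW
          return x

      rng = np.random.default_rng(0)
      # Ornstein-Uhlenbeck example: fast relaxation of trajectories, slow observable mean(x)
      x_end = euler_maruyama_ensemble(
          x0=1.0,
          drift=lambda x: -4.0 * x,      # illustrative fast drift
          diffusion=lambda x: 0.5,       # illustrative constant diffusion
          dt=1e-3, n_steps=2000, n_paths=10_000, rng=rng)
      print("macroscopic observable <x> ~", x_end.mean())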

  20. Simulation studies of acceleration of heavy ions and their elemental compositions

    International Nuclear Information System (INIS)

    Toida, Mieko; Ohsawa, Yukiharu

    1996-07-01

    By using a one-dimensional, electromagnetic particle simulation code with full ion and electron dynamics, we have studied the acceleration of heavy ions by a nonlinear magnetosonic wave in a multi-ion-species plasma. First, we describe the mechanism of heavy ion acceleration by magnetosonic waves. We then investigate this by particle simulations. The simulation plasma contains four ion species: H, He, O, and Fe. The number density of He is taken to be 10% of that of H, and those of O and Fe are much lower. Simulations confirm that, as in a single-ion-species plasma, some of the hydrogens can be accelerated by the longitudinal electric field formed in the wave. Furthermore, they show that magnetosonic waves can accelerate all the particles of all the heavy species (He, O, and Fe) by a different mechanism, i.e., by the transverse electric field. The maximum speeds of the heavy species are about the same, of the order of the wave propagation speed. These are in good agreement with theoretical prediction. These results indicate that, if high-energy ions are produced in the solar corona through these mechanisms, the elemental compositions of these heavy ions can be similar to that of the background plasma, i.e., the corona

  1. Simulation of Electric Faults in Doubly-Fed Induction Generators Employing Advanced Mathematical Modelling

    DEFF Research Database (Denmark)

    Martens, Sebastian; Mijatovic, Nenad; Holbøll, Joachim

    2015-01-01

    in many areas of electrical machine analysis. However, for fault investigations, the phase-coordinate representation has been found more suitable. This paper presents a mathematical model in phase coordinates of the DFIG with two parallel windings per rotor phase. The model has been implemented in Matlab...

  2. Radio Evolution of Supernova Remnants Including Nonlinear Particle Acceleration: Insights from Hydrodynamic Simulations

    Science.gov (United States)

    Pavlović, Marko Z.; Urošević, Dejan; Arbutina, Bojan; Orlando, Salvatore; Maxted, Nigel; Filipović, Miroslav D.

    2018-01-01

    We present a model for the radio evolution of supernova remnants (SNRs) obtained by using three-dimensional hydrodynamic simulations coupled with nonlinear kinetic theory of cosmic-ray (CR) acceleration in SNRs. We model the radio evolution of SNRs on a global level by performing simulations for a wide range of the relevant physical parameters, such as the ambient density, supernova (SN) explosion energy, acceleration efficiency, and magnetic field amplification (MFA) efficiency. We attribute the observed spread of radio surface brightnesses for corresponding SNR diameters to the spread of these parameters. In addition to our simulations of Type Ia SNRs, we also considered SNR radio evolution in denser, nonuniform circumstellar environments modified by the progenitor star wind. These simulations start with the mass of the ejecta substantially higher than in the case of a Type Ia SN and presumably lower shock speed. The magnetic field is understandably seen as very important for the radio evolution of SNRs. In terms of MFA, we include both resonant and nonresonant modes in our large-scale simulations by implementing models obtained from first-principles, particle-in-cell simulations and nonlinear magnetohydrodynamical simulations. We test the quality and reliability of our models on a sample consisting of Galactic and extragalactic SNRs. Our simulations give Σ ‑ D slopes between ‑4 and ‑6 for the full Sedov regime. Recent empirical slopes obtained for the Galactic samples are around ‑5, while those for the extragalactic samples are around ‑4.

  3. Laser-wakefield accelerators for medical phase contrast imaging: Monte Carlo simulations and experimental studies

    Science.gov (United States)

    Cipiccia, S.; Reboredo, D.; Vittoria, Fabio A.; Welsh, G. H.; Grant, P.; Grant, D. W.; Brunetti, E.; Wiggins, S. M.; Olivo, A.; Jaroszynski, D. A.

    2015-05-01

    X-ray phase contrast imaging (X-PCi) is a very promising method of dramatically enhancing the contrast of X-ray images of microscopic weakly absorbing objects and soft tissue, which may lead to significant advances in high-resolution, low-dose medical imaging. The interest in X-PCi is giving rise to a demand for effective simulation methods. Monte Carlo codes have proved a valuable tool for studying X-PCi, including coherent effects. The laser-plasma wakefield accelerator (LWFA) is a very compact particle accelerator that uses plasma as an accelerating medium. Accelerating gradients in excess of 1 GV/cm can be obtained, which makes LWFAs over a thousand times more compact than conventional accelerators. LWFAs are also sources of brilliant betatron radiation, which are promising for applications including medical imaging. We present a study that explores the potential of LWFA-based betatron sources for medical X-PCi, investigate its resolution limit using numerical simulations based on the FLUKA Monte Carlo code, and present preliminary experimental results.

  4. Simulation and analysis of TE wave propagation for measurement of electron cloud densities in particle accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Sonnad, Kiran G., E-mail: kgs52@cornell.edu [CLASSE, Cornell University, Ithaca, NY (United States); Hammond, Kenneth C. [Department of Physics, Harvard University, Cambridge, MA (United States); Schwartz, Robert M. [CLASSE, Cornell University, Ithaca, NY (United States); Veitzer, Seth A. [Tech-X Corporation, Boulder, CO (United States)

    2014-08-01

    The use of transverse electric (TE) waves has proved to be a powerful, noninvasive method for estimating the densities of electron clouds formed in particle accelerators. Results from the plasma simulation program VSim have served as a useful guide for experimental studies related to this method, which have been performed at various accelerator facilities. This paper provides results of the simulation and modeling work done in conjunction with experimental efforts carried out at the Cornell electron storage ring “Test Accelerator” (CESRTA). This paper begins with a discussion of the phase shift induced by electron clouds in the transmission of RF waves, followed by the effect of reflections along the beam pipe, simulation of the resonant standing wave frequency shifts and finally the effects of external magnetic fields, namely dipoles and wigglers. A derivation of the dispersion relationship of wave propagation for arbitrary geometries in field free regions with a cold, uniform cloud density is also provided.
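
    As an illustration of the measurement principle discussed above, the sketch below evaluates the standard small-phase-shift relation for a TE wave in a beam pipe with cutoff frequency f_c, dphi ~ wp^2 L / (2 c sqrt(w^2 - wc^2)), and inverts it for the average electron-cloud density. The numerical values are placeholders, not CESRTA parameters.

      import numpy as np

      c    = 2.998e8        # speed of light (m/s)
      e    = 1.602e-19      # elementary charge (C)
      me   = 9.109e-31      # electron mass (kg)
      eps0 = 8.854e-12      # vacuum permittivity (F/m)

      def density_from_phase_shift(dphi, f, f_cutoff, L):
          """Invert dphi ~ wp^2 L / (2 c sqrt(w^2 - wc^2)) for the cloud density (m^-3)."""
          w, wc = 2 * np.pi * f, 2 * np.pi * f_cutoff
          wp2 = 2.0 * c * dphi * np.sqrt(w**2 - wc**2) / L
          return wp2 * eps0 * me / e**2

      # placeholder numbers: 0.02 rad shift, 2.0 GHz carrier, 1.9 GHz cutoff, 10 m path
      print(density_from_phase_shift(dphi=0.02, f=2.0e9, f_cutoff=1.9e9, L=10.0))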

  5. D-leaping: Accelerating stochastic simulation algorithms for reactions with delays

    International Nuclear Information System (INIS)

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2009-01-01

    We propose a novel, accelerated algorithm for the approximate stochastic simulation of biochemical systems with delays. The present work extends existing accelerated algorithms by distributing, in a time adaptive fashion, the delayed reactions so as to minimize the computational effort while preserving their accuracy. The accuracy of the present algorithm is assessed by comparing its results to those of the corresponding delay differential equations for a representative biochemical system. In addition, the fluctuations produced from the present algorithm are comparable to those from an exact stochastic simulation with delays. The algorithm is used to simulate biochemical systems that model oscillatory gene expression. The results indicate that the present algorithm is competitive with existing works for several benchmark problems while it is orders of magnitude faster for certain systems of biochemical reactions.
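
    As background for the accelerated algorithm above, the sketch below shows an ordinary (non-delayed) tau-leaping step for a small reaction network, the building block that D-leaping extends by distributing delayed reaction firings adaptively in time. The rate constants, stoichiometry, and counts are illustrative only.

      import numpy as np

      rng = np.random.default_rng(1)

      def tau_leap_step(x, rates, stoich, tau):
          """Advance species counts x by one tau-leap: Poisson firings per reaction channel."""
          propensities = np.array([r(x) for r in rates])
          firings = rng.poisson(propensities * tau)      # number of firings in [t, t + tau)
          return x + stoich.T @ firings                   # apply stoichiometric changes

      # toy birth-death system: 0 -> X (rate k1), X -> 0 (rate k2 * x)
      k1, k2 = 10.0, 0.1
      rates  = [lambda x: k1, lambda x: k2 * x[0]]
      stoich = np.array([[+1], [-1]])                     # one row per reaction, one column per species

      x = np.array([0])
      for _ in range(1000):
          x = tau_leap_step(x, rates, stoich, tau=0.05)
      print("final count:", x[0], "(steady state ~", k1 / k2, ")")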

  6. Simulation of isothermal multi-phase fuel-coolant interaction using MPS method with GPU acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Gou, W.; Zhang, S.; Zheng, Y. [Zhejiang Univ., Hangzhou (China). Center for Engineering and Scientific Computation

    2016-07-15

    The energetic fuel-coolant interaction (FCI) has been one of the primary safety concerns in nuclear power plants. A graphics processing unit (GPU) implementation of the moving particle semi-implicit (MPS) method is presented and used to simulate the fuel-coolant interaction problem. The governing equations are discretized with the particle interaction model of MPS. The detailed single-GPU implementation is introduced. A three-dimensional broken-dam problem is simulated to verify the developed GPU-accelerated MPS method. The proposed GPU-acceleration algorithm and the developed code are then used to simulate the FCI problem. In summary, the developed GPU-MPS method showed good agreement with the experimental observation and theoretical prediction.
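
    To make the particle-interaction model referred to above concrete, the sketch below evaluates the standard MPS weight function, w(r) = r_e/r - 1 for r < r_e, and the resulting particle number density. The GPU implementation itself is not shown; the particle positions and the choice r_e = 2.1*dx are placeholder assumptions.

      import numpy as np

      def mps_weight(r, r_e):
          """Standard MPS kernel: w = r_e/r - 1 inside the effective radius, 0 outside."""
          w = np.zeros_like(r)
          inside = (r > 0.0) & (r < r_e)
          w[inside] = r_e / r[inside] - 1.0
          return w

      def number_density(positions, i, r_e):
          """Particle number density n_i = sum_j w(|x_j - x_i|) over neighbours j != i."""
          r = np.linalg.norm(positions - positions[i], axis=1)
          r[i] = 0.0                      # self term excluded (weight is zero at r = 0)
          return mps_weight(r, r_e).sum()

      # toy 2D particle block with spacing dx; r_e = 2.1*dx is a commonly used choice
      dx = 0.01
      xs, ys = np.meshgrid(np.arange(10) * dx, np.arange(10) * dx)
      positions = np.column_stack([xs.ravel(), ys.ravel()])
      print(number_density(positions, i=55, r_e=2.1 * dx))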

  7. Numerical simulation for the accelerator of the KSTAR neutral beam ion source

    International Nuclear Information System (INIS)

    Kim, Tae-Seong; Jeong, Seung Ho; In, Sang Ryul

    2010-01-01

    Recent experiments with a prototype long-pulse, high-current ion source being developed for the neutral beam injection system of the Korea Superconducting Tokamak Advanced Research (KSTAR) device have shown that the accelerator grid assembly needs a further upgrade to achieve the final goal of 120 keV/65 A for the deuterium ion beam. The accelerator upgrade concept was determined theoretically by simulations using the IGUN code. The simulation study focused on finding parameter sets that make the optimum perveance as large as possible and the beam divergence as low as possible. From the simulation results, it was concluded that this goal can be achieved by slimming the plasma grid (G1), shortening the second gap (G2-G3), and adjusting the G2 voltage ratio.
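
    The perveance referred to above is conventionally defined as P = I / V^(3/2); the fragment below evaluates it for a single beamlet. The current and voltage values are illustrative placeholders, not the KSTAR design numbers.

      def perveance(current_a, voltage_v):
          """Beam perveance P = I / V^(3/2), in A/V^1.5."""
          return current_a / voltage_v ** 1.5

      # illustrative beamlet parameters (not the cited design values)
      I_beamlet = 0.25e-3        # 0.25 mA per aperture
      V_accel   = 100e3          # 100 kV acceleration voltage
      print(f"P = {perveance(I_beamlet, V_accel):.2e} A/V^1.5")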

  8. Two-stage light-gas magnetoplasma accelerator for hypervelocity impact simulation

    International Nuclear Information System (INIS)

    Khramtsov, P P; Vasetskij, V A; Makhnach, A I; Grishenko, V M; Chernik, M Yu; Shikh, I A; Doroshko, M V

    2016-01-01

    The development of macroparticle acceleration methods for high-speed impact simulation in the laboratory is a topical problem, given the increasing duration of space flights and the need to provide adequate spacecraft protection against micrometeoroid and space debris impacts. This paper presents results of an experimental study of a two-stage light-gas magnetoplasma launcher for macroparticle acceleration, in which a coaxial plasma accelerator creates a shock wave in a high-pressure channel filled with a light gas. Graphite and steel spheres with diameters of 2.5-4 mm were used as projectiles and were accelerated to speeds of 0.8-4.8 km/s. Launching of the particles took place in vacuum. A speed-measurement method was developed for projectile velocity control; its error does not exceed 5%. The projectile's flight from the barrel and the collision of the particle with a target were recorded with a high-speed camera. Results of projectile collisions with elements of meteoroid shielding are presented. In order to increase the projectile velocity, the high-pressure channel should be filled with hydrogen; however, we used helium in our experiments for safety reasons. Therefore, we expect that the range of mass and velocity of the accelerated particles can be extended by using hydrogen as the accelerating gas. (paper)

  9. Two-stage light-gas magnetoplasma accelerator for hypervelocity impact simulation

    Science.gov (United States)

    Khramtsov, P. P.; Vasetskij, V. A.; Makhnach, A. I.; Grishenko, V. M.; Chernik, M. Yu; Shikh, I. A.; Doroshko, M. V.

    2016-11-01

    The development of macroparticle acceleration methods for high-speed impact simulation in the laboratory is a topical problem, given the increasing duration of space flights and the need to provide adequate spacecraft protection against micrometeoroid and space debris impacts. This paper presents results of an experimental study of a two-stage light-gas magnetoplasma launcher for macroparticle acceleration, in which a coaxial plasma accelerator creates a shock wave in a high-pressure channel filled with a light gas. Graphite and steel spheres with diameters of 2.5-4 mm were used as projectiles and were accelerated to speeds of 0.8-4.8 km/s. Launching of the particles took place in vacuum. A speed-measurement method was developed for projectile velocity control; its error does not exceed 5%. The projectile's flight from the barrel and the collision of the particle with a target were recorded with a high-speed camera. Results of projectile collisions with elements of meteoroid shielding are presented. In order to increase the projectile velocity, the high-pressure channel should be filled with hydrogen; however, we used helium in our experiments for safety reasons. Therefore, we expect that the range of mass and velocity of the accelerated particles can be extended by using hydrogen as the accelerating gas.

  10. Design and Optimization of Large Accelerator Systems through High-Fidelity Electromagnetic Simulations

    International Nuclear Information System (INIS)

    Ng, Cho; Akcelik, Volkan; Candel, Arno; Chen, Sheng; Ge, Lixin; Kabel, Andreas; Lee, Lie-Quan; Li, Zenghai; Prudencio, Ernesto; Schussman, Greg; Uplenchwar, Ravi; Xiao, Liling; Ko, Kwok; Austin, T.; Cary, J.R.; Ovtchinnikov, S.; Smith, D.N.; Werner, G.R.; Bellantoni, L.; TechX Corp.; Fermilab

    2008-01-01

    SciDAC1, with its support for the 'Advanced Computing for 21st Century Accelerator Science and Technology' (AST) project, witnessed dramatic advances in electromagnetic (EM) simulations for the design and optimization of important accelerators across the Office of Science. In SciDAC2, EM simulations continue to play an important role in the 'Community Petascale Project for Accelerator Science and Simulation' (ComPASS), through close collaborations with SciDAC CETs/Institutes in computational science. Existing codes will be improved and new multi-physics tools will be developed to model large accelerator systems with unprecedented realism and high accuracy using computing resources at petascale. These tools aim at targeting the most challenging problems facing the ComPASS project. Supported by advances in computational science research, they have been successfully applied to the International Linear Collider (ILC) and the Large Hadron Collider (LHC) in High Energy Physics (HEP), the JLab 12-GeV Upgrade in Nuclear Physics (NP), as well as the Spallation Neutron Source (SNS) and the Linac Coherent Light Source (LCLS) in Basic Energy Sciences (BES)

  11. Design and optimization of large accelerator systems through high-fidelity electromagnetic simulations

    International Nuclear Information System (INIS)

    Ng, C; Akcelik, V; Candel, A; Chen, S; Ge, L; Kabel, A; Lee, Lie-Quan; Li, Z; Prudencio, E; Schussman, G; Uplenchwar, R; Xiao, L; Ko, K; Austin, T; Cary, J R; Ovtchinnikov, S; Smith, D N; Werner, G R; Bellantoni, L

    2008-01-01

    SciDAC-1, with its support for the 'Advanced Computing for 21st Century Accelerator Science and Technology' project, witnessed dramatic advances in electromagnetic (EM) simulations for the design and optimization of important accelerators across the Office of Science. In SciDAC-2, EM simulations continue to play an important role in the 'Community Petascale Project for Accelerator Science and Simulation' (ComPASS), through close collaborations with SciDAC Centers and Institutes in computational science. Existing codes will be improved and new multi-physics tools will be developed to model large accelerator systems with unprecedented realism and high accuracy using computing resources at petascale. These tools aim at targeting the most challenging problems facing the ComPASS project. Supported by advances in computational science research, they have been successfully applied to the International Linear Collider and the Large Hadron Collider in high energy physics, the JLab 12-GeV Upgrade in nuclear physics, and the Spallation Neutron Source and the Linac Coherent Light Source in basic energy sciences

  12. FAULT DIAGNOSIS WITH MULTI-STATE ALARMS IN A NUCLEAR POWER CONTROL SIMULATOR

    Energy Technology Data Exchange (ETDEWEB)

    Austin Ragsdale; Roger Lew; Brian P. Dyre; Ronald L. Boring

    2012-10-01

    This research addresses how alarm systems can increase operator performance within nuclear power plant operations. The experiment examined the effect of two types of alarm systems (two-state and three-state alarms) on alarm compliance and diagnosis for two types of faults differing in complexity. We hypothesized three-state alarms would improve performance in alarm recognition and fault diagnoses over that of two-state alarms. We used sensitivity and criterion based on Signal Detection Theory to measure performance. We further hypothesized that operator trust would be highest when using three-state alarms. The findings from this research showed participants performed better and had more trust in three-state alarms compared to two-state alarms. Furthermore, these findings have significant theoretical implications and practical applications as they apply to improving the efficiency and effectiveness of nuclear power plant operations.
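
    The sensitivity and criterion measures mentioned above come from Signal Detection Theory; a minimal sketch of how they are typically computed from hit and false-alarm rates is given below. The formulas are the standard d' and c definitions, while the example rates are hypothetical, not data from the study.

      from statistics import NormalDist

      def sdt_measures(hit_rate, false_alarm_rate):
          """Return sensitivity d' = z(H) - z(FA) and criterion c = -(z(H) + z(FA)) / 2."""
          z = NormalDist().inv_cdf
          d_prime = z(hit_rate) - z(false_alarm_rate)
          criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
          return d_prime, criterion

      # hypothetical example: 90% hits, 20% false alarms
      print(sdt_measures(0.90, 0.20))   # -> (approximately 2.12, -0.22)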

  13. Two-fluid electromagnetic simulations of plasma-jet acceleration with detailed equation-of-state

    International Nuclear Information System (INIS)

    Thoma, C.; Welch, D. R.; Clark, R. E.; Bruner, N.; MacFarlane, J. J.; Golovkin, I. E.

    2011-01-01

    We describe a new particle-based two-fluid fully electromagnetic algorithm suitable for modeling high density (n_i ∼ 10^17 cm^-3) and high Mach number laboratory plasma jets. In this parameter regime, traditional particle-in-cell (PIC) techniques are challenging due to electron timescale and lengthscale constraints. In this new approach, an implicit field solve allows the use of large timesteps while an Eulerian particle remap procedure allows simulations to be run with very few particles per cell. Hall physics and charge separation effects are included self-consistently. A detailed equation of state (EOS) model is used to evolve the ion charge state and introduce non-ideal gas behavior. Electron cooling due to radiation emission is included in the model as well. We demonstrate the use of these new algorithms in 1D and 2D Cartesian simulations of railgun (parallel plate) jet accelerators using He and Ar gases. The inclusion of EOS and radiation physics reduces the electron temperature, resulting in higher calculated jet Mach numbers in the simulations. We also introduce a surface physics model for jet accelerators in which a frictional drag along the walls leads to axial spreading of the emerging jet. The simulations demonstrate that high Mach number jets can be produced by railgun accelerators for a variety of applications, including high energy density physics experiments.

  14. Two-fluid electromagnetic simulations of plasma-jet acceleration with detailed equation-of-state

    Energy Technology Data Exchange (ETDEWEB)

    Thoma, C.; Welch, D. R.; Clark, R. E.; Bruner, N. [Voss Scientific, LLC, Albuquerque, New Mexico 87108 (United States); MacFarlane, J. J.; Golovkin, I. E. [Prism Computational Sciences, Inc., Madison, Wisconsin 53711 (United States)

    2011-10-15

    We describe a new particle-based two-fluid fully electromagnetic algorithm suitable for modeling high density (n_i ≈ 10^17 cm^-3) and high Mach number laboratory plasma jets. In this parameter regime, traditional particle-in-cell (PIC) techniques are challenging due to electron timescale and lengthscale constraints. In this new approach, an implicit field solve allows the use of large timesteps while an Eulerian particle remap procedure allows simulations to be run with very few particles per cell. Hall physics and charge separation effects are included self-consistently. A detailed equation of state (EOS) model is used to evolve the ion charge state and introduce non-ideal gas behavior. Electron cooling due to radiation emission is included in the model as well. We demonstrate the use of these new algorithms in 1D and 2D Cartesian simulations of railgun (parallel plate) jet accelerators using He and Ar gases. The inclusion of EOS and radiation physics reduces the electron temperature, resulting in higher calculated jet Mach numbers in the simulations. We also introduce a surface physics model for jet accelerators in which a frictional drag along the walls leads to axial spreading of the emerging jet. The simulations demonstrate that high Mach number jets can be produced by railgun accelerators for a variety of applications, including high energy density physics experiments.

  15. Forward and adjoint spectral-element simulations of seismic wave propagation using hardware accelerators

    Science.gov (United States)

    Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri

    2015-04-01

    Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach where seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing together with advances in multi-core central processing units (CPUs) can greatly accelerate scientific applications. There are two main choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards while OpenCL has been adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.

  16. Particle-in-cell/accelerator code for space-charge dominated beam simulation

    Energy Technology Data Exchange (ETDEWEB)

    2012-05-08

    Warp is a multidimensional discrete-particle beam simulation program designed to be applicable where the beam space-charge is non-negligible or dominant. It is being developed in a collaboration among LLNL, LBNL and the University of Maryland. It was originally designed and optimized for heavy ion fusion accelerator physics studies, but has received use in a broader range of applications, including for example laser wakefield accelerators, e-cloud studies in high energy accelerators, particle traps and other areas. At present it incorporates 3-D, axisymmetric (r,z), planar (x-z) and transverse slice (x,y) descriptions, with both electrostatic and electromagnetic fields, and a beam envelope model. The code is built atop the Python interpreter.

  17. Evaluation of a server-client architecture for accelerator modeling and simulation

    International Nuclear Information System (INIS)

    Bowling, B.A.; Akers, W.; Shoaee, H.; Watson, W.; Zeijts, J. van; Witherspoon, S.

    1997-01-01

    Traditional approaches to computational modeling and simulation often utilize a batch method for code execution using file-formatted input/output. This method of code implementation was generally chosen for several factors, including CPU throughput and availability, complexity of the required modeling problem, and presentation of computation results. With the advent of faster computer hardware and the advances in networking and software techniques, other program architectures for accelerator modeling have recently been employed. Jefferson Laboratory has implemented a client/server solution for accelerator beam transport modeling utilizing a query-based I/O. The goal of this code is to provide modeling information for control system applications and to serve as a computation engine for general modeling tasks, such as machine studies. This paper performs a comparison between the batch execution and server/client architectures, focusing on design and implementation issues, performance, and general utility towards accelerator modeling demands

  18. Accelerating simulation for the multiple-point statistics algorithm using vector quantization

    Science.gov (United States)

    Zuo, Chen; Pan, Zhibin; Liang, Hao

    2018-03-01

    Multiple-point statistics (MPS) is a prominent algorithm to simulate categorical variables based on a sequential simulation procedure. Assuming training images (TIs) as prior conceptual models, MPS extracts patterns from TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerating simulation for MPS using vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables applicable for vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproductions, and spatial uncertainty. Further demonstrations consist of a 2D four facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that our proposed method is also capable of solving multifacies, nonstationarity, and 3D simulations based on 2D TIs.
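    The core compression step described above is ordinary vector quantization of template patterns. The following Python sketch (an illustration on assumed toy data, not the authors' VQ-MPS implementation) builds a codebook from patterns extracted from a binary training image and retrieves the nearest codeword for a query pattern:

```python
# Illustrative sketch only: k-means vector quantization of template patterns
# taken from a 2D categorical training image, plus nearest-codeword retrieval.
import numpy as np
from scipy.cluster.vq import kmeans2

def extract_patterns(ti, half=1):
    """Collect all (2*half+1)^2 patterns from a 2D categorical training image."""
    pats = []
    for i in range(half, ti.shape[0] - half):
        for j in range(half, ti.shape[1] - half):
            pats.append(ti[i - half:i + half + 1, j - half:j + half + 1].ravel())
    return np.array(pats, dtype=float)

rng = np.random.default_rng(0)
ti = (rng.random((64, 64)) > 0.7).astype(int)       # toy binary training image
patterns = extract_patterns(ti)
codebook, _ = kmeans2(patterns, k=32, minit='points')  # VQ: 32 codewords

def nearest_codeword(query):
    """Return the codeword closest (Euclidean) to a query pattern."""
    d = np.linalg.norm(codebook - query, axis=1)
    return codebook[np.argmin(d)]

print(nearest_codeword(patterns[0]).reshape(3, 3).round())
```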

  19. An investigation of the efficiency in simulating 6 MV medical accelerator using OMEGA/BEAM

    International Nuclear Information System (INIS)

    Dai Zhenhui; Wang Xuetao; Zhu Lin; Zhang Yu; Liu Xiaowei

    2013-01-01

    Background: Monte Carlo simulation techniques are presently considered to be the most reliable method for radiation therapy treatment planning. However, long simulation times involved when using the general-purpose Monte Carlo code systems have led to the development of special-purpose Monte Carlo programs. Purpose: This paper attempts to improve computing efficiency for dose calculation in the EGSnrc modeling of clinical linear accelerator by selecting proper parameters. Methods: Several variance reduction techniques including uniform bremsstrahlung splitting, selective bremsstrahlung splitting, directional bremsstrahlung splitting are applied in BEAMnrc simulating medical accelerator treatment head to generate phase-space file which is selected as a source for DOSXYZnrc simulation, both photon splitting and particle recycling are used to improve the efficiency in the calculation of dose profile in water phantom. Results: The splitting number for maximum efficiency in directional bremsstrahlung splitting (no electron splitting) is 2500 in the BEAMnrc simulation. The highest efficiency of DOSXYZnrc simulation is given when photon splitting number is set to 40. Conclusions: Efficiency can be significantly improved by setting appropriate bremsstrahlung splitting and optimized photon splitting number and particle recycling number. (authors)
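    The efficiency referred to in such Monte Carlo optimization studies is commonly the figure of merit ε = 1/(s²·T), where s² is the variance of the scored quantity and T the computation time; the short sketch below illustrates this definition with placeholder numbers (assumed, not taken from the paper):

```python
# Hedged illustration of the usual Monte Carlo efficiency figure of merit
# used when tuning splitting parameters; inputs below are placeholders.
def mc_efficiency(variance, cpu_time_s):
    return 1.0 / (variance * cpu_time_s)

# Comparing two hypothetical splitting settings:
base = mc_efficiency(variance=4.0e-4, cpu_time_s=3600.0)
tuned = mc_efficiency(variance=2.5e-4, cpu_time_s=4100.0)
print(f"relative efficiency gain: {tuned / base:.2f}x")
```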

  20. Radiation belt electron acceleration during the 17 March 2015 geomagnetic storm: Observations and simulations

    International Nuclear Information System (INIS)

    Li, W.; Ma, Q.; Thorne, R. M.; Bortnik, J.; Zhang, X.-J.

    2016-01-01

    Various physical processes are known to cause acceleration, loss, and transport of energetic electrons in the Earth's radiation belts, but their quantitative roles in different time and space need further investigation. During the largest storm over the past decade (17 March 2015), relativistic electrons experienced fairly rapid acceleration up to ~7 MeV within 2 days after an initial substantial dropout, as observed by Van Allen Probes. In the present paper, we evaluate the relative roles of various physical processes during the recovery phase of this large storm using a 3-D diffusion simulation. By quantitatively comparing the observed and simulated electron evolution, we found that chorus plays a critical role in accelerating electrons up to several MeV near the developing peak location and produces characteristic flat-top pitch angle distributions. By only including radial diffusion, the simulation underestimates the observed electron acceleration, while radial diffusion plays an important role in redistributing electrons and potentially accelerates them to even higher energies. Moreover, plasmaspheric hiss is found to provide efficient pitch angle scattering losses for hundreds of keV electrons, while its scattering effect on > 1 MeV electrons is relatively slow. Although an additional loss process is required to fully explain the overestimated electron fluxes at multi-MeV, the combined physical processes of radial diffusion and pitch angle and energy diffusion by chorus and hiss reproduce the observed electron dynamics remarkably well, suggesting that quasi-linear diffusion theory is reasonable to evaluate radiation belt electron dynamics during this big storm.

  1. Energy loss of a high charge bunched electron beam in plasma: Simulations, scaling, and accelerating wakefields

    Directory of Open Access Journals (Sweden)

    J. B. Rosenzweig

    2004-06-01

    Full Text Available The energy loss and gain of a beam in the nonlinear, “blowout” regime of the plasma wakefield accelerator, which features ultrahigh accelerating fields, linear transverse focusing forces, and nonlinear plasma motion, has been asserted, through previous observations in simulations, to scale linearly with beam charge. Additionally, from a recent analysis by Barov et al., it has been concluded that for an infinitesimally short beam, the energy loss is indeed predicted to scale linearly with beam charge for arbitrarily large beam charge. This scaling is predicted to hold despite the onset of a relativistic, nonlinear response by the plasma, when the number of beam particles occupying a cubic plasma skin depth exceeds that of plasma electrons within the same volume. This paper is intended to explore the deviations from linear energy loss using 2D particle-in-cell simulations that arise in the case of experimentally relevant finite length beams. The peak accelerating field in the plasma wave excited behind the finite-length beam is also examined, with the artifact of wave spiking adding to the apparent persistence of linear scaling of the peak field amplitude into the nonlinear regime. At large enough normalized charge, the linear scaling of both decelerating and accelerating fields collapses, with serious consequences for plasma wave excitation efficiency. Using the results of parametric particle-in-cell studies, the implications of these results for observing severe deviations from linear scaling in present and planned experiments are discussed.

  2. Magnetic-Island Contraction and Particle Acceleration in Simulated Eruptive Solar Flares

    Science.gov (United States)

    Guidoni, S. E.; Devore, C. R.; Karpen, J. T.; Lynch, B. J.

    2016-01-01

    The mechanism that accelerates particles to the energies required to produce the observed high-energy impulsive emission in solar flares is not well understood. Drake et al. proposed a mechanism for accelerating electrons in contracting magnetic islands formed by kinetic reconnection in multi-layered current sheets (CSs). We apply these ideas to sunward-moving flux ropes (2.5D magnetic islands) formed during fast reconnection in a simulated eruptive flare. A simple analytic model is used to calculate the energy gain of particles orbiting the field lines of the contracting magnetic islands in our ultrahigh-resolution 2.5D numerical simulation. We find that the estimated energy gains in a single island range up to a factor of five. This is higher than that found by Drake et al. for islands in the terrestrial magnetosphere and at the heliopause, due to strong plasma compression that occurs at the flare CS. In order to increase their energy by two orders of magnitude and plausibly account for the observed high-energy flare emission, the electrons must visit multiple contracting islands. This mechanism should produce sporadic emission because island formation is intermittent. Moreover, a large number of particles could be accelerated in each magnetohydrodynamic-scale island, which may explain the inferred rates of energetic-electron production in flares. We conclude that island contraction in the flare CS is a promising candidate for electron acceleration in solar eruptions.

  3. Radial basis function neural network in fault detection of automotive ...

    African Journals Online (AJOL)

    Radial basis function neural network in fault detection of automotive engines. ... Five faults have been simulated on the MVEM, including three sensor faults, one component fault and one actuator fault. The three sensor faults ... Keywords: Automotive engine, independent RBFNN model, RBF neural network, fault detection

  4. Load management strategy for Particle-In-Cell simulations in high energy particle acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Beck, A., E-mail: beck@llr.in2p3.fr [Laboratoire Leprince-Ringuet, École polytechnique, CNRS-IN2P3, Palaiseau 91128 (France); Frederiksen, J.T. [Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, 2100 København Ø (Denmark); Dérouillat, J. [CEA, Maison de La Simulation, 91400 Saclay (France)

    2016-09-01

    In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performances. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.

  5. Benchmark of Space Charge Simulations and Comparison with Experimental Results for High Intensity, Low Energy Accelerators

    CERN Document Server

    Cousineau, Sarah M

    2005-01-01

    Space charge effects are a major contributor to beam halo and emittance growth leading to beam loss in high intensity, low energy accelerators. As future accelerators strive towards unprecedented levels of beam intensity and beam loss control, a more comprehensive understanding of space charge effects is required. A wealth of simulation tools have been developed for modeling beams in linacs and rings, and with the growing availability of high-speed computing systems, computationally expensive problems that were inconceivable a decade ago are now being handled with relative ease. This has opened the field for realistic simulations of space charge effects, including detailed benchmarks with experimental data. A great deal of effort is being focused in this direction, and several recent benchmark studies have produced remarkably successful results. This paper reviews the achievements in space charge benchmarking in the last few years, and discusses the challenges that remain.

  6. Object-Oriented Parallel Particle-in-Cell Code for Beam Dynamics Simulation in Linear Accelerators

    International Nuclear Information System (INIS)

    Qiang, J.; Ryne, R.D.; Habib, S.; Decky, V.

    1999-01-01

    In this paper, we present an object-oriented three-dimensional parallel particle-in-cell code for beam dynamics simulation in linear accelerators. A two-dimensional parallel domain decomposition approach is employed within a message passing programming paradigm along with a dynamic load balancing. Implementing object-oriented software design provides the code with better maintainability, reusability, and extensibility compared with conventional structure based code. This also helps to encapsulate the details of communications syntax. Performance tests on SGI/Cray T3E-900 and SGI Origin 2000 machines show good scalability of the object-oriented code. Some important features of this code also include employing symplectic integration with linear maps of external focusing elements and using z as the independent variable, typical in accelerators. A successful application was done to simulate beam transport through three superconducting sections in the APT linac design

  7. LIAR -- A new program for the modeling and simulation of linear accelerators with high gradients and small emittances

    International Nuclear Information System (INIS)

    Assmann, R.; Adolphsen, C.; Bane, K.; Raubenheimer, T.O.; Siemann, R.; Thompson, K.

    1996-09-01

    Linear accelerators are the central components of the proposed next generation of linear colliders. They need to provide acceleration of up to 750 GeV per beam while maintaining very small normalized emittances. Standard simulation programs, mainly developed for storage rings, do not meet the specific requirements for high energy linear accelerators. The authors present a new program LIAR (LInear Accelerator Research code) that includes wakefield effects, a 4D coupled beam description, specific optimization algorithms and other advanced features. Its modular structure allows it to be used and extended easily for different purposes. They present examples of simulations for SLC and NLC.

  8. Parallel, Multigrid Finite Element Simulator for Fractured/Faulted and Other Complex Reservoirs based on Common Component Architecture (CCA)

    Energy Technology Data Exchange (ETDEWEB)

    Milind Deo; Chung-Kan Huang; Huabing Wang

    2008-08-31

    Black-oil, compositional and thermal simulators have been developed to address different physical processes in reservoir simulation. A number of different types of discretization methods have also been proposed to address issues related to representing the complex reservoir geometry. These methods are more significant for fractured reservoirs where the geometry can be particularly challenging. In this project, a general modular framework for reservoir simulation was developed, wherein the physical models were efficiently decoupled from the discretization methods. This made it possible to couple any discretization method with different physical models. Oil characterization methods are becoming increasingly sophisticated, and it is possible to construct geologically constrained models of faulted/fractured reservoirs. Discrete Fracture Network (DFN) simulation provides the option of performing multiphase calculations on spatially explicit, geologically feasible fracture sets. Multiphase DFN simulations of and sensitivity studies on a wide variety of fracture networks created using fracture creation/simulation programs was undertaken in the first part of this project. This involved creating interfaces to seamlessly convert the fracture characterization information into simulator input, grid the complex geometry, perform the simulations, and analyze and visualize results. Benchmarking and comparison with conventional simulators was also a component of this work. After demonstration of the fact that multiphase simulations can be carried out on complex fracture networks, quantitative effects of the heterogeneity of fracture properties were evaluated. Reservoirs are populated with fractures of several different scales and properties. A multiscale fracture modeling study was undertaken and the effects of heterogeneity and storage on water displacement dynamics in fractured basements were investigated. In gravity-dominated systems, more oil could be recovered at a given pore

  9. Numerical simulation of spin motion in circular accelerators using spinor formulation

    International Nuclear Information System (INIS)

    Nghiem, P.; Tkatchenko, A.

    1992-07-01

    A simple method is presented based on spinor algebra formalism for tracking the spin motion in circular accelerators. Using an analytical expression of the one-turn transformation matrix including the effects of perturbing fields or of Siberian snakes, a simple and very fast numerical code has been written for studying spin motion in various circumstances. In particular, effects of synchrotron oscillations on final polarization after one isolated resonance crossing are simulated. Results of these calculations agree very well with those which have been obtained previously from analytical approaches or from other numerical-simulation programs. (author) 8 refs.; 14 figs

  10. BEAMPATH: a program library for beam dynamics simulation in linear accelerators

    International Nuclear Information System (INIS)

    Batygin, Y.K.

    1992-01-01

    A structured programming technique was used to develop software for space charge dominated beams investigation in linear accelerators. The method includes hierarchical program design using program independent modules and a flexible combination of modules to provide a most effective version of structure for every specific case of simulation. A modular program BEAMPATH was developed for 2D and 3D particle-in-cell simulation of beam dynamics in a structure containing RF gaps, radio-frequency quadrupoles (RFQ), multipole lenses, waveguides, bending magnets and solenoids. (author) 5 refs.; 2 figs

  11. Particle-in-Cell Code BEAMPATH for Beam Dynamics Simulations in Linear Accelerators and Beamlines

    International Nuclear Information System (INIS)

    Batygin, Y.

    2004-01-01

    A code library BEAMPATH for 2 - dimensional and 3 - dimensional space charge dominated beam dynamics study in linear particle accelerators and beam transport lines is developed. The program is used for particle-in-cell simulation of axial-symmetric, quadrupole-symmetric and z-uniform beams in a channel containing RF gaps, radio-frequency quadrupoles, multipole lenses, solenoids and bending magnets. The programming method includes hierarchical program design using program-independent modules and a flexible combination of modules to provide the most effective version of the structure for every specific case of simulation. Numerical techniques as well as the results of beam dynamics studies are presented

  12. Particle-in-Cell Code BEAMPATH for Beam Dynamics Simulations in Linear Accelerators and Beamlines

    Energy Technology Data Exchange (ETDEWEB)

    Batygin, Y.

    2004-10-28

    A code library BEAMPATH for 2 - dimensional and 3 - dimensional space charge dominated beam dynamics study in linear particle accelerators and beam transport lines is developed. The program is used for particle-in-cell simulation of axial-symmetric, quadrupole-symmetric and z-uniform beams in a channel containing RF gaps, radio-frequency quadrupoles, multipole lenses, solenoids and bending magnets. The programming method includes hierarchical program design using program-independent modules and a flexible combination of modules to provide the most effective version of the structure for every specific case of simulation. Numerical techniques as well as the results of beam dynamics studies are presented.

  13. An Accelerating Solution for N-Body MOND Simulation with FPGA-SoC

    Directory of Open Access Journals (Sweden)

    Bo Peng

    2016-01-01

    Full Text Available As a modified-gravity proposal to handle the dark matter problem on galactic scales, Modified Newtonian Dynamics (MOND) has shown great success. However, N-body MOND simulation is strongly challenged by its computational complexity, which calls for acceleration of the simulation. In this paper, we present a highly integrated accelerating solution for N-body MOND simulations. By using the FPGA-SoC, which integrates both FPGA and SoC (system on chip) in one chip, our solution exhibits potential for better performance, higher integration, and lower power consumption. To handle the calculation bottleneck of potential summation, on one hand, we develop a strategy to simplify the pipeline, in which the square calculation task is conducted by the DSP48E1 of Xilinx 7 series FPGAs, so as to reduce the logic resource utilization of each pipeline; on the other hand, advantage of the particle-mesh scheme is taken to overcome the bandwidth bottleneck. Our experimental results show that two more pipelines can be integrated in the Zynq-7020 FPGA-SoC with the simplified pipeline, and the bandwidth requirement is reduced significantly. Furthermore, our accelerating solution holds advantages over a range of other processors. Compared with GPU, our work is about 10 times better in performance per watt and 50% better in performance per cost.

  14. 3D electromagnetic simulation of spatial autoresonance acceleration of electron beams

    International Nuclear Information System (INIS)

    Dugar-Zhabon, V D; Orozco, E A; González, J D

    2016-01-01

    The results of full electromagnetic simulations of electron beam acceleration by a TE112 linearly polarized electromagnetic field through the Space Autoresonance Acceleration mechanism are presented. In the simulations, both the self-sustained electric field and the self-sustained magnetic field produced by the beam electrons are included in the elaborated 3D Particle-in-Cell code. In this system, the space profile of the magnetostatic field maintains the electron beams in the acceleration regime along their trajectories. The beam current density evolution is calculated applying the charge conservation method. The full magnetic field at the superparticle positions is found by employing trilinear interpolation of the mesh node data. The relativistic Newton-Lorentz equation, presented in centered finite difference form, is solved using the Boris algorithm, which provides visualization of the beam electron pathways and energy evolution. A comparison between the data obtained from the full electromagnetic simulations and the results derived from the equation of motion in an electrostatic approximation is carried out. It is found that the self-sustained magnetic field is a factor which improves the resonance phase conditions and reduces the beam energy spread. (paper)

  15. Simulations and measurements of coupling impedance for modern particle accelerator devices

    CERN Document Server

    AUTHOR|(CDS)2158523; Biancacci, Nicolò; Mostacci, Andrea

    This document treats the study of the coupling impedance of modern devices, whether already installed or not, in different particle accelerators. Specifically: • For a device still in the design phase, several simulations for impedance calculation have been performed. • For a component already built and in use, measurements of the coupling impedance have been carried out. Simulations are used to determine the impact of the interconnect between two magnets, designed for the future particle accelerator FCC, on the overall impedance of the machine, which is about 100 km long. In particular, a cross-check between theory, simulations and measurements of components already built has been carried out, allowing a better and deeper study of the component analysed; such checks should provide a clear guideline for future work. The measurements instead concern an existing component that was already used in the LHC, the longest particle accelerator ever realised, 27 km long. The coupling impe...

  16. Computer simulations of a single-laser double-gas-jet wakefield accelerator concept

    Directory of Open Access Journals (Sweden)

    R. G. Hemker

    2002-04-01

    Full Text Available We report in this paper on full scale 2D particle-in-cell simulations investigating laser wakefield acceleration. First we describe our findings of electron beam generation by a laser propagating through a single gas jet. Using realistic parameters which are relevant for the experimental setup in our laboratory we find that the electron beam resulting after the propagation of a 0.8 μm, 50 fs laser through a 1.5 mm gas jet has properties that would make it useful for further acceleration. Our simulations show that the electron beam is generated when the laser exits the gas jet, and the properties of the generated beam, especially its energy, depend only weakly on most properties of the gas jet. We therefore propose to use the first gas jet as a plasma cathode and then use a second gas jet placed immediately behind the first to provide additional acceleration. Our simulations of this proposed setup indicate the feasibility of this idea and also suggest ways to optimize the quality of the resulting beam.

  17. Using Equation-Free Computation to Accelerate Network-Free Stochastic Simulation of Chemical Kinetics.

    Science.gov (United States)

    Lin, Yen Ting; Chylek, Lily A; Lemons, Nathan W; Hlavacek, William S

    2018-06-21

    The chemical kinetics of many complex systems can be concisely represented by reaction rules, which can be used to generate reaction events via a kinetic Monte Carlo method that has been termed network-free simulation. Here, we demonstrate accelerated network-free simulation through a novel approach to equation-free computation. In this process, variables are introduced that approximately capture system state. Derivatives of these variables are estimated using short bursts of exact stochastic simulation and finite differencing. The variables are then projected forward in time via a numerical integration scheme, after which a new exact stochastic simulation is initialized and the whole process repeats. The projection step increases efficiency by bypassing the firing of numerous individual reaction events. As we show, the projected variables may be defined as populations of building blocks of chemical species. The maximal number of connected molecules included in these building blocks determines the degree of approximation. Equation-free acceleration of network-free simulation is found to be both accurate and efficient.
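    The projection idea can be illustrated on a toy birth-death process: run short exact Gillespie bursts, estimate the time derivative of the coarse variable by finite differencing, then leap forward. The following Python sketch is a simplified illustration of that loop (rates, burst lengths and replica counts are assumed), not the authors' network-free implementation:

```python
# Rough sketch of coarse projective integration on a toy birth-death process
# (A -> 2A at rate b*n, A -> 0 at rate d*n); short exact bursts + projection.
import numpy as np

rng = np.random.default_rng(1)
b, d = 1.0, 1.2          # birth and death rate constants (illustrative)

def ssa_burst(n0, t_burst):
    """Exact Gillespie simulation of the birth-death process for t_burst."""
    n, t = n0, 0.0
    while t < t_burst and n > 0:
        rates = np.array([b * n, d * n])
        total = rates.sum()
        t += rng.exponential(1.0 / total)          # exponential waiting time
        n += 1 if rng.random() < rates[0] / total else -1
    return n

def projective_step(n, t_burst=0.05, t_project=0.5, n_replicas=200):
    """Estimate d<n>/dt from short bursts, then project the mean forward."""
    after = np.mean([ssa_burst(n, t_burst) for _ in range(n_replicas)])
    slope = (after - n) / t_burst
    return max(after + slope * t_project, 0.0)

n = 500.0
for step in range(10):
    n = projective_step(int(round(n)))
    print(step, n)
```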

  18. LAMPF first-fault identifier for fast transient faults

    International Nuclear Information System (INIS)

    Swanson, A.R.; Hill, R.E.

    1979-01-01

    The LAMPF accelerator is presently producing 800-MeV proton beams at 0.5 mA average current. Machine protection for such a high-intensity accelerator requires a fast shutdown mechanism, which can turn off the beam within a few microseconds of the occurrence of a machine fault. The resulting beam unloading transients cause the rf systems to exceed control loop tolerances and consequently generate multiple fault indications for identification by the control computer. The problem is to isolate the primary fault or cause of beam shutdown while disregarding as many as 50 secondary fault indications that occur as a result of beam shutdown. The LAMPF First-Fault Identifier (FFI) for fast transient faults is operational and has proven capable of first-fault identification. The FFI design utilized features of the Fast Protection System that were previously implemented for beam chopping and rf power conservation. No software changes were required

  19. PEM Fuel Cells with Bio-Ethanol Processor Systems A Multidisciplinary Study of Modelling, Simulation, Fault Diagnosis and Advanced Control

    CERN Document Server

    Feroldi, Diego; Outbib, Rachid

    2012-01-01

    An apparently appropriate control scheme for PEM fuel cells may actually lead to an inoperable plant when it is connected to other unit operations in a process with recycle streams and energy integration. PEM Fuel Cells with Bio-Ethanol Processor Systems presents a control system design that provides basic regulation of the hydrogen production process with PEM fuel cells. It then goes on to construct a fault diagnosis system to improve plant safety above this control structure. PEM Fuel Cells with Bio-Ethanol Processor Systems is divided into two parts: the first covers fuel cells and the second discusses plants for hydrogen production from bio-ethanol to feed PEM fuel cells. Both parts give detailed analyses of modeling, simulation, advanced control, and fault diagnosis. They give an extensive, in-depth discussion of the problems that can occur in fuel cell systems and propose a way to control these systems through advanced control algorithms. A significant part of the book is also given over to computer-aid...

  20. Automated detection and analysis of particle beams in laser-plasma accelerator simulations

    International Nuclear Information System (INIS)

    Ushizima, Daniela Mayumi; Geddes, C.G.; Cormier-Michel, E.; Bethel, E. Wes; Jacobsen, J.; Prabhat; Ruebel, O.; Weber, G.; Hamann, B.

    2010-01-01

    Numerical simulations of laser-plasma wakefield (particle) accelerators model the acceleration of electrons trapped in plasma oscillations (wakes) left behind when an intense laser pulse propagates through the plasma. The goal of these simulations is to better understand the process involved in plasma wake generation and how electrons are trapped and accelerated by the wake. Understanding of such accelerators, and their development, offer high accelerating gradients, potentially reducing size and cost of new accelerators. One operating regime of interest is where a trapped subset of electrons loads the wake and forms an isolated group of accelerated particles with low spread in momentum and position, desirable characteristics for many applications. The electrons trapped in the wake may be accelerated to high energies, the plasma gradient in the wake reaching up to a gigaelectronvolt per centimeter. High-energy electron accelerators power intense X-ray radiation to terahertz sources, and are used in many applications including medical radiotherapy and imaging. To extract information from the simulation about the quality of the beam, a typical approach is to examine plots of the entire dataset, visually determining the adequate parameters necessary to select a subset of particles, which is then further analyzed. This procedure requires laborious examination of massive data sets over many time steps using several plots, a routine that is unfeasible for large data collections. Demand for automated analysis is growing along with the volume and size of simulations. Current 2D LWFA simulation datasets are typically between 1GB and 100GB in size, but simulations in 3D are of the order of TBs. The increase in the number of datasets and dataset sizes leads to a need for automatic routines to recognize particle patterns as particle bunches (beam of electrons) for subsequent analysis. Because of the growth in dataset size, the application of machine learning techniques for

  1. Acceleration of coupled granular flow and fluid flow simulations in pebble bed energy systems

    International Nuclear Information System (INIS)

    Li, Yanheng; Ji, Wei

    2013-01-01

    Highlights: ► Fast simulation of coupled pebble flow and coolant flow in PBR systems is studied. ► Dimension reduction based on axisymmetric geometry shows significant speedup. ► Relaxation of coupling frequency is investigated and an optimal range is determined. ► A total of 80% efficiency increase is achieved by the two fast strategies. ► Fast strategies can be applied to simulating other general fluidized bed systems. -- Abstract: Fast and accurate approaches to simulating the coupled particle flow and fluid flow are of importance to the analysis of large particle-fluid systems. This is especially needed when one tries to simulate pebble flow and coolant flow in Pebble Bed Reactor (PBR) energy systems on a routine basis. As one of the Generation IV designs, the PBR design is a promising nuclear energy system with high fuel performance and inherent safety. A typical PBR core can be modeled as a particle-fluid system with strong interactions among pebbles, coolants and reactor walls. In previous works, the coupled Discrete Element Method (DEM)-Computational Fluid Dynamics (CFD) approach has been investigated and applied to modeling PBR systems. However, the DEM-CFD approach is computationally expensive due to large amounts of pebbles in PBR systems. This greatly restricts the PBR analysis for the real time prediction and inclusion of more physics. In this work, based on the symmetry of the PBR geometry and the slow motion characteristics of the pebble flow, two acceleration strategies are proposed. First, a simplified 3D-DEM/2D-CFD approach is proposed to speed up the DEM-CFD simulation without loss of accuracy. Pebble flow is simulated by a full 3D DEM, while the coolant flow field is calculated with a 2D CFD simulation by averaging variables along the annular direction in the cylindrical and annular geometries. Second, based on the slow motion of pebble flow, the impact of the coupling frequency on the computation accuracy and efficiency is

  2. Acceleration of coupled granular flow and fluid flow simulations in pebble bed energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanheng, E-mail: liy19@rpi.edu [Department of Mechanical, Aerospace, and Nuclear Engineering, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY (United States); Ji, Wei, E-mail: jiw2@rpi.edu [Department of Mechanical, Aerospace, and Nuclear Engineering, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY (United States)

    2013-05-15

    Highlights: ► Fast simulation of coupled pebble flow and coolant flow in PBR systems is studied. ► Dimension reduction based on axisymmetric geometry shows significant speedup. ► Relaxation of coupling frequency is investigated and an optimal range is determined. ► A total of 80% efficiency increase is achieved by the two fast strategies. ► Fast strategies can be applied to simulating other general fluidized bed systems. -- Abstract: Fast and accurate approaches to simulating the coupled particle flow and fluid flow are of importance to the analysis of large particle-fluid systems. This is especially needed when one tries to simulate pebble flow and coolant flow in Pebble Bed Reactor (PBR) energy systems on a routine basis. As one of the Generation IV designs, the PBR design is a promising nuclear energy system with high fuel performance and inherent safety. A typical PBR core can be modeled as a particle-fluid system with strong interactions among pebbles, coolants and reactor walls. In previous works, the coupled Discrete Element Method (DEM)-Computational Fluid Dynamics (CFD) approach has been investigated and applied to modeling PBR systems. However, the DEM-CFD approach is computationally expensive due to large amounts of pebbles in PBR systems. This greatly restricts the PBR analysis for the real time prediction and inclusion of more physics. In this work, based on the symmetry of the PBR geometry and the slow motion characteristics of the pebble flow, two acceleration strategies are proposed. First, a simplified 3D-DEM/2D-CFD approach is proposed to speed up the DEM-CFD simulation without loss of accuracy. Pebble flow is simulated by a full 3D DEM, while the coolant flow field is calculated with a 2D CFD simulation by averaging variables along the annular direction in the cylindrical and annular geometries. Second, based on the slow motion of pebble flow, the impact of the coupling frequency on the computation accuracy and efficiency is

  3. Effect of Re on stacking fault nucleation under shear strain in Ni by atomistic simulation

    International Nuclear Information System (INIS)

    Liu Zheng-Guang; Wang Chong-Yu; Yu Tao

    2014-01-01

    The effect of Re on stacking fault (SF) nucleation under shear strain in Ni is investigated using the climbing image nudged elastic band method with a Ni-Al-Re embedded-atom-method potential. A parameter (ΔE_sf^b), the activation energy of SF nucleation under shear strain, is introduced to evaluate the effect of Re on SF nucleation under shear strain. Calculation results show that ΔE_sf^b decreases with Re addition, which means that SF nucleation under shear strain in Ni may be enhanced by Re. The atomic structure observation shows that the decrease of ΔE_sf^b may be due to the expansion of local structure around the Re atom when SF goes through the Re atom. (rapid communication)

  4. Sensitivity of tsunami wave profiles and inundation simulations to earthquake slip and fault geometry for the 2011 Tohoku earthquake

    KAUST Repository

    Goda, Katsuichiro; Mai, Paul Martin; Yasuda, Tomohiro; Mori, Nobuhito

    2014-01-01

    In this study, we develop stochastic random-field slip models for the 2011 Tohoku earthquake and conduct a rigorous sensitivity analysis of tsunami hazards with respect to the uncertainty of earthquake slip and fault geometry. Synthetic earthquake slip distributions generated from the modified Mai-Beroza method captured key features of inversion-based source representations of the mega-thrust event, which were calibrated against rich geophysical observations of this event. Using original and synthesised earthquake source models (varied for strike, dip, and slip distributions), tsunami simulations were carried out and the resulting variability in tsunami hazard estimates was investigated. The results highlight significant sensitivity of the tsunami wave profiles and inundation heights to the coastal location and the slip characteristics, and indicate that earthquake slip characteristics are a major source of uncertainty in predicting tsunami risks due to future mega-thrust events.

  5. Sensitivity of tsunami wave profiles and inundation simulations to earthquake slip and fault geometry for the 2011 Tohoku earthquake

    KAUST Repository

    Goda, Katsuichiro

    2014-09-01

    In this study, we develop stochastic random-field slip models for the 2011 Tohoku earthquake and conduct a rigorous sensitivity analysis of tsunami hazards with respect to the uncertainty of earthquake slip and fault geometry. Synthetic earthquake slip distributions generated from the modified Mai-Beroza method captured key features of inversion-based source representations of the mega-thrust event, which were calibrated against rich geophysical observations of this event. Using original and synthesised earthquake source models (varied for strike, dip, and slip distributions), tsunami simulations were carried out and the resulting variability in tsunami hazard estimates was investigated. The results highlight significant sensitivity of the tsunami wave profiles and inundation heights to the coastal location and the slip characteristics, and indicate that earthquake slip characteristics are a major source of uncertainty in predicting tsunami risks due to future mega-thrust events.

  6. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Pareja, S. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario 'Carlos Haya', Avda. Carlos Haya, s/n, E-29010 Malaga (Spain)], E-mail: garciapareja@gmail.com; Vilches, M. [Servicio de Fisica y Proteccion Radiologica, Hospital Regional Universitario 'Virgen de las Nieves', Avda. de las Fuerzas Armadas, 2, E-18014 Granada (Spain); Lallena, A.M. [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)

    2007-09-21

    The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators of use in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and permits, in addition, to investigate the 'hot' regions of the accelerator, an information which is basic to develop a source model for this therapy tool.

  7. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators

    International Nuclear Information System (INIS)

    Garcia-Pareja, S.; Vilches, M.; Lallena, A.M.

    2007-01-01

    The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators of use in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and permits, in addition, to investigate the 'hot' regions of the accelerator, an information which is basic to develop a source model for this therapy tool
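    The two variance-reduction moves controlled by the ant colony scheme are standard. The sketch below shows generic, weight-conserving splitting and Russian roulette operations in Python (the ant colony update itself and any accelerator-specific importance map are not reproduced here):

```python
# Generic, unbiased variance-reduction moves: splitting and Russian roulette.
import random

def split(particle, n_split):
    """Split one particle into n_split copies, each with 1/n_split of the weight."""
    w = particle["weight"] / n_split
    return [dict(particle, weight=w) for _ in range(n_split)]

def russian_roulette(particle, survival_prob):
    """Kill the particle with probability 1 - survival_prob; otherwise boost
    its weight so the expected total weight is preserved."""
    if random.random() < survival_prob:
        particle["weight"] /= survival_prob
        return particle
    return None  # particle terminated

p = {"energy_MeV": 6.0, "weight": 1.0}
print(split(p, 4))
print(russian_roulette(dict(p), 0.5))
```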

  8. Simulation of 20-year deterioration of acrylic IOLs using severe accelerated deterioration tests.

    Science.gov (United States)

    Kawai, Kenji; Hayakawa, Kenji; Suzuki, Takahiro

    2012-09-20

    To investigate IOL deterioration by conducting severe accelerated deterioration testing of acrylic IOLs. Department of Ophthalmology, Tokai University School of Medicine. Methods: Severe accelerated deterioration tests performed on 7 types of acrylic IOLs simulated 20 years of deterioration. IOLs were placed in a screw tube bottle containing ultra-pure water and kept in an oven (100°C) for 115 days. Deterioration was determined based on the outer appearance of the IOL in water and under air-dried conditions using an optical microscope. For accelerated deterioration of polymeric material, the elapse of 115 days was considered to be equivalent to 20 years based on the Arrhenius equation. All of the IOLs in the hydrophobic acrylic group except for AU6 showed glistening-like opacity. The entire optical sections of MA60BM and SA60AT became yellowish white in color. Hydrophilic acrylic IOL HP60M showed no opacity at any of the time points examined. Our data based on accelerated testing showed differences in water content to play a major role in transparency. There were differences in opacity among manufacturers. The method we have used for determining the relative time of IOL deterioration might not represent the exact clinical setting, but the appearance of the materials would presumably be very similar to that seen in patients.
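    The time-equivalence quoted above rests on an Arrhenius-type acceleration factor between the test temperature and a use temperature. A rough Python illustration follows; the activation energy (0.65 eV) and the 37 °C use temperature are assumed values chosen for illustration, not parameters reported by the authors:

```python
# Hedged illustration of an Arrhenius acceleration factor; Ea and the use
# temperature below are assumed, illustrative values only.
import math

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(ea_ev, t_use_c, t_test_c):
    t_use, t_test = t_use_c + 273.15, t_test_c + 273.15
    return math.exp((ea_ev / K_B_EV) * (1.0 / t_use - 1.0 / t_test))

af = acceleration_factor(ea_ev=0.65, t_use_c=37.0, t_test_c=100.0)
print(f"acceleration factor: {af:.0f}")
print(f"115 test days correspond to ~{115 * af / 365:.0f} years at 37 degC")
```

    With these assumed values the factor comes out near 60, so 115 days at 100 °C correspond to roughly 19 years at 37 °C, broadly consistent with the 20-year equivalence stated in the abstract.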

  9. GPU acceleration of Monte Carlo simulations for polarized photon scattering in anisotropic turbid media.

    Science.gov (United States)

    Li, Pengcheng; Liu, Celong; Li, Xianpeng; He, Honghui; Ma, Hui

    2016-09-20

    In earlier studies, we developed scattering models and the corresponding CPU-based Monte Carlo simulation programs to study the behavior of polarized photons as they propagate through complex biological tissues. Studying the simulation results in high degrees of freedom that created a demand for massive simulation tasks. In this paper, we report a parallel implementation of the simulation program based on the compute unified device architecture running on a graphics processing unit (GPU). Different schemes for sphere-only simulations and sphere-cylinder mixture simulations were developed. Diverse optimizing methods were employed to achieve the best acceleration. The final-version GPU program is hundreds of times faster than the CPU version. Dependence of the performance on input parameters and precision were also studied. It is shown that using single precision in the GPU simulations results in very limited losses in accuracy. Consumer-level graphics cards, even those in laptop computers, are more cost-effective than scientific graphics cards for single-precision computation.

  10. Design, simulation and construction of quadrupole magnets for focusing electron beam in powerful industrial electron accelerator

    Directory of Open Access Journals (Sweden)

    S KH Mousavi

    2015-09-01

    Full Text Available In this paper, the design and simulation of quadrupole magnets and of the associated electron beam optics using the CST Studio code are studied. Based on the simulation results, the quadrupole magnet has been built for use in the beam line of the first Iranian high-power electron accelerator. To obtain a suitable magnetic field, the effects of material, core geometry and coil current variation on the quadrupole magnetic field have been studied. For the quadrupole magnet test, an input beam with 10 MeV energy and 0.5 pi mm mrad emittance has been considered. The electron beam passing through the quadrupole magnet is focused in one plane and defocused in the other. The optimum distance between two quadrupole magnets for low emittance has been determined. The simulation results are in good agreement with the experimental results.

  11. Simulation of through via bottom-up copper plating with accelerator for the filling of TSVs

    International Nuclear Information System (INIS)

    Wu Heng; Tang Zhen'an; Wang Zhu; Cheng Wan; Yu Daquan

    2013-01-01

    Filling high aspect ratio through silicon vias (TSVs) without voids and seams by copper plating is one of the technical challenges for 3D integration. Bottom-up copper plating is an effective solution for TSV filling. In this paper, a new numerical model was developed to simulate the electrochemical deposition (ECD) process, and the influence of an accelerator in the electrolyte was investigated. The arbitrary Lagrangian-Eulerian (ALE) method for solving moving boundaries in the finite element method (FEM) was used to simulate the electrochemical process. In the model, the diffusion coefficient and adsorption coefficient were considered, and then the time-resolved evolution of electroplating profiles was simulated with the ion concentration distribution and the electric current density. (semiconductor technology)

  12. Experimental validation of neutron activation simulation of a varian medical linear accelerator.

    Science.gov (United States)

    Morato, S; Juste, B; Miro, R; Verdu, G; Diez, S

    2016-08-01

    This work presents a Monte Carlo simulation using the latest version of MCNP, v. 6.1.1, of a Varian Clinac emitting a 15 MeV photon beam. The main objective of the work is to estimate the photoneutron production and activated products inside the medical linear accelerator head. To that end, the Varian linac head was modelled in detail using the manufacturer's information, and the model was generated with CAD software and exported as a mesh to be included in the particle transport simulation. The model includes the transport of photoneutrons generated by primary photons and the (n, γ) reactions which can result in activation products. The validation of this study was done using experimental measurements. Activation products have been identified by in situ gamma spectroscopy at the jaws exit of the linac shortly after termination of a high energy photon beam irradiation. Comparison between experimental and simulation results shows good agreement.

  13. A general CFD framework for fault-resilient simulations based on multi-resolution information fusion

    Science.gov (United States)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2017-10-01

    We develop a general CFD framework for multi-resolution simulations to target multiscale problems but also resilience in exascale simulations, where faulty processors may lead to gappy, in space-time, simulated fields. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g. coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained with fine and low resolution assuming gappy data sets. We investigate the influence of various parameters for this framework, including the correlation kernel, the size of a buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g. in continuum-atomistic simulations.
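    The gap-filling step described above amounts to statistical regression over the available (gappy) field values. As a loose stand-in for the coKriging used in the paper, the toy Python sketch below reconstructs a 1D field from incomplete samples with an off-the-shelf Gaussian-process regressor:

```python
# Toy sketch of gap filling by statistical regression (a stand-in for the
# coKriging step described in the abstract); data and kernel are assumed.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 40)[:, None]
field = np.sin(2 * np.pi * x).ravel()        # "true" coarse field
keep = rng.random(x.shape[0]) > 0.4          # simulate gappy observations
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-6)
gp.fit(x[keep], field[keep])
mean, std = gp.predict(x, return_std=True)   # reconstructed field + uncertainty
print(float(np.max(np.abs(mean - field))), float(std.max()))
```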

  14. Benchmarking shielding simulations for an accelerator-driven spallation neutron source

    Directory of Open Access Journals (Sweden)

    Nataliia Cherkashyna

    2015-08-01

    Full Text Available The shielding at an accelerator-driven spallation neutron facility plays a critical role in the performance of the neutron scattering instruments, the overall safety, and the total cost of the facility. Accurate simulation of shielding components is thus key for the design of upcoming facilities, such as the European Spallation Source (ESS), currently under construction in Lund, Sweden. In this paper, we present a comparative study between the measured and the simulated neutron background at the Swiss Spallation Neutron Source (SINQ) at the Paul Scherrer Institute (PSI), Villigen, Switzerland. The measurements were carried out at several positions along the SINQ monolith wall with the neutron dosimeter WENDI-2, which has a well-characterized response up to 5 GeV. The simulations were performed using the Monte-Carlo radiation transport code geant4, and include a complete transport from the proton beam to the measurement locations in a single calculation. Agreement between measurements and simulations is within about a factor of 2 for the points where the measured radiation dose is above the background level, which is a satisfactory result for such simulations spanning many energy regimes, different physics processes and transport through several meters of shielding materials. The neutrons contributing to the radiation field emanating from the monolith were confirmed to originate from neutrons with energies above 1 MeV in the target region. The current work validates geant4 as being well suited for deep-shielding calculations at accelerator-based spallation sources. We also extrapolate what the simulated flux levels might imply for short (several tens of meters) instruments at ESS.

  15. Self-optimized construction of transition rate matrices from accelerated atomistic simulations with Bayesian uncertainty quantification

    Science.gov (United States)

    Swinburne, Thomas D.; Perez, Danny

    2018-05-01

    A massively parallel method to build large transition rate matrices from temperature-accelerated molecular dynamics trajectories is presented. Bayesian Markov model analysis is used to estimate the expected residence time in the known state space, providing crucial uncertainty quantification for higher-scale simulation schemes such as kinetic Monte Carlo or cluster dynamics. The estimators are additionally used to optimize where exploration is performed and the degree of temperature acceleration on the fly, giving an autonomous, optimal procedure to explore the state space of complex systems. The method is tested against exactly solvable models and used to explore the dynamics of C15 interstitial defects in iron. Our uncertainty quantification scheme allows for accurate modeling of the evolution of these defects over timescales of several seconds.
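
    A minimal sketch of the rate-matrix assembly idea is given below, assuming a Gamma conjugate prior for the transition rates; the counts, residence times, and hyperparameters are illustrative and not taken from the paper:

    # Minimal sketch (illustrative, not the authors' code): build a transition rate
    # matrix from observed jump counts and state residence times, with a Gamma
    # conjugate prior giving Bayesian posterior-mean rate estimates.
    import numpy as np

    counts = np.array([[0, 5, 1],     # C[i, j]: observed jumps i -> j
                       [3, 0, 2],
                       [1, 4, 0]], dtype=float)
    residence = np.array([2.0, 1.5, 3.0])   # total time spent in each state

    alpha, beta = 0.5, 0.1                   # Gamma prior hyperparameters (assumed)
    rates = (counts + alpha) / (residence + beta)[:, None]
    np.fill_diagonal(rates, 0.0)

    Q = rates.copy()                         # continuous-time rate matrix
    np.fill_diagonal(Q, -rates.sum(axis=1))

    expected_residence = 1.0 / rates.sum(axis=1)   # mean time before leaving each state
    print("posterior-mean rate matrix:\n", Q)
    print("expected residence times:", expected_residence)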

  16. Monte Carlo simulation of a medical linear accelerator for radiotherapy use

    International Nuclear Information System (INIS)

    Serrano, B.; Hachem, A.; Franchisseur, E.; Herault, J.; Marcie, S.; Costa, A.; Bensadoun, R. J.; Barthe, J.; Gerard, J. P.

    2006-01-01

    A Monte Carlo code, MCNPX (Monte Carlo N-particle), was used to model a 25 MV photon beam from a PRIMUS (KD2-Siemens) medical linear electron accelerator at the Centre Antoine Lacassagne in Nice. The entire geometry, including the accelerator head and the water phantom, was simulated to calculate the dose profile and the relative depth-dose distribution. The measurements were done using an ionisation chamber in water for different square field sizes. The first results show that the mean electron beam energy is not 19 MeV as stated by Siemens. The adjustment between the Monte Carlo calculated and measured data is obtained when the mean electron beam energy is ∼15 MeV. These encouraging results will make it possible to check the calculated data given by the treatment planning system, especially for small fields in high-gradient heterogeneous zones, typical of the intensity-modulated radiation therapy technique. (authors)

  17. Kinetic Simulation of Fast Electron Transport with Ionization Effects and Ion Acceleration

    International Nuclear Information System (INIS)

    Robinson, A. P. L.; Bell, A. R.; Kingham, R. J.

    2005-01-01

    The generation of relativistic electrons and multi-MeV ions is central to ultra-intense (> 10^18 W cm^-2) laser-solid interactions. The production of energetic particles by lasers has a number of potential applications ranging from Fast Ignition ICF to medicine. In terms of the relativistic (fast) electrons, the physics of interest can be divided into three areas. Firstly there is the absorption of laser energy into fast electrons and MeV ions. Secondly there is the transport of fast electrons through the solid target. Finally there is a transduction stage, where the fast electron energy is imparted. This may range from the electrostatic acceleration of ions at a plasma-vacuum interface to the heating of a compressed core (as in Fast Ignitor ICF). We have used kinetic simulation codes to study the transport stage and electrostatic ion acceleration. (Author)

  18. Plasma accelerators

    International Nuclear Information System (INIS)

    Bingham, R.; Angelis, U. de; Johnston, T.W.

    1991-01-01

    Recently attention has focused on charged particle acceleration in a plasma by a fast, large amplitude, longitudinal electron plasma wave. The plasma beat wave and plasma wakefield accelerators are two efficient ways of producing ultra-high accelerating gradients. Starting with the plasma beat wave accelerator (PBWA), laser wakefield accelerator (LWFA) and plasma wakefield accelerator (PWFA) schemes, steady progress has been made in theory, simulations and experiments. Computations are presented for the study of the LWFA. (author)

  19. Enhanced quasi-static particle-in-cell simulation of electron cloud instabilities in circular accelerators

    Science.gov (United States)

    Feng, Bing

    Electron cloud instabilities have been observed in many circular accelerators around the world and have raised concerns for future accelerators and possible upgrades. In this thesis, the electron cloud instabilities are studied with the quasi-static particle-in-cell (PIC) code QuickPIC. Modeling in three dimensions the long-timescale propagation of a beam through electron clouds in circular accelerators requires faster and more efficient simulation codes. Thousands of processors are easily available for parallel computations. However, it is not straightforward to increase the effective speed of the simulation by running the same problem size on an increasing number of processors, because there is a limit to the domain size in the decomposition of the two-dimensional part of the code. A pipelining algorithm applied to the fully parallelized particle-in-cell code QuickPIC is implemented to overcome this limit. The pipelining algorithm uses multiple groups of processors and optimizes the job allocation on the processors in parallel computing. With this novel algorithm, it is possible to use on the order of 10^2 processors, and to expand the scale and the speed of the simulation with QuickPIC by a similar factor. In addition to the efficiency improvement with the pipelining algorithm, the fidelity of QuickPIC is enhanced by adding two physics models, the beam space charge effect and the dispersion effect. Simulation of two specific circular machines is performed with the enhanced QuickPIC. First, the proposed upgrade to the Fermilab Main Injector is studied with an eye toward guiding the design of the upgrade and code validation. Moderate emittance growth is observed for the proposed fivefold increase in bunch population, but the simulation also shows that increasing the beam energy from 8 GeV to 20 GeV or above can effectively limit the emittance growth. Then the enhanced QuickPIC is used to simulate the electron cloud effect on the electron beam in the Cornell Energy Recovery Linac

  20. Sustained Accelerated Idioventricular Rhythm in a Centrifuge-Simulated Suborbital Spaceflight.

    Science.gov (United States)

    Suresh, Rahul; Blue, Rebecca S; Mathers, Charles; Castleberry, Tarah L; Vanderploeg, James M

    2017-08-01

    Hypergravitational exposures during human centrifugation are known to provoke dysrhythmias, including sinus dysrhythmias/tachycardias, premature atrial/ventricular contractions, and even atrial fibrillation or flutter patterns. However, events are generally short-lived and resolve rapidly after cessation of acceleration. This case report describes a prolonged ectopic ventricular rhythm in response to high G exposure. A previously healthy 30-yr-old man voluntarily participated in centrifuge trials as a part of a larger study, experiencing a total of 7 centrifuge runs over 48 h. Day 1 consisted of two +Gz runs (peak +3.5 Gz, run 2) and two +Gx runs (peak +6.0 Gx, run 4). Day 2 consisted of three runs approximating suborbital spaceflight profiles (combined +Gx and +Gz). Hemodynamic data collected included blood pressure, heart rate, and continuous three-lead electrocardiogram. Following the final acceleration exposure of the last Day 2 run (peak +4.5 Gx and +4.0 Gz combined, resultant +6.0 G), during a period of idle resting centrifuge activity (resultant vector +1.4 G), the subject demonstrated a marked change in his three-lead electrocardiogram from normal sinus rhythm to a wide-complex ectopic ventricular rhythm at a rate of 91-95 bpm, consistent with an accelerated idioventricular rhythm (AIVR). This rhythm was sustained for 2 min 24 s before reversion to normal sinus rhythm. The subject reported no adverse symptoms during this time. While prolonged, the dysrhythmia was asymptomatic and self-limited. AIVR is likely a physiological response to acceleration and can be managed conservatively. Vigilance is needed to ensure that AIVR is correctly distinguished from other, malignant rhythms to avoid inappropriate treatment and negative operational impacts. Suresh R, Blue RS, Mathers C, Castleberry TL, Vanderploeg JM. Sustained accelerated idioventricular rhythm in a centrifuge-simulated suborbital spaceflight. Aerosp Med Hum Perform. 2017; 88(8):789-793.

  1. GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method

    International Nuclear Information System (INIS)

    Gong Chunye; Liu Jie; Chi Lihua; Huang Haowei; Fang Jingyue; Gong Zhenghu

    2011-01-01

    Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great faculty in solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution to the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with vacuum boundary conditions. The relative advantages and disadvantages of the GPU implementation, the simulation on multiple GPUs, the programming effort and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
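
    As a rough illustration of the underlying numerics (not the GPU Sweep3D kernel itself), a serial one-group, 1D slab discrete ordinates solver with diamond differencing and source iteration can be sketched as follows; the cross sections, source, and quadrature order are assumed:

    # Minimal CPU sketch (not the GPU Sweep3D code): one-group, 1D slab discrete
    # ordinates (S_N) transport with isotropic scattering, diamond differencing,
    # source iteration, and vacuum boundaries.
    import numpy as np

    nx, L = 100, 10.0
    dx = L / nx
    sigma_t, sigma_s, src = 1.0, 0.5, 1.0          # cross sections and uniform source
    mu, w = np.polynomial.legendre.leggauss(8)     # S_8 angular quadrature

    phi = np.zeros(nx)
    for it in range(200):
        q = 0.5 * (sigma_s * phi + src)            # isotropic emission density
        phi_new = np.zeros(nx)
        for m in range(len(mu)):
            psi_edge = 0.0                         # vacuum incoming flux
            cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
            for i in cells:
                a = abs(mu[m]) / dx
                psi_out = ((a - 0.5 * sigma_t) * psi_edge + q[i]) / (a + 0.5 * sigma_t)
                psi_cell = 0.5 * (psi_edge + psi_out)   # diamond-difference average
                phi_new[i] += w[m] * psi_cell
                psi_edge = psi_out
        diff = np.max(np.abs(phi_new - phi))
        phi = phi_new
        if diff < 1e-6:
            break
    print("source iterations:", it, "  midplane scalar flux:", phi[nx // 2])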

  2. Contact detection acceleration in pebble flow simulation for pebble bed reactor systems

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y.; Ji, W. [Department of Mechanical, Aerospace, and Nuclear Engineering Rensselaer, Polytechnic Institute, 110 8th street, Troy, NY 12180 (United States)

    2013-07-01

    Pebble flow simulation plays an important role in the steady state and transient analysis of thermal-hydraulics and neutronics for Pebble Bed Reactors (PBR). The Discrete Element Method (DEM) and the modified Molecular Dynamics (MD) method are widely used to simulate the pebble motion to obtain the distribution of pebble concentration, velocity, and maximum contact stress. Although DEM and MD provide high accuracy in the pebble flow simulation, they are quite computationally expensive due to the large number of pebbles to be simulated in a typical PBR and the ubiquitous contacts and collisions between neighboring pebbles that need to be detected frequently in the simulation, which greatly restricts their applicability to large scale PBR designs such as PBMR400. Since contact detection accounts for more than 60% of the overall CPU time in the pebble flow simulation, accelerating the contact detection can greatly enhance the overall efficiency. In the present work, based on the design features of PBRs, two contact detection algorithms, the basic cell search algorithm and the bounding box search algorithm, are investigated and applied to pebble contact detection. The influence of the PBR system size, core geometry and searching cell size on the contact detection efficiency is presented. Our results suggest that for present PBR applications, the bounding box algorithm is less sensitive to the aforementioned effects and has superior performance in pebble contact detection compared with the basic cell search algorithm. (authors)
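
    A minimal sketch of the basic cell search idea is given below; the pebble positions, radius, and cell sizing are illustrative, and the sketch ignores periodic boundaries and the reactor core geometry discussed in the paper:

    # Minimal sketch of the basic cell search idea (illustrative only): bin pebble
    # centres into cubic cells sized by the pebble diameter, then test contacts
    # only against pebbles in the same or neighbouring cells.
    import numpy as np
    from collections import defaultdict
    from itertools import product

    def detect_contacts(centres, radius):
        cell = 2.0 * radius                       # cell edge equals the pebble diameter
        grid = defaultdict(list)
        for idx, c in enumerate(centres):
            grid[tuple((c // cell).astype(int))].append(idx)

        contacts = []
        for key, members in grid.items():
            # gather candidates from this cell and its 26 neighbours
            cand = []
            for off in product((-1, 0, 1), repeat=3):
                cand.extend(grid.get(tuple(np.add(key, off)), []))
            for i in members:
                for j in cand:
                    if j > i and np.linalg.norm(centres[i] - centres[j]) < 2.0 * radius:
                        contacts.append((i, j))
        return contacts

    rng = np.random.default_rng(0)
    centres = rng.uniform(0.0, 1.0, size=(500, 3))   # toy pebble positions
    print("contact pairs found:", len(detect_contacts(centres, radius=0.03)))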

  3. Contact detection acceleration in pebble flow simulation for pebble bed reactor systems

    International Nuclear Information System (INIS)

    Li, Y.; Ji, W.

    2013-01-01

    Pebble flow simulation plays an important role in the steady state and transient analysis of thermal-hydraulics and neutronics for Pebble Bed Reactors (PBR). The Discrete Element Method (DEM) and the modified Molecular Dynamics (MD) method are widely used to simulate the pebble motion to obtain the distribution of pebble concentration, velocity, and maximum contact stress. Although DEM and MD provide high accuracy in the pebble flow simulation, they are quite computationally expensive due to the large number of pebbles to be simulated in a typical PBR and the ubiquitous contacts and collisions between neighboring pebbles that need to be detected frequently in the simulation, which greatly restricts their applicability to large scale PBR designs such as PBMR400. Since contact detection accounts for more than 60% of the overall CPU time in the pebble flow simulation, accelerating the contact detection can greatly enhance the overall efficiency. In the present work, based on the design features of PBRs, two contact detection algorithms, the basic cell search algorithm and the bounding box search algorithm, are investigated and applied to pebble contact detection. The influence of the PBR system size, core geometry and searching cell size on the contact detection efficiency is presented. Our results suggest that for present PBR applications, the bounding box algorithm is less sensitive to the aforementioned effects and has superior performance in pebble contact detection compared with the basic cell search algorithm. (authors)

  4. GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method

    Science.gov (United States)

    Gong, Chunye; Liu, Jie; Chi, Lihua; Huang, Haowei; Fang, Jingyue; Gong, Zhenghu

    2011-07-01

    Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great faculty in solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution to the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with vacuum boundary conditions. The relative advantages and disadvantages of the GPU implementation, the simulation on multiple GPUs, the programming effort and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.

  5. GeNN: a code generation framework for accelerated brain simulations

    Science.gov (United States)

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-01

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that a 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance-based Hodgkin-Huxley neurons, but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.

  6. Mixed-field GCR Simulations for Radiobiological Research using Ground Based Accelerators

    Science.gov (United States)

    Kim, Myung-Hee Y.; Rusek, Adam; Cucinotta, Francis

    Space radiation is comprised of a large number of particle types and energies, which have differential ionization power, from high energy protons to high charge and energy (HZE) particles and secondary neutrons produced by galactic cosmic rays (GCR). Ground based accelerators such as the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL) are used to simulate space radiation for radiobiology research and for dosimetry, electronics parts, and shielding testing, using mono-energetic beams of single ion species. As a tool to support research on new risk assessment models, we have developed a stochastic model of heavy ion beams and space radiation effects, the GCR Event-based Risk Model computer code (GERMcode). For radiobiological research on mixed-field space radiation, a new GCR simulator at NSRL is proposed. The NSRL-GCR simulator, which implements the rapid switching mode and the higher energy beam extraction to 1.5 GeV/u, can integrate multiple ions into a single simulation to create the GCR Z-spectrum in major energy bins. After considering the GCR environment and energy limitations of NSRL, a GCR reference field is proposed after extensive simulation studies using the GERMcode. The GCR reference field is shown to reproduce the Z and LET spectra of GCR behind shielding to within 20 percent accuracy compared to simulated full GCR environments behind shielding. A major challenge for space radiobiology research is to consider chronic GCR exposure of up to 3 years in relation to simulations with cell and animal models of human risks. We discuss possible approaches to map important biological time scales in experimental models using ground-based simulation with extended exposure of up to a few weeks and fractionation approaches at a GCR simulator.

  7. Rapid acceleration leads to rapid weakening in earthquake-like laboratory experiments

    Science.gov (United States)

    Chang, Jefferson C.; Lockner, David A.; Reches, Z.

    2012-01-01

    After nucleation, a large earthquake propagates as an expanding rupture front along a fault. This front activates countless fault patches that slip by consuming energy stored in Earth’s crust. We simulated the slip of a fault patch by rapidly loading an experimental fault with energy stored in a spinning flywheel. The spontaneous evolution of strength, acceleration, and velocity indicates that our experiments are proxies of fault-patch behavior during earthquakes of moment magnitude (Mw) = 4 to 8. We show that seismically determined earthquake parameters (e.g., displacement, velocity, magnitude, or fracture energy) can be used to estimate the intensity of the energy release during an earthquake. Our experiments further indicate that high acceleration imposed by the earthquake’s rupture front quickens dynamic weakening by intense wear of the fault zone.

  8. Design and simulation of a 1.2MeV electron accelerator used for desulfuration and denitrogenation

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, J.; Zhu, D.J.; Liu, S.G.; Wang, H.B.; Xu, Z.; Liu, X.S. [University of Electronic Science & Technology of China, Chengdu (China)

    2005-07-01

    This paper presents the structural design and functional analysis of a new kind of 1.2 MeV industrial electron accelerator. The PIC (particle-in-cell) method is used to simulate this accelerator and to optimize the design. The results show that the beam optics of this accelerator have been improved. This electron accelerator is used for desulfurisation and denitrification in the environmental industry. This application removes sulphur oxides and nitrogen oxides from the flue gases of thermal power stations in order to reduce air pollution.

  9. Design and simulation of a 1.2 MeV electron accelerator used for desulfuration and denitrogenation

    International Nuclear Information System (INIS)

    Zhou Jun; Zhu Dajun; Liu Shenggang

    2005-01-01

    This paper presents the structural design and functional analysis of a new kind of 1.2 MeV industrial electron accelerator. The PIC (particle-in-cell) method is used to simulate this accelerator and to optimize the design; the results show that the beam optics of this accelerator have been improved. This electron accelerator is used for desulfuration and denitrogenation in the environmental industry. This application removes sulphur oxides and nitrogen oxides from the flue gases of thermal power stations in order to reduce air pollution. (author)

  10. Computational Fluid Dynamics based Fault Simulations of a Vertical Axis Wind Turbine

    International Nuclear Information System (INIS)

    Park, Kyoo-seon; Asim, Taimoor; Mishra, Rakesh

    2012-01-01

    Due to depleting fossil fuel reserves and a rapid increase in fuel prices globally, the search for alternative energy sources is becoming more and more significant. One such energy source is wind energy, which can be harnessed with the use of wind turbines. The fundamental principle of wind turbines is to convert wind energy first into mechanical and then into electrical form. The relatively simple operation of such turbines has spurred researchers to come up with innovative designs for global acceptance and to make these turbines commercially viable. Furthermore, the maintenance of wind turbines has long been a topic of interest; condition-based monitoring is essential to maintain their continuous operation. The present work focuses on the difference in the outputs of a vertical axis wind turbine (VAWT) under different operational conditions. A Computational Fluid Dynamics (CFD) technique has been used for various blade configurations of a VAWT. The results indicate that there is significant degradation in the performance output of the wind turbine as the number of broken or missing blades of the VAWT increases. The study predicts faults in the blades of VAWTs by monitoring their output.

  11. GPU-accelerated depth map generation for X-ray simulations of complex CAD geometries

    Science.gov (United States)

    Grandin, Robert J.; Young, Gavin; Holland, Stephen D.; Krishnamurthy, Adarsh

    2018-04-01

    Interactive x-ray simulations of complex computer-aided design (CAD) models can provide valuable insights for better interpretation of defect signatures, such as porosity, in x-ray CT images. Generating the depth map along a particular direction for a given CAD geometry is the most compute-intensive step in x-ray simulations. We have developed a GPU-accelerated method for real-time generation of depth maps of complex CAD geometries. We preprocess complex components designed using commercial CAD systems with a custom CAD module and convert them into a fine user-defined surface tessellation. Our CAD module can be used by different simulators and can handle complex geometries, including those that arise from complex castings and composite structures. We then make use of a parallel algorithm that runs on a graphics processing unit (GPU) to convert the finely-tessellated CAD model to a voxelized representation. The voxelized representation can enable heterogeneous modeling of the volume enclosed by the CAD model by assigning heterogeneous material properties in specific regions. The depth maps are generated from this voxelized representation with the help of a GPU-accelerated ray-casting algorithm. The GPU-accelerated ray-casting method enables interactive (>60 frames per second) generation of the depth maps of complex CAD geometries. This enables arbitrary rotation and slicing of the CAD model, leading to better interpretation of the x-ray images by the user. In addition, the depth maps can be used to aid directly in CT reconstruction algorithms.
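
    The depth-map step can be illustrated with a CPU-side sketch (the paper's method is a GPU ray-casting kernel); the voxel grid, voxel size, and toy solid below are assumed:

    # Minimal CPU sketch (the paper's implementation is GPU ray casting): build an
    # orthographic depth map from a voxelized solid by finding, for each (x, y)
    # ray, the first occupied voxel along +z.
    import numpy as np

    nx, ny, nz = 64, 64, 64
    voxel_size = 1.0                                  # mm per voxel (assumed)
    occupancy = np.zeros((nx, ny, nz), dtype=bool)
    occupancy[16:48, 16:48, 20:40] = True             # toy solid: a rectangular block

    hit_any = occupancy.any(axis=2)                   # does the ray hit the part at all?
    first_hit = occupancy.argmax(axis=2)              # index of first occupied voxel along z
    depth_map = np.where(hit_any, first_hit * voxel_size, np.inf)

    print("depth at the centre ray:", depth_map[nx // 2, ny // 2])
    print("rays that miss the part:", np.isinf(depth_map).sum())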

  12. An FFT-accelerated time-domain multiconductor transmission line simulator

    KAUST Repository

    Bagci, Hakan

    2010-02-01

    A fast time-domain multiconductor transmission line (MTL) simulator for analyzing general MTL networks is presented. The simulator models the networks as homogeneous MTLs that are excited by external fields and driven/terminated/connected by potentially nonlinear lumped circuitry. It hybridizes an MTL solver derived from time-domain integral equations (TDIEs) in unknown wave coefficients for each MTL with a circuit solver rooted in modified nodal analysis equations in unknown node voltages and voltage-source currents for each circuit. These two solvers are rigorously interfaced at MTL and circuit terminals, and the resulting coupled system of equations is solved simultaneously for all MTL and circuit unknowns at each time step. The proposed simulator is amenable to hybridization, is fast Fourier transform (FFT)-accelerated, and is highly accurate: 1) It can easily be hybridized with TDIE-based field solvers (in a fully rigorous mathematical framework) for performing electromagnetic interference and compatibility analysis on electrically large and complex structures loaded with MTL networks. 2) It is accelerated by an FFT algorithm that calculates temporal convolutions of time-domain MTL Green functions in only O(Nt log2 Nt) rather than O(Nt^2) operations, where Nt is the number of time steps of the simulation. Moreover, the algorithm, which operates on temporal samples of MTL Green functions, is indifferent to the method used to obtain them. 3) It approximates MTL voltages, currents, and wave coefficients using high-order temporal basis functions. Various numerical examples, including the crosstalk analysis of a (twisted) unshielded twisted-pair (UTP)-CAT5 cable and the analysis of field coupling into UTP-CAT5 and RG-58 cables located on an airplane, are presented to demonstrate the accuracy, efficiency, and versatility of the proposed simulator. © 2010 IEEE.
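
    The quoted complexity reduction is the standard FFT-convolution argument; a minimal sketch, with an assumed toy Green function and source history, follows:

    # Minimal sketch of the acceleration idea (not the simulator itself): the
    # temporal convolution of a sampled Green function with a source history costs
    # O(Nt^2) done directly, but only O(Nt log Nt) with zero-padded FFTs.
    import numpy as np

    nt, dt = 4096, 1e-11
    t = np.arange(nt) * dt
    green = np.exp(-t / 2e-10) * np.cos(2 * np.pi * 5e9 * t)   # toy Green function samples
    source = np.sin(2 * np.pi * 1e9 * t)                       # toy excitation history

    # Direct (quadratic-cost) discrete convolution, truncated to nt samples.
    direct = np.convolve(green, source)[:nt] * dt

    # FFT-accelerated version: zero-pad to avoid circular wrap-around.
    n_fft = 2 * nt
    fft_conv = np.fft.irfft(np.fft.rfft(green, n_fft) * np.fft.rfft(source, n_fft), n_fft)[:nt] * dt

    print("max discrepancy between direct and FFT results:", np.max(np.abs(direct - fft_conv)))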

  13. Nuclear models, experiments and data libraries needed for numerical simulation of accelerator-driven system

    International Nuclear Information System (INIS)

    Bauge, E.; Bersillon, O.

    2000-01-01

    This paper presents the transparencies of the speech concerning the nuclear models, experiments and data libraries needed for the numerical simulation of Accelerator-Driven Systems. The first part, concerning the nuclear models, defines the spallation process, the corresponding models (intra-nuclear cascade, statistical model, Fermi breakup, fission, transport, decay and macroscopic aspects) and the code systems. The second part, devoted to the experiments, presents the angular measurements, the integral measurements, the residual nuclei and the energy deposition. In the last part, dealing with the data libraries, the author details the fundamental quantities such as the reaction cross-sections, the low energy transport databases and the decay libraries. (A.L.B.)

  14. Simulation of Cascaded Longitudinal-Space-Charge Amplifier at the Fermilab Accelerator Science & Technology (Fast) Facility

    Energy Technology Data Exchange (ETDEWEB)

    Halavanau, A. [Northern Illinois U.; Piot, P. [Northern Illinois U.

    2015-12-01

    Cascaded Longitudinal Space Charge Amplifiers (LSCA) have been proposed as a mechanism to generate density modulation over a broad spectral range. The scheme has recently been demonstrated in the optical regime and has confirmed the production of broadband optical radiation. In this paper we investigate, via numerical simulations, the performance of a cascaded LSCA beamline at the Fermilab Accelerator Science & Technology (FAST) facility to produce broadband ultraviolet radiation. Our studies are carried out using elegant with its tree-based, grid-less space-charge algorithm.

  15. High energy gain in three-dimensional simulations of light sail acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Sgattoni, A., E-mail: andrea.sgattoni@polimi.it [Dipartimento di Energia, Politecnico di Milano, Milano (Italy); CNR, Istituto Nazionale di Ottica, u.o.s. “Adriano Gozzini,” Pisa (Italy); Sinigardi, S. [CNR, Istituto Nazionale di Ottica, u.o.s. “Adriano Gozzini,” Pisa (Italy); Dipartimento di Fisica e Astronomia, Università di Bologna, Bologna (Italy); INFN sezione di Bologna, Bologna (Italy); Macchi, A. [CNR, Istituto Nazionale di Ottica, u.o.s. “Adriano Gozzini,” Pisa (Italy); Dipartimento di Fisica “Enrico Fermi,” Università di Pisa, Pisa (Italy)

    2014-08-25

    The dynamics of radiation pressure acceleration in the relativistic light sail regime are analysed by means of large scale, three-dimensional (3D) particle-in-cell simulations. Differently from other mechanisms, the 3D dynamics leads to faster and higher energy gain than in 1D or 2D geometry. This effect is caused by the local decrease of the target density due to transverse expansion, leading to a “lighter sail.” However, the rarefaction of the target leads to an earlier transition to transparency, limiting the energy gain. A transverse instability leads to a structured and inhomogeneous ion distribution.

  16. High energy gain in three-dimensional simulations of light sail acceleration

    International Nuclear Information System (INIS)

    Sgattoni, A.; Sinigardi, S.; Macchi, A.

    2014-01-01

    The dynamics of radiation pressure acceleration in the relativistic light sail regime are analysed by means of large scale, three-dimensional (3D) particle-in-cell simulations. Differently from other mechanisms, the 3D dynamics leads to faster and higher energy gain than in 1D or 2D geometry. This effect is caused by the local decrease of the target density due to transverse expansion, leading to a “lighter sail.” However, the rarefaction of the target leads to an earlier transition to transparency, limiting the energy gain. A transverse instability leads to a structured and inhomogeneous ion distribution.

  17. Accelerating molecular dynamic simulation on the cell processor and Playstation 3.

    Science.gov (United States)

    Luttmann, Edgar; Ensign, Daniel L; Vaidyanathan, Vishal; Houston, Mike; Rimon, Noam; Øland, Jeppe; Jayachandran, Guha; Friedrichs, Mark; Pande, Vijay S

    2009-01-30

    Implementation of molecular dynamics (MD) calculations on novel architectures will vastly increase their power to calculate the physical properties of complex systems. Herein, we detail algorithmic advances developed to accelerate MD simulations on the Cell processor, a commodity processor found in the PlayStation 3 (PS3). In particular, we discuss issues regarding memory access versus computation and the types of calculations which are best suited for streaming processors such as the Cell, focusing on implicit solvation models. We conclude with a comparison of improved performance on the PS3's Cell processor over more traditional processors. (c) 2008 Wiley Periodicals, Inc.

  18. Simulation studies of crystal-photodetector assemblies for the Turkish accelerator center particle factory electromagnetic calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Kocak, F., E-mail: fkocak@uludag.edu.tr

    2015-07-01

    The Turkish Accelerator Center Particle Factory detector will be constructed for the detection of the particles produced in the collision of a 1 GeV electron beam with a 3.6 GeV positron beam. PbWO4 and CsI(Tl) crystals are considered for the construction of the electromagnetic calorimeter part of the detector. The optical photons generated in these crystals are detected by avalanche or PIN photodiodes. The Geant4 simulation code has been used to estimate the energy resolution of the calorimeter for these crystal-photodiode assemblies.

  19. Volumetric change of simulated radioactive waste glass irradiated by electron accelerator. [Silica glass

    Energy Technology Data Exchange (ETDEWEB)

    Sato, Seichi; Furuya, Hirotaka; Inagaki, Yaohiro; Kozaka, Tetsuo; Sugisaki, Masayasu

    1987-11-01

    Density changes of simulated radioactive waste glasses, silica glass and Pyrex glass irradiated by an electron accelerator were measured by a "sink-float" technique. The density changes of the waste and silica glasses irradiated at 2.0 MeV up to a fluence of 1.7 x 10^17 e/cm^2 were less than 0.05%, remarkably smaller than the 0.18% shrinkage of the Pyrex glass. The precision of the density-change measurements for the waste glass was lower than that for the Pyrex glass, possibly because of the inhomogeneity of the waste glass

  20. Fault Detection Based on Tracking Differentiator Applied on the Suspension System of Maglev Train

    Directory of Open Access Journals (Sweden)

    Hehong Zhang

    2015-01-01

    Full Text Available A fault detection method based on an optimized tracking differentiator is introduced and applied to the acceleration sensor of the suspension system of a maglev train. It detects faults of the acceleration sensor by comparing the integral of the acceleration signal with the speed signal obtained from the optimized tracking differentiator. This paper optimizes the control variable when the states lie within or beyond the two-step reachable region to improve the performance of the approximate linear discrete tracking differentiator. Fault-tolerant control is carried out by feedback based on the speed signal acquired from the optimized tracking differentiator when the acceleration sensor fails. The simulation and experiment results show the practical usefulness of the presented method.
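
    A minimal sketch of the detection principle is given below; the optimized tracking differentiator itself is not reproduced, and the signals, fault model, and threshold are assumed for illustration:

    # Minimal sketch of the detection principle (the paper's optimized tracking
    # differentiator is not reproduced here): integrate the measured acceleration,
    # compare it with the independently obtained speed signal, and flag a fault
    # when the residual exceeds a threshold.
    import numpy as np

    dt = 0.001
    t = np.arange(0.0, 5.0, dt)
    true_acc = 0.5 * np.sin(2 * np.pi * 0.8 * t)          # toy suspension acceleration
    speed = np.cumsum(true_acc) * dt                      # reference speed signal

    acc_meas = true_acc + 0.01 * np.random.default_rng(1).standard_normal(t.size)
    acc_meas[3000:] = 0.0                                 # sensor fails (sticks at zero) at t = 3 s

    acc_integral = np.cumsum(acc_meas) * dt
    residual = np.abs(acc_integral - speed)
    threshold = 0.05                                      # assumed detection threshold

    fault_idx = np.argmax(residual > threshold) if np.any(residual > threshold) else None
    print("fault detected at t =", None if fault_idx is None else round(t[fault_idx], 3), "s")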

  1. Simulation of space charge effects in particle accelerators. Annual report, August 1, 1983-September 30, 1984

    International Nuclear Information System (INIS)

    Haber, I.

    1984-01-01

    Progress during the FY83/84 period has involved both the use of existing numerical tools to investigate current issues, and the development of new techniques for future simulations of increasing sophistication. A balance has been sought with a view towards maximizing the utility of simulations to both present and future decisions in accelerator design. Emphasis during this contract has centered on investigating the nonlinear dynamics of a very low emittance beam with a realistic distribution function - especially when complications such as the image forces from a nearby conducting electrode are considered. A significant part of the effort during this period was also expended in spreading the simulation capabilities already developed. Versions of the SHIFT (Simulation of Heavy Ion Fusion Transport) series of computer codes have been installed on machines available to the HIF community. The enhanced availability of these codes has facilitated their use outside of NRL. For example, simulation results with a significant impact on MBE design were obtained at LBL using the MFECC Version of SHIFT-XY

  2. Fault Detection for Industrial Processes

    Directory of Open Access Journals (Sweden)

    Yingwei Zhang

    2012-01-01

    Full Text Available A new fault-relevant KPCA algorithm is proposed, and a fault detection approach is developed based on it. The proposed method further decomposes both the KPCA principal space and residual space into two subspaces. Compared with traditional statistical techniques, the fault subspace is separated based on the fault-relevant influence. This method can find fault-relevant principal directions and principal components of the systematic subspace and residual subspace for process monitoring. The proposed monitoring approach is applied to the Tennessee Eastman process and a penicillin fermentation process. The simulation results show the effectiveness of the proposed method.

  3. SIMULATION TOOL OF VELOCITY AND TEMPERATURE PROFILES IN THE ACCELERATED COOLING PROCESS OF HEAVY PLATES

    Directory of Open Access Journals (Sweden)

    Antônio Adel dos Santos

    2014-10-01

    Full Text Available The aim of this paper was to develop and apply mathematical models for determining the velocity and temperature profiles of heavy plates processed by accelerated cooling at Usiminas’ Plate Mill in Ipatinga. The development was based on the mathematical/numerical representation of physical phenomena occurring in the processing line. Production data from 3334 plates processed in the Plate Mill were used for validating the models. A user-friendly simulation tool was developed within the Visual Basic framework, taking into account all steel grades produced, the configuration parameters of the production line and these models. With the aid of this tool the thermal profile through the plate thickness for any steel grade and dimensions can be generated, which allows the tuning of online process control models. The simulation tool has been very useful for the development of new steel grades, since the process variables can be related to the thermal profile, which affects the mechanical properties of the steels.
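
    A minimal illustrative sketch of the kind of through-thickness thermal profile such a tool computes is shown below, assuming constant steel properties and a convective accelerated-cooling boundary; none of these values are parameters of Usiminas' model:

    # Minimal illustrative sketch (not Usiminas' model): explicit finite-difference
    # solution of 1D transient conduction through the plate thickness with
    # convective (accelerated-cooling) boundaries on both faces.
    import numpy as np

    thickness = 0.05                      # m, plate thickness
    n = 51                                # nodes through the thickness
    dx = thickness / (n - 1)
    k, rho, cp = 30.0, 7800.0, 650.0      # steel properties (assumed constant)
    h, t_water = 2500.0, 25.0             # convection coefficient and water temperature (assumed)
    alpha = k / (rho * cp)
    dt = 0.4 * dx * dx / alpha            # stable explicit time step

    T = np.full(n, 800.0)                 # initial plate temperature, deg C
    for step in range(int(10.0 / dt)):    # simulate 10 s of cooling
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        # convective boundaries via an energy balance on the surface half-cells
        Tn[0] = T[0] + 2 * dt / (rho * cp * dx) * (k * (T[1] - T[0]) / dx - h * (T[0] - t_water))
        Tn[-1] = T[-1] + 2 * dt / (rho * cp * dx) * (k * (T[-2] - T[-1]) / dx - h * (T[-1] - t_water))
        T = Tn

    print("surface / mid-thickness temperature after 10 s: %.1f / %.1f degC" % (T[0], T[n // 2]))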

  4. Magnetic field simulation of wiggler on LUCX accelerator facility using Radia

    Science.gov (United States)

    Sutygina, Y. N.; Harisova, A. E.; Shkitov, D. A.

    2016-11-01

    A flat wiggler consisting of NdFeB permanent magnets was installed on the compact linear electron accelerator LUCX (KEK) in Japan. After installation of the wiggler on LUCX, experiments on the generation of undulator radiation (UR) in the terahertz wavelength range are planned. To perform detailed calculations and optimization of the UR characteristics, it is necessary to know the parameters of the magnetic field generated in the wiggler. In this paper, extended simulation results of the wiggler magnetic field over the entire volume between the poles are presented. The magnetic field obtained in the Radia simulation is compared with the field calculated by another code based on the finite element method.
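
    A first cross-check of such field maps can be made against the ideal planar-wiggler field; the following is a minimal analytic sketch, not a Radia model, and the peak field and period are assumed values:

    # Minimal analytic sketch (not a Radia model): the ideal planar-wiggler field
    # between the poles, which satisfies Maxwell's equations in the gap and can
    # serve as a first check of simulated field maps.
    import numpy as np

    B0 = 0.5                     # peak on-axis field, T (assumed)
    lambda_u = 0.04              # wiggler period, m (assumed)
    k_u = 2 * np.pi / lambda_u

    def wiggler_field(y, z):
        """Vertical and longitudinal field components of an ideal planar wiggler."""
        b_y = B0 * np.cosh(k_u * y) * np.cos(k_u * z)
        b_z = -B0 * np.sinh(k_u * y) * np.sin(k_u * z)
        return b_y, b_z

    # Deflection parameter K = 0.0934 * B0[T] * lambda_u[mm]
    K = 0.0934 * B0 * lambda_u * 1e3
    print("on-axis By at z = 0:", wiggler_field(0.0, 0.0)[0], "T")
    print("deflection parameter K:", round(K, 3))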

  5. Monte Carlo based simulation of LIAC intraoperative radiotherapy accelerator along with beam shaper applicator

    Directory of Open Access Journals (Sweden)

    N Heidarloo

    2017-08-01

    Full Text Available Intraoperative electron radiotherapy is a radiotherapy method that delivers a high single fraction of radiation dose to the patient in one session during surgery. The beam shaper applicator is one of the applicators recently employed with this radiotherapy method and is particularly useful in the treatment of large tumors. In this study, the dosimetric characteristics of the electron beam produced by the LIAC intraoperative radiotherapy accelerator in conjunction with this applicator have been evaluated through Monte Carlo simulation with the MCNP code. The results showed that the electron beam produced by the beam shaper applicator has the desirable dosimetric characteristics, so that the mentioned applicator can be considered for clinical purposes. Furthermore, the good agreement between the results of simulation and practical dosimetry confirms the applicability of the Monte Carlo method in determining the dosimetric parameters of the electron beam in intraoperative radiotherapy.

  6. Simulation study of the sub-terawatt laser wakefield acceleration operated in self-modulated regime

    Science.gov (United States)

    Hsieh, C.-Y.; Lin, M.-W.; Chen, S.-H.

    2018-02-01

    Laser wakefield acceleration (LWFA) can be accomplished by introducing a sub-terawatt (TW) laser pulse into a thin, high-density gas target. In this way, the self-focusing effect and the self-modulation of the laser pulse produce a greatly enhanced laser peak intensity that can drive a nonlinear plasma wave to accelerate electrons. A particle-in-cell model is developed to study sub-TW LWFA when a 0.6-TW laser pulse interacts with a dense hydrogen plasma. Gas targets having a Gaussian density profile or a flat-top distribution are defined for investigating the properties of sub-TW LWFA conducted with a gas jet or a gas cell. In addition to using 800-nm laser pulses, simulations are performed with 1030-nm laser pulses, as they represent a viable approach to realize sub-TW LWFA driven by high-frequency, diode-pumped laser systems. A peak density for which the laser peak power is about twice the critical power for self-focusing, P_L ~ 2 P_cr, is favourable for conducting sub-TW LWFA. Otherwise, an excessively high peak density can induce an undesired filamentation effect which rapidly disintegrates the laser field envelope and disrupts the process of plasma wave excitation. The plateau region of a flat-top density distribution allows the self-focusing and the self-modulation of the laser pulse to develop, from which well-established plasma bubbles can be produced to accelerate electrons. The process of electron injection is complicated in such high-density plasma conditions; however, increasing the length of the plateau region represents a straightforward method to realize the injection and acceleration of electrons within the first bubble, such that improved LWFA performance can be accomplished.
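
    The quoted operating point can be estimated from the standard relativistic self-focusing threshold P_cr ~= 17.4 (n_c/n_e) GW; a minimal sketch, with assumed laser parameters matching the 0.6 TW, 800 nm case, follows:

    # Minimal sketch of the quoted operating condition (values are illustrative):
    # the relativistic self-focusing critical power is P_cr ~= 17.4 (n_c / n_e) GW,
    # so for a 0.6 TW pulse the plasma density giving P_L ~= 2 P_cr can be solved for.
    p_laser_gw = 600.0                      # 0.6 TW laser expressed in GW
    wavelength_um = 0.8                     # 800 nm drive laser

    n_critical = 1.1e21 / wavelength_um**2  # critical density, cm^-3
    # Require P_L = 2 * P_cr = 2 * 17.4 * (n_c / n_e) GW  ->  solve for n_e.
    n_e = 2.0 * 17.4 * n_critical / p_laser_gw

    print("critical density                      : %.2e cm^-3" % n_critical)
    print("target peak density for P_L ~ 2 P_cr  : %.2e cm^-3" % n_e)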

  7. Warp simulations for capture and control of laser-accelerated proton beams

    International Nuclear Information System (INIS)

    Nuernberg, Frank; Harres, K; Roth, M; Friedman, A; Grote, D P; Logan, B G; Schollmeier, M

    2010-01-01

    The capture of laser-accelerated proton beams accompanied by co-moving electrons via a solenoid field has been studied with particle-in-cell simulations. The main advantages of the Warp simulation suite that we have used, relative to envelope or tracking codes, are the possibility of including all source parameters energy resolved, adding electrons as a second species and considering the non-negligible space-charge forces and electrostatic self-fields. It was observed that the influence of the electrons is of vital importance. The magnetic effect on the electrons outbalances the space-charge force. Hence, the electrons are forced onto the beam axis and attract protons. Besides the energy-dependent proton density increase on axis, the change in the particle spectrum is also important for future applications. Protons are accelerated/decelerated slightly, electrons strongly. Two thirds of all electrons are lost directly at the source and 27% of all protons hit the inner wall of the solenoid.

  8. Monte Carlo Simulation of a Linear Accelerator and Electron Beam Parameters Used in Radiotherapy

    Directory of Open Access Journals (Sweden)

    Mohammad Taghi Bahreyni Toossi

    2009-06-01

    Full Text Available Introduction: In recent decades, several Monte Carlo codes have been introduced for research and medical applications. These methods provide both accurate and detailed calculation of particle transport from linear accelerators. The main drawback of Monte Carlo techniques is the extremely long computing time that is required in order to obtain a dose distribution with good statistical accuracy. Material and Methods: In this study, the MCNP-4C Monte Carlo code was used to simulate the electron beams generated by a Neptun 10 PC linear accelerator. The depth dose curves and related parameters to depth dose and beam profiles were calculated for 6, 8 and 10 MeV electron beams with different field sizes and these data were compared with the corresponding measured values. The actual dosimetry was performed by employing a Welhofer-Scanditronix dose scanning system, semiconductor detectors and ionization chambers. Results: The result showed good agreement (better than 2% between calculated and measured depth doses and lateral dose profiles for all energies in different field sizes. Also good agreements were achieved between calculated and measured related electron beam parameters such as E0, Rq, Rp and R50. Conclusion: The simulated model of the linac developed in this study is capable of computing electron beam data in a water phantom for different field sizes and the resulting data can be used to predict the dose distributions in other complex geometries.

  9. Warp simulations for capture and control of laser-accelerated proton beams

    International Nuclear Information System (INIS)

    Nurnberg, F.; Friedman, A.; Grote, D.P.; Harres, K.; Logan, B.G.; Schollmeier, M.; Roth, M.

    2009-01-01

    The capture of laser-accelerated proton beams accompanied by co-moving electrons via a solenoid field has been studied with particle-in-cell simulations. The main advantages of the Warp simulation suite that was used, relative to envelope or tracking codes, are the possibility of including all source parameters energy resolved, adding electrons as a second species and considering the non-negligible space-charge forces and electrostatic self-fields. It was observed that the influence of the electrons is of vital importance. The magnetic effect on the electrons outbalances the space-charge force. Hence, the electrons are forced onto the beam axis and attract protons. Besides the energy-dependent proton density increase on axis, the change in the particle spectrum is also important for future applications. Protons are accelerated/decelerated slightly, electrons strongly. Two thirds of all electrons are lost directly at the source and 27% of all protons hit the inner wall of the solenoid.

  10. Broad-band simulation of M7.2 earthquake on the North Tehran fault, considering non-linear soil effects

    Science.gov (United States)

    Majidinejad, A.; Zafarani, H.; Vahdani, S.

    2018-05-01

    The North Tehran fault (NTF) is known to be one of the most drastic sources of seismic hazard for the city of Tehran. In this study, we provide broad-band (0-10 Hz) ground motions for the city as a consequence of a probable M7.2 earthquake on the NTF. Low-frequency motions (0-2 Hz) are provided from spectral element dynamic simulation of 17 scenario models. High-frequency (2-10 Hz) motions are calculated with a physics-based method based on S-to-S backscattering theory. Broad-band ground motions at the bedrock level show amplifications, both at low and high frequencies, due to the existence of the deep Tehran basin in the vicinity of the NTF. By employing soil profiles obtained from regional studies, the effect of shallow soil layers on broad-band ground motions is investigated by both linear and non-linear analyses. While the linear soil response overestimates ground motion prediction equations, the non-linear response predicts plausible results within one standard deviation of empirical relationships. Average Peak Ground Accelerations (PGAs) at the northern, central and southern parts of the city are estimated at about 0.93, 0.59 and 0.4 g, respectively. Increased damping caused by non-linear soil behaviour reduces the linear soil responses considerably, in particular at frequencies above 3 Hz. Non-linear deamplification reduces linear spectral accelerations by up to 63 per cent at stations above soft thick sediments. By performing more general analyses, which exclude source-to-site effects on stations, a correction function is proposed for typical site classes of Tehran. Parameters for the function, which reduces the linear soil response in order to take into account non-linear soil deamplification, are provided for various frequencies in the range of engineering interest. In addition to fully non-linear analyses, equivalent-linear calculations were also conducted; their comparison revealed the appropriateness of the method for large peaks and low frequencies, but its shortcomings for small to

  11. Man-systems evaluation of moving base vehicle simulation motion cues. [human acceleration perception involving visual feedback

    Science.gov (United States)

    Kirkpatrick, M.; Brye, R. G.

    1974-01-01

    A motion cue investigation program is reported that deals with human factors aspects of high-fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general purpose simulator is used to test the contradictory cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.

  12. Monte Carlo simulation of a medical linear accelerator for generation of phase spaces

    International Nuclear Information System (INIS)

    Oliveira, Alex C.H.; Santana, Marcelo G.; Lima, Fernando R.A.; Vieira, Jose W.

    2013-01-01

    Radiotherapy uses various techniques and equipment for local treatment of cancer. The equipment most often used in radiotherapy for patient irradiation are linear accelerators (Linacs), which produce beams of X-rays in the range 5-30 MeV. Among the many algorithms developed over recent years for the evaluation of dose distributions in radiotherapy planning, the algorithms based on Monte Carlo (MC) methods have proven to be very promising in terms of accuracy by providing more realistic results. MC methods allow simulating the transport of ionizing radiation in complex configurations, such as detectors, Linacs, phantoms, etc. MC simulations for applications in radiotherapy are divided into two parts. In the first, the simulation of the production of the radiation beam by the Linac is performed and the phase space is generated. The phase space contains information such as energy, position, direction, etc. of millions of particles (photons, electrons, positrons). In the second part, the simulation of the transport of particles (sampled from the phase space) in certain configurations of the irradiation field is performed to assess the dose distribution in the patient (or phantom). The objective of this work is to create a computational model of a 6 MeV Linac using the MC code Geant4 for the generation of phase spaces. From the phase space, information was obtained to assess beam quality (photon and electron spectra and two-dimensional distribution of energy) and to analyze the physical processes involved in producing the beam. (author)

  13. Accelerating rejection-based simulation of biochemical reactions with bounded acceptance probability

    Energy Technology Data Exchange (ETDEWEB)

    Thanh, Vo Hong, E-mail: vo@cosbi.eu [The Microsoft Research - University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Priami, Corrado, E-mail: priami@cosbi.eu [The Microsoft Research - University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Department of Mathematics, University of Trento, Trento (Italy); Zunino, Roberto, E-mail: roberto.zunino@unitn.it [Department of Mathematics, University of Trento, Trento (Italy)

    2016-06-14

    Stochastic simulation of large biochemical reaction networks is often computationally expensive due to the disparate reaction rates and high variability of the populations of chemical species. An approach to accelerate the simulation is to allow multiple reaction firings before performing an update, by assuming that reaction propensities change by a negligible amount during a time interval. Species with small populations involved in the firings of fast reactions significantly affect both the performance and accuracy of this simulation approach, and the problem is even worse when these small-population species are involved in a large number of reactions. We present in this paper a new approximate algorithm to cope with this problem. It is based on bounding the acceptance probability of a reaction selected by the exact rejection-based simulation algorithm, which employs propensity bounds of reactions and a rejection-based mechanism to select the next reaction firings. The reaction is guaranteed to be selected to fire with an acceptance probability greater than a predefined threshold; the selection becomes exact if the threshold is set to one. Our new algorithm reduces the computational cost of selecting the next reaction firing and of updating the reaction propensities.
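
    A much-simplified sketch of the rejection (thinning) idea is given below for a toy birth-death process; the full algorithm's squeeze step with lower propensity bounds and its data structures are omitted, and all rate constants are assumed:

    # Simplified sketch of the rejection-based (thinning) idea behind such methods,
    # for a toy birth-death process; candidate reactions are drawn from propensity
    # upper bounds and accepted with the exact-to-upper-bound ratio.
    import numpy as np

    rng = np.random.default_rng(0)
    k_birth, k_death = 10.0, 0.1
    x, t, t_end, delta = 50, 0.0, 50.0, 0.2

    def bounds(x):
        """Population interval within which the propensity bounds stay valid."""
        return max(0, int(x * (1 - delta))), int(np.ceil(x * (1 + delta))) + 1

    x_lo, x_hi = bounds(x)
    while t < t_end:
        a_up = np.array([k_birth, k_death * x_hi])      # propensity upper bounds
        t += rng.exponential(1.0 / a_up.sum())
        j = rng.choice(2, p=a_up / a_up.sum())           # candidate reaction
        a_exact = k_birth if j == 0 else k_death * x
        if rng.random() < a_exact / a_up[j]:             # accept with exact/upper ratio
            x += 1 if j == 0 else -1
            if not (x_lo <= x <= x_hi):                  # refresh bounds only when needed
                x_lo, x_hi = bounds(x)

    print("population at t = %.0f: %d" % (t_end, x))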

  14. Fourier analysis of Solar atmospheric numerical simulations accelerated with GPUs (CUDA).

    Science.gov (United States)

    Marur, A.

    2015-12-01

    Solar dynamics from the convection zone creates a variety of waves that may propagate through the solar atmosphere. These waves are important in facilitating the energy transfer between the Sun's surface and the corona as well as in propagating energy throughout the solar system. How and where these waves are dissipated remains an open question. Advanced 3D numerical simulations have furthered our understanding of the processes involved. Fourier transforms are used to understand the nature of the waves by finding their frequency and wavelength through the simulated atmosphere, as well as the nature of their propagation and where they are dissipated. In order to analyze the different waves produced by the aforementioned simulations and models, Fast Fourier Transform algorithms will be applied. Since the processing of the multitude of different layers of the simulations (of the order of several 100^3 grid points) would be time intensive and inefficient on a CPU, CUDA, a computing architecture that harnesses the power of the GPU, will be used to accelerate the calculations.
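
    A minimal CPU-side sketch of the Fourier-analysis step is shown below (the project itself targets CUDA); the output cadence and the toy oscillation are assumed:

    # Minimal CPU sketch of the analysis step (the project targets CUDA): take a
    # time series from a simulated atmosphere, apply an FFT, and read off the
    # dominant oscillation frequency of the wave field.
    import numpy as np

    dt, nt = 5.0, 1024                        # 5 s cadence, ~85 min of simulated output
    t = np.arange(nt) * dt
    # Toy velocity signal: a 3.3 mHz "5-minute" oscillation plus noise.
    signal = 0.4 * np.sin(2 * np.pi * 3.3e-3 * t) + 0.05 * np.random.default_rng(2).standard_normal(nt)

    spectrum = np.abs(np.fft.rfft(signal * np.hanning(nt))) ** 2
    freqs = np.fft.rfftfreq(nt, d=dt)

    dominant = freqs[np.argmax(spectrum[1:]) + 1]          # skip the zero-frequency bin
    print("dominant frequency: %.2f mHz" % (dominant * 1e3))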

  15. Tabulated square-shaped source model for linear accelerator electron beam simulation.

    Science.gov (United States)

    Khaledi, Navid; Aghamiri, Mahmood Reza; Aslian, Hossein; Ameri, Ahmad

    2017-01-01

    The aim of this study was to present a source model that makes linear accelerator (LINAC) electron beam geometry simulation less complex; using this source model, the Monte Carlo (MC) computation becomes much faster for electron beams. In this study, a tabulated square-shaped source with transversal and axial distribution biasing and a semi-Gaussian spectrum was investigated. A low-energy photon spectrum was added to the semi-Gaussian beam to correct for bremsstrahlung X-ray contamination. After running the MC code multiple times and optimizing all spectra for four electron energies in three different medical LINACs (Elekta, Siemens, and Varian), the characteristics of a beam passing through a 10 cm × 10 cm applicator were obtained. The percentage depth dose and dose profiles at two different depths were measured and simulated. The maximum differences between simulated and measured percentage depth doses and dose profiles were 1.8% and 4%, respectively. The low-energy electron and photon spectra, the Gaussian spectrum peak energy, the associated full width at half maximum and the transversal distribution weightings were obtained for each electron beam. The proposed method yielded a computation time up to 702 times faster than a complete head simulation. Our study demonstrates that there is excellent agreement between the results of our proposed model and measured data; furthermore, an optimum calculation speed was achieved because there was no need to define the geometry and materials of the LINAC head.

  16. Head simulation of linear accelerators and spectra considerations using EGS4 Monte Carlo code in a PC

    International Nuclear Information System (INIS)

    Malatara, G.; Kappas, K.; Sphiris, N.

    1994-01-01

    In this work, the Monte Carlo EGS4 code was used to simulate radiation transport through linear accelerators to produce and score energy spectra and angular distributions of 6, 12, 15 and 25 MeV bremsstrahlung photons exiting from different accelerator treatment heads. The energy spectra were used as input to a convolution-method program to calculate the tissue-maximum ratio in water. 100,000 histories were recorded in the scoring plane for each simulation. The validity of the Monte Carlo simulation and the precision of the calculated spectra have been verified experimentally and were in good agreement. We believe that accurate simulation of the different components of the linear accelerator head is very important for the precision of the results. The results of the Monte Carlo and convolution methods can be compared with experimental data for verification, and they are powerful and practical tools to generate accurate spectra and dosimetric data. (authors)

  17. Simulation of accelerator transmutation of long-lived nuclear wastes; Simulation de transmutation de dechets nucleaires a vie longue par accelerateur

    Energy Technology Data Exchange (ETDEWEB)

    Fabienne, Wolff-Bacha [Paris-11 Univ., 91 - Orsay (France)

    1997-07-09

    The incineration of minor actinides with a hybrid reactor (i.e. one coupled with an accelerator) could reduce their radioactivity. The scientific tool used for the simulations, the GEANT code implemented on a parallel computer, was validated initially on thin and thick targets and by simulation of a pressurized water reactor, a fast reactor like Superphenix, and a molten salt fast hybrid reactor ('ATP'). Simulating a thermal hybrid reactor seems to indicate the non-negligible presence of neutrons which diffuse back to the accelerator. In spite of simplifications, the simulation of a molten lead fast hybrid reactor (such as the CERN Fast Energy Amplifier) indicates possible difficulties with the radial power distribution in the core, the lifetime of the window and the risk of an activated air leak. Finally, we propose a thermoelectric compact hybrid reactor, PRAHE (small atomic board hybrid reactor), the principle of which allows a neutron coupling between the accelerator and the reactor. (author) 270 refs., 91 figs., 31 tabs.

  18. Magnetohydrodynamic simulation study of plasma jets and plasma-surface contact in coaxial plasma accelerators

    Science.gov (United States)

    Subramaniam, Vivek; Raja, Laxminarayan L.

    2017-06-01

    Recent experiments by Loebner et al. [IEEE Trans. Plasma Sci. 44, 1534 (2016)] studied the effect of a hypervelocity jet emanating from a coaxial plasma accelerator incident on target surfaces in an effort to mimic the transient loading created during edge localized mode disruption events in fusion plasmas. In this paper, we present a magnetohydrodynamic (MHD) numerical model to simulate plasma jet formation and plasma-surface contact in this coaxial plasma accelerator experiment. The MHD system of equations is spatially discretized using a cell-centered finite volume formulation. The temporal discretization is performed using a fully implicit backward Euler scheme and the resultant stiff system of nonlinear equations is solved using the Newton method. The numerical model is employed to obtain some key insights into the physical processes responsible for the generation of extreme stagnation conditions on the target surfaces. Simulations of the plume (without the target plate) are performed to isolate and study phenomena such as the magnetic pinch effect that is responsible for launching pressure pulses into the jet free stream. The simulations also yield insights into the incipient conditions responsible for producing the pinch, such as the formation of conductive channels. The jet-target impact studies indicate the existence of two distinct stages involved in the plasma-surface interaction. A fast transient stage characterized by a thin normal shock transitions into a pseudo-steady stage that exhibits an extended oblique shock structure. A quadratic scaling of the pinch and stagnation conditions with the total current discharged between the electrodes is in qualitative agreement with the results obtained in the experiments. This also illustrates the dominant contribution of the magnetic pressure term in determining the magnitude of the quantities of interest.
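
    The combination of a fully implicit backward Euler step with a Newton solve, as described above, can be illustrated on a toy stiff system. The sketch below is a minimal sketch of that time-integration pattern, not the authors' MHD solver; the model right-hand side, Jacobian, and step size are invented for illustration.

```python
# Minimal sketch: one backward Euler step solved with Newton's method, the
# implicit pattern described in the abstract, on a stiff 2-variable model ODE.
import numpy as np

def f(u):
    # Stiff model right-hand side du/dt = f(u) (illustrative only).
    return np.array([-1000.0 * u[0] + u[1], u[0] - 2.0 * u[1]])

def jac(u):
    # Analytic Jacobian df/du of the model system.
    return np.array([[-1000.0, 1.0], [1.0, -2.0]])

def backward_euler_step(u_old, dt, tol=1e-10, max_iter=20):
    # Solve the nonlinear residual R(u) = u - u_old - dt*f(u) = 0 by Newton iteration.
    u = u_old.copy()
    for _ in range(max_iter):
        residual = u - u_old - dt * f(u)
        if np.linalg.norm(residual) < tol:
            break
        J = np.eye(2) - dt * jac(u)          # dR/du
        u = u - np.linalg.solve(J, residual)
    return u

u = np.array([1.0, 0.0])
for _ in range(50):
    u = backward_euler_step(u, dt=0.01)      # stable even though dt >> 1/1000
print(u)
```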

  19. Convergence acceleration for partitioned simulations of the fluid-structure interaction in arteries

    Science.gov (United States)

    Radtke, Lars; Larena-Avellaneda, Axel; Debus, Eike Sebastian; Düster, Alexander

    2016-06-01

    We present a partitioned approach to fluid-structure interaction problems arising in analyses of blood flow in arteries. Several strategies to accelerate the convergence of the fixed-point iteration resulting from the coupling of the fluid and the structural sub-problem are investigated. The Aitken relaxation and variants of the interface quasi-Newton least-squares method are applied to different test cases. A hybrid of two well-known variants of the interface quasi-Newton least-squares method is found to perform best. The test cases cover the typical boundary value problem faced when simulating the fluid-structure interaction in arteries, including a strong added mass effect and a wet surface which accounts for a large part of the overall surface of each sub-problem. A rubber-like neo-Hookean material model and a soft-tissue-like Holzapfel-Gasser-Ogden material model are used to describe the artery wall and are compared in terms of stability and computational expense. To avoid any kind of locking, high-order finite elements are used to discretize the structural sub-problem. The finite volume method is employed to discretize the fluid sub-problem. We investigate the influence of mass-proportional damping and the material model chosen for the artery on the performance and stability of the acceleration strategies as well as on the simulation results. To show the applicability of the partitioned approach to clinically relevant studies, the hemodynamics in a pathologically deformed artery are investigated, taking the findings of the test case simulations into account.
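
    Aitken relaxation, one of the acceleration strategies compared above, can be sketched on a generic fixed-point iteration x = F(x). In the sketch below F is a toy linear stand-in for the fluid-structure interface operator, and the relaxation-factor update follows the usual Irons-Tuck/Aitken form; this is an illustration, not the paper's coupling code.

```python
# Minimal sketch of Aitken dynamic relaxation applied to an interface
# fixed-point iteration x = F(x). F is a toy contraction standing in for the
# fluid -> structure -> fluid interface operator (illustrative only).
import numpy as np

def F(x):
    A = np.array([[0.6, 0.3], [0.2, 0.7]])   # spectral radius 0.9: slow plain iteration
    b = np.array([1.0, -0.5])
    return A @ x + b

def aitken_fixed_point(x0, omega0=0.5, tol=1e-12, max_iter=200):
    x = x0.copy()
    r_prev = None
    omega = omega0
    for k in range(max_iter):
        r = F(x) - x                          # interface residual
        if np.linalg.norm(r) < tol:
            return x, k
        if r_prev is not None:
            dr = r - r_prev
            # Aitken update of the relaxation factor (Irons-Tuck form).
            omega = -omega * (r_prev @ dr) / (dr @ dr)
        x = x + omega * r                     # relaxed update
        r_prev = r
    return x, max_iter

x, iters = aitken_fixed_point(np.zeros(2))
print(x, iters)                               # converges in far fewer iterations than omega fixed
```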

  20. Data Files for Ground-Motion Simulations of the 1906 San Francisco Earthquake and Scenario Earthquakes on the Northern San Andreas Fault

    Science.gov (United States)

    Aagaard, Brad T.; Barall, Michael; Brocher, Thomas M.; Dolenc, David; Dreger, Douglas; Graves, Robert W.; Harmsen, Stephen; Hartzell, Stephen; Larsen, Shawn; McCandless, Kathleen; Nilsson, Stefan; Petersson, N. Anders; Rodgers, Arthur; Sjogreen, Bjorn; Zoback, Mary Lou

    2009-01-01

    This data set contains results from ground-motion simulations of the 1906 San Francisco earthquake, seven hypothetical earthquakes on the northern San Andreas Fault, and the 1989 Loma Prieta earthquake. The bulk of the data consists of synthetic velocity time-histories. Peak ground velocity on a 1/60th degree grid and geodetic displacements from the simulations are also included. Details of the ground-motion simulations and analysis of the results are discussed in Aagaard and others (2008a,b).

  1. Ambient Noise Green's Function Simulation of Long-Period Ground Motions for Reverse Faulting

    Science.gov (United States)

    Miyake, H.; Beroza, G. C.

    2009-12-01

    Long-time correlation of ambient seismic noise has been demonstrated as a useful tool for strong ground motion prediction [Prieto and Beroza, 2008]. An important advantage of ambient noise Green's functions is that they can be used for ground motion simulation without resorting to either a complex 3-D velocity structure to develop theoretical Green's functions, or aftershock records for empirical Green's function analysis. The station-to-station approach inherent to ambient noise Green's functions imposes some limits on its application, since they are band-limited, applied at the surface, and for a single force. We explore the applicability of this method to strong motion prediction using the 2007 Chuetsu-oki, Japan, earthquake (Mw 6.6, depth = 9 km), which excited long-period ground motions in and around the Kanto basin almost 200 km from the epicenter. We test the performance of ambient noise Green's functions for long-period ground motion simulation. We use three components of F-net broadband data at KZK station, which is located near the source region, as a virtual source, and three components of six F-net stations in and around the Kanto basin to calculate the response. An advantage to applying this approach in Japan is that ambient-noise sources are active in diverse directions. The dominant period of the ambient noise for the F-net datasets is mostly 7 s over the year, and amplitudes are largest in winter. This period matches the dominant periods of the Kanto and Niigata basins. For the nine components of the ambient noise Green's functions, we have confirmed long-period components corresponding to Love and Rayleigh waves that can be used for simulation of the 2007 Chuetsu-oki earthquake. The relative amplitudes, phases, and durations of the ambient noise Green's functions at the F-net stations in and around the Kanto basin with respect to the F-net KZK station match fairly well those of the observed ground motions for the 2007 Chuetsu-oki earthquake.

  2. Computational Materials Science and Chemistry: Accelerating Discovery and Innovation through Simulation-Based Engineering and Science

    Energy Technology Data Exchange (ETDEWEB)

    Crabtree, George [Argonne National Lab. (ANL), Argonne, IL (United States); Glotzer, Sharon [University of Michigan; McCurdy, Bill [University of California Davis; Roberto, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2010-07-26

    This report is based on a SC Workshop on Computational Materials Science and Chemistry for Innovation on July 26-27, 2010, to assess the potential of state-of-the-art computer simulations to accelerate understanding and discovery in materials science and chemistry, with a focus on potential impacts in energy technologies and innovation. The urgent demand for new energy technologies has greatly exceeded the capabilities of today's materials and chemical processes. To convert sunlight to fuel, efficiently store energy, or enable a new generation of energy production and utilization technologies requires the development of new materials and processes of unprecedented functionality and performance. New materials and processes are critical pacing elements for progress in advanced energy systems and virtually all industrial technologies. Over the past two decades, the United States has developed and deployed the world's most powerful collection of tools for the synthesis, processing, characterization, and simulation and modeling of materials and chemical systems at the nanoscale, dimensions of a few atoms to a few hundred atoms across. These tools, which include world-leading x-ray and neutron sources, nanoscale science facilities, and high-performance computers, provide an unprecedented view of the atomic-scale structure and dynamics of materials and the molecular-scale basis of chemical processes. For the first time in history, we are able to synthesize, characterize, and model materials and chemical behavior at the length scale where this behavior is controlled. This ability is transformational for the discovery process and, as a result, confers a significant competitive advantage. Perhaps the most spectacular increase in capability has been demonstrated in high performance computing. Over the past decade, computational power has increased by a factor of a million due to advances in hardware and software. This rate of improvement, which shows no sign of

  3. Aacsfi-PSC. Advanced accelerator concepts for strong field interaction simulated with the Plasma-Simulation-Code

    Energy Technology Data Exchange (ETDEWEB)

    Ruhl, Hartmut [Munich Univ. (Germany). Chair for Computational and Plasma Physics

    2016-11-01

    Since the installation of SuperMUC phase 2, the 9216 nodes of phase 1 are more easily available for large-scale runs, allowing for the thin foil and AWAKE simulations. Besides, phase 2 could be used in parallel for high throughput of the ion acceleration simulations. Challenging for our project were the full-volume checkpoints required by PIC, which strained the I/O subsystem of SuperMUC to its limits. New approaches considered for the next-generation system, like burst buffers, could overcome this bottleneck. Additionally, as the FDTD solver in PIC is strongly bandwidth bound, PSC will benefit profoundly from high-bandwidth memory (HBM) that most likely will be available in future HPC machines. This will be of great advantage as in 2018 phase II of AWAKE should begin, with a longer plasma channel further increasing the need for additional computing resources. Last but not least, it is expected that our methods used in plasma physics (many-body interaction with radiation) will be more and more adapted for medical diagnostics and treatments. For this research field we expect centimeter-sized volumes with necessary resolutions of tens of micrometers, resulting in boxes of >10^12 voxels (100-200 TB) on a regular basis. In consequence the demand for computing time and especially for data storage and data handling capacities will also increase significantly.

  4. Monte Carlo simulation of a medical accelerator: application on a heterogeneous phantom

    International Nuclear Information System (INIS)

    Serrano, B.; Franchisseur, E.; Hachem, A.; Herault, J.; Marcie, S.; Bensadoun, R.J.

    2005-01-01

    The objective of this study is to seek an accurate and efficient method to calculate the dose distribution for small fields in high-gradient heterogeneity, typical for the Intensity Modulated Radiation Therapy (IMRT) technique in head and neck regions. This motivates a Monte Carlo (MC) simulation of the photon beam for the two nominal potential energies of 25 and 6 MV delivered by a medical linear electron accelerator (Linac) used at the Centre Antoine Lacassagne. These investigations were checked by means of an ionization chamber (IC). Some first adjustments to parameters given by the manufacturer for the 25 and 6 MV data have been applied to optimize the agreement between the IC and the MC simulation for the depth-dose and dose profile distributions. Good agreement between the MC-calculated and the measured data is only obtained when the mean energies of the electron beams are respectively 15 MeV and 5.2 MeV and the corresponding spot size diameters 2 and 3 mm. Once the MC simulation of the Linac is validated, these results permit us, in a second part, to check the calculated data given by a treatment planning system (TPS) on a heterogeneous phantom. The results show discrepancies of up to 7% between the TPS and the MC simulation. Those differences come from a poor approximation of the material density by the TPS. These encouraging results of the MC simulation will allow us afterwards to check the dose deposition given by the TPS for IMRT treatments. (authors)

  15. Design of 6 MeV linear accelerator based pulsed thermal neutron source: FLUKA simulation and experiment.

    Energy Technology Data Exchange (ETDEWEB)

    Patil, B.J., E-mail: bjp@physics.unipune.ac.in [Department of Physics, University of Pune, Pune 411 007 (India); Chavan, S.T.; Pethe, S.N.; Krishnan, R. [SAMEER, IIT Powai Campus, Mumbai 400 076 (India); Bhoraskar, V.N. [Department of Physics, University of Pune, Pune 411 007 (India); Dhole, S.D., E-mail: sanjay@physics.unipune.ac.in [Department of Physics, University of Pune, Pune 411 007 (India)

    2012-01-15

    The 6 MeV LINAC based pulsed thermal neutron source has been designed for bulk materials analysis. The design was optimized by varying different parameters of the target and materials for each region using the FLUKA code. The optimized design of the thermal neutron source gives a flux of 3 × 10^6 n cm^-2 s^-1 with more than 80% thermal neutrons, and the neutron-to-gamma ratio was 1 × 10^4 n cm^-2 mR^-1. The results of the prototype experiment and the simulation are found to be in good agreement with each other. - Highlights: • The optimized 6 MeV linear accelerator based thermal neutron source using FLUKA simulation. • Beryllium as a photonuclear target and reflector, polyethylene as a filter and shield, graphite as a moderator. • The optimized pulsed thermal neutron source gives a neutron flux of 3 × 10^6 n cm^-2 s^-1. • Results of the prototype experiment were compared with simulations and found to be in good agreement. • This source can effectively be used for bulk material analysis and the study of activation products.

  6. Optimal Acceleration-Velocity-Bounded Trajectory Planning in Dynamic Crowd Simulation

    Directory of Open Access Journals (Sweden)

    Fu Yue-wen

    2014-01-01

    Creating complex and realistic crowd behaviors, such as pedestrian navigation behavior with dynamic obstacles, is a difficult and time-consuming task. In this paper, we study one special type of crowd, which is composed of urgent individuals, normal individuals, and normal groups. We use three steps to construct the crowd simulation in a dynamic environment. The first one is that the urgent individuals move forward along a given path around dynamic obstacles and other crowd members. An optimal acceleration-velocity-bounded trajectory planning method is utilized to model their behaviors, which ensures that the durations of the generated trajectories are minimal and the urgent individuals are collision-free with dynamic obstacles (e.g., dynamic vehicles). In the second step, a pushing model is adopted to simulate the interactions between urgent members and normal ones, which ensures that the computational cost of the optimal trajectory planning is acceptable. The third step imitates the interactions among normal members using collision avoidance behavior and flocking behavior. Various simulation results demonstrate that these three steps produce realistic crowd phenomena similar to those of the real world.
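
    The minimum-time flavor of acceleration-velocity-bounded planning is easiest to see in one dimension, where it reduces to the classic trapezoidal or triangular velocity profile. The sketch below computes the minimal duration of a rest-to-rest move under assumed bounds; it is a simplification for illustration, not the paper's obstacle-aware planner.

```python
# Minimal 1-D sketch of the acceleration/velocity-bounded, minimum-time idea:
# the classic trapezoidal (or triangular) velocity profile from rest to rest.
import math

def min_time_profile(distance, v_max, a_max):
    """Return (total_time, profile_type) for a rest-to-rest 1-D move."""
    d = abs(distance)
    # Distance consumed by accelerating to v_max and braking back to zero.
    d_ramp = v_max ** 2 / a_max
    if d >= d_ramp:
        # Trapezoidal: accelerate, cruise at v_max, decelerate.
        t_cruise = (d - d_ramp) / v_max
        return 2.0 * v_max / a_max + t_cruise, "trapezoidal"
    # Triangular: never reaches v_max; peak speed is sqrt(a_max * d).
    return 2.0 * math.sqrt(d / a_max), "triangular"

print(min_time_profile(distance=20.0, v_max=2.0, a_max=1.0))  # (12.0, 'trapezoidal')
print(min_time_profile(distance=1.0,  v_max=2.0, a_max=1.0))  # (2.0, 'triangular')
```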

  7. Monte Carlo simulation to study the doses in an accelerator BNCT treatment

    International Nuclear Information System (INIS)

    Burlon, Alejandro A.; Valda, Alejandro A.; Somacal, Hector R.; Kreiner, Andres J.; Minsky, Daniel M.

    2003-01-01

    In this work the 7Li(p,n)7Be reaction has been studied as a neutron source for accelerator-based BNCT (Boron Neutron Capture Therapy). In order to optimize the design of the neutron production target and the beam shaping assembly, extensive MCNP simulations have been performed. These simulations include a thick Li metal target, a whole-body phantom, a moderator-reflector assembly (Al/AlF3 as moderator and graphite as reflector) and the treatment room. The doses were evaluated for two proton bombarding energies of 1.92 MeV (near the threshold of the reaction) and 2.3 MeV (near the resonance of the reaction) and for three Al/AlF3 moderator thicknesses (18, 26 and 34 cm). To assess the doses, a comparison using a Tumor Control Probability (TCP) model was done. In a second instance, the effect of the specific skin radiosensitivity (an RBE of 2.5 for the 10B(n,α)7Li reaction) and a 10B uptake of 17 ppm was considered for the scalp. Finally, the simulations show the advantage of irradiating with near-resonance-energy protons (2.3 MeV) because of the high neutron yield at this energy, leading to the lowest treatment times. Moreover, the 26 cm Al/AlF3 moderator has shown the best performance among the studied cases. (author)

  8. OpenMP-accelerated SWAT simulation using Intel C and FORTRAN compilers: Development and benchmark

    Science.gov (United States)

    Ki, Seo Jin; Sugimura, Tak; Kim, Albert S.

    2015-02-01

    We developed a practical method to accelerate execution of Soil and Water Assessment Tool (SWAT) using open (free) computational resources. The SWAT source code (rev 622) was recompiled using a non-commercial Intel FORTRAN compiler in Ubuntu 12.04 LTS Linux platform, and newly named iOMP-SWAT in this study. GNU utilities of make, gprof, and diff were used to develop the iOMP-SWAT package, profile memory usage, and check identicalness of parallel and serial simulations. Among 302 SWAT subroutines, the slowest routines were identified using GNU gprof, and later modified using Open Multiple Processing (OpenMP) library in an 8-core shared memory system. In addition, a C wrapping function was used to rapidly set large arrays to zero by cross compiling with the original SWAT FORTRAN package. A universal speedup ratio of 2.3 was achieved using input data sets of a large number of hydrological response units. As we specifically focus on acceleration of a single SWAT run, the use of iOMP-SWAT for parameter calibrations will significantly improve the performance of SWAT optimization.

  9. Simulation studies of electron acceleration by ion ring distributions in solar flares

    International Nuclear Information System (INIS)

    McClements, K.G.; Bingham, R.; Su, J.J.; Dawson, J.M.; Spicer, D.S.

    1990-07-01

    Using a 2½-D fully relativistic electromagnetic particle-in-cell (PIC) code we have investigated a potential electron acceleration mechanism in solar flares. The free energy is provided by ions which have a ring velocity distribution about the magnetic field direction. Ion rings may be produced by perpendicular shocks, which could in turn be generated by the super-Alfvenic motion of magnetic flux tubes emerging from the photosphere or by coronal mass ejections (CMEs). Such ion distributions are known to be unstable to the generation of lower hybrid waves, which have phase velocities in excess of the electron thermal speed parallel to the field and can therefore resonantly accelerate electrons in that direction. The simulations show the transfer of perpendicular ion energy to energetic electrons via lower hybrid wave turbulence. With plausible ion ring velocities, the process can account for the observationally inferred fluxes and energies of non-thermal electrons during the impulsive phase of flares. Our results also show electrostatic wave generation close to the plasma frequency: we suggest that this is due to bump-in-tail instability of the electron distribution. (author)

  10. Simulations and experiments on external electron injection for laser wakefield acceleration

    NARCIS (Netherlands)

    Dijk, van W.

    2010-01-01

    Laser wake field acceleration is a technique that can be used to accelerate electrons using electric fields that are several orders of magnitude higher than those available in conventional accelerators. With these higher fields, it is possible to drastically reduce the length of accelerator needed

  11. One-dimensional theory and simulation of acceleration in relativistic electron beam Raman scattering

    International Nuclear Information System (INIS)

    Abe, T.

    1986-01-01

    Raman scattering by a parallel relativistic electron beam was examined analytically and by numerical simulation. Incident wave energy can be transferred not only to the scattered electromagnetic wave but also to the beam. That is, the beam can be accelerated by the Doppler-shifted plasma oscillation accompanied by the scattered wave. The energy conversion rates for both were obtained. They increase with the γ value of the electron beam. For larger γ values of the beam, the energy of the incident wave is mainly transferred to the beam, while for smaller γ, the energy conversion rate to the scattered wave is about 0.2 times that to the beam. Even for smaller γ, the total energy conversion rate is about 0.1

  12. On using moving windows in finite element time domain simulation for long accelerator structures

    International Nuclear Information System (INIS)

    Lee, L.-Q.; Candel, Arno; Ng, Cho; Ko, Kwok

    2010-01-01

    A finite element moving window technique is developed to simulate the propagation of electromagnetic waves induced by the transit of a charged particle beam inside large and long structures. The window moving along with the beam in the computational domain adopts high-order finite element basis functions through p refinement and/or a high-resolution mesh through h refinement so that a sufficient accuracy is attained with substantially reduced computational costs. Algorithms to transfer discretized fields from one mesh to another, which are the keys to implementing a moving window in a finite element unstructured mesh, are presented. Numerical experiments are carried out using the moving window technique to compute short-range wakefields in long accelerator structures. The results are compared with those obtained from the normal finite element time domain (FETD) method and the advantages of using the moving window technique are discussed.

  13. Simulation study of accelerator based quasi-mono-energetic epithermal neutron beams for BNCT.

    Science.gov (United States)

    Adib, M; Habib, N; Bashter, I I; El-Mesiry, M S; Mansy, M S

    2016-01-01

    Filtered neutron techniques were applied to produce quasi-mono-energetic neutron beams in the energy range of 1.5-7.5 keV at the accelerator port, using the neutron spectrum generated from a Li(p,n)Be reaction. A simulation study was performed to characterize the filter components and transmitted beam lines. The features of the filtered beams are detailed in terms of the optimal thickness of the primary and additive components. A computer code named "QMNB-AS" was developed to carry out the required calculations. The filtered neutron beams had high purity and intensity, with low contamination from accompanying thermal neutrons, fast neutrons and γ-rays. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Accidental beam loss in superconducting accelerators: Simulations, consequences of accidents and protective measures

    International Nuclear Information System (INIS)

    Drozhdin, A.; Mokhov, N.; Parker, B.

    1994-02-01

    The consequences of an accidental beam loss in superconducting accelerators and colliders of the next generation range from the mundane to rather dramatic, i.e., from superconducting magnet quench, to overheating of critical components, to a total destruction of some units via explosion. Specific measures are required to minimize and eliminate such events as much as practical. In this paper we study such accidents taking the Superconducting Supercollider complex as an example. Particle tracking, beam loss and energy deposition calculations were done using the realistic machine simulation with the Monte-Carlo codes MARS 12 and STRUCT. Protective measures for minimizing the damaging effects of prefire and misfire of injection and extraction kicker magnets are proposed here

  15. Tools for simulation of high beam intensity ion accelerators; Simulationswerkzeuge fuer die Berechnung hochintensiver Ionenbeschleuniger

    Energy Technology Data Exchange (ETDEWEB)

    Tiede, Rudolf

    2009-07-09

    A new particle-in-cell space charge routine based on a fast Fourier transform was developed and implemented in the LORASR code. It provides the ability to perform up to several hundred batch-run simulations with up to 1 million macroparticles each within reasonable computation time. The new space charge routine was successfully validated in the framework of the European "High Intensity Pulsed Proton Injectors" (HIPPI) collaboration: several static Poisson solver benchmarking comparisons were performed, as well as particle tracking comparisons along the GSI UNILAC Alvarez section. Moreover, machine error setting routines and data analysis tools were developed and applied to error studies for the "Heidelberg Cancer Therapy" (HICAT) IH-type drift tube linear accelerator (linac), the FAIR Facility Proton Linac and the proposal of a linac for the "International Fusion Materials Irradiation Facility" (IFMIF) based on superconducting CH-type structures. (orig.)
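
    An FFT-based space-charge routine of the kind mentioned above typically solves a Poisson equation on the mesh in Fourier space. The sketch below shows that core solve on a periodic grid with a synthetic charge density; it does not reproduce LORASR's actual deposition scheme, boundary conditions, or tuning, and all numbers are illustrative.

```python
# Minimal sketch of an FFT-based space-charge (Poisson) solve on a periodic
# grid: laplacian(phi) = -rho/eps0, solved mode by mode in Fourier space.
import numpy as np

eps0 = 8.8541878128e-12
n = 64                                   # grid points per axis (hypothetical)
L = 0.01                                 # box length [m]
dx = L / n

# Toy charge density: a single Gaussian bunch on the mesh.
x = (np.arange(n) - n / 2) * dx
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
rho = np.exp(-(X**2 + Y**2 + Z**2) / (2 * (L / 20) ** 2))

k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
k2 = KX**2 + KY**2 + KZ**2
k2[0, 0, 0] = 1.0                        # avoid division by zero for the mean mode

phi_hat = np.fft.fftn(rho) / (eps0 * k2) # from -k^2 phi_hat = -rho_hat/eps0
phi_hat[0, 0, 0] = 0.0                   # fix the (arbitrary) mean potential
phi = np.real(np.fft.ifftn(phi_hat))

Ex = -np.gradient(phi, dx, axis=0)       # field component used for the particle kick
print(phi.shape, Ex.shape)
```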

  16. Effects of dimensionality and laser polarization on kinetic simulations of laser-ion acceleration in the transparency regime

    Science.gov (United States)

    Stark, David; Yin, Lin; Albright, Brian; Guo, Fan

    2017-10-01

    The often cost-prohibitive nature of three-dimensional (3D) kinetic simulations of laser-plasma interactions has resulted in heavy use of two-dimensional (2D) simulations to extract physics. However, depending on whether the polarization is modeled as 2D-S or 2D-P (laser polarization in and out of the simulation plane, respectively), different results arise. In laser-ion acceleration in the transparency regime, VPIC particle-in-cell simulations show that 2D-S and 2D-P capture different physics that appears in 3D simulations. The electron momentum distribution is virtually two-dimensional in 2D-P, unlike the more isotropic distributions in 2D-S and 3D, leading to greater heating in the simulation plane. As a result, target expansion time scales and density thresholds for the onset of relativistic transparency differ dramatically between 2D-S and 2D-P. The artificial electron heating in 2D-P exaggerates the effectiveness of target-normal sheath acceleration (TNSA) into its dominant acceleration mechanism, whereas 2D-S and 3D both have populations accelerated preferentially during transparency to higher energies than those of TNSA. Funded by the LANL Directed Research and Development Program.

  17. Optimization of accelerator target and detector for portal imaging using Monte Carlo simulation and experiment

    International Nuclear Information System (INIS)

    Flampouri, S.; Evans, P.M.; Partridge, M.; Nahum, A.E.; Verhaegen, A.E.; Spezi, E.

    2002-01-01

    Megavoltage portal images suffer from poor quality compared to those produced with kilovoltage x-rays. Several authors have shown that the image quality can be improved by modifying the linear accelerator to generate more low-energy photons. This work addresses the problem of using Monte Carlo simulation and experiment to optimize the beam and detector combination to maximize image quality for a given patient thickness. A simple model of the whole imaging chain was developed for investigation of the effect of the target parameters on the quality of the image. The optimum targets (6 mm thick aluminium and 1.6 mm copper) were installed in an Elekta SL25 accelerator. The first beam will be referred to as Al6 and the second as Cu1.6. A tissue-equivalent contrast phantom was imaged with the 6 MV standard photon beam and the experimental beams with standard radiotherapy and mammography film/screen systems. The arrangement with a thin Al target/mammography system improved the contrast from 1.4 cm bone in 5 cm water to 19% compared with 2% for the standard arrangement of a thick, high-Z target/radiotherapy verification system. The linac/phantom/detector system was simulated with the BEAM/EGS4 Monte Carlo code. Contrast calculated from the predicted images was in good agreement with the experiment (to within 2.5%). The use of MC techniques to predict images accurately, taking into account the whole imaging system, is a powerful new method for portal imaging system design optimization. (author)

  18. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.

    Science.gov (United States)

    Gorshkov, Anton V; Kirillin, Mikhail Yu

    2015-08-01

    Over two decades, the Monte Carlo technique has become a gold standard in simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach of porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator allows reducing computational time of MC simulation and obtaining simulation speed-up comparable to GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.
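
    The photon-migration workload being accelerated here is dominated by many independent photon random walks, which is what makes it map well onto many-core hardware. The sketch below is a deliberately simplified homogeneous-medium, isotropic-scattering version of such a kernel with made-up optical coefficients, not the authors' code.

```python
# Minimal sketch of a Monte Carlo photon-migration kernel: photons take
# exponentially distributed steps, lose weight by absorption, and re-scatter.
# Real codes handle heterogeneous tissue, Henyey-Greenstein phase functions
# and detector geometry; this toy shows why the workload parallelizes so well.
import numpy as np

mu_a, mu_s = 0.1, 10.0                   # absorption/scattering coefficients [1/mm], illustrative
mu_t = mu_a + mu_s
rng = np.random.default_rng(1)

def simulate_photon(max_steps=1000):
    pos = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])
    weight = 1.0
    for _ in range(max_steps):
        step = -np.log(rng.random()) / mu_t          # free path length [mm]
        pos = pos + step * direction
        weight *= mu_s / mu_t                        # survival (albedo) weighting
        if weight < 1e-4:
            break
        # Isotropic re-scattering (toy choice instead of Henyey-Greenstein).
        cos_t = 2.0 * rng.random() - 1.0
        phi = 2.0 * np.pi * rng.random()
        sin_t = np.sqrt(1.0 - cos_t**2)
        direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    return pos, weight

# Independent photons: trivially parallel across cores or accelerator threads.
final = [simulate_photon() for _ in range(1000)]
print(len(final))
```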

  19. Real-time cavity simulator-based low-level radio-frequency test bench and applications for accelerators

    Science.gov (United States)

    Qiu, Feng; Michizono, Shinichiro; Miura, Takako; Matsumoto, Toshihiro; Liu, Na; Wibowo, Sigit Basuki

    2018-03-01

    A low-level radio-frequency (LLRF) control system is required to regulate the rf field in the rf cavity used for beam acceleration. As the LLRF system is usually complex, testing of the basic functions or control algorithms of this system in real time and in advance of beam commissioning is strongly recommended. However, the equipment necessary to test the LLRF system, such as superconducting cavities and high-power rf sources, is very expensive; therefore, we have developed a field-programmable gate array (FPGA)-based cavity simulator as a substitute for real rf cavities. Digital models of the cavity and other rf systems are implemented in the FPGA. The main components include cavity baseband models for the fundamental and parasitic modes, a mechanical model of the Lorentz force detuning, and a model of the beam current. Furthermore, in our simulator, the disturbance model used to simulate the power-supply ripples and microphonics is also carefully considered. Based on the presented cavity simulator, we have established an LLRF system test bench that can be applied to different cavity operational conditions. The simulator performance has been verified by comparison with real cavities in KEK accelerators. In this paper, the development and implementation of this cavity simulator is presented first, and the LLRF test bench based on the presented simulator is constructed. The results are then compared with those for KEK accelerators. Finally, several LLRF applications of the cavity simulator are illustrated.
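
    The cavity baseband model at the heart of such a simulator is commonly a first-order complex low-pass filter whose bandwidth is the cavity half-bandwidth and whose imaginary pole part is the detuning. The sketch below integrates that generic model for a constant drive with illustrative parameter values; the Lorentz-force detuning, beam-loading, and disturbance models described in the paper are omitted.

```python
# Minimal sketch of a baseband (I/Q) cavity model: a first-order complex
# low-pass driven by the forward rf. Parameter values are illustrative.
import numpy as np

f_half = 200.0                 # cavity half-bandwidth [Hz] (illustrative)
detune = 50.0                  # static detuning [Hz] (illustrative)
dt = 1e-6                      # simulation step [s]
n = 20000

w_half = 2 * np.pi * f_half
dw = 2 * np.pi * detune

drive = np.ones(n, dtype=complex)        # constant forward drive (baseband I/Q)
v = np.zeros(n, dtype=complex)           # cavity voltage (baseband I/Q)

for k in range(n - 1):
    # dV/dt = (-w_half + j*dw) * V + w_half * drive
    v[k + 1] = v[k] + dt * ((-w_half + 1j * dw) * v[k] + w_half * drive[k])

print(abs(v[-1]), np.angle(v[-1]))       # steady-state amplitude and phase offset set by the detuning
```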

  20. Real-time cavity simulator-based low-level radio-frequency test bench and applications for accelerators

    Directory of Open Access Journals (Sweden)

    Feng Qiu

    2018-03-01

    A low-level radio-frequency (LLRF) control system is required to regulate the rf field in the rf cavity used for beam acceleration. As the LLRF system is usually complex, testing of the basic functions or control algorithms of this system in real time and in advance of beam commissioning is strongly recommended. However, the equipment necessary to test the LLRF system, such as superconducting cavities and high-power rf sources, is very expensive; therefore, we have developed a field-programmable gate array (FPGA)-based cavity simulator as a substitute for real rf cavities. Digital models of the cavity and other rf systems are implemented in the FPGA. The main components include cavity baseband models for the fundamental and parasitic modes, a mechanical model of the Lorentz force detuning, and a model of the beam current. Furthermore, in our simulator, the disturbance model used to simulate the power-supply ripples and microphonics is also carefully considered. Based on the presented cavity simulator, we have established an LLRF system test bench that can be applied to different cavity operational conditions. The simulator performance has been verified by comparison with real cavities in KEK accelerators. In this paper, the development and implementation of this cavity simulator is presented first, and the LLRF test bench based on the presented simulator is constructed. The results are then compared with those for KEK accelerators. Finally, several LLRF applications of the cavity simulator are illustrated.

  1. BrainFrame: a node-level heterogeneous accelerator platform for neuron simulations

    Science.gov (United States)

    Smaragdos, Georgios; Chatzikonstantis, Georgios; Kukreja, Rahul; Sidiropoulos, Harry; Rodopoulos, Dimitrios; Sourdis, Ioannis; Al-Ars, Zaid; Kachris, Christoforos; Soudris, Dimitrios; De Zeeuw, Chris I.; Strydis, Christos

    2017-12-01

    Objective. The advent of high-performance computing (HPC) in recent years has led to its increasing use in brain studies through computational models. The scale and complexity of such models are constantly increasing, leading to challenging computational requirements. Even though modern HPC platforms can often deal with such challenges, the vast diversity of the modeling field does not permit for a homogeneous acceleration platform to effectively address the complete array of modeling requirements. Approach. In this paper we propose and build BrainFrame, a heterogeneous acceleration platform that incorporates three distinct acceleration technologies, an Intel Xeon-Phi CPU, a NVidia GP-GPU and a Maxeler Dataflow Engine. The PyNN software framework is also integrated into the platform. As a challenging proof of concept, we analyze the performance of BrainFrame on different experiment instances of a state-of-the-art neuron model, representing the inferior-olivary nucleus using a biophysically-meaningful, extended Hodgkin-Huxley representation. The model instances take into account not only the neuronal-network dimensions but also different network-connectivity densities, which can drastically affect the workload’s performance characteristics. Main results. The combined use of different HPC technologies demonstrates that BrainFrame is better able to cope with the modeling diversity encountered in realistic experiments while at the same time running on significantly lower energy budgets. Our performance analysis clearly shows that the model directly affects performance and all three technologies are required to cope with all the model use cases. Significance. The BrainFrame framework is designed to transparently configure and select the appropriate back-end accelerator technology for use per simulation run. The PyNN integration provides a familiar bridge to the vast number of models already available. Additionally, it gives a clear roadmap for extending the platform

  2. Dislocation motion and the microphysics of flash heating and weakening of faults during earthquakes

    NARCIS (Netherlands)

    Spagnuolo, Elena; Plümper, Oliver; Violay, Marie; Cavallo, Andrea; Di Toro, Giulio

    2016-01-01

    Earthquakes are the result of slip along faults and are due to the decrease of rock frictional strength (dynamic weakening) with increasing slip and slip rate. Friction experiments simulating the abrupt accelerations (>>10 m/s^2), slip rates (~1 m/s), and normal stresses (>>10 MPa) expected at the

  3. Faults Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Through the study of faults and their effects, much can be learned about the size and recurrence intervals of earthquakes. Faults also teach us about crustal...

  4. The Italian Project S2 - Task 4:Near-fault earthquake ground motion simulation in the Sulmona alluvial basin

    Science.gov (United States)

    Stupazzini, M.; Smerzini, C.; Cauzzi, C.; Faccioli, E.; Galadini, F.; Gori, S.

    2009-04-01

    OpenSHA: A Developing Community-Modeling Environment for Seismic Hazard Analysis, Seism. Res. Lett. 74, 406-419. Stupazzini M., R. Paolucci, H. Igel (2009), Near-fault earthquake ground motion simulation in the Grenoble Valley by a high-performance spectral element code, accepted for publication in Bull. of the Seism. Soc. of America.

  5. Simulating Earthquake Rupture and Off-Fault Fracture Response: Application to the Safety Assessment of the Swedish Nuclear Waste Repository

    KAUST Repository

    Falth, B.; Hokmark, H.; Lund, B.; Mai, Paul Martin; Roberts, R.; Munier, R.

    2014-01-01

    To assess the long-term safety of a deep repository of spent nuclear fuel, upper bound estimates of seismically induced secondary fracture shear displacements are needed. For this purpose, we analyze a model including an earthquake fault, which

  6. Network Fault Diagnosis Using DSM

    Institute of Scientific and Technical Information of China (English)

    Jiang Hao; Yan Pu-liu; Chen Xiao; Wu Jing

    2004-01-01

    The difference similitude matrix (DSM) is effective in reducing an information system, offering a high reduction rate and high validity. We use the DSM method to analyze the fault data of computer networks and obtain fault diagnosis rules. By discretizing the relative values of the fault data, we obtain the information system of the fault data. The DSM method reduces the information system and yields the diagnosis rules. Simulation with an actual scenario shows that fault diagnosis based on DSM can obtain few but effective rules.

  7. Microstructural changes after control rolling and interrupted accelerated cooling simulations in pipeline steel

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Mourino, Nuria; Petrov, Roumen [Department of Materials Science and Engineering, Ghent University, Technologiepark Zwijnaarde 903, B-9052 Ghent (Belgium); Bae, Jin-Ho; Kim, Kisoo [Sheet Products and Process Research Group, POSCO, Jeonnam, 545-090 (Korea, Republic of); Kestens, Leo A.I. [Department of Materials Science and Engineering, Ghent University, Technologiepark Zwijnaarde 903, B-9052 Ghent (Belgium); Department of Materials Science and Engineering, Delft University of Technology, Mekelweg 2, 2628 CD, Delft (Netherlands)

    2011-04-15

    The γ-α transformation and final microstructure in pipeline steel were studied by carrying out a number of physical simulations of industrial hot rolling schedules. In particular, the effect of the reheating temperature, deformation and cooling parameters on the transformation temperatures and final grain size was considered, with the goal of obtaining a thermo-mechanical processing route that generates appropriate microstructures for pipeline applications. The CCT diagram of the steel was derived experimentally by means of dilatometric tests. Hot torsion experiments were applied in a multi-deformation cycle at various temperatures in the austenite region to simulate industrial rolling schedules. By variation of the reheating temperature, equivalent strain, and accelerated cooling, different types of microstructures were obtained. It was found that the deformation increases the transformation temperatures whereas higher cooling rates after deformation decrease them. The post-deformation microstructure consists of fine bainitic-ferrite grains with dispersed carbides and a small amount of dispersed martensite/austenite islands, which can be controlled by varying the reheating temperature, deformation and post-deformation cooling. The detailed microstructure characteristics obtained from the present work could be used to optimize the mechanical properties, strength and toughness of pipeline steel grades by an appropriate control of the thermo-mechanical processing. (Copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  8. GPU accelerated flow solver for direct numerical simulation of turbulent flows

    Energy Technology Data Exchange (ETDEWEB)

    Salvadore, Francesco [CASPUR – via dei Tizii 6/b, 00185 Rome (Italy); Bernardini, Matteo, E-mail: matteo.bernardini@uniroma1.it [Department of Mechanical and Aerospace Engineering, University of Rome ‘La Sapienza’ – via Eudossiana 18, 00184 Rome (Italy); Botti, Michela [CASPUR – via dei Tizii 6/b, 00185 Rome (Italy)

    2013-02-15

    Graphical processing units (GPUs), characterized by significant computing performance, are nowadays very appealing for the solution of computationally demanding tasks in a wide variety of scientific applications. However, to run on GPUs, existing codes need to be ported and optimized, a procedure which is not yet standardized and may require non trivial efforts, even to high-performance computing specialists. In the present paper we accurately describe the porting to CUDA (Compute Unified Device Architecture) of a finite-difference compressible Navier–Stokes solver, suitable for direct numerical simulation (DNS) of turbulent flows. Porting and validation processes are illustrated in detail, with emphasis on computational strategies and techniques that can be applied to overcome typical bottlenecks arising from the porting of common computational fluid dynamics solvers. We demonstrate that a careful optimization work is crucial to get the highest performance from GPU accelerators. The results show that the overall speedup of one NVIDIA Tesla S2070 GPU is approximately 22 compared with one AMD Opteron 2352 Barcelona chip and 11 compared with one Intel Xeon X5650 Westmere core. The potential of GPU devices in the simulation of unsteady three-dimensional turbulent flows is proved by performing a DNS of a spatially evolving compressible mixing layer.

  9. PARTICLE ACCELERATION AND THE ORIGIN OF X-RAY FLARES IN GRMHD SIMULATIONS OF SGR A*

    Energy Technology Data Exchange (ETDEWEB)

    Ball, David; Özel, Feryal; Psaltis, Dimitrios; Chan, Chi-kwan [Steward Observatory and Department of Astronomy, University of Arizona (United States)

    2016-07-20

    Significant X-ray variability and flaring has been observed from Sgr A* but is poorly understood from a theoretical standpoint. We perform general relativistic magnetohydrodynamic simulations that take into account a population of non-thermal electrons with energy distributions and injection rates that are motivated by PIC simulations of magnetic reconnection. We explore the effects of including these non-thermal electrons on the predicted broadband variability of Sgr A* and find that X-ray variability is a generic result of localizing non-thermal electrons to highly magnetized regions, where particles are likely to be accelerated via magnetic reconnection. The proximity of these high-field regions to the event horizon forms a natural connection between IR and X-ray variability and accounts for the rapid timescales associated with the X-ray flares. The qualitative nature of this variability is consistent with observations, producing X-ray flares that are always coincident with IR flares, but not vice versa, i.e., there are a number of IR flares without X-ray counterparts.

  10. Simulation and experimental studies on electron cloud effects in particle accelerators

    CERN Document Server

    Romano, Annalisa; Cimino, Roberto; Iadarola, Giovanni; Rumolo, Giovanni

    Electron Cloud (EC) effects represent a serious limitation for particle accelerators operating with intense beams of positively charged particles. This Master thesis work presents simulation and experimental studies on EC effects carried out in collaboration with the European Organization for Nuclear Research (CERN) in Geneva and with the INFN-LNF laboratories in Frascati. During the Long Shutdown 1 (LS1, 2013-2014), a new detector for EC measurements has been installed in one of the main magnets of the CERN Proton Synchrotron (PS) to study the EC formation in presence of a strong magnetic field. The aim is to develop a reliable EC model of the PS vacuum chamber in order to identify possible limitation for the future high intensity and high brightness beams foreseen by Large Hadron Collider (LHC) Injectors Upgrade (LIU) project. Numerical simulations with the new PyECLOUD code were performed in order to quantify the expected signal at the detector under different beam conditions. The experimental activity...

  11. Large-scale conformational changes of Trypanosoma cruzi proline racemase predicted by accelerated molecular dynamics simulation.

    Directory of Open Access Journals (Sweden)

    César Augusto F de Oliveira

    2011-10-01

    Chagas' disease, caused by the protozoan parasite Trypanosoma cruzi (T. cruzi), is a life-threatening illness affecting 11-18 million people. Currently available treatments are limited, with unacceptable efficacy and safety profiles. Recent studies have revealed an essential T. cruzi proline racemase enzyme (TcPR) as an attractive candidate for improved chemotherapeutic intervention. Conformational changes associated with substrate binding to TcPR are believed to expose critical residues that elicit a host mitogenic B-cell response, a process contributing to parasite persistence and immune system evasion. Characterization of the conformational states of TcPR requires access to long-time-scale motions that are currently inaccessible by standard molecular dynamics simulations. Here we describe advanced accelerated molecular dynamics that extend the effective simulation time and capture large-scale motions of functional relevance. Conservation and fragment mapping analyses identified potential conformational epitopes located in the vicinity of newly identified transient binding pockets. The newly identified open TcPR conformations revealed by this study, along with knowledge of the closed to open interconversion mechanism, advance our understanding of TcPR function. The results and the strategy adopted in this work constitute an important step toward the rationalization of the molecular basis behind the mitogenic B-cell response of TcPR and provide new insights for future structure-based drug discovery.

  12. Reconstruction of X-rays spectra of clinical linear accelerators using the generalized simulated annealing method

    International Nuclear Information System (INIS)

    Manrique, John Peter O.; Costa, Alessandro M.

    2016-01-01

    The spectral distribution of megavoltage X-rays used in radiotherapy departments is a fundamental quantity from which, in principle, all relevant information required for radiotherapy treatments can be determined. To calculate the dose delivered to patients undergoing radiation therapy, treatment planning systems (TPS) are used; these make use of convolution and superposition algorithms and require prior knowledge of the photon fluence spectrum to calculate three-dimensional doses, thus ensuring better accuracy in the tumor control probabilities while keeping the normal tissue complication probabilities low. In this work we obtained the photon fluence spectrum of the 6 MV X-ray beam of the SIEMENS ONCOR linear accelerator, using an inverse method to reconstruct the photon spectra from transmission curves measured for different thicknesses of aluminum; the method used for reconstruction of the spectra is a stochastic technique known as generalized simulated annealing (GSA), based on the quasi-equilibrium statistics of Tsallis. For the validation of the reconstructed spectra we calculated the percentage depth dose (PDD) curve for the 6 MV beam, using Monte Carlo simulation with the Penelope code, and from the PDD we then calculated the beam quality index TPR20/10. (author)
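
    The inverse reconstruction described above can be sketched with a simplified annealing loop: spectral bin weights are perturbed and accepted or rejected against the misfit to transmission measurements. The sketch below uses a plain Metropolis schedule rather than the Tsallis-based GSA of the paper, and the attenuation coefficients, thicknesses and "measured" transmissions are all synthetic.

```python
# Minimal sketch: recover spectral bin weights from aluminum transmission
# curves by annealing. Plain Metropolis cooling stands in for the paper's
# generalized (Tsallis) simulated annealing; all data here are synthetic.
import numpy as np

rng = np.random.default_rng(3)

mu = np.array([0.8, 0.4, 0.2, 0.12, 0.08])     # Al attenuation per energy bin [1/cm], illustrative
d = np.linspace(0.0, 10.0, 25)                 # absorber thicknesses [cm]

def transmission(w):
    w = w / w.sum()
    return (w[None, :] * np.exp(-np.outer(d, mu))).sum(axis=1)

w_true = np.array([0.05, 0.2, 0.35, 0.3, 0.1])
t_meas = transmission(w_true)                  # stands in for measured transmission data

def cost(w):
    return np.sum((transmission(w) - t_meas) ** 2)

w = np.full(5, 0.2)                            # flat initial guess
c = cost(w)
T = 1e-3                                       # initial "temperature"
for step in range(20000):
    trial = np.clip(w + rng.normal(0, 0.02, size=5), 1e-6, None)
    c_trial = cost(trial)
    if c_trial < c or rng.random() < np.exp(-(c_trial - c) / T):
        w, c = trial, c_trial                  # Metropolis accept
    T *= 0.9995                                # geometric cooling schedule

print(w / w.sum())                             # compare with w_true
```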

  13. Accelerated damage studies of titanate ceramics containing simulated PW-4b and JW-A waste

    International Nuclear Information System (INIS)

    Hart, K.P.; Vance, E.R.; Lumpkin, G.R.; Mitamura, H.; Matsumoto, S.; Banba, T.

    1999-01-01

    Ceramic waste forms are affected by radiation damage, primarily arising from alpha-decay processes that can lead to volume expansion and amorphization of the component crystalline phases. Understanding the extent and impact of these effects on the overall durability of the waste form is critical to the prediction of its long-term performance under repository conditions. Since 1985 ANSTO and JAERI have carried out joint studies on the use of 244Cm to simulate alpha-radiation damage in ceramic waste forms. These studies have focussed on synroc formulations doped with simulated PW-4b and JW-A wastes. The studies have established the relationship between density change and irradiation levels for Synroc containing JW-A and PW-4b wastes. The storage of samples at 200 °C halves the rate of decrease in the density of the samples compared to that measured at room temperature. This effect is consistent with that found for natural samples, where the amorphization of natural samples stored under crustal conditions is lower, by factors between 2 and 4, than that measured for samples from accelerated doping experiments stored at room temperature. (J.P.N.)

  14. MO-DE-BRA-02: SIMAC: A Simulation Tool for Teaching Linear Accelerator Physics

    Energy Technology Data Exchange (ETDEWEB)

    Carlone, M; Harnett, N [Princess Margaret Hospital, Toronto, ON (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario (Canada); Harris, W [Duke University Medical Physics Graduate Program, Durham NC (United States); Norrlinger, B [Princess Margaret Hospital, Toronto, ON (Canada); MacPherson, M [The Ottawa Hospital, Ottawa, Ontario (Canada); Lamey, M [Trillium Health Partners, Mississauga, Ontario (Canada); Oldham, M [Duke University Medical Center, Durham NC (United States); Duke University Medical Physics Graduate Program, Durham NC (United States); Anderson, R

    2016-06-15

    Purpose: The first goal of this work is to develop software that can simulate the physics of linear accelerators (linacs). The second goal is to show that this simulation tool is effective in teaching linac physics to medical physicists and linac service engineers. Methods: Linacs were modeled using analytical expressions that can correctly describe the physical response of a linac to parameter changes in real time. These expressions were programmed with a graphical user interface in order to produce an environment similar to that of linac service mode. The software, “SIMAC”, has been used as a learning aid in a professional development course 3 times (2014 – 2016) as well as in a physics graduate program. Exercises were developed to supplement the didactic components of the courses, consisting of activities designed to reinforce the concepts of beam loading; the effect of steering coil currents on beam symmetry; and the relationship between beam energy and flatness. Results: SIMAC was used to teach 35 professionals (medical physicists; regulators; service engineers; 1 week course) as well as 20 graduate students (1 month project). In the student evaluations, 85% of the students rated the effectiveness of SIMAC as very good or outstanding, and 70% rated the software as the most effective part of the courses. Exercise results were collected showing that 100% of the students were able to use the software correctly. In exercises involving gross changes to linac operating points (i.e. energy changes) the majority of students were able to correctly perform these beam adjustments. Conclusion: Software simulation (SIMAC) can be used to effectively teach linac physics. In short courses, students were able to correctly make gross parameter adjustments that typically require much longer training times using conventional training methods.

  15. MO-DE-BRA-02: SIMAC: A Simulation Tool for Teaching Linear Accelerator Physics

    International Nuclear Information System (INIS)

    Carlone, M; Harnett, N; Harris, W; Norrlinger, B; MacPherson, M; Lamey, M; Oldham, M; Anderson, R

    2016-01-01

    Purpose: The first goal of this work is to develop software that can simulate the physics of linear accelerators (linacs). The second goal is to show that this simulation tool is effective in teaching linac physics to medical physicists and linac service engineers. Methods: Linacs were modeled using analytical expressions that can correctly describe the physical response of a linac to parameter changes in real time. These expressions were programmed with a graphical user interface in order to produce an environment similar to that of linac service mode. The software, “SIMAC”, has been used as a learning aid in a professional development course 3 times (2014 – 2016) as well as in a physics graduate program. Exercises were developed to supplement the didactic components of the courses, consisting of activities designed to reinforce the concepts of beam loading; the effect of steering coil currents on beam symmetry; and the relationship between beam energy and flatness. Results: SIMAC was used to teach 35 professionals (medical physicists; regulators; service engineers; 1 week course) as well as 20 graduate students (1 month project). In the student evaluations, 85% of the students rated the effectiveness of SIMAC as very good or outstanding, and 70% rated the software as the most effective part of the courses. Exercise results were collected showing that 100% of the students were able to use the software correctly. In exercises involving gross changes to linac operating points (i.e. energy changes) the majority of students were able to correctly perform these beam adjustments. Conclusion: Software simulation (SIMAC) can be used to effectively teach linac physics. In short courses, students were able to correctly make gross parameter adjustments that typically require much longer training times using conventional training methods.

  16. Ras conformational switching: simulating nucleotide-dependent conformational transitions with accelerated molecular dynamics.

    Directory of Open Access Journals (Sweden)

    Barry J Grant

    2009-03-01

    Ras mediates signaling pathways controlling cell proliferation and development by cycling between GTP- and GDP-bound active and inactive conformational states. Understanding the complete reaction path of this conformational change and its intermediary structures is critical to understanding Ras signaling. We characterize the nucleotide-dependent conformational transition using multiple-barrier-crossing accelerated molecular dynamics (aMD) simulations. These transitions, achieved for the first time for wild-type Ras, are impossible to observe with classical molecular dynamics (cMD) simulations due to the large energetic barrier between end states. Mapping the reaction path onto a conformer plot describing the distribution of the crystallographic structures enabled identification of highly populated intermediate structures. These structures have unique switch orientations (residues 25-40 and 57-75) intermediate between GTP and GDP states, or distinct loop3 (46-49), loop7 (105-110), and alpha5 C-terminus (159-166) conformations distal from the nucleotide-binding site. In addition, these barrier-crossing trajectories predict novel nucleotide-dependent correlated motions, including correlations of alpha2 (residues 66-74) with alpha3-loop7 (93-110), loop2 (26-37) with loop10 (145-151), and loop3 (46-49) with alpha5 (152-167). The interconversion between newly identified Ras conformations revealed by this study advances our mechanistic understanding of Ras function. In addition, the pattern of correlated motions provides new evidence for a dynamic linkage between the nucleotide-binding site and the membrane-interacting C-terminus critical for the signaling function of Ras. Furthermore, normal mode analysis indicates that the dominant collective motion that occurs during nucleotide-dependent conformational exchange, and captured in aMD (but absent in cMD) simulations, is a low-frequency motion intrinsic to the structure.
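
    The accelerated-MD approach used here rests on a boost potential that raises energy basins lying below a threshold E, reducing barriers so that slow conformational transitions become observable within reachable simulation time. The sketch below applies the commonly used aMD boost form to a toy one-dimensional double well; the threshold and alpha values are chosen purely for illustration and do not correspond to the Ras system.

```python
# Minimal sketch of the aMD boost idea: where the potential V lies below a
# threshold E it is raised by dV = (E - V)^2 / (alpha + E - V), which shrinks
# barriers while leaving regions with V >= E untouched.

def V(x):
    # Toy double-well potential standing in for the Ras energy landscape.
    return 10.0 * (x**2 - 1.0) ** 2          # wells at x = +/-1, barrier of 10 at x = 0

def boosted(x, E=8.0, alpha=4.0):
    v = V(x)
    if v >= E:
        return v                              # above the threshold: unmodified
    return v + (E - v) ** 2 / (alpha + E - v) # below the threshold: boosted upward

print(V(0.0), boosted(0.0))   # barrier top: 10.0 and 10.0 (V >= E, so unchanged)
print(V(1.0), boosted(1.0))   # well bottom: 0.0 raised to ~5.3, so the effective barrier shrinks
```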

  17. Advanced Simulation and Optimization Tools for Dynamic Aperture of Non-scaling FFAGs and Accelerators including Modern User Interfaces

    International Nuclear Information System (INIS)

    Mills, F.; Makino, K.; Berz, M.; Johnstone, C.

    2010-01-01

    With the U.S. experimental effort in HEP largely located at laboratories supporting the operations of large, highly specialized accelerators, colliding beam facilities, and detector facilities, the understanding and prediction of high energy particle accelerators becomes critical to the success, overall, of the DOE HEP program. One area in which small businesses can contribute to the ongoing success of the U.S. program in HEP is through innovations in computer techniques and sophistication in the modeling of high-energy accelerators. Accelerator modeling at these facilities is performed by experts with the product generally highly specific and representative only of in-house accelerators or special-interest accelerator problems. Development of new types of accelerators like FFAGs with their wide choices of parameter modifications, complicated fields, and the simultaneous need to efficiently handle very large emittance beams requires the availability of new simulation environments to assure predictability in operation. In this, ease of use and interfaces are critical to realizing a successful model, or optimization of a new design or working parameters of machines. In Phase I, various core modules for the design and analysis of FFAGs were developed and Graphical User Interfaces (GUI) have been investigated instead of the more general yet less easily manageable console-type output COSY provides.

  18. Advanced Simulation and Optimization Tools for Dynamic Aperture of Non-scaling FFAGs and Accelerators including Modern User Interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Mills, F.; Makino, Kyoko; Berz, Martin; Johnstone, C.

    2010-09-01

    With the U.S. experimental effort in HEP largely located at laboratories supporting the operations of large, highly specialized accelerators, colliding beam facilities, and detector facilities, the understanding and prediction of high energy particle accelerators becomes critical to the success, overall, of the DOE HEP program. One area in which small businesses can contribute to the ongoing success of the U.S. program in HEP is through innovations in computer techniques and sophistication in the modeling of high-energy accelerators. Accelerator modeling at these facilities is performed by experts with the product generally highly specific and representative only of in-house accelerators or special-interest accelerator problems. Development of new types of accelerators like FFAGs with their wide choices of parameter modifications, complicated fields, and the simultaneous need to efficiently handle very large emittance beams requires the availability of new simulation environments to assure predictability in operation. In this, ease of use and interfaces are critical to realizing a successful model, or optimization of a new design or working parameters of machines. In Phase I, various core modules for the design and analysis of FFAGs were developed and Graphical User Interfaces (GUI) have been investigated instead of the more general yet less easily manageable console-type output COSY provides.

  19. Fault finder

    Science.gov (United States)

    Bunch, Richard H.

    1986-01-01

    A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
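
    As a generic illustration of how synchronized timing at two line terminals yields a fault distance (a standard double-ended traveling-wave calculation; the actual processing in this particular device is not spelled out in the record and may differ), the sketch below assumes a surge propagation speed and well-aligned clocks.

      # Double-ended traveling-wave fault location (generic illustration only).
      # A fault at distance d from the master launches surges toward both ends:
      #   t_master = d / v,  t_remote = (L - d) / v  =>  d = (L + v*(t_master - t_remote)) / 2
      def fault_distance_km(line_length_km, t_master_s, t_remote_s,
                            wave_speed_km_per_s=2.9e5):
          return 0.5 * (line_length_km + wave_speed_km_per_s * (t_master_s - t_remote_s))

      if __name__ == "__main__":
          v, length, d_true = 2.9e5, 100.0, 30.0        # assumed values
          t_m, t_r = d_true / v, (length - d_true) / v
          print(f"estimated fault distance: {fault_distance_km(length, t_m, t_r):.1f} km")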

  20. Experimental and Simulated Characterization of a Beam Shaping Assembly for Accelerator- Based Boron Neutron Capture Therapy (AB-BNCT)

    International Nuclear Information System (INIS)

    Burlon, Alejandro A.; Valda, Alejandro A.; Girola, Santiago; Minsky, Daniel M.; Kreiner, Andres J.

    2010-01-01

    In the frame of the construction of a Tandem Electrostatic Quadrupole Accelerator facility devoted to the Accelerator-Based Boron Neutron Capture Therapy, a Beam Shaping Assembly has been characterized by means of Monte-Carlo simulations and measurements. The neutrons were generated via the 7 Li(p, n) 7 Be reaction by irradiating a thick LiF target with a 2.3 MeV proton beam delivered by the TANDAR accelerator at CNEA. The emerging neutron flux was measured by means of activation foils while the beam quality and directionality was evaluated by means of Monte Carlo simulations. The parameters show compliance with those suggested by IAEA. Finally, an improvement adding a beam collimator has been evaluated.

  1. Fractal properties and simulation of micro-seismicity for seismic hazard analysis: a comparison of North Anatolian and San Andreas Fault Zones

    Directory of Open Access Journals (Sweden)

    Naside Ozer

    2012-02-01

    Full Text Available We analyzed statistical properties of earthquakes in western Anatolia as well as the North Anatolian Fault Zone (NAFZ) in terms of spatio-temporal variations of fractal dimensions, p- and b-values. During statistically homogeneous periods characterized by closer fractal dimension values, we propose that occurrence of relatively larger shocks (M >= 5.0) is unlikely. Decreases in seismic activity in such intervals result in spatial b-value distributions that are primarily stable. Fractal dimensions decrease with time in proportion to increasing seismicity. Conversely, no spatio-temporal patterns were observed for p-value changes. In order to evaluate failure probabilities and simulate earthquake occurrence in the western NAFZ, we applied a modified version of the renormalization group method. Assuming that an increase in small earthquakes is indicative of larger shocks, we applied this model to micro-seismic (M <= 3.0) activity, and tested our results using San Andreas Fault Zone (SAFZ) data. We propose that fractal dimension is a direct indicator of material heterogeneity and strength. Results from the model suggest that simulated and observed earthquake occurrences are coherent and may be used for seismic hazard estimation on creeping strike-slip fault zones.
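
    The basic statistics behind such studies are straightforward to reproduce. The sketch below computes the Aki maximum-likelihood b-value and a Grassberger-Procaccia correlation dimension from a synthetic catalogue; the catalogue, completeness magnitude and radii are invented for illustration and are not the paper's data.

      import numpy as np

      def b_value_mle(magnitudes, completeness_mc):
          """Aki maximum-likelihood b-value for events above the completeness magnitude."""
          m = np.asarray(magnitudes)
          m = m[m >= completeness_mc]
          return np.log10(np.e) / (m.mean() - completeness_mc)

      def correlation_dimension(points_km, radii_km):
          """Slope of log C(r) vs log r, with C(r) the fraction of pairs closer than r."""
          p = np.asarray(points_km)
          dist = np.sqrt(((p[:, None, :] - p[None, :, :]) ** 2).sum(-1))
          pairs = dist[np.triu_indices(len(p), k=1)]
          c = np.array([(pairs < r).mean() for r in radii_km])
          slope, _ = np.polyfit(np.log(radii_km), np.log(c), 1)
          return slope

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          mags = 1.0 + rng.exponential(1.0 / np.log(10), size=2000)   # b ~ 1 catalogue
          xy = rng.random((500, 2)) * 50.0                            # uniform cloud, D2 ~ 2
          print("b-value ~", round(b_value_mle(mags, 1.0), 2))
          print("D2 ~", round(correlation_dimension(xy, np.array([2.0, 4.0, 8.0, 16.0])), 2))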

  2. Simulations

    CERN Document Server

    Ngada, Narcisse

    2015-06-15

    The complexity and cost of building and running high-power electrical systems make the use of simulations unavoidable. The simulations available today provide great understanding about how systems really operate. This paper helps the reader to gain an insight into simulation in the field of power converters for particle accelerators. Starting with the definition and basic principles of simulation, two simulation types, as well as their leading tools, are presented: analog and numerical simulations. Some practical applications of each simulation type are also considered. The final conclusion then summarizes the most important items to keep in mind before opting for a simulation tool or before performing a simulation.

  3. Particle acceleration in regions of magnetic flux emergence: a statistical approach using test-particle- and MHD-simulations

    Science.gov (United States)

    Vlahos, Loukas; Archontis, Vasilis; Isliker, Heinz

    We consider 3D nonlinear MHD simulations of an emerging flux tube, from the convection zone into the corona, focusing on the coronal part of the simulations. We first analyze the statistical nature and spatial structure of the electric field, calculating histograms and making use of iso-contour visualizations. Then test-particle simulations are performed for electrons, in order to study heating and acceleration phenomena, as well as to determine HXR emission. This study is done by comparatively exploring quiet, turbulent explosive, and mildly explosive phases of the MHD simulations. Also, the importance of collisional and relativistic effects is assessed, and the role of the integration time is investigated. A particular aim of this project is to verify the quasi-linear assumptions made in standard transport models and to identify possible transport effects that cannot be captured with the latter. In order to determine the relation of our results to Fermi acceleration and Fokker-Planck modeling, we determine the standard transport coefficients. Overall, we find that the electric field of the MHD simulations must be downscaled in order to prevent an unphysically high degree of acceleration, and the value chosen for the scale factor strongly affects the results. In different MHD time instances we find heating to take place, and acceleration that depends on the level of MHD turbulence. Also, acceleration appears to be a transient phenomenon, there is a kind of saturation effect, and the parallel dynamics clearly dominate the energetics. The HXR spectra are not yet fully compatible with observations; we still have to further explore the scaling of the electric field and the integration times used.

  4. MO-F-CAMPUS-I-03: GPU Accelerated Monte Carlo Technique for Fast Concurrent Image and Dose Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Becchetti, M; Tian, X; Segars, P; Samei, E [Clinical Imaging Physics Group, Department of Radiology, Duke University Me, Durham, NC (United States)

    2015-06-15

    Purpose: To develop an accurate and fast Monte Carlo (MC) method of simulating CT that is capable of correlating dose with image quality using voxelized phantoms. Methods: A realistic voxelized phantom based on patient CT data, XCAT, was used with a GPU accelerated MC code for helical MDCT. Simulations were done with both uniform density organs and with textured organs. The organ doses were validated using previous experimentally validated simulations of the same phantom under the same conditions. Images acquired by tracking photons through the phantom with MC require lengthy computation times due to the large number of photon histories necessary for accurate representation of noise. A substantial speed up of the process was attained by using a low number of photon histories with kernel denoising of the projections from the scattered photons. These FBP reconstructed images were validated against those that were acquired in simulations using many photon histories by ensuring a minimal normalized root mean square error. Results: Organ doses simulated in the XCAT phantom are within 10% of the reference values. Corresponding images attained using projection kernel smoothing were attained with 3 orders of magnitude less computation time compared to a reference simulation using many photon histories. Conclusion: Combining GPU acceleration with kernel denoising of scattered photon projections in MC simulations allows organ dose and corresponding image quality to be attained with reasonable accuracy and substantially reduced computation time than is possible with standard simulation approaches.
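
    The speed-up trick described here (relatively few scatter histories plus kernel smoothing of the scatter projections before FBP) can be mimicked in a few lines. The sketch below uses a SciPy Gaussian kernel as a stand-in for whatever denoising kernel the authors actually applied; the array shapes and smoothing width are arbitrary.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def denoised_projection(primary_proj, scatter_proj, sigma_px=5.0):
          """Combine the primary projection with a kernel-smoothed scatter estimate
          obtained from relatively few Monte Carlo histories (sigma_px is assumed)."""
          return primary_proj + gaussian_filter(scatter_proj, sigma=sigma_px)

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          primary = np.ones((128, 180))                          # toy detector-by-view array
          noisy_scatter = rng.poisson(5.0, (128, 180)).astype(float)
          proj = denoised_projection(primary, noisy_scatter)
          print(proj.shape, "smoothed std:", round(float(proj.std()), 3),
                "raw std:", round(float(noisy_scatter.std()), 3))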

  5. MO-F-CAMPUS-I-03: GPU Accelerated Monte Carlo Technique for Fast Concurrent Image and Dose Simulation

    International Nuclear Information System (INIS)

    Becchetti, M; Tian, X; Segars, P; Samei, E

    2015-01-01

    Purpose: To develop an accurate and fast Monte Carlo (MC) method of simulating CT that is capable of correlating dose with image quality using voxelized phantoms. Methods: A realistic voxelized phantom based on patient CT data, XCAT, was used with a GPU accelerated MC code for helical MDCT. Simulations were done with both uniform density organs and with textured organs. The organ doses were validated using previous experimentally validated simulations of the same phantom under the same conditions. Images acquired by tracking photons through the phantom with MC require lengthy computation times due to the large number of photon histories necessary for accurate representation of noise. A substantial speed up of the process was attained by using a low number of photon histories with kernel denoising of the projections from the scattered photons. These FBP reconstructed images were validated against those that were acquired in simulations using many photon histories by ensuring a minimal normalized root mean square error. Results: Organ doses simulated in the XCAT phantom are within 10% of the reference values. Corresponding images attained using projection kernel smoothing were attained with 3 orders of magnitude less computation time compared to a reference simulation using many photon histories. Conclusion: Combining GPU acceleration with kernel denoising of scattered photon projections in MC simulations allows organ dose and corresponding image quality to be attained with reasonable accuracy and substantially reduced computation time than is possible with standard simulation approaches

  6. Preliminary results of a pilot study on the behaviour of nuclear power station operating staff in simulated faults

    International Nuclear Information System (INIS)

    Reinartz, S.J.

    1984-01-01

    The need to distinguish the effects of a fault from the stabilizing effects of the automatic control devices means that operating staff must be supported by a suitable configuration of the control room and by work aids. The aim of the study is to gain a better understanding of how control room operating staff handle a fault and how they carry out the problem-solving tasks of fault recognition, diagnosis and compensation. These tasks differ from normal daily tasks, and operators only have to perform them rarely. The study is intended to examine what is necessary so that operators can do their job: what information they have to have about plant conditions and how this should be presented: as trend information or digital displays, conventional indicators or VDU screens. (orig./DG) [de

  7. ELECTRON ACCELERATIONS AT HIGH MACH NUMBER SHOCKS: TWO-DIMENSIONAL PARTICLE-IN-CELL SIMULATIONS IN VARIOUS PARAMETER REGIMES

    Energy Technology Data Exchange (ETDEWEB)

    Matsumoto, Yosuke [Department of Physics, Chiba University, Yayoi-cho 1-33, Inage-ku, Chiba 263-8522 (Japan); Amano, Takanobu; Hoshino, Masahiro, E-mail: ymatumot@astro.s.chiba-u.ac.jp [Department of Earth and Planetary Science, University of Tokyo, Hongo 1-33, Bunkyo-ku, Tokyo 113-0033 (Japan)

    2012-08-20

    Electron accelerations at high Mach number collisionless shocks are investigated by means of two-dimensional electromagnetic particle-in-cell simulations with various Alfven Mach numbers, ion-to-electron mass ratios, and the upstream electron {beta}{sub e} (the ratio of the thermal pressure to the magnetic pressure). We find electrons are effectively accelerated at a super-high Mach number shock (M{sub A} {approx} 30) with a mass ratio of M/m = 100 and {beta}{sub e} = 0.5. The electron shock surfing acceleration is an effective mechanism for accelerating the particles toward the relativistic regime even in two dimensions with a large mass ratio. Buneman instability excited at the leading edge of the foot in the super-high Mach number shock results in a coherent electrostatic potential structure. While multi-dimensionality allows the electrons to escape from the trapping region, they can interact with the strong electrostatic field several times. Simulation runs in various parameter regimes indicate that the electron shock surfing acceleration is an effective mechanism for producing relativistic particles in extremely high Mach number shocks in supernova remnants, provided that the upstream electron temperature is reasonably low.

  8. Systemic Analysis, Mapping, Modeling, and Simulation of the Advanced Accelerator Applications Program

    International Nuclear Information System (INIS)

    Guan, Yue; Laidler, James J.; Morman, James A.

    2002-01-01

    Advanced chemical separations methods envisioned for use in the Department of Energy Advanced Accelerator Applications (AAA) program have been studied using the Systemic Analysis, Mapping, Modeling, and Simulation (SAMMS) method. This integrated and systematic method considers all aspects of the studied process as one dynamic and inter-dependent system. This particular study focuses on two subjects: the chemical separation processes for treating spent nuclear fuel, and the associated non-proliferation implications of such processing. Two levels of chemical separation models are developed: level 1 models treat the chemical process stages by groups; and level 2 models depict the details of each process stage. Models to estimate the proliferation risks based on proliferation barrier assessment are also developed. This paper describes the research conducted for the single-stratum design in the AAA program. Further research conducted for the multi-strata designs will be presented later. The method and models described in this paper can help in the design of optimized processes that fulfill the chemical separation process specifications and non-proliferation requirements. (authors)

  9. Particle and radiation simulations for the proposed rare isotope accelerator facility

    Energy Technology Data Exchange (ETDEWEB)

    Remec, Igor [Oak Ridge National Laboratory, Oak Ridge, P. O. Box 2008, TN 37831-6172 (United States)]. E-mail: remeci@ornl.gov; Gabriel, Tony A. [Oak Ridge National Laboratory, Oak Ridge, P. O. Box 2008, TN 37831-6172 (United States); Wendel, Mark W. [Oak Ridge National Laboratory, Oak Ridge, P. O. Box 2008, TN 37831-6172 (United States); Conner, David L. [Oak Ridge National Laboratory, Oak Ridge, P. O. Box 2008, TN 37831-6172 (United States); Burgess, Thomas W. [Oak Ridge National Laboratory, Oak Ridge, P. O. Box 2008, TN 37831-6172 (United States); Ronningen, Reginald M. [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States); Blideanu, Valentin [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States); Bollen, Georg [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States); Boles, Jason L. [Lawrence Livermore National Laboratory, P. O. Box 808, L-446, Livermore, CA 94550 (United States); Reyes, Susana [Lawrence Livermore National Laboratory, P. O. Box 808, L-446, Livermore, CA 94550 (United States); Ahle, Larry E. [Lawrence Livermore National Laboratory, P. O. Box 808, L-446, Livermore, CA 94550 (United States); Stein, Werner [Lawrence Livermore National Laboratory, P. O. Box 808, L-446, Livermore, CA 94550 (United States)

    2006-06-23

    The Rare Isotope Accelerator (RIA) facility, planned to be built in the USA, will be capable of delivering diverse beams, from protons to uranium ions, with energies from 1 GeV to at least 400 MeV per nucleon to rare isotope-producing targets. High beam power-400 kW-will allow RIA to become the most powerful rare isotope beam facility in the world; however, it also creates challenges for the design of the isotope-production targets. This paper focuses on the isotope-separator-on-line (ISOL) target work, particularly the radiation transport aspects of the two-step fission target design. Simulations were performed with the PHITS, MCNPX, and MARS15 computer codes. A two-step ISOL target considered here consists of a mercury or tungsten primary target in which primary beam interactions release neutrons, which in turn induce fissions-and produce rare isotopes-in the secondary target filled with fissionable material. Three primary beams were considered: 1-GeV protons, 622-MeV/u deuterons, and 777-MeV/u {sup 3}He ions. The proton and deuterium beams were found to be about equivalent in terms of induced fission rates and heating rates in the target, while the {sup 3}He beam, without optimizing the target geometry, was less favorable, producing about 15% fewer fissions and about 50% higher heating rates than the proton beam at the same beam power.

  10. Particle and radiation simulations for the proposed rare isotope accelerator facility

    Science.gov (United States)

    Remec, Igor; Gabriel, Tony A.; Wendel, Mark W.; Conner, David L.; Burgess, Thomas W.; Ronningen, Reginald M.; Blideanu, Valentin; Bollen, Georg; Boles, Jason L.; Reyes, Susana; Ahle, Larry E.; Stein, Werner

    2006-06-01

    The Rare Isotope Accelerator (RIA) facility, planned to be built in the USA, will be capable of delivering diverse beams, from protons to uranium ions, with energies from 1 GeV to at least 400 MeV per nucleon to rare isotope-producing targets. High beam power—400 kW—will allow RIA to become the most powerful rare isotope beam facility in the world; however, it also creates challenges for the design of the isotope-production targets. This paper focuses on the isotope-separator-on-line (ISOL) target work, particularly the radiation transport aspects of the two-step fission target design. Simulations were performed with the PHITS, MCNPX, and MARS15 computer codes. A two-step ISOL target considered here consists of a mercury or tungsten primary target in which primary beam interactions release neutrons, which in turn induce fissions—and produce rare isotopes—in the secondary target filled with fissionable material. Three primary beams were considered: 1-GeV protons, 622-MeV/u deuterons, and 777-MeV/u 3He ions. The proton and deuterium beams were found to be about equivalent in terms of induced fission rates and heating rates in the target, while the 3He beam, without optimizing the target geometry, was less favorable, producing about 15% fewer fissions and about 50% higher heating rates than the proton beam at the same beam power.

  11. The GENGA code: gravitational encounters in N-body simulations with GPU acceleration

    International Nuclear Information System (INIS)

    Grimm, Simon L.; Stadel, Joachim G.

    2014-01-01

    We describe an open source GPU implementation of a hybrid symplectic N-body integrator, GENGA (Gravitational ENcounters with Gpu Acceleration), designed to integrate planet and planetesimal dynamics in the late stage of planet formation and stability analyses of planetary systems. GENGA uses a hybrid symplectic integrator to handle close encounters with very good energy conservation, which is essential in long-term planetary system integration. We extended the second-order hybrid integration scheme to higher orders. The GENGA code supports three simulation modes: integration of up to 2048 massive bodies, integration with up to a million test particles, or parallel integration of a large number of individual planetary systems. We compare the results of GENGA to Mercury and pkdgrav2 in terms of energy conservation and performance and find that the energy conservation of GENGA is comparable to Mercury and around two orders of magnitude better than pkdgrav2. GENGA runs up to 30 times faster than Mercury and up to 8 times faster than pkdgrav2. GENGA is written in CUDA C and runs on all NVIDIA GPUs with a computing capability of at least 2.0.
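
    GENGA's hybrid scheme is considerably more involved, but the property it builds on, bounded long-term energy error from a symplectic step, can be shown with an ordinary kick-drift-kick (leapfrog) integrator on a Kepler orbit; the units and step size below are arbitrary.

      import numpy as np

      def leapfrog_step(r, v, dt, gm=1.0):
          """One kick-drift-kick step for a point mass orbiting a body with GM = gm."""
          a = -gm * r / np.linalg.norm(r) ** 3
          v_half = v + 0.5 * dt * a
          r_new = r + dt * v_half
          a_new = -gm * r_new / np.linalg.norm(r_new) ** 3
          return r_new, v_half + 0.5 * dt * a_new

      if __name__ == "__main__":
          r, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])      # circular orbit, GM = 1
          energy = lambda r, v: 0.5 * v @ v - 1.0 / np.linalg.norm(r)
          e0 = energy(r, v)
          for _ in range(100_000):
              r, v = leapfrog_step(r, v, dt=0.01)
          print("relative energy drift:", abs((energy(r, v) - e0) / e0))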

  12. The GENGA code: gravitational encounters in N-body simulations with GPU acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Grimm, Simon L.; Stadel, Joachim G., E-mail: sigrimm@physik.uzh.ch [Institute for Computational Science, University of Zürich, Winterthurerstrasse 190, CH-8057 Zürich (Switzerland)

    2014-11-20

    We describe an open source GPU implementation of a hybrid symplectic N-body integrator, GENGA (Gravitational ENcounters with Gpu Acceleration), designed to integrate planet and planetesimal dynamics in the late stage of planet formation and stability analyses of planetary systems. GENGA uses a hybrid symplectic integrator to handle close encounters with very good energy conservation, which is essential in long-term planetary system integration. We extended the second-order hybrid integration scheme to higher orders. The GENGA code supports three simulation modes: integration of up to 2048 massive bodies, integration with up to a million test particles, or parallel integration of a large number of individual planetary systems. We compare the results of GENGA to Mercury and pkdgrav2 in terms of energy conservation and performance and find that the energy conservation of GENGA is comparable to Mercury and around two orders of magnitude better than pkdgrav2. GENGA runs up to 30 times faster than Mercury and up to 8 times faster than pkdgrav2. GENGA is written in CUDA C and runs on all NVIDIA GPUs with a computing capability of at least 2.0.

  13. Numerical simulations of recent proton acceleration experiments with sub-100 TW laser systems

    International Nuclear Information System (INIS)

    Sinigardi, Stefano

    2016-01-01

    Recent experiments carried out at the Italian National Research Center, National Optics Institute Department in Pisa, are showing interesting results regarding the maximum proton energies achievable with sub-100 TW laser systems. While laser systems are being continuously upgraded in laboratories around the world, a new trend of stabilizing ion acceleration and making its results reproducible is growing in importance. Almost all applications require a beam with fixed performance, so that the energy spectrum and the total charge exhibit only moderate shot-to-shot variations. This goal is surely far from being achieved, but many paths are being explored in order to reach it. Some of the reasons for this variability come from fluctuations in laser intensity and focusing due to optics instability. Other variation sources come from small differences in the target structure. The target structure can vary substantially when it is impacted by the main pulse, due to the prepulse duration and intensity, the shape of the main pulse and the total energy deposited. In order to qualitatively describe the prepulse effect, we present a two-dimensional parametric scan of its relevant parameters. A single case is also analyzed with a full three-dimensional simulation, obtaining reasonable agreement between the numerical and the experimental energy spectrum.

  14. Application of the Reduction of Scale Range in a Lorentz Boosted Frame to the Numerical Simulation of Particle Acceleration Devices

    International Nuclear Information System (INIS)

    Vay, J.-L.; Fawley, W.M.; Geddes, C.G.R.; Cormier-Michel, E.; Grote, D.P.

    2009-01-01

    It has been shown (1) that it may be computationally advantageous to perform computer simulations in a boosted frame for a certain class of systems: particle beams interacting with electron clouds, free electron lasers, and laser-plasma accelerators. However, even if the computer model relies on a covariant set of equations, it was also pointed out that algorithmic difficulties related to discretization errors may have to be overcome in order to take full advantage of the potential speedup (2). In this paper, we focus on the analysis of the complication of data input and output in a Lorentz boosted frame simulation, and describe the procedures that were implemented in the simulation code Warp (3). We present our most recent progress in the modeling of laser wakefield acceleration in a boosted frame, and describe briefly the potential benefits of calculating in a boosted frame for the modeling of coherent synchrotron radiation.

  15. Fault Analysis in Solar Photovoltaic Arrays

    Science.gov (United States)

    Zhao, Ye

    Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown to, at times, prevent the fault current protection devices from tripping. A small-scale experimental PV benchmark system has been developed at Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low-irradiance conditions. The other is a fault that evolves in a PV array during the night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition" conditions. However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" or "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.
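
    The current-limiting behaviour that keeps series fuses from clearing some PV faults follows directly from the single-diode model: even a dead short only draws roughly the short-circuit current. The sketch below uses invented module parameters (series and shunt resistance neglected) and is not taken from the thesis.

      import numpy as np

      def module_current(voltage, i_ph=8.0, i_sat=1e-7, n=1.3, cells=60, t_kelvin=298.15):
          """Simplified single-diode PV module model (illustrative parameters)."""
          vt = 1.380649e-23 * t_kelvin / 1.602176634e-19        # thermal voltage per cell
          return i_ph - i_sat * (np.exp(voltage / (n * cells * vt)) - 1.0)

      if __name__ == "__main__":
          # A shorted module delivers only about its photocurrent, far below the
          # overcurrent a fuse sized for a hard grid fault would expect to see.
          print("current into a dead short  :", round(float(module_current(0.0)), 2), "A")
          print("current near the MPP (30 V):", round(float(module_current(30.0)), 2), "A")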

  16. Particle acceleration inside PWN: Simulation and observational constraints with INTEGRAL; Acceleration de particules au sein des vents relativistes de pulsar: simulation et contraintes observationelles avec le satellite INTEGRAL

    Energy Technology Data Exchange (ETDEWEB)

    Forot, M

    2006-12-15

    The context of this thesis is to gain new constraints on the different particle accelerators that occur in the complex environment of neutron stars: in the pulsar magnetosphere, in the striped wind or wave outside the light cylinder, in the jets and equatorial wind, and at the wind terminal shock. An important tool to constrain both the magnetic field and primary particle energies is to image the synchrotron ageing of the population, but it requires a careful modelling of the magnetic field evolution in the wind flow. The current models and understanding of these different accelerators, the acceleration processes and open questions are reviewed in the first part of the thesis. The instrumental part of this work involves the IBIS imager, on board the INTEGRAL satellite, which provides images with 12' resolution from 17 keV to MeV energies, where the SPI spectrometer takes over, up to 10 MeV, but with a reduced 2 degree resolution. A new method has been developed for using the double-layer IBIS imager as a Compton telescope with a coded mask aperture, and its performance has been measured. The Compton scattering information and the achieved sensitivity also open a new window for polarimetry in gamma rays. A method has been developed to extract the linear polarization properties and to check the instrument response for fake polarimetric signals in the various backgrounds and projection effects.

  18. Stacking Faults and Polytypes for Layered Double Hydroxides: What Can We Learn from Simulated and Experimental X-ray Powder Diffraction Data?

    Science.gov (United States)

    Sławiński, Wojciech A; Sjåstad, Anja Olafsen; Fjellvåg, Helmer

    2016-12-19

    Layered double hydroxides (LDH) are a broad group of widely studied materials. The layered character of those materials and their high flexibility for accommodating different metals and anions make them technologically interesting. The general formula for an LDH compound is [M(II)1-xM(III)x(OH)2][A(n-)]x/n·mH2O, where M(II) is a divalent metal cation which can be substituted by an M(III) trivalent cation, and A(n-) is a charge-compensating anion located between the positively charged layers. In this paper we present a comprehensive study on possible structural disorder in LDH. We show how X-ray powder diffraction (XRPD) can be used to reveal important features of the LDH crystal structure such as stacking faults, random interlayer shifts, anion-molecule orientation, crystal water content, distribution of interlayer distances, and also LDH slab thickness. All calculations were performed using the Discus package, which gives better flexibility in defining stacking fault sequences and in simulating and refining XRPD patterns, relative to DIFFaX, DIFFaX+, and FAULTS. Finally, we show how the modeling can be applied to two LDH samples: Ni0.67Cr0.33(OH)2(CO3)0.16·mH2O (3D structure) and Mg0.67Al0.33(OH)2(NO3)0.33 (2D layered structure).

  19. GO2OGS 1.0: a versatile workflow to integrate complex geological information with fault data into numerical simulation models

    Science.gov (United States)

    Fischer, T.; Naumov, D.; Sattler, S.; Kolditz, O.; Walther, M.

    2015-11-01

    We offer a versatile workflow to convert geological models built with the Paradigm™ GOCAD© (Geological Object Computer Aided Design) software into the open-source VTU (Visualization Toolkit unstructured grid) format for usage in numerical simulation models. Tackling relevant scientific questions or engineering tasks often involves multidisciplinary approaches. Conversion workflows are needed as a way of communication between the diverse tools of the various disciplines. Our approach offers an open-source, platform-independent, robust, and comprehensible method that is potentially useful for a multitude of environmental studies. With two application examples in the Thuringian Syncline, we show how a heterogeneous geological GOCAD model including multiple layers and faults can be used for numerical groundwater flow modeling, in our case employing the OpenGeoSys open-source numerical toolbox for groundwater flow simulations. The presented workflow offers the chance to incorporate increasingly detailed data, utilizing the growing availability of computational power to simulate numerical models.
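
    The paper's own conversion tooling belongs to the OGS ecosystem; as a rough stand-in, the final step of such a workflow (writing an unstructured grid with cell data to VTU) can be done from Python with the meshio package, which is assumed to be available here. The two-tetrahedron mesh and the material IDs below are dummy data, not a converted GOCAD model.

      import numpy as np
      import meshio

      # Dummy two-tetrahedron mesh standing in for a converted geological model.
      points = np.array([[0.0, 0.0, 0.0],
                         [1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0],
                         [0.0, 0.0, 1.0],
                         [1.0, 1.0, 1.0]])
      cells = [("tetra", np.array([[0, 1, 2, 3], [1, 2, 3, 4]]))]

      # One integer per element, e.g. a layer or fault-zone identifier.
      mesh = meshio.Mesh(points, cells, cell_data={"MaterialIDs": [np.array([0, 1])]})
      meshio.write("model.vtu", mesh)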

  20. A fault detection and diagnosis in a PWR steam generator

    International Nuclear Information System (INIS)

    Park, Seung Yub

    1991-01-01

    The purpose of this study is to develop a fault detection and diagnosis scheme that can monitor process faults and instrument faults in a steam generator. The suggested scheme consists of a Kalman filter and two bias estimators. Process and instrument faults in the steam generator are detected by applying a mean test to the residual sequence of a Kalman filter designed for the unfailed system. Once a fault is detected, the two bias estimators are driven to estimate the fault and to discriminate between process faults and instrument faults. For process faults, the diagnosis of the outlet temperature, feed-water heater and main steam control valve is considered; for instrument faults, the diagnosis of the steam generator's three instruments is considered. Computer simulation tests show that prompt on-line fault detection and diagnosis can be performed very successfully. (Author)
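
    The detection step described above (a mean test on the innovation sequence of a Kalman filter tuned to the healthy plant) can be sketched for a toy scalar system; the bias estimators used for isolating process versus instrument faults are omitted, and all model parameters and the injected fault are invented.

      import numpy as np

      def innovations(measurements, a=0.95, c=1.0, q=0.01, r=0.1):
          """Innovation (residual) sequence of a scalar Kalman filter for the
          healthy model x' = a*x + w, y = c*x + v."""
          x, p, res = 0.0, 1.0, []
          for y in measurements:
              x, p = a * x, a * p * a + q                   # predict
              innov = y - c * x
              k = p * c / (c * p * c + r)
              x, p = x + k * innov, (1.0 - k * c) * p       # update
              res.append(innov)
          return np.array(res)

      def mean_test(residuals, window=50, z=3.0):
          """Flag a fault when the windowed residual mean is inconsistent with zero."""
          recent = residuals[-window:]
          return abs(recent.mean()) > z * recent.std() / np.sqrt(len(recent))

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          y = rng.normal(0.0, 0.3, 400)
          y[250:] += 2.0                                      # injected sensor bias (fault)
          print("fault detected:", bool(mean_test(innovations(y))))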

  1. Quantitative analysis of CTEM images of small dislocation loops in Al and stacking fault tetrahedra in Cu generated by molecular dynamics simulation

    International Nuclear Information System (INIS)

    Schaeublin, R.; Almazouzi, A.; Dai, Y.; Osetsky, Yu.N.; Victoria, M.

    2000-01-01

    The visibility of conventional transmission electron microscopy (CTEM) images of small crystalline defects generated by molecular dynamics (MD) simulation is investigated. Faulted interstitial dislocation loops in Al smaller than 2 nm in diameter and stacking fault tetrahedra (SFT) in Cu smaller than 4 nm in side are assessed. A recent approach that allows the CTEM images of computer-generated samples described by their atomic positions to be simulated is applied to obtain bright field and weak beam images. For the dislocation loop-like cluster, the simulated image appears comparable to experimental images. The contrast of the g(3.1g) near weak beam image decreases with decreasing size of the cluster but is still 20% of the background intensity for a 2-interstitial cluster. This indicates a visibility at the limit of the experimental background noise. In addition, the cluster image size, which is here always larger than the real size, saturates at about 1 nm when the cluster real size decreases below 1 nm, which corresponds to a cluster of 8 interstitials. For the SFT in Cu, the g(6.1g) weak beam image is comparable to experimental images. It appears that the image size is larger than the real size by 20%. A large loss of the contrast features that allow an SFT to be identified is observed in the image of the smallest SFT (21 vacancies).

  2. Head simulation of linear accelerators and spectra considerations using EGS4 Monte Carlo code in a PC

    Energy Technology Data Exchange (ETDEWEB)

    Malatara, G; Kappas, K [Medical Physics Department, Faculty of Medicine, University of Patras, 265 00 Patras (Greece); Sphiris, N [Ethnodata S.A., Athens (Greece)

    1994-12-31

    In this work, the Monte Carlo EGS4 code was used to simulate radiation transport through linear accelerators in order to produce and score energy spectra and angular distributions of 6, 12, 15 and 25 MeV bremsstrahlung photons exiting from different accelerator treatment heads. The energy spectra were used as input for a convolution-method program to calculate the tissue-maximum ratio in water. 100,000 histories were recorded in the scoring plane for each simulation. The validity of the Monte Carlo simulation and the precision of the calculated spectra have been verified experimentally and were in good agreement. We believe that the accurate simulation of the different components of the linear accelerator head is very important for the precision of the results. The results of the Monte Carlo and the convolution method can be compared with experimental data for verification, and they are powerful and practical tools to generate accurate spectra and dosimetric data. (authors). 10 refs, 5 figs, 2 tabs.

  3. Simulation of collective ion acceleration in a slow cyclotron beam mode

    International Nuclear Information System (INIS)

    Faehl, R.J.; Shanahan, W.R.; Godfrey, B.B.

    1979-01-01

    The use of slow cyclotron beam waves is examined as a means of accelerating ions in intense relativistic electron beams. Field magnitudes of between 10^5 and 10^6 V/cm seem achievable in the near term, and while these will never reach the levels of beam front mechanisms, such as virtual cathodes, they will easily exceed conventional ion acceleration sources.

  4. PV Systems Reliability Final Technical Report: Ground Fault Detection

    Energy Technology Data Exchange (ETDEWEB)

    Lavrova, Olga [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Flicker, Jack David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Johnson, Jay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-01-01

    We have examined ground faults in photovoltaic (PV) arrays and the efficacy of fuses, residual current detection (RCD), current sense monitoring/relays (CSM), isolation/insulation (Riso) monitoring, and Ground Fault Detection and Isolation (GFID), using simulations based on a SPICE (Simulation Program with Integrated Circuit Emphasis) ground fault circuit model, experimental ground faults installed on real arrays, and theoretical equations.

  5. Monte Carlo simulation of the Varian Clinac 600C accelerator using dynamic wedges

    International Nuclear Information System (INIS)

    Moreno, S.; Chaves, A.; Lopes, M.C.; Peralta, L.; Universidade de Lisboa

    2004-01-01

    The advent of linear accelerators (linacs) with computer-controlled dynamic collimation systems, together with functional and anatomical imaging techniques, has allowed a more exact delimitation and localisation of the target volume. These advanced treatment techniques inevitably increase the complexity of dose calculation because of the introduction of the temporal variable. On account of this, more accurate modelling techniques for the collimator components are mandatory, as is the case with Monte Carlo (MC) simulation, which has created an enormous interest in research and clinical practice. Because patients' bodies are not homogeneous and their surfaces are neither flat nor regular, the dose distribution may differ significantly from the standard distribution of the linac calibration. The dose distributions for each case are obtained in the treatment planning systems, whose correction algorithms are usually based on measurements in homogeneous water phantoms specific to each correction. In a real treatment, with the exception of superficial lesions, two or more radiation fields are used in order to obtain the recommended dose distributions. The simplest arrangement is made from two parallel opposed fields that give a homogeneous dose distribution in almost all of the irradiated volume. The available resources are, for example, different types and energies of radiation, the application of bolus, the protection of healthy structures, the use of wedge filters and the application of dynamic wedges. A virtual or dynamic wedge, modelled through the movement of one of the jaws, offers, when compared with a set of physical wedges, an alternative way of calculating an arbitrary number of wedged fields, instead of the four traditional wedge angles of 15 deg, 30 deg, 45 deg and 60 deg obtained with physical wedges. The goal of this work consists in the study of the application of dynamic wedges in tailoring the radiation field of the Varian Clinac 600

  6. Analysis of Uncertainties in Protection Heater Delay Time Measurements and Simulations in Nb$_{3}$Sn High-Field Accelerator Magnets

    CERN Document Server

    Salmi, Tiina; Marchevsky, Maxim; Bajas, Hugo; Felice, Helene; Stenvall, Antti

    2015-01-01

    The quench protection of superconducting high-field accelerator magnets is presently based on protection heaters, which are activated upon quench detection to accelerate the quench propagation within the winding. Estimations of the heater delay to initiate a normal zone in the coil are essential for the protection design. During the development of Nb3Sn magnets for the LHC luminosity upgrade, protection heater delays have been measured in several experiments, and a new computational tool CoHDA (Code for Heater Delay Analysis) has been developed for heater design. Several computational quench analyses suggest that the efficiency of the present heater technology is on the borderline of protecting the magnets. Quantifying the inevitable uncertainties related to the measured and simulated delays is therefore of pivotal importance. In this paper, we analyze the uncertainties in the heater delay measurements and simulations using data from five impregnated high-field Nb3Sn magnets with different heater geometries. ...

  7. Monte-Carlo simulation of the SL-ELEKTA-20 medical linear accelerator. Dosimetric study of a water phantom

    International Nuclear Information System (INIS)

    Thiam, Ch. O.

    2003-06-01

    In radiotherapy, it is essential to have a precise knowledge of the dose delivered in the target volume and the neighbouring critical organs. To be usable clinically, the calculation models must take into account the exact characteristics of the beams used and the densities of the tissues. Today we can use sophisticated irradiation techniques and get a more precise assessment of the dose, with better knowledge of its distribution. This report therefore details a simulation of the irradiation head of the SL-ELEKTA-20 accelerator in electron mode and a dosimetric study of a water phantom. This study is carried out with the Monte Carlo simulation code GATE, adapted for medical physics applications; the results are compared with the data obtained by the 'Jean Perrin' anticancer centre on a similar accelerator. (author)

  8. Analysis of Uncertainties in Protection Heater Delay Time Measurements and Simulations in Nb$_{3}$Sn High-Field Accelerator Magnets

    CERN Document Server

    Salmi, Tiina; Marchevsky, Maxim; Bajas, Hugo; Felice, Helene; Stenvall, Antti

    2015-01-01

    The quench protection of superconducting high-field accelerator magnets is presently based on protection heaters, which are activated upon quench detection to accelerate the quench propagation within the winding. Estimations of the heater delay to initiate a normal zone in the coil are essential for the protection design. During the development of Nb$_{3}$Sn magnets for the LHC luminosity upgrade, protection heater delays have been measured in several experiments, and a new computational tool CoHDA (Code for Heater Delay Analysis) has been developed for heater design. Several computational quench analyses suggest that the efficiency of the present heater technology is on the borderline of protecting the magnets. Quantifying the inevitable uncertainties related to the measured and simulated delays is therefore of pivotal importance. In this paper, we analyze the uncertainties in the heater delay measurements and simulations using data from five impregnated high-field Nb$_{3}$Sn magnets with different heater ge...

  9. Robust Fault Diagnosis Design for Linear Multiagent Systems with Incipient Faults

    Directory of Open Access Journals (Sweden)

    Jingping Xia

    2015-01-01

    Full Text Available The design of a robust fault estimation observer is studied for linear multiagent systems subject to incipient faults. Considering that incipient faults lie in the low-frequency domain, fault estimation of such faults is proposed for discrete-time multiagent systems based on a finite-frequency technique. Moreover, using a decomposition design, an equivalent conclusion is given. Simulation results for a numerical example are presented to demonstrate the effectiveness of the proposed techniques.

  10. Computer aided construction of fault tree

    International Nuclear Information System (INIS)

    Kovacs, Z.

    1982-01-01

    The computer code CAT for the automatic construction of fault trees is briefly described. CAT makes possible the simple modelling of components using decision tables; it accelerates the fault tree construction process, constructs fault trees of different complexity, and is capable of harmonized co-operation with the programs PREP and KITT 1,2 for fault tree analysis. The efficiency of the program CAT, and thus the accuracy and completeness of the fault trees constructed, depends significantly on the compilation and sophistication of the decision tables. Currently, the program CAT is used in co-operation with the programs PREP and KITT 1,2 in reliability analyses of nuclear power plant systems. (B.S.)
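
    The decision-table idea can be shown very compactly: each table maps a component output state to the sets of internal failures or input states that cause it, and the top event is expanded recursively into OR/AND gates. The two tables below are invented for illustration and do not reproduce CAT's actual table format.

      # Toy decision-table-driven fault tree expansion (illustrative only).
      decision_tables = {
          "pump":  {"no_flow":  [["pump_fails_to_run"], ["power:no_power"]]},
          "power": {"no_power": [["breaker_open"], ["bus_failure"]]},
      }

      def expand(event):
          """Expand 'component:state' into nested OR-of-AND gates; events without
          a table entry are treated as basic events (leaves of the fault tree)."""
          if ":" not in event:
              return event
          component, state = event.split(":")
          causes = decision_tables.get(component, {}).get(state)
          if causes is None:
              return event
          return {"OR": [{"AND": [expand(c) for c in cause_set]} for cause_set in causes]}

      if __name__ == "__main__":
          import pprint
          pprint.pprint(expand("pump:no_flow"))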

  11. Linking fault permeability, fluid flow, and earthquake triggering in a hydrothermally active tectonic setting: Numerical Simulations of the hydrodynamics in the Tjörnes Fracture Zone, Iceland.

    Science.gov (United States)

    Lupi, M.; Geiger, S.; Graham, C.; Claesson, L.; Richter, B.

    2007-12-01

    A good insight into the transient fluid flow evolution within a hydrothermal system is of primary importance for the understanding of several geologic processes, for example the hydrodynamic triggering of earthquakes or the formation of mineral deposits. The strong permeability contrast between different crustal layers as well as the high geothermal gradient of these areas are elements that strongly affect the flow behaviour. In addition, the sudden and transient occurrence of joints, faults and magmatic intrusions is likely to change the hydrothermal flow paths in a very short time. The Tjörnes Fracture Zone (TFZ), north of Iceland, is such a hydrothermal area, where a high geothermal gradient, magmatic bodies, faults, and the strong contrast between sediments and fractured lava layers govern the large-scale fluid flow. The TFZ offsets the Kolbeinsey Ridge and the Northern Rift Zone. It is characterized by km-scale faults that link sub-seafloor sediments and lava layers with deeper crystalline rocks. These structures focus fluid flow and allow for the mixing between cold seawater and deep hydrothermal fluids. Strong seismic activity is present in the TFZ: earthquakes up to magnitude 7 have been recorded over the past years. Hydrogeochemical changes before, during and after a magnitude 5.8 earthquake suggest that the evolving stress state before the earthquake leads to (remote) permeability variations, which alter the fluid flow paths. This is in agreement with recent numerical fluid flow simulations which demonstrate that fluid flow in magmatic-hydrothermal systems is often convective and very sensitive to small variations in permeability. In order to understand the transient fluid flow behaviour in this complex geological environment, we have conducted numerical simulations of heat and mass transport in two geologically realistic cross-sectional models of the TFZ. The geologic models are discretised using finite element and finite volume methods. They hence have

  12. Simulating Earthquake Rupture and Off-Fault Fracture Response: Application to the Safety Assessment of the Swedish Nuclear Waste Repository

    KAUST Repository

    Falth, B.

    2014-12-09

    To assess the long-term safety of a deep repository of spent nuclear fuel, upper-bound estimates of seismically induced secondary fracture shear displacements are needed. For this purpose, we analyze a model including an earthquake fault, which is surrounded by a number of smaller discontinuities representing fractures on which secondary displacements may be induced. Initial stresses are applied and a rupture is initiated at a predefined hypocenter and propagated at a specified rupture speed. During rupture we monitor the shear displacements taking place on the nearby fracture planes in response to static as well as dynamic effects. As a numerical tool, we use the 3-Dimensional Distinct Element Code (3DEC) because it has the capability to handle numerous discontinuities with different orientations and at different locations simultaneously. In tests performed to benchmark the capability of our method to generate and propagate seismic waves, 3DEC generates results in good agreement with results from both the Stokes solution and the Compsyn code package. In a preliminary application of our method to the nuclear waste repository site at Forsmark, southern Sweden, we assume end-glacial stress conditions and rupture on a shallow, gently dipping, highly prestressed fault with low residual strength. The rupture generates a nearly complete stress drop and an M_w 5.6 event on the 12 km^2 rupture area. Of the 1584 secondary fractures (150 m radius), with a wide range of orientations and locations relative to the fault, a majority move less than 5 mm. The maximum shear displacement is some tens of millimeters at 200 m fault-fracture distance.

  13. Fourier-accelerated Langevin simulation of the frustrated XY model and simulation of the spinless and spin one-half Hubbard model

    International Nuclear Information System (INIS)

    Scheinine, A.L.

    1992-01-01

    The frustrated XY model was studied on a lattice, primarily to test the Fourier transform acceleration technique for a phase transition having more field structure than just spin waves and vortices. Also, the spinless Hubbard model without hopping was simulated using continuous variables for the auxiliary field that mediates coupling between fermions. Finally, the spin one-half Hubbard model was studied with a technique that sampled the fermion occupation configurations. The frustrated two-dimensional XY model was simulated using the Langevin equation with Fourier transform acceleration. The speedup due to Fourier acceleration was measured for frustration one-half at the transition temperature. The unfrustrated XY model was also studied. For the frustrated case, only the long-distance spin correlation and the autocorrelation of the spin showed significant speedup. The frustrated case has Ising-like domains. It was found that Fourier acceleration speeds the evolution of spin waves but has negligible effect on the Ising-like domains. In the Hubbard model, the fermion determinant weight factor in the partition function changes sign, causing large statistical fluctuations of observables. A technique was found for sampling configuration space using continuous auxiliary fields, despite energy barriers where the fermion determinant changes sign. For the two-dimensional spinless Hubbard model with no hopping, an exact solution was found for a 4 x 4 lattice, which could be compared to numerical simulations. The sign problem remained, and was found to be related to the sign problem encountered when a discrete variable is used for the auxiliary field. For the spin one-half Hubbard model, a Monte Carlo simulation was done in which the fermion occupation configurations were varied. Rather than integrating out the fermions and making a numerical estimate of the sum over the auxiliary field, the auxiliary field was integrated out and a numerical estimate was made of the sum over fermion configurations
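
    The Fourier acceleration idea, giving each momentum mode its own Langevin step size so that slow long-wavelength modes are updated as aggressively as fast short-wavelength ones, is easy to sketch for the unfrustrated 2D XY model with FFTs. The lattice size, coupling, step size and infrared regulator below are arbitrary choices, and the frustrated case would additionally require the fixed bond phases.

      import numpy as np

      L, beta, eps, m2 = 32, 1.0, 0.05, 0.1                 # assumed parameters

      k = 2.0 * np.pi * np.fft.fftfreq(L)
      khat2 = 4 * np.sin(k[:, None] / 2) ** 2 + 4 * np.sin(k[None, :] / 2) ** 2
      eps_k = eps * (khat2.max() + m2) / (khat2 + m2)       # momentum-dependent step

      def drift(theta):
          """-dS/dtheta for S = -beta * sum over nearest-neighbour bonds of cos(dtheta)."""
          f = np.zeros_like(theta)
          for axis in (0, 1):
              for shift in (1, -1):
                  f -= beta * np.sin(theta - np.roll(theta, shift, axis=axis))
          return f

      def fa_langevin_step(theta, rng):
          """Filter both the drift and the noise in momentum space with eps(k)."""
          fk = np.fft.fft2(drift(theta))
          nk = np.fft.fft2(rng.normal(size=theta.shape))
          return theta + np.fft.ifft2(eps_k * fk + np.sqrt(2.0 * eps_k) * nk).real

      def energy_per_site(theta):
          e = -beta * (np.cos(theta - np.roll(theta, 1, 0)) + np.cos(theta - np.roll(theta, 1, 1)))
          return float(e.mean())

      if __name__ == "__main__":
          rng = np.random.default_rng(3)
          theta = rng.uniform(0.0, 2.0 * np.pi, (L, L))
          for _ in range(500):
              theta = fa_langevin_step(theta, rng)
          print("energy per site:", round(energy_per_site(theta), 3))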

  14. Large earthquakes and creeping faults

    Science.gov (United States)

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  15. Numerical simulations of intense charged particle beam propagation in a dielectric wakefield accelerator

    International Nuclear Information System (INIS)

    Gai, W.; Kanareykin, A.D.; Kustov, A.L.; Simpson, J.

    1995-01-01

    The propagation of an intense electron beam through a long dielectric tube is a critical issue for the success of the dielectric wakefield acceleration scheme. Due to the head-tail instability, a high-current charged particle beam cannot propagate over a long distance without external focusing. In this paper we examine the beam handling and control problem in the dielectric wakefield accelerator. We show that for the designed 15.6 GHz and 20 GHz dielectric structures, a 150 MeV, 40-100 nC beam can be controlled and propagated up to 5 meters without significant particle losses by using an externally applied focusing and defocusing (FODO) channel around the dielectric tube. The particle dynamics of the accelerated beam is also studied. Our results show that for typical dielectric acceleration structures, the head-tail instabilities can be conveniently controlled in the same way as for the driver beam. copyright 1995 American Institute of Physics

  16. Near-Fault Broadband Ground Motion Simulations Using Empirical Green's Functions: Application to the Upper Rhine Graben (France-Germany) Case Study

    Science.gov (United States)

    Del Gaudio, Sergio; Hok, Sebastien; Festa, Gaetano; Causse, Mathieu; Lancieri, Maria

    2017-09-01

    Seismic hazard estimation relies classically on data-based ground motion prediction equations (GMPEs) giving the expected motion level as a function of several parameters characterizing the source and the sites of interest. However, records of moderate to large earthquakes at short distances from the faults are still rare. For this reason, it is difficult to obtain a reliable ground motion prediction for this class of events and distances, where the largest amount of damage is also usually observed. A possible strategy to fill this lack of information is to generate synthetic accelerograms based on accurate modeling of both the extended fault rupture and the wave propagation process. The development of such modeling strategies is essential for estimating seismic hazard close to faults in moderate seismic activity zones, where data are even scarcer. For that reason, we selected a target site in the Upper Rhine Graben (URG), at the French-German border. URG is a region where faults producing micro-seismic activity are very close to the sites of interest (e.g., critical infrastructures like supply lines, nuclear power plants, etc.), which therefore need careful investigation of seismic hazard. In this work, we demonstrate the feasibility of performing near-fault broadband ground motion numerical simulations in a moderate seismic activity region such as URG and discuss some of the challenges related to such an application. The modeling strategy is to couple the multi-empirical Green's function technique (multi-EGFt) with a k⁻² kinematic source model. One of the advantages of the multi-EGFt is that it does not require a detailed knowledge of the propagation medium, since the records of small events are used as the medium transfer function, provided that records of small earthquakes located on the target fault are available at the target site. The selection of suitable events to be used as multi-EGFs is detailed and discussed for our specific situation, where fewer events are available.

  17. Accelerating solidification process simulation for large-sized system of liquid metal atoms using GPU with CUDA

    Energy Technology Data Exchange (ETDEWEB)

    Jie, Liang [School of Information Science and Engineering, Hunan University, Changshang, 410082 (China); Li, KenLi, E-mail: lkl@hnu.edu.cn [School of Information Science and Engineering, Hunan University, Changshang, 410082 (China); National Supercomputing Center in Changsha, 410082 (China); Shi, Lin [School of Information Science and Engineering, Hunan University, Changshang, 410082 (China); Liu, RangSu [School of Physics and Micro Electronic, Hunan University, Changshang, 410082 (China); Mei, Jing [School of Information Science and Engineering, Hunan University, Changshang, 410082 (China)

    2014-01-15

    Molecular dynamics simulation is a powerful tool for simulating and analyzing complex physical processes and phenomena at the atomic level, predicting the natural time evolution of a system of atoms. Precise simulation of physical processes places strong requirements on both the simulation size and the computing timescale, so finding adequate computing resources is crucial to accelerate the computation. General-purpose graphics processing units (GPGPUs) are increasingly being used for such computing because of their high floating-point performance, wide memory bandwidth and enhanced programmability. For the most time-consuming components of an MD simulation of liquid metal solidification, this paper presents a fine-grained spatial decomposition method that accelerates the neighbor-list update and the interaction force calculation by taking advantage of modern graphics processing units (GPUs), enlarging the simulation to a system of 10 000 000 atoms. In addition, a number of evaluations and tests are discussed, ranging from executions with different precision-enabled CUDA versions, over various types of GPU (NVIDIA 480GTX, 580GTX and M2050), to CPU clusters with different numbers of CPU cores. The experimental results demonstrate that GPU-based calculations are typically 9-11 times faster than the corresponding sequential execution and approximately 1.5-2 times faster than 16-core CPU cluster implementations. On the basis of the simulated results, comparisons between the theoretical and experimental results are performed, showing good agreement between the two, and more complete and larger cluster structures in the actual macroscopic materials are observed. Moreover, different nucleation and evolution mechanisms of nano-clusters and nano-crystals formed during metal solidification are observed with large
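    The spatial decomposition described above is, at its core, a cell-list neighbour search. The following CPU/NumPy sketch illustrates the idea under assumed parameters (box size, cutoff, particle count); the actual GPU implementation maps cells and particle pairs onto CUDA threads, which is not shown here.

```python
import numpy as np

# CPU/NumPy sketch of a cell-list (spatial decomposition) neighbour search.
# Box size, cutoff and particle count are illustrative.
rng = np.random.default_rng(1)
box, rc = 10.0, 2.5
pos = rng.uniform(0.0, box, (500, 3))

def build_cells(pos, box, rc):
    n = int(box // rc)                       # cells per side, edge >= cutoff
    idx = (pos / (box / n)).astype(int) % n
    cells = {}
    for i, c in enumerate(map(tuple, idx)):
        cells.setdefault(c, []).append(i)
    return cells, n

def neighbours(i, pos, cells, n, box, rc):
    """Indices within the cutoff of particle i, searching only the 27 cells
    surrounding its own cell (periodic boundaries)."""
    c = tuple((pos[i] / (box / n)).astype(int) % n)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                key = ((c[0] + dx) % n, (c[1] + dy) % n, (c[2] + dz) % n)
                for j in cells.get(key, []):
                    if j == i:
                        continue
                    d = pos[j] - pos[i]
                    d -= box * np.round(d / box)          # minimum image
                    if np.dot(d, d) < rc * rc:
                        found.append(j)
    return found

cells, n = build_cells(pos, box, rc)
print(len(neighbours(0, pos, cells, n, box, rc)))
```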

  18. Physically based probabilistic seismic hazard analysis using broadband ground motion simulation: a case study for the Prince Islands Fault, Marmara Sea

    Science.gov (United States)

    Mert, Aydin; Fahjan, Yasin M.; Hutchings, Lawrence J.; Pınar, Ali

    2016-08-01

    The main motivation for this study was the impending occurrence of a catastrophic earthquake along the Prince Island Fault (PIF) in the Marmara Sea and the disaster risk around the Marmara region, especially in Istanbul. This study provides the results of a physically based probabilistic seismic hazard analysis (PSHA) methodology, using broadband strong ground motion simulations, for sites within the Marmara region, Turkey, that may be vulnerable to possible large earthquakes throughout the PIF segments in the Marmara Sea. The methodology is called physically based because it depends on the physical processes of earthquake rupture and wave propagation to simulate earthquake ground motion time histories. We included the effects of all considerable-magnitude earthquakes. To generate the high-frequency (0.5-20 Hz) part of the broadband earthquake simulation, real, small-magnitude earthquakes recorded by a local seismic array were used as empirical Green's functions. For the frequencies below 0.5 Hz, the simulations were obtained by using synthetic Green's functions, which are synthetic seismograms calculated by an explicit 2D /3D elastic finite difference wave propagation routine. By using a range of rupture scenarios for all considerable-magnitude earthquakes throughout the PIF segments, we produced a hazard calculation for frequencies of 0.1-20 Hz. The physically based PSHA used here followed the same procedure as conventional PSHA, except that conventional PSHA utilizes point sources or a series of point sources to represent earthquakes, and this approach utilizes the full rupture of earthquakes along faults. Furthermore, conventional PSHA predicts ground motion parameters by using empirical attenuation relationships, whereas this approach calculates synthetic seismograms for all magnitudes of earthquakes to obtain ground motion parameters. PSHA results were produced for 2, 10, and 50 % hazards for all sites studied in the Marmara region.

  19. Physically-Based Probabilistic Seismic Hazard Analysis Using Broad-Band Ground Motion Simulation: a Case Study for Prince Islands Fault, Marmara Sea

    Science.gov (United States)

    Mert, A.

    2016-12-01

    The main motivation of this study is the impending occurrence of a catastrophic earthquake along the Prince Island Fault (PIF) in the Marmara Sea and the disaster risk around the Marmara region, especially in İstanbul. This study provides the results of a physically-based Probabilistic Seismic Hazard Analysis (PSHA) methodology, using broad-band strong ground motion simulations, for sites within the Marmara region, Turkey, due to possible large earthquakes throughout the PIF segments in the Marmara Sea. The methodology is called physically-based because it depends on the physical processes of earthquake rupture and wave propagation to simulate earthquake ground motion time histories. We include the effects of all considerable-magnitude earthquakes. To generate the high-frequency (0.5-20 Hz) part of the broadband earthquake simulation, real small-magnitude earthquakes recorded by a local seismic array are used as Empirical Green's Functions (EGFs). For frequencies below 0.5 Hz, the simulations are obtained using Synthetic Green's Functions (SGFs), which are synthetic seismograms calculated by an explicit 2D/3D elastic finite difference wave propagation routine. Using a range of rupture scenarios for all considerable-magnitude earthquakes throughout the PIF segments, we provide a hazard calculation for frequencies of 0.1-20 Hz. The physically-based PSHA used here follows the same procedure as conventional PSHA, except that conventional PSHA utilizes point sources or a series of point sources to represent earthquakes, whereas this approach utilizes the full rupture of earthquakes along faults. Further, conventional PSHA predicts ground-motion parameters using empirical attenuation relationships, whereas this approach calculates synthetic seismograms for all earthquake magnitudes to obtain ground-motion parameters. PSHA results are produced for 2%, 10% and 50% hazards for all studied sites in the Marmara region.

  20. Horizontal Accelerator

    Data.gov (United States)

    Federal Laboratory Consortium — The Horizontal Accelerator (HA) Facility is a versatile research tool available for use on projects requiring simulation of the crash environment. The HA Facility is...

  1. Estimation of neutron production from accelerator head assembly of 15 MV medical LINAC using FLUKA simulations

    Energy Technology Data Exchange (ETDEWEB)

    Patil, B.J., E-mail: bjp@physics.unipune.ac.in [Department of Physics, University of Pune, Pune 411 007 (India); Chavan, S.T., E-mail: sharad@sameer.gov.in [SAMEER, IIT Powai Campus, Mumbai 400 076 (India); Pethe, S.N., E-mail: sanjay@sameer.gov.in [SAMEER, IIT Powai Campus, Mumbai 400 076 (India); Krishnan, R., E-mail: krishnan@sameer.gov.in [SAMEER, IIT Powai Campus, Mumbai 400 076 (India); Bhoraskar, V.N., E-mail: vnb@physics.unipune.ac.in [Department of Physics, University of Pune, Pune 411 007 (India); Dhole, S.D., E-mail: sanjay@physics.unipune.ac.in [Department of Physics, University of Pune, Pune 411 007 (India)

    2011-12-15

    For the production of a clinical 15 MeV photon beam, the design of the accelerator head assembly has been optimized using the Monte Carlo based FLUKA code. The accelerator head assembly consists of an e-γ target, flattening filter, primary collimator and an adjustable rectangular secondary collimator. The accelerators used for radiation therapy generate continuous-energy gamma rays, called Bremsstrahlung (BR), by impinging high-energy electrons on high-Z materials. Electron accelerators operating above 10 MeV can result in the production of neutrons, mainly due to the photonuclear reaction (γ, n) induced by high-energy photons in the accelerator head materials. These neutrons contaminate the therapeutic beam and give a non-negligible contribution to patient dose. The gamma dose and neutron dose equivalent at the patient plane (SSD = 100 cm) were obtained at field sizes of 0 × 0, 10 × 10, 20 × 20, 30 × 30 and 40 × 40 cm². The maximum neutron dose equivalent is observed near the central axis for the 30 × 30 cm² field size. This is 0.71% of the central-axis photon dose rate of 0.34 Gy/min at 1 μA electron beam current.

  2. CAS CERN Accelerator School: Power converters for particle accelerators

    International Nuclear Information System (INIS)

    Turner, S.

    1990-01-01

    This volume presents the proceedings of the fifth specialized course organized by the CERN Accelerator School, the subject on this occasion being power converters for particle accelerators. The course started with lectures on the classification and topologies of converters and on the guidelines for achieving high performance. It then went on to cover the more detailed aspects of feedback theory, simulation, measurements, components, remote control, fault diagnosis and equipment protection as well as systems and grid-related problems. The important topics of converter specification, procurement contract management and the likely future developments in semiconductor components were also covered. Although the course was principally directed towards DC and slow-pulsed supplies, lectures were added on fast converters and resonant excitation. Finally the programme was rounded off with three seminars on the related fields of Tokamak converters, battery energy storage for electric vehicles, and the control of shaft generators in ships. (orig.)

  3. Simulation studies of the ion beam transport system in a compact electrostatic accelerator-based D-D neutron generator

    Directory of Open Access Journals (Sweden)

    Das Basanta Kumar

    2014-01-01

    The study of an ion beam transport mechanism contributes to the production of a good-quality ion beam with a higher current and better beam emittance. The simulation of an ion beam provides the basis for optimizing the extraction system and the acceleration gap for the ion source. In order to extract an ion beam from an ion source, a carefully designed electrode system for the required beam energy must be used. In our case, a self-extracted Penning ion source is used for ion generation, extraction and acceleration, with a single accelerating gap, for the production of neutrons. The characteristics of the ion beam extracted from this ion source were investigated using the computer code SIMION 8.0. Ion trajectories from different locations of the plasma region were investigated. The simulation provided a good platform for optimizing the extraction and focusing system so that the ion beam is transported to the required target position without losses, and it provided an estimate of the beam emittance.

  4. Electron acceleration in the Solar corona - 3D PiC code simulations of guide field reconnection

    Science.gov (United States)

    Alejandro Munoz Sepulveda, Patricio

    2017-04-01

    The efficient electron acceleration in the solar corona, detected by means of hard X-ray emission, is still not well understood. Magnetic reconnection through current sheets is one of the proposed production mechanisms of non-thermal electrons in solar flares. Previous works in this direction were based mostly on test-particle calculations or 2D fully kinetic PiC simulations. We have now studied the consequences of self-generated current-aligned instabilities on the electron acceleration mechanisms in 3D magnetic reconnection. To this end, we carried out 3D Particle-in-Cell (PiC) numerical simulations of force-free reconnecting current sheets, appropriate for the description of solar coronal plasmas. We find efficient electron energization, evidenced by the formation of a non-thermal power-law tail with a hard spectral index smaller than -2 in the electron energy distribution function. We discuss and compare the influence of the parallel electric field versus the curvature and gradient drifts in the guiding-center approximation on the overall acceleration, and their dependence on different plasma parameters.

  5. Rupture Dynamics and Ground Motion from Earthquakes on Rough Faults in Heterogeneous Media

    Science.gov (United States)

    Bydlon, S. A.; Kozdon, J. E.; Duru, K.; Dunham, E. M.

    2013-12-01

    Heterogeneities in the material properties of Earth's crust scatter propagating seismic waves. The effects of scattered waves are reflected in the seismic coda and depend on the amplitude and spatial arrangement of the heterogeneities and the distance from source to receiver. In the vicinity of the fault, scattered waves influence the rupture process by introducing fluctuations in the stresses driving propagating ruptures. Further variability in the rupture process is introduced by the naturally occurring geometric complexity of fault surfaces, and the stress changes that accompany slip on rough surfaces. Our goal is to better understand the origin of complexity in the earthquake source process, and to quantify the relative importance of source complexity and scattering along the propagation path in causing incoherence of high-frequency ground motion. Using a 2D high-order finite difference rupture dynamics code, we nucleate ruptures on either flat or rough faults that obey strongly rate-weakening friction laws. These faults are embedded in domains with spatially varying material properties characterized by Von Karman autocorrelation functions and their associated power spectral density functions, with variations in wave speed of approximately 5 to 10%. Flat-fault simulations demonstrate that off-fault material heterogeneity, at least with this particular form and amplitude, has only a minor influence on the rupture process (i.e., fluctuations in slip and rupture velocity). In contrast, rupture histories on rough faults in both homogeneous and heterogeneous media include much larger short-wavelength fluctuations in slip and rupture velocity. We therefore conclude that source complexity is dominantly influenced by fault geometric complexity. To examine the contributions of scattering versus fault geometry to ground motions, we compute spatially averaged root-mean-square (RMS) acceleration values as a function of fault-perpendicular distance for a homogeneous medium and several

  6. Monte Carlo simulations of ultra high vacuum and synchrotron radiation for particle accelerators

    CERN Document Server

    AUTHOR|(CDS)2082330; Leonid, Rivkin

    With preparation of the Hi-Lumi LHC fully underway, and the FCC machines under study, accelerators will reach unprecedented energies and, along with them, very large amounts of synchrotron radiation (SR). This radiation will desorb photoelectrons and molecules from the accelerator walls, which contribute to electron-cloud buildup and increase the residual pressure - both effects reducing the beam lifetime. In current accelerators these two effects are among the principal limiting factors, so precise calculation of synchrotron radiation and pressure properties is very important, desirably in the early design phase. This PhD project presents the modernization and a major upgrade of two codes, Molflow and Synrad, originally written by R. Kersevan in the 1990s, which are based on the test-particle Monte Carlo method and allow ultra-high vacuum and synchrotron radiation calculations. The new versions contain new physics and are built as an all-in-one package available to the public. Existing vacuum calculation methods are overvi...

  7. Numerical simulation on range of high-energy electron moving in accelerator target

    International Nuclear Information System (INIS)

    Shao Wencheng; Sun Punan; Dai Wenjiang

    2008-01-01

    In order to determine the range of high-energy electrons moving in accelerator targets, the range of electrons with energies of 1 to 100 MeV in common accelerator target materials was calculated by the Monte Carlo method. A comparison between the calculated results and published data was performed, and the Monte Carlo results are in good agreement with the published data. Empirical formulas were obtained by curve fitting for the range of high-energy electrons with energies of 1 to 100 MeV in common target materials, offering a set of reference data for the design of targets in electron accelerators. (authors)

  8. The time dependent propensity function for acceleration of spatial stochastic simulation of reaction–diffusion systems

    International Nuclear Information System (INIS)

    Fu, Jin; Wu, Sheng; Li, Hong; Petzold, Linda R.

    2014-01-01

    The inhomogeneous stochastic simulation algorithm (ISSA) is a fundamental method for spatial stochastic simulation. However, when diffusion events occur more frequently than reaction events, simulating the diffusion events by ISSA is quite costly. To reduce this cost, we propose to use the time dependent propensity function in each step. In this way we can avoid simulating individual diffusion events, and use the time interval between two adjacent reaction events as the simulation stepsize. We demonstrate that the new algorithm can achieve orders of magnitude efficiency gains over widely-used exact algorithms, scales well with increasing grid resolution, and maintains a high level of accuracy
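    For context, the following is a minimal plain SSA (Gillespie direct method) on a two-voxel system with diffusion hops and a decay reaction; it is not the paper's time-dependent-propensity algorithm, but it makes visible the cost that algorithm removes: every individual hop is simulated as its own event. Rates and molecule counts are illustrative.

```python
import numpy as np

# Plain SSA (Gillespie direct method): two voxels coupled by diffusion,
# plus a decay reaction A -> 0 in each voxel.  Every hop is its own event.
rng = np.random.default_rng(2)
d, kdec = 50.0, 0.5                # hop rate per molecule, decay rate
x = np.array([200, 0])             # copies of A in voxel 0 and voxel 1
t, t_end, events = 0.0, 5.0, 0

while t < t_end:
    a = np.array([d * x[0], d * x[1], kdec * x[0], kdec * x[1]])
    a0 = a.sum()
    if a0 == 0.0:
        break
    t += rng.exponential(1.0 / a0)
    r = rng.choice(4, p=a / a0)
    if r == 0:
        x += np.array([-1, 1])     # hop: voxel 0 -> 1
    elif r == 1:
        x += np.array([1, -1])     # hop: voxel 1 -> 0
    elif r == 2:
        x[0] -= 1                  # decay in voxel 0
    else:
        x[1] -= 1                  # decay in voxel 1
    events += 1

print(events, "events; final state", x)
```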

  9. Polarized e-bunch acceleration at Cornell RCS: Tentative tracking simulations

    Energy Technology Data Exchange (ETDEWEB)

    Meot, F. [Brookhaven National Lab. (BNL), Upton, NY (United States); Ptitsyn, V. [Brookhaven National Lab. (BNL), Upton, NY (United States); Ranjbar, V. [Brookhaven National Lab. (BNL), Upton, NY (United States); Rubin, D. [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2017-10-19

    An option as an injector into eRHIC electron storage ring is a rapid-cyclic synchrotron (RCS). Rapid acceleration of polarized electron bunches has never been done, Cornell synchrotron might lend itself to dedicated tests, which is to be first explored based on numerical investigations. This paper is a very preliminary introduction to the topic.

  10. Exploring the Physics Limitations of Compact High Gradient Accelerating Structures Simulations of the Electron Current Spectrometer Setup in Geant4

    CERN Document Server

    Van Vliet, Philine Julia

    2017-01-01

    The high field gradient of 100 MV/m that will be applied to the accelerator cavities of the Compact Linear Collider (CLIC), gives rise to the problem of RF breakdowns. The field collapses and a plasma of electrons and ions is being formed in the cavity, preventing the RF field from penetrating the cavity. Electrons in the plasma are being accelerated and ejected out, resulting in a breakdown current up to a few Amp`eres, measured outside the cavities. These breakdowns lead to luminosity loss, so reducing their amount is of great importance. For this, a better understanding of the physics behind RF breakdowns is needed. To study these breakdowns, the XBox 2 test facility has a spectrometer setup installed after the RF cavity that is being conditioned. For this report, a simulation of this spectrometer setup has been made using Geant4. Once a detailed simulation of the RF field and cavity has been made, it can be connected to this simulation of the spectrometer setup and used to recreate the data that has b...

  11. Application of the reduction of scale range in a Lorentz boosted frame to the numerical simulation of particle acceleration devices

    International Nuclear Information System (INIS)

    Vay, J.; Fawley, W.M.; Geddes, C.G.; Cormier-Michel, E.; Grote, D.P.

    2009-01-01

    It has been shown that the ratio of longest to shortest space and time scales of a system of two or more components crossing at relativistic velocities is not invariant under Lorentz transformation. This implies the existence of a frame of reference minimizing an aggregate measure of the ratio of space and time scales. It was demonstrated that this translates into a reduction by orders of magnitude in computer simulation run times, using methods based on first principles (e.g., Particle-In-Cell), for particle acceleration devices and for problems such as free electron lasers, laser-plasma accelerators, and particle beams interacting with electron clouds. Since then, speed-ups ranging from 75 to more than four orders of magnitude have been reported for the simulation of either scaled or reduced models of the above-cited problems. It was also shown that, to achieve the full benefits of calculation in a boosted frame, some of the standard numerical techniques needed to be revised. The theory behind the speed-up of numerical simulation in a boosted frame, the latest developments of numerical methods, and example applications with the new opportunities that they offer are all presented.
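    The abstract does not reproduce the underlying scale argument; as a rough illustration for the laser-plasma accelerator case, the standard estimate runs as follows (the symbols below are generic, not values from the paper).

```latex
% Rough, illustrative scale estimate for a laser-plasma stage: in the lab frame
% the shortest scale is the laser wavelength \lambda_0 and the longest is the
% plasma length L_p.  In a frame moving with the wake at velocity \beta_b c
% (Lorentz factor \gamma_b),
\[
  \lambda' \;=\; \gamma_b\,(1+\beta_b)\,\lambda_0,
  \qquad
  L_p' \;=\; \frac{L_p}{\gamma_b},
\]
% so the ratio of longest to shortest scales is reduced by a factor
\[
  \frac{L_p/\lambda_0}{\,L_p'/\lambda'\,}
  \;=\; \gamma_b^{2}\,(1+\beta_b) \;\approx\; 2\gamma_b^{2},
\]
% which is the origin of the orders-of-magnitude run-time reductions cited above.
```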

  12. Acceleration and sensitivity analysis of lattice kinetic Monte Carlo simulations using parallel processing and rate constant rescaling.

    Science.gov (United States)

    Núñez, M; Robie, T; Vlachos, D G

    2017-10-28

    Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
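    The abstract does not spell out the rescaling rule; the sketch below shows one simplified, assumed form of the idea: reversible steps that are much faster than the rate-determining scale are slowed by a common factor, preserving each pair's equilibrium ratio. The function name, criterion and rate values are hypothetical.

```python
import numpy as np

# Simplified, assumed form of rate-constant rescaling for stiff KMC networks:
# fast, quasi-equilibrated reversible steps are slowed by a common factor while
# each pair's ratio k_fwd/k_rev (and hence its equilibrium) is preserved.
def rescale_fast_pairs(k_fwd, k_rev, k_slow_ref, delta=100.0):
    """Scale each reversible pair so it is at most `delta` times faster than
    the slow, rate-determining reference scale `k_slow_ref`."""
    k_fwd = np.asarray(k_fwd, dtype=float)
    k_rev = np.asarray(k_rev, dtype=float)
    pair_speed = np.maximum(k_fwd, k_rev)
    factor = np.maximum(pair_speed / (delta * k_slow_ref), 1.0)
    return k_fwd / factor, k_rev / factor

kf = [1.0e7, 3.0e3, 2.0]           # hypothetical forward rate constants (1/s)
kr = [8.0e6, 1.0e3, 0.5]           # hypothetical reverse rate constants (1/s)
print(rescale_fast_pairs(kf, kr, k_slow_ref=2.0))
```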

  13. Control and simulation of fault and change effect in a back to back system of high voltage direct current network

    International Nuclear Information System (INIS)

    Mohsen, Kalantar; Mehdi, Rashidi; Mehdi, Rashidi; Tabatabaei, Naser M.

    2005-01-01

    Full text: The earliest human knowledge of electrical energy was in the form of static electricity, and the first transmission lines were DC. However, because of early problems in transmitting electrical power in DC form at medium and low voltage levels, and because of the higher efficiency of AC machines in comparison with DC machines and the availability of AC transformers of different capacities, electrical power came to be transmitted in AC form. With the advancement of electrical engineering technology in the 20th century, transmission systems in HVDC form came into use, and since then HVDC technology has improved rapidly. In this paper, in addition to discussing the advantages of HVDC systems and the back-to-back configuration, the effects of faults and changes in a back-to-back system are discussed.

  14. Simulation of wire-compensation of long range beam beam interaction in high energy accelerators

    International Nuclear Information System (INIS)

    Dorda, U.; )

    2006-01-01

    Full text: We present weak-strong simulation results for the effect of long-range beam-beam (LRBB) interaction in LHC as well as for proposed wire compensation schemes or wire experiments, respectively. In particular, we discuss details of the simulation model, instability indicators, the effectiveness of compensation, the difference between nominal and PACMAN bunches for the LHC, beam experiments, and wire tolerances. The simulations are performed with the new code BBTrack. (author)

  15. Simulation of power flow in magnetically insulated convolutes for pulsed modular accelerators

    International Nuclear Information System (INIS)

    Seidel, D.B.; Goplen, B.C.; VanDevender, J.P.

    1980-01-01

    Two distinct simulation approaches for magnetic insulation are developed which can be used to address the question of nonsimultaneity. First, a two-dimensional model for a two-module system is simulated using a fully electromagnetic, two-dimensional, time-dependent particle code. Next, a nonlinear equivalent circuit approach is used to compare with the direct simulation for the two module case. The latter approach is then extended to a more interesting three-dimensional geometry with several MITL modules

  16. Accounting for the fringe magnetic field from the bending magnet in a Monte Carlo accelerator treatment head simulation.

    Science.gov (United States)

    O'Shea, Tuathan P; Foley, Mark J; Faddegon, Bruce A

    2011-06-01

    Monte Carlo (MC) simulation can be used for accurate electron beam treatment planning and modeling. Measurement of large electron fields, with the applicator removed and secondary collimator wide open, has been shown to provide accurate simulation parameters, including asymmetry in the measured dose, for the full range of clinical field sizes and patient positions. Recently, disassembly of the treatment head of a linear accelerator has been used to refine the simulation of the electron beam, setting tightly measured constraints on source and geometry parameters used in simulation. The simulation did not explicitly include the known deflection of the electron beam by a fringe magnetic field from the bending magnet, which extended into the treatment head. Instead, the secondary scattering foil and monitor chamber were unrealistically laterally offset to account for the beam deflection. This work is focused on accounting for this fringe magnetic field in treatment head simulation. The magnetic field below the exit window of a Siemens Oncor linear accelerator was measured with a Tesla-meter from 0 to 12 cm from the exit window and 1-3 cm off-axis. Treatment head simulation was performed with the EGSnrc/BEAMnrc code, modified to incorporate the effect of the magnetic field on charged particle transport. Simulations were used to analyze the sensitivity of dose profiles to various sources of asymmetry in the treatment head. This included the lateral spot offset and beam angle at the exit window, the fringe magnetic field and independent lateral offsets of the secondary scattering foil and electron monitor chamber. Simulation parameters were selected within the limits imposed by measurement uncertainties. Calculated dose distributions were then compared with those measured in water. The magnetic field was a maximum at the exit window, increasing from 0.006 T at 6 MeV to 0.020 T at 21 MeV and dropping to approximately 5% of the maximum at the secondary scattering foil. It

  17. A Hardware-Accelerated Fast Adaptive Vortex-Based Flow Simulation Software, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Applied Scientific Research has recently developed a Lagrangian vortex-boundary element method for the grid-free simulation of unsteady incompressible...

  18. Simulation of accelerated strip cooling on the hot rolling mill run-out roller table

    International Nuclear Information System (INIS)

    Muhin, U.; Belskij, S.; Makarov, E.; Koinov, T.

    2013-01-01

    Full text: A mathematical model of the thermal state of the metal on the run-out roller table of a continuous wide hot-strip mill is presented. The mathematical model takes into account the heat generation during the polymorphic γ → α transformation of the supercooled austenite phase and the influence of chemical composition on the physical properties of the steel. The model allows the calculation of accelerated cooling modes for strips on the run-out roller table of a continuous wide hot-strip mill. The winding temperature calculation error does not exceed 20 °C for 98.5 % of the strips from low-carbon and low-alloyed steels. Key words: hot rolling, wide strip, accelerated cooling, run-out roller table, polymorphic transformation, mathematical modeling

  19. Simulation experiment on low-level RF control for dual-harmonic acceleration at CSNS RCS

    International Nuclear Information System (INIS)

    Shen Sirong; Li Xiao; Zhang Chunlin; Sun Hong; Tang Jingyu

    2013-01-01

    The design and test of the low-level RF (LLRF) control system for the dual-harmonic acceleration at the rapid cycling synchrotron (RCS) of China Spallation Neutron Source (CSNS) at phase Ⅰ is introduced. In order to implement the mode switch from the second harmonic to the fundamental during the acceleration cycle for one of the eight RF cavities, the LLRF system for the cavity has been designed differently from the others. Several technical measures such as the opening of the control loops during the mode switch and the reclosing of two tuning circuits of the RF amplifier at different moments, have been taken. The experimental results on the testing platform based on an RF prototype show good dynamic performance of the LLRF system and prove the feasibility of dual-harmonic operation. (authors)

  20. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    International Nuclear Information System (INIS)

    Badal, Andreu; Badano, Aldo

    2009-01-01

    Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
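    The essence of the GPU mapping is that each photon history is an independent sample, so many histories can be advanced in parallel. The following NumPy sketch shows batched sampling of free paths and interaction types under assumed attenuation coefficients; it is a stand-in for the one-thread-per-history CUDA kernels, not the authors' code.

```python
import numpy as np

# Batched photon step: sample free path lengths and interaction types for a
# whole batch of histories at once.  The attenuation coefficients are
# illustrative for a single material, not PENELOPE data.
rng = np.random.default_rng(3)
mu_photo, mu_compton = 0.02, 0.18            # 1/cm, hypothetical
mu_total = mu_photo + mu_compton

n_photons = 1_000_000
free_path = rng.exponential(1.0 / mu_total, n_photons)      # cm to next event
compton = rng.random(n_photons) < mu_compton / mu_total     # interaction type
print(free_path.mean(), compton.mean())
```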

  1. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Badal, Andreu; Badano, Aldo [Division of Imaging and Applied Mathematics, OSEL, CDRH, U.S. Food and Drug Administration, Silver Spring, Maryland 20993-0002 (United States)

    2009-11-15

    Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  2. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit.

    Science.gov (United States)

    Badal, Andreu; Badano, Aldo

    2009-11-01

    It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA™ programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  3. PARTICLE-IN-CELL SIMULATION OF A STRONG DOUBLE LAYER IN A NONRELATIVISTIC PLASMA FLOW: ELECTRON ACCELERATION TO ULTRARELATIVISTIC SPEEDS

    International Nuclear Information System (INIS)

    Dieckmann, Mark E.; Bret, Antoine

    2009-01-01

    Two charge- and current-neutral plasma beams are modeled with a one-dimensional particle-in-cell simulation. The beams are uniform and unbounded. The relative speed between both beams is 0.4c. One beam is composed of electrons and protons, and the other of protons and negatively charged oxygen (dust). All species have the temperature 9.1 keV. A Buneman instability develops between the electrons of the first beam and the protons of the second beam. The wave traps the electrons, which form plasmons. The plasmons couple energy into the ion acoustic waves, which trap the protons of the second beam. A structure similar to a proton phase-space hole develops, which grows through its interaction with the oxygen and the heated electrons into a rarefaction pulse. This pulse drives a double layer, which accelerates a beam of electrons to about 50 MeV, which is comparable to the proton kinetic energy. The proton distribution eventually evolves into an electrostatic shock. Beams of charged particles moving at such speeds may occur in the foreshock of supernova remnant (SNR) shocks. This double layer is thus potentially relevant for the electron acceleration (injection) into the diffusive shock acceleration by SNR shocks.

  4. Expected damage to accelerator equipment due to the impact of the full LHC beam: beam instrumentation, experiments and simulations

    CERN Document Server

    Burkart, Florian

    The Large Hadron Collider (LHC) is the biggest and most powerful particle accelerator in the world, designed to collide two proton beams with a particle momentum of 7 TeV/c each. The stored energy of 362 MJ in each beam is sufficient to melt 500 kg of copper or to evaporate about 300 liters of water. An accidental release of even a small fraction of the beam energy can cause severe damage to accelerator equipment. Reliable machine protection systems are necessary to safely operate the accelerator complex. To design a machine protection system, it is essential to know the damage potential of the stored beam and the consequences in case of a failure. One (catastrophic) failure would be if the entire beam were lost in the aperture due to a problem with the beam dumping system. This thesis presents the simulation studies, results of a benchmarking experiment, and detailed target investigation for this failure case. In the experiment, solid copper cylinders were irradiated with the 440 GeV proton beam delivered by the ...

  5. Novel neural networks-based fault tolerant control scheme with fault alarm.

    Science.gov (United States)

    Shen, Qikun; Jiang, Bin; Shi, Peng; Lim, Cheng-Chew

    2014-11-01

    In this paper, the problem of adaptive active fault-tolerant control for a class of nonlinear systems with unknown actuator faults is investigated. The actuator fault is assumed to have no traditional affine appearance of the system state variables and control input. The useful property of the basis function of the radial basis function neural network (NN), which will be used in the design of the fault-tolerant controller, is explored. Based on the analysis of the design of normal and passive fault-tolerant controllers, and by using the implicit function theorem, a novel NN-based active fault-tolerant control scheme with fault alarm is proposed. Compared with results in the literature, the fault-tolerant control scheme can minimize the time delay between fault occurrence and accommodation, which is called the time delay due to fault diagnosis, and reduce the adverse effect on system performance. In addition, the FTC scheme combines the advantages of a passive fault-tolerant control scheme with the properties of a traditional active fault-tolerant control scheme. Furthermore, the fault-tolerant control scheme requires no additional fault detection and isolation model, which is necessary in a traditional active fault-tolerant control scheme. Finally, simulation results are presented to demonstrate the efficiency of the developed techniques.
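    The building block of such schemes is a radial basis function network used as an online approximator of the unknown fault term. The sketch below shows a minimal RBF network with a simple gradient-style weight update; the centres, width and adaptation gain are illustrative choices, not the adaptation law of the paper.

```python
import numpy as np

# Minimal RBF network used as a function approximator, with a simple
# gradient-style weight update driven by an observed error signal.
# Centres, width and gain are illustrative, not the paper's design.
def rbf_features(x, centres, width):
    d2 = ((x[None, :] - centres) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def rbf_output(weights, x, centres, width):
    return float(weights @ rbf_features(x, centres, width))

def adapt_weights(weights, x, error, centres, width, gain=0.05):
    """Push the approximation toward the observed error signal."""
    return weights + gain * error * rbf_features(x, centres, width)

centres = np.array([[-1.0, -1.0], [0.0, 0.0], [1.0, 1.0]])   # hypothetical
weights = np.zeros(len(centres))
x, target = np.array([0.2, -0.1]), 0.7
for _ in range(200):
    err = target - rbf_output(weights, x, centres, width=1.0)
    weights = adapt_weights(weights, x, err, centres, width=1.0)
print(rbf_output(weights, x, centres, width=1.0))
```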

  6. Numerical simulation of particle jet formation induced by shock wave acceleration in a Hele-Shaw cell

    Science.gov (United States)

    Osnes, A. N.; Vartdal, M.; Pettersson Reif, B. A.

    2018-05-01

    The formation of jets from a shock-accelerated cylindrical shell of particles, confined in a Hele-Shaw cell, is studied by means of numerical simulation. A number of simulations have been performed, systematically varying the coupling between the gas and solid phases in an effort to identify the primary mechanism(s) responsible for jet formation. We find that coupling through drag is sufficient for the formation of jets. Including the effect of particle volume fraction and particle collisions did not alter the general behaviour, but had some influence on the length, spacing and number of jets. Furthermore, we find that the jet selection process starts early in the dispersal process, during the initial expansion of the particle layer.

  7. Cluster analysis of accelerated molecular dynamics simulations: A case study of the decahedron to icosahedron transition in Pt nanoparticles

    Science.gov (United States)

    Huang, Rao; Lo, Li-Ta; Wen, Yuhua; Voter, Arthur F.; Perez, Danny

    2017-10-01

    Modern molecular-dynamics-based techniques are extremely powerful to investigate the dynamical evolution of materials. With the increase in sophistication of the simulation techniques and the ubiquity of massively parallel computing platforms, atomistic simulations now generate very large amounts of data, which have to be carefully analyzed in order to reveal key features of the underlying trajectories, including the nature and characteristics of the relevant reaction pathways. We show that clustering algorithms, such as the Perron Cluster Cluster Analysis, can provide reduced representations that greatly facilitate the interpretation of complex trajectories. To illustrate this point, clustering tools are used to identify the key kinetic steps in complex accelerated molecular dynamics trajectories exhibiting shape fluctuations in Pt nanoclusters. This analysis provides an easily interpretable coarse representation of the reaction pathways in terms of a handful of clusters, in contrast to the raw trajectory that contains thousands of unique states and tens of thousands of transitions.
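    A simplified stand-in for the clustering step is sketched below: a row-stochastic transition matrix is estimated from a discretised trajectory and the states are split into two metastable groups by the sign structure of the second eigenvector (PCCA-type methods generalise this to k clusters). The toy trajectory and the two-cluster split are illustrative, not the paper's analysis.

```python
import numpy as np

# Simplified spectral split of a state trajectory into two metastable groups;
# full PCCA+ generalises this idea.  The trajectory below is a toy example.
def transition_matrix(traj, n_states):
    counts = np.zeros((n_states, n_states))
    for a, b in zip(traj[:-1], traj[1:]):
        counts[a, b] += 1.0
    counts += 1e-9                              # keep rows non-empty
    return counts / counts.sum(axis=1, keepdims=True)

def two_cluster_split(T):
    vals, vecs = np.linalg.eig(T)
    order = np.argsort(-vals.real)              # eigenvalue 1 comes first
    slow = vecs[:, order[1]].real               # second-slowest process
    return slow >= 0.0                          # boolean cluster labels

traj = [0, 0, 1, 0, 1, 1, 0, 2, 3, 3, 2, 3, 2, 2, 3, 0, 0, 1]
T = transition_matrix(traj, n_states=4)
print(two_cluster_split(T))
```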

  8. Design and simulation of a short, variable-energy 4 to 10 MV S-band linear accelerator waveguide.

    Science.gov (United States)

    Baillie, Devin; Fallone, B Gino; Steciw, Stephen

    2017-06-01

    The aim was to modify a previously designed, short, 10 MV linac waveguide so that it can produce any energy from 4 to 10 MV. The modified waveguide is designed to be a drop-in replacement for the 6 MV waveguide used in the authors' current linear accelerator-magnetic resonance imager (Linac-MR). Using our group's previously designed short 10 MV linac as a starting point, the port was moved to the fourth cavity, the shift to the first coupling cavity was removed, and a tuning cylinder was added to the first coupling cavity. Each cavity was retuned using finite element method (FEM) simulations to resonate at the desired frequency. FEM simulations were used to determine the RF field distributions for various tuning cylinder depths, and electron trajectories were computed using a particle-in-cell model to determine the required RF power level and tuning cylinder depth to produce electron energy distributions for 4, 6, 8, and 10 MV photon beams. Monte Carlo simulations were then used to compare the depth dose profiles with those produced by published electron beam characteristics for Varian linacs. For each desired photon energy, the electron beam energy was within 0.5% of the target mean energy, the depth of maximum dose was within 1.5 mm of that produced by the Varian linac, and the ratio of dose at 10 cm depth to 20 cm depth was within 1%. A new 27.5 cm linear accelerator waveguide design capable of producing any photon energy between 4 and 10 MV has been simulated; however, the coupling port design and the implications of increased electron beam current at 10 MV remain to be investigated. For the specific cases of 4, 6, and 10 MV, this linac produces depth dose profiles similar to those produced by published spectra for Varian linacs. © 2017 American Association of Physicists in Medicine.

  9. A Design Method for Fault Reconfiguration and Fault-Tolerant Control of a Servo Motor

    Directory of Open Access Journals (Sweden)

    Jing He

    2013-01-01

    A design scheme that integrates fault reconfiguration and fault-tolerant position control is proposed for a nonlinear servo system with friction. Analysis of the non-linear friction torque and fault in the system is used to guide the design of a sliding mode position controller. A sliding mode observer is designed to achieve fault reconfiguration based on the equivalence principle. Thus, active fault-tolerant position control of the system can be realized. A real-time simulation experiment is performed on a hardware-in-the-loop simulation platform. The results show that the system reconfigures well for both incipient and abrupt faults. Under the fault-tolerant control mechanism, the output signal for the system position can rapidly track given values without being influenced by faults.

  10. Monte Carlo simulation of electron beams from an accelerator head using PENELOPE

    Energy Technology Data Exchange (ETDEWEB)

    Sempau, J. [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, 08028 Barcelona (Spain). E-mail: josep.sempau@upc.es; Sanchez-Reyes, A. [Servei d' Oncologia Radioterapica, Hospital Clinic de Barcelona, Villarroel 170, 08036 Barcelona (Spain); Institut d' Investigaciones Biomediques August Pi i Sunyer (IDIBAPS), Universitat de Barcelona (Spain); Salvat, F.; Oulad ben Tahar, H.; Fernandez-Varea, J.M. [Facultat de Fisica (ECM), Universitat de Barcelona, Societat Catalana de Fisica (IEC), Diagonal 647, 08028 Barcelona (Spain); Jiang, S.B. [Department of Radiation Oncology, Stanford University School of Medicine, 300 Pasteur Drive, Stanford, CA 94305-5304 (United States)

    2001-04-01

    The Monte Carlo code PENELOPE has been used to simulate electron beams from a Siemens Mevatron KDS linac with nominal energies of 6, 12 and 18 MeV. Owing to its accuracy, which stems from that of the underlying physical interaction models, PENELOPE is suitable for simulating problems of interest to the medical physics community. It includes a geometry package that allows the definition of complex quadric geometries, such as those of irradiation instruments, in a straightforward manner. Dose distributions in water simulated with PENELOPE agree well with experimental measurements using a silicon detector and a monitoring ionization chamber. Insertion of a lead slab in the incident beam at the surface of the water phantom produces sharp variations in the dose distributions, which are correctly reproduced by the simulation code. Results from PENELOPE are also compared with those of equivalent simulations with the EGS4-based user codes BEAM and DOSXYZ. Angular and energy distributions of electrons and photons in the phase-space plane (at the downstream end of the applicator) obtained from both simulation codes are similar, although significant differences do appear in some cases. These differences, however, are shown to have a negligible effect on the calculated dose distributions. Various practical aspects of the simulations, such as the calculation of statistical uncertainties and the effect of the 'latent' variance in the phase-space file, are discussed in detail. (author)

  11. H⁻ ion source for CERN's Linac4 accelerator: simulation, experimental validation and optimization of the hydrogen plasma

    CERN Document Server

    Mattei, Stefano; Lettry, Jacques

    2017-07-25

    Linac4 is the new negative hydrogen ion (H⁻) linear accelerator of the European Organization for Nuclear Research (CERN). Its ion source operates on the principle of Radio-Frequency Inductively Coupled Plasma (RF-ICP) and is required to provide 50 mA of H⁻ beam in pulses of 600 μs, with a repetition rate of up to 2 Hz and within an RMS emittance of 0.25 π mm mrad, in order to fulfil the requirements of the accelerator. This thesis is dedicated to the characterization of the hydrogen plasma in the Linac4 H⁻ ion source. We have developed a Particle-In-Cell Monte Carlo Collision (PIC-MCC) code to simulate the RF-ICP heating mechanism and performed measurements to benchmark the fraction of the simulation outputs that can be experimentally accessed. The code solves self-consistently the interaction between the electromagnetic field generated by the RF coil and the resulting plasma response, including a kinetic description of charged and neutral species. A fully-implicit implementation allowed to si...

  12. Simulation of accelerated strip cooling on the hot rolling mill run-out roller table

    Directory of Open Access Journals (Sweden)

    E.Makarov

    2016-07-01

    A mathematical model of the thermal state of the metal on the run-out roller table of a continuous wide hot-strip mill is presented. The mathematical model takes into account the heat generation due to the polymorphic γ → α transformation of the supercooled austenite phase and the influence of the chemical composition of the steel on the physical properties of the metal. The model allows the calculation of accelerated cooling modes for strips on the run-out roller table of a continuous wide hot-strip mill. The winding temperature calculation error does not exceed 20 °C for 98.5 % of strips of low-carbon and low-alloy steels.

  13. Test simulation of neutron damage to electronic components using accelerator facilities

    Energy Technology Data Exchange (ETDEWEB)

    King, D.B., E-mail: dbking@sandia.gov; Fleming, R.M.; Bielejec, E.S.; McDonald, J.K.; Vizkelethy, G.

    2015-12-15

    The purpose of this work is to demonstrate equivalent bipolar transistor damage response to neutrons and silicon ions. We report on irradiation tests performed at the White Sands Missile Range Fast Burst Reactor, the Sandia National Laboratories (SNL) Annular Core Research Reactor, the SNL SPHINX accelerator, and the SNL Ion Beam Laboratory using commercial silicon npn bipolar junction transistors (BJTs) and III–V Npn heterojunction bipolar transistors (HBTs). Late time and early time gain metrics as well as defect spectra measurements are reported.

  14. An improved cellular automata model for train operation simulation with dynamic acceleration

    Science.gov (United States)

    Li, Wen-Jun; Nie, Lei

    2018-03-01

    Urban rail transit plays an important role in urban public transport because of its speed, large transport capacity, high safety and reliability, and low pollution. This study proposes an improved cellular automaton (CA) model that accounts for the dynamic characteristics of train acceleration, and uses it to analyze energy consumption and train running time. Constructing an effective model for calculating energy consumption is the basis for studying and analyzing energy-saving measures in urban rail transit operation.
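    As a toy illustration of the modelling idea (not the paper's calibrated model), the sketch below advances a single train on a one-dimensional cell lattice with an acceleration that shrinks as speed grows; the cell length, speed limit and traction curve are assumptions.

```python
# Toy cellular-automaton train update with speed-dependent ("dynamic")
# acceleration.  Cell length, speed limit and the traction curve are
# illustrative assumptions, not the paper's calibration.
V_MAX = 20          # maximum speed, cells per step
A_MAX = 2.0         # maximum acceleration at standstill, cells per step^2

def dynamic_accel(v):
    return A_MAX * (1.0 - v / V_MAX)        # traction falls off with speed

def update(position, v, dist_to_stop):
    v = min(v + dynamic_accel(v), float(V_MAX))
    v = min(v, float(dist_to_stop))         # crude safe-braking constraint
    position += int(v)
    return position, v

pos, v = 0, 0.0
for _ in range(40):
    pos, v = update(pos, v, dist_to_stop=1000 - pos)
print(pos, round(v, 2))
```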

  15. Computer simulation of 2-D and 3-D ion beam extraction and acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Ido, Shunji; Nakajima, Yuji [Saitama Univ., Urawa (Japan). Faculty of Engineering

    1997-03-01

    Two-dimensional and three-dimensional codes have been developed to study the physical features of ion beams in the extraction and acceleration stages. Using the two-dimensional code, the design of the first electrode (plasma grid) is examined with regard to beam divergence. In the computational studies using the three-dimensional code, an off-axis model of the ion beam is investigated. It is found that the deflection angle of the ion beam is proportional to the gap displacement of the electrodes. (author)

  16. GPU-Accelerated Sparse Matrix Solvers for Large-Scale Simulations, Phase