WorldWideScience

Sample records for fault simulation acceleration

  1. Accelerated Techniques in Stem Fault Simulation

    Institute of Scientific and Technical Information of China (English)

    石茵; 魏道政

    1996-01-01

Stem fault simulation is the most expensive part of fault simulation. To cope with it, several acceleration techniques are presented in this paper. These techniques include static analysis of the circuit structure in a preprocessing stage and dynamic calculations in the fault simulation stage. With these techniques, the area examined for stem fault simulation and the number of stems requiring explicit fault simulation are greatly reduced, so that the overall fault simulation time is substantially decreased. Experimental results given in this paper show that a fault simulation algorithm using these techniques is highly efficient for both small and large numbers of test patterns; notably, its effectiveness improves as the number of circuit gates increases.
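
    The core idea can be illustrated with a toy serial stuck-at fault simulator; the circuit, gate names, and the explicit simulation of a stem fault below are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of serial stuck-at fault simulation on a small
# combinational circuit; the netlist and net names are invented.
from itertools import product

# netlist: gate -> (function, inputs); primary inputs are 'a', 'b', 'c'
GATES = {
    "s":   (lambda x, y: x & y, ("a", "b")),   # fanout stem
    "g1":  (lambda x, y: x | y, ("s", "c")),
    "g2":  (lambda x, y: x ^ y, ("s", "c")),
    "out": (lambda x, y: x & y, ("g1", "g2")),
}

def simulate(inputs, fault=None):
    """Evaluate the circuit; fault=(net, stuck_value) forces one net."""
    values = dict(inputs)
    if fault and fault[0] in values:
        values[fault[0]] = fault[1]
    for name, (fn, ins) in GATES.items():
        values[name] = fn(*(values[i] for i in ins))
        if fault and fault[0] == name:
            values[name] = fault[1]
    return values["out"]

def detected(fault):
    """A fault is detected if some pattern changes the primary output."""
    return any(
        simulate(dict(zip("abc", p))) != simulate(dict(zip("abc", p)), fault)
        for p in product((0, 1), repeat=3)
    )

print(detected(("s", 0)))   # explicit simulation of the stem fault
```

    A preprocessing pass in the spirit of the paper would analyze the circuit structure to decide, before running `detected`, which stems can inherit their detection status from their fanout branches and which need this explicit treatment.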

  2. Fault-Mechanism Simulator

    Science.gov (United States)

    Guyton, J. W.

    1972-01-01

    An inexpensive, simple mechanical model of a fault can be produced to simulate the effects leading to an earthquake. This model has been used successfully with students from elementary to college levels and can be demonstrated to classes as large as thirty students. (DF)

  4. LHC Accelerator Fault Tracker - First Experience

    CERN Document Server

    Apollonio, Andrea; Roderick, Chris; Schmidt, Ruediger; Todd, Benjamin; Wollmann, Daniel

    2016-01-01

    Availability is one of the key performance indicators of LHC operation, being directly correlated with integrated luminosity production. An effective tool for availability tracking is a necessity to ensure a coherent capture of fault information and relevant dependencies on operational modes and beam parameters. At the beginning of LHC Run 2 in 2015, the Accelerator Fault Tracking (AFT) tool was deployed at CERN to track faults or events affecting LHC operation. Information derived from the AFT is crucial for the identification of areas to improve LHC availability, and hence LHC physics production. For the 2015 run, the AFT has been used by members of the CERN Availability Working Group, LHC Machine coordinators and equipment owners to identify the main contributors to downtime and to understand the evolution of LHC availability throughout the year. In this paper the 2015 experience with the AFT for availability tracking is summarised and an overview of the first results as well as an outlook to future develo...

  5. Hardware Accelerated Simulated Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S; Frank, R

    2005-04-12

    We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32 bit floating point texture capabilities to obtain validated solutions to the radiative transport equation for X-rays. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedra that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester. We show that the hardware accelerated solution is faster than the current technique used by scientists.
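
    The absorption-only regime described above reduces to the Beer-Lambert law, I = I0 · exp(−∫ μ ds). A minimal CPU sketch on a voxel grid follows; it is not the paper's hardware-accelerated hexahedron projection, and the grid and attenuation values are invented.

```python
# Minimal CPU sketch of an absorption-only radiograph via the
# Beer-Lambert law; the voxel grid and attenuation values are invented.
import numpy as np

def radiograph(mu, dz, i0=1.0):
    """Parallel rays along the z axis through a voxel grid of linear
    attenuation coefficients mu[x, y, z]; path length per voxel is dz."""
    optical_depth = mu.sum(axis=2) * dz          # line integral of mu
    return i0 * np.exp(-optical_depth)           # transmitted intensity

mu = np.zeros((4, 4, 8))
mu[1:3, 1:3, :] = 0.5            # a dense block embedded in vacuum
image = radiograph(mu, dz=0.1)
print(image[0, 0], image[1, 1])  # full intensity outside, attenuated behind the block
```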

  6. Memory Circuit Fault Simulator

    Science.gov (United States)

    Sheldon, Douglas J.; McClure, Tucker

    2013-01-01

Spacecraft are known to experience significant memory part-related failures and problems, both pre- and post-launch. These memory parts include both static and dynamic memories (SRAM and DRAM). The failures manifest themselves in a variety of ways, such as pattern-sensitive failures, timing-sensitive failures, etc. Because of the mission-critical role memory devices play in spacecraft architecture and operation, understanding their failure modes is vital to successful mission operation. To support this need, a generic simulation tool that can model different data patterns in conjunction with variable write and read conditions was developed. The tool is a mathematical and graphical way to embed pattern, electrical, and physical information to perform what-if analysis as part of a root-cause failure analysis effort.
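
    As a hedged illustration of the pattern-sensitive behavior such a tool models, the sketch below simulates a neighborhood pattern-sensitive fault in a small memory array; the array size, fault cell, and trigger pattern are invented, not taken from the tool.

```python
# Hypothetical what-if sketch of a neighborhood pattern-sensitive fault
# in a small memory array; addresses and the trigger pattern are invented.
import numpy as np

def read_array(mem, fault_cell, trigger):
    """Return a copy of mem where fault_cell reads back flipped whenever
    its 4-neighborhood matches the trigger pattern (N, S, W, E)."""
    out = mem.copy()
    r, c = fault_cell
    neighbors = (mem[r - 1, c], mem[r + 1, c], mem[r, c - 1], mem[r, c + 1])
    if neighbors == trigger:
        out[r, c] ^= 1
    return out

mem = np.zeros((5, 5), dtype=int)
mem[1, 2] = mem[3, 2] = mem[2, 1] = mem[2, 3] = 1   # write the trigger
faulty = read_array(mem, fault_cell=(2, 2), trigger=(1, 1, 1, 1))
print(faulty[2, 2])   # stored 0 reads back as 1
```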

  7. ACCELERATED SYNERGISM ALONG A FAULT: A POSSIBLE INDICATOR FOR AN IMPENDING MAJOR EARTHQUAKE

    Directory of Open Access Journals (Sweden)

    Ma Jin

    2015-09-01

It is generally accepted that crustal earthquakes are caused by sudden displacement along faults, which relies on two primary conditions. One is that the fault has a high degree of synergism, so that once the stress threshold is reached, fault segments can connect rapidly to facilitate fast slip of longer fault sections. The other is sufficient strain accumulated at some portions of the fault, which can overcome the resistance to slip of the high-strength portions of the fault. Investigation of such processes would help explore how to detect short-term and impending precursors prior to earthquakes. A laboratory simulation study on the instability of a straight fault is conducted. From curves of stress variation, the stress state of the specimen is recognized and the meta-instability stage is identified. By comparing observational information from the press machine with physical parameters of the fields on the sample, this work reveals differences in the temporal-spatial evolution of fault stress during the stages of stress deviating from linearity and of meta-instability. The results show that due to interaction between distinct portions of the fault, their independent activities turn gradually into a synergetic activity, and the degree of such synergism is an indicator of the stress state of the fault. This synergetic process of fault activity includes three stages: generation, expansion, and growth in the number of strain-release patches, and connection between them. The first stage begins when the stress curve deviates from linearity: different strain variations occur at different portions of the fault, resulting in isolated areas of stress release and strain accumulation. The second stage is associated with the quasi-static instability of the early meta-instability, when isolated strain-release areas of the fault increase and stable expansion proceeds. The third stage corresponds to the late meta-instability, i.e. quasi-dynamic instability.

  8. DEM simulation of growth normal fault slip

    Science.gov (United States)

    Chu, Sheng-Shin; Lin, Ming-Lang; Nien, Wie-Tung; Chan, Pei-Chen

    2014-05-01

Slip of a fault can deform shallower soil layers and destroy infrastructure. The Shanchiao fault, on the west side of the Taipei basin, is categorized as active; its activity will deform the Quaternary sediments underneath the Taipei basin, damaging structures, traffic construction, and utility lines within the area. Geological drilling and dating data indicate that the Shanchiao fault is a growth fault. In the experiment, a sand box model was built with non-cohesive sand to simulate the growth fault within the Shanchiao fault and to forecast the extent of shear band development and differential ground deformation. The results of the experiment showed that when a normal fault contains a growth fault, the shear band at the offset of the base rock develops upward along the weak side of the shear band of the original top soil layer, and this shear band reaches the surface much faster than in the single-top-layer case; the offset ratio required (basement slip / lower top soil thickness) is only about 1/3 of that of a single cover soil layer. A numerical simulation of the sand box experiment was then conducted with a Discrete Element Method program, PFC2D, to model the pace and extent of shear band development in the upper sand layer during normal growth fault slip. The simulation results are very close to the outcome of the sand box experiment, and the approach can be extended to the design of water pipeline projects around fault zones in the future. Keywords: Taipei Basin, Shanchiao fault, growth fault, PFC2D

  9. Numerical Simulation Study of the Sanchiao Fault Earthquake Scenarios

    Science.gov (United States)

    Wang, Yi-Min; Lee, Shiann-Jong

    2015-04-01

The Sanchiao fault is the western boundary fault of the Taipei basin in northern Taiwan, close to the densely populated Taipei metropolitan area. A recent study indicated that about 40 km of the fault trace extends into the marine area offshore northern Taiwan. Combining the marine and terrestrial parts, the total length of the Sanchiao fault could be nearly 70 kilometers, which implies that this fault has the potential to produce a large earthquake. In this study, we analyze several Sanchiao fault earthquake scenarios based on the recipe for predicting strong ground motion. The characterized source parameters include fault length, rupture area, seismic moment, asperity, and slip pattern on the fault plane. According to the characterized source model, the Sanchiao fault is inferred to have the potential to produce an earthquake with moment magnitude (Mw) larger than 7.0. Three-dimensional seismic simulations based upon the spectral-element method (SEM) indicate that peak ground acceleration (PGA) is significantly stronger along the fault trace. The basin effect also plays an important role: waves propagating in the Taipei basin are amplified, and the shaking is prolonged for a very long time. Among all rupture scenarios, rupture propagating from north to south is the most severe. Owing to rupture directivity as well as basin effects, large PGA (>1 g) was observed in the Taipei basin, especially on the northwest side. These scenario earthquake simulations will provide important physically based numerical data for earthquake mitigation and seismic hazard assessment.

  10. Rare event simulation for dynamic fault trees

    NARCIS (Netherlands)

    Ruijters, Enno Jozef Johannes; Reijsbergen, D.P.; de Boer, Pieter-Tjerk; Stoelinga, Mariëlle Ida Antoinette

    2017-01-01

Fault trees (FT) are a popular industrial method for reliability engineering, for which Monte Carlo simulation is an important technique to estimate common dependability metrics, such as the system reliability and availability. A severe drawback of Monte Carlo simulation is that the number of …
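
    Rare-event estimation for fault trees is commonly accelerated with importance sampling; the sketch below estimates the failure probability of a two-component AND gate by sampling from inflated failure rates and reweighting with the likelihood ratio. The rates, mission time, and tilt factor are invented, and the paper's own scheme for dynamic fault trees may differ.

```python
# Illustrative importance-sampling sketch for a tiny "AND-gate" fault
# tree (system fails only if both components fail before time T);
# rates and the tilted sampling distribution are invented.
import math
import random

LAM = (1e-4, 2e-4)     # true component failure rates
T = 10.0               # mission time
TILT = 100.0           # sample from rates inflated by this factor

def estimate(n=100_000, seed=1):
    random.seed(seed)
    total = 0.0
    for _ in range(n):
        w = 1.0
        failed = True
        for lam in LAM:
            t = random.expovariate(lam * TILT)   # biased draw
            # likelihood ratio: true exponential density / biased density
            w *= (lam * math.exp(-lam * t)) / (lam * TILT * math.exp(-lam * TILT * t))
            failed = failed and (t < T)
        total += w if failed else 0.0
    return total / n

exact = (1 - math.exp(-LAM[0] * T)) * (1 - math.exp(-LAM[1] * T))
print(estimate(), exact)   # the two should agree closely
```

    Plain Monte Carlo would need on the order of 1/exact samples to see even one system failure; the tilted sampler hits the event often and corrects for the bias through the weights.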

  11. Hardware-Accelerated Simulated Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Laney, D; Callahan, S; Max, N; Silva, C; Langer, S; Frank, R

    2005-08-04

    We present the application of hardware accelerated volume rendering algorithms to the simulation of radiographs as an aid to scientists designing experiments, validating simulation codes, and understanding experimental data. The techniques presented take advantage of 32-bit floating point texture capabilities to obtain solutions to the radiative transport equation for X-rays. The hardware accelerated solutions are accurate enough to enable scientists to explore the experimental design space with greater efficiency than the methods currently in use. An unsorted hexahedron projection algorithm is presented for curvilinear hexahedral meshes that produces simulated radiographs in the absorption-only regime. A sorted tetrahedral projection algorithm is presented that simulates radiographs of emissive materials. We apply the tetrahedral projection algorithm to the simulation of experimental diagnostics for inertial confinement fusion experiments on a laser at the University of Rochester.

  12. AESS: Accelerated Exact Stochastic Simulation

    Science.gov (United States)

    Jenkins, David D.; Peterson, Gregory D.

    2011-12-01

The Stochastic Simulation Algorithm (SSA) developed by Gillespie provides a powerful mechanism for exploring the behavior of chemical systems with small species populations or with important noise contributions. Gene circuit simulations for systems biology commonly employ the SSA method, as do ecological applications. This algorithm tends to be computationally expensive, so researchers seek an efficient implementation of SSA. In this program package, the Accelerated Exact Stochastic Simulation Algorithm (AESS) contains optimized implementations of Gillespie's SSA that improve the performance of individual simulation runs or of ensembles of simulations used for sweeping parameters or for providing statistically significant results.

    Program summary:
    Program title: AESS
    Catalogue identifier: AEJW_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJW_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: University of Tennessee copyright agreement
    No. of lines in distributed program, including test data, etc.: 10 861
    No. of bytes in distributed program, including test data, etc.: 394 631
    Distribution format: tar.gz
    Programming language: C for processors, CUDA for NVIDIA GPUs
    Computer: Developed and tested on various x86 computers and NVIDIA C1060 Tesla and GTX 480 Fermi GPUs. The system targets x86 workstations, optionally with multicore processors or NVIDIA GPUs as accelerators.
    Operating system: Tested under Ubuntu Linux OS and CentOS 5.5 Linux OS
    Classification: 3, 16.12
    Nature of problem: Simulation of chemical systems, particularly with low species populations, can be accurately performed using Gillespie's method of stochastic simulation. Numerous variations on the original stochastic simulation algorithm have been developed, including approaches that produce results with statistics that exactly match the chemical master equation (CME) as well as other approaches that approximate the CME. Solution …
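
    Gillespie's direct method, which AESS optimizes, can be stated compactly. The sketch below simulates an invented A → B isomerization and is not code from the AESS package.

```python
# A compact direct-method SSA in the spirit of Gillespie's algorithm,
# shown for a single invented reaction A -> B with rate constant k.
import random

def ssa_direct(a0, k, t_end, seed=0):
    """Simulate A -> B; returns the (time, count of A) trajectory."""
    random.seed(seed)
    t, a = 0.0, a0
    traj = [(t, a)]
    while a > 0:
        propensity = k * a
        t += random.expovariate(propensity)   # exponential waiting time
        if t > t_end:
            break
        a -= 1                                # fire the only reaction
        traj.append((t, a))
    return traj

traj = ssa_direct(a0=100, k=0.1, t_end=50.0)
print(traj[-1])   # final (time, molecules of A)
```

    With more than one reaction channel, the direct method additionally draws a second uniform to pick which reaction fires in proportion to its propensity; that selection step is where implementations such as AESS invest their optimization effort.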

  13. Dynamic fault simulation of wind turbines using commercial simulation tools

    DEFF Research Database (Denmark)

    Lund, Torsten; Eek, Jarle; Uski, Sanna

    2005-01-01

This paper compares the commercial simulation tools PSCAD/EMTDC, PowerFactory, SIMPOW and PSS/E for analysing fault sequences defined in the Danish grid code requirements for wind turbines connected at a voltage level below 100 kV. Both symmetrical and unsymmetrical faults are analysed. The deviations between the tools, and the reasons for those deviations, are stated. The simulation models are implemented using the built-in library components of the simulation tools, with the exception of the mechanical drive-train model, which had to be user-modeled in PowerFactory and PSS/E.

  14. 3D simulation of near-fault strong ground motion:comparison between surface rupture fault and buried fault

    Institute of Scientific and Technical Information of China (English)

    Liu Qifang; Yuan Yifan; Jin Xing

    2007-01-01

In this paper, near-fault strong ground motions caused by a surface rupture fault (SRF) and a buried fault (BF) are numerically simulated and compared using a time-space-decoupled, explicit finite element method combined with the multi-transmitting formula (MTF) of an artificial boundary. Prior to the comparison, verification of the explicit element method and the MTF is conducted. The comparison results show that the final dislocation of the SRF is larger than that of the BF for the same stress drop on the fault plane. The maximum final dislocation occurs on the upper fault line for the SRF; for the BF, however, the maximum final dislocation is located on the central part of the fault. Meanwhile, the PGA, PGV and PGD of long-period ground motions (≤ 1 Hz) generated by the SRF are much higher than those of the BF in the near-fault region, and the peak value of the velocity pulse generated by the SRF is also higher. Furthermore, it is found that in a very narrow region along the fault trace, ground motions caused by the SRF are much higher than those caused by the BF. These results may explain why SRFs almost always cause heavy damage in near-fault regions compared to buried faults.

  15. Fault diagnosis based on continuous simulation models

    Science.gov (United States)

    Feyock, Stefan

    1987-01-01

    The results are described of an investigation of techniques for using continuous simulation models as basis for reasoning about physical systems, with emphasis on the diagnosis of system faults. It is assumed that a continuous simulation model of the properly operating system is available. Malfunctions are diagnosed by posing the question: how can we make the model behave like that. The adjustments that must be made to the model to produce the observed behavior usually provide definitive clues to the nature of the malfunction. A novel application of Dijkstra's weakest precondition predicate transformer is used to derive the preconditions for producing the required model behavior. To minimize the size of the search space, an envisionment generator based on interval mathematics was developed. In addition to its intended application, the ability to generate qualitative state spaces automatically from quantitative simulations proved to be a fruitful avenue of investigation in its own right. Implementations of the Dijkstra transform and the envisionment generator are reproduced in the Appendix.

  16. FPGA-accelerated simulation of computer systems

    CERN Document Server

Angepat, Hari; Chung, Eric S; Hoe, James C

    2014-01-01

To date, the most common simulators of computer systems are software-based, running on standard computers. One promising approach to improving simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field programmable gate arrays (FPGAs). This manuscript describes various approaches to using FPGAs to accelerate software-implemented simulation of computer systems, along with selected simulators that incorporate those techniques. More precisely, we describe a simulation architecture taxonomy that incorporates a simulation architecture specifically designed f…

  17. Monte Carlo simulations and benchmark studies at CERN's accelerator chain

    CERN Document Server

    AUTHOR|(CDS)2083190; Brugger, Markus

    2016-01-01

Mixed particle and energy radiation fields present at the Large Hadron Collider (LHC) and its accelerator chain are responsible for failures of electronic devices located in the vicinity of the accelerator beam lines. These radiation effects on electronics and, more generally, the overall radiation damage issues have a direct impact on component and system lifetimes, as well as on maintenance requirements and on radiation exposure to personnel who have to intervene and fix existing faults. The radiation environments and respective radiation damage issues along CERN's accelerator chain were studied in the framework of the CERN Radiation to Electronics (R2E) project and are presented here. The important interplay between Monte Carlo simulations and radiation monitoring is also highlighted.

  18. A Fault Sample Simulation Approach for Virtual Testability Demonstration Test

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yong; QIU Jing; LIU Guanjun; YANG Peng

    2012-01-01

Virtual testability demonstration testing has many advantages, such as low cost, high efficiency, low risk and few restrictions, and it brings new requirements to fault sample generation. A fault sample simulation approach for virtual testability demonstration tests, based on stochastic process theory, is proposed. First, the similarities and differences in fault sample generation between physical and virtual testability demonstration tests are discussed. Second, it is pointed out that the fault occurrence process subject to perfect repair is a renewal process. Third, the interarrival time distribution function of the next fault event is given, and the steps and flowcharts of fault sample generation are introduced. The number of faults and their occurrence times are obtained by statistical simulation. Finally, experiments are carried out on a stable tracking platform. Because a variety of life distributions and maintenance modes are considered and some assumptions are removed, the size and structure of the simulated fault samples are closer to actual results and more reasonable. The proposed method can effectively guide fault injection in virtual testability demonstration tests.
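
    The renewal-process step can be sketched directly: under perfect repair, interarrival times are i.i.d. draws from the chosen life distribution, accumulated until the demonstration window is exhausted. The Weibull parameters and window below are invented placeholders, not values from the paper.

```python
# Sketch of renewal-process fault-sample generation; the Weibull
# shape/scale values and the demonstration window are invented.
import random

def fault_times(shape, scale, t_demo, seed=42):
    """Perfect repair => interarrival times are i.i.d. Weibull draws;
    accumulate them until the demonstration window t_demo is exhausted."""
    random.seed(seed)
    t, times = 0.0, []
    while True:
        # random.weibullvariate takes (scale alpha, shape beta)
        t += random.weibullvariate(scale, shape)
        if t > t_demo:
            return times
        times.append(t)

sample = fault_times(shape=1.5, scale=200.0, t_demo=2000.0)
print(len(sample), sample[:3])   # number of faults and first occurrence times
```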

  19. Modeling and Fault Simulation of Propellant Filling System

    Science.gov (United States)

    Jiang, Yunchun; Liu, Weidong; Hou, Xiaobo

    2012-05-01

    Propellant filling system is one of the key ground plants in launching site of rocket that use liquid propellant. There is an urgent demand for ensuring and improving its reliability and safety, and there is no doubt that Failure Mode Effect Analysis (FMEA) is a good approach to meet it. Driven by the request to get more fault information for FMEA, and because of the high expense of propellant filling, in this paper, the working process of the propellant filling system in fault condition was studied by simulating based on AMESim. Firstly, based on analyzing its structure and function, the filling system was modular decomposed, and the mathematic models of every module were given, based on which the whole filling system was modeled in AMESim. Secondly, a general method of fault injecting into dynamic system was proposed, and as an example, two typical faults - leakage and blockage - were injected into the model of filling system, based on which one can get two fault models in AMESim. After that, fault simulation was processed and the dynamic characteristics of several key parameters were analyzed under fault conditions. The results show that the model can simulate effectively the two faults, and can be used to provide guidance for the filling system maintain and amelioration.
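
    The leakage-injection idea can be shown on a deliberately tiny lumped model (a single tank with constant inflow) rather than the AMESim model used in the paper; the equation, parameters, and fault coefficient below are all invented for illustration.

```python
# Toy illustration of injecting a leakage fault into a lumped filling
# model: dh/dt = q_in - leak_coeff * h, Euler-integrated.
def fill_level(t_end, q_in=1.0, leak_coeff=0.0, dt=0.01):
    """Return the fill level at t_end; leak_coeff = 0 is the nominal case."""
    h, t = 0.0, 0.0
    while t < t_end:
        h += (q_in - leak_coeff * h) * dt   # forward Euler step
        t += dt
    return h

nominal = fill_level(5.0)                  # no fault injected
leaky = fill_level(5.0, leak_coeff=0.3)    # leakage fault injected
print(nominal, leaky)   # the faulty run fills noticeably less
```

    A blockage fault would be injected analogously by reducing `q_in` over some interval; comparing the faulty and nominal trajectories is the kind of what-if output an FMEA study consumes.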

  20. Automated Bearing Fault Diagnosis Using 2D Analysis of Vibration Acceleration Signals under Variable Speed Conditions

    Directory of Open Access Journals (Sweden)

    Sheraz Ali Khan

    2016-01-01

Traditional fault diagnosis methods of bearings detect characteristic defect frequencies in the envelope power spectrum of the vibration signal. These defect frequencies depend upon the inherently nonstationary shaft speed. Time-frequency and subband signal analysis of vibration signals has been used to deal with random variations in speed, whereas design variations require retraining a new instance of the classifier for each operating speed. This paper presents an automated approach for fault diagnosis in bearings based upon the 2D analysis of vibration acceleration signals under variable speed conditions. Images created from the vibration signals exhibit unique textures for each fault, which show minimal variation with shaft speed. Microtexture analysis of these images is used to generate distinctive fault signatures for each fault type, which can be used to detect those faults at different speeds. A k-nearest neighbor classifier trained using fault signatures generated for one operating speed is used to detect faults at all the other operating speeds. The proposed approach is tested on the bearing fault dataset of Case Western Reserve University, and the results are compared with those of a spectrum imaging-based approach.
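
    A hedged sketch of the signal-to-image step and a 1-nearest-neighbor match follows; the signals and labels are synthetic, and the plain intensity histogram below is a crude stand-in for the paper's microtexture features.

```python
# Synthetic sketch: fold a 1-D vibration record into a 2-D grayscale
# image, derive a simple signature, and match it with 1-NN.
import numpy as np

def to_image(signal, width=32):
    """Normalize a 1-D record to [0, 255] and fold it row by row."""
    n = (len(signal) // width) * width
    s = np.asarray(signal[:n], dtype=float)
    s = 255.0 * (s - s.min()) / (s.max() - s.min() + 1e-12)
    return s.reshape(-1, width)

def signature(img):
    """Crude stand-in for microtexture features: a normalized
    histogram of pixel intensities."""
    h, _ = np.histogram(img, bins=16, range=(0, 255))
    return h / h.sum()

rng = np.random.default_rng(0)
healthy = signature(to_image(rng.normal(0, 1, 2048)))
faulty = signature(to_image(rng.normal(0, 1, 2048) + 3 * np.sin(np.arange(2048))))
query = signature(to_image(rng.normal(0, 1, 2048) + 3 * np.sin(np.arange(2048))))
label = min((("healthy", healthy), ("faulty", faulty)),
            key=lambda kv: np.linalg.norm(query - kv[1]))[0]
print(label)
```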

  1. Hybrid Simulations of Particle Acceleration at Shocks

    CERN Document Server

    Caprioli, Damiano

    2014-01-01

    We present the results of large hybrid (kinetic ions - fluid electrons) simulations of particle acceleration at non-relativistic collisionless shocks. Ion acceleration efficiency and magnetic field amplification are investigated in detail as a function of shock inclination and strength, and compared with predictions of diffusive shock acceleration theory, for shocks with Mach number up to 100. Moreover, we discuss the relative importance of resonant and Bell's instability in the shock precursor, and show that diffusion in the self-generated turbulence can be effectively parametrized as Bohm diffusion in the amplified magnetic field.

  2. Surface roughness evolution on experimentally simulated faults

    Science.gov (United States)

    Renard, François; Mair, Karen; Gundersen, Olav

    2012-12-01

To investigate the physical processes operating in active fault zones, we conduct analogue laboratory experiments where we track the morphological and mechanical evolution of an interface during slip. Our laboratory friction experiments consist of a halite (NaCl) slider held under constant normal load that is dragged across a coarse sandpaper substrate. This set-up is a surrogate for a fault surface, where brittle and plastic deformation mechanisms operate simultaneously during sliding. Surface morphology evolution, frictional resistance and infra-red emission are recorded with cumulative slip. After experiments, we characterize the roughness developed on slid surfaces, to nanometer resolution, using white light interferometry. We directly observe the formation of deformation features, such as slip parallel linear striations, as well as deformation products or gouge. The striations are often associated with marginal ridges of positive relief suggesting sideways transport of gouge products in the plane of the slip surface in a snow-plough-like fashion. Deeper striations are commonly bounded by triangular brittle fractures that fragment the salt surface and efficiently generate a breccia or gouge. Experiments with an abundance of gouge at the sliding interface have reduced shear resistance compared to bare surfaces and we show that friction is reduced with cumulative slip as gouge accumulates from initially bare surfaces. The relative importance of these deformation mechanisms may influence gouge production rate, fault surface roughness evolution, as well as mechanical behavior. Finally, our experimental results are linked to Nature by comparing the experimental surfaces to an actual fault surface, whose striated morphology has been characterized to centimeter resolution using a laser scanner. It is observed that both the stress field and the energy dissipation are heterogeneous at all scales during the maturation of the interface with cumulative slip. Importantly …

  3. High Frequency Ground Motion from Finite Fault Rupture Simulations

    Science.gov (United States)

    Crempien, Jorge G. F.

There are many tectonically active regions on Earth with few or no recorded ground motions. The Eastern United States is a typical example of a region with active faults but low-to-medium seismicity that has prevented sufficient ground motion recordings. Because of this, it is necessary to use synthetic ground motion methods to estimate the earthquake hazard of a region. Ground motion prediction equations for spectral acceleration typically have geometric attenuation proportional to the inverse of distance from the fault. Earthquakes simulated with one-dimensional layered earth models have larger geometric attenuation than observed ground motion recordings. We show that as incident angles of rays increase at welded boundaries between homogeneous flat layers, the transmitted rays decrease dramatically in amplitude. As the receiver distance from the source increases, the angle of incidence of up-going rays increases, producing negligible transmitted ray amplitude and thus increasing the geometrical attenuation. To work around this problem we propose a model in which we separate wave propagation into low and high frequencies at a crossover frequency, typically 1 Hz. The high-frequency portion of strong ground motion is computed with a homogeneous half-space and amplified with the available, more complex one- or three-dimensional crustal models using the quarter-wavelength method. We also make use of seismic coda energy density observations as scattering impulse response functions, which we incorporate into our Green's functions by convolving the high-frequency homogeneous half-space Green's functions with normalized synthetic scatterograms to reproduce scattering effects in recorded seismograms. This method was validated against ground motion from earthquakes recorded in California and Japan, yielding results that capture the duration and spectral response of strong ground motion.
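
    The crossover-frequency idea can be sketched as a complementary pair of filters in the frequency domain; the signals, the brick-wall filtering, and the 1 Hz crossover below are invented simplifications, and the author's actual combination scheme may differ.

```python
# Sketch of blending low- and high-frequency synthetics at a crossover
# frequency (here 1 Hz); signals are invented single-tone examples.
import numpy as np

def combine(low, high, dt, fc=1.0):
    """Keep `low` below fc and `high` above fc (brick-wall for brevity),
    then return the broadband time series."""
    f = np.fft.rfftfreq(len(low), dt)
    spec = np.where(f <= fc, np.fft.rfft(low), np.fft.rfft(high))
    return np.fft.irfft(spec, n=len(low))

dt = 0.01
t = np.arange(0, 10, dt)
low = np.sin(2 * np.pi * 0.5 * t)        # 0.5 Hz content only
high = 0.3 * np.sin(2 * np.pi * 5 * t)   # 5 Hz content only
broadband = combine(low, high, dt)
```

    Because each input here carries energy on only one side of the crossover, the blend simply reproduces their sum; with real synthetics, tapered filters would replace the brick wall to avoid ringing.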

  4. Accelerated Stochastic Simulation of Large Chemical Systems

    Institute of Scientific and Technical Information of China (English)

    CHEN Xiao; AO Ling

    2007-01-01

For efficient simulation of chemical systems with a large number of reactions, we report a fast and exact algorithm for direct simulation of chemical discrete Markov processes. The approach adopts the scheme of organizing the reactions into hierarchical groups. By generating a random number, the selection of the next reaction that actually occurs is accomplished by a few successive selections within the hierarchical groups. The algorithm, which is suited to simulating systems with a large number of reactions, is much faster than the direct method or the optimized direct method. As a demonstration of its efficiency, the accelerated algorithm is applied to simulate the reaction-diffusion Brusselator model on a discretized space.
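
    The hierarchical-group selection can be sketched as a two-level linear search: one random number first picks a group by its summed propensity, then a reaction inside it. The groups and propensity values below are invented.

```python
# Sketch of two-level reaction selection over hierarchical groups;
# the group layout and propensities are invented.
import random

def pick_reaction(groups, rng):
    """groups: list of lists of propensities; returns (group, index).
    Cost is O(#groups + group size) rather than O(#reactions)."""
    sums = [sum(g) for g in groups]
    r = rng.random() * sum(sums)
    gi = 0                        # first level: walk the group sums
    while r >= sums[gi]:
        r -= sums[gi]
        gi += 1
    ri = 0                        # second level: walk inside the group
    while r >= groups[gi][ri]:
        r -= groups[gi][ri]
        ri += 1
    return gi, ri

groups = [[1.0, 2.0], [3.0, 4.0]]
rng = random.Random(7)
counts = {}
for _ in range(20000):
    k = pick_reaction(groups, rng)
    counts[k] = counts.get(k, 0) + 1
print(counts)   # selection frequencies roughly proportional to 1:2:3:4
```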

  5. Kinetic Simulations of Particle Acceleration at Shocks

    Energy Technology Data Exchange (ETDEWEB)

    Caprioli, Damiano [Princeton University; Guo, Fan [Los Alamos National Laboratory

    2015-07-16

    Collisionless shocks are mediated by collective electromagnetic interactions and are sources of non-thermal particles and emission. The full particle-in-cell approach and a hybrid approach are sketched, simulations of collisionless shocks are shown using a multicolor presentation. Results for SN 1006, a case involving ion acceleration and B field amplification where the shock is parallel, are shown. Electron acceleration takes place in planetary bow shocks and galaxy clusters. It is concluded that acceleration at shocks can be efficient: >15%; CRs amplify B field via streaming instability; ion DSA is efficient at parallel, strong shocks; ions are injected via reflection and shock drift acceleration; and electron DSA is efficient at oblique shocks.

  6. Accelerating Climate Simulations Through Hybrid Computing

    Science.gov (United States)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPUs) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for the connection, we identified two challenges: (1) an identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors and two IBM QS22 Cell blades, connected with InfiniBand), allowing compute-intensive functions to be seamlessly offloaded to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approx. 10% network overhead.

  7. GPU Accelerated Surgical Simulators for Complex Morphology

    DEFF Research Database (Denmark)

    Mosegaard, Jesper; Sørensen, Thomas Sangild

    2005-01-01

    a spring-mass system in order to simulate a complex organ such as the heart. Computations are accelerated by taking advantage of modern graphics processing units (GPUs). Two GPU implementations are presented. They vary in their generality of spring connections and in the speedup factor they achieve...

  8. Fuzzy delay model based fault simulator for crosstalk delay fault test generation in asynchronous sequential circuits

    Indian Academy of Sciences (India)

    S Jayanthy; M C Bhuvaneswari

    2015-02-01

    In this paper, a fuzzy delay model based crosstalk delay fault simulator is proposed. As design trends move towards nanometer technologies, an increasing number of new parameters affects the delay of a component. Fuzzy delay models are well suited to modelling the uncertainty introduced in the design and manufacturing steps. The fault simulator based on fuzzy delay detects unstable states, oscillations and non-confluence of settling states in asynchronous sequential circuits. The fuzzy delay model based fault simulator is used to validate the test patterns produced by an Elitist Non-dominated sorting Genetic Algorithm (ENGA) based test generator for detecting crosstalk delay faults in asynchronous sequential circuits. The multi-objective genetic algorithm ENGA targets two objectives: maximizing fault coverage and minimizing the number of transitions. Experimental results are tabulated for SIS benchmark circuits for three gate delay models, namely the unit delay model, the rise/fall delay model and the fuzzy delay model. They indicate that test validation using the fuzzy delay model is more accurate than with the unit delay model or the rise/fall delay model.
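    The fuzzy-delay idea above can be illustrated with triangular fuzzy numbers. The sketch below assumes a (min, mode, max) triangular representation and simple component-wise accumulation along a path; the representation, function names and numbers are illustrative assumptions, not the paper's exact model.

```python
# Triangular fuzzy delay: (min, mode, max) in ns. Assumed representation,
# not the paper's exact fuzzy-delay formulation.

def fuzzy_add(a, b):
    """Component-wise sum of two triangular fuzzy delays."""
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def path_delay(gate_delays):
    """Accumulate fuzzy gate delays along a path."""
    total = (0.0, 0.0, 0.0)
    for d in gate_delays:
        total = fuzzy_add(total, d)
    return total

def may_violate(path, clock_period):
    """Pessimistic timing check: the path *may* fail if its worst-case
    (max) delay exceeds the clock period."""
    return path[2] > clock_period

# Two gates whose delays are uncertain due to crosstalk and process spread
p = path_delay([(0.8, 1.0, 1.3), (0.9, 1.2, 1.6)])
print(may_violate(p, 2.5))  # worst case ~2.9 ns > 2.5 ns -> True
```

    A fault is then flagged only when even the fuzzy worst case cannot rule out a violation, which is how a fuzzy model avoids the over-optimism of a single fixed unit delay.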

  9. Accelerated simulation methods for plasma kinetics

    Science.gov (United States)

    Caflisch, Russel

    2016-11-01

    Collisional kinetics is a multiscale phenomenon due to the disparity between the continuum (fluid) and the collisional (particle) length scales. This paper describes a class of simulation methods for gases and plasmas, and acceleration techniques for improving their speed and accuracy. Starting from the Landau-Fokker-Planck equation for plasmas, the focus is on a binary collision model that is solved using a Direct Simulation Monte Carlo (DSMC) method. Acceleration of this method is achieved by coupling the particle method to a continuum fluid description. The velocity distribution function f is represented as a combination of a Maxwellian M (the thermal component) and a set of discrete particles fp (the kinetic component). For systems that are close to (local) equilibrium, this reduces the number N of simulated particles required to represent f for a given level of accuracy. We present two methods for exploiting this representation. In the first method, equilibration of particles in fp, as well as disequilibration of particles from M, due to the collision process, is represented by a thermalization/dethermalization step that employs an entropy criterion. Efficiency of the representation is greatly increased by the inclusion of particles with negative weights. This significantly complicates the simulation, but the second method is a tractable approach for negatively weighted particles. The accelerated simulation method is compared with the standard PIC-DSMC method for both spatially homogeneous problems, such as the bump-on-tail instability, and inhomogeneous problems, such as nonlinear Landau damping.
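    The thermal/kinetic splitting described above can be sketched in a few lines. In this toy version (unit mass and temperature), particles whose speed falls within k standard deviations of the Maxwellian are absorbed into the thermal component; the threshold rule is a crude stand-in for the paper's entropy criterion, and all names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def thermalize(particles, w_m, T, k=2.0):
    """Absorb near-equilibrium particles (1D velocities) into a Maxwellian
    of weight w_m and temperature T; return the remaining kinetic tail."""
    sigma = np.sqrt(T)                     # thermal speed (mass = k_B = 1)
    close = np.abs(particles) < k * sigma  # crude near-equilibrium test
    w_m += np.count_nonzero(close)         # their weight joins the Maxwellian
    return particles[~close], w_m

bulk = rng.normal(0.0, 1.0, 1000)          # particles already near equilibrium
tail = np.array([5.0, -6.0, 7.5])          # genuinely kinetic particles
particles, w_m = thermalize(np.concatenate([bulk, tail]), w_m=0.0, T=1.0)
# Most of the 1003 particles are absorbed; only the tail (plus a few
# 2-sigma outliers of the bulk) remain as discrete particles.
```

    The payoff is exactly the one the abstract describes: near equilibrium, almost all the weight lives in the cheap Maxwellian and only a small kinetic remainder needs explicit particles.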

  10. Near-fault ground motions with prominent acceleration pulses: pulse characteristics and ductility demand

    Institute of Scientific and Technical Information of China (English)

    Mai Tong; Vladimir Rzhevsky; Dai Junwu; George C Lee; Qi Jincheng; Qi Xiaozhai

    2007-01-01

    Major earthquakes of the last 15 years (e.g., Northridge 1994, Kobe 1995 and Chi-Chi 1999) have shown that many near-fault ground motions possess prominent acceleration pulses. Some of these prominent ground acceleration pulses are related to large ground velocity pulses; others are caused by mechanisms entirely different from those producing the velocity pulses or fling steps. Various efforts to model acceleration pulses have been reported in the literature. In this paper, research results from a recent study of acceleration-pulse-prominent ground motions and an analysis of structural damage induced by acceleration pulses are summarized. The main results of the study include: (1) temporal characteristics of acceleration pulses; (2) the ductility demand spectrum of simple acceleration pulses with respect to equivalent classes of dynamic systems and pulse characteristic parameters; and (3) estimation of the fundamental period change under the excitation of strong acceleration pulses. By using the acceleration-pulse-induced linear acceleration spectrum and the ductility demand spectrum, a simple procedure has been developed to estimate the ductility demand and the fundamental period change of a reinforced concrete (RC) structure under the impact of a strong acceleration pulse.
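    The notion of a ductility demand under an acceleration pulse can be sketched numerically: integrate an elastic-perfectly-plastic single-degree-of-freedom oscillator driven by a half-sine pulse and report mu = u_max / u_y. The parameter values, pulse shape and semi-implicit Euler integrator below are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def ductility_demand(Tn=0.5, zeta=0.05, fy=1.5, ap=5.0, tp=0.4,
                     dt=1e-3, t_end=4.0):
    """Ductility demand mu = u_max / u_y of an elastic-perfectly-plastic
    SDOF oscillator (unit mass) under a half-sine acceleration pulse."""
    wn = 2 * np.pi / Tn
    k = wn ** 2                    # stiffness for unit mass
    c = 2 * zeta * wn              # viscous damping coefficient
    uy = fy / k                    # yield displacement
    u = v = up = 0.0               # displacement, velocity, plastic offset
    umax = 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        ag = ap * np.sin(np.pi * t / tp) if t < tp else 0.0  # pulse input
        f = k * (u - up)           # restoring force, capped at +/- fy
        if abs(f) > fy:            # yield: shift the plastic offset
            up = u - np.sign(f) * uy
            f = np.sign(f) * fy
        a = -ag - c * v - f        # equation of motion (unit mass)
        v += a * dt                # semi-implicit Euler step
        u += v * dt
        umax = max(umax, abs(u))
    return umax / uy

mu = ductility_demand()  # > 1: this pulse drives the system well past yield
```

    Sweeping the pulse amplitude or period in such a loop is what builds up a ductility demand spectrum of the kind described above.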

  11. Simulations for Plasma and Laser Acceleration

    Science.gov (United States)

    Vay, Jean-Luc; Lehe, Rémi

    Computer simulations have had a profound impact on the design and understanding of past and present plasma acceleration experiments, and will be a key component for turning plasma accelerators from a promising technology into a mainstream scientific tool. In this article, we present an overview of the numerical techniques used with the most popular approaches to model plasma-based accelerators: electromagnetic particle-in-cell, quasistatic and ponderomotive guiding center. The material that is presented is intended to serve as an introduction to the basics of those approaches, and to advances (some of them very recent) that have pushed the state of the art, such as the optimal Lorentz-boosted frame, advanced laser envelope solvers and the elimination of numerical Cherenkov instability. The particle-in-cell method, which has broader interest and is more standardized, is presented in more depth. Additional topics that are cross-cutting, such as azimuthal Fourier decomposition or filtering, are also discussed, as well as potential challenges and remedies in the initialization of simulations and output of data. Examples of simulations using the techniques that are presented have been left out of this article for conciseness, and because simulation results are best understood when presented together, and contrasted with theoretical and/or experimental results, as in other articles of this volume.

  12. Accelerated GPU based SPECT Monte Carlo simulations

    Science.gov (United States)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Moreover, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor of up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.

  14. Identification of acceleration pulses in near-fault ground motion using the EMD method

    Institute of Scientific and Technical Information of China (English)

    Zhang Yushan; Hu Yuxian; Zhao Fengxin; Liang Jianwen; Yang Caihong

    2005-01-01

    In this paper, the response spectral characteristics of one-, two-, and three-lobe sinusoidal acceleration pulses are investigated, and some of their basic properties are derived. Furthermore, the empirical mode decomposition (EMD) method is utilized as an adaptive filter to decompose the near-fault pulse-like ground motions recorded during the September 20, 1999, Chi-Chi earthquake. These ground motions contain distinct velocity pulses and were decomposed into high-frequency (HF) and low-frequency (LF) components, from which the corresponding HF acceleration pulse (if present) and LF acceleration pulse could be easily identified and detected. Finally, the identified acceleration pulses are modeled by simplified sinusoidal approximations, whose dynamic behaviors are compared with those of the original acceleration pulses, as well as with those of the original HF and LF acceleration components, in the context of elastic response spectra. It is demonstrated that the acceleration pulses contained in near-fault pulse-like ground motion fundamentally dominate its impulsive dynamic behavior in an engineering sense. Such motion thus has a greater damage potential than far-field ground motions: it imposes high base-shear demands on engineering structures as well as very high deformation demands on long-period structures.
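    As a rough illustration of the HF/LF separation (EMD itself is adaptive and data-driven; the fixed moving-average filter below only mimics its effect), a synthetic pulse-like motion can be split as follows. The signal, window length and frequencies are invented for the example.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1000)                 # 10 s record, dt = 0.01 s
lf_pulse = np.sin(2 * np.pi * 0.25 * t) * np.exp(-0.5 * (t - 5.0) ** 2)
hf = 0.2 * np.sin(2 * np.pi * 5.0 * t)           # high-frequency content
motion = lf_pulse + hf                           # synthetic ground motion

win = 101                                        # ~1 s moving average
kernel = np.ones(win) / win
lf_est = np.convolve(motion, kernel, mode="same")  # low-frequency component
hf_est = motion - lf_est                           # high-frequency residual
```

    The LF estimate recovers the embedded pulse and the residual recovers the 5 Hz content; in the paper, the same separating role is played by EMD's adaptively extracted intrinsic mode functions.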

  15. Quantification and assessment of fault uncertainty and risk using stochastic conditional simulations

    Institute of Scientific and Technical Information of China (English)

    LI Shuxing; Roussos Dimitrakopoulos

    2002-01-01

    The effect of geological uncertainty on the development and mining of underground coal deposits is a key issue for longwall mining, as the presence of faults generates substantial monetary losses. This paper develops a method for the conditional simulation of fault systems and uses the method to quantify and assess fault uncertainty. The method is based on the statistical modelling of fault attributes and the simulation of the locations of the centres of the fault traces. Fault locations are generated from the thinning of a Poisson process using a spatially correlated probability field. The proposed algorithm for simulating fault traces takes into account soft data such as geological interpretations and geomechanical data. The simulations generate realisations of fault populations that reproduce observed faults, honour the statistics of the fault attributes, and respect the constraints of soft data, thereby providing the means to model and assess the related fault uncertainty.
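    The thinning construction can be sketched directly: draw a homogeneous Poisson parent process of candidate fault-trace centres, then retain each point with a probability taken from a spatially correlated field. The smooth random-cosine field and all parameter values below are illustrative stand-ins for the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(42)

def correlated_prob_field(x, y, n_modes=5):
    """Smooth retention-probability field in [0, 1] built from a few
    low-wavenumber random cosine modes (illustrative, not the paper's)."""
    p = np.zeros_like(x)
    for _ in range(n_modes):
        kx, ky = rng.uniform(0.05, 0.3, 2)   # low wavenumbers -> smooth field
        phase = rng.uniform(0.0, 2.0 * np.pi)
        p += np.cos(kx * x + ky * y + phase)
    return (p - p.min()) / (p.max() - p.min())

# Homogeneous Poisson parent process on a 100 km x 100 km region
n = rng.poisson(lam=500)
x = rng.uniform(0.0, 100.0, n)
y = rng.uniform(0.0, 100.0, n)

# Thinning: keep each candidate centre with the locally varying probability
keep = rng.uniform(0.0, 1.0, n) < correlated_prob_field(x, y)
fault_x, fault_y = x[keep], y[keep]
# Retained centres cluster where the field is high, giving spatially
# correlated fault populations rather than pure spatial randomness.
```

    Conditioning on observed faults and soft data, as the paper does, would further constrain where the probability field is high or low.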

  16. A Technique for Accelerating Injection of Transient Faults in Complex SoCs

    NARCIS (Netherlands)

    Rohani, A.; Kerkhoff, Hans G.

    2011-01-01

    This paper presents a technique for reducing the CPU time needed to perform simulation-based fault-injection experiments in complex SoCs. The technique is fully compatible with commercial HDL simulators, with no requirement to develop dedicated compilers, and can be easily applied to complex SoC models.

  17. Frictional behavior of experimental faults during a simulated seismic cycle

    Science.gov (United States)

    Spagnuolo, Elena; Nielsen, Stefan; Violay, Marie; Di Felice, Fabio; Di Toro, Giulio

    2016-04-01

    Laboratory friction studies of earthquake mechanics aim at understanding complex phenomena that either drive or characterize the seismic cycle. Previous experiments were mainly conducted on bi-axial machines imposing velocity-step conditions, where slip and slip-rate are usually less than 10 mm and 1 mm/s, respectively. However, earthquake nucleation on natural faults results from the combination of the frictional response of fault materials and wall-rock stiffness with complex loading conditions. We propose an alternative experimental approach that consists of imposing a step-wise increase in the shear stress on an experimental fault under constant normal stress. This experimental configuration allows us to investigate the relevance of spontaneous fault-surface reworking in (1) driving frictional instabilities, (2) promoting the diversity of slip events, including eventual runaway, and (3) governing weakening and re-strengthening processes during the seismic cycle. Using a rotary shear apparatus (SHIVA, INGV, Rome) with a purpose-designed control system, the shear stress acting on a simulated fault can be increased step-wise while both slip and slip-rate are allowed to evolve spontaneously (slip is effectively unbounded) to accommodate the new state of stress. This unconventional procedure, which we term "shear stress-step loading", simulates how faults react to either remote tectonic loading or a sudden seismic or strain event taking place in the vicinity of a fault patch. Our experiments show that the spontaneous slip evolution results in velocity pulses whose shape and occurrence rate are controlled by the lithology and the state of stress. With increasing shear stress and cumulative slip, the experimental fault exhibits three frictional behaviors: (1) stable behavior or individual slip pulses of up to a few cm/s for a few mm of slip coincident with the step-wise increase in shear stress; (2) unstable oscillatory slip or continuous slip but with abrupt changes

  18. Simulation of different types of faults of Northern Iraq power system

    Energy Technology Data Exchange (ETDEWEB)

    Muhammad, Aree A. [University of Salahaddin-Hawler, College of Engineering, Department of Electrical Engineering (Iraq)], e-mail: areeakram@maktoob.com

    2011-07-01

    This paper presents and analyses the results of a simulation of various defects that have been identified in Northern Iraq's power system and which need to be addressed so as to allow that system to expand. This study was done using an Ipsa simulator and Matlab software and yielded information that will be useful in the expansion of operations and strengthening of the system's capacity to deal with operational difficulties. Fault studies are important since they help identify the areas where guidance is needed for proper relay setting and coordination, for designing circuit breakers with the capacity to handle each type of fault, and for rating the protective switchgears. As this paper states, negative sequence current may cause the temperature of a rotor to rise, accelerating wear on the insulation and causing mechanical stress on the rotating components. For this reason, negative sequence current protection should be given serious consideration.

  19. Modeling and simulation of longwall scraper conveyor considering operational faults

    Science.gov (United States)

    Cenacewicz, Krzysztof; Katunin, Andrzej

    2016-06-01

    The paper provides a description of analytical model of a longwall scraper conveyor, including its electrical, mechanical, measurement and control actuating systems, as well as presentation of its implementation in the form of computer simulator in the Matlab®/Simulink® environment. Using this simulator eight scenarios typical of usual operational conditions of an underground scraper conveyor can be generated. Moreover, the simulator provides a possibility of modeling various operational faults and taking into consideration a measurement noise generated by transducers. The analysis of various combinations of scenarios of operation and faults with description is presented. The simulator developed may find potential application in benchmarking of diagnostic systems, testing of algorithms of operational control or can be used for supporting the modeling of real processes occurring in similar systems.

  20. Low footwall accelerations and variable surface rupture behavior on the Fort Sage Mountains fault, northeast California

    Science.gov (United States)

    Briggs, Richard W.; Wesnousky, Steven G.; Brune, James N.; Purvance, Matthew D.; Mahan, Shannon

    2013-01-01

    The Fort Sage Mountains fault zone is a normal fault in the Walker Lane of the western Basin and Range that produced a small surface rupture during an ML 5.6 earthquake in 1950. We investigate the paleoseismic history of the Fort Sage fault and find evidence for two paleoearthquakes with surface displacements much larger than those observed in 1950. Rupture of the Fort Sage fault ∼5.6 ka resulted in surface displacements of at least 0.8–1.5 m, implying earthquake moment magnitudes (Mw) of 6.7–7.1. An older rupture at ∼20.5 ka displaced the ground at least 1.5 m, implying an earthquake of Mw 6.8–7.1. A field of precariously balanced rocks (PBRs) is located less than 1 km from the surface-rupture trace of this Holocene-active normal fault. Ground-motion prediction equations (GMPEs) predict peak ground accelerations (PGAs) of 0.2–0.3g for the 1950 rupture and 0.3–0.5g for the ∼5.6 ka paleoearthquake one kilometer from the fault-surface trace, yet field tests indicate that the Fort Sage PBRs would be toppled by PGAs of 0.1–0.3g. We discuss the paleoseismic history of the Fort Sage fault in the context of the nearby PBRs, GMPEs, and probabilistic seismic hazard maps for extensional regimes. If the Fort Sage PBRs are older than the mid-Holocene rupture on the Fort Sage fault zone, this implies that current GMPEs may overestimate near-fault footwall ground motions at this site.

  1. Numerical and laboratory simulations of auroral acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Gunell, H.; De Keyser, J. [1Belgian Institute for Space Aeronomy, Avenue Circulaire 3, B-1180 Brussels (Belgium); Mann, I. [EISCAT Scientific Association, P.O. Box 812, SE-981 28 Kiruna, Sweden and Department of Physics, Umeå University, SE-901 87 Umeå (Sweden)

    2013-10-15

    The existence of parallel electric fields is an essential ingredient of auroral physics, leading to the acceleration of particles that give rise to the auroral displays. An auroral flux tube is modelled using electrostatic Vlasov simulations, and the results are compared to simulations of a proposed laboratory device that is meant for studies of the plasma physical processes that occur on auroral field lines. The hot magnetospheric plasma is represented by a gas discharge plasma source in the laboratory device, and the cold plasma mimicking the ionospheric plasma is generated by a Q-machine source. In both systems, double layers form with plasma density gradients concentrated on their high potential sides. The systems differ regarding the properties of ion acoustic waves that are heavily damped in the magnetosphere, where the ion population is hot, but weakly damped in the laboratory, where the discharge ions are cold. Ion waves are excited by the ion beam that is created by acceleration in the double layer in both systems. The efficiency of this beam-plasma interaction depends on the acceleration voltage. For voltages where the interaction is less efficient, the laboratory experiment is more space-like.

  2. An exact accelerated stochastic simulation algorithm

    Science.gov (United States)

    Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros

    2009-04-01

    An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present "ER-leap" algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time ER-leap offers a substantial speedup over SSA with a simulation time proportional to the 2/3 power of the number of reaction events in a Galton-Watson process.
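    For reference, the exact SSA that ER-leap accelerates can be written in a few lines. The toy network below (A -> B at rate c1*A, B -> 0 at rate c2*B) and its parameters are invented for illustration; ER-leap itself additionally brackets the multireaction probabilities and rejection-samples whole blocks of events.

```python
import numpy as np

rng = np.random.default_rng(1)

def ssa(a0=100, c1=1.0, c2=0.5, t_end=5.0):
    """Plain Gillespie SSA for A -> B -> 0; returns the final counts."""
    t, A, B = 0.0, a0, 0
    while t < t_end:
        rates = np.array([c1 * A, c2 * B])   # reaction propensities
        total = rates.sum()
        if total == 0.0:
            break                            # nothing left to fire
        t += rng.exponential(1.0 / total)    # exact waiting time to next event
        if t >= t_end:
            break
        if rng.uniform() < rates[0] / total: # pick which reaction fires
            A, B = A - 1, B + 1
        else:
            B -= 1
    return A, B

A, B = ssa()
```

    Because SSA simulates one reaction event at a time, its cost grows linearly with the event count; that is the cost ER-leap reduces to roughly the 2/3 power for the Galton-Watson case quoted above.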

  3. A fault and seismicity based composite simulation in northern California

    Directory of Open Access Journals (Sweden)

    M. B. Yıkılmaz

    2011-12-01

    We generate synthetic catalogs of seismicity in northern California using a composite simulation. The basis of the simulation is the fault based "Virtual California" (VC) earthquake simulator. Back-slip velocities and mean recurrence intervals are specified on model strike-slip faults. A catalog of characteristic earthquakes is generated for a period of 100 000 yr. These earthquakes are predominantly in the range M = 6 to M = 8, but do not follow Gutenberg-Richter (GR) scaling at lower magnitudes. In order to model seismicity on unmapped faults we introduce background seismicity which occurs randomly in time with GR scaling and is spatially associated with the VC model faults. These earthquakes fill in the GR scaling down to M = 4 (the smallest earthquakes modeled). The rate of background seismicity is constrained by the observed rate of occurrence of M > 4 earthquakes in northern California. These earthquakes are then used to drive the BASS (branching aftershock sequence) model of aftershock occurrence. The BASS model is the self-similar limit of the ETAS (epidemic type aftershock sequence) model. Families of aftershocks are generated following each Virtual California and background main shock. In the simulations the rate of occurrence of aftershocks is essentially equal to the rate of occurrence of main shocks in the magnitude range 4 < M < 7. We generate frequency-magnitude and recurrence interval statistics both regionally and fault specific. We compare our modeled rates of seismicity and spatial variability with observations.
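    The layering of Gutenberg-Richter background events and branching aftershock families can be sketched as below. The GR magnitude sampling is standard; the single-generation productivity law and every parameter value are illustrative assumptions, far simpler than the full BASS/ETAS construction.

```python
import numpy as np

rng = np.random.default_rng(7)

def gr_magnitudes(n, b=1.0, m_min=4.0):
    """Sample n magnitudes from Gutenberg-Richter: N(>m) ~ 10**(-b*m)."""
    return m_min + rng.exponential(1.0 / (b * np.log(10.0)), n)

def aftershock_family(m_main, b=1.0, m_min=4.0, dm=1.2):
    """One generation of aftershocks with a Bath-law-style productivity:
    expected count 10**(b*(m_main - dm - m_min)) (illustrative)."""
    n = rng.poisson(10.0 ** (b * (m_main - dm - m_min)))
    return gr_magnitudes(n, b, m_min)

mains = gr_magnitudes(200)                      # background main shocks
aftershocks = np.concatenate([aftershock_family(m) for m in mains])
# Larger main shocks spawn exponentially larger aftershock families, so
# the combined catalog preserves GR scaling down to m_min.
```

    The self-similar BASS model iterates this branching over generations (aftershocks of aftershocks) and adds Omori-law timing and spatial kernels, which this one-generation sketch omits.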

  4. Accelerated molecular dynamics simulations of protein folding.

    Science.gov (United States)

    Miao, Yinglong; Feixas, Ferran; Eun, Changsun; McCammon, J Andrew

    2015-07-30

    Folding of four fast-folding proteins, including chignolin, Trp-cage, the villin headpiece and the WW domain, was simulated via accelerated molecular dynamics (aMD). In comparison with conventional molecular dynamics (cMD) simulations of hundreds of microseconds performed on the Anton supercomputer, aMD captured complete folding of the four proteins in significantly shorter simulation time. The folded protein conformations were found within 0.2-2.1 Å of the native NMR or X-ray crystal structures. Free energy profiles calculated through improved reweighting of the aMD simulations using a cumulant expansion to second order are in good agreement with those obtained from cMD simulations. This allows us to identify distinct conformational states (e.g., unfolded and intermediate) other than the native structure, as well as the protein folding energy barriers. Detailed analysis of protein secondary structures and local key residue interactions provided important insights into the protein folding pathways. Furthermore, the selection of force fields and aMD simulation parameters is discussed in detail. Our work shows the usefulness and accuracy of aMD in studying protein folding, providing basic references for using aMD in future protein-folding studies.
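    The boost potential at the heart of aMD (Hamelberg et al.) is simple to write down: whenever the potential V drops below a threshold E, a smooth bias dV = (E - V)^2 / (alpha + E - V) is added, raising basins and flattening barriers. The sketch below implements that published formula; the energy values are illustrative.

```python
import numpy as np

def amd_boost(V, E, alpha):
    """aMD boost potential: dV = (E - V)**2 / (alpha + E - V) for V < E,
    and 0 otherwise (no bias above the threshold)."""
    V = np.asarray(V, dtype=float)
    diff = np.maximum(E - V, 0.0)     # zero for states above the threshold
    return diff ** 2 / (alpha + diff)

V = np.array([-120.0, -100.0, -80.0])   # potential energies (illustrative)
dV = amd_boost(V, E=-90.0, alpha=10.0)
# dV = [22.5, 5.0, 0.0]: deep basins get the largest boost, states above
# the threshold are untouched; reweighting later multiplies by exp(beta*dV).
```

    The reweighting difficulty the abstract mentions comes precisely from averaging exp(beta*dV), which the cumulant expansion to second order tames.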

  5. A Fault Evolution Model Including the Rupture Dynamic Simulation

    Science.gov (United States)

    Wu, Y.; Chen, X.

    2011-12-01

    We perform a preliminary numerical simulation of seismicity and stress evolution along a strike-slip fault in a 3D elastic half-space. Following the work of Ben-Zion (1996), the fault geometry is devised as a vertical plane about 70 km long and 17 km wide, comparable to the size of the San Andreas Fault around Parkfield. The loading mechanism is described by the "backslip" method. Fault failure is governed by a static/kinetic friction law, and induced stress transfer is calculated with Okada's static solution. In order to track rupture propagation in detail, we allow induced stress to propagate through the medium at the shear-wave velocity by introducing a distance-dependent time delay in the response to stress changes. The current simulation produces small to moderate earthquakes following the Gutenberg-Richter law and quasi-periodic characteristic large earthquakes, consistent with previous work by others. Next we will consider introducing a more realistic friction law, namely the laboratory-derived rate- and state-dependent law, which can simulate more realistic and complicated sliding behavior such as stable and unstable slip, aseismic sliding and the slip nucleation process. In addition, the long duration of aftershock sequences is expected to be reproduced with this time-dependent friction law, which is not available in the current seismicity simulation. The other difference from previous work is that we are trying to include dynamic ruptures in this study. Most previous studies of seismicity simulation are based on the static solution when dealing with failure-induced stress changes. However, numerical simulations of rupture dynamics have revealed many important details that are missing in quasi-static/quasi-dynamic simulations. For example, dynamic simulations indicate that the slip on the ground surface becomes larger if the dynamic rupture process reaches the free surface. The concentration of stress on the propagating crack
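    The static/kinetic friction law used above has a minimal analogue: a single block loaded through a spring sticks until the spring force reaches the static threshold, then slips until the force relaxes to the kinetic level. All parameters in this sketch are arbitrary; it reproduces only the quasi-periodic characteristic events, not the 3D stress transfer.

```python
def stick_slip_events(k=1.0, v=1.0, f_static=2.0, f_kinetic=1.0,
                      t_end=20.0, dt=0.01):
    """Quasi-static slider: returns (time, slip) for each failure event."""
    x, events = 0.0, []
    for i in range(int(t_end / dt)):
        t = i * dt
        force = k * (v * t - x)            # spring stretched by the loader
        if force >= f_static:              # static strength exceeded: fail
            slip = (force - f_kinetic) / k # slip until force = kinetic level
            x += slip
            events.append((t, slip))
    return events

events = stick_slip_events()
# Quasi-periodic characteristic events: slip ~ (f_static - f_kinetic)/k,
# recurring every (f_static - f_kinetic)/(k*v) time units.
```

    Replacing the fixed static/kinetic thresholds with a rate- and state-dependent law is exactly the refinement the abstract proposes, as it lets the strength evolve with slip rate and history.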

  6. Toward GPGPU accelerated human electromechanical cardiac simulations.

    Science.gov (United States)

    Vigueras, Guillermo; Roy, Ishani; Cookson, Andrew; Lee, Jack; Smith, Nicolas; Nordsletten, David

    2014-01-01

    In this paper, we look at the acceleration of weakly coupled electromechanics using the graphics processing unit (GPU). Specifically, we port to the GPU a number of components of CHeart--a CPU-based finite element code developed for simulating multi-physics problems. On the basis of a criterion of computational cost, we implemented on the GPU the ODE and PDE solution steps for the electrophysiology problem and the Jacobian and residual evaluation for the mechanics problem. Performance of the GPU implementation is then compared with single-core CPU (SC) execution as well as multi-core CPU (MC) computations with equivalent theoretical performance. Results show that for a human-scale left ventricle mesh, GPU acceleration of the electrophysiology problem provided speedups of 164× compared with SC and 5.5× compared with MC for the solution of the ODE model. Speedups of up to 72× compared with SC and 2.6× compared with MC were also observed for the PDE solve. Using the same human geometry, the GPU implementation of the mechanics residual/Jacobian computation provided speedups of up to 44× compared with SC and 2.0× compared with MC. © 2013 The Authors. International Journal for Numerical Methods in Biomedical Engineering published by John Wiley & Sons, Ltd.

  7. Electron Beam Simulations on the SCSS Accelerator

    CERN Document Server

    Hara, Toru; Shintake, Tsumoru

    2004-01-01

    The SPring-8 Compact SASE Source (SCSS) is a SASE-FEL project aiming at soft X-ray radiation in its first stage using 1 GeV electron beams. One of the unique features of the SCSS is the use of a pulsed high-voltage electron gun with a thermionic cathode. The main reason for this choice is its high stability and the well-developed technology relating to the gun. Meanwhile, the electron bunch must be compressed properly at the injector in order to obtain sufficient peak currents. In this presentation, the results of electron beam simulations along the accelerator and the expected parameters of the electron beam are given.

  8. Fault Risk Assessment of Underwater Vehicle Steering System Based on Virtual Prototyping and Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    He Deyu

    2016-09-01

    Assessing the risks of steering system faults in underwater vehicles is a human-machine-environment (HME) systemic safety problem that involves faults in the steering system itself, the driver's human reliability (HR) and various environmental conditions. This paper proposes a fault risk assessment method for an underwater vehicle steering system based on virtual prototyping and Monte Carlo simulation. A virtual steering system prototype was established and validated to compensate for a lack of historical fault data. Fault injection and simulation were conducted to acquire fault simulation data. A Monte Carlo simulation was adopted that integrated randomness due to the human operator and the environment. The randomness and uncertainty of the human, machine and environment were integrated in the method to obtain a probabilistic risk indicator. To verify the proposed method, a case study of stuck rudder fault (SRF) risk assessment was performed. The method may provide a novel solution for fault risk assessment of a vehicle or other general HME systems.
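    The Monte Carlo integration of machine, human and environmental randomness can be sketched as below. The distributions, the combination rule and the failure threshold are all invented placeholders for the paper's calibrated fault-simulation data and HR model; only the structure (sample, combine, count exceedances) is the point.

```python
import numpy as np

rng = np.random.default_rng(3)

def risk_probability(n=100_000, threshold=1.5):
    """Probabilistic risk indicator: fraction of Monte Carlo draws in
    which the combined hazard exceeds an assumed acceptance threshold."""
    fault_severity = rng.lognormal(mean=0.0, sigma=0.5, size=n)  # machine
    human_error = rng.uniform(0.0, 0.3, size=n)                  # operator
    sea_state = rng.exponential(scale=0.2, size=n)               # environment
    hazard = fault_severity * (1.0 + human_error + sea_state)
    return float(np.mean(hazard > threshold))

p = risk_probability()
```

    In the paper's workflow, the machine term would come from fault-injection runs on the virtual prototype rather than from an assumed lognormal, but the exceedance count yields the same kind of probabilistic indicator.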

  9. Faults

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Through the study of faults and their effects, much can be learned about the size and recurrence intervals of earthquakes. Faults also teach us about crustal...

  10. Thermal imaging on simulated faults during frictional sliding

    CERN Document Server

    Mair, Karen; Gundersen, Olav

    2008-01-01

    Heating during frictional sliding is a major component of the energy budget of earthquakes and represents a potential weakening mechanism. It is therefore important to investigate how heat dissipates during sliding on simulated faults. We present results from laboratory friction experiments where a halite (NaCl) slider held under constant load is dragged across a coarse substrate. Surface evolution and frictional resistance are recorded. Heat emission at the sliding surface is monitored using an infra-red camera. We demonstrate a link between plastic deformations of halite and enhanced heating characterized by transient localized heat spots. When sand 'gouge' is added to the interface, heating is more diffuse. Importantly, when strong asperities concentrate deformation, significantly more heat is produced locally. In natural faults such regions could be nucleation patches for melt production and hence potentially initiate weakening during earthquakes at much smaller sliding velocities or shear stress than pre...

  11. Simulation of growth normal fault sandbox tests using the 2D discrete element method

    Science.gov (United States)

    Chu, Sheng-Shin; Lin, Ming-Lang; Huang, Wen-Chao; Nien, Wei-Tung; Liu, Huan-Chi; Chan, Pei-Chen

    2015-01-01

A fault slip can cause the deformation of shallow soil layers and destroy infrastructure. The Shanchiao Fault on the west side of the Taipei Basin is one such fault. The activities of the Shanchiao Fault have deformed the Quaternary sediment beneath the Taipei Basin, damaging structures, traffic infrastructure, and utility lines in the area. Geological drilling and dating data have been used to determine that a growth fault exists in the Shanchiao Fault. In an experiment, a sandbox model was built using noncohesive sandy soil to simulate the existence of a growth fault in the Shanchiao Fault and forecast the effect of the growth fault on shear-band development and differential ground deformation. The experimental results indicated that when a normal fault contains a growth fault at the offset of the base rock, the shear band develops upward beside the weak side of the shear band of the original soil layer and reaches the surface considerably faster than in the single soil layer case. The offset ratio required is approximately one-third that of the single-cover soil layer. In this study, a numerical simulation of the sandbox experiment was conducted using a discrete element method program, PFC2D, to simulate the pace of shear-band development in the upper covering sand layer and the scope of a growth normal fault slip. The simulation results were similar to those of the sandbox experiment and can be applied to the design of construction projects near fault zones.

  12. Broadband Strong Ground Motion Simulation For a Potential Mw 7.1 Earthquake on The Enriquillo Fault in Haiti

    Science.gov (United States)

    Douilly, R.; Mavroeidis, G. P.; Calais, E.

    2015-12-01

The devastating 2010 Haiti earthquake showed the need to be more vigilant toward mitigation of future earthquakes in the region. Previous studies have shown that this earthquake did not occur on the Enriquillo Fault, the main plate boundary fault running through the heavily populated Port-au-Prince region, but on the nearby and previously unknown Léogâne transpressional fault. Slip on that fault has increased stresses on the Enriquillo Fault, mostly in the region closest to Port-au-Prince, the most populated city of the country. Here we investigate the level of ground shaking in this region if a rupture similar to the Mw 7.0 2010 Haiti earthquake occurred on the Enriquillo Fault. We use a finite element method and assumptions on regional stress to simulate low-frequency dynamic rupture propagation for a 53 km long segment. We introduce some heterogeneity by creating two slip patches with shear traction 10% greater than the initial shear traction on the fault. The final slip distribution is similar in distribution and magnitude to previous finite-fault inversions for the 2010 Haiti earthquake. The high-frequency ground motion components are calculated using the specific barrier model, and the hybrid broadband synthetics are obtained by combining the low frequencies (f < 1 Hz) from the dynamic rupture simulation with the high frequencies (f > 1 Hz) from the stochastic simulation using matched filtering at a crossover frequency of 1 Hz. The average horizontal peak ground acceleration, computed at several sites of interest throughout Port-au-Prince, has a value of 0.35 g. We also compute response spectra at those sites and compare them to the spectra from the microzonation study.
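The hybrid combination step can be illustrated with a minimal complementary-filter sketch: the low-frequency content of one trace is kept below the crossover and added to the high-frequency remainder of the other. Real broadband simulation uses matched zero-phase filter pairs at 1 Hz; the first-order filter here is a simplifying assumption chosen so the two halves sum exactly:

```python
import math

def lowpass(x, fc, dt):
    # First-order low-pass filter (sketch; production codes use
    # matched zero-phase Butterworth pairs).
    alpha = dt / (dt + 1.0 / (2.0 * math.pi * fc))
    y, prev = [], 0.0
    for s in x:
        prev = prev + alpha * (s - prev)
        y.append(prev)
    return y

def hybrid_broadband(low_freq_trace, high_freq_trace, fc=1.0, dt=0.01):
    """Combine deterministic low frequencies (f < fc) with stochastic
    high frequencies (f > fc) at crossover frequency fc (Hz)."""
    lp = lowpass(low_freq_trace, fc, dt)
    # High-pass as the complement of the same low-pass, so LP + HP = identity.
    hp = [h - l for h, l in zip(high_freq_trace, lowpass(high_freq_trace, fc, dt))]
    return [a + b for a, b in zip(lp, hp)]
```

Because the high-pass is defined as the complement of the same low-pass, feeding the identical trace into both inputs returns it unchanged, a useful sanity check on any matched filter pair.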

  13. Broadband Ground Motion Simulations for the Puente Hills Fault System

    Science.gov (United States)

    Graves, R. W.

    2005-12-01

Recent geologic studies have identified the seismic potential of the Puente Hills fault system. This system comprises multiple blind thrust segments, a portion of which ruptured in the Mw 5.9 Whittier-Narrows earthquake. Rupture of the entire system could generate a Mw 7.2 (or larger) earthquake. To assess the potential hazard posed by the fault system, we have simulated the response for several earthquake scenarios. These simulations are unprecedented in scope and scale. Broadband (0-10 Hz) ground motions are computed at 66,000 sites, covering most of the LA metropolitan region. High-frequency (f > 1 Hz) motions are calculated using a stochastic approach. We consider scenarios ranging from Mw 6.7 to Mw 7.2, including both high and low stress drop events. Finite-fault rupture models for these scenarios are generated following a wavenumber filtering technique (K-2 model) that has been calibrated against recent earthquakes. In all scenarios, strong rupture directivity channels large amplitude pulses of motion directly into the Los Angeles basin, which then propagate southward as basin surface waves. Typically, the waveforms near downtown Los Angeles are dominated by a strong, concentrated pulse of motion. At Long Beach (across the LA basin from the rupture) the waveforms are dominated by late arriving longer period surface waves. The great density of sites used in the calculation allows the construction of detailed maps of various ground motion parameters (PGA, PGV, SA), as well as full animations of the propagating broadband wave field. Additionally, the broadband time histories are available for use in non-linear response analyses of built structures.

  14. Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area

    Energy Technology Data Exchange (ETDEWEB)

    Pratama, Cecep, E-mail: great.pratama@gmail.com [Graduate Program of Earth Science, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Meilano, Irwan [Geodesy Research Division, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Nugraha, Andri Dian [Global Geophysical Group, Faculty of Mining and Petroleum Engineering, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia)

    2015-04-24

Slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on the hazard level. We examine the contribution of slip rate to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). The hazard curve of PGA has been investigated for Sukabumi using PSHA (Probabilistic Seismic Hazard Analysis). We observe that the greatest influence on the hazard estimate comes from the crustal fault. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. The uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimate is sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For the specific site, we find that the seismic hazard estimate for Sukabumi is between 0.4904 and 0.8465 g, with uncertainty between 0.0847 and 0.2389 g and COV between 17.7% and 29.8%.
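A minimal sketch of the Monte Carlo sensitivity idea: sample the fault slip rate from an assumed distribution, push each sample through a stand-in hazard function, and summarize the spread of the resulting PGA as an uncertainty and a coefficient of variation (COV). The hazard function and every parameter value here are illustrative, not those of the study:

```python
import math
import random
import statistics

def hazard_pga(slip_rate_mm_yr):
    # Toy stand-in for the PSHA hazard integral: PGA in g grows with
    # the log of slip rate (illustrative only, not the paper's model).
    return 0.3 + 0.15 * math.log10(slip_rate_mm_yr)

def slip_rate_sensitivity(mean=5.0, sigma=1.5, n=50_000, seed=1):
    """Propagate slip-rate uncertainty to PGA: returns (mean, std, COV)."""
    rng = random.Random(seed)
    pgas = []
    for _ in range(n):
        rate = max(0.1, rng.gauss(mean, sigma))  # sampled slip rate (mm/yr), floored
        pgas.append(hazard_pga(rate))
    mu = statistics.fmean(pgas)
    sd = statistics.pstdev(pgas)
    return mu, sd, sd / mu

mu, sd, cov = slip_rate_sensitivity()
```

The COV reported by the study plays the same role as `sd / mu` here: a dimensionless measure of how strongly slip-rate uncertainty propagates into the hazard estimate.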

  15. Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area

    Science.gov (United States)

    Pratama, Cecep; Meilano, Irwan; Nugraha, Andri Dian

    2015-04-01

Slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on the hazard level. We examine the contribution of slip rate to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). The hazard curve of PGA has been investigated for Sukabumi using PSHA (Probabilistic Seismic Hazard Analysis). We observe that the greatest influence on the hazard estimate comes from the crustal fault. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. The uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimate is sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For the specific site, we find that the seismic hazard estimate for Sukabumi is between 0.4904 and 0.8465 g, with uncertainty between 0.0847 and 0.2389 g and COV between 17.7% and 29.8%.

  16. A Novel Path Delay Fault Simulator Using Binary Logic

    Directory of Open Access Journals (Sweden)

    Ananta K. Majhi

    1996-01-01

Full Text Available A novel path delay fault simulator for combinational logic circuits, capable of detecting both robust and nonrobust paths, is presented. Particular emphasis has been given to the use of binary logic rather than the multiple-valued logic used in existing simulators, which contributes to reducing the overall complexity of the algorithm. A rule-based approach has been developed that identifies all robust and nonrobust paths tested by a two-pattern test, while backtracing from the POs to the PIs in a depth-first manner. Rules are also given to find probable glitches and to determine how they propagate through the circuit, which enables the identification of nonrobust paths. Experimental results on several ISCAS'85 benchmark circuits demonstrate the efficiency of the algorithm.

  17. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Spentzouris, P.; /Fermilab; Cary, J.; /Tech-X, Boulder; McInnes, L.C.; /Argonne; Mori, W.; /UCLA; Ng, C.; /SLAC; Ng, E.; Ryne, R.; /LBL, Berkeley

    2011-11-14

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The ComPASS organization

  18. Fault-tolerant Control of Unmanned Underwater Vehicles with Continuous Faults: Simulations and Experiments

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2009-12-01

Full Text Available A novel thruster fault diagnosis and accommodation method for open-frame underwater vehicles is presented in this paper. The proposed system consists of two units: a fault diagnosis unit and a fault accommodation unit. In the fault diagnosis unit, an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural network information fusion model is used to identify thruster faults. The fault accommodation unit is based on direct calculation of moments, and the result of fault identification is used to solve the control allocation problem. The approach handles continuous fault identification for the underwater vehicle (UV). Experimental results are provided to illustrate the performance of the proposed method in uncertain continuous fault situations.

  19. Simulating autonomous driving styles: Accelerations for three road profiles

    Directory of Open Access Journals (Sweden)

    Karjanto Juffrizal

    2017-01-01

Full Text Available This paper presents a new experimental approach to simulating projected autonomous driving styles based on accelerations over three road profiles. The study focused on determining ranges of triaxial accelerations to simulate the autonomous driving experience. A special device, known as the Automatic Acceleration and Data controller (AUTOAccD), has been developed to guide the designated driver in accomplishing the selected accelerations based on the road profiles and the intended driving styles, namely assertive, defensive, and light rail transit (LRT). Experimental investigations were carried out on three different road profiles (junction, speed hump, and corner) with two designated drivers and five trials for each condition. A driving style with the accelerations of an LRT has also been included in this study, as it is significant to the present methodology: the autonomous car is predicted to accelerate like an LRT, enabling users to conduct activities such as working on a laptop, using personal devices, or eating and drinking while travelling. The results demonstrated that 92 out of 110 trials of the intended accelerations for autonomous driving styles could be achieved and simulated on a real road by the designated drivers. The differences between the two designated drivers were negligible, and the rates of success in realizing the intended accelerations were high. The present approach to simulating autonomous driving styles based on accelerations can be used as a tool for experimental setups involving autonomous driving experience and acceptance.

  20. ACCELERATED SYNERGISM ALONG A FAULT: A POSSIBLE INDICATOR FOR AN IMPENDING MAJOR EARTHQUAKE

    OpenAIRE

    Ma Jin; Guo Yanshuang; S. I. Sherman

    2014-01-01

It is generally accepted that crustal earthquakes are caused by sudden displacement along faults, which relies on two primary conditions. One is that the fault has a high degree of synergism, so that once the stress threshold is reached, fault segments can be connected rapidly to facilitate fast slip of longer fault sections. The other is sufficient strain accumulated at some portions of the fault, which can overcome the resistance to slip of the high-strength portions of the fault. Investigations t...

  1. Faults simulations for three-dimensional reservoir-geomechanical models with the extended finite element method

    Science.gov (United States)

    Prévost, Jean H.; Sukumar, N.

    2016-01-01

    Faults are geological entities with thicknesses several orders of magnitude smaller than the grid blocks typically used to discretize reservoir and/or over-under-burden geological formations. Introducing faults in a complex reservoir and/or geomechanical mesh therefore poses significant meshing difficulties. In this paper, we consider the strong-coupling of solid displacement and fluid pressure in a three-dimensional poro-mechanical (reservoir-geomechanical) model. We introduce faults in the mesh without meshing them explicitly, by using the extended finite element method (X-FEM) in which the nodes whose basis function support intersects the fault are enriched within the framework of partition of unity. For the geomechanics, the fault is treated as an internal displacement discontinuity that allows slipping to occur using a Mohr-Coulomb type criterion. For the reservoir, the fault is either an internal fluid flow conduit that allows fluid flow in the fault as well as to enter/leave the fault or is a barrier to flow (sealing fault). For internal fluid flow conduits, the continuous fluid pressure approximation admits a discontinuity in its normal derivative across the fault, whereas for an impermeable fault, the pressure approximation is discontinuous across the fault. Equal-order displacement and pressure approximations are used. Two- and three-dimensional benchmark computations are presented to verify the accuracy of the approach, and simulations are presented that reveal the influence of the rate of loading on the activation of faults.

  2. 3D Dynamic Rupture Simulations Across Interacting Faults: the Mw7.0, 2010, Haiti Earthquake

    Science.gov (United States)

    Douilly, R.; Aochi, H.; Calais, E.; Freed, A. M.; Aagaard, B.

    2014-12-01

The mechanisms controlling rupture propagation between fault segments during an earthquake are key to the hazard posed by fault systems. Rupture initiation on a fault segment sometimes transfers to a larger fault, resulting in a significant event (e.g., the 2002 M7.9 Denali and 2010 M7.1 Darfield earthquakes). In other cases rupture is constrained to the initial segment and does not transfer to nearby faults, resulting in events of moderate magnitude. This is the case of the 1989 M6.9 Loma Prieta and 2010 M7.0 Haiti earthquakes, which initiated on reverse faults abutting against a major strike-slip plate boundary fault but did not propagate onto it. Here we investigate the rupture dynamics of the Haiti earthquake, seeking to understand why rupture propagated across two segments of the Léogâne fault but did not propagate to the adjacent Enriquillo Plantain Garden Fault, the major 200 km long plate boundary fault cutting through southern Haiti. We use a finite element model to simulate the nucleation and propagation of rupture on the Léogâne fault, varying friction and background stress to determine the parameter set that best explains the observed earthquake sequence. The best-fit simulation is in remarkable agreement with several finite-fault inversions and predicts ground displacement in very good agreement with geodetic and geological observations. The two slip patches inferred from finite-fault inversions are explained by the successive rupture of two fault segments oriented favorably with respect to the rupture propagation, while the geometry of the Enriquillo fault did not allow shear stress to reach failure. Our simulation results replicate well the ground deformation consistent with the geodetic surface observations, but convolving the ground motion with the soil amplification from the microzonation study is needed to correctly account for the heterogeneity of the PGA throughout the rupture area.

  3. Coulomb static stress interactions between simulated M>7 earthquakes and major faults in Southern California

    Science.gov (United States)

    Rollins, J. C.; Ely, G. P.; Jordan, T. H.

    2010-12-01

    We calculate the Coulomb stress changes imparted to major Southern California faults by thirteen simulated worst-case-scenario earthquakes for the region, including the “Big Ten” scenarios (Ely et al, in progress). The source models for the earthquakes are variable-slip simulations from the SCEC CyberShake project (Graves et al, 2010). We find strong stress interactions between the San Andreas and subparallel right-lateral faults, thrust faults under the Los Angeles basin, and the left-lateral Garlock Fault. M>7 earthquakes rupturing sections of the southern San Andreas generally decrease Coulomb stress on the San Jacinto and Elsinore faults and impart localized stress increases and decreases to the Garlock, San Cayetano, Puente Hills and Sierra Madre faults. A M=7.55 quake rupturing the San Andreas between Lake Hughes and San Gorgonio Pass increases Coulomb stress on the eastern San Cayetano fault, consistent with Deng and Sykes (1996). M>7 earthquakes rupturing the San Jacinto, Elsinore, Newport-Inglewood and Palos Verdes faults decrease stress on parallel right-lateral faults. A M=7.35 quake on the San Cayetano Fault decreases stress on the Garlock and imparts localized stress increases and decreases to the San Andreas. A M=7.15 quake on the Puente Hills Fault increases stress on the San Andreas and San Jacinto faults, decreases stress on the Sierra Madre Fault and imparts localized stress increases and decreases to the Newport-Inglewood and Palos Verdes faults. A M=7.25 shock on the Sierra Madre Fault increases stress on the San Andreas and decreases stress on the Puente Hills Fault. These findings may be useful for hazard assessment, paleoseismology, and comparison with dynamic stress interactions featuring the same set of earthquakes.
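The quantity computed in studies like this is the Coulomb failure stress change on a receiver fault, dCFS = d_tau + mu' * d_sigma_n. A minimal sketch follows; the sign convention (tension positive) and the effective friction value of 0.4 are common assumptions in the literature, not parameters taken from this abstract:

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault (MPa):

        dCFS = d_tau + mu' * d_sigma_n

    d_shear:  shear stress change resolved in the slip direction.
    d_normal: normal stress change, tension (unclamping) positive.
    mu_eff:   effective friction coefficient (0.4 is a common assumption).
    dCFS > 0 moves the receiver fault closer to failure.
    """
    return d_shear + mu_eff * d_normal

# Unclamping plus increased shear loads the fault toward failure:
assert coulomb_stress_change(0.1, 0.2) > 0
# Increased shear can be offset by clamping (compression):
assert coulomb_stress_change(0.05, -0.2) < 0
```

The "localized stress increases and decreases" described above correspond to the sign of dCFS varying along strike as the resolved shear and normal stress changes trade off.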

  4. Community Petascale Project for Accelerator Science And Simulation: Advancing Computational Science for Future Accelerators And Accelerator Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Spentzouris, Panagiotis; /Fermilab; Cary, John; /Tech-X, Boulder; Mcinnes, Lois Curfman; /Argonne; Mori, Warren; /UCLA; Ng, Cho; /SLAC; Ng, Esmond; Ryne, Robert; /LBL, Berkeley

    2011-10-21

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  5. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Spentzouris, Panagiotis; /Fermilab; Cary, John; /Tech-X, Boulder; Mcinnes, Lois Curfman; /Argonne; Mori, Warren; /UCLA; Ng, Cho; /SLAC; Ng, Esmond; Ryne, Robert; /LBL, Berkeley

    2008-07-01

    The design and performance optimization of particle accelerators is essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC1 Accelerator Science and Technology project, the SciDAC2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multi-physics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  6. Fault attacks, injection techniques and tools for simulation

    NARCIS (Netherlands)

    Piscitelli, R.; Bhasin, S.; Regazzoni, F.

    2015-01-01

    Faults attacks are a serious threat to secure devices, because they are powerful and they can be performed with extremely cheap equipment. Resistance against fault attacks is often evaluated directly on the manufactured devices, as commercial tools supporting fault evaluation do not usually provide

  7. Frictional strength and strain weakening in simulated fault gouge: Competition between geometrical weakening and chemical strengthening

    NARCIS (Netherlands)

    Niemeijer, André; Marone, Chris; Elsworth, Derek

    2010-01-01

    Despite the importance of hydromechanical effects in fault processes, not much is known about the interplay of chemical and mechanical processes, in part because the conditions are difficult to simulate in the laboratory. We report results from an experimental study of simulated fault gouge composed

  8. Accelerating slip rates on the puente hills blind thrust fault system beneath metropolitan Los Angeles, California, USA

    Science.gov (United States)

    Bergen, Kristian J; Shaw, John H.; Leon, Lorraine A; Dolan, James F; Pratt, Thomas L.; Ponti, Daniel J.; Morrow, Eric; Barrera, Wendy; Rhodes, Edward J.; Murari, Madhav K.; Owen, Lewis

    2017-01-01

Slip rates represent the average displacement across a fault over time and are essential to estimating earthquake recurrence for probabilistic seismic hazard assessments. We demonstrate that the slip rate on the western segment of the Puente Hills blind thrust fault system, which is beneath downtown Los Angeles, California (USA), has accelerated from ~0.22 mm/yr in the late Pleistocene to ~1.33 mm/yr in the Holocene. Our analysis is based on syntectonic strata derived from the Los Angeles River, which has continuously buried a fold scarp above the blind thrust. Slip on the fault beneath our field site began during the late-middle Pleistocene and progressively increased into the Holocene. This increase in rate implies that the magnitudes and/or the frequency of earthquakes on this fault segment have increased over time. This challenges the characteristic earthquake model and presents an evolving and potentially increasing seismic hazard to metropolitan Los Angeles.

  9. Multipacting simulation in accelerating RF structures

    Energy Technology Data Exchange (ETDEWEB)

    Gusarova, M.A.; Kaminsky, V.I. [Moscow Engineering Physics Institute, State University (Russian Federation); Kravchuk, L.V. [Institute for Nuclear Research of Russian Academy of Sciences (Russian Federation); Kutsaev, S.V. [Moscow Engineering Physics Institute, State University (Russian Federation)], E-mail: s_kutsaev@mail.ru; Lalayan, M.V.; Sobenin, N.P. [Moscow Engineering Physics Institute, State University (Russian Federation); Tarasov, S.G. [Institute for Nuclear Research of Russian Academy of Sciences (Russian Federation)

    2009-02-01

A new computer code for 3D simulation of the multipacting phenomenon in axisymmetric and non-axisymmetric radio frequency (RF) structures is presented. The goal of the simulation is to determine resonant electron trajectories and electron multiplication in the RF structure. Both normal-conducting and superconducting SW and TW structures have been studied. Simulation results are compared with theoretical calculations and experimental measurements.

  10. A GPU Accelerated Spring Mass System for Surgical Simulation

    DEFF Research Database (Denmark)

    Mosegaard, Jesper; Sørensen, Thomas Sangild

    2005-01-01

There is a growing demand for surgical simulators to do fast and precise calculations of tissue deformation to simulate increasingly complex morphology in real-time. Unfortunately, even fast spring-mass based systems have slow convergence rates for large models. This paper presents a method to accelerate computation of a spring-mass system in order to simulate a complex organ such as the heart. This acceleration is achieved by taking advantage of modern graphics processing units (GPU).
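The per-time-step arithmetic that such a GPU implementation parallelizes over masses can be sketched serially as follows. This is a 1D chain with unit masses; the spring constant, damping, and integration scheme are illustrative assumptions, not the paper's actual formulation:

```python
def step_springs(pos, vel, springs, k=10.0, rest=1.0, dt=0.01, damping=0.98):
    """One explicit-Euler step of a 1D spring-mass chain (unit masses).

    On a GPU, the per-mass force accumulation below is what runs in
    parallel (one thread/fragment per mass); this serial sketch only
    illustrates the arithmetic performed each time step.
    """
    forces = [0.0] * len(pos)
    for i, j in springs:                       # Hooke's law per spring
        d = pos[j] - pos[i]
        f = k * (abs(d) - rest) * (1 if d > 0 else -1)
        forces[i] += f                         # pulls i toward j when stretched
        forces[j] -= f
    for i in range(len(pos)):                  # integrate (unit mass)
        vel[i] = (vel[i] + dt * forces[i]) * damping
        pos[i] += dt * vel[i]
    return pos, vel
```

Velocity damping makes a stretched chain relax toward the rest length, mirroring the convergence behavior the abstract discusses for large models.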

  11. Numerical Simulation on Faulting: Microscopic evolution, macroscopic interaction and rupture process of earthquakes

    CERN Document Server

    Aochi, Hideo

    2010-01-01

We review recent research on numerical simulations of faulting, interpreted in this paper as the evolution of the state of the fault plane and the evolution of fault structure. The themes include the fault constitutive (friction) law, the properties of the gouge particles, the initial phase of the rupture, the dynamic rupture process, the interaction of fault segments, fault zone dynamics, and so on. Many numerical methods have been developed: boundary integral equation methods (BIEM), finite difference methods (FDM), finite or spectral element methods (FEM, SEM), as well as distinct element methods (DEM), discrete element methods (again DEM) and lattice solid models (LSM). Fault dynamics should be solved as a complex non-linear system, which shows multiple hierarchical structures in its properties and behavior. Research has progressively advanced since the 1990s, both numerically and physically, thanks to high-performance computing environments. The interaction at small scales i...

  12. Beam dynamics simulation of a double pass proton linear accelerator

    Science.gov (United States)

    Hwang, Kilean; Qiang, Ji

    2017-04-01

A recirculating superconducting linear accelerator, which combines the advantages of straight and circular accelerators, has been demonstrated with relativistic electron beams. The concept of accelerating a recirculating proton beam was recently proposed [J. Qiang, Nucl. Instrum. Methods Phys. Res., Sect. A 795, 77 (2015), 10.1016/j.nima.2015.05.056] and is currently under study. To further support the concept, a beam dynamics study of a recirculating proton linear accelerator must be carried out. In this paper, we study the feasibility of a two-pass recirculating proton linear accelerator through direct numerical beam dynamics design optimization and start-to-end simulation. This study shows that two-pass simultaneous focusing without particle losses is attainable, including fully 3D space-charge effects, through the entire accelerator system.

  13. Exploiting Process Locality of Reference in RTL Simulation Acceleration

    Directory of Open Access Journals (Sweden)

    Aric D. Blumer

    2008-01-01

    Full Text Available With the increased size and complexity of digital designs, the time required to simulate them has also increased. Traditional simulation accelerators utilize FPGAs in a static configuration, but this paper presents an analysis of six register transfer level (RTL) code bases showing that only a subset of the simulation processes is executing at any given time, a quality called executive locality of reference. The efficiency of acceleration hardware can be improved when it is used as a process cache. Run-time adaptations are made to ensure that acceleration resources are not wasted on idle processes, and these adaptations may be effected through process migration between software and hardware. An implementation of an embedded, FPGA-based migration system is described, and empirical data are obtained for use in mathematical and algorithmic modeling of more complex acceleration systems.
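
    The process-cache idea can be sketched with an LRU-managed set of hardware slots; hit counts then quantify how much executive locality a trace exhibits. A minimal sketch, assuming a synthetic trace and illustrative process IDs (none of these come from the paper):

```python
from collections import OrderedDict

# Toy model of acceleration hardware used as a process cache: an LRU-managed
# set of FPGA "slots" holds the currently hot simulation processes.
# Process IDs, capacity, and the synthetic trace are illustrative.
class ProcessCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = OrderedDict()       # process id -> loaded bitstream (stub)
        self.hits = self.misses = 0

    def execute(self, pid):
        if pid in self.slots:
            self.hits += 1
            self.slots.move_to_end(pid)  # mark as most recently used
        else:
            self.misses += 1             # migrate the process into hardware
            if len(self.slots) >= self.capacity:
                self.slots.popitem(last=False)   # evict LRU process to software
            self.slots[pid] = "bitstream-%d" % pid

# A trace with strong locality: long phases over small working sets
cache = ProcessCache(capacity=4)
trace = [pid for phase in ([1, 2, 3] * 50, [4, 5] * 50, [1, 6] * 50) for pid in phase]
for pid in trace:
    cache.execute(pid)
hit_rate = cache.hits / len(trace)       # high despite only 4 hardware slots
```

    With phased working sets smaller than the slot count, almost all executions hit hardware, which is the behaviour that makes a process cache pay off.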

  14. Exploiting Process Locality of Reference in RTL Simulation Acceleration

    Directory of Open Access Journals (Sweden)

    Cameron D. Patterson

    2008-04-01

    Full Text Available With the increased size and complexity of digital designs, the time required to simulate them has also increased. Traditional simulation accelerators utilize FPGAs in a static configuration, but this paper presents an analysis of six register transfer level (RTL) code bases showing that only a subset of the simulation processes is executing at any given time, a quality called executive locality of reference. The efficiency of acceleration hardware can be improved when it is used as a process cache. Run-time adaptations are made to ensure that acceleration resources are not wasted on idle processes, and these adaptations may be effected through process migration between software and hardware. An implementation of an embedded, FPGA-based migration system is described, and empirical data are obtained for use in mathematical and algorithmic modeling of more complex acceleration systems.

  15. Numerical Simulation of Earthquake Nucleation Process and Seismic Precursors on Faults

    Institute of Scientific and Technical Information of China (English)

    He Changrong

    2000-01-01

    To understand precursory phenomena before seismic fault slip, this work focuses on the earthquake nucleation process on a fault plane through numerical simulation. A rate- and state-dependent friction law with variable normal stress is employed in the analysis. The results show that in the late stage of the nucleation process: (1) The maximum slip velocity is monotonically accelerating; (2) The slipping hot spot (where the slip rate is maximum) migrates spontaneously from a certain instant, and such migration is spatially continuous; (3) When the maximum velocity reaches a detectable order of magnitude (at least one order of magnitude greater than the loading rate), the remaining time is 20 hours or longer, and the temporal variation of slip velocity beyond this point may be used as a precursory indicator; (4) The average slip velocity is related to the remaining time by a log-log linear relation, which means that a similar relation between the rate of microseismicity and the remaining time may also exist; (5) When normal stress variation is taken into account, the time scale of such processes can be extended by a factor of two.
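
    The accelerating nucleation phase described in point (1) can be reproduced with a single-degree-of-freedom sketch: a quasi-static spring-slider governed by a rate- and state-dependent friction law (aging law). All parameter values below are illustrative assumptions, not those used in the paper:

```python
import math

# Quasi-static spring-slider with rate- and state-dependent friction.
# With b > a (velocity weakening) and stiffness below critical, a small
# perturbation of the state variable grows into runaway (nucleation).
a, b = 0.010, 0.015             # rate/state coefficients (b > a: weakening)
mu0, V0 = 0.6, 1e-6             # reference friction coefficient, slip rate [m/s]
Dc = 1e-5                       # characteristic slip distance [m]
sigma = 20e6                    # effective normal stress [Pa]
Vl = V0                         # load-point velocity [m/s]
k = 0.5 * sigma * (b - a) / Dc  # spring stiffness below critical -> unstable

tau = mu0 * sigma               # start at the steady-state stress for V0 ...
theta = 0.5 * Dc / V0           # ... but perturb the state variable
rates = []
for _ in range(500_000):
    # invert the friction law for the instantaneous slip rate V
    V = V0 * math.exp((tau / sigma - mu0 - b * math.log(V0 * theta / Dc)) / a)
    rates.append(V)
    if V > 1e-2:                # slip rate has run away: nucleation complete
        break
    dt = 0.05 * Dc / V                    # adaptive step resolves state evolution
    theta += (1.0 - V * theta / Dc) * dt  # aging-law state evolution
    tau += k * (Vl - V) * dt              # elastic loading minus slip release
```

    The recorded slip rates rise over many orders of magnitude before the cutoff, mirroring the monotonic acceleration of the maximum slip velocity in the late nucleation stage.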

  16. Ground motion modeling of Hayward fault scenario earthquakes II:Simulation of long-period and broadband ground motions

    Energy Technology Data Exchange (ETDEWEB)

    Aagaard, B T; Graves, R W; Rodgers, A; Brocher, T M; Simpson, R W; Dreger, D; Petersson, N A; Larsen, S C; Ma, S; Jachens, R C

    2009-11-04

    We simulate long-period (T > 1.0-2.0 s) and broadband (T > 0.1 s) ground motions for 39 scenario earthquakes (Mw 6.7-7.2) involving the Hayward, Calaveras, and Rodgers Creek faults. For rupture on the Hayward fault we consider the effects of creep on coseismic slip using two different approaches, both of which reduce the ground motions compared with neglecting the influence of creep. Nevertheless, the scenario earthquakes generate strong shaking throughout the San Francisco Bay area with about 50% of the urban area experiencing MMI VII or greater for the magnitude 7.0 scenario events. Long-period simulations of the 2007 Mw 4.18 Oakland and 2007 Mw 5.45 Alum Rock earthquakes show that the USGS Bay Area Velocity Model version 08.3.0 permits simulation of the amplitude and duration of shaking throughout the San Francisco Bay area, with the greatest accuracy in the Santa Clara Valley (San Jose area). The ground motions exhibit a strong sensitivity to the rupture length (or magnitude), hypocenter (or rupture directivity), and slip distribution. The ground motions display a much weaker sensitivity to the rise time and rupture speed. Peak velocities, peak accelerations, and spectral accelerations from the synthetic broadband ground motions are, on average, slightly higher than the Next Generation Attenuation (NGA) ground-motion prediction equations. We attribute at least some of this difference to the relatively narrow width of the Hayward fault ruptures. The simulations suggest that the Spudich and Chiou (2008) directivity corrections to the NGA relations could be improved by including a dependence on the rupture speed and increasing the areal extent of rupture directivity with period. The simulations also indicate that the NGA relations may under-predict amplification in shallow sedimentary basins.

  17. Optimal Model-Based Fault Estimation and Correction for Particle Accelerators and Industrial Plants Using Combined Support Vector Machines and First Principles Models

    Energy Technology Data Exchange (ETDEWEB)

    Sayyar-Rodsari, Bijan; Schweiger, Carl; /SLAC /Pavilion Technologies, Inc., Austin, TX

    2010-08-25

    parameters of the beam lifetime model) are physically meaningful. (3) Numerical Efficiency of the Training - We investigated the numerical efficiency of the SVM training. More specifically, for the primal formulation of the training, we have developed a problem formulation that avoids the linear increase in the number of the constraints as a function of the number of data points. (4) Flexibility of Software Architecture - The software framework for the training of the support vector machines was designed to enable experimentation with different solvers. We experimented with two commonly used nonlinear solvers for our simulations. The primary application of interest for this project has been the sustained optimal operation of particle accelerators at the Stanford Linear Accelerator Center (SLAC). Particle storage rings are used for a variety of applications ranging from 'colliding beam' systems for high-energy physics research to highly collimated x-ray generators for synchrotron radiation science. Linear accelerators are also used for collider research such as International Linear Collider (ILC), as well as for free electron lasers, such as the Linear Coherent Light Source (LCLS) at SLAC. One common theme in the operation of storage rings and linear accelerators is the need to precisely control the particle beams over long periods of time with minimum beam loss and stable, yet challenging, beam parameters. We strongly believe that beyond applications in particle accelerators, the high fidelity and cost benefits of a combined model-based fault estimation/correction system will attract customers from a wide variety of commercial and scientific industries. Even though the acquisition of Pavilion Technologies, Inc. by Rockwell Automation Inc. in 2007 has altered the small business status of the Pavilion and it no longer qualifies for a Phase II funding, our findings in the course of the Phase I research have convinced us that further research will render a workable

  19. Mineralogical Controls of Fault Healing in Natural and Simulated Gouges with Implications for Fault Zone Processes and the Seismic Cycle

    Science.gov (United States)

    Carpenter, B. M.; Ikari, M.; Marone, C.

    2011-12-01

    The frictional strength and stability of tectonic faults is determined by asperity contact processes, granular deformation, and fault zone fabric development. The evolution of grain-scale contact area during the seismic cycle likely exerts significant control on overall fault stability by influencing frictional restrengthening, or healing, during the interseismic period, and the rate-dependence of sliding friction, which controls earthquake nucleation and the mode of fault slip. We report on laboratory experiments designed to explore the effect of mineralogy on fault healing. We conducted frictional shear experiments in a double-direct shear configuration at room temperature, 100% relative humidity, and a normal stress of 20 MPa. We used samples from a wide range of natural faults, including outcrop samples and core recovered during scientific drilling. Faults include: Alpine (New Zealand), Zuccale (Italy), Rocchetta (Italy), San Gregorio (California), Calaveras (California), Kodiak (Alaska), Nankai (Japan), Middle America Trench (Costa Rica), and San Andreas (California). To isolate the role of mineralogy, we also tested simulated fault gouges composed of talc, montmorillonite, biotite, illite, kaolinite, quartz, andesine, and granite. Frictional healing was measured at an accumulated shear strain of ~15 within the gouge layers. We conducted slide-hold-slide tests ranging from 3 to 3000 seconds. The main suite of experiments used a background shearing rate of 10 μm/s; these were augmented with sub-suites at 1 and 100 μm/s. We find that phyllosilicate-rich gouges (e.g. talc, montmorillonite, San Andreas Fault) show little to no healing over all hold times. We find the highest healing rates (β ≈ 0.01, Δμ per decade in time, s) in gouges from the Alpine and Rocchetta faults, with the rest of our samples falling into an intermediate range of healing rates. Nearly all gouges exhibit log-linear healing rates with the exceptions of San Andreas Fault gouge and

  20. Start-to-end simulation with rare isotope beam for post accelerator of the RAON accelerator

    CERN Document Server

    Jin, Hyunchang

    2016-01-01

    The RAON accelerator of the Rare Isotope Science Project (RISP) has been developed to create and accelerate various kinds of stable heavy-ion beams and rare isotope beams for a wide range of science applications. In the RAON accelerator, the rare isotope beams generated by the Isotope Separation On-Line (ISOL) system will be transported through the post accelerator, namely from the post Low Energy Beam Transport (LEBT) system and the post Radio Frequency Quadrupole (RFQ) to the superconducting linac (SCL3). The accelerated beams will be used in the low-energy experimental hall or accelerated again by the superconducting linac (SCL2) for use in the high-energy experimental hall. In this paper, we describe the results of start-to-end simulations with the rare isotope beams generated by the ISOL system in the post accelerator of the RAON. In addition, the error analysis and correction at the superconducting linac SCL3 are presented.

  1. Ground-motion modeling of Hayward fault scenario earthquakes, part II: Simulation of long-period and broadband ground motions

    Science.gov (United States)

    Aagaard, Brad T.; Graves, Robert W.; Rodgers, Arthur; Brocher, Thomas M.; Simpson, Robert W.; Dreger, Douglas; Petersson, N. Anders; Larsen, Shawn C.; Ma, Shuo; Jachens, Robert C.

    2010-01-01

    We simulate long-period (T>1.0–2.0 s) and broadband (T>0.1 s) ground motions for 39 scenario earthquakes (Mw 6.7–7.2) involving the Hayward, Calaveras, and Rodgers Creek faults. For rupture on the Hayward fault, we consider the effects of creep on coseismic slip using two different approaches, both of which reduce the ground motions, compared with neglecting the influence of creep. Nevertheless, the scenario earthquakes generate strong shaking throughout the San Francisco Bay area, with about 50% of the urban area experiencing modified Mercalli intensity VII or greater for the magnitude 7.0 scenario events. Long-period simulations of the 2007 Mw 4.18 Oakland earthquake and the 2007 Mw 5.45 Alum Rock earthquake show that the U.S. Geological Survey’s Bay Area Velocity Model version 08.3.0 permits simulation of the amplitude and duration of shaking throughout the San Francisco Bay area for Hayward fault earthquakes, with the greatest accuracy in the Santa Clara Valley (San Jose area). The ground motions for the suite of scenarios exhibit a strong sensitivity to the rupture length (or magnitude), hypocenter (or rupture directivity), and slip distribution. The ground motions display a much weaker sensitivity to the rise time and rupture speed. Peak velocities, peak accelerations, and spectral accelerations from the synthetic broadband ground motions are, on average, slightly higher than the Next Generation Attenuation (NGA) ground-motion prediction equations. We attribute much of this difference to the seismic velocity structure in the San Francisco Bay area and how the NGA models account for basin amplification; the NGA relations may underpredict amplification in shallow sedimentary basins. The simulations also suggest that the Spudich and Chiou (2008) directivity corrections to the NGA relations could be improved by increasing the areal extent of rupture directivity with period.

  2. Accelerator and feedback control simulation using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, D.; Lee, M.; Sass, R.; Shoaee, H.

    1991-05-01

    Unlike present constant-model feedback systems, neural networks can adapt as the dynamics of the process change with time. Using a process model, the "Accelerator" network is first trained to simulate the dynamics of the beam for a given beam line. This "Accelerator" network is then used to train a second "Controller" network which performs the control function. In simulation, the networks are used to adjust corrector magnets to control the launch angle and position of the beam to keep it on the desired trajectory when the incoming beam is perturbed. 4 refs., 3 figs.
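
    The two-network scheme can be illustrated with linear stand-ins: a least-squares "plant model" plays the role of the Accelerator network, and a feedback gain trained by gradient descent through the frozen model plays the role of the Controller network. The scalar beam dynamics and all coefficients below are hypothetical, not from the paper:

```python
import random

# Linear stand-in for the two-network scheme: learn a model of the plant
# from data, then train a controller against the learned (frozen) model.
# Hypothetical scalar dynamics: x_next = a*x + b*u, with a, b unknown to us.
random.seed(1)
a_true, b_true = 0.9, 0.2
data = []
for _ in range(200):
    x, u = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
    data.append((x, u, a_true * x + b_true * u))   # observed beam response

# Step 1: least-squares fit of the plant model (the "Accelerator" net's job)
Sxx = sum(x * x for x, u, y in data); Sxu = sum(x * u for x, u, y in data)
Suu = sum(u * u for x, u, y in data)
Sxy = sum(x * y for x, u, y in data); Suy = sum(u * y for x, u, y in data)
det = Sxx * Suu - Sxu * Sxu
a_hat = (Sxy * Suu - Suy * Sxu) / det
b_hat = (Suy * Sxx - Sxy * Sxu) / det

# Step 2: train a feedback gain k through the frozen model (the "Controller"),
# minimizing the predicted next state under the control law u = -k * x
k = 0.0
for _ in range(1000):
    grad = sum(2.0 * (a_hat - b_hat * k) * (-b_hat) * x * x
               for x, u, y in data) / len(data)
    k -= 1.0 * grad        # learning rate 1.0 is stable for these magnitudes
```

    The trained gain converges to a_hat/b_hat, the value that cancels the predicted drift, which is the same role the Controller network plays against the Accelerator network.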

  3. Dynamic rupture simulations on complex fault zone structures with off-fault plasticity using the ADER-DG method

    Science.gov (United States)

    Wollherr, Stephanie; Gabriel, Alice-Agnes; Igel, Heiner

    2015-04-01

    zones or branched faults. Studying the interplay of stress conditions and angle dependence of neighbouring branches including inelastic material behaviour and its effects on rupture jumps and seismic activation helps to advance our understanding of earthquake source processes. An application is the simulation of a real large-scale subduction zone scenario including plasticity to validate the coupling of our dynamic rupture calculations to a tsunami model in the framework of the ASCETE project (http://www.ascete.de/). Andrews, D. J. (2005): Rupture dynamics with energy loss outside the slip zone, J. Geophys. Res., 110, B01307. Heinecke, A. (2014), A. Breuer, S. Rettenberger, M. Bader, A.-A. Gabriel, C. Pelties, A. Bode, W. Barth, K. Vaidyanathan, M. Smelyanskiy and P. Dubey: Petascale High Order Dynamic Rupture Earthquake Simulations on Heterogeneous Supercomputers. In Supercomputing 2014, The International Conference for High Performance Computing, Networking, Storage and Analysis. IEEE, New Orleans, LA, USA, November 2014. Roten, D. (2014), K. B. Olsen, S.M. Day, Y. Cui, and D. Fäh: Expected seismic shaking in Los Angeles reduced by San Andreas fault zone plasticity, Geophys. Res. Lett., 41, 2769-2777.

  4. Linear accelerator simulation framework with PLACET and GUINEA-PIG

    CERN Document Server

    Snuverink, Jochem; CERN. Geneva. ATS Department

    2016-01-01

    Many good tracking tools are available for simulations of linear accelerators. However, several simple tasks need to be performed repeatedly, like lattice definitions, beam setup, output storage, etc. In addition, complex simulations can become unmanageable quite easily. A high-level layer would therefore be beneficial. We propose LinSim, a linear accelerator framework with the codes PLACET and GUINEA-PIG. It provides a documented, well-debugged high-level layer of functionality. Users only need to provide the input settings and essential code and/or use some of the many implemented imperfections and algorithms. It can be especially useful for first-time users. Currently the following accelerators are implemented: ATF2, ILC, CLIC and FACET. This note is the comprehensive manual; it discusses the framework design and shows its strengths in some condensed examples.

  5. GPU-accelerated micromagnetic simulations using cloud computing

    Science.gov (United States)

    Jermain, C. L.; Rowlands, G. E.; Buhrman, R. A.; Ralph, D. C.

    2016-03-01

    Highly parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics.

  6. GPU-accelerated micromagnetic simulations using cloud computing

    CERN Document Server

    Jermain, C L; Buhrman, R A; Ralph, D C

    2015-01-01

    Highly-parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics.

  7. Accelerated Hierarchical Collision Detection for Simulation using CUDA

    DEFF Research Database (Denmark)

    Jørgensen, Jimmy Alison; Fugl, Andreas Rune; Petersen, Henrik Gordon

    2011-01-01

    In this article we present a GPU-accelerated, hybrid, narrow-phase collision detection algorithm for simulation purposes. The algorithm is based on hierarchical bounding volume tree structures of oriented bounding boxes (OBB) that in the past have been shown to be efficient for collision detection. The...

  8. Simulations of ion acceleration at non-relativistic shocks: i) Acceleration efficiency

    CERN Document Server

    Caprioli, Damiano

    2013-01-01

    We use 2D and 3D hybrid (kinetic ions - fluid electrons) simulations to investigate particle acceleration and magnetic field amplification at non-relativistic astrophysical shocks. We show that diffusive shock acceleration operates for quasi-parallel configurations (i.e., when the background magnetic field is almost aligned with the shock normal) and, for large sonic and Alfvénic Mach numbers, produces universal power-law spectra proportional to p^(-4), where p is the particle momentum. The maximum energy of accelerated ions increases with time, and it is only limited by finite box size and run time. Acceleration is mainly efficient for parallel and quasi-parallel strong shocks, where 10-20% of the bulk kinetic energy can be converted to energetic particles, and becomes ineffective for quasi-perpendicular shocks. Also, the generation of magnetic turbulence correlates with efficient ion acceleration, and vanishes for quasi-perpendicular configurations. At very oblique shocks, ions can be accelerated via shoc...
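
    The emergence of a power-law spectrum from repeated shock crossings can be illustrated with a toy first-order Fermi Monte Carlo (a didactic stand-in, not the hybrid kinetic method of the paper): each particle gains a fixed fractional energy per crossing cycle and escapes with a fixed probability, giving an integral spectrum with a predictable slope. The gain and escape parameters are illustrative:

```python
import bisect, math, random

# Toy first-order Fermi acceleration: energy gain (1 + beta) per cycle,
# escape probability p_esc per cycle. Bell's argument predicts an integral
# spectrum N(>E) ~ E^-gamma with gamma = -ln(1 - p_esc) / ln(1 + beta).
random.seed(42)
beta, p_esc = 0.1, 0.2
energies = []
for _ in range(200_000):
    cycles = 0
    while random.random() > p_esc:   # particle returns to the shock
        cycles += 1
    energies.append((1.0 + beta) ** cycles)
energies.sort()

# sample the integral spectrum N(>E) and fit a log-log slope
xs, ys = [], []
for E_cut in (2.0, 4.0, 8.0, 16.0):
    n_above = len(energies) - bisect.bisect_left(energies, E_cut)
    xs.append(math.log(E_cut))
    ys.append(math.log(n_above))
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
gamma = -slope                                            # measured index
theory = -math.log(1.0 - p_esc) / math.log(1.0 + beta)    # predicted, ~2.34
```

    The measured index agrees with the analytic prediction; changing the gain or escape probability steepens or flattens the power law accordingly.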

  9. Detailed Simulation of Transformer Internal Fault in Power System by Diakoptical Concept

    Directory of Open Access Journals (Sweden)

    KOUHSARI, S. M.

    2010-08-01

    Full Text Available This paper presents a novel method for modeling internal faults in a power transformer. The method uses a distributed computing approach for the analysis of internal faults in transient stability (T/S) studies of electrical networks, using Diakoptics and large change sensitivity (LCS) concepts. The combination of these concepts with a phase-frame model of the transformer is used here to develop an internal fault simulation of transformers. This approach leads to a model which is compatible with commercial phasor-based software packages. Consequently, it enables calculation of fault currents in any branch of the network due to a winding fault of a power transformer. The proposed method is implemented successfully and validated against time-domain software and GEC group measurement results.

  10. Numerical simulation on fault water-inrush based on fluid-solid coupling theory

    Institute of Scientific and Technical Information of China (English)

    HUANG Han-fu; MAO Xian-biao; YAO Bang-hua; PU Hai

    2012-01-01

    About 75% of water-inrush accidents in China are caused by geological structures such as faults; therefore, it is necessary to investigate the water-inrush mechanism of faults to provide references for mining activity above confined water. In this paper, based on fluid-solid coupling theory, we built a stress-seepage coupling model for rock. Combined with an example of water-inrush caused by a fault, we studied the water-inrush mechanism using the numerical software COMSOL Multiphysics, analyzed the variation of shear stress, vertical stress, plastic area, and water pressure for a stope with a fault, and estimated the water-inrush risk at different distances between the working face and the fault. The numerical simulation results indicate that: (1) the water-inrush risk grows as the distance between the working face and the fault decreases; (2) the failure mode of the floor rock with a fault is shear failure; (3) failure of the rock between the water-bearing fault and the working face is the cause of water-inrush.

  11. Accounting for Fault Roughness in Pseudo-Dynamic Ground-Motion Simulations

    KAUST Repository

    Mai, Paul Martin

    2017-04-03

    Geological faults comprise large-scale segmentation and small-scale roughness. These multi-scale geometrical complexities determine the dynamics of the earthquake rupture process, and therefore affect the radiated seismic wavefield. In this study, we examine how different parameterizations of fault roughness lead to variability in the rupture evolution and the resulting near-fault ground motions. Rupture incoherence naturally induced by fault roughness generates high-frequency radiation that follows an ω−2 decay in displacement amplitude spectra. Because dynamic rupture simulations are computationally expensive, we test several kinematic source approximations designed to emulate the observed dynamic behavior. When simplifying the rough-fault geometry, we find that perturbations in local moment tensor orientation are important, while perturbations in local source location are not. Thus, a planar fault can be assumed if the local strike, dip, and rake are maintained. We observe that dynamic rake angle variations are anti-correlated with the local dip angles. Testing two parameterizations of a dynamically consistent Yoffe-type source-time function, we show that the seismic wavefield of the approximated kinematic ruptures reproduces well the radiated seismic waves of the complete dynamic source process. This finding opens a new avenue for an improved pseudo-dynamic source characterization that captures the effects of fault roughness on earthquake rupture evolution. By also including the correlations between kinematic source parameters, we outline a new pseudo-dynamic rupture modeling approach for broadband ground-motion simulation.
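
    A self-affine rough-fault profile of the kind parameterized above can be synthesized by random-phase Fourier summation. A minimal sketch; the Hurst exponent, profile length, and amplitude normalization are illustrative assumptions, not values from the study:

```python
import cmath, math, random

# Random-phase Fourier synthesis of a self-affine fault roughness profile:
# one-sided spectral amplitudes |h_m| ~ k^-(H + 0.5), uniformly random phases.
random.seed(0)
N, L = 512, 1000.0        # samples and profile length [m] (illustrative)
H = 0.8                   # Hurst exponent typical of natural fault surfaces

coeffs = [0j] * (N // 2 + 1)
for m in range(1, N // 2 + 1):
    k = 2.0 * math.pi * m / L                 # wavenumber of mode m
    coeffs[m] = k ** -(H + 0.5) * cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi))

# inverse transform by direct summation (numpy-free; O(N^2) is fine at this size)
profile = [
    2.0 / N * sum((coeffs[m] * cmath.exp(2j * math.pi * m * n / N)).real
                  for m in range(1, N // 2 + 1))
    for n in range(N)
]
```

    Long wavelengths dominate the resulting profile, as expected for a red spectrum; regenerating with different seeds gives the ensemble of roughness realizations such studies average over.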

  12. Fault mechanism analysis and simulation for continuity resistance test of electrical components in aircraft engine

    Science.gov (United States)

    Shi, Xudong; Yin, Yaping; Wang, Jialin; Sun, Zhaorong

    2017-01-01

    A large number of electrical components are used in civil aircraft engines, whose electrical circuits are usually intricate and complicated. Continuity resistance is an important parameter for the operating state of electrical components, and electrical continuity faults have a serious impact on the reliability of the aircraft engine. In this paper, mathematical models of electrical components are established, and simulations are performed in Simulink to analyze electrical continuity faults.

  13. Simulation of density measurements in plasma wakefields using photon acceleration

    CERN Document Server

    Kasim, Muhammad Firmansyah; Ceurvorst, Luke; Sadler, James; Burrows, Philip N; Trines, Raoul; Holloway, James; Wing, Matthew; Bingham, Robert; Norreys, Peter

    2015-01-01

    One obstacle in plasma accelerator development is the limitation of techniques to diagnose and measure plasma wakefield parameters. In this paper, we present a novel concept for the density measurement of a plasma wakefield using photon acceleration, supported by extensive particle in cell simulations of a laser pulse that copropagates with a wakefield. The technique can provide the perturbed electron density profile in the laser’s reference frame, averaged over the propagation length, to be accurate within 10%. We discuss the limitations that affect the measurement: small frequency changes, photon trapping, laser displacement, stimulated Raman scattering, and laser beam divergence. By considering these processes, one can determine the optimal parameters of the laser pulse and its propagation length. This new technique allows a characterization of the density perturbation within a plasma wakefield accelerator.

  14. PIC simulation of electron acceleration in an underdense plasma

    Directory of Open Access Journals (Sweden)

    S Darvish Molla

    2011-06-01

    Full Text Available One of the interesting laser-plasma phenomena, when the laser power is high and ultra-intense, is the generation of large-amplitude plasma waves (wakefields) and electron acceleration. An intense electromagnetic laser pulse can create plasma oscillations through the action of the nonlinear ponderomotive force. Electrons trapped in the wake can be accelerated to high energies. Of the wide variety of methods for generating a regular electric field in plasmas with strong laser radiation, the most attractive one at the present time is the scheme of the Laser Wakefield Accelerator (LWFA). In this method, a strong Langmuir wave is excited in the plasma. Electrons trapped in such a wave can acquire relativistic energies. In this paper, the PIC simulation of wakefield generation and electron acceleration in an underdense plasma with a short, ultra-intense laser pulse is discussed. A 2D electromagnetic PIC code, written in FORTRAN 90, was developed, and the propagation of different electromagnetic waves in vacuum and plasma is shown. Next, the accuracy of the implementation of the 2D electromagnetic code is verified by making it relativistic and simulating wakefield generation and electron acceleration in an underdense plasma. It is shown that when a symmetric electromagnetic pulse passes through the plasma, the longitudinal field generated in the plasma at the back of the pulse is weaker than the one due to an asymmetric electromagnetic pulse, and thus the electrons acquire less energy. For the asymmetric pulse, when the front part of the pulse has a shorter rise time than the back part, a stronger wakefield is generated in the plasma at the back of the pulse, and consequently the electrons acquire more energy. In the inverse case, when the rise time of the front part of the pulse is longer than that of the back part, a weaker wakefield is generated and this leads to the fact that the electrons
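
    The particle push at the heart of such an electromagnetic PIC code is commonly done with the Boris scheme. A minimal non-relativistic sketch in normalized units (q = m = 1); the uniform test fields are hypothetical, not the paper's simulated fields:

```python
import math

# Boris pusher: half electric kick, norm-preserving magnetic rotation,
# second half electric kick. Standard particle mover in PIC codes.
def cross(a, v):
    return [a[1] * v[2] - a[2] * v[1],
            a[2] * v[0] - a[0] * v[2],
            a[0] * v[1] - a[1] * v[0]]

def boris_push(v, E, B, dt):
    vm = [v[i] + 0.5 * dt * E[i] for i in range(3)]   # first half electric kick
    t = [0.5 * dt * B[i] for i in range(3)]           # rotation vector
    t2 = sum(c * c for c in t)
    s = [2.0 * c / (1.0 + t2) for c in t]
    vmt = cross(vm, t)
    vprime = [vm[i] + vmt[i] for i in range(3)]
    vps = cross(vprime, s)
    vplus = [vm[i] + vps[i] for i in range(3)]        # rotation conserves |v|
    return [vplus[i] + 0.5 * dt * E[i] for i in range(3)]  # second half kick

# Gyration in a uniform magnetic field: Boris rotation conserves speed
v = [1.0, 0.0, 0.0]
E, B, dt = [0.0, 0.0, 0.0], [0.0, 0.0, 1.0], 0.01
for _ in range(10_000):
    v = boris_push(v, E, B, dt)
speed = math.sqrt(sum(c * c for c in v))
```

    Exact energy conservation in a pure magnetic field is the property that makes the Boris scheme the default mover; a wakefield simulation replaces the uniform fields with fields interpolated from the grid at each step.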

  15. Estimation of direct laser acceleration in laser wakefield accelerators using particle-in-cell simulations

    CERN Document Server

    Shaw, J L; Marsh, K A; Tsung, F S; Mori, W B; Joshi, C

    2015-01-01

    Many current laser wakefield acceleration (LWFA) experiments are carried out in a regime where the laser pulse length is on the order of or longer than the wake wavelength and where ionization injection is employed to inject electrons into the wake. In these experiments, the trapped electrons will co-propagate with the longitudinal wakefield and the transverse laser field. In this scenario, the electrons can gain a significant amount of energy from both the direct laser acceleration (DLA) mechanism as well as the usual LWFA mechanism. Particle-in-cell (PIC) codes are frequently used to discern the relative contribution of these two mechanisms. However, if the longitudinal resolution used in the PIC simulations is inadequate, it can produce numerical heating that can overestimate the transverse motion, which is important in determining the energy gain due to DLA. We have therefore carried out a systematic study of this LWFA regime by varying the longitudinal resolution of PIC simulations from the standard, bes...

  16. The common component architecture for particle accelerator simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Dechow, D. R.; Norris, B.; Amundson, J.; Mathematics and Computer Science; Tech-X Corp; FNAL

    2007-01-01

    Synergia2 is a beam dynamics modeling and simulation application for high-energy accelerators such as the Tevatron at Fermilab and the International Linear Collider, which is now under planning and development. Synergia2 is a hybrid, multilanguage software package comprising two separate accelerator physics packages (Synergia and MaryLie/Impact) and one high-performance computer science package (PETSc). We describe our approach to producing a set of beam-dynamics-specific software components based on the Common Component Architecture specification. Among other topics, we describe particular experiences with the following tasks: using Python steering to guide the creation of interfaces and to prototype components; working with legacy Fortran codes; and an example component-based beam dynamics simulation.

  17. The Particle Accelerator Simulation Code PyORBIT

    Energy Technology Data Exchange (ETDEWEB)

    Gorlov, Timofey V [ORNL; Holmes, Jeffrey A [ORNL; Cousineau, Sarah M [ORNL; Shishlo, Andrei P [ORNL

    2015-01-01

    The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of algorithms of the original ORBIT code that was developed for the Spallation Neutron Source accelerator at Oak Ridge National Laboratory. The PyORBIT code has a two-level structure. The upper level uses the Python programming language to control the flow of intensive calculations performed by the lower level code implemented in the C++ language. The parallel capabilities are based on MPI communications. PyORBIT is an open-source code accessible to the public through the Google Open Source Projects Hosting service.
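As a loose illustration of this two-level design (not PyORBIT's actual API; the classes and the thin-lens optics below are invented for the example), a Python upper level can steer vectorized numerical kernels that stand in for the compiled C++ layer:

```python
import numpy as np

class Drift:
    """Python-level lattice node; the vectorized NumPy update below stands in
    for a compiled C++ tracking routine in a real two-level code."""
    def __init__(self, L):
        self.L = L
    def track(self, bunch):
        # bunch columns: x, x', y, y'
        bunch[:, 0] += self.L * bunch[:, 1]
        bunch[:, 2] += self.L * bunch[:, 3]

class Quad:
    """Thin-lens quadrupole kick: focusing in x, defocusing in y."""
    def __init__(self, k):
        self.k = k                            # strength 1/f
    def track(self, bunch):
        bunch[:, 1] -= self.k * bunch[:, 0]
        bunch[:, 3] += self.k * bunch[:, 2]

def track_lattice(lattice, bunch):
    """Upper level: plain Python controls the flow of heavy computation."""
    for node in lattice:
        node.track(bunch)
    return bunch
```

The interpreted layer stays cheap because each `track` call does the numerical work on the whole bunch at once, which is the design rationale behind the Python/C++ split.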

  18. Accelerate micromagnetic simulations with GPU programming in MATLAB

    OpenAIRE

    Zhu, Ru

    2015-01-01

    A finite-difference micromagnetic simulation code written in MATLAB is presented with Graphics Processing Unit (GPU) acceleration. Its high performance on a GPU is demonstrated in comparison with a typical Central Processing Unit (CPU) based code. The GPU-to-CPU speed-up is shown to exceed 30 for larger problem sizes on a mid-range GPU in single precision. The code is less than 200 lines and is suitable for developing new algorithms.

  20. Simulation on Buildup of Electron Cloud in Proton Circular Accelerator

    CERN Document Server

    Liu, Yu-Dong

    2014-01-01

    Electron cloud interaction with a high energy positive beam is believed to be responsible for various undesirable effects, such as vacuum degradation, collective beam instability and even beam loss, in high power proton circular accelerators. An important uncertainty in predicting electron cloud instability lies in the detailed processes of generation and accumulation of the electron cloud. Simulation of the build-up of the electron cloud is necessary for further studies of the beam instability it causes. The China Spallation Neutron Source (CSNS) is the largest scientific project under construction in China; its accelerator complex includes two main parts: an H- linac and a rapid cycling synchrotron (RCS). The RCS accumulates the 80 MeV proton beam and accelerates it to 1.6 GeV with a repetition rate of 25 Hz. During beam injection at lower energy, the emerging electron cloud may cause serious instability and beam loss on the vacuum pipe. A simulation code has been developed to simulate the build-up, distribution and dens...

  1. Community Petascale Project for Accelerator Science and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Warren B. Mori

    2013-02-01

    The UCLA Plasma Simulation Group is a major partner of the "Community Petascale Project for Accelerator Science and Simulation". This is the final technical report. We include an overall summary, a list of publications, and individual progress reports for each year. During the past five years we have made tremendous progress in enhancing the capabilities of OSIRIS and QuickPIC, in developing new algorithms and data structures for PIC codes to run on GPUs and future many-core architectures, and in using these codes to model experiments and make new scientific discoveries. Here we summarize some highlights for which SciDAC was a major contributor.

  2. Electromagnetic Simulations of Helical-Based Ion Acceleration Structures

    CERN Document Server

    Nelson, Scott D; Caporaso, George; Friedman, Alex; Poole, Brian R; Waldron, William

    2005-01-01

    Helix structures have been proposed* for accelerating low energy ion beams using MV/m fields, in order to increase the coupling efficiency of the pulsed power system and to match the electromagnetic wave propagation speed to the particle beam speed as the beam gains energy. Calculations presented here show the electromagnetic field as it propagates along the helix structure, field stresses around the helix structure (for voltage breakdown determination), optimizations to the helix and driving pulsed-power waveform, and simulations showing test particles interacting with the simulated time-varying fields.

  3. Accelerating particle-in-cell simulations using multilevel Monte Carlo

    Science.gov (United States)

    Ricketson, Lee

    2015-11-01

    Particle-in-cell (PIC) simulations have been an important tool in understanding plasmas since the dawn of the digital computer. Much more recently, the multilevel Monte Carlo (MLMC) method has accelerated particle-based simulations of a variety of systems described by stochastic differential equations (SDEs), from financial portfolios to porous media flow. The fundamental idea of MLMC is to perform correlated particle simulations using a hierarchy of different time steps, and to use these correlations for variance reduction on the fine-step result. This framework is directly applicable to the Langevin formulation of Coulomb collisions, as demonstrated in previous work, but in order to apply to PIC simulations of realistic scenarios, MLMC must be generalized to incorporate self-consistent evolution of the electromagnetic fields. We present such a generalization, with rigorous results concerning its accuracy and efficiency. We present examples of the method in the collisionless, electrostatic context, and discuss applications and extensions for the future.
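The telescoping-sum idea behind MLMC can be sketched on a toy Ornstein-Uhlenbeck SDE (parameters and sample counts here are illustrative, not from the talk): correlated fine/coarse Euler-Maruyama pairs share the same Brownian increments, and the small-variance level corrections are summed:

```python
import numpy as np

def mlmc_mean(a=1.0, sigma=0.5, T=1.0, X0=1.0, L=4, N=20000, seed=0):
    """Multilevel Monte Carlo estimate of E[X_T] for dX = -a X dt + sigma dW.

    Level l uses 2**(l+1) Euler-Maruyama steps; the coarse path on each level
    reuses the fine path's Brownian increments, so the corrections Y_l = Xf - Xc
    have small variance and need few samples.
    """
    rng = np.random.default_rng(seed)

    def euler_pair(l, n):
        nf = 2 ** (l + 1)                     # fine steps on level l
        hf = T / nf
        dW = rng.normal(0.0, np.sqrt(hf), size=(n, nf))
        Xf = np.full(n, X0)
        for k in range(nf):
            Xf = Xf - a * Xf * hf + sigma * dW[:, k]
        if l == 0:
            return Xf, None
        hc = 2 * hf                           # coarse path: summed increment pairs
        Xc = np.full(n, X0)
        for k in range(nf // 2):
            dWc = dW[:, 2 * k] + dW[:, 2 * k + 1]
            Xc = Xc - a * Xc * hc + sigma * dWc
        return Xf, Xc

    est = 0.0
    for l in range(L + 1):                    # telescoping sum over levels
        Xf, Xc = euler_pair(l, N)
        est += np.mean(Xf) if Xc is None else np.mean(Xf - Xc)
    return est
```

In a production MLMC code the per-level sample counts are chosen from estimated variances rather than fixed; the generalization described in the talk additionally couples the field solve across levels.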

  4. Quantification of Fault-Zone Plasticity Effects with Spontaneous Rupture Simulations

    Science.gov (United States)

    Roten, D.; Olsen, K. B.; Day, S. M.; Cui, Y.

    2017-02-01

    Previous studies have shown that plastic yielding in crustal rocks in the fault zone may impose a physical limit to extreme ground motions. We explore the effects of fault-zone non-linearity on peak ground velocities (PGVs) by simulating a suite of surface-rupturing strike-slip earthquakes in a medium governed by Drucker-Prager plasticity using the AWP-ODC finite-difference code. Our simulations cover magnitudes ranging from 6.5 to 8.0, three different rock strength models, and average stress drops of 3.5 and 7.0 MPa, with a maximum frequency of 1 Hz and a minimum shear-wave velocity of 500 m/s. Friction angles and cohesions in our rock models are based on strength criteria which are frequently used for fractured rock masses in civil and mining engineering. For an average stress drop of 3.5 MPa, plastic yielding reduces near-fault PGVs by 15-30% in pre-fractured, low strength rock, but less than 1% in massive, high-quality rock. These reductions are almost insensitive to magnitude. If the stress drop is doubled, plasticity reduces near-fault PGVs by 38-45% and 5-15% in rocks of low and high strength, respectively. Because non-linearity reduces slip rates and static slip near the surface, plasticity acts in addition to, and may partially be emulated by, a shallow velocity-strengthening layer. The effects of plasticity are exacerbated if a fault damage zone with reduced shear-wave velocities and reduced rock strength is present. In the linear case, fault-zone trapped waves result in higher near-surface peak slip rates and ground velocities compared to simulations without a low-velocity zone. These amplifications are balanced out by fault-zone plasticity if rocks in the damage zone exhibit low-to-moderate strength throughout the depth extent of the low-velocity zone (˜ 5 km). We also perform dynamic non-linear simulations of a high stress drop (8 MPa) M 7.8 earthquake rupturing the southern San Andreas fault along 250 km from Indio to Lake Hughes. Non-linearity in

  5. Simulation of a flexible wind turbine response to a grid fault

    DEFF Research Database (Denmark)

    Hansen, Anca D.; Cutululis, A. Nicolaos; Sørensen, Poul;

    2007-01-01

    in power system simulation tools applying simplified mechanical models of the drive train. This paper presents simulations of the wind turbine load response to grid faults with an advanced aeroelastic computer code (HAWC2). The core of this code is an advanced model for the flexible structure of the wind...... turbines, taking the flexibility of the tower, blades and other components of the wind turbines into account. The effect of a grid fault on the wind turbine flexible structure is assessed for a typical fixed speed wind turbine, equipped with an induction generator....

  6. Electromagnetic metamaterial simulations using a GPU-accelerated FDTD method

    Science.gov (United States)

    Seok, Myung-Su; Lee, Min-Gon; Yoo, SeokJae; Park, Q.-Han

    2015-12-01

    Metamaterials composed of artificial subwavelength structures exhibit extraordinary properties that cannot be found in nature. Designing artificial structures having exceptional properties plays a pivotal role in current metamaterial research. We present a new numerical simulation scheme for metamaterial research. The scheme is based on a graphic processing unit (GPU)-accelerated finite-difference time-domain (FDTD) method. The FDTD computation can be significantly accelerated when GPUs are used instead of only central processing units (CPUs). We explain how the fast FDTD simulation of large-scale metamaterials can be achieved through communication optimization in a heterogeneous CPU/GPU-based computer cluster. Our method also includes various advanced FDTD techniques: the non-uniform grid technique, the total-field/scattered-field (TFSF) technique, the auxiliary field technique for dispersive materials, the running discrete Fourier transform, and the complex structure setting. We demonstrate the power of our new FDTD simulation scheme by simulating the negative refraction of light in a coaxial waveguide metamaterial.
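The core of any such solver is the Yee leapfrog update; a minimal 1-D vacuum version (grid size and source are illustrative, and a GPU implementation would map these same array updates onto CUDA kernels) looks like:

```python
import numpy as np

def fdtd_1d(nx=400, nt=300):
    """Minimal 1-D vacuum FDTD (Yee scheme) with Courant number S = c*dt/dx = 1.

    Ez and Hy leapfrog in time; the untouched end cells Ez[0], Ez[-1] stay zero,
    acting as perfectly conducting boundaries that reflect the leftward pulse.
    """
    Ez = np.zeros(nx)
    Hy = np.zeros(nx - 1)
    src = 50
    for n in range(nt):
        Hy += Ez[1:] - Ez[:-1]                        # update H from curl of E
        Ez[1:-1] += Hy[1:] - Hy[:-1]                  # update E from curl of H
        Ez[src] += np.exp(-((n - 30.0) / 10.0) ** 2)  # soft Gaussian source
    return Ez
```

With S = 1 ("magic time step") the pulse advances exactly one cell per step; the techniques listed in the abstract (non-uniform grids, TFSF, dispersive materials) are refinements layered on this loop.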

  7. Simulating Electron Clouds in Heavy-Ion Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, R.H.; Friedman, A.; Kireeff Covo, M.; Lund, S.M.; Molvik,A.W.; Bieniosek, F.M.; Seidl, P.A.; Vay, J-L.; Stoltz, P.; Veitzer, S.

    2005-04-07

    Contaminating clouds of electrons are a concern for most accelerators of positively charged particles, but there are some unique aspects of heavy-ion accelerators for fusion and high-energy density physics which make modeling such clouds especially challenging. In particular, self-consistent electron and ion simulation is required, including a particle advance scheme which can follow electrons in regions where electrons are strongly-, weakly-, and un-magnetized. The authors describe their approach to such self-consistency, and in particular a scheme for interpolating between full-orbit (Boris) and drift-kinetic particle pushes that enables electron time steps long compared to the typical gyro period in the magnets. They present tests and applications: simulation of electron clouds produced by three different kinds of sources indicates the sensitivity of the cloud shape to the nature of the source; a first-of-a-kind self-consistent simulation of electron-cloud experiments on the High-Current Experiment (HCX) at Lawrence Berkeley National Laboratory, in which the machine can be flooded with electrons released by impact of the ion beam on an end plate, demonstrates the ability to reproduce key features of the ion-beam phase space; and simulation of a two-stream instability of thin beams in a magnetic field demonstrates the ability of the large-timestep mover to accurately calculate the instability.
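The full-orbit half of such an interpolated mover is the standard Boris step, sketched here in nonrelativistic form independently of the authors' code (field values and units are arbitrary):

```python
import numpy as np

def boris_push(v, E, B, qm, dt):
    """One Boris step: half electric kick, exact-norm magnetic rotation,
    half electric kick. In a pure magnetic field the speed is conserved to
    machine precision, which is why Boris is the standard full-orbit mover."""
    v_minus = v + 0.5 * qm * dt * E           # first half electric kick
    t = 0.5 * qm * dt * B                     # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)  # rotate around B
    v_plus = v_minus + np.cross(v_prime, s)
    return v_plus + 0.5 * qm * dt * E         # second half electric kick
```

The interpolation described in the abstract blends this push with a drift-kinetic one so that the time step need not resolve the gyro period inside magnets.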

  8. Simulation of Near-Fault High-Frequency Ground Motions from the Representation Theorem

    Science.gov (United States)

    Beresnev, Igor A.

    2017-07-01

    "What is the maximum possible ground motion near an earthquake fault?" is an outstanding question of practical significance in earthquake seismology. In establishing a possible theoretical cap on extreme ground motions, the representation integral of elasticity, providing an exact (within limits of applicability) solution for fault radiation at any frequency, is an under-utilized tool. The application of a numerical procedure leading to synthetic ground displacement, velocity, and acceleration time histories to modeling of the record at the Lucerne Valley hard-rock station, uniquely located 1.1 km from the rupture of the Mw 7.2 Landers, California event, using a seismologically constrained temporal form of slip on the fault, reveals that the shape of the displacement waveform can be modeled closely, given the simplicity of the theoretical model. High precision in the double integration, as well as carefully designed smoothing and filtering, are necessary to suppress the numerical noise in the high-frequency (velocity and acceleration) synthetic motions. Integration precision of at least eight decimal digits keeps the numerical error in the displacement waveforms generally well below 0.005% and reduces the error in the peak velocities and accelerations to levels acceptable to make the representation theorem a reliable tool in the practical evaluation of the magnitude of maximum possible ground motions in a wide frequency range of engineering interest.

  9. Electromagnetic simulation study of dielectric wall accelerator structures

    Institute of Scientific and Technical Information of China (English)

    ZHAO Quan-Tang; ZHANG Zi-Min; YUAN Ping; CAO Shu-Chun; SHEN Xiao-Kang; JING Yi; LIU Ming; ZHAO Hong-Wei

    2012-01-01

    Two types of dielectric wall accelerator (DWA) structures, a bi-polar Blumlein line and a zero integral pulse (ZIP) line structure, were investigated. Simulation of the high gradient insulator with a particle-in-cell code confirms that it has little influence on the axial electric field. The results of simulations using CST Microwave Studio indicate how the axial electric field is formed, and the electric field waveforms agree very well with the theoretical ones. The influence of layer-to-layer coupling in the ZIP structure is much smaller and its electric field waveform is much better, while the Blumlein structure's axial electric field has better axial stability. For both structures, it is found that a shorter pulse width gives a higher axial electric field and better pulse stability and fidelity. The CST simulations are very helpful for designing DWA structures.

  10. Accelerating transient simulation of linear reduced order models.

    Energy Technology Data Exchange (ETDEWEB)

    Thornquist, Heidi K.; Mei, Ting; Keiter, Eric Richard; Bond, Brad

    2011-10-01

    Model order reduction (MOR) techniques have been used to facilitate the analysis of dynamical systems for many years. Although existing model reduction techniques are capable of providing huge speedups in the frequency domain analysis (i.e. AC response) of linear systems, such speedups are often not obtained when performing transient analysis on the systems, particularly when coupled with other circuit components. Reduced system size, which is the ostensible goal of MOR methods, is often insufficient to improve transient simulation speed on realistic circuit problems. It can be shown that making the correct reduced order model (ROM) implementation choices is crucial to the practical application of MOR methods. In this report we investigate methods for accelerating the simulation of circuits containing ROM blocks using the circuit simulator Xyce.
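To make the projection step concrete, here is a minimal one-sided Krylov (moment-matching) reduction sketch in NumPy; this is a generic illustration, not the Xyce implementation, and the system sizes are arbitrary:

```python
import numpy as np

def krylov_rom(A, b, c, k):
    """One-sided moment-matching reduction of x' = A x + b u, y = c.x:
    build an orthonormal basis V of span{A^-1 b, ..., A^-k b} and project
    (Galerkin), which matches the first k transfer-function moments at s=0."""
    vecs = []
    v = np.linalg.solve(A, b)
    for _ in range(k):
        vecs.append(v)
        v = np.linalg.solve(A, v)
    V, _ = np.linalg.qr(np.column_stack(vecs))
    return V.T @ A @ V, V.T @ b, V.T @ c          # Ar, br, cr

def step_response(A, b, c, dt=0.05, steps=200):
    """Backward-Euler transient with unit step input: each step costs one
    linear solve of the model's size, which reduction shrinks from n to k."""
    x = np.zeros(len(b))
    M = np.eye(len(b)) - dt * A
    y = []
    for _ in range(steps):
        x = np.linalg.solve(M, x + dt * b)
        y.append(c @ x)
    return np.array(y)
```

The report's point is visible here: the k x k system is cheap to time-step in isolation, but how the ROM block is coupled to the rest of the circuit determines whether that translates into actual transient speedup.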

  11. DEM simulation of granular flows in a centrifugal acceleration field

    Science.gov (United States)

    Cabrera, Miguel Angel; Peng, Chong; Wu, Wei

    2017-04-01

    The main purpose of mass-flow experimental models is to abstract distinctive features of natural granular flows and allow their systematic study in the laboratory. In this process, particle size, space, time, and stress scales must be considered for the proper representation of specific phenomena [5]. One of the most challenging tasks in small-scale models is matching the range of stresses and strains in the particle and fluid media observed in a field event. Centrifuge modelling offers an alternative to upscale all gravity-driven processes, and it has recently been employed in the simulation of granular flows [1, 2, 3, 6, 7]. Centrifuge scaling principles are presented in Ref. [4], collecting a wide spectrum of static and dynamic models. However, for the case of kinematic processes, the non-uniformity of the centrifugal acceleration field plays a major role (i.e., Coriolis and inertial effects). In this work, we discuss a general formulation for the centrifugal acceleration field, implemented in a discrete element method (DEM) framework and validated with centrifuge experimental results. Conventional DEM simulations relate the volumetric forces to the gravitational force Gp = mpg. However, in the local coordinate system of a rotating centrifuge model, the cylindrical centrifugal acceleration field needs to be included. In this rotating system, the centrifugal acceleration of a particle depends on the rotating speed of the centrifuge, as well as on the position and speed of the particle in the rotating model. Therefore, we obtain the formulation of the centrifugal acceleration field by coordinate transformation. The numerical model is validated with a series of centrifuge experiments on monodispersed glass beads flowing down an inclined plane at different acceleration levels and slope angles. Further discussion leads to the numerical parameterization necessary for simulating equivalent granular flows under an augmented acceleration field.
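The position- and velocity-dependent body force described above can be sketched with the generic rotating-frame formula (symbols and numerical values are illustrative, not the paper's implementation):

```python
import numpy as np

def rotating_frame_accel(omega, r, v, g=np.array([0.0, 0.0, -9.81])):
    """Apparent acceleration of a particle in a frame rotating at omega (rad/s):

        a = g - omega x (omega x r) - 2 omega x v

    The centrifugal term points radially outward and grows with r, and the
    Coriolis term depends on the particle velocity, so unlike the constant
    body force Gp = mp*g of conventional DEM the field is non-uniform."""
    centrifugal = -np.cross(omega, np.cross(omega, r))
    coriolis = -2.0 * np.cross(omega, v)
    return g + centrifugal + coriolis
```

In a DEM integrator, this acceleration would simply replace the constant gravity vector inside the per-particle force loop.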

  12. Longitudinal RF capture and acceleration simulation in CSNS RCS

    Institute of Scientific and Technical Information of China (English)

    LIU Lin; TANG Jing-Yu; QIU Jing; WEI Tao

    2009-01-01

    China Spallation Neutron Source (CSNS) is a high power proton accelerator-based facility. Uncontrolled beam loss is a major concern in designing the CSNS, in order to control the radioactivation level. For the Rapid Cycling Synchrotron (RCS) of the CSNS, the repetition frequency is too high for the longitudinal motion to be fully adiabatic. Significant beam loss happens during the RF capture and initial acceleration of the injection period. To reduce the longitudinal beam loss, beam chopping and momentum offset painting methods are used in the RCS injection. This paper presents detailed studies on the longitudinal motion in the RCS by using ORBIT simulations, which include different beam chopping factors, momentum offsets and RF voltage optimization. With a trade-off between the longitudinal beam loss and the transverse incoherent tune shift that will also result in beam losses, optimized longitudinal painting schemes are obtained.
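The kind of phase-space tracking such simulations perform can be illustrated with a toy turn-by-turn longitudinal map (arbitrary units; the voltage, harmonic number and slip factor below are illustrative choices, not CSNS parameters):

```python
import math

def track_longitudinal(phi, dE, turns, V=0.02, h=2, eta=-0.5, E=1.0, phi_s=0.0):
    """Toy synchrotron longitudinal map: each turn applies an RF energy kick
    relative to the synchronous phase, then a phase slip proportional to the
    energy error. With eta*cos(phi_s) < 0 the motion is a stable synchrotron
    oscillation inside the RF bucket."""
    for _ in range(turns):
        dE += V * (math.sin(phi) - math.sin(phi_s))   # RF gap kick
        phi += 2.0 * math.pi * h * eta * dE / E       # phase slip
    return phi, dE
```

Capture and painting studies like those in the abstract amount to sweeping the RF voltage V and the injected (phi, dE) distribution in maps of this kind, with realistic machine parameters and space charge added.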

  13. Enhancing protein adsorption simulations by using accelerated molecular dynamics.

    Directory of Open Access Journals (Sweden)

    Christian Mücksch

    Full Text Available The atomistic modeling of protein adsorption on surfaces is hampered by the different time scales of the simulation ([Formula: see text][Formula: see text]s) and experiment (up to hours), and by the accordingly different 'final' adsorption conformations. We provide evidence that the method of accelerated molecular dynamics is an efficient tool to obtain equilibrated adsorption states. As a model system we study the adsorption of the protein BMP-2 on graphite in an explicit salt-water environment. We demonstrate that, due to the considerably improved sampling of conformational space, accelerated molecular dynamics makes it possible to observe the complete unfolding and spreading of the protein on the hydrophobic graphite surface. This result is in agreement with the general finding of protein denaturation upon contact with hydrophobic surfaces.
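The standard accelerated-MD bias (the Hamelberg-type boost potential; the threshold E and smoothing parameter alpha are method inputs, and the numbers used below are arbitrary) is simple to state:

```python
def amd_boost(V, E, alpha):
    """Accelerated-MD boost: when the potential V is below the threshold E,
    add

        dV = (E - V)^2 / (alpha + E - V)

    so that basins are raised and barriers flattened, speeding up
    conformational transitions such as unfolding on a surface. Above the
    threshold the potential is untouched."""
    if V >= E:
        return 0.0
    return (E - V) ** 2 / (alpha + E - V)
```

Deeper basins receive a larger boost, which is what improves the sampling of conformational space; observables are afterwards reweighted to recover canonical averages.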

  14. Fast acceleration of 2D wave propagation simulations using modern computational accelerators.

    Science.gov (United States)

    Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H; Kay, Matthew

    2014-01-01

    Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case-study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved more than 150x speedup above the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs of at least 200x faster than the sequential implementation and 30x faster than a parallelized OpenMP implementation. An implementation of OpenMP on Intel MIC coprocessor provided speedups of 120x with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of

  16. Numerical simulation of the floor water-inrush in working face influenced by fault structure

    Institute of Scientific and Technical Information of China (English)

    CHENG Jiu-long; CAO Ji-sheng; XU Jin-peng; YU Shi-jian; TIAN Li

    2007-01-01

    A numerical simulation method was used to study the floor water-inrush mechanism in a working face influenced by fault structure; many kinds of models were set up and numerical calculations performed using the large finite element software ANSYS and the element birth-death method. The results show that the higher the underground water pressure, the larger the floor displacement and the greater the possibility of water-inrush; a floor containing a fault structure is more prone to water-inrush than one without, and a floor with multiple groups of cracks is more prone to water-inrush than one with a single group of cracks. The numerical simulation results provide a useful forecast of water-inrush in the working face.

  17. Design & simulation of a 800 kV dynamitron accelerator by CST studio

    Directory of Open Access Journals (Sweden)

    A M Aghayan

    2015-09-01

    Full Text Available Nowadays, middle-energy electrostatic accelerators are widely used in industry due to their high efficiency and low cost compared with other types of accelerators. In this paper, the importance and applications of electrostatic accelerators with 800 keV energy are studied. The design and simulation of the capacitive coupling of a dynamitron accelerator are proposed. Furthermore, the accelerating tube is designed and simulated by means of CST Studio Suite.

  18. Simulation of broad-band strong ground motion for a hypothetical Mw 7.1 earthquake on the Enriquillo Fault in Haiti

    Science.gov (United States)

    Douilly, Roby; Mavroeidis, George P.; Calais, Eric

    2017-10-01

    The devastating 2010 Mw 7.0 Haiti earthquake demonstrated the need to improve mitigation and preparedness for future seismic events in the region. Previous studies have shown that the earthquake did not occur on the Enriquillo Fault, the main plate boundary fault running through the heavily populated Port-au-Prince region, but on the nearby and previously unknown transpressional Léogâne Fault. Slip on that fault has increased stresses on the segment of Enriquillo Fault to the east of Léogâne, which terminates in the ˜3-million-inhabitant capital city of Port-au-Prince. In this study, we investigate ground shaking in the vicinity of Port-au-Prince, if a hypothetical rupture similar to the 2010 Haiti earthquake occurred on that segment of the Enriquillo Fault. We use a finite element method and assumptions on regional tectonic stress to simulate the low-frequency ground motion components using dynamic rupture propagation for a 52-km-long segment. We consider eight scenarios by varying parameters such as hypocentre location, initial shear stress and fault dip. The high-frequency ground motion components are simulated using the specific barrier model in the context of the stochastic modeling approach. The broad-band ground motion synthetics are subsequently obtained by combining the low-frequency components from the dynamic rupture simulation with the high-frequency components from the stochastic simulation using matched filtering at a crossover frequency of 1 Hz. Results show that rupture on a vertical Enriquillo Fault generates larger horizontal permanent displacements in Léogâne and Port-au-Prince than rupture on a south-dipping Enriquillo Fault. The mean horizontal peak ground acceleration (PGA), computed at several sites of interest throughout Port-au-Prince, has a value of ˜0.45 g, whereas the maximum horizontal PGA in Port-au-Prince is ˜0.60 g. Even though we only consider a limited number of rupture scenarios, our results suggest more intense ground
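The crossover combination step can be sketched with complementary zero-phase spectral weights (a Butterworth-style weight is assumed here for illustration; the study's exact matched filters may differ, and the signals below are synthetic):

```python
import numpy as np

def broadband_combine(low_syn, high_syn, dt, fc=1.0, order=4):
    """Hybrid broad-band synthetic: weight the dynamic-rupture trace by a
    low-pass response and the stochastic trace by its complement (1 - w) at
    the crossover frequency fc, then sum. The weights are applied in the
    frequency domain, so the combination is zero-phase by construction."""
    n = len(low_syn)
    f = np.fft.rfftfreq(n, dt)
    w = 1.0 / (1.0 + (f / fc) ** (2 * order))     # Butterworth-like low-pass
    lo = np.fft.irfft(np.fft.rfft(low_syn) * w, n)
    hi = np.fft.irfft(np.fft.rfft(high_syn) * (1.0 - w), n)
    return lo + hi
```

Because the two weights sum exactly to one at every frequency, a signal present in both inputs passes through unchanged, which is the point of matching the filters at the crossover.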

  19. Relativistic Klystron Two-Beam Accelerator Simulation Code Development

    Science.gov (United States)

    Lidia, Steven; Ryne, Robert

    1997-05-01

    We present recent work on the development and testing of a 3-D simulation code for relativistic klystron two-beam accelerators (RK-TBAs). This new code utilizes symplectic integration techniques to push macroparticles, coupled to a circuit equation framework that advances the fields in the cavities. Space charge effects are calculated using a Green's function approach, and pipe wall effects are included in the electrostatic approximation. We present simulations of the LBNL/LLNL RK-TBA device, emphasizing cavity power development and beam dynamics, including the high- and low-frequency beam break-up instabilities.

  20. Isogeometric Simulation of Lorentz Detuning in Superconducting Accelerator Cavities

    CERN Document Server

    Corno, Jacopo; De Gersem, Herbert; Schöps, Sebastian

    2016-01-01

    Cavities in linear accelerators suffer from eigenfrequency shifts due to mechanical deformation caused by the electromagnetic radiation pressure, a phenomenon known as Lorentz detuning. Estimating the frequency shift to the needed accuracy by means of standard finite element methods is a complex task, due to the non-exact representation of the geometry and the necessity for mesh refinement when using low-order basis functions. In this paper, we use Isogeometric Analysis for discretising both the mechanical deformations and the electromagnetic fields in a coupled multiphysics simulation approach. The combined high-order approximation of both leads to high accuracies at a substantially lower computational cost.

  1. Piloted Simulator Evaluation Results of New Fault-Tolerant Flight Control Algorithm

    NARCIS (Netherlands)

    Lombaerts, T.J.J.; Smaili, M.H.; Stroosma, O.; Chu, Q.P.; Mulder, J.A.; Joosten, D.A.

    2010-01-01

    A high fidelity aircraft simulation model, reconstructed using the Digital Flight Data Recorder (DFDR) of the 1992 Amsterdam Bijlmermeer aircraft accident (Flight 1862), has been used to evaluate a new Fault-Tolerant Flight Control Algorithm in an online piloted evaluation. This paper focuses on the

  2. Parallel Critical Path Tracing—— A Fault Simulation Algorithm for Combinational Circuits

    Institute of Scientific and Technical Information of China (English)

    魏道政

    1990-01-01

    Critical path tracing, a fault simulation method for gate-level combinational circuits, is extended to parallel critical path tracing for functional block-level combinational circuits. If the word length of the host computer is m, then parallel critical path tracing will be approximately m times faster than the original method.
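The word-level parallelism it exploits is the classic parallel-pattern trick: m test patterns are packed into the bits of one machine word, so a single bitwise operation simulates a gate for all m patterns at once. A toy illustration (the circuit, node name and fault model are chosen for the example, not taken from the paper):

```python
def pack(bits):
    """Pack a list of per-pattern logic values (one per test) into one word."""
    word = 0
    for i, v in enumerate(bits):
        word |= (v & 1) << i
    return word

def simulate(a, b, c, sa_node=None, sa_val=0, width=4):
    """Tiny combinational circuit y = (a AND b) OR (NOT c), evaluated for
    `width` test patterns at once with bitwise ops. Optionally force the
    internal node 'n1' (the AND output) to a stuck-at value."""
    mask = (1 << width) - 1
    n1 = a & b
    if sa_node == "n1":
        n1 = mask if sa_val else 0
    return (n1 | (~c & mask)) & mask

def detected(a, b, c, sa_node, sa_val, width=4):
    """Bit i is set iff test pattern i detects the given stuck-at fault:
    the fault-free and faulty words are compared with one XOR."""
    return simulate(a, b, c, width=width) ^ simulate(a, b, c, sa_node, sa_val, width)
```

Critical path tracing avoids explicit faulty-circuit simulation by tracing sensitized paths from outputs back to inputs, but it relies on exactly this packed-word evaluation of the fault-free circuit.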

  3. VHDL-AMS fault simulation for testing DNA bio-sensing arrays

    NARCIS (Netherlands)

    Kerkhoff, H.G.; Zhang, X.; Liu, H.; Richardson, A.; Nouet, P.; Azais, F.; Zhang, Xiao

    2005-01-01

    The market of microelectronic fluidic arrays for biomedical applications, like DNA determination, is rapidly increasing. In order to evaluate these systems in terms of required design-for-test structures, fault simulations in both fluidic and electronic domains are necessary. VHDL-AMS can be used su

  4. Accelerating Climate and Weather Simulations through Hybrid Computing

    Science.gov (United States)

    Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark

    2011-01-01

    Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.

  5. Simulation of PEP-II Accelerator Backgrounds Using TURTLE

    CERN Document Server

    Barlow, Roger J; Kozanecki, Witold; Majewski, Stephanie; Roudeau, Patrick; Stocchi, Achille

    2005-01-01

    We present studies of accelerator-induced backgrounds in the BaBar detector at the SLAC B-Factory, carried out using a modified version of the DECAY TURTLE simulation package. Lost-particle backgrounds in PEP-II are dominated by a combination of beam-gas bremsstrahlung, beam-gas Coulomb scattering, radiative-Bhabha events and beam-beam blow-up. The radiation damage and detector occupancy caused by the associated electromagnetic shower debris can limit the usable luminosity. In order to understand and mitigate such backgrounds, we have performed a full programme of beam-gas and luminosity-background simulations that include the effects of the detector solenoidal field, detailed modelling of limiting apertures in both collider rings, and optimization of the betatron collimation scheme in the presence of large transverse tails.

  6. Spatial Verification of Earthquake Simulators Using Self-Consistent Metrics for Off-Fault Seismicity

    Science.gov (United States)

    Wilson, J. M.; Yoder, M. R.; Rundle, J. B.

    2015-12-01

    We address the problem of verifying the self-consistency of earthquake simulators with the data from which their parameters are drawn. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of the earthquake fault system on which the earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements can be included in these simulations as well. In general, the parameters are adjusted so that natural earthquake sequences are matched in their scaling properties in an optimal way. Generally, these parameter choices are based on paleoseismic data extending over many hundreds to thousands of years. However, one of the problems encountered is the verification of the simulations applied to current earthquake seismicity. It is this problem, for which no currently accepted solution has been proposed, that is the objective of the present paper. Physically based earthquake simulators allow the generation of many thousands of years of simulated seismicity, allowing for robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales for observation. Following past simulator and forecast model verification efforts, we approach the challenges in spatial forecast verification for simulators; namely, that simulator output events are confined to the modeled faults, while observed earthquakes often occur off of known faults. We present two methods for overcoming this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a variation of the epidemic-type aftershock sequence (ETAS) model, which smears the simulator catalog seismicity over the entire test region. To test these methods, a Receiver Operating Characteristic (ROC) plot was produced by comparing the rate maps to observed m>6.0 earthquakes since 1980. 
We found that the nearest-neighbor mapping produced poor forecasts, while the modified ETAS
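    The ROC construction used for scoring can be sketched on toy data: threshold a forecast rate map at decreasing levels and record, per spatial cell, the hit rate against the false-alarm rate. The rate map and "observed" events below are synthetic placeholders, not the paper's catalogs.

```python
import numpy as np

# Minimal ROC sketch for spatial forecast verification: cells whose forecast
# rate exceeds a threshold are "on"; hits are on-cells containing an observed
# event, false alarms are on-cells without one.
rng = np.random.default_rng(0)
rate = rng.random((20, 20))                          # forecast rate per cell
observed = rate + 0.3 * rng.random((20, 20)) > 1.0   # toy correlated "events"

hits, falses = [], []
for thr in np.linspace(1.0, 0.0, 21):                # sweep threshold down
    forecast_on = rate >= thr
    hits.append((forecast_on & observed).sum() / max(observed.sum(), 1))
    falses.append((forecast_on & ~observed).sum() / max((~observed).sum(), 1))

# the curve runs from (0, 0) at the strictest threshold to (1, 1) at the laxest
print((falses[0], hits[0]), (falses[-1], hits[-1]))
```

    A forecast is informative when the curve bows above the diagonal, i.e. hits accumulate faster than false alarms as the threshold is relaxed.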

  7. The Comprehensive Study of Electrical Faults in PV Arrays

    Directory of Open Access Journals (Sweden)

    M. Sabbaghpur Arani

    2016-01-01

    Full Text Available The rapid growth of the solar industry over the past several years has expanded the significance of photovoltaic (PV) systems. Fault analysis in solar photovoltaic (PV) arrays is a fundamental task for increasing reliability, efficiency, and safety in PV systems; if not detected, faults may not only reduce power generation and accelerate system aging but also threaten the availability of the whole system. Due to the current-limiting nature and nonlinear output characteristics of PV arrays, faults in PV arrays may go undetected. In this paper, all possible faults that happen in a PV system have been classified, and six common faults (shading condition, open-circuit fault, degradation fault, line-to-line fault, bypass diode fault, and bridging fault) have been implemented in a 7.5 kW PV farm. Based on the simulation results, both normal operational curves and fault curves have been compared.
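    As a rough illustration of why such fault classes are distinguishable in principle, a toy rule-based screen on string measurements might look like the following. The thresholds and labels are invented for illustration; the paper's point is precisely that the current-limiting, nonlinear PV characteristic makes real detection harder than this sketch suggests.

```python
# Toy screen for a few of the PV fault classes listed in the abstract, based
# on the ratio of measured string current/voltage to the expected
# maximum-power-point (MPP) values. All thresholds are illustrative.
def screen(i_meas, v_meas, i_mpp, v_mpp):
    di, dv = i_meas / i_mpp, v_meas / v_mpp
    if di < 0.05:
        return "open-circuit fault"            # string carries no current
    if dv < 0.7 and di > 0.9:
        return "line-to-line / bridging fault" # voltage collapse, current held
    if di < 0.7 and dv > 0.9:
        return "shading or degradation"        # current loss, voltage held
    return "normal"

print(screen(0.0, 400.0, i_mpp=8.0, v_mpp=400.0))  # open-circuit fault
print(screen(7.8, 220.0, i_mpp=8.0, v_mpp=400.0))  # line-to-line / bridging
print(screen(5.0, 395.0, i_mpp=8.0, v_mpp=400.0))  # shading or degradation
```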

  8. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids

    Energy Technology Data Exchange (ETDEWEB)

    Duru, Kenneth, E-mail: kduru@stanford.edu [Department of Geophysics, Stanford University, Stanford, CA (United States); Dunham, Eric M. [Department of Geophysics, Stanford University, Stanford, CA (United States); Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA (United States)

    2016-01-15

    Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge–Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture
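    The SBP property on which the stability proof rests can be illustrated with the standard second-order diagonal-norm operator (a sketch only; the paper uses sixth-order interior stencils): D = H⁻¹Q with Q + Qᵀ = diag(−1, 0, …, 0, 1), which mimics integration by parts discretely.

```python
import numpy as np

# Second-order diagonal-norm summation-by-parts (SBP) first-derivative
# operator on n grid points with spacing h. This demonstrates the defining
# property the paper relies on, not the sixth-order operator it actually uses.
n, h = 11, 0.1
H = h * np.eye(n)
H[0, 0] = H[-1, -1] = h / 2           # diagonal norm (quadrature) matrix

Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5        # boundary closures

D = np.linalg.inv(H) @ Q              # the SBP derivative operator

# SBP property: Q + Q^T reproduces the boundary term of integration by parts
B = np.zeros((n, n))
B[0, 0], B[-1, -1] = -1.0, 1.0
assert np.allclose(Q + Q.T, B)

# D differentiates linear functions exactly, including at the boundaries
x = np.linspace(0.0, 1.0, n)
assert np.allclose(D @ x, np.ones(n))
print("SBP property verified")
```

    It is this algebraic structure that lets the semi-discrete energy estimate mirror the continuous one when boundary and interface conditions are imposed weakly with penalties.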

  9. Analysis and Simulation of Fault Characteristics of Power Switch Failures in Distribution Electronic Power Transformers

    Directory of Open Access Journals (Sweden)

    Dan Wang

    2013-08-01

    Full Text Available This paper presents research on the voltage and current distortion in the input stage, isolation stage and output stage of the Distribution Electronic Power Transformer (D-EPT) after open-circuit and short-circuit faults of its power switches. In this paper, the operational principles and control methods for the input stage, isolation stage and output stage of the D-EPT, which operate as a cascaded H-bridge rectifier, DC-DC converter and inverter, respectively, are introduced. Based on conclusions derived from the performance analysis of the D-EPT after the faults, this paper identifies the effects of its topology design and control scheme on the current and voltage distortion. According to the EPT fault characteristics, since the waveforms of the relevant components depend heavily on the location of the faulty switch, it is easy to locate its exact position. Finally, the fault characteristics peculiar to the D-EPT are analyzed and further examined through simulation on the Saber platform, and a fault location diagnosis algorithm is presented.

  10. Three-dimensional dynamic rupture simulations across interacting faults: The Mw7.0, 2010, Haiti earthquake

    Science.gov (United States)

    Douilly, R.; Aochi, H.; Calais, E.; Freed, A. M.

    2015-02-01

    The mechanisms controlling rupture propagation between fault segments during a large earthquake are key to the hazard posed by fault systems. Rupture initiation on a smaller fault sometimes transfers to a larger fault, resulting in a significant event (e.g., 2002 M7.9 Denali USA and 2010 M7.1 Darfield New Zealand earthquakes). In other cases rupture is constrained to the initial fault and does not transfer to nearby faults, resulting in events of more moderate magnitude. This was the case of the 1989 M6.9 Loma Prieta and 2010 M7.0 Haiti earthquakes which initiated on reverse faults abutting against a major strike-slip plate boundary fault but did not propagate onto it. Here we investigate the rupture dynamics of the Haiti earthquake, seeking to understand why rupture propagated across two segments of the Léogâne fault but did not propagate to the adjacent Enriquillo Plantain Garden Fault, the major 200 km long plate boundary fault cutting through southern Haiti. We use a finite element model to simulate propagation of rupture on the Léogâne fault, varying friction and background stress to determine the parameter set that best explains the observed earthquake sequence, in particular, the ground displacement. The two slip patches inferred from finite fault inversions are explained by the successive rupture of two fault segments oriented favorably with respect to the rupture propagation, while the geometry of the Enriquillo fault did not allow shear stress to reach failure.

  11. Final Progress Report - Heavy Ion Accelerator Theory and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Haber, Irving

    2009-10-31

    The use of a beam of heavy ions to heat a target for the study of warm dense matter physics, high energy density physics, and ultimately to ignite an inertial fusion pellet, requires the achievement of beam intensities somewhat greater than have traditionally been obtained using conventional accelerator technology. The research program described here has substantially contributed to understanding the basic nonlinear intense-beam physics that is central to the attainment of the requisite intensities. Since it is very difficult to reverse intensity dilution, avoiding excessive dilution over the entire beam lifetime is necessary for achieving the required beam intensities on target. The central emphasis in this research has therefore been on understanding the nonlinear mechanisms that are responsible for intensity dilution and which generally occur when intense space-charge-dominated beams are not in detailed equilibrium with the external forces used to confine them. This is an important area of study because such lack of detailed equilibrium can be an unavoidable consequence of the beam manipulations, such as acceleration, bunching, and focusing, necessary to attain sufficient intensity on target. The primary tool employed in this effort has been simulation, particularly the WARP code, in concert with experiment, to identify the nonlinear dynamical characteristics that are important in practical high intensity accelerators. This research has gradually made a transition from the study of idealized systems and comparisons with theory, to the study of the fundamental scaling of intensity dilution in intense beams, and more recently to explicit identification of the mechanisms relevant to actual experiments. This work consists of two categories: work in direct support of beam physics directly applicable to NDCX, and a larger effort to further the general understanding of space-charge-dominated beam physics.

  12. Shear faults and dislocation core structure simulations in B2 FeAl

    Energy Technology Data Exchange (ETDEWEB)

    Vailhe, C.; Farkas, D. [Virginia Polytechnic Inst. and State Univ., Blacksburg, VA (United States). Dept. of Materials Science and Engineering

    1997-11-01

    Embedded atom potentials were derived for the Fe-Al system reproducing lattice and elastic properties of B2 FeAl. The structure and energy of vacancies, antisites and antiphase boundaries (APBs) were studied. A significant decrease in the APB energy was obtained for Fe-rich B2 alloys. Shear fault energies along the {110} and {112} planes were computed, showing that stable planar faults deviated from the exact APB fault. Core structures and critical Peierls stress values were simulated for the <100> and <111> dislocations. The superpartials created in the dissociation reactions were not of the 1/2<111> type, but 1/8<334>, in accordance with the stable planar fault in the {110} planes. The results of these simulations are discussed in terms of the mechanical behavior of FeAl and in comparison with B2 NiAl.

  13. Trypsinogen activation as observed in accelerated molecular dynamics simulations.

    Science.gov (United States)

    Boechi, Leonardo; Pierce, Levi; Komives, Elizabeth A; McCammon, J Andrew

    2014-11-01

    Serine proteases are involved in many fundamental physiological processes, and control of their activity mainly results from the fact that they are synthesized in an inactive form that becomes active upon cleavage. Three decades ago Martin Karplus's group performed the first molecular dynamics simulations of trypsin, the most studied member of the serine protease family, to address the transition from the zymogen to its active form. Given the computational power available at the time, only high-frequency fluctuations, but not the transition steps, could be observed. By performing accelerated molecular dynamics (aMD) simulations, an approach that increases the configurational sampling of atomistic simulations, we were able to observe the N-terminal tail insertion, a crucial step of the transition mechanism. Our results also support the hypothesis that the hydrophobic effect is the main force guiding the insertion step, although substantial enthalpic contributions are important in the activation mechanism. As the N-terminal tail insertion is a conserved step in the activation of serine proteases, these results afford a new perspective on the underlying thermodynamics of the transition from the zymogen to the active enzyme.
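    The aMD method referenced here adds a boost to the potential whenever it drops below a threshold, in the commonly used Hamelberg-McCammon form ΔV = (E − V)²/(α + E − V) for V < E. The sketch below uses illustrative E and α values, not parameters from this study.

```python
# Accelerated-MD boost potential (Hamelberg & McCammon form): below the
# threshold E the potential is raised by dV = (E - V)^2 / (alpha + E - V),
# which smooths barriers and enhances configurational sampling while leaving
# dynamics above E unmodified.
def amd_boost(V, E, alpha):
    """Return the boosted potential V* = V + dV."""
    if V >= E:
        return V                          # above threshold: unmodified
    dV = (E - V) ** 2 / (alpha + E - V)   # boost, always < (E - V)
    return V + dV

E, alpha = 0.0, 4.0
print(amd_boost(1.0, E, alpha))    # 1.0   (above E, untouched)
print(amd_boost(-4.0, E, alpha))   # -2.0  (boosted by 16/8 = 2)
```

    Smaller α flattens the basins more aggressively; the known ΔV along the trajectory allows reweighting of observables back to the original ensemble.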

  14. Simulations of Relativistic Collisionless Shocks: Shock Structure and Particle Acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Spitkovsky, Anatoly; /KIPAC, Menlo Park

    2006-04-10

    We discuss 3D simulations of relativistic collisionless shocks in electron-positron pair plasmas using the particle-in-cell (PIC) method. The shock structure is mainly controlled by the shock's magnetization (the "sigma" parameter). We demonstrate how the structure of the shock varies as a function of sigma for perpendicular shocks. At low magnetizations the shock is mediated mainly by the Weibel instability, which generates transient magnetic fields that can exceed the initial field. At larger magnetizations the shock is dominated by magnetic reflections. We demonstrate where the transition occurs and argue that it is impossible to have very low magnetization collisionless shocks in nature (in more than one spatial dimension). We further discuss the acceleration properties of these shocks, and show that higher-magnetization perpendicular shocks do not efficiently accelerate nonthermal particles in 3D. Among other astrophysical applications, this may pose a restriction on the structure and composition of gamma-ray bursts and pulsar wind outflows.

  15. Beam dynamics simulation of the Spallation Neutron Source linear accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Takeda, H.; Billen, J.H.; Bhatia, T.S.

    1998-12-31

    The accelerating structure for the Spallation Neutron Source (SNS) consists of a radio-frequency quadrupole linac (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL), and a coupled-cavity linac (CCL). The linac is operated at room temperature. The authors discuss the detailed design of the linac, which accelerates an H⁻ pulsed beam coming out of the RFQ at 2.5 MeV to 1000 MeV. They show a detailed transition from the 402.5 MHz DTL with a 4βλ structure to a CCDTL operated at 805 MHz with a 12βλ structure. After a discussion of the overall features of the linac, they present an end-to-end particle simulation using the new version of the PARMILA code for a beam starting from the RFQ entrance through the rest of the linac. At 1000 MeV, the beam is transported to a storage ring. The storage ring requires a large (±500 keV) energy spread. This is accomplished by operating the rf phase in the last section of the linac so that the particles are at the unstable fixed point of the separatrix. They present the zero-current phase advance, beam size, and beam emittance along the entire linac.

  16. Intelligent pump drives. Simulation, condition monitoring, fault diagnosis and energy efficiency; Intelligente Pumpenantriebe. Simulation, Condition Monitoring, Fehlerdiagnose und Energieeffizienz

    Energy Technology Data Exchange (ETDEWEB)

    Kleinmann, Stefan [Allweiler AG, Radolfzell (Germany); Leonardo, Domenico; Koller-Hodac, Agathe [Hochschule fuer Technik Rapperswil (Switzerland)

    2011-07-01

    The authors of the contribution under consideration report on the implementation of a simulation environment and a fault diagnostic system for an oil burner application. Through a modification of the application hardware, an additional increase in efficiency is achieved with advanced control of the pump drives, without adversely affecting the combustion process. All changes to the system can be investigated in simulation for feasibility and impact. Using the simulation model, a diagnostic system is developed that enables, for example, remote monitoring.

  17. Synthetic seismograms of ground motion near earthquake fault using simulated Green's function method

    Institute of Scientific and Technical Information of China (English)

    ZHAO Zhixin; ZHAO Zhao; XU Jiren; Ryuji Kubota

    2006-01-01

    Seismograms near the source fault were synthesized using the hybrid empirical Green's function method, where discretely simulated seismic waveforms are used as Green's functions instead of the observed waveforms of small earthquakes. The Green's function waveforms for small earthquakes were calculated by solving the wave equation using the pseudo-spectral method with the staggered-grid real FFT strategy under a detailed 2-D velocity structure in the Kobe region. The magnitude and seismic moment of the simulated Green's function waveforms were first determined using the relationship between fault length and the corner frequency of the source spectrum. The simulated Green's function waveforms were then employed to synthesize seismograms of strong ground motion near the earthquake fault. The synthetic seismograms of the target earthquake were computed based on a model with multiple source rupture processes. The results suggest that the synthesized seismograms coincide well with observed seismic waveforms of the 1995 Hyogo-ken Nanbu earthquake. The simulated Green's function method is very useful for predicting strong ground motion in regions without observed seismic waveforms. The present technique broadens the application field of the empirical Green's function method.

  18. Ball bearing defect models: A study of simulated and experimental fault signatures

    Science.gov (United States)

    Mishra, C.; Samantaray, A. K.; Chakraborty, G.

    2017-07-01

    A numerical-model-based virtual prototype of a system can serve as a tool to generate large amounts of data, replacing the dependence on expensive and often difficult-to-conduct experiments. However, the model must be accurate enough to substitute for the experiments. The abstraction level and the details considered during model development depend on the purpose for which the simulated data are to be generated. This article concerns the development of simulation models for deep groove ball bearings, which are used in a variety of rotating machinery. The purpose of the model is to generate vibration signatures which usually contain features of bearing defects. Three models with increasing levels of complexity are considered: a bearing-kinematics-based planar motion block diagram model developed in MATLAB Simulink, which does not explicitly consider cage and traction dynamics; a planar motion model with cage, traction and contact dynamics developed using the multi-energy-domain bond graph formalism in SYMBOLS software; and a detailed spatial multi-body dynamics model with complex contact and traction mechanics developed using ADAMS software. Experiments are conducted using a Spectra Quest machine fault simulator with different prefabricated faulted bearings. The frequency-domain characteristics of simulated and experimental vibration signals for different bearing faults are compared, and conclusions are drawn regarding the usefulness of the developed models.
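    The defect features such models aim to reproduce appear at the classical bearing kinematic frequencies. A sketch of those standard formulas follows; the geometry values used in the example are illustrative, not those of the bearings tested in the study.

```python
import math

# Classical kinematic defect frequencies for a rolling-element bearing, used
# to locate fault signatures in vibration spectra:
#   FTF  - fundamental train (cage) frequency
#   BPFO - ball pass frequency, outer race defect
#   BPFI - ball pass frequency, inner race defect
#   BSF  - ball spin frequency
def bearing_fault_freqs(fr, n, d, D, phi=0.0):
    """fr: shaft speed [Hz], n: number of rolling elements,
    d: ball diameter, D: pitch diameter, phi: contact angle [rad]."""
    ratio = (d / D) * math.cos(phi)
    return {
        "FTF":  0.5 * fr * (1 - ratio),
        "BPFO": 0.5 * n * fr * (1 - ratio),
        "BPFI": 0.5 * n * fr * (1 + ratio),
        "BSF":  0.5 * (D / d) * fr * (1 - ratio ** 2),
    }

# illustrative geometry: 9 balls, 7.94 mm balls on a 39.04 mm pitch circle
f = bearing_fault_freqs(fr=25.0, n=9, d=7.94e-3, D=39.04e-3)
print({k: round(v, 2) for k, v in f.items()})
```

    A useful sanity check is that BPFO + BPFI = n·fr, i.e. the two race-defect frequencies split the total ball-pass rate.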

  19. Modified Quasi-Steady State Model of DC System for Transient Stability Simulation under Asymmetric Faults

    Directory of Open Access Journals (Sweden)

    Jun Liu

    2015-01-01

    Full Text Available Because the classical quasi-steady-state (QSS) model cannot accurately simulate the dynamic characteristics of DC transmission and its controlling systems in electromechanical transient stability simulation when an asymmetric fault occurs in the AC system, a modified quasi-steady-state (MQSS) model is proposed. The model first analyzes the calculation error induced by the classical QSS model under asymmetric commutation voltage, which is mainly caused by the commutation voltage zero offset and leads to inaccurate calculation of the average DC voltage and the inverter extinction advance angle. The new MQSS model calculates the average DC voltage from the actual half-cycle voltage waveform at the DC terminal after fault occurrence, and the extinction advance angle is derived accordingly, so as to avoid the negative effect of the asymmetric commutation voltage. Simulation experiments show that the new MQSS model has higher simulation precision than the classical QSS model when an asymmetric fault occurs in the AC system, as verified by comparing both against the results of a detailed electromagnetic transient (EMT) model of the DC transmission and its controlling system.

  20. Local Interaction Simulation Approach for Fault Detection in Medical Ultrasonic Transducers

    Directory of Open Access Journals (Sweden)

    Z. Hashemiyan

    2015-01-01

    Full Text Available A new approach is proposed for modelling medical ultrasonic transducers operating in air. The method is based on finite elements and the local interaction simulation approach. The latter leads to significant reductions of computational costs. Transmission and reception properties of the transducer are analysed using in-air reverberation patterns. The proposed approach can help to provide earlier detection of transducer faults and their identification, reducing the risk of misdiagnosis due to poor image quality.

  1. The computer simulation of laser proton acceleration for hadron therapy

    Science.gov (United States)

    Lykov, Vladimir; Baydin, Grigory

    2008-11-01

    The acceleration of ions by intense ultra-short laser pulses is of interest in view of its possible applications for proton radiography, production of medical isotopes and hadron therapy. The 3D relativistic PIC code LegoLPI has been developed at RFNC-VNIITF for modeling the interaction of intense laser pulses with plasma. LegoLPI simulations were carried out to find the optimal conditions for generating proton beams with the parameters necessary for hadron therapy. The simulations show that a two-layer foil of aluminum and polyethylene, with thicknesses of 100 nm and 50 nm respectively, may be optimal. The maximum efficiency of laser energy conversion into 200 MeV protons is achieved by irradiating these foils with a 30 fs laser pulse of intensity about 2×10^22 W/cm^2. It is concluded that lasers with a peak power of about 0.5-1 PW and an average power of 0.5-1 kW are needed to generate proton beams with the parameters necessary for proton therapy.

  2. Accelerated prompt gamma estimation for clinical proton therapy simulations

    Science.gov (United States)

    Huisman, Brent F. B.; Létang, J. M.; Testa, É.; Sarrut, D.

    2016-11-01

    There is interest in the particle therapy community in using prompt gammas (PGs), a natural byproduct of particle treatment, for range verification and eventually dose control. However, PG production is a rare process, and therefore the estimation of PGs exiting a patient during a proton treatment plan executed by a Monte Carlo (MC) simulation converges slowly. Recently, different approaches to accelerating the estimation of PG yield have been presented. Sterpin et al (2015 Phys. Med. Biol. 60 4915-46) described a fast analytic method, which is still sensitive to heterogeneities. El Kanawati et al (2015 Phys. Med. Biol. 60 8067-86) described a variance reduction method (pgTLE) that accelerates PG estimation by precomputing PG production probabilities as a function of energy and target material, but with the drawback that it is limited to analytical phantoms. We present a two-stage variance reduction method, named voxelized pgTLE (vpgTLE), that extends pgTLE to voxelized volumes. As a preliminary step, PG production probabilities are precomputed once and stored in a database. In stage 1, we simulate the interactions between the treatment plan and the patient CT with low-statistics MC to obtain the spatial and spectral distribution of the PGs. As primary particles are propagated throughout the patient CT, the PG yields are computed in each voxel from the initial database, as a function of the current energy of the primary, the material in the voxel and the step length. The result is a voxelized image of PG yield, normalized to a single primary. The second stage uses this intermediate PG image as a source to generate and propagate the number of PGs throughout the rest of the scene geometry, e.g. into a detection device, corresponding to the number of primaries desired. We achieved a gain of around 10^3 for both a geometrical heterogeneous phantom and a complete patient CT treatment plan with respect to analog MC, at a convergence level of 2% relative
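    The stage-1 accumulation described above can be sketched as a table lookup per particle step: expected yield equals a precomputed production probability per unit length times the step length, summed into the voxel the step crosses. The probability table, materials and steps below are invented placeholders, not GATE/vpgTLE data.

```python
# Sketch of the vpgTLE stage-1 idea: per-voxel expected prompt-gamma (PG)
# yield accumulated from a precomputed table Gamma(material, energy bin),
# expressed as production probability per unit length (values invented).
GAMMA = {
    ("water", 100): 2.0e-4, ("water", 150): 3.0e-4,
    ("bone",  100): 2.5e-4, ("bone",  150): 3.5e-4,
}

def accumulate(steps, yield_map):
    """steps: iterable of (voxel_index, material, energy_bin, step_length_mm).
    Adds Gamma * step_length into yield_map for each step."""
    for voxel, material, energy, length in steps:
        key = (material, energy)
        yield_map[voxel] = yield_map.get(voxel, 0.0) + GAMMA[key] * length

# three steps of one primary slowing down through two voxels
yield_map = {}
accumulate([(0, "water", 150, 2.0),
            (1, "bone",  100, 2.0),
            (0, "water", 100, 1.0)], yield_map)
print(yield_map)  # expected PG yield per voxel, per primary
```

    Stage 2 would then sample PG emission positions and energies from this voxelized image instead of waiting for rare analog production events.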

  3. Operation Tests for SN Transition Superconducting Fault Current Limiter in the Power System Simulator

    Science.gov (United States)

    Kameda, Hideyuki; Torii, Shinji; Kumano, Teruhisa; Sakaki, Hisayoshi; Kubota, Hiroshi; Yasuda, Kenji

    One of the important problems to be solved in Japanese trunk transmission systems is the reduction of short-circuit capacity. As a countermeasure, double buses are split into two buses in some substations. In recent years, dispersed generators have been introduced in lower voltage classes due to electricity deregulation. In a distribution system where many dispersed generators are introduced, the fault current may exceed the breaking capacity when a short circuit occurs. The introduction of superconducting fault current limiters into a power system is very effective as one means of solving the above-mentioned problem, and we have studied effective methods for introducing them and setting their parameters. This paper describes the results of operation tests of an SN-transition type superconducting fault current limiter using 3 phases of FCL modules against various kinds of system faults and inrush currents in the Power System Simulator installed at CRIEPI.

  4. Fault-Tolerant Robot Programming through Simulation with Realistic Sensor Models

    Directory of Open Access Journals (Sweden)

    Axel Waggershauser

    2008-11-01

    Full Text Available We introduce a simulation system for mobile robots that allows a realistic interaction of multiple robots in a common environment. The simulated robots are closely modeled after robots from the EyeBot family and have an identical application programmer interface. The simulation supports driving commands at two levels of abstraction as well as numerous sensors such as shaft encoders, infrared distance sensors, and compass. Simulation of on-board digital cameras via synthetic images allows the use of image processing routines for robot control within the simulation. Specific error models for actuators, distance sensors, camera sensor, and wireless communication have been implemented. Progressively increasing error levels for an application program allows for testing and improving its robustness and fault-tolerance.
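    A distance-sensor error model of the kind described might be sketched as Gaussian noise plus occasional dropout, letting an application program be stress-tested against sensor faults in simulation. The parameters below are illustrative assumptions, not the EyeBot sensor calibration.

```python
import random

# Simulated infrared distance sensor with a simple error model: Gaussian
# noise on the reading, plus a small probability of dropout in which the
# sensor returns a max-range value (a common failure mode to test against).
def noisy_distance(true_dist_mm, sigma_mm=10.0, dropout_p=0.05,
                   max_range_mm=1200.0, rng=random):
    if rng.random() < dropout_p:
        return max_range_mm                      # simulated sensor fault
    reading = rng.gauss(true_dist_mm, sigma_mm)
    return min(max(reading, 0.0), max_range_mm)  # clamp to sensor range

rng = random.Random(42)
readings = [noisy_distance(500.0, rng=rng) for _ in range(1000)]
mean = sum(readings) / len(readings)
print(round(mean, 1))  # near 500 mm, biased upward by dropouts to max range
```

    Raising `dropout_p` or `sigma_mm` progressively degrades the sensor, which is how robustness of a control program can be probed before deployment, as the abstract suggests.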

  5. Modeling and Simulation of Transient Fault Response at Lillgrund Wind Farm when Subjected to Faults in the Connecting 130 kV Grid

    Energy Technology Data Exchange (ETDEWEB)

    Eliasson, Anders; Isabegovic, Emir

    2009-07-01

    The purpose of this thesis was to investigate which types of fault in the connecting grid should be treated as dimensioning cases for future wind farms. An investigation of over- and under-voltages at the main transformer and the turbines inside Lillgrund wind farm was the main goal. The results will be used in the planning stage of future wind farms when performing insulation coordination and determining the protection settings. A model of the Lillgrund wind farm and a part of the connecting 130 kV grid were built in PSCAD/EMTDC. The farm consists of 48 Siemens SWT-2.3-93 2.3 MW wind turbines with full power converters. The turbines were modeled as controllable current sources providing a constant active power output up to the current limit of 1.4 pu. The transmission lines and cables were modeled as frequency-dependent (phase) models. The load flows and bus voltages were verified against a PSS/E model, and the transient response was verified against measurement data from two faults: a line-to-line fault in the vicinity of Barsebaeck (BBK) and a single line-to-ground fault close to the Bunkeflo (BFO) substation. For the simulation, three-phase-to-ground, single line-to-ground and line-to-line faults were applied at different locations in the connecting grid, and the phase-to-ground voltages at different buses in the connecting grid and at the turbines were studied. These faults were applied for different configurations of the farm. For single line-to-ground faults, the highest over-voltage on a turbine was 1.22 pu (32.87 kV), due to clearing of a fault at BFO (the PCC). For line-to-line faults, the highest over-voltage on a turbine was 1.59 pu (42.83 kV), at the beginning of a fault at KGE, one bus away from BFO. Both these cases occurred when all radials were connected and the turbines ran at full power. The highest over-voltage observed at Lillgrund was 1.65 pu (44.45 kV). 
This over voltage was caused by a three phase to ground fault applied at KGE and occurred at the beginning of the fault and when
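The abstract reports overvoltages both in per-unit and in kilovolts but does not state the voltage base. Back-calculating from the reported pairs (e.g. 1.22 pu against 32.87 kV) suggests the peak phase-to-ground voltage of a 33 kV collection grid; a minimal sketch under that assumption:

```python
import math

# Assumed voltage base: peak phase-to-ground voltage of a 33 kV (RMS,
# line-to-line) collection grid. This is inferred from the reported
# pu/kV pairs, not stated in the abstract itself.
V_LL_RMS_KV = 33.0

def pu_to_kv(v_pu: float, v_ll_rms_kv: float = V_LL_RMS_KV) -> float:
    """Convert a per-unit overvoltage to kV using the peak
    phase-to-ground base: V_base = V_LL * sqrt(2) / sqrt(3) (~26.94 kV)."""
    v_base = v_ll_rms_kv * math.sqrt(2.0) / math.sqrt(3.0)
    return v_pu * v_base
```

With this base, the abstract's pairs are reproduced to within rounding: 1.22 pu gives about 32.87 kV and 1.65 pu about 44.46 kV.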

  6. Electron acceleration mechanisms in the interaction of ultrashort lasers with underdense plasmas: Experiments and simulations

    Energy Technology Data Exchange (ETDEWEB)

    Faure, J.; Lefebvre, E.; Malka, V.; Marques, J.-R.; Amiranoff, F.; Solodov, A.; Mora, P.

    2002-06-30

An experiment investigating the production of relativistic electrons from the interaction of ultrashort multi-terawatt laser pulses with an underdense plasma is presented. Electrons were accelerated to tens of MeV, and the maximum electron energy increased as the plasma density decreased. Simulations have been performed in order to model the experiment. They show good agreement with the trends observed in the experiment, and the spectra of accelerated electrons could be reproduced successfully. The simulations have been used to study the relative contributions of the different acceleration mechanisms: plasma wave acceleration, direct laser acceleration and stochastic heating. The results show that in the low-density case (1 percent of the critical density), acceleration by the laser is the dominant mechanism. The simulations at high density also suggest that direct laser acceleration is more efficient than stochastic heating.

  7. 3D Dynamic Rupture Simulation Across a Complex Fault System: the Mw7.0, 2010, Haiti Earthquake

    Science.gov (United States)

    Douilly, R.; Aochi, H.; Calais, E.; Freed, A. M.

    2013-12-01

Earthquake ruptures sometimes take place on a secondary fault and, surprisingly, do not activate an adjacent major one. The 1989 Loma Prieta earthquake is a classic case, where rupture occurred on a blind thrust while the adjacent San Andreas Fault was not triggered in the process. Similar to Loma Prieta, the Mw7.0, January 12 2010, Haiti earthquake also ruptured a secondary blind thrust, the Léogâne fault, adjacent to the main plate boundary, the Enriquillo Plantain Garden Fault, which did not rupture during this event. Aftershock relocations delineate the Léogâne rupture as two north-dipping segments with slightly different dips, where the easternmost segment had mostly dip-slip motion and the westernmost one mostly strike-slip motion. In addition, an offshore south-dipping structure inferred from the aftershocks to the west of the rupture zone coincides with the offshore Trois Baies reverse fault, a region of Coulomb stress increase. In this study, we investigate the rupture dynamics of the Haiti earthquake in a complex fault system of multiple segments identified by the aftershock relocations. We assume a background stress regime that is consistent with the type of motion on each fault and with the regional tectonic regime. We initiate nucleation on the east segment of the Léogâne fault by defining a circular region with a 2 km radius where the shear stress is slightly greater than the yield stress. By varying the friction on the faults and the background stress, we find a range of plausible scenarios. In the absence of near-field seismic records of the event, we score the different models against the static deformation field derived from GPS and InSAR at the surface. All the plausible simulations show that the rupture propagates from the eastern to the western segment along the Léogâne fault, but neither onto the Enriquillo fault nor onto the Trois Baies fault. The best-fit simulation shows a significant increase of shear stresses on the Trois Baies

  8. Laboratory measurements of the relative permeability of cataclastic fault rocks: An important consideration for production simulation modelling

    Energy Technology Data Exchange (ETDEWEB)

    Al-Hinai, Suleiman; Fisher, Quentin J. [School of Earth and Environment, University of Leeds, Leeds LS2 9JT (United Kingdom); Al-Busafi, Bader [Petroleum Development of Oman, MAF, Sultanate of Oman, Muscat (Oman); Guise, Phillip; Grattoni, Carlos A. [Rock Deformation Research Limited, School of Earth and Environment, University of Leeds, Leeds LS2 9JT (United Kingdom)

    2008-06-15

    It is becoming increasingly common practice to model the impact of faults on fluid flow within petroleum reservoirs by applying transmissibility multipliers, calculated from the single-phase permeability of fault rocks, to the grid-blocks adjacent to faults in production simulations. The multi-phase flow properties (e.g. relative permeability and capillary pressure) of fault rocks are not considered because special core analysis has never previously been conducted on fault rock samples. Here, we partially fill this knowledge gap by presenting data from the first experiments that have measured the gas relative permeability (k{sub rg}) of cataclastic fault rocks. The cataclastic faults were collected from an outcrop of Permo-Triassic sandstone in the Moray Firth, Scotland; the fault rocks are similar to those found within Rotliegend gas reservoirs in the UK southern North Sea. The relative permeability measurements were made using a gas pulse-decay technique on samples whose water saturation was varied using vapour chambers. The measurements indicate that if the same fault rocks were present in gas reservoirs from the southern Permian Basin they would have k{sub rg} values of <0.02. Failure to take into account relative permeability effects could therefore lead to an overestimation of the transmissibility of faults within gas reservoirs by several orders of magnitude. Incorporation of these new results into a simplified production simulation model can explain the pressure evolution from a compartmentalised Rotliegend gas reservoir from the southern North Sea, offshore Netherlands, which could not easily be explained using only single-phase permeability data from fault rocks. (author)
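The claimed overestimation can be made concrete with a Manzocchi-style fault transmissibility multiplier in which the fault-rock permeability is additionally scaled by the measured gas relative permeability. The formula is the commonly used harmonic-average form, and the rock and fault permeabilities, fault thickness, and grid-block size below are illustrative assumptions, not values from the paper:

```python
def fault_trans_multiplier(k_rock, k_fault, cell_len, fault_thickness, k_rg=1.0):
    """Manzocchi-style transmissibility multiplier for a fault between two
    grid blocks: TM = 1 / (1 + (t_f / L) * (k_rock / k_fault_eff - 1)),
    with the fault permeability optionally scaled by a gas relative
    permeability k_rg. Permeabilities in the same units; lengths likewise."""
    k_fault_eff = k_fault * k_rg
    return 1.0 / (1.0 + (fault_thickness / cell_len) * (k_rock / k_fault_eff - 1.0))

# Illustrative values: 100 mD host rock, 0.01 mD cataclastic fault rock,
# 0.1 m thick fault, 100 m grid blocks.
tm_single_phase = fault_trans_multiplier(100.0, 0.01, 100.0, 0.1)           # ~0.091
tm_two_phase = fault_trans_multiplier(100.0, 0.01, 100.0, 0.1, k_rg=0.02)   # ~0.002
```

Even in this simple sketch, applying the measured k_rg < 0.02 reduces the fault transmissibility by well over an order of magnitude relative to the single-phase value.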

  9. Fault detection on the Large Hadron Collider at CERN: design, simulation and realization of a High Voltage Pulse Generator

    CERN Document Server

    Cavicchioli, C; Biagi, E; Bozzini, D

    2007-01-01

This project was developed within the Quality Assurance Plan (ELQA) of the LHC. The superconducting circuits of the collider present great complexity for the control system, for several reasons: the tunnel lies 50 to 175 m underground, the circuits work at temperatures of 1.9 K, the whole structure must be perfectly aligned, and the electronic part has considerable dimensions. To maximize the running time of the collider, it is necessary to develop methods for the diagnosis of defects and for the precise localization of the segment of the accelerator that contains the fault. From my studies it emerged that a possible way to localize electrical faults in the LHC superconducting circuits is to combine time-domain reflectometry methods with high-voltage pulses. I have therefore designed and realized a high-voltage pulse generator that will be an important instrument for fault location along the accelerator.
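Time-domain reflectometry locates an impedance discontinuity from the round-trip delay of the reflected pulse. A minimal sketch of that calculation; the velocity factor used here is a generic placeholder, not a measured property of the LHC circuit cabling:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tdr_fault_distance(round_trip_delay_s: float, velocity_factor: float = 0.5) -> float:
    """Distance to an impedance discontinuity: d = v * dt / 2, where
    v = velocity_factor * C is the pulse propagation speed in the cable
    and dt is the delay between launching the pulse and seeing its
    reflection. The velocity factor 0.5 is an assumed placeholder."""
    v = velocity_factor * C
    return v * round_trip_delay_s / 2.0

# A reflection arriving 1 microsecond after launch places the fault
# roughly 75 m away at a 0.5 velocity factor.
```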

  10. A novel Lagrangian approach for the stable numerical simulation of fault and fracture mechanics

    Science.gov (United States)

    Franceschini, Andrea; Ferronato, Massimiliano; Janna, Carlo; Teatini, Pietro

    2016-06-01

The simulation of the mechanics of geological faults and fractures is of paramount importance in several applications, such as ensuring the safety of the underground storage of wastes and hydrocarbons or predicting the possible seismicity triggered by the production and injection of subsurface fluids. However, the stable numerical modeling of ground ruptures is still an open issue. The present work introduces a novel formulation based on the use of Lagrange multipliers to prescribe the constraints on the contact surfaces. The variational formulation is modified in order to take into account the frictional work along the activated fault portion according to the principle of maximum plastic dissipation. The numerical model, developed in the framework of the Finite Element method, provides stable solutions with fast convergence of the non-linear problem. The stabilizing properties of the proposed model are emphasized with the aid of a realistic numerical example dealing with the generation of ground fractures due to groundwater withdrawal in arid regions.

  11. Saturn: A large area X-ray simulation accelerator

    Science.gov (United States)

    Bloomquist, D. D.; Stinnett, R. W.; McDaniel, D. H.; Lee, J. R.; Sharpe, A. W.; Halbleib, J. A.; Schlitt, L. G.; Spence, P. W.; Corcoran, P.

    1987-06-01

    Saturn is the result of a major metamorphosis of the Particle Beam Fusion Accelerator-I (PBFA-I) from an ICF research facility to the large-area X-ray source of the Simulation Technology Laboratory (STL) project. Renamed Saturn, for its unique multiple-ring diode design, the facility is designed to take advantage of the numerous advances in pulsed power technology. Saturn will include significant upgrades in the energy storage and pulse-forming sections. The 36 magnetically insulated transmission lines (MITLs) that provided power flow to the ion diode of PBFA-I were replaced by a system of vertical triplate water transmission lines. These lines are connected to three horizontal triplate disks in a water convolute section. Power will flow through an insulator stack into radial MITLs that drive the three-ring diode. Saturn is designed to operate with a maximum of 750 kJ coupled to the three-ring e-beam diode with a peak power of 25 TW to provide an X-ray exposure capability of 5 x 10 rads/s (Si) and 5 cal/g (Au) over 500 cm.

  12. Ground-Motion Simulations of Scenario Earthquakes on the Hayward Fault

    Energy Technology Data Exchange (ETDEWEB)

    Aagaard, B; Graves, R; Larsen, S; Ma, S; Rodgers, A; Ponce, D; Schwartz, D; Simpson, R; Graymer, R

    2009-03-09

    We compute ground motions in the San Francisco Bay area for 35 Mw 6.7-7.2 scenario earthquake ruptures involving the Hayward fault. The modeled scenarios vary in rupture length, hypocenter, slip distribution, rupture speed, and rise time. This collaborative effort involves five modeling groups, using different wave propagation codes and domains of various sizes and resolutions, computing long-period (T > 1-2 s) or broadband (T > 0.1 s) synthetic ground motions for overlapping subsets of the suite of scenarios. The simulations incorporate 3-D geologic structure and illustrate the dramatic increase in intensity of shaking for Mw 7.05 ruptures of the entire Hayward fault compared with Mw 6.76 ruptures of the southern two-thirds of the fault. The area subjected to shaking stronger than MMI VII increases from about 10% of the San Francisco Bay urban area in the Mw 6.76 events to more than 40% of the urban area for the Mw 7.05 events. Similarly, combined rupture of the Hayward and Rodgers Creek faults in a Mw 7.2 event extends shaking stronger than MMI VII to nearly 50% of the urban area. For a given rupture length, the synthetic ground motions exhibit the greatest sensitivity to the slip distribution and location inside or near the edge of sedimentary basins. The hypocenter also exerts a strong influence on the amplitude of the shaking due to rupture directivity. The synthetic waveforms exhibit a weaker sensitivity to the rupture speed and are relatively insensitive to the rise time. The ground motions from the simulations are generally consistent with Next Generation Attenuation ground-motion prediction models but contain long-period effects, such as rupture directivity and amplification in shallow sedimentary basins that are not fully captured by the ground-motion prediction models.

  13. The Effects of Off-Fault Plasticity in Earthquake Cycle Simulations

    Science.gov (United States)

    Erickson, B. A.; Dunham, E. M.

    2012-12-01

Field observations of damage zones around faults reveal regions of fractured or pulverized rocks on the order of several hundred meters surrounding a highly damaged fault core. It has been postulated that these damage zones are the result of the fracturing and healing within the fault zone due to many years of seismogenic cycling. In dynamic rupture simulations which account for inelastic deformation, the influence of plasticity has been shown to significantly alter rupture propagation speed and the residual stress field left near the fault. Plastic strain near the Earth's surface has also been shown to account for a fraction of the inferred shallow slip deficit. We are developing an efficient numerical method to simulate full earthquake cycles of multiple events with rate-and-state friction laws and off-fault plasticity. Although the initial stress state prior to an earthquake is not well understood, our method evolves the system through the interseismic period, therefore generating self-consistent initial conditions prior to rupture. Large time steps can be taken during the interseismic period, while much smaller time steps are required to fully resolve quasi-dynamic rupture, where we use the radiation damping approximation to the inertial term for computational efficiency. So far our cycle simulations have been done assuming a linear elastic medium. We have concurrently begun developing methods for allowing plastic deformation in our cycle simulations, where the stress is constrained by a Drucker-Prager yield criterion. The idea is to simulate multiple events which allow for inelastic response, in order to understand how plasticity alters the rupture process during each event in the cycle. We will use this model to see what fraction of coseismic strain is accommodated by inelastic deformation throughout the entire earthquake cycle from the interseismic period through the mainshock. Modeling earthquake cycles with plasticity will also allow us to study how an
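A Drucker-Prager yield check of the kind described above can be sketched as follows; the particular form of the yield function and the parameter values are illustrative assumptions, not those of the authors' model:

```python
def drucker_prager_F(sigma, alpha=0.25, k=5.0e6):
    """Drucker-Prager yield function F = sqrt(J2) + alpha*I1 - k
    (tension-positive convention; the material is at yield when F >= 0).
    sigma is a 3x3 stress tensor in Pa; alpha and k are illustrative
    friction-like and cohesion-like parameters."""
    I1 = sigma[0][0] + sigma[1][1] + sigma[2][2]          # first stress invariant
    mean = I1 / 3.0
    # deviatoric stress s_ij = sigma_ij - mean * delta_ij
    s = [[sigma[i][j] - (mean if i == j else 0.0) for j in range(3)]
         for i in range(3)]
    # second deviatoric invariant J2 = (1/2) s_ij s_ji
    J2 = 0.5 * sum(s[i][j] * s[j][i] for i in range(3) for j in range(3))
    return J2 ** 0.5 + alpha * I1 - k

# Hydrostatic compression stays elastic (F < 0); sufficient shear stress
# drives F above zero, and the stress must then be returned to the yield
# surface by the plastic update.
hydro = [[-10e6, 0, 0], [0, -10e6, 0], [0, 0, -10e6]]
shear = [[0, 10e6, 0], [10e6, 0, 0], [0, 0, 0]]
```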

  14. Accelerated Molecular Dynamics Simulations of Reactive Hydrocarbon Systems

    Energy Technology Data Exchange (ETDEWEB)

    Stuart, Steven J.

    2014-02-25

    The research activities in this project consisted of four different sub-projects. Three different accelerated dynamics techniques (parallel replica dynamics, hyperdynamics, and temperature-accelerated dynamics) were applied to the modeling of pyrolysis of hydrocarbons. In addition, parallel replica dynamics was applied to modeling of polymerization.

  15. Asperity generation and its relationship to seismicity on a planar fault: a laboratory simulation

    Science.gov (United States)

    Selvadurai, P. A.; Glaser, S. D.

    2017-02-01

Earthquake faults, and all frictional surfaces, establish contact through asperities. A detailed knowledge of how asperities form will enable a better understanding of the manner in which they communicate during the observed foreshock failure sequences leading to the larger main shock. We present results of experiments where a pressure-sensitive film was used to map, size and measure the magnitudes of the normal stresses at asperities along a seismogenic section of a laboratory-simulated fault. We measured seismicity acoustically, and foreshocks were found to be the result of localized asperity failure during the nucleation phase of gross fault rupture. Since surface roughness plays an important role in how asperities are formed, two Hurst exponents were measured to characterize a highly worn interface using roughness profiles: (i) long-wavelength estimates (H ˜ 0.45) and (ii) short-wavelength estimates (H ˜ 0.8-1.2). The short-wavelength roughness estimates were computed at the scale of single asperity junction points. Macroscopically, the number of asperities and the real contact area increased with additional application of normal force while the mean normal stress remained constant. The ratio of real to nominal contact area was low, ranging from 0.02 < Ar/A0 < 0.05, predicting that the asperities should be elastically independent of each other. Results from the pressure-sensitive film showed that asperities were closely spaced and could not be treated as mechanically independent. Larger asperities carried both higher levels of average normal stress and higher levels of normal stress heterogeneity than smaller ones. Using the linear stability theorem, the critical slip distance on foreshocking asperities was estimated to be d0 ˜ 0.65-3 μm. The critical slip distance d0 was ˜1.8-11.5 per cent of the premonitory slip needed to initiate gross fault rupture of the interface (20-40 μm) and the overall slip necessary to initiate gross fault rupture was on the order

  16. Simulation of Fault Arc Based on Different Radiation Models in a Closed Tank

    Science.gov (United States)

    Li, Mei; Zhang, Junpeng; Hu, Yang; Zhang, Hantian; Wu, Yifei

    2016-05-01

This paper focuses on the simulation of a fault arc in a closed tank based on the magneto-hydrodynamic (MHD) method, in which a comparative study of three radiation models, including net emission coefficients (NEC), a semi-empirical model based on NEC, and the P1 model, is developed. The pressure rises calculated by the three radiation models are compared to the measured results. Particularly when the semi-empirical model is used, the effect of different boundary temperatures of the re-absorption layer in the semi-empirical model on the pressure rise is examined. The results show that the re-absorption effect in the low-temperature region evidently affects the radiation transfer of fault arcs, and thus the internal pressure rise. Compared with the NEC model, P1 and the semi-empirical model with 0.7 < α < 0.83 are more suitable to calculate the pressure rise of the fault arc, where α is an adjusted parameter involving the boundary temperature of the re-absorption region in the semi-empirical model. Supported by the National Key Basic Research Program of China (973 Program) (No. 2015CB251002), the National Natural Science Foundation of China (Nos. 51221005, 51177124), the Fundamental Research Funds for the Central Universities, the Program for New Century Excellent Talents in University, and the Shaanxi Province Natural Science Foundation of China (No. 2013JM-7010).

  17. Simulation of Fault Arc Based on Different Radiation Models in a Closed Tank

    Institute of Scientific and Technical Information of China (English)

    LI Mei; ZHANG Junpeng; HU Yang; ZHANG Hantian; WU Yifei

    2016-01-01

This paper focuses on the simulation of a fault arc in a closed tank based on the magneto-hydrodynamic (MHD) method, in which a comparative study of three radiation models, including net emission coefficients (NEC), a semi-empirical model based on NEC, and the P1 model, is developed. The pressure rises calculated by the three radiation models are compared to the measured results. Particularly when the semi-empirical model is used, the effect of different boundary temperatures of the re-absorption layer in the semi-empirical model on the pressure rise is examined. The results show that the re-absorption effect in the low-temperature region evidently affects the radiation transfer of fault arcs, and thus the internal pressure rise. Compared with the NEC model, P1 and the semi-empirical model with 0.7 < α < 0.83 are more suitable to calculate the pressure rise of the fault arc, where α is an adjusted parameter involving the boundary temperature of the re-absorption region in the semi-empirical model.

  18. Fault diagnosis of reciprocating compressor valve with the method integrating acoustic emission signal and simulated valve motion

    Science.gov (United States)

    Wang, Yuefei; Xue, Chuang; Jia, Xiaohan; Peng, Xueyuan

    2015-05-01

    This paper proposes a method of diagnosing faults in reciprocating compressor valves using the acoustic emission signal coupled with the simulated valve motion. The actual working condition of a valve can be obtained by analyzing the acoustic emission signal in the crank angle domain and the valve movement can be predicted by simulating the valve motion. The exact opening and closing locations of a normal valve, provided by the simulated valve motion, can be used as references for the valve fault diagnosis. The typical valve faults are diagnosed to validate the feasibility and accuracy of the proposed method. The experimental results indicate that this method can easily distinguish the normal valve, valve flutter and valve delayed closing conditions. The characteristic locations of the opening and closing of the suction and discharge valves can be clearly identified in the waveform of the acoustic emission signal and the simulated valve motion.

  19. Final Report for "Community Petascale Project for Accelerator Science and Simulations".

    Energy Technology Data Exchange (ETDEWEB)

    Cary, J. R.; Bruhwiler, D. L.; Stoltz, P. H.; Cormier-Michel, E.; Cowan, B.; Schwartz, B. T.; Bell, G.; Paul, K.; Veitzer, S.

    2013-04-19

This final report describes the work that has been accomplished over the past 5 years under the Community Petascale Project for Accelerator Science and Simulations (ComPASS) at Tech-X Corporation. Tech-X has been involved in the full range of ComPASS activities: simulation of laser plasma accelerator concepts, mainly in collaboration with the LOASIS program at LBNL; simulation of coherent electron cooling in collaboration with BNL; modeling of electron clouds in high-intensity accelerators, in collaboration with researchers at Fermilab; and accurate modeling of superconducting RF cavities in collaboration with Fermilab, JLab and the Cockcroft Institute in the UK.

  20. Near fault broadband ground motion simulation with empirical Green's functions: the Upper Rhine Graben case study

    Science.gov (United States)

    Del Gaudio, Sergio; Hok, Sébastian; Causse, Mathieu; Festa, Gaetano; Lancieri, Maria

    2016-04-01

A fundamental stage in seismic hazard assessment is the prediction of realistic ground motion for potential future earthquakes. One of the steps toward this is estimating the expected ground motion level, which is commonly done with ground motion prediction equations (GMPEs). Nevertheless, GMPEs do not represent the whole variety of source processes, and this can lead to incorrect estimates in some specific cases, such as the near-fault range, because of the lack of records of large earthquakes at short distances. In such cases, ground motion simulations can be a valid tool to complement prediction equations for scenario studies, provided that both source and propagation are accurately described and uncertainties are properly addressed. Such simulations, usually referred to as "blind", require the generation of a population of ground motion records that represent the natural variability of the source process for the target earthquake scenario. In this study we performed simulations using the empirical Green's function technique, which uses records of small earthquakes as the medium transfer function, provided that small earthquakes located close to the target fault and recorded at the target site are available. The main advantage of this technique is that it does not require a detailed knowledge of the propagation medium, which is not always possible, but it does require high-quality records of small earthquakes in the target area. We couple this empirical approach with a k-2 kinematic source model, which naturally introduces high frequencies into the source description. Here we present an application of our technique to the Upper Rhine Graben. This is an active seismic region with a moderate rate of seismicity, for which it is interesting to provide ground motion estimates in the vicinity of the faults to be compared with the estimates traditionally provided by GMPEs in a seismic hazard evaluation study.
We

  1. CONFIG - Adapting qualitative modeling and discrete event simulation for design of fault management systems

    Science.gov (United States)

    Malin, Jane T.; Basham, Bryan D.

    1989-01-01

    CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.

  2. Advanced visualization technology for terascale particle accelerator simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ma, K-L; Schussman, G.; Wilson, B.; Ko, K.; Qiang, J.; Ryne, R.

    2002-11-16

This paper presents two new hardware-assisted rendering techniques developed for interactive visualization of the terascale data generated from numerical modeling of next-generation accelerator designs. The first technique, based on a hybrid rendering approach, makes possible interactive exploration of large-scale particle data from particle beam dynamics modeling. The second technique, based on a compact texture-enhanced representation, exploits the advanced features of commodity graphics cards to achieve perceptually effective visualization of the very dense and complex electromagnetic fields produced from the modeling of reflection and transmission properties of open structures in an accelerator design. Because of the collaborative nature of the overall accelerator modeling project, the visualization technology developed is for both desktop and remote visualization settings. We have tested the techniques using both time-varying particle data sets containing up to one billion particles per time step and electromagnetic field data sets with millions of mesh elements.

  3. Reactor for simulation and acceleration of solar ultraviolet damage

    Energy Technology Data Exchange (ETDEWEB)

    Laue, E.; Gupta, A.

    1979-09-21

An environmental test chamber providing acceleration of UV radiation and precise temperature control (±1 °C) has been designed, constructed and tested. This chamber allows acceleration of solar ultraviolet up to 30 suns while maintaining the temperature of the absorbing surface at 30 °C to 60 °C. This test chamber utilizes a filtered medium-pressure mercury arc as the source of radiation, and a combination of a selenium radiometer and a silicon radiometer to monitor solar ultraviolet (295 to 340 nm) and total radiant power output, respectively. Details of design, construction and operational procedures are presented along with typical test data. The test chamber was designed for accelerated testing of solar cell modules.

  4. Simulation studies of laser wakefield acceleration based on typical 100 TW laser facilities

    Institute of Scientific and Technical Information of China (English)

    李大章; 高杰; 朱雄伟; 何安

    2011-01-01

In this paper, 2-D Particle-In-Cell simulations are performed for Laser Wakefield Acceleration (LWFA). As in a real experiment, we perform plasma density scans for typical 100 TW laser facilities. Several basic laws for self-injected acceleration in a bubb

  5. Cycle-Based Algorithm Used to Accelerate VHDL Simulation

    Institute of Scientific and Technical Information of China (English)

    杨勋; 刘明业

    2000-01-01

The cycle-based algorithm has very high performance for the simulation of synchronous designs, but it is confined to synchronous designs and is not as accurate as the event-driven algorithm. In this paper, a revised cycle-based algorithm is proposed and implemented in a VHDL simulator. An event-driven simulation engine and a cycle-based simulation engine have been embedded in the same simulation environment and can be applied to asynchronous and synchronous designs, respectively. Thus the simulation performance is improved without losing the flexibility and accuracy of the event-driven algorithm.
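The distinction the abstract draws can be seen in a toy sketch: a cycle-based engine evaluates the levelized combinational logic exactly once per clock edge and keeps no event queue, which is why it is fast but restricted to synchronous designs. The one-register design below is purely illustrative:

```python
def cycle_step(state, inputs):
    """One clock cycle of a toy synchronous design, q <= q XOR (a AND b):
    the combinational logic is evaluated once, in levelized order, and the
    register latches the result at the clock edge. No event scheduling."""
    and_ab = inputs["a"] & inputs["b"]   # level 1 of the combinational cone
    next_q = state["q"] ^ and_ab         # level 2
    return {"q": next_q}                 # register update at the clock edge

state = {"q": 0}
trace = []
for a, b in [(1, 1), (1, 0), (1, 1), (1, 1)]:
    state = cycle_step(state, {"a": a, "b": b})
    trace.append(state["q"])
# trace == [1, 1, 0, 1]
```

An event-driven engine would instead schedule an evaluation for every signal transition, including glitches inside the cycle, which is what makes it slower but applicable to asynchronous logic.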

  6. Accelerated stochastic and hybrid methods for spatial simulations of reaction-diffusion systems

    OpenAIRE

    Rossinelli, D; Bayati, B; Koumoutsakos, P.

    2008-01-01

Spatial distributions characterize the evolution of reaction-diffusion models of several physical, chemical, and biological systems. We present two novel algorithms for the efficient simulation of these models: Spatial τ-Leaping (Sτ-Leaping), employing a unified acceleration of the stochastic simulation of reaction and diffusion, and Hybrid τ-Leaping (Hτ-Leaping), combining a deterministic diffusion approximation with a τ-Leaping acceleration of the stochastic reactions. The algorithms are v...
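The τ-leaping idea underlying both algorithms, firing each reaction channel a Poisson-distributed number of times per step instead of simulating every reaction event, can be sketched as below; the single decay reaction and its rate constant are illustrative assumptions:

```python
import math
import random

def poisson(lam, rng):
    """Sample K ~ Poisson(lam) via Knuth's algorithm; adequate for the
    small means used in this sketch."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def tau_leap_step(x, propensities, stoichiometry, tau, rng):
    """One tau-leaping step: channel j fires K_j ~ Poisson(a_j(x)*tau)
    times. This sketch has no safeguard against negative populations;
    real implementations choose tau so that the risk is negligible."""
    new_x = list(x)
    for a_j, v_j in zip(propensities, stoichiometry):
        k_j = poisson(a_j(x) * tau, rng)
        for i, dv in enumerate(v_j):
            new_x[i] += k_j * dv
    return new_x

# Illustrative single-channel system: decay A -> B with propensity 0.1*[A].
rng = random.Random(1)
state = [100, 0]
for _ in range(10):
    state = tau_leap_step(state, [lambda x: 0.1 * x[0]], [(-1, +1)], 0.5, rng)
# The (-1, +1) stoichiometry conserves the total molecule count.
```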

  7. Availability simulation software adaptation to the IFMIF accelerator facility RAMI analyses

    OpenAIRE

    Bargalló Font, Enric; Sureda, Pere Joan; Arroyo Macias, José Manuel; Abal López, Javier; Blas Del Hoyo, Alfredo de; Dies Llovera, Javier; Tapia Fernández, Carlos; Mollá Lorente, Joaquin; Ibarra Sanchez, Angel

    2014-01-01

Several problems were found when using generic reliability tools to perform RAMI (Reliability, Availability, Maintainability, Inspectability) studies for the IFMIF (International Fusion Materials Irradiation Facility) accelerator. A dedicated simulation tool was necessary to model properly the complexity of the accelerator facility. AvailSim, the availability simulation software used for the International Linear Collider (ILC), became an excellent option to fulfill the RAMI analysis needs. Neverthel...

  8. Biodynamic Assessment of Pilot Knee-Board Configurations During Simulated T-38 Catapult Acceleration

    Science.gov (United States)

    2015-04-01

Author: Mr. Chris Perry; report covers the period to April 2015. ... and converted to AVI format, and stored in the RH Collaborative Biomechanics Data Bank. Photographs were taken of the test set-up prior to each test.

  9. A guide to differences between stochastic point-source and stochastic finite-fault simulations

    Science.gov (United States)

    Atkinson, G.M.; Assatourians, K.; Boore, D.M.; Campbell, K.; Motazedian, D.

    2009-01-01

    Why do stochastic point-source and finite-fault simulation models not agree on the predicted ground motions for moderate earthquakes at large distances? This question was posed by Ken Campbell, who attempted to reproduce the Atkinson and Boore (2006) ground-motion prediction equations for eastern North America using the stochastic point-source program SMSIM (Boore, 2005) in place of the finite-source stochastic program EXSIM (Motazedian and Atkinson, 2005) that was used by Atkinson and Boore (2006) in their model. His comparisons suggested that a higher stress drop is needed in the context of SMSIM to produce an average match, at larger distances, with the model predictions of Atkinson and Boore (2006) based on EXSIM; this is so even for moderate magnitudes, which should be well-represented by a point-source model. Why? The answer to this question is rooted in significant differences between point-source and finite-source stochastic simulation methodologies, specifically as implemented in SMSIM (Boore, 2005) and EXSIM (Motazedian and Atkinson, 2005) to date. Point-source and finite-fault methodologies differ in general in several important ways: (1) the geometry of the source; (2) the definition and application of duration; and (3) the normalization of finite-source subsource summations. Furthermore, the specific implementation of the methods may differ in their details. The purpose of this article is to provide a brief overview of these differences, their origins, and implications. This sets the stage for a more detailed companion article, "Comparing Stochastic Point-Source and Finite-Source Ground-Motion Simulations: SMSIM and EXSIM," in which Boore (2009) provides modifications and improvements in the implementations of both programs that narrow the gap and result in closer agreement. 
These issues are important because both SMSIM and EXSIM have been widely used in the development of ground-motion prediction equations and in modeling the parameters that control

  10. Acceleration techniques for dependability simulation. M.S. Thesis

    Science.gov (United States)

    Barnette, James David

    1995-01-01

    As computer systems increase in complexity, the need to project system performance from the earliest design and development stages increases. We have to employ simulation for detailed dependability studies of large systems. However, as the complexity of the simulation model increases, the time required to obtain statistically significant results also increases. This paper discusses an approach that is application independent and can be readily applied to any process-based simulation model. Topics include background on classical discrete event simulation and techniques for random variate generation and statistics gathering to support simulation.
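    The classical discrete-event machinery the thesis builds on (event scheduling, random variate generation, statistics gathering) can be illustrated with a minimal M/M/1 queue run; the queue model and parameters here are illustrative, not taken from the thesis:

```python
import random
import statistics

def mm1_mean_wait(lam=0.8, mu=1.0, n_jobs=20000, seed=1):
    """Minimal discrete-event simulation of an M/M/1 queue:
    exponential random variates drive arrivals and services, and
    statistics are gathered over the run."""
    rng = random.Random(seed)
    t, server_free = 0.0, 0.0
    waits = []
    for _ in range(n_jobs):
        t += rng.expovariate(lam)        # next arrival time
        start = max(t, server_free)      # queue if the server is busy
        waits.append(start - t)          # time spent waiting
        server_free = start + rng.expovariate(mu)
    return statistics.mean(waits)

mean_wait = mm1_mean_wait()   # queueing theory gives rho/(mu-lam) = 4.0 here
```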

  11. Simulating the effect of SFCL on limiting the internal fault of synchronous machine

    Energy Technology Data Exchange (ETDEWEB)

    Kheirizad, I [Islamic Republic of Iran Broadcasting, Tehran (Iran, Islamic Republic of); Varahram, M H [Ministry of Science, Research and Technology, Tehran (Iran, Islamic Republic of); Jahed-Motlagh, M R [Azad University of Science and Research, Tehran (Iran, Islamic Republic of); Rahnema, M; Mohammadi, A [Iran University of Science and Technology, Tehran (Iran, Islamic Republic of)], E-mail: hadi_varahram@yahoo.com

    2008-02-01

    In this paper, we model a synchronous generator with an internal single-phase-to-ground fault and analyze the performance of the machine under this fault. The results show that if the fault occurs in the vicinity of the machine's terminals, serious damage would result. To protect the machine from this kind of fault, we suggest integrating an SFCL (superconducting fault current limiter) into the machine's model. The results show that the fault currents in this case are reduced considerably without influencing the normal operation of the machine.

  12. Probabilistic Approach to Enable Extreme-Scale Simulations under Uncertainty and System Faults. Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Knio, Omar [Duke Univ., Durham, NC (United States). Dept. of Mechanical Engineering and Materials Science

    2017-05-05

    The current project develops a novel approach that uses a probabilistic description to capture the current state of knowledge about the computational solution. To effectively spread the computational effort over multiple nodes, the global computational domain is split into many subdomains. Computational uncertainty in the solution translates into uncertain boundary conditions for the equation system to be solved on those subdomains, and many independent, concurrent subdomain simulations are used to account for this boundary condition uncertainty. By relying on the fact that solutions on neighboring subdomains must agree with each other, a more accurate estimate for the global solution can be achieved. Statistical approaches in this update process make it possible to account for the effect of system faults in the probabilistic description of the computational solution, and the associated uncertainty is reduced through successive iterations. By combining all of these elements, the probabilistic reformulation allows splitting the computational work over very many independent tasks for good scalability, while being robust to system faults.
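    The iteration described above can be caricatured in one dimension: treat the interface values between subdomains as an ensemble of uncertain guesses, solve each subdomain independently, and let the requirement that neighbours agree contract the uncertainty. A minimal sketch for u'' = 0 on [0,1]; the split points, the ensemble update rule, and the linear subdomain solves are illustrative assumptions, not the project's scheme:

```python
import numpy as np

def schwarz_ensemble(n_samples=500, n_iter=30, seed=2):
    """u'' = 0 on [0,1] with u(0)=0, u(1)=1, split into overlapping
    subdomains [0, 0.6] and [0.4, 1]. The interface value u(0.6) starts
    as an ensemble of uncertain guesses; alternating subdomain solves
    drive every ensemble member toward the consistent global solution."""
    rng = np.random.default_rng(seed)
    u_at_06 = rng.uniform(0.0, 1.0, n_samples)   # uncertain u(0.6)
    for _ in range(n_iter):
        # left subdomain solve: line from u(0)=0 to the guessed u(0.6),
        # evaluated at the other interface x = 0.4
        u_at_04 = u_at_06 * (0.4 / 0.6)
        # right subdomain solve: line from u(0.4) to u(1)=1,
        # evaluated at x = 0.6
        u_at_06 = u_at_04 + (1.0 - u_at_04) * (0.6 - 0.4) / (1.0 - 0.4)
    return u_at_06.mean(), u_at_06.std()

mean06, std06 = schwarz_ensemble()   # exact solution u(x) = x, so u(0.6) = 0.6
```

The ensemble spread contracts geometrically (by 4/9 per sweep for this split), which is the sense in which "uncertainty is reduced through successive iterations" above.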

  13. Insights From Laboratory Experiments On Simulated Faults With Application To Fracture Evolution In Geothermal Systems

    Energy Technology Data Exchange (ETDEWEB)

    Stephen L. Karner, Ph.D

    2006-06-01

    Laboratory experiments provide a wealth of information related to mechanics of fracture initiation, fracture propagation processes, factors influencing fault strength, and spatio-temporal evolution of fracture properties. Much of the existing literature reports on laboratory studies involving a coupling of thermal, hydraulic, mechanical, and/or chemical processes. As these processes operate within subsurface environments exploited for their energy resource, laboratory results provide insights into factors influencing the mechanical and hydraulic properties of geothermal systems. I report on laboratory observations of strength and fluid transport properties during deformation of simulated faults. The results show systematic trends that vary with stress state, deformation rate, thermal conditions, fluid content, and rock composition. When related to geophysical and geologic measurements obtained from engineered geothermal systems (e.g. microseismicity, wellbore studies, tracer analysis), laboratory results provide a means by which the evolving thermal reservoir can be interpreted in terms of physico-chemical processes. For example, estimates of energy release and microearthquake locations from seismic moment tensor analysis can be related to strength variations observed from friction experiments. Such correlations between laboratory and field data allow for better interpretations about the evolving mechanical and fluid transport properties in the geothermal reservoir – ultimately leading to improvements in managing the resource.

  14. Research on burnout fault of moulded case circuit breaker based on finite element simulation

    Science.gov (United States)

    Xue, Yang; Chang, Shuai; Zhang, Penghe; Xu, Yinghui; Peng, Chuning; Shi, Erwei

    2017-09-01

    In failure events of molded case circuit breakers, overheating of the molded case near the wiring terminals accounts for a large proportion of cases. Burnout faults have become an important factor restricting the development of molded case circuit breakers. This paper uses finite element simulation software to establish a multi-physics coupled model of a molded case circuit breaker. This model can simulate operation and reveal the temperature distribution. The simulation results show that the temperature near the wiring terminals, especially on the incoming side of the live wire, is much higher than in other areas of the molded case circuit breaker. The steady-state and transient simulation results show that the temperature at the wiring terminals rises abnormally when the contact resistance of the wiring terminals is increased. This is consistent with the frequent occurrence of burnout of the molded case in this area. Therefore, this paper holds that burnout failure of the molded case circuit breaker is mainly caused by an abnormal increase in the contact resistance of the wiring terminals.

  15. Cosmic-ray acceleration at collisionless astrophysical shocks using Monte-Carlo simulations

    CERN Document Server

    Wolff, M

    2015-01-01

    Context. The diffusive shock acceleration mechanism has been widely accepted as the acceleration mechanism for galactic cosmic rays. While self-consistent hybrid simulations have shown how power-law spectra are produced, detailed information on the interplay of diffusive particle motion and the turbulent electromagnetic fields responsible for repeated shock crossings is still elusive. Aims. The framework of test-particle theory is applied to investigate the effect of diffusive shock acceleration by inspecting the obtained cosmic-ray energy spectra. The resulting energy spectra can be obtained this way from the particle motion and, depending on the prescribed turbulence model, the influence of stochastic acceleration through plasma waves can be studied. Methods. A numerical Monte-Carlo simulation code is extended to include collisionless shock waves. This allows one to trace the trajectories of test particles while they are being accelerated. In addition, the diffusion coefficients can be obtained directly fro...
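    The cycle picture underlying such test-particle Monte-Carlo codes can be reproduced in miniature: per shock-crossing cycle a particle gains a fixed fraction of its energy and risks escaping downstream, and the competition between gain and escape sets a power-law index near 3/(r-1) for compression ratio r. A hedged sketch using Bell-style approximations, not the paper's code:

```python
import numpy as np

def dsa_integral_index(u1=0.03, r=4.0, n_particles=200_000, seed=3):
    """Test-particle Monte Carlo of diffusive shock acceleration in the
    cycle picture: per crossing cycle a relativistic particle gains
    dE/E = (4/3)(u1 - u2)/c and escapes downstream with probability
    4*u2/c (speeds in units of c)."""
    rng = np.random.default_rng(seed)
    u2 = u1 / r                              # downstream speed, compression r
    gain = 1.0 + (4.0 / 3.0) * (u1 - u2)     # energy gain factor per cycle
    p_esc = 4.0 * u2                         # escape probability per cycle
    n_cycles = rng.geometric(p_esc, n_particles)
    log_e = n_cycles * np.log(gain)          # ln(E/E0) after n cycles
    # for N(>E) ~ E^-s, ln(E/E0) is approximately exponential with rate s
    return 1.0 / log_e.mean()

s_est = dsa_integral_index()   # expect s near 3/(r-1) = 1 for a strong shock
```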

  16. Dramatic Decomposition Weakening of Simulated Faults in Carrara Marble at Seismic Slip-rates

    Science.gov (United States)

    Han, R.; Shimamoto, T.; Hirose, T.; Ree, J.

    2005-12-01

    Evolution of fault-zone strength and its weakening mechanisms during an earthquake are critical for understanding the earthquake rupture process. We report dramatic weakening of dry simulated faults in Carrara marble at seismic slip-rates, with a frictional coefficient as low as 0.04 (probably the lowest value recorded for rock friction). Calcite decomposition was confirmed by in-situ CO2 detection and other methods, and the weakening may require new weakening mechanisms other than currently suggested ones such as frictional melting, thermal pressurization and silica gel formation. We conducted rotary-shear friction experiments on Carrara marble at slip-rates (V) of 0.09-1.24 m/s and normal stresses (σn) of 2.5-13.4 MPa. To prevent thermal fracturing and to apply a high normal load, we used solid cylindrical specimens jacketed with aluminum tubes. A narrow gap was left between the two aluminum tubes to avoid metal-to-metal contact. Our main results can be summarized as follows: (1) Slip weakening occurs in all experiments except for the runs at the lowest V (0.09 m/s); (2) Steady-state friction coefficient (μss) decreases as slip-rate and normal load increase; (3) At the highest V (1.13-1.24 m/s) and σn = 7.3 MPa, the average friction coefficient at initial peak friction (μp) is 0.61 (± 0.02), but the average μss is 0.04 (± 0.01), far lower than μp; (4) Decreases in the average temperature of the sliding surfaces correspond to increases in friction, and strength recovery occurs very rapidly and completely upon cooling of the specimens; (5) XRD and EPMA data show that the gouge for the specimens at V > 0.09 m/s is composed of calcite, lime (CaO) and/or hydrated lime (Ca(OH)2); (6) CO2 gas was detected with sensors during the weakening; (7) Decomposed calcite forms a fault zone consisting of ultrafine-grained gouge, but no melt or amorphous material was identified by optical microscopy or XRD analysis. Calcite decomposition clearly indicates that temperature in the fault
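    For context, friction coefficients like those quoted above come from reducing measured torque and normal load; a minimal version of that reduction for a solid cylindrical sample, assuming shear stress is uniform over the contact. The numbers below are hypothetical, chosen only to echo the σn = 7.3 MPa, μss ≈ 0.04 case:

```python
import math

def rotary_shear_friction(torque, normal_force, r_outer):
    """Reduce torque and normal load from a rotary-shear test on a
    solid cylinder to a friction coefficient. For uniform shear stress
    tau over the contact, T = (2/3)*pi*tau*r^3, so tau follows from the
    measured torque. Generic reduction, not the apparatus software."""
    area = math.pi * r_outer**2
    sigma_n = normal_force / area                       # normal stress, Pa
    tau = 3.0 * torque / (2.0 * math.pi * r_outer**3)   # shear stress, Pa
    return tau / sigma_n

# hypothetical numbers: 25 mm radius, 14.3 kN load (~7.3 MPa), 9.5 N*m torque
mu = rotary_shear_friction(torque=9.5, normal_force=14.3e3, r_outer=0.025)
```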

  17. Constraint methods that accelerate free-energy simulations of biomolecules.

    Science.gov (United States)

    Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.
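    The "spring-like restraints" mentioned above are typically implemented as harmonic, often flat-bottom, penalty terms added to the physical potential; a minimal sketch, with the flat-bottom form and force constant as illustrative assumptions about how noisy structural knowledge is tolerated:

```python
import numpy as np

def restrained_energy(coords, base_energy_fn, restraints, k=10.0):
    """Add flat-bottom harmonic restraints on atom-pair distances to a
    physical potential: each (i, j, r0, tol) entry penalizes the pair
    only when |r - r0| exceeds tol, so small deviations consistent with
    noisy knowledge cost nothing."""
    energy = base_energy_fn(coords)
    for i, j, r0, tol in restraints:
        r = np.linalg.norm(coords[i] - coords[j])
        excess = max(0.0, abs(r - r0) - tol)    # zero inside the flat bottom
        energy += 0.5 * k * excess**2
    return energy

# two atoms 3.0 apart, restrained to 2.0 +/- 0.5 on a zero base potential
coords = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
e = restrained_energy(coords, lambda c: 0.0, [(0, 1, 2.0, 0.5)])
```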

  18. Constraint methods that accelerate free-energy simulations of biomolecules

    Energy Technology Data Exchange (ETDEWEB)

    Perez, Alberto [Laufer Center for Physical and Quantitative Biology, Stony Brook University, Stony Brook, New York 11794 (United States); MacCallum, Justin L. [Department of Chemistry, University of Calgary, Calgary, Alberta T2N 1N4 (Canada); Coutsias, Evangelos A. [Laufer Center for Physical and Quantitative Biology, Stony Brook University, Stony Brook, New York 11794 (United States); Department of Applied Mathematics, Stony Brook University, Stony Brook, New York 11794 (United States); Dill, Ken A. [Laufer Center for Physical and Quantitative Biology, Stony Brook University, Stony Brook, New York 11794 (United States); Department of Physics and Astronomy, Stony Brook University, Stony Brook, New York 11794 (United States)

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann’s law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.

  19. Acceleration of a QM/MM-QMC simulation using GPU.

    Science.gov (United States)

    Uejima, Yutaka; Terashima, Tomoharu; Maezono, Ryo

    2011-07-30

    We accelerated an ab initio molecular QMC calculation by using GPGPU. Only the bottleneck part of the calculation is replaced by a CUDA subroutine and performed on the GPU. The performance of a single-core CPU + GPU configuration is compared with that of a single-core CPU in double precision, yielding 23.6 (11.0) times faster calculations for single (double) precision treatments on the GPU. The energy deviation caused by the single precision treatment was found to be within the accuracy required in the calculation, ∼10(-5) hartree. The accelerated computational nodes mounting GPUs are combined to form a hybrid MPI cluster, on which we confirmed that the performance scales linearly with the number of nodes.

  20. Theoretical benchmarking of laser-accelerated ion fluxes by 2D-PIC simulations

    CERN Document Server

    Mackenroth, Felix; Marklund, Mattias

    2016-01-01

    There currently exists a number of different schemes for laser based ion acceleration in the literature. Some of these schemes are also partly overlapping, making a clear distinction between the schemes difficult in certain parameter regimes. Here, we provide a systematic numerical comparison between the following schemes and their analytical models: light-sail acceleration, Coulomb explosions, hole boring acceleration, and target normal sheath acceleration (TNSA). We study realistic laser parameters and various different target designs, each optimized for one of the acceleration schemes, respectively. As a means of comparing the schemes, we compute the ion current density generated at different laser powers, using two-dimensional particle-in-cell (PIC) simulations, and benchmark the particular analytical models for the corresponding schemes against the numerical results. Finally, we discuss the consequences for attaining high fluxes through the studied laser ion-acceleration schemes.

  1. ADHOCFTSIM: A Simulator of Fault Tolerence In the AD-HOC Networks

    Directory of Open Access Journals (Sweden)

    Esma Insaf Djebbar

    2010-11-01

    Full Text Available The flexibility and diversity of Wireless Mobile Networks offer many opportunities that are not always taken into account by existing distributed systems. In particular, the proliferation of mobile users and the use of mobile Ad-Hoc networks promote the formation of collaborative groups to share resources. We propose a solution for the management of fault tolerance in Ad-Hoc networks, combining the functions needed for better availability of data. Our contribution takes into account the characteristics of mobile terminals in order to reduce the consumption of critical resources such as energy, and to minimize the loss of information. Our solution is based on the formation of clusters, where each cluster is managed by a leader node. This solution is mainly composed of four sub-services, namely: prediction, replication, management of nodes in the cluster, and supervision. We have shown, using several sets of simulations, that the benefit of our solution is twofold: it minimizes energy consumption, which increases the lifetime of the network, and it better handles lost requests.

  2. Permeability Evolution With Shearing of Simulated Faults in Unconventional Shale Reservoirs

    Science.gov (United States)

    Wu, W.; Gensterblum, Y.; Reece, J. S.; Zoback, M. D.

    2016-12-01

    Horizontal drilling and multi-stage hydraulic fracturing can lead to fault reactivation, a process thought to influence production from extremely low-permeability unconventional reservoirs. A fundamental understanding of permeability changes with shear could be helpful for optimizing reservoir stimulation strategies. We examined the effects of confining pressure and frictional sliding on fault permeability in Eagle Ford shale samples. We performed shear-flow experiments in a triaxial apparatus on four shale samples: (1) a clay-rich sample with a sawcut fault, (2) a calcite-rich sample with a sawcut fault, (3) a clay-rich sample with a natural fault, and (4) a calcite-rich sample with a natural fault. We used pressure pulse-decay and steady-state flow techniques to measure fault permeability. Initial pore and confining pressures were set to 2.5 MPa and 5.0 MPa, respectively. To investigate the influence of confining pressure on fault permeability, we incrementally raised and lowered the confining pressure and measured permeability at different effective stresses. To examine the effect of frictional sliding on fault permeability, we slid the samples four times at a constant shear displacement rate of 0.043 mm/min for 10 minutes each and measured fault permeability before and after frictional sliding. We used a 3D laser scanner to image fault surface topography before and after the experiment. Our results show that frictional sliding can enhance fault permeability at low confining pressures (e.g., ≤5.0 MPa) and reduce fault permeability at high confining pressures (e.g., ≥7.5 MPa). The permeability of sawcut faults almost fully recovers when confining pressure returns to the initial value, and increases with sliding due to asperity damage and subsequent dilation at low confining pressures. In contrast, the permeability of natural faults does not fully recover. It initially increases with sliding, but then decreases with further sliding, most likely due to fault gouge blocking fluid
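    The pressure pulse-decay technique mentioned above reduces to fitting an exponential decay of the differential pressure and inverting for permeability; a sketch using the standard Brace-type relation, with fluid properties, sample geometry, and reservoir volumes below chosen as illustrative values rather than the paper's setup:

```python
import numpy as np

def pulse_decay_permeability(t, dp, mu, c_f, L, A, v_up, v_down):
    """Brace-type pulse-decay reduction: the differential pressure obeys
    dp(t) = dp0*exp(-alpha*t) with
    alpha = (k*A/(mu*c_f*L)) * (1/v_up + 1/v_down),
    so fit alpha from the log-linear decay and invert for k."""
    slope, _ = np.polyfit(t, np.log(dp), 1)      # slope = -alpha
    return -slope * mu * c_f * L / (A * (1.0 / v_up + 1.0 / v_down))

# synthetic round trip: generate a decay for a known k and recover it
k_true = 1e-18                              # m^2 (nanodarcy range)
mu, c_f, L, A = 1e-3, 4.5e-10, 0.05, 5e-4   # Pa*s, 1/Pa, m, m^2
v_up = v_down = 1e-5                        # upstream/downstream volumes, m^3
alpha = k_true * A / (mu * c_f * L) * (1.0 / v_up + 1.0 / v_down)
t = np.linspace(0.0, 3.0 / alpha, 50)
dp = 0.5e6 * np.exp(-alpha * t)             # 0.5 MPa initial pulse
k_est = pulse_decay_permeability(t, dp, mu, c_f, L, A, v_up, v_down)
```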

  3. Healing and sliding stability of simulated anhydrite fault gouge : Effects of water, temperature and CO2

    NARCIS (Netherlands)

    Pluymakers, Anne M H|info:eu-repo/dai/nl/357400224; Niemeijer, André R.|info:eu-repo/dai/nl/370832132

    2015-01-01

    Anhydrite-bearing faults are currently of interest to 1) CO2-storage sites capped by anhydrite caprocks (such as those found in the North Sea) and 2) seismically active faults in evaporite formations (such as the Italian Apennines). In order to assess the likelihood of fault reactivation, the mode

  4. Effects of dimensionality on computer simulations of laser-ion acceleration: When are three-dimensional simulations needed?

    Science.gov (United States)

    Yin, L.; Stark, D. J.; Albright, B. J.

    2016-10-01

    Laser-ion acceleration via relativistic induced transparency provides an effective means to accelerate ions to tens of MeV/nucleon over distances of tens of μm. These ion sources may enable a host of applications, from fast ignition and x-ray sources to medical treatments. Understanding whether two-dimensional (2D) PIC simulations can capture the relevant 3D physics is important to the development of a predictive capability for short-pulse laser-ion acceleration and for economical design studies for applications of these accelerators. In this work, PIC simulations are performed in 3D and in 2D where the direction of the laser polarization is in the simulation plane (2D-P) and out-of-plane (2D-S). Our studies indicate modeling sensitivity to dimensionality and laser polarization. Differences arise in energy partition, electron heating, ion peak energy, and ion spectral shape. 2D-P simulations are found to over-predict electron heating and ion peak energy. The origin of these differences and the extent to which 2D simulations may capture the key acceleration dynamics will be discussed. Work performed under the auspices of the U.S. DOE by the LANS, LLC, Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396. Funding provided by the Los Alamos National Laboratory Directed Research and Development Program.

  5. Simulations and Vacuum Tests of a CLIC Accelerating Structure

    CERN Document Server

    Garion, C

    2011-01-01

    The Compact LInear Collider, under study, is based on room temperature high gradient structures. The vacuum specificities of these cavities are low conductance, large surface areas and a non-baked system. The main issue is to reach UHV conditions (typically 10-7 Pa) in a system where the residual vacuum is driven by water outgassing. A finite element model based on an analogy thermal/vacuum has been built to estimate the vacuum profile in an accelerating structure. Vacuum tests are carried out in a dedicated set-up, the vacuum performances of different configurations are presented and compared with the predictions.

  6. Lattice Boltzmann accelerated direct simulation Monte Carlo for dilute gas flow simulations.

    Science.gov (United States)

    Di Staso, G; Clercx, H J H; Succi, S; Toschi, F

    2016-11-13

    Hybrid particle-continuum computational frameworks permit the simulation of gas flows by locally adjusting the resolution to the degree of non-equilibrium displayed by the flow in different regions of space and time. In this work, we present a new scheme that couples the direct simulation Monte Carlo (DSMC) with the lattice Boltzmann (LB) method in the limit of isothermal flows. The former handles strong non-equilibrium effects, as they typically occur in the vicinity of solid boundaries, whereas the latter is in charge of the bulk flow, where non-equilibrium can be dealt with perturbatively, i.e. according to Navier-Stokes hydrodynamics. The proposed concurrent multiscale method is applied to the dilute gas Couette flow, showing major computational gains when compared with the full DSMC scenarios. In addition, it is shown that the coupling with LB in the bulk flow can speed up the DSMC treatment of the Knudsen layer with respect to the full DSMC case. In other words, LB acts as a DSMC accelerator. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2016 The Author(s).
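    A common way to realize such a hybrid coupling is to flag regions where the continuum description breaks down, e.g. via a gradient-length-local Knudsen number, and assign the particle solver there and the continuum solver elsewhere. The criterion and threshold below are a generic sketch of this idea, not necessarily the authors' coupling rule:

```python
import numpy as np

def assign_solvers(density, mean_free_path, dx, kn_threshold=0.05):
    """Flag cells whose gradient-length-local Knudsen number
    Kn_GL = lambda*|grad(rho)|/rho exceeds a breakdown threshold for
    DSMC, and let LB handle the rest of the domain."""
    grad = np.gradient(density, dx)
    kn_local = mean_free_path * np.abs(grad) / density
    return np.where(kn_local > kn_threshold, "DSMC", "LB")

# density profile with a steep near-wall region (Knudsen layer at x = 0)
x = np.linspace(0.0, 1.0, 200)
rho = 1.0 + 0.5 * np.exp(-x / 0.02)
solver = assign_solvers(rho, mean_free_path=0.01, dx=x[1] - x[0])
```

With these illustrative numbers the steep near-wall cells are assigned to DSMC and the smooth bulk to LB, mirroring the division of labour described in the abstract.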

  7. Lattice Boltzmann accelerated direct simulation Monte Carlo for dilute gas flow simulations

    Science.gov (United States)

    Di Staso, G.; Clercx, H. J. H.; Succi, S.; Toschi, F.

    2016-11-01

    Hybrid particle-continuum computational frameworks permit the simulation of gas flows by locally adjusting the resolution to the degree of non-equilibrium displayed by the flow in different regions of space and time. In this work, we present a new scheme that couples the direct simulation Monte Carlo (DSMC) with the lattice Boltzmann (LB) method in the limit of isothermal flows. The former handles strong non-equilibrium effects, as they typically occur in the vicinity of solid boundaries, whereas the latter is in charge of the bulk flow, where non-equilibrium can be dealt with perturbatively, i.e. according to Navier-Stokes hydrodynamics. The proposed concurrent multiscale method is applied to the dilute gas Couette flow, showing major computational gains when compared with the full DSMC scenarios. In addition, it is shown that the coupling with LB in the bulk flow can speed up the DSMC treatment of the Knudsen layer with respect to the full DSMC case. In other words, LB acts as a DSMC accelerator. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'.

  8. Determining DfT Hardware by VHDL-AMS Fault Simulation for Biological Micro-Electronic Fluidic Arrays

    NARCIS (Netherlands)

    Kerkhoff, H.G.; Zhang, X.; Liu, H.; Richardson, A.; Nouet, P.; Azais, F.

    2005-01-01

    Interest in microelectronic fluidic arrays for biomedical applications, such as DNA determination, is rapidly increasing. In order to evaluate these systems in terms of required Design-for-Test structures, fault simulations in both the fluidic and electronic domains are necessary. VHDL-AMS can be used

  9. Modelling and Numerical Simulations of In-Air Reverberation Images for Fault Detection in Medical Ultrasonic Transducers: A Feasibility Study

    Directory of Open Access Journals (Sweden)

    W. Kochański

    2015-01-01

    Full Text Available A simplified two-dimensional finite element model which simulates the in-air reverberation image produced by medical ultrasonic transducers has been developed. The model simulates a linear array consisting of 128 PZT-5A crystals, a tungsten-epoxy backing layer, an Araldite matching layer, and a Perspex lens layer. The thickness of the crystal layer is chosen to simulate pulses centered at 4 MHz. The model is used to investigate whether changes in the electromechanical properties of the individual transducer layers (backing layer, crystal layer, matching layer, and lens layer have an effect on the simulated in-air reverberation image generated. Changes in the electromechanical properties are designed to simulate typical medical transducer faults such as crystal drop-out, lens delamination, and deterioration in piezoelectric efficiency. The simulations demonstrate that fault-related changes in transducer behaviour can be observed in the simulated in-air reverberation image pattern. This exploratory approach may help to provide insight into deterioration in transducer performance and help with early detection of faults.

  10. Numerical simulation of the LAGEOS thermal behavior and thermal accelerations

    NARCIS (Netherlands)

    Andrés, J.I.; Noomen, R.; Vecellio None, S.

    2006-01-01

    The temperature distribution throughout the LAGEOS satellites is simulated numerically with the objective to determine the resulting thermal force. The different elements and materials comprising the spacecraft, with their energy transfer, have been modeled with unprecedented detail. The radiation i

  11. Dynamic fault trees resolution: A conscious trade-off between analytical and simulative approaches

    Energy Technology Data Exchange (ETDEWEB)

    Chiacchio, F., E-mail: chiacchio@dmi.unict.it [Dipartimento di Matematica e Informatica-DMI, Universita degli Studi di Catania (Italy); Compagno, L., E-mail: lco@diim.unict.it [Dipartimento di Ingegneria Industriale e Meccanica-DIIM, Universita degli Studi di Catania (Italy); D' Urso, D., E-mail: ddurso@diim.unict.it [Dipartimento di Ingegneria Industriale e Meccanica-DIIM, Universita degli Studi di Catania (Italy); Manno, G., E-mail: gmanno@dmi.unict.it [Dipartimento di Matematica e Informatica-DMI, Universita degli Studi di Catania (Italy); Trapani, N., E-mail: ntrapani@diim.unict.it [Dipartimento di Ingegneria Industriale e Meccanica-DIIM, Universita degli Studi di Catania (Italy)

    2011-11-15

    Safety assessment in industrial plants with 'major hazards' requires a rigorous combination of both qualitative and quantitative techniques of RAMS. Quantitative assessment can be executed by static or dynamic tools of dependability but, while the former are not sufficient to model time-dependent activities exhaustively, the latter are still too complex to be used with success by operators in the industrial field. In this paper we present a review of the procedures that can be used to solve quite general dynamic fault trees (DFT) that present a combination of the following characteristics: time dependencies, repeated events and generalized probability failure. Theoretical foundations of DFT theory are discussed and the limits of the best-known DFT tools are presented. Introducing the concept of weak and strong hierarchy, the well-known modular approach is adapted to study a more generic class of DFT. In order to quantify the approximations introduced, an ad-hoc simulative environment is used as benchmark. In the end, a DFT of an accidental scenario is analyzed with both analytical and simulative approaches. Final results are in good agreement and prove how it is possible to implement a suitable Monte Carlo simulation with the features of a spreadsheet environment, able to overcome the limits of the analytical tools, thus encouraging further research along this direction. Highlights: theoretical foundations of the DFT are reviewed and the limits of the analytical techniques are assessed; the hierarchical technique is discussed, introducing the concepts of weak and strong equivalence; a simulative environment developed with a spreadsheet electronic document is tested; a comparison between the simulative and the analytical results is performed; and a classification of which technique is more suitable, depending on the complexity of the DFT, is provided.
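    The kind of spreadsheet-style Monte Carlo the authors benchmark can be illustrated on the simplest dynamic gate, the priority-AND (PAND), which fails only if its inputs fail in a given order; the component rates and mission time below are hypothetical, not from the paper's case study:

```python
import random

def pand_unreliability(lam_a=1e-3, lam_b=2e-3, mission=1000.0,
                       n_trials=200_000, seed=4):
    """Monte Carlo estimate of a PAND gate: the gate fails only if
    component A fails before component B and both fail within the
    mission time. Exponential component lifetimes are assumed."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        ta = rng.expovariate(lam_a)
        tb = rng.expovariate(lam_b)
        if ta < tb <= mission:       # ordered failure within the mission
            failures += 1
    return failures / n_trials

q = pand_unreliability()   # the closed form gives about 0.231 for these rates
```

The same sampling loop translates line-for-line into spreadsheet formulas (one row per trial), which is the point of the comparison made in the abstract.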

  12. Simulation of speed control in acceleration mode of a heavy duty vehicle; Ogatasha no kasokuji ni okeru shasoku seigyo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Endo, S.; Ukawa, H. [Isuzu Advanced Engineering Center, Ltd., Tokyo (Japan); Sanada, K.; Kitagawa, A. [Tokyo Institute of Technology, Tokyo (Japan)

    1997-10-01

    A control law for the speed of a heavy duty vehicle in acceleration mode is presented, an extended version of a control law for deceleration mode previously proposed by the authors. The control law is based on a constant acceleration strategy. Using the control law, a target velocity and a target distance can be attained. The control laws for the acceleration and deceleration modes can be represented by a unified mathematical formula. Some simulation results are shown to demonstrate the control performance. 7 refs., 9 figs., 2 tabs.
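    The constant-acceleration strategy reduces to one kinematic relation, v_target^2 = v0^2 + 2*a*d, which covers both acceleration (a > 0) and deceleration (a < 0); a hypothetical helper illustrating that unified form, not the authors' controller:

```python
def constant_accel_profile(v0, v_target, distance):
    """Pick the single acceleration that reaches the target velocity
    exactly at the target distance, from v^2 = v0^2 + 2*a*d, and the
    time at which the target is reached. The same expression handles
    deceleration, where a comes out negative."""
    a = (v_target**2 - v0**2) / (2.0 * distance)
    t_end = (v_target - v0) / a if a != 0.0 else distance / v0
    return a, t_end

# accelerate a heavy duty vehicle from 10 m/s to 20 m/s over 150 m
a, t_end = constant_accel_profile(10.0, 20.0, 150.0)
```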

  13. Acceleration of Radiance for Lighting Simulation by Using Parallel Computing with OpenCL

    Energy Technology Data Exchange (ETDEWEB)

    Zuo, Wangda; McNeil, Andrew; Wetter, Michael; Lee, Eleanor

    2011-09-06

    We report on the acceleration of annual daylighting simulations for fenestration systems in the Radiance ray-tracing program. The algorithm was optimized to reduce both the redundant data input/output operations and the floating-point operations. To further accelerate the simulation speed, the calculation for matrix multiplications was implemented using parallel computing on a graphics processing unit. We used OpenCL, which is a cross-platform parallel programming language. Numerical experiments show that the combination of the above measures can speed up the annual daylighting simulations 101.7 times or 28.6 times when the sky vector has 146 or 2306 elements, respectively.
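    The matrix multiplications being accelerated arise from the matrix formulation of annual daylighting, where the sensor illuminances are a chain of matrix products applied to each sky vector; stacking the hourly sky vectors turns the year into one dense matrix-matrix product, the operation that maps well onto a GPU. The matrix dimensions below are illustrative (146-element sky vectors, as in one of the paper's cases):

```python
import numpy as np

def annual_illuminance(V, T, D, S):
    """Three-phase-style matrix formulation: illuminance = V @ T @ D @ s
    per sky vector s. Precomputing V @ T @ D collapses the 8760 hourly
    solves into a single dense matrix-matrix product."""
    combined = V @ T @ D            # precompute once per configuration
    return combined @ S             # all timesteps in one product

rng = np.random.default_rng(5)
n_sensor, n_window, n_sky, n_hours = 100, 145, 146, 8760
V = rng.random((n_sensor, n_window))   # view matrix (sensors x window patches)
T = rng.random((n_window, n_window))   # transmission (BSDF) matrix
D = rng.random((n_window, n_sky))      # daylight matrix
S = rng.random((n_sky, n_hours))       # 146-element sky vectors, hourly
E = annual_illuminance(V, T, D, S)
```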

  14. Frictional evolution, acoustic emissions activity, and off-fault damage in simulated faults sheared at seismic slip rates

    Science.gov (United States)

    Passelègue, François. X.; Spagnuolo, Elena; Violay, Marie; Nielsen, Stefan; Di Toro, Giulio; Schubnel, Alexandre

    2016-10-01

    We present a series of high-velocity friction tests conducted on Westerly granite, using the Slow to HIgh Velocity Apparatus (SHIVA) installed at Istituto Nazionale di Geofisica e Vulcanologia Roma with acoustic emissions (AEs) monitored at high frequency (4 MHz). Both atmospheric humidity and pore fluid (water) pressure conditions were tested, under effective normal stress σneff in the range 5-20 MPa and at target sliding velocities Vs in the range 0.003-3 m/s. Under atmospheric humidity two consecutive friction drops were observed. The first one is related to flash weakening, and the second one to the formation and growth of a continuous layer of melt in the slip zone. In the presence of fluid, a single drop in friction was observed. Average values of fracture energy are independent of effective normal stress and sliding velocity. However, measurements of elastic wave velocities on the sheared samples suggested that larger damage was induced for 0.1 < Vs < 0.3 m/s. This observation is supported by AEs recorded during the test, most of which were detected after the initiation of the second friction drop, once the fault surface temperature was high. Some AEs were detected up to a few seconds after the end of the experiments, indicating thermal rather than mechanical cracking. In addition, the presence of pore water delayed the onset of AEs through cooling effects and by reducing the heat produced, supporting the link between AEs and the production and diffusion of heat during sliding. Using a thermoelastic crack model developed by Fredrich and Wong (1986), we confirm that damage may be induced by heat diffusion. Indeed, our theoretical results accurately predict the amount of shortening and the shortening rate, supporting the idea that gouge production and gouge comminution are in fact largely controlled by thermal cracking. Finally, we discuss the contribution of thermal cracking to the seismic energy balance. 
In fact, while a dichotomy exists in the literature regarding

  15. Prediction of strong acceleration motion depended on focal mechanism; Shingen mechanism wo koryoshita jishindo yosoku ni tsuite

    Energy Technology Data Exchange (ETDEWEB)

    Kaneda, Y.; Ejiri, J. [Obayashi Corp., Tokyo (Japan)

    1996-10-01

    This paper describes simulation results of strong acceleration motion with varying uncertain fault parameters, mainly for a fault model of the Hyogo-ken Nanbu earthquake. Based on the fault parameters, the strong acceleration motion was simulated using the radiation patterns and the rupture-time differences of composite faults as parameters. A statistical waveform composition method was used for the simulation. For the theoretical radiation patterns, directivity depending on the strike of the faults was pronounced, and the maximum acceleration exceeded 220 gal. For the homogeneous radiation patterns, by contrast, the maximum accelerations were distributed isotropically around the fault. For variations in the maximum acceleration and the predominant frequency due to the rupture-time differences of three faults, the ratio of maximum to minimum response spectral values was about 1.7. From the viewpoint of seismic disaster prevention, underground structures including potential faults and irregular properties can be assessed using this simulation. The significance of predicting strong acceleration motion was also demonstrated by treating uncertain factors, such as the rupture times of composite faults, as parameters. 4 refs., 4 figs., 1 tab.

  16. Numerical Simulation of TSP Fault Model

    Institute of Scientific and Technical Information of China (English)

    林义; 刘争平; 王朝令; 肖缔

    2015-01-01

    During tunnel excavation, various geological problems may be encountered, faults and weak zones being the most common. Tunnel seismic prediction (TSP) is currently the main technique for tunnel geological forecasting. Although TSP technology is widely applied, research on it has focused mainly on engineering case studies, with little work based on forward modeling. We use the finite element method to simulate the tunnel seismic wave field, combine wave-field snapshots with time records to study how faults affect the propagation of the tunnel seismic wave field, and invert the time records of a fault-bearing model to obtain the velocity image and reflection-interface positions of the numerical model. The processing results show that the fault position in the velocity image obtained with the default settings of the TSPwin software agrees with the position set in the model, and that, according to the reflection-interface image, P-wave prediction is more accurate for a layered model with an anomalous velocity zone. The study also shows that the TSP system has good noise immunity. These conclusions from the numerical simulation are finally verified by processing an engineering case.

  17. FEM Techniques for High Stress Detection in Accelerated Fatigue Simulation

    Science.gov (United States)

    Veltri, M.

    2016-09-01

    This work presents the theory and a numerical validation study in support of a novel method for a priori identification of fatigue-critical regions, with the aim of accelerating durability design in large FEM problems. The investigation is placed in the context of modern full-body structural durability analysis, where a computationally intensive dynamic solution could be required to identify areas with potential for fatigue damage initiation. The early detection of fatigue-critical areas can drive a simplification of the problem size, leading to considerable improvement in solution time and model handling while allowing the critical areas to be processed in higher detail. The proposed technique is applied to a real-life industrial case in a comparative assessment with established practices. Synthetic damage prediction quantification and visualization techniques allow for a quick and efficient comparison between methods, outlining potential application benefits and boundaries.

  18. Availability simulation software adaptation to the IFMIF accelerator facility RAMI analyses

    Energy Technology Data Exchange (ETDEWEB)

    Bargalló, Enric, E-mail: enric.bargallo-font@upc.edu [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Sureda, Pere Joan [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Arroyo, Jose Manuel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Abal, Javier; De Blas, Alfredo; Dies, Javier; Tapia, Carlos [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Mollá, Joaquín; Ibarra, Ángel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain)

    2014-10-15

    Highlights: • The reason why IFMIF RAMI analyses need a simulation is explained. • Changes, modifications and software validations done to AvailSim are described. • First IFMIF RAMI results obtained with AvailSim 2.0 are shown. • Implications of AvailSim 2.0 for IFMIF RAMI analyses are evaluated. - Abstract: Several problems were found when using generic reliability tools to perform RAMI (Reliability, Availability, Maintainability, Inspectability) studies for the IFMIF (International Fusion Materials Irradiation Facility) accelerator. A dedicated simulation tool was necessary to model properly the complexity of the accelerator facility. AvailSim, the availability simulation software used for the International Linear Collider (ILC), became an excellent option to fulfill the RAMI analysis needs. Nevertheless, this software needed to be adapted and modified to simulate the IFMIF accelerator facility in a way useful for the RAMI analyses in the current design phase. Furthermore, some improvements and new features have been added to the software. This software has become a great tool to simulate the peculiarities of the IFMIF accelerator facility, allowing a realistic availability simulation to be obtained. Degraded operation simulation and maintenance strategies are the most relevant features. In this paper, the necessity of this software, the main modifications to improve it and its adaptation to the IFMIF RAMI analysis are described. Moreover, first results obtained with AvailSim 2.0 and a comparison with previous results are shown.

  19. Reliability analysis of a wastewater treatment plant using fault tree analysis and Monte Carlo simulation.

    Science.gov (United States)

    Taheriyoun, Masoud; Moradinejad, Saber

    2015-01-01

    The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. Main factors affecting the performance of the wastewater treatment plant are the variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the problem of reliability was studied on the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with the violation of allowable effluent BOD as the top event in the diagram, and the deficiencies of the system were identified based on the developed model. Some basic events are operator's mistakes, physical damage, and design problems. The analytical methods are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the developed FTA model in this study considerably improves the insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
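
As a minimal illustration of the Monte Carlo side of such an FTA, the sketch below estimates a top-event probability from basic-event probabilities. The gate structure, event names and probability values are hypothetical placeholders, not those of the Tehran West Town plant:

```python
import random

# Hypothetical basic-event failure probabilities (per observation period);
# the event names and values are illustrative, not from the cited study.
basic_events = {
    "operator_error": 0.05,
    "mechanical_failure": 0.02,
    "design_deficiency": 0.01,
    "sewer_overload": 0.03,
}

def top_event_occurs(draws):
    """Example fault-tree logic: effluent BOD violation if an operator error
    coincides with any equipment/system fault (AND gate over an OR gate)."""
    return draws["operator_error"] and (
        draws["mechanical_failure"]
        or draws["design_deficiency"]
        or draws["sewer_overload"]
    )

def monte_carlo_top_probability(n_trials=100_000, seed=42):
    """Estimate P(top event) by repeatedly sampling all basic events."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        draws = {e: rng.random() < p for e, p in basic_events.items()}
        if top_event_occurs(draws):
            hits += 1
    return hits / n_trials

print(monte_carlo_top_probability())
```

For this small tree the analytic minimal-cut-set result, 0.05 × (1 − 0.98 × 0.99 × 0.97) ≈ 0.0029, is available as a cross-check; Monte Carlo becomes the practical option once basic events are numerous or time-dependent.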

  20. Simulations of ion acceleration at non-relativistic shocks: ii) magnetic field amplification and particle diffusion

    CERN Document Server

    Caprioli, Damiano

    2014-01-01

    We use large hybrid (kinetic ions-fluid electrons) simulations to study ion acceleration and generation of magnetic turbulence due to the streaming of energetic particles that are self-consistently accelerated at non-relativistic shocks. When acceleration is efficient (at quasi-parallel shocks), we find that the magnetic field develops transverse components and is significantly amplified in the pre-shock medium. The total amplification factor is larger than 10 for shocks with Mach number $M=100$, and scales with the square root of $M$. We find that in the shock precursor the energy spectral density of excited magnetic turbulence is proportional to the spectral energy distribution of accelerated particles at the corresponding resonant momenta, in good agreement with the predictions of the quasilinear theory of diffusive shock acceleration. We discuss the role of Bell's instability, which is predicted and found to grow faster than the resonant instability in shocks with $M\\gtrsim 30$. Ahead of these strong shocks we distinguis...

  1. Computing elastic‐rebound‐motivated earthquake probabilities in unsegmented fault models: a new methodology supported by physics‐based simulators

    Science.gov (United States)

    Field, Edward H.

    2015-01-01

    A methodology is presented for computing elastic‐rebound‐based probabilities in an unsegmented fault or fault system, which involves computing along‐fault averages of renewal‐model parameters. The approach is less biased and more self‐consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude‐dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long‐term system behavior, which is generally found to be consistent with that of physics‐based earthquake simulators. Results cast doubt that recurrence‐interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long‐term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first‐order elastic‐rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
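
A toy version of the Monte Carlo renewal calculation underlying such probabilities can be sketched as follows; the lognormal recurrence model and all parameter values (mean recurrence interval, aperiodicity, open interval, forecast window) are illustrative assumptions, not values from the methodology described above:

```python
import math
import random

def conditional_prob(mean_ri, aperiodicity, t_open, dt, n=200_000, seed=1):
    """Monte Carlo estimate of P(next rupture within dt | open interval t_open)
    under a lognormal renewal model.  Aperiodicity is treated as the
    coefficient of variation of the recurrence-interval distribution."""
    # Convert mean and CoV to the underlying lognormal parameters.
    sigma2 = math.log(1.0 + aperiodicity**2)
    mu = math.log(mean_ri) - 0.5 * sigma2
    sigma = math.sqrt(sigma2)
    rng = random.Random(seed)
    survived = hit = 0
    for _ in range(n):
        ri = rng.lognormvariate(mu, sigma)
        if ri > t_open:              # condition on the elapsed open interval
            survived += 1
            if ri <= t_open + dt:
                hit += 1
    return hit / survived if survived else float("nan")

# e.g. 150-yr mean recurrence, aperiodicity 0.5, 100 yr since the last event:
# probability of a rupture in the next 30 yr
print(conditional_prob(mean_ri=150.0, aperiodicity=0.5, t_open=100.0, dt=30.0))
```

In the methodology above, such renewal-model parameters would be along-fault averages rather than point values, and the "probability of what" question is posed for ruptures rather than offsets at a point.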

  2. Quench simulations for superconducting elements in the LHC accelerator

    CERN Document Server

    Sonnemann, F

    2000-01-01

    The design of the protection system for the superconducting elements in an accelerator such as the Large Hadron Collider (LHC), now under construction at CERN, requires a detailed understanding of the thermo-hydraulic and electrodynamic processes during a quench. A numerical program (SPQR - Simulation Program for Quench Research) has been developed to evaluate temperature and voltage distributions during a quench as a function of space and time. The quench process is simulated by approximating the heat balance equation with the finite difference method in the presence of variable cooling and powering conditions. The simulation predicts quench propagation along a superconducting cable, forced quenching with heaters, impact of eddy currents induced by a magnetic field change, and heat transfer through an insulation layer into helium, an adjacent conductor or other material. The simulation studies allowed a better understanding of experimental quench data and were used for determining the adequ...
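
As a rough illustration of the finite-difference treatment of the heat balance equation, the sketch below propagates a 1D normal zone along a conductor. All material parameters are invented placeholders, and the model omits the variable cooling, powering and helium physics that SPQR treats:

```python
def simulate_quench_1d(n=200, dx=0.01, dt=1e-4, steps=2000,
                      alpha=1e-3, t_crit=9.2, t_bath=1.9, heat=5e3):
    """Minimal explicit finite-difference sketch of a 1D heat balance
    with Joule heating in the normal (quenched) zone:
        dT/dt = alpha * d2T/dx2 + q(T),  q(T) = heat if T > t_crit else 0.
    All parameter values are illustrative, not calibrated SPQR inputs."""
    T = [t_bath] * n
    T[n // 2] = 12.0                 # local hot spot initiates the quench
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, n - 1):
            lap = (T[i - 1] - 2 * T[i] + T[i + 1]) / dx**2
            q = heat if T[i] > t_crit else 0.0   # Joule heating once normal
            Tn[i] = T[i] + dt * (alpha * lap + q)
        T = Tn
    # Normal-zone length = number of cells above t_crit times cell size.
    return sum(1 for x in T if x > t_crit) * dx

print(simulate_quench_1d())
```

The explicit scheme is stable here because alpha·dt/dx² ≪ 0.5; a production code like SPQR additionally handles temperature-dependent material properties and heat transfer into helium and adjacent conductors.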

  3. Modeling of fluid injection and withdrawal induced fault activation using discrete element based hydro-mechanical and dynamic coupled simulator

    Science.gov (United States)

    Yoon, Jeoung Seok; Zang, Arno; Zimmermann, Günter; Stephansson, Ove

    2016-04-01

    Operation of fluid injection into and withdrawal from the subsurface for various purposes has been known to induce earthquakes. Such operations include hydraulic fracturing for shale gas extraction, hydraulic stimulation for Enhanced Geothermal System development, and wastewater disposal. Among these, several damaging earthquakes have been reported in the USA, in particular in areas of high-rate, large-volume wastewater injection [1], mostly involving natural fault systems. Oil and gas production have been known to induce earthquakes where pore fluid pressure decreases, in some cases by several tens of megapascals. One recent seismic event occurred in November 2013 near Azle, Texas, where a series of earthquakes began along a mapped ancient fault system [2]. That study found that a combination of brine production and wastewater injection near the fault generated subsurface pressures sufficient to induce earthquakes on near-critically stressed faults. This numerical study aims at investigating the occurrence mechanisms of such earthquakes induced by fluid injection [3] and withdrawal using a coupled hydro-geomechanical dynamic simulator (Itasca's Particle Flow Code 2D). Generic models are set up to investigate the sensitivity of several parameters, including fault orientation, frictional properties, distance from the injection well to the fault, and amount of fluid withdrawal around the injection well, to the response of the fault systems and the activation magnitude. Fault slip movement over time in relation to the diffusion of pore pressure is analyzed in detail. Moreover, correlations between the spatial distribution of pore pressure change and the locations of induced seismic events and fault slip rate are investigated. References [1] Keranen KM, Weingarten M, Albers GA, Bekins BA, Ge S, 2014. Sharp increase in central Oklahoma seismicity since 2008 induced by massive wastewater injection, Science 345, 448, DOI: 10.1126/science.1255802.
[2] Hornbach MJ, DeShon HR

  4. BEAM DYNAMICS SIMULATIONS OF SARAF ACCELERATOR INCLUDING ERROR PROPAGATION AND IMPLICATIONS FOR THE EURISOL DRIVER

    CERN Document Server

    J. Rodnizki, D. Berkovits, K. Lavie, I. Mardor, A. Shor and Y. Yanay (Soreq NRC, Yavne), K. Dunkel, C. Piel (ACCEL, Bergisch Gladbach), A. Facco (INFN/LNL, Legnaro, Padova), V. Zviagintsev (TRIUMF, Vancouver)

    Beam dynamics simulations of the SARAF (Soreq Applied Research Accelerator Facility) superconducting RF linear accelerator have been performed in order to establish the accelerator design. The multi-particle simulation includes 3D realistic electromagnetic field distributions, space charge forces and fabrication, misalignment and operation errors. A 4 mA proton or deuteron beam is accelerated up to 40 MeV with a moderate rms emittance growth and a high real-estate gradient of 2 MeV/m. An envelope of 40,000 macro-particles is kept under a radius of 1.1 cm, well below the beam pipe bore radius. The accelerator design of SARAF is proposed as an injector for the EURISOL driver accelerator. The Accel 176 MHz β0=0.09 and β0=0.15 HWR lattice was extended to 90 MeV based on the LNL 352 MHz β0=0.31 HWR. The matching between both lattices ensures a smooth transition and the possibility to extend the accelerator to the required EURISOL ion energy.

  5. Western fault zone of South China Sea and its physical simulation evidences

    Institute of Scientific and Technical Information of China (English)

    SUN Longtao; SUN Zhen; ZHAN Wenhuan; SUN Zongxun; ZHAO Minghui; XIA Shaohong

    2006-01-01

    The western fault zone of the South China Sea is a strike-slip fault system consisting of four typical strike-slip faults. It is the western border of the South China Sea. The formation of the system is due to the extrusion of the Indo-China Peninsula caused by the collision of India with Tibet and the spreading of the South China Sea in the Cenozoic. There are five episodes of tectonic movement along this fault zone, which plays an important role in the Cenozoic evolution of the South China Sea. From the physical modeling experiments, it can be seen that the strike-slip fault undergoes sinistral and dextral movement due to changes in the relative movement velocity between the South China Sea block and the Indo-China block. The fault zone controls the evolution of the pull-apart basins located in the west of the South China Sea.

  6. Accelerated discovery of OLED materials through atomic-scale simulation

    Science.gov (United States)

    Halls, Mathew D.; Giesen, David J.; Hughes, Thomas F.; Goldberg, Alexander; Cao, Yixiang; Kwak, H. Shaun; Mustard, Thomas J.; Gavartin, Jacob

    2016-09-01

    Organic light-emitting diode (OLED) devices are under widespread investigation to displace or complement inorganic optoelectronic devices for solid-state lighting and active displays. The materials in these devices are selected or designed according to their intrinsic and extrinsic electronic properties with concern for efficient charge injection and transport, and desired stability and light emission characteristics. The chemical design space for OLED materials is enormous and there is need for the development of computational approaches to help identify the most promising solutions for experimental development. In this work we will present examples of simulation approaches available to efficiently screen libraries of potential OLED materials; including first-principles prediction of key intrinsic properties, and classical simulation of amorphous morphology and stability. Also, an alternative to exhaustive computational screening is introduced based on a biomimetic evolutionary framework; evolving the molecular structure in the calculated OLED property design space.

  7. GPU accelerated numerical simulations of viscoelastic phase separation model.

    Science.gov (United States)

    Yang, Keda; Su, Jiaye; Guo, Hongxia

    2012-07-05

    We introduce a complete implementation of a viscoelastic model for numerical simulations of phase separation kinetics in dynamically asymmetric systems, such as polymer blends and polymer solutions, on a graphics processing unit (GPU) using the CUDA language, and discuss the algorithms and optimizations in detail. From studies of a polymer solution, we show that the GPU-based implementation correctly reproduces the accepted results and provides about a 190-fold speedup over a single central processing unit (CPU). Further accuracy analysis demonstrates that both single and double precision calculations on the GPU are sufficient to produce high-quality results in numerical simulations of the viscoelastic model. Therefore, the GPU-based viscoelastic model is very promising for studying many phase separation processes of experimental and theoretical interest that often take place on large length and time scales and are not easily addressed by a conventional implementation running on a single CPU.

  8. Community Project for Accelerator Science and Simulation (ComPASS)

    Energy Technology Data Exchange (ETDEWEB)

    Simmons, Christopher [Univ. of Texas, Austin, TX (United States); Carey, Varis [Univ. of Texas, Austin, TX (United States)

    2016-10-12

    After concluding our initial exercise (solving a simplified statistical inverse problem with unknown parameter laser intensity) of coupling Vorpal and our parallel statistical library QUESO, we shifted the application focus to DLA. Our efforts focused on developing a Gaussian process (GP) emulator within QUESO for efficient optimization of power couplers within woodpiles. The smaller simulation size (compared with LPA) allows for sufficient “training runs” to develop a reasonable GP statistical emulator for a parameter space of moderate dimension.
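
A Gaussian process emulator of the kind described can be sketched in a few lines. This toy version (plain Python, RBF kernel, a cheap stand-in function playing the role of expensive simulator training runs) is not the QUESO implementation:

```python
import math

def rbf(x1, x2, length=1.0, var=1.0):
    """Squared-exponential (RBF) covariance between two scalar inputs."""
    return var * math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def solve(A, b):
    """Plain Gaussian elimination with partial pivoting (small systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, x_star, noise=1e-8):
    """Posterior mean of a zero-mean GP with RBF kernel at x_star,
    conditioned on training runs (xs, ys)."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)             # alpha = K^{-1} y
    return sum(rbf(x_star, xs[i]) * alpha[i] for i in range(n))

# Emulate a cheap stand-in "simulator" f(x) = sin(x) from five training runs.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [math.sin(x) for x in xs]
print(gp_predict(xs, ys, 2.5))
```

Once trained, the emulator replaces the full simulation inside the optimization loop, which is what makes exhaustive exploration of a moderate-dimensional parameter space affordable.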

  9. Construction of Network Fault Simulation Platform and Event Samples Acquisition Techniques for Event Correlation

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Event correlation is a key technique in network fault management. For the event sample acquisition problem in event correlation, a novel approach is proposed to collect the samples by constructing a network simulation platform. The platform can set various kinds of network faults according to the user's demands and generate a large number of network fault events, which will benefit research on efficient event correlation techniques.

  10. A universal alternating immersion simulator for accelerated cyclic corrosion tests

    Energy Technology Data Exchange (ETDEWEB)

    Hassel, A.W.; Bonk, S.; Tsuri, S.; Stratmann, M. [Max-Planck-Institut fuer Eisenforschung GmbH, Duesseldorf (Germany)

    2008-02-15

    A new device for performing accelerated cyclic immersion tests is described. The main achievement is to realise a high cycling rate without a proportional increase in the test duration. The device is also capable of performing tests according to the EU ISO 11130 specification. A minimally invasive drying system is used that heats neither the air nor the sample, and the flow rate is kept low enough to prevent mechanical delamination of paints or loose corrosion products. A multiple-sample set-up is realised that provides individual reference electrodes. Random access through a multiplexer allows individual investigation of the samples, even by electrochemical impedance spectroscopy under immersion conditions. The device and its test principle are applicable in both industrial and laboratory-scale applications. Two application examples are given to demonstrate the versatility of the alternating immersion tester. One addresses the corrosion protection performance of different zinc-coated steel sheets; the other quantifies the patina formation kinetics of low-alloyed steels with weathering properties. (Abstract Copyright [2008], Wiley Periodicals, Inc.)

  11. Application of Java Technology to Simulation of Transient Effects in Accelerator Magnets

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Superconducting magnets are one of the key building blocks of modern high-energy particle accelerators. Operating at extremely low temperatures (1.9 K), superconducting magnets produce the high magnetic field needed to control the trajectory of beams travelling at nearly the speed of light. With high performance comes considerable complexity, represented by several coupled physical domains characterized by multi-rate and multi-scale behaviour. The full exploitation of the LHC, as well as the design of its upgrades and future accelerators, calls for more accurate simulations. With such a long-term vision in mind, the STEAM (Simulation of Transient Effects in Accelerator Magnets) project has been established and is based on two pillars: (i) models developed with optimised solvers for particular sub-problems, (ii) coupling interfaces allowing to exchange information between the models. In order to tackle these challenges and develop a maintainable and extendable simulation framework, a team of developers implemented a ...

  12. Influence of mobile shale on thrust faults: Insights from discrete element simulations

    Science.gov (United States)

    Dean, S. L.; Morgan, J. K.

    2013-12-01

    thrusts are listric, similar to those in the Niger Delta, steepening updip and curving near the intersection with the mobile shale layer. The décollements in our simulations, however, are much more diffuse than interpreted in nature. Discrete thrust faults within the pre-delta layer sole into broader zones of distributed strain within the mobile shale layer. In the frontal fold and thrust belt, both backthrusts and forethrusts were observed, also seen in the western lobe of the Niger Delta. In our simulations, this dual vergence is caused by the rotation of the principal stress in the pre-delta layer from sub-vertical under the sediment wedge, to nearly horizontal in front of the wedge. This rotation is thought to be due to a basinward 'push' created by updip extension along normal faults, which slide within the mobile layer and along the base of the model. This rotation of stresses is not found in the underlying weak mobile layer. The amount of contraction in the fold and thrust belt was about half the amount of extension accommodated beneath the sediment wedge, indicating that a large amount of contraction was distributed throughout the models, including in front of the toe thrusts, rather than being concentrated solely in the fold and thrust belt.

  13. S2-Project: Near-fault earthquake ground motion simulation in the Sulmona alluvial basin

    Science.gov (United States)

    Faccioli, E.; Stupazzini, M.; Galadini, F.; Gori, S.

    2008-12-01

    Recently the Italian Department of Civil Protection (DPC), in cooperation with Istituto Nazionale di Geofisica e Vulcanologia (INGV), has promoted the 'S2' research project (http://nuovoprogettoesse2.stru.polimi.it/) aimed at the design, testing and application of an open-source code for seismic hazard assessment (SHA). The tool envisaged will likely differ in several important respects from an existing international initiative (OpenSHA, Field et al., 2003). In particular, while "the OpenSHA collaboration model envisions scientists developing their own attenuation relationships and earthquake rupture forecasts, which they will deploy and maintain in their own systems", the main purpose of the S2 project is to provide a flexible computational tool for SHA, primarily suited for the needs of DPC, which are not necessarily scientific needs. Within S2, a crucial issue is to make alternative approaches available to quantify the ground motion, with emphasis on the near-field region. The SHA architecture envisaged will allow for the use of ground motion descriptions other than those yielded by empirical attenuation equations, for instance user-generated motions provided by deterministic source and wave propagation simulations. In this contribution, after a brief presentation of Project S2, we intend to illustrate some preliminary 3D scenario simulations performed in the alluvial basin of Sulmona (Central Italy), as an example of the type of descriptions that can be handled in the future SHA architecture. In detail, we selected some seismogenic sources (from the DISS database), believed to be responsible for a number of destructive historical earthquakes, and derive from them a family of simplified geometrical and mechanical source models spanning across a reasonable range of parameters, so that the extent of the main uncertainties can be covered. Then, purely deterministic simulations are performed with the Spectral Element (SE) method, extensively published by Faccioli and his co-workers, and

  14. Accelerating Steered Molecular Dynamics: Toward Smaller Velocities in Forced Unfolding Simulations.

    Science.gov (United States)

    Mücksch, Christian; Urbassek, Herbert M

    2016-03-08

    The simulation of forced unfolding experiments, in which proteins are pulled apart, is conventionally done using steered molecular dynamics. We present here a hybrid scheme in which accelerated molecular dynamics is used together with steered molecular dynamics. We show that the new scheme changes the force-distance curves mainly in the region around the force maximum and thus demonstrate that the improved equilibration of the protein-solvent system brought about by using accelerated molecular dynamics makes the simulation more comparable to experimental data.

  15. A new GPU-accelerated hydrodynamical code for numerical simulation of interacting galaxies

    CERN Document Server

    Igor, Kulikov

    2013-01-01

    In this paper a new scalable hydrodynamic code, GPUPEGAS (GPU-accelerated PErformance Gas Astrophysic Simulation), for the simulation of interacting galaxies is proposed. The code is based on a combination of the Godunov method and an original implementation of the FlIC method, specially adapted for GPU implementation. A Fast Fourier Transform is used for the Poisson equation solution in GPUPEGAS. The software implementation of the above methods was tested on classical gas dynamics problems, the new Aksenov test and classical gravitational gas dynamics problems. A collisionless hydrodynamic approach was used for the modelling of stars and dark matter. The scalability of GPUPEGAS across computational accelerators is shown.

  16. Beam-beam simulation code BBSIM for particle accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyung J.; Sen, Tanaji; /Fermilab

    2011-01-01

    A highly efficient, fully parallelized, six-dimensional tracking model for simulating interactions of colliding hadron beams in high energy ring colliders and simulating schemes for mitigating their effects is described. The model uses the weak-strong approximation for calculating the head-on interactions when the test beam has lower intensity than the other beam, a look-up table for the efficient calculation of long-range beam-beam forces, and a self-consistent Poisson solver when both beams have comparable intensities. A performance test of the model in a parallel environment is presented. The code is used to calculate beam emittance and beam loss in the Tevatron at Fermilab and compared with measurements. Results are also presented from studies of two schemes proposed to compensate the beam-beam interactions: (a) the compensation of long-range interactions in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven and the Large Hadron Collider (LHC) at CERN with a current carrying wire, (b) the use of a low energy electron beam to compensate the head-on interactions in RHIC.
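
The weak-strong kick for a round Gaussian strong beam has a simple closed form, which the sketch below evaluates. The parameter values (bunch intensity, beam size, Lorentz factor) are merely LHC-like illustrations, not BBSIM inputs, and the negative sign corresponds to an attractive, opposite-charge interaction as in the Tevatron:

```python
import math

def beam_beam_kick(x, y, n_particles, sigma, r0=1.534e-18, gamma=7460.5):
    """Weak-strong beam-beam kick (dx', dy') from a round Gaussian strong
    beam of RMS size sigma, using the closed-form round-beam field.
    r0 is the classical proton radius [m]; gamma is illustrative."""
    r2 = x * x + y * y
    if r2 == 0.0:
        return 0.0, 0.0
    factor = (-(2.0 * n_particles * r0 / gamma)
              * (1.0 - math.exp(-r2 / (2.0 * sigma * sigma))) / r2)
    return factor * x, factor * y

# Kick at a one-sigma offset for an LHC-like bunch (1.15e11 protons,
# 16.6 um RMS beam size); the result is of order a microradian.
dxp, dyp = beam_beam_kick(16.6e-6, 0.0, 1.15e11, 16.6e-6)
print(dxp)
```

In a tracking code this kick is applied once per turn per interaction point; the look-up table mentioned above serves the same purpose for the more expensive long-range (offset, non-round) case.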

  18. Multi-Physics Modelling of Fault Mechanics Using REDBACK: A Parallel Open-Source Simulator for Tightly Coupled Problems

    Science.gov (United States)

    Poulet, Thomas; Paesold, Martin; Veveakis, Manolis

    2017-03-01

    Faults play a major role in many economically and environmentally important geological systems, ranging from impermeable seals in petroleum reservoirs to fluid pathways in ore-forming hydrothermal systems. Their behavior is therefore widely studied and fault mechanics is particularly focused on the mechanisms explaining their transient evolution. Single faults can change in time from seals to open channels as they become seismically active and various models have recently been presented to explain the driving forces responsible for such transitions. A model of particular interest is the multi-physics oscillator of Alevizos et al. (J Geophys Res Solid Earth 119(6), 4558-4582, 2014) which extends the traditional rate and state friction approach to rate and temperature-dependent ductile rocks, and has been successfully applied to explain spatial features of exposed thrusts as well as temporal evolutions of current subduction zones. In this contribution we implement that model in REDBACK, a parallel open-source multi-physics simulator developed to solve such geological instabilities in three dimensions. The resolution of the underlying system of equations in a tightly coupled manner allows REDBACK to capture appropriately the various theoretical regimes of the system, including the periodic and non-periodic instabilities. REDBACK can then be used to simulate the drastic permeability evolution in time of such systems, where nominally impermeable faults can sporadically become fluid pathways, with permeability increases of several orders of magnitude.
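
    As a minimal illustration of the rate-dependent friction this model family builds on, the steady-state rate-and-state friction coefficient can be evaluated directly (generic textbook coefficients, not REDBACK parameters):

```python
import math

def mu_steady_state(V, mu0=0.6, a=0.010, b=0.015, V0=1e-6):
    """Steady-state rate-and-state friction, mu0 + (a - b) ln(V / V0).
    Generic laboratory-scale values; a < b gives velocity weakening."""
    return mu0 + (a - b) * math.log(V / V0)
```

    With a < b, friction drops as the sliding rate rises (velocity weakening), the basic ingredient behind the unstable, seismically active regimes such simulators resolve.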

  19. WarpIV: In Situ Visualization and Analysis of Ion Accelerator Simulations.

    Science.gov (United States)

    Rubel, Oliver; Loring, Burlen; Vay, Jean-Luc; Grote, David P; Lehe, Remi; Bulanov, Stepan; Vincenti, Henri; Bethel, E Wes

    2016-01-01

    The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. This supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.

  20. Beam equipment electromagnetic interaction in accelerators: simulation and experimental benchmarking

    CERN Document Server

    Passarelli, Andrea; Vaccaro, Vittorio Giorgio; Massa, Rita; Masullo, Maria Rosaria

    One of the most significant technological problems in achieving the nominal performance of the Large Hadron Collider (LHC) concerns the collimation system for the particle beams. The use of crystal collimators, exploiting the channeling effect on the extracted beam, has been experimentally demonstrated. The first part of this thesis concerns the optimization of the UA9 goniometer at CERN; this device, used for beam collimation, will replace a part of the vacuum chamber. The optimization process, however, requires calculating the coupling impedance between the circulating beam and this structure in order to define the admissible intensity threshold below which instability processes are not triggered. Simulations have been performed with electromagnetic codes to evaluate the coupling impedance and to assess the beam-structure interaction. The results clearly showed that the resonance frequencies of most concern are due solely to the cavity open to the compartment of the motors and position sensors, considering the crystal in o...
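
    Coupling impedances extracted from such electromagnetic simulations are often summarized by fitting resonator models to the computed spectra; a minimal sketch of the standard broadband-resonator form (illustrative values, not the UA9 results):

```python
import numpy as np

def resonator_impedance(f, Rs, Q, fr):
    """Longitudinal broadband-resonator impedance Z(f) = Rs / (1 + iQ(f/fr - fr/f))."""
    f = np.asarray(f, dtype=float)
    return Rs / (1.0 + 1j * Q * (f / fr - fr / f))
```

    On resonance the impedance is purely real and equal to the shunt impedance Rs; off resonance it rolls off at a rate set by the quality factor Q.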

  1. Learning scheme to predict atomic forces and accelerate materials simulations

    Science.gov (United States)

    Botu, V.; Ramprasad, R.

    2015-09-01

    The behavior of an atom in a molecule, liquid, or solid is governed by the force it experiences. If the dependence of this vectorial force on the atomic chemical environment can be learned efficiently with high fidelity from benchmark reference results—using "big-data" techniques, i.e., without resorting to actual functional forms—then this capability can be harnessed to enormously speed up in silico materials simulations. The present contribution provides several examples of how such a force field for Al can be used to go far beyond the length-scale and time-scale regimes presently accessible using quantum-mechanical methods. It is argued that pathways are available to systematically and continuously improve the predictive capability of such a learned force field in an adaptive manner, and that this concept can be generalized to include multiple elements.
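
    The idea can be sketched on a toy problem: learn the force on an atom from a numerical "fingerprint" of its neighborhood, here a harmonic chain with two-component fingerprints and ridge regression. The real scheme uses rotationally invariant fingerprints and quantum-mechanical Al reference data; everything below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
k_spring, a = 2.0, 1.0                      # harmonic-chain "reference" physics

def forces(x):
    f = np.zeros_like(x)
    f[:-1] += k_spring * (x[1:] - x[:-1] - a)   # pull from the right neighbor
    f[1:]  -= k_spring * (x[1:] - x[:-1] - a)   # reaction on the left neighbor
    return f

def fingerprints(x):
    # per-atom environment features for interior atoms: left and right gaps
    return np.column_stack([x[1:-1] - x[:-2], x[2:] - x[1:-1]])

X, y = [], []
for _ in range(200):                        # "benchmark reference results"
    x = a * np.arange(10) + 0.1 * rng.standard_normal(10)
    X.append(fingerprints(x)); y.append(forces(x)[1:-1])
X, y = np.vstack(X), np.concatenate(y)

Xb = np.column_stack([X, np.ones(len(X))])  # ridge regression is the "learning"
w = np.linalg.solve(Xb.T @ Xb + 1e-8 * np.eye(3), Xb.T @ y)

x_new = a * np.arange(10) + 0.1 * rng.standard_normal(10)
pred = np.column_stack([fingerprints(x_new), np.ones(8)]) @ w
err = np.max(np.abs(pred - forces(x_new)[1:-1]))
```

    Because the mapping from environment to force is learned rather than derived, the fitted model can be evaluated at a cost far below the reference calculation, which is the source of the promised speed-up.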

  2. Direct numerical simulation of turbulence using GPU accelerated supercomputers

    Science.gov (United States)

    Khajeh-Saeed, Ali; Blair Perot, J.

    2013-02-01

    Direct numerical simulations of turbulence are optimized for up to 192 graphics processors. The results from two large GPU clusters are compared to the performance of corresponding CPU clusters. A number of important algorithm changes are necessary to access the full computational power of graphics processors and these adaptations are discussed. It is shown that the handling of subdomain communication becomes even more critical when using GPU based supercomputers. The potential for overlap of MPI communication with GPU computation is analyzed and then optimized. Detailed timings reveal that the internal calculations are now so efficient that the operations related to MPI communication are the primary scaling bottleneck at all but the very largest problem sizes that can fit on the hardware. This work gives a glimpse of the CFD performance issues that will dominate many hardware platforms in the near future.

  3. Accelerated Monte Carlo simulations with restricted Boltzmann machines

    Science.gov (United States)

    Huang, Li; Wang, Lei

    2017-01-01

    Despite their exceptional flexibility and popularity, Monte Carlo methods often suffer from slow mixing times for challenging statistical physics problems. We present a general strategy to overcome this difficulty by adopting ideas and techniques from the machine learning community. We fit the unnormalized probability of the physical model to a feed-forward neural network and reinterpret the architecture as a restricted Boltzmann machine. Then, exploiting its feature detection ability, we utilize the restricted Boltzmann machine to propose efficient Monte Carlo updates to speed up the simulation of the original physical system. We implement these ideas for the Falicov-Kimball model and demonstrate an improved acceptance ratio and autocorrelation time near the phase transition point.
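
    The key technical point is that one RBM Gibbs sweep is in detailed balance with respect to the RBM's own distribution, so the Metropolis-Hastings ratio needs only the target and RBM probabilities. A toy sketch on a 4-spin Ising chain (the RBM is left untrained here purely for brevity; in the method it is first fitted to the physical distribution):

```python
import numpy as np

rng = np.random.default_rng(1)
n_v, n_h, beta = 4, 4, 0.5
W = 0.1 * rng.standard_normal((n_v, n_h))   # toy RBM left untrained for brevity
a = np.zeros(n_v); b = np.zeros(n_h)

def log_target(v):                          # 1D open Ising chain, J = 1
    s = 2.0 * v - 1.0
    return beta * np.sum(s[:-1] * s[1:])

def log_rbm(v):                             # unnormalized RBM log-probability
    return a @ v + np.sum(np.logaddexp(0.0, b + v @ W))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

v = rng.integers(0, 2, n_v).astype(float)
samples = []
for step in range(60000):
    h = (rng.random(n_h) < sigmoid(b + v @ W)).astype(float)      # v -> h
    v_new = (rng.random(n_v) < sigmoid(a + W @ h)).astype(float)  # h -> v'
    # the Gibbs sweep is in detailed balance w.r.t. the RBM distribution, so the
    # Metropolis-Hastings ratio needs only target and RBM log-probabilities
    log_acc = (log_target(v_new) - log_target(v)) + (log_rbm(v) - log_rbm(v_new))
    if np.log(rng.random()) < log_acc:
        v = v_new
    if step >= 10000:
        samples.append(-log_target(v) / beta)   # Ising energy of current state
```

    With a trained RBM the proposals become global, physics-aware moves, which is what shortens the autocorrelation time near the phase transition.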

  4. Accelerate Monte Carlo Simulations with Restricted Boltzmann Machines

    CERN Document Server

    Huang, Li

    2016-01-01

    Despite their exceptional flexibility and popularity, Monte Carlo methods often suffer from slow mixing times for challenging statistical physics problems. We present a general strategy to overcome this difficulty by adopting ideas and techniques from the machine learning community. We fit the unnormalized probability of the physical model to a feedforward neural network and reinterpret the architecture as a restricted Boltzmann machine. Then, exploiting its feature detection ability, we utilize the restricted Boltzmann machine to propose efficient Monte Carlo updates and speed up the simulation of the original physical system. We implement these ideas for the Falicov-Kimball model and demonstrate an improved acceptance ratio and autocorrelation time near the phase transition point.

  5. Cosmic-ray acceleration at collisionless astrophysical shocks using Monte-Carlo simulations

    Science.gov (United States)

    Wolff, M.; Tautz, R. C.

    2015-08-01

    Context. The diffusive shock acceleration mechanism has been widely accepted as the acceleration mechanism for galactic cosmic rays. While self-consistent hybrid simulations have shown how power-law spectra are produced, detailed information on the interplay of diffusive particle motion and the turbulent electromagnetic fields responsible for repeated shock crossings is still elusive. Aims: The framework of test-particle theory is applied to investigate the effect of diffusive shock acceleration by inspecting the obtained cosmic-ray energy spectra. The resulting energy spectra can be obtained this way from the particle motion and, depending on the prescribed turbulence model, the influence of stochastic acceleration through plasma waves can be studied. Methods: A numerical Monte-Carlo simulation code is extended to include collisionless shock waves. This allows one to trace the trajectories of test particles while they are being accelerated. In addition, the diffusion coefficients can be obtained directly from the particle motion, which allows for a detailed understanding of the acceleration process. Results: The classic result of an E^(-2) energy spectrum is only reproduced for parallel shocks, while, for all other cases, the energy spectral index is reduced depending on the shock obliqueness. Qualitatively, this can be explained in terms of the diffusion coefficients in the directions that are parallel and perpendicular to the shock front.
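
    The E^(-2) benchmark for a strong parallel shock can be recovered from the standard test-particle cycle argument; a deterministic sketch of those formulas (not the paper's full Monte-Carlo machinery):

```python
import numpy as np

def dsa_index_exact(r):
    """Differential index s in N(E) ~ E^-s for a shock of compression ratio r."""
    return (r + 2.0) / (r - 1.0)

def dsa_index_bell(r, u1=0.01, v=1.0):
    """Bell-style cycle argument: per upstream-downstream cycle a particle of
    speed v gains <dE/E> = 4(u1 - u2)/(3v) and escapes with P = 4 u2 / v."""
    u2 = u1 / r
    gain = 4.0 * (u1 - u2) / (3.0 * v)
    p_esc = 4.0 * u2 / v
    return np.log(1.0 / (1.0 - p_esc)) / np.log(1.0 + gain) + 1.0
```

    For a strong shock (r = 4) both routes give s = 2; weaker shocks (smaller r) steepen the spectrum.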

  6. Particle-in-cell simulation of x-ray wakefield acceleration and betatron radiation in nanotubes

    Science.gov (United States)

    Zhang, Xiaomei; Tajima, Toshiki; Farinella, Deano; Shin, Youngmin; Mourou, Gerard; Wheeler, Jonathan; Taborek, Peter; Chen, Pisin; Dollar, Franklin; Shen, Baifei

    2016-10-01

    Though wakefield acceleration in crystal channels has been previously proposed, x-ray wakefield acceleration has only recently become a realistic possibility since the invention of the single-cycled optical laser compression technique. We investigate the acceleration due to a wakefield induced by a coherent, ultrashort x-ray pulse guided by a nanoscale channel inside a solid material. By two-dimensional particle-in-cell computer simulations, we show that an acceleration gradient of TeV/cm is attainable. This is about 3 orders of magnitude stronger than that of conventional plasma-based wakefield acceleration, which implies the possibility of an extremely compact scheme to attain ultrahigh energies. In addition to particle acceleration, this scheme can also induce the emission of high-energy photons at ~O(10-100) MeV. Our simulations confirm such high-energy photon emission, which is in contrast with that induced by the optical-laser-driven wakefield scheme. The significantly improved emittance of the energetic electrons is also discussed.
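
    The TeV/cm figure follows from the density scaling of the cold wave-breaking field: solids hold many orders of magnitude more electrons than the gas jets used with optical drivers, and the attainable gradient grows as sqrt(n). A quick check with the standard estimate E0 ≈ 96 sqrt(n_e[cm^-3]) V/m:

```python
import math

def wave_breaking_field(n_e_cm3):
    """Cold nonrelativistic wave-breaking field, E0 ~ 96 sqrt(n_e [cm^-3]) V/m."""
    return 96.0 * math.sqrt(n_e_cm3)
```

    A gas-jet density of 1e18 cm^-3 gives ~100 GV/m, typical of optical laser wakefields, while a near-solid density of 1e24 cm^-3 gives ~1e14 V/m, i.e. on the order of TeV/cm.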

  7. Comparison of scaling laws with PIC simulations for proton acceleration with long wavelength pulses

    Energy Technology Data Exchange (ETDEWEB)

    Turchetti, G., E-mail: turchetti@bo.infn.i [Dipartimento di Fisica Universita di Bologna, INFN Sezione di Bologna (Italy); Sgattoni, A.; Benedetti, C. [Dipartimento di Fisica Universita di Bologna, INFN Sezione di Bologna (Italy); Londrillo, P. [INFN Sezione di Bologna (Italy); Di Lucchio, L. [Dipartimento di Fisica Universita di Bologna, INFN Sezione di Bologna (Italy)

    2010-08-01

    We have performed a survey of proton acceleration induced by long-wavelength pulses to explore the dependence of the peak energy on the pulse intensity, target thickness and density. The simulations, carried out with the PIC code ALADYN for a circularly polarized pulse, have been compared with the scaling laws for radiation pressure acceleration (RPA) in the thick-target and thin-target regimes, known as hole boring (HB) and relativistic mirror (RM) respectively. Since the critical density scales as λ^-2, longer-wavelength pulses allow one to work with low-density targets several microns thick and with moderate laser power. Under these conditions it is possible to enter the RM region, where the key parameter is the ratio α between twice the laser energy and the mirror rest energy; the corresponding acceleration efficiency is given by α/(1+α). For a fixed intensity the minimum thickness of the target, and consequently the highest acceleration, is determined by the threshold of self-induced transparency. In this case the number of accelerated particles scales with λ, whereas the total energy does not depend on it. The agreement of PIC simulations with the RPA and RM scalings, including the transition regions, suggests that these scalings can safely be used as the first step in parametric scans also for long-wavelength pulses such as those of CO2 lasers, to explore possible alternatives to short-wavelength, very-high-power Ti:Sa lasers for proton acceleration.
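
    The relativistic-mirror scaling quoted above is easy to evaluate: with α the ratio of twice the laser pulse energy to the mirror (foil) rest energy, the conversion efficiency is α/(1+α). A small sketch with illustrative numbers:

```python
C = 2.998e8  # speed of light, m/s

def rm_alpha(E_laser_J, M_mirror_kg):
    """alpha = 2 E_laser / (M c^2), the RM regime parameter from the abstract."""
    return 2.0 * E_laser_J / (M_mirror_kg * C**2)

def rm_efficiency(alpha):
    """Energy-conversion efficiency alpha / (1 + alpha) of the RM regime."""
    return alpha / (1.0 + alpha)
```

    The efficiency approaches unity as the foil becomes light compared with the pulse energy, which is why the thin-target (RM) branch is attractive once self-induced transparency permits it.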

  8. MHD simulations of Plasma Jets and Plasma-surface interactions in Coaxial Plasma Accelerators

    Science.gov (United States)

    Subramaniam, Vivek; Raja, Laxminarayan

    2016-10-01

    Coaxial plasma accelerators belong to a class of electromagnetic acceleration devices which utilize a self-induced Lorentz force to accelerate magnetized thermal plasma to large velocities (~40 km/s). The resulting plasma jet, due to its high energy density, can be used to mimic the plasma-surface interactions at the walls of thermonuclear fusion reactors during an Edge Localized Mode (ELM) disruption event. We present the development of a Magnetohydrodynamics (MHD) simulation tool to describe the plasma acceleration and jet formation processes in coaxial plasma accelerators. The MHD model is used to study the plasma-surface impact interaction generated by the impingement of the jet on a target material plate. The study will characterize the extreme conditions generated on the target material surface by resolving the magnetized shock boundary layer interaction and the viscous/thermal diffusion effects. Additionally, since the plasma accelerator is operated in vacuum conditions, a novel plasma-vacuum interface tracking algorithm is developed to simulate the expansion of the high density plasma into a vacuum background in a physically consistent manner.

  9. Role of multiscale heterogeneity in fault slip from quasi-static numerical simulations

    Science.gov (United States)

    Aochi, Hideo; Ide, Satoshi

    2017-07-01

    Quasi-static numerical simulations of slip along a fault interface characterized by multiscale heterogeneity (fractal patch model) are carried out under the assumption that the characteristic distance in the slip-dependent frictional law is scale-dependent. We also consider slip-dependent stress accumulation on patches prior to the weakening process. When two patches of different size are superposed, the slip rate of the smaller patch is reduced when the stress is increased on the surrounding large patch. In the case of many patches over a range of scales, the slip rate on the smaller patches becomes significant in terms of both its amplitude and frequency. Peaks in slip rate are controlled by the surrounding larger patches, which may also be responsible for the segmentation of slip sequences. The use of an explicit slip-strengthening-then-weakening frictional behavior highlights that the strengthening process behind small patches weakens their interaction and reduces the peaks in slip rate, while the slip deficit continues to accumulate in the background. Therefore, it may be possible to image the progress of slip deficit at larger scales if the changes in slip activity on small patches are detectable.
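
    A single-patch caricature of the behavior described above is the quasi-static spring-slider with linear slip-weakening friction: when the loading system is stiffer than the weakening rate the patch creeps, otherwise slip advances in discrete events. This toy (not the fractal multi-patch model of the paper; all parameters invented) shows both regimes:

```python
import numpy as np

def quasi_static_slip(k, tau_peak=1.0, tau_res=0.2, d_c=1.0, steps=400, dv=0.01):
    """Spring-slider with linear slip-weakening friction, loaded quasi-statically.
    Returns per-step slip increments; k below the weakening rate gives a jump."""
    W = (tau_peak - tau_res) / d_c          # weakening rate of the strength curve
    u, s = 0.0, 0.0                         # load-point displacement, slip
    inc = []
    for _ in range(steps):
        u += dv
        s_old = s
        if k * (u - s) > max(tau_peak - W * s, tau_res):
            if k > W:                       # stiff system: creep down the curve
                s = max(s, (k * u - tau_peak) / (k - W))
                if tau_peak - W * s < tau_res:
                    s = u - tau_res / k     # fully weakened, residual strength
            else:                           # compliant system: unstable event
                s = u - tau_res / k
        inc.append(s - s_old)
    return np.array(inc)

inc_stiff = quasi_static_slip(2.0)          # stable sliding
inc_soft = quasi_static_slip(0.4)           # one large stick-slip event
```

    The stiffness ratio plays the role that patch size and the surrounding stress field play in the multiscale model: the same friction law yields either smooth slip-rate peaks or abrupt events.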

  10. Simulation of Quasi-Adiabatic Beam Capture into Acceleration at the Nuclotron

    CERN Document Server

    Volkov, V I; Issinsky, I B; Kovalenko, A D

    2003-01-01

    The routine RF system used at the Nuclotron allows one to inject the beam at ramping magnetic field, with subsequent acceleration at constant amplitude of the accelerating voltage. Under these conditions, at least half of the particles circulating in the vacuum chamber after injection are not captured into the longitudinal acceptance. At the same time, the vacuum chamber dimensions permit extending the momentum spread of the beam enough to manipulate it inside the stable zone of longitudinal phase space on the flat magnetic field at injection. A quasi-adiabatic capture was considered for increasing the Nuclotron beam intensity. Simulation of such a process with subsequent acceleration was performed. It was shown that in this case it is possible to capture and accelerate up to 100% of the injected beam.

  11. The impact of accelerator processors for high-throughput molecular modeling and simulation.

    Science.gov (United States)

    Giupponi, G; Harvey, M J; De Fabritiis, G

    2008-12-01

    The recent introduction of cost-effective accelerator processors (APs), such as the IBM Cell processor and Nvidia's graphics processing units (GPUs), represents an important technological innovation which promises to unleash the full potential of atomistic molecular modeling and simulation for the biotechnology industry. Present APs can deliver over an order of magnitude more floating-point operations per second (flops) than standard processors, broadly equivalent to a decade of Moore's law growth, and significantly reduce the cost of current atom-based molecular simulations. In conjunction with distributed and grid-computing solutions, accelerated molecular simulations may finally be used to extend current in silico protocols by the use of accurate thermodynamic calculations instead of approximate methods and simulate hundreds of protein-ligand complexes with full molecular specificity, a crucial requirement of in silico drug discovery workflows.

  12. Particle-in-cell simulations of plasma accelerators and electron-neutral collisions

    Energy Technology Data Exchange (ETDEWEB)

    Bruhwiler, David L.; Giacone, Rodolfo E.; Cary, John R.; Verboncoeur, John P.; Mardahl, Peter; Esarey, Eric; Leemans, W.P.; Shadwick, B.A.

    2001-10-01

    We present 2-D simulations of both beam-driven and laser-driven plasma wakefield accelerators, using the object-oriented particle-in-cell code XOOPIC, which is time explicit, fully electromagnetic, and capable of running on massively parallel supercomputers. Simulations of laser-driven wakefields with low (~10^16 W/cm^2) and high (~10^18 W/cm^2) peak intensity laser pulses are conducted in slab geometry, showing agreement with theory and fluid simulations. Simulations of the E-157 beam wakefield experiment at the Stanford Linear Accelerator Center, in which a 30 GeV electron beam passes through 1 m of preionized lithium plasma, are conducted in cylindrical geometry, obtaining good agreement with previous work. We briefly describe some of the more significant modifications of XOOPIC required by this work, and summarize the issues relevant to modeling relativistic electron-neutral collisions in a particle-in-cell code.
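
    The particle-in-cell cycle underlying codes like XOOPIC is: deposit charge on a grid, solve for the field, gather it back to the particles, and push them. A minimal 1D electrostatic sketch in normalized units (omega_p = 1; nothing here is XOOPIC's actual API) that reproduces a plasma oscillation:

```python
import numpy as np

ng, L, N, dt = 64, 2 * np.pi, 20000, 0.05    # grid cells, length, particles, step
dx = L / ng
grid = np.arange(ng) * dx
k = 2 * np.pi * np.fft.rfftfreq(ng, d=dx)    # wavenumbers for the field solve

x = (np.arange(N) + 0.5) * L / N             # quiet start
x = (x + 0.01 * np.sin(x)) % L               # seed a k=1 plasma oscillation
v = np.zeros(N)
q, qm = -L / N, -1.0                         # macro-charge, q/m; omega_p = 1

def solve_E(x):
    """Deposit charge (CIC), solve dE/dx = rho spectrally, return grid field."""
    xd = x / dx
    j = xd.astype(int)
    w = xd - j
    rho = np.bincount(j, 1 - w, minlength=ng) + np.bincount((j + 1) % ng, w, minlength=ng)
    rho = q * rho / dx
    rho -= rho.mean()                        # neutralizing ion background
    rho_k = np.fft.rfft(rho)
    E_k = np.zeros_like(rho_k)
    E_k[1:] = rho_k[1:] / (1j * k[1:])
    return np.fft.irfft(E_k, n=ng), j, w

E_grid, j, w = solve_E(x)
p0, t_cross = np.sum(E_grid * np.sin(grid)), None
for n in range(120):
    Ep = (1 - w) * E_grid[j] + w * E_grid[(j + 1) % ng]  # gather field to particles
    v += qm * Ep * dt                                    # kick
    x = (x + v * dt) % L                                 # drift
    E_grid, j, w = solve_E(x)
    p = np.sum(E_grid * np.sin(grid))
    if t_cross is None and p * p0 < 0:
        t_cross = (n + 1) * dt               # mode amplitude ~ cos(omega_p t)
```

    The seeded mode oscillates as cos(omega_p t), so its amplitude first changes sign near t = pi/2; production codes add electromagnetics, collision and ionization models on top of this same cycle.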

  13. Simulator for an Accelerator-Driven Subcritical Fissile Solution System

    Energy Technology Data Exchange (ETDEWEB)

    Klein, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Day, Christy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Determan, John C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-09-14

    LANL has developed a process to generate a progressive family of system models for a fissile solution system. This family includes a dynamic system simulation comprised of coupled nonlinear differential equations describing the time evolution of the system. Neutron kinetics, radiolytic gas generation and transport, and core thermal hydraulics are included in the DSS. Extensions to explicit operation of cooling loops and radiolytic gas handling are embedded in these systems as is a stability model. The DSS may then be converted to an implementation in Visual Studio to provide a design team the ability to rapidly estimate system performance impacts from a variety of design decisions. This provides a method to assist in optimization of the system design. Once design has been generated in some detail the C++ version of the system model may then be implemented in a LabVIEW user interface to evaluate operator controls and instrumentation and operator recognition and response to off-normal events. Taken as a set of system models the DSS, Visual Studio, and LabVIEW progression provides a comprehensive set of design support tools.
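
    The coupled-ODE core of such a dynamic system simulation can be illustrated with one-delayed-group point-reactor kinetics (generic textbook constants, not the LANL model):

```python
import numpy as np

beta, lam, Lam = 0.0065, 0.08, 1e-4   # delayed fraction, precursor decay [1/s], generation time [s]

def derivs(y, rho):
    n, C = y                           # neutron population, precursor concentration
    return np.array([(rho - beta) / Lam * n + lam * C,
                     beta / Lam * n - lam * C])

def rk4(y, rho, dt):
    k1 = derivs(y, rho); k2 = derivs(y + dt / 2 * k1, rho)
    k3 = derivs(y + dt / 2 * k2, rho); k4 = derivs(y + dt * k3, rho)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

y = np.array([1.0, beta / (Lam * lam)])   # critical steady state, rho = 0
for _ in range(1000):
    y = rk4(y, 0.0, 1e-4)
steady_n = y[0]
for _ in range(1000):                      # insert +0.1 beta of reactivity
    y = rk4(y, 0.1 * beta, 1e-4)
```

    The full DSS couples equations like these to radiolytic gas transport and thermal hydraulics; the progression to Visual Studio and LabVIEW implementations then wraps the same model in design and operator-training interfaces.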

  15. Simulation modeling based method for choosing an effective set of fault tolerance mechanisms for real-time avionics systems

    Science.gov (United States)

    Bakhmurov, A. G.; Balashov, V. V.; Glonina, A. B.; Pashkov, V. N.; Smeliansky, R. L.; Volkanov, D. Yu.

    2013-12-01

    In this paper, the reliability allocation problem (RAP) for real-time avionics systems (RTAS) is considered. The proposed method for solving this problem consists of two steps: (i) creation of an RTAS simulation model at the necessary level of abstraction and (ii) application of a metaheuristic algorithm to find an optimal solution (i.e., to choose an optimal set of fault tolerance techniques). When the execution time of some software components must be measured during algorithm execution, simulation modeling is applied. The simulation modeling procedure in turn consists of the following steps: automatic construction of a simulation model of the RTAS configuration, and running this model in a simulation environment to measure the required time. This method was implemented as an experimental software tool that works in cooperation with the DYANA simulation environment. The results of experiments with the implemented method are presented. Finally, future plans for development of the presented method and tool are briefly described.
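
    A sketch of step (ii), with simulated annealing standing in for the metaheuristic and a fixed cost table standing in for the simulator-measured execution times (all mechanism names and numbers below are invented for illustration):

```python
import math, random

random.seed(2)
# hypothetical per-component mechanisms: (name, failure probability, time cost)
options = [("none", 0.05, 0.0), ("retry", 0.01, 1.0), ("TMR", 0.001, 2.5)]
n_comp, budget = 6, 9.0                      # components, execution-time budget

def evaluate(sol):
    """Reliability of a mechanism assignment; 0 if it breaks the time budget.
    In the real method the time would come from a DYANA simulation run."""
    if sum(options[s][2] for s in sol) > budget:
        return 0.0
    rel = 1.0
    for s in sol:
        rel *= 1.0 - options[s][1]
    return rel

cur = [0] * n_comp
cur_r = best_r = evaluate(cur)
T = 1.0
for _ in range(4000):                        # simulated annealing search
    cand = cur[:]
    cand[random.randrange(n_comp)] = random.randrange(len(options))
    r = evaluate(cand)
    if r > cur_r or random.random() < math.exp((r - cur_r) / max(T, 1e-9)):
        cur, cur_r = cand, r
        best_r = max(best_r, cur_r)
    T *= 0.999
```

    The expensive part in practice is `evaluate`: each candidate configuration triggers an automatically constructed simulation model run, which is why the metaheuristic's sample efficiency matters.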

  16. Particle-in-cell simulations of tunneling ionization effects in plasma-based accelerators

    CERN Document Server

    Bruhwiler, D L; Cary, J R; Esarey, E; Leemans, W; Giacone, R E

    2003-01-01

    Plasma-based accelerators can sustain accelerating gradients on the order of 100 GV/m. If the plasma is not fully ionized, fields of this magnitude will ionize neutral atoms via electron tunneling, which can completely change the dynamics of the plasma wake. Particle-in-cell simulations of a high-field plasma wakefield accelerator, using the OOPIC code, which includes field-induced tunneling ionization of neutral Li gas, show that the presence of even moderate neutral gas density significantly degrades the quality of the wakefield. The tunneling ionization model in OOPIC has been validated via a detailed comparison with experimental data from the l'OASIS laboratory. The properties of a wake generated directly from a neutral gas are studied, showing that one can recover the peak fields of the fully ionized plasma simulations if the density of the electron drive bunch is increased such that the bunch rapidly ionizes the gas.
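
    The reason even moderate neutral Li density matters is that fields of the quoted magnitude sit far above lithium's ionization threshold. A rough classical barrier-suppression estimate (the Augst et al. scaling; a cruder bound than the tunneling rates OOPIC uses):

```python
def barrier_suppression_intensity(ip_eV, Z=1):
    """Barrier-suppression intensity, I_BSI ~ 4e9 * Ip^4 / Z^2 [W/cm^2]."""
    return 4e9 * ip_eV**4 / Z**2

I_Li = barrier_suppression_intensity(5.39)   # first ionization of Li, ~3e12 W/cm^2
I_H = barrier_suppression_intensity(13.6)    # hydrogen, ~1.4e14 W/cm^2
```

    Lithium's outer electron is stripped around 1e12-1e13 W/cm^2, so wakefield-scale fields ionize the neutral gas essentially instantly, which is why the neutral fraction feeds back on the wake.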

  17. Accelerator simulation and theoretical modelling of radiation effects (SMoRE)

    CERN Document Server

    2018-01-01

    This publication summarizes the findings and conclusions of the IAEA coordinated research project (CRP) on accelerator simulation and theoretical modelling of radiation effects, aimed at supporting Member States in the development of advanced radiation-resistant structural materials for implementation in innovative nuclear systems. This aim can be achieved through enhancement of both experimental neutron-emulation capabilities of ion accelerators and improvement of the predictive efficiency of theoretical models and computer codes. This dual approach is challenging but necessary, because outputs of accelerator simulation experiments need adequate theoretical interpretation, and theoretical models and codes need high dose experimental data for their verification. Both ion irradiation investigations and computer modelling have been the specific subjects of the CRP, and the results of these studies are presented in this publication which also includes state-of-the-art reviews of four major aspects of the project...

  18. Three decades of development and achievements: the heavy vehicle simulator in accelerated pavement testing

    CSIR Research Space (South Africa)

    Du Plessis, L

    2006-06-01

    The purpose of this paper is twofold. First, it provides a brief description of the technological developments involved in the Heavy Vehicle Simulator (HVS) accelerated pavement testing equipment. This covers the period from concept in the late...

  19. Power Grid Simulation with GPU-Accelerated Iterative Solvers and Preconditioners

    NARCIS (Netherlands)

    Xu, S.

    2011-01-01

    This thesis deals with two research problems. The first research problem is motivated by the numerical computation involved in the Time Domain Simulation (TDS) of Power Grids. Due to the ever growing size and complexity of Power Grids such as the China National Grid, accelerating TDS has become a st

  20. Comparison of acceleration signals of simulated and real-world backward falls

    NARCIS (Netherlands)

    Klenk, J.; Becker, C.; Lieken, F.; Nicolai, S.; Maetzler, W.; Alt, W.; Zijlstra, W.; Hausdorff, J. M.; van Lummel, R. C.; Chiari, L.; Lindemann, U.

    2011-01-01

    Most of the knowledge on falls of older persons has been obtained from oral reports that might be biased in many ways. Fall simulations are widely used to gain insight into circumstances of falls, but the results, at least concerning fall detection, are not convincing. Variation of acceleration and

  1. Mining-induced fault reactivation associated with the main conveyor belt roadway and safety of the Barapukuria Coal Mine in Bangladesh: Constraints from BEM simulations

    Energy Technology Data Exchange (ETDEWEB)

    Islam, Md. Rafiqul; Shinjo, Ryuichi [Department of Physics and Earth Sciences, University of the Ryukyus, Okinawa, 903-0213 (Japan)

    2009-09-01

    Fault reactivation during underground mining is a critical problem in coal mines worldwide. This paper investigates the mining-induced reactivation of faults associated with the main conveyor belt roadway (CBR) of the Barapukuria Coal Mine in Bangladesh. The stress characteristics and deformation around the faults were investigated by boundary element method (BEM) numerical modeling. The model consists of a simple geometry with two faults (Fb and Fb1) near the CBR and the surrounding rock strata. A Mohr-Coulomb failure criterion with bulk rock properties is applied to analyze the stability and safety around the fault zones, as well as for the entire mining operation. The simulation results illustrate that the mining-induced redistribution of stresses causes significant deformation within and around the two faults. The horizontal and vertical stresses influence the faults, and higher stresses are concentrated near the ends of the two faults. Higher vertical tensional stress is prominent at the upper end of fault Fb. High deviatoric stress values that concentrated at the ends of faults Fb and Fb1 indicate the tendency towards block failure around the fault zones. The deviatoric stress patterns imply that the reinforcement strength to support the roof of the roadway should be greater than 55 MPa along the fault core zone, and should be more than 20 MPa adjacent to the damage zone of the fault. Failure trajectories that extend towards the roof and left side of fault Fb indicate that mining-induced reactivation of faults is not sufficient to generate water inflow into the mine. However, if movement of strata occurs along the fault planes due to regional earthquakes, and if the faults intersect the overlying Lower Dupi Tila aquiclude, then liquefaction could occur along the fault zones and enhance water inflow into the mine. The study also reveals that the hydraulic gradient and the general direction of groundwater flow are almost at right angles with the trends of
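
    The Mohr-Coulomb stability check used in such BEM analyses reduces, for a single plane, to comparing the resolved shear stress with the frictional strength (generic numbers below, not the Barapukuria stress field):

```python
import math

def resolved_stresses(sigma1, sigma3, theta_deg):
    """Normal and shear stress on a plane whose normal makes theta with sigma1."""
    t = math.radians(theta_deg)
    sn = 0.5 * (sigma1 + sigma3) + 0.5 * (sigma1 - sigma3) * math.cos(2 * t)
    tau = 0.5 * (sigma1 - sigma3) * math.sin(2 * t)
    return sn, tau

def slips(sigma_n, tau, cohesion, mu, pore_pressure=0.0):
    """Mohr-Coulomb criterion: failure when tau >= c + mu * (sigma_n - p)."""
    return tau >= cohesion + mu * (sigma_n - pore_pressure)

sn, tau = resolved_stresses(60.0, 20.0, 30.0)       # principal stresses in MPa
weak_fault = slips(sn, tau, cohesion=5.0, mu=0.2)   # low friction: reactivates
strong_fault = slips(sn, tau, cohesion=5.0, mu=0.6) # higher friction: stable
```

    The pore-pressure term is what links the abstract's groundwater scenario to reactivation: raising p lowers the effective normal stress and can push an otherwise stable fault past the criterion.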

  2. Simulation Prediction and Experiment Setup of Vacuum Laser Acceleration at Brookhaven National Lab-Accelerator Test Facility

    CERN Document Server

    Shao, L; Ding, X; Ho, Y K; Kong, Q; Xu, J J; Pogorelsky, I; Yakimenko, V; Kusche, K

    2011-01-01

    This paper presents the pre-experiment plan and prediction of the first stage of Vacuum Laser Acceleration (VLA) collaborating by UCLA, Fudan University and ATF-BNL. This first stage experiment is a Proof-of-Principle to support our previously posted novel VLA theory. Simulations show that based on ATF's current experimental conditions, the electron beam with initial energy of 15MeV can get net energy gain from intense CO2 laser beam. The difference of electron beam energy spread is observable by ATF beam line diagnostics system. Further this energy spread expansion effect increases along with the laser intensity increasing. The proposal has been approved by ATF committee and experiment will be the next project.

  3. Preliminary simulation of a M6.5 earthquake on the Seattle Fault using 3D finite-difference modeling

    Science.gov (United States)

    Stephenson, William J.; Frankel, Arthur D.

    2000-01-01

    A three-dimensional finite-difference simulation of a moderate-sized (M 6.5) thrust-faulting earthquake on the Seattle fault demonstrates the effects of the Seattle Basin on strong ground motion in the Puget lowland. The model area includes the cities of Seattle, Bremerton and Bellevue. We use a recently developed, detailed 3D velocity model of the Seattle Basin in these simulations. The model extends to 20 km depth and assumes rupture on a finite fault with a random slip distribution. Preliminary results from simulations at frequencies of 0.5 Hz and lower suggest that amplification can occur at the surface of the Seattle Basin through the trapping of energy in the Quaternary sediments. Surface waves generated within the basin appear to contribute to amplification throughout the modeled region. Several factors apparently contribute to large ground motions in downtown Seattle: (1) radiation pattern and directivity from the rupture; (2) amplification and energy trapping within the Quaternary sediments; and (3) basin geometry and variation in depth of both Quaternary and Tertiary sediments.
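The simulations above rest on an explicit finite-difference discretization of the wave equation. A minimal 1-D sketch of that scheme (illustrative only; the grid size, Courant number and initial pulse are assumptions, not values from the study) is:

```python
import numpy as np

def fd_wave_1d(nx=200, nt=400, c=1.0, dx=1.0, courant=0.5):
    """Explicit second-order finite-difference solver for the 1-D wave
    equation u_tt = c^2 u_xx (illustrative 1-D analogue of the 3-D
    scheme; all parameters are assumed)."""
    dt = courant * dx / c            # time step chosen for CFL stability
    r2 = (c * dt / dx) ** 2
    u = np.zeros(nx)
    u[nx // 2] = 1.0                 # initial displacement pulse
    u_prev = u.copy()                # zero initial velocity
    for _ in range(nt):
        u_next = np.zeros(nx)        # fixed u = 0 boundaries
        u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                        + r2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
    return u

wave = fd_wave_1d()
print(wave.shape)  # → (200,)
```

The second-order update is stable only while the Courant number c·dt/dx stays below 1, which is why the time step is tied to the grid spacing.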

  4. Simulation of Electric Faults in Doubly-Fed Induction Generators Employing Advanced Mathematical Modelling

    DEFF Research Database (Denmark)

    Martens, Sebastian; Mijatovic, Nenad; Holbøll, Joachim

    2015-01-01

    Efficient fault detection in generators often requires prior knowledge of fault behavior, which can be obtained from theoretical analysis, often carried out using discrete models of a given generator. Mathematical models are commonly represented in the DQ0 reference frame, which is convenient… in many areas of electrical machine analysis. However, for fault investigations, the phase-coordinate representation has been found more suitable. This paper presents a mathematical model in phase coordinates of the DFIG with two parallel windings per rotor phase. The model has been implemented in Matlab… as undesired spectral components, which can be detected by applying frequency spectrum analysis.

  5. Simulation of near-fault bedrock strong ground-motion field by explicit finite element method

    Institute of Scientific and Technical Information of China (English)

    ZHANG Xiao-zhi; HU Jin-jun; XIE Li-li; WANG Hai-yun

    2006-01-01

    Based on a presumed active fault and a corresponding model, this paper predicts the near-fault ground motion field of a scenario earthquake (Mw = 6 3/4) on an active fault using the explicit finite element method, in combination with a source time function, an improved transmitting artificial boundary, and the inclusion of high-frequency vibration. The results indicate that the improved artificial boundary is stable in numerical computation and that the predicted strong ground motion is consistent in character with observed motion.

  6. Fault diagnosis

    Science.gov (United States)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision-aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision-aiding concept developed from those requirements takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about the behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to

  7. Sensor Fault-Tolerant Control of a Drivetrain Test Rig via an Observer-Based Approach within a Wind Turbine Simulation Model

    Science.gov (United States)

    Georg, Sören; Heyde, Stefan; Schulte, Horst

    2014-12-01

    This paper presents the implementation of an observer-based fault reconstruction and fault-tolerant control scheme on a rapid control prototyping system. The observer runs in parallel to a dynamic wind turbine simulation model and a speed controller, where the latter is used to control the shaft speed of a mechanical drivetrain according to the calculated rotor speed obtained from the wind turbine simulation. An incipient offset fault is added on the measured value of one of the two speed sensors and is reconstructed by means of a Takagi-Sugeno sliding-mode observer. The reconstructed fault value is then subtracted from the faulty sensor value to compensate for the fault. The whole experimental set-up corresponds to a sensor-in-the-loop system.
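A minimal sketch of the observer-based fault reconstruction and compensation idea described above, with a plain linear observer standing in for the paper's Takagi-Sugeno sliding-mode observer (the first-order shaft-speed model, gains and fault size are illustrative assumptions):

```python
# Sketch of observer-based sensor-fault reconstruction and compensation.
# A plain linear observer stands in for the paper's Takagi-Sugeno
# sliding-mode observer; the model, gains and fault size are assumed.
a, b = -0.5, 1.0        # first-order shaft-speed model: x' = a*x + b*u
L_gain = 2.0            # observer correction gain
dt, steps = 0.01, 2000
fault = 0.3             # incipient offset on sensor 2 after half the run

x, x_hat, u = 0.0, 0.0, 1.0
for k in range(steps):
    x += dt * (a * x + b * u)                      # true shaft speed
    y1 = x                                         # healthy speed sensor
    y2 = x + (fault if k > steps // 2 else 0.0)    # faulty speed sensor
    # Observer corrected by the healthy sensor tracks the true state
    x_hat += dt * (a * x_hat + b * u + L_gain * (y1 - x_hat))
    f_hat = y2 - x_hat          # reconstructed offset on sensor 2
    y2_comp = y2 - f_hat        # fault-compensated measurement

print(round(f_hat, 3))  # → 0.3
```

Because the observer is driven by the healthy sensor, its state tracks the true speed, and the residual on the faulty sensor directly reconstructs the offset, which is then subtracted from the faulty measurement exactly as in the scheme above.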

  8. Two-stage light-gas magnetoplasma accelerator for hypervelocity impact simulation

    Science.gov (United States)

    Khramtsov, P. P.; Vasetskij, V. A.; Makhnach, A. I.; Grishenko, V. M.; Chernik, M. Yu; Shikh, I. A.; Doroshko, M. V.

    2016-11-01

    The development of macroparticle acceleration methods for high-speed impact simulation in the laboratory is a pressing problem, given the increasing duration of space flights and the necessity of providing adequate spacecraft protection against micrometeoroid and space debris impacts. This paper presents results of an experimental study of a two-stage light-gas magnetoplasma launcher for accelerating a macroparticle, in which a coaxial plasma accelerator creates a shock wave in a high-pressure channel filled with light gas. Graphite and steel spheres with diameters of 2.5-4 mm were used as projectiles and were accelerated to speeds of 0.8-4.8 km/s. The particles were launched in vacuum. A speed-measuring method was developed for projectile velocity control; the error of this method does not exceed 5%. The flight of the projectile from the barrel and its collision with a target were recorded with a high-speed camera. The results of projectile collisions with elements of meteoroid shielding are presented. In order to increase the projectile velocity, the high-pressure channel should be filled with hydrogen; however, we used helium in our experiments for safety reasons. We therefore expect that the range of mass and velocity of the accelerated particles can be extended by using hydrogen as the accelerating gas.

  9. Laser Ion Acceleration Toward Future Ion Beam Cancer Therapy - Numerical Simulation Study -

    CERN Document Server

    Kawata, Shigeo; Nagashima, Toshihiro; Takano, Masahiro; Barada, Daisuke; Kong, Qing; Gu, Yan Jun; Wang, Ping Xiao; Ma, Yan Yun; Wang, Wei Ming

    2013-01-01

    Ion beams have been used in cancer treatment and have the unique, preferable feature of depositing their main energy inside the human body, so that cancer cells can be killed by the ion beam. However, conventional ion accelerators tend to be huge in size and cost. In this paper a future intense-laser ion accelerator is proposed to make the ion accelerator compact. An intense femtosecond pulsed laser was employed to accelerate ions. The issues in the laser ion accelerator include the energy efficiency from the laser to the ions, ion beam collimation, ion energy spectrum control, ion beam bunching and ion particle energy control. In this study particle computer simulations were performed to address these issues, and each component was designed to control the ion beam quality. When an intense laser illuminates a target, electrons in the target are accelerated and leave the target; temporarily a strong electric field is formed between the high-energy electrons and the target ions, and the target ions ...

  10. Accelerated stochastic and hybrid methods for spatial simulations of reaction diffusion systems

    Science.gov (United States)

    Rossinelli, Diego; Bayati, Basil; Koumoutsakos, Petros

    2008-01-01

    Spatial distributions characterize the evolution of reaction-diffusion models of several physical, chemical, and biological systems. We present two novel algorithms for the efficient simulation of these models: Spatial τ-Leaping (Sτ-Leaping), employing a unified acceleration of the stochastic simulation of reaction and diffusion, and Hybrid τ-Leaping (Hτ-Leaping), combining a deterministic diffusion approximation with a τ-Leaping acceleration of the stochastic reactions. The algorithms are validated by solving Fisher's equation and used to explore the role of the number of particles in pattern formation. The results indicate that the present algorithms have a nearly constant time complexity with respect to the number of events (reaction and diffusion), unlike the exact stochastic simulation algorithm, which scales linearly.
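The τ-leaping idea underlying both algorithms replaces one-reaction-at-a-time simulation with a Poisson-distributed batch of firings per time leap. A minimal sketch for a single decay reaction (the rate, leap size and population are assumptions for illustration, not the paper's Sτ/Hτ algorithms):

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_leap_decay(n0=10000, c=0.1, tau=0.1, t_end=20.0):
    """Tau-leaping for the decay reaction A -> 0 with rate constant c:
    each leap fires a Poisson-distributed number of reactions with mean
    a(x)*tau, where a(x) = c*x is the propensity (illustrative sketch)."""
    x, t = n0, 0.0
    while t < t_end and x > 0:
        k = rng.poisson(c * x * tau)   # reaction firings during the leap
        x = max(x - k, 0)              # clamp to avoid negative population
        t += tau
    return x

final = tau_leap_decay()
print(final)  # close to 10000 * exp(-2) ≈ 1353 on average
```

In the spatial and hybrid variants, the same Poisson leap is applied per subvolume (for reaction and diffusion events) or coupled to a deterministic diffusion step.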

  11. D-leaping: Accelerating stochastic simulation algorithms for reactions with delays

    Science.gov (United States)

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2009-09-01

    We propose a novel, accelerated algorithm for the approximate stochastic simulation of biochemical systems with delays. The present work extends existing accelerated algorithms by distributing, in a time adaptive fashion, the delayed reactions so as to minimize the computational effort while preserving their accuracy. The accuracy of the present algorithm is assessed by comparing its results to those of the corresponding delay differential equations for a representative biochemical system. In addition, the fluctuations produced from the present algorithm are comparable to those from an exact stochastic simulation with delays. The algorithm is used to simulate biochemical systems that model oscillatory gene expression. The results indicate that the present algorithm is competitive with existing works for several benchmark problems while it is orders of magnitude faster for certain systems of biochemical reactions.

  12. Simulation of isothermal multi-phase fuel-coolant interaction using MPS method with GPU acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Gou, W.; Zhang, S.; Zheng, Y. [Zhejiang Univ., Hangzhou (China). Center for Engineering and Scientific Computation

    2016-07-15

    The energetic fuel-coolant interaction (FCI) has been one of the primary safety concerns in nuclear power plants. A graphics processing unit (GPU) implementation of the moving particle semi-implicit (MPS) method is presented and used to simulate the fuel-coolant interaction problem. The governing equations are discretized with the particle interaction model of MPS. The detailed single-GPU implementation is introduced. The three-dimensional broken-dam problem is simulated to verify the developed GPU-accelerated MPS method. The proposed GPU acceleration algorithm and developed code are then used to simulate the FCI problem. In summary, the developed GPU-MPS method showed good agreement with experimental observation and theoretical prediction.

  13. Direct Simulations of Particle Acceleration in Fluctuating Electromagnetic Field across a Shock

    CERN Document Server

    Muranushi, Takayuki

    2008-01-01

    We simulate the acceleration processes of collisionless particles in a shock structure with magnetohydrodynamical (MHD) fluctuations. The electromagnetic field is represented as the sum of an MHD shock solution (B_0, E_0) and a spectrum of torsional Alfvén modes (δB, δE). We represent fluctuation modes in logarithmic wavenumber space. Since the electromagnetic fields are represented analytically, our simulations can easily cover as much as eight orders of magnitude in resonant frequency, and do not suffer from spatial limitations of box size or grid spacing. We deterministically calculate the particle trajectories under the Lorentz force for time intervals of up to ten years, with a time step of ~0.5 s. This is sufficient to resolve Larmor frequencies without a stochastic treatment. Simulations show that the efficiency of the first-order Fermi acceleration can be parametrized by the fluctuation amplitude η ≡ ⟨δB²⟩^(1/2) B_0^(-1). Convergence of the numerical results is...
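Deterministic trajectory integration under the Lorentz force, as described above, is commonly done with a Boris-type pusher, which conserves particle speed exactly in a pure magnetic field. A minimal non-relativistic sketch (the choice of integrator and all parameters are assumptions, not details from the paper):

```python
import numpy as np

def boris_push(v, E, B, qm, dt):
    """One step of the Boris scheme for dv/dt = qm * (E + v x B), a
    standard integrator for deterministic Lorentz-force trajectories
    (non-relativistic illustrative sketch)."""
    v_minus = v + 0.5 * qm * dt * E            # first half electric kick
    t = 0.5 * qm * dt * B                      # magnetic rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)    # norm-preserving rotation
    return v_plus + 0.5 * qm * dt * E          # second half electric kick

# Gyration test: uniform B along z and E = 0, so |v| must be conserved.
v = np.array([1.0, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    v = boris_push(v, E, B, qm=1.0, dt=0.1)
print(round(float(np.linalg.norm(v)), 6))  # → 1.0
```

The half-kick/rotation/half-kick structure is what lets such integrators take many millions of steps without secular energy drift, as required for multi-year orbit calculations.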

  14. "Final all possible steps" approach for accelerating stochastic simulation of coupled chemical reactions

    Institute of Scientific and Technical Information of China (English)

    ZHOU Wen; PENG Xin-jun; LIU Xiang; YAN Zheng-lou; WANG Yi-fei

    2008-01-01

    In this paper, we develop a modified accelerated stochastic simulation method for chemically reacting systems, called the "final all possible steps" (FAPS) method, which obtains reliable statistics of all species at any time during the time course with fewer simulation runs. Moreover, the FAPS method can be incorporated into leap methods, which makes the simulation of larger systems more efficient. Numerical results indicate that the proposed methods can be applied to a wide range of chemically reacting systems with a high level of precision and achieve a significant improvement in efficiency over existing methods.
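For contrast with accelerated methods such as FAPS, the exact baseline is Gillespie's stochastic simulation algorithm (SSA), which draws an exponential waiting time for every single reaction event. A minimal sketch for one decay channel (all parameters are illustrative assumptions):

```python
import random

random.seed(1)

def ssa_decay(n0=500, c=0.05, t_end=40.0):
    """Exact Gillespie SSA for the decay reaction A -> 0: the baseline
    that accelerated methods improve upon. Each event draws an
    exponential waiting time from the total propensity a = c*x
    (illustrative sketch)."""
    x, t = n0, 0.0
    while x > 0:
        t += random.expovariate(c * x)  # waiting time to the next event
        if t > t_end:
            break
        x -= 1                          # one molecule decays
    return x

remaining = ssa_decay()
print(remaining)  # close to 500 * exp(-2) ≈ 68 on average
```

Because the cost of the exact SSA scales with the number of events, methods that batch events per step, as the abstract describes, can be dramatically faster on large systems.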

  15. Simulation on buildup of electron cloud in a proton circular accelerator

    Science.gov (United States)

    Li, Kai-Wei; Liu, Yu-Dong

    2015-10-01

    Electron cloud interaction with high-energy positive beams is believed to be responsible for various undesirable effects such as vacuum degradation, collective beam instability and even beam loss in high-power proton circular accelerators. An important uncertainty in predicting electron cloud instability lies in the detailed processes of the generation and accumulation of the electron cloud. Simulation of the build-up of the electron cloud is necessary for further studies of beam instability caused by electron clouds. The China Spallation Neutron Source (CSNS) is an intense proton accelerator facility now being built, whose accelerator complex includes two main parts: an H-linac and a rapid cycling synchrotron (RCS). The RCS accumulates the 80 MeV proton beam and accelerates it to 1.6 GeV with a repetition rate of 25 Hz. During beam injection at lower energy, the emerging electron cloud may cause serious instability and beam loss on the vacuum pipe. A simulation code has been developed to simulate the build-up, distribution and density of the electron cloud in CSNS/RCS. Supported by National Natural Science Foundation of China (11275221, 11175193)

  16. Simulation studies of laser wakefield acceleration based on typical 100 TW laser facilities

    Institute of Scientific and Technical Information of China (English)

    LI Da-Zhang; GAO Jie; ZHU Xiong-Wei; HE An

    2011-01-01

    In this paper, 2-D particle-in-cell simulations are carried out for laser wakefield acceleration (LWFA). As in a real experiment, we perform plasma density scans for typical 100 TW laser facilities. Several basic laws for self-injected acceleration in the bubble regime are presented. According to these laws, we choose a proper plasma density and obtain a high-quality quasi-monoenergetic electron bunch with an energy of more than 650 MeV and a bunch length of less than 1.5 μm.

  17. Simulation of the relativistic electron dynamics and acceleration in a linearly-chirped laser pulse

    CERN Document Server

    Jisrawi, Najeh M; Salamin, Yousef I

    2014-01-01

    Theoretical investigations are presented, and their results are discussed, of the laser acceleration of a single electron by a chirped pulse. Fields of the pulse are modeled by simple plane-wave oscillations and a $\\cos^2$ envelope. The dynamics emerge from analytic and numerical solutions to the relativistic Lorentz-Newton equations of motion of the electron in the fields of the pulse. All simulations have been carried out by independent Mathematica and Python codes, with identical results. Configurations of acceleration from a position of rest as well as from injection, axially and sideways, at initial relativistic speeds are studied.

  18. Prediction of near-field strong ground motions for scenario earthquakes on active fault

    Institute of Scientific and Technical Information of China (English)

    Wang Haiyun; Xie Lili; Tao Xiaxin; Li Jie

    2006-01-01

    A method to predict near-field strong ground motions for scenario earthquakes on active faults is proposed. First, macro-source parameters characterizing the entire source area, i.e., global source parameters, including fault length, fault width, rupture area, average slip on the fault plane, etc., are estimated from seismogeological surveys, seismicity and seismic scaling laws. Second, slip distributions characterizing the heterogeneity or roughness of the fault plane, i.e., local source parameters, are reproduced/evaluated by the hybrid slip model. Finally, the finite-fault source model, developed from both the global and local source parameters, is combined with a stochastic ground-motion synthesis technique using the dynamic corner frequency, based on seismology. The proposed method is applied to simulate the acceleration time histories at three base-rock stations during the 1994 Northridge earthquake. Comparisons between the predicted and recorded acceleration time histories show that the method is feasible and practicable.

  19. Two-fluid electromagnetic simulations of plasma-jet acceleration with detailed equation-of-state

    Energy Technology Data Exchange (ETDEWEB)

    Thoma, C.; Welch, D. R.; Clark, R. E.; Bruner, N. [Voss Scientific, LLC, Albuquerque, New Mexico 87108 (United States); MacFarlane, J. J.; Golovkin, I. E. [Prism Computational Sciences, Inc., Madison, Wisconsin 53711 (United States)

    2011-10-15

    We describe a new particle-based two-fluid fully electromagnetic algorithm suitable for modeling high density (n_i ≈ 10^17 cm^-3) and high Mach number laboratory plasma jets. In this parameter regime, traditional particle-in-cell (PIC) techniques are challenging due to electron timescale and lengthscale constraints. In this new approach, an implicit field solve allows the use of large timesteps while an Eulerian particle remap procedure allows simulations to be run with very few particles per cell. Hall physics and charge separation effects are included self-consistently. A detailed equation of state (EOS) model is used to evolve the ion charge state and introduce non-ideal gas behavior. Electron cooling due to radiation emission is included in the model as well. We demonstrate the use of these new algorithms in 1D and 2D Cartesian simulations of railgun (parallel plate) jet accelerators using He and Ar gases. The inclusion of EOS and radiation physics reduces the electron temperature, resulting in higher calculated jet Mach numbers in the simulations. We also introduce a surface physics model for jet accelerators in which a frictional drag along the walls leads to axial spreading of the emerging jet. The simulations demonstrate that high Mach number jets can be produced by railgun accelerators for a variety of applications, including high energy density physics experiments.

  20. Forward and adjoint spectral-element simulations of seismic wave propagation using hardware accelerators

    Science.gov (United States)

    Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri

    2015-04-01

    Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach where seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing together with advances in multi-core central processing units (CPUs) can greatly accelerate scientific applications. There are mainly two possible choices of language support for GPU cards, the CUDA programming environment and OpenCL language standard. CUDA software development targets NVIDIA graphic cards while OpenCL was adopted mainly by AMD graphic cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated a code generation tool BOAST into an existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.

  1. Radiation belt electron acceleration during the 17 March 2015 geomagnetic storm: Observations and simulations

    Science.gov (United States)

    Li, W.; Ma, Q.; Thorne, R. M.; Bortnik, J.; Zhang, X.-J.; Li, J.; Baker, D. N.; Reeves, G. D.; Spence, H. E.; Kletzing, C. A.; Kurth, W. S.; Hospodarsky, G. B.; Blake, J. B.; Fennell, J. F.; Kanekal, S. G.; Angelopoulos, V.; Green, J. C.; Goldstein, J.

    2016-06-01

    Various physical processes are known to cause acceleration, loss, and transport of energetic electrons in the Earth's radiation belts, but their quantitative roles at different times and locations require further investigation. During the largest storm of the past decade (17 March 2015), relativistic electrons experienced fairly rapid acceleration up to ~7 MeV within 2 days after an initial substantial dropout, as observed by the Van Allen Probes. In the present paper, we evaluate the relative roles of various physical processes during the recovery phase of this large storm using a 3-D diffusion simulation. By quantitatively comparing the observed and simulated electron evolution, we found that chorus plays a critical role in accelerating electrons up to several MeV near the developing peak location and produces characteristic flat-top pitch angle distributions. By only including radial diffusion, the simulation underestimates the observed electron acceleration, while radial diffusion plays an important role in redistributing electrons and potentially accelerates them to even higher energies. Moreover, plasmaspheric hiss is found to provide efficient pitch angle scattering losses for hundreds-of-keV electrons, while its scattering effect on >1 MeV electrons is relatively slow. Although an additional loss process is required to fully explain the overestimated electron fluxes at multi-MeV energies, the combined physical processes of radial diffusion and pitch angle and energy diffusion by chorus and hiss reproduce the observed electron dynamics remarkably well, suggesting that quasi-linear diffusion theory is adequate to evaluate radiation belt electron dynamics during this large storm.
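The radial-diffusion component of such a 3-D simulation evolves the phase space density f(L, t) via ∂f/∂t = L² ∂/∂L (D_LL L⁻² ∂f/∂L). A minimal explicit solver for this transport term alone (the grid, the power-law D_LL and the initial profile are assumptions for illustration; production codes add pitch angle and energy diffusion by chorus and hiss):

```python
import numpy as np

def radial_diffusion(f, L, D_LL, dt, steps):
    """Explicit solver for the radial-diffusion equation
    df/dt = L^2 d/dL (D_LL L^-2 df/dL) on a fixed L grid, with the
    endpoint values held as boundary conditions (illustrative
    transport-only sketch; D_LL model and grid are assumed)."""
    dL = L[1] - L[0]
    L_mid = 0.5 * (L[:-1] + L[1:])
    for _ in range(steps):
        flux = D_LL[:-1] / L_mid ** 2 * np.diff(f) / dL   # D_LL L^-2 df/dL
        f[1:-1] += dt * L[1:-1] ** 2 * np.diff(flux) / dL
    return f

L = np.linspace(3.0, 7.0, 41)
f0 = np.exp(-(((L - 5.0) / 0.5) ** 2))    # localized injected population
D = 1e-6 * L ** 6                         # assumed power-law D_LL
f_out = radial_diffusion(f0.copy(), L, D, dt=0.01, steps=500)
print(round(float(f_out.max()), 3))       # peak spreads and drops below 1
```

Because D_LL grows steeply with L, the population spreads outward faster than inward, redistributing electrons across L shells much as the abstract describes for radial diffusion alone.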

  2. The numerical simulation study of the dynamic evolutionary processes in an earthquake cycle on the Longmen Shan Fault

    Science.gov (United States)

    Tao, Wei; Shen, Zheng-Kang; Zhang, Yong

    2016-04-01

    concentration areas in the model, one is located in the mid and upper crust on the hanging wall, where the strain energy can be released by permanent deformation such as folding, and the other lies in the deep part of the fault, where the strain energy can be released by earthquakes. (5) The whole earthquake dynamic process is clearly reflected in the evolution of the strain energy increments over the stages of the earthquake cycle. In the interseismic period, the strain energy accumulates relatively slowly; prior to the earthquake, the fault is locked and the strain energy accumulates quickly, with some of the strain energy released in the upper crust on the hanging wall of the fault. In the coseismic stage, the strain energy is released rapidly along the fault. In the postseismic stage, the slow strain accumulation process recovers, within around one hundred years, to that of the interseismic period. The simulation study in this thesis should help better understand the earthquake dynamic process.

  3. MAGNETIC-ISLAND CONTRACTION AND PARTICLE ACCELERATION IN SIMULATED ERUPTIVE SOLAR FLARES

    Energy Technology Data Exchange (ETDEWEB)

    Guidoni, S. E. [The Catholic University of America, 620 Michigan Avenue Northeast, Washington, DC 20064 (United States); DeVore, C. R.; Karpen, J. T. [Heliophysics Science Division, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Lynch, B. J., E-mail: silvina.e.guidoni@nasa.gov [Space Sciences Laboratory, University of California, Berkeley, CA 94720 (United States)

    2016-03-20

    The mechanism that accelerates particles to the energies required to produce the observed high-energy impulsive emission in solar flares is not well understood. Drake et al. proposed a mechanism for accelerating electrons in contracting magnetic islands formed by kinetic reconnection in multi-layered current sheets (CSs). We apply these ideas to sunward-moving flux ropes (2.5D magnetic islands) formed during fast reconnection in a simulated eruptive flare. A simple analytic model is used to calculate the energy gain of particles orbiting the field lines of the contracting magnetic islands in our ultrahigh-resolution 2.5D numerical simulation. We find that the estimated energy gains in a single island range up to a factor of five. This is higher than that found by Drake et al. for islands in the terrestrial magnetosphere and at the heliopause, due to strong plasma compression that occurs at the flare CS. In order to increase their energy by two orders of magnitude and plausibly account for the observed high-energy flare emission, the electrons must visit multiple contracting islands. This mechanism should produce sporadic emission because island formation is intermittent. Moreover, a large number of particles could be accelerated in each magnetohydrodynamic-scale island, which may explain the inferred rates of energetic-electron production in flares. We conclude that island contraction in the flare CS is a promising candidate for electron acceleration in solar eruptions.

  4. Particle-In-Cell Simulation of Electron Acceleration in Solar Coronal Jets

    CERN Document Server

    Baumann, G

    2012-01-01

    We investigate electron acceleration resulting from 3D magnetic reconnection between an emerging, twisted magnetic flux rope and a pre-existing weak, open magnetic field. We first follow the rise of an unstable, twisted flux tube with a resistive MHD simulation where the numerical resolution is enhanced by using fixed mesh refinement. As in previous MHD investigations of similar situations, the rise of the flux tube into the pre-existing inclined coronal magnetic field results in the formation of a solar coronal jet. A snapshot of the MHD model is then used as an initial and boundary condition for a particle-in-cell simulation, using up to half a billion cells and over 20 billion charged particles. Particle acceleration occurs mainly in the reconnection current sheet, with accelerated electrons displaying a power-law dN/dE distribution with an index of about -1.65. The main acceleration mechanism is a systematic electric field, striving to maintain the electric current in the current sheet against losses cau...

  5. Particle-in-cell Simulation of Electron Acceleration in Solar Coronal Jets

    Science.gov (United States)

    Baumann, G.; Nordlund, Å.

    2012-11-01

    We investigate electron acceleration resulting from three-dimensional magnetic reconnection between an emerging, twisted magnetic flux rope and a pre-existing weak, open magnetic field. We first follow the rise of an unstable, twisted flux tube with a resistive MHD simulation where the numerical resolution is enhanced by using fixed mesh refinement. As in previous MHD investigations of similar situations, the rise of the flux tube into the pre-existing inclined coronal magnetic field results in the formation of a solar coronal jet. A snapshot of the MHD model is then used as an initial and boundary condition for a particle-in-cell simulation, using up to half a billion cells and over 20 billion charged particles. Particle acceleration occurs mainly in the reconnection current sheet, with accelerated electrons displaying a power law in the energy probability distribution with an index of around -1.5. The main acceleration mechanism is a systematic electric field, striving to maintain the electric current in the current sheet against losses caused by electrons not being able to stay in the current sheet for more than a few seconds at a time.

  6. Energy loss of a high charge bunched electron beam in plasma: Simulations, scaling, and accelerating wakefields

    Directory of Open Access Journals (Sweden)

    J. B. Rosenzweig

    2004-06-01

    The energy loss and gain of a beam in the nonlinear, “blowout” regime of the plasma wakefield accelerator, which features ultrahigh accelerating fields, linear transverse focusing forces, and nonlinear plasma motion, has been asserted, through previous observations in simulations, to scale linearly with beam charge. Additionally, from a recent analysis by Barov et al., it has been concluded that for an infinitesimally short beam, the energy loss is indeed predicted to scale linearly with beam charge for arbitrarily large beam charge. This scaling is predicted to hold despite the onset of a relativistic, nonlinear response by the plasma, when the number of beam particles occupying a cubic plasma skin depth exceeds that of plasma electrons within the same volume. This paper is intended to explore the deviations from linear energy loss using 2D particle-in-cell simulations that arise in the case of experimentally relevant finite length beams. The peak accelerating field in the plasma wave excited behind the finite-length beam is also examined, with the artifact of wave spiking adding to the apparent persistence of linear scaling of the peak field amplitude into the nonlinear regime. At large enough normalized charge, the linear scaling of both decelerating and accelerating fields collapses, with serious consequences for plasma wave excitation efficiency. Using the results of parametric particle-in-cell studies, the implications of these results for observing severe deviations from linear scaling in present and planned experiments are discussed.

  7. Constraints on particle acceleration sites in the Crab Nebula from relativistic MHD simulations

    CERN Document Server

    Olmi, Barbara; Amato, Elena; Bucciantini, Niccolò

    2015-01-01

    The Crab Nebula is one of the most efficient accelerators in the Galaxy and the only galactic source showing direct evidence of PeV particles. In spite of this, the physical process behind such effective acceleration is still a deep mystery. While particle acceleration, at least at the highest energies, is commonly thought to occur at the pulsar wind termination shock, the properties of the upstream flow are thought to be non-uniform along the shock surface, and important constraints on the mechanism at work come from exact knowledge of where along this surface particles are being accelerated. Here we use axisymmetric relativistic MHD simulations to obtain constraints on the acceleration site(s) of particles of different energies in the Crab Nebula. Various scenarios are considered for the injection of particles responsible for synchrotron radiation in the different frequency bands, radio, optical and X-rays. The resulting emission properties are compared with available data on the multi wavelength time varia...

  8. Benchmark of Space Charge Simulations and Comparison with Experimental Results for High Intensity, Low Energy Accelerators

    CERN Document Server

    Cousineau, Sarah M

    2005-01-01

    Space charge effects are a major contributor to beam halo and emittance growth leading to beam loss in high intensity, low energy accelerators. As future accelerators strive towards unprecedented levels of beam intensity and beam loss control, a more comprehensive understanding of space charge effects is required. A wealth of simulation tools have been developed for modeling beams in linacs and rings, and with the growing availability of high-speed computing systems, computationally expensive problems that were inconceivable a decade ago are now being handled with relative ease. This has opened the field for realistic simulations of space charge effects, including detailed benchmarks with experimental data. A great deal of effort is being focused in this direction, and several recent benchmark studies have produced remarkably successful results. This paper reviews the achievements in space charge benchmarking in the last few years, and discusses the challenges that remain.

  9. Electric field simulation and measurement of a pulse line ion accelerator

    Institute of Scientific and Technical Information of China (English)

    SHEN Xiao-Kang; ZHANG Zi-Min; CAO Shu-Chun; ZHAO Hong-Wei; WANG Bo; SHEN Xiao-Li; ZHAO Quan-Tang; LIU Ming; JING Yi

    2012-01-01

    An oil dielectric helical pulse line to demonstrate the principles of a Pulse Line Ion Accelerator (PLIA) has been designed and fabricated. The simulation of the axial electric field of the accelerator with the CST code has been completed, and the simulation results show complete agreement with the theoretical calculations. To fully understand the real value of the electric field excited by the helical line in the PLIA, an integrated optical electric field measurement system was adopted. The measurement result shows that the real magnitude of the axial electric field is smaller than the calculated one, probably because the actual pitch of the resistor column is much less than that of the helix.

  10. Load management strategy for Particle-In-Cell simulations in high energy particle acceleration

    Science.gov (United States)

    Beck, A.; Frederiksen, J. T.; Dérouillat, J.

    2016-09-01

    In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performances. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.

  11. Simultaneous Estimation of Geophysical Parameters with Microwave Radiometer Data based on Accelerated Simulated Annealing: SA

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2012-07-01

    Full Text Available A method for geophysical parameter estimation with microwave radiometer data based on Simulated Annealing (SA) is proposed. The geophysical parameters estimated with microwave radiometer data are closely related to each other, so simultaneous estimation imposes constraints in accordance with these relations. On the other hand, SA requires huge computer resources for convergence. In order to accelerate the convergence process, an oscillating decreasing function is proposed as the cooling schedule. Experimental results show remarkable improvements in the geophysical parameter estimations.
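
The oscillated cooling idea above can be sketched as follows. This is a minimal, hypothetical illustration (the cost function, decay rate, and oscillation period are assumptions, not the paper's actual settings): an exponential decay modulated by a bounded oscillation replaces the usual monotone schedule.

```python
import math
import random

def oscillating_temperature(t0, k, decay=0.99, period=50, depth=0.5):
    # Exponentially decreasing baseline modulated by a bounded oscillation;
    # depth < 1 keeps the temperature strictly positive.
    return t0 * decay ** k * (1.0 + depth * math.cos(2.0 * math.pi * k / period))

def anneal(cost, x0, steps=2000, t0=1.0, seed=0):
    """Plain simulated annealing over a 1-D variable with the oscillating schedule."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    for k in range(steps):
        t = oscillating_temperature(t0, k)
        cand = x + rng.gauss(0.0, 0.5)  # random neighbour
        fc = cost(cand)
        # Metropolis acceptance: always take improvements, sometimes worse moves.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest
```

For example, minimizing (x - 3)^2 from a distant start converges to x close to 3; the oscillation periodically re-heats the search slightly, which the abstract reports speeds convergence for the coupled geophysical retrieval.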

  12. Monte Carlo simulation of photon migration in 3D turbid media accelerated by graphics processing units.

    Science.gov (United States)

    Fang, Qianqian; Boas, David A

    2009-10-26

    We report a parallel Monte Carlo algorithm accelerated by graphics processing units (GPU) for modeling time-resolved photon migration in arbitrary 3D turbid media. By taking advantage of the massively parallel threads and low-memory latency, this algorithm allows many photons to be simulated simultaneously in a GPU. To further improve the computational efficiency, we explored two parallel random number generators (RNG), including a floating-point-only RNG based on a chaotic lattice. An efficient scheme for boundary reflection was implemented, along with the functions for time-resolved imaging. For a homogeneous semi-infinite medium, good agreement was observed between the simulation output and the analytical solution from the diffusion theory. The code was implemented with CUDA programming language, and benchmarked under various parameters, such as thread number, selection of RNG and memory access pattern. With a low-cost graphics card, this algorithm has demonstrated an acceleration ratio above 300 when using 1792 parallel threads over conventional CPU computation. The acceleration ratio drops to 75 when using atomic operations. These results render the GPU-based Monte Carlo simulation a practical solution for data analysis in a wide range of diffuse optical imaging applications, such as human brain or small-animal imaging.
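
The per-photon loop that the GPU version parallelizes (one thread per photon) can be sketched on the CPU. This is a simplified, assumed model, not the authors' code: isotropic scattering (g = 0), a semi-infinite homogeneous medium at z >= 0, and absorption handled by weight attenuation.

```python
import math
import random

def photon_mc(mu_a, mu_s, n_photons=2000, seed=1):
    """Fraction of launched photon weight absorbed in a semi-infinite medium.

    mu_a, mu_s: absorption and scattering coefficients (1/mm).
    """
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    absorbed = 0.0
    for _ in range(n_photons):
        z, w, uz = 0.0, 1.0, 1.0           # depth, weight, direction cosine
        while True:
            step = -math.log(1.0 - rng.random()) / mu_t  # exponential free path
            z += uz * step
            if z < 0.0:
                break                       # escaped through the surface
            absorbed += w * (1.0 - albedo)  # deposit the absorbed share of weight
            w *= albedo
            if w < 1e-4:
                break                       # terminate (Russian roulette omitted)
            uz = 2.0 * rng.random() - 1.0   # isotropic re-scatter (g = 0)
    return absorbed / n_photons
```

The GPU implementation maps this independent per-photon loop onto thousands of parallel threads; mainly the random number generation and memory access patterns change.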

  13. Simulations and measurements of coupling impedance for modern particle accelerator devices

    CERN Document Server

    AUTHOR|(CDS)2158523; Biancacci, Nicolò; Mostacci, Andrea

    This document studies the coupling impedance of modern devices, installed or planned, in different particle accelerators. In the specific cases: • For a device in the design phase, several simulations for impedance calculation have been performed. • For a component already built and in use, measurements of the coupling impedance have been carried out. The simulations are used to determine the impact of the interconnect between two magnets, designed for the future particle accelerator FCC, on the overall impedance of the machine, which is about 100 km long. In particular, a cross-check between theory, simulations and measurements of components already built has been performed, allowing a better and deeper study of the component analysed; such checks should provide a clear guideline for future work. The measurements concern an existing component already used in the LHC, the longest particle accelerator ever built, 27 km long. The coupling impe...

  14. An Accelerating Solution for N-Body MOND Simulation with FPGA-SoC

    Directory of Open Access Journals (Sweden)

    Bo Peng

    2016-01-01

    Full Text Available As a modified-gravity proposal to handle the dark matter problem on galactic scales, Modified Newtonian Dynamics (MOND) has shown great success. However, the N-body MOND simulation is quite challenged by its computation complexity, which calls for acceleration of the simulation calculations. In this paper, we present a highly integrated accelerating solution for N-body MOND simulations. By using the FPGA-SoC, which integrates both FPGA and SoC (system on chip) in one chip, our solution exhibits potentials for better performance, higher integration, and lower power consumption. To handle the calculation bottleneck of potential summation, on one hand, we develop a strategy to simplify the pipeline, in which the square calculation task is conducted by the DSP48E1 of Xilinx 7 series FPGAs, so as to reduce the logic resource utilization of each pipeline; on the other hand, advantages of the particle-mesh scheme are taken to overcome the bottleneck on bandwidth. Our experiment results show that 2 more pipelines can be integrated in the Zynq-7020 FPGA-SoC with the simplified pipeline, and the bandwidth requirement is reduced significantly. Furthermore, our accelerating solution has a full range of advantages over different processors. Compared with GPU, our work is about 10 times better in performance per watt and 50% better in performance per cost.
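
The computational kernel being accelerated here is the MOND potential summation. As a reference for what such pipelines ultimately compute, here is the closed-form effective acceleration under the commonly used "simple" interpolating function mu(x) = x/(1+x) — an illustrative choice; the paper's actual interpolating function and particle-mesh scheme may differ.

```python
import math

A0 = 1.2e-10  # MOND acceleration scale, m/s^2

def mond_acceleration(g_newton):
    """Solve mu(g/a0) * g = g_N with mu(x) = x / (1 + x).

    The quadratic g^2 - g_N*g - g_N*a0 = 0 gives the positive root below.
    Limits: g -> g_N when g_N >> a0, and g -> sqrt(g_N * a0) when g_N << a0.
    """
    return 0.5 * (g_newton + math.sqrt(g_newton ** 2 + 4.0 * g_newton * A0))
```

In an N-body code this correction is applied on top of the Newtonian field, which is why the Newtonian potential summation dominates the pipeline design.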

  15. Tracking parameter simulation for the Turkish accelerator center particle factory tracker system

    Energy Technology Data Exchange (ETDEWEB)

    Tapan, I., E-mail: ilhan@uludag.edu.tr; Pilicer, E.; Pilicer, F.B.

    2016-09-21

    The silicon tracker part of the Turkish Accelerator Center super charm particle factory detector was designed for effectively tracking charged particles with momentum values up to 2.0 GeV/c. In this work, the FLUKA simulation code has been used to estimate the track parameters and their resolutions in the designed tracker system. These results have been compared with those obtained by the tkLayout software package. The simulated track parameter resolutions are compatible with the physics goals of the tracking detector.

  16. Cosmological Shocks in Eulerian Simulations: Main Properties and Cosmic Rays Acceleration

    CERN Document Server

    Vazza, F; Gheller, C

    2008-01-01

    Aims: morphologies, number and energy distributions of Cosmological Shock Waves from a set of ENZO cosmological simulations are produced, along with a study of the connection with Cosmic Ray processes in different environments. Method: we perform cosmological simulations with the public release of the PPM code ENZO, adopt a simple and physically motivated numerical setup to follow the evolution of cosmic structures at a resolution of 125 kpc per cell, and characterise shocks with a new post-processing scheme. Results: we estimate the efficiency of the acceleration of Cosmic Ray particles and present the first comparison of our results with existing limits from observations of galaxy clusters.

  17. Stochastic finite-fault modelling of strong earthquakes in Narmada South Fault, Indian Shield

    Indian Academy of Sciences (India)

    P Sengupta

    2012-06-01

    The Narmada South Fault in the Indian peninsular shield region is associated with moderate-to-strong earthquakes. The prevailing hazard evidenced by the earthquake-related fatalities in the region imparts significance to the investigations of the seismogenic environment. In the present study, the prevailing seismotectonic conditions specified by parameters associated with source, path and site conditions are appraised. Stochastic finite-fault models are formulated for each scenario earthquake. The simulated peak ground accelerations for the rock sites from the possible mean maximum earthquake of magnitude 6.8 go as high as 0.24 g, while a fault rupture of magnitude 7.1 exhibits a maximum peak ground acceleration of 0.36 g. The results suggest that the present hazard specification of the Bureau of Indian Standards is inadequate. The present study is expected to facilitate the development of ground motion models for deterministic and probabilistic seismic hazard analysis of the region.
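
In the stochastic (finite-)fault method, the source spectrum of each scenario event is controlled by the Brune corner frequency. A minimal sketch, with assumed default values for stress drop and shear-wave velocity (the study's own parameter choices are not reproduced here):

```python
import math

def corner_frequency(mw, stress_drop_bars=100.0, beta_km_s=3.5):
    """Brune corner frequency fc = 4.9e6 * beta * (dsigma / M0)**(1/3).

    mw: moment magnitude; M0 is converted to dyne-cm via
    log10(M0) = 1.5 * Mw + 16.05 (Hanks & Kanamori).
    """
    m0 = 10.0 ** (1.5 * mw + 16.05)  # seismic moment, dyne-cm
    return 4.9e6 * beta_km_s * (stress_drop_bars / m0) ** (1.0 / 3.0)
```

The magnitude 7.1 rupture has a lower corner frequency than the magnitude 6.8 event, shifting spectral energy toward longer periods even as the peak ground acceleration grows.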

  18. Fault Monitoring and Fault Recovery Control for Position Moored Tanker

    DEFF Research Database (Denmark)

    Fang, Shaoji; Blanke, Mogens

    2011-01-01

    This paper addresses fault-tolerant control for position mooring of a shuttle tanker operating in the North Sea. A complete framework for fault diagnosis is presented, but the loss of a sub-sea mooring line buoyancy element is given particular attention, since this fault could lead to mooring line… Properties of detection and fault-tolerant control are demonstrated by high-fidelity simulations.

  19. GPU-Accelerated PIC/MCC Simulation of Laser-Plasma Interaction Using BUMBLEBEE

    Science.gov (United States)

    Jin, Xiaolin; Huang, Tao; Chen, Wenlong; Wu, Huidong; Tang, Maowen; Li, Bin

    2015-11-01

    The research of laser-plasma interaction in its wide applications relies on the use of advanced numerical simulation tools to achieve high performance operation while reducing computational time and cost. BUMBLEBEE has been developed to be a fast simulation tool used in the research of laser-plasma interactions. BUMBLEBEE uses a 1D3V electromagnetic PIC/MCC algorithm that is accelerated by using high performance Graphics Processing Unit (GPU) hardware. BUMBLEBEE includes a friendly user-interface module and four physics simulators. The user-interface provides a powerful solid-modeling front end and graphical and computational post processing functionality. The solver of BUMBLEBEE has four modules for now, which are used to simulate the field ionization, electron collisional ionization, binary coulomb collision and laser-plasma interaction processes. The ionization characteristics of laser-neutral interaction and the generation of high-energy electrons have been analyzed by using BUMBLEBEE for validation.

  20. Frictional properties of simulated anhydrite-dolomite fault gouge and implications for seismogenic potential

    NARCIS (Netherlands)

    Pluymakers, A.M.H.; Niemeijer, A.R.; Spiers, C.J.

    2016-01-01

    The frictional properties of anhydrite-dolomite fault gouges, and the effects of CO2 upon them, are of key importance in assessing the risks associated with CO2 storage in reservoir formations capped by anhydrite-dolomite sequences, and in understanding seismicity occurring in such formations (such

  1. Numerical simulation on the movement law of overlying strata in the stope with a fault and analysis of its influence on the ground gas drainage boreholes

    Institute of Scientific and Technical Information of China (English)

    HU Qian-ting; YAN Jing-jing; CHENG Guo-qiang

    2007-01-01

    In order to study the influence of a fault on the movement law of the overlying strata as well as its effect on gas drainage boreholes, based on the practical situation of the 1242(1) panel at Xieqiao Mine in Huainan, a Finite Element Method (FEM) model was built, and the distribution of the stress field and the displacement field of the overlying strata in a stope with a fault were simulated using the FEM software ANSYS. The results indicate that because of the existence of the fault, the horizontal displacement of the overlying strata near the gas drainage borehole becomes larger than that in a stope without a fault, and the distribution of the stress field of the overlying strata changes greatly. When the working face is far away from the fault, the distribution of the stress field is approximately symmetrical. As the working face advances to a position 50 m away from the fault, the stress range at the right-side goaf area is twice that at the left side. Here, the stress distribution area of the goaf and the fault plane run through each other, and a fracture-connected zone is formed. It can be presumed that the gas adsorbed in the coal and rock will flow into the fault zone along the fracture-connected zone, which causes the quantity of gas drained to decrease remarkably.

  2. Automated detection and analysis of particle beams in laser-plasma accelerator simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ushizima, Daniela Mayumi; Geddes, C.G.; Cormier-Michel, E.; Bethel, E. Wes; Jacobsen, J.; Prabhat; Rübel, O.; Weber, G.; Hamann, B.

    2010-05-21

    Numerical simulations of laser-plasma wakefield (particle) accelerators model the acceleration of electrons trapped in plasma oscillations (wakes) left behind when an intense laser pulse propagates through the plasma. The goal of these simulations is to better understand the process involved in plasma wake generation and how electrons are trapped and accelerated by the wake. Understanding of such accelerators, and their development, offer high accelerating gradients, potentially reducing size and cost of new accelerators. One operating regime of interest is where a trapped subset of electrons loads the wake and forms an isolated group of accelerated particles with low spread in momentum and position, desirable characteristics for many applications. The electrons trapped in the wake may be accelerated to high energies, the plasma gradient in the wake reaching up to a gigaelectronvolt per centimeter. High-energy electron accelerators power intense X-ray radiation to terahertz sources, and are used in many applications including medical radiotherapy and imaging. To extract information from the simulation about the quality of the beam, a typical approach is to examine plots of the entire dataset, visually determining the adequate parameters necessary to select a subset of particles, which is then further analyzed. This procedure requires laborious examination of massive data sets over many time steps using several plots, a routine that is unfeasible for large data collections. Demand for automated analysis is growing along with the volume and size of simulations. Current 2D LWFA simulation datasets are typically between 1GB and 100GB in size, but simulations in 3D are of the order of TBs. The increase in the number of datasets and dataset sizes leads to a need for automatic routines to recognize particle patterns as particle bunches (beam of electrons) for subsequent analysis. Because of the growth in dataset size, the application of machine learning techniques for

  3. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Pareja, S. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario 'Carlos Haya', Avda. Carlos Haya, s/n, E-29010 Malaga (Spain)], E-mail: garciapareja@gmail.com; Vilches, M. [Servicio de Fisica y Proteccion Radiologica, Hospital Regional Universitario 'Virgen de las Nieves', Avda. de las Fuerzas Armadas, 2, E-18014 Granada (Spain); Lallena, A.M. [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)

    2007-09-21

    The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators of use in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and permits, in addition, investigation of the 'hot' regions of the accelerator, information which is essential for developing a source model for this therapy tool.

  4. GPU-accelerated Tersoff potentials for massively parallel Molecular Dynamics simulations

    Science.gov (United States)

    Nguyen, Trung Dac

    2017-03-01

    The Tersoff potential is one of the empirical many-body potentials that has been widely used in simulation studies at atomic scales. Unlike pair-wise potentials, the Tersoff potential involves three-body terms, which require much more arithmetic operations and data dependency. In this contribution, we have implemented the GPU-accelerated version of several variants of the Tersoff potential for LAMMPS, an open-source massively parallel Molecular Dynamics code. Compared to the existing MPI implementation in LAMMPS, the GPU implementation exhibits a better scalability and offers a speedup of 2.2X when run on 1000 compute nodes on the Titan supercomputer. On a single node, the speedup ranges from 2.0 to 8.0 times, depending on the number of atoms per GPU and hardware configurations. The most notable features of our GPU-accelerated version include its design for MPI/accelerator heterogeneous parallelism, its compatibility with other functionalities in LAMMPS, its ability to give deterministic results and to support both NVIDIA CUDA- and OpenCL-enabled accelerators. Our implementation is now part of the GPU package in LAMMPS and accessible for public use.

  5. Proton and Helium Injection Into First Order Fermi Acceleration at Shocks: Hybrid Simulation and Analysis

    Science.gov (United States)

    Dudnikova, Galina; Malkov, Mikhail; Sagdeev, Roald; Liseykina, Tatjana; Hanusch, Adrian

    2016-10-01

    Elemental composition of galactic cosmic rays (CR) probably holds the key to their origin. Most likely, they are accelerated at collisionless shocks in supernova remnants, but the acceleration mechanism is not entirely understood. One complicated problem is "injection", a process whereby the shock selects a tiny fraction of particles to keep on crossing its front and gain more energy. Comparing the injection rates of particles with different mass-to-charge ratios is a powerful tool for studying this process. Recent advances in measurements of the CR He/p ratio have provided particularly important new clues. We performed a series of hybrid simulations and analyzed the joint injection of protons and helium, in conjunction with the upstream waves they generate. The emphasis of this work is on the bootstrap aspects of injection manifested in particle confinement to the shock and, therefore, their continuing acceleration by the self-driven waves. The waves are initially generated by He and protons in separate spectral regions, and their interaction plays a crucial role in particle acceleration. The work is ongoing and new results will be reported along with their analysis and comparison with the latest data from the AMS-02 space-based spectrometer. Work supported by Grant RFBR 16-01-00209, NASA ATP-program under Award NNX14AH36G, and by the US Department of Energy under Award No. DE-FG02-04ER54738.

  6. Issues for Simulation of Galactic Cosmic Ray Exposures for Radiobiological Research at Ground Based Accelerators

    Directory of Open Access Journals (Sweden)

    Myung-Hee Y Kim

    2015-06-01

    Full Text Available For research on the health risks of galactic cosmic rays (GCR), ground-based accelerators have been used for radiobiology research with mono-energetic beams of single high charge, Z, and energy, E (HZE) particles. In this paper we consider the pros and cons of a GCR reference field at a particle accelerator. At the NASA Space Radiation Laboratory (NSRL), we have proposed a GCR simulator, which implements a new rapid switching mode and higher energy beam extraction to 1.5 GeV/u, in order to integrate multiple ions into a single simulation within hours or longer for chronic exposures. After considering the GCR environment and energy limitations of NSRL, we performed extensive simulation studies using the stochastic transport code GERMcode (GCR Event Risk Model) to define a GCR reference field using 9 HZE particle beam-energy combinations, each with a unique absorber thickness to provide fragmentation, and 10 or more energies of proton and 4He beams. The reference field is shown to represent well the charge dependence of GCR dose in several energy bins behind shielding compared to a simulated GCR environment. However, a more significant challenge for space radiobiology research is to consider chronic GCR exposure of up to 3 years in relation to simulations with animal models of human risks. We discuss issues in approaches to map important biological time scales in experimental models using ground-based simulation with extended exposure of up to a few weeks using chronic or fractionation exposures. A kinetics model of HZE particle hit probabilities suggests that experimental simulations of several weeks will be needed to avoid high fluence rate artifacts, which places limitations on the experiments to be performed. Ultimately, risk estimates are limited by theoretical understanding, and focus on improving the understanding of mechanisms and developing experimental models should remain the highest priority for space radiobiology research.

  7. Accelerated Molecular Dynamics Simulations of Ligand Binding to a Muscarinic G-protein Coupled Receptor

    Science.gov (United States)

    Kappel, Kalli; Miao, Yinglong; McCammon, J. Andrew

    2017-01-01

    Elucidating the detailed process of ligand binding to a receptor is pharmaceutically important for identifying druggable binding sites. With the ability to provide atomistic detail, computational methods are well poised to study these processes. Here, accelerated molecular dynamics (aMD) is proposed to simulate processes of ligand binding to a G-protein coupled receptor (GPCR), in this case the M3 muscarinic receptor, which is a target for treating many human diseases, including cancer, diabetes and obesity. Long-timescale aMD simulations were performed to observe the binding of three chemically diverse ligand molecules: antagonist tiotropium (TTP), partial agonist arecoline (ARc), and full agonist acetylcholine (ACh). In comparison with earlier microsecond-timescale conventional MD simulations, aMD greatly accelerated the binding of ACh to the receptor orthosteric ligand-binding site and the binding of TTP to an extracellular vestibule. Further aMD simulations also captured binding of ARc to the receptor orthosteric site. Additionally, all three ligands were observed to bind in the extracellular vestibule during their binding pathways, suggesting that it is a metastable binding site. This study demonstrates the applicability of aMD to protein-ligand binding, especially the drug recognition of GPCRs. PMID:26537408

  8. GPU-accelerated Monte Carlo simulation of particle coagulation based on the inverse method

    Science.gov (United States)

    Wei, J.; Kruis, F. E.

    2013-09-01

    Simulating particle coagulation using Monte Carlo methods is in general a challenging computational task due to its numerical complexity and the computing cost. Currently, the lowest computing costs are obtained when applying a graphics processing unit (GPU) originally developed for speeding up graphics processing in the consumer market. In this article we present an implementation of accelerating a Monte Carlo method based on the Inverse scheme for simulating particle coagulation on the GPU. The abundant data parallelism embedded within the Monte Carlo method is explained as it will allow an efficient parallelization of the MC code on the GPU. Furthermore, the computation accuracy of the MC on GPU was validated with a benchmark, a CPU-based discrete-sectional method. To evaluate the performance gains by using the GPU, the computing time on the GPU was compared against its sequential counterpart on the CPU. The measured speedups show that the GPU can accelerate the execution of the MC code by a factor of 10-100, depending on the chosen number of simulation particles. The algorithm shows a linear dependence of computing time on the number of simulation particles, which is a remarkable result in view of the n² dependence of the coagulation.
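
For a constant kernel the event-driven Monte Carlo coagulation loop is easy to sketch: the inverse-CDF pair selection degenerates to a uniform pick, and the waiting time between events is exponential in the total rate. A hypothetical minimal version (the paper's Inverse scheme handles general, size-dependent kernels):

```python
import math
import random

def coagulate(masses, n_events, beta=1.0, seed=2):
    """Merge random pairs under a constant coagulation kernel beta.

    Returns the surviving particle masses and the elapsed simulated time.
    """
    rng = random.Random(seed)
    masses = list(masses)
    t = 0.0
    for _ in range(n_events):
        n = len(masses)
        if n < 2:
            break
        rate = beta * n * (n - 1) / 2.0            # total pair-collision rate
        t += -math.log(1.0 - rng.random()) / rate  # exponential waiting time
        i, j = rng.sample(range(n), 2)             # uniform pair (constant kernel)
        masses[i] += masses[j]
        masses.pop(j)
    return masses, t
```

Mass is conserved while the particle count drops by one per event; a GPU implementation parallelizes many such stochastic realizations and the kernel-rate bookkeeping.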

  9. Three-dimensional simulations of ground motions in the Seattle region for earthquakes in the Seattle fault zone

    Science.gov (United States)

    Frankel, A.; Stephenson, W.

    2000-01-01

    We used the 3D finite-difference method to model observed seismograms of two earthquakes (ML 4.9 and 3.5) in the Seattle region and to simulate ground motions for hypothetical M 6.5 and M 5.0 earthquakes on the Seattle fault, for periods greater than 2 sec. A 3D velocity model of the Seattle Basin was constructed from studies that analyzed seismic-reflection surveys, borehole logs, and gravity and aeromagnetic data. The observations and the simulations highlight the importance of the Seattle Basin on long-period ground motions. For earthquakes occurring just south of the basin, the edge of the basin and the variation of the thickness of the Quaternary deposits in the basin produce much larger surface waves than expected from flat-layered models. The data consist of seismograms recorded by instruments deployed in Seattle by the USGS and the University of Washington (UW). The 3D simulation reproduces the peak amplitude and duration of most of the seismograms of the June 1997 Bremerton event (ML 4.9) recorded in Seattle. We found the focal mechanism for this event that best fits the observed seismograms in Seattle by combining Green's functions determined from the 3D simulations for the six fundamental moment couples. The February 1997 event (ML 3.5) to the south of the Seattle Basin exhibits a large surface-wave arrival at UW whose amplitude is matched by the synthetics in our 3D velocity model, for a source depth of 9 km. The M 6.5 simulations incorporated a fractal slip distribution on the fault plane. These simulations produced the largest ground motions in an area that includes downtown Seattle. This is mainly caused by rupture directed up dip toward downtown, radiation pattern of the source, and the turning of S waves by the velocity gradient in the Seattle basin. Another area of high ground motion is located about 13 km north of the fault and is caused by an increase in the amplitude of higher-mode Rayleigh waves caused by the thinning of the Quaternary

  10. Measurements and simulations of wakefields at the Accelerator Test Facility 2

    Science.gov (United States)

    Snuverink, J.; Ainsworth, R.; Boogert, S. T.; Cullinan, F. J.; Lyapin, A.; Kim, Y. I.; Kubo, K.; Kuroda, S.; Okugi, T.; Tauchi, T.; Terunuma, N.; Urakawa, J.; White, G. R.

    2016-09-01

    Wakefields are an important factor in accelerator design, and are a real concern when preserving the low beam emittance in modern machines. Charge dependent beam size growth has been observed at the Accelerator Test Facility (ATF2), a test accelerator for future linear collider beam delivery systems. Part of the explanation of this beam size growth is wakefields. In this paper we present numerical calculations of the wakefields produced by several types of geometrical discontinuities in the beam line as well as tracking simulations to estimate the induced effects. We also discuss precision beam kick measurements performed with the ATF2 cavity beam position monitor system for a test wakefield source in a movable section of the vacuum chamber. Using an improved model independent method we measured a wakefield kick for this movable section of about 0.49 V/pC/mm, which, compared to the calculated value from electromagnetic simulations of 0.41 V/pC/mm, is within the systematic error.

  11. GPU acceleration of Monte Carlo simulations for polarized photon scattering in anisotropic turbid media.

    Science.gov (United States)

    Li, Pengcheng; Liu, Celong; Li, Xianpeng; He, Honghui; Ma, Hui

    2016-09-20

    In earlier studies, we developed scattering models and the corresponding CPU-based Monte Carlo simulation programs to study the behavior of polarized photons as they propagate through complex biological tissues. Studying the simulation results over high degrees of freedom created a demand for massive simulation tasks. In this paper, we report a parallel implementation of the simulation program based on the compute unified device architecture running on a graphics processing unit (GPU). Different schemes for sphere-only simulations and sphere-cylinder mixture simulations were developed. Diverse optimizing methods were employed to achieve the best acceleration. The final-version GPU program is hundreds of times faster than the CPU version. Dependence of the performance on input parameters and precision was also studied. It is shown that using single precision in the GPU simulations results in very limited losses in accuracy. Consumer-level graphics cards, even those in laptop computers, are more cost-effective than scientific graphics cards for single-precision computation.

  12. Topical pimecrolimus and tacrolimus do not accelerate photocarcinogenesis in hairless mice after UVA or simulated solar radiation

    DEFF Research Database (Denmark)

    Lerche, C.M.; Philipsen, P.A.; Poulsen, T.;

    2009-01-01

    the absence of carcinogenic effect of tacrolimus alone and in combination with simulated solar radiation (SSR) on hairless mice. The aim of this study is to determine whether pimecrolimus accelerates photocarcinogenesis in combination with SSR or pimecrolimus and tacrolimus accelerate photocarcinogenesis...

  13. FAULT DIAGNOSIS WITH MULTI-STATE ALARMS IN A NUCLEAR POWER CONTROL SIMULATOR

    Energy Technology Data Exchange (ETDEWEB)

    Austin Ragsdale; Roger Lew; Brian P. Dyre; Ronald L. Boring

    2012-10-01

    This research addresses how alarm systems can increase operator performance within nuclear power plant operations. The experiment examined the effect of two types of alarm systems (two-state and three-state alarms) on alarm compliance and diagnosis for two types of faults differing in complexity. We hypothesized three-state alarms would improve performance in alarm recognition and fault diagnoses over that of two-state alarms. We used sensitivity and criterion based on Signal Detection Theory to measure performance. We further hypothesized that operator trust would be highest when using three-state alarms. The findings from this research showed participants performed better and had more trust in three-state alarms compared to two-state alarms. Furthermore, these findings have significant theoretical implications and practical applications as they apply to improving the efficiency and effectiveness of nuclear power plant operations.
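The sensitivity and criterion measures used above come from Signal Detection Theory. A minimal sketch of how they are computed from response counts (the function name and the +0.5 log-linear correction are illustrative choices, not the study's analysis code):

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and response criterion (c) from a 2x2 count table.

    Applies a +0.5 correction to every cell so that perfect hit or
    false-alarm rates do not yield infinite z-scores.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)              # separation of distributions
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))   # response bias
    return d_prime, criterion
```

Higher d' indicates better discrimination between fault-present and fault-absent states; criterion near zero indicates unbiased responding.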

  14. Fault Diagnosis with Multi-State Alarms in a Nuclear Power Control Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Stuart A. Ragsdale; Roger Lew; Ronald L. Boring

    2014-09-01

    This research addresses how alarm systems can increase operator performance within nuclear power plant operations. The experiment examined the effects of two types of alarm systems (two-state and three-state alarms) on alarm compliance and diagnosis for two types of faults differing in complexity. We hypothesized the use of three-state alarms would improve performance in alarm recognition and fault diagnoses over that of two-state alarms. Sensitivity and criterion based on the Signal Detection Theory were used to measure performance. We further hypothesized that operator trust would be highest when using three-state alarms. The findings from this research showed participants performed better and had more trust in three-state alarms compared to two-state alarms. Furthermore, these findings have significant theoretical implications and practical applications as they apply to improving the efficiency and effectiveness of nuclear power plant operations.

  15. Accelerated simulation of stochastic particle removal processes in particle-resolved aerosol models

    Science.gov (United States)

    Curtis, J. H.; Michelotti, M. D.; Riemer, N.; Heath, M. T.; West, M.

    2016-10-01

    Stochastic particle-resolved methods have proven useful for simulating multi-dimensional systems such as composition-resolved aerosol size distributions. While particle-resolved methods have substantial benefits for highly detailed simulations, these techniques suffer from high computational cost, motivating efforts to improve their algorithmic efficiency. Here we formulate an algorithm for accelerating particle removal processes by aggregating particles of similar size into bins. We present the Binned Algorithm for particle removal processes and analyze its performance with application to the atmospherically relevant process of aerosol dry deposition. We show that the Binned Algorithm can dramatically improve the efficiency of particle removals, particularly for low removal rates, and that computational cost is reduced without introducing additional error. In simulations of aerosol particle removal by dry deposition in atmospherically relevant conditions, we demonstrate about 50-times increase in algorithm efficiency.
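The binning idea can be sketched as follows (an illustrative reimplementation, not the authors' code): particles are grouped into size bins, a per-bin majorant rate bounds every member's true removal rate, and candidates drawn per bin are thinned by rejection, so bins whose binomial draw is zero cost almost nothing at low removal rates:

```python
import numpy as np

def binned_removal(diameters, rate_fn, dt, n_bins=10, rng=None):
    """One timestep of stochastic particle removal using size bins.

    rate_fn maps particle diameters (array) to removal rates (array).
    Returns the indices of removed particles.
    """
    rng = rng or np.random.default_rng(0)
    diameters = np.asarray(diameters, dtype=float)
    edges = np.linspace(diameters.min(), diameters.max(), n_bins + 1)
    which = np.clip(np.searchsorted(edges, diameters, side="right") - 1,
                    0, n_bins - 1)
    removed = []
    for b in range(n_bins):
        members = np.flatnonzero(which == b)
        if members.size == 0:
            continue
        rates = rate_fn(diameters[members])
        p_max = 1.0 - np.exp(-rates.max() * dt)      # majorant probability
        k = rng.binomial(members.size, p_max)        # number of candidates
        if k == 0:
            continue                                 # cheap skip at low rates
        cand = rng.choice(members, size=k, replace=False)
        p_true = 1.0 - np.exp(-rate_fn(diameters[cand]) * dt)
        accept = rng.random(k) < p_true / p_max      # rejection thinning
        removed.extend(cand[accept].tolist())
    return removed
```

Drawing one binomial count per bin and then thinning reproduces independent per-particle Bernoulli removals exactly, while avoiding a per-particle test in bins where nothing happens.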

  16. Accelerated simulation of stochastic particle removal processes in particle-resolved aerosol models

    Energy Technology Data Exchange (ETDEWEB)

    Curtis, J.H. [Department of Atmospheric Sciences, University of Illinois at Urbana–Champaign, 105 S. Gregory St., Urbana, IL 61801 (United States); Michelotti, M.D. [Department of Computer Science, University of Illinois at Urbana–Champaign, 201 North Goodwin Avenue, Urbana, IL 61801 (United States); Riemer, N. [Department of Atmospheric Sciences, University of Illinois at Urbana–Champaign, 105 S. Gregory St., Urbana, IL 61801 (United States); Heath, M.T. [Department of Computer Science, University of Illinois at Urbana–Champaign, 201 North Goodwin Avenue, Urbana, IL 61801 (United States); West, M., E-mail: mwest@illinois.edu [Department of Mechanical Science and Engineering, University of Illinois at Urbana–Champaign, 1206 W. Green St., Urbana, IL 61801 (United States)

    2016-10-01

    Stochastic particle-resolved methods have proven useful for simulating multi-dimensional systems such as composition-resolved aerosol size distributions. While particle-resolved methods have substantial benefits for highly detailed simulations, these techniques suffer from high computational cost, motivating efforts to improve their algorithmic efficiency. Here we formulate an algorithm for accelerating particle removal processes by aggregating particles of similar size into bins. We present the Binned Algorithm for particle removal processes and analyze its performance with application to the atmospherically relevant process of aerosol dry deposition. We show that the Binned Algorithm can dramatically improve the efficiency of particle removals, particularly for low removal rates, and that computational cost is reduced without introducing additional error. In simulations of aerosol particle removal by dry deposition in atmospherically relevant conditions, we demonstrate about 50-times increase in algorithm efficiency.

  17. Accelerated molecular dynamics and equation-free methods for simulating diffusion in solids.

    Energy Technology Data Exchange (ETDEWEB)

    Deng, Jie; Zimmerman, Jonathan A.; Thompson, Aidan Patrick; Brown, William Michael (Oak Ridge National Laboratories, Oak Ridge, TN); Plimpton, Steven James; Zhou, Xiao Wang; Wagner, Gregory John; Erickson, Lindsay Crowl

    2011-09-01

    Many of the most important and hardest-to-solve problems related to the synthesis, performance, and aging of materials involve diffusion through the material or along surfaces and interfaces. These diffusion processes are driven by motions at the atomic scale, but traditional atomistic simulation methods such as molecular dynamics are limited to very short timescales on the order of the atomic vibration period (less than a picosecond), while macroscale diffusion takes place over timescales many orders of magnitude larger. We have completed an LDRD project with the goal of developing and implementing new simulation tools to overcome this timescale problem. In particular, we have focused on two main classes of methods: accelerated molecular dynamics methods that seek to extend the timescale attainable in atomistic simulations, and so-called 'equation-free' methods that combine a fine scale atomistic description of a system with a slower, coarse scale description in order to project the system forward over long times.

  18. Design, simulation and construction of quadrupole magnets for focusing electron beam in powerful industrial electron accelerator

    Directory of Open Access Journals (Sweden)

    S KH Mousavi

    2015-09-01

    In this paper, the design and simulation of quadrupole magnets, and of the electron beam optics through them, are studied using the CST Studio code. Based on the simulation results, a magnetic quadrupole was built for use in the beam line of the first Iranian high-power electron accelerator. To obtain a suitable magnetic field, the effects of core material, core geometry, and coil current variation on the quadrupole magnetic field were studied. For the quadrupole magnet test, an input beam with 10 MeV energy and 0.5 π mm mrad emittance was considered. The electron beam passing through the quadrupole magnet focuses in one plane and defocuses in the other. The optimum distance between two quadrupole magnets for low emittance was determined. The simulation results are in good agreement with the experimental results.

  19. 3-D Simulations of Plasma Wakefield Acceleration with Non-Idealized Plasmas and Beams

    Energy Technology Data Exchange (ETDEWEB)

    Deng, S.; Katsouleas, T.; Lee, S.; Muggli, P.; /Southern California U.; Mori, W.B.; Hemker, R.; Ren, C.; Huang, C.; Dodd, E.; Blue, B.E.; Clayton, C.E.; Joshi, C.; Wang; /UCLA; Decker, F.J.; Hogan, M.J.; Iverson, R.H.; O'Connell, C.; Raimondi, P.; Walz, D.; /SLAC

    2005-09-27

    3-D Particle-in-cell OSIRIS simulations of the current E-162 Plasma Wakefield Accelerator Experiment are presented in which a number of non-ideal conditions are modeled simultaneously. These include tilts on the beam in both planes, asymmetric beam emittance, beam energy spread and plasma inhomogeneities both longitudinally and transverse to the beam axis. The relative importance of the non-ideal conditions is discussed and a worst case estimate of the effect of these on energy gain is obtained. The simulation output is then propagated through the downstream optics, drift spaces and apertures leading to the experimental diagnostics to provide insight into the differences between actual beam conditions and what is measured. The work represents a milestone in the level of detail of simulation comparisons to plasma experiments.

  20. GPU-accelerated Red Blood Cells Simulations with Transport Dissipative Particle Dynamics

    CERN Document Server

    Blumers, Ansel L; Li, Zhen; Li, Xuejin; Karniadakis, George E

    2016-01-01

    Mesoscopic numerical simulations provide a unique approach for the quantification of chemical influences on red blood cell functionalities. The transport Dissipative Particle Dynamics (tDPD) method can lead to such effective multiscale simulations due to its ability to simultaneously capture mesoscopic advection, diffusion, and reaction. In this paper, we present a GPU-accelerated red blood cell simulation package based on a tDPD adaptation of our red blood cell model, which can correctly recover the cell membrane viscosity, elasticity, bending stiffness, and cross-membrane chemical transport. The package essentially processes all computational workloads in parallel by GPU, and it incorporates multi-stream scheduling and non-blocking MPI communications to improve inter-node scalability. Our code is validated for accuracy and compared against the CPU counterpart for speed. Strong scaling and weak scaling are also presented to characterize scalability. We observe a speedup of 10.1 on one GPU over all 16 c...

  1. Simulation of variation characteristics at thermostabilization of 27 GHz biperiodical accelerating structure

    Science.gov (United States)

    Kluchevskaya, Y. D.; Polozov, S. M.

    2016-07-01

    It was proposed to develop a biperiodical accelerating structure with an operating frequency of 27 GHz to assess the possibility of designing a compact accelerating structure for medical applications. A more careful simulation of the variation of its characteristics is necessary in this case, because the wavelength is 3-10 times shorter than in conventional structures of the 10 and 3 cm ranges. Results of such a study are presented in this article. In addition, the combination of high electromagnetic fields and long pulses at a high operating frequency leads to temperature increase in the structure, thermal deformation, and significant changes of the resonator characteristics, including the frequency of the RF pulse. Development results for three versions of a temperature stabilization system are also discussed.

  2. Validation of frequency and mode extraction calculations from time-domain simulations of accelerator cavities

    CERN Document Server

    Austin, T M; Ovtchinnikov, S; Werner, G R; Bellantoni, L

    2010-01-01

    The recently developed frequency extraction algorithm [G.R. Werner and J.R. Cary, J. Comp. Phys. 227, 5200 (2008)] that enables a simple FDTD algorithm to be transformed into an efficient eigenmode solver is applied to a realistic accelerator cavity modeled with embedded boundaries and Richardson extrapolation. Previously, the frequency extraction method was shown to be capable of distinguishing M degenerate modes by running M different simulations, and to permit mode extraction with minimal post-processing effort that only requires solving a small eigenvalue problem. Realistic calculations for an accelerator cavity are presented in this work to establish the validity of the method for realistic modeling scenarios and to illustrate the complexities of the computational validation process. The method is found to be able to extract the frequencies with error that is less than a part in 10^5. The corrected experimental and computed values differ by about one part in 10^4, which is accounted for (in largest part)...
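As a toy stand-in for the frequency-extraction step (the published method refines a small eigenvalue problem to resolve degenerate modes; the FFT peak-picking below is only a simplified analogue), one can recover dominant mode frequencies from a time-domain field record:

```python
import numpy as np

def extract_frequencies(signal, dt, n_peaks=2):
    """Pick dominant mode frequencies from a time-domain record.

    Windows the record, FFTs it, and returns the frequencies of the
    largest local maxima in the amplitude spectrum, sorted ascending.
    """
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), dt)
    # local maxima of the amplitude spectrum, ranked by amplitude
    peaks = [i for i in range(1, len(spec) - 1)
             if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
    peaks.sort(key=lambda i: spec[i], reverse=True)
    return sorted(float(freqs[i]) for i in peaks[:n_peaks])
```

The resolution of this naive approach is limited to one FFT bin (1/T for a record of length T), which is why the paper's small-eigenvalue-problem refinement is needed to reach part-in-10^5 accuracy.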

  3. Revisiting FPGA Acceleration of Molecular Dynamics Simulation with Dynamic Data Flow Behavior in High-Level Synthesis

    CERN Document Server

    Cong, Jason; Kianinejad, Hassan; Wei, Peng

    2016-01-01

    Molecular dynamics (MD) simulation is one of the past decade's most important tools for enabling biology scientists and researchers to explore human health and diseases. However, due to the computation complexity of the MD algorithm, it takes weeks or even months to simulate a comparatively simple biology entity on conventional multicore processors. The critical path in molecular dynamics simulations is the force calculation between particles inside the simulated environment, which has abundant parallelism. Among various acceleration platforms, FPGA is an attractive alternative because of its low power and high energy efficiency. However, due to its high programming cost using RTL, none of the mainstream MD software packages has yet adopted FPGA for acceleration. In this paper we revisit the FPGA acceleration of MD in high-level synthesis (HLS) so as to provide affordable programming cost. Our experience with the MD acceleration demonstrates that HLS optimizations such as loop pipelining, module duplication a...

  4. PEM fuel cell fault detection and identification using differential method: simulation and experimental validation

    Science.gov (United States)

    Frappé, E.; de Bernardinis, A.; Bethoux, O.; Candusso, D.; Harel, F.; Marchand, C.; Coquery, G.

    2011-05-01

    PEM fuel cell performance and lifetime strongly depend on the polymer membrane and MEA hydration. As the internal moisture is very sensitive to the operating conditions (temperature, stoichiometry, load current, water management…), keeping the optimal working point is complex and requires real-time monitoring. This article focuses on PEM fuel cell stack health diagnosis, and more precisely on stack fault detection monitoring. This paper intends to define new, simple and effective methods to get relevant information on usual faults or malfunctions occurring in the fuel cell stack. For this purpose, the authors present a fault detection method using a simple and non-intrusive on-line technique based on the space signature of the cell voltages. The authors aim to minimize the number of embedded sensors and instrumentation in order to get a precise, reliable and economic solution for a mass-market application. A very low number of sensors is indeed needed for this monitoring, and the associated algorithm can be implemented on-line. This technique is validated on a 20-cell PEMFC stack. It demonstrates that the developed method is particularly efficient in the flooding case. As a matter of fact, it directly uses the stack as a sensor, which enables quick feedback on its state of health.
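A much-simplified sketch of voltage-signature monitoring (illustrative only; the paper's differential method is more elaborate than this): compare each cell against the stack's mean voltage and flag outliers:

```python
def detect_cell_fault(cell_voltages, threshold=0.05):
    """Flag cells whose voltage departs from the stack's spatial signature.

    Each cell is compared against the stack mean, and cells deviating by
    more than `threshold` volts are reported as suspect (e.g. flooding
    or membrane drying). Returns the list of suspect cell indices.
    """
    mean_v = sum(cell_voltages) / len(cell_voltages)
    return [i for i, v in enumerate(cell_voltages) if abs(v - mean_v) > threshold]
```

Because only the existing cell-voltage taps are read, such a scheme needs no additional embedded sensors, in line with the paper's goal of a low-cost on-line diagnosis.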

  5. An application of numerical simulation techniques to improve the resolution of offshore fault kinematics using seafloor geodetic methods

    Science.gov (United States)

    Nishimura, Sou; Ando, Masataka; Tadokoro, Keiichi

    2005-08-01

    Geodetic measurements reveal a number of tectonic phenomena, such as coseismic and postseismic displacements of earthquakes and interplate coupling on plate interfaces. However, since geodetic measurements are limited to land, slip distribution is poorly resolved offshore, though well constrained in the landward areas. Due to the scarcity of offshore data, tectonic motion near trench axes has not been measured. Seafloor geodetic observations provide important information on offshore tectonics. Improved offshore resolution would allow determination of strain accumulation and release processes near trench axes. In this study, using numerical simulation, we discuss the potential for improvement of slip resolution in an offshore area using seafloor geodetic measurements. The plate interface along the Nankai trough is modeled by 36 planar fault segments, whose length and width, respectively, are set to 60 km and 50 km. Three hundred and seventy-five GPS observation sites on land and 10 seafloor sites aligned 60 km off the coast are used for the simulation. We carry out a checkerboard test and compare the estimated slip pattern with the given checkerboard pattern. Models that do not include seafloor sites generate large discrepancies in offshore deformation between the initial and estimated slip patterns, although there are similarities in coastal regions. This indicates poor resolution in offshore areas. When our model includes seafloor sites, the difference between the initial and estimated slip patterns decreases for most of the modeled fault segments. Comparison between these two cases suggests the potential of seafloor geodetic techniques to improve offshore resolution.
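The checkerboard test described above can be sketched as a linear inversion exercise (the matrix and pattern below are toy stand-ins for the Green's-function matrix relating fault-segment slips to displacements at the GPS and seafloor sites):

```python
import numpy as np

def checkerboard_test(G, pattern, noise=0.0, rng=None):
    """Assess slip resolution of a geodetic network via a checkerboard test.

    G maps fault-segment slips to observed displacements (d = G m). A
    synthetic alternating slip pattern generates data, the least-squares
    inverse recovers a model, and the per-segment error shows which
    segments the network resolves.
    """
    rng = rng or np.random.default_rng(0)
    d = G @ pattern + noise * rng.standard_normal(G.shape[0])
    m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
    return m_est, np.abs(m_est - pattern)
```

A segment to which no observation site is sensitive (a zero column of G) comes back with large recovery error, which is exactly the offshore-resolution problem the seafloor sites are meant to fix.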

  6. Accelerating Wright-Fisher Forward Simulations on the Graphics Processing Unit.

    Science.gov (United States)

    Lawrie, David S

    2017-09-07

    Forward Wright-Fisher simulations are powerful in their ability to model complex demography and selection scenarios, but suffer from slow execution on the Central Processor Unit (CPU), thus limiting their usefulness. However, the single-locus Wright-Fisher forward algorithm is exceedingly parallelizable, with many steps that are so-called "embarrassingly parallel," consisting of a vast number of individual computations that are all independent of each other and thus capable of being performed concurrently. The rise of modern Graphics Processing Units (GPUs) and programming languages designed to leverage the inherent parallel nature of these processors have allowed researchers to dramatically speed up many programs that have such high arithmetic intensity and intrinsic concurrency. The presented GPU Optimized Wright-Fisher simulation, or "GO Fish" for short, can be used to simulate arbitrary selection and demographic scenarios while running over 250-fold faster than its serial counterpart on the CPU. Even modest GPU hardware can achieve an impressive speedup of over two orders of magnitude. With simulations so accelerated, one can not only do quick parametric bootstrapping of previously estimated parameters, but also use simulated results to calculate the likelihoods and summary statistics of demographic and selection models against real polymorphism data, all without restricting the demographic and selection scenarios that can be modeled or requiring approximations to the single-locus forward algorithm for efficiency. Further, as many of the parallel programming techniques used in this simulation can be applied to other computationally intensive algorithms important in population genetics, GO Fish serves as an exciting template for future research into accelerating computation in evolution. GO Fish is part of the Parallel PopGen Package available at: http://dl42.github.io/ParallelPopGen/. Copyright © 2017 Lawrie.
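The single-locus forward algorithm is easy to state; a vectorized sketch (ours, not GO Fish) shows the embarrassingly parallel structure, with one binomial draw per locus per generation:

```python
import numpy as np

def wright_fisher(n_loci, pop_size, generations, s=0.0, p0=0.5, seed=0):
    """Vectorized single-locus Wright-Fisher forward simulation.

    Each of `n_loci` loci evolves independently, so a whole generation is
    one binomial draw across all loci -- the same embarrassingly parallel
    structure that maps onto GPU threads. `s` is a simple genic selection
    coefficient favoring the tracked allele.
    """
    rng = np.random.default_rng(seed)
    p = np.full(n_loci, p0)
    for _ in range(generations):
        w = p * (1.0 + s) / (1.0 + s * p)                      # selection step
        p = rng.binomial(2 * pop_size, w) / (2.0 * pop_size)   # drift step
    return p
```

On a GPU, the per-generation loop body becomes one kernel launch over all loci, which is where the 250-fold speedup reported above comes from.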

  7. Accelerated Monte Carlo simulation on the chemical stage in water radiolysis using GPU

    Science.gov (United States)

    Tian, Zhen; Jiang, Steve B.; Jia, Xun

    2017-04-01

    The accurate simulation of water radiolysis is an important step to understand the mechanisms of radiobiology and quantitatively test some hypotheses regarding radiobiological effects. However, the simulation of water radiolysis is highly time consuming, taking hours or even days to be completed by a conventional CPU processor. This time limitation hinders cell-level simulations for a number of research studies. We recently initiated efforts to develop gMicroMC, a GPU-based fast microscopic MC simulation package for water radiolysis. The first step of this project focused on accelerating the simulation of the chemical stage, the most time consuming stage in the entire water radiolysis process. A GPU-friendly parallelization strategy was designed to address the highly correlated many-body simulation problem caused by the mutual competitive chemical reactions between the radiolytic molecules. Two cases were tested, using a 750 keV electron and a 5 MeV proton incident in pure water, respectively. The time-dependent yields of all the radiolytic species during the chemical stage were used to evaluate the accuracy of the simulation. The relative differences between our simulation and the Geant4-DNA simulation were on average 5.3% and 4.4% for the two cases. Our package, executed on an Nvidia Titan black GPU card, successfully completed the chemical stage simulation of the two cases within 599.2 s and 489.0 s. As compared with Geant4-DNA that was executed on an Intel i7-5500U CPU processor and needed 28.6 h and 26.8 h for the two cases using a single CPU core, our package achieved a speed-up factor of 171.1-197.2.

  8. Numerical simulations of Hall-effect plasma accelerators on a magnetic-field-aligned mesh

    Science.gov (United States)

    Mikellides, Ioannis G.; Katz, Ira

    2012-10-01

    The ionized gas in Hall-effect plasma accelerators spans a wide range of spatial and temporal scales, and exhibits diverse physics some of which remain elusive even after decades of research. Inside the acceleration channel a quasiradial applied magnetic field impedes the current of electrons perpendicular to it in favor of a significant component in the E×B direction. Ions are unmagnetized and, arguably, of wide collisional mean free paths. Collisions between the atomic species are rare. This paper reports on a computational approach that solves numerically the 2D axisymmetric vector form of Ohm's law with no assumptions regarding the resistance to classical electron transport in the parallel relative to the perpendicular direction. The numerical challenges related to the large disparity of the transport coefficients in the two directions are met by solving the equations on a computational mesh that is aligned with the applied magnetic field. This approach allows for a large physical domain that extends more than five times the thruster channel length in the axial direction and encompasses the cathode boundary where the lines of force can become nonisothermal. It also allows for the self-consistent solution of the plasma conservation laws near the anode boundary, and for simulations in accelerators with complex magnetic field topologies. Ions are treated as an isothermal, cold (relative to the electrons) fluid, accounting for the ion drag in the momentum equation due to ion-neutral (charge-exchange) and ion-ion collisions. The density of the atomic species is determined using an algorithm that eliminates the statistical noise associated with discrete-particle methods. Numerical simulations are presented that illustrate the impact of the above-mentioned features on our understanding of the plasma in these accelerators.

  9. GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method

    Science.gov (United States)

    Gong, Chunye; Liu, Jie; Chi, Lihua; Huang, Haowei; Fang, Jingyue; Gong, Zhenghu

    2011-07-01

    Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great capability for solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution to the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with the vacuum boundary condition. The relative advantages and disadvantages of the GPU implementation, simulation on multiple GPUs, the programming effort, and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
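The sweep-plus-source-iteration structure that Sweep3D parallelizes can be illustrated in one dimension (an illustrative S4 diamond-difference sketch, not the Sweep3D code):

```python
import numpy as np

def sn_slab(n_cells=50, length=10.0, sigma_t=1.0, sigma_s=0.5, q=1.0,
            tol=1e-8, max_iters=500):
    """Source iteration for one-group S4 transport in a 1D slab, vacuum BCs.

    Each iteration sweeps every discrete ordinate across the mesh with
    diamond differencing, then rebuilds the isotropic scattering source
    from the new scalar flux. Returns the converged scalar flux.
    """
    mus = (-0.8611363, -0.3399810, 0.3399810, 0.8611363)  # S4 Gauss nodes
    wts = (0.3478548, 0.6521452, 0.6521452, 0.3478548)    # weights sum to 2
    dx = length / n_cells
    phi = np.zeros(n_cells)
    for _ in range(max_iters):
        src = 0.5 * (sigma_s * phi + q)        # isotropic source per unit mu
        phi_new = np.zeros(n_cells)
        for mu, w in zip(mus, wts):
            order = range(n_cells) if mu > 0 else range(n_cells - 1, -1, -1)
            a = abs(mu) / dx
            psi_in = 0.0                        # vacuum inflow at entering face
            for i in order:
                psi_c = (src[i] + 2.0 * a * psi_in) / (sigma_t + 2.0 * a)
                psi_in = 2.0 * psi_c - psi_in   # diamond-difference outflow
                phi_new[i] += w * psi_c
        if np.abs(phi_new - phi).max() < tol:
            return phi_new
        phi = phi_new
    return phi
```

In 3D the same sweeps acquire wavefront parallelism across the mesh diagonals, which is what the GPU implementation exploits.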

  10. Clouds and Precipitation Simulated by the US DOE Accelerated Climate Modeling for Energy (ACME)

    Science.gov (United States)

    Xie, S.; Lin, W.; Yoon, J. H.; Ma, P. L.; Rasch, P. J.; Ghan, S.; Zhang, K.; Zhang, Y.; Zhang, C.; Bogenschutz, P.; Gettelman, A.; Larson, V. E.; Neale, R. B.; Park, S.; Zhang, G. J.

    2015-12-01

    A new US Department of Energy (DOE) climate modeling effort is to develop an Accelerated Climate Model for Energy (ACME) to accelerate the development and application of fully coupled, state-of-the-art Earth system models for scientific and energy applications. ACME is a high-resolution climate model with 0.25-degree horizontal resolution and more than 60 vertical levels. It starts from the Community Earth System Model (CESM) with notable changes to its physical parameterizations and other components. This presentation provides an overview of the ACME model's capability in simulating clouds and precipitation and its sensitivity to convection schemes. Results using several state-of-the-art cumulus convection schemes, including the unified parameterizations being developed in the climate community, will be presented. These convection schemes are evaluated in a multi-scale framework including both short-range hindcasts and free-running climate simulations, against both satellite data and ground-based measurements. Running a climate model in short-range hindcasts has been proven to be an efficient way to understand model deficiencies. The analysis is focused on those systematic errors in cloud and precipitation simulations that are shared by many climate models. The goal is to understand which model deficiencies might be primarily responsible for these systematic errors.

  11. Stable boosted-frame simulations of laser-wakefield acceleration using Galilean coordinates

    Science.gov (United States)

    Lehe, Remi; Kirchen, Manuel; Godfrey, Brendan; Maier, Andreas; Vay, Jean-Luc

    2016-10-01

    While Particle-In-Cell (PIC) simulations of laser-wakefield acceleration are typically very computationally expensive, it is well-known that representing the system in a well-chosen Lorentz frame can reduce the computational cost by orders of magnitude. One of the limitations of this ``boosted-frame'' technique is the Numerical Cherenkov Instability (NCI) - a numerical instability that rapidly grows in the boosted frame and must be eliminated in order to obtain valid physical results. Several methods have been proposed to eliminate the NCI, but they introduce additional numerical corrections (e.g. heavy smoothing, unphysical modification of the dispersion relation, etc.) which could potentially alter the physics. By contrast, here we show that, for boosted-frame simulations of laser-wakefield acceleration, the NCI can be eliminated simply by integrating the PIC equations in Galilean coordinates (a.k.a. comoving coordinates), without additional numerical correction. Using this technique, we show excellent agreement between simulations in the laboratory frame and the Lorentz-boosted frame, with more than 2 orders of magnitude speedup in the latter case. Work supported by US-DOE Contract DE-AC02-05CH11231.

  12. Hierarchical Acceleration of Multilevel Monte Carlo Methods for Computationally Expensive Simulations in Reservoir Modeling

    Science.gov (United States)

    Zhang, G.; Lu, D.; Webster, C.

    2014-12-01

    The rational management of oil and gas reservoirs requires an understanding of their response to existing and planned schemes of exploitation and operation. Such understanding requires analyzing and quantifying the influence of subsurface uncertainties on predictions of oil and gas production. As the subsurface properties are typically heterogeneous, resulting in a large number of model parameters, the dimension-independent Monte Carlo (MC) method is usually used for uncertainty quantification (UQ). Recently, multilevel Monte Carlo (MLMC) methods were proposed as a variance reduction technique to improve the computational efficiency of MC methods in UQ. In this effort, we propose a new acceleration approach for the MLMC method to further reduce the total computational cost by exploiting model hierarchies. Specifically, for each model simulation on a newly added level of MLMC, we take advantage of an approximation of the model outputs constructed from simulations on previous levels to provide better initial states for new simulations, which improves efficiency by, e.g., reducing the number of iterations in linear system solving or the number of needed time steps. This is achieved by using mesh-free interpolation methods, such as Shepard interpolation and radial basis approximation. Our approach is applied to a highly heterogeneous reservoir model from the tenth SPE project. The results indicate that the accelerated MLMC can achieve the same accuracy as standard MLMC with a significantly reduced cost.
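The telescoping MLMC estimator underlying the approach above is compact; a sketch (illustrative only, with a toy sampler whose discretization bias halves per level) shows how coarse levels absorb most of the variance cheaply while fine-level corrections need only a few samples:

```python
import numpy as np

def mlmc_estimate(sampler, n_levels, samples_per_level, seed=0):
    """Multilevel Monte Carlo: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].

    `sampler(level, u)` evaluates the model at discretization `level`
    for random input `u`; the same `u` drives the fine and coarse
    evaluations of each correction so their difference has low variance.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for level, n in zip(range(n_levels), samples_per_level):
        corrections = []
        for _ in range(n):
            u = rng.standard_normal()
            fine = sampler(level, u)
            coarse = sampler(level - 1, u) if level > 0 else 0.0
            corrections.append(fine - coarse)
        total += float(np.mean(corrections))
    return total
```

The hierarchical acceleration in the paper adds a further saving on top of this: each fine-level solve is warm-started from an interpolant of coarser-level outputs, shrinking the cost per sample rather than the number of samples.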

  13. Mixed-field GCR Simulations for Radiobiological Research Using Ground Based Accelerators

    Science.gov (United States)

    Kim, Myung-Hee Y.; Rusek, Adam; Cucinotta, Francis A.

    2014-01-01

    Space radiation is comprised of a large number of particle types and energies, which have differential ionization power, from high energy protons to high charge and energy (HZE) particles and secondary neutrons produced by galactic cosmic rays (GCR). Ground based accelerators such as the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL) are used to simulate space radiation for radiobiology research and for dosimetry, electronics parts, and shielding testing, using mono-energetic beams of single ion species. As a tool to support research on new risk assessment models, we have developed a stochastic model of heavy ion beams and space radiation effects, the GCR Event-based Risk Model computer code (GERMcode). For radiobiological research on mixed-field space radiation, a new GCR simulator at NSRL is proposed. The NSRL-GCR simulator, which implements a rapid switching mode and higher energy beam extraction to 1.5 GeV/u, can integrate multiple ions into a single simulation to create a GCR Z-spectrum in major energy bins. After considering the GCR environment and the energy limitations of NSRL, a GCR reference field is proposed after extensive simulation studies using the GERMcode. The GCR reference field is shown to reproduce the Z and LET spectra of GCR behind shielding within 20% accuracy compared to simulated full GCR environments behind shielding. A major challenge for space radiobiology research is to relate chronic GCR exposure of up to 3 years to simulations with cell and animal models of human risks. We discuss possible approaches to map important biological time scales in experimental models using ground-based simulation with extended exposures of up to a few weeks and fractionation approaches at a GCR simulator.

  14. Particle acceleration with anomalous pitch angle scattering in 2D magnetohydrodynamic reconnection simulations

    Science.gov (United States)

    Borissov, A.; Kontar, E. P.; Threlfall, J.; Neukirch, T.

    2017-09-01

    The conversion of magnetic energy into other forms (such as plasma heating, bulk plasma flows, and non-thermal particles) during solar flares is one of the outstanding open problems in solar physics. It is generally accepted that magnetic reconnection plays a crucial role in these conversion processes. In order to achieve the rapid energy release required in solar flares, an anomalous resistivity, orders of magnitude higher than the Spitzer resistivity, is often used in magnetohydrodynamic (MHD) simulations of reconnection in the corona. The origin of Spitzer resistivity is Coulomb scattering, which becomes negligible at the high energies achieved by accelerated particles. As a result, simulations of particle acceleration in reconnection events are often performed in the absence of any interaction between the accelerated particles and the background plasma. This need not be the case for scattering associated with anomalous resistivity caused by turbulence within solar flares, as the higher resistivity implies an elevated scattering rate. We present results of test-particle calculations, with and without pitch angle scattering, subject to fields derived from MHD simulations of two-dimensional (2D) X-point reconnection. Scattering rates proportional to the ratio of the anomalous resistivity to the local Spitzer resistivity, as well as fixed rates, are considered. Pitch angle scattering that is independent of the anomalous resistivity produces higher maximum energies than are obtained without scattering. Scattering rates that depend on the local anomalous resistivity tend to produce fewer highly energised particles, due to weaker scattering in the separatrices, even though scattering in the current sheet may be stronger than for resistivity-independent scattering. Strong scattering also increases the number of particles exiting the computational box in the reconnection outflow region, as opposed to along the

  15. An FFT-accelerated time-domain multiconductor transmission line simulator

    KAUST Repository

    Bagci, Hakan

    2010-02-01

    A fast time-domain multiconductor transmission line (MTL) simulator for analyzing general MTL networks is presented. The simulator models the networks as homogeneous MTLs that are excited by external fields and driven/terminated/connected by potentially nonlinear lumped circuitry. It hybridizes an MTL solver derived from time-domain integral equations (TDIEs) in unknown wave coefficients for each MTL with a circuit solver rooted in modified nodal analysis equations in unknown node voltages and voltage-source currents for each circuit. These two solvers are rigorously interfaced at MTL and circuit terminals, and the resulting coupled system of equations is solved simultaneously for all MTL and circuit unknowns at each time step. The proposed simulator is amenable to hybridization, is fast Fourier transform (FFT)-accelerated, and is highly accurate: 1) It can easily be hybridized with TDIE-based field solvers (in a fully rigorous mathematical framework) for performing electromagnetic interference and compatibility analysis on electrically large and complex structures loaded with MTL networks. 2) It is accelerated by an FFT algorithm that calculates temporal convolutions of time-domain MTL Green functions in only O(N_t log2 N_t) rather than O(N_t^2) operations, where N_t is the number of time steps of the simulation. Moreover, the algorithm, which operates on temporal samples of MTL Green functions, is indifferent to the method used to obtain them. 3) It approximates MTL voltages, currents, and wave coefficients using high-order temporal basis functions. Various numerical examples, including the crosstalk analysis of a (twisted) unshielded twisted-pair (UTP)-CAT5 cable and the analysis of field coupling into UTP-CAT5 and RG-58 cables located on an airplane, are presented to demonstrate the accuracy, efficiency, and versatility of the proposed simulator. © 2010 IEEE.
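
    The FFT acceleration described above replaces the direct temporal convolution of Green-function samples with multiplication in the transform domain. A minimal sketch of the idea (illustrative only; in a marching-on-in-time TDIE scheme the convolution is typically evaluated blockwise to preserve causality, and the function name here is hypothetical):

```python
import numpy as np

def fft_convolve(g, x):
    """Linear convolution y[n] = sum_k g[k] * x[n-k] via zero-padded FFTs:
    O(N log N) work instead of the O(N^2) direct sum over time steps."""
    n = len(g) + len(x) - 1                    # length of the linear convolution
    nfft = 1 << (n - 1).bit_length()           # next power of two >= n
    y = np.fft.irfft(np.fft.rfft(g, nfft) * np.fft.rfft(x, nfft), nfft)
    return y[:n]

g = np.array([1.0, 0.5, 0.25])                 # toy samples of a Green function
x = np.array([1.0, -1.0, 2.0, 0.0, 1.0])       # toy excitation history
y = fft_convolve(g, x)                         # agrees with the direct sum
```

    Zero-padding to at least the linear-convolution length is what prevents the circular wrap-around of the FFT from corrupting the causal result.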

  16. Simulation studies of crystal-photodetector assemblies for the Turkish accelerator center particle factory electromagnetic calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Kocak, F., E-mail: fkocak@uludag.edu.tr

    2015-07-01

    The Turkish Accelerator Center Particle Factory detector will be constructed for the detection of the particles produced in the collision of a 1 GeV electron beam with a 3.6 GeV positron beam. PbWO4 and CsI(Tl) crystals are considered for the construction of the electromagnetic calorimeter part of the detector. The optical photons generated in these crystals are detected by avalanche or PIN photodiodes. The Geant4 simulation code has been used to estimate the energy resolution of the calorimeter for these crystal-photodiode assemblies.

  17. Numerical simulations of flow field in the target region of accelerator-driven subcritical reactor system

    CERN Document Server

    Chen Hai Yan

    2002-01-01

    Numerical simulations of the flow field were performed using the PHOENICS 3.2 code for the proposed spallation target of an accelerator-driven subcritical reactor system (ADS). The fluid motion in the target is axisymmetric and is treated as a 2-D steady-state problem. A body-fitted coordinate system (BFC) is therefore chosen and a two-dimensional mesh of the flow channel is generated. Results are presented for the ADS target under both upward and downward flow, and for the target with a diffuser plate installed below the window under downward flow.

  18. Simulation of Cascaded Longitudinal-Space-Charge Amplifier at the Fermilab Accelerator Science & Technology (FAST) Facility

    Energy Technology Data Exchange (ETDEWEB)

    Halavanau, A. [Northern Illinois U.; Piot, P. [Northern Illinois U.

    2015-12-01

    Cascaded Longitudinal Space Charge Amplifiers (LSCA) have been proposed as a mechanism to generate density modulation over a broad spectral range. The scheme has recently been demonstrated in the optical regime and has confirmed the production of broadband optical radiation. In this paper we investigate, via numerical simulations, the performance of a cascaded LSCA beamline at the Fermilab Accelerator Science & Technology (FAST) facility to produce broadband ultraviolet radiation. Our studies are carried out using elegant with an included tree-based gridless space-charge algorithm.

  19. High energy gain in three-dimensional simulations of light sail acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Sgattoni, A., E-mail: andrea.sgattoni@polimi.it [Dipartimento di Energia, Politecnico di Milano, Milano (Italy); CNR, Istituto Nazionale di Ottica, u.o.s. “Adriano Gozzini,” Pisa (Italy); Sinigardi, S. [CNR, Istituto Nazionale di Ottica, u.o.s. “Adriano Gozzini,” Pisa (Italy); Dipartimento di Fisica e Astronomia, Università di Bologna, Bologna (Italy); INFN sezione di Bologna, Bologna (Italy); Macchi, A. [CNR, Istituto Nazionale di Ottica, u.o.s. “Adriano Gozzini,” Pisa (Italy); Dipartimento di Fisica “Enrico Fermi,” Università di Pisa, Pisa (Italy)

    2014-08-25

    The dynamics of radiation pressure acceleration in the relativistic light sail regime is analysed by means of large-scale, three-dimensional (3D) particle-in-cell simulations. Unlike in other mechanisms, the 3D dynamics leads to faster and higher energy gain than in 1D or 2D geometry. This effect is caused by the local decrease of the target density due to transverse expansion, leading to a “lighter sail.” However, the rarefaction of the target leads to an earlier transition to transparency, limiting the energy gain. A transverse instability leads to a structured and inhomogeneous ion distribution.
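
    The light-sail scaling discussed above is often benchmarked against the ideal 1D perfect-mirror model, which can be integrated numerically in a few lines. This is a hedged sketch with toy parameters, not those of the 3D simulations:

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def light_sail_beta(I, sigma, steps, dt):
    """Integrate the ideal 1D (perfect-mirror) light-sail equation of motion,
    d(gamma*beta)/dt = (2*I / (sigma*c^2)) * (1 - beta) / (1 + beta),
    where I is the laser intensity [W/m^2] and sigma the areal mass density
    [kg/m^2]. The Doppler factor (1-beta)/(1+beta) reduces the thrust as the
    sail recedes from the laser."""
    u = 0.0                                  # u = gamma * beta
    for _ in range(steps):
        beta = u / np.sqrt(1.0 + u * u)
        u += dt * (2.0 * I / (sigma * C * C)) * (1.0 - beta) / (1.0 + beta)
    return u / np.sqrt(1.0 + u * u)

# Toy numbers: 1e24 W/m^2 on a 1e-5 kg/m^2 foil, integrated over 0.1 ps
beta_final = light_sail_beta(I=1e24, sigma=1e-5, steps=1000, dt=1e-16)
```

    The 3D effect reported in the abstract corresponds, in this simple model, to an effective decrease of sigma as the target expands transversely, which increases the acceleration rate.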

  20. Simulation study of accelerator based quasi-mono-energetic epithermal neutron beams for BNCT.

    Science.gov (United States)

    Adib, M; Habib, N; Bashter, I I; El-Mesiry, M S; Mansy, M S

    2016-01-01

    Filtered neutron techniques were applied to produce quasi-mono-energetic neutron beams in the energy range of 1.5-7.5 keV at the accelerator port, using the neutron spectrum generated by a Li(p,n)Be reaction. A simulation study was performed to characterize the filter components and transmitted beam lines. The features of the filtered beams are detailed in terms of the optimal thicknesses of the primary and additive components. A computer code named "QMNB-AS" was developed to carry out the required calculations. The filtered neutron beams had high purity and intensity, with low contamination from the accompanying thermal neutrons, fast neutrons and γ-rays.
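
    The filter design rests on Beer-Lambert attenuation through the stacked components. A minimal sketch of the transmission calculation, with illustrative material data rather than the paper's actual filter composition:

```python
import numpy as np

def filter_transmission(thicknesses_cm, number_densities, cross_sections_barn):
    """Neutron transmission through a stack of filter components at one energy:
    T = exp(-sum_i n_i * sigma_i * t_i)  (Beer-Lambert attenuation),
    with n_i in atoms/cm^3, sigma_i in barns (1 barn = 1e-24 cm^2), t_i in cm.
    Sweeping sigma_i(E) over energy yields the filtered-beam window."""
    total = sum(n * s * 1e-24 * t
                for t, n, s in zip(thicknesses_cm, number_densities,
                                   cross_sections_barn))
    return np.exp(-total)

# Toy two-component filter (illustrative numbers, not the paper's design)
T = filter_transmission([10.0, 2.0], [5e22, 3e22], [0.5, 2.0])
```

    A quasi-mono-energetic window arises where all components happen to have cross-section minima at the same energy, so T stays high there and collapses elsewhere.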

  1. Parallel, Multigrid Finite Element Simulator for Fractured/Faulted and Other Complex Reservoirs based on Common Component Architecture (CCA)

    Energy Technology Data Exchange (ETDEWEB)

    Milind Deo; Chung-Kan Huang; Huabing Wang

    2008-08-31

    Black-oil, compositional and thermal simulators have been developed to address different physical processes in reservoir simulation. A number of different types of discretization methods have also been proposed to address issues related to representing complex reservoir geometry. These methods are especially significant for fractured reservoirs, where the geometry can be particularly challenging. In this project, a general modular framework for reservoir simulation was developed, wherein the physical models were efficiently decoupled from the discretization methods. This made it possible to couple any discretization method with different physical models. Oil characterization methods are becoming increasingly sophisticated, and it is possible to construct geologically constrained models of faulted/fractured reservoirs. Discrete Fracture Network (DFN) simulation provides the option of performing multiphase calculations on spatially explicit, geologically feasible fracture sets. Multiphase DFN simulations of, and sensitivity studies on, a wide variety of fracture networks created using fracture creation/simulation programs were undertaken in the first part of this project. This involved creating interfaces to seamlessly convert the fracture characterization information into simulator input, grid the complex geometry, perform the simulations, and analyze and visualize the results. Benchmarking and comparison with conventional simulators were also a component of this work. After demonstrating that multiphase simulations can be carried out on complex fracture networks, the quantitative effects of the heterogeneity of fracture properties were evaluated. Reservoirs are populated with fractures of several different scales and properties. A multiscale fracture modeling study was undertaken and the effects of heterogeneity and storage on water displacement dynamics in fractured basements were investigated. In gravity-dominated systems, more oil could be recovered at a given pore

  2. Simulation of diatomic gas-wall interaction and accommodation coefficients for negative ion sources and accelerators

    Science.gov (United States)

    Sartori, E.; Brescaccin, L.; Serianni, G.

    2016-02-01

    Particle-wall interactions determine in different ways the operating conditions of plasma sources, ion accelerators, and beams operating in vacuum. For instance, a contribution to gas heating is given by ion neutralization at walls; beam losses and stray particle production, detrimental for high current negative ion systems such as beam sources for fusion, are caused by collisional processes with the residual gas, whose density profile is determined by the scattering of neutral particles at the walls. This paper shows that Molecular Dynamics (MD) studies at the nano-scale can provide accommodation parameters for gas-wall interactions, such as the momentum and energy accommodation coefficients: in non-isothermal flows (such as the neutral gas in the accelerator, coming from the plasma source), these affect the gas density gradients and influence the efficiency and losses of negative ion accelerators in particular. For ideal surfaces, the computation also provides the angular distribution of scattered particles. The classical MD method has been applied to the case of diatomic hydrogen molecules. Single collision events, against a frozen wall or a fully thermal lattice, have been simulated using probe molecules. Different modelling approximations are compared.

  3. Probing particle acceleration in lower hybrid turbulence via synthetic diagnostics produced by PIC simulations

    Science.gov (United States)

    Cruz, F.; Fonseca, R. A.; Silva, L. O.; Rigby, A.; Gregori, G.; Bamford, R. A.; Bingham, R.; Koenig, M.

    2016-10-01

    Efficient particle acceleration in astrophysical shocks can only be achieved in the presence of initial high energy particles. A candidate mechanism to provide an initial seed of energetic particles is lower hybrid turbulence (LHT). This type of turbulence is commonly excited in regions where space and astrophysical plasmas interact with large obstacles. Due to the nature of LH waves, energy can be resonantly transferred from ions (travelling perpendicular to the magnetic field) to electrons (travelling parallel to it) and the consequent motion of the latter in turbulent shock electromagnetic fields is believed to be responsible for the observed x-ray fluxes from non-thermal electrons produced in astrophysical shocks. Here we present PIC simulations of plasma flows colliding with magnetized obstacles showing the formation of a bow shock and the consequent development of LHT. The plasma and obstacle parameters are chosen in order to reproduce the results obtained in a recent experiment conducted at the LULI laser facility at Ecole Polytechnique (France) to study accelerated electrons via LHT. The wave and particle spectra are studied and used to produce synthetic diagnostics that show good qualitative agreement with experimental results. Work supported by the European Research Council (Accelerates ERC-2010-AdG 267841).

  4. Rapid acceleration leads to rapid weakening in earthquake-like laboratory experiments

    Science.gov (United States)

    Chang, Jefferson C.; Lockner, David A.; Reches, Z.

    2012-01-01

    After nucleation, a large earthquake propagates as an expanding rupture front along a fault. This front activates countless fault patches that slip by consuming energy stored in Earth’s crust. We simulated the slip of a fault patch by rapidly loading an experimental fault with energy stored in a spinning flywheel. The spontaneous evolution of strength, acceleration, and velocity indicates that our experiments are proxies of fault-patch behavior during earthquakes of moment magnitude (Mw) = 4 to 8. We show that seismically determined earthquake parameters (e.g., displacement, velocity, magnitude, or fracture energy) can be used to estimate the intensity of the energy release during an earthquake. Our experiments further indicate that high acceleration imposed by the earthquake’s rupture front quickens dynamic weakening by intense wear of the fault zone.

  5. Simulation and experimental study of the solid pulse forming lines for dielectric wall accelerator

    Institute of Scientific and Technical Information of China (English)

    ZHAO Quan-Tang; YUAN Ping; ZHANG Zi-Min; CAO Shu-Chun; SHEN Xiao-Kang; LIU Ming; JING Yi; ZHAO Hong-Wei

    2011-01-01

    Two types of pulse forming lines for the dielectric wall accelerator (DWA) were investigated preliminarily. Simulations with CST Microwave Studio illustrate the pulse forming process, which helps in understanding the voltage wave transmission and in optimizing the line parameters. Furthermore, the principle of the pulse forming process was verified by experiments, and some excellent pulse waveforms were obtained. In the experiments, a Blumlein line (BL) and a zero integral pulse (ZIP) forming line, constructed from aluminum foil, poly plates and an air-gap self-closing switch, were tested. The full width at half maximum (FWHM) of the waveform is 16 ns (BL) and 17 ns (ZIP line), and the formed pulse voltage amplitude is 5 kV (BL) and +2.2 kV/-1.6 kV (ZIP line). The experimental results coincide well with the simulation.

  6. R-leaping: Accelerating the stochastic simulation algorithm by reaction leaps

    Science.gov (United States)

    Auger, Anne; Chatelain, Philippe; Koumoutsakos, Petros

    2006-08-01

    A novel algorithm is proposed for the acceleration of the exact stochastic simulation algorithm by a predefined number of reaction firings (R-leaping) that may occur across several reaction channels. In the present approach, the numbers of reaction firings are correlated binomial distributions and the sampling procedure is independent of any permutation of the reaction channels. This enables the algorithm to efficiently handle large systems with disparate rates, providing substantial computational savings in certain cases. Several mechanisms for controlling the accuracy and the appearance of negative species are described. The advantages and drawbacks of R-leaping are assessed by simulations on a number of benchmark problems and the results are discussed in comparison with established methods.
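
    The correlated-binomial sampling at the heart of R-leaping can be sketched as follows. This is a minimal illustration under our own naming; the time-increment sampling, accuracy controls, and negative-population safeguards of the full algorithm are omitted:

```python
import numpy as np

def r_leap_firings(propensities, L, rng):
    """Distribute a predefined total of L reaction firings across channels.
    Channel j fires K_j ~ Binomial(remaining firings, a_j / remaining propensity),
    which is equivalent to a multinomial draw over channels, so the result is
    statistically independent of any permutation of the reaction channels."""
    a = np.asarray(propensities, dtype=float)
    remaining_L, remaining_a = L, a.sum()
    ks = np.zeros(len(a), dtype=int)
    for j, aj in enumerate(a):
        if remaining_L == 0:
            break
        p = min(aj / remaining_a, 1.0)       # guard against float round-off
        ks[j] = rng.binomial(remaining_L, p)
        remaining_L -= ks[j]
        remaining_a -= aj
    return ks

rng = np.random.default_rng(0)
ks = r_leap_firings([3.0, 1.0, 6.0], L=100, rng=rng)   # always sums to 100
```

    Because the total number of firings L is fixed in advance, the cost per leap is bounded regardless of how disparate the channel rates are, which is where the computational savings come from.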

  7. SIMULATION TOOL OF VELOCITY AND TEMPERATURE PROFILES IN THE ACCELERATED COOLING PROCESS OF HEAVY PLATES

    Directory of Open Access Journals (Sweden)

    Antônio Adel dos Santos

    2014-10-01

    The aim of this paper was to develop and apply mathematical models for determining the velocity and temperature profiles of heavy plates processed by accelerated cooling at Usiminas’ Plate Mill in Ipatinga. The development was based on the mathematical/numerical representation of physical phenomena occurring in the processing line. Production data from 3334 plates processed in the Plate Mill were used to validate the models. A user-friendly simulation tool was developed within the Visual Basic framework, taking into account all steel grades produced, the configuration parameters of the production line and these models. With the aid of this tool, the thermal profile through the plate thickness can be generated for any steel grade and dimensions, which allows the tuning of online process control models. The simulation tool has been very useful for the development of new steel grades, since the process variables can be related to the thermal profile, which affects the mechanical properties of the steels.

  8. GPU-accelerated simulation of colloidal suspensions with direct hydrodynamic interactions

    CERN Document Server

    Kopp, Michael

    2012-01-01

    Solvent-mediated hydrodynamic interactions between colloidal particles can significantly alter their dynamics. We discuss the implementation of Stokesian dynamics in leading approximation for streaming processors as provided by the compute unified device architecture (CUDA) of recent graphics processors (GPUs). Thereby, the simulation of explicit solvent particles is avoided and hydrodynamic interactions can easily be accounted for in already available, highly accelerated molecular dynamics simulations. Special emphasis is put on efficient memory access and numerical stability. The algorithm is applied to the periodic sedimentation of a cluster of four suspended particles. Finally, we investigate the runtime performance of generic memory access patterns of complexity $O(N^2)$ for various GPU algorithms relying on either hardware cache or shared memory.

  9. BrainFrame: A node-level heterogeneous accelerator platform for neuron simulations.

    Science.gov (United States)

    Smaragdos, Georgios; Chatzikonstantis, Georgios; Kukreja, Rahul; Sidiropoulos, Harry; Rodopoulos, Dimitrios; Sourdis, Ioannis; Al-Ars, Zaid; Kachris, Christoforos; Soudris, Dimitrios; de Zeeuw, Chris; Strydis, Christos

    2017-07-14

    Objective: The advent of High-Performance Computing (HPC) in recent years has led to its increasing use in brain study through computational models. The scale and complexity of such models are constantly increasing, leading to challenging computational requirements. Even though modern HPC platforms can often deal with such challenges, the vast diversity of the modeling field does not permit a homogeneous acceleration platform to effectively address the complete array of modeling requirements. Approach: In this paper we propose and build BrainFrame, a heterogeneous acceleration platform that incorporates three distinct acceleration technologies: an Intel Xeon-Phi CPU, an NVidia GP-GPU and a Maxeler Dataflow Engine. The PyNN software framework is also integrated into the platform. As a challenging proof of concept, we analyze the performance of BrainFrame on different experiment instances of a state-of-the-art neuron model, representing the Inferior-Olivary Nucleus using a biophysically meaningful, extended Hodgkin-Huxley representation. The model instances take into account not only the neuronal-network dimensions but also different network-connectivity densities, which can drastically affect the workload's performance characteristics. Main results: The combined use of different HPC fabrics demonstrated that BrainFrame is better able to cope with the modeling diversity encountered in realistic experiments. Our performance analysis shows clearly that the model directly affects performance and that all three technologies are required to cope with all the model use cases. Significance: The BrainFrame framework is designed to transparently configure and select the appropriate back-end accelerator technology for use per simulation run. The PyNN integration provides a familiar bridge to the vast number of models already available. Additionally, it gives a clear roadmap for extending the platform support beyond the proof of concept, with improved usability and directly

  10. Monte Carlo Simulation of Siemens ONCOR Linear Accelerator with BEAMnrc and DOSXYZnrc Code.

    Science.gov (United States)

    Jabbari, Keyvan; Anvar, Hossein Saberi; Tavakoli, Mohammad Bagher; Amouheidari, Alireza

    2013-07-01

    The Monte Carlo method is the most accurate method for simulation of radiation therapy equipment. Linear accelerators (linacs) are currently the most widely used machines in radiation therapy centers. In this work, Monte Carlo modeling of the Siemens ONCOR linear accelerator in 6 MV and 18 MV beams was performed. The results of the simulation were validated by measurements in water with an ionization chamber and extended dose range (EDR2) film in solid water. The linac's X-ray output is very sensitive to the properties of the primary electron beam. A square field size of 10 cm × 10 cm produced by the jaws was compared with ionization chamber and film measurements. Head simulation was performed with BEAMnrc and dose calculation with DOSXYZnrc for the film measurements; the 3ddose file produced by DOSXYZnrc was analyzed with a homemade MATLAB program. At 6 MV, the agreement between the dose calculated by Monte Carlo modeling and direct measurement was within 1%, even in the build-up region. At 18 MV, the agreement was within 1% except in the build-up region, where the difference was 2% (compared with 1% at 6 MV). The mean difference between measurements and Monte Carlo simulation is very small at both ONCOR X-ray energies. The results are highly accurate and can be used for many applications, such as patient dose calculation in treatment planning and in studies that model this linac with small field sizes, as in the intensity-modulated radiation therapy technique.

  11. Fault-Tolerant Robot Programming through Simulation with Realistic Sensor Models

    OpenAIRE

    Axel Waggershauser; Thomas Braeunl; Andreas Koestler

    2008-01-01

    We introduce a simulation system for mobile robots that allows a realistic interaction of multiple robots in a common environment. The simulated robots are closely modeled after robots from the EyeBot family and have an identical application programmer interface. The simulation supports driving commands at two levels of abstraction as well as numerous sensors such as shaft encoders, infrared distance sensors, and a compass. Simulation of on-board digital cameras via synthetic images allows the ...

  12. PEM Fuel Cells with Bio-Ethanol Processor Systems A Multidisciplinary Study of Modelling, Simulation, Fault Diagnosis and Advanced Control

    CERN Document Server

    Feroldi, Diego; Outbib, Rachid

    2012-01-01

    An apparently appropriate control scheme for PEM fuel cells may actually lead to an inoperable plant when it is connected to other unit operations in a process with recycle streams and energy integration. PEM Fuel Cells with Bio-Ethanol Processor Systems presents a control system design that provides basic regulation of the hydrogen production process with PEM fuel cells. It then goes on to construct a fault diagnosis system to improve plant safety above this control structure. PEM Fuel Cells with Bio-Ethanol Processor Systems is divided into two parts: the first covers fuel cells and the second discusses plants for hydrogen production from bio-ethanol to feed PEM fuel cells. Both parts give detailed analyses of modeling, simulation, advanced control, and fault diagnosis. They give an extensive, in-depth discussion of the problems that can occur in fuel cell systems and propose a way to control these systems through advanced control algorithms. A significant part of the book is also given over to computer-aid...

  13. Accelerated Molecular Dynamics Simulations with the AMOEBA Polarizable Force Field on Graphics Processing Units.

    Science.gov (United States)

    Lindert, Steffen; Bucher, Denis; Eastman, Peter; Pande, Vijay; McCammon, J Andrew

    2013-11-12

    The accelerated molecular dynamics (aMD) method has recently been shown to enhance the sampling of biomolecules in molecular dynamics (MD) simulations, often by several orders of magnitude. Here, we describe an implementation of the aMD method for the OpenMM application layer that takes full advantage of graphics processing unit (GPU) computing. The aMD method is shown to work in combination with the AMOEBA polarizable force field (AMOEBA-aMD), allowing the simulation of long-time-scale events with a polarizable force field. Benchmarks are provided to show that the AMOEBA-aMD method is efficiently implemented and produces accurate results in its standard parametrization. For the BPTI protein, we demonstrate that the protein structure described with AMOEBA remains stable even on the extended time scales accessed at high levels of acceleration. For the DNA repair metalloenzyme endonuclease IV, we show that the use of the AMOEBA force field is a significant improvement over fixed-charge models for describing the enzyme active site. The new AMOEBA-aMD method is publicly available (http://wiki.simtk.org/openmm/VirtualRepository) and promises to be interesting for studying complex systems that can benefit from both the use of a polarizable force field and enhanced sampling.
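
    The boost potential underlying aMD takes a simple closed form, the widely used Hamelberg-Mongan-McCammon expression. The parameter values below are toy numbers for illustration, not those of the AMOEBA-aMD benchmarks:

```python
def amd_boost(V, E, alpha):
    """Standard aMD boost: below the threshold energy E the potential is
    raised by dV = (E - V)^2 / (alpha + E - V), which flattens basins while
    leaving states with V >= E untouched; alpha tunes how aggressively the
    landscape is smoothed."""
    if V >= E:
        return 0.0
    return (E - V) ** 2 / (alpha + E - V)

# Deep minima receive a larger boost than states near the threshold, so
# barrier crossings are accelerated; canonical averages are recovered
# afterwards by reweighting each frame with exp(dV / kT).
boost_deep = amd_boost(V=-100.0, E=-90.0, alpha=20.0)   # (10)^2 / 30
boost_near = amd_boost(V=-92.0, E=-90.0, alpha=20.0)    # (2)^2 / 22
```
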

  14. Monte Carlo Simulation of a Linear Accelerator and Electron Beam Parameters Used in Radiotherapy

    Directory of Open Access Journals (Sweden)

    Mohammad Taghi Bahreyni Toossi

    2009-06-01

    Introduction: In recent decades, several Monte Carlo codes have been introduced for research and medical applications. These methods provide accurate and detailed calculation of particle transport from linear accelerators. The main drawback of Monte Carlo techniques is the extremely long computing time required to obtain a dose distribution with good statistical accuracy. Materials and Methods: In this study, the MCNP-4C Monte Carlo code was used to simulate the electron beams generated by a Neptun 10 PC linear accelerator. The depth dose curves, parameters related to depth dose, and beam profiles were calculated for 6, 8 and 10 MeV electron beams with different field sizes, and these data were compared with the corresponding measured values. The actual dosimetry was performed with a Welhofer-Scanditronix dose scanning system, semiconductor detectors and ionization chambers. Results: The results showed good agreement (better than 2%) between calculated and measured depth doses and lateral dose profiles for all energies and field sizes. Good agreement was also achieved between calculated and measured electron beam parameters such as E0, Rq, Rp and R50. Conclusion: The simulated model of the linac developed in this study is capable of computing electron beam data in a water phantom for different field sizes, and the resulting data can be used to predict dose distributions in other, more complex geometries.

  15. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    Science.gov (United States)

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach of using the Graphics Processing Unit (GPU) to accelerate the rescaling of single Monte Carlo runs, in order to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory is currently a limiting factor for GPU-based calculations. However, for the calculation of multiple diffuse reflectance values, our GPU-based approach can still lead to processing that is ~3400 times faster than other GPU-based approaches.
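
    The rescaling idea can be illustrated with the common "white Monte Carlo" variant, in which a single run stores detected-photon pathlengths and reweights them for any absorption coefficient via Beer-Lambert attenuation. This is a sketch of the principle only, under our own naming; the paper's GPU implementation and abstraction layers are not reproduced here:

```python
import numpy as np

def rescale_reflectance(pathlengths, mua):
    """Rescale a single absorption-free Monte Carlo run: each detected photon
    with total pathlength l is reweighted by exp(-mua * l) (Beer-Lambert),
    giving diffuse reflectance for any absorption coefficient mua [1/cm]
    without re-running the photon transport."""
    return np.mean(np.exp(-mua * np.asarray(pathlengths)))

paths = np.array([0.1, 0.5, 1.2, 2.0])    # cm; toy detected-photon pathlengths
R_low = rescale_reflectance(paths, 0.1)   # weak absorber
R_high = rescale_reflectance(paths, 1.0)  # strong absorber
```

    The reweighting is embarrassingly parallel over photons, which is why it maps so well onto a GPU.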

  16. Simulations of radiation pressure ion acceleration with the VEGA Petawatt laser

    Science.gov (United States)

    Stockhausen, Luca C.; Torres, Ricardo; Conejero Jarque, Enrique

    2016-09-01

    The Spanish Pulsed Laser Centre (CLPU) is a new high-power laser facility for users. Its main system, VEGA, is a CPA Ti:Sapphire laser which, in its final phase, will be able to reach Petawatt peak powers in pulses of 30 fs with a pulse contrast of 1:10^10 at 1 ps. The extremely low level of pre-pulse intensity makes this system ideally suited for studying the laser interaction with ultrathin targets. We have used the particle-in-cell (PIC) code OSIRIS to carry out 2D simulations of the acceleration of ions from ultrathin solid targets under the unique conditions provided by VEGA, with laser intensities up to 10^22 W cm^-2 impinging normally on 20-60 nm thick overdense plasmas, with different polarizations and pre-plasma scale lengths. We show how signatures of the radiation pressure-dominated regime, such as layer compression and bunch formation, are present only with circular polarization. By passively shaping the density gradient of the plasma, we demonstrate an enhancement in peak energy up to tens of MeV, together with monoenergetic features. By contrast, linear polarization at the same intensity level causes the target to blow up, resulting in much lower energies and broader spectra. One limiting factor of Radiation Pressure Acceleration is the development of Rayleigh-Taylor-like instabilities at the interface of the plasma and the photon fluid. This results in the formation of bubbles in the spatial profile of laser-accelerated proton beams. These structures have previously been evidenced both experimentally and theoretically. We have performed 2D simulations to characterize this bubble-like structure and report on its dependence on laser and target parameters.

  17. Comparative Simulations of 2D and 3D Mixed Convection Flow in a Faulted Basin: an Example from the Yarmouk Gorge, Israel and Jordan

    Science.gov (United States)

    Magri, F.; Inbar, N.; Raggad, M.; Möller, S.; Siebert, C.; Möller, P.; Kuehn, M.

    2014-12-01

    Lake Kinneret (Lake Tiberias or Sea of Galilee) is the most important freshwater reservoir in the Northern Jordan Valley. Simulations that couple fluid flow, heat and mass transport are built to understand the mechanisms responsible for the salinization of this important resource. Here the effects of permeability distribution on 2D and 3D convective patterns are compared. 2D simulations indicate that thermal brine in Haon and some springs in the Yarmouk Gorge (YG) are the result of mixed convection, i.e. the interaction between the regional flow from the bordering heights and thermally-driven flow (Magri et al., 2014). Calibration of the calculated temperature profiles suggests that the faults in Haon and the YG provide paths for ascending hot waters, whereas the fault in the Golan recirculates water between 1 and 2 km depth. At greater depths, faults induce 2D layered convection in the surrounding units. The 2D assumption for a faulted basin can oversimplify the system, and the conclusions might not be fully correct. The 3D results also point to mixed convection as the main mechanism for the thermal anomalies. However, in 3D the convective structures are more complex, allowing for longer flow paths and residence times. In the fault planes, hydrothermal convection develops in a finger regime, enhancing inflow and outflow of heat in the system. Hot springs can form locally at the surface along the fault trace. By contrast, the layered cells extending from the faults into the surrounding sediments are preserved and are similar to those simulated in 2D. The results are consistent with the theory of Zhao et al. (2003), which predicts that 2D and 3D patterns are equally likely to develop given the permeability and temperature ranges encountered in geothermal fields. The 3D approach is to be preferred over the 2D approach in order to capture all patterns of convective flow, particularly in the case of planar high-permeability regions such as faults.
Magri, F., et al., 2014

  18. Acceleration of a Particle-in-Cell Code for Space Plasma Simulations with OpenACC

    Science.gov (United States)

    Peng, Ivy Bo; Markidis, Stefano; Vaivads, Andris; Vencels, Juris; Deca, Jan; Lapenta, Giovanni; Hart, Alistair; Laure, Erwin

    2015-04-01

    We simulate space plasmas with the Particle-in-Cell (PIC) method, which uses computational particles to mimic electrons and protons in the solar wind and in the Earth's magnetosphere. The magnetic and electric fields are computed by solving Maxwell's equations on a computational grid. Each PIC simulation step has four major phases: interpolation of fields to particles, updating the location and velocity of each particle, interpolation of particles to the grid, and solving Maxwell's equations on the grid. We use the iPIC3D code, implemented in C++ using both MPI and OpenMP, for our case study. As of November 2014, heterogeneous systems using hardware accelerators such as Graphics Processing Units (GPUs) and Many Integrated Core (MIC) coprocessors continue to grow in the top 500 most powerful supercomputers worldwide. Scientific applications for numerical simulations need to adapt to using accelerators to achieve portability and scalability in the coming exascale systems. In our work, we conduct a case study of using OpenACC to offload the computation-intensive parts, the particle mover and the interpolation of particles to the grid, in the massively parallel Particle-in-Cell simulation code iPIC3D to multi-GPU systems. We use MPI for inter-node communication for halo exchange and communicating particles. We identify the parts most suitable for GPU acceleration by profiling with CrayPAT. We implemented a manual deep copy to address the challenges of porting C++ classes to the GPU. We document the necessary changes in the existing algorithms to adapt them for GPU computation. We present the challenges and findings as well as our methodology for porting a Particle-in-Cell code to multi-GPU systems using OpenACC.
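The four phases of a PIC step listed in the abstract can be illustrated with a minimal 1D electrostatic sketch; the grid size, particle count, and normalization below are illustrative and unrelated to iPIC3D's actual implementation:

```python
import numpy as np

# Minimal 1D electrostatic PIC step: deposit -> field solve -> gather -> push.
ng, n_particles, L, dt = 64, 10_000, 2 * np.pi, 0.1
dx = L / ng
rng = np.random.default_rng(1)
x = rng.uniform(0.0, L, n_particles)
v = rng.normal(0.0, 1.0, n_particles)

def deposit(x):
    """Interpolation of particles to the grid (cloud-in-cell)."""
    rho = np.zeros(ng)
    g = x / dx
    i = np.floor(g).astype(int) % ng
    f = g - np.floor(g)
    np.add.at(rho, i, 1.0 - f)
    np.add.at(rho, (i + 1) % ng, f)
    return rho / (n_particles / ng) - 1.0   # neutralizing ion background

def solve_field(rho):
    """Solve Poisson's equation for E on the periodic grid via FFT."""
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho)
    E_k = np.zeros_like(rho_k)
    E_k[1:] = rho_k[1:] / (1j * k[1:])      # E_k = rho_k / (i k); DC mode zero
    return np.fft.ifft(E_k).real

def gather(E, x):
    """Interpolation of grid fields to the particles."""
    g = x / dx
    i = np.floor(g).astype(int) % ng
    f = g - np.floor(g)
    return (1.0 - f) * E[i] + f * E[(i + 1) % ng]

# One step for electrons with q/m = -1 in normalized units.
rho = deposit(x)
E = gather(solve_field(rho), x)
v = v - E * dt
x = (x + v * dt) % L
```

In a production code the mover and the deposit loops are exactly the hot spots that profiling identifies and that OpenACC directives would offload.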

  19. Three-dimensional simulation of laser–plasma-based electron acceleration

    Indian Academy of Sciences (India)

    A Upadhyay; K Patel; B S Rao; P A Naik; P D Gupta

    2012-04-01

    A sequential three-dimensional (3D) particle-in-cell simulation code, PICPSI-3D, with a user-friendly graphical user interface (GUI) has been developed and used to study the interaction of plasma with ultrahigh intensity laser radiation. A case study of laser–plasma-based electron acceleration has been carried out to assess the performance of this code. Simulations have been performed for a Gaussian laser beam of peak intensity 5 × 10^19 W/cm^2 propagating through an underdense plasma of uniform density 1 × 10^19 cm^-3, and for a Gaussian laser beam of peak intensity 1.5 × 10^19 W/cm^2 propagating through an underdense plasma of uniform density 3.5 × 10^19 cm^-3. The electron energy spectrum has been evaluated at different time-steps during the propagation of the laser beam. When the plasma density is 1 × 10^19 cm^-3, simulations show that the electron energy spectrum forms a monoenergetic peak at ∼14 MeV, with an energy spread of ±7 MeV. On the other hand, when the plasma density is 3.5 × 10^19 cm^-3, simulations show that the electron energy spectrum forms a monoenergetic peak at ∼23 MeV, with an energy spread of ±7.5 MeV.

  20. GPU accelerated simulations of bluff body flows using vortex particle methods

    Science.gov (United States)

    Rossinelli, Diego; Bergdorf, Michael; Cottet, Georges-Henri; Koumoutsakos, Petros

    2010-05-01

    We present a GPU-accelerated solver for simulations of bluff body flows in 2D using a remeshed vortex particle method and the vorticity formulation of the Brinkman penalization technique to enforce boundary conditions. The efficiency of the method relies on fast and accurate particle-grid interpolations on GPUs for the remeshing of the particles and the computation of the field operators. The GPU implementation uses OpenGL to perform efficient particle-grid operations and a CUFFT-based solver for the Poisson equation with unbounded boundary conditions. The accuracy and performance of the GPU simulations and their relative advantages/drawbacks over CPU-based computations are reported for simulations of flows past an impulsively started circular cylinder at Reynolds numbers between 40 and 9500. The results indicate up to two orders of magnitude speed-up of the GPU implementation over the respective CPU implementations. The accuracy of the GPU computations depends on the Re number of the flow. For Re up to 1000 there is little difference between GPU and CPU calculations, but this agreement deteriorates (albeit remaining within 5% in drag calculations) for higher Re numbers, as the single precision of the GPU adversely affects the accuracy of the simulations.
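The particle-grid interpolation that the method's efficiency hinges on can be illustrated in 1D. The sketch below remeshes particle circulations onto a grid with the M4' (Monaghan) kernel commonly used in remeshed vortex methods; the grid size and particle data are made up for illustration:

```python
import numpy as np

def m4prime(x):
    """M4' interpolation kernel (4-point support, partition of unity)."""
    ax = np.abs(x)
    w = np.zeros_like(ax)
    inner = ax < 1.0
    outer = (ax >= 1.0) & (ax < 2.0)
    w[inner] = 1.0 - 2.5 * ax[inner] ** 2 + 1.5 * ax[inner] ** 3
    w[outer] = 0.5 * (2.0 - ax[outer]) ** 2 * (1.0 - ax[outer])
    return w

def remesh(xp, strength, ng, dx):
    """Deposit particle strengths onto a periodic 1D grid (4-point stencil)."""
    grid = np.zeros(ng)
    for j in range(-1, 3):                       # the 4 nearest grid points
        i = np.floor(xp / dx).astype(int) + j
        w = m4prime(xp / dx - i)
        np.add.at(grid, i % ng, w * strength)
    return grid

rng = np.random.default_rng(5)
xp = rng.uniform(1.0, 9.0, 1000)                 # particle positions
gamma = rng.normal(0.0, 1.0, 1000)               # particle circulations
grid = remesh(xp, gamma, ng=64, dx=10.0 / 64)
```

Because M4' is a partition of unity, the total circulation is conserved by the remeshing; on the GPU this scatter is what the OpenGL particle-grid operations implement in parallel.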

  1. The DOE Accelerated Strategic Computing Initiative: Challenges and opportunities for predictive materials simulation capabilities

    Science.gov (United States)

    Mailhiot, Christian

    1998-05-01

    In response to the unprecedented national security challenges emerging from the end of nuclear testing, the Defense Programs of the Department of Energy has developed a long-term strategic plan based on a vigorous Science-Based Stockpile Stewardship (SBSS) program. The main objective of the SBSS program is to ensure confidence in the performance, safety, and reliability of the stockpile on the basis of a fundamental science-based approach. A central element of this approach is the development of predictive, ‘full-physics’, full-scale computer simulation tools. As a critical component of the SBSS program, the Accelerated Strategic Computing Initiative (ASCI) was established to provide the required advances in computer platforms and to enable predictive, physics-based simulation capabilities. In order to achieve the ASCI goals, fundamental problems in the fields of computer and physical sciences of great significance to the entire scientific community must be successfully solved. Foremost among the key elements needed to develop predictive simulation capabilities is the development of improved physics-based materials models. We indicate some of the materials theory, modeling, and simulation challenges and illustrate how the ASCI program will enable both the hardware and the software tools necessary to advance the state-of-the-art in the field of computational condensed matter and materials physics.

  2. Simulations of slow positron production using a low energy electron accelerator

    CERN Document Server

    O'Rourke, B E; Kinomura, A; Kuroda, R; Minehara, E; Ohdaira, T; Oshima, N; Suzuki, R

    2011-01-01

    Monte Carlo simulations of slow positron production via energetic electron interaction with a solid target have been performed. The aim of the simulations was to determine the expected slow positron beam intensity from a low energy, high current electron accelerator. By simulating (a) the fast positron production from a tantalum electron-positron converter and (b) the positron depth deposition profile in a tungsten moderator, the slow positron production probability per incident electron was estimated. Normalizing the calculated result to the measured slow positron yield at the present AIST LINAC, the expected slow positron yield as a function of energy was determined. For an electron beam energy of 5 MeV (10 MeV) and current 240 μA (30 μA), production of a slow positron beam of intensity 5 × 10^6 s^-1 is predicted. The simulation also calculates the average energy deposited in the converter per electron, allowing an estimate of the beam heating at a given electron energy and current. For...

  3. Fourier analysis of Solar atmospheric numerical simulations accelerated with GPUs (CUDA).

    Science.gov (United States)

    Marur, A.

    2015-12-01

    Solar dynamics from the convection zone creates a variety of waves that may propagate through the solar atmosphere. These waves are important in facilitating the energy transfer between the sun's surface and the corona as well as in propagating energy throughout the solar system. How and where these waves are dissipated remains an open question. Advanced 3D numerical simulations have furthered our understanding of the processes involved. Fourier transforms are used to understand the nature of the waves by finding their frequencies and wavelengths through the simulated atmosphere, as well as the nature of their propagation and where they are dissipated. In order to analyze the different waves produced by the aforementioned simulations and models, Fast Fourier Transform algorithms will be applied. Since processing the multitude of different layers of the simulations (of the order of several 100^3 grid points) would be time-intensive and inefficient on a CPU, CUDA, a computing architecture that harnesses the power of the GPU, will be used to accelerate the calculations.
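As a toy version of the intended analysis, the dominant frequency of a simulated time series can be recovered with an FFT before worrying about GPU acceleration. The 3 mHz signal below loosely mimics solar five-minute oscillations; the cadence and noise level are illustrative:

```python
import numpy as np

# Synthetic atmospheric time series: a 3 mHz oscillation plus noise,
# sampled at a 10 s cadence (values chosen for illustration only).
dt = 10.0
t = np.arange(0, 4096) * dt
signal = np.sin(2 * np.pi * 3e-3 * t) \
    + 0.1 * np.random.default_rng(0).normal(size=t.size)

# Recover the dominant frequency from the amplitude spectrum.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=dt)
peak_freq = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
```

The same transform applied per grid point across the 3D cube is what becomes expensive on a CPU and maps naturally onto batched CUDA FFTs.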

  4. Dynamic earthquake sequence simulations with fault constitutive law accounting for brittle-plastic transition and pressure solution-precipitation creep

    Science.gov (United States)

    Noda, Hiroyuki; Shimamoto, Toshihiko

    2015-04-01

    Fault mechanical behavior is presumably dictated by a pressure-sensitive friction law in the brittle regime where cataclastic deformation dominates, and by a pressure-insensitive flow law in the plastic regime where mylonites are generated. A fault constitutive law in the transitional regime is of great importance in considering earthquake cycles, as evidenced by field observations of repeated brittle and ductile deformations [e.g., Sibson 1980]. Shimamoto and Noda [2014] proposed an empirical method of connecting the friction law and the flow law without introducing a new parameter, and demonstrated 2-D dynamic earthquake sequence simulations for a strike-slip fault [e.g., Lapusta et al., 2000] with the friction-to-flow law. A logarithmic rate- and state-dependent friction law (aging law) and a rate- and state-dependent flow law (power law) [Noda and Shimamoto, 2010] with a quartzite steady-state flow law (power exponent n = 4) [Hirth et al., 2001] were adopted for the friction law and the flow law, respectively. Our numerical models are realizations of conceptual fault models [e.g., Scholz, 1988]. "Christmas tree" stress profiles appear as a result of the evolution of the system, and fluctuate with time. During the interseismic periods, creep fronts penetrated into the locked depth, slow slip events were generated, and then nucleation of dynamic rupture took place in either the shallower or the deeper creeping region. The dynamic ruptures spanned the locked depth, reaching the ground surface and extending downwards even deeper than the depth of maximum pre-stress, where the deformation mode was preseismically in the transitional regime and an S-C mylonitic texture was expected [Shimamoto, 1989]. The coseismic deformation was in the frictional regime because the pure flow law predicts tremendously high flow stress at high strain rates, and "the weaker wins". Our simulations reproduced the repeated overprinting of brittle and ductile deformations.
We attempt here to include pressure

  5. Sensitivity of tsunami wave profiles and inundation simulations to earthquake slip and fault geometry for the 2011 Tohoku earthquake

    KAUST Repository

    Goda, Katsuichiro

    2014-09-01

    In this study, we develop stochastic random-field slip models for the 2011 Tohoku earthquake and conduct a rigorous sensitivity analysis of tsunami hazards with respect to the uncertainty of earthquake slip and fault geometry. Synthetic earthquake slip distributions generated from the modified Mai-Beroza method captured key features of inversion-based source representations of the mega-thrust event, which were calibrated against rich geophysical observations of this event. Using original and synthesised earthquake source models (varied for strike, dip, and slip distributions), tsunami simulations were carried out and the resulting variability in tsunami hazard estimates was investigated. The results highlight significant sensitivity of the tsunami wave profiles and inundation heights to the coastal location and the slip characteristics, and indicate that earthquake slip characteristics are a major source of uncertainty in predicting tsunami risks due to future mega-thrust events.

  6. On-X Heart Valve Prosthesis: Numerical Simulation of Hemodynamic Performance in Accelerating Systole.

    Science.gov (United States)

    Mirkhani, Nima; Davoudi, Mohammad Reza; Hanafizadeh, Pedram; Javidi, Daryoosh; Saffarian, Niloofar

    2016-09-01

    Numerical simulation of bileaflet mechanical heart valves (BMHVs) has been of interest to many researchers due to its capability of predicting hemodynamic performance. Many studies have attempted to simulate this three-dimensional complex flow in order to analyze the effect of different valve designs on the blood flow pattern. However, simplified models and prescribed motion for the leaflets were utilized. In this paper, transient complex blood flow in the ascending aorta has been investigated in a realistic model by fully coupled simulation. The geometry model for the aorta and the replaced valve is constructed based on medical images and extracted point clouds. A 23-mm On-X Medical BMHV, a new-generation design, has been selected for the flow field analysis. The two-way coupled simulation is conducted throughout the accelerating phase in order to obtain valve dynamics during the opening process. The complex flow field in the hinge recess is captured precisely for all leaflet positions, and recirculating zones and elevated shear stress areas have been observed. Results indicate that the On-X valve yields a relatively low transvalvular pressure gradient, which would lower cardiac external work. Furthermore, the converging inlet leads to a more uniform flow and consequently fewer turbulent eddies. However, the leaflets cannot open fully due to the middle diffuser-shaped orifice. In addition, the asymmetric butterfly-shaped hinge design and converging orifice lead to better hemodynamic performance. With the help of two-way fluid-solid interaction simulation, the leaflet angle follows the experimental trends more precisely than the prescribed motion in previous 3D simulations.

  7. Magnetohydrodynamic simulation study of plasma jets and plasma-surface contact in coaxial plasma accelerators

    Science.gov (United States)

    Subramaniam, Vivek; Raja, Laxminarayan L.

    2017-06-01

    Recent experiments by Loebner et al. [IEEE Trans. Plasma Sci. 44, 1534 (2016)] studied the effect of a hypervelocity jet emanating from a coaxial plasma accelerator incident on target surfaces in an effort to mimic the transient loading created during edge localized mode disruption events in fusion plasmas. In this paper, we present a magnetohydrodynamic (MHD) numerical model to simulate plasma jet formation and plasma-surface contact in this coaxial plasma accelerator experiment. The MHD system of equations is spatially discretized using a cell-centered finite volume formulation. The temporal discretization is performed using a fully implicit backward Euler scheme and the resultant stiff system of nonlinear equations is solved using the Newton method. The numerical model is employed to obtain some key insights into the physical processes responsible for the generation of extreme stagnation conditions on the target surfaces. Simulations of the plume (without the target plate) are performed to isolate and study phenomena such as the magnetic pinch effect that is responsible for launching pressure pulses into the jet free stream. The simulations also yield insights into the incipient conditions responsible for producing the pinch, such as the formation of conductive channels. The jet-target impact studies indicate the existence of two distinct stages involved in the plasma-surface interaction. A fast transient stage characterized by a thin normal shock transitions into a pseudo-steady stage that exhibits an extended oblique shock structure. A quadratic scaling of the pinch and stagnation conditions with the total current discharged between the electrodes is in qualitative agreement with the results obtained in the experiments. This also illustrates the dominant contribution of the magnetic pressure term in determining the magnitude of the quantities of interest.

  8. Accelerating Monte Carlo simulations of radiation therapy dose distributions using wavelet threshold de-noising.

    Science.gov (United States)

    Deasy, Joseph O; Wickerhauser, M Victor; Picard, Mathieu

    2002-10-01

    The Monte Carlo dose calculation method works by simulating individual energetic photons or electrons as they traverse a digital representation of the patient anatomy. However, Monte Carlo results fluctuate until a large number of particles are simulated. We propose wavelet threshold de-noising as a postprocessing step to accelerate convergence of Monte Carlo dose calculations. A sampled rough function (such as Monte Carlo noise) gives wavelet transform coefficients which are more nearly equal in amplitude than those of a sampled smooth function. Wavelet hard-threshold de-noising sets to zero those wavelet coefficients which fall below a threshold; the image is then reconstructed. We implemented the computationally efficient 9/7 biorthogonal filters in the C language. Transform results were averaged over transform origin selections to reduce artifacts. A method for selecting best threshold values is described. The algorithm requires about 336 floating point arithmetic operations per dose grid point. We applied wavelet threshold de-noising to two two-dimensional dose distributions: a dose distribution generated by 10 MeV electrons incident on a water phantom with a step-heterogeneity, and a slice from a lung heterogeneity phantom. Dose distributions were simulated using the Integrated Tiger Series Monte Carlo code. We studied threshold selection, resulting dose image smoothness, and resulting dose image accuracy as a function of the number of source particles. For both phantoms, with a suitable value of the threshold parameter, voxel-to-voxel noise was suppressed with little introduction of bias. The roughness of wavelet de-noised dose distributions (according to a Laplacian metric) was nearly independent of the number of source electrons, though the accuracy of the de-noised dose image improved with increasing numbers of source electrons. 
We conclude that wavelet shrinkage de-noising is a promising method for effectively accelerating Monte Carlo dose calculations.
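Hard thresholding itself is simple to state in code. The sketch below uses a single-level Haar transform for self-containment rather than the 9/7 biorthogonal filters of the paper; the piecewise-flat "dose" profile, noise level, and threshold are illustrative:

```python
import numpy as np

def haar_denoise(signal, threshold):
    """Single-level Haar hard-threshold de-noising of a 1D signal."""
    a = (signal[0::2] + signal[1::2]) / np.sqrt(2)   # approximation coeffs
    d = (signal[0::2] - signal[1::2]) / np.sqrt(2)   # detail coeffs
    d[np.abs(d) < threshold] = 0.0                   # hard threshold
    even = (a + d) / np.sqrt(2)                      # inverse transform
    odd = (a - d) / np.sqrt(2)
    out = np.empty_like(signal)
    out[0::2], out[1::2] = even, odd
    return out

rng = np.random.default_rng(2)
clean = np.repeat([1.0, 3.0, 2.0, 0.5], 64)          # piecewise-flat "dose"
noisy = clean + 0.2 * rng.normal(size=clean.size)    # Monte Carlo-like noise
denoised = haar_denoise(noisy, threshold=0.5)
```

Because the detail coefficients of a smooth (here piecewise-flat) signal are small while noise spreads uniformly across coefficients, zeroing sub-threshold details suppresses noise with little bias, which is the mechanism the abstract describes.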

  9. Convergence acceleration for partitioned simulations of the fluid-structure interaction in arteries

    Science.gov (United States)

    Radtke, Lars; Larena-Avellaneda, Axel; Debus, Eike Sebastian; Düster, Alexander

    2016-06-01

    We present a partitioned approach to fluid-structure interaction problems arising in analyses of blood flow in arteries. Several strategies to accelerate the convergence of the fixed-point iteration resulting from the coupling of the fluid and the structural sub-problem are investigated. The Aitken relaxation and variants of the interface quasi-Newton least-squares method are applied to different test cases. A hybrid of two well-known variants of the interface quasi-Newton least-squares method is found to perform best. The test cases cover the typical boundary value problem faced when simulating the fluid-structure interaction in arteries, including a strong added mass effect and a wet surface which accounts for a large part of the overall surface of each sub-problem. A rubber-like neo-Hookean material model and a soft-tissue-like Holzapfel-Gasser-Ogden material model are used to describe the artery wall and are compared in terms of stability and computational expense. To avoid any kind of locking, high-order finite elements are used to discretize the structural sub-problem. The finite volume method is employed to discretize the fluid sub-problem. We investigate the influence of mass-proportional damping and the material model chosen for the artery on the performance and stability of the acceleration strategies as well as on the simulation results. To show the applicability of the partitioned approach to clinically relevant studies, the hemodynamics in a pathologically deformed artery are investigated, taking the findings of the test case simulations into account.
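Aitken relaxation for a partitioned fixed-point iteration can be sketched independently of any flow or structural solver. In the sketch below, a toy contractive linear map stands in for the composed fluid-structure interface operator; the relaxation update is the standard Aitken formula based on successive interface residuals:

```python
import numpy as np

def aitken_fixed_point(g, x0, omega0=0.5, tol=1e-9, max_iter=200):
    """Solve x = g(x) by relaxed fixed-point iteration with Aitken's
    dynamic relaxation factor omega."""
    x = np.asarray(x0, dtype=float)
    r_old = None
    omega = omega0
    for k in range(max_iter):
        r = g(x) - x                       # interface residual
        if np.linalg.norm(r) < tol:
            return x, k
        if r_old is not None:
            dr = r - r_old
            denom = float(dr @ dr)
            if denom > 0.0:                # Aitken update of omega
                omega = -omega * float(r_old @ dr) / denom
        x = x + omega * r                  # relaxed update
        r_old = r
    return x, max_iter

# Toy contractive map with known fixed point x* = [1, 2].
A = np.array([[0.5, 0.2], [0.1, 0.4]])
x_true = np.array([1.0, 2.0])
b = x_true - A @ x_true
x_star, iters = aitken_fixed_point(lambda x: A @ x + b, np.zeros(2))
```

In a real FSI solver, `g` would map an interface displacement through the fluid and structural sub-solvers back to a displacement, and the same residual-based omega update applies unchanged.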

  10. Simulation of accelerator transmutation of long-lived nuclear wastes

    Energy Technology Data Exchange (ETDEWEB)

    Wolff-Bacha, Fabienne [Paris-11 Univ., 91 - Orsay (France)]

    1997-07-09

    The incineration of minor actinides in a hybrid reactor (i.e. a reactor coupled with an accelerator) could reduce their radioactivity. The scientific tool used for the simulations, the GEANT code implemented on a parallel computer, was first validated on thin and thick targets and by simulation of a pressurized water reactor, a fast reactor like Superphenix, and a molten salt fast hybrid reactor, 'ATP'. Simulating a thermal hybrid reactor seems to indicate the non-negligible presence of neutrons which diffuse back to the accelerator. In spite of simplifications, the simulation of a molten lead fast hybrid reactor (such as the CERN Fast Energy Amplifier) might indicate difficulties with the radial power distribution in the core, the lifetime of the window, and the risk of activated air leaks. Finally, we propose a thermoelectric compact hybrid reactor, PRAHE (small atomic board hybrid reactor), whose principle allows a neutron coupling between the accelerator and the reactor. (author) 270 refs., 91 figs., 31 tabs.

  11. Frictional and sealing behavior of simulated anhydrite fault gouge : Effects of CO2 and implications for fault stability and caprock integrity

    NARCIS (Netherlands)

    Pluymakers, A.M.H.

    2015-01-01

    To limit climate change, humanity needs to limit atmospheric CO2 concentrations, hence reduce CO2 emissions. An attractive option to do this involves capture at industrial sources followed by storage in depleted oil and gas reservoirs. In such reservoir systems, faults cutting the topseal are consid

  12. Fault Detection Based on Tracking Differentiator Applied on the Suspension System of Maglev Train

    Directory of Open Access Journals (Sweden)

    Hehong Zhang

    2015-01-01

    A fault detection method based on an optimized tracking differentiator is introduced and applied to the acceleration sensor of the suspension system of a maglev train. It detects faults in the acceleration sensor by comparing the integral of the acceleration signal with the speed signal obtained from the optimized tracking differentiator. This paper optimizes the control variable when the states lie within or beyond the two-step reachable region to improve the performance of the approximate linear discrete tracking differentiator. Fault-tolerant control is achieved by feedback based on the speed signal acquired from the optimized tracking differentiator when the acceleration sensor fails. The simulation and experiment results show the practical usefulness of the presented method.
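The detection principle, comparing the integral of the measured acceleration against an independent speed signal, can be sketched as follows. A plain cumulative sum stands in for the optimized tracking differentiator, and the injected bias fault and threshold are illustrative:

```python
import numpy as np

# Residual-based sensor fault detection sketch: integrate the measured
# acceleration and compare it with a reference speed signal; a growing
# residual flags the accelerometer fault.
dt = 0.01
t = np.arange(0, 10, dt)
true_acc = 0.5 * np.sin(t)
speed = np.cumsum(true_acc) * dt                 # reference speed signal

measured_acc = true_acc.copy()
measured_acc[500:] += 2.0                        # inject a bias fault at t = 5 s

acc_integral = np.cumsum(measured_acc) * dt      # integral of faulty sensor
residual = np.abs(acc_integral - speed)
fault_detected = residual > 0.1                  # illustrative threshold
first_fault_index = int(np.argmax(fault_detected))
```

A bias fault integrates into a linearly growing residual, so even a small sensor offset crosses the threshold shortly after onset; the tracking differentiator in the paper supplies a cleaner speed estimate than the raw cumulative sum used here.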

  13. Simulation Research of Fault Model of Detecting Rotor Dynamic Eccentricity in Brushless DC Motor Based on Motor Current Signature Analysis

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The Brushless Direct Current (BLDC) motor is widely used in the aerospace industry, CNC machines, and servo systems that require high control accuracy. Once faults occur in the motor, they can cause great damage to the whole system. Mechanical faults are common in electric machines, and account for up to 50%-60% of faults. Approximately 80% of mechanical faults lead to eccentricity. So it is necessary to monitor the health condition of the motor to ensure that faults can be detected early and measures taken to improve reliability.

  14. Computational Materials Science and Chemistry: Accelerating Discovery and Innovation through Simulation-Based Engineering and Science

    Energy Technology Data Exchange (ETDEWEB)

    Crabtree, George [Argonne National Lab. (ANL), Argonne, IL (United States); Glotzer, Sharon [University of Michigan; McCurdy, Bill [University of California Davis; Roberto, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2010-07-26

    This report is based on an SC Workshop on Computational Materials Science and Chemistry for Innovation, held on July 26-27, 2010, to assess the potential of state-of-the-art computer simulations to accelerate understanding and discovery in materials science and chemistry, with a focus on potential impacts in energy technologies and innovation. The urgent demand for new energy technologies has greatly exceeded the capabilities of today's materials and chemical processes. To convert sunlight to fuel, efficiently store energy, or enable a new generation of energy production and utilization technologies requires the development of new materials and processes of unprecedented functionality and performance. New materials and processes are critical pacing elements for progress in advanced energy systems and virtually all industrial technologies. Over the past two decades, the United States has developed and deployed the world's most powerful collection of tools for the synthesis, processing, characterization, and simulation and modeling of materials and chemical systems at the nanoscale, dimensions of a few atoms to a few hundred atoms across. These tools, which include world-leading x-ray and neutron sources, nanoscale science facilities, and high-performance computers, provide an unprecedented view of the atomic-scale structure and dynamics of materials and the molecular-scale basis of chemical processes. For the first time in history, we are able to synthesize, characterize, and model materials and chemical behavior at the length scale where this behavior is controlled. This ability is transformational for the discovery process and, as a result, confers a significant competitive advantage. Perhaps the most spectacular increase in capability has been demonstrated in high performance computing. Over the past decade, computational power has increased by a factor of a million due to advances in hardware and software. This rate of improvement, which shows no sign of

  15. Application of variance reduction techniques in Monte Carlo simulation of clinical electron linear accelerator

    Science.gov (United States)

    Zoubair, M.; El Bardouni, T.; El Gonnouni, L.; Boulaich, Y.; El Bakkari, B.; El Younoussi, C.

    2012-01-01

    Computation time is an important and problematic parameter in Monte Carlo simulations: statistical errors decrease only as computation time increases, which motivates the use of variance reduction techniques. These techniques play an important role in reducing uncertainties and improving the statistical results. Several variance reduction techniques have been developed. The best known are transport cutoffs, interaction forcing, bremsstrahlung splitting, and Russian roulette. The use of a phase space file is also appropriate for greatly reducing computing time. In this work, we applied these techniques to a linear accelerator (LINAC) using the MCNPX Monte Carlo code. This code offers a rich palette of variance reduction techniques. In this study we investigated the various cards related to the variance reduction techniques provided by MCNPX. The parameter values found in this study can be used for efficient MCNPX calculations. The final calculations are performed in two steps that are linked by a phase space file. Results show that, compared to direct simulations (with neither variance reduction nor a phase space file), the adopted method improves the simulation efficiency by a factor greater than 700.
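Of the techniques listed, Russian roulette is the easiest to show in isolation: low-weight particles are killed with probability 1 - p and survivors are reweighted by 1/p, preserving the expected total weight while shrinking the particle bank. A sketch with illustrative numbers (not MCNPX defaults):

```python
import numpy as np

# Russian roulette on a bank of low-weight particles: unbiased in
# expectation, fewer particles to track afterwards.
rng = np.random.default_rng(3)

weights = rng.uniform(0.0, 0.2, size=100_000)    # low-weight particle bank
p_survive = 0.5

survives = rng.random(weights.size) < p_survive  # kill with prob 1 - p
new_weights = weights[survives] / p_survive      # reweight survivors by 1/p

total_before = weights.sum()
total_after = new_weights.sum()
```

Each particle contributes w with probability p and weight w/p, so its expected contribution is still w; the technique trades a small variance increase per particle for a large reduction in the number of histories that must be transported.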

  16. A Coupled Multiphysics Approach for Simulating Induced Seismicity, Ground Acceleration and Structural Damage

    Science.gov (United States)

    Podgorney, Robert; Coleman, Justin; Wilkins, Andrew; Huang, Hai; Veeraraghavan, Swetha; Xia, Yidong; Permann, Cody

    2017-04-01

    Numerical modeling has played an important role in understanding the behavior of coupled subsurface thermal-hydro-mechanical (THM) processes associated with a number of energy and environmental applications since as early as the 1970s. While the ability to rigorously describe all key tightly coupled controlling physics still remains a challenge, there have been significant advances in recent decades. These advances are related primarily to the exponential growth of computational power, the development of more accurate equations of state, improvements in the ability to represent heterogeneity and reservoir geometry, and more robust nonlinear solution schemes. The work described in this paper documents the development and linkage of several fully-coupled and fully-implicit modeling tools. These tools simulate: (1) the dynamics of fluid flow, heat transport, and quasi-static rock mechanics; (2) seismic wave propagation from the sources of energy release through heterogeneous material; and (3) the soil-structural damage resulting from ground acceleration. These tools are developed in Idaho National Laboratory's parallel Multiphysics Object Oriented Simulation Environment and are integrated using a global implicit approach. The governing equations are presented, the numerical approach for simultaneously solving and coupling the three physics tools is discussed, and the data input and output methodology is outlined. An example is presented to demonstrate the capabilities of the coupled multiphysics approach. The example involves simulating a system conceptually similar to the geothermal development in Basel, Switzerland, and the resultant induced seismicity, ground motion, and structural damage are predicted.

  17. Accelerating atomistic simulations through self-learning bond-boost hyperdynamics

    Energy Technology Data Exchange (ETDEWEB)

    Perez, Danny [Los Alamos National Laboratory]; Voter, Arthur F [Los Alamos National Laboratory]

    2008-01-01

    By altering the potential energy landscape on which molecular dynamics are carried out, the hyperdynamics method of Voter enables one to significantly accelerate the simulation of state-to-state dynamics of physical systems. While very powerful, successful application of the method entails solving the subtle problem of the parametrization of the so-called bias potential. In this study, we first clarify the constraints that must be obeyed by the bias potential and demonstrate that fast sampling of the biased landscape is key to obtaining proper kinetics. We then propose an approach by which the bond-boost potential of Miron and Fichthorn can be safely parametrized based on data acquired in the course of a molecular dynamics simulation. Finally, we introduce a procedure, the Self-Learning Bond Boost method, in which the parametrization is efficiently carried out on-the-fly for each new state that is visited during the simulation by safely ramping up the strength of the bias potential to its optimal value. The stability and accuracy of the method are demonstrated.
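
    Independent of the specific bond-boost form, the core of hyperdynamics is the clock bookkeeping: while the system evolves on the biased landscape V + dV, physical time advances by dt * exp(dV(x)/kT) per step. A minimal 1-D overdamped Langevin sketch; the double-well potential, bias fill level and all parameters are invented for illustration, not taken from the paper.

```python
import math, random

def hyperdynamics_step(x, dt, kT, force, bias, bias_force, rng, gamma=1.0):
    """One overdamped Langevin step on the biased landscape V + dV,
    returning the new position and the boosted physical-time increment
    dt * exp(dV(x)/kT) -- the essential hyperdynamics bookkeeping."""
    noise = math.sqrt(2.0 * kT * dt / gamma) * rng.gauss(0.0, 1.0)
    x_new = x + (dt / gamma) * (force(x) + bias_force(x)) + noise
    return x_new, dt * math.exp(bias(x) / kT)

# toy double well V(x) = (x^2 - 1)^2; the bias fills each well up to V = 0.5
V  = lambda x: (x * x - 1.0) ** 2
F  = lambda x: -4.0 * x * (x * x - 1.0)          # -dV/dx
dV = lambda x: max(0.0, 0.5 - V(x))              # bias, zero near the barrier top
dF = lambda x: -F(x) if V(x) < 0.5 else 0.0      # -d(dV)/dx where the bias is active

rng = random.Random(0)
x, t_hyper, t_md = -1.0, 0.0, 0.0
for _ in range(5000):
    x, t_inc = hyperdynamics_step(x, 1e-3, 0.2, F, dV, dF, rng)
    t_hyper += t_inc
    t_md += 1e-3
# t_hyper >= t_md always, since dV >= 0 implies a boost factor >= 1
```

    The hard part the paper addresses is choosing dV safely (it must vanish at dividing surfaces); the toy bias above simply caps the well depth.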

  18. Shock experiments and numerical simulations on low energy portable electrically exploding foil accelerators.

    Science.gov (United States)

    Saxena, A K; Kaushik, T C; Gupta, Satish C

    2010-03-01

    Two low energy (1.6 and 8 kJ) portable electrically exploding foil accelerators have been developed for moderately high pressure shock studies at small laboratory scale. Projectile velocities up to 4.0 km/s have been measured on Kapton flyers of thickness 125 μm and diameter 8 mm, using an in-house developed Fabry-Perot velocimeter. An asymmetric tilt of typically a few milliradians has been measured in flyers using a fiber optic technique. High pressure impact experiments have been carried out on tantalum and aluminum targets up to pressures of 27 and 18 GPa, respectively. Peak particle velocities at the target-glass interface as measured by the Fabry-Perot velocimeter have been found to be in good agreement with the reported equation of state data. A one-dimensional hydrodynamic code based on realistic models of equation of state and electrical resistivity has been developed to numerically simulate the flyer velocity profiles. The developed numerical scheme is validated against experimental and simulation data reported in the literature on such systems. Numerically computed flyer velocity profiles and final flyer velocities have been found to be in close agreement with the previously reported experimental results, with a significant improvement over reported magnetohydrodynamic simulations. Numerical modeling of the low energy systems reported here predicts flyer velocity profiles higher than experimental values, indicating the possibility of further improvement to achieve higher shock pressures.

  19. 3D simulations of young core-collapse supernova remnants undergoing efficient particle acceleration

    CERN Document Server

    Ferrand, Gilles

    2016-01-01

    Within our Galaxy, supernova remnants are believed to be the major sources of cosmic rays up to the "knee". However, important questions remain regarding the share of the hadronic and leptonic components, and the fraction of the supernova energy channelled into these components. We address such questions by means of numerical simulations that combine a hydrodynamic treatment of the shock wave with a kinetic treatment of particle acceleration. Performing 3D simulations allows us to produce synthetic projected maps and spectra of the thermal and non-thermal emission, which can be compared with multi-wavelength observations (in radio, X-rays, and gamma-rays). Supernovae come in different types, and although their energy budget is of the same order, their remnants have different properties, and so may contribute in different ways to the pool of Galactic cosmic rays. Our first simulations were focused on thermonuclear supernovae, like Tycho's SNR, that usually occur in a mostly undisturbed medium. Here we present...

  20. Accelerating development of metal organic framework membranes using atomically detailed simulations

    Science.gov (United States)

    Keskin, Seda

    A new group of nanoporous materials, metal organic frameworks (MOFs), have emerged as a fascinating alternative to more traditional nanoporous materials for membrane-based gas separations. Although hundreds of different MOF structures have been synthesized in powder forms, very little is currently known about the potential performance of MOFs as membranes since fabrication and testing of membranes from new materials require a large amount of time and resources. The purpose of this thesis is to predict the macroscopic flux of multi-component gas mixtures through MOF-based membranes with information obtained from detailed atomistic simulations. First, atomically detailed simulations of gas adsorption and diffusion in MOFs combined with a continuum description of a membrane are introduced to predict the performance of MOF membranes. These results are compared with the only available experimental data for a MOF membrane. An efficient approximate method based on limited information from molecular simulations to accelerate the modeling of MOF membranes is then introduced. The accuracy and computational efficiency of different modeling approaches are discussed. A robust screening strategy is proposed to screen numerous MOF materials to identify the ones with high membrane selectivity and to direct experimental efforts to the most promising of many possible MOF materials. This study provides the first predictions of any kind about the potential of MOFs as membranes and demonstrates that using molecular modeling for this purpose can be a useful means of identifying the phenomena that control the performance of MOFs as membranes.
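
    A screening strategy of this kind amounts to ranking candidate materials by a selectivity estimated from single-component simulation data. Under the common solution-diffusion approximation, permeability ~ diffusivity x adsorption solubility, so the A/B selectivity factors into a diffusion part and an adsorption part. A hedged sketch; the material names and numbers below are illustrative, not taken from the thesis.

```python
def ideal_selectivity(d_a, s_a, d_b, s_b):
    """Ideal A/B membrane selectivity under the solution-diffusion
    approximation: permeability_i ~ D_i * S_i, so selectivity(A/B) is
    the product of a diffusion selectivity (D_a/D_b) and an adsorption
    selectivity (S_a/S_b)."""
    return (d_a / d_b) * (s_a / s_b)

# hypothetical CO2/CH4 data for two candidate MOFs (illustrative numbers:
# D in m^2/s, S in mol/(m^3 Pa) up to a common factor)
candidates = {
    "MOF-A": ideal_selectivity(d_a=1.0e-8, s_a=6.0, d_b=4.0e-9, s_b=1.5),
    "MOF-B": ideal_selectivity(d_a=2.0e-9, s_a=2.0, d_b=2.0e-9, s_b=2.0),
}
best = max(candidates, key=candidates.get)
```

    Ranking many structures by such a cheap metric, then running full mixture simulations only on the leaders, is the essence of computational pre-screening.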

  1. Optimal Acceleration-Velocity-Bounded Trajectory Planning in Dynamic Crowd Simulation

    Directory of Open Access Journals (Sweden)

    Fu Yue-wen

    2014-01-01

    Creating complex and realistic crowd behaviors, such as pedestrian navigation behavior with dynamic obstacles, is a difficult and time-consuming task. In this paper, we study one special type of crowd which is composed of urgent individuals, normal individuals, and normal groups. We use three steps to construct the crowd simulation in a dynamic environment. The first one is that the urgent individuals move forward along a given path around dynamic obstacles and other crowd members. An optimal acceleration-velocity-bounded trajectory planning method is utilized to model their behaviors, which ensures that the durations of the generated trajectories are minimal and the urgent individuals are collision-free with dynamic obstacles (e.g., dynamic vehicles). In the second step, a pushing model is adopted to simulate the interactions between urgent members and normal ones, which ensures that the computational cost of the optimal trajectory planning is acceptable. The third step imitates the interactions among normal members using collision avoidance behavior and flocking behavior. Various simulation results demonstrate that these three steps produce realistic crowd phenomena, just like the real world.
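
    For a single axis, the duration-minimal property under bounded speed and acceleration reduces to the classic bang-bang (trapezoidal or triangular) velocity profile. A closed-form sketch of the minimum rest-to-rest travel time; this is a one-dimensional simplification for illustration, not the paper's full planner with moving obstacles.

```python
import math

def min_time(dist, v_max, a_max):
    """Minimum rest-to-rest travel time over distance `dist` subject to
    |v| <= v_max and |a| <= a_max.  If the cruise speed is reachable the
    optimal profile is trapezoidal (accelerate, cruise, brake);
    otherwise it is triangular (accelerate, brake)."""
    d_ramp = v_max * v_max / a_max            # accelerate + brake distance
    if dist >= d_ramp:                         # trapezoid: cruise phase exists
        return 2.0 * v_max / a_max + (dist - d_ramp) / v_max
    return 2.0 * math.sqrt(dist / a_max)       # triangle: v_max never reached
```

    For example, with dist=10, v_max=2, a_max=1 the ramps cover 4 units in 4 s and the remaining 6 units are cruised in 3 s, for 7 s total.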

  2. Experimental Characterization of a Plasma Deflagration Accelerator for Simulating Fusion Wall Response to Disruption Events

    Science.gov (United States)

    Underwood, Thomas; Loebner, Keith; Cappelli, Mark

    2016-10-01

    In this work, the suitability of a pulsed deflagration accelerator to simulate the interaction of edge-localized modes with plasma first wall materials is investigated. Experimental measurements derived from a suite of diagnostics are presented that focus on both the properties of the plasma jet and the manner in which such jets couple with material interfaces. Detailed measurements of the thermodynamic plasma state variables within the jet are presented using a quadruple Langmuir probe operating in current-saturation mode. These data, in conjunction with spectroscopic measurements of Hα Stark broadening via a fast-framing, intensified CCD camera, provide spatial and temporal measurements of how the plasma density and temperature scale as a function of input energy. Using these measurements, estimates for the energy flux associated with the deflagration accelerator are found to be completely tunable over a range spanning 150 MW m⁻² to 30 GW m⁻². The plasma-material interface is investigated using tungsten tokens exposed to the plasma plume under variable conditions. Visualizations of resulting shock structures are achieved through Schlieren cinematography, and energy transfer dynamics are discussed by presenting temperature measurements of exposed materials. This work is supported by the U.S. Department of Energy Stewardship Science Academic Program, in addition to the National Defense Science and Engineering Graduate Fellowship.

  3. Accelerated electronic structure-based molecular dynamics simulations of shock-induced chemistry

    Science.gov (United States)

    Cawkwell, Marc

    2015-06-01

    The initiation and progression of shock-induced chemistry in organic materials at moderate temperatures and pressures are slow on the time scales available to regular molecular dynamics simulations. Accessing the requisite time scales is particularly challenging if the interatomic bonding is modeled using accurate yet expensive methods based explicitly on electronic structure. We have combined fast, energy conserving extended Lagrangian Born-Oppenheimer molecular dynamics with the parallel replica accelerated molecular dynamics formalism to study the relatively sluggish shock-induced chemistry of benzene around 13-20 GPa. We model interatomic bonding in hydrocarbons using self-consistent tight binding theory with an accurate and transferable parameterization. Shock compression and its associated transient, non-equilibrium effects are captured explicitly by combining the universal liquid Hugoniot with a simple shrinking-cell boundary condition. A number of novel methods for improving the performance of reactive electronic structure-based molecular dynamics by adapting the self-consistent field procedure on-the-fly will also be discussed. The use of accelerated molecular dynamics has enabled us to follow the initial stages of the nucleation and growth of carbon clusters in benzene under thermodynamic conditions pertinent to experiments.

  4. Aacsfi-PSC. Advanced accelerator concepts for strong field interaction simulated with the Plasma-Simulation-Code

    Energy Technology Data Exchange (ETDEWEB)

    Ruhl, Hartmut [Munich Univ. (Germany). Chair for Computational and Plasma Physics

    2016-11-01

    Since the installation of SuperMUC phase 2, the 9216 nodes of phase 1 are more easily available for large scale runs, allowing for the thin foil and AWAKE simulations. Besides, phase 2 could be used in parallel for high throughput of the ion acceleration simulations. Challenging for our project were the full-volume checkpoints required by PIC, which strained the I/O subsystem of SuperMUC to its limits. New approaches considered for the next generation system, like burst buffers, could overcome this bottleneck. Additionally, as the FDTD solver in PIC is strongly bandwidth bound, PSC will benefit profoundly from high-bandwidth memory (HBM) that most likely will be available in future HPC machines. This will be of great advantage as in 2018 phase II of AWAKE should begin, with a longer plasma channel further increasing the need for additional computing resources. Last but not least, it is expected that our methods used in plasma physics (many body interaction with radiation) will be more and more adapted for medical diagnostics and treatments. For this research field we expect centimeter sized volumes with necessary resolutions of tens of micrometers, resulting in boxes of >10^12 voxels (100-200 TB) on a regular basis. In consequence, the demand for computing time and especially for data storage and data handling capacities will also increase significantly.

  5. The GENGA Code: Gravitational Encounters in N-body simulations with GPU Acceleration

    CERN Document Server

    Grimm, Simon L

    2014-01-01

    We describe a GPU implementation of a hybrid symplectic N-body integrator, GENGA (Gravitational ENcounters with Gpu Acceleration), designed to integrate planet and planetesimal dynamics in the late stage of planet formation and stability analysis of planetary systems. GENGA is based on the integration scheme of the Mercury code (Chambers 1999), which handles close encounters with very good energy conservation. It uses mixed variable integration (Wisdom & Holman 1991) when the motion is a perturbed Kepler orbit and combines this with a direct N-body Bulirsch-Stoer method during close encounters. The GENGA code supports three simulation modes: Integration of up to 2048 massive bodies, integration with up to a million test particles, or parallel integration of a large number of individual planetary systems. GENGA is written in CUDA C and runs on all Nvidia GPUs with compute capability of at least 2.0. All operations are performed in parallel, including the close encounter detection and the grouping of indepe...
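
    The bounded-energy-error property that hybrid symplectic schemes exploit is visible already in a kick-drift-kick leapfrog on a Kepler orbit. The sketch below is a toy illustration of that property only, not GENGA's Wisdom-Holman/Bulirsch-Stoer machinery; units with GM = 1 are assumed.

```python
import math

def leapfrog_kepler(x, v, dt, steps, gm=1.0):
    """Kick-drift-kick leapfrog for a test particle around a unit point
    mass.  Being symplectic, its energy error stays bounded instead of
    secularly drifting -- the property hybrid integrators rely on away
    from close encounters."""
    def acc(p):
        r3 = (p[0] ** 2 + p[1] ** 2) ** 1.5
        return (-gm * p[0] / r3, -gm * p[1] / r3)
    for _ in range(steps):
        a = acc(x)
        v = (v[0] + 0.5 * dt * a[0], v[1] + 0.5 * dt * a[1])   # half kick
        x = (x[0] + dt * v[0], x[1] + dt * v[1])               # drift
        a = acc(x)
        v = (v[0] + 0.5 * dt * a[0], v[1] + 0.5 * dt * a[1])   # half kick
    return x, v

def energy(x, v, gm=1.0):
    """Specific orbital energy: kinetic minus gm/r."""
    return 0.5 * (v[0] ** 2 + v[1] ** 2) - gm / math.hypot(x[0], x[1])
```

    Starting from a circular orbit x=(1,0), v=(0,1) with energy -0.5, thousands of steps leave the energy within a small bounded band rather than drifting away.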

  6. Vlasov Simulations of Ladder Climbing and Autoresonant Acceleration of Langmuir Waves

    Science.gov (United States)

    Hara, Kentaro; Barth, Ido; Kaminski, Erez; Dodin, Ilya; Fisch, Nathaniel

    2016-10-01

    The energy of plasma waves can be moved up and down the spectrum using chirped modulations of plasma parameters, which can be driven by external fields. Depending on the discreteness of the wave spectrum, this phenomenon is called ladder climbing (LC) or autoresonant acceleration (AR) of plasmons, and was first proposed by Barth et al. based on a linear fluid model. Here, we report a demonstration of LC/AR from first principles using fully nonlinear Vlasov simulations of collisionless bounded plasma. We show that, in agreement with the basic theory, plasmons survive substantial transformations of the spectrum and are destroyed only when their wave numbers become large enough to trigger Landau damping. The work was supported by the NNSA SSAA Program through DOE Research Grant No. DE-NA0002948 and the DTRA Grant No. HDTRA1-11-1-0037.

  7. GPU accelerated Monte Carlo simulation of Brownian motors dynamics with CUDA

    CERN Document Server

    Spiechowicz, J; Machura, L

    2014-01-01

    This work presents an updated and extended guide on methods of a proper acceleration of the Monte Carlo integration of stochastic differential equations with the commonly available NVIDIA Graphics Processing Units using the CUDA programming environment. We outline the general aspects of the scientific computing on graphics cards and demonstrate them with two models of a well known phenomenon of the noise induced transport of Brownian motors in periodic structures. As a source of fluctuations in the considered systems we selected the three most commonly occurring noises: the Gaussian white noise, the white Poissonian noise and the dichotomous process also known as a random telegraph signal. The detailed discussion on various aspects of the applied numerical schemes is also presented. The measured speedup can be of the astonishing order of 2000 when compared to a typical CPU. This number significantly expands the range of problems solvable by use of stochastic simulations, allowing even interactive research ...

  8. Current status of MCNP6 as a simulation tool useful for space and accelerator applications

    CERN Document Server

    Mashnik, S G; Hughes, H G; Prael, R E; Sierk, A J

    2012-01-01

    For the past several years, a major effort has been undertaken at Los Alamos National Laboratory (LANL) to develop the transport code MCNP6, the latest LANL Monte-Carlo transport code representing a merger and improvement of MCNP5 and MCNPX. We emphasize a description of the latest developments of MCNP6 at higher energies to improve its reliability in calculating rare-isotope production, high-energy cumulative particle production, and a gamut of reactions important for space-radiation shielding, cosmic-ray propagation, and accelerator applications. We present several examples of validation and verification of MCNP6 compared to a wide variety of intermediate- and high-energy experimental data on reactions induced by photons, mesons, nucleons, and nuclei at energies from tens of MeV to about 1 TeV/nucleon, and compare to results from other modern simulation tools.

  9. GPU accelerated Monte Carlo simulation of Brownian motors dynamics with CUDA

    Science.gov (United States)

    Spiechowicz, J.; Kostur, M.; Machura, L.

    2015-06-01

    This work presents an updated and extended guide on methods of a proper acceleration of the Monte Carlo integration of stochastic differential equations with the commonly available NVIDIA Graphics Processing Units using the CUDA programming environment. We outline the general aspects of the scientific computing on graphics cards and demonstrate them with two models of a well known phenomenon of the noise induced transport of Brownian motors in periodic structures. As a source of fluctuations in the considered systems we selected the three most commonly occurring noises: the Gaussian white noise, the white Poissonian noise and the dichotomous process also known as a random telegraph signal. The detailed discussion on various aspects of the applied numerical schemes is also presented. The measured speedup can be of the astonishing order of about 3000 when compared to a typical CPU. This number significantly expands the range of problems solvable by use of stochastic simulations, allowing even interactive research in some cases.
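
    The per-trajectory kernel that the GPU parallelizes over thousands of threads is an Euler-Maruyama update of the overdamped Langevin equation. A CPU sketch with the Gaussian white noise case only; the tilted cosine potential and all parameter values are illustrative, not the paper's.

```python
import math, random

def brownian_motor_mean_x(n_traj, n_steps, dt, f0, kT, seed=1):
    """Euler-Maruyama integration of
        dx = (-V'(x) + f0) dt + sqrt(2 kT dt) * xi,   V(x) = -cos(2 pi x),
    i.e. an overdamped particle in a tilted washboard potential driven by
    Gaussian white noise.  Trajectories are mutually independent -- exactly
    what a CUDA kernel assigns one thread each.  Returns mean displacement."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * kT * dt)
    xs = [0.0] * n_traj
    for _ in range(n_steps):
        for i in range(n_traj):
            drift = -2.0 * math.pi * math.sin(2.0 * math.pi * xs[i]) + f0
            xs[i] += drift * dt + sigma * rng.gauss(0.0, 1.0)
    return sum(xs) / n_traj
```

    With a supercritical tilt (f0 larger than the maximum pinning force 2*pi) the ensemble runs steadily downhill; the GPU version simply evaluates the inner loop for all trajectories concurrently.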

  10. Tools for simulation of high beam intensity ion accelerators; Simulationswerkzeuge fuer die Berechnung hochintensiver Ionenbeschleuniger

    Energy Technology Data Exchange (ETDEWEB)

    Tiede, Rudolf

    2009-07-09

    A new particle-in-cell space charge routine based on a fast Fourier transform was developed and implemented in the LORASR code. It provides the ability to perform up to several hundred batch-run simulations with up to 1 million macroparticles each within reasonable computation time. The new space charge routine was successfully validated in the framework of the European "High Intensity Pulsed Proton Injectors" (HIPPI) collaboration: several static Poisson solver benchmarking comparisons were performed, as well as particle tracking comparisons along the GSI UNILAC Alvarez section. Moreover, machine error setting routines and data analysis tools were developed and applied to error studies for the "Heidelberg Cancer Therapy" (HICAT) IH-type drift tube linear accelerator (linac), the FAIR Facility Proton Linac and the proposal of a linac for the "International Fusion Materials Irradiation Facility" (IFMIF) based on superconducting CH-type structures. (orig.)
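
    An FFT-based space-charge solver of this kind rests on solving Poisson's equation spectrally: each Fourier mode of the charge density is divided by k squared. A 1-D periodic sketch of that core step, with a naive O(n^2) DFT standing in for the FFT and units dropped; LORASR's actual 3-D routine is of course more involved.

```python
import math, cmath

def poisson_periodic(rho, length):
    """Spectral solution of phi'' = -rho on a periodic 1-D grid: each
    Fourier mode obeys k^2 * phi_k = rho_k; the k=0 (mean) mode is
    gauged to zero.  A naive DFT keeps the sketch dependency-free."""
    n = len(rho)
    rho_k = [sum(rho[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                 for j in range(n)) for k in range(n)]
    phi_k = [0j] * n
    for k in range(1, n):
        kk = k if k <= n // 2 else k - n          # signed mode index
        k_phys = 2.0 * math.pi * kk / length
        phi_k[k] = rho_k[k] / (k_phys * k_phys)
    return [(sum(phi_k[k] * cmath.exp(2j * cmath.pi * k * j / n)
                 for k in range(n)) / n).real for j in range(n)]

# check against the analytic solution for rho = sin(2 pi x):
# phi = sin(2 pi x) / (2 pi)^2
n = 32
rho = [math.sin(2.0 * math.pi * j / n) for j in range(n)]
phi = poisson_periodic(rho, 1.0)
max_err = max(abs(p - r / (2.0 * math.pi) ** 2) for p, r in zip(phi, rho))
```

    In a PIC cycle this solve sits between charge deposition onto the grid and field interpolation back to the macroparticles.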

  11. The effect of mechanical discontinuities on the growth of faults

    Science.gov (United States)

    Bonini, Lorenzo; Basili, Roberto; Bonanno, Emanuele; Toscani, Giovanni; Burrato, Pierfrancesco; Seno, Silvio; Valensise, Gianluca

    2016-04-01

    The growth of natural faults is controlled by several factors, including the nature of host rocks, the strain rate, the temperature, and the presence of fluids. In this work we focus on the mechanical characteristics of host rocks, and in particular on the role played by thin mechanical discontinuities in the upward propagation of faults and in associated secondary effects such as folding and fracturing. Our approach uses scaled, analogue models where natural rocks are simulated by wet clay (kaolin). A clay cake is placed above two rigid blocks in a hanging wall/footwall configuration on either side of a planar fault. Fault activity is simulated by motor-controlled movements of the hanging wall. We reproduce three types of faults: a 45°-dipping normal fault, a 45°-dipping reverse fault and a 30°-dipping reverse fault. These angles are selected as representative of most natural dip-slip faults. The analogues of the mechanical discontinuities are obtained by precutting the wet clay cake before starting the hanging wall movement. We monitor the experiments with high-resolution cameras and then obtain most of the data through the Digital Image Correlation method (D.I.C.). This technique accurately tracks the trajectories of the particles of the analogue material during the deformation process; this allows us to extract displacement field vectors plus the strain and shear rate distributions on the lateral side of the clay block, where the growth of new faults is best seen. Initially we run a series of isotropic experiments, i.e. experiments without discontinuities, to generate a reference model; then we introduce the discontinuities. For the extensional models they are cut at different dip angles, from horizontal to 45°-dipping, both synthetic and antithetic with respect to the master fault, whereas only horizontal discontinuities are introduced in the contractional models. Our experiments show that such discontinuities control: 1) the propagation rate of faults

  12. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.

    Science.gov (United States)

    Gorshkov, Anton V; Kirillin, Mikhail Yu

    2015-08-01

    Over the past two decades, the Monte Carlo technique has become a gold standard in simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach of porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator allows reducing the computational time of MC simulation and obtaining a simulation speed-up comparable to GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.
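
    The transport kernel being accelerated is embarrassingly parallel over photons, which is why it maps so well onto many-core hardware. A deliberately simplified 1-D slab sketch (exponential free paths, isotropic scattering, absorption by weight attenuation; no refraction, anisotropy factor, or Russian roulette, unlike a full tissue code):

```python
import math, random

def slab_transmission(n_photons, mu_s, mu_a, thickness, seed=7):
    """Toy Monte Carlo photon transport through a 1-D slab: sample an
    exponential free path, scatter isotropically, and attenuate the
    photon weight by the single-scattering albedo at each event.
    Photons are mutually independent, so each can run on its own
    thread/core.  Returns the transmitted weight fraction."""
    rng = random.Random(seed)
    mu_t = mu_s + mu_a                 # total interaction coefficient
    albedo = mu_s / mu_t
    transmitted = 0.0
    for _ in range(n_photons):
        z, cz, w = 0.0, 1.0, 1.0       # depth, direction cosine, weight
        while w > 1e-4:                # kill photons of negligible weight
            z += cz * (-math.log(1.0 - rng.random()) / mu_t)
            if z < 0.0:                # back-scattered out of the slab
                break
            if z > thickness:          # transmitted through the slab
                transmitted += w
                break
            w *= albedo                # deposit (1 - albedo) of the weight
            cz = 2.0 * rng.random() - 1.0   # isotropic new direction
    return transmitted / n_photons
```

    A thicker slab should transmit less; that qualitative behaviour is already reproduced by this toy model.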

  13. Numerical Simulation of Fault Interaction in a Trans-Tensional Setting, the La Paz Los Cabos Region, Baja California, Mexico.

    Science.gov (United States)

    Zielke, O.; Arrowsmith, R. J.

    2006-12-01

    A number of medium to large normal faulting earthquakes occurred in the La Paz-Los Cabos region, at the tip of Baja California, within the last four decades. They (along with the tectonic geomorphology of the fault zones on the Peninsula) demonstrate that the existing structures in the area are active and capable of hazardous earthquakes. The goal of this study is to understand how the individual active faults in this region affect the behavior of the fault system as a whole. What role does fault interaction (i.e., stress transfer) and earthquake triggering play in the La Paz-Los Cabos region? Do we know all the significant, active and therefore hazardous structures that are part of the fault system? Are these structures capable of releasing the tectonically accumulated strain? What role does the fault system play in the regional, trans-tensional setting? To approach these questions we utilize a numerical model, based on derivations by Okada (1992), with which we compute the strain distribution and Coulomb failure stress for a given (frictionless) displacement along a rectangular fault patch and its interactions with other faults of the fault array. Beginning with simple geometric models of the fault system in the La Paz-Los Cabos region, we investigate under what conditions individual earthquakes may have triggered subsequent events. We focused on the M=5.6 event of April 4th, 1969, which may have had an effect on the timing of the M=6.2 event of June 30th, 1995. The proximity of these two earthquakes (epicenters only 60 km apart) supports the idea that stress transfer caused by the 1969 event may have altered the seismic cycle of the fault activated in 1995. Because the fault geometry and slip distribution during these two events are not well known, we explore the parameter space to learn under what conditions the 1969 event may have triggered the 1995 event. We apply the empirical relations among magnitude, fault geometry, and displacement, derived by Wells
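
    The triggering question ultimately reduces to the sign of the Coulomb failure stress change resolved on the receiver fault. A minimal sketch of the combination rule only; the Okada elastic dislocation solutions that produce the stress components are far beyond a few lines, and the numbers below are illustrative.

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault,
        dCFS = d_tau + mu' * d_sigma_n,
    with the shear stress change d_tau resolved in the slip direction,
    the normal stress change d_sigma_n positive in tension (unclamping),
    and mu' an effective friction coefficient.  dCFS > 0 moves the
    receiver fault toward failure; dCFS < 0 relaxes it."""
    return d_shear + mu_eff * d_normal

# illustrative values in MPa: 0.10 MPa shear loading plus slight unclamping
d_cfs = coulomb_stress_change(d_shear=0.10, d_normal=0.05)
```

    In a study like the one above, d_shear and d_normal are evaluated on the 1995 fault plane from the modeled 1969 slip, and the parameter space of geometry and slip is swept to see where d_cfs stays positive.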

  14. Nonlinear Site Response Due to Large Ground Acceleration: Observation and Computer Simulation

    Science.gov (United States)

    Noguchi, S.; Furumura, T.; Sasatani, T.

    2009-12-01

    We studied nonlinear site response due to large ground acceleration during the 2003 off-Miyagi Earthquake (Mw7.0) in Japan by means of horizontal-to-vertical spectral ratio analysis of S-wave motion. The results were then confirmed by finite-difference method (FDM) simulation of nonlinear seismic wave propagation. A nonlinear site response is often observed at soft sediment sites, and even at hard bedrock sites which are covered by thin soil layers. Nonlinear site response can be induced by strong ground motion whose peak ground acceleration (PGA) exceeds about 100 cm/s/s, and seriously affects the amplification of high frequency ground motion and PGA. Noguchi and Sasatani (2008) developed an efficient technique for quantitative evaluation of nonlinear site response using the horizontal-to-vertical spectral ratio of S-wave (S-H/V) derived from strong ground motion records, based on Wen et al. (2006). We applied this technique to perform a detailed analysis of the properties of nonlinear site response based on a large amount of data recorded at 132 K-NET and KiK-net strong motion stations in Northern Japan during the off-Miyagi Earthquake. We succeeded in demonstrating a relationship between ground motion level, nonlinear site response and surface soil characteristics. For example, the seismic data recorded at KiK-net IWTH26 showed obvious characteristics of nonlinear site response when the PGA exceeded 100 cm/s/s. As the ground motion level increased, the dominant peak of S-H/V shifted to lower frequency, the high frequency level of S-H/V dropped, and PGA amplification decreased. On the other hand, the records at MYGH03 seemed not to be affected by nonlinear site response even for high ground motion levels in which PGA exceeds 800 cm/s/s. The characteristics of such nonlinear site amplification can be modeled by evaluating Murnaghan constants (e.g. McCall, 1994), which are the third-order elastic constants. In order to explain the observed characteristics of
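
    The S-H/V measure itself is straightforward to compute from a three-component record: the geometric mean of the two horizontal amplitude spectra divided by the vertical one, per frequency bin. A sketch without the S-wave windowing and spectral smoothing a real analysis requires (naive DFT for brevity; the synthetic signals are invented):

```python
import math, cmath

def s_wave_hv(h1, h2, v, dt):
    """Horizontal-to-vertical spectral ratio: geometric mean of the two
    horizontal Fourier amplitude spectra divided by the vertical one,
    returned per frequency bin (DC excluded)."""
    n = len(v)
    def amp(x, k):                      # naive DFT amplitude at bin k
        return abs(sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                       for j in range(n)))
    freqs, ratio = [], []
    for k in range(1, n // 2):
        h = math.sqrt(amp(h1, k) * amp(h2, k))
        vk = amp(v, k)
        freqs.append(k / (n * dt))
        ratio.append(h / vk if vk > 1e-12 else float("inf"))
    return freqs, ratio

# synthetic check: horizontals twice the vertical at 12.5 Hz
n, dt = 64, 0.01
sig = [math.sin(2.0 * math.pi * 8 * j / n) for j in range(n)]
h1 = [2.0 * s for s in sig]
freqs, ratio = s_wave_hv(h1, h1, sig, dt)
```

    Nonlinearity diagnostics like those described above then track how the dominant peak of this ratio shifts and drops as the input PGA grows.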

  15. A Comparison Between GATE and MCNPX Monte Carlo Codes in Simulation of Medical Linear Accelerator

    Science.gov (United States)

    Sadoughi, Hamid-Reza; Nasseri, Shahrokh; Momennezhad, Mahdi; Sadeghi, Hamid-Reza; Bahreyni-Toosi, Mohammad-Hossein

    2014-01-01

    Radiotherapy dose calculations can be evaluated by Monte Carlo (MC) simulations with acceptable accuracy for dose prediction in complicated treatment plans. In this work, the Standard, Livermore and Penelope electromagnetic (EM) physics packages of GEANT4 Application for Tomographic Emission (GATE) 6.1 were compared against Monte Carlo N-Particle eXtended (MCNPX) 2.6 in the simulation of a 6 MV photon linac. To do this, similar geometry was used for the two codes. The reference values of percentage depth dose (PDD) and beam profiles were obtained using a 6 MV Elekta Compact linear accelerator, a Scanditronix water phantom and diode detectors. No significant deviations were found in PDD, dose profile, energy spectrum, radial mean energy and photon radial distribution, which were calculated by the Standard and Livermore EM models and MCNPX, respectively. Nevertheless, the Penelope model showed an extreme difference. Statistical uncertainty in all the simulations was MCNPX, Standard, Livermore and Penelope models, respectively. Differences between spectra in various regions, in radial mean energy and in photon radial distribution were due to different cross section and stopping power data, and not the same simulation of physics processes, of MCNPX and the three EM models. For example, in the Standard model the photoelectron direction was sampled from the Gavrila-Sauter distribution, but the photoelectron moved in the same direction as the incident photons in the photoelectric process of the Livermore and Penelope models. Using the same primary electron beam, the Standard and Livermore EM models of GATE and MCNPX showed similar output, but re-tuning of the primary electron beam is needed for the Penelope model. PMID:24696804

  16. Progress towards the development of transient ram accelerator simulation as part of the U.S. Air Force Armament Directorate Research Program

    Science.gov (United States)

    Sinha, N.; York, B. J.; Dash, S. M.; Drabczuk, R.; Rolader, G. E.

    1992-07-01

    This paper describes the development of an advanced CFD simulation capability in support of the U.S. Air Force Armament Directorate's ram accelerator research initiative. The state-of-the-art CRAFT computer code has been specialized for high fidelity, transient ram accelerator simulations via inclusion of generalized dynamic gridding, solution adaptive grid clustering, high pressure thermochemistry, etc. Selected ram accelerator simulations are presented which serve to exhibit the CRAFT code's capabilities and identify some of the principal research/design issues.

  17. Simulative and experimental investigation on stator winding turn and unbalanced supply voltage fault diagnosis in induction motors using Artificial Neural Networks.

    Science.gov (United States)

    Lashkari, Negin; Poshtan, Javad; Azgomi, Hamid Fekri

    2015-11-01

    The three-phase shift between line current and phase voltage of induction motors can be used as an efficient fault indicator to detect and locate an inter-turn stator short-circuit (ITSC) fault. However, unbalanced supply voltage is one of the contributing factors that inevitably affect stator currents and therefore the three-phase shift. Thus, it is necessary to propose a method that is able to identify whether the unbalance of the three currents is caused by an ITSC or a supply voltage fault. This paper presents a feedforward multilayer-perceptron Neural Network (NN), trained by back propagation, based on monitoring the negative sequence voltage and the three-phase shift. The data required for training and testing the NN are generated using a simulated model of the stator. The experimental results are presented to verify the superior accuracy of the proposed method.
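
    One of the inputs fed to such a network, the negative-sequence voltage, comes from the classic symmetrical-component transform of the three phase phasors. A sketch of that feature extraction step; the phasor values are illustrative, not from the paper.

```python
import cmath

def negative_sequence(va, vb, vc):
    """Negative-sequence component of a three-phase phasor set,
        V2 = (Va + a^2 * Vb + a * Vc) / 3,   a = exp(2j*pi/3).
    A perfectly balanced supply gives V2 = 0; any unbalance (supply
    fault or ITSC-induced asymmetry) makes |V2| grow, which is why it
    serves as a diagnostic feature."""
    a = cmath.exp(2j * cmath.pi / 3.0)
    return (va + a * a * vb + a * vc) / 3.0

a = cmath.exp(2j * cmath.pi / 3.0)
v2_balanced = negative_sequence(1.0, a * a, a)       # ideal balanced set
v2_sagged = negative_sequence(1.0, 0.9 * a * a, a)   # phase B sagged by 10%
```

    The network's job is then to decide, from |V2| together with the three-phase shift, which of the two fault causes produced the observed unbalance.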

  18. Fault Monitoring and Fault Recovery Control for Position Moored Tanker

    DEFF Research Database (Denmark)

    Fang, Shaoji; Blanke, Mogens

    2009-01-01

    This paper addresses fault tolerant control for position mooring of a shuttle tanker operating in the North Sea. A complete framework for fault diagnosis is presented, but the loss of a sub-sea mooring line buoyancy element is given particular attention, since this fault could lead to line breakage... algorithm is proposed to accommodate buoyancy element failure and keep the mooring system in a safe state. Detection properties and fault-tolerant control are demonstrated by high fidelity simulations...

  19. Study on Fault Current of DFIG during Slight Fault Condition

    OpenAIRE

    Xiangping Kong; Zhe Zhang; Xianggen Yin; Zhenxing Li

    2013-01-01

    In order to ensure the safety of the DFIG when a severe fault happens, crowbar protection is adopted. But during slight fault conditions, the crowbar protection will not trip, and the DFIG is still excited by the AC-DC-AC converter. In this condition, the operation characteristics of the converter have a large influence on the fault current characteristics of the DFIG. By theoretical analysis and digital simulation, the fault current characteristics of the DFIG during slight voltage dips are studied. And the influenc...

  20. Accelerated path integral methods for atomistic simulations at ultra-low temperatures.

    Science.gov (United States)

    Uhl, Felix; Marx, Dominik; Ceriotti, Michele

    2016-08-07

    Path integral methods provide a rigorous and systematically convergent framework to include the quantum mechanical nature of atomic nuclei in the evaluation of the equilibrium properties of molecules, liquids, or solids at finite temperature. Such nuclear quantum effects are often significant for light nuclei already at room temperature, but become crucial at cryogenic temperatures such as those provided by superfluid helium as a solvent. Unfortunately, the cost of converged path integral simulations increases significantly upon lowering the temperature so that the computational burden of simulating matter at the typical superfluid helium temperatures becomes prohibitive. Here we investigate how accelerated path integral techniques based on colored noise generalized Langevin equations, in particular the so-called path integral generalized Langevin equation thermostat (PIGLET) variant, perform in this extreme quantum regime using as an example the quasi-rigid methane molecule and its highly fluxional protonated cousin, CH5+. We show that the PIGLET technique gives a speedup of two orders of magnitude in the evaluation of structural observables and quantum kinetic energy at ultralow temperatures. Moreover, we computed the spatial spread of the quantum nuclei in CH4 to illustrate the limits of using such colored noise thermostats close to the many body quantum ground state.

  1. Towards accelerating Smoothed Particle Hydrodynamics simulations for free-surface flows on multi-GPU clusters

    CERN Document Server

    Valdez-Balderas, Daniel; Rogers, Benedict D; Crespo, Alejandro J C

    2012-01-01

    Starting from the single graphics processing unit (GPU) version of the Smoothed Particle Hydrodynamics (SPH) code DualSPHysics, a multi-GPU SPH program is developed for free-surface flows. The approach is based on a spatial decomposition technique, whereby different portions (sub-domains) of the physical system under study are assigned to different GPUs. Communication between devices is achieved with the use of Message Passing Interface (MPI) application programming interface (API) routines. The use of the sorting algorithm radix sort for inter-GPU particle migration and sub-domain halo building (which enables interaction between SPH particles of different subdomains) is described in detail. With the resulting scheme it is possible, on the one hand, to carry out simulations that could also be performed on a single GPU, but they can now be performed even faster than on one of these devices alone. On the other hand, accelerated simulations can be performed with up to 32 million particles on the current architec...
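The sort-based particle reordering and halo building that the abstract describes can be sketched in miniature. This toy 1-D version splits particles between two "devices" at a fixed boundary and uses `np.argsort` as a stand-in for the GPU radix sort; all names and sizes are illustrative, not the DualSPHysics data layout.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D version of the sort-based particle reordering and halo build:
# particles in [0, 4) are split between two "devices" at x = 2.0, and the
# interaction radius h defines the halo that must be copied across.
h = 0.1
x = rng.uniform(0.0, 4.0, 10000)

cell = (x / h).astype(np.int64)          # bin particles into cells of size h
order = np.argsort(cell, kind="stable")  # sort particles by cell key
x_sorted = x[order]
cell_sorted = cell[order]

# Halo building: particles within h of the sub-domain boundary are copied
# to the neighbouring device so that cross-boundary pairs can interact.
boundary = 2.0
on_left = x_sorted < boundary
halo_for_right = x_sorted[on_left & (x_sorted >= boundary - h)]
halo_for_left = x_sorted[~on_left & (x_sorted < boundary + h)]
```

After the sort, particles sharing a cell are contiguous in memory, which is what makes neighbour searches and halo extraction cheap on a GPU.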

  2. Lorentz boosted frame simulation of Laser wakefield acceleration in quasi-3D geometry

    CERN Document Server

    Yu, Peicheng; Davidson, Asher; Tableman, Adam; Dalichaouch, Thamine; Meyers, Michael D; Tsung, Frank S; Decyk, Viktor K; Fiuza, Frederico; Vieira, Jorge; Fonseca, Ricardo A; Lu, Wei; Silva, Luis O; Mori, Warren B

    2015-01-01

    When modeling laser wakefield acceleration (LWFA) using the particle-in-cell (PIC) algorithm in a Lorentz boosted frame, the plasma is drifting relativistically at $\beta_b c$ towards the laser, which can lead to a computational speedup of $\sim \gamma_b^2=(1-\beta_b^2)^{-1}$. Meanwhile, when LWFA is modeled in the quasi-3D geometry in which the electromagnetic fields and current are decomposed into a limited number of azimuthal harmonics, speedups are achieved by modeling three dimensional problems with the computation load on the order of two dimensional $r-z$ simulations. Here, we describe how to combine the speedups from the Lorentz boosted frame and quasi-3D algorithms. The key to the combination is the use of a hybrid Yee-FFT solver in the quasi-3D geometry that can be used to effectively eliminate the Numerical Cerenkov Instability (NCI) that inevitably arises in a Lorentz boosted frame due to the unphysical coupling of Langmuir modes and EM modes of the relativistically drifting plasma in these simul...

  3. Simulation and experimental studies on electron cloud effects in particle accelerators

    CERN Document Server

    Romano, Annalisa; Cimino, Roberto; Iadarola, Giovanni; Rumolo, Giovanni

    Electron Cloud (EC) effects represent a serious limitation for particle accelerators operating with intense beams of positively charged particles. This Master thesis work presents simulation and experimental studies on EC effects carried out in collaboration with the European Organization for Nuclear Research (CERN) in Geneva and with the INFN-LNF laboratories in Frascati. During the Long Shutdown 1 (LS1, 2013-2014), a new detector for EC measurements has been installed in one of the main magnets of the CERN Proton Synchrotron (PS) to study the EC formation in the presence of a strong magnetic field. The aim is to develop a reliable EC model of the PS vacuum chamber in order to identify possible limitations for the future high intensity and high brightness beams foreseen by the Large Hadron Collider (LHC) Injectors Upgrade (LIU) project. Numerical simulations with the new PyECLOUD code were performed in order to quantify the expected signal at the detector under different beam conditions. The experimental activity...

  4. GPU accelerated flow solver for direct numerical simulation of turbulent flows

    Science.gov (United States)

    Salvadore, Francesco; Bernardini, Matteo; Botti, Michela

    2013-02-01

    Graphical processing units (GPUs), characterized by significant computing performance, are nowadays very appealing for the solution of computationally demanding tasks in a wide variety of scientific applications. However, to run on GPUs, existing codes need to be ported and optimized, a procedure which is not yet standardized and may require non-trivial efforts, even for high-performance computing specialists. In the present paper we accurately describe the porting to CUDA (Compute Unified Device Architecture) of a finite-difference compressible Navier-Stokes solver, suitable for direct numerical simulation (DNS) of turbulent flows. Porting and validation processes are illustrated in detail, with emphasis on computational strategies and techniques that can be applied to overcome typical bottlenecks arising from the porting of common computational fluid dynamics solvers. We demonstrate that careful optimization work is crucial to get the highest performance from GPU accelerators. The results show that the overall speedup of one NVIDIA Tesla S2070 GPU is approximately 22 compared with one AMD Opteron 2352 Barcelona chip and 11 compared with one Intel Xeon X5650 Westmere core. The potential of GPU devices in the simulation of unsteady three-dimensional turbulent flows is proved by performing a DNS of a spatially evolving compressible mixing layer.

  5. GPU accelerated flow solver for direct numerical simulation of turbulent flows

    Energy Technology Data Exchange (ETDEWEB)

    Salvadore, Francesco [CASPUR – via dei Tizii 6/b, 00185 Rome (Italy); Bernardini, Matteo, E-mail: matteo.bernardini@uniroma1.it [Department of Mechanical and Aerospace Engineering, University of Rome ‘La Sapienza’ – via Eudossiana 18, 00184 Rome (Italy); Botti, Michela [CASPUR – via dei Tizii 6/b, 00185 Rome (Italy)

    2013-02-15

    Graphical processing units (GPUs), characterized by significant computing performance, are nowadays very appealing for the solution of computationally demanding tasks in a wide variety of scientific applications. However, to run on GPUs, existing codes need to be ported and optimized, a procedure which is not yet standardized and may require non-trivial efforts, even for high-performance computing specialists. In the present paper we accurately describe the porting to CUDA (Compute Unified Device Architecture) of a finite-difference compressible Navier–Stokes solver, suitable for direct numerical simulation (DNS) of turbulent flows. Porting and validation processes are illustrated in detail, with emphasis on computational strategies and techniques that can be applied to overcome typical bottlenecks arising from the porting of common computational fluid dynamics solvers. We demonstrate that careful optimization work is crucial to get the highest performance from GPU accelerators. The results show that the overall speedup of one NVIDIA Tesla S2070 GPU is approximately 22 compared with one AMD Opteron 2352 Barcelona chip and 11 compared with one Intel Xeon X5650 Westmere core. The potential of GPU devices in the simulation of unsteady three-dimensional turbulent flows is proved by performing a DNS of a spatially evolving compressible mixing layer.

  6. Accelerating the Design of Solar Thermal Fuel Materials through High Throughput Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Y; Grossman, JC

    2014-12-01

    Solar thermal fuels (STF) store the energy of sunlight, which can then be released later in the form of heat, offering an emission-free and renewable solution for both solar energy conversion and storage. However, this approach is currently limited by the lack of low-cost materials with high energy density and high stability. In this Letter, we present an ab initio high-throughput computational approach to accelerate the design process and allow for searches over a broad class of materials. The high-throughput screening platform we have developed can run through large numbers of molecules composed of earth-abundant elements and identifies possible metastable structures of a given material. Corresponding isomerization enthalpies associated with the metastable structures are then computed. Using this high-throughput simulation approach, we have discovered molecular structures with high isomerization enthalpies that have the potential to be new candidates for high-energy density STF. We have also discovered physical principles to guide further STF materials design through structural analysis. More broadly, our results illustrate the potential of using high-throughput ab initio simulations to design materials that undergo targeted structural transitions.

  7. Multi-GPU Accelerated Multi-Spin Monte Carlo Simulations of the 2D Ising Model

    CERN Document Server

    Block, Benjamin; Preis, Tobias; 10.1016/j.cpc.2010.05.005

    2010-01-01

    A modern graphics processing unit (GPU) is able to perform massively parallel scientific computations at low cost. We extend our implementation of the checkerboard algorithm for the two dimensional Ising model [T. Preis et al., J. Comp. Phys. 228, 4468 (2009)] in order to overcome the memory limitations of a single GPU, which enables us to simulate significantly larger systems. Using multi-spin coding techniques, we are able to accelerate simulations on a single GPU by factors up to 35 compared to an optimized single Central Processing Unit (CPU) core implementation which employs multi-spin coding. By combining the Compute Unified Device Architecture (CUDA) with the Message Passing Interface (MPI) on the CPU level, a single Ising lattice can be updated by a cluster of GPUs in parallel. For large systems, the computation time scales nearly linearly with the number of GPUs used. As proof of concept we reproduce the critical temperature of the 2D Ising model using finite size scaling techniques.
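The checkerboard decomposition the abstract relies on can be sketched on a CPU with NumPy. Sites of one colour only have neighbours of the other colour, so a whole colour can be updated simultaneously; lattice size, temperature, and sweep count below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)

# CPU sketch of the checkerboard Metropolis update for the 2D Ising model
# (J = 1, k_B = 1, periodic boundaries).  Sites of one colour only have
# neighbours of the other colour, so a whole colour can be updated in
# parallel -- the property the GPU implementation exploits.
L = 64
beta = 0.3                                   # T ~ 3.3, above T_c ~ 2.269
spins = rng.choice([-1, 1], size=(L, L))
checker = (np.add.outer(np.arange(L), np.arange(L)) % 2).astype(bool)

def sweep(s):
    for colour in (checker, ~checker):
        nbr = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
               np.roll(s, 1, 1) + np.roll(s, -1, 1))
        dE = 2.0 * s * nbr                   # energy cost of flipping
        # Metropolis: accept with probability min(1, exp(-beta * dE))
        flip = colour & (rng.random((L, L)) < np.exp(-beta * dE))
        s[flip] *= -1

for _ in range(50):
    sweep(spins)
magnetisation = abs(spins.mean())
```

Above the critical temperature the lattice stays disordered, so the magnetisation remains close to zero; multi-spin coding would additionally pack many such lattices into the bits of each machine word.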

  8. Particle Acceleration and the Origin of X-ray Flares in GRMHD simulations of Sgr A*

    CERN Document Server

    Ball, David; Psaltis, Dimitrios; Chan, Chi-kwan

    2016-01-01

    Significant X-ray variability and flaring has been observed from Sgr A* but is poorly understood from a theoretical standpoint. We perform GRMHD simulations that take into account a population of non-thermal electrons with energy distributions and injection rates that are motivated by PIC simulations of magnetic reconnection. We explore the effects of including these non-thermal electrons on the predicted broadband variability of Sgr A* and find that X-ray variability is a generic result of localizing non-thermal electrons to highly magnetized regions, where particles are likely to be accelerated via magnetic reconnection. The proximity of these high-field regions to the event horizon forms a natural connection between IR and X-ray variability and accounts for the rapid timescales associated with the X-ray flares. The qualitative nature of this variability is consistent with observations, producing X-ray flares that are always coincident with IR flares, but not vice versa, i.e., there are a number of IR flare...

  9. Accelerated path integral methods for atomistic simulations at ultra-low temperatures

    Science.gov (United States)

    Uhl, Felix; Marx, Dominik; Ceriotti, Michele

    2016-08-01

    Path integral methods provide a rigorous and systematically convergent framework to include the quantum mechanical nature of atomic nuclei in the evaluation of the equilibrium properties of molecules, liquids, or solids at finite temperature. Such nuclear quantum effects are often significant for light nuclei already at room temperature, but become crucial at cryogenic temperatures such as those provided by superfluid helium as a solvent. Unfortunately, the cost of converged path integral simulations increases significantly upon lowering the temperature so that the computational burden of simulating matter at the typical superfluid helium temperatures becomes prohibitive. Here we investigate how accelerated path integral techniques based on colored noise generalized Langevin equations, in particular the so-called path integral generalized Langevin equation thermostat (PIGLET) variant, perform in this extreme quantum regime using as an example the quasi-rigid methane molecule and its highly fluxional protonated cousin, CH5+. We show that the PIGLET technique gives a speedup of two orders of magnitude in the evaluation of structural observables and quantum kinetic energy at ultralow temperatures. Moreover, we computed the spatial spread of the quantum nuclei in CH4 to illustrate the limits of using such colored noise thermostats close to the many body quantum ground state.

  10. The Italian Project S2 - Task 4:Near-fault earthquake ground motion simulation in the Sulmona alluvial basin

    Science.gov (United States)

    Stupazzini, M.; Smerzini, C.; Cauzzi, C.; Faccioli, E.; Galadini, F.; Gori, S.

    2009-04-01

    Quarteroni and co-workers, starting from 1996, and the computational code GeoELSE (Stupazzini et al., 2009; http://GeoELSE.stru.polimi.it/). Finally, numerical results are compared with available data and attenuation relationships of peak values of ground motion in the near-fault regions elsewhere. Based on the results of this work, the unfavorable interaction between fault rupture, radiation mechanism and complex geological conditions may give rise to large values of peak ground velocity (exceeding 1 m/s) even in low-to-moderate seismicity areas, and therefore increase considerably the level of seismic risk, especially in highly populated and industrially active regions, such as Central Italy. Faccioli E., Maggio F., Paolucci R. and Quarteroni A. (1997), 2D and 3D elastic wave propagation by a pseudo-spectral domain decomposition method, Journal of Seismology, 1, 237-251. Field, E.H., T.H. Jordan, and C.A. Cornell (2003), OpenSHA: A Developing Community-Modeling Environment for Seismic Hazard Analysis, Seism. Res. Lett. 74, 406-419. Stupazzini M., R. Paolucci, H. Igel (2009), Near-fault earthquake ground motion simulation in the Grenoble Valley by a high-performance spectral element code, accepted for publication in Bull. of the Seism. Soc. of America.

  11. Computational Materials Science and Chemistry: Accelerating Discovery and Innovation through Simulation-Based Engineering and Science

    Energy Technology Data Exchange (ETDEWEB)

    Crabtree, George [Argonne National Lab. (ANL), Argonne, IL (United States); Glotzer, Sharon [University of Michigan; McCurdy, Bill [University of California Davis; Roberto, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2010-07-26

    This report is based on a SC Workshop on Computational Materials Science and Chemistry for Innovation on July 26-27, 2010, to assess the potential of state-of-the-art computer simulations to accelerate understanding and discovery in materials science and chemistry, with a focus on potential impacts in energy technologies and innovation. The urgent demand for new energy technologies has greatly exceeded the capabilities of today's materials and chemical processes. To convert sunlight to fuel, efficiently store energy, or enable a new generation of energy production and utilization technologies requires the development of new materials and processes of unprecedented functionality and performance. New materials and processes are critical pacing elements for progress in advanced energy systems and virtually all industrial technologies. Over the past two decades, the United States has developed and deployed the world's most powerful collection of tools for the synthesis, processing, characterization, and simulation and modeling of materials and chemical systems at the nanoscale, dimensions of a few atoms to a few hundred atoms across. These tools, which include world-leading x-ray and neutron sources, nanoscale science facilities, and high-performance computers, provide an unprecedented view of the atomic-scale structure and dynamics of materials and the molecular-scale basis of chemical processes. For the first time in history, we are able to synthesize, characterize, and model materials and chemical behavior at the length scale where this behavior is controlled. This ability is transformational for the discovery process and, as a result, confers a significant competitive advantage. Perhaps the most spectacular increase in capability has been demonstrated in high performance computing. Over the past decade, computational power has increased by a factor of a million due to advances in hardware and software. This rate of improvement, which shows no sign of

  12. Combining rigorous diffraction calculation and GPU accelerated nonsequential raytracing for high precision simulation of a linear grating spectrometer

    Science.gov (United States)

    Mauch, Florian; Fleischle, David; Lyda, Wolfram; Osten, Wolfgang; Krug, Torsten; Häring, Reto

    2011-05-01

    Simulation of grating spectrometers constitutes the problem of propagating a spectrally broad light field through a macroscopic optical system that contains a nanostructured grating surface. The interest of the simulation is to quantify and optimize the stray light behaviour, which is the limiting factor in modern high-end spectrometers. In order to accomplish this we present a simulation scheme that combines a RCWA (rigorous coupled wave analysis) simulation of the grating surface with a self-made GPU (graphics processing unit) accelerated nonsequential raytracer. Using this, we are able to represent the broad spectrum of the light field as a superposition of many monochromatic ray sets and to handle the huge number of rays in reasonable time.

  13. Advanced Simulation and Optimization Tools for Dynamic Aperture of Non-scaling FFAGs and Accelerators including Modern User Interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Mills, F.; Makino, Kyoko; Berz, Martin; Johnstone, C.

    2010-09-01

    With the U.S. experimental effort in HEP largely located at laboratories supporting the operations of large, highly specialized accelerators, colliding beam facilities, and detector facilities, the understanding and prediction of high energy particle accelerators becomes critical to the success, overall, of the DOE HEP program. One area in which small businesses can contribute to the ongoing success of the U.S. program in HEP is through innovations in computer techniques and sophistication in the modeling of high-energy accelerators. Accelerator modeling at these facilities is performed by experts with the product generally highly specific and representative only of in-house accelerators or special-interest accelerator problems. Development of new types of accelerators like FFAGs with their wide choices of parameter modifications, complicated fields, and the simultaneous need to efficiently handle very large emittance beams requires the availability of new simulation environments to assure predictability in operation. In this, ease of use and interfaces are critical to realizing a successful model, or optimization of a new design or working parameters of machines. In Phase I, various core modules for the design and analysis of FFAGs were developed and Graphical User Interfaces (GUI) have been investigated instead of the more general yet less easily manageable console-type output COSY provides.

  14. Muscle contributions to centre of mass acceleration during turning gait in typically developing children: A simulation study.

    Science.gov (United States)

    Dixon, Philippe C; Jansen, Karen; Jonkers, Ilse; Stebbins, Julie; Theologis, Tim; Zavatsky, Amy B

    2015-12-16

    Turning while walking requires substantial joint kinematic and kinetic adaptations compared to straight walking in order to redirect the body centre of mass (COM) towards the new walking direction. The role of muscles and external forces in controlling and redirecting the COM during turning remains unclear. The aim of this study was to compare the contributors to COM medio-lateral acceleration during 90° pre-planned turns about the inside limb (spin) and straight walking in typically developing children. Simulations of straight walking and turning gait based on experimental motion data were implemented in OpenSim. The contributors to COM global medio-lateral acceleration during the approach (outside limb) and turn (inside limb) stance phase were quantified via an induced acceleration analysis. Changes in medio-lateral COM acceleration occurred during both turning phases compared to straight walking. These findings may be used clinically to guide the management of gait disorders in populations with restricted gait ability.

  15. Accelerating groundwater flow simulation in MODFLOW using JASMIN-based parallel computing.

    Science.gov (United States)

    Cheng, Tangpei; Mo, Zeyao; Shao, Jingli

    2014-01-01

    To accelerate the groundwater flow simulation process, this paper reports our work on developing an efficient parallel simulator through rebuilding the well-known software MODFLOW on JASMIN (J Adaptive Structured Meshes applications Infrastructure). The rebuilding process is achieved by designing patch-based data structures and parallel algorithms as well as adding slight modifications to the compute flow and subroutines in MODFLOW. Both the memory requirements and computing efforts are distributed among all processors; and to reduce communication cost, data transfers are batched and conveniently handled by adding ghost nodes to each patch. To further improve performance, constant-head/inactive cells are tagged and neglected during the linear solving process and an efficient load balancing strategy is presented. The accuracy and efficiency are demonstrated through modeling three scenarios: the first application is a field flow problem located at Yanming Lake in China to help design a reasonable quantity of groundwater exploitation. Desirable numerical accuracy and significant performance enhancement are obtained. Typically, the tagged program with the load balancing strategy running on 40 cores is six times faster than the fastest MICCG-based MODFLOW program. The second test is simulating flow in a highly heterogeneous aquifer. The AMG-based JASMIN program running on 40 cores is nine times faster than the GMG-based MODFLOW program. The third test is a simplified transient flow problem with the order of tens of millions of cells to examine the scalability. Compared to 32 cores, parallel efficiencies of 77% and 68% are obtained on 512 and 1024 cores, respectively, which indicates impressive scalability.
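The patch-plus-ghost-node idea in this abstract can be shown in miniature: split a 1-D Laplace problem into two patches, each with one ghost node per shared edge, and copy the neighbour's edge value into the ghost slot before every sweep. Sizes, iteration counts, and boundary values below are invented for the example and have nothing to do with the MODFLOW/JASMIN internals.

```python
import numpy as np

# Miniature analogue of the patch-based decomposition with ghost nodes:
# a 1-D Laplace problem split into two patches.  The "data transfer" is
# just copying the neighbour's edge value into the ghost slot.
n = 20                               # interior unknowns per patch
left = np.zeros(n + 2)               # [boundary | interior | ghost]
right = np.zeros(n + 2)              # [ghost | interior | boundary]
left[0] = 1.0                        # Dirichlet condition u(0) = 1
right[-1] = 0.0                      # Dirichlet condition u(1) = 0

for _ in range(20000):
    left[-1] = right[1]              # ghost updates: the batched transfer
    right[0] = left[-2]
    left[1:-1] = 0.5 * (left[:-2] + left[2:])     # Jacobi sweep per patch
    right[1:-1] = 0.5 * (right[:-2] + right[2:])

# The discrete solution of u'' = 0 with these boundary values is linear.
u = np.concatenate([left[1:-1], right[1:-1]])
x = np.arange(1, 2 * n + 1) / (2 * n + 1)
max_err = np.max(np.abs(u - (1.0 - x)))
```

Because the ghost values are refreshed from the previous iterate, the two-patch sweep reproduces global Jacobi exactly; batching all ghost copies before the sweep is the 1-D analogue of the batched transfers described above.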

  16. Generalized Temporal Acceleration Scheme for Kinetic Monte Carlo Simulations of Surface Catalytic Processes by Scaling the Rates of Fast Reactions.

    Science.gov (United States)

    Dybeck, Eric Christopher; Plaisance, Craig Patrick; Neurock, Matthew

    2017-02-14

    A novel algorithm has been developed to achieve temporal acceleration during kinetic Monte Carlo (KMC) simulations of surface catalytic processes. This algorithm allows for the direct simulation of reaction networks containing kinetic processes occurring on vastly disparate timescales which computationally overburden standard KMC methods. Previously developed methods for temporal acceleration in KMC have been designed for specific systems and often require a priori information from the user such as identifying the fast and slow processes. In the approach presented herein, quasi-equilibrated processes are identified automatically based on previous executions of the forward and reverse reactions. Temporal acceleration is achieved by automatically scaling the intrinsic rate constants of the quasi-equilibrated processes, bringing their rates closer to the timescales of the slow kinetically relevant non-equilibrated processes. All reactions are still simulated directly, although with modified rate constants. Abrupt changes in the underlying dynamics of the reaction network are identified during the simulation and the reaction rate constants are rescaled accordingly. The algorithm has been utilized here to model the Fischer-Tropsch synthesis reaction over ruthenium nanoparticles. This reaction network has multiple timescale-disparate processes which would be intractable to simulate without the aid of temporal acceleration. The accelerated simulations are found to give reaction rates and selectivities indistinguishable from those calculated by an equivalent mean-field kinetic model. The computational savings of the algorithm can span many orders of magnitude in realistic systems and the computational cost is not limited by the magnitude of the timescale disparity in the system processes. Furthermore, the algorithm has been designed in a generic fashion and can easily be applied to other surface catalytic processes of interest.
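The core idea, detecting a quasi-equilibrated fast reaction pair from its forward/reverse execution counts and scaling down its rate constants, can be sketched with a toy Gillespie-style simulation. The network, rate constants, scaling factor, and detection thresholds below are all invented for the example; the published algorithm is considerably more careful about rescaling and unscaling.

```python
import math
import random

random.seed(3)

# Toy network: A <=> B is fast and quasi-equilibrated, B -> C is the
# slow, kinetically relevant step.
counts = {"A": 500, "B": 0, "C": 0}
k = {"AtoB": 1000.0, "BtoA": 1000.0, "BtoC": 0.1}
executed = {"AtoB": 0, "BtoA": 0, "BtoC": 0}
t, alpha, check_every = 0.0, 100.0, 500

for step in range(1, 20001):
    a = {"AtoB": k["AtoB"] * counts["A"],
         "BtoA": k["BtoA"] * counts["B"],
         "BtoC": k["BtoC"] * counts["B"]}
    total = sum(a.values())
    if total == 0.0:                      # everything converted to C
        break
    t += -math.log(random.random()) / total   # exponential waiting time
    r, acc = random.random() * total, 0.0
    for name, ai in a.items():            # select the next reaction
        acc += ai
        if r <= acc:
            break
    executed[name] += 1
    if name == "AtoB":
        counts["A"] -= 1; counts["B"] += 1
    elif name == "BtoA":
        counts["A"] += 1; counts["B"] -= 1
    else:
        counts["B"] -= 1; counts["C"] += 1
    # Detect quasi-equilibration: forward and reverse executions of the
    # fast pair nearly balance, so both rate constants can be scaled
    # down without disturbing the slow dynamics.
    if step % check_every == 0:
        f, b = executed["AtoB"], executed["BtoA"]
        if f + b > 100 and abs(f - b) / (f + b) < 0.05:
            k["AtoB"] /= alpha
            k["BtoA"] /= alpha
        executed = {"AtoB": 0, "BtoA": 0, "BtoC": 0}

mass = counts["A"] + counts["B"] + counts["C"]
```

Note how the criterion self-limits: once the scaled pair no longer dominates, the net A-to-B flux feeding the slow step unbalances the execution counts and further scaling stops.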

  17. Dislocation motion and the microphysics of flash heating and weakening of faults during earthquakes

    OpenAIRE

    Elena Spagnuolo; Oliver Plümper; Marie Violay; Andrea Cavallo; Giulio Di Toro

    2016-01-01

    Earthquakes are the result of slip along faults and are due to the decrease of rock frictional strength (dynamic weakening) with increasing slip and slip rate. Friction experiments simulating the abrupt accelerations (>>10 m/s2), slip rates (~1 m/s), and normal stresses (>>10 MPa) expected at the passage of the earthquake rupture along the front of fault patches, measured large fault dynamic weakening for slip rates larger than a critical velocity of 0.01–0.1 m/s. The dynamic weak...

  18. Dislocation motion and the microphysics of flash heating and weakening of faults during earthquakes

    NARCIS (Netherlands)

    Spagnuolo, Elena; Plümper, Oliver; Violay, Marie; Cavallo, Andrea; Di Toro, Giulio

    2016-01-01

    Earthquakes are the result of slip along faults and are due to the decrease of rock frictional strength (dynamic weakening) with increasing slip and slip rate. Friction experiments simulating the abrupt accelerations (>>10 m/s2), slip rates (~1 m/s), and normal stresses (>>10 MPa) expected at the pa

  19. Accelerated 20-year sunlight exposure simulation of a photochromic foldable intraocular lens in a rabbit model

    Science.gov (United States)

    Werner, Liliana; Abdel-Aziz, Salwa; Peck, Carolee Cutler; Monson, Bryan; Espandar, Ladan; Zaugg, Brian; Stringham, Jack; Wilcox, Chris; Mamalis, Nick

    2011-01-01

    PURPOSE To assess the long-term biocompatibility and photochromic stability of a new photochromic hydrophobic acrylic intraocular lens (IOL) under extended ultraviolet (UV) light exposure. SETTING John A. Moran Eye Center, University of Utah, Salt Lake City, Utah, USA. DESIGN Experimental study. METHODS A Matrix Aurium photochromic IOL was implanted in right eyes and a Matrix Acrylic IOL without photochromic properties (n = 6) or a single-piece AcrySof Natural SN60AT (N = 5) IOL in left eyes of 11 New Zealand rabbits. The rabbits were exposed to a UV light source of 5 mW/cm2 for 3 hours during every 8-hour period, equivalent to 9 hours a day, and followed for up to 12 months. The photochromic changes were evaluated during slitlamp examination by shining a penlight UV source in the right eye. After the rabbits were humanely killed and the eyes enucleated, study and control IOLs were explanted and evaluated in vitro on UV exposure and studied histopathologically. RESULTS The photochromic IOL was as biocompatible as the control IOLs after 12 months under conditions simulating at least 20 years of UV exposure. In vitro evaluation confirmed the retained optical properties, with photochromic changes observed within 7 seconds of UV exposure. The rabbit eyes had clinical and histopathological changes expected in this model with a 12-month follow-up. CONCLUSIONS The new photochromic IOL turned yellow only on exposure to UV light. The photochromic changes were reversible, reproducible, and stable over time. The IOL was biocompatible with up to 12 months of accelerated UV exposure simulation. PMID:21241924

  20. Computer simulations for a deceleration and radio frequency quadrupole instrument for accelerator ion beams

    Energy Technology Data Exchange (ETDEWEB)

    Eliades, J.A., E-mail: j.eliades@alum.utoronto.ca; Kim, J.K.; Song, J.H.; Yu, B.Y.

    2015-10-15

    Radio-frequency quadrupole (RFQ) technology incorporated into the low energy ion beam line of an accelerator system can greatly broaden the range of applications and facilitate unique experimental capabilities. However, tens-of-keV kinetic energy negative ion beams with large emittances and energy spreads must first be decelerated down to <100 eV for ion–gas interactions, placing special demands on the deceleration optics and RFQ design. A system with large analyte transmission in the presence of gas has so far proven challenging. Presented are computer simulations using SIMION 8.1 for an ion deceleration and RFQ ion guide instrument design. The code included user-defined gas pressure gradients and threshold energies for ion–gas collisional losses. Results suggest a 3 mm diameter, 35 keV ³⁶Cl⁻ ion beam with 8 eV full-width half maximum Gaussian energy spread and 35 mrad angular divergence can be efficiently decelerated and then cooled in He gas, with a maximum pressure of 7 mTorr, to 2 eV within 450 mm in the RFQs. Vacuum transmissions were 100%. Ion energy distributions at initial RFQ capture are shown to be much larger than the average value expected from the deceleration potential, and this appears to be a general result arising from kinetic energy gain in the RFQ field. In these simulations, a potential for deceleration to 25 eV resulted in a 30 eV average energy distribution with a small fraction of ions >70 eV.

  1. Fault Estimation

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, H.

    2002-01-01

    This paper presents a range of optimization based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem setup introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis problems can be solved by standard optimization techniques. The proposed methods include: (1) fault diagnosis (fault estimation, (FE)) for systems with model uncertainties; (2) FE for systems with parametric faults, and (3) FE for a class of nonlinear systems.

  2. TrishearCreator: A tool for the kinematic simulation and strain analysis of trishear fault-propagation folding with growth strata

    Science.gov (United States)

    Liu, Chun; Yin, Hongwei; Zhu, Lili

    2012-12-01

    TrishearCreator is a platform-independent web program built in Flash that enables fold modeling, numerical simulation of trishear fault-propagation folding, strain analysis, etc. In the program, various types of original strata, such as folds and inclined strata, can be easily constructed by adjusting shape parameters. In the simulation of trishear fault-propagation folding, growth strata and strain ellipses are calculated and displayed simultaneously. The web-based program is easy to use: model parameters are changed by simple mouse actions, which have the advantage of speed and simplicity, and it gives an instant visual appreciation of the effect of changing the parameters used to construct the initial configuration of the model and the fault-propagation folding. These data can be exported to a text file and shared with other geologists to replay the kinematic evolution of structures using the program.
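
    The trishear kinematics described above can be sketched with the standard linear (s = 1) symmetric velocity field of Zehnder and Allmendinger; this is a generic illustration, not necessarily the exact formulation used in TrishearCreator, and the function name is ours:

```python
import math

def trishear_velocity(x, y, v0=1.0, phi=math.radians(30)):
    """Linear (s = 1) symmetric trishear velocity field ahead of a fault tip
    at the origin, with fault slip parallel to +x (after Zehnder &
    Allmendinger). Valid for x > 0, i.e. ahead of the tip."""
    half_width = x * math.tan(phi)
    if y >= half_width:          # hanging wall: rigid translation at v0
        return v0, 0.0
    if y <= -half_width:         # footwall: fixed
        return 0.0, 0.0
    zeta = y / half_width        # -1 at footwall edge, +1 at hanging-wall edge
    vx = 0.5 * v0 * (zeta + 1.0)
    vy = 0.25 * v0 * math.tan(phi) * (zeta**2 - 1.0)  # chosen to conserve area
    return vx, vy
```

    Integrating marker points through this field step by step (and periodically adding new growth-strata points at the surface) reproduces the kinematic forward model that such programs animate.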

  3. Accelerating moderately stiff chemical kinetics in reactive-flow simulations using GPUs

    CERN Document Server

    Niemeyer, Kyle E

    2014-01-01

    The chemical kinetics ODEs arising from operator-split reactive-flow simulations were solved on GPUs using explicit integration algorithms. Nonstiff chemical kinetics of a hydrogen oxidation mechanism (9 species and 38 irreversible reactions) were computed using the explicit fifth-order Runge-Kutta-Cash-Karp method, and the GPU-accelerated version performed faster than single- and six-core CPU versions by factors of 126 and 25, respectively, for 524,288 ODEs. Moderately stiff kinetics, represented by mechanisms for hydrogen/carbon-monoxide (13 species and 54 irreversible reactions) and methane (53 species and 634 irreversible reactions) oxidation, were computed using the stabilized explicit second-order Runge-Kutta-Chebyshev (RKC) algorithm. The GPU-based RKC implementation demonstrated performance increases of nearly 59 and 10 times over the single- and six-core CPU-based RKC algorithms, respectively, for problem sizes of 262,144 ODEs and larger, using the hydrogen/carbon-monoxide mechanism. With the met...
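
    A single step of the Runge-Kutta-Cash-Karp method named above can be sketched as follows for a scalar ODE; this is a plain CPU illustration of the integrator family, omitting the GPU batching that the paper is about:

```python
def cash_karp_step(f, t, y, h):
    """One explicit 5th-order Runge-Kutta-Cash-Karp step for a scalar ODE
    y' = f(t, y); returns (y5, err), where err is the difference between the
    embedded 5th- and 4th-order solutions, used for step-size control."""
    k1 = f(t, y)
    k2 = f(t + h/5, y + h*(k1/5))
    k3 = f(t + 3*h/10, y + h*(3*k1/40 + 9*k2/40))
    k4 = f(t + 3*h/5, y + h*(3*k1/10 - 9*k2/10 + 6*k3/5))
    k5 = f(t + h, y + h*(-11*k1/54 + 5*k2/2 - 70*k3/27 + 35*k4/27))
    k6 = f(t + 7*h/8, y + h*(1631*k1/55296 + 175*k2/512 + 575*k3/13824
                             + 44275*k4/110592 + 253*k5/4096))
    y5 = y + h*(37*k1/378 + 250*k3/621 + 125*k4/594 + 512*k6/1771)
    y4 = y + h*(2825*k1/27648 + 18575*k3/48384 + 13525*k4/55296
                + 277*k5/14336 + k6/4)
    return y5, y5 - y4
```

    On a GPU, one such step is applied to hundreds of thousands of independent cells' ODE systems in parallel, which is where the reported speedups come from.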

  4. The GENGA code: gravitational encounters in N-body simulations with GPU acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Grimm, Simon L.; Stadel, Joachim G., E-mail: sigrimm@physik.uzh.ch [Institute for Computational Science, University of Zürich, Winterthurerstrasse 190, CH-8057 Zürich (Switzerland)

    2014-11-20

    We describe an open source GPU implementation of a hybrid symplectic N-body integrator, GENGA (Gravitational ENcounters with Gpu Acceleration), designed to integrate planet and planetesimal dynamics in the late stage of planet formation and stability analyses of planetary systems. GENGA uses a hybrid symplectic integrator to handle close encounters with very good energy conservation, which is essential in long-term planetary system integration. We extended the second-order hybrid integration scheme to higher orders. The GENGA code supports three simulation modes: integration of up to 2048 massive bodies, integration with up to a million test particles, or parallel integration of a large number of individual planetary systems. We compare the results of GENGA to Mercury and pkdgrav2 in terms of energy conservation and performance and find that the energy conservation of GENGA is comparable to Mercury and around two orders of magnitude better than pkdgrav2. GENGA runs up to 30 times faster than Mercury and up to 8 times faster than pkdgrav2. GENGA is written in CUDA C and runs on all NVIDIA GPUs with a computing capability of at least 2.0.
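
    When no close encounters occur, a hybrid symplectic integrator of this kind reduces to a kick-drift-kick leapfrog; the following minimal sketch (our own toy, not GENGA's actual scheme) illustrates the bounded energy error that motivates symplectic integration for long-term planetary dynamics:

```python
def leapfrog_orbit(n_steps=1000, dt=1e-3):
    """Kick-drift-kick leapfrog for a 2D Kepler orbit with GM = 1.
    Returns the relative energy error after n_steps; for a symplectic
    scheme this stays bounded instead of drifting secularly."""
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0        # circular orbit, r = v = 1

    def accel(x, y):
        r3 = (x*x + y*y) ** 1.5
        return -x / r3, -y / r3

    def energy(x, y, vx, vy):
        return 0.5*(vx*vx + vy*vy) - 1.0 / (x*x + y*y) ** 0.5

    e0 = energy(x, y, vx, vy)
    ax, ay = accel(x, y)
    for _ in range(n_steps):
        vx += 0.5*dt*ax; vy += 0.5*dt*ay     # half kick
        x += dt*vx; y += dt*vy               # drift
        ax, ay = accel(x, y)
        vx += 0.5*dt*ax; vy += 0.5*dt*ay     # half kick
    return abs((energy(x, y, vx, vy) - e0) / e0)
```

    A hybrid scheme such as GENGA's switches the interaction between encountering bodies to a direct, more accurate integration while keeping the symplectic splitting for everything else.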

  5. GPU accelerated Hybrid Tree Algorithm for Collision-less N-body Simulations

    CERN Document Server

    Watanabe, Tsuyoshi

    2014-01-01

    We propose a hybrid tree algorithm for reducing the calculation and communication cost of collisionless N-body simulations. The idea of the algorithm is to split the interaction force into two parts, a hard force from neighbor particles and a soft force from distant particles, and to apply different time integration schemes to the two parts. For the hard-force calculation, the calculation and communication cost of the parallel tree code can be reduced efficiently because only data of neighbor particles are needed for this part. We implement the algorithm on GPU clusters to accelerate the force calculation for both the hard and soft forces. With this implementation, we were able to reduce the communication cost and the total execution time to 40% and 80% of those of a normal tree algorithm, respectively. In addition, the reduction factor relative to the normal tree algorithm is smaller for large numbers of processes, and we expect that the execution time can ultimately be reduced down to about 70% of the norma...

  6. Monte Carlo simulations for 20 MV X-ray spectrum reconstruction of a linear induction accelerator

    Institute of Scientific and Technical Information of China (English)

    WANG Yi; LI Qin; JIANG Xiao-Guo

    2012-01-01

    To study the spectrum reconstruction of the 20 MV X-ray generated by the Dragon-I linear induction accelerator, the Monte Carlo method is applied to simulate the attenuation of the X-ray in attenuators of different thicknesses and thus provide the transmission data. As is known, spectrum estimation from transmission data is an ill-conditioned problem. A method based on iterative perturbations is employed to derive the X-ray spectra, where initial guesses are used to start the process. This algorithm takes into account not only the minimization of the differences between the measured and the calculated transmissions but also the smoothness feature of the spectrum function. In this work, various filter materials are used as the attenuator, and the condition for an accurate and robust solution of the X-ray spectrum calculation is demonstrated. The influences of the scattering photons within different intervals of emergence angle on the X-ray spectrum reconstruction are also analyzed.
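
    The ill-conditioned inversion of transmission data can be illustrated with a toy multiplicative-update scheme; this is a generic EM-style iteration on a binned spectrum, not the authors' specific perturbation algorithm, and all coefficients below are made up for illustration:

```python
import numpy as np

def reconstruct_spectrum(mu, x, t_meas, n_iter=500):
    """Toy reconstruction of a binned X-ray spectrum s from transmission
    measurements t_meas through attenuators of thickness x[i]. Forward
    model: T(x_i) = sum_j s_j * exp(-mu_j * x_i), with sum_j s_j = 1,
    where mu[j] is the attenuation coefficient of energy bin j."""
    A = np.exp(-np.outer(x, mu))            # (n_thickness, n_bins)
    s = np.full(len(mu), 1.0 / len(mu))     # flat initial guess
    for _ in range(n_iter):
        t_calc = A @ s
        # multiplicative (Richardson-Lucy-like) update, keeps s >= 0
        s *= (A.T @ (t_meas / t_calc)) / A.sum(axis=0)
        s /= s.sum()
    return s
```

    Because the columns of A are nearly collinear exponentials, the problem is badly conditioned, which is why the paper adds a smoothness constraint on top of the data-misfit minimization.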

  7. Numerical simulations of recent proton acceleration experiments with sub-100 TW laser systems

    Energy Technology Data Exchange (ETDEWEB)

    Sinigardi, Stefano, E-mail: sinigardi@bo.infn.it

    2016-09-01

    Recent experiments carried out at the Italian National Research Center, National Optics Institute Department in Pisa, are showing interesting results regarding the maximum proton energies achievable with sub-100 TW laser systems. While laser systems are being continuously upgraded in laboratories around the world, a new trend toward stabilizing ion acceleration and making its results reproducible is growing in importance. Almost all applications require a beam with fixed performance, such that the energy spectrum and the total charge exhibit moderate shot-to-shot variations. This goal is surely far from being achieved, but many paths are being explored in order to reach it. Some of the variability comes from fluctuations in laser intensity and focusing due to optics instability. Other variation sources come from small differences in the target structure. The target structure can vary substantially when it is impacted by the main pulse, depending on the prepulse duration and intensity, the shape of the main pulse, and the total energy deposited. In order to qualitatively describe the prepulse effect, we present a two-dimensional parametric scan of its relevant parameters. A single case is also analyzed with a full three-dimensional simulation, obtaining reasonable agreement between the numerical and the experimental energy spectrum.

  8. Particle acceleration inside PWN: Simulation and observational constraints with INTEGRAL

    Energy Technology Data Exchange (ETDEWEB)

    Forot, M

    2006-12-15

    The context of this thesis is to gain new constraints on the different particle accelerators at work in the complex environment of neutron stars: in the pulsar magnetosphere, in the striped wind or wave outside the light cylinder, in the jets and equatorial wind, and at the wind terminal shock. An important tool to constrain both the magnetic field and primary particle energies is to image the synchrotron ageing of the particle population, but this requires careful modelling of the magnetic field evolution in the wind flow. The current models and understanding of these different accelerators, the acceleration processes, and the open questions are reviewed in the first part of the thesis. The instrumental part of this work involves the IBIS imager, on board the INTEGRAL satellite, which provides images with 12' resolution from 17 keV to MeV energies, where the SPI spectrometer takes over up to 10 MeV, but with a reduced 2-degree resolution. A new method for using the double-layer IBIS imager as a Compton telescope with coded mask aperture has been developed, and its performance has been measured. The Compton scattering information and the achieved sensitivity also open a new window for polarimetry in gamma rays. A method has been developed to extract the linear polarization properties and to check the instrument response for fake polarimetric signals in the various backgrounds and projection effects.

  9. Acceleration of the matrix multiplication of Radiance three phase daylighting simulations with parallel computing on heterogeneous hardware of personal computer

    Energy Technology Data Exchange (ETDEWEB)

    Zuo, Wangda [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wetter, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lee, Eleanor S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2013-05-23

    Building designers are increasingly relying on complex fenestration systems to reduce energy consumed for lighting and HVAC in low-energy buildings. Radiance, a lighting simulation program, has been used to conduct daylighting simulations for complex fenestration systems. Depending on the configuration, the simulation can take hours or even days on a personal computer. This paper describes how to accelerate the matrix multiplication portion of a Radiance three-phase daylight simulation by performing parallel computing on the heterogeneous hardware of a personal computer. The algorithm was optimized and the computational part was implemented in parallel using OpenCL. The speed of the new approach was evaluated using various daylighting simulation cases on a multicore central processing unit and a graphics processing unit. Based on the measurements and analysis of the time usage for the Radiance daylighting simulation, further speedups can be achieved by using fast I/O devices and storing the data in a binary format.
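
    The three-phase method evaluates sensor illuminance as the matrix chain i = V T D s, where V is the view matrix (window to sensors), T the fenestration BSDF transmission matrix, D the daylight matrix (sky to window), and s a sky vector; these products are exactly what is being accelerated above. A small numpy sketch with illustrative sizes (the 145 Klems patches and 2305 sky patches are conventional subdivisions; everything else, including the random contents, is made up):

```python
import numpy as np

rng = np.random.default_rng(0)

V = rng.random((100, 145))    # view matrix: 145 window patches -> 100 sensors
T = rng.random((145, 145))    # BSDF transmission matrix of the fenestration
D = rng.random((145, 2305))   # daylight matrix: 2305 sky patches -> window
s = rng.random(2305)          # sky radiance vector for one time step

# An annual simulation evaluates i = V T D s for thousands of sky vectors.
# Pre-multiplying the static chain M = V T D once turns every time step into
# a single matrix-vector product, which is the part worth offloading to a
# GPU / OpenCL kernel.
M = V @ T @ D
i_fast = M @ s
i_naive = V @ (T @ (D @ s))
```

    The associativity of matrix multiplication guarantees both orderings give the same illuminances; only the operation count differs.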

  10. Application of the Reduction of Scale Range in a Lorentz Boosted Frame to the Numerical Simulation of Particle Acceleration Devices

    Energy Technology Data Exchange (ETDEWEB)

    Vay, J.-L.; Fawley, W.M.; Geddes, C.G.R.; Cormier-Michel, E.; Grote, D.P.

    2009-05-01

    It has been shown [1] that it may be computationally advantageous to perform computer simulations in a boosted frame for a certain class of systems: particle beams interacting with electron clouds, free electron lasers, and laser-plasma accelerators. However, even if the computer model relies on a covariant set of equations, it was also pointed out that algorithmic difficulties related to discretization errors may have to be overcome in order to take full advantage of the potential speedup [2]. In this paper, we focus on the analysis of the complication of data input and output in a Lorentz boosted frame simulation, and describe the procedures that were implemented in the simulation code Warp [3]. We present our most recent progress in the modeling of laser wakefield acceleration in a boosted frame, and describe briefly the potential benefits of calculating in a boosted frame for the modeling of coherent synchrotron radiation.
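
    The reduction-of-scale-range argument can be made concrete with the standard Lorentz transformation; the relations below are textbook kinematics (wavelength dilation by gamma*(1+beta) for a counter-propagating laser, length contraction of the plasma column by gamma), sketched here independently of the Warp implementation:

```python
def boost(t, z, beta, c=1.0):
    """Lorentz boost of an event (t, z) into a frame moving at velocity
    beta*c along +z; the interval c^2 t^2 - z^2 is invariant."""
    gamma = 1.0 / (1.0 - beta * beta) ** 0.5
    return gamma * (t - beta * z / c), gamma * (z - beta * c * t)

def scale_ratio_reduction(beta):
    """In a frame co-moving with the wake at beta, the laser wavelength
    dilates by gamma*(1+beta) while the plasma column contracts by gamma,
    so the ratio of largest to smallest length scale (the quantity that
    sets the grid size) shrinks by roughly gamma**2 * (1 + beta)."""
    gamma = 1.0 / (1.0 - beta * beta) ** 0.5
    return gamma * gamma * (1.0 + beta)
```

    The data input/output complication discussed in the paper arises because planes of simultaneity differ between frames, so lab-frame diagnostics must be assembled from many boosted-frame snapshots.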

  11. Solar Energetic Particle Acceleration in the Solar Corona with Simulated Field Line Random Walk and Wave Generation

    Science.gov (United States)

    Arthur, A. D.; le Roux, J. A.

    2014-12-01

    Observations of extreme solar energetic particle (SEP) events associated with coronal mass ejection driven shocks have detected particle energies up to a few GeV at 1 AU within the first ~10 minutes to 1 hour of shock acceleration. It is currently not well understood whether or not shock acceleration can act alone in these events or if some combination of successive shocks or solar flares is required. To investigate this, we updated our current model which has been successfully applied to the termination shock and traveling interplanetary shocks. The model solves the time-dependent Focused Transport Equation including particle preheating due to the cross shock electric field and the divergence, adiabatic compression, and acceleration of the solar wind. Particle interaction with MHD wave turbulence is modeled in terms of gyro-resonant interactions with parallel propagating Alfvén waves and diffusive shock acceleration is included via the first-order Fermi mechanism for parallel shocks. The observed onset times of the extreme SEP events place the shock in the corona when the particles escape upstream, therefore, we extended our model to include coronal conditions for the solar wind and magnetic field. Additional features were introduced to investigate two aspects of MHD wave turbulence in contributing to efficient particle acceleration at a single fast parallel shock; (1) We simulate field-line random walk on time scales much larger than a particle gyro-period to investigate how the stochastic element added to particle injection and the first-order Fermi mechanism affects the efficiency of particle acceleration. (2) Previous modeling efforts show that the ambient solar wind turbulence is too weak to quickly accelerate SEPs to GeV energies. To improve the efficiency of acceleration for a single shock, we included upstream Alfvén wave amplification due to gyro-resonant interactions with SEPs and we constrained the wave growth to not violate the Bohm limit.

  12. Community Project for Accelerator Science and Simulation (ComPASS) Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Cary, John R. [Tech-X Corporation, Boulder, CO (United States); Cowan, Benjamin M. [Tech-X Corporation, Boulder, CO (United States); Veitzer, S. A. [Tech-X Corporation, Boulder, CO (United States)

    2016-03-04

    Tech-X participated across the full range of ComPASS activities, with efforts in the Energy Frontier primarily through modeling of laser plasma accelerators and dielectric laser acceleration, in the Intensity Frontier primarily through electron cloud modeling, and in Uncertainty Quantification being applied to dielectric laser acceleration. In the following we present the progress and status of our activities for the entire period of the ComPASS project for the different areas of Energy Frontier, Intensity Frontier and Uncertainty Quantification.

  13. Strain modelling of extensional fault-propagation folds based on an improved non-linear trishear model: A numerical simulation analysis

    Science.gov (United States)

    Zhao, Haonan; Guo, Zhaojie; Yu, Xiangjiang

    2017-02-01

    This paper focuses on the strain modelling of extensional fault-propagation folds to reveal the effects of key factors on the strain accumulation and the relationship between the geometry and strain distribution of fault-related folds. A velocity-geometry-strain method is proposed for the analysis of the total strain and its accumulation process within the trishear zone of an extensional fault-propagation fold. This paper improves the non-linear trishear model proposed by Jin and Groshong (2006). Based on the improved model, the distribution of the strain rate within the trishear zone and the total strain are obtained. The numerical simulations of different parameters performed in this study indicate that the shape factor R, the total apical angle, and the P/S ratio control the final geometry and strain distribution of extensional fault-propagation folds. A small P/S ratio, a small apical angle, and an R value far greater or far smaller than 1 produce an asymmetric, narrow, and strongly deformed trishear zone. The velocity-geometry-strain analysis method is applied to two natural examples from Big Brushy Canyon in Texas and the northwestern Red Sea in Egypt. The strain distribution within the trishear zone is closely related to the geometry of the folds.

  14. The $\Lambda$CDM simulations of Keller and Wadsley do not account for the MOND mass-discrepancy-acceleration relation

    CERN Document Server

    Milgrom, Mordehai

    2016-01-01

    Keller and Wadsley (2016) have recently suggested that the end of MOND may be in view. This is based on their claim that their highly restricted sample of $\Lambda$CDM-simulated galaxies is "consistent" with the observed MOND mass-discrepancy-acceleration relation (MDAR), in particular with its recent update by McGaugh et al. (2016), based on the SPARC sample. From this they extrapolate to "$\Lambda$CDM is fully consistent" with the MDAR. I explain why these simulated galaxies do not show that $\Lambda$CDM accounts for the MDAR. a. Their sample of simulated galaxies contains only 18 high-mass galaxies, within a narrow range of one order of magnitude in baryonic mass, at the very high end of the observed SPARC sample, which spans 4.5 orders of magnitude in mass. More importantly, the simulated sample has none of the low-mass, low-acceleration galaxies -- abundant in SPARC -- which encapsulate the crux and the nontrivial aspects of the predicted and observed MDAR. The low-acceleration part of the si...

  15. Mechanical Fault Diagnosis of High Voltage Circuit Breakers Based on Variational Mode Decomposition and Multi-Layer Classifier

    Directory of Open Access Journals (Sweden)

    Nantian Huang

    2016-11-01

    Mechanical fault diagnosis of high-voltage circuit breakers (HVCBs) based on vibration signal analysis is one of the most significant issues in improving reliability and reducing outage costs for power systems. Because training samples and known fault types for HVCBs are limited, existing mechanical fault diagnostic methods easily misclassify new fault types, for which no training samples exist, as either the normal condition or a wrong fault type. A new mechanical fault diagnosis method for HVCBs based on variational mode decomposition (VMD) and a multi-layer classifier (MLC) is proposed to improve the accuracy of fault diagnosis. First, HVCB vibration signals during operation are measured using an acceleration sensor. Second, the VMD algorithm is used to decompose the vibration signals into several intrinsic mode functions (IMFs). The IMF matrix is divided into submatrices to compute local singular values (LSVs), and the maximum singular value of each submatrix is selected as a feature for fault diagnosis. Finally, an MLC composed of two one-class support vector machines (OCSVMs) and a support vector machine (SVM) is constructed to identify the fault type. Two layers of independent OCSVMs distinguish normal and fault conditions with known or unknown fault types, respectively; on this basis, the SVM recognizes the specific fault type. Diagnostic experiments are conducted on a real SF6 HVCB in normal and fault states. Three different faults (jam fault of the iron core, looseness of the base screw, and poor lubrication of the connecting lever) are simulated in a field experiment on a real HVCB to test the feasibility of the proposed method. Results show that the classification accuracy of the new method is superior to that of other traditional methods.
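
    The local-singular-value feature extraction step can be sketched in a few lines of numpy; the submatrix count and matrix shapes below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def lsv_features(imfs, n_sub=4):
    """Local-singular-value features in the spirit of the VMD/MLC method:
    split the IMF matrix (one IMF per row) column-wise into n_sub
    submatrices and keep the largest singular value of each as one
    feature, yielding a compact vector for the classifier stage."""
    subs = np.array_split(imfs, n_sub, axis=1)
    return np.array([np.linalg.svd(s, compute_uv=False)[0] for s in subs])
```

    The resulting low-dimensional feature vectors are what the OCSVM/SVM layers are trained on.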

  16. Network Fault Diagnosis Using DSM

    Institute of Scientific and Technical Information of China (English)

    Jiang Hao; Yan Pu-liu; Chen Xiao; Wu Jing

    2004-01-01

    Difference similitude matrix (DSM) is effective in reducing an information system, with a higher reduction rate and higher validity. We use the DSM method to analyze the fault data of computer networks and obtain fault diagnosis rules. By discretizing the relative values of the fault data, we obtain the information system of the fault data. The DSM method reduces the information system and yields the diagnosis rules. A simulation with an actual scenario shows that fault diagnosis based on DSM can obtain few and effective rules.

  17. Fault Diagnosis of Analog Circuits Based on the Negative Selection Algorithm

    Institute of Scientific and Technical Information of China (English)

    王玉珏; 漆德宁

    2015-01-01

    To address the limitations of traditional intelligent diagnosis techniques, such as their dependence on prior knowledge and the diversity of analog circuit faults, fault diagnosis of analog circuits based on the negative selection algorithm is studied. The principle and applications of the negative selection algorithm in artificial immune systems are analyzed, and the detector generation mechanism of the real-valued negative selection algorithm is introduced. An optimized algorithm combining self-tolerance with Monte Carlo sampling is proposed and validated by simulation on Fisher's Iris data, and the optimized algorithm is then applied to the diagnosis of eight kinds of soft faults in a resistance circuit. The results show that the overall detection rate of the optimized algorithm reaches 90%; it reduces the redundancy of mature detectors and saves computation space.
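
    A minimal real-valued negative selection sketch, assuming Euclidean distance and a common radius for self-tolerance and detection (the optimized self-tolerance/Monte Carlo variant described above is more elaborate; all names and parameters here are ours):

```python
import numpy as np

def train_detectors(self_samples, r, n_candidates=2000, seed=0):
    """Real-valued negative selection: draw random candidate detectors of
    radius r in the unit hypercube and keep only those that contain no
    self (normal-condition) sample -- the self-tolerance censoring step."""
    rng = np.random.default_rng(seed)
    cand = rng.random((n_candidates, self_samples.shape[1]))
    dists = np.linalg.norm(cand[:, None, :] - self_samples[None, :, :], axis=2)
    return cand[dists.min(axis=1) > r]

def is_nonself(x, detectors, r):
    """Flag a measurement as faulty (non-self) if it falls inside the
    coverage sphere of any mature detector."""
    return bool((np.linalg.norm(detectors - x, axis=1) < r).any())
```

    By construction, no normal-condition sample is ever flagged; detection coverage of the fault region grows with the number of retained detectors.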

  18. Simulations

    CERN Document Server

    Ngada, N M

    2015-01-01

    The complexity and cost of building and running high-power electrical systems make the use of simulations unavoidable. The simulations available today provide great understanding about how systems really operate. This paper helps the reader to gain an insight into simulation in the field of power converters for particle accelerators. Starting with the definition and basic principles of simulation, two simulation types, as well as their leading tools, are presented: analog and numerical simulations. Some practical applications of each simulation type are also considered. The final conclusion then summarizes the main important items to keep in mind before opting for a simulation tool or before performing a simulation.

  19. Analysis of Uncertainties in Protection Heater Delay Time Measurements and Simulations in Nb$_{3}$Sn High-Field Accelerator Magnets

    CERN Document Server

    Salmi, Tiina; Marchevsky, Maxim; Bajas, Hugo; Felice, Helene; Stenvall, Antti

    2015-01-01

    The quench protection of superconducting high-field accelerator magnets is presently based on protection heaters, which are activated upon quench detection to accelerate the quench propagation within the winding. Estimations of the heater delay to initiate a normal zone in the coil are essential for the protection design. During the development of Nb3Sn magnets for the LHC luminosity upgrade, protection heater delays have been measured in several experiments, and a new computational tool CoHDA (Code for Heater Delay Analysis) has been developed for heater design. Several computational quench analyses suggest that the efficiency of the present heater technology is on the borderline of protecting the magnets. Quantifying the inevitable uncertainties related to the measured and simulated delays is therefore of pivotal importance. In this paper, we analyze the uncertainties in the heater delay measurements and simulations using data from five impregnated high-field Nb3Sn magnets with different heater geometries. ...

  20. Fractal properties and simulation of micro-seismicity for seismic hazard analysis: a comparison of North Anatolian and San Andreas Fault Zones

    Directory of Open Access Journals (Sweden)

    Naside Ozer

    2012-02-01

    We analyzed statistical properties of earthquakes in western Anatolia and the North Anatolian Fault Zone (NAFZ) in terms of spatio-temporal variations of fractal dimensions and p- and b-values. During statistically homogeneous periods characterized by closer fractal dimension values, we propose that the occurrence of relatively larger shocks (M >= 5.0) is unlikely. Decreases in seismic activity in such intervals result in spatial b-value distributions that are primarily stable. Fractal dimensions decrease with time in proportion to increasing seismicity. Conversely, no spatio-temporal patterns were observed for p-value changes. In order to evaluate failure probabilities and simulate earthquake occurrence in the western NAFZ, we applied a modified version of the renormalization group method. Assuming that an increase in small earthquakes is indicative of larger shocks, we applied the model to micro-seismic (M <= 3.0) activity and tested our results using San Andreas Fault Zone (SAFZ) data. We propose that fractal dimension is a direct indicator of material heterogeneity and strength. Results from the model suggest that simulated and observed earthquake occurrences are consistent and may be used for seismic hazard estimation on creeping strike-slip fault zones.
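
    The fractal dimension of an epicenter set, as used throughout this kind of analysis, is typically estimated by box counting; a minimal 2D sketch (our own illustration, not the authors' code):

```python
import numpy as np

def box_counting_dimension(points, eps_list=(0.5, 0.25, 0.125, 0.0625, 0.03125)):
    """Estimate the box-counting (fractal) dimension of a 2D point set in
    the unit square as the slope of log N(eps) versus log(1/eps), where
    N(eps) is the number of occupied grid cells of size eps."""
    counts = []
    for eps in eps_list:
        cells = {(int(x / eps), int(y / eps)) for x, y in points}
        counts.append(len(cells))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(eps_list)), np.log(counts), 1)
    return slope
```

    Epicenters confined to a single fault trace give a dimension near 1, while diffuse, volume-filling seismicity pushes the estimate toward 2, which is why the dimension tracks material heterogeneity.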

  1. Process Level Fault Simulator of Intelligent Substation

    Institute of Scientific and Technical Information of China (English)

    张炳达; 姚浩

    2014-01-01

    A device for simulating process-level faults of an intelligent substation is designed for testing protection relays. Based on the "capture-modify-forward" method, the sampled value (SV) and generic object oriented substation event (GOOSE) messages in the process-level network are modified artificially. An information-entropy detection method is used to diagnose voltage sags, so that joint simulation of process-level faults and power system faults can be realized. Thanks to its full-duplex design, the fault simulator can easily be connected to the process-level network. To balance operation time against hardware consumption, a multiply-accumulator with fewer pipeline stages is developed, and multiplexing is used in the information-entropy arithmetic. Tests show that the additional transmission delay introduced by the fault simulator is less than 3 μs, which has no effect on the process-level network.
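
    The information-entropy detection of voltage sags can be sketched as the Shannon entropy of the amplitude histogram computed over successive windows of samples; the window length and bin count below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def window_entropy(samples, n_bins=16):
    """Shannon entropy (bits) of the amplitude histogram of one window of
    voltage samples; a sudden sag changes the amplitude distribution and
    shows up as a jump in entropy between consecutive windows."""
    hist, _ = np.histogram(samples, bins=n_bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def sag_scores(signal, win=64):
    """Entropy per non-overlapping window; the detector compares
    consecutive scores and flags abrupt changes."""
    n = len(signal) // win
    return [window_entropy(signal[k * win:(k + 1) * win]) for k in range(n)]
```

    In hardware, the histogram accumulation and the p*log2(p) terms map naturally onto the multiply-accumulator mentioned above.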

  2. Horizontal Accelerator

    Data.gov (United States)

    Federal Laboratory Consortium — The Horizontal Accelerator (HA) Facility is a versatile research tool available for use on projects requiring simulation of the crash environment. The HA Facility is...

  3. Estimating kinetic rates from accelerated molecular dynamics simulations: Alanine dipeptide in explicit solvent as a case study

    Science.gov (United States)

    de Oliveira, César Augusto F.; Hamelberg, Donald; McCammon, J. Andrew

    2007-11-01

    Molecular dynamics (MD) simulation is the standard computational technique used to obtain information on the time evolution of the conformations of proteins and many other molecular systems. However, for most biological systems of interest, the time scale for slow conformational transitions is still inaccessible to standard MD simulations. Several sampling methods have been proposed to address this issue, including the accelerated molecular dynamics method. In this work, we study the extent of sampling of the phi/psi space of alanine dipeptide in explicit water using accelerated molecular dynamics and present a framework to recover the correct kinetic rate constant for the helix to beta-strand transition. We show that the accelerated MD can drastically enhance the sampling of the phi/psi conformational phase space when compared to normal MD. In addition, the free energy density plots of the phi/psi space show that all minima regions are accurately sampled and the canonical distribution is recovered. Moreover, the kinetic rate constant for the helix to beta-strand transition is accurately estimated from these simulations by relating the diffusion coefficient to the local energetic roughness of the energy landscape. Surprisingly, even for such a low barrier transition, it is difficult to obtain enough transitions to accurately estimate the rate constant when one uses normal MD.
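
    Accelerated MD runs dynamics on a boosted potential energy surface; a minimal sketch of the standard boost form (the threshold E and smoothing parameter alpha in the test are chosen arbitrarily here, not taken from the paper):

```python
def amd_boost(V, E, alpha):
    """Accelerated-MD boost potential: dV = (E - V)^2 / (alpha + E - V)
    whenever the true potential V lies below the threshold E, else 0.
    Trajectories are propagated on V + dV, which raises basin floors and
    lowers barriers; canonical averages are recovered afterwards by
    reweighting each frame with exp(beta * dV)."""
    if V >= E:
        return 0.0
    return (E - V) ** 2 / (alpha + E - V)
```

    Note that V + dV always stays below E, so the boosted surface flattens the landscape without inverting it; the local energetic roughness used in the rate-recovery argument is estimated from these reweighted statistics.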

  4. Geological Characteristics and Numerical Simulation of Badong Fault in TGP Reservoir Area

    Institute of Scientific and Technical Information of China (English)

    邓清禄; 陈波

    2002-01-01

    The new county-seat town of Badong in the reservoir area of the Three Gorges Project is located on a huge arcuate slope whose convex bank faces north. The slope is cut in its back part by a fault, the Badong fault, trending east-west. Questions of concern are whether the huge arcuate slope is related to mass rock creep, and what role the Badong fault played in the formation of the slope. The Badong fault is the main subject of this paper. Data from field investigation were reviewed, and three main features of the Badong fault were identified: it is a bedding fault between the Jialingjiang formation (T1j) and the Badong formation (T2b); its breccias have a compound composition; and it records multiple stages of activity. It is proposed that most of the breccias were formed by fracture filling. To understand the state of stress and the deformation behavior of the fault during the incision of the Yangtze River, as well as the initiation and development of the slope, numerical simulation was conducted. Results indicate that there is a tensional stress zone in the upper part of the fault and that activity of the fault is dominated by bedding sliding. Opening was also noted in the upper part of the fault in the late periods. These results are consistent with field observations of the fault. The displacement in the slope is small, which leads us to conclude that there is no definite relation between the formation of the arcuate slope and the Badong fault.

  5. Investigation of the ability for slow slip events to trigger earthquakes through a comparison of seismic and geodetic observations with fault slip simulations

    Science.gov (United States)

    Colella, H.; Brudzinski, M. R.; Richards-Dinger, K. B.

    2013-12-01

    There is growing evidence that slow slip events (SSEs) promote nearby seismicity, often in the form of swarms of small to moderate earthquakes or as tectonic tremor that are primarily swarms of low frequency earthquakes. Yet the question remains whether SSEs are capable of triggering large to great earthquakes due to the generally small stress change associated with typical SSEs. There have been a few recent large to great earthquakes that appear to have been preceded by evidence of a SSE, but whether these cases are rare is not yet clear. Even if several other cases were documented, it would still be difficult to translate this information into quantitative estimates of the hazard increase during a given SSE. In this study, we move towards this long-term goal with an earthquake simulator (RSQSim), which is capable of modeling a variety of fault slip behaviors (i.e. earthquakes, SSEs, and continuous creep), and can generate robust statistics on the relationships between SSEs, microseismicity, and large/great earthquakes. The simulations seek to explain observations like those of the recent Mw 7.4 Ometepec, Mexico and Mw 6.5 Cook Strait earthquakes that show increased seismic activity in small fault patches between the transition zone and mainshock source zone during SSEs that were in process in the months leading up to, and particularly immediately prior to, the mainshock. The potential causative relationships will be probed with models that use a range of fault properties and configurations. One hypothesis to test is whether observed swarms of seismicity during SSEs represent fast slip on weaker areas of the plate interface that are more sensitive to the relatively small stress changes associated with SSEs. An alternative hypothesis is that no area of the fault is more easily influenced by SSEs, just that the relative prevalence of smaller earthquakes leads to more frequent observations of the triggered mainshocks.

  6. Corrosion behaviour of steel during accelerated carbonation of solutions which simulate the concrete pore solution

    Directory of Open Access Journals (Sweden)

    Alonso, C.

    1987-06-01

    Full Text Available In spite of the numerous studies carried out on carbonation of concrete, very few data have been published on the mechanism of steel depassivation and the corrosion rates involved in this type of phenomenon. Some uncertainties also remain as to the chemical composition of the pore solution of a carbonated concrete. Erratic behaviour in the changes of the corrosion rate of steel during accelerated carbonation of cement mortars suggested the need to study the process in a simpler medium that allows the different parameters to be isolated. Thus, saturated Ca(OH)2-based solutions with different additions of KOH and NaOH have been used to simulate the real concrete pore solution. In the present work, simultaneous changes in the pH value, corrosion potential and corrosion rate (measured by determination of the polarization resistance) of steel rods have been monitored during accelerated carbonation produced by a constant flux of CO2 gas and/or air through the solution.


  7. Simulations of the Acceleration of Externally Injected Electrons in a Plasma Excited in the Linear Regime

    CERN Document Server

    Delerue, Nicolas; Jenzer, Stéphane; Kazamias, Sophie; Lucas, Bruno; Maynard, Gilles; Pittman, Moana

    2016-01-01

    We have investigated numerically the coupling between a 10 MeV electron bunch of high charge (> 100 pC) and a laser-generated accelerating plasma wave. Our results show that high-efficiency coupling can be achieved using a 50 TW, 100 µm wide laser beam, yielding accelerating fields above 1 GV/m. We propose an experiment in which these predictions could be tested.

  8. Electron cloud studies for CERN particle accelerators and simulation code development

    OpenAIRE

    Iadarola, Giovanni

    2014-01-01

    In a particle accelerator free electrons in the beam chambers can be generated by different mechanisms like the ionization of the residual gas or the photoemission from the chamber’s wall due to the synchrotron radiation emitted by the beam. The electromagnetic field of the beam can accelerate these electrons and project them onto the chamber’s wall. According to their impact energy and to the Secondary Electron Yield (SEY) of the surface, secondary electrons can be generated. Especially...

  9. Procedure of evaluating parameters of inland earthquakes caused by long strike-slip faults for ground motion prediction

    Science.gov (United States)

    Ju, Dianshu; Dan, Kazuo; Fujiwara, Hiroyuki; Morikawa, Nobuyuki

    2016-04-01

    We proposed a procedure of evaluating fault parameters of asperity models for predicting strong ground motions from inland earthquakes caused by long strike-slip faults. In order to obtain averaged dynamic stress drops, we adopted the formula obtained by dynamic fault rupturing simulations for surface faults of the length from 15 to 100 km, because the formula of the averaged static stress drops for circular cracks, commonly adopted in existing procedures, cannot be applied to surface faults or long faults. The averaged dynamic stress drops were estimated to be 3.4 MPa over the entire fault and 12.2 MPa on the asperities, from the data of 10 earthquakes in Japan and 13 earthquakes in other countries. The procedure has a significant feature that the average slip on the seismic faults longer than about 80 km is constant, about 300 cm. In order to validate our proposed procedure, we made a model for a 141 km long strike-slip fault by our proposed procedure for strike-slip faults, predicted ground motions, and showed that the resultant motions agreed well with the records of the 1999 Kocaeli, Turkey, earthquake (Mw 7.6) and with the peak ground accelerations and peak ground velocities by the GMPE of Si and Midorikawa (1999).

  10. Diagnosis Method for Analog Circuit Hard fault and Soft Fault

    Directory of Open Access Journals (Sweden)

    Baoru Han

    2013-09-01

    Full Text Available The traditional BP neural network suffers from slow convergence, is prone to falling into local minima, and can oscillate during learning. This paper introduces a diagnosis method for hard and soft faults in tolerance analog circuits based on a BP neural network with an adaptive learning rate and an additional momentum term. First, the tolerance analog circuit is simulated with OrCAD/PSpice, and fault waveform data are extracted automatically and accurately by a MATLAB program. Second, the network is trained with the adaptive-learning-rate and momentum BP algorithm and then applied to hard- and soft-fault diagnosis of the analog circuit. With shorter training time, high precision and global convergence, the method effectively reduces misjudgments and missed detections, improving both the accuracy and the speed of fault diagnosis.
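As a rough illustration of the training scheme this abstract describes, the sketch below combines an additional momentum term with a loss-driven adaptive learning rate in a one-hidden-layer network. The network size, toy data and adaptation factors are assumptions for the demo, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: 2 features -> 2 fault classes (one-hot), linearly separable
X = rng.normal(size=(40, 2))
y = np.eye(2)[(X[:, 0] + X[:, 1] > 0).astype(int)]

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 2)); b2 = np.zeros(2)

lr, momentum = 0.5, 0.9
vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]
prev_loss = np.inf

for epoch in range(200):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = 0.5 * np.mean((out - y) ** 2)

    # Adaptive learning rate: grow while the loss falls, shrink on increase
    lr = min(lr * 1.05, 2.0) if loss < prev_loss else lr * 0.7
    prev_loss = loss

    # Backpropagation of the MSE loss through the two sigmoid layers
    d_out = (out - y) * out * (1 - out) / len(X)
    d_h = (d_out @ W2.T) * h * (1 - h)
    grads = (X.T @ d_h, d_h.sum(0), h.T @ d_out, d_out.sum(0))

    # Additional momentum term smooths the update and damps oscillation
    for i, (p, g) in enumerate(zip((W1, b1, W2, b2), grads)):
        vel[i] = momentum * vel[i] - lr * g
        p += vel[i]

print(f"final loss: {loss:.4f}")
```

The two rules mirror the abstract's claim: momentum damps the oscillation of plain BP, and the adaptive rate speeds up convergence without manual tuning.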

  11. TRANSMISSION LINE FAULT ANALYSIS USING WAVELET THEORY

    Directory of Open Access Journals (Sweden)

    Ravindra Malkar

    2012-06-01

    Full Text Available This paper describes a wavelet transform technique to analyze power system disturbances, such as transmission line faults, using Biorthogonal and Haar wavelets. A wavelet transform based approach for detecting transmission line faults is proposed. The coefficients of the discrete approximation of the dyadic wavelet transform with different wavelets are used as an index for transmission line fault detection and faulted-phase selection, and to determine which wavelet is best suited to this application. MATLAB/Simulink is used to generate fault signals. Simulation results reveal that the performance of the proposed fault detection indicator is promising and easy to implement for computer relaying applications.
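The detection idea, flagging a fault when wavelet detail coefficients spike, can be sketched as follows. The one-level Haar transform is standard; the synthetic fault signal and the 10x energy threshold are invented for the demo (the paper uses MATLAB/Simulink-generated fault signals).

```python
import numpy as np

def haar_dwt(signal):
    """One-level Haar DWT: approximation and detail coefficients."""
    s = np.asarray(signal, dtype=float)
    s = s[: len(s) // 2 * 2]               # truncate to even length
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

# Synthetic "line current": clean 50 Hz, then a step change imitating a fault
t = np.arange(0, 0.2, 1e-4)                # 10 kHz sampling
current = np.sin(2 * np.pi * 50 * t)
current[1000:] *= 5.0                      # fault inception at t = 0.1 s

_, detail = haar_dwt(current)
energy = detail ** 2
threshold = 10 * energy[:400].mean()       # 10x the pre-fault detail energy
fault_idx = int(np.argmax(energy > threshold))
print(f"fault detected near t = {fault_idx * 2e-4:.4f} s")
```

The detail coefficients act as a high-pass filter: the smooth pre-fault sinusoid produces small coefficients, while the fault transient produces a burst of large ones that crosses the threshold almost immediately after inception.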

  12. Kinematic source model for simulation of near-fault ground motion field using explicit finite element method

    Institute of Scientific and Technical Information of China (English)

    Zhang Xiaozhi; Hu Jinjun; Xie Lili; Wang Haiyun

    2006-01-01

    This paper briefly reviews the characteristics and major processes of the explicit finite element method in modeling the near-fault ground motion field. The emphasis is on the finite element-related problems in the finite fault source modeling. A modified kinematic source model is presented, in which vibration with some high frequency components is introduced into the traditional slip time function to ensure that the source and ground motion include sufficient high frequency components. The model presented is verified through a simple modeling example. It is shown that the predicted near-fault ground motion field exhibits similar characteristics to those observed in strong motion records, such as the hanging wall effect, vertical effect, fling step effect and velocity pulse effect, etc.

  13. Fault-related dolomitization in the Orpesa Ranges (Iberian Chain, E Spain): reactive transport simulations and field data constraints

    Science.gov (United States)

    Gomez-Rivas, E.; Martin-Martin, J. D.; Corbella, M.; Teixell, A.

    2009-04-01

    The relationships between hydrothermal fluid circulation and fracturing that lead to mineral dissolution and/or precipitation in carbonate rocks have direct impacts on the evolution and final distribution of hydrocarbon reservoir permeability. Understanding the coupling between these processes is important for predicting permeability and improving hydrocarbon recovery. We present a case study of dolomitization processes in Cretaceous limestone from the Orpesa Ranges (Iberian Chain, E Spain). Extending over part of the Maestrat Cretaceous Basin, the Orpesa area is deformed by extensional faults. These faults accommodated thick sequences of shallow marine limestone, mainly during Aptian times. The syn-rift carbonates are partially dolomitized due to the circulation and mixing of hydrothermal fluids along normal faults and bedding. Both Aptian and later Neogene extensional faults must have served as conduits for the circulation of fluids. MVT deposits of Paleocene age are well documented in the Maestrat basin and may also be related to dolomitization. Samples of host rocks and vein fillings have been collected along strike and analyzed in different fault sections to characterize fluid and rock composition, track flow pathways and map the relationships of fluid flow with respect to the main normal faults in the area. Using field and geochemical data from the Orpesa Ranges carbonates, we have developed reactive-transport models to study the influence of different parameters in the dolomitization of carbonates related to the circulation and mixing of hydrothermal fluids at the outcrop scale. We present results from models that were run with constant and non-constant permeability. 
The main parameters analyzed include: initial porosity and permeability of layers and fractures, composition of fluids, groundwater and brines flux, composition of layers, reactive surface of minerals, differences in vertical and horizontal permeability, and presence or absence of stratigraphic

  14. Chemical process fault detection technology based on process simulation

    Institute of Scientific and Technical Information of China (English)

    李秀喜; 袁延江

    2014-01-01

    A chemical process monitoring method is proposed in which the Simulink toolbox of MATLAB invokes Aspen Dynamics to dynamically simulate the chemical process. In most of the previous literature on model-based fault detection, the mechanistic model must be built by hand, which is time-consuming and demands a high level of expertise from the user. With this method, an accurate dynamic simulation can be built quickly in Aspen Dynamics, which provides a comprehensive physical-property database and allows the model to be adjusted to match the actual process, even by users without deep chemical engineering expertise. Meanwhile, since data collected from the plant often require correction, the Simulink toolbox can conveniently correct both the model data and the measured data while collecting real-time plant data as input to the dynamic simulation, achieving real-time process monitoring. The method was tested on a virtual distillation process in Aspen Dynamics; the results show that it can detect faults in processes both with and without production-plan changes. Because the fault data were also generated by an Aspen Dynamics distillation model, the simulated data and the fault-free measured data are very similar, so the data-correction step was not included in this test.

  15. Simulations of particle acceleration beyond the classical synchrotron burnoff limit in magnetic reconnection: An explanation of the Crab flares

    CERN Document Server

    Cerutti, Benoit; Uzdensky, Dmitri A; Begelman, Mitchell C

    2013-01-01

    It is generally accepted that astrophysical sources cannot emit synchrotron radiation above 160 MeV in their rest frame. This limit is given by the balance between the accelerating electric force and the radiation reaction force acting on the electrons. The discovery of synchrotron gamma-ray flares in the Crab Nebula, well above this limit, challenges this classical picture of particle acceleration. To overcome this limit, particles must accelerate in a region of high electric field and low magnetic field. This is possible only with a non-ideal magnetohydrodynamic process, like magnetic reconnection. We present the first numerical evidence of particle acceleration beyond the synchrotron burnoff limit, using a set of 2D particle-in-cell simulations of ultra-relativistic pair plasma reconnection. We use a new code, Zeltron, that includes self-consistently the radiation reaction force in the equation of motion of the particles. We demonstrate that the most energetic particles move back and forth across the recon...

  16. Near-fault strong ground motion simulation of the May 12, 2008, Mw7.9 Wenchuan Earthquake by a dynamical composite source model

    Institute of Scientific and Technical Information of China (English)

    孟令媛; 史保平

    2011-01-01

    On May 12, 2008, the Mw 7.9 Wenchuan earthquake occurred in China, with its epicenter at 103.4°E, 31.0°N. The main seismogenic fault extends for more than 300 km and shows clear segmentation from southwest to northeast: the Wenchuan-Yingxiu segment is dominated by thrusting with a minor right-lateral strike-slip component; the Anxian-Beichuan segment shows combined thrust and right-lateral strike-slip motion; and the Qingchuan segment is dominated by right-lateral strike-slip with a minor thrust component. Using an improved composite source model for strong ground motion prediction, a kinematic rupture model 320 km long and 20 km wide was constructed, with fault segmentation, spatially varying dip and continuously varying slip direction specified dynamically. The simulated acceleration time histories agree well with the strong-motion records at the Wolong, Pixian Zoushishan and Mianzhu Qingping stations in waveform, duration, frequency content and peak values. In addition, the near-fault peak ground acceleration (PGA) distributions from the simulation show much higher PGA values in the Wenchuan, Beichuan and Qingchuan areas than elsewhere, consistent with recent field observations and reports. The PGA map also indicates that, compared with the other two fault segments, the ground motion caused by the Wenchuan-Yingxiu thrusting is much stronger on the hanging wall than on the footwall; for example, at a distance of 5 km from the fault trace on either side, the hanging-wall-to-footwall PGA ratios reach 1.72:1, 2.5:1 and 1.77:1 for the N-S, E-W and UP components, respectively. The numerical modeling developed in this study has great potential for ground motion estimation and prediction for earthquake engineering purposes, and the algorithm could also be used to generate near-real-time shaking maps if combined with current finite-fault inversion techniques.

  17. Simulation studies of the ion beam transport system in a compact electrostatic accelerator-based D-D neutron generator

    Directory of Open Access Journals (Sweden)

    Das Basanta Kumar

    2014-01-01

    Full Text Available The study of an ion beam transport mechanism contributes to the production of a good quality ion beam with a higher current and better beam emittance. The simulation of an ion beam provides the basis for optimizing the extraction system and the acceleration gap for the ion source. In order to extract an ion beam from an ion source, a carefully designed electrode system for the required beam energy must be used. In our case, a self-extracting Penning ion source is used for ion generation, extraction and acceleration, with a single accelerating gap, for the production of neutrons. The characteristics of the ion beam extracted from this ion source were investigated using the computer code SIMION 8.0. The ion trajectories from different locations of the plasma region were investigated. The simulation provided a good platform for optimizing the extraction and focusing system so that the ion beam is transported to the required target position without losses, and yielded an estimate of the beam emittance.

  18. Fault Diagnosis and Simulation of Photovoltaic Battery for Spacecraft

    Institute of Scientific and Technical Information of China (English)

    李洁; 张可

    2016-01-01

    A fault diagnosis method for spacecraft photovoltaic cells is studied to detect fault points. The wavelet transform is used to analyze the fault points of the photovoltaic cells; fault signals corrupted by noise are denoised; and several wavelet basis functions are combined to detect abrupt change points in the spacecraft photovoltaic cells. The simulation results show that the detected abrupt change points coincide with the fault points of the photovoltaic cells, demonstrating the feasibility of the method.

  19. Monte Carlo simulations of ultra high vacuum and synchrotron radiation for particle accelerators

    CERN Document Server

    AUTHOR|(CDS)2082330; Leonid, Rivkin

    With preparation of Hi-Lumi LHC fully underway, and the FCC machines under study, accelerators will reach unprecedented energies and along with it very large amount of synchrotron radiation (SR). This will desorb photoelectrons and molecules from accelerator walls, which contribute to electron cloud buildup and increase the residual pressure - both effects reducing the beam lifetime. In current accelerators these two effects are among the principal limiting factors, therefore precise calculation of synchrotron radiation and pressure properties are very important, desirably in the early design phase. This PhD project shows the modernization and a major upgrade of two codes, Molflow and Synrad, originally written by R. Kersevan in the 1990s, which are based on the test-particle Monte Carlo method and allow ultra-high vacuum and synchrotron radiation calculations. The new versions contain new physics, and are built as an all-in-one package - available to the public. Existing vacuum calculation methods are overvi...

  20. PV Systems Reliability Final Technical Report: Ground Fault Detection

    Energy Technology Data Exchange (ETDEWEB)

    Lavrova, Olga [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Flicker, Jack David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Johnson, Jay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-01-01

    We have examined ground faults in photovoltaic (PV) arrays and the efficacy of fuses, residual current detection (RCD), current sense monitoring/relays (CSM), isolation/insulation (Riso) monitoring, and Ground Fault Detection and Isolation (GFDI) using simulations based on a SPICE (Simulation Program with Integrated Circuit Emphasis) ground-fault circuit model, experimental ground faults installed on real arrays, and theoretical equations.
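A toy calculation, not Sandia's SPICE model, illustrates why several detection schemes are compared: a fuse in the grounding path only clears low-impedance faults, while a residual-current device trips at far smaller currents, leaving the fuse with "blind spots" for high-impedance faults. All voltages, thresholds and fault resistances below are invented.

```python
# Ground-fault current through a fault resistance to the grounding path
V_array = 600.0       # array voltage at the fault point [V] (assumed)
fuse_rating = 5.0     # ground-fault fuse rating [A] (assumed)
rcd_trip = 0.3        # RCD trip threshold [A] (assumed)

for r_fault in (50.0, 500.0, 5000.0):   # candidate fault resistances [ohm]
    i_fault = V_array / r_fault          # Ohm's law for the fault loop
    fuse = "clears" if i_fault > fuse_rating else "blind"
    rcd = "trips" if i_fault > rcd_trip else "blind"
    print(f"R_fault={r_fault:6.0f} ohm  I={i_fault:6.2f} A  "
          f"fuse: {fuse:6s}  RCD: {rcd}")
```

Only the 50-ohm fault draws enough current to blow the fuse; the 500-ohm fault is caught only by the RCD, and the 5000-ohm fault evades both, which is the kind of gap Riso monitoring targets.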

  1. Tool for Viewing Faults Under Terrain

    Science.gov (United States)

    Siegel, Herbert, L.; Li, P. Peggy

    2005-01-01

    Multi Surface Light Table (MSLT) is an interactive software tool that was developed in support of the QuakeSim project, which has created an earthquake- fault database and a set of earthquake- simulation software tools. MSLT visualizes the three-dimensional geometries of faults embedded below the terrain and animates time-varying simulations of stress and slip. The fault segments, represented as rectangular surfaces at dip angles, are organized into collections, that is, faults. An interface built into MSLT queries and retrieves fault definitions from the QuakeSim fault database. MSLT also reads time-varying output from one of the QuakeSim simulation tools, called "Virtual California." Stress intensity is represented by variations in color. Slips are represented by directional indicators on the fault segments. The magnitudes of the slips are represented by the duration of the directional indicators in time. The interactive controls in MSLT provide a virtual track-ball, pan and zoom, translucency adjustment, simulation playback, and simulation movie capture. In addition, geographical information on the fault segments and faults is displayed on text windows. Because of the extensive viewing controls, faults can be seen in relation to one another, and to the terrain. These relations can be realized in simulations. Correlated slips in parallel faults are visible in the playback of Virtual California simulations.

  2. The time dependent propensity function for acceleration of spatial stochastic simulation of reaction-diffusion systems

    Science.gov (United States)

    Fu, Jin; Wu, Sheng; Li, Hong; Petzold, Linda R.

    2014-10-01

    The inhomogeneous stochastic simulation algorithm (ISSA) is a fundamental method for spatial stochastic simulation. However, when diffusion events occur more frequently than reaction events, simulating the diffusion events by ISSA is quite costly. To reduce this cost, we propose to use the time dependent propensity function in each step. In this way we can avoid simulating individual diffusion events, and use the time interval between two adjacent reaction events as the simulation stepsize. We demonstrate that the new algorithm can achieve orders of magnitude efficiency gains over widely-used exact algorithms, scales well with increasing grid resolution, and maintains a high level of accuracy.
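For orientation, the exact (non-spatial) SSA that ISSA generalizes can be sketched as below; the paper's contribution is to avoid firing individual diffusion events by using a time-dependent propensity between reaction events, which this minimal decay-only example does not attempt. The reaction and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def ssa_decay(n0=1000, k=1.0, t_end=5.0):
    """Exact SSA (Gillespie direct method) for A -> 0 with rate constant k."""
    n, t = n0, 0.0
    while n > 0:
        a = k * n                       # propensity of the single channel
        tau = rng.exponential(1.0 / a)  # exponential waiting time
        if t + tau > t_end:
            break
        t += tau
        n -= 1                          # fire the reaction
    return n

final = ssa_decay()
print(f"copies left at t=5: {final} (deterministic mean ~ {1000*np.exp(-5):.1f})")
```

Every event is simulated one at a time; in a spatial setting, diffusion hops would dominate this loop, which is precisely the cost the time-dependent propensity method removes.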

  3. The time dependent propensity function for acceleration of spatial stochastic simulation of reaction–diffusion systems

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Jin, E-mail: iamfujin@hotmail.com [Department of Computer Science, University of California, Santa Barbara (United States); Wu, Sheng, E-mail: sheng@cs.ucsb.edu [Department of Computer Science, University of California, Santa Barbara (United States); Li, Hong, E-mail: hong.li@teradata.com [Teradata Inc., El Segundo, California (United States); Petzold, Linda R., E-mail: petzold@cs.ucsb.edu [Department of Computer Science, University of California, Santa Barbara (United States)

    2014-10-01

    The inhomogeneous stochastic simulation algorithm (ISSA) is a fundamental method for spatial stochastic simulation. However, when diffusion events occur more frequently than reaction events, simulating the diffusion events by ISSA is quite costly. To reduce this cost, we propose to use the time dependent propensity function in each step. In this way we can avoid simulating individual diffusion events, and use the time interval between two adjacent reaction events as the simulation stepsize. We demonstrate that the new algorithm can achieve orders of magnitude efficiency gains over widely-used exact algorithms, scales well with increasing grid resolution, and maintains a high level of accuracy.

  4. Application of the reduction of scale range in a Lorentz boosted frame to the numerical simulation of particle acceleration devices.

    Energy Technology Data Exchange (ETDEWEB)

    Vay, J; Fawley, W M; Geddes, C G; Cormier-Michel, E; Grote, D P

    2009-05-05

    It has been shown that the ratio of longest to shortest space and time scales of a system of two or more components crossing at relativistic velocities is not invariant under Lorentz transformation. This implies the existence of a frame of reference minimizing an aggregate measure of the ratio of space and time scales. It was demonstrated that this translates into a reduction by orders of magnitude in computer simulation run times, using methods based on first principles (e.g., Particle-In-Cell), for particle acceleration devices and for problems such as free electron lasers, laser-plasma accelerators, and particle beams interacting with electron clouds. Since then, speed-ups ranging from 75 to more than four orders of magnitude have been reported for the simulation of either scaled or reduced models of the above-cited problems. It was also shown that, to achieve the full benefits of calculating in a boosted frame, some of the standard numerical techniques needed to be revised. The theory behind the speed-up of numerical simulation in a boosted frame, the latest developments in numerical methods, and example applications with the new opportunities they offer are all presented.
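The scale-compression argument can be made concrete with a back-of-the-envelope example for a laser-plasma accelerator: in a frame boosted toward the plasma, the laser wavelength (shortest scale) is Doppler-dilated while the plasma length (longest scale) contracts, so their ratio shrinks by roughly (1 + beta) * gamma^2. The numbers below are illustrative assumptions, not values from the paper.

```python
import math

lam = 0.8e-6        # laser wavelength in the lab frame [m] (assumed)
L = 0.01            # plasma length in the lab frame [m] (assumed)
gamma = 10.0        # Lorentz factor of the boosted frame (assumed)
beta = math.sqrt(1 - 1 / gamma**2)

lam_boost = lam * (1 + beta) * gamma   # Doppler-dilated wavelength
L_boost = L / gamma                    # length-contracted plasma

ratio_lab = L / lam
ratio_boost = L_boost / lam_boost
print(f"scale ratio: lab {ratio_lab:.0f} -> boosted {ratio_boost:.1f} "
      f"(compression ~ {(1 + beta) * gamma**2:.0f}x)")
```

Since the number of time steps a PIC code needs scales with this ratio of scales, the ~200x compression at gamma = 10 translates directly into a comparable reduction in run time.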

  5. Exploring the Physics Limitations of Compact High Gradient Accelerating Structures Simulations of the Electron Current Spectrometer Setup in Geant4

    CERN Document Server

    Van Vliet, Philine Julia

    2017-01-01

    The high field gradient of 100 MV/m that will be applied to the accelerator cavities of the Compact Linear Collider (CLIC) gives rise to the problem of RF breakdowns. The field collapses and a plasma of electrons and ions is formed in the cavity, preventing the RF field from penetrating the cavity. Electrons in the plasma are accelerated and ejected, resulting in a breakdown current of up to a few Ampères measured outside the cavities. These breakdowns lead to luminosity loss, so reducing their number is of great importance. For this, a better understanding of the physics behind RF breakdowns is needed. To study these breakdowns, the XBox 2 test facility has a spectrometer setup installed after the RF cavity that is being conditioned. For this report, a simulation of this spectrometer setup has been made using Geant4. Once a detailed simulation of the RF field and cavity has been made, it can be connected to this simulation of the spectrometer setup and used to recreate the data that has b...

  6. Robust Fault Diagnosis Design for Linear Multiagent Systems with Incipient Faults

    Directory of Open Access Journals (Sweden)

    Jingping Xia

    2015-01-01

    Full Text Available The design of a robust fault estimation observer is studied for linear multiagent systems subject to incipient faults. By considering the fact that incipient faults are in low-frequency domain, the fault estimation of such faults is proposed for discrete-time multiagent systems based on finite-frequency technique. Moreover, using the decomposition design, an equivalent conclusion is given. Simulation results of a numerical example are presented to demonstrate the effectiveness of the proposed techniques.

  7. Research on thermal fault simulation for marine diesel engine based on BOOST

    Institute of Scientific and Technical Information of China (English)

    黄加亮; 谢敢

    2015-01-01

    A mathematical model of the working process of a 4190 type marine diesel engine after conversion to electronic control is established using AVL BOOST simulation software. The heat release module of the model uses the MCC combustion model, and the heat transfer module uses the Woschni 1978 model. The model's reliability is validated against test data from the electronically controlled engine: the error between simulation and experiment is within 2%. On this basis, and considering the characteristics of medium-speed electronically controlled diesel engines, simulations at rated operating conditions are carried out for injection timing faults, injector nozzle wear, single-cylinder fuel cut-out, reduced intercooler efficiency, reduced compressor efficiency, and exhaust valve closing timing faults. The variation of the engine's performance indicators and thermal parameters under the different faults is explored, providing a feasible basis for fault monitoring and diagnosis of marine medium-speed electronically controlled diesel engines.

  8. Insulin adsorption on crystalline SiO2: Comparison between polar and nonpolar surfaces using accelerated molecular-dynamics simulations

    Science.gov (United States)

    Nejad, Marjan A.; Mücksch, Christian; Urbassek, Herbert M.

    2017-02-01

    Adsorption of insulin on polar and nonpolar surfaces of crystalline SiO2 (cristobalite and α-quartz) is studied using molecular dynamics simulation. Acceleration techniques are used in order to sample adsorption phase space efficiently and to identify realistic adsorption conformations. We find major differences between the polar and nonpolar surfaces. Electrostatic interactions govern the adsorption on polar surfaces and can be described by the alignment of the protein dipole with the surface dipole; hence spreading of the protein on the surface is irrelevant. On nonpolar surfaces, on the other hand, the van der Waals interaction dominates, inducing surface spreading of the protein.

  9. Characterization of the neutron for linear accelerator shielding wall using a Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dong Yeon [Dept. of Radiation Oncology, Dongnam Inst. of Radiological and Medical Science, Busan (Korea, Republic of); Park, Eun Tae [Dept. of Radiation Oncology, Inje University Busan Paik Hospital, Busan (Korea, Republic of); Kim, Jung Hoon [Dept. of Radiologic Science, College of Health Sciences, Catholic University of Pusan, Busan (Korea, Republic of)

    2016-03-15

    As a follow-up to previous studies evaluating the radioactivity of a linear accelerator's concrete shielding wall, the characteristics of the neutrons incident on the shielding wall were evaluated. As a result, the average fractions of incoming neutrons at the shielding wall were 4.63E-7% at 10 MV, 9.69E-6% at 15 MV, and 2.18E-5% at 20 MV. Thermal neutrons were found to account for approximately 18-33% of these. Although the neutron production rates appear numerically small, the effects of neutrons cannot be ignored when the operating time of the linear accelerator is taken into account. Accordingly, the radioactivity of the treatment room's radiation shielding wall should be considered.

  10. Simulating Earthquake Rupture and Off-Fault Fracture Response: Application to the Safety Assessment of the Swedish Nuclear Waste Repository

    KAUST Repository

    Falth, B.

    2014-12-09

    To assess the long-term safety of a deep repository of spent nuclear fuel, upper bound estimates of seismically induced secondary fracture shear displacements are needed. For this purpose, we analyze a model including an earthquake fault, which is surrounded by a number of smaller discontinuities representing fractures on which secondary displacements may be induced. Initial stresses are applied and a rupture is initiated at a predefined hypocenter and propagated at a specified rupture speed. During rupture we monitor shear displacements taking place on the nearby fracture planes in response to static as well as dynamic effects. As a numerical tool, we use the 3-Dimensional Distinct Element Code (3DEC) because it has the capability to handle numerous discontinuities with different orientations and at different locations simultaneously. In tests performed to benchmark the capability of our method to generate and propagate seismic waves, 3DEC generates results in good agreement with results from both the Stokes solution and the Compsyn code package. In a preliminary application of our method to the nuclear waste repository site at Forsmark, southern Sweden, we assume end-glacial stress conditions and rupture on a shallow, gently dipping, highly prestressed fault with low residual strength. The rupture generates nearly complete stress drop and an Mw 5.6 event on the 12 km² rupture area. Of the 1584 secondary fractures (150 m radius), with a wide range of orientations and locations relative to the fault, a majority move less than 5 mm. The maximum shear displacement is some tens of millimeters at 200 m fault-fracture distance.

  11. Modelling Beam Dynamics and RF Production in Two Beam Accelerators with a Hybrid Simulation Tool

    Science.gov (United States)

    Lidia, Steven

    2000-04-01

    A hybrid mapping and PIC code is described and applied to the study of transient-to-steady-state phenomena of beam dynamics and rf power production in relativistic-klystron two-beam accelerators. Beam and beamline parameters appropriate to a single device that produces 40-100 MW per meter over 10 meters with a 120 ns pulse length are described and used.

  12. Acceleration Induced Voltage Variations in the Electrocardiogram during Exhaustive Simulated Aerial Combat Maneuvering

    Science.gov (United States)

    1982-01-01

    [Garbled report-documentation-page scan. Recoverable details: Report SAM-TR-81-330, USAF School of Aerospace Medicine (VNB), Aerospace Medical Division (AFSC), Brooks AFB, TX 78235; report date 28 July 1981.]

  13. A Hardware-Accelerated Fast Adaptive Vortex-Based Flow Simulation Software Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Applied Scientific Research has recently developed a Lagrangian vortex-boundary element method for the grid-free simulation of unsteady incompressible...

  14. Anisotropic hydrogen diffusion in α-Zr and Zircaloy predicted by accelerated kinetic Monte Carlo simulations

    Science.gov (United States)

    Zhang, Yongfeng; Jiang, Chao; Bai, Xianming

    2017-01-01

    This report presents an accelerated kinetic Monte Carlo (KMC) method to compute the diffusivity of hydrogen in hcp metals and alloys, considering both thermally activated hopping and quantum tunneling. The acceleration is achieved by replacing regular KMC jumps in trapping energy basins formed by neighboring tetrahedral interstitial sites, with analytical solutions for basin exiting time and probability. Parameterized by density functional theory (DFT) calculations, the accelerated KMC method is shown to be capable of efficiently calculating hydrogen diffusivity in α-Zr and Zircaloy, without altering the kinetics of long-range diffusion. Above room temperature, hydrogen diffusion in α-Zr and Zircaloy is dominated by thermal hopping, with negligible contribution from quantum tunneling. The diffusivity predicted by this DFT + KMC approach agrees well with that from previous independent experiments and theories, without using any data fitting. The diffusivity along is found to be slightly higher than that along , with the anisotropy saturated at about 1.20 at high temperatures, resolving contradictory results in previous experiments. Demonstrated using hydrogen diffusion in α-Zr, the same method can be extended for on-lattice diffusion in hcp metals, or systems with similar trapping basins. PMID:28106154
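
    The trapping-basin problem that motivates the acceleration is easy to see in a minimal residence-time KMC loop: the time advanced per step is 1/R_total, so low in-basin barriers force tiny time steps, which the paper replaces with analytical basin-exit times and probabilities. A toy sketch (our illustration, not the authors' code; the attempt frequency and barrier are assumed values):

```python
import math, random

# Minimal residence-time kinetic Monte Carlo sketch (not the authors' code):
# a particle hops on a 1-D lattice with Arrhenius rates. When barriers are
# low, 1/R_total becomes tiny -- the trapping-basin problem the paper solves
# analytically. Rates and barriers below are illustrative assumptions.
random.seed(0)
kB = 8.617e-5            # eV/K, Boltzmann constant
T = 600.0                # K
nu = 1.0e13              # 1/s, attempt frequency (assumption)
Ea = 0.4                 # eV, hop barrier (assumption)

rate = nu * math.exp(-Ea / (kB * T))   # single-hop Arrhenius rate
site, t = 0, 0.0
for _ in range(10000):
    R_total = 2.0 * rate                         # two hops: left or right
    t += -math.log(random.random()) / R_total    # exponential waiting time
    site += 1 if random.random() < 0.5 else -1

# Single-trajectory diffusivity estimate D = <x^2> / (2 t), lattice units.
D = site**2 / (2.0 * t)
print(f"hop rate = {rate:.3e} 1/s, D ~ {D:.3e} (lattice units)")
```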

  15. Acceleration of Plasma Flows in the Solar Atmosphere Due to Magnetofluid Coupling - Simulation and Analysis

    CERN Document Server

    Mahajan, S M; Mikeladze, S V; Sigua, K I; Mahajan, Swadesh M.; Shatashvili, Nana L.; Mikeladze, Solomon V.; Sigua, Ketevan I.

    2005-01-01

    Within the framework of a two-fluid description possible pathways for the generation of fast flows (dynamical as well as steady) in the lower solar atmosphere is established. It is shown that a primary plasma flow (locally sub-Alfv\\'enic) is accelerated when interacting with emerging/ambient arcade--like closed field structures. The acceleration implies a conversion of thermal and field energies to kinetic energy of the flow. The time-scale for creating reasonably fast flows ($\\gtrsim 100$ km/s) is dictated by the initial ion skin depth while the amplification of the flow depends on local $\\beta $. It is shown, for the first time, that distances over which the flows become "fast" are $\\sim 0.01 R_s$ from the interaction surface; later the fast flow localizes (with dimensions $\\lesssim 0.05 R_S$) in the upper central region of the original arcade. For fixed initial temperature the final speed ($\\gtrsim 500 km/s$) of the accelerated flow, and the modification of the field structure are independent of the time-d...

  17. Large earthquakes and creeping faults

    Science.gov (United States)

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  18. Expected damage to accelerator equipment due to the impact of the full LHC beam: beam instrumentation, experiments and simulations

    CERN Document Server

    Burkart, Florian

    The Large Hadron Collider (LHC) is the biggest and most powerful particle accelerator in the world, designed to collide two proton beams with a particle momentum of 7 TeV/c each. The stored energy of 362 MJ in each beam is sufficient to melt 500 kg of copper or to evaporate about 300 liters of water. An accidental release of even a small fraction of the beam energy can cause severe damage to accelerator equipment. Reliable machine protection systems are necessary to safely operate the accelerator complex. To design a machine protection system, it is essential to know the damage potential of the stored beam and the consequences in case of a failure. One catastrophic failure mode is the loss of the entire beam into the aperture due to a problem with the beam dumping system. This thesis presents the simulation studies, the results of a benchmarking experiment, and a detailed target investigation for this failure case. In the experiment, solid copper cylinders were irradiated with the 440 GeV proton beam delivered by the ...
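
    The melting figure can be checked with a back-of-envelope energy balance; the copper material constants below are textbook values, not numbers from the thesis:

```python
# Back-of-envelope check (ours, not from the thesis) that 362 MJ suffices to
# melt ~500 kg of copper. Material constants are textbook values (assumptions).
cp = 385.0        # J/(kg K), specific heat of copper
T_melt = 1358.0   # K, melting point
T_room = 293.0    # K
L_fus = 205e3     # J/kg, latent heat of fusion

mass = 500.0      # kg
E_needed = mass * (cp * (T_melt - T_room) + L_fus)   # J, heat + melt
E_beam = 362e6                                       # J, stored beam energy

print(f"energy to melt {mass:.0f} kg Cu: {E_needed/1e6:.0f} MJ "
      f"(beam carries {E_beam/1e6:.0f} MJ)")
```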

  19. Accelerated molecular dynamics simulations of the octopamine receptor using GPUs: discovery of an alternate agonist-binding position.

    Science.gov (United States)

    Kastner, Kevin W; Izaguirre, Jesús A

    2016-10-01

    Octopamine receptors (OARs) perform key biological functions in invertebrates, making this class of G-protein coupled receptors (GPCRs) worth considering for insecticide development. However, no crystal structures and very little research exists for OARs. Furthermore, GPCRs are large proteins, are suspended in a lipid bilayer, and are activated on the millisecond timescale, all of which make conventional molecular dynamics (MD) simulations infeasible, even if run on large supercomputers. However, accelerated Molecular Dynamics (aMD) simulations can reduce this timescale to even hundreds of nanoseconds, while running the simulations on graphics processing units (GPUs) would enable even small clusters of GPUs to have processing power equivalent to hundreds of CPUs. Our results show that aMD simulations run on GPUs can successfully obtain the active and inactive state conformations of a GPCR on this reduced timescale. Furthermore, we discovered a potential alternate active-state agonist-binding position in the octopamine receptor which has yet to be observed and may be a novel GPCR agonist-binding position. These results demonstrate that a complex biological system with an activation process on the millisecond timescale can be successfully simulated on the nanosecond timescale using a simple computing system consisting of a small number of GPUs. Proteins 2016; 84:1480-1489. © 2016 Wiley Periodicals, Inc.

  20. Probabilistic fault localization with sliding windows

    Institute of Scientific and Technical Information of China (English)

    ZHANG Cheng; LIAO JianXin; LI TongHong; ZHU XiaoMin

    2012-01-01

    Fault localization is a central element in network fault management. This paper takes a weighted bipartite graph as the fault propagation model and presents a heuristic fault localization algorithm based on the idea of incremental coverage, which is resilient to an inaccurate fault propagation model and to noisy environments. Furthermore, a sliding window mechanism is proposed to tackle the inaccuracy of this algorithm in the presence of improper time windows. As shown in the simulation study, our scheme achieves a higher detection rate and a lower false positive rate than current fault localization algorithms, both in noisy environments and in the presence of inaccurate windows.
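
    The incremental-coverage idea can be sketched as a greedy heuristic over a weighted bipartite fault-symptom model. The toy example below is our illustration of that flavor of algorithm, not the paper's exact procedure:

```python
# Illustrative greedy "incremental coverage" heuristic (a sketch, not the
# paper's algorithm): given a weighted bipartite fault-propagation model
# mapping candidate faults to the symptoms they can cause, repeatedly pick
# the fault that best explains the still-uncovered observed symptoms.
model = {                       # fault -> {symptom: causal weight} (toy data)
    "f1": {"s1": 0.9, "s2": 0.7},
    "f2": {"s2": 0.6, "s3": 0.8},
    "f3": {"s4": 0.5},
}
observed = {"s1", "s2", "s3"}   # alarms seen in the current time window

hypothesis, uncovered = [], set(observed)
while uncovered:
    # weight of still-uncovered symptoms each fault would explain
    gain = {f: sum(w for s, w in model[f].items() if s in uncovered)
            for f in model}
    best = max(gain, key=gain.get)
    if gain[best] == 0:         # remaining symptoms are unexplainable (noise)
        break
    hypothesis.append(best)
    uncovered -= set(model[best])

print("fault hypothesis:", hypothesis)
```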

  1. Identification of Transient and Permanent Faults

    Institute of Scientific and Technical Information of China (English)

    李幼仪; 董新洲; 孙元章

    2003-01-01

    A new algorithm was developed for arcing fault detection, based on high-frequency current transients analyzed with wavelet transforms, to avoid automatic reclosing on permanent faults. The characteristics of arc currents during transient faults were investigated. The current curves of transient and permanent faults are quite similar, since the current variation caused by the fault arc is much smaller than the voltage variation. However, the fault current details are quite different because of arc extinguishing and reigniting. Dyadic wavelet transforms were used to identify the current variation, since the wavelet transform has time-frequency localization ability. Many electromagnetic transients program (EMTP) simulations have verified the feasibility of the algorithm.
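
    The detection principle can be illustrated with a one-level Haar transform (a simple stand-in for the dyadic wavelet used in the paper): an abrupt arc re-ignition step produces large detail coefficients, while a smooth current cycle does not. The signal below is synthetic:

```python
import math

# Sketch of the detection idea (not the paper's wavelet): a one-level Haar
# transform of the fault-current samples. A re-igniting arc produces abrupt
# steps, which appear as large detail coefficients; a smooth permanent-fault
# current does not. The signal is synthetic toy data.
def haar_details(x):
    """Level-1 Haar detail coefficients of an even-length signal."""
    return [(x[2*i] - x[2*i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]

n = 256
smooth = [math.sin(2 * math.pi * i / n) for i in range(n)]  # one current cycle
arcing = smooth[:]
arcing[100] += 0.8          # injected arc re-ignition transient (toy)

d_smooth = max(abs(c) for c in haar_details(smooth))
d_arcing = max(abs(c) for c in haar_details(arcing))
print(f"max |detail|: smooth={d_smooth:.3f}, arcing={d_arcing:.3f}")
```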

  2. EGS4 and MCNP4b MC Simulation of a Siemens KD2 Accelerator in 6 MV Photon Mode

    CERN Document Server

    Chaves, A; Fragoso, M; Lopes, C; Oliveira, C; Peralta, L; Rodrigues, P; Seco, J; Trindade, A

    2001-01-01

    The geometry of a Siemens Mevatron KD2 linear accelerator in 6 MV photon mode was modeled with EGS4 and MCNP4b. Energy spectra and other phase space distributions have been extensively compared in different planes along the beam line. The differences found have been evaluated both qualitatively and quantitatively. The final aim was that both codes, running on different operating systems and with a common set of simulation conditions, meet the requirement of fitting the experimental depth dose curves and dose profiles measured in water for different field sizes. Whereas depth dose calculations are, to a certain extent, insensitive to some simulation parameters such as the nominal electron energy, dose profiles have proved to be a much more sensitive indicator. Fine energy tuning was performed, and the best fit was obtained for a nominal electron energy of 6.15 MeV.

  3. SIMULATIONS OF PARTICLE ACCELERATION BEYOND THE CLASSICAL SYNCHROTRON BURNOFF LIMIT IN MAGNETIC RECONNECTION: AN EXPLANATION OF THE CRAB FLARES

    Energy Technology Data Exchange (ETDEWEB)

    Cerutti, B.; Werner, G. R.; Uzdensky, D. A. [Center for Integrated Plasma Studies, Physics Department, University of Colorado, UCB 390, Boulder, CO 80309-0390 (United States); Begelman, M. C., E-mail: benoit.cerutti@colorado.edu, E-mail: greg.werner@colorado.edu, E-mail: uzdensky@colorado.edu, E-mail: mitch@jila.colorado.edu [JILA, University of Colorado and National Institute of Standards and Technology, UCB 440, Boulder, CO 80309-0440 (United States)

    2013-06-20

    It is generally accepted that astrophysical sources cannot emit synchrotron radiation above 160 MeV in their rest frame. This limit is given by the balance between the accelerating electric force and the radiation reaction force acting on the electrons. The discovery of synchrotron gamma-ray flares in the Crab Nebula, well above this limit, challenges this classical picture of particle acceleration. To overcome this limit, particles must accelerate in a region of high electric field and low magnetic field. This is possible only with a non-ideal magnetohydrodynamic process, like magnetic reconnection. We present the first numerical evidence of particle acceleration beyond the synchrotron burnoff limit, using a set of two-dimensional particle-in-cell simulations of ultra-relativistic pair plasma reconnection. We use a new code, Zeltron, that includes self-consistently the radiation reaction force in the equation of motion of the particles. We demonstrate that the most energetic particles move back and forth across the reconnection layer, following relativistic Speiser orbits. These particles then radiate >160 MeV synchrotron radiation rapidly, within a fraction of a full gyration, after they exit the layer. Our analysis shows that the high-energy synchrotron flux is highly variable in time because of the strong anisotropy and inhomogeneity of the energetic particles. We discover a robust positive correlation between the flux and the cut-off energy of the emitted radiation, mimicking the effect of relativistic Doppler amplification. A strong guide field quenches the emission of >160 MeV synchrotron radiation. Our results are consistent with the observed properties of the Crab flares, supporting the reconnection scenario.
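
    The ~160 MeV figure matches the standard classical burnoff estimate, which balances the accelerating electric force against the radiation reaction force (with E = B) and caps the synchrotron photon energy at about 9 m_e c² / (4α), independent of the field strength. A quick cross-check:

```python
# Hedged cross-check (ours) of the ~160 MeV figure: the classical synchrotron
# "burnoff" limit, from balancing eE against the radiation reaction force
# with E = B, is about 9 * m_e c^2 / (4 * alpha), independent of B.
alpha = 1.0 / 137.036      # fine-structure constant
mec2_MeV = 0.511           # electron rest energy, MeV

eps_max = 9.0 * mec2_MeV / (4.0 * alpha)
print(f"synchrotron burnoff limit ~ {eps_max:.0f} MeV")
```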

  4. Research on Simulation of Contact Force of Fault Planetary Gear Train

    Institute of Scientific and Technical Information of China (English)

    向玲; 陈涛

    2015-01-01

    In order to obtain the changing pattern of the contact force of a faulty planetary gear train, Pro/E and ADAMS were used to build a model of the faulty gear train, and a contact-force computation method based on Hertz theory was introduced. With this dynamic model, the contact force and the frequency spectrum of the gear meshing were simulated. The simulation results show that, in the time domain, the contact force of the faulty planetary gear train exhibits significant periodic impacts and an obvious modulation phenomenon. In the frequency domain, not only does the fault frequency appear, but sidebands spaced at the fault frequency also appear around the meshing frequency and its harmonics. The frequency-domain results also show that the carrier wave is the meshing frequency and the modulation wave is the revolution frequency of the planet wheel.
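
    ADAMS-style contact models of this kind typically rest on the Hertz law F = K·δ^1.5. A sketch with illustrative steel parameters (our assumptions, not the paper's gear data):

```python
# Sketch of the Hertz contact-force law underlying such ADAMS models (our own
# illustration, not the paper's parameters): for two elastic bodies,
# F = K * delta^1.5 with K = (4/3) * E_eff * sqrt(R_eff).
E1 = E2 = 2.07e11        # Pa, steel Young's modulus (assumption)
nu1 = nu2 = 0.29         # Poisson ratio (assumption)
R1, R2 = 0.03, 0.05      # m, equivalent tooth-profile radii (assumption)

E_eff = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)   # effective modulus
R_eff = 1.0 / (1.0 / R1 + 1.0 / R2)                     # effective radius
K = (4.0 / 3.0) * E_eff * R_eff**0.5                    # Hertz stiffness

delta = 5e-6             # m, penetration depth (assumption)
F = K * delta**1.5
print(f"K = {K:.3e} N/m^1.5, contact force F = {F:.1f} N")
```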

  5. Integrated design of fault reconstruction and fault-tolerant control against actuator faults using learning observers

    Science.gov (United States)

    Jia, Qingxian; Chen, Wen; Zhang, Yingchun; Li, Huayi

    2016-12-01

    This paper addresses the problem of integrated fault reconstruction and fault-tolerant control in linear systems subject to actuator faults via learning observers (LOs). A reconfigurable fault-tolerant controller is designed based on the constructed LO to compensate for the influence of actuator faults by stabilising the closed-loop system. An integrated design of the proposed LO and the fault-tolerant controller is explored such that their performance can be considered simultaneously and their coupling problem can be effectively solved. In addition, such an integrated design is formulated in terms of linear matrix inequalities (LMIs) that can be conveniently solved in a unified framework using LMI optimisation techniques. Finally, simulation studies on a micro-satellite attitude control system are provided to verify the effectiveness of the proposed approach.
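
    The core requirement behind such LMI designs is that the observer gain L makes the error matrix A - LC Hurwitz, so the estimation error (and hence the reconstructed fault) converges. A toy simulation of the error dynamics for an assumed 2-state system (our illustration, not the paper's satellite model):

```python
# Toy sketch of the idea behind the LMI design (not the paper's observer):
# if the gain L makes (A - L*C) Hurwitz, the state-estimation error
# e' = (A - L*C) e decays. We simulate the error dynamics for an
# illustrative 2-state system with forward Euler.
A = [[0.0, 1.0],
     [-2.0, -0.5]]
C = [1.0, 0.0]
L = [1.0, 1.5]                      # observer gain (assumption)

# closed-loop error matrix A - L*C (outer product L C)
Acl = [[A[i][j] - L[i] * C[j] for j in range(2)] for i in range(2)]

e = [1.0, -1.0]                     # initial estimation error
dt = 0.01
for _ in range(2000):               # 20 s of forward-Euler integration
    de = [sum(Acl[i][j] * e[j] for j in range(2)) for i in range(2)]
    e = [e[i] + dt * de[i] for i in range(2)]

err_norm = (e[0]**2 + e[1]**2) ** 0.5
print(f"|e(20 s)| = {err_norm:.2e}")   # decays toward zero
```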

  6. Coronal heating and wind acceleration by nonlinear Alfvén waves – global simulations with gravity, radiation, and conduction

    Directory of Open Access Journals (Sweden)

    T. K. Suzuki

    2008-03-01

    We review our recent results of global one-dimensional (1-D) MHD simulations of the acceleration of solar and stellar winds. We impose transverse photospheric motions corresponding to the granulations, which generate outgoing Alfvén waves. We treat the propagation and dissipation of the Alfvén waves, and the consequent heating from the photosphere, by dynamical simulations in a self-consistent manner. Nonlinear dissipation of Alfvén waves becomes quite effective owing to the stratification of the atmosphere (the outward decrease of the density). We show that the coronal heating and the solar wind acceleration in the open magnetic field regions are a natural consequence of the footpoint fluctuations of the magnetic fields at the surface (photosphere). We find that the properties of the solar wind depend sensitively on the fluctuation amplitudes at the solar surface because of the nonlinearity of the Alfvén waves, and that the wind speed at 1 AU is mainly controlled by the field strength and geometry of the flux tubes. Based on these results, we point out that both fast and slow solar winds can be explained by the dissipation of nonlinear Alfvén waves in a unified manner. We also discuss winds from red giant stars driven by Alfvén waves, focusing on aspects that differ from the solar wind.

  7. H$^{-}$ ion source for CERN's Linac4 accelerator: simulation, experimental validation and optimization of the hydrogen plasma

    CERN Document Server

    Mattei, Stefano; Lettry, Jacques

    2017-07-25

    Linac4 is the new negative hydrogen ion (H$^-$) linear accelerator of the European Organization for Nuclear Research (CERN). Its ion source operates on the principle of Radio-Frequency Inductively Coupled Plasma (RF-ICP) and is required to provide 50 mA of H$^-$ beam in pulses of 600 μs, with a repetition rate of up to 2 Hz and within an RMS emittance of 0.25 π mm mrad, in order to fulfil the requirements of the accelerator. This thesis is dedicated to the characterization of the hydrogen plasma in the Linac4 H$^-$ ion source. We have developed a Particle-In-Cell Monte Carlo Collision (PIC-MCC) code to simulate the RF-ICP heating mechanism and performed measurements to benchmark the fraction of the simulation outputs that can be experimentally accessed. The code solves self-consistently the interaction between the electromagnetic field generated by the RF coil and the resulting plasma response, including a kinetic description of charged and neutral species. A fully-implicit implementation allowed to si...

  8. Monte Carlo simulation of electron beams from an accelerator head using PENELOPE

    Science.gov (United States)

    Sempau, J.; Sánchez-Reyes, A.; Salvat, F.; Oulad ben Tahar, H.; Jiang, S. B.; Fernández-Varea, J. M.

    2001-04-01

    The Monte Carlo code PENELOPE has been used to simulate electron beams from a Siemens Mevatron KDS linac with nominal energies of 6, 12 and 18 MeV. Owing to its accuracy, which stems from that of the underlying physical interaction models, PENELOPE is suitable for simulating problems of interest to the medical physics community. It includes a geometry package that allows the definition of complex quadric geometries, such as those of irradiation instruments, in a straightforward manner. Dose distributions in water simulated with PENELOPE agree well with experimental measurements using a silicon detector and a monitoring ionization chamber. Insertion of a lead slab in the incident beam at the surface of the water phantom produces sharp variations in the dose distributions, which are correctly reproduced by the simulation code. Results from PENELOPE are also compared with those of equivalent simulations with the EGS4-based user codes BEAM and DOSXYZ. Angular and energy distributions of electrons and photons in the phase-space plane (at the downstream end of the applicator) obtained from both simulation codes are similar, although significant differences do appear in some cases. These differences, however, are shown to have a negligible effect on the calculated dose distributions. Various practical aspects of the simulations, such as the calculation of statistical uncertainties and the effect of the `latent' variance in the phase-space file, are discussed in detail.

  9. Enabling Lorentz boosted frame particle-in-cell simulations of laser wakefield acceleration in quasi-3D geometry

    Science.gov (United States)

    Yu, Peicheng; Xu, Xinlu; Davidson, Asher; Tableman, Adam; Dalichaouch, Thamine; Li, Fei; Meyers, Michael D.; An, Weiming; Tsung, Frank S.; Decyk, Viktor K.; Fiuza, Frederico; Vieira, Jorge; Fonseca, Ricardo A.; Lu, Wei; Silva, Luis O.; Mori, Warren B.

    2016-07-01

    When modeling laser wakefield acceleration (LWFA) using the particle-in-cell (PIC) algorithm in a Lorentz boosted frame, the plasma is drifting relativistically at β_b c towards the laser, which can lead to a computational speedup of ∼γ_b² = (1 - β_b²)⁻¹. Meanwhile, when LWFA is modeled in the quasi-3D geometry, in which the electromagnetic fields and current are decomposed into a limited number of azimuthal harmonics, speedups are achieved by modeling three-dimensional (3D) problems with computational loads on the order of those of two-dimensional r-z simulations. Here, we describe a method to combine the speedups from the Lorentz boosted frame and quasi-3D algorithms. The key to the combination is the use of a hybrid Yee-FFT solver in the quasi-3D geometry that significantly mitigates the Numerical Cerenkov Instability (NCI), which inevitably arises in a Lorentz boosted frame due to the unphysical coupling of Langmuir modes and EM modes of the relativistically drifting plasma in these simulations. In addition, based on the space-time distribution of the LWFA data in the lab and boosted frames, we propose to use a moving window that follows the drifting plasma, instead of following the laser driver as is done in lab frame LWFA simulations, in order to further reduce the computational load. We describe the details of how the NCI is mitigated in the quasi-3D geometry, the setups for simulations which combine the Lorentz boosted frame, the quasi-3D geometry, and the moving window, and compare the results from these simulations against their corresponding lab frame cases. Good agreement is obtained among these sample simulations, particularly when there is no self-trapping, which demonstrates that it is possible to combine the Lorentz boosted frame and quasi-3D algorithms when modeling LWFA. We also discuss the preliminary speedups achieved in these sample simulations.
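
    The quoted speedup scaling is easy to evaluate numerically; the sketch below simply tabulates γ_b² for a few boost values:

```python
# Quick numeric illustration (ours) of the quoted scaling: a frame boosted
# with Lorentz factor gamma_b yields a computational speedup of order
# gamma_b^2 for boosted-frame LWFA simulations.
def gamma(beta):
    """Lorentz factor gamma = 1 / sqrt(1 - beta^2)."""
    return 1.0 / (1.0 - beta**2) ** 0.5

for gb in (2.0, 5.0, 10.0):
    print(f"gamma_b = {gb:4.1f} -> expected speedup ~ gamma_b^2 = {gb**2:6.1f}")

speedup = gamma(0.995) ** 2    # speedup for a beta_b = 0.995 boost
print(f"beta_b = 0.995 -> speedup ~ {speedup:.0f}")
```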

  10. GPU accelerated Monte-Carlo simulation of SEM images for metrology

    Science.gov (United States)

    Verduin, T.; Lokhorst, S. R.; Hagen, C. W.

    2016-03-01

    In this work we address the computation times of numerical studies in dimensional metrology. In particular, full Monte-Carlo simulation programs for scanning electron microscopy (SEM) image acquisition are known to be notoriously slow. Our quest in reducing the computation time of SEM image simulation has led us to investigate the use of graphics processing units (GPUs) for metrology. We have succeeded in creating a full Monte-Carlo simulation program for SEM images, which runs entirely on a GPU. The physical scattering models of this GPU simulator are identical to a previous CPU-based simulator, which includes the dielectric function model for inelastic scattering and also refinements for low-voltage SEM applications. As a case study for the performance, we considered the simulated exposure of a complex feature: an isolated silicon line with rough sidewalls located on a flat silicon substrate. The surface of the rough feature is decomposed into 408 012 triangles. We have used an exposure dose of 6 mC/cm², which corresponds to 6 553 600 primary electrons on average (Poisson distributed). We repeat the simulation for various primary electron energies: 300 eV, 500 eV, 800 eV, 1 keV, 3 keV and 5 keV. At first we run the simulation on a GeForce GTX480 from NVIDIA. The very same simulation is duplicated on our CPU-based program, for which we have used an Intel Xeon X5650. Apart from statistics in the simulation, no difference is found between the CPU and GPU simulated results. The GTX480 generates the images (depending on the primary electron energy) 350 to 425 times faster than a single-threaded Intel X5650 CPU. Although this is a tremendous speedup, we actually have not reached the maximum throughput because of the limited amount of available memory on the GTX480. Nevertheless, the speedup enables the fast acquisition of simulated SEM images for metrology. We now have the potential to investigate case studies in CD-SEM metrology, which otherwise would take unreasonable
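
    The stated dose and mean electron count can be checked for consistency through N = dose × area / e; the sketch below recovers the implied exposed area (our back-of-envelope, not a number stated in the text):

```python
# Consistency check (ours) between the stated exposure dose and the average
# number of primary electrons: N = dose * area / e, so the quoted mean of
# 6 553 600 electrons at 6 mC/cm^2 corresponds to the exposed area below.
e_charge = 1.602e-19       # C, elementary charge
dose = 6e-3                # C/cm^2, exposure dose from the abstract
N_mean = 6_553_600         # average primary electrons (Poisson mean)

area_cm2 = N_mean * e_charge / dose
area_nm2 = area_cm2 * 1e14            # 1 cm^2 = 1e14 nm^2
print(f"implied exposed area ~ {area_nm2:.0f} nm^2 "
      f"(~{area_nm2**0.5:.0f} nm square)")
```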

  11. Simulation of accelerated strip cooling on the hot rolling mill run-out roller table

    Directory of Open Access Journals (Sweden)

    E.Makarov

    2016-07-01

    A mathematical model is presented of the thermal state of the metal on the run-out roller table of a continuous wide hot strip mill. The model takes into account the heat generation due to the polymorphic γ → α transformation of supercooled austenite and the influence of the chemical composition of the steel on the physical properties of the metal. The model allows calculation of accelerated cooling modes for strips on the run-out roller table of a continuous wide hot strip mill. The coiling temperature calculation error does not exceed 20°C for 98.5% of strips of low-carbon and low-alloy steels.
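
    The structure of such a model can be sketched as lumped Newton cooling plus a latent-heat source that is active inside the γ → α transformation temperature range. All parameters below are illustrative assumptions, not the paper's calibrated values:

```python
# Minimal sketch (ours, not the paper's model) of accelerated strip cooling
# with heat release from the gamma -> alpha transformation: lumped Newton
# cooling plus a latent-heat source active while the strip temperature is
# inside the transformation range. All parameters are illustrative.
h = 0.15         # 1/s, effective cooling coefficient (assumption)
T_w = 30.0       # C, cooling-water temperature
T_lo, T_hi = 600.0, 750.0   # C, transformation range (assumption)
q_tr = 10.0      # C/s, heating rate from transformation enthalpy (assumption)

T, dt = 900.0, 0.01          # start temperature (C), time step (s)
for _ in range(1000):        # 10 s on the run-out table
    dT = -h * (T - T_w)      # Newton cooling
    if T_lo < T < T_hi:      # polymorphic transformation releases heat
        dT += q_tr
    T += dt * dT

print(f"final strip temperature ~ {T:.0f} C")
```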

  12. Test simulation of neutron damage to electronic components using accelerator facilities

    Energy Technology Data Exchange (ETDEWEB)

    King, D.B., E-mail: dbking@sandia.gov; Fleming, R.M.; Bielejec, E.S.; McDonald, J.K.; Vizkelethy, G.

    2015-12-15

    The purpose of this work is to demonstrate equivalent bipolar transistor damage response to neutrons and silicon ions. We report on irradiation tests performed at the White Sands Missile Range Fast Burst Reactor, the Sandia National Laboratories (SNL) Annular Core Research Reactor, the SNL SPHINX accelerator, and the SNL Ion Beam Laboratory using commercial silicon npn bipolar junction transistors (BJTs) and III–V Npn heterojunction bipolar transistors (HBTs). Late time and early time gain metrics as well as defect spectra measurements are reported.
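
    Displacement-damage equivalence studies of this kind commonly quantify bipolar gain degradation with the Messenger-Spratt relation, 1/β(Φ) = 1/β0 + K·Φ. A sketch with assumed constants (the paper's specific gain metrics may differ):

```python
# Sketch of the standard displacement-damage metric for BJTs (the
# Messenger-Spratt relation; our illustration -- the paper's metrics may
# differ): 1/beta(Phi) = 1/beta0 + K * Phi, i.e. reciprocal gain grows
# linearly with particle fluence. Constants below are assumptions.
beta0 = 150.0        # pre-irradiation current gain (assumption)
K = 2.0e-15          # damage constant, cm^2 (assumption)

for fluence in (1e12, 1e13, 1e14):   # 1-MeV-equivalent neutrons / cm^2
    beta = 1.0 / (1.0 / beta0 + K * fluence)
    print(f"Phi = {fluence:.0e} n/cm^2 -> beta = {beta:.1f}")
```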

  13. Computer simulation of 2-D and 3-D ion beam extraction and acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Ido, Shunji; Nakajima, Yuji [Saitama Univ., Urawa (Japan). Faculty of Engineering

    1997-03-01

    Two-dimensional and three-dimensional codes have been developed to study the physical features of ion beams in the extraction and acceleration stages. Using the two-dimensional code, the design of the first electrode (plasma grid) is examined with regard to beam divergence. In the computational studies using the three-dimensional code, an off-axis ion beam model is investigated. It is found that the deflection angle of the ion beam is proportional to the gap displacement of the electrodes. (author)

  14. GPU-Accelerated Sparse Matrix Solvers for Large-Scale Simulations Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Many large-scale numerical simulations can be broken down into common mathematical routines. While the applications may differ, the need to perform functions such as...

  15. Monte-Carlo Simulation of the Features of Bi-Reactor Accelerator Driven Systems

    CERN Document Server

    Bznuni, S A; Khudaverdian, A G; Barashenkov, V S; Sosnin, A N; Polyanskii, A A

    2002-01-01

    Parameters of accelerator-driven systems containing two "cascade" subcritical assemblies (a liquid-metal fast reactor, used as a neutron booster, and a thermal reactor, where the main heat production takes place) are investigated. Three main reactor cores, analogous to the VVER-1000, MSBR-1000 and CANDU-6 reactors, are considered. Functioning in a safe mode (k_eff = 0.94-0.98), the systems under consideration demonstrate a much larger capacity over a wide range of k_eff than analogous systems without an intermediate fast booster reactor, while maintaining a thermal neutron flux density of Φ_max = 10^14 cm⁻²s⁻¹. Operating with the fast and thermal zones, they are capable of transmuting the whole scope of nuclear waste while reducing the requirements on the beam current of the accelerator by one order of magnitude. This appears most important when molten-salt thermal breeder reactor cores are considered as the main heat-generating zone.
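
    The sensitivity to k_eff follows from the source multiplication of a subcritical assembly, M = 1/(1 - k_eff). A quick illustration:

```python
# Quick illustration (ours) of why k_eff = 0.94-0.98 matters for ADS power:
# a subcritical core multiplies the external spallation source by
# M = 1 / (1 - k_eff), so small changes in k_eff translate into large
# changes in the accelerator beam current required for a given power.
for k_eff in (0.94, 0.96, 0.98):
    M = 1.0 / (1.0 - k_eff)
    print(f"k_eff = {k_eff:.2f} -> source multiplication M = {M:.1f}")

# relative beam-current reduction when going from k_eff = 0.94 to 0.98
ratio = (1.0 / (1.0 - 0.98)) / (1.0 / (1.0 - 0.94))
print(f"beam current reduction going 0.94 -> 0.98: x{ratio:.1f}")
```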

  16. 3D simulations of supernova remnants evolution including non-linear particle acceleration

    CERN Document Server

    Ferrand, Gilles; Ballet, Jean; Teyssier, Romain; Fraschetti, Federico

    2009-01-01

    If a sizeable fraction of the energy of supernova remnant shocks is channeled into energetic particles (commonly identified with Galactic cosmic rays), then the morphological evolution of the remnants must be distinctly modified. Evidence of such modifications has been recently obtained with the Chandra and XMM-Newton X-ray satellites. To investigate these effects, we coupled a semi-analytical kinetic model of shock acceleration with a 3D hydrodynamic code (by means of an effective adiabatic index). This enables us to study the time-dependent compression of the region between the forward and reverse shocks due to the back-reaction of accelerated particles, concomitantly with the development of the Rayleigh-Taylor hydrodynamic instability at the contact discontinuity. Density profiles depend critically on the injection level η of particles: for η up to about 10^-4 modifications are weak and progressive; for η of the order of 10^-3 modifications are strong and immediate. Nevertheless, the extension of the...
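
    The effective-adiabatic-index device can be illustrated with a simple pressure-weighted mix (our assumption for illustration): as the relativistic-particle pressure fraction grows, γ_eff drops from 5/3 toward 4/3, and the strong-shock compression ratio r = (γ+1)/(γ-1) rises from 4 toward 7, which is what compresses the intershock region:

```python
# Illustration (ours) of the "effective adiabatic index" device: raising the
# relativistic-particle pressure fraction w lowers gamma_eff from 5/3 toward
# 4/3, increasing the strong-shock compression r = (gamma+1)/(gamma-1) from
# 4 toward 7. The simple pressure-weighted mix below is an assumption.
def gamma_eff(w):
    """w = fraction of total pressure carried by relativistic particles."""
    return (5.0 / 3.0) * (1 - w) + (4.0 / 3.0) * w

for w in (0.0, 0.5, 1.0):
    g = gamma_eff(w)
    r = (g + 1.0) / (g - 1.0)
    print(f"w = {w:.1f}: gamma_eff = {g:.3f}, compression r = {r:.2f}")
```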

  17. Accelerated Nodal Discontinuous Galerkin Simulations for Reverse Time Migration with Large Clusters

    CERN Document Server

    Modave, Axel; Mulder, Wim A; Warburton, Tim

    2015-01-01

    Improving both accuracy and computational performance of numerical tools is a major challenge for seismic imaging and generally requires specialized implementations to make full use of modern parallel architectures. We present a computational strategy for reverse-time migration (RTM) with accelerator-aided clusters. A new imaging condition computed from the pressure and velocity fields is introduced. The model solver is based on a high-order discontinuous Galerkin time-domain (DGTD) method for the pressure-velocity system with unstructured meshes and multi-rate local time-stepping. We adopted the MPI+X approach for distributed programming where X is a threaded programming model. In this work we chose OCCA, a unified framework that makes use of major multi-threading languages (e.g. CUDA and OpenCL) and offers the flexibility to run on several hardware architectures. DGTD schemes are suitable for efficient computations with accelerators thanks to localized element-to-element coupling and the dense algebraic ope...

  18. On the use of reverse Brownian motion to accelerate hybrid simulations

    Science.gov (United States)

    Bakarji, Joseph; Tartakovsky, Daniel M.

    2017-04-01

    Multiscale and multiphysics simulations are two rapidly developing fields of scientific computing. Efficient coupling of continuum (deterministic or stochastic) constitutive solvers with their discrete (stochastic, particle-based) counterparts is a common challenge in both kinds of simulations. We focus on interfacial, tightly coupled simulations of diffusion that combine continuum and particle-based solvers. The latter employs the reverse Brownian motion (rBm), a Monte Carlo approach that allows one to enforce inhomogeneous Dirichlet, Neumann, or Robin boundary conditions and is trivially parallelizable. We discuss numerical approaches for improving the accuracy of rBm in the presence of inhomogeneous Neumann boundary conditions and alternative strategies for coupling the rBm solver with its continuum counterpart. Numerical experiments are used to investigate the convergence, stability, and computational efficiency of the proposed hybrid algorithm.
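
    As a toy illustration of the particle-based side of such hybrids, the following sketch (our own, with illustrative names and parameters, not the paper's solver) estimates a 1-D Dirichlet problem by averaging the boundary values at which Brownian walkers released from the evaluation point are absorbed:

    ```python
    import numpy as np

    def walk_to_boundary(x, n_walkers, dt, rng):
        """Estimate u(x) for the steady diffusion problem u'' = 0 on
        (0, 1) with u(0) = 0 and u(1) = 1: release Brownian walkers
        at x and average the boundary value where each is absorbed."""
        hits = 0
        for _ in range(n_walkers):
            p = x
            while 0.0 < p < 1.0:
                p += np.sqrt(dt) * rng.standard_normal()  # Euler-Maruyama step
            if p >= 1.0:
                hits += 1   # boundary value 1 collected; left exit collects 0
        return hits / n_walkers

    estimate = walk_to_boundary(0.3, 2000, 1e-3, np.random.default_rng(1))
    # The exact solution is u(x) = x, so the estimate should be near 0.3.
    ```

    Each walker is independent, which is the sense in which such Monte Carlo boundary solvers are trivially parallelizable.
    
    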

  19. Computational acceleration of orbital neutral sensor ionizer simulation through phenomena separation

    Science.gov (United States)

    Font, Gabriel I.

    2016-07-01

    Simulation of orbital phenomena is often difficult because of the non-continuum nature of the flow, which forces the use of particle methods, and the disparate time scales, which make long run times necessary. In this work, the computational work load has been reduced by taking advantage of the low number of collisions between different species. This allows each population of particles to be brought into convergence separately using a time step size optimized for its particular motion. The converged populations are then brought together to simulate low-probability phenomena, such as ionization or excitation, on much longer time scales. This technique reduces run times by a factor of 10^3-10^4. The technique was applied to the simulation of a low earth orbit neutral species sensor with an ionizing element. Comparison with laboratory experiments of ion impacts generated by electron flux shows very good agreement.

  20. Light scattering microscopy measurements of single nuclei compared with GPU-accelerated FDTD simulations

    Science.gov (United States)

    Stark, Julian; Rothe, Thomas; Kieß, Steffen; Simon, Sven; Kienle, Alwin

    2016-04-01

    Single cell nuclei were investigated using two-dimensional angularly and spectrally resolved scattering microscopy. We show that even for a qualitative comparison of experimental and theoretical data, the standard Mie model of a homogeneous sphere proves to be insufficient. Hence, an accelerated finite-difference time-domain method using a graphics processor unit and domain decomposition was implemented to analyze the experimental scattering patterns. The measured cell nuclei were modeled as single spheres with randomly distributed spherical inclusions of different size and refractive index representing the nucleoli and clumps of chromatin. Taking into account the nuclear heterogeneity of a large number of inclusions yields a qualitative agreement between experimental and theoretical spectra and illustrates the impact of the nuclear micro- and nanostructure on the scattering patterns.

  1. Seismological Studies for Tensile Faults

    Directory of Open Access Journals (Sweden)

    Gwo-Bin Ou

    2008-01-01

    Full Text Available A shear slip fault, equivalent to a double-couple source, has often been assumed as the kinematic source model in ground motion simulation. Estimation of seismic moment based on the shear slip model indicates the size of an earthquake. However, if the dislocation of the hanging wall relative to the footwall includes not only a shear slip tangent to the fault plane but also expansion and compression normal to the fault plane, the radiated seismic waves will differ from those of a pure shear slip fault. Taking into account the effects of expansion and compression normal to the fault plane, we can resolve the tension and pressure axes as well as the fault plane solution more exactly from ground motions than previously, and can evaluate how far a fault zone opens or contracts during a developing rupture. With the addition of a tensile angle and Poisson's ratio for the medium, the tensile fault has five degrees of freedom, extending the shear slip fault, which has only three: strike, dip, and slip.

  2. Research on Fault Simulation Technology Based on Virtual Prototype for Recoil System

    Institute of Scientific and Technical Information of China (English)

    张静波; 程力; 胡慧斌; 曹立军

    2012-01-01

    To address the poor testability and scarce fault knowledge of recoil systems, the virtual prototype is introduced into the fields of fault simulation and knowledge acquisition as a new kind of quantitative reasoning mechanism. The recoil system of a new-type self-propelled gun is taken as the research object, and its virtual prototype is built in the Pro/E and MSC.ADAMS environments. Taking the firing process as a simulation example, the key factors influencing the working performance of the recoil system and typical fault processes are simulated, providing references for fault knowledge acquisition and the determination of fault thresholds.

  3. Development of attenuation relation for the near fault ground motion from the characteristic earthquake

    Institute of Scientific and Technical Information of China (English)

    SHI Bao-ping; LIU Bo-yan; ZHANG Jian

    2007-01-01

    A composite source model has been used to simulate broadband strong ground motion with an associated fault rupture process. A scenario earthquake fault model has been used to generate 1 000 earthquake events with a magnitude of Mw8.0. The simulated results show that, for a characteristic event with strike-slip faulting, the character of near-fault ground motion is strongly dependent on the rupture directivity. For a given distance between the sites and the fault, the ground motion in the forward direction (Site A) is much larger than that in the backward direction (Site C) and that close to the fault (Site B). The SH waves radiated from the fault, which correspond to the fault-normal component, play a key role in the ground motion amplification. For sites A, B, and C, statistical analysis shows that the ratio of their peak ground accelerations (PGA) is 2.15:1.5:1, with standard deviations of about 0.12, 0.11, and 0.13, respectively. If these results are applied in current probabilistic seismic hazard analysis (PSHA), then, for lower annual frequencies of exceedance of peak ground acceleration, the PGA predicted from the hazard curve could be reduced by 30% or more compared with the current PSHA model used in developing the seismic hazard map in the USA. Therefore, with near-fault ground motion caused by rupture directivity taken into account, the regression model used in the development of the regional attenuation relation should be modified accordingly.

  4. Time accelerated Monte Carlo simulations of biological networks using the binomial tau-leap method.

    Science.gov (United States)

    Chatterjee, Abhijit; Mayawala, Kapil; Edwards, Jeremy S; Vlachos, Dionisios G

    2005-05-01

    Developing a quantitative understanding of intracellular networks requires simulations and computational analyses. However, traditional differential equation modeling tools are often inadequate due to the stochasticity of intracellular reaction networks that can potentially influence the phenotypic characteristics. Unfortunately, stochastic simulations are computationally too intense for most biological systems. Herein, we have utilized the recently developed binomial tau-leap method to carry out stochastic simulations of the epidermal growth factor receptor induced mitogen activated protein kinase cascade. Results indicate that the binomial tau-leap method is computationally 100-1000 times more efficient than the exact stochastic simulation algorithm of Gillespie. Furthermore, the binomial tau-leap method avoids negative populations and accurately captures the species populations along with their fluctuations despite the large difference in their size. http://www.dion.che.udel.edu/multiscale/Introduction.html. Fortran 90 code available for academic use by email. Details about the binomial tau-leap algorithm, software and a manual are available at the above website.
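
    A minimal sketch of the binomial tau-leap idea, applied to a reversible isomerization A <-> B rather than the MAPK cascade of the paper (the reaction network, rates, and names here are our own illustration): because each channel's firing count is drawn from a binomial bounded by the available reactant copies, populations can never go negative, unlike the Poisson tau-leap.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def binomial_tau_leap(x, rates, tau, steps):
        """Binomial tau-leap for A <-> B (k1: A -> B, k2: B -> A).
        For a unimolecular channel with reactant count n and propensity
        a = k * n, the number of firings in tau is Binomial(n, k * tau),
        so at most n molecules are consumed per leap."""
        k1, k2 = rates
        traj = [x.copy()]
        for _ in range(steps):
            a, b = x
            n1 = rng.binomial(a, min(1.0, k1 * tau)) if a > 0 else 0
            n2 = rng.binomial(b, min(1.0, k2 * tau)) if b > 0 else 0
            x = np.array([a - n1 + n2, b + n1 - n2])
            traj.append(x.copy())
        return np.array(traj)

    # Start with 1000 copies of A; equilibrium has A/B = k2/k1 = 0.5.
    traj = binomial_tau_leap(np.array([1000, 0]), (1.0, 0.5), 0.01, 500)
    ```

    The total copy number is conserved exactly, and the trajectory relaxes toward the stationary mean of about one third of the molecules in state A.
    
    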

  5. Accelerating all-atom MD simulations of lipids using a modified virtual-sites technique

    DEFF Research Database (Denmark)

    Loubet, Bastien; Kopec, Wojciech; Khandelia, Himanshu

    2014-01-01

    We present two new implementations of the virtual sites technique which completely suppresses the degrees of freedom of the hydrogen atoms in a lipid bilayer allowing for an increased time step of 5 fs in all-atom simulations of the CHARMM36 force field. One of our approaches uses the derivation ...

  6. GPU-accelerated molecular dynamics simulation for study of liquid crystalline flows

    Science.gov (United States)

    Sunarso, Alfeus; Tsuji, Tomohiro; Chono, Shigeomi

    2010-08-01

    We have developed a GPU-based molecular dynamics simulation for the study of flows of fluids with anisotropic molecules such as liquid crystals. An application of the simulation to the study of macroscopic flow (backflow) generation by molecular reorientation in a nematic liquid crystal under the application of an electric field is presented. The computations of intermolecular force and torque are parallelized on the GPU using the cell-list method, and an efficient algorithm to update the cell lists was proposed. Some important issues in the implementation of computations that involve a large number of arithmetic operations and data on the GPU that has limited high-speed memory resources are addressed extensively. Despite the relatively low GPU occupancy in the calculation of intermolecular force and torque, the computation on a recent GPU is about 50 times faster than that on a single core of a recent CPU, thus simulations involving a large number of molecules using a personal computer are possible. The GPU-based simulation should allow an extensive investigation of the molecular-level mechanisms underlying various macroscopic flow phenomena in fluids with anisotropic molecules.
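
    The cell-list bookkeeping that such GPU force loops rely on can be sketched in a few lines of CPU-side Python; this is an illustrative reimplementation of the general method, not the authors' code, and all names are ours.

    ```python
    import numpy as np

    def build_cell_list(pos, box, rcut):
        """Bin particles into cubic cells of side >= rcut so every
        interaction partner of a particle lies in its own cell or one
        of the 26 adjacent cells (periodic cubic box; assumes at least
        three cells per dimension)."""
        ncell = max(1, int(box // rcut))
        side = box / ncell
        cells = {}
        for i, p in enumerate(pos):
            key = tuple((p // side).astype(int) % ncell)
            cells.setdefault(key, []).append(i)
        return cells, ncell

    def neighbours(i, pos, cells, ncell, box, rcut):
        """Indices j within rcut of particle i, scanning only the 27
        cells around i and using the minimum-image convention."""
        side = box / ncell
        c = tuple((pos[i] // side).astype(int) % ncell)
        found = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    key = ((c[0] + dx) % ncell, (c[1] + dy) % ncell,
                           (c[2] + dz) % ncell)
                    for j in cells.get(key, []):
                        if j == i:
                            continue
                        d = pos[j] - pos[i]
                        d -= box * np.round(d / box)  # minimum image
                        if float(d @ d) < rcut * rcut:
                            found.append(j)
        return found

    # Cross-check against a brute-force O(N^2) search:
    rng = np.random.default_rng(2)
    pts = rng.random((40, 3)) * 10.0
    cells, ncell = build_cell_list(pts, 10.0, 2.5)
    nbrs0 = set(neighbours(0, pts, cells, ncell, 10.0, 2.5))
    brute0 = set()
    for j in range(1, 40):
        d = pts[j] - pts[0]
        d -= 10.0 * np.round(d / 10.0)
        if float(d @ d) < 2.5 ** 2:
            brute0.add(j)
    ```

    The payoff is that the per-particle search cost becomes independent of the total particle count, which is what makes the force and torque computations amenable to one-thread-per-particle GPU parallelization.
    
    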

  7. Accelerating molecular simulations of proteins using Bayesian inference on weak information

    Science.gov (United States)

    Perez, Alberto; MacCallum, Justin L.; Dill, Ken A.

    2015-01-01

    Atomistic molecular dynamics (MD) simulations of protein molecules are too computationally expensive to predict most native structures from amino acid sequences. Here, we integrate “weak” external knowledge into folding simulations to predict protein structures, given their sequence. For example, we instruct the computer “to form a hydrophobic core,” “to form good secondary structures,” or “to seek a compact state.” This kind of information has been too combinatoric, nonspecific, and vague to help guide MD simulations before. Within atomistic replica-exchange molecular dynamics (REMD), we develop a statistical mechanical framework, modeling using limited data with coarse physical insight(s) (MELD + CPI), for harnessing weak information. As a test, we apply MELD + CPI to predict the native structures of 20 small proteins. MELD + CPI samples to within less than 3.2 Å from native for all 20 and correctly chooses the native structures (<4 Å) for 15 of them, including ubiquitin, a millisecond folder. MELD + CPI is up to five orders of magnitude faster than brute-force MD, satisfies detailed balance, and should scale well to larger proteins. MELD + CPI may be useful where physics-based simulations are needed to study protein mechanisms and populations and where we have some heuristic or coarse physical knowledge about states of interest. PMID:26351667

  8. Adaptively trained reduced-order model for acceleration of oscillatory flow simulations

    CSIR Research Space (South Africa)

    Oxtoby, Oliver F

    2012-07-01

    Full Text Available onto these basis modes using the method of Galerkin projection. While most ROM techniques try to speed up a sequence of similar simulations by first generating the ROM using selected representative runs, and then applying it to others, here...

  9. Accelerated SPECT Monte Carlo Simulation Using Multiple Projection Sampling and Convolution-Based Forced Detection

    Science.gov (United States)

    Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.

    2010-01-01

    Monte Carlo (MC) is a well-utilized tool for simulating photon transport in single photon emission computed tomography (SPECT) due to its ability to accurately model physical processes of photon transport. As a consequence of this accuracy, it suffers from a relatively low detection efficiency and long computation time. One technique used to improve the speed of MC modeling is the effective and well-established variance reduction technique (VRT) known as forced detection (FD). With this method, photons are followed as they traverse the object under study but are then forced to travel in the direction of the detector surface, whereby they are detected at a single detector location. Another method, called convolution-based forced detection (CFD), is based on the fundamental idea of FD with the exception that photons are detected at multiple detector locations, weighted with a distance-dependent blurring kernel. In order to further increase the speed of MC, a method named multiple projection convolution-based forced detection (MP-CFD) is presented. Rather than forcing photons to hit a single detector, the MP-CFD method follows the photon transport through the object but then, at each scatter site, forces the photon to interact with a number of detectors at a variety of angles surrounding the object. This way, it is possible to simulate all the projection images of a SPECT simulation in parallel, rather than as independent projections. The result is vastly improved simulation time, as much of the computational load of simulating photon transport through the object is incurred only once for all projection angles. The results of the proposed MP-CFD method agree well with experimental measurements of the point spread function (PSF), producing a correlation coefficient (r2) of 0.99 compared to experimental data. MP-CFD is shown to be about 60 times faster than a regular forced detection MC program with similar results. PMID:20811587

  10. Study on Fault Current of DFIG during Slight Fault Condition

    Directory of Open Access Journals (Sweden)

    Xiangping Kong

    2013-04-01

    Full Text Available In order to ensure the safety of the DFIG when a severe fault happens, crowbar protection is adopted. During a slight fault, however, the crowbar protection will not trip, and the DFIG remains excited by its AC-DC-AC converter. In this condition, the operating characteristics of the converter have a large influence on the fault current characteristics of the DFIG. By theoretical analysis and digital simulation, the fault current characteristics of the DFIG during slight voltage dips are studied, with emphasis on the influence of the converter's controller parameters. This provides a basis for designing relay protection suitable for power grids with DFIG integration.

  11. Biomineralization behavior of a vinylphosphonic acid-based copolymer added with polymerization accelerator in simulated body fluid

    Directory of Open Access Journals (Sweden)

    Ryo Hamai

    2015-12-01

    Full Text Available Apatite-polymer composites have been evaluated for their potential application as bone substitutes. Biomimetic processes using simulated body fluid (SBF) are well-known methods for the preparation of such composites. They rely on specific functional groups to induce heterogeneous apatite nucleation, and phosphate groups possess good apatite-forming ability in SBF. Improving the degree of polymerization is important for obtaining phosphate-containing polymers, because the release of significant quantities of monomer or low-molecular-weight polymers can suppress apatite formation. To date, there have been very few studies on the effect of adding a polymerization accelerator to the polymerization reaction involved in forming these composite materials under physiological conditions. In this study, we prepared a copolymer from triethylene glycol dimethacrylate and vinylphosphonic acid (VPA) in the presence of different amounts of sodium p-toluenesulfinate (p-TSS) as a polymerization accelerator. The effects of p-TSS on the chemical durability and apatite formation of the copolymers were investigated in SBF. The addition of 0.1-1.0 wt% of p-TSS was effective in suppressing the dissolution of the copolymers in SBF, whereas larger amounts had a detrimental effect: a calcium polyvinylphosphate precipitated in SBF instead of apatite.

  12. Understanding the effect of touchdown distance and ankle joint kinematics on sprint acceleration performance through computer simulation.

    Science.gov (United States)

    Bezodis, Neil Edward; Trewartha, Grant; Salo, Aki Ilkka Tapio

    2015-06-01

    This study determined the effects of simulated technique manipulations on early acceleration performance. A planar seven-segment angle-driven model was developed and quantitatively evaluated based on the agreement of its output to empirical data from an international-level male sprinter (100 m personal best = 10.28 s). The model was then applied to independently assess the effects of manipulating touchdown distance (horizontal distance between the foot and centre of mass) and range of ankle joint dorsiflexion during early stance on horizontal external power production during stance. The model matched the empirical data with a mean difference of 5.2%. When the foot was placed progressively further forward at touchdown, horizontal power production continually reduced. When the foot was placed further back, power production initially increased (a peak increase of 0.7% occurred at 0.02 m further back) but decreased as the foot continued to touchdown further back. When the range of dorsiflexion during early stance was reduced, exponential increases in performance were observed. Increasing negative touchdown distance directs the ground reaction force more horizontally; however, a limit to the associated performance benefit exists. Reducing dorsiflexion, which required achievable increases in the peak ankle plantar flexor moment, appears potentially beneficial for improving early acceleration performance.

  13. Design for dependability: A simulation-based approach. Ph.D. Thesis, 1993

    Science.gov (United States)

    Goswami, Kumar K.

    1994-01-01

    This research addresses issues in simulation-based system-level dependability analysis of fault-tolerant computer systems. The issues and difficulties of providing a general simulation-based approach for system-level analysis are discussed, and a methodology that addresses them is presented. The proposed methodology is designed to permit the study of a wide variety of architectures under various fault conditions. It permits detailed functional modeling of architectural features such as sparing policies, repair schemes, routing algorithms, and other fault-tolerant mechanisms, and it allows the execution of actual application software. One key benefit of this approach is that the behavior of a system under faults does not have to be pre-defined, as it normally is. Instead, a system can be simulated in detail and injected with faults to determine its failure modes. The thesis describes how object-oriented design is used to incorporate this methodology into a general-purpose design and fault injection package called DEPEND. A software model is presented that uses abstractions of application programs to study the behavior and effect of software on hardware faults in the early design stage, when actual code is not available. Finally, an acceleration technique that combines hierarchical simulation, time acceleration algorithms, and hybrid simulation to reduce simulation time is introduced.

  14. Accelerating mono-domain cardiac electrophysiology simulations using OpenCL

    Directory of Open Access Journals (Sweden)

    Wülfers Eike M.

    2015-09-01

    Full Text Available Using OpenCL, we developed a cross-platform software to compute electrical excitation conduction in cardiac tissue. OpenCL allowed the software to run parallelized and on different computing devices (e.g., CPUs and GPUs). We used the macroscopic mono-domain model for excitation conduction and an atrial myocyte model by Courtemanche et al. for the ionic currents. On a CPU with 12 HyperThreading-enabled Intel Xeon 2.7 GHz cores, we achieved a speed-up of simulations by a factor of 1.6 over existing software that uses OpenMPI. On two high-end AMD FirePro D700 GPUs the OpenCL software ran 2.4 times faster than the OpenMPI implementation. The more nodes the discretized simulation domain contained, the higher the achieved speed-ups.
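
    The mono-domain model itself is compact. The sketch below is our own illustration, not the paper's code: a serial 1-D reaction-diffusion update with FitzHugh-Nagumo kinetics standing in for the Courtemanche ionic model, and all parameters chosen for demonstration only. It shows the per-node update that OpenCL kernels would parallelize.

    ```python
    import numpy as np

    def monodomain_step(v, w, dt, dx, D):
        """One explicit-Euler step of a 1-D mono-domain model with
        FitzHugh-Nagumo kinetics: diffusion of the transmembrane
        voltage v plus a local ionic current and recovery variable w."""
        lap = (np.roll(v, 1) - 2.0 * v + np.roll(v, -1)) / dx**2  # periodic ring
        dv = D * lap + v * (v - 0.1) * (1.0 - v) - w
        dw = 0.01 * (0.5 * v - w)
        return v + dt * dv, w + dt * dw

    v = np.zeros(200); v[:10] = 1.0    # stimulus at one end of the ring
    w = np.zeros(200)
    activated = np.zeros(200, dtype=bool)
    for _ in range(2000):              # 20 time units at dt = 0.01
        v, w = monodomain_step(v, w, dt=0.01, dx=0.5, D=1.0)
        activated |= v > 0.5           # record which nodes have depolarized
    ```

    With these illustrative parameters the excitation front spreads outward from the stimulated region at a finite conduction velocity, so nearby nodes activate within the simulated window while distant ones do not.
    
    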

  15. Commissioning of a medical accelerator photon beam Monte Carlo simulation using wide-field profiles

    Science.gov (United States)

    Pena, J.; Franco, L.; Gómez, F.; Iglesias, A.; Lobato, R.; Mosquera, J.; Pazos, A.; Pardo, J.; Pombar, M.; Rodríguez, A.; Sendón, J.

    2004-11-01

    A method for commissioning an EGSnrc Monte Carlo simulation of medical linac photon beams through wide-field lateral profiles at moderate depth in a water phantom is presented. Although depth-dose profiles are commonly used for nominal energy determination, our study shows that they are quite insensitive to energy changes below 0.3 MeV (0.6 MeV) for a 6 MV (15 MV) photon beam. Also, the depth-dose profile dependence on beam radius adds an additional uncertainty in their use for tuning nominal energy. Simulated 40 cm × 40 cm lateral profiles at 5 cm depth in a water phantom show greater sensitivity to both nominal energy and radius. Beam parameters could be determined by comparing only these curves with measured data.

  16. Commissioning of a medical accelerator photon beam Monte Carlo simulation using wide-field profiles

    Energy Technology Data Exchange (ETDEWEB)

    Pena, J [Departamento de Fisica de PartIculas, Facultade de Fisica, 15782 Santiago de Compostela (Spain); Franco, L [Departamento de Fisica de PartIculas, Facultade de Fisica, 15782 Santiago de Compostela (Spain); Gomez, F [Departamento de Fisica de PartIculas, Facultade de Fisica, 15782 Santiago de Compostela (Spain); Iglesias, A [Departamento de Fisica de PartIculas, Facultade de Fisica, 15782 Santiago de Compostela (Spain); Lobato, R [Hospital ClInico Universitario de Santiago, Santiago de Compostela (Spain); Mosquera, J [Hospital ClInico Universitario de Santiago, Santiago de Compostela (Spain); Pazos, A [Departamento de Fisica de PartIculas, Facultade de Fisica, 15782 Santiago de Compostela (Spain); Pardo, J [Departamento de Fisica de PartIculas, Facultade de Fisica, 15782 Santiago de Compostela (Spain); Pombar, M [Hospital ClInico Universitario de Santiago, Santiago de Compostela (Spain); RodrIguez, A [Departamento de Fisica de PartIculas, Facultade de Fisica, 15782 Santiago de Compostela (Spain); Sendon, J [Hospital ClInico Universitario de Santiago, Santiago de Compostela (Spain)

    2004-11-07

    A method for commissioning an EGSnrc Monte Carlo simulation of medical linac photon beams through wide-field lateral profiles at moderate depth in a water phantom is presented. Although depth-dose profiles are commonly used for nominal energy determination, our study shows that they are quite insensitive to energy changes below 0.3 MeV (0.6 MeV) for a 6 MV (15 MV) photon beam. Also, the depth-dose profile dependence on beam radius adds an additional uncertainty in their use for tuning nominal energy. Simulated 40 cm x 40 cm lateral profiles at 5 cm depth in a water phantom show greater sensitivity to both nominal energy and radius. Beam parameters could be determined by comparing only these curves with measured data.

  17. A cutoff phenomenon in accelerated stochastic simulations of chemical kinetics via flow averaging (FLAVOR-SSA)

    Science.gov (United States)

    Bayati, Basil; Owhadi, Houman; Koumoutsakos, Petros

    2010-12-01

    We present a simple algorithm for the simulation of stiff, discrete-space, continuous-time Markov processes. The algorithm is based on the concept of flow averaging for the integration of stiff ordinary and stochastic differential equations, and ultimately leads to a straightforward variation of the well-known stochastic simulation algorithm (SSA). The speedup that the present algorithm [flow averaging integrator SSA (FLAVOR-SSA)] achieves over the classical SSA comes naturally at the expense of accuracy. The error of the proposed method exhibits a cutoff phenomenon as a function of its speed-up, allowing for optimal tuning. Two numerical examples from chemical kinetics are provided to illustrate the efficiency of the method.
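
    For reference, the exact direct-method SSA that FLAVOR-SSA builds on fits in a few lines. The sketch below is ours, applied to a simple decay reaction A -> 0 rather than any example from the paper; function and variable names are illustrative.

    ```python
    import numpy as np

    def ssa(x0, stoich, prop, t_end, rng):
        """Gillespie's direct method: draw the waiting time to the next
        reaction from an exponential with rate a0 = sum of propensities,
        then pick the firing channel with probability a_j / a0."""
        t, x = 0.0, np.array(x0, dtype=float)
        while True:
            a = np.array([p(x) for p in prop])
            a0 = a.sum()
            if a0 <= 0.0:
                break                    # no reaction can fire any more
            dt = rng.exponential(1.0 / a0)
            if t + dt > t_end:
                break                    # next reaction falls after t_end
            t += dt
            j = rng.choice(len(prop), p=a / a0)
            x = x + stoich[j]
        return x

    # Exponential decay A -> 0 with rate constant 1.0, averaged over runs;
    # the mean copy number at t = 1 should be close to 100 * exp(-1).
    rng = np.random.default_rng(3)
    finals = [ssa([100], [np.array([-1])], [lambda x: 1.0 * x[0]], 1.0, rng)[0]
              for _ in range(300)]
    mean_final = sum(finals) / len(finals)
    ```
    
    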

  18. GPU acceleration of a nonhydrostatic model for the internal solitary waves simulation

    Institute of Scientific and Technical Information of China (English)

    CHEN Tong-qing; ZHANG Qing-he

    2013-01-01

    The parallel computing algorithm for a nonhydrostatic model on one or multiple Graphics Processing Units (GPUs) for the simulation of internal solitary waves is presented and discussed. The computational efficiency of the GPU scheme is analyzed through a series of numerical experiments, including an idealized case and field-scale simulations, performed on a workstation and a supercomputer system. The results show that the speedup of the developed GPU-based parallel computing scheme, compared with the implementation on a single CPU core, increases with the number of computational grid cells, and that for problems with relatively large numbers of grid cells the speedup grows quasi-linearly with the number of GPUs, up to 32 GPUs.

  19. FPGA Hardware Acceleration of Monte Carlo Simulations for the Ising Model

    CERN Document Server

    Ortega-Zamorano, Francisco; Cannas, Sergio A; Jerez, José M; Franco, Leonardo

    2016-01-01

    A two-dimensional Ising model with nearest-neighbour ferromagnetic interactions is implemented on a Field Programmable Gate Array (FPGA) board. Extensive Monte Carlo simulations were carried out using an efficient hardware representation of individual spins and a combined global-local LFSR random number generator. Consistent results for the descriptive properties of magnetic systems, such as energy, magnetization and susceptibility, are obtained, while a speed-up factor of approximately 6 is achieved in comparison to previous FPGA-based published works and almost $10^4$ in comparison to a standard CPU simulation. A detailed description of the logic design used is given, together with a careful analysis of the quality of the random number generator. The obtained results confirm the potential of FPGAs for analyzing the statistical mechanics of magnetic systems.
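
    The kernel being accelerated is a plain Metropolis update of the 2-D Ising model. A serial Python sketch of that update (our own illustrative code, not the FPGA design) is:

    ```python
    import numpy as np

    def metropolis_sweep(spins, beta, rng):
        """One Metropolis sweep (L*L attempted flips) of the 2-D
        nearest-neighbour ferromagnetic Ising model with periodic
        boundaries; beta is the inverse temperature."""
        L = spins.shape[0]
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nb          # energy cost of flipping
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1
        return spins

    rng = np.random.default_rng(4)
    spins = np.ones((16, 16), dtype=int)
    for _ in range(50):
        metropolis_sweep(spins, 1.0, rng)        # well below T_c: order persists
    m_cold = abs(spins.mean())
    spins = np.ones((16, 16), dtype=int)
    for _ in range(300):
        metropolis_sweep(spins, 0.1, rng)        # well above T_c: order melts
    m_hot = abs(spins.mean())
    ```

    The two runs bracket the critical point (beta_c ≈ 0.44), so the magnetization stays near 1 in the cold run and collapses toward 0 in the hot one.
    
    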

  20. Accelerating the convergence of replica exchange simulations using Gibbs sampling and adaptive temperature sets

    CERN Document Server

    Vogel, Thomas

    2015-01-01

    We recently introduced a novel replica-exchange scheme in which an individual replica can sample from states encountered by other replicas at any previous time by way of a global configuration database, enabling the fast propagation of relevant states through the whole ensemble of replicas. This mechanism depends on the knowledge of global thermodynamic functions which are measured during the simulation and not coupled to the heat bath temperatures driving the individual simulations. Therefore, this setup also allows for a continuous adaptation of the temperature set. In this paper, we will review the new scheme and demonstrate its capability. The method is particularly useful for the fast and reliable estimation of the microcanonical temperature T(U) or, equivalently, of the density of states g(U) over a wide range of energies.