WorldWideScience

Sample records for high-performance physics simulations

  1. A high performance computing framework for physics-based modeling and simulation of military ground vehicles

    Science.gov (United States)

    Negrut, Dan; Lamb, David; Gorsich, David

    2011-06-01

    This paper describes a software infrastructure made up of tools and libraries designed to assist developers in implementing computational dynamics applications running on heterogeneous and distributed computing environments. Together, these tools and libraries compose a so-called Heterogeneous Computing Template (HCT). The heterogeneous and distributed computing hardware infrastructure is assumed herein to be made up of a combination of CPUs and Graphics Processing Units (GPUs). The computational dynamics applications targeted to execute on such a hardware topology include many-body dynamics, smoothed-particle hydrodynamics (SPH) fluid simulation, and fluid-solid interaction analysis. The underlying theme of the solution approach embraced by HCT is that of partitioning the domain of interest into a number of subdomains that are each managed by a separate core/accelerator (CPU/GPU) pair. Four components at the core of HCT enable the envisioned distributed computing approach to large-scale dynamical system simulation: (a) the ability to partition the problem according to the one-to-one mapping (i.e., spatial subdivision) discussed above (pre-processing); (b) a protocol for passing data between any two co-processors; (c) algorithms for element proximity computation; and (d) the ability to carry out post-processing in a distributed fashion. In this contribution, components (a) and (b) of the HCT are demonstrated via the example of the Discrete Element Method (DEM) for rigid body dynamics with friction and contact. The collision detection task required in frictional-contact dynamics (task (c) above) is shown to achieve a two-order-of-magnitude efficiency gain on the GPU when compared to traditional sequential implementations. Note: Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or otherwise does not imply its endorsement, recommendation, or favoring by the United States Army. The views and
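
    As a concrete illustration of component (a), the sketch below bins bodies into spatial subdomains on the GPU so that each core/accelerator pair owns one subdomain. This is a hypothetical minimal example, not the HCT API itself; the kernel name and grid layout are illustrative.

```cuda
// Hypothetical sketch (not the HCT API): map each rigid body to the flat
// index of the spatial subdomain that owns it, on a uniform nx x ny x nz
// grid of subdomains starting at `origin`.
#include <cuda_runtime.h>

__global__ void assignSubdomains(const float3* pos, int* subdomain, int nBodies,
                                 float3 origin, float cellSize, int nx, int ny)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per body
    if (i >= nBodies) return;
    int cx = (int)((pos[i].x - origin.x) / cellSize);
    int cy = (int)((pos[i].y - origin.y) / cellSize);
    int cz = (int)((pos[i].z - origin.z) / cellSize);
    subdomain[i] = (cz * ny + cy) * nx + cx;         // owner CPU/GPU pair for body i
}
```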

  2. High performance electromagnetic simulation tools

    Science.gov (United States)

    Gedney, Stephen D.; Whites, Keith W.

    1994-10-01

    Army Research Office Grant #DAAH04-93-G-0453 has supported the purchase of 24 additional compute nodes that were installed in the Intel iPSC/860 hypercube at the University of Kentucky (UK), rendering a 32-node multiprocessor. This facility has allowed the investigators to explore and extend the boundaries of electromagnetic simulation for important areas of defense concern, including microwave monolithic integrated circuit (MMIC) design/analysis and electromagnetic materials research and development. The iPSC/860 has provided an ideal platform for MMIC circuit simulations. A number of parallel methods based on direct time-domain solutions of Maxwell's equations have been developed on the iPSC/860, including a parallel finite-difference time-domain (FDTD) algorithm and a parallel planar generalized Yee-algorithm (PGY). The iPSC/860 has also provided an ideal platform on which to develop a 'virtual laboratory' to numerically analyze, scientifically study and develop new types of materials with beneficial electromagnetic properties. These materials simulations are capable of assembling hundreds of microscopic inclusions from which an electromagnetic full-wave solution will be obtained in toto. This powerful simulation tool has enabled research on the full-wave analysis of complex multicomponent MMIC devices and the electromagnetic properties of many types of materials to be performed numerically rather than strictly in the laboratory.

  3. Utilities for high performance dispersion model PHYSIC

    International Nuclear Information System (INIS)

    Yamazawa, Hiromi

    1992-09-01

    The description and usage of the utilities for the dispersion calculation model PHYSIC are summarized. The model was developed in the study on developing a high performance SPEEDI, with the purpose of introducing a meteorological forecast function into the environmental emergency response system. The procedure of a PHYSIC calculation consists of three steps: preparation of the relevant files, creation and submission of JCL, and graphic output of the results. A user can carry out the above procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)

  4. High performance ultrasonic field simulation on complex geometries

    Science.gov (United States)

    Chouh, H.; Rougeron, G.; Chatillon, S.; Iehl, J. C.; Farrugia, J. P.; Ostromoukhov, V.

    2016-02-01

    Ultrasonic field simulation is a key ingredient for the design of new testing methods as well as a crucial step for NDT inspection simulation. As presented in a previous paper [1], CEA-LIST has worked on the acceleration of these simulations focusing on simple geometries (planar interfaces, isotropic materials). In this context, significant accelerations were achieved on multicore processors and GPUs (Graphics Processing Units), bringing the execution time of realistic computations into the 0.1 s range. In this paper, we present recent work that aims at similar performance on a wider range of configurations. We adapted the physical model used by the CIVA platform to design and implement a new algorithm providing fast ultrasonic field simulation that yields nearly interactive results for complex cases. The improvements over the CIVA pencil-tracing method include adaptive strategies for pencil subdivision to achieve a good refinement of the sensor geometry while keeping a reasonable number of ray-tracing operations. Also, interpolation of the times of flight was used to avoid time-consuming computations in the impulse response reconstruction stage. To achieve the best performance, our algorithm runs on multi-core superscalar CPUs and uses high performance specialized libraries such as Intel Embree for ray tracing, Intel MKL for signal processing and Intel TBB for parallelization. We validated the simulation results by comparing them to the ones produced by CIVA on identical test configurations, including mono-element and multiple-element transducers, homogeneous and meshed 3D CAD specimens, isotropic and anisotropic materials, and wave paths that can involve several interactions with interfaces. We show performance results on complete simulations that achieve computation times in the 1 s range.
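
    The abstract names Intel Embree as the ray-tracing backend. Below is a minimal, hedged sketch of the basic ray-versus-geometry query such a pipeline builds on, written against the Embree 3 C API; the one-triangle scene and all values are illustrative, and CIVA's actual pencil-tracing layer adds subdivision strategies and acoustic physics on top.

```cpp
// Minimal Embree 3 sketch of a single ray query (illustrative only).
#include <embree3/rtcore.h>
#include <cmath>
#include <cstdio>

int main() {
    RTCDevice device = rtcNewDevice(nullptr);
    RTCScene scene = rtcNewScene(device);

    // One triangle standing in for a "specimen surface".
    RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
    float* v = (float*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_VERTEX, 0,
                                               RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
    v[0] = -1; v[1] = -1; v[2] = 5;   v[3] = 1; v[4] = -1; v[5] = 5;   v[6] = 0; v[7] = 1; v[8] = 5;
    unsigned* idx = (unsigned*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_INDEX, 0,
                                                       RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
    idx[0] = 0; idx[1] = 1; idx[2] = 2;
    rtcCommitGeometry(geom);
    rtcAttachGeometry(scene, geom);
    rtcReleaseGeometry(geom);
    rtcCommitScene(scene);

    // Trace one ray from the origin toward +z.
    RTCIntersectContext ctx;
    rtcInitIntersectContext(&ctx);
    RTCRayHit rh = {};                     // tnear = 0, origin = (0,0,0)
    rh.ray.dir_z = 1.0f;
    rh.ray.tfar = INFINITY;
    rh.ray.mask = 0xFFFFFFFF;
    rh.hit.geomID = RTC_INVALID_GEOMETRY_ID;
    rtcIntersect1(scene, &ctx, &rh);
    if (rh.hit.geomID != RTC_INVALID_GEOMETRY_ID)
        printf("hit at distance %f\n", rh.ray.tfar);   // tfar shrinks to the hit
    rtcReleaseScene(scene);
    rtcReleaseDevice(device);
}
```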

  5. High-Performance Beam Simulator for the LANSCE Linac

    International Nuclear Information System (INIS)

    Pang, Xiaoying; Rybarcyk, Lawrence J.; Baily, Scott A.

    2012-01-01

    A high performance multiparticle tracking simulator is currently under development at Los Alamos. The heart of the simulator is based upon the beam dynamics simulation algorithms of the PARMILA code, but implemented in C++ on Graphics Processing Unit (GPU) hardware using NVIDIA's CUDA platform. Linac operating set points are provided to the simulator via the EPICS control system so that changes of the real time linac parameters are tracked and the simulation results updated automatically. This simulator will provide valuable insight into the beam dynamics along a linac in pseudo real-time, especially where direct measurements of the beam properties do not exist. Details regarding the approach, benefits and performance are presented.

  6. Crystal and molecular simulation of high-performance polymers.

    Science.gov (United States)

    Colquhoun, H M; Williams, D J

    2000-03-01

    Single-crystal X-ray analyses of oligomeric models for high-performance aromatic polymers, interfaced to computer-based molecular modeling and diffraction simulation, have enabled the determination of a range of previously unknown polymer crystal structures from X-ray powder data. Materials which have been successfully analyzed using this approach include aromatic polyesters, polyetherketones, polythioetherketones, polyphenylenes, and polycarboranes. Pure macrocyclic homologues of noncrystalline polyethersulfones afford high-quality single crystals, even at very large ring sizes, and have provided the first examples of a "protein crystallographic" approach to the structures of conventionally amorphous synthetic polymers.

  7. Simulation model of a twin-tail, high performance airplane

    Science.gov (United States)

    Buttrill, Carey S.; Arbuckle, P. Douglas; Hoffler, Keith D.

    1992-01-01

    The mathematical model and associated computer program to simulate a twin-tailed high performance fighter airplane (McDonnell Douglas F/A-18) are described. The simulation program is written in the Advanced Continuous Simulation Language. The simulation math model includes the nonlinear six degree-of-freedom rigid-body equations, an engine model, sensors, and first-order actuators with rate and position limiting. A simplified form of the F/A-18 digital control laws (version 8.3.3) is implemented. The simulated control law includes only inner-loop augmentation in the up-and-away flight mode. The aerodynamic forces and moments are calculated from a wind-tunnel-derived database using table look-ups with linear interpolation. The aerodynamic database has an angle-of-attack range of -10 to +90 degrees and a sideslip range of -20 to +20 degrees. The effects of elastic deformation are incorporated in a quasi-static-elastic manner. Elastic degrees of freedom are not actively simulated. In the engine model, the throttle-commanded steady-state thrust level and the dynamic response characteristics of the engine are based on airflow rate as determined from a table look-up. Afterburner dynamics are switched in at a threshold based on the engine airflow and commanded thrust.
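
    The table look-up with linear interpolation described above can be made concrete with a small sketch. The routine below performs bilinear interpolation of one aerodynamic coefficient over a uniform (angle-of-attack, sideslip) grid; the function and grid names are hypothetical, not taken from the simulation program.

```cpp
// Hypothetical sketch of a wind-tunnel table lookup: bilinear interpolation
// of a coefficient table C[na][nb] (row-major) at (alpha, beta), assuming
// uniform spacing da, db starting at alpha0, beta0.
float bilinear(const float* C, int na, int nb,
               float alpha0, float da, float beta0, float db,
               float alpha, float beta)
{
    float fa = (alpha - alpha0) / da;
    float fb = (beta  - beta0)  / db;
    int ia = (int)fa, ib = (int)fb;
    if (ia < 0) ia = 0; if (ia > na - 2) ia = na - 2;   // clamp to table edges
    if (ib < 0) ib = 0; if (ib > nb - 2) ib = nb - 2;
    float ta = fa - ia, tb = fb - ib;                   // fractional offsets
    const float* r0 = C + ia * nb + ib;                 // lower grid row
    const float* r1 = r0 + nb;                          // upper grid row
    return (1 - ta) * ((1 - tb) * r0[0] + tb * r0[1])
         +      ta  * ((1 - tb) * r1[0] + tb * r1[1]);
}
```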

  8. MUMAX: A new high-performance micromagnetic simulation tool

    International Nuclear Information System (INIS)

    Vansteenkiste, A.; Van de Wiele, B.

    2011-01-01

    We present MUMAX, a general-purpose micromagnetic simulation tool running on graphical processing units (GPUs). MUMAX is designed for high-performance computations and specifically targets large simulations. In that case speedups of over a factor of 100 can be obtained compared to the CPU-based OOMMF program developed at NIST. MUMAX aims to be general and broadly applicable. It solves the classical Landau-Lifshitz equation taking into account the magnetostatic, exchange and anisotropy interactions, thermal effects and spin-transfer torque. Periodic boundary conditions can optionally be imposed. A spatial discretization using finite differences in two or three dimensions can be employed. MUMAX is publicly available as open-source software. It can thus be freely used and extended by the community. Due to its high computational performance, MUMAX should open up the possibility of running extensive simulations that would be nearly inaccessible with typical CPU-based simulators. Highlights: novel, open-source micromagnetic simulator on GPU hardware; speedup of ≈100× compared to other widely used tools; extensively validated against standard problems; makes previously infeasible simulations accessible.
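
    For reference, the Landau-Lifshitz equation that MUMAX integrates, written in one common textbook form (conventions and normalizations vary; MUMAX's exact form is documented with the code):

```latex
% Landau-Lifshitz equation, common form: precession about the effective
% field plus a damping term that relaxes M toward H_eff.
\frac{d\mathbf{M}}{dt} = -\gamma\, \mathbf{M}\times\mathbf{H}_{\mathrm{eff}}
  \;-\; \frac{\alpha\gamma}{M_s}\, \mathbf{M}\times\bigl(\mathbf{M}\times\mathbf{H}_{\mathrm{eff}}\bigr)
% gamma: gyromagnetic ratio, alpha: damping constant, M_s: saturation magnetization
```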

  9. Simulations of KSTAR high performance steady state operation scenarios

    International Nuclear Information System (INIS)

    Na, Yong-Su; Kessel, C.E.; Park, J.M.; Yi, Sumin; Kim, J.Y.; Becoulet, A.; Sips, A.C.C.

    2009-01-01

    We report the results of predictive modelling of high performance steady state operation scenarios in KSTAR. Firstly, the capabilities of steady state operation are investigated with time-dependent simulations using a free-boundary plasma equilibrium evolution code coupled with transport calculations. Secondly, the reproducibility of high performance steady state operation scenarios developed in the DIII-D tokamak, of similar size to KSTAR, is investigated using experimental data taken from DIII-D. Finally, the capability of ITER-relevant steady state operation is investigated in KSTAR. It is found that KSTAR is able to establish high performance steady state operation scenarios: βN above 3, H98(y,2) up to 2.0, fBS up to 0.76 and fNI equal to 1.0. In this work, a realistic density profile is newly introduced for predictive simulations by employing the scaling law of a density peaking factor. The influence of the current ramp-up scenario and the transport model is discussed with respect to the fusion performance and non-inductive current drive fraction in the transport simulations. As observed in the experiments, both the heating and the plasma current waveforms in the current ramp-up phase produce a strong effect on the q-profile, the fusion performance and also on the non-inductive current drive fraction in the current flattop phase. A criterion in terms of qmin is found to establish ITER-relevant steady state operation scenarios. This will provide a guideline for designing the current ramp-up phase in KSTAR. It is observed that the transport model also affects the predicted values of fusion performance as well as the non-inductive current drive fraction. The Weiland transport model predicts the highest fusion performance as well as non-inductive current drive fraction in KSTAR. In contrast, the GLF23 model exhibits the lowest ones. ITER-relevant advanced scenarios cannot be obtained with the GLF23 model in the conditions given in this work.

  10. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    International Nuclear Information System (INIS)

    Pop, Florin

    2014-01-01

    Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energy, and low absolute temperature are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility of making simulations at large scale, in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP), analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
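
    The data-parallel Monte Carlo pattern underlying several of these methods can be sketched in a few lines. The toy kernel below estimates π by uniform sampling; it is purely illustrative of GPU Monte Carlo structure and is not a HEP event generator.

```cuda
// Toy GPU Monte Carlo: each thread draws uniform samples and counts hits
// inside the unit circle. Illustrates the data-parallel MC pattern only.
#include <cstdio>
#include <curand_kernel.h>

__global__ void mcPi(unsigned long long seed, int samplesPerThread,
                     unsigned long long* hits)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    curandState rng;
    curand_init(seed, tid, 0, &rng);          // independent stream per thread
    unsigned long long local = 0;
    for (int i = 0; i < samplesPerThread; ++i) {
        float x = curand_uniform(&rng);
        float y = curand_uniform(&rng);
        if (x * x + y * y <= 1.0f) ++local;
    }
    atomicAdd(hits, local);                   // crude reduction, kept short
}

int main() {
    unsigned long long* d_hits;
    cudaMalloc(&d_hits, sizeof(unsigned long long));
    cudaMemset(d_hits, 0, sizeof(unsigned long long));
    const int blocks = 256, threads = 256, n = 1000;
    mcPi<<<blocks, threads>>>(1234ULL, n, d_hits);
    unsigned long long h = 0;
    cudaMemcpy(&h, d_hits, sizeof(h), cudaMemcpyDeviceToHost);
    double total = (double)blocks * threads * n;
    printf("pi ~ %f\n", 4.0 * h / total);
    cudaFree(d_hits);
}
```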

  11. Comprehensive Simulation Lifecycle Management for High Performance Computing Modeling and Simulation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — There are significant logistical barriers to entry-level high performance computing (HPC) modeling and simulation (M&S). IllinoisRocstar sets up the infrastructure for...

  12. High performance computer code for molecular dynamics simulations

    International Nuclear Information System (INIS)

    Levay, I.; Toekesi, K.

    2007-01-01

    Complete text of publication follows. Molecular Dynamics (MD) simulation is a widely used technique for modeling complicated physical phenomena. Since 2005 we have been developing an MD simulation code for PCs. The computer code is written in the C++ object-oriented programming language. The aim of our work is twofold: a) to develop a fast computer code for the study of the random walk of guest atoms in a Be crystal, and b) to provide 3-dimensional (3D) visualization of the particles' motion. In this case we mimic the motion of the guest atoms in the crystal (diffusion-type motion), and the motion of atoms in the crystal lattice (crystal deformation). Nowadays, it is common to use graphics devices for intensive computational problems. There are several ways to use this extreme processing performance, but never before has it been so easy to program these devices. The CUDA (Compute Unified Device Architecture) introduced by the nVidia Corporation in 2007 is very useful for every processor-hungry application. A unified-architecture GPU includes 96-128 or more stream processors, so the raw calculation performance is 576(!) GFLOPS. It is ten times faster than the fastest dual-core CPU [Fig. 1]. Our improved MD simulation software uses this new technology, and the code runs 10 times faster in the critical calculation code segment. Although the GPU is a very powerful tool, it has a strongly parallel structure. This means that we have to create an algorithm which works on several processors without deadlock. Our code currently uses 256 threads and shared and constant on-chip memory instead of global memory, which is about 100 times slower. It is possible to implement the total algorithm on the GPU, so we do not need to download and upload the data in every iteration. For maximal throughput, every thread runs the same instructions.
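
    A minimal sketch of the on-chip-memory strategy the abstract describes is shown below: a Lennard-Jones force kernel that stages positions through shared-memory tiles in 256-thread blocks instead of re-reading global memory. This is illustrative only and is not the authors' code.

```cuda
// Illustrative shared-memory tiling for pairwise forces (not the authors'
// code): each block stages TILE positions on-chip, then every thread
// accumulates its particle's Lennard-Jones force against the tile.
#include <cuda_runtime.h>

#define TILE 256   // matches the 256-thread blocks mentioned above

__global__ void ljForces(const float4* pos, float4* force, int n,
                         float eps, float sigma2)
{
    __shared__ float4 tile[TILE];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float4 pi = (i < n) ? pos[i] : make_float4(0, 0, 0, 0);
    float3 f = make_float3(0, 0, 0);

    for (int base = 0; base < n; base += TILE) {
        int j = base + threadIdx.x;
        tile[threadIdx.x] = (j < n) ? pos[j] : make_float4(0, 0, 0, 0);
        __syncthreads();                       // tile now resident on-chip
        for (int k = 0; k < TILE && base + k < n; ++k) {
            if (base + k == i) continue;
            float dx = pi.x - tile[k].x;
            float dy = pi.y - tile[k].y;
            float dz = pi.z - tile[k].z;
            float r2 = dx * dx + dy * dy + dz * dz + 1e-12f;
            float s2 = sigma2 / r2, s6 = s2 * s2 * s2;
            float w = 24.0f * eps * s6 * (2.0f * s6 - 1.0f) / r2;
            f.x += w * dx; f.y += w * dy; f.z += w * dz;
        }
        __syncthreads();                       // done reading this tile
    }
    if (i < n) force[i] = make_float4(f.x, f.y, f.z, 0);
}
```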

  13. A high performance scientific cloud computing environment for materials simulations

    OpenAIRE

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  14. High performance real-time flight simulation at NASA Langley

    Science.gov (United States)

    Cleveland, Jeff I., II

    1994-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be deterministic and be completed in as short a time as possible. This includes simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, personnel at NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to a standard input/output system to provide for high-bandwidth, low-latency data acquisition and distribution. The Computer Automated Measurement and Control (CAMAC) technology (IEEE standard 595) was extended to meet the performance requirements for real-time simulation. This technology extension increased the effective bandwidth by a factor of ten and increased the performance of modules necessary for simulator communications. This technology is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications of this technology are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC have completed the development of the use of supercomputers for simulation mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and the development of specialized software and hardware for the CAMAC simulator network. This work, coupled with the use of an open systems software architecture, has advanced the state of the art in real-time flight simulation. The data acquisition technology innovation and experience with recent developments in this technology are described.

  15. High performance stream computing for particle beam transport simulations

    International Nuclear Information System (INIS)

    Appleby, R; Bailey, D; Higham, J; Salt, M

    2008-01-01

    Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time-consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed.
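
    The core of the stream-computing approach is easy to sketch: apply a per-element transfer map to every particle in parallel. The CUDA analogue below uses a 2x2 linear map on (x, x') pairs; real tracking codes such as the one described use full 6D and possibly nonlinear maps, so this illustrates the data-parallel structure only.

```cuda
// Illustrative CUDA analogue of stream-computing particle transport:
// advance every particle through one beamline element by applying a 2x2
// linear transfer map to its (x, x') phase-space coordinates.
#include <cuda_runtime.h>

struct Map2x2 { float m11, m12, m21, m22; };   // e.g. a drift or quadrupole

__global__ void transport(float2* phase, int n, Map2x2 M)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float2 p = phase[i];                        // (x, x') for particle i
    phase[i] = make_float2(M.m11 * p.x + M.m12 * p.y,
                           M.m21 * p.x + M.m22 * p.y);
}

// Example: a drift of length L has m11 = 1, m12 = L, m21 = 0, m22 = 1;
// transport<<<(n + 255) / 256, 256>>>(d_phase, n, drift) advances all particles.
```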

  16. A high performance scientific cloud computing environment for materials simulations

    Science.gov (United States)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  17. Physics of integrated high-performance NSTX plasmas

    International Nuclear Information System (INIS)

    Menard, J. E.; Bell, M. G.; Bell, R. E.; Fredrickson, E. D.; Gates, D. A.; Heidbrink, W.; Kaita, R.; Kaye, S. M.; Kessel, C. E.; Kugel, H.; LeBlanc, B. P.; Lee, K. C.; Levinton, F. M.; Maingi, R.; Medley, S. S.; Mikkelsen, D. R.; Mueller, D.; Nishino, N.; Ono, M.; Park, H.; Park, W.; Paul, S. F.; Peebles, T.; Peng, M.; Raman, R.; Redi, M.; Roquemore, L.; Sabbagh, S. A.; Skinner, C. H.; Sontag, A.; Soukhanovskii, V.; Stratton, B.; Stutman, D.; Synakowski, E.; Takase, Y.; Taylor, G.; Tritz, K.; Wade, M.; Wilson, J. R.; Zhu, W.

    2005-01-01

    An overarching goal of magnetic fusion research is the integration of steady state operation with high fusion power density, high plasma β, good thermal and fast particle confinement, and manageable heat and particle fluxes to reactor internal components. NSTX has made significant progress in integrating and understanding the interplay between these competing elements. Sustained high elongation up to 2.5 and H-mode transitions during the Ip ramp-up have increased βp and reduced li at high current, resulting in Ip flat-top durations exceeding 0.8 s for Ip > 0.8 MA. These shape and profile changes delay the onset of deleterious global MHD activity, yielding βN values > 4.5 and βT ∼ 20% maintained for several current diffusion times. Higher βN discharges operating above the no-wall limit are sustained via rotational stabilization of the RWM. H-mode confinement scaling factors relative to H98(y,2) span the range 1±0.4 for BT > 4 kG and show a strong (nearly linear) residual scaling with BT. Power balance analysis indicates that electron thermal transport dominates the loss power in beam-heated H-mode discharges, but the core χe can be significantly reduced through current profile modification consistent with reversed magnetic shear. Small-ELM regimes have been obtained in high performance plasmas on NSTX, but the ELM type and associated pedestal energy loss are found to depend sensitively on the boundary elongation, magnetic balance, and edge collisionality. NPA data and TRANSP analysis suggest resonant interactions with mid-radius tearing modes may lead to large fast-ion transport. The associated fast-ion diffusion and/or loss likely impacts both the driven current and power deposition profiles from NBI heating. Results from experiments to initiate the plasma without the ohmic solenoid and integrated scenario modelling with the TSC code will also be described. (Author)

  18. Physics of high performance deuterium-tritium plasmas in TFTR

    International Nuclear Information System (INIS)

    McGuire, K.M.; Batha, S.

    1996-11-01

    During the past two years, deuterium-tritium (D-T) plasmas in the Tokamak Fusion Test Reactor (TFTR) have been used to study fusion power production, isotope effects associated with tritium fueling, and alpha-particle physics in several operational regimes. The peak fusion power has been increased to 10.7 MW in the supershot mode through the use of increased plasma current and toroidal magnetic field and extensive lithium wall conditioning. The high-internal-inductance (high-li) regime in TFTR has been extended in plasma current and has achieved 8.7 MW of fusion power. Studies of the effects of tritium on confinement have now been carried out in ohmic, NBI- and ICRF-heated L-mode and reversed-shear plasmas. In general, there is an enhancement in confinement time in D-T plasmas which is most pronounced in supershot and high-li discharges, weaker in L-mode plasmas with NBI and ICRF heating, and smaller still in ohmic plasmas. In reversed-shear discharges with sufficient deuterium-NBI heating power, internal transport barriers have been observed to form, leading to enhanced confinement. Large decreases in the ion heat conductivity and particle transport are inferred within the transport barrier. It appears that higher heating power is required to trigger the formation of a transport barrier with D-T NBI, and the isotope effect on energy confinement is nearly absent in these enhanced reverse-shear plasmas. Many alpha-particle physics issues have been studied in the various operating regimes, including confinement of the alpha particles, their redistribution by sawteeth, and their loss due to MHD instabilities with low toroidal mode numbers. In weak-shear plasmas, alpha-particle destabilization of a toroidal Alfven eigenmode has been observed.

  19. High performance thermal stress analysis on the earth simulator

    International Nuclear Information System (INIS)

    Noriyuki, Kushida; Hiroshi, Okuda; Genki, Yagawa

    2003-01-01

    In this study, a thermal stress finite element analysis code optimized for the Earth Simulator was developed. A processor node of the Earth Simulator is an 8-way vector processor, and nodes communicate using the Message Passing Interface (MPI). Thus, there are two ways to parallelize the finite element method on the Earth Simulator. The first method is to assign one processor to one sub-domain, and the second is to assign one node (= 8 processors) to one sub-domain, considering shared-memory-type parallelization. Considering that the preconditioned conjugate gradient (PCG) method, one of the suitable linear equation solvers for large-scale parallel finite element methods, shows better convergence behavior when the number of domains is smaller, we decided to employ PCG and hybrid parallelization, which is based on shared and distributed memory type parallelization. It has been said that it is hard to obtain good parallel or vector performance, since the finite element method is based on unstructured grids. In such a situation, reordering is inevitable to improve computational performance [2]. In this study, we used three reordering methods, i.e. reverse Cuthill-McKee (RCM), cyclic multicolor (CM) and diagonal jagged descending storage (DJDS) [3]. RCM provides good convergence of the incomplete lower-upper (ILU) preconditioned CG, but causes load imbalance. On the other hand, CM provides good load balance, but worsens the convergence of ILU PCG if the vector length is long. Therefore, we used a combined RCM and CM method. DJDS stores sparse matrices such that a longer vector length can be obtained. For attaining efficient inter-node parallelization, such partitioning methods as recursive coordinate bisection (RCB) or MeTiS have been used. Computational performance on practical large-scale engineering problems will be shown at the meeting. (author)
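
    To fix ideas about the solver, the sketch below shows a preconditioned conjugate gradient iteration with a simple Jacobi (diagonal) preconditioner standing in for the paper's ILU preconditioner, and dense storage standing in for DJDS, purely for brevity. It assumes a zero initial guess, so the initial residual equals b.

```cpp
// Minimal Jacobi-preconditioned CG sketch (host code; simplification of the
// ILU-preconditioned CG with RCM/CM reordering described above). A is dense
// row-major n x n here for brevity; real FEM codes use sparse formats.
#include <vector>
#include <cmath>

using Vec = std::vector<double>;

static double dot(const Vec& a, const Vec& b) {
    double s = 0;
    for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Solve A x = b for SPD A, preconditioner M = diag(A); x starts at zero.
void pcg(const Vec& A, const Vec& b, Vec& x, int n, double tol = 1e-10) {
    x.assign(n, 0.0);
    Vec r(b), z(n), p(n), q(n);                // r0 = b - A*0 = b
    for (int i = 0; i < n; ++i) z[i] = r[i] / A[i * n + i];
    p = z;
    double rz = dot(r, z);
    for (int it = 0; it < n && std::sqrt(dot(r, r)) > tol; ++it) {
        for (int i = 0; i < n; ++i) {          // q = A p
            double s = 0;
            for (int j = 0; j < n; ++j) s += A[i * n + j] * p[j];
            q[i] = s;
        }
        double alpha = rz / dot(p, q);
        for (int i = 0; i < n; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * q[i]; }
        for (int i = 0; i < n; ++i) z[i] = r[i] / A[i * n + i];
        double rzNew = dot(r, z);
        double beta = rzNew / rz;
        rz = rzNew;
        for (int i = 0; i < n; ++i) p[i] = z[i] + beta * p[i];
    }
}
```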

  20. Physics of high performance JET plasmas in D-T

    International Nuclear Information System (INIS)

    2001-01-01

    JET has recently operated with deuterium-tritium (D-T) mixtures, carried out an ITER physics campaign in hydrogen, deuterium, D-T and tritium, installed the Mark IIGB 'Gas Box' divertor fully by remote handling and started physics experiments with this more closed divertor. The D-T experiments set records for fusion power (16.1 MW), ratio of fusion power to plasma input power (0.62, and 0.95±0.17 if a similar plasma could be obtained in steady-state) and fusion duration (4 MW for 4 s). A large scale tritium supply and processing plant, the first of its kind, allowed the repeated use of the 20 g tritium on site to supply 99.3 g of tritium to the machine. The H-mode threshold power is significantly lower in D-T, but the global energy confinement time is practically unchanged (no isotope effect). Dimensionless scaling 'Wind Tunnel' experiments in D-T extrapolate to ignition with ITER parameters. The scaling is close to gyroBohm, but the mass dependence is not correct. Separating the thermal plasma energy into core and pedestal contributions could resolve this discrepancy (leading to proper gyroBohm scaling for the core) and also account for confinement degradation at high density and at high radiated power. Four radio frequency heating schemes have been tested successfully in D-T, showing good agreement with calculations. Alpha particle heating has been clearly observed and is consistent with classical expectations. Internal transport barriers have been established in optimised magnetic shear discharges for the first time in D-T and steady-state conditions have been approached with simultaneous internal and edge transport barriers. First results with the newly installed Mark IIGB divertor show that the in/out symmetry of the divertor plasma can be modified using differential gas fuelling, that optimised shear discharges can be produced, and that krypton gas puffing is effective in restoring L-mode edge conditions and establishing an internal transport barrier in

  1. Physics of high performance JET plasmas in D-T

    International Nuclear Information System (INIS)

    1999-01-01

    JET has recently operated with deuterium-tritium (D-T) mixtures, carried out an ITER physics campaign in hydrogen, deuterium, D-T and tritium, installed the Mark IIGB 'Gas Box' divertor fully by remote handling and started physics experiments with this more closed divertor. The D-T experiments set records for fusion power (16.1 MW), ratio of fusion power to plasma input power (0.62, and 0.95±0.17 if a similar plasma could be obtained in steady-state) and fusion duration (4 MW for 4 s). A large scale tritium supply and processing plant, the first of its kind, allowed the repeated use of the 20 g tritium on site to supply 99.3 g of tritium to the machine. The H-mode threshold power is significantly lower in D-T, but the global energy confinement time is practically unchanged (no isotope effect). Dimensionless scaling 'Wind Tunnel' experiments in D-T extrapolate to ignition with ITER parameters. The scaling is close to gyroBohm, but the mass dependence is not correct. Separating the thermal plasma energy into core and pedestal contributions could resolve this discrepancy (leading to proper gyroBohm scaling for the core) and also account for confinement degradation at high density and at high radiated power. Four radio frequency heating schemes have been tested successfully in D-T, showing good agreement with calculations. Alpha particle heating has been clearly observed and is consistent with classical expectations. Internal transport barriers have been established in optimised magnetic shear discharges for the first time in D-T and steady-state conditions have been approached with simultaneous internal and edge transport barriers. First results with the newly installed Mark IIGB divertor show that the in/out symmetry of the divertor plasma can be modified using differential gas fuelling, that optimised shear discharges can be produced, and that krypton gas puffing is effective in restoring L-mode edge conditions and establishing an internal transport barrier in such

  2. Mixed-Language High-Performance Computing for Plasma Simulations

    Directory of Open Access Journals (Sweden)

    Quanming Lu

    2003-01-01

    Java is receiving increasing attention as the most popular platform for distributed computing. However, programmers are still reluctant to embrace Java as a tool for writing scientific and engineering applications due to its still noticeable performance drawbacks compared with other programming languages such as Fortran or C. In this paper, we present a hybrid Java/Fortran implementation of a parallel particle-in-cell (PIC) algorithm for plasma simulations. In our approach, the time-consuming components of this application are designed and implemented as Fortran subroutines, while the less calculation-intensive components, usually involved in building the user interface, are written in Java. The two types of software modules have been glued together using the Java Native Interface (JNI). Our mixed-language PIC code was tested and its performance compared with pure Java and Fortran versions of the same algorithm on a Sun E6500 SMP system and a Linux cluster of Pentium III machines.
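
    The JNI gluing described above can be sketched as follows. The C++ shim exports a symbol matching a hypothetical Java native declaration (shown in the comment) and forwards the pinned arrays to a Fortran routine; all names here are illustrative, not the authors' actual interfaces.

```cpp
// Hypothetical JNI glue layer (not the authors' code). Java side declares:
//   public class PicSolver {
//     static { System.loadLibrary("picnative"); }
//     public native void pushParticles(double[] x, double[] v, double dt);
//   }
// The C++ side exports the matching symbol and forwards raw arrays to a
// Fortran routine push_particles_ (trailing underscore per common mangling).
#include <jni.h>

extern "C" void push_particles_(double* x, double* v, int* n, double* dt);

extern "C" JNIEXPORT void JNICALL
Java_PicSolver_pushParticles(JNIEnv* env, jobject,
                             jdoubleArray jx, jdoubleArray jv, jdouble dt)
{
    jint n = env->GetArrayLength(jx);
    // Pin (or copy) the Java arrays so Fortran can work on raw memory.
    jdouble* x = env->GetDoubleArrayElements(jx, nullptr);
    jdouble* v = env->GetDoubleArrayElements(jv, nullptr);
    int ni = n;
    double dtl = dt;
    push_particles_(x, v, &ni, &dtl);          // number-crunching in Fortran
    // Copy results back and release the pinned buffers.
    env->ReleaseDoubleArrayElements(jx, x, 0);
    env->ReleaseDoubleArrayElements(jv, v, 0);
}
```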

  3. High-Performance Modeling of Carbon Dioxide Sequestration by Coupling Reservoir Simulation and Molecular Dynamics

    KAUST Repository

    Bao, Kai; Yan, Mi; Allen, Rebecca; Salama, Amgad; Lu, Ligang; Jordan, Kirk E.; Sun, Shuyu; Keyes, David E.

    2015-01-01

    The present work describes a parallel computational framework for carbon dioxide (CO2) sequestration simulation by coupling reservoir simulation and molecular dynamics (MD) on massively parallel high-performance-computing (HPC) systems.

  4. Reusable Object-Oriented Solutions for Numerical Simulation of PDEs in a High Performance Environment

    Directory of Open Access Journals (Sweden)

    Andrea Lani

    2006-01-01

    Object-oriented platforms developed for the numerical solution of PDEs must combine flexibility and reusability, in order to ease the integration of new functionalities and algorithms. When designing such frameworks, built-in support for high performance should be provided and enforced transparently, especially in parallel simulations. The paper presents solutions developed to effectively tackle these and other more specific problems (data handling and storage, implementation of physical models and numerical methods) that have arisen in the development of COOLFluiD, an environment for PDE solvers. Particular attention is devoted to describing a data storage facility, highly suitable for both serial and parallel computing, and to discussing the application of two design patterns, Perspective and Method-Command-Strategy, that support extensibility and run-time flexibility in the implementation of physical models and generic numerical algorithms, respectively.
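
    As a generic illustration of the Strategy part of Method-Command-Strategy (not COOLFluiD's actual class hierarchy), the sketch below writes solver code against an abstract numerical-flux interface so that concrete schemes can be swapped at run time:

```cpp
// Generic Strategy-pattern illustration: the solver depends only on the
// abstract flux interface; concrete schemes are interchangeable at run time.
#include <memory>
#include <cstdio>

struct FluxStrategy {                    // abstract numerical-flux scheme
    virtual double flux(double uL, double uR) const = 0;
    virtual ~FluxStrategy() = default;
};

struct CentralFlux : FluxStrategy {      // one interchangeable scheme
    double flux(double uL, double uR) const override { return 0.5 * (uL + uR); }
};

struct UpwindFlux : FluxStrategy {       // another, chosen at run time
    double flux(double uL, double uR) const override { return uL; }
};

int main() {
    std::unique_ptr<FluxStrategy> scheme = std::make_unique<UpwindFlux>();
    std::printf("F = %f\n", scheme->flux(1.0, 0.0));  // solver code unchanged
}
```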

  5. Student Engagement in High-Performing Schools: Relationships to Mental and Physical Health

    Science.gov (United States)

    Conner, Jerusha O.; Pope, Denise

    2014-01-01

    This chapter examines how the three most common types of engagement found among adolescents attending high-performing high schools relate to indicators of mental and physical health. [This article originally appeared as NSSE Yearbook Vol. 113, No. 1.]

  6. High-Performance Modeling of Carbon Dioxide Sequestration by Coupling Reservoir Simulation and Molecular Dynamics

    KAUST Repository

    Bao, Kai

    2015-10-26

    The present work describes a parallel computational framework for carbon dioxide (CO2) sequestration simulation by coupling reservoir simulation and molecular dynamics (MD) on massively parallel high-performance-computing (HPC) systems. In this framework, a parallel reservoir simulator, reservoir-simulation toolbox (RST), solves the flow and transport equations that describe the subsurface flow behavior, whereas the MD simulations are performed to provide the required physical parameters. Technologies from several different fields are used to make this novel coupled system work efficiently. One of the major applications of the framework is the modeling of large-scale CO2 sequestration for long-term storage in subsurface geological formations, such as depleted oil and gas reservoirs and deep saline aquifers, which has been proposed as one of the few attractive and practical solutions to reduce CO2 emissions and address the global-warming threat. Fine grids and accurate prediction of the properties of fluid mixtures under geological conditions are essential for accurate simulations. In this work, CO2 sequestration is presented as a first example for coupling reservoir simulation and MD, although the framework can be extended naturally to the full multiphase multicomponent compositional flow simulation to handle more complicated physical processes in the future. Accuracy and scalability analysis are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our MD simulations compared with published data, and good scalability is observed with the massively parallel HPC systems. The performance and capacity of the proposed framework are well-demonstrated with several experiments with hundreds of millions to one billion cells. To the best of our knowledge, the present work represents the first attempt to couple reservoir simulation and molecular simulation for large-scale modeling. Because of the complexity of

  7. An Advanced, Interactive, High-Performance Liquid Chromatography Simulator and Instructor Resources

    Science.gov (United States)

    Boswell, Paul G.; Stoll, Dwight R.; Carr, Peter W.; Nagel, Megan L.; Vitha, Mark F.; Mabbott, Gary A.

    2013-01-01

    High-performance liquid chromatography (HPLC) simulation software has long been recognized as an effective educational tool, yet many of the existing HPLC simulators are either too expensive, outdated, or lack many important features necessary to make them widely useful for educational purposes. Here, a free, open-source HPLC simulator is…

  8. High performance MRI simulations of motion on multi-GPU systems.

    Science.gov (United States)

    Xanthis, Christos G; Venetis, Ioannis E; Aletras, Anthony H

    2014-07-04

    MRI physics simulators have been developed in the past for optimizing imaging protocols and for training purposes. However, these simulators have only addressed motion within a limited scope. The purpose of this study was the incorporation of realistic motion, such as cardiac motion, respiratory motion and flow, within MRI simulations in a high performance multi-GPU environment. Three different motion models were introduced in the Magnetic Resonance Imaging SIMULator (MRISIMUL) of this study: cardiac motion, respiratory motion and flow. Simulation of a simple Gradient Echo pulse sequence and a CINE pulse sequence on the corresponding anatomical model was performed. Myocardial tagging was also investigated. In pulse sequence design, software crushers were introduced to accommodate the long execution times and to avoid spurious echo formation. The displacement of the anatomical model isochromats was calculated within the Graphics Processing Unit (GPU) kernel for every timestep of the pulse sequence. Experiments that would allow simulation of custom anatomical and motion models were also performed. Last, simulations of motion with MRISIMUL on single-node and multi-node multi-GPU systems were examined. Gradient Echo and CINE images of the three motion models were produced and motion-related artifacts were demonstrated. The temporal evolution of the contractility of the heart was presented through the application of myocardial tagging. Better simulation performance and image quality were achieved through the introduction of software crushers without the need to further increase the computational load and GPU resources. Last, MRISIMUL demonstrated almost linearly scalable performance with the increasing number of available GPU cards, in both single-node and multi-node multi-GPU computer systems. MRISIMUL is the first MR physics simulator to have implemented motion with a 3D large computational load on a single-computer multi-GPU configuration. The incorporation
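
    The per-isochromat GPU update at the heart of such simulators can be sketched as follows: each thread advances one isochromat through one timestep of off-resonance precession plus T1/T2 relaxation. This is a minimal illustration of the pattern, not MRISIMUL's actual kernel.

```cuda
// Minimal per-isochromat Bloch update (illustrative, not MRISIMUL's kernel):
// rotate the transverse magnetization about z by the off-resonance phase,
// then apply exponential T2 decay and T1 recovery toward M0.
#include <cuda_runtime.h>
#include <math.h>

__global__ void blochStep(float3* M, const float* offres, int n,
                          float dt, float T1, float T2, float M0)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one isochromat per thread
    if (i >= n) return;
    float3 m = M[i];
    float phi = offres[i] * dt;                // precession angle this step
    float c = cosf(phi), s = sinf(phi);
    float mx = c * m.x - s * m.y;              // rotate about z
    float my = s * m.x + c * m.y;
    float e2 = expf(-dt / T2), e1 = expf(-dt / T1);
    M[i] = make_float3(mx * e2, my * e2,       // transverse decay
                       M0 + (m.z - M0) * e1);  // longitudinal recovery
}
```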

  9. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high performance computing are analyzed using bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing, as well as policy planners, with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles from the Scopus database from Elsevier covering the time period 2004-2013. We extracted the author rank in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.

  10. High Performance Electrical Modeling and Simulation Verification Test Suite - Tier I; TOPICAL

    International Nuclear Information System (INIS)

    SCHELLS, REGINA L.; BOGDAN, CAROLYN W.; WIX, STEVEN D.

    2001-01-01

    This document describes the High Performance Electrical Modeling and Simulation (HPEMS) Global Verification Test Suite (VERTS). VERTS is a regression test suite used for verification of the electrical circuit simulation codes currently being developed by the HPEMS code development team. This document contains descriptions of the Tier I test cases.

  11. Progress on H5Part: A Portable High Performance Parallel Data Interface for Electromagnetics Simulations

    International Nuclear Information System (INIS)

    Adelmann, Andreas; Gsell, Achim; Oswald, Benedikt; Schietinger, Thomas; Bethel, Wes; Shalf, John; Siegerist, Cristina; Stockinger, Kurt

    2007-01-01

    Significant problems facing all experimental and computational sciences arise from growing data size and complexity. Common to all these problems is the need to perform efficient data I/O on diverse computer architectures. In our scientific application, the largest parallel particle simulations generate vast quantities of six-dimensional data. Such a simulation run produces data with an aggregate size of up to several TB per run. Motivated by the need to address these data I/O and access challenges, we have implemented H5Part, an open source data I/O API that simplifies the use of the Hierarchical Data Format v5 library (HDF5). HDF5 is an industry standard for high performance, cross-platform data storage and retrieval that runs on all contemporary architectures, from large parallel supercomputers to laptops. H5Part, which is oriented to the needs of the particle physics and cosmology communities, provides support for parallel storage and retrieval of particles and of structured and, in the future, unstructured meshes. In this paper, we describe recent work focusing on I/O support for particles and structured meshes and provide data showing performance on modern supercomputer architectures like the IBM POWER 5.
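
    A minimal sketch of writing one timestep of particle data through the H5Part C API follows; the serial open is shown for brevity (a parallel variant taking an MPI communicator exists), and the call sequence is based on the H5Part project's documented API, so treat exact signatures as an assumption.

```cpp
// Sketch of writing one timestep of particle coordinates with H5Part
// (call names per the H5Part project's documentation).
#include <H5Part.h>

void writeStep(double* x, double* y, double* z, long long n, int step)
{
    H5PartFile* file = H5PartOpenFile("particles.h5", H5PART_WRITE);
    H5PartSetStep(file, step);            // one group per timestep
    H5PartSetNumParticles(file, n);       // declares the dataset extent
    H5PartWriteDataFloat64(file, "x", x); // one named array per coordinate
    H5PartWriteDataFloat64(file, "y", y);
    H5PartWriteDataFloat64(file, "z", z);
    H5PartCloseFile(file);
}
```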

  12. Aging analysis of high performance FinFET flip-flop under Dynamic NBTI simulation configuration

    Science.gov (United States)

    Zainudin, M. F.; Hussin, H.; Halim, A. K.; Karim, J.

    2018-03-01

    A mechanism known as Negative-Bias Temperature Instability (NBTI) degrades the main electrical parameters of a circuit, especially its performance. So far, circuit designs available at present focus only on high performance without considering circuit reliability and robustness. In this paper, the main circuit performance measures of a high performance FinFET flip-flop, such as delay time and power, were studied in the presence of NBTI degradation. The aging analysis was verified using a 16 nm High Performance Predictive Technology Model (PTM) with different commands available in Synopsys HSPICE. The results show that the circuit under longer dynamic NBTI simulation exhibits the largest increase in gate delay and decrease in average power, from a fresh simulation to the aged stress time, under nominal conditions. In addition, circuit performance under varied stress conditions, such as temperature and negative gate stress bias, was also studied.

  13. High-Performance Modeling and Simulation of Anchoring in Granular Media for NEO Applications

    Science.gov (United States)

    Quadrelli, Marco B.; Jain, Abhinandan; Negrut, Dan; Mazhar, Hammad

    2012-01-01

    NASA is interested in designing a spacecraft capable of visiting a near-Earth object (NEO), performing experiments, and then returning safely. Certain periods of this mission would require the spacecraft to remain stationary relative to the NEO, in an environment characterized by very low gravity levels; such situations require an anchoring mechanism that is compact, easy to deploy, and upon mission completion, easy to remove. The design philosophy used in this task relies on the simulation capability of a high-performance multibody dynamics physics engine. On Earth, it is difficult to create low-gravity conditions, and testing in low-gravity environments, whether artificial or in space, can be costly and very difficult to achieve. Through simulation, the effect of gravity can be controlled with great accuracy, making it ideally suited to analyze the problem at hand. Using Chrono::Engine, a simulation package capable of utilizing massively parallel Graphic Processing Unit (GPU) hardware, several validation experiments were performed. Modeling of the regolith interaction has been carried out, after which the anchor penetration tests were performed and analyzed. The regolith was modeled by a granular medium composed of very large numbers of convex three-dimensional rigid bodies, subject to microgravity levels and interacting with each other with contact, friction, and cohesional forces. The multibody dynamics simulation approach used for simulating anchors penetrating soil uses a differential variational inequality (DVI) methodology to solve the contact problem posed as a linear complementarity problem (LCP). Implemented within a GPU processing environment, collision detection is greatly accelerated compared to traditional CPU (central processing unit)-based collision detection. Hence, systems of millions of particles interacting with complex dynamic systems can be efficiently analyzed, and design recommendations can be made in a much shorter time. The figure
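
    The normal-contact condition at the core of the DVI/LCP formulation can be stated compactly in its standard textbook form (Chrono::Engine's full model adds friction cones and stabilization terms):

```latex
% Normal-contact complementarity condition in DVI/LCP formulations:
0 \le \gamma_n \;\perp\; \Phi(\mathbf{q}) \ge 0
% gamma_n: normal contact impulse; Phi(q): gap function between bodies.
% Either the gap is open (Phi > 0) and the impulse vanishes, or the bodies
% touch (Phi = 0) and a nonnegative impulse prevents penetration.
```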

  14. LIAR -- A computer program for the modeling and simulation of high performance linacs

    International Nuclear Information System (INIS)

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-04-01

    The computer program LIAR (LInear Accelerator Research Code) is a numerical modeling and simulation tool for high performance linacs. Amongst others, it addresses the needs of state-of-the-art linear colliders where low-emittance, high-intensity beams must be accelerated to energies in the 0.05-1 TeV range. LIAR is designed to be used for a variety of different projects. LIAR allows the study of single- and multi-particle beam dynamics in linear accelerators. It calculates emittance dilutions due to wakefield deflections, linear and non-linear dispersion and chromatic effects in the presence of multiple accelerator imperfections. Both single-bunch and multi-bunch beams can be simulated. Several basic and advanced optimization schemes are implemented. Present limitations arise from the incomplete treatment of bending magnets and sextupoles. A major objective of the LIAR project is to provide an open programming platform for the accelerator physics community. Due to its design, LIAR allows straightforward access to its internal FORTRAN data structures. The program can easily be extended and its interactive command language ensures maximum ease of use. Presently, versions of LIAR are compiled for UNIX and MS Windows operating systems. An interface for the graphical visualization of results is provided. Scientific graphs can be saved in the PS and EPS file formats. In addition a Mathematica interface has been developed. LIAR now contains more than 40,000 lines of source code in more than 130 subroutines. This report describes the theoretical basis of the program, provides a reference for existing features and explains how to add further commands. The LIAR home page and the ONLINE version of this manual can be accessed under: http://www.slac.stanford.edu/grp/arb/rwa/liar.htm

  15. High performance simulation of lattice physics using enhanced transputer arrays

    International Nuclear Information System (INIS)

    Hey, A.J.G.; Jesshope, C.R.; Nicole, D.A.

    1986-01-01

    The authors describe an architecture under construction at Southampton using arrays of communicating transputers with enhanced floating-point capabilities. Performance in the Gigaflop range is expected. Algorithms for taking explicit advantage of this MIMD architecture are discussed using the Occam programming paradigm. (Auth.)

  16. Hybrid Microscopy: Enabling Inexpensive High-Performance Imaging through Combined Physical and Optical Magnifications.

    Science.gov (United States)

    Zhang, Yu Shrike; Chang, Jae-Byum; Alvarez, Mario Moisés; Trujillo-de Santiago, Grissel; Aleman, Julio; Batzaya, Byambaa; Krishnadoss, Vaishali; Ramanujam, Aishwarya Aravamudhan; Kazemzadeh-Narbat, Mehdi; Chen, Fei; Tillberg, Paul W; Dokmeci, Mehmet Remzi; Boyden, Edward S; Khademhosseini, Ali

    2016-03-15

    To date, much effort has been expended on making high-performance microscopes through better instrumentation. Recently, it was discovered that physical magnification of specimens was possible, through a technique called expansion microscopy (ExM), raising the question of whether physical magnification, coupled to inexpensive optics, could together match the performance of high-end optical equipment, at a tiny fraction of the price. Here we show that such "hybrid microscopy" methods, combining physical and optical magnifications, can indeed achieve high performance at low cost. By physically magnifying objects, then imaging them on cheap miniature fluorescence microscopes ("mini-microscopes"), it is possible to image at a resolution comparable to that previously attainable only with benchtop microscopes that cost orders of magnitude more. We believe that this unprecedented hybrid technology, which combines expansion microscopy, based on physical magnification, and mini-microscopy, relying on conventional optics (a process we refer to as Expansion Mini-Microscopy, or ExMM), is a highly promising alternative method for performing cost-effective, high-resolution imaging of biological samples. With further advancement of the technology, we believe that ExMM will find widespread applications for high-resolution imaging, particularly in research and healthcare scenarios in undeveloped countries or remote places.

  17. The computer program LIAR for the simulation and modeling of high performance linacs

    International Nuclear Information System (INIS)

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.O.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-07-01

    High performance linear accelerators are the central components of the proposed next generation of linear colliders. They must provide acceleration of up to 750 GeV per beam while maintaining small normalized emittances. Standard simulation programs, mainly developed for storage rings, did not meet the specific requirements for high performance linacs with high bunch charges and strong wakefields. The authors present the program LIAR (LInear Accelerator Research code), which includes single- and multi-bunch wakefield effects, a 6D coupled beam description, specific optimization algorithms and other advanced features. LIAR has been applied to and checked against the existing Stanford Linear Collider (SLC), the linacs of the proposed Next Linear Collider (NLC) and the proposed Linac Coherent Light Source (LCLS) at SLAC. Its modular structure allows easy extension for different purposes. The program is available for UNIX workstations and Windows PCs.

  18. High Performance Wideband CMOS CCI and its Application in Inductance Simulator Design

    Directory of Open Access Journals (Sweden)

    ARSLAN, E.

    2012-08-01

    In this paper, a new, differential-pair-based, low-voltage, high performance and wideband CMOS first generation current conveyor (CCI) is proposed. The proposed CCI has high voltage swings on ports X and Y and very low equivalent impedance on port X due to a super source follower configuration. It also has high voltage swings (close to the supply voltages) on its input and output ports and wideband current and voltage transfer ratios. Furthermore, two novel grounded inductance simulator circuits are proposed as application examples. Using HSpice, it is shown that the simulation results of the proposed CCI and of the presented inductance simulators are in very good agreement with the expected ones.

  19. OpenMM 4: A Reusable, Extensible, Hardware Independent Library for High Performance Molecular Simulation.

    Science.gov (United States)

    Eastman, Peter; Friedrichs, Mark S; Chodera, John D; Radmer, Randall J; Bruns, Christopher M; Ku, Joy P; Beauchamp, Kyle A; Lane, Thomas J; Wang, Lee-Ping; Shukla, Diwakar; Tye, Tony; Houston, Mike; Stich, Timo; Klein, Christoph; Shirts, Michael R; Pande, Vijay S

    2013-01-08

    OpenMM is a software toolkit for performing molecular simulations on a range of high performance computing architectures. It is based on a layered architecture: the lower layers function as a reusable library that can be invoked by any application, while the upper layers form a complete environment for running molecular simulations. The library API hides all hardware-specific dependencies and optimizations from the users and developers of simulation programs: they can be run without modification on any hardware on which the API has been implemented. The current implementations of OpenMM include support for graphics processing units using the OpenCL and CUDA frameworks. In addition, OpenMM was designed to be extensible, so new hardware architectures can be accommodated and new functionality (e.g., energy terms and integrators) can be easily added.
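
    A minimal example against OpenMM's public C++ API illustrates the layering the abstract describes: the same program runs unchanged on any platform (Reference, OpenCL, CUDA) for which the lower layers are implemented. Parameter values below are illustrative only.

```cpp
// Minimal OpenMM C++ example: two Lennard-Jones particles integrated with
// a Verlet integrator; the Context picks a platform implementation.
#include <OpenMM.h>
#include <vector>

int main() {
    OpenMM::System system;
    OpenMM::NonbondedForce* nb = new OpenMM::NonbondedForce();
    for (int i = 0; i < 2; ++i) {
        system.addParticle(39.95);             // argon mass, amu (illustrative)
        nb->addParticle(0.0, 0.34, 0.996);     // charge, sigma (nm), epsilon (kJ/mol)
    }
    system.addForce(nb);                       // system takes ownership

    OpenMM::VerletIntegrator integrator(0.002);        // 2 fs timestep
    OpenMM::Context context(system, integrator);       // platform chosen automatically
    std::vector<OpenMM::Vec3> pos = { OpenMM::Vec3(0, 0, 0),
                                      OpenMM::Vec3(0.5, 0, 0) };
    context.setPositions(pos);
    integrator.step(1000);                     // run 1000 MD steps
}
```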

  20. High Performance Computing and Storage Requirements for Nuclear Physics: Target 2017

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wasserman, Harvey [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-04-30

    In April 2014, NERSC, ASCR, and the DOE Office of Nuclear Physics (NP) held a review to characterize high performance computing (HPC) and storage requirements for NP research through 2017. This review is the 12th in a series of reviews held by NERSC and Office of Science program offices that began in 2009. It is the second for NP, and the final in the second round of reviews that covered the six Office of Science program offices. This report is the result of that review.

  1. High performance simulation for the Silva project using the tera computer

    Energy Technology Data Exchange (ETDEWEB)

    Bergeaud, V.; La Hargue, J.P.; Mougery, F. [CS Communication and Systemes, 92 - Clamart (France); Boulet, M.; Scheurer, B. [CEA Bruyeres-le-Chatel, 91 - Bruyeres-le-Chatel (France); Le Fur, J.F.; Comte, M.; Benisti, D.; Lamare, J. de; Petit, A. [CEA Saclay, 91 - Gif sur Yvette (France)

    2003-07-01

    In the context of the SILVA Project (Atomic Vapor Laser Isotope Separation), numerical simulation of the plant-scale propagation of laser beams through uranium vapour was a great challenge. The PRODIGE code has been developed to achieve this goal. Here we focus on the task of achieving high performance simulation on the TERA computer. We describe the main issues in optimizing the parallelization of the PRODIGE code on TERA and discuss the advantages and drawbacks of the implemented diagonal parallelization scheme. It proved fruitful to tune the code in three respects: memory allocation, MPI communications and interconnection network bandwidth usage. We highlight the value of MPI/IO in this context and the benefit obtained for production computations on TERA. Finally, we illustrate our developments with performance measurements reflecting the good parallelization properties of PRODIGE on the TERA computer. The code is currently used for demonstrating the feasibility of laser propagation at a plant enrichment level and for preparing the 2003 Menphis experiment. We conclude by emphasizing the contribution of high performance TERA simulation to the project. (authors)
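
    The abstract's emphasis on MPI and MPI/IO can be illustrated with a small mpi4py sketch (not the PRODIGE code itself): each rank writes its block of a distributed array to a single shared file with collective I/O. The file name and sizes are arbitrary.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_local = 1_000_000                       # doubles owned by this rank
        data = np.full(n_local, rank, dtype="d")  # stand-in for local field data

        # Collective MPI-IO: every rank writes its contiguous block at its own offset.
        fh = MPI.File.Open(comm, "field.dat", MPI.MODE_CREATE | MPI.MODE_WRONLY)
        offset = rank * n_local * data.itemsize
        fh.Write_at_all(offset, data)
        fh.Close()

        if rank == 0:
            print(f"{size} ranks wrote {size * n_local * 8 / 1e6:.0f} MB collectively")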

  2. High performance simulation for the Silva project using the tera computer

    International Nuclear Information System (INIS)

    Bergeaud, V.; La Hargue, J.P.; Mougery, F.; Boulet, M.; Scheurer, B.; Le Fur, J.F.; Comte, M.; Benisti, D.; Lamare, J. de; Petit, A.

    2003-01-01

    In the context of the SILVA Project (Atomic Vapor Laser Isotope Separation), numerical simulation of the plant-scale propagation of laser beams through uranium vapour was a great challenge. The PRODIGE code has been developed to achieve this goal. Here we focus on the task of achieving high performance simulation on the TERA computer. We describe the main issues in optimizing the parallelization of the PRODIGE code on TERA and discuss the advantages and drawbacks of the implemented diagonal parallelization scheme. It proved fruitful to tune the code in three respects: memory allocation, MPI communications and interconnection network bandwidth usage. We highlight the value of MPI/IO in this context and the benefit obtained for production computations on TERA. Finally, we illustrate our developments with performance measurements reflecting the good parallelization properties of PRODIGE on the TERA computer. The code is currently used for demonstrating the feasibility of laser propagation at a plant enrichment level and for preparing the 2003 Menphis experiment. We conclude by emphasizing the contribution of high performance TERA simulation to the project. (authors)

  3. High performance cellular level agent-based simulation with FLAME for the GPU.

    Science.gov (United States)

    Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela

    2010-05-01

    Driven by the availability of experimental data and the ability to simulate a biological scale which is of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template driven framework for agent-based modelling (ABM) on parallel architectures ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has demonstrated massive performance improvements over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.
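
    FLAME itself is driven by XML model templates, so the snippet below is only a generic, hedged illustration of the data-parallel update pattern that makes cellular ABMs map well to GPUs: every agent's state lives in arrays and is updated in bulk, the same "one thread per agent" structure a GPU framework parallelizes. All rules and numbers are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000                                  # number of cell agents (arbitrary)
        pos = rng.random((n, 2)) * 1000.0            # positions in a 1000x1000 domain
        alive = np.ones(n, dtype=bool)

        def step(pos, alive):
            # All agents move in one vectorized pass rather than object-by-object.
            drift = rng.normal(0.0, 1.0, pos.shape)
            pos = np.clip(pos + drift, 0.0, 1000.0)
            # Toy death rule standing in for a biological state transition.
            alive &= rng.random(len(pos)) > 1e-4
            return pos, alive

        for _ in range(100):
            pos, alive = step(pos, alive)
        print("surviving agents:", int(alive.sum()))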

  4. Design of the HELICS High-Performance Transmission-Distribution-Communication-Market Co-Simulation Framework

    Energy Technology Data Exchange (ETDEWEB)

    Palmintier, Bryan S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnamurthy, Dheepak [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Top, Philip [Lawrence Livermore National Laboratories; Smith, Steve [Lawrence Livermore National Laboratories; Daily, Jeff [Pacific Northwest National Laboratory; Fuller, Jason [Pacific Northwest National Laboratory

    2017-10-12

    This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.
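
    HELICS exposes its own federate API; the sketch below is a generic, hedged rendering of the co-iteration idea described here: at each time step, two coupled simulators exchange boundary values until they agree, then advance together. All names and the toy federates are invented.

        def co_iterate(step_a, step_b, t, x0, tol=1e-6, max_iter=50):
            # Iterate two federates at time t until the shared interface converges.
            x = x0
            for _ in range(max_iter):
                y = step_a(t, x)      # e.g. transmission solve given distribution load
                x_new = step_b(t, y)  # e.g. distribution solve given transmission voltage
                if abs(x_new - x) < tol:
                    return x_new
                x = x_new
            raise RuntimeError("federates failed to converge at t=%s" % t)

        # Toy federates forming a contractive fixed-point pair (solution x = 2/3).
        step_a = lambda t, x: 1.0 - 0.5 * x
        step_b = lambda t, y: y

        x = 0.0
        for t in range(10):           # advance the co-simulation clock step by step
            x = co_iterate(step_a, step_b, t, x)
        print("converged interface value:", x)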

  5. Statistical physics of fracture: scientific discovery through high-performance computing

    International Nuclear Information System (INIS)

    Kumar, Phani; Nukala, V V; Simunovic, Srdan; Mills, Richard T

    2006-01-01

    The paper presents the state-of-the-art algorithmic developments for simulating the fracture of disordered quasi-brittle materials using discrete lattice systems. Large scale simulations are often required to obtain accurate scaling laws; however, due to computational complexity, simulations using the traditional algorithms were limited to small system sizes. We have developed two algorithms: a multiple sparse Cholesky downdating scheme for simulating 2D random fuse model systems, and a block-circulant preconditioner for simulating 3D random fuse model systems. Using these algorithms, we were able to simulate fracture of the largest ever lattice system sizes (L = 1024 in 2D, and L = 64 in 3D) with extensive statistical sampling. Our recent simulations on 1024 processors of Cray-XT3 and IBM Blue-Gene/L have further enabled us to explore fracture of 3D lattice systems of size L = 200, which is a significant computational achievement. These largest ever numerical simulations have enhanced our understanding of the physics of fracture; in particular, we analyze damage localization and its deviation from percolation behavior, scaling laws for damage density, universality of the fracture strength distribution, the size effect on the mean fracture strength, and finally the scaling of crack surface roughness.
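
    As a hedged, small-scale sketch of the 2D random fuse model these algorithms accelerate (using a plain sparse solver rather than the paper's Cholesky-downdating or block-circulant methods): fuses are unit conductors with random breaking thresholds, and the lattice Kirchhoff equations are re-solved as the most overstressed fuse burns out each step. Lattice size and thresholds are arbitrary.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        rng = np.random.default_rng(1)
        L = 16                                   # small lattice; the paper uses up to L = 1024
        idx = lambda i, j: i * L + j

        # Nearest-neighbour bonds (unit conductance) with random breaking thresholds.
        bonds = [(idx(i, j), idx(i + 1, j)) for i in range(L - 1) for j in range(L)] + \
                [(idx(i, j), idx(i, j + 1)) for i in range(L) for j in range(L - 1)]
        thresh = rng.uniform(0.5, 1.5, len(bonds))
        intact = np.ones(len(bonds), dtype=bool)
        fixed = {idx(0, j): 1.0 for j in range(L)}            # top row: V = 1
        fixed.update({idx(L - 1, j): 0.0 for j in range(L)})  # bottom row: V = 0

        def voltages():
            n = L * L
            G = sp.lil_matrix((n, n))
            rhs = np.zeros(n)
            for (a, b), ok in zip(bonds, intact):
                if ok:
                    G[a, a] += 1.0; G[b, b] += 1.0
                    G[a, b] -= 1.0; G[b, a] -= 1.0
            for node, v in fixed.items():        # Dirichlet boundary rows
                G.rows[node] = [node]; G.data[node] = [1.0]; rhs[node] = v
            G.setdiag(G.diagonal() + 1e-9)       # regularize possible isolated nodes
            return spla.spsolve(G.tocsr(), rhs)

        for burnt in range(40):                  # quasi-static damage accumulation
            v = voltages()
            stress = np.abs([v[a] - v[b] for a, b in bonds]) / thresh
            stress[~intact] = 0.0
            intact[int(np.argmax(stress))] = False   # burn the most overstressed fuse
        print("burnt 40 fuses; remaining intact bonds:", int(intact.sum()))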

  6. High-performance modeling of CO2 sequestration by coupling reservoir simulation and molecular dynamics

    KAUST Repository

    Bao, Kai; Yan, Mi; Lu, Ligang; Allen, Rebecca; Salam, Amgad; Jordan, Kirk E.; Sun, Shuyu

    2013-01-01

    …the framework can be extended naturally to the full multiphase multicomponent compositional flow simulation to handle more complicated physical processes in the future. Accuracy and scalability analyses are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our MD simulations compared with published data.

  7. GPU-based high performance Monte Carlo simulation in neutron transport

    Energy Technology Data Exchange (ETDEWEB)

    Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Inteligencia Artificial Aplicada], e-mail: cmnap@ien.gov.br

    2009-07-01

    Graphics Processing Units (GPU) are high performance co-processors intended, originally, to improve the use and quality of computer graphics applications. Since researchers and practitioners realized the potential of using GPU for general purpose, their application has been extended to other fields out of computer graphics scope. The main objective of this work is to evaluate the impact of using GPU in neutron transport simulation by Monte Carlo method. To accomplish that, GPU- and CPU-based (single and multicore) approaches were developed and applied to a simple, but time-consuming problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)
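
    A hedged illustration of the embarrassingly parallel structure the authors exploit (a toy slab-transmission problem, not their code): all particle histories advance in lock-step array operations, which is exactly the style that ports to a GPU, e.g. by swapping numpy for the CuPy drop-in. Cross-sections and geometry are invented.

        import numpy as np   # with CuPy installed, `import cupy as np` runs largely unchanged

        rng = np.random.default_rng(42)
        n = 1_000_000                 # neutron histories, all advanced simultaneously
        sigma_t, p_abs = 1.0, 0.3     # total cross-section [1/cm], absorption probability
        width = 5.0                   # slab thickness [cm]

        x = np.zeros(n)               # positions
        mu = np.ones(n)               # direction cosines (monodirectional source)
        alive = np.ones(n, dtype=bool)
        transmitted = np.zeros(n, dtype=bool)

        while alive.any():
            # Sample free flight, move, then classify leak / absorb / scatter in bulk.
            step = -np.log(rng.random(n)) / sigma_t
            x = np.where(alive, x + mu * step, x)
            transmitted |= alive & (x > width)
            alive &= (x >= 0.0) & (x <= width)
            absorbed = alive & (rng.random(n) < p_abs)
            alive &= ~absorbed
            mu = np.where(alive, rng.uniform(-1.0, 1.0, n), mu)   # isotropic scatter

        print("transmission probability:", transmitted.sum() / n)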

  8. GPU-based high performance Monte Carlo simulation in neutron transport

    International Nuclear Information System (INIS)

    Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A.

    2009-01-01

    Graphics Processing Units (GPU) are high performance co-processors intended, originally, to improve the use and quality of computer graphics applications. Since researchers and practitioners realized the potential of using GPU for general purpose, their application has been extended to other fields out of computer graphics scope. The main objective of this work is to evaluate the impact of using GPU in neutron transport simulation by Monte Carlo method. To accomplish that, GPU- and CPU-based (single and multicore) approaches were developed and applied to a simple, but time-consuming problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)

  9. A Grid-Based Cyber Infrastructure for High Performance Chemical Dynamics Simulations

    Directory of Open Access Journals (Sweden)

    Khadka Prashant

    2008-10-01

    Full Text Available Chemical dynamics simulation is an effective means to study atomic-level motions of molecules, collections of molecules, liquids, surfaces, interfaces of materials, and chemical reactions. To make chemical dynamics simulations globally accessible to a broad range of users, a cyber infrastructure was recently developed that provides an online portal to VENUS, a popular chemical dynamics simulation program package, allowing people to submit simulation jobs that will be executed on the web server machine. In this paper, we report new developments of the cyber infrastructure that improve its quality of service: the submitted simulation jobs are now dispatched from the web server machine onto a cluster of workstations for execution, and an animation tool optimized for animating simulation results has been added. The separation of the server machine from the simulation-running machines improves service quality by increasing the capacity to serve more requests simultaneously with reduced web response time, and allows the execution of large-scale, time-consuming simulation jobs on the powerful workstation cluster. With the addition of the animation tool, the cyber infrastructure automatically converts, upon the selection of the user, some simulation results into an animation file that can be viewed in standard web browsers without requiring installation of any special software on the user's computer. Since animation is essential for understanding the results of chemical dynamics simulations, this animation capacity provides a better way of understanding the details of the chemical dynamics. By combining computing resources at locations under different administrative controls, this cyber infrastructure constitutes a grid environment providing physically and administratively distributed functionalities through a single easy-to-use online portal.

  10. Direct numerical simulation of reactor two-phase flows enabled by high-performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Jun; Cambareri, Joseph J.; Brown, Cameron S.; Feng, Jinyong; Gouws, Andre; Li, Mengnan; Bolotnov, Igor A.

    2018-04-01

    Nuclear reactor two-phase flows remain a great engineering challenge: the high-resolution two-phase flow databases that can inform practical model development are still sparse, owing to extreme reactor operating conditions and measurement difficulties. Owing to the rapid growth of computing power, direct numerical simulation (DNS) is enjoying renewed interest for investigating the related flow problems. A combination of DNS and an interface tracking method provides a unique opportunity to study two-phase flows from first-principles calculations. More importantly, state-of-the-art high-performance computing (HPC) facilities are helping unlock this great potential. This paper reviews recent research progress in two-phase flow DNS related to reactor applications. The progress in large-scale bubbly flow DNS has focused not only on the sheer size of those simulations in terms of resolved Reynolds number, but also on the associated advanced modeling and analysis techniques. Specifically, the current areas of active research include modeling of sub-cooled boiling, bubble coalescence, as well as advanced post-processing toolkits for bubbly flow simulations in reactor geometries. A novel bubble tracking method has been developed to track the evolution of bubbles in two-phase bubbly flow. Also, spectral analysis of the DNS database in different geometries has been performed to investigate the modulation of the energy spectrum slope due to bubble-induced turbulence. In addition, single- and two-phase analysis results are presented for turbulent flows within pressurized water reactor (PWR) core geometries. These simulations can be carried out only on the world's leading HPC platforms, and they are enabling more complex turbulence model development and validation for use in 3D multiphase computational fluid dynamics (M-CFD) codes.
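
    As a hedged sketch of the kind of spectral post-processing mentioned (fitting the slope of the energy spectrum), the snippet below computes a 1D kinetic-energy spectrum from a sampled velocity signal with numpy's FFT. The synthetic signal, built with a -5/3-like spectrum, is only a stand-in for DNS data.

        import numpy as np

        rng = np.random.default_rng(7)
        n, dx = 4096, 1e-3                     # samples along a line probe, spacing [m]

        # Synthetic velocity fluctuation with a power-law spectrum (stand-in for DNS).
        k = np.fft.rfftfreq(n, d=dx) * 2 * np.pi
        amp = np.zeros_like(k)
        amp[1:] = k[1:] ** (-5.0 / 6.0)        # E ~ k^-5/3 implies amplitude ~ k^-5/6
        phase = rng.uniform(0, 2 * np.pi, len(k))
        u = np.fft.irfft(amp * np.exp(1j * phase), n)

        # One-dimensional energy spectrum E(k) from the FFT of the fluctuation.
        u_hat = np.fft.rfft(u - u.mean())
        E = (np.abs(u_hat) ** 2) / n
        slope = np.polyfit(np.log(k[10:500]), np.log(E[10:500]), 1)[0]
        print("fitted inertial-range slope:", slope)   # close to -5/3 by construction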

  11. Multi-scale high-performance fluid flow: Simulations through porous media

    KAUST Repository

    Perović, Nevena

    2016-08-03

    Computational fluid dynamic (CFD) calculations on geometrically complex domains such as porous media require high geometric discretisation for accurately capturing the tested physical phenomena. Moreover, when considering a large area and analysing local effects, it is necessary to deploy a multi-scale approach that is both memory-intensive and time-consuming. Hence, this type of analysis must be conducted on a high-performance parallel computing infrastructure. In this paper, the coupling of two different scales based on the Navier–Stokes equations and Darcy's law is described followed by the generation of complex geometries, and their discretisation and numerical treatment. Subsequently, the necessary parallelisation techniques and a rather specific tool, which is capable of retrieving data from the supercomputing servers and visualising them during the computation runtime (i.e. in situ) are described. All advantages and possible drawbacks of this approach, together with the preliminary results and sensitivity analyses are discussed in detail.
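
    For reference, the two flow regimes being coupled can be written compactly in LaTeX notation (a generic statement of the governing equations, not the paper's exact formulation or interface conditions):

        % free-flow region: incompressible Navier-Stokes
        \rho\left(\frac{\partial \mathbf{u}}{\partial t}
          + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
          = -\nabla p + \mu\,\nabla^{2}\mathbf{u},
        \qquad \nabla\cdot\mathbf{u} = 0,

        % porous region: Darcy's law with permeability K
        \mathbf{u}_{D} = -\frac{K}{\mu}\,\nabla p_{D}.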

  12. Multi-scale high-performance fluid flow: Simulations through porous media

    KAUST Repository

    Perović, Nevena; Frisch, Jérôme; Salama, Amgad; Sun, Shuyu; Rank, Ernst; Mundani, Ralf Peter

    2016-01-01

    Computational fluid dynamic (CFD) calculations on geometrically complex domains such as porous media require high geometric discretisation for accurately capturing the tested physical phenomena. Moreover, when considering a large area and analysing local effects, it is necessary to deploy a multi-scale approach that is both memory-intensive and time-consuming. Hence, this type of analysis must be conducted on a high-performance parallel computing infrastructure. In this paper, the coupling of two different scales based on the Navier–Stokes equations and Darcy's law is described followed by the generation of complex geometries, and their discretisation and numerical treatment. Subsequently, the necessary parallelisation techniques and a rather specific tool, which is capable of retrieving data from the supercomputing servers and visualising them during the computation runtime (i.e. in situ) are described. All advantages and possible drawbacks of this approach, together with the preliminary results and sensitivity analyses are discussed in detail.

  13. THC-MP: High performance numerical simulation of reactive transport and multiphase flow in porous media

    Science.gov (United States)

    Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu

    2015-07-01

    The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet the increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multi-phase flow and reactive transport problems, we developed THC-MP, a high performance computing code for massively parallel computers, which greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structure, and implemented the data initialization and exchange between computing nodes and the core solving module using hybrid parallel iterative and direct solvers. Numerical accuracy of THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from the parallel computation with those from the sequential computation (the original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results demonstrate the enhanced performance achieved by THC-MP on parallel computing facilities.
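
    The domain-decomposition step described here can be illustrated with a hedged mpi4py halo-exchange sketch: each rank owns a strip of the grid plus one ghost cell on each side, exchanged with its neighbours before every local update. Names, sizes and the toy diffusion update are arbitrary.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        left = rank - 1 if rank > 0 else MPI.PROC_NULL
        right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        n_local = 100
        u = np.zeros(n_local + 2)          # interior cells plus two ghost cells
        u[1:-1] = rank                     # stand-in for this rank's field values

        for _ in range(10):                # e.g. explicit diffusion iterations
            # Exchange ghost cells with both neighbours (Sendrecv avoids deadlock).
            comm.Sendrecv(sendbuf=u[1:2],   dest=left,  recvbuf=u[-1:], source=right)
            comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[:1],  source=left)
            u[1:-1] += 0.25 * (u[:-2] - 2 * u[1:-1] + u[2:])

        print(f"rank {rank}: boundary values {u[1]:.3f} .. {u[-2]:.3f}")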

  14. Simulation of the High Performance Time to Digital Converter for the ATLAS Muon Spectrometer trigger upgrade

    International Nuclear Information System (INIS)

    Meng, X.T.; Levin, D.S.; Chapman, J.W.; Zhou, B.

    2016-01-01

    The ATLAS Muon Spectrometer endcap thin-Resistive Plate Chamber trigger project complements the New Small Wheel endcap Phase-1 upgrade for higher luminosity LHC operation. These new trigger chambers, located in a high rate region of ATLAS, will improve overall trigger acceptance and reduce the fake muon trigger incidence. These chambers must generate a low level muon trigger to be delivered to a remote high level processor within a stringent latency requirement of 43 bunch crossings (1075 ns). To help meet this requirement the High Performance Time to Digital Converter (HPTDC), a multi-channel ASIC designed by the CERN Microelectronics group, has been proposed for the digitization of the fast front end detector signals. This paper investigates the HPTDC performance in the context of the overall muon trigger latency, employing detailed behavioral Verilog simulations in which the latency in triggerless mode is measured for a range of configurations and under realistic hit rate conditions. The simulation results show that various HPTDC operational configurations, including leading edge and pair measurement modes, can provide high efficiency (>98%) to capture and digitize hits within a time interval satisfying the Phase-1 latency tolerance.
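
    The latency question being studied can be posed with a generic discrete-event toy model (not the HPTDC's actual Verilog behavior): hits arrive at a Poisson-like rate, a fixed-depth buffer serializes readout, and we measure what fraction of hits is captured and how long they wait. All rates and depths are assumptions.

        import random

        random.seed(3)
        hit_rate = 0.2        # hits per clock per channel (assumed)
        readout_time = 4      # clocks to serialize one hit (assumed)
        depth = 8             # buffer depth (assumed)
        n_clocks = 200_000

        buf, busy_until = [], 0
        captured = lost = 0
        latencies = []
        for t in range(n_clocks):
            if random.random() < hit_rate:            # new hit this clock
                if len(buf) < depth:
                    buf.append(t); captured += 1
                else:
                    lost += 1                          # buffer overflow: hit dropped
            if buf and t >= busy_until:                # readout engine free
                t_hit = buf.pop(0)
                latencies.append(t - t_hit + readout_time)
                busy_until = t + readout_time

        eff = captured / (captured + lost)
        print(f"efficiency {eff:.3%}, worst latency {max(latencies)} clocks")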

  15. High-performance modeling of CO2 sequestration by coupling reservoir simulation and molecular dynamics

    KAUST Repository

    Bao, Kai

    2013-01-01

    The present work describes a parallel computational framework for CO2 sequestration simulation that couples reservoir simulation and molecular dynamics (MD) on massively parallel HPC systems. In this framework, a parallel reservoir simulator, the Reservoir Simulation Toolbox (RST), solves the flow and transport equations that describe the subsurface flow behavior, while molecular dynamics simulations are performed to provide the required physical parameters. Numerous technologies from different fields are employed to make this novel coupled system work efficiently. One of the major applications of the framework is the modeling of large-scale CO2 sequestration for long-term storage in subsurface geological formations, such as depleted reservoirs and deep saline aquifers, which has been proposed as one of the most attractive and practical solutions to reduce CO2 emissions and address the global-warming threat. To solve such problems effectively, fine grids and accurate prediction of the properties of fluid mixtures are essential for accuracy. In this work, CO2 sequestration is presented as our first example coupling reservoir simulation and molecular dynamics, while the framework can be extended naturally to full multiphase multicomponent compositional flow simulation to handle more complicated physical processes in the future. Accuracy and scalability analyses are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our MD simulations compared with published data, and good scalability is observed on the massively parallel HPC systems. The performance and capacity of the proposed framework are well demonstrated with several experiments with hundreds of millions to a billion cells. To the best of our knowledge, this work represents the first attempt to couple reservoir simulation and molecular simulation for large-scale modeling. Due to the complexity of the subsurface systems…

  16. Investigating the Mobility of Light Autonomous Tracked Vehicles using a High Performance Computing Simulation Capability

    Science.gov (United States)

    Negrut, Dan; Mazhar, Hammad; Melanz, Daniel; Lamb, David; Jayakumar, Paramsothy; Letherwood, Michael; Jain, Abhinandan; Quadrelli, Marco

    2012-01-01

    This paper is concerned with the physics-based simulation of light tracked vehicles operating on rough deformable terrain. The focus is on small autonomous vehicles, which weigh less than 100 lb and move on deformable and rough terrain that is feature rich and no longer representable using a continuum approach. A scenario of interest is, for instance, the simulation of a reconnaissance mission for a high mobility lightweight robot where objects such as a boulder or a ditch that could otherwise be considered small for a truck or tank, become major obstacles that can impede the mobility of the light autonomous vehicle and negatively impact the success of its mission. Analyzing and gauging the mobility and performance of these light vehicles is accomplished through a modeling and simulation capability called Chrono::Engine. Chrono::Engine relies on parallel execution on Graphics Processing Unit (GPU) cards.

  17. Modern Physics Simulations

    Science.gov (United States)

    Brandt, Douglas; Hiller, John R.; Moloney, Michael J.

    1995-10-01

    The Consortium for Upper Level Physics Software (CUPS) has developed a comprehensive series of nine book/software packages that Wiley will publish in FY '95 and '96. CUPS is an international group of 27 physicists, all with extensive backgrounds in the research, teaching, and development of instructional software. The project is being supported by the National Science Foundation (PHY-9014548), and it has received other support from the IBM Corp., Apple Computer Corp., and George Mason University. The simulations being developed are: Astrophysics, Classical Mechanics, Electricity & Magnetism, Modern Physics, Nuclear and Particle Physics, Quantum Mechanics, Solid State, Thermal and Statistical, and Wave and Optics.

  18. COMSOL-PHREEQC: a tool for high performance numerical simulation of reactive transport phenomena

    International Nuclear Information System (INIS)

    Nardi, Albert; Vries, Luis Manuel de; Trinchero, Paolo; Idiart, Andres; Molinero, Jorge

    2012-01-01

    Document available in extended abstract form only. Comsol Multiphysics (COMSOL, from now on) is a powerful finite element software environment for the modelling and simulation of a large number of physics-based systems. The user can apply variables, expressions or numbers directly to solid and fluid domains, boundaries, edges and points, independently of the computational mesh. COMSOL then internally compiles a set of equations representing the entire model. The availability of extremely powerful pre- and post-processors makes COMSOL a numerical platform well known and extensively used in many branches of science and engineering. On the other hand, PHREEQC is a freely available computer program for simulating chemical reactions and transport processes in aqueous systems. It is perhaps the most widely used geochemical code in the scientific community and is openly distributed. The program is based on equilibrium chemistry of aqueous solutions interacting with minerals, gases, solid solutions, exchangers, and sorption surfaces, but also includes the capability to model kinetic reactions with rate equations that are user-specified in a very flexible way by means of Basic statements written directly in the input file. Here we present COMSOL-PHREEQC, a software interface able to communicate and couple these two powerful simulators by means of a Java interface. The methodology is based on the Sequential Non-Iterative Approach (SNIA), where PHREEQC is compiled as a dynamic subroutine (iPhreeqc) that is called by the interface to solve the geochemical system at every element of the finite element mesh of COMSOL. The numerical tool has been extensively verified by comparison with computed results of 1D, 2D and 3D benchmark examples solved with other reactive transport simulators. COMSOL-PHREEQC is parallelized so that CPU time can be highly optimized on multi-core processors or clusters. Fully 3D, detailed reactive transport problems can thus be readily simulated by means of…
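
    The Sequential Non-Iterative Approach (SNIA) mentioned above is an operator split: within each time step, transport is advanced first, then chemistry is equilibrated cell by cell. A hedged toy version follows, with simple 1D upwind advection and a placeholder first-order reaction standing in for the per-cell PHREEQC call; all parameters are invented.

        import numpy as np

        n, dx, dt, vel = 100, 1.0, 0.5, 1.0      # grid size, spacing, step, pore velocity
        c = np.zeros(n)                          # aqueous concentration
        k_rate = 0.05                            # placeholder kinetic rate constant

        def transport_step(c):
            # Upwind advection with a fixed inlet concentration of 1.0.
            c_new = c.copy()
            c_new[1:] -= vel * dt / dx * (c[1:] - c[:-1])
            c_new[0] = 1.0
            return c_new

        def chemistry_step(c):
            # Stand-in for the per-cell geochemical solve PHREEQC would perform.
            return c * np.exp(-k_rate * dt)

        for step in range(200):                  # SNIA: transport first, then chemistry
            c = chemistry_step(transport_step(c))

        print("outlet concentration after 200 steps:", c[-1])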

  19. Simulating the Physical World

    Science.gov (United States)

    Berendsen, Herman J. C.

    2004-06-01

    The simulation of physical systems requires a simplified, hierarchical approach which models each level from the atomistic to the macroscopic scale. From quantum mechanics to fluid dynamics, this book systematically treats the broad scope of computer modeling and simulations, describing the fundamental theory behind each level of approximation. Berendsen evaluates each stage in relation to its applications, giving the reader insight into the possibilities and limitations of the models. Practical guidance for applications and sample programs in Python are provided. With a strong emphasis on molecular models in chemistry and biochemistry, this book will be suitable for advanced undergraduate and graduate courses on molecular modeling and simulation within physics, biophysics, physical chemistry and materials science. It will also be a useful reference for all those working in the field. Additional resources for this title, including solutions for instructors and programs, are available online at www.cambridge.org/9780521835275. This is the first book to cover the range of modeling and simulations from the atomistic to the macroscopic scale in a systematic fashion. Providing a wealth of background material, it does not assume advanced knowledge and is eminently suitable for course use. It contains practical examples and sample programs in Python.

  20. libRoadRunner: a high performance SBML simulation and analysis library.

    Science.gov (United States)

    Somogyi, Endre T; Bouteiller, Jean-Marie; Glazier, James A; König, Matthias; Medley, J Kyle; Swat, Maciej H; Sauro, Herbert M

    2015-10-15

    This article presents libRoadRunner, an extensible, high-performance, cross-platform, open-source software library for the simulation and analysis of models expressed using the Systems Biology Markup Language (SBML). SBML is the most widely used standard for representing dynamic networks, especially biochemical networks. libRoadRunner is fast enough to support large-scale problems such as tissue models, studies that require large numbers of repeated runs, and interactive simulations. libRoadRunner is a self-contained library, able to run both as a component inside other tools via its C++ and C bindings, and interactively through its Python interface. Its Python Application Programming Interface (API) is similar to the APIs of MATLAB (www.mathworks.com) and SciPy (http://www.scipy.org/), making it fast and easy to learn. libRoadRunner uses a custom Just-In-Time (JIT) compiler built on the widely used LLVM JIT compiler framework. It compiles SBML-specified models directly into native machine code for a variety of processors, making it appropriate for solving extremely large models or repeated runs. libRoadRunner is flexible, supporting the bulk of the SBML specification (except for delay and non-linear algebraic equations) including several SBML extensions (composition and distributions). It offers multiple deterministic and stochastic integrators, as well as tools for steady-state analysis, stability analysis and structural analysis of the stoichiometric matrix. libRoadRunner binary distributions are available for Mac OS X, Linux and Windows; the library is licensed under Apache License Version 2.0 and is also available for ARM-based computers such as the Raspberry Pi. http://www.libroadrunner.org provides online documentation, full build instructions, binaries and a git source repository. Contact: hsauro@u.washington.edu or somogyie@indiana.edu. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2015.
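
    The Python API is indeed compact; a minimal hedged usage sketch follows (the SBML file name is a placeholder, and minor method names may vary across library versions):

        import roadrunner

        # Load an SBML model; libRoadRunner JIT-compiles it to native code on load.
        rr = roadrunner.RoadRunner("model.xml")    # placeholder SBML file

        # Time-course simulation: start time, end time, number of output points.
        result = rr.simulate(0, 50, 500)
        print(result[:5])                          # columns: time plus species values

        # Steady-state analysis uses the same object.
        rr.steadyState()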

  1. StagBL : A Scalable, Portable, High-Performance Discretization and Solver Layer for Geodynamic Simulation

    Science.gov (United States)

    Sanan, P.; Tackley, P. J.; Gerya, T.; Kaus, B. J. P.; May, D.

    2017-12-01

    uninterrupted pipeline from toy/teaching codes to high-performance, extreme-scale solves. StagBLDemo replicates the functionality of an advanced MATLAB-style regional geodynamics code, thus providing users with a concrete procedure to exceed the performance and scalability limitations of smaller-scale tools.

  2. Simulation and high performance computing-Building a predictive capability for fusion

    International Nuclear Information System (INIS)

    Strand, P.I.; Coelho, R.; Coster, D.; Eriksson, L.-G.; Imbeaux, F.; Guillerminet, Bernard

    2010-01-01

    The Integrated Tokamak Modelling Task Force (ITM-TF) is developing an infrastructure where the validation needs, as being formulated in terms of multi-device data access and detailed physics comparisons aiming for inclusion of synthetic diagnostics in the simulation chain, are key components. As the activity and the modelling tools are aimed for general use, although focused on ITER plasmas, a device independent approach to data transport and a standardized approach to data management (data structures, naming, and access) is being developed in order to allow cross-validation between different fusion devices using a single toolset. Extensive work has already gone into, and is continuing to go into, the development of standardized descriptions of the data (Consistent Physical Objects). The longer term aim is a complete simulation platform which is expected to last and be extended in different ways for the coming 30 years. The technical underpinning is therefore of vital importance. In particular the platform needs to be extensible and open-ended to be able to take full advantage of not only today's most advanced technologies but also be able to marshal future developments. As a full level comprehensive prediction of ITER physics rapidly becomes expensive in terms of computing resources, the simulation framework needs to be able to use both grid and HPC computing facilities. Hence data access and code coupling technologies are required to be available for a heterogeneous, possibly distributed, environment. The developments in this area are pursued in a separate project-EUFORIA (EU Fusion for ITER Applications) which is providing about 15 professional person year (ppy) per annum from 14 different institutes. The range and size of the activity is not only technically challenging but is providing some unique management challenges in that a large and geographically distributed team (a truly pan-European set of researchers) need to be coordinated on a fairly detailed

  3. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Science.gov (United States)

    Moon, Hongsik

    changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.

  4. A High Performance Chemical Simulation Preprocessor and Source Code Generator, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Numerical simulations of chemical kinetics are a critical component of aerospace research, Earth systems research, and energy research. These simulations enable a...

  5. A lattice-particle approach for the simulation of fracture processes in fiber-reinforced high-performance concrete

    NARCIS (Netherlands)

    Montero-Chacón, F.; Schlangen, H.E.J.G.; Medina, F.

    2013-01-01

    The use of fiber-reinforced high-performance concrete (FRHPC) is becoming more widespread; it is therefore necessary to develop tools to simulate and better understand its behavior. In this work, a discrete model for the analysis of fracture mechanics in FRHPC is presented. The plain concrete matrix,

  6. Optimized Parallel Discrete Event Simulation (PDES) for High Performance Computing (HPC) Clusters

    National Research Council Canada - National Science Library

    Abu-Ghazaleh, Nael

    2005-01-01

    The aim of this project was to study the communication subsystem performance of state of the art optimistic simulator Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES...

  7. Use of high performance networks and supercomputers for real-time flight simulation

    Science.gov (United States)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  8. Design and Simulation of a High Performance Emergency Data Delivery Protocol

    DEFF Research Database (Denmark)

    Swartz, Kevin; Wang, Di

    2007-01-01

    The purpose of this project was to design a high performance data delivery protocol, capable of delivering data as quickly as possible to a base station or target node. The protocol was designed particularly for wireless network topologies, but could also be applied to a wired system. An emergency is defined as any event with high priority that needs to be handled immediately. It is assumed that the emergency event is important enough that energy efficiency is not a factor in the protocol. The desired effect is delivery to the base station as fast as possible, for rapid event handling.

  9. Inductively coupled plasma emission spectrometric detection of simulated high performance liquid chromatographic peaks

    International Nuclear Information System (INIS)

    Fraley, D.M.; Yates, D.; Manahan, S.E.

    1979-01-01

    Because of its multielement capability, element-specificity, and low detection limits, inductively coupled plasma optical emission spectrometry (ICP) is a very promising technique for the detection of specific elemental species separated by high performance liquid chromatography (HPLC). This paper evaluates ICP as a detector for HPLC peaks containing specific elements. Detection limits for a number of elements have been evaluated in terms of the minimum detectable concentration of the element at the chromatographic peak maximum. The elements studied were Al, As, B, Ba, Ca, Cd, Co, Cr, Cu, Fe, K, Li, Mg, Mn, Mo, Na, Ni, P, Pb, Sb, Se, Sr, Ti, V, and Zn. In addition, ICP was compared with atomic absorption spectrometry for the detection of HPLC peaks composed of EDTA and NTA chelates of copper, and with UV solution absorption for the detection of copper chelates. 6 figures, 4 tables

  10. Comparison of turbulence measurements from DIII-D low-mode and high-performance plasmas to turbulence simulations and models

    International Nuclear Information System (INIS)

    Rhodes, T.L.; Leboeuf, J.-N.; Sydora, R.D.; Groebner, R.J.; Doyle, E.J.; McKee, G.R.; Peebles, W.A.; Rettig, C.L.; Zeng, L.; Wang, G.

    2002-01-01

    Measured turbulence characteristics (correlation lengths, spectra, etc.) in low-confinement (L-mode) and high-performance plasmas in the DIII-D tokamak [Luxon et al., Proceedings Plasma Physics and Controlled Nuclear Fusion Research 1986 (International Atomic Energy Agency, Vienna, 1987), Vol. I, p. 159] show many similarities with the characteristics determined from turbulence simulations. Radial correlation lengths Δr of density fluctuations from L-mode discharges are found to be numerically similar to the ion poloidal gyroradius ρθ,s, or 5-10 times the ion gyroradius ρs, over the radial region r/a > 0.2. To test whether Δr scales as ρθ,s or as 5-10 times ρs, an experiment was performed which modified ρθ,s while keeping other plasma parameters approximately fixed. It was found that the experimental Δr did not scale as ρθ,s, a behavior also seen in low-resolution UCAN simulations. Finally, both experimental measurements and gyrokinetic simulations indicate a significant reduction in the radial correlation length in high-performance quiescent double barrier discharges, as compared to normal L-mode, consistent with reduced transport in these high-performance plasmas.
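
    For context, the radial correlation length quoted here is conventionally defined from the cross-correlation of density fluctuations at two radially separated points; in LaTeX notation (a generic definition, not necessarily the exact estimator used in the paper):

        C(\Delta r) =
          \frac{\langle \tilde n(r)\,\tilde n(r+\Delta r)\rangle}
               {\sqrt{\langle \tilde n^{2}(r)\rangle\,
                      \langle \tilde n^{2}(r+\Delta r)\rangle}},
        \qquad C(\Delta r_{c}) = 1/e,

    with the separation Δr_c at which the correlation falls to 1/e identified as the radial correlation length.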

  11. Physical intelligence at work: Servant-leadership development for high performance

    Science.gov (United States)

    Jim Saveland

    2001-01-01

    In October 2000, the RMRS Leadership Team attended a one-day seminar on leadership presented by Stephen Covey (1990). Covey talked about the role of a leader being respecting, integrating and developing body, heart, mind, and spirit. Integrating our physical, emotional, mental and spiritual selves is a popular theme (e.g. Leonard and Murphy 1995, Levey and Levey 1998,...

  12. Development and testing of high performance pseudo random number generator for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Chakraborty, Brahmananda

    2009-01-01

    Random numbers play an important role in any Monte Carlo simulation. The accuracy of the results depends on the quality of the sequence of random numbers employed in the simulation: the randomness of the numbers, the uniformity of their distribution, the absence of correlation, and a long period. In a typical Monte Carlo simulation of particle transport in a nuclear reactor core, the history of a particle from its birth in a fission event until its death by an absorption or leakage event is tracked. The geometry of the core and the surrounding materials are exactly modeled in the simulation. To track a neutron history one needs random numbers for determining the inter-collision distance, the nature of the collision, the direction of the scattered neutron, etc. Neutrons are tracked in batches; in one batch approximately 2000-5000 neutrons are tracked. The statistical accuracy of the results of the simulation depends on the total number of particles tracked (the number of particles in one batch multiplied by the number of batches). The number of histories to be generated is usually large for a typical radiation transport problem. To track a very large number of histories one needs to generate a long sequence of independent random numbers; in other words, the cycle length of the random number generator (RNG) should exceed the total number of random numbers required for simulating the given transport problem. The number of bits of the machine generally limits the cycle length: for a binary machine of p bits the maximum cycle length is 2^p. To achieve a longer cycle length on the same machine one has to use either register arithmetic or bit manipulation techniques.
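
    As a hedged illustration of the cycle-length point (standard textbook generators, not the generator developed in this work): a p-bit state can visit at most 2^p values, a bound that a full-period generator actually attains.

        # Tiny worked example: an 8-bit linear congruential generator whose
        # parameters satisfy the Hull-Dobell conditions, giving the full
        # period 2**8 = 256, which is small enough to enumerate.
        def lcg8(s, a=5, c=1, m=256):
            return (a * s + c) % m

        seen, s = set(), 0
        while s not in seen:
            seen.add(s)
            s = lcg8(s)
        print("8-bit LCG cycle length:", len(seen))   # 256 == 2**8

        # The same bound at 64 bits: Marsaglia's xorshift64 with the classic
        # (13, 7, 17) shift triple cycles through all 2**64 - 1 nonzero states
        # (a period far too long to enumerate, so it is stated, not shown).
        MASK = (1 << 64) - 1   # emulate 64-bit register arithmetic in Python
        def xorshift64(s):
            s ^= (s << 13) & MASK
            s ^= s >> 7
            s ^= (s << 17) & MASK
            return s
        print(hex(xorshift64(0x123456789ABCDEF)))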

  13. Finite element simulations and experiments of ballistic impacts on high performance PE composite material

    NARCIS (Netherlands)

    Herlaar, K.; Jagt-Deutekom, M.J. van der; Jacobs, M.J.N.

    2005-01-01

    The use of lightweight composite armour concepts is essential for the protection of future combat systems, both vehicles and personal. The design of such armour systems is challenging due to the complex material behaviour. Finite element simulations can be used to help understand the important

  14. High performance discrete event simulations to evaluate complex industrial systems, the case of automatic

    NARCIS (Netherlands)

    Hoekstra, A.G.; Dorst, L.; Bergman, M.; Lagerberg, J.; Visser, A.; Yakali, H.; Groen, F.; Hertzberger, L.O.

    1997-01-01

    We have developed a Modelling and Simulation platform for technical evaluation of Electronic Toll Collection on Motor Highways. This platform is used in a project of the Dutch government to assess the technical feasibility of Toll Collection systems proposed by industry. Motivated by this work we

  15. Time Step Considerations when Simulating Dynamic Behavior of High Performance Homes

    Energy Technology Data Exchange (ETDEWEB)

    Tabares-Velasco, Paulo Cesar

    2016-09-01

    Building energy simulations, especially those concerning pre-cooling strategies and cooling/heating peak demand management, require careful analysis and a detailed understanding of building characteristics. Accurate modeling of the building thermal response and of material properties for thermally massive walls or advanced materials like phase change materials (PCMs) is critically important.

  16. H5Part A Portable High Performance Parallel Data Interface for Particle Simulations

    CERN Document Server

    Adelmann, Andreas; Shalf, John M; Siegerist, Cristina

    2005-01-01

    The largest parallel particle simulations, in six-dimensional phase space, generate vast amounts of data. It is also desirable to share data and data analysis tools such as ParViT (Particle Visualization Toolkit) among other groups who are working on particle-based accelerator simulations. We define a very simple file schema built on top of HDF5 (Hierarchical Data Format version 5) as well as an API that simplifies the reading/writing of the data to the HDF5 file format. HDF5 offers a self-describing, machine-independent binary file format that supports scalable parallel I/O performance for MPI codes on a variety of supercomputing systems and works equally well on laptop computers. The API is available for C, C++, and Fortran codes. The file format will enable disparate research groups with very different simulation implementations to share data transparently and to share data analysis tools. For instance, the common file format will enable groups that depend on completely different simulation implementations to share c…
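
    The schema is simple enough to mimic with plain h5py: one group per time step, one dataset per particle property. The sketch below writes an H5Part-style layout serially (the real API also wraps parallel HDF5 for MPI codes); the "Step#n" naming follows the published H5Part convention, but treat the details as assumptions.

        import numpy as np
        import h5py

        rng = np.random.default_rng(0)
        n = 10_000                                   # particles

        with h5py.File("particles.h5part", "w") as f:
            for step in range(5):
                g = f.create_group(f"Step#{step}")   # H5Part: one group per time step
                for name in ("x", "y", "z", "px", "py", "pz"):
                    g.create_dataset(name, data=rng.normal(size=n))

        # Any HDF5-aware tool can now read the file; e.g. re-open and inspect:
        with h5py.File("particles.h5part", "r") as f:
            print(list(f), f["Step#0/x"].shape)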

  17. Development of three-dimensional neoclassical transport simulation code with high performance Fortran on a vector-parallel computer

    International Nuclear Information System (INIS)

    Satake, Shinsuke; Okamoto, Masao; Nakajima, Noriyoshi; Takamaru, Hisanori

    2005-11-01

    A neoclassical transport simulation code (FORTEC-3D) applicable to three-dimensional configurations has been developed using High Performance Fortran (HPF). The adoption of parallelization techniques and of a hybrid simulation model in the δf Monte Carlo transport simulation, including non-local transport effects in three-dimensional configurations, makes it possible to simulate the dynamics of global, non-local transport phenomena with a self-consistent radial electric field within a reasonable computation time. In this paper, the development of the transport code using HPF is reported. Optimization techniques for achieving both high vectorization and parallelization efficiency, the adoption of a parallel random number generator, and benchmark results are shown. (author)

  18. LIAR: A COMPUTER PROGRAM FOR THE SIMULATION AND MODELING OF HIGH PERFORMANCE LINACS

    International Nuclear Information System (INIS)

    Adolphsen, Chris

    2003-01-01

    The computer program LIAR ("LInear Accelerator Research code") is a numerical simulation and tracking program for linear colliders. The LIAR project was started at SLAC in August 1995 in order to provide a computing and simulation tool that specifically addresses the needs of high energy linear colliders. LIAR is designed to be used for a variety of different linear accelerators. It has been applied to and checked against the existing Stanford Linear Collider (SLC) as well as the linacs of the proposed Next Linear Collider (NLC) and the proposed Linac Coherent Light Source (LCLS). The program includes wakefield effects, a 4D coupled beam description, specific optimization algorithms and other advanced features. We describe the most important concepts and highlights of the program. After having presented the LIAR program at the LINAC96 and the PAC97 conferences, we now introduce it to the European particle accelerator community.

  19. C-STARS Baltimore Simulation Center Military Trauma Training Program: Training for High Performance Trauma Teams

    Science.gov (United States)

    2013-09-19

    simulation room and intermittent access to conference and debriefing space. While the C-STARS program had priority for access to this space, it had to… [The remainder of this record is fragmentary text extracted from simulated patient scenarios, including vital signs (skin cool to touch, temperature 35.8 C), a FAST exam positive for splenic injury with 500 mL of blood evacuated, a radiography interpretation task, and a patient history (moderately obese, sedentary, frequent EtOH use, 35-year history of tobacco use).]

  20. Application of High Performance Computing for Simulations of N-Dodecane Jet Spray with Evaporation

    Science.gov (United States)

    2016-11-01

    [This record's abstract is fragmentary text extracted from a standard report form (distribution statement, reference list, subject terms). The recoverable content: "…models into future simulations of turbulent jet sprays and develop a predictive theory for comparison to measurements in the laboratory of turbulent diesel sprays." Reference fragment: Malbec L-M, Egúsquiza J, Bruneaux G, Meijer M. Characterization of a set of ECN spray A injectors: nozzle to…]

  1. Simulation-Driven Development and Optimization of a High-Performance Six-Dimensional Wrist Force/Torque Sensor

    Directory of Open Access Journals (Sweden)

    Qiaokang LIANG

    2010-05-01

    Full Text Available This paper describes the Simulation-Driven Development and Optimization (SDDO) of a high-performance six-dimensional force/torque sensor. Through the implementation of SDDO, the developed sensor simultaneously achieves high sensitivity, linearity, stiffness and repeatability, which is difficult for traditional force/torque sensors. The integrated approach provided by the software ANSYS was used to streamline and speed up the process chain and thereby deliver results significantly faster than traditional approaches. The calibration experiment shows impressive characteristics, so the developed force/torque sensor can be usefully applied in industry, and the design methods can also be used to develop industrial products.

  2. High Performance Computation of a Jet in Crossflow by Lattice Boltzmann Based Parallel Direct Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Jiang Lei

    2015-01-01

    Full Text Available Direct numerical simulation (DNS) of a round jet in crossflow based on the lattice Boltzmann method (LBM) is carried out on a multi-GPU cluster. The data-parallel SIMT (single instruction multiple thread) character of the GPU matches the parallelism of the LBM well, which leads to the high efficiency of the GPU-based LBM solver. With the present GPU setup (6 Nvidia Tesla K20M cards), the DNS simulation can be completed in several hours. A grid system of 1.5 × 10^8 nodes is adopted and the largest jet Reynolds number reaches 3000. The jet-to-free-stream velocity ratio is set to 3.3, and the jet is orthogonal to the mainstream flow direction. The validated code shows good agreement with experiments. Vortical structures of the CRVP, shear-layer vortices and horseshoe vortices are presented and analyzed based on velocity fields and vorticity distributions. Turbulent statistical quantities of the Reynolds stress are also displayed. Coherent structures are revealed in very fine resolution based on the second invariant of the velocity gradients.

  3. High Performance Electrical Modeling and Simulation Software Normal Environment Verification and Validation Plan, Version 1.0; TOPICAL

    International Nuclear Information System (INIS)

    WIX, STEVEN D.; BOGDAN, CAROLYN W.; MARCHIONDO JR., JULIO P.; DEVENEY, MICHAEL F.; NUNEZ, ALBERT V.

    2002-01-01

    The requirements in modeling and simulation are driven by two fundamental changes in the nuclear weapons landscape: (1) the Comprehensive Test Ban Treaty and (2) the Stockpile Life Extension Program, which extends weapon lifetimes well beyond their originally anticipated field lifetimes. The move from confidence based on nuclear testing to confidence based on predictive simulation forces a profound change in the performance asked of codes. The scope of this document is to improve confidence in computational results by demonstration and documentation of the predictive capability of electrical circuit codes and of the underlying conceptual, mathematical and numerical models as applied to a specific stockpile driver. This document describes the High Performance Electrical Modeling and Simulation software normal environment Verification and Validation Plan.

  4. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens

    International Nuclear Information System (INIS)

    Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D.; Volz, Kerstin

    2017-01-01

    Highlights: • We present STEMsalabim, a modern implementation of the multislice algorithm for simulation of STEM images. • Our package is highly parallelizable on high-performance computing clusters, combining shared and distributed memory architectures. • With STEMsalabim, computationally and memory expensive STEM image simulations can be carried out within reasonable time. - Abstract: We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.
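
    For orientation, the multislice algorithm at the core of such codes alternates a transmission through each thin specimen slice with Fresnel free-space propagation to the next slice. A hedged, heavily simplified numpy sketch follows (toy projected potential, arbitrary units, no frozen-lattice averaging or STEM probe scanning):

        import numpy as np

        n, dx = 256, 0.1            # transverse grid, pixel size (arbitrary units)
        wavelength, dz = 0.025, 2.0 # electron wavelength and slice thickness (assumed)
        n_slices = 50

        # Fresnel propagator in reciprocal space for one slice step.
        k = np.fft.fftfreq(n, d=dx)
        kx, ky = np.meshgrid(k, k)
        propagator = np.exp(-1j * np.pi * wavelength * dz * (kx**2 + ky**2))

        # Toy projected potential per slice -> phase grating (stand-in for atoms).
        x = (np.arange(n) - n / 2) * dx
        X, Y = np.meshgrid(x, x)
        sigma_vz = 0.3 * np.exp(-(X**2 + Y**2) / 2.0)
        transmission = np.exp(1j * sigma_vz)

        psi = np.ones((n, n), dtype=complex)                   # incident plane wave
        for _ in range(n_slices):
            psi = transmission * psi                           # transmit through slice
            psi = np.fft.ifft2(propagator * np.fft.fft2(psi))  # propagate by dz

        intensity = np.abs(psi) ** 2
        print("exit-wave intensity range:", intensity.min(), intensity.max())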

  5. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens

    Energy Technology Data Exchange (ETDEWEB)

    Oelerich, Jan Oliver, E-mail: jan.oliver.oelerich@physik.uni-marburg.de; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D.; Volz, Kerstin

    2017-06-15

    Highlights: • We present STEMsalabim, a modern implementation of the multislice algorithm for simulation of STEM images. • Our package is highly parallelizable on high-performance computing clusters, combining shared and distributed memory architectures. • With STEMsalabim, computationally and memory expensive STEM image simulations can be carried out within reasonable time. - Abstract: We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.

  6. GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers

    Directory of Open Access Journals (Sweden)

    Mark James Abraham

    2015-09-01

    Full Text Available GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules. It provides a rich set of calculation types, preparation and analysis tools. Several advanced techniques for free-energy calculations are supported. In version 5, it reaches new performance heights through several new and enhanced parallelization algorithms. These work on every level: SIMD registers inside cores, multithreading, heterogeneous CPU–GPU acceleration, state-of-the-art 3D domain decomposition, and ensemble-level parallelization through built-in replica exchange and the separate Copernicus framework. The latest best-in-class compressed trajectory storage format is supported.

  7. Performance of space charge simulations using High Performance Computing (HPC) cluster

    CERN Document Server

    Bartosik, Hannes; CERN. Geneva. ATS Department

    2017-01-01

    In 2016 a collaboration agreement between CERN and Istituto Nazionale di Fisica Nucleare (INFN) through its Centro Nazionale Analisi Fotogrammi (CNAF, Bologna) was signed [1], which foresaw the purchase and installation of a cluster of 20 nodes with 32 cores each, connected with InfiniBand, at CNAF for the use of CERN members to develop parallelized codes as well as conduct massive simulation campaigns with the already available parallelized tools. As outlined in [1], after the installation and the set up of the first 12 nodes, the green light to proceed with the procurement and installation of the next 8 nodes can be given only after successfully passing an acceptance test based on two specific benchmark runs. This condition is necessary to consider the first batch of the cluster operational and complying with the desired performance specifications. In this brief note, we report the results of the above mentioned acceptance test.

  8. Optical Characterization and Energy Simulation of Glazing for High-Performance Windows

    International Nuclear Information System (INIS)

    Jonsson, Andreas

    2010-01-01

    This thesis focuses on one important component of the energy system - the window. Windows are installed in buildings mainly to create visual contact with the surroundings and to let in daylight, and should also be heat and sound insulating. This thesis covers four important aspects of windows: antireflection coatings, switchable coatings, energy simulations and optical measurements. Energy simulations have been used to compare different windows and also to estimate the performance of smart or switchable windows, whose transmittance can be regulated. The results from this thesis show the potential of the emerging technology of smart windows, not only from a daylight and an energy perspective, but also for comfort and well-being. The importance of a well-functioning control system for such windows is pointed out. To fulfill all requirements of modern windows, they often have two or more panes. Each glass surface leads to reflection of light and therefore less daylight is transmitted. It is therefore of interest to find ways to increase the transmittance. In this thesis antireflection coatings, similar to those found on eye-glasses and LCD screens, have been investigated. For large area applications such as windows, it is necessary to use techniques which can easily be adapted to large scale manufacturing at low cost. Such a technique is dip-coating in a sol-gel of porous silica. Antireflection coatings have been deposited on glass and plastic materials to study both visual and energy performance and it has been shown that antireflection coatings increase the transmittance of windows without negatively affecting the thermal insulation and the energy efficiency. Optical measurements are important for quantifying product properties for comparisons and evaluations. It is important that new measurement routines are simple and applicable to standard commercial instruments. Different systematic error sources for optical measurements of patterned light diffusing samples using

  9. A high-performance model for shallow-water simulations in distributed and heterogeneous architectures

    Science.gov (United States)

    Conde, Daniel; Canelas, Ricardo B.; Ferreira, Rui M. L.

    2017-04-01

    One of the most common challenges in hydrodynamic modelling is the trade off one must make between highly resolved simulations and the time required for their computation. In the particular case of urban floods, modelers are often forced to simplify the complex geometries of the problem, or to implicitly include some of its hydrodynamic effects, due to the typically very large spatial scales involved and limited computational resources. At CEris - Instituto Superior Técnico, Universidade de Lisboa - the STAV-2D shallow-water model, particularly suited for strong transient flows in complex and dynamic geometries, has been under development over the past few years (Canelas et al., 2013 & Conde et al., 2013). The model is based on an explicit, first-order 2DH finite-volume discretization scheme for unstructured triangular meshes, in which a flux-splitting technique is paired with a reviewed Roe-Riemann solver, yielding a model applicable to discontinuous flows over time-evolving geometries. STAV-2D features solid transport in both Eulerian and Lagrangian forms, the former aiming at describing the transport of fine natural sediments and the latter at large individual debris. The model has been validated with theoretical solutions and laboratory experiments (Canelas et al., 2013 & Conde et al., 2015). This work presents our most recent effort in STAV-2D: the re-design of the code in a modern Object-Oriented parallel framework for heterogeneous computations in CPUs and GPUs. The programming language of choice for this re-design was C++, due to its wide support of established and emerging parallel programming interfaces. The current implementation of STAV-2D provides two different levels of parallel granularity: inter-node and intra-node. Inter-node parallelism is achieved by distributing a simulation across a set of worker nodes, with communication between nodes being explicitly managed through MPI. At this level, the main difficulty is associated with the
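
    The inter-node parallelism described above, with MPI-managed communication between subdomains, can be illustrated by a hedged sketch in which each rank owns a strip of cells and exchanges ghost values with its neighbours before each flux evaluation. STAV-2D itself operates on unstructured triangular meshes; the 1D strip and all sizes below are simplifying assumptions.

        // Hedged sketch of inter-node halo exchange, simplified to a 1D strip
        // of cells per rank; the real code partitions an unstructured mesh.
        #include <mpi.h>
        #include <vector>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            const int ncells = 1000;                 // interior cells per rank (assumed)
            std::vector<double> h(ncells + 2, 1.0);  // water depth with 2 ghost cells

            int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
            int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

            for (int step = 0; step < 100; ++step) {
                // Exchange ghost cells with both neighbours before the update.
                MPI_Sendrecv(&h[1], 1, MPI_DOUBLE, left, 0,
                             &h[ncells + 1], 1, MPI_DOUBLE, right, 0,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Sendrecv(&h[ncells], 1, MPI_DOUBLE, right, 1,
                             &h[0], 1, MPI_DOUBLE, left, 1,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                // ... evaluate Riemann fluxes on faces and update h[1..ncells] ...
            }
            MPI_Finalize();
            return 0;
        }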

  10. Numerical simulation in plasma physics

    International Nuclear Information System (INIS)

    Samarskii, A.A.

    1980-01-01

    Plasma physics is not only a field for development of physical theories and mathematical models but also an object of application of the computational experiment comprising analytical and numerical methods adapted for computers. The author considers only MHD plasma physics problems. Examples treated are dissipative structures in plasma; MHD model of solar dynamo; supernova explosion simulation; and plasma compression by a liner. (Auth.)

  11. Applying GIS and high performance agent-based simulation for managing an Old World Screwworm fly invasion of Australia.

    Science.gov (United States)

    Welch, M C; Kwan, P W; Sajeev, A S M

    2014-10-01

    Agent-based modelling has proven to be a promising approach for developing rich simulations for complex phenomena that provide decision support functions across a broad range of areas including biological, social and agricultural sciences. This paper demonstrates how high performance computing technologies, namely General-Purpose Computing on Graphics Processing Units (GPGPU), and commercial Geographic Information Systems (GIS) can be applied to develop a national scale, agent-based simulation of an incursion of Old World Screwworm fly (OWS fly) into the Australian mainland. The development of this simulation model leverages the combination of massively data-parallel processing capabilities supported by NVidia's Compute Unified Device Architecture (CUDA) and the advanced spatial visualisation capabilities of GIS. These technologies have enabled the implementation of an individual-based, stochastic lifecycle and dispersal algorithm for the OWS fly invasion. The simulation model draws upon a wide range of biological data as input to stochastically determine the reproduction and survival of the OWS fly through the different stages of its lifecycle and the dispersal of gravid females. Through this model, a highly efficient computational platform has been developed for studying the effectiveness of control and mitigation strategies and their associated economic impact on livestock industries. Copyright © 2014 International Atomic Energy Agency. Published by Elsevier B.V. All rights reserved.
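
    The individual-based, stochastic lifecycle idea can be sketched as follows (plain C++ on the CPU; the paper's implementation runs on the GPU via CUDA). All survival and stage-advance probabilities below are toy values standing in for the biological input data the record mentions.

        // Hedged sketch of a stochastic, individual-based lifecycle step:
        // each agent survives and advances stage with given probabilities.
        #include <random>
        #include <vector>
        #include <cstdio>

        enum Stage { Egg, Larva, Pupa, Adult };
        struct Fly { Stage stage = Egg; bool alive = true; };

        int main() {
            std::mt19937 rng(42);
            std::uniform_real_distribution<double> u(0.0, 1.0);
            // Assumed daily probabilities per stage (toy values, not field data).
            const double survive[4] = {0.90, 0.85, 0.95, 0.97};
            const double advance[4] = {0.30, 0.20, 0.25, 0.00};

            std::vector<Fly> pop(100000);
            for (int day = 0; day < 60; ++day)
                for (auto& f : pop) {
                    if (!f.alive) continue;
                    if (u(rng) > survive[f.stage]) { f.alive = false; continue; }
                    if (f.stage != Adult && u(rng) < advance[f.stage])
                        f.stage = static_cast<Stage>(f.stage + 1);
                }

            size_t adults = 0;
            for (const auto& f : pop) if (f.alive && f.stage == Adult) ++adults;
            std::printf("surviving adults after 60 days: %zu\n", adults);
            return 0;
        }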

  12. Design of the HELICS High-Performance Transmission-Distribution-Communication-Market Co-Simulation Framework: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Palmintier, Bryan S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnamurthy, Dheepak [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Top, Philip [Lawrence Livermore National Laboratories; Smith, Steve [Lawrence Livermore National Laboratories; Daily, Jeff [Pacific Northwest National Laboratory; Fuller, Jason [Pacific Northwest National Laboratory

    2017-09-12

    This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.

  13. Physics detector simulation facility system software description

    International Nuclear Information System (INIS)

    Allen, J.; Chang, C.; Estep, P.; Huang, J.; Liu, J.; Marquez, M.; Mestad, S.; Pan, J.; Traversat, B.

    1991-12-01

    Large and costly detectors will be constructed during the next few years to study the interactions produced by the SSC. Efficient, cost-effective designs for these detectors will require careful thought and planning. Because it is not possible to test fully a proposed design in a scaled-down version, the adequacy of a proposed design will be determined by a detailed computer model of the detectors. Physics and detector simulations will be performed on the computer model using the high-powered computing system at the Physics Detector Simulation Facility (PDSF). The SSCL has particular computing requirements for high-energy physics (HEP) Monte Carlo calculations for the simulation of SSCL physics and detectors. The numerical calculations to be performed in each simulation are lengthy and detailed; they could require many months per run on a VAX 11/780 computer and may produce several gigabytes of data per run. Consequently, a distributed computing environment of several networked high-speed computing engines is envisioned to meet these needs. These networked computers will form the basis of a centralized facility for SSCL physics and detector simulation work. Our computer planning groups have determined that the most efficient, cost-effective way to provide these high-performance computing resources at this time is with RISC-based UNIX workstations. The modeling and simulation application software that will run on the computing system is usually written by physicists in the FORTRAN language and may need thousands of hours of supercomputing time. The system software is the "glue" which integrates the distributed workstations and allows them to be managed as a single entity. This report will address the computing strategy for the SSC.

  14. Physical modeling and high-performance GPU computing for characterization, interception, and disruption of hazardous near-Earth objects

    Science.gov (United States)

    Kaplinger, Brian Douglas

    For the past few decades, both the scientific community and the general public have been becoming more aware that the Earth lives in a shooting gallery of small objects. We classify all of these asteroids and comets, known or unknown, that cross Earth's orbit as near-Earth objects (NEOs). A look at our geologic history tells us that NEOs have collided with Earth in the past, and we expect that they will continue to do so. With thousands of known NEOs crossing the orbit of Earth, there has been significant scientific interest in developing the capability to deflect an NEO from an impacting trajectory. This thesis applies the ideas of Smoothed Particle Hydrodynamics (SPH) theory to the NEO disruption problem. A simulation package was designed that allows efficacy simulation to be integrated into the mission planning and design process. This is done by applying ideas in high-performance computing (HPC) on the computer graphics processing unit (GPU). Rather than prove a concept through large standalone simulations on a supercomputer, a highly parallel structure allows for flexible, target-dependent questions to be resolved. Built around nonclassified data and analysis, this computer package will allow academic institutions to better tackle the issue of NEO mitigation effectiveness.
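
    As a rough illustration of the SPH machinery underlying such simulations, the density summation with a cubic spline kernel can be written as below. This is a generic textbook sketch (serial, all-pairs O(N^2)), not the thesis' GPU implementation, which would use neighbour lists and CUDA kernels.

        // Generic SPH density summation with the 3D cubic spline kernel.
        #include <vector>
        #include <cmath>
        #include <cstdio>

        struct Vec3 { double x, y, z; };

        // Cubic spline smoothing kernel W(r, h), normalized for 3D.
        double W(double r, double h) {
            const double q = r / h;
            const double sigma = 1.0 / (M_PI * h * h * h);
            if (q < 1.0) return sigma * (1.0 - 1.5 * q * q + 0.75 * q * q * q);
            if (q < 2.0) { const double t = 2.0 - q; return sigma * 0.25 * t * t * t; }
            return 0.0;
        }

        // O(N^2) all-pairs density; a real code would use a neighbour grid/tree.
        void density(const std::vector<Vec3>& pos, const std::vector<double>& mass,
                     double h, std::vector<double>& rho) {
            const size_t n = pos.size();
            rho.assign(n, 0.0);
            for (size_t i = 0; i < n; ++i)
                for (size_t j = 0; j < n; ++j) {
                    const double dx = pos[i].x - pos[j].x;
                    const double dy = pos[i].y - pos[j].y;
                    const double dz = pos[i].z - pos[j].z;
                    rho[i] += mass[j] * W(std::sqrt(dx*dx + dy*dy + dz*dz), h);
                }
        }

        int main() {
            std::vector<Vec3> pos = {{0,0,0}, {0.5,0,0}, {0,0.5,0}, {0,0,0.5}};
            std::vector<double> mass(pos.size(), 1.0), rho;
            density(pos, mass, 1.0, rho);
            std::printf("rho[0] = %f\n", rho[0]);
            return 0;
        }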

  15. High performance multi-scale and multi-physics computation of nuclear power plant subjected to strong earthquake. An Overview

    International Nuclear Information System (INIS)

    Yoshimura, Shinobu; Kawai, Hiroshi; Sugimoto, Shin'ichiro; Hori, Muneo; Nakajima, Norihiro; Kobayashi, Kei

    2010-01-01

    Recently, the importance of nuclear energy has been recognized anew due to serious concerns about global warming and energy security. In parallel, verifying the quake-proof capability of ageing nuclear power plants (NPPs) subjected to strong earthquakes has become a critical issue. Since 2007, we have been developing a multi-scale and multi-physics numerical simulator for quantitatively predicting the actual quake-proof capability of ageing NPPs, under operation or just after plant trip, when subjected to a strong earthquake. In this paper, we describe an overview of the simulator with some preliminary results. (author)

  16. IMPETUS - Interactive MultiPhysics Environment for Unified Simulations.

    Science.gov (United States)

    Ha, Vi Q; Lykotrafitis, George

    2016-12-08

    We introduce IMPETUS - Interactive MultiPhysics Environment for Unified Simulations, an object-oriented, easy-to-use, high-performance C++ program for three-dimensional simulations of complex physical systems that can benefit a large variety of research areas, especially in cell mechanics. The program implements cross-communication between locally interacting particles and continuum models residing in the same physical space while a network facilitates long-range particle interactions. Message Passing Interface is used for inter-processor communication for all simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Simulated experiments in modern physics

    International Nuclear Information System (INIS)

    Tirnini, Mahmud Hasan

    1981-01-01

    In this thesis a number of the basic experiments of atomic and nuclear physics are simulated on a microcomputer interfaced to a chart recorder and CRT. These simulations give the student the impression of actually performing the experiments and collecting data to be analysed. The thesis covers the material needed to set up such experiments in the modern physics laboratory. (author)

  18. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  19. Cellulose Nanocrystal Templated Graphene Nanoscrolls for High Performance Supercapacitors and Hydrogen Storage: An Experimental and Molecular Simulation Study.

    Science.gov (United States)

    Dhar, Prodyut; Gaur, Surendra Singh; Kumar, Amit; Katiyar, Vimal

    2018-03-01

    Graphene nanoscrolls (GNS), due to their remarkably interesting properties, have attracted significant interest for applications in various engineering sectors. However, uncontrolled morphologies, poor yield and the low quality of GNS produced through traditional routes are the major associated challenges. We demonstrate a sustainable approach of utilizing bio-derived cellulose nanocrystals (CNCs) as templates for the fabrication of GNS with tunable morphological dimensions ranging from micron to nanoscale (controlled length ~1 μm), along with encapsulation of catalytically active metallic species in the scroll interlayers. The surface-modified magnetic CNCs act as structure-directing agents which provide enough momentum to initiate the self-scrolling of graphene through van der Waals forces and π-π interactions, the mechanism of which is demonstrated through experimental and molecular simulation studies. The proposed approach to GNS fabrication provides the flexibility to tune the physico-chemical properties of GNS by simply varying the interlayer spacing, scrolling density and fraction of encapsulated metallic nanoparticles. The hybrid GNS with confined palladium or platinum nanoparticles (at low loading, ~1 wt.%) show enhanced hydrogen storage capacity (~0.2 wt.% at ~20 bar and ~273 K) and excellent supercapacitance behavior (~223-357 F/g) over prolonged cycles (retention ~93.5-96.4% at ~10000 cycles). The current strategy of utilizing bio-based templates can be further extended to incorporate complex architectures or nanomaterials in the GNS core or interlayers, which will potentially broaden its applications in the fabrication of high-performance devices.

  20. High-performance simulation-based algorithms for an alpine ski racer’s trajectory optimization in heterogeneous computer systems

    Directory of Open Access Journals (Sweden)

    Dębski Roman

    2014-09-01

    Full Text Available Effective, simulation-based trajectory optimization algorithms adapted to heterogeneous computers are studied with reference to a problem taken from alpine ski racing (the presented solution is probably the most general one published so far). The key idea behind these algorithms is to use a grid-based discretization scheme to transform the continuous optimization problem into a search problem over a specially constructed finite graph, and then to apply dynamic programming to find an approximation of the global solution. In the analyzed example this is the minimum-time ski line, represented as a piecewise-linear function (a method of elimination of unfeasible solutions is proposed). Serial and parallel versions of the basic optimization algorithm are presented in detail (pseudo-code, time and memory complexity). Possible extensions of the basic algorithm are also described. The implementation of these algorithms is based on OpenCL. The included experimental results show that contemporary heterogeneous computers can be treated as μ-HPC platforms: they offer high performance (the best speedup was equal to 128) while remaining energy and cost efficient (which is crucial in embedded systems, e.g., trajectory planners of autonomous robots). The presented algorithms can be applied to many trajectory optimization problems, including those having a black-box represented performance measure.
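
    The core idea (dynamic programming over a layered graph obtained by discretizing the course) can be sketched as follows, where segment_time() is a hypothetical stand-in for the simulation-based cost of one piecewise-linear segment and the grid dimensions are assumed values.

        // Hedged sketch: forward DP over a layered grid graph; best[l][j] is
        // the minimum time to reach candidate point j in layer l.
        #include <vector>
        #include <limits>
        #include <cstdio>

        double segment_time(int layer, int from, int to) {
            (void)layer;  // a real code would simulate skier dynamics here
            return 1.0 + 0.01 * (from - to) * (from - to);
        }

        int main() {
            const int layers = 50, nodes = 21;  // grid resolution (assumed)
            const double INF = std::numeric_limits<double>::infinity();
            std::vector<std::vector<double>> best(layers,
                                                  std::vector<double>(nodes, INF));
            for (int j = 0; j < nodes; ++j) best[0][j] = 0.0;

            for (int l = 1; l < layers; ++l)
                for (int j = 0; j < nodes; ++j)
                    for (int i = 0; i < nodes; ++i) {
                        const double t = best[l - 1][i] + segment_time(l, i, j);
                        if (t < best[l][j]) best[l][j] = t;
                    }

            double tmin = INF;
            for (int j = 0; j < nodes; ++j)
                if (best[layers - 1][j] < tmin) tmin = best[layers - 1][j];
            std::printf("approximate minimum time: %.3f\n", tmin);
            return 0;
        }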

  1. A Mesoscopic Simulation for the Early-Age Shrinkage Cracking Process of High Performance Concrete in Bridge Engineering

    Directory of Open Access Journals (Sweden)

    Guodong Li

    2017-01-01

    Full Text Available On a mesoscopic level, high performance concrete (HPC) was assumed to be a heterogeneous composite material consisting of aggregates, mortar, and pores. A concrete mesoscopic structure model was established based on CT image reconstruction. By combining this model with continuum mechanics, damage mechanics, and fracture mechanics, a relatively complete system for concrete mesoscopic mechanics analysis was established to simulate the process of early-age shrinkage cracking in HPC. This process was based on the dispersed crack model. The results indicated that the interface between the aggregate and mortar was the point where shrinkage cracks in HPC initiate. The locations of early-age shrinkage cracks in HPC were associated with the spacing and the size of the aggregate particles. However, the shrinkage deformation of the mortar was related to the extent of concrete cracking and was independent of the crack position. Whereas lower water-to-cement ratios can improve the early strength of concrete, they cannot control early-age shrinkage cracks in HPC.

  2. Development and testing of high-performance fuel pin simulators for boiling experiments in liquid metal flow

    International Nuclear Information System (INIS)

    Casal, V.

    1976-01-01

    Local and integral boiling events in the core of sodium-cooled fast breeder reactors involve phenomena that are still not fully understood. Therefore, out-of-pile boiling experiments have been performed at GfK using electrically heated dummies of fuel element bundles. The success of these tests, and the amount of information derived from them, depends exclusively on how well the electrically heated rods simulate the essential physical properties of the fuel pins. The report deals with the development and testing of heater rods for sodium boiling experiments in bundles including up to 91 heated pins.

  3. Effects of vacuum thermal cycling on mechanical and physical properties of high performance carbon/bismaleimide composite

    International Nuclear Information System (INIS)

    Yu Qi; Chen Ping; Gao Yu; Mu Jujie; Chen Yongwu; Lu Chun; Liu Dong

    2011-01-01

    Highlights: → The level of cross-links was improved to a certain extent. → The thermal stability was firstly improved and then decreased. → The transverse and longitudinal CTE were both determined by the degree of interfacial debonding. → The mass loss ratio increases firstly and then reaches a plateau value. → The surface morphology was altered and the surface roughness increased firstly and then decreased. → The transverse tensile strength was reduced. → The flexural strength increased firstly and then decreased to a plateau value. → The ILSS increased firstly and then decreased to a plateau value. - Abstract: The aim of this article was to investigate the effects of vacuum thermal cycling on the mechanical and physical properties of high performance carbon/bismaleimide (BMI) composites used in aerospace. The changes in dynamic mechanical properties and thermal stability were characterized by dynamic mechanical analysis (DMA) and thermogravimetric analysis (TGA), respectively. The changes in the linear coefficient of thermal expansion (CTE) were measured in directions perpendicular and parallel to the fiber direction, respectively. The outgassing behavior of the composites was examined. The evolution of surface morphology and surface roughness was observed by atomic force microscopy (AFM). Changes in mechanical properties including transverse tensile strength, flexural strength and interlaminar shear strength (ILSS) were measured. The results indicated that vacuum thermal cycling could improve the crosslinking degree and the thermal stability of the resin matrix to a certain extent, and induce matrix outgassing and thermal stress, thereby leading to mass loss and interfacial debonding of the composite. The degradation in transverse tensile strength was caused by the joint effects of matrix outgassing and interfacial debonding, while the changes in flexural strength and ILSS were affected by a competing effect between the crosslinking degree

  4. A predictive analytic model for high-performance tunneling field-effect transistors approaching non-equilibrium Green's function simulations

    International Nuclear Information System (INIS)

    Salazar, Ramon B.; Appenzeller, Joerg; Ilatikhameneh, Hesameddin; Rahman, Rajib; Klimeck, Gerhard

    2015-01-01

    A new compact modeling approach is presented which describes the full current-voltage (I-V) characteristic of high-performance (aggressively scaled-down) tunneling field-effect-transistors (TFETs) based on homojunction direct-bandgap semiconductors. The model is based on an analytic description of two key features, which capture the main physical phenomena related to TFETs: (1) the potential profile from source to channel and (2) the elliptic curvature of the complex bands in the bandgap region. It is proposed to use 1D Poisson's equations in the source and the channel to describe the potential profile in homojunction TFETs. This makes it possible to quantify the impact of source/drain doping on device performance, an aspect usually ignored in TFET modeling but highly relevant in ultra-scaled devices. The compact model is validated by comparison with state-of-the-art quantum transport simulations using a 3D full band atomistic approach based on non-equilibrium Green's functions. It is shown that the model reproduces with good accuracy the data obtained from the simulations in all regions of operation: the on/off states and the n/p branches of conduction. This approach allows calculation of energy-dependent band-to-band tunneling currents in TFETs, a feature that yields deep insight into the underlying device physics. The simplicity and accuracy of the approach provide a powerful tool to explore quantitatively how a wide variety of parameters (material-, size-, and/or geometry-dependent) impact TFET performance under any bias conditions. The proposed model thus presents a practical complement to computationally expensive simulations such as the 3D NEGF approach
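
    As a generic, textbook-level illustration of the two ingredients named above (not the authors' exact closed-form expressions): in the depletion approximation the 1D Poisson equation in the source region with doping N_src yields a parabolic potential profile, and the band-to-band tunneling probability follows from a WKB integral over the imaginary dispersion kappa(x) inside the gap, whose energy dependence is what the elliptic complex-band approximation specifies:

        \frac{d^{2}\psi}{dx^{2}} = \frac{q\,N_{\mathrm{src}}}{\varepsilon_{s}},
        \qquad
        T_{\mathrm{BTBT}} \approx \exp\!\left( -2 \int_{x_{1}}^{x_{2}} \kappa(x)\, dx \right),

    where x_1 and x_2 bound the tunneling window.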

  5. Hadron therapy physics and simulations

    CERN Document Server

    d’Ávila Nunes, Marcos

    2014-01-01

    This brief provides an in-depth overview of the physics of hadron therapy, ranging from the history to the latest contributions to the subject. It covers the mechanisms of protons and carbon ions at the molecular level (DNA breaks and proteins 53BP1 and RPA), the physics and mathematics of accelerators (Cyclotron and Synchrotron), microdosimetry measurements (with new results so far achieved), and Monte Carlo simulations in hadron therapy using FLUKA (CERN) and MCHIT (FIAS) software. The text also includes information about proton therapy centers and carbon ion centers (PTCOG), as well as a comparison and discussion of both techniques in treatment planning and radiation monitoring. This brief is suitable for newcomers to medical physics as well as seasoned specialists in radiation oncology.

  6. Simulating physics with cellular automata

    Energy Technology Data Exchange (ETDEWEB)

    Vichniac, G Y

    1984-01-01

    Cellular automata are dynamical systems where space, time, and variables are discrete. They are shown on two-dimensional examples to be capable of non-numerical simulations of physics. They are useful for faithful parallel processing of lattice models. At another level, they exhibit behaviours and illustrate concepts that are unmistakably physical, such as non-ergodicity and order parameters, frustration, relaxation to chaos through period doublings, a conspicuous arrow of time in reversible microscopic dynamics, causality and light-cone, and non-separability. In general, they constitute exactly computable models for complex phenomena and large-scale correlations that result from very simple short-range interactions. The author studies their space, time, and intrinsic symmetries and the corresponding conservation laws, with an emphasis on the conservation of information obeyed by reversible cellular automata. 60 references.
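
    A minimal concrete example in the spirit of the article is the two-dimensional "parity" rule, in which every cell simultaneously becomes the sum modulo 2 of its four von Neumann neighbours; space, time, and state are all discrete, and the update is exactly computable. The grid size, seed, and step count below are arbitrary choices.

        // Two-dimensional "parity" cellular automaton, updated synchronously
        // with periodic boundary conditions.
        #include <vector>
        #include <cstdio>

        int main() {
            const int n = 32;
            std::vector<std::vector<int>> grid(n, std::vector<int>(n, 0));
            std::vector<std::vector<int>> next = grid;
            grid[n / 2][n / 2] = 1;  // single seed cell

            for (int step = 0; step < 16; ++step) {
                for (int i = 0; i < n; ++i)
                    for (int j = 0; j < n; ++j) {
                        const int up    = grid[(i + n - 1) % n][j];
                        const int down  = grid[(i + 1) % n][j];
                        const int left  = grid[i][(j + n - 1) % n];
                        const int right = grid[i][(j + 1) % n];
                        next[i][j] = (up + down + left + right) % 2;
                    }
                grid.swap(next);
            }
            for (int i = 0; i < n; ++i) {
                for (int j = 0; j < n; ++j) std::putchar(grid[i][j] ? '#' : '.');
                std::putchar('\n');
            }
            return 0;
        }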

  7. Physical simulations using centrifuge techniques

    International Nuclear Information System (INIS)

    Sutherland, H.J.

    1981-01-01

    Centrifuge techniques offer a technique for doing physical simulations of the long-term mechanical response of deep ocean sediment to the emplacement of waste canisters and to the temperature gradients generated by them. Preliminary investigations of the scaling laws for the pertinent phenomena indicate that the time scaling will be consistent among them and equal to the scaling factor squared. This result implies that this technique will permit accelerated life testing of proposed configurations; i.e., long-term studies may be done in relatively short times. Presently, existing centrifuges are being modified to permit scale model testing. This testing will start next year
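
    The quoted time scaling can be stated compactly. For a 1/N-scale model spun to an acceleration of N g, times for the modelled processes scale as

        t_{\mathrm{model}} = \frac{t_{\mathrm{prototype}}}{N^{2}},

    so that, for example, at N = 100 one day in the centrifuge represents roughly 27 years at full scale. (The exponent 2 is the value stated in the abstract; in general it depends on the phenomenon being modelled.)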

  8. High performance conductometry

    International Nuclear Information System (INIS)

    Saha, B.

    2000-01-01

    Inexpensive but high performance systems have emerged progressively for basic and applied measurements in physical and analytical chemistry on one hand, and for on-line monitoring and leak detection in plants and facilities on the other. Salient features of the developments will be presented with specific examples

  9. High performance in software development

    CERN Multimedia

    CERN. Geneva; Haapio, Petri; Liukkonen, Juha-Matti

    2015-01-01

    What are the ingredients of high-performing software? Software development, especially for large high-performance systems, is one of the most complex tasks mankind has ever attempted. Technological change leads to huge opportunities but challenges our old ways of working. Processing large data sets, possibly in real time or with other tight computational constraints, requires an efficient solution architecture. Efficiency requirements span from the distributed storage and large-scale organization of computation and data down to the lowest level of processor and data bus behavior. Integrating performance behavior over these levels is especially important when the computation is resource-bounded, as it is in numerics: physical simulation, machine learning, estimation of statistical models, etc. For example, memory locality and utilization of vector processing are essential for harnessing the computing power of modern processor architectures due to the deep memory hierarchies of modern general-purpose computers. As a r...

  10. Simulation of the Physics of Flight

    Science.gov (United States)

    Lane, W. Brian

    2013-01-01

    Computer simulations continue to prove to be a valuable tool in physics education. Based on the needs of an Aviation Physics course, we developed the PHYSics of FLIght Simulator (PhysFliS), which numerically solves Newton's second law for an airplane in flight based on standard aerodynamics relationships. The simulation can be used to pique…

  11. A High Performance Computing Framework for Physics-based Modeling and Simulation of Military Ground Vehicles

    Science.gov (United States)

    2011-03-25

    The co-processing idea is the enabler of the heterogeneous computing concept advertised recently as the paradigm capable of delivering exascale ... (Petascale to Exascale: Extending Intel's HPC Commitment: http://download.intel.com/pressroom/archive/reference/ISC_2010_Skaugen_keynote.pdf)

  12. Arx: a toolset for the efficient simulation and direct synthesis of high-performance signal processing algorithms

    NARCIS (Netherlands)

    Hofstra, K.L.; Gerez, Sabih H.

    2007-01-01

    This paper addresses the efficient implementation of high-performance signal-processing algorithms. In early stages of such designs many computation-intensive simulations may be necessary. This calls for hardware description formalisms targeted for efficient simulation (such as the programming

  13. Molecular simulation workflows as parallel algorithms: the execution engine of Copernicus, a distributed high-performance computing platform.

    Science.gov (United States)

    Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik

    2015-06-09

    Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state the dependencies of each constituent part, algorithms only need to be described at a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus.
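
    The dependency-driven execution model described above can be illustrated with a generic sketch (not the Copernicus API): tasks declare their inputs, and a task becomes runnable as soon as all of its dependencies complete, so independent simulations can run concurrently without explicit orchestration. The toy workflow below, with many simulations feeding one analysis, is an assumed example.

        // Generic dataflow execution by dependency counting (Kahn-style
        // topological order); "running" would dispatch to a worker pool.
        #include <vector>
        #include <queue>
        #include <cstdio>

        struct Task { const char* name; std::vector<int> deps; };

        int main() {
            // Toy free-energy-style workflow: many simulations, one analysis.
            std::vector<Task> tasks = {
                {"prepare", {}},          // 0
                {"sim_a",   {0}},         // 1
                {"sim_b",   {0}},         // 2
                {"sim_c",   {0}},         // 3
                {"analyse", {1, 2, 3}},   // 4
            };
            std::vector<int> pending(tasks.size());
            std::vector<std::vector<int>> dependents(tasks.size());
            std::queue<int> ready;
            for (size_t i = 0; i < tasks.size(); ++i) {
                pending[i] = (int)tasks[i].deps.size();
                for (int d : tasks[i].deps) dependents[d].push_back((int)i);
                if (pending[i] == 0) ready.push((int)i);
            }
            while (!ready.empty()) {
                int t = ready.front(); ready.pop();
                std::printf("running %s\n", tasks[t].name);
                for (int d : dependents[t])        // release dependents whose
                    if (--pending[d] == 0)         // inputs are now complete
                        ready.push(d);
            }
            return 0;
        }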

  14. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    Science.gov (United States)

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CPs), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all CPs were met with the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million, marginally higher but approximately equal to that of the NIMS solution. The results highlight the utility for decision making in large-scale watershed simulation-optimization formulations.

  15. High performance direct gravitational N-body simulations on graphics processing units II: An implementation in CUDA

    NARCIS (Netherlands)

    Belleman, R.G.; Bédorf, J.; Portegies Zwart, S.F.

    2008-01-01

    We present the results of gravitational direct N-body simulations using the graphics processing unit (GPU) on a commercial NVIDIA GeForce 8800GTX designed for gaming computers. The force evaluation of the N-body problem is implemented in "Compute Unified Device Architecture" (CUDA) using the GPU to

  16. Refficientlib: an efficient load-rebalanced adaptive mesh refinement algorithm for high-performance computational physics meshes

    OpenAIRE

    Baiges Aznar, Joan; Bayona Roa, Camilo Andrés

    2017-01-01

    In this paper we present a novel algorithm for adaptive mesh refinement in computational physics meshes in a distributed memory parallel setting. The proposed method is developed for nodally based parallel domain partitions where the nodes of the mesh belong to a single processor, whereas the elements can belong to multiple processors. Some of the main features of the algorithm presented in this paper a...

  17. High performance shallow water kernels for parallel overland flow simulations based on FullSWOF2D

    KAUST Repository

    Wittmann, Roland

    2017-01-25

    We describe code optimization and parallelization procedures applied to the sequential overland flow solver FullSWOF2D. Major difficulties when simulating overland flows include dealing with high-resolution datasets of large-scale areas, which cannot be computed on a single node either due to the limited amount of memory or due to the large number of (time step) iterations resulting from the CFL condition. We address these issues in terms of two major contributions. First, we demonstrate a generic step-by-step transformation of the second order finite volume scheme in FullSWOF2D towards MPI parallelization. Second, the computational kernels are optimized by the use of templates and a portable vectorization approach. We discuss the load imbalance of the flux computation due to dry and wet cells and propose a solution using an efficient cell counting approach. Finally, scalability results are shown for different test scenarios along with a flood simulation benchmark using the Shaheen II supercomputer.
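
    The wet/dry load-imbalance issue can be made concrete with a hedged sketch: each rank counts its wet cells (only these incur the full flux computation) and a global reduction yields an imbalance measure that a repartitioning step could act on. The threshold, sizes, and wetting pattern below are illustrative assumptions, not FullSWOF2D's actual scheme.

        // Hedged sketch: per-rank wet-cell counting and a global imbalance
        // measure (slowest rank's load relative to the mean; 1.0 is perfect).
        #include <mpi.h>
        #include <vector>
        #include <cstdio>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            std::vector<double> depth(100000, 0.0);    // local water depths
            for (size_t i = 0; i < depth.size(); ++i)  // toy wetting pattern
                if ((i + rank) % 3 == 0) depth[i] = 0.1;

            long wet = 0;
            const double eps = 1e-8;                   // dry threshold (assumed)
            for (double h : depth) if (h > eps) ++wet;

            long total = 0, maxwet = 0;
            MPI_Allreduce(&wet, &total,  1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);
            MPI_Allreduce(&wet, &maxwet, 1, MPI_LONG, MPI_MAX, MPI_COMM_WORLD);

            if (rank == 0)
                std::printf("imbalance: %.2f\n",
                            (double)maxwet * size / (double)total);
            MPI_Finalize();
            return 0;
        }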

  18. High performance pseudo-analytical simulation of multi-object adaptive optics over multi-GPU systems

    KAUST Repository

    Abdelfattah, Ahmad; Gendron, Éric; Gratadour, Damien; Keyes, David E.; Ltaief, Hatem; Sevin, Arnaud; Vidal, Fabrice

    2014-01-01

    Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique dedicated to the special case of wide-field multi-object spectrographs (MOS). It applies dedicated wavefront corrections to numerous independent tiny patches spread over a large field of view (FOV). The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. The output of this study helps the design of a new instrument called MOSAIC, a multi-object spectrograph proposed for the European Extremely Large Telescope (E-ELT). We have developed a novel hybrid pseudo-analytical simulation scheme that allows us to accurately simulate the tomographic problem in detail. The main challenge resides in the computation of the tomographic reconstructor, which involves pseudo-inversion of a large dense symmetric matrix. The pseudo-inverse is computed using an eigenvalue decomposition, based on the divide and conquer algorithm, on multicore systems with multi-GPUs. Thanks to a new symmetric matrix-vector product (SYMV) multi-GPU kernel, our overall implementation scores significant speedups over standard numerical libraries on multicore, like Intel MKL, and up to 60% speedups over the standard MAGMA implementation on 8 Kepler K20c GPUs. At 40,000 unknowns, this appears to be the largest-scale tomographic AO matrix solver submitted to computation to date, to our knowledge, and it opens new research directions for extreme-scale AO simulations. © 2014 Springer International Publishing Switzerland.

  19. Multi-physics corrosion modeling for sustainability assessment of steel reinforced high performance fiber reinforced cementitious composites

    DEFF Research Database (Denmark)

    Lepech, M.; Michel, Alexander; Geiker, Mette

    2016-01-01

    Using a newly developed multi-physics transport, corrosion, and cracking model, which treats these phenomena as coupled physiochemical processes, the role of HPFRCC crack control and formation in regulating steel reinforcement corrosion is investigated. This model describes transport of water and chemical species, the electric potential distribution in the HPFRCC, the electrochemical propagation of steel corrosion, and the role of microcracks in the HPFRCC material. Numerical results show that the reduction in anode and cathode size on the reinforcing steel surface, due to multiple crack formation and widespread depassivation, is the mechanism behind experimental results of HPFRCC steel corrosion studies found in the literature. Such results provide an indication of the fundamental mechanisms by which steel reinforced HPFRCC materials may be more durable than traditional reinforced concrete and other ...

  20. Physically realistic modeling of maritime training simulation

    OpenAIRE

    Cieutat, Jean-Marc

    2003-01-01

    Maritime training simulation is an important part of maritime teaching and requires a wide range of scientific and technical skills. In this framework, where the real-time constraint has to be maintained, not all physical phenomena can be studied; only the most visible physical phenomena, relating to the natural elements and the ship's behaviour, are reproduced. Our swell model, based on a surface wave simulation approach, makes it possible to simulate the shape and the propagation of a regular train of waves f...

  1. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry; Etienne, Vincent; Gashawbeza, Ewenet; Curiel, Ernesto Sandoval; Khan, Azizur; Feki, Saber; Kortas, Samuel

    2017-01-01

    A combination of both shallow and deepwater, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied to the field data. Consequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km2. Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less than
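
    A hedged, one-dimensional sketch of the staggered-grid velocity-pressure scheme named above is given below (the survey simulation itself is 3D and far more elaborate); all material parameters and the source wavelet are assumed values.

        // 1D staggered-grid acoustic FDTD: pressure at cell centres, particle
        // velocity at cell faces, leapfrog time stepping (CFL = c*dt/dx = 0.15).
        #include <vector>
        #include <cmath>
        #include <cstdio>

        int main() {
            const int    n  = 2000;
            const double dx = 10.0, dt = 0.001;    // grid step [m], time step [s]
            const double rho = 1000.0, c = 1500.0; // density, velocity (water-like)
            const double K = rho * c * c;          // bulk modulus

            std::vector<double> p(n, 0.0), v(n + 1, 0.0);
            for (int it = 0; it < 1000; ++it) {
                // Ricker-like source pulse injected into the pressure field.
                const double t = it * dt, f0 = 25.0;
                const double a = M_PI * f0 * (t - 0.04);
                p[n / 2] += (1.0 - 2.0 * a * a) * std::exp(-a * a);

                for (int i = 1; i < n; ++i)        // velocity update
                    v[i] -= dt / (rho * dx) * (p[i] - p[i - 1]);
                for (int i = 0; i < n; ++i)        // pressure update
                    p[i] -= dt * K / dx * (v[i + 1] - v[i]);
            }
            std::printf("p at receiver: %g\n", p[n / 4]);
            return 0;
        }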

  3. Implementation of a Monte Carlo simulation environment for fully 3D PET on a high-performance parallel platform

    CERN Document Server

    Zaidi, H; Morel, Christian

    1998-01-01

    This paper describes the implementation of the Eidolon Monte Carlo program designed to simulate fully three-dimensional (3D) cylindrical positron tomographs on a MIMD parallel architecture. The original code was written in Objective-C and developed under the NeXTSTEP development environment. The different steps involved in porting the software to a parallel architecture based on PowerPC 604 processors running under AIX 4.1 are presented. Basic aspects and strategies of running Monte Carlo calculations on parallel computers are described. The computing time decreased linearly with the number of computing nodes. The improved time performance resulting from the parallelisation of the Monte Carlo calculations makes it an attractive tool for modelling photon transport in 3D positron tomography. The parallelisation paradigm used in this work is independent of the chosen parallel architecture.
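
    Because photon histories are independent, the parallelization strategy reported above amounts to giving each node its own random stream and combining results at the end, which is why the computing time decreases linearly with node count. A generic MPI sketch follows; track_photon() is a hypothetical stand-in for Eidolon's actual transport kernel.

        // Generic embarrassingly parallel Monte Carlo with a final reduction.
        #include <mpi.h>
        #include <random>
        #include <cstdio>

        double track_photon(std::mt19937& rng) {
            // Placeholder: sample a detected-event contribution (toy model).
            std::uniform_real_distribution<double> u(0.0, 1.0);
            return u(rng) < 0.2 ? 1.0 : 0.0;  // 20% detection (assumed value)
        }

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            const long total = 10000000;
            const long local = total / size;
            std::mt19937 rng(1234 + rank);    // independent stream per node

            double hits = 0.0;
            for (long i = 0; i < local; ++i) hits += track_photon(rng);

            double all = 0.0;
            MPI_Reduce(&hits, &all, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
            if (rank == 0)
                std::printf("detected fraction: %f\n", all / (local * size));
            MPI_Finalize();
            return 0;
        }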

  4. Development and verification of a high performance multi-group SP3 transport capability in the ARTEMIS core simulator

    International Nuclear Information System (INIS)

    Van Geemert, Rene

    2008-01-01

    For satisfaction of future global customer needs, dedicated efforts are being coordinated internationally and pursued continuously at AREVA NP. The currently ongoing CONVERGENCE project is committed to the development of the ARCADIA(R) next generation core simulation software package. ARCADIA(R) will be put to global use by all AREVA NP business regions, for the entire spectrum of core design processes, licensing computations and safety studies. As part of the currently ongoing trend towards more sophisticated neutronics methodologies, an SP3 nodal transport concept has been developed for ARTEMIS, which is the steady-state and transient core simulation part of ARCADIA(R). For enabling a high computational performance, the SPN calculations are accelerated by applying multi-level coarse mesh re-balancing. In the current implementation, SP3 is about 1.4 times as expensive computationally as SP1 (diffusion). The developed SP3 solution concept is foreseen as the future computational workhorse for many-group 3D pin-by-pin full core computations by ARCADIA(R). With the entire numerical workload being highly parallelizable through domain decomposition techniques, associated CPU-time requirements that adhere to the efficiency needs in the nuclear industry can be expected to become feasible in the near future. The accuracy enhancement obtainable by using SP3 instead of SP1 has been verified by a detailed comparison of ARTEMIS 16-group pin-by-pin SPN results with KAERI's DeCart reference results for the 2D pin-by-pin Purdue UO2/MOX benchmark. This article presents the accuracy enhancement verification and quantifies the achieved ARTEMIS-SP3 computational performance for a number of 2D and 3D multi-group and multi-box (up to pin-by-pin) core computations. (authors)

  5. Proceeding of A3 foresight program seminar on critical physics issues specific to steady state sustainment of high-performance plasmas 2014

    International Nuclear Information System (INIS)

    Morita, Shigeru; Hu Liqun; Oh, Yeong-Kook

    2014-10-01

    The A3 Foresight Program titled 'Critical Physics Issues Specific to Steady State Sustainment of High-Performance Plasmas', based on the scientific collaboration among China, Japan and Korea in the field of plasma physics, was started in August 2012 under the auspices of the Japan Society for the Promotion of Science (JSPS, Japan), the National Research Foundation of Korea (NRF, Korea) and the National Natural Science Foundation of China (NSFC, China). The main purpose of this project is to enhance joint experiments on three Asian advanced fully superconducting fusion devices (EAST in China, LHD in Japan and KSTAR in Korea) and other magnetic confinement devices, addressing several key physics issues in the steady-state sustainment of high-performance plasmas. The fourth seminar of the A3 collaboration, the fifth meeting of the A3 program, took place in Kagoshima, Japan, 23-26 June 2014, hosted by the National Institute for Fusion Science, to discuss the achievements of the past two years and to summarize an intermediate report. New collaborative research was also encouraged, as well as the participation of young scientists. The topics include steady-state sustainment of magnetic configuration, edge and divertor plasma control, and confinement of alpha particles. This issue is the collection of 41 papers presented at the entitled meeting. All 41 of the presented papers are indexed individually. (J.P.N.)

  6. Computer simulation analysis on the machinability of alumina dispersion enforced copper alloy for high performance compact heat exchanger

    International Nuclear Information System (INIS)

    Ishiyama, Shintaro; Muto, Yasushi

    2001-01-01

    A feasibility study of an HTGR-GT (High Temperature Gas cooled Reactor-Gas Turbine) system is examining the application of high-strength, high-thermal-conductivity alumina-dispersed copper (AL-25) in the ultra-fine rectangular plate fins of the recuperator for the system. However, it is very difficult to manufacture an ultra-fine fin by large-scale plastic deformation from the hard and brittle AL-25 foil. Therefore, in the present study, to establish the fine-fin manufacturing technology for the AL-25 foil, the forming process of the fine fin was first simulated by large-scale elasto-plastic finite element analysis (FEM) and the forming limit was estimated. Next, manufacturing equipment implementing the new process suggested by these analytical results was built, and manufacturing experiments on the AL-25 foil were carried out. From these results, the following conclusions were obtained. (1) The forming simulation for manufacturing a fine rectangular fin (fin height x pitch x thickness = 3 mm x 4 mm x 0.156 mm) from AL-25 foil (thickness = 0.156 mm) by large-scale elasto-plastic FEM showed that such a fin can be manufactured by the double-action processing method; trial double-action processing equipment was built, and the manufacturing tests showed that 0.8 mm and 0.25 mm are the best values for the R part and the clearance between dies, respectively. (2) A fine fin with height x pitch x thickness of 3 mm x 4 mm x (0.156 mm ± 0.001 mm) was successfully manufactured from the AL-25 foil. (3) The evolution of the deformation and of the thickness during processing of the AL-25 foil predicted by the large-scale elasto-plastic FEM showed good agreement with the results of the processing experiments.

  7. Viscoelastic Waves Simulation in a Blocky Medium with Fluid-Saturated Interlayers Using High-Performance Computing

    Science.gov (United States)

    Sadovskii, Vladimir; Sadovskaya, Oxana

    2017-04-01

    A thermodynamically consistent approach to the description of linear and nonlinear wave processes in a blocky medium, which consists of a large number of elastic blocks interacting with each other via pliant interlayers, is proposed. The mechanical properties of interlayers are defined by means of rheological schemes of different levels of complexity. Elastic interaction between the blocks is considered in the framework of the linear elasticity theory [1]. The effects of viscoelastic shear in the interblock interlayers are taken into consideration using the Poynting-Thomson rheological scheme. The model of an elastic porous material is used in the interlayers, where the pores collapse if an abrupt compressive stress is applied. On the basis of the Biot equations for a fluid-saturated porous medium, a new mathematical model of a blocky medium is worked out, in which the interlayers provide a convective fluid motion due to the external perturbations. The collapse of pores is modeled within the generalized rheological approach, wherein the mechanical properties of a material are simulated using four rheological elements. Three of them are the traditional elastic, viscous and plastic elements; the fourth element is the so-called rigid contact [2], which is used to describe the behavior of materials with different resistance to tension and compression. Thermodynamic consistency of the equations in interlayers with the equations in blocks guarantees fulfillment of the energy conservation law for the blocky medium as a whole, i.e. the kinetic and potential energy of the system is the sum of the kinetic and potential energies of the blocks and interlayers. As a result of discretization of the model equations, a robust computational algorithm is constructed, which is stable because of the thermodynamic consistency of the finite difference equations at the discrete level. The splitting method by the spatial variables and the Godunov gap decay scheme are used in the blocks, the

  8. Proceedings of A3 foresight program seminar on critical physics issues specific to steady state sustainment of high-performance plasmas

    International Nuclear Information System (INIS)

    Morita, Shigeru; Hu Liqun; Oh, Yeong-Kook

    2013-06-01

    The A3 Foresight Program titled 'Critical Physics Issues Specific to Steady State Sustainment of High-Performance Plasmas', based on the scientific collaboration among China, Japan and Korea in the field of plasma physics, was newly started in August 2012 under the auspices of the Japan Society for the Promotion of Science (JSPS, Japan), the National Research Foundation of Korea (NRF, Korea) and the National Natural Science Foundation of China (NSFC, China). A seminar on the A3 collaboration took place at Hotel Gozensui, Kushiro, Japan, 22-25 January 2013, organized by the National Institute for Fusion Science. One special talk and 36 oral talks were presented in the seminar, with 13 Chinese, 14 Japanese and 9 Korean attendees. Steady-state sustainment of high-performance plasmas is a crucial issue for realizing a nuclear fusion reactor, and this seminar was motivated by these issues. Results on fusion experiments and theory obtained through the A3 Foresight Program during the recent two years were discussed and summarized. The possible direction of future collaboration and further encouragement of the scientific activity of younger scientists were also discussed in this seminar, together with the future experimental plans in the three countries. This issue is the collection of 29 papers presented at the entitled meeting. All 29 of the presented papers are indexed individually. (J.P.N.)

  9. High performance computing applied to simulation of the flow in pipes; Computacao de alto desempenho aplicada a simulacao de escoamento em dutos

    Energy Technology Data Exchange (ETDEWEB)

    Cozin, Cristiane; Lueders, Ricardo; Morales, Rigoberto E.M. [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil). Dept. de Engenharia Mecanica

    2008-07-01

    In recent years, computer clusters have emerged as a real alternative for solving problems that require high-performance computing, and the development of new applications has been driven accordingly. Among them, flow simulation represents a real computational burden, especially for large systems. This work presents a study of using parallel computing for the numerical simulation of fluid flow in pipelines. A mathematical flow model is numerically solved. In general, this procedure leads to a tridiagonal system of equations suitable to be solved by a parallel algorithm. In this work, this is accomplished by a parallel odd-even reduction method found in the literature, which is implemented in the Fortran programming language. A computational platform composed of twelve processors was used. Many measurements of CPU times for different tridiagonal system sizes and numbers of processors were obtained, highlighting the communication time between processors as an important issue to be considered when evaluating the performance of parallel applications. (author)
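
    For reference, a serial version of the odd-even (cyclic) reduction recurrences for a tridiagonal system with n = 2^k - 1 unknowns is sketched below in C++ (the paper's implementation is in Fortran); within each pass all updates are mutually independent, which is exactly what a parallel implementation exploits by assigning groups of equations to different processors. The model problem is an assumed example, not the flow system from the paper.

        // Serial odd-even (cyclic) reduction for a tridiagonal system Ax = d,
        // n = 2^k - 1. Each forward pass eliminates every other unknown.
        #include <vector>
        #include <cstdio>

        int main() {
            const int k = 10, n = (1 << k) - 1;  // 1023 unknowns
            // Model problem: -x[i-1] + 4 x[i] - x[i+1] = 2 (diagonally dominant).
            std::vector<double> a(n, -1.0), b(n, 4.0), c(n, -1.0),
                                d(n, 2.0), x(n, 0.0);
            a[0] = 0.0; c[n - 1] = 0.0;

            // Forward reduction: updates within a pass are independent.
            for (int stride = 2; stride < n; stride *= 2) {
                const int h = stride / 2;
                for (int i = stride - 1; i < n; i += stride) {
                    const double al = -a[i] / b[i - h];
                    const double ga = (i + h < n) ? -c[i] / b[i + h] : 0.0;
                    b[i] += al * c[i - h] + ga * ((i + h < n) ? a[i + h] : 0.0);
                    d[i] += al * d[i - h] + ga * ((i + h < n) ? d[i + h] : 0.0);
                    a[i]  = al * a[i - h];
                    c[i]  = (i + h < n) ? ga * c[i + h] : 0.0;
                }
            }
            // Back substitution, again with independent updates per pass.
            for (int stride = n + 1; stride >= 2; stride /= 2) {
                const int h = stride / 2;
                for (int i = h - 1; i < n; i += stride) {
                    const double xl = (i - h >= 0) ? x[i - h] : 0.0;
                    const double xr = (i + h <  n) ? x[i + h] : 0.0;
                    x[i] = (d[i] - a[i] * xl - c[i] * xr) / b[i];
                }
            }
            std::printf("x[0] = %.6f, x[n/2] = %.6f\n", x[0], x[n / 2]);
            return 0;
        }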

  10. Partnership For Edge Physics Simulation

    Energy Technology Data Exchange (ETDEWEB)

    PARASHAR, MANISH

    2018-04-02

    In this effort, we will extend our prior work as part of CPES (i.e., DART and DataSpaces) to support in-situ tight coupling between application codes that exploits data locality and core-level parallelism to maximize on-chip data exchange and reuse. This will be accomplished by mapping coupled simulations so that the data exchanges are more localized within the nodes. Coupled simulation workflows can utilize the resources available on emerging HEC platforms more effectively if they are mapped and executed to exploit data locality as well as the communication patterns between application components. Scheduling and running such workflows requires an extended framework that should (1) provide a unified hybrid abstraction to enable coordination and data sharing across computation tasks that run on heterogeneous multi-core-based systems, and (2) develop a data-locality-based dynamic task scheduling approach to increase on-chip or intra-node data exchanges and in-situ execution. Our prior CPES work provided a simple virtual shared-space abstraction hosted at the staging nodes to support application coordination, data sharing and active data processing services. The extended framework will also transparently manage the low-level operations associated with inter-application data exchange, such as data redistributions, and will enable running coupled simulation workflows on multi-core computing platforms.

  11. Plasma physics via particle simulation

    International Nuclear Information System (INIS)

    Birdsall, C.K.

    1981-01-01

    Plasmas are studied by following the motion of many particles in applied and self-fields, analytically, experimentally and computationally. Plasmas for magnetic fusion energy devices are very hot, nearly collisionless and magnetized, with scale lengths of many ion gyroradii and Debye lengths. Analytic studies of such plasmas are very difficult because the plasma is nonuniform, anisotropic and nonlinear. Experimental studies have become very expensive in time and money as the size, density and temperature approach fusion reactor values. Computational studies using many particles and/or fluids have complemented both theory and experiment for many years and have progressed to fully three-dimensional electromagnetic models, albeit with hours of running time on the fastest, largest computers. Particle simulation methods are presented in some detail, showing the particle advance from acceleration to velocity to position, followed by calculation of the fields from the charge and current densities, then further particle advance, and so on. Limitations due to time stepping and the use of a spatial grid are given, to avoid inaccuracies and instabilities. Examples are given of a one-dimensional electrostatic program, of an orbit-averaging program, and of a three-dimensional electromagnetic program. Applications of particle simulations of plasmas in magnetic and inertial fusion devices continue to grow, as do applications to plasmas and beams in peripheral devices, such as sources, accelerators, and converters. (orig.)
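
    The cycle described here (deposit charge, solve fields, accelerate, move, repeat) is easy to sketch for the one-dimensional electrostatic case. The following is a minimal, normalised Python illustration (unit plasma frequency, periodic box), not Birdsall's code; all names and values are this sketch's own:

        import numpy as np

        def pic_cycle(x, v, dt, L, ng, qm=-1.0):
            """One leapfrog cycle of a 1D electrostatic particle-in-cell sketch.
            Periodic box of length L, ng grid cells, electrons (qm = q/m = -1)
            over a neutralising ion background; plasma frequency normalised to 1."""
            dx = L / ng
            g = x / dx
            j = np.floor(g).astype(int) % ng
            w = g - np.floor(g)
            dens = np.zeros(ng)                       # linear (CIC) deposition
            np.add.at(dens, j, 1.0 - w)
            np.add.at(dens, (j + 1) % ng, w)
            rho = 1.0 - dens * ng / len(x)            # net charge density
            k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)
            rho_k = np.fft.fft(rho)
            E_k = np.zeros_like(rho_k)
            E_k[1:] = rho_k[1:] / (1j * k[1:])        # dE/dx = rho in k-space
            E = np.real(np.fft.ifft(E_k))
            Ep = (1.0 - w) * E[j] + w * E[(j + 1) % ng]   # gather to particles
            v = v + qm * Ep * dt                      # kick
            x = (x + v * dt) % L                      # drift
            return x, v

        # cold plasma oscillation: a sinusoidal displacement rings at omega_p = 1
        L, N, ng = 2 * np.pi, 20000, 64
        x = np.linspace(0.0, L, N, endpoint=False)
        x = (x + 0.01 * np.sin(x)) % L
        v = np.zeros(N)
        for _ in range(200):
            x, v = pic_cycle(x, v, 0.05, L, ng)

    The time-step and grid limitations the record mentions show up directly here: dt must resolve the plasma period and dx the Debye length, or the scheme heats and destabilises.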

  12. Design and Study of Cognitive Network Physical Layer Simulation Platform

    Directory of Open Access Journals (Sweden)

    Yongli An

    2014-01-01

    Full Text Available Cognitive radio technology has received wide attention for its ability to sense and use idle frequencies. IEEE 802.22 WRAN, the first standard to build on cognitive radio technology, is characterized by spectrum sensing and wireless data transmission. As far as wireless transmission is concerned, the availability and implementation of a mature and robust physical layer algorithm are essential to high performance. For the physical layer of WRAN using OFDMA technology, this paper proposes a synchronization algorithm and at the same time provides a public platform for the improvement and verification of new algorithms. The simulation results show that the performance of the platform is highly close to the theoretical value.
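
    The paper's own synchronization algorithm is not reproduced in this record; as a point of reference, a widely used baseline for OFDM symbol timing is the Schmidl-Cox metric, sketched below (all names and the toy signal are this sketch's assumptions):

        import numpy as np

        def schmidl_cox_metric(r, L):
            """Timing metric M(d) for OFDM synchronisation (Schmidl & Cox, 1997):
            a preamble whose two halves of length L are identical yields a value
            of M near 1 at the correct symbol start."""
            n = len(r) - 2 * L
            M = np.empty(n)
            for d in range(n):
                P = np.sum(np.conj(r[d:d + L]) * r[d + L:d + 2 * L])
                R = np.sum(np.abs(r[d + L:d + 2 * L]) ** 2)
                M[d] = np.abs(P) ** 2 / (R ** 2 + 1e-12)
            return M

        # toy check: a repeated random half-symbol buried between noise segments
        rng = np.random.default_rng(3)
        L = 64
        half = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2)
        r = np.concatenate([rng.normal(size=200) * 0.1,
                            np.tile(half, 2),
                            rng.normal(size=200) * 0.1])
        d_hat = np.argmax(schmidl_cox_metric(r, L))   # near 200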

  13. Coincidental match of numerical simulation and physics

    Science.gov (United States)

    Pierre, B.; Gudmundsson, J. S.

    2010-08-01

    Consequences of rapid pressure transients in pipelines range from increased fatigue to leakages and complete ruptures of the pipeline. Accurate predictions of rapid pressure transients in pipelines using numerical simulations are therefore critical. State-of-the-art modelling of pressure transients in general, and water hammer in particular, includes unsteady friction in addition to the steady frictional pressure drop, and numerical simulations rely on the method of characteristics. Comparison of rapid pressure transient calculations by the method of characteristics and a selected high-resolution finite volume method highlights issues related to the modelling of pressure waves and illustrates that matches between numerical simulations and physics are purely coincidental.
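
    For readers unfamiliar with the method of characteristics referred to here, a compact sketch for a single pipe (reservoir upstream, instantaneous valve closure downstream; steady friction only, all parameter values illustrative) is:

        import numpy as np

        def water_hammer_moc(nx=21, a=1200.0, L=600.0, f=0.02, D=0.5,
                             H0=100.0, Q0=0.2, t_end=2.0, g=9.81):
            """Method-of-characteristics sketch for water hammer in one pipe.
            a: wave speed [m/s], L: length [m], f: Darcy friction factor."""
            A = np.pi * D**2 / 4
            dx = L / (nx - 1)
            dt = dx / a                      # Courant condition dx = a*dt
            B = a / (g * A)
            R = f * dx / (2 * g * D * A**2)
            H = np.full(nx, H0)              # initial head (friction ignored)
            Q = np.full(nx, Q0)
            hist = []
            for _ in range(int(t_end / dt)):
                Hn, Qn = H.copy(), Q.copy()
                # interior points: intersect C+ and C- characteristics
                cp = H[:-2] + B * Q[:-2] - R * Q[:-2] * np.abs(Q[:-2])
                cm = H[2:] - B * Q[2:] + R * Q[2:] * np.abs(Q[2:])
                Hn[1:-1] = 0.5 * (cp + cm)
                Qn[1:-1] = (cp - cm) / (2 * B)
                # upstream reservoir: fixed head, C- characteristic gives Q
                Hn[0] = H0
                Qn[0] = (H0 - (H[1] - B * Q[1] + R * Q[1] * np.abs(Q[1]))) / B
                # downstream valve closed: Q = 0, C+ characteristic gives H
                Qn[-1] = 0.0
                Hn[-1] = H[-2] + B * Q[-2] - R * Q[-2] * np.abs(Q[-2])
                H, Q = Hn, Qn
                hist.append(H[-1])           # head at the valve
            return np.array(hist)

    The head rise at the valve approaches the Joukowsky estimate a*Q0/(g*A); adding an unsteady friction term, as the abstract notes, is where state-of-the-art models and such textbook schemes part ways.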

  14. Physical Characterization of Florida International University Simulants

    Energy Technology Data Exchange (ETDEWEB)

    HANSEN, ERICH K.

    2004-08-19

    Florida International University (FIU) shipped Laponite, clay (bentonite and kaolin blend), and Quality Assurance Requirements Document AZ-101 simulants to the Savannah River Technology Center for physical characterization. The objective of the task was to measure the physical properties of the fluids provided by FIU and to report the results. The physical properties were measured using the approved River Protection Project Waste Treatment Plant characterization procedure [Ref. 1]. This task was conducted in response to the work outlined in CCN066794 [Ref. 2], authored by Gary Smith and William Graves of RPP-WTP.

  15. High performance homes

    DEFF Research Database (Denmark)

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    Can prefabrication contribute to the development of high performance homes? To answer this question, this chapter defines high performance in more broadly inclusive terms, acknowledging the technical, architectural, social and economic conditions under which energy consumption and production occur. ... Consideration of all these factors is a precondition for a truly integrated practice and as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy...

  16. Morphology of Gas Release in Physical Simulants

    Energy Technology Data Exchange (ETDEWEB)

    Daniel, Richard C.; Burns, Carolyn A.; Crawford, Amanda D.; Hylden, Laura R.; Bryan, Samuel A.; MacFarlan, Paul J.; Gauglitz, Phillip A.

    2014-07-03

    This report documents testing activities conducted as part of the Deep Sludge Gas Release Event Project (DSGREP). The testing described in this report focused on evaluating the potential retention and release mechanisms of hydrogen bubbles in underground radioactive waste storage tanks at Hanford. The goal of the testing was to evaluate the rate, extent, and morphology of gas release events in simulant materials. Previous, undocumented scoping tests showed dramatically different gas release behavior in simulants with similar physical properties. Specifically, previous gas release tests evaluated the extent of release from 30 Pa kaolin and 30 Pa bentonite clay slurries. While both materials are clays and both have equivalent shear strength as measured with a shear vane, it was found that upon stirring, gas was released immediately and completely from the bentonite clay slurry, while little if any gas was released from the kaolin slurry. The motivation for the current work is to replicate these tests in a controlled-quality test environment and to evaluate the release behavior of another simulant used in DSGREP testing. Three simulant materials were evaluated: 1) a 30 Pa kaolin clay slurry, 2) a 30 Pa bentonite clay slurry, and 3) a Rayleigh-Taylor (RT) simulant (a simulant designed to support DSGREP RT instability testing). Entrained gas was generated in these simulant materials using two methods: 1) application of vacuum over about a 1-minute period to nucleate dissolved gas within the simulant, and 2) addition of hydrogen peroxide to generate gas by peroxide decomposition in the simulants over about a 16-hour period. Bubble release was effected by vibrating the test material using an external vibrating table. When testing with hydrogen peroxide, gas release was also accomplished by stirring of the simulant.

  17. Hazard-to-Risk: High-Performance Computing Simulations of Large Earthquake Ground Motions and Building Damage in the Near-Fault Region

    Science.gov (United States)

    Miah, M.; Rodgers, A. J.; McCallen, D.; Petersson, N. A.; Pitarka, A.

    2017-12-01

    We are running high-performance computing (HPC) simulations of ground motions for large (magnitude, M=6.5-7.0) earthquakes in the near-fault region, along with the response of steel moment frame buildings throughout the near-fault domain. For ground motions, we are using SW4, a fourth-order summation-by-parts finite difference time-domain code running on 10,000-100,000's of cores. Earthquake ruptures are generated using the Graves and Pitarka (2017) method. We validated ground motion intensity measurements against Ground Motion Prediction Equations. We considered two events (M=6.5 and 7.0) for vertical strike-slip ruptures with three-dimensional (3D) basin structures, including stochastic heterogeneity. We have also considered M7.0 scenarios for a Hayward Fault rupture, which affects the San Francisco Bay Area and northern California, using both 1D and 3D earth structure. Dynamic, inelastic response of canonical buildings is computed with NEVADA, a nonlinear, finite-deformation finite element code. Canonical buildings include 3-, 9-, 20- and 40-story steel moment frame buildings. Damage potential is tracked by the peak inter-story drift (PID) ratio, which measures the maximum displacement between adjacent floors of the building and is strongly correlated with damage. PID ratios greater than 1.0 generally indicate nonlinear response and permanent deformation of the structure. We also track roof displacement to identify permanent deformation. PID (damage) for a given earthquake scenario (M, slip distribution, hypocenter) is spatially mapped throughout the SW4 domain with 1-2 km resolution. Results show that in the near-fault region building damage is correlated with peak ground velocity (PGV), while farther away (> 20 km) it is better correlated with peak ground acceleration (PGA). We also show how simulated ground motions have peaks in the response spectra that shift to longer periods for larger magnitude events and for locations of forward directivity, as has been reported by

  18. High Performance Marine Vessels

    CERN Document Server

    Yun, Liang

    2012-01-01

    High Performance Marine Vessels (HPMVs) range from fast ferries to the latest high-speed Navy craft, including competition power boats and hydroplanes, hydrofoils, hovercraft, catamarans and other multi-hull craft. High Performance Marine Vessels covers the main concepts of HPMVs and discusses historical background, design features, services that have been successful and not so successful, and some sample data on the range of HPMVs to date. Included is a comparison of all HPMV craft and the differences between them, and descriptions of performance (hydrodynamics and aerodynamics). Readers will find a comprehensive overview of the design, development and building of HPMVs. In summary, this book: Focuses on technology at the aero-marine interface; Covers the full range of high performance marine vessel concepts; Explains the historical development of various HPMVs; Discusses ferries, racing and pleasure craft, as well as utility and military missions. High Performance Marine Vessels is an ideal book for student...

  19. High Performance Macromolecular Material

    National Research Council Canada - National Science Library

    Forest, M

    2002-01-01

    ... In essence, most commercial high-performance polymers are processed through fiber spinning, following Nature and spider silk, which is still pound-for-pound the toughest liquid crystalline polymer...

  20. Interactive physically-based sound simulation

    Science.gov (United States)

    Raghuvanshi, Nikunj

    The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical as well as perceptual properties of audible sounds. My dissertation is a step in this direction: it concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and a moving listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation
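
    Sound synthesis of the kind described, driven by elastic surface vibrations, is often approximated by modal synthesis: an impact excites a bank of damped sinusoids. A toy sketch with made-up mode data (not the dissertation's perceptually accelerated method):

        import numpy as np

        def modal_impact_sound(freqs, dampings, gains, fs=44100, dur=1.0):
            """Minimal modal-synthesis sketch: an impact excites damped modes.
            freqs [Hz], dampings [1/s] and gains are per-mode parameters that,
            in a full system, come from an eigen-analysis of the object."""
            t = np.arange(int(fs * dur)) / fs
            s = sum(g * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
                    for f, d, g in zip(freqs, dampings, gains))
            return s / np.max(np.abs(s))    # normalised audio buffer

        # e.g. a small struck metal bar (illustrative mode data)
        clip = modal_impact_sound([440.0, 1210.0, 2380.0],
                                  [3.0, 5.0, 8.0],
                                  [1.0, 0.5, 0.25])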

  1. Physics-Based Simulations of Natural Hazards

    Science.gov (United States)

    Schultz, Kasey William

    Earthquakes and tsunamis are some of the most damaging natural disasters that we face. Just two recent events, the 2004 Indian Ocean earthquake and tsunami and the 2010 Haiti earthquake, claimed more than 400,000 lives. Despite their catastrophic impacts on society, our ability to predict these natural disasters is still very limited. The main challenge in studying the earthquake cycle is the nonlinear, multi-scale nature of fault networks. Earthquakes are governed by physics across many orders of magnitude of spatial and temporal scales: from the scale of tectonic plates and their evolution over millions of years, down to the scale of rock fracturing over milliseconds to minutes at the sub-centimeter scale during an earthquake. Despite these challenges, there are useful patterns in earthquake occurrence. One such pattern, the frequency-magnitude relation, relates the number of large earthquakes to small earthquakes and forms the basis for assessing earthquake hazard. However, the utility of these relations is proportional to the length of our earthquake records, and typical records span at most a few hundred years. Utilizing physics-based interactions and techniques from statistical physics, earthquake simulations provide rich earthquake catalogs allowing us to measure otherwise unobservable statistics. In this dissertation I will discuss five applications of physics-based simulations of natural hazards, utilizing an earthquake simulator called Virtual Quake. The first is an overview of computing earthquake probabilities from simulations, focusing on the California fault system. The second uses simulations to help guide satellite-based earthquake monitoring methods. The third presents a new friction model for Virtual Quake and describes how we tune simulations to match reality. The fourth describes the process of turning Virtual Quake into an open source research tool. This section then focuses on a resulting collaboration using Virtual Quake for a detailed
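
    The frequency-magnitude relation mentioned here is the Gutenberg-Richter law, log10 N(>=M) = a - b*M. A one-line maximum-likelihood estimate of b from a catalog (Aki's estimator) illustrates how simulated catalogs are reduced to hazard statistics; the synthetic catalog below is this sketch's own:

        import numpy as np

        def gutenberg_richter_b(mags, m_c):
            """Maximum-likelihood b-value (Aki, 1965) for magnitudes at or
            above the completeness magnitude m_c."""
            m = np.asarray(mags, dtype=float)
            m = m[m >= m_c]
            return np.log10(np.e) / (m.mean() - m_c)

        # synthetic catalog with b = 1: excess magnitudes are exponential
        rng = np.random.default_rng(7)
        mags = 3.0 + rng.exponential(scale=1.0 / np.log(10), size=5000)
        b_hat = gutenberg_richter_b(mags, 3.0)        # close to 1.0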

  2. The challenge of quantum computer simulations of physical phenomena

    International Nuclear Information System (INIS)

    Ortiz, G.; Knill, E.; Gubernatis, J.E.

    2002-01-01

    The goal of physics simulation using controllable quantum systems ('physics imitation') is to exploit quantum laws to advantage, and thus accomplish efficient simulation of physical phenomena. In this Note, we discuss the fundamental concepts behind this paradigm of information processing, such as the connection between models of computation and physical systems. The experimental simulation of a toy quantum many-body problem is described

  3. Simulation of General Physics laboratory exercise

    International Nuclear Information System (INIS)

    Aceituno, P; Hernández-Cabrera, A; Hernández-Aceituno, J

    2015-01-01

    Laboratory exercises are an important part of general Physics teaching, both during the last years of high school and the first year of college education. Due to the need to acquire enough laboratory equipment for all the students, and the widespread access to computer rooms in teaching, we propose the development of computer-simulated laboratory exercises. A representative exercise in general Physics is the determination of the gravitational acceleration, through the free-fall motion of a metal ball. Using a model of the real exercise, we have developed an interactive system which allows students to alter the starting height of the ball to obtain different fall times. The simulation was programmed in ActionScript 3, so that it can be freely executed on any operating system; to ensure the accuracy of the calculations, all the input parameters of the simulations were modelled using digital measurement units, and to allow statistical treatment of the resulting data, measurement errors are simulated through limited randomization
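
    The exercise reduces to fitting h = (1/2) g t^2 to noisy fall times. A compact illustration of the statistical treatment, in Python rather than the authors' ActionScript 3 and with an invented noise level:

        import numpy as np

        rng = np.random.default_rng(1)
        g_true = 9.81
        heights = np.linspace(0.4, 2.0, 9)             # starting heights [m]
        t_ideal = np.sqrt(2 * heights / g_true)
        # 25 simulated timings per height with 5 ms random error
        t_meas = t_ideal + rng.normal(0.0, 0.005, (25, heights.size))

        # linear fit of h against t^2/2 gives g as the slope
        t2 = (t_meas.mean(axis=0) ** 2) / 2
        g_est, _ = np.polyfit(t2, heights, 1)
        print(f"estimated g = {g_est:.3f} m/s^2")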

  4. Characterization of cure kinetics and physical properties of a high performance, glass fiber-reinforced epoxy prepreg and a novel fluorine-modified, amine-cured commercial epoxy

    Science.gov (United States)

    Bilyeu, Bryan

    Kinetic equation parameters for the curing reaction of a commercial glass fiber reinforced high performance epoxy prepreg, composed of the tetrafunctional epoxy tetraglycidyl 4,4'-diaminodiphenyl methane (TGDDM), the tetrafunctional amine curing agent 4,4'-diaminodiphenylsulfone (DDS) and an ionic initiator/accelerator, are determined by various thermal analysis techniques and the results compared. The reaction is monitored through the heat generated, determined by differential scanning calorimetry (DSC), and by high speed DSC when the reaction rate is high. The changes in physical properties indicating increasing conversion are followed by shifts in glass transition temperature determined by DSC, temperature-modulated DSC (TMDSC), step scan DSC and high speed DSC, thermomechanical (TMA) and dynamic mechanical (DMA) analysis and thermally stimulated depolarization (TSD). Changes in viscosity, also indicative of the degree of conversion, are monitored by DMA. Thermal stability as a function of degree of cure is monitored by thermogravimetric analysis (TGA). The parameters of the general kinetic equations, including activation energy and rate constant, are explained and used to compare the results of the various techniques. The utility of the kinetic descriptions is demonstrated in the construction of a time-temperature-transformation (TTT) diagram and a continuous heating transformation (CHT) diagram for rapid determination of processing parameters in the processing of prepregs. Shrinkage due to both resin consolidation and fiber rearrangement is measured as the linear expansion of the piston on a quartz dilatometer cell using TMA. The shrinkage of prepregs was determined to depend on the curing temperature, the pressure applied and the fiber orientation. Chemical modification of an epoxy was done by mixing a fluorinated aromatic amine (a fluorinated aniline) with a standard aliphatic amine as the curing agent for a commercial diglycidyl ether of bisphenol A (DGEBA) epoxy. The resulting cured network
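
    As an illustration of the kind of kinetic equation whose parameters such studies extract, a common general form for epoxy cure is the autocatalytic (Kamal-type) model dalpha/dt = A exp(-Ea/RT) alpha^m (1-alpha)^n. Integrating it with purely invented parameters (not fitted to TGDDM/DDS):

        import numpy as np
        from scipy.integrate import solve_ivp

        R = 8.314                                 # J/(mol K)

        def cure_rate(t, alpha, A, Ea, m, n, T):
            """Autocatalytic (Kamal-type) cure kinetics; all parameter
            values used below are illustrative, not fitted data."""
            k = A * np.exp(-Ea / (R * T))
            return k * alpha**m * (1.0 - alpha)**n

        # isothermal cure at 450 K, starting from a small seed conversion
        sol = solve_ivp(cure_rate, (0.0, 3600.0), [1e-3],
                        args=(1e7, 7e4, 0.5, 1.5, 450.0), max_step=5.0)
        alpha_end = sol.y[0, -1]                  # degree of conversion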

  5. David Adler Lectureship Award in the Field of Materials Physics: Racetrack Memory - a high-performance, storage class memory using magnetic domain-walls manipulated by current

    Science.gov (United States)

    Parkin, Stuart

    2012-02-01

    Racetrack Memory is a novel high-performance, non-volatile storage-class memory in which magnetic domains are used to store information in a 'magnetic racetrack' [1]. The magnetic racetrack promises a solid-state memory with storage capacities and cost rivaling those of magnetic disk drives but with much improved performance and reliability: a 'hard disk on a chip'. The magnetic racetrack is comprised of a magnetic nanowire in which a series of magnetic domain walls are shifted to and fro along the wire using nanosecond-long pulses of spin-polarized current [2]. We have demonstrated the underlying physics that makes Racetrack Memory possible [3,4] and all the basic functions - creation and manipulation of a train of domain walls and their detection. The physics underlying the current-induced dynamics of domain walls will also be discussed. In particular, we show that the domain walls respond as if they have mass, leading to significant inertia-driven motion of the domain walls over long times after the current pulses are switched off [3]. We also demonstrate that in perpendicularly magnetized nanowires there are two independent current driving mechanisms: one derived from bulk spin-dependent scattering that drives the domain walls in the direction of electron flow, and a second interfacial mechanism that can drive the domain walls either along or against the electron flow, depending on subtle changes in the nanowire structure. Finally, we demonstrate that thermally induced spin currents are large enough that they can be used to manipulate domain walls. [1] S.S.P. Parkin, US Patent 6,834,005 (2004); S.S.P. Parkin et al., Science 320, 190 (2008); S.S.P. Parkin, Scientific American (June 2009). [2] M. Hayashi, L. Thomas, R. Moriya, C. Rettner and S.S.P. Parkin, Science 320, 209 (2008). [3] L. Thomas, R. Moriya, C. Rettner and S.S.P. Parkin, Science 330, 1810 (2010). [4] X. Jiang et al., Nat. Comm. 1:25 (2010) and Nano Lett. 11, 96 (2011).

  6. VIII Brazilian Meeting on Simulational Physics (BMSP)

    International Nuclear Information System (INIS)

    Branco, N. S.; Figueiredo, W.; Plascak, J. A.; Santos, M.

    2016-01-01

    This special issue includes invited and selected articles of the VIII Brazilian Meeting on Simulational Physics (BMSP), held in Florianópolis, Santa Catarina, Brazil, from 3rd to 8th August, 2015. This is the eighth such meeting, and the second one to have contributed papers published in Journal of Physics: Conference Series (the other was the VII BMSP). The previous meetings in the BMSP series took place in the mountains of Minas Gerais, in the region of the Brazilian Pantanal, and on the shores of Paraíba. Now, for the first time, the Meeting was held in Florianópolis, the capital of Santa Catarina state, with its pleasing shores. The VIII BMSP brought together about 50 researchers from all over the world for a vibrant and productive conference. As in the previous meetings, the talks and posters highlighted recent advances in applications, algorithms, and implementation of computer simulation methods for the study of condensed matter, materials, and out-of-equilibrium, quantum and biologically motivated systems. We are sure that this meeting series will continue to be an important occasion for people working in simulational physics to exchange ideas and discuss the state of the art of this ever-expanding field. We are very glad to put together this special issue, and are most appreciative of the efforts of the editors of the Journal of Physics: Conference Series for making this publication possible. We are grateful for the outstanding work of the Florianópolis team, and for the financial support of the Brazilian agencies CAPES and CNPq and of the Federal Universities UFPB and UFSC. Last, but not least, we would like to acknowledge all of the authors for their written submissions. (paper)

  7. Simulations and Experiments in Astronomy and Physics

    Science.gov (United States)

    Maloney, F. P.; Maurone, P. A.; Dewarf, L. E.

    1998-12-01

    There are new approaches to teaching astronomy and physics in the laboratory setting, involving the use of computers as tools to simulate events and concepts which can be illuminated in no other reasonable way. With the computer, it is possible to travel back in time to replicate the sky as Galileo saw it. Astronomical phenomena which reveal themselves only after centuries of real time may be compressed in the computer to a simulation of several minutes. Observations simulated on the computer do not suffer from the vagaries of weather, fixed time or geographic position, or non-repeatability. In physics, the computer allows us to secure data for experiments which, by their nature, may not be amenable to human interaction. These could include experiments with very fast or very slow timescales, large number of data samples, complex or tedious manipulation of the data which hides the fundamental nature of the experiment, or data sampling which would need a specialized probe, such as for acid rain. This innovation has become possible only recently, due to the availability and affordability of sophisticated computer hardware and software. We have developed a laboratory experience for non-scientists who need an introductory course in astronomy or physics. Our approach makes extensive use of computers in this laboratory. Using commercially available software, the students use the computer as a time machine and a space craft to explore and rediscover fundamental science. The physics experiments are classical in nature, and the computer acts as a data collector and presenter, freeing the student from the tedium of repetitive data gathering and replotting. In this way, the student is encouraged to explore, to try new things, to refine the measurements, and to discover the principles underlying the observed phenomena.

  8. Plasma simulation studies using multilevel physics models

    International Nuclear Information System (INIS)

    Park, W.; Belova, E.V.; Fu, G.Y.; Tang, X.Z.; Strauss, H.R.; Sugiyama, L.E.

    1999-01-01

    The question of how to proceed toward ever more realistic plasma simulation studies using ever increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase-spaces and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluids; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase-space), gyrokinetic-particle-ion/fluid-electron (5D ion phase-space), and full-kinetic-particle-ion/fluid-electron level (6D ion phase-space). Resolving electron phase-space (5D or 6D) remains a future project. Phase-space-fluid models are not used in favor of δf particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems not likely in the near future. copyright 1999 American Institute of Physics

  9. TOWARD END-TO-END MODELING FOR NUCLEAR EXPLOSION MONITORING: SIMULATION OF UNDERGROUND NUCLEAR EXPLOSIONS AND EARTHQUAKES USING HYDRODYNAMIC AND ANELASTIC SIMULATIONS, HIGH-PERFORMANCE COMPUTING AND THREE-DIMENSIONAL EARTH MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Rodgers, A; Vorobiev, O; Petersson, A; Sjogreen, B

    2009-07-06

    This paper describes new research being performed to improve understanding of seismic waves generated by underground nuclear explosions (UNEs) by using full waveform simulation, high-performance computing and three-dimensional (3D) earth models. The goal of this effort is to develop an end-to-end modeling capability covering the range of wave propagation required for nuclear explosion monitoring (NEM), from the buried nuclear device to the seismic sensor, and thereby to improve understanding of the physical basis and prediction capabilities of seismic observables for NEM, including source and path-propagation effects. We are pursuing research along three main thrusts. Firstly, we are modeling the nonlinear hydrodynamic response of geologic materials to underground explosions in order to better understand how source emplacement conditions impact the seismic waves that emerge from the source region and are ultimately observed hundreds or thousands of kilometers away. Empirical evidence shows that the amplitudes and frequency content of seismic waves at all distances are strongly impacted by the physical properties of the source region (e.g. density, strength, porosity). To model the near-source shock-wave motions of a UNE, we use GEODYN, an Eulerian Godunov (finite volume) code incorporating thermodynamically consistent nonlinear constitutive relations, including cavity formation, yielding, porous compaction, tensile failure, bulking and damage. In order to propagate motions to seismic distances we are developing a one-way coupling method to pass motions to WPP (a Cartesian anelastic finite difference code). Preliminary investigations of UNEs in canonical materials (granite, tuff and alluvium) confirm that emplacement conditions have a strong effect on seismic amplitudes and the generation of shear waves. Specifically, we find that motions from an explosion in high-strength, low-porosity granite have high compressional wave amplitudes and weak

  10. INL High Performance Building Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource-efficient structures that minimize the impact on the environment by using less energy and water, reducing solid waste and pollutants, and limiting the depletion of natural resources, while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design

  11. High performance systems

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, M.B. [comp.]

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  12. Danish High Performance Concretes

    DEFF Research Database (Denmark)

    Nielsen, M. P.; Christoffersen, J.; Frederiksen, J.

    1994-01-01

    In this paper the main results obtained in the research program High Performance Concretes in the 90's are presented. This program was financed by the Danish government and was carried out in cooperation between The Technical University of Denmark, several private companies, and Aalborg University. ... concretes, workability, ductility, and confinement problems...

  13. High performance homes

    DEFF Research Database (Denmark)

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    ... Consideration of all these factors is a precondition for a truly integrated practice and as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy...

  14. Analyzing Virtual Physics Simulations with Tracker

    Science.gov (United States)

    Claessens, Tom

    2017-12-01

    In the physics teaching community, Tracker is well known as a user-friendly open source video analysis software package, authored by Douglas Brown. With this tool, the user can trace markers indicated on a video or on stroboscopic photos and perform kinematic analyses. Tracker also includes a data modeling tool that allows one to fit theoretical equations of motion onto experimentally obtained data. In the field of particle mechanics, Tracker has been effectively used for learning and teaching about projectile motion, "toss up" and free-fall vertical motion, and to explain the principle of mechanical energy conservation. Tracker has also been successfully used in rigid body mechanics to interpret the results of experiments with rolling/slipping cylinders and moving rods. In this work, I propose an original method in which Tracker is used to analyze virtual computer simulations created with a physics-based motion solver, instead of analyzing video recordings or stroboscopic photos. This could be an interesting approach for studying kinematics and dynamics problems in physics education, in particular when there is no or only limited access to physical labs. I demonstrate the working method with a typical (but quite challenging) problem in classical mechanics: a slipping/rolling cylinder on a rough surface.
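
    The slipping/rolling cylinder mentioned at the end has a clean closed-form behaviour that any motion solver must reproduce. A small sketch that generates such a trajectory, which could then be fed to Tracker's data-modeling tool (parameter values are illustrative):

        import numpy as np

        def slipping_rolling_cylinder(v0=2.0, R=0.05, mu=0.3, g=9.81,
                                      dt=1e-3, t_end=1.0):
            """A uniform cylinder launched sliding (speed v0, no spin) on a
            rough surface: kinetic friction decelerates the centre
            (a = -mu*g) and spins the cylinder up (alpha = 2*mu*g/R, since
            I = m*R**2/2) until v = omega*R, after which it rolls without
            slipping at constant speed 2*v0/3."""
            x, v, w = 0.0, v0, 0.0
            traj = []
            for t in np.arange(0.0, t_end, dt):
                if v > w * R + 1e-9:                # still slipping
                    v -= mu * g * dt
                    w += 2 * mu * g / R * dt
                x += v * dt
                traj.append((t, x, v, w))
            return np.array(traj)                   # columns: t, x, v, omega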

  15. Plasma simulation studies using multilevel physics models

    International Nuclear Information System (INIS)

    Park, W.; Belova, E.V.; Fu, G.Y.

    2000-01-01

    The question of how to proceed toward ever more realistic plasma simulation studies using ever increasing computing power is addressed. The answer presented here is the M3D (Multilevel 3D) project, which has developed a code package with a hierarchy of physics levels that resolve increasingly complete subsets of phase-spaces and are thus increasingly more realistic. The rationale for the multilevel physics models is given. Each physics level is described and examples of its application are given. The existing physics levels are fluid models (3D configuration space), namely magnetohydrodynamic (MHD) and two-fluids; and hybrid models, namely gyrokinetic-energetic-particle/MHD (5D energetic particle phase-space), gyrokinetic-particle-ion/fluid-electron (5D ion phase-space), and full-kinetic-particle-ion/fluid-electron level (6D ion phase-space). Resolving electron phase-space (5D or 6D) remains a future project. Phase-space-fluid models are not used in favor of delta f particle models. A practical and accurate nonlinear fluid closure for noncollisional plasmas seems not likely in the near future

  16. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  17. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  18. Sonification of simulations in computational physics

    International Nuclear Information System (INIS)

    Vogt, K.

    2010-01-01

    Sonification is the translation of information for auditory perception, excluding speech itself. The cognitive performance of pattern recognition is striking for sound, and has too long been disregarded by the scientific mainstream. Examples of 'spontaneous sonification' and about 20 years of systematic research have proven that sonification provides a valuable tool for the exploration of scientific data. The data in this thesis stem from computational physics, where numerical simulations are applied to problems in physics. Prominent examples are spin models and lattice quantum field theories. The corresponding data lend themselves very well to innovative display methods: they are structured on discrete lattices, often stochastic, high-dimensional and abstract, and they provide huge amounts of data. Furthermore, they have no inherently perceptual dimension. When designing the sonification of simulation data, one has to make decisions on three levels, both for the data and the sound model: the level of meaning (phenomenological; metaphoric), of structure (in time and space), and of elements ('display units' vs. 'gestalt units'). The design usually proceeds as a bottom-up or top-down process. This thesis provides a 'toolbox' to help with these decisions. It describes tools that have proven particularly useful in the context of simulation data. An explicit method of top-down sonification design is the metaphoric sonification method, which is based on expert interviews. Furthermore, qualitative and quantitative evaluation methods are presented, on the basis of which a set of evaluation criteria is proposed. The translation between a scientific domain and the sound synthesis domain is elucidated by a sonification operator. For this formalization, a collection of notation modules is provided. Showcases are discussed in detail that were developed in the interdisciplinary research projects SonEnvir and QCD-audio, during the second Science By Ear workshop and during a

  19. High-Performance Networking

    CERN Multimedia

    CERN. Geneva

    2003-01-01

    The series will start with a historical introduction about what people saw as high performance message communication in their time and how that developed into today's "standard computer network communication". It will be followed by a far more technical part that uses the High Performance Computer Network standards of the 90's, with 1 Gbit/s systems, as an introduction to an in-depth explanation of the three new 10 Gbit/s network and interconnect technology standards that already exist or are emerging. Where necessary for a good understanding, some sidesteps will be included to explain important protocols, as well as necessary details of the Wide Area Network (WAN) standards concerned, including some basics of wavelength multiplexing (DWDM). Some remarks will be made concerning the rapidly expanding applications of networked storage.

  20. Simulation and computation in health physics training

    International Nuclear Information System (INIS)

    Lakey, S.R.A.; Gibbs, D.C.C.; Marchant, C.P.

    1980-01-01

    The Royal Naval College has devised a number of computer-aided learning programmes applicable to health physics, which include radiation shield design and optimisation, the environmental impact of a reactor accident, exposure levels produced by an inert radioactive gas cloud, and the prediction of radiation detector response in various radiation field conditions. Analogue computers are used on reduced or fast time scales because time-dependent phenomena are not always easily assimilated in real time. The build-up and decay of fission products, the dynamics of intake of radioactive material and reactor accident dynamics can be effectively simulated. It is essential to relate these simulations to real time, and the College applies a research reactor and an analytical phantom to this end. A special feature of the reactor is a chamber which can be supplied with Argon-41 from reactor exhaust gases to create a realistic gaseous contamination environment. Reactor accident situations are also taught using role-playing sequences carried out in real time in the emergency facilities associated with the research reactor. These facilities are outlined and the training technique illustrated with examples of the calculations and simulations. The training needs of the future are discussed, with emphasis on optimisation and cost-benefit analysis. (H.K.)
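
    The fission-product build-up and decay that such trainers simulate on compressed time scales follows the Bateman equations. For a two-member chain the closed form is short enough to write down; the half-lives below are illustrative, not tied to any nuclide in the course:

        import numpy as np

        def bateman_two_chain(t, nA0, lamA, lamB):
            """Closed-form build-up and decay for a chain A -> B -> stable,
            starting from pure A (valid for lamA != lamB)."""
            nA = nA0 * np.exp(-lamA * t)
            nB = nA0 * lamA / (lamB - lamA) * (np.exp(-lamA * t)
                                               - np.exp(-lamB * t))
            return nA, nB

        # e.g. a parent with 8 d half-life feeding a 2 d daughter
        t = np.linspace(0.0, 40.0, 401)                 # days
        nA, nB = bateman_two_chain(t, 1.0, np.log(2) / 8.0, np.log(2) / 2.0)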

  1. High Performance Concrete

    Directory of Open Access Journals (Sweden)

    Traian Oneţ

    2009-01-01

    Full Text Available The paper presents the latest studies and research accomplished in Cluj-Napoca related to high performance concrete, high strength concrete and self-compacting concrete. The purpose of this paper is to review the advantages and disadvantages of using a particular concrete type. Two concrete recipes are presented, namely one for the concrete used in rigid pavements for roads and another one for self-compacting concrete.

  2. High performance polymeric foams

    International Nuclear Information System (INIS)

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-01-01

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and polyethylene naphthalate). Two different methods were used to prepare the foam samples: high temperature expansion and a two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed using scanning electron microscopy

  3. Scalable high-performance algorithm for the simulation of exciton-dynamics. Application to the light harvesting complex II in the presence of resonant vibrational modes

    DEFF Research Database (Denmark)

    Kreisbeck, Christoph; Kramer, Tobias; Aspuru-Guzik, Alán

    2014-01-01

    ... high-performance many-core platforms using the Open Computing Language (OpenCL). For the light-harvesting complex II (LHC II) found in spinach, the HEOM results deviate from the predictions of approximate theories and clarify the time scale of the transfer process. We investigate the impact of resonantly

  4. Clojure high performance programming

    CERN Document Server

    Kumar, Shantanu

    2013-01-01

    This is a short, practical guide that will teach you everything you need to know to start writing high performance Clojure code.This book is ideal for intermediate Clojure developers who are looking to get a good grip on how to achieve optimum performance. You should already have some experience with Clojure and it would help if you already know a little bit of Java. Knowledge of performance analysis and engineering is not required. For hands-on practice, you should have access to Clojure REPL with Leiningen.

  5. High performance data transfer

    Science.gov (United States)

    Cottrell, R.; Fang, C.; Hanushevsky, A.; Kreuger, W.; Yang, W.

    2017-10-01

    The exponentially increasing need for high speed data transfer is driven by big data and cloud computing, together with the needs of data-intensive science, High Performance Computing (HPC), defense, the oil and gas industry, etc. We report on the Zettar ZX software. This has been developed since 2013 to meet these growing needs by providing high performance data transfer and encryption in a scalable, balanced, easy-to-deploy-and-use way while minimizing power and space utilization. In collaboration with several commercial vendors, Proofs of Concept (PoC) consisting of clusters have been put together using off-the-shelf components to test the ZX scalability and its ability to balance services using multiple cores and links. The PoCs are based on SSD flash storage that is managed by a parallel file system. Each cluster occupies 4 rack units. Using the PoCs, we have achieved almost 200 Gbps memory-to-memory between clusters over two 100 Gbps links, and 70 Gbps parallel file to parallel file with encryption over a 5000-mile 100 Gbps link.

  6. Incorporating Haptic Feedback in Simulation for Learning Physics

    Science.gov (United States)

    Han, Insook; Black, John B.

    2011-01-01

    The purpose of this study was to investigate the effectiveness of a haptic augmented simulation in learning physics. The results indicate that haptic augmented simulations, both the force and kinesthetic and the purely kinesthetic simulations, were more effective than the equivalent non-haptic simulation in providing perceptual experiences and…

  7. High performance sapphire windows

    Science.gov (United States)

    Bates, Stephen C.; Liou, Larry

    1993-02-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access in extreme environments. Through surface treatments and proper thermal stress design, single crystal sapphire can be a mechanically equivalent replacement for high strength steel. A prototype sapphire window and mounting system have been developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) has been achieved, and the initial specifications necessary for these gains have been defined. Failure testing of small windows has conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also contributes significantly to a larger effective strength. Phase 2 work will complete the specification and demonstration of these windows, and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will lead to many diagnostic capabilities not previously possible, as well as new applications for sapphire.

  8. High Performance Computing Software Applications for Space Situational Awareness

    Science.gov (United States)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative deconvolution (PCID) image enhancement software tool. Specifically, we have demonstrated order-of-magnitude speed-ups in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.
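
    PCID itself is a specialised code, but its family resemblance to classic iterative deconvolution is easy to show. A Richardson-Lucy sketch (not PCID) in Python:

        import numpy as np

        def richardson_lucy(img, psf, iters=30):
            """Richardson-Lucy iterative deconvolution via FFTs. img: blurred,
            nonnegative image; psf: centred, nonnegative point-spread function
            of the same shape, normalised to sum to 1."""
            psf_f = np.fft.rfft2(np.fft.ifftshift(psf))
            est = np.full(img.shape, img.mean())
            for _ in range(iters):
                blur = np.fft.irfft2(np.fft.rfft2(est) * psf_f, s=img.shape)
                ratio = img / np.maximum(blur, 1e-12)
                # multiply by the correlation of the ratio with the PSF
                est *= np.fft.irfft2(np.fft.rfft2(ratio) * np.conj(psf_f),
                                     s=img.shape)
            return est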

  9. High performance proton accelerators

    International Nuclear Information System (INIS)

    Favale, A.J.

    1989-01-01

    In concert with this theme, this paper briefly outlines how Grumman, over the past 4 years, has evolved from a company that designed and fabricated a Radio Frequency Quadrupole (RFQ) accelerator from Los Alamos National Laboratory (LANL) physics designs and specifications to a company that, as prime contractor, is designing, fabricating, assembling and commissioning the US Army Strategic Defense Command's (USA SDC) Continuous Wave Deuterium Demonstrator (CWDD) accelerator as a turn-key operation. In the case of the RFQ, LANL scientists performed the physics analysis, established the specifications, supported Grumman on the mechanical design, conducted the RFQ tuning and tested the RFQ at their laboratory. For the CWDD Program, Grumman has responsibility for the physics and engineering designs, assembly, testing and commissioning, albeit with the support of consultants from LANL, Lawrence Berkeley Laboratory (LBL) and Brookhaven National Laboratory. In addition, Culham Laboratory and LANL are team members on CWDD. LANL scientists have reviewed the physics design, as has a USA SDC review board. 9 figs

  10. High Performance Parallel Processing (HPPP) Finite Element Simulation of Fluid Structure Interactions Final Report CRADA No. TC-0824-94-A

    Energy Technology Data Exchange (ETDEWEB)

    Couch, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ziegler, D. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2018-01-24

    This project was a multi-partner CRADA, a partnership between Alcoa and LLNL. Alcoa developed a system of numerical simulation modules that provided accurate and efficient three-dimensional modeling of combined fluid dynamics and structural response.

  11. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1991-03-15

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour.

  12. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour

  13. Proceedings of the third meeting for A3 foresight program workshop on critical physics issues specific to steady state sustainment of high-performance plasmas

    International Nuclear Information System (INIS)

    Hu Liqun; Morita, Shigeru; Oh, Yeong-Kook

    2013-12-01

    To enhance close collaborations among scientists in three Asian countries (China, Japan and Korea), the A3 foresight program on Plasma Physics was launched in August 2012 under the auspices of JSPS (Japan), NRF (Korea) and NSFC (China). The main purpose of this project is to solve several key physics issues through joint experiments on three Asian advanced fully superconducting fusion devices (EAST in China, LHD in Japan and KSTAR in Korea) and other magnetic confinement devices, carrying out multi-faceted and complementary physics research. To summarize the progress and achievements of the first academic year under this A3 foresight program, this workshop was hosted by the Institute of Plasma Physics, Chinese Academy of Sciences, and held in Beijing during 19-24 May 2013. Collaborative research and communication with other A3 programs and bilateral programs, as well as the participation of young scientists, were encouraged in this workshop. The topics include steady state sustainment of magnetic configurations, edge and divertor plasma control, and confinement of alpha particles. This issue is the collection of 40 papers presented at the entitled meeting. All 40 of the presented papers are indexed individually. (J.P.N.)

  14. Proceeding of A3 foresight program seminar on critical physics issues specific to steady state sustainment of high-performance plasmas 2015

    International Nuclear Information System (INIS)

    Hu Liqun; Morita, Shigeru; Oh, Yeong-Kook

    2015-12-01

    To enhance close collaborations among scientists in three Asian countries (China, Japan and Korea), the A3 foresight program on Plasma Physics was launched in August 2012 under the auspices of JSPS (Japan), NRF (Korea) and NSFC (China). The main purpose of this project is to solve several key physics issues through joint experiments on three Asian advanced fully superconducting fusion devices (EAST in China, LHD in Japan and KSTAR in Korea) and other magnetic confinement devices, carrying out multi-faceted and complementary physics research. To summarize the progress and achievements of the second academic year under this A3 foresight program, the 6th workshop, hosted by the Institute of Plasma Physics, Chinese Academy of Sciences, was held in Nanning during 6-9 January 2015. Research collaboration carried out by young scientists was also encouraged, with the participation of graduate students. The three topics of steady state sustainment of magnetic configurations, edge and divertor plasma control, and confinement of alpha particles are mainly discussed, in addition to relevant studies in small devices. This issue is the collection of 41 papers presented at the entitled meeting; 39 of the presented papers are indexed individually. (J.P.N.)

  15. Physical Models and Virtual Reality Simulators in Otolaryngology.

    Science.gov (United States)

    Javia, Luv; Sardesai, Maya G

    2017-10-01

    The increasing role of simulation in the medical education of future otolaryngologists has followed suit with other surgical disciplines. Simulators make it possible for the resident to explore and learn in a safe and less stressful environment. The various subspecialties in otolaryngology use both physical simulators and virtual-reality simulators. Whereas physical simulators allow the operator to make direct contact with their components, virtual-reality simulators allow the operator to interact with a computer-generated environment. This article gives an overview of the various types of physical and virtual-reality simulators used in otolaryngology that have been reported in the literature.

  16. A prospective, randomized study addressing the need for physical simulation following virtual simulation

    International Nuclear Information System (INIS)

    Valicenti, Richard K.; Waterman, Frank M.; Corn, Benjamin W.; Curran, Walter J.

    1997-01-01

    Purpose: Conventional (physical) simulation is still widely used to verify that a treatment plan obtained by virtual (CT) simulation is accurately implemented. To evaluate the need for physical simulation, we prospectively randomized patients to undergo physical simulation or no additional simulation after virtual simulation. Methods and Materials: From July 1995 to September 1996, 75 patients underwent conformal four-field radiation therapy planning for prostate cancer with a commercial-grade CT simulator. The patients were randomized to undergo either port filming immediately following physical simulation or port filming alone. The precision of implementing the devised plan was evaluated by comparing simulator radiographs and/or port films against the digitally reconstructed radiographs (DRRs) for x, y, and z displacements of the isocenter. Changes in beam aperture were also prospectively evaluated. Results: Thirty-seven patients were randomized to undergo physical simulation and first-day port filming, and 38 had first-day treatment verification films only, without a physical simulation. Seventy-eight simulator radiographs and 195 first-day treatment port films were reviewed. There was no statistically significant reduction in treatment setup error (>5 mm) when patients underwent physical simulation following virtual simulation. No patient required resimulation, and there was no significant difference in changes of beam aperture. Conclusions: Following virtual simulation, physical simulation may not be necessary to accurately implement the conformal four-field technique. Because port filming appears to be sufficient to assure precise and reliable execution of a devised treatment plan, physical simulation may be eliminated from the process of CT-based planning when virtual simulation is available.

  17. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from the applications, architecture, and tools and methodologies points of view. We hope that this work will serve as a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g., computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  18. A new high-performance 3D multiphase flow code to simulate volcanic blasts and pyroclastic density currents: example from the Boxing Day event, Montserrat

    Science.gov (United States)

    Ongaro, T. E.; Clarke, A.; Neri, A.; Voight, B.; Widiwijayanti, C.

    2005-12-01

    For the first time, the dynamics of directed blasts from explosive lava-dome decompression have been investigated by means of transient, multiphase flow simulations in 2D and 3D. Multiphase flow models developed for the analysis of pyroclastic dispersal from explosive eruptions have so far been limited to 2D axisymmetric or Cartesian formulations, which cannot properly account for important 3D features of the volcanic system such as complex morphology and fluid turbulence. Here we use a new parallel multiphase flow code, named PDAC (Pyroclastic Dispersal Analysis Code) (Esposti Ongaro et al., 2005), capable of simulating the transient, 3D thermofluid-dynamics of pyroclastic dispersal produced by collapsing columns and volcanic blasts. The code solves the equations of the multiparticle flow model of Neri et al. (2003) on 3D domains extending up to several kilometres, and includes a new description of the boundary conditions over topography, which is automatically acquired from a DEM. The initial conditions are represented by a compact volume of gas and pyroclasts, with clasts of different sizes and densities, at high temperature and pressure. Different dome porosities and pressurization models were tested in 2D to assess the sensitivity of the results to the distribution of initial gas pressure, and to the total mass and energy stored in the dome, prior to 3D modeling. The simulations used topographies appropriate for the 1997 Boxing Day directed blast on Montserrat, which eradicated the village of St. Patrick's. Some simulations tested the runout of pyroclastic density currents over the ocean surface, corresponding to observations of over-water surges to distances of several km at both locations. The PDAC code was used to perform 3D simulations of the explosive event on the actual volcano topography. The results highlight the strong topographic control on the propagation of the dense pyroclastic flows, the triggering of thermal instabilities, and the elutriation

  19. The development of high performance numerical simulation code for transient groundwater flow and reactive solute transport problems based on local discontinuous Galerkin method

    International Nuclear Information System (INIS)

    Suzuki, Shunichi; Motoshima, Takayuki; Naemura, Yumi; Kubo, Shin; Kanie, Shunji

    2009-01-01

    The authors develop a numerical code based on the Local Discontinuous Galerkin Method for transient groundwater flow and reactive solute transport problems, in order to make three-dimensional performance assessment of radioactive waste repositories possible at the earliest stage. The Local Discontinuous Galerkin Method is a mixed finite element method, which is generally more accurate than standard finite element methods. In this paper, the developed numerical code is applied to several problems for which analytical solutions are available, in order to examine its accuracy and flexibility. The results of the simulations show that the new code gives highly accurate numerical solutions. (author)

  20. High performance germanium MOSFETs

    Energy Technology Data Exchange (ETDEWEB)

    Saraswat, Krishna [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States)]. E-mail: saraswat@stanford.edu; Chui, Chi On [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Krishnamohan, Tejas [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Kim, Donghyun [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Nayfeh, Ammar [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Pethe, Abhijit [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States)

    2006-12-15

    Ge is a very promising candidate as a future channel material for nanoscale MOSFETs due to its high mobility and thus a higher source injection velocity, which translates into higher drive current and smaller gate delay. However, for Ge to become mainstream, surface passivation and heterogeneous integration of crystalline Ge layers on Si must be achieved. We have demonstrated growth of fully relaxed, smooth, single-crystal Ge layers on Si using a novel multi-step growth and hydrogen anneal process without any graded buffer SiGe layer. Surface passivation of Ge has been achieved with its native oxynitride (GeO_xN_y) and high-permittivity (high-k) metal oxides of Al, Zr and Hf. High-mobility MOSFETs have been demonstrated in bulk Ge with high-k gate dielectrics and metal gates. However, due to their smaller bandgap and higher dielectric constant, most high-mobility materials suffer from large band-to-band tunneling (BTBT) leakage currents and worse short-channel effects. We present novel Si- and Ge-based heterostructure MOSFETs, which can significantly reduce the BTBT leakage currents while retaining high channel mobility, making them suitable for scaling into the sub-15 nm regime. Through full-band Monte Carlo, Poisson-Schrodinger and detailed BTBT simulations we show a dramatic reduction in BTBT and excellent electrostatic control of the channel, while maintaining very high drive currents in these highly scaled heterostructure DGFETs. Heterostructure MOSFETs with varying strained-Ge or SiGe thickness, Si cap thickness and Ge percentage were fabricated on bulk Si and SOI substrates. The ultra-thin (~2 nm) strained-Ge channel heterostructure MOSFETs exhibited >4x mobility enhancements over bulk Si devices and >10x BTBT reduction over surface-channel strained SiGe devices.

  1. High performance germanium MOSFETs

    International Nuclear Information System (INIS)

    Saraswat, Krishna; Chui, Chi On; Krishnamohan, Tejas; Kim, Donghyun; Nayfeh, Ammar; Pethe, Abhijit

    2006-01-01

    Ge is a very promising candidate as a future channel material for nanoscale MOSFETs due to its high mobility and thus a higher source injection velocity, which translates into higher drive current and smaller gate delay. However, for Ge to become mainstream, surface passivation and heterogeneous integration of crystalline Ge layers on Si must be achieved. We have demonstrated growth of fully relaxed, smooth, single-crystal Ge layers on Si using a novel multi-step growth and hydrogen anneal process without any graded buffer SiGe layer. Surface passivation of Ge has been achieved with its native oxynitride (GeO_xN_y) and high-permittivity (high-k) metal oxides of Al, Zr and Hf. High-mobility MOSFETs have been demonstrated in bulk Ge with high-k gate dielectrics and metal gates. However, due to their smaller bandgap and higher dielectric constant, most high-mobility materials suffer from large band-to-band tunneling (BTBT) leakage currents and worse short-channel effects. We present novel Si- and Ge-based heterostructure MOSFETs, which can significantly reduce the BTBT leakage currents while retaining high channel mobility, making them suitable for scaling into the sub-15 nm regime. Through full-band Monte Carlo, Poisson-Schrodinger and detailed BTBT simulations we show a dramatic reduction in BTBT and excellent electrostatic control of the channel, while maintaining very high drive currents in these highly scaled heterostructure DGFETs. Heterostructure MOSFETs with varying strained-Ge or SiGe thickness, Si cap thickness and Ge percentage were fabricated on bulk Si and SOI substrates. The ultra-thin (∼2 nm) strained-Ge channel heterostructure MOSFETs exhibited >4x mobility enhancements over bulk Si devices and >10x BTBT reduction over surface-channel strained SiGe devices.

  2. Designing and simulation smart multifunctional continuous logic device as a basic cell of advanced high-performance sensor systems with MIMO-structure

    Science.gov (United States)

    Krasilenko, Vladimir G.; Nikolskyy, Aleksandr I.; Lazarev, Alexander A.

    2015-01-01

    We propose a design and simulation of hardware realizations of smart multifunctional continuous logic devices (SMCLD) as advanced basic cells of sensor systems with MIMO structure for image processing and interconnection. The SMCLD realizes functions of two-valued, multi-valued and continuous logic with current inputs and current outputs. These advanced basic cells also realize nonlinear time-pulse transformation, analog-to-digital conversion and neural logic functions. Such elements have a number of advantages: high speed and reliability, simplicity, small power consumption, and a high integration level. The SMCLD design is based on current mirrors realized with 1.5 μm CMOS technology. With only 50-70 transistors, one photodiode and one LED, the proposed circuits are quite compact. Simulation of the NOT, MIN, MAX, equivalence (EQ), normalized summation, averaging and other functions implemented by the SMCLD showed that the level of the logical variables can range from 0.1 μA to 10 μA in low-power variants. The SMCLD has a power consumption below 1 mW and a processing time of about 1-11 μs at supply voltages of 2.4-3.3 V.
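
    The continuous-logic operations named above have simple mathematical definitions once signal levels are normalized. The following minimal Python sketch is our illustration on normalized values, not the authors' current-mirror circuitry; the equivalence formula used here is one common textbook definition.

      # Continuous-logic operations on normalized signal levels
      # (software illustration only -- the paper realizes these
      # with CMOS current mirrors, not code).
      I_LO, I_HI = 0.1e-6, 10e-6       # logic range from the abstract, in A

      def normalize(i_amps):
          """Map a current in [I_LO, I_HI] onto a logic level in [0, 1]."""
          return (i_amps - I_LO) / (I_HI - I_LO)

      def cl_not(x):                   # continuous negation
          return 1.0 - x

      def cl_min(x, y):                # continuous conjunction (MIN)
          return min(x, y)

      def cl_max(x, y):                # continuous disjunction (MAX)
          return max(x, y)

      def cl_eq(x, y):                 # one common continuous equivalence
          return 1.0 - abs(x - y)

      x, y = normalize(2e-6), normalize(7e-6)
      print(cl_min(x, y), cl_max(x, y), cl_not(x), cl_eq(x, y))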

  3. Developing iPad-Based Physics Simulations That Can Help People Learn Newtonian Physics Concepts

    Science.gov (United States)

    Lee, Young-Jin

    2015-01-01

    The aims of this study are: (1) to develop iPad-based computer simulations called iSimPhysics that can help people learn Newtonian physics concepts; and (2) to assess its educational benefits and pedagogical usefulness. To facilitate learning, iSimPhysics visualizes abstract physics concepts, and allows for conducting a series of computer…

  4. A task-based parallelism and vectorized approach to 3D Method of Characteristics (MOC) reactor simulation for high performance computing architectures

    Science.gov (United States)

    Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.

    2016-05-01

    In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
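
    The two levels of parallelism described above are easy to mimic in software: independent tracks become dynamically scheduled tasks, and the energy-group loop becomes one wide vector operation. The sketch below is our minimal Python analogue of that structure, with toy cross sections and track data rather than the authors' optimized proxy applications.

      # Task-based tracks plus a vectorized group loop, in miniature.
      from concurrent.futures import ThreadPoolExecutor
      import numpy as np

      N_GROUPS, N_SEGMENTS, N_TRACKS = 64, 1000, 512
      rng = np.random.default_rng(0)
      sigma_t = rng.uniform(0.1, 2.0, N_GROUPS)          # total cross sections
      q = rng.uniform(0.0, 1.0, (N_SEGMENTS, N_GROUPS))  # segment sources

      def sweep_track(track_id):
          """Transport sweep along one characteristic track."""
          psi = np.zeros(N_GROUPS)            # angular flux in every group
          for s in range(N_SEGMENTS):         # march along the track
              atten = np.exp(-sigma_t * 0.1)  # vectorized over all groups
              psi = psi * atten + q[s] * (1.0 - atten) / sigma_t
          return track_id, psi.sum()

      with ThreadPoolExecutor(max_workers=8) as pool:
          # tracks are handed to workers dynamically -> load balancing
          results = dict(pool.map(sweep_track, range(N_TRACKS)))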

  5. Chemical Adsorption and Physical Confinement of Polysulfides with the Janus-faced Interlayer for High-performance Lithium-Sulfur Batteries.

    Science.gov (United States)

    Chiochan, Poramane; Kaewruang, Siriroong; Phattharasupakun, Nutthaphon; Wutthiprom, Juthaporn; Maihom, Thana; Limtrakul, Jumras; Nagarkar, Sanjog; Horike, Satoshi; Sawangphruk, Montree

    2017-12-18

    We design a Janus-like interlayer with two different functional faces for suppressing the shuttle of soluble lithium polysulfides (LPSs) in lithium-sulfur batteries (LSBs). At the front face, conductive functionalized carbon fiber paper (f-CFP) bearing oxygen-containing groups (i.e., -OH and -COOH) on its surface is placed face to face with the sulfur cathode, serving as the first barrier: it accommodates the volume expansion during the cycling process, and the oxygen-containing groups can also adsorb the soluble LPSs via lithium bonds. At the back face, a crystalline coordination network of [Zn(H2PO4)2(TzH)2]_n (ZnPTz) coated on the back side of the f-CFP serves as the second barrier, retarding the remaining LPSs that pass through the front face via both physical confinement and chemical adsorption (i.e., Li bonding). The LSB using the Janus-like interlayer exhibits a high reversible discharge capacity of 1,416 mAh g^-1 at 0.1C with a low capacity fading of 0.05% per cycle, 92% capacity retention after 200 cycles and ca. 100% coulombic efficiency. The fully charged LSB cell can practically supply electricity to a spinning motor with a nominal voltage of 3.0 V for 28 min, demonstrating many potential applications.

  6. A prospective study to determine the need for physical simulation following virtual simulation

    International Nuclear Information System (INIS)

    Valicenti, R.K.; Waterman, F.M.; Corn, B.W.; Sweet, J.; Curran, W.J.

    1996-01-01

    Purpose: Virtual simulation is CT-based planning utilizing computed digitally reconstructed radiographs (DRRs) in a manner similar to conventional fluoroscopic simulation. However, conventional (physical) simulation is still widely used to assure precise implementation of the devised plan. To evaluate the need for performing physical simulation, we prospectively studied patients undergoing virtual simulation who either had or did not have a subsequent physical simulation. Materials and Methods: From July 1995 to February 1996, 48 patients underwent conformal 4-field radiation therapy for prostate cancer using a commercial-grade spiral CT simulator. All patients were immobilized in a foam body cast and positioned by using a fiducial laser marking system. Following prostate and seminal vesicle definition on a slice-by-slice basis, virtual simulation was performed. The isocenter defined by this process was marked on both the patient and the immobilization device before leaving the CT simulator room. The isocenter position of the devised plan was evaluated by three verification methods: physical simulation, first-day treatment port filming, and port filming immediately following physical simulation. Simulator radiographs and port films were compared against DRRs for x, y, and z deviations of the isocenter. These deviations were used as a measure of the implementation precision achieved by each verification method. Results: Thirty-seven patients underwent physical simulation and first-day port filming. Eleven had first-day treatment verification films only and never had a physical simulation. A total of 79 simulator radiographs and 126 first-day treatment port films were reviewed. There was significantly more setup error (≥ 5 mm) observed when the devised treatment was implemented in the treatment room as opposed to the physical simulator. The physical simulator did not lead to a significant reduction in setup error

  7. Integrating Simulated Physics and Device Virtualization in Control System Testbeds

    OpenAIRE

    Redwood , Owen; Reynolds , Jason; Burmester , Mike

    2016-01-01

    Part 3: INFRASTRUCTURE MODELING AND SIMULATION; Malware and forensic analyses of embedded cyber-physical systems are tedious, manual processes that testbeds are commonly not designed to support. Additionally, there are no formal methodologies for attesting the physics impact of embedded cyber-physical system malware; doing so is currently an art. This chapter describes a novel testbed design methodology that integrates virtualized embedded industrial control systems and physics simula...

  8. Simulation of granular soil behaviour using the bullet physics library

    OpenAIRE

    Izadi, Ehsan; Bezuijen, Adam

    2015-01-01

    A physics engine is computer software which provides a simulation of certain physical systems, such as rigid body dynamics, soft body dynamics and fluid dynamics. Physics engines were first developed for use in the animation and gaming industries; nevertheless, due to their fast calculation speed they are attracting more and more attention from researchers in the engineering fields. Since physics engines are capable of performing fast calculations on multibody rigid dynamic systems, soil particles ca...
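
    The study itself uses the Bullet library (C++); purely as an illustration of the idea, the sketch below drops a small packing of spherical "soil particles" onto a plane using Bullet's Python bindings (pybullet). Particle radii, masses and counts are arbitrary toy values, not the paper's granular model.

      # Rigid spheres settling under gravity with the Bullet engine.
      import pybullet as p

      p.connect(p.DIRECT)                  # headless physics server
      p.setGravity(0, 0, -9.81)

      plane = p.createCollisionShape(p.GEOM_PLANE)
      p.createMultiBody(0, plane)          # static ground (mass 0)

      sphere = p.createCollisionShape(p.GEOM_SPHERE, radius=0.01)
      particles = [
          p.createMultiBody(baseMass=0.001,
                            baseCollisionShapeIndex=sphere,
                            basePosition=[0.02 * i, 0.02 * j, 0.1 + 0.02 * k])
          for i in range(5) for j in range(5) for k in range(5)
      ]

      for _ in range(2000):                # let the packing settle
          p.stepSimulation()

      z_top = max(p.getBasePositionAndOrientation(b)[0][2] for b in particles)
      print("settled packing height:", z_top)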

  9. Rapid processing of data based on high-performance algorithms for solving inverse problems and 3D-simulation of the tsunami and earthquakes

    Science.gov (United States)

    Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.

    2012-04-01

    We consider new techniques and methods for earthquake- and tsunami-related problems, particularly inverse problems for the determination of tsunami source parameters, numerical simulation of long-wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon the issues of database management and destruction-scenario visualization. New approaches and strategies, as well as mathematical tools and software, are shown. The long joint investigations by researchers of the Institute of Computational Mathematics and Mathematical Geophysics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of the propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms have been developed for the operational determination of the origin and form of the tsunami source. The TSS system numerically simulates the source of a tsunami and/or earthquake and includes the possibility of solving both the direct and the inverse problem. It becomes possible to involve advanced mathematical results to improve models and to increase the resolution of inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors, as well as optimum computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use the optimization approach to solve it and SVD analysis to estimate the degree of ill-posedness and to find the quasi-solution. The software system we developed is intended to
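
    In outline, the SVD machinery mentioned above works as follows: for an ill-posed linear model A m = d, the small singular values of A amplify data noise, so a quasi-solution keeps only the leading part of the spectrum. The Python sketch below uses a synthetic matrix and data (not the actual tsunami source operator) to show the truncation in practice.

      # Truncated-SVD quasi-solution for an ill-posed linear problem.
      import numpy as np

      rng = np.random.default_rng(1)
      # synthetic ill-conditioned forward operator with decaying spectrum
      A = rng.standard_normal((100, 40)) @ np.diag(np.logspace(0, -8, 40)) \
          @ rng.standard_normal((40, 40))
      m_true = rng.standard_normal(40)
      d = A @ m_true + 1e-6 * rng.standard_normal(100)   # noisy observations

      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      print("condition number:", s[0] / s[-1])   # degree of ill-posedness

      k = int(np.sum(s > 1e-5 * s[0]))           # truncation level
      m_tsvd = Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])   # quasi-solution
      print("kept", k, "of", s.size, "singular values")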

  10. The path toward HEP High Performance Computing

    International Nuclear Information System (INIS)

    Apostolakis, John; Brun, René; Gheata, Andrei; Wenzel, Sandro; Carminati, Federico

    2014-01-01

    High Energy Physics code has been known for making poor use of high-performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'high performance' implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in memory consumption compared to the single-threaded version, together with sub-optimal handling of event-processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grained parallel approach. The talk will review the current optimisation activities within the SFT group, with a particular emphasis on the development perspectives towards a simulation framework able to profit

  11. Sophistication of computational science and fundamental physics simulations

    International Nuclear Information System (INIS)

    Ishiguro, Seiji; Ito, Atsushi; Usami, Shunsuke; Ohtani, Hiroaki; Sakagami, Hitoshi; Toida, Mieko; Hasegawa, Hiroki; Horiuchi, Ritoku; Miura, Hideaki

    2016-01-01

    The Numerical Experimental Reactor research project is composed of the following studies: (1) nuclear fusion simulation research with a focus on specific physical phenomena of specific equipment, (2) research on advanced simulation methods to increase predictability or expand the application range of simulation, (3) visualization as the foundation of simulation research, (4) research on advanced computational science such as parallel computing technology, and (5) research aiming at the elucidation of fundamental physical phenomena not limited to specific devices. Specifically, a wide range of research with medium- to long-term perspectives is being developed: (1) virtual-reality visualization, (2) upgrading of computational science such as multilayer simulation methods, (3) kinetic behavior of plasma blobs, (4) extended MHD theory and simulation, (5) basic plasma processes such as particle acceleration due to wave-particle interaction, and (6) research related to laser plasma fusion. This paper reviews the following items: (1) simultaneous visualization in virtual-reality space, (2) multilayer simulation of collisionless magnetic reconnection, (3) simulation of the microscopic dynamics of plasma coherent structures, (4) Hall MHD simulation of the LHD, (5) numerical analysis for the extension of MHD equilibrium and stability theory, (6) extended MHD simulation of 2D RT instability, (7) simulation of laser plasma, (8) simulation of shock waves and particle acceleration, and (9) study of the simulation of homogeneous isotropic MHD turbulent flow. (A.O.)

  12. Coupled multi-physics simulation frameworks for reactor simulation: A bottom-up approach

    International Nuclear Information System (INIS)

    Tautges, Timothy J.; Caceres, Alvaro; Jain, Rajeev; Kim, Hong-Jun; Kraftcheck, Jason A.; Smith, Brandon M.

    2011-01-01

    A 'bottom-up' approach to multi-physics frameworks is described, in which common interfaces to simulation data are developed first, and existing physics modules are then adapted to communicate through those interfaces. Physics modules read and write data through the common interfaces, which also provide access to common simulation services like parallel IO, mesh partitioning, etc. Multi-physics codes are assembled as a combination of physics modules, services, interface implementations, and driver code which coordinates calling these various pieces. Examples of various physics modules and services connected to this framework are given. (author)
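
    The pattern is easy to picture in code. The minimal Python sketch below (class and field names are our own invention, standing in for the mesh and field interfaces the paper describes) shows two physics modules that never call each other directly, only the shared data interface, with a driver coordinating them.

      class FieldInterface:
          """Common access point for simulation data (illustrative)."""
          def __init__(self):
              self._fields = {}
          def write(self, name, values):
              self._fields[name] = values
          def read(self, name):
              return self._fields[name]

      class HeatModule:
          def step(self, data):
              T = data.read("temperature")
              data.write("temperature", [t + 1.0 for t in T])  # toy physics

      class NeutronicsModule:
          def step(self, data):
              T = data.read("temperature")
              # toy feedback: power falls as temperature rises
              data.write("power", [100.0 / t for t in T])

      # the driver assembles modules that talk only via the interface
      data = FieldInterface()
      data.write("temperature", [300.0] * 4)
      for module in (HeatModule(), NeutronicsModule()):
          module.step(data)
      print(data.read("power"))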

  13. An introduction to computer simulation methods applications to physical systems

    CERN Document Server

    Gould, Harvey; Christian, Wolfgang

    2007-01-01

    Now in its third edition, this book teaches physical concepts using computer simulations. The text incorporates object-oriented programming techniques and encourages readers to develop good programming habits in the context of doing physics. Designed for readers at all levels, An Introduction to Computer Simulation Methods uses Java, currently the most popular programming language. Contents: Introduction, Tools for Doing Simulations, Simulating Particle Motion, Oscillatory Systems, Few-Body Problems: The Motion of the Planets, The Chaotic Motion of Dynamical Systems, Random Processes, The Dynamics of Many Particle Systems, Normal Modes and Waves, Electrodynamics, Numerical and Monte Carlo Methods, Percolation, Fractals and Kinetic Growth Models, Complex Systems, Monte Carlo Simulations of Thermal Systems, Quantum Systems, Visualization and Rigid Body Dynamics, Seeing in Special and General Relativity, Epilogue: The Unity of Physics. For all readers interested in developing programming habits in the context of doing phy...

  14. Hamiltonian circuited simulations in reactor physics

    International Nuclear Information System (INIS)

    Rio Hirowati Shariffudin

    2002-01-01

    In the assessment of suitability of reactor designs and in the investigations into reactor safety, the steady state of a nuclear reactor has to be studied carefully. The analysis can be done through mockup designs but this approach costs a lot of money and consumes a lot of time. A less expensive approach is via simulations where the reactor and its neutron interactions are modelled mathematically. Finite difference discretization of the diffusion operator has been used to approximate the steady state multigroup neutron diffusion equations. The steps include the outer scheme which estimates the resulting right hand side of the matrix equation, the group scheme which calculates the upscatter problem and the inner scheme which solves for the flux for a particular group. The Hamiltonian circuited simulations for the inner iterations of the said neutron diffusion equation enable the effective use of parallel computing, especially where the solutions of multigroup neutron diffusion equations involving two or more space dimensions are required. (Author)
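
    The nested outer/group/inner structure described above can be seen in miniature in a one-group analogue: the outer (power) iteration updates the fission source and eigenvalue, while an inner solve produces the group flux. In the Python sketch below, all material data are illustrative, and a direct solve stands in for the paper's parallel Hamiltonian-circuited inner iterations.

      # Power iteration for a 1D one-group diffusion eigenvalue problem.
      import numpy as np

      N, h = 50, 1.0                          # mesh cells, cell width (cm)
      D, sig_a, nu_sig_f = 1.0, 0.02, 0.025   # toy cross sections

      # finite-difference diffusion operator, zero-flux boundaries
      A = np.zeros((N, N))
      for i in range(N):
          A[i, i] = 2 * D / h**2 + sig_a
          if i > 0:
              A[i, i - 1] = -D / h**2
          if i < N - 1:
              A[i, i + 1] = -D / h**2

      phi, k = np.ones(N), 1.0
      for _ in range(200):                     # outer iterations
          source = nu_sig_f * phi / k          # fission source (RHS)
          phi_new = np.linalg.solve(A, source) # inner solve for the flux
          k *= phi_new.sum() / phi.sum()       # eigenvalue update
          phi = phi_new
      print("k-effective ~", round(k, 4))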

  15. Control of complex physically simulated robot groups

    Science.gov (United States)

    Brogan, David C.

    2001-10-01

    Actuated systems such as robots take many forms and sizes but each requires solving the difficult task of utilizing available control inputs to accomplish desired system performance. Coordinated groups of robots provide the opportunity to accomplish more complex tasks, to adapt to changing environmental conditions, and to survive individual failures. Similarly, groups of simulated robots, represented as graphical characters, can test the design of experimental scenarios and provide autonomous interactive counterparts for video games. The complexity of writing control algorithms for these groups currently hinders their use. A combination of biologically inspired heuristics, search strategies, and optimization techniques serve to reduce the complexity of controlling these real and simulated characters and to provide computationally feasible solutions.

  16. The online simulation of core physics in nuclear power plant

    International Nuclear Information System (INIS)

    Zhao Qiang

    2005-01-01

    The three-dimensional power distribution in the core is one of the most important status variables of a nuclear reactor. In order to monitor the 3-D in-core power distribution in a timely and accurate manner, an online simulation system for core physics was designed in this paper. The system combines core physics simulation with data from the plant and reactor instrumentation. The design consists of a hardware part and a software part: the online simulation system comprises a main simulation computer and a simulation operation station, and its software includes the real-time simulation support software, the system communication software, the simulation program and the simulation interface software. A two-group, three-dimensional neutron kinetics model with six groups of delayed neutrons was used for the real-time simulation of the reactor core physics. According to the characteristics of the nuclear reactor, the core was divided into many nodes, and the neutron equations were solved by the method of separation of variables. The input data from the plant and reactor instrumentation system consist of core thermal power, loop temperatures and pressure, control rod positions, boron concentration, core-exit thermocouple data, ex-core detector signals, and in-core flux detector signals. The data serve two purposes: one is to keep the model as close as possible to the current actual reactor condition, and the other is to calibrate the calculated power distribution. In this paper, the scheme of the online simulation system is introduced, and the simulation program is compiled under the real-time simulation support system. Compared with actual operational data, the preliminary simulation results were reasonable and correct. (author)
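
    The record's two-group 3-D nodal model is beyond a short sketch, but the six-group delayed-neutron treatment it mentions is conveniently shown in the standard point-kinetics form. In the Python sketch below, all kinetics constants are typical illustrative values rather than the plant's data.

      # Point kinetics with six delayed-neutron groups (explicit Euler).
      import numpy as np

      beta_i = np.array([2.1e-4, 1.4e-3, 1.3e-3, 2.6e-3, 7.5e-4, 2.7e-4])
      lam_i  = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])  # 1/s
      beta, Lam = beta_i.sum(), 2e-5     # total beta, generation time (s)

      def step(n, C, rho, dt=1e-4):
          """One explicit-Euler step of the point-kinetics equations."""
          dn = ((rho - beta) / Lam) * n + np.dot(lam_i, C)
          dC = beta_i / Lam * n - lam_i * C
          return n + dt * dn, C + dt * dC

      n, C = 1.0, beta_i / (lam_i * Lam)  # steady-state precursors
      for _ in range(10000):              # 1 s after a +10 pcm step
          n, C = step(n, C, rho=1e-4)
      print("relative power after 1 s:", n)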

  17. Computer Simulations for Lab Experiences in Secondary Physics

    Science.gov (United States)

    Murphy, David Shannon

    2012-01-01

    Physical science instruction often involves modeling natural systems, such as electricity that possess particles which are invisible to the unaided eye. The effect of these particles' motion is observable, but the particles are not directly observable to humans. Simulations have been developed in physics, chemistry and biology that, under certain…

  18. Electrical Storm Simulation to Improve the Learning Physics Process

    Science.gov (United States)

    Martínez Muñoz, Miriam; Jiménez Rodríguez, María Lourdes; Gutiérrez de Mesa, José Antonio

    2013-01-01

    This work is part of a research project whose main objective is to understand the impact that the use of Information and Communication Technology (ICT) has on the teaching and learning process in the subject of Physics. We will show that, with the use of a storm simulator, physics students improve their learning process: on one hand, they understand…

  19. High-performance scientific computing in the cloud

    Science.gov (United States)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  20. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering, ranging from CFD to computational physics and from chemistry to computer science, with a special emphasis on industrially relevant applications. Presenting findings of one of Europe's leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  1. APPLICATION OF INTERACTIVE ONLINE SIMULATIONS IN THE PHYSICS LABORATORY ACTIVITIES

    Directory of Open Access Journals (Sweden)

    Nina P. Dementievska

    2013-09-01

    Physics teachers should have professional competences aimed at the use of online technologies associated with physical experiments. The lack of teaching materials for teachers in the Ukrainian language means that virtual laboratories and computer simulations are used with traditional methods of education rather than with the latest innovative educational technologies, which may limit their use and greatly reduce their effectiveness. The Ukrainian teaching literature has practically no information about the assessment of students' competencies and research skills in laboratory activities. The aim of the article is to describe some components of the instructional design for a Web site with simulations for school physics experiments, and their evaluation.

  2. Enriching Triangle Mesh Animations with Physically Based Simulation.

    Science.gov (United States)

    Li, Yijing; Xu, Hongyi; Barbic, Jernej

    2017-10-01

    We present a system to combine arbitrary triangle mesh animations with physically based Finite Element Method (FEM) simulation, enabling control over the combination both in space and time. The input is a triangle mesh animation obtained using any method, such as keyframed animation, character rigging, 3D scanning, or geometric shape modeling. The input may be non-physical, crude or even incomplete. The user provides weights, specified using a minimal user interface, for how much physically based simulation should be allowed to modify the animation in any region of the model, and in time. Our system then computes a physically-based animation that is constrained to the input animation to the amount prescribed by these weights. This permits smoothly turning physics on and off over space and time, making it possible for the output to strictly follow the input, to evolve purely based on physically based simulation, and anything in between. Achieving such results requires a careful combination of several system components. We propose and analyze these components, including proper automatic creation of simulation meshes (even for non-manifold and self-colliding undeformed triangle meshes), converting triangle mesh animations into animations of the simulation mesh, and resolving collisions and self-collisions while following the input.

  3. Monte Carlo Simulation in Statistical Physics An Introduction

    CERN Document Server

    Binder, Kurt

    2010-01-01

    Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields (from physics and chemistry to traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. The fifth edition covers classical as well as quantum Monte Carlo methods. Furthermore, a new chapter on the sampling of free-energy landscapes has been added. To help students in their work, a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was awarded the Berni J. Alder CECAM Award for Computational Physics 2001 as well ...
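
    The kind of simulation the book introduces fits in a few lines: the Metropolis sketch below samples the 2D Ising model and estimates a thermodynamic quantity (the magnetization) from computer-generated random numbers. Lattice size, temperature and step count are arbitrary choices for illustration.

      # Metropolis Monte Carlo for the 2D Ising model.
      import numpy as np

      L, T, steps = 16, 2.0, 200_000
      rng = np.random.default_rng(0)
      spins = rng.choice([-1, 1], size=(L, L))

      for _ in range(steps):
          i, j = rng.integers(L, size=2)
          # energy change if spin (i, j) is flipped (periodic boundaries)
          nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
          dE = 2 * spins[i, j] * nb
          if dE <= 0 or rng.random() < np.exp(-dE / T):
              spins[i, j] *= -1            # accept the move

      print("magnetization per spin:", abs(spins.sum()) / L**2)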

  4. Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics

    Directory of Open Access Journals (Sweden)

    Daniel Laney

    2014-01-01

    This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3-5X can be applied without causing significant changes to important physical quantities. Rather than applying signal-processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time step to approximate the effects of tightly coupled compression, and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
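
    The methodology is easy to reproduce in miniature: compress the state after every time step, then judge the damage with a physics-based metric rather than a signal-processing norm. In the Python sketch below a uniform quantizer stands in for the paper's actual compressors, and energy drift in a set of harmonic oscillators serves as the physics metric; bit depth and step counts are arbitrary.

      # Tightly coupled lossy compression judged by a physics metric.
      import numpy as np

      def lossy(x, bits=12):
          """Uniform quantization as a stand-in lossy compressor."""
          lo, hi = x.min(), x.max()
          q = np.round((x - lo) / (hi - lo + 1e-30) * (2**bits - 1))
          return lo + q / (2**bits - 1) * (hi - lo)

      dt, steps = 1e-3, 5000
      x = np.linspace(0.5, 1.5, 64)        # 64 independent oscillators
      v = np.zeros(64)
      e0 = 0.5 * (v**2 + x**2).sum()       # initial total energy

      for _ in range(steps):               # symplectic Euler + compression
          v -= dt * x
          x += dt * v
          x, v = lossy(x), lossy(v)        # compress after each time step

      drift = abs(0.5 * (v**2 + x**2).sum() - e0) / e0
      print(f"relative energy drift after {steps} steps: {drift:.2e}")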

  5. Impact of detector simulation in particle physics collider experiments

    Science.gov (United States)

    Daniel Elvira, V.

    2017-06-01

    Through the last three decades, accurate simulation of the interactions of particles with matter and modeling of detector geometries has proven to be of critical importance to the success of the international high-energy physics (HEP) experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Center of Nuclear Research (CERN) Large Hadron Collider (LHC) was a determinant factor for these collaborations to deliver physics results of outstanding quality faster than any hadron collider experiment ever before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the precision of the physics results and publication turnaround, from data-taking to submission. It also presents estimates of the cost and economic impact of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data with increasingly complex detectors, taxing heavily the performance of simulation and reconstruction software. Consequently, exploring solutions to speed up simulation and reconstruction software to satisfy the growing demand of computing resources in a time of flat budgets is a matter that deserves immediate attention. The article ends with a short discussion on the potential solutions that are being considered, based on leveraging core count growth in multicore machines, using new generation coprocessors, and re-engineering HEP code for concurrency and parallel computing.

  6. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  7. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system in managing these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed of compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  8. Modelling of thermalhydraulics and reactor physics in simulators

    International Nuclear Information System (INIS)

    Miettinen, J.

    1994-01-01

    The evolution of thermalhydraulic analysis methods for analysis and simulator purposes has brought the thermohydraulic models in the two application areas closer together. In large analysis codes like RELAP5, TRAC, CATHARE and ATHLET, accuracy in calculating complicated phenomena has been emphasized, but in spite of large development efforts many generic problems remain unsolved. For simulator purposes, fast-running codes have been developed, though with only limited assessment effort behind them. These codes, however, have more simulator-friendly features than the large codes, such as portability and a modular code structure. In this respect, simulator experiences with the SMABRE code are discussed. Both large analysis codes and special simulator codes have their advantages in simulator applications. The evolution of reactor physics calculation methods in simulator applications started from simple point-kinetics models. For analysis purposes, accurate 1-D and 3-D codes capable of handling fast and complicated transients have been developed. For simulator purposes, the capability to simulate instruments has been emphasized, while dynamic simulation capability has been less significant. The approaches to 3-dimensionality in simulators still require considerable development before the accuracy of the analysis codes is reached. (orig.) (8 refs., 2 figs., 2 tabs.)

  9. RavenDB high performance

    CERN Document Server

    Ritchie, Brian

    2013-01-01

    RavenDB High Performance is a comprehensive yet concise tutorial. This book is for developers and software architects who are designing systems in order to achieve high performance right from the start. A basic understanding of RavenDB is recommended, but not required. While the book focuses on advanced topics, it does not assume that the reader has a great deal of prior knowledge of working with RavenDB.

  10. PREFACE: International conference on Computer Simulation in Physics and beyond (CSP2015)

    Science.gov (United States)

    2016-02-01

    The International Conference on Computer Simulations in Physics and beyond (CSP2015) was held from 6-10 September 2015 at the campus of the Moscow Institute for Electronics and Mathematics (MIEM), National Research University Higher School of Economics, Moscow. Computer simulation is an increasingly popular tool for scientific research, supplementing experimental and analytical approaches. The main goal of the conference is to contribute to the development of methods and algorithms that take into account trends in hardware development and that may support computationally intensive research. The conference also gave senior scientists and students the opportunity to talk with each other and exchange ideas and views on developments in the area of high-performance computing in science. We would like to take this opportunity to thank our sponsors: the Russian Foundation for Basic Research, the Federal Agency of Scientific Organizations, and the Higher School of Economics.

  11. Geometry simulation and physics with the CMS forward pixel detector

    Energy Technology Data Exchange (ETDEWEB)

    Parashar, N [Purdue University Calumet, Hammond, Indiana (United States)], E-mail: Neeti@fnal.gov

    2008-06-15

    The Forward Pixel Detector of CMS is an integral part of the Tracking system, which will play a key role in addressing the full physics potential of the collected data. It has a very complex geometry that encompasses the multilayer structure of its detector modules. This presentation describes the development of the geometry simulation for the Forward Pixel Detector. A new geometry package has been developed, which uses the detector description database (DDD) interface from XML (eXtensible Markup Language) to the GEANT simulation. This is necessary for the digitization and GEANT4 reconstruction software used for tracking. The expected physics performance is also discussed.

  12. Geometry simulation and physics with the CMS forward pixel detector

    International Nuclear Information System (INIS)

    Parashar, N

    2008-01-01

    The Forward Pixel Detector of CMS is an integral part of the Tracking system, which will play a key role in addressing the full physics potential of the collected data. It has a very complex geometry that encompasses the multilayer structure of its detector modules. This presentation describes the development of the geometry simulation for the Forward Pixel Detector. A new geometry package has been developed, which uses the detector description database (DDD) interface from XML (eXtensible Markup Language) to the GEANT simulation. This is necessary for the digitization and GEANT4 reconstruction software used for tracking. The expected physics performance is also discussed.

  13. Physics validation of detector simulation tools for LHC

    International Nuclear Information System (INIS)

    Beringer, J.

    2004-01-01

    Extensive studies aimed at validating the physics processes built into the detector simulation tools Geant4 and Fluka are in progress within all Large Hadron Collider (LHC) experiments, within the collaborations developing these tools, and within the LHC Computing Grid (LCG) Simulation Physics Validation Project, which has become the primary forum for these activities. This work includes detailed comparisons with test-beam data, as well as benchmark studies of simple geometries and materials with single incident particles of various energies for which experimental data are available. We give an overview of these validation activities, with emphasis on the latest results.

  14. THREE-DIMENSIONAL WEB-BASED PHYSICS SIMULATION APPLICATION FOR PHYSICS LEARNING TOOL

    Directory of Open Access Journals (Sweden)

    William Salim

    2012-10-01

    The purpose of this research is to present a multimedia application for doing simulations in physics. The application is a web-based simulator implemented with HTML5, WebGL, and JavaScript, in which the objects and the environment are rendered in three dimensions. The application is intended as a substitute for practicum activities. The current version covers only Newtonian mechanics. Questionnaires and literature study were used as the data-collection methods, and the Waterfall Method was used as the design method. The result is the Three-Dimensional Physics Simulator, an online web application whose key features are its three-dimensional design and a mentor-mentee relationship. According to users, the Three-Dimensional Physics Simulator already fulfils its goals in both design and functionality, and it helps them understand Newtonian mechanics through simulation. Improvements are still needed, because the application covers only Newtonian mechanics; in the future the simulation could also cover other physics topics, such as optics, energy, or electricity. Keywords: Simulation, Physics, Learning Tool, HTML5, WebGL

  15. Numerical simulation and physical aspects of supersonic vortex breakdown

    Science.gov (United States)

    Liu, C. H.; Kandil, O. A.; Kandil, H. A.

    1993-01-01

    Existing numerical simulations and physical aspects of subsonic and supersonic vortex-breakdown modes are reviewed. The solution to the problem of supersonic vortex breakdown is emphasized in this paper and carried out with the full Navier-Stokes equations for compressible flows. Numerical simulations of vortex-breakdown modes are presented in bounded and unbounded domains. The effects of different types of downstream-exit boundary conditions are studied and discussed.

  16. A simulated test of physical starting and reactor physics on zero power facility of PWR

    International Nuclear Information System (INIS)

    Yao Zewu; Ji Huaxiang; Chen Zhicheng; Yao Zhiquan; Chen Chen; Li Yuwen

    1995-01-01

    The core neutron economics was verified through experiments conducted at a zero-power reactor with baffles of various thicknesses. A simulated test of the physical starting of the Qinshan PWR is introduced, and the feasibility and safety of the programme are verified. The research provides a valuable foundation for developing the physical starting programme.

  17. Tsunami Early Warning via a Physics-Based Simulation Pipeline

    Science.gov (United States)

    Wilson, J. M.; Rundle, J. B.; Donnellan, A.; Ward, S. N.; Komjathy, A.

    2017-12-01

    Through independent efforts, physics-based simulations of earthquakes, tsunamis, and atmospheric signatures of these phenomenon have been developed. With the goal of producing tsunami forecasts and early warning tools for at-risk regions, we join these three spheres to create a simulation pipeline. The Virtual Quake simulator can produce thousands of years of synthetic seismicity on large, complex fault geometries, as well as the expected surface displacement in tsunamigenic regions. These displacements are used as initial conditions for tsunami simulators, such as Tsunami Squares, to produce catalogs of potential tsunami scenarios with probabilities. Finally, these tsunami scenarios can act as input for simulations of associated ionospheric total electron content, signals which can be detected by GNSS satellites for purposes of early warning in the event of a real tsunami. We present the most recent developments in this project.
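
    Schematically, the pipeline chains three simulators, with each stage's output forming the next stage's initial conditions. The Python sketch below shows only the plumbing; the function bodies are trivial placeholders for Virtual Quake, Tsunami Squares, and the ionospheric TEC model, and all numbers are invented.

      # Three-stage forecast pipeline (placeholder physics throughout).
      def simulate_earthquakes(years):
          """Stand-in for Virtual Quake: seafloor-uplift scenarios."""
          return [{"event": i, "uplift_m": 0.5 + 0.1 * i} for i in range(3)]

      def simulate_tsunami(scenario):
          """Stand-in for Tsunami Squares: wave height from uplift."""
          return {"event": scenario["event"],
                  "coastal_wave_m": 4.0 * scenario["uplift_m"]}

      def simulate_tec(tsunami):
          """Stand-in for the ionospheric stage: TEC perturbation."""
          return {"event": tsunami["event"],
                  "tec_perturbation": 0.02 * tsunami["coastal_wave_m"]}

      # earthquake catalog -> tsunami catalog -> GNSS-detectable signatures
      catalog = simulate_earthquakes(years=10_000)
      warnings = [simulate_tec(simulate_tsunami(s)) for s in catalog]
      print(warnings)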

  18. Route complexity and simulated physical ageing negatively influence wayfinding

    NARCIS (Netherlands)

    Zijlstra, Emma; Hagedoorn, Mariet; Krijnen, Wim P.; Schans, van der Cornelis; Mobach, Mark P.

    The aim of this age-simulation field experiment was to assess the influence of route complexity and physical ageing on wayfinding. Seventy-five people (aged 18-28) performed a total of 108 wayfinding tasks (i.e., 42 participants performed two wayfinding tasks and 33 performed one wayfinding task),

  19. Physics-based simulation models for EBSD: advances and challenges

    Science.gov (United States)

    Winkelmann, A.; Nolze, G.; Vos, M.; Salvat-Pujol, F.; Werner, W. S. M.

    2016-02-01

    EBSD has evolved into an effective tool for microstructure investigations in the scanning electron microscope. The purpose of this contribution is to give an overview of various simulation approaches for EBSD Kikuchi patterns and to discuss some of the underlying physical mechanisms.

  20. Three-dimensional simulations of free-electron laser physics

    International Nuclear Information System (INIS)

    McVey, B.D.

    1985-09-01

    A computer code has been developed to simulate three-dimensional free-electron laser physics. A mathematical formulation of the FEL equations is presented, and the numerical solution of the problem is described. Sample results from the computer code are discussed. 23 refs., 6 figs., 2 tabs

  1. Designing a High Performance Parallel Personal Cluster

    OpenAIRE

    Kapanova, K. G.; Sellier, J. M.

    2016-01-01

    Today, many scientific and engineering areas require high performance computing to perform computationally intensive experiments. For example, many advances in transport phenomena, thermodynamics, material properties, and computational chemistry and physics are possible only because of the availability of such large-scale computing infrastructures. Yet many challenges are still open. The costs of energy consumption and cooling, and competition for resources, have been some of the reasons why the scientifi...

  2. Physical habitat simulation system reference manual: version II

    Science.gov (United States)

    Milhous, Robert T.; Updike, Marlys A.; Schneider, Diane M.

    1989-01-01

    There are four major components of a stream system that determine the productivity of the fishery (Karr and Dudley 1978). These are: (1) flow regime, (2) physical habitat structure (channel form, substrate distribution, and riparian vegetation), (3) water quality (including temperature), and (4) energy inputs from the watershed (sediments, nutrients, and organic matter). The complex interaction of these components determines the primary production, secondary production, and fish population of the stream reach. The basic components and interactions needed to simulate fish populations as a function of management alternatives are illustrated in Figure I.1. The assessment process utilizes a hierarchical and modular approach combined with computer simulation techniques. The modular components represent the "building blocks" for the simulation. The quality of the physical habitat is a function of flow and, therefore, varies in quality and quantity over the range of the flow regime. The conceptual framework of the Incremental Methodology and guidelines for its application are described in "A Guide to Stream Habitat Analysis Using the Instream Flow Incremental Methodology" (Bovee 1982). Simulation of physical habitat is accomplished using the physical structure of the stream and streamflow. The modification of physical habitat by temperature and water quality is analyzed separately from physical habitat simulation. Temperature in a stream varies with the seasons, local meteorological conditions, stream network configuration, and the flow regime; thus, the temperature influences on habitat must be analyzed on a stream system basis. Water quality under natural conditions is strongly influenced by climate and the geological materials, with the result that there is considerable natural variation in water quality. When we add the activities of man, the range of possible water qualities becomes rather large. Consequently, water quality must also be analyzed on a
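
    The abstract's central idea, that habitat quality is a function of flow, is usually quantified in physical habitat simulation as a weighted usable area (WUA): each stream cell's area is weighted by suitability indices for the local depth and velocity. The sketch below illustrates that computation under assumed, purely illustrative suitability curves; it is not the method or data of the reference manual itself.

      import numpy as np

      def suitability(x, xs, ys):
          # Piecewise-linear habitat suitability curve mapping a hydraulic
          # variable to a 0..1 index.
          return np.interp(x, xs, ys)

      def weighted_usable_area(areas, depths, velocities):
          s_depth = suitability(depths, [0.0, 0.3, 1.0, 2.0], [0.0, 1.0, 1.0, 0.0])
          s_vel = suitability(velocities, [0.0, 0.2, 0.8, 1.5], [0.2, 1.0, 1.0, 0.0])
          return float(np.sum(areas * s_depth * s_vel))

      # One WUA value per simulated discharge traces habitat against flow.
      areas = np.array([10.0, 12.0, 8.0])       # cell areas, m^2
      depths = np.array([0.4, 0.9, 1.6])        # m
      velocities = np.array([0.3, 0.6, 1.1])    # m/s
      print(weighted_usable_area(areas, depths, velocities))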

  3. Design and experimentally measure a high performance metamaterial filter

    Science.gov (United States)

    Xu, Ya-wen; Xu, Jing-cheng

    2018-03-01

    The metamaterial filter is a promising optoelectronic device. In this paper, a metal/dielectric/metal (M/D/M) metamaterial filter is simulated and measured. Simulated results indicate that perfect impedance matching between the metamaterial filter and free space produces the transmission band. Measured results show that the proposed filter achieves high performance transmission for both TM and TE polarizations, and that a high transmission rate is maintained even at incidence angles up to 45°. Further measurements show that the transmission band can be widened, and its central frequency adjusted, by optimizing the structural parameters. The physical mechanism behind the central frequency shift is explained with an equivalent resonant circuit model.
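
    The equivalent-circuit explanation invoked above reduces to a resonance condition: for an effective inductance L and capacitance C set by the structure, the band centre sits near f0 = 1/(2*pi*sqrt(L*C)), so changing structural parameters shifts the band by changing L and C. A minimal numerical sketch, with purely illustrative component values:

      import math

      def resonant_frequency_hz(inductance_h, capacitance_f):
          # Centre frequency of a series LC resonance.
          return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

      L_eff = 1.0e-12                        # effective inductance, H (assumed)
      for C_eff in (1.0e-15, 0.5e-15):       # effective capacitance, F (assumed)
          f0 = resonant_frequency_hz(L_eff, C_eff)
          print(f"C = {C_eff:.1e} F -> f0 = {f0 / 1e12:.2f} THz")

    Halving the effective capacitance raises the centre frequency by a factor of sqrt(2), which is the kind of parameter-driven shift the abstract reports.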

  4. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference has been organized by the Hanoi Institute of Mathematics, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program "Complex Processes: Modeling, Simulation and Optimization", and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  5. Hygrothermal Numerical Simulation Tools Applied to Building Physics

    CERN Document Server

    Delgado, João M P Q; Ramos, Nuno M M; Freitas, Vasco Peixoto

    2013-01-01

    This book presents a critical review on the development and application of hygrothermal analysis methods to simulate the coupled transport processes of Heat, Air, and Moisture (HAM) transfer for one or multidimensional cases. During the past few decades there has been relevant development in this field of study and an increase in the professional use of tools that simulate some of the physical phenomena that are involved in Heat, Air and Moisture conditions in building components or elements. Although there is a significant amount of hygrothermal models referred in the literature, the vast majority of them are not easily available to the public outside the institutions where they were developed, which restricts the analysis of this book to only 14 hygrothermal modelling tools. The special features of this book are (a) a state-of-the-art of numerical simulation tools applied to building physics, (b) the boundary conditions importance, (c) the material properties, namely, experimental methods for the measuremen...

  6. Computational physics simulation of classical and quantum systems

    CERN Document Server

    Scherer, Philipp O J

    2017-01-01

    This textbook presents basic numerical methods and applies them to a large variety of physical models in multiple computer experiments. Classical algorithms and more recent methods are explained. Partial differential equations are treated generally comparing important methods, and equations of motion are solved by a large number of simple as well as more sophisticated methods. Several modern algorithms for quantum wavepacket motion are compared. The first part of the book discusses the basic numerical methods, while the second part simulates classical and quantum systems. Simple but non-trivial examples from a broad range of physical topics offer readers insights into the numerical treatment but also the simulated problems. Rotational motion is studied in detail, as are simple quantum systems. A two-level system in an external field demonstrates elementary principles from quantum optics and simulation of a quantum bit. Principles of molecular dynamics are shown. Modern boundary element methods are presented ...

  7. Computer simulation studies in condensed-matter physics 5. Proceedings

    International Nuclear Information System (INIS)

    Landau, D.P.; Mon, K.K.; Schuettler, H.B.

    1993-01-01

    As the role of computer simulations began to increase in importance, we sensed a need for a "meeting place" for both experienced simulators and neophytes to discuss new techniques and results in an environment which promotes extended discussion. As a consequence of these concerns, The Center for Simulational Physics established an annual workshop on Recent Developments in Computer Simulation Studies in Condensed-Matter Physics. This year's workshop was the fifth in this series, and the interest which the scientific community has shown demonstrates quite clearly the useful purpose which the series has served. The workshop was held at the University of Georgia, February 17-21, 1992, and these proceedings form a record of the workshop which is published with the goal of timely dissemination of the papers to a wider audience. The proceedings are divided into four parts. The first part contains invited papers which deal with simulational studies of classical systems and includes an introduction to some new simulation techniques and special purpose computers as well. A separate section of the proceedings is devoted to invited papers on quantum systems including new results for strongly correlated electron and quantum spin models. The third section is comprised of a single, invited description of a newly developed software shell designed for running parallel programs. The contributed presentations comprise the final chapter. (orig.). 79 figs

  8. High Performance Bulk Thermoelectric Materials

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Zhifeng [Boston College, Chestnut Hill, MA (United States)

    2013-03-31

    Over 13-plus years, we have carried out research on the electron pairing symmetry of superconductors; the growth and field-emission properties of carbon nanotubes and semiconducting nanowires; high performance thermoelectric materials; and other materials of interest. As a result of the research, we have published 104 papers and have trained six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  9. High-Performance Operating Systems

    DEFF Research Database (Denmark)

    Sharp, Robin

    1999-01-01

    Notes prepared for the DTU course 49421 "High Performance Operating Systems". The notes deal with quantitative and qualitative techniques for use in the design and evaluation of operating systems in computer systems for which performance is an important parameter, such as real-time applications, communication systems and multimedia systems.

  10. Engineering uses of physics-based ground motion simulations

    Science.gov (United States)

    Baker, Jack W.; Luco, Nicolas; Abrahamson, Norman A.; Graves, Robert W.; Maechling, Phillip J.; Olsen, Kim B.

    2014-01-01

    This paper summarizes validation methodologies focused on enabling ground motion simulations to be used with confidence in engineering applications such as seismic hazard analysis and dynamic analysis of structural and geotechnical systems. Numerical simulation of ground motion from large earthquakes, utilizing physics-based models of earthquake rupture and wave propagation, is an area of active research in the earth science community. Refinement and validation of these models require collaboration between earthquake scientists and engineering users, and testing/rating methodologies for simulated ground motions to be used with confidence in engineering applications. This paper provides an introduction to this field and an overview of current research activities being coordinated by the Southern California Earthquake Center (SCEC). These activities are related both to advancing the science and computational infrastructure needed to produce ground motion simulations, as well as to engineering validation procedures. Current research areas and anticipated future achievements are also discussed.

  11. EDITORIAL: High performance under pressure High performance under pressure

    Science.gov (United States)

    Demming, Anna

    2011-11-01

    nanoelectromechanical systems. Researchers in China exploit the coupling between piezoelectric and semiconducting properties of ZnO in an optimised diode device design [6]. They used a Schottky rather than an ohmic contact to depress the off current. In addition they used ZnO nanobelts that have dominantly polar surfaces instead of [0001] ZnO nanowires to enhance the on current under the small applied forces obtained by using an atomic force microscopy tip. The nanobelts have potential for use in random access memory devices. Much of the success in applying piezoresistivity in device applications stems from a deepening understanding of the mechanisms behind the process. A collaboration of researchers in the USA and China have proposed a new criterion for identifying the carrier type of individual ZnO nanowires based on the piezoelectric output of a nanowire when it is mechanically deformed by a conductive atomic force microscopy tip in contact mode [7]. The p-type/n-type shell/core nanowires give positive piezoelectric outputs, while the n-type nanowires produce negative piezoelectric outputs. In this issue Zhong Lin Wang and colleagues in Italy and the US report theoretical investigations into the piezoresistive behaviour of ZnO nanowires for energy harvesting. The work develops previous research on the ability of vertically aligned ZnO nanowires under uniaxial compression to power a nanodevice, in particular a pH sensor [8]. Now the authors have used finite element simulations to study the system. Among their conclusions they find that, for typical geometries and donor concentrations, the length of the nanowire does not significantly influence the maximum output piezopotential because the potential mainly drops across the tip. This has important implications for low-cost, CMOS- and microelectromechanical-systems-compatible fabrication of nanogenerators. The simulations also reveal the influence of the dielectric surrounding the nanowire on the output piezopotential, especially for

  12. Influence of baryonic physics in simulations of spiral galaxies

    International Nuclear Information System (INIS)

    Halle, A.

    2013-01-01

    The modelling of baryonic physics in numerical simulations of disc galaxies allows us to study the evolution of the different components, the physical state of the gas and the star formation. The present work aims at investigating in particular the role of the cold and dense molecular phase, which could act as a gas reservoir in the outer galaxy discs, with low star formation efficiency. After a presentation of galaxies with a focus on spiral galaxies, their interstellar medium and dynamical evolution, we review the current state of hydrodynamical numerical simulations and the implementation of baryonic physics. We then present the simulations we performed. These include cooling to low temperatures and a molecular hydrogen component. The cooling functions we use include cooling by metals, for temperatures as low as 100 K, and cooling by H2 due to collisions with H, He and other H2 molecules. We use a TreeSPH-type code that treats the stellar, gaseous and dark matter components as particles. We especially test the impact of the presence of molecular hydrogen in simulations with several feedback efficiencies, and find that molecular hydrogen allows in all cases some slow star formation to occur in the outer disc, with an effect on the vertical structure of the disc that is sensitive to the feedback efficiency. Molecular hydrogen is therefore able to play the role of gas reservoir in the external parts of spiral galaxies, which accrete gas from cosmic filaments throughout their lives.

  13. Research of Simulation in Character Animation Based on Physics Engine

    Directory of Open Access Journals (Sweden)

    Yang Yu

    2017-01-01

    Computer 3D character animation is essentially a product of computer graphics combined with robotics, physics, mathematics, and the arts, built on computer hardware, graphics algorithms, and other rapidly developing technologies. At present, mainstream character animation technology is based on manual keyframe production and on frame capture with motion-capture devices. 3D character animation is widely used not only in film, animation, and other commercial production but also in virtual reality, computer-aided education, flight simulation, engineering simulation, military simulation, and other fields. In this paper, we study physics-based character animation to address problems such as poor real-time interaction of characters, low utilization rates, and complex production. The paper examines kinematics, dynamics, and production techniques based on motion data. It also analyses ODE, PhysX, Bullet, and other mainstream physics engines, and studies OBB hierarchical bounding-box trees, AABB hierarchical trees, and other collision detection algorithms. Finally, character animation based on ODE is implemented, simulating the motion and collision of a tricycle.
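
    Among the collision detection algorithms the abstract lists, the AABB hierarchy bottoms out in a very simple primitive test: two axis-aligned bounding boxes overlap exactly when their intervals overlap on every axis. A minimal sketch of that leaf-level check (the tree traversal around it is omitted):

      from dataclasses import dataclass

      @dataclass
      class AABB:
          min_pt: tuple  # (x, y, z)
          max_pt: tuple  # (x, y, z)

      def aabb_overlap(a: AABB, b: AABB) -> bool:
          # Separating-axis logic: disjoint on any axis means no collision.
          return all(a.min_pt[i] <= b.max_pt[i] and b.min_pt[i] <= a.max_pt[i]
                     for i in range(3))

      wheel = AABB((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
      ground = AABB((0.5, -1.0, 0.5), (10.0, 0.2, 10.0))
      print(aabb_overlap(wheel, ground))  # True: the boxes intersect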

  14. An Integrated Simulation Module for Cyber-Physical Automation Systems

    Directory of Open Access Journals (Sweden)

    Francesco Ferracuti

    2016-05-01

    The integration of Wireless Sensors Networks (WSNs) into Cyber Physical Systems (CPSs) is an important research problem to solve in order to increase the performances, safety, reliability and usability of wireless automation systems. Due to the complexity of real CPSs, emulators and simulators are often used to replace the real control devices and physical connections during the development stage. The most widespread simulators are free, open source, expandable, flexible and fully integrated into mathematical modeling tools; however, the connection at a physical level and the direct interaction with the real process via the WSN are only marginally tackled; moreover, the simulated wireless sensor motes are not able to generate the analogue output typically required for control purposes. A new simulation module for the control of a wireless cyber-physical system is proposed in this paper. The module integrates the COntiki OS JAva Simulator (COOJA), a cross-level wireless sensor network simulator, and the LabVIEW system design software from National Instruments. The proposed software module has been called “GILOO” (Graphical Integration of Labview and cOOja). It allows one to develop and to debug control strategies over the WSN both using virtual or real hardware modules, such as the National Instruments Real-Time Module platform, the CompactRio, the Supervisory Control And Data Acquisition (SCADA), etc. To test the proposed solution, we decided to integrate it with one of the most popular simulators, i.e., the Contiki OS, and wireless motes, i.e., the Sky mote. As a further contribution, the Contiki Sky DAC driver and a new “Advanced Sky GUI” have been proposed and tested in the COOJA Simulator in order to provide the possibility to develop control over the WSN. To test the performances of the proposed GILOO software module, several experimental tests have been made, and interesting preliminary results are reported. The GILOO module has been

  15. An Integrated Simulation Module for Cyber-Physical Automation Systems.

    Science.gov (United States)

    Ferracuti, Francesco; Freddi, Alessandro; Monteriù, Andrea; Prist, Mariorosario

    2016-05-05

    The integration of Wireless Sensors Networks (WSNs) into Cyber Physical Systems (CPSs) is an important research problem to solve in order to increase the performances, safety, reliability and usability of wireless automation systems. Due to the complexity of real CPSs, emulators and simulators are often used to replace the real control devices and physical connections during the development stage. The most widespread simulators are free, open source, expandable, flexible and fully integrated into mathematical modeling tools; however, the connection at a physical level and the direct interaction with the real process via the WSN are only marginally tackled; moreover, the simulated wireless sensor motes are not able to generate the analogue output typically required for control purposes. A new simulation module for the control of a wireless cyber-physical system is proposed in this paper. The module integrates the COntiki OS JAva Simulator (COOJA), a cross-level wireless sensor network simulator, and the LabVIEW system design software from National Instruments. The proposed software module has been called "GILOO" (Graphical Integration of Labview and cOOja). It allows one to develop and to debug control strategies over the WSN both using virtual or real hardware modules, such as the National Instruments Real-Time Module platform, the CompactRio, the Supervisory Control And Data Acquisition (SCADA), etc. To test the proposed solution, we decided to integrate it with one of the most popular simulators, i.e., the Contiki OS, and wireless motes, i.e., the Sky mote. As a further contribution, the Contiki Sky DAC driver and a new "Advanced Sky GUI" have been proposed and tested in the COOJA Simulator in order to provide the possibility to develop control over the WSN. To test the performances of the proposed GILOO software module, several experimental tests have been made, and interesting preliminary results are reported. The GILOO module has been applied to a smart home

  16. Identifying High Performance ERP Projects

    OpenAIRE

    Stensrud, Erik; Myrtveit, Ingunn

    2002-01-01

    Learning from high performance projects is crucial for software process improvement. Therefore, we need to identify outstanding projects that may serve as role models. It is common to measure productivity as an indicator of performance. It is vital that productivity measurements deal correctly with variable returns to scale and multivariate data. Software projects generally exhibit variable returns to scale, and the output from ERP projects is multivariate. We propose to use Data Envelopment ...

  17. Neo4j high performance

    CERN Document Server

    Raj, Sonal

    2015-01-01

    If you are a professional or enthusiast who has a basic understanding of graphs or has basic knowledge of Neo4j operations, this is the book for you. Although it is targeted at an advanced user base, this book can be used by beginners as it touches upon the basics. So, if you are passionate about taming complex data with the help of graphs and building high performance applications, you will be able to get valuable insights from this book.

  18. Computational Physics Simulation of Classical and Quantum Systems

    CERN Document Server

    Scherer, Philipp O. J

    2010-01-01

    This book encapsulates the coverage for a two-semester course in computational physics. The first part introduces the basic numerical methods while omitting mathematical proofs but demonstrating the algorithms by way of numerous computer experiments. The second part specializes in simulation of classical and quantum systems with instructive examples spanning many fields in physics, from a classical rotor to a quantum bit. All program examples are realized as Java applets ready to run in your browser and do not require any programming skills.

  19. Physics Detector Simulation Facility (PDSF) architecture/utilization

    International Nuclear Information System (INIS)

    Scipioni, B.

    1993-05-01

    The current systems architecture for the SSCL's Physics Detector Simulation Facility (PDSF) is presented. Systems analysis data are presented and discussed; in particular, these data show how effectively the facility is utilized to meet the needs of physics computing, especially as concerns parallel architecture and processing. Detailed design plans for the highly networked, symmetric, parallel, UNIX workstation-based facility are given and discussed in light of the design philosophy. Included are network, CPU, disk, router, concentrator, tape, user and job capacities and throughput.

  20. Computational physics. Simulation of classical and quantum systems

    Energy Technology Data Exchange (ETDEWEB)

    Scherer, Philipp O.J. [TU Muenchen (Germany). Physikdepartment T38

    2010-07-01

    This book encapsulates the coverage for a two-semester course in computational physics. The first part introduces the basic numerical methods while omitting mathematical proofs but demonstrating the algorithms by way of numerous computer experiments. The second part specializes in simulation of classical and quantum systems with instructive examples spanning many fields in physics, from a classical rotor to a quantum bit. All program examples are realized as Java applets ready to run in your browser and do not require any programming skills. (orig.)

  1. Modern industrial simulation tools: Kernel-level integration of high performance parallel processing, object-oriented numerics, and adaptive finite element analysis. Final report, July 16, 1993--September 30, 1997

    Energy Technology Data Exchange (ETDEWEB)

    Deb, M.K.; Kennon, S.R.

    1998-04-01

    A cooperative R&D effort between industry and the US government, this project, under the HPPP (High Performance Parallel Processing) initiative of the Dept. of Energy, started the investigation into parallel object-oriented (OO) numerics. The basic goal was to research and utilize emerging technologies to create a physics-independent computational kernel for applications using the adaptive finite element method. The industrial team included Computational Mechanics Co., Inc. (COMCO) of Austin, TX (as the primary contractor), Scientific Computing Associates, Inc. (SCA) of New Haven, CT, Texaco and CONVEX. Sandia National Laboratory (Albq., NM) was the technology partner from the government side. COMCO had the responsibility for the main kernel design and development, SCA had the lead in parallel solver technology, and Sandia's main contribution to this venture was guidance on OO technologies. CONVEX and Texaco supported the partnership with hardware resources and application knowledge, respectively. As such, a minimum of fifty-percent cost-sharing was provided by the industry partnership during this project. This report describes the R&D activities and provides some details about the prototype kernel and example applications.

  2. Physics and detector simulation facility Type O workstation specifications

    International Nuclear Information System (INIS)

    Chartrand, G.; Cormell, L.R.; Hahn, R.; Jacobson, D.; Johnstad, H.; Leibold, P.; Marquez, M.; Ramsey, B.; Roberts, L.; Scipioni, B.; Yost, G.P.

    1990-11-01

    This document specifies the requirements for the front-end network of workstations of a distributed computing facility. This facility will be needed to perform the physics and detector simulations for the design of Superconducting Super Collider (SSC) detectors, and other computations in support of physics and detector needs. A detailed description of the computer simulation facility is given in the overall system specification document. This document provides revised subsystem specifications for the network of monitor-less Type 0 workstations; the requirements specified here supersede those given previously. In Section 2 a brief functional description of the facility and its use is provided. The list of detailed specifications (vendor requirements) is given in Section 3 and the qualifying requirements (benchmarks) are described in Section 4.

  3. Introduction to statistical physics and to computer simulations

    CERN Document Server

    Casquilho, João Paulo

    2015-01-01

    Rigorous and comprehensive, this textbook introduces undergraduate students to simulation methods in statistical physics. The book covers a number of topics, including the thermodynamics of magnetic and electric systems; the quantum-mechanical basis of magnetism; ferrimagnetism, antiferromagnetism, spin waves and magnons; liquid crystals as a non-ideal system of technological relevance; and diffusion in an external potential. It also covers hot topics such as cosmic microwave background, magnetic cooling and Bose-Einstein condensation. The book provides an elementary introduction to simulation methods through algorithms in pseudocode for random walks, the 2D Ising model, and a model liquid crystal. Any formalism is kept simple and derivations are worked out in detail to ensure the material is accessible to students from subjects other than physics.

  4. Enhanced Verification Test Suite for Physics Simulation Codes

    Energy Technology Data Exchange (ETDEWEB)

    Kamm, J R; Brock, J S; Brandon, S T; Cotrell, D L; Johnson, B; Knupp, P; Rider, W; Trucano, T; Weirs, V G

    2008-10-10

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) Hydrodynamics; (b) Transport processes; and (c) Dynamic strength-of-materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code be evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary--but not sufficient--step that builds confidence in physics and engineering simulation codes. More complicated test cases, including physics models of
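
    Point (5)'s "acceptance criterion" is, in practice, often a check on the observed order of convergence: run the code on two grids refined by a ratio r, measure the error against the benchmark solution on each, and verify that the observed order matches the scheme's design order. A minimal sketch with illustrative numbers (not taken from the test suite):

      import math

      def observed_order(error_coarse, error_fine, refinement_ratio):
          # p = log(e_coarse / e_fine) / log(r)
          return math.log(error_coarse / error_fine) / math.log(refinement_ratio)

      # e.g., a nominally second-order hydrodynamics scheme:
      e_h, e_h2 = 4.0e-3, 1.1e-3   # L1 errors on grids h and h/2 (assumed)
      p = observed_order(e_h, e_h2, 2.0)
      print(f"observed order = {p:.2f}")  # ~1.86; compare with the design order of 2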

  5. Coupled Multi-physical Simulations for the Assessment of Nuclear Waste Repository Concepts: Modeling, Software Development and Simulation

    Science.gov (United States)

    Massmann, J.; Nagel, T.; Bilke, L.; Böttcher, N.; Heusermann, S.; Fischer, T.; Kumar, V.; Schäfers, A.; Shao, H.; Vogel, P.; Wang, W.; Watanabe, N.; Ziefle, G.; Kolditz, O.

    2016-12-01

    As part of the German site selection process for a high-level nuclear waste repository, different repository concepts in the geological candidate formations rock salt, clay stone and crystalline rock are being discussed. An open assessment of these concepts using numerical simulations requires physical models capturing the individual particularities of each rock type and associated geotechnical barrier concept to a comparable level of sophistication. In a joint work group of the Helmholtz Centre for Environmental Research (UFZ) and the German Federal Institute for Geosciences and Natural Resources (BGR), scientists of the UFZ are developing and implementing multiphysical process models while BGR scientists apply them to large scale analyses. The advances in simulation methods for waste repositories are incorporated into the open-source code OpenGeoSys. Here, recent application-driven progress in this context is highlighted. A robust implementation of visco-plasticity with temperature-dependent properties into a framework for the thermo-mechanical analysis of rock salt will be shown. The model enables the simulation of heat transport along with its consequences on the elastic response as well as on primary and secondary creep or the occurrence of dilatancy in the repository near field. Transverse isotropy, non-isothermal hydraulic processes and their coupling to mechanical stresses are taken into account for the analysis of repositories in clay stone. These processes are also considered in the near field analyses of engineered barrier systems, including the swelling/shrinkage of the bentonite material. The temperature-dependent saturation evolution around the heat-emitting waste container is described by different multiphase flow formulations. For all mentioned applications, we illustrate the workflow from model development and implementation, over verification and validation, to repository-scale application simulations using methods of high performance computing.

  6. Enhanced verification test suite for physics simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Kamm, James R.; Brock, Jerry S.; Brandon, Scott T.; Cotrell, David L.; Johnson, Bryan; Knupp, Patrick; Rider, William J.; Trucano, Timothy G.; Weirs, V. Gregory

    2008-09-01

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations.

  7. A system for designing and simulating particle physics experiments

    International Nuclear Information System (INIS)

    Zelazny, R.; Strzalkowski, P.

    1987-01-01

    In view of the rapid development of experimental facilities and their costs, the systematic design and preparation of particle physics experiments have become crucial. A software system is proposed as an aid for the experiment designer, mainly for experimental geometry analysis and experiment simulation. The following model is adopted: the description of an experiment is formulated in a language (here called XL) and stored by its processor in a data base. The language is based on the entity-relationship-attribute approach. The information contained in the data base can be reported and analysed by an analyser (called XA) and modifications can be made at any time. In particular, Monte Carlo methods can be used in experiment simulation, both for the physical phenomena in the experimental set-up and for detection analysis. The general idea of the system is based on the design concept of ISDOS project information systems. The characteristics of the simulation module are similar to those of the CERN Geant system, but some extensions are proposed. The system could be treated as a component of a larger, integrated software environment for the design of particle physics experiments, their monitoring and data processing. (orig.)

  8. Physics Simulations of fluids - a brief overview of Phoenix FD

    CERN Multimedia

    CERN. Geneva; Nikolov, Svetlin

    2014-01-01

    The presentation will briefly describe the simulation and rendering of fluids with Phoenix FD, and then proceed into implementation details. We will present our methods of parallelizing the core simulation algorithms and our utilization of the GPU. We will also show how we take advantage of computational fluid dynamics specifics in order to speed up the preview and final rendering, thus achieving a quick pipeline for the creation of various visual effects. About the speakers: Ivaylo Iliev is a Senior Software developer at Chaos Group and is the creator of the Phoenix FD simulator for fluid effects. He has a strong interest in physics and has worked on military simulators before focusing on visual effects. He has a Master's degree from the Varna Technical University. Svetlin Nikolov is a Senior Software developer at Chaos Group with keen interest in physics and artificial intelligence and 7 years of experience in the software industry. He comes from a game development background with a focu...

  9. Nuclear and Particle Physics Simulations: The Consortium of Upper-Level Physics Software

    Science.gov (United States)

    Bigelow, Roberta; Moloney, Michael J.; Philpott, John; Rothberg, Joseph

    1995-06-01

    The Consortium for Upper Level Physics Software (CUPS) has developed a comprehensive series of nine book/software packages that Wiley will publish in FY '95 and '96. CUPS is an international group of 27 physicists, all with extensive backgrounds in the research, teaching, and development of instructional software. The project is being supported by the National Science Foundation (PHY-9014548), and it has received other support from the IBM Corp., Apple Computer Corp., and George Mason University. The simulations being developed are: Astrophysics, Classical Mechanics, Electricity & Magnetism, Modern Physics, Nuclear and Particle Physics, Quantum Mechanics, Solid State, Thermal and Statistical, and Wave and Optics.

  10. Materials used to simulate physical properties of human skin.

    Science.gov (United States)

    Dąbrowska, A K; Rotaru, G-M; Derler, S; Spano, F; Camenzind, M; Annaheim, S; Stämpfli, R; Schmid, M; Rossi, R M

    2016-02-01

    For many applications in research, material development and testing, physical skin models are preferable to the use of human skin, because more reliable and reproducible results can be obtained. This article gives an overview of materials applied to model physical properties of human skin to encourage multidisciplinary approaches for more realistic testing and improved understanding of skin-material interactions. The literature databases Web of Science, PubMed and Google Scholar were searched using the terms 'skin model', 'skin phantom', 'skin equivalent', 'synthetic skin', 'skin substitute', 'artificial skin', 'skin replica', and 'skin model substrate.' Articles addressing material developments or measurements that include the replication of skin properties or behaviour were analysed. It was found that the most common materials used to simulate skin are liquid suspensions, gelatinous substances, elastomers, epoxy resins, metals and textiles. Nano- and micro-fillers can be incorporated in the skin models to tune their physical properties. While numerous physical skin models have been reported, most developments are research field-specific and based on trial-and-error methods. As the complexity of advanced measurement techniques increases, new interdisciplinary approaches are needed in future to achieve refined models which realistically simulate multiple properties of human skin. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Computational physics simulation of classical and quantum systems

    CERN Document Server

    Scherer, Philipp O J

    2013-01-01

    This textbook presents basic and advanced computational physics in a very didactic style. It contains well-presented, simple mathematical descriptions of many of the most important algorithms used in computational physics. The first part of the book discusses the basic numerical methods; a large number of exercises and computer experiments allows the reader to study the properties of these methods. The second part concentrates on the simulation of classical and quantum systems. It uses a rather general concept for the equation of motion which can be applied to ordinary and partial differential equations. Several classes of integration methods are discussed, including not only the standard Euler and Runge-Kutta methods but also multistep methods and the class of Verlet methods, which is introduced by studying the motion in Liouville space. Besides the classical methods, inverse interpolation is discussed, together with the p...

  12. Learning From Where Students Look While Observing Simulated Physical Phenomena

    Science.gov (United States)

    Demaree, Dedra

    2005-04-01

    The Physics Education Research (PER) Group at the Ohio State University (OSU) has developed Virtual Reality (VR) programs for teaching introductory physics concepts. In winter 2005, the PER group worked with OSU's cognitive science eye-tracking lab to probe which features students look at while using our VR programs. We see distinct differences in the features students fixate on depending upon whether or not they have formally studied the related physics. Students who first make predictions seem to fixate more on the relevant features of the simulation than those who do not, regardless of their level of education. It is known that students sometimes perform an experiment and report results consistent with their misconceptions but inconsistent with the experimental outcome. We see direct evidence of one student holding onto misconceptions despite fixating frequently on the information needed to understand the correct answer. Future studies using these technologies may prove valuable for tackling difficult questions regarding student learning.

  13. High performance computing system in the framework of the Higgs boson studies

    CERN Document Server

    Belyaev, Nikita; The ATLAS collaboration

    2017-01-01

    Higgs boson physics is one of the most important and promising fields of study in modern high energy physics. To perform precision measurements of the Higgs boson properties, fast and efficient tools for Monte Carlo event simulation are required. Due to the increasing amount of data and the growing complexity of the simulation software tools, the computing resources currently available for Monte Carlo simulation on the LHC GRID are not sufficient. One possibility to address this shortfall of computing resources is the usage of institutes' computer clusters, commercial computing resources and supercomputers. In this paper, a brief description of Higgs boson physics and of Monte Carlo generation and event simulation techniques is presented. A description of modern high performance computing systems and tests of their performance are also discussed. These studies have been performed on the Worldwide LHC Computing Grid and the Kurchatov Institute Data Processing Center, including Tier...

  14. ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS

    International Nuclear Information System (INIS)

    WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

    2002-01-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket components lifetime and availability

  15. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  16. 6th International Conference on High Performance Scientific Computing

    CERN Document Server

    Phu, Hoang; Rannacher, Rolf; Schlöder, Johannes

    2017-01-01

    This proceedings volume highlights a selection of papers presented at the Sixth International Conference on High Performance Scientific Computing, which took place in Hanoi, Vietnam on March 16-20, 2015. The conference was jointly organized by the Heidelberg Institute of Theoretical Studies (HITS), the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University, and the Vietnam Institute for Advanced Study in Mathematics, Ministry of Education. The contributions cover a broad, interdisciplinary spectrum of scientific computing and showcase recent advances in theory, methods, and practical applications. Subjects covered include numerical simulation, methods for optimization and control, parallel computing, and software development, as well as the applications of scientific computing in physics, mechanics, biomechanics and robotics, material science, hydrology, biotechnology, medicine, transport, scheduling, and in...

  17. High Performance Proactive Digital Forensics

    International Nuclear Information System (INIS)

    Alharbi, Soltan; Traore, Issa; Moa, Belaid; Weber-Jahnke, Jens

    2012-01-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events and continuously do so (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. Data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.
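
    The paper's own detectors (the parallel iterative z algorithm and the information-based methods) are not reproduced here; as a hedged stand-in, the sketch below shows the general shape of chunk-parallel outlier flagging that such a detection phase implies, using a plain z-score rule over event values.

      from multiprocessing import Pool
      import random
      import statistics

      def flag_outliers(chunk, mean, stdev, threshold=6.0):
          # Flag values whose z-score exceeds the threshold.
          return [x for x in chunk if abs(x - mean) / stdev > threshold]

      def parallel_outliers(values, n_workers=4, chunk_size=10_000):
          mean, stdev = statistics.fmean(values), statistics.pstdev(values)
          chunks = [values[i:i + chunk_size]
                    for i in range(0, len(values), chunk_size)]
          with Pool(n_workers) as pool:
              parts = pool.starmap(flag_outliers,
                                   [(c, mean, stdev) for c in chunks])
          return [x for part in parts for x in part]

      if __name__ == "__main__":
          data = [random.gauss(0.0, 1.0) for _ in range(50_000)] + [15.0]
          print(parallel_outliers(data))  # flags the injected 15.0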

  18. Preparing a voxel-simulator of Alderson Rando physical phantom

    International Nuclear Information System (INIS)

    Boia, Leonardo S.; Martins, Maximiano C.; Silva, Ademir X.; Salmon Junior, Helio A.; Soares, Alessandro F.N.S.

    2011-01-01

    There are nowadays various anthropomorphic phantoms used to simulate the transport of radiation through matter and the energy it deposits in human tissues and organs, because in-vitro dosimetry is very complicated or even impossible in some cases. In the present work we prepared a voxel-based computational phantom from computed tomography images of the Alderson Rando phantom. This phantom is one of the best-known human body simulators in ionizing radiation dosimetry, used for radioprotection studies and for dosimetry of radiotherapy and brachytherapy treatments. The preparation of a voxel simulator starts with image acquisition on a tomograph at COI/RJ (Clinicas Oncologicas Integradas). The images were generated with 1 mm cuts and collected for analysis. They were then processed in SAPDI (Sistema Automatizado de Processamento Digital de Imagem), which is based on parameters of the Hounsfield scale, to enhance image regions and facilitate segmentation. Next, the elements were discretized into voxel IDs using the Scan2MCNP software, which converts the images into a sequential text file of voxel IDs ready to be introduced into an MCNPX input; this set can also be turned into a matrix of voxel IDs and used in other Monte Carlo codes, such as Geant4, PENELOPE and EGSnrc. After this step, the simulator can reproduce the geometry of the physical phantom accurately. Computational techniques for inserting tumor and TLD geometries make it possible to study a large number of cases, which makes this simulator a useful research tool for many subjects. (author)
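
    The segmentation step described above, which maps CT grey values to voxel material IDs via the Hounsfield scale before export to a Monte Carlo input, can be illustrated with a minimal sketch. The thresholds and the output layout below are assumptions for illustration, not the actual parameters of SAPDI or Scan2MCNP.

      import numpy as np

      def segment_hu(ct_volume_hu):
          # Map Hounsfield units to material IDs (thresholds are illustrative).
          ids = np.zeros(ct_volume_hu.shape, dtype=np.uint8)  # 0 = air
          ids[ct_volume_hu > -400] = 1                        # soft tissue
          ids[ct_volume_hu > 300] = 2                         # bone
          return ids

      ct = np.random.randint(-1000, 1500, size=(4, 4, 4))     # fake 4x4x4 scan
      ids = segment_hu(ct)
      # Flatten to a sequential text file of voxel IDs, one row per line.
      np.savetxt("voxel_ids.txt", ids.reshape(-1, ids.shape[-1]), fmt="%d")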

  19. Preparing a voxel-simulator of Alderson Rando physical phantom

    Energy Technology Data Exchange (ETDEWEB)

    Boia, Leonardo S.; Martins, Maximiano C.; Silva, Ademir X., E-mail: lboia@con.ufrj.br, E-mail: ademir@con.ufrj.br [Programa de Engenharia Nuclear (PEN/COPPE/UFRJ). Universidade Federal do Rio de Janeiro, RJ (Brazil); Salmon Junior, Helio A., E-mail: heliosalmon@coinet.com.br [COI - Clinicas Oncologicas Integradas, MD.X Barra Medical Center, Rio de Janeiro, RJ (Brazil); Soares, Alessandro F.N.S., E-mail: afacure@cnen.gov.br [Comissao Nacional de Engenharia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil)

    2011-07-01

    There are nowadays various anthropomorphic phantoms used to simulate the transport of radiation through matter and the energy it deposits in human tissues and organs, because in-vitro dosimetry is very complicated or even impossible in some cases. In the present work we prepared a voxel-based computational phantom from computed tomography images of the Alderson Rando phantom. This phantom is one of the best-known human body simulators in ionizing radiation dosimetry, used for radioprotection studies and for dosimetry of radiotherapy and brachytherapy treatments. The preparation of a voxel simulator starts with image acquisition on a tomograph at COI/RJ (Clinicas Oncologicas Integradas). The images were generated with 1 mm cuts and collected for analysis. They were then processed in SAPDI (Sistema Automatizado de Processamento Digital de Imagem), which is based on parameters of the Hounsfield scale, to enhance image regions and facilitate segmentation. Next, the elements were discretized into voxel IDs using the Scan2MCNP software, which converts the images into a sequential text file of voxel IDs ready to be introduced into an MCNPX input; this set can also be turned into a matrix of voxel IDs and used in other Monte Carlo codes, such as Geant4, PENELOPE and EGSnrc. After this step, the simulator can reproduce the geometry of the physical phantom accurately. Computational techniques for inserting tumor and TLD geometries make it possible to study a large number of cases, which makes this simulator a useful research tool for many subjects. (author)

  20. Integrated plasma control for high performance tokamaks

    International Nuclear Information System (INIS)

    Humphreys, D.A.; Deranian, R.D.; Ferron, J.R.; Johnson, R.D.; LaHaye, R.J.; Leuer, J.A.; Penaflor, B.G.; Walker, M.L.; Welander, A.S.; Jayakumar, R.J.; Makowski, M.A.; Khayrutdinov, R.R.

    2005-01-01

    Sustaining high performance in a tokamak requires controlling many equilibrium shape and profile characteristics simultaneously with high accuracy and reliability, while suppressing a variety of MHD instabilities. Integrated plasma control, the process of designing high-performance tokamak controllers based on validated system response models and confirming their performance in detailed simulations, provides a systematic method for achieving and ensuring good control performance. For present-day devices, this approach can greatly reduce the need for machine time traditionally dedicated to control optimization, and can allow determination of high-reliability controllers prior to ever producing the target equilibrium experimentally. A full set of tools needed for this approach has recently been completed and applied to present-day devices including DIII-D, NSTX and MAST. This approach has proven essential in the design of several next-generation devices including KSTAR, EAST, JT-60SC, and ITER. We describe the method, results of design and simulation tool development, and recent research producing novel approaches to equilibrium and MHD control in DIII-D. (author)
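
    The "design on a validated response model, then confirm in simulation" workflow described above can be caricatured in a few lines: a PI controller regulating a single shape parameter through a first-order linear response model. The gains and time constant are illustrative placeholders; real tokamak response models are multivariable and validated against machine data.

      def simulate_closed_loop(kp=2.0, ki=1.0, tau=0.5, target=1.0,
                               dt=0.01, t_end=5.0):
          # Forward-Euler simulation of a PI loop around y' = (-y + u) / tau.
          y, integral, t = 0.0, 0.0, 0.0
          while t < t_end:
              error = target - y
              integral += error * dt
              u = kp * error + ki * integral   # PI control action
              y += dt * (-y + u) / tau         # first-order plant response
              t += dt
          return y

      print(f"final value: {simulate_closed_loop():.3f}")  # settles near target

    Verifying controller performance against such a model before running on the machine is the step that reduces the need for dedicated control-optimization machine time.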

  1. Dynamic simulation of flash drums using rigorous physical property calculations

    Directory of Open Access Journals (Sweden)

    F. M. Gonçalves

    2007-06-01

    The dynamics of flash drums is simulated using a formulation adequate for phase modeling with equations of state (EOS). The energy and mass balances are written as differential equations for the internal energy and the number of moles of each species. The algebraic equations of the model, solved at each time step, are those of a flash with specified internal energy, volume and mole numbers (UVN flash). A new aspect of our dynamic simulations is the use of direct iterations in the phase volumes (instead of pressure) for solving the algebraic equations. It was also found that an iterative procedure previously suggested in the literature for UVN flashes becomes unreliable close to phase boundaries, and a new alternative is proposed. Another unusual aspect of this work is that the model expressions, including the physical properties and their analytical derivatives, were quickly implemented using computer algebra.
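
    The structure described above, differential balances for U and N with an algebraic UVN flash solved at every step, is sketched below. The flash routine here is a deliberately crude placeholder (ideal-gas pressure, a fake caloric relation); a real implementation would iterate on the phase volumes with an EOS, as the abstract describes. All numbers are illustrative.

      def uvn_flash(U, V, N):
          # Placeholder UVN flash: returns temperature, pressure and vapour
          # fraction from internal energy U (J), volume V (m^3), moles N.
          T = 300.0 + U / (N * 30.0)                 # fake caloric relation
          P = N * 8.314 * T / V                      # ideal-gas stand-in
          beta = min(1.0, max(0.0, (T - 300.0) / 100.0))
          return T, P, beta

      def simulate(U0, N0, V, feed_mol_s, feed_h_j_mol, dt=0.1, steps=100):
          U, N = U0, N0
          for _ in range(steps):
              T, P, beta = uvn_flash(U, V, N)
              vap_out = beta * 0.01 * N              # assumed valve relation
              h_vap = 35000.0                        # assumed vapour enthalpy, J/mol
              N += dt * (feed_mol_s - vap_out)       # mole balance
              U += dt * (feed_mol_s * feed_h_j_mol - vap_out * h_vap)  # energy balance
          return T, P, N

      print(simulate(U0=9.0e5, N0=100.0, V=1.0,
                     feed_mol_s=1.0, feed_h_j_mol=30000.0))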

  2. gemcWeb: A Cloud Based Nuclear Physics Simulation Software

    Science.gov (United States)

    Markelon, Sam

    2017-09-01

    gemcWeb allows users to run nuclear physics simulations from the web. Being completely device agnostic, scientists can run simulations from anywhere with an Internet connection. Having a full user system, gemcWeb allows users to revisit and revise their projects, and share configurations and results with collaborators. gemcWeb is based on the simulation software gemc, which is based on standard Geant4. gemcWeb requires no C++, gemc, or Geant4 knowledge. A simple but powerful GUI allows users to configure their project from geometries and configurations stored on the deployment server. Simulations are then run on the server, with results being posted to the user and then securely stored. Python-based and open-source, the main version of gemcWeb is hosted internally at Jefferson National Laboratory and used by the CLAS12 and Electron-Ion Collider Project groups. However, as the software is open-source and hosted as a GitHub repository, an instance can be deployed on the open web or on any institution's intranet. An instance can be configured to host experiments specific to an institution, and the code base can be modified by any individual or group. Special thanks to: Maurizio Ungaro, PhD, creator of gemc; Markus Diefenthaler, PhD, advisor; and Kyungseon Joo, PhD, advisor.

  3. Physics Detector Simulation Facility Phase II system software description

    International Nuclear Information System (INIS)

    Scipioni, B.; Allen, J.; Chang, C.; Huang, J.; Liu, J.; Mestad, S.; Pan, J.; Marquez, M.; Estep, P.

    1993-05-01

    This paper presents the Physics Detector Simulation Facility (PDSF) Phase II system software. A key element in the design of a distributed computing environment for the PDSF has been the separation and distribution of the major functions. The facility has been designed to support batch and interactive processing, and to incorporate the file and tape storage systems. By distributing these functions, it is often possible to provide higher throughput and resource availability. Similarly, the design is intended to exploit event-level parallelism in an open distributed environment

  4. High performance parallel computers for science

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 Mflops (peak), 10 MByte single-board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction

  5. Implementation of interactive virtual simulation of physical systems

    International Nuclear Information System (INIS)

    Sanchez, H; Escobar, J J; Gonzalez, J D; Beltran, J

    2014-01-01

    Considering the limited availability of laboratories for physics teaching and the difficulties this causes for school students in Santa Marta, Colombia, we have developed software to generate greater student interaction with physical phenomena and improve their understanding. The system is built on the Model-View-ViewModel (MVVM) architecture, which shares the benefits of MVC. The pattern consists of three parts. The Model is responsible for the business logic. The View is the part the user sees and is most familiar with; its role is to display data to the user and allow manipulation of the application's data. The ViewModel sits between the Model and the View (analogous to the Controller in the MVC pattern); it implements the behaviour of the view in response to user actions and exposes the model's data in a form that the view can easily bind to. .NET Framework 4.0 and the Silverlight 4 and 5 packages are the main requirements for deploying the physical simulations, which are hosted in the web application and accessed through a web browser (Internet Explorer, Mozilla Firefox or Chrome). The implementation of this innovative application in educational institutions has shown that students improved their contextualization of physical phenomena
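    The MVVM wiring described above can be illustrated with a small, language-agnostic toy (the original system used .NET/Silverlight; all class and method names below are invented purely for illustration):

```python
# Toy MVVM sketch: Model = physics, ViewModel = bindable state + commands,
# View = presentation only. Names are hypothetical, not from the paper.

class Model:
    """Business logic: free fall under gravity (no air resistance)."""
    G = 9.81  # m/s^2
    def position(self, t):
        return 0.5 * self.G * t ** 2

class ViewModel:
    """Mediates between Model and View: exposes state and pushes updates."""
    def __init__(self, model):
        self._model = model
        self._observers = []
        self.time = 0.0
    def bind(self, callback):
        self._observers.append(callback)
    def advance(self, dt):          # "command" triggered by a user action
        self.time += dt
        y = self._model.position(self.time)
        for cb in self._observers:  # push updated state to all bound views
            cb(self.time, y)

class View:
    """Renders whatever the ViewModel publishes."""
    def __init__(self, viewmodel):
        viewmodel.bind(self.render)
    def render(self, t, y):
        print(f"t = {t:4.1f} s   fallen distance = {y:7.2f} m")

vm = ViewModel(Model())
View(vm)
for _ in range(3):
    vm.advance(0.5)
```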

  6. Physics validation studies for muon collider detector background simulations

    International Nuclear Information System (INIS)

    Morris, Aaron Owen

    2011-01-01

    Within the broad discipline of physics, the study of the fundamental forces of nature and the most basic constituents of the universe belongs to the field of particle physics. While frequently referred to as 'high-energy physics,' or by the acronym 'HEP,' particle physics is not driven just by the quest for ever-greater energies in particle accelerators. Rather, particle physics is seen as having three distinct areas of focus: the cosmic, intensity, and energy frontiers. These three frontiers all provide different, but complementary, views of the basic building blocks of the universe. Currently, the energy frontier is the realm of hadron colliders like the Tevatron at Fermi National Accelerator Laboratory (Fermilab) or the Large Hadron Collider (LHC) at CERN. While the LHC is expected to be adequate for explorations up to 14 TeV for the next decade, the long development lead time for modern colliders necessitates research and development efforts in the present for the next generation of colliders. This paper focuses on one such next-generation machine: a muon collider. Specifically, this paper focuses on Monte Carlo simulations of beam-induced backgrounds vis-a-vis detector region contamination. Initial validation studies of a few muon collider physics background processes using G4beamline have been undertaken and results presented. While these investigations have revealed a number of hurdles to getting G4beamline up to the level of more established simulation suites, such as MARS, the close communication between us, as users, and the G4beamline developer, Tom Roberts, has allowed for rapid implementation of user-desired features. The main example of user-desired feature implementation, as it applies to this project, is Bethe-Heitler muon production. Regarding the neutron interaction issues, we continue to study the specifics of how GEANT4 implements nuclear interactions. The GEANT4 collaboration has been contacted regarding the minor discrepancies in the neutron

  7. Development of high performance cladding

    International Nuclear Information System (INIS)

    Kiuchi, Kiyoshi

    2003-01-01

    Development of a superior next-generation light water reactor is required from general viewpoints such as improved safety, better economics, reduced radioactive waste and effective utilization of plutonium, by 2030, when conventional reactor plants will need to be replaced. At the Japan Atomic Energy Research Institute, work is under way to improve stainless steel cladding for conventional high burn-up reactors to more than 100 GWd/t, to develop manufacturing technology for a reduced-moderation light water reactor (RMWR) with a breeding ratio beyond 1.0, and to research water-materials interactions in a supercritical-pressure water-cooled reactor. Stable austenitic stainless steel has been selected for the fuel element cladding of the advanced boiling water reactor (ABWR); it offers superior irradiation resistance, corrosion resistance and mechanical strength. A hard neutron spectrum, with energies above 0.1 MeV, occurs in the core of the RMWR, as in a liquid metal fast breeder reactor (LMFBR). High performance cladding for the RMWR fuel elements must likewise provide irradiation resistance, corrosion resistance and mechanical strength. Slow strain rate tests (SSRT) of SUS 304 and SUS 316 are being carried out to study stress corrosion cracking (SCC). Irradiation tests in an LMFBR are intended to obtain data on irradiation damage to the cladding materials. (M. Suetake)

  8. High performance fuel technology development

    Energy Technology Data Exchange (ETDEWEB)

    Koon, Yang Hyun; Kim, Keon Sik; Park, Jeong Yong; Yang, Yong Sik; In, Wang Kee; Kim, Hyung Kyu [KAERI, Daejeon (Korea, Republic of)

    2012-01-15

    ○ Development of High Plasticity and Annular Pellet
      - Development of strong candidates for ultra-high burn-up fuel pellets as a PCI remedy
      - Development of fabrication technology for annular fuel pellets
    ○ Development of High Performance Cladding Materials
      - Irradiation testing of HANA claddings in the Halden research reactor and evaluation of their in-pile performance
      - Development of the final candidates for the next-generation cladding materials
      - Development of the manufacturing technology for dual-cooled fuel cladding tubes
    ○ Irradiated Fuel Performance Evaluation Technology Development
      - Development of a performance analysis code system for the dual-cooled fuel
      - Development of fuel performance-proving technology
    ○ Feasibility Studies on Dual-Cooled Annular Fuel Core
      - Analysis of the properties of a reactor core with dual-cooled fuel
      - Feasibility evaluation of the dual-cooled fuel core
    ○ Development of Design Technology for Dual-Cooled Fuel Structure
      - Definition of technical issues and invention of concepts for the dual-cooled fuel structure
      - Basic design and development of main structural components for dual-cooled fuel
      - Basic design of a dual-cooled fuel rod

  9. A New Approach to Monte Carlo Simulations in Statistical Physics

    Science.gov (United States)

    Landau, David P.

    2002-08-01

    Monte Carlo simulations [1] have become a powerful tool for the study of diverse problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, most often in the canonical ensemble, and over the past several decades enormous improvements have been made in performance. Nonetheless, difficulties arise near phase transitions, due to critical slowing down near 2nd order transitions and to metastability near 1st order transitions, and these complications limit the applicability of the method. We shall describe a new Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is known, all thermodynamic properties can be calculated. This approach can be extended to multi-dimensional parameter spaces and should be effective for systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc. Generalizations should produce a broadly applicable optimization tool. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E 64, 056101 (2001).
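    The random walk in energy space described above (the Wang-Landau algorithm) can be sketched for the 2D Ising model as follows; lattice size, flatness threshold, and the final modification factor are demo choices, far looser than production values (the original papers refine ln f down to about 1e-8):

```python
# Minimal Wang-Landau sketch for the 2D Ising model (periodic boundaries).
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
L = 4
N = L * L
s = rng.choice([-1, 1], size=(L, L))

def total_energy(s):
    # Each nearest-neighbour bond counted once via periodic shifts
    return int(-np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))))

E = total_energy(s)
log_g = defaultdict(float)      # running estimate of ln g(E)
lnf = 1.0                       # modification factor ln f

while lnf > 1e-3:
    hist = defaultdict(int)
    flat = False
    while not flat:
        for _ in range(100 * N):
            i, j = rng.integers(L), rng.integers(L)
            dE = 2 * s[i, j] * (s[(i+1) % L, j] + s[(i-1) % L, j]
                                + s[i, (j+1) % L] + s[i, (j-1) % L])
            # Accept the flip with probability min(1, g(E)/g(E'))
            dlg = log_g[E] - log_g[E + dE]
            if dlg >= 0.0 or rng.random() < np.exp(dlg):
                s[i, j] *= -1
                E += dE
            log_g[E] += lnf      # update density-of-states estimate
            hist[E] += 1         # and the visit histogram
        counts = np.array(list(hist.values()))
        flat = counts.min() > 0.8 * counts.mean()
    lnf /= 2.0                   # histogram flat: halve ln f, reset histogram

# log_g now approximates ln g(E) up to a constant; all thermodynamics follows,
# e.g. Z(T) = sum_E g(E) exp(-E/kT).
```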

  10. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  11. Multi-Physics Simulation of TREAT Kinetics using MAMMOTH

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, Mark; Gleicher, Frederick; Ortensi, Javier; Alberti, Anthony; Palmer, Todd

    2015-11-01

    With the advent of next generation reactor systems and new fuel designs, the U.S. Department of Energy (DOE) has identified the need for the resumption of transient testing of nuclear fuels. DOE has decided that the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory (INL) is best suited for future testing. TREAT is a thermal neutron spectrum nuclear test facility that is designed to test nuclear fuels in transient scenarios. These transient fuel tests range from simple temperature transients to full fuel-melt accidents. The current TREAT core is driven by highly enriched uranium (HEU) dispersed in a graphite matrix (1:10000 U-235/C atom ratio). At the center of the core, fuel is removed, allowing for the insertion of an experimental test vehicle. TREAT's design provides experimental flexibility and inherent safety during neutron pulsing. This safety stems from the graphite in the driver fuel having a strong negative temperature coefficient of reactivity, resulting from a thermal Maxwellian shift with increased leakage, as well as from the graphite acting as a temperature sink. Air cooling is available, but is generally used post-transient for heat removal. DOE and INL have expressed a desire to develop a simulation capability that will accurately model the experiments before they are irradiated at the facility, with an emphasis on effective and safe operation while minimizing experimental time and cost. At INL, the Multi-physics Object Oriented Simulation Environment (MOOSE) has been selected as the model development framework for this work. This paper describes the results of preliminary simulations of a TREAT fuel element under transient conditions using the MOOSE-based MAMMOTH reactor physics tool.
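    As a much-reduced illustration of the self-limiting pulse physics described above, a zero-dimensional point-kinetics model with one delayed-neutron group and an adiabatic negative temperature feedback already reproduces the qualitative behaviour (this is not MAMMOTH; every parameter value below is invented for the demo):

```python
# Toy point kinetics with temperature feedback: a prompt-critical reactivity
# step produces a pulse that the negative feedback shuts down as T rises.
import numpy as np
from scipy.integrate import solve_ivp

beta, Lam, lam = 0.007, 5e-4, 0.08   # delayed fraction, generation time (s), decay (1/s)
alpha_T = -1e-5                      # reactivity per kelvin (graphite-like feedback)
heat_cap = 1.0e3                     # lumped heat capacity (power units * s / K)
rho_step = 0.009                     # inserted reactivity, > beta: prompt critical

def rhs(t, y):
    n, c, T = y                      # power, precursor concentration, temp. rise
    rho = rho_step + alpha_T * T     # feedback terminates the pulse
    return [(rho - beta) / Lam * n + lam * c,
            beta / Lam * n - lam * c,
            n / heat_cap]            # adiabatic heating by fission power

y0 = [1.0, beta / (Lam * lam), 0.0]  # equilibrium precursors at unit power
sol = solve_ivp(rhs, (0.0, 5.0), y0, method="LSODA", max_step=1e-3)
print(f"peak power ~ {sol.y[0].max():.3e} (rel. units), "
      f"temperature rise ~ {sol.y[2, -1]:.0f} K")
```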

  12. Quantum simulations and many-body physics with light.

    Science.gov (United States)

    Noh, Changsuk; Angelakis, Dimitris G

    2017-01-01

    In this review we discuss the works in the area of quantum simulation and many-body physics with light, from the early proposals on equilibrium models to the more recent works in driven dissipative platforms. We start by describing the founding works on the Jaynes-Cummings-Hubbard model and the corresponding photon-blockade induced Mott transitions, and continue by discussing the proposals to simulate effective spin models and fractional quantum Hall states in coupled resonator arrays (CRAs). We also analyse the recent efforts to study out-of-equilibrium many-body effects using driven CRAs, including the predictions for photon fermionisation and crystallisation in driven rings of CRAs as well as other dynamical and transient phenomena. We summarise some of the relatively recent results predicting exotic phases such as super-solidity and Majorana-like modes, and then shift our attention to developments involving 1D nonlinear slow-light setups. There the simulation of strongly correlated phases characterising Tonks-Girardeau gases, Luttinger liquids, and interacting relativistic fermionic models is described. We review the major theory results and also briefly outline recent developments in ongoing experimental efforts involving different platforms in circuit QED, photonic crystals and nanophotonic fibres interfaced with cold atoms.
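    For reference, the Jaynes-Cummings-Hubbard model named above is conventionally written (a standard textbook form with ħ = 1, not quoted from the review) as:

```latex
H_{\mathrm{JCH}} = \sum_i \Big[ \omega_c\, a_i^{\dagger} a_i
    + \omega_a\, \sigma_i^{+}\sigma_i^{-}
    + g\,\big( a_i^{\dagger}\sigma_i^{-} + a_i\,\sigma_i^{+} \big) \Big]
    - J \sum_{\langle i,j\rangle} \big( a_i^{\dagger} a_j + \mathrm{h.c.} \big)
```

    Here the photon hopping J between neighbouring resonators competes with the on-site photon blockade set by the light-matter coupling g, which drives the Mott-like transitions mentioned above.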

  13. Simulated, Emulated, and Physical Investigative Analysis (SEPIA) of networked systems.

    Energy Technology Data Exchange (ETDEWEB)

    Burton, David P.; Van Leeuwen, Brian P.; McDonald, Michael James; Onunkwo, Uzoma A.; Tarman, Thomas David; Urias, Vincent E.

    2009-09-01

    This report describes recent progress made in developing and utilizing hybrid Simulated, Emulated, and Physical Investigative Analysis (SEPIA) environments. Many organizations require advanced tools to analyze their information system's security, reliability, and resilience against cyber attack. Today's security analyses utilize real systems such as computers, network routers and other network equipment, computer emulations (e.g., virtual machines) and simulation models separately to analyze the interplay between threats and safeguards. In contrast, this work developed new methods to combine these three approaches to provide integrated hybrid SEPIA environments. Our SEPIA environments enable an analyst to rapidly configure hybrid environments that pass network traffic and perform, from the outside, like real networks. This provides higher fidelity representations of key network nodes while still leveraging the scalability and cost advantages of simulation tools. The result is the ability to rapidly produce large yet relatively low-cost multi-fidelity SEPIA networks of computers and routers that let analysts quickly investigate threats and test protection approaches.

  14. Recent progress of Geant4 electromagnetic physics for calorimeter simulation

    Science.gov (United States)

    Incerti, S.; Ivanchenko, V.; Novak, M.

    2018-02-01

    We report on recent progress in the Geant4 electromagnetic (EM) physics sub-packages. New interfaces and models introduced recently in Geant4 10.3 are already used in LHC applications and may be useful for any type of simulation. Additional developments for EM physics are available with the new public version Geant4 10.4 (December, 2017). Important developments for calorimetry applications were carried out for the modeling of single and multiple scattering of charged particles. Corrections to scattering of positrons and to sampling of displacement have recently been added to the Geant4 default Urban model. The fully theory-based Goudsmit-Saunderson (GS) model for electron/positron multiple scattering was recently reviewed and a new improved version is available in Geant4 10.4. For testing purposes for novel calorimeters we provide a configuration of electron scattering based on the GS model or on the single scattering model (SS) instead of the Urban model. In addition, the GS model with Mott corrections enabled is included in the option4 EM physics constructor. This EM configuration provides the most accurate results for scattering of electrons and positrons.

  15. Physical model of the nuclear fuel cycle simulation code SITON

    International Nuclear Information System (INIS)

    Brolly, Á.; Halász, M.; Szieberth, M.; Nagy, L.; Fehér, S.

    2017-01-01

    Finding answers to the main challenges of nuclear energy, such as resource utilisation and waste minimisation, calls for transient fuel cycle modelling. This motivation led to the development of SITON v2.0, a dynamic, discrete-facility/discrete-material and discrete-event fuel cycle simulation code. The physical model of the code includes the most important fuel cycle facilities. Facilities can be connected flexibly and their number is not limited. Material transfer between facilities is tracked, taking into account 52 nuclides. The composition of discharged fuel is determined using burnup tables, except for the 2400 MW thermal power design of the Gas-Cooled Fast Reactor (GFR2400). For the GFR2400 the FITXS method is used, which fits one-group microscopic cross-sections as polynomial functions of the fuel composition. This method is accurate and fast enough to be used in fuel cycle simulations. Operation of the fuel cycle, i.e. material requests and transfers, is described by discrete events. Before the simulation, reactors and plants formulate their requests as events, and triggered requests are tracked. The events are then simulated, i.e. the requests are fulfilled and the composition of the material flow between facilities is calculated. To demonstrate the capabilities of SITON v2.0, a hypothetical transient fuel cycle is presented in which a 4-unit VVER-440 reactor park is replaced by one GFR2400 that recycles its own spent fuel. It is found that the GFR2400 can be started if the cooling time of its spent fuel is 2 years. However, if the cooling time is 5 years, it needs an additional plutonium feed, which can be covered from the spent fuel of a Generation III light water reactor.
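    The FITXS idea, fitting a one-group cross-section as a polynomial in fuel composition, reduces to an ordinary least-squares problem. The sketch below uses synthetic data in place of the lattice-physics library that the real code fits against; all numbers are invented:

```python
# Sketch of a FITXS-style fit: one-group cross-section as a quadratic
# polynomial in two composition variables, fitted to synthetic data.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "training" compositions, e.g. atom fractions of two nuclides
x = rng.uniform([0.05, 0.80], [0.20, 0.95], size=(200, 2))
# Synthetic reference cross-sections with a mild quadratic dependence + noise
sigma_ref = (1.8 + 4.0 * x[:, 0] - 0.9 * x[:, 1] + 6.0 * x[:, 0] ** 2
             + 0.01 * rng.standard_normal(200))

# Quadratic design matrix: [1, x1, x2, x1^2, x1*x2, x2^2]
A = np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                     x[:, 0] ** 2, x[:, 0] * x[:, 1], x[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(A, sigma_ref, rcond=None)

def sigma_fit(x1, x2):
    """Evaluate the fitted polynomial at a given composition."""
    return coef @ [1.0, x1, x2, x1 * x1, x1 * x2, x2 * x2]

print("fitted sigma at (0.1, 0.9):", sigma_fit(0.1, 0.9))
```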

  16. Advanced high performance solid wall blanket concepts

    International Nuclear Information System (INIS)

    Wong, C.P.C.; Malang, S.; Nishio, S.; Raffray, R.; Sagara, A.

    2002-01-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low-afterheat, low-chemical-reactivity and low-activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course, the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with advancements in plasma control and scrape-off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, and consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts, the critical element is 14 MeV neutron irradiation facilities for generating the necessary engineering design data and predicting FW/blanket component lifetime and availability.

  17. High-performance vertical organic transistors.

    Science.gov (United States)

    Kleemann, Hans; Günther, Alrun A; Leo, Karl; Lüssem, Björn

    2013-11-11

    Vertical organic thin-film transistors (VOTFTs) are promising devices to overcome the transconductance and cut-off frequency restrictions of horizontal organic thin-film transistors. The basic physical mechanisms of VOTFT operation, however, are not well understood, and VOTFTs often require complex patterning techniques using self-assembly processes, which impedes future large-area production. In this contribution, high-performance vertical organic transistors comprising pentacene for p-type operation and C60 for n-type operation are presented. The static current-voltage behavior as well as the fundamental scaling laws of such transistors are studied, revealing remarkable transistor operation that is limited by the injection of charge carriers. The transistors are manufactured by photolithography, in contrast to other VOTFT concepts using self-assembled source electrodes. Fluorinated photoresist and solvent compounds allow photolithographic patterning directly onto the organic materials, simplifying the fabrication protocol and making VOTFTs a prospective candidate for future high-performance applications of organic transistors. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. The path toward HEP High Performance Computing

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves only between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  19. High-Performance Data Converters

    DEFF Research Database (Denmark)

    Steensgaard-Madsen, Jesper

    [...]-resolution internal D/A converters are required. Unit-element mismatch-shaping D/A converters are analyzed, and the concept of mismatch-shaping is generalized to include scaled-element D/A converters. Several types of scaled-element mismatch-shaping D/A converters are proposed. Simulations show that, when implemented [...] in a standard CMOS technology, they can be designed to yield 100 dB performance at 10 times oversampling. The proposed scaled-element mismatch-shaping D/A converters are well suited for use as the feedback stage in oversampled delta-sigma quantizers. It is, however, not easy to make full use of their potential [...]-order difference of the output signal from the loop filter's first integrator stage. This technique avoids the need for accurate matching of analog and digital filters that characterizes the MASH topology, and it preserves the signal-band suppression of quantization errors. Simulations show that quantizers [...]

  20. First experience of vectorizing electromagnetic physics models for detector simulation

    Energy Technology Data Exchange (ETDEWEB)

    Amadio, G. [Sao Paulo State U.; Apostolakis, J. [CERN; Bandieramonte, M. [Catania Astrophys. Observ.; Bianchini, C. [Mackenzie Presbiteriana U.; Bitzes, G. [CERN; Brun, R. [CERN; Canal, P. [Fermilab; Carminati, F. [CERN; Licht, J.de Fine [U. Copenhagen (main); Duhem, L. [Intel, Santa Clara; Elvira, D. [Fermilab; Gheata, A. [CERN; Jun, S. Y. [Fermilab; Lima, G. [Fermilab; Novak, M. [CERN; Presbyterian, M. [Bhabha Atomic Res. Ctr.; Shadura, O. [CERN; Seghal, R. [Bhabha Atomic Res. Ctr.; Wenzel, S. [CERN

    2015-12-23

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. The GeantV vector prototype for detector simulations has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth, parallelization needed to achieve optimal performance or memory access latency and speed. An additional challenge is to avoid the code duplication often inherent to supporting heterogeneous platforms. In this paper we present the first experience of vectorizing electromagnetic physics models developed for the GeantV project.
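    The basket/track-level vectorization idea can be illustrated with a generic sketch. This is not GeantV code; the energy-loss model and all numbers below are placeholders invented for the illustration:

```python
# Illustration of track-level vectorization: apply an energy-loss step to a
# whole basket of particle tracks at once instead of one track at a time.
import numpy as np

def dedx(E):                      # placeholder stopping power, "MeV/cm"
    return 2.0 + 0.1 * np.log1p(E)

def step_scalar(energies, dx):    # one-track-at-a-time loop (no SIMD)
    out = []
    for E in energies:
        out.append(max(E - dedx(E) * dx, 0.0))
    return out

def step_vector(energies, dx):    # one SIMD-friendly basket operation
    E = np.asarray(energies)
    return np.maximum(E - dedx(E) * dx, 0.0)

basket = np.abs(np.random.default_rng(0).normal(100.0, 25.0, 1_000_000))
stepped = step_vector(basket, 0.5)   # processes the whole basket at once;
# step_scalar pays per-track loop overhead and cannot use the vector units.
```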

  1. First experience of vectorizing electromagnetic physics models for detector simulation

    International Nuclear Information System (INIS)

    Amadio, G; Bianchini, C; Apostolakis, J; Bitzes, G; Brun, R; Carminati, F; Gheata, A; Novak, M; Shadura, O; Wenzel, S; Bandieramonte, M; Canal, P; Elvira, D; Jun, S Y; Lima, G; Licht, J de Fine; Duhem, L; Presbyterian, M; Seghal, R

    2015-01-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. The GeantV vector prototype for detector simulations has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth, parallelization needed to achieve optimal performance or memory access latency and speed. An additional challenge is to avoid the code duplication often inherent to supporting heterogeneous platforms. In this paper we present the first experience of vectorizing electromagnetic physics models developed for the GeantV project. (paper)

  2. First experience of vectorizing electromagnetic physics models for detector simulation

    Science.gov (United States)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Bianchini, C.; Bitzes, G.; Brun, R.; Canal, P.; Carminati, F.; de Fine Licht, J.; Duhem, L.; Elvira, D.; Gheata, A.; Jun, S. Y.; Lima, G.; Novak, M.; Presbyterian, M.; Shadura, O.; Seghal, R.; Wenzel, S.

    2015-12-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. The GeantV vector prototype for detector simulations has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth, parallelization needed to achieve optimal performance or memory access latency and speed. An additional challenge is to avoid the code duplication often inherent to supporting heterogeneous platforms. In this paper we present the first experience of vectorizing electromagnetic physics models developed for the GeantV project.

  3. Physical optics simulations with PHASE for SwissFEL beamlines

    Energy Technology Data Exchange (ETDEWEB)

    Flechsig, U.; Follath, R.; Reiche, S. [Paul Scherrer Institut, Swiss Light Source, 5232 Villigen PSI (Switzerland); Bahrdt, J. [Helmholtz Zentrum Berlin (Germany)

    2016-07-27

    PHASE is a software tool for physical optics simulation based on the stationary-phase approximation method. The code has been under continuous development for about 20 years and has been used, for instance, for fundamental studies and ray tracing of various beamlines at the Swiss Light Source. Along with the planning for SwissFEL, a new hard X-ray free-electron laser under construction, new features have been added to permit practical performance predictions, including the diffraction effects which emerge with the fully coherent source. We present the application of the package on the example of the ARAMIS 1 beamline at SwissFEL. The X-ray pulse, calculated with GENESIS and given as an electric field distribution, has been propagated through the beamline to the sample position. We demonstrate the new features of PHASE, such as the treatment of measured figure errors, apertures and coatings of the mirrors, and the application of Fourier optics propagators for free-space propagation.
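    As a generic illustration of the Fourier-optics propagators mentioned above, the following textbook angular-spectrum propagator (not code from PHASE; grid, wavelength and distances are arbitrary) advances a sampled complex field through free space:

```python
# FFT-based angular-spectrum propagation of a monochromatic scalar field.
import numpy as np

def propagate(field, wavelength, dx, z):
    """Propagate a square sampled complex field a distance z (lengths in m)."""
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    kz_sq = k**2 - (2*np.pi*FX)**2 - (2*np.pi*FY)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))         # evanescent waves dropped
    H = np.exp(1j * kz * z) * (kz_sq > 0)        # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: diffraction of a 50 um square aperture at 1 Angstrom over 10 m
n, dx = 1024, 1e-6
x = (np.arange(n) - n // 2) * dx
aperture = (np.abs(x)[None, :] < 25e-6) & (np.abs(x)[:, None] < 25e-6)
out = propagate(aperture.astype(complex), 1e-10, dx, 10.0)
```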

  4. Holistic simulation of geotechnical installation processes numerical and physical modelling

    CERN Document Server

    2015-01-01

    The book provides suitable methods for the simulation of boundary value problems of geotechnical installation processes, with reliable prediction of the deformation behavior of structures in static or dynamic interaction with the soil. It summarizes the basic research of a group of scientists dealing with constitutive relations of soils and their implementation, as well as contact element formulations, in FE codes. Numerical and physical experiments are presented, providing benchmarks for future developments in this field. Boundary value problems have been formulated and solved with the developed tools in order to show the effectiveness of the methods. Parametric studies of geotechnical installation processes to identify the governing parameters for optimization of the process are given in such a way that the findings can be recommended for use in practice. For many design engineers in practice, the assessment of the serviceability of nearby structures due to geotechnical installat...

  5. High performance light water reactor

    International Nuclear Information System (INIS)

    Squarer, D.; Schulenberg, T.; Struwe, D.; Oka, Y.; Bittermann, D.; Aksan, N.; Maraczy, C.; Kyrki-Rajamaeki, R.; Souyri, A.; Dumaz, P.

    2003-01-01

    The objective of the high performance light water reactor (HPLWR) project is to assess the merit and economic feasibility of a high-efficiency LWR operating in the thermodynamically supercritical regime. An efficiency of approximately 44% is expected. To accomplish this objective, a highly qualified team of European research institutes and industrial partners, together with the University of Tokyo, is assessing the major issues pertaining to a new reactor concept, under the co-sponsorship of the European Commission. The assessment has emphasized the recent advancement achieved in this area by Japan. Additionally, it accounts for advanced European reactor design requirements, recent improvements, practical design aspects, availability of plant components and the availability of high temperature materials. The final objective of this project is to reach a conclusion on the potential of the HPLWR to help sustain the nuclear option, by supplying competitively priced electricity, as well as to continue the nuclear competence in LWR technology. The following is a brief summary of the main project achievements:
    - A state-of-the-art review of supercritical water-cooled reactors has been performed for the HPLWR project.
    - Extensive studies have been performed in the last 10 years by the University of Tokyo. Therefore, a 'reference design', developed by the University of Tokyo, was selected in order to assess the available technological tools (i.e. computer codes, analyses, advanced materials, water chemistry, etc.). Design data and results of the analysis were supplied by the University of Tokyo.
    A benchmark problem, based on the 'reference design', was defined for neutronics calculations and several partners of the HPLWR project carried out independent analyses. The results of these analyses, which in addition help to 'calibrate' the codes, have guided the assessment of the core and the design of an improved HPLWR fuel assembly. Preliminary selection was made for the HPLWR scale

  6. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state of the art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows a comparison of the performance levels and usability of a variety of supercomputer architectures. It is therefore an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  7. Physical and chemical stability of palonosetron hydrochloride with dacarbazine and with methylprednisolone sodium succinate during simulated Y-site administration.

    Science.gov (United States)

    Trissel, Lawrence A; Zhang, Yanping; Xu, Quanyun A

    2006-01-01

    The objective of this study was to evaluate the physical and chemical stability of mixtures of undiluted palonosetron hydrochloride 50 micrograms/mL with dacarbazine 4 mg/mL and with methylprednisolone sodium succinate 5 mg/mL in 5% dextrose injection during simulated Y-site administration. Triplicate test samples were prepared by admixing 7.5 mL of palonosetron hydrochloride with 7.5 mL of dacarbazine solution and, separately, methylprednisolone sodium succinate solution. Physical stability was assessed by using a multistep evaluation procedure that included both turbidimetric and particulate measurement as well as visual inspection. Chemical stability was assessed by using stability-indicating high-performance liquid chromatographic analytical techniques that determined drug concentrations. Evaluations were performed immediately after mixing and 1 and 4 hours after mixing. The palonosetron hydrochloride-dacarbazine samples were clear and colorless when viewed in normal fluorescent room light and when viewed with a Tyndall beam. Measured turbidities remained unchanged; particulate contents were low and exhibited little change. High-performance liquid chromatography analysis revealed that palonosetron hydrochloride and dacarbazine remained stable throughout the 4-hour test with no drug loss. Palonosetron hydrochloride is, therefore, physically compatible and chemically stable with dacarbazine during Y-site administration. Within 4 hours, the mixtures of palonosetron hydrochloride and methylprednisolone sodium succinate developed a microprecipitate that became a white precipitate visible to the unaided eye. The precipitate was analyzed and identified as methylprednisolone. Palonosetron hydrochloride is incompatible with methylprednisolone sodium succinate.

  8. Numerical Simulations of Granular Physics in the Solar System

    Science.gov (United States)

    Ballouz, Ronald

    2017-08-01

    Granular physics is a sub-discipline of physics that attempts to combine principles that have been developed for both solid-state physics and engineering (such as soil mechanics) with fluid dynamics in order to formulate a coherent theory for the description of granular materials, which are found in both terrestrial (e.g., earthquakes, landslides, and pharmaceuticals) and extra-terrestrial settings (e.g., asteroid surfaces, asteroid interiors, and planetary ring systems). In the case of our solar system, the growth of this sub-discipline has been key in helping to interpret the formation, structure, and evolution of both asteroids and planetary rings. It is difficult to develop a deterministic theory for granular materials due to the fact that granular systems are composed of a large number of elements that interact through a non-linear combination of various forces (mechanical, gravitational, and electrostatic, for example) leading to a high degree of stochasticity. Hence, we study these environments using an N-body code, pkdgrav, that is able to simulate the gravitational, collisional, and cohesive interactions of grains. Using pkdgrav, I have studied the size segregation on asteroid surfaces due to seismic shaking (the Brazil-nut effect), the interaction of the OSIRIS-REx asteroid sample-return mission sampling head, TAGSAM, with the surface of the asteroid Bennu, the collisional disruptions of rubble-pile asteroids, and the formation of structure in Saturn's rings. In all of these scenarios, I have found that the evolution of a granular system depends sensitively on the intrinsic properties of the individual grains (size, shape, and surface roughness). For example, through our simulations, we have been able to determine relationships between regolith properties and the amount of surface penetration a spacecraft achieves upon landing. Furthermore, we have demonstrated that this relationship also depends on the strength of the local gravity. By comparing our

  9. Indoor Air Quality in High Performance Schools

    Science.gov (United States)

    High performance schools are facilities that improve the learning environment while saving energy, resources, and money. The key is understanding the lifetime value of high performance schools and effectively managing priorities, time, and budget.

  10. Carpet Aids Learning in High Performance Schools

    Science.gov (United States)

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  11. Micromagnetics on high-performance workstation and mobile computational platforms

    Science.gov (United States)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be combined into low-power computing clusters.
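    For orientation, the equation of motion that such micromagnetic solvers integrate at every mesh node is the Landau-Lifshitz-Gilbert (LLG) equation. A single-macrospin toy (not FastMag code; all parameter values are illustrative) looks like:

```python
# Macrospin LLG integration: precession about an applied field plus Gilbert
# damping, which relaxes the magnetization toward the field direction.
import numpy as np

gamma, alpha = 1.76e11, 0.1          # gyromagnetic ratio (rad/(s*T)), damping
H = np.array([0.0, 0.0, 0.1])        # applied field, tesla (along +z)
m = np.array([1.0, 0.0, 0.0])        # unit magnetization, initially along +x
dt = 1e-13                           # time step, seconds

for _ in range(50_000):              # ~5 ns: several relaxation times here
    mxH = np.cross(m, H)
    dm = -gamma / (1 + alpha**2) * (mxH + alpha * np.cross(m, mxH))
    m = m + dt * dm
    m /= np.linalg.norm(m)           # keep |m| = 1 (explicit Euler drifts)

print("final m:", m)                 # relaxes toward +z
```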

  12. Analysis of the physical simulation on Fourier transform infrared spectrometer

    Science.gov (United States)

    Yue, Peng-yuan; Wan, Yu-xi; Zhao, Zhen

    2017-10-01

    A kind of oscillating-arm Fourier Transform Infrared Spectrometer (FTS) based on corner-cube retroreflectors is presented, and its principle and properties are studied. It consists of a pair of corner-cube retroreflectors, a beam splitter and a compensator. The optical path difference (OPD) is created by the oscillating reciprocating motion of the moving corner-cube pair, and the OPD value is four times the physical shift of the moving pair. Owing to the basic property of a corner-cube retroreflector, the oscillating-arm FTS has no tilt problems, which makes it almost ideal for very high resolution infrared spectrometry. However, several factors reduce the capability of the FTS. First, wavefront aberration due to the figures of the optical surfaces reduces the modulation of the FTS system. Second, a corner-cube retroreflector consists of three mutually orthogonal plane mirrors; any deviation from a right angle reduces the modulation of the system. Third, the apexes of the corner-cube retroreflectors are symmetric about the surface of the beam splitter; if one or both retroreflectors are displaced laterally from their nominal positions, the phases of off-axis rays returning from the two arms differ, which also contributes to loss of modulation. To address these problems, this paper sets up a non-sequential interference model in which a small oscillating-arm rotation realizes a dynamic simulation: dynamic interference energy data are acquired at different times, and the modulation of the FTS system is calculated. In the simulation, the influence of wedge error of the beam splitter, of the compensator, or between the two is discussed; the effect of the oscillating-arm shaft deviating from coplanarity with the beam splitter is analyzed; and the compensating effects of corner-cube alignment on the beam splitter and of oscillating-arm rotary-shaft alignment error are analyzed. In addition, the adjustment procedure
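    The stated relation between OPD and arm motion is easy to check numerically. The toy below synthesizes an interferogram for a monochromatic line using OPD = 4 × displacement and recovers the line by FFT (sampling step, scan length, and the line position are illustrative values, not from the paper):

```python
# Toy interferogram for an oscillating-arm FTS with corner-cube pair:
# OPD = 4 x physical displacement of the moving pair.
import numpy as np

n, dx = 4096, 0.05e-6                   # samples and physical arm step (m)
shift = np.arange(n) * dx
opd = 4.0 * shift                       # corner-cube pair geometry
wavenumber = 1.0e6                      # source line, 1/m (= 10000 cm^-1)
interferogram = np.cos(2 * np.pi * wavenumber * opd)

spectrum = np.abs(np.fft.rfft(interferogram))
freqs = np.fft.rfftfreq(n, d=4.0 * dx)  # spectral axis in 1/m (OPD spacing)
print("recovered line at ~%.3e 1/m" % freqs[spectrum.argmax()])
```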

  13. Quantifying the Physical Response to a Contemporary Amateur Boxing Simulation.

    Science.gov (United States)

    Finlay, Mitchell J; Greig, Matt; Page, Richard M

    2018-04-01

    Finlay, MJ, Greig, M, and Page, RM. Quantifying the physical response to a contemporary amateur boxing simulation. J Strength Cond Res 32(4): 1005-1012, 2018. This study examined the physical response to a contemporary boxing-specific exercise protocol (BSEP), based on notational analysis of amateur boxing. Nine male senior elite amateur boxers completed a 3 × 3-minute BSEP, with a 1-minute passive recovery period interspersing each round. Average (HRave) and peak (HRpeak) heart rates, average (V̇O2ave) and peak oxygen consumption (V̇O2peak), blood lactate (BLa) concentrations, rating of perceived exertion, and both triaxial and uniaxial PlayerLoad metrics were recorded during the completion of the BSEP. Blood lactate concentration increased significantly in each round (Round 1 = 2.4 ± 1.3 mmol·L⁻¹; Round 2 = 3.3 ± 1.7 mmol·L⁻¹; Round 3 = 4.3 ± 2.6 mmol·L⁻¹). Significantly lower HRave and HRpeak values were found in the first round (HRave: 150 ± 15 b·min⁻¹; HRpeak: 162 ± 12 b·min⁻¹) when compared with the second (HRave: 156 ± 16 b·min⁻¹; HRpeak: 166 ± 13 b·min⁻¹) and third (HRave: 150 ± 15 b·min⁻¹; HRpeak: 169 ± 14 b·min⁻¹). No significant differences were found in any of the V̇O2 or PlayerLoad metrics recorded during the BSEP. The BSEP based on notational analysis elicited a fatigue response across rounds, confirming its validity. The BSEP can be used as a training tool for boxing-specific conditioning with implications for reduced injury risk, and to assess the physical response to boxing-specific interventions. Moreover, the BSEP can also be manipulated to suit all levels of participants or training phases, with practical applications in performance monitoring and microcycle periodization.

  14. Simulating biological processes: stochastic physics from whole cells to colonies

    Science.gov (United States)

    Earnest, Tyler M.; Cole, John A.; Luthey-Schulten, Zaida

    2018-05-01

    The last few decades have revealed the living cell to be a crowded spatially heterogeneous space teeming with biomolecules whose concentrations and activities are governed by intrinsically random forces. It is from this randomness, however, that a vast array of precisely timed and intricately coordinated biological functions emerge that give rise to the complex forms and behaviors we see in the biosphere around us. This seemingly paradoxical nature of life has drawn the interest of an increasing number of physicists, and recent years have seen stochastic modeling grow into a major subdiscipline within biological physics. Here we review some of the major advances that have shaped our understanding of stochasticity in biology. We begin with some historical context, outlining a string of important experimental results that motivated the development of stochastic modeling. We then embark upon a fairly rigorous treatment of the simulation methods that are currently available for the treatment of stochastic biological models, with an eye toward comparing and contrasting their realms of applicability, and the care that must be taken when parameterizing them. Following that, we describe how stochasticity impacts several key biological functions, including transcription, translation, ribosome biogenesis, chromosome replication, and metabolism, before considering how the functions may be coupled into a comprehensive model of a ‘minimal cell’. Finally, we close with our expectation for the future of the field, focusing on how mesoscopic stochastic methods may be augmented with atomic-scale molecular modeling approaches in order to understand life across a range of length and time scales.
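    The workhorse of the stochastic simulation methods reviewed here is Gillespie's stochastic simulation algorithm (SSA). A minimal sketch for a one-species production/degradation model of constitutive gene expression follows; the rate constants are invented for the demo:

```python
# Gillespie SSA for mRNA production (rate k_prod) and degradation (k_deg * n).
import numpy as np

rng = np.random.default_rng(0)
k_prod, k_deg = 2.0, 0.1          # production (1/s) and degradation (1/s)
t, n, t_end = 0.0, 0, 500.0
times, counts = [0.0], [0]

while t < t_end:
    rates = np.array([k_prod, k_deg * n])   # propensities of the 2 reactions
    total = rates.sum()
    t += rng.exponential(1.0 / total)       # exponentially distributed waiting time
    n += 1 if rng.random() < rates[0] / total else -1
    times.append(t); counts.append(n)

# The copy number fluctuates around k_prod / k_deg = 20 (Poisson at steady state).
print("mean copy number over run:", np.mean(counts))
```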

  15. Physical layer simulation study for the coexistence of WLAN standards

    Energy Technology Data Exchange (ETDEWEB)

    Howlader, M. K. [Marquette Univ., 222 Haggerty Hall, P. O. Box 1881, Milwaukee, WI 53201 (United States); Keiger, C. [Analysis and Measurement Services Corporation, 9111 Cross Park Drive, Knoxville, TN 37923 (United States); Ewing, P. D. [Oak Ridge National Laboratory, MS-6006, P. O. Box 2008, Oak Ridge, TN 37831 (United States); Govan, T. V. [U.S. Nuclear Regulatory Commission, MS T-10-D20, 11545 Rockville Pike, Rockville, MD 20852 (United States)

    2006-07-01

    This paper presents the results of a study on the performance of wireless local area network (WLAN) devices in the presence of interference from other wireless devices. To understand the coexistence of these wireless protocols, simplified physical-layer-system models were developed for the Bluetooth, Wireless Fidelity (WiFi), and Zigbee devices, all of which operate within the 2.4-GHz frequency band. The performances of these protocols were evaluated using Monte-Carlo simulations under various interference and channel conditions. The channel models considered were basic additive white Gaussian noise (AWGN), Rayleigh fading, and site-specific fading. The study also incorporated the basic modulation schemes, multiple access techniques, and channel allocations of the three protocols. This research is helping the U.S. Nuclear Regulatory Commission (NRC) understand the coexistence issues associated with deploying wireless devices and could prove useful in the development of a technical basis for guidance to address safety-related issues with the implementation of wireless systems in nuclear facilities. (authors)
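    A strongly simplified example of this kind of physical-layer Monte Carlo is a bit-error-rate estimate for BPSK over an AWGN channel, checked against the analytic result. The study itself modelled full Bluetooth/WiFi/Zigbee physical layers; the sketch below is only the common core of such simulations:

```python
# Monte Carlo BER of BPSK over AWGN versus the closed-form Q-function result.
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(42)
n_bits = 200_000
for ebn0_db in (0, 4, 8):
    ebn0 = 10 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2.0 * bits - 1.0                    # map 0/1 -> -1/+1, unit energy
    noise = rng.normal(0.0, np.sqrt(1 / (2 * ebn0)), n_bits)
    decisions = (symbols + noise) > 0.0           # hard decision at the receiver
    ber_sim = np.mean(decisions != bits)
    ber_theory = 0.5 * erfc(np.sqrt(ebn0))        # = Q(sqrt(2 Eb/N0))
    print(f"Eb/N0 = {ebn0_db} dB: simulated {ber_sim:.4f}, theory {ber_theory:.4f}")
```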

  16. Physical layer simulation study for the coexistence of WLAN standards

    International Nuclear Information System (INIS)

    Howlader, M. K.; Keiger, C.; Ewing, P. D.; Govan, T. V.

    2006-01-01

    This paper presents the results of a study on the performance of wireless local area network (WLAN) devices in the presence of interference from other wireless devices. To understand the coexistence of these wireless protocols, simplified physical-layer-system models were developed for the Bluetooth, Wireless Fidelity (WiFi), and Zigbee devices, all of which operate within the 2.4-GHz frequency band. The performances of these protocols were evaluated using Monte-Carlo simulations under various interference and channel conditions. The channel models considered were basic additive white Gaussian noise (AWGN), Rayleigh fading, and site-specific fading. The study also incorporated the basic modulation schemes, multiple access techniques, and channel allocations of the three protocols. This research is helping the U.S. Nuclear Regulatory Commission (NRC) understand the coexistence issues associated with deploying wireless devices and could prove useful in the development of a technical basis for guidance to address safety-related issues with the implementation of wireless systems in nuclear facilities. (authors)

  17. Developments in numerical simulation of IFE target and chamber physics

    International Nuclear Information System (INIS)

    Velarde, G.; Minguez, E.; Alonso, E.; Gil, J.M.; Malerba, L.; Marian, J.; Martel, P.; Martinez-Val, J.M.; Munoz, R.; Ogando, F.; Perlado, J.M.; Piera, M.; Reyes, S.; Rubiano, J.G.; Sanz, J.; Sauvan, P.; Velarde, M.; Velarde, P.

    2000-01-01

    The work presented outlines the global framework at the Institute of Nuclear Fusion (DENIM) for an integral perspective on the different research areas in the development of inertial fusion for energy generation. The coupling of a new radiation transport (RT) solver with an existing multi-material fluid dynamics code using Adaptive Mesh Refinement (AMR) is presented in Section 2, including improvements and additional information about the solver precision. In Section 3, new developments in the atomic physics codes under target conditions, to determine populations, opacity data and emissivities, are presented. Exotic and innovative ideas about Inertial Fusion Energy (IFE), such as catalytic fuels and Z-pinches, have been explored, and they are explained in Section 4. Numerical simulations demonstrate important reductions in the tritium inventory. Section 5 is devoted to IFE safety and environment. Uncertainty analysis in activation calculations has been included in the ACAB activation code, and calculations on pulse activation in IFE reactors and on the activation of target debris in NIF are also presented. A comparison of the accidental releases of tritium from some IFE reactors, computed using the MACCS2 code, is explained. Finally, Section 6 contains the research on the basic mechanisms of neutron damage in SiC (a low-activation material) and FeCu alloy using the DENIM/LLNL molecular dynamics code MDCASK. (authors)

  18. Evaluation of static physics performance of the jPET-D4 by Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Hasegawa, Tomoyuki [Allied Health Sciences, Kitasato University, Kitasato 1-15-1, Sagamihara, Kanagawa, 228-8555 (Japan); Yoshida, Eiji [Molecular Imaging Centre, National Institute of Radiological Sciences, Anagawa 4-9-1, Inage, Chiba, 263-8555 (Japan); Kobayashi, Ayako [Graduate School of Human Health Sciences, Tokyo Metropolitan University, Arakawa, Tokyo, 116-8551 (Japan); Shibuya, Kengo [Molecular Imaging Centre, National Institute of Radiological Sciences, Anagawa 4-9-1, Inage, Chiba, 263-8555 (Japan); Nishikido, Fumihiko [Molecular Imaging Centre, National Institute of Radiological Sciences, Anagawa 4-9-1, Inage, Chiba, 263-8555 (Japan); Kobayashi, Tetsuya [Graduate School of Science and Technology, Chiba University, 1-33 Yayoi, Inage, Chiba, 263-8522 (Japan); Suga, Mikio [Graduate School of Science and Technology, Chiba University, 1-33 Yayoi, Inage, Chiba, 263-8522 (Japan); Yamaya, Taiga [Molecular Imaging Centre, National Institute of Radiological Sciences, Anagawa 4-9-1, Inage, Chiba, 263-8555 (Japan); Kitamura, Keishi [Shimadzu Corporation, 1 Nishinokyo-kuwabara-cho, Nakagyo-ku, Kyoto, 604-8511 (Japan); Maruyama, Koichi [Allied Health Sciences, Kitasato University, Kitasato 1-15-1, Sagamihara, Kanagawa, 228-8555 (Japan); Murayama, Hideo [Molecular Imaging Centre, National Institute of Radiological Sciences, Anagawa 4-9-1, Inage, Chiba, 263-8555 (Japan)

    2007-01-07

    The jPET-D4 is the first PET scanner to introduce a unique four-layer depth-of-interaction (DOI) detector scheme in order to achieve high sensitivity and uniform high spatial resolution. This paper compares measurement and Monte Carlo simulation results of the static physics performance of this prototype research PET scanner. Measurement results include single and coincidence energy spectra, point and line source sensitivities, axial sensitivity profile (slice profile) and scatter fraction. We use GATE (Geant4 application for tomographic emission) as a Monte Carlo radiation transport model. Experimental results are reproduced well by the simulation model with reasonable assumptions on characteristic responses of the DOI detectors. In a previous study, the jPET-D4 was shown to provide a uniform spatial resolution as good as 3 mm (FWHM). In the present study, we demonstrate that a high sensitivity, 11.3 ± 0.5%, is provided at the FOV centre. However, about three-fourths of this sensitivity is related to multiple-crystal events, for which some misidentification of the crystal cannot be avoided. Therefore, it is crucial to develop a more efficient way to identify the crystal of interaction and to reduce misidentification in order to make use of these high performance values simultaneously. We expect that effective sensitivity can be improved by replacing the GSO crystals with more absorptive crystals such as BGO and LSO. The results we describe here are essential to take full advantage of the next generation PET systems that have DOI recognition capability.

  19. Seventeenth Workshop on Computer Simulation Studies in Condensed-Matter Physics

    CERN Document Server

    Landau, David P; Schütler, Heinz-Bernd; Computer Simulation Studies in Condensed-Matter Physics XVI

    2006-01-01

    This status report features the most recent developments in the field, spanning a wide range of topical areas in the computer simulation of condensed matter/materials physics. Both established and new topics are included, ranging from the statistical mechanics of classical magnetic spin models to electronic structure calculations, quantum simulations, and simulations of soft condensed matter. The book presents new physical results as well as novel methods of simulation and data analysis. Highlights of this volume include various aspects of non-equilibrium statistical mechanics, studies of properties of real materials using both classical model simulations and electronic structure calculations, and the use of computer simulations in teaching.

  20. A federation of simulations based on cellular automata in cyber-physical systems

    Directory of Open Access Journals (Sweden)

    Hoang Van Tran

    2016-02-01

    Full Text Available In a cyber-physical system (CPS), cooperation between a variety of computational and physical elements usually poses difficulties for current modelling and simulation tools. Although much research has been proposed to address those challenges, most solutions do not completely cover the uncertain interactions in CPS. In this paper, we present a new approach to federating simulations for CPS. A federation is a combination of, and coordination between, simulations built on a common communication standard. In addition, a mixed simulation is defined as several parallel simulations federated under a common time progression. Such simulations run on models of physical systems built on cellular automata theory. The experiments were performed on a federation of three simulations: forest fire spread, river pollution diffusion and a wireless sensor network. The obtained results can be utilized to observe and predict the behaviours of physical systems in their interactions.
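    A minimal cellular-automaton forest-fire step of the kind such a federation might run is sketched below; the states and the spread probability are generic textbook choices, not the authors' model:

```python
# Cellular-automaton forest fire: 0 = empty, 1 = tree, 2 = burning.
import numpy as np

rng = np.random.default_rng(7)
grid = rng.choice([0, 1], size=(64, 64), p=[0.4, 0.6])
grid[32, 32] = 2                      # ignite the centre cell

def step(g, p_catch=0.8):
    burning = (g == 2)
    # A tree catches fire with probability p_catch if any 4-neighbour burns
    neigh = (np.roll(burning, 1, 0) | np.roll(burning, -1, 0)
             | np.roll(burning, 1, 1) | np.roll(burning, -1, 1))
    new = g.copy()
    ignite = (g == 1) & neigh & (rng.random(g.shape) < p_catch)
    new[ignite] = 2
    new[burning] = 0                  # burning cells burn out to empty
    return new

for _ in range(50):                   # one federation "time progression"
    grid = step(grid)
print("trees left:", int((grid == 1).sum()))
```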

  1. High performance carbon nanocomposites for ultracapacitors

    Science.gov (United States)

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  2. High performance computations using dynamical nucleation theory

    International Nuclear Information System (INIS)

    Windus, T L; Crosby, L D; Kathmann, S M

    2008-01-01

    Chemists continue to explore the use of very large computations to perform simulations that describe the molecular level physics of critical challenges in science. In this paper, we describe the Dynamical Nucleation Theory Monte Carlo (DNTMC) model - a model for determining molecular scale nucleation rate constants - and its parallel capabilities. The potential for bottlenecks and the challenges to running on future petascale or larger resources are delineated. A 'master-slave' solution is proposed to scale to the petascale and will be developed in the NWChem software. In addition, mathematical and data analysis challenges are described

  3. Computer science of the high performance; Informatica del alto rendimiento

    Energy Technology Data Exchange (ETDEWEB)

    Moraleda, A.

    2008-07-01

    High performance computing is taking shape as a powerful accelerator of innovation, drastically reducing the waiting time for results and findings in a growing number of processes and activities as complex and important as medicine, genetics, pharmacology, the environment, natural resources management and the simulation of complex processes in a wide variety of industries. (Author)

  4. Simulation-Based Performance Assessment: An Innovative Approach to Exploring Understanding of Physical Science Concepts

    Science.gov (United States)

    Gale, Jessica; Wind, Stefanie; Koval, Jayma; Dagosta, Joseph; Ryan, Mike; Usselman, Marion

    2016-01-01

    This paper illustrates the use of simulation-based performance assessment (PA) methodology in a recent study of eighth-grade students' understanding of physical science concepts. A set of four simulation-based PA tasks was iteratively developed to assess student understanding of an array of physical science concepts, including net force,…

  5. Simulation-based Education for Endoscopic Third Ventriculostomy : A Comparison Between Virtual and Physical Training Models

    NARCIS (Netherlands)

    Breimer, Gerben E.; Haji, Faizal A.; Bodani, Vivek; Cunningham, Melissa S.; Lopez-Rios, Adriana-Lucia; Okrainec, Allan; Drake, James M.

    BACKGROUND: The relative educational benefits of virtual reality (VR) and physical simulation models for endoscopic third ventriculostomy (ETV) have not been evaluated "head to head." OBJECTIVE: To compare and identify the relative utility of a physical and VR ETV simulation model for use in

  6. Delivering high performance BWR fuel reliably

    International Nuclear Information System (INIS)

    Schardt, J.F.

    1998-01-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel that can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  7. High-performance liquid chromatography - Ultraviolet method for the determination of total specific migration of nine ultraviolet absorbers in food simulants based on 1,1,3,3-Tetramethylguanidine and organic phase anion exchange solid phase extraction to remove glyceride.

    Science.gov (United States)

    Wang, Jianling; Xiao, Xiaofeng; Chen, Tong; Liu, Tingfei; Tao, Huaming; He, Jun

    2016-06-17

    The glyceride in oil food simulant usually causes serious interference with target analytes and leads to failure of the normal function of the RP-HPLC column. In this work, a convenient HPLC-UV method for the determination of the total specific migration of nine ultraviolet (UV) absorbers in food simulants was developed based on 1,1,3,3-tetramethylguanidine (TMG) and organic phase anion exchange (OPAE) SPE to efficiently remove glyceride in olive oil simulant. In contrast to normal ion exchange carried out in an aqueous solution or aqueous phase environment, the OPAE SPE was performed in an organic phase environment, so the time-consuming and challenging extraction of the nine UV absorbers from vegetable oil with aqueous solution could be readily omitted. The method was proved to have good linearity (r ≥ 0.99992), precision (intra-day RSD ≤ 3.3%) and accuracy (91.0% ≤ recoveries ≤ 107%); furthermore, low limits of quantification (0.05-0.2 mg/kg) were observed in five types of food simulants (10% ethanol, 3% acetic acid, 20% ethanol, 50% ethanol and olive oil). The method was found to be well suited for quantitative determination of the total specific migration of the nine UV absorbers in both aqueous and vegetable oil simulants according to Commission Regulation (EU) No. 10/2011. Migration levels of the nine UV absorbers were determined in 31 plastic samples, and UV-24, UV-531, HHBP and UV-326 were frequently detected, especially UV-326 in olive oil simulant for PE samples. In addition, the OPAE SPE procedure has also been applied to efficiently enrich or purify seven antioxidants in olive oil simulant. The results indicate that this procedure will find wider application in the enrichment or purification of extremely weak acidic compounds with a phenol hydroxyl group that are relatively stable in TMG n-hexane solution and can barely be extracted from vegetable oil. Copyright © 2016 Elsevier B.V. All rights reserved.
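
    For readers unfamiliar with the validation figures quoted above, the arithmetic behind them is simple; the following sketch computes spike recovery and intra-day relative standard deviation from made-up replicate measurements.

    ```python
    # Spike recovery and intra-day RSD, the accuracy/precision figures
    # quoted above, computed from made-up replicate measurements.
    import statistics as st

    def recovery_pct(measured, spiked):
        return 100.0 * measured / spiked

    def rsd_pct(values):
        return 100.0 * st.stdev(values) / st.mean(values)

    print(f"recovery = {recovery_pct(0.95, 1.00):.1f} %")                # target: 91.0-107%
    print(f"intra-day RSD = {rsd_pct([0.98, 1.01, 0.99, 1.02]):.1f} %")  # target: <= 3.3%
    ```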

  8. 14th annual Results and Review Workshop on High Performance Computing in Science and Engineering

    CERN Document Server

    Nagel, Wolfgang E; Resch, Michael M; Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2011; High Performance Computing in Science and Engineering '11

    2012-01-01

    This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2011. The reports cover all fields of computational science and engineering, ranging from CFD to computational physics and chemistry, to computer science, with a special emphasis on industrially relevant applications. Presenting results for both vector systems and microprocessor-based systems, the book allows readers to compare the performance levels and usability of various architectures. As HLRS

  9. Believability in simplifications of large scale physically based simulation

    KAUST Repository

    Han, Donghui; Hsu, Shu-wei; McNamara, Ann; Keyser, John

    2013-01-01

    We verify two hypotheses which are assumed to be true only intuitively in many rigid body simulations. I: In large scale rigid body simulation, viewers may not be able to perceive distortion incurred by an approximated simulation method. II: Fixing objects under a pile of objects does not affect the visual plausibility. Visual plausibility of scenarios simulated with these hypotheses assumed true is measured using subjective ratings from viewers. As expected, analysis of the results supports the truthfulness of the hypotheses under certain simulation environments. However, our analysis identified four factors which may affect the validity of these hypotheses: the number of collisions simulated simultaneously, the homogeneity of colliding object pairs, the distance from the scene under simulation to the camera position, and the simulation method used. We also try to find an objective metric of visual plausibility from eye-tracking data collected from viewers. Analysis of these results indicates that eye-tracking does not present a suitable proxy for measuring plausibility or distinguishing between types of simulations. © 2013 ACM.

  10. Impact of the genfit2 Kalman-filter-based algorithms on physics simulations performed with PandaRoot

    Energy Technology Data Exchange (ETDEWEB)

    Prencipe, Elisabetta; Ritman, James [Forschungszentrum Juelich, IKP1, Juelich (Germany); Collaboration: PANDA-Collaboration

    2016-07-01

    PANDA is a planned experiment at FAIR (Darmstadt) with a cooled antiproton beam in the momentum range [1.5; 15] GeV/c, allowing a wide physics program in nuclear and particle physics. It is the only experiment worldwide that combines a solenoid field (B = 2 T) and a dipole field (B = 2 Tm) in a fixed-target topology, in that energy regime. The tracking system of PANDA comprises a high performance silicon vertex detector, a GEM detector, a straw-tube central tracker, a forward tracking system, and a luminosity monitor. The offline tracking algorithm is developed within the PandaRoot framework, which is part of the FAIRRoot project. The algorithm presented here is based on a tool containing the Kalman filter equations and a deterministic annealing filter (genfit). Kalman-filter-based algorithms have a wide range of applications; among those in particle physics, they can perform extrapolations of track parameters and covariance matrices. The impact on physics simulations performed for the PANDA experiment is shown for the first time with the PandaRoot framework: an improvement of about a factor of 2 is shown for channels where good low-momentum tracking (p_T < 400 MeV/c) is required, i.e. D meson and Λ reconstruction.
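
    The predict/update cycle at the heart of genfit's Kalman-filter track fitting can be illustrated in one dimension; the sketch below filters a series of noisy position measurements, whereas the real fitter propagates five track parameters and their covariance matrix between hits.

    ```python
    # One-dimensional Kalman filter: the same predict/update cycle genfit
    # applies per hit, reduced from five track parameters to a single state.
    def kalman_1d(measurements, q=1e-3, r=0.25):
        x, p = measurements[0], 1.0      # initial state estimate and covariance
        for z in measurements[1:]:
            p += q                       # predict: inflate covariance by process noise
            k = p / (p + r)              # Kalman gain
            x += k * (z - x)             # update state with the measurement residual
            p *= 1.0 - k                 # shrink covariance after the update
        return x, p

    print(kalman_1d([1.10, 0.92, 1.05, 0.98, 1.02]))
    ```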

  11. Improving UV Resistance of High Performance Fibers

    Science.gov (United States)

    Hassanin, Ahmed

    % rutile TiO2 nanoparticles showed excellent protection of the braid from PBO. Only 7.5% strength loss was observed. To optimize the degree of protection of the sheath loaded with UV-blocker particles, computational models were developed to optimize the protective layer thickness/weight and the amount of UV particles that provide the maximum protection with the lightest protective layer and the minimum amount of UV particles. The simulated results were found to be higher than the experimental results due to the tendency of nanoparticles to agglomerate in real experiments. The third approach to achieving maximum protection with minimum added weight is constructing a sleeve of SpectraRTM (ultra-high molecular weight polyethylene (UHMWPE) high performance fiber) woven fabric, which is known to resist UV. Covering the PBO-fiber braid with SpectraRTM woven fabric provides a hybrid structure with two compatible components that can share the load and thus maintain the high strength-to-weight ratio. Although the SpectraRTM fabric had maximum cover factor, 20% of visible light and about 15% of UV were able to penetrate the fabric. This transmittance of UV-VIS light negatively affected the protection performance of the SpectraRTM woven fabric layer. It is thought that the SpectraRTM fabric could be coated with a thin layer (mentioned earlier) containing UV blocker for additional protection while maintaining its strength contribution to the hybrid structure. To maximize the strength-to-weight ratio of the hybrid structure (with a core of PBO braid and a sheath of SpectraRTM woven fabric), an established finite element model was utilized. The theoretical results using the finite element theory indicated that by controlling the bending rigidity of the filling yarn of the SpectraRTM fabric, the extension at peak load of the woven fabric in the warp direction (loading direction) could be controlled to match the braid extension at peak load. The match in the extension at peak load of the two

  12. Design and evaluation of dynamic replication strategies for a high-performance data grid

    International Nuclear Information System (INIS)

    Ranganathan, K.; Foster, I.

    2001-01-01

    Physics experiments that generate large amounts of data need to be able to share it with researchers around the world. High performance grids facilitate the distribution of such data to geographically remote places. Dynamic replication can be used as a technique to reduce bandwidth consumption and access latency when accessing these huge amounts of data. The authors describe a simulation framework they have developed to model a grid scenario, which enables comparative studies of alternative dynamic replication strategies. They present preliminary results obtained with this simulator, in which they evaluate the performance of six different replication strategies for three different kinds of access patterns. The simulation results show that the best strategy yields significant savings in latency and bandwidth consumption if the access patterns contain a moderate amount of geographical locality.
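
    A toy version of such a replication study can be written in a few lines; the sketch below compares a no-replication baseline against a naive replicate-on-access strategy under an access pattern with some locality. All parameters are illustrative, not those of the paper's simulator.

    ```python
    # Toy replication study: fraction of requests served from a local
    # replica under two strategies. Sizes and pattern are illustrative.
    import random

    def run(strategy, n_sites=5, cache=3, n_req=10_000, n_files=50):
        caches = [[] for _ in range(n_sites)]
        hits = 0
        for _ in range(n_req):
            site = random.randrange(n_sites)
            f = int(random.gauss(n_files / 2, n_files / 8)) % n_files  # some locality
            if f in caches[site]:
                hits += 1
            elif strategy == "replicate-on-access":
                caches[site].append(f)           # copy the file to the local site
                if len(caches[site]) > cache:
                    caches[site].pop(0)          # evict the oldest replica
        return hits / n_req

    for s in ("no-replication", "replicate-on-access"):
        print(s, round(run(s), 3))
    ```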

  13. Monte Carlo simulation in statistical physics an introduction

    CERN Document Server

    Binder, Kurt

    1992-01-01

    The Monte Carlo method is a computer simulation method which uses random numbers to simulate statistical fluctuations. The method is used to model complex systems with many degrees of freedom. Probability distributions for these systems are generated numerically, and the method then yields numerically exact information on the models. Such simulations may be used to see how well a model system approximates a real one, or to see how valid the assumptions are in an analytical theory. A short and systematic theoretical introduction to the method forms the first part of this book. The second part is a practical guide with plenty of examples and exercises for the student. Problems treated by simple sampling (random and self-avoiding walks, percolation clusters, etc.) are included, along with such topics as finite-size effects and guidelines for the analysis of Monte Carlo simulations. The two parts together provide an excellent introduction to the theory and practice of Monte Carlo simulations.
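
    In the spirit of the book's simple-sampling exercises, the following sketch estimates the mean squared end-to-end distance of an N-step random walk on the square lattice, which should come out close to N for the ideal walk.

    ```python
    # Simple sampling: mean squared end-to-end distance of an N-step
    # random walk on the square lattice; the ideal-walk answer is <R^2> = N.
    import random

    def walk(n):
        x = y = 0
        for _ in range(n):
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x += dx
            y += dy
        return x * x + y * y

    n, samples = 100, 20_000
    r2 = sum(walk(n) for _ in range(samples)) / samples
    print(f"<R^2> = {r2:.1f} (expect about {n})")
    ```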

  14. Real-Time Animation Using a Mix of Physical Simulation and Kinematics

    NARCIS (Netherlands)

    van Welbergen, H.; Zwiers, Jakob; Ruttkay, Z.M.

    2009-01-01

    Expressive animation (such as gesturing or conducting) is typically generated using procedural animation techniques. These techniques offer precision in both timing and limb placement, but they lack physical realism. On the other hand, physical simulation offers physical realism, but does not

  15. Alternative High-Performance Ceramic Waste Forms

    Energy Technology Data Exchange (ETDEWEB)

    Sundaram, S. K. [Alfred Univ., NY (United States)

    2017-02-01

    This final report (M5NU-12-NY-AU # 0202-0410) summarizes the results of the project titled “Alternative High-Performance Ceramic Waste Forms,” funded in FY12 by the Nuclear Energy University Program (NEUP Project # 12-3809) and led by Alfred University in collaboration with Savannah River National Laboratory (SRNL). The overall focus of the project is to advance fundamental understanding of crystalline ceramic waste forms and to demonstrate their viability as alternative waste forms to borosilicate glasses. We processed single- and multiphase hollandite waste forms based on simulated waste stream compositions provided by SRNL, based on the advanced fuel cycle initiative (AFCI) aqueous separation process developed in the Fuel Cycle Research and Development (FCR&D) program. For multiphase simulated waste forms, oxide and carbonate precursors were mixed together via ball milling with deionized water using zirconia media in a polyethylene jar for 2 h. The slurry was dried overnight and then separated from the media. The blended powders were then subjected to melting or spark plasma sintering (SPS) processes. Microstructural evolution and phase assemblages of these samples were studied using x-ray diffraction (XRD), scanning electron microscopy (SEM), energy dispersive analysis of x-rays (EDAX), wavelength dispersive spectrometry (WDS), transmission electron microscopy (TEM), selective area x-ray diffraction (SAXD), and electron backscatter diffraction (EBSD). These results showed that the processing methods have a significant effect on the microstructure and thus the performance of these waste forms. The Ce substitution into zirconolite and pyrochlore materials was investigated using a combination of experimental (in situ XRD and x-ray absorption near edge structure (XANES)) and modeling techniques to study these single phases independently. In zirconolite materials, a transition from the 2M to the 4M polymorph was observed with increasing Ce content. The resulting

  16. High-performance ceramics. Fabrication, structure, properties

    International Nuclear Information System (INIS)

    Petzow, G.; Tobolski, J.; Telle, R.

    1996-01-01

    The program ''Ceramic High-performance Materials'' pursued the objective of understanding the chain of cause and effect in the development of high-performance ceramics. This chain of problems begins with the chemical reactions for the production of powders, comprises the characterization, processing, shaping and compacting of powders, structural optimization, heat treatment, production and finishing, and leads to issues of materials testing and of design appropriate to the material. The program has resulted in contributions to the understanding of fundamental interrelationships in terms of materials science, which are summarized in the present volume - broken down into eight special aspects. (orig./RHM)

  17. High Performance Grinding and Advanced Cutting Tools

    CERN Document Server

    Jackson, Mark J

    2013-01-01

    High Performance Grinding and Advanced Cutting Tools discusses the fundamentals and advances in high performance grinding processes, and provides a complete overview of newly-developing areas in the field. Topics covered are grinding tool formulation and structure, grinding wheel design and conditioning and applications using high performance grinding wheels. Also included are heat treatment strategies for grinding tools, using grinding tools for high speed applications, laser-based and diamond dressing techniques, high-efficiency deep grinding, VIPER grinding, and new grinding wheels.

  18. Strategy Guideline: High Performance Residential Lighting

    Energy Technology Data Exchange (ETDEWEB)

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  19. High-performance phase-field modeling

    KAUST Repository

    Vignal, Philippe

    2015-04-27

    Many processes in engineering and the sciences involve the evolution of interfaces. Among the mathematical frameworks developed to model these types of problems, the phase-field method has emerged as a possible solution. Phase-fields nonetheless lead to complex nonlinear, high-order partial differential equations, whose solution poses mathematical and computational challenges. Guaranteeing some of the physical properties of the equations has led to the development of efficient algorithms and discretizations capable of recovering said properties by construction [2, 5]. This work builds on these ideas and proposes novel discretization strategies that guarantee numerical energy dissipation for both conserved and non-conserved phase-field models. The temporal discretization is based on a novel method which relies on Taylor series and ensures strong energy stability. It is second-order accurate, and can also be rendered linear to speed up the solution process [4]. The spatial discretization relies on Isogeometric Analysis, a finite element method that possesses the k-refinement technology and enables the generation of high-order, high-continuity basis functions. These basis functions are well suited to handle the high-order operators present in phase-field models. Two-dimensional and three-dimensional results for the Allen-Cahn, Cahn-Hilliard, Swift-Hohenberg and phase-field crystal equations will be presented, which corroborate the theoretical findings and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.
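
    The sketch below is not the paper's energy-stable scheme; it is a plain explicit-Euler discretization of the 1-D Allen-Cahn equation with a discrete free-energy monitor, included only to make the equation and the "numerical energy dissipation" property concrete.

    ```python
    # Plain explicit-Euler Allen-Cahn in 1-D with a discrete Ginzburg-Landau
    # energy monitor; for small enough dt the energy decreases monotonically.
    import numpy as np

    n, dt, eps, steps = 128, 1e-3, 0.01, 2000
    dx = 1.0 / n
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u = 0.05 * np.sin(2 * np.pi * x)          # smooth initial perturbation

    def lap(v):                                # periodic second difference
        return (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx**2

    def energy(v):                             # discrete Ginzburg-Landau energy
        grad = (np.roll(v, -1) - v) / dx
        return float(np.sum(0.5 * eps**2 * grad**2 + 0.25 * (v**2 - 1) ** 2) * dx)

    for step in range(steps):
        u = u + dt * (eps**2 * lap(u) - (u**3 - u))   # explicit Euler step
        if step % 500 == 0:
            print(step, energy(u))             # should decrease monotonically
    ```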

  20. A semi-physical simulation platform of attitude determination and control system for satellite

    Directory of Open Access Journals (Sweden)

    Yuanjin Yu

    2016-05-01

    Full Text Available A semi-physical simulation platform for an attitude determination and control system is proposed to verify the attitude estimator and controller on the ground. The platform comprises a simulation target, a host PC, several attitude sensors, and actuators. The simulation target is composed of a central processing unit board running the VxWorks operating system and several input/output boards connected via the Compact Peripheral Component Interconnect bus. The executable programs on the target are automatically generated from the simulation models in Simulink using the Real-Time Workshop of MATLAB. A three-axis gyroscope, a three-axis magnetometer, a sun sensor, a star tracker, three flywheels, and a Global Positioning System receiver are connected to the simulation target, forming the attitude control loop of a satellite. The simulation models of the attitude determination and control system are described in detail. Finally, the semi-physical simulation platform is used to demonstrate the feasibility and soundness of the control scheme of a micro-satellite. A comparison of the results of the numerical simulation in Simulink and the semi-physical simulation shows that the platform is effective and that the control scheme successfully achieves three-axis stabilization.

  1. Multi-Physics Demonstration Problem with the SHARP Reactor Simulation Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Merzari, E. [Argonne National Lab. (ANL), Argonne, IL (United States); Shemon, E. R. [Argonne National Lab. (ANL), Argonne, IL (United States); Yu, Y. Q. [Argonne National Lab. (ANL), Argonne, IL (United States); Thomas, J. W. [Argonne National Lab. (ANL), Argonne, IL (United States); Obabko, A. [Argonne National Lab. (ANL), Argonne, IL (United States); Jain, Rajeev [Argonne National Lab. (ANL), Argonne, IL (United States); Mahadevan, Vijay [Argonne National Lab. (ANL), Argonne, IL (United States); Tautges, Timothy [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Solberg, Jerome [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ferencz, Robert Mark [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Whitesides, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-12-21

    This report describes the use of SHARP to perform a first-of-a-kind analysis of the core radial expansion phenomenon in an SFR. This effort required significant advances in the framework used to drive the coupled simulations, manipulate the mesh in response to the deformation of the geometry, and generate the necessary modified mesh files. Furthermore, the model geometry is fairly complex, and consistent mesh generation for the three physics modules required significant effort. Fully-integrated simulations of a 7-assembly mini-core test problem have been performed, and the results are presented here. Physics models of a full-core model of the Advanced Burner Test Reactor (ABTR) have also been developed for each of the three physics modules. Standalone results of each of the three physics modules for the ABTR are presented here, which provides a demonstration of the feasibility of the fully-integrated simulation.

  2. High performance liquid chromatographic determination of ...

    African Journals Online (AJOL)

    STORAGESEVER

    2010-02-08

    ... high performance liquid chromatography (HPLC) grade .... applications. These are important requirements if the reagent is to be applicable to on-line pre- or post-column derivatisation in a possible automation of the analytical...

  3. Analog circuit design designing high performance amplifiers

    CERN Document Server

    Feucht, Dennis

    2010-01-01

    The third volume Designing High Performance Amplifiers applies the concepts from the first two volumes. It is an advanced treatment of amplifier design/analysis emphasizing both wideband and precision amplification.

  4. Strategies and Experiences Using High Performance Fortran

    National Research Council Canada - National Science Library

    Shires, Dale

    2001-01-01

    .... High Performance Fortran (HPF) is a relatively new addition to the Fortran dialect. It is an attempt to provide an efficient, high-level Fortran parallel programming language for the latest generation of parallel computers, though its success has been debatable...

  5. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  6. Gradient High Performance Liquid Chromatography Method ...

    African Journals Online (AJOL)

    Purpose: To develop a gradient high performance liquid chromatography (HPLC) method for the simultaneous determination of phenylephrine (PHE) and ibuprofen (IBU) in solid ..... nimesulide, phenylephrine hydrochloride, chlorpheniramine maleate and caffeine anhydrous in pharmaceutical dosage form. Acta Pol.

  7. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower maintenance costs have essentially contributed to the growing popularity of cloud computing in all spheres of life, especially in business. In fact, cloud computing offers even more than this. With the use of virtual computing clusters, a runtime environment for high performance computing can also be efficiently implemented in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  8. Carbon nanomaterials for high-performance supercapacitors

    OpenAIRE

    Tao Chen; Liming Dai

    2013-01-01

    Owing to their high energy density and power density, supercapacitors exhibit great potential as high-performance energy sources for advanced technologies. Recently, carbon nanomaterials (especially, carbon nanotubes and graphene) have been widely investigated as effective electrodes in supercapacitors due to their high specific surface area, excellent electrical and mechanical properties. This article summarizes the recent progresses on the development of high-performance supercapacitors bas...

  9. Delivering high performance BWR fuel reliably

    Energy Technology Data Exchange (ETDEWEB)

    Schardt, J.F. [GE Nuclear Energy, Wilmington, NC (United States)

    1998-07-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel that can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  10. HPTA: High-Performance Text Analytics

    OpenAIRE

    Vandierendonck, Hans; Murphy, Karen; Arif, Mahwish; Nikolopoulos, Dimitrios S.

    2017-01-01

    One of the main targets of data analytics is unstructured data, which primarily involves textual data. High-performance processing of textual data is non-trivial. We present the HPTA library for high-performance text analytics. The library helps programmers to map textual data to a dense numeric representation, which can be handled more efficiently. HPTA encapsulates three performance optimizations: (i) efficient memory management for textual data, (ii) parallel computation on associative dat...

  11. High-performance computing — an overview

    Science.gov (United States)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.
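
    Of the programming models listed above, message passing is the easiest to show in miniature. The sketch below assumes the mpi4py package and would be launched with, e.g., mpiexec -n 2 python demo.py.

    ```python
    # Minimal message-passing example (assumes mpi4py is installed);
    # run with: mpiexec -n 2 python demo.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        comm.send({"payload": 42}, dest=1, tag=0)   # rank 0 sends a message
    elif rank == 1:
        msg = comm.recv(source=0, tag=0)            # rank 1 receives it
        print("rank 1 received", msg)
    ```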

  12. Workshop on data acquisition and trigger system simulations for high energy physics

    International Nuclear Information System (INIS)

    1992-01-01

    This report discusses the following topics: DAQSIM: A Data Acquisition System Simulation Tool; Front End and DCC Simulations for the SDC Straw Tube System; Simulation of Non-Blocking Data Acquisition Architectures; Simulation Studies of the SDC Data Collection Chip; Correlation Studies of the Data Collection Circuit & The Design of a Queue for this Circuit; Fast Data Compression & Transmission from a Silicon Strip Wafer; Simulation of SCI Protocols in Modsim; Visual Design with vVHDL; Stochastic Simulation of Asynchronous Buffers; SDC Trigger Simulations; Trigger Rates, DAQ & Online Processing at the SSC; Planned Enhancements to MODSIM II & SIMOBJECT -- an Overview; DAGAR -- A Synthesis System; Proposed Silicon Compiler for Physics Applications; Timed LOTOS in a PROLOG Environment: an Algebraic Language for Simulation; Modeling and Simulation of an Event Builder for High Energy Physics Data Acquisition Systems; A Verilog Simulation for the CDF DAQ; Simulation to Design with Verilog; The DZero Data Acquisition System: Model and Measurements; DZero Trigger Level 1.5 Modeling; Strategies Optimizing Data Load in the DZero Triggers; Simulation of the DZero Level 2 Data Acquisition System; A Fast Method for Calculating DZero Level 1 Jet Trigger Properties and Physics Input to DAQ Studies

  13. Workshop on data acquisition and trigger system simulations for high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1992-12-31

    This report discusses the following topics: DAQSIM: A Data Acquisition System Simulation Tool; Front End and DCC Simulations for the SDC Straw Tube System; Simulation of Non-Blocking Data Acquisition Architectures; Simulation Studies of the SDC Data Collection Chip; Correlation Studies of the Data Collection Circuit & The Design of a Queue for this Circuit; Fast Data Compression & Transmission from a Silicon Strip Wafer; Simulation of SCI Protocols in Modsim; Visual Design with vVHDL; Stochastic Simulation of Asynchronous Buffers; SDC Trigger Simulations; Trigger Rates, DAQ & Online Processing at the SSC; Planned Enhancements to MODSIM II & SIMOBJECT -- an Overview; DAGAR -- A Synthesis System; Proposed Silicon Compiler for Physics Applications; Timed LOTOS in a PROLOG Environment: an Algebraic Language for Simulation; Modeling and Simulation of an Event Builder for High Energy Physics Data Acquisition Systems; A Verilog Simulation for the CDF DAQ; Simulation to Design with Verilog; The DZero Data Acquisition System: Model and Measurements; DZero Trigger Level 1.5 Modeling; Strategies Optimizing Data Load in the DZero Triggers; Simulation of the DZero Level 2 Data Acquisition System; A Fast Method for Calculating DZero Level 1 Jet Trigger Properties and Physics Input to DAQ Studies.

  14. High performance visual display for HENP detectors

    CERN Document Server

    McGuigan, M; Spiletic, J; Fine, V; Nevski, P

    2001-01-01

    A high-end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detectors. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long, even on a powerful workstation. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on a BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI etc., to avoid conflicting with the many graphics development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of detector and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactiv...

  15. Tech-X Corporation releases simulation code for solving complex problems in plasma physics : VORPAL code provides a robust environment for simulating plasma processes in high-energy physics, IC fabrications and material processing applications

    CERN Multimedia

    2005-01-01

    Tech-X Corporation releases simulation code for solving complex problems in plasma physics : VORPAL code provides a robust environment for simulating plasma processes in high-energy physics, IC fabrications and material processing applications

  16. Physics for JavaScript games, animation, and simulations with HTML5 Canvas

    CERN Document Server

    Dobre, Adrian

    2014-01-01

    Have you ever wanted to include believable physical behaviors in your games and projects to give them that extra edge? Physics for JavaScript Games, Animation, and Simulations teaches you how to incorporate real physics, such as gravity, friction, and buoyancy, into your HTML5 games, animations, and simulations. It also includes more advanced topics, such as particle systems, which are essential for creating effects such as sparks or smoke. The book also addresses the key issue of balancing accuracy and simplicity in your games and simulations, and the final chapters provide you with the infor

  17. Eighteenth Workshop on Recent Developments in Computer Simulation Studies in Condensed Matter Physics

    CERN Document Server

    Landau, David P; Schüttler, Heinz-Bernd; Computer Simulation Studies in Condensed-Matter Physics XVIII

    2006-01-01

    This volume represents a "status report" emanating from presentations made during the 18th Annual Workshop on Computer Simulation Studies in Condensed Matter Physics at the Center for Simulational Physics at the University of Georgia in March 2005. It provides a broad overview of the most recent advances in the field, spanning the range from statistical physics to soft condensed matter and biological systems. Results on nanostructures and materials are included, as are several descriptions of advances in quantum simulations and quantum computing as well as methodological advances.

  18. High performance LiNi0.5Mn1.5O4 cathode by Al-coating and Al3+-doping through a physical vapor deposition method

    International Nuclear Information System (INIS)

    Sun, Peng; Ma, Ying; Zhai, Tianyou; Li, Huiqiao

    2016-01-01

    Highlights: • Metal Al was used as an electrically conductive coating material for LiNi0.5Mn1.5O4. • A uniform surface coating layer of metal Al was successfully achieved with adjustable thickness through a physical vapor deposition technology. • Al3+-doped LiNi0.5Mn1.5O4 can be easily obtained by further direct annealing of Al-coated LiNi0.5Mn1.5O4 in air. • The conductive Al coating layer can greatly improve the rate performance and cycle stability of LiNi0.5Mn1.5O4. - Abstract: In this work, spinel LiNi0.5Mn1.5O4 (LNMO) hollow microspheres are synthesized by an impregnation method using microsphere MnO2 as both the precursor and template. To enhance the electrical conductivity of LNMO, metal Al was employed for the first time as a coating material for LNMO. Through an electron-beam vapor deposition approach, the surface of LNMO can be easily coated with a tight layer of Al nanoparticles of adjustable thickness. By further annealing the Al-coated sample at 800 °C in air, Al3+-doped LNMO can be obtained. The effects of Al-coating and Al3+-doping on the sample morphology and structure are investigated by SEM, TEM, XRD and FT-IR. The electrochemical properties of Al-coated LNMO and Al3+-doped LNMO are measured in comparison with bare LNMO by charge/discharge tests and electrochemical impedance spectroscopy (EIS). The results show that both Al-coating and Al3+-doping can greatly enhance the cycle performance and rate capability of LNMO. Al-coated LNMO in particular shows the lowest battery impedance, due to the conductive Al coating layer, and thus delivers the best rate performance of the three. The physical coating procedure used in this work may provide a new facile modification approach for other cathode materials.

  19. Development of CANDU prototype fuel handling simulator - concept and some simulation results with physical network modeling approach

    Energy Technology Data Exchange (ETDEWEB)

    Xu, X.P. [Candu Energy Inc, Mississauga, Ontario (Canada)

    2012-07-01

    This paper reviewed the need for a fuel handling (FH) simulator in training operators and maintenance personnel, in FH design enhancement based on operating experience (OPEX), and in the potential application of Virtual Reality (VR) based simulation technology. Modeling and simulation of the fuelling machine (FM) magazine drive plant (one of the CANDU FH subsystems) is described. The work established the feasibility of modeling and simulating a physical FH drive system using the physical network approach and computer software tools. The concept and approach can be applied similarly to create the other FH subsystem plant models, which are expected to be integrated with control modules to develop a master FH control model and, further, to create a virtual FH system. (author)

  20. Development of CANDU prototype fuel handling simulator - concept and some simulation results with physical network modeling approach

    International Nuclear Information System (INIS)

    Xu, X.P.

    2012-01-01

    This paper reviewed the need for a fuel handling (FH) simulator in training operators and maintenance personnel, in FH design enhancement based on operating experience (OPEX), and in the potential application of Virtual Reality (VR) based simulation technology. Modeling and simulation of the fuelling machine (FM) magazine drive plant (one of the CANDU FH subsystems) is described. The work established the feasibility of modeling and simulating a physical FH drive system using the physical network approach and computer software tools. The concept and approach can be applied similarly to create the other FH subsystem plant models, which are expected to be integrated with control modules to develop a master FH control model and, further, to create a virtual FH system. (author)

  1. High performance visual display for HENP detectors

    International Nuclear Information System (INIS)

    McGuigan, Michael; Smith, Gordon; Spiletic, John; Fine, Valeri; Nevski, Pavel

    2001-01-01

    A high-end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detectors. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long, even on a powerful workstation. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on a BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI etc., to avoid conflicting with the many graphics development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of detector and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactive control, including the ability to slice, search and mark areas of the detector. We incorporate the ability to make a high quality still image of a view of the detector and the ability to generate animations and a fly-through of the detector, and to output these to MPEG or VRML models. We develop data compression hardware and software so that remote interactive visualization will be possible among dispersed collaborators. We obtain real-time visual display for events accumulated during simulations.

  2. C++ Toolbox for Object-Oriented Modeling and Dynamic Simulation of Physical Systems

    DEFF Research Database (Denmark)

    Wagner, Falko Jens; Poulsen, Mikael Zebbelin

    1999-01-01

    This paper presents the efforts made in an ongoing project that exploits the advantages of using object-oriented methodologies for describing and simulating dynamical systems. The background for this work is a search for new and better ways to simulate physical systems.

  3. Validation of newly developed physical laparoscopy simulator in transabdominal preperitoneal (TAPP) inguinal hernia repair.

    Science.gov (United States)

    Nishihara, Yuichi; Isobe, Yoh; Kitagawa, Yuko

    2017-12-01

    A realistic simulator for transabdominal preperitoneal (TAPP) inguinal hernia repair would enhance surgeons' training experience before they enter the operating theater. The purpose of this study was to create a novel physical simulator for TAPP inguinal hernia repair and obtain surgeons' opinions regarding its efficacy. Our novel TAPP inguinal hernia repair simulator consists of a physical laparoscopy simulator and a handmade organ replica model. The physical laparoscopy simulator was created by three-dimensional (3D) printing technology, and it represents the trunk of the human body and the bendability of the abdominal wall under pneumoperitoneal pressure. The organ replica model was manually created by assembling materials. The TAPP inguinal hernia repair simulator allows for the performance of all procedures required in TAPP inguinal hernia repair. Fifteen general surgeons performed TAPP inguinal hernia repair using our simulator. Their opinions were scored on a 5-point Likert scale. All participants strongly agreed that the 3D-printed physical simulator and organ replica model were highly useful for TAPP inguinal hernia repair training (median, 5 points) and TAPP inguinal hernia repair education (median, 5 points). They felt that the simulator would be effective for TAPP inguinal hernia repair training before entering the operating theater. All surgeons considered that this simulator should be introduced in the residency curriculum. We successfully created a physical simulator for TAPP inguinal hernia repair training using 3D printing technology and a handmade organ replica model created with inexpensive, readily accessible materials. Preoperative TAPP inguinal hernia repair training using this simulator and organ replica model may be of benefit in the training of all surgeons. All general surgeons involved in the present study felt that this simulator and organ replica model should be used in their residency curriculum.

  4. Generation of initial geometries for the simulation of the physical system in the DualSPHysics code

    International Nuclear Information System (INIS)

    Segura Q, E.

    2013-01-01

    In the diverse research areas of the Instituto Nacional de Investigaciones Nucleares (ININ) there are different activities related to science and technology; one of great interest is the study and treatment of the collection and storage of radioactive waste. The ININ project on the simulation of pollutant diffusion in soil through a porous medium (third stage) therefore requires, as an inherent part of the problem, the generation of the initial geometry of the physical system. For the simulation, the smoothed particle hydrodynamics (SPH) method is implemented. This method runs in the DualSPHysics code, which has great versatility and the ability to simulate phenomena of any physical system where hydrodynamic aspects combine. In order to simulate a physical system with the DualSPHysics code, the initial geometry of the system of interest must be preset and then included in the input file of the code. The simulation sets the initial geometry through regular geometric bodies positioned at different points in space. This was done through a programming language (Fortran, C++, Java, etc.). This methodology will provide the basis for simulating more complex geometries and positions in the future. (Author)
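
    DualSPHysics itself takes its geometry from a GenCase XML case definition; as a language-agnostic illustration of the idea of presetting an initial geometry from regular bodies, the following Python stand-in fills a box with regularly spaced particles.

    ```python
    # Illustrative stand-in (not DualSPHysics's GenCase input): fill a box
    # with regularly spaced SPH particles to serve as an initial geometry.
    def fill_box(origin, size, dp):
        """Regular lattice of particle positions with spacing dp inside a box."""
        ox, oy, oz = origin
        nx, ny, nz = (int(round(s / dp)) for s in size)
        return [(ox + i * dp, oy + j * dp, oz + k * dp)
                for i in range(nx) for j in range(ny) for k in range(nz)]

    particles = fill_box(origin=(0.0, 0.0, 0.0), size=(0.1, 0.1, 0.05), dp=0.01)
    print(len(particles), "particles; first at", particles[0])
    ```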

  5. The use of physical model simulation to emulate an AGV material handling system

    International Nuclear Information System (INIS)

    Hurley, R.G.; Coffman, P.E.; Dixon, J.R.; Walacavage, J.G.

    1987-01-01

    This paper describes an application of physical modeling to the simulation of a prototype AGV (Automatic Guided Vehicle) material handling system. Physical modeling is the study of complex automated manufacturing and material handling systems through the use of small scale components controlled by mini and/or microcomputers. By modeling the mechanical operations of the proposed AGV material handling system, it was determined that control algorithms and AGV dispatch rules could be developed and evaluated. This paper presents a brief explanation of physical modeling as a simulation tool and addresses in detail the development of the control algorithm, dispatching rules, and a prototype physical model of a flexible machining system

  6. Simulation technology achievement of students in physical education classes.

    Directory of Open Access Journals (Sweden)

    Timoshenko A.V.

    2010-06-01

    Full Text Available The technology of evaluating student progress in physical exercise classes was studied. The possibility of using the modelling method in the educational process to determine student progress was examined. The value of mathematical models in pedagogical activity in the field of physical culture and sport is established. Mathematical models are proposed for evaluating the progress of students in swimming classes. The possibility of developing progress-evaluation models is also shown for sports games, track and field, and gymnastics.

  7. U.S. Army Physical Demands Study: Reliability of Simulations of Physically Demanding Tasks Performed by Combat Arms Soldiers.

    Science.gov (United States)

    Foulis, Stephen A; Redmond, Jan E; Frykman, Peter N; Warr, Bradley J; Zambraski, Edward J; Sharp, Marilyn A

    2017-12-01

    Foulis, SA, Redmond, JE, Frykman, PN, Warr, BJ, Zambraski, EJ, and Sharp, MA. U.S. Army physical demands study: reliability of simulations of physically demanding tasks performed by combat arms soldiers. J Strength Cond Res 31(12): 3245-3252, 2017-Recently, the U.S. Army has mandated that soldiers must successfully complete the physically demanding tasks of their job to graduate from their Initial Military Training. Evaluating individual soldiers in the field is difficult; however, simulations of these tasks may aid in the assessment of soldiers' abilities. The purpose of this study was to determine the reliability of simulated physical soldiering tasks relevant to combat arms soldiers. Three cohorts of ∼50 soldiers repeated a subset of 8 simulated tasks 4 times over 2 weeks. Simulations included: sandbag carry, casualty drag, and casualty evacuation from a vehicle turret, move under direct fire, stow ammunition on a tank, load the main gun of a tank, transferring ammunition with a field artillery supply vehicle, and a 4-mile foot march. Reliability was assessed using intraclass correlation coefficients (ICCs), standard errors of measurement (SEMs), and 95% limits of agreement. Performance of the casualty drag and foot march did not improve across trials (p > 0.05), whereas improvements, suggestive of learning effects, were observed on the remaining 6 tasks (p ≤ 0.05). The ICCs ranged from 0.76 to 0.96, and the SEMs ranged from 3 to 16% of the mean. These 8 simulated tasks show high reliability. Given proper practice, they are suitable for evaluating the ability of Combat Arms Soldiers to complete the physical requirements of their jobs.
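
    For readers unfamiliar with the statistics used above, the standard error of measurement follows from the ICC and the between-subject spread as SEM = SD·sqrt(1 − ICC); the sketch below works this through with illustrative numbers, not the study's data.

    ```python
    # SEM = SD * sqrt(1 - ICC); the 95% limits of agreement for a test-retest
    # difference are roughly +/- 1.96 * sqrt(2) * SEM. Numbers are illustrative.
    import math

    sd, icc = 12.0, 0.90          # between-subject SD (seconds) and reliability
    sem = sd * math.sqrt(1.0 - icc)
    loa = 1.96 * math.sqrt(2.0) * sem
    print(f"SEM = {sem:.1f} s; 95% limits of agreement = +/-{loa:.1f} s")
    ```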

  8. Assessing Practical Skills in Physics Using Computer Simulations

    Science.gov (United States)

    Walsh, Kevin

    2018-01-01

    Computer simulations have been used very effectively for many years in the teaching of science but the focus has been on cognitive development. This study, however, is an investigation into the possibility that a student's experimental skills in the real-world environment can be judged via the undertaking of a suitably chosen computer simulation…

  9. Simulation of particle suspensions at the Institute for Computational Physics

    NARCIS (Netherlands)

    Harting, J.D.R.; Hecht, M.; Herrmann, H.J.; Nagel, W.E.; Jäger, W.; Resch, M.M.

    2006-01-01

    In this report we describe some of our projects related to the simulation of particle-laden flows. We give a short introduction to the topic and the methods used, namely the Stochastic Rotation Dynamics and the lattice Boltzmann method. Then, we show results from our work related to the behaviour of

  10. CMS: Simulated Physical-Biogeochemical Data, SABGOM Model, Gulf of Mexico, 2005-2010

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset contains monthly mean ocean surface physical and biogeochemical data for the Gulf of Mexico simulated by the South Atlantic Bight and Gulf of Mexico...

  11. Interferences and events on epistemic shifts in physics through computer simulations

    CERN Document Server

    Warnke, Martin

    2017-01-01

    Computer simulations are omnipresent media in today's knowledge production. For scientific endeavors such as the detection of gravitational waves and the exploration of subatomic worlds, simulations are essential; however, the epistemic status of computer simulations is rather controversial as they are neither just theory nor just experiment. Therefore, computer simulations have challenged well-established insights and common scientific practices as well as our very understanding of knowledge. This volume contributes to the ongoing discussion on the epistemic position of computer simulations in a variety of physical disciplines, such as quantum optics, quantum mechanics, and computational physics. Originating from an interdisciplinary event, it shows that accounts of contemporary physics can constructively interfere with media theory, philosophy, and the history of science.

  12. Enabling high performance computational science through combinatorial algorithms

    International Nuclear Information System (INIS)

    Boman, Erik G; Bozdag, Doruk; Catalyurek, Umit V; Devine, Karen D; Gebremedhin, Assefaw H; Hovland, Paul D; Pothen, Alex; Strout, Michelle Mills

    2007-01-01

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.
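
    The kernel behind the parallel graph coloring work mentioned above can be sketched with a greedy distance-1 coloring; vertices that receive the same colour share no edge and can therefore be processed concurrently, e.g. in automatic differentiation or mesh updates.

    ```python
    # Greedy distance-1 graph coloring: a serial sketch of the kernel that
    # the parallel coloring algorithms accelerate and distribute.
    def greedy_coloring(adj):
        colors = {}
        for v in adj:                      # visiting order affects colour count
            used = {colors[u] for u in adj[v] if u in colors}
            c = 0
            while c in used:
                c += 1
            colors[v] = c
        return colors

    graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
    print(greedy_coloring(graph))          # e.g. {0: 0, 1: 1, 2: 2, 3: 0}
    ```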

  13. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  14. Enabling high performance computational science through combinatorial algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Bozdag, Doruk [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Catalyurek, Umit V [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Devine, Karen D [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw H [Computer Science and Center for Computational Science, Old Dominion University (United States); Hovland, Paul D [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science and Center for Computational Science, Old Dominion University (United States); Strout, Michelle Mills [Computer Science, Colorado State University (United States)

    2007-07-15

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.

  15. Wavy channel transistor for area efficient high performance operation

    KAUST Repository

    Fahad, Hossain M.

    2013-04-05

    We report a wavy channel FinFET-like transistor where the channel is wavy to increase its width without any area penalty, thereby increasing its drive current. Through simulation and experiments, we show that such a device architecture is capable of high performance operation compared to conventional FinFETs, with comparatively higher area efficiency, lower chip latency and lower power consumption.

  16. Modelling physics detectors in a computer aided design system for simulation purposes

    International Nuclear Information System (INIS)

    Ahvenainen, J.; Oksakivi, T.; Vuoskoski, J.

    1995-01-01

    The possibility of transferring physics detector models from computer aided design systems into physics simulation packages like GEANT is receiving increasing attention. The problem of exporting detector models constructed in CAD systems into GEANT is well known. We discuss the problem and describe an application, called DDT, which allows one to design detector models in a CAD system and then transfer the models into GEANT for simulation purposes. (orig.)

  17. Perception of realism during mock resuscitations by pediatric housestaff: the impact of simulated physical features.

    Science.gov (United States)

    Donoghue, Aaron J; Durbin, Dennis R; Nadel, Frances M; Stryjewski, Glenn R; Kost, Suzanne I; Nadkarni, Vinay M

    2010-02-01

    Physical signs that can be seen, heard, and felt are one of the cardinal features that convey realism in patient simulations. In critically ill children, physical signs are relied on for clinical management despite their subjective nature. Current technology is limited in its ability to effectively simulate some of these subjective signs; at the same time, data supporting the educational benefit of simulated physical features as a distinct entity are lacking. We surveyed pediatric housestaff as to the realism of scenarios with and without simulated physical signs. Residents at three children's hospitals underwent a before-and-after assessment of performance in mock resuscitations requiring Pediatric Advanced Life Support (PALS), with a didactic review of PALS as the intervention between the assessments. Each subject was randomized to a simulator with physical features either activated (simulator group) or deactivated (mannequin group). Subjects were surveyed as to the realism of the scenarios. Univariate analysis of responses was done between groups. Subjects in the high-fidelity group were surveyed as to the relative importance of specific physical features in enhancing realism. Fifty-one subjects completed all surveys. Subjects in the high-fidelity group rated all scenarios more highly than low-fidelity subjects; the difference achieved statistical significance in scenarios featuring a patient in asystole or pulseless ventricular tachycardia (P < 0.05). PALS scenarios were rated as highly realistic by pediatric residents. Slight differences existed between subjects exposed to simulated physical features and those not exposed to them; these differences were most pronounced in scenarios involving pulselessness. Specific physical features were rated as more important than others by subjects. Data from these surveys may be informative in designing future simulation technology.

  18. High performance bio-integrated devices

    Science.gov (United States)

    Kim, Dae-Hyeong; Lee, Jongha; Park, Minjoon

    2014-06-01

    In recent years, personalized electronics for medical applications have attracted much attention with the rise of smartphones, because coupling such devices with smartphones enables continuous health monitoring in patients' daily lives. In particular, high-performance biomedical electronics integrated with the human body are expected to open new opportunities in ubiquitous healthcare. However, the mechanical and geometrical constraints inherent in all standard forms of high-performance rigid wafer-based electronics raise unique integration challenges with biotic entities. Here, we describe materials and design constructs for high-performance skin-mountable bio-integrated electronic devices, which incorporate arrays of single-crystalline inorganic nanomembranes. The resulting electronic devices include flexible and stretchable electrophysiology electrodes and sensors coupled with active electronic components. These advances in bio-integrated systems create new directions in personalized health monitoring and/or human-machine interfaces.

  19. Strategy Guideline. Partnering for High Performance Homes

    Energy Technology Data Exchange (ETDEWEB)

    Prahl, Duncan [IBACOS, Inc., Pittsburgh, PA (United States)

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties associated in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  20. Photonuclear physics models, simulations, and experiments for nuclear nonproliferation

    International Nuclear Information System (INIS)

    Clarke, S.; Downar, T.; Pozzi, S.; Flaska, M.; Mihalczo, J.; Padovani, E.; Hunt, A.

    2007-01-01

    This work illustrates a methodology based on photon interrogation and coincidence counting for determining the characteristics of fissile material. The feasibility of the proposed methods was demonstrated using the Monte Carlo-based MCNPX/MCNP-PoliMi code system capable of simulating the full statistics of the neutron and photon field generated by the photon interrogation of fissile and non-fissile materials with high-energy photons. These simulations were compared to the prompt time-of-flight data taken at the Idaho Accelerator Center immediately following the photon interrogation of a depleted uranium target. The results agree very well with the measured data for interrogation with 15-MeV endpoint Bremsstrahlung photons at two different detector separation distances. (authors)

  1. Simulating direct shear tests with the Bullet physics library: A validation study.

    Science.gov (United States)

    Izadi, Ehsan; Bezuijen, Adam

    2018-01-01

    This study focuses on the possible uses of physics engines, and more specifically the Bullet physics library, to simulate granular systems. Physics engines are employed extensively in the video gaming, animation and movie industries to create physically plausible scenes. They are designed to deliver a fast, stable, and optimal simulation of certain systems such as rigid bodies, soft bodies and fluids. This study focuses exclusively on simulating granular media in the context of rigid body dynamics with the Bullet physics library. The first step was to validate the results of simulations of direct shear testing on uniform-sized metal beads against laboratory experiments. The difference in the average angle of mobilized friction was found to be only 1.0°. In addition, a very close match was found between dilatancy in the laboratory samples and in the simulations. A comprehensive study was then conducted to determine the failure and post-failure mechanism. We conclude with the presentation of a simulation of a direct shear test on real soil which demonstrated that Bullet has all the capabilities needed to be used as software for simulating granular systems.
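
    To make the setup concrete, the sketch below packs rigid spheres under gravity using Bullet's Python binding, pybullet. This is an illustration only, not the authors' code: a real direct shear test would add a split shear box and impose a horizontal displacement on its upper half while recording shear force and dilation.

        # Illustrative sketch (not the authors' setup): settling a bed of rigid
        # spheres under gravity with Bullet via pybullet.
        import pybullet as p

        p.connect(p.DIRECT)                      # headless physics server
        p.setGravity(0, 0, -9.81)
        p.createMultiBody(0, p.createCollisionShape(p.GEOM_PLANE))  # static floor

        sphere = p.createCollisionShape(p.GEOM_SPHERE, radius=0.01)
        grains = []
        for i in range(5):
            for j in range(5):
                for k in range(4):
                    body = p.createMultiBody(
                        baseMass=0.01,
                        baseCollisionShapeIndex=sphere,
                        basePosition=[0.025 * i, 0.025 * j, 0.05 + 0.025 * k])
                    p.changeDynamics(body, -1, lateralFriction=0.5)  # grain friction
                    grains.append(body)

        for _ in range(1000):                    # let the packing settle
            p.stepSimulation()

        top = max(p.getBasePositionAndOrientation(b)[0][2] for b in grains)
        print(f"height of packing after settling: {top:.4f} m")
        p.disconnect()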

  2. Simbios: an NIH national center for physics-based simulation of biological structures.

    Science.gov (United States)

    Delp, Scott L; Ku, Joy P; Pande, Vijay S; Sherman, Michael A; Altman, Russ B

    2012-01-01

    Physics-based simulation provides a powerful framework for understanding biological form and function. Simulations can be used by biologists to study macromolecular assemblies and by clinicians to design treatments for diseases. Simulations help biomedical researchers understand the physical constraints on biological systems as they engineer novel drugs, synthetic tissues, medical devices, and surgical interventions. Although individual biomedical investigators make outstanding contributions to physics-based simulation, the field has been fragmented. Applications are typically limited to a single physical scale, and individual investigators usually must create their own software. These conditions created a major barrier to advancing simulation capabilities. In 2004, we established a National Center for Physics-Based Simulation of Biological Structures (Simbios) to help integrate the field and accelerate biomedical research. In 6 years, Simbios has become a vibrant national center, with collaborators in 16 states and eight countries. Simbios focuses on problems at both the molecular scale and the organismal level, with a long-term goal of uniting these in accurate multiscale simulations.

  3. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

    Gain critical insight into the parallel I/O ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O hardware…

  4. Quantum Accelerators for High-Performance Computing Systems

    OpenAIRE

    Britt, Keith A.; Mohiyaddin, Fahd A.; Humble, Travis S.

    2017-01-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantu...

  5. Simulating next-generation Cyber-physical computing platforms

    OpenAIRE

    Burgio, Paolo; Álvarez Martínez, Carlos; Ayguadé Parra, Eduard; Filgueras Izquierdo, Antonio; Jiménez González, Daniel; Martorell Bofill, Xavier; Navarro, Nacho; Giorgi, Roberto

    2015-01-01

    In specific domains, such as cyber-physical systems, platforms are quickly evolving to include multiple (many-) cores and programmable logic in a single system-on-chip, while including interfaces to commodity sensors/actuators. Programmable logic (e.g., FPGA) allows for greater flexibility and dependability. However, the task of extracting the performance/watt potential of heterogeneous many-cores is often demanded at the application level, and this h...

  6. Anticipated simulation of Angra-1 start-up physical tests

    International Nuclear Information System (INIS)

    Fernandes, V.B.; Perrotta, J.A.; Silva Ipojuca, T. da; Ponzoni Filho, P.

    1981-01-01

    Some results foreseen by the Department of Nuclear Fuel (DCN.O) at Furnas for the measurements to be carried out during the start-up integrated tests of Angra-1 are presented. All the forecasting is based on DCN.O's own correlation methodology, developed from basic physical principles, using computer codes written by the authors or public computer codes adapted to this methodology. (E.G.) [pt

  7. Physics-based simulations of the impacts forest management practices have on hydrologic response

    Science.gov (United States)

    Adrianne Carr; Keith Loague

    2012-01-01

    The impacts of logging on near-surface hydrologic response at the catchment and watershed scales were examined quantitatively using numerical simulation. The simulations were conducted with the Integrated Hydrology Model (InHM) for the North Fork of Caspar Creek Experimental Watershed, located near Fort Bragg, California. InHM is a comprehensive physics-based...

  8. Introduction to Stochastic Simulations for Chemical and Physical Processes: Principles and Applications

    Science.gov (United States)

    Weiss, Charles J.

    2017-01-01

    An introduction to digital stochastic simulations for modeling a variety of physical and chemical processes is presented. Despite the importance of stochastic simulations in chemistry, the prevalence of turn-key software solutions can impose a layer of abstraction between the user and the underlying approach obscuring the methodology being…
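
    In the spirit of the approach described above, the following is a minimal, from-scratch stochastic simulation of a simple physical/chemical process, first-order decay, written so that no turn-key software obscures the method. This is an illustration in Python, not material from the article.

        # Each surviving particle decays with probability k*dt per time step;
        # the ensemble should follow N(t) ~ N0 * exp(-k*t).
        import random

        def simulate_decay(n0=10_000, k=0.5, dt=0.01, t_max=5.0, seed=1):
            random.seed(seed)
            n, t, history = n0, 0.0, []
            while t < t_max:
                n -= sum(1 for _ in range(n) if random.random() < k * dt)
                t += dt
                history.append((t, n))
            return history

        if __name__ == "__main__":
            for t, n in simulate_decay()[::100]:
                print(f"t = {t:4.2f}  N = {n:5d}")  # compare with 10000*exp(-0.5*t)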

  9. Basic Guidelines to Introduce Electric Circuit Simulation Software in a General Physics Course

    Science.gov (United States)

    Moya, A. A.

    2018-01-01

    The introduction of electric circuit simulation software for undergraduate students in a general physics course is proposed in order to contribute to the constructive learning of electric circuit theory. This work focuses on the lab exercises based on dc, transient and ac analysis in electric circuits found in introductory physics courses, and…

  10. Circuit simulation and physical implementation for a memristor-based colpitts oscillator

    OpenAIRE

    Hongmin Deng; Dongping Wang

    2017-01-01

    This paper implements two kinds of memristor-based Colpitts oscillators, namely, circuits in which the memristor is added into the feedback network of the oscillator in parallel and in series, respectively. First, a MULTISIM simulation circuit for the memristive Colpitts oscillator is built, where an emulator constructed from off-the-shelf components is used to replace the memristor. Then the physical system is implemented based on the MULTISIM simulation circuit. Circuit simulation an...

  11. Physical Mapping Using Simulated Annealing and Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Vesterstrøm, Jacob Svaneborg

    2003-01-01

    optimization method when searching for an ordering of the fragments in PM. In this paper, we applied an evolutionary algorithm to the problem, and compared its performance to that of SA and local search on simulated PM data, in order to determine the important factors in finding a good ordering of the segments. … The analysis highlights the importance of a good PM model, a well-correlated fitness function, and high quality hybridization data. We suggest that future work in PM should focus on the design of more reliable fitness functions and on developing error-screening algorithms.

  12. Tsunami Simulators in Physical Modelling - Concept to Practical Solutions

    Science.gov (United States)

    Chandler, Ian; Allsop, William; Robinson, David; Rossetto, Tiziana; McGovern, David; Todd, David

    2017-04-01

    Whilst many researchers have conducted simple 'tsunami impact' studies, few engineering tools are available to assess the onshore impacts of tsunami, with no agreed methods available to predict loadings on coastal defences, buildings or related infrastructure. Most previous impact studies have relied upon unrealistic waveforms (solitary or dam-break waves and bores) rather than full-duration tsunami waves, or have used simplified models of nearshore and over-land flows. Over the last 10+ years, pneumatic Tsunami Simulators for the hydraulic laboratory have been developed into an exciting and versatile technology, allowing the forces of real-world tsunami to be reproduced and measured in a laboratory environment for the first time. These devices have been used to model generic elevated and N-wave tsunamis up to and over simple shorelines, and at example coastal defences and infrastructure. They have also reproduced full-duration tsunamis including Mercator 2004 and Tohoku 2011, both at 1:50 scale. Engineering scale models of these tsunamis have measured wave run-up on simple slopes, forces on idealised sea defences, pressures/forces on buildings, and scour at idealised buildings. This presentation will describe how these Tsunami Simulators work, demonstrate how they have generated tsunami waves longer than the facilities within which they operate, and will present research results from three generations of Tsunami Simulators. Highlights of direct importance to natural hazard modellers and coastal engineers include measurements of wave run-up levels, forces on single and multiple buildings and comparison with previous theoretical predictions. Multiple buildings have two malign effects. The density of buildings to flow area (blockage ratio) increases water depths and flow velocities in the 'streets'. But the increased building densities themselves also increase the cost of flow per unit area (both personal and monetary). The most recent study with the Tsunami…

  13. Team Development for High Performance Management.

    Science.gov (United States)

    Schermerhorn, John R., Jr.

    1986-01-01

    The author examines a team development approach to management that creates shared commitments to performance improvement by focusing the attention of managers on individual workers and their task accomplishments. It uses the "high-performance equation" to help managers confront shared beliefs and concerns about performance and develop realistic…

  14. Validated High Performance Liquid Chromatography Method for ...

    African Journals Online (AJOL)

    Purpose: To develop a simple, rapid and sensitive high performance liquid chromatography (HPLC) method for the determination of cefadroxil monohydrate in human plasma. Methods: A Shimadzu HPLC with LC Solution software was used with a Waters Spherisorb C18 (5 μm, 150 mm × 4.5 mm) column. The mobile phase ...

  15. An Introduction to High Performance Fortran

    Directory of Open Access Journals (Sweden)

    John Merlin

    1995-01-01

    High Performance Fortran (HPF) is an informal standard for extensions to Fortran 90 to assist its implementation on parallel architectures, particularly for data-parallel computation. Among other things, it includes directives for specifying data distribution across multiple memories, and concurrent execution features. This article provides a tutorial introduction to the main features of HPF.

  16. High Performance Work Systems for Online Education

    Science.gov (United States)

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  17. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
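
    A hedged sketch of the grouping idea in this record: threads are bucketed by the addresses of their calling instructions, so that a thread whose call path differs from its peers stands out as potentially defective. The input format is invented for illustration; a real tool would harvest these addresses from the running program.

        # The input format here is invented for illustration; a real tool would
        # gather call-site addresses from the threads of the running program.
        from collections import defaultdict

        def group_threads(call_addresses):
            """call_addresses: dict thread_id -> tuple of call-site addresses."""
            groups = defaultdict(list)
            for tid, stack in call_addresses.items():
                groups[stack].append(tid)        # identical call paths share a group
            return groups

        if __name__ == "__main__":
            threads = {0: (0x4004F6, 0x400B12), 1: (0x4004F6, 0x400B12),
                       2: (0x4004F6, 0x400B12), 3: (0x4004F6, 0x400C99)}  # outlier
            for stack, tids in group_threads(threads).items():
                print([hex(a) for a in stack], "->", tids)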

  18. High Performance Networks for High Impact Science

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  19. Teacher Accountability at High Performing Charter Schools

    Science.gov (United States)

    Aguirre, Moises G.

    2016-01-01

    This study will examine the teacher accountability and evaluation policies and practices at three high performing charter schools located in San Diego County, California. Charter schools are exempted from many laws, rules, and regulations that apply to traditional school systems. By examining the teacher accountability systems at high performing…

  20. Technology Leadership in Malaysia's High Performance School

    Science.gov (United States)

    Yieng, Wong Ai; Daud, Khadijah Binti

    2017-01-01

    The headmaster, as leader of the school, also plays a role as a technology leader. This applies to high performance school (HPS) headmasters as well. The HPS excel in all aspects of education. In this study, the researcher is interested in examining the role of the headmaster as a technology leader through interviews with three headmasters of high…

  1. Toward High Performance in Industrial Refrigeration Systems

    DEFF Research Database (Denmark)

    Thybo, C.; Izadi-Zamanabadi, Roozbeh; Niemann, H.

    2002-01-01

    Achieving high performance in complex industrial systems requires information manipulation at different system levels. The paper shows how different models of the same subsystems, based on different qualities of information/data, are used for fault diagnosis as well as robust control design...

  2. Towards high performance in industrial refrigeration systems

    DEFF Research Database (Denmark)

    Thybo, C.; Izadi-Zamanabadi, R.; Niemann, Hans Henrik

    2002-01-01

    Achieving high performance in complex industrial systems requires information manipulation at different system levels. The paper shows how different models of the same subsystems, based on different qualities of information/data, are used for fault diagnosis as well as robust control design...

  3. Validated high performance liquid chromatographic (HPLC) method ...

    African Journals Online (AJOL)

    STORAGESEVER

    2010-02-22

    specific and accurate high performance liquid chromatographic method for determination of ZER in micro-volumes … tional medicine as a cure for swelling, sores, loss of appetite and … Receptor Activator for Nuclear Factor κB Ligand … The effect of … be suitable for preclinical pharmacokinetic studies.

  4. Validated High Performance Liquid Chromatography Method for ...

    African Journals Online (AJOL)

    Purpose: To develop a simple, rapid and sensitive high performance liquid … response, tailing factor and resolution of six replicate injections was < 3 % … Cefadroxil monohydrate, Human plasma, Pharmacokinetics, Bioequivalence … Drug-free plasma was obtained from the local … Influence of probenecid on the renal …

  5. High-performance OPCPA laser system

    International Nuclear Information System (INIS)

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J.

    2006-01-01

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  6. High-performance OPCPA laser system

    Energy Technology Data Exchange (ETDEWEB)

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J. [Rochester Univ., Lab. for Laser Energetics, NY (United States)

    2006-06-15

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  7. Comparing Dutch and British high performing managers

    NARCIS (Netherlands)

    Waal, A.A. de; Heijden, B.I.J.M. van der; Selvarajah, C.; Meyer, D.

    2016-01-01

    National cultures have a strong influence on the performance of organizations and should be taken into account when studying the traits of high performing managers. At the same time, many studies that focus upon the attributes of successful managers show that there are attributes that are similar…

  8. Project materials [Commercial High Performance Buildings Project

    Energy Technology Data Exchange (ETDEWEB)

    None

    2001-01-01

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefits of buildings that are designed, built, and operated to be energy efficient, environmentally sustainable, of superior quality, and cost effective.

  9. High performance structural ceramics for nuclear industry

    International Nuclear Information System (INIS)

    Pujari, Vimal K.; Faker, Paul

    2006-01-01

    A family of Saint-Gobain structural ceramic materials and products produced by its High Performance Refractory Division is described. Over the last fifty years or so, Saint-Gobain has been a leader in developing novel non-oxide-ceramic-based materials, processes and products for application in the nuclear, chemical, automotive, defense and mining industries.

  10. A new high performance current transducer

    International Nuclear Information System (INIS)

    Tang Lijun; Lu Songlin; Li Deming

    2003-01-01

    A DC-100 kHz current transducer has been developed using a new technique based on the zero-flux detection principle. It was shown that the new current transducer offers high performance, that its magnetic core need not be selected very stringently, and that it is easy to manufacture.

  11. Vacuum thermochromatography: physical principles and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Zvara, I.

    2014-01-01

    The title method for preparative separation of infinitesimal amounts of relatively volatile elements or compounds with different adsorbability is based on the molecular flow in an evacuated open column with an imposed temperature gradient. The analytes put into the column's closed 'hot' end begin to migrate owing to random flights of their molecules between two consecutive collisions with the wall. Each strike results in adsorption of the entity on the surface for a random time whose mean increases downstream; as a result, various analytes come to practical rest in individual temperature ranges. Here, the microscopic picture of the molecular histories is described in quantitative detail, assuming that the velocity vectors of the desorbing molecules obey the cosine-law angular distribution. The probability density functions for the full and projected flight lengths in long cylinders are derived. They were used in Monte Carlo simulation of a great many migration histories to obtain the peak profiles of the deposits. Numerous particular sets of experimental regimes and conditions were simulated to elucidate the influence of these variables on the profiles and the characteristic deposition temperatures
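
    A minimal Monte Carlo sketch of this migration model follows, under stated assumptions: a cylindrical column with a linear temperature profile, cosine-law (Lambertian) desorption, instantaneous flights, and exponentially distributed residence times whose mean follows the Frenkel equation tau = tau0 * exp(Ea / kT). All parameter values are invented for illustration and are not from the paper.

        # Cosine-law desorption from the wall, wall-to-wall flights in a
        # cylinder of radius R, and Frenkel-equation mean residence times.
        import math, random

        K_B = 8.617e-5                           # Boltzmann constant, eV/K
        R, TAU0, EA = 0.005, 1e-13, 1.0          # column radius (m), s, eV
        T_HOT, GRAD = 1200.0, 2000.0             # inlet temperature (K), gradient (K/m)

        def temperature(z):
            return max(T_HOT - GRAD * z, 100.0)  # linear profile, floored

        def flight_dz():
            """Axial displacement of one flight with cosine-law emission."""
            theta = math.asin(math.sqrt(random.random()))   # cosine-law polar angle
            phi = 2.0 * math.pi * random.random()
            dx = -math.cos(theta)                            # toward the axis
            dy = math.sin(theta) * math.cos(phi)
            dz = math.sin(theta) * math.sin(phi)
            s = -2.0 * R * dx / (dx * dx + dy * dy)          # chord to next wall hit
            return s * dz

        def deposit_position(t_total=100.0):
            """Follow one molecule until its time budget is spent adsorbed."""
            z, t = 0.0, 0.0
            while t < t_total:
                z = max(z + flight_dz(), 0.0)                # flights ~instantaneous
                tau = TAU0 * math.exp(EA / (K_B * temperature(z)))
                t += random.expovariate(1.0 / tau)           # random residence time
            return z

        if __name__ == "__main__":
            random.seed(0)
            zs = sorted(deposit_position() for _ in range(50))
            print(f"median deposition position: {zs[len(zs) // 2]:.3f} m")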

  12. Physical Processes for Driving Ionospheric Outflows in Global Simulations

    Science.gov (United States)

    Moore, Thomas Earle; Strangeway, Robert J.

    2009-01-01

    We review and assess the importance of processes thought to drive ionospheric outflows, linking them as appropriate to the solar wind and interplanetary magnetic field, and to the spatial and temporal distribution of their magnetospheric internal responses. These begin with the diffuse effects of photoionization and thermal equilibrium of the ionospheric topside, enhancing Jeans escape, with ambipolar diffusion and acceleration. Auroral outflows begin with dayside reconnection and resultant field-aligned currents and driven convection. These produce plasmaspheric plumes, collisional heating and wave-particle interactions, centrifugal acceleration, and auroral acceleration by parallel electric fields, including enhanced ambipolar fields from electron heating by precipitating particles. Observations and simulations show that solar wind energy dissipation into the atmosphere is concentrated by the geomagnetic field into auroral regions with an amplification factor of 10-100, enhancing the escape of heavy-species plasma and gas from gravity, and providing more current-carrying capacity. Internal plasmas thus enable electromagnetic driving via coupling to the plasma and neutral gas and, by extension, the entire body. We assess the importance of each of these processes in terms of local escape flux production as well as global outflow, and suggest methods for their implementation within multispecies global simulation codes. We complete the survey with an assessment of outstanding obstacles to this objective.

  13. The Design and Semi-Physical Simulation Test of Fault-Tolerant Controller for Aero Engine

    Science.gov (United States)

    Liu, Yuan; Zhang, Xin; Zhang, Tianhong

    2017-11-01

    A new fault-tolerant control method for aero engines is proposed, which can accurately diagnose sensor faults with Kalman filter banks and reconstruct the signal with a real-time on-board adaptive model combining a simplified real-time model and an improved Kalman filter. In order to verify the feasibility of the proposed method, a semi-physical simulation experiment was carried out. Besides the real I/O interfaces, controller hardware and the virtual plant model, the semi-physical simulation system also contains a real fuel system. Compared with hardware-in-the-loop (HIL) simulation, the semi-physical simulation system has a higher degree of confidence. To meet the needs of semi-physical simulation, a rapid-prototyping controller with fault-tolerant control capability based on the NI CompactRIO platform was designed and verified on the semi-physical simulation test platform. The results show that the controller can control the aero engine safely and reliably, with little impact on controller performance in the event of a sensor fault.
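
    A hedged sketch of the residual test at the heart of such a scheme: a scalar Kalman filter tracks one sensor, a jump in the normalized innovation flags the fault, and the model estimate is substituted for the faulty reading. The signal, noise levels, and fault profile below are invented; the paper's filter banks run on an engine model.

        # Scalar Kalman filter with an innovation (residual) fault test.
        import random

        def kalman_fault_monitor(measurements, q=1e-4, r=0.04, threshold=4.0):
            x, p = measurements[0], 1.0          # state estimate and its variance
            out = []
            for z in measurements:
                p += q                           # predict (random-walk model)
                s = p + r                        # innovation variance
                nu = z - x                       # innovation (residual)
                faulty = nu * nu / s > threshold ** 2
                if not faulty:                   # update only with healthy data
                    k = p / s
                    x += k * nu
                    p *= (1.0 - k)
                out.append((x, faulty))          # reconstructed signal + fault flag
            return out

        if __name__ == "__main__":
            random.seed(2)
            z = [5.0 + random.gauss(0, 0.2) for _ in range(100)]
            for i in range(60, 100):             # abrupt sensor bias from step 60
                z[i] += 3.0
            flags = [i for i, (_, f) in enumerate(kalman_fault_monitor(z)) if f]
            print("fault flagged at steps:", flags[:5], "...")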

  14. Quantum simulation of 2D topological physics in a 1D array of optical cavities.

    Science.gov (United States)

    Luo, Xi-Wang; Zhou, Xingxiang; Li, Chuan-Feng; Xu, Jin-Shi; Guo, Guang-Can; Zhou, Zheng-Wei

    2015-07-06

    Orbital angular momentum of light is a fundamental optical degree of freedom characterized by an unlimited number of available angular momentum states. Although this unique property has proved invaluable in diverse recent studies ranging from optical communication to quantum information, it has not been considered useful or even relevant for simulating nontrivial physics problems such as topological phenomena. Contrary to this misconception, we demonstrate the incredible value of orbital angular momentum of light for quantum simulation by showing theoretically how it allows the study of a variety of important 2D topological physics in a 1D array of optical cavities. This application of orbital angular momentum of light not only reduces the required physical resources but also increases the feasible scale of simulation, and thus makes it possible to investigate important topics such as edge-state transport and topological phase transition in a small simulator ready for immediate experimental exploration.

  15. Basic guidelines to introduce electric circuit simulation software in a general physics course

    Science.gov (United States)

    Moya, A. A.

    2018-05-01

    The introduction of electric circuit simulation software for undergraduate students in a general physics course is proposed in order to contribute to the constructive learning of electric circuit theory. This work focuses on the lab exercises based on dc, transient and ac analysis in electric circuits found in introductory physics courses, and shows how students can use the simulation software to do simple activities associated with a lab exercise itself and with related topics. By introducing electric circuit simulation programs in a general physics course as brief activities complementing the lab exercises, students develop basic skills in using simulation software, improve their knowledge of the topology of electric circuits and perceive that the technology contributes to their learning, all without reducing the time spent on the actual content of the course.

  16. Physical and metallurgical phenomena during simulations of plasma disruptions

    International Nuclear Information System (INIS)

    Brossa, F.; Cambini, M.; Quataert, D.; Rigon, G.; Schiller, P.

    1988-01-01

    The metallographic analysis carried out on austenitic stainless steel specimens subjected to simulated plasma disruptions allows us to present a complete picture of the most important phenomena. (i) The experiments show that, for the calculation of melt layer and evaporation, it is necessary to take considerable convection in the melt layer into account. (ii) The rapid solidification of the melt layer leads to a change in the crystalline structure and to the formation of cracks. (iii) Alloying elements with a high vapour pressure evaporate preferentially. (iv) The stresses generated during cooling in some cases induce phase changes. (v) During neutron irradiation, helium is formed in all first wall materials by (n, α) processes; this helium forms bubbles under disruptions. (orig.)

  17. High performance thermal insulation systems (HiPTI). Vacuum insulated products (VIP). Proceedings of the international conference and workshop

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, M.; Bertschinger, H.

    2001-07-01

    These are the proceedings of the International Conference and Workshop held at EMPA Duebendorf, Switzerland, in January 2001. The papers presented on the conference's first day included contributions on the role of high-performance insulation in energy efficiency, providing an overview of available technologies and reviewing physical aspects of heat transfer and the development of thermal insulation, as well as the state of the art of glazing technologies such as high-performance and vacuum glazing. Also discussed are vacuum-insulated products (VIP) with fumed silica, applications of VIP systems in technical building systems, nanogels, VIP packaging materials and technologies, measurement of physical properties, VIP for advanced retrofit solutions for buildings, and existing and future applications for advanced low-energy buildings. Finally, research and development concerning VIP for buildings is reported on. The workshops held on the second day covered a preliminary study on high-performance thermal insulation materials with gastight porosity, flexible pipes with high-performance thermal insulation, the evaluation of modern insulation systems by simulation methods, and the development of vacuum insulation panels with a stainless steel envelope.

  18. Simulation experience enhances physical therapist student confidence in managing a patient in the critical care environment.

    Science.gov (United States)

    Ohtake, Patricia J; Lazarus, Marcilene; Schillo, Rebecca; Rosen, Michael

    2013-02-01

    Rehabilitation of patients in critical care environments improves functional outcomes. This finding has led to increased implementation of intensive care unit (ICU) rehabilitation programs, including early mobility, and an associated increased demand for physical therapists practicing in ICUs. Unfortunately, many physical therapists report being inadequately prepared to work in this high-risk environment. Simulation provides focused, deliberate practice in safe, controlled learning environments and may be a method to initiate academic preparation of physical therapists for ICU practice. The purpose of this study was to examine the effect of participation in simulation-based management of a patient with critical illness in an ICU setting on levels of confidence and satisfaction in physical therapist students. A one-group, pretest-posttest, quasi-experimental design was used. Physical therapist students (N=43) participated in a critical care simulation experience requiring technical (assessing bed mobility and pulmonary status), behavioral (patient and interprofessional communication), and cognitive (recognizing a patient status change and initiating appropriate responses) skill performance. Student confidence and satisfaction were surveyed before and after the simulation experience. Students' confidence in their technical, behavioral, and cognitive skill performance increased from "somewhat confident" to "confident" following the critical care simulation experience. Student satisfaction was highly positive, with strong agreement that the simulation experience was valuable, reinforced course content, and was a useful educational tool. Limitations of the study were the small sample from one university and the absence of a control group. Incorporating a simulated, interprofessional critical care experience into a required clinical course improved physical therapist student confidence in technical, behavioral, and cognitive performance measures and was associated with high student satisfaction.

  19. Multi-physic simulations of irradiation experiments in a technological irradiation reactor

    International Nuclear Information System (INIS)

    Bonaccorsi, Th.

    2007-09-01

    A Material Testing Reactor (MTR) makes it possible to irradiate material samples under intense neutron and photonic fluxes. These experiments are carried out in experimental devices located in the reactor core or at its periphery (reflector). Available physics simulation tools usually treat only one physics field in a very precise way. Multi-physics simulation of irradiation experiments therefore requires a sequential use of several calculation codes and data exchanges between these codes: this corresponds to a coupling of problems. In order to facilitate multi-physics simulations, this thesis sets up a data model based on data-processing objects, called Technological Entities. This data model is common to all of the physics fields. It permits defining the geometry of an irradiation device in a parametric way and associating materials information with it. Numerical simulations are encapsulated in interfaces providing the ability to call specific functionalities with the same command (to initialize data, to launch calculations, to post-process, to get results, ...). Thus, once encapsulated, numerical simulations can be re-used for various studies. This data model is developed as a component of the SALOME platform. The first application case made it possible to perform neutronic simulations (OSIRIS reactor and RJH) coupled with fuel behavior simulations. In a next step, thermal hydraulics could also be taken into account. In addition to the improvement of calculation accuracy due to the coupling of physical phenomena, the time spent in the development phase of a simulation is largely reduced, and possibilities for uncertainty treatment are under consideration. (author)

  20. Semi-physical Simulation Platform of a Parafoil Nonlinear Dynamic System

    International Nuclear Information System (INIS)

    Gao Hai-Tao; Yang Sheng-Bo; Zhu Er-Lin; Sun Qing-Lin; Chen Zeng-Qiang; Kang Xiao-Feng

    2013-01-01

    Focusing on the problems encountered in simulation of and experiments on a parafoil nonlinear dynamic system, such as limited methods, high cost and low efficiency, we present a semi-physical simulation platform. It is designed by connecting parts of physical objects to a computer, and remedies the defect that a pure computer simulation is completely divorced from the real environment. The main components of the platform and its functions, as well as simulation flows, are introduced. The feasibility and validity are verified through a simulation experiment. The experimental results show that the platform is significant for improving the quality of the parafoil fixed-point airdrop system, shortening the development cycle and saving cost

  1. Simulating elastic light scattering using high performance computing methods

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Verbraeck, A.; Kerckhoffs, E.J.H.

    1993-01-01

    The Coupled Dipole method, as originally formulated by Purcell and Pennypacker, is a very powerful method to simulate the Elastic Light Scattering from arbitrary particles. This method, which is a particle simulation model for Computational Electromagnetics, has one major drawback: if the size of the

  2. High performance computing network for cloud environment using simulators

    OpenAIRE

    Singh, N. Ajith; Hemalatha, M.

    2012-01-01

    Cloud computing is the next generation of computing. Adopting cloud computing is like signing up for a new form of website: the GUI that controls the cloud makes it possible to directly control the hardware resources and your applications. The difficult part of cloud computing is deploying it in a real environment. It is difficult to know the exact cost and resource requirements until the service is bought, and whether it will support the existing applications available on traditional...

  3. Evaluation of high-performance computing software

    Energy Technology Data Exchange (ETDEWEB)

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  4. Architecting Web Sites for High Performance

    Directory of Open Access Journals (Sweden)

    Arun Iyengar

    2002-01-01

    Web site applications are some of the most challenging high-performance applications currently being developed and deployed. The challenges emerge from the specific combination of high variability in workload characteristics and of high performance demands regarding the service level, scalability, availability, and costs. In recent years, a large body of research has addressed the Web site application domain, and a host of innovative software and hardware solutions have been proposed and deployed. This paper is an overview of recent solutions concerning the architectures and the software infrastructures used in building Web site applications. The presentation emphasizes three of the main functions in a complex Web site: the processing of client requests, the control of service levels, and the interaction with remote network caches.

  5. High performance cloud auditing and applications

    CERN Document Server

    Choi, Baek-Young; Song, Sejun

    2014-01-01

    This book mainly focuses on cloud security and high performance computing for cloud auditing. The book discusses emerging challenges and techniques developed for high performance semantic cloud auditing, and presents the state of the art in cloud auditing, computing and security techniques with focus on technical aspects and feasibility of auditing issues in federated cloud computing environments.   In summer 2011, the United States Air Force Research Laboratory (AFRL) CyberBAT Cloud Security and Auditing Team initiated the exploration of the cloud security challenges and future cloud auditing research directions that are covered in this book. This work was supported by the United States government funds from the Air Force Office of Scientific Research (AFOSR), the AFOSR Summer Faculty Fellowship Program (SFFP), the Air Force Research Laboratory (AFRL) Visiting Faculty Research Program (VFRP), the National Science Foundation (NSF) and the National Institute of Health (NIH). All chapters were partially suppor...

  6. Monitoring SLAC High Performance UNIX Computing Systems

    International Nuclear Information System (INIS)

    Lettsome, Annette K.

    2005-01-01

    Knowledge of the effectiveness and efficiency of computers is important when working with high-performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high-performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed, since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface
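
    As an illustration of the kind of script-driven persistence described here (not the SLAC implementation), the sketch below writes one Ganglia-style metric sample to MySQL. It assumes the third-party mysql-connector-python package and an invented table schema.

        # Sketch only: assumes mysql-connector-python and this invented schema:
        #   CREATE TABLE metrics (host VARCHAR(64), name VARCHAR(64),
        #                         value DOUBLE, ts TIMESTAMP);
        import mysql.connector

        def store_metric(conn, host, name, value, ts):
            cur = conn.cursor()
            cur.execute(
                "INSERT INTO metrics (host, name, value, ts) VALUES (%s, %s, %s, %s)",
                (host, name, value, ts),
            )
            conn.commit()

        if __name__ == "__main__":
            conn = mysql.connector.connect(user="ganglia", password="secret",
                                           host="localhost", database="monitoring")
            store_metric(conn, "node01.example.org", "load_one", 0.42,
                         "2005-07-01 12:00:00")
            conn.close()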

  7. Simulation models for computational plasma physics: Concluding report

    International Nuclear Information System (INIS)

    Hewett, D.W.

    1994-01-01

    In this project, the authors enhanced their ability to numerically simulate bounded plasmas that are dominated by low-frequency electric and magnetic fields. They moved towards this goal in several ways, and are now in a position to play significant roles in the modeling of low-frequency electromagnetic plasmas in several new industrial applications. They have significantly increased their facility with the computational methods invented to solve the low-frequency limit of Maxwell's equations (DiPeso, Hewett, accepted, J. Comp. Phys., 1993). This low-frequency model, called the Streamlined Darwin Field (SDF) model (Hewett, Larson, and Doss, J. Comp. Phys., 1992), has now been implemented in a fully non-neutral SDF code, BEAGLE (Larson, Ph.D. dissertation, 1993), and has been further extended to the quasi-neutral limit (DiPeso, Hewett, Comp. Phys. Comm., 1993). In addition, they have resurrected the quasi-neutral, zero-electron-inertia model (ZMR) and begun the task of incorporating into this model internal boundary conditions with the flexibility of those in GYMNOS, a magnetostatic code now used in ion source work (Hewett, Chen, ICF Quarterly Report, July--September, 1993). Finally, near the end of this project, they invented a new type of banded matrix solver that can be implemented on a massively parallel computer, thus opening the door for the use of all their ADI schemes on these new computer architectures (Mattor, Williams, Hewett, submitted to Parallel Computing, 1993)

  8. The MCUCN simulation code for ultracold neutron physics

    Science.gov (United States)

    Zsigmond, G.

    2018-02-01

    Ultracold neutrons (UCN) have very low kinetic energies of 0-300 neV and can thereby be stored in specific material or magnetic confinements for many hundreds of seconds. This makes them a very useful tool for probing fundamental symmetries of nature (for instance, charge-parity violation, by neutron electric dipole moment experiments) and for contributing important parameters to Big Bang nucleosynthesis (neutron lifetime measurements). Improved precision experiments are under construction at new and planned UCN sources around the world. MC simulations play an important role in the optimization of such systems with a large number of parameters, but also in the estimation of systematic effects, in the benchmarking of analysis codes, and as part of the analysis. The MCUCN code written at PSI has been extensively used for the optimization of the UCN source optics and in the optimization and analysis of (test) experiments within the nEDM project based at PSI. In this paper we present the main features of MCUCN and interesting benchmark and application examples.

  9. High Performance Interactive System Dynamics Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Duckworth, Jonathan C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    This brochure describes a system dynamics simulation (SD) framework that supports an end-to-end analysis workflow optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  10. High Performance Interactive System Dynamics Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Duckworth, Jonathan C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    This presentation describes a system dynamics simulation (SD) framework that supports an end-to-end analysis workflow optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  11. High-performance phase-field modeling

    KAUST Repository

    Vignal, Philippe; Sarmiento, Adel; Cortes, Adriano Mauricio; Dalcin, L.; Collier, N.; Calo, Victor M.

    2015-01-01

    and phase-field crystal equation will be presented, which corroborate the theoretical findings and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.

  12. AHPCRC - Army High Performance Computing Research Center

    Science.gov (United States)

    2010-01-01

    Of particular interest is the ability of a distributed jamming network (DJN) to jam signals in all or part of a sensor or communications net...

  13. Governance among Malaysian high performing companies

    Directory of Open Access Journals (Sweden)

    Asri Marsidi

    2016-07-01

    Well-performing companies have always been linked with effective governance, which is generally reflected in an effective board of directors. However, many issues concerning the attributes of an effective board of directors remain unresolved. Nowadays, diversity is perceived as able to influence corporate performance, owing to the likelihood of meeting the variety of needs and demands of diverse customers and clients. The study therefore aims to provide a fundamental understanding of governance among high performing companies in Malaysia.

  14. DURIP: High Performance Computing in Biomathematics Applications

    Science.gov (United States)

    2017-05-10

    The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of high performance computing in biomathematics applications.

  15. High Performance Computing Operations Review Report

    Energy Technology Data Exchange (ETDEWEB)

    Cupps, Kimberly C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  16. Planning for high performance project teams

    International Nuclear Information System (INIS)

    Reed, W.; Keeney, J.; Westney, R.

    1997-01-01

    Both industry-wide research and corporate benchmarking studies confirm the significant savings in cost and time that result from early planning of a project. Amoco's Team Planning Workshop combines long-term strategic project planning and short-term tactical planning with team building to provide the basis for high performing project teams, better project planning, and effective implementation of the Amoco Common Process for managing projects

  17. vSphere high performance cookbook

    CERN Document Server

    Sarkar, Prasenjit

    2013-01-01

    vSphere High Performance Cookbook is written in a practical, helpful style with numerous recipes focusing on answering and providing solutions to common, and not-so-common, performance issues and problems. The book is primarily written for technical professionals with system administration skills and some VMware experience who wish to learn about advanced optimization and the configuration features and functions of vSphere 5.1.

  18. High performance work practices, innovation and performance

    DEFF Research Database (Denmark)

    Jørgensen, Frances; Newton, Cameron; Johnston, Kim

    2013-01-01

    Research spanning nearly 20 years has provided considerable empirical evidence for relationships between High Performance Work Practices (HPWPs) and various measures of performance including increased productivity, improved customer service, and reduced turnover. What stands out from … and Africa to examine these various questions relating to the HPWP-innovation-performance relationship. Each paper discusses a practice that has been identified in the HPWP literature, and potential variables that can facilitate or hinder the effects of these practices on innovation and performance…

  19. High Performance Electronics on Flexible Silicon

    KAUST Repository

    Sevilla, Galo T.

    2016-09-01

    Over the last few years, flexible electronic systems have gained increased attention from researchers around the world because of their potential to create new applications such as flexible displays, flexible energy harvesters, artificial skin, and health monitoring systems that cannot be integrated with conventional wafer-based complementary metal oxide semiconductor processes. Most of the current efforts to create flexible high-performance devices are based on the use of organic semiconductors. However, inherent material limitations make them unsuitable for big data processing and high-speed communications. The objective of my doctoral dissertation is to develop integration processes that allow the transformation of rigid high-performance electronics into flexible ones while maintaining their performance and cost. In this work, two different techniques to transform inorganic complementary metal-oxide-semiconductor electronics into flexible ones have been developed using industry-compatible processes. Furthermore, these techniques were used to realize flexible discrete devices and circuits, including metal-oxide-semiconductor field-effect transistors, the first demonstration of flexible Fin-field-effect transistors, and metal-oxide-semiconductor-based circuits. Finally, this thesis presents a new technique to package, integrate, and interconnect flexible high-performance electronics using low-cost additive manufacturing techniques such as 3D printing and inkjet printing. This thesis contains in-depth studies of the electrical, mechanical, and thermal properties of the fabricated devices.

  20. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  1. Unravelling the structure of matter on high-performance computers

    International Nuclear Information System (INIS)

    Kieu, T.D.; McKellar, B.H.J.

    1992-11-01

    The various phenomena and the different forms of matter in nature are believed to be the manifestation of only a handful of fundamental building blocks, the elementary particles, which interact through the four fundamental forces. In the study of the structure of matter at this level, one has to consider forces which are not sufficiently weak to be treated as small perturbations to the system, an example of which is the strong force that binds the nucleons together. High-performance computers, both vector and parallel machines, have facilitated the necessary non-perturbative treatments. The principles and techniques of computer simulations applied to Quantum Chromodynamics are explained; examples include the strong interactions, the calculation of the mass of nucleons and their decay rates. Some commercial and special-purpose high-performance machines for such calculations are also mentioned. 3 refs., 2 tabs

  2. Simulation-based Education for Endoscopic Third Ventriculostomy: A Comparison Between Virtual and Physical Training Models.

    Science.gov (United States)

    Breimer, Gerben E; Haji, Faizal A; Bodani, Vivek; Cunningham, Melissa S; Lopez-Rios, Adriana-Lucia; Okrainec, Allan; Drake, James M

    2017-02-01

    The relative educational benefits of virtual reality (VR) and physical simulation models for endoscopic third ventriculostomy (ETV) have not been evaluated "head to head." The aim was to compare and identify the relative utility of a physical and a VR ETV simulation model for use in neurosurgical training. Twenty-three neurosurgical residents and 3 fellows performed an ETV on both a physical and a VR simulation model. Trainees rated the models using 5-point Likert scales evaluating the domains of anatomy, instrument handling, procedural content, and the overall fidelity of the simulation. Paired t tests were performed for each domain's mean overall score and individual items. The VR model has relative benefits compared with the physical model with respect to realistic representation of intraventricular anatomy at the foramen of Monro (4.5, standard deviation [SD] = 0.7 vs 4.1, SD = 0.6; P = .04) and the third ventricle floor (4.4, SD = 0.6 vs 4.0, SD = 0.9; P = .03), although the overall anatomy score was similar (4.2, SD = 0.6 vs 4.0, SD = 0.6; P = .11). For overall instrument handling and procedural content, the physical simulator outperformed the VR model (3.7, SD = 0.8 vs 4.5, SD = 0.5); the two models may therefore best serve different educational objectives. Training focused on learning anatomy or decision-making for anatomic cues may be aided with the VR simulation model. A focus on developing manual dexterity and technical skills using endoscopic equipment in the operating room may be better learned on the physical simulation model. Copyright © 2016 by the Congress of Neurological Surgeons

  3. The Centre of High-Performance Scientific Computing, Geoverbund, ABC/J - Geosciences enabled by HPSC

    Science.gov (United States)

    Kollet, Stefan; Görgen, Klaus; Vereecken, Harry; Gasper, Fabian; Hendricks-Franssen, Harrie-Jan; Keune, Jessica; Kulkarni, Ketan; Kurtz, Wolfgang; Sharples, Wendy; Shrestha, Prabhakar; Simmer, Clemens; Sulis, Mauro; Vanderborght, Jan

    2016-04-01

    The Centre of High-Performance Scientific Computing (HPSC TerrSys) was founded in 2011 to establish a centre of competence in high-performance scientific computing in terrestrial systems and the geosciences, enabling fundamental and applied geoscientific research in the Geoverbund ABC/J (the geoscientific research alliance of the Universities of Aachen, Cologne, and Bonn and the Research Centre Jülich, Germany). The specific goals of HPSC TerrSys are to achieve relevance at the national and international level in (i) the development and application of HPSC technologies in the geoscientific community; (ii) student education; (iii) HPSC services and support, also for the wider geoscientific community; and (iv) the industry and public sectors via, e.g., useful applications and data products. A key feature of HPSC TerrSys is the Simulation Laboratory Terrestrial Systems, which is located at the Jülich Supercomputing Centre (JSC) and provides extensive capabilities with respect to porting, profiling, tuning and performance monitoring of geoscientific software in JSC's supercomputing environment. We will present a summary of success stories of HPSC applications, including integrated terrestrial model development, parallel profiling and its application from watersheds to the continent; massively parallel data assimilation using physics-based models and ensemble methods; quasi-operational terrestrial water and energy monitoring; and convection-permitting climate simulations over Europe. The success stories stress the need for formalized education of students in the application of HPSC technologies in the future.

  4. Cactus and Visapult: An ultra-high performance grid-distributed visualization architecture using connectionless protocols

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Shalf, John

    2002-08-31

    This past decade has seen rapid growth in the size, resolution, and complexity of Grand Challenge simulation codes. This trend is accompanied by a trend towards multinational, multidisciplinary teams who carry out this research in distributed teams, and the corresponding growth of Grid infrastructure to support these widely distributed Virtual Organizations. As the number and diversity of distributed teams grow, the need for visualization tools to analyze and display multi-terabyte, remote data becomes more pronounced and more urgent. One such tool that has been successfully used to address this problem is Visapult. Visapult is a parallel visualization tool that employs Grid-distributed components, latency tolerant visualization and graphics algorithms, along with high performance network I/O in order to achieve effective remote analysis of massive datasets. In this paper we discuss improvements to network bandwidth utilization and responsiveness of the Visapult application that result from using connectionless protocols to move data payload between the distributed Visapult components and a Grid-enabled, high performance physics simulation used to study gravitational waveforms of colliding black holes: the Cactus code. These improvements have boosted Visapult's network efficiency to 88-96 percent of the maximum theoretical available bandwidth on multi-gigabit Wide Area Networks, and greatly enhanced interactivity. Such improvements are critically important for future development of effective interactive Grid applications.
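
    The payload-movement idea, trading TCP's connection and congestion-control machinery for connectionless transport that keeps the network pipe full, can be illustrated with a minimal UDP mover; the following Python sketch is a generic illustration with assumed addresses and chunk size, not Visapult code:

```python
# Minimal UDP payload mover illustrating connectionless transfer: no
# connection setup and no congestion-control stalls, but sequencing
# and loss recovery become the application's job (omitted here).
# Generic sketch; addresses and chunk size are assumptions.
import socket

CHUNK = 8192                         # payload bytes per datagram
ADDR = ("127.0.0.1", 9999)

def send(data: bytes) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq, off in enumerate(range(0, len(data), CHUNK)):
        # A sequence number lets the receiver detect loss and reordering.
        sock.sendto(seq.to_bytes(4, "big") + data[off:off + CHUNK], ADDR)
    sock.close()

def receive(expected_datagrams: int) -> bytes:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(ADDR)
    chunks = {}
    for _ in range(expected_datagrams):   # no retransmission in this sketch
        pkt, _addr = sock.recvfrom(4 + CHUNK)
        chunks[int.from_bytes(pkt[:4], "big")] = pkt[4:]
    sock.close()
    return b"".join(chunks[k] for k in sorted(chunks))
```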

  5. Laser additive manufacturing of high-performance materials

    CERN Document Server

    Gu, Dongdong

    2015-01-01

    This book, entitled “Laser Additive Manufacturing of High-Performance Materials”, covers the specific aspects of laser additive manufacturing of high-performance new materials components based on an unconventional materials incremental manufacturing philosophy, in terms of materials design and preparation, process control and optimization, and theories of physical and chemical metallurgy. The book describes the capabilities and characteristics of the development of new metallic materials components by the laser additive manufacturing process, including nanostructured materials, in situ composite materials, particle reinforced metal matrix composites, etc. The topics presented in this book, like laser additive manufacturing technology itself, are markedly interdisciplinary, integrating laser technology, materials science, metallurgical engineering, and mechanical engineering. This is a book for researchers, students, practicing engineers, and manufacturing industry professionals interested i...

  6. 29th Workshop on Recent Developments in Computer Simulation Studies in Condensed Matter Physics

    International Nuclear Information System (INIS)

    2016-01-01

    Thirty years ago, because of the dramatic increase in the power and utility of computer simulations, The University of Georgia formed the first institutional unit devoted to the application of simulations in research and teaching: The Center for Simulational Physics. Then, as the international simulations community expanded further, we sensed the need for a meeting place for both experienced simulators and newcomers to discuss inventive algorithms and recent results in an environment that promoted lively discussion. As a consequence, the Center for Simulational Physics established an annual workshop series on Recent Developments in Computer Simulation Studies in Condensed Matter Physics. This year's highly interactive workshop was the 29th in the series, marking our efforts to promote high quality research in simulational physics. The continued interest shown by the scientific community amply demonstrates the useful purpose that these meetings have served. The latest workshop was held at The University of Georgia from February 22-26, 2016, and served to mark the 30th anniversary of the founding of the Center for Simulational Physics. In addition, during this Workshop we celebrated the 60th birthday of our esteemed colleague Prof. H.-Bernd Schuttler. Bernd has not only contributed to the understanding of strongly correlated electron systems, but has also made seminal contributions to systems biology through the introduction of modern methods of computational physics. These Proceedings provide a “status report” on a number of important topics. This on-line “volume” is published with the goal of timely dissemination of the material to a wider audience. This program was supported in part by the President's Venture Fund through the generous gifts of the University of Georgia Partners and other donors. We also wish to thank the Office of the Vice-President for Research, the Franklin College of Arts and Sciences, and the IBM Corporation for partial support.

  7. Sleep restriction during simulated wildfire suppression: effect on physical task performance.

    Science.gov (United States)

    Vincent, Grace; Ferguson, Sally A; Tran, Jacqueline; Larsen, Brianna; Wolkow, Alexander; Aisbett, Brad

    2015-01-01

    To examine the effects of sleep restriction on firefighters' physical task performance during simulated wildfire suppression, thirty-five firefighters were matched and randomly allocated to either a control condition (8-hour sleep opportunity, n = 18) or a sleep-restricted condition (4-hour sleep opportunity, n = 17). Performance on physical work tasks was evaluated across three days. In addition, heart rate, core temperature, and worker activity were measured continuously. Ratings of perceived exertion and effort sensation were evaluated during the physical work periods. There were no differences between the sleep-restricted and control groups in firefighters' task performance, heart rate, core temperature, or perceptual responses during self-paced simulated firefighting work tasks. However, the sleep-restricted group was less active during periods of non-physical work compared to the control group. Under self-paced work conditions, 4 h of sleep restriction did not adversely affect firefighters' performance on physical work tasks. However, the sleep-restricted group was less physically active throughout the simulation. This may indicate that sleep-restricted participants adapted their behaviour to conserve effort during rest periods, to subsequently ensure they were able to maintain performance during the firefighter work tasks. This work contributes new knowledge to inform fire agencies of firefighters' operational capabilities when their sleep is restricted during multi-day wildfire events. The work also highlights the need for further research to explore how sleep restriction affects physical performance during tasks of varying duration, intensity, and complexity.

  8. Fracture modelling of a high performance armour steel

    Science.gov (United States)

    Skoglund, P.; Nilsson, M.; Tjernberg, A.

    2006-08-01

    The fracture characteristics of the high performance armour steel Armox 500T are investigated. Tensile mechanical experiments using samples with different notch geometries are used to investigate the effect of multi-axial stress states on the strain to fracture. The experiments are numerically simulated, and from the simulations the stress at the point of fracture initiation is determined as a function of strain; these data are then used to extract parameters for fracture models. A fracture model based on quasi-static experiments is suggested, and the model is tested against independent experiments performed under both static and dynamic loading. The results show that the fracture model gives reasonably good agreement between simulations and experiments under both static and dynamic loading conditions. This indicates that multi-axial loading is more important to the strain to fracture than the deformation rate in the investigated loading range. However, ongoing work will further characterise the fracture behaviour of Armox 500T.

  9. Technical Basis for Physical Fidelity of NRC Control Room Training Simulators for Advanced Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Minsk, Brian S.; Branch, Kristi M.; Bates, Edward K.; Mitchell, Mark R.; Gore, Bryan F.; Faris, Drury K.

    2009-10-09

    The objective of this study is to determine how simulator physical fidelity influences the effectiveness of training the regulatory personnel responsible for examination and oversight of operating personnel and inspection of technical systems at nuclear power reactors. It seeks to contribute to the U.S. Nuclear Regulatory Commission’s (NRC’s) understanding of the physical fidelity requirements of training simulators. The goal of the study is to provide an analytic framework, data, and analyses that inform NRC decisions about the physical fidelity requirements of the simulators it will need to train its staff for assignment at advanced reactors. These staff are expected to come from increasingly diverse educational and experiential backgrounds.

  10. Computational physics an introduction to Monte Carlo simulations of matrix field theory

    CERN Document Server

    Ydri, Badis

    2017-01-01

    This book is divided into two parts. In the first part we give an elementary introduction to computational physics, consisting of 21 simulations which originated from a formal course of lectures and laboratory simulations delivered since 2010 to physics students at Annaba University. The second part is much more advanced and deals with the problem of how to set up working Monte Carlo simulations of matrix field theories, which involve finite dimensional matrix regularizations of noncommutative and fuzzy field theories, fuzzy spaces and matrix geometry. The study of matrix field theory in its own right has also become very important to the proper understanding of all noncommutative, fuzzy and matrix phenomena. The second part, which consists of 9 simulations, was delivered informally to doctoral students who are working on various problems in matrix field theory. Sample codes as well as sample key solutions are also provided for convenience and completeness. An appendix containing an executive Arabic summary of t...

  11. XVI 'Jacques-Louis Lions' Spanish-French School on Numerical Simulation in Physics and Engineering

    CERN Document Server

    Roldán, Teo; Torrens, Juan

    2016-01-01

    This book presents lecture notes from the XVI ‘Jacques-Louis Lions’ Spanish-French School on Numerical Simulation in Physics and Engineering, held in Pamplona (Navarra, Spain) in September 2014. The subjects covered include: numerical analysis of isogeometric methods, convolution quadrature for wave simulations, mathematical methods in image processing and computer vision, modeling and optimization techniques in food processes, bio-processes and bio-systems, and GPU computing for numerical simulation. The book is highly recommended to graduate students in Engineering or Science who want to focus on numerical simulation, either as a research topic or in the field of industrial applications. It can also benefit senior researchers and technicians working in industry who are interested in the use of state-of-the-art numerical techniques in the fields addressed here. Moreover, the book can be used as a textbook for master courses in Mathematics, Physics, or Engineering.

  12. Toward a theory of high performance.

    Science.gov (United States)

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart--have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance.

  13. Learning and the variation in focus among physics students when using a computer simulation

    Directory of Open Access Journals (Sweden)

    Åke Ingerman

    2012-07-01

    Full Text Available This article presents a qualitative analysis of the essential characteristics of university students’ “focus of awareness” whilst engaged with learning physics related to the Bohr model with the aid of a computer simulation. The research is located within the phenomenographic research tradition, with empirical data comprising audio and video recordings of student discussions and interactions, supplemented by interviews. Analysis of this data resulted in descriptions of four qualitatively distinct focuses: Doing the Assignment, Observing the Presentation, Manipulating the Parameters and Exploring the Physics. The focuses are further elucidated in terms of students’ perceptions of learning and the nature of physics. It is concluded that the learning outcomes possible for the students are dependent on the focus that is adopted in the pedagogical situation. Implications for teaching physics using interactive-type simulations can be drawn through epistemological and meta-cognitive considerations of the kind of mindful interventions appropriate to a specific focus.

  14. Playa: High-Performance Programmable Linear Algebra

    Directory of Open Access Journals (Sweden)

    Victoria E. Howle

    2012-01-01

    Full Text Available This paper introduces Playa, a high-level user interface layer for composing algorithms for complex multiphysics problems out of objects from other Trilinos packages. Among other features, Playa provides very high-performance overloaded operators implemented through an expression template mechanism. In this paper, we give an overview of the central Playa objects from a user's perspective, show application to a sequence of increasingly complex solver algorithms, provide timing results for Playa's overloaded operators and other functions, and briefly survey some of the implementation issues involved.
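
    Overloaded operators of the expression-template kind defer evaluation so that a compound expression such as x*x + z is computed in one fused pass without temporaries. A toy Python analogue of that deferred-evaluation idea (class names are illustrative, not Playa's API):

```python
# Toy deferred-evaluation vectors: operators build an expression tree,
# and evaluation fuses the whole expression into one elementwise pass.
# Illustrative analogue of C++ expression templates, not Playa code.
class Expr:
    def __add__(self, other):
        return BinOp(self, other, lambda a, b: a + b)
    def __mul__(self, other):
        return BinOp(self, other, lambda a, b: a * b)

class Vec(Expr):
    def __init__(self, data):
        self.data = list(data)
    def at(self, i):
        return self.data[i]
    def __len__(self):
        return len(self.data)

class BinOp(Expr):
    def __init__(self, lhs, rhs, op):
        self.lhs, self.rhs, self.op = lhs, rhs, op
    def at(self, i):
        return self.op(self.lhs.at(i), self.rhs.at(i))
    def __len__(self):
        return len(self.lhs)

def evaluate(expr):
    # Single fused loop: no temporary vectors for subexpressions.
    return Vec(expr.at(i) for i in range(len(expr)))

x, z = Vec([1.0, 2.0]), Vec([3.0, 4.0])
y = evaluate(x * x + z)   # one pass, no intermediate x*x vector
print(y.data)             # [4.0, 8.0]
```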

  15. An integrated high performance fastbus slave interface

    International Nuclear Information System (INIS)

    Christiansen, J.; Ljuslin, C.

    1992-01-01

    A high performance Fastbus slave interface ASIC is presented. The Fastbus slave integrated circuit (FASIC) is a programmable device, enabling its direct use in many different applications. The FASIC acts as an interface between Fastbus and a 'standard' processor/memory bus. It can work stand-alone or together with a microprocessor. A set of address mapping windows can map Fastbus addresses to convenient memory addresses and at the same time act as address decoding logic. Data rates of 100 MBytes/s to Fastbus can be obtained using an internal FIFO buffer in the FASIC. (orig.)

  16. Strategy Guideline. High Performance Residential Lighting

    Energy Technology Data Exchange (ETDEWEB)

    Holton, J. [IBACOS, Inc., Pittsburgh, PA (United States)]

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  17. Monitoring Change of Body Fluid during Physical Exercise using Bioimpedance Spectroscopy and Finite Element Simulations

    Directory of Open Access Journals (Sweden)

    Lisa Röthlingshöfer

    2011-12-01

    Full Text Available Athletes need a balanced body composition in order to achieve maximum performance. Dehydration in particular reduces power and endurance during physical exercise. Monitoring body composition, with a focus on body fluid, may help to avoid reductions in performance and other health problems. A potential measurement method for this is bioimpedance spectroscopy (BIS). BIS is a simple, non-invasive measurement method that allows different body compartments (body fluid, fat, fat-free mass) to be determined. However, because many physiological changes occur during physical exercise that can influence impedance measurements and distort results, it cannot be assumed that the BIS data are related to body fluid loss alone. To confirm that BIS can detect body fluid loss due to physical exercise, finite element (FE) simulations were done. In addition to impedance, the current density contribution during a BIS measurement was modeled to evaluate the influence of certain tissues on BIS measurements. Simulations were done using CST EM Studio (Computer Simulation Technology, Germany) and the Visible Human Data Set (National Library of Medicine, USA). In addition to the simulations, BIS measurements were also made on athletes. Comparison between the measured bioimpedance data and simulation data, as well as body weight loss during sport, indicates that BIS measurements are sensitive enough to monitor body fluid loss during physical exercise. doi:10.5617/jeb.178 J Electr Bioimp, vol. 2, pp. 79-85, 2011
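
    For background, BIS data of the kind described are commonly interpreted through the Cole model of tissue impedance; in standard form (general background, not this paper's specific fitting procedure):

```latex
% Cole model of tissue impedance: R_0 is the resistance at zero
% frequency (extracellular fluid path), R_inf the resistance at
% infinite frequency (total body water), tau the characteristic
% time constant, and alpha (0 < alpha <= 1) a dispersion parameter.
Z(\omega) = R_\infty + \frac{R_0 - R_\infty}{1 + (j\omega\tau)^{\alpha}}
```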

  18. Analysis of GEANT4 Physics List Properties in the 12 GeV MOLLER Simulation Framework

    Science.gov (United States)

    Haufe, Christopher; Moller Collaboration

    2013-10-01

    To determine the validity of new physics beyond the scope of the electroweak theory, nuclear physicists across the globe have been collaborating on future endeavors that will provide the precision needed to confirm these speculations. One of these is the MOLLER experiment - a low-energy particle experiment that will utilize the 12 GeV upgrade of Jefferson Lab's CEBAF accelerator. The motivation of this experiment is to measure the parity-violating asymmetry of polarized electrons scattered off unpolarized electrons in a liquid hydrogen target. This measurement would allow a more precise determination of the electron's weak charge and weak mixing angle. While still in its planning stages, the MOLLER experiment requires a detailed simulation framework in order to determine how the project should be run in the future. The simulation framework for MOLLER, called ``remoll'', is written in GEANT4 code. As a result, the simulation can utilize a number of GEANT4 physics lists that constrain particle interactions according to different particle physics models. By comparing these lists with one another using the data-analysis application ROOT, the optimal physics list for the MOLLER simulation can be determined and implemented. This material is based upon work supported by the National Science Foundation under Grant No. 714001.
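
    A physics-list comparison along the lines described would typically be scripted over ROOT output; a minimal PyROOT sketch, in which the file and histogram names are hypothetical placeholders rather than actual remoll output:

```python
# Compare the same observable produced under two GEANT4 physics lists
# using ROOT's chi-square histogram test. File and histogram names are
# hypothetical placeholders, not actual remoll output.
import ROOT

f1 = ROOT.TFile.Open("remoll_FTFP_BERT.root")   # hypothetical file
f2 = ROOT.TFile.Open("remoll_QGSP_BIC.root")    # hypothetical file

h1 = f1.Get("rate_vs_radius")   # hypothetical histogram name
h2 = f2.Get("rate_vs_radius")

# Chi2Test returns a p-value; "UU NORM" treats both histograms as
# unweighted and normalizes them before comparing shapes.
p_value = h1.Chi2Test(h2, "UU NORM")
print(f"shape compatibility p-value: {p_value:.3g}")
```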

  19. Verification of results of core physics on-line simulation by NGFM code

    International Nuclear Information System (INIS)

    Zhao Yu; Cao Xinrong; Zhao Qiang

    2008-01-01

    The Nodal Green's Function Method program NGFM/TNGFM has been ported to the Windows system. 2-D and 3-D benchmarks have been checked with this program, and the program has been used to verify the results of the QINSHAN-II reactor simulation. It is shown that the NGFM/TNGFM program is suitable for a reactor core physics on-line simulation system. (authors)

  20. Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.

    Science.gov (United States)

    Serebrinsky, Santiago A

    2011-03-01

    We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.
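
    For the rejection-free (BKL) algorithms covered by this result, the physical time advance per step is an exponentially distributed waiting time set by the total rate; a minimal sketch:

```python
# One rejection-free (BKL) kinetic Monte Carlo step: select an event
# with probability proportional to its rate, then advance physical time
# by an exponential waiting time dt = -ln(u) / R_total. This increment
# is what ties KMC steps to the continuous-time Markov chain.
import math
import random

def kmc_step(rates, t):
    r_total = sum(rates)
    # Event selection: first index where the cumulative rate passes u1*R.
    threshold, acc = random.random() * r_total, 0.0
    for i, r in enumerate(rates):
        acc += r
        if acc >= threshold:
            break
    # Physical time increment; 1 - random() avoids log(0).
    dt = -math.log(1.0 - random.random()) / r_total
    return i, t + dt

t, rates = 0.0, [1.0e3, 5.0e2, 2.0e1]   # example event rates (1/s)
event, t = kmc_step(rates, t)
print(f"executed event {event}, physical clock now t = {t:.3e} s")
```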

  1. Monte Carlo 2000 Conference : Advanced Monte Carlo for Radiation Physics, Particle Transport Simulation and Applications

    CERN Document Server

    Baräo, Fernando; Nakagawa, Masayuki; Távora, Luis; Vaz, Pedro

    2001-01-01

    This book focusses on the state of the art of Monte Carlo methods in radiation physics and particle transport simulation and applications, the latter involving, in particular, the use and development of electron-gamma, neutron-gamma and hadronic codes. Besides the basic theory and the methods employed, special attention is paid to algorithm development for modeling, and to the analysis of experiments and measurements in a variety of fields ranging from particle to medical physics.

  2. Learning from avatars: Learning assistants practice physics pedagogy in a classroom simulator

    Science.gov (United States)

    Chini, Jacquelyn J.; Straub, Carrie L.; Thomas, Kevin H.

    2016-06-01

    [This paper is part of the Focused Collection on Preparing and Supporting University Physics Educators.] Undergraduate students are increasingly being used to support course transformations that incorporate research-based instructional strategies. While such students are typically selected based on strong content knowledge and possible interest in teaching, they often do not have previous pedagogical training. The current training models make use of real students or classmates role playing as students as the test subjects. We present a new environment for facilitating the practice of physics pedagogy skills, a highly immersive mixed-reality classroom simulator, and assess its effectiveness for undergraduate physics learning assistants (LAs). LAs prepared, taught, and reflected on a lesson about motion graphs for five highly interactive computer generated student avatars in the mixed-reality classroom simulator. To assess the effectiveness of the simulator for this population, we analyzed the pedagogical skills LAs intended to practice and exhibited during their lessons and explored LAs' descriptions of their experiences with the simulator. Our results indicate that the classroom simulator created a safe, effective environment for LAs to practice a variety of skills, such as questioning styles and wait time. Additionally, our analysis revealed areas for improvement in our preparation of LAs and use of the simulator. We conclude with a summary of research questions this environment could facilitate.

  3. High-Performance Tiled WMS and KML Web Server

    Science.gov (United States)

    Plesea, Lucian

    2007-01-01

    This software is an Apache 2.0 module implementing a high-performance map server to support interactive map viewers and virtual planet client software. It can be used in applications that require access to very-high-resolution geolocated images, such as GIS, virtual planet applications, and flight simulators. It serves Web Map Service (WMS) requests that comply with a given request grid from an existing tile dataset. It also generates the KML super-overlay configuration files required to access the WMS image tiles.
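
    A WMS GetMap request of the kind this server answers is an HTTP query with standardized parameters; a sketch of constructing one (endpoint and layer name are hypothetical):

```python
# Build a WMS 1.1.1 GetMap request URL. The endpoint and layer name
# are hypothetical; the parameter set is the standard WMS one.
from urllib.parse import urlencode

params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "bluemarble",        # hypothetical layer name
    "SRS": "EPSG:4326",
    "BBOX": "-180,-90,180,90",     # lon/lat bounding box
    "WIDTH": "512",
    "HEIGHT": "256",
    "FORMAT": "image/jpeg",
}
url = "http://example.org/wms?" + urlencode(params)
print(url)
```

    A tiled server such as this one additionally requires that BBOX, WIDTH, and HEIGHT fall on its predefined request grid, so each response can be served from a precomputed tile.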

  4. Component-based software for high-performance scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  5. Power efficient and high performance VLSI architecture for AES algorithm

    Directory of Open Access Journals (Sweden)

    K. Kalaiselvi

    2015-09-01

    Full Text Available The advanced encryption standard (AES) algorithm has been widely deployed in cryptographic applications. This work proposes a low power and high throughput implementation of the AES algorithm using a key expansion approach. We minimize the power consumption and critical path delay using the proposed high performance architecture. It supports both encryption and decryption using 256-bit keys with a throughput of 0.06 Gbps. The VHDL language is utilized for simulating the design, and an FPGA chip has been used for the hardware implementations. Experimental results reveal that the proposed AES architectures offer superior performance to the existing VLSI architectures in terms of power, throughput and critical path delay.
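
    For a sense of scale, the quoted 0.06 Gbps hardware throughput can be compared against AES-256 in software; a rough benchmark sketch assuming the pycryptodome package:

```python
# Rough software AES-256 throughput measurement, for scale against the
# 0.06 Gbps hardware figure quoted above. Assumes the pycryptodome
# package; illustration only, not the paper's VLSI design.
import time
from Crypto.Cipher import AES

key = bytes(32)                       # 256-bit all-zero key (demo only)
cipher = AES.new(key, AES.MODE_ECB)   # ECB isolates the core block cipher
data = bytes(16 * 1024 * 1024)        # 16 MiB of zero plaintext

start = time.perf_counter()
cipher.encrypt(data)
elapsed = time.perf_counter() - start
print(f"software AES-256 throughput: {len(data) * 8 / elapsed / 1e9:.2f} Gbps")
```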

  6. Component-based software for high-performance scientific computing

    International Nuclear Information System (INIS)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly

  7. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  8. Transport in JET high performance plasmas

    International Nuclear Information System (INIS)

    2001-01-01

    Two types of high performance scenarios have been produced in JET during the DTE1 campaign. One of them is the well-known ELM-free hot ion H-mode scenario, extensively used in the past, which has two distinct regions: the plasma core and the edge transport barrier. The results obtained during the DTE1 campaign with D, DT and pure T plasmas confirm our previous conclusion that the core transport scales as gyroBohm in the inner half of the plasma volume, recovers its Bohm nature closer to the separatrix and behaves as ion neoclassical in the transport barrier. Measurements on the top of the barrier suggest that the width of the barrier depends on the isotope and, moreover, that fast ions play a key role. The other high performance scenario is the relatively recently developed Optimised Shear Scenario with small or slightly negative magnetic shear in the plasma core. Different mechanisms of Internal Transport Barrier (ITB) formation have been tested by predictive modelling and the results are compared with experimentally observed phenomena. The experimentally observed non-penetration of heavy impurities through the strong ITB, which contradicts the predictions of conventional neo-classical theory, is discussed. (author)

  9. A High Performance COTS Based Computer Architecture

    Science.gov (United States)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation hardened components and COTS components is so large that COTS components are very attractive for use in mass and power constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on COTS component behavior. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; then we briefly describe existing fault mitigation architectures and present our solution for fault mitigation, based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  10. High-performance computing for airborne applications

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  11. Development of high performance cladding materials

    International Nuclear Information System (INIS)

    Park, Jeong Yong; Jeong, Y. H.; Park, S. Y.

    2010-04-01

    Irradiation tests for HANA claddings were conducted at the Halden research reactor, together with a series of evaluations of next-generation HANA claddings and their in-pile and out-of-pile performance. The 6th irradiation test has been completed successfully in the Halden research reactor. As a result, HANA claddings showed high performance, such as corrosion resistance increased by 40% compared to Zircaloy-4. The high performance of HANA claddings in the Halden test has enabled a lead test rod program as the first step of the commercialization of HANA claddings. A database has been established for thermal and LOCA-related properties. It was confirmed from the thermal shock test that the integrity of HANA claddings was maintained over a wider region than required by the criteria regulated by the NRC. The manufacturing process for strips was established in order to apply the HANA alloys, originally developed for claddings, to spacer grids. 250 kinds of model alloys for the next-generation claddings were designed and manufactured over four rounds and used to select the preliminary candidate alloys for the next-generation claddings. The selected candidate alloys showed 50% better corrosion resistance and 20% improved high temperature oxidation resistance compared to the foreign advanced claddings. We established a manufacturing condition controlling the performance of the dual-cooled claddings by changing the reduction rate in the cold working steps.

  12. Strategy Guideline: Partnering for High Performance Homes

    Energy Technology Data Exchange (ETDEWEB)

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and expanded to all members of the project team, including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants and where relationships are, in general, adversarial as opposed to cooperative, the chances that any one building system will fail are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  13. Management issues for high performance storage systems

    Energy Technology Data Exchange (ETDEWEB)

    Louis, S. [Lawrence Livermore National Lab., CA (United States)]; Burris, R. [Oak Ridge National Lab., TN (United States)]

    1995-03-01

    Managing distributed high-performance storage systems is complex and, although sharing common ground with traditional network and systems management, presents unique storage-related issues. Integration technologies and frameworks exist to help manage distributed network and system environments. Industry-driven consortia provide open forums where vendors and users cooperate to leverage solutions. But these new approaches to open management fall short of addressing the needs of scalable, distributed storage. We discuss the motivation and requirements for storage system management (SSM) capabilities and describe how SSM manages distributed servers and storage resource objects in the High Performance Storage System (HPSS), a new storage facility for data-intensive applications and large-scale computing. Modern storage systems, such as HPSS, require many SSM capabilities, including server and resource configuration control, performance monitoring, quality of service, flexible policies, file migration, file repacking, accounting, and quotas. We present results of initial HPSS SSM development, including design decisions and implementation trade-offs. We conclude with plans for follow-on work and provide storage-related recommendations for vendors and standards groups seeking enterprise-wide management solutions.

  14. Transport in JET high performance plasmas

    International Nuclear Information System (INIS)

    1999-01-01

    Two types of high performance scenarios have been produced in JET during the DTE1 campaign. One of them is the well-known ELM-free hot ion H-mode scenario, extensively used in the past, which has two distinct regions: the plasma core and the edge transport barrier. The results obtained during the DTE1 campaign with D, DT and pure T plasmas confirm our previous conclusion that the core transport scales as gyroBohm in the inner half of the plasma volume, recovers its Bohm nature closer to the separatrix and behaves as ion neoclassical in the transport barrier. Measurements on the top of the barrier suggest that the width of the barrier depends on the isotope and, moreover, that fast ions play a key role. The other high performance scenario is the relatively recently developed Optimised Shear Scenario with small or slightly negative magnetic shear in the plasma core. Different mechanisms of Internal Transport Barrier (ITB) formation have been tested by predictive modelling and the results are compared with experimentally observed phenomena. The experimentally observed non-penetration of heavy impurities through the strong ITB, which contradicts the predictions of conventional neo-classical theory, is discussed. (author)

  15. A Linux Workstation for High Performance Graphics

    Science.gov (United States)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC-class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC-class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  16. High performance separation of lanthanides and actinides

    International Nuclear Information System (INIS)

    Sivaraman, N.; Vasudeva Rao, P.R.

    2011-01-01

    The major advantage of High Performance Liquid Chromatography (HPLC) is its ability to provide rapid, high performance separations. It is evident from the Van Deemter curve of particle size versus resolution that packing materials with particle sizes of less than 2 μm provide better resolution for high speed separations and for resolving complex mixtures than 5 μm based supports. In the recent past, chromatographic support materials using monoliths have been studied extensively at our laboratory. A monolith column consists of a single piece of porous, rigid material containing mesopores and micropores, which provide fast analyte mass transfer. Monolith supports provide significantly higher separation efficiency than particle-packed columns. A clear advantage of monoliths is that they can be operated at higher flow rates but with lower back pressure. A higher operating flow rate results in higher column permeability, which drastically reduces analysis time and provides high separation efficiency. The fast separation methods developed were applied to assay the lanthanides and actinides in dissolver solutions of nuclear reactor fuels.
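
    The particle-size argument rests on the Van Deemter relation between plate height and mobile-phase velocity; in standard form (general chromatography background, not the paper's data):

```latex
% Van Deemter equation: H is the theoretical plate height (lower means
% better resolution), u the mobile-phase linear velocity; the A term
% (eddy diffusion) and C term (mass-transfer resistance) both shrink
% with particle size, which is why sub-2-micron packings allow fast,
% high-resolution separations.
H = A + \frac{B}{u} + C\,u
```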

  17. Building Trust in High-Performing Teams

    Directory of Open Access Journals (Sweden)

    Aki Soudunsaari

    2012-06-01

    Full Text Available Facilitation of growth is more about good, trustworthy contacts than capital. Trust is a driving force for business creation, and to create a global business you need to build a team that is capable of meeting the challenge. Trust is a key factor in team building and a needed enabler for cooperation. In general, trust building is a slow process, but it can be accelerated with open interaction and good communication skills. The fast-growing and ever-changing nature of global business sets demands for cooperation and team building, especially for startup companies. Trust building needs personal knowledge and regular face-to-face interaction, but it also requires empathy, respect, and genuine listening. Trust increases communication, and rich and open communication is essential for the building of high-performing teams. Other building materials are a shared vision, clear roles and responsibilities, willingness for cooperation, and supporting and encouraging leadership. This study focuses on trust in high-performing teams. It asks whether it is possible to manage trust and which tools and operation models should be used to speed up the building of trust. In this article, preliminary results from the authors’ research are presented to highlight the importance of sharing critical information and having a high level of communication through constant interaction.

  18. Design of High Performance Permanent-Magnet Synchronous Wind Generators

    Directory of Open Access Journals (Sweden)

    Chun-Yu Hsiao

    2014-11-01

    Full Text Available This paper is devoted to the analysis and design of high performance permanent-magnet synchronous wind generators (PMSGs). A systematic and sequential methodology for the design of PMSGs is proposed, with a high performance wind generator as a design model. Aiming at high induced voltage, low harmonic distortion, and high generator efficiency, optimal generator parameters such as the pole-arc to pole-pitch ratio and stator-slot-shoe dimensions are determined with the proposed technique using Maxwell 2-D, Matlab software and the Taguchi method. The proposed double three-phase and six-phase winding configurations, which consist of six windings in the stator, can provide evenly distributed current for versatile applications regarding practical voltage and current demands. Specifically, windings are connected in series to increase the output voltage at low wind speed, and in parallel during high wind speed to generate electricity even when one winding fails, thereby enhancing reliability as well. A PMSG is designed and implemented based on the proposed method. When the simulation is performed with a 6 Ω load, the output power for the double three-phase winding and six-phase winding is 10.64 and 11.13 kW, respectively. In addition, 24 Ω load experiments show that the efficiencies of the double three-phase winding and six-phase winding are 96.56% and 98.54%, respectively, verifying the proposed high performance operation.

  19. PHYSICS

    CERN Multimedia

    P. Sphicas

    The CPT project came to an end in December 2006 and its original scope is now shared among three new areas, namely Computing, Offline and Physics. In the physics area the basic change with respect to the previous system (where the PRS groups were charged with detector and physics object reconstruction and physics analysis) was the split of the detector PRS groups (the old ECAL-egamma, HCAL-jetMET, Tracker-btau and Muons) into two groups each: a Detector Performance Group (DPG) and a Physics Object Group. The DPGs are now led by the Commissioning and Run Coordinator deputy (Darin Acosta) and will appear in the corresponding column in CMS bulletins. On the physics side, the physics object groups are charged with the reconstruction of physics objects, the tuning of the simulation (in collaboration with the DPGs) to reproduce the data, the provision of code for the High-Level Trigger, the optimization of the algorithms involved for the different physics analyses (in collaboration with the analysis gr...

  20. Circuit simulation and physical implementation for a memristor-based Colpitts oscillator

    Science.gov (United States)

    Deng, Hongmin; Wang, Dongping

    2017-03-01

    This paper implements two kinds of memristor-based Colpitts oscillators, namely, circuits where the memristor is added into the feedback network of the oscillator in parallel and in series, respectively. First, a MULTISIM simulation circuit for the memristive Colpitts oscillator is built, in which an emulator constructed from off-the-shelf components replaces the memristor. The physical system is then implemented following the MULTISIM simulation circuit. Circuit simulation and experimental study show that this memristive Colpitts oscillator can exhibit periodic, quasi-periodic, and chaotic behaviors as certain parameters vary. Moreover, the circuit is, in a sense, robust to variations in circuit parameters and device types.
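
    Although the paper's emulator circuit is not reproduced here, the qualitative memristor behavior it mimics can be sketched with the common linear ion-drift model; a minimal numerical integration with assumed parameters:

```python
# Linear ion-drift memristor model (HP-style) driven by a sinusoidal
# voltage; the current-voltage trace forms the pinched hysteresis loop
# characteristic of memristors. Generic textbook model with assumed
# parameter values, not the emulator circuit used in the paper.
import math

R_ON, R_OFF = 100.0, 16e3    # on/off resistance (ohm), assumed
MU, D = 1e-14, 1e-8          # ion mobility (m^2/(V s)), film thickness (m)
x, dt = 0.5, 1e-4            # normalized state in [0, 1], time step (s)

for n in range(20000):       # two periods of a 1 Hz drive
    t = n * dt
    v = 1.2 * math.sin(2.0 * math.pi * 1.0 * t)   # drive voltage (V)
    m = R_ON * x + R_OFF * (1.0 - x)              # memristance M(x)
    i = v / m
    x += MU * R_ON / D**2 * i * dt                # linear drift of state
    x = min(max(x, 0.0), 1.0)                     # clamp at boundaries
    if n % 2000 == 0:
        print(f"t={t:.2f}s  v={v:+.3f}V  i={i*1e3:+.3f}mA  M={m:,.0f}ohm")
```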

  1. Top scientific research center deploys Zambeel Aztera (TM) network storage system in high performance environment

    CERN Multimedia

    2002-01-01

    " The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory has implemented a Zambeel Aztera storage system and software to accelerate the productivity of scientists running high performance scientific simulations and computations" (1 page).

  2. Nuclear forces and high-performance computing: The perfect match

    International Nuclear Information System (INIS)

    Luu, T; Walker-Loud, A

    2009-01-01

    High-performance computing is now enabling the calculation of certain hadronic interaction parameters directly from Quantum Chromodynamics, the quantum field theory that governs the behavior of quarks and gluons and is ultimately responsible for the nuclear strong force. In this paper we briefly describe the state of the field and show how other aspects of hadronic interactions will be ascertained in the near future. We give estimates of computational requirements needed to obtain these goals, and outline a procedure for incorporating these results into the broader nuclear physics community.

  3. Carbon nanotubes for high-performance logic

    NARCIS (Netherlands)

    Chen, Zhihong; Philip Wong, H.-S.; Mitra, S.; Bol, A.A.; Peng, Lianmao; Hills, Gage; Thissen, N.F.W.

    2014-01-01

    Single-wall carbon nanotubes (CNTs) were discovered in 1993 and have been an area of intense research since then. They offer the right dimensions to explore material science and physical chemistry at the nanoscale and are the perfect system to study low-dimensional physics and transport. In the past

  4. Studies on high performance Timeslice building on the CBM FLES

    Energy Technology Data Exchange (ETDEWEB)

    Hartmann, Helvi [Frankfurt Institute for Advanced Studies, Goethe University, Frankfurt (Germany)]; Collaboration: CBM-Collaboration

    2015-07-01

    In contrast to already existing high energy physics experiments, the Compressed Baryonic Matter (CBM) experiment will collect all data untriggered. The First-level Event Selector (FLES), a high-performance computer cluster, processes the very high incoming data rate of 1 TByte/s and performs a full online event reconstruction. For this task it needs to access the raw detector data in time intervals referred to as Timeslices. In order to construct the Timeslices, the FLES Timeslice building has to combine data from all input links and distribute them via a high-performance network to the compute nodes. For fast data transfer, the Infiniband network has proven to be appropriate. One option for addressing the network is to use Infiniband (RDMA) Verbs directly, potentially making the best use of Infiniband. However, this is a very low-level implementation that is tied to the hardware and neglects other possible network technologies in the future. Another approach is to apply a high-level API like MPI, which is independent of the underlying hardware and suitable for less error-prone software development. I present the given possibilities and show the results of benchmarks run on high-performance computing clusters. The solutions are evaluated with regard to Timeslice building in CBM.
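
    The high-level MPI option mentioned can be sketched with mpi4py: timeslice building is essentially an all-to-all exchange in which each input node sends the microslice for interval t to the compute node responsible for t. A simplified illustration (payload names are hypothetical, not FLES code):

```python
# Simplified timeslice building as an MPI all-to-all: every rank acts
# as an input node holding one microslice per destination, and after
# the exchange every rank holds the complete data for the timeslice
# interval it is responsible for. Illustration only, not the CBM FLES
# design.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Microslice destined for each compute node (hypothetical payloads).
outgoing = [f"link{rank}:ts{dest}".encode() for dest in range(size)]

# alltoall scatters element i of `outgoing` to rank i on every process.
incoming = comm.alltoall(outgoing)

# `incoming` now holds one microslice from every input link, i.e. a
# complete timeslice for the interval assigned to this rank.
print(f"rank {rank} built timeslice from {len(incoming)} links")
```

    Run under, e.g., mpiexec -n 4; after the exchange each rank holds one contribution from every input link.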

  5. Study of physical properties, gas generation and gas retention in simulated Hanford waste

    International Nuclear Information System (INIS)

    Bryan, S.A.; Pederson, L.R.; Scheele, R.D.

    1993-04-01

    The purpose of this study was to establish the chemical and physical processes responsible for the generation and retention of gases within high-level waste from Tank 101-SY on the Hanford Site. This research, conducted using simulated waste on a laboratory scale, supports the development of mitigation/remediation strategies for Tank 101-SY. Simulated waste formulations are based on actual waste compositions. Selected physical properties of the simulated waste are compared to properties of actual Tank 101-SY waste samples. Laboratory studies using aged simulated waste show that significant gas generation occurs thermally at current tank temperatures (∼60 degrees C). Gas compositions include the same gases produced in actual tank waste, primarily N2, N2O, and H2. Gas stoichiometries have been shown to be greatly influenced by several organic and inorganic constituents within the simulated waste. Retention of gases in the simulated waste is in the form of bubble attachment to solid particles. This attachment phenomenon is related to the presence of organic constituents (HEDTA, EDTA, and citrate) of the simulated waste. A mechanism is discussed that relates the gas bubble/particle interactions to the partially hydrophobic surface produced on the solids by the organic constituents

  6. I - Detector Simulation for the LHC and beyond: how to match computing resources and physics requirements

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Detector simulation at the LHC is one of the most computing-intensive activities. In these lectures we will show how physics requirements were met for the LHC experiments and extrapolate to future experiments (the FCC-hh case). At the LHC, detectors are complex, very precise and ambitious: this implies modern modelling tools for geometry and response. Events are busy and characterised by an unprecedented energy scale, with hundreds of particles to be traced and high energy showers to be accurately simulated. Furthermore, high luminosities imply many events in a bunch crossing and many bunch crossings to be considered at the same time. In addition, backgrounds not directly correlated to bunch crossings have also to be taken into account. Solutions chosen for ATLAS (a mixture of detailed simulation and fast simulation/parameterisation) will be described and CPU and memory figures will be given. An extrapolation to the FCC-hh case will be attempted by taking the calorimeter simulation as an example.

  7. II - Detector simulation for the LHC and beyond: how to match computing resources and physics requirements

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Detector simulation at the LHC is one of the most computing-intensive activities. In these lectures we will show how physics requirements were met for the LHC experiments and extrapolate to future experiments (the FCC-hh case). At the LHC, detectors are complex, very precise and ambitious: this calls for modern modelling tools for geometry and response. Events are busy and characterised by an unprecedented energy scale, with hundreds of particles to be traced and high-energy showers to be accurately simulated. Furthermore, high luminosities imply many events in a bunch crossing and many bunch crossings to be considered at the same time. In addition, backgrounds not directly correlated to bunch crossings also have to be taken into account. The solutions chosen for ATLAS (a mixture of detailed simulation and fast simulation/parameterisation) will be described, and CPU and memory figures will be given. An extrapolation to the FCC-hh case will be attempted, taking the calorimeter simulation as an example.

  8. Quantum simulations with photons and polaritons: merging quantum optics with condensed matter physics

    CERN Document Server

    2017-01-01

    This book reviews progress towards quantum simulators based on photonic and hybrid light-matter systems, covering theoretical proposals and recent experimental work. Quantum simulators are specially designed quantum computers. Their main aim is to simulate and understand complex and inaccessible quantum many-body phenomena found or predicted in condensed matter physics, materials science and exotic quantum field theories. Applications will include the engineering of smart materials, robust optical or electronic circuits, deciphering quantum chemistry and even the design of drugs. Technological developments in the fields of interfacing light and matter, especially in many-body quantum optics, have motivated recent proposals for quantum simulators based on strongly correlated photons and polaritons generated in hybrid light-matter systems. The latter have complementary strengths to cold-atom and ion based simulators, and they can probe, for example, out-of-equilibrium phenomena in a natural driven-dissipative setting.

  9. Discrete-event simulation for the design and evaluation of physical protection systems

    International Nuclear Information System (INIS)

    Jordan, S.E.; Snell, M.K.; Madsen, M.M.; Smith, J.S.; Peters, B.A.

    1998-01-01

    This paper explores the use of discrete-event simulation for the design and control of physical protection systems for fixed-site facilities housing items of significant value. It begins by discussing several modeling and simulation activities currently performed in designing and analyzing these protection systems and then discusses capabilities that design/analysis tools should have. The remainder of the article then discusses in detail how some of these new capabilities have been implemented in software to achieve a prototype design and analysis tool. The simulation software technology provides a communications mechanism between a running simulation and one or more external programs. In the prototype security analysis tool, these capabilities are used to facilitate human-in-the-loop interaction and to support a real-time connection to a virtual reality (VR) model of the facility being analyzed. This simulation tool can be used both for training (in real-time mode) and for facility analysis and design (in fast mode).
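
    To make the discrete-event mechanism concrete, here is a minimal, generic event-loop sketch in Python; the scenario and timings are invented for illustration and do not come from the prototype tool described above.

      import heapq
      import itertools

      tie = itertools.count()                  # tie-breaker so equal-time events compare cleanly
      queue = []

      def schedule(t, label):
          heapq.heappush(queue, (t, next(tie), label))

      # Invented scenario: an intrusion attempt racing against the guard response.
      schedule(0.0, "fence sensor alarm")
      schedule(35.0, "guard dispatched from post")
      schedule(80.0, "guard arrives at vault")
      schedule(90.0, "intruder reaches vault door")

      while queue:                             # the simulation clock jumps from event to event
          t, _, label = heapq.heappop(queue)
          print(f"t = {t:5.1f} s  {label}")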

  10. Developing Digital Simulations and its Impact on Physical Education of Pre-Service Teachers

    Directory of Open Access Journals (Sweden)

    Esther Zaretsky

    2006-08-01

    Full Text Available The creation of digital simulations through the use of computers improved the physical education of pre-service teachers. The method, based on up-to-date studies, focuses on the visualization of the body's movements in space. The main program of the research concentrated on building a curriculum for teaching physical education through computerized presentations. The pre-service teachers reported progress in a variety of physical skills, and their motivation in both kinds of learning was enhanced.

  11. On physical and numerical instabilities arising in simulations of non-stationary radiatively cooling shocks

    Science.gov (United States)

    Badjin, D. A.; Glazyrin, S. I.; Manukovskiy, K. V.; Blinnikov, S. I.

    2016-06-01

    We describe our modelling of radiatively cooling shocks and their thin shells with various numerical tools in different physical and computational setups. We inspect the structure of the dense shell and its formation and evolution, pointing out the physical and numerical factors that sustain its shape and may also lead to instabilities. We have found that under certain physical conditions, circular shells show a strong bending instability and subsequent fragmentation on Cartesian grids soon after their formation, while remaining almost unperturbed when simulated on polar meshes. We explain this by physical Rayleigh-Taylor-like instabilities triggered by corrugation of the dense shell surfaces by numerical noise. Conditions for these instabilities follow both from the shell structure itself and from episodes of transient acceleration during the re-establishment of dynamical pressure balance after a sudden onset of radiative cooling. They are also easily excited by physical perturbations of the ambient medium. The widely mentioned non-linear thin shell instability, in contrast, is shown in tests with physical perturbations to have only a limited chance to develop in real radiative shocks, as it seems to require a special spatial arrangement of fluctuations to be excited efficiently. The described phenomena also set new requirements on further simulations of radiatively cooling shocks if they are to be physically correct and free of numerical artefacts.

  12. Physical and Liquid Chemical Simulant Formulations for Transuranic Waste in Hanford Single-Shell Tanks

    International Nuclear Information System (INIS)

    Rassat, Scot D.; Bagaasen, Larry M.; Mahoney, Lenna A.; Russell, Renee L.; Caldwell, Dustin D.; Mendoza, Donaldo P.

    2003-01-01

    CH2M HILL Hanford Group, Inc. (CH2M HILL) is in the process of identifying and developing supplemental process technologies to accelerate the tank waste cleanup mission. A range of technologies is being evaluated to allow disposal of Hanford waste types, including transuranic (TRU) process wastes. Ten Hanford single-shell tanks (SSTs) have been identified whose contents may meet the criteria for designation as TRU waste: the B-200 series (241-B-201, -B-202, -B-203, and -B-204), the T-200 series (241-T-201, -T-202, -T-203, and -T-204), and Tanks 241-T-110 and -T-111. CH2M HILL has requested vendor proposals to develop a system to transfer and package the contact-handled TRU (CH-TRU) waste retrieved from the SSTs for subsequent disposal at the Waste Isolation Pilot Plant (WIPP). Current plans call for a modified "dry" retrieval process in which a liquid stream is used to help mobilize the waste for retrieval and transfer through lines and vessels. This retrieval approach requires that a significant portion of the liquid be removed from the mobilized waste sludge in a "dewatering" process, such as centrifugation, prior to transfer to waste packages in a form suitable for acceptance at WIPP. In support of CH2M HILL's effort to procure a TRU waste handling and packaging process, Pacific Northwest National Laboratory (PNNL) developed waste simulant formulations to be used in evaluating the vendor's system. For the SST CH-TRU wastes, the suite of simulants includes (1) nonradioactive chemical simulants of the liquid fraction of the waste, (2) physical simulants that reproduce the important dewatering properties of the waste, and (3) physical simulants that can be used to mimic important rheological properties of the waste at different points in the TRU waste handling and packaging process. To validate the final simulant formulations developed by PNNL, their measured properties were compared with the limited data for actual TRU waste samples.

  13. Flexible and biocompatible high-performance solid-state micro-battery for implantable orthodontic system

    KAUST Repository

    Kutbee, Arwa T.; Bahabry, Rabab R.; Alamoudi, Kholod O.; Ghoneim, Mohamed T.; Cordero, Marlon D.; Almuslem, Amani S.; Gumus, Abdurrahman; Diallo, Elhadj M.; Nassar, Joanna M.; Hussain, Aftab M.; Khashab, Niveen M.; Hussain, Muhammad Mustafa

    2017-01-01

    To augment the quality of our lives, a fully compliant, personalized, advanced health-care electronic system is pivotal. One of the major requirements to implement such systems is a physically flexible, high-performance, biocompatible energy storage…

  14. High Performance Hybrid Propulsion System for a Small Launch Vehicle, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Physical Sciences Inc. (PSI) proposes to design, develop and demonstrate an innovative high-performance, green, storable hybrid propellant system in a high mass...

  15. Effects of a Haptic Augmented Simulation on K-12 Students' Achievement and Their Attitudes Towards Physics

    Science.gov (United States)

    Civelek, Turhan; Ucar, Erdem; Ustunel, Hakan; Aydin, Mehmet Kemal

    2014-01-01

    The current research aims to explore the effects of a haptic augmented simulation on students' achievement and their attitudes towards Physics in an immersive virtual reality environment (VRE). A quasi-experimental post-test design was employed utilizing experiment and control groups. The participants were 215 students from a K-12 school in…

  16. Optimizing a physical security configuration using a highly detailed simulation model

    NARCIS (Netherlands)

    Marechal, T.M.A.; Smith, A.E.; Ustun, V.; Smith, J.S.; Lefeber, A.A.J.; Badiru, A.B.; Thomas, M.U.

    2009-01-01

    This research is focused on using a highly detailed simulation model to create a physical security system to prevent intrusions into a building. Security consists of guards and security cameras. The problem is represented as a binary optimization problem. A new heuristic is proposed to do the security…

  17. Compact physical model of a-IGZO TFTs for circuit simulation

    NARCIS (Netherlands)

    Ghittorelli, M.; Torricelli, F.; Garripoli, C.; Van Der Steen, J.L.J.P.; Gelinck, G.H.; Abdinia, S.; Cantatore, E.; Kovacs-Vajna, Z.M.

    2017-01-01

    Amorphous InGaZnO (a-IGZO) is a candidate material for thin-film transistors (TFTs) owing to its large electron mobility. The development of high-functionality circuits requires accurate and efficient circuit simulation that, in turn, is based on compact physical a-IGZO TFT models. Here we propose…

  18. Intel Xeon Phi coprocessor high performance programming

    CERN Document Server

    Jeffers, James

    2013-01-01

    Authors Jim Jeffers and James Reinders spent two years helping educate customers about the prototype and pre-production hardware before Intel introduced the first Intel Xeon Phi coprocessor. They have distilled their own experiences coupled with insights from many expert customers, Intel Field Engineers, Application Engineers and Technical Consulting Engineers, to create this authoritative first book on the essentials of programming for this new architecture and these new products. This book is useful even before you ever touch a system with an Intel Xeon Phi coprocessor. To ensure that your applications run at maximum efficiency, the authors emphasize key techniques for programming any modern parallel computing system whether based on Intel Xeon processors, Intel Xeon Phi coprocessors, or other high performance microprocessors. Applying these techniques will generally increase your program performance on any system, and better prepare you for Intel Xeon Phi coprocessors and the Intel MIC architecture. It off...

  19. Robust High Performance Aquaporin based Biomimetic Membranes

    DEFF Research Database (Denmark)

    Helix Nielsen, Claus; Zhao, Yichun; Qiu, C.

    2013-01-01

    Aquaporins are water channel proteins with high water permeability and solute rejection, which makes them promising for preparing high-performance biomimetic membranes. Despite the growing interest in aquaporin-based biomimetic membranes (ABMs), it is challenging to produce robust and defect-free membranes … on top of a support membrane. Control membranes, either without aquaporins or with the inactive AqpZ R189A mutant aquaporin, served as controls. The separation performance of the membranes was evaluated by cross-flow forward osmosis (FO) and reverse osmosis (RO) tests. In RO, the ABM achieved a water permeability of ~4 L/(m² h bar) with a NaCl rejection > 97% at an applied hydraulic pressure of 5 bar. The water permeability was ~40% higher than that of a commercial brackish water RO membrane (BW30) and an order of magnitude higher than that of a seawater RO membrane (SW30HR). In FO, the ABMs had > 90…

  20. High performance nano-composite technology development

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D. [KAERI, Taejon (Korea, Republic of); Kim, E. K.; Jung, S. Y.; Ryu, H. J. [KRICT, Taejon (Korea, Republic of); Hwang, S. S.; Kim, J. K.; Hong, S. M. [KIST, Taejon (Korea, Republic of); Chea, Y. B. [KIGAM, Taejon (Korea, Republic of); Choi, C. H.; Kim, S. D. [ATS, Taejon (Korea, Republic of); Cho, B. G.; Lee, S. H. [HGREC, Taejon (Korea, Republic of)

    1999-06-15

    The trend in new materials development is toward not only high performance but also environmental friendliness. In particular, nano-composite materials, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great impact on various industrial areas. The applications of nano-composites, depending on the polymer matrix and filler materials, range from semiconductors to the medical field. In spite of these merits, nano-composite studies have been confined to a few special materials at laboratory scale because several technical difficulties remain unresolved. Therefore, the purpose of this study is to establish a systematic plan for carrying out next-generation projects, in order to compete with other countries and overcome the protective policies of advanced countries, by grasping overseas development trends and our present status. (author)

  1. High Performance OLED Panel and Luminaire

    Energy Technology Data Exchange (ETDEWEB)

    Spindler, Jeffrey [OLEDWorks LLC, Rochester, NY (United States)

    2017-02-20

    In this project, OLEDWorks developed and demonstrated the technology required to produce OLED lighting panels with high energy efficiency and excellent light quality. OLED panels developed in this program produce high quality warm white light with CRI greater than 85 and efficacy up to 80 lumens per watt (LPW). An OLED luminaire employing 24 of the high performance panels produces practical levels of illumination for general lighting, with a flux of over 2200 lumens at 60 LPW. This is a significant advance in the state of the art for OLED solid-state lighting (SSL), which is expected to be a complementary light source to the more advanced LED SSL technology that is rapidly replacing all other traditional forms of lighting.

  2. How to create high-performing teams.

    Science.gov (United States)

    Lam, Samuel M

    2010-02-01

    This article is intended to discuss inspirational aspects of how to lead a high-performance team. Cogent topics discussed include how to hire staff through methods of "topgrading", with reference to Geoff Smart, and "getting the right people on the bus", referencing Jim Collins' work. In addition, once the staff is hired, this article covers how to separate the "eagles from the ducks" and how to inspire one's staff by creating the right culture, with suggestions for further reading by Don Miguel Ruiz (The Four Agreements) and John Maxwell (The 21 Irrefutable Laws of Leadership). In addition, Simon Sinek's concept of "Start with Why" is elaborated to help a leader know what the core element of any superior culture should be.

  4. High performance nano-composite technology development

    International Nuclear Information System (INIS)

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D.; Kim, E. K.; Jung, S. Y.; Ryu, H. J.; Hwang, S. S.; Kim, J. K.; Hong, S. M.; Chea, Y. B.; Choi, C. H.; Kim, S. D.; Cho, B. G.; Lee, S. H.

    1999-06-01

    The trend in new materials development is toward not only high performance but also environmental friendliness. In particular, nano-composite materials, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great impact on various industrial areas. The applications of nano-composites, depending on the polymer matrix and filler materials, range from semiconductors to the medical field. In spite of these merits, nano-composite studies have been confined to a few special materials at laboratory scale because several technical difficulties remain unresolved. Therefore, the purpose of this study is to establish a systematic plan for carrying out next-generation projects, in order to compete with other countries and overcome the protective policies of advanced countries, by grasping overseas development trends and our present status. (author)

  5. Development of high-performance blended cements

    Science.gov (United States)

    Wu, Zichao

    2000-10-01

    This thesis presents the development of high-performance blended cements from industrial by-products. To overcome the low early strength of blended cements, several chemicals were studied as activators for cement hydration. Sodium sulfate was found to be the best activator. The blending proportions were optimized by Taguchi experimental design. The optimized blended cements containing up to 80% fly ash performed better than Type I cement in strength development and durability. Maintaining a constant cement content, concrete produced from the optimized blended cements had equal or higher strength and higher durability than that produced from Type I cement alone. The key to the activation mechanism was the reaction between added SO₄²⁻ and Ca²⁺ dissolved from cement hydration products.

  6. High Performance with Prescriptive Optimization and Debugging

    DEFF Research Database (Denmark)

    Jensen, Nicklas Bo

    …parallelization and automatic vectorization are attractive as they transparently optimize programs. The thesis contributes an improved dependence analysis for explicitly parallel programs. These improvements lead to more loops being vectorized; on average we achieve a speedup of 1.46 over the existing dependence analysis and vectorizer in GCC. Automatic optimizations often fail for theoretical and practical reasons. When they fail, we argue that a hybrid approach can be effective. Using compiler feedback, we propose to use the programmer's intuition and insight to achieve high performance. Compiler feedback enlightens the programmer as to why a given optimization was not applied and suggests how to change the source code to make it more amenable to optimization. We show how this can yield significant speedups, achieving 2.4 times faster execution on a real industrial use case. To aid in parallel debugging we propose…

  7. High performance anode for advanced Li batteries

    Energy Technology Data Exchange (ETDEWEB)

    Lake, Carla [Applied Sciences, Inc., Cedarville, OH (United States)

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI's Si-CNF high-performance anode by creating a framework for large-volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the consequent fading of, or failure in, capacity resulting from stress-induced fracturing of the Si particles and de-coupling from the electrode. ASI's patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable, and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded interface at the Si-CNF boundary that significantly improves cycling stability and enhances the adhesion of silicon to the carbon fiber support. In Phase I, the team demonstrated that production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor to a fluidized bed reactor. In addition, ASI made significant progress in the development of low-cost, quick testing methods that can be performed on silicon-coated CNFs as a means of quality control. To date, weight change, density, and cycling performance have been the key metrics used to validate the high-performance anode material. Under this effort, ASI made strides toward establishing a quality control protocol for the large-volume production of Si-CNFs and identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high-volume, low-cost production of Si-CNF material for anodes in Li-ion batteries.

  8. High Performance Systolic Array Core Architecture Design for DNA Sequencer

    Directory of Open Access Journals (Sweden)

    Saiful Nurdin Dayana

    2018-01-01

    Full Text Available This paper presents a high-performance systolic array (SA) core architecture design for a Deoxyribonucleic Acid (DNA) sequencer. The core implements the affine-gap-penalty Smith-Waterman (SW) algorithm. This time-consuming local alignment algorithm guarantees optimal alignment between DNA sequences, but it requires quadratic computation time when performed on standard desktop computers. The use of a linear SA decreases the time complexity from quadratic to linear. In addition, with the exponential growth of DNA databases, the SA architecture is used to overcome the timing issue. In this work, the SW algorithm has been captured using the Verilog Hardware Description Language (HDL) and simulated using the Xilinx ISIM simulator. The proposed design has been implemented on a Xilinx Virtex-6 Field Programmable Gate Array (FPGA), achieving a 90% reduction in core area.
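
    For reference, the affine-gap recurrence that such a systolic array parallelizes can be expressed in a few lines of Python. This is a plain software sketch of the Gotoh formulation with assumed scoring parameters; the paper's contribution is the hardware mapping, not this code.

      def smith_waterman_affine(a, b, match=2, mismatch=-1, gap_open=-2, gap_extend=-1):
          """Return the best local alignment score between sequences a and b."""
          n, m = len(a), len(b)
          NEG = float("-inf")
          H = [[0.0] * (m + 1) for _ in range(n + 1)]   # best local score ending at (i, j)
          E = [[NEG] * (m + 1) for _ in range(n + 1)]   # ... ending with a gap in sequence a
          F = [[NEG] * (m + 1) for _ in range(n + 1)]   # ... ending with a gap in sequence b
          best = 0.0
          for i in range(1, n + 1):
              # Cells along an anti-diagonal are independent: the parallelism an SA exploits.
              for j in range(1, m + 1):
                  E[i][j] = max(E[i][j - 1] + gap_extend, H[i][j - 1] + gap_open)
                  F[i][j] = max(F[i - 1][j] + gap_extend, H[i - 1][j] + gap_open)
                  s = match if a[i - 1] == b[j - 1] else mismatch
                  H[i][j] = max(0.0, H[i - 1][j - 1] + s, E[i][j], F[i][j])
                  best = max(best, H[i][j])
          return best

      print(smith_waterman_affine("GGTTGACTA", "TGTTACGG"))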

  9. High performance APCS conceptual design and evaluation scoping study

    International Nuclear Information System (INIS)

    Soelberg, N.; Liekhus, K.; Chambers, A.; Anderson, G.

    1998-02-01

    This Air Pollution Control System (APCS) Conceptual Design and Evaluation study was conducted to evaluate a high-performance APC system for minimizing air emissions from mixed waste thermal treatment systems. Seven variations of high-performance APCS designs were conceptualized using several design objectives. One of the system designs was selected for detailed process simulation using ASPEN PLUS to determine material and energy balances and evaluate performance. Installed system capital costs were also estimated. Sensitivity studies were conducted to evaluate the incremental cost and benefit of added carbon adsorber beds for mercury control, selective catalytic reduction for NOx control, and offgas retention tanks for holding the offgas until sample analysis verifies that the offgas meets emission limits. Results show that the high-performance dry-wet APCS can easily meet all expected emission limits except possibly for mercury. The capability to achieve high levels of mercury control (potentially necessary for thermally treating some DOE mixed waste streams) could not be validated using current performance data for mercury control technologies. The engineering approach and the ASPEN PLUS modeling tool developed and used in this study identified APC equipment and system performance, size, cost, and other issues that are not yet resolved. These issues need to be addressed in feasibility studies and conceptual designs for new facilities, or in determining how to modify existing facilities to meet expected emission limits. The ASPEN PLUS process simulation, with current and refined input assumptions and calculations, can be used to provide system performance information for decision-making, identifying the best options, estimating costs, reducing the potential for emission violations, providing information needed for waste flow analysis, incorporating new APCS technologies into existing designs, and performing facility design and permitting activities.

  10. High Performance Graded Index Polymer Optical Fibers

    National Research Council Canada - National Science Library

    Garito, Anthony

    1998-01-01

    … plastic optical fibers (POF) and graded index (GI) POFs are reported. A set of criteria and analyses of physical parameters are developed in the context of the major issues of POF applications in short-distance communication systems…

  11. Implementing a modeling software for animated protein-complex interactions using a physics simulation library.

    Science.gov (United States)

    Ueno, Yutaka; Ito, Shuntaro; Konagaya, Akihiko

    2014-12-01

    To better understand the behaviors and structural dynamics of proteins within a cell, novel software tools are being developed that can create molecular animations based on the findings of structural biology. This study presents our method, developed from earlier prototypes, for detecting collisions and examining the soft-body dynamics of molecular models. The code was implemented with a software development toolkit for rigid-body dynamics simulation and a three-dimensional graphics library. The essential functions of the target software system included a basic molecular modeling environment, collision detection in the molecular models, and physical simulation of the movement of the model. Taking advantage of recent software technologies such as physics simulation modules and an interpreted scripting language, the functions required for accurate and meaningful molecular animation were implemented efficiently.
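
    As a minimal illustration of driving molecular models with an off-the-shelf rigid-body engine, the sketch below uses the open-source pybullet module to detect contacts between spherical "atoms"; the engine choice and all parameters are assumptions for illustration, not necessarily those of the study.

      import pybullet as p

      p.connect(p.DIRECT)                                  # headless physics server
      p.setGravity(0, 0, 0)                                # no gravity inside the "cell"
      sphere = p.createCollisionShape(p.GEOM_SPHERE, radius=0.3)
      # A toy "molecule": four overlapping spherical atoms along the x-axis.
      atoms = [p.createMultiBody(baseMass=1.0, baseCollisionShapeIndex=sphere,
                                 basePosition=[0.5 * i, 0.0, 0.0]) for i in range(4)]
      for _ in range(100):
          p.stepSimulation()                               # advance rigid-body dynamics
      contacts = p.getContactPoints()                      # atom-atom collision events
      print(f"{len(contacts)} contact points detected")
      p.disconnect()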

  12. On the Dependence of Cloud Feedbacks on Physical Parameterizations in WRF Aquaplanet Simulations

    Science.gov (United States)

    Cesana, Grégory; Suselj, Kay; Brient, Florent

    2017-10-01

    We investigate the effects of physical parameterizations on cloud feedback uncertainty in response to climate change. For this purpose, we construct an ensemble of eight aquaplanet simulations using the Weather Research and Forecasting (WRF) model. In each WRF-derived simulation, we replace only one parameterization at a time while all other parameters remain identical. By doing so, we aim to (i) reproduce cloud feedback uncertainty from state-of-the-art climate models and (ii) understand how parameterizations impact cloud feedbacks. Our results demonstrate that this ensemble of WRF simulations, which differ only in physical parameterizations, replicates the range of cloud feedback uncertainty found in state-of-the-art climate models. We show that microphysics and convective parameterizations govern the magnitude and sign of cloud feedbacks, mostly due to tropical low-level clouds in subsidence regimes. Finally, this study highlights the advantages of using WRF to analyze cloud feedback mechanisms owing to its plug-and-play parameterization capability.
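
    The one-at-a-time ensemble design is simple to express in code. Here is a sketch; the scheme names are placeholders rather than the paper's exact choices, and fewer members are shown than the eight used in the study.

      # Baseline physics suite; each ensemble member swaps exactly one entry.
      baseline = {"microphysics": "WSM6", "convection": "Kain-Fritsch",
                  "boundary_layer": "YSU", "radiation": "RRTMG"}
      alternatives = {"microphysics": "Thompson", "convection": "Tiedtke",
                      "boundary_layer": "MYNN"}

      ensemble = [dict(baseline)]
      for scheme_type, option in alternatives.items():
          member = dict(baseline)
          member[scheme_type] = option         # replace one parameterization at a time
          ensemble.append(member)

      for i, member in enumerate(ensemble):
          print(f"member {i}: {member}")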

  13. Physics-based statistical model and simulation method of RF propagation in urban environments

    Science.gov (United States)

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment, extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation, using a database of statistical impedance boundary conditions which incorporates the complexity of building walls into the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields, from which predictions of communications capability may be made.

  14. High-performance computing in accelerating structure design and analysis

    International Nuclear Information System (INIS)

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which places stringent efficiency and beam-stability requirements on the accelerating structures. While numerical modeling has been quite standard in accelerator R&D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed to meet the computational challenges posed by the NLC as well as projects such as PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single-cell optimization) or on the scale of an entire structure (beam heating and long-range wakefields).
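
    As a toy, small-scale stand-in for the eigensolvers mentioned above, the sketch below computes the lowest modes of a discretized two-dimensional Laplacian with SciPy; the production codes solve far larger eigenproblems on unstructured grids, but the mathematical task is analogous.

      import numpy as np
      from scipy.sparse import diags, identity, kron
      from scipy.sparse.linalg import eigsh

      n = 40                                        # interior grid points per side
      h = 1.0 / (n + 1)
      lap1d = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
      lap2d = kron(identity(n), lap1d) + kron(lap1d, identity(n))

      # Lowest eigenpairs of -Laplacian: a crude analogue of the lowest cavity modes.
      vals, vecs = eigsh(-lap2d, k=4, which="SM")
      print(np.sqrt(vals))                          # mode "frequencies" up to physical constants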

  15. Simulation of petroleum recovery in naturally fractured reservoirs: physical process representation

    Energy Technology Data Exchange (ETDEWEB)

    Paiva, Hernani P.; Miranda Filho, Daniel N. de [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil); Schiozer, Denis J. [Universidade Estadual de Campinas (UNICAMP), SP (Brazil)

    2012-07-01

    Recovery from naturally fractured reservoirs normally involves risk, especially in intermediate-wet to oil-wet systems, because simulations predict poor displacement efficiency under waterflooding. Double-porosity models are generally used in fractured-reservoir simulation and have been implemented in the major commercial reservoir simulators. In double-porosity models, the physical processes acting in petroleum recovery are represented by matrix-fracture transfer functions; commercial simulators each have their own implementations, and as a result different kinetics and final recoveries are obtained. In this work, a double-porosity simulator was built with implementations of the Kazemi et al. (1976), Sabathier et al. (1998) and Lu et al. (2008) transfer functions, and their recovery results were compared under waterflood displacement in oil-wet and intermediate-wet systems. The comparisons show recovery improvements in oil-wet and intermediate-wet systems under different combinations of physical processes, particularly in fully discontinuous porous media when co-current imbibition takes place, consistent with the experimental results of Firoozabadi (2000). Furthermore, the implemented transfer functions were compared with the double-porosity model of a commercial simulator, as well as with a discrete-fracture model on a refined grid, showing differences between them. Waterflooding can be an effective recovery method even in fully discontinuous media for oil-wet or intermediate-wet systems where co-current imbibition takes place with sufficiently high pressure gradients across the matrix blocks. (author)
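
    To fix ideas, the interporosity exchange behind such transfer functions can be illustrated with a single matrix block draining into a fracture held at constant pressure, using the commonly quoted Kazemi-type shape factor. All rock and fluid properties below are assumed round numbers; this schematic is not the simulator described above.

      import numpy as np

      Lx = Ly = Lz = 1.0                            # matrix block dimensions, m (assumed)
      sigma = 4.0 * (1/Lx**2 + 1/Ly**2 + 1/Lz**2)   # Kazemi-type shape factor, 1/m^2
      k_m, mu = 1e-15, 1e-3                         # matrix permeability (m^2), viscosity (Pa s)
      phi_m, c_t = 0.2, 1e-8                        # matrix porosity, total compressibility (1/Pa)

      # Single-block material balance gives exponential pressure equilibration:
      # dp_m/dt = -lam * (p_m - p_f), with lam = sigma * k_m / (mu * phi_m * c_t)
      lam = sigma * k_m / (mu * phi_m * c_t)
      p_f, p_m0 = 20e6, 30e6                        # fracture and initial matrix pressure (Pa)
      for t_h in (0.1, 0.5, 2.0):
          p_m = p_f + (p_m0 - p_f) * np.exp(-lam * t_h * 3600)
          print(f"after {t_h:4.1f} h: matrix pressure {p_m/1e6:6.2f} MPa")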

  16. Assessment of Robotic Patient Simulators for Training in Manual Physical Therapy Examination Techniques

    Science.gov (United States)

    Ishikawa, Shun; Okamoto, Shogo; Isogai, Kaoru; Akiyama, Yasuhiro; Yanagihara, Naomi; Yamada, Yoji

    2015-01-01

    Robots that simulate patients suffering from joint resistance caused by biomechanical and neural impairments are used to aid the training of physical therapists in manual examination techniques. However, there are few methods for assessing such robots. This article proposes two types of assessment measures based on typical judgments of clinicians. One of the measures involves the evaluation of how well the simulator presents different severities of a specified disease. Experienced clinicians were requested to rate the simulated symptoms in terms of severity, and the consistency of their ratings was used as a performance measure. The other measure involves the evaluation of how well the simulator presents different types of symptoms. In this case, the clinicians were requested to classify the simulated resistances in terms of symptom type, and the average ratios of their answers were used as performance measures. For both types of assessment measures, a higher index implied higher agreement among the experienced clinicians that subjectively assessed the symptoms based on typical symptom features. We applied these two assessment methods to a patient knee robot and achieved positive appraisals. The assessment measures have potential for use in comparing several patient simulators for training physical therapists, rather than as absolute indices for developing a standard. PMID:25923719

  17. Assessment of robotic patient simulators for training in manual physical therapy examination techniques.

    Directory of Open Access Journals (Sweden)

    Shun Ishikawa

    Full Text Available Robots that simulate patients suffering from joint resistance caused by biomechanical and neural impairments are used to aid the training of physical therapists in manual examination techniques. However, there are few methods for assessing such robots. This article proposes two types of assessment measures based on typical judgments of clinicians. One of the measures involves the evaluation of how well the simulator presents different severities of a specified disease. Experienced clinicians were requested to rate the simulated symptoms in terms of severity, and the consistency of their ratings was used as a performance measure. The other measure involves the evaluation of how well the simulator presents different types of symptoms. In this case, the clinicians were requested to classify the simulated resistances in terms of symptom type, and the average ratios of their answers were used as performance measures. For both types of assessment measures, a higher index implied higher agreement among the experienced clinicians that subjectively assessed the symptoms based on typical symptom features. We applied these two assessment methods to a patient knee robot and achieved positive appraisals. The assessment measures have potential for use in comparing several patient simulators for training physical therapists, rather than as absolute indices for developing a standard.

  19. Training Knowledge Bots for Physics-Based Simulations Using Artificial Neural Networks

    Science.gov (United States)

    Samareh, Jamshid A.; Wong, Jay Ming

    2014-01-01

    Millions of complex physics-based simulations are required for the design of an aerospace vehicle. These simulations are usually performed by highly trained and skilled analysts, who execute, monitor, and steer each simulation. Analysts rely heavily on broad experience that may have taken 20-30 years to accumulate. In addition, the simulation software is complex in nature, requiring significant computational resources. Simulations of systems of systems become even more complex, and their behavior is beyond human capacity to learn effectively. IBM has developed machines that can learn and compete successfully with chess grandmasters and the most successful Jeopardy contestants. These machines are capable of learning some complex problems much faster than humans can. In this paper, we propose using artificial neural networks to train knowledge bots to identify the idiosyncrasies of simulation software and recognize patterns that can lead to successful simulations. We examine the use of knowledge bots for applications in computational fluid dynamics (CFD), trajectory analysis, commercial finite-element analysis software, and propellant slosh dynamics. We will show that machine learning algorithms can be used to learn the idiosyncrasies of computational simulations and identify regions of instability without any additional information about their mathematical form or applied discretization approaches.
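
    A minimal version of such a knowledge bot can be sketched as a small neural network that learns a stability boundary from the outcomes of past runs; the features, the made-up CFL-like stability rule, and the network size below are toy assumptions, purely for illustration.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      # Hypothetical run descriptors: time step (s) and mesh spacing (m).
      X = rng.uniform([1e-4, 0.01], [1e-2, 0.5], size=(500, 2))
      y = (X[:, 0] / X[:, 1] < 0.02).astype(int)    # toy CFL-like rule: 1 = run stayed stable

      bot = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
      bot.fit(X[:400], y[:400])                     # train on the outcomes of past runs
      print("held-out accuracy:", bot.score(X[400:], y[400:]))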

  20. Use of Simulation Learning Experiences in Physical Therapy Entry-to-Practice Curricula: A Systematic Review

    Science.gov (United States)

    Carnahan, Heather; Herold, Jodi

    2015-01-01

    ABSTRACT Purpose: To review the literature on simulation-based learning experiences and to examine their potential to have a positive impact on physiotherapy (PT) learners' knowledge, skills, and attitudes in entry-to-practice curricula. Method: A systematic literature search was conducted in the MEDLINE, CINAHL, Embase Classic+Embase, Scopus, and Web of Science databases, using keywords such as physical therapy, simulation, education, and students. Results: A total of 820 abstracts were screened, and 23 articles were included in the systematic review. While there were few randomized controlled trials with validated outcome measures, some discoveries about simulation can positively affect the design of the PT entry-to-practice curricula. Using simulators to provide specific output feedback can help students learn specific skills. Computer simulations can also augment students' learning experience. Human simulation experiences in managing the acute patient in the ICU are well received by students, positively influence their confidence, and decrease their anxiety. There is evidence that simulated learning environments can replace a portion of a full-time 4-week clinical rotation without impairing learning. Conclusions: Simulation-based learning activities are being effectively incorporated into PT curricula. More rigorously designed experimental studies that include a cost–benefit analysis are necessary to help curriculum developers make informed choices in curriculum design. PMID:25931672